Compressing Sequences in the Latent Embedding Space: K-Token Merging for Large Language Models

Researchers propose K-Token Merging, a compression technique that groups token embeddings in the latent space to reduce the computational overhead of LLM inference. The method uses a lightweight encoder to merge every K consecutive tokens into a single embedding, then processes the compressed sequence through a LoRA-adapted model while preserving the model's original output vocabulary.
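Since the summary doesn't detail the encoder's architecture, here is a minimal sketch of what merging K consecutive token embeddings could look like, assuming a simple learned linear projection over each group of K embeddings. The class name `KTokenMerger`, the `d_model` dimension, and the padding scheme are illustrative assumptions, and the LoRA-adapted downstream model is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KTokenMerger(nn.Module):
    """Hypothetical lightweight encoder: merges every K consecutive
    token embeddings into one via a learned linear projection."""
    def __init__(self, d_model: int, k: int):
        super().__init__()
        self.k = k
        # Project the concatenation of K embeddings back down to d_model.
        self.proj = nn.Linear(k * d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); pad so seq_len divides by K.
        b, t, d = x.shape
        pad = (-t) % self.k
        if pad:
            x = F.pad(x, (0, 0, 0, pad))
        # Group K consecutive tokens and flatten each group's embeddings.
        x = x.view(b, -1, self.k * d)   # (batch, seq_len/K, K*d_model)
        return self.proj(x)             # (batch, seq_len/K, d_model)

# Usage: compress a sequence of 128 token embeddings by a factor of 4.
merger = KTokenMerger(d_model=768, k=4)
tokens = torch.randn(2, 128, 768)
compressed = merger(tokens)             # shape: (2, 32, 768)
```

Because the merged sequence stays in the model's embedding space, the compressed embeddings can in principle be fed to the LoRA-adapted transformer in place of the original token embeddings, cutting attention cost roughly by a factor of K.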
Mentions: K-Token Merging · LoRA
Read full story at arXiv cs.CL → (arxiv.org)