Modelwire

Stability and Generalization in Looped Transformers

Researchers introduce a fixed-point framework for analyzing looped transformers, which scale test-time compute by applying the same block repeatedly. The work proves that loop architectures without recall (re-injecting the input at each iteration) cannot achieve strong input-dependence, while recall combined with outer normalization yields stable, reachable fixed points that can carry meaningful, input-dependent predictions.
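The fixed-point view is easy to make concrete. Below is a minimal numerical sketch, assuming a toy loop map h ↦ normalize(tanh(Wh + x)); the weights W, the unit-norm `normalize`, and the `loop` helper are illustrative stand-ins, not the paper's construction. It shows the two ingredients the summary names: recall (the input x re-entering every iteration) and outer normalization (keeping iterates bounded so the loop settles at a fixed point).

```python
import numpy as np

# Toy sketch (our construction, not the paper's exact model): one looped
# block h -> normalize(tanh(W h + x)). "Recall" means the input x is
# re-injected at every iteration; "outer normalization" keeps iterates
# bounded so repeated application can settle at a fixed point h* = f(h*, x).

rng = np.random.default_rng(0)
d = 16
W = 0.1 * rng.normal(size=(d, d))   # small weights -> contractive loop map

def normalize(h):
    # Outer normalization: project each iterate onto the unit sphere.
    return h / (np.linalg.norm(h) + 1e-8)

def loop(x, n_iters=1000, tol=1e-10):
    # Loop the same block until the iterate stops moving: test-time
    # compute scales with iteration count rather than network depth.
    h = np.zeros(d)
    for t in range(n_iters):
        h_next = normalize(np.tanh(W @ h + x))  # recall: x enters every step
        if np.linalg.norm(h_next - h) < tol:
            return h_next, t
        h = h_next
    return h, n_iters

x1, x2 = rng.normal(size=d), rng.normal(size=d)
h1, t1 = loop(x1)
h2, t2 = loop(x2)
print(f"converged in {t1} and {t2} steps; fixed points differ by "
      f"{np.linalg.norm(h1 - h2):.3f}")

# Without recall the update would be h -> normalize(np.tanh(W @ h)): the
# map no longer involves x at all, so its fixed points are input-independent.
# This is the sense in which a recall-free loop cannot produce strongly
# input-dependent predictions, whatever initialization it starts from.
```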

Mentions: Looped Transformers · Fixed-point iteration

Modelwire summarizes — we don’t republish. The full article lives on arxiv.org. If you’re a publisher and want a different summarization policy for your work, see our takedown page.

Related

A Nonlinear Separation Principle: Applications to Neural Networks, Control and Learning

arXiv cs.LG

AdaSplash-2: Faster Differentiable Sparse Attention

arXiv cs.CL

Structural interpretability in SVMs with truncated orthogonal polynomial kernels

arXiv cs.LG