arxiv:2507.09394

A Random Matrix Theory Perspective on the Learning Dynamics of Multi-head Latent Attention

Published on Jul 12, 2025

AI-generated summary

Multi-head latent attention (MLA) affects a transformer's internal capacity during pretraining: among the variants studied, only MLA-Decoupled maintains broad spectral support and prevents rank collapse, unlike standard MHA and MLA-PreRoPE.

Abstract

In this work, we study how multi-head latent attention (MLA), a popular strategy for compressing key/value memory, affects a transformer's internal capacity during pretraining. Using a lightweight suite of Marchenko-Pastur (MP) diagnostics, we analyze the spectrum of the W_Q W_K^⊤ Gram matrix throughout training, comparing three variants: the standard multi-head attention (MHA) baseline, MLA-PreRoPE, with rotary embeddings applied before compression, and MLA-Decoupled, which shares a single rotary sub-vector across all heads. Our random matrix analysis reveals three key findings: (i) capacity bottlenecks emerge locally: both MHA and MLA-PreRoPE exhibit sharp, early spikes in specific layers that persist and propagate, disrupting the balance between bulk and outlier directions; (ii) these spikes coincide with rank collapse, concentrating the model's expressivity into narrow subspaces; (iii) only the decoupled variant prevents this cascade, maintaining broad spectral support and suppressing outlier formation across layers. These results underscore that how rotary embeddings are applied is just as critical as where compression occurs: sharing rotary components across heads mitigates spectral fragmentation and preserves representational capacity.
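To make the kind of MP diagnostic described above concrete, here is a minimal Python sketch (not the authors' implementation). It assumes per-head weights W_Q and W_K of shape (d_head, d_model); the function name mp_diagnostics, the median-based bulk-scale estimate, and the spectral-entropy effective rank are illustrative choices, not the paper's exact recipe.

    # Sketch of a Marchenko-Pastur (MP) diagnostic for the query-key product
    # W_Q W_K^T. Assumes per-head weights of shape (d_head, d_model); names and
    # estimators here are illustrative, not the paper's exact method.
    import numpy as np

    def mp_diagnostics(W_Q: np.ndarray, W_K: np.ndarray) -> dict:
        """Compare the spectrum of W_Q @ W_K.T against the MP bulk edge."""
        d_head, d_model = W_Q.shape
        M = W_Q @ W_K.T                          # (d_head, d_head) query-key product
        sv = np.linalg.svd(M, compute_uv=False)  # singular values of the product
        evals = sv**2 / d_model                  # covariance-style normalization

        gamma = d_head / d_model                 # aspect ratio used by the MP law
        # Crude bulk-scale estimate: assume the bulk dominates, so the median
        # eigenvalue sets the scale sigma^2 (up to an O(1) factor).
        sigma2 = np.median(evals)
        mp_edge = sigma2 * (1.0 + np.sqrt(gamma))**2   # upper edge of the MP bulk

        # Eigenvalues above the edge are treated as outliers (concentration into
        # narrow subspaces); the spectral-entropy effective rank tracks how broad
        # the spectral support is.
        n_outliers = int((evals > mp_edge).sum())
        p = evals / evals.sum()
        eff_rank = float(np.exp(-(p * np.log(p + 1e-12)).sum()))
        return {"mp_edge": mp_edge,
                "n_outliers": n_outliers,
                "effective_rank": eff_rank}

    # Example usage on random per-head weights (d_head=64, d_model=512):
    rng = np.random.default_rng(0)
    W_Q = rng.normal(size=(64, 512)) / np.sqrt(512)
    W_K = rng.normal(size=(64, 512)) / np.sqrt(512)
    print(mp_diagnostics(W_Q, W_K))

Tracking these quantities per layer over training checkpoints is one way to observe the behaviors the abstract describes: persistent outlier spikes in specific layers and a shrinking effective rank would indicate the capacity bottlenecks and rank collapse reported for MHA and MLA-PreRoPE.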
