arxiv:2507.07996

Skip a Layer or Loop it? Test-Time Depth Adaptation of Pretrained LLMs

Published on Jul 10 · Submitted by zhoutianyi on Jul 11

Abstract

AI-generated summary: A method using chain-of-layers (CoLa) and Monte Carlo Tree Search (MCTS) optimizes the architecture of a pretrained large language model for individual samples, improving inference efficiency and performance.

Can a pretrained neural network adapt its architecture to different inputs without any finetuning? Do we need all layers for simple tasks, and are they adequate for challenging tasks? We found that the layers of a pretrained large language model (LLM) can be manipulated as separate modules to build a better and even shallower model customized for each test sample. In particular, each layer from the pretrained model can be skipped/pruned or repeated multiple times, like a recurrent neural network (RNN), and stacked with others in arbitrary orders, yielding a chain-of-layers (CoLa) per sample. This compositional space greatly expands the scope of existing works on looped/recurrent pretrained modules, layer pruning, or early-exit networks. We develop a Monte Carlo Tree Search (MCTS) protocol to explore and identify the optimal CoLa for each sample from math and commonsense reasoning benchmarks. Compared to a static model of a fixed depth, CoLa allows shortcut paths (fast thinking), recurrence of the same layer(s) (slow thinking), and combining both, offering more flexible, dynamic architectures for different inputs. We conduct an extensive analysis of the MCTS-optimized CoLa, which leads to two key findings: (1) For >75% of samples with correct predictions by the original LLM, we can find shorter CoLa, suggesting a large space for improving inference efficiency; (2) For >60% of samples with originally incorrect predictions, we can identify CoLa achieving correct predictions, suggesting a large space for performance enhancement. Our results highlight the shortcomings of using a fixed architecture of pretrained LLMs for inference on different samples and pave the way to unlocking the generalization power of test-time depth adaptation.

Community

Paper author Paper submitter

We found that the layers of a pretrained large language model (LLM) can be manipulated as separate modules to build a better and even shallower model customized for each test sample. In particular, each layer from a pretrained LLM can be skipped or repeated multiple times, like a recurrent neural network (RNN), and stacked with others in arbitrary orders, yielding a chain-of-layers (CoLa) per sample. This compositional space significantly expands the scope of existing works on looped or recurrent pretrained modules, layer pruning, and early-exit networks.
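
To make the layer-composition idea concrete, here is a minimal sketch. It is not the authors' code: it uses toy stand-in blocks instead of a real pretrained LLM, and the names (ToyBlock, run_cola) are illustrative. The point is only that a CoLa path is a sequence of layer indices, so skipping a layer means omitting its index and looping a layer means repeating it.

```python
# Minimal illustration of a per-sample chain-of-layers (CoLa) path.
# ToyBlock is a hypothetical stand-in for one pretrained transformer layer.
import torch
import torch.nn as nn

class ToyBlock(nn.Module):
    """Stand-in for one pretrained transformer layer (toy, pre-norm residual MLP)."""
    def __init__(self, dim):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        return x + self.ff(self.norm(x))

def run_cola(layers, x, path):
    """Execute the layers in the order given by `path`.

    `path` is a list of layer indices: omitting an index skips (prunes) that
    layer, repeating an index applies it recurrently, RNN-style.
    """
    for idx in path:
        x = layers[idx](x)
    return x

dim, depth = 64, 6
layers = nn.ModuleList([ToyBlock(dim) for _ in range(depth)])  # "pretrained" stack
x = torch.randn(1, 8, dim)                                     # one test sample

full_path = list(range(depth))        # the original static architecture
shallow   = [0, 1, 3, 5]              # skip layers 2 and 4 (fast thinking)
recurrent = [0, 1, 2, 2, 2, 3, 4, 5]  # loop layer 2 three times (slow thinking)

for name, path in [("full", full_path), ("shallow", shallow), ("recurrent", recurrent)]:
    y = run_cola(layers, x, path)
    print(name, tuple(y.shape))
```

The three paths at the end correspond to the original static model, a shallower "fast thinking" variant, and a "slow thinking" variant that loops one layer.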


We develop a Monte Carlo Tree Search (MCTS) protocol to explore and identify the optimal CoLa for each sample from math and commonsense reasoning benchmarks. Compared to a static model of a fixed depth, CoLa allows shortcut paths (fast thinking), recurrence of the same layer(s) (slow thinking), and combining both, offering more flexible, dynamic architectures for different inputs. Specifically,

  • We introduce a new dimension of generalization that turns a static pretrained LLM into dynamic architectures of adaptive depths without training any parameters: for different test samples/tasks, the pretrained layers can be skipped, repeated, and assembled to create better (more accurate and/or shallower) CoLa models without further training.

  • We develop an MCTS protocol for efficient architecture search of CoLa with adaptive depth for each sample (a minimal sketch of such a search follows this list). An in-depth analysis of patterns in the resulting CoLa models yields critical insights into the importance and redundancy of layers at different depths of pretrained/finetuned models of different sizes, which also vary for tasks at different difficulty levels.
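
The sketch below shows one plausible shape such a search could take; it is a simplified assumption, not the paper's Algorithm 1. Tree nodes hold partial layer paths, children extend a path by one layer index or a STOP action, rollouts are scored by a user-supplied evaluate(path) reward (hypothetical here), and the most-visited branch is returned as the searched CoLa.

```python
# A condensed MCTS over layer paths (assumptions, not the paper's exact algorithm).
import math, random

STOP = -1  # action that terminates a path

class Node:
    def __init__(self, path, parent=None):
        self.path, self.parent = path, parent
        self.children, self.visits, self.value = {}, 0, 0.0

def uct(child, parent, c=1.4):
    if child.visits == 0:
        return float("inf")
    return child.value / child.visits + c * math.sqrt(math.log(parent.visits) / child.visits)

def mcts(evaluate, num_layers, max_len=12, simulations=200):
    root = Node(path=())
    actions = list(range(num_layers)) + [STOP]
    for _ in range(simulations):
        node = root
        # 1) Selection: descend while the current node is fully expanded.
        while node.children and len(node.children) == len(actions):
            node = max(node.children.values(), key=lambda ch: uct(ch, node))
        # 2) Expansion: try one untried action (unless the path is terminal).
        if node.path[-1:] != (STOP,) and len(node.path) < max_len:
            a = random.choice([a for a in actions if a not in node.children])
            child = Node(node.path + (a,), parent=node)
            node.children[a] = child
            node = child
        # 3) Simulation: random rollout to a complete path, then score it.
        rollout = list(node.path)
        while rollout[-1:] != [STOP] and len(rollout) < max_len:
            rollout.append(random.choice(actions))
        reward = evaluate([i for i in rollout if i != STOP])
        # 4) Backpropagation: update statistics along the visited branch.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Return the most-visited branch from the root as the searched CoLa path.
    best, node = [], root
    while node.children:
        node = max(node.children.values(), key=lambda ch: ch.visits)
        if node.path[-1] == STOP:
            break
        best.append(node.path[-1])
    return best

# Toy usage with a hypothetical reward that simply prefers short non-empty paths.
path = mcts(lambda p: 1.0 / (1 + len(p)) if p else 0.0, num_layers=6, simulations=200)
print(path)
```

In the paper's setting, the reward would instead come from checking the prediction produced by the candidate path on held-out input(s), as the authors note in the discussion below.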

We conduct an extensive analysis of the MCTS-optimized CoLa, which leads to two key findings:

(1) For >75% of samples with correct predictions by the original LLM, we can find shorter CoLa, suggesting a large space for improving inference efficiency;

(2) For >60% of samples with originally incorrect predictions, we can identify CoLa achieving correct predictions, suggesting a large space for performance enhancement.

Our results highlight the shortcomings of using a fixed architecture of pretrained LLMs for inference on different samples and pave the way to unlocking the generalization power of test-time depth adaptation.

Hi @zhoutianyi, thank you for sharing your paper, very interesting findings! I had a quick question about Algorithm 1. The simulation step says that it will "evaluate path accuracy on held-out input(s)". How does the algorithm actually select those held-out inputs? I'm guessing they need to be (closely?) related to the current input that CoLa is being applied to, but I couldn't find the details, or maybe I have misunderstood something.
Thanks!

Paper author

Hi @myeesw, it is great to hear that you find our discoveries interesting. The "held-out input(s)" here refer to inputs from held-out test sets that have not been used for model training. We will make this clearer in later versions. In this work, we do not explore the possibility of generalizing the CoLa found for one input to other (similar) inputs, but what you mention is exactly what we are trying now, and we will share the results in later preprints. Good point! Thanks!

Hi @zhoutianyi . Thanks for sharing your paper.

One question about Table 1: Do you search distinct layer inference strategies, or a single strategy, for different datasets?

Paper author

Hi @Enigrand, the MCTS strategy we used to search/optimize the CoLa for each sample is the same across all datasets, but the searched architecture varies across different samples.

Impressive results! Congratulations 👏

Paper author

Thanks!

Thank you for the perspectives in your paper. When performing MCTS, does each sample need to be searched and run 200 times individually? Wouldn't this result in high computational complexity? During the search process, can only one sample be searched at a time, since the search paths for different samples are different and cannot be parallelized? In addition, how should the KV cache be managed, given that under different paths the same layer's KV cache may be used multiple times?
Thanks!

Paper author

Thanks! These are good points for better practical implementations. Since our primary goal is to find the upper bound of the proposed layer-composition space, we run 200 simulations per sample to ensure the optimality of the searched paths. However, it should be possible in practice to considerably reduce this number without hurting the performance too much. The path search for different samples can be parallelized since the samples are independent of each other. The KV cache of each layer can be reused several times during the search: in the MCTS search tree, each child node can reuse the KV cache of its parent node.
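
A minimal sketch of the parent-to-child reuse described above, under the assumption that each tree node caches the activations produced by its layer prefix (in a full LLM, the corresponding per-layer KV tensors would be cached alongside them), so extending a path only computes the single appended layer:

```python
# Illustrative only: tree nodes memoize the output of their layer prefix,
# so sibling paths that share a prefix never recompute it.
import torch
import torch.nn as nn

class PathNode:
    """One search-tree node: a prefix of a CoLa path plus its cached state."""
    def __init__(self, hidden, parent=None):
        self.hidden = hidden      # activations after running this node's layer prefix
        self.parent = parent      # in a real LLM, per-layer KV tensors would be stored too
        self.children = {}

    def extend(self, layers, layer_idx):
        """Return the child that appends `layer_idx`; only the new layer is computed."""
        if layer_idx not in self.children:
            with torch.no_grad():
                new_hidden = layers[layer_idx](self.hidden)
            self.children[layer_idx] = PathNode(new_hidden, parent=self)
        return self.children[layer_idx]

# Toy usage: the two paths below share the prefix [0], so layer 0 runs only once.
layers = nn.ModuleList([nn.Sequential(nn.Linear(16, 16), nn.GELU()) for _ in range(4)])
root = PathNode(hidden=torch.randn(1, 8, 16))
path_a = root.extend(layers, 0).extend(layers, 1)   # path [0, 1]
path_b = root.extend(layers, 0).extend(layers, 2)   # path [0, 2] reuses layer 0's cached output
print(path_a.hidden.shape, path_b.hidden.shape)
```

Memoizing children this way is one way to keep the per-sample simulation budget (e.g. 200 rollouts) affordable, since shared prefixes are computed only once.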

Where may I find your code, if you have it?
