SpecTr: Fast Speculative Decoding via Optimal Transport
Autoregressive sampling from large language models has led to state-of-the-art results in several natural language tasks. However, autoregressive sampling generates tokens one at a time, making it slow and even prohibitive in certain tasks. One way to speed up sampling is speculative decoding: use a small model to sample a draft (block or sequence of tokens), and then score all tokens in the draft with the large language model in parallel. A subset of the tokens in the draft is accepted (and the rest rejected) based on a statistical method that guarantees the final output follows the distribution of the large model. In this work, we provide a principled understanding of speculative decoding through the lens of optimal transport (OT) with membership cost. This framework can be viewed as an extension of the well-known maximal-coupling problem. The new formulation enables us to generalize the speculative decoding method to allow for a set of k candidates at the token level, which leads to an improved optimal membership cost. We show that the optimal draft selection algorithm (transport plan) can be computed via linear programming, whose best-known runtime is exponential in k. We then propose a valid draft selection algorithm whose acceptance probability is (1-1/e)-optimal multiplicatively. Moreover, it can be computed in time almost linear in the size of the domain of a single token. Using this new draft selection algorithm, we develop a new autoregressive sampling algorithm called SpecTr, which provides a speedup in decoding while ensuring no quality degradation in the decoded output. We experimentally demonstrate that, for state-of-the-art large language models, the proposed approach achieves a wall-clock speedup of 2.13X, a further 1.37X speedup over speculative decoding, on standard benchmarks.
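For intuition, here is a minimal Python sketch of the single-draft, token-level accept/reject step that SpecTr generalizes (this is the standard speculative-sampling rule, not the paper's k-candidate optimal-transport selection); the vocabulary size and the distributions `p` and `q` are toy placeholders.

```python
import numpy as np

def speculative_accept(draft_token, q, p, rng):
    """One token-level accept/reject step of standard speculative sampling.

    q: draft (small-model) distribution over the vocabulary, shape (V,)
    p: target (large-model) distribution over the vocabulary, shape (V,)
    Accept the drafted token with probability min(1, p/q); otherwise resample
    from the residual max(p - q, 0), renormalized, so that the returned token
    is exactly distributed according to p.
    """
    accept_prob = min(1.0, float(p[draft_token]) / max(float(q[draft_token]), 1e-12))
    if rng.random() < accept_prob:
        return draft_token, True
    residual = np.maximum(p - q, 0.0)
    residual /= residual.sum()
    return int(rng.choice(len(p), p=residual)), False

# Toy usage with a 5-token vocabulary (illustrative distributions only).
rng = np.random.default_rng(0)
q = np.array([0.5, 0.2, 0.1, 0.1, 0.1])   # draft model
p = np.array([0.3, 0.3, 0.2, 0.1, 0.1])   # large model
draft = int(rng.choice(5, p=q))
print(speculative_accept(draft, q, p, rng))
```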
Rectified Flow: A Marginal Preserving Approach to Optimal Transport
We present a flow-based approach to the optimal transport (OT) problem between two continuous distributions pi_0, pi_1 on R^d: minimize a transport cost E[c(X_1-X_0)] over the set of couplings (X_0,X_1) whose marginal distributions equal pi_0 and pi_1, respectively, where c is a cost function. Our method iteratively constructs a sequence of neural ordinary differential equations (ODEs), each learned by solving a simple unconstrained regression problem, which monotonically reduces the transport cost while automatically preserving the marginal constraints. This yields a monotonic interior approach that traverses inside the set of valid couplings to decrease the transport cost, distinguishing it from most existing approaches that enforce the coupling constraints from the outside. The main idea of the method draws from rectified flow, a recent approach that simultaneously decreases the whole family of transport costs induced by convex functions c (and is hence multi-objective in nature), but is not tailored to minimize a specific transport cost. Our method is a single-objective variant of rectified flow that is guaranteed to solve the OT problem for a fixed, user-specified convex cost function c.
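As a rough illustration of the regression step the abstract describes, the hedged PyTorch sketch below fits a velocity field to the displacement X_1 - X_0 along linear interpolations of coupled samples, which is the basic rectified-flow objective; the toy Gaussian distributions, network size, and training loop are our own choices, not the paper's.

```python
import torch, torch.nn as nn

# Minimal sketch: the rectified-flow regression step.  Given coupled samples
# (X0, X1), fit a velocity field v(x_t, t) to the displacement X1 - X0 along
# the linear interpolation x_t = (1 - t) X0 + t X1.  Re-running this with the
# coupling induced by the learned ODE is the iterative refinement the abstract
# refers to.
d = 2
X0 = torch.randn(4096, d)                      # toy samples from pi_0
X1 = torch.randn(4096, d) * 0.5 + 3.0          # toy samples from pi_1
target = X1 - X0                               # regression target: straight-line velocity

v = nn.Sequential(nn.Linear(d + 1, 64), nn.ReLU(), nn.Linear(64, d))
opt = torch.optim.Adam(v.parameters(), lr=1e-3)

for step in range(500):
    t = torch.rand(X0.shape[0], 1)
    x_t = (1 - t) * X0 + t * X1                # point on the interpolation path
    loss = ((v(torch.cat([x_t, t], dim=1)) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print(float(loss))
```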
u-μP: The Unit-Scaled Maximal Update Parametrization
The Maximal Update Parametrization (muP) aims to make the optimal hyperparameters (HPs) of a model independent of its size, allowing them to be swept using a cheap proxy model rather than the full-size target model. We present a new scheme, u-muP, which improves upon muP by combining it with Unit Scaling, a method for designing models that makes them easy to train in low precision. The two techniques have a natural affinity: muP ensures that the scale of activations is independent of model size, and Unit Scaling ensures that activations, weights, and gradients begin training with a scale of one. This synthesis opens the door to a simpler scheme whose default values are near-optimal. This in turn facilitates a more efficient sweeping strategy, with u-muP models reaching a lower loss than comparable muP models and working out-of-the-box in FP8.
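The sketch below illustrates, in schematic form, the kind of width-dependent scaling rules that muP-style parametrizations prescribe so that a learning rate tuned on a small proxy model transfers to a larger one; the specific formulas and names are a simplified, assumed rendering for Adam-like optimizers, not the exact u-muP rules.

```python
# Illustrative sketch only: one commonly quoted form of muP-style rules for
# Adam-like optimizers (per-layer init scale and learning-rate scaling as the
# hidden width grows).  The exact tables in the muP / u-muP papers should be
# consulted for a real implementation; the names below are our own.
def mup_like_scales(width, base_width=256, base_lr=1e-3):
    ratio = base_width / width
    return {
        "hidden_init_std": (1.0 / width) ** 0.5,   # weight variance ~ 1/fan_in
        "hidden_lr": base_lr * ratio,              # hidden-matrix LR shrinks ~ 1/width
        "output_multiplier": ratio,                # output logits scaled down ~ 1/width
    }

for w in (256, 1024, 4096):
    print(w, mup_like_scales(w))
```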
Quantum limit for two-dimensional resolution of two incoherent optical point sources
We obtain the multiple-parameter quantum Cramér-Rao bound for estimating the transverse Cartesian components of the centroid and separation of two incoherent optical point sources using an imaging system with finite spatial bandwidth. Under quite general and realistic assumptions on the point-spread function of the imaging system, and for weak source strengths, we show that the Cramér-Rao bounds for the x and y components of the separation are independent of the values of those components, which may be well below the conventional Rayleigh resolution limit. We also propose two linear-optics-based measurement methods that approach the quantum bound for the estimation of the Cartesian components of the separation once the centroid has been located. One of the methods is an interferometric scheme that approaches the quantum bound for sub-Rayleigh separations. The other method, which uses fiber coupling, can in principle attain the bound regardless of the distance between the two sources.
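For reference, the general statement of the multiparameter quantum Cramér-Rao bound underlying the abstract can be written as follows (a standard formula, not the paper's specific expressions), where N is the number of detected photons and the L_i are the symmetric logarithmic derivatives defined by partial_i rho_theta = (L_i rho_theta + rho_theta L_i)/2.

```latex
% Multiparameter quantum Cramer-Rao bound (standard form): for any unbiased
% estimator \hat{\theta} of \theta = (centroid, separation components),
\mathrm{Cov}(\hat{\theta}) \;\succeq\; \frac{1}{N}\,\mathcal{F}_Q(\theta)^{-1},
\qquad
[\mathcal{F}_Q]_{ij} \;=\; \mathrm{Re}\,\mathrm{Tr}\!\left[\rho_\theta\,\tfrac{1}{2}\{L_i, L_j\}\right].
```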
Entanglement Purification in Quantum Networks: Guaranteed Improvement and Optimal Time
While the concept of entanglement purification protocols (EPPs) is straightforward, the integration of EPPs into network architectures requires careful performance evaluation and optimization that take into account realistic conditions and imperfections, especially probabilistic entanglement generation and quantum memory decoherence. It is important to understand what a successful EPP is guaranteed to improve for arbitrary, non-identical input states, since this determines whether performing the EPP is worthwhile at all. When a successful EPP can offer improvement, the time at which the EPP is performed should also be optimized to maximize that improvement. In this work, we study the guaranteed improvement and optimal time for the CNOT-based recurrence EPP, previously shown to be optimal in various scenarios. We first prove guaranteed improvement for multiple figures of merit, including fidelity and several entanglement measures, when compared to practical baselines as functions of the input states. Notably, however, the guaranteed improvement we prove does not imply the universality of the EPP as introduced in arXiv:2407.21760. We then prove robust, parameter-independent optimal times for typical error models and figures of merit. We further explore memory decoherence described by continuous-time Pauli channels, and demonstrate the phenomenon of an optimal-time transition when the memory decoherence error pattern changes. Our work deepens the understanding of EPP performance in realistic scenarios and offers insights into optimizing quantum networks that integrate EPPs.
Minimum Entropy Coupling with Bottleneck
This paper investigates a novel lossy compression framework operating under logarithmic loss, designed to handle situations where the reconstruction distribution diverges from the source distribution. This framework is especially relevant for applications that require joint compression and retrieval, and for scenarios involving distributional shifts due to processing. We show that the proposed formulation extends the classical minimum entropy coupling framework by integrating a bottleneck, allowing for a controlled degree of stochasticity in the coupling. We explore the decomposition of Minimum Entropy Coupling with Bottleneck (MEC-B) into two distinct optimization problems: Entropy-Bounded Information Maximization (EBIM) for the encoder, and Minimum Entropy Coupling (MEC) for the decoder. Through extensive analysis, we provide a greedy algorithm for EBIM with guaranteed performance, and we characterize the optimal solution near functional mappings, yielding significant theoretical insight into the structural complexity of this problem. Furthermore, we illustrate the practical application of MEC-B through experiments in Markov Coding Games (MCGs) under rate limits. These games simulate a communication scenario within a Markov Decision Process, in which an agent must transmit a compressed message from a sender to a receiver through its actions. Our experiments highlight the trade-offs between MDP rewards and receiver accuracy across various compression rates, showcasing the efficacy of our method compared to a conventional compression baseline.
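For context, the classical minimum entropy coupling problem that MEC-B extends can be stated compactly as below; the bottleneck variant described in the abstract additionally limits the stochasticity of the coupling through a rate constraint (our paraphrase of the setup, not the paper's exact formulation).

```latex
% Classical minimum entropy coupling: over all joint distributions \pi with
% the prescribed marginals p_X and p_Y, minimize the joint entropy.
\min_{\pi}\; H_\pi(X, Y)
\quad \text{subject to} \quad
\sum_y \pi(x, y) = p_X(x), \qquad \sum_x \pi(x, y) = p_Y(y).
```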
Metastable Cosmological Constant and Gravitational Bubbles: Ultra-Late-Time Transitions in Modified Gravity
The observed cosmological constant may originate as the minimum value U_{min} of a scalar field potential, where the scalar field is frozen due to a large mass. If this vacuum is metastable, it may decay to a true vacuum either at present or in the future. Assuming its decay rate Gamma is comparable to the Hubble expansion rate H_0, we estimate the scale of true vacuum bubbles and analyze their evolution. We find that their initial formation scale is sub-millimeter and that their tension causes rapid collapse if m gtrsim 1.7 cdot 10^{-3} eV. For smaller masses, the bubbles expand at the speed of light. We extend our analysis to scalar-tensor theories with non-minimal coupling, finding that the nucleation scale of gravitational constant bubbles remains consistent with the sub-millimeter regime of General Relativity. The critical mass scale remains around 10^{-3} eV. A theoretical estimate at redshift z_{obs} sim 0.01 suggests an observable bubble radius of sim 50 Mpc, implying a gravitational transition triggered sim 300 Myr ago, with a present-day size approaching 100 Mpc. Additionally, we explore mass ranges (m < 10^{-3} eV) and non-minimal coupling xi ranges (10^{-8} eV^{2-n} - 10^{-1} eV^{2-n}) that lead to a variation Delta G/G_N within the 1%-7% range. We assume a non-minimal coupling of the form F(phi) = 1/kappa - xi phi^n, with kappa = 8 pi G_N and 2 leq n leq 9. Finally, we review various proposed solutions to the Hubble tension based on local physics and/or transitions, including ultra-late-time transitional models (z sim 0.01), screened fifth-force mechanisms, and the Lambda_{rm s}CDM model, which features a transition at z sim 2. We discuss observational hints supporting these scenarios and the theoretical challenges they face.
Efficient and Equivariant Graph Networks for Predicting Quantum Hamiltonian
We consider the prediction of the Hamiltonian matrix, which finds use in quantum chemistry and condensed matter physics. Efficiency and equivariance are two important but conflicting factors. In this work, we propose an SE(3)-equivariant network, named QHNet, that achieves both efficiency and equivariance. Our key advance lies in the innovative design of the QHNet architecture, which not only obeys the underlying symmetries but also reduces the number of tensor products by 92%. In addition, QHNet prevents exponential growth of the channel dimension as more atom types are involved. We perform experiments on the MD17 datasets, including four molecular systems. Experimental results show that QHNet achieves performance comparable to state-of-the-art methods at a significantly faster speed. Moreover, QHNet consumes 50% less memory due to its streamlined architecture. Our code is publicly available as part of the AIRS library (https://github.com/divelab/AIRS).
Multi-Controlled Quantum Gates in Linear Nearest Neighbor
Multi-controlled single-target (MC) gates are some of the most crucial building blocks for a wide variety of quantum algorithms. How to implement them optimally is thus a pivotal question. To answer this question in an architecture-independent manner, and to obtain a worst-case estimate, we should look at a linear nearest-neighbor (LNN) architecture, as this can be embedded in almost any qubit connectivity. Motivated by the above, here we describe a method which implements MC gates using no more than sim 4k+8n CNOT gates -- up to a 60% reduction over the state of the art -- while allowing complete flexibility in choosing the locations of the n controls, the target, and a dirty ancilla out of k qubits. More strikingly, when k approx n, our upper bound is sim 12n -- the best known for unrestricted connectivity -- and when n = 1, our upper bound is sim 4k -- the best known for a single long-range CNOT gate over k qubits. Therefore, if our upper bound can be reduced, the cost of one or both of these simpler versions of MC gates will be immediately reduced accordingly. In practice, our method provides circuits that tend to require fewer CNOT gates than our upper bound for almost any given instance of MC gates.
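The two special cases quoted above follow directly from the stated bound; a one-line check:

```latex
% Specializing the general CNOT-count bound of ~ 4k + 8n:
\#\mathrm{CNOT} \;\lesssim\; 4k + 8n
\;\;\Longrightarrow\;\;
\begin{cases}
k \approx n: & 4n + 8n = 12n,\\[2pt]
n = 1: & 4k + 8 \,\sim\, 4k \quad (k \gg 1).
\end{cases}
```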
Metrological detection of multipartite entanglement through dynamical symmetries
Multipartite entanglement, characterized by the quantum Fisher information (QFI), plays a central role in quantum-enhanced metrology and in understanding quantum many-body physics. With a dynamical generalization of the Mazur-Suzuki relations, we provide a rigorous lower bound on the QFI of thermal Gibbs states in terms of dynamical symmetries, i.e., operators with periodic time dependence. We demonstrate that this bound can be saturated when a complete set of dynamical symmetries is considered. Moreover, this lower bound can be generalized to the QFI matrix and to the QFI of thermal pure states, as predicted by the eigenstate thermalization hypothesis. Our results reveal a new perspective for detecting multipartite entanglement and other generalized variances in an equilibrium system from its nonstationary dynamical properties, and they are promising for studying emergent nonequilibrium many-body physics.
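For reference, the quantum Fisher information that such bounds target has the standard spectral form below for a state rho = sum_k p_k |k><k| probed by a generator A; the paper's contribution is a lower bound on this quantity in terms of dynamical symmetries, which is not reproduced here.

```latex
% Standard spectral form of the QFI for a mixed state \rho = \sum_k p_k |k\rangle\langle k|
% with respect to a Hermitian generator A:
F_Q[\rho, A] \;=\; 2 \sum_{\substack{k, l \\ p_k + p_l > 0}} \frac{(p_k - p_l)^2}{p_k + p_l}\,\big|\langle k | A | l \rangle\big|^2 .
```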
Bootstrability in Line-Defect CFT with Improved Truncation Methods
We study the conformal bootstrap of 1D CFTs on the straight Maldacena-Wilson line in 4D {cal N}=4 super-Yang-Mills theory. We introduce an improved truncation scheme with an 'OPE tail' approximation and use it to reproduce the 'bootstrability' results of Cavaglià et al. for the squared OPE coefficients of the first three unprotected operators. For example, for the first squared OPE coefficient at 't Hooft coupling (4pi)^2, linear-functional methods with two sum rules from integrated correlators give the rigorous result 0.294014873 pm 4.88 cdot 10^{-8}, whereas our methods, with machine-precision computations, give 0.294014228 pm 6.77 cdot 10^{-7}. For our numerical searches, we benchmark the Reinforcement Learning Soft Actor-Critic algorithm against an Interior Point Method algorithm (IPOPT) and comment on the merits of each algorithm.
Quantum-Enhanced Conformal Methods for Multi-Output Uncertainty: A Holistic Exploration and Experimental Analysis
In this paper, we propose a unified approach to harnessing quantum conformal methods for multi-output distributions, with particular emphasis on two experimental paradigms: (i) a standard 2-qubit circuit scenario producing a four-dimensional outcome distribution, and (ii) a multi-basis measurement setting that concatenates measurement probabilities in different bases (Z, X, Y) into a twelve-dimensional output space. By combining a multi-output regression model (e.g., random forests) with distributional conformal prediction, we validate coverage and interval-set sizes on both simulated quantum data and multi-basis measurement data. Our results confirm that classical conformal prediction can effectively provide coverage guarantees even when the target probabilities derive from inherently quantum processes. Such synergy opens the door to next-generation quantum-classical hybrid frameworks, providing both improved interpretability and rigorous coverage for quantum machine learning tasks. All code and fully reproducible Colab notebooks are available at https://github.com/detasar/QECMMOU.
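As a hedged illustration of the classical ingredient, the following sketch runs split conformal prediction with a multi-output random forest on synthetic targets standing in for measurement probabilities; the data-generating process, coverage level, and split sizes are our own toy choices, not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Split conformal prediction with a multi-output random forest: calibrate a
# per-output residual quantile, then report intervals with ~90% marginal coverage.
rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 6))
# Stand-in for multi-dimensional measurement-probability targets (4 outputs).
Y = np.stack([np.tanh(X[:, 0] * X[:, 1]), X[:, 2] ** 2,
              np.sin(X[:, 3]), 0.5 * X[:, 4]], axis=1)
Y += 0.05 * rng.normal(size=Y.shape)

train, cal, test = np.split(rng.permutation(len(X)), [1500, 2250])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[train], Y[train])

alpha = 0.1
resid = np.abs(Y[cal] - model.predict(X[cal]))            # calibration residuals
k = int(np.ceil((len(cal) + 1) * (1 - alpha)))            # finite-sample quantile index
qhat = np.sort(resid, axis=0)[k - 1]                      # per-output conformal radius

inside = np.abs(Y[test] - model.predict(X[test])) <= qhat # membership in the intervals
print("per-output empirical coverage:", inside.mean(axis=0))
```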
Domain Adaptation and Entanglement: an Optimal Transport Perspective
Current machine learning systems are brittle in the face of distribution shifts (DS), where the target distribution on which the system is tested differs from the source distribution used to train it. This problem of robustness to DS has been studied extensively in the field of domain adaptation. For deep neural networks, a popular framework for unsupervised domain adaptation (UDA) is domain matching, in which algorithms try to align the marginal distributions in the feature or output space. The current theoretical understanding of these methods, however, is limited, and existing theoretical results are not precise enough to characterize their performance in practice. In this paper, we derive new bounds based on optimal transport that analyze the UDA problem. Our new bounds include a term which we dub entanglement, consisting of an expectation of Wasserstein distance between conditionals with respect to changing data distributions. Analysis of the entanglement term provides a novel perspective on the unoptimizable aspects of UDA. In various experiments with multiple models across several DS scenarios, we show that this term can be used to explain the varying performance of UDA algorithms.
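To make the Wasserstein ingredient concrete, here is a small sketch that computes a one-dimensional Wasserstein-1 distance between a source-domain feature and a shifted target-domain feature; the paper's entanglement term compares conditional distributions under changing data distributions and is not reproduced here.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Illustrative only: the kind of Wasserstein distance between source- and
# target-domain distributions that OT-based UDA bounds are built from.  Real
# bounds involve distances between multivariate conditionals; here we just
# compare one feature marginal under a simulated distribution shift.
rng = np.random.default_rng(0)
source_feat = rng.normal(loc=0.0, scale=1.0, size=5000)   # source-domain feature
target_feat = rng.normal(loc=0.7, scale=1.3, size=5000)   # shifted target-domain feature

print("W1(source, target) =", wasserstein_distance(source_feat, target_feat))
```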
Driving Enhanced Exciton Transfer by Automatic Differentiation
We model and study the processes of excitation, absorption, and transfer in various networks. The model consists of a harmonic oscillator representing a single-mode radiation field, a qubit acting as an antenna, a network through which the excitation propagates, and a qubit at the end serving as a sink. We investigate how off-resonant excitations can be optimally absorbed and transmitted through the network. Three strategies are considered: optimising network energies, adjusting the couplings between the radiation field, the antenna, and the network, or introducing and optimising driving fields at the start and end of the network. These strategies are tested on three different types of network with increasing complexity: nearest-neighbour and star configurations, and one associated with the Fenna-Matthews-Olson complex. The results show that, among the various strategies, the introduction of driving fields is the most effective, leading to a significant increase in the probability of reaching the sink in a given time. This result remains stable across networks of varying dimensionalities and types, and the driving process requires only a few parameters to be effective.
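As a toy illustration of the driving-field strategy, the following JAX sketch optimizes a piecewise-constant drive on the first site of a three-site, dissipation-free chain to maximize the population of the final (sink) site via automatic differentiation; the model, sizes, and hyperparameters are our simplifications, not the networks or the dissipative sink studied in the paper.

```python
import jax
import jax.numpy as jnp

# Toy model: a 3-site single-excitation chain (antenna -> network site -> sink)
# with closed, dissipation-free dynamics.  A piecewise-constant drive detuning
# the antenna site is optimized by automatic differentiation to maximize the
# sink population at the final time.
J, n_slices, T = 1.0, 8, 4.0
dt = T / n_slices
hop = J * (jnp.diag(jnp.ones(2), 1) + jnp.diag(jnp.ones(2), -1))  # nearest-neighbour hopping
antenna = jnp.diag(jnp.array([1.0, 0.0, 0.0]))                    # drive acts on the first site

def sink_population(drive):                        # drive: (n_slices,) amplitudes
    psi = jnp.array([1.0 + 0j, 0.0, 0.0])          # excitation starts on the antenna
    for i in range(n_slices):
        H = hop + drive[i] * antenna               # piecewise-constant Hamiltonian
        w, V = jnp.linalg.eigh(H)                  # exact propagator via eigendecomposition
        psi = (V * jnp.exp(-1j * w * dt)) @ (V.T @ psi)
    return jnp.abs(psi[2]) ** 2

grad_fn = jax.jit(jax.grad(lambda d: -sink_population(d)))
drive = jnp.zeros(n_slices)
for _ in range(200):                               # plain gradient ascent on the drive
    drive = drive - 0.1 * grad_fn(drive)

print("final sink population:", float(sink_population(drive)))
```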
Maximal Initial Learning Rates in Deep ReLU Networks
Training a neural network requires choosing a suitable learning rate, which involves a trade-off between the speed and effectiveness of convergence. While there has been considerable theoretical and empirical analysis of how large the learning rate can be, most prior work focuses only on late-stage training. In this work, we introduce the maximal initial learning rate eta^{ast} - the largest learning rate at which a randomly initialized neural network can successfully begin training and achieve (at least) a given threshold accuracy. Using a simple approach to estimate eta^{ast}, we observe that in constant-width fully-connected ReLU networks, eta^{ast} behaves differently from the maximum learning rate later in training. Specifically, we find that eta^{ast} is well predicted as a power of depth times width, provided that (i) the width of the network is sufficiently large compared to the depth, and (ii) the input layer is trained at a relatively small learning rate. We further analyze the relationship between eta^{ast} and the sharpness lambda_{1} of the network at initialization, indicating that they are closely, though not inversely, related. We formally prove bounds for lambda_{1} in terms of depth times width that align with our empirical results.
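A minimal sketch of one way to estimate eta^{ast} by bisection is shown below; the toy model (a one-hidden-layer ReLU network trained with full-batch gradient descent), the success criterion, and the search bracket are assumptions for illustration rather than the paper's protocol.

```python
import numpy as np

# Estimate the maximal initial learning rate by log-space bisection, where
# "success" means a freshly initialized one-hidden-layer ReLU network reaches
# a loss threshold within a small training budget without diverging.
# Thresholds, budget, and bracket are arbitrary toy choices.
rng = np.random.default_rng(0)
X = rng.normal(size=(512, 16))
y = np.sin(X[:, :1]) + 0.1 * rng.normal(size=(512, 1))     # toy regression target

def trains_successfully(lr, width=64, steps=500, loss_threshold=0.3):
    W1 = rng.normal(size=(16, width)) / np.sqrt(16)          # fresh random init each trial
    W2 = rng.normal(size=(width, 1)) / np.sqrt(width)
    for _ in range(steps):
        h = np.maximum(X @ W1, 0.0)                          # ReLU hidden layer
        err = h @ W2 - y
        loss = np.mean(err ** 2)
        if not np.isfinite(loss):
            return False                                     # diverged: count as failure
        gW2 = 2 * h.T @ err / len(X)                         # full-batch gradients
        gW1 = 2 * X.T @ ((err @ W2.T) * (h > 0)) / len(X)
        W1 -= lr * gW1
        W2 -= lr * gW2
    return loss <= loss_threshold

lo, hi = 1e-3, 1e2                    # bracket assumed to contain eta*
for _ in range(25):                   # bisection in log-space
    mid = np.sqrt(lo * hi)
    lo, hi = (mid, hi) if trains_successfully(mid) else (lo, mid)
print("estimated maximal initial learning rate ~", lo)
```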