Indirect dark matter searches at ultrahigh energy neutrino detectors
High to ultrahigh energy neutrino detectors can uniquely probe the properties of dark matter $\chi$ by searching for the secondary products produced through annihilation and/or decay processes. We evaluate the sensitivities to the dark matter thermally averaged annihilation cross section $\langle\sigma v\rangle$ and partial decay width into neutrinos $\Gamma_{\chi\rightarrow\nu\bar{\nu}}$ (in the mass range $10^7 \leq m_\chi/\mathrm{GeV} \leq 10^{15}$) for next generation observatories like POEMMA and GRAND. We show that in the range $10^7 \leq m_\chi/\mathrm{GeV} \leq 10^{11}$, space-based Cherenkov detectors like POEMMA have the advantage of full-sky coverage and rapid slewing, enabling an optimized dark matter observation strategy focusing on the Galactic center. We also show that ground-based radio detectors such as GRAND can achieve high sensitivities and high duty cycles in radio quiet areas. We compare the sensitivities of next generation neutrino experiments with existing constraints from IceCube and updated 90% C.L. upper limits on $\langle\sigma v\rangle$ and $\Gamma_{\chi\rightarrow\nu\bar{\nu}}$ using results from the Pierre Auger Collaboration and ANITA. We show that in the range $10^7 \leq m_\chi/\mathrm{GeV} \leq 10^{11}$ POEMMA and GRAND10k will improve the neutrino sensitivity to particle dark matter by factors of 2 to 10 over existing limits, whereas GRAND200k will improve this sensitivity by two orders of magnitude. In the range $10^{11} \leq m_\chi/\mathrm{GeV} \leq 10^{15}$, POEMMA's fluorescence observation mode will achieve an unprecedented sensitivity to dark matter properties. Finally, we highlight the importance of the uncertainties related to the dark matter distribution in the Galactic halo, using the latest fit and estimates of the Galactic parameters.
Robust Layerwise Scaling Rules by Proper Weight Decay Tuning
Empirical scaling laws prescribe how to allocate parameters, data, and compute, while maximal-update parameterization ($\mu$P) enables learning-rate transfer across widths by equalizing early-time update magnitudes. However, in modern scale-invariant architectures, training quickly enters an optimizer-governed steady state where normalization layers create backward scale sensitivity and the effective learning rate becomes width dependent, degrading $\mu$P transfer. We address this by introducing a weight-decay scaling rule for AdamW that preserves sublayer gain across widths. Empirically, the singular-value spectrum of each matrix parameter scales in norm as $\eta/\lambda$ with an approximately invariant shape; under width scaling $d$, we observe that the top singular value scales approximately as $\eta/\lambda \cdot d^{0.75}$. Combining this observation with the $\mu$P learning-rate rule $\eta_2 \propto d^{-1}$ for matrix-like parameters implies an empirical weight-decay scaling rule $\lambda_2 \propto d$ that approximately keeps sublayer gains width invariant. Together with vector-like parameters trained at $\eta_1 = \Theta_d(1)$ and $\lambda_1 = 0$, this yields zero-shot transfer of both learning rate and weight decay from proxy to target widths, removing per-width sweeps. We validate the rule on LLaMA-style Transformers and in a minimal synthetic setting, and we provide a simple diagnostic, matching top singular values, to check sublayer-gain invariance. Our results extend $\mu$P beyond the near-init regime by explicitly controlling steady-state scales set by the optimizer, offering a practical recipe for width-robust hyperparameter transfer under AdamW.
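A minimal sketch of how the scaling rules quoted in this abstract could be wired into AdamW parameter groups. The helper name, the ndim-based split between matrix-like and vector-like parameters, and the proxy-width hyperparameter names are assumptions for illustration, not the paper's actual recipe (which may treat embeddings or output layers separately).

```python
import torch
from torch import nn, optim

def mup_adamw_param_groups(model: nn.Module, width: int, base_width: int,
                           base_lr_matrix: float, base_wd_matrix: float,
                           base_lr_vector: float):
    """Build AdamW parameter groups that transfer (lr, weight decay) from a
    proxy width to a target width, following the rules in the abstract:
    eta_2 ~ 1/d and lambda_2 ~ d for matrix-like parameters,
    eta_1 = Theta_d(1) and lambda_1 = 0 for vector-like parameters."""
    s = width / base_width                        # width multiplier d / d_proxy
    matrix_params, vector_params = [], []
    for p in model.parameters():
        (matrix_params if p.ndim >= 2 else vector_params).append(p)
    return [
        {"params": matrix_params,                 # hidden weight matrices
         "lr": base_lr_matrix / s,                # eta_2 proportional to d^-1
         "weight_decay": base_wd_matrix * s},     # lambda_2 proportional to d
        {"params": vector_params,                 # biases, norm gains, etc.
         "lr": base_lr_vector,                    # eta_1 = Theta_d(1)
         "weight_decay": 0.0},                    # lambda_1 = 0
    ]

# Usage: hyperparameters tuned at a proxy width are reused at a larger width.
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024))
groups = mup_adamw_param_groups(model, width=4096, base_width=1024,
                                base_lr_matrix=3e-3, base_wd_matrix=0.1,
                                base_lr_vector=3e-3)
opt = optim.AdamW(groups)
```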
Observation of the open-charm tetraquark state $T_{cs 0}^{*}(2870)^0$ in the $B^- \rightarrow D^- D^0 K_\mathrm{S}^0$ decay
An amplitude analysis of $B^- \rightarrow D^- D^0 K_S^0$ decays is performed using proton-proton collision data, corresponding to an integrated luminosity of 9 fb$^{-1}$, collected with the LHCb detector at center-of-mass energies of 7, 8, and 13 TeV. A resonant structure of spin-parity $0^+$ is observed in the $D^0 K_S^0$ invariant-mass spectrum with a significance of 5.3$\sigma$. The mass and width of the state, modeled with a Breit-Wigner lineshape, are determined to be $2883 \pm 11 \pm 6$ MeV$/c^2$ and $87^{+22}_{-47} \pm 6$ MeV, respectively, where the first uncertainties are statistical and the second systematic. These properties and the quark content are consistent with those of the open-charm tetraquark state $T_{cs0}^{*}(2870)^0$ observed previously in the $D^+ K^-$ final state of the $B^- \rightarrow D^- D^+ K^-$ decay. This result confirms the existence of the $T_{cs0}^{*}(2870)^0$ state in a new decay mode. The $T_{cs1}^{*}(2900)^0$ state, reported in the $B^- \rightarrow D^- D^+ K^-$ decay, is also searched for in the $D^0 K_S^0$ invariant-mass spectrum of the $B^- \rightarrow D^- D^0 K_S^0$ decay, without finding evidence for it.
Some Theoretical Results on Layerwise Effective Dimension Oscillations in Finite Width ReLU Networks
We analyze the layerwise effective dimension (rank of the feature matrix) in fully-connected ReLU networks of finite width. Specifically, for a fixed batch of $m$ inputs and random Gaussian weights, we derive closed-form expressions for the expected rank of the $m \times n$ hidden activation matrices. Our main result shows that $\mathbb{E}[\mathrm{EDim}(\ell)] = m\left[1-(1-2/\pi)^{\ell}\right] + O(e^{-c m})$, so that the rank deficit decays geometrically with ratio $1 - 2/\pi \approx 0.3634$. We also prove a sub-Gaussian concentration bound, and identify the "revival" depths at which the expected rank attains local maxima. In particular, these peaks occur at depths $\ell_k^* \approx (k+1/2)\pi/\log(1/\rho)$ with height $\approx (1-e^{-\pi/2})\,m \approx 0.79\,m$. We further show that this oscillatory rank behavior is a finite-width phenomenon: under orthogonal weight initialization or strong negative-slope leaky-ReLU, the rank remains (nearly) full. These results provide a precise characterization of how random ReLU layers alternately collapse and partially revive the subspace of input variations, adding nuance to prior work on expressivity of deep networks.
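A small numerical sketch that only tabulates the leading closed-form expression quoted above, $\mathbb{E}[\mathrm{EDim}(\ell)] = m[1-(1-2/\pi)^{\ell}]$, ignoring the exponentially small correction. The revival-depth formula involves a parameter $\rho$ not defined in the abstract, so it is not evaluated here.

```python
import numpy as np

def expected_effective_dim(m: int, depth: int) -> float:
    """Leading-order expected rank of the m x n hidden activation matrix at a
    given depth, per the closed form in the abstract:
    E[EDim(l)] = m * (1 - (1 - 2/pi)**l), dropping the O(e^{-c m}) term."""
    rho = 1.0 - 2.0 / np.pi          # rank-deficit decay ratio, ~0.3634
    return m * (1.0 - rho ** depth)

m = 64
for depth in (1, 2, 3, 5, 10):
    print(f"depth {depth:2d}: expected effective dim ~ "
          f"{expected_effective_dim(m, depth):6.2f} / {m}")
```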
On Calibration of Modern Neural Networks
Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
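A minimal sketch of temperature scaling as described in this abstract: a single scalar $T$ is fit on held-out logits by minimizing the negative log-likelihood, then applied to test logits. The grid-search fitting procedure and array names are illustrative assumptions; the original work fits $T$ with a standard optimizer.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    """Average negative log-likelihood of labels under temperature-scaled logits."""
    p = softmax(logits / T)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(val_logits, val_labels, grid=np.linspace(0.5, 5.0, 451)):
    """Pick the single scalar T that minimizes validation NLL (simple grid search)."""
    losses = [nll(val_logits, val_labels, T) for T in grid]
    return float(grid[int(np.argmin(losses))])

# Usage sketch: `val_logits`, `val_labels` come from a held-out set.
# T = fit_temperature(val_logits, val_labels)
# calibrated_probs = softmax(test_logits / T)   # argmax (accuracy) is unchanged
```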
Baryon-number-violating nucleon decays in SMEFT extended with a light scalar
New light particles have received considerable attention in recent years. Baryon-number-violating (BNV) nucleon decays involving such light particles are able to provide stringent constraints. They exhibit distinctive experimental signatures that merit thorough investigation. We systematically investigate BNV nucleon decay with a light scalar in an effective field theory framework. Within this framework, we set stringent bounds on BNV operators using available experimental data and predict the occurrence of several BNV three-body nucleon decays. We further study contributions to dinucleon to dilepton transitions in a nucleus mediated by the scalar, which complements single nucleon decay. Finally, we provide three ultraviolet-complete models that can generate different subsets of BNV operators in leading order. Our theoretical framework will facilitate experimental searches for those exotic nucleon decays.
How Learning Rate Decay Wastes Your Best Data in Curriculum-Based LLM Pretraining
Due to the scarcity of high-quality data, large language models (LLMs) are often trained on mixtures of data with varying quality levels, even after sophisticated data curation. A natural approach to better leverage high-quality data is curriculum-based pretraining, where the model is trained on data sorted in ascending order of quality as determined by a quality metric. However, prior studies have reported limited improvements from such curriculum-based pretraining strategies. This work identifies a critical factor constraining these methods: the incompatibility between the ascending data quality order and the decaying learning rate (LR) schedule. We find that while curriculum-based training substantially outperforms random shuffling when using a constant LR, its advantage diminishes under standard LR decay schedules. Our experiments show this incompatibility can be mitigated by two simple strategies: (1) employing a more moderate LR decay schedule, where the final LR is only moderately smaller than the peak LR, and (2) replacing LR decay with model averaging, i.e., computing a weighted average of the final few checkpoints. By combining these strategies, we improve the average score on a suite of standard benchmarks by 1.64% over random shuffling, without additional data refinement. Validated on 1.5B-parameter models trained over 30B tokens with various data-quality metrics, our findings call for a re-evaluation of curriculum-based LLM pretraining and underscore the potential of co-designing data curricula with optimization methods.
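A minimal sketch of the second mitigation strategy mentioned above, a weighted average of the final few checkpoints. The uniform default weights and the handling of non-floating-point buffers are assumptions; the paper's exact weighting scheme is not specified in the abstract.

```python
import torch

def average_checkpoints(state_dicts, weights=None):
    """Weighted average of the parameters in the final few checkpoints.
    `state_dicts` is a list of model.state_dict() snapshots taken near the
    end of training; `weights` defaults to a uniform average."""
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    avg = {}
    for key, ref in state_dicts[0].items():
        if torch.is_floating_point(ref):
            avg[key] = sum(w * sd[key] for w, sd in zip(weights, state_dicts))
        else:
            # integer buffers (e.g. step counters) are copied, not averaged
            avg[key] = ref.clone()
    return avg

# Usage: model.load_state_dict(average_checkpoints(last_k_state_dicts))
```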
Beyond Cosine Decay: On the effectiveness of Infinite Learning Rate Schedule for Continual Pre-training
The ever-growing availability of unlabeled data presents both opportunities and challenges for training artificial intelligence systems. While self-supervised learning (SSL) has emerged as a powerful paradigm for extracting meaningful representations from vast amounts of unlabeled data, existing methods still struggle to adapt to the non-stationary, non-IID nature of real-world data streams without forgetting previously learned knowledge. Recent works have adopted a repeated cosine annealing schedule for large-scale continual pre-training; however, these schedules (1) inherently cause forgetting during the re-warming phase and (2) have not been systematically compared to existing continual SSL methods. In this work, we systematically compare the widely used cosine schedule with the recently proposed infinite learning rate schedule and empirically find the latter to be a more effective alternative. Our extensive empirical evaluation across diverse image and language datasets demonstrates that the infinite learning rate schedule consistently enhances continual pre-training performance compared to a repeated cosine decay without being restricted to a fixed iteration budget. For instance, in a small-scale MAE pre-training setup, it outperforms several strong baselines from the literature. We then scale up our experiments to larger MAE pre-training and autoregressive language model pre-training. Our results show that the infinite learning rate schedule remains effective at scale, surpassing repeated cosine decay for both MAE pre-training and zero-shot LM benchmarks.
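A sketch of one common parameterization of an "infinite" learning rate schedule: linear warmup to a plateau that does not depend on a total-iteration budget, with an optional short cooldown before evaluation or checkpointing. The exact functional form used in the paper may differ; this is an assumption for illustration only.

```python
def infinite_lr(step, peak_lr, warmup_steps, cooldown_start=None,
                cooldown_steps=0, min_lr_frac=0.1):
    """'Infinite' schedule sketch: linear warmup to peak_lr, then a constant
    plateau with no fixed end point. An optional linear cooldown (e.g. before
    evaluation or a checkpoint) anneals toward min_lr_frac * peak_lr; training
    can resume from the plateau afterwards."""
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    if cooldown_start is not None and step >= cooldown_start:
        t = min(1.0, (step - cooldown_start) / max(1, cooldown_steps))
        return peak_lr * (1.0 - t * (1.0 - min_lr_frac))
    return peak_lr
```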
On the infinite-depth limit of finite-width neural networks
In this paper, we study the infinite-depth limit of finite-width residual neural networks with random Gaussian weights. With proper scaling, we show that by fixing the width and taking the depth to infinity, the pre-activations converge in distribution to a zero-drift diffusion process. Unlike the infinite-width limit, where the pre-activations converge weakly to a Gaussian random variable, we show that the infinite-depth limit yields different distributions depending on the choice of the activation function. We document two cases in which these distributions have (different) closed-form expressions. We further show an intriguing change-of-regime phenomenon of the post-activation norms when the width increases from 3 to 4. Lastly, we study the sequential infinite-depth-then-infinite-width limit and compare it with the more commonly studied infinite-width-then-infinite-depth limit.
First Light And Reionisation Epoch Simulations (FLARES) IV: The size evolution of galaxies at $z\geq5$
We present the intrinsic and observed sizes of galaxies at $z\geq5$ in the First Light And Reionisation Epoch Simulations (FLARES). We employ the large effective volume of FLARES to produce a sizeable sample of high redshift galaxies with intrinsic and observed luminosities and half-light radii in a range of rest-frame UV and visual photometric bands. This sample contains a significant number of intrinsically ultra-compact galaxies in the far-UV (1500 Å), leading to a negative intrinsic far-UV size-luminosity relation. However, after the inclusion of the effects of dust these same compact galaxies exhibit observed sizes that are as much as 50 times larger than those measured from the intrinsic emission, and broadly agree with a range of observational samples. This increase in size is driven by the concentration of dust in the core of galaxies, heavily attenuating the intrinsically brightest regions. At fixed luminosity we find a galaxy size redshift evolution with a slope of $m=1.21$-$1.87$ depending on the luminosity sample in question, and we demonstrate the wavelength dependence of the size-luminosity relation which will soon be probed by the Webb Space Telescope.
The redshift dependence of the inferred H_0 in a local void solution to the Hubble tension
Galaxy number counts suggest that we are located within the Gpc-scale KBC void. The Hubble tension might arise due to gravitationally driven outflow from this void, as explored in detail by Haslbauer et al. We explore how the impact of the void on redshift decays at large distances. We define H_0(z) as the present expansion rate H_0 that would be inferred from observations in a narrow redshift range centred on z. We find H_0(z) in three different ways, all of which give similar results. We then compare these results with the observations of Jia et al., who were careful to minimise the impact of correlations between H_0 measurements from data in different redshift bins. We find reasonable agreement with their results for the Gaussian and Exponential void underdensity profiles, although the agreement is less good in the Maxwell-Boltzmann case. The latter profile causes severe disagreement with the observed bulk flow curve at z < 0.1 (Mazurenko et al.), so the tension with higher redshift data further highlights that the deepest part of the KBC void is probably near its centre. The observations show a decline of H_0(z) towards the background Planck value in qualitative agreement with the considered models, even if we use a larger void. The good overall agreement with the recent results of Jia et al. suggests that the local supervoid evident from the galaxy luminosity density out to a Gpc might also solve the Hubble tension while retaining a low background H_0 consistent with Planck data, assuming enhanced structure formation on >100 Mpc scales.
Visible and Invisible Pseudoscalar Meson Decays from Anomaly Sum Rules
The decays of pseudoscalar mesons to real and virtual photons, as well as to neutrino-antineutrino pairs, are considered in the framework of the dispersive method based on Anomaly Sum Rules. The contribution of the singlet channel, involving the new non-perturbative gluon form factor of the virtual photon $B(q^2)$, is systematically taken into account. A detailed analysis of its dependence on the photon virtuality $q^2$, relying on the available data for meson transition form factors, is performed. It is shown that $B$ has quite a nontrivial structure at $q^2 \sim 1\,\mathrm{GeV}^2$, which may be a signal of the existence of a pseudoscalar glueball with a mass of about 1.5-2 GeV. The calculated decay rate to $\nu\bar\nu$ pairs is compatible with the 1982 result of Arnellos, Marciano and Parsa when the pion decay is considered neglecting mixing effects. Accounting for these effects, however, enhances the pion branching ratio by a factor of 3, while that for the $\eta$ decay is larger by several orders of magnitude. It is stressed that the dependence on the pair invariant mass is entirely defined by QCD and coincides with that of the meson transition form factor. The role of the obtained results for the physics at the HHaS detector at HIAF is discussed.
Bubbles in a box: Eliminating edge nucleation in cold-atom simulators of vacuum decay
The decay of metastable 'false vacuum' states via bubble nucleation plays a crucial role in many cosmological scenarios. Cold-atom analog experiments will soon provide the first empirical probes of this process, with potentially far-reaching implications for early-Universe cosmology and high-energy physics. However, an inevitable difference between these analog systems and the early Universe is that the former have a boundary. We show, using a combination of Euclidean calculations and real-time lattice simulations, that these boundaries generically cause rapid bubble nucleation on the edge of the experiment, obscuring the bulk nucleation that is relevant for cosmology. We demonstrate that implementing a high-density 'trench' region at the boundary completely eliminates this problem, and recovers the desired cosmological behavior. Our findings are relevant for ongoing efforts to probe vacuum decay in the laboratory, providing a practical solution to a key experimental obstacle.
The Mu3e Experiment: Status and Short-Term Plans
Mu3e is an experiment currently under construction at the Paul Scherrer Institute in Switzerland, designed to search for the Lepton Flavor Violating (LFV) decay $\mu^+ \rightarrow e^+e^-e^+$. In extensions of the Standard Model (SM) that account for neutrino masses, this decay is theoretically allowed but occurs only through extremely rare loop processes, with a predicted branching ratio of approximately $O(10^{-54})$. Such a small probability implies that any observation of this decay would provide clear evidence for physics beyond the SM. The Mu3e experiment aims to probe the $\mu^+ \rightarrow e^+e^-e^+$ decay with a sensitivity of approximately $O(10^{-15})$ in its Phase-1 and plans to achieve a sensitivity of $O(10^{-16})$ after future upgrades. To reach its ambitious Phase-1 goals, Mu3e is going to use the most intense continuous muon beam in the world, generating $10^{8}$ muon stops per second in the target placed at the center of the Mu3e detector. Mu3e will use three main technologies for particle detection. The tracking will be done with ultra-thin (50-70 $\mu$m) pixel detectors based on MuPix11 sensors. These are high-voltage monolithic active pixel sensors (HV-MAPS) with a $\sim 23\,\mu$m spatial resolution. The timing will be done with scintillating fibres ($\sim 250$ ps) and tiles ($\sim 40$ ps), coupled to silicon photomultipliers and read out by MuTRiG3 ASICs. A triggerless DAQ system based on FPGAs will collect data from the detectors, which will then undergo reconstruction in a GPU filter farm. The assembly of the detectors has started, with a detector commissioning beam time planned for 2025. This document reports on the status of the construction, installation, and data-taking plans for the near future.
Observation of a new boson at a mass of 125 GeV with the CMS experiment at the LHC
Results are presented from searches for the standard model Higgs boson in proton-proton collisions at sqrt(s) = 7 and 8 TeV in the Compact Muon Solenoid experiment at the LHC, using data samples corresponding to integrated luminosities of up to 5.1 inverse femtobarns at 7 TeV and 5.3 inverse femtobarns at 8 TeV. The search is performed in five decay modes: gamma gamma, ZZ, WW, tau tau, and b b-bar. An excess of events is observed above the expected background, with a local significance of 5.0 standard deviations, at a mass near 125 GeV, signalling the production of a new particle. The expected significance for a standard model Higgs boson of that mass is 5.8 standard deviations. The excess is most significant in the two decay modes with the best mass resolution, gamma gamma and ZZ; a fit to these signals gives a mass of 125.3 +/- 0.4 (stat.) +/- 0.5 (syst.) GeV. The decay to two photons indicates that the new particle is a boson with spin different from one.
A Hybrid ANN-SNN Architecture for Low-Power and Low-Latency Visual Perception
Spiking Neural Networks (SNN) are a class of bio-inspired neural networks that promise to bring low-power and low-latency inference to edge devices through asynchronous and sparse processing. However, being temporal models, SNNs depend heavily on expressive states to generate predictions on par with classical artificial neural networks (ANNs). These states converge only after long transient periods, and quickly decay without input data, leading to higher latency, power consumption, and lower accuracy. This work addresses this issue by initializing the state with an auxiliary ANN running at a low rate. The SNN then uses the state to generate predictions with high temporal resolution until the next initialization phase. Our hybrid ANN-SNN model thus combines the best of both worlds: It does not suffer from long state transients and state decay thanks to the ANN, and can generate predictions with high temporal resolution, low latency, and low power thanks to the SNN. We show for the task of event-based 2D and 3D human pose estimation that our method consumes 88% less power with only a 4% decrease in performance compared to its fully ANN counterparts when run at the same inference rate. Moreover, when compared to SNNs, our method achieves a 74% lower error. This research thus provides a new understanding of how ANNs and SNNs can be used to maximize their respective benefits.
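A schematic sketch of the hybrid inference loop described above: a slow ANN periodically re-initializes the SNN state, and the SNN then produces high-rate predictions between initializations. The function names, the event-chunk interface, and the fixed re-initialization period are placeholders, not the paper's actual architecture or API.

```python
def hybrid_inference(event_chunks, ann, snn, init_every=10):
    """Hypothetical hybrid ANN-SNN loop: the ANN runs at a low rate to set the
    SNN state, avoiding long state transients and state decay; the SNN then
    generates predictions at high temporal resolution from that state."""
    state = None
    outputs = []
    for t, chunk in enumerate(event_chunks):
        if t % init_every == 0:
            state = ann(chunk)             # expensive, low-rate state init
        pred, state = snn(chunk, state)    # cheap, high-rate spiking updates
        outputs.append(pred)
    return outputs
```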
Faith and Fate: Limits of Transformers on Compositionality
Transformer large language models (LLMs) have sparked admiration for their exceptional performance on tasks that demand intricate multi-step reasoning. Yet, these models simultaneously show failures on surprisingly trivial problems. This begs the question: Are these errors incidental, or do they signal more substantial limitations? In an attempt to demystify Transformers, we investigate the limits of these models across three representative compositional tasks -- multi-digit multiplication, logic grid puzzles, and a classic dynamic programming problem. These tasks require breaking problems down into sub-steps and synthesizing these steps into a precise answer. We formulate compositional tasks as computation graphs to systematically quantify the level of complexity, and break down reasoning steps into intermediate sub-procedures. Our empirical findings suggest that Transformers solve compositional tasks by reducing multi-step compositional reasoning into linearized subgraph matching, without necessarily developing systematic problem-solving skills. To round off our empirical study, we provide theoretical arguments on abstract multi-step reasoning problems that highlight how Transformers' performance will rapidly decay with increased task complexity.
Performance Limits of Network Densification
Network densification is a promising cellular deployment technique that leverages spatial reuse to enhance coverage and throughput. Recent work has identified that at some point ultra-densification will no longer be able to deliver significant throughput gains. In this paper, we provide a unified treatment of the performance limits of network densification. We develop a general framework, which incorporates multi-slope pathloss and the entire space of shadowing and small scale fading distributions, under strongest cell association in a Poisson field of interferers. First, our results show that there are three scaling regimes for the downlink signal-to-interference-plus-noise ratio (SINR), coverage probability, and average per-user rate. Specifically, depending on the near-field pathloss and the fading distribution, the user performance of 5G ultra dense networks (UDNs) would either monotonically increase, saturate, or decay with increasing network density. Second, we show that network performance in terms of coverage density and area spectral efficiency can scale with the network density better than the user performance does. Furthermore, we provide ordering results for both coverage and average rate as a means to qualitatively compare different transmission techniques that may exhibit the same performance scaling. Our results, which are verified by simulations, provide succinct insights and valuable design guidelines for the deployment of 5G UDNs.
First Light And Reionisation Epoch Simulations (FLARES) XVI: Size Evolution of Massive Dusty Galaxies at Cosmic Dawn from UV to IR
We use the First Light And Reionisation Epoch Simulations (FLARES) to study the evolution of the rest-frame ultraviolet (UV) and far-infrared (FIR) sizes for a statistical sample of massive ($\gtrsim 10^{9}\,M_{\odot}$) high redshift galaxies ($z \in [5,10]$). Galaxies are post-processed using the SKIRT radiative transfer code, to self-consistently obtain the full spectral energy distribution and surface brightness distribution. We create mock observations of the galaxies for the Near Infrared Camera (NIRCam) to study the rest-frame UV 1500 Å morphology. We also generate mock rest-frame FIR (50 $\mu$m) photometry and mock ALMA (158 $\mu$m) observations (0.01"-0.03" and $\approx$0.3" angular resolution) to study the dust continuum. We find the effect of dust on observed sizes reduces with increasing wavelength from the UV to optical ($\sim$0.6 times the UV at 0.4 $\mu$m), with no evolution in FIR sizes. Observed sizes vary within 0.4-1.2 times the intrinsic sizes at different signal-to-noise ratios (SNR = 5-20) across redshifts. The effect of PSF and noise makes bright structures prominent, whereas fainter regions blend with noise, leading to an underestimation (factor of 0.4-0.8) of sizes at SNR = 5. At SNR = 15-20, the underestimation reduces (factor of 0.6-0.9) at z = 5-8, but due to the PSF, at z = 9-10 bright cores are dominant, resulting in an overestimation (factor of 1.0-1.2). For ALMA, low-resolution sizes are affected by noise, which acts as extended emission. The size evolution in the UV broadly agrees with current observational samples and other simulations. This work is one of the first to analyse the panchromatic sizes of a statistically significant sample of simulated high-redshift galaxies, complementing a growing body of research highlighting the importance of conducting an equivalent comparison between observed galaxies and their simulated counterparts in the early Universe.
Analyzing Data Quality and Decay in Mega-Constellations: A Physics-Informed Machine Learning Approach
In the era of mega-constellations, the need for accurate and publicly available information has become fundamental for satellite operators to guarantee the safety of spacecraft and of the Low Earth Orbit (LEO) space environment. This study critically evaluates the accuracy and reliability of publicly available ephemeris data for a LEO mega-constellation, Starlink. The goal of this work is twofold: (i) to compare and analyze the quality of the data against high-precision numerical propagation, and (ii) to leverage Physics-Informed Machine Learning to extract relevant satellite quantities, such as non-conservative forces, during the decay process. By analyzing two months of real orbital data for approximately 1500 Starlink satellites, we identify discrepancies between high-precision numerical algorithms and the published ephemerides, recognizing the use of simplified dynamics at fixed thresholds, planned maneuvers, and limitations in uncertainty propagation. Furthermore, we compare data obtained from multiple sources to track and analyze deorbiting satellites over the same period. Empirically, we extract the acceleration profile of satellites during deorbiting and provide insights relating to the effects of non-conservative forces during reentry. For non-deorbiting satellites, the position Root Mean Square Error (RMSE) was approximately 300 m, while for deorbiting satellites it increased to about 600 m. Through this in-depth analysis, we highlight potential limitations in publicly available data for accurate and robust Space Situational Awareness (SSA), and, importantly, we propose a data-driven model of satellite decay in mega-constellations.
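A minimal sketch of the position-error metric quoted above (RMSE between published ephemeris positions and a reference numerical propagation, sampled at common epochs). The array names and shapes are illustrative, not the study's actual pipeline.

```python
import numpy as np

def position_rmse(ephemeris_xyz, propagated_xyz):
    """Root-mean-square error of 3D position residuals between published
    ephemeris states and a high-precision propagated reference, both given
    as (N, 3) arrays in metres at the same epochs."""
    residuals = np.linalg.norm(ephemeris_xyz - propagated_xyz, axis=1)
    return float(np.sqrt(np.mean(residuals ** 2)))
```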
First Light And Reionisation Epoch Simulations (FLARES) XI: [OIII] emitting galaxies at 5<z<10
JWST has now made it possible to probe the rest-frame optical line emission of high-redshift galaxies extending to z~9, and potentially beyond. To aid in the interpretation of these emerging constraints, in this work we explore predictions for [OIII] emission in high-redshift galaxies using the First Light and Reionisation Epoch Simulations (FLARES). We produce predictions for the [OIII] luminosity function, its correlation with the UV luminosity, and the distribution of equivalent widths (EWs). We also explore how the [OIII] EW correlates with physical properties including specific star formation rate, metallicity, and dust attenuation. Our predictions are largely consistent with recent observational constraints on the luminosity function, average equivalent widths, and line ratios. However, they fail to reproduce the observed tail of high-EW sources and the number density of extreme line emitters. Possible explanations for these discrepancies include an additional source of ionising photons and/or greater stochasticity in star formation in the model, or photometric scatter and/or bias in the observations. With JWST now rapidly building larger samples and a wider range of emission lines, the answer to this remaining discrepancy should be available imminently.
Cautious Weight Decay
We introduce Cautious Weight Decay (CWD), a one-line, optimizer-agnostic modification that applies weight decay only to parameter coordinates whose signs align with the optimizer update. Unlike standard decoupled decay, which implicitly optimizes a regularized or constrained objective, CWD preserves the original loss and admits a bilevel interpretation: it induces sliding-mode behavior upon reaching the stationary manifold, allowing it to search for locally Pareto-optimal stationary points of the unmodified objective. In practice, CWD is a drop-in change for optimizers such as AdamW, Lion, and Muon, requiring no new hyperparameters or additional tuning. For language model pre-training and ImageNet classification, CWD consistently improves final loss and accuracy at million- to billion-parameter scales.
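A minimal NumPy sketch reading the abstract literally: decoupled weight decay is applied only on coordinates where the parameter's sign matches the sign of the optimizer update. Here "update" is taken to be the raw Adam direction before the negative step; the paper's exact masking convention and sign conventions may differ, and the function name is hypothetical.

```python
import numpy as np

def adamw_step_with_cwd(p, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
                        eps=1e-8, wd=0.1):
    """One Adam step with cautious weight decay: the decoupled decay term is
    applied only where sign(parameter) equals sign(optimizer update).
    Returns the updated (p, m, v)."""
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    update = m_hat / (np.sqrt(v_hat) + eps)        # Adam update direction
    mask = (np.sign(p) == np.sign(update))         # "aligned" coordinates only
    p = p - lr * update - lr * wd * p * mask       # masked decoupled decay
    return p, m, v
```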
