MSA: Memory Sparse Attention for Efficient End-to-End Memory Model Scaling to 100M Tokens Paper • 2603.23516 • Published Mar 6 • 46
Training Language Models via Neural Cellular Automata Paper • 2603.10055 • Published about 1 month ago • 7
SimpleGPT: Improving GPT via A Simple Normalization Strategy Paper • 2602.01212 • Published Feb 1 • 3 • 6
Empty Shelves or Lost Keys? Recall Is the Bottleneck for Parametric Factuality Paper • 2602.14080 • Published Feb 15 • 21
On the Mechanism and Dynamics of Modular Addition: Fourier Features, Lottery Ticket, and Grokking Paper • 2602.16849 • Published Feb 18 • 7
2Mamba2Furious: Linear in Complexity, Competitive in Accuracy Paper • 2602.17363 • Published Feb 19 • 8
Preliminary sonification of ENSO using traditional Javanese gamelan scales Paper • 2602.14560 • Published Feb 16 • 1
On Surprising Effectiveness of Masking Updates in Adaptive Optimizers Paper • 2602.15322 • Published Feb 17 • 10
DICE: Diffusion Large Language Models Excel at Generating CUDA Kernels Paper • 2602.11715 • Published Feb 12 • 6 • 3
Pretraining A Large Language Model using Distributed GPUs: A Memory-Efficient Decentralized Paradigm Paper • 2602.11543 • Published Feb 12 • 6 • 4
LoopFormer: Elastic-Depth Looped Transformers for Latent Reasoning via Shortcut Modulation Paper • 2602.11451 • Published Feb 11 • 16
NanoQuant: Efficient Sub-1-Bit Quantization of Large Language Models Paper • 2602.06694 • Published Feb 6 • 15 • 5