| rank (int64) | name (string) | times_trended (int64) | best_rank (int64) | avg_rank (float64) | median_rank (int64) | publish_date (string) | max_upvotes (int64) | max_github_stars (int64) | arxiv_link (string) |
|---|---|---|---|---|---|---|---|---|---|
| 301 | GeoVista: Web-Augmented Agentic Visual Reasoning for Geolocalization | 11 | 9 | 27.09 | 28 | Nov 19, 2025 | 88 | 170 | https://arxiv.org/abs/2511.15705 |
| 302 | Scrub It Out! Erasing Sensitive Memorization in Code Language Models via Machine Unlearning | 10 | 16 | 24.8 | 19 | Sep 17, 2025 | 18 | 24 | https://arxiv.org/abs/2509.13755 |
| 303 | NaTex: Seamless Texture Generation as Latent Color Diffusion | 9 | 10 | 21.89 | 21 | Nov 20, 2025 | 15 | 85 | https://arxiv.org/abs/2511.16317 |
| 304 | StealthAttack: Robust 3D Gaussian Splatting Poisoning via Density-Guided Illusions | 9 | 2 | 22.11 | 23 | Oct 2, 2025 | 55 | 51 | https://arxiv.org/abs/2510.02314 |
| 305 | Human-Agent Collaborative Paper-to-Page Crafting for Under $0.1 | 9 | 13 | 22.56 | 19 | Oct 22, 2025 | 64 | 98 | https://arxiv.org/abs/2510.19600 |
| 306 | F1: A Vision-Language-Action Model Bridging Understanding and Generation to Actions | 10 | 15 | 25.8 | 23 | Sep 8, 2025 | 26 | 64 | https://arxiv.org/abs/2509.06951 |
| 307 | ROSE: Remove Objects with Side Effects in Videos | 12 | 14 | 30.67 | 33 | Aug 26, 2025 | 4 | 32 | https://arxiv.org/abs/2508.18633 |
| 308 | VideoCanvas: Unified Video Completion from Arbitrary Spatiotemporal Patches via In-Context Conditioning | 7 | 2 | 16.43 | 16 | Oct 9, 2025 | 59 | 50 | https://arxiv.org/abs/2510.08555 |
| 309 | EditScore: Unlocking Online RL for Image Editing via High-Fidelity Reward Modeling | 9 | 16 | 24.33 | 21 | Sep 28, 2025 | 26 | 60 | https://arxiv.org/abs/2509.23909 |
| 310 | Reinforced Visual Perception with Tools | 9 | 18 | 24.44 | 22 | Sep 1, 2025 | 27 | 28 | https://arxiv.org/abs/2509.01656 |
| 311 | Is Diversity All You Need for Scalable Robotic Manipulation? | 18 | 30 | 37.78 | 37 | Jul 8, 2025 | 20 | 2,460 | https://arxiv.org/abs/2507.06219 |
| 312 | RLFR: Extending Reinforcement Learning for LLMs with Flow Environment | 7 | 5 | 17 | 14 | Oct 11, 2025 | 32 | 34 | https://arxiv.org/abs/2510.10201 |
| 313 | BAPO: Stabilizing Off-Policy Reinforcement Learning for LLMs via Balanced Policy Optimization with Adaptive Clipping | 9 | 15 | 24.56 | 23 | Oct 21, 2025 | 77 | 62 | https://arxiv.org/abs/2510.18927 |
| 314 | GenoMAS: A Multi-Agent Framework for Scientific Discovery via Code-Driven Gene Expression Analysis | 11 | 25 | 29.45 | 26 | Jul 28, 2025 | 3 | 93 | https://arxiv.org/abs/2507.21035 |
| 315 | Latent Zoning Network: A Unified Principle for Generative Modeling, Representation Learning, and Classification | 9 | 12 | 24.89 | 19 | Sep 19, 2025 | 43 | 39 | https://arxiv.org/abs/2509.15591 |
| 316 | Grasp Any Region: Towards Precise, Contextual Pixel Understanding for Multimodal LLMs | 10 | 13 | 27.6 | 27 | Oct 21, 2025 | 33 | 57 | https://arxiv.org/abs/2510.18876 |
| 317 | FantasyTalking2: Timestep-Layer Adaptive Preference Optimization for Audio-Driven Portrait Animation | 13 | 22 | 33.15 | 31 | Aug 15, 2025 | 8 | 21 | https://arxiv.org/abs/2508.11255 |
| 318 | OpenVision: A Fully-Open, Cost-Effective Family of Advanced Vision Encoders for Multimodal Learning | 17 | 26 | 37.47 | 39 | May 7, 2025 | 28 | 351 | https://arxiv.org/abs/2505.04601 |
| 319 | Video-LMM Post-Training: A Deep Dive into Video Reasoning with Large Multimodal Models | 10 | 10 | 28.2 | 27 | Oct 6, 2025 | 42 | 89 | https://arxiv.org/abs/2510.05034 |
| 320 | AWorld: Dynamic Multi-Agent System with Stable Maneuvering for Robust GAIA Problem Solving | 31 | 34 | 43.71 | 45 | Aug 13, 2025 | 32 | 694 | https://arxiv.org/abs/2508.09889 |
| 321 | Video-Thinker: Sparking "Thinking with Videos" via Reinforcement Learning | 10 | 13 | 28.5 | 22 | Oct 27, 2025 | 81 | 95 | https://arxiv.org/abs/2510.23473 |
| 322 | Cook and Clean Together: Teaching Embodied Agents for Parallel Task Execution | 11 | 19 | 30.64 | 28 | Nov 24, 2025 | 7 | 270 | https://arxiv.org/abs/2511.19430 |
| 323 | BEAVR: Bimanual, multi-Embodiment, Accessible, Virtual Reality Teleoperation System for Robots | 11 | 19 | 30.82 | 31 | Aug 13, 2025 | 0 | 50 | https://arxiv.org/abs/2508.09606 |
| 324 | NaViL: Rethinking Scaling Properties of Native Multimodal Large Language Models under Data Constraints | 8 | 19 | 23.38 | 23 | Oct 9, 2025 | 17 | 71 | https://arxiv.org/abs/2510.08565 |
| 325 | Human3R: Everyone Everywhere All at Once | 9 | 22 | 26.44 | 26 | Oct 7, 2025 | 8 | 304 | https://arxiv.org/abs/2510.06219 |
| 326 | BRIDGE - Building Reinforcement-Learning Depth-to-Image Data Generation Engine for Monocular Depth Estimation | 17 | 33 | 38.06 | 37 | Sep 29, 2025 | 13 | 106 | https://arxiv.org/abs/2509.25077 |
| 327 | THOR: Tool-Integrated Hierarchical Optimization via RL for Mathematical Reasoning | 10 | 16 | 29.1 | 28 | Sep 17, 2025 | 11 | 17 | https://arxiv.org/abs/2509.13761 |
| 328 | T2I-ReasonBench: Benchmarking Reasoning-Informed Text-to-Image Generation | 8 | 11 | 23.75 | 17 | Aug 24, 2025 | 25 | 24 | https://arxiv.org/abs/2508.17472 |
| 329 | SPATIALGEN: Layout-guided 3D Indoor Scene Generation | 10 | 15 | 29.5 | 30 | Sep 18, 2025 | 22 | 255 | https://arxiv.org/abs/2509.14981 |
| 330 | G^2VLM: Geometry Grounded Vision Language Model with Unified 3D Reconstruction and Spatial Reasoning | 10 | 22 | 29.5 | 27 | Nov 26, 2025 | 8 | 151 | https://arxiv.org/abs/2511.21688 |
| 331 | CogVLA: Cognition-Aligned Vision-Language-Action Model via Instruction-Driven Routing & Sparsification | 18 | 31 | 39.22 | 39 | Aug 28, 2025 | 8 | 56 | https://arxiv.org/abs/2508.21046 |
| 332 | DyPE: Dynamic Position Extrapolation for Ultra High Resolution Diffusion | 14 | 26 | 35.86 | 31 | Oct 23, 2025 | 33 | 248 | https://arxiv.org/abs/2510.20766 |
| 333 | Hunyuan-MT Technical Report | 14 | 26 | 36 | 35 | Sep 5, 2025 | 13 | 517 | https://arxiv.org/abs/2509.05209 |
| 334 | CrossOver: 3D Scene Cross-Modal Alignment | 8 | 4 | 24.75 | 22 | Feb 20, 2025 | 2 | 204 | https://arxiv.org/abs/2502.15011 |
| 335 | FullPart: Generating each 3D Part at Full Resolution | 8 | 15 | 25.12 | 22 | Oct 30, 2025 | 4 | 57 | https://arxiv.org/abs/2510.26140 |
| 336 | Beyond Pass@1: Self-Play with Variational Problem Synthesis Sustains RLVR | 8 | 14 | 25.25 | 21 | Aug 19, 2025 | 109 | 26 | https://arxiv.org/abs/2508.14029 |
| 337 | RegionE: Adaptive Region-Aware Generation for Efficient Image Editing | 9 | 14 | 28.22 | 24 | Oct 29, 2025 | 24 | 46 | https://arxiv.org/abs/2510.25590 |
| 338 | pi-Flow: Policy-Based Few-Step Generation via Imitation Distillation | 13 | 26 | 35.38 | 33 | Oct 16, 2025 | 7 | 97 | https://arxiv.org/abs/2510.14974 |
| 339 | Reinforcement Learning Foundations for Deep Research Systems: A Survey | 8 | 15 | 25.88 | 24 | Sep 8, 2025 | 25 | 10 | https://arxiv.org/abs/2509.06733 |
| 340 | Efficient Part-level 3D Object Generation via Dual Volume Packing | 13 | 24 | 35.62 | 36 | Jun 11, 2025 | 8 | 701 | https://arxiv.org/abs/2506.09980 |
| 341 | 3D Gaussian Splatting for Real-Time Radiance Field Rendering | 41 | 41 | 46.12 | 46 | Aug 8, 2023 | 192 | 19,600 | https://arxiv.org/abs/2308.04079 |
| 342 | A Style is Worth One Code: Unlocking Code-to-Style Image Generation with Discrete Style Space | 9 | 19 | 29.67 | 33 | Nov 13, 2025 | 53 | 122 | https://arxiv.org/abs/2511.10555 |
| 343 | A Vision-Language-Action-Critic Model for Robotic Real-World Reinforcement Learning | 11 | 21 | 33.64 | 33 | Sep 19, 2025 | 16 | 145 | https://arxiv.org/abs/2509.15937 |
| 344 | LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control | 29 | 39 | 44.48 | 44 | Jul 3, 2024 | 3 | 16,900 | https://arxiv.org/abs/2407.03168 |
| 345 | MMR1: Enhancing Multimodal Reasoning with Variance-Aware Sampling and Open Resources | 11 | 24 | 33.91 | 31 | Sep 25, 2025 | 90 | 190 | https://arxiv.org/abs/2509.21268 |
| 346 | Time Is a Feature: Exploiting Temporal Dynamics in Diffusion Language Models | 13 | 25 | 36.62 | 34 | Aug 12, 2025 | 31 | 38 | https://arxiv.org/abs/2508.09138 |
| 347 | Yume: An Interactive World Generation Model | 4 | 3 | 5.25 | 4 | Jul 23, 2025 | 69 | 185 | https://arxiv.org/abs/2507.17744 |
| 348 | Fast-dLLM v2: Efficient Block-Diffusion LLM | 7 | 7 | 25 | 22 | Sep 30, 2025 | 40 | 537 | https://arxiv.org/abs/2509.26328 |
| 349 | DeepScientist: Advancing Frontier-Pushing Scientific Findings Progressively | 11 | 26 | 34.64 | 32 | Sep 30, 2025 | 16 | 119 | https://arxiv.org/abs/2509.26603 |
| 350 | GigaBrain-0: A World Model-Powered Vision-Language-Action Model | 13 | 30 | 37.15 | 36 | Oct 22, 2025 | 46 | 218 | https://arxiv.org/abs/2510.19430 |
| 351 | Visual Jigsaw Post-Training Improves MLLMs | 8 | 20 | 28.62 | 26 | Sep 29, 2025 | 34 | 29 | https://arxiv.org/abs/2509.25190 |
| 352 | GenExam: A Multidisciplinary Text-to-Image Exam | 8 | 17 | 28.75 | 28 | Sep 17, 2025 | 16 | 17 | https://arxiv.org/abs/2509.14232 |
| 353 | CapRL: Stimulating Dense Image Caption Capabilities via Reinforcement Learning | 8 | 11 | 29.12 | 28 | Sep 26, 2025 | 30 | 62 | https://arxiv.org/abs/2509.22647 |
| 354 | InternSVG: Towards Unified SVG Tasks with Multimodal Large Language Models | 6 | 10 | 21.83 | 25 | Oct 13, 2025 | 31 | 54 | https://arxiv.org/abs/2510.11341 |
| 355 | UniMoE-Audio: Unified Speech and Music Generation with Dynamic-Capacity MoE | 12 | 32 | 36.5 | 35 | Oct 15, 2025 | 62 | 1,010 | https://arxiv.org/abs/2510.13344 |
| 356 | VITA-E: Natural Embodied Interaction with Concurrent Seeing, Hearing, Speaking, and Acting | 8 | 20 | 29.5 | 25 | Oct 21, 2025 | 41 | 120 | https://arxiv.org/abs/2510.21817 |
| 357 | EdgeTAM: On-Device Track Anything Model | 16 | 31 | 40.38 | 40 | Jan 13, 2025 | 1 | 757 | https://arxiv.org/abs/2501.07256 |
| 358 | Spatial-SSRL: Enhancing Spatial Understanding via Self-Supervised Reinforcement Learning | 11 | 25 | 36.09 | 34 | Oct 31, 2025 | 25 | 54 | https://arxiv.org/abs/2510.27606 |
| 359 | Self-Rewarding Vision-Language Model via Reasoning Decomposition | 16 | 33 | 40.88 | 42 | Aug 27, 2025 | 77 | 79 | https://arxiv.org/abs/2508.19652 |
| 360 | Uni-MoE: Scaling Unified Multimodal LLMs with Mixture of Experts | 11 | 32 | 36.27 | 35 | May 18, 2024 | 19 | 1,010 | https://arxiv.org/abs/2405.11273 |
| 361 | U-Bench: A Comprehensive Understanding of U-Net through 100-Variant Benchmarking | 8 | 21 | 31 | 28 | Oct 8, 2025 | 3 | 65 | https://arxiv.org/abs/2510.07041 |
| 362 | VChain: Chain-of-Visual-Thought for Reasoning in Video Generation | 9 | 25 | 33.33 | 29 | Oct 6, 2025 | 34 | 60 | https://arxiv.org/abs/2510.05094 |
| 363 | EasySteer: A Unified Framework for High-Performance and Extensible LLM Steering | 9 | 23 | 33.56 | 30 | Sep 29, 2025 | 25 | 49 | https://arxiv.org/abs/2509.25175 |
| 364 | LongCodeZip: Compress Long Context for Code Language Models | 8 | 23 | 31.38 | 30 | Oct 1, 2025 | 88 | 63 | https://arxiv.org/abs/2510.00446 |
| 365 | ARTDECO: Towards Efficient and High-Fidelity On-the-Fly 3D Reconstruction with Structured Scene Representation | 9 | 25 | 33.56 | 32 | Oct 9, 2025 | 30 | 73 | https://arxiv.org/abs/2510.08551 |
| 366 | DeCo: Frequency-Decoupled Pixel Diffusion for End-to-End Image Generation | 10 | 25 | 35.4 | 33 | Nov 24, 2025 | 58 | 75 | https://arxiv.org/abs/2511.19365 |
| 367 | π^3: Scalable Permutation-Equivariant Visual Geometry Learning | 6 | 10 | 25.17 | 17 | Jul 17, 2025 | 64 | 1,100 | https://arxiv.org/abs/2507.13347 |
| 368 | LoopTool: Closing the Data-Training Loop for Robust LLM Tool Calls | 9 | 21 | 33.78 | 39 | Nov 12, 2025 | 15 | 20 | https://arxiv.org/abs/2511.09148 |
| 369 | Explain Before You Answer: A Survey on Compositional Visual Reasoning | 19 | 35 | 42.89 | 44 | Aug 24, 2025 | 4 | 279 | https://arxiv.org/abs/2508.17298 |
| 370 | IMG: Calibrating Diffusion Models via Implicit Multimodal Guidance | 7 | 16 | 29.14 | 21 | Sep 30, 2025 | 16 | 25 | https://arxiv.org/abs/2509.26231 |
| 371 | ObjectClear: Complete Object Removal via Object-Effect Attention | 4 | 8 | 13 | 11 | May 28, 2025 | 1 | 328 | https://arxiv.org/abs/2505.22636 |
| 372 | LLaVA-OneVision-1.5: Fully Open Framework for Democratized Multimodal Training | 16 | 33 | 41.5 | 41 | Sep 28, 2025 | 40 | 508 | https://arxiv.org/abs/2509.23661 |
| 373 | Stable Video Infinity: Infinite-Length Video Generation with Error Recycling | 19 | 34 | 43.05 | 45 | Oct 10, 2025 | 14 | 388 | https://arxiv.org/abs/2510.09212 |
| 374 | SAC: Neural Speech Codec with Semantic-Acoustic Dual-Stream Quantization | 7 | 16 | 29.57 | 31 | Oct 19, 2025 | 0 | 60 | https://arxiv.org/abs/2510.16841 |
| 375 | ReasonRank: Empowering Passage Ranking with Strong Reasoning Ability | 13 | 28 | 39.54 | 40 | Aug 9, 2025 | 109 | 101 | https://arxiv.org/abs/2508.07050 |
| 376 | DiT360: High-Fidelity Panoramic Image Generation via Hybrid Training | 9 | 21 | 34.56 | 38 | Oct 13, 2025 | 29 | 105 | https://arxiv.org/abs/2510.11712 |
| 377 | AnyUp: Universal Feature Upsampling | 11 | 31 | 37.73 | 37 | Oct 14, 2025 | 10 | 268 | https://arxiv.org/abs/2510.12764 |
| 378 | Mixture-of-Recursions: Learning Dynamic Recursive Depths for Adaptive Token-Level Computation | 4 | 11 | 15 | 13 | Jul 14, 2025 | 60 | 311 | https://arxiv.org/abs/2507.10524 |
| 379 | Performance Prediction for Large Systems via Text-to-Text Regression | 15 | 29 | 41.47 | 45 | Jun 26, 2025 | 6 | 255 | https://arxiv.org/abs/2506.21718 |
| 380 | Sequential Diffusion Language Models | 7 | 22 | 30.71 | 29 | Sep 28, 2025 | 29 | 29 | https://arxiv.org/abs/2509.24007 |
| 381 | TiViBench: Benchmarking Think-in-Video Reasoning for Video Generative Models | 5 | 17 | 22.6 | 23 | Nov 17, 2025 | 40 | 50 | https://arxiv.org/abs/2511.13704 |
| 382 | Gradio: Hassle-Free Sharing and Testing of ML Models in the Wild | 10 | 32 | 36.9 | 35 | Jun 6, 2019 | 1 | 40,700 | https://arxiv.org/abs/1906.02569 |
| 383 | Elevating 3D Models: High-Quality Texture and Geometry Refinement from a Low-Quality Model | 4 | 9 | 17.25 | 15 | Jul 15, 2025 | 11 | 106 | https://arxiv.org/abs/2507.11465 |
| 384 | Patch-as-Decodable-Token: Towards Unified Multi-Modal Vision Tasks in MLLMs | 7 | 23 | 31.86 | 29 | Oct 2, 2025 | 12 | 183 | https://arxiv.org/abs/2510.01954 |
| 385 | More Thought, Less Accuracy? On the Dual Nature of Reasoning in Vision-Language Models | 9 | 26 | 36.33 | 35 | Sep 30, 2025 | 56 | 44 | https://arxiv.org/abs/2509.25848 |
| 386 | Open-o3 Video: Grounded Video Reasoning with Explicit Spatio-Temporal Evidence | 9 | 22 | 36.44 | 34 | Oct 23, 2025 | 46 | 63 | https://arxiv.org/abs/2510.20579 |
| 387 | AutoPR: Let's Automate Your Academic Promotion! | 5 | 8 | 25.2 | 26 | Oct 10, 2025 | 43 | 36 | https://arxiv.org/abs/2510.09558 |
| 388 | VMem: Consistent Interactive Video Scene Generation with Surfel-Indexed View Memory | 4 | 18 | 19 | 18 | Jun 23, 2025 | 22 | 255 | https://arxiv.org/abs/2506.18903 |
| 389 | RLP: Reinforcement as a Pretraining Objective | 9 | 25 | 36.78 | 34 | Sep 26, 2025 | 32 | 149 | https://arxiv.org/abs/2510.01265 |
| 390 | Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free | 10 | 31 | 38.2 | 36 | May 10, 2025 | 6 | 240 | https://arxiv.org/abs/2505.06708 |
| 391 | CMPhysBench: A Benchmark for Evaluating Large Language Models in Condensed Matter Physics | 5 | 13 | 25.6 | 19 | Aug 25, 2025 | 45 | 16 | https://arxiv.org/abs/2508.18124 |
| 392 | Spotlight on Token Perception for Multimodal Reinforcement Learning | 6 | 10 | 29.83 | 27 | Oct 10, 2025 | 31 | 26 | https://arxiv.org/abs/2510.09285 |
| 393 | Video-as-Answer: Predict and Generate Next Video Event with Joint-GRPO | 6 | 23 | 29.83 | 29 | Nov 20, 2025 | 29 | 44 | https://arxiv.org/abs/2511.16669 |
| 394 | SyGra: A Unified Graph-Based Framework for Scalable Generation, Quality Tagging, and Management of Synthetic Data | 6 | 16 | 30 | 28 | Aug 21, 2025 | 6 | 16 | https://arxiv.org/abs/2508.15432 |
| 395 | Uni-cot: Towards Unified Chain-of-Thought Reasoning Across Text and Vision | 7 | 24 | 33.29 | 32 | Aug 7, 2025 | 0 | 136 | https://arxiv.org/abs/2508.05606 |
| 396 | SRUM: Fine-Grained Self-Rewarding for Unified Multimodal Models | 9 | 15 | 37.22 | 44 | Oct 14, 2025 | 17 | 51 | https://arxiv.org/abs/2510.12784 |
| 397 | JanusCoder: Towards a Foundational Visual-Programmatic Interface for Code Intelligence | 7 | 19 | 33.29 | 38 | Oct 27, 2025 | 90 | 54 | https://arxiv.org/abs/2510.23538 |
| 398 | SearchInstruct: Enhancing Domain Adaptation via Retrieval-Based Instruction Dataset Creation | 5 | 18 | 26.4 | 26 | Sep 12, 2025 | 9 | 8 | https://arxiv.org/abs/2509.10708 |
| 399 | RynnVLA-002: A Unified Vision-Language-Action and World Model | 11 | 29 | 39.82 | 41 | Nov 21, 2025 | 24 | 669 | https://arxiv.org/abs/2511.17502 |
| 400 | BitNet Distillation | 13 | 35 | 41.62 | 41 | Oct 15, 2025 | 44 | 24,300 | https://arxiv.org/abs/2510.13998 |