OneVision-Encoder: Codec-Aligned Sparsity as a Foundational Principle for Multimodal Intelligence Paper • 2602.08683 • Published Feb 9 • 52
DanQing: An Up-to-Date Large-Scale Chinese Vision-Language Pre-training Dataset Paper • 2601.10305 • Published Jan 15 • 36
Towards Cross-View Point Correspondence in Vision-Language Models Paper • 2512.04686 • Published Dec 4, 2025
RoboOS-NeXT: A Unified Memory-based Framework for Lifelong, Scalable, and Robust Multi-Robot Collaboration Paper • 2510.26536 • Published Oct 30, 2025
Robo-Dopamine: General Process Reward Modeling for High-Precision Robotic Manipulation Paper • 2512.23703 • Published Dec 29, 2025 • 7
ProCLIP: Progressive Vision-Language Alignment via LLM-based Embedder Paper • 2510.18795 • Published Oct 21, 2025 • 11
LLaVA-OneVision-1.5: Fully Open Framework for Democratized Multimodal Training Paper • 2509.23661 • Published Sep 28, 2025 • 49
UniME-V2: MLLM-as-a-Judge for Universal Multimodal Embedding Learning Paper • 2510.13515 • Published Oct 15, 2025 • 12
Gradient-Attention Guided Dual-Masking Synergetic Framework for Robust Text-based Person Retrieval Paper • 2509.09118 • Published Sep 11, 2025 • 8
DeepGlint-AI/ViCToR-LLaVA-SigLIP2-Qwen2.5-7b Image-Text-to-Text • 8B • Updated Aug 15, 2025 • 4 • 2