Finally, our new paper is out: "FineVision: Open Data Is All You Need" (arXiv 2510.17269)! 🥳
If you've ever trained a VLM, you know this problem: nobody shares their data mixtures. It's a black box, which makes replicating SOTA work nearly impossible. We wanted to change that.
FineVision unifies 200 sources into 24 million samples. With 17.3 million images and 9.5 billion answer tokens, it's the largest open resource of its kind.
In the paper, we share how we built it:
- finding and cleaning data at scale
- removing excessive duplicates across sources
- decontaminating against 66 public benchmarks
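For intuition, here is a minimal sketch of one way cross-source near-duplicate removal can be done with perceptual image hashes; the imagehash usage and the distance threshold are illustrative assumptions, not the exact FineVision pipeline.

from PIL import Image
import imagehash

seen = []

def is_near_duplicate(path, max_distance=4):
    # Perceptual hash: a small Hamming distance means visually near-identical images.
    # The threshold of 4 bits is an assumed value for illustration.
    h = imagehash.phash(Image.open(path))
    if any(h - kept <= max_distance for kept in seen):
        return True
    seen.append(h)
    return False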
My favorite part is Figure 6 (in the video!). It's our visual diversity analysis. It shows that FineVision isn't just bigger; it's more balanced and conceptually richer than other open datasets. NVIDIA's Eagle 2 paper highlighted just how critical this visual diversity is, and our results confirm it: models trained on FineVision consistently outperform those trained on any other open dataset on 11 benchmarks!
To celebrate the paper, I'm also releasing a concatenated and shuffled version of the full dataset: HuggingFaceM4/FineVision_full_shuffled
It's ready to stream, so you can start training your own models right away:
from datasets import load_dataset

# Stream the dataset instead of downloading it in full
d = load_dataset("HuggingFaceM4/FineVision_full_shuffled", split="train", streaming=True)
print(next(iter(d)))
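And if you just want to peek at a few samples before training, you can slice the streaming iterator:

from itertools import islice

for sample in islice(d, 3):
    print(sample.keys())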
A big shoutout to the first authors: Luis Wiedmann and Orr Zohar. They are rockstars!
Many VLMs claim to process hours of video. But can they follow the story? Today, we introduce TimeScope: the benchmark that separates true temporal understanding from marketing hype. Let's see how much VLMs really understand!
We test three skills that matter for real-world use:
- Localized Retrieval: find a specific action.
- Information Synthesis: piece together scattered clues.
- Fine-Grained Perception: analyze detailed motion (e.g., count how many times a person swings an axe).
The results are in, and they're revealing. Only Gemini 2.5 Pro handles 1-hour-long videos. Performance drops sharply with duration, showing that long-video understanding is still an open challenge. We've found the breaking points; now the community can start fixing them.
Want to learn more? TimeScope is 100% open-source. Benchmark your model and help us build the next generation of video AI.
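As a rough illustration of what benchmarking can look like, here is a hedged sketch that assumes the benchmark ships as a Hub dataset with video, question, and answer fields; the dataset id, field names, and the predict stub are hypothetical placeholders, so check the TimeScope repo for the real interface.

from datasets import load_dataset

def predict(video, question):
    # Placeholder: call your own model's inference here.
    return "..."

# Hypothetical dataset id and field names, for illustration only.
bench = load_dataset("your-org/timescope-benchmark", split="test")

correct = 0
for example in bench:
    answer = predict(example["video"], example["question"])
    correct += int(answer.strip().lower() == example["answer"].strip().lower())

print(f"Accuracy: {correct / len(bench):.2%}")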
Humans often solve visual problems by sketching ideas in their minds. What if Vision-Language Models (VLMs) could do something similar, not by generating full images, but by using internal "mental sketches"?
That's the idea behind Mirage, a new framework that empowers VLMs to reason using latent visual tokens. Instead of just thinking in words, Mirage mixes in abstract visual representations that help the model solve complex tasks.
These aren't photorealistic images. They're compact, internal representations optimized purely to support reasoning.
Mirage is trained in two phases:
1) Grounding: it learns to produce latent tokens anchored in real images.
2) Refinement: the model drops the images and learns to generate visual tokens on its own.
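To make the two phases concrete, here is a toy sketch of what the two training objectives could look like; the model interface (latent_token_states, text_loss, num_latent_tokens) and the MSE anchoring loss are my assumptions for illustration, not the authors' actual code.

import torch.nn.functional as F

def grounding_step(model, vision_encoder, text_ids, image):
    # Phase 1: latent visual tokens are pulled toward real image features,
    # so the "mental sketch" stays anchored in what the image actually shows.
    target = vision_encoder(image)                        # [num_latent, dim]
    out = model(text_ids, num_latent_tokens=target.shape[0])
    anchor_loss = F.mse_loss(out.latent_token_states, target)
    return anchor_loss + out.text_loss                    # keep the answer loss too

def refinement_step(model, text_ids):
    # Phase 2: no image is given; the model emits latent tokens on its own
    # and is trained only on the downstream text objective.
    out = model(text_ids, num_latent_tokens=8)
    return out.text_loss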
And yes, it works! On challenging benchmarks like Visual Spatial Planning, Jigsaw puzzles, and Spatial Attention Tasks, Mirage clearly outperforms GPT-4o and other strong baselines. Smart sketches > empty words.