Abstract
LingBot-World is an open-source world simulator with high-fidelity dynamics, long-term memory capabilities, and real-time interactivity for diverse environments.
We present LingBot-World, an open-source world simulator built on video generation. Positioned as a top-tier world model, LingBot-World offers the following features. (1) It maintains high fidelity and robust dynamics across a broad spectrum of environments, including realistic scenes, scientific contexts, cartoon styles, and beyond. (2) It sustains a minute-level horizon while preserving contextual consistency over time, a property also known as "long-term memory". (3) It supports real-time interactivity, achieving under 1 second of latency while producing 16 frames per second. We provide public access to the code and model in an effort to narrow the divide between open-source and closed-source technologies. We believe our release will empower the community with practical applications across areas such as content creation, gaming, and robot learning.
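The real-time claim implies a per-frame budget of 1/16 s ≈ 62.5 ms. The sketch below checks an interaction loop against that budget; `WorldModelStub` and its `step` method are hypothetical placeholders standing in for the released interface, not the actual LingBot-World API — consult the published code for the real entry points.

```python
import time

class WorldModelStub:
    """Hypothetical stand-in for an interactive world model."""

    def step(self, action):
        # Pretend inference takes ~20 ms and return a placeholder frame.
        time.sleep(0.02)
        return object()

TARGET_FPS = 16
FRAME_BUDGET = 1.0 / TARGET_FPS  # ~62.5 ms per frame at 16 fps

model = WorldModelStub()
for t in range(64):
    start = time.perf_counter()
    frame = model.step(action="noop")  # one user action -> one generated frame
    elapsed = time.perf_counter() - start
    if elapsed > FRAME_BUDGET:
        print(f"frame {t}: {elapsed * 1000:.1f} ms exceeds "
              f"the {FRAME_BUDGET * 1000:.1f} ms budget")
```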
Community
LingBot-World: Advancing Open-source World Models
This is an automated message from the Librarian Bot: the following similar papers were recommended by the Semantic Scholar API.
- WorldPlay: Towards Long-Term Geometric Consistency for Real-Time Interactive World Modeling (2025)
- LongVie 2: Multimodal Controllable Ultra-Long Video World Model (2025)
- TeleWorld: Towards Dynamic Multimodal Synthesis with a 4D World Model (2025)
- The World is Your Canvas: Painting Promptable Events with Reference Images, Trajectories, and Text (2025)
- A Mechanistic View on Video Generation as World Models: State and Dynamics (2026)
- Astra: General Interactive World Model with Autoregressive Denoising (2025)
- DriveLaW: Unifying Planning and Video Generation in a Latent Driving World (2025)