yuhangzang committed
Commit 3c4afe4 · verified · 1 Parent(s): f4329d8

Update README.md

Files changed (1): README.md (+127 -1)
README.md CHANGED
@@ -13,4 +13,130 @@ tags:
- spatial understanding
- self-supervised learning
library_name: transformers
---

# Spatial-SSRL-Qwen3VL-4B

📖 <a href="https://arxiv.org/abs/2510.27606">Paper</a> | 🏠 <a href="https://github.com/InternLM/Spatial-SSRL">GitHub</a> | 🤗 <a href="https://huggingface.co/internlm/Spatial-SSRL-7B">Spatial-SSRL-7B Model</a> |
🤗 <a href="https://huggingface.co/internlm/Spatial-SSRL-Qwen3VL-4B">Spatial-SSRL-Qwen3VL-4B Model</a> |
🤗 <a href="https://huggingface.co/datasets/internlm/Spatial-SSRL-81k">Spatial-SSRL-81k Dataset</a> | 📰 <a href="https://huggingface.co/papers/2510.27606">Daily Paper</a>

Spatial-SSRL-Qwen3VL-4B is a large vision-language model for spatial understanding, built on Qwen3-VL-4B-Instruct. It is optimized with Spatial-SSRL, a lightweight self-supervised reinforcement learning paradigm that scales RLVR efficiently. The model demonstrates strong spatial intelligence while preserving the general visual capabilities of the base model.

## 📢 News
- 🚀 [2025/11/24] We have released the [🤗Spatial-SSRL-Qwen3VL-4B Model](https://huggingface.co/internlm/Spatial-SSRL-Qwen3VL-4B), initialized from Qwen3-VL-4B-Instruct.
- 🚀 [2025/11/03] You can now try out Spatial-SSRL-7B on the [🤗Spatial-SSRL Space](https://huggingface.co/spaces/yuhangzang/Spatial-SSRL).
- 🚀 [2025/11/03] We have released the [🤗Spatial-SSRL-7B Model](https://huggingface.co/internlm/Spatial-SSRL-7B) and the [🤗Spatial-SSRL-81k Dataset](https://huggingface.co/datasets/internlm/Spatial-SSRL-81k).
- 🚀 [2025/11/02] We have released the [🏠Spatial-SSRL Repository](https://github.com/InternLM/Spatial-SSRL).

## 🌈 Overview
We are thrilled to introduce <strong>Spatial-SSRL</strong>, a novel self-supervised RL paradigm that enhances the spatial understanding of LVLMs.
By optimizing Qwen2.5-VL-7B with Spatial-SSRL, the model exhibits stronger spatial intelligence across seven spatial understanding benchmarks in both image and video settings.
<p style="text-align: center;">
<img src="assets/teaser_1029final.png" alt="Teaser" width="100%">
</p>
Spatial-SSRL is a <strong>lightweight</strong>, tool-free framework that is naturally compatible with the RLVR training paradigm and easy to extend to a multitude of pretext tasks.
Five tasks are currently formulated in the framework, requiring only ordinary RGB and RGB-D images (a toy example of such a pretext task follows the pipeline figure below). <strong>We welcome you to extend Spatial-SSRL with effective pretext tasks that further strengthen the capabilities of LVLMs!</strong>

<p style="text-align: center;">
<img src="assets/pipeline_1029final.png" alt="Pipeline" width="100%">
</p>
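
To illustrate how a verifiable pretext label can be derived from raw data alone, the sketch below builds a relative-depth-ordering question from an RGB-D sample: two pixels are drawn at random and the depth map itself provides the answer. The task wording and function name here are hypothetical examples for clarity; the five tasks actually used by Spatial-SSRL are defined in the paper.

```python
import numpy as np

def make_depth_ordering_example(depth: np.ndarray, rng: np.random.Generator):
    """Hypothetical pretext task: which of two sampled points is closer to the camera?
    The label comes from the depth map itself, so no human annotation is required."""
    h, w = depth.shape
    (y1, x1), (y2, x2) = rng.integers(0, [h, w], size=(2, 2))
    question = (
        f"Which point is closer to the camera: point A at ({x1}, {y1}) "
        f"or point B at ({x2}, {y2})?\nOptions:\nA. point A\nB. point B\n"
    )
    answer = "A" if depth[y1, x1] < depth[y2, x2] else "B"
    return question, answer  # (prompt for the LVLM, verifiable ground-truth label)

# Example with a synthetic depth map
rng = np.random.default_rng(0)
q, a = make_depth_ordering_example(rng.random((480, 640)), rng)
```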

## 💡 Highlights
- 🔥 **Highly Scalable:** Spatial-SSRL uses ordinary raw RGB and RGB-D images instead of richly annotated public datasets or manual labels for data curation, making it highly scalable.
- 🔥 **Cost-effective:** The entire pipeline avoids human labels and API calls to general LVLMs, making Spatial-SSRL cost-effective.
- 🔥 **Lightweight:** Prior approaches to spatial understanding rely heavily on annotations from external tools, which introduce errors into the training data and add cost. In contrast, Spatial-SSRL is completely tool-free and can easily be extended to more self-supervised tasks.
- 🔥 **Naturally Verifiable:** Intrinsic supervisory signals determined by the pretext objectives are naturally verifiable, aligning Spatial-SSRL well with the RLVR paradigm; a minimal reward sketch follows the comparison figure below.
<p style="text-align: center;">
<img src="assets/comparison_1029final.png" alt="Comparison" width="100%">
</p>
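
To make "naturally verifiable" concrete, here is a minimal sketch of a rule-based reward in the RLVR style: the answer extracted from the model's `\boxed{}` output is compared against the label produced by the pretext task. This is an illustration of the general recipe, not the exact reward implementation used in Spatial-SSRL's training code.

```python
import re

def verifiable_reward(response: str, ground_truth: str) -> float:
    """Rule-based reward: 1.0 if the \\boxed{} answer matches the pretext-task label, else 0.0."""
    matches = re.findall(r"\\boxed\{([^}]*)\}", response)
    if not matches:
        return 0.0  # no parsable answer -> no reward
    predicted = matches[-1].strip().lower()
    return 1.0 if predicted == ground_truth.strip().lower() else 0.0

# Example: the label is derived from the data itself, so verification needs no human or API judge
print(verifiable_reward(r"<think>...</think> \boxed{A}", "A"))  # 1.0
```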

## 🛠️ Usage
Here is a code snippet for a quick trial of <strong>Spatial-SSRL-Qwen3VL-4B</strong> on your own device. You can download the model from the 🤗<a href="https://huggingface.co/internlm/Spatial-SSRL-Qwen3VL-4B">Spatial-SSRL-Qwen3VL-4B Model</a> page before running it.

```python
from transformers import AutoProcessor, AutoModelForImageTextToText  # transformers==4.57.1
from qwen_vl_utils import process_vision_info  # qwen_vl_utils==0.0.14
import torch

model_path = "internlm/Spatial-SSRL-Qwen3VL-4B"  # Change to your local path if the model is already downloaded

# Change the path of the input image
img_path = "eg1.jpg"

# Change your question here
question = "Question: Consider the real-world 3D locations and orientations of the objects. If I stand at the man's position facing where it is facing, is the menu on the left or right of me?\nOptions:\nA. on the left\nB. on the right\n"

question += "Please select the correct answer from the options above. \n"

# We recommend using the format prompt to keep inference consistent with training
format_prompt = "You FIRST think about the reasoning process as an internal monologue and then provide the final answer. The reasoning process MUST BE enclosed within <think> </think> tags. The final answer MUST BE put in \\boxed{}."

model = AutoModelForImageTextToText.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map='auto', attn_implementation='flash_attention_2'
)
processor = AutoProcessor.from_pretrained(model_path)

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": img_path,
            },
            {"type": "text", "text": question + format_prompt},
        ],
    }
]

text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

generated_ids = model.generate(**inputs, max_new_tokens=4096, do_sample=False)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print("Model Response:", output_text[0])
```
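
The response follows the format enforced by `format_prompt`: reasoning inside `<think> </think>` tags and the final answer in `\boxed{}`. If you want to score the response programmatically, a small helper like the following (the function name and regex are ours, not part of the released code) pulls the final choice out of the `\boxed{}` expression:

```python
import re

def extract_boxed_answer(response: str):
    """Return the content of the last \\boxed{...} in the model response, or None if absent."""
    matches = re.findall(r"\\boxed\{([^}]*)\}", response)
    return matches[-1].strip() if matches else None

answer = extract_boxed_answer(output_text[0])
print("Final answer:", answer)  # e.g., "A" or "B" for the multiple-choice question above
```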

## Cases
<p style="text-align: center;">
<img src="assets/case1.jpg" alt="Case" width="100%">
</p>


## ✒️ Citation
If you find our model useful, please cite:
```bibtex
@article{liu2025spatial,
  title={Spatial-SSRL: Enhancing Spatial Understanding via Self-Supervised Reinforcement Learning},
  author={Liu, Yuhong and Zhang, Beichen and Zang, Yuhang and Cao, Yuhang and Xing, Long and Dong, Xiaoyi and Duan, Haodong and Lin, Dahua and Wang, Jiaqi},
  journal={arXiv preprint arXiv:2510.27606},
  year={2025}
}
```

## 📄 License
![Code License](https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg) ![Data License](https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg)

**Usage and License Notices**: The data and code are intended and licensed for research use only.