jymmmmm committed (verified)
Commit f6e39c0 · 1 parent: 24b2ebc

Update README.md

Files changed (1): README.md (+8 −2)
README.md CHANGED
@@ -18,11 +18,17 @@ MAmmoTH-VL2, the model trained with VisualWebInstruct.
 [Paper](https://arxiv.org/abs/2503.10582)|
 [Website](https://tiger-ai-lab.github.io/VisualWebInstruct/)
 
+
+
+
 # Example Usage
+## Requirements
+```python
+llava==1.7.0.dev0 # pip install git+https://github.com/LLaVA-VL/LLaVA-NeXT.git
+torch==2.5.1
+```
 To perform inference using MAmmoTH-VL2, you can use the following code snippet:
 ```python
-# pip install git+https://github.com/LLaVA-VL/LLaVA-NeXT.git
-
 from llava.model.builder import load_pretrained_model
 from llava.mm_utils import process_images
 from llava.constants import DEFAULT_IMAGE_TOKEN