Soptq committed · Commit 9da8eae · verified · Parent(s): 87b068b

Update README.md

Files changed (1): README.md (+3 −3)

README.md CHANGED
@@ -34,13 +34,13 @@ size_categories:
 
 *Latest News* 🔥
 
-[Latest] We are officially integrated by [VLMEvalKit](https://github.com/open-compass/VLMEvalKit). [Intern-S1](https://github.com/InternLM/Intern-S1), the most advanced open-source multimodal reasoning model to date, benchmarked on SFE.
+[Latest] [Seed-1.8](https://lf3-static.bytednsdoc.com/obj/eden-cn/lapzild-tss/ljhwZthlaukjlkulzlp/research/Seed-1.8-Modelcard.pdf), the model with native support for generalized real-world agency, is benchmarked on SFE.
 
 <details>
 <summary>Unfold to see more details.</summary>
 <be>
-
-- [2025/07] [Intern-S1](https://github.com/InternLM/Intern-S1), the most advanced open-source multimodal reasoning model to date, benchmarked on SFE.
+- [2025/12] [Seed-1.8](https://lf3-static.bytednsdoc.com/obj/eden-cn/lapzild-tss/ljhwZthlaukjlkulzlp/research/Seed-1.8-Modelcard.pdf), the model with native support for generalized real-world agency, is benchmarked on SFE.
+- [2025/07] [Intern-S1](https://github.com/InternLM/Intern-S1), the most advanced open-source multimodal reasoning model to date, is benchmarked on SFE.
 - [2025/07] We are officially integrated by [VLMEvalKit](https://github.com/open-compass/VLMEvalKit).
 - [2025/06] We officially released SFE! SFE is designed to evaluate the scientific cognitive capacities of MLLMs through three cognitive levels: **scientific signal perception**, **scientific attribute understanding**, and **scientific comparative reasoning**.
 </details>