Improve model card: Add pipeline tag, paper/code links, abstract, framework, and usage examples

#1 by nielsr HF Staff - opened
Files changed (1): README.md (+47, -3)

README.md CHANGED

---
license: mit
pipeline_tag: zero-shot-object-detection
---

# PropVG: End-to-End Proposal-Driven Visual Grounding with Multi-Granularity Discrimination

## Paper
[PropVG: End-to-End Proposal-Driven Visual Grounding with Multi-Granularity Discrimination](https://huggingface.co/papers/2509.04833)

## Code
The official code and models are available at: [https://github.com/Dmmm1997/PropVG](https://github.com/Dmmm1997/PropVG)
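To try the demos below, the repository first needs to be cloned and its dependencies installed. The following is a minimal, hypothetical setup sketch: the repository URL is the official one, but the `requirements.txt` file name is an assumption, so check the repository's own installation instructions for the authoritative steps.

```bash
# Minimal setup sketch (hypothetical): clone the official repository and
# install its Python dependencies. The requirements file name is an
# assumption; follow the repository's installation instructions if it differs.
git clone https://github.com/Dmmm1997/PropVG
cd PropVG
pip install -r requirements.txt
```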

## Abstract
Recent advances in visual grounding have largely shifted away from traditional proposal-based two-stage frameworks due to their inefficiency and high computational complexity, favoring end-to-end direct reference paradigms. However, these methods rely exclusively on the referred target for supervision, overlooking the potential benefits of prominent prospective targets. Moreover, existing approaches often fail to incorporate multi-granularity discrimination, which is crucial for robust object identification in complex scenarios. To address these limitations, we propose PropVG, an end-to-end proposal-based framework that, to the best of our knowledge, is the first to seamlessly integrate foreground object proposal generation with referential object comprehension without requiring additional detectors. Furthermore, we introduce a Contrastive-based Refer Scoring (CRS) module, which employs contrastive learning at both the sentence and word levels to enhance the capability to understand and distinguish referred objects. Additionally, we design a Multi-granularity Target Discrimination (MTD) module that fuses object- and semantic-level information to improve the recognition of absent targets. Extensive experiments on the gRefCOCO (GREC/GRES), Ref-ZOM, R-RefCOCO, and RefCOCO (REC/RES) benchmarks demonstrate the effectiveness of PropVG.

## Framework
![framework](https://github.com/Dmmm1997/PropVG/raw/main/asserts/framework.jpg)

## Sample Usage (Demo)
Demos for PropVG are provided below.

The following script runs inference on the GRES task:
```bash
python tools/demo.py --img "asserts/imgs/Figure_1.jpg" --expression "three skateboard guys" --config "configs/gres/PropVG-grefcoco.py" --checkpoint /PATH/TO/PropVG-grefcoco.pth --img_size 320
```

The following script runs inference on the RIS task:
```bash
python tools/demo.py --img "asserts/imgs/Figure_2.jpg" --expression "full half fruit" --config "configs/refcoco/PropVG-refcoco-mix.py" --checkpoint /PATH/TO/PropVG-refcoco-mix.pth --img_size 384
```
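Since `tools/demo.py` takes one image/expression pair per invocation, several referring expressions can be tested with a plain shell loop. This is a hypothetical sketch that reuses only the flags shown above; the expressions in the loop are placeholders.

```bash
# Hypothetical batch sketch: run the RIS demo for several referring
# expressions against the same image, reusing only the flags shown above.
for expr in "full half fruit" "the fruit on the left"; do  # placeholder expressions
  python tools/demo.py --img "asserts/imgs/Figure_2.jpg" \
    --expression "$expr" \
    --config "configs/refcoco/PropVG-refcoco-mix.py" \
    --checkpoint /PATH/TO/PropVG-refcoco-mix.pth \
    --img_size 384
done
```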

To load alternative pretrained weights or adjust threshold settings, consult `tools/demo.py`.

## Citation
If you find this repository helpful, feel free to cite our paper:
```bibtex
@misc{propvg,
  title={PropVG: End-to-End Proposal-Driven Visual Grounding with Multi-Granularity Discrimination},
  author={Ming Dai and Wenxuan Cheng and Jiedong Zhuang and Jiang-jiang Liu and Hongshen Zhao and Zhenhua Feng and Wankou Yang},
  year={2025},
  eprint={2509.04833},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2509.04833},
}
```