---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: images
    sequence: binary
  splits:
  - name: train
    num_bytes: 39831925059
    num_examples: 118193
  download_size: 36493510192
  dataset_size: 39831925059
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- visual-document-retrieval
license: cc-by-nc-4.0
library_name:
- transformers
tags:
- multimodal
- embeddings
- pretraining
- document-retrieval
- interleaved-data
---

# ColPali train split used in MoCa Continual Pre-training

[🏠 Homepage](https://haon-chen.github.io/MoCa/) | [💻 Code](https://github.com/haon-chen/MoCa) | [🤖 MoCa-Qwen25VL-7B](https://huggingface.co/moca-embed/MoCa-Qwen25VL-7B) | [🤖 MoCa-Qwen25VL-3B](https://huggingface.co/moca-embed/MoCa-Qwen25VL-3B) | [📚 Datasets](https://huggingface.co/moca-embed/datasets) | [📄 Paper](https://arxiv.org/abs/2506.23115)

## Introduction

This is an interleaved multimodal pre-training dataset used in the modality-aware continual pre-training of MoCa models. It is adapted from [ColPali](https://huggingface.co/datasets/Tevatron/colpali) and its [corpus](https://huggingface.co/datasets/Tevatron/colpali-corpus) by concatenating queries with their positive documents.

The dataset consists of interleaved multimodal examples: `text` is a string containing the interleaved text, and `images` is a sequence of image binaries that can be loaded with the following code snippet:

```python
from io import BytesIO

import PIL.Image

# `example` is one row of the dataset; each entry of `images` holds
# the raw encoded bytes of a single image.
image_bytes = example['images'][0]
image = PIL.Image.open(BytesIO(image_bytes))
```
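For working with the full split, the dataset can also be streamed with the Hugging Face `datasets` library, which avoids downloading all ~36 GB up front. A minimal sketch, assuming a placeholder repo id (substitute this dataset's actual Hub id):

```python
from datasets import load_dataset

# Placeholder repo id; replace with this dataset's actual Hub id.
ds = load_dataset("moca-embed/colpali-train", split="train", streaming=True)

example = next(iter(ds))
print(example['text'][:200])   # interleaved text
print(len(example['images']))  # image binaries in this example
```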


## Citation
MoCa

```bibtex
@article{chen2025moca,
  title={MoCa: Modality-aware Continual Pre-training Makes Better Bidirectional Multimodal Embeddings},
  author={Chen, Haonan and Liu, Hong and Luo, Yuping and Wang, Liang and Yang, Nan and Wei, Furu and Dou, Zhicheng},
  journal={arXiv preprint arXiv:2506.23115},
  year={2025}
}
```

ColPali

```bibtex
@inproceedings{faysse2024colpali,
  title={ColPali: Efficient Document Retrieval with Vision Language Models},
  author={Faysse, Manuel and Sibille, Hugues and Wu, Tony and Omrani, Bilel and Viaud, Gautier and Hudelot, C{\'e}line and Colombo, Pierre},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2024}
}
```