---
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
---
# CLIP-BERT training data
This dataset was used to train the CLIP-BERT model first described in [this paper](https://arxiv.org/abs/2109.11321).
The dataset is based on text and images from MS COCO, SBU Captions, Visual Genome QA, and Conceptual Captions.
The image features have been extracted using the CLIP model [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32), available on Hugging Face.
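
As a rough illustration, features like these can be computed with the `transformers` library. This is a minimal sketch using the checkpoint named above, not the exact pipeline used to build this dataset, and the image path is hypothetical:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load the CLIP checkpoint referenced above.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

# "example.jpg" is a hypothetical path standing in for a dataset image.
image = Image.open("example.jpg")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    # Projected image embedding; shape (1, 512) for this checkpoint.
    features = model.get_image_features(**inputs)
```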