---
task_categories:
- text-retrieval
- image-to-text
- sentence-similarity
language:
- en
tags:
- embeddings
- vector-database
- benchmark
---

# GAS Indexing Artifacts

## Dataset Description

This dataset contains pre-computed deterministic centroids and associated geometric metadata generated using our GAS (Geometry-Aware Selection) algorithm. 
These artifacts are designed to benchmark Approximate Nearest Neighbor (ANN) search performance in privacy-preserving or dynamic vector database environments.

### Purpose

To serve as a standardized benchmark resource for evaluating the efficiency and recall of vector databases implementing the GAS architecture. 
It is specifically designed for integration with VectorDBBench.

### Dataset Summary

- **Source Data**: 
  - Wikipedia (Public Dataset)
  - LAION-400M (Public Dataset)
- **Embedding Model**: 
  - google/embeddinggemma-300m
  - sentence-transformers/clip-ViT-B-32

## Dataset Structure

For each embedding model, the directory contains the following file:

| Data | Description |
|-------|-------------|
| `centroids.npy` | Pre-computed cluster centroids for the IVF index |

## Data Fields

### Centroids: `centroids.npy`

- **Purpose**: Finding the nearest clusters for IVF (Inverted File Index)
- **Type**: NumPy array (`np.ndarray`)
- **Shape**: `[32768, 768]` (google/embeddinggemma-300m) or `[1024, 512]` (sentence-transformers/clip-ViT-B-32)
- **Description**: 32,768 centroids of 768-dimensional text embeddings, or 1,024 centroids of 512-dimensional multi-modal embeddings
- **Normalization**: L2-normalized (unit norm)
- **Format**: float32
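The unit-norm property above can be verified after loading the file. A minimal sketch (the array here is a synthetic stand-in with the same dtype, shape, and normalization as described; replace it with the real `centroids.npy` downloaded from this repository):

```python
import numpy as np

# Synthetic stand-in for centroids.npy: float32, L2-normalized rows.
rng = np.random.default_rng(0)
centroids = rng.standard_normal((1024, 512)).astype(np.float32)
centroids /= np.linalg.norm(centroids, axis=1, keepdims=True)
np.save("centroids.npy", centroids)

# Load and validate the documented properties.
loaded = np.load("centroids.npy")
norms = np.linalg.norm(loaded, axis=1)
assert loaded.dtype == np.float32
assert np.allclose(norms, 1.0, atol=1e-5)
```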


## Dataset Creation

### Source Data

The source data comes from two large public datasets:
- Wikipedia: [mixedbread-ai/wikipedia-data-en-2023-11](https://huggingface.co/datasets/mixedbread-ai/wikipedia-data-en-2023-11)
- LAION: [LAION-400M](https://laion.ai/blog/laion-400-open-dataset/).

### Preprocessing

1. Centroid creation via the GAS approach:

   Description TBD

2. Chunking (for text): for texts exceeding 2048 tokens:

    - Split into chunks with ~100-token overlap
    - Embed each chunk separately
    - Average the chunk embeddings for the final representation
   
3. Normalization: All embeddings are L2-normalized
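The chunk-average-normalize pipeline above can be sketched as follows. This is an illustrative reconstruction, not the dataset's actual preprocessing code; `embed_fn` is a hypothetical placeholder for the embedding model, and the exact window sizes used in production may differ:

```python
import numpy as np

def chunk_tokens(tokens, max_len=2048, overlap=100):
    """Split a token sequence into windows of max_len with ~overlap shared tokens."""
    step = max_len - overlap
    return [tokens[i:i + max_len] for i in range(0, max(len(tokens) - overlap, 1), step)]

def document_embedding(tokens, embed_fn):
    """Embed each chunk separately, average, then L2-normalize (steps 2-3 above)."""
    chunks = chunk_tokens(tokens)
    vecs = np.stack([embed_fn(chunk) for chunk in chunks])
    mean = vecs.mean(axis=0)
    return mean / np.linalg.norm(mean)
```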

### Embedding Generation

- Text:
  - Model: google/embeddinggemma-300m
  - Dimension: 768
  - Max Token Length: 2048
  - Normalization: L2-normalized

- Multi-Modal:
  - Model: sentence-transformers/clip-ViT-B-32
  - Dimension: 512
  - Normalization: L2-normalized

## Usage

```python
import os
import wget

def download_centroids(embedding_model: str, dataset_dir: str) -> None:
    """Download the pre-computed centroids for IVF_GAS into dataset_dir."""
    dataset_link = f"https://huggingface.co/datasets/cryptolab-playground/gas-centroids/resolve/main/{embedding_model}"
    os.makedirs(dataset_dir, exist_ok=True)
    wget.download(f"{dataset_link}/centroids.npy", out=os.path.join(dataset_dir, "centroids.npy"))
```
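Once downloaded, the centroids are typically used in the IVF probe step: a query is assigned to its `nprobe` nearest centroids, whose inverted lists are then searched. A minimal sketch with synthetic data (since both queries and centroids are L2-normalized, ranking by inner product equals ranking by cosine similarity):

```python
import numpy as np

# Synthetic stand-ins for the real centroids.npy and a query embedding.
rng = np.random.default_rng(0)
centroids = rng.standard_normal((1024, 512)).astype(np.float32)
centroids /= np.linalg.norm(centroids, axis=1, keepdims=True)

query = rng.standard_normal(512).astype(np.float32)
query /= np.linalg.norm(query)

# IVF probe step: pick the nprobe closest centroids by cosine similarity.
nprobe = 8
scores = centroids @ query               # inner product per centroid
top = np.argsort(-scores)[:nprobe]       # indices of the nprobe nearest lists
```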

## License

Apache 2.0

## Citation

If you use this dataset, please cite:

```bibtex
@dataset{gas-centroids,
  author = {CryptoLab, Inc.},
  title = {GAS Centroids},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/datasets/cryptolab-playground/gas-centroids}
}
```

### Source Dataset Citation

```bibtex
@dataset{wikipedia_data_en_2023_11,
  author = {mixedbread-ai},
  title = {Wikipedia Data EN 2023 11},
  year = {2023},
  publisher = {Hugging Face},
  url = {https://huggingface.co/datasets/mixedbread-ai/wikipedia-data-en-2023-11}
}
```

```bibtex
@dataset{laion400m,
  author = {Schuhmann, Christoph and others},
  title = {LAION-400M},
  year = {2021},
  publisher = {LAION},
  url = {https://laion.ai/blog/laion-400-open-dataset}
}
```

### Embedding Model Citation

```bibtex
@misc{embeddinggemma,
  title={EmbeddingGemma},
  author={Google},
  year={2025},
  url={https://huggingface.co/google/embeddinggemma-300m}
}
```

```bibtex
@misc{clipvitb32,
  title={CLIP ViT-B/32},
  author={OpenAI},
  year={2021},
  url={https://huggingface.co/sentence-transformers/clip-ViT-B-32}
}
```

### Acknowledgments

- Original dataset: 
  - mixedbread-ai/wikipedia-data-en-2023-11
  - LAION-400M
- Embedding model: 
  - google/embeddinggemma-300m
  - sentence-transformers/clip-ViT-B-32
- Benchmark framework: VectorDBBench