Bloomberg Financial News Embeddings for Vector Database Benchmarking
Dataset Description
This dataset contains pre-computed embeddings of Bloomberg financial news articles, designed for evaluating vector database performance. The embeddings are generated using Google's EmbeddingGemma-300M model.
Purpose
A benchmark dataset for evaluating vector database performance in the financial news domain, designed specifically for use with VectorDBBench.
Dataset Summary
- Total Training Samples: 368,816
- Test Queries: 1,000
- Ground Truth: Top-1000 nearest neighbors per query
- Embedding Dimension: 768
- Embedding Model: google/embeddinggemma-300m
- Source Data: danidanou/Bloomberg_Financial_News
Dataset Structure
Data Splits
| Split | Samples | Description |
|---|---|---|
| train | 368,816 | Training embeddings (80% random sample from source) |
| test | 1,000 | Test query embeddings (from remaining 20%, non-overlapping) |
| neigbors.parquet | 1,000 | Top-1000 nearest neighbors for each test query |
Data Fields
train & test
- `id` (int64): Unique identifier for each article
- `emb` (List[float64]): 768-dimensional L2-normalized embedding vector
neigbors.parquet
- `id` (int64): Query identifier (matches the test split)
- `neighbors_id` (List[int64]): List of 1000 nearest neighbor IDs from the train set
Dataset Creation
Source Data
The dataset is derived from approximately 447K Bloomberg financial news articles:
- Train: 80% random sample (368,816 articles)
- Test: 1,000 articles randomly sampled from remaining 20% (non-overlapping with train)
Preprocessing
- Text Preparation: Concatenated Headline + Article for each news item
- Chunking: For texts exceeding 2048 tokens:
- Split into chunks with ~100 token overlap
- Embedded each chunk separately
- Averaged chunk embeddings for final representation
- Normalization: All embeddings are L2-normalized
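The chunk-and-average step above can be sketched as follows. This is an illustrative reconstruction, not the actual pipeline code; `embed_fn` is a hypothetical callable standing in for the EmbeddingGemma encoder:

```python
import numpy as np

def embed_long_text(tokens, embed_fn, max_len=2048, overlap=100):
    """Split a token sequence into overlapping chunks, embed each chunk,
    then mean-pool the chunk embeddings and L2-normalize the result.

    tokens   : list of token ids
    embed_fn : callable mapping one token chunk to a 1-D embedding vector
    """
    if len(tokens) <= max_len:
        emb = np.asarray(embed_fn(tokens), dtype=np.float64)
    else:
        step = max_len - overlap  # ~100-token overlap between chunks
        chunks = [tokens[i:i + max_len] for i in range(0, len(tokens), step)]
        emb = np.mean([embed_fn(c) for c in chunks], axis=0)
    return emb / np.linalg.norm(emb)  # final L2 normalization
```

Mean-pooling keeps a single fixed-size vector per article regardless of length, at the cost of blurring chunk-level detail (see Limitations).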
Embedding Generation
- Model: google/embeddinggemma-300m
- Dimension: 768
- Max Token Length: 2048
- Normalization: L2-normalized
Ground Truth Generation
Ground truth nearest neighbors were computed using:
- Method: Flat search (brute-force)
- Metric: Cosine similarity
- K: Top-1000 neighbors per query
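Because all embeddings are L2-normalized, cosine similarity reduces to a plain dot product, so the brute-force ground truth can be computed as below (a minimal sketch, not the exact script used to build `neigbors.parquet`):

```python
import numpy as np

def exact_top_k(train_emb, query_emb, k=1000):
    """Brute-force top-k neighbors by cosine similarity.

    For L2-normalized vectors, cosine similarity equals the dot product.
    Returns an array of shape (n_queries, k) of train-set row indices,
    sorted from most to least similar.
    """
    sims = query_emb @ train_emb.T          # (n_queries, n_train) similarity matrix
    return np.argsort(-sims, axis=1)[:, :k]  # descending sort, keep top k
```

For 368,816 train vectors and 1,000 queries this is a single 1,000 x 368,816 matrix product, so exact search is tractable and gives error-free ground truth.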
Usage
Loading the Dataset
```python
from datasets import load_dataset
import pandas as pd

# Load train and test splits
dataset = load_dataset("redcourage/Bloomberg-Financial-News-embedding-gemma-300m")
train = dataset['train']
test = dataset['test']

# Load ground truth
neigbors = pd.read_parquet(
    "hf://datasets/redcourage/Bloomberg-Financial-News-embedding-gemma-300m/neigbors.parquet"
)
```
Evaluation Example
```python
import numpy as np
from datasets import load_dataset
import pandas as pd

# Load data
dataset = load_dataset("redcourage/Bloomberg-Financial-News-embedding-gemma-300m")
train_data = dataset['train']
test_data = dataset['test']
neigbors = pd.read_parquet(
    "hf://datasets/redcourage/Bloomberg-Financial-News-embedding-gemma-300m/neigbors.parquet"
)

# Convert to numpy arrays
train_embeddings = np.array(train_data['emb'])
test_embeddings = np.array(test_data['emb'])

# Example: Compute recall@10
def compute_recall_at_k(retrieved_ids, neigbors_ids, k=10):
    """
    Compute Recall@K.

    Args:
        retrieved_ids: List of retrieved neighbor IDs
        neigbors_ids: List of ground truth neighbor IDs
        k: Number of top results to consider
    """
    retrieved_k = set(retrieved_ids[:k])
    neigbors_k = set(neigbors_ids[:k])
    if len(neigbors_k) == 0:
        return 0.0
    return len(retrieved_k & neigbors_k) / len(neigbors_k)

# Use with your vector database
# ... insert your vector DB search code here ...
```
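As a sanity check of the evaluation loop, exact (brute-force) search can stand in for a vector database: the retrieved and ground-truth sets then coincide by construction, so recall should be exactly 1.0. The sketch below uses random unit vectors in place of the real embeddings:

```python
import numpy as np

def recall_at_k(retrieved_ids, truth_ids, k=10):
    """Fraction of the top-k ground-truth IDs present in the top-k retrieved IDs."""
    return len(set(retrieved_ids[:k]) & set(truth_ids[:k])) / k

# Random unit vectors stand in for the real train/test embeddings
rng = np.random.default_rng(0)
train = rng.normal(size=(100, 768))
train /= np.linalg.norm(train, axis=1, keepdims=True)
query = train[0]                      # a query identical to one train vector

sims = train @ query                  # cosine similarity (unit-norm vectors)
retrieved = np.argsort(-sims)[:10]    # exact top-10
print(recall_at_k(list(retrieved), list(retrieved), k=10))  # 1.0 by construction
```

With a real ANN index, `retrieved` comes from the index and `truth_ids` from the corresponding row of `neigbors.parquet`; any recall below 1.0 measures the index's approximation error.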
Use Cases
- Vector database performance benchmarking on financial domain
- Approximate nearest neighbor (ANN) algorithm evaluation
- Retrieval system testing for financial news
Limitations
- Domain-Specific: Optimized for financial news; may not generalize to other domains
- Language: English only
- Temporal Coverage: Limited to articles available in the source dataset (2006-2021)
- Chunking Strategy: Embeddings of long documents are chunk-averaged, which may lose fine-grained information
- Ground Truth: Based on cosine similarity with embeddings, not human relevance judgments
- Financial Bias: May reflect biases present in Bloomberg's reporting and article selection
License
Apache 2.0
Citation
If you use this dataset, please cite:
```bibtex
@dataset{bloomberg_embeddings_gemma,
  author    = {redcourage},
  title     = {Bloomberg Financial News Embeddings for Vector Database Benchmarking},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/redcourage/Bloomberg-Financial-News-embedding-gemma-300m}
}
```
Source Dataset Citation
```bibtex
@dataset{bloomberg_financial_news,
  author    = {danidanou},
  title     = {Bloomberg Financial News},
  year      = {2024},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/danidanou/Bloomberg_Financial_News}
}
```
Embedding Model Citation
```bibtex
@misc{embeddinggemma,
  title  = {Embedding Gemma},
  author = {Google},
  year   = {2024},
  url    = {https://huggingface.co/google/embeddinggemma-300m}
}
```
Acknowledgments
- Original dataset: danidanou/Bloomberg_Financial_News
- Embedding model: google/embeddinggemma-300m
- Benchmark framework: VectorDBBench