---
dataset_info:
  features:
    - name: text
      dtype: string
    - name: embedding
      list: float32
      length: 768
  splits:
    - name: train
      num_examples: 13028
  dataset_size: 13028
  download_size: 13028
configs:
  - config_name: default
    data_files:
      - split: train
        path: train/*
metadata:
  model_name: Alibaba-NLP/gte-modernbert-base
  dataset_name: mteb/lotte
---

# Embedpress: Alibaba-NLP/gte-modernbert-base on the mteb/lotte dataset

This is the mteb/lotte dataset, embedded with Alibaba-NLP/gte-modernbert-base.

For each example, we embed the text directly (no additional instruction prompt). Embeddings have dimensionality 768.
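
To sanity-check the result, the dataset can be loaded and inspected with the `datasets` library. A minimal sketch; the repo id below is a placeholder for this dataset's actual Hub path:

```python
from datasets import load_dataset

# Placeholder repo id: substitute this dataset's actual Hub path.
ds = load_dataset("user/lotte-gte-modernbert-base", split="train")

print(ds.num_rows)              # 13028
print(len(ds[0]["embedding"]))  # 768
print(ds[0]["text"][:80])       # start of the first query text
```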

These embeddings are intended for tasks like large-scale distillation, retrieval, and similarity search. Because raw texts may exceed the model's context window, we recommend truncating inputs to the model's maximum sequence length at build time.
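
As an illustration, a cosine-similarity search over these vectors could look like the sketch below. It reuses `ds` from the loading snippet above; the query string is made up, and loading the model through sentence-transformers (whose `encode` truncates inputs to the model's maximum sequence length) is one reasonable way to match the embedding setup, not necessarily the exact pipeline used to build this dataset.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Alibaba-NLP/gte-modernbert-base")

# Embed a new query; encode() truncates inputs to model.max_seq_length.
query = model.encode("how do I season a cast iron pan?", normalize_embeddings=True)

# Stack the precomputed vectors and normalize so a dot product is cosine similarity.
corpus = np.asarray(ds["embedding"], dtype=np.float32)
corpus /= np.linalg.norm(corpus, axis=1, keepdims=True)

# Print the five nearest texts.
scores = corpus @ query
for i in np.argsort(-scores)[:5]:
    print(f"{scores[i]:.3f}  {ds[int(i)]['text'][:80]}")
```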

## Schema

- `text` (string): the query text used for embedding
- `embedding` (float32[768]): the vector representation from Alibaba-NLP/gte-modernbert-base

## Split

- train: 13028 examples

## Notes

- Produced with Alibaba-NLP/gte-modernbert-base from the Hugging Face Hub.
- If you need smaller vectors (e.g., matryoshka-style truncated embeddings), you can slice the stored embeddings without re-embedding; see the sketch below.
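
A minimal sketch of that slicing, assuming the usual matryoshka convention of keeping a prefix of each vector and renormalizing (the target dimensionality of 256 is an arbitrary example; `ds` is the dataset loaded above):

```python
import numpy as np

def truncate_embeddings(embeddings: np.ndarray, dim: int = 256) -> np.ndarray:
    """Keep the first `dim` components of each vector and renormalize to unit length."""
    sliced = embeddings[:, :dim]
    return sliced / np.linalg.norm(sliced, axis=1, keepdims=True)

small = truncate_embeddings(np.asarray(ds["embedding"], dtype=np.float32))
print(small.shape)  # (13028, 256)
```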

## Acknowledgments

Thanks to Mixedbread AI for a GPU grant supporting research into small retrieval models.