BiomedBERT Small Embeddings
This is a BiomedBERT Small model fine-tuned using sentence-transformers. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
The training dataset was generated using a random sample of PubMed title-abstract pairs along with similar title pairs. The training workflow was a two-step distillation process, as follows (a sketch of the score-distillation step appears after the list).
- Distill embeddings from the larger pubmedbert-base-embeddings model using the model distillation script from Sentence Transformers.
- Build a distilled dataset of teacher scores using the biomedbert-base-reranker cross-encoder for a separate random sample of title-abstract pairs.
- Further fine-tune the model on the distilled dataset using KLDivLoss.
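For illustration, here is a minimal sketch of KL-divergence distillation on teacher scores in plain PyTorch. The function name, batch layout and temperature parameter are assumptions for this example, not the exact training code.

import torch
import torch.nn.functional as F

def kd_loss(student_scores, teacher_scores, temperature=1.0):
    # student_scores / teacher_scores: (batch, candidates) relevance scores
    # KL divergence expects log-probabilities for the input distribution
    log_p_student = F.log_softmax(student_scores / temperature, dim=-1)
    p_teacher = F.softmax(teacher_scores / temperature, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean")

Minimizing this loss pushes the student's score distribution over candidates toward the cross-encoder teacher's distribution.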
Usage (txtai)
This model can be used to build embeddings databases with txtai for semantic search and/or as a knowledge source for retrieval augmented generation (RAG).
import txtai

# Create an embeddings database with content storage enabled
embeddings = txtai.Embeddings(path="neuml/biomedbert-small-embeddings", content=True)

# Index data; documents() stands in for an iterable of records to index
embeddings.index(documents())

# Run a query
embeddings.search("query to run")
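Continuing the snippet above, documents() is a placeholder. Among other formats, txtai accepts an iterable of (id, text, tags) tuples; the sample data here is made up for illustration.

# Hypothetical data in (id, text, tags) format
data = [
    (0, "BiomedBERT is a biomedical language model", None),
    (1, "Embeddings map text to dense vectors", None),
]

embeddings.index(data)
print(embeddings.search("biomedical language models", 1))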
Usage (Sentence-Transformers)
Alternatively, the model can be loaded with sentence-transformers.
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer("neuml/biomedbert-small-embeddings")
embeddings = model.encode(sentences)
print(embeddings)
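The returned vectors can be compared directly. For example, continuing the snippet above, pairwise cosine similarity via the sentence-transformers utilities:

from sentence_transformers import util

# Pairwise cosine similarity between the embeddings computed above
print(util.cos_sim(embeddings, embeddings))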
Usage (Hugging Face Transformers)
The model can also be used directly with Transformers.
from transformers import AutoTokenizer, AutoModel
import torch
# Mean Pooling - Take attention mask into account for correct averaging
def meanpooling(output, mask):
    embeddings = output[0] # First element of model output contains all token embeddings
    mask = mask.unsqueeze(-1).expand(embeddings.size()).float()
    return torch.sum(embeddings * mask, 1) / torch.clamp(mask.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("neuml/biomedbert-small-embeddings")
model = AutoModel.from_pretrained("neuml/biomedbert-small-embeddings")
# Tokenize sentences
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    output = model(**inputs)
# Perform pooling. In this case, mean pooling.
embeddings = meanpooling(output, inputs['attention_mask'])
print("Sentence embeddings:")
print(embeddings)
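The pooled embeddings are not normalized (there is no normalization module in the architecture below), so comparisons are typically done with cosine similarity. Continuing the snippet above, a minimal sketch:

import torch.nn.functional as F

# Normalize, then compute pairwise cosine similarity
normalized = F.normalize(embeddings, p=2, dim=1)
print(normalized @ normalized.T)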
Evaluation Results
Performance of this model compared to related NeuML models is shown below, along with all-MiniLM-L6-v2, a popular small general-purpose model, and pubmedbert-base-embeddings, the most downloaded PubMed similarity model on the Hugging Face Hub.
The following datasets were used to evaluate model performance.
- PubMed QA
  - Subset: pqa_labeled, Split: train, Pair: (question, long_answer)
- PubMed Subset
  - Split: test, Pair: (title, text)
- PubMed Summary
  - Subset: pubmed, Split: validation, Pair: (article, abstract)
Evaluation results are shown below. The Pearson correlation coefficient is used as the evaluation metric.
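For illustration only, this is how a Pearson correlation is typically computed; the exact evaluation harness differs and the scores below are hypothetical.

from scipy.stats import pearsonr

# Hypothetical model similarity scores vs. reference scores for the same pairs
model_scores = [0.91, 0.34, 0.78, 0.12]
reference_scores = [0.95, 0.30, 0.80, 0.15]

correlation, _ = pearsonr(model_scores, reference_scores)
print(f"Pearson: {correlation:.4f}")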
| Model | PubMed QA | PubMed Subset | PubMed Summary | Average |
|---|---|---|---|---|
| all-MiniLM-L6-v2 | 90.40 | 95.92 | 94.07 | 93.46 |
| biomedbert-base-colbert | 94.59 | 97.18 | 96.21 | 95.99 |
| biomedbert-base-embeddings | 94.60 | 98.39 | 97.61 | 96.87 |
| biomedbert-base-reranker | 97.66 | 99.76 | 98.81 | 98.74 |
| biomedbert-small-colbert | 93.51 | 97.20 | 95.85 | 95.52 |
| biomedbert-small-embeddings | 93.25 | 97.93 | 96.65 | 95.94 |
| biomedbert-hash-nano-embeddings | 90.39 | 96.29 | 95.32 | 94.00 |
| pubmedbert-base-embeddings | 93.27 | 97.00 | 96.58 | 95.62 |
This model is a solid performer for its size. It outperforms the original PubMedBERT Embeddings model on average, and effectively matches it on every dataset, with only 20% of the parameters. It also does much better than all-MiniLM-L6-v2, a commonly used small model of roughly the same size.
Overall, this model is a strong option for CPU-only setups without giving up much accuracy.
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertModel'})
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
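These settings can be verified programmatically with the sentence-transformers API:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("neuml/biomedbert-small-embeddings")

# Confirm the sequence length and embedding dimension listed above
print(model.max_seq_length)                      # 512
print(model.get_sentence_embedding_dimension())  # 384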
More Information
Read more about the model in this article.
Base model: NeuML/biomedbert-small