---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language: pl
license: gemma
widget:
- source_sentence: "zapytanie: Jak dożyć 100 lat?"
  sentences:
  - "Trzeba zdrowo się odżywiać i uprawiać sport."
  - "Trzeba pić alkohol, imprezować i jeździć szybkimi autami."
  - "Gdy trwała kampania politycy zapewniali, że rozprawią się z zakazem niedzielnego handlu."
---

# Stella-PL-retrieval-mini-8k

This is an embedding model based on [stella_en_400M_v5](https://huggingface.co/NovaSearch/stella_en_400M_v5), further fine-tuned for retrieval tasks in Polish. It transforms texts into 1024-dimensional vectors. The model was trained in two stages:
- In the first stage, we adapted the model to Polish using the [multilingual knowledge distillation](https://aclanthology.org/2020.emnlp-main.365/) method (sketched below), leveraging a diverse corpus of 20 million Polish-English text pairs.
- The original Stella model and the output of the first stage were limited to a short context of 512 tokens. In the second stage, we extended the context to 8192 tokens and fine-tuned the model with a contrastive loss on a dataset of 1.5 million queries. Positive and negative passages for each query were selected with the help of the [BAAI/bge-reranker-v2.5-gemma2-lightweight](https://huggingface.co/BAAI/bge-reranker-v2.5-gemma2-lightweight) reranker. The model was trained for five epochs with a batch size of 1024 queries.
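For illustration, here is a minimal sketch of the multilingual knowledge distillation objective from the first stage: the student is trained so that its embeddings of an English text and of its Polish translation both match the frozen teacher's embedding of the English text. The `teacher` and `student` callables below are hypothetical encoder modules (mapping a list of texts to a `(batch, dim)` tensor), not part of this repository.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student, teacher, english_texts, polish_texts):
    # Frozen teacher embeddings of the English texts serve as regression targets.
    with torch.no_grad():
        target = teacher(english_texts)
    # The student should reproduce the teacher on English...
    loss_en = F.mse_loss(student(english_texts), target)
    # ...and map the Polish translations to the same points in the vector space.
    loss_pl = F.mse_loss(student(polish_texts), target)
    return loss_en + loss_pl
```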
Note: The model uses a custom implementation that requires the XFormers library. For XFormers to function correctly, you need compatible versions of Flash-Attention and PyTorch installed. Before using the model, make sure your XFormers installation is properly configured.
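As a quick sanity check before loading the model, you can confirm that PyTorch and XFormers import correctly and report their versions (the exact version requirements depend on your environment; `python -m xformers.info` also prints a detailed report of which XFormers components are enabled):

```python
# Verify that PyTorch and XFormers are installed and visible to Python.
import torch
import xformers

print("torch:", torch.__version__)
print("xformers:", xformers.__version__)
print("CUDA available:", torch.cuda.is_available())
```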
## Usage (Sentence-Transformers)

The model uses the same prompts as the original [stella_en_400M_v5](https://huggingface.co/NovaSearch/stella_en_400M_v5). For retrieval, queries should be prefixed with **"Instruct: Given a web search query, retrieve relevant passages that answer the query.\nQuery: "**. For symmetric tasks such as semantic similarity, both texts should be prefixed with **"Instruct: Retrieve semantically similar text.\nQuery: "**.

Please note that the model uses a custom implementation, so you should pass the `trust_remote_code=True` argument when loading it. You can use the model with [sentence-transformers](https://www.SBERT.net) like this:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer(
    "sdadas/stella-pl-retrieval-mini-8k",
    trust_remote_code=True,
    device="cuda"
)
model.bfloat16()

# Retrieval example
query_prefix = "Instruct: Given a web search query, retrieve relevant passages that answer the query.\nQuery: "
queries = [query_prefix + "Jak dożyć 100 lat?"]  # "How to live to 100?"
answers = [
    "Trzeba zdrowo się odżywiać i uprawiać sport.",  # "You need to eat healthy and do sports."
    "Trzeba pić alkohol, imprezować i jeździć szybkimi autami.",  # "You need to drink alcohol, party and drive fast cars."
    "Gdy trwała kampania politycy zapewniali, że rozprawią się z zakazem niedzielnego handlu."  # "During the campaign, politicians promised to deal with the Sunday trading ban."
]
queries_emb = model.encode(queries, convert_to_tensor=True, show_progress_bar=False)
answers_emb = model.encode(answers, convert_to_tensor=True, show_progress_bar=False)
best_answer = cos_sim(queries_emb, answers_emb).argmax().item()
print(answers[best_answer])  # prints the passage most similar to the query

# Semantic similarity example
sim_prefix = "Instruct: Retrieve semantically similar text.\nQuery: "
sentences = [
    sim_prefix + "Trzeba zdrowo się odżywiać i uprawiać sport.",  # "You need to eat healthy and do sports."
    sim_prefix + "Warto jest prowadzić zdrowy tryb życia, uwzględniający aktywność fizyczną i dietę.",  # "It is worth leading a healthy lifestyle that includes physical activity and diet."
    sim_prefix + "One should eat healthy and engage in sports.",
    sim_prefix + "Zakupy potwierdzasz PINem, który bezpiecznie ustalisz podczas aktywacji."  # "You confirm purchases with a PIN that you securely set during activation." (unrelated sentence)
]
emb = model.encode(sentences, convert_to_tensor=True, show_progress_bar=False)
print(cos_sim(emb, emb))  # pairwise cosine similarity matrix
```

## Evaluation Results

The model achieves **NDCG@10** of **61.29** on the Polish Information Retrieval Benchmark. See the [PIRB Leaderboard](https://huggingface.co/spaces/sdadas/pirb) for detailed results.

## Acknowledgements

This research was supported in part by the project "Cloud Artificial Intelligence Service Engineering (CAISE) platform to create universal and smart services for various application areas", No. KPOD.05.10-IW.10-0005/24, as part of the European IPCEI-CIS program, financed by NRRP (National Recovery and Resilience Plan) funds. Computations were carried out using the computers of the Centre of Informatics Tricity Academic Supercomputer & Network at Gdansk University of Technology.

## Citation

```bibtex
@inproceedings{dadas2024pirb,
  title={PIRB: A Comprehensive Benchmark of Polish Dense and Hybrid Text Retrieval Methods},
  author={Dadas, Slawomir and Pere{\l}kiewicz, Micha{\l} and Po{\'s}wiata, Rafa{\l}},
  booktitle={Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
  pages={12761--12774},
  year={2024}
}
```