mistral (#33, opened 22 days ago by Kiranpdevops1)
Request: DOI (#30, opened 7 months ago by mengzixian)
Download_guff (#29, opened 7 months ago by himanshu012)
mistral (#27, opened 10 months ago by Dmitriy-Egorov)
running on HFTGI (#26, opened 11 months ago by saaketvarma)
Tokenizer not compatible with TGI — breaks on model load (#25, opened 11 months ago by pavlonator)
max output token and knowledge cutoff for Mistral-8B-Instruct-2410 (#24, opened 11 months ago by MengboZhou)
add_model_card (#23, opened 11 months ago by Nahieli777777)
add model card (#22, opened 11 months ago by Nahieli777777)
Recommended way to cite the model? (#21, opened about 1 year ago by chicham)
Calibration of Ministral-8B-Instruct Logprobs (#20, opened about 1 year ago by philipmuellerdev)
Request: DOI (#19, opened over 1 year ago by jliu7350)
Passkey evaluation on Flash Infer backend (#16, opened over 1 year ago by joejose2728)
Base model? (#15, opened over 1 year ago by deltanym)
Weird behavior of chat template (#14, opened over 1 year ago by kpriyanshu256)
Request access to Ministral 3B (#12, opened over 1 year ago by tinatywang; 👍 1, 2 comments)
Can not use HF transformers for inference? (#11, opened over 1 year ago by haili-tian; 1 comment)
Error when setting max_model_len to 65536 for Ministral-8B-Instruct-2410 on A100 | VLLM (#10, opened over 1 year ago by Byerose; 1 comment)
Where is Ministral 3B? (#9, opened over 1 year ago by ZeroWw; 1 comment)
an error when trying to infer in Chinese (#8, opened over 1 year ago by mario479; 1 comment)
Looks like not as good as Qwen2.5 7B (#5, opened over 1 year ago by MonolithFoundation; 👍 🔥 3, 9 comments)
3B Version Weights (#4, opened over 1 year ago by TKDKid1000; 🔥 9, 6 comments)
This LLM is hallucinating like crazy. Can someone verify these prompts? (#3, opened over 1 year ago by phil111; 28 comments)
Not MRL again :( (#1, opened over 1 year ago by LyraNovaHeart; ➕ 🔥 14, 3 comments)