---
license: apache-2.0
datasets:
- FreedomIntelligence/medical-o1-reasoning-SFT
language:
- en
base_model:
- unsloth/DeepSeek-R1-Distill-Llama-8B
pipeline_tag: text-generation
tags:
- unsloth
---

# DeepSeek-R1 Medical Reasoning Model

This repository contains a **fine-tuned medical reasoning model** based on
[DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B)
and trained on the [medical-o1-reasoning-SFT](https://huggingface.co/datasets/FreedomIntelligence/medical-o1-reasoning-SFT) dataset.

⚠️ **The uploaded file (`unsloth.Q8_0.gguf`) contains quantized weights** for efficient inference.

---

## 🔍 Model Overview

- **Base Model**: unsloth/DeepSeek-R1-Distill-Llama-8B
- **Training Method**: SFT (Supervised Fine-Tuning)
- **Domain**: Medical reasoning and clinical knowledge
- **Language**: English
- **Quantization**: Q8_0 (GGUF format for efficient inference; a download sketch follows this list)
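
A minimal sketch of fetching the quantized file from the Hub with `huggingface_hub`. The `repo_id` below is a placeholder; substitute the actual repository id shown on this model page:

```python
from huggingface_hub import hf_hub_download

# Download the Q8_0 GGUF file from this repository (repo_id is a placeholder)
gguf_path = hf_hub_download(
    repo_id="<user>/<this-repo>",  # placeholder: replace with this repository's id
    filename="unsloth.Q8_0.gguf",
)
print(gguf_path)  # local path to the downloaded .gguf file
```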

---

## 📚 Training Data

The model was fine-tuned on the following dataset (a quick way to inspect it is sketched after this list):

- **Dataset**: `FreedomIntelligence/medical-o1-reasoning-SFT`
- **Language**: English
- **Task**: Medical reasoning, clinical question answering
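
A minimal sketch of loading the dataset with the 🤗 `datasets` library. The `"en"` configuration name is an assumption; check the dataset card for the exact configuration:

```python
from datasets import load_dataset

# Load the English configuration of the SFT dataset (config name assumed)
ds = load_dataset("FreedomIntelligence/medical-o1-reasoning-SFT", "en", split="train")

# Each record pairs a medical question with a reasoning trace and final response
print(ds[0])
```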

---

## 🚀 Usage Example

> **Note:** The model is stored in quantized `.gguf` format. It can be loaded with the `unsloth` library as shown below.

```python
from unsloth import FastLanguageModel
import torch

# Load the quantized GGUF model
model, tokenizer = FastLanguageModel.from_pretrained(
    "./unsloth.Q8_0.gguf",
    max_seq_length=2048,
    load_in_8bit=True,  # optional depending on quantization
)

# Switch the model into inference mode
FastLanguageModel.for_inference(model)

def generate(model, prompt, max_new_tokens=200):
    # Tokenize the prompt and move it to the GPU
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=True,
            temperature=0.7,
            top_p=0.9,
        )

    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Example prompt
prompt = """### Instruction:
A patient presents with persistent chest pain and shortness of breath. What are possible differential diagnoses?

### Response:
"""

print(generate(model, prompt))
```
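
Because the weights ship as a GGUF file, they can also be run with llama.cpp-based tooling instead of `unsloth`. Below is a minimal sketch using `llama-cpp-python` (install with `pip install llama-cpp-python`); the file name matches the upload in this repository, and the sampling settings mirror the example above:

```python
from llama_cpp import Llama

# Load the Q8_0 GGUF file directly with the llama.cpp bindings
llm = Llama(model_path="unsloth.Q8_0.gguf", n_ctx=2048)

prompt = """### Instruction:
A patient presents with persistent chest pain and shortness of breath. What are possible differential diagnoses?

### Response:
"""

# Plain completion call; returns an OpenAI-style response dict
out = llm(prompt, max_tokens=200, temperature=0.7, top_p=0.9)
print(out["choices"][0]["text"])
```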