---
language:
- en
license: mit
library_name: transformers
tags:
- code
- typescript
- reasoning
- react
- nextjs
- angular
- nodejs
- deepseek
- gguf
- ollama
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
datasets:
- github-code
model-index:
- name: TypeScript-SLM-7B-Reasoning-Full
  results: []
---

# TypeScript-SLM-7B-Reasoning-Full

**TypeScript-SLM-7B-Reasoning** is a 7B-parameter model fine-tuned from DeepSeek-R1-Distill-Qwen-7B for step-by-step TypeScript reasoning. The LoRA adapters have been merged into the base model, and a GGUF quantization is included for local/Ollama workflows.

This repository hosts the **full merged model** plus **GGUF (q4_k_m)** for lightweight inference.

## Model Description

- **Base Model**: [deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B)
- **Model Type**: Causal LM (code reasoning)
- **Parameters**: 7B
- **Context Length**: Inherits the base DeepSeek-R1-Distill-Qwen-7B context window
- **Fine-tuning**: LoRA on TypeScript reasoning/debugging tasks
- **License**: MIT
- **Language**: English, TypeScript/JavaScript code
- **System Prompt**: Focus on step-by-step debugging, refactoring, and design-level explanations before giving the final typed solution.

### What it is good at
- ✅ Explaining TypeScript bugs and fixes
- ✅ Refactoring and API design discussions
- ✅ Generating strongly-typed code for React/Next.js/Angular/Node.js
- ✅ Producing clear reasoning traces before final answers

## Intended Uses

**Primary**: TypeScript reasoning, debugging, refactoring, and guided code generation.  
**Out-of-scope**: Arbitrary natural-language chat unrelated to code; safety-sensitive or factual tasks outside TypeScript.

### Prompt Examples

```
"Debug this TypeScript function and explain the bug step by step:\n\nfunction add(a?: number, b?: number) { return a + b; }"

"Design a typed API surface for a Next.js todo service. Explain design choices, then show the final code."
```

## How to Use

### Ollama (recommended for local)

```bash
ollama create typescript-slm-7b-reasoning -f gguf/Modelfile-q4_k_m
ollama run typescript-slm-7b-reasoning "Explain why this React hook re-renders too often..."
```
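
For reference, an Ollama Modelfile for this model typically looks like the sketch below. The actual `gguf/Modelfile-q4_k_m` shipped in this repo may set different parameters; the sampling values and system prompt here are illustrative:

```
FROM ./typescript-slm-7b-reasoning-q4_k_m.gguf
PARAMETER temperature 0.3
PARAMETER top_p 0.95
SYSTEM """Focus on step-by-step debugging, refactoring, and design-level explanations before giving the final typed solution."""
```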

### Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "sylvester-francis/typescript-slm-7b-reasoning-full",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("sylvester-francis/typescript-slm-7b-reasoning-full")

prompt = "Refactor this TypeScript service for better typing and error handling..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    temperature=0.3,
    top_p=0.95,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
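
To apply the recommended system prompt explicitly, you can render the conversation with the tokenizer's chat template. This is a minimal sketch reusing the `model` and `tokenizer` loaded above, and it assumes the bundled template accepts a `system` role:

```python
messages = [
    {
        "role": "system",
        "content": "Focus on step-by-step debugging, refactoring, and design-level explanations before giving the final typed solution.",
    },
    {
        "role": "user",
        "content": "Why does this narrowing fail?\n\nfunction f(x: string | null) { if (x !== undefined) { x.length; } }",
    },
]
# Render the conversation with the model's own chat template.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=512, temperature=0.3, do_sample=True)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```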

### GGUF (llama.cpp)

```bash
huggingface-cli download sylvester-francis/typescript-slm-7b-reasoning-full \
  gguf/typescript-slm-7b-reasoning-q4_k_m.gguf --local-dir ./models

./llama-cli -m ./models/gguf/typescript-slm-7b-reasoning-q4_k_m.gguf \
  -p "Explain and fix this TypeScript type error..."
```
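
The same GGUF file also works with the `llama-cpp-python` bindings if you prefer calling it from Python. A minimal sketch, assuming `pip install llama-cpp-python` and the download path used above:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/gguf/typescript-slm-7b-reasoning-q4_k_m.gguf",
    n_ctx=4096,  # context window; raise if your prompts are long and memory allows
)
result = llm(
    "Explain and fix this TypeScript type error:\n\nconst n: number = '42';",
    max_tokens=512,
    temperature=0.3,
)
print(result["choices"][0]["text"])
```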

## Model Files

- `gguf/typescript-slm-7b-reasoning-q4_k_m.gguf` (≈4.7GB)
- `gguf/Modelfile-q4_k_m` (Ollama import)

## Training Data (summary)

- Curated TypeScript code from popular GitHub repos (React, Next.js, Angular, Node.js)
- Stack Overflow Q&A focused on debugging and reasoning
- Filters for strong typing, framework best practices, and reasoning-rich examples

## Training Configuration (LoRA)

```yaml
Base Model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
Method: LoRA fine-tuning
Target Domains: TypeScript reasoning, debugging, refactoring
LoRA Rank / Alpha: tuned for stability and reasoning depth
Optimizer: AdamW
Max Sequence Length: inherits base model context window
```
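
A setup along these lines can be expressed with PEFT. The sketch below is illustrative only, not the exact configuration used for this model; the rank, alpha, dropout, and target modules are placeholders (the card above states these were tuned but does not publish the values):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
    device_map="auto",
)
config = LoraConfig(
    r=16,               # placeholder rank; the actual value was tuned
    lora_alpha=32,      # placeholder alpha; the actual value was tuned
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # typical Qwen attention projections
    lora_dropout=0.05,  # placeholder
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only adapter weights should be trainable
```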

## Evaluation

Qualitative checks on TypeScript debugging/refactoring prompts show:
- Clear reasoning steps before final code
- Strong type usage and framework-aware patterns
- Concise, actionable fixes
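
A quick smoke test in the same spirit can be run with the Transformers setup from above (a sketch reusing that `model` and `tokenizer`; the prompts are examples, not a benchmark):

```python
checks = [
    "Debug this TypeScript function and explain the bug step by step:\n\n"
    "function add(a?: number, b?: number) { return a + b; }",
    "Refactor this handler for stricter typing:\n\n"
    "app.get('/todos', (req, res) => res.json(todos));",
]
for prompt in checks:
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.3, do_sample=True)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    print("-" * 60)  # eyeball that reasoning steps precede the final code
```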

## Safety & Limitations

- May generate incorrect code or hallucinate APIs; review output before production use.
- Not a security scanner; do not rely on it for vulnerability assessments.
- Avoid non-code or high-stakes factual tasks.

## License

MIT for the fine-tuned model; base model license and dataset terms also apply.

## Contact

- Maintainer: Sylvester Francis (`@sylvester-francis` on Hugging Face)
- Issues/feedback: open a discussion on the model repo