DeepSeek-R1-Distill-Qwen-32B Uncensored
This is an uncensored/abliterated version of deepseek-ai/DeepSeek-R1-Distill-Qwen-32B, created using the Heretic abliteration technique.
Model Details
- Base Model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
- Parameters: 32B
- Architecture: Qwen2ForCausalLM
- Context Length: 131,072 tokens
- License: DeepSeek License
About DeepSeek-R1-Distill
DeepSeek-R1-Distill models are distilled from the DeepSeek-R1 reasoning model, inheriting strong reasoning capabilities while being more efficient to run. This model excels at:
- Mathematical reasoning
- Code generation
- Logical problem solving
- Step-by-step explanations
What is Abliteration?
Abliteration is a technique that removes refusal behavior from language models by identifying and suppressing the "refusal direction" in the model's activation space. This allows the model to respond to a wider range of queries without built-in restrictions.
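The core idea can be sketched in a few lines of NumPy. This is a toy illustration with synthetic activations, not the Heretic implementation: the refusal direction is estimated as the normalized difference between mean activations on refused vs. answered prompts, and ablation projects that direction out.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # hidden size (toy value; real models use thousands)

# Stand-in activations: hidden states collected from prompts the model
# refuses vs. prompts it answers (synthetic here for illustration).
harmful_acts = rng.normal(size=(100, d)) + 2.0 * np.eye(d)[0]  # shifted along dim 0
harmless_acts = rng.normal(size=(100, d))

# Estimate the "refusal direction" as the normalized difference of means.
r = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
r /= np.linalg.norm(r)

def ablate(x: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Remove the component of x along r: x - (x . r) r."""
    return x - np.outer(x @ r, r)

ablated = ablate(harmful_acts, r)
# After ablation, the activations have (numerically) zero component along r.
print(np.abs(ablated @ r).max())
```

Suppressing this single direction is what prevents the model from expressing its trained refusal behavior, while leaving the rest of the activation space untouched.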
Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "richardyoung/Deepseek-R1-Distill-Qwen-32b-uncensored"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Solve this step by step: What is 15% of 240?"}
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
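DeepSeek-R1-Distill models emit their chain of thought inside `<think>...</think>` tags before the final answer. If you only want the answer, a small helper (`split_reasoning` is a hypothetical name, not part of any library) can separate the two parts of the decoded completion:

```python
import re

def split_reasoning(completion: str) -> tuple[str, str]:
    """Split an R1-style completion into (reasoning, final_answer)."""
    match = re.search(r"<think>(.*?)</think>", completion, flags=re.DOTALL)
    if match is None:
        # No think block found; treat the whole completion as the answer.
        return "", completion.strip()
    reasoning = match.group(1).strip()
    answer = completion[match.end():].strip()
    return reasoning, answer

sample = "<think>15% of 240 = 0.15 * 240 = 36</think>\nThe answer is 36."
thoughts, answer = split_reasoning(sample)
print(answer)  # -> The answer is 36.
```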
Important Disclaimer
This model has been modified to remove safety guardrails and refusal behaviors.
Intended Use
- Research and educational purposes
- Understanding model behavior and limitations
- Creative writing and roleplay with consenting adults
- Red-teaming and safety research
Not Intended For
- Generating harmful, illegal, or unethical content
- Harassment, abuse, or malicious activities
- Misinformation or deception
- Any use that violates applicable laws
User Responsibility
By using this model, you acknowledge that:
- You are solely responsible for how you use this model and any content it generates
- The model creator accepts no liability for misuse or harmful outputs
- You will comply with all applicable laws and ethical guidelines
- You understand this model may produce inaccurate, biased, or inappropriate content
Technical Note
This model was created using abliteration techniques that suppress the "refusal direction" in the model's activation space. This does not add new capabilities; it only removes trained refusal behaviors from the base model.
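One common way to make such a suppression permanent, rather than applying it at inference time, is to orthogonalize the weight matrices that write into the residual stream against the refusal direction. The sketch below illustrates that idea on a random matrix; it is an assumption about the general approach, not Heretic's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
d_out, d_in = 32, 16

W = rng.normal(size=(d_out, d_in))  # a matrix writing into the residual stream
r = rng.normal(size=d_out)
r /= np.linalg.norm(r)              # unit refusal direction in output space

# Orthogonalize: remove r's component from every column of W,
# so W can no longer write anything along r.
W_abl = W - np.outer(r, r @ W)

# The ablated matrix's outputs now have zero component along r.
print(np.abs(r @ W_abl).max())
```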
Use responsibly. You have been warned.
Credits
- Base Model: DeepSeek AI
- Abliteration: Heretic
- Model Curator: Richard Young | DeepNeuro.AI