Paper: *Antislop: A Comprehensive Framework for Identifying and Eliminating Repetitive Patterns in Language Models* (2510.15061)
A Qwen2.5-7B-Instruct fine-tune designed to write natural, human-like story prose while actively suppressing typical AI-isms ("delve into", "tapestry of", "it's important to note", etc.).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("SirOswald/prose-humanizer-7b")
tokenizer = AutoTokenizer.from_pretrained("SirOswald/prose-humanizer-7b")

messages = [
    {"role": "system", "content": "You are a fiction writer. Write vivid, natural prose."},
    {"role": "user", "content": "Write a short story about a lighthouse keeper who discovers something unexpected in the fog."},
]

# Render the chat template, then move the tokenized prompt to the model's device
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

# do_sample=True is required for temperature/top_p to take effect
outputs = model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=0.8, top_p=0.95)

# Decode only the newly generated tokens, skipping the echoed prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
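To spot-check whether a generation still contains the AI-isms the model is tuned to suppress, a simple phrase scan is enough. This is a minimal sketch, not part of the model's tooling; the phrase list is illustrative, taken from the examples in the description above, and `find_slop` is a hypothetical helper name:

```python
# Illustrative subset of "slop" phrases from the model description;
# extend with your own list as needed.
SLOP_PHRASES = ["delve into", "tapestry of", "it's important to note"]

def find_slop(text: str) -> list[str]:
    """Return every known slop phrase present in `text` (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in SLOP_PHRASES if phrase in lowered]

# Example: scan a generated story and flag any hits.
# hits = find_slop(generated_story)
# if hits:
#     print("Slop detected:", hits)
```

Running this over a batch of generations gives a rough regression check when comparing against the base Qwen2.5-7B-Instruct model.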