Tags: text-generation · creative-writing · prose · humanizer · anti-slop · dpo · sft

Prose Humanizer 7B

A Qwen2.5-7B-Instruct fine-tune designed to write natural, human-like story prose while actively suppressing typical AI-isms ("delve into", "tapestry of", "it's important to note", etc.).
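As a rough illustration of what "suppressing AI-isms" means in practice, generated text can be scored against a small blocklist of stock phrases. This is a minimal sketch; the phrase list below is a hypothetical sample for illustration, not the blocklist used in training.

```python
import re

# Hypothetical sample of AI-typical phrases -- NOT the actual training blocklist.
SLOP_PHRASES = [
    "delve into", "tapestry of", "it's important to note",
    "a testament to", "in the heart of",
]

def slop_count(text):
    """Count case-insensitive occurrences of blocklisted phrases in text."""
    lower = text.lower()
    return sum(len(re.findall(re.escape(p), lower)) for p in SLOP_PHRASES)

slop_count("The fog was a tapestry of grey, a testament to the sea.")  # 2 hits
```

A metric like this is only a crude proxy, but it gives a quick before/after signal when comparing base-model and fine-tuned outputs.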

Training Pipeline

Stage 1: DPO — Learn Human Prose Preferences

  • Gutenberg DPO (918 examples): Classic literature vs LLM-generated continuations. Teaches preference for authentic literary prose over AI-generated text.
  • LitBench-Train (43,827 examples): Reddit r/WritingPrompts stories ranked by community upvotes. Human-written chosen/rejected pairs with real quality signal.
  • Method: DPO with LoRA (rank 128, alpha 256), β=0.1, lr=5e-6
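The DPO objective used above can be sketched in plain Python: given the summed log-probabilities of the chosen and rejected completions under the policy and under the frozen reference model, the per-pair loss is the negative log-sigmoid of β times the difference of the two log-ratios. The log-probability values below are made up for illustration.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Direct Preference Optimization loss for a single preference pair.

    Each argument is the summed log-probability of a completion under
    either the trainable policy or the frozen reference model.
    """
    # Log-ratios measure how far the policy has moved from the
    # reference on each completion.
    chosen_logratio = policy_chosen_logp - ref_chosen_logp
    rejected_logratio = policy_rejected_logp - ref_rejected_logp
    # loss = -log sigmoid(beta * margin)
    margin = beta * (chosen_logratio - rejected_logratio)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Made-up log-probs where the policy slightly prefers the human-written text.
loss = dpo_loss(-42.0, -55.0, -44.0, -50.0, beta=0.1)
```

Minimizing this loss pushes the policy to assign relatively more probability to the human-preferred completion than the reference model does, which is exactly the "prefer authentic prose" signal Stage 1 extracts from the Gutenberg and LitBench pairs.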

Stage 2: SFT — Anti-Slop Editing

  • SlopToPolish (1,000 examples): Chain-of-thought editing task where the model learns to identify and fix AI-typical writing patterns (clichés, redundancy, lack of subtext).
  • Method: SFT with LoRA (rank 64, alpha 128), lr=3e-5, 3 epochs
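The editing task can be packaged as ordinary chat-style SFT examples. A minimal sketch, assuming a simple (slop, reasoning, polished) triple per example; the field layout and system prompt here are assumptions, not the actual SlopToPolish schema.

```python
def to_sft_messages(slop_text, reasoning, polished_text):
    """Wrap one editing example as a chat transcript for SFT.

    The target pairs a chain-of-thought critique with the rewrite, so the
    model learns to name the AI-typical pattern before fixing it.
    """
    return [
        {"role": "system",
         "content": "You are a prose editor. Identify AI-typical writing "
                    "patterns, then rewrite the passage naturally."},
        {"role": "user", "content": slop_text},
        {"role": "assistant",
         "content": f"Critique: {reasoning}\n\nRevision: {polished_text}"},
    ]

example = to_sft_messages(
    "Her heart pounded as she delved into the tapestry of secrets.",
    "'Delved into' and 'tapestry of' are stock AI phrases; the emotion is told, not shown.",
    "She pried the diary open. Her hands wouldn't stay still.",
)
```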

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "SirOswald/prose-humanizer-7b",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("SirOswald/prose-humanizer-7b")

messages = [
    {"role": "system", "content": "You are a fiction writer. Write vivid, natural prose."},
    {"role": "user", "content": "Write a short story about a lighthouse keeper who discovers something unexpected in the fog."}
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=1024,
    do_sample=True,  # required for temperature/top_p to take effect
    temperature=0.8,
    top_p=0.95,
)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))