gpt-oss-20b-pai-debator: Personality AI (pAI) - Transformers Version

Project Website

Overview

This model represents the inaugural step in Personality AI (pAI), an innovative project dedicated to preserving the intellectual treasures of humanity's great minds. It is fine-tuned from the base model unsloth/gpt-oss-20b (MXFP4 quantized).

At its heart, pAI aims to keep the essence of influential thinkers alive in new forms, ensuring their methods of inquiry and wisdom continue to inspire future generations. This edition focuses on debate as a pathway to truth, much like Socrates advocated, emphasizing clarity, logic, and the pursuit of understanding over division.

This Transformers edition, optimized for seamless integration and inference, promotes pro-democracy and freedom-oriented values, encouraging meritocracy, personal responsibility, and faith-inspired reflections in a spirit of unity and progress.

We extend an invitation to great minds (coders, educators, truth-seekers, and collaborators from organizations like Turning Point UK and Turning Point USA) to participate. Your insights can help evolve pAI into a global instrument for betterment, revitalizing communities through thoughtful conversations.

The model is available in Transformers format for easy loading with Hugging Face libraries, ideal for educational, reflective, or exploratory applications that align with preserving mankind's treasures.

Model Details

  • Base Model: unsloth/gpt-oss-20b (Transformers-compatible)
  • Fine-Tuning Method: QLoRA with Unsloth (rank=64, targeting the attention and MoE projection layers q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj); see the configuration sketch after this list
  • Training Epochs: 6
  • Dataset: Custom Harmony-formatted dataset (~7,000 examples) derived from Charlie Kirk's publicly available materials (e.g., YouTube, interviews), incorporating chain-of-thought (CoT) reasoning (75% focus) for analytical depth
  • Max Sequence Length: 8192
  • Optimizer: AdamW 8-bit
  • Learning Rate: 1e-4
  • Hardware: RTX PRO 6000 (96GB VRAM) for efficient training
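
For reference, a minimal Unsloth setup consistent with the settings above might look like the sketch below. This is a reconstruction, not the published training script: the placeholder dataset, lora_alpha, dropout, and batch size are assumptions, and exact trl/Unsloth keyword arguments vary by library version.

from datasets import Dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the 4-bit quantized base model (QLoRA-style)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gpt-oss-20b",
    max_seq_length=8192,
    load_in_4bit=True,
)

# Attach LoRA adapters at rank 64 on the listed projection layers
model = FastLanguageModel.get_peft_model(
    model,
    r=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=64,     # assumption: not stated on the card
    lora_dropout=0.0,  # assumption
)

# Placeholder standing in for the (unreleased) Harmony-formatted dataset
dataset = Dataset.from_list([{"text": "<|start|>user<|message|>...<|end|>"}])

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    args=TrainingArguments(
        num_train_epochs=6,
        learning_rate=1e-4,
        optim="adamw_8bit",
        per_device_train_batch_size=1,  # assumption
        output_dir="outputs",
    ),
)
trainer.train()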

Intended Uses

  • Interactive debates: Engage users in Socratic questioning to uncover truths on values, education, and societal progress.
  • Educational tools: Help students practice critical thinking and logical argumentation.
  • Cultural preservation: Explore ideas from great minds in a dynamic, conversational format.
  • Community improvement: Spark discussions that inspire positive change in societies like Britain and America.

Example: prompt the model with a question about freedom or values, and receive a reasoned, step-by-step response leading to an insightful conclusion.

Limitations

  • Scope: Generally aligned with pro-democracy, pro-freedom, meritocracy, Christianity, pro-life, and personal-responsibility themes; may not suit all audiences or viewpoints, and may revert to the base model's general patterns on unrelated subjects.
  • Bias: Reflects source material's perspectives; users should approach with open minds.
  • Output quality: May vary with prompt complexity.
  • Verbosity: Outputs can be detailed; adjust generation parameters for brevity (see the snippet after this list).
  • Not for high-stakes use: Intended for reflection, not decisions; always verify with human judgment.
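
As an illustration of the verbosity point, generation length can be capped explicitly. This reuses the model and inputs objects from the How to Use section below; the parameter values are illustrative, not tuned defaults.

# Illustrative settings for shorter answers; values are assumptions, not tuned
outputs = model.generate(
    inputs,
    max_new_tokens=256,      # hard cap on response length
    do_sample=True,
    temperature=0.7,
    repetition_penalty=1.1,  # mildly discourages rambling
)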

Evaluation

  • Training monitored via loss curves, which showed a steady decrease.
  • Holdout evaluation on 10% of the dataset assessed CoT coherence and relevance (see the sketch after this list).
  • Qualitative review ensured alignment with Socratic truth-seeking.
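
The evaluation scripts are not published; as a rough sketch of the holdout check described above, perplexity over held-out records could be computed like this (function and variable names are illustrative):

import math
import torch

def holdout_perplexity(model, tokenizer, texts):
    # Average causal-LM perplexity over a list of holdout strings
    losses = []
    for text in texts:
        ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)
        with torch.no_grad():
            loss = model(ids, labels=ids).loss  # shifted cross-entropy
        losses.append(loss.item())
    return math.exp(sum(losses) / len(losses))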

How to Use

With Transformers (Python)

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Entz/gpt-oss-20b-pai-kirk-transformers"

# Load in bfloat16 and let Accelerate place layers across available devices
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [
    {"role": "system", "content": "You are a seeker of truth through debate."},
    {"role": "user", "content": "How can values shape a better society?"},
]

# Build the chat-formatted prompt and move it to the model's device
inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, skipping the echoed prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))

With GGUF (llama.cpp/Ollama) - Companion Format

For offline/local use, download the GGUF variant from the repository, create a Modelfile containing the single line FROM ./gpt-oss-20b-pai-debator.gguf, then register and run it with Ollama:

ollama create pai-debator -f Modelfile
ollama run pai-debator

Prompt in the interface for engaging dialogues.
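
Alternatively, a recent llama.cpp build can run the same file directly; a minimal interactive invocation (standard llama-cli flags, temperature value illustrative) is:

llama-cli -m gpt-oss-20b-pai-debator.gguf -cnv --temp 0.7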

Training Data

Synthesized from public materials, formatted in Harmony style (system/user/assistant with analysis/final channels) to emphasize reasoning and truth-seeking.
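
As a rough illustration (channel and token names follow OpenAI's published Harmony spec; the actual training records are not released), a single record might be laid out as:

<|start|>system<|message|>You are a seeker of truth through debate.<|end|>
<|start|>user<|message|>How can values shape a better society?<|end|>
<|start|>assistant<|channel|>analysis<|message|>Step-by-step reasoning toward an answer...<|end|>
<|start|>assistant<|channel|>final<|message|>A reasoned conclusion presented to the user.<|return|>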

Ethical Considerations

pAI adheres to open-source principles, using only public resources. It promotes unity and inquiry, not division, encouraging respectful discourse for societal good.

Acknowledgments

Powered by Unsloth, Hugging Face Transformers, and the open-source community. Inspired by timeless philosophers like Socrates, dedicated to preserving human wisdom.
