Mojana COMET - Active Inference Edition
A fine-tuned COMET model that combines an Active Inference framework with Stackelberg leader-follower game theory, intended for robotics cognitive architectures.
Features
12 relation types for comprehensive situation understanding:
- Original: xSee, xReact, xNeed, xAnticipate, xIntent
- Active Inference: xBelief, xPrediction, xFreeEnergy, xHeuristic
- Game Theory: xStackelberg, xLeaderAction, xFollowerResponse
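For programmatic use, the relation tags can be kept in a small lookup. A minimal sketch, assuming the strings above are passed verbatim after the [REL] marker in prompts (the grouping keys are illustrative only):

```python
# Relation tags by group; the grouping keys are illustrative, the
# relation strings are the ones listed above.
RELATIONS = {
    "original": ["xSee", "xReact", "xNeed", "xAnticipate", "xIntent"],
    "active_inference": ["xBelief", "xPrediction", "xFreeEnergy", "xHeuristic"],
    "game_theory": ["xStackelberg", "xLeaderAction", "xFollowerResponse"],
}
```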
28 balanced scenario categories including:
- Life-threatening emergencies (fire, cardiac, choking, child safety)
- Threats and danger (weapons, aggressive people, accidents)
- Household situations (cooking, water, broken items, trivial)
- Social interactions (friendly, emotional support, help requests)
- Professional/business (metrics, technical failures, customer issues)
- Aerospace/robotics (propulsion, guidance systems)
- Decision-making (life decisions, advice, how-to)
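Training pairs follow the prompt format shown under Usage, mapping a situation plus relation to a short target completion. A hypothetical illustration of the data layout, not an actual entry from the training set:

```python
# Hypothetical example of the input/target layout implied by the
# "{situation} [REL] {relation}" prompt format under Usage.
example = {
    "category": "life-threatening emergency",
    "input": "smoke is filling the kitchen [REL] xPrediction",
    "target": "I predict: fire risk is escalating - act immediately",
}
```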
Usage
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("Haya-as/mojana-comet-active-inference")
model = AutoModelForSeq2SeqLM.from_pretrained("Haya-as/mojana-comet-active-inference")
def ask_comet(situation, relation):
    # Build the "{situation} [REL] {relation}" prompt and decode one completion
    prompt = f"{situation} [REL] {relation}"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_length=60, num_beams=4, no_repeat_ngram_size=3)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
# Example
print(ask_comet("a stranger is waving at me", "xSee"))
# Output: "I see: friendly gesture from stranger"
print(ask_comet("a stranger is waving a weapon at me", "xSee"))
# Output: "I see: armed threat - immediate danger to safety"
Training
- Base model: COMET (AI2 Mosaic)
- Training examples: ~21,000 balanced across 28 categories
- Precision: fp32 (fp16 caused gradient issues)
- Epochs: 10
- Batch size: 16
- Learning rate: 3e-5
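A rough equivalent of these settings expressed as transformers Seq2SeqTrainingArguments; output_dir and any option not listed in this card are placeholders, not the authors' actual training script:

```python
from transformers import Seq2SeqTrainingArguments

# Hyperparameters from the list above; everything else is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="mojana-comet-active-inference",
    num_train_epochs=10,
    per_device_train_batch_size=16,
    learning_rate=3e-5,
    fp16=False,  # fp32 training; fp16 reportedly caused gradient issues
)
```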
Developed by
NoMaVerse, for the Mojana robotics cognitive architecture.