MoodShift: RoBERTa+ESA+TF-IDF+FL with LLM Data Augmentation

Novel contribution: LLM-based minority-class augmentation via Groq (llama-3.3-70b) with self-consistency filtering for label fidelity.
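The card does not spell out how the self-consistency filter works. A minimal sketch of one plausible implementation, assuming each augmented candidate is re-labeled several times by the LLM and kept only when the vote majority matches the intended minority label (the function name, the vote format, and the 0.8 agreement threshold are all illustrative, not from the model card):

```python
from collections import Counter

def self_consistency_filter(samples, min_agreement=0.8):
    """Keep an LLM-generated sample only if repeated LLM re-annotations
    (votes) agree with the intended minority-class label at or above
    min_agreement. Each sample is (text, intended_label, votes)."""
    kept = []
    for text, intended_label, votes in samples:
        # Majority label among the independent LLM re-annotations of `text`
        top_label, top_count = Counter(votes).most_common(1)[0]
        if top_label == intended_label and top_count / len(votes) >= min_agreement:
            kept.append((text, intended_label))
    return kept

candidates = [
    ("I feel so hopeless today", "sadness", ["sadness"] * 5),
    ("what a day", "anger", ["neutral", "anger", "neutral", "anger", "neutral"]),
]
filtered = self_consistency_filter(candidates)
# Only the first candidate survives: 5/5 votes agree with "sadness",
# while the second's majority vote ("neutral") contradicts its label.
```

Filtering on agreement rather than a single LLM pass is what guards label fidelity: a candidate whose label the LLM cannot reproduce consistently is discarded instead of polluting the minority class.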

Test Accuracy: 0.9265 | Macro F1: 0.8926

Baseline (no augmentation): Accuracy 0.9235 | Macro F1 0.8831

ICCA 2026 HCI Research: MoodShift Adaptive Chatbot


Dataset used to train Sarjinkhan2003/moodshift-roberta-llm-augmented