# Eland Sentiment - Chinese Multi-Domain Sentiment Analysis
A Chinese multi-domain sentiment analysis model fine-tuned from Qwen3-4B with LoRA, achieving 92.03% macro-average accuracy on financial text and 86.85% on multi-domain text.
## Model Description
This is a LoRA adapter for `Qwen/Qwen3-4B`, fine-tuned for Chinese sentiment analysis across multiple domains:

- **Financial** - stock market, investment, economic news
- **Product** - product reviews, shopping discussions
- **Brand** - brand image, corporate reputation
- **Organization** - company news, workplace discussions
- **Social** - social media, public affairs
## Supported Tasks

- **Overall Sentiment** - classify the overall sentiment of a text (正面/負面/中立, i.e. positive/negative/neutral)
- **Entity Sentiment** - classify the sentiment expressed towards a specific entity
- **Opinion Sentiment** - classify the sentiment of a specific opinion
- **Stance Detection** - determine whether the text agrees with a given opinion
## Performance

### Financial Domain

| Task | Accuracy |
|---|---|
| Macro Average | 92.03% |
| Overall Sentiment | 95.00% |
| Entity Sentiment | 93.10% |
| Opinion Sentiment | 85.00% |
| Agrees with Text | 95.00% |
### Multi-Domain (Product, Brand, Organization, Social)

| Task | Accuracy |
|---|---|
| Macro Average | 86.85% |
| Overall Sentiment | 71.00% |
| Entity Sentiment | 78.57% |
| Opinion Sentiment | 97.83% |
| Agrees with Text | 100.00% |
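The macro averages are consistent with an unweighted mean over the four per-task scores, as this quick check in plain Python shows:

```python
# Consistency check: reported macro averages vs. unweighted task means
financial = [95.00, 93.10, 85.00, 95.00]      # overall, entity, opinion, stance
multi_domain = [71.00, 78.57, 97.83, 100.00]

print(sum(financial) / 4)     # ~92.025, reported as 92.03
print(sum(multi_domain) / 4)  # ~86.85, matching the reported figure
```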
## Usage

### With Transformers + PEFT
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model
base_model = "Qwen/Qwen3-4B"
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# Load the LoRA adapter and matching tokenizer
model = PeftModel.from_pretrained(model, "p988744/eland-sentiment-zh")
tokenizer = AutoTokenizer.from_pretrained("p988744/eland-sentiment-zh")

# Example: overall sentiment analysis.
# System prompt: "You are a professional financial-text sentiment analysis
# assistant. Analyze the overall sentiment of the following text and answer
# 「正面」 (positive), 「負面」 (negative), or 「中立」 (neutral)."
# User text: "TSMC's share price surged today; the market is bullish on
# continued growth in AI demand."
messages = [
    {"role": "system", "content": "你是一個專業的金融文本情感分析助手。請分析以下文本的整體情感，回答「正面」、「負面」或「中立」。"},
    {"role": "user", "content": "台積電今日股價大漲，市場看好AI需求持續成長。"},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=10, do_sample=False)

# Decode only the newly generated tokens
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(response)  # Expected: 正面
```
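Note: Qwen3's chat template also accepts an `enable_thinking` flag. If generations begin with a reasoning block instead of a bare label, passing `enable_thinking=False` to `apply_chat_template` should force a direct answer; this assumes the adapter was trained on non-thinking completions, which this card does not state explicitly.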
## Task Prompts

All prompts are in Traditional Chinese and instruct the model to answer with exactly one of 「正面」 (positive), 「負面」 (negative), or 「中立」 (neutral).

**Overall Sentiment:**

```text
System: 你是一個專業的金融文本情感分析助手。請分析以下文本的整體情感，回答「正面」、「負面」或「中立」。
User: [your text]
```
**Entity Sentiment:**

```text
System: 你是一個專業的金融文本情感分析助手。請分析以下文本中對「{entity}」的情感，回答「正面」、「負面」或「中立」。
User: [your text]
```
**Opinion Sentiment:**

```text
System: 你是一個專業的金融文本情感分析助手。請判斷以下觀點的情感傾向，回答「正面」、「負面」或「中立」。
User: 文本：[text]
觀點：[opinion]
```
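For convenience, these templates can be wrapped in a small helper. This is an illustrative sketch: only the Chinese prompt strings come from this card, the function itself is ours.

```python
# Prompt strings taken verbatim from the templates above
OVERALL = "你是一個專業的金融文本情感分析助手。請分析以下文本的整體情感，回答「正面」、「負面」或「中立」。"
ENTITY = "你是一個專業的金融文本情感分析助手。請分析以下文本中對「{entity}」的情感，回答「正面」、「負面」或「中立」。"
OPINION = "你是一個專業的金融文本情感分析助手。請判斷以下觀點的情感傾向，回答「正面」、「負面」或「中立」。"

def build_messages(task: str, text: str, entity: str = "", opinion: str = "") -> list[dict]:
    """Build a chat message list for one of the three documented tasks."""
    if task == "overall":
        return [{"role": "system", "content": OVERALL},
                {"role": "user", "content": text}]
    if task == "entity":
        return [{"role": "system", "content": ENTITY.format(entity=entity)},
                {"role": "user", "content": text}]
    if task == "opinion":
        return [{"role": "system", "content": OPINION},
                {"role": "user", "content": f"文本：{text}\n觀點：{opinion}"}]
    raise ValueError(f"unknown task: {task}")

# Usage, reusing the model/tokenizer loaded above:
# messages = build_messages("entity", "台積電今日股價大漲...", entity="台積電")
```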
## Model Variants

| Version | Repository | Use Case |
|---|---|---|
| LoRA Adapter | `p988744/eland-sentiment-zh` | Transformers + PEFT |
| GGUF | `p988744/eland-sentiment-zh-gguf` | Ollama / llama.cpp |
| Full Merged | `p988744/eland-sentiment-zh-vllm` | vLLM |
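The full merged variant can be served without PEFT. A minimal offline-inference sketch, assuming a recent vLLM release that provides the `LLM.chat` API:

```python
from vllm import LLM, SamplingParams

# Load the merged weights directly -- no adapter attachment needed
llm = LLM(model="p988744/eland-sentiment-zh-vllm")
params = SamplingParams(temperature=0.0, max_tokens=10)  # greedy, short label output

messages = [
    {"role": "system", "content": "你是一個專業的金融文本情感分析助手。請分析以下文本的整體情感，回答「正面」、「負面」或「中立」。"},
    {"role": "user", "content": "台積電今日股價大漲，市場看好AI需求持續成長。"},
]
outputs = llm.chat(messages, params)
print(outputs[0].outputs[0].text)  # Expected: 正面
```

The GGUF build can likewise be pulled straight into Ollama (e.g. `ollama run hf.co/p988744/eland-sentiment-zh-gguf`), assuming a recent Ollama version with Hugging Face GGUF support.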
## Training Details
| Parameter | Value |
|---|---|
| Base Model | Qwen/Qwen3-4B |
| Method | LoRA (PEFT) |
| LoRA Rank | 32 |
| LoRA Alpha | 64 |
| Trainable Params | 66M (1.62%) |
| Epochs | 8 |
| Learning Rate | 1e-5 |
| Batch Size | 8 (effective) |
| Training Time | ~47 minutes |
| Hardware | NVIDIA L40S |
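The LoRA settings in the table map onto a PEFT config roughly as follows. This is a sketch: `target_modules` and `lora_dropout` are not stated in this card and are assumptions.

```python
from peft import LoraConfig

# Reconstructed from the table above. target_modules and lora_dropout are
# NOT documented in this card -- typical values for Qwen-style attention
# and MLP projections are assumed here.
lora_config = LoraConfig(
    r=32,              # LoRA rank (from the table)
    lora_alpha=64,     # LoRA alpha (from the table)
    lora_dropout=0.05, # assumption
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # assumption
)
```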
## Dataset

Trained on a combined dataset:

- **Financial**: 1,887 training samples (Taiwan stock-market forum posts and news)
- **Multi-domain**: 600 training samples (product, brand, organization, social)
- **Total**: 2,487 training samples
Task distribution:
- Overall sentiment: ~40%
- Entity sentiment: ~30%
- Opinion sentiment: ~30%
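A plausible single training record, consistent with the chat format shown under Task Prompts (an assumption; the exact serialization used for fine-tuning is not documented here):

```python
# Hypothetical training record -- the actual on-disk format is not
# documented in this card, only the prompt templates are.
sample = {
    "messages": [
        {"role": "system", "content": "你是一個專業的金融文本情感分析助手。請分析以下文本的整體情感，回答「正面」、「負面」或「中立」。"},
        {"role": "user", "content": "台積電今日股價大漲，市場看好AI需求持續成長。"},
        {"role": "assistant", "content": "正面"},
    ]
}
```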
## Limitations

- **Language**: Traditional Chinese only
- **Best Domain**: financial text (92.03% macro-average accuracy)
- **Other Domains**: product, brand, organization, social (86.85% macro-average accuracy)
## Citation

```bibtex
@misc{eland-sentiment-zh,
  author    = {Eland AI},
  title     = {Eland Sentiment: Chinese Financial Sentiment Analysis Model},
  year      = {2025},
  publisher = {HuggingFace},
  url       = {https://huggingface.co/p988744/eland-sentiment-zh}
}
```
## License

Apache 2.0