# GLiNER Fine-Tuned for Floatbot.ai

Fine-tuned version of `knowledgator/gliner-x-large` for domain-specific NER in the conversational AI / customer-support domain.
## Available Formats

| Format | File | Size | Use Case |
|---|---|---|---|
| PyTorch | `pytorch_model.bin` | 2.3 GB | Training, GPU inference |
| ONNX FP32 | `onnx/model.onnx` + `onnx/model.onnx.data` | 2.3 GB | Baseline ONNX, maximum accuracy |
| ONNX INT8 ⭐ | `onnx/model_int8.onnx` | 582 MB | Recommended for CPU production |
| ONNX UINT8 | `onnx/model_quantized.onnx` | 582 MB | Alternative CPU quantization |
**Recommendation:** Use `model_int8.onnx` for production CPU deployment: it is 4× smaller than PyTorch, with ~80% entity agreement and faster inference.
## Entity Types (30)
This model recognizes 30 entity types relevant to Floatbot.ai's platform:
customer_name · organization · product_name · service_type · channel · date · time · monetary_amount · order_id · ticket_id · account_number · phone_number · email_address · complaint_category · intent_keyword · department · plan_name · feature_name · api_endpoint · bot_name · language · platform · integration · metric_name · percentage · duration · location · priority_level · status · error_type
## Usage

### PyTorch (original)

```python
from gliner import GLiNER

model = GLiNER.from_pretrained("Rishi2455/gliner-floatbot-ai")

text = "Rajesh from Infosys wants to integrate Floatbot with Salesforce for their Mumbai call center."
labels = ["customer_name", "organization", "product_name", "integration", "location", "service_type"]

entities = model.predict_entities(text, labels, threshold=0.4)
for ent in entities:
    print(f"  '{ent['text']}' → {ent['label']} (score: {ent['score']:.3f})")
```
### ONNX INT8 Quantized (recommended for production)

```python
from gliner import GLiNER

# Load the INT8 quantized ONNX model — same API, 4× smaller, faster on CPU
model = GLiNER.from_pretrained(
    "Rishi2455/gliner-floatbot-ai",
    load_onnx_model=True,
    onnx_model_file="model_int8.onnx",
)

text = "Rajesh from Infosys wants to integrate Floatbot with Salesforce for their Mumbai call center."
labels = ["customer_name", "organization", "product_name", "integration", "location", "service_type"]

entities = model.predict_entities(text, labels, threshold=0.4)
for ent in entities:
    print(f"  '{ent['text']}' → {ent['label']} (score: {ent['score']:.3f})")
```
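The entity dicts returned by `predict_entities` can be grouped by label for downstream slot-filling or routing. A minimal sketch, assuming only the `text`/`label`/`score` keys shown above (the `group_entities` helper and sample data are illustrative, not part of the model's API):

```python
from collections import defaultdict

def group_entities(entities, min_score=0.4):
    """Group GLiNER entity dicts by label, keeping only confident spans."""
    slots = defaultdict(list)
    for ent in entities:
        if ent["score"] >= min_score:
            slots[ent["label"]].append(ent["text"])
    return dict(slots)

# Hypothetical output in the dict shape predict_entities returns
sample = [
    {"text": "Rajesh", "label": "customer_name", "score": 0.91},
    {"text": "Infosys", "label": "organization", "score": 0.88},
    {"text": "Salesforce", "label": "integration", "score": 0.35},
]
print(group_entities(sample))
# → {'customer_name': ['Rajesh'], 'organization': ['Infosys']}
```

The low-scoring "Salesforce" span is dropped by the same 0.4 threshold used in the examples above.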
### ONNX FP32 (full precision)

```python
from gliner import GLiNER

model = GLiNER.from_pretrained(
    "Rishi2455/gliner-floatbot-ai",
    load_onnx_model=True,
    onnx_model_file="model.onnx",
)
```
## Benchmarks
Tested on CPU (Intel Xeon, single-threaded):
| Format | Latency (ms/inference) | Size | Entity Agreement vs PyTorch |
|---|---|---|---|
| PyTorch FP32 | 379 ms | 2.3 GB | Baseline |
| ONNX INT8 | 343 ms (1.10× faster) | 582 MB (4× smaller) | ~80% |
**Note:** Speedup is more significant on optimized hardware (AVX-512, ARM NEON). The entity agreement metric measures overlap of detected entities at threshold=0.3 across test examples — minor differences in borderline entities are expected and do not indicate quality degradation for high-confidence predictions.
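The agreement metric can be computed by treating each prediction as a (text, label) pair and measuring how many baseline entities the quantized model also finds. A minimal sketch of that calculation, not the exact evaluation script (the function name and sample predictions are illustrative):

```python
def entity_agreement(baseline_preds, compared_preds):
    """Fraction of baseline entities also detected by the compared model.

    Each prediction is a list of (text, label) tuples at a fixed threshold.
    """
    base = set(baseline_preds)
    if not base:
        return 1.0
    return len(base & set(compared_preds)) / len(base)

# Hypothetical predictions from the two formats on one example
pytorch_preds = [("Rajesh", "customer_name"), ("Infosys", "organization"),
                 ("Mumbai", "location"), ("Salesforce", "integration")]
int8_preds = [("Rajesh", "customer_name"), ("Infosys", "organization"),
              ("Mumbai", "location")]

print(entity_agreement(pytorch_preds, int8_preds))  # → 0.75
```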
## Training Details
| Parameter | Value |
|---|---|
| Base model | knowledgator/gliner-x-large (1.3B params) |
| Training samples | 86 |
| Entity types | 30 |
| Learning rate (encoder) | 5e-6 |
| Learning rate (others) | 1e-5 |
| Loss | Focal loss (α=0.75, γ=2) |
| Epochs | 12 |
| Effective batch size | 8 |
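The focal loss in the table down-weights easy, confidently classified examples by the factor `(1 - p_t)**gamma`, which helps on a small, imbalanced dataset like this one. A per-example sketch of the binary form with the listed α=0.75, γ=2; this is illustrative, not the actual training code:

```python
import math

def focal_loss(p, target, alpha=0.75, gamma=2.0):
    """Binary focal loss for a single prediction.

    p: predicted probability of the positive class; target: 0 or 1.
    """
    p_t = p if target == 1 else 1.0 - p          # prob assigned to the true class
    alpha_t = alpha if target == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

# A confident correct prediction is heavily down-weighted by (1 - p_t)^2 ...
easy = focal_loss(0.95, 1)
# ... while a hard example keeps most of its cross-entropy weight.
hard = focal_loss(0.30, 1)
print(f"easy={easy:.5f}  hard={hard:.5f}")
```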
### Training Recipe

Based on published research:

- GLiNER-BioMed — domain adaptation blueprint
- NERCat — small-dataset fine-tuning recipe
- GLiNER — original model architecture
## ONNX Export Details

The ONNX models were exported using GLiNER's built-in `export_to_onnx()` method with opset version 17. Quantization uses ONNX Runtime's `quantize_dynamic`:

- INT8: signed 8-bit integer weights via `QuantType.QInt8`
- UINT8: unsigned 8-bit integer weights via `QuantType.QUInt8`

Both use dynamic quantization — no calibration dataset is needed; activation scales are computed at runtime per batch.
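Conceptually, INT8 quantization maps each float tensor onto 8-bit integers with a per-tensor scale, so the weights shrink 4× while values are recovered as `q * scale`. A simplified symmetric-quantization sketch of the idea; ONNX Runtime's actual implementation differs in detail (per-channel scales, zero points, saturation handling):

```python
def quantize_int8(values):
    """Symmetric per-tensor INT8 quantization: x ≈ q * scale, q in [-127, 127]."""
    scale = max(abs(v) for v in values) / 127.0
    return [round(v / scale) for v in values], scale

def dequantize(quantized, scale):
    """Recover approximate float values from INT8 codes."""
    return [q * scale for q in quantized]

weights = [0.5, -1.27, 0.003, 1.0]
q, s = quantize_int8(weights)
restored = dequantize(q, s)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, f"scale={s:.5f}", f"max_err={max_err:.5f}")
```

The rounding error is bounded by half a quantization step (`scale / 2`), which is why high-confidence predictions are largely unaffected while borderline entity scores can shift slightly.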
## Training Data & Script

See [Rishi2455/gliner-floatbot-ai-training](https://huggingface.co/datasets/Rishi2455/gliner-floatbot-ai-training) for the complete training dataset and fine-tuning script.
### How to Run Training

```bash
pip install gliner torch transformers accelerate trackio huggingface_hub
huggingface-cli login

# Download and run the training script
wget https://huggingface.co/datasets/Rishi2455/gliner-floatbot-ai-training/resolve/main/train_gliner.py
python train_gliner.py
```

**Hardware required:** GPU with ≥24 GB VRAM (A10G, RTX 3090, A100, etc.)