---

# CAMeLBERT MSA NER Model

## Model description
**CAMeLBERT MSA NER Model** is a Named Entity Recognition (NER) model that was built by fine-tuning the [CAMeLBERT Modern Standard Arabic (MSA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model. For the fine-tuning, we used the [ANERcorp](https://camel.abudhabi.nyu.edu/anercorp/) dataset. Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).

## Intended uses
You can use the CAMeLBERT MSA NER model directly as part of the transformers pipeline or as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) NER component.

#### How to use
You can use this model directly with a pipeline to do NER:
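
A minimal sketch using the transformers `pipeline` API (the model ID below is assumed to follow the CAMeL-Lab naming scheme on the Hugging Face Hub, and the example sentence is illustrative):

```python
from transformers import pipeline

# Load the fine-tuned NER model from the Hub through the token
# classification pipeline; model ID assumed from the base model's naming.
ner = pipeline("ner", model="CAMeL-Lab/bert-base-arabic-camelbert-msa-ner")

# Tag a Modern Standard Arabic sentence ("The Emirate of Abu Dhabi is one
# of the seven emirates of the United Arab Emirates").
results = ner("إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع")
for entity in results:
    print(entity["word"], entity["entity"])
```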