Update README.md
README.md
Because this model uses language adapters, you need to specify the language of your input so that the correct adapter can be activated:

```python
from transformers import XmodModel

model = XmodModel.from_pretrained("jvamvas/xmod-base")
model.set_default_language("en_XX")
```
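For context, here is a minimal end-to-end sketch of running an English input through the model with its adapter activated. It assumes PyTorch and the `xlm-roberta-base` tokenizer shown earlier in this card; the example sentence and the printed shape are illustrative only:

```python
import torch
from transformers import AutoTokenizer, XmodModel

# Tokenizer shown earlier in this model card (X-MOD reuses the XLM-R vocabulary)
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

model = XmodModel.from_pretrained("jvamvas/xmod-base")
model.set_default_language("en_XX")  # route inputs through the English adapter

inputs = tokenizer("Hello, world!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Contextual embeddings for each token of the input:
# (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```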
A directory of the language adapters in this model is found at the bottom of this model card.

## Fine-tuning
In the experiments in the original paper, the embedding layer and the language adapters are frozen during fine-tuning. A method for doing this is provided in the code:

```python
model.freeze_embeddings_and_language_adapters()
```
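As an illustrative follow-up (not part of the original card), one way to see the effect of this call is to count the parameters that still require gradients, and to build an optimizer over only those. The learning rate below is an arbitrary placeholder:

```python
import torch

# Freeze the embedding layer and the per-language adapters, as in the
# paper's fine-tuning setup, then inspect what remains trainable.
model.freeze_embeddings_and_language_adapters()

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Trainable parameters: {trainable:,} of {total:,}")

# Fine-tuning then updates only the remaining parameters
# (lr=2e-5 is a placeholder, not a value from the paper).
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=2e-5
)
```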