Notes on OuteTTS compatibility changes
README.md
CHANGED
````diff
@@ -26,12 +26,23 @@ This is a text-to-speech (TTS) model for Moroccan Darija, fine-tuned from [OuteA
 - **Demo:** [Try it here](https://huggingface.co/spaces/Lyte/DarijaTTS-test)
 
 ## Usage
+
+> [!IMPORTANT]
+> **Compatibility Note**
+> Recent updates to `outetts` have introduced breaking changes. If you encounter the error:
+> `AttributeError: module 'outetts' has no attribute 'GGUFModelConfig_v2'`
+>
+> **Solution:** Please install a compatible version (0.3.3 or 0.3.2) to resolve this:
+> ```bash
+> pip install outetts==0.3.3
+> ```
+
 You can run the model using `outetts` as follows:
 
-
+Install `outetts` and `llama-cpp-python`:
 ```bash
-pip install outetts llama-cpp-python huggingface_hub
-
+pip install outetts==0.3.3 llama-cpp-python huggingface_hub
+````
 
 ```python
 import outetts
@@ -66,11 +77,13 @@ print(f"Generated audio saved at: {audio_path}")
 ```
 
 ## Training
+
 The model was fine-tuned using `Unsloth`'s `SFTTrainer`. The dataset was preprocessed following the [OuteTTS training guide](https://github.com/edwko/OuteTTS/blob/main/examples/training/OuteTTS-0.3/train.md). LoRA-based fine-tuning was applied to improve efficiency.
 
 # Support Me
 
-[
+[](https://ko-fi.com/lyte)
 
-
-
+-----
+
+For any issues or improvements, feel free to open a discussion or PR\!
````
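As a quick sanity check before running the usage example referenced above, the snippet below verifies that the installed `outetts` release still exposes the legacy `GGUFModelConfig_v2` class the README relies on. This is a minimal sketch added here for convenience, not part of the committed README.

```python
# Minimal sketch (not part of the committed README): confirm the installed
# outetts release still exposes the legacy GGUFModelConfig_v2 class.
import importlib.metadata

import outetts

print("outetts version:", importlib.metadata.version("outetts"))

if not hasattr(outetts, "GGUFModelConfig_v2"):
    raise RuntimeError(
        "This outetts release removed GGUFModelConfig_v2; "
        "install outetts==0.3.3 (or 0.3.2) as described in the compatibility note."
    )
```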
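The Training paragraph only states that fine-tuning used `Unsloth`'s `SFTTrainer` with LoRA on data preprocessed per the linked OuteTTS training guide. The sketch below shows what such a run typically looks like under those assumptions; the model name, dataset path, and every hyperparameter are illustrative placeholders rather than the author's actual configuration, and the exact `SFTTrainer` keyword arguments vary between `trl` versions.

```python
# Rough, hypothetical sketch of an Unsloth + LoRA SFT run; NOT the author's
# training script. Names, paths, and hyperparameters are placeholders, and
# the exact SFTTrainer keyword arguments depend on the installed trl version.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the base model (the README names an OuteTTS-0.3 base model).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="OuteAI/OuteTTS-0.3-500M",
    max_seq_length=4096,
    load_in_4bit=False,
)

# Attach LoRA adapters; rank, alpha, and target modules are illustrative.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Assumes the data was already flattened into prompt strings following the
# OuteTTS-0.3 training guide linked in the README (placeholder file path).
dataset = load_dataset("text", data_files="darija_tts_train_prompts.txt")["train"]

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=4096,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=10,
        output_dir="outputs",
    ),
)
trainer.train()
```

After training, the LoRA adapters would normally be merged and exported to GGUF before use with `llama-cpp-python`; that step is likewise not shown in this commit.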