---
title: Wan LoRA Trainer
sdk: docker
---

# Wan 2.1 LoRA Trainer

This Space provides a ready-to-use environment for training a LoRA adapter for the Wan 2.1 image-to-video model.

## How it Works

* **Self-Contained:** All required models are stored directly within this Space for maximum reliability.
* **Automatic Startup:** When the Space builds, it automatically executes the `start.sh` script to begin the training process. You can monitor progress in the "Logs" section.

## How to Train on Your Own Data

This Space is pre-configured to run with a test dataset. To train on your own data, upload your dataset files, create a `.toml` configuration file for them, and update the `start.sh` script to point to your new config.

---

# Wan 2.1 LoRA Trainer

This Space runs the Wan 2.1 LoRA training script. The required models are linked via the repository configuration above.

---

# Simple GUI for [Musubi Tuner](https://github.com/kohya-ss/musubi-tuner) (Wan 2.1 models only)

# How to use the GUI

- Download the repository by running the following in the command line: `git clone https://github.com/Kvento/musubi-tuner-wan-gui`
- To open the GUI, run `Start_Wan_GUI.bat`.
- All settings can be saved and loaded using the "**Load Settings**" and "**Save Settings**" buttons.
- For more information about the settings, see the [Wan2.1 documentation](./docs/wan.md), [Advanced Configuration](./docs/advanced_config.md#fp8-quantization), and the [Dataset configuration guide](./dataset/dataset_config.md).
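
---

As a starting point for the `.toml` configuration file mentioned above, a minimal dataset config might look like the sketch below. The key names follow the general shape of Musubi Tuner dataset configs, but the paths, resolution, and frame settings are illustrative placeholders; verify every key against the Dataset configuration guide before training.

```toml
# Illustrative sketch of a LoRA training dataset config.
# All values here are placeholders; check the key names against
# the Dataset configuration guide (./dataset/dataset_config.md).

[general]
resolution = [960, 544]      # training resolution as [width, height]
caption_extension = ".txt"   # one caption file next to each clip
batch_size = 1

[[datasets]]
video_directory = "/data/my_videos"  # placeholder path to your clips
target_frames = [1, 25, 45]          # frame counts sampled per video
frame_extraction = "head"            # how frames are taken from each clip
```

After saving a file like this, point `start.sh` at its path so the training run picks up your dataset instead of the bundled test set.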