# WAN 2.1 LoRA Training Tutorial
> **Original source:** CivitAI article by malcolmrey
Hello, my dear readers!
There is a lot of stuff so I'll try to make it brief! :)
---
## 1) WAN 2.1 LoRA Models Release
My new WAN loras have just dropped at:
🔗 [https://huggingface.co/malcolmrey/wan/tree/main/wan2.1](https://huggingface.co/malcolmrey/wan/tree/main/wan2.1)
There are **130+ loras**, so have fun :) (40 GB total)
---
## 2) ComfyUI Workflow for WAN 2.1
Workflow for WAN 2.1 with all my loras listed in the powerlora:
🔗 [https://huggingface.co/datasets/malcolmrey/workflows/tree/main/WAN](https://huggingface.co/datasets/malcolmrey/workflows/tree/main/WAN)
This is the workflow that I use for 2.1, it has all the available loras listed there.
There are also some Notes left to explain some parts a bit :)
---
## 3) How to Train WAN 2.1 LoRAs
Nowadays the process is so simple that most tutorials are short and straightforward.
Currently I use the **AI Toolkit**:
🔗 [ostris/ai-toolkit: The ultimate training toolkit for finetuning diffusion models](https://github.com/ostris/ai-toolkit/)
Most of the settings are default, but I did tweak some minor things for my convenience.
The way to quickly start a new training is to replace the configuration file and edit two values directly in it via the GUI.
**Example configuration files are provided in the `scripts/` subfolder:**
- [`wan2.1-training-config-man.txt`](scripts/wan2.1-training-config-man.txt) - Configuration for training male subjects
- [`wan2.1-training-config-woman.txt`](scripts/wan2.1-training-config-woman.txt) - Configuration for training female subjects
- [`wan2.1-training-config-style.txt`](scripts/wan2.1-training-config-style.txt) - Configuration for training artistic styles
### Training Steps
After you install AI Toolkit and run the GUI:
#### i) Click on New Job
#### ii) Click on Show Advanced

#### iii) Replace the configuration
You will see something like the block below; replace all of it with the content from my file:

Replace:
- `name: "wan_NAMEYOURMODEL_v1"` with your model name
- `folder_path: "c:\\Development\\ai-toolkit\\datasets/DATASETFORYOURMODEL"` with the folder of your dataset (you should have the folder prepared by now)
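For orientation, here is a minimal sketch of where those two values live in the config. Only the `name` and `folder_path` keys (with their placeholder values) come from the steps above; the surrounding nesting is an assumption about AI Toolkit's YAML layout, not the full file:

```yaml
# Only these two values need to change per training run.
config:
  name: "wan_NAMEYOURMODEL_v1"   # <- your model name
  process:
    - datasets:
        - folder_path: "c:\\Development\\ai-toolkit\\datasets/DATASETFORYOURMODEL"   # <- your dataset folder
```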
#### iv) Click Show Simple
Then click Show Simple and confirm that all looks good:

#### v) Create Job
And then just click Create Job (I don't even click Show Simple anymore):

---
## Training Tips & Best Practices
### Dataset Requirements
**The most important thing is the dataset.** It is what makes or breaks the model.
After extensive tests and trainings I can confirm that the default learning rate is the way to go (I did experiment with other values, but they were hit or miss).
However, I nailed down what seems to be the sweet spot: **20 images in the dataset and 2500 steps.**
### Understanding Steps vs Images
Training is a function of the number of images and the number of steps. In simple terms, the trainer iterates over the images repeatedly until it reaches those 2500 steps in total. Is it simply 2500/20 = 125 passes per image? Maybe, I don't know. What I do know is that if you increase the number of images, you realistically need to increase the number of steps as well. The relationship is not linear, so 40 images does not translate to 5000 steps.
I tried many variations and decided that **20** (up to 22-25, this is still fine, but it can also be 18 or so) is the best number for 2500 steps.
I had good results with more images, but I had to go up to 4000 steps. The results weren't better, just equally good, and that was 1500 extra steps; I don't think it is necessary.
### Image Selection Guidelines
- **Pick images where the face really resembles the person.** Sometimes you can have an image where the lighting or makeup or pose make the person less recognizable.
- **WAN is much better than anything in picking up on details.** If you give it great images then the results will be great, if you put dubious images, the results will be that too.
- **No captions needed.** There is no need for captions at all.
- **Variety in images is important.** Do not use only "red carpet" shots, because you will limit WAN's imagination when you do basic prompts.
- **Lower quality images can be beneficial.** I've noticed that it also helps to add some lower quality images, as long as the face is still clearly recognizable. The imperfections get trained into the model. In my "red carpet" example, an all-glamour dataset would give you mainly red-carpet-quality outputs without heavy prompting (which might be fine for some, but it is usually better to add screengrabs from interviews/movies/candid shots).
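Since no captions are needed, preparing a dataset is mostly a matter of collecting the right images into one folder. A small sanity check like this (a hypothetical helper, not part of AI Toolkit; the extension list is an assumption) can confirm the image count before training:

```python
from pathlib import Path

def count_images(folder: str) -> int:
    """Count image files in a dataset folder (extension list is an assumption)."""
    exts = {".png", ".jpg", ".jpeg", ".webp"}
    return sum(1 for p in Path(folder).iterdir()
               if p.is_file() and p.suffix.lower() in exts)

# Sweet spot from this article: roughly 18-25 images for 2500 steps.
if Path("datasets/DATASETFORYOURMODEL").is_dir():
    n = count_images("datasets/DATASETFORYOURMODEL")
    if not 18 <= n <= 25:
        print(f"Warning: {n} images; around 20 is recommended here.")
```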
### Multiple LoRA Training
As per usual (those who follow me at least should be familiar with it) → **you can train one person multiple times** (using different dataset) and then use those different loras together in a prompt (see workflow for examples). Again, there are positives for that (better likeness).
---
## Support & Links
If you have a particular priority request you can always drop it at my coffee page:
☕ [https://buymeacoffee.com/malcolmrey](https://buymeacoffee.com/malcolmrey)
---
## Find malcolmrey
**Other places where you can find me:**
🔗 **Reddit:** [http://reddit.com/r/malcolmrey](http://reddit.com/r/malcolmrey)
🤗 **Hugging Face:** [https://huggingface.co/malcolmrey](https://huggingface.co/malcolmrey)
🎨 **CivitAI:** [http://civitai.com/user/malcolmrey](http://civitai.com/user/malcolmrey)
---
Cheers and have fun using the models, the training info, the loras and the workflow :)
*- malcolmrey*