Negative alpha as a distillation-acceleration LoRA!

#5
by TkskKurumi - opened

The idea is quite simple:
(Distilled Model) + (This LoRA) = (Non-Distilled Model), and therefore
(Non-Distilled Model) + (-1) * (This LoRA) = (Distilled Model)
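As a sanity check on that arithmetic, here is a tiny illustrative snippet (the tensors are random stand-ins, not real model weights): if this LoRA's merged delta turns distilled weights into non-distilled ones, applying the same delta with weight -1 undoes it.

```python
# Illustrative only: random stand-ins for real weights, assuming the LoRA
# merge is a simple additive delta on each affected weight matrix.
import torch

W_distilled = torch.randn(64, 64)               # a distilled-model weight matrix
delta = torch.randn(64, 64) * 0.01              # this LoRA's merged de-distill delta

W_non_distilled = W_distilled + delta           # (Distilled Model) + (This LoRA)
W_recovered = W_non_distilled + (-1.0) * delta  # apply the LoRA again with weight -1

print(torch.allclose(W_recovered, W_distilled))  # True: the distilled weights return
```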
I trained a LoRA without this adapter, then ran inference with:
A. only the LoRA I trained;
B. the LoRA I trained + this LoRA scaled by -1.
I found that setting A ruins the distillation and setting B brings it back. It works! (A rough sketch of setting B follows below.)
More experiments are needed to confirm this; I haven't done extensive testing yet. Also, the LoRA I trained gives poor results even with CFG and more steps, so it probably needs hyperparameter tuning or dataset adjustments on my side.
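For anyone who wants to try setting B, here is a rough sketch using diffusers with the PEFT backend. The checkpoint paths, adapter names, and sampler settings are placeholders, not my exact setup, and support for negative adapter weights may depend on your diffusers/PEFT versions.

```python
# Sketch of inference setting B: my trained LoRA at +1, this repo's LoRA at -1.
# Paths and settings are placeholders; assumes a pipeline class that supports
# load_lora_weights() / set_adapters() (PEFT backend installed).
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "path/to/distilled-base-model", torch_dtype=torch.float16
).to("cuda")

# My own trained LoRA, applied at full strength.
pipe.load_lora_weights("path/to/my_trained_lora.safetensors", adapter_name="mine")
# The de-distillation LoRA from this repo; the negative weight below is what
# re-applies distillation instead of removing it.
pipe.load_lora_weights("path/to/this_lora.safetensors", adapter_name="dedistill")
pipe.set_adapters(["mine", "dedistill"], adapter_weights=[1.0, -1.0])

image = pipe(
    "pink hair, cat ears, heterochromia, sailor collar, serafuku, school uniform, red eyes, blue eyes",
    num_inference_steps=8,   # few-step "turbo"-style sampling for the distilled model
    guidance_scale=1.0,      # distilled models are usually run without CFG
).images[0]
image.save("setting_b.png")
```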

Example:

pink hair, cat ears, heterochromia, sailor collar, serafuku, school uniform, red eyes, blue eyes

Image captions (in order):
original model
my LoRA
my LoRA with turbo inference
my LoRA with non-distilled inference settings
my LoRA with "base" inference (CFG = 2.5 with negative prompts, 25 steps)
my LoRA + (-1) * this LoRA
my LoRA + (-1) * this LoRA with turbo inference

So, in practical terms, what should I do with this LoRA in ComfyUI, or in a workflow for normal users?

I continued training and the trick no longer works. It seems my previous LoRA was simply under-trained.
