A question for the author about Lightning LoRA model merging.
I'm having trouble merging "qwen edit 2509" and "8step lightning LoRA". I did not merge the CLIP or VAE.
The output from the merged model has severe artifacts and poor detail, and it doesn't match the quality of using the LoRA without merging. However, merging my own custom-trained LoRAs works fine with the same process.
Any idea why this specific Lightning LoRA fails to merge correctly? Is there a special procedure for them?
What base model: fp32, bf16, or fp8?
Did you use the Lightning LoRA made for 2509?
Which version of the Lightning LoRA did you try, fp32 or bf16?
My bet: you didn't use the fp8 model; that's what happened to me...
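One reason a baked-in merge can look worse than applying the LoRA on the fly is precision: if the base checkpoint is stored in a low-precision format, part of the small LoRA delta can be rounded away when it is added into the weights. A toy numpy sketch of that effect (fp8 is not a numpy dtype, so float16 stands in here; the shapes and magnitudes are made up for illustration, this is not ComfyUI's code):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical base weight matrix and a small LoRA delta.
W = rng.standard_normal((256, 256)).astype(np.float32) * 4.0
delta = rng.standard_normal((256, 256)).astype(np.float32) * 1e-3

# Reference: merge carried out in full float32 precision.
merged_fp32 = W + delta

# Merge carried out in a low-precision format (float16 as a stand-in for fp8):
# both the base and the delta get rounded, and the delta is near the
# rounding step of float16 at this weight magnitude, so much of it is lost.
merged_low = (W.astype(np.float16) + delta.astype(np.float16)).astype(np.float32)

err_runtime = np.abs((W + delta) - merged_fp32).max()  # exact: 0.0
err_baked = np.abs((W + delta) - merged_low).max()     # nonzero rounding loss
assert err_baked > err_runtime
```

This is only a rough model of the failure mode, but it illustrates why matching the LoRA's precision to the base checkpoint (as suggested above) can matter when you bake it in rather than load it at runtime.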
Don't add accelerators like lightning. They are already included.
Thank you for your suggestion. The base model is Qwen Edit Plus 2509 bf16; I want to merge the base model with Qwen-Image-Edit-2509-Lightning-8steps-V1.0-bf16.
Thank you very much, I will test it
Well... they are already included in my "all in one" model; the base "Qwen Edit" model doesn't include them. "Merging" them is just doing a "Load LoRA" in ComfyUI. You can use "Save Checkpoint" after that if you want to generate a safetensors file to load later with the accelerator already baked in.
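Mathematically, the "Load LoRA then Save Checkpoint" step above just bakes the low-rank update into each affected base weight. A minimal numpy sketch of that merge, with hypothetical shapes and the common alpha/rank scaling convention (not ComfyUI's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical base weight and a rank-8 LoRA pair (shapes are illustrative).
d_out, d_in, rank, alpha = 64, 64, 8, 8.0
W = rng.standard_normal((d_out, d_in)).astype(np.float32)
B = rng.standard_normal((d_out, rank)).astype(np.float32)  # "up" matrix
A = rng.standard_normal((rank, d_in)).astype(np.float32)   # "down" matrix

# Baking the LoRA in: W' = W + (alpha / rank) * B @ A.
W_merged = W + (alpha / rank) * (B @ A)

# After merging, applying the same LoRA again at runtime would double it up,
# which is one way a merged checkpoint plus a still-loaded accelerator
# produces degraded output.
x = rng.standard_normal(d_in).astype(np.float32)
y_runtime = W @ x + (alpha / rank) * (B @ (A @ x))  # base + LoRA on the fly
y_merged = W_merged @ x                             # merged checkpoint alone
assert np.allclose(y_runtime, y_merged, atol=1e-4)
```

The assertion shows the merged checkpoint is equivalent to loading the LoRA at runtime, so the accelerator should not be applied a second time on top of it.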