Recommended settings?
Hi there, thanks for the models! Any recommendations for the sampler / scheduler / steps / shift? I'm getting bad results currently. Thanks!
I have the same problem
After some more testing I'm realizing the fp8 version is badly broken; I will try to get a fixed version up soon. The bf16 -> fp8 cast in Comfy works as expected, with only very minor differences from the bf16 version.
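For anyone who wants to see how much a plain on-load cast actually perturbs the weights, here is a minimal sketch (the checkpoint path is a placeholder, and this only measures per-weight rounding error, not image quality; requires PyTorch >= 2.1 for the float8 dtypes):

```python
# Sketch: measure the worst-case rounding error a naive bf16 -> fp8_e4m3fn
# cast introduces per tensor. Checkpoint name is hypothetical.
import torch
from safetensors.torch import load_file

state = load_file("model_bf16.safetensors")  # placeholder path

worst_name, worst_err = "", 0.0
for name, w in state.items():
    if not w.is_floating_point():
        continue
    # Round-trip through fp8 the way an on-load cast would, then compare.
    err = (w.float() - w.to(torch.float8_e4m3fn).float()).abs().max().item()
    if err > worst_err:
        worst_name, worst_err = name, err

print(f"largest fp8 rounding error: {worst_err:.4f} in {worst_name}")
```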
I do use Comfy myself, and I tested the FP8 on several workflows, not just one: one of the usual workflows with Detail Daemon, one of my Chroma workflows, and finally the most basic Flux Dev one, and it failed miserably on all of them. I've never come across an FP8 of any model that had such a major difference. I've been using Comfy for a year now, and until recently I've mostly been using fp8s for most models, so this isn't an issue on my end; the fp8 was made really poorly.
I get way better results with bf16; the fp8 was definitely broken.
Guys, do you think it's worth training a LoRA on this model, or even fully finetuning it, to remove the censorship?
I was speaking of the conversion ComfyUI does when you load the model: you use the bf16 model as the input and set the weight dtype to fp8. I've converted a few models to fp8 before, but this one seems remarkably sensitive to fp8 conversion, so it needs other techniques to reduce the quality loss, techniques I'm not quite good at or familiar with just yet. Those techniques are built into ComfyUI, so load the bf16 model and set weight_dtype to fp8.
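As a rough illustration of one such mitigation (this is not what ComfyUI does internally nor this repo's recipe; the paths and the key filter are assumptions), an offline export can keep the most sensitive tensors in bf16 and cast only the large weight matrices to fp8:

```python
# Sketch of a selective bf16 -> fp8 export: cast only the big weight matrices
# and keep small/sensitive tensors (biases, norm scales, 1-D params) in bf16.
# Paths and the key filter are placeholders. Requires PyTorch >= 2.1 and a
# recent safetensors release with float8 support.
import torch
from safetensors.torch import load_file, save_file

src = "model_bf16.safetensors"          # placeholder input
dst = "model_fp8_mixed.safetensors"     # placeholder output

def keep_high_precision(name: str, t: torch.Tensor) -> bool:
    # Heuristic: 1-D tensors and anything with "norm" in its key stay in bf16.
    return t.ndim <= 1 or "norm" in name.lower()

state = load_file(src)
out = {}
for name, t in state.items():
    if t.is_floating_point() and not keep_high_precision(name, t):
        out[name] = t.to(torch.float8_e4m3fn)
    else:
        out[name] = t

save_file(out, dst)
```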









