
Recommended settings?

by Jehex

Hi there, thanks for the models! Any recommendations for the sampler / scheduler / steps / shift? I'm getting bad results currently. Thanks!

I have the same problem

[five example images attached: img_00001_.png through img_00005_.png]

A fine model, mate, the quality is to die for. Such impeccable quality across various workflows that even my dead gramps has come back to life.

I have used the beta scheduler and the ipndm sampler. For other info, refer to the parent model; this repo is just the bf16 and fp8 versions (which might be lower quality than the fp32 version they released).
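
For anyone loading these weights through diffusers instead of Comfy, here's a minimal sketch. Note that "beta" and "ipndm" above are ComfyUI scheduler/sampler names with no exact diffusers equivalent (diffusers' Flux pipelines default to a flow-match Euler scheduler), and the model id, step count, and guidance value below are placeholder starting points, not settings confirmed in this thread:

```python
# Minimal diffusers sketch; settings are starting points, not official ones.
# Assumes the SRPO weights are drop-in compatible with FLUX.1-dev;
# "black-forest-labs/FLUX.1-dev" stands in for whatever repo you actually use.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",   # replace with the SRPO checkpoint repo
    torch_dtype=torch.bfloat16,       # bf16, since fp8 is reported broken here
)
pipe.enable_model_cpu_offload()       # optional, helps on limited VRAM

image = pipe(
    prompt="a photo of an astronaut riding a horse",
    num_inference_steps=28,           # typical FLUX.1-dev range; tune to taste
    guidance_scale=3.5,               # common FLUX.1-dev default in examples
    height=1024,
    width=1024,
).images[0]
image.save("out.png")
```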

[comparison images attached: flux.1 dev (c-f1-2025-09-10_00026_.png) vs. flux.1 dev SRPO (c-f1-2025-09-10_00022_.png)]

The fp8 version doesn't seem good, though. I don't think it's my method; maybe the model is just more sensitive, since for SRPO they upgraded from the bf16 flux.1 dev weights to an fp32 version.

[comparison images attached: fp8 (c-f1-2025-09-11_00034_.png) vs. bf16 (c-f1-2025-09-11_00033_.png)]

After some more testing I'm realizing the fp8 version is badly borked; I will try to get a fixed version up soon. The bf16 → fp8 conversion in Comfy works as expected, with only very minor differences from the bf16 version.
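
For context on what a plain bf16 → fp8 recast involves, here's a hedged sketch (file names are placeholders, and this is not the uploader's actual conversion script). A straight dtype cast saturates values outside e4m3's range and loses precision on small weights, which is the likely failure mode for a carelessly made fp8 file:

```python
# Naive bf16 -> fp8 (e4m3) recast of a safetensors checkpoint. Illustrative only.
import torch
from safetensors.torch import load_file, save_file

state = load_file("flux-srpo-bf16.safetensors")  # hypothetical path

converted = {}
for name, tensor in state.items():
    if tensor.dtype == torch.bfloat16 and tensor.ndim >= 2:
        # Plain cast: values outside e4m3's ~±448 range saturate, and small
        # values lose mantissa precision -- a plausible source of the breakage.
        converted[name] = tensor.to(torch.float8_e4m3fn)
    else:
        converted[name] = tensor  # keep norms, biases, small tensors in bf16

save_file(converted, "flux-srpo-fp8-e4m3.safetensors")
```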

I do use Comfy myself, and I tested the fp8 on several workflows, not just one: one of my usual workflows with Detail Daemon, one of my Chroma workflows, and finally the most basic flux dev one. It failed miserably on all of them. I've never come across an fp8 of any model that had such a major difference. I've been using Comfy for a year now, and until recently I mostly used fp8s for most models, so this isn't an issue on my end; the fp8 was made really poorly.

Getting way better results with bf16; fp8 was definitely broken.

Guys, do you think it's worth training a LoRA on this model, or even fully fine-tuning it, to remove the censorship?

I was speaking of the conversion ComfyUI does when you load the model: you use the bf16 model as the input and set the weight dtype to fp8. I've converted a few models to fp8, but this one seems remarkably sensitive to fp8 conversion, so it needs the other techniques that reduce the quality loss, techniques I'm not quite good at or aware of just yet. Those techniques are inside ComfyUI, so load the bf16 file and set weight_dtype to fp8.
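
To illustrate why a straight cast is so lossy and what "other techniques" like per-tensor scaling buy you (purely illustrative; this is not ComfyUI's actual weight_dtype=fp8 code path), here's a toy comparison on a random weight matrix:

```python
# Compare quantization error of a naive fp8 cast vs. a scaled fp8 cast.
import torch

w = torch.randn(4096, 4096, dtype=torch.bfloat16) * 0.02  # toy weight tensor

# Naive cast: small weights fall into fp8's coarse subnormal bins.
naive = w.to(torch.float8_e4m3fn).to(torch.bfloat16)

# Scaled cast: stretch the tensor to use fp8's full normal range, store the
# scale, and divide it back out at load time (the usual "scaled fp8" trick).
scale = 448.0 / w.abs().max().float()        # 448 ~= e4m3fn max finite value
scaled = (w.float() * scale).to(torch.float8_e4m3fn)
restored = scaled.to(torch.float32) / scale

print("naive  mean abs err:", (w.float() - naive.float()).abs().mean().item())
print("scaled mean abs err:", (w.float() - restored).abs().mean().item())
```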

[screenshot attached: flux-SRPO workflow in ComfyUI, 2025-09-13]
