Flux quantized checkpoints
This collection groups the quantized FLUX checkpoints used in this blog post: https://huggingface.co/blog/diffusers-quantization
BnB 8-bit
To use this quantized FLUX.1 [dev] checkpoint, you need to install the 🧨 diffusers and bitsandbytes libraries:
pip install -U diffusers
pip install -U bitsandbytes
After installing the required libraries, you can run the following script:
import torch

from diffusers import FluxPipeline

# Load the pre-quantized checkpoint; the transformer and text_encoder_2
# weights are already stored in bitsandbytes 8-bit format.
pipe = FluxPipeline.from_pretrained(
    "diffusers/FLUX.1-dev-bnb-8bit",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")
prompt = "Baroque style, a lavish palace interior with ornate gilded ceilings, intricate tapestries, and dramatic lighting over a grand staircase."
pipe_kwargs = {
    "prompt": prompt,
    "height": 1024,
    "width": 1024,
    "guidance_scale": 3.5,
    "num_inference_steps": 50,
    "max_sequence_length": 512,
}
image = pipe(
    **pipe_kwargs, generator=torch.manual_seed(0),
).images[0]
image.save("flux.png")
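If the whole pipeline does not fit in your GPU memory even in 8-bit, you can trade some speed for lower VRAM usage by using diffusers' model CPU offloading instead of moving the full pipeline to CUDA. The snippet below is a minimal sketch with the same checkpoint as above; the peak-memory printout is only a rough indicator and depends on your GPU and generation settings:
import torch

from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "diffusers/FLUX.1-dev-bnb-8bit",
    torch_dtype=torch.bfloat16,
)
# Keep only the model that is currently running on the GPU; everything else
# stays on the CPU until it is needed.
pipe.enable_model_cpu_offload()

image = pipe(
    "Baroque style, a lavish palace interior with ornate gilded ceilings.",
    height=1024,
    width=1024,
    guidance_scale=3.5,
    num_inference_steps=50,
    max_sequence_length=512,
    generator=torch.manual_seed(0),
).images[0]

# Peak GPU memory allocated during the run (rough indicator only).
print(f"Peak VRAM: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GiB")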
This checkpoint was created from the "black-forest-labs/FLUX.1-dev" checkpoint with the following script:
import torch
from diffusers import FluxPipeline
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig
from diffusers.quantizers import PipelineQuantizationConfig
from transformers import BitsAndBytesConfig as TransformersBitsAndBytesConfig
# Quantize the Flux transformer with diffusers' bitsandbytes config and the
# T5 text encoder (text_encoder_2) with transformers' bitsandbytes config.
pipeline_quant_config = PipelineQuantizationConfig(
    quant_mapping={
        "transformer": DiffusersBitsAndBytesConfig(load_in_8bit=True),
        "text_encoder_2": TransformersBitsAndBytesConfig(load_in_8bit=True),
    }
)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16
)
# Save the quantized pipeline so the 8-bit weights can be loaded directly later.
pipe.save_pretrained("FLUX.1-dev-bnb-8bit")
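To check that the saved folder really contains 8-bit weights, you can reload it and count the bitsandbytes 8-bit linear layers in the transformer. This is only a sketch and not part of the original script; it assumes bitsandbytes replaces the transformer's linear layers with its Linear8bitLt module:
import bitsandbytes as bnb
import torch

from diffusers import FluxPipeline

# Reload the locally saved checkpoint produced by the script above.
pipe = FluxPipeline.from_pretrained(
    "FLUX.1-dev-bnb-8bit",
    torch_dtype=torch.bfloat16,
)

# Count the linear layers that were swapped for bitsandbytes' 8-bit modules.
n_int8 = sum(
    isinstance(module, bnb.nn.Linear8bitLt)
    for module in pipe.transformer.modules()
)
print(f"{n_int8} Linear8bitLt layers found in the transformer")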
Base model: black-forest-labs/FLUX.1-dev