---
base_model: Qwen/Qwen-Image
library_name: gguf
quantized_by: city96
license: apache-2.0
language:
- en
- zh
pipeline_tag: text-to-image
---
This is a direct GGUF conversion of [Qwen/Qwen-Image](https://huggingface.co/Qwen/Qwen-Image).
The model files can be used in [ComfyUI](https://github.com/comfyanonymous/ComfyUI/) with the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) custom node. Place the required model(s) in the following folders (see the download sketch below the table):
| Type | Name | Location | Download |
| ------------ | ------------------------------ | --------------------------------- | ---------------- |
| Main Model | Qwen-Image | `ComfyUI/models/diffusion_models` | GGUF (this repo) |
| Text Encoder | Qwen2.5-VL-7B | `ComfyUI/models/text_encoders` | [Safetensors](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/tree/main/split_files/text_encoders) / [GGUF](https://huggingface.co/unsloth/Qwen2.5-VL-7B-Instruct-GGUF/tree/main)|
| VAE | Qwen-Image VAE | `ComfyUI/models/vae` | [Safetensors](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/blob/main/split_files/vae/qwen_image_vae.safetensors) |
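The files can also be fetched programmatically. Below is a minimal download sketch using the `huggingface_hub` Python client; the repo ID and filename are placeholders, substitute this repository's ID and the quant you actually want, and repeat for the text encoder and VAE listed in the table above.

```python
# Minimal download sketch, assuming `huggingface_hub` is installed (pip install huggingface_hub).
# The repo ID and filename below are placeholders -- substitute the actual quant file.
from huggingface_hub import hf_hub_download

comfyui_dir = "ComfyUI"  # path to your local ComfyUI checkout

path = hf_hub_download(
    repo_id="city96/Qwen-Image-gguf",    # placeholder: this repository's ID
    filename="qwen-image-Q4_K_M.gguf",   # placeholder: the quant you want
    local_dir=f"{comfyui_dir}/models/diffusion_models",
)
print(f"Saved to {path}")
```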
[**Example workflow**](media/qwen-image_workflow.json)
[**Example outputs**](media/qwen-image.jpg) - sample size of 1, not strictly representative

### Notes
> [!NOTE]
> The Q5_K_M, Q4_K_M and, most importantly, the low-bitrate quants (Q3_K_M, Q3_K_S, Q2_K) use a new dynamic quantization logic where the first and last layers are kept at higher precision.
>
> For a comparison, see this [imgsli page](https://imgsli.com/NDA0MTIy). With this method, even Q2_K remains somewhat usable. The per-tensor quantization types can be inspected with the sketch below.
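A minimal reading sketch using the `gguf` Python package to check which tensors were kept at higher precision; the file path is a placeholder.

```python
# Sketch: summarize per-tensor quantization types in a GGUF file, to see which
# layers were kept at higher precision. Requires the `gguf` package
# (pip install gguf); the file path is a placeholder.
from collections import Counter
from gguf import GGUFReader

reader = GGUFReader("qwen-image-Q2_K.gguf")  # placeholder path

# Count how many tensors use each quantization type (e.g. Q2_K vs. F16).
counts = Counter(t.tensor_type.name for t in reader.tensors)
print(dict(counts))

# Print each tensor's name and type to see exactly which layers were upcast.
for t in reader.tensors:
    print(f"{t.name}: {t.tensor_type.name}")
```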
*As this is a quantized model and not a finetune, all the same restrictions and original license terms still apply.*