---
license: apache-2.0
language:
- en
- zh
pipeline_tag: text-to-image
tags:
- comfyui
- diffusion-single-file
base_model:
- Tongyi-MAI/Z-Image-Turbo
base_model_relation: quantized
---
For more information (including how to compress models yourself), check out https://huggingface.co/DFloat11 and https://github.com/LeanModels/DFloat11.

Feel free to request compression of other models as well, although models whose architecture I am unfamiliar with might be trickier for me.

### How to Use

#### ComfyUI
Install my own fork of the DF11 ComfyUI custom node: https://github.com/mingyi456/ComfyUI-DFloat11-Extended. After installing the custom node, use the provided workflow [json](z_image_turbo_bf16-DF11-workflow.json), or simply replace the "Load Diffusion Model" node in an existing workflow with the DF11 diffusion model loader node provided by the custom node pack. If you run into any issues, feel free to leave a comment. The workflow is also embedded in the [png](z_image_turbo_bf16-DF11-workflow.png) image below.

![workflow](z_image_turbo_bf16-DF11-workflow.png)

#### `diffusers`
Refer to this [model](https://huggingface.co/mingyi456/Z-Image-Turbo-DF11) instead.

### Compression Details

This is the `pattern_dict` for compressing Z-Image-based models in ComfyUI:

```python
pattern_dict_comfyui = {
    # Regex over transformer block names -> linear submodules inside each block to compress
    r"noise_refiner\.\d+": (
        "attention.qkv",
        "attention.out",
        "feed_forward.w1",
        "feed_forward.w2",
        "feed_forward.w3",
        "adaLN_modulation.0",
    ),
    r"context_refiner\.\d+": (
        "attention.qkv",
        "attention.out",
        "feed_forward.w1",
        "feed_forward.w2",
        "feed_forward.w3",
    ),
    r"layers\.\d+": (
        "attention.qkv",
        "attention.out",
        "feed_forward.w1",
        "feed_forward.w2",
        "feed_forward.w3",
        "adaLN_modulation.0",
    ),
}
```
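
As a rough illustration of how these entries are read, the sketch below uses plain Python (standard-library `re` only, not the DFloat11 API) to expand the `pattern_dict_comfyui` above into fully qualified weight names. The `matching_submodules` helper and the example block names are hypothetical and purely illustrative; the sketch assumes that each regex key is matched against a transformer block name, that each tuple lists the linear submodules inside that block whose weights are DF11-compressed, and that weights not matched by any pattern stay in their original BF16 format.

```python
import re

# Hypothetical helper, not part of DFloat11: expand pattern_dict entries into
# fully qualified submodule names for a few example block names.
def matching_submodules(pattern_dict, block_names):
    for name in block_names:
        for pattern, submodules in pattern_dict.items():
            if re.fullmatch(pattern, name):
                for sub in submodules:
                    yield f"{name}.{sub}"

# Example block names of the kind found in the Z-Image transformer (illustrative only)
example_blocks = ["noise_refiner.0", "context_refiner.1", "layers.12"]

for target in matching_submodules(pattern_dict_comfyui, example_blocks):
    print(target)  # e.g. noise_refiner.0.attention.qkv, layers.12.feed_forward.w1, ...
```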