This is a conversion of SmoothMix WAN 2.2 V2.0 - https://civitai.com/models/1995784/smooth-mix-wan-22-i2vt2v-14b

I experimented with running the script https://github.com/Kickbub/Dequant-FP8-ComfyUI/blob/main/dequantize_fp8v2.py to convert the model to FP16 and check whether there is an improvement in quality, and what it costs in size.
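
In case it helps others, here is a minimal sketch of what an FP8-to-FP16 dequantization pass looks like. This is not the linked dequantize_fp8v2.py - the file names and the handling of per-tensor scales are assumptions for illustration only:

```python
# Hypothetical sketch of an FP8 -> FP16 upcast over a safetensors checkpoint.
# NOT the linked dequantize_fp8v2.py; key names, file names, and scale
# handling are assumptions.
import torch
from safetensors.torch import load_file, save_file

def dequantize_fp8_to_fp16(src_path: str, dst_path: str) -> None:
    state = load_file(src_path)
    out = {}
    for name, tensor in state.items():
        if tensor.dtype in (torch.float8_e4m3fn, torch.float8_e5m2):
            # Upcast the raw FP8 values to FP16. "Scaled FP8" checkpoints
            # would also need their stored scale multiplied back in here
            # (assumption; depends on how the checkpoint was produced).
            out[name] = tensor.to(torch.float16)
        else:
            out[name] = tensor
    save_file(out, dst_path)

if __name__ == "__main__":
    dequantize_fp8_to_fp16(
        "smoothmix_wan22_fp8.safetensors",   # hypothetical input file name
        "smoothmix_wan22_fp16.safetensors",  # hypothetical output file name
    )
```

Note that a plain upcast cannot recover precision that was already lost in the FP8 quantization; the point of the experiment is to see whether it still helps the downstream GGUF quantization.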

The repo contains the full FP16 conversion, around 40 GB, and GGUF quantizations (5-bit, 6-bit, and 8-bit).

Please provide feedback so I know whether I should always run this script before making GGUF quantizations of other models.

The example videos were made with the UMT text encoder in GGUF Q8 and the FP32 VAE.

Comparison grid (videos): FP8 vs. FP16, for the full model and the Q8 quantization.

There is another example in the examples folder.

===================================================================================

If you would like to help me, RunPod has a referral program - https://runpod.io?ref=d2452mau

You get:
- Instant access to RunPod's GPU resources

I get:
- A one-time credit of $5 when a user signs up with my link and adds $10 for the first time
- Credits on referred-user spend during their first 6 months (5% Serverless and 3% Pods)