This is a Q8_0 GGUF quantization of inclusionAI/Ling-flash-2.0.
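Below is a minimal sketch of one way to download and run this quantization locally. It assumes `huggingface_hub` and `llama-cpp-python` are installed, that the underlying llama.cpp build supports the Ling architecture, and that the GGUF file name matches the repo name; adjust the file name if it differs.

```python
# Hedged sketch: load the Q8_0 GGUF with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the quantized model file from this repo.
model_path = hf_hub_download(
    repo_id="ddh0/Ling-flash-2.0-Q8_0.gguf",
    filename="Ling-flash-2.0-Q8_0.gguf",  # assumed file name; check the repo's Files tab
)

llm = Llama(
    model_path=model_path,
    n_ctx=4096,        # context window; raise if you have the memory
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly introduce yourself."}]
)
print(out["choices"][0]["message"]["content"])
```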
Model tree for ddh0/Ling-flash-2.0-Q8_0.gguf
- Base model: inclusionAI/Ling-flash-base-2.0
- Finetuned: inclusionAI/Ling-flash-2.0
- Quantized (this repo): ddh0/Ling-flash-2.0-Q8_0.gguf