# GLM-4.6V-GGUF

This model is converted from [zai-org/GLM-4.6V](https://huggingface.co/zai-org/GLM-4.6V) to GGUF using llama.cpp's `convert_hf_to_gguf.py`.
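
A conversion along these lines can be reproduced with the llama.cpp script; a minimal sketch is below. The local checkpoint path, output file name, and `f16` output type are assumptions for illustration, not the exact settings used for this repository:

```sh
# Sketch: convert a locally downloaded HF checkpoint to a f16 GGUF.
# "./GLM-4.6V" is an assumed local path to the original model directory.
python convert_hf_to_gguf.py ./GLM-4.6V --outfile GLM-4.6V-f16.gguf --outtype f16
```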

To use it:

```sh
llama-server -hf ggml-org/GLM-4.6V-GGUF
```
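
Once running, `llama-server` exposes an OpenAI-compatible HTTP API (on port 8080 by default). A minimal request might look like the following; the prompt and `max_tokens` value are illustrative:

```sh
# Query the server's OpenAI-compatible chat endpoint
# (assumes default llama-server host and port).
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [{"role": "user", "content": "Describe this image."}],
        "max_tokens": 64
      }'
```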
Model details:

- Base model: [zai-org/GLM-4.6V](https://huggingface.co/zai-org/GLM-4.6V)
- Format: GGUF
- Model size: 107B params
- Architecture: `glm4moe`
- Quantization: 4-bit
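
Other quantization levels can be produced locally from a higher-precision GGUF with llama.cpp's `llama-quantize` tool. A sketch, with assumed file names matching the conversion example above:

```sh
# Requantize a f16 GGUF to a 4-bit K-quant; file names are assumptions.
llama-quantize GLM-4.6V-f16.gguf GLM-4.6V-Q4_K_M.gguf Q4_K_M
```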
