Model Details

This is meta-llama/Meta-Llama-3-8B quantized to 4-bit and serialized with AutoAWQ.

Details here:

Fine-tune Llama 3 on Your Computer
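A minimal usage sketch for loading this checkpoint, assuming `transformers` and `autoawq` are installed and a CUDA GPU is available (the model ID below is this repository; the prompt is only an illustration):

```python
# Hypothetical sketch: load the AWQ 4-bit checkpoint with transformers.
# Assumes the `autoawq` package is installed so transformers can
# dequantize the AWQ weights on the fly at inference time.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "kaitchup/Llama-3-8b-awq-4bit"

def load_model(model_id: str = MODEL_ID):
    # device_map="auto" places layers on the available GPU(s).
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    return model, tokenizer

if __name__ == "__main__":
    model, tokenizer = load_model()
    inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=20)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Downloading the weights requires accepting the Meta Llama 3 license on the base model's page first.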

Safetensors
Model size: 2B params
Tensor type: F16 · I32
