Fara-7B-mlx-metal-int4

This is an INT4-quantized build of Fara-7B for Apple's MLX framework. It can be deployed on Apple Silicon devices (M1, M2, M3, M4).

Note: This is an unofficial build, intended for testing and development only.
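A minimal sketch of running the model with the mlx-lm package on an Apple Silicon Mac. The model id below is an assumption based on this card's name; substitute the actual Hub repo path or a local directory.

```shell
# Install the MLX LM runtime (requires macOS on Apple Silicon)
pip install mlx-lm

# One-shot generation via the mlx-lm CLI; --model accepts a Hub repo id
# or a local path (the id here is assumed from the model name)
mlx_lm.generate --model Fara-7B-mlx-metal-int4 --prompt "Hello"
```

The same can be done from Python with `from mlx_lm import load, generate` if you need programmatic control over sampling.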
