Active filters: Q8

| Model | Task | Params | Downloads | Likes |
|---|---|---|---|---|
| cibernicola/FLOR-6.3B-xat-Q8_0 | Text Generation | 6B | 16 | — |
| cibernicola/FLOR-1.3B-xat-Q8 | Text Generation | 1B | 7 | — |
| cibernicola/FLOR-6.3B-xat-Q5_K | Text Generation | 6B | 8 | — |
| prithivMLmods/Qwen2.5-Coder-7B-Instruct-GGUF | Text Generation | 8B | 100 | 2 |
| prithivMLmods/Qwen2.5-Coder-7B-GGUF | Text Generation | 8B | 93 | 4 |
| prithivMLmods/Qwen2.5-Coder-3B-GGUF | Text Generation | 3B | 73 | 3 |
| prithivMLmods/Qwen2.5-Coder-1.5B-GGUF | Text Generation | 2B | 418 | 4 |
| prithivMLmods/Qwen2.5-Coder-1.5B-Instruct-GGUF | Text Generation | 2B | 48 | 3 |
| prithivMLmods/Qwen2.5-Coder-3B-Instruct-GGUF | Text Generation | 3B | 42 | 5 |
| prithivMLmods/Llama-3.2-3B-GGUF | Text Generation | 3B | 79 | 2 |
| harisnaeem/Phi-4-mini-instruct-GGUF-Q8 | Text Generation | 4B | 3 | — |
| ykarout/llama3-deepseek_Q8 | Text Generation | 8B | 3 | — |
| michelkao/Ollama-3.2-GGUF | Text Generation | 3B | 299 | — |
| SiddhJagani/gpt-oss-20b-no-think-mlx-Q8 | Text Generation | 21B | 203 | — |
| Mattimax/DACMini-IT_Q8_0 | — | 0.1B | 10 | — |
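Files in any of the repos above can be fetched directly via the Hub's standard `/resolve/` URL scheme. A minimal sketch, using one repo id from the listing; the GGUF filename inside the repo is a hypothetical example, not taken from the listing:

```python
# Build the Hugging Face Hub direct-download URL for a file in a model repo.
# Pattern: https://huggingface.co/{repo_id}/resolve/{revision}/{filename}

def hf_resolve_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Return the Hub's /resolve/ URL for a file in a model repository."""
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

url = hf_resolve_url(
    "prithivMLmods/Qwen2.5-Coder-7B-Instruct-GGUF",
    "Qwen2.5-Coder-7B-Instruct.Q8_0.gguf",  # hypothetical filename for illustration
)
print(url)
```

In practice the `huggingface_hub` library's `hf_hub_download(repo_id=..., filename=...)` wraps this same pattern and adds caching and authentication.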