Active filters: Qwen3
| Model | Task | Params | Downloads | Likes |
|---|---|---|---|---|
| nvidia/Qwen3-Next-80B-A3B-Thinking-NVFP4 | Text Generation | | 216 | 4 |
| nightmedia/Qwen3-30B-A3B-Architect18-qx64-hi-mlx | Text Generation | 31B | 73 | 2 |
| McG-221/Qwen3-30B-A3B-Architect18-SE_Sarek-Edition-mlx-8Bit | Text Generation | 31B | 61 | 2 |
| mradermacher/Qwen3-14B-Scientist-BF16-i1-GGUF | | 15B | 1.5k | 2 |
| nvidia/Qwen3-235B-A22B-NVFP4 | Text Generation | 133B | 3.3k | 9 |
| nvidia/Qwen3-30B-A3B-NVFP4 | Text Generation | 16B | 49k | 18 |
| NVFP4/Qwen3-235B-A22B-Instruct-2507-FP4 | Text Generation | 118B | 459 | 3 |
| | Text Generation | 8B | 4.6k | 3 |
| nvidia/Qwen3-Next-80B-A3B-Instruct-NVFP4 | Text Generation | | 1.76k | 1 |
| DavidAU/Qwen3-48B-A4B-Savant-Commander-GATED-12x-Closed-Open-Source-Distill-GGUF | Text Generation | 34B | 1.43k | 10 |
| DavidAU/Qwen3-4B-Gemini-TripleX-High-Reasoning-Thinking-Heretic-Uncensored-GGUF | Text Generation | 4B | 2.53k | 9 |
| nightmedia/Qwen3-14B-Researcher-qx86-hi-mlx | Text Generation | 15B | 115 | 1 |
| mradermacher/Qwen3-30B-A3B-Architect18-i1-GGUF | | 31B | 1.73k | 1 |
| mradermacher/Qwen3-14B-Scientist-BF16-GGUF | | 15B | 320 | 1 |
| nightmedia/Qwen3-14B-Scientist-BF16 | Text Generation | | 2 | 1 |
| McG-221/Qwen3-30B-A3B-Element5-mlx-8Bit | Text Generation | 31B | 1 | |
| DavidAU/Qwen3-8B-Hivemind-Instruct-Heretic-Abliterated-Uncensored-NEO-Imatrix-GGUF | Text Generation | 8B | 9.18k | 8 |
| DavidAU/Qwen3-24B-A4B-Freedom-Thinking-Abliterated-Heretic-NEO-Imatrix-GGUF | Text Generation | 17B | 11.4k | 16 |
| DavidAU/Qwen3-48B-A4B-Savant-Commander-Distill-12X-Closed-Open-Heretic-Uncensored-GGUF | Text Generation | 34B | 4.2k | 16 |
| DavidAU/Qwen3-24B-A4B-Freedom-HQ-Thinking-Abliterated-Heretic-NEOMAX-Imatrix-GGUF | Text Generation | 18B | 5.41k | 9 |
| tensorblock/UnfilteredAI_DAN-Qwen3-1.7B-GGUF | Text Generation | 2B | 575 | 5 |
| JunHowie/Qwen3-0.6B-GPTQ-Int4 | Text Generation | 0.6B | 374 | 1 |
| JunHowie/Qwen3-0.6B-GPTQ-Int8 | Text Generation | 0.6B | 21 | |
| JunHowie/Qwen3-1.7B-GPTQ-Int4 | Text Generation | 2B | 453 | 1 |
| JunHowie/Qwen3-1.7B-GPTQ-Int8 | Text Generation | 2B | 48 | |
| JunHowie/Qwen3-32B-GPTQ-Int4 | Text Generation | 33B | 681 | 3 |
| JunHowie/Qwen3-32B-GPTQ-Int8 | Text Generation | 33B | 203 | 3 |
| JunHowie/Qwen3-30B-A3B-GPTQ-Int4 | Text Generation | 5B | 115 | 1 |
| | Text Generation | | 22 | |
| JunHowie/Qwen3-14B-GPTQ-Int8 | Text Generation | 15B | 80 | 1 |