Active filters: mixtral
| Model | Task | Params | Downloads | Likes |
|---|---|---|---|---|
| mistralai/Mixtral-8x7B-Instruct-v0.1 | | 47B | 404k | 4.62k |
| DavidAU/Llama3.2-30B-A3B-II-Dark-Champion-INSTRUCT-Heretic-Abliterated-Uncensored | Text Generation | 30B | 2.39k | 5 |
| DavidAU/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-gguf | Text Generation | 24B | 3.86k | 90 |
| SIP-med-LLM/SIP-jmed-llm-3-8x13b-AC-32k-instruct | | 73B | 285 | 5 |
| mistralai/Mixtral-8x7B-v0.1 | | 47B | 58.5k | 1.77k |
| TheBloke/Mixtral-8x7B-v0.1-GGUF | | 47B | 6.67k | 433 |
| TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF | | 47B | 27.9k | 650 |
| TheBloke/mixtral-8x7b-v0.1-AWQ | Text Generation | 47B | 337 | 11 |
| argilla/notux-8x7b-v1 | Text Generation | 47B | 43 | 164 |
| dphn/dolphin-2.5-mixtral-8x7b | Text Generation | 47B | 1.83k | 1.24k |
| BiMediX/BiMediX-Bi | Text Generation | | 953 | 6 |
| alpindale/WizardLM-2-8x22B | Text Generation | 141B | 8.81k | 408 |
| MaziyarPanahi/Mixtral-8x22B-Instruct-v0.1-GGUF | Text Generation | 141B | 2.52k | 34 |
| dphn/dolphin-2.9.2-mixtral-8x22b | Text Generation | 141B | 7.89k | 43 |
| DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters | | | | |
| llm-jp/llm-jp-3-8x1.8b-instruct3 | Text Generation | 9B | 31 | 4 |
| microsoft/NatureLM-8x7B | | 47B | 73 | 18 |
| SuperbEmphasis/Viloet-Eclipse-2x12B-v0.2-MINI-Reasoning | | 21B | 4 | 8 |
| DavidAU/Mistral-2x24B-MOE-Magistral-2506-Devstral-2507-1.1-Coder-Reasoning-Ultimate-44B | Text Generation | 44B | 30 | 3 |
| QuantTrio/MiniMax-M2-AWQ | Text Generation | 229B | 43.7k | 7 |
| DavidAU/Llama3.2-24B-A3B-II-Dark-Champion-INSTRUCT-Heretic-Abliterated-Uncensored | Text Generation | 18B | 431 | 2 |
| Rub11037/NeuvilletteBot | Text Generation | 2B | 19 | |
| TheBlokeAI/Mixtral-tiny-GPTQ | Text Generation | 0.2B | 16 | 3 |
| LoneStriker/Mixtral-8x7B-Instruct-v0.1-HF | Text Generation | | 24 | 4 |
| LoneStriker/Mixtral-8x7B-v0.1-HF | Text Generation | | 18 | |
| TheBloke/Mixtral-8x7B-v0.1-GPTQ | Text Generation | 47B | 1.51k | 127 |
| mobiuslabsgmbh/Mixtral-8x7B-Instruct-v0.1-hf-2bit_g16_s128-HQQ | Text Generation | | 78 | 9 |
| mobiuslabsgmbh/Mixtral-8x7B-Instruct-v0.1-hf-4bit_g64-HQQ | Text Generation | | 39 | 9 |
| marcsun13/Mixtral-tiny-GPTQ | Text Generation | 0.2B | 207 | |
| mobiuslabsgmbh/Mixtral-8x7B-v0.1-hf-2bit_g16_s128-HQQ | Text Generation | | 64 | 4 |
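The download and like counts above use the Hub's abbreviated notation, e.g. "4.62k" for 4,620. A minimal sketch of a helper that expands these strings into integers for sorting or comparison, assuming only the "k" (thousands) and "M" (millions) suffixes seen on such listings:

```python
def parse_count(s: str) -> int:
    """Expand an abbreviated count such as '4.62k' or '404k' to an integer.

    Assumes suffixes 'k' (thousands) and 'M' (millions); a plain number
    string passes through unchanged.
    """
    s = s.strip()
    multipliers = {"k": 1_000, "M": 1_000_000}
    if s and s[-1] in multipliers:
        return round(float(s[:-1]) * multipliers[s[-1]])
    return int(s)

# Examples drawn from the listing above:
print(parse_count("404k"))   # 404000
print(parse_count("4.62k"))  # 4620
print(parse_count("285"))    # 285
```

Note that abbreviated counts are rounded at the source, so values recovered this way are approximate (e.g. "58.5k" could stand for anything from 58,450 to 58,549 actual downloads).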