Hugging Face · amd's Collections
RyzenAI-1.3_LLM_Hybrid_Models
Updated Jun 16 · Upvotes: 2
Models quantized by Quark and prepared for the OGA-based hybrid execution flow (Ryzen AI 1.3).
- amd/Phi-3-mini-4k-instruct-awq-g128-int4-asym-fp16-onnx-hybrid · Text Generation · Updated Aug 27 · 35
- amd/Phi-3.5-mini-instruct-awq-g128-int4-asym-fp16-onnx-hybrid · Text Generation · Updated Sep 16 · 61
- amd/Mistral-7B-Instruct-v0.3-awq-g128-int4-asym-fp16-onnx-hybrid · Updated Sep 16 · 106
- amd/Qwen1.5-7B-Chat-awq-g128-int4-asym-fp16-onnx-hybrid · Text Generation · Updated Sep 16 · 36
- amd/chatglm3-6b-awq-g128-int4-asym-fp16-onnx-hybrid · Updated Jun 23 · 2
- amd/Llama-2-7b-hf-awq-g128-int4-asym-fp16-onnx-hybrid · Text Generation · Updated Jun 23 · 3
- amd/Llama-2-7b-chat-hf-awq-g128-int4-asym-fp16-onnx-hybrid · Text Generation · Updated Jun 23 · 5
- amd/Llama-3-8B-awq-g128-int4-asym-fp16-onnx-hybrid · Text Generation · Updated Jun 23 · 1
- amd/Llama-3.1-8B-awq-g128-int4-asym-fp16-onnx-hybrid · Text Generation · Updated Jun 23 · 8
- amd/Llama-3.2-1B-Instruct-awq-g128-int4-asym-fp16-onnx-hybrid · Updated Sep 16 · 274
- amd/Llama-3.2-3B-Instruct-awq-g128-int4-asym-fp16-onnx-hybrid · Updated Sep 16 · 207
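Models in this collection are typically run through onnxruntime-genai (OGA). The sketch below shows one plausible generation loop for the Phi-3 mini hybrid model, assuming the model repo has already been downloaded locally and that Ryzen AI 1.3 with its hybrid execution provider is installed. The `MODEL_DIR` path and the `build_prompt` chat-template helper are illustrative assumptions, and the OGA Python API surface has changed between releases, so treat this as a sketch rather than a definitive recipe.

```python
# Sketch: text generation with an OGA hybrid model (illustrative, not official).
import os

# Assumed local path to a downloaded model from this collection.
MODEL_DIR = "./Phi-3-mini-4k-instruct-awq-g128-int4-asym-fp16-onnx-hybrid"

def build_prompt(user_message: str) -> str:
    """Wrap a user message in a Phi-3-style chat template (assumed format)."""
    return f"<|user|>\n{user_message} <|end|>\n<|assistant|>"

# Only attempt inference if the model folder is actually present.
if os.path.isdir(MODEL_DIR):
    import onnxruntime_genai as og  # pip install onnxruntime-genai

    model = og.Model(MODEL_DIR)           # loads genai_config.json + ONNX graphs
    tokenizer = og.Tokenizer(model)
    params = og.GeneratorParams(model)
    params.set_search_options(max_length=256)

    generator = og.Generator(model, params)
    generator.append_tokens(tokenizer.encode(build_prompt("What is an NPU?")))
    while not generator.is_done():        # token-by-token decode loop
        generator.generate_next_token()
    print(tokenizer.decode(generator.get_sequence(0)))
```

On a Ryzen AI machine the hybrid flow splits work between the NPU (prefill-heavy compute) and the iGPU, which is why these int4/fp16 ONNX exports are packaged separately from the pure-NPU collection.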