Quantized Hermes 2 Pro Models
This repository provides quantized GGUF versions of the Hermes 2 Pro model. Hermes 2 Pro is an upgraded version of Nous Hermes 2, trained on a cleaned OpenHermes 2.5 dataset plus a new in-house Function Calling and JSON Mode dataset. These 4-bit and 5-bit quantized variants retain the original model's strengths: it excels at general tasks, structured JSON outputs, and reliable function calling (90% accuracy in Fireworks.AI evals). With a special system prompt, multi-turn function calling, and new single-token tags such as `<tool_call>` and `<tool_response>`, it is optimized for agentic parsing and streaming.
Model Overview
- Original Model: Hermes-2-Pro-Llama-3-8B
- Quantized Versions:
  - Q4_K_M (4-bit quantization)
  - Q5_K_M (5-bit quantization)
- Architecture: Decoder-only transformer
- Base Model: Meta-Llama-3-8B
- Modalities: Text only
- Developer: Nous Research
- License: Llama 3 Community License Agreement
- Language: English
Quantization Details
Q4_K_M Version
- ~75% size reduction
- Lower memory footprint (~4.58 GB)
- Best suited for deployment on edge devices or low-resource GPUs
- Slight performance degradation in complex reasoning scenarios
Q5_K_M Version
- ~71% size reduction
- Higher fidelity (~5.38 GB)
- Better performance retention; recommended when quality is a priority
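A quick way to decide between the two variants is to check whether the file size, plus some headroom for the KV cache and activations, fits the target device's memory. A minimal sketch using the sizes from this card (the 1.5 GB headroom figure is a rough assumption, not a measured value):

```python
def fits(model_gb: float, vram_gb: float, headroom_gb: float = 1.5) -> bool:
    """True if the model file plus KV-cache/activation headroom fits in memory."""
    return model_gb + headroom_gb <= vram_gb

variants = {"Q4_K_M": 4.58, "Q5_K_M": 5.38}  # sizes from this card
for name, size in variants.items():
    print(f"{name} on an 8 GB GPU: {'fits' if fits(size, 8.0) else 'too large'}")
```

Both variants leave room on an 8 GB GPU; longer contexts need proportionally more headroom for the KV cache.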
Key Features
- Retrained on a cleaned OpenHermes-2.5 dataset with added Function-Calling & JSON-Mode data.
- Strong Function Calling performance (≈90% in partnered evaluation) and structured JSON output accuracy (≈84%).
- Uses the ChatML prompt format and a special `tool_use` chat template to produce multi-turn, machine-parsable tool calls.
- Adds single-token markers to aid streaming/agent parsing: `<tools>`, `<tool_call>`, `<tool_response>` (and their closing tags).
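The features above can be illustrated by assembling a ChatML prompt that embeds tool signatures. This is a hand-rolled sketch of the layout; the exact system wording and tool schema are assumptions, and in practice the tokenizer's built-in `tool_use` chat template should be preferred:

```python
import json

def chatml_tool_prompt(system: str, tools: list[dict], user: str) -> str:
    """Build a ChatML prompt embedding tool signatures inside <tools> tags.

    Layout follows the Hermes 2 Pro tool-use convention; the system wording
    here is illustrative, not the official prompt.
    """
    tool_block = "<tools>\n" + "\n".join(json.dumps(t) for t in tools) + "\n</tools>"
    return (
        f"<|im_start|>system\n{system}\n{tool_block}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Hypothetical tool definition for illustration
tools = [{"name": "get_weather",
          "parameters": {"city": {"type": "string"}}}]
prompt = chatml_tool_prompt("You are a function-calling assistant.",
                            tools, "What's the weather in Oslo?")
print(prompt)
```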
Usage
Hermes 2 Pro — Llama-3 8B is ideal for building agents that require reliable function calling, structured JSON outputs, and strong reasoning. Its 8B size balances capability with efficiency, making it suitable for research, prototyping, and real-world applications.
llama.cpp (text-only)
```bash
./llama-cli -hf SandLogicTechnologies/Hermes-2-Pro-GGUF -p "Write a python script designed for adding to a library on data cleaning"
```
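On the consuming side, the single-token tags make tool calls straightforward to extract from generated text. A minimal parsing sketch (the example output string is illustrative, not captured from the model):

```python
import json
import re

# Capture everything between the <tool_call> tags, across newlines
TOOL_CALL_RE = re.compile(r"<tool_call>\s*(.*?)\s*</tool_call>", re.DOTALL)

def extract_tool_calls(text: str) -> list[dict]:
    """Pull JSON payloads out of <tool_call>...</tool_call> spans."""
    return [json.loads(m) for m in TOOL_CALL_RE.findall(text)]

# Illustrative model output
output = ('<tool_call>\n'
          '{"name": "get_weather", "arguments": {"city": "Oslo"}}\n'
          '</tool_call>')
calls = extract_tool_calls(output)
print(calls[0]["name"])  # get_weather
```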
Model Data
Pretraining Overview
Hermes 2 Pro — Llama-3 8B was trained on a refined version of the OpenHermes-2.5 dataset, combined with a custom Function Calling and JSON Mode corpus developed in-house. The data mix includes high-quality web content, code, reasoning tasks, STEM material, and multilingual samples. This targeted training enables the model to excel not only at general conversation but also at structured output generation and reliable tool use.
Recommended Use Cases
Function Calling & Tool Use
Powering agentic workflows where the model selects and invokes external tools or APIs using reliable JSON-based calls.
Structured JSON Outputs
Generating machine-readable responses that conform to a schema, useful for automation, integration with services, and structured data extraction.
Resource-conscious Deployment
The 8B parameter size makes it suitable for smaller GPUs and cloud environments, balancing performance with accessibility.
Low-resource Deployment
Running efficiently on limited hardware such as CPUs, edge devices, or small GPUs.
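Even with strong JSON-mode accuracy, it is prudent to validate the model's structured output before acting on it. A minimal stdlib-only check (a production pipeline might use the `jsonschema` package instead; the record fields here are illustrative):

```python
import json

def validate(payload: str, required: dict[str, type]) -> dict:
    """Parse a JSON string and verify required fields and their types."""
    data = json.loads(payload)
    for key, expected in required.items():
        if key not in data:
            raise ValueError(f"missing field: {key}")
        if not isinstance(data[key], expected):
            raise TypeError(f"{key}: expected {expected.__name__}")
    return data

# Illustrative model output conforming to a hypothetical schema
raw = '{"title": "Data Cleaning 101", "year": 2024}'
record = validate(raw, {"title": str, "year": int})
print(record["title"])  # Data Cleaning 101
```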
Acknowledgments
These quantized models are based on the original work by the NousResearch development team.
Special thanks to:
The NousResearch team for developing and releasing the Hermes-2-Pro-Llama-3-8B model.
Georgi Gerganov and the entire llama.cpp open-source community for enabling efficient model quantization and inference via the GGUF format.
Contact
For any inquiries or support, please contact us at [email protected] or visit our Website.