Quantized Falcon3-7B-Base Models
This repository provides quantized GGUF versions of the Falcon3-7B-Base model. These 4-bit and 5-bit quantized variants retain the original model's strengths in language understanding, instruction following, code, and mathematics while reducing memory and compute requirements, making them well suited to efficient inference on resource-constrained devices. Falcon3-7B-Base supports four languages (English, French, Spanish, Portuguese) and a context length of up to 32K tokens.
Model Overview
- Original Model: Falcon3-7B-Base
- Quantized Versions:
  - Q4_K_M (4-bit quantization)
  - Q5_K_M (5-bit quantization)
- Architecture: Decoder-only transformer
- Base Model: Falcon3-7B-Base
- Modalities: Text only
- Developer: Technology Innovation Institute
- License: falcon-llm-license
- Languages: English, French, Spanish, Portuguese
Quantization Details
Q4_K_M Version
- Approx. 71% size reduction
- Lower memory footprint (~4.26 GB)
- Best suited for deployment on edge devices or low-resource GPUs
- Slight performance degradation in complex reasoning scenarios
Q5_K_M Version
- Approx. 67% size reduction
- Higher fidelity (~4.96 GB)
- Better performance retention, recommended when quality is a priority
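The size-reduction figures above can be sanity-checked with a few lines of arithmetic. The sketch below assumes an FP16 baseline of roughly 14.9 GB for a 7B-parameter GGUF export (an assumed figure; the exact baseline depends on the export):

```python
def size_reduction_pct(quant_gb: float, baseline_gb: float) -> int:
    """Percent size reduction of a quantized file vs. an unquantized baseline."""
    return round((1 - quant_gb / baseline_gb) * 100)

FP16_BASELINE_GB = 14.9  # assumed FP16 GGUF size for a 7B model

print(size_reduction_pct(4.26, FP16_BASELINE_GB))  # Q4_K_M -> 71
print(size_reduction_pct(4.96, FP16_BASELINE_GB))  # Q5_K_M -> 67
```

The file sizes (4.26 GB and 4.96 GB) come from this card; only the FP16 baseline is assumed.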
Key Features
- Pretrained on 14 trillion tokens of web, code, STEM, high-quality, and multilingual data using 1,024 H100 GPUs.
- Text-only instruction-following model optimized for multi-turn scientific question answering.
- Includes multilingual data, but with a primary focus on English (plus French, Spanish, Portuguese).
- 32K token context length with Grouped Query Attention (12 query heads, 4 KV heads) for scalable inference.
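Grouped Query Attention shrinks the KV cache by the ratio of query heads to KV heads. A rough estimate of the saving, using assumed architectural values (28 layers, head dimension 256; only the 12-query/4-KV head counts are stated in this card):

```python
def kv_cache_bytes(layers, kv_heads, head_dim, context_len, bytes_per_elem=2):
    # K and V tensors per layer, each of shape [kv_heads, context_len, head_dim]
    return 2 * layers * kv_heads * head_dim * context_len * bytes_per_elem

# Assumed Falcon3-7B-Base shape: 28 layers, head_dim 256 (assumptions;
# only the 12 query / 4 KV head counts come from this model card).
gqa = kv_cache_bytes(layers=28, kv_heads=4, head_dim=256, context_len=32_768)
mha = kv_cache_bytes(layers=28, kv_heads=12, head_dim=256, context_len=32_768)

print(gqa / 2**30)  # 3.5 GiB at FP16 for the full 32K context
print(mha / gqa)    # 3.0 -- GQA cuts KV memory by the head ratio, 12/4
```

Under these assumptions, GQA keeps the full 32K-context KV cache at roughly a third of what full multi-head attention would need.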
Usage
These lightweight variants are intended for developers and researchers working on reasoning, coding, mathematics, and multilingual tasks while cutting memory and compute costs.
llama.cpp (text-only)
./llama-cli -hf SandLogicTechnologies/Falcon3-7B-Base-GGUF -p "Explain Object-Oriented Programming in simple terms"
Recommended Use Cases
Code generation & programming
Assisting with code completion, debugging, or generating small snippets, especially in developer tools or coding assistants.
Scientific & technical research
Answering complex scientific questions, solving mathematical problems, and working with STEM content.
Long-context workflows
Processing documents, research papers, logs, transcripts, and other inputs of up to ~32K tokens in a single pass.
Low-resource deployment
Running the model efficiently on limited hardware such as CPUs, edge devices, or small GPUs.
Acknowledgments
These quantized models are based on the original work by the Technology Innovation Institute development team.
Special thanks to:
The Technology Innovation Institute team for developing and releasing the Falcon3-7B-Base model.
Georgi Gerganov and the entire llama.cpp open-source community for enabling efficient model quantization and inference via the GGUF format.
Contact
For any inquiries or support, please contact us at [email protected] or visit our Website.