
Introduction

Prosodia is an organization dedicated to developing and transparently distributing open-source Portuguese language models. Prosodia 1 is a 0.6-billion-parameter Small Language Model (SLM) trained on publicly available data and released for community use. Built on the Qwen3 architecture, it represents our commitment to accessible AI development. As the first model of its kind from our team, it exists as a proof of concept: it was trained on a single AMD MI300X GPU in a record time of 7 days.

Training

This model was developed through a focused one-week effort with substantially limited computational resources compared to industry leaders. Its primary purpose is to demonstrate that, through intelligent design, transparency, and community collaboration, it is possible to create high-quality Brazilian and European Portuguese language models without massive infrastructure.

The training regimen utilized approximately 20 billion tokens for the base model and just under 1 billion tokens for instruction tuning and subsequent refinements. While these volumes are far from ideal, they serve as a rapid proof-of-concept that establishes a foundation for future, more comprehensive development.

Inference

The model is fully compatible with standard LLM deployment stacks: it can be loaded with Hugging Face Transformers, served with vLLM, or run locally through llama.cpp-based tools using the GGUF files in this repository.
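As a minimal sketch of serving the model, the snippet below queries an OpenAI-compatible chat endpoint such as the one vLLM exposes. The server URL, port, and the `Prosodia/Prosodia_1` model id are assumptions for illustration (this page distributes the GGUF variant); adjust them to match your deployment.

```python
# Sketch: querying an OpenAI-compatible chat endpoint (e.g. `vllm serve ...`).
# The URL, port, and model id below are assumptions; adapt to your setup.
import json
import urllib.request

def build_request(prompt,
                  model="Prosodia/Prosodia_1",
                  url="http://localhost:8000/v1/chat/completions"):
    """Build the POST request for an OpenAI-compatible chat-completions API."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def chat(prompt):
    """Send the prompt and return the assistant's reply text."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat("Explique em uma frase o que é um modelo de linguagem."))
```

The same request shape works against any OpenAI-compatible backend, so swapping vLLM for another server only requires changing the `url` argument.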


Model tree for Prosodia/Prosodia_1-gguf

Base model: Qwen/Qwen3-0.6B (finetuned, then quantized to GGUF)
