Introduction
Prosodia is an organization dedicated to developing and transparently distributing open-source Portuguese language models. The Prosodia T0.6B is a 0.6 billion parameter Small Language Model (SLM) trained on publicly available data and released for community use. Built on the Qwen3 architecture, it represents our commitment to accessible AI development.
Training
This model was developed through a focused one-month effort with substantially limited computational resources compared to industry leaders. Its primary purpose is to demonstrate that, through intelligent design, transparency, and community collaboration, it is possible to create high-quality language models for Brazilian and European Portuguese without massive infrastructure.
The training regimen utilized approximately 40 billion tokens for the base model and just under 1 billion tokens for instruction tuning and subsequent refinements. While these volumes are far from ideal, they serve as a rapid proof-of-concept that establishes a foundation for future, more comprehensive development.
Inference
The model is fully compatible with standard LLM deployment stacks: it can be loaded with the HuggingFace Transformers library, served with vLLM, or converted to GGUF for compatible runtimes such as llama.cpp.
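As a minimal sketch of loading the model with HuggingFace Transformers: note that the repository id below is a placeholder assumption, not a confirmed Prosodia repo id, so check the actual model card before running.

```python
def build_messages(prompt: str) -> list:
    # Wrap a user prompt in the chat-message format consumed by
    # tokenizer.apply_chat_template on Qwen3-style instruction models.
    return [{"role": "user", "content": prompt}]


if __name__ == "__main__":
    # Heavy, network-dependent part kept under the main guard.
    from transformers import AutoModelForCausalLM, AutoTokenizer  # pip install transformers

    model_id = "prosodia/T0.6B"  # placeholder repo id: replace with the real one
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    inputs = tokenizer.apply_chat_template(
        build_messages("Explique, em uma frase, o que é um modelo de linguagem."),
        add_generation_prompt=True,
        return_tensors="pt",
    )
    outputs = model.generate(inputs, max_new_tokens=128)
    # Decode only the newly generated tokens, skipping the prompt.
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The same chat-message list works unchanged with a vLLM OpenAI-compatible server, since both consume the standard role/content message format.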