
Introduction

Prosodia is an organization dedicated to developing and transparently distributing open-source Portuguese language models. The Prosodia T0.6B is a 0.6 billion parameter Small Language Model (SLM) trained on publicly available data and released for community use. Built on the Qwen3 architecture, it represents our commitment to accessible AI development.

Training

This model was developed through a focused one-month effort with substantially less compute than industry leaders typically have available. Its primary purpose is to demonstrate that, through careful design, transparency, and community collaboration, it is possible to build high-quality language models for Brazilian and European Portuguese without massive infrastructure.

The training regimen used approximately 40 billion tokens for the base model and just under 1 billion tokens for instruction tuning and subsequent refinements. While these volumes fall well short of what frontier models consume, they serve as a rapid proof of concept and establish a foundation for future, more comprehensive development.

Inference

The model is fully compatible with standard LLM deployment tooling: it can be loaded with Hugging Face Transformers, served with vLLM, or converted to formats such as GGUF for llama.cpp-style runtimes.
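As a minimal sketch of local inference with Hugging Face Transformers (the model ID comes from this card; the prompt and generation settings are illustrative, not an official recommendation):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Prosodia/Prosodia_T0.6B-Instruct"

# Load the tokenizer and the BF16 weights listed on this card.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="bfloat16")

# Instruct models expect chat-formatted input; apply_chat_template
# wraps the message in the template shipped with the tokenizer.
messages = [{"role": "user", "content": "Explique o que é um modelo de linguagem."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For higher-throughput serving, the same model ID can be passed to vLLM (e.g. `vllm serve Prosodia/Prosodia_T0.6B-Instruct`), which exposes an OpenAI-compatible endpoint.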

Model size: 0.6B params
Tensor type: BF16
Format: Safetensors

Model tree for Prosodia/Prosodia_T0.6B-Instruct

Base model: Qwen/Qwen3-0.6B (finetuned)
Quantizations: 1 model

Datasets used to train Prosodia/Prosodia_T0.6B-Instruct