# 🧬 Qwen2.5-0.5B Tamil 'Chaos' Experiment
This is an experimental small language model (SLM) probing how Qwen2.5-0.5B holds up under weight pruning and quantization after fine-tuning on classical Tamil literature.
## 🧪 Methodology
- Base model: Qwen2.5-0.5B
- Data: Thirukkural (classical Tamil couplets)
- Chaos: 20% random unstructured pruning, with the embeddings and LM head protected
- Quantization: GGUF Q4_K_M
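The pruning step above can be sketched roughly as follows. This is a minimal illustration, not the experiment's actual script: the tensor names, `PROTECTED` substrings, and seed are assumptions, and a real run would operate on the model's `state_dict` rather than a toy dictionary.

```python
import numpy as np

# Substrings marking tensors to leave intact (assumed naming, illustrative only)
PROTECTED = ("embed", "lm_head")

def random_prune(weights, sparsity=0.20, seed=0):
    """Zero out a random `sparsity` fraction of each unprotected tensor."""
    rng = np.random.default_rng(seed)
    pruned = {}
    for name, w in weights.items():
        if any(p in name for p in PROTECTED):
            pruned[name] = w.copy()       # protected: copied through untouched
        else:
            mask = rng.random(w.shape) >= sparsity
            pruned[name] = w * mask       # ~20% of entries set to zero
    return pruned

# Toy stand-in for a model state dict (keys are illustrative, not Qwen2.5's)
weights = {
    "model.embed_tokens.weight": np.ones((4, 4)),
    "model.layers.0.mlp.up_proj.weight": np.ones((64, 64)),
    "lm_head.weight": np.ones((4, 4)),
}
out = random_prune(weights)
```

Because the pruning is random (not magnitude-based), each unprotected tensor loses roughly 20% of its entries regardless of their importance, which is what makes this a resilience test rather than a compression technique.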
## 🚀 Usage
```shell
./llama-cli -m qwen-tamil-refined-Q4_K.gguf -p 'குறள்: ' -n 128 --temp 0.8
```