🧬 Qwen2.5-0.5B Tamil 'Chaos' Experiment

An experimental small language model (SLM) that tests resilience to weight pruning and quantization on classical Tamil literature.

🧪 Methodology

  • Base: Qwen2.5-0.5B
  • Data: Thirukkural (Classical Tamil)
  • Chaos: 20% random weight pruning, with embeddings and LM head protected (see the sketch after this list)
  • Quantization: GGUF Q4_K_M

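The pruning step can be reproduced with a short script. Below is a minimal sketch, assuming PyTorch and the transformers library; the module names `embed_tokens` and `lm_head` follow the Qwen2 layout in transformers, so verify them against the checkpoint you actually load.

```python
# Minimal sketch of the "chaos" pruning step (assumptions noted above).
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")

PRUNE_FRACTION = 0.20                     # fraction of weights to zero out
PROTECTED = ("embed_tokens", "lm_head")   # leave embeddings and LM head intact

with torch.no_grad():
    for name, param in model.named_parameters():
        # Skip protected modules and 1-D tensors (biases, norm weights)
        if any(p in name for p in PROTECTED) or param.dim() < 2:
            continue
        # Zero out a random 20% of the entries in this weight matrix
        keep = (torch.rand(param.shape, device=param.device) >= PRUNE_FRACTION)
        param.mul_(keep.to(param.dtype))

model.save_pretrained("qwen-tamil-pruned")
```

The pruned checkpoint can then be converted to GGUF and quantized to Q4_K_M with llama.cpp's conversion and quantization tools.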
🚀 Usage

```
./llama-cli -m qwen-tamil-refined-Q4_K.gguf -p 'குறள்: ' -n 128 --temp 0.8
```
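The same prompt can also be run from Python via llama-cpp-python; a minimal sketch, assuming the package is installed and the GGUF file is in the working directory:

```python
from llama_cpp import Llama

# Load the quantized GGUF file (path assumed; adjust to your local copy)
llm = Llama(model_path="qwen-tamil-refined-Q4_K.gguf", n_ctx=512)

# Same "குறள்: " prompt and sampling settings as the CLI example above
out = llm("குறள்: ", max_tokens=128, temperature=0.8)
print(out["choices"][0]["text"])
```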