Qwen2.5-3B-Gita-FT
A Bhagavad Gita-focused assistant that adopts a Krishna-inspired teaching persona to guide you on your spiritual path.
Model Description
Qwen2.5-3B-Gita-FT is a LoRA-tuned model built on Qwen/Qwen2.5-3B-Instruct, focused on tasks around the Bhagavad Gītā. It supports:
- Krishna-inspired persona: Calm, compassionate, and practical tone for guidance and teaching.
- Commentary Q&A: approachable explanations of concepts (e.g., niṣkāma-karma, guṇa theory) in a Krishna-like tone.
Important: The model is not Krishna, nor a religious authority. Its persona and style come from training data and prompts. It can make mistakes, oversimplify nuanced ideas, misremember verse numbers, or produce non-canonical wording. For study or citation, please verify against authoritative editions and scholars.
Key Features
- Commentary tone control: System prompts steer classical or modern explanatory style.
- Resource efficient: LoRA adapters with mixed precision; optional 4-bit inference.
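A minimal sketch of the optional 4-bit inference path is shown below. It assumes `bitsandbytes` is installed and a CUDA GPU is available; this is an illustration of one way to load the model in 4-bit, not a configuration shipped with the repository.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "JDhruv14/Qwen2.5-3B-Gita-FT"

# Optional 4-bit quantized loading (requires bitsandbytes and a CUDA GPU)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
```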
Model Specs
| Parameter | Value |
|---|---|
| Base Model | Qwen/Qwen2.5-3B-Instruct |
| Fine-tuning | LoRA (rank=16, alpha=32) |
| Seq Length | 1024 (recommend ≥ 512 for long verses) |
| Epochs | 3 |
| LR | 2e-4 |
| Batch | 2 (micro) × 4 (gradient accumulation) |
| Optimizer | AdamW 8-bit |
| Precision | bf16 (training & inference where available) |
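For reference, the adapter settings in the table roughly correspond to a PEFT `LoraConfig` like the sketch below. The dropout value and `target_modules` are assumptions (they are not listed in the table), so treat this as an approximation rather than the exact training configuration.

```python
from peft import LoraConfig

# Approximate reconstruction of the adapter config from the spec table.
# target_modules and lora_dropout are assumptions, not documented values.
lora_config = LoraConfig(
    r=16,            # LoRA rank (from the table)
    lora_alpha=32,   # LoRA alpha (from the table)
    lora_dropout=0.05,  # assumed; not listed in the table
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # typical Qwen2.5 attention projections (assumed)
)
```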
Intended Uses
Recommended
- Study aids for verse comprehension, transliteration, and quick glosses.
- Educational apps and assistive tools for learners.
- Search-and-summarize workflows for specific verses and concepts.
Limitations
- Interpretation variance: Philosophical terms can have multiple valid readings.
- Historical/cultural nuance: May miss context without retrieval.
- Hallucinations: Frequently produces errors when generating Hindi or Gujarati text.
Quickstart (Transformers)
Requires `transformers>=4.41`, `torch`, and `accelerate`. Some Qwen models need `trust_remote_code=True`.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "JDhruv14/Qwen2.5-3B-Gita-FT"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Prepare the conversation
messages = [
    {
        "role": "system",
        "content": "You are Lord Krishna, the serene, compassionate teacher of the Bhagavad Gita."
    },
    {
        "role": "user",
        "content": "Hey Keshav, what's my dharma?"
    }
]

# Apply chat template and generate
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens
response = tokenizer.decode(outputs[0][len(inputs.input_ids[0]):], skip_special_tokens=True)
print(response)
```
Citation
```bibtex
@misc{gita-qwen-assistant,
  title={Gita-qwen-3B-Assistant: A Bhagavad Gītā-focused model for motivation and guidance based on the eternal teaching of Madhav},
  author={JDhruv14},
  year={2025},
  url={https://huggingface.co/JDhruv14/Qwen2.5-3B-Gita-FT}
}
```
Contributing
- Add verse-aligned examples, domain-checked glosses, and evaluation sets.
- Propose prompt templates for specific chapters/themes (e.g., Karma-yoga, Bhakti-yoga); see the example after this list.
- Open issues/PRs for bugs or inaccuracies.
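As a rough illustration of what a prompt-template contribution could look like, here is a hypothetical Karma-yoga system prompt; the wording is an example only and is not part of the released model.

```python
# Hypothetical theme-specific system prompt a contribution might propose.
KARMA_YOGA_SYSTEM_PROMPT = (
    "You are Lord Krishna, teaching Karma-yoga from the Bhagavad Gita. "
    "Focus on acting without attachment to results (niṣkāma-karma), "
    "cite chapter and verse where relevant, and keep the tone calm and practical."
)
```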
License
Released under Apache 2.0. See LICENSE.