# DM-LLM-Tiny
A tiny (1.1B parameter) language model fine-tuned for Dungeons & Dragons content generation.
## What it does
Generates creative D&D content including:
- NPCs – memorable characters with backstories, motivations, and quirks
- Quests – hooks, outlines, and full quest arcs
- Dialog – in-character conversations, monologues, and banter
- Locations – vivid descriptions of dungeons, towns, and wilderness
- Encounters – combat, social, and puzzle encounters
## Usage

### With Ollama (easiest)

```shell
ollama run JBHarris/dm-llm-tiny
```
### With Transformers

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="JBHarris/dm-llm-tiny")

messages = [
    {"role": "system", "content": "You are a creative D&D dungeon master's assistant."},
    {"role": "user", "content": "Create a mysterious NPC for a tavern scene."},
]

result = pipe(messages, max_new_tokens=512)
print(result[0]["generated_text"][-1]["content"])
```
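Behind the scenes, `pipeline` applies the model's chat template to turn the message list into a single prompt string. Since the base model is TinyLlama-1.1B-Chat, which uses a Zephyr-style template, the formatting looks roughly like the sketch below (an illustration of the template format, not this repo's code; `pipeline` handles this for you):

```python
def apply_chat_template(messages):
    """Rough sketch of the Zephyr-style template TinyLlama-Chat uses.
    Assumption: the fine-tune keeps the base model's template."""
    prompt = ""
    for m in messages:
        # Each turn is wrapped as <|role|>\n content </s>
        prompt += f"<|{m['role']}|>\n{m['content']}</s>\n"
    # End with the assistant header so the model continues from there.
    return prompt + "<|assistant|>\n"

messages = [
    {"role": "system", "content": "You are a creative D&D dungeon master's assistant."},
    {"role": "user", "content": "Create a mysterious NPC for a tavern scene."},
]
print(apply_chat_template(messages))
```

If you build prompts manually (e.g. outside `pipeline`), prefer `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` so the exact template shipped with the model is used.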
## Training
- Base model: TinyLlama-1.1B-Chat-v1.0
- Method: QLoRA (4-bit NF4 quantization + LoRA r=64)
- Data: ~500 synthetic D&D instruction/response pairs generated with Claude
- Hardware: NVIDIA RTX 4080 16GB
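With QLoRA, the frozen base weights sit in 4-bit NF4 while only the LoRA adapter matrices train. The adapter size can be estimated from the base model's shapes; here is a back-of-the-envelope sketch assuming adapters on the four attention projections only, using TinyLlama's published config (hidden size 2048, 22 layers, grouped-query attention with 256-dim k/v projections). The target modules are an assumption; the actual fine-tune may have adapted more layers.

```python
# Back-of-the-envelope LoRA parameter count for TinyLlama-1.1B with r=64.
# Assumption: adapters on q/k/v/o attention projections only.
r = 64          # LoRA rank used for this model
hidden = 2048   # TinyLlama hidden size
kv_dim = 256    # k/v projection output dim (grouped-query attention)
layers = 22     # number of transformer layers

def lora_params(d_in, d_out, rank):
    # Each adapted weight W (d_out x d_in) gains A (rank x d_in)
    # and B (d_out x rank), so rank * (d_in + d_out) new parameters.
    return rank * (d_in + d_out)

per_layer = (
    lora_params(hidden, hidden, r)    # q_proj
    + lora_params(hidden, kv_dim, r)  # k_proj
    + lora_params(hidden, kv_dim, r)  # v_proj
    + lora_params(hidden, hidden, r)  # o_proj
)
total = per_layer * layers
print(f"~{total / 1e6:.1f}M trainable parameters")  # ~18.0M
```

Roughly 18M trainable parameters, under 2% of the 1.1B base, which is what makes fine-tuning feasible on a single 16GB card.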
## Limitations
This is a 1.1B parameter model. It's creative and fun for brainstorming but will not match the quality of larger models (7B+). Best used as a quick idea generator, not a replacement for a human DM's judgment.
## License
Apache 2.0 (same as base model)