---
license: apache-2.0
base_model:
  - mistralai/Mistral-Small-24B-Base-2501
language:
  - en
pipeline_tag: text-generation
tags:
  - character_roleplay
  - creative_writing
  - roleplay
---

# Prima-24B

A larger character-roleplay model trained on the custom "Actors" dataset, the largest dataset I've ever made! This model expands on what I learned from TinyRP and overcomes certain limitations I found in it, and it was trained on an entirely new dataset made just for this model.

This model does have a bit of AI-style writing, but it is overall more reliable in its outputs than the smaller Trouper-12B and tolerates mismatched templates better. That said, I think it is worth releasing, especially since its larger size will likely help its knowledge base and longer conversations.

If you want less purple prose and more emotionally intimate characters, do check out Trouper-12B.

-> You can find it here: Trouper-12B

I'm looking for feedback, so please share if you have any!

## Key Features

- **Reliable:** Consistent behavior without meta-breaks or template issues
- **Grammatically consistent:** No perspective-confusion errors
- **Long context:** Better handling of 50+ turn conversations
- **Action-oriented:** Natural energy for adventure/action roleplay scenarios
- **Zero-fuss setup:** More forgiving of template variations

## Recommended Settings

Use chat completion mode.

- **Temperature:** 0.7 (tested and validated)
- **Template:** Mistral-V7-Tekken or ChatML (critical for proper formatting and stop behavior; ChatML may perform better)
- **Context:** Handles long multi-turn conversations effectively
- **Prompt preprocessing:** Semi-strict, no tools
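If your frontend does not apply the ChatML template for you, the format can be reproduced by hand. A minimal sketch (the character and messages are made up for illustration; verify the special tokens against the tokenizer config shipped with the model):

```python
# Sketch: rendering a chat as a ChatML prompt string by hand.
# Uses the standard ChatML delimiters (<|im_start|> / <|im_end|>);
# confirm these match the model's tokenizer config before relying on them.

def to_chatml(messages):
    """Render a list of {role, content} dicts as a ChatML prompt string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Leave the assistant turn open so the model completes it.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are playing the character Mira, a sarcastic innkeeper."},
    {"role": "user", "content": "I push open the tavern door."},
])
```

Most frontends (and `tokenizer.apply_chat_template` in transformers) do this for you; the sketch is only to show what "proper formatting and stop behavior" depends on.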

## Strengths

- **Reliability:** No meta-narration breaks, consistent stopping behavior
- **Template flexibility:** Works with various Mistral templates
- **Long context:** Maintains quality over extended conversations
- **Adventure energy:** Better at action-oriented, dynamic scenarios
- **Accessibility:** Easier to set up and use than Trouper

## Comparison to Trouper-12B

Prima-24B and Trouper-12B are trained on identical data but offer different trade-offs:

| Aspect | Prima-24B | Trouper-12B |
|---|---|---|
| Prose style | Slightly more elaborate | Direct and concrete |
| AI slop | Moderate (some patterns) | Minimal |
| Reliability | Excellent | Good (template-sensitive) |
| Long context | Better (24B) | Good (12B) |
| Inference speed | Slower (24B) | Faster (12B) |
| Setup difficulty | Easy | Moderate (template critical) |
| Action RP | Excellent | Good |
| Emotional RP | Good | Excellent |

**Choose Prima-24B if:** You want reliability, long context, or action-oriented RP.
**Choose Trouper-12B if:** You want the best prose quality and don't mind template setup or the occasional regeneration of a reply.

## Known Characteristics

- **Prose style:** Tends toward slightly more elaborate descriptions (some users may perceive this as "AI-ish")
- **Repetitive descriptors:** May occasionally reuse phrases like "blue eyes" + descriptor
- **Purple prose:** Occasional tendency toward flowery language (not excessive)
- **Structural patterns:** Generally good variety, but slightly more predictable than Trouper-12B

None of these are critical flaws, just characteristics to be aware of. Raising the temperature (0.8-0.9) may help increase variety.
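For example, if you serve the model behind an OpenAI-compatible chat-completions endpoint (a common setup; the model name and messages below are placeholders, not part of this release), the variety-oriented settings might look like:

```python
# Hypothetical request body for an OpenAI-compatible chat-completions API.
# Only the temperature value reflects the recommendation above; everything
# else is an illustrative placeholder.
request_body = {
    "model": "Prima-24B",
    "messages": [
        {"role": "system", "content": "You are playing the character Mira."},
        {"role": "user", "content": "The tavern door creaks open..."},
    ],
    "temperature": 0.85,  # 0.8-0.9 for more varied phrasing; 0.7 for reliability
}
```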

## Got Feedback?

Issues, questions, and feedback are welcome! I'm particularly interested in:

- Long conversation quality (20+ turns)
- Template compatibility findings
- Comparisons with other RP models

Feel free to make a post in the Community tab here!