The-Omega-Directive
MS3.2-24B-Unslop-v2.1
Unslop Revolution
This evolution of The-Omega-Directive delivers unprecedented coherence without the LLM slop:
- RegEx-Filtered ~39M Token Dataset - Second ReadyArt model with multi-turn conversational data
- 100% Unslopped Dataset - New generation techniques produce a dataset with 0% slop
- Enhanced Unalignment - Complete freedom for extreme roleplay while maintaining character integrity
- Anti-Impersonation Guards - Never speaks or acts for the user
- Omega Darker Inspiration - Incorporates visceral narrative techniques from our darkest model
- 128K Context Window - Enhanced long-context capabilities without compromising performance
Enhanced Capabilities
Powered by anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only:
- Extended Context - Handles up to 128K tokens for complex, long-form interactions
- Performance Optimized - Maintains text generation quality while adding new capabilities
- Multilingual Support - Fluent in 9 languages, including English, French, German, and Spanish
Technical Specifications
Key Training Details:
- Base Model: anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only
- Training Method: QLoRA
- Sequence Length: 5120 (100% of samples included)
- Learning Rate: 2e-6 with cosine scheduler
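For reference, a cosine learning-rate schedule like the one listed above can be sketched in a few lines. This is a minimal illustration in the style of common trainers' "cosine" schedulers; the warmup fraction and step counts are assumptions, since the card only specifies the peak rate (2e-6) and the scheduler type:

```python
import math

def cosine_lr(step: int, total_steps: int, base_lr: float = 2e-6,
              warmup_steps: int = 0) -> float:
    """Learning rate at `step` under linear warmup + cosine decay.

    Assumed shape: ramps linearly to base_lr over warmup_steps, then
    decays along a half-cosine from base_lr down to 0.
    """
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * base_lr * (1 + math.cos(math.pi * progress))

# Sketch of the curve over a hypothetical 1000-step run
for s in (0, 250, 500, 750, 1000):
    print(f"step {s}: lr = {cosine_lr(s, 1000):.2e}")
```

The rate starts at the configured peak, passes through half the peak at the midpoint, and decays to zero by the final step.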
Recommended Settings: Pending ¯\_(ツ)_/¯
GGUF
Notes: Q4_K_M recommended for a speed/quality balance. Q6_K for very high quality. Q8_0 for near-original quality. Prefer imatrix quants where available.
imatrix
Notes: Q4_K_M recommended. IQ1_S/IQ1_M only as a last resort on very limited hardware. Q6_K for very high quality.
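As a rough guide to choosing among these quant levels, on-disk size (and roughly, VRAM footprint) scales with bits per weight. The sketch below uses approximate community bits-per-weight figures for llama.cpp quants and a nominal 24B parameter count; both are assumptions, not official numbers:

```python
# Approximate bits-per-weight for llama.cpp quant formats (community
# estimates, not official values).
BITS_PER_WEIGHT = {
    "IQ1_M": 1.75,
    "Q4_K_M": 4.85,
    "Q6_K": 6.56,
    "Q8_0": 8.50,
}

def quant_size_gb(params: float, fmt: str) -> float:
    """Estimated on-disk size in GiB for `params` weights in format `fmt`."""
    bits = BITS_PER_WEIGHT[fmt]
    return params * bits / 8 / 2**30

# Nominal 24B-parameter model
for fmt in BITS_PER_WEIGHT:
    print(f"{fmt}: ~{quant_size_gb(24e9, fmt):.1f} GiB")
```

This makes the tradeoff above concrete: each step up in quant quality costs several GiB, which is why Q4_K_M tends to be the practical default.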
EXL2
EXL3
Ethical Considerations
This model enhances The-Omega-Directive's unalignment:
- Zero moral or ethical constraints on generated content
- Will enthusiastically engage with any NSFW scenario, no matter how extreme
- Characters maintain integrity - wholesome characters refuse appropriately, yanderes stab without hesitation
- Perfectly balanced between character authenticity and user freedom
Performance Notes
- Maintains Omega's intensity with improved narrative coherence
- Excels at long-form multi-character scenarios
- Superior instruction following with complex prompts
- Reduced repetition and hallucination compared to v1.1
- Uncanny ability to adapt to subtle prompt nuances
- Incorporates Omega Darker's visceral descriptive power when appropriate
Model Authors
- sleepdeprived3 (Training Data)
- gecfdo (Fine-Tuning & Data Filtering)
- ReadyArt / Artus / gecfdo (EXL2/EXL3 Quantization)
- mradermacher (GGUF Quantization)
License
By using this model, you agree:
- To accept full responsibility for all generated content
- That you are at least 18 years old
- That the architects bear no responsibility for your corruption