---
tags:
  - discord
  - chatml
  - conversation
  - dialogue
  - multi-turn
  - single-turn
  - fine-tuning
  - reward-model
  - llm-training
  - chat-dataset
  - open-source
  - anonymized-data
  - casual-dialogue
license: apache-2.0
language:
  - en
pretty_name: Discord-Dialogues
size_categories:
  - 1M<n<10M
---

This is a clone of mookiezi/Discord-Dialogues.

# Discord-Dialogues

> **Discord-Dialogues** is a large-scale dataset of anonymized Discord conversations, collected from late spring to early fall 2025, for training and evaluating realistic conversational AI models in a ChatML-friendly format. It contains 7.5 million exchanges spanning 17 million turns and more than 145 million words.

---

*Nomic Atlas map of the dataset.*
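Samples are ChatML-style exchanges between two human authors, with one author mapped to the assistant role. A schematic sketch of the layout (the message text here is invented for illustration; check the dataset viewer for the exact schema):

```text
<|im_start|>user
yo is the event still going<|im_end|>
<|im_start|>assistant
yeah it ends tonight, hop in before reset<|im_end|>
```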

---

## Features

- Mixed single- and multi-turn exchanges
- Human-only dialogues (no bots)
- Filtered for ToS and harmful content
- Links, embeds, and commands removed
- Trading posts, code blocks, and LFG removed
- Two-author chains only
- Consecutive self-replies from the same author merged into a single message
- Cleaned and deduplicated for relevance
- Primarily English, with some other languages present

---

## Use

- Fine-tuning conversational models
- Training relevance/reward models
- Dialogue generation research

Use case examples:

- [mookiezi/Discord-Micae-8B-Preview](https://huggingface.co/mookiezi/Discord-Micae-8B-Preview) — experimental larger model
- [mookiezi/Discord-Micae-Hermes-3-3B](https://huggingface.co/mookiezi/Discord-Micae-Hermes-3-3B) — stable smaller model

---

## Filtering Pipeline

This dataset was constructed with a custom multi-stage filtering toolkit:

1. **SQL filters** (`filter.sql`)
   Postgres regex/text filters for PII, bot/command patterns, links, embeds, and automation noise.

2. **Smart cleaner** (`smartclean.py`)
   Multi-stage cleanup: normalizes text, replaces slang, resamples by length, and enforces structural validation. Filters out structural noise such as code blocks, trading posts, and LFG.

3. **Dedupe** (`dedupe.py`)
   Deduplicates conversations by hashing message chains, keeping only unique rows and preferring the longest final assistant message when duplicates occur (see the sketch after the Collection Policy section).

4. **Fix End** (`fixend.py`)
   Strips any run of spaces, commas, or non-emoticon colons immediately before `<|im_end|>`, leaving the plain token.

5. **ToS risk filter** (`tos.py`)
   Drops or redacts unsafe categories (sexual violence, CSA, slurs, harassment, doxxing, self-harm, extremism) and PII, using fuzzy/leet/diacritic-aware regex.

The full filtering scripts are open source in the [filters GitHub repository](https://github.com/mookiezi/filters).

---

## Dataset Pipeline

The full end-to-end pipeline is documented in the [dataset-pipeline GitHub repository](https://github.com/mookiezi/dataset-pipeline).

---

## Collection Policy

- All data was collected in accordance with Discord's [Terms of Service](https://discord.com/terms) and [Community Guidelines](https://discord.com/guidelines).
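---

To make the dedupe stage (step 3 of the filtering pipeline) concrete, here is a minimal sketch of the chain-hashing idea. It assumes each sample is a single ChatML string; the function names and structure are illustrative, not the actual `dedupe.py` implementation.

```python
import hashlib

IM_START_ASSISTANT = "<|im_start|>assistant"

def chain_key(sample: str) -> str:
    """Hash everything before the final assistant block of a ChatML string."""
    cut = sample.rfind(IM_START_ASSISTANT)
    prefix = sample[:cut] if cut != -1 else sample
    return hashlib.sha256(prefix.encode("utf-8")).hexdigest()

def dedupe(samples: list[str]) -> list[str]:
    """Keep one row per unique chain, preferring the longest final assistant message."""
    best: dict[str, str] = {}
    for s in samples:
        k = chain_key(s)
        # Rows with the same key share an identical prefix, so the longer
        # full string also has the longer final assistant message.
        if k not in best or len(s) > len(best[k]):
            best[k] = s
    return list(best.values())
```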
---

## Dataset Statistics

(using the [NousResearch/Hermes-3-Llama-3.1-8B tokenizer](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B))

| Metric                 |         Value |
| ---------------------- | ------------: |
| Samples (count)        |     7,546,294 |
| Min length (tokens)    |             7 |
| Max length (tokens)    |         5,979 |
| Mean length (tokens)   |         33.02 |
| Median length (tokens) |            29 |
| Std dev (tokens)       |         17.39 |
| Skew                   |         26.46 |
| Kurtosis               |      7,487.55 |
| Total tokens           |   249,193,745 |
| Total characters       | 1,291,480,299 |
| Total words            |   145,887,976 |
| Avg chars per sample   |        171.14 |
| Avg words per sample   |         19.33 |
| Avg chars per word     |          8.85 |
| Tokens per char        |          0.19 |
| Total assistant blocks |     9,341,891 |
**Token length distribution**

| Tokens    |     Count |
| --------- | --------: |
| 0–8       |         1 |
| 8–16      |   110,310 |
| 16–32     | 4,382,094 |
| 32–64     | 2,674,780 |
| 64–128    |   360,401 |
| 128–256   |    18,083 |
| 256–384   |       417 |
| 384–512   |        75 |
| 512–768   |        78 |
| 768–1024  |        30 |
| 1024–2048 |        18 |
| 2048–4096 |         3 |
**Turns per sample**

| Turns |     Count |
| ----- | --------: |
| 2     | 5,969,540 |
| 3     | 1,080,526 |
| 4     |   319,794 |
| 5     |   102,553 |
| 6     |    41,246 |
| 7     |    16,904 |
| 8     |     7,715 |
| 9     |     3,691 |
| 10    |     1,867 |
| 11    |     1,007 |
| 12    |       575 |
| 13    |       334 |
| 14    |       189 |
| 15    |       129 |
| 16    |        67 |
| 17    |        62 |
| 18    |        32 |
| 19    |        21 |
| 20    |         8 |
| 21    |        11 |
| 22    |        11 |
| 23    |         2 |
| 24    |         1 |
| 25    |         3 |
| 27    |         2 |
| 29    |         1 |
| 32    |         1 |
| 33    |         2 |
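The length statistics above can be recomputed with the same tokenizer. A minimal sketch using the `datasets` and `transformers` libraries; the `train` split and `text` column are assumptions about the schema, so verify them against the dataset viewer first:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Tokenizer used for the statistics in this card.
tok = AutoTokenizer.from_pretrained("NousResearch/Hermes-3-Llama-3.1-8B")

# Split and column name are assumptions; check the dataset viewer.
ds = load_dataset("mookiezi/Discord-Dialogues", split="train")

sample = ds.select(range(1_000))  # small slice for a quick estimate
lengths = [len(tok.encode(row["text"])) for row in sample]
print(min(lengths), max(lengths), sum(lengths) / len(lengths))
```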
---

## Disclaimer

Although filtering cut the raw data dump down to roughly 7.5% of its original size, this dataset is still intended as a large-scale dump. For best training results, further curation to target high-signal data relevant to your goals is recommended.

---

## License

This project is licensed under the Apache License 2.0.

---

## How to cite

```bibtex
@misc{discord-dialogues-2025,
  title  = {Discord-Dialogues},
  author = {mookiezi},
  year   = {2025},
  url    = {https://huggingface.co/datasets/mookiezi/Discord-Dialogues}
}
```

---

## Related

- [mookiezi/Discord-Micae-Hermes-3-3B](https://huggingface.co/mookiezi/Discord-Micae-Hermes-3-3B)
- [mookiezi/Discord-OpenMicae](https://huggingface.co/datasets/mookiezi/Discord-OpenMicae)
- [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B)