mookiezi committed
Commit d9be8f2 · 1 Parent(s): 5fe76de

Remove more blanks and ToS-breaking content
CHANGELOG → CHANGEGLOG RENAMED
@@ -2,4 +2,6 @@ v.01 - Initial upload
  v.02 - Further deduping
  v.03 - ToS filtered. Added filters script repo
  v.04 - Fixed end tags and emoticons having missing leading spaces
- v.05 - Added dataset pipeline
+ v.05 - Added dataset pipeline
+ v.06 - Removed entries with blank messages
+ v.07 - Removed additional blanks and filtered for more ToS
README.md CHANGED
@@ -29,7 +29,7 @@ size_categories:
 
 > **Discord-Dialogues** is a large-scale dataset of anonymized Discord conversations from late spring to early fall 2025 for training and evaluating realistic conversational AI models in a ChatML-friendly format.
 
- This dataset contains 7.5 million exchanges spread over 17 million turns, with more than 145 million words.
+ This dataset contains 7.3 million exchanges spread over 16 million turns, with more than 139 million words.
 
 ---
 
@@ -59,9 +59,10 @@ size_categories:
  - Training relevance/reward models
  - Dialogue generation research
 
- Use case examples:
- - [mookiezi/Discord-Micae-8B-Preview](https://huggingface.co/mookiezi/Discord-Micae-8B-Preview) — experimental larger model
- - [mookiezi/Discord-Micae-Hermes-3-3B](https://huggingface.co/mookiezi/Discord-Micae-Hermes-3-3B) — stable smaller model
+ Use case examples:
+
+ - [mookiezi/Discord-Micae-8B-Preview](https://huggingface.co/mookiezi/Discord-Micae-8B-Preview) — experimental larger model
+ - [mookiezi/Discord-Micae-Hermes-3-3B](https://huggingface.co/mookiezi/Discord-Micae-Hermes-3-3B) — stable smaller model
 
 ---
 
@@ -70,22 +71,22 @@ Use case examples:
 This dataset was constructed with a custom multi-stage filtering toolkit:
 
 1. **SQL filters** (`filter.sql`)
-    Postgres regex/text filters for PII, bot/command patterns, links, embeds, and automation noise.
+    Postgres regex/text filters for PII, bot/command patterns, links, embeds, and automation noise.
 
 2. **Smart cleaner** (`smartclean.py`)
-    Multi-stage process: text normalization, slang replacement, length-based resampling, and structural validation.
-    Filters out structural noise such as code blocks, trading posts, and LFG.
+    Multi-stage process: text normalization, slang replacement, length-based resampling, and structural validation.
+    Filters out structural noise such as code blocks, trading posts, and LFG.
 
 3. **Dedupe** (`dedupe.py`)
-    Deduplicates conversations by hashing message chains.
-    Keeps only unique rows, preferring the longest final assistant message when duplicates occur.
+    Deduplicates conversations by hashing message chains.
+    Keeps only unique rows, preferring the longest final assistant message when duplicates occur.
 
 4. **Fix End** (`fixend.py`)
-    Strips any prefix of spaces, commas, or non-emoticon colons before `<|im_end|>`, leaving the plain token.
+    Strips any prefix of spaces, commas, or non-emoticon colons before `<|im_end|>`, leaving the plain token.
 
 5. **ToS risk filter** (`tos.py`)
-    Drops or redacts unsafe categories (sexual violence, CSA, slurs, harassment, doxxing, self-harm, extremism) and PII.
-    Uses fuzzy/leet/diacritic-aware regex.
+    Drops or redacts unsafe categories (sexual violence, CSA, slurs, harassment, doxxing, self-harm, extremism) and PII.
+    Uses fuzzy/leet/diacritic-aware regex.
 
 The full filtering scripts are open source at the [filters GitHub repository](https://github.com/mookiezi/filters).
 
@@ -109,80 +110,78 @@ The full end-to-end pipeline is documented in the [dataset-pipeline GitHub repos
 
 <div>
 
-| Metric                 | Value          |
-| ---------------------- | -------------: |
-| Samples (count)        | 7,546,294      |
-| Min length (tokens)    | 7              |
-| Max length (tokens)    | 5,979          |
-| Mean length (tokens)   | 33.02          |
-| Median length (tokens) | 29             |
-| Std dev (tokens)       | 17.39          |
-| Skew                   | 26.46          |
-| Kurtosis               | 7,487.55       |
-| Total tokens           | 249,193,745    |
-| Total characters       | 1,291,480,299  |
-| Total words            | 145,887,976    |
-| Avg chars per sample   | 171.14         |
-| Avg words per sample   | 19.33          |
-| Avg chars per word     | 8.85           |
-| Tokens per char        | 0.19           |
-| Total assistant blocks | 9,341,891      |
+| Metric                 | Value         |
+| ---------------------- | ------------: |
+| Samples (count)        | 7,303,464     |
+| Total turns            | 16,881,010    |
+| Total assistant turns  | 9,016,287     |
+| Min length (tokens)    | 10            |
+| Max length (tokens)    | 2,542         |
+| Mean length (tokens)   | 32.79         |
+| Median length (tokens) | 28            |
+| Std dev (tokens)       | 16.56         |
+| Skew                   | 6.04          |
+| Kurtosis               | 326.54        |
+| Total tokens           | 239,458,213   |
+| Total characters       | 1,242,238,794 |
+| Total words            | 139,922,950   |
+| Avg chars per sample   | 170.09        |
+| Avg words per sample   | 19.16         |
+| Avg chars per word     | 8.88          |
+| Tokens per char        | 0.19          |
 
 </div>
 
 <div>
 
-
-| Tokens    | Count     |
-| --------- | --------: |
-| 0–8       | 1         |
-| 8–16      | 110,310   |
-| 16–32     | 4,382,094 |
-| 32–64     | 2,674,780 |
-| 64–128    | 360,401   |
-| 128–256   | 18,083    |
-| 256–384   | 417       |
-| 384–512   | 75        |
-| 512–768   | 78        |
-| 768–1024  | 30        |
-| 1024–2048 | 18        |
-| 2048–4096 | 3         |
+| Tokens    | Count     |
+| --------- | --------: |
+| 8–16      | 107,264   |
+| 16–32     | 4,278,713 |
+| 32–64     | 2,566,176 |
+| 64–128    | 334,829   |
+| 128–256   | 15,920    |
+| 256–384   | 363       |
+| 384–512   | 71        |
+| 512–768   | 78        |
+| 768–1024  | 30        |
+| 1024–2048 | 17        |
+| 2048–4096 | 3         |
 
 </div>
 
 <div>
 
-
-| Turns | Count     |
-| ----- | --------: |
-| 2     | 5,969,540 |
-| 3     | 1,080,526 |
-| 4     | 319,794   |
-| 5     | 102,553   |
-| 6     | 41,246    |
-| 7     | 16,904    |
-| 8     | 7,715     |
-| 9     | 3,691     |
-| 10    | 1,867     |
-| 11    | 1,007     |
-| 12    | 575       |
-| 13    | 334       |
-| 14    | 189       |
-| 15    | 129       |
-| 16    | 67        |
-| 17    | 62        |
-| 18    | 32        |
-| 19    | 21        |
-| 20    | 8         |
-| 21    | 11        |
-| 22    | 11        |
-| 23    | 2         |
-| 24    | 1         |
-| 25    | 3         |
-| 27    | 2         |
-| 29    | 1         |
-| 32    | 1         |
-| 33    | 2         |
+| Turns | Count     |
+| ----- | --------: |
+| 2     | 5,795,019 |
+| 3     | 1,038,500 |
+| 4     | 304,442   |
+| 5     | 96,758    |
+| 6     | 38,620    |
+| 7     | 15,714    |
+| 8     | 7,108     |
+| 9     | 3,391     |
+| 10    | 1,709     |
+| 11    | 909       |
+| 12    | 526       |
+| 13    | 291       |
+| 14    | 163       |
+| 15    | 113       |
+| 16    | 58        |
+| 17    | 57        |
+| 18    | 28        |
+| 19    | 20        |
+| 20    | 7         |
+| 21    | 10        |
+| 22    | 10        |
+| 23    | 2         |
+| 24    | 1         |
+| 25    | 2         |
+| 27    | 2         |
+| 29    | 1         |
+| 32    | 1         |
+| 33    | 2         |
 
 </div>
 
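The actual implementations of these pipeline steps live in the filters repository linked in the README and are not part of this commit. As a rough, hypothetical sketch of two of the steps described in the diff above (step 3's chain-hash dedupe that keeps the row with the longest final assistant message, and step 4's cleanup of stray characters before `<|im_end|>`), something along these lines would work; the function names, the `messages` field, and the simplified colon handling are assumptions, not the actual `dedupe.py`/`fixend.py` code:

```python
import hashlib
import re

def chain_key(messages: list[str]) -> str:
    """Hash the normalized message chain so duplicate conversations collide."""
    joined = "\x1e".join(m.strip().lower() for m in messages)
    return hashlib.sha256(joined.encode("utf-8")).hexdigest()

def dedupe(rows: list[dict]) -> list[dict]:
    """Keep one row per chain hash, preferring the longest final assistant message."""
    best: dict[str, dict] = {}
    for row in rows:
        # One plausible reading: hash the chain leading up to the final reply,
        # then break ties by the length of that final (assistant) message.
        key = chain_key(row["messages"][:-1])
        kept = best.get(key)
        if kept is None or len(row["messages"][-1]) > len(kept["messages"][-1]):
            best[key] = row
    return list(best.values())

# Strip runs of spaces, commas, and colons sitting directly before <|im_end|>.
# Emoticons such as ":)" keep their colon because it is not adjacent to the tag;
# the real fixend.py emoticon test is presumably more careful than this.
END_PREFIX = re.compile(r"[ ,:]+(?=<\|im_end\|>)")

def fix_end(text: str) -> str:
    return END_PREFIX.sub("", text)
```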
train.parquet → data/train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:fe52210778f5d71724664d9c365a71599913f93c30cc60df8e674dd3c45c08ca
- size 362018517
+ oid sha256:241e350e7f651085c5c2cb4d5274f7cb671b84b3d5fba091101823678da454ec
+ size 346784147
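Because `train.parquet` is tracked with Git LFS, the diff above only touches the pointer file: the `oid sha256:` and `size` lines identify the new payload. If you fetch the parquet directly rather than through `git lfs pull`, a download can be checked against the pointer with a short sketch like this (the local path is a placeholder):

```python
import hashlib

def matches_lfs_pointer(path: str, expected_sha256: str, expected_size: int) -> bool:
    """Compare a local file against the oid/size recorded in a Git LFS pointer."""
    digest = hashlib.sha256()
    size = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            digest.update(chunk)
            size += len(chunk)
    return size == expected_size and digest.hexdigest() == expected_sha256

# Values taken from the updated pointer in this commit:
print(matches_lfs_pointer(
    "data/train.parquet",
    "241e350e7f651085c5c2cb4d5274f7cb671b84b3d5fba091101823678da454ec",
    346784147,
))
```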
dataset_infos.json CHANGED
@@ -1,29 +1,29 @@
  {
- "default": {
- "description": "Discord-Dialogues is a large-scale dataset of anonymized Discord conversations formatted for ChatML. It includes mixed single- and multi-turn exchanges between two human participants, cleaned of bots, links, embeds, commands, ToS breaking content, and duplicate messages—primarily in English, suitable for fine-tuning conversational AI models.",
- "citation": "@misc{discord-dialogues-2025,\n title = {Discord-Dialogues},\n author = {mookiezi},\n year = {2025},\n url = {https://huggingface.co/datasets/mookiezi/Discord-Dialogues}\n}",
- "homepage": "https://huggingface.co/datasets/mookiezi/Discord-Dialogues",
- "license": "Apache License 2.0",
- "features": {
- "text": { "dtype": "string", "_type": "Value" },
- "tokens": { "dtype": "int64", "_type": "Value" },
- "turns": { "dtype": "int64", "_type": "Value" },
- "characters": { "dtype": "int64", "_type": "Value" },
- "words": { "dtype": "int64", "_type": "Value" }
- },
- "splits": {
- "train": {
- "name": "train",
- "num_bytes": 362022690,
- "num_examples": 7546294,
- "dataset_name": "default"
- }
- },
- "download_size": 362022690,
- "dataset_size": 362022690,
- "size_in_bytes": 362022690,
- "data_files": {
- "train": [{ "filename": "train.parquet" }]
- }
- }
- }
+ "default": {
+ "description": "Discord-Dialogues is a large-scale dataset of anonymized Discord conversations formatted for ChatML. It includes mixed single- and multi-turn exchanges between two human participants, cleaned of bots, links, embeds, commands, ToS breaking content, and duplicate messages—primarily in English, suitable for fine-tuning conversational AI models.",
+ "citation": "@misc{discord-dialogues-2025,\n title = {Discord-Dialogues},\n author = {mookiezi},\n year = {2025},\n url = {https://huggingface.co/datasets/mookiezi/Discord-Dialogues}\n}",
+ "homepage": "https://huggingface.co/datasets/mookiezi/Discord-Dialogues",
+ "license": "Apache License 2.0",
+ "features": {
+ "text": { "dtype": "string", "_type": "Value" },
+ "tokens": { "dtype": "int64", "_type": "Value" },
+ "turns": { "dtype": "int64", "_type": "Value" },
+ "characters": { "dtype": "int64", "_type": "Value" },
+ "words": { "dtype": "int64", "_type": "Value" }
+ },
+ "splits": {
+ "train": {
+ "name": "train",
+ "num_bytes": 346784147,
+ "num_examples": 7300966,
+ "dataset_name": "default"
+ }
+ },
+ "download_size": 346784147,
+ "dataset_size": 346784147,
+ "size_in_bytes": 346784147,
+ "data_files": {
+ "train": [{ "filename": "data/train.parquet" }]
+ }
+ }
+ }
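The updated `dataset_infos.json` declares a single `train` split at `data/train.parquet` with `text`, `tokens`, `turns`, `characters`, and `words` columns. A minimal loading sketch with the Hugging Face `datasets` library, assuming the hub id taken from the homepage field, would be:

```python
from datasets import load_dataset

# Single train split, as declared in dataset_infos.json.
ds = load_dataset("mookiezi/Discord-Dialogues", split="train")

sample = ds[0]
print(sample["text"])                                  # ChatML-formatted exchange
print(sample["tokens"], sample["turns"],
      sample["characters"], sample["words"])           # per-sample metadata columns
```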
tokens.log DELETED
@@ -1,135 +0,0 @@
- Stats for text:
- min: 7
- max: 5979
- mean: 33.02200325086725
- median: 29.0
- std: 17.390580671916503
- skew: 26.456841814125784
- kurt: 7487.549682758939
- count: 7546294
- sum: 249193745
- 99.9%: 152.0
- 1%: 15.0
- 2%: 16.0
- 3%: 16.0
- 4%: 17.0
- 5%: 17.0
- 6%: 18.0
- 7%: 18.0
- 8%: 18.0
- 9%: 19.0
- 10%: 19.0
- 11%: 19.0
- 12%: 19.0
- 13%: 20.0
- 14%: 20.0
- 15%: 20.0
- 16%: 20.0
- 17%: 21.0
- 18%: 21.0
- 19%: 21.0
- 20%: 21.0
- 21%: 22.0
- 22%: 22.0
- 23%: 22.0
- 24%: 22.0
- 25%: 22.0
- 26%: 23.0
- 27%: 23.0
- 28%: 23.0
- 29%: 23.0
- 30%: 24.0
- 31%: 24.0
- 32%: 24.0
- 33%: 24.0
- 34%: 24.0
- 35%: 25.0
- 36%: 25.0
- 37%: 25.0
- 38%: 25.0
- 39%: 26.0
- 40%: 26.0
- 41%: 26.0
- 42%: 26.0
- 43%: 27.0
- 44%: 27.0
- 45%: 27.0
- 46%: 27.0
- 47%: 28.0
- 48%: 28.0
- 49%: 28.0
- 50%: 29.0
- 51%: 29.0
- 52%: 29.0
- 53%: 29.0
- 54%: 30.0
- 55%: 30.0
- 56%: 30.0
- 57%: 31.0
- 58%: 31.0
- 59%: 31.0
- 60%: 32.0
- 61%: 32.0
- 62%: 32.0
- 63%: 33.0
- 64%: 33.0
- 65%: 34.0
- 66%: 34.0
- 67%: 34.0
- 68%: 35.0
- 69%: 35.0
- 70%: 36.0
- 71%: 36.0
- 72%: 37.0
- 73%: 37.0
- 74%: 38.0
- 75%: 38.0
- 76%: 39.0
- 77%: 39.0
- 78%: 40.0
- 79%: 41.0
- 80%: 42.0
- 81%: 42.0
- 82%: 43.0
- 83%: 44.0
- 84%: 45.0
- 85%: 46.0
- 86%: 47.0
- 87%: 48.0
- 88%: 49.0
- 89%: 51.0
- 90%: 52.0
- 91%: 54.0
- 92%: 56.0
- 93%: 58.0
- 94%: 60.0
- 95%: 64.0
- 96%: 68.0
- 97%: 73.0
- 98%: 80.0
- 99%: 95.0
- 100%: 5979.0
- total_chars: 1291480299
- total_words: 145887976
- avg_chars: 171.14099967480726
- avg_words: 19.332400248386826
- avg_chars_per_word: 8.852547923483426
- avg_chars_per_sample: 171.14099967480726
- avg_words_per_sample: 19.332400248386826
- tokens_per_char: 0.19295202969255673
- bin_0-8: 1
- bin_8-16: 110310
- bin_16-32: 4382094
- bin_32-64: 2674780
- bin_64-128: 360401
- bin_128-256: 18083
- bin_256-384: 417
- bin_384-512: 75
- bin_512-768: 78
- bin_768-1024: 30
- bin_1024-2048: 18
- bin_2048-4096: 3
- assistant_blocks: 9341891
-
- Total tokens across all columns: 249193745
- Total assistant blocks: 9341891
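`tokens.log` (deleted here) recorded per-sample token statistics, percentiles, and bin counts for the previous release. Figures in that shape can be regenerated from the dataset's `tokens` column; the sketch below assumes the per-sample counts already exist and does not re-tokenize anything:

```python
import numpy as np
import pandas as pd

def token_report(tokens: pd.Series) -> dict:
    """Summary stats and histogram bins in the style of the deleted tokens.log."""
    bins = [0, 8, 16, 32, 64, 128, 256, 384, 512, 768, 1024, 2048, 4096]
    counts, _ = np.histogram(tokens, bins=bins)
    report = {
        "min": int(tokens.min()),
        "max": int(tokens.max()),
        "mean": float(tokens.mean()),
        "median": float(tokens.median()),
        "std": float(tokens.std()),
        "skew": float(tokens.skew()),
        "kurt": float(tokens.kurt()),
        "count": int(tokens.count()),
        "sum": int(tokens.sum()),
        "99.9%": float(tokens.quantile(0.999)),
    }
    report.update({f"{p}%": float(tokens.quantile(p / 100)) for p in range(1, 101)})
    report.update({f"bin_{lo}-{hi}": int(c)
                   for lo, hi, c in zip(bins[:-1], bins[1:], counts)})
    return report
```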
tokenstats.txt DELETED
@@ -1,135 +0,0 @@
- Stats for text:
- min: 8
- max: 5979
- mean: 33.024094354044
- median: 29.0
- std: 17.402215788976577
- skew: 26.64223385581348
- kurt: 7506.831274998159
- count: 7543842
- sum: 249128550
- 99.9%: 153.0
- 1%: 15.0
- 2%: 16.0
- 3%: 16.0
- 4%: 17.0
- 5%: 17.0
- 6%: 18.0
- 7%: 18.0
- 8%: 18.0
- 9%: 19.0
- 10%: 19.0
- 11%: 19.0
- 12%: 19.0
- 13%: 20.0
- 14%: 20.0
- 15%: 20.0
- 16%: 20.0
- 17%: 21.0
- 18%: 21.0
- 19%: 21.0
- 20%: 21.0
- 21%: 22.0
- 22%: 22.0
- 23%: 22.0
- 24%: 22.0
- 25%: 22.0
- 26%: 23.0
- 27%: 23.0
- 28%: 23.0
- 29%: 23.0
- 30%: 24.0
- 31%: 24.0
- 32%: 24.0
- 33%: 24.0
- 34%: 24.0
- 35%: 25.0
- 36%: 25.0
- 37%: 25.0
- 38%: 25.0
- 39%: 26.0
- 40%: 26.0
- 41%: 26.0
- 42%: 26.0
- 43%: 27.0
- 44%: 27.0
- 45%: 27.0
- 46%: 27.0
- 47%: 28.0
- 48%: 28.0
- 49%: 28.0
- 50%: 29.0
- 51%: 29.0
- 52%: 29.0
- 53%: 29.0
- 54%: 30.0
- 55%: 30.0
- 56%: 30.0
- 57%: 31.0
- 58%: 31.0
- 59%: 31.0
- 60%: 32.0
- 61%: 32.0
- 62%: 32.0
- 63%: 33.0
- 64%: 33.0
- 65%: 34.0
- 66%: 34.0
- 67%: 34.0
- 68%: 35.0
- 69%: 35.0
- 70%: 36.0
- 71%: 36.0
- 72%: 37.0
- 73%: 37.0
- 74%: 38.0
- 75%: 38.0
- 76%: 39.0
- 77%: 39.0
- 78%: 40.0
- 79%: 41.0
- 80%: 42.0
- 81%: 42.0
- 82%: 43.0
- 83%: 44.0
- 84%: 45.0
- 85%: 46.0
- 86%: 47.0
- 87%: 48.0
- 88%: 49.0
- 89%: 51.0
- 90%: 52.0
- 91%: 54.0
- 92%: 56.0
- 93%: 58.0
- 94%: 60.0
- 95%: 64.0
- 96%: 68.0
- 97%: 73.0
- 98%: 80.0
- 99%: 95.0
- 100%: 5979.0
- total_chars: 1290998934
- total_words: 145717457
- avg_chars: 171.13281720375375
- avg_words: 19.316080188317837
- avg_chars_per_word: 8.85960378789756
- avg_chars_per_sample: 171.13281720375375
- avg_words_per_sample: 19.316080188317837
- tokens_per_char: 0.19297347460087058
- bin_0-8: 0
- bin_8-16: 109538
- bin_16-32: 4381031
- bin_32-64: 2674243
- bin_64-128: 360330
- bin_128-256: 18072
- bin_256-384: 418
- bin_384-512: 78
- bin_512-768: 77
- bin_768-1024: 30
- bin_1024-2048: 17
- bin_2048-4096: 4
- assistant_blocks: 9339690
-
- Total tokens across all columns: 249128550
- Total assistant blocks: 9339690
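`tokenstats.txt` mirrors `tokens.log` and also reported the assistant-block totals that feed the README tables. Assuming the standard ChatML layout (`<|im_start|>role` on one line, content, then `<|im_end|>`), which the dataset card names as its target format but whose exact field layout is not shown in this commit, counts of that kind could be recomputed with a sketch like:

```python
import re

# Hypothetical ChatML turn splitter; the dataset's turns/assistant-block
# columns were computed upstream, not by this snippet.
TURN = re.compile(r"<\|im_start\|>(\w+)\s*(.*?)<\|im_end\|>", re.DOTALL)

def split_turns(text: str) -> list[tuple[str, str]]:
    """Return (role, content) pairs for each ChatML block in a sample."""
    return [(m.group(1), m.group(2).strip()) for m in TURN.finditer(text)]

def assistant_blocks(text: str) -> int:
    """Count assistant turns, as tallied in the deleted stats logs."""
    return sum(1 for role, _ in split_turns(text) if role == "assistant")
```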