andyP committed d8db5ff (verified; parent: 166b00f)

init commit

Files changed (5): README.md (+235 −3), test.csv, test_ner.csv, train.csv, train_ner.csv
README.md CHANGED
---
license: apache-2.0
annotations_creators:
- expert-generated
language_creators:
- found
task_categories:
- text-classification
language:
- ro
multilinguality:
- monolingual
source_datasets:
- readerbench/ro-offense
tags:
- hate-speech-detection
- offensive speech
- romanian
- nlp
task_ids:
- hate-speech-detection
pretty_name: RO-Offense-Sequences
size_categories:
- 10K<n<100K
extra_gated_prompt: 'Warning: this repository contains harmful content (abusive language,
  hate speech).'
configs:
- config_name: default
  data_files:
  - split: train
    path: "train.csv"
  - split: test
    path: "test.csv"
- config_name: ner
  data_files:
  - split: train
    path: "train_ner.csv"
  - split: test
    path: "test_ner.csv"
---

# Dataset Card for "RO-Offense-Sequences"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description
<!--
- **Paper:** News-RO-Offense - A Romanian Offensive Language Dataset and Baseline Models Centered on News Article Comments
-->
- **Homepage:** [https://github.com/readerbench/ro-offense-sequences](https://github.com/readerbench/ro-offense-sequences)
- **Repository:** [https://github.com/readerbench/ro-offense-sequences](https://github.com/readerbench/ro-offense-sequences)
- **Point of Contact:** [Andrei Paraschiv](https://github.com/AndyTheFactory)

### Dataset Summary

A novel Romanian-language dataset for offensive language detection, with offensive labels manually annotated on comments from a local Romanian sports news website (gsp.ro), resulting in 12,445 annotated messages.

### Languages

Romanian

## Dataset Structure

### Data Instances

An example from the 'train' split looks as follows:

```
{
  'id': 5,
  'text': 'PLACEHOLDER TEXT',
  'label': 'OTHER'
}
```

### Data Fields

- `id`: the unique comment ID, corresponding to the ID in [RO Offense](https://huggingface.co/datasets/readerbench/ro-offense)
- `text`: the full comment text
- `label`: the type of offensive message (OTHER, PROFANITY, INSULT, ABUSE)

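For model training, the four string labels are typically mapped to integer ids. A minimal sketch; the particular ordering of the labels below is an assumption for illustration, not something the dataset defines:

```python
# Hypothetical integer encoding for the four labels listed above;
# the OTHER..ABUSE order is an assumption, not defined by the dataset.
LABELS = ["OTHER", "PROFANITY", "INSULT", "ABUSE"]
label2id = {name: i for i, name in enumerate(LABELS)}
id2label = {i: name for name, i in label2id.items()}

def encode(example: dict) -> dict:
    """Replace the string label of a record with its integer id."""
    return {**example, "label": label2id[example["label"]]}

row = {"id": 5, "text": "PLACEHOLDER TEXT", "label": "OTHER"}
print(encode(row))  # {'id': 5, 'text': 'PLACEHOLDER TEXT', 'label': 0}
```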
### Data Splits

| Split | Total | Other | Profanity | Insult | Abuse |
|:------|------:|------:|----------:|-------:|------:|
| Train | 9953  | 3656  | 1293      | 2236   | 2768  |
| Test  | 2492  | 916   | 324       | 559    | 693   |

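The split sizes above are internally consistent with the 12,445-message total; a quick sketch verifying the sums and the class balance of the train split:

```python
# Class counts taken from the Data Splits table above.
train = {"OTHER": 3656, "PROFANITY": 1293, "INSULT": 2236, "ABUSE": 2768}
test = {"OTHER": 916, "PROFANITY": 324, "INSULT": 559, "ABUSE": 693}

assert sum(train.values()) == 9953
assert sum(test.values()) == 2492
assert sum(train.values()) + sum(test.values()) == 12445  # total annotated messages

# Share of each class in the train split, as percentages.
shares = {k: round(100 * v / 9953, 1) for k, v in train.items()}
print(shares)  # {'OTHER': 36.7, 'PROFANITY': 13.0, 'INSULT': 22.5, 'ABUSE': 27.8}
```

The classes are imbalanced (OTHER is roughly as frequent as INSULT and ABUSE combined), which is worth accounting for when training classifiers on this data.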
## Dataset Creation

### Curation Rationale

Collecting data for abusive language classification for the Romanian language.

For the labeling of texts we loosely base our definitions on the GermEval 2019 shared task on detecting offensive language in German tweets (Struß et al., 2019).

Data source: comments on articles in Gazeta Sporturilor (gsp.ro) between 2011 and 2020.

Selection for annotation: we select comments from a pool of specific articles based on the number of comments in the article.
The number of comments per article has the following distribution:
```
mean      183.820923
std       334.707177
min         1.000000
25%        20.000000
50%        58.000000
75%       179.000000
max      2151.000000
```

Based on this, we select only comments from articles having between 20 and 50 comments. We also remove comments containing URLs or three consecutive `*` characters, since these were mostly censored by editors or automatic profanity detection algorithms.

Additionally, in order to have meaningful messages for annotation, we select only messages with a length between 50 and 500 characters.

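The selection rules above can be sketched as a simple comment filter. This is a minimal illustration, not the curators' actual implementation; in particular the URL pattern is an assumption:

```python
import re

# Rough URL pattern; an assumption for illustration, not the curators' exact rule.
URL_RE = re.compile(r"https?://\S+|www\.\S+")

def keep_comment(text: str) -> bool:
    """Apply the selection rules described above:
    length between 50 and 500 characters, no URLs,
    and no three consecutive '*' (mostly censored content)."""
    if not 50 <= len(text) <= 500:
        return False
    if URL_RE.search(text) or "***" in text:
        return False
    return True

print(keep_comment("x" * 100))   # True
print(keep_comment("too short")) # False
```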
### Source Data

Comments on sports news articles.

#### Initial Data Collection and Normalization

#### Who are the source language producers?

Readers of sports news articles.

### Annotations

- Andrei Paraschiv
- Irina Maria Sandu

#### Annotation process

##### OTHER

Label used for non-offensive texts.

##### PROFANITY

This is the "lighter" form of abusive language. We use this label when profane words are used without a direct intent to offend a target, or without ascribing negative qualities to a target. Some messages in this class may even have a positive sentiment and use swear words for emphasis. Messages containing profane words that are not directed towards a specific group or person are labeled **PROFANITY**.

Also, self-censored messages with swear words having some letters hidden, or deliberate misspellings of swear words that clearly intend to circumvent profanity detectors, are treated as **PROFANITY**.

##### INSULT

The message clearly intends to offend someone, ascribing negatively evaluated qualities or deficiencies, or labeling a person or a group of persons as unworthy or unvalued. Insults imply disrespect and contempt directed towards a target.

##### ABUSE

This label marks messages containing the stronger form of offensive and abusive language. This type of language ascribes to the target a social identity that is judged negatively by the majority of society, or at least is perceived as a mostly negatively judged identity. Shameful, unworthy or morally unacceptable identities fall into this category. In contrast to insults, instances of abusive language require that the target of judgment is seen as a representative of a group and is ascribed negative qualities that are taken to be universal, omnipresent and unchangeable characteristics of the group.

Additionally, dehumanizing language targeting a person or group is also classified as **ABUSE**.

#### Who are the annotators?

Native speakers of Romanian.

### Personal and Sensitive Information

The data was public at the time of collection. PII removal has been performed.

## Considerations for Using the Data

### Social Impact of Dataset

The data contains abusive language. It could be used to develop and propagate offensive language against any of the target groups involved, i.e. ableism, racism, sexism, ageism, and so on.

### Discussion of Biases

### Other Known Limitations

## Additional Information

### Dataset Curators

### Licensing Information

This dataset is made available and distributed under the Apache-2.0 license.

### Citation Information

```
tbd
```

### Contributions
test.csv, test_ner.csv, train.csv, train_ner.csv ADDED (diffs too large to render; see raw files)