License: Apache-2.0

------------------------------------------------------ DISCLAIMER ------------------------------------------------------

The Don’t Patronize Me! dataset has been created for research purposes. Patronizing and Condescending Language (PCL) towards vulnerable communities is understood in this dataset as a commonly used, generally unconscious and well-intended writing style. We consider that the authors of the paragraphs included in this dataset do not intend any harm towards the vulnerable communities they write about; rather, their objective is to support these communities and/or raise awareness of difficult situations. The Don’t Patronize Me! dataset may only be used for research purposes.


This README file describes version 1.5 of the training set for the dataset "Don't Patronize Me! An Annotated Dataset with Patronizing and Condescending Language towards Vulnerable Communities", an annotated corpus of PCL (Patronizing and Condescending Language) in the newswire domain. The training set consists of the following files:

-- dontpatronizeme_pcl.tsv contains paragraphs annotated with a label from 0 (not containing PCL) to 4 (being highly patronizing or condescending) towards vulnerable communities.

It contains one instance per line with the following format:

- <par_id> <tab> <art_id> <tab> <keyword> <tab> <country_code> <tab> <text> <tab> <label>

where
- <par_id> is a unique id for each one of the paragraphs in the corpus.
- <art_id> is the document id in the original NOW corpus (News on Web: https://www.english-corpora.org/now/).
- <keyword> is the search term used to retrieve texts about a target community.
- <country_code> is a two-letter ISO Alpha-2 country code for the source media outlet.
- <text> is the paragraph containing the keyword.
- <label> is an integer between 0 and 4. Each paragraph was annotated by two annotators as 0 (no PCL), 1 (borderline PCL) or 2 (contains PCL). The two annotations were combined into the following graded scale:

0 -> Annotator 1 = 0 AND Annotator 2 = 0
1 -> (Annotator 1 = 0 AND Annotator 2 = 1) OR (Annotator 1 = 1 AND Annotator 2 = 0)
2 -> Annotator 1 = 1 AND Annotator 2 = 1
3 -> (Annotator 1 = 1 AND Annotator 2 = 2) OR (Annotator 1 = 2 AND Annotator 2 = 1)
4 -> Annotator 1 = 2 AND Annotator 2 = 2
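One way to reproduce this mapping in code: since the scale treats the two annotators symmetrically, the combined label is simply the sum of the two per-annotator annotations. A minimal sketch (the function name is ours, not part of the dataset's tooling):

```python
def combine_annotations(a1: int, a2: int) -> int:
    """Map two per-annotator labels (each 0, 1 or 2) to the combined
    0-4 graded label described above. The table is equivalent to
    summing the two annotations."""
    if not (0 <= a1 <= 2 and 0 <= a2 <= 2):
        raise ValueError("each annotation must be 0, 1 or 2")
    return a1 + a2

# combine_annotations(0, 0) -> 0   (both annotators: no PCL)
# combine_annotations(1, 2) -> 3
# combine_annotations(2, 2) -> 4   (both annotators: contains PCL)
```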

The experiments reported in the paper consider the following tag grouping: 
- {0,1}   = No PCL
- {2,3,4} = PCL
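The file format and label grouping above can be sketched as a small loader. This is an illustrative snippet, not official dataset code; it assumes one tab-separated instance per line in the documented field order, so adjust it if your copy of the file carries preamble or header lines:

```python
import csv

FIELDS = ["par_id", "art_id", "keyword", "country_code", "text", "label"]

def load_dpm_pcl(path: str):
    """Yield one dict per paragraph of dontpatronizeme_pcl.tsv.

    Adds a 'binary_label' field using the paper's grouping:
    labels {0,1} -> 0 (no PCL), labels {2,3,4} -> 1 (PCL).
    """
    with open(path, encoding="utf-8", newline="") as f:
        # QUOTE_NONE: paragraph text may contain quote characters
        # that must not be interpreted as CSV quoting.
        for row in csv.reader(f, delimiter="\t", quoting=csv.QUOTE_NONE):
            if len(row) != len(FIELDS):
                continue  # skip blank or malformed lines
            rec = dict(zip(FIELDS, row))
            rec["label"] = int(rec["label"])
            rec["binary_label"] = int(rec["label"] >= 2)
            yield rec
```

With this grouping, training a binary PCL detector only requires reading the `text` and `binary_label` fields.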

###################################################################################################

For more information about the categories or the dataset, please see our papers:

--- Pérez-Almendros, Carla, Luis Espinosa Anke, and Steven Schockaert. "Don’t Patronize Me! An Annotated Dataset with Patronizing and Condescending Language towards Vulnerable Communities." Proceedings of the 28th International Conference on Computational Linguistics. 2020. ---

--- Pérez-Almendros, Carla, Luis Espinosa Anke, and Steven Schockaert. "SemEval-2022 Task 4: Patronizing and Condescending Language Detection." Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022). 2022. ---

###################################################################################################

###################################################################################################

For more information and code related to the DPM! dataset, please see 
https://github.com/Perez-AlmendrosC/dontpatronizeme 

###################################################################################################