Tasks: Text Classification
Sub-tasks: natural-language-inference
Languages: English
Size: 10K<n<100K
ArXiv: 1902.01007
License:
Dataset Card for "hans"
Dataset Summary
The HANS dataset is an NLI evaluation set that tests specific hypotheses about invalid heuristics that NLI models are likely to learn.
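The dataset can be loaded with the Hugging Face `datasets` library. A minimal sketch (the repo ships a loading script, so depending on your `datasets` version you may need to pass `trust_remote_code=True`):

```python
from datasets import load_dataset

# Load both splits of HANS. The repo uses a Python loading script,
# so recent versions of `datasets` require trust_remote_code=True.
hans = load_dataset("jhu-cogsci/hans", trust_remote_code=True)

print(hans)                   # DatasetDict with 'train' and 'validation'
print(hans["validation"][0])  # a single example
```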
Supported Tasks and Leaderboards
Languages
Dataset Structure
Data Instances
plain_text
- Size of downloaded dataset files: 30.94 MB
- Size of the generated dataset: 31.81 MB
- Total amount of disk used: 62.76 MB
An example of 'train' looks as follows.
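The instance below is an illustrative sketch only; the field values are hypothetical and follow the schema documented under Data Fields:

```python
{
    "premise": "The doctors visited the lawyer.",       # hypothetical value
    "hypothesis": "The lawyer visited the doctors.",    # hypothetical value
    "label": 1,  # non-entailment
    "parse_premise": "(ROOT (S (NP (DT The) (NNS doctors)) (VP (VBD visited) (NP (DT the) (NN lawyer))) (. .)))",
    "parse_hypothesis": "(ROOT (S (NP (DT The) (NN lawyer)) (VP (VBD visited) (NP (DT the) (NNS doctors))) (. .)))",
    "binary_parse_premise": "( ( The doctors ) ( ( visited ( the lawyer ) ) . ) )",
    "binary_parse_hypothesis": "( ( The lawyer ) ( ( visited ( the doctors ) ) . ) )",
    "heuristic": "lexical_overlap",
    "subcase": "ln_subject/object_swap",   # hypothetical value
    "template": "temp1",                   # hypothetical value
}
```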
Data Fields
The data fields are the same among all splits.
plain_text
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0) and `non-entailment` (1).
- `parse_premise`: a `string` feature.
- `parse_hypothesis`: a `string` feature.
- `binary_parse_premise`: a `string` feature.
- `binary_parse_hypothesis`: a `string` feature.
- `heuristic`: a `string` feature.
- `subcase`: a `string` feature.
- `template`: a `string` feature.
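Since `label` is a `ClassLabel` feature, the integer values can be mapped to and from their string names. A short sketch:

```python
from datasets import load_dataset

ds = load_dataset("jhu-cogsci/hans", split="validation", trust_remote_code=True)

# `label` is a ClassLabel; convert between integers and names.
print(ds.features["label"].names)                       # ['entailment', 'non-entailment']
print(ds.features["label"].int2str(0))                  # 'entailment'
print(ds.features["label"].str2int("non-entailment"))   # 1
```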
Data Splits
| name | train | validation | 
|---|---|---|
| plain_text | 30000 | 30000 | 
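HANS is typically evaluated per heuristic; the paper defines three (lexical overlap, subsequence, and constituent), recorded in the `heuristic` field. A sketch of slicing the validation split accordingly:

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("jhu-cogsci/hans", split="validation", trust_remote_code=True)

# Count examples per heuristic (lexical_overlap, subsequence, constituent).
print(Counter(ds["heuristic"]))

# Keep only the lexical-overlap cases for a targeted evaluation.
lex = ds.filter(lambda ex: ex["heuristic"] == "lexical_overlap")
print(len(lex))
```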
Dataset Creation
Curation Rationale
Source Data
Initial Data Collection and Normalization
Who are the source language producers?
Annotations
Annotation process
Who are the annotators?
Personal and Sensitive Information
Considerations for Using the Data
Social Impact of Dataset
Discussion of Biases
Other Known Limitations
Additional Information
Dataset Curators
Licensing Information
Citation Information
```bibtex
@article{DBLP:journals/corr/abs-1902-01007,
  author    = {R. Thomas McCoy and
               Ellie Pavlick and
               Tal Linzen},
  title     = {Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural
               Language Inference},
  journal   = {CoRR},
  volume    = {abs/1902.01007},
  year      = {2019},
  url       = {http://arxiv.org/abs/1902.01007},
  archivePrefix = {arXiv},
  eprint    = {1902.01007},
  timestamp = {Tue, 21 May 2019 18:03:36 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1902-01007.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Contributions
Thanks to @TevenLeScao, @thomwolf for adding this dataset.