---
dataset_info:
- config_name: continuation
  features:
  - name: input
    dtype: string
  - name: output
    sequence: string
  - name: stripped_input
    dtype: string
  splits:
  - name: train
    num_bytes: 163781224
    num_examples: 138384
  - name: test
    num_bytes: 21361028
    num_examples: 17944
  download_size: 52682360
  dataset_size: 185142252
- config_name: empirical_baselines
  features:
  - name: input
    dtype: string
  - name: output
    sequence: string
  - name: stripped_input
    dtype: string
  splits:
  - name: train
    num_bytes: 207372184
    num_examples: 138384
  - name: test
    num_bytes: 27013388
    num_examples: 17944
  download_size: 56425268
  dataset_size: 234385572
- config_name: ling_1s
  features:
  - name: input
    dtype: string
  - name: output
    sequence: string
  - name: stripped_input
    dtype: string
  splits:
  - name: train
    num_bytes: 309222808
    num_examples: 138384
  - name: test
    num_bytes: 40220172
    num_examples: 17944
  download_size: 65291826
  dataset_size: 349442980
- config_name: simple_instruct
  features:
  - name: input
    dtype: string
  - name: output
    dtype: string
  - name: stripped_input
    dtype: string
  splits:
  - name: train
    num_bytes: 184678014
    num_examples: 138384
  - name: test
    num_bytes: 24070651
    num_examples: 17944
  download_size: 49126627
  dataset_size: 208748665
- config_name: verb_1s_top1
  features:
  - name: input
    dtype: string
  - name: output
    sequence: string
  - name: stripped_input
    dtype: string
  splits:
  - name: train
    num_bytes: 289572280
    num_examples: 138384
  - name: test
    num_bytes: 37672124
    num_examples: 17944
  download_size: 63239849
  dataset_size: 327244404
- config_name: verb_1s_topk
  features:
  - name: input
    dtype: string
  - name: output
    sequence: string
  - name: stripped_input
    dtype: string
  splits:
  - name: train
    num_bytes: 349492552
    num_examples: 138384
  - name: test
    num_bytes: 45441876
    num_examples: 17944
  download_size: 68254894
  dataset_size: 394934428
- config_name: verb_2s_cot
  features:
  - name: input
    dtype: string
  - name: output
    dtype: string
  - name: stripped_input
    dtype: string
  splits:
  - name: train
    num_bytes: 276841758
    num_examples: 138384
  - name: test
    num_bytes: 36021355
    num_examples: 17944
  download_size: 56288479
  dataset_size: 312863113
- config_name: verb_2s_top1
  features:
  - name: input
    dtype: string
  - name: output
    dtype: string
  - name: stripped_input
    dtype: string
  splits:
  - name: train
    num_bytes: 207372990
    num_examples: 138384
  - name: test
    num_bytes: 27013467
    num_examples: 17944
  download_size: 50921767
  dataset_size: 234386457
- config_name: verb_2s_topk
  features:
  - name: input
    dtype: string
  - name: output
    dtype: string
  - name: stripped_input
    dtype: string
  splits:
  - name: train
    num_bytes: 235049790
    num_examples: 138384
  - name: test
    num_bytes: 30602267
    num_examples: 17944
  download_size: 53253834
  dataset_size: 265652057
configs:
- config_name: continuation
  data_files:
  - split: train
    path: continuation/train-*
  - split: test
    path: continuation/test-*
- config_name: empirical_baselines
  data_files:
  - split: train
    path: empirical_baselines/train-*
  - split: test
    path: empirical_baselines/test-*
- config_name: ling_1s
  data_files:
  - split: train
    path: ling_1s/train-*
  - split: test
    path: ling_1s/test-*
- config_name: simple_instruct
  data_files:
  - split: train
    path: simple_instruct/train-*
  - split: test
    path: simple_instruct/test-*
- config_name: verb_1s_top1
  data_files:
  - split: train
    path: verb_1s_top1/train-*
  - split: test
    path: verb_1s_top1/test-*
- config_name: verb_1s_topk
  data_files:
  - split: train
    path: verb_1s_topk/train-*
  - split: test
    path: verb_1s_topk/test-*
- config_name: verb_2s_cot
  data_files:
  - split: train
    path: verb_2s_cot/train-*
  - split: test
    path: verb_2s_cot/test-*
- config_name: verb_2s_top1
  data_files:
  - split: train
    path: verb_2s_top1/train-*
  - split: test
    path: verb_2s_top1/test-*
- config_name: verb_2s_topk
  data_files:
  - split: train
    path: verb_2s_topk/train-*
  - split: test
    path: verb_2s_topk/test-*
---

# Dataset Card for triviaqa

This is a preprocessed version of the triviaqa dataset for running benchmarks in LM-Polygraph.

## Dataset Details

### Dataset Description

- **Curated by:** https://huggingface.co/LM-Polygraph
- **License:** https://github.com/IINemo/lm-polygraph/blob/main/LICENSE.md

### Dataset Sources

- **Repository:** https://github.com/IINemo/lm-polygraph

## Uses

### Direct Use

This dataset is intended for running benchmarks with LM-Polygraph; a minimal loading sketch is provided at the end of this card.

### Out-of-Scope Use

This dataset should not be used for further dataset preprocessing.

## Dataset Structure

This dataset contains the "continuation" subset, which corresponds to the main dataset used in LM-Polygraph, as well as additional subsets that correspond to the instruct methods used in LM-Polygraph (see the config list above).

Each subset contains two splits: train and test. Each split contains an "input" column with the processed input for LM-Polygraph, an "output" column with the processed output for LM-Polygraph (a single string or a sequence of strings, depending on the subset), and a "stripped_input" string column.

## Dataset Creation

### Curation Rationale

This dataset was created in order to separate dataset creation code from benchmarking code.

### Source Data

#### Data Collection and Processing

Data is collected from https://huggingface.co/datasets/triviaqa and processed with the https://github.com/IINemo/lm-polygraph/blob/main/dataset_builders/build_dataset.py script from the repository.

#### Who are the source data producers?

The creators of https://huggingface.co/datasets/triviaqa.

## Bias, Risks, and Limitations

This dataset carries the same biases, risks, and limitations as its source dataset, https://huggingface.co/datasets/triviaqa.

### Recommendations

Users should be made aware of the risks, biases, and limitations of the dataset.
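As a usage sketch for the Direct Use section above: assuming the `datasets` library is installed and that this card is published under the Hub id `LM-Polygraph/triviaqa` (an assumption based on the curator link; adjust the id if it differs), a single subset can be loaded and inspected as follows:

```python
from datasets import load_dataset

# NOTE: the repository id below is an assumption based on the curator link
# in this card; replace it with the actual Hub path if it differs.
REPO_ID = "LM-Polygraph/triviaqa"

# Load the "continuation" subset, which corresponds to the main dataset used
# in LM-Polygraph. Any other config name from the list above works as well,
# e.g. "verb_2s_cot" for one of the instruct-method subsets.
dataset = load_dataset(REPO_ID, "continuation")

# Each subset ships with "train" and "test" splits.
print(dataset)

# Inspect the processed columns of a single test example.
example = dataset["test"][0]
print(example["input"])
print(example["output"])
print(example["stripped_input"])
```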