
Repo Structure

Each file contains 1M documents (except the last file, which contains the remaining documents). Each file is around 2GB in size (slight differences arise because some files contain longer or shorter documents than the average). Each document is assigned a unique id (a simple sequential integer).
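Since ids are sequential and each file holds 1M documents, a document's id directly determines which file it lives in. A minimal sketch, assuming ids are assigned in file order starting from 0 (the helper names are hypothetical, not part of the repo):

```python
# Sketch: map a sequential document id to its file and row offset,
# assuming each file holds exactly 1M documents in id order.
DOCS_PER_FILE = 1_000_000

def file_index_for(doc_id: int) -> int:
    """Return the 0-based index of the file that should contain `doc_id`."""
    return doc_id // DOCS_PER_FILE

def offset_within_file(doc_id: int) -> int:
    """Return the row offset of `doc_id` inside its file."""
    return doc_id % DOCS_PER_FILE

# e.g. document 2_500_000 would sit in the third file, at row 500_000
```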

  • /data: The raw documents. This config is the same as EleutherAI/the_pile_deduplicated. One minor point: instead of copying that data, I detokenised the data in /tokenized (see below). Defining the original data as the detokenised data prevents any inconsistency in future analyses. Apart from minor differences (affecting e.g. 0.1% of the data in my tests), such as angle brackets being detokenised to a different Unicode character, the original data and the detokenised data are identical, and both tokenise in exactly the same way. I added a column called num_chars which reports the number of characters per document.

  • /tokenized: Includes the data available in EleutherAI/pythia_deduped_pile_idxmaps. I added a column called num_tokens which reports the number of tokens in each tokenised document.
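The two length columns described above could be (re)computed with a simple per-example map. A minimal sketch, assuming the raw-text column is called "text" and the tokenised column is called "input_ids" (both names are assumptions, not confirmed by the card):

```python
# Sketch: per-example functions in the style of datasets.Dataset.map,
# reproducing the num_chars / num_tokens columns described above.
# Column names "text" and "input_ids" are assumed, not taken from the repo.

def add_num_chars(example: dict) -> dict:
    # num_chars: number of characters in the raw document
    example["num_chars"] = len(example["text"])
    return example

def add_num_tokens(example: dict) -> dict:
    # num_tokens: length of the pre-tokenised id sequence
    example["num_tokens"] = len(example["input_ids"])
    return example

# usage (hypothetical): dataset = dataset.map(add_num_chars)
```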

