DCLM Baseline 500B Tokens (Decontaminated)

Dataset Description

This dataset is a decontaminated subset of the DCLM-Baseline corpus, specifically prepared for the Hubble memorization research project. The dataset has been carefully processed to remove overlap with memorization evaluation data and subsampled to around 500 billion tokens of English text.

This corpus serves as the foundational training data for all Hubble models, providing a clean baseline for studying memorization phenomena in large language models while attempting to remove confounding effects from contamination.

Dataset Summary

  • Total Size: ~500 billion tokens
  • Language: English
  • Source: Decontaminated DCLM-Baseline corpus
  • Purpose: Training language models for memorization research
  • License: CC-BY-4.0 (inherited from DCLM Baseline)

Data Revisions

We provide multiple revisions of the training corpus corresponding to different Hubble models:

| Revision | Description | Effective Token Count | Models Trained |
|---|---|---|---|
| standard | Full 500B-token corpus | 500B | hubble-{1/8}b-{100/500}b_toks-*-standard-* |
| perturbed-500b | Same as standard, with perturbation data inserted across the 500B tokens used in training | 500B | hubble-{1/8}b-500b_toks-perturbed-* |
| perturbed-100b | Same as standard, with perturbation data inserted across the first 100B tokens used in training | 100B | hubble-{1/8}b-100b_toks-perturbed-* and hubble-1b-100b_toks-*_depth-perturbed-* |
| perturbed-100b-paraphrased | Same as perturbed-100b, but with the paraphrased variants of MMLU and YAGO biographies | 100B | hubble-{1/8}b-100b_toks-paraphrased-perturbed-* |

We do not release the corpora for the Timing and Interference experiments, but these can be reproduced from the provided standard revision and the tokenized perturbation data.
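
For illustration (the README linked under Access Methods below is the authoritative guide), a specific revision can be fetched with huggingface_hub; the repository id and revision names below are taken from this card, and allow_patterns is optional:

```python
# Sketch: fetch one revision of the corpus from the Hugging Face Hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="allegrolab/dclm-baseline-500b_toks",
    repo_type="dataset",
    revision="perturbed-100b",                 # or "standard", "perturbed-500b", ...
    allow_patterns=["*.md5sum.txt", "*.idx"],  # restrict the download; drop to fetch all files
)
print("Downloaded to:", local_dir)
```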

Dataset Structure

The dataset repository contains the following structure:

dclm-baseline-500b_toks/
├── tokenized/                                # (only in main) Tokenized perturbation data
├── tokenized_paraphrase/                     # (only in main) Tokenized perturbation data with paraphrased YAGO and MMLU
├── *-bin.md5sum.txt                          # MD5 checksums for tokenized corpus (bin file)
├── standard_text_document.bin.zstd.part_**   # Shards of the compressed tokenized corpus (~22 GB each)
├── standard_text_document.idx                # Index file for tokenized corpus (8.25 GB)
├── *_perturbation_info.json                  # (only in perturbed revisions) Perturbation metadata (260 MB)
├── *_perturbation_viz_docs.jsonl             # (only in perturbed revisions) Visualization documents (9.29 MB)
├── *_test_indexmap_*_doc_idx.npy             # Test index mapping - doc indices (1.65 MB)
├── *_test_indexmap_*_sample_idx.npy          # Test index mapping - sample indices (2.14 MB)
├── *_test_indexmap_*_shuffle_idx.npy         # Test index mapping - shuffle indices (1.07 MB)
├── *_train_indexmap_*_doc_idx.npy            # Train index mapping - doc indices (1.65 MB)
├── *_train_indexmap_*_sample_idx.npy         # Train index mapping - sample indices (2.14 MB)
├── *_train_indexmap_*_shuffle_idx.npy        # Train index mapping - shuffle indices (1.07 MB)
├── *_valid_indexmap_*_doc_idx.npy            # Validation index mapping - doc indices (1.65 MB)
├── *_valid_indexmap_*_sample_idx.npy         # Validation index mapping - sample indices (2.14 MB)
├── *_valid_indexmap_*_shuffle_idx.npy        # Validation index mapping - shuffle indices (1.07 MB)

File Types

  • .bin.zstd.part_*: Compressed data archives split into multiple parts. These need to be concatenated and decompressed to obtain the tokenized dataset (*.bin, ~1 TB uncompressed); see the sketch after this list
  • .idx: Index files recording the document boundaries in the tokenized corpus
  • perturbation_info.json: Metadata to identify the insertion position of the perturbation data
  • perturbation_viz_docs.jsonl: Sample of training sequences with inserted perturbation data
  • *_{train|valid|test}_indexmap_{num_samples}ns_{seq_length}sl_{seed}s_packedpi_ac_{doc|sample|shuffle}_idx.npy: NumPy arrays containing doc/sample/shuffle index mappings for a training run using num_samples training sequences, seq_length tokens per sequence, and seed as the random seed for shuffling. Useful for reproducing the exact training order of sequences; a minimal loading example follows this list.
  • .md5sum.txt: Checksum files for data integrity verification
  • tokenized/: Directory containing tokenized versions of the perturbation datasets
  • tokenized_paraphrase/: Directory containing tokenized paraphrase variations of perturbation datasets
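
As a minimal sketch (not the official tooling described in the README), the compressed shards can be reassembled and stream-decompressed in Python while hashing the output, so the result can be compared against the published MD5. It assumes the .part_* files are an in-order byte split of a single zstd stream and that the checksum in *-bin.md5sum.txt refers to the decompressed .bin file.

```python
# Sketch: reassemble the compressed shards, stream-decompress with zstandard,
# and compute the MD5 of the resulting .bin file on the fly.
import glob
import hashlib
import shutil

import zstandard  # pip install zstandard


class HashingWriter:
    """Write-through wrapper that updates an MD5 digest as bytes are written."""

    def __init__(self, fh, digest):
        self.fh = fh
        self.digest = digest

    def write(self, data):
        self.digest.update(data)
        return self.fh.write(data)

    def flush(self):
        self.fh.flush()

    def close(self):
        pass  # the outer `with open(...)` owns the file handle


parts = sorted(glob.glob("standard_text_document.bin.zstd.part_*"))
md5 = hashlib.md5()

with open("standard_text_document.bin", "wb") as out:
    sink = HashingWriter(out, md5)
    with zstandard.ZstdDecompressor().stream_writer(sink) as writer:
        for part in parts:  # feed every shard, in order, through one decompressor
            with open(part, "rb") as fh:
                shutil.copyfileobj(fh, writer)

print("MD5 of decompressed .bin:", md5.hexdigest())
# Compare against the value recorded in the corresponding *-bin.md5sum.txt file.
```

The index-mapping files are ordinary NumPy arrays and can be inspected directly; the placeholder values in the file name below must be replaced with the actual num_samples, seq_length, and seed encoded in the real file names.

```python
# Illustrative only: load the doc/sample/shuffle index mappings for one split.
import numpy as np

prefix = "standard_train_indexmap_<num_samples>ns_<seq_length>sl_<seed>s_packedpi_ac"

doc_idx = np.load(f"{prefix}_doc_idx.npy")          # document index mapping
sample_idx = np.load(f"{prefix}_sample_idx.npy")    # sample index mapping
shuffle_idx = np.load(f"{prefix}_shuffle_idx.npy")  # shuffle index mapping

print(doc_idx.shape, sample_idx.shape, shuffle_idx.shape)
```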

Access Methods

Refer to our README for instructions on downloading and preparing the corpus.

Dataset Creation

Source Data

The dataset is derived from DCLM-Baseline, which was built from CommonCrawl web scrapes through:

  • Language identification to retain English content
  • Refined filtering for quality and safety
  • Extensive deduplication to remove near-duplicates

Data Processing

  1. Subsampling: We use a subset of DCLM to retain around 500B tokens. The source files used are listed here. Note that we divided global-shard_01_of_10 into global-shard_01.0_of_10 and global-shard_01.1_of_10 for ease of processing.

  2. Decontamination: Systematic removal of text overlapping with Hubble evaluation benchmarks using infini-gram as described in this doc. Candidate documents for decontamination include:

    • Test sets (PopQA, MMLU, HellaSwag, PIQA, WinoGrande, Ellie, MUNCH)
    • Passages (Wikipedia, Gutenberg)
    • Paraphrases (MRPC, PAWS)
    • Biographies (Synthetic YAGO, ECtHR)
    • Chat logs (Personachat)
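
Purely as a toy illustration of the general idea (the actual pipeline uses infini-gram and the procedure in the linked doc), n-gram-overlap decontamination amounts to flagging training documents that share long token n-grams with evaluation text; the n-gram length and texts below are made up:

```python
# Toy n-gram-overlap check; NOT the project's infini-gram pipeline.
def ngrams(tokens, n):
    """Set of all length-n token n-grams in a token list."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}


def is_contaminated(doc_text, eval_ngram_set, n=8):
    """Flag a document that shares any length-n n-gram with evaluation data."""
    return bool(ngrams(doc_text.split(), n) & eval_ngram_set)


# Hypothetical evaluation data and training document:
eval_texts = ["the quick brown fox jumps over the lazy dog by the river"]
eval_ngram_set = set().union(*(ngrams(t.split(), 8) for t in eval_texts))

doc = "filler text then the quick brown fox jumps over the lazy dog by the river bank"
print(is_contaminated(doc, eval_ngram_set))  # True: the document repeats an eval 8-gram
```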

Uses

Direct Use

This dataset is intended for pretraining language models for memorization research. The clean training data provides the foundation for the Hubble model suite and is released to support further research on memorization, mechanistic interpretability, training dynamics, and reproducibility.

Out-of-Scope Use

This dataset should NOT be used for:

  • Production language models (research-focused, may contain biases)
  • Commercial applications without understanding license implications
  • Safety-critical systems (inherits web data biases and risks)

Bias, Risks, and Limitations

Known Biases

Inherited from Web Data:

  • Geographic bias: Overrepresentation of content from certain regions
  • Temporal bias: Reflects internet content from specific time periods
  • Platform bias: Overrepresentation of certain websites and platforms

Language and Cultural Bias:

  • English-centric: Only English content retained
  • Socioeconomic bias: Overrepresentation of content creators with internet access

Risks

Certain revisions of the dataset explicitly contain private information and copyrighted material. We therefore recommend against using this dataset for commercial purposes or for training general-purpose language models.

Additional Information

Dataset Curators

  • Hubble Research Team: Johnny Tian-Zheng Wei*, Ameya Godbole*, Mohammad Aflah Khan*, Ryan Wang, Xiaoyuan Zhu, James Flemings, Nitya Kashyap
  • Institutions: University of Southern California, Max Planck Institute for Software Systems
  • Based on: DCLM corpus by ML Foundations

Licensing Information

This dataset is distributed under the CC-BY-4.0 (Creative Commons Attribution 4.0 International) license, inherited from the original DCLM-Baseline corpus. See license details for full terms.

Citation Information

If you use this dataset in your research, please cite both the Hubble project and the original DCLM work:

@misc{wei2025hubblemodelsuiteadvance,
      title={Hubble: a Model Suite to Advance the Study of LLM Memorization}, 
      author={Johnny Tian-Zheng Wei and Ameya Godbole and Mohammad Aflah Khan and Ryan Wang and Xiaoyuan Zhu and James Flemings and Nitya Kashyap and Krishna P. Gummadi and Willie Neiswanger and Robin Jia},
      year={2025},
      eprint={2510.19811},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2510.19811}, 
}

@misc{li2025datacomplmsearchgenerationtraining,
      title={DataComp-LM: In search of the next generation of training sets for language models}, 
      author={Jeffrey Li and Alex Fang and Georgios Smyrnis and Maor Ivgi and Matt Jordan and Samir Gadre and Hritik Bansal and Etash Guha and Sedrick Keh and Kushal Arora and Saurabh Garg and Rui Xin and Niklas Muennighoff and Reinhard Heckel and Jean Mercat and Mayee Chen and Suchin Gururangan and Mitchell Wortsman and Alon Albalak and Yonatan Bitton and Marianna Nezhurina and Amro Abbas and Cheng-Yu Hsieh and Dhruba Ghosh and Josh Gardner and Maciej Kilian and Hanlin Zhang and Rulin Shao and Sarah Pratt and Sunny Sanyal and Gabriel Ilharco and Giannis Daras and Kalyani Marathe and Aaron Gokaslan and Jieyu Zhang and Khyathi Chandu and Thao Nguyen and Igor Vasiljevic and Sham Kakade and Shuran Song and Sujay Sanghavi and Fartash Faghri and Sewoong Oh and Luke Zettlemoyer and Kyle Lo and Alaaeldin El-Nouby and Hadi Pouransari and Alexander Toshev and Stephanie Wang and Dirk Groeneveld and Luca Soldaini and Pang Wei Koh and Jenia Jitsev and Thomas Kollar and Alexandros G. Dimakis and Yair Carmon and Achal Dave and Ludwig Schmidt and Vaishaal Shankar},
      year={2025},
      eprint={2406.11794},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2406.11794}, 
}

Contact

For questions about this dataset, please contact the Hubble Research Team (see Dataset Curators above).

Related Resources
