
Dataset Card for Project Gutenberg (Cleaned English Subset, Tokenized) Dataset

A cleaned and tokenized English-language subset of the Project Gutenberg dataset containing 38,026 books. Non-English texts, duplicates, and boilerplate license sections were removed for clarity and usability. The dataset was tokenized with OpenAI's tiktoken tokenizer and structured for efficient streaming and distributed (DDP) training: the number of shards per split is divisible by 8, and all shards within a split are balanced to contain an equal number of tokens. Each row contains 65,537 tokens (64 × 1,024 + 1), optimized for autoregressive modeling and batch packing.

Cleaning and Preprocessing

The following steps were applied to prepare this dataset (a rough sketch of an equivalent pipeline follows the list):

✅ Filtered English split only (config='en')

🧹 Removed Project Gutenberg headers and footers from each book (boilerplate license & metadata sections)

✂️ Removed excessive whitespace and blank lines

🔁 Deduplicated entries by book ID, keeping a single copy of each book
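
The exact preprocessing scripts are not included in this card. A minimal sketch of an equivalent pipeline, assuming the manu/project_gutenberg source with an 'en' config and 'id'/'text' columns (the config and column names are assumptions), could look like this:

import re
from datasets import load_dataset

# Assumption: the source exposes "id" and "text" columns; adjust names if they differ.
src = load_dataset("manu/project_gutenberg", "en", split="train", streaming=True)

START_RE = re.compile(r"\*\*\* ?START OF (?:THE|THIS) PROJECT GUTENBERG EBOOK.*?\*\*\*", re.S)
END_RE = re.compile(r"\*\*\* ?END OF (?:THE|THIS) PROJECT GUTENBERG EBOOK.*", re.S)

def strip_boilerplate(text):
    # Keep only the body between the START and END license markers,
    # then squeeze runs of blank lines down to a single blank line.
    text = START_RE.split(text)[-1]
    text = END_RE.split(text)[0]
    return re.sub(r"\n{3,}", "\n\n", text).strip()

seen = set()
for book in src:
    if book["id"] in seen:              # keep one copy per book ID
        continue
    seen.add(book["id"])
    cleaned = strip_boilerplate(book["text"])
    # ... tokenize `cleaned` with tiktoken and pack it into shards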

Dataset Structure and Sharding

This dataset was designed for efficient streaming and distributed training (DDP). To ensure balanced workload distribution across multiple GPU processes, the number of shards per split was made divisible by 8, allowing each process to handle an equal portion of data during parallel training. All shards within the same split contain an identical number of tokens for uniformity and performance consistency. Each row in the dataset consists of 65,537 tokens (64 × 1,024 + 1), enabling efficient batch packing and facilitating the creation of aligned input-target sequences for autoregressive model training.
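
Because of that layout, the single extra token at the end of each row supplies the shifted target for the final position, so a row can be reshaped into 64 aligned input-target pairs with no padding. A minimal sketch of this packing (NumPy; the token-ID column is named tokens):

import numpy as np

SEQ_LEN = 1024
SEQS_PER_ROW = 64   # 64 * 1,024 + 1 = 65,537 tokens per row

def row_to_batch(tokens):
    # Inputs are tokens 0..65,535 and targets are tokens 1..65,536,
    # so both reshape cleanly to (64, 1024).
    buf = np.asarray(tokens, dtype=np.int64)
    assert buf.shape[0] == SEQS_PER_ROW * SEQ_LEN + 1
    inputs = buf[:-1].reshape(SEQS_PER_ROW, SEQ_LEN)
    targets = buf[1:].reshape(SEQS_PER_ROW, SEQ_LEN)
    return inputs, targets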

Intended Use

This dataset is suitable for:

  • Pretraining and fine-tuning autoregressive language models.
  • Token-level or sequence-level language modeling experiments.
  • Benchmarking data-loading performance in multi-GPU or distributed setups.

Usage

from datasets import load_dataset
ds = load_dataset("nikolina-p/gutenberg_flat", split="train", streaming=True)
print(next(iter(ds)))
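
For distributed (DDP) runs, the streaming dataset can be split across processes at the shard level, which is where the shard count divisible by 8 pays off. A sketch assuming a torchrun-style launch and the split_dataset_by_node helper from the datasets library:

import os
from datasets import load_dataset
from datasets.distributed import split_dataset_by_node

# Under torchrun, RANK and WORLD_SIZE identify each training process.
rank = int(os.environ.get("RANK", 0))
world_size = int(os.environ.get("WORLD_SIZE", 1))

ds = load_dataset("nikolina-p/gutenberg_flat", split="train", streaming=True)

# With the shard count divisible by 8, an 8-process run assigns whole,
# token-balanced shards to each rank instead of skipping examples.
ds_rank = split_dataset_by_node(ds, rank=rank, world_size=world_size)

for example in ds_rank:
    tokens = example["tokens"]   # 65,537 token IDs per row
    break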

License

The dataset inherits the license terms of its source, manu/project_gutenberg, and of Project Gutenberg itself. All texts are in the public domain in the United States unless otherwise noted.
