---
language: en
license: mit
task_categories:
- text-generation
tags:
- stylometry
- authorship-attribution
- literary-analysis
- austen
- classic-literature
- project-gutenberg
size_categories:
- n<1K
pretty_name: Jane Austen Corpus
---
# ContextLab Jane Austen Corpus

## Dataset Description
This dataset contains works by **Jane Austen** (1775-1817), preprocessed for computational stylometry research. The texts were sourced from [Project Gutenberg](https://www.gutenberg.org/) and cleaned for use in the paper ["A Stylometric Application of Large Language Models"](https://arxiv.org/abs/2510.21958) (Stropkay et al., 2025).

The corpus comprises **7 books** by Jane Austen, including Pride and Prejudice, Sense and Sensibility, and Emma. All text has been converted to **lowercase** and cleaned of Project Gutenberg headers, footers, and chapter headings to focus on the author's prose style.
### Quick Stats

- **Books:** 7
- **Total characters:** 4,127,071
- **Total words:** 740,058 (approximate)
- **Average book length:** 589,581 characters
- **Format:** Plain text (.txt files)
- **Language:** English (lowercase)
## Dataset Structure

### Books Included

Each `.txt` file contains the complete text of one book:
| File | Title |
|------|-------|
| `105.txt` | Persuasion |
| `121.txt` | Northanger Abbey |
| `1342.txt` | Pride and Prejudice |
| `141.txt` | Mansfield Park |
| `158.txt` | Emma |
| `161.txt` | Sense and Sensibility |
| `946.txt` | Lady Susan |
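For convenience, the table above can be expressed as a filename-to-title mapping. The dictionary below is illustrative (it is not shipped with the dataset) and can be used with the `data_files` argument shown in the Usage section:

```python
# Gutenberg ID -> title mapping, transcribed from the table above.
# This dictionary is illustrative; it is not part of the dataset itself.
AUSTEN_FILES = {
    "105.txt": "Persuasion",
    "121.txt": "Northanger Abbey",
    "1342.txt": "Pride and Prejudice",
    "141.txt": "Mansfield Park",
    "158.txt": "Emma",
    "161.txt": "Sense and Sensibility",
    "946.txt": "Lady Susan",
}
```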
### Data Fields

- **text:** Complete book text (lowercase, cleaned)
- **filename:** Project Gutenberg ID

### Data Format

All files are plain UTF-8 text:

- Lowercase characters only
- Punctuation and structure preserved
- Paragraph breaks maintained
- No chapter headings or non-narrative text
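A quick sanity check of the stated format (a minimal sketch, assuming the `text` field described above):

```python
from datasets import load_dataset

# Verify that every book is already lowercase, as described above.
corpus = load_dataset("contextlab/austen-corpus")
for book in corpus["train"]:
    assert book["text"] == book["text"].lower(), "unexpected uppercase characters"
print("All books are lowercase.")
```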
## Usage

### Load with `datasets` library

```python
from datasets import load_dataset

# Load entire corpus
corpus = load_dataset("contextlab/austen-corpus")

# Iterate through books
for book in corpus['train']:
    print(f"Book length: {len(book['text']):,} characters")
    print(book['text'][:200])  # First 200 characters
    print()
```
### Load specific file

```python
from datasets import load_dataset

# Load single book by filename
dataset = load_dataset(
    "contextlab/austen-corpus",
    data_files="1342.txt"  # Specific Gutenberg ID (Pride and Prejudice)
)

text = dataset['train'][0]['text']
print(f"Loaded {len(text):,} characters")
```
### Download files directly

```python
from huggingface_hub import hf_hub_download

# Download one book
file_path = hf_hub_download(
    repo_id="contextlab/austen-corpus",
    filename="1342.txt",  # Pride and Prejudice
    repo_type="dataset"
)

with open(file_path, 'r', encoding='utf-8') as f:
    text = f.read()
```
### Use for training language models

```python
from datasets import load_dataset
from transformers import (
    GPT2Tokenizer,
    GPT2LMHeadModel,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Load corpus
corpus = load_dataset("contextlab/austen-corpus")

# Tokenize (GPT-2 has no pad token, so reuse the end-of-text token)
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

def tokenize_function(examples):
    return tokenizer(examples['text'], truncation=True, max_length=1024)

tokenized = corpus.map(tokenize_function, batched=True, remove_columns=['text'])

# Initialize model
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Collator that pads batches and builds labels for causal language modeling
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

# Set up training
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=10,
    per_device_train_batch_size=8,
    save_steps=1000,
)

# Train
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized['train'],
    data_collator=data_collator,
)
trainer.train()
```
### Analyze text statistics

```python
from datasets import load_dataset
import numpy as np

corpus = load_dataset("contextlab/austen-corpus")

# Calculate statistics
lengths = [len(book['text']) for book in corpus['train']]

print(f"Books: {len(lengths)}")
print(f"Total characters: {sum(lengths):,}")
print(f"Mean length: {np.mean(lengths):,.0f} characters")
print(f"Std length: {np.std(lengths):,.0f} characters")
print(f"Min length: {min(lengths):,} characters")
print(f"Max length: {max(lengths):,} characters")
```
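The word total in Quick Stats is approximate; a simple whitespace split gives a comparable rough figure (exact counts depend on the tokenization used):

```python
from datasets import load_dataset

# Approximate word counts via whitespace splitting; treat these as
# rough estimates comparable to the Quick Stats figure above.
corpus = load_dataset("contextlab/austen-corpus")
word_counts = [len(book["text"].split()) for book in corpus["train"]]
print(f"Total words (approximate): {sum(word_counts):,}")
```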
## Dataset Creation

### Source Data

All texts sourced from [Project Gutenberg](https://www.gutenberg.org/), a library of over 70,000 free eBooks in the public domain.

**Project Gutenberg Links:**

- Books identified by Gutenberg ID numbers (filenames)
- Example: `1342.txt` corresponds to https://www.gutenberg.org/ebooks/1342 (Pride and Prejudice)
- All works are in the public domain
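Since the filename is simply the Gutenberg ID plus `.txt`, the source page URL can be reconstructed directly (a minimal sketch; the helper name is ours):

```python
# Build the Project Gutenberg URL for a corpus file by stripping the
# ".txt" extension from its filename (the Gutenberg ID).
def gutenberg_url(filename: str) -> str:
    book_id = filename.removesuffix(".txt")
    return f"https://www.gutenberg.org/ebooks/{book_id}"

print(gutenberg_url("1342.txt"))  # -> https://www.gutenberg.org/ebooks/1342
```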
### Preprocessing Pipeline

The raw Project Gutenberg texts underwent the following preprocessing:

1. **Header/footer removal:** Project Gutenberg license text and metadata removed
2. **Lowercase conversion:** All text converted to lowercase for stylometry
3. **Chapter heading removal:** Chapter titles and numbering removed
4. **Non-narrative text removal:** Tables of contents, dedications, etc. removed
5. **Encoding normalization:** Converted to UTF-8
6. **Structure preservation:** Paragraph breaks and punctuation maintained

**Why lowercase?** Stylometric analysis focuses on word choice, syntax, and style rather than capitalization patterns. Lowercase normalization removes this variable.

**Preprocessing code:** Available at https://github.com/ContextLab/llm-stylometry
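The authoritative pipeline lives in the repository linked above. The sketch below only illustrates the listed steps; the Gutenberg marker strings and chapter-heading regex are assumptions, not the project's actual code:

```python
import re

# Minimal, illustrative sketch of the preprocessing steps listed above.
# The marker patterns and regexes here are assumptions; see the
# llm-stylometry repository for the actual pipeline.
def clean_gutenberg_text(raw: str) -> str:
    # 1. Strip the Project Gutenberg header/footer around the body text.
    start = re.search(r"\*\*\* START OF .* \*\*\*", raw)
    end = re.search(r"\*\*\* END OF .* \*\*\*", raw)
    body = raw[start.end():end.start()] if start and end else raw

    # 2. Lowercase for stylometric analysis.
    body = body.lower()

    # 3. Remove chapter headings such as "chapter i" or "chapter 12".
    body = re.sub(r"^\s*chapter\s+[ivxlc\d]+\.?\s*$", "", body, flags=re.MULTILINE)

    # 4-6. Normalize whitespace while keeping paragraph breaks.
    body = re.sub(r"\n{3,}", "\n\n", body)
    return body.strip()
```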
## Considerations for Using This Dataset

### Known Limitations

- **Historical language:** Reflects the vocabulary, grammar, and cultural context of 19th-century England
- **Lowercase only:** All text converted to lowercase (not suitable for case-sensitive analysis)
- **Incomplete corpus:** May not include all of Jane Austen's writings (only public domain works on Gutenberg)
- **Cleaning artifacts:** Some formatting irregularities may remain from the Gutenberg source
- **Public domain only:** Limited to works published before copyright restrictions
### Intended Use Cases

- **Stylometry research:** Authorship attribution, style analysis
- **Language modeling:** Training author-specific models
- **Literary analysis:** Computational study of Jane Austen's writing
- **Historical NLP:** Language patterns of 19th-century England
- **Educational:** Teaching computational text analysis

### Out-of-Scope Uses

- Case-sensitive text analysis
- Modern language applications
- Factual information retrieval
- Complete scholarly editions (use academic sources)
## Citation

If you use this dataset in your research, please cite:

```bibtex
@article{StroEtal25,
  title={A Stylometric Application of Large Language Models},
  author={Stropkay, Harrison F. and Chen, Jiayi and Jabelli, Mohammad J. L. and Rockmore, Daniel N. and Manning, Jeremy R.},
  journal={arXiv preprint arXiv:2510.21958},
  year={2025}
}
```
## Additional Information

### Dataset Curator

[ContextLab](https://www.context-lab.com/), Dartmouth College

### Licensing

MIT License - Free to use with attribution

### Contact

- **Paper & Code:** https://github.com/ContextLab/llm-stylometry
- **Issues:** https://github.com/ContextLab/llm-stylometry/issues
- **Contact:** Jeremy R. Manning ([email protected])
### Related Resources

Explore the datasets for all 8 authors in the study:

- [Jane Austen](https://huggingface.co/datasets/contextlab/austen-corpus)
- [L. Frank Baum](https://huggingface.co/datasets/contextlab/baum-corpus)
- [Charles Dickens](https://huggingface.co/datasets/contextlab/dickens-corpus)
- [F. Scott Fitzgerald](https://huggingface.co/datasets/contextlab/fitzgerald-corpus)
- [Herman Melville](https://huggingface.co/datasets/contextlab/melville-corpus)
- [Ruth Plumly Thompson](https://huggingface.co/datasets/contextlab/thompson-corpus)
- [Mark Twain](https://huggingface.co/datasets/contextlab/twain-corpus)
- [H.G. Wells](https://huggingface.co/datasets/contextlab/wells-corpus)