# Comic Books Dataset v0.1 - Books
Book-level metadata that extends the page-level dataset.

This dataset contains metadata only and is meant to be used together with `emanuelevivoli/comix-v0_1-pages`. For quick experimentation, use the tiny subset: `emanuelevivoli/comix-v0_1-books-tiny`.
## Relationship to pages

`comix-v0_1-books` extends `comix-v0_1-pages`.
Every book record is built from the pages dataset and provides:
- book-level metadata (placeholder in v0.1)
- an ordered list of pages and their shard locations
- hooks for future segment/character annotations
## What's Included
Each sample corresponds to one book and contains a single JSON file:
- `{book_id}.json` - book metadata and page references
## Book JSON Schema (v0.1)

```json
{
  "book_id": "c00004",
  "book_metadata": {
    "series_title": null,
    "issue_number": null,
    "publication_date": null,
    "publisher": null,
    "total_pages": 68,
    "license_status": "Public Domain",
    "digital_source": "Digital Comic Museum"
  },
  "pages": [
    {
      "page_number": 0,
      "page_id": "c00004_p000",
      "tar_file": "pages-train-00000.tar",
      "has_segmentation": true
    }
    // ...
  ],
  "segments": [],   // v2+: story segments (to be added)
  "characters": []  // v2+: character bank (to be added)
}
```
Primordial status (v0.1):
- `book_metadata` fields are mostly empty/null for now.
- `segments` and `characters` are placeholders for future versions.
- Future releases (v2+) will fill in bibliographic information, story segments, summaries, and richer character annotations.
## Data Splits
| Split | Books |
|---|---|
| Train | 19158 |
| Validation | 2 |
| Test | 5 |
| Total | 19165 |
## Split Strategy
Splits are exactly aligned with the pages dataset:
- Splits are defined per book, not per page.
- Each `book_id` is assigned to train / validation / test using an MD5 hash-based mapping consistent with the CoMix benchmark (C100 + DCM), as defined in: https://github.com/emanuelevivoli/CoMix (see the sketch after this list).
- All pages of a given book are in the same split in both datasets.
- Books not present in the benchmark split lists fall back to train.
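A minimal sketch of this assignment, assuming the benchmark split lists are keyed by MD5 digest; the exact mapping lives in the CoMix repository linked above:

```python
import hashlib

def assign_split(book_id: str, val_ids: set, test_ids: set) -> str:
    # Hypothetical: key the benchmark split lists by the MD5 digest of
    # the book_id; the real CoMix mapping may key differently.
    digest = hashlib.md5(book_id.encode("utf-8")).hexdigest()
    if digest in val_ids:
        return "validation"
    if digest in test_ids:
        return "test"
    # Books absent from the benchmark lists fall back to train.
    return "train"
```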
## Sharding & Relationship with pages
Books are created after the pages dataset:

1. Build `comix-v0_1-pages` from groups `group_00`-`group_14`.
2. For each group, read the page JSONs and shard metadata.
3. Group pages by `book_id` and split.
4. For each book, create a JSON file referencing:
   - the ordered list of `page_id`s
   - the `tar_file` of the corresponding page in the pages dataset
   - whether segmentation is available (`has_segmentation`)
Like the pages dataset, books are sharded as:
- `books-train-XXXXX.tar`
- `books-validation-XXXXX.tar`
- `books-test-XXXXX.tar`

with the same 5-digit index `XXXXX` corresponding to the original source tar.
Tiny versions (`*-books-tiny`) are built from a reduced number of tars per group and aggregated in the same way.
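As a quick illustration, a hedged sketch of reading one book shard directly with the `webdataset` library (the local shard filename follows the naming pattern above; loading via `datasets` as in the Quick Start below is the simpler route):

```python
import json
import webdataset as wds

# Read one local shard; each sample carries a single {book_id}.json
# payload, keyed by its file extension.
ds = wds.WebDataset("books-train-00000.tar")
for sample in ds:
    book_data = json.loads(sample["json"])
    print(book_data["book_id"], book_data["book_metadata"]["total_pages"])
    break
```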
## Quick Start (Hugging Face datasets)
```python
from datasets import load_dataset
import json

# Load books dataset (streaming recommended for large-scale use)
books = load_dataset(
    "emanuelevivoli/comix-v0_1-books",
    split="train",
    streaming=True,
)

for book in books:
    book_data = json.loads(book["json"])
    book_id = book_data["book_id"]
    total_pages = book_data["book_metadata"]["total_pages"]

    # Iterate through page references
    for page_ref in book_data["pages"]:
        page_id = page_ref["page_id"]
        page_number = page_ref["page_number"]
        tar_file = page_ref["tar_file"]
        has_seg = page_ref["has_segmentation"]

    print(f"Book {book_id}: {total_pages} pages (train split)")
```
## Example: Join Books with Pages
```python
from datasets import load_dataset
import json

pages = load_dataset("emanuelevivoli/comix-v0_1-pages", split="train")
books = load_dataset("emanuelevivoli/comix-v0_1-books", split="train")

# Build a simple index from page_id to page sample
# (each sample stores its metadata as a JSON string, so parse it first)
page_index = {json.loads(p["json"])["page_id"]: p for p in pages}

book = books[0]
book_data = json.loads(book["json"])

book_pages = []
for page_ref in book_data["pages"]:
    pid = page_ref["page_id"]
    if pid in page_index:
        book_pages.append(page_index[pid])

print(f"Book {book_data['book_id']}: {len(book_pages)} pages loaded")
```
## Use Cases
- Book-level analysis - model complete story arcs/book structure
- Page grouping - group pages into sequences for autoregressive tasks
- Metadata enrichment - attach external metadata at book level
- Benchmark alignment - leverage CoMix splits in downstream tasks
- Multimodal pipelines - use book metadata + page images for VQA, story generation, summarisation, etc.
Because `books` extends `pages`, this is the recommended way to work at "comic book" granularity while keeping the heavy pixel data in a shared page-level dataset.
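For instance, a minimal sketch of the page-grouping use case: cutting a book's ordered page references into fixed-size windows (the window size is an arbitrary illustrative choice):

```python
import json
from datasets import load_dataset

books = load_dataset(
    "emanuelevivoli/comix-v0_1-books", split="train", streaming=True
)
book_data = json.loads(next(iter(books))["json"])

# Order page references, then cut them into fixed-size windows,
# e.g. as contexts for an autoregressive page-sequence model.
ordered = sorted(book_data["pages"], key=lambda p: p["page_number"])
window = 4  # arbitrary context length, chosen for illustration
sequences = [ordered[i:i + window] for i in range(0, len(ordered), window)]
print(f"{book_data['book_id']}: {len(sequences)} windows of up to {window} pages")
```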
## Known Limitations (v0.1)
Primordial metadata:
- Most bibliographic fields (`series_title`, `issue_number`, `publication_date`, `publisher`, etc.) are empty/null.
- `segments` and `characters` are empty lists for now.
Dependency on `pages`:
- This dataset does not contain images. All visual information must be loaded from `emanuelevivoli/comix-v0_1-pages`.
- Page references (`tar_file`, `page_id`) assume the v0.1 layout of the pages dataset and may change in future major versions.
Noisy / missing lower-level annotations:
- Any noisiness or missing data in the pages dataset (detections, captions, segmentations, empty `seg.npz` files) naturally propagates here (see the guard sketch below).
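A minimal defensive pattern, using the `has_segmentation` flag from the schema above to skip page references whose segmentation is missing:

```python
import json
from datasets import load_dataset

books = load_dataset(
    "emanuelevivoli/comix-v0_1-books", split="train", streaming=True
)
book_data = json.loads(next(iter(books))["json"])

# Skip page references without segmentation so that gaps in the pages
# dataset do not break downstream processing.
usable = [p for p in book_data["pages"] if p["has_segmentation"]]
print(f"{len(usable)}/{len(book_data['pages'])} pages have segmentation")
```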
If you run into issues, please get in touch:
emanuele [dot] vivoli [at] unifi [dot] it.
## Processing Pipeline (High-Level)
1. Group pages by book using `book_id` from the pages JSON.
2. Compute the split for each book using the MD5 hash and CoMix split lists.
3. Create a book JSON (sketched below) with:
   - placeholder `book_metadata`
   - ordered `pages` references (with `tar_file` and `has_segmentation`)
   - empty `segments` and `characters` for future v2+ annotations
4. Export WebDataset shards mirroring the pages shard indices.
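A simplified sketch of step 3, assuming the page references have already been grouped by book (the helper and its inputs are hypothetical; the real pipeline also handles splits and sharding):

```python
import json

def build_book_json(book_id, page_records):
    # page_records: hypothetical dicts with page_id, page_number,
    # tar_file, and has_segmentation, as read from the pages dataset.
    pages = sorted(page_records, key=lambda p: p["page_number"])
    return {
        "book_id": book_id,
        "book_metadata": {
            "series_title": None,   # placeholder until v2+
            "issue_number": None,
            "publication_date": None,
            "publisher": None,
            "total_pages": len(pages),
            "license_status": "Public Domain",
            "digital_source": "Digital Comic Museum",
        },
        "pages": pages,
        "segments": [],    # reserved for v2+ story segments
        "characters": [],  # reserved for v2+ character bank
    }

example = build_book_json("c00004", [
    {"page_number": 0, "page_id": "c00004_p000",
     "tar_file": "pages-train-00000.tar", "has_segmentation": True},
])
print(json.dumps(example, indent=2))
```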
## Tiny Subset
For experimenting with book-level code:
- Books tiny: `emanuelevivoli/comix-v0_1-books-tiny`
- Very small number of shards, aligned with the tiny pages subset.
## Citation
```bibtex
@dataset{comix_v0_1_books_2025,
  title  = {Comic Books Dataset v0.1 - Books},
  author = {Emanuele Vivoli},
  year   = {2025},
  note   = {Book-level metadata built on top of the comix-v0.1 pages dataset},
  url    = {https://huggingface.co/datasets/emanuelevivoli/comix-v0_1-books}
}
```
## License
- Dataset: Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0).
- Underlying comic scans are from public-domain sources.
- If you extend these metadata (e.g. enrich bibliographic fields, add segments/characters annotations), please share your work with the community under a compatible share-alike license.