Comic Books Dataset v0.1 - Books

Book-level metadata that extends the page-level dataset.

This dataset contains metadata only and is meant to be used together with emanuelevivoli/comix-v0_1-pages.

For quick experimentation, use the tiny subset:

emanuelevivoli/comix-v0_1-books-tiny

Relationship to pages:
comix-v0_1-books extends comix-v0_1-pages.
Every book record is built from the pages dataset and provides:

  • book-level metadata (placeholder in v0.1)
  • ordered list of pages and shard locations
  • hooks for future segments/characters annotations.

What's Included

Each sample corresponds to one book and contains a single JSON file:

  • {book_id}.json - book metadata and page references

Book JSON Schema (v0.1)

{
  "book_id": "c00004",
  "book_metadata": {
    "series_title": null,
    "issue_number": null,
    "publication_date": null,
    "publisher": null,
    "total_pages": 68,
    "license_status": "Public Domain",
    "digital_source": "Digital Comic Museum"
  },
  "pages": [
    {
      "page_number": 0,
      "page_id": "c00004_p000",
      "tar_file": "pages-train-00000.tar",
      "has_segmentation": true
    }
    // ...
  ],
  "segments": [],     // v2+: story segments (to be added)
  "characters": []    // v2+: character bank (to be added)
}
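
For working with these records in code, the schema can be mirrored with lightweight type hints. This is only a sketch: the field names follow the JSON above, but TypedDict is just one possible representation and the exact types of the null bibliographic fields are not fixed in v0.1.

from typing import List, Optional, TypedDict

class PageRef(TypedDict):
    page_number: int
    page_id: str
    tar_file: str
    has_segmentation: bool

class BookMetadata(TypedDict):
    # Bibliographic fields are null in v0.1, so Optional[str] is an assumption.
    series_title: Optional[str]
    issue_number: Optional[str]
    publication_date: Optional[str]
    publisher: Optional[str]
    total_pages: int
    license_status: str
    digital_source: str

class Book(TypedDict):
    book_id: str
    book_metadata: BookMetadata
    pages: List[PageRef]
    segments: list     # v2+: story segments
    characters: list   # v2+: character bank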

Primordial status (v0.1):

  • book_metadata fields are mostly empty/null for now.
  • segments and characters are placeholders for future versions.
  • Future releases (v2+) will fill bibliographic information, story segments, summaries and richer character annotations.

Data Splits

Split        Books
Train        19158
Validation       2
Test             5
Total        19165

Split Strategy

Splits are exactly aligned with the pages dataset:

  • Splits are defined per book, not per page.
  • Each book_id is assigned to train / validation / test using an MD5 hash-based mapping consistent with the CoMix benchmark (C100 + DCM), as defined in https://github.com/emanuelevivoli/CoMix (see the sketch after this list).
  • All pages of a given book are in the same split in both datasets.
  • Books not present in the benchmark split lists fall back to train.
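
For illustration, the assignment can be thought of as a lookup of each hashed book identifier against per-split lists, with train as the default. The snippet below is a hypothetical sketch only: the list contents and the assign_split helper are placeholders, and the authoritative mapping lives in the CoMix repository linked above.

import hashlib

# Placeholder split lists; the real ones come from the CoMix benchmark repo.
VALIDATION_HASHES = set()  # MD5 digests of validation book ids
TEST_HASHES = set()        # MD5 digests of test book ids

def assign_split(book_id: str) -> str:
    """Books absent from both lists fall back to the train split."""
    digest = hashlib.md5(book_id.encode("utf-8")).hexdigest()
    if digest in VALIDATION_HASHES:
        return "validation"
    if digest in TEST_HASHES:
        return "test"
    return "train"

print(assign_split("c00004"))  # -> "train" with the empty placeholder lists above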

Sharding & Relationship with pages

Books are created after the pages dataset:

  1. Build comix-v0_1-pages from groups group_00-group_14.

  2. For each group, read the page JSONs and shard metadata.

  3. Group pages by book_id and split.

  4. For each book, create a JSON file referencing:

    • ordered list of page_ids
    • the tar_file of the corresponding page in the pages dataset
    • whether segmentation is available (has_segmentation)

Like the pages dataset, books are sharded as:

  • books-train-XXXXX.tar
  • books-validation-XXXXX.tar
  • books-test-XXXXX.tar

with the same 5-digit index XXXXX corresponding to the original source tar.

Tiny versions (*-books-tiny) are built from a reduced number of tars per group and aggregated in the same way.
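
If you prefer to work with the raw shards rather than the datasets library, each books shard can be read directly with tarfile. A minimal sketch, assuming a shard has already been downloaded locally and that each member is a {book_id}.json file as described above (the filename below is just an example matching the pattern):

import json
import tarfile

# Iterate over the book JSONs inside one local books shard.
with tarfile.open("books-train-00000.tar") as tar:
    for member in tar.getmembers():
        if not member.name.endswith(".json"):
            continue
        book_data = json.load(tar.extractfile(member))
        print(book_data["book_id"], book_data["book_metadata"]["total_pages"])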

Quick Start (Hugging Face datasets)

from datasets import load_dataset
import json

# Load books dataset (streaming recommended for large-scale use)
books = load_dataset(
    "emanuelevivoli/comix-v0_1-books",
    split="train",
    streaming=True,
)

for book in books:
    book_data = json.loads(book["json"])

    book_id = book_data["book_id"]
    total_pages = book_data["book_metadata"]["total_pages"]

    # Iterate through page references
    for page_ref in book_data["pages"]:
        page_id = page_ref["page_id"]
        page_number = page_ref["page_number"]
        tar_file = page_ref["tar_file"]
        has_seg = page_ref["has_segmentation"]

    print(f"Book {book_id}: {total_pages} pages (train split)")

Example: Join Books with Pages

from datasets import load_dataset
import json

pages = load_dataset("emanuelevivoli/comix-v0_1-pages", split="train")
books = load_dataset("emanuelevivoli/comix-v0_1-books", split="train")

# Build a simple index from page_id to page sample
page_index = {json.loads(p["json"])["page_id"]: p for p in pages}

book = books[0]
book_data = json.loads(book["json"])

book_pages = []
for page_ref in book_data["pages"]:
    pid = page_ref["page_id"]
    if pid in page_index:
        book_pages.append(page_index[pid])

print(f"Book {book_data['book_id']}: {len(book_pages)} pages loaded")
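
Loading the full pages split eagerly can be heavy, so for large-scale use a streaming join may be preferable. The sketch below collects the pages of a single book under the same field assumptions as the example above (page samples carrying a "json" string with a page_id):

from datasets import load_dataset
import json

# Pick one book (streaming) and note which page_ids it references.
books = load_dataset("emanuelevivoli/comix-v0_1-books", split="train", streaming=True)
first_book = json.loads(next(iter(books))["json"])
wanted_page_ids = {ref["page_id"] for ref in first_book["pages"]}

# Stream the pages split and keep only the referenced pages.
pages = load_dataset("emanuelevivoli/comix-v0_1-pages", split="train", streaming=True)
book_pages = []
for page in pages:
    if json.loads(page["json"])["page_id"] in wanted_page_ids:
        book_pages.append(page)
    if len(book_pages) == len(wanted_page_ids):
        break  # stop once every referenced page has been found

print(f"Book {first_book['book_id']}: {len(book_pages)} pages collected")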

Use Cases

  • Book-level analysis - model complete story arcs/book structure
  • Page grouping - group pages into sequences for autoregressive tasks
  • Metadata enrichment - attach external metadata at book level
  • Benchmark alignment - leverage CoMix splits in downstream tasks
  • Multimodal pipelines - use book metadata + page images for VQA, story generation, summarisation, etc.

Because the books dataset extends the pages dataset, it is the recommended way to work at “comic book” granularity while keeping the heavy pixel data in a shared page-level dataset.

Known Limitations (v0.1)

  • Primordial metadata:

    • Most bibliographic fields (series_title, issue_number, publication_date, publisher, etc.) are empty/null.
    • segments and characters are empty lists for now.
  • Dependency on pages:

    • This dataset does not contain images. All visual information must be loaded from emanuelevivoli/comix-v0_1-pages.
    • Page references (tar_file, page_id) assume the v0.1 layout of the pages dataset and may change in future major versions.
  • Noisy / missing lower-level annotations:

    • Any noisiness or missing data in the pages dataset (detections, captions, segmentations, empty seg.npz files) naturally propagates here.
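
Because segmentation coverage is not guaranteed, it can be useful to filter page references up front. A minimal sketch, relying only on the has_segmentation flag in the book JSON described above:

from datasets import load_dataset
import json

books = load_dataset("emanuelevivoli/comix-v0_1-books", split="train", streaming=True)

for book in books:
    book_data = json.loads(book["json"])
    # Keep only pages that the book record marks as having segmentation.
    segmented = [p for p in book_data["pages"] if p["has_segmentation"]]
    print(f"{book_data['book_id']}: {len(segmented)}/{len(book_data['pages'])} pages with segmentation")
    break  # remove to scan the whole split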

If you run into issues, please get in touch: emanuele [dot] vivoli [at] unifi [dot] it.

Processing Pipeline (High-Level)

  1. Group pages by book using book_id from the pages JSON.

  2. Compute split for each book using MD5 hash and CoMix split lists.

  3. Create book JSON with:

    • placeholder book_metadata
    • ordered page references (with tar_file and has_segmentation)
    • empty segments and characters for future v2+ annotations.
  4. Export WebDataset shards mirroring the pages shard indices.
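
As an illustration of steps 1, 3 and 4, a simplified aggregation could look like the following. This is a sketch only: the local paths and the assumed page-JSON fields (book_id, page_number, page_id, tar_file, has_segmentation) are placeholders, not the actual pipeline code, and the real export writes WebDataset shards rather than loose JSON files.

import json
from collections import defaultdict
from pathlib import Path

# 1. Group page JSONs by book_id (page files assumed to be extracted locally).
pages_by_book = defaultdict(list)
for page_path in sorted(Path("pages_json").glob("*.json")):
    page = json.loads(page_path.read_text())
    pages_by_book[page["book_id"]].append(page)

# 3. Build one book record per book_id with placeholder metadata.
Path("books_json").mkdir(exist_ok=True)
for book_id, book_pages in pages_by_book.items():
    book_pages.sort(key=lambda p: p["page_number"])
    book = {
        "book_id": book_id,
        "book_metadata": {"total_pages": len(book_pages)},  # other fields stay null in v0.1
        "pages": [
            {
                "page_number": p["page_number"],
                "page_id": p["page_id"],
                "tar_file": p["tar_file"],
                "has_segmentation": p.get("has_segmentation", False),
            }
            for p in book_pages
        ],
        "segments": [],
        "characters": [],
    }
    # 4. The real pipeline writes this record into a WebDataset shard that
    #    mirrors the pages shard index; here it is simply dumped to disk.
    Path(f"books_json/{book_id}.json").write_text(json.dumps(book, indent=2))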

Tiny Subset

For experimenting with book-level code:

  • Books tiny: emanuelevivoli/comix-v0_1-books-tiny

    • Very small number of shards, aligned with the tiny pages subset.
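
Loading it works the same way as the full dataset (assuming the same split names):

from datasets import load_dataset

# The tiny subset is small enough to load without streaming.
books_tiny = load_dataset("emanuelevivoli/comix-v0_1-books-tiny", split="train")
print(len(books_tiny), "books in the tiny train split")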

Citation

@dataset{comix_v0_1_books_2025,
  title   = {Comic Books Dataset v0.1 - Books},
  author  = {Emanuele Vivoli},
  year    = {2025},
  note    = {Book-level metadata built on top of the comix-v0.1 pages dataset},
  url     = {https://huggingface.co/datasets/emanuelevivoli/comix-v0_1-books}
}

License

  • Dataset: Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0).
  • Underlying comic scans are from public-domain sources.
  • If you extend this metadata (e.g. by enriching bibliographic fields or adding segments/characters annotations), please share your work with the community under a compatible share-alike license.