---
license: cc-by-4.0
task_categories:
  - token-classification
language:
  - nl
tags:
  - ner
  - named-entity-extraction
  - dutch
  - historical-documents
  - archival-texts
  - relabeling
  - datasets
  - 17th-century
  - 18th-century
  - 19th-century
pretty_name: Dutch Historical Notarial NER Dataset (Tag de Tekst)
dataset_info:
  features:
    - name: tokenized_text
      sequence: string
    - name: ner
      list:
        - name: end
          dtype: int64
        - name: label
          dtype: string
        - name: start
          dtype: int64
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 32850137
      num_examples: 8885
    - name: val
      num_bytes: 8141505
      num_examples: 2232
    - name: test_rhc
      num_bytes: 8944
      num_examples: 6
    - name: test_nha
      num_bytes: 178200
      num_examples: 38
    - name: test_voc
      num_bytes: 322644
      num_examples: 95
    - name: test_sa
      num_bytes: 327305
      num_examples: 96
  download_size: 15870302
  dataset_size: 41828735
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: val
        path: data/val-*
      - split: test_rhc
        path: data/test_rhc-*
      - split: test_nha
        path: data/test_nha-*
      - split: test_voc
        path: data/test_voc-*
      - split: test_sa
        path: data/test_sa-*
---

# Dutch Historical Notarial NER Dataset

This dataset is a relabeled version of the "AI-trainingset voor Named Entity Recognition (NER)" created during the crowdsourcing project "Tag de Tekst" on VeleHanden.nl in 2020. It has been adapted for Named Entity Recognition (NER) tasks, with relabeling performed using Google DeepMind's Gemini 2.0 Flash model.

## Dataset Overview

- Original Source: Transcriptions of Dutch notarial texts from the 17th to 19th centuries.
- Annotations: Annotated by ~150 volunteers and reviewed by super users.
- Relabeling: Automatic relabeling into 4 entity classes:
  - persoon (person names)
  - locatie (locations)
  - datum (dates)
  - organisatie (organizations)
- Sources: Includes material from:
  - Stadsarchief Amsterdam
  - Nationaal Archief
  - Noord-Hollands Archief
  - Other regional historical centers
- Total Scans: 10,567
- Language: Dutch

## Format

This dataset is formatted for use with GLiNER. Each sample includes:

- `text`: The full text.
- `tokenized_text`: The text split into full-word tokens.
- `ner`: A list of annotated entities with start and end token indices and entity types.

Example:

```json
{
  "text": "Henrick Cardamon Op huijden ...",
  "tokenized_text": ["Henrick", "Cardamon", "Op", "huijden", ...],
  "ner": [
    {
      "start": 0,
      "end": 1,
      "label": "persoon"
    },
    ...
  ]
}
```
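
Since `start` and `end` are token indices (with `end` apparently inclusive, as the `persoon` span covering both "Henrick" and "Cardamon" above suggests), the surface form of each entity can be recovered by slicing `tokenized_text`. A minimal sketch under that assumption:

```python
def extract_entities(sample):
    """Return (surface form, label) pairs for one sample.

    Assumes `end` is an inclusive token index, as in the example above.
    """
    tokens = sample["tokenized_text"]
    return [
        (" ".join(tokens[ent["start"] : ent["end"] + 1]), ent["label"])
        for ent in sample["ner"]
    ]


sample = {
    "tokenized_text": ["Henrick", "Cardamon", "Op", "huijden"],
    "ner": [{"start": 0, "end": 1, "label": "persoon"}],
}
print(extract_entities(sample))  # [('Henrick Cardamon', 'persoon')]
```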

## Usage

To load this dataset with the Hugging Face Datasets library, run:

```python
from datasets import load_dataset

dataset = load_dataset("TimKoornstra/dutch-notarial-ner")
```
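
The split names match the configuration above: `train`, `val`, and four evaluation splits (`test_rhc`, `test_nha`, `test_voc`, `test_sa`). A short usage sketch for inspecting split sizes and a training example:

```python
from datasets import load_dataset

dataset = load_dataset("TimKoornstra/dutch-notarial-ner")

# Number of examples per split.
for split_name, split in dataset.items():
    print(f"{split_name}: {len(split)} examples")

# First training example: raw text and its annotated entities.
example = dataset["train"][0]
print(example["text"][:200])
print(example["ner"][:5])
```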

## Data Preprocessing and Relabeling

The original dataset, created during the "Tag de Tekst" project, was annotated by a large group of volunteers. While this collaborative effort provided a valuable starting point, the annotations were often inconsistent and contained inaccuracies. Common issues included:

- Mislabeling of entities (e.g., locations marked as persons).
- Overlapping or incomplete entity spans.
- Inconsistent application of annotation guidelines.

To address these challenges, the dataset was automatically relabeled using Google DeepMind's Gemini 2.0 Flash model. This process mapped all annotations into a simplified schema with four entity types:

- persoon (person names)
- locatie (locations)
- datum (dates)
- organisatie (organizations)

Additionally:

- Inconsistent spans were corrected to ensure uniformity.
- The data was reformatted for compatibility with modern tools like GLiNER and the Hugging Face datasets library.

These preprocessing steps make the dataset more accurate and consistent for training and evaluating Named Entity Recognition (NER) models; a simple span-consistency check is sketched below.
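
Because the relabeling was automatic, a lightweight consistency check over the four-class schema can help surface residual annotation noise before training. The sketch below (an illustrative check, not part of the released pipeline) verifies that every label belongs to the schema and that every span stays inside the tokenized text, assuming inclusive end indices as in the example above:

```python
from datasets import load_dataset

VALID_LABELS = {"persoon", "locatie", "datum", "organisatie"}

dataset = load_dataset("TimKoornstra/dutch-notarial-ner")

flagged = 0
for example in dataset["train"]:
    n_tokens = len(example["tokenized_text"])
    for ent in example["ner"]:
        # A span is suspect if its label is outside the schema or its
        # (inclusive) token indices fall outside the tokenized text.
        if ent["label"] not in VALID_LABELS or not (0 <= ent["start"] <= ent["end"] < n_tokens):
            flagged += 1

print(f"Suspect spans in the train split: {flagged}")
```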

## Limitations

Despite preprocessing and relabeling, the dataset has some limitations:

- Incomplete Entity Coverage: While many errors were corrected, there may still be missed entities or incorrect spans, especially in complex cases.
- Model-Induced Bias: The relabeling process relied on Google DeepMind's Gemini 2.0 Flash model, which may introduce biases inherent to the model's training data.
- Historical Context Challenges: The dataset consists of historical Dutch texts (17th–19th century) with archaic language and formatting, which may pose additional challenges for modern models.
- Potential Noise: Due to the automatic relabeling process, there may still be minor inconsistencies or errors in the annotations.
- HTR Artifacts: The dataset is based on handwritten text recognition (HTR) outputs, so any transcription errors from the HTR process remain in the data. This limitation is consistent with the original dataset.

Users of this dataset should carefully evaluate its performance on their specific use case and consider further fine-tuning or validation if needed.

## License

This dataset is released under the same license as the original: CC BY 4.0.

## Citation

If you use this dataset in your research, please cite the original dataset and this repository:

```bibtex
@misc{dutch_notarial_ner,
  author = {Tim Koornstra},
  title = {Dutch Historical Notarial NER Dataset},
  year = {2025},
  howpublished = {\url{https://huggingface.co/TimKoornstra/dutch-notarial-ner}},
  note = {Relabeled with Gemini 2.0 Flash model}
}
```

## Acknowledgements

This dataset was originally developed as part of the "Tag de Tekst" crowdsourcing project on VeleHanden.nl.

Special thanks to the volunteers of the project and to the organizations that contributed archival material.