# Fidel: A Large-Scale Sentence Level Amharic OCR Dataset

## Overview

Fidel is a comprehensive dataset for Amharic Optical Character Recognition (OCR) at the sentence level. It contains a diverse collection of Amharic text images spanning handwritten, typed, and synthetic sources. This dataset aims to advance language technology for Amharic, serving critical applications such as digital ID initiatives, document digitization, and automated form processing in Ethiopia.
## Citation

If you use the Fidel: A Large-Scale Sentence Level Amharic OCR Dataset in your work, please cite the associated preprint using the following BibTeX entry:

```bibtex
@article{Fidel2025,
  author  = {Chamisso, Tunga Tessema and Guda, Blessed and Adego, Bereket Retta and Sagbo, Carmel Prosper and Ashungafac, Gabrial Zencha and Gueye, Assane},
  title   = {{Fidel: A Large-Scale Sentence Level Amharic OCR Dataset}},
  journal = {Research Square},
  year    = {2025},
  doi     = {10.21203/rs.3.rs-8118465/v1},
  url     = {https://doi.org/10.21203/rs.3.rs-8118465/v1}
}
```
## Dataset Structure

The dataset is organized into train and test splits:

```
fidel-dataset/
├── train/              # training images (handwritten, typed, and synthetic)
├── test/               # test images (handwritten, typed, and synthetic)
├── train_labels.json   # filenames and corresponding text labels for train/
├── test_labels.json    # filenames and corresponding text labels for test/
└── metadata.json       # Croissant metadata file
```
## Labels Format

Each labels file contains the following fields for every sample:

- `image_filename`: name of the image file
- `line_text`: the Amharic text content in the image
- `type`: the source type (handwritten, typed, or synthetic)
- `writer`: the writer number (for handwritten samples only)
## Example Labels

| image_filename | line_text |
|---|---|
| 25_line_4.png | ዲግሪዎች የህዝብ አስተዳደር ትምህርት ተምረው አንዳገኟቸው ሲገልጹ የቆዩ ሲሆን ይህንንም በፓርላማ ድረገጽ ፣ በፌስቡክ ፣ ዊኪፔድያ ፣ |
| 3_line_2.png | ዮርክ ኬኔዲ አየር ጣቢያ ተነስቶ ሎንዶን ሂዝሮው አየር ጣቢያ አረፈ። ዝምባብዌም በመንግስት ለታገዘ ዝርፊያ እንዲሁም ለድህነትና በሽታ እጇን |
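The snippet below is a minimal loading sketch. It assumes each labels file is a JSON list of records carrying the fields listed under Labels Format; if the on-disk structure differs (for example, a filename-to-record mapping), adjust the parsing accordingly.

```python
import json
from pathlib import Path

# Path after cloning the repository and extracting the archives (see Usage below)
DATA_ROOT = Path("fidel-dataset")

# Assumption: train_labels.json is a list of records such as
# {"image_filename": "25_line_4.png", "line_text": "...", "type": "handwritten", "writer": 25}
with open(DATA_ROOT / "train_labels.json", encoding="utf-8") as f:
    train_labels = json.load(f)

# Pair each handwritten record with its image path and text label
handwritten = [
    (DATA_ROOT / "train" / rec["image_filename"], rec["line_text"])
    for rec in train_labels
    if rec.get("type") == "handwritten"
]
print(f"{len(handwritten)} handwritten training samples")
```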
## Usage

```bash
# Install git-lfs if not already installed
git lfs install

# Clone with LFS support for large files
git clone https://huggingface.co/datasets/upanzi/fidel-dataset
cd fidel-dataset

# Pull LFS files (zip archives)
git lfs pull

# Extract the archives
unzip train.zip
unzip test.zip
```
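Once the archives are extracted, the images and labels can be wrapped in a standard PyTorch `Dataset` for OCR training. This is an illustrative sketch, not part of the release: the `FidelOCRDataset` class is hypothetical, and it assumes PyTorch and Pillow are installed and that the labels files follow the structure described above.

```python
import json
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset


class FidelOCRDataset(Dataset):
    """Minimal (image, text) dataset over the extracted Fidel files.

    Assumes <root>/<split>/ holds the PNG images and
    <root>/<split>_labels.json is a list of records with
    'image_filename' and 'line_text' keys (see Labels Format above).
    """

    def __init__(self, root, split="train", transform=None):
        self.root = Path(root)
        self.split = split
        self.transform = transform
        with open(self.root / f"{split}_labels.json", encoding="utf-8") as f:
            self.records = json.load(f)

    def __len__(self):
        return len(self.records)

    def __getitem__(self, idx):
        rec = self.records[idx]
        # Load the line image in grayscale; OCR models rarely need color
        image = Image.open(self.root / self.split / rec["image_filename"]).convert("L")
        if self.transform is not None:
            image = self.transform(image)
        return image, rec["line_text"]


# Example usage:
# ds = FidelOCRDataset("fidel-dataset", split="train")
# image, text = ds[0]
# print(image.size, text)
```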
## Dataset Statistics

### Overall Statistics

- Total samples: 193,185
- Training samples: 169,499
- Test samples: 23,686

### By Source Type

- Handwritten: 40,265 samples
- Typed: 28,342 samples
- Synthetic: 61,530 samples
- HDD: 63,048 samples
### Image Characteristics

| Metric | Handwritten | Typed | Synthetic |
|---|---|---|---|
| Average image width (px) | 2,480 | 2,482 | 2,956 |
| Average image height (px) | 199 | 71 | 244 |
| Average aspect ratio | 14.0 | 19.5 | 11.6 |

### Text Characteristics

| Metric | Handwritten | Typed | Synthetic |
|---|---|---|---|
| Average text length (characters) | 62.0 | 95.2 | 74.7 |
| Average word count | 11.3 | 16.9 | 14.7 |
| Unique characters | 249 | 200 | 190 |
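As a rough illustration of how the text statistics above can be reproduced from a labels file, the sketch below tallies average text length, word count, and unique characters per source type. Field names follow the Labels Format section; the JSON layout (a list of records) is an assumption.

```python
import json
from collections import defaultdict
from pathlib import Path

with open(Path("fidel-dataset") / "train_labels.json", encoding="utf-8") as f:
    records = json.load(f)  # assumed: list of dicts with 'line_text' and 'type'

lengths = defaultdict(list)   # character counts per source type
words = defaultdict(list)     # word counts per source type
charsets = defaultdict(set)   # unique characters per source type

for rec in records:
    text, src = rec["line_text"], rec["type"]
    lengths[src].append(len(text))
    words[src].append(len(text.split()))
    charsets[src].update(text)

for src in sorted(lengths):
    print(
        f"{src}: avg {sum(lengths[src]) / len(lengths[src]):.1f} chars, "
        f"avg {sum(words[src]) / len(words[src]):.1f} words, "
        f"{len(charsets[src])} unique characters"
    )
```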
## License

This dataset is released under the MIT License.

## Acknowledgments

We thank all contributors who provided handwritten samples and the organizations that supported this data collection effort.