
Food Portion Benchmark (FPB) Dataset

The Food Portion Benchmark (FPB) is a comprehensive dataset and benchmark suite for multi-task food scene understanding, combining food detection and portion size (weight) estimation. It was introduced to support research in dietary analysis, nutrition tracking, and food computing. The dataset is built with high-quality annotations and evaluated using an extended YOLOv12-based multi-task model.


📦 Dataset Overview

  • Total images: 14,083
  • Food classes: 138
  • Annotations: Bounding boxes + Ground-truth weights (in grams)
  • Image angles: Top-down and four side views
  • Cameras: Intel RealSense D455 + smartphones
  • Split: Train (9,521) / Validation (2,365) / Test (2,197)
  • Collection setting: Controlled lab environment using local Central Asian cuisine

Each food item was weighed and categorized into a small, medium, or large portion. Images were captured from different angles to enable robust volume and weight estimation. (Figure: portion examples.)

πŸ“ Dataset Structure and Format

The FPB dataset follows the YOLO annotation format, with a custom 6th column for food weight (in grams).

🧾 Label Format (YOLO-style with weight)

  • class_id: ID of the food class (0–137)
  • x_center, y_center, width, height: Bounding box coordinates (normalized to [0, 1])
  • weight: Ground truth weight in grams (used for regression)

Each .txt file matches the name of its corresponding image file.
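As a minimal sketch, one annotation line can be parsed as six whitespace-separated fields as described above (the helper name and return structure are illustrative, not part of the official tooling):

```python
def parse_fpb_label(line: str) -> dict:
    """Parse one FPB label line: class_id, normalized bbox, weight in grams."""
    fields = line.split()
    if len(fields) != 6:
        raise ValueError(f"Expected 6 fields, got {len(fields)}")
    class_id = int(fields[0])
    x_center, y_center, width, height, weight = map(float, fields[1:])
    return {
        "class_id": class_id,                        # 0-137
        "bbox": (x_center, y_center, width, height),  # normalized to [0, 1]
        "weight_g": weight,                           # regression target
    }

# Example line: class 12, a centered box, 250 g portion (values illustrative)
label = parse_fpb_label("12 0.500 0.450 0.300 0.200 250.0")
```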


📥 Dataset Access & Benchmarking

Test labels are hidden to ensure fair evaluation.


🧠 Model Overview

The baseline model is a YOLOv12 multitask variant, extended with a regression head for predicting food weight (see Figure below). It was designed to be agnostic to missing labels, making it compatible with datasets that do not have weight annotations.

GitHub source code: Multitask-Food-Portion-Estimation

Best Model (YOLOv12-M @ 640Γ—640):

  • Detection: mAP50 = 0.974, mAP50-95 = 0.948
  • Weight Estimation: MAE = 90.95g
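For reference, the weight-estimation MAE reported above is the mean absolute difference between predicted and ground-truth weights in grams. A minimal sketch (the sample values are illustrative, not taken from the dataset):

```python
def weight_mae(pred_g, true_g):
    """Mean absolute error between predicted and ground-truth weights (grams)."""
    assert len(pred_g) == len(true_g) and len(pred_g) > 0
    return sum(abs(p - t) for p, t in zip(pred_g, true_g)) / len(pred_g)

# Illustrative predictions vs. ground truth for three detections
mae = weight_mae([120.0, 310.0, 95.0], [100.0, 350.0, 90.0])
```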

🧪 Performance Tables

Table 1: Performance of YOLOv12M at different resolutions


Table 2: YOLOv8 vs YOLOv12 on FPB


πŸ‹οΈβ€β™‚οΈ Training

Train the multi-task YOLOv12 model using train.py

πŸ” Inference

Download the trained best models from the drive link and run inference on test images using test.py

  • Provide path to your images folder or image file
  • Replace model with the path to the downloaded model
  • Set show=True to save annotated images with bounding boxes and predicted weights
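When saving annotated images, predicted boxes in normalized YOLO format have to be mapped back to pixel coordinates before drawing. A sketch of that conversion (the function name is illustrative; this is not code from test.py):

```python
def yolo_to_pixels(bbox, img_w, img_h):
    """Convert a normalized (x_center, y_center, w, h) box to pixel corners."""
    xc, yc, w, h = bbox
    x1 = int((xc - w / 2) * img_w)
    y1 = int((yc - h / 2) * img_h)
    x2 = int((xc + w / 2) * img_w)
    y2 = int((yc + h / 2) * img_h)
    return x1, y1, x2, y2

# A centered box covering half the width/height of a 640x480 image
corners = yolo_to_pixels((0.5, 0.5, 0.5, 0.5), 640, 480)
```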

📚 If you use our work in your research, please cite this paper:

@article{Sanatbyek_2025,
  title={A multitask deep learning model for food scene recognition and portion estimation—the Food Portion Benchmark (FPB) dataset},
  volume={13},
  DOI={10.1109/access.2025.3603287},
  journal={IEEE Access},
  author={Sanatbyek, Aibota and Rakhimzhanova, Tomiris and Nurmanova, Bibinur and Omarova, Zhuldyz and Rakhmankulova, Aidana and Orazbayev, Rustem and Varol, Huseyin Atakan and Chan, Mei Yen},
  year={2025},
  pages={152033--152045}
}

References

[1] Tian, Y., Ye, Q., & Doermann, D. (2025). YOLOv12: Attention-centric real-time object detectors. arXiv. https://arxiv.org/abs/2502.12524

[2] Ultralytics YOLO. https://github.com/ultralytics/ultralytics
