# Food Portion Benchmark (FPB) Dataset
The Food Portion Benchmark (FPB) is a comprehensive dataset and benchmark suite for multi-task food scene understanding, combining food detection and portion size (weight) estimation. It was introduced to support research in dietary analysis, nutrition tracking, and food computing. The dataset is built with high-quality annotations and evaluated using an extended YOLOv12-based multi-task model.
## 📦 Dataset Overview
- Total images: 14,083
- Food classes: 138
- Annotations: Bounding boxes + Ground-truth weights (in grams)
- Image angles: Top-down and four side views
- Cameras: Intel RealSense D455 + smartphones
- Split: Train (9,521) / Validation (2,365) / Test (2,197)
- Collection setting: Controlled lab environment using local Central Asian cuisine
Each food item was weighed and categorized into small, medium, or large portions. Images were captured from different angles to enable robust volume and weight estimation.
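The small/medium/large bucketing could be reproduced with weight thresholds; the sketch below is purely illustrative, and its 150 g / 300 g cutoffs are assumptions, not the boundaries actually used when FPB was annotated.

```python
def portion_category(weight_g: float,
                     small_max: float = 150.0,
                     medium_max: float = 300.0) -> str:
    """Bucket a measured weight (grams) into a portion-size category.

    The 150 g / 300 g cutoffs are illustrative placeholders, not the
    thresholds used for the FPB annotations.
    """
    if weight_g <= small_max:
        return "small"
    if weight_g <= medium_max:
        return "medium"
    return "large"

print(portion_category(120.0))  # small
print(portion_category(420.0))  # large
```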

## 📁 Dataset Structure and Format
The FPB dataset follows the YOLO annotation format, with a custom 6th column for food weight (in grams).
### 🧾 Label Format (YOLO-style with weight)
- `class_id`: ID of the food class (0–137)
- `x_center`, `y_center`, `width`, `height`: bounding box coordinates (normalized to [0, 1])
- `weight`: ground-truth weight in grams (used for regression)

Each `.txt` file matches the name of its corresponding image file.
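A minimal parser for this extended label format could look like the sketch below; the field order follows the description above, and the example label line is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class FoodLabel:
    class_id: int      # food class, 0-137
    x_center: float    # normalized bbox center x, in [0, 1]
    y_center: float    # normalized bbox center y, in [0, 1]
    width: float       # normalized bbox width
    height: float      # normalized bbox height
    weight: float      # ground-truth weight in grams

def parse_label_line(line: str) -> FoodLabel:
    """Parse one line of a YOLO-style label file with a sixth weight column."""
    cls, xc, yc, w, h, wt = line.split()
    return FoodLabel(int(cls), float(xc), float(yc),
                     float(w), float(h), float(wt))

# Invented example line: class 12, centered box, 250 g portion.
label = parse_label_line("12 0.5 0.5 0.3 0.4 250.0")
print(label.class_id, label.weight)  # 12 250.0
```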
## 📥 Dataset Access & Benchmarking
- 📦 Download Dataset: Hugging Face link
- 📊 Evaluate Your Model: Submit predictions on the test set using the automated score-checker
Test labels are hidden to ensure fair evaluation.
## 🧠 Model Overview
The baseline model is a YOLOv12 multitask variant, extended with a regression head for predicting food weight (see Figure below). It was designed to be agnostic to missing labels, making it compatible with datasets that do not have weight annotations.
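One common way to make a regression head "agnostic to missing labels" is to mask the weight loss wherever no ground-truth weight exists, so detection-only samples contribute nothing to the regression term. The sketch below uses plain Python with `None` as the missing-weight sentinel; the actual model's masking scheme may differ.

```python
def masked_weight_mae(predicted, target):
    """Mean absolute error over items that have a ground-truth weight.

    `target` entries of None mark detections without weight annotations;
    those items are excluded, so the loss stays well-defined on datasets
    that lack weight labels entirely (it returns 0.0 in that case).
    """
    errors = [abs(p - t) for p, t in zip(predicted, target) if t is not None]
    return sum(errors) / len(errors) if errors else 0.0

# Two labeled items (errors 10 g and 20 g) and one unlabeled item.
print(masked_weight_mae([110.0, 220.0, 50.0], [100.0, 200.0, None]))  # 15.0
```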

GitHub Source Code: Multitask-Food-Portion-Estimation
Best Model (YOLOv12-M @ 640×640):
- Detection: mAP50 = 0.974, mAP50-95 = 0.948
- Weight Estimation: MAE = 90.95g
## 🧪 Performance Tables
Table 1: Performance of YOLOv12M at different resolutions
Table 2: YOLOv8 vs YOLOv12 on FPB
## 🏋️ Training
Train the multi-task YOLOv12 model using `train.py`.
## 🔍 Inference
Download the trained best models from the drive link and run inference on test images using `test.py`:
- Provide the path to your images folder or image file
- Replace `model` with the path to the downloaded model
- Set `show=True` to save annotated images with bounding boxes and predicted weights
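Annotated outputs need pixel coordinates, so converting the normalized YOLO boxes back to pixels is a small self-contained step. The helper below is illustrative and not part of `test.py`.

```python
def yolo_to_pixel_box(xc, yc, w, h, img_w, img_h):
    """Convert a normalized YOLO box (center/size in [0, 1]) to pixel
    corner coordinates (x1, y1, x2, y2) suitable for drawing."""
    x1 = (xc - w / 2) * img_w
    y1 = (yc - h / 2) * img_h
    x2 = (xc + w / 2) * img_w
    y2 = (yc + h / 2) * img_h
    return (round(x1), round(y1), round(x2), round(y2))

# A centered 0.3 x 0.4 box on a 640 x 640 image.
print(yolo_to_pixel_box(0.5, 0.5, 0.3, 0.4, 640, 640))  # (224, 192, 416, 448)
```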
## 📜 Citation

If you use our work in your research, please cite this paper:
```bibtex
@article{Sanatbyek_2025,
  title={A multitask deep learning model for food scene recognition and portion estimation—the Food Portion Benchmark (FPB) dataset},
  volume={13},
  DOI={10.1109/access.2025.3603287},
  journal={IEEE Access},
  author={Sanatbyek, Aibota and Rakhimzhanova, Tomiris and Nurmanova, Bibinur and Omarova, Zhuldyz and Rakhmankulova, Aidana and Orazbayev, Rustem and Varol, Huseyin Atakan and Chan, Mei Yen},
  year={2025},
  pages={152033--152045}
}
```
## References

[1] Tian, Y., Ye, Q., & Doermann, D. (2025). YOLOv12: Attention-centric real-time object detectors. arXiv. https://arxiv.org/abs/2502.12524
[2] Ultralytics YOLO: https://github.com/ultralytics/ultralytics