---
pretty_name: GenDS
tags:
- diffusion
- image-restoration
- computer-vision
license: mit
language:
- en
task_categories:
- text-to-image
size_categories:
- 100K<n<1M
---
# [CVPR 2025] GenDeg: Diffusion-Based Degradation Synthesis for Generalizable All-In-One Image Restoration

## Dataset Card for the GenDS Dataset
The GenDS dataset is a large-scale dataset designed to boost the generalization of image restoration models. It combines existing image restoration datasets with diffusion-generated degraded samples produced by GenDeg.
## Usage

The dataset is fairly large (~360 GB); we recommend having at least 800 GB of free space for download and extraction. Downloading requires git-lfs.
### Download Instructions

```bash
# Install git-lfs
git lfs install

# Clone the dataset repository
git clone https://huggingface.co/datasets/Sudarshan2002/GenDS.git
cd GenDS

# Pull the dataset parts
git lfs pull
```
Extract the dataset:

```bash
# Combine the parts and extract
cat GenDS_part_* > GenDS.tar.gz
tar -xzvf GenDS.tar.gz
```
After extraction, rename the `GenDSFull` directory to `GenDS`.
## Dataset Structure

The dataset includes:
- `train_gends.json`: Metadata for the training data
- `val_gends.json`: Metadata for the validation data

Each JSON file contains a list of dictionaries with the following fields:
```json
{
  "image_path": "/relpath/to/image",
  "target_path": "/relpath/to/ground_truth",
  "dataset": "Source dataset name",
  "degradation": "Original degradation type",
  "category": "real | synthetic",
  "degradation_sub_type": "GenDeg-generated degradation type OR 'Original' (if from an existing dataset)",
  "split": "train | val",
  "mu": "mu value used in GenDeg",
  "sigma": "sigma value used in GenDeg",
  "random_sampled": "true | false",
  "sampled_dataset": "Dataset name if mu/sigma are not random"
}
```
### Example Usage

```python
import json

# Load the training metadata
with open("/path/to/train_gends.json") as f:
    train_data = json.load(f)

print(train_data[0])
```
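The metadata fields above can be used to slice the dataset, e.g. to separate diffusion-generated samples from those drawn from existing datasets, or to resolve the relative paths against the extracted dataset root. A minimal sketch (the inline `train_data` list and its field values are illustrative stand-ins for entries loaded from `train_gends.json`, and the root path is a placeholder):

```python
import os

# Illustrative stand-in for entries loaded from train_gends.json
# (structure follows the schema above; values are made up).
train_data = [
    {"image_path": "/rain/img1.png", "target_path": "/rain/gt1.png",
     "dataset": "Rain13K", "degradation": "rain",
     "degradation_sub_type": "Original", "split": "train"},
    {"image_path": "/haze/img2.png", "target_path": "/haze/gt2.png",
     "dataset": "GenDeg", "degradation": "haze",
     "degradation_sub_type": "haze_variant", "split": "train"},
]

# Entries with degradation_sub_type == "Original" come from existing
# datasets; everything else was synthesized by GenDeg.
generated = [e for e in train_data if e["degradation_sub_type"] != "Original"]
original = [e for e in train_data if e["degradation_sub_type"] == "Original"]

# Resolve the relative paths against the extracted dataset root
# (placeholder path; adjust to your local setup).
root = "/path/to/GenDS"
pairs = [(os.path.join(root, e["image_path"].lstrip("/")),
          os.path.join(root, e["target_path"].lstrip("/")))
         for e in train_data]

print(len(generated), len(original))
```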
## Citation

If you use GenDS in your work, please cite:

```bibtex
@article{rajagopalan2024gendeg,
  title={GenDeg: Diffusion-Based Degradation Synthesis for Generalizable All-in-One Image Restoration},
  author={Rajagopalan, Sudarshan and Nair, Nithin Gopalakrishnan and Paranjape, Jay N and Patel, Vishal M},
  journal={arXiv preprint arXiv:2411.17687},
  year={2024}
}
```