---
license: apache-2.0
size_categories:
  - 1K<n<10K
---

# 🚀 LR0.FM (ICLR-25 🎉)

webpage | paper | video | results | weights

Captions are randomly sampled from Conceptual Captions, and the diffusion model PixArt-α generates a synthetic image dataset from them; 7,000 captions were sampled in total.

```python
import torch
from diffusers import PixArtAlphaPipeline

pipe = PixArtAlphaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

line = line.strip()  # caption line from either `caption_2k.txt` or `caption_5k.txt`

offset = 0
for fold in range(7):  # 7 folds x 10 images = 70 images per caption
    images = pipe(line, num_images_per_prompt=10).images
    for k, img in enumerate(images):
        img.save(f"{ROOT}/{offset + k}.png")
    offset += 10
```
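Since the generation loop above is GPU-heavy, it can help to plan the output file layout up front. A minimal sketch of the index bookkeeping (7 folds × 10 images per caption, matching the loop above); the per-caption subfolder named by line number is an illustrative assumption, not part of the original script:

```python
def output_paths(root: str, caption_idx: int, folds: int = 7, per_fold: int = 10):
    """Return the planned file paths for one caption (folds x per_fold images).

    Mirrors the offset logic of the generation loop: each fold saves
    `per_fold` images numbered consecutively from the running offset.
    """
    paths = []
    offset = 0
    for _ in range(folds):
        for k in range(per_fold):
            # assumed layout: one subfolder per caption line index
            paths.append(f"{root}/{caption_idx}/{offset + k}.png")
        offset += per_fold
    return paths

paths = output_paths("out", caption_idx=0)  # 70 paths: out/0/0.png ... out/0/69.png
```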

```bibtex
@inproceedings{pathak2025lrfm,
    title={{LR0.FM}: Low-Res Benchmark and Improving Robustness for Zero-Shot Classification in Foundation Models},
    author={Priyank Pathak and Shyam Marjit and Shruti Vyas and Yogesh S Rawat},
    booktitle={The Thirteenth International Conference on Learning Representations},
    year={2025},
    url={https://openreview.net/forum?id=AsFxRSLtqR}
}

@article{pathak2025lr0,
    title={{LR0.FM}: Low-Resolution Zero-shot Classification Benchmark For Foundation Models},
    author={Pathak, Priyank and Marjit, Shyam and Vyas, Shruti and Rawat, Yogesh S},
    journal={arXiv preprint arXiv:2502.03950},
    year={2025}
}
```

