---
license: apache-2.0
size_categories:
- 1K<n<10K
---
<div align="center">

## 🚀 LR0.FM (ICLR-25 🎉)<br> [webpage](https://ucf-crcv.github.io/lr0.fm/) | [paper](https://arxiv.org/abs/2502.03950) | [video](https://recorder-v3.slideslive.com/#/share?share=99927&s=b52e48b7-e501-45c7-b7c9-b1d415e77f1e) | [results]() | [weights]()

</div>
Captions were randomly sampled from [Conceptual Captions](https://github.com/google-research-datasets/conceptual-captions), and the diffusion model [PIXART-α](https://github.com/PixArt-alpha/PixArt-alpha) was used to generate the synthetic images for them.
In total, 7,000 captions were randomly sampled.
```
import torch
from diffusers import PixArtAlphaPipeline

pipe = PixArtAlphaPipeline.from_pretrained("PixArt-alpha/PixArt-XL-2-1024-MS", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

line = line.strip()  # caption line read from either `caption_2k.txt` or `caption_5k.txt`
offset = 0
for fold in range(7):  # 7 batches of 10 images => 70 images per caption
    images = pipe(line, num_images_per_prompt=10).images
    for k, img in enumerate(images):
        img.save(f"{ROOT}/{offset + k}.png")  # ROOT: output directory for this caption
    offset += 10
```
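
The generated PNGs can then be loaded back for inspection or downstream use. The snippet below is a minimal sketch (not part of the original script), assuming the images were written under a local `output` folder (the `ROOT` directory above) and using the `imagefolder` loader from the 🤗 `datasets` library.

```
from datasets import load_dataset

# Load the locally generated images as an image dataset.
# "output" is a placeholder for the ROOT directory used above.
ds = load_dataset("imagefolder", data_dir="output", split="train")

print(len(ds))         # number of generated images
print(ds[0]["image"])  # first sample as a PIL.Image
```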
---
```bibtex
@inproceedings{pathak2025lrfm,
  title={{LR0.FM}: Low-Res Benchmark and Improving Robustness for Zero-Shot Classification in Foundation Models},
  author={Priyank Pathak and Shyam Marjit and Shruti Vyas and Yogesh S Rawat},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025},
  url={https://openreview.net/forum?id=AsFxRSLtqR}
}

@article{pathak2025lr0,
  title={{LR0.FM}: Low-Resolution Zero-shot Classification Benchmark For Foundation Models},
  author={Pathak, Priyank and Marjit, Shyam and Vyas, Shruti and Rawat, Yogesh S},
  journal={arXiv preprint arXiv:2502.03950},
  year={2025}
}
```