UI Screenshots and View Hierarchies (6 GB)

This dataset is extracted from the Google Drive link provided on the official Rico dataset website. It contains only the screenshot images from the first download link, which cuts extraction time from roughly 1 hour for the full archive to under 5 minutes. All of the data is placed in a single train split, with the expectation that downstream users will split it as they see fit (a sketch follows the loading snippet below).

To load the dataset (~2 min):

from datasets import load_dataset

# num_proc=8 downloads and prepares the dataset with 8 parallel processes
dataset = load_dataset("ThomasDh-C/RicoScreenshots", num_proc=8)
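
Since everything ships in a single train split, one way to carve out held-out data is Dataset.train_test_split from the datasets library. A minimal sketch; the 90/10 ratio and seed are arbitrary example values, not anything prescribed by this dataset:

# Carve a 10% test set out of the single train split
# (test_size and seed are arbitrary example values)
splits = dataset["train"].train_test_split(test_size=0.1, seed=42)
train_ds, test_ds = splits["train"], splits["test"]
print(len(train_ds), len(test_ds))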

To extract the images from the Arrow files cached by this command (~1 min):

import pyarrow as pa
import os
from tqdm import tqdm

# Get the list of arrow files from the default Hugging Face datasets cache
# (adjust root_folder if your cache lives elsewhere, e.g. under a custom HF_HOME)
root_folder = "/root/.cache/huggingface/datasets/ThomasDh-C___rico_screenshots/default/0.0.0"
uuid_folder = [item for item in os.listdir(root_folder) if os.path.isdir(f"{root_folder}/{item}")][0]
arrow_files = [f_name for f_name in os.listdir(f"{root_folder}/{uuid_folder}") if f_name.endswith('.arrow')]
# Sort the shards by index, i.e. the third dash-separated field of the file name
sorted_arrow_files = sorted(arrow_files, key=lambda x: int(x.split('-')[2]))
full_sorted_arrow_files = [f"{root_folder}/{uuid_folder}/{f_name}" for f_name in sorted_arrow_files]

# Extract bytes from each arrow file's image structs ({bytes, path})
os.makedirs("extracted_images", exist_ok=True)
for arrow_file in tqdm(full_sorted_arrow_files):
    with pa.memory_map(arrow_file, "r") as source:
        table = pa.ipc.open_stream(source).read_all()
        for img_struct in table["image"]:
            img_path = img_struct["path"].as_py()
            with open(f"extracted_images/{img_path}", "wb") as f:
                f.write(img_struct["bytes"].as_buffer())
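
To sanity-check the extraction, you can count the written files and re-open one with Pillow. A minimal sketch, assuming Pillow is installed; it simply inspects whichever file is listed first:

import os
from PIL import Image

# Count the extracted files and confirm one image decodes correctly
files = os.listdir("extracted_images")
print(f"{len(files)} images extracted")
img = Image.open(f"extracted_images/{files[0]}")
print(img.size, img.mode)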