# 🦾 ACT Simulation Dataset (v2): Cube Sort
A lightweight, randomized imitation learning dataset optimized for Action Chunking Transformers.
## 📖 Overview
This dataset contains 228 episodes of synthetic robot manipulation data generated using MuJoCo. It was created to train Action Chunking Transformers (ACT) on consumer hardware (specifically an Apple M2 MacBook Air).
Unlike "sterile" datasets that follow perfect straight lines, this v2 dataset utilizes Domain Randomization to ensure the policy learns robust correction behaviors rather than simple memorization.
- Task: A 4-DOF Robotic Arm picking up a cube and moving it to a specific target zone.
- Hardware Virtualization: Mimics the data structure of a real Dynamixel-based robot arm.
- Goal: Enable "Zero-to-Hero" training of Visuomotor Policies without NVIDIA H100s.
## ⚙️ Dataset Specifications
| Feature | Details |
|---|---|
| Episodes | 228 (approx. 68,000 timesteps total) |
| Format | `.hdf5` |
| Simulation | MuJoCo (via dm_control / mujoco-py) |
| Robot | Custom 4-DOF Arm (Base, Shoulder, Elbow, Wrist) |
| Camera | 1x Static Front Camera (480x640, RGB) |
| Control Frequency | 50 Hz |
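At 50 Hz, roughly 68,000 timesteps across 228 episodes works out to about 300 steps, i.e. around 6 seconds of motion per episode.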
## 🧠 Data Structure (HDF5)
Each `.hdf5` file represents one full episode and contains:

- `observations/images/front`: `(T, 480, 640, 3)` - Raw RGB images.
- `observations/qpos`: `(T, 4)` - Joint positions (radians).
- `observations/qvel`: `(T, 4)` - Joint velocities.
- `action`: `(T, 4)` - Target joint positions for the next step.
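Once the episodes are extracted (see How to Use below), a quick way to sanity-check this layout is to open a single file with `h5py`. This is a minimal sketch; the glob pattern assumes the extraction path used in the next section, and the exact directory layout inside the archive may vary:

```python
import glob
import h5py

# Locate any extracted episode; search recursively in case the
# archive nests files under additional subdirectories.
path = sorted(glob.glob("data/sim_cube_sort/**/*.hdf5", recursive=True))[0]

with h5py.File(path, "r") as f:
    print(f["observations/images/front"].shape)  # (T, 480, 640, 3)
    print(f["observations/qpos"].shape)          # (T, 4)
    print(f["observations/qvel"].shape)          # (T, 4)
    print(f["action"].shape)                     # (T, 4)
```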
## 🛠️ How to Use
### 1. Download & Unzip
This dataset is stored as a single compressed ZIP file (`episodes-v2.zip`) to preserve the directory structure.
```python
from huggingface_hub import hf_hub_download
import zipfile
import os

# 1. Download the archive from the Hub
zip_path = hf_hub_download(
    repo_id="sanskxr02/act_sim_cube_sort",
    filename="episodes-v2.zip",
    repo_type="dataset"
)

# 2. Extract the episodes
extract_path = "data/sim_cube_sort"
os.makedirs(extract_path, exist_ok=True)
with zipfile.ZipFile(zip_path, 'r') as zip_ref:
    zip_ref.extractall(extract_path)

print(f"✅ Dataset ready at {extract_path}")
```
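### 2. Load Episodes for Training

Below is a minimal sketch of a PyTorch `Dataset` that samples (image, qpos, action-chunk) tuples in the style ACT expects. This is an illustrative assumption, not a loader shipped with the dataset: the `CubeSortDataset` class name, the `chunk_size=100` default, and the last-action padding are all hypothetical choices.

```python
import glob
import os

import h5py
import numpy as np
import torch
from torch.utils.data import Dataset

class CubeSortDataset(Dataset):
    """Samples a random timestep and the action chunk that follows it."""

    def __init__(self, root="data/sim_cube_sort", chunk_size=100):
        self.paths = sorted(
            glob.glob(os.path.join(root, "**", "*.hdf5"), recursive=True)
        )
        self.chunk_size = chunk_size

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        with h5py.File(self.paths[idx], "r") as f:
            T = f["action"].shape[0]
            t = np.random.randint(0, T)  # random start timestep
            image = f["observations/images/front"][t]  # (480, 640, 3), uint8
            qpos = f["observations/qpos"][t]           # (4,)
            actions = f["action"][t : t + self.chunk_size]  # (<=chunk_size, 4)
            # Pad with the last action if the chunk runs past episode end.
            pad = self.chunk_size - actions.shape[0]
            if pad > 0:
                actions = np.concatenate(
                    [actions, np.repeat(actions[-1:], pad, axis=0)]
                )
        image = torch.from_numpy(image).permute(2, 0, 1).float() / 255.0
        return (
            image,                                      # (3, 480, 640)
            torch.from_numpy(qpos).float(),             # (4,)
            torch.from_numpy(actions).float(),          # (chunk_size, 4)
        )
```

Padding the tail of an episode with the final action keeps every sample a fixed `(chunk_size, 4)` tensor, which simplifies batching; ACT implementations typically mask padded steps in the loss, which is omitted here for brevity.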