---
license: other
license_name: microagi-os-l1
license_link: LICENSE
task_categories:
- robotics
language:
- en
tags:
- dataset
- egocentric
- robotics
- rgbd
- depth
- manipulation
- mcap
- ros2
- computer_vision
pretty_name: MicroAGI00 Egocentric Dataset for Simple Household Manipulation
size_categories:
- 1M<n<10M
---
# MicroAGI00: MicroAGI Egocentric Dataset (2025)
> License: MicroAGI00 Open Use, No-Resale v1.0 (see `LICENSE`).
> No resale: You may not sell or paywall this dataset or derivative data. Trained models/outputs may be released under any terms.
## Overview
MicroAGI00 is a large-scale egocentric RGB+D dataset of human manipulation in household tasks from the BEHAVIOR Challenge (https://behavior.stanford.edu/challenge/index.html).
## Quick facts
* Modality: synchronized RGB + 16‑bit depth + IMU + annotations
* Resolution & rate (RGB): 1920×1080 @ 30 FPS (in MCAP)
* Depth: 16‑bit, losslessly compressed inside MCAP
* Scale: ≈1,000,000 synchronized RGB frames and ≈1,000,000 depth frames (≈1M frame pairs)
* Container: `.mcap` (all signals + annotations)
* Previews: per-sequence `.mp4` files, provided only for a sample of bags (annotated RGB; visualized native depth)
* Annotations: hand landmarks and short action text, present in only 5% of the dataset
## What’s included per sequence
* One large **MCAP** file containing:
* RGB frames (1080p/30 fps)
* 16‑bit depth stream (lossless compression)
* IMU data (as available)
* Embedded annotations (hand landmarks, action text), present for a subset of the data
* **MP4** preview videos:
* Annotated RGB (for quick review)
* Visualized native depth map (for quick review)
> Note: MP4 previews may be lower quality than MCAP due to compression and post‑processing. Research use should read from MCAP.
## Annotations
Annotations are generated by our in-house annotation pipeline.
### Hand annotations — 21 joints per hand, per frame (JSON schema example; only one landmark shown per hand for brevity)
```
{
  "frame_number": 9,
  "timestamp_seconds": 0.3,
  "resolution": { "width": 1920, "height": 1080 },
  "hands": [
    {
      "hand_index": 0,
      "landmarks": [
        { "id": 0, "name": "WRIST", "x": 0.7124036550521851, "y": 0.7347621917724609, "z": -1.444301744868426e-07, "visibility": 0.0 }
      ],
      "hand": "Left",
      "confidence": 0.9268525838851929
    },
    {
      "hand_index": 1,
      "landmarks": [
        { "id": 0, "name": "WRIST", "x": 0.4461262822151184, "y": 0.35183972120285034, "z": -1.2342320587777067e-07, "visibility": 0.0 }
      ],
      "hand": "Right",
      "confidence": 0.908446729183197
    }
  ],
  "frame_idx": 9,
  "exact_frame_timestamp": 1758122341583104000,
  "exact_frame_timestamp_sec": 1758122341.583104
}
```
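The landmark `x`/`y` values in the example lie in [0, 1], which suggests frame-normalized coordinates. Below is a minimal sketch of converting them to pixel coordinates under that assumption; the annotation filename is hypothetical (annotations ship inside the MCAP, so this assumes you have exported one record to JSON):

```
import json

# Load one per-frame hand annotation record (the filename here is hypothetical).
with open("hand_annotations_frame_0009.json") as f:
    record = json.load(f)

width = record["resolution"]["width"]    # 1920
height = record["resolution"]["height"]  # 1080

for hand in record["hands"]:
    label = hand["hand"]                 # "Left" or "Right"
    for lm in hand["landmarks"]:
        # x/y appear normalized to [0, 1]; scale them to pixel coordinates.
        px, py = lm["x"] * width, lm["y"] * height
        print(f'{label} {lm["name"]}: ({px:.1f}, {py:.1f}) px')
```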
### Text (action) annotations (per frame/window) — JSON schema example
```
{
"schema_version": "v1.0",
"action_text": "Right hand, holding a knife, is chopping cooked meat held by the left hand on the red cutting board.",
"confidence": 1.0,
"source": { "model": "MicroAGI, MAGI01" },
"exact_frame_timestamp": 1758122341583104000,
"exact_frame_timestamp_sec": 1758122341.583104
}
```
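Both annotation types carry `exact_frame_timestamp` in nanoseconds, which is the natural key for joining them to frames (or to each other). A minimal sketch of nearest-timestamp matching; the list of frame timestamps below is made up for illustration, except for the middle value, which comes from the examples above:

```
import bisect

def nearest_frame(frame_timestamps_ns, annotation_ts_ns):
    # frame_timestamps_ns must be sorted ascending (nanoseconds).
    i = bisect.bisect_left(frame_timestamps_ns, annotation_ts_ns)
    if i == 0:
        return 0
    if i == len(frame_timestamps_ns):
        return len(frame_timestamps_ns) - 1
    before, after = frame_timestamps_ns[i - 1], frame_timestamps_ns[i]
    return i - 1 if annotation_ts_ns - before <= after - annotation_ts_ns else i

# Illustrative frame timestamps (only the middle one is taken from the example).
frames_ns = [1758122341549771000, 1758122341583104000, 1758122341616437000]
annotation_ts = 1758122341583104000
print(nearest_frame(frames_ns, annotation_ts))  # -> 1
```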
## Data access and structure
* Each top-level sample folder contains three subfolders: the raw (heavy) MCAP dump, the annotated MCAP dump, and the MP4 previews
* All authoritative signals and annotations are inside the MCAP. Use the MP4s for quick visual QA only.
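For orientation, an illustrative layout; folder and file names here are placeholders, not the actual naming scheme:

```
sample_0001/
  mcap_raw/        # heavy, unannotated MCAP dump (authoritative signals)
  mcap_annotated/  # MCAP dump with embedded hand/action annotations
  mp4_previews/    # annotated RGB + visualized depth, for quick visual QA only
```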
## Getting started
* Inspect an MCAP: `mcap info your_sequence.mcap`
* Extract messages: `mcap cat --topics <topic> your_sequence.mcap > out.bin`
* Python readers: `pip install mcap` (see the MCAP Python docs) or any MCAP-compatible tooling. Typical topics include RGB, depth, IMU, and annotation channels.
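As a concrete starting point, here is a minimal sketch of reading one sequence with the `mcap` Python package; the image topic name is a placeholder, so list the real topics with `mcap info` first:

```
from mcap.reader import make_reader

with open("your_sequence.mcap", "rb") as f:
    reader = make_reader(f)

    # List the channels/topics present in this file.
    summary = reader.get_summary()
    if summary is not None:
        for channel in summary.channels.values():
            print(channel.topic, channel.message_encoding)

    # Iterate raw messages on one topic; decoding depends on the message
    # encoding (e.g. ROS 2 CDR for image topics). The topic name is a placeholder.
    for schema, channel, message in reader.iter_messages(topics=["/camera/color/image_raw"]):
        print(channel.topic, message.log_time, len(message.data))
        break
```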
## Intended uses
* Policy and skill learning (robotics/VLA)
* Action detection and segmentation
* Hand/pose estimation and grasp analysis
* Depth-based reconstruction, SLAM, scene understanding
* World-model pre-training and post-training
## Services and custom data
MicroAGI provides on-demand:
* Real‑to‑Sim pipelines
* ML‑enhanced 3D point clouds and SLAM reconstructions
* New data capture via our network of skilled tradespeople and factory workers (often below typical market cost)
* Enablement for your own workforce to wear our capture device and run the recordings through our processing pipeline
Typical lead times: under two weeks (up to four weeks for large jobs).
## How to order more
Email `[email protected]` with:
* Task description
* Desired hours or frame counts
* Proposed price
We will reply within one business day with lead time and final pricing.
Questions: `[email protected]`
## License
This dataset is released under the MicroAGI00 Open Use, No‑Resale License v1.0 (custom). See [`LICENSE`](./LICENSE). Redistribution must be free‑of‑charge under the same license. Required credit: "This work uses the MicroAGI00 dataset (MicroAGI, 2025)."
## Attribution reminder
Public uses of the Dataset or Derivative Data must include the credit line above in a reasonable location for the medium (papers, repos, product docs, dataset pages, demo descriptions). Attribution is appreciated but not required for Trained Models or Outputs.