---
license: cc-by-nc-nd-4.0
tags:
- Autonomous Driving
- Computer Vision
---
# Dataset Tutorial

### The MARS dataset follows the same structure as the NuScenes Dataset.
Multitraversal: each location is saved as one NuScenes object, and each traversal is one scene.

Multiagent: the whole set is a NuScenes object, and each multi-agent encounter is one scene.

---
## Initialization

First, install `nuscenes-devkit` following the NuScenes repo tutorial, [Devkit setup section](https://github.com/nutonomy/nuscenes-devkit?tab=readme-ov-file#devkit-setup). The easiest way is to install it via pip:
```
pip install nuscenes-devkit
```
Multiagent example: loading data for the full set:
```
mars_multiagent = NuScenes(version='v1.0', dataroot='/MARS_multiagent', verbose=True)
```

---

## Scene
To see all scenes in one set (one location of the Multitraversal set, or the whole Multiagent set):
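Each scene record carries its name and sample count; the devkit exposes the scene table as the `scene` attribute of a loaded NuScenes object. A minimal sketch, using a mock object in place of a loaded set (field names follow the NuScenes scene schema; the values are hypothetical):

```python
def summarize_scenes(nusc):
    """List (name, nbr_samples) for every scene record in a loaded set."""
    return [(scene['name'], scene['nbr_samples']) for scene in nusc.scene]

# Mock standing in for a real NuScenes object:
class MockSet:
    scene = [
        {'name': 'scene-0001', 'nbr_samples': 40},
        {'name': 'scene-0002', 'nbr_samples': 39},
    ]

print(summarize_scenes(MockSet()))  # [('scene-0001', 40), ('scene-0002', 39)]
```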
Output:
- `intersection`: location index.
- `err_max`: maximum time difference (in milliseconds) between the camera images of the same frame in this scene.

---
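Timestamps in NuScenes-style records are stored in microseconds, so `err_max` corresponds to the worst-case spread across one frame's camera timestamps, converted to milliseconds. A self-contained sketch with hypothetical timestamps:

```python
def frame_err_ms(timestamps_us):
    """Max pairwise time difference (in ms) among one frame's camera timestamps."""
    return (max(timestamps_us) - min(timestamps_us)) / 1000.0

# Hypothetical microsecond timestamps of the cameras in a single frame:
print(frame_err_ms([1533151603512404, 1533151603537558, 1533151603547405]))  # 35.001
```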
## Sample
Get the first sample (frame) of one scene:
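With the devkit, the usual pattern is `nusc.get('sample', scene['first_sample_token'])`; sample records then form a doubly linked list through their `prev`/`next` tokens. A sketch of that traversal over mock records (all tokens hypothetical):

```python
# Mock sample table; real records come from nusc.get('sample', token).
samples = {
    'tok_a': {'token': 'tok_a', 'prev': '', 'next': 'tok_b', 'timestamp': 1},
    'tok_b': {'token': 'tok_b', 'prev': 'tok_a', 'next': '', 'timestamp': 2},
}
scene = {'first_sample_token': 'tok_a', 'last_sample_token': 'tok_b'}

def get(table, token):
    """Stand-in for NuScenes.get(); only the 'sample' table is mocked."""
    assert table == 'sample'
    return samples[token]

first_sample = get('sample', scene['first_sample_token'])

# Walk the scene frame by frame via the 'next' pointers:
tokens = []
cur = first_sample
while True:
    tokens.append(cur['token'])
    if not cur['next']:
        break
    cur = get('sample', cur['next'])
print(tokens)  # ['tok_a', 'tok_b']
```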
Output:
- `data`: dict of data tokens of this sample's sensor data.
- `anns`: empty, as we do not have annotation data at this moment.

---
## Sample Data
Our sensor names are different from NuScenes' sensor names, so it is important that you use the correct name when querying sensor data. Our sensor names are:
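As an illustration of the lookup, a sample's `data` dict maps each channel name to a `sample_data` token, which is resolved with `nusc.get('sample_data', ...)`. A sketch using the `CAM_FRONT_CENTER` channel over mock records (the token and filename are hypothetical):

```python
# Mock records; real ones come from the devkit's sample and sample_data tables.
sample = {'data': {'CAM_FRONT_CENTER': 'sd_tok_1'}}
sample_data_table = {
    'sd_tok_1': {
        'token': 'sd_tok_1',
        'channel': 'CAM_FRONT_CENTER',
        'filename': 'samples/CAM_FRONT_CENTER/0001.jpg',
    },
}

# Real pattern: cam = nusc.get('sample_data', sample['data']['CAM_FRONT_CENTER'])
cam = sample_data_table[sample['data']['CAM_FRONT_CENTER']]
print(cam['filename'])  # samples/CAM_FRONT_CENTER/0001.jpg
```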
CAM_FRONT_CENTER pose:

---

## LiDAR-Image projection
- Use NuScenes devkit's `render_pointcloud_in_image()` method.
- The first argument is a sample token.
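Internally, `render_pointcloud_in_image()` transforms the point cloud into the camera frame and projects it with the camera intrinsics; the projection step alone can be sketched as a pinhole model (the intrinsic values below are hypothetical, not MARS calibration):

```python
def project_points(points_cam, fx, fy, cx, cy):
    """Project 3D points already in the camera frame (z forward) to pixel coords."""
    pixels = []
    for x, y, z in points_cam:
        if z <= 0:          # points behind the image plane are not visible
            continue
        pixels.append((fx * x / z + cx, fy * y / z + cy))
    return pixels

# Hypothetical intrinsics and two LiDAR points in the camera frame:
print(project_points([(0.0, 0.0, 10.0), (1.0, -0.5, 5.0)],
                     fx=1000.0, fy=1000.0, cx=800.0, cy=450.0))
# [(800.0, 450.0), (1000.0, 350.0)]
```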