Initial commit 310bac4 (0 parents): 109 changed files with 15,866 additions and 0 deletions.
# Sphinx build info version 1
# This file records the configuration used when building these files. When it is not found, a full rebuild will be done.
config: 45fe1b4e61cb66370d3bd587e5d62e82
tags: 645f666f9bcd5a90fca523b33c5a78b7
Binary file not shown.
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Getting started\n",
    "\n",
    "This notebook showcases the main functionalities of tri3d.\n",
    "It can be run with any of the supported datasets."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "pycharm": {
     "is_executing": false
    }
   },
   "outputs": [],
   "source": [
    "%matplotlib widget\n",
    "\n",
    "import k3d\n",
    "import matplotlib.pyplot as plt\n",
    "import numpy as np\n",
    "\n",
    "from tri3d.datasets import KITTIObject, NuScenes, Waymo, SemanticKITTI, ZODFrames\n",
    "from tri3d.plot import (\n",
    "    plot_bbox_cam,\n",
    "    plot_annotations_bev,\n",
    "    plot_bboxes_3d,\n",
    "    to_k3d_colors,\n",
    "    gen_discrete_cmap,\n",
    ")\n",
    "from tri3d.geometry import test_box_in_frame"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "pycharm": {
     "is_executing": false
    },
    "tags": []
   },
   "outputs": [],
   "source": [
    "# dataset = KITTIObject(\"/home/ngranger/Datasets/kitti/training\")\n",
    "# camera, image, lidar = \"cam\", \"img2\", \"velo\"\n",
    "\n",
    "dataset = NuScenes(\"/home/ngranger/Datasets/NuScenes\", \"v1.0-mini\")\n",
    "camera, image, lidar = \"CAM_FRONT\", \"IMG_FRONT\", \"LIDAR_TOP\"\n",
    "\n",
    "# dataset = Waymo(\"/home/ngranger/Datasets/waymo\", split=\"training\")\n",
    "# camera, image, lidar = \"CAM_FRONT\", \"IMG_FRONT\", \"LIDAR_TOP\"\n",
    "\n",
    "# dataset = SemanticKITTI(\"/home/ngranger/Datasets/semantickitti/\", split=\"train\")\n",
    "# camera, image, lidar = None, None, \"LIDAR_TOP\"\n",
    "\n",
    "# dataset = ZODFrames(\n",
    "#     \"datasets/zod_frames_mini\",\n",
    "#     \"datasets/zod_frames_mini/trainval-frames-mini.json\",\n",
    "#     \"train\",\n",
    "# )\n",
    "# camera, image, lidar = \"front\", \"img_front\", \"velodyne\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Pose and calibration\n",
    "\n",
    "In order to plot sensor poses, use `Dataset.cam_sensors` and `Dataset.pcl_sensors` to get the sensor names. Get the list of available frames for a sensor with `Dataset.frames()`. Then use `Dataset.alignment` to get the transformation from that sensor's coordinate system to the reference axes (in this example, the lidar at frame 0)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plot = k3d.plot(name=\"pcl\", camera_mode=\"orbit\")\n",
    "\n",
    "p0 = dataset.poses(seq=0, sensor=dataset.pcl_sensors[0])[0]\n",
    "\n",
    "# iterate over sensors\n",
    "for sensor in dataset.cam_sensors + dataset.pcl_sensors:\n",
    "    if len(dataset.timestamps(seq=0, sensor=sensor)) == 0:\n",
    "        continue\n",
    "\n",
    "    poses = p0.inv() @ dataset.poses(seq=0, sensor=sensor)\n",
    "\n",
    "    # plot axes along the trajectory\n",
    "    origins = poses.apply(np.zeros([3]))\n",
    "    for edge, color in zip(np.eye(3) * 0.2, [0xff0000, 0x00ff00, 0x0000ff]):\n",
    "        vectors = poses.apply(edge) - origins\n",
    "        plot += k3d.vectors(\n",
    "            origins.astype(np.float32),\n",
    "            vectors.astype(np.float32),\n",
    "            color=color,\n",
    "            head_size=0.3,\n",
    "        )\n",
    "\n",
    "plot"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Annotations in frame\n",
    "\n",
    "Requesting annotations in image coordinates will automatically interpolate and project them to the timestamps and coordinates of the camera.\n",
    "\n",
    "Two helpers are used to reduce the boilerplate: `tri3d.geometry.test_box_in_frame` and `tri3d.plot.plot_bbox_cam`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "pycharm": {
     "is_executing": false
    },
    "tags": []
   },
   "outputs": [],
   "source": [
    "sample_image = dataset.image(seq=0, frame=0, sensor=image)\n",
    "img_size = sample_image.size\n",
    "sample_annotations = dataset.boxes(seq=0, frame=0, coords=image)\n",
    "\n",
    "cmap = gen_discrete_cmap(len(dataset.det_labels))\n",
    "\n",
    "plt.figure()\n",
    "plt.xlim((0, img_size[0]))\n",
    "plt.ylim((img_size[1], 0))\n",
    "\n",
    "plt.imshow(sample_image)\n",
    "\n",
    "for i, ann in enumerate(sample_annotations):\n",
    "    # skip boxes where no vertex is in the frame\n",
    "    if test_box_in_frame(ann.transform, ann.size, img_size):\n",
    "        obj_id = str(getattr(ann, \"instance\", i))[:4]\n",
    "        hdg = ann.heading / np.pi * 180\n",
    "        color = cmap(dataset.det_labels.index(ann.label))\n",
    "        u, v, z = ann.center\n",
    "        print(f\"{obj_id:4s}: {ann.label:20s} at {z:3.0f}m heading {hdg:+.0f}°\")\n",
    "\n",
    "        # plot the bounding box\n",
    "        plot_bbox_cam(ann.transform, ann.size, img_size, c=color)\n",
    "\n",
    "        # plot the uid\n",
    "        plt.text(\n",
    "            np.clip(u, 0, img_size[0]),\n",
    "            np.clip(v, 0, img_size[1]),\n",
    "            obj_id,\n",
    "            horizontalalignment=\"center\",\n",
    "            verticalalignment=\"center\",\n",
    "            bbox=dict(facecolor=\"white\", linewidth=0, alpha=0.7),\n",
    "        )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Lidar points over images\n",
    "\n",
    "Requesting points in image coordinates will automatically compensate for ego car movement between the two sensor acquisitions, and for the extrinsic and intrinsic calibrations."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "pycharm": {
     "is_executing": false
    }
   },
   "outputs": [],
   "source": [
    "points = dataset.points(seq=0, frame=0, sensor=lidar, coords=image)\n",
    "\n",
    "# discard extra dims (intensity, etc.)\n",
    "uvz = points[:, :3]\n",
    "\n",
    "# only keep visible points\n",
    "in_frame = (\n",
    "    np.all(uvz > 0, axis=1)  # in front of the camera\n",
    "    & (uvz[:, 0] < sample_image.size[0])\n",
    "    & (uvz[:, 1] < sample_image.size[1])\n",
    ")\n",
    "uvz = uvz[in_frame]\n",
    "\n",
    "plt.figure()\n",
    "plt.xlim((0, img_size[0]))\n",
    "plt.ylim((img_size[1], 0))\n",
    "\n",
    "plt.imshow(sample_image)\n",
    "plt.scatter(uvz[:, 0], uvz[:, 1], c=uvz[:, 2], clim=(5, 40), s=0.5, alpha=1)\n",
    "\n",
    "plt.xlim(0, sample_image.size[0])\n",
    "plt.ylim(sample_image.size[1], 0)\n",
    "plt.gca().set_aspect(\"equal\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Lidar points and boxes"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "sample_pcl = dataset.points(seq=0, frame=0, coords=lidar, sensor=lidar)\n",
    "sample_annotations = dataset.boxes(seq=0, frame=0, coords=lidar)\n",
    "\n",
    "plt.figure()\n",
    "\n",
    "plt.scatter(\n",
    "    sample_pcl[:, 0], sample_pcl[:, 1], s=5, c=\"k\", alpha=0.6, edgecolors=\"none\"\n",
    ")\n",
    "\n",
    "cmap = gen_discrete_cmap(len(dataset.det_labels))\n",
    "\n",
    "plot_annotations_bev(\n",
    "    [a.center for a in sample_annotations],\n",
    "    [a.size for a in sample_annotations],\n",
    "    [a.heading for a in sample_annotations],\n",
    "    ax=plt.gca(),\n",
    "    c=cmap([dataset.det_labels.index(a.label) for a in sample_annotations]),\n",
    ")\n",
    "\n",
    "plt.xlim(-25, 25)\n",
    "plt.ylim(-25, 25)\n",
    "plt.gca().set_aspect(\"equal\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "xyz = dataset.points(seq=0, frame=0, coords=lidar, sensor=lidar)[:, :3]\n",
    "\n",
    "sample_annotations = dataset.boxes(seq=0, frame=0, coords=lidar)\n",
    "sample_annotations = [a for a in sample_annotations if a.label != \"DontCare\"]\n",
    "\n",
    "c = to_k3d_colors(plt.get_cmap(\"viridis\")((xyz[:, 2] + 2) / 4))\n",
    "\n",
    "plot = k3d.plot(name=\"pcl\", camera_mode=\"orbit\")\n",
    "\n",
    "plot += k3d.points(positions=xyz, point_size=1, shader=\"dot\", colors=c)\n",
    "\n",
    "plot_bboxes_3d(\n",
    "    plot,\n",
    "    [a.transform for a in sample_annotations],\n",
    "    [a.size for a in sample_annotations],\n",
    "    c=0xFF0000,\n",
    ")\n",
    "\n",
    "plot"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# if isinstance(dataset, Waymo):  # waymo does not have segmentation for all frames\n",
    "#     seq, frame = dataset.frames(0, sensor=\"SEG_LIDAR_TOP\")[0]\n",
    "# else:\n",
    "#     seq, frame = 0, 0\n",
    "\n",
    "# xyz = dataset.points(seq, frame, sensor=lidar)[:, :3]\n",
    "# semantic = dataset.semantic(seq, frame, sensor=lidar)\n",
    "\n",
    "# cmap = gen_discrete_cmap(len(dataset.sem_labels))\n",
    "# c = to_k3d_colors(cmap(semantic)) * (semantic >= 0)\n",
    "\n",
    "# plot = k3d.plot(name=\"pcl\", camera_mode=\"orbit\")\n",
    "\n",
    "# plot += k3d.points(positions=xyz, point_size=1, shader=\"dot\", colors=c)\n",
    "\n",
    "# plot"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": ".venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
Data preparation
================

Tri3D can load most datasets without preprocessing, except for the ones
mentioned on this page.

Waymo
-----

Tri3D supports the Waymo dataset in the parquet file format.
However, its files must be re-encoded with better chunking and sorting parameters
to allow faster data loading.

To optimize the sequences in a folder, a script is provided that can be used like so:

.. code-block:: shell

   python -m tri3d.datasets.optimize_waymo \
       --input waymo_open_dataset_v_2_0_1/training \
       --output optimized_waymo/training \
       --workers 8

The resulting files in the output directory contain the same data, but sorted, chunked
and compressed with better settings.
.. currentmodule:: tri3d.datasets

tri3d.datasets
==============

Datasets
--------

.. autosummary::
   :toctree: generated/
   :template: base.rst

   Box
   KITTIObject
   NuScenes
   Waymo
   ZODFrames