
Commit

doc
nlgranger committed Oct 22, 2024
0 parents commit ead16b3
Showing 103 changed files with 14,849 additions and 0 deletions.
4 changes: 4 additions & 0 deletions .buildinfo
@@ -0,0 +1,4 @@
# Sphinx build info version 1
# This file records the configuration used when building these files. When it is not found, a full rebuild will be done.
config: 45fe1b4e61cb66370d3bd587e5d62e82
tags: 645f666f9bcd5a90fca523b33c5a78b7
Binary file added .doctrees/data_preparation.doctree
Binary file added .doctrees/datasets.doctree
Binary file added .doctrees/environment.pickle
Binary file added .doctrees/example.doctree
Binary file added .doctrees/generated/tri3d.datasets.Box.doctree
Binary file added .doctrees/generated/tri3d.datasets.Waymo.doctree
Binary file added .doctrees/geometry.doctree
Binary file added .doctrees/index.doctree
Binary file added .doctrees/installation.doctree
324 changes: 324 additions & 0 deletions .doctrees/nbsphinx/example.ipynb
@@ -0,0 +1,324 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Getting started\n",
"\n",
"This notebook showcases the main functionalities of tri3d.\n",
"It can be run for any of the supported dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"is_executing": false
}
},
"outputs": [],
"source": [
"%matplotlib inline\n",
"%load_ext autoreload\n",
"%autoreload 2\n",
"\n",
"import k3d\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"\n",
"from tri3d.datasets import KITTIObject, NuScenes, Waymo, ZODFrames\n",
"from tri3d.plot import (\n",
" plot_bbox_cam,\n",
" plot_annotations_bev,\n",
" plot_bboxes_3d,\n",
" to_k3d_colors,\n",
" gen_discrete_cmap,\n",
")\n",
"from tri3d.geometry import test_box_in_frame, Rotation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"is_executing": false
},
"tags": []
},
"outputs": [],
"source": [
"# dataset = KITTIObject(\"datasets/kittiobject\")\n",
"# camera, image, lidar = \"cam\", \"img2\", \"velo\"\n",
"\n",
"# dataset = NuScenes(\"datasets/nuscenes\", \"v1.0-mini\")\n",
"# camera, image, lidar = \"CAM_FRONT\", \"IMG_FRONT\", \"LIDAR_TOP\"\n",
"\n",
"dataset = Waymo(\"../../datasets/waymo\", split=\"training\")\n",
"camera, image, lidar = \"CAM_FRONT\", \"IMG_FRONT\", \"LIDAR_TOP\"\n",
"\n",
"# dataset = ZODFrames(\n",
"# \"datasets/zodframes\",\n",
"# \"trainval-frames-mini.json\",\n",
"# \"train\",\n",
"# )\n",
"# camera, image, lidar = \"front\", \"img_front\", \"velodyne\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Pose and calibration\n",
"\n",
"In order to plot sensor poses, use `Dataset.cam_sensors` and `Dataset.pcl_sensors` to get the sensor names. Get the list of available frames for that sensor with `Dataset.frames()`. Then use the `Dataset.alignment` to get the transformation from that sensor coordinate system to reference axes (in this example we use the lidar at frame 0)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"plot = k3d.plot(name=\"pcl\", camera_mode=\"orbit\", camera_auto_fit=False)\n",
"\n",
"p0 = dataset.poses(seq=0, sensor=dataset.pcl_sensors[0])[0]\n",
"\n",
"# iterate over sensors\n",
"for sensor in dataset.pcl_sensors + dataset.cam_sensors:\n",
" if len(dataset.timestamps(seq=0, sensor=sensor)) == 0:\n",
" continue\n",
"\n",
" poses = p0.inv() @ dataset.poses(seq=0, sensor=sensor)\n",
"\n",
" # plot axes along the trajectory\n",
" origins = poses.apply(np.zeros([3]))\n",
" for edge, color in zip(np.eye(3) * 0.2, [0xFF0000, 0x00FF00, 0x0000FF]):\n",
" vectors = poses.apply(edge) - origins\n",
" plot += k3d.vectors(\n",
" origins.astype(np.float32),\n",
" vectors.astype(np.float32),\n",
" color=color,\n",
" head_size=0.3,\n",
" group=sensor,\n",
" )\n",
"\n",
"plot.camera = [6.44, -4.52, 3.66, 11.35, 6.17, -3.55, 0.0, 0.0, 1.0]\n",
"plot"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Annotations in frame\n",
"\n",
"Requesting annotations in image coordinates will automatically interpolate and project annotations to the timestamps and coordinates of the camera.\n",
"\n",
"Two helpers are used to reduce the boilerplate: `tri3d.plot.test_box_in_frame` and `tri3d.plot.plot_bbox_cam`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"is_executing": false
},
"tags": []
},
"outputs": [],
"source": [
"sample_image = dataset.image(seq=0, frame=0, sensor=image)\n",
"img_size = sample_image.size\n",
"sample_annotations = dataset.boxes(seq=0, frame=0, coords=image)\n",
"\n",
"cmap = gen_discrete_cmap(len(dataset.det_labels))\n",
"\n",
"plt.figure(dpi=125, figsize=(10, 6))\n",
"plt.xlim((0, img_size[0]))\n",
"plt.ylim((img_size[1], 0))\n",
"\n",
"plt.imshow(sample_image)\n",
"\n",
"for i, ann in enumerate(sample_annotations):\n",
" # skip boxes where no vertex is in the frame\n",
" if test_box_in_frame(ann.transform, ann.size, img_size):\n",
" obj_id = str(getattr(ann, \"instance\", i))[:4]\n",
" hdg = ann.heading / np.pi * 180\n",
" color = cmap(dataset.det_labels.index(ann.label))\n",
" u, v, z = ann.center\n",
" print(f\" {obj_id:4s}: {ann.label:20s} at {z:3.0f}m heading {hdg:+.0f}°\")\n",
"\n",
" # plot the bounding box\n",
" plot_bbox_cam(ann.transform, ann.size, img_size, c=color)\n",
"\n",
" # plot the uid\n",
" plt.text(\n",
" np.clip(u, 0, img_size[0]),\n",
" np.clip(v, 0, img_size[1]),\n",
" obj_id,\n",
" horizontalalignment=\"center\",\n",
" verticalalignment=\"center\",\n",
" bbox=dict(facecolor=\"white\", linewidth=0, alpha=0.7),\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Lidar points over images\n",
"\n",
"Requesting points in image coordinates will automatically compensate for ego car movement between the two sensor acquisitions and for extrinsic and intrinsic calibrations."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"is_executing": false
}
},
"outputs": [],
"source": [
"points = dataset.points(seq=0, frame=0, sensor=lidar, coords=image)\n",
"\n",
"# discard extra dims (intensity, etc.)\n",
"uvz = points[:, :3]\n",
"\n",
"# only keep visible points\n",
"in_frame = (\n",
" np.all(uvz > 0, axis=1) # in front of the camera\n",
" & (uvz[:, 0] < sample_image.size[0])\n",
" & (uvz[:, 1] < sample_image.size[1])\n",
")\n",
"uvz = uvz[in_frame]\n",
"\n",
"plt.figure(dpi=125, figsize=(10, 6))\n",
"plt.xlim((0, img_size[0]))\n",
"plt.ylim((img_size[1], 0))\n",
"\n",
"plt.imshow(sample_image)\n",
"plt.scatter(uvz[:, 0], uvz[:, 1], c=uvz[:, 2], clim=(0, 30), s=0.5, alpha=1)\n",
"\n",
"plt.xlim(0, sample_image.size[0])\n",
"plt.ylim(sample_image.size[1], 0)\n",
"plt.gca().set_aspect(\"equal\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Lidar points and boxes"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"sample_pcl = dataset.points(seq=0, frame=0, coords=lidar, sensor=lidar)\n",
"sample_annotations = dataset.boxes(seq=0, frame=0, coords=lidar)\n",
"\n",
"plt.figure(dpi=125, figsize=(8, 8))\n",
"\n",
"plt.scatter(\n",
" sample_pcl[:, 0], sample_pcl[:, 1], s=5, c=\"k\", alpha=0.6, edgecolors=\"none\"\n",
")\n",
"\n",
"cmap = gen_discrete_cmap(len(dataset.det_labels))\n",
"\n",
"plot_annotations_bev(\n",
" [a.center for a in sample_annotations],\n",
" [a.size for a in sample_annotations],\n",
" [a.heading for a in sample_annotations],\n",
" ax=plt.gca(),\n",
" c=cmap([dataset.det_labels.index(a.label) for a in sample_annotations]),\n",
")\n",
"\n",
"plt.xlim(-25, 25)\n",
"plt.ylim(-25, 25)\n",
"plt.gca().set_aspect(\"equal\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"xyz = dataset.points(seq=0, frame=0, coords=lidar, sensor=lidar)[:, :3].astype(np.float32)\n",
"\n",
"sample_annotations = dataset.boxes(seq=0, frame=0, coords=lidar)\n",
"sample_annotations = [a for a in sample_annotations if a.label != \"DontCare\"]\n",
"\n",
"c = to_k3d_colors(plt.get_cmap(\"viridis\")((xyz[:, 2] + 2) / 4))\n",
"\n",
"plot = k3d.plot(name=\"pcl\", camera_mode=\"orbit\", camera_auto_fit=False)\n",
"\n",
"plot += k3d.points(positions=xyz, point_size=1, shader=\"dot\", colors=c)\n",
"\n",
"plot_bboxes_3d(\n",
" plot,\n",
" [a.transform for a in sample_annotations],\n",
" [a.size for a in sample_annotations],\n",
" c=0xFF0000,\n",
")\n",
"\n",
"plot.camera = [-0.52, -12.19, 17.0, 4.94, -1.7, 5.19, 0.0, 0.0, 1.0]\n",
"plot"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# if isinstance(dataset, Waymo): # waymo does not have segmentation for all frames\n",
"# seq, frame = dataset.frames(0, sensor=\"SEG_LIDAR_TOP\")[0]\n",
"# else:\n",
"# seq, frame = 0, 0\n",
"\n",
"# xyz = dataset.points(seq, frame, sensor=lidar)[:, :3]\n",
"# semantic = dataset.semantic(seq, frame, sensor=lidar)\n",
"\n",
"# cmap = gen_discrete_cmap(len(dataset.sem_labels))\n",
"# c = to_k3d_colors(cmap(semantic)) * (semantic >= 0)\n",
"\n",
"# plot = k3d.plot(name=\"pcl\", camera_mode=\"orbit\")\n",
"\n",
"# plot += k3d.points(positions=xyz, point_size=1, shader=\"dot\", colors=c)\n",
"\n",
"# plot"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.7"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
Binary file added .doctrees/nbsphinx/example_10_0.png
Binary file added .doctrees/nbsphinx/example_6_1.png
Binary file added .doctrees/nbsphinx/example_8_0.png
Binary file added .doctrees/plot.doctree
Empty file added .nojekyll
4 changes: 4 additions & 0 deletions _images/coordinates.svg
Binary file added _images/example_10_0.png
Binary file added _images/example_6_1.png
Binary file added _images/example_8_0.png
4 changes: 4 additions & 0 deletions _images/timelines.svg
Binary file added _images/waymo.jpg
24 changes: 24 additions & 0 deletions _sources/data_preparation.rst.txt
@@ -0,0 +1,24 @@
Data preparation
================

Tri3D can load most datasets without any preprocessing, except for the ones
mentioned on this page.

Waymo
-----

Tri3D supports the Waymo dataset in its parquet file format.
However, the files must first be re-encoded with better chunking and sorting parameters
to allow faster data loading.

To optimize the sequences in a folder, a script is provided that can be used like so:

.. code-block:: shell

   python -m tri3d.datasets.optimize_waymo \
      --input waymo_open_dataset_v_2_0_1/training \
      --output optimized_waymo/training \
      --workers 8

The resulting files in the output directory will contain the same data, but sorted,
chunked and compressed with better settings.
17 changes: 17 additions & 0 deletions _sources/datasets.rst.txt
@@ -0,0 +1,17 @@
.. currentmodule:: tri3d.datasets

tri3d.datasets
==============

Datasets
--------

.. autosummary::
:toctree: generated/
:template: base.rst

Box
KITTIObject
NuScenes
Waymo
ZODFrames