From 8b52ec6255317ba40664bf809f68e89e3254ee4f Mon Sep 17 00:00:00 2001
From: skylark
Date: Mon, 19 Sep 2022 11:11:51 +0000
Subject: [PATCH] Update doc

---
 README.md              |  2 +-
 doc/source/index.rst   |  3 +++
 doc/source/overview.md | 59 ++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 63 insertions(+), 1 deletion(-)
 create mode 100644 doc/source/overview.md

diff --git a/README.md b/README.md
index a2d883e09..5a7887378 100644
--- a/README.md
+++ b/README.md
@@ -5,7 +5,7 @@
 Rofunc package focuses on the **robotic Imitation Learning (IL) and Learning from Demonstration (LfD)** fields and provides valuable and convenient Python functions for robotics, including _demonstration collection, data pre-processing, LfD algorithms, planning, and control methods_. We also plan to provide an Isaac Gym-based robot simulator for evaluation. This package aims to advance the field by building a full-process toolkit and validation platform that simplifies and standardizes the process of demonstration data collection, processing, learning, and its deployment on robots.
 
-![](./img/pipepline.png)
+![](./img/pipeline.png)
 
 ### Installation
 The installation is very easy,

diff --git a/doc/source/index.rst b/doc/source/index.rst
index 5b9ada538..95bd85ba4 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -8,6 +8,8 @@ Rofunc: The Full Process Python Package for Robot Learning from Demonstration
 Rofunc
 ----------------
 
+:doc:`overview`
+
 :doc:`devices/README`
     How to record, process, visualize and export the multimodal demonstration data.
 :doc:`lfd/README`
@@ -29,6 +31,7 @@ Roadmap
    :caption: Rofunc
    :hidden:
 
+   overview
    devices/README
    lfd/README
    planning/README

diff --git a/doc/source/overview.md b/doc/source/overview.md
new file mode 100644
index 000000000..9a6c41d60
--- /dev/null
+++ b/doc/source/overview.md
@@ -0,0 +1,59 @@
+# Overview
+
+Rofunc package focuses on the **robotic Imitation Learning (IL) and Learning from Demonstration (LfD)** fields and provides valuable and
+convenient Python functions for robotics, including _demonstration collection, data pre-processing, LfD algorithms, planning, and control methods_. We also plan to provide an Isaac Gym-based robot simulator for evaluation. This package aims to advance the field by building a full-process toolkit and validation platform that simplifies and standardizes the process of demonstration data collection, processing, learning, and its deployment on robots.
+
+![](../../img/pipeline.png)
+
+## Installation
+The installation is very easy:
+
+```
+pip install rofunc
+```
+
+and, as you'll find later, the package is just as easy to use:
+
+```python
+import rofunc as rf
+```
+
+Have fun in the robotics world!
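+
+As a quick taste of the API, here is a minimal sketch that chains two of the calls listed in the table below. It is illustrative only: the dotted names come from that table, but the argument names and return values are assumptions, not the released signatures.
+
+```python
+import rofunc as rf
+
+# Hypothetical example (signatures assumed): decode a recorded Xsens
+# .mvnx file, then show or save a GIF of the motion.
+data = rf.xsens.process('demo.mvnx')
+rf.xsens.visualize(data)
+```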
+
+## Available functions
+Currently, we provide a simple document, available [here](./rofunc/); a comprehensive version in both English and
+Chinese is built via [readthedoc](https://rofunc.readthedocs.io/en/stable/).
+The available functions and plans are listed below (✅ = implemented, blank = planned).
+
+| Classes                         | Types        | Functions               | Description                                                        | Status |
+|---------------------------------|--------------|-------------------------|--------------------------------------------------------------------|--------|
+| **Devices**                     | Xsens        | `xsens.record`          | Record human motion via network streaming                          |        |
+|                                 |              | `xsens.process`         | Decode the .mvnx file                                               | ✅      |
+|                                 |              | `xsens.visualize`       | Show or save a GIF of the motion                                    | ✅      |
+|                                 | Optitrack    | `optitrack.record`      | Record the motion of markers via network streaming                  |        |
+|                                 |              | `optitrack.process`     | Process the output .csv data                                        | ✅      |
+|                                 |              | `optitrack.visualize`   | Show or save a GIF of the motion                                    |        |
+|                                 | ZED          | `zed.record`            | Record with multiple cameras                                        | ✅      |
+|                                 |              | `zed.playback`          | Play back the recording and save snapshots                          | ✅      |
+|                                 |              | `zed.export`            | Export the recording to MP4                                         | ✅      |
+|                                 | Multimodal   | `mmodal.record`         | Record multimodal demonstration data simultaneously                 |        |
+|                                 |              | `mmodal.export`         | Export multimodal demonstration data in one line                    | ✅      |
+| **Learning from Demonstration** | DMP          | `dmp.uni`               | DMP for one agent with one or more demonstrated trajectories        |        |
+|                                 | GMR          | `gmr.uni`               | GMR for one agent with one or more demonstrated trajectories        | ✅      |
+|                                 | TP-GMM       | `tpgmm.uni`             | TP-GMM for one agent with one or more demonstrated trajectories     | ✅      |
+|                                 |              | `tpgmm.bi`              | TP-GMM for two agents with coordination learned from demonstration  | ✅      |
+|                                 | TP-GMR       | `tpgmr.uni`             | TP-GMR for one agent with one or more demonstrated trajectories     | ✅      |
+|                                 |              | `tpgmr.bi`              | TP-GMR for two agents with coordination learned from demonstration  | ✅      |
+| **Planning**                    | LQT          | `lqt.uni`               | LQT for one agent with several via-points                           | ✅      |
+|                                 |              | `lqt.bi`                | LQT for two agents with coordination constraints                    | ✅      |
+|                                 |              | `lqt.recursive`         | Recursively generate smooth trajectories for robot execution        | ✅      |
+| **Logger**                      |              | `logger.write`          | Custom TensorBoard-based logger                                     |        |
+| **Coordinate**                  |              | `coord.custom_class`    | Define the custom `Pose` class                                      |        |
+|                                 |              | `coord.transform`       | Useful functions for coordinate transformations                     | ✅      |
+| **VisuaLab**                    | Trajectory   | `visualab.trajectory`   | 2D/3D trajectory visualization (optionally with orientation)        | ✅      |
+|                                 | Distribution | `visualab.distribution` | 2D/3D distribution visualization                                    | ✅      |
+|                                 | Ellipsoid    | `visualab.ellipsoid`    | 2D/3D ellipsoid visualization                                       | ✅      |
+| **RoboLab**                     | Kinematics   | `robolab.kinematics`    | ...                                                                 | ✅      |
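+
+To make the table concrete, the hedged sketch below shows how the LfD and planning entries might compose into a pipeline. Only the dotted names are taken from the table; the demonstration data layout, argument names, and return values are assumptions for illustration.
+
+```python
+import numpy as np
+
+import rofunc as rf
+
+# Assumed data layout: a list of demonstrated trajectories, each an
+# (N, 7) array of poses (position + quaternion); the real format may differ.
+demos = [np.load('demo_0.npy'), np.load('demo_1.npy')]
+
+# Learn a TP-GMM from the demonstrations (`tpgmm.uni` in the table above;
+# signature and return value assumed).
+model = rf.tpgmm.uni(demos)
+
+# Plan a smooth trajectory through via-points with LQT (`lqt.uni` above;
+# signature assumed). Here the via-points are subsampled from one demo.
+via_points = demos[0][::10]
+traj = rf.lqt.uni(via_points)
+```
+
+In the same spirit, the device entries (`xsens.*`, `optitrack.*`, `zed.*`) supply the demonstration data that feeds such a pipeline.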