From c41a0fa1e6886ad76e47d970303c61afde82998b Mon Sep 17 00:00:00 2001
From: Daniel Butterfield
Date: Mon, 30 Sep 2024 21:54:11 -0400
Subject: [PATCH] Add Figure 3 a and b

---
 docs/index.html                 | 94 ++++++++++----------------------
 docs/static/css/index.css       |  5 +-
 docs/static/images/figure3a.png | Bin 0 -> 50524 bytes
 docs/static/images/figure3b.png | Bin 0 -> 71282 bytes
 4 files changed, 33 insertions(+), 66 deletions(-)
 create mode 100644 docs/static/images/figure3a.png
 create mode 100644 docs/static/images/figure3b.png

diff --git a/docs/index.html b/docs/index.html
index cf6d441..428c735 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -199,27 +199,47 @@

-          Performance:
-          Generalization: For the real-world Contact Detection task, the test set includes recordings on unseen ground types and gait types; and for the simulated GRF Estimation task, the test set includes unseen friction, speed, and terrain parameters. Therefore, our improved performance also demonstrates the generalization ability of our model on out-of-distribution data.
-          Model Efficiency: Our MI-HGNN is not sensitive to the hyperparameters. Using our smallest MI-HGNN model, we reduce the model parameters from ECNN by a factor of 482 and still improve its performance by 8.4%.
-          Sample Efficiency: Overall, we find that the performance of our model does not drop significantly until the number of training samples is reduced to 10% of the entire training set. In addition, competitive results can still be achieved by our model using only 2.5% of the training set, i.e., 15863 training samples.
+          Performance:
+          Generalization: For the real-world Contact Detection task, the test set includes recordings on unseen ground and gait types, and for the simulated GRF Estimation task, the test set includes unseen friction, speed, and terrain parameters. Our improved performance therefore also demonstrates that our model generalizes to out-of-distribution data.
+          [figure3a.png]
+          Model Efficiency: Our MI-HGNN is not sensitive to hyperparameters. Using our smallest MI-HGNN model, we reduce the parameter count by a factor of 482 relative to ECNN while still improving performance by 8.4%.
+          [figure3b.png]
+          Sample Efficiency: Overall, we find that the performance of our model does not drop significantly until the number of training samples is reduced to 10% of the entire training set. In addition, our model still achieves competitive results using only 2.5% of the training set, i.e., 15863 training samples.
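The sample-efficiency figures in the hunk above can be sanity-checked with a little arithmetic. A minimal sketch, assuming 15863 samples corresponds to exactly 2.5% of the full training set (the full dataset size is not stated in the patch, so 634520 is an inference, and `subset_size` is a hypothetical helper):

```python
# Back-of-the-envelope check of the sample-efficiency numbers quoted in the
# patch text. Assumes 15863 samples is exactly 2.5% of the full training set.
def subset_size(full_size: int, fraction: float) -> int:
    """Number of training samples kept at a given fraction of the full set."""
    return round(full_size * fraction)

# Infer the full training-set size from the 2.5% figure: 15863 / 0.025.
full_size = round(15863 / 0.025)  # 634520 (inferred, not stated in the patch)

for fraction in (1.0, 0.10, 0.025):
    print(f"{fraction:.1%}: {subset_size(full_size, fraction)} samples")
```

Under this assumption, the 10% threshold mentioned above corresponds to roughly 63452 training samples.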
@@ -257,8 +277,6 @@

@@ -279,56 +297,6 @@

Video Presentation

@@ -345,19 +313,17 @@

BibTeX
