diff --git a/figures/ofa_search_cost.png b/figures/ofa_search_cost.png
new file mode 100644
index 0000000..241a124
Binary files /dev/null and b/figures/ofa_search_cost.png differ
diff --git a/figures/predictor_based_search.png b/figures/predictor_based_search.png
new file mode 100644
index 0000000..c844d3e
Binary files /dev/null and b/figures/predictor_based_search.png differ
diff --git a/figures/select_subnets.png b/figures/select_subnets.png
new file mode 100644
index 0000000..dbe7214
Binary files /dev/null and b/figures/select_subnets.png differ
diff --git a/tutorial/ofa.ipynb b/tutorial/ofa.ipynb
index 24e128c..9993b54 100644
--- a/tutorial/ofa.ipynb
+++ b/tutorial/ofa.ipynb
@@ -15,7 +15,7 @@
     "Different sub-nets can directly grab weights from the OFA network without training.\n",
     "Therefore, getting a new specialized neural network with the OFA network is highly efficient, incurring little computation cost.\n",
     "\n",
-    "![](https://hanlab.mit.edu/files/OnceForAll/figures/ofa_search_cost.png)"
+    "![](../figures/ofa_search_cost.png)"
    ]
   },
   {
@@ -300,7 +300,7 @@
    "metadata": {},
    "source": [
     "## 2. Using Pretrained Specialized OFA Sub-Networks\n",
-    "![](https://hanlab.mit.edu/files/OnceForAll/figures/select_subnets.png)\n",
+    "![](../figures/select_subnets.png)\n",
     "The specialized OFA sub-networks are \"small\" networks sampled from the \"big\" OFA network as is indicated in the figure above.\n",
     "The OFA network supports over $10^{19}$ sub-networks simultaneously, so that the deployment cost for multiple scenarios can be saved by 16$\\times$ to 1300$\\times$ under 40 deployment scenarios.\n",
     "Now, let's play with some of the sub-networks through the following interactive command line prompt (**Notice that for CPU users, this will be skipped**).\n",
@@ -372,7 +372,7 @@
     "(different from the official 50K validation set) so that we do **NOT** need to run very costly inference on ImageNet\n",
     "while searching for specialized models. Such an accuracy predictor is trained using an accuracy dataset built with the OFA network.\n",
     "\n",
-    "![](https://hanlab.mit.edu/files/OnceForAll/figures/predictor_based_search.png)"
+    "![](../figures/predictor_based_search.png)"
   ]
  },
  {
@@ -657,7 +657,7 @@
    "source": [
     "**Notice:** You can further significantly improve the accuracy of the searched sub-net by fine-tuning it on the ImageNet training set.\n",
     "Our results after fine-tuning for 25 epochs are as follows:\n",
-    "![](https://hanlab.mit.edu/files/OnceForAll/figures/diverse_hardware.png)\n",
+    "![](../figures/diverse_hardware.png)\n",
     "\n",
     "\n",
     "### 3.2 FLOPs-Constrained Efficient Deployment\n",
@@ -915,8 +915,8 @@
     "**Notice:** Again, you can further improve the accuracy of the search sub-net by fine-tuning it on ImageNet.\n",
     "The final accuracy is much better than training the same architecture from scratch.\n",
     "Our results are as follows:\n",
-    "![](https://hanlab.mit.edu/files/OnceForAll/figures/imagenet_80_acc.png)\n",
-    "![](https://hanlab.mit.edu/files/OnceForAll/figures/cnn_imagenet_new.png)\n",
+    "![](../figures/imagenet_80_acc.png)\n",
+    "![](../figures/cnn_imagenet_new.png)\n",
     "\n",
     "Congratulations! You've finished all the content of this tutorial!\n",
     "Hope you enjoy playing with the OFA Networks. If you are interested, please refer to our paper and GitHub Repo for further details.\n",