How to combine Road Following and Collision avoidance? #267
Hello, you would need road-following code that is actually based on single-object recognition, like lanes, signs, or other objects. You can have a look at the JetRacer in NVIDIA-AI-IOT/jetracer; it allows several categories for training. I added the Jetbot control to the JetRacer script "interactive_regression_datacollection", which lets you control the bot with a joystick and collect data via the clickable display widget. The script is in my Jetbot-Project repository. It requires installation of Jetcam and the Clickable Image Widget. With the Jetbot scripts based on image classification and inference, you could try to train collision avoidance with a single object, like a water bottle (from many different directions as "blocked"), and all other images (street, strips, etc.) as "free". Using the TRT versions of the collision and road-following models it might work a bit, but the bot might not move fast. If you put your script in your forked repository, it would also be possible to work on it a bit together. Best
Hi @tomMEM, thank you for your answer. Is it possible to control the bot without a joystick? I'm still new to machine learning, the Jetson Nano, and Git/GitHub.
Hello sakae-og, you could add keyboard button control instead. For now, just do not activate the controller cell, or ignore the error message; it is not required for data accumulation, just a convenience to avoid squatting next to the robot. The Clickable Image Widget, however, you do need; it can be installed following its repository instructions. It worked on the jetbot_image_v0p4p0 SD image without updating Jupyter etc. However, my second installation got stuck at `sudo pip3 install -e .`. The steps are: `cd` (go to the home directory), `sudo python3 setup.py build` (TB modified; takes a long time, over 30 min), then `sudo pip3 install -e .`
Hi @tomMEM, I am looking to accomplish the same task as @sakae-og. Looking at your "CategoryRoad_Jetracer_2_Jetbot" on GitHub, in the interactive_regression notebook I see that you used 2 categories (Apex and Bottle). I am not sure what exactly you would like to accomplish, but I would like my Jetbot to stop when it sees the "bottle" in the road and proceed when the bottle is removed; is that what your script is meant to accomplish? I apologize for my limited script understanding, as I am just a beginner. Also, I have tried using just Apex and have gotten good results, but I have never really figured out how to use several categories. P.S. I would also like to try the second point you mentioned, using the TRT versions of both the collision and road-following models on the Jetbot with a single object as "blocked".
Hello Abuel, thank you for your interest.
Hi @tomMEM, thanks a lot for the scripts; I will definitely try them out. For now, using the original NVIDIA Jetbot code, I have trained my collision-avoidance model on my road-following street, setting lanes, strips, etc. as 'free' and a bottle (trained from various directions) as 'blocked'. I would also like to try to combine these scripts ("road following live demo" and "collision avoidance live demo") and see if the robot can perform both at the same time (even if a lower speed is required). I know that the system architecture for a system using multiple models will generally need to be different; has anybody already combined the two before?
Hi @tomMEM, I have just completed the initial setup (installed Jetcam and the Clickable Image Widget) and was able to run the script "interactive_regression_category_datacollection" successfully with no errors. Thanks! While collecting training data, however, I am still a little confused. For the category 'apex' it is quite clear to label the path we would like the Jetbot to follow, but for the second category 'bottle', what exactly should I set as the target? P.S. My end goal is to create a road-following model in which the Jetbot follows a certain path and comes to a halt whenever the second category, "bottle", is placed in front of it.
Hello Abuel, I combined the original Jetbot anti-collision and road-following TRT scripts into one script, trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb. It requires two well-trained models. The object might need NOT to be trained on the road, but with different backgrounds etc. The probability threshold (0-1) can be adjusted with one of the sliders; start with 0.8 or 0.9 (this gives fewer false positives, but more false negatives). The category version works in a similar way. Hope it runs.
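The combination described above can be sketched as a single per-frame control step; the function and parameter names here are illustrative, not taken from the notebook:

```python
# Sketch of one combined control step, assuming `prob_blocked` comes from
# the collision-avoidance model and `angle` from the road-following model
# each camera frame. All names and gain values are illustrative.

def control_step(prob_blocked, angle, threshold=0.8,
                 speed_gain=0.3, steering_gain=0.2):
    """Return (left_motor, right_motor) for one frame.

    If the collision model is confident the path is blocked, stop;
    otherwise steer along the road-following prediction.
    """
    if prob_blocked >= threshold:
        return 0.0, 0.0  # blocked: halt both motors
    steering = angle * steering_gain
    left = max(min(speed_gain + steering, 1.0), 0.0)
    right = max(min(speed_gain - steering, 1.0), 0.0)
    return left, right
```

In practice the threshold slider mentioned above maps directly onto the `threshold` parameter here.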
Hello Abuel, thank you for testing. In the category script, extraction of the classification probability is missing, or at least is not very powerful. Since we have four prediction values, some sum or averaging would be needed to use the probability values to set a prediction threshold. The object learning is most likely similar to your trials with Jetbot collision learning: an isolated object on different backgrounds, but also on the street. Hope it works out. Best, T
Hi @tomMEM, I just tested your script "trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb" with my Jetbot and it works just fine. Very much appreciated. Hope to do the same experiment with my fast Jetracer. |
Hello Abuel, good that it works a bit. |
Hi @tomMEM, nice, I would love to try it out; however, I'm still confused about how to train multiple categories (Apex: regression and Bottle: classification) using "interactive_regression_category_datacollection".
Hi @tomMEM, I'm sorry for my late response. Like Abuel, I tried a lot. Why doesn't it work? I'm sorry for asking such basic things.
Hi @sakae-og, xy_dataset is a python script and needs to be uploaded into the working directory in order to run the "Task" part. It can be found at the Jetracer notebooks directory (https://github.com/NVIDIA-AI-IOT/jetracer/tree/master/notebooks) together with "utils.py". |
Hello Abuel, data collection for the road is similar to the Jetbot, with the difference that you take more care where to place the spot; collect a good number of images in database A. The model can be trained and tested live, and more images can be added.
Hello sakae-og, thank you for the feedback. Check the names of your models in one of the first cells and give it a run. Sorry for the confusion with the JetRacer scripts, which require an additional clone of the JetRacer repository and its installation, as well as installation of Jetcam, the Clickable Image Widget, etc. Hope it works out.
Hi Abuel. Hi @tomMEM, I thought I would create it with data_collection, but is that different?
Hello sakae-og, the models 'best_steering_model_xy_trt.pth' and 'best_model_trt.pth' have to be created with the Jetbot scripts. The jetracer-2-Jetbot project is based on one database created by interactive_regression.ipynb of the jetracer road-following repository (or the one I modified a bit for use with a joystick, which you can disable by placing # in front of the line). Thus, you need to clone jetracer and place the scripts there. Hope it works, best T
Hi @tomMEM, I have just tried running the live demo script "trt_jetracer_category_Model_for_jetbot_with_stop.ipynb", and I have noticed similar problems with the prediction: when the robot is completely free (no "bottle" on the track) the prediction shows a value of around 0.49, and when the "bottle" is placed on the road the prediction goes up to around 0.6 max. So I tried setting the threshold to about 0.5 and, as expected, there were a lot of false positives. Also, for the "category" part in the live demo, I did not fully understand it, but it shows a value of '3' when the robot is fully free and '1' or '2' when the robot is being blocked.
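One way to reduce the false positives described above (an illustrative sketch, not from the notebooks) is to smooth the per-frame score over a short window before comparing it to the threshold; the window size and threshold here are assumed tuning values:

```python
from collections import deque

# Illustrative sketch: average the per-frame 'blocked' score over a small
# sliding window so single noisy frames around the threshold do not trigger
# a stop. Window size and threshold are assumptions to tune on the robot.

class BlockedFilter:
    def __init__(self, window=5, threshold=0.55):
        self.scores = deque(maxlen=window)  # keeps only the last N scores
        self.threshold = threshold

    def update(self, prob_blocked):
        """Feed one frame's score; return True when the windowed
        average says the path is blocked."""
        self.scores.append(prob_blocked)
        return sum(self.scores) / len(self.scores) >= self.threshold
```

This trades a few frames of reaction time for fewer spurious stops.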
Hello Abuel, good that you got it running.
Hope it helps a bit; I still need to find out how to get probability scores for just one class out of the inference model without normalization over all classes.
Hello, it looks like there is no way to get a likelihood measure of the inference strength per category using the current TRT model. The four scores in the current approach are not useful. T
Hello Abuel, as an exercise I added the Jetbot collision_avoidance model to the JetRacer script and removed the dependencies on categories; it is uploaded as "trt_jetracer_categoryModel_for_jetbot_with_collision_avoidance_of_jetbot.ipynb". Categories could still be used to switch from road-following mode to object-following mode. Please note I tested it with the old Jetbot collision_avoidance model 'best_model_trt.pth', and not with the newer 'best_model_resnet18.pth'. Hope it works. T
Hi @tomMEM, thank you, I will definitely try out your suggestions as well as the new script with the collision-avoidance model this coming Monday. What I would really like to accomplish, since combining the road-following and collision-avoidance models worked really well, is to add more categories to the collision model; for example, if a "traffic light" is detected, then stop for 5 seconds. So if a "bottle" is detected, stop until the "bottle" is removed; if a "traffic light" is detected, stop for 5 seconds and proceed. May I ask how you would accomplish such a task? Secondly, for object following, I also had a look at your script "live_demo-object following_tweak.ipynb", and after playing around with the motor adjustments etc., the object following worked amazingly well. I appreciate your work. However, for object following I would assume that once the robot reached the "target" object it would stop, but it just ends up driving into the object. So I would like to ask: how would you calculate the distance to a "target" object without using external sensors (by using bounding boxes, for example) so that the robot can stop after reaching the object? Thank you
Hello Abuel, thank you for your feedback and testing. 1a) "Road following and collision avoidance worked": did you mean that the category script worked a bit? Actually, you could have a high number of categories if you do not need to predict the x, y coordinates to turn towards the category/object. Just train one model for categories without x and y that can be used for inference (probability of detection around 60%), and another for road following with x and y (which you already have). 1b) If the environment is fixed, then classical OpenCV object recognition (threshold, edges, etc.) could be used. If you use models, there might be a speed problem; however, there are specialized datasets and pre-trained models for traffic, based on e.g. the German Traffic Sign Recognition Benchmark (GTSRB) dataset, or models like TrafficCamNet. A simplified dataset could be trained using ResNet18; that might be an interesting task. Best, T
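On the distance question raised above: without external sensors, one common heuristic is to treat the bounding-box size as a proxy for distance and stop when the object fills enough of the frame. A minimal sketch follows; the function name and the 0.25 area fraction are assumptions, not from the notebooks:

```python
# Hedged sketch: approximate "close enough to stop" from the detection's
# bounding-box area instead of a distance sensor. The area fraction is an
# assumed tuning value that depends on the object's real size.

def should_stop(bbox, frame_w, frame_h, area_fraction=0.25):
    """bbox = (xmin, ymin, xmax, ymax) in pixels; return True when the
    object fills enough of the frame that the robot is at the target."""
    x0, y0, x1, y1 = bbox
    box_area = max(x1 - x0, 0) * max(y1 - y0, 0)
    return box_area / float(frame_w * frame_h) >= area_fraction
```

Because box area scales roughly with the inverse square of distance, the threshold effectively sets a stopping distance for a given object size.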
Hi @tomMEM, sorry for the late reply, and thank you for the "live_demo-objectfollowing_tweak_object_stop.ipynb" script. I haven't tried it out yet but definitely will! Thanks. As for "road following and collision avoidance worked", I meant using "trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb" as well as "trt_jetracer_categoryModel_for_jetbot_with_collision_avoidance_of_jetbot.ipynb" (I have just recently tried the latter and it also worked amazingly well, though I felt the Jetbot road following gave slightly better results; that could just be my training dataset). For the "traffic light" and "bottle" experiment, I would love to just be able to add several categories to the interactive regression script, one category being apex (regression) and the other two for classification (without x and y); however, in the interactive regression I see there is only a regression (x, y) option when training. So the only way I see to accomplish this task is to train a model for the custom "traffic light" and possibly add it to the script that already has road following and bottle (trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb). Or is there another way to accomplish this by just modifying the ready-made Jetbot or JetRacer scripts?
Hello Abuel, thanks for your feedback and interest. Traffic light: we could add categories to the collision_avoidance model - it only requires a few changes - and add it directly to the "category bottle" model, or use it as a third model. It looks like it is better to work with "specialized" models per task. The third model could then use every other frame for its check. I will try it these days. However, it will detect and respond to objects early, so you can only find out by trying whether it remains useful for your task. If the SSD MobileNet detects your traffic light as an object, that would be a more "future-proof" solution; however, because of the time lag (about 1-2 s), it prevents good road-following performance.
Hello Abuel, I added a new folder "Classification_Stop" with 1_datacollection, 2_TRT, and 3_live run. It uses four classes (more classes could be added). The background class is the most important: throw in everything that should not give a stop signal (the network will actually classify it, but we are not going to act on it). If backgrounds are very different, a second background class could be added. The network cannot say "I do not know", so it will always choose a class; to avoid spurious signals in the essential classes, non-essential objects need to be in the background class. The road following now has a bit of a time lag, so I hope it will still run sufficiently fast.
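The behavior discussed in this thread (stop while a bottle is seen, pause for a fixed time on a red light, drive otherwise) can be sketched as a small decision function. The class names follow the data-collection notebook above, while the timing logic and function name are illustrative:

```python
# Hypothetical sketch of mapping the predicted class to robot behavior:
# 'bottle' stops until it is removed, 'redlight' pauses for a fixed time,
# everything else keeps driving. The 5-second pause is an assumption.

def next_action(predicted_class, now, red_until):
    """Return (action, red_until), where action is 'drive' or 'stop' and
    red_until is the timestamp until which a red-light pause lasts."""
    if predicted_class == 'bottle':
        return 'stop', red_until           # stop while the bottle is seen
    if predicted_class == 'redlight':
        return 'stop', now + 5.0           # pause 5 s from this frame
    if now < red_until:
        return 'stop', red_until           # still within the red pause
    return 'drive', red_until              # background1 / greenlight
```

Called once per frame with the current time, this keeps the robot stopped for the full pause even after the red light leaves the frame.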
Hi @tomMEM, that is definitely something to try out; however, I do have a few questions. Firstly, I see that there are 4 categories ('background1', 'redlight', 'greenlight' and 'bottle') but there is no 'Apex' (road following). Are you creating 2 separate models, one for object detection and behavior and another for road following, and then combining the two? As for the experiment I originally wanted to accomplish, I would say it is a lot less complex than what you are doing here with Classification_Stop. I wanted to start with something simpler using just 3 categories ('Apex', 'Bottle', 'Trafficlight'), where the traffic light is more like a Lego piece for my Lego track: if the 'Trafficlight' is detected, the Jetbot just pauses for 5 seconds and continues road following, whereas if the 'Bottle' is detected, the Jetbot stops until the 'Bottle' is removed. Nevertheless, I really look forward to trying out your 'Classification_Stop' and seeing the results. Thanks!
Hello Abuel, thanks for checking it out. Now we have too many versions of the scripts, so it might be a bit tricky. The road following uses the Jetbot road-following script, but you could replace it with the two-category JetRacer model [apex, bottle]; the bottle category will just not be used.
Hi @tomMEM, thank you for the very clear explanation. I would love to try the experiment out as soon as I can and will surely update you. Thanks again!
Hello @abuelgasimsaadeldin and @sakae-og, I removed the sliders etc. to increase road-following speed in trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb and 3_roadfollowing_classification_behavior.ipynb. The latest scripts also no longer need Jetcam or the Clickable Image Widget. There is still a build-up of memory over hours, so you need to kill your browser in time and restart.
Hi @tomMEM, I ran "trt_jetracer_categoryModel_for_jetbot_with_collision_avoidance_of_jetbot.ipynb" using the file created by the Jetbot. Then I got a KeyError:
KeyError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in load_state_dict(self, state_dict, strict)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in load(module, prefix)
/usr/local/lib/python3.6/dist-packages/torch2trt/torch2trt.py in _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs)
KeyError: 'engine'
What is the cause of this?
Hello @sakae-og, a KeyError can happen when the model is corrupted, either during copying (you have to wait long enough for it to finish) or during saving while or after training. Please just try a fresh model (copied or newly trained); the model in the TRT build needs to be saved manually. Your current model might also not work in the original Jetbot road-following script if you copied it back to the Jetbot. Hope it works out. Best, T
Hello @tomMEM, where do you create 'best_steering_model_xy_trt.pth' and 'best_model_trt.pth' in the first place? I capture images with "data_collection" in the "collision_avoidance" and "road_following" notebooks of NVIDIA-AI-IOT and train with "train_model". Is the method different in the first place?
Hello @sakae-og, I wrote a bit more in the readme at point 4. So for jetbot collision_avoidance model the steps are: |
Hi @tomMEM, the "live_demo" of "road_following" works, but the Jetbot spins around with this program. Still, I'm glad the program worked! I will play with it a little more and keep trying.
Hello @sakae-og, thank you for your feedback.
Best, T |
Hi @tomMEM I was able to perform line tracing and stopping using "trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb". I am very happy to be able to do what I want to do. Best regards, Yuu |
Hello @sakae-og, I’m glad the script helped you to achieve some of your goals, and thank you for your feedback. |
Hi @tomMEM, what compiler did you use to build the TRT model?
Hello @weweew1, the build of the TRT model is based on the original Jetbot build script (live_demo_resnet18_build_trt.ipynb). It requires additional installations, as described inside that build script. The transformation of the trained ResNet model to TRT takes less than 5 min on the Jetson Nano. a) I did not try it myself, but Colab has some pre-installed libraries; those might be too new or too old. If you would like to run the training script there, you need to install as described in the Jetbot "SD from scratch" instructions (start from point 4, but do install 8 to the end), making sure each time that you specify the same version as on the Jetson Nano (e.g. pip install numpy==???). You can get a list of the installed library versions on the Jetson Nano with !pip freeze and !pip list or other commands. Unfortunately, there seems to be no "requirements" file in Jetbot, so you need to sync the versions yourself. You could try to install the libraries at least for the dependencies required by the training script (train_model.ipynb; see the imports in notebook cell 1). b) Running "setup.py" might actually not be necessary for training after cloning jetbot.
Hi @tomMEM, how do I transform the trained ResNet model to TRT on the Jetson Nano?
Hello @weweew1, you need to find the scripts with the word "build" in them, for example "live_demo_resnet18_build_trt.ipynb", open one in JupyterLab, and check the name and path of your ResNet model in the corresponding cell (e.g. model.load_state_dict(torch.load('best_model_resnet18.pth'))), renaming it if needed. Finally, run it cell by cell. The corresponding ResNet model (e.g. 'best_model_resnet18.pth') needs to be present in the same folder as the respective build script, e.g. in the "collision_avoidance" folder. If you would like to compare two models, you need to repeat the process for the "road_following" folder (live_demo_build_trt.ipynb) or for the "classification_Stop_roadfollowing" folder (2_load_build_2_TRT_classification_model.ipynb) with the corresponding models. Of course, it will only work if you have the corresponding Jetbot installation based on a Jetbot SD image (4.3). You also need to make sure that you have "torch2trt" installed beforehand. Best, T
Hi @tomMEM, thanks for your answer!!
Hi All.
I have recently got a Jetbot and am running a demo.
Road Following and Collision avoidance can be operated separately.
I want to make a combination that stops when there is an object during Road Following.
It doesn't work even when I combine the two pieces of code, so I would like to know how it can be done.
I'm not good at English.
I'm sorry if it's hard to understand.