tutorials vrx_docker_manual_trial
In some cases, you may need to debug behavior that is only expressed when your solution is interacting with the vrx server.
- The `run_trial.bash` script provided in the `vrx-server` repository is the simplest way to run both the competitor image and server together. This method is described in our tutorial on examining a running container.
- We now take an alternate approach and step through the process of executing a trial manually.
- This method is more complex, but it has the advantage of giving more control over the execution process, and it can help locate points of failure.
In this tutorial we assume the following:
- You have already built both competitor and server images.
- You have completed all the steps listed in the troubleshooting prerequisites tutorial.
- You are working on a machine with an `nvidia` graphics card, as described in the VRX system requirements.
- Delete previous instances of vrx containers, if there are any:
```
docker rm vrx-competitor-system
docker rm vrx-server-system
```
- Create the network that the two containers will use to communicate (don't worry if this network already exists):
```
docker network create --subnet 172.16.0.10/16 vrx-network
```
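- If you want to confirm the network was created with the expected subnet, you can inspect it. This is an optional check, not part of the original procedure; it only relies on the standard `docker network inspect` command:
```
# The output should include a "Subnet" entry matching the value passed to docker network create above.
docker network inspect vrx-network | grep Subnet
```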
- Set the appropriate values to specify your team, task, and trial (replace the example values with the desired values):
```
TEAM=example_team
TASK=stationkeeping
TRIAL=0
```
- Specify the name and tag for your vrx-server image. Unless you customized the vrx-server build process, the default value shown below is probably correct:
```
SERVER_IMG="vrx-server-humble-nvidia:latest"
```
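- Before going further, it can be worth confirming that an image with this name actually exists locally. This is an optional check and not part of the original steps:
```
# If this prints no matching row, the server image has not been built yet
# (or was built under a different name or tag).
docker images | grep vrx-server
```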
- Change to the root of the `vrx-docker` repository (replace `/PATH/TO` with the correct path):
```
cd /PATH/TO/vrx-docker
```
- Set up variables with paths to directories you need to tell the vrx-server repository about:
```
TEAM_GENERATED_DIR=`pwd`/generated/team_generated/${TEAM}
TASK_GENERATED_DIR=`pwd`/generated/task_generated/${TASK}
HOST_LOG_DIR=`pwd`/generated/logs/${TEAM}/${TASK}/${TRIAL}
```
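- As an optional sanity check, you can verify that the generated files these directories should contain are present. The file paths below are inferred from the volume mounts and the entrypoint command used later in this tutorial, so adjust them if your layout differs:
```
# The team URDF and the task world file are what the server entrypoint will load.
ls "${TEAM_GENERATED_DIR}/${TEAM}.urdf"
ls "${TASK_GENERATED_DIR}/worlds/${TASK}${TRIAL}.world"

# The host log directory may not exist yet; Docker will create it when the volume is mounted.
echo "${HOST_LOG_DIR}"
```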
- First, we start the vrx server:
```
./vrx_server/run_container.bash -n vrx-server-system $SERVER_IMG "--net vrx-network \
  --ip 172.16.0.22 \
  -v ${TEAM_GENERATED_DIR}:/team_generated \
  -v ${TASK_GENERATED_DIR}:/task_generated \
  -v ${HOST_LOG_DIR}:/vrx/logs \
  -e ROS_MASTER_URI=http://172.16.0.22:11311 \
  -e ROS_IP=172.16.0.22 \
  -e VRX_DEBUG=true -it --entrypoint /bin/bash"
```
- The `run_container.bash` script will print out its commands as it executes them.
- When it's finished, it should open a shell into the container with a username that matches your local user.
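- Before continuing, you can optionally confirm (from inside the server container shell that just opened) that the mounted directories are visible. These mount points come from the `-v` options passed to `run_container.bash` above:
```
# Both listings should show the generated files prepared on the host.
ls /team_generated
ls /task_generated/worlds
```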
- Now open a second terminal for the competitor container.
- Set the name of your competitor image. This is the value you chose when building your image:
```
COMPETITOR_IMG="virtualrobotx/vrx_2023_simple:test"
```
- Start the competitor image:
```
docker run \
  --net vrx-network \
  --name vrx-competitor-system \
  --env ROS_MASTER_URI=http://172.16.0.22:11311 \
  --env ROS_IP=172.16.0.20 \
  --ip 172.16.0.20 \
  --privileged \
  --runtime=nvidia \
  --entrypoint=/bin/bash \
  -it \
  ${COMPETITOR_IMG}
```
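- At this point both containers should be running, each with an interactive bash session. If you want to double-check, you can do so from a third terminal on the host. This is an optional check, and the IP query assumes `run_container.bash` passes the `--ip` option through to `docker run`:
```
# Both vrx-server-system and vrx-competitor-system should appear with status "Up".
docker ps --filter name=vrx-

# Should print 172.16.0.22, the address the server container was asked to use.
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' vrx-server-system
```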
You will now start the task in the server container and run your solution in the competitor container. In order for this to work, the commands in this section should be run within a few seconds of each other.
- Back in the server terminal, begin the trial by running the VRX entrypoint script:
```
TEAM=example_team
TASK=stationkeeping
TRIAL=0
/run_vrx_trial.sh /team_generated/"${TEAM}".urdf /task_generated/worlds/${TASK}${TRIAL}.world /vrx/logs
```
  - Note that you need to set the `TEAM`, `TASK`, and `TRIAL` variables again inside the container.
  - You will see some output with the message `Starting vrx trial...`, and the simulation will begin to run.
- Switch back to the competitor terminal and test your solution by manually executing the entrypoint:
```
/ros_entrypoint.sh
```
- Both server and competitor should now run the trial.
- A major advantage of this approach is that the containers will not exit after the trial is complete.
- This gives you more time to diagnose failures.
- You can also rerun trials simply by re-entering the entrypoint commands given in the section above.
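- Once a trial has finished, the log output is a natural place to start diagnosing failures. Inside the server container the logs are written to `/vrx/logs`, which is the same directory as `${HOST_LOG_DIR}` on the host (via the volume mount above). The exact file names depend on the task and VRX version, so treat this listing as illustrative:
```
# Inside the server container:
ls -R /vrx/logs

# Or, equivalently, from a terminal on the host:
ls -R "${HOST_LOG_DIR}"
```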
The Gazebo GUI is disabled by default in the VRX server, but we can enable it by modifying the `roslaunch` command in the `run_vrx_trial.sh` script.
- This can be accomplished with the following `sed` command (a quick way to verify the change is shown after this list):
```
sudo sed -i 's/gui:=false/gui:=true/g' /run_vrx_trial.sh
```
- This modification will cause Gazebo to open a graphical window while the trial is running, so you can see your WAMV's behavior in real time.
- To disable the GUI, simply reverse the substitution in the `sed` command:
```
sudo sed -i 's/gui:=true/gui:=false/g' /run_vrx_trial.sh
```
- Since changes are not persistent, you can always get back to the original configuration by exiting and restarting the container.
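- If you want to confirm the edit took effect, a quick `grep` inside the server container shows which form of the argument the script currently contains. This assumes the `gui:=...` argument appears literally in `/run_vrx_trial.sh`, as the `sed` commands above imply:
```
# Expect gui:=true after enabling the GUI, gui:=false after disabling it.
grep -n 'gui:=' /run_vrx_trial.sh
```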
- To exit the containers when finished, log out of each bash session with the `exit` command.
- If desired, you can clean up the containers and network by running:
```
docker rm vrx-competitor-system
docker rm vrx-server-system
docker network rm vrx-network
```
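- As a final optional check (not part of the original steps), you can verify that nothing was left behind:
```
# Both commands should return no vrx entries once cleanup has succeeded.
docker ps -a --filter name=vrx-
docker network ls --filter name=vrx-network
```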
Back: Run a Container Manually | Up: VRX Docker Image Overview