Make HERO do stuff
Using HERO at first might be a bit confusing. Here are some useful commands to get you started (examples below).
roscore # Starts the central ROS server
hero-start # Run basic HERO software (no autonomy yet)
hero-free-mode # Run autonomy-supporting software; this also loads the world model
hero-console # Interactive python console to experiment with hero
# Usage: hero.<tab>
# Example: hero.head.reset()
robot-console # Send commands directly to HERO, which parses and executes them
# Usage: hero <tab> (you need autonomy-supporting software running)
# Example: "hero inspect the cabinet" or "hero bring the coke to the dinner_table"
hero-challenge-<tab> # Run a specific challenge
# Example: hero-challenge-storing-groceries
hero-core # Use real HERO hardware
# Example: `hero-core` followed by `hero-rviz`
############################
# For all of the following commands:
# run `hero-core` before to use the real robot
############################
hero-rviz # RViz is the ROS VIZualization tool, to see what the robot (thinks it) is seeing
hero-dashboard # Opens a window showing the battery levels of hero1 and hero2
hero-show-rgbd # Opens a window showing HERO's RGB and depth camera images
hero-say "msg" # Make HERO speak "msg" out loud
Once everything is installed, you should be able to simulate our robots and environment. To do so, run the following commands, each in a separate terminal (terminator is very handy for this: you can split the window with Ctrl+Shift+O and Ctrl+Shift+E, or via right-click).
To run HERO software, you can either:
- Run locally in a simulation
- Run directly on the robot
For the simulation, you just need to have the software installed, following the wiki page Getting Started.
For the real robot, you will first have to start the hardware and connect to HERO. You can refer to the wiki page starting HERO, and if you want more details on experimenting with HERO: experimenting with HERO.
Once you have successfully started HERO, you can SSH into it (ask any team member for the login passwords):
sshhero1 # SSH into the actual robot itself (hero1)
sshhero2 # SSH into the robot laptop (hero2)
If these don't work, try:
ssh [email protected]
# or for hero2
ssh [email protected]
To run HERO software such as challenges or Python scripts, you need to connect to hero2, not hero1.
Every command you run now will directly be executed on hero1 or hero2.
Note: if you split your terminal, the new pane will NOT be connected to HERO through SSH; you will have to connect again in that pane.
You can also control HERO through the web UI, which is very intuitive. For this, first start the robot and run hero-free-mode, then connect to hero1 on port 8000 through a web browser:
http://hero1.local:8000/
To simply play around with the robot, you can execute these commands each one in a different pane or terminal:
Execute in order:
###################################
####### SIMULATION ################
###################################
roscore
hero-start
hero-rviz # Will open a new window with RVIZ (don't close the terminal!)
##################################
##### REAL ROBOT #################
##################################
sshhero2 # BEFORE every command
hero-core # BEFORE rviz
hero-rviz # Will open a new window with RVIZ (don't close the terminal!)
If you run hero-free-mode, then in RVIZ you can order HERO to move to a place in the world model by clicking the purple arrow.
Execute the commands in the section use HERO, followed by:
sshhero2 # If running on the real HERO
hero-free-mode # Keep track of this terminal; all the logs from HERO will appear here!
robot-console
Here, you can send queries to the robot such as (press TAB while typing for autocompletion):
bring me a coke from the dinner_table
bring a beer from the salon_table to the dinner_table
...
Execute commands in the section use HERO, followed by any of these commands:
sshhero2 # If running on the real HERO
# Then, run any command such as:
hero-open-gripper
Or from hero-console, execute in order:
sshhero2 # If running on the real HERO
hero-console # Opens python interactive terminal
hero.<tab> # Pressing TAB shows all of hero's parts, and lets you run all its functions
# Example: hero.head.reset()
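As a toy illustration of what this attribute-based interface looks like (the classes below are invented for this sketch; they are not the real hero API):

```python
class Head:
    """Stand-in for a robot body part with its own commands."""
    def reset(self):
        # The real hero.head.reset() moves the head to its default pose;
        # here we just return a string so the sketch runs anywhere.
        return "head reset"

class Robot:
    """Stand-in for the `hero` object exposed by hero-console."""
    def __init__(self):
        self.head = Head()

hero = Robot()
print(hero.head.reset())
# Tab-completion on `hero.` simply lists attributes like these:
print([name for name in dir(hero) if not name.startswith("_")])
```

Tab-completion in the real console works the same way: it enumerates the attributes of the `hero` object.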
Execute commands in the section use HERO, followed by:
hero-challenge-<tab> # Example: hero-challenge-take-out-the-garbage
Execute commands in the section use HERO, followed by:
hero-free-mode # Keep track of this terminal; all the logs from HERO will appear here!
python3 /path/to/script.py # For python scripts
./script # For other executables (don't forget to make it executable by running `chmod +x path/to/script`!!)
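For example, creating a small script, making it executable, and running it directly (the path and file name here are just placeholders):

```shell
# Create a tiny script; the shebang on the first line picks the interpreter
cat > /tmp/hello_hero.sh <<'EOF'
#!/bin/sh
echo "hello from HERO script"
EOF

chmod +x /tmp/hello_hero.sh   # make it executable
/tmp/hello_hero.sh            # run it directly
```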
There are plenty of sources for ROS out there, but here is an overview on how ROS works:
- packages hold nodes
- nodes perform computation
- nodes communicate with each other by sending messages through topics, services, and the parameter server
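As a rough sketch of the publish/subscribe idea behind topics (a toy in-process implementation for illustration only, not the real ROS API):

```python
# Toy "topic" registry: each topic name maps to a list of subscriber callbacks
subscribers = {}

def subscribe(topic, callback):
    subscribers.setdefault(topic, []).append(callback)

def publish(topic, message):
    # Every node subscribed to this topic receives the message
    for callback in subscribers.get(topic, []):
        callback(message)

received = []
subscribe("/hero/base_scan", received.append)   # a "node" listening to laser scans
publish("/hero/base_scan", {"ranges": [1.2, 0.8, 2.5]})
print(received)
```

In real ROS the same pattern runs across processes and machines, with the ROS master (started by roscore) matching publishers to subscribers.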
Here are some commands that can come in handy:
rosnode list # Lists all the ROS nodes
rosnode info <node> # Shows information about a specific node
rostopic list # Lists all the ROS topics
rostopic info <topic> # Shows information about a specific topic
rosrun <package> <executable> # Run a node from a package with ROS
# You can use `grep` to find a specific node/topic (e.g. any topic containing "image_raw"):
rostopic list | grep image_raw
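The same grep filtering works on any list of names; here is a self-contained example using a few of HERO's topic names as stand-in output for `rostopic list`:

```shell
# Simulated `rostopic list` output, filtered the same way you would filter the real one
printf '%s\n' /hero/base_scan /hero/head_rgbd_sensor/rgb/image_raw /hero/wrist_wrench/raw |
    grep image_raw
```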
If you want to gather raw (unprocessed) sensor data, there are various ways of doing so. This tutorial recommends some methods for each sensor.
As a general note, rosbag can be used to easily save data, as long as it is passed over a topic. The topics for the sensors on HERO are:
- Laser range finder:
- raw: /hero/base_scan
- filtered: /hero/base_laser/scan
- Head RGBD camera:
- rgb image: /hero/head_rgbd_sensor/rgb/image_raw
- depth image: /hero/head_rgbd_sensor/depth_registered/image
- camera info: /hero/head_rgbd_sensor/rgb/camera_info
- Wrist Force Torque sensor:
- raw data: /hero/wrist_wrench/raw
- filtered data: /hero/wrist_wrench/compensated
- Base bumper:
- front: /hero/base_f_bumper_sensor
- back: /hero/base_b_bumper_sensor
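For example, to record the laser scans and the head RGB images from the list above into a bag file (run this where the HERO software is active; `sensors.bag` is an arbitrary name chosen for this example):

```shell
# Record two of the topics above into one bag file; stop recording with Ctrl-C
rosbag record -O sensors.bag /hero/base_scan /hero/head_rgbd_sensor/rgb/image_raw

# Afterwards, inspect what was recorded:
rosbag info sensors.bag
```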
An RGBD image alone is often not enough for most sensor processing. Another important factor is the pose of the sensor at that point in time. For that reason, the image saver tool was created.