This is kind of a meta issue for the "Serving Drinks" scenario of RoboCup 2019 to keep track of what needs to be done. I will open a separate issue for each point listed below.
In "Serving Drinks" the robot has to find all party guests without a drink, take orders and deliver the drinks.
Things to clarify
Decide how to deliver drinks: The robot can either hand over drinks directly or deliver them on an attached tray. If we use the tray, we need an extra action for detecting whether a guest takes the wrong drink. Do we have a tray at all? If we hand over the drink (from a tray, or directly from the bar? -> clarify), we need to implement an action for that as well.
Implementation design choice: Should people be detected actively (i.e. should we implement an action for that), or should we continuously try to detect people in the background and store information about them (location, attributes) in the knowledge base, to be retrieved when needed? A sketch of the passive option follows below. This might be related: Continuous scene perception for accurate world modelling #73
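For the passive option, a minimal sketch of what the stored information could look like (plain Python for illustration; `PersonRecord`, `PeopleKnowledgeBase` and the field names are my assumptions, not the existing knowledge base interface):

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple
import time


@dataclass
class PersonRecord:
    """Everything the passive perception pipeline knows about one guest."""
    person_id: str
    location: Tuple[float, float, float]                       # last known position
    face_encoding: Optional[List[float]] = None                # for re-identification
    attributes: Dict[str, str] = field(default_factory=dict)   # e.g. clothing colour
    has_drink: Optional[bool] = None                           # None = not classified yet
    last_seen: float = field(default_factory=time.time)


class PeopleKnowledgeBase:
    """Caches whatever the continuous perception pipeline reports, so the
    scenario can query it instead of triggering detection explicitly."""

    def __init__(self) -> None:
        self._people: Dict[str, PersonRecord] = {}

    def update(self, record: PersonRecord) -> None:
        # Overwrite or insert; a real implementation would merge detections
        # of the same person (e.g. via face matching) instead of keying by ID.
        self._people[record.person_id] = record

    def guests_without_drink(self, max_age_s: float = 30.0) -> List[PersonRecord]:
        """Return recently seen guests that were classified as not holding a drink."""
        now = time.time()
        return [p for p in self._people.values()
                if p.has_drink is False and now - p.last_seen <= max_age_s]
```

The active alternative would wrap the same detection in an explicit action that the scenario triggers only when it needs fresh information; the trade-off is mainly data freshness versus keeping the perception pipeline running all the time.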
To do
Create action "find_people" for finding all persons in a room / in front of the robot; it should also remember their faces and possibly other attributes such as clothing or what they are holding (in particular, whether they are holding a drink). See the sketch below.
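As a rough illustration of what the action could return and how the scenario would use it (`DetectedPerson`, `serve_drinks`, `take_order` and `deliver_drink` are placeholder names; none of this exists in the repo yet):

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple


@dataclass
class DetectedPerson:
    """One guest returned by the proposed find_people action."""
    face_id: str                                              # to recognise the guest again at delivery time
    position: Tuple[float, float, float]                      # where the guest was seen
    clothing: Dict[str, str] = field(default_factory=dict)    # e.g. {"torso": "red shirt"}
    holding_drink: Optional[bool] = None                      # None if the classifier could not tell


def serve_drinks(find_people, take_order, deliver_drink) -> None:
    """Placeholder scenario loop: the three arguments stand in for the actions
    this issue proposes and do not exist in the codebase yet."""
    for guest in find_people():                               # -> list of DetectedPerson
        if guest.holding_drink:
            continue                                          # guest is already served
        order = take_order(guest)                             # keep the face_id together with the order
        deliver_drink(guest, order)
```

Whether find_people is an explicit action or just a query against the knowledge base depends on the design choice discussed above.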