Potential thesis projects
- Door opening (and closing) with planning involved, e.g. have the robot reason about: does it take more time to open the door on the shortest path, or is taking the long way around, where doors are likely already open, faster? The robot can learn which doors are typically open from observation over (extended) time, and should take into account how long it takes to open a (specific) door (see the sketch after this list).
- Figure out how to open linear and revolving doors, possibly using reinforcement learning in simulation and then transferring the result to the real robot.
- Generic door opener based on lazy manipulation.
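Below is a minimal sketch of the cost comparison mentioned in the first item. The route/door representation, times and probabilities are hypothetical placeholders; in practice they would come from the navigation stack and from long-term observation of the doors.

```python
# Minimal sketch: compare expected traversal times of routes whose doors may be
# closed, using door-open probabilities learned from observation (all values
# here are hypothetical placeholders).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Door:
    name: str
    p_open: float      # learned fraction of time this door is observed open
    open_time: float   # seconds needed to open this specific door

@dataclass
class Route:
    name: str
    drive_time: float  # seconds of pure driving
    doors: List[Door] = field(default_factory=list)

    def expected_time(self) -> float:
        # Each door that turns out to be closed costs its opening time.
        return self.drive_time + sum((1.0 - d.p_open) * d.open_time
                                     for d in self.doors)

def pick_route(routes: List[Route]) -> Route:
    return min(routes, key=Route.expected_time)

if __name__ == "__main__":
    short = Route("shortest path", 20.0,
                  [Door("lab door", p_open=0.2, open_time=25.0)])
    detour = Route("long way around", 35.0,
                   [Door("corridor door", p_open=0.9, open_time=25.0)])
    best = pick_route([short, detour])
    print(f"{best.name}: {best.expected_time():.1f} s expected")
```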
Our robot is often tasked with finding something. In a cluttered environment there are many occlusions, so a single viewpoint will not be able to see everything. This project asks the question: which viewpoint(s) should the robot take to be reasonably sure it has seen everything, or to conclude that something is really not there?
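One possible starting point is a greedy next-best-view selection, sketched below. The viewpoint visibility sets are hypothetical; in practice they would come from the world model and a visibility check, and the stopping criterion could be probabilistic instead of a fixed coverage fraction.

```python
# Minimal sketch: greedily pick viewpoints until a target fraction of cells in
# the search area has been observed (visibility data here is hypothetical).
from typing import Dict, List, Set

def select_viewpoints(visible: Dict[str, Set[int]],
                      all_cells: Set[int],
                      target_coverage: float = 0.95) -> List[str]:
    chosen: List[str] = []
    seen: Set[int] = set()
    while len(seen) / len(all_cells) < target_coverage:
        # Next best view = the one revealing the most still-unseen cells.
        best = max(visible, key=lambda v: len(visible[v] - seen), default=None)
        if best is None or not visible[best] - seen:
            break  # nothing left to gain; remaining cells are not observable
        chosen.append(best)
        seen |= visible.pop(best)
    return chosen

if __name__ == "__main__":
    cells = set(range(10))
    views = {"left of table": {0, 1, 2, 3}, "behind couch": {3, 4, 5},
             "doorway": {6, 7, 8, 9}, "corner": {1, 2}}
    print(select_viewpoints(views, cells))
    # ['left of table', 'doorway', 'behind couch']
```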
It is often not enough for the robot to understand what is being said; it also needs to figure out who is talking. This can be used in two use cases:
- recognise who is giving a certain command.
- find who is calling the robot.
Image recognition has come a long way since we first implemented it in our software. We could definitely look into how these techniques can be used to perform not only classification of objects but also segmentation. Two applications of this could be:
- Recognition of furniture
- Recognition of objects on tables
Previous research managed to perform grasp monitoring for objects weighing more than 45 grams. We would be very interested in figuring out how to detect grasps of objects with lower weights. Available sensors for this are the force-torque sensor in the robot's wrist and the cameras. Potentially more information can be gained from the fingers of the robot.
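A very simple baseline is to average the wrist wrench over a window and compare the vertical force before and after lifting, as sketched below. The axis convention, window length and mass threshold are assumptions for illustration, not facts about our setup.

```python
# Minimal sketch: estimate held mass from the wrist force-torque sensor by
# comparing the averaged vertical force before and after lifting. Averaging a
# long window suppresses noise, which matters for objects lighter than ~45 g.
# Axis convention and thresholds are assumptions, not our actual setup.
import numpy as np

G = 9.81  # m/s^2

def mean_vertical_force(wrench_samples: np.ndarray) -> float:
    """wrench_samples: N x 6 array of [fx, fy, fz, tx, ty, tz]."""
    return float(np.mean(wrench_samples[:, 2]))

def grasp_detected(before: np.ndarray, after: np.ndarray,
                   min_mass_kg: float = 0.010) -> bool:
    delta_f = abs(mean_vertical_force(after) - mean_vertical_force(before))
    return delta_f / G >= min_mass_kg

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    empty = rng.normal(0.0, 0.05, size=(500, 6))   # sensor noise only
    holding = empty.copy()
    holding[:, 2] += 0.020 * G                     # a 20 g object in the hand
    print(grasp_detected(empty, holding))          # True
```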
A service robot should be able to recognise people, and should also be able to recognise someone it has met or seen previously.
Extend the work done by Naomi to perform visual servoing with the arm.
A current issue with the furniture fitting is that our algorithm will provide a result regardless of whether the image contains sufficient information. One way of solving this is to do the fitting using features.
In a household environment there are many objects on the floor. Currently Hero will gladly drive over them, as any area without detected obstacles is deemed acceptable to drive over. We are looking for a method that can avoid these obstacles. One option here is to make a closed-world assumption and detect open floor space rather than the obstacles.
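A minimal sketch of that closed-world idea is shown below: depth points near floor height are binned into a 2D grid, and only cells with enough confirmed floor readings count as driveable, so occluded or covered cells default to "not free". The frame convention, cell size and thresholds are placeholder assumptions.

```python
# Minimal sketch: closed-world free-space check. A cell is driveable only if
# enough depth points were actually observed at floor height inside it; cells
# with no confirmed floor (occluded, or covered by an object) are not free.
# Frame convention (z up, floor at z = 0), cell size and thresholds are
# placeholder assumptions.
import numpy as np

def driveable_cells(points: np.ndarray, cell: float = 0.05,
                    floor_z: float = 0.0, tol: float = 0.02,
                    min_hits: int = 5) -> dict:
    """points: N x 3 point cloud in a frame with z up and the floor at floor_z."""
    on_floor = np.abs(points[:, 2] - floor_z) < tol
    hits: dict = {}
    for x, y in points[on_floor, :2]:
        key = (int(np.floor(x / cell)), int(np.floor(y / cell)))
        hits[key] = hits.get(key, 0) + 1
    return {k: v >= min_hits for k, v in hits.items()}

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    floor = np.column_stack([rng.uniform(0, 1, 2000),
                             rng.uniform(0, 1, 2000),
                             rng.normal(0.0, 0.005, 2000)])
    grid = driveable_cells(floor)
    print(sum(grid.values()), "of", len(grid), "observed cells are driveable")
```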
We made a topological action planner for navigation. We would like to develop this further. Topics that are open to further development:
- Generating the navigation graph: from what? Using SLAM?
- Moving furniture
Chairs are challenging objects in a household environment. They are large but not static. Therefore they cannot be ignored during navigation, but they also cannot reliably be added to a world model. Furthermore, the robot will often need to work around them or even interact with people sitting in them. Possible avenues to get started with this are:
- Detection of chairs
- Manipulation of chairs
- (once a chair is detected) Detect whether it is occupied
A service robot should be able to deliver drinks without spilling. The question is whether this is at all possible using the current hardware. If it is possible, under what conditions can this task be accomplished, and what sensing and monitoring are required to ensure no spillage occurs? If it is not possible using the current hardware, what changes should be made to the hardware of the robot?
Initial testing seems to suggest that reducing the maximum acceleration of the local planner helps reduce spilling. However, even with a glass only half full this was not enough to prevent spilling entirely. Another aspect to take into account is that the robot vibrates harder when driving at high speeds. One could look into how hard the hand is vibrating. Perhaps we can use the force-torque sensor in the robot's hand to detect this, or perhaps an external sensor is needed.
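One simple way to quantify that vibration from the existing force-torque signal is to look at the fraction of signal power above a cutoff frequency, as in the sketch below. The sample rate, cutoff frequency and the idea that low frequencies correspond to deliberate motion are illustrative assumptions.

```python
# Minimal sketch: quantify hand vibration from the wrist force-torque signal as
# the fraction of signal power above a cutoff frequency; slow force changes are
# deliberate motion, the high-frequency band is vibration. Sample rate, cutoff
# and the synthetic signals below are illustrative assumptions.
import numpy as np

def vibration_ratio(fz: np.ndarray, sample_rate: float,
                    cutoff_hz: float = 5.0) -> float:
    fz = fz - np.mean(fz)                           # remove the static load
    power = np.abs(np.fft.rfft(fz)) ** 2
    freqs = np.fft.rfftfreq(len(fz), d=1.0 / sample_rate)
    return float(np.sum(power[freqs > cutoff_hz]) / np.sum(power))

if __name__ == "__main__":
    rate = 200.0
    t = np.arange(0.0, 2.0, 1.0 / rate)
    gentle = 2.0 * np.sin(2 * np.pi * 0.5 * t)              # slow, smooth drive
    shaky = gentle + 0.8 * np.sin(2 * np.pi * 25.0 * t)     # added vibration
    print(f"gentle: {vibration_ratio(gentle, rate):.2f}, "
          f"shaky: {vibration_ratio(shaky, rate):.2f}")
```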
Our service robot will need to be able to set a table. Due to our current method of object segmentation, cutlery is currently not detected. We will need to find a way to recognise cutlery in order to work with it. Our current grasping pipeline is also not capable of picking up and placing cutlery. Once cutlery is detected, what changes need to be made to the grasping pipeline to enable us to manipulate cutlery?
Previous research allows us to model a table using data from the RGB-D camera. We would like to develop this further.
- Pushing a wheelchair
- Moving chess pieces
- Open a fridge
- Take something from the fridge
- Make tea
- Mop the floor
- Assist in the supermarket
- Get a bottle of shampoo from the bathroom
- Open a can