Fiducial Tracking
In FRC, each game typically will have a goal that robots will need to score game elements on/in/at. These goals are typically lined or marked with 3M retro-reflective tape to assist teams in locating them.
Tracking them has been made easy with COTS solutions such as Limelight, Gloworm, and Snakeeyes, but at the cost of bright green LEDs blasting everyone's retinas for 2.5 minutes every match.
These targets are also not always designed such that a robot can easily localize itself based on them, or a lot of work is needed to do so. This is a huge departure from robotics in the real world, which either uses inside-out tracking such as SLAM, or fiducials such as ArUco or AprilTag.
However, FIRST continues to employ the retro-reflective tape, as teams are used to it and have solutions for tracking it available to them.
Well, if FIRST is ever to consider adding fiducials to an official FRC field, it's likely only if there are canned solutions already available and proven effective. Even more important than being effective, they need to be more effective than what teams use today.
Volunteers and teams alike have raised concerns about the bright lights in years past, so there is precedent for people wanting something better.
Fiducials such as AprilTags are this magical something.
Computer vision works by feeding an algorithm (or many of them) with an image and some parameters to set up these algorithms.
In the case of retro-reflective tape tracking, the input image is HSV thresholded, "contours" (blobs) are found in the remaining data, and those contours are then filtered to drop anything unwanted. For good stable tracking, the camera is commonly dropped to a very low exposure, relying on the bright light source and the light reflected off the retro-reflective tape to leave as little unnecessary data in the image as possible.
This means humans can't really use the image from the camera unless they switch to a "driver mode", disabling vision tracking.
This sucks. Not only is it highly susceptible to any other reflections from the light source, but also to overhead lighting that may get picked up or otherwise interfere.
Additionally, it takes a lot of legwork to localize from these targets, and can still be quite noisy.
Fiducials require no dedicated light source, just sufficient ambient lighting and a decent resolution image.
They require no additional algorithms to make the input image usable, and the image can still be seen by humans.
As a bonus, they are designed to be easy to localize from, and can be generated with unique IDs to identify which one you are looking at.
TL;DR: Fiducials require no bright lights and no image modifications, leave the image human-usable, and give you everything needed to localize from them out of the box.