Object Classification & Detection
Joshua Williams edited this page Oct 29, 2021
Using the cameras (and possibly sensors) on board Voltron, we would like the vehicle to recognize objects and classify them accordingly. Some examples are attached, but ideally, this system would be able to recognize any potential hazard and make decisions based on the classification.
OVERALL OBJECT DETECTION MODEL:
YOLOv4 with the ZED SDK will be used to detect and track objects
The reasons we want to use YOLOv4 (at least for now):
- YOLO is touted for its speed and accuracy as one of the premier object detection models
- ZED's SDK natively supports Darknet (the framework YOLO runs on!), so the implementation would be easier
- Additionally, the ZED SDK adds its own little spice by supplementing each bounding box annotation with the depth of the object our model is labeling. The ZED camera has stereo depth sensing, which is how it can supply that extra data alongside our YOLO detections. Depth is useful for a variety of purposes, like keeping distance from cars or judging how far away a sign or pedestrian is.
Input: Image/Video frames
Output: Bounding Box information + Depth relative to camera of Identifiable Objects
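As a minimal sketch of how a bounding box and a depth map combine into the output above: the function below (`box_depth` is a hypothetical helper, not part of the ZED SDK; in practice the SDK supplies the depth map and the per-object distance directly) takes the median depth inside a detection's box.

```python
import numpy as np

def box_depth(depth_map: np.ndarray, box: tuple) -> float:
    """Estimate an object's distance as the median depth inside its bounding box.

    `box` is (x_min, y_min, x_max, y_max) in pixel coordinates. The median,
    rather than the mean, keeps the estimate robust to background pixels and
    invalid (NaN) depth readings near the box edges.
    """
    x1, y1, x2, y2 = box
    patch = depth_map[y1:y2, x1:x2]
    valid = patch[np.isfinite(patch)]  # unmeasured pixels show up as NaN/inf
    return float(np.median(valid)) if valid.size else float("nan")

# Toy 4x4 depth map (meters): an "object" at 2 m in the center, background at 10 m.
depth = np.full((4, 4), 10.0)
depth[1:3, 1:3] = 2.0
print(box_depth(depth, (1, 1, 3, 3)))  # 2.0
```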
Road signs
- For now, we have the LISA dataset for stop and yield signs; however, since mapping data will already tell us where signs are located, we'll shift our focus to other classes/labels, like pedestrians and other vehicles. We can incorporate road signs after an initial trial of training a model on only pedestrians and vehicles.
- Road sign detection can be done in a few ways. The distinguishing factor for us will be speed, because the system has to run in real time on a moving vehicle.
- Several ways exist for this already, but combining and taking the best parts of each will help us the most
- Road sign detection can be done with Lidar, though our implementation will have to be fairly new
- Road signs are also coated in a retroreflective film, making them "brighter" than their surroundings when the contrast is turned all the way up
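A rough sketch of that brightness trick (the `sign_candidates` helper, its gain, and its threshold are all illustrative assumptions, not tuned values): boost contrast, clip, and keep only pixels that saturate.

```python
import numpy as np

def sign_candidates(gray: np.ndarray, gain: float = 4.0, thresh: float = 0.9) -> np.ndarray:
    """Return a boolean mask of unusually bright pixels after a contrast boost.

    Pushes intensities away from the image mean by `gain` and clips to [0, 1];
    a retroreflective coating should saturate while most of the scene stays dark.
    """
    boosted = np.clip((gray - gray.mean()) * gain + 0.5, 0.0, 1.0)
    return boosted >= thresh

# Toy frame: dim background (0.3) with one very bright "sign" pixel (0.95).
frame = np.full((5, 5), 0.3)
frame[2, 2] = 0.95
mask = sign_candidates(frame)
print(np.argwhere(mask))  # only the bright patch survives
```

The surviving regions would then be cropped and passed to a classifier trained on open-source sign datasets.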
- This can be used to identify potential road-sign candidates, which can then be classified using open-source image datasets

Pedestrians
- We have the EuroCity dataset, which contains a plethora of annotated data on pedestrians and motorists during both night and day; the dataset is already split into training, testing, and validation sets for us. Last I recall it was a 60-25-15 split, but will update once confirmed.
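If we ever need to re-split data ourselves, the partition is simple to reproduce. The sketch below uses the recalled (unconfirmed) 60-25-15 ratio as a placeholder; `split_dataset` is a hypothetical helper, not something EuroCity ships.

```python
import random

def split_dataset(items, ratios=(0.60, 0.25, 0.15), seed=0):
    """Shuffle and partition `items` into train/test/validation subsets.

    The default 60-25-15 ratio mirrors the split we recall EuroCity using;
    swap in the confirmed numbers once verified.
    """
    assert abs(sum(ratios) - 1.0) < 1e-9
    rng = random.Random(seed)  # fixed seed keeps the split reproducible
    shuffled = items[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * ratios[0])
    n_test = int(n * ratios[1])
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_test],
            shuffled[n_train + n_test:])

train, test, val = split_dataset(list(range(100)))
print(len(train), len(test), len(val))  # 60 25 15
```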
Crosswalks
- Crosswalk detection can also be simplified, similar to road signs. All crosswalks are built to a fairly standard set of rules/guidelines, so a standard image classification algorithm should be possible to implement.
- There isn't a "trick" we can use like the reflective coating on road signs, though.
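One way to exploit the standardized geometry instead: zebra crossings alternate painted and unpainted bands of regular width, so a crude periodicity check can pre-filter candidate regions before a full classifier. This heuristic (`looks_like_crosswalk` and its stripe count are assumptions for illustration) is a sketch, not our chosen method.

```python
import numpy as np

def looks_like_crosswalk(gray: np.ndarray, min_stripes: int = 3) -> bool:
    """Heuristic stripe check: scan down the image and count alternations
    between bright (paint) and dark (asphalt) rows."""
    bright_rows = gray.mean(axis=1) > gray.mean()          # bright vs dark rows
    transitions = int(np.sum(bright_rows[1:] != bright_rows[:-1]))
    return transitions // 2 >= min_stripes                 # each stripe = 2 edges

# Toy image: alternating bright/dark bands, two rows each.
stripes = np.tile(np.repeat([1.0, 0.0], 2), 4).reshape(-1, 1) * np.ones((1, 8))
print(looks_like_crosswalk(stripes))  # True
```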
Other Vehicles
- Recognizing and identifying other vehicles is very important.
- The classification is especially important because the vehicle should react differently to each type of vehicle or object (for example, leaving a longer gap behind a truck than behind a car).
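To make the class-dependent-reaction idea concrete, here is a toy policy table combining a detection label with the ZED depth estimate. Every class name, distance, and action string below is a placeholder assumption, not a tuned or decided value.

```python
# Hypothetical follow distances in meters; None means "never follow this class".
FOLLOW_DISTANCE_M = {
    "car": 8.0,
    "truck": 12.0,       # heavier vehicle -> longer braking distance -> larger gap
    "pedestrian": None,  # never "follow" a pedestrian: yield instead
    "stop_sign": None,
}

def reaction(label: str, depth_m: float) -> str:
    """Map a detected class plus its depth estimate to a high-level action."""
    if label == "pedestrian" and depth_m < 15.0:
        return "yield"
    if label == "stop_sign" and depth_m < 20.0:
        return "prepare_to_stop"
    gap = FOLLOW_DISTANCE_M.get(label)
    if gap is not None and depth_m < gap:
        return "slow_down"
    return "proceed"

print(reaction("pedestrian", 10.0))  # yield
print(reaction("car", 5.0))          # slow_down
```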