Home
Executive Summary:
Driving fatigue is a major cause of fatal road accidents, largely because a drowsy driver is unable to take emergency action before a collision. This document proposes an effective approach that automatically detects the driver's drowsiness state in real time, alerts the user whenever continued drowsiness is detected, and, should the warnings be ignored, vibrates the back seat and stops the vehicle.
Raw electroencephalography (EEG) signals are transmitted from a four-channel consumer EEG device, the Interaxon Muse, to the user's personal computer. The proposed solution denoises the EEG signal with a 60 Hz band-pass filter. The signal is then converted to power spectral density by a short-time Fourier transform, from which 43 features are extracted with band-pass filters over the delta (0-3 Hz), theta (θ, 4-8 Hz), alpha (α, 8-12 Hz), beta (β, 12-26 Hz), and gamma (27-60 Hz) bands. The features are then passed to an already trained Artificial Neural Network (ANN) using the scaled conjugate gradient backpropagation model. The ANN was trained on the user's personal EEG data and their subjectively selected drowsiness levels using 454 adaptive weight variables; once training finished, these variables formed a personalized ANN for that user, and further training allows them to adapt even more closely to the user's profile. At run time the solution calls the ANN with one or several 43x1 input vectors, and the resulting 2x1 output is transmitted to the front-end graphical interface, where the percentage of drowsiness is displayed at a frame rate of 48 frames/s.
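A minimal sketch of this signal-processing step is shown below. The sampling rate (256 Hz, typical for the Muse), filter order, window length, and function names are illustrative assumptions rather than the project's actual code.

```python
# Sketch of EEG denoising and band-power feature extraction (assumptions noted above).
import numpy as np
from scipy.signal import butter, filtfilt, stft

FS = 256  # assumed Muse sampling rate (Hz)
BANDS = {  # frequency bands listed in the summary
    "delta": (0.5, 3),
    "theta": (4, 8),
    "alpha": (8, 12),
    "beta": (12, 26),
    "gamma": (27, 60),
}

def denoise(eeg, low=0.5, high=60.0, fs=FS):
    """Band-pass filter one raw EEG channel up to 60 Hz (cutoffs assumed)."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg)

def band_powers(eeg, fs=FS):
    """Short-time Fourier transform followed by mean power in each band."""
    f, _, Zxx = stft(eeg, fs=fs, nperseg=fs)  # 1-second windows
    psd = np.abs(Zxx) ** 2
    return np.array([psd[(f >= lo) & (f <= hi)].mean() for lo, hi in BANDS.values()])
```

In the described pipeline, band powers (and related statistics) computed this way across the Muse's four channels would be assembled into the 43x1 feature vector fed to the ANN.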
Meanwhile, the personal computer's internal camera is turned on and captures images at a rate of 10 frames/s. Each snapshot is processed with the OpenCV library so that the face and eye locations are detected and stored in memory. The movement of the eyes and face is tracked and analyzed by the ANN alongside the 43 EEG-based features. This facial drowsiness detection method is simplified and is used only when the EEG signal is damaged or lost.
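The snippet below sketches one common way to perform such face and eye detection with OpenCV's bundled Haar cascades; the cascade choice, camera index, and frame-rate handling are illustrative assumptions, not the project's confirmed implementation.

```python
# Sketch of webcam face/eye detection with OpenCV Haar cascades (assumptions noted above).
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)        # internal (built-in) camera
cap.set(cv2.CAP_PROP_FPS, 10)    # request ~10 frames/s as in the summary

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(roi)
        # face and eye coordinates would be stored here and passed to the classifier
    if cv2.waitKey(100) & 0xFF == ord("q"):  # ~10 fps loop, quit on 'q'
        break

cap.release()
```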
The classification output is a 2x1 matrix whose first row gives the probability of drowsiness and whose second row gives the probability of alertness. If the drowsiness probability is above 50%, a serial command is transmitted to a microcontroller connected to two vibration motors. The vibration alerts the user as soon as drowsiness is detected, and a robotic toy car is switched off to represent stopping the vehicle.
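A hedged sketch of this alert path follows, assuming pyserial on the computer side; the port name, baud rate, and single-byte command values are hypothetical.

```python
# Sketch of the serial alert path triggered by the ANN's 2x1 output (assumptions noted above).
import serial

mcu = serial.Serial("COM3", 9600, timeout=1)  # port and baud rate assumed

def handle_output(output):
    """output[0] = P(drowsy), output[1] = P(alert), as described above."""
    if float(output[0]) > 0.5:
        mcu.write(b"V")  # hypothetical command: start vibration, stop the toy car
    else:
        mcu.write(b"N")  # hypothetical command: normal driving state
```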
The project has officially entered its final stage: individual testing is finished, and the team will complete integration testing while preparing for the design fair.