Added example using queues for processing just last received camera frame #37

Open
wants to merge 1 commit into master

Conversation

JEDominguezVidal

A new example called "live_scene_and_gaze_nocalib_using_queues.py" has been added to the apps folder.
It is based on "live_scene_and_gaze_nocalib.py" and includes two modifications:

  1. Imports queue and threading and adds a frame_grabber() function, which is used so that only the most recently received frame is processed (a minimal sketch follows this list).
  2. Imports YOLO from ultralytics. In addition to drawing a circle at the gaze position, the received frame is passed to YOLOv8 to detect what is in the scene, and each detected object's bounding box is drawn in red or green depending on the gaze location (a second sketch appears further below).
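
A minimal sketch of the frame-grabber pattern from the first point, assuming an OpenCV VideoCapture source; the stream URL, variable names, and the placement of the processing loop are illustrative, not taken verbatim from the example:

```python
import queue
import threading

import cv2

# A queue of size 1 only ever holds the most recently received frame.
frame_queue = queue.Queue(maxsize=1)


def frame_grabber(capture, frame_queue):
    """Continuously read frames and keep only the newest one in the queue."""
    while True:
        ret, frame = capture.read()
        if not ret:
            break
        # If a stale frame is still waiting, drop it before storing the new one.
        if frame_queue.full():
            try:
                frame_queue.get_nowait()
            except queue.Empty:
                pass
        frame_queue.put(frame)


# Hypothetical usage: the stream URL is a placeholder for the scene-camera stream.
capture = cv2.VideoCapture("rtsp://<glasses-address>:8554/live/scene")
threading.Thread(target=frame_grabber, args=(capture, frame_queue), daemon=True).start()

while True:
    frame = frame_queue.get()  # always the latest frame; older frames were discarded
    # ... heavy post-processing (e.g. YOLOv8 inference) goes here ...
```

Because the grabber thread overwrites the single queue slot, the consumer never works through a backlog of old frames, which is what keeps the processing latency bounded when the per-frame work is slow.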

The first modification allows for faster processing. This improvement may go unnoticed if the machine's CPU is powerful enough, but it becomes noticeable when heavy post-processing is performed on each image. The second modification illustrates the difference: with YOLOv8 running on the CPU, the observed delay between gaze and camera image is about 3 seconds without frame_grabber() and about 0.4 seconds with it.
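
A hedged sketch of the second modification, i.e. gaze-dependent coloring of YOLOv8 detections; the weight file, the color convention (green when the gaze falls inside a box, red otherwise), and the gaze-coordinate variables are assumptions for illustration, not necessarily what the example itself does:

```python
import cv2
from ultralytics import YOLO

# Assumed model weights; the actual example may load a different YOLOv8 variant.
model = YOLO("yolov8n.pt")


def draw_detections(frame, gaze_x, gaze_y):
    """Draw YOLOv8 bounding boxes, colored according to the gaze position.

    gaze_x and gaze_y are assumed to be pixel coordinates of the gaze point on
    the frame; how they are obtained depends on the gaze-streaming code.
    """
    results = model(frame, verbose=False)  # CPU inference on the latest frame
    for box in results[0].boxes.xyxy:      # each box as an (x1, y1, x2, y2) tensor
        x1, y1, x2, y2 = map(int, box.tolist())
        inside = x1 <= gaze_x <= x2 and y1 <= gaze_y <= y2
        color = (0, 255, 0) if inside else (0, 0, 255)  # BGR: green if gazed at, else red
        cv2.rectangle(frame, (x1, y1), (x2, y2), color, 2)
    # Gaze marker, as in the original example.
    cv2.circle(frame, (gaze_x, gaze_y), 10, (255, 0, 0), 3)
    return frame
```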

Nonetheless, this does NOT guarantee synchronization between gaze and camera image.
