How do I efficiently display OpenCV webcam stream #4541
Heyo, I am new to Rust and asking for help. I have built a small app that captures a webcam stream using OpenCV, processes it with the user's settings, and then displays it. The problem I face is going from OpenCV RGB to ColorImage. Using
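For context, the conversion in question boils down to byte order: `egui::ColorImage::from_rgb` expects tightly packed RGB bytes, while OpenCV captures frames in BGR. Here is a dependency-free sketch of that channel swap — in practice `imgproc::cvt_color` with `COLOR_BGR2RGB` does the same job on the OpenCV side, this just shows what is happening:

```rust
/// Swap BGR byte order into the RGB order `egui::ColorImage::from_rgb` expects.
/// Dependency-free sketch; assumes a tightly packed 3-channel buffer.
fn bgr_to_rgb(bgr: &[u8]) -> Vec<u8> {
    assert!(bgr.len() % 3 == 0, "expected tightly packed 3-channel data");
    bgr.chunks_exact(3)
        .flat_map(|px| [px[2], px[1], px[0]]) // reverse each pixel's channels
        .collect()
}

fn main() {
    // One blue pixel followed by one red pixel, in BGR order.
    let bgr = [255u8, 0, 0, 0, 0, 255];
    let rgb = bgr_to_rgb(&bgr);
    assert_eq!(rgb, [0, 0, 255, 255, 0, 0]);
    println!("{rgb:?}");
}
```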
I don't know about calling the backend directly; you may want to look at discussions like #1641, #3457, and #1606 for ideas. But for what it's worth, enabling optimizations in the debug build makes a huge difference. In my test it took the time to generate the texture from 140 ms down to about 5 ms for a 2520x1920 image (disclaimer: my camera is 640x480, so I had OpenCV interpolate the image up for the test, and I have an i7-1260P). Here is my entire test code for reference:
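A related knob, if raising `opt-level` for your own crate makes incremental rebuilds too slow: Cargo can optimize only the dependencies (which is where the per-pixel loops in egui live) while keeping your own code at the default debug level. A sketch of that profile, assuming a standard Cargo setup:

```toml
# Optimize dependencies only; your own crate keeps fast, unoptimized debug builds.
[profile.dev.package."*"]
opt-level = 3
```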
Cargo.toml:

```toml
[package]
name = "opencv-egui"
version = "0.1.0"
edition = "2021"

[dependencies]
eframe = "0.27.0"
opencv = "0.91.0"

[profile.dev]
opt-level = 1
```

main.rs:

```rust
use std::time::Instant;

use eframe::{
    egui::{self, DragValue, Ui},
    App,
};
use opencv::{
    imgproc,
    prelude::*,
    videoio::{self, VideoCapture},
    Result,
};

fn main() -> Result<(), eframe::Error> {
    let options = eframe::NativeOptions::default();
    eframe::run_native(
        env!("CARGO_PKG_NAME"),
        options,
        Box::new(|_cc| Box::<Application>::default()),
    )
}

pub struct Application {
    cam: VideoCapture,
    scale: f64,
}

impl Default for Application {
    fn default() -> Self {
        let cam = videoio::VideoCapture::new(0, videoio::CAP_ANY).unwrap(); // 0 is the default camera
        Self { cam, scale: 1.0 }
    }
}

impl App for Application {
    fn update(&mut self, ctx: &eframe::egui::Context, _frame: &mut eframe::Frame) {
        egui::CentralPanel::default().show(ctx, |ui| {
            self.display_stream(ui).expect("failed to render frame");
        });
        // Repaint continuously so each new camera frame is shown.
        ctx.request_repaint();
    }
}

impl Application {
    fn display_stream(&mut self, ui: &mut Ui) -> Result<(), Box<dyn std::error::Error>> {
        let mut frame = Mat::default();
        self.cam.read(&mut frame)?;
        if frame.size()?.width > 0 {
            // egui expects RGB, OpenCV captures BGR.
            let mut rgb = Mat::default();
            imgproc::cvt_color(&frame, &mut rgb, imgproc::COLOR_BGR2RGB, 0)?;
            // Resize according to the user-selected scale (used here to test
            // larger-than-camera resolutions).
            let mut scaled = Mat::default();
            imgproc::resize(
                &rgb,
                &mut scaled,
                Default::default(),
                self.scale,
                self.scale,
                imgproc::INTER_CUBIC,
            )?;
            frame = scaled;
        }
        // Time only the Mat -> ColorImage -> texture path.
        let now = Instant::now();
        let frame_size = [frame.cols() as usize, frame.rows() as usize];
        let img = egui::ColorImage::from_rgb(frame_size, frame.data_bytes()?);
        let texture = ui.ctx().load_texture("frame", img, Default::default());
        let elapsed = now.elapsed();
        ui.horizontal(|ui| {
            ui.add(
                DragValue::new(&mut self.scale)
                    .clamp_range(0.02..=4.0)
                    .speed(0.05)
                    .prefix("Scale:"),
            );
            ui.label(format!("Size: {frame_size:?} | Time: {elapsed:#?}"));
        });
        ui.image(&texture);
        Ok(())
    }
}
```
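One caveat with `frame.data_bytes()?` above: it assumes the Mat's rows are tightly packed (see `Mat::is_continuous`). If a Mat arrives with padded rows, the pixel rows have to be repacked into one contiguous buffer before `ColorImage::from_rgb` will accept them. A dependency-free sketch of that repacking, where `stride` is a stand-in for the Mat's row stride in bytes:

```rust
/// Copy `height` rows of `width * 3` RGB bytes out of a buffer whose rows are
/// `stride` bytes apart (`stride >= width * 3`). Sketch of what to do when an
/// OpenCV Mat is not continuous; `stride` stands in for the Mat's row step.
fn repack_rows(data: &[u8], width: usize, height: usize, stride: usize) -> Vec<u8> {
    let row_bytes = width * 3;
    assert!(stride >= row_bytes, "stride must cover a full row of pixels");
    assert!(height == 0 || (height - 1) * stride + row_bytes <= data.len());
    let mut out = Vec::with_capacity(row_bytes * height);
    for row in 0..height {
        let start = row * stride;
        out.extend_from_slice(&data[start..start + row_bytes]);
    }
    out
}

fn main() {
    // A 2x2 image whose rows are 8 bytes apart: 6 pixel bytes + 2 padding bytes.
    let data = [
        1, 2, 3, 4, 5, 6, 0, 0, // row 0 + padding
        7, 8, 9, 10, 11, 12, 0, 0, // row 1 + padding
    ];
    let packed = repack_rows(&data, 2, 2, 8);
    assert_eq!(packed, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]);
    println!("{packed:?}");
}
```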