Best Approach for Displaying Multiple Camera Streams in Qt 6.9.1 with Microservices
-
Hi,
I’m working with Qt 6.9.1 in a microservice-based architecture. Our current setup involves parsing RTSP camera streams, converting them into JPEG format, and rendering them through QML’s VideoOutput using a VideoSink.
We need to handle multiple video streams simultaneously—potentially more than 10–15 at once. What would be the most scalable and efficient approach to achieve this?
Regards,
Adnan -
Hi,
What are the specs of the streams?
Is there a particular need to convert them to JPEG?
How are you doing that conversion?
What is the target OS to run that application? -
Hi,
- Currently, we are considering 1024×768 @ 30 fps and 720p as standard resolutions. However, in some cases, 1080p streams may also be required, depending on the camera capabilities.
- The solution needs to be ONVIF compliant, as ONVIF mandates support for streaming MJPEG video over RTSP.
- Our approach uses a GStreamer pipeline within a service application to read the RTSP stream. The raw frames are extracted through an appsink and written to a shared memory segment. A separate streaming service consumes this shared memory, encodes the frames into MJPEG, and sends them to the client UI application via gRPC, where they are displayed using a VideoOutput element's videoSink. The same shared memory data will also be leveraged for video analysis.
- The target OS is Windows for now, but the implementation should remain as cross-platform as possible for future portability.
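For a rough sense of scale, the shared-memory segment for raw frames can be sized up front. A minimal sketch in plain C++, assuming BGRA output from the appsink (4 bytes per pixel) and two buffers per stream so writer and readers never clash — both assumptions for illustration, not details from this thread:

```cpp
#include <cstdint>

// Bytes for one raw frame at the given resolution and pixel depth.
constexpr std::uint64_t frameBytes(std::uint64_t w, std::uint64_t h,
                                   std::uint64_t bytesPerPixel) {
    return w * h * bytesPerPixel;
}

// Total shared memory for N streams, each with a fixed number of buffers.
constexpr std::uint64_t segmentBytes(std::uint64_t streams, std::uint64_t w,
                                     std::uint64_t h, std::uint64_t bpp,
                                     std::uint64_t buffersPerStream) {
    return streams * buffersPerStream * frameBytes(w, h, bpp);
}

// 15 double-buffered 1080p BGRA streams already need ~249 MB of shared
// memory, before any encoder or analysis working buffers.
static_assert(frameBytes(1920, 1080, 4) == 8294400ULL);
static_assert(segmentBytes(15, 1920, 1080, 4, 2) == 248832000ULL);
```

A calculation like this is worth doing early: it shows why a planar or subsampled format (e.g. NV12 at 1.5 bytes per pixel) may be preferable to BGRA once the stream count grows.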
-
Just to be sure I understand things correctly:
- you have three different applications:
  - stream receiver
  - converter
  - remote GUI
- each running independently
Correct?
-
Yes, each service is running independently.
Currently, we’re able to receive and play the stream successfully, but I want to ensure that our approach is correct, scalable, and efficient. So far, we have only tested with 4 streams on a PC, but in our actual use case we will be handling multiple streams. From a software perspective, we want to be certain that we’re on the right track.
Regards,
Adnan -
@greed_14 I have developed an application like that, for video surveillance: https://github.com/jordanprog86/watcher
Your main problem will be to make sure that the program does not consume excessive memory. You can process the streams on the GPU, but avoid too many copies. -
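One common way to keep memory flat and avoid per-frame allocations or copies is to recycle frame buffers from a fixed pool. A minimal sketch in standard C++ — the `FramePool` class is hypothetical, not taken from the linked project:

```cpp
#include <cstddef>
#include <memory>
#include <mutex>
#include <vector>

// Reusable frame pool: buffers are recycled instead of reallocated for
// every frame, so memory use stays bounded regardless of frame rate.
class FramePool {
public:
    FramePool(std::size_t frameBytes, std::size_t count) {
        for (std::size_t i = 0; i < count; ++i)
            free_.push_back(
                std::make_unique<std::vector<unsigned char>>(frameBytes));
    }

    // Hand out a buffer; the custom deleter returns it to the pool
    // instead of freeing it.
    std::shared_ptr<std::vector<unsigned char>> acquire() {
        std::lock_guard<std::mutex> lock(mutex_);
        if (free_.empty())
            return nullptr;  // pool exhausted: caller should drop the frame
        auto *raw = free_.back().release();
        free_.pop_back();
        return std::shared_ptr<std::vector<unsigned char>>(
            raw, [this](std::vector<unsigned char> *p) { recycle(p); });
    }

private:
    void recycle(std::vector<unsigned char> *p) {
        std::lock_guard<std::mutex> lock(mutex_);
        free_.emplace_back(p);
    }

    std::mutex mutex_;
    std::vector<std::unique_ptr<std::vector<unsigned char>>> free_;
};
```

Because the buffers are handed out as `shared_ptr`s, the decoder, encoder, and analysis stages can all hold the same frame without copying it; the buffer returns to the pool only when the last holder releases it. The pool must outlive every outstanding frame.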
@Ronel_qtmaster thanks for your response. So have you used a similar method, dividing the tasks into microservices and using GStreamer to obtain raw frames and send them to the UI?
What are you using for face detection? -
@greed_14 Yes, exactly. I have a thread class that gets the streamed images in real time and sends them to the UI. I do not make any copies of the images, as that consumes a lot of memory. I am using FFmpeg for frame streaming and OpenCV for object detection. For QML it is the same approach. So each stream consists of a thread for getting the streamed images, an image processing algorithm, and a memory allocation feature. You should also consider how to run the program on the GPU, or how to load those images into the GPU.