Best Approach for Displaying Multiple Camera Streams in Qt 6.9.1 with Microservices
-
Hi,
I’m working with Qt 6.9.1 in a microservice-based architecture. Our current setup involves parsing RTSP camera streams, converting them into JPEG format, and rendering them through QML’s VideoOutput using a VideoSink.
We need to handle multiple video streams simultaneously—potentially more than 10–15 at once. What would be the most scalable and efficient approach to achieve this?
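For reference, the QML side of such a setup can be reduced to a sketch like the following, assuming a hypothetical `frameRouter` object exposed from C++ that receives each output's `videoSink`, and a `streamIds` model listing the cameras (both names are placeholders, not real APIs):

```qml
import QtQuick
import QtMultimedia

// One VideoOutput per camera stream; each output's QVideoSink is handed
// to the C++ side, which pushes decoded frames into it.
Grid {
    columns: 4
    Repeater {
        model: streamIds        // placeholder: list of camera identifiers
        delegate: VideoOutput {
            width: 320
            height: 240
            // "frameRouter.attachSink" is an illustrative name for whatever
            // C++ object routes incoming frames to the right sink.
            Component.onCompleted: frameRouter.attachSink(modelData, videoSink)
        }
    }
}
```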
Regards,
Adnan -
Hi,
What are the specs of the streams?
Is there a particular need to convert them to JPEG?
How are you doing that conversion?
What is the target OS to run that application? -
Hi,
- Currently, we are considering 1024×768 @ 30 fps and 720p as standard resolutions. However, in some cases, 1080p streams may also be required, depending on the camera capabilities.
- The solution needs to be ONVIF compliant, as ONVIF mandates support for streaming MJPEG video over RTSP.
- Our approach uses a GStreamer pipeline inside a service application to read the RTSP stream. Raw frames are extracted through an appsink and written to a shared-memory segment. A separate streaming service consumes that shared memory, encodes the frames as MJPEG, and sends them to the client UI application over gRPC, where they are displayed through a VideoOutput element's videoSink. The same shared-memory data will also be used for video analysis.
- The target OS is Windows for now, but the implementation should remain as cross-platform as possible for future portability.
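A minimal sketch of the final display step, assuming Qt 6.5 or later (where `QVideoFrame` gained a constructor taking a `QImage` directly); the function name and surrounding wiring are illustrative, not the actual service code:

```cpp
// Sketch of the UI-side consumer: decode one MJPEG frame received over
// gRPC and hand it to the QVideoSink exposed by a QML VideoOutput.
// Assumes Qt 6.5+; "presentJpegFrame" is a placeholder name.
#include <QVideoSink>
#include <QVideoFrame>
#include <QImage>
#include <QByteArray>

void presentJpegFrame(QVideoSink *sink, const QByteArray &jpegPayload)
{
    if (!sink)
        return;
    QImage image = QImage::fromData(jpegPayload, "JPEG");
    if (image.isNull())
        return;                                   // drop undecodable frames
    sink->setVideoFrame(QVideoFrame(image));      // displays the frame
}
```

With 10–15 concurrent streams, the JPEG decode is likely the dominant per-frame cost on the UI side, so it may be worth decoding on worker threads and only handing the finished `QVideoFrame` back to the sink on the appropriate thread (e.g. via a queued invocation); whether `setVideoFrame` itself is safe to call off-thread should be verified against the Qt documentation for your exact version.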
-