How to Display Frames in QML from C++ for Real-Time Applications
-
Hello,
I am currently developing a user interface using Qt Creator with C++ and QML to display frames from a Mercu0909F flat panel detector. The integration involves establishing a connection with the device and implementing functionalities such as receiving X-Ray images via UDP, using the SDK provided by the manufacturer.
The detector streams frames at 30 FPS, each with a resolution of 1024x1024 in Gray-8 format, resulting in a data size of 2 MB per frame. According to the device's documentation, each frame requires a deep copy during processing. While I have successfully saved the frames to files on my computer, I am encountering challenges when attempting to display these frames in real time on the QML interface.
Currently, I can perform a deep copy and process the incoming data with the following conversion sequence:
"Unsigned short 16-bit little-endian (pixel format) ---> WW/WL (LUT) ---> BMP (RGB) 8-bit ---> Bitmap"

The evt_image signal inside the SDKHandler is triggered continuously, providing the image data as expected. Although I can successfully receive this data, I am struggling to render it on a QML Image element in real time.
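In essence, the per-pixel WW/WL step I use reduces to a small mapping function (the ww/wl values in my code are placeholders, not the detector's real calibration):

```cpp
#include <algorithm>
#include <cstdint>

// Map one 16-bit pixel to 8-bit using window width (WW) / window level (WL).
// Values below (wl - ww/2) clamp to 0 and values above (wl + ww/2) clamp to 255.
inline uint8_t applyWindow(uint16_t pixel, int ww, int wl)
{
    const int v = (static_cast<int>(pixel) - wl + ww / 2) * 255 / ww;
    return static_cast<uint8_t>(std::clamp(v, 0, 255));
}
```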
I would greatly appreciate your guidance on how to proceed. I am attaching the relevant code and debug output for your reference.
Thank you for your support!
SDKHandler:
struct IRayImageStructure {
public:
    int nWidth;
    int nHeight;
    int nBytesPerPixel;
    unsigned short* pData;

    IRayImageStructure(const IRayImage* input) {
        this->nWidth = input->nWidth;
        this->nHeight = input->nHeight;
        this->nBytesPerPixel = input->nBytesPerPixel;
        this->pData = (unsigned short*)malloc(this->nWidth * this->nHeight * this->nBytesPerPixel);
        std::memcpy(this->pData, input->pData, this->nWidth * this->nHeight * this->nBytesPerPixel);
    }

    IRayImageStructure(const IRayImageStructure& input) {
        this->nWidth = input.nWidth;
        this->nHeight = input.nHeight;
        this->nBytesPerPixel = input.nBytesPerPixel;
        // Allocate new memory and copy the data
        size_t dataSize = this->nWidth * this->nHeight * this->nBytesPerPixel;
        this->pData = static_cast<unsigned short*>(malloc(dataSize));
        if (this->pData) {
            // Memory was allocated successfully
            std::memcpy(this->pData, input.pData, dataSize);
        } else {
            // Allocation failed; fail safely
            this->nWidth = this->nHeight = this->nBytesPerPixel = 0;
        }
    }

    ~IRayImageStructure() {
        if (this->pData) {
            free(this->pData);
        }
    }
};

void Controller::SDKHandler(int nDetectorID, int nEventID, int nEventLevel, const char* pszMsg,
                            int nParam1, int nParam2, int nPtrParamLen, void* pParam)
{
    qDebug() << "SDK Event Received!";
    qDebug() << "Detector ID:" << nDetectorID << ", Event ID:" << nEventID;
    qDebug() << "Message:" << pszMsg;

    // Dispatch on the event type
    switch (nEventID) {
    case Evt_TaskResult_Succeed: // Example: task succeeded
        switch (nParam1) {
        case Cmd_Connect:
            qDebug() << "Connect the detector!";
            break;
        case Cmd_Disconnect:
            qDebug() << "Disconnect the detector!";
            break;
        case Cmd_ReadTemperature:
            qDebug() << "Read temperature from detector!";
            break;
        default:
            qDebug() << "Unhandled command in Evt_TaskResult_Succeed";
            break;
        }
        break;
    case Evt_TaskResult_Failed: // Example: task failed
        switch (nParam1) {
        case Cmd_Connect: {
            string errorInfo = m_iraySdk->GetErrorInfo(nParam2);
            qDebug() << "Error: " << QString::fromStdString(errorInfo);
            break;
        }
        case Cmd_Disconnect:
            qDebug() << "Failed to disconnect!";
            break;
        default:
            qDebug() << "Unhandled command in Evt_TaskResult_Failed";
            break;
        }
        break;
    case Evt_Image: {
        IRayImage* pImg = reinterpret_cast<IRayImage*>(pParam);
        const unsigned short* pImageData = pImg->pData;
        int nImageWidth = pImg->nWidth;
        int nImageHeight = pImg->nHeight;
        int nImageSize = nImageWidth * nImageHeight * pImg->nBytesPerPixel;
        qDebug() << "Image received!";
        qDebug() << "Width:" << nImageWidth << ", Height:" << nImageHeight;
        IRayImageStructure tempFrameData(pImg);
        imageBuffer.enqueue(tempFrameData);
        break;
    }
    default:
        qDebug() << "Unhandled event ID:" << nEventID;
        break;
    }
}
Circular Buffer:
#ifndef THREADSAFEQUEUE_H
#define THREADSAFEQUEUE_H

#include <QMutex>
#include <QWaitCondition>
#include <QQueue>
#include <QDebug>

template <typename T>
class ThreadSafeQueue
{
public:
    ThreadSafeQueue() = default;

    // Add an element to the queue
    void enqueue(const T& value)
    {
        QMutexLocker locker(&mutex); // lock the mutex safely
        queue.enqueue(value);
        condition.wakeOne(); // wake one waiting thread
    }

    // Take an element from the queue; block if it is empty
    T dequeue()
    {
        QMutexLocker locker(&mutex);
        // Wait for an element if the queue is empty
        while (queue.isEmpty()) {
            condition.wait(&mutex);
        }
        return queue.dequeue(); // return the element by copy
    }

    // Check whether the queue is empty
    bool isEmpty()
    {
        QMutexLocker locker(&mutex);
        return queue.isEmpty();
    }

private:
    QQueue<T> queue;          // Qt's QQueue class
    QMutex mutex;             // mutex for thread safety
    QWaitCondition condition; // condition to wait on when the queue is empty
};

#endif // THREADSAFEQUEUE_H
ImageFrameGenerator:
#ifndef IMAGEFRAMEGENERATOR_H
#define IMAGEFRAMEGENERATOR_H

#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include <QQmlContext>
#include <QThread>
#include <QImage>
#include "iraysdk.h" // IRayImageStructure is defined here
#include "ImageProvider.h"
#include "threadsafequeue.h"

extern ThreadSafeQueue<IRayImageStructure> imageBuffer;

class ImageFrameGenerator : public QObject
{
    Q_OBJECT
    Q_PROPERTY(int value READ value NOTIFY valueChanged)
public:
    explicit ImageFrameGenerator(ImageProvider* provider, QObject *parent = nullptr)
        : QObject(parent), m_imageProvider(provider) {}

    int value() const { return m_value; } // getter

signals:
    // Implement frameReceived and connect the signal; convert according to the
    // data format and route it to a QML element.
    void frameReceived(const QImage& img); // signal carrying a QImage
    void valueChanged();

public slots:
    void start()
    {
        while (m_running) {
            QThread::msleep(3);
            if (!imageBuffer.isEmpty()) {
                qDebug() << "Calling dequeue...";
                IRayImageStructure imgData = imageBuffer.dequeue();
                qDebug() << "Processing data: size =" << imgData.nWidth << "x" << imgData.nHeight;

                uint16_t* gray16Data = reinterpret_cast<uint16_t*>(imgData.pData);
                uint8_t* rgbData = new uint8_t[imgData.nWidth * imgData.nHeight * 3]; // 3 bytes per pixel for RGB

                // Normalize with Window Width (WW) and Window Level (WL)
                int ww = 1024; // Window Width
                int wl = 512;  // Window Level
                for (int i = 0; i < imgData.nWidth * imgData.nHeight; ++i) {
                    int pixel = (gray16Data[i] - wl + (ww / 2)) * 255 / ww;
                    uint8_t normalizedPixel = static_cast<uint8_t>(std::clamp(pixel, 0, 255));
                    // Fill the RGB channels (grayscale -> RGB mapping)
                    rgbData[i * 3 + 0] = normalizedPixel; // R
                    rgbData[i * 3 + 1] = normalizedPixel; // G
                    rgbData[i * 3 + 2] = normalizedPixel; // B
                }

                // Build the QImage
                QImage image(rgbData, imgData.nWidth, imgData.nHeight, QImage::Format_RGB888);

                // Send the signal for the QML connection
                // emit frameReceived(image);
                if (m_imageProvider) {
                    m_imageProvider->updateImage(image); // direct access to the ImageProvider
                }
                delete[] rgbData; // free the buffer
            }
        }
    }

    void stop() { m_running = false; }

private:
    bool m_running = true;
    int m_value = 0;                // backing member for the getter
    ImageProvider* m_imageProvider; // pointer to the ImageProvider
};

#endif // IMAGEFRAMEGENERATOR_H
ImageProvider:
#ifndef IMAGEPROVIDER_H
#define IMAGEPROVIDER_H

#include <QQuickImageProvider>
#include <QImage>
#include <QMutex>
#include <QThread>

class ImageProvider : public QQuickImageProvider
{
    Q_OBJECT
signals:
    void imageUpdated();

public:
    ImageProvider() : QQuickImageProvider(QQuickImageProvider::Image) {}

    void updateImage(const QImage& image)
    {
        QMutexLocker locker(&m_mutex);
        if (image.isNull()) {
            qDebug() << "Incoming image is invalid.";
            return;
        }
        m_image = image; // store the image
        qDebug() << "Image updated successfully: size =" << m_image.size();
        emit imageUpdated(); // notify QML
    }

    QImage requestImage(const QString& id, QSize* size, const QSize& requestedSize)
    {
        QMutexLocker locker(&m_mutex);
        qDebug() << "requestImage called!";
        if (size) {
            *size = m_image.size();
        }
        if (requestedSize.isValid()) {
            return m_image.scaled(requestedSize, Qt::KeepAspectRatio); // resize
        }
        return m_image;
    }

private:
    QImage m_image;
    QMutex m_mutex;
};

#endif // IMAGEPROVIDER_H
main:
#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include <QQmlContext>
#include "controller.h"
#include "ImageProvider.h"
#include "ImageFrameGenerator.h"

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);
    QQmlApplicationEngine engine;

    Controller controller;
    controller.InitializeSDK();
    engine.rootContext()->setContextProperty("controller", &controller); // expose the Controller to QML

    ImageProvider* imageProvider = new ImageProvider();
    engine.addImageProvider("dynamic", imageProvider);

    // Create the ImageFrameGenerator and start its thread
    ImageFrameGenerator* frameGenerator = new ImageFrameGenerator(imageProvider);
    QThread* frameThread = new QThread();
    frameGenerator->moveToThread(frameThread);
    QObject::connect(frameThread, &QThread::started, frameGenerator, &ImageFrameGenerator::start);
    QObject::connect(&app, &QCoreApplication::aboutToQuit, frameGenerator, &ImageFrameGenerator::stop);

    // Connect the frameReceived signal to the ImageProvider
    QObject::connect(frameGenerator, &ImageFrameGenerator::frameReceived,
                     imageProvider, &ImageProvider::updateImage);
    QObject::connect(imageProvider, &ImageProvider::imageUpdated, [&engine]() {
        // Refresh the image on the QML side
        QObject* rootObject = engine.rootObjects().isEmpty() ? nullptr : engine.rootObjects().first();
        if (rootObject) {
            QMetaObject::invokeMethod(rootObject, "updateImage");
        }
    });
    frameThread->start();

    QObject::connect(
        &engine, &QQmlApplicationEngine::objectCreationFailed, &app,
        []() { QCoreApplication::exit(-1); }, Qt::QueuedConnection);
    engine.loadFromModule("flatPanel", "Main");

    return app.exec();
}
Main.qml:
Frame {
    width: 1024
    height: 1080
    x: 848
    visible: true
    background: Rectangle { color: "lightblue" }

    Image {
        id: displayImage
        anchors.fill: parent
        fillMode: Image.PreserveAspectFit
        source: "image://dynamic/"
    }

    // function updateImage() {
    //     displayImage.source = displayImage.source; // reload the source
    // }

    Connections {
        target: imageProvider
        onImageUpdated: {
            console.log("Image updated signal received");
            displayImage.source = displayImage.source; // reload the source
        }
    }
}
My Output:
Handling High-Speed Data Streams:
How can I efficiently handle and render high-frequency (30 FPS, 2 MB) frames in a Qt-based application? Any advice or suggestions would be greatly appreciated. Thank you in advance for your help! 😊
-
@Wertyus said in How to Display Frames in QML from C++ for Real-Time Applications:
delete[] rgbData;
Please read https://doc.qt.io/qt-6/qimage.html#QImage-7 :
"The buffer must remain valid throughout the life of the QImage and all copies that have not been modified or otherwise detached from the original buffer."

Also, why do you convert from 16bit to 8bit if you receive 8bit ("each with a resolution of 1024x1024 in Gray-8 format")?
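One more thing to check on the QML side: `displayImage.source = displayImage.source` normally does not force a reload, because the engine caches images by URL. A common workaround (sketch only; `frameCounter` is a made-up property name) is to disable caching and vary the URL:

```qml
Image {
    id: displayImage
    anchors.fill: parent
    fillMode: Image.PreserveAspectFit
    cache: false                         // bypass the pixmap cache
    property int frameCounter: 0         // bump this on every imageUpdated signal
    source: "image://dynamic/frame?" + frameCounter
}
```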
-
1) Regarding "The buffer must remain valid throughout the life of the QImage and all copies":
I understand now that when using a raw buffer to construct a QImage, I need to ensure the buffer remains valid until all copies of the QImage are either modified or destroyed. To avoid potential issues, I plan to use QImage::copy() to create a detached copy of the image, which will manage its own memory independently. This should eliminate the need to manage the lifetime of the original buffer manually. Please let me know if this approach is recommended.
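Roughly like this (an untested sketch of my plan; makeOwnedFrame is just an illustrative helper name):

```cpp
#include <QImage>
#include <cstdint>

// Wrap the caller's buffer without copying, then detach with copy():
// the returned QImage owns its pixels, so the caller may free the
// original buffer immediately afterwards.
QImage makeOwnedFrame(const uint8_t* rgbData, int width, int height)
{
    QImage wrapped(rgbData, width, height, width * 3, QImage::Format_RGB888);
    return wrapped.copy(); // deep copy with its own memory
}
```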
2) Regarding "Why do you convert from 16-bit to 8-bit if you receive 8-bit data?":
Initially, I assumed the data was in Gray-8 format based on the frame size (1024x1024, 2 MB). However, after consulting the manufacturer's documentation, I learned that the data is actually in Unsigned short 16-bit little-endian format. The conversion process I mentioned (16-bit → WW/WL → LUT → BMP 8-bit RGB) is based on their recommended workflow for rendering. This explains why the conversion is necessary in this case.
3) Additional Request:
If you have any additional advice or suggestions based on your experience, I’d be happy to hear them. Also, if you had a chance to review the code I shared, could you let me know if you noticed any mistakes or missed steps in my implementation? Your feedback would be highly appreciated.
Considering QBitmap:
I am also reviewing the link you provided. Based on the manufacturer's workflow and the documentation, I wonder if using QBitmap instead of QImage would be a better fit for this scenario. Do you think it would simplify handling or improve performance in any way? I’d appreciate your insights on this.
Thanks again for your support and guidance!
-
For a real-time image stream I would use VideoOutput + QVideoSink instead of QQuickImageProvider.
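In outline (assuming Qt 6 Multimedia; FrameSource is a made-up class name, and the QVideoFrame-from-QImage constructor needs Qt 6.5+):

```cpp
#include <QObject>
#include <QImage>
#include <QVideoFrame>
#include <QVideoSink>

class FrameSource : public QObject
{
    Q_OBJECT
    // QML assigns this from VideoOutput's videoSink property
    Q_PROPERTY(QVideoSink* videoSink READ videoSink WRITE setVideoSink)
public:
    QVideoSink* videoSink() const { return m_sink; }
    void setVideoSink(QVideoSink* sink) { m_sink = sink; }

public slots:
    // Call this for each converted frame instead of going through an image provider
    void deliver(const QImage& img)
    {
        if (m_sink)
            m_sink->setVideoFrame(QVideoFrame(img)); // Qt 6.5+; on older 6.x, map a frame and memcpy
    }

private:
    QVideoSink* m_sink = nullptr;
};
```

On the QML side this pairs with something like `VideoOutput { id: out }` plus `Component.onCompleted: frameSource.videoSink = out.videoSink` (with `import QtMultimedia`).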
-
@Wertyus Hi. Let's make some things clear. For both your device and the user interface you have to implement the UDP protocol to send and receive frames.
On the device side, create a QAbstractVideoFilter to detect frames in real time and couple it with a VideoOutput on the QML side.
Create a class inherited from QThread that takes each frame from the filter and sends it to the server using QUdpSocket.
On the server side (your GUI app), create a QUdpSocket listening for the incoming frames.
If you have high-resolution images, you might consider scaling or compressing them. Hope it helps
-
@Ronel_qtmaster @GrecKo @jsulm
Hello,
I have resolved the issue. I am now temporarily saving a few frames to a directory on the computer and updating the Image element's source in QML with these frames. Currently, I am working on achieving smoother transitions, but so far, this is the only method that has worked reliably.
Since UDP management is handled by the device's SDK, I couldn't use Qt's own UDP classes. Additionally, I encountered an issue where the signal-slot connections couldn't find the QML element. When trying the "VideoOutput + QVideoSink" approach, the VideoOutput element was not recognized at all.
I still haven't figured out the reasons behind these issues, but at least I now have a working method. If you have any recommendations regarding these topics, I'd be happy to hear them. Otherwise, I just wanted to thank you for your support and wish you all the best in your work!
Best regards,
-
@Wertyus Maybe take a look at GStreamer and FFmpeg to handle this. Use raw GStreamer or FFmpeg code to capture the stream and render it in a QML sink (GStreamer provides one). You can test with the GStreamer or FFmpeg command line first and then convert it into C++ code.
Qt's Multimedia module is not ideal for this type of app.
-
I don't know if it helps, but I have done something that involves rendering a stream of images as an animated view in QML.

I simply use a class derived from QQuickPaintedItem that I expose to QML. This incorporates a timer that checks for the availability of a new image when it times out. The image is provided as a char buffer plus a unique ID. All management of the image stream source is in a separate thread. When the UI thread requests the latest image from the image source, it is provided with a pair comprising a shared_ptr containing the frame's char buffer and an integer ID. If the ID is different from the last frame on the QML side, the view is repainted with the new image. Otherwise it is ignored until the next timer trigger.

The QQuickPaintedItem holds a QPixmap member. Each time a new image is received, this is filled, via convertFromImage, from a QImage constructed from the received buffer, and then update() is called. A paint() method implemented in the class is responsible for drawing the updated pixmap.

The use of the shared_ptr makes copying the image between the threads fast while still addressing the lifetime issues. Obviously there is a lock on the request for the latest image, but that is the only explicit synchronisation needed. While a frame is current, both sides hold a reference to it. Once a new one comes in on the stream-manager side, it will be replaced with that and a reference dropped. The reference on the QML side will be dropped once it has finished updating its QPixmap member from it. Deletion of the object held in a shared_ptr is thread-safe according to the standard.
-
@JoeCFD I have worked extensively with GStreamer before, so I’m fairly confident (about 95%) that I can implement this solution if needed. However, I would like to explore other approaches first before switching to GStreamer or FFmpeg.
Currently, my goal is to make it work using Qt's built-in capabilities. If I run into further issues, I will definitely consider using GStreamer as a fallback solution.
Thanks again :) If I need your help with that, I will ask you :)
-
Thank you for sharing your approach! Using QQuickPaintedItem along with a timer and shared_ptr for thread-safe image handling sounds like an efficient way to manage the rendering process.
Right now, I am trying to make it work using Qt’s built-in solutions, but if I run into synchronization or performance issues, your approach could be a great alternative.
One thing I’m curious about:
Did you experience any latency issues with the QQuickPaintedItem approach when handling high frame rates (e.g., 30 FPS)?
Also, how did you handle buffer updates efficiently when new frames arrived?
Thanks again for the suggestion! I really appreciate it.
-
Thanks for the suggestion! I actually tried exposing a C++ property and passing the videoSink to VideoOutput in QML. However, for some reason, QML does not recognize the VideoOutput element at all.
I checked that I have the required Qt Multimedia imports, but it still doesn’t work. Do you have any idea what might be causing this issue?
Could it be a missing dependency or something related to the QML engine initialization?
Do I need to manually register VideoOutput in my C++ code for it to be recognized?
I’d appreciate any insights you might have! Thanks again for your help.
-
@Wertyus said in How to Display Frames in QML from C++ for Real-Time Applications:
Did you experience any latency issues with the QQuickPaintedItem approach when handling high frame rates (e.g., 30 FPS)?
I did some initial experiments to test the viability of the general approach and was able to get frame rates above 30 fps. Since implementing the real thing, I have not explicitly measured this, but it has been good enough for our purposes. In our case, the frames are provided not by a video stream but by a rendering of a 3D model provided by a back-end server. Using QQuickPaintedItem was driven by our need to be able to react to mouse events and so on that are fed back to the server. It might be that for your case, treating it as a video stream and using the specialised support for that would be more appropriate.

Also, how did you handle buffer updates efficiently when new frames arrived?

This is a sketch of the code in the QQuickPaintedItem. Because it's a "pull" approach, it is possible that frames could arrive too quickly for this to display all of them; anything between the last image requested and the current one will have been dropped.

void ViewerItem::checkForNewImage()
{
    // Called on timer trigger.
    // Note: no significant copying happens here.
    // `imageProvider` works on a separate thread; `currentImage()` is mutex-protected internally.
    std::pair<std::shared_ptr<Image>, int> image = imageProvider->currentImage();
    if (image.second != m_currentIndex) {
        // New image - update the pixmap member
        m_pixmap.convertFromImage(
            QImage(image.first->pixelBuffer(),
                   image.first->width, image.first->height,
                   image.first->width * 3,
                   QImage::Format_RGB888));
        // New image, so update the index
        m_currentIndex = image.second;
        // update() is a QQuickPaintedItem member - it will trigger paint(QPainter*) on this class.
        // Our implementation of paint() uses drawPixmap(0, 0, m_pixmap).
        update();
    }
}
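For completeness, the provider side behind currentImage() can be reduced to a Qt-free sketch (Frame and LatestFrameStore are illustrative names, not our actual classes):

```cpp
#include <cstdint>
#include <memory>
#include <mutex>
#include <utility>
#include <vector>

struct Frame {
    std::vector<uint8_t> pixels; // stands in for the frame's char buffer
};

class LatestFrameStore {
public:
    // Producer (stream-manager thread): replace the current frame.
    // The store's old shared_ptr is released here; the consumer's copy
    // keeps the old buffer alive until it has finished with it.
    void publish(std::shared_ptr<Frame> frame) {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_frame = std::move(frame);
        ++m_id;
    }

    // Consumer (UI thread, on timer): grab the latest frame and its ID.
    // This lock is the only explicit synchronisation needed;
    // shared_ptr destruction itself is thread-safe.
    std::pair<std::shared_ptr<Frame>, int> currentImage() const {
        std::lock_guard<std::mutex> lock(m_mutex);
        return {m_frame, m_id};
    }

private:
    mutable std::mutex m_mutex;
    std::shared_ptr<Frame> m_frame;
    int m_id = 0;
};
```

The consumer compares the returned ID with the last one it painted and only rebuilds its QPixmap when the ID has changed, exactly as in the checkForNewImage() sketch above.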