
How to use QAbstractVideoFilter to process frames from camera

QML and Qt Quick · 27 Posts · 10 Posters · 18.8k Views
DennisZhang (#1)

Hello

I want to do image processing on every frame from the camera, so I searched Google and found a new class, QAbstractVideoFilter, introduced for this purpose in Qt 5.5.

But I can't find any examples, so the only thing I can do is try it myself based on the Qt documentation.

After trying it, the compiler reports that the function "finished()" doesn't exist. I have no idea how to declare this function as a signal so I can emit it from the run() function.

Could anybody point me in the right direction?

Here is the code I tried to implement:

    //main(C++)

    #include <QApplication>
    #include <QQmlApplicationEngine>
    #include <QQmlEngine>
    #include <QtQml>
    #include <QtMultimedia/QAbstractVideoFilter>
    
    class MFRunnable: public QVideoFilterRunnable{
    public:
    	explicit MFRunnable(QAbstractVideoFilter *filter): QVideoFilterRunnable(){
    		m_filter = filter;
    	}
    
    	QVideoFrame run(QVideoFrame *input, const QVideoSurfaceFormat &surfaceFormat, RunFlags flags){
    	QString *result = new QString("123"); // allocated so the pointer is valid; the original dereferenced an uninitialized pointer
    		emit m_filter->finished(result);
    		return *input;
    
    	}
    private:
    	QAbstractVideoFilter* m_filter;
    
    };	
    class MF: public QAbstractVideoFilter{
    public:
    	QVideoFilterRunnable *createFilterRunnable(){return new MFRunnable(this);}
    
    signals:
    	void finished(QObject *result);
    };
    
    
    
    int main(int argc, char *argv[])
    {
    	qmlRegisterType<MF>("AVF", 1, 0, "MF");
    	QApplication app(argc, argv);
    
    	QQmlApplicationEngine engine;
    	engine.load(QUrl(QStringLiteral("qrc:/main.qml")));
    
    	return app.exec();
    }
    

    //main(QML)

    import QtQuick 2.4
    import QtQuick.Controls 1.3
    import QtQuick.Window 2.2
    import QtQuick.Dialogs 1.2
    import QtMultimedia 5.5
    import AVF 1.0
    
    
    ApplicationWindow {
    	title: qsTr("Hello World")
    	width: 640
    	height: 480
    	visible: true
    
    	Camera {
    		id: camera
    		exposure.exposureCompensation:  -1.0
    		exposure.exposureMode: Camera.ExposurePortrait
    
    	}
    	MF {
    		id:filter
    		onFinished: console.log("result of the computation: " + result)
    	}
    
    	VideoOutput {
    		source: camera
    		filters: [filter]
    		anchors.fill: parent
    		focus: visible
    	}
    }
    
Leonardo (#2)

      Hi. What kind of image processing are you trying to achieve?

DennisZhang (#3)

@Leonardo Thanks for your reply!

I want to implement the ViBe algorithm and human skeletonization with OpenGL ES 2.0 (shader language); because of the heavy computation these algorithms need, I have to implement them on the GPU with OpenGL ES 2.0.

I know there is a convenient facility for this in Qt Quick, namely ShaderEffect, but I can't use it because it is too limited, e.g. I can't create an external texture buffer for further computation. So I need to use OpenGL ES directly (from C++) in cooperation with QAbstractVideoFilter.

SGaist, Lifetime Qt Champion (#4)

          Hi and welcome to devnet,

          You can find some more information in this blog post

          Note that the finished signal is just for the example here to communicate a result to the outside. If you only modify the image then there's no need for it.


theshadowx (#5)

Hi @DennisZhang,

I have the same problem as you. First, when I emit the signal from the QVideoFilterRunnable, I get:

Undefined reference to subFilterClass::finished(...)

So I commented it out.

And in the QML, if I add an onFinished handler, I get this error when running the app:

Cannot assign to non-existent property "onFinished"

Have you been able to solve this problem?

agocs (#6)

There is no built-in finished() signal. You have to declare it yourself, and it can be called anything.

              So in the QAbstractVideoFilter subclass, have

              signals:
              void finished();

              then in the QVideoFilterRunnable, do

              emit m_filter->finished();

              when the computation is done.

belab (#7)

                Hi DennisZhang,

Is your file processed by moc? I guess you forgot the Q_OBJECT macro.

theshadowx (#8)

                  @agocs of course I declared it in signals

kolegs (#9)

I had the same problem. I tried different approaches, and it worked when I declared both the QVideoFilterRunnable and QAbstractVideoFilter subclasses in the same header and source file (so I have two files, one .h and one .cpp).

                    My header file looks like this:

                    class QRCode_filter : public QAbstractVideoFilter
                    {
                        Q_OBJECT
                    public:
                        QRCode_filter();
                        QVideoFilterRunnable *createFilterRunnable();
                    private:
                    signals:
                        void finished(QObject *result);
                    public slots:
                    
                    };
                    
                    class MyQRCodeFilterRunnable : public QVideoFilterRunnable
                    {
                    public:
                        QVideoFrame run(QVideoFrame *input, const QVideoSurfaceFormat &surfaceFormat, RunFlags flags);
                        MyQRCodeFilterRunnable(QRCode_filter *filter = NULL);
                    private:
                        QRCode_filter *m_filter;
                    };
                    
theshadowx (#10)

                      @kolegs Thanks for sharing this solution.

theshadowx (#11)

                        @kolegs Have you tried using OpenCV in

                        QVideoFrame run(QVideoFrame *input, const QVideoSurfaceFormat &surfaceFormat, RunFlags flags);
                        

I have a problem with it:

                        QVideoFrame FilterRunnable::run(QVideoFrame *input, const QVideoSurfaceFormat &surfaceFormat, QVideoFilterRunnable::RunFlags flags)
                        {
                            if(input->isValid()){
                                input->map(QAbstractVideoBuffer::ReadWrite);
                                /// QVideoFrame to cv::Mat
                                Mat cvImg = Mat(input->height(),input->width(), CV_8UC3,input->bits(),input->bytesPerLine());
                        
                                /// Apply Filter
                                Mat edges;
                        
                                /// to grayscale
                                cvtColor(cvImg, edges, COLOR_YUV2GRAY_420);
                        
                                /// apply filters
                                GaussianBlur(edges, edges, Size(7,7), 1.5, 1.5);
                                Canny(edges, edges, 0, 30, 3);
                        
                                /// convert to YUV420
                                cvtColor(edges,edges, COLOR_GRAY2RGB);
                                cvtColor(edges,edges, COLOR_RGB2YUV_I420);
                        
                                ///  what to do here to send back the modified frame .....   ???
                            }
                        
                             return *input;
                        }
                        
kolegs (#12)

@theshadowx I've never used OpenCV, so I'm not sure how to use that library. But as for the run() function: as far as I know, you need to modify the frame you get (QVideoFrame *input) and simply return it in the same format.

medyakovvit (#13)

@theshadowx, I don't know how to work with the YUV format, but with QVideoFrame::Format_RGB32 you can do something like this:

                            QVideoFrame ThresholdFilterRunnable::run(QVideoFrame *input, const QVideoSurfaceFormat &surfaceFormat, QVideoFilterRunnable::RunFlags flags)
                            {
                                if (!input->isValid())
                                    return *input;
                            
                                input->map(QAbstractVideoBuffer::ReadWrite);
                            
                                cv::Mat mat = cv::Mat(input->height(),input->width(), CV_8UC4, input->bits());
                            
                                cv::Mat gray, threshold, color;
                                cv::cvtColor(mat, gray, COLOR_RGB2GRAY);
                                cv::threshold(gray, threshold, 150.0, 255.0, THRESH_BINARY);
                                cv::cvtColor(threshold, color, COLOR_GRAY2RGB);
                                this->toVideoFrame(input, color);
                            
                                input->unmap();
                                return *input;
                            }
                            
                            void ThresholdFilterRunnable::toVideoFrame(QVideoFrame *input, cv::Mat &mat)
                            {
                                int aCols = 4*mat.cols;
                                int aRows = mat.rows;
                                uchar* inputBits = input->bits();
                            
                                for (int i=0; i<aRows; i++)
                                {
                                    uchar* p = mat.ptr<uchar>(i);
                                    for (int j=0; j<aCols; j++)
                                    {
                                        int index = i*aCols + j;
                                        inputBits[index] = p[j];
                                    }
                                }
                            }
                            

I can't test it right now, because I'm in hospital.

theshadowx (#14)

@medyakovvit Hi, thanks for responding. I hope everything is fine for you.

If my camera produced video in RGB32 it would be easier, as I could create a QImage from the edge matrix and then create a QVideoFrame from the QImage:

 ///cv::Mat to QImage
 QImage ImgDest(edges.data, edges.cols, edges.rows, edges.step, QImage::Format_RGB888);
 QVideoFrame *output = new QVideoFrame(ImgDest);

And then I would return the output.

The problem (from what I understood) is that my camera delivers frames in a YUV pixel format, which means the QVideoSurfaceFormat is YUV as well, so treating the data as RGB32 won't work.

Unfortunately QImage doesn't support YUV formats. :(

medyakovvit (#15)

@theshadowx So, if I understood the YUV420 format correctly: Y is the luma component, U and V are the chroma components, and within a single frame all the Ys come first, then the Us, then the Vs. So if you need a grayscale image, you can work with the Ys and ignore the Us and Vs. Maybe you can try:

                                QVideoFrame YUVFilterRunnable::run(QVideoFrame *input, const QVideoSurfaceFormat &surfaceFormat, QVideoFilterRunnable::RunFlags flags)
                                {
                                    if (!input->isValid())
                                        return *input;
                                
                                    input->map(QAbstractVideoBuffer::ReadWrite);
                                
                                    this->deleteColorComponentFromYUV(input);
                                    cv::Mat mat(input->height(),input->width(), CV_8UC1, input->bits()); // create grayscale mat with input's Ys only and avoid additional color conversion and copying
                                
                                    cv::GaussianBlur(mat, mat, Size(7,7), 1.5, 1.5);
                                    cv::Canny(mat, mat, 0, 30, 3);
                                
                                    input->unmap();
                                    return *input;
                                }
                                
                                void YUVFilterRunnable::deleteColorComponentFromYUV(QVideoFrame *input)
                                {
                                    // Assign 0 to Us and Vs
                                    int firstU = input->width()*input->height(); // if i correctly understand YUV420
                                    int lastV = input->width()*input->height() + input->width()*input->height()/4*2;
                                    uchar* inputBits = input->bits();
                                
                                    for (int i=firstU; i<lastV; i++)
                                        inputBits[i] = 0;
                                }
                                
theshadowx (#16)

@medyakovvit

Yes, that's right. There is one problem, though: with your code I get a green-tinted image instead of grayscale. The fix is to set the chroma bytes to the neutral value 127 instead of 0 (http://stackoverflow.com/a/20609599/2775917):

                                  void YUVFilterRunnable::deleteColorComponentFromYUV(QVideoFrame *input)
                                  {
                                        // Assign neutral chroma (127) to Us and Vs
                                      int firstU = input->width()*input->height(); // if i correctly understand YUV420
                                      int lastV = input->width()*input->height() + input->width()*input->height()/4*2;
                                      uchar* inputBits = input->bits();
                                  
                                      for (int i=firstU; i<lastV; i++)
                                          inputBits[i] = 127;
                                  }
                                  

                                  Thanks a lot for your help

medyakovvit (#17)

                                    @theshadowx I'm glad to help

cs_a994 (#18)

                                      Hello!

Is it possible to use QVideoFilterRunnable without QML?

As an example, I would like to embed it into the Camera example ( http://doc.qt.io/qt-5/qtmultimediawidgets-camera-example.html ) to be able to change the data format.

                                      Thank you.

medyakovvit (#19)

@cs_a994 I don't think so. But you can try subclassing QAbstractVideoSurface instead.

cs_a994 (#20)

                                          @medyakovvit

Thank you. I know about that approach, but using filters seems neater to me.

I tried to use filters with the Camera example and haven't gotten a result yet.
