What's the fastest way to display some text in Qt?
-
If you want a high-frequency display you should do your own painting...
What's the point in doing high frequency drawing if the display's refresh rate is itself around 60Hz? You yourself said that however fast my drawing might be, nothing can be drawn on screen more often than every ~16.6ms.
-
By high frequency I meant those 60Hz, as opposed to just setting text in a UI widget, which can't reliably achieve even that.
The difference is that with your own drawing you can make sure you are ready for every frame the display shows. Setting text in a UI widget just tells it to update, but it will do so only when it is ready to, which is out of your control.
-
@Wings said:
@KiwiJeff said:
Out of curiosity, but how are you actually measuring the time? It sounds like something I might want to check out myself.
Right now I'm using a Qt::PreciseTimer type QTimer. But I'll probably move to one with <10ms resolution.
I was afraid I would get an answer like that. I mean, you are more or less guessing the speed inside the system, while your use case is to have a guaranteed duration on the screen. All you know in the end is the time between the "sure, put the text up" and the "sure, clear the text", not the actual time on the screen. Of course, if this time is lower when using QPainter/QML rather than setText, I would go for that.
In the end, I think the only way to know if your software works for your use case is to have a high speed camera. And, as mentioned in other replies, a calibrated system.
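For reference, a minimal sketch of the kind of in-application measurement being discussed, assuming a plain QLabel as the widget (the function name here is made up). Note that it only captures the time between the two requests, not the time the text is actually visible on the panel, which is exactly the limitation above:

```cpp
// Minimal sketch: measure the time between the "put the text up" request and
// the "clear the text" request. This is in-application time only, not the
// time the text is actually shown by the display.
#include <QDebug>
#include <QElapsedTimer>
#include <QLabel>
#include <QTimer>

void flashOnce(QLabel *label)   // hypothetical helper, name made up
{
    QElapsedTimer stopwatch;
    stopwatch.start();
    label->setText(QStringLiteral("Hello"));        // "sure, put the text up"

    QTimer::singleShot(15, label, [label, stopwatch] {
        label->clear();                             // "sure, clear the text"
        qDebug() << "request-to-request time:" << stopwatch.elapsed() << "ms";
    });
}
```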
-
@KiwiJeff said:
In the end, I think the only way to know if your software works for your use case is to have a high speed camera.
What would you need a high speed camera for? 60Hz is not that much and you can see every frame with your own eyes if you focus enough. If you want to analyze the frames you can always use screen recording software. If you own a not-too-old Nvidia graphics card you can even use the built-in ShadowPlay feature to record your app and analyze it frame by frame. An invaluable tool for such tasks.
-
You could use a timer and force a GUI update to refresh the GUI faster than the standard optimization allows.
If the screen is 100fps then updating every 5-10 ms may be fast enough.
Human perception is probably less demanding; the only thing I notice when watching movies above 40 fps is that the motion is smoother and easier on the eyes.
-
@Q139 The whole point is not to do more work than the display can handle. Just because you update faster than the screen refreshes doesn't mean you're getting a better result. Let me visualize this:
The black vertical bars are the v-sync (hardware refresh) points.
The red bars are the drawing, and the light red bars are the drawing that will never make it to the screen.
The arrows point to the data that will be used for a given refresh. As you can see, drawing faster than the refresh rate is not good at all. You will skip irregular amounts of frames. If you tried to animate an object with this approach it would appear very "jumpy", skipping ahead or falling behind. It's very uneasy on the eyes.
Drawing just a little slower than the refresh (every 17ms) seems good at first, but notice that every few frames one frame will be displayed twice (because the new update is just a little late). An animated object will appear to move in bursts: staying in place for two frames and then moving steadily for the next few. Not good at all.
The above example doesn't take into account that drawing does not always take the same amount of time (the rectangles vary in width from frame to frame). Also, neither the timer ticks nor the v-sync occur at perfectly regular intervals, so the red rectangles and black bars can shift slightly, causing even more havoc. And then there's also the issue of double buffering, which makes all of this even worse.
The correct approach is to not use a timer at all. Wait for v-sync and draw then (a sketch of this pattern follows after this list). This is good for a couple of reasons:
- we're not wasting CPU on drawing anything more than necessary.
- we don't care how often the refresh is done. We just always paint for the next one and don't try to "fit just right in".
- we never skip a frame (important if you're going to display something for one frame only).
- since we start to draw just after a refresh, we have the whole cycle to ourselves, so it doesn't matter if drawing takes 3ms in one frame and 11ms in another. With a timer you need to make sure you're "just in time" for the refresh, which is basically impossible to guarantee.
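To make this concrete, here is a minimal sketch of what that could look like (my sketch, not code from this thread), assuming a QOpenGLWidget with the swap interval set to 1 so buffer swaps are tied to v-sync, and its frameSwapped() signal used to schedule the next paint:

```cpp
// Minimal sketch (assumption: the driver honours a swap interval of 1, so
// buffer swaps are tied to v-sync). Each frameSwapped() schedules the next
// repaint, so drawing starts right after a refresh and has the whole cycle
// to finish before the next one.
#include <QApplication>
#include <QOpenGLWidget>
#include <QPainter>
#include <QSurfaceFormat>

class VSyncWidget : public QOpenGLWidget
{
public:
    VSyncWidget()
    {
        // Repaint for the next refresh as soon as the previous frame was swapped.
        connect(this, &QOpenGLWidget::frameSwapped, this, [this] { update(); });
    }

protected:
    void paintGL() override
    {
        QPainter p(this);
        p.fillRect(rect(), Qt::black);
        p.setPen(Qt::white);
        // Whatever should be visible at the *next* refresh goes here.
        p.drawText(rect(), Qt::AlignCenter, QStringLiteral("Hello"));
    }
};

int main(int argc, char *argv[])
{
    QSurfaceFormat fmt = QSurfaceFormat::defaultFormat();
    fmt.setSwapInterval(1);                        // request v-sync'd buffer swaps
    QSurfaceFormat::setDefaultFormat(fmt);

    QApplication app(argc, argv);
    VSyncWidget w;
    w.resize(400, 300);
    w.show();
    return app.exec();
}
```

Whether the swap interval is actually honoured depends on the platform and driver settings, so this still needs to be verified on the target system.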
-
@Chris-Kawa
I disagree. What you are recording with ShadowPlay is what is sent out to the monitor, but not what the monitor is actually showing. Maybe the more advanced monitors with HDMI or DisplayPort will have a 1:1 representation, but then you are still assuming the monitor itself displays everything it receives.
-
@KiwiJeff said:
What you are recording with ShadowPlay is what is sent out to the monitor, but not what the monitor is actually showing. Maybe the more advanced monitors with HDMI or DisplayPort will have a 1:1 representation, but then you are still assuming the monitor itself displays everything it receives.
Well... yes, yes I do. Are you saying a display will skip frames or otherwise lose pixels? I find that hard to believe. Can you back this up with some official information? I would be interested to know more details if that's true.
Of course I totally agree when it comes to things like color representation or artifacts like ghosting, but skipping frames?
-
Well, in games, when you don't have v-sync enabled, you see tearing. I consider that skipping part of a frame. I have no official information on actual skipping of frames in modern 60Hz monitors.
Either way, the actual point I wanted to make is that you cannot trust information from within the system to know what happens outside the system. The speed indication in a car and the actual speed of the car are almost always slightly off.
Nevertheless, we are going off topic and this thread has already shown good measuring techniques, so I suggest we close it for now :).
-
@KiwiJeff said:
Nevertheless, we are going off topic and this thread has already shown good measuring techniques, so I suggest we close it for now :).
I don't think that's off topic. It's actually my mistake that I neglected this thread by marking it solved prematurely. Many thanks to you and @Chris-Kawa for providing valuable input. :) I'll remove the "SOLVED" prefix because it is still not solved. I have questions and I'll irritate you again. :D
-
Okay, so I studied @Chris-Kawa's answer again. And what hit me like a bolt of lightning is that what I was dreaming of doing is virtually nonsensical. Now I really understand what he was saying.
This is what I wished to make happen (when I was naive :D):
- The background timer's timeout event issues an update() call, which draws the text on the screen immediately (like magic) and in turn starts a ~17ms single-shot timer.
- When that timer times out, the text shown on the screen is cleared immediately (again, magic).
- This keeps looping until the user stops the background timer (see the sketch after this list).
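A hypothetical fragment of that naive approach (the widget and timer names are made up for illustration); this is the pattern that, as the next list explains, can't guarantee the on-screen time:

```cpp
// Naive pattern: a repeating timer "shows" the text and a ~17ms single-shot
// timer "clears" it. Neither request is tied to the display's refresh.
#include <QApplication>
#include <QLabel>
#include <QTimer>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QLabel label;
    label.setAlignment(Qt::AlignCenter);
    label.resize(400, 300);
    label.show();

    QTimer background;
    background.setTimerType(Qt::PreciseTimer);
    background.setInterval(1000);                  // flash the text once per second
    QObject::connect(&background, &QTimer::timeout, &label, [&label] {
        label.setText(QStringLiteral("Hello"));    // hoped-for "draw immediately"
        QTimer::singleShot(17, &label, [&label] {  // ~one refresh period later...
            label.clear();                         // ...hoped-for "clear immediately"
        });
    });
    background.start();

    return app.exec();
}
```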
This is what actually happens:
- Issue an update() call that "draws" the text (the text is NOT shown yet). The single-shot timer (@~17ms) is started.
  ...probably some milliseconds elapse (let this be x ms).
- At the vertical refresh moment, the text becomes visible on screen (this is damn fast, so I'm ignoring this interval).
  ...probably some milliseconds elapse (let this be y ms).
- The single-shot timer ends after ~(x+y) ms and a command to clear the text is issued.
  ...probably some milliseconds elapse (let this be z ms).
- At the vertical refresh moment, the text is cleared from the screen (ignoring this interval again).
So the user sees the text for about (x+y+z+k) ms, where k is the sum of the additional milliseconds spent actually doing that stuff. Now, x+y+z+k ms is almost always not equal to the desired interval.
That's why this process sucks, big time.
Now, this is the best that can happen (a code sketch of this sequence is at the end of this post):
- Wait for the background timer to end. After it ends, as soon as a refresh cycle occurs, issue the command to draw the text.
- At the next refresh cycle, the text will be drawn to the screen. After the text is drawn, start the single-shot timer.
- After that timer has ended, issue the command to remove the text. Wait for the next refresh cycle.
- At the next refresh cycle, the text will be removed from the screen. Now start the background timer and go back to the first step.
Visualizing the process:
Now, it is evident that showing the text for exactly 15ms is impossible with the previous method. But at least we can make sure that we don't show it for more than ~32ms. And that's good news, at least in my case.
Now, the point is how to achieve that. I don't know. But I'll find you and I'll kill you, nay implement you.
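For what it's worth, here is a rough sketch of what that sequence might look like, reusing the v-sync driven QOpenGLWidget idea from earlier in the thread. The use of frameSwapped() as the "a refresh just happened" notification, and all the class and member names, are assumptions made for illustration, not a tested implementation:

```cpp
// Rough sketch: show the text at a refresh, hold it for ~15ms, clear it at a
// refresh, then wait and repeat. Assumes the driver honours swap interval 1
// (v-sync) and that frameSwapped() is a usable "refresh happened" signal.
// All names (FlashWidget, m_hold, ...) are invented for this example.
#include <QApplication>
#include <QOpenGLWidget>
#include <QPainter>
#include <QSurfaceFormat>
#include <QTimer>

class FlashWidget : public QOpenGLWidget
{
public:
    FlashWidget()
    {
        m_background.setTimerType(Qt::PreciseTimer);
        m_background.setSingleShot(true);
        m_background.setInterval(1000);            // pause between flashes
        connect(&m_background, &QTimer::timeout, this, [this] {
            m_state = WaitingToShow;               // step 1: ask for the text at the next refresh
            update();
        });

        m_hold.setTimerType(Qt::PreciseTimer);
        m_hold.setSingleShot(true);
        m_hold.setInterval(15);                    // desired on-screen time
        connect(&m_hold, &QTimer::timeout, this, [this] {
            m_state = WaitingToClear;              // step 3: ask for the clear at the next refresh
            update();
        });

        connect(this, &QOpenGLWidget::frameSwapped, this, [this] {
            if (m_state == WaitingToShow) {        // step 2: the text frame was just swapped
                m_state = Shown;
                m_hold.start();
            } else if (m_state == WaitingToClear) { // step 4: the cleared frame was just swapped
                m_state = Hidden;
                m_background.start();
            }
        });

        m_background.start();
    }

protected:
    void paintGL() override
    {
        QPainter p(this);
        p.fillRect(rect(), Qt::black);
        if (m_state == WaitingToShow || m_state == Shown) {
            p.setPen(Qt::white);
            p.drawText(rect(), Qt::AlignCenter, QStringLiteral("Hello"));
        }
    }

private:
    enum State { Hidden, WaitingToShow, Shown, WaitingToClear };
    State m_state = Hidden;
    QTimer m_background;
    QTimer m_hold;
};

int main(int argc, char *argv[])
{
    QSurfaceFormat fmt = QSurfaceFormat::defaultFormat();
    fmt.setSwapInterval(1);                        // request v-sync'd buffer swaps
    QSurfaceFormat::setDefaultFormat(fmt);

    QApplication app(argc, argv);
    FlashWidget w;
    w.resize(400, 300);
    w.show();
    return app.exec();
}
```

Even with something like this, whether the text stays up for one or two refresh periods still depends on where the hold timer's timeout lands relative to v-sync, which matches the ~32ms upper bound discussed above.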