QCoreApplication::processEvents() and Qt::QueuedConnection
-
It's only a guess, but I'd like to go a little deeper into what @SGaist said: for the UI to actually change, somewhere there is a call to update(). In particular, this call will be compressed to avoid unnecessary redraws. In order to force a redraw one has to call repaint() instead of update(). Because processEvents() is called immediately after someSignal(), there is a chance that update() has not triggered the UI update yet. One reason why it works a lot of the time might be that there are enough other events in the queue that still have to be processed (e.g. some mouse events), so enough time has passed by the time the update() is handled. If I am right about that, a (nearly) empty queue might skip the current call to update() because it came too early.
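If that guess is right, a quick way to test it is to force the paint synchronously instead of relying on update() plus processEvents(). A minimal sketch, assuming plain QPushButtons; the function and widget names are made up, not from the actual project:

```cpp
#include <QPushButton>

// Sketch only: paint the disabled look right now, instead of hoping that a
// posted (and possibly compressed) update() gets through before blocking.
void enterTurboMode(QPushButton *runButton, QPushButton *stepButton)
{
    runButton->setEnabled(false);    // internally schedules an update()
    stepButton->setEnabled(false);

    runButton->repaint();            // paints synchronously, right away
    stepButton->repaint();

    // ... start the long, blocking Turbo run here ...
}
```
-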
@SGaist, @SimonSchroeder
Thank you both for answering. Unfortunately I do not understand what either of you is saying! :( I don't know how much patience you have to explain further?
Let's get a quick one out of the way:
@SGaist said in QCoreApplication::processEvents() and Qt::QueuedConnection:
Also, switching context.
What does this mean? As I said, if it is relevant, no threads are involved (despite the call to QueuedConnection).
Don't forget that the missing visual update only happens if QueuedConnection, not DirectConnection. Why? The former are put in a queue (even in a single thread) while the latter are not, and that changes behaviour somehow? But the queuing only affects my signal emission/slot execution, not whatever "paint/update" event then gets posted as a result of the slot calling a function which affects the UI (button dis/enablement in my case)? As I said, by debugger inspection my slot is always called during the processEvents() immediately after the emit; it's not that that gets missed/delayed. I am guessing the issue is then that my slot indirectly gets an update posted for its UI call, but that update is not always processed during that processEvents()? I fail to see how whether my slot was connected queued or direct has a later effect on the update?
And what is this compression/update all about? Something about if there are multiple posted update events then they are "coalesced" and only one is called? But further, since in all cases at least one update must be posted, sometimes even that one update is not acted on immediately? Instead there can be a time delay?
Let's take the simplest situation. Miraculously there are absolutely no events in the queue when I do my emit. What do you expect to happen then? Always get to see an immediate redraw? Or does it still depend on when the post happens, perhaps relative to the previous redraw?
So I would like to understand (a minimal sketch of my setup follows the questions below):
- What's the outline behaviour of these updates and compression?
- How does whether the original signal was connected queued versus direct, and we know its slot gets executed, affect the final outcome of when the redraw happens?
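Here is that sketch, as a self-contained stand-in. A real custom signal needs moc, so a queued QMetaObject::invokeMethod stands in for my emit/queued-slot pair; none of this is my actual code, just the shape of the situation:

```cpp
#include <QApplication>
#include <QPushButton>
#include <QThread>

// Stand-in for the situation discussed (single-threaded, queued delivery):
// clicking the button queues the disablement, processEvents() is called once,
// then the GUI thread blocks as the Turbo run would.
int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    QPushButton button("Turbo");
    button.show();

    QObject::connect(&button, &QPushButton::clicked, &button, [&button]() {
        // Queued call standing in for emit someSignal() -> changeButtonsEnablement():
        QMetaObject::invokeMethod(&button, [&button]() {
            button.setEnabled(false);           // internally requests an update
        }, Qt::QueuedConnection);

        QCoreApplication::processEvents();      // the queued call runs in here
        QThread::sleep(5);                      // stand-in for the blocking Turbo run
        // The question: is the button reliably *painted* as disabled before the sleep?
        button.setEnabled(true);
    });

    return app.exec();
}
```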
-
I appreciate your motivation to explore this topic further, and I admit that my own understanding of the internal mechanics here is limited as well.
However, based on this discussion and several others recently, I’m starting to wonder whether this might be a “Claude Code TUI” kind of situation. In the linked post about Claude Code, a significant amount of engineering effort went into building a highly sophisticated terminal UI, complete with React-style rendering, diffing, and performance optimizations targeting things like smooth updates and high frame rates. What struck me while reading it was that at no point does anyone seem to pause and ask a more fundamental question: why do we actually need a 60 FPS terminal user interface in the first place?
-
@J.Hilk
Well, that was all very advanced for me! ;) If you are talking about the frequency of updates and whether one really needs them so often, just for the record here is my situation, why my question matters to it, and why I don't think frequency has anything to do with the problem for me.
Briefly, I am writing a CPU emulator, including visuals (and an "execution debugger") from a Qt UI layer. Speed of the underlying "emulator code execution" is crucial: it executes a lot of instructions rapidly, and obviously if I am not careful UI updating will dominate. Per previous posts, I have already done a lot of work on reducing signal emission and compressing queued slot calls.
The problem does not arise when "stepping" or "running", where I allow periodic surrender to the UI for updates (via processEvents()). It arises in a mode I have named "Turbo". That must be as fast as possible, so it neither emits signals nor surrenders to the UI (processEvents()) during execution. Fair enough; only that way do I get the desired performance. However, at the start of Turbo mode buttons get disabled, and since Turbo execution may take "some time" [e.g. prime number or sorting algorithms :) ] I want the UI to have shown this change visually just before "blocking" while it runs. It is about forcing a one-off repaint, not frequent ones. So I call processEvents() once after the button disablement code and before the blocking Turbo execution. What I see is that sometimes the repaint happens then (good), but more often than not it does not (bad). That is why I wish to understand.
Yes, I could probably "fix" this by, say, doing the disablement call and then a QTimer::singleShot(0) instead of an immediate call to launch the Turbo execution. But me being me, I would like to understand the current behaviour, just because I am interested!
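For reference, the kind of "fix" I mean would look roughly like this; setButtonsEnabled() and runTurboBlocking() are placeholders for my real functions, so treat it as a sketch only:

```cpp
#include <QTimer>

// Sketch: disable the buttons, then defer the blocking run by one event-loop
// pass so the repaint at least gets a chance to happen first.
void MainWindow::startTurbo()
{
    setButtonsEnabled(false);                 // posts the update requests

    QTimer::singleShot(0, this, [this]() {    // fires on the next event-loop pass
        runTurboBlocking();                   // the long, uninterrupted Turbo run
        setButtonsEnabled(true);
    });
    // (No guarantee the paint beats a zero timer, which is part of what I am asking.)
}
```
-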
@JonB said in QCoreApplication::processEvents() and Qt::QueuedConnection:
I would like to understand the current behaviour
I don't have a good explanation; I can only point out that processEvents() contains behaviour that is known to be irrational and inconsistent, yet fixing those irrationalities is too risky:
- [QTBUG-142807] Discrepancy between platforms with queued one-to-many connections with processEvents…
- [QTBUG-119225] [Reg 6.2 -> 6.5] QSplashScreen no longer shows on Linux with a single call to QCoreApplication::processEvents()
That 2nd bug report might be related to what you see: Calling processEvents() once didn't produce the expected results but calling it more than once did.
-
@JKSH said in QCoreApplication::processEvents() and Qt::QueuedConnection:
That 2nd bug report might be related to what you see: Calling processEvents() once didn't produce the expected results but calling it more than once did.
Thank you! For the resolution, if not the ultimate reason :) There wasn't any detail as to what/why in the bug report, and I didn't feel like diff-ing. But putting a second, extra processEvents() right after my existing one, run just before the simulation starts running and blocking, changes the disabled buttons from hardly ever showing as such (with a single processEvents()) to, seemingly, showing correctly every time, which is much better! There must be some relationship between whatever QSplashScreen requires and my situation. I am on Linux, Qt 6.4.2, if that happens to be relevant.
So thank you, and I will now mark this as solved, even though I will still have sleepless nights wondering exactly why ;-)
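So the startup sequence now looks roughly like this (a sketch; the function names are placeholders for my real code):

```cpp
#include <QCoreApplication>

// What I ended up with: two processEvents() calls between disabling the
// buttons and starting the blocking Turbo run.
void MainWindow::startTurbo()
{
    setButtonsEnabled(false);              // posts the update/paint requests

    QCoreApplication::processEvents();     // with only this call the repaint rarely showed
    QCoreApplication::processEvents();     // extra call (cf. QTBUG-119225): now it shows

    runTurboBlocking();                    // long blocking run, no event processing inside
}
```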
-
JonB has marked this topic as solved
-
Glad I could provide a bit of closure!
@JonB said in QCoreApplication::processEvents() and Qt::QueuedConnection:
I will still have sleepless nights wondering exactly why ;-)
I don't remember the fine details, but IIRC it has something to do with asynchronous communication between the Qt event loop and the "native" Linux event system. Meaning: processEvents() does process all tasks that are available on the Qt side, but some important tasks might not have arrived from the OS yet, so the current call to processEvents() can't see and process those.
@JonB said in QCoreApplication::processEvents() and Qt::QueuedConnection:
But putting a second, extra processEvents() right after my existing one, run just before the simulation starts running and blocking, changes the disabled buttons from hardly ever showing as such (with a single processEvents()) to, seemingly, showing correctly every time, which is much better!
Because of the asynchronous comms mentioned above, I think what the 2nd call does is bump the probability of success from ~25% ("3 no shows to 1 show") to ~99%. There's still a tiny chance that you won't get the outcome you expect, because it's really a race condition between the OS and Qt's event loop.
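If it really is about OS-side events arriving a moment late, then one more thing that might be worth trying (untested; based only on the documented QEventLoop::ProcessEventsFlags, not on the bug report) is asking the second call to wait for something to arrive:

```cpp
#include <QCoreApplication>
#include <QEventLoop>

// Untested idea: flush what Qt already has, then have the second call wait
// until at least one more event arrives from the native layer and process it.
// Caveat: WaitForMoreEvents can wait an arbitrary amount of time if nothing
// ever arrives, so this is not obviously better than two plain calls.
void flushUiBeforeBlocking()
{
    QCoreApplication::processEvents();
    QCoreApplication::processEvents(QEventLoop::AllEvents | QEventLoop::WaitForMoreEvents);
}
```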
-
@JonB said in QCoreApplication::processEvents() and Qt::QueuedConnection:
@SGaist, @SimonSchroeder
Thank you both for answering. Unfortunately I do not understand what either of you is saying! :( I don't know how much patience you have to explain further?
First of all: all answers are just guesses based on years of experience; there isn't really hard evidence to back them up. Probably nobody really understands what's happening.
Generally, there might be a lot of things in quick succession that request a UI update. Back in the day when Qt started we had at most 60 Hz displays, so it doesn't make sense to redraw the screen more often than that; human perception is even slower. If redraw requests happen more often they can be consolidated (usually these requests carry a QRect/QRegion, and those regions get merged). In order to consolidate these calls, they cannot be processed immediately; there has to be a short wait (while making sure not to wait too long). I don't really know the exact mechanism by which Qt handles this; I just know some mechanism is there.
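The coalescing part at least is easy to observe: QWidget::update() is documented to allow several requests to be folded into a single paint. A tiny demonstration (nothing here is from the real project):

```cpp
#include <QApplication>
#include <QDebug>
#include <QPaintEvent>
#include <QWidget>

// Demonstration: many update() calls, far fewer paintEvent() calls.
struct CountingWidget : QWidget
{
    int paints = 0;
    void paintEvent(QPaintEvent *) override
    {
        qDebug() << "paintEvent #" << ++paints;
    }
};

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    CountingWidget w;
    w.show();

    for (int i = 0; i < 100; ++i)
        w.update();        // 100 requests, all posted before the event loop runs...

    return app.exec();     // ...but only one (or very few) paintEvent() calls result
}
```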
Let's get a quick one out of the way:
@SGaist said in QCoreApplication::processEvents() and Qt::QueuedConnection:
Also, switching context.
What does this mean? As I said, if it is relevant no threads are involved (despite the call to QueuedConnection).
Switching contexts might refer to switching between different programs. Because Qt waits briefly to consolidate update calls, enough time might have passed while another program got some CPU time.
Don't forget that the missing visual update only happens if QueuedConnection, not DirectConnection.
Depending on how full the queue is, the signal from the queued connection will be handled a lot later. Here is most likely what happens: the changeButtonsEnablement() slot is put in the queue. Then processEvents() is called, which handles changeButtonsEnablement() as the last slot in the queue. You call setEnabled() on some buttons, which internally calls update(). However, update() is put in the queue and still waits to consolidate some calls. That might be why the first call to processEvents() does not update the UI (visually). If you have a direct connection instead, the update() call is placed into the queue immediately (through your calls to setEnabled()), and only after that do you call processEvents(). Now the update() request is already in the queue and might get processed. This might depend on how many other things are still in the queue when you call processEvents(). If you have enough things to do, the waiting time for consolidating several update() calls might be over (or not).
As I said, by debugger inspection my slot is always called during the processEvents() immediately after the emit, it's not that that gets missed/delayed.
The debugger slows things down enough (because it has to check if breakpoints are hit) so that the update call is always triggered (enough time has passed to consolidate calls).
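One way to check when the repaint really happens, without the debugger changing the timing, is to log the paint-related events as they reach the widgets. A sketch (how and where you install it is up to you; note that UpdateRequest may be delivered to the top-level window rather than the button itself):

```cpp
#include <QDebug>
#include <QEvent>
#include <QObject>

// Sketch: an event filter that logs when the repaint actually happens, so you
// can see whether it falls inside the first processEvents() call or only later.
struct PaintSpy : QObject
{
    using QObject::QObject;

    bool eventFilter(QObject *watched, QEvent *event) override
    {
        if (event->type() == QEvent::UpdateRequest)
            qDebug() << "UpdateRequest for" << watched;
        else if (event->type() == QEvent::Paint)
            qDebug() << "Paint for" << watched;
        return QObject::eventFilter(watched, event);
    }
};

// Usage idea: install it on both the button and its top-level window, e.g.
//   auto *spy = new PaintSpy(myButton);
//   myButton->installEventFilter(spy);
//   myButton->window()->installEventFilter(spy);
```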
And what is this compression/update all about? Something about if there are multiple posted update events then they are "coalesced" and only one is called? But further, since in all cases at least one update must be posted, sometimes even one update is not acted on immediately? Instead there can be a time delay?
Yes, there might be a time delay. I only know this to be true for update() calls; I am not sure if this also applies to other signals/slots.
Let's take the simplest situation. Miraculously there are absolutely no events in the queue when I do my emit. What do you expect to happen then? Always get to see an immediate redraw? Or does it still depend on when the post happens, perhaps relative to the previous redraw?
If the queue is empty, you immediately hit the update() call, and most likely not enough time has passed to consolidate these calls. If the queue has some items, some time has passed by the time the update() call is finally processed, so the time for consolidating calls might have passed.
@JonB said in QCoreApplication::processEvents() and Qt::QueuedConnection:
However, at the start of Turbo mode buttons get disabled, and since Turbo execution may take "some time" [e.g. prime number or sorting algorithms :) ] I want the UI to have shown this update visually just before "blocking" while it runs.
In general, processEvents() is the enemy of a fast UI. I am saying this because most of the time people are using processEvents() inside a loop to update a progress bar. This is why we have created a small header-only library (https://github.com/SimonSchroeder/QtThreadHelper) that helps to put some code into a separate thread without blocking the UI thread. The simplest way to use this library is to put your code into a lambda function and wrap workerThreadJoin([&]() { /* do something */ }); around it. The 'join' part at the end will internally start an event loop and wait for your thread to finish. (Better approaches would use workerThread(...) instead, but this might require a rewrite of some parts.) In order to do something inside the GUI thread (e.g. call setEnabled()) you can use guiThread(...) from the worker thread. If the UI thread is not blocked you will get the fastest UI update possible. This only works with separate threads. One caveat with my library: you should only create new worker threads on the fly if you have enough work to do; for just ten lines of code this might be too slow. However, the library also allows creating a WorkerThread object as a member variable of your class that can be reused. I personally believe you can only make a turbo mode work with a separate thread.
BTW, are you aware of blockSignals() for your turbo mode?
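If you would rather not pull in the library, the same idea can be outlined in plain Qt. This is only a sketch of the approach (the names setButtonsEnabled() and runTurboBlocking() are invented), not the library's API:

```cpp
#include <QThread>

// Outline: run the Turbo loop in a worker thread so the GUI event loop keeps
// running, and hop back to the GUI thread for any widget changes.
void MainWindow::startTurbo()
{
    setButtonsEnabled(false);                      // GUI thread: paints normally,
                                                   // because the event loop keeps spinning

    QThread *worker = QThread::create([this]() {
        runTurboBlocking();                        // heavy work; must not touch widgets here

        QMetaObject::invokeMethod(this, [this]() { // back on the GUI thread
            setButtonsEnabled(true);
        }, Qt::QueuedConnection);
    });
    connect(worker, &QThread::finished, worker, &QObject::deleteLater);
    worker->start();
}
```
-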
@SimonSchroeder
Hi Simon, lots to digest here, thank you! And you may have identified a difference between direct and queued connections which might explain the apparently different behaviour.
I am very well aware of how processEvents() slows down execution, and of not calling it too frequently. That is why my Turbo mode (unlike other modes) essentially does not call it at all during execution. The issue here is a single call just before Turbo starts blocking, to ensure the UI is up to date at that point for the user to see, just before going into no-update mode for an unknown period of time. [And the double call advised by @JKSH/the Qt bug has solved this for me now.]
I regard blockSignals() as devil-spawn. For one thing, you have no idea whether things will stop working correctly as a consequence of not receiving a signal which should have been emitted and is relied on somewhere. But it does not work for me in any case: it is important that the last of a number of similar signals is emitted so that the UI can be brought up to date with it, and you don't know when the "last" one will be as you go. I already have extensive code at the receiving (slot) side to queue, compress and delay acting on signals. That made a big difference, but I have discovered it's not enough where "thousands" of signals may be emitted: in that case there appears to be a significant overhead in just emitting the signals at the sender side, even if they will later be reduced/removed at the receiver side. If I wanted to address that I would have to do queuing/compression at the sender side, which doesn't seem totally logical and would impose an overhead to calculate it during the backend Turbo run, which I am keen to avoid (the backend should run as fast and uninterrupted as possible). More information in my https://forum.qt.io/topic/164055/how-many-emitted-signals-is-too-many.
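For anyone reading along, the receiving-side "queue-and-compress-and-delay" I mean is essentially the usual coalescing-timer pattern. A simplified sketch, not my actual code (the class name is invented):

```cpp
#include <QTimer>
#include <QWidget>

// Simplified sketch: thousands of incoming notifications only arm a
// zero-interval single-shot timer, so the expensive UI refresh happens once
// per event-loop pass rather than once per signal.
class RegisterView : public QWidget
{
public:
    explicit RegisterView(QWidget *parent = nullptr) : QWidget(parent)
    {
        m_refresh.setSingleShot(true);
        m_refresh.setInterval(0);
        connect(&m_refresh, &QTimer::timeout, this, [this]() {
            // Re-read the emulator state here, then repaint once.
            update();
        });
    }

    void onEmulatorChanged()            // the slot that gets hit "thousands" of times
    {
        if (!m_refresh.isActive())
            m_refresh.start();          // one refresh per burst, not one per signal
    }

private:
    QTimer m_refresh;
};
```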