Thanks a lot for these answers.
To summarize:
- Solution 1 from the original post is confusing and therefore not good.
- Solution 2 from the original post is much better.
- An alternative would be to disable tracking of the slider, so the image only gets updated on mouse release.

Disabling the tracking would be the easiest thing to do (a minimal sketch is below), but it is not very flexible. There are two main problems for me:

- I also have spin boxes in the dialog, which change the image as well, and I do not want to disable the arrow-up and arrow-down support for these spin boxes, because that would feel a bit strange.
- I do not know the speed of the target system, so I prefer to use threads anyway to make sure the UI stays responsive.

Therefore, I think I will go for solution 2.
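For completeness, disabling the tracking would just be something like this minimal sketch (assuming a plain QSlider; not what I will actually do):

```cpp
#include <QSlider>

// Minimal sketch: with tracking disabled, the slider emits valueChanged()
// only when the handle is released, not while it is being dragged.
void configureSlider(QSlider *slider)
{
    slider->setTracking(false);
}
```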
That leaves the question of how to implement this.
Did you try to see the hot paths of your application with a tool like hotspot?
No.
Besides that, what exactly takes that long in your application?
From a quick look at your code, you likely do too much in your paint event.
You are creating a new QImage every time paintEvent is called; why not create it once on resize and then just repaint on top of it?
Indeed, the paint event (and also the rest of the code) is not speed-optimized at all. My plan was:
1. Define which behaviour I want. (That is done now, thanks to you!)
2. Decide which threading technologies to use. (To be done.)
3. Optimize the individual parts where necessary. (To be done.)

But I remember that when, a while ago, I measured the old synchronous code on a pretty fast computer and with a small widget size, the paint event took 360 ms, of which 330 ms were spent in the image rendering algorithm, which is therefore the biggest problem.
Also, I will look into your suggestion to always keep a QImage in memory, so that it is not allocated again on each paint event. (Also, I have heard somewhere that Qt itself does the same thing.)
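Just to make sure I understood the suggestion correctly, something like this minimal sketch? (Class and member names are my own placeholders, not the real library code.)

```cpp
#include <QImage>
#include <QPainter>
#include <QPaintEvent>
#include <QResizeEvent>
#include <QWidget>

// Sketch: allocate the QImage only when the widget size changes,
// and only paint the cached buffer in paintEvent().
class RenderWidget : public QWidget
{
protected:
    void resizeEvent(QResizeEvent *event) override
    {
        // Reallocate the buffer only on an actual size change.
        m_buffer = QImage(event->size(), QImage::Format_ARGB32_Premultiplied);
        m_buffer.fill(Qt::transparent);
        QWidget::resizeEvent(event);
    }

    void paintEvent(QPaintEvent *) override
    {
        // No allocation here: just draw the cached buffer onto the widget.
        QPainter painter(this);
        painter.drawImage(0, 0, m_buffer);
    }

private:
    QImage m_buffer;
};
```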
you might want to check out the Mandelbrot example.
Yes, indeed, the current multithreaded code is based on the Mandelbrot example.
One other possibility is to update the content of the image when something changes and just paint the updated image once that is done.
Indeed, I'm already using an image cache, so the image is not rendered unnecessarily.
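Roughly in the spirit of this minimal sketch (the names and the single double parameter are placeholders; the real cache keys on more inputs):

```cpp
#include <QImage>
#include <QSize>

// Sketch of a parameter-based image cache: the expensive rendering
// only runs when the inputs actually changed.
class ImageCache
{
public:
    const QImage &image(const QSize &size, double parameter)
    {
        if (m_cached.isNull() || size != m_size || parameter != m_parameter) {
            m_size = size;
            m_parameter = parameter;
            m_cached = renderExpensively(size, parameter);
        }
        return m_cached;
    }

private:
    QImage renderExpensively(const QSize &size, double parameter)
    {
        // Placeholder for the real per-pixel rendering algorithm.
        Q_UNUSED(parameter);
        QImage image(size, QImage::Format_ARGB32_Premultiplied);
        image.fill(Qt::white);
        return image;
    }

    QImage m_cached;
    QSize m_size;
    double m_parameter = 0.0;
};
```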
Of course, only you know why this thing takes this long and whether anything can be done to reduce it.
That's the most interesting question. The algorithm calculates each pixel independently: perfect for splitting the work across as many threads as the target computer has virtual cores. And probably the algorithm itself can also be optimized.
How could this work? I have no experience in multithreading, so doing it manually with QThread seems complicated and error-prone to me. Could I query the number of virtual cores and then call QtConcurrent::run() once per virtual core, each call rendering just a fraction of the image? Would this scale well? (As I'm developing a library, I have no knowledge of what other threads the application developer will use in their own code.)
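To make the question concrete, here is a rough sketch of what I have in mind. computePixel() is a placeholder for the real per-pixel algorithm, and the blocking wait would of course live in a worker thread (as in the Mandelbrot example), not in the GUI thread:

```cpp
#include <QtConcurrent>
#include <QImage>
#include <QThread>
#include <QVector>

// Placeholder for the real, expensive per-pixel algorithm.
static QRgb computePixel(int x, int y)
{
    return qRgb(x % 256, y % 256, 128);
}

// Sketch: split the per-pixel rendering into horizontal bands,
// one QtConcurrent::run() task per virtual core.
QImage renderParallel(const QSize &size)
{
    QImage image(size, QImage::Format_ARGB32);

    uchar *base = image.bits();              // detach once, before the tasks start
    const auto stride = image.bytesPerLine();
    const int width = image.width();
    const int height = image.height();

    const int threadCount = qMax(1, QThread::idealThreadCount());
    const int rowsPerBand = (height + threadCount - 1) / threadCount;

    QVector<QFuture<void>> futures;
    for (int band = 0; band < threadCount; ++band) {
        const int firstRow = band * rowsPerBand;
        const int lastRow = qMin(firstRow + rowsPerBand, height);
        // Each task fills its own band, so no two tasks write to the same pixels.
        futures.append(QtConcurrent::run([=]() {
            for (int y = firstRow; y < lastRow; ++y) {
                QRgb *line = reinterpret_cast<QRgb *>(base + y * stride);
                for (int x = 0; x < width; ++x)
                    line[x] = computePixel(x, y);
            }
        }));
    }

    // Blocking wait; in the real code this whole function would itself
    // run in a worker thread, not in the GUI thread.
    for (QFuture<void> &future : futures)
        future.waitForFinished();

    return image;
}
```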
it can't be done as a gradual thing.
Actually, it probably can be done as a gradual thing. Maybe like this:

- Allocate a QImage at the full resolution.
- Calculate pixel (0, 0) and set that color not only for pixel (0, 0), but also for (0, 1), (1, 0) and (1, 1). Continue like this until the QImage is complete.
- Do a paint event with this low-resolution image.
- Do a new run that calculates (0, 1), (1, 0) and (1, 1) exactly, and so on, until the QImage is complete.
- Then do a new paint event with the full-resolution image.

This idea could also be adapted to work with more than just two steps. Do you think this is a good idea?
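As a rough sketch of the two-step variant (again, computePixel() and the repaint callback are just placeholders, not the real library code):

```cpp
#include <QImage>
#include <functional>

// Placeholder for the real, expensive per-pixel algorithm.
static QRgb computePixel(int x, int y)
{
    return qRgb(x % 256, y % 256, 128);
}

// Sketch of two-step progressive rendering; "repaint" is whatever triggers
// a repaint of the widget (e.g. queuing an update()).
void renderProgressively(QImage &image, const std::function<void()> &repaint)
{
    const int w = image.width();
    const int h = image.height();

    // Step 1: compute only the top-left pixel of each 2x2 block and copy its
    // color to the other three pixels, giving a fast low-resolution preview.
    for (int y = 0; y < h; y += 2) {
        for (int x = 0; x < w; x += 2) {
            const QRgb color = computePixel(x, y);
            for (int dy = 0; dy < 2 && y + dy < h; ++dy)
                for (int dx = 0; dx < 2 && x + dx < w; ++dx)
                    image.setPixel(x + dx, y + dy, color);
        }
    }
    repaint();   // show the coarse preview

    // Step 2: compute the remaining pixels exactly and repaint again.
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            if ((x % 2) != 0 || (y % 2) != 0)
                image.setPixel(x, y, computePixel(x, y));
    repaint();   // show the full-resolution result
}
```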