Could Someone Give Me Advice on Optimizing QTableView with Large Data Sets?
-
Hello there,
I am working on a project where I need to display and interact with a large dataset (approximately 1 million rows) using QTableView in a desktop application. The dataset is sourced dynamically from a database, and while I have managed to load it into the model, performance has become a bottleneck.
Currently, I am using QSqlTableModel as the underlying model for QTableView. While this works for smaller datasets, scrolling, searching, and sorting become sluggish as the dataset grows. I suspect that rendering such a large amount of data at once is not the most efficient approach, but I am unsure about the best way to optimize this.
Is there a recommended way to implement lazy loading or data fetching on demand for QTableView? I have read about QAbstractItemModel, but I am not sure if that is the best approach here.
Would implementing a custom model improve performance significantly compared to using QSqlTableModel? If so, could anyone provide a basic example or point me to relevant documentation?
What are some efficient techniques to handle searching and sorting within a large dataset? Would it be better to offload these tasks to the database instead of doing them within the application?
Thank you in advance for your help.
-
@Hadley
I presume you have already come across fetchMore(), which implements "paging" when filling the model from the SQL result set. See when that is called to determine whether QTableView pre-fetches everything from the start or only as needed as you move through the table.
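With a driver that does not report the query size (SQLite, for example), QSqlTableModel already fetches rows in batches through canFetchMore()/fetchMore(), and the view calls them as you scroll. A minimal sketch, with a made-up database file and table name:

```cpp
// Minimal sketch (made-up database file and table): with a driver such as
// SQLite that does not report the query size, QSqlTableModel fetches rows in
// batches and QTableView triggers fetchMore() as you scroll, so the full
// million rows are never pulled into memory up front.
#include <QApplication>
#include <QSqlDatabase>
#include <QSqlTableModel>
#include <QTableView>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QSqlDatabase db = QSqlDatabase::addDatabase("QSQLITE");
    db.setDatabaseName("big.db");      // made-up database file
    if (!db.open())
        return 1;

    QSqlTableModel model;
    model.setTable("measurements");    // made-up table with ~1M rows
    model.select();                    // only the first batch is fetched

    QTableView view;
    view.setModel(&model);
    view.show();                       // scrolling drives further fetchMore() calls

    // Avoid forcing a full fetch like this, it defeats the incremental loading:
    // while (model.canFetchMore()) model.fetchMore();

    return app.exec();
}
```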
You might interpose some QAbstractProxyModel between the SQL model and the table view, or do your own QAbstractTableModel to implement some kind of paging.
https://stackoverflow.com/questions/72909009/displaying-large-amount-of-data-in-qtableview
https://forum.qt.io/topic/116231/load-big-database-into-model-to-be-shown-in-table-view
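If you roll your own model, a paging QAbstractTableModel can look roughly like this. This is only an untested sketch; the table, columns, and page size are made up, and you would still add error handling and header data:

```cpp
// Rough sketch of a custom QAbstractTableModel that pages rows in from the
// database through canFetchMore()/fetchMore() using LIMIT/OFFSET queries.
// Table name, columns and page size are illustrative.
#include <QAbstractTableModel>
#include <QSqlQuery>
#include <QVariant>
#include <QVector>

class PagedSqlModel : public QAbstractTableModel
{
public:
    explicit PagedSqlModel(QObject *parent = nullptr)
        : QAbstractTableModel(parent) {}

    int rowCount(const QModelIndex &parent = QModelIndex()) const override {
        return parent.isValid() ? 0 : m_rows.size();
    }
    int columnCount(const QModelIndex &parent = QModelIndex()) const override {
        return parent.isValid() ? 0 : ColumnCount;
    }
    QVariant data(const QModelIndex &index, int role = Qt::DisplayRole) const override {
        if (!index.isValid() || role != Qt::DisplayRole)
            return QVariant();
        return m_rows.at(index.row()).at(index.column());
    }

    // QTableView calls these as the user scrolls near the end of the loaded rows.
    bool canFetchMore(const QModelIndex &parent) const override {
        return !parent.isValid() && !m_atEnd;
    }
    void fetchMore(const QModelIndex &parent) override {
        if (parent.isValid() || m_atEnd)
            return;

        QSqlQuery query;
        query.prepare("SELECT col0, col1, col2 FROM big_table "
                      "LIMIT :limit OFFSET :offset");        // made-up table/columns
        query.bindValue(":limit", PageSize);
        query.bindValue(":offset", m_rows.size());
        query.exec();

        QVector<QVector<QVariant>> page;
        while (query.next()) {
            QVector<QVariant> row(ColumnCount);
            for (int c = 0; c < ColumnCount; ++c)
                row[c] = query.value(c);
            page.append(row);
        }
        if (page.isEmpty()) {
            m_atEnd = true;                                  // no more rows to load
            return;
        }

        beginInsertRows(QModelIndex(), m_rows.size(), m_rows.size() + page.size() - 1);
        m_rows += page;
        endInsertRows();
    }

private:
    static constexpr int ColumnCount = 3;
    static constexpr int PageSize = 256;
    QVector<QVector<QVariant>> m_rows;
    bool m_atEnd = false;
};
```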
You might Google "qtableview large model" or similar.
Regardless of the table view, it will of course be hugely faster and cost vastly less memory if you do searching/filtering/sorting on the database side instead of in the client's in-memory model.
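With QSqlTableModel that can be as simple as setFilter()/setSort(), which translate into WHERE and ORDER BY clauses so the client never holds or sorts the full result set (the table name, filter, and column below are made up):

```cpp
// Sketch: push filtering and sorting into the database instead of the
// in-memory model. The table name, filter string and column index are made up.
#include <QSqlTableModel>

void applyDatabaseSideFilterAndSort(QSqlTableModel &model)
{
    model.setTable("big_table");           // made-up table
    model.setFilter("name LIKE 'A%'");     // becomes the WHERE clause
    model.setSort(2, Qt::AscendingOrder);  // becomes ORDER BY on column 2
    model.select();                        // the database does the work
}
```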
-
One million rows is nothing, but the trick is of course to have a data model that is well suited for it.
In my system, where I'm throwing hundreds of thousands to millions of rows of data at a QTableView, I have:
- Data that has been organized for quick loading ahead of time
- A system that provides quick access to the data without creating objects or having to allocate anything with new.
Basically the data is split into chunks that are stored on disk. Each chunk is memory mapped and contains "records". Since the records can have variable size, each data chunk has a header and an offset table which stores the starting offset of each record in the data part of the chunk. (You can think of this as a jump table.)
So when a QTableView needs to display any particular row, it all essentially boils down to taking a base pointer to a chunk of data and then reading data at offsets relative to that base pointer.
The system is extremely fast.
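Roughly, the per-row lookup looks something like the sketch below. This is a simplified illustration with made-up names, not the actual production code; in particular, the offset table here has recordCount + 1 entries so a record's length can be computed from two neighbouring offsets:

```cpp
// Simplified sketch of one memory-mapped chunk:
// [ChunkHeader][offset table][record data]
// The offset table lets lookups jump straight to a variable-sized record
// without allocating anything per row.
#include <QFile>
#include <QString>
#include <QtGlobal>

struct ChunkHeader {
    quint32 recordCount;   // number of records stored in this chunk
    quint32 dataOffset;    // byte offset where the record data begins
};

class Chunk {
public:
    bool open(const QString &path) {
        m_file.setFileName(path);
        if (!m_file.open(QIODevice::ReadOnly))
            return false;
        m_base = m_file.map(0, m_file.size());   // the OS pages the data in on demand
        return m_base != nullptr;
    }

    quint32 recordCount() const { return header()->recordCount; }

    // Returns a pointer to record `row` inside the mapped chunk and its length.
    const char *record(quint32 row, quint32 *length) const {
        const quint32 *offsets = offsetTable();
        const quint32 begin = offsets[row];
        const quint32 end   = offsets[row + 1];  // table holds recordCount + 1 offsets
        *length = end - begin;
        return reinterpret_cast<const char *>(m_base) + header()->dataOffset + begin;
    }

private:
    const ChunkHeader *header() const {
        return reinterpret_cast<const ChunkHeader *>(m_base);
    }
    const quint32 *offsetTable() const {
        return reinterpret_cast<const quint32 *>(m_base + sizeof(ChunkHeader));
    }

    QFile m_file;
    uchar *m_base = nullptr;
};
```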