Out of Memory errors have been reported while the Lucene index is being built (which typically happens at startup). This occurs with some database drivers (MSSQL, for example) that, while processing a large SQL result set (one that contains all rows of the table holding archived messages), keep every row that has already been iterated over in memory.
The implementation should cause the database driver to discard rows that have been processed instead.
As a workaround, the property conversation.search.index-enabled can be set to false in release 2.6.0 or later of the plugin. This prevents the Lucene index from being created. As a result, full-text search will yield no results.
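A minimal sketch of applying this workaround programmatically, assuming the plugin reads the flag as an Openfire system property via JiveGlobals (setting it through the Admin Console's System Properties page has the same effect):

```java
import org.jivesoftware.util.JiveGlobals;

// Hedged sketch: assumes this runs inside Openfire (e.g. from a plugin),
// where JiveGlobals is backed by the server's system-property store.
public class DisableArchiveIndexing {
    public static void disableIndexing() {
        // Prevents the monitoring plugin (2.6.0+) from building its Lucene
        // index; full-text search will consequently return no results.
        JiveGlobals.setProperty("conversation.search.index-enabled", "false");
    }
}
```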
guusdk added a commit to guusdk/openfire-monitoring-plugin that referenced this issue on Dec 10, 2024: …mory when indexing
When iterating over all rows of a potentially large table, ensure that the database result set is configured to be 'linear' (forward-only and read-only).
This gives the database driver a better chance to release rows that have already been iterated over, which prevents the fetch buffer from eventually holding all rows (and potentially causing out-of-memory issues).
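A minimal JDBC sketch of the approach the commit describes; the table name, column names, and fetch size below are illustrative assumptions, not taken from the plugin's actual code:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class LinearResultSetExample {
    // Iterates over a large table without letting the driver buffer
    // every row it has already returned.
    static void streamRows(Connection con) throws SQLException {
        String sql = "SELECT fromJID, body FROM ofMessageArchive"; // illustrative query
        try (PreparedStatement ps = con.prepareStatement(
                sql,
                ResultSet.TYPE_FORWARD_ONLY,   // 'linear': no scrolling backwards
                ResultSet.CONCUR_READ_ONLY)) { // no updates through the result set
            ps.setFetchSize(250);              // hint: fetch rows in small batches
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    String body = rs.getString("body");
                    // index 'body' here; rows already read can be discarded by the driver
                }
            }
        }
    }
}
```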