RAM usage keeps increasing when deleting and reimporting data #1240
Comments
So basically you are continuously deleting and loading the same RDF dataset? What does the output of the Linux top command show for the Virtuoso process? Please also provide a copy of the virtuoso.ini in use.
Correct. I'm continuously deleting and reloading the same dataset. top shows that Virtuoso started at 0,2 %MEM (I have 64 GB RAM). After the first import it reaches 0,5 %. I was using Ubuntu's System Monitor to check the RAM usage. And here is the virtuoso.ini:
Looking at the virtuoso.ini you provided, there are two cleanup-related parameters that control how quickly the server releases memory it is no longer using.
Setting these to 1 should make it free that memory much sooner.
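For illustration, assuming the parameters in question are the two cleanup intervals present in the stock virtuoso.ini (an assumption, not confirmed in this thread), the relevant [Parameters] entries would look like:

[Parameters]
; set both cleanup intervals to their minimum so that memory held by idle
; threads and cached resources is released as often as possible
ResourcesCleanupInterval = 1
ThreadCleanupInterval    = 1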
I gave it a try just now, setting both parameters to 1 in the ini file. I confirmed via the web interface that the parameters are loaded, but memory still only increases; nothing is being freed. Here's a short recording. The queries being executed are always the same as mentioned in the posts above, as is the dataset being loaded.
I assume the Virtuoso instance was restarted when the INI file parameters were changed, so that they actually take effect? (These settings do not take effect on a running instance without a restart, though the Conductor editor will immediately show that the values have been changed in the INI file.)

Looking at your loop test case again, i.e.

DELETE FROM DB.DBA.RDF_QUAD
DB.DBA.TTLP_MT (file_to_string_output ('/path/to/ttl_file.ttl'), '', 'http://localhost:8890/XXX')

this is a bad test case, as the unqualified DELETE FROM DB.DBA.RDF_QUAD removes every quad in the quad store, not only those in the graph being reloaded. Even if you were to qualify it with the actual graph name being loaded, i.e. 'http://localhost:8890/XXX', you should probably also run a checkpoint between iterations so the deleted state is actually flushed.
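A sketch of what a more tightly scoped loop body could look like (illustrative only, reusing the graph IRI and file path quoted above; SPARQL CLEAR GRAPH and checkpoint are standard Virtuoso commands, but this is not quoted from the thread):

-- clear only the graph that is about to be reloaded, instead of the whole quad store
SPARQL CLEAR GRAPH <http://localhost:8890/XXX>;

-- reload the dataset into the same graph
DB.DBA.TTLP_MT (file_to_string_output ('/path/to/ttl_file.ttl'), '', 'http://localhost:8890/XXX');

-- persist the changes so the space held by the deleted rows can be reclaimed
checkpoint;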
I have restarted the server multiple times between the tests, and as noted in the screenshot above the parameters were active in the running instance. I re-ran the tests with the suggested changes applied to the actions performed on each loop iteration.

Result: memory still keeps increasing. The parameters made no difference, and the other variations I tried made no difference either.

Edit: I downloaded and compiled the latest release, v7.2.11, to see if the problem would manifest there too, and indeed I have the same problem with that version. Same as on the current develop/7 branch.
Please could you check the VM (virtual memory) statistics for the Virtuoso server process?
What are these statistics after continuous operation, and how do they change?
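If the figures of interest are the server's own counters rather than what top or the System Monitor report, one way to capture them on each iteration is the status() call from isql (shown only as an illustration of the kind of check, not necessarily the exact statistics being asked about):

-- prints Virtuoso's status report (buffer usage, locks, connected clients, etc.)
status();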
I changed the cycle time to 1 sec instead of 5 sec just to speed things up a bit. Here's a recording of how the VM values change over a period of 6-7 minutes:
Hello there,
I'm experimenting a bit with Virtuoso as we're trying to figure out whether it's a good choice for our project, and while doing some tests I stumbled upon a weird memory-usage case.
In this particular case, the queries I run are:
DELETE FROM DB.DBA.RDF_QUAD
DB.DBA.TTLP_MT (file_to_string_output ('/path/to/ttl_file.ttl'), '', 'http://localhost:8890/XXX')
The dataset is about 5,7 MB and contains roughly 80k triples.
Every time I run the above queries, the RAM usage of Virtuoso increases by about 0,5 MB. Occasionally a little (1-2 MB) gets reclaimed, but other than that it keeps increasing.
The test I ran lasted 40 minutes. The queries above were executed at 5 sec intervals continuously, and each time the RAM kept increasing by 0,3 - 0,6 MB (mostly 0,5 MB).
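For reproduction purposes, a driver for this kind of test could be written as a Virtuoso/PL procedure along the following lines (a sketch only; the procedure name and loop counter are made up, and this is not the script actually used for the measurements below):

-- hypothetical reproduction loop: delete all quads, reload the same file, repeat n times
create procedure RELOAD_TEST_LOOP (in n integer)
{
  declare i integer;
  i := 0;
  while (i < n)
    {
      delete from DB.DBA.RDF_QUAD;
      DB.DBA.TTLP_MT (file_to_string_output ('/path/to/ttl_file.ttl'), '', 'http://localhost:8890/XXX');
      commit work;
      -- the original test paused roughly 5 seconds between iterations
      i := i + 1;
    }
}
;

It would then be called from isql, e.g. RELOAD_TEST_LOOP (100);, while watching the process in top.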
Virtuoso started at 99,2 MB and after the first run (delete + import) it increased to 237,1 MB.
After 7:30 minutes it was sitting at 302 MB.
After 10:50 minutes it was at 318,2 MB, and after 40 minutes it was above 400 MB.
I rebuilt Virtuoso with debug symbols so that I could inspect the issue with Valgrind, and these are the findings:
However, I also ran the Valgrind Massif heap profiler, and there it shows constant allocations with occasional spikes that return to normal right after:
I'm running the develop/7 branch locally on Ubuntu 20.04.
Configuration is unchanged, running default values.
Any idea what could be the cause? Are there any known issues that have yet to be fixed perhaps?
P.S.: I cannot share the dataset as it's confidential (company stuff).