More efficient matrix computation for large monoplex networks #11
I've found that, for large monoplex networks, the current implementation of the matrix transformation becomes quite inefficient, as it uses a lot (hundreds of GB) of RAM. The culprit seems to be this line of code in `compute.adjacency.matrix()`:

```r
offdiag <- (delta/(L-1))*Idem_Matrix
```
However, as far as I can see, this step (and everything related to it) is not necessary for monoplex networks. I assume it is relevant for multiplex networks, but as I am not currently working with those, I had no way of testing this. Therefore, I slightly adjusted the code to skip this step for monoplex networks and to leave it as-is for multiplex networks.
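To make the intent concrete, here is a minimal sketch of the idea behind the adjustment, not the verbatim change to the package. The helper name `build_interlayer_block` and the variable `N` (nodes per layer) are assumptions for illustration; `L`, `delta` and `Idem_Matrix` follow the original line. The point is simply that the scaled identity block is only built when there is more than one layer, so the large allocation (and the degenerate `delta/(L-1)` division when `L = 1`) never happens for monoplex input:

```r
## Sketch only -- assumed names: N = nodes per layer, L = number of layers,
## delta = inter-layer jump probability (L, delta, Idem_Matrix as in the
## original line of code).
library(Matrix)

build_interlayer_block <- function(N, L, delta) {
  if (L == 1) {
    ## Monoplex case: there are no inter-layer transitions, so the
    ## off-diagonal block is skipped entirely (no large allocation,
    ## and no division by zero in delta/(L - 1)).
    return(NULL)
  }
  ## Multiplex case: unchanged from the original logic.
  Idem_Matrix <- Diagonal(N, x = 1)    # sparse N x N identity
  (delta / (L - 1)) * Idem_Matrix      # scaled coupling block, as before
}
```

A caller would then add the coupling blocks to the supra-adjacency matrix only when the returned value is not `NULL`, which mirrors the "skip this step for monoplex networks" behaviour described above.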
In all my tests (both with a toy example and a larger dataset), the results for monoplex datasets were identical. See the following reprex, where the adjusted function is called `compute.adjacency.matrix_2()`:
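(The original reprex is not reproduced here. As an illustration only, a comparison of the two functions on a toy monoplex network could look like the sketch below; `compute.adjacency.matrix_2()` is the adjusted copy referred to above, and the `create.multiplex()` constructor call is an assumed usage that may differ from the package's actual API.)

```r
## Illustration only -- not the author's reprex. The constructor call is an
## assumed usage of create.multiplex() and may differ between versions.
library(igraph)

set.seed(42)
g <- sample_gnp(n = 1000, p = 0.01)              # toy monoplex network
V(g)$name <- as.character(seq_len(vcount(g)))    # give nodes names (assumed to be required)

mono <- create.multiplex(list(layer1 = g))       # single-layer object (assumed API)

A_original <- compute.adjacency.matrix(mono)     # current implementation
A_adjusted <- compute.adjacency.matrix_2(mono)   # adjusted implementation

identical(A_original, A_adjusted)                # the author reports identical results
```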
While this should not cause any problems for multiplex networks, a second look and potentially more testing would be appreciated.