New branch strategies and collectors changes #1

Open
Chrisys93 wants to merge 11 commits into master

Conversation

Chrisys93
Owner

Had to make a few changes to try and get absolutely everything right from a repo point of view, including configurations, strategies, message receive times and collectors. TODO: I still need to get all the strategy and configuration scenarios sorted, along with the output results paths and files, but that should be pretty easy and straightforward with a few for loops, like the already-implemented strategy experiment queues in the current configs (see the sketch below).
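A minimal sketch of how the remaining scenarios and results paths could be queued with for loops, following the pattern of the existing strategy experiment queues. The strategy names, workload keys and results-path layout here are illustrative assumptions, not the repo's actual config API:

```python
import copy
from collections import deque

# Shared base experiment; the nested config trees used by the real configs
# are approximated here with plain dicts.
DEFAULT_EXPERIMENT = {'strategy': {}, 'workload': {}}

STRATEGIES = ['HYBRID', 'PROC_ONLY', 'STORE_ONLY']  # hypothetical names
DATA_RATES = [10, 50, 100]                          # msgs/s, hypothetical

EXPERIMENT_QUEUE = deque()
for strategy in STRATEGIES:
    for rate in DATA_RATES:
        experiment = copy.deepcopy(DEFAULT_EXPERIMENT)
        experiment['strategy']['name'] = strategy
        experiment['workload']['data_rate'] = rate
        # one results file per (strategy, rate) scenario
        experiment['results_path'] = 'results/%s_rate%d.pickle' % (strategy, rate)
        EXPERIMENT_QUEUE.append(experiment)
```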

Did quite a few modifications and added an extra print statement to show when messages are created for storage at the receivers. Even with constant storage-information generation, the repo environment seems to handle everything remarkably well!
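The added print might look something like this minimal sketch; the function and field names are assumptions rather than the repo's actual ones:

```python
import time

def announce_storage_message(receiver_id, msg_id, size):
    # One line per message created for storage at a receiver, making the
    # constant storage-information generation visible in the run log.
    print("t=%.3f: receiver %s created message %s (%d bytes) for storage"
          % (time.time(), receiver_id, msg_id, size))
```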
Changed storage to account for each second passed in each node (with the first update of last_period from 0 to time.time(), of course). Need to see the results and try to get the most interesting readings, comparisons and metrics possible.
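A minimal sketch of that per-second accounting, assuming a node object with a last_period field that starts at 0 and a per-second update hook; all names are illustrative:

```python
import time

class RepoStorageNode:
    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0
        self.last_period = 0  # set to time.time() on the first update

    def update_storage(self):
        now = time.time()
        if self.last_period == 0:
            self.last_period = now  # first update, as described above
            return
        elapsed = now - self.last_period
        # account once for each whole second passed since the last update
        for _ in range(int(elapsed)):
            self.per_second_tick()
        self.last_period += int(elapsed)  # keep the fractional remainder

    def per_second_tick(self):
        # placeholder: age stored messages, refresh per-second metrics, etc.
        pass
```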
Changed a few things in the config and policies to account for proc and non-proc messages, and added a repo storage check in handle as well.
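A sketch of the kind of check that could sit in handle, splitting proc(essed) from non-proc messages before the storage test; the dict layout and field names are assumptions:

```python
def handle(node, msg):
    # Account separately for proc and non-proc messages, then run the repo
    # storage check before actually storing the message.
    bucket = 'proc' if msg.get('processed') else 'non_proc'
    node['counts'][bucket] += 1
    if node['used'] + msg['size'] > node['capacity']:
        return False  # storage would overflow; caller must evict or drop
    node['used'] += msg['size']
    node['store'][bucket].append(msg)
    return True
```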
Changed the for loops in the strategies, so most of the strategies should now properly evict messages that overload the storage. Need to develop the next steps further; will work a bit more on the report after the results, tomorrow.
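One plausible reading of the reworked eviction loop, as a hedged sketch: iterate over a sorted snapshot of the stored messages (oldest first) and evict until usage drops back under capacity. Here node['store'] is assumed to be a flat list of message dicts:

```python
def evict_overload(node):
    # sorted() returns a new list, so removing from node['store'] inside
    # the loop does not disturb the iteration.
    for msg in sorted(node['store'], key=lambda m: m['receive_time']):
        if node['used'] <= node['capacity']:
            break
        node['store'].remove(msg)
        node['used'] -= msg['size']
```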
Had problems stopping the storage from overloading, but got it sorted by applying the data scopes and limiting the number of replicas of each content/service identifier (and its processed counterparts) in the system. After that, there was a problem with too many events being generated. That was resolved by adding the replicas to the message replica-number checks before the messages were stored, since otherwise many events would be created before the messages were actually stored, rendering simulations inefficient. However, this does add complexity and a not-exactly-efficient side effect: messages are marked as stored before they actually are, so in the meantime other processed messages present in that specific node cannot be stored, because the service is already marked as having those replicas in the system. TODO: that could be done properly at some point later, for completeness and further efficiency!
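The mechanism described above, as a minimal sketch: the replica counter is bumped before the storage event is scheduled, so no further events are generated for that identifier while the store is still in flight. REPLICA_LIMIT and the counter layout are assumptions:

```python
REPLICA_LIMIT = 3  # hypothetical per-identifier cap

def try_schedule_store(replicas, msg, schedule_event):
    key = msg['content_id']  # content/service id, incl. processed variants
    if replicas.get(key, 0) >= REPLICA_LIMIT:
        return False  # enough replicas already marked in the system
    replicas[key] = replicas.get(key, 0) + 1  # marked *before* actual storage
    schedule_event('store', msg)  # the store itself completes later
    return True
```

This is exactly the trade-off noted in the commit: the early mark keeps the event count down, at the cost of temporarily blocking other copies on the same node.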
Had to add a way to output useful internal data offload speeds, changed the way several variables are set up (including non-proc data generation rates), redefined both configurations, and changed the network functions for better data management.
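A hedged sketch of what a collector for those offload speeds could look like; the hook and result names are assumptions, not the repo's actual collector interface:

```python
class OffloadSpeedCollector:
    """Records (bytes, seconds) per internal data offload; reports mean speed."""

    def __init__(self):
        self.samples = []

    def record_offload(self, n_bytes, duration):
        if duration > 0:  # ignore zero-length transfers
            self.samples.append((n_bytes, duration))

    def results(self):
        total_bytes = sum(b for b, _ in self.samples)
        total_time = sum(t for _, t in self.samples)
        speed = total_bytes / total_time if total_time else 0.0
        return {'MEAN_OFFLOAD_SPEED': speed}
```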