Document backend_config.ini #623
There are two remaining keys to document: `NumberOfProcessesForFrontendTesting` and `NumberOfProcessesForBatchTesting`.
The names of these keys seem to suggest that they can be used to limit the number of processes per usage. However, looking at the current code, I can't find much on this potential limitation. What I found is the following usage, where both values are added together to compute the maximum number of processes:

```perl
sub new_PM {
    my $self = shift;
    my $maximum_processes =
        $self->NumberOfProcessesForFrontendTesting() +
        $self->NumberOfProcessesForBatchTesting();
    ...
    my $pm = Parallel::ForkManager->new( $maximum_processes );
    ...
}
```
I then tried to look for code that limits the creation of processes for each usage (frontend or batch), without success. Going through the git history I was able to find code where the limitation was explicit (this is quite old, from around the time these keys were introduced):

```perl
sub can_start_new_worker {
    my ( $priority, $test_id ) = @_;
    my $result = 0;

    my @nb_instances = split( /\n+/, `ps -ef | grep "execute_zonemaster_P$priority.pl" | grep -v "sh -c" | grep -v grep | grep -v tail` );
    my @same_test_id = split( /\n+/, `ps -ef | grep "execute_zonemaster_P$priority.pl $test_id " | grep -v "sh -c" | grep -v grep | grep -v tail` );

    my $max_slots = 0;
    if ( $priority == 5 ) {
        $max_slots = $batch_slots;
    }
    elsif ( $priority == 10 ) {
        $max_slots = $frontend_slots;
    }

    $result = 1 if ( scalar @nb_instances < $max_slots && !@same_test_id );
}
```
So it looks like the code has since been reduced to:

```perl
my $maximum_processes =
    Zonemaster::WebBackend::Config->NumberOfProfessesForFrontendTesting() +
    Zonemaster::WebBackend::Config->NumberOfProfessesForBatchTesting();
```

Unless I am missing something, it looks like the idea of limiting the number of processes per usage has not been kept in the code. So I suggest we decide what to do about batch processing as proposed in #743. The best I can come up with at this point is to specify that these values are added together and that the result defines the total number of allowed processes. Despite their names, they do not limit the number of processes by type (frontend or batch).
@matsduf and @mattias-p, can you have a look at my comment above? (I don't know if editing a comment to add @-mentions raises a notification on your side.)
I think it would be fine to deprecate both. I think it is unclear, however, what will happen when you reach the limit. Before we do anything about #743 we should contact RIPE, since they could be using the batch function.
I agree with this. If you want two test agent worker pools, it seems more reasonable to just run two test agent daemons.
I tried to look around to see what other people use. In one page of search engine hits I found two examples. The first one, Gunicorn, defaults to 1 worker. The second one, parpool (a MATLAB parallelization facility, if I understand it correctly), defaults to one worker per physical core. Having a default value of 1 worker for the test agent makes sense to me. One worker per core would also make sense if test utilization were close to 100%. From looking at
Unclear in what way? Do you mean that there might be bugs in Parallel::ForkManager?
Well, unclear to me, I guess. :-)
The way I read the design is that Parallel::ForkManager grows its worker pool until it reaches its given limit. If new test requests come in at a higher rate than the worker pool can handle, the test requests start queuing up. The test requests have a priority field that is used for prioritization when consuming the queue. If more than one test request has the same priority value, they are consumed in FIFO order.
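As a minimal standalone sketch of that reading (this uses Parallel::ForkManager from CPAN directly and is not code from the backend; the slot values and the loop are made up for illustration):

```perl
use strict;
use warnings;
use Parallel::ForkManager;

# One pool sized as the sum of the two configured values, mirroring
# how new_PM() combines them. These numbers are illustrative only.
my $frontend_slots = 2;
my $batch_slots    = 2;
my $pm = Parallel::ForkManager->new( $frontend_slots + $batch_slots );

for my $test_id ( 1 .. 10 ) {
    # start() blocks in the parent once the pool is full, so requests
    # arriving faster than workers finish effectively queue up here.
    $pm->start and next;    # parent: returns child PID, move to next request

    # Child process: run one test, then release the worker slot.
    sleep 1;
    $pm->finish;
}
$pm->wait_all_children;
```

Note that this blocking behavior gives a simple global cap; nothing in it distinguishes frontend from batch work, which matches the observation above that the per-usage limitation is gone.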
@mattias-p, thank you!
I think that all settings are now properly documented. Can we close this issue? |
Fine to close. If we later find something that is not properly documented, it is better to open a new issue and specify what is missing.
Not all settings in backend_config.ini are documented in Configuration.md; e.g. ZONEMASTER.force_hash_id_use_in_API_starting_from_id is not.