Performance degradation on production #132
This seems to be due to "slow" connections to the database. But why is the performance with RR lower, and not the same as that of FPM?
Hi, FYI gc_collect_cycles() is a pretty expensive operation; you don't need it in every script. The performance is lower most likely because your script has a lot of blocking code and the concurrency level is higher than the number of available workers. Check your CPU load. Since your bottleneck is connections to the DB and not CPU, try to bump the number of workers 2-5x to avoid building up a request pipeline.
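Concretely, that suggestion amounts to raising the pool size in the RoadRunner config. A sketch assuming RoadRunner 1.x's YAML layout (key names and the worker script path are illustrative, not taken from the reporter's actual config):

```yaml
http:
  address: ":8080"
  workers:
    command: "php psr-worker.php"
    pool:
      # For a DB-bound (blocking I/O) workload, 2-5x more workers
      # than CPU cores lets requests queue on idle workers instead
      # of piling up behind busy ones.
      numWorkers: 100
```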
I will correct the Wiki to accommodate your use case.
It will also be solved automatically once #97 is implemented.
Oh, I see you mention numWorkers: 50. What integration script do you use? Do you see high timings in the rr debug log, or is the server slow by itself?
Can you try to run numWorkers: 50 without maxJobs? Also try another benchmark tool like siege; it will give a better picture of the average/median request time.
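For reference, a siege run roughly equivalent to the ab command used in this issue might look like the following (the URL is a placeholder; adjust to your setup):

```
# 50 concurrent clients, 200 repetitions each = 10000 requests,
# -b = benchmark mode (no think-time delay between requests)
siege -b -c 50 -r 200 http://127.0.0.1:8080/
```

siege reports average response time and longest/shortest transaction alongside throughput, which ab's summary makes harder to read.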
Closed due to no activity.
@wolfy-j do you expect activity from me?
I have requested clarification from you. The issue most likely is application-specific. Unfortunately, I'm unable to debug it without a reliable way to reproduce it.
@wolfy-j you're right. But I cannot show a commercial application that has the problem, and making a special fake application to demonstrate it is too much work; the employer will not pay for it. Perhaps it's a problem with some PHP module or open-source library. For now I can only provide this data and the newrelic agent.
Ok, thank you for the context. I'll check if there is anything that might be a red flag.
Yeah, I wonder about redis.
Well, if it's slower, something is taking too long to be destructed/disconnected. Nginx ignores destruction time; RR does not. The question is why it's OK on the dev machine. Could it be Doctrine?
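One way to check this hypothesis is to time everything that happens after the response is sent, inside the worker loop shown later in this thread (a sketch; the error_log target is arbitrary):

```php
// After $psr7->respond(...) the client already has the response,
// but the worker stays busy until cleanup finishes. Under FPM,
// nginx receives the full response before shutdown/destructors run,
// so the client never sees that time; under RR the worker only
// becomes free for the next request after cleanup completes.
$t = microtime(true);
$kernel->terminate($request, $response); // kernel.terminate listeners
unset($request, $response);              // triggers destructors now
error_log(sprintf('cleanup took %.1f ms', (microtime(true) - $t) * 1000));
```

At concurrency equal to numWorkers, even a few milliseconds of cleanup per request directly reduces throughput.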
I found the project https://github.com/mrsuh/php-load-test
Based on the results of those tests. But I could not run these tests on my servers. You also do a kernel reboot in the example:
Reboot is required for Symfony, as it does not fully support long-running mode (at least not for every application). But by our logic, a reboot should not be slower than full initialization. It's almost as if the reboot is too expensive. But why? We run all our applications without reboots, but we had to create our own framework for that.
@wolfy-j Actually, this is not correct anymore. Since last year, Symfony has supported long-running mode without memory leaks via

But for some time, and for still-unknown reasons, I get an insane amount of memory used on a blank project. When I played with swoole recently, the memory leaks happened during conversion to a Symfony request: k911/swoole-bundle#30. If you're interested, tomorrow I can start an RR test on a blank Symfony project and report the details. It is the same problem; memory went wild during conversion.

Code I used for the RR test (imports added for completeness; these are the RoadRunner 1.x and symfony/psr-http-message-bridge classes the script uses):

use Spiral\Goridge\StreamRelay;
use Spiral\RoadRunner\PSR7Client;
use Spiral\RoadRunner\Worker;
use Symfony\Bridge\PsrHttpMessage\Factory\DiactorosFactory;
use Symfony\Bridge\PsrHttpMessage\Factory\HttpFoundationFactory;

$kernel = new Kernel($env, $debug);
$relay = new StreamRelay(STDIN, STDOUT);
$psr7 = new PSR7Client(new Worker($relay));
$httpFoundationFactory = new HttpFoundationFactory();
$diactorosFactory = new DiactorosFactory();
$kernel->boot();

while ($req = $psr7->acceptRequest()) {
    try {
        // Convert PSR-7 -> HttpFoundation, handle, convert back.
        $request = $httpFoundationFactory->createRequest($req);
        $response = $kernel->handle($request);
        $psr7->respond($diactorosFactory->createResponse($response));
        $kernel->terminate($request, $response);
        // $kernel->reboot(null);
    } catch (\Throwable $e) {
        $psr7->getWorker()->error((string) $e);
    }
}
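If gc_collect_cycles() is wanted at all, a hypothetical variation on the loop above is to run it every N requests instead of on every cycle (the interval of 100 is arbitrary):

```php
$served = 0;
while ($req = $psr7->acceptRequest()) {
    try {
        // ... handle the request as above ...
    } catch (\Throwable $e) {
        $psr7->getWorker()->error((string) $e);
    }
    // A full GC pass on every request was reported above to cost
    // ~25 rps; collecting cycles only occasionally amortizes it.
    if (++$served % 100 === 0) {
        gc_collect_cycles();
    }
}
```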
I’m very curious to see what you are going to find.
PHP 7.3 / Symfony 3.4
Hi! On my local dev server I get a 2-10-20x request-time speedup at 1 concurrent request.
But in production, with 50 concurrent requests, performance on FPM is 180 requests per second and on RR 165 rps. If I add gc_collect_cycles() on each cycle, performance for RR drops to 140 rps...
This is strange; are there any ideas why it can work slower than FPM?
ab -n 10000 -k -c 50 ...
Enabling/disabling keep-alive, and connecting directly to the RoadRunner port versus through an nginx proxy_pass, does not affect performance.
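A quick sanity check on those numbers via Little's law (concurrency ≈ throughput × latency), using the 180 and 165 rps figures above:

```php
<?php
// latency = concurrency / throughput
printf("FPM: %.0f ms per request\n", 50 / 180 * 1000); // ≈ 278 ms
printf("RR:  %.0f ms per request\n", 50 / 165 * 1000); // ≈ 303 ms
// Only ~25 ms of extra per-request time separates the two at this
// concurrency - consistent with slow cleanup/disconnect rather than
// slow request handling.
```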
config:
bin/rr serve -c roadrunner.prod.yaml
With numWorkers: 50 and maxJobs: 200 the performance is the same.
Processor: AMD Ryzen Pro 1700x (8 x 3.4 GHz)
RAM: 32 GB DDR4