Replies: 2 comments
-
I have no clue, but I assume an environment/networking issue; after all, the code is really the same, while the env is not.
-
@iroll007 Could you compare your NFS mount options between the two systems? (i.e. what
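For example, something along these lines run on each host dumps the options of the cache mount so they can be diffed (only a sketch; the mount point in the usage line is a placeholder):

```python
#!/usr/bin/env python3
"""Dump the options of a given mount point so they can be diffed between hosts.

Usage (the path is just an example):
    python3 nfs_opts.py /var/cache/openidc > datacenter.txt
"""
import sys


def mount_options(mount_point):
    """Return device, fstype and the mount options for `mount_point`,
    as read from /proc/mounts on Linux."""
    with open("/proc/mounts") as mounts:
        for line in mounts:
            device, mnt, fstype, options, *_ = line.split()
            if mnt == mount_point:
                opts = {"device": device, "fstype": fstype}
                for opt in options.split(","):
                    key, _, value = opt.partition("=")
                    opts[key] = value or "yes"
                return opts
    raise SystemExit(f"{mount_point} not found in /proc/mounts")


if __name__ == "__main__":
    # Print one sorted key=value pair per line so two outputs diff cleanly.
    for key, value in sorted(mount_options(sys.argv[1]).items()):
        print(f"{key}={value}")
```

Differences in options such as sync/async, the NFS version, or rsize/wsize would be the first thing to rule out.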
-
Hi,
We have Apache servers in our datacenter that we use to proxy with openidc, and we keep the openidc cache on an NFS share so it can be used across the Apache cluster. That NFS is backed by NetApp and authentication is instantaneous.
We are testing the same infrastructure on an OVH host, but there the NFS is served by a NAS (ZFS). When I authenticate, it takes 2 minutes to be redirected, while with the cache stored locally it is instantaneous. So the problem comes from the NAS.
We have applied the same setup as on the servers where it works.
We tested the write time on this NFS and on the other servers where the cache works, using 10 KB files (the size of the cache files). The result is the same. It only differs once the file size grows (>9 s at OVH for 250 MB, <1 s on ours).
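A test along these lines is enough to reproduce that comparison (only a sketch, not our exact test; the target directory and the sizes are placeholders):

```python
#!/usr/bin/env python3
"""Rough write benchmark: many small files (like the openidc file cache)
vs. one large streamed file, in a given directory.

Usage (the path is just an example):
    python3 write_bench.py /mnt/nfs/openidc-cache
"""
import os
import sys
import time
import uuid


def bench_small_files(directory, count=100, size=10 * 1024):
    """Write and remove `count` files of `size` bytes, fsync'ing each one."""
    payload = os.urandom(size)
    start = time.monotonic()
    for _ in range(count):
        path = os.path.join(directory, f"bench-{uuid.uuid4().hex}")
        with open(path, "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())
        os.unlink(path)
    return time.monotonic() - start


def bench_large_file(directory, size=250 * 1024 * 1024, chunk=1024 * 1024):
    """Write and remove one `size`-byte file in `chunk`-sized pieces."""
    payload = os.urandom(chunk)  # generated outside the timed region
    path = os.path.join(directory, f"bench-{uuid.uuid4().hex}")
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(size // chunk):
            f.write(payload)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.monotonic() - start
    os.unlink(path)
    return elapsed


if __name__ == "__main__":
    target = sys.argv[1]
    print(f"100 x 10 KB files: {bench_small_files(target):.2f} s")
    print(f"1 x 250 MB file:   {bench_large_file(target):.2f} s")
```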
So apart from a raw performance difference, we don't understand why it takes so long, because SELinux and the configuration are the same. The differences are the storage vendor (ZFS at OVH, NetApp on our side) and the network speed.
In the debug logs, there are 40 s between the "clean" and the "successful" entries!
I don't know where to look anymore and I have no solution :(
Do you have any idea why there would be so much slowness on this NFS?
What could the module be doing to be so impacted?
Thanks in advance for your answers.
Regards