
Document why/how to deal with filehandling max limitation on Linux #292

Open
kelson42 opened this issue Feb 16, 2022 · 4 comments

Comments

@kelson42
Contributor

Following #289

@kelson42
Contributor Author

After discussion, we should be able to implement a solution that would:

  • Save files smaller than the cluster size in memory
  • Keep the filehandle open only for bigger files

@ghost

ghost commented Aug 20, 2022

How do I work around the "Too many open files" error?

@mgautierfr
Collaborator

Run ulimit -n 2048 before running zimwriterfs to raise the limit of open files to 2048 (or more; the default value is usually 1024).
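For example (2048 is just a sample value; an unprivileged user can only raise the soft limit up to the hard limit reported by ulimit -Hn):

```shell
# Show the current soft limit on open file descriptors (commonly 1024)
ulimit -n

# Raise the soft limit for this shell session only, then verify it
ulimit -n 2048
ulimit -n

# Run zimwriterfs from this same shell so it inherits the new limit
```

The change only affects the current shell and its children; a new terminal starts again from the default.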

@ghost

ghost commented Aug 22, 2022

run ulimit -n 2048 before running zimwriterfs to change the limit of open files to 2048 (or more, the default value is 1024)

Thank you, this worked (well, a higher value than 2048, but it worked).
