Faster loading of basic data on large files #66
+monster_squeezed -pages:

And on full read and write:

Now:

(Turns out other programs were just reading /Count, not actually loading the page tree. Loading the page tree, we have found, is a useful way to trigger rebuilding on broken files, so we opted not to use /Count alone.) So the problem is now monster_squeezed.pdf alone.
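To make the /Count-versus-tree-walk trade-off above concrete, here is an illustrative sketch (cpdf itself is OCaml; the dict-based tree below is a hypothetical stand-in for parsed PDF objects, not cpdf's actual representation). Reading /Count is O(1) but trusts the file; walking the tree visits every node, which is why it can surface and trigger rebuilding of broken files.

```python
# Illustrative only: contrasting a cheap /Count lookup with a full
# page-tree walk. The dict-based tree is a mock of parsed PDF objects.

def count_via_count_key(catalog):
    """Trust the /Count entry on the root /Pages node: O(1)."""
    return catalog["/Pages"]["/Count"]

def count_via_tree_walk(node):
    """Walk the tree, counting leaf /Page nodes: O(n). Every node is
    visited, so dangling or broken kids surface here."""
    if node["/Type"] == "/Page":
        return 1
    return sum(count_via_tree_walk(kid) for kid in node["/Kids"])

catalog = {
    "/Pages": {
        "/Type": "/Pages",
        "/Count": 3,
        "/Kids": [
            {"/Type": "/Page"},
            {"/Type": "/Pages", "/Count": 2,
             "/Kids": [{"/Type": "/Page"}, {"/Type": "/Page"}]},
        ],
    }
}

assert count_via_count_key(catalog) == 3
assert count_via_tree_walk(catalog["/Pages"]) == 3
```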
Whilst cpdf is generally fast, we are behind on simple operations on large files. Perhaps we could delay the reading of objects from object streams in some way?
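The delayed-reading idea could look something like the following sketch. It is hypothetical and not cpdf's API (cpdf is OCaml; the names `ObjectStream` and `get` are invented for illustration): an object stream's bytes are kept compressed until the first object is requested, and each object is "parsed" (here, just sliced and decoded) at most once.

```python
# Hypothetical sketch of deferred object-stream loading. Not cpdf's
# actual code: decompression and per-object parsing both happen lazily.
import zlib

class ObjectStream:
    def __init__(self, compressed, offsets):
        self.compressed = compressed   # raw /ObjStm bytes, kept compressed
        self.offsets = offsets         # object number -> (start, length)
        self._data = None              # decompressed lazily, on first access
        self._cache = {}               # parsed objects, filled on demand

    def get(self, num):
        if num in self._cache:
            return self._cache[num]
        if self._data is None:         # pay the decompression cost once,
            self._data = zlib.decompress(self.compressed)  # and only if needed
        start, length = self.offsets[num]
        obj = self._data[start:start + length].decode()    # stand-in for parsing
        self._cache[num] = obj
        return obj

raw = b"<</Type/Page>><</Type/Font>>"
stm = ObjectStream(zlib.compress(raw), {10: (0, 14), 11: (14, 14)})
assert stm._data is None               # nothing decompressed yet
assert stm.get(10) == "<</Type/Page>>"
assert stm.get(11) == "<</Type/Font>>"
```

A simple `cpdf in.pdf -pages`-style operation would then touch only the objects on the path from the trailer to the page tree, leaving the bulk of the object streams untouched.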