How fast should checksumming in AWS be? #382

Open
sampierson opened this issue May 8, 2019 · 0 comments
sampierson commented May 8, 2019

Now that we are using the crc32c module in AWS, we expected to see extremely rapid checksumming (on the order of 1 GB/s). We're not seeing that: we're still seeing <100 MB/s in Lambda, and much slower than that in Docker. The question is: why?

Using the py-cpuinfo package, we confirmed that the machines AWS gives us do have the SSE 4.2 CPU extension (which hardware-accelerated CRC32C depends on).
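For reference, a minimal sketch of that check, assuming the py-cpuinfo package (`pip install py-cpuinfo`) is available, with a Linux `/proc/cpuinfo` fallback:

```python
# Sketch: confirm the CPU advertises the SSE 4.2 flag, which the
# hardware-accelerated crc32c path relies on. Uses py-cpuinfo when
# installed, otherwise falls back to /proc/cpuinfo on Linux.
try:
    import cpuinfo
    flags = cpuinfo.get_cpu_info().get("flags", [])
except ImportError:
    flags = []
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    flags = line.split(":", 1)[1].split()
                    break
    except OSError:
        pass

print("sse4_2 supported:", "sse4_2" in flags)
```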

Hypotheses:

  1. CRC32C is no longer the bottleneck, and one of the other algorithms now limits throughput.
  2. Checksums are always computed while streaming the file from S3; perhaps that I/O is the bottleneck.
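Hypothesis 1 can be probed with a quick stand-alone micro-benchmark that times each digest in isolation over the same buffer. A minimal sketch, assuming the algorithm set is roughly sha1/sha256/md5/crc32c (stdlib `zlib.crc32` is used here as a stand-in where the crc32c module isn't installed):

```python
# Micro-benchmark sketch: time each checksum algorithm separately over
# the same in-memory buffer to see which one dominates total cost.
import hashlib
import time
import zlib

DATA = b"\x00" * (16 * 1024 * 1024)  # 16 MiB of zeros

def bench(name, fn):
    """Run one digest over DATA and print/return throughput in MB/s."""
    start = time.perf_counter()
    fn(DATA)
    elapsed = time.perf_counter() - start
    rate = len(DATA) / elapsed / 1e6
    print(f"{name:8s} {rate:8.1f} MB/s")
    return rate

bench("sha1", lambda d: hashlib.sha1(d).digest())
bench("sha256", lambda d: hashlib.sha256(d).digest())
bench("md5", lambda d: hashlib.md5(d).digest())
bench("crc32", lambda d: zlib.crc32(d))  # stdlib stand-in for crc32c
```

If sha256 (typically the slowest of these in pure throughput) comes in well under 1 GB/s, combined checksumming can never exceed that regardless of how fast CRC32C is.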

Experiments to perform:

  1. See how fast a file can be streamed from S3 -> Lambda.
  2. See how fast a file can be streamed from S3 -> Batch.
  3. Locally, checksum files using dcplib.ChecksummingSink, toggling the individual checksumming algorithms on and off to benchmark each algorithm alone and in combination.
  4. Craft a Lambda that checksums a local (non-streamed) file. Test the performance of the various algos in that environment.
  5. Craft a Docker image that checksums a local (non-streamed) file. Run it in AWS Batch. Test the performance of the various algos in that environment.
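For experiment 3, the toggling harness can be sketched without AWS at all: feed a stream through a selectable set of hashers and report throughput. This is a hedged stand-in for `dcplib.ChecksummingSink`, not its actual API; the algorithm names are assumptions, and the crc32c module would be added to the set where installed:

```python
# Sketch of experiment 3: stream chunks through a configurable set of
# hashlib digests and measure aggregate throughput, so algorithms can
# be included/excluded to find the bottleneck.
import hashlib
import io
import time

def checksum_stream(stream, algorithms=("sha1", "sha256", "md5"), chunk_size=1 << 20):
    """Feed a file-like object through the selected hashers; return (digests, MB/s)."""
    hashers = {name: hashlib.new(name) for name in algorithms}
    total = 0
    start = time.perf_counter()
    while chunk := stream.read(chunk_size):
        total += len(chunk)
        for h in hashers.values():
            h.update(chunk)
    elapsed = time.perf_counter() - start
    rate = total / elapsed / 1e6 if elapsed else 0.0
    return {name: h.hexdigest() for name, h in hashers.items()}, rate

digests, rate = checksum_stream(io.BytesIO(b"x" * (8 << 20)))
print(f"{rate:.1f} MB/s across {sorted(digests)}")
```

Running it once with all algorithms and once per algorithm separates "one slow digest" from "chunked streaming overhead"; pointed at a file downloaded from S3 versus an in-memory buffer, the same harness separates compute cost from I/O cost (experiments 4 and 5).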