This sample provides steps to deploy ArmNN on AWS Lambda (arm64) by building and deploying a container image. The Lambda function performs ML inference using an example image classification model in ONNX format (ResNet 50).
ArmNN is an open-source software library that optimises the execution of ML workloads on Arm-based processors and uses the Arm Compute Library (ACL). It is one of the options available for CPU-based machine learning inference on Graviton instances.
- Ensure the AWS region selected supports CodeBuild Arm containers
- Any changes to the ML inference code require re-building the Docker container image and re-deploying it on Lambda
- ArmNN supports only two model formats: ONNX and TfLite. Since TfLite is typically used on edge devices (e.g. mobile) with lightweight models, the example provided here is a ResNet model in ONNX format (see the sketch below)
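For orientation, here is a minimal sketch of how an ONNX model can be loaded and run with the PyArmNN bindings (linked in the references below). This is not the code from this repository's app.py; the model file name and the tensor names `data` and `resnetv17_dense0_fwd` are assumptions based on the public ResNet50-v1-7 model, and the calls assume the PyArmNN 22.02 API.

```python
import numpy as np
import pyarmnn as ann

# Parse the ONNX model into an ArmNN network (file name is a placeholder)
parser = ann.IOnnxParser()
network = parser.CreateNetworkFromBinaryFile("./resnet50-v1-7.onnx")

# Tensor names are assumptions for the public ResNet50-v1-7 model
input_binding_info = parser.GetNetworkInputBindingInfo("data")
output_binding_info = parser.GetNetworkOutputBindingInfo("resnetv17_dense0_fwd")

# Create a runtime and optimise the network for the Arm CPU backend (ACL),
# falling back to the reference backend if CpuAcc is unavailable
options = ann.CreationOptions()
runtime = ann.IRuntime(options)
preferred_backends = [ann.BackendId("CpuAcc"), ann.BackendId("CpuRef")]
opt_network, _ = ann.Optimize(
    network, preferred_backends, runtime.GetDeviceSpec(), ann.OptimizerOptions()
)
net_id, _ = runtime.LoadNetwork(opt_network)

# Run inference on a single preprocessed image (NCHW float32, 224x224)
input_data = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder input
input_tensors = ann.make_input_tensors([input_binding_info], [input_data])
output_tensors = ann.make_output_tensors([output_binding_info])
runtime.EnqueueWorkload(net_id, input_tensors, output_tensors)

scores = ann.workload_tensors_to_ndarray(output_tensors)[0]
print("Top class index:", int(np.argmax(scores)))
```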
Go to the AWS CodeCommit service and create a new code repository. Upload the two files from this repository, Dockerfile and app.py, to the new CodeCommit repository.
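As a rough guide, a container-image Lambda handler such as app.py typically follows the shape sketched below. This is a hypothetical outline, not the actual app.py from this repository; the `classify_image` helper is an assumed name standing in for the ArmNN inference code sketched above.

```python
import json
import urllib.request


def classify_image(image_bytes):
    # Placeholder for preprocessing + ArmNN inference (see the earlier sketch)
    return [{"label": "example", "score": 0.0}]


def handler(event, context):
    # The test event in the steps below passes the image location as "image_url"
    image_url = event["image_url"]
    with urllib.request.urlopen(image_url) as response:
        image_bytes = response.read()
    predictions = classify_image(image_bytes)
    return {"statusCode": 200, "body": json.dumps(predictions)}
```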
Go to the AWS CloudFormation service and create a new stack using the CodeBuild template.
Parameters to provide:
- CodeCommitRepositoryName: provide the name given in Step 1
- ContainerImageName: provide a name for the container image that will be created
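If you prefer to create the stack programmatically rather than through the console, a minimal boto3 sketch is shown below; the stack name, template file name, parameter values, and required capabilities are assumptions.

```python
import boto3

cloudformation = boto3.client("cloudformation")

# Template file name is a placeholder for the CodeBuild template in this sample
with open("codebuild-template.yaml") as f:
    template_body = f.read()

cloudformation.create_stack(
    StackName="armnn-codebuild",          # placeholder stack name
    TemplateBody=template_body,
    Parameters=[
        {"ParameterKey": "CodeCommitRepositoryName", "ParameterValue": "my-armnn-repo"},
        {"ParameterKey": "ContainerImageName", "ParameterValue": "armnn-lambda"},
    ],
    # Only needed if the template creates IAM resources
    Capabilities=["CAPABILITY_IAM"],
)
```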
Go to the AWS CodeBuild service, locate the created project, and start a build to produce the Docker container image.
Go to the Amazon ECR service and confirm that the built container image has been published there.
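These two steps can also be scripted; a hedged boto3 sketch follows, where the CodeBuild project name and ECR repository name are assumptions for the resources created by the stack.

```python
import boto3

codebuild = boto3.client("codebuild")
ecr = boto3.client("ecr")

# Project and repository names are placeholders
build = codebuild.start_build(projectName="armnn-container-build")
print("Build started:", build["build"]["id"])

# After the build completes, list the images pushed to the ECR repository
images = ecr.describe_images(repositoryName="armnn-lambda")
for detail in images["imageDetails"]:
    print(detail.get("imageTags"), detail["imagePushedAt"])
```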
Go to the AWS CloudFormation service and create a new stack using the Lambda template.
Parameters to provide:
- LambdaMemorySize: specify the amount of memory required by the ML inference code. The example code uses approximately 0.9 GB of memory; Lambda supports up to 3 GB or 10 GB, depending on region availability.
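If the memory size needs to be adjusted after deployment, it can also be changed programmatically; a minimal boto3 sketch, where the function name and memory size are assumptions:

```python
import boto3

lambda_client = boto3.client("lambda")

# Function name and memory size (in MB) are placeholders
lambda_client.update_function_configuration(
    FunctionName="armnn-inference",
    MemorySize=2048,
)
```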
Go to the AWS Lambda service to view the function created in Step 3.
Test the ML inference code with a sample image by using this event:
{
  "image_url": "<<PROVIDE_IMAGE_URL>>"
}
Example image URL: https://s3.amazonaws.com/model-server/inputs/kitten.jpg
Confirm the output of the test shows the image classification results. Ignore the additional log output.
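The same test can also be run outside the console; a minimal boto3 sketch follows, where the function name is an assumption for the function created by the Lambda stack.

```python
import json

import boto3

lambda_client = boto3.client("lambda")

# Function name is a placeholder; use the one created by the Lambda stack
response = lambda_client.invoke(
    FunctionName="armnn-inference",
    Payload=json.dumps(
        {"image_url": "https://s3.amazonaws.com/model-server/inputs/kitten.jpg"}
    ),
)
print(response["Payload"].read().decode("utf-8"))
```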
- PyArmNN: https://github.com/ARM-software/armnn/tree/branches/armnn_22_02/python/pyarmnn
- ONNX model: https://github.com/onnx/models/blob/main/vision/classification/resnet/model/resnet50-v1-7.onnx
- ONNX model labels: https://s3.amazonaws.com/onnx-model-zoo/synset.txt
- To learn more about Lambda support for the arm64 architecture: https://docs.aws.amazon.com/lambda/latest/dg/foundation-arch.html
- For workloads that require more resources than Lambda provides, Amazon ECS or AWS Fargate are suitable alternatives for deploying the container image. To learn more about ECS support for the arm64 architecture: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-arm64.html
See CONTRIBUTING for more information.
This sample code is made available under the MIT-0 license. See the LICENSE file.