This code uses the Serverless Framework to deploy an AWS Lambda function that, when triggered by a file upload to an S3 bucket, runs the SAF CLI with the given command (`COMMAND_STRING`) and can optionally upload the results to an S3 bucket.
- Clone this repository:
  ```sh
  git clone https://github.com/mitre/saf-lambda-function.git
  ```
- Install the Serverless Framework:
  ```sh
  npm install -g serverless
  ```
- Install the latest dependencies:
  ```sh
  npm install
  ```
- Configure your AWS credentials. The recommended method is to add a profile to the `~/.aws/credentials` file and then export that profile (if you need to create the profile first, see the sketch after this list):
  ```sh
  export AWS_PROFILE=<your_creds_profile_name>
  # To confirm your access to AWS, run:
  aws s3 ls
  ```
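If you have not yet created a credentials profile, one way to do so (assuming the AWS CLI is installed; the profile name is a placeholder) is:

```sh
# Interactively create a named profile; prompts for the access key ID, secret access key,
# default region, and output format, and writes them to ~/.aws/credentials
aws configure --profile your_creds_profile_name
```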
This Lambda function uses environment variables to orchestrate its behavior. The required environment variables are `INPUT_BUCKET` and `COMMAND_STRING`. The bucket variable defines the source bucket for the input to the SAF CLI command, and the command string defines the SAF CLI function and its flags, excluding the `-i` input and `-o` output flags, which are handled by your input and output bucket and object configurations.
Additional optional variables can be set to further configure the function; the table below shows each variable and its default behavior. `INPUT_PREFIX` specifies a path in the `INPUT_BUCKET`. If set, the function triggers when a file is uploaded to that path and runs the SAF CLI on the uploaded file; if not set, the function triggers when an object is uploaded to the main directory of the `INPUT_BUCKET`. `OUTPUT_BUCKET` can be set as the location to upload the results of the SAF CLI command. `OUTPUT_ENABLED` can be set to `false` if the function should not upload results to an S3 bucket. `OUTPUT_EXTENSION` is the name suffix and extension appended to the output file when output is enabled; for example, if the input file is named "my-file.csv" and `OUTPUT_EXTENSION` is set to "_results.json", the output file will be named "my-file_results.json". `OUTPUT_PREFIX` specifies a path within the `OUTPUT_BUCKET` to place the results of the SAF CLI call. `SERVICE_NAME` will be the name of this Lambda service when deployed.
| ENVIRONMENT NAME | Required | Default | Examples |
|---|---|---|---|
| COMMAND_STRING | x | none | "convert hdf2splunk -H 127.0.0.1 -u admin -p Valid_password! -I hdf", "convert burpsuite2hdf", See more here |
| INPUT_BUCKET | x | none | "bucket-name" |
| INPUT_PREFIX | | "" | "unprocessed/", "unprocessed/hdf/" |
| OUTPUT_BUCKET | | The value assigned to INPUT_BUCKET | "other-bucket-name" |
| OUTPUT_ENABLED | | true | false |
| OUTPUT_EXTENSION | | "_results.json" | ".json", ".csv", "_output.json" |
| OUTPUT_PREFIX | | "results/" | "output/", "results/hdf/", "" |
| OUTPUT_TIMEOUT | | 60 | lambda timeout value in seconds |
| SERVICE_NAME | | "saf-lambda-function" | "different-service-name" |
| LOG_LEVEL | | "info" | "debug" |
- Set the required variables: `INPUT_BUCKET` and `COMMAND_STRING`.
  - Example:
    ```sh
    export INPUT_BUCKET="bucket-name"
    export COMMAND_STRING="convert hdf2splunk -H 127.0.0.1 -u admin -p Valid_password! -I your_index_name"
    ```
  - NOTE: Do not include the input flag (e.g., "-i hdf_file.json") in the command string, as this is handled by the S3 input bucket configuration.
  - NOTE: Do not include the output flag in the command string. Instead, set the output configuration variables.
  - NOTE: This action does not support `view heimdall`.
  - More examples can be found at SAF CLI Usage.
- You can verify that the environment variables are set properly by running `env` (a filtered check is sketched after this list).
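For a quicker check than scanning the full `env` output, you can filter for just the variables this function reads (the pattern below is illustrative):

```sh
# Show only the environment variables used by the saf-lambda-function
env | grep -E '^(INPUT_|OUTPUT_|COMMAND_STRING|SERVICE_NAME|LOG_LEVEL)'
```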
- Set any optional variables you want to change. If the default value for a variable suffices, it does not need to be set.
- Create an AWS S3 bucket with the name you set as the value for `INPUT_BUCKET`.
- Load a file into the `INPUT_BUCKET`. Upload the file under the `INPUT_PREFIX` path if specified.
- If testing for the first time, run `npm run make-event`. This generates an S3 test event by running the command `serverless generate-event -t aws:s3 > test/event.json`.
- Edit the bucket name and key in `test/event.json`:
"bucket": {
"name": "your-bucket-name",
...
},
"object": {
"key": "your-input-folder/you-file-name.json",
- Run `npm test`. You should see logging in the terminal and, if output is enabled, an uploaded output file in your S3 bucket.

Here, `npm test` runs the command `serverless invoke local --function saf-lambda-function --path test/event.json`.

You can change the invocation specifics further if needed by looking at the documentation for `serverless invoke local`.
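For example, to get more verbose logging during a local run, one option (assuming your Serverless Framework version supports the `--env` flag for `invoke local`) is to override `LOG_LEVEL` for that invocation only:

```sh
# Invoke locally with debug logging; --env overrides an environment variable for this run
serverless invoke local --function saf-lambda-function --path test/event.json --env LOG_LEVEL=debug
```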
- Deploy the service by running `serverless deploy --verbose`. This may take several minutes.
- When the service is deployed successfully, log into the AWS console, go to the "Lambda" interface, and set the S3 bucket as the trigger if it is not already shown.
- You can test the service by uploading your input file into the `INPUT_BUCKET`. Upload the file under the `INPUT_PREFIX` path if specified.
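For example, a test upload might look like this (the bucket name, prefix, and file name are placeholders matching the configuration sketched earlier):

```sh
# Upload a sample input file to the trigger path; the deployed function should run automatically
aws s3 cp my-file.csv s3://bucket-name/unprocessed/my-file.csv
```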
Please feel free to look through our issues, make a fork, and submit PRs and improvements. We love hearing from our end users and the community and will be happy to engage with you on suggestions, updates, fixes, or new capabilities.
Please feel free to contact us by opening an issue on the issue board or at [email protected] should you have any suggestions, questions, or issues.
© 2022 The MITRE Corporation.
Approved for Public Release; Distribution Unlimited. Case Number 18-3678.
MITRE hereby grants express written permission to use, reproduce, distribute, modify, and otherwise leverage this software to the extent permitted by the licensed terms provided in the LICENSE.md file included with this project.
This software was produced for the U. S. Government under Contract Number HHSM-500-2012-00008I, and is subject to Federal Acquisition Regulation Clause 52.227-14, Rights in Data-General.
No other use other than that granted to the U. S. Government, or to those acting on behalf of the U. S. Government under that Clause is authorized without the express written permission of The MITRE Corporation.
For further information, please contact The MITRE Corporation, Contracts Management Office, 7515 Colshire Drive, McLean, VA 22102-7539, (703) 983-6000.