Serverless Offline

This Serverless plugin emulates AWS λ and API Gateway on your local machine to speed up your development cycles. To do so, it starts an HTTP server that handles the request's lifecycle like APIG does and invokes your handlers.

Features

  • Node.js, Python, Ruby, Go, Java (incl. Kotlin, Groovy, Scala) λ runtimes.
  • Velocity templates support.
  • Lazy loading of your handler files.
  • And more: integrations, authorizers, proxies, timeouts, responseParameters, HTTPS, CORS, etc...

This plugin is updated by its users, I just do maintenance and ensure that PRs are relevant to the community. In other words, if you find a bug or want a new feature, please help us by becoming one of the contributors ✌️ ! See the contributing section.

Documentation

Installation

First, add Serverless Offline to your project:

npm install serverless-offline --save-dev

Then, inside your project's serverless.yml file, add the following entry to the plugins section: serverless-offline. If there is no plugins section, you will need to add it to the file.

Note that the plugins section for serverless-offline must be at the root level of serverless.yml.

It should look something like this:

plugins:
  - serverless-offline

You can check whether you have successfully installed the plugin by running the serverless command line:

serverless --verbose

The console should display Offline as one of the plugins now available in your Serverless project.

Usage and command line options

In your project root run:

serverless offline or sls offline.

To list all the options for the plugin, run:

sls offline --help

All CLI options are optional:

--apiKey                    Defines the API key value to be used for endpoints marked as private. Defaults to a random hash.
--corsAllowHeaders          Used as default Access-Control-Allow-Headers header value for responses. Delimit multiple values with commas. Default: 'accept,content-type,x-api-key'
--corsAllowOrigin           Used as default Access-Control-Allow-Origin header value for responses. Delimit multiple values with commas. Default: '*'
--corsDisallowCredentials   When provided, the default Access-Control-Allow-Credentials header value will be passed as 'false'. Default: true
--corsExposedHeaders        Used as additional Access-Control-Exposed-Headers header value for responses. Delimit multiple values with commas. Default: 'WWW-Authenticate,Server-Authorization'
--disableCookieValidation   Used to disable cookie validation on the hapi.js server
--disableScheduledEvents    Disables all scheduled events. Overrides configurations in serverless.yml.
--dockerHost                The host name of Docker. Default: localhost
--dockerHostServicePath     Defines service path which is used by SLS running inside Docker container
--dockerNetwork             The network that the Docker container will connect to
--dockerReadOnly            Marks if the docker code layer should be read only. Default: true
--enforceSecureCookies      Enforce secure cookies
--hideStackTraces           Hide the stack trace on lambda failure. Default: false
--host                  -o  Host name to listen on. Default: localhost
--httpPort                  Http port to listen on. Default: 3000
--httpsProtocol         -H  To enable HTTPS, specify directory (relative to your cwd, typically your project dir) for both cert.pem and key.pem files
--ignoreJWTSignature        When using HttpApi with a JWT authorizer, don't check the signature of the JWT token. This should only be used for local development.
--lambdaPort                Lambda http port to listen on. Default: 3002
--layersDir                 The directory layers should be stored in. Default: ${codeDir}/.serverless-offline/layers
--localEnvironment          Copy local environment variables. Default: false
--noAuth                    Turns off all authorizers
--noPrependStageInUrl       Don't prepend http routes with the stage.
--noStripTrailingSlashInUrl Don't strip trailing slash from http routes.
--noTimeout             -t  Disables the timeout feature.
--prefix                -p  Adds a prefix to every path, to send your requests to http://localhost:3000/[prefix]/[your_path] instead. Default: ''
--printOutput               Turns on logging of your lambda outputs in the terminal.
--reloadHandler             Reloads handler with each request.
--resourceRoutes            Turns on loading of your HTTP proxy settings from serverless.yml
--useChildProcesses         Run handlers in a child process
--useDocker                 Run handlers in a docker container.
--useInProcess              Run handlers in the same process as 'serverless-offline'.
--webSocketHardTimeout      Set WebSocket hard timeout in seconds to reproduce AWS limits (https://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html#apigateway-execution-service-websocket-limits-table). Default: 7200 (2 hours)
--webSocketIdleTimeout      Set WebSocket idle timeout in seconds to reproduce AWS limits (https://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html#apigateway-execution-service-websocket-limits-table). Default: 600 (10 minutes)
--websocketPort             WebSocket port to listen on. Default: 3001
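
For example, to serve HTTP on a different port and reload handlers on every request (both flags are documented above):

serverless offline --httpPort 4000 --reloadHandler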

Any of the CLI options can be added to your serverless.yml. For example:

custom:
  serverless-offline:
    httpsProtocol: 'dev-certs'
    httpPort: 4000
    foo: 'bar'

Options passed on the command line override YAML options.

By default you can send your requests to http://localhost:3000/. Please note that:

  • You'll need to restart the plugin if you modify your serverless.yml or any of the default velocity template files.
  • When no Content-Type header is set on a request, API Gateway defaults to application/json, and so does the plugin. But if you send an application/x-www-form-urlencoded or a multipart/form-data body with an application/json (or no) Content-Type, API Gateway won't parse your data (you'll get the raw body as input), whereas the plugin will answer 400 (malformed JSON). Please consider explicitly setting your requests' Content-Type and using separate templates.

Run modes

node.js

Lambda handlers for the node.js runtime can run in different execution modes with serverless-offline, and they have subtle differences with a variety of pros and cons. The modes are mutually exclusive, and it is planned to combine the flags into a single flag in the future.
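
For example, to opt out of the default worker-threads mode and run handlers in child processes, pass the corresponding flag (or set it under custom.serverless-offline):

serverless offline --useChildProcesses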

worker-threads (default)

  • handlers run in their own context
  • memory is not being shared between handlers, memory consumption is therefore higher
  • memory is being released when handlers reload or after usage
  • environment (process.env) is not being shared across handlers
  • global state is not being shared across handlers
  • easy debugging

in-process

  • handlers run in the same context (instance) as serverless and serverless-offline
  • memory is being shared across lambda handlers as well as with serverless and serverless-offline
  • no reloading capabilities as it is [currently] not possible to implement for commonjs handlers (without memory leaks) and for esm handlers
  • environment (process.env) is being shared across handlers as well as with serverless and serverless-offline
  • global state is being shared across lambda handlers as well as with serverless and serverless-offline
  • easy debugging

child-processes

  • handlers run in a separate node.js instance
  • memory is not being shared between handlers, memory consumption is therefore higher
  • memory is being released when handlers reload or after usage
  • environment (process.env) is not being shared across handlers
  • global state is not being shared across handlers
  • debugging more complicated

docker

  • handlers run in a docker container
  • memory is not being shared between handlers, memory consumption is therefore higher
  • memory is being released when handlers reload or after usage
  • environment (process.env) is not being shared across handlers
  • global state is not being shared across handlers
  • debugging more complicated

Python, Ruby, Go, Java (incl. Kotlin, Groovy, Scala)

The Lambda handler runs in a child process.

Usage with invoke

To use Lambda.invoke you need to set the lambda endpoint to the serverless-offline endpoint:

const { env } = require('node:process')
const { Lambda } = require('aws-sdk')

const lambda = new Lambda({
  apiVersion: '2015-03-31',
  // endpoint needs to be set only if it deviates from the default
  endpoint: env.IS_OFFLINE
    ? 'http://localhost:3002'
    : 'https://lambda.us-east-1.amazonaws.com',
})

All your lambdas can then be invoked in a handler using:

const { Buffer } = require('node:buffer')
const { Lambda } = require('aws-sdk')

const { stringify } = JSON

const lambda = new Lambda({
  apiVersion: '2015-03-31',
  endpoint: 'http://localhost:3002',
})

exports.handler = async function () {
  const clientContextData = stringify({ foo: 'foo' })

  const params = {
    ClientContext: Buffer.from(clientContextData).toString('base64'),
    // FunctionName is composed of: service name - stage - function name, e.g.
    FunctionName: 'myServiceName-dev-invokedHandler',
    InvocationType: 'RequestResponse',
    Payload: stringify({ data: 'foo' }),
  }

  const response = await lambda.invoke(params).promise()

  return {
    body: stringify(response),
    statusCode: 200,
  }
}

You can also invoke using the AWS CLI by specifying --endpoint-url:

aws lambda invoke /dev/null \
  --endpoint-url http://localhost:3002 \
  --function-name myServiceName-dev-invokedHandler

The list of available function names and their corresponding serverless.yml function keys is displayed after the server starts. This is important if you use a custom naming scheme for your functions, as serverless-offline will use your custom name. The left side is the function's key in your serverless.yml (invokedHandler in the example below) and the right side is the function name used to call the function externally, such as via aws-sdk (myServiceName-dev-invokedHandler in the example below):

serverless offline
...
offline: Starting Offline: local/us-east-1.
offline: Offline [http for lambda] listening on http://localhost:3002
offline: Function names exposed for local invocation by aws-sdk:
           * invokedHandler: myServiceName-dev-invokedHandler

To list the available manual invocation paths exposed for targeting by aws-sdk and aws-cli, use SLS_DEBUG=* with serverless offline. After the invoke server starts up, the full list of endpoints will be displayed:

SLS_DEBUG=* serverless offline
...
offline: Starting Offline: local/us-east-1.
...
offline: Offline [http for lambda] listening on http://localhost:3002
offline: Function names exposed for local invocation by aws-sdk:
           * invokedHandler: myServiceName-dev-invokedHandler
[offline] Lambda Invocation Routes (for AWS SDK or AWS CLI):
           * POST http://localhost:3002/2015-03-31/functions/myServiceName-dev-invokedHandler/invocations
[offline] Lambda Async Invocation Routes (for AWS SDK or AWS CLI):
           * POST http://localhost:3002/2014-11-13/functions/myServiceName-dev-invokedHandler/invoke-async/

You can manually target these endpoints with a REST client to debug your lambda function if you want to. Your POST JSON body will be the Payload passed to your function, as if you were calling it via aws-sdk.
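
For example, with curl, using the route and function name from the output above (the payload is illustrative):

curl -X POST \
  http://localhost:3002/2015-03-31/functions/myServiceName-dev-invokedHandler/invocations \
  -d '{"data":"foo"}'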

The process.env.IS_OFFLINE variable

Will be "true" in your handlers when using serverless-offline.

Docker and Layers

To use layers with serverless-offline, you need to have the useDocker option set to true. This can be done either by using the --useDocker flag, or in your serverless.yml like this:

custom:
  serverless-offline:
    useDocker: true

This will allow the docker container to look up any information about layers, download them, and use them. For this to work, you must be using:

  • AWS as a provider, it won't work with other provider types.
  • Layers that are compatible with your runtime.
  • ARNs for layers. Local layers aren't supported as yet.
  • A local AWS account set-up that can query and download layers.

If you're using least-privilege principals for your AWS roles, this policy should get you by:

{
  "Statement": [
    {
      "Action": "lambda:GetLayerVersion",
      "Effect": "Allow",
      "Resource": "arn:aws:lambda:*:*:layer:*:*"
    }
  ],
  "Version": "2012-10-17"
}

Once you run a function that boots up the Docker container, it'll look through the layers for that function, download them in order to your layers folder, and save a hash of your layers so they can be re-used in future runs. You'll only need to re-download your layers if they change. If you want your layers to re-download, simply remove your layers folder.

You should then be able to invoke functions as normal, and they're executed against the layers in your docker container.

Additional Options

There are 5 additional options available for Docker and Layer usage.

  • dockerHost
  • dockerHostServicePath
  • dockerNetwork
  • dockerReadOnly
  • layersDir

dockerHost

When running Docker Lambda inside another Docker container, you may need to configure the host name for the host machine to resolve networking issues between Docker Lambda and the host. Typically in such cases you would set this to host.docker.internal.

dockerHostServicePath

When running Docker Lambda inside another Docker container, you may need to override the code path that gets mounted to the Docker Lambda container relative to the host machine. Typically in such cases you would set this to ${PWD}.

dockerNetwork

When running Docker Lambda inside another Docker container, you may need to override the network that Docker Lambda connects to in order to communicate with other containers.

dockerReadOnly

For certain programming languages and frameworks, it's desirable to be able to write to the filesystem for things like testing with local SQLite databases, or other testing-only modifications. For this, you can set dockerReadOnly: false, which will allow local filesystem modifications. This does not strictly mimic AWS Lambda, as Lambda has a read-only filesystem, so this should be used as a last resort.

layersDir

By default layers are downloaded on a per-project basis, however, if you want to share them across projects, you can download them to a common place. For example, layersDir: /tmp/layers would allow them to be shared across projects. Make sure when using this setting that the directory you are writing layers to can be shared by docker.
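
A sketch of how the Docker and layer options might be combined in serverless.yml (values are illustrative):

custom:
  serverless-offline:
    useDocker: true
    dockerReadOnly: false
    layersDir: /tmp/layers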

Authorizers

Token authorizers

As defined in the Serverless Documentation you can use API Keys as a simple authentication method.

Serverless-offline will emulate the behaviour of APIG and create a random token that's printed on the screen. With this token you can access your private methods by adding x-api-key: generatedToken to your request header. All api keys will share the same token. To specify a custom token use the --apiKey cli option.
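
For example, assuming a private endpoint at /dev/hello and a generated (or --apiKey supplied) token of d41d8cd9 (both values are illustrative):

curl -H 'x-api-key: d41d8cd9' http://localhost:3000/dev/hello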

Custom authorizers

Only custom authorizers are supported. Custom authorizers are executed before a Lambda function is executed and return an Error or a Policy document.

The Custom authorizer is passed an event object as below:

{
  "authorizationToken": "<Incoming bearer token>",
  "methodArn": "arn:aws:execute-api:<Region id>:<Account id>:<API id>/<Stage>/<Method>/<Resource path>",
  "type": "TOKEN"
}

The methodArn does not include the Account id or API id.

The plugin only supports retrieving Tokens from headers. You can configure the header as below:

"authorizer": {
  "authorizerResultTtlInSeconds": "0",
  "identitySource": "method.request.header.Authorization", // or method.request.header.SomeOtherHeader
  "type": "TOKEN"
}
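
A minimal sketch of a TOKEN authorizer handler that such a configuration could point at (the token check and allow-all policy are illustrative, not a recommendation):

exports.authorize = async function (event) {
  // event.authorizationToken comes from the configured identitySource header
  if (event.authorizationToken !== 'allowed-token') {
    throw new Error('Unauthorized')
  }

  return {
    policyDocument: {
      Statement: [
        {
          Action: 'execute-api:Invoke',
          Effect: 'Allow',
          Resource: event.methodArn,
        },
      ],
      Version: '2012-10-17',
    },
    principalId: 'user',
  }
}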

Remote authorizers

You are able to mock the response from remote authorizers by setting the environment variable AUTHORIZER before running sls offline start.

Example:

Unix: export AUTHORIZER='{"principalId": "123"}'

Windows: SET AUTHORIZER='{"principalId": "123"}'

JWT authorizers

For HTTP APIs, JWT authorizers defined in the serverless.yml can be used to validate the token and scopes in the token. However at this time, the signature of the JWT is not validated with the defined issuer. Since this is a security risk, this feature is only enabled with the --ignoreJWTSignature flag. Make sure to only set this flag for local development work.

Serverless plugin authorizers

If your authentication needs are custom and not satisfied by the existing capabilities of the Serverless offline project, you can inject your own authentication strategy. To inject a custom strategy for Lambda invocation, you define a custom variable under serverless-offline called authenticationProvider in the serverless.yml file. The value of the custom variable will be passed to require() (i.e. require(your authenticationProvider value)), and the resolved module is expected to export a function with the following signature.

module.exports = function (endpoint, functionKey, method, path) {
  return {
    getAuthenticateFunction: () => ({
      async authenticate(request, h) {
        // your implementation
      },
    }),

    name: 'your strategy name',
    scheme: 'your scheme name',
  }
}
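
The corresponding serverless.yml wiring might look like this (the module path is hypothetical):

custom:
  serverless-offline:
    authenticationProvider: ./custom-authentication-provider.js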

A working example of injecting a custom authorization provider can be found in the project's integration tests under the folder custom-authentication.

Custom headers

You are able to use some custom headers in your request to gain more control over the requestContext object.

Header → Event key (example value):

  • cognito-identity-id → event.requestContext.identity.cognitoIdentityId
  • cognito-authentication-provider → event.requestContext.identity.cognitoAuthenticationProvider
  • sls-offline-authorizer-override → event.requestContext.authorizer, e.g. { "iam": {"cognitoUser": { "amr": ["unauthenticated"], "identityId": "abc123" }}}

By doing this you are now able to change those values using a custom header. This can help you with easier authentication or retrieving the userId from a cognitoAuthenticationProvider value.
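
For example, a request sent with the following header (the value is illustrative):

curl -H 'cognito-identity-id: abc123' http://localhost:3000/dev/hello

would surface in your handler as event.requestContext.identity.cognitoIdentityId === 'abc123'.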

Environment variables

You are able to use environment variables to customize identity params in event context.

Environment variable → Event key:

  • SLS_COGNITO_IDENTITY_POOL_ID → event.requestContext.identity.cognitoIdentityPoolId
  • SLS_ACCOUNT_ID → event.requestContext.identity.accountId
  • SLS_COGNITO_IDENTITY_ID → event.requestContext.identity.cognitoIdentityId
  • SLS_CALLER → event.requestContext.identity.caller
  • SLS_API_KEY → event.requestContext.identity.apiKey
  • SLS_API_KEY_ID → event.requestContext.identity.apiKeyId
  • SLS_COGNITO_AUTHENTICATION_TYPE → event.requestContext.identity.cognitoAuthenticationType
  • SLS_COGNITO_AUTHENTICATION_PROVIDER → event.requestContext.identity.cognitoAuthenticationProvider

You can use serverless-dotenv-plugin to load environment variables from your .env file.

AWS API Gateway Features

Velocity Templates

Serverless doc ~ AWS doc

You can supply response and request templates for each function. This is optional. To do so you will have to place function-specific template files in the same directory as your function file, using the .req.vm extension for request templates and .res.vm for response templates. For example, if your function is in code-file: helloworld.js, your response template should be in file: helloworld.res.vm and your request template in file helloworld.req.vm.

Velocity nuances

Consider this requestTemplate for a POST endpoint:

"application/json": {
  "payload": "$input.json('$')",
  "id_json": "$input.json('$.id')",
  "id_path": "$input.path('$').id"
}

Now let's make a request with this body: { "id": 1 }

AWS parses the event as such:

{
  "payload": {
    "id": 1
  },
  "id_json": 1,
  "id_path": "1" // Notice the string
}

Whereas Offline parses:

{
  "payload": {
    "id": 1
  },
  "id_json": 1,
  "id_path": 1 // Notice the number
}

Accessing an attribute after using $input.path will return a string on AWS (expect strings like "1" or "true") but not with Offline (1 or true). You may find other differences.

CORS

Serverless doc

For HTTP APIs, the CORS configuration will work out of the box. Any CLI arguments passed in will be ignored.

For REST APIs, if the endpoint config has CORS set to true, the plugin will use the CLI CORS options for the associated route. Otherwise, no CORS headers will be added.

Catch-all Path Variables

AWS doc

Set greedy paths like /store/{proxy+} that will intercept requests made to /store/list-products, /store/add-product, etc...
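
A sketch of such an http event in serverless.yml (function and handler names are illustrative):

functions:
  store:
    handler: handler.store
    events:
      - http:
          method: any
          path: store/{proxy+}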

ANY method

AWS doc

Works out of the box.

Lambda and Lambda Proxy Integrations

Serverless doc ~ AWS doc

Works out of the box. See examples in the manual_test directory.

HTTP Proxy

Serverless doc ~ AWS doc - AWS::ApiGateway::Method ~ AWS doc - AWS::ApiGateway::Resource

Example of enabling proxy:

custom:
  serverless-offline:
    resourceRoutes: true

or

    YourCloudFormationMethodId:
      Properties:
        ......
        Integration:
          Type: HTTP_PROXY
          Uri: 'https://s3-${self:custom.region}.amazonaws.com/${self:custom.yourBucketName}/{proxy}'
          ......
      Type: AWS::ApiGateway::Method

custom:
  serverless-offline:
    resourceRoutes:
      YourCloudFormationMethodId:
        Uri: 'http://localhost:3001/assets/{proxy}'

Response parameters

AWS doc

You can set your response's headers using ResponseParameters.

May not work properly. Please PR. (Difficulty: hard?)

Example responseParameters configuration:

"responseParameters": {
  "method.response.header.X-Powered-By": "Serverless", // a string
  "method.response.header.Warning": "integration.response.body", // the whole response
  "method.response.header.Location": "integration.response.body.some.key" // a pseudo JSON-path
},

WebSocket

To send messages back to connected clients, POST to the connections URL:

POST http://localhost:3001/@connections/{connectionId}

Or,

const { ApiGatewayManagementApi } = require('aws-sdk')

const apiGatewayManagementApi = new ApiGatewayManagementApi({
  apiVersion: '2018-11-29',
  endpoint: 'http://localhost:3001',
})

// with aws-sdk v2, .promise() (or a callback) is needed to actually send the request
apiGatewayManagementApi
  .postToConnection({
    ConnectionId: ...,
    Data: ...,
  })
  .promise()

The ConnectionId is taken from the event received in the lambda handler function (event.requestContext.connectionId).

There's support for websocketsApiRouteSelectionExpression in its basic form: $request.body.x.y.z, where the default value is $request.body.action.
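
A minimal sketch of a WebSocket handler that echoes a message back to the calling client through the local endpoint (assuming the default $request.body.action routing and aws-sdk v2):

const { ApiGatewayManagementApi } = require('aws-sdk')

exports.handler = async function (event) {
  const apiGatewayManagementApi = new ApiGatewayManagementApi({
    apiVersion: '2018-11-29',
    endpoint: 'http://localhost:3001',
  })

  // the id of the calling connection is available on the incoming event
  await apiGatewayManagementApi
    .postToConnection({
      ConnectionId: event.requestContext.connectionId,
      Data: 'hello from serverless-offline',
    })
    .promise()

  return { statusCode: 200 }
}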

Debug process

The Serverless offline plugin will respond to the overall framework settings and output additional information to the console in debug mode. In order to do this you will have to set the SLS_DEBUG environment variable. You can run the following in the command line to switch to debug mode execution.

Unix: export SLS_DEBUG=*

Windows: SET SLS_DEBUG=*

Interactive debugging is also possible for your project if you have installed the node-inspector module and the Chrome browser. You can then run the following command line inside your project's root.

Initial installation: npm install -g node-inspector

For each debug run: node-debug sls offline

The system will start in wait status. This will also automatically start the Chrome browser and wait for you to set breakpoints for inspection. Set the breakpoints as needed and then click the play button for the debugging to continue.

Depending on the breakpoint, you may need to call the URL path for your function in a separate browser window for your serverless function to be run and made available for debugging.

Interactive Debugging with Visual Studio Code (VSC)

With newer versions of node (6.3+) the node inspector is already part of your node environment and you can take advantage of debugging inside your IDE with source-map support. Here is the example configuration to debug interactively with VSC. It has two steps.

Step 1: Adding a launch configuration in the IDE

Add a new launch configuration to VSC like this:

{
  "cwd": "${workspaceFolder}",
  "name": "Debug Serverless Offline",
  "request": "launch",
  "runtimeArgs": ["run", "debug"],
  "runtimeExecutable": "npm",
  "sourceMaps": true,
  "type": "node"
}

Step 2: Adding a debug script

You will also need to add a debug script reference in your package.json file.

Add this to the scripts section:

Unix/Mac: "debug" : "export SLS_DEBUG=* && node --inspect /usr/local/bin/serverless offline"

Windows: "debug": "SET SLS_DEBUG=* && node --inspect node_modules\\serverless\\bin\\serverless offline"

Example:

....
"scripts": {
  "debug" : "SET SLS_DEBUG=* && node --inspect node_modules\\serverless\\bin\\serverless offline"
}

In VSC, you can then add breakpoints to your code. To start a debug session you can either start your script in package.json by clicking the hovering debug intellisense icon or by going to your debug pane and selecting the Debug Serverless Offline configuration.

Resource permissions and AWS profile

Lambda functions assume an IAM role during execution: the framework creates this role and sets all the permissions provided in the iamRoleStatements section of serverless.yml.

However, serverless offline makes use of your local AWS profile credentials to run the lambda functions, and that might result in a different set of permissions. By default, the aws-sdk will load credentials for your default AWS profile specified in your configuration file.

You can change this profile directly in the code or by setting proper environment variables. Setting the AWS_PROFILE environment variable to a different profile before calling serverless offline would effectively change the credentials, e.g.

AWS_PROFILE=<profile> serverless offline

Simulation quality

This plugin simulates API Gateway for many practical purposes, good enough for development - but is not a perfect simulator. Specifically, Lambda currently runs on Node.js v12.x, v14.x and v16.x (AWS Docs), whereas Offline runs on your own runtime where no memory limits are enforced.

Usage with other plugins

When combining this plugin with other plugins there are a few things that you need to keep in mind.

You should run serverless offline start instead of serverless offline. The start command fires the offline:start:init and offline:start:end lifecycle hooks which can be used by other plugins to process your code, add resources, perform cleanups, etc.

The order in which plugins are added to serverless.yml is relevant. Plugins are executed in order, so plugins that process your code or add resources should be added first so they are ready when this plugin starts.

For example:

plugins:
  - serverless-middleware # modifies some of your handler based on configuration
  - serverless-webpack # package your javascript handlers using webpack
  - serverless-dynamodb-local # adds a local dynamo db
  - serverless-offline # runs last so your code has been pre-processed and dynamo is ready

That works because all those plugins listen to the offline:start:init to do their processing. Similarly they listen to offline:start:end to perform cleanup (stop dynamo db, remove temporary files, etc).

Credits and inspiration

This plugin was initially a fork of Nopik's Serverless-serve.

License

MIT

Contributing

Yes, thank you! This plugin is community-driven, most of its features are from different authors. Please update the docs and tests and add your name to the package.json file. We try to follow Airbnb's JavaScript Style Guide.

Contributors

dnalborczyk dherault computerpunc leonardoalifraco daniel-cottone
mikestaub Bilal-S dl748 frsechet zoellner
johncmckim ThisIsNoZaku darthtrevino miltador gertjvr
juanjoDiaz perkyguy jack-seek hueniverse robbtraister
dortega3000 ansraliant joubertredrat andreipopovici Andorbal
AyushG3112 franciscocpg kajwiklund ondrowan sulaysumaria
jormaechea awwong1 c24w vmadman encounter
Bob-Thomas njriordan bebbi trevor-leach emmoistner
OrKoN adieuadieu apalumbo anishkny cameroncooper
cmuto09 dschep DimaDK24 dwbelliston eabadjiev
Arkfille garunski james-relyea joewestcott LoganArnett
ablythe marccampbell purefan mzmiric5 paulhbarker
pmuens pierreis ramonEmilio rschick selcukcihan
patrickheeney rma4ok clschnei djcrabhat adam-nielsen
adamelliottsweeting againer alebianco-doxee koterpillar triptec
constb cspotcode aliatsis arnas akaila
ac360 austin-payne bencooling BorjaMacedo BrandonE
guerrerocarlos chrismcleod christophgysin Clement134 rlgod
dbunker dobrynin domaslasauskas enolan minibikini
em0ney erauer gbroques guillaume balassy
idmontie jacintoArias jgrigg jsnajdr horyd
jaydp17 jeremygiberson josephwarrick jlsjonas joostfarla
kenleytomlin lalifraco-devspark DynamicSTOP medikoo neverendingqs
msjonker Takeno mjmac ojongerius thepont
WooDzu PsychicCat Raph22 wwsno gribnoysup
starsprung shineli-not-used-anymore stesie stevemao ittus
tiagogoncalves89 tuanmh Gregoirevda gcphost YaroslavApatiev
zacacollier allenhartwig demetriusnunes hsz electrikdevelopment
jgilbert01 polaris340 kobanyan leruitga-ss livingmine
lteacher martinmicunda nori3tsu ppasmanik ryanzyy
m0ppers footballencarta bryanvaz njyjn kdybicz
ericctsf brazilianbytes
