A simple "hello world" reference app using the Serverless framework, targeting an AWS Lambda deployment.
- Overview
- Installation
- Development
- Support Stack Provisioning (Superuser)
- Serverless Deployment (IAM Roles)
Getting a serverless application into the cloud "the right way" can be a challenge. To this end, we start with a super-simple, "hello world" Express app targeting AWS Lambda using serverless. Along the way, this reference project takes care of all of the tough supporting pieces that go into a production-ready, best-practices-following cloud infrastructure like:
- Local development workflows.
- Terraform stack controlling IAM permissions and cloud resources to support a vanilla serverless application.
- Remote state management for Terraform.
- Serverless application deployment and production lifecycle management.
Using this project as a template, you can hopefully take a new serverless application and set up "everything else" to support it in AWS the right way, from the start.
This reference application is meant for developers / architects who are already familiar with AWS infrastructure (and CloudFormation), Terraform, and Serverless framework applications. It will hopefully provide guidance / examples for getting the whole shebang all the way to a multi-environment deployment that supports a team of administrators and engineers for the application.
We use very simple, very common tools to allow a mostly vanilla Express server to run in localdev / Docker like a normal Node.js HTTP server and also as a Lambda function exposed via API Gateway.
Tech stack:
- express: A server.
Infrastructure stack:
- serverless: Build / deployment framework for getting code to Lambda.
- serverless-http: Bridge to make a vanilla Express server run on Lambda.
Infrastructure tools:
- AWS CloudFormation: Create AWS cloud resources using YAML. The serverless framework creates a CloudFormation stack of Lambda-supporting resources as part of a normal deployment. This project also uses a small CloudFormation stack to bootstrap an S3 bucket and DynamoDB table to handle Terraform state.
- HashiCorp Terraform: Create AWS cloud resources using HCL. Typically more flexible and expressive than CloudFormation. We have a simple Terraform stack that uses a module to set up a production-ready set of resources (IAM, monitoring, etc.) to support the resources/stack generated by serverless.
We use a naming convention in cloud resources and yarn tasks to separate the various high-level tooling layers:
- cf: AWS CloudFormation-specific names.
- tf: Terraform-specific names.
- sls: Serverless framework-specific names.
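As a sketch of how the convention plays out (a minimal illustration assuming the project's default names; the exact resource names come from the stacks themselves):

```shell
#!/usr/bin/env bash
# Illustrative sketch of the naming convention: each tool's resources share
# a prefix for easy identification in the AWS console.
SERVICE_NAME="simple-reference"
STAGE="sandbox"

echo "cf-${SERVICE_NAME}-${STAGE}-bootstrap"   # CloudFormation bootstrap stack
echo "tf-${SERVICE_NAME}-${STAGE}-admin"       # Terraform-created IAM group
echo "sls-${SERVICE_NAME}-${STAGE}"            # Serverless-generated stack
```

This makes it easy to tell at a glance which tool created (and should manage) a given resource.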
Development happens on a local machine and, where a programmatic name is needed, is referred to as:
- localdev: A development-only setup running on a local machine.
We target four different stages/environments of AWS hosted deployments:
- sandbox: A loose environment where developers can manually push / check things / break things with impunity. Typically deployed from developer laptops.
- development: Tracks feature development branches. Typically deployed by CI on merges to the develop branch if using a git flow workflow.
- staging: A near-production environment to validate changes before committing to actual production. Typically deployed by CI for release candidate branches before merging to master.
- production: The real deal. Typically deployed by CI after a merge to master.
Note that these are completely arbitrary groupings, in both composition and naming. They are a sensible starting point if you need one, but the final set (even a single stage, if you want) is totally up to you!
All of our yarn run <task> tasks should be run with a STAGE=<value> prefix. The default is STAGE=localdev, and only commands like yarn run node:localdev or yarn run lambda:localdev can run successfully without specifying it. For commands actually targeting AWS, please prefix like:
$ STAGE=sandbox yarn run <task>
$ STAGE=development yarn run <task>
$ STAGE=staging yarn run <task>
$ STAGE=production yarn run <task>

Note: We separate the STAGE variable from NODE_ENV because there are often build implications of NODE_ENV that are distinct from our notion of deploy target environments.
Our task runner scheme is a bash + yarn based system crafted around the following environment variables (with defaults):
- STAGE: localdev
- SERVICE_NAME: simple-reference (The name of the application/service in the cloud.)
- AWS_REGION: us-east-1
... and some minor localdev only ones:
- AWS_XRAY_CONTEXT_MISSING: LOG_ERROR (Have X-Ray not error in localdev.)
- SERVER_PORT: 3000
- SERVER_HOST: 0.0.0.0
... and some implied ones:
- FUNCTION_NAME: The name of a given Lambda function. In this project, the main one is server.
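The defaulting scheme can be sketched in bash (a minimal illustration using this project's variable names; the `${VAR:-default}` pattern is standard shell parameter expansion):

```shell
#!/usr/bin/env bash
# Sketch of how the task runner's defaults can be resolved: `${VAR:-default}`
# keeps any value already set in the environment and falls back otherwise.
STAGE="${STAGE:-localdev}"
SERVICE_NAME="${SERVICE_NAME:-simple-reference}"
AWS_REGION="${AWS_REGION:-us-east-1}"
SERVER_PORT="${SERVER_PORT:-3000}"
SERVER_HOST="${SERVER_HOST:-0.0.0.0}"

echo "Target: ${SERVICE_NAME} (${STAGE}) in ${AWS_REGION}"
```

Because the values come from the environment, a `STAGE=sandbox yarn run <task>` prefix overrides the default for just that invocation.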
If your project supports Windows, you will want a more general / portable approach than these bash-style environment variable prefixes.
We rely on IAM roles to limit privileges to the minimum necessary to provision, update, and deploy the service. Typically this involves creating personalized users in the AWS console, and then assigning them groups for varying appropriate degrees of privilege. Here are the relevant ones for this reference project:
- Superuser - Support Stack: A privileged user that can create the initial bootstrap CloudFormation stack and Terraform service module that will support a Serverless application. It should not be used for Serverless deploys.
- IAM Groups - Serverless App: The FormidableLabs/serverless/aws module provides IAM groups and support for different types of users to create/update/delete the Serverless application. The IAM groups created are:
  - tf-${SERVICE_NAME}-${STAGE}-admin: Can create/delete/update the Serverless app.
  - tf-${SERVICE_NAME}-${STAGE}-developer: Can deploy the Serverless app.
  - tf-${SERVICE_NAME}-${STAGE}-ci: Can deploy the Serverless app.
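As a sketch, attaching a user to one of these groups with the AWS CLI might look like the following (the group name follows the convention above; the user name is illustrative, and the call requires sufficiently privileged credentials):

```shell
#!/usr/bin/env bash
# Sketch: derive one of the IAM group names created by the Terraform module
# and attach a user to it. Defaults mirror this project's conventions.
SERVICE_NAME="${SERVICE_NAME:-simple-reference}"
STAGE="${STAGE:-sandbox}"
GROUP="tf-${SERVICE_NAME}-${STAGE}-developer"

echo "Attaching to group: ${GROUP}"
# Requires privileged credentials; shown commented out for illustration.
# aws iam add-user-to-group --group-name "${GROUP}" --user-name "FIRST.LAST"
```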
Our application is a Node.js server.
First, make sure you have our version of node (determined by .nvmrc) that matches our Lambda target (you will need to have nvm installed):
$ nvm use

Then, yarn install the Node.js dependencies:
$ yarn install

Certain administrative / development tasks require the AWS CLI tools to prepare and deploy our staging / production services. To install them, either do:
# Install via Python
$ sudo pip install awscli --ignore-installed six
# Or brew
$ brew install awscli

After this you should be able to type:
$ aws --version

To work with this reference app, you need AWS credentials for your specific user (aka, FIRST.LAST). To create the bootstrap and service support stacks, that user will need to be a superuser. To deploy serverless applications, the user will need to be attached to the given tf-${SERVICE_NAME}-${STAGE}-(admin|developer) IAM groups after the service stack is created.
Once you have a user + access + secret keys, you need to make them available to commands requiring them. There are a couple of options:
You can append the following two environment variables to any command like:
$ AWS_ACCESS_KEY_ID=INSERT \
AWS_SECRET_ACCESS_KEY=INSERT \
STAGE=sandbox \
yarn run lambda:info

This has the advantage of not storing secrets on disk. The disadvantage is needing to keep the secrets around to paste and/or export into every new terminal.
Another option is to store the secrets on disk. You can configure your ~/.aws credentials like:
$ mkdir -p ~/.aws
$ touch ~/.aws/credentials

Then add a default entry if you only anticipate working on this one project, or a named profile entry with your username (aka, FIRST.LAST):
$ vim ~/.aws/credentials
[default|FIRST.LAST]
aws_access_key_id = INSERT
aws_secret_access_key = INSERT

If you are using a named profile, then export it into the environment in any terminal you are working in:
$ export AWS_PROFILE="FIRST.LAST"
$ STAGE=sandbox yarn run lambda:info

Or, you can declare the variable inline:
$ AWS_PROFILE="FIRST.LAST" \
STAGE=sandbox \
yarn run lambda:info

The most secure mix of the two options above is to install and use aws-vault. Once you've followed its installation instructions, you can set up and use a profile like:
# Store AWS credentials for a profile named "FIRST.LAST"
$ aws-vault add FIRST.LAST
Enter Access Key Id: INSERT
Enter Secret Key: INSERT
# Execute a command with temporary creds
$ aws-vault exec FIRST.LAST -- STAGE=sandbox yarn run lambda:info

We have several options for developing a service locally, each with different advantages. Here's a quick list of application ports / running commands:
- 3000: Node server via nodemon. (yarn node:localdev)
- 3001: Lambda offline local simulation. (yarn lambda:localdev)
Run the server straight up in your terminal with Node.js via nodemon for instant restarts on changes:
$ yarn node:localdev

See it in action!:
Or from the command line:
$ curl -X POST "http://127.0.0.1:3000/hello.json" \
  -H "Content-Type: application/json"

Run the server in a Lambda simulation via the serverless-offline plugin:
$ yarn lambda:localdev

See it in action!:
This section discusses getting AWS resources provisioned to support Terraform and then Serverless.
The basic overview is:
- Bootstrap Stack: Use AWS CloudFormation to provision resources to manage Terraform state.
- Service Stack: Use Terraform to provision resources / permissions to accompany a Serverless deploy.
After this, we are ready to deploy a standard serverless application with full support!
This step creates an S3 bucket and DynamoDB table to enable Terraform to remotely manage its state. We do this via AWS CloudFormation.
All commands in this section should be run by an AWS superuser. The configuration for all of this section is controlled by: aws/bootstrap.yml. Commands and resources created are all prefixed with cf as a project-specific choice for ease of identification in the AWS console (vs. Terraform vs. Serverless-generated).
Create the CloudFormation stack:
# Provision stack.
$ STAGE=sandbox yarn run cf:bootstrap:create
{
"StackId": "arn:aws:cloudformation:${AWS_REGION}:${AWS_ACCOUNT}:stack/cf-${SERVICE_NAME}-${STAGE}-bootstrap/HASH"
}
# Check status until reach `CREATE_COMPLETE`
$ STAGE=sandbox yarn run cf:bootstrap:status
"CREATE_COMPLETE"

Once this is complete, you can move on to the service stack provisioning section. The remaining commands below are only needed if you have to update / delete the bootstrap stack, which shouldn't happen often.
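If you prefer to poll with the AWS CLI directly rather than the yarn task, a minimal sketch might look like the following (the stack name follows this project's convention; `aws cloudformation wait` is a standard CLI subcommand and requires superuser credentials):

```shell
#!/usr/bin/env bash
# Sketch: build the bootstrap stack name per this project's convention and
# block until CloudFormation reports CREATE_COMPLETE.
SERVICE_NAME="${SERVICE_NAME:-simple-reference}"
STAGE="${STAGE:-sandbox}"
STACK_NAME="cf-${SERVICE_NAME}-${STAGE}-bootstrap"

echo "Waiting on stack: ${STACK_NAME}"
# Exits non-zero if stack creation fails; commented out for illustration.
# aws cloudformation wait stack-create-complete --stack-name "${STACK_NAME}"
```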
Update the CloudFormation stack:
# Update, then check status.
$ STAGE=sandbox yarn run cf:bootstrap:update
$ STAGE=sandbox yarn run cf:status

Delete the CloudFormation stack:
The bootstrap stack should only be deleted after you have removed all of the -admin|-developer|-ci groups from users and deleted the Serverless and Terraform service stacks.
# **WARNING**: Use with extreme caution!!!
$ STAGE=sandbox yarn run cf:bootstrap:_delete
# Check status. (A status or error with `does not exist` when done).
$ STAGE=sandbox yarn run cf:bootstrap:status
An error occurred (ValidationError) when calling the DescribeStacks operation: Stack with id cf-SERVICE_NAME-STAGE does not exist

This step provisions a Terraform stack to provide us with IAM groups and other AWS resources to support and enhance a Serverless provision (in the next section).
All commands in this section should be run by an AWS superuser. The configuration for all of this section is controlled by: terraform/main.tf. Commands and resources created are all prefixed with tf as a project-specific choice for ease of identification.
Init your local Terraform state.
This needs to be run once to be able to run any other Terraform commands.
$ STAGE=sandbox yarn run tf:service:init

Plan the Terraform stack:
Terraform allows you to see what's going to happen / change in your cloud infrastructure before actually committing to it, so it is always a good idea to run a plan before any Terraform mutating command.
$ STAGE=sandbox yarn run tf:service:plan

Apply the Terraform stack:
This creates / updates as appropriate.
# Type in `yes` to go forward
$ STAGE=sandbox yarn run tf:service:apply
# YOLO: run without checking first
$ STAGE=sandbox yarn run tf:service:apply -auto-approve

Delete the Terraform stack:
The service stack should only be deleted after you have removed all of the -admin|-developer|-ci groups from users and deleted the Serverless stack.
# **WARNING**: Use with extreme caution!!!
# Type in `yes` to go forward
$ STAGE=sandbox yarn run tf:service:_delete
# YOLO: run without checking first
$ STAGE=sandbox yarn run tf:service:_delete -auto-approve

Visualize the Terraform stack:
These are Mac-based instructions, but analogous steps are available on other platforms. First, you'll need GraphViz for the dot tool:
$ brew install graphviz

From there, you can visualize with:
# Generate SVG
$ STAGE=sandbox yarn run -s tf:terraform graph | dot -Tsvg > ~/Desktop/infrastructure.svgThis section discusses developers getting code and secrets deployed (manually from local machines to an AWS development playground or automated via CI).
All commands in this section should be run by AWS users with attached IAM groups provisioned by our support stack of tf-${SERVICE_NAME}-${STAGE}-(admin|developer|ci). The configuration for this section is controlled by: serverless.yml.
These actions are reserved for -admin users.
Create the Lambda app. The first time through a deploy, an -admin user is required (to effect the underlying CloudFormation changes):
$ STAGE=sandbox yarn run lambda:deploy
# Check on app and endpoints.
$ STAGE=sandbox yarn run lambda:info

Delete the Lambda app:
# **WARNING**: Use with extreme caution!!!
$ STAGE=sandbox yarn run lambda:_delete
# Confirm (with expected error).
$ STAGE=sandbox yarn lambda:info
...
Serverless Error ---------------------------------------
Stack with id sls-${SERVICE_NAME}-${STAGE} does not exist

Metrics:
# Show metrics for an application
$ STAGE=sandbox yarn run lambda:metrics

These actions can be performed by any user (-admin|-developer|-ci).
Get server information:
$ STAGE=sandbox yarn run lambda:info
...
endpoints:
ANY - https://HASH.execute-api.AWS_REGION.amazonaws.com/STAGE/base
ANY - https://HASH.execute-api.AWS_REGION.amazonaws.com/STAGE/base/{proxy+}
ANY - https://HASH.execute-api.AWS_REGION.amazonaws.com/STAGE/xray
ANY - https://HASH.execute-api.AWS_REGION.amazonaws.com/STAGE/xray/{proxy+}
...

See the logs:
$ STAGE=sandbox yarn run lambda:logs -f FUNCTION_NAME

Note: To see the logs in the AWS console, you unfortunately cannot just click on "CloudWatch > Logs" and see the relevant log groups listed, because a wildcard would be needed for logs:DescribeLogGroups|Streams. However, if you know the generated log group name (and we do here), you can fill in the blanks and navigate to:
https://console.aws.amazon.com/cloudwatch/home?#logStream:group=/aws/lambda/sls-SERVICE_NAME-STAGE-FUNCTION_NAME;streamFilter=typeLogStreamPrefix
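Since the log group name is deterministic, you can construct that console URL from the environment; a sketch (the URL template is copied from above, with this project's default names filled in):

```shell
#!/usr/bin/env bash
# Sketch: build the CloudWatch console deep link for a function's log group.
SERVICE_NAME="${SERVICE_NAME:-simple-reference}"
STAGE="${STAGE:-sandbox}"
FUNCTION_NAME="${FUNCTION_NAME:-server}"

LOG_GROUP="/aws/lambda/sls-${SERVICE_NAME}-${STAGE}-${FUNCTION_NAME}"
URL="https://console.aws.amazon.com/cloudwatch/home?#logStream:group=${LOG_GROUP};streamFilter=typeLogStreamPrefix"
echo "${URL}"
```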
Update the Lambda server.
$ STAGE=sandbox yarn run lambda:deploy

Roll back to a previous Lambda deployment:
If something has gone wrong, you can see the list of available states to roll back to with:
$ STAGE=sandbox yarn run lambda:rollback

Then choose a timestamp and add it with the -t flag like:
$ STAGE=sandbox yarn run lambda:rollback -t 2019-02-07T00:35:56.362Z