Recently, serverless computing has become popular thanks to AWS Lambda, but many folks still use the terms Lambda and serverless interchangeably. In this post, we shed some light on AWS Lambda: how it ties into serverless architecture, how to create a Lambda function, and when to use one.
(This tutorial is part of our AWS Guide. Use the right-hand menu to navigate.)

Understanding serverless

To understand what AWS Lambda is, we first have to understand what serverless architecture is all about. Serverless applications are applications that don't require you to provision any servers in order to run. When you run a serverless application, you don't have to worry about the OS setup, patching, or scaling of servers that you would have to consider when running your application on a physical server. Serverless applications or platforms have four characteristics:
Serverless applications have several components and layers:
As a cloud provider, AWS offers services for each of the components that make up a serverless architecture. This is where AWS Lambda comes in.

Describing AWS Lambda

AWS Lambda is a high-scale, provision-free serverless compute offering based on functions. It is used only for the compute layer of a serverless application. The purpose of AWS Lambda is to build event-driven applications that can be triggered by several events in AWS. When multiple events occur simultaneously, Lambda simply spins up multiple copies of the function to handle them. In other words, Lambda can be described as a form of function as a service (FaaS). Three components comprise AWS Lambda:
When you specify an event source, your function is invoked when an event from that source occurs. The diagram below shows what this looks like:

Running a Lambda function

When configuring a Lambda function, you specify the runtime environment you'd like your code to run in. Depending on the language you use, each environment provides its own set of binaries available for use in your code. You can also package any libraries or binaries you like, as long as they can be used within the runtime environment. All environments are based on the Amazon Linux AMI. The currently available runtime environments are:
When running a Lambda function, we focus only on the code because AWS manages capacity and all updates. AWS Lambda can be invoked synchronously using the RequestResponse invocation type and asynchronously using the Event invocation type.

Concepts of a Lambda function

To better understand how a Lambda function works, there are a few key concepts to understand.

Event source

Although AWS Lambda can be triggered using the Invoke API, the recommended way of triggering Lambda is through event sources from within AWS. Two invocation models are supported: (a) the push model, where the function is triggered by other events, such as an API Gateway request, a new object in S3, or Amazon Alexa; and (b) the pull model, where the Lambda service polls an event source for new items. Examples of such event sources are DynamoDB and Amazon Kinesis.

Lambda configuration

There are a few configuration settings that can be used with Lambda functions:
- Timeout, which is the amount of time a function is allowed to run before it is timed out.

Create an AWS Lambda

There are a few ways to create a Lambda function in AWS. The most common is the console, but this method should only be used for testing in dev. For production, it is best practice to automate deployment of the Lambda. There are third-party tools for setting up this automation, such as Terraform, but since we are specifically talking about an AWS service, AWS recommends using the Serverless Application Model (SAM) for this task. SAM is built on top of AWS CloudFormation, and a SAM template looks like a normal CloudFormation template except that it has a transform block declaring that the template is a SAM template rather than a plain CloudFormation template. You can take a look at some example templates in AWSLabs.

AWS Lambda use cases

You can use AWS Lambda in a variety of situations, including but not limited to:
These postings are my own and do not necessarily represent BMC's position, strategies, or opinion.
If you are using AWS as a provider, all functions inside the service are AWS Lambda functions.

Configuration

All of the Lambda functions in your serverless service can be found in serverless.yml under the functions property.

```yml
service: myService

provider:
  name: aws
  runtime: nodejs14.x
  memorySize: 512
  timeout: 10
  versionFunctions: false
  tracing:
    lambda: true

functions:
  hello:
    handler: handler.hello
    name: ${sls:stage}-lambdaName
    description: Description of what the lambda function does
    runtime: python2.7
    memorySize: 512
    timeout: 10
    provisionedConcurrency: 3
    reservedConcurrency: 5
    tracing: PassThrough
```

The handler property points to the file and module containing the code you want to run in your function.

```js
module.exports.functionOne = function (event, context, callback) {};
```

You can add as many functions as you want within this property.

```yml
service: myService

provider:
  name: aws
  runtime: nodejs14.x

functions:
  functionOne:
    handler: handler.functionOne
    description: optional description for your Lambda
  functionTwo:
    handler: handler.functionTwo
  functionThree:
    handler: handler.functionThree
```

Your functions can either inherit their settings from the provider property:

```yml
service: myService

provider:
  name: aws
  runtime: nodejs14.x
  memorySize: 512

functions:
  functionOne:
    handler: handler.functionOne
```

Or you can specify properties at the function level:

```yml
service: myService

provider:
  name: aws
  runtime: nodejs14.x

functions:
  functionOne:
    handler: handler.functionOne
    memorySize: 512
```

You can specify an array of functions, which is useful if you separate your functions into different files:

```yml
functions:
  - ${file(./foo-functions.yml)}
  - ${file(./bar-functions.yml)}
```

```yml
# foo-functions.yml
getFoo:
  handler: handler.foo
deleteFoo:
  handler: handler.foo
```

Permissions

Every AWS Lambda function needs permission to interact with other AWS infrastructure resources within your account. These permissions are set via an AWS IAM Role. You can set permission policy statements within this role via the provider.iam.role.statements property.
```yml
service: myService

provider:
  name: aws
  runtime: nodejs14.x
  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - dynamodb:DescribeTable
            - dynamodb:Query
            - dynamodb:Scan
            - dynamodb:GetItem
            - dynamodb:PutItem
            - dynamodb:UpdateItem
            - dynamodb:DeleteItem
          Resource: 'arn:aws:dynamodb:us-east-1:*:*'

functions:
  functionOne:
    handler: handler.functionOne
    memorySize: 512
```

Another example:

```yml
service: myService

provider:
  name: aws
  iam:
    role:
      statements:
        - Effect: 'Allow'
          Action:
            - 's3:ListBucket'
          Resource:
            { 'Fn::Join': ['', ['arn:aws:s3:::', { 'Ref': 'ServerlessDeploymentBucket' }]] }
        - Effect: 'Allow'
          Action:
            - 's3:PutObject'
          Resource:
            Fn::Join:
              - ''
              - - 'arn:aws:s3:::'
                - 'Ref': 'ServerlessDeploymentBucket'
                - '/*'

functions:
  functionOne:
    handler: handler.functionOne
    memorySize: 512
```

You can also use an existing IAM role by adding your IAM Role ARN in the iam.role property. For example:

```yml
service: new-service

provider:
  name: aws
  iam:
    role: arn:aws:iam::YourAccountNumber:role/YourIamRole
```

See the documentation about IAM for function-level IAM roles.

Lambda Function URLs

A Lambda Function URL is a simple solution to create HTTP endpoints with AWS Lambda. Function URLs are ideal for getting started with AWS Lambda, or for single-function applications like webhooks or APIs built with web frameworks.

You can create a function URL via the url property in the function configuration in serverless.yml. By setting url to true, as shown below, the URL will be public without CORS configuration:

```yml
functions:
  func:
    handler: index.handler
    url: true
```

Alternatively, you can configure it as an object with the authorizer and/or cors properties. The authorizer property can be set to aws_iam to enable AWS IAM authorization on your function URL:

```yml
functions:
  func:
    handler: index.handler
    url:
      authorizer: aws_iam
```

When using IAM authorization, the URL will only accept HTTP requests with AWS credentials allowing lambda:InvokeFunctionUrl (similar to API Gateway IAM authentication).
You can also configure CORS headers so that your function URL can be called from other domains in browsers. Setting cors to true applies a default CORS configuration that allows all domains (for example, an Access-Control-Allow-Origin: * response header).
You can additionally adjust your CORS configuration by setting the allowedOrigins, allowedHeaders, allowedMethods, allowCredentials, exposedResponseHeaders, and maxAge properties, as shown in the example below:

```yml
functions:
  func:
    handler: index.handler
    url:
      cors:
        allowedOrigins:
          - https://url1.com
          - https://url2.com
        allowedHeaders:
          - Content-Type
          - Authorization
        allowedMethods:
          - GET
        allowCredentials: true
        exposedResponseHeaders:
          - Special-Response-Header
        maxAge: 6000
```

The table below shows how the cors properties map to CORS response headers:

| cors property | CORS header |
| --- | --- |
| allowedOrigins | Access-Control-Allow-Origin |
| allowedHeaders | Access-Control-Allow-Headers |
| allowedMethods | Access-Control-Allow-Methods |
| allowCredentials | Access-Control-Allow-Credentials |
| exposedResponseHeaders | Access-Control-Expose-Headers |
| maxAge | Access-Control-Max-Age |
It is also possible to remove values that the CORS configuration sets by default by setting them to null instead:

```yml
functions:
  func:
    handler: index.handler
    url:
      cors:
        allowedHeaders: null
```

Referencing a container image as a target

Alternatively, the Lambda environment can be configured through Docker images. An image published to the AWS ECR registry can be referenced as the Lambda source (see AWS Lambda – Container Image Support). In addition, you can also define your own images that will be built locally and uploaded to the AWS ECR registry.

Serverless will create an ECR repository for your image, but it currently does not manage updates to it. An ECR repository is created only for new services or the first time that a function configured with an image is deployed.

In the service configuration, you can configure the ECR repository to scan for CVEs via the provider.ecr.scanOnPush property, which is false by default. (See documentation.)

In the service configuration, images can be configured via provider.ecr.images. To define an image that will be built locally, you need to specify the path property, which should point to a valid Docker context directory. Optionally, you can also set file to specify the Dockerfile that should be used when building the image. It is also possible to define images that already exist in an AWS ECR repository. In order to do that, you need to define the uri property, which should follow the <account>.dkr.ecr.<region>.amazonaws.com/<repository>@<digest> or <account>.dkr.ecr.<region>.amazonaws.com/<repository>:<tag> format.

Additionally, you can define arguments that will be passed to the docker build command via the following properties:

- buildArgs
- cacheFrom
- platform
When uri is defined for an image, buildArgs, cacheFrom, and platform cannot be defined.

Example configuration:

```yml
service: service-name

provider:
  name: aws
  ecr:
    scanOnPush: true
    images:
      baseimage:
        path: ./path/to/context
        file: Dockerfile.dev
        buildArgs:
          STAGE: ${opt:stage}
        cacheFrom:
          - my-image:latest
        platform: linux/amd64
      anotherimage:
        uri: 000000000000.dkr.ecr.sa-east-1.amazonaws.com/test-lambda-docker@sha256:6bb600b4d6e1d7cf521097177dd0c4e9ea373edb91984a505333be8ac9455d38
```

When configuring functions, images should be referenced via the image property, which can point to an image already defined in provider.ecr.images or directly to an existing AWS ECR image, following the same format as the uri above. The handler and runtime properties are not supported when image is used.

Example configuration:

```yml
service: service-name

provider:
  name: aws
  ecr:
    images:
      baseimage:
        path: ./path/to/context

functions:
  hello:
    image: 000000000000.dkr.ecr.sa-east-1.amazonaws.com/test-lambda-docker@sha256:6bb600b4d6e1d7cf521097177dd0c4e9ea373edb91984a505333be8ac9455d38
  world:
    image: baseimage
```

It is also possible to provide additional image configuration via the workingDirectory, entryPoint, and command properties of functions[].image. workingDirectory accepts a path as a string, while entryPoint and command need to be defined as lists of strings, following the "exec form" format. In order to provide additional image config properties, functions[].image has to be defined as an object, and it needs to define either a uri pointing to an existing AWS ECR image or a name property referencing an image already defined in provider.ecr.images.
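For reference, the Docker context that path points to is built as a standard Lambda container image. A minimal Dockerfile sketch, assuming AWS's public Node.js base image (matching the nodejs14.x runtime used elsewhere in this document) and an illustrative handler.js file exporting a hello function:

```dockerfile
# Sketch only: base image from AWS's public Lambda image gallery.
FROM public.ecr.aws/lambda/nodejs:14

# Copy the function code into the location the runtime expects.
COPY handler.js ${LAMBDA_TASK_ROOT}

# Tell the runtime which handler to invoke (file.exportedFunction).
CMD ["handler.hello"]
```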
Example configuration:

```yml
service: service-name

provider:
  name: aws
  ecr:
    images:
      baseimage:
        path: ./path/to/context

functions:
  hello:
    image:
      uri: 000000000000.dkr.ecr.sa-east-1.amazonaws.com/test-lambda-docker@sha256:6bb600b4d6e1d7cf521097177dd0c4e9ea373edb91984a505333be8ac9455d38
      workingDirectory: /workdir
      command:
        - executable
        - flag
      entryPoint:
        - executable
        - flag
  world:
    image:
      name: baseimage
      command:
        - command
      entryPoint:
        - executable
        - flag
```

During the first deployment when locally built images are used, the Framework will automatically create a dedicated ECR repository to store these images, named serverless-<service>-<stage>. Currently, the Framework will not remove older versions of images uploaded to ECR, as they might still be in use by versioned functions. During sls remove, the created ECR repository will be removed. During deployment, the Framework will attempt to docker login to ECR if needed. Depending on your local configuration, the Docker authorization token might be stored unencrypted. Please refer to the documentation for more details: https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Instruction set architecture

By default, Lambda functions run on 64-bit x86 architecture CPUs. However, using the arm64 architecture (AWS Graviton2 processor) may result in better pricing and performance. To switch all functions to the AWS Graviton2 processor, configure architecture at the provider level as follows:

```yml
provider:
  ...
  architecture: arm64
```

To toggle the instruction set architecture per function individually, set it directly in the functions[] context:

```yml
functions:
  hello:
    ...
    architecture: arm64
```

VPC Configuration

You can add VPC configuration to a specific function in serverless.yml by adding a vpc object property in the function configuration. This object should contain the securityGroupIds and subnetIds array properties needed to construct the VPC for this function.
Here's an example configuration:

```yml
service: service-name
provider: aws

functions:
  hello:
    handler: handler.hello
    vpc:
      securityGroupIds:
        - securityGroupId1
        - securityGroupId2
      subnetIds:
        - subnetId1
        - subnetId2
```

Or if you want to apply VPC configuration to all functions in your service, you can add the configuration to the higher-level provider object and overwrite the service-level config at the function level. For example:

```yml
service: service-name

provider:
  name: aws
  vpc:
    securityGroupIds:
      - securityGroupId1
      - securityGroupId2
    subnetIds:
      - subnetId1
      - subnetId2

functions:
  hello: # overwrites the service-level vpc config above
    handler: handler.hello
    vpc:
      securityGroupIds:
        - securityGroupId1
        - securityGroupId2
      subnetIds:
        - subnetId1
        - subnetId2
  users: # inherits the service-level vpc config above
    handler: handler.users
```

Then, when you run serverless deploy, the VPC configuration will be deployed along with your Lambda function.

If you have a provider VPC set but wish to have specific functions with no VPC, you can set the vpc value for these functions to ~ (null). For example:

```yml
service: service-name

provider:
  name: aws
  vpc:
    securityGroupIds:
      - securityGroupId1
      - securityGroupId2
    subnetIds:
      - subnetId1
      - subnetId2

functions:
  hello:
    handler: handler.hello
    vpc: ~
  users:
    handler: handler.users
```

VPC IAM permissions

The Lambda function execution role must have permissions to create, describe, and delete Elastic Network Interfaces (ENIs). When VPC configuration is provided, the default AWS AWSLambdaVPCAccessExecutionRole will be associated with your Lambda execution role. In case custom roles are provided, be sure to include the proper ManagedPolicyArns. For more information, please check Configuring a Lambda Function for Amazon VPC Access.

VPC Lambda Internet Access

By default, when a Lambda function is executed inside a VPC, it loses internet access and some resources inside AWS may become unavailable. In order for S3 and DynamoDB resources to be available to your Lambda function running inside the VPC, a VPC endpoint needs to be created.
For more information, please check VPC Endpoint for Amazon S3. In order for other services, such as Kinesis streams, to be made available, a NAT Gateway needs to be configured inside the subnets that are being used to run the Lambda, for the VPC used to execute the Lambda. For more information, please check Enable Outgoing Internet Access within VPC.

Environment Variables

You can add environment variable configuration to a specific function in serverless.yml by adding an environment object property in the function configuration. This object should contain key-value pairs of string to string:

```yml
service: service-name
provider: aws

functions:
  hello:
    handler: handler.hello
    environment:
      TABLE_NAME: tableName
```

Or if you want to apply environment variable configuration to all functions in your service, you can add the configuration to the higher-level provider object. Environment variables configured at the function level are merged with those at the provider level, so a function with specific environment variables also has access to the environment variables defined at the provider level. If an environment variable with the same key is defined at both the function and provider levels, the function-specific value overrides the provider-level default value. For example:

```yml
service: service-name

provider:
  name: aws
  environment:
    SYSTEM_NAME: mySystem
    TABLE_NAME: tableName1

functions:
  hello:
    handler: handler.hello
  users:
    handler: handler.users
    environment:
      TABLE_NAME: tableName2
```

If you want your function's environment variables to have the same values as your machine's environment variables, please read the documentation about Referencing Environment Variables.

Tags

Using the tags configuration makes it possible to add key/value tags to your functions. Those tags will appear in your AWS console and make it easier for you to group functions by tag or find functions with a common tag.
```yml
functions:
  hello:
    handler: handler.hello
    tags:
      foo: bar
```

Or if you want to apply tags configuration to all functions in your service, you can add the configuration to the higher-level provider object. Tags configured at the function level are merged with those at the provider level, so a function with specific tags also gets the tags defined at the provider level. If a tag with the same key is defined at both the function and provider levels, the function-specific value overrides the provider-level default value. For example:

```yml
service: service-name

provider:
  name: aws
  tags:
    foo: bar
    baz: qux

functions:
  hello:
    handler: handler.hello
  users:
    handler: handler.users
    tags:
      foo: quux
```

Real-world use cases where tagging your functions is helpful include:
Layers

Using the layers configuration makes it possible for your function to use Lambda Layers:

```yml
functions:
  hello:
    handler: handler.hello
    layers:
      - arn:aws:lambda:region:XXXXXX:layer:LayerName:Y
```

Layers can be used in combination with runtime: provided to implement your own custom runtime on AWS Lambda. To publish Lambda Layers, check out the Layers documentation.

Log Group Resources

By default, the framework creates LogGroups for your Lambdas. This makes it easy to clean up your log groups when you remove your service, and it makes the Lambda IAM permissions much more specific and secure. You can opt out of the default behavior by setting disableLogs: true. You can also specify the duration of CloudWatch log retention by setting logRetentionInDays:

```yml
functions:
  hello:
    handler: handler.hello
    disableLogs: true
  goodBye:
    handler: handler.goodBye
    logRetentionInDays: 14
```

Versioning Deployed Functions

By default, the framework creates function versions for every deploy. This behavior is optional and can be turned off in cases where you don't invoke past versions by their qualifier. If you do use versions, you can invoke your functions as arn:aws:lambda:....:function/myFunc:3 to invoke version 3, for example. Versions are not cleaned up by Serverless, so make sure you use a plugin or another tool to prune sufficiently old versions. The framework can't clean up versions because it doesn't have information about whether older versions are still invoked. This feature adds to the number of total stack outputs and resources, because a function version is a separate resource from the function it refers to.

To turn off function versioning, set the provider-level option versionFunctions:

```yml
provider:
  versionFunctions: false
```

Dead Letter Queue (DLQ)

When AWS Lambda functions fail, they are retried.
If the retries also fail, AWS has a feature to send information about the failed request to an SNS topic or SQS queue, called the Dead Letter Queue, which you can use to track, diagnose, and react to Lambda failures. You can set up a dead letter queue for your serverless functions with the help of an SNS topic and the onError config parameter.

Note: You can only provide one onError config per function.

DLQ with SNS

The SNS topic needs to be created beforehand and provided as an arn at the function level:

```yml
service: service

provider:
  name: aws
  runtime: nodejs14.x

functions:
  hello:
    handler: handler.hello
    onError: arn:aws:sns:us-east-1:XXXXXX:test
```

DLQ with SQS

Although Dead Letter Queues support both SNS topics and SQS queues, the onError config currently only supports SNS topic arns, due to a race condition when using SQS queue arns and updating the IAM role. We're working on a fix so that SQS queue arns will be supported in the future.

KMS Keys

AWS Lambda uses AWS Key Management Service (KMS) to encrypt your environment variables at rest. The awsKmsKeyArn config variable gives you a way to define your own KMS key to be used for encryption:

```yml
service:
  name: service-name
  awsKmsKeyArn: arn:aws:kms:us-east-1:XXXXXX:key/some-hash

provider:
  name: aws
  environment:
    TABLE_NAME: tableName1

functions:
  hello:
    handler: handler.hello
    awsKmsKeyArn: arn:aws:kms:us-east-1:XXXXXX:key/some-hash
    environment:
      TABLE_NAME: tableName2
  goodbye:
    handler: handler.goodbye
```

Secrets using environment variables and KMS

When storing secrets in environment variables, AWS strongly suggests encrypting sensitive information. AWS provides a tutorial on using KMS for this purpose.

AWS X-Ray Tracing

You can enable AWS X-Ray Tracing on your Lambda functions through the optional tracing config variable:

```yml
service: myService

provider:
  name: aws
  runtime: nodejs14.x
  tracing:
    lambda: true
```

You can also set this variable on a per-function basis.
This will override the provider-level setting if present:

```yml
functions:
  hello:
    handler: handler.hello
    tracing: Active
  goodbye:
    handler: handler.goodbye
    tracing: PassThrough
```

Asynchronous invocation

When the intention is to invoke a function asynchronously, you may want to configure the following additional settings.

Destinations

A destination target can be another Lambda function you also deploy with the service, or another qualified target (an externally managed Lambda function, an EventBridge event bus, an SQS queue, or an SNS topic), which you can address via its ARN or a reference:

```yml
functions:
  asyncHello:
    handler: handler.asyncHello
    destinations:
      onSuccess: otherFunctionInService
      onFailure: arn:aws:sns:us-east-1:xxxx:some-topic-name
  asyncGoodBye:
    handler: handler.asyncGoodBye
    destinations:
      onFailure:
        type: sns
        arn:
          Ref: SomeTopicName
```

Maximum Event Age and Maximum Retry Attempts

maximumEventAge accepts values between 60 seconds and 6 hours, provided in seconds. maximumRetryAttempts accepts values between 0 and 2.

```yml
functions:
  asyncHello:
    handler: handler.asyncHello
    maximumEventAge: 7200
    maximumRetryAttempts: 1
```

EFS Configuration

You can use Amazon EFS with Lambda by adding a fileSystemConfig property in the function configuration in serverless.yml. fileSystemConfig should be an object that contains the arn and localMountPath properties. The arn property should reference an existing EFS Access Point, and localMountPath should specify the absolute path under which the file system will be mounted. Here's an example configuration:

```yml
service: service-name
provider: aws

functions:
  hello:
    handler: handler.hello
    fileSystemConfig:
      localMountPath: /mnt/example
      arn: arn:aws:elasticfilesystem:us-east-1:111111111111:access-point/fsap-0d0d0d0d0d0d0d0d0
    vpc:
      securityGroupIds:
        - securityGroupId1
      subnetIds:
        - subnetId1
```

Ephemeral storage

By default, Lambda allocates 512 MB of ephemeral storage to functions under the /tmp directory. You can increase its size via the ephemeralStorageSize property.
It should be a numerical value in MB, between 512 and 10240:

```yml
functions:
  helloEphemeral:
    handler: handler.handler
    ephemeralStorageSize: 1024
```

Lambda Hashing Algorithm migration

Note: The migration guide below is intended to be used if you are already using the v3 version of the Framework and you have the provider.lambdaHashingVersion property set to 20200924 in your configuration file. If you are still on v2 and want to upgrade to v3, please refer to the V3 Upgrade docs.

In v3, Lambda version hashes are generated using an improved algorithm that fixes determinism issues. If you are still using the old hashing algorithm, you can follow the guide below to migrate to the new default version. Please keep in mind that these changes require two deployments with a manual configuration adjustment between them. The migration also creates two additional versions and temporarily overrides the descriptions of your functions. It needs to be done separately for each of your environments/stages.
Now your whole service is fully migrated to the new Lambda hashing algorithm. If you do not want to temporarily override the descriptions of your functions, or would like to avoid creating unnecessary versions of your functions, you might want to use one of the following approaches: