API Service with FastAPI + AWS Lambda + API Gateway and Make It Work

FanchenBao
17 min read · Sep 12, 2023

Update 2023-11-23: the section where environment variables are incorporated into the Lambda function has been updated to show how they can be retrieved from .env files.

Update 2023-11-30: added the section “Gotcha #3. ENV=dev” to explain why setting ENV=dev breaks local development.

The goal of this article is to show one way to create a FastAPI app, deploy it to AWS Lambda via a Docker Container, trigger it with API Gateway, and basically just make it work. Combined together, the FastAPI + AWS Lambda + API Gateway setup allows one to quickly spin up a maintainable and scalable API service.

Who is The Target Audience?

This article will focus on the design choices of the FastAPI app, the deployment procedures (exclusively via the AWS CLI), and the explanation of a few gotchas. It will NOT dive deep into each technology. Thus, it is assumed that the target audience already has hands-on experience with FastAPI, Docker, AWS ECR, AWS Lambda, and API Gateway.

Why FastAPI + AWS Lambda + API Gateway?

From my prior experience, AWS Lambda + API Gateway is sufficient to create an API service, in which AWS Lambda handles all the execution of API requests and API Gateway controls the HTTP methods, routes, authentication, traffic, and more. This is a valid approach, but it has two main drawbacks.

  1. Multiple Lambda functions might be needed to handle different API requests. In fact, it is not unreasonable to assume that each HTTP method of each API Gateway resource has its own Lambda integration. Once the number of API routes and methods scales up, the sheer number of Lambda functions becomes unmanageable.
  2. The API design (i.e., routes and methods) is embedded in API Gateway. If one does not use a tool to manage cloud resources, updating the API design is a nightmare. Even with a proper tool, making changes to the routes or function integration is still not a trivial task.

The inclusion of FastAPI resolves these two problems because it centralizes all the API source code under one repo and takes over the API design responsibility. Essentially, FastAPI drastically simplifies the configuration of Lambda (only one Lambda is needed and it is merely a container to run the FastAPI app) and API Gateway (only one route and one integration is needed; all the other API design details are delegated to FastAPI). With all the API source code centralized, maintenance of the API service is no different than regular software development.

Another key benefit of FastAPI is that if one decides that Lambda is no longer a suitable way to deploy the API service, the source code can be easily re-packaged and re-deployed elsewhere (e.g., EC2) without much modification. Such flexibility is unthinkable if the API design is locked in with API Gateway.

Why Do We Need Another Article on This Topic?

A quick Google search returns quite a few articles on setting up FastAPI + AWS Lambda + API Gateway. Why do we need another one?

Because although they all sort of work, they are either outdated, did not work end-to-end for me, or did not suit my constraints.

For example, the article from Sean Baier deployed the FastAPI app by zipping the source code and uploading the zip file to Lambda. I tried it, but the pydantic installed on my M1 MacBook did not work on Amazon Linux. To make this approach work, I had to download a pydantic build suitable for Amazon Linux and upload it as a layer. That was way too much work.

The article from Prasad Middle used a docker image to deploy the FastAPI app to Lambda (Lambda’s support for container images is relatively recent, but it is truly a godsend). Although the article was very detailed for the most part, it slacked off towards the end, and its approach of using a Lambda function URL to display the OpenAPI documentation did not work for me (the documentation did show up, but I could not make any API calls from it, which more or less defeats the purpose of having interactive documentation).

The article from Adem Usta resolved the OpenAPI documentation issue, but it relied on Serverless. There is nothing wrong with using Serverless, but I did not want to learn and set up another technology just for quick prototyping. Similarly, the article from AWS itself used AWS CDK for deployment, which was not what I wanted.

I am looking for a way to deploy the FastAPI app to Lambda, hook it up with API Gateway, and just make it work using AWS console or CLI. This article aims to document the procedures I used to achieve this goal.

Set up The FastAPI App

The source code of the FastAPI app is available on this repo. The directory layout is as follows:

app
├── api
│   ├── __init__.py
│   └── api_v0
│       ├── __init__.py
│       ├── api.py
│       └── endpoints
│           ├── __init__.py
│           ├── items.py
│           └── users.py
└── main.py

Why this design choice?

app/main.py is the entrance of the app, while different versions of the API live in their respective folders under app/api/ . In the layout above, we only have API version 0, which lives under app/api/api_v0/ . app/api/api_v0/api.py handles all the routing of API version 0. app/api/api_v0/endpoints/ holds the actual endpoints.

This design allows different API versions to coexist under one repo, which simplifies the maintenance of API code. Creation or deprecation of API versions is as easy as adding or commenting out a line in app/main.py that includes the router from a particular API version.

Run the FastAPI app locally
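
If the dependencies are not installed yet (the repo ships a requirements.txt at its root), one way to set them up is with a virtual environment; the folder name venv below is just a convention:

python -m venv venv && source venv/bin/activate
pip install -r requirements.txt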

cd app
uvicorn main:app --reload

Test the API

curl -X 'GET'  'http://127.0.0.1:8000/api/v0'  -H 'accept: application/json'

This shall respond with {"ENV": "dev", "message": "Hello World!"}

curl -X 'GET'  'http://127.0.0.1:8000/api/v0/items'  -H 'accept: application/json'

This shall respond with {"message": "Get all items"}

curl -X 'GET'  'http://127.0.0.1:8000/api/v0/users'  -H 'accept: application/json'

This shall respond with {"message": "Get all users"}

View the OpenAPI documentation

Open a browser at http://127.0.0.1:8000/docs

OpenAPI documentation hosted locally

Build the Docker Image

The Dockerfile used is as follows

FROM public.ecr.aws/lambda/python:3.10

# Copy requirements.txt
COPY requirements.txt ${LAMBDA_TASK_ROOT}

# Install the specified packages
RUN pip install -r requirements.txt

# Copy function code. This copies EVERYTHING inside the app folder to the lambda root.
COPY ./app ${LAMBDA_TASK_ROOT}

# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile).
# Since we copied the contents of ./app into the Lambda task root, main.py sits at the root,
# so the handler is main.handler.
CMD ["main.handler"]

Build the image. The --platform linux/amd64 flag ensures the image runs on Lambda even when it is built on an ARM machine (e.g., an M1 Mac).

# IMAGE is the image name
docker build --platform linux/amd64 -t $IMAGE:latest .

Push the Image to AWS ECR

We will store the image on AWS ECR. Before running these commands, make sure you have properly configured AWS CLI.

# Get AWS_ACCOUNT and AWS_REGION. Both will be used in future commands
AWS_ACCOUNT=$(aws sts get-caller-identity --query 'Account' --output text)
AWS_REGION=$(aws ec2 describe-availability-zones --query 'AvailabilityZones[0].RegionName' --output text)

# Log in to AWS ECR
aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT.dkr.ecr.$AWS_REGION.amazonaws.com

If an image repo has not been created yet, create one using the same name as $IMAGE

# create repo
aws ecr create-repository --repository-name $IMAGE --image-scanning-configuration scanOnPush=true --image-tag-mutability MUTABLE

Tag the local image for use on AWS ECR. It is important to use a new tag each time the image is rebuilt. Otherwise, it is easy to end up running stale code: Lambda resolves the image tag to a digest when the function is created or its code is updated, so re-pushing under an existing tag makes it hard to tell which image version is actually deployed. One strategy to create a new tag each time is to use a timestamp as the tag.

# Create a timestamp tag
TAG=$(date +%Y%m%d_%H%M%S)

# Tag the image
docker tag $IMAGE:latest $AWS_ACCOUNT.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE:$TAG

Push the newly tagged image to ECR

docker push $AWS_ACCOUNT.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE:$TAG
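
As an optional sanity check, you can list the tags currently stored in the repo and confirm that the new timestamp tag is there:

# List the image tags in the ECR repo
aws ecr describe-images --repository-name $IMAGE --query 'imageDetails[].imageTags[]' --output text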

Create a Lambda Execution Role

If a Lambda execution role does not exist, it must be created before a Lambda function itself can be created. We will attach the most basic Lambda execution policies to the role.

# Give the Lambda execution role a name in AWS_LAMBDA_ROLE_NAME
aws iam create-role --role-name $AWS_LAMBDA_ROLE_NAME --assume-role-policy-document '{"Version": "2012-10-17","Statement": [{ "Effect": "Allow", "Principal": {"Service": "lambda.amazonaws.com"}, "Action": "sts:AssumeRole"}]}'
aws iam attach-role-policy --role-name $AWS_LAMBDA_ROLE_NAME --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
aws iam attach-role-policy --role-name $AWS_LAMBDA_ROLE_NAME --policy-arn arn:aws:iam::aws:policy/AWSXRayDaemonWriteAccess

Create a Lambda Function

If a Lambda function does not exist, create it using the docker image just pushed and attach the Lambda execution role to it. Note that we must set an ENV variable and append it as a suffix to the Lambda function name (we use $IMAGE as the base name, so the function is named $IMAGE-$ENV). We will discuss the reason for this later.

ENV=stage
AWS_LAMBDA_FUNC_NAME="$IMAGE-$ENV"

aws lambda create-function \
--function-name $AWS_LAMBDA_FUNC_NAME \
--package-type Image \
--code ImageUri=$AWS_ACCOUNT.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE:$TAG \
--role $(aws iam get-role --role-name $AWS_LAMBDA_ROLE_NAME --query 'Role.Arn' --output text)
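
Container-image functions can stay in the Pending state for a short while after creation. If a later command (e.g., update-function-configuration) complains that the function is not ready, you can block until it becomes Active; function-active-v2 is the waiter in AWS CLI v2 (older CLIs ship function-active instead):

# Wait until the newly created function reaches the Active state
aws lambda wait function-active-v2 --function-name $AWS_LAMBDA_FUNC_NAME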

Update the Environment Variables

If we do not use a .env file, incorporating the environment variables in the Lambda function is very straightforward. For example, below shows how we can add ENV to the Lambda function.

# Upload the current ENV to Lambda
aws lambda update-function-configuration \
--function-name $AWS_LAMBDA_FUNC_NAME \
--environment "Variables={ENV=$ENV}"

If a .env file is used, we need to parse the file, extract the variables, and create the environment argument string for the update-function-configuration command. Suppose we store the environment variables for stage in .env.stage and those for prod in .env.prod; we can then parse the file and build the environment argument string in $VARIABLES as follows.

# Update ENV variables for the lambda function
comment_re="^#.*"
VARIABLES="Variables={"
# Handle a last line that may not be followed by a newline
# https://unix.stackexchange.com/a/418067
while IFS= read -r line || [ -n "$line" ]; do
  # Trim leading white space: https://stackoverflow.com/a/3232433/9723036
  trimmed="$(echo $line | sed -e 's/^[[:space:]]*//')"
  # Ignore commented-out env variables and empty lines
  if [[ ! $trimmed =~ $comment_re ]] && [ "$trimmed" != "" ]; then
    VARIABLES+="$trimmed,"
  fi
done < .env.$ENV
VARIABLES+="ENV=$ENV}"

Suppose our .env.stage file has the content below

SOME_ENV=Anni
OTHER_ENV=Raven
# COMMENTED_OUT_ENV=Shako

then after parsing, $VARIABLES will equal the string Variables={SOME_ENV=Anni,OTHER_ENV=Raven,ENV=stage}. It can then be used to update the Lambda function.

# Upload the current ENV to Lambda
aws lambda update-function-configuration \
--function-name $AWS_LAMBDA_FUNC_NAME \
--environment $VARIABLES

Create an API Gateway

In this example, we use the image name as the name of the API Gateway.

API_GATEWAY_NAME=$IMAGE

# Create API Gateway
aws apigateway create-rest-api --name $API_GATEWAY_NAME --region $AWS_REGION

# Get the API Gateway ID.
# API_GATEWAY_ID might not be available immediately after the creation of the
# new API Gateway. You might have to wait.
API_GATEWAY_ID=$(aws apigateway get-rest-apis --query "items[?name=='$API_GATEWAY_NAME'].id" --output text)
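
Instead of waiting manually, a small retry loop can poll for the ID. This is just a sketch; the retry count and sleep interval are arbitrary:

# Poll until the new API Gateway shows up in get-rest-apis
API_GATEWAY_ID=""
for i in $(seq 1 10); do
  API_GATEWAY_ID=$(aws apigateway get-rest-apis --query "items[?name=='$API_GATEWAY_NAME'].id" --output text)
  [ -n "$API_GATEWAY_ID" ] && break
  echo "API Gateway not visible yet, retrying ($i/10)..."
  sleep 3
done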

Create proxy resource

A proxy resource, instead of a resource with a predefined route, allows network traffic of any route to be accepted by the API Gateway and forwarded to the Lambda function. Since we handle all the API routes in the FastAPI app in Lambda, there is no need for API Gateway to check the routes.

# First obtain the parent ID of the newly created API Gateway
PARENT_ID=$(aws apigateway get-resources --rest-api-id $API_GATEWAY_ID --region $AWS_REGION --query 'items[0].id' --output text)

# Then create a proxy resource under the parent ID.
# PARENT_ID might not be available immediately after the creation of the
# new API Gateway. You might have to wait.
aws apigateway create-resource --rest-api-id $API_GATEWAY_ID --region $AWS_REGION --parent-id $PARENT_ID --path-part {proxy+}

Add ANY method on the proxy resource

The ANY method, instead of a predefined method such as GET or POST, tells the proxy resource that it can accept any HTTP method. Similar to the proxy resource, the ANY method delegates method handling to the FastAPI app; differentiating the methods is not the responsibility of API Gateway.

# First obtain the ID of the proxy resource just created
RESOURCE_ID=$(aws apigateway get-resources --rest-api-id $API_GATEWAY_ID --query "items[?parentId=='$PARENT_ID'].id" --output text)

# Then add "ANY" method to the resource
# RESOURCE_ID might not be available immediately after the creation of the
# proxy resource. You might have to wait.
aws apigateway put-method --rest-api-id $API_GATEWAY_ID --region $AWS_REGION --resource-id $RESOURCE_ID --http-method ANY --authorization-type "NONE"

Add Lambda integration to the ANY method

This step tells API Gateway which Lambda function to use for the ANY method we just created.

# get the ARN of the Lambda function we created earlier
LAMBDA_ARN=$(aws lambda get-function --function-name $AWS_LAMBDA_FUNC_NAME --query 'Configuration.FunctionArn' --output text)

aws apigateway put-integration \
--region $AWS_REGION \
--rest-api-id $API_GATEWAY_ID \
--resource-id $RESOURCE_ID \
--http-method ANY \
--type AWS_PROXY \
--integration-http-method POST \
--uri arn:aws:apigateway:${AWS_REGION}:lambda:path/2015-03-31/functions/arn:aws:lambda:$AWS_REGION:$AWS_ACCOUNT:function:$IMAGE-\${stageVariables.env}/invocations

Note that although we have already created a Lambda function with name $IMAGE-$ENV , we do not use it for the Lambda URI. Instead, we refer to it as

arn:aws:apigateway:${AWS_REGION}:lambda:path/2015-03-31/functions/arn:aws:lambda:$AWS_REGION:$AWS_ACCOUNT:function:$IMAGE-\${stageVariables.env}/invocations

This URI is dynamic and changes depending on the value of stageVariables.env. We are leveraging the stage variables offered by API Gateway to target different Lambda functions based on the deployment need. If we deploy API Gateway to the stage environment, we point to the Lambda function $IMAGE-stage. If we deploy to the prod environment, we point to the Lambda function $IMAGE-prod. To read more about stage variables in API Gateway, refer to the documentation.

Using two separate Lambda functions is one way to isolate the staging and production environments (only the stage Lambda has been created in this example so far). Another way is via Lambda aliases, which create different versions of the function within the same Lambda. From my prior experience, Lambda aliases are not the friendliest to use. Hence, we use separate Lambda functions for environment isolation. However, the best way to isolate environments is to create different AWS accounts, one per environment.

Grant permission for API Gateway to invoke Lambda

aws lambda add-permission --function-name $LAMBDA_ARN --source-arn "arn:aws:execute-api:$AWS_REGION:$AWS_ACCOUNT:$API_GATEWAY_ID/*/*/{proxy+}" --principal apigateway.amazonaws.com --statement-id apigateway-access --action lambda:InvokeFunction
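
As an optional check, the resource policy just attached can be read back from the function; the output should contain the apigateway-access statement:

# Inspect the function's resource-based policy
aws lambda get-policy --function-name $LAMBDA_ARN --query 'Policy' --output text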

Deploy API Gateway

The endpoint of API Gateway is not exposed until it is deployed. Deployment requires a stage name, for which we use the $ENV value.

# Deploy to $ENV
aws apigateway create-deployment --rest-api-id $API_GATEWAY_ID --stage-name $ENV --variables env=$ENV

View the OpenAPI Documentation

Open a web browser for

https://$API_GATEWAY_ID.execute-api.$AWS_REGION.amazonaws.com/$ENV/docs

Play with the API on the web browser. Also try calling the API with curl . Notice that when you call the endpoint https://$API_GATEWAY_ID.execute-api.$AWS_REGION.amazonaws.com/stage/api/v0 , the output shows that ENV is stage .
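
For example, assuming $API_GATEWAY_ID and $AWS_REGION are still set in your shell:

curl "https://$API_GATEWAY_ID.execute-api.$AWS_REGION.amazonaws.com/stage/api/v0"
# The response should show "ENV": "stage"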

OpenAPI documentation hosted on stage

Deploy to Production

Use this as practice to double-check the validity of the entire deployment process. Follow the same procedures as shown above, except that ENV is now set to prod. You need to create a new Lambda function named $IMAGE-prod, but there is NO need to create a new API Gateway. You do have to create a new deployment with the stage name prod. Recall that we use stage variables to point different deployments of API Gateway to different Lambda functions.
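
The all-in-one deployment script introduced in a later section automates this, but as a rough, condensed sketch of the manual prod pass (reusing the image, execution role, and API Gateway created above; the exact ordering may differ from the repo's script):

ENV=prod
AWS_LAMBDA_FUNC_NAME="$IMAGE-$ENV"

# New Lambda function for prod, backed by the same image
aws lambda create-function \
--function-name $AWS_LAMBDA_FUNC_NAME \
--package-type Image \
--code ImageUri=$AWS_ACCOUNT.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE:$TAG \
--role $(aws iam get-role --role-name $AWS_LAMBDA_ROLE_NAME --query 'Role.Arn' --output text)
aws lambda wait function-active-v2 --function-name $AWS_LAMBDA_FUNC_NAME

# Environment variables for prod (parse .env.prod as shown earlier, or set ENV directly)
aws lambda update-function-configuration --function-name $AWS_LAMBDA_FUNC_NAME --environment "Variables={ENV=$ENV}"

# Allow the existing API Gateway to invoke the prod function
LAMBDA_ARN=$(aws lambda get-function --function-name $AWS_LAMBDA_FUNC_NAME --query 'Configuration.FunctionArn' --output text)
aws lambda add-permission --function-name $LAMBDA_ARN --source-arn "arn:aws:execute-api:$AWS_REGION:$AWS_ACCOUNT:$API_GATEWAY_ID/*/*/{proxy+}" --principal apigateway.amazonaws.com --statement-id apigateway-access --action lambda:InvokeFunction

# New deployment of the SAME API Gateway, with the stage variable pointing at prod
aws apigateway create-deployment --rest-api-id $API_GATEWAY_ID --stage-name $ENV --variables env=$ENV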

OpenAPI documentation hosted on prod

Try the API at

https://$API_GATEWAY_ID.execute-api.$AWS_REGION.amazonaws.com/prod/api/v0

and the output should show that ENV is prod .

All-in-one Deployment Script

All the steps above can be carried out via the AWS CLI. Thus, an end-to-end, all-in-one deployment script is provided in the repo. Run the script with

ENV=stage IMAGE=fastapi ./scripts/deploy.sh
ENV=prod IMAGE=fastapi ./scripts/deploy.sh

to rebuild the docker image and deploy the new version of the FastAPI app. The script performs extra checks and does not re-create a resource if it already exists. The script requires that the environment variables be defined in the .env.stage and .env.prod files. For details, refer to the README in the repo.

Although the script is very convenient to use, it is recommended that one go through the steps manually at least once before using the script.

There is also a teardown script in the repo to remove all the resources spun up in the process.

ENV=stage IMAGE=fastapi ./scripts/teardown.sh
ENV=prod IMAGE=fastapi ./scripts/teardown.sh

Gotcha #1. CORS

Once the API is deployed and tested for functionality in both the browser and curl, the next step is to test whether CORS (Cross-Origin Resource Sharing) is enabled. Enabling CORS is crucial for allowing a client hosted on a different domain to access the API.

An easy way to test CORS is through this online tester. Paste an endpoint from either stage or prod (e.g., https://$API_GATEWAY_ID.execute-api.$AWS_REGION.amazonaws.com/$ENV/api/v0) into the online tester and check whether CORS is enabled. You shall see that the CORS test passes.

CORS test passed
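
If you prefer the command line to the online tester, a CORS preflight request with curl shows the same information. With the middleware enabled, the response headers should include access-control-allow-origin (the Origin value below is just an example):

curl -i -X OPTIONS \
-H "Origin: https://example.com" \
-H "Access-Control-Request-Method: GET" \
"https://$API_GATEWAY_ID.execute-api.$AWS_REGION.amazonaws.com/$ENV/api/v0"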

Now, let’s make a small change to app/main.py as follows:

# app/main.py

from fastapi import FastAPI
import uvicorn
import os

from api.api_v0.api import router as api_v0_router
from mangum import Mangum
from dotenv import load_dotenv
from fastapi.middleware.cors import CORSMiddleware


load_dotenv()

root_path = os.getenv('ENV', default='')
app = FastAPI(root_path=f'/{root_path}')

# app.add_middleware(
#     CORSMiddleware,
#     allow_origins=['*'],
#     allow_methods=["*"],
#     allow_headers=["*"],
# )

# Add or comment out the following lines of code to include a new version of API or
# deprecate an old version
app.include_router(api_v0_router, prefix="/api/v0")

# The magic that allows the integration with AWS Lambda
handler = Mangum(app)


if __name__ == "__main__":
    uvicorn.run(app, port=8000)

We comment out the lines with app.add_middleware . Rebuild the docker image and deploy again on stage. The API shall still work, but if you test the API endpoint with the CORS online tester, you will see that the CORS test fails.
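
Rebuilding and redeploying boils down to the same build, tag, and push steps as before, plus pointing the existing function at the new image. Below is a sketch of one way to do it; the repo's deploy.sh wraps these steps:

docker build --platform linux/amd64 -t $IMAGE:latest .
TAG=$(date +%Y%m%d_%H%M%S)
docker tag $IMAGE:latest $AWS_ACCOUNT.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE:$TAG
docker push $AWS_ACCOUNT.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE:$TAG

# Point the existing stage Lambda at the new image
aws lambda update-function-code \
--function-name "$IMAGE-stage" \
--image-uri $AWS_ACCOUNT.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE:$TAG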

CORS test failed

This shows that the app.add_middleware line is very important for enabling CORS (for more details on CORS in FastAPI, refer to the documentation).

On the other hand, there is no need to enable CORS in API Gateway. This again demonstrates that API Gateway is only a middleman that directs network traffic. It does not make any direct API-related decisions; all the API responses are handled by the FastAPI app.

Gotcha #2. Path Prefix

Return to app/main.py and make the following change

# app/main.py

from fastapi import FastAPI
import uvicorn
import os

from api.api_v0.api import router as api_v0_router
from mangum import Mangum
from dotenv import load_dotenv
from fastapi.middleware.cors import CORSMiddleware


load_dotenv()

# root_path = os.getenv('ENV', default='')
# app = FastAPI(root_path=f'/{root_path}')
app = FastAPI()

app.add_middleware(
    CORSMiddleware,
    allow_origins=['*'],
    allow_methods=["*"],
    allow_headers=["*"],
)

# Add or comment out the following lines of code to include a new version of API or
# deprecate an old version
app.include_router(api_v0_router, prefix="/api/v0")

# The magic that allows the integration with AWS Lambda
handler = Mangum(app)


if __name__ == "__main__":
    uvicorn.run(app, port=8000)

Note that we comment out the lines regarding root_path and create the FastAPI app without the root_path argument: app = FastAPI(). Rebuild the docker image and deploy it to stage.

Test the API endpoints and they still work.

curl "https://$API_GATEWAY_ID.execute-api.$AWS_REGION.amazonaws.com/stage/api/v0"
# response

{"ENV":"stage","message":"Hello World!"}

But if you open the browser for the OpenAPI documentation, you will be greeted by this error:

Unable to load the OpenAPI configuration JSON file

If you dig a little bit by inspecting the network traffic, you will see that the browser fails to fetch the JSON file at

https://$API_GATEWAY_ID.execute-api.$AWS_REGION.amazonaws.com/openapi.json

This is expected behavior, because that URL path lacks /stage. But why does a regular API call still work while the OpenAPI documentation fails? The detailed explanation is available in the FastAPI documentation. Briefly, API Gateway serves as a proxy to the FastAPI app. A user interacts directly with the endpoint provided by API Gateway, which includes the domain $API_GATEWAY_ID.execute-api.$AWS_REGION.amazonaws.com and the path prefix /stage. Since the path prefix is specific only to the proxy, when API Gateway forwards the traffic to the FastAPI app, it strips away the path prefix. In other words, what the FastAPI app sees is just

$API_GATEWAY_ID.execute-api.$AWS_REGION.amazonaws.com/api/v0

which is exactly what it expects. Hence, the regular API calls function normally.

However, when we access the OpenAPI documentation via

https://$API_GATEWAY_ID.execute-api.$AWS_REGION.amazonaws.com/stage/docs

the traffic that hits the FastAPI app is

https://$API_GATEWAY_ID.execute-api.$AWS_REGION.amazonaws.com/docs

with the path prefix /stage stripped. This prompts the app to fetch the OpenAPI JSON file at

https://$API_GATEWAY_ID.execute-api.$AWS_REGION.amazonaws.com/openapi.json

which is an invalid URL.

To resolve this issue, we must include a root path when creating the FastAPI app. Since we use ENV to indicate the path prefix, the root path must also depend on ENV. By adding a root path, we are telling the FastAPI app that whenever it makes a request of its own (e.g., to fetch the OpenAPI JSON file), the stripped path prefix will be added back in the form of the root path. Therefore, the root path must match the path prefix exactly.

root_path = os.getenv('ENV', default='')
app = FastAPI(root_path=f'/{root_path}')

Gotcha #3. ENV=dev

Everything seems to be working now, especially the stage and prod deployments on AWS. You might even include ENV=stage in the .env.stage file and ENV=prod in the .env.prod file for completeness' sake (this is encouraged).

Then you think, why don’t I add ENV=dev in the .env file as well? It won’t hurt, right? But when we visit

http://127.0.0.1:8000/dev/docs

or try

curl -X 'GET'  'http://127.0.0.1:8000/dev/api/v0'  -H 'accept: application/json'

we are greeted with

{"detail":"Not Found"}

From previous experience, we suspect that this might have something to do with the proxy or root path. So we remove /dev from the URL, but it doesn’t work for documentation.

http://127.0.0.1:8000/docs
Unable to load the OpenAPI configuration JSON file from dev

It does, however, work for the actual API

curl -X 'GET'  'http://127.0.0.1:8000/api/v0'  -H 'accept: application/json'
# Responds with
# {"ENV":"dev","message":"Hello World!","SOME_ENV":"xxx","OTHER_ENV":"xxx"}

What happened here?

This is still a problem with the proxy, path prefix, and root path. When we do local development, we normally do not have a proxy sitting between us and the FastAPI app to strip the path prefix. However, according to our app setup, if we set ENV=dev in the .env file, we contradict ourselves by telling the app that THERE IS a proxy that strips the path prefix /dev from all traffic.

Therefore, when we make the following call,

curl -X 'GET'  'http://127.0.0.1:8000/dev/api/v0'  -H 'accept: application/json'

the app assumes that the path prefix /dev has been stripped and that whatever traffic hits it contains the clean URL. Yet the actual traffic that hits the app, http://127.0.0.1:8000/dev/api/v0, is not clean (because there is no proxy), and the app fails with the error message {"detail":"Not Found"}.

The same thing happens when we try to access the documentation with the URL http://127.0.0.1:8000/dev/docs . The app does not know it is an attempt at the documentation, because /dev is not stripped. Instead, the app treats it as a regular API call. Since the app does not recognize the URL, the error message {"detail":"Not Found"} is shown again.

This explains why the curl call to http://127.0.0.1:8000/api/v0 works (because this is the correct URL and the app recognizes it).

It also explains why accessing http://127.0.0.1:8000/docs returns the error Not Found /dev/openapi.json (the app recognizes that this is a call for the documentation; since it assumes /dev has been stripped, when the app issues its own call to retrieve the JSON file, it adds the "stripped" part back, yielding http://127.0.0.1:8000/dev/openapi.json, which of course cannot be found).

In summary, according to the setup of our FastAPI app, ENV is required for API Gateway (the path prefix will be stripped by the proxy), but it should NOT be specified in the .env file for local development (there is no proxy to strip it). That said, for anyone who wishes to have a local mechanism that strips the path prefix so that dev behaves EXACTLY like stage and prod, you may follow the FastAPI documentation to set up Traefik.

Conclusion

In this article, a detailed end-to-end process of creating a FastAPI app, deploying it to AWS Lambda, and integrating it with API Gateway is provided. To handle environment isolation, we use stage variables in API Gateway to point to different Lambda functions (we intentionally avoid Lambda aliases). We also explain three gotchas related to enabling CORS, the path prefix, and setting ENV=dev for local development.

I hope this article can help the readers quickly set up an API service for prototyping, and avoid the gotchas. That said, the most sustainable way for deployment is by using an infrastructure-as-code tool.
