Deploying with AWS Lambda

How to deploy Apollo Server with AWS Lambda


AWS Lambda is a serverless computing platform with a pay-for-use billing model that enables you to run code without worrying about provisioning or managing servers.

In this guide, we'll walk through how to deploy Apollo Server's AWS Lambda integration to AWS Lambda using the Serverless framework.

Prerequisites

Before proceeding with this guide, make sure you've set up an AWS account and installed and configured the AWS CLI with your account's credentials.

⚠️ AWS best practices warn against using your AWS account root user keys for any task where they aren't required (e.g., don't use these keys to configure the AWS CLI). Instead, create an IAM user with the least privilege required to deploy your application, and configure the AWS CLI to use that user.

Setting up your project

For this example, we'll start from scratch to show how all the pieces fit together.

Begin by installing the necessary packages for using Apollo Server and its integration for AWS Lambda:

shell
npm install @apollo/server graphql @as-integrations/aws-lambda

If you're following along in TypeScript, also install typescript as a dev dependency:

shell
npm install -D typescript

Next, we'll create a file with a basic Apollo Server setup. Note the file's name and location; we'll need those in a later step:

TypeScript
src/server.ts
import { ApolloServer } from '@apollo/server';

// The GraphQL schema
const typeDefs = `#graphql
  type Query {
    hello: String
  }
`;

// A map of functions which return data for the schema.
const resolvers = {
  Query: {
    hello: () => 'world',
  },
};

// Set up Apollo Server
const server = new ApolloServer({
  typeDefs,
  resolvers,
});

Now we can import the startServerAndCreateLambdaHandler function and handlers object from @as-integrations/aws-lambda, passing in our ApolloServer instance:

TypeScript
src/server.ts
import { ApolloServer } from '@apollo/server';
import {
  startServerAndCreateLambdaHandler,
  handlers,
} from '@as-integrations/aws-lambda';

const typeDefs = `#graphql
  type Query {
    hello: String
  }
`;

const resolvers = {
  Query: {
    hello: () => 'world',
  },
};

const server = new ApolloServer({
  typeDefs,
  resolvers,
});

// This final export is important!
export const graphqlHandler = startServerAndCreateLambdaHandler(
  server,
  // We will be using the Proxy V2 handler
  handlers.createAPIGatewayProxyEventV2RequestHandler()
);

The final line in the code snippet above creates an export named graphqlHandler with a Lambda function handler. We'll get back to this function in a moment!

Deploying using the Serverless framework

Serverless is a framework that helps make deploying serverless applications to platforms like AWS Lambda easier.

Installing the CLI

We'll use the Serverless CLI to deploy our application. You can either install the Serverless package into your project directly or install the Serverless CLI globally:

Bash
npm install -g serverless

The Serverless CLI can access the credentials of the AWS CLI, which you configured earlier. So now we just need to tell Serverless which service we want to deploy.

AWS best practices recommend rotating your access keys for use cases that require long-term credentials (e.g., hosting an application).

Configuring services

You can configure Serverless using a serverless.yml file, letting it know which services to deploy and where the handlers are.

If you are using TypeScript, install the serverless-plugin-typescript package so that Serverless can build and deploy your TS files:

Bash
npm install -D serverless-plugin-typescript

You can use the example serverless.yml configuration below; take care to ensure the handler path points to the file where you export your handler:

YAML
serverless.yml
service: apollo-lambda
provider:
  name: aws
  runtime: nodejs16.x
  httpApi:
    cors: true
functions:
  graphql:
    # Make sure your file path is correct!
    # (e.g., if your file is in the root folder, use server.graphqlHandler)
    # The format is: <FILENAME>.<HANDLER>
    handler: src/server.graphqlHandler
    events:
      - httpApi:
          path: /
          method: POST
      - httpApi:
          path: /
          method: GET
# Omit the following lines if you aren't using TS!
plugins:
  - serverless-plugin-typescript

Running locally

Before deploying, we can use the Serverless CLI to invoke our handler locally to ensure everything is working. We'll do this by mocking an HTTP request with a GraphQL operation.

You can store a mock HTTP request locally by creating a query.json file like the one below (other requestContext properties are omitted for brevity):

JSON
query.json
{
  "version": "2",
  "headers": {
    "content-type": "application/json"
  },
  "isBase64Encoded": false,
  "rawQueryString": "",
  "requestContext": {
    "http": {
      "method": "POST"
    }
  },
  "rawPath": "/",
  "routeKey": "/",
  "body": "{\"operationName\": null, \"variables\": null, \"query\": \"{ hello }\"}"
}

Now we can use the Serverless CLI to invoke our handler with the mock request above:

Bash
serverless invoke local -f graphql -p query.json

Your response should look something like this:

JSON
{
  "statusCode": 200,
  "headers": {
    "cache-control": "no-store",
    "content-type": "application/json; charset=utf-8",
    "content-length": "27"
  },
  "body": "{\"data\":{\"hello\":\"world\"}}\n"
}

With everything working locally, we can move on to deployment!

Deploying

As we mentioned earlier, Serverless already has access to your AWS CLI credentials, so to deploy, all you need to do is run the following command:

Bash
serverless deploy

If successful, serverless should output something like this:

Bash
> serverless deploy
> Deploying apollo-lambda to stage dev (us-east-1)
> ✔ Service deployed to stack apollo-lambda-dev (187s)
> ..............
> endpoints:
> POST - https://ujt89xxyn3.execute-api.us-east-1.amazonaws.com/dev/
> GET - https://ujt89xxyn3.execute-api.us-east-1.amazonaws.com/dev/
> functions:
> graphql: apollo-lambda-dev-graphql
> Monitor all your API routes with Serverless Console: run "serverless --console"

You can now navigate to your endpoints and query your newly hosted server using Apollo Sandbox.
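
If you'd rather verify the deployment from code than from Sandbox, the sketch below posts the same { hello } operation to the deployed endpoint. The URL is the placeholder from the deploy output above (replace it with your own), and the example assumes Node.js 18+ so the global fetch is available:

TypeScript
// Quick smoke test against the deployed endpoint (replace with your own URL).
const endpoint = 'https://ujt89xxyn3.execute-api.us-east-1.amazonaws.com/dev/';

async function smokeTest() {
  const response = await fetch(endpoint, {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ query: '{ hello }' }),
  });
  console.log(await response.json()); // { data: { hello: 'world' } }
}

smokeTest();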

What does serverless do?

First, it builds the functions, zips up the artifacts, and uploads them to a new S3 bucket. Then, it creates a Lambda function with those artifacts and outputs the HTTP endpoint URLs to the console if everything is successful.

Managing the resulting services

The resulting S3 buckets and Lambda functions are accessible from the AWS Console. The AWS Console also lets you view the IAM user you created earlier.

To find the S3 bucket that Serverless created, search in Amazon's listed services for S3, then look for the name of your bucket (e.g., apollo-lambda-dev-serverlessdeploymentbucket-1s10e00wvoe5f is the name of our bucket).

To find the Lambda function that Serverless created, search in Amazon's listed services for Lambda. Double-check the region at the top right of the screen if your list of Lambda functions is empty or missing your new function. The default region for Serverless deployments is us-east-1 (N. Virginia).

If you ever want to remove the S3 bucket or Lambda functions that Serverless created, you can run the following command:

Bash
serverless remove

Middleware

To read or mutate the incoming event and the outgoing result, you can pass type-safe middleware to the startServerAndCreateLambdaHandler call. The API is as follows:

TypeScript
import {
  middleware,
  startServerAndCreateLambdaHandler,
  handlers,
} from '@as-integrations/aws-lambda';
import { server } from './server';

const requestHandler = handlers.createAPIGatewayProxyEventV2RequestHandler();

// Middleware is an async function whose type is based on your request handler.
// Middleware can read and mutate the incoming event. Additionally, returning an
// async function from your middleware allows you to read and mutate the result
// before it's sent.
const middlewareFn: middleware.MiddlewareFn<typeof requestHandler> = async (event) => {
  // read or update the event here
  // optionally return a callback to access the result
  return async (result) => {
    // read or update the result here
  };
};

startServerAndCreateLambdaHandler(server, requestHandler, {
  middleware: [middlewareFn],
});

One use case for middleware is cookie modification. The APIGatewayProxyStructuredResultV2 type contains a cookies property (an array you can push to), which lets you set multiple set-cookie headers in the response.

TypeScript
import {
  startServerAndCreateLambdaHandler,
  middleware,
  handlers,
} from '@as-integrations/aws-lambda';
import { server } from './server';
import { refreshCookie } from './cookies';

const requestHandler = handlers.createAPIGatewayProxyEventV2RequestHandler();

// Utilizing typeof keeps the middleware's types in sync with the request handler
const cookieMiddleware: middleware.MiddlewareFn<typeof requestHandler> = async (
  event,
) => {
  // Access existing cookies and produce a refreshed one
  const cookie = refreshCookie(event.cookies);
  return async (result) => {
    // Ensure proper initialization of the cookies property on the result
    result.cookies = result.cookies ?? [];
    // Result is mutable so it can be updated here
    result.cookies.push(cookie);
  };
};

export default startServerAndCreateLambdaHandler(server, requestHandler, {
  middleware: [cookieMiddleware],
});

More use cases and API information can be found in the library's README.

Event extensions

In many cases, API Gateway events have an authorizer in front of them that attaches custom state, which you'll want to use for authorization during GraphQL resolution. All of the handlers packaged with the library accept a generic type parameter that lets you explicitly extend the base event type. By passing an event type that includes your authorization information, that type is used during the creation of contextValue and for middleware. Below is an example, and more information can be found in the library's README.

TypeScript
import {
  startServerAndCreateLambdaHandler,
  middleware,
  handlers,
} from '@as-integrations/aws-lambda';
import type { APIGatewayProxyEventV2WithLambdaAuthorizer } from 'aws-lambda';
import { server } from './server';

export default startServerAndCreateLambdaHandler(
  server,
  handlers.createAPIGatewayProxyEventV2RequestHandler<
    APIGatewayProxyEventV2WithLambdaAuthorizer<{
      myAuthorizerContext: string;
    }>
  >(),
);
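
Once the event type is extended, the authorizer's context is available with full typing wherever the event appears. The following sketch is our own illustration (not from the library's README); it reads myAuthorizerContext inside a context function, where Lambda authorizer state lives under event.requestContext.authorizer.lambda:

TypeScript
import {
  startServerAndCreateLambdaHandler,
  handlers,
} from '@as-integrations/aws-lambda';
import type { APIGatewayProxyEventV2WithLambdaAuthorizer } from 'aws-lambda';
import { server } from './server';

export default startServerAndCreateLambdaHandler(
  server,
  handlers.createAPIGatewayProxyEventV2RequestHandler<
    APIGatewayProxyEventV2WithLambdaAuthorizer<{
      myAuthorizerContext: string;
    }>
  >(),
  {
    context: async ({ event }) => ({
      // Typed access to the custom authorizer state
      authorizerValue: event.requestContext.authorizer.lambda.myAuthorizerContext,
    }),
  },
);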

Custom request handling

In order to support all event types from AWS Lambda (including custom ones), a request handler creation utility is exposed as handlers.createRequestHandler(eventParser, resultGenerator). This function returns a fully typed request handler that can be passed as the second argument to the startServerAndCreateLambdaHandler call. Below is an example; the exact API is documented in the library's README.

TypeScript
import {
  startServerAndCreateLambdaHandler,
  handlers,
} from '@as-integrations/aws-lambda';
import type { APIGatewayProxyEventV2 } from 'aws-lambda';
import { HeaderMap } from '@apollo/server';
import { server } from './server';

type CustomInvokeEvent = {
  httpMethod: string;
  queryParams: string;
  headers: Record<string, string>;
  body: string;
};

type CustomInvokeResult =
  | {
      success: true;
      body: string;
    }
  | {
      success: false;
      error: string;
    };

const requestHandler = handlers.createRequestHandler<
  CustomInvokeEvent,
  CustomInvokeResult
>(
  {
    parseHttpMethod(event) {
      return event.httpMethod;
    },
    parseHeaders(event) {
      const headerMap = new HeaderMap();
      for (const [key, value] of Object.entries(event.headers)) {
        headerMap.set(key, value);
      }
      return headerMap;
    },
    parseQueryParams(event) {
      return event.queryParams;
    },
    parseBody(event) {
      return event.body;
    },
  },
  {
    success({ body }) {
      return {
        success: true,
        body: body.string,
      };
    },
    error(e) {
      if (e instanceof Error) {
        return {
          success: false,
          error: e.toString(),
        };
      }
      console.error('Unknown error type encountered!', e);
      throw e;
    },
  },
);

export default startServerAndCreateLambdaHandler(server, requestHandler);

Using event information

You can use the context function to get information about the current operation from the original Lambda data structures.

Your context function can access this information from its argument containing event and context objects:

TypeScript
// `MyContext` is your app's own context type; given the `context` function below,
// it would contain `lambdaEvent` and `lambdaContext` fields.
const server = new ApolloServer<MyContext>({
  typeDefs,
  resolvers,
});

// This final export is important!
export const graphqlHandler = startServerAndCreateLambdaHandler(
  server,
  handlers.createAPIGatewayProxyEventV2RequestHandler(),
  {
    context: async ({ event, context }) => {
      return {
        lambdaEvent: event,
        lambdaContext: context,
      };
    },
  }
);

The event object contains the API Gateway event (HTTP headers, HTTP method, body, path, etc.). The context object (not to be confused with the context function) contains the current Lambda Context (function name, function version, awsRequestId, time remaining, etc.).
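
As an illustration (not part of the original guide), here's a minimal resolver sketch that reads values from that contextValue; the MyContext shape below is an assumption matching the context function shown above:

TypeScript
import type { APIGatewayProxyEventV2, Context as LambdaContext } from 'aws-lambda';

// Example shape matching the context function above (an assumption for this sketch)
interface MyContext {
  lambdaEvent: APIGatewayProxyEventV2;
  lambdaContext: LambdaContext;
}

const resolvers = {
  Query: {
    hello: (_parent: unknown, _args: unknown, contextValue: MyContext) => {
      // Read a request header from the API Gateway event (header names are lowercase)
      const userAgent = contextValue.lambdaEvent.headers['user-agent'];
      // Read the remaining execution time from the Lambda Context
      const msLeft = contextValue.lambdaContext.getRemainingTimeInMillis();
      return `Hello ${userAgent ?? 'unknown client'}, ${msLeft}ms of execution time remaining`;
    },
  },
};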

If you've changed your setup to use @vendia/serverless-express (described below in Customizing HTTP routing behavior), your context function instead receives req and res options, which are express.Request and express.Response objects:

TypeScript
const { ApolloServer } = require('@apollo/server');
const { expressMiddleware } = require('@apollo/server/express4');
const serverlessExpress = require('@vendia/serverless-express');
const express = require('express');
const cors = require('cors');

const server = new ApolloServer({
  typeDefs: 'type Query { x: ID }',
  resolvers: { Query: { x: () => 'hi!' } },
});

server.startInBackgroundHandlingStartupErrorsByLoggingAndFailingAllRequests();

const app = express();
app.use(
  cors(),
  express.json(),
  expressMiddleware(server, {
    // The Express request and response objects are passed into
    // your context initialization function
    context: async ({ req, res }) => {
      // Here is where you'll have access to the
      // API Gateway event and Lambda Context
      const { event, context } = serverlessExpress.getCurrentInvoke();
      return {
        expressRequest: req,
        expressResponse: res,
        lambdaEvent: event,
        lambdaContext: context,
      };
    },
  }),
);

exports.handler = serverlessExpress({ app });

Customizing HTTP routing behavior

If you want to customize your HTTP routing behavior, you can couple Apollo Server's Express integration (i.e., expressMiddleware) with the @vendia/serverless-express package. The @vendia/serverless-express library translates between Lambda events and Express requests. Despite their similar names, the Serverless CLI and the @vendia/serverless-express package are unrelated.

You can update your Apollo Server setup to the following to have a fully functioning Lambda server that works in a variety of AWS features:

TypeScript
const { ApolloServer } = require('@apollo/server');
const { expressMiddleware } = require('@apollo/server/express4');
const serverlessExpress = require('@vendia/serverless-express');
const express = require('express');
const cors = require('cors');

const server = new ApolloServer({
  typeDefs: 'type Query { x: ID }',
  resolvers: { Query: { x: () => 'hi!' } },
});

server.startInBackgroundHandlingStartupErrorsByLoggingAndFailingAllRequests();

const app = express();
app.use(cors(), express.json(), expressMiddleware(server));

exports.graphqlHandler = serverlessExpress({ app });

This setup enables you to customize your HTTP behavior as needed.
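
For example, continuing the snippet above, here's a sketch of our own (not from the guide) that mounts the GraphQL middleware at /graphql and adds a separate health-check route. Note that your serverless.yml would also need httpApi events covering any extra paths you add:

TypeScript
const app = express();

// Serve GraphQL only on /graphql so other paths are free for custom behavior.
app.use('/graphql', cors(), express.json(), expressMiddleware(server));

// A plain Express route handled by the same Lambda function.
app.get('/health', (_req, res) => {
  res.status(200).send('ok');
});

exports.graphqlHandler = serverlessExpress({ app });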