Chapter 5. Creating a GraphQL API

You explored the history. You wrote some queries. You created a schema. Now you’re ready to create a fully functioning GraphQL service. This can be done with a range of different technologies, but we’re going to use JavaScript. The techniques that are shared here are fairly universal, so even if the implementation details differ, the overall architecture will be similar no matter which language or framework you choose.

If you are interested in server libraries for other languages, you can check out the many that exist at GraphQL.org.

When the GraphQL spec was released in 2015, it focused on a clear explanation of the query language and type system. It intentionally left details about server implementation more vague to allow developers from a variety of language backgrounds to use what was comfortable for them. The team at Facebook did provide a reference implementation that they built in JavaScript called GraphQL.js. Along with this, they released express-graphql, a simple way to create a GraphQL server with Express, and notably, the first library to help developers accomplish this task.

After our exploration of JavaScript implementations of GraphQL servers, we’ve chosen to use Apollo Server, an open-source solution from the Apollo team. Apollo Server is fairly simple to set up and offers an array of production-ready features including subscription support, file uploads, a data source API for quickly hooking up existing services, and Apollo Engine integration out of the box. It also includes GraphQL Playground for writing queries directly in the browser.

Project Setup

Let’s begin by creating the photo-share-api project as an empty folder on your computer. Remember: you can always visit the Learning GraphQL repo to see the completed project or to see the project running on Glitch. From within that folder, we’ll generate a new npm project using the npm init -y command in your Terminal or Command Prompt. This utility will generate a package.json file and set all of the options as the default, since we used the -y flag.

Next, we’ll install the project dependencies: apollo-server and graphql. We’ll also install nodemon:

npm install apollo-server graphql nodemon

apollo-server and graphql are required to set up an instance of Apollo Server. nodemon will watch files for changes and restart the server when we make changes. This way, we won’t have to stop and restart the server every time we make a change. Let’s add the command for nodemon to the package.json on the scripts key:

  "scripts": {
    "start": "nodemon -e js,json,graphql"
  }

Now every time we run npm start, our index.js file will run and nodemon will watch for changes in any files with a js, json, or graphql extension. Also, we want to create an index.js file at the root of the project. Be sure that the main file in the package.json is pointing to index.js:

  "main": "index.js"

Resolvers

In our discussion of GraphQL so far, we’ve focused a lot on queries. A schema defines the query operations that clients are allowed to make and also how different types are related. A schema describes the data requirements but doesn’t perform the work of getting that data. That work is handled by resolvers.

A resolver is a function that returns data for a particular field. Resolver functions return data in the type and shape specified by the schema. Resolvers can be asynchronous and can fetch or update data from a REST API, database, or any other service.
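For instance, a resolver can return a value directly or return a Promise, and GraphQL will wait for the Promise to settle before responding. A minimal sketch (the totalUsers field and its data are hypothetical, standing in for a real database call):

```javascript
const resolvers = {
  Query: {
    // Synchronous resolver: return the value directly
    totalPhotos: () => 42,

    // Asynchronous resolver: return a Promise; here a resolved
    // Promise stands in for a real database or REST call
    totalUsers: async () => {
      const users = await Promise.resolve([{ name: 'a' }, { name: 'b' }])
      return users.length
    }
  }
}
```

Either style satisfies the schema, as long as the eventual value matches the field's type.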

Let’s take a look at what a resolver might look like for our root query. In our index.js file at the root of the project, let’s add the totalPhotos field to the Query:

const typeDefs = `
    type Query {
        totalPhotos: Int!
    }
`

const resolvers = {
  Query: {
    totalPhotos: () => 42
  }
}

The typeDefs variable is where we define our schema. It’s just a string. Whenever we create a query like totalPhotos, it should be backed by a resolver function of the same name. The type definition describes which type the field should return. The resolver function returns the data of that type from somewhere—in this case, just a static value of 42.

It is also important to note that resolvers must be defined under an object whose name matches the corresponding type in the schema. The totalPhotos field is part of the Query type, so its resolver must be part of the Query object in the resolvers.

We have created initial type definitions for our root query. We’ve also created our first resolver that backs the totalPhotos query field. To create the schema and enable the execution of queries against the schema, we will use Apollo Server:

// 1. Require 'apollo-server'
const { ApolloServer } = require('apollo-server')

const typeDefs = `
	type Query {
		totalPhotos: Int!
	}
`

const resolvers = {
  Query: {
    totalPhotos: () => 42
  }
}

// 2. Create a new instance of the server.
// 3. Send it an object with typeDefs (the schema) and resolvers
const server = new ApolloServer({
  typeDefs,
  resolvers
})


// 4. Call listen on the server to launch the web server
server
  .listen()
  .then(({url}) => console.log(`GraphQL Service running on ${url}`))

After requiring ApolloServer, we’ll create a new instance of the server, sending it an object with two values: typeDefs and resolvers. This is a quick and minimal server setup that still allows us to stand up a powerful GraphQL API. Later in the chapter, we will talk about how to extend the functionality of the server using Express.

At this point, we are ready to execute a query for totalPhotos. Once we run npm start, we should see the GraphQL Playground running on http://localhost:4000. Let’s try the following query:

{
    totalPhotos
}

The returned data for totalPhotos is 42 as expected:

{
  "data": {
    "totalPhotos": 42
  }
}

Resolvers are key to the implementation of GraphQL. Every field must have a corresponding resolver function. The resolver must follow the rules of the schema. It must have the same name as the field that was defined in the schema, and it must return the datatype defined by the schema.

Root Resolvers

As discussed in Chapter 4, GraphQL APIs have root types for Query, Mutation, and Subscription. These types are found at the top level and represent all of the possible entry points into the API. So far, we’ve added the totalPhotos field to the Query type, meaning that our API can query this field.

Let’s add to this by creating a root type for Mutation. The mutation field is called postPhoto and will take in a name and description as arguments of the type String. When the mutation is sent, it must return a Boolean:

const typeDefs = `
    type Query {
        totalPhotos: Int!
    }

    type Mutation {
        postPhoto(name: String! description: String): Boolean!
    }
`

After we create the postPhoto mutation, we need to add a corresponding resolver in the resolvers object:

// 1. A data type to store our photos in memory
var photos = []

const resolvers = {
  Query: {

    // 2. Return the length of the photos array
    totalPhotos: () => photos.length

  },

  // 3. Mutation and postPhoto resolver
  Mutation: {
    postPhoto(parent, args) {
        photos.push(args)
        return true
    }
  }

}

First, we need to create a variable called photos to store the photo details in an array. Later on in this chapter, we will store photos in a database.

Next, we enhance the totalPhotos resolver to return the length of the photos array. Whenever this field is queried, it will return the number of photos that are presently stored in the array.

From here, we add the postPhoto resolver. This time, we are using function arguments with our postPhoto function. The first argument is a reference to the parent object. Sometimes you’ll see this represented as _, root, or obj in documentation. In this case, the parent of the postPhoto resolver is a Mutation. The parent does not currently contain any data that we need to use, but it is always the first argument sent to a resolver. Therefore, we need to add a placeholder parent argument so that we can access the second argument sent to the resolver: the mutation arguments.

The second argument sent to the postPhoto resolver is the GraphQL arguments that were sent to this operation: the name and, optionally, the description. The args variable is an object that contains these two fields: {name,description}. Right now, the arguments represent one photo object, so we push them directly into the photos array.

It’s now time to test the postPhoto mutation in GraphQL Playground, sending a string for the name argument:

mutation newPhoto {
    postPhoto(name: "sample photo")
}

This mutation adds the photo details to the array and returns true. Let’s modify this mutation to use query variables:

mutation newPhoto($name: String!, $description: String) {
    postPhoto(name: $name, description: $description)
}

After variables are added to the mutation, data must be passed to provide the string variables. In the lower-left corner of the Playground, let’s add values for name and description to the Query Variables window:

{
    "name": "sample photo A",
    "description": "A sample photo for our dataset"
}

Type Resolvers

When a GraphQL query, mutation, or subscription is executed, it returns a result that is the same shape of the query. We’ve seen how resolvers can return scalar type values like integers, strings, and Booleans, but resolvers can also return objects.

For our photo app, let’s create a Photo type and an allPhotos query field that will return a list of Photo objects:

const typeDefs = `

  # 1. Add Photo type definition
    type Photo {
      id: ID!
      url: String!
      name: String!
      description: String
    }

  # 2. Return Photo from allPhotos
    type Query {
      totalPhotos: Int!
      allPhotos: [Photo!]!
    }

  # 3. Return the newly posted photo from the mutation
    type Mutation {
      postPhoto(name: String! description: String): Photo!
    }
`

Because we’ve added the Photo object and the allPhotos query to our type definitions, we need to reflect these adjustments in the resolvers. The postPhoto mutation needs to return data in the shape of the Photo type. The query allPhotos needs to return a list of objects that have the same shape as the Photo type:

// 1. A variable that we will increment for unique ids
var _id = 0
var photos = []

const resolvers = {
  Query: {
    totalPhotos: () => photos.length,
    allPhotos: () => photos
  },
  Mutation: {
    postPhoto(parent, args) {

      // 2. Create a new photo, and generate an id
      var newPhoto = {
        id: _id++,
        ...args
      }
      photos.push(newPhoto)

      // 3. Return the new photo
      return newPhoto

    }
  }
}

Because the Photo type requires an ID, we created a variable to store the ID. In the postPhoto resolver, we will generate IDs by incrementing this value. The args variable provides the name and description fields for the photo, but we also need an ID. It is typically up to the server to create variables like identifiers and timestamps. So, when we create a new photo object in the postPhoto resolver, we add the ID field and spread the name and description fields from args into our new photo object.

Instead of returning a Boolean, the mutation returns an object that matches the shape of the Photo type. This object is constructed with the generated ID and the name and description fields that were passed in with data. Additionally, the postPhoto mutation adds photo objects to the photos array. These objects match the shape of the Photo type that we defined in our schema, so we can return the entire array of photos from the allPhotos query.

Warning

Generating unique IDs with an incremented variable is clearly a very unscalable way to create IDs, but will serve our purposes here as a demonstration. In a real app, your ID would likely be generated by the database.

To verify that postPhoto is working correctly, we can adjust the mutation. Because Photo is a type, we need to add a selection set to our mutation:

mutation newPhoto($name: String!, $description: String) {
    postPhoto(name: $name, description: $description) {
        id
        name
        description
    }
}

After adding a few photos via mutations, the following allPhotos query should return an array of all of the Photo objects added:

query listPhotos {
    allPhotos {
        id
        name
        description
    }
}

We also added a non-nullable url field to our photo schema. What happens when we add a url to our selection set?

query listPhotos {
    allPhotos {
        id
        name
        description
        url
    }
}

When url is added to our query’s selection set, an error is displayed: Cannot return null for non-nullable field Photo.url. We never added a url field to our dataset, and we don’t need to store URLs, because they can be generated automatically. Every field in our schema can map to a resolver. All we need to do is add a Photo object to our list of resolvers and define the fields that we want to map to functions. In this case, we want to use a function to help us resolve URLs:

const resolvers = {
  Query: { ... },
  Mutation: { ... },
  Photo: {
    url: parent => `http://yoursite.com/img/${parent.id}.jpg`
  }
}

Because we are going to use a resolver for photo URLs, we’ve added a Photo object to our resolvers. Field resolvers like these that do nothing more than return a property from the parent object are called trivial resolvers. Trivial resolvers are added to the top level of the resolvers object, but they are optional: if you do not specify a resolver for a field, GraphQL falls back to a default resolver that returns the parent’s property of the same name as the field. Our url resolver goes a step further and computes a value that isn’t stored anywhere.
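To make the fallback behavior concrete, here is what the default resolvers look like if we write them out by hand (a sketch; only url actually needs a custom function):

```javascript
const Photo = {
  // These three mimic the default resolver: return the parent's
  // property of the same name as the field
  id: parent => parent.id,
  name: parent => parent.name,
  description: parent => parent.description,

  // url has no stored property, so it must be computed
  url: parent => `http://yoursite.com/img/${parent.id}.jpg`
}
```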

When we select a photo’s url in our query, the corresponding resolver function is invoked. The first argument sent to resolvers is always the parent object. In this case, the parent represents the current Photo object that is being resolved. We’re assuming here that our service handles only JPEG images. Those images are named by their photo ID and can be found on the http://yoursite.com/img/ route. Because the parent is the photo, we can obtain the photo’s ID through this argument and use it to automatically generate a URL for the current photo.

When we define a GraphQL schema, we describe the data requirements of our application. With resolvers, we can powerfully and flexibly fulfill those requirements. Functions give us this power and flexibility. Functions can be asynchronous, can return scalar types or objects, and can return data from various sources. Resolvers are just functions, and every field in our GraphQL schema can map to a resolver.

Using Inputs and Enums

It’s time to introduce an enumeration type, PhotoCategory, and an input type, PostPhotoInput, to our typeDefs:

  enum PhotoCategory {
    SELFIE
    PORTRAIT
    ACTION
    LANDSCAPE
    GRAPHIC
  }

  type Photo {
    ...
    category: PhotoCategory!
  }

  input PostPhotoInput {
    name: String!
    category: PhotoCategory=PORTRAIT
    description: String
  }

  type Mutation {
    postPhoto(input: PostPhotoInput!): Photo!
  }

In Chapter 4, we created these types when we designed the schema for the PhotoShare application. We also added the PhotoCategory enumeration type and added a category field to our photos. When resolving photos, we need to make sure that the photo category—a string that matches the values defined in the enumeration type—is available. We also need to collect a category when users post new photos.

We’ve added a PostPhotoInput type to organize the argument for the postPhoto mutation under a single object. This input type has a category field. Even when a user does not supply a category field as an argument, the default, PORTRAIT, will be used.

For the postPhoto resolver, we need to make some adjustments, as well. The details for the photo, the name, description, and category are now nested within the input field. We need to make sure that we access these values at args.input instead of args:

postPhoto(parent, args) {
    var newPhoto = {
        id: _id++,
        ...args.input
    }
    photos.push(newPhoto)
    return newPhoto
}

Now, we run the mutation with the new input type:

mutation newPhoto($input: PostPhotoInput!) {
  postPhoto(input:$input) {
    id
    name
    url
    description
    category
  }
}

We also need to send the corresponding JSON in the Query Variables panel:

{
  "input": {
    "name": "sample photo A",
    "description": "A sample photo for our dataset"
 }
}

If the category is not supplied, it will default to PORTRAIT. Alternatively, if a value is provided for category, it will be validated against our enumeration type before the operation is even sent to the server. If it’s a valid category, it will be passed to the resolver as an argument.

With input types, we can make passing arguments to mutations more reusable and less error-prone. When combining input types and enums, we can be more specific about the types of inputs that can be supplied to specific fields. Inputs and enums are incredibly valuable and are made even better when you use them together.

Edges and Connections

As we’ve discussed previously, the power of GraphQL comes from the edges: the connections between data points. When standing up a GraphQL server, types typically map to models. Think of these types as being saved in tables of like data. From there, we link types with connections. Let’s explore the kinds of connections that we can use to define the interconnected relationships between types.

One-to-many connections

Users need to access the list of photos they previously posted. We will access this data on a field called postedPhotos that will resolve to a filtered list of photos that the user has posted. Because one User can post many Photos, we call this a one-to-many relationship. Let’s add the User to our typeDefs:

type User {
  githubLogin: ID!
  name: String
  avatar: String
  postedPhotos: [Photo!]!
}

At this point, we’ve created a directed graph. We can traverse from the User type to the Photo type. To have an undirected graph, we need to provide a way back to the User type from the Photo type. Let’s add a postedBy field to the Photo type:

type Photo {
  id: ID!
  url: String!
  name: String!
  description: String
  category: PhotoCategory!
  postedBy: User!
}

By adding the postedBy field, we have created a link back to the User who posted the Photo, creating an undirected graph. This is a one-to-one connection because one photo can only be posted by one User.

Because connections are created using the fields of an object type, they can map to resolver functions. Inside these functions, we can use the details about the parent to help us locate and return the connected data.

Let’s add the postedPhotos and postedBy resolvers to our service:

const resolvers = {
  ...
  Photo: {
    url: parent => `http://yoursite.com/img/${parent.id}.jpg`,
    postedBy: parent => {
      return users.find(u => u.githubLogin === parent.githubUser)
    }
  },
  User: {
    postedPhotos: parent => {
      return photos.filter(p => p.githubUser === parent.githubLogin)
    }
  }
}

In the Photo resolver, we need to add a field for postedBy. Within this resolver, it’s up to us to figure out how to find the connected data. Using the .find() array method, we can obtain the user whose githubLogin matches the githubUser value saved with each photo. The .find() method returns a single user object.

In the User resolver, we retrieve a list of photos posted by that user using the array’s .filter() method. This method returns an array of only those photos that contain a githubUser value that matches the parent user’s githubLogin value. The filter method returns an array of photos.
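These two lookups can be exercised on their own with a little sample data (the arrays below are stand-ins for the users and photos in the repository’s sample dataset):

```javascript
const users = [
  { githubLogin: 'gPlake', name: 'Glen Plake' },
  { githubLogin: 'sSchmidt', name: 'Scot Schmidt' }
]

const photos = [
  { id: '1', name: 'Dropping the Heart Chute', githubUser: 'gPlake' },
  { id: '2', name: 'Enjoying the sunshine', githubUser: 'sSchmidt' },
  { id: '3', name: 'Gunbarrel 25', githubUser: 'sSchmidt' }
]

// Photo.postedBy: exactly one user per photo
const postedBy = photo => users.find(u => u.githubLogin === photo.githubUser)

// User.postedPhotos: possibly many photos per user
const postedPhotos = user => photos.filter(p => p.githubUser === user.githubLogin)
```

Calling postedBy on the first photo returns Glen Plake’s user object, and postedPhotos on Scot Schmidt returns two photos.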

Now let’s try to send the allPhotos query:

query photos {
  allPhotos {
    name
    url
    postedBy {
      name
    }
  }
}

When we query each photo, we are able to query the user who posted that photo. The user object is being located and returned by the resolver. In this example, we select only the name of the user who posted the photo. Given our sample data, the result should return the following JSON:

{
  "data": {
    "allPhotos": [
      {
        "name": "Dropping the Heart Chute",
        "url": "http://yoursite.com/img/1.jpg",
        "postedBy": {
          "name": "Glen Plake"
        }
      },
      {
        "name": "Enjoying the sunshine",
        "url": "http://yoursite.com/img/2.jpg",
        "postedBy": {
          "name": "Scot Schmidt"
        }
      },
      {
        "name": "Gunbarrel 25",
        "url": "http://yoursite.com/img/3.jpg",
        "postedBy": {
          "name": "Scot Schmidt"
        }
      }
    ]
  }
}

We are responsible for connecting the data with resolvers, but as soon as we are able to return that connected data, our clients can begin writing powerful queries. In the next section, we show you some techniques to create many-to-many connections.

Many-to-many

The next feature we want to add to our service is the ability to tag users in photos. This means that a User could be tagged in many different photos, and Photo could have many different users tagged in it. The relationship that photo tags will create between users and photos can be referred to as many-to-many—many users to many photos.

To facilitate the many-to-many relationship, we add the taggedUsers field to Photo and an inPhotos field to User. Let’s modify the typeDefs:

  type User {
    ...
    inPhotos: [Photo!]!
  }

  type Photo {
    ...
    taggedUsers: [User!]!
  }

The taggedUsers field returns a list of users, and the inPhotos field returns a list of photos in which a user appears. To facilitate this many-to-many connection, we need to add a tags array. To test the tagging feature, you need to populate some sample data for tags:

var tags = [
    { "photoID": "1", "userID": "gPlake" },
    { "photoID": "2", "userID": "sSchmidt" },
    { "photoID": "2", "userID": "mHattrup" },
    { "photoID": "2", "userID": "gPlake" }
]

When we have a photo, we must search our datasets to find the users who have been tagged in the photo. When we have a user, it is up to us to find the list of photos in which that user appears. Because our data is currently stored in JavaScript arrays, we will use array methods within the resolvers to find the data:

Photo: {
    ...
    taggedUsers: parent => tags

      // Returns an array of tags that only contain the current photo
      .filter(tag => tag.photoID === parent.id)

      // Converts the array of tags into an array of userIDs
      .map(tag => tag.userID)

      // Converts array of userIDs into an array of user objects
      .map(userID => users.find(u => u.githubLogin === userID))

  },
  User: {
    ...
    inPhotos: parent => tags

      // Returns an array of tags that only contain the current user
      .filter(tag => tag.userID === parent.githubLogin)

      // Converts the array of tags into an array of photoIDs
      .map(tag => tag.photoID)

      // Converts array of photoIDs into an array of photo objects
      .map(photoID => photos.find(p => p.id === photoID))

  }

The taggedUsers field resolver filters out any tags that are not for the current photo and maps the remaining tags to an array of actual User objects. The inPhotos field resolver filters the tags by user and maps the user’s tags to an array of actual Photo objects.
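The filter-then-map pipeline can be tested in isolation with the sample tags (the user and photo records here are stand-ins for the repository’s sample data):

```javascript
const users = [
  { githubLogin: 'gPlake', name: 'Glen Plake' },
  { githubLogin: 'sSchmidt', name: 'Scot Schmidt' },
  { githubLogin: 'mHattrup', name: 'Mike Hattrup' }
]

const photos = [
  { id: '1', name: 'Dropping the Heart Chute' },
  { id: '2', name: 'Enjoying the sunshine' }
]

const tags = [
  { photoID: '1', userID: 'gPlake' },
  { photoID: '2', userID: 'sSchmidt' },
  { photoID: '2', userID: 'mHattrup' },
  { photoID: '2', userID: 'gPlake' }
]

// Photo.taggedUsers: tags -> userIDs -> user objects
const taggedUsers = photo => tags
  .filter(tag => tag.photoID === photo.id)
  .map(tag => tag.userID)
  .map(userID => users.find(u => u.githubLogin === userID))

// User.inPhotos: tags -> photoIDs -> photo objects
const inPhotos = user => tags
  .filter(tag => tag.userID === user.githubLogin)
  .map(tag => tag.photoID)
  .map(photoID => photos.find(p => p.id === photoID))
```

With this data, the second photo has three tagged users, and Glen Plake appears in both photos.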

We can now view which users are tagged in every photo by sending a GraphQL query:

query listPhotos {
  allPhotos {
    url
    taggedUsers {
      name
    }
  }
}

You might have noticed that we have an array for tags, but we do not have a GraphQL type called Tag. GraphQL does not require our data models to exactly match the types in our schema. Our clients can find the tagged users in every photo and the photos that any users are tagged in by querying the User type or the Photo type. They don’t need to query a Tag type: that would just complicate things. We’ve already done the heavy lifting of finding the tagged users or photos in our resolver specifically to make it easy for clients to query this data.

Custom Scalars

As discussed in Chapter 4, GraphQL has a collection of default scalar types that you can use for any fields. Scalars like Int, Float, String, Boolean, and ID are suitable for the majority of situations, but there might be instances for which you need to create a custom scalar type to suit your data requirements.

When we implement a custom scalar, we need to create rules around how the type should be serialized and validated. For example, if we create a DateTime type, we will need to define what should be considered a valid DateTime.

Let’s add this custom DateTime scalar to our typeDefs and use it in the Photo type for the created field. The created field is used to store the date and time at which a specific photo was posted:

const typeDefs = `
  scalar DateTime
  type Photo {
    ...
    created: DateTime!
  }
  ...
`

Every field in our schema needs to map to a resolver. The created field needs to map to a resolver for the DateTime type. We created a custom scalar type for DateTime because we want to parse and validate any fields that use this scalar as JavaScript Date types.

Consider the various ways in which we can represent a date and time as a string. All of these strings represent valid dates:

  • “4/18/2018”

  • “4/18/2018 1:30:00 PM”

  • “Sun Apr 15 2018 12:10:17 GMT-0700 (PDT)”

  • “2018-04-15T19:09:57.308Z”

We can use any of these strings to create datetime objects with JavaScript:

var d = new Date("4/18/2018")
console.log( d.toISOString() )
// "2018-04-18T07:00:00.000Z"

Here, we created a new date object using one format and then converted that datetime string into an ISO-formatted date string.

Anything that the JavaScript Date object does not understand is invalid. You can try to parse the following data:

var d = new Date("Tuesday March")
console.log( d.toString() )
// "Invalid Date"

When we query the photo’s created field, we want to make sure that the value returned by this field contains a string in the ISO date-time format. Whenever a field returns a date value, we serialize that value as an ISO-formatted string:

const serialize = value => new Date(value).toISOString()

The serialize function obtains the field values from our object, and as long as that field contains a date formatted as a JavaScript object or any valid datetime string, it will always be returned by GraphQL in the ISO datetime format.
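A quick check of the serializer against the earlier example strings (the exact output of the first two calls depends on the local timezone, so no result is shown for them):

```javascript
const serialize = value => new Date(value).toISOString()

// All of these come back in the same ISO format
serialize('4/18/2018')
serialize('Sun Apr 15 2018 12:10:17 GMT-0700 (PDT)')
serialize('2018-04-15T19:09:57.308Z') // "2018-04-15T19:09:57.308Z"
```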

When your schema implements a custom scalar, it can be used as an argument in a query. Let’s assume that we created a filter for the allPhotos query. This query would return a list of photos taken after a specific date:

type Query {
  ...
  allPhotos(after: DateTime): [Photo!]!
}

If we had this field, clients could send us a query that contains a DateTime value:

query recentPhotos($after: DateTime) {
  allPhotos(after: $after) {
    name
    url
  }
}

And they would send the $after argument using query variables:

{
  "after": "4/18/2018"
}

We want to make sure that the after argument is parsed into a JavaScript Date object before it is sent to the resolver:

const parseValue = value => new Date(value)

We can use the parseValue function to parse the values of incoming strings that are sent along with queries. Whatever parseValue returns is passed to the resolver arguments:

const resolvers = {
  Query: {
    allPhotos: (parent, args) => {
      args.after // JavaScript Date Object
      ...
    }
  }
}

Custom scalars need to be able to serialize and parse date values. There is one more place that we need to handle date strings. This is when clients add the date string directly to the query itself:

query {
  allPhotos(after: "4/18/2018") {
    name
    url
  }
}

The after argument is not being passed as a query variable. Instead, it has been added directly to the query document. Before we can parse this value, we need to obtain it from the query after it has been parsed into an abstract syntax tree (AST). We use the parseLiteral function to obtain these values from the query document before they are parsed:

const parseLiteral = ast => ast.value

The parseLiteral function is used to obtain the value of the date that was added directly to the query document. In this case, all we need to do is return that value, but if needed, we could take extra parsing steps inside this function.
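For example, if we wanted resolvers to receive a Date object even when the value is inlined in the query document, we could convert inside parseLiteral as well (a sketch of one possible extra step):

```javascript
// Variant: turn the literal's raw string value into a Date before
// it reaches the resolver
const parseLiteral = ast => new Date(ast.value)
```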

We need all three of these functions that we designed to handle DateTime values when we create our custom scalar. Let’s add the resolver for our custom DateTime scalar to our code:

const { GraphQLScalarType } = require('graphql')
...
const resolvers = {
  Query: { ... },
  Mutation: { ... },
  Photo: { ... },
  User: { ... },
  DateTime: new GraphQLScalarType({
      name: 'DateTime',
      description: 'A valid date time value.',
      parseValue: value => new Date(value),
      serialize: value => new Date(value).toISOString(),
      parseLiteral: ast => ast.value
  })
}

We use the GraphQLScalarType object to create resolvers for custom scalars. The DateTime resolver is placed within our list of resolvers. When creating a new scalar type, we need to add the three functions: serialize, parseValue, and parseLiteral, which will handle any fields or arguments that implement the DateTime scalar.

Now, when we add DateTime fields to our selection sets, we can see those dates formatted as ISO date strings:

query listPhotos {
  allPhotos {
    name
    created
  }
}

The only thing left to do is make sure that we add a timestamp to each photo when it is posted. We do this by adding a created field to every photo and timestamping it with the current DateTime using the JavaScript Date object:

postPhoto(parent, args) {
    var newPhoto = {
        id: _id++,
        ...args.input,
        created: new Date()
    }
    photos.push(newPhoto)
    return newPhoto
}

Now, when new photos are posted, they will be timestamped with the date and time that they were created.

apollo-server-express

There might be a scenario where you want to add Apollo Server to an existing app, or you might want to take advantage of Express middleware. In that case, you might consider using apollo-server-express. With Apollo Server Express, you’ll get to use all of the latest features of Apollo Server, but you’ll also be able to set up a more custom configuration. For our purposes, we are going to refactor the server to use Apollo Server Express in order to set up a custom home route, a playground route, and later to allow for images that are posted to be uploaded and saved on the server.

Let’s start by removing apollo-server:

npm remove apollo-server

Then, let’s install Apollo Server Express and Express:

npm install apollo-server-express express

Express

Express is one of the most popular projects in the Node.js ecosystem. It allows you to set up a Node.js web application quickly and efficiently.

From here, we can refactor our index.js file. We’ll start by changing the require statement to include apollo-server-express. Then we’ll include express:

// 1. Require `apollo-server-express` and `express`
const { ApolloServer } = require('apollo-server-express')
const express = require('express')

...

// 2. Call `express()` to create an Express application
var app = express()

const server = new ApolloServer({ typeDefs, resolvers })

// 3. Call `applyMiddleware()` to allow middleware mounted on the same path
server.applyMiddleware({ app })

// 4. Create a home route
app.get('/', (req, res) => res.end('Welcome to the PhotoShare API'))

// 5. Listen on a specific port
app.listen({ port: 4000 }, () =>
  console.log(`GraphQL Server running @ http://localhost:4000${server.graphqlPath}`)
)

By including Express, we can take advantage of all of the middleware functions provided to us by the framework. To incorporate this into the server, we just need to call the express function, call applyMiddleware, and then we can set up a custom route. Now when we visit http://localhost:4000, we should see a page that reads “Welcome to the PhotoShare API”. This is a placeholder for now.

Next, we want to set up a custom route so that the GraphQL Playground runs at http://localhost:4000/playground. We can do so with a helper package from npm, graphql-playground-middleware-express. First, install the package:

npm install graphql-playground-middleware-express

Then require this package at the top of the index file:

const expressPlayground = require('graphql-playground-middleware-express').default

...

app.get('/playground', expressPlayground({ endpoint: '/graphql' }))

This uses Express to create a route for the Playground, so anytime we want to use the Playground, we’ll visit http://localhost:4000/playground.

Now our server is set up with Apollo Server Express, and we have three distinct routes running:

  • / for a homepage

  • /graphql for the GraphQL endpoint

  • /playground for the GraphQL Playground

At this point, we’ll also reduce the length of our index file by moving the typeDefs and resolvers to their own files.

First, we’ll create a file called typeDefs.graphql and place it at the root of the project. This will be just the schema, only text. You can also move the resolvers to their own folder called resolvers. You can place these functions in an index.js file, or you can modularize the resolver files as we do in the repository.

Once complete, you can import the typeDefs and resolvers as shown below. We’ll use the fs module from Node.js to read the typeDefs.graphql file:

const { ApolloServer } = require('apollo-server-express')
const express = require('express')
const expressPlayground = require('graphql-playground-middleware-express').default
const { readFileSync } = require('fs')

const typeDefs = readFileSync('./typeDefs.graphql', 'UTF-8')
const resolvers = require('./resolvers')

var app = express()

const server = new ApolloServer({ typeDefs, resolvers })

server.applyMiddleware({ app })

app.get('/', (req, res) => res.end('Welcome to the PhotoShare API'))
app.get('/playground', expressPlayground({ endpoint: '/graphql' }))

app.listen({ port: 4000 }, () =>
  console.log(`GraphQL Server running at http://localhost:4000${server.graphqlPath}`)
)

Now that we’ve refactored the server, we’re ready to take the next step: integrating a database.

Context

In this section, we take a look at context, which is the location where you can store global values that any resolver can access. Context is a good place to store authentication information, database details, local data caches, and anything else that is needed to resolve a GraphQL operation.

You can directly call REST APIs and databases in your resolvers, but we commonly abstract that logic into an object that we place on the context to enforce separation of concerns and allow for easier refactors later. You can also use context to access REST data from an Apollo Data Source. For more information on that, check out Apollo Data Sources in the documentation.

For our purposes here though, we are going to incorporate context now to address some of our app’s current limitations. First of all, we’re storing data in memory, which is not a very scalable solution. We are also handling IDs sloppily by incrementing these values with each mutation. Instead, we are going to rely on a database to handle data storage and ID generation. Our resolvers will be able to access this database from context.

Installing Mongo

GraphQL does not care what database you use. You can use Postgres, Mongo, SQL Server, Firebase, MySQL, Redis, Elastic—whatever you want. Due to its popularity among the Node.js community, we will use Mongo as the data storage solution for our application.

To get started with MongoDB on a Mac, we will use Homebrew. To install Homebrew, visit https://brew.sh/. After you have installed Homebrew, we will go through the process of installing Mongo with it by running the following commands:

brew install mongo
brew services list
brew services start mongo

After you have successfully started MongoDB, we can start reading and writing data to the local Mongo instance.

Note for Windows Users

If you want to run a local version of MongoDB on Windows, check out http://bit.ly/inst-mdb-windows.

You can also use an online Mongo service like mLab, as pictured in Figure 5-1. You can create a sandbox database for free.

MongoLab
Figure 5-1. mLab

Adding Database to Context

Now it’s time to connect to our database and add the connection to context. We are going to use a package called mongodb to communicate with our database. We can install this by using the command: npm install mongodb.

After we install this package, we will modify the Apollo Server configuration file, the index.js. We need to wait until mongodb successfully connects to our database to start the service. We also will need to pull the database host information from an environment variable called DB_HOST. We’ll make this environment variable accessible in our project in a file called .env at the root of the project.

If you’re using Mongo locally, your URL will look something like this:

DB_HOST=mongodb://localhost:27017/<Your-Database-Name>

If you’re using mLab, your URL will look like this. Be sure to create a user and password for the database and replace <dbuser> and <dbpassword> with those values.

DB_HOST=mongodb://<dbuser>:<dbpassword>@5555.mlab.com:5555/<Your-Database-Name>
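As a sanity check before pasting a value into .env, the pieces of the connection string can be composed with a small helper. This helper is hypothetical (not part of the project), and the host, port, and credential values below are placeholders:

```javascript
// Hypothetical helper for composing a Mongo connection string.
// All values here are placeholders, not real credentials.
const buildMongoUrl = ({ user, password, host, database }) =>
  user && password
    ? `mongodb://${user}:${password}@${host}/${database}`
    : `mongodb://${host}/${database}`

// A local URL has no credentials:
console.log(buildMongoUrl({ host: 'localhost:27017', database: 'photo-share' }))
// mongodb://localhost:27017/photo-share

// An mLab-style URL includes the user and password:
console.log(
  buildMongoUrl({
    user: 'dbuser',
    password: 'dbpassword',
    host: '5555.mlab.com:5555',
    database: 'photo-share'
  })
)
// mongodb://dbuser:dbpassword@5555.mlab.com:5555/photo-share
```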

Let’s connect to the database and build a context object before starting the service. We’ll also use the dotenv package to load the DB_HOST URL:

const { MongoClient } = require('mongodb')
require('dotenv').config()

...

// 1. Create Asynchronous Function
async function start() {
  const app = express()
  const MONGO_DB = process.env.DB_HOST

  const client = await MongoClient.connect(
    MONGO_DB,
    { useNewUrlParser: true }
  )
  const db = client.db()

  const context = { db }

  const server = new ApolloServer({ typeDefs, resolvers, context })

  server.applyMiddleware({ app })

  app.get('/', (req, res) => res.end('Welcome to the PhotoShare API'))

  app.get('/playground', expressPlayground({ endpoint: '/graphql' }))

  app.listen({ port: 4000 }, () =>
    console.log(
      `GraphQL Server running at http://localhost:4000${server.graphqlPath}`
    )
  )
}

// 5. Invoke start when ready to start
start()

With start, we connect to the database. Connecting to a database is an asynchronous process. It will take some time to successfully connect to a database. This asynchronous function allows us to wait for a promise to resolve with the await keyword. The first thing we do in this function is wait for a successful connection to the local or remote database. After we have a database connection, we can add that connection to the context object and start our server.
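The ordering guarantee is the whole point of the async function: nothing after the await runs until the connection promise resolves. Here is a stripped-down model of that sequence using fake steps in place of Mongo and Express (an illustration, not the real startup code):

```javascript
// Stripped-down model of the startup sequence with fake steps
// standing in for MongoClient.connect and app.listen.
const fakeConnect = async order => {
  order.push('connected')
  return { db: () => 'db-handle' }
}

async function start() {
  const order = []
  const client = await fakeConnect(order) // wait for the database first
  const db = client.db()
  order.push('listening')                 // only then start the server
  return { db, order }
}

start().then(({ order }) => console.log(order))
// [ 'connected', 'listening' ]
```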

Now we can modify our query resolvers to return information from our Mongo collections instead of local arrays. We’ll also add queries for totalUsers and allUsers and add them to the schema:

Schema

  type Query {
      ...
      totalUsers: Int!
      allUsers: [User!]!
  }

Resolvers

Query: {

  totalPhotos: (parent, args, { db }) =>
    db.collection('photos')
      .estimatedDocumentCount(),

  allPhotos: (parent, args, { db }) =>
    db.collection('photos')
      .find()
      .toArray(),

  totalUsers: (parent, args, { db }) =>
    db.collection('users')
      .estimatedDocumentCount(),

  allUsers: (parent, args, { db }) =>
    db.collection('users')
      .find()
      .toArray()

}

db.collection('photos') is how you access a Mongo collection. We can count the documents in the collection with .estimatedDocumentCount(). We can list all of the documents in a collection and convert them to an array with .find().toArray(). At this point, the photos collection is empty, but this code will work. The totalPhotos and totalUsers resolvers should return 0. The allPhotos and allUsers resolvers should return empty arrays.
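To see the resolver shape in isolation, here is a sketch that swaps the real database for an in-memory stand-in. Because GraphQL accepts plain values anywhere it accepts promises, the stub can return values synchronously; the stub factory below is an invention for illustration, not part of the project:

```javascript
// In-memory stand-in for the Mongo db object (illustration only).
const makeStubDb = collections => ({
  collection: name => ({
    estimatedDocumentCount: () => collections[name].length,
    find: () => ({ toArray: () => collections[name] })
  })
})

// The same resolver shape as in the chapter.
const Query = {
  totalPhotos: (parent, args, { db }) =>
    db.collection('photos').estimatedDocumentCount(),
  allPhotos: (parent, args, { db }) =>
    db.collection('photos').find().toArray(),
  totalUsers: (parent, args, { db }) =>
    db.collection('users').estimatedDocumentCount(),
  allUsers: (parent, args, { db }) =>
    db.collection('users').find().toArray()
}

// With empty collections, counts are 0 and lists are empty arrays:
const db = makeStubDb({ photos: [], users: [] })
console.log(Query.totalPhotos(null, {}, { db })) // 0
console.log(Query.allPhotos(null, {}, { db }))   // []
```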

To add photos to the database, a user must be logged in. In the next section, we handle authorizing a user with GitHub and posting our first photo to the database.

GitHub Authorization

Authorizing and authenticating users is an important part of any application. There are a number of strategies that we can use to make this happen. Social authorization is a popular one because it leaves a lot of the account management details up to the social provider. It also can help users feel more secure when logging in, because the social provider might be a service with which they’re already comfortable. For our application, we implement a GitHub authorization because it’s highly likely that you already have a GitHub account (and if you don’t, it’s simple and quick to get one!).1

Setting Up GitHub OAuth

Before we get started, you need to set up GitHub authorization for this app to work. To do this, perform the following steps:

  1. Go to https://www.github.com and log in.

  2. Go to Account Settings.

  3. Go to Developer Settings.

  4. Click New OAuth App.

  5. Add the following settings (as shown in Figure 5-2):

    Application name

    Localhost 3000

    Homepage URL

    http://localhost:3000

    Application description

    All authorizations for local GitHub Testing

    Authorization callback URL

    http://localhost:3000

    New OAuth App
    Figure 5-2. New OAuth App
  6. Click Save.

  7. Go to the OAuth Account Page and get your client_id and client_secret, as shown in Figure 5-3.

    OAuth App Settings
    Figure 5-3. OAuth App Settings

With this setup in place, we can now get an auth token and information about the user from GitHub. Specifically, we will need the client_id and client_secret.

The Authorization Process

The process of authorizing a GitHub app happens on the client and the server. In this section, we discuss how to handle the server, and in Chapter 6, we go over the client implementation. As Figure 5-4 illustrates, the full authorization process occurs in the following steps. Bold steps indicate what happens in this chapter on the server:

  1. Client: Asks GitHub for a code using a url with a client_id

  2. User: Allows access to account information on GitHub for client application

  3. GitHub: Sends code to OAuth redirect url: http://localhost:3000?code=XYZ

  4. Client: Sends GraphQL Mutation githubAuth(code) with code

  5. API: Requests a GitHub access_token with credentials: client_id, client_secret, and client_code

  6. GitHub: Responds with access_token that can be used with future info requests

  7. API: Request user info with access_token

  8. GitHub: Responds with user info: name, githubLogin, and avatar

  9. API: Resolves the githubAuth(code) mutation with an AuthPayload, which contains a token and the user

  10. Client: Saves token to send with future GraphQL requests

Authorization Process
Figure 5-4. Authorization Process

To implement the githubAuth mutation, we’ll assume that we have a code. After we use the code to obtain a token, we’ll save the new user information and the token to our local database. We’ll also return that info to the client. The client will save the token locally and send it back to us with each request. We’ll use the token to authorize the user and access their data record.
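Step 3 of the process delivers that code as a query-string parameter on the redirect URL. As an illustration, the WHATWG URL class built into Node.js can extract it (the helper name is our own):

```javascript
// Illustration: pulling the OAuth code out of GitHub's redirect URL
// using the URL class built into Node.js.
const codeFromRedirect = redirectUrl =>
  new URL(redirectUrl).searchParams.get('code')

console.log(codeFromRedirect('http://localhost:3000?code=XYZ')) // XYZ
```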

githubAuth Mutation

We handle authorizing users with a GraphQL mutation, githubAuth. In Chapter 4, we designed a custom payload type for our schema called AuthPayload. Let’s add the AuthPayload and the githubAuth mutation to our typeDefs:

type AuthPayload {
  token: String!
  user: User!
}

type Mutation {
  ...
  githubAuth(code: String!): AuthPayload!
}

The AuthPayload type is used only as a response to authorization mutations. It contains the user who was authorized by the mutation along with a token that they can use to identify themselves during future requests.

Before we program the githubAuth resolver, we need to build two functions to handle GitHub API requests:

const requestGithubToken = credentials =>
    fetch(
        'https://github.com/login/oauth/access_token',
        {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
                Accept: 'application/json'
            },
            body: JSON.stringify(credentials)
        }
    )
    .then(res => res.json())
    .catch(error => {
      throw new Error(JSON.stringify(error))
    })

The requestGithubToken function returns a fetch promise. The credentials are sent to a GitHub API URL in the body of a POST request. The credentials consist of three things: the client_id, client_secret, and code. After it is completed, the GitHub response is then parsed as JSON. We can now use this function to request a GitHub access token with credentials. This and future helper functions can be found in a lib.js file in the repo.
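The body that requestGithubToken posts is simply the serialized credentials object. With placeholder values standing in for real credentials, it looks like this:

```javascript
// Placeholder credentials for illustration; real values come from your
// GitHub OAuth app settings and the code returned by the redirect.
const credentials = {
  client_id: 'YOUR_CLIENT_ID',
  client_secret: 'YOUR_CLIENT_SECRET',
  code: 'XYZ'
}

const body = JSON.stringify(credentials)
console.log(body)
// {"client_id":"YOUR_CLIENT_ID","client_secret":"YOUR_CLIENT_SECRET","code":"XYZ"}
```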

As soon as we have a GitHub token, we need to access information from the current user’s account. Specifically, we want their GitHub login, name, and profile picture. To obtain this information, we need to send another request to the GitHub API along with the access token that we obtained from the previous request:

const requestGithubUserAccount = token =>
    fetch(`https://api.github.com/user?access_token=${token}`)
        .then(res => res.json())
        .catch(error => {
          throw new Error(JSON.stringify(error))
        })

This function also returns a fetch promise. On this GitHub API route, we can access information about the current user so long as we have an access token.

Now, let’s combine both of these requests into a single asynchronous function that we can use to authorize a user with GitHub:

const authorizeWithGithub = async credentials => {
  const { access_token } = await requestGithubToken(credentials)
  const githubUser = await requestGithubUserAccount(access_token)
  return { ...githubUser, access_token }
}

Using async/await here makes it possible to handle multiple asynchronous requests. First, we request the access token and wait for the response. Then, using the access_token, we request the GitHub user account information and wait for a response. After we have the data, we’ll put it all together in a single object.
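Here is the same flow with the two GitHub requests stubbed out (fake data, no network), which makes the ordering easy to see. The stub names and values are our own, not the real helpers:

```javascript
// Stubs standing in for the real GitHub requests (illustration only).
const requestToken = async credentials => ({ access_token: 'TOKEN' })
const requestUserAccount = async token => ({
  login: 'mock-user',
  name: 'Mock User'
})

// Same shape as authorizeWithGithub: token first, then the account.
const authorize = async credentials => {
  const { access_token } = await requestToken(credentials)
  const githubUser = await requestUserAccount(access_token)
  return { ...githubUser, access_token }
}

authorize({ code: 'XYZ' }).then(result => console.log(result))
// { login: 'mock-user', name: 'Mock User', access_token: 'TOKEN' }
```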

We’ve created the helper functions that will support the functionality of the resolver. Now, let’s actually write the resolver to obtain a token and a user account from GitHub:

async githubAuth(parent, { code }, { db }) {
  // 1. Obtain data from GitHub
    let {
      message,
      access_token,
      avatar_url,
      login,
      name
    } = await authorizeWithGithub({
      client_id: <YOUR_CLIENT_ID_HERE>,
      client_secret: <YOUR_CLIENT_SECRET_HERE>,
      code
    })
  // 2. If there is a message, something went wrong
    if (message) {
      throw new Error(message)
    }
  // 3. Package the results into a single object
    let latestUserInfo = {
      name,
      githubLogin: login,
      githubToken: access_token,
      avatar: avatar_url
    }
  // 4. Add or update the record with the new information
    await db
      .collection('users')
      .replaceOne({ githubLogin: login }, latestUserInfo, { upsert: true })
  // 5. Return user data and their token
    return { user: latestUserInfo, token: access_token }

  }

Resolvers can be asynchronous. We can wait for a network response before returning the result of an operation to a client. The githubAuth resolver is asynchronous because we must wait for two responses from GitHub before we’ll have the data that we need to return.

After we have obtained the user’s data from GitHub, we check our local database to see if this user has signed in to our app in the past, which would mean that they already have an account. If the user has an account, we will update their account details with the information that we received from GitHub. They might have changed their name or profile picture since they last logged in. If they do not already have an account, we will add the new user to our collection of users. In both cases, we return the logged in user and the token from this resolver.
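The upsert behavior is worth spelling out. Here is a tiny in-memory model of replaceOne with { upsert: true } (not the Mongo driver, just an illustration of its semantics):

```javascript
// In-memory model of replaceOne(filter, doc, { upsert: true }).
// `accounts` stands in for the users collection.
const replaceOneUpsert = (accounts, filter, doc) => {
  const index = accounts.findIndex(u => u.githubLogin === filter.githubLogin)
  if (index >= 0) {
    accounts[index] = doc // existing account: replace the record
  } else {
    accounts.push(doc)    // first sign-in: insert a new record
  }
}

const accounts = []
replaceOneUpsert(accounts, { githubLogin: 'mock' }, { githubLogin: 'mock', name: 'Old Name' })
replaceOneUpsert(accounts, { githubLogin: 'mock' }, { githubLogin: 'mock', name: 'New Name' })
console.log(accounts.length)   // 1
console.log(accounts[0].name)  // New Name
```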

It’s time to test this authorization process, and to test, you need code. To obtain the code, you’ll need to add your client ID to this URL:

https://github.com/login/oauth/authorize?client_id=YOUR-ID-HERE&scope=user

Paste the URL with your GitHub client_id into the location bar of a new browser window. You will be directed to GitHub, where you will agree to authorize this app. When you authorize the app, GitHub will redirect you back to http://localhost:3000 with a code:

http://localhost:3000?code=XYZ

Here, the code is XYZ. Copy the code from the browser URL and then send it with the githubAuth mutation:

mutation {
  githubAuth(code:"XYZ") {
    token
    user {
      githubLogin
      name
      avatar
    }
  }
}

This mutation will authorize the current user and return a token along with information about that user. Save the token. We’ll need to send it in the header with future requests.

Bad Credentials

When you see the error “Bad Credentials,” the client ID, client secret, or code that was sent to the GitHub API is incorrect. Check the client ID and client secret to be sure; often it’s the code that causes this error.

GitHub codes are good for only a limited time period and can be used only once. If there is a bug in the resolver after the credentials were requested, the code used in the request will no longer be valid. Typically, you can resolve this error by requesting another code from GitHub.

Authenticating Users

To identify yourself in future requests, you will need to send your token with every request in the Authorization header. That token will be used to identify the user by looking up their database record.

The GraphQL Playground has a location where you can add headers to each request. In the bottom corner, there is a tab right next to “Query Variables” called “HTTP Headers.” You can add HTTP Headers to your request using this tab. Just send the headers as JSON:

{
  "Authorization": "<YOUR_TOKEN>"
}

Replace <YOUR_TOKEN> with the token that was returned from the githubAuth mutation. Now, you are sending the key to your identification with each GraphQL request. We need to use that key to find your account and add it to context.

me Query

From here, we want to create a query that refers back to our own user information: the me query. This query returns the current logged-in user based on the token sent in the HTTP headers of the request. If there is not currently a logged-in user, the query will return null.

The process begins when a client sends the me query with a token in the Authorization header. The API captures the Authorization header and uses the token to look up the current user’s record in the database. It also adds the current user account to context. Once it’s in context, every resolver has access to the current user.

It’s up to us to identify the current user and put them in context. Let’s modify the configuration of our server. We’ll need to change the way we build the context object. Instead of an object, we will use a function to handle context:

async function start() {
    const app = express()
    const MONGO_DB = process.env.DB_HOST

    const client = await MongoClient.connect(
      MONGO_DB,
      { useNewUrlParser: true }
    )

    const db = client.db()

    const server = new ApolloServer({
      typeDefs,
      resolvers,
      context: async ({ req }) => {
        const githubToken = req.headers.authorization
        const currentUser = await db.collection('users').findOne({ githubToken })
        return { db, currentUser }
      }
    })

    ...

}

Context can be an object or a function. For our application to work, we need it to be a function so that we can set the context every time there is a request. When context is a function, it is invoked for every GraphQL request. The object that is returned by this function is the context that is sent to the resolver.

In the context function, we can capture the authorization header from the request and parse it for the token. After we have a token, we can use it to look up a user in our database. If we have a user, they will be added to context. If not, the value for user in context will be null.
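The lookup logic can be exercised in isolation with a fake request object and a stand-in for the users collection. The data below is invented, and the stub findOne is synchronous for simplicity:

```javascript
// Stand-in users collection and a synchronous findOne (illustration only).
const usersCollection = [
  { githubLogin: 'mock-user', githubToken: 'TOKEN' }
]
const findOne = ({ githubToken }) =>
  usersCollection.find(u => u.githubToken === githubToken) || null

// Same shape as the context function: read the header, look up the user.
const buildContext = req => {
  const githubToken = req.headers.authorization
  const currentUser = findOne({ githubToken })
  return { currentUser }
}

console.log(buildContext({ headers: { authorization: 'TOKEN' } }).currentUser.githubLogin)
// mock-user
console.log(buildContext({ headers: {} }).currentUser)
// null
```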

With this code in place, it is time to add the me query. First, we need to modify our typeDefs:

type Query {
  me: User
  ...
}

The me query returns a nullable user. It will be null if a current authorized user is not found. Let’s add the resolver for the me query:

const resolvers = {
  Query: {
    me: (parent, args, { currentUser }) => currentUser,
    ...
  }
}

We’ve already done the heavy lifting of looking up the user based on their token. At this point, you’ll simply return the currentUser object from context. Again, this will be null if there is not a user.

If the correct token has been added to the HTTP authorization header, you can send a request to obtain details about yourself using the me query:

query currentUser {
  me {
    githubLogin
    name
    avatar
  }
}

When you run this query, you will be identified. A good test to confirm that everything is correct is to try to run this query without the authorization header or with an incorrect token. Given a wrong token or missing header, you should see that the me query is null.

postPhoto mutation

To post a photo to our application, a user must be logged in. The postPhoto mutation can determine who is logged in by checking context. Let’s modify the postPhoto mutation:

async postPhoto(parent, args, { db, currentUser }) {

  // 1. If there is not a user in context, throw an error
  if (!currentUser) {
      throw new Error('only an authorized user can post a photo')
  }

  // 2. Save the current user's id with the photo
  const newPhoto = {
      ...args.input,
      userID: currentUser.githubLogin,
      created: new Date()
  }

  // 3. Insert the new photo, capture the id that the database created
  const { insertedIds } = await db.collection('photos').insert(newPhoto)
  newPhoto.id = insertedIds[0]

  return newPhoto

}

The postPhoto mutation has undergone several changes in order to save a new photo to the database. First, the currentUser is obtained from context. If this value is null, we throw an error and prevent the postPhoto mutation from executing any further. To post a photo, the user must send the correct token in the Authorization header.

Next, we add the current user’s ID to the newPhoto object. Now, we can save the new photo record to the photos collection in the database. Mongo creates a unique identifier for each document that it saves. When the new photo is added, we can obtain that identifier by using the insertedIds array. Before we return the photo, we need to make sure that it has a unique identifier.

We also need to change the Photo resolvers:

const resolvers = {
  ...
  Photo: {
    id: parent => parent.id || parent._id,
    url: parent => `/img/photos/${parent._id}.jpg`,
    postedBy: (parent, args, { db }) =>
      db.collection('users').findOne({ githubLogin: parent.userID })
  }
}

First, if the client asks for a photo ID, we need to make sure it receives the correct value. If the parent photo does not already have an ID, we can assume that a database record has been created for the parent photo and it will have an ID saved under the field _id. We need to make sure that the ID field of the photo resolves to the database ID.

Next, let’s assume that we are serving these photos from the same web server. We return the local route to the photo. This local route is created using the photo’s ID.

Finally, we need to modify the postedBy resolver to look up the user who posted the photo in the database. We can use the userID that is saved with the parent photo to look up that user’s record in the database. The photo’s userID should match the user’s githubLogin, so the .findOne() method should return one user record: the user who posted the photo.
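Because the id and url resolvers are pure functions of the parent photo, they are easy to check with sample documents. The _id value below is made up:

```javascript
// The id and url field resolvers, checked with invented documents.
const Photo = {
  id: parent => parent.id || parent._id,
  url: parent => `/img/photos/${parent._id}.jpg`
}

// A document fresh from Mongo carries only _id:
const fromDb = { _id: 'abc123', userID: 'mock-user' }
console.log(Photo.id(fromDb))  // abc123
console.log(Photo.url(fromDb)) // /img/photos/abc123.jpg

// A photo that already carries an id keeps it:
console.log(Photo.id({ id: '42', _id: 'abc123' })) // 42
```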

With our authorization header in place, we should be able to post new photos to the GraphQL service:

mutation post($input: PostPhotoInput!) {
  postPhoto(input: $input) {
    id
    url
    postedBy {
      name
      avatar
    }
  }
}

After we post the photo, we can ask for its id and url, along with the name and the avatar of the user who posted the photo.

Add fake users mutation

To test our application with users other than ourselves, we are going to add a mutation that will allow us to populate the database with fake users from the randomuser.me API.

We can handle this with a mutation called addFakeUsers. Let’s first add this to the schema:

type Mutation {
  addFakeUsers(count: Int = 1): [User!]!
  ...
}

Notice that the count argument takes in the number of fake users to add and returns a list of users. This list of users contains the accounts of the fake users added by this mutation. By default, we add one user at a time, but you can add more by sending this mutation a different count:

addFakeUsers: async (root, {count}, {db}) => {

    var randomUserApi = `https://randomuser.me/api/?results=${count}`

    var { results } = await fetch(randomUserApi)
      .then(res => res.json())

    var users = results.map(r => ({
      githubLogin: r.login.username,
      name: `${r.name.first} ${r.name.last}`,
      avatar: r.picture.thumbnail,
      githubToken: r.login.sha1
    }))

    await db.collection('users').insert(users)

    return users
}

To test adding new users, first we want to obtain some fake data from randomuser.me. addFakeUsers is an asynchronous function that we can use to fetch that data. Then, we serialize the data from randomuser.me, creating user objects that match our schema. Next, we add these new users to the database and return the list of new users.
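The serialization step is a plain mapping. Given one sample result in the shape that randomuser.me returns (the values here are invented), it produces a user matching our schema:

```javascript
// One invented result in the shape returned by randomuser.me.
const sampleResults = [
  {
    login: { username: 'mockuser', sha1: 'FAKE_SHA' },
    name: { first: 'Mock', last: 'User' },
    picture: { thumbnail: 'http://example.com/thumb.jpg' }
  }
]

// The same mapping as the addFakeUsers resolver.
const fakeUsers = sampleResults.map(r => ({
  githubLogin: r.login.username,
  name: `${r.name.first} ${r.name.last}`,
  avatar: r.picture.thumbnail,
  githubToken: r.login.sha1
}))

console.log(fakeUsers[0].name)        // Mock User
console.log(fakeUsers[0].githubLogin) // mockuser
```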

Now, we can populate the database with a mutation:

mutation {
  addFakeUsers(count: 3) {
    name
  }
}

This mutation adds three fake users to the database. Now that we have fake users, we also want to sign in with a fake user account via a mutation. Let’s add a fakeUserAuth to our Mutation type:

type Mutation {
  fakeUserAuth(githubLogin: ID!): AuthPayload!
  ...
}

Next, we need to add a resolver that returns a token that we can use to authorize our fake users:

async fakeUserAuth (parent, { githubLogin }, { db }) {

  var user = await db.collection('users').findOne({ githubLogin })

  if (!user) {
      throw new Error(`Cannot find user with githubLogin "${githubLogin}"`)
  }

  return {
      token: user.githubToken,
      user
  }

}

The fakeUserAuth resolver obtains the githubLogin from the mutation’s arguments and uses it to find that user in the database. After it finds that user, the user’s token and user account are returned in the shape of our AuthPayload type.
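The same look-up-or-throw logic can be sketched against an in-memory list (invented data, synchronous for simplicity):

```javascript
// In-memory stand-in for the users collection (invented data).
const storedUsers = [
  { githubLogin: 'jDoe', githubToken: 'FAKE_TOKEN', name: 'J. Doe' }
]

// Same shape as the fakeUserAuth resolver's core logic.
const fakeUserAuth = githubLogin => {
  const user = storedUsers.find(u => u.githubLogin === githubLogin)
  if (!user) {
    throw new Error(`Cannot find user with githubLogin "${githubLogin}"`)
  }
  return { token: user.githubToken, user }
}

console.log(fakeUserAuth('jDoe').token) // FAKE_TOKEN
```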

Now we can authenticate fake users by sending a mutation:

mutation {
  fakeUserAuth(githubLogin:"jDoe") {
    token
  }
}

Add the returned token to the authorization HTTP header to post new photos as this fake user.

Conclusion

Well, you did it. You built a GraphQL server. You started by getting a thorough understanding of resolvers. You handled queries and mutations. You added GitHub authorization. You identified the current user via an access token that is added to the header of every request. And finally, you modified the postPhoto mutation to read the current user from the resolver’s context, allowing authorized users to post photos.

If you want to run a completed version of the service that we constructed in this chapter, you can find it in this book’s repository. The app needs to know which database and which GitHub OAuth credentials to use. You can supply these values by creating a new file named .env in the project root:

DB_HOST=<YOUR_MONGODB_HOST>
CLIENT_ID=<YOUR_GITHUB_CLIENT_ID>
CLIENT_SECRET=<YOUR_GITHUB_CLIENT_SECRET>

With the .env file in place, you are ready to install the dependencies (yarn or npm install) and run the service (yarn start or npm start). Once the service is running on port 4000, you can send queries to it using the Playground at http://localhost:4000/playground. You can request a GitHub code by clicking the link found at http://localhost:4000. If you want to access the GraphQL endpoint from another client, you can find it at http://localhost:4000/graphql.

In Chapter 7, we show you how to modify this API to handle subscriptions and file uploads. But before we do that, we need to show you how clients will consume this API, so in Chapter 6, we look at how to construct a frontend that can work with this service.

1 You can create an account at https://www.github.com.
