So far, you have designed a schema, constructed a GraphQL API, and implemented a client using Apollo Client. We’ve taken one complete full-stack iteration with GraphQL and developed an understanding of how GraphQL APIs are consumed by clients. Now it’s time to prepare our GraphQL APIs and clients for production.
To take your new skills into production, you are going to need to meet the requirements of your current applications. Our current applications likely allow for file transfer between the client and the server. Our current applications might use WebSockets to push real-time data updates to our clients. Our current APIs are secure and protect against malicious clients. To work with GraphQL in production, we need to be able to meet these requirements.
Also we need to think about our development teams. You might be working with a full-stack team, but more often than not, teams are split into frontend developers and backend developers. How can all of your developers work efficiently from different specializations within our GraphQL stack?
And what about the sheer scope of your current code base? At present, you likely have many different services and APIs running in production and probably have neither the time nor the resources to rebuild everything from the ground up with GraphQL.
In this chapter, we address all of these requirements and concerns. We begin by taking two more iterations in the PhotoShare API. First, we incorporate subscriptions and real-time data transport. Second, we allow users to post photos by implementing a solution for file transport with GraphQL. After we’ve completed these iterations on the PhotoShare application, we will look at ways to secure our GraphQL API to guard against malicious client queries. We wrap up this chapter by examining ways in which teams can work together to effectively migrate to GraphQL.
Real-time updates are an essential feature for modern web and mobile applications. The current technology that allows for real-time data transport between websites and mobile applications is WebSockets. You can use the WebSocket protocol to open full-duplex, two-way communication channels over a TCP socket. This means that web pages and applications can send and receive data over a single connection. This technology allows updates to be pushed from the server directly to the web page in real time.
Up to this point, we have implemented GraphQL queries and mutations using the HTTP protocol. HTTP gives us a way to send and receive data between the client and the server, but it does not help us connect to a server and listen for state changes. Before WebSockets were introduced, the only way to listen for state changes on the server was to repeatedly send HTTP requests to the server to determine whether anything had changed. We saw how to easily implement polling with the query tag in Chapter 6.
But if we really want to take full advantage of the new web, GraphQL has to be able to support real-time data transport over WebSockets in addition to HTTP requests. The solution is subscriptions. Let’s take a look at how we can implement subscriptions in GraphQL.
In GraphQL, you use subscriptions to listen to your API for specific data changes. Apollo Server already supports subscriptions. It wraps a couple of npm packages that are routinely used to set up WebSockets in GraphQL applications: graphql-subscriptions and subscriptions-transport-ws. The graphql-subscriptions package provides an implementation of the publisher/subscriber design pattern, PubSub. PubSub is essential for publishing data changes that client subscribers can consume. subscriptions-transport-ws is a WebSocket server and client that allows subscriptions to be transported over WebSockets. Apollo Server automatically incorporates both of these packages to support subscriptions out of the box.
By default, Apollo Server sets up a WebSocket at ws://localhost:4000. If you use the simple server configuration that we demonstrated at the beginning of Chapter 5, you're using a configuration that supports WebSockets out of the box.
Because we are working with apollo-server-express, we'll have to take a few steps to make subscriptions work. Locate the index.js file in the photo-share-api project and import the createServer function from the http module:
const { createServer } = require('http')
Apollo Server will automatically set up subscription support, but to do so, it needs an HTTP server. We'll use createServer to create one. Locate the code at the bottom of the start function where the GraphQL service is started on a specific port with app.listen(...). Replace this code with the following:
const httpServer = createServer(app)
server.installSubscriptionHandlers(httpServer)

httpServer.listen({ port: 4000 }, () =>
    console.log(`GraphQL Server running at localhost:4000${server.graphqlPath}`)
)
First, we create a new httpServer using the Express app instance. The httpServer is ready to handle all of the HTTP requests sent to it based upon our current Express configuration. We also have a server instance to which we can add WebSocket support. The next line of code, server.installSubscriptionHandlers(httpServer), is what makes the WebSockets work. This is where Apollo Server adds the necessary handlers to support subscriptions over WebSockets. In addition to HTTP requests, our backend is now ready to receive requests at ws://localhost:4000/graphql.
Now that we have a server that is ready to support subscriptions, it’s time to implement them.
We want to know when any of our users post a photo. This is a good use case for a subscription. Just like everything else in GraphQL, to implement subscriptions we need to start with the schema. Let's add a subscription type to the schema just below the definition for the Mutation type:
type Subscription {
    newPhoto: Photo!
}
The newPhoto subscription will be used to push data to the client when photos are added. We send a subscription with the following GraphQL query language operation:
subscription {
    newPhoto {
        url
        category
        postedBy {
            githubLogin
            avatar
        }
    }
}
This subscription will push data about new photos to the client. Just like a Query or Mutation, GraphQL allows us to request data about specific fields with selection sets. With this subscription, every time there is a new photo we will receive its url and category along with the githubLogin and avatar of the user who posted it.
When a subscription is sent to our service, the connection remains open. It is listening for data changes. Every photo that is added will push data to the subscriber. If you set up a subscription with GraphQL Playground, you will notice that the Play button will change to a red Stop button.
The Stop button means that the subscription is currently open and listening for data. When you press the Stop button, the subscription will be unsubscribed. It will stop listening for data changes.
It is finally time to take a look at the postPhoto mutation: the mutation that adds new photos to the database. We want to publish new photo details to our subscription from this mutation:
async postPhoto(root, args, { db, currentUser, pubsub }) {

    if (!currentUser) {
        throw new Error('only an authorized user can post a photo')
    }

    const newPhoto = {
        ...args.input,
        userID: currentUser.githubLogin,
        created: new Date()
    }

    const { insertedIds } = await db.collection('photos').insert(newPhoto)
    newPhoto.id = insertedIds[0]

    pubsub.publish('photo-added', { newPhoto })

    return newPhoto
}
This resolver expects that an instance of pubsub has been added to context. We'll do that in the next step. pubsub is a mechanism that can publish events and send data to our subscription resolver. It's like the Node.js EventEmitter: you can use it to publish events and send data to every handler that has subscribed to an event. Here, we publish a photo-added event just after we insert a new photo into the database. The details of the new photo are passed as the second argument of the pubsub.publish method. This will pass details about the new photo to every handler that has subscribed to photo-added events.
Next, let's add the Subscription resolver that will be used to subscribe to photo-added events:
const resolvers = {

    ...

    Subscription: {
        newPhoto: {
            subscribe: (parent, args, { pubsub }) =>
                pubsub.asyncIterator('photo-added')
        }
    }
}
The Subscription resolver is a root resolver. It should be added directly to the resolvers object right next to the Query and Mutation resolvers. Within the Subscription resolver, we need to define resolvers for each field. Because we defined the newPhoto field in our schema, we need to make sure a newPhoto resolver exists in our resolvers.

Unlike Query or Mutation resolvers, Subscription resolvers contain a subscribe method. The subscribe method receives the parent, args, and context just like any other resolver function. Inside of this method, we subscribe to specific events. In this case, we use pubsub.asyncIterator to subscribe to photo-added events. Any time a photo-added event is raised by pubsub, it will be passed through this newPhoto subscription.
The example files in the GitHub repository break the resolvers up into several files. The preceding code can be found in resolvers/Subscriptions.js.
The postPhoto resolver and the newPhoto subscription resolver both expect there to be an instance of pubsub in context. Let's modify the context to include pubsub. Locate the index.js file and make the following changes:
const { ApolloServer, PubSub } = require('apollo-server-express')

...

async function start() {

    ...

    const pubsub = new PubSub()
    const server = new ApolloServer({
        typeDefs,
        resolvers,
        context: async ({ req, connection }) => {
            const githubToken = req
                ? req.headers.authorization
                : connection.context.Authorization
            const currentUser = await db.collection('users').findOne({ githubToken })
            return { db, currentUser, pubsub }
        }
    })

    ...

}
First, we need to import the PubSub constructor from the apollo-server-express package. We use this constructor to create a pubsub instance and add it to context.

You may have also noticed that we changed the context function. Queries and mutations still use HTTP. When we send either of these operations to the GraphQL service, the request argument, req, is sent to the context handler. However, when the operation is a subscription, there is no HTTP request, so the req argument is null. Information for subscriptions is instead passed when the client connects to the WebSocket. In this case, the WebSocket connection argument is sent to the context function instead. When we have a subscription, we'll have to pass authorization details through the connection's context, not the HTTP request headers.
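The token lookup inside the context function boils down to a single branch that can be isolated and tested on its own. A sketch, with argument shapes following the convention used above (req for HTTP operations, connection for subscriptions):

```javascript
// Pick the auth token from the HTTP request headers when one is
// present (queries and mutations); otherwise fall back to the
// WebSocket connection context (subscriptions).
const getAuthToken = ({ req, connection }) =>
    req
        ? req.headers.authorization
        : connection.context.Authorization

// HTTP operation: the token arrives as a request header
const httpToken = getAuthToken({
    req: { headers: { authorization: 'token http-abc' } }
})

// Subscription operation: the token arrives via the WebSocket connection
const wsToken = getAuthToken({
    connection: { context: { Authorization: 'token ws-xyz' } }
})
```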
Now we are ready to try out our new subscription. Open the Playground and start a subscription:

subscription {
    newPhoto {
        name
        url
        postedBy {
            name
        }
    }
}
Once the subscription is running, open a new Playground tab and run the postPhoto
mutation. Every time you run this mutation, you will see your new photo data sent to the subscription.
Assuming that you completed the challenge in the preceding sidebar, the PhotoShare server supports subscriptions for Photos and Users. In this next section, we subscribe to the newUser subscription and immediately display any new users on the page. Before we can get started, we need to set up Apollo Client to handle subscriptions.
Subscriptions are used over WebSockets. To enable WebSockets in Apollo Client, we need to install a few additional packages:

npm install apollo-link-ws apollo-utilities subscriptions-transport-ws
From here, we want to add a WebSocket link to the Apollo Client configuration. Locate the src/index.js file in the photo-share-client project and add the following imports:
import {
    InMemoryCache,
    HttpLink,
    ApolloLink,
    ApolloClient,
    split
} from 'apollo-boost'
import { WebSocketLink } from 'apollo-link-ws'
import { getMainDefinition } from 'apollo-utilities'
Notice that we've imported split from apollo-boost. We will use this to split GraphQL operations between HTTP requests and WebSockets. If the operation is a mutation or a query, Apollo Client will send an HTTP request. If the operation is a subscription, the client will connect to the WebSocket.
Under the hood of Apollo Client, network requests are managed with ApolloLink. In the current app, this has been responsible for sending HTTP requests to the GraphQL service. Any time we send an operation with Apollo Client, that operation is sent to an Apollo Link to handle the network request. We can also use an Apollo Link to handle networking over WebSockets.
We'll need to set up two types of links to support WebSockets: an HttpLink and a WebSocketLink:
const httpLink = new HttpLink({ uri: 'http://localhost:4000/graphql' })

const wsLink = new WebSocketLink({
    uri: `ws://localhost:4000/graphql`,
    options: { reconnect: true }
})
The httpLink can be used to send HTTP requests over the network to http://localhost:4000/graphql, and the wsLink can be used to connect to ws://localhost:4000/graphql and consume data over WebSockets.
Links are composable, which means that they can be chained together to build custom pipelines for our GraphQL operations. In addition to sending an operation to a single ApolloLink, we can send an operation through a chain of reusable links, where each link in the chain can manipulate the operation before it reaches the last link, which handles the request and returns a result.
Let's create a link chain with the httpLink by adding a custom Apollo Link that is responsible for adding the authorization header to the operation:
const authLink = new ApolloLink((operation, forward) => {
    operation.setContext(context => ({
        headers: {
            ...context.headers,
            authorization: localStorage.getItem('token')
        }
    }))
    return forward(operation)
})

const httpAuthLink = authLink.concat(httpLink)
The httpLink is concatenated with the authLink to handle user authorization for HTTP requests. Keep in mind that this .concat function is not the same function found in JavaScript for concatenating arrays; it is a special function that concatenates Apollo Links. Once concatenated, we rename the link httpAuthLink to describe its behavior more clearly. When an operation is sent to this link, it is first passed to the authLink, where the authorization header is added to the operation before it is forwarded to the httpLink to handle the network request. If you are familiar with middleware in Express or Redux, this pattern should feel familiar.
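Conceptually, concat composes two links into a pipeline in which the first link can modify the operation before forwarding it to the second. A minimal sketch of that idea, with plain functions standing in for Apollo Links (this is not Apollo's implementation, just the shape of the middleware pattern):

```javascript
// Each "link" here is a plain function. A middleware link receives
// (operation, forward); a terminating link receives only the
// operation and produces the result.
const concat = (first, second) =>
    operation => first(operation, forwarded => second(forwarded))

// Terminating link: pretend to perform the network request
const httpLink = operation => ({ sent: operation })

// Middleware link: attach an authorization header, then forward
const authLink = (operation, forward) =>
    forward({
        ...operation,
        headers: { ...operation.headers, authorization: 'token abc' }
    })

const httpAuthLink = concat(authLink, httpLink)
const result = httpAuthLink({ query: '{ totalUsers }', headers: {} })
// result.sent now carries the authorization header added by authLink
```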
Now we need to tell the client which link to use. This is where split comes in handy. The split function returns one of two Apollo Links based upon a predicate. The first argument of the split function is the predicate: a function that returns true or false. The second argument represents the link to return when the predicate returns true, and the third argument represents the link to return when the predicate returns false.
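The behavior of split can be sketched as a tiny routing function. The stand-in links and the operationType field below are simplifications for illustration (the real predicate inspects a parsed query AST, as shown next):

```javascript
// A sketch of split: route each operation to the left link when the
// predicate returns true, otherwise to the right link.
const split = (predicate, left, right) =>
    operation => (predicate(operation) ? left : right)(operation)

// Stand-in links that report which transport handled the operation
const wsLink = () => 'websocket'
const httpLink = () => 'http'

const link = split(op => op.operationType === 'subscription', wsLink, httpLink)

const subscriptionTransport = link({ operationType: 'subscription' }) // 'websocket'
const queryTransport = link({ operationType: 'query' })               // 'http'
```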
Let's implement a split link that checks whether our operation is a subscription. If it is a subscription, we will use the wsLink to handle the network; otherwise, we will use the httpAuthLink:
const link = split(
    ({ query }) => {
        const { kind, operation } = getMainDefinition(query)
        return kind === 'OperationDefinition' && operation === 'subscription'
    },
    wsLink,
    httpAuthLink
)
The first argument is the predicate function. It checks the operation's query AST using the getMainDefinition function. If the operation is a subscription, the predicate returns true. When the predicate returns true, the link returns the wsLink; when it returns false, the link returns the httpAuthLink.
Finally, we need to change our Apollo Client configuration to use our custom links by passing it the link and the cache:
const client = new ApolloClient({ cache, link })
Now our client is ready to handle subscriptions. In the next section, we will send our first subscription operation using Apollo Client.
On the client, we can listen for new users by creating a constant called LISTEN_FOR_USERS. This contains a string with our subscription, which will return a new user's githubLogin, name, and avatar:
const LISTEN_FOR_USERS = gql`
    subscription {
        newUser {
            githubLogin
            name
            avatar
        }
    }
`
Then, we can use the <Subscription /> component to listen for new users:
<Subscription subscription={LISTEN_FOR_USERS}>
    {({ data, loading }) =>
        loading ?
            <p>loading a new user...</p> :
            <div>
                <img src={data.newUser.avatar} alt="" />
                <h2>{data.newUser.name}</h2>
            </div>
    }
</Subscription>
As you can see here, the <Subscription /> component works like the <Mutation /> or <Query /> components. You send it the subscription, and when a new user is received, their data is passed to a function. The problem with using this component in our app is that the newUser subscription passes one new user at a time, so the preceding component would show only the last new user that was created.
What we want to do is listen for new users when the PhotoShare client starts, and when we have a new user, we add them to our current local cache. When the cache is updated, the UI will follow, so there is no need to change anything about the UI for new users.
Let's modify the App component. First, we convert it to a class component so that we can take advantage of React's component lifecycle. When the component mounts, we start listening for new users via our subscription. When the App component unmounts, we stop listening by invoking the subscription's unsubscribe method:
import { withApollo } from 'react-apollo'

...

class App extends Component {

    componentDidMount() {
        let { client } = this.props
        this.listenForUsers = client
            .subscribe({ query: LISTEN_FOR_USERS })
            .subscribe(({ data: { newUser } }) => {
                const data = client.readQuery({ query: ROOT_QUERY })
                data.totalUsers += 1
                data.allUsers = [
                    ...data.allUsers,
                    newUser
                ]
                client.writeQuery({ query: ROOT_QUERY, data })
            })
    }

    componentWillUnmount() {
        this.listenForUsers.unsubscribe()
    }

    render() {
        ...
    }

}

export default withApollo(App)
When we export the <App /> component, we use the withApollo function to pass the client to the App via props. When the component mounts, we use the client to start listening for new users. When the component unmounts, we stop the subscription using the unsubscribe method.

The subscription is created using client.subscribe().subscribe(). The first subscribe function is an Apollo Client method that sends the subscription operation to our service. It returns an observer object. The second subscribe function is a method of the observer object that is used to subscribe handlers to the observer. The handlers are invoked every time the subscription pushes data to the client. In the preceding code, we've added a handler that captures the information about each new user and adds them directly to the Apollo cache using writeQuery.
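This two-step subscribe call follows the observable pattern. A minimal sketch of an observable with subscribe/unsubscribe semantics (an illustration of the pattern, not Apollo's implementation):

```javascript
// A toy observable: subscribe registers a handler and returns an
// object whose unsubscribe() removes that handler.
const createObservable = () => {
    const handlers = new Set()
    return {
        subscribe(handler) {
            handlers.add(handler)
            return { unsubscribe: () => handlers.delete(handler) }
        },
        push(data) {
            handlers.forEach(handler => handler(data))
        }
    }
}

const users = createObservable()
const seen = []
const subscription = users.subscribe(({ data: { newUser } }) => seen.push(newUser))

users.push({ data: { newUser: 'moon' } }) // handler runs: seen = ['moon']
subscription.unsubscribe()
users.push({ data: { newUser: 'sol' } })  // no handlers left: seen unchanged
```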
Now, when new users are added, they are instantly pushed into our local cache, which immediately updates the UI. Because the subscription adds every new user to the list in real time, there is no longer a need to update the local cache from src/Users.js. Within this file, you should remove the updateUserCache function as well as the mutation's update property. You can see a completed version of the App component at the book's website.
There's one last step to creating our PhotoShare application: actually uploading a photo. In order to upload a file with GraphQL, we need to modify both the API and the client so that they can handle multipart/form-data, the encoding type that is required to pass a file with a POST body over the internet. We are going to take an additional step that will allow us to pass a file as a GraphQL argument so that the file itself can be handled directly within the resolver.
To help us with this implementation, we are going to use two npm packages: apollo-upload-client and apollo-upload-server. Both of these packages are designed to pass files from a web browser over HTTP. apollo-upload-client is responsible for capturing the file in the browser and passing it along to the server with the operation. apollo-upload-server is designed to handle files passed to the server from apollo-upload-client; it captures the file and maps it to the appropriate query argument before sending it to the resolver as an argument.
Apollo Server automatically incorporates apollo-upload-server. There is no need to install that npm package in your API project because it's already there and working. The GraphQL API needs to be ready to accept an uploaded file. Apollo Server provides an Upload custom scalar type that can be used to capture the file stream, mimetype, and encoding of an uploaded file.
We'll start with the schema first, adding a custom scalar to our type definitions. In the schema file, we'll add the Upload scalar:

scalar Upload

input PostPhotoInput {
    name: String!
    category: Photo_Category = PORTRAIT
    description: String
    file: Upload!
}
The Upload type will allow us to pass the contents of a file with our PostPhotoInput. This means that we will receive the file itself in the resolver. The Upload type contains information about the file, including an upload stream that we can use to save the file. Let's use this stream in the postPhoto mutation. Add the following code to the bottom of the postPhoto mutation found in resolvers/Mutation.js:
const { uploadStream } = require('../lib')
const path = require('path')

...

async postPhoto(root, args, { db, currentUser, pubsub }) {

    ...

    var toPath = path.join(
        __dirname, '..', 'assets', 'photos', `${photo.id}.jpg`
    )

    await uploadStream(args.input.file, toPath)

    pubsub.publish('photo-added', { newPhoto: photo })

    return photo
}
In this example, the uploadStream function returns a promise that resolves when the upload is complete. The file argument contains the upload stream, which can be piped to a write stream and saved locally to the assets/photos directory. Each newly posted photo is named based upon its unique identifier. For brevity, we handle only JPEG images in this example.
If we want to serve these photo files from the same API, we will have to add some middleware to our Express application that allows us to serve static JPEG images. In the index.js file where we set up our Apollo Server, we can add the express.static middleware, which serves local static files over a route:
const path = require('path')

...

app.use(
    '/img/photos',
    express.static(path.join(__dirname, 'assets', 'photos'))
)
This bit of code serves the static files found in assets/photos at the /img/photos route.
With that, our server is in place and can now handle photo uploads. It’s time to transition to the client side where we’ll create a form that can manage photo uploads.
In a real Node.js application, you would typically save user uploads to a cloud-based file storage service. The previous example uses an uploadStream function to upload the file to a local directory, which limits the scalability of this sample application. Services such as AWS, Google Cloud, or Cloudinary can handle large volumes of file uploads from distributed applications.
Now, let's handle the photos on the client. First, to display the photos, we'll need to add the allPhotos field to our ROOT_QUERY. Modify the following query in the src/App.js file:

export const ROOT_QUERY = gql`
    query allUsers {
        totalUsers
        totalPhotos
        allUsers { ...userInfo }
        me { ...userInfo }
        allPhotos {
            id
            name
            url
        }
    }

    fragment userInfo on User {
        githubLogin
        name
        avatar
    }
`
Now, when the website loads, we will receive the id, name, and url of every photo stored in the database. We can use this information to display the photos. Let's create a Photos component that will be used to display each photo:
import React from 'react'
import { Query } from 'react-apollo'
import { ROOT_QUERY } from './App'

const Photos = () =>
    <Query query={ROOT_QUERY}>
        {({ loading, data }) => loading ?
            <p>loading...</p> :
            data.allPhotos.map(photo =>
                <img
                    key={photo.id}
                    src={photo.url}
                    alt={photo.name}
                    width={350}
                />
            )
        }
    </Query>

export default Photos
Remember, the Query component takes in the ROOT_QUERY as a property. Then, we use the render prop pattern to display all of the photos when loading is complete. For each photo in the data.allPhotos array, we add a new img element with metadata pulled from each photo object, including the photo.url and photo.name.
When we add this code to the App component, our photos will be displayed. But first, let's create another component: a PostPhoto component that will contain the form:
import React, { Component } from 'react'

export default class PostPhoto extends Component {

    state = {
        name: '',
        description: '',
        category: 'PORTRAIT',
        file: ''
    }

    postPhoto = (mutation) => {
        console.log('todo: post photo')
        console.log(this.state)
    }

    render() {
        return (
            <form onSubmit={e => e.preventDefault()}
                style={{
                    display: 'flex',
                    flexDirection: 'column',
                    justifyContent: 'flex-start',
                    alignItems: 'flex-start'
                }}>

                <h1>Post a Photo</h1>

                <input type="text"
                    style={{ margin: '10px' }}
                    placeholder="photo name..."
                    value={this.state.name}
                    onChange={({ target }) => this.setState({ name: target.value })}
                />

                <textarea type="text"
                    style={{ margin: '10px' }}
                    placeholder="photo description..."
                    value={this.state.description}
                    onChange={({ target }) => this.setState({ description: target.value })}
                />

                <select value={this.state.category}
                    style={{ margin: '10px' }}
                    onChange={({ target }) => this.setState({ category: target.value })}>
                    <option value="PORTRAIT">PORTRAIT</option>
                    <option value="LANDSCAPE">LANDSCAPE</option>
                    <option value="ACTION">ACTION</option>
                    <option value="GRAPHIC">GRAPHIC</option>
                </select>

                <input type="file"
                    style={{ margin: '10px' }}
                    accept="image/jpeg"
                    onChange={({ target }) => this.setState({
                        file: target.files && target.files.length
                            ? target.files[0]
                            : ''
                    })}
                />

                <div style={{ margin: '10px' }}>
                    <button onClick={() => this.postPhoto()}>Post Photo</button>
                    <button onClick={() => this.props.history.goBack()}>Cancel</button>
                </div>

            </form>
        )
    }
}
The PostPhoto component is simply a form. This form uses input elements for the name, description, category, and the file itself. In React, we call this a controlled form because each input element is linked to a state variable. Any time an input's value changes, the state of the PostPhoto component changes with it.

We submit photos by pressing the "Post Photo" button. The file input accepts a JPEG and sets the state for file. This state field represents the actual file, not just text. For brevity, we have not added any form validation to this component.
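Stripped of React, the controlled-input pattern reduces to: on every change event, copy the input's value into state. A sketch of that state update, with setState standing in for React's merge behavior:

```javascript
// Mimic this.setState: merge a partial update into the current state.
const setState = (state, update) => ({ ...state, ...update })

let state = { name: '', description: '', category: 'PORTRAIT', file: '' }

// What the name input's onChange handler does on each change event
const onNameChange = ({ target }) => {
    state = setState(state, { name: target.value })
}

onNameChange({ target: { value: 'Jackson Hole' } })
// state.name is now 'Jackson Hole'; the other fields are unchanged
```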
It's time to add our new components to the App component. When we do so, we will make sure that the home route displays our Users and Photos. We will also add a /newPhoto route that can be used to display the form:
import React, { Fragment } from 'react'
import { Switch, Route, BrowserRouter } from 'react-router-dom'
import Users from './Users'
import Photos from './Photos'
import PostPhoto from './PostPhoto'
import AuthorizedUser from './AuthorizedUser'

const App = () =>
    <BrowserRouter>
        <Switch>
            <Route exact path="/" component={() =>
                <Fragment>
                    <AuthorizedUser />
                    <Users />
                    <Photos />
                </Fragment>
            } />
            <Route path="/newPhoto" component={PostPhoto} />
            <Route component={({ location }) =>
                <h1>"{location.pathname}" not found</h1>
            } />
        </Switch>
    </BrowserRouter>

export default App
The <Switch> component allows us to render one route at a time. When the URL contains the home route, "/", we display a component that contains the AuthorizedUser, Users, and Photos components. The Fragment is used in React when we want to display sibling components without having to wrap them in an extra div element. When the route is "/newPhoto", we display the new photo form. And when the route is not recognized, we display an h1 element that lets the user know that we can't find the route they provided.
Only authorized users can post photos, so we'll append a "Post Photo" NavLink to the AuthorizedUser component. Clicking this link will cause the PostPhoto component to render:
import { withRouter, NavLink } from 'react-router-dom'

...

class AuthorizedUser extends Component {

    ...

    render() {
        return (
            <Query query={ME_QUERY}>
                {({ loading, data }) => data.me ?
                    <div>
                        <img src={data.me.avatar_url} width={48} height={48} alt="" />
                        <h1>{data.me.name}</h1>
                        <button onClick={this.logout}>logout</button>
                        <NavLink to="/newPhoto">Post Photo</NavLink>
                    </div> :
                    ...
Here we import the <NavLink> component. When the Post Photo link is clicked, the user is sent to the /newPhoto route.
At this point, the app navigation should work. A user is allowed to navigate between screens, and when posting a photo, we should see the necessary input data logged in the console. It is time for us to take that post data, including the file, and send it with a mutation.
First, let's install apollo-upload-client:

npm install apollo-upload-client
We are going to replace the current HTTP link with one supplied by apollo-upload-client. This link supports multipart/form-data requests that contain upload files. To create this link, we use the createUploadLink function:
import { createUploadLink } from 'apollo-upload-client'

...

const httpLink = createUploadLink({
    uri: 'http://localhost:4000/graphql'
})
We've replaced the old HTTP link with a new one created using the createUploadLink function. It looks fairly similar to the HTTP link: it includes the API route as the uri.
It's time to add the postPhoto mutation to the PostPhoto form:
import React, { Component } from 'react'
import { Mutation } from 'react-apollo'
import { gql } from 'apollo-boost'
import { ROOT_QUERY } from './App'

const POST_PHOTO_MUTATION = gql`
    mutation postPhoto($input: PostPhotoInput!) {
        postPhoto(input: $input) {
            id
            name
            url
        }
    }
`
The POST_PHOTO_MUTATION is our mutation parsed as an AST and ready to be sent to the server. We import the ROOT_QUERY because we'll need it when it is time to update the local cache with the new photo returned by the mutation.
To add the mutation, we encapsulate the Post Photo button element within the Mutation component:
<div style={{ margin: '10px' }}>
    <Mutation mutation={POST_PHOTO_MUTATION} update={updatePhotos}>
        {mutation =>
            <button onClick={() => this.postPhoto(mutation)}>
                Post Photo
            </button>
        }
    </Mutation>
    <button onClick={() => this.props.history.goBack()}>
        Cancel
    </button>
</div>
The Mutation component passes the mutation as a function. When the button is clicked, we pass the mutation function to postPhoto so that it can be used to change the photo data. Once the mutation is complete, the updatePhotos function is called to update the local cache.

Next, let's actually send the mutation:
postPhoto = async (mutation) => {
    await mutation({
        variables: {
            input: this.state
        }
    }).catch(console.error)
    this.props.history.replace('/')
}
The mutation function returns a promise. Once it is complete, we use React Router to navigate the user back to the home page by replacing the current route via the history property. When the mutation completes, we also need to capture the data it returns in order to update the local cache:
```javascript
const updatePhotos = (cache, { data: { postPhoto } }) => {
  const data = cache.readQuery({ query: ROOT_QUERY })
  data.allPhotos = [postPhoto, ...data.allPhotos]
  cache.writeQuery({ query: ROOT_QUERY, data })
}
```
The updatePhotos function handles the cache update. We will read the photos from the cache using the ROOT_QUERY. Then, we’ll add the new photo to the cache using writeQuery. This little bit of maintenance will make sure that our local data is in sync.
At this point, we are ready to post new photos. Go ahead and give it a shot.
We’ve taken a closer look at how queries, mutations, and subscriptions are handled on the client side. When you’re using React Apollo, you can take advantage of the <Query>
, <Mutation>
, and <Subscription>
components to help you connect the data from your GraphQL service to your user interface.
Now that the application is working, we’ll add one more layer to handle security.
Your GraphQL service provides a lot of freedom and flexibility to your clients. They have the flexibility to query data from multiple sources in a single request. They also have the ability to request large amounts of related, or connected, data in a single request. Left unchecked, your clients have the capability of requesting too much from your service in a single request. Not only will the strain of large queries affect server performance, it could also take your service down entirely. Some clients might do this unwittingly or unintentionally, whereas other clients might have more malicious intent. Either way, you need to put some safeguards in place and monitor your server’s performance in order to protect against large or malicious queries.
In this next section, we cover some of the options available to improve the security of your GraphQL service.
A request timeout is a first defense against large or malicious queries. A request timeout allows only a certain amount of time to process each request. This means that requests of your service need to be completed within a specific time frame. Request timeouts are used not only for GraphQL services, they are used for all sorts of services and processes across the internet. You might have already implemented these timeouts for your Representational State Transfer (REST) API to guard against lengthy requests with too much POST data.
You can add an overall request timeout to the express server by setting
the timeout
key. In the following, we’ve added a timeout of five seconds to guard
against troublesome queries:
```javascript
const httpServer = createServer(app)
server.installSubscriptionHandlers(httpServer)
httpServer.timeout = 5000
```
Additionally, you can set timeouts for overall queries or individual resolvers. The trick to implementing timeouts for queries or resolvers is to save the start time for each query or resolver and validate it against your preferred timeout. You can record the start time for each request in context:
```javascript
const context = async ({ request }) => {
  ...
  return {
    ...
    timestamp: performance.now()
  }
}
```
Now each of the resolvers will know when the query began and can throw an error if the query takes too long.
Another simple safeguard that you can place against large or malicious queries is to limit the amount of data that can be returned by each query. You can return a specific number of records, or a page of data, by allowing your queries to specify how many records to return.
For example, recall in Chapter 4 that we designed a schema that could handle data paging. But what if a client requested an extremely large page of data? Here’s an example of a client doing just that:
```graphql
query allPhotos {
  allPhotos(first: 99999) {
    name
    url
    postedBy {
      name
      avatar
    }
  }
}
```
You can guard against these types of large requests by simply setting a limit for a page of data. For example, you could set a limit for 100 photos per query in your GraphQL server. That limit can be enforced in the query resolver by checking an argument:
```javascript
allPhotos: (root, data, context) => {
  if (data.first > 100) {
    throw new Error('Only 100 photos can be requested at a time')
  }
}
```
When you have a large number of records that can be requested, it is always a good idea to implement data paging. You can implement data paging simply by providing the number of records that should be returned by a query.
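To make that concrete, here is a small sketch (the `paginate` helper, its default page size, and the `first`/`start` argument names are assumptions for illustration) of a resolver that slices one page out of a list while enforcing the 100-record cap:

```javascript
// Slice one page out of a list, capping the page size at 100.
const paginate = (items, { first = 25, start = 0 } = {}) => {
  if (first > 100) {
    throw new Error('Only 100 photos can be requested at a time')
  }
  return items.slice(start, start + first)
}

// The resolver simply delegates to the helper; here the photo list is
// assumed to be available on context.
const resolvers = {
  Query: {
    allPhotos: (root, args, context) => paginate(context.photos, args)
  }
}
```

A client can then walk through the data page by page by incrementing `start`, and no single query can pull the entire collection at once.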
One of the benefits GraphQL provides the client is the ability to query connected data. For example, in our photo API, we can write a query that can deliver information about a photo, who posted it, and all the other photos posted by that photographer, all in one request:
```graphql
query getPhoto($id:ID!) {
  Photo(id:$id) {
    name
    url
    postedBy {
      name
      avatar
      postedPhotos {
        name
        url
      }
    }
  }
}
```
This is a really nice feature that can improve network performance
within your applications. We can say that the preceding query has a depth of
3 because it queries the photo itself along with two connected fields:
postedBy
and postedPhotos
. The root query has a depth of 0, the
Photo
field has a depth of 1, the postedBy
field has a depth of 2 and
the postedPhotos
field has a depth of 3.
Clients can take advantage of this feature. Consider the following query:
```graphql
query getPhoto($id:ID!) {
  Photo(id:$id) {
    name
    url
    postedBy {
      name
      avatar
      postedPhotos {
        name
        url
        taggedUsers {
          name
          avatar
          postedPhotos {
            name
            url
          }
        }
      }
    }
  }
}
```
We’ve added two more levels to this query’s depth: the taggedUsers
in
all of the photos posted by the photographer of the original photo, and
the postedPhotos
of all of the taggedUsers
in all of the photos
posted by the photographer of the original photo. This means that if I posted
the original photo, this query would also resolve to all of the photos I’ve
posted, all of the users tagged in those photos, and all of the photos
posted by all of those tagged users. That’s a lot of data to request. It
is also a lot of work to be performed by your resolvers. Query depth
grows exponentially and can easily get out of hand.
You can implement a query depth limit for your GraphQL services to prevent deep queries from taking your service down. If we had set a query depth limit of 3, the first query would have been within the limit, whereas the second query would not because it has a query depth of 5.
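To see how such a check works, here is a toy depth calculator over a nested field tree (our own simplified representation for illustration; real validators like graphql-depth-limit walk the actual GraphQL AST):

```javascript
// Depth of a selection: count 1 for each field that has its own
// selection set, along the deepest path; leaf fields add nothing.
const depth = fields =>
  fields.reduce((max, field) =>
    Math.max(max, field.fields ? 1 + depth(field.fields) : 0), 0)

// The first getPhoto query from above has a depth of 3:
const leaf = name => ({ name })
const getPhoto = [{
  name: 'Photo',
  fields: [leaf('name'), leaf('url'), {
    name: 'postedBy',
    fields: [leaf('name'), leaf('avatar'), {
      name: 'postedPhotos',
      fields: [leaf('name'), leaf('url')]
    }]
  }]
}]

console.log(depth(getPhoto)) // 3
```

A depth-limit rule simply runs a calculation like this during validation and rejects the operation before any resolver is invoked.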
Query depth limitations are typically implemented by parsing the query’s
AST and determining how deeply nested the selection sets are within
these objects. There are npm packages like graphql-depth-limit
that
can assist with this task:
npm install graphql-depth-limit
After you install it, you can add a validation rule to your GraphQL server
configuration using the depthLimit
function:
```javascript
const depthLimit = require('graphql-depth-limit')

...

const server = new ApolloServer({
  typeDefs,
  resolvers,
  validationRules: [depthLimit(5)],
  context: async ({ req, connection }) => {
    ...
  }
})
```
Here, we have set the query depth limit to 5, which means that we provided our clients with the capability of writing queries that can go five selection sets deep. If they go any deeper, the GraphQL server will prevent the query from executing and return an error.
Another measurement that can help you identify troublesome queries is query complexity. There are some client queries that might not run too deep but can still be expensive due to the amount of fields that are queried. Consider this query:
```graphql
query everything($id:ID!) {
  totalUsers
  Photo(id:$id) {
    name
    url
  }
  allUsers {
    id
    name
    avatar
    postedPhotos {
      name
      url
    }
    inPhotos {
      name
      url
      taggedUsers {
        id
      }
    }
  }
}
```
The everything
query does not exceed our query depth limit, but it’s
still pretty expensive due to the number of fields that are being
queried. Remember, each field maps to a resolver
function that needs to
be invoked.
Query complexity assigns a complexity value to each field and then totals the overall complexity of any query. You can set an overall limit that defines the maximum complexity available for any given query. When implementing query complexity you can identify your expensive resolvers and give those fields a higher complexity value.
There are several npm packages available to assist with the
implementation of query complexity limits. Let’s take a look at how we
could implement query complexity in our service using
graphql-validation-complexity
:
npm install graphql-validation-complexity
GraphQL validation complexity has a set of default rules out of the box for determining query complexity. It assigns a value of 1 to each scalar field. If that field is in a list, it multiplies the value by a factor of 10.
For example, let’s look at how graphql-validation-complexity
would
score the everything
query:
```graphql
query everything($id:ID!) {
  totalUsers          # complexity 1
  Photo(id:$id) {
    name              # complexity 1
    url               # complexity 1
  }
  allUsers {
    id                # complexity 10
    name              # complexity 10
    avatar            # complexity 10
    postedPhotos {
      name            # complexity 100
      url             # complexity 100
    }
    inPhotos {
      name            # complexity 100
      url             # complexity 100
      taggedUsers {
        id            # complexity 1000
      }
    }
  }
}                     # total complexity 1433
```
By default, graphql-validation-complexity assigns each field a value and multiplies that value by a factor of 10 for any list. In this example, totalUsers represents a single integer field and is assigned a complexity of 1. The fields queried on a single photo have the same value. Notice that the fields queried in the allUsers list are assigned a value of 10 because they are within a list. Every list field is multiplied by 10, so a list within a list is assigned a value of 100. Because taggedUsers is a list within the inPhotos list, which is within the allUsers list, the value of each taggedUsers field is 10 × 10 × 10, or 1000.
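We can check that arithmetic with a toy cost function over a nested field tree (our own simplified representation, not the package’s real API; for simplicity, object fields themselves are scored at 0 here and only scalar leaves contribute):

```javascript
// Each scalar leaf costs the current factor; entering a list field
// multiplies the factor by 10 for everything inside it.
const cost = (fields, factor = 1) =>
  fields.reduce((total, field) => {
    const childFactor = field.list ? factor * 10 : factor
    return total + (field.fields
      ? cost(field.fields, childFactor)
      : factor)
  }, 0)

// The everything query, with list fields marked:
const everything = [
  { name: 'totalUsers' },
  { name: 'Photo', fields: [{ name: 'name' }, { name: 'url' }] },
  { name: 'allUsers', list: true, fields: [
    { name: 'id' }, { name: 'name' }, { name: 'avatar' },
    { name: 'postedPhotos', list: true,
      fields: [{ name: 'name' }, { name: 'url' }] },
    { name: 'inPhotos', list: true, fields: [
      { name: 'name' }, { name: 'url' },
      { name: 'taggedUsers', list: true, fields: [{ name: 'id' }] }
    ] }
  ] }
]

console.log(cost(everything)) // 1433
```

The running total matches the annotations above: 3 for the scalar and Photo fields, 30 for the allUsers scalars, 200 each for the nested photo lists, and 1000 for the triply nested taggedUsers id.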
We can prevent this particular query from executing by setting an overall query complexity limit of 1000:
```javascript
const { createComplexityLimitRule } = require('graphql-validation-complexity')

...

const options = {
  ...
  validationRules: [
    depthLimit(5),
    createComplexityLimitRule(1000, {
      onCost: cost => console.log('query cost: ', cost)
    })
  ]
}
```
In this example, we set the maximum complexity limit to 1000 with the use of the
createComplexityLimitRule
found in the graphql-validation-complexity
package. We’ve also implemented the onCost
function, which will be
invoked with the total cost of each query as soon as it is calculated.
The preceding query would not be allowed to execute under these
circumstances because it exceeds a maximum complexity of 1000.
Most query complexity packages allow you to set your own rules. We could
change the complexity values assigned to scalars, objects, and lists
with the graphql-validation-complexity
package. It is also possible to
set custom complexity values for any field that we deem very complicated
or expensive.
It is not recommended to simply implement security features and hope for the best. Any good security and performance strategy needs metrics. You need a way to monitor your GraphQL service so that you can identify your popular queries and see where your performance bottlenecks occur.
You can use Apollo Engine to monitor your GraphQL service, but it’s more than just a monitoring tool. Apollo Engine is a robust cloud service that provides insights into your GraphQL layer so that you can run the service in production with confidence. It monitors the GraphQL operations sent to your services and provides a detailed live report available online at https://engine.apollographql.com, which you can use to identify your most popular queries, monitor execution time, monitor errors, and help find bottlenecks. It also provides tools for schema management including validation.
Apollo Engine is already included in your Apollo Server 2.0 implementation. With just one line of code, you can run Engine anywhere that Apollo Server runs, including serverless environments and on the edge. All you need to do is turn it on by setting the engine
key to true
:
```javascript
const server = new ApolloServer({
  typeDefs,
  resolvers,
  engine: true
})
```
The next step is to make sure that you have an environment variable called ENGINE_API_KEY
set to your Apollo Engine API key. Head to https://engine.apollographql.com to create an account and generate your key.
In order to publish your application to Apollo Engine, you will need to install the Apollo CLI tools:
npm install -g apollo
Once installed you can use the CLI to publish your app:
apollo schema:publish --key=<YOUR ENGINE API KEY> --endpoint=http://localhost:4000/graphql
Don’t forget to add your ENGINE_API_KEY
to the environment variables as well.
Now when we run the PhotoShare GraphQL API, all operations sent to the GraphQL service will be monitored. You can view an activity report at the Engine website. This activity report can be used to help find and alleviate bottlenecks. Additionally, Apollo Engine will improve the performance and response time of our queries as well as monitor the performance of our service.
Throughout this book, you’ve learned about graph theory; you’ve written queries; you’ve designed schemas; you’ve set up GraphQL servers and explored GraphQL client solutions. The foundation is in place, so you can use what you need to improve your applications with GraphQL. In this section, we share some concepts and resources that will further support your future GraphQL applications.
Our PhotoShare app is a prime example of a Greenfield project. When you are working on your own projects, you might not have the luxury of starting from scratch. The flexibility of GraphQL allows you to start incorporating GraphQL incrementally. There’s no reason that you need to tear down everything and start over to benefit from GraphQL’s features. You can start slow by applying the following ideas:
Instead of rebuilding every REST endpoint, use GraphQL as a gateway and make a fetch request for that data on the server inside of a resolver. Your service can also cache the data sent from REST to improve query response time.
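As a hedged sketch of that gateway idea (the REST route and response shape are assumptions; substitute your own service’s endpoints), a resolver can simply fetch from the existing API and return the parsed JSON:

```javascript
// Build a resolver that proxies an existing REST endpoint. The fetch
// function is injected so the resolver can be tested without a live
// endpoint; in production, pass the real fetch.
const makePhotoResolver = (fetchFn, url) => () =>
  fetchFn(url).then(res => res.json())

const resolvers = {
  Query: {
    // Uses the global fetch available in Node 18+; the route below is
    // a hypothetical existing REST endpoint.
    allPhotos: makePhotoResolver((...args) => fetch(...args),
      'http://localhost:3000/api/photos')
  }
}
```

Because the REST call happens inside the resolver, clients see only the GraphQL schema, and the old endpoint can be retired later without any client changes.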
Robust client solutions are great, but implementing
them at the start might be too much setup. To get started simply, use
graphql-request
and make a request in the same place you use fetch
for a REST API. This approach will get you started, get you hooked on
GraphQL, and will likely lead you to a more comprehensive client solution when
you’re ready to optimize for performance. There is no reason you
cannot fetch data from four REST endpoints and one GraphQL service within
the same app. Everything does not have to migrate to GraphQL all at the
same time.
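Under the hood, a GraphQL request is just an HTTP POST with a JSON body containing the query and variables, so a minimal hand-rolled helper (the endpoint below is hypothetical; graphql-request does essentially this for you) fits anywhere you already use fetch:

```javascript
// POST a query and variables to a GraphQL endpoint and parse the
// JSON response. fetchFn defaults to the global fetch (Node 18+),
// but can be swapped out for testing.
const queryGraphQL = (url, query, variables = {},
  fetchFn = (...args) => fetch(...args)) =>
  fetchFn(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, variables })
  }).then(res => res.json())

// Hypothetical usage against a local service:
// queryGraphQL('http://localhost:4000/graphql', '{ totalPhotos }')
//   .then(({ data }) => console.log(data.totalPhotos))
```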
Instead of rebuilding your entire site, pick a single component or page and drive the data to that particular feature using GraphQL. Keep everything else about your site in place while you monitor the experience of moving a single component.
Instead of expanding REST, build a GraphQL endpoint for your new service or feature. You can host a GraphQL endpoint on the same server as your REST endpoints. Express does not care if it is routing a request to a REST function or a GraphQL resolver. Every time a task requires a new REST endpoint, add that feature to your GraphQL service, instead.
The next time there is a task to modify a REST endpoint or create a custom endpoint for some data, don’t! Instead, take the time to section off this one endpoint and update it to GraphQL. You can slowly move your entire REST API this way.
Moving to GraphQL slowly can allow you to benefit from features right away without the pains associated with starting from nothing. Start with what you have, and you can make your transition to GraphQL a smooth and gradual one.
You’re at a meeting for a new web project. Members of different frontend and backend teams are represented. After the meeting, someone might come up with some specifications, but these documents are often lengthy and underutilized. Frontend and backend teams start coding, and without clear guidelines, the project is delivered behind schedule and is different than everyone’s initial expectations.
Problems with web projects usually stem from a lack of communication or miscommunication about what should be built. Schemas provide clarity and communication, which is why many projects practice schema-first development. Instead of getting bogged down by domain-specific implementation details, disparate teams can work together on solidifying a schema before building anything.
Schemas are an agreement between the frontend and the backend teams and define all of the data relationships for an application. When teams sign off on a schema, they can work independently to fulfill the schema. Working to serve the schema yields better results because there is clarity in type definitions. Frontend teams know exactly which queries to make to load data into user interfaces. Backend teams know exactly what the data needs are and how to support them. Schema-first development provides a clear blueprint, and the teams can build the project with more consensus and less stress.
Mocking is an important part of schema-first development. Once the frontend team has the schema, they can use it to start developing components immediately. The following code is all that is needed to stand up a mock GraphQL service running on http://localhost:4000:
```javascript
const { ApolloServer } = require('apollo-server')
const { readFileSync } = require('fs')

const typeDefs = readFileSync('./typeDefs.graphql', 'UTF-8')
const server = new ApolloServer({ typeDefs, mocks: true })

server.listen()
```
Assuming you’ve provided the typeDefs.graphql
file designed during the schema-first process, you can begin developing UI components that send query, mutation, and subscription operations to the mock GraphQL service while the back-end team implements the real service.
Mocks work out of the box by providing default values for each scalar type. Everywhere a field is supposed to resolve to a string, you’ll see “Hello World” as the data.
You can customize the data that is returned by a mock server. This makes it possible to return data that looks more like the real data. This is an important feature that will assist with the task of styling your user interface components:
```javascript
const { ApolloServer, MockList } = require('apollo-server')
const { readFileSync } = require('fs')

const typeDefs = readFileSync('./typeDefs.graphql', 'UTF-8')
const resolvers = {}

const mocks = {
  Query: () => ({
    totalPhotos: () => 42,
    allPhotos: () => new MockList([5, 10])
  }),
  Photo: () => ({
    name: 'sample photo',
    description: null
  })
}

const server = new ApolloServer({ typeDefs, resolvers, mocks })

server.listen({ port: 4000 }, () =>
  console.log(`Mock Photo Share GraphQL Service`)
)
```
The preceding code adds mocks for the totalPhotos and allPhotos fields along with the Photo type. Every time we query totalPhotos, the number 42 will be returned. When we query the allPhotos field, we will receive somewhere between 5 and 10 photos. The MockList constructor is included in apollo-server and is used to generate list types with specific lengths. Every time a Photo type is resolved by the service, the name of the photo is “sample photo” and the description is null. You can create pretty robust mocks in conjunction with packages like faker or casual. These npm packages provide all sorts of fake data that can be used to build realistic mocks.
To learn more about mocking an Apollo Server, check out Apollo’s documentation.
There are a number of conferences and meetups that focus on GraphQL content.
GraphQL Finland, for example, is a community-organized GraphQL conference in Helsinki, Finland.
You’ll also find GraphQL content at almost any development conference, particularly those that focus on JavaScript.
If you’re looking for events near you, there are also GraphQL meetups in cities all over the world. If there’s not one near you, you could be the one to start a local group!
GraphQL is popular because it’s a wonderful technology. It also is popular due to the fervent support of the GraphQL community. The community is quite welcoming, and there are a number of ways of getting involved and staying on top of the latest changes.
The knowledge that you’ve gained about GraphQL will serve as a good foundation when you’re exploring other libraries and tools. If you’re looking to take the next steps to expand your skills, here are some other topics to check out:
Schema stitching allows you to create a single GraphQL schema from multiple GraphQL APIs. Apollo provides some great tooling around the composition of remote schemas. Learn more about how to take on a project like this in the Apollo documentation.
Throughout the book, we’ve used GraphQL Playground and GraphQL Request: two tools from the Prisma team. Prisma is a tool that turns your existing database into a GraphQL API, no matter what database you’re using. While a GraphQL API stands between the client and the database, Prisma stands between a GraphQL API and the database. Prisma is open-source, so you can deploy your Prisma service in production using any cloud provider.
The team has also released a related tool called Prisma Cloud, a hosting platform for Prisma services. Instead of having to set up your own hosting, you can use Prisma Cloud to manage all of the DevOps concerns for you.
Another new player in the ecosystem is Amazon Web Services. It has released a new product built on GraphQL and Apollo tools to simplify the process of setting up a GraphQL service. With AppSync, you create a schema and then connect to your data source. AppSync updates the data in real-time and even handles offline data changes.
Another great way to get involved is to join one of the many GraphQL community Slack channels. Not only can you stay connected to the latest news in GraphQL, but you can ask questions sometimes answered by the creators of these technologies.
You can also share your knowledge with others in these growing communities from wherever you are.
As you continue your journey with GraphQL, you can become more involved in
the community as a contributor, as well. Right now, there are high-profile projects like React Apollo, Prisma, and GraphQL itself that
have open issues with help wanted
tags. Your help with one of these
issues could help many others! There are also many opportunities to
contribute new tools to the ecosystem.
Though change is inevitable, the ground under our feet as GraphQL API developers is very solid. At the heart of everything we do, we’re creating a schema and writing resolvers to fulfill the data requirements of the schema. No matter how many tools come out to shake things up in the ecosystem, we can rely on the stability of the query language itself. On the API timeline, GraphQL is very new, but the future is very bright. So, let’s all go build something amazing.