From 1b3e75dcbeb07c250de25f1bfbe492881a0e3f82 Mon Sep 17 00:00:00 2001 From: Peggy Rayzis Date: Wed, 14 Nov 2018 20:22:19 -0500 Subject: [PATCH] Delete old best practices articles --- docs/source/best-practices/authentication.md | 248 ------------- docs/source/best-practices/caching.md | 23 -- docs/source/best-practices/monitoring.md | 9 - docs/source/best-practices/organization.md | 261 ------------- docs/source/best-practices/performance.md | 210 ----------- docs/source/best-practices/schema-design.md | 371 ------------------- docs/source/best-practices/security.md | 82 ---- docs/source/best-practices/testing.md | 13 - docs/source/best-practices/versioning.md | 10 - 9 files changed, 1227 deletions(-) delete mode 100644 docs/source/best-practices/authentication.md delete mode 100644 docs/source/best-practices/caching.md delete mode 100644 docs/source/best-practices/monitoring.md delete mode 100644 docs/source/best-practices/organization.md delete mode 100644 docs/source/best-practices/performance.md delete mode 100644 docs/source/best-practices/schema-design.md delete mode 100644 docs/source/best-practices/security.md delete mode 100644 docs/source/best-practices/testing.md delete mode 100644 docs/source/best-practices/versioning.md diff --git a/docs/source/best-practices/authentication.md b/docs/source/best-practices/authentication.md deleted file mode 100644 index 2d986a78..00000000 --- a/docs/source/best-practices/authentication.md +++ /dev/null @@ -1,248 +0,0 @@ ---- -title: Auth -description: Securing our app and serving our users ---- - -

## Background: Authentication vs. Authorization

**Authentication** describes the process by which an application verifies a user's identity: someone claiming to be a certain user through the client really is that user and has permission to make requests to the server. In most systems, the user and server share a handshake and a token that uniquely pairs them together, ensuring both sides know they are communicating with their intended target.

**Authorization** defines what a user, such as an admin or a regular user, is allowed to do. Generally a server will authenticate users and assign them an authorization role that permits a subset of all possible operations, such as read but not write.
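As a quick illustrative sketch (every name here is hypothetical), authentication maps a credential to an identity, while authorization checks what that identity may do:

```javascript
// Hypothetical sketch: authentication resolves a credential to a user;
// authorization decides whether that user's role permits an action.
const sessions = { "token-123": { id: 1, role: "USER" } };

function authenticate(token, sessionStore) {
  // Authentication: map a token to a known user, or null.
  return sessionStore[token] || null;
}

function authorize(user, action) {
  // Authorization: check the user's role against a permission table.
  const permissions = { ADMIN: ["read", "write"], USER: ["read"] };
  return Boolean(user) && permissions[user.role].includes(action);
}

const user = authenticate("token-123", sessions);
console.log(authorize(user, "read"));  // true
console.log(authorize(user, "write")); // false
```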

## Auth in GraphQL

GraphQL offers authentication and authorization mechanics similar to REST and other data-fetching solutions, with the added possibility of controlling finer-grained access within a single request. There are two common approaches: schema authorization and operation authorization.

**Schema authorization** follows guidance similar to REST: the entire request is checked for an authenticated user who is authorized to access the server's data.

**Operation authorization** takes advantage of the flexibility of GraphQL to provide public portions of the schema that don't require any authorization alongside private portions that require authentication and authorization.

> Authorization within our GraphQL resolvers is a great first line of defense for securing our application. We recommend having similar authorization patterns within our data-fetching models to ensure a user is authorized at every level of data fetching and updating.
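The model-level checks recommended in the note above might look like the following hedged sketch, where `TodoModel` and its in-memory store are hypothetical stand-ins for a real data model:

```javascript
// Hypothetical data model that re-checks authorization on every fetch,
// mirroring resolver-level checks as a second line of defense.
const todos = [
  { id: 1, ownerId: 1, text: "write docs", public: false },
  { id: 2, ownerId: 2, text: "ship release", public: true },
];

const TodoModel = {
  getById(id, user) {
    const todo = todos.find(t => t.id === id);
    if (!todo) return null;
    // Model-level authorization: public todos are open; private todos
    // require the owner or an admin.
    const allowed =
      todo.public || (user && (user.id === todo.ownerId || user.role === "ADMIN"));
    return allowed ? todo : null;
  },
};

console.log(TodoModel.getById(1, { id: 2, role: "USER" })); // null: not the owner
```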

## Authenticating users

All of these approaches require that users be authenticated with the server. If our system already has a login method set up to authenticate users and provide credentials that can be used in subsequent requests, we can use that same system to authenticate GraphQL requests. If we are creating new infrastructure for user authentication, we can follow existing best practices to authenticate users. For a full example of authentication, follow [this example](#auth-example), which uses [passport.js](http://www.passportjs.org/).
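The exact lookup depends on the login system in use. As a hedged sketch, a helper like the `myAuthenticationLookupCode` used in the examples that follow might read a bearer token from the request headers and resolve it against a session store (the `knownTokens` store here is hypothetical):

```javascript
// Hedged sketch: resolve a bearer token from the request to a user.
// `knownTokens` stands in for a real session or token store.
const knownTokens = { abc123: { id: 1, name: "Ada" } };

function myAuthenticationLookupCode(req) {
  const header = (req.headers && req.headers.authorization) || "";
  const match = header.match(/^Bearer (.+)$/);
  if (!match) return null; // no credential presented
  return knownTokens[match[1]] || null; // unknown token -> unauthenticated
}

console.log(myAuthenticationLookupCode({ headers: { authorization: "Bearer abc123" } }));
```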

## Schema Authorization

Schema authorization is useful for GraphQL endpoints that require known users and give every authenticated user access to all fields in the schema. This approach suits internal applications, which are used by a group that is known and generally trusted. It's also common to have separate GraphQL services for different features or products that are entirely available to their users, meaning that if a user is authenticated, they are authorized to access all the data. Since schema authorization does not need to be aware of the GraphQL layer, our server can add middleware in front of the GraphQL layer to enforce it.

```js
// authenticate for schema usage
const context = ({ req }) => {
  const user = myAuthenticationLookupCode(req);
  if (!user) {
    throw new Error("You need to be authenticated to access this schema!");
  }

  return { user };
};

const server = new ApolloServer({ typeDefs, resolvers, context });

server.listen().then(({ url }) => {
  console.log(`🚀 Server ready at ${url}`);
});
```

Currently this server will allow any authenticated user to request all fields in the schema, which means that authorization is all or nothing. While some applications provide a shared view of the data to all users, many use cases require scoping authorization and limiting what some users can see. The authorization scope is shared across all resolvers, so this code adds the user id and scope to the context.

```js
const { ForbiddenError } = require("apollo-server");

const context = ({ req }) => {
  const user = myAuthenticationLookupCode(req);
  if (!user) {
    throw new ForbiddenError(
      "You need to be authenticated to access this schema!"
    );
  }

  const scope = lookupScopeForUser(user);

  return { user, scope };
};

const server = new ApolloServer({
  typeDefs,
  resolvers,
  context
});

server.listen().then(({ url }) => {
  console.log(`🚀 Server ready at ${url}`);
});
```

Now within a resolver, we are able to check the user's scope. If the user is not an administrator and `allTodos` is requested, a GraphQL-specific forbidden error is thrown. Apollo Server will associate the error with the particular path and return it to the client along with any other data successfully requested, such as `myTodos`.

```js
const { ForbiddenError } = require("apollo-server");

const resolvers = {
  Query: {
    allTodos: (source, args, context) => {
      if (context.scope !== "ADMIN") {
        throw new ForbiddenError("Need Administrator Privileges");
      }
      return context.Todos.getAll();
    },
    myTodos: (source, args, context) => {
      return context.Todos.getById(context.user.id);
    }
  }
};
```

The major downside of schema authorization is that all requests must be authenticated, which prevents unauthenticated requests from accessing information that should be publicly available, such as a home page. The next approach, operation authorization, allows a portion of the schema to be public while restricting other portions to authenticated and authorized users.

## Operation Authorization

Operation authorization removes the catch-all portion of our context function that throws an unauthenticated error, moving the authorization check into the resolvers.
The instantiation of the server becomes:

```js
const context = ({ req }) => {
  const user = myAuthenticationLookupCode(req);
  if (!user) {
    return { user: null, scope: null };
  }

  const scope = lookupScopeForUser(user);
  return { user, scope };
};

const server = new ApolloServer({
  typeDefs,
  resolvers,
  context
});

server.listen().then(({ url }) => {
  console.log(`🚀 Server ready at ${url}`);
});
```

The benefit of operation authorization is that private and public data are more easily managed and enforced. Take for example a schema that allows finding `allTodos` in the app (an administrative action), seeing any `publicTodos` (which requires no authorization), and returning a single user's todos via `myTodos`. Using Apollo Server, we can easily build complex authorization models like so:

```js
const { ForbiddenError, AuthenticationError } = require("apollo-server");

const resolvers = {
  Query: {
    allTodos: (source, args, context) => {
      if (!context.scope) {
        throw new AuthenticationError("You must be logged in to see all todos");
      }

      if (context.scope !== "ADMIN") {
        throw new ForbiddenError("You must be an administrator to see all todos");
      }

      return context.Todos.getAllTodos();
    },
    publicTodos: (source, args, context) => {
      return context.Todos.getPublicTodos();
    },
    myTodos: (source, args, context) => {
      if (!context.scope) {
        throw new AuthenticationError("You must be logged in to see your todos");
      }

      return context.Todos.getByUserId(context.user.id);
    }
  }
};
```

## Should I send a password in a mutation?

Since GraphQL queries are sent to a server in the same manner as REST requests, the same policies for sending sensitive data over the wire apply. The current best practice is to provide an encrypted connection over HTTPS, or WSS if we are using websockets. Provided we set up this layer, passwords and other sensitive information should be secure.
- -## Auth Example - -If you are new setting up new infrastructure or would like to understand an example of how to adapt your existing login system, you can follow this example using passport.js. We will use this example of authentication in the subsequent sections. To skip this section, jump down to the - -```shell -npm install --save express passport body-parser express-session node-uuid passport-local apollo-server graphql -``` - -```js -const bodyParser = require('body-parser'); -const express = require('express'); -const passport = require('passport'); -const session = require('express-session'); -const uuid = require('node-uuid'); -``` - -After installing and importing the necessary packages, this code checks the user's password and attaches their id to the request. - -```js -let LocalStrategy = require('passport-local').Strategy; -const { DB } = require('./schema/db.js'); - -passport.use( - 'local', - new LocalStrategy(function(username, password, done) { - let checkPassword = DB.Users.checkPassword(username, password); - let getUser = checkPassword - .then(is_login_valid => { - if (is_login_valid) return DB.Users.getUserByUsername(username); - else throw new Error('invalid username or password'); - }) - .then(user => done(null, user)) - .catch(err => done(err)); - }), -); - -passport.serializeUser((user, done) => done(null, user.id)); - -passport.deserializeUser((id, done) => - DB.Users.get(id).then((user, err) => done(err, user)) -); -``` - -Now that passport has been setup, we initialize the server application to use the passport middleware, attaching the user id to the request. 
```js
const app = express();

// passport's session piggy-backs on express-session
app.use(
  session({
    genid: function(req) {
      return uuid.v4();
    },
    secret: 'Z3]GJW!?9uP"/Kpe',
  })
);

// Provide authentication and user information to all routes
app.use(passport.initialize());
app.use(passport.session());
```

Finally, we provide the login route and start Apollo Server.

```js
const { typeDefs, resolvers } = require('./schema');

// login route for passport
app.use('/login', bodyParser.urlencoded({ extended: true }));
app.post(
  '/login',
  passport.authenticate('local', {
    successRedirect: '/',
    failureRedirect: '/login',
    failureFlash: true,
  }),
);

// Depending on the authorization model chosen, you may include some extra middleware here before you instantiate the server

// Create and start your apollo server
const server = new ApolloServer({ typeDefs, resolvers, app });

server.listen().then(({ url }) => {
  console.log(`🚀 Server ready at ${url}`);
});
```

diff --git a/docs/source/best-practices/caching.md b/docs/source/best-practices/caching.md
deleted file mode 100644
index 70a62770..00000000
--- a/docs/source/best-practices/caching.md
+++ /dev/null
@@ -1,23 +0,0 @@
---
title: Caching
description: Caching operations
---

One of the best ways to speed up our application is to implement caching. Apollo Client has an intelligent cache that greatly lowers the work the client needs to do to fetch and manage data, but what about our server? Caching in Apollo Server can be done in a number of ways, but we recommend three in particular that strike a good balance between complexity to manage and benefit of use.

## Whole query caching

GraphQL operations on a client are best when they are statically defined and used in an application. When this is the case, there will often be operations whose full result can easily be cached. We call this *whole query caching*, and it is easy to implement with Apollo Server. Unlike custom REST endpoints, Apollo Server allows us to define the cacheability of our resources and dynamically calculate the best possible cache timing for any given operation.

- For more information about setting up Apollo Engine with Apollo Server, [read this guide]()
- For more information about setting up whole query caching with Apollo Engine, [read this guide](https://www.apollographql.com/docs/engine/caching.html)
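Cacheability can also be declared directly in the schema with cache hints. A hedged sketch (the type and the `maxAge` values are illustrative, and assume cache control is enabled on the server):

```graphql
# Results containing Post default to a five-minute cache window,
# while the fast-changing votes field opts down to 30 seconds.
type Post @cacheControl(maxAge: 300) {
  id: ID!
  title: String
  votes: Int @cacheControl(maxAge: 30)
}
```

The smallest `maxAge` across everything a query touches determines the cache timing for the whole response.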

## CDN integration

If our application has a lot of public data that doesn’t change very frequently, and it’s important for it to load quickly, we will probably benefit from using a CDN to cache our API results. This can be particularly important for media or content companies like news sites and blogs.

A CDN will store our API result close to the “edge” of the network — that is, close to the region the user is in — and deliver a cached result much faster than a full round-trip to our actual server would allow. As an added benefit, we save on server load, since that query doesn’t actually hit our API.

- Setting up CDN caching with Apollo Server is easy: simply set up Apollo Engine, then follow this [guide](https://www.apollographql.com/docs/engine/cdn.html)
- For more information about using a CDN with Apollo Engine, check out this [article](https://blog.apollographql.com/caching-graphql-results-in-your-cdn-54299832b8e2)

diff --git a/docs/source/best-practices/monitoring.md b/docs/source/best-practices/monitoring.md
deleted file mode 100644
index 9882362c..00000000
--- a/docs/source/best-practices/monitoring.md
+++ /dev/null
@@ -1,9 +0,0 @@
---
title: Monitoring
---

Intro about what to watch for?

## ENGINE

## formatError

diff --git a/docs/source/best-practices/organization.md b/docs/source/best-practices/organization.md
deleted file mode 100644
index c4282a74..00000000
--- a/docs/source/best-practices/organization.md
+++ /dev/null
@@ -1,261 +0,0 @@
---
title: Organizing your code
description: Scaling your Apollo Server from a single file to your entire team
---

The GraphQL schema defines the API for Apollo Server, providing the single source of truth between client and server. A complete schema contains type definitions and resolvers. Type definitions are written and documented in the [Schema Definition Language (SDL)]() to define the valid entry points into the server.
Corresponding one-to-one with type definition fields, resolvers are functions that retrieve the data described by the type definitions.

To accommodate this tight coupling, type definitions and resolvers should be kept together in the same file. This collocation allows developers to modify fields and resolvers with atomic schema changes without unexpected consequences. Finally, to build a complete schema, the type definitions are combined in an array and the resolvers are merged together. Throughout the examples, the resolvers delegate to a data model, as explained in [this section]().

> Note: This schema separation should be done by product or real-world domain, which creates natural boundaries that are easier to reason about.

## Prerequisites

* essentials/schema for connection between:
  * GraphQL Types
  * Resolvers

## Organizing schema types

With large schemas, defining types in different files and merging them to create the complete schema may become necessary. We accomplish this by importing and exporting schema strings, combining them into arrays as necessary. The following example demonstrates separating the type definitions of [this schema](#first-example-schema), found at the end of the page.

```js
// comment.js
const { gql } = require('apollo-server');

const typeDefs = gql`
  type Comment {
    id: ID!
    message: String
    author: String
  }
`;

module.exports = { typeDefs };
```

The `Post` type includes a reference to `Comment`, which is added to the array of type definitions and exported:

```js
// post.js
const { gql } = require('apollo-server');

const typeDefs = gql`
  type Post {
    id: ID!
    title: String
    content: String
    author: String
    comments: [Comment]
  }
`;

// Export Post and all dependent types
module.exports = { typeDefs };
```

Finally the root `Query` type, which uses `Post`, is created and passed to the server instantiation:

```js
// schema.js
const { ApolloServer, gql } = require('apollo-server');
const Comment = require('./comment');
const Post = require('./post');

const RootQuery = gql`
  type Query {
    post(id: ID!): Post
  }
`;

const server = new ApolloServer({
  typeDefs: [RootQuery, Post.typeDefs, Comment.typeDefs],
  resolvers, // defined in next section
});

server.listen().then(({ url }) => {
  console.log(`🚀 Server ready at ${url}`);
});
```

## Organizing resolvers

For the type definitions above, we can achieve the same modularity with resolvers by combining each type's resolvers with Lodash's `merge` or an equivalent. The [end of this page](#first-example-resolvers) contains a complete view of the resolver map.

```js
// comment.js
const CommentModel = require('./models/comment');

const resolvers = {
  Comment: {
    votes: (parent) => CommentModel.getVotesById(parent.id)
  }
};

module.exports = { resolvers };
```

The `Post` type:

```js
// post.js
const PostModel = require('./models/post');

const resolvers = {
  Post: {
    comments: (parent) => PostModel.getCommentsById(parent.id)
  }
};

module.exports = { resolvers };
```

Finally, the `Query` type's resolvers are merged in and the result is passed to the server instantiation:

```js
// schema.js
const { merge } = require('lodash');
const Post = require('./post');
const Comment = require('./comment');

const PostModel = require('./models/post');

// Merge all of the resolver objects together
const resolvers = merge({
  Query: {
    post: (_, args) => PostModel.getPostById(args.id)
  }
}, Post.resolvers, Comment.resolvers);

const server = new ApolloServer({
  typeDefs, // defined in previous section
  resolvers,
});

server.listen().then(({ url }) => {
  console.log(`🚀 Server ready at ${url}`);
});
```

## Extending types

The `extend` keyword provides the ability to add fields to existing types. Using `extend` is particularly useful for avoiding a long list of fields on the root `Query` and `Mutation` types.

```js
// schema.js
const bookTypeDefs = gql`
extend type Query {
  books: [Book]
}

type Book {
  id: ID!
}
`;

// These type definitions are often in a separate file
const authorTypeDefs = gql`
extend type Query {
  authors: [Author]
}

type Author {
  id: ID!
}
`;

module.exports = { typeDefs: [bookTypeDefs, authorTypeDefs] };
```

```js
const { typeDefs, resolvers } = require('./schema');

const RootQuery = gql`
"Query can and must be defined once per schema to be extended"
type Query {
  _empty: String
}`;

const server = new ApolloServer({
  typeDefs: [RootQuery].concat(typeDefs),
  resolvers,
});

server.listen().then(({ url }) => {
  console.log(`🚀 Server ready at ${url}`);
});
```

> Note: In the current version of GraphQL, you can’t have an empty type even if you intend to extend it later, so we need to make sure the `Query` type has at least one field — in this case a placeholder `_empty` field. Hopefully future versions will allow an empty type to be extended later.

## Documenting a Schema

In addition to modularization, documentation within the SDL enables the schema to serve effectively as the single source of truth between client and server. GraphQL GUIs have built-in support for displaying docstrings with markdown syntax, such as those found in the following schema.

```graphql
"""
Description for the type
"""
type MyObjectType {
  """
  Description for field
  Supports multi-line descriptions
  """
  myField: String!

  otherField(
    """
    Description for argument
    """
    arg: Int
  ): String
}
```

## API

Apollo Server passes `typeDefs` and `resolvers` to `graphql-tools`' `makeExecutableSchema`.

TODO point at graphql-tools `makeExecutableSchema` api

## Example Application Details

- -

### Schema

The full type definitions for the first example:

```graphql
type Comment {
  id: ID!
  message: String
  author: String
  votes: Int
}

type Post {
  id: ID!
  title: String
  content: String
  author: String
  comments: [Comment]
}

type Query {
  post(id: ID!): Post
}
```

### Resolvers

The full resolver map for the first example:

```js
const CommentModel = require('./models/comment');
const PostModel = require('./models/post');

const resolvers = {
  Comment: {
    votes: (parent) => CommentModel.getVotesById(parent.id)
  },
  Post: {
    comments: (parent) => PostModel.getCommentsById(parent.id)
  },
  Query: {
    post: (_, args) => PostModel.getPostById(args.id)
  }
};
```

diff --git a/docs/source/best-practices/performance.md b/docs/source/best-practices/performance.md
deleted file mode 100644
index 46d09ca6..00000000
--- a/docs/source/best-practices/performance.md
+++ /dev/null
@@ -1,210 +0,0 @@
---
title: Performance
description: Reducing requests and speeding up applications
---

GraphQL offers performance benefits for most applications: it reduces round-trips when fetching data, lowers the amount of data sent back, and makes it easier to batch data lookups. Since GraphQL is often built as a stateless request-response pattern, scaling our app horizontally becomes much easier. In this section, we will dive into some benefits that Apollo Server brings to our app and some patterns for speeding up our service.

## Prevent over-fetching

REST endpoints often return all of the fields of whatever data they return. As applications grow, their data needs grow as well, which leads to a lot of unnecessary data being downloaded by our client applications. With GraphQL this isn't a problem, because Apollo Server returns only the data we ask for when making a request! Take for example a screen which shows an avatar of the currently logged-in user.
In a REST app, we might make a request to `/api/v1/currentUser`, which would return a response like this:

```json
{
  "id": 1,
  "firstName": "James",
  "lastName": "Baxley",
  "suffix": "III",
  "avatar": "/photos/profile.jpg",
  "friendIds": [2, 3, 4, 5, 6, 7],
  "homeId": 1,
  "occupation": "farmer",
  // and so on for every field on this model that our client **could** use
}
```

Contrast that with the request a client would send to Apollo Server and the response it would receive:

```graphql
query GetAvatar {
  currentUser {
    avatar
  }
}
```

```json
{
  "data": {
    "currentUser": {
      "avatar": "/photos/profile.jpg"
    }
  }
}
```

No matter how much our data grows, this query will only ever return the small bit of data that the client application actually needs. This makes our app faster and our end user's data plan much happier!

## Reducing round-trips

Applications typically need to fetch multiple resources to load any given screen for a user. When building an app on top of a REST API, a screen fetches the first round of data and then, using that information, makes another request to load related information. A common example would be loading a user and then loading their friends:

```js
const userAndFriends = fetch("/api/v1/user/currentUser").then(user => {
  const friendRequest = Promise.all(
    user.friendIds.map(id => fetch(`/api/v1/user/${id}`))
  );

  return friendRequest.then(friends => {
    user.friends = friends;
    return user;
  });
});
```

The above code makes at minimum two requests: one for the logged-in user and one for a single friend. With more friends, the number of requests jumps up quickly! To get around this, custom endpoints are added to a RESTful API. In this example, an `/api/v1/friends/:userId` endpoint might be added to make fetching friends a single request per user instead of one per friend.

With GraphQL this is easily done in a single request!
Given a schema like this:

```graphql
type User {
  id: ID!
  name: String!
  friends: [User]
}

type Query {
  currentUser: User
}
```

We can easily fetch the current user and all of their friends in a single request!

```graphql
query LoadUserAndFriends {
  currentUser {
    id
    name
    friends {
      id
      name
    }
  }
}
```

## Batching data lookups

Looking at the above query, we might think GraphQL simply moves the waterfall of requests from the client to the server. Even if that were true, application speeds would still improve. However, Apollo Server makes it possible to make applications even faster by batching data requests.

The most common way to batch requests is with Facebook's [`dataloader`](https://github.com/facebook/dataloader) library. Let's explore a few options for batching the requests behind the previous operation:

### Custom resolvers for batching

The simplest (and often easiest) way to speed up a GraphQL service is to create resolvers that optimistically fetch the needed data. Often the best thing to do is to write the simplest possible resolver to look up data, profile it with a tool like Apollo Engine, then improve slow resolvers with logic tuned to the way our schema is used. Take the above query, for example:

```js
const User = {
  friends: (user, args, context) => {
    // A simple approach to find each friend.
    return user.friendIds.map(id => context.UserModel.findById(id));
  }
};
```

The above resolver makes a database lookup for the initial user and then one lookup for every friend that our user has. This quickly turns into an expensive resolver to call, so let's look at how we could speed it up! First, let's apply a simple but proven technique:

```js
const User = {
  friends: (user, args, context) => {
    // a custom model method for looking up multiple users
    return context.UserModel.findByIds(user.friendIds);
  }
};
```

Instead of fetching each user independently, we fetch all users at once in a single lookup. This is analogous to `SELECT * FROM users WHERE id IN (1,2,3,4)`, whereas the previous version would run multiple queries of the form `SELECT * FROM users WHERE id = 1`.

Often, custom resolvers are enough to speed up our server to the levels we want. However, there may be times where we want to be even more efficient when batching data. Let's say we expanded our operation to include more information:

```graphql
query LoadUserAndFriends {
  currentUser {
    id
    name
    friends {
      id
      name
    }
    family {
      id
      name
    }
  }
}
```

Assuming that `family` also returns `User` types, we are now making at minimum three database calls: 1) the user, 2) the batch of friends, and 3) the batch of family members.
If we expand the query deeper:

```graphql
query LoadUserAndFriends {
  currentUser {
    id
    name
    friends {
      id
      name
      ...peopleTheyCareAbout
    }
    family {
      id
      name
      ...peopleTheyCareAbout
    }
  }
}

fragment peopleTheyCareAbout on User {
  family {
    id
    name
  }
  friends {
    id
    name
  }
}
```

We are now looking at any number of database calls! The more friends and family members that are connected in our app, the more expensive this query gets. Using a library like `dataloader`, we can cap this operation at a maximum of three database lookups. Let's take a look at how to implement it to understand what is happening:

```js
const DataLoader = require('dataloader');

// give this to ApolloServer's context
const UserModelLoader = new DataLoader(UserModel.findByIds);

// in the User resolvers
const User = {
  friends: (user, args, context) => {
    return context.UserModelLoader.loadMany(user.friendIds);
  },
  family: (user, args, context) => {
    return context.UserModelLoader.loadMany(user.familyIds);
  }
};
```

After the first data request returns with our current user's information, we execute the resolvers for `friends` and `family` within the same "tick" of the event loop, which is technical talk for "pretty much at the same time". DataLoader delays making a data request (in this case the `UserModel.findByIds` call) just long enough to capture both the friends and the family lookups at once. It combines the two arrays of ids into one, so our `SELECT * FROM users WHERE id IN ...` request contains the ids of both friends **and** family!

The friends and family requests return at the same time, so when we select friends and family for all of the previously returned users, the same batching occurs across all of the new requests. Instead of potentially hundreds of data lookups, we perform only three for a query like this!
## Scaling our app

Horizontal scaling is a fantastic way to increase the load our servers can handle without purchasing more expensive computing resources. Apollo Server scales extremely well this way as long as a couple of concerns are handled:

- Every request should ensure it has access to the required data source. If we are building on top of an HTTP endpoint this isn't a problem, but when using a database it is good practice to verify our connection on each request. This makes our app more fault tolerant and lets us easily scale up a new service, which will connect as soon as requests start!
- Any state should be saved in a shared, stateful datastore like Redis. By sharing state, we can add more and more servers to our infrastructure without fear of losing state as we scale up and down.

diff --git a/docs/source/best-practices/schema-design.md b/docs/source/best-practices/schema-design.md
deleted file mode 100644
index d586eaaa..00000000
--- a/docs/source/best-practices/schema-design.md
+++ /dev/null
@@ -1,371 +0,0 @@
---
title: Schema Design
description: The best way to fetch data, update it, and keep things running for a long time
---

GraphQL schemas are at their best when they are designed around the needs of client applications, instead of the shape of how the data is stored. Teams will often create schemas that are literal mappings of their collections or tables, with CRUD-like root fields. While this may be a fast way to get up and running, a strong long-term GraphQL schema is built around the product's usage.

## Style conventions

The GraphQL specification is flexible in the style that it dictates and doesn't impose specific naming guidelines.
To facilitate development and continuity across GraphQL deployments, we suggest the following style conventions:

- **Fields**: are recommended to be written in `camelCase`, since the majority of consumers will be client applications written in JavaScript.
- **Types**: should be `PascalCase`.
- **Enums**: should have their name in `PascalCase` and their values in `ALL_CAPS` to denote their special meaning.

## Using interfaces

Interfaces are a powerful way to build and use GraphQL schemas through the use of _abstract types_. Abstract types can't be used directly in a schema, but they can be used as building blocks for creating explicit types.

Consider an example where different types of books share a common set of attributes, such as _text books_ and _coloring books_. A simple foundation for these books might be represented as the following `interface`:

```graphql
interface Book {
  title: String
  author: Author
}
```

We won't be able to use this interface directly to query for a book, but we can use it to implement concrete types. Imagine a screen within an application that needs to display a feed of all books, without regard to their (more specific) type. To create such functionality, we could define the following:

```graphql
type TextBook implements Book {
  title: String
  author: Author
  classes: [Class]
}

type ColoringBook implements Book {
  title: String
  author: Author
  colors: [Color]
}

type Query {
  schoolBooks: [Book]
}
```

In this example, we've used the `Book` interface as the foundation for the `TextBook` and `ColoringBook` types. Then, a `schoolBooks` field simply expresses that it returns a list of books (i.e. `[Book]`).

Implementing the book feed example is now simplified, since we've removed the need to worry about which kind of `Book` will be returned.
A query against this schema, which could return _text books_ and _coloring books_, might look like:

```graphql
query GetBooks {
  schoolBooks {
    title
    author
  }
}
```

This is really helpful for feeds of common content, user role systems, and more!

Furthermore, if we need to return fields which are only provided by either `TextBook`s or `ColoringBook`s (not both), we can request fragments from the abstract types in the query. Those fragments will be filled in only as appropriate; in the case of the example, only coloring books will be returned with `colors`, and only text books will have `classes`:

```graphql
query GetBooks {
  schoolBooks {
    title
    ... on TextBook {
      classes {
        name
      }
    }
    ... on ColoringBook {
      colors {
        name
      }
    }
  }
}
```

To see an interface in practice, check out this [example]()

## A `Node` interface

A so-called "`Node` interface" is a generic interface on which other types can be built, enabling clients to fetch any implementing _type_ in a schema by providing only an `id`. This interface isn't provided automatically by GraphQL (nor does it _have_ to be called `Node`), but we highly recommend schemas consider implementing one.

To understand its value, we'll present an example with two collections: _authors_ and _posts_, though the usefulness of such an interface grows as more collections are introduced. As is common with most database collections, each of these collections has a unique `id` column which uniquely represents the individual documents within the collection.

To implement a so-called "`Node` interface", we'll add a `Node` interface to the schema, as follows:

```graphql
interface Node {
  id: ID!
}
```

This `interface` declaration has the only field it will ever need: an `ID!` field, which is required to be non-null in all operations (as indicated by the `!`).
To take advantage of this new interface, we can use it as the underlying implementation for the other types that our schema will define. For our example, this means we'll use it to build the `Post` and `Author` object types:

```graphql
type Post implements Node {
  id: ID!
  title: String!
  author: Author!
}

type Author implements Node {
  id: ID!
  name: String!
  posts: [Post]
}
```

By implementing the `Node` interface as the foundation for `Post` and `Author`, we know that anytime a client has obtained an `id` (from either type), we can send it back to the server and retrieve that exact piece of data back!
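Because `Node` is an abstract type, the server also needs a way to decide which concrete type a given object is at runtime. In Apollo Server's resolver map this is done with a `__resolveType` function. Here is a minimal sketch, assuming (hypothetically) that our data layer tags each record with a `kind` property; any discriminating field would work just as well:

```js
// A minimal sketch of resolving the abstract Node type to a concrete type.
// The `kind` property is a hypothetical tag added by our data layer.
const resolvers = {
  Node: {
    __resolveType(obj) {
      if (obj.kind === 'post') return 'Post';
      if (obj.kind === 'author') return 'Author';
      return null; // unresolvable objects produce a GraphQL error
    },
  },
};
```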
### Global Ids

When using the `Node` interface, we will want to create schema-unique `id` fields. The most common way to do this is to take the `id` from the data source and join it with the name of the type where it is being exposed (e.g. `Post:1`, `Author:1`). In doing so, even though the database `id` is the same for the first Post and the first Author, the client can refetch each successfully!

Global Ids are often encoded into a base64 string after being joined together. This is partly for consistency, but also to denote that the client shouldn't try to parse and use the information: the shape of the `id` may change over time with schema revisions, but its uniqueness will not.
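As a rough sketch of how such ids might be produced and consumed on the server, assuming Node.js and hypothetical helper names:

```js
// Hypothetical helpers for composing and decomposing global ids.
// The base64 encoding signals to clients that the id is opaque.
function toGlobalId(typeName, dbId) {
  return Buffer.from(`${typeName}:${dbId}`).toString('base64');
}

function fromGlobalId(globalId) {
  const [typeName, id] = Buffer.from(globalId, 'base64')
    .toString('utf8')
    .split(':');
  return { typeName, id };
}

// the same database id yields distinct global ids per type
const postId = toGlobalId('Post', 1);
const authorId = toGlobalId('Author', 1);
console.log(postId !== authorId); // true
console.log(fromGlobalId(postId)); // { typeName: 'Post', id: '1' }
```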
### Using the node interface

Now that we have the `Node` interface, we need a way to globally refetch any `id` that the client can send. To do this, we add a field called `node` to our `Query` type which returns a `Node` abstract type:

```graphql
type Query {
  node(id: ID!): Node
}
```

Now our client can refetch any type it wants to, as long as it has an `id` value for it:

```graphql
query GetAuthor($authorId: ID!) {
  node(id: $authorId) {
    id
    ... on Author {
      name
      posts {
        id
        title
      }
    }
  }
}
```

Using the `Node` interface can remove a ton of unnecessary fields on the `Query` type, as well as solve common patterns like data fetching for routing. Say we had a route showing content our user has liked (`/favorites`), and we wanted to drill down into those likes (`/favorites/:id`) to show more information. Instead of creating a route for each kind of liked content (e.g. `/favorites/authors/:id`, `/favorites/posts/:id`), we can use the `node` field to request any type of liked content:

```graphql
query GetLikedContent($id: ID!) {
  favorite: node(id: $id) {
    id
    ... on Author {
      pageTitle: name
    }
    ... on Post {
      pageTitle: title
    }
  }
}
```

Thanks to the `Node` interface and field aliasing, the response data is easily used by our UI no matter what the likes are:

```json
[
  { "id": "Author:1", "pageTitle": "Sashko" },
  { "id": "Post:1", "pageTitle": "GraphQL is great!" }
]
```

To see this in practice, check out the following [example]()

## Mutation responses

Mutations are an incredibly powerful part of GraphQL, as they can return both information about the data-updating transaction and the actual data that has changed. One pattern we recommend to make this consistent is a `MutationResponse` interface that can be implemented by the return type of any `Mutation` field. The `MutationResponse` is designed to return transactional information alongside the changed data, making client-side updates automatic!
The interface looks like this:

```graphql
interface MutationResponse {
  code: String!
  success: Boolean!
  message: String!
}
```

An implementing type would look like this:

```graphql
type AddPostMutationResponse {
  code: String!
  success: Boolean!
  message: String!
  post: Post
}
```

Let's break this down by field:

- **code** is a string representing a transactional status value with details about the result of the data change. Think of this like an HTTP status code.
- **success** is a boolean telling the client if the update was successful. It is a coarse check that makes it easy for the client application to respond to failures.
- **message** is a human-readable description of the status of the transaction. It is intended to be used in the UI of the product.
- **post** is added by the implementing type `AddPostMutationResponse` to return the newly created post for the client to use!

Following this pattern for mutations provides detailed information about the data that has changed and about how the operation went. Client developers can easily react to failures and fetch the information they need to update their local cache.
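A resolver for an `addPost` mutation might populate this response shape as follows. This is only a sketch: `dataSources.posts.create` is a hypothetical data-access call, and the status codes are illustrative:

```js
// Sketch of a Mutation resolver returning an AddPostMutationResponse.
const resolvers = {
  Mutation: {
    async addPost(_, { post }, { dataSources }) {
      try {
        const created = await dataSources.posts.create(post);
        return {
          code: '200',
          success: true,
          message: 'Post created successfully.',
          post: created,
        };
      } catch (err) {
        // the transaction failed, but the response shape stays consistent
        return { code: '500', success: false, message: err.message, post: null };
      }
    },
  },
};
```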
## Organizing your schema

When schemas get large, we can start to define types in different files and import them to create the complete schema. We accomplish this by importing and exporting schema strings, combining them into arrays as necessary.

```js
// comment.js
const typeDefs = `
  type Comment {
    id: Int!
    message: String
    author: String
  }
`;

module.exports = { typeDefs };
```

```js
// post.js
const Comment = require('./comment');

const typeDefs = [`
  type Post {
    id: Int!
    title: String
    content: String
    author: String
    comments: [Comment]
  }
`].concat(Comment.typeDefs);

// we export Post and all types it depends on
// in order to make sure we don't forget to include
// a dependency
module.exports = { typeDefs };
```

```js
// schema.js
const { ApolloServer } = require('apollo-server');
const Post = require('./post');
const resolvers = require('./resolvers'); // resolver map defined elsewhere

const RootQuery = `
  type RootQuery {
    post(id: Int!): Post
  }
`;

const SchemaDefinition = `
  schema {
    query: RootQuery
  }
`;

const server = new ApolloServer({
  // we may destructure Post if supported by our Node version
  typeDefs: [SchemaDefinition, RootQuery].concat(Post.typeDefs),
  resolvers,
});

server.listen().then(({ url }) => {
  console.log(`🚀 Server ready at ${url}`);
});
```
## Extending types

The `extend` keyword provides the ability to add fields to existing types. Using `extend` is particularly useful in avoiding a large list of fields on the root `Query` and `Mutation` types.

```js
const barTypeDefs = `
"Query can and must be defined once per schema to be extended"
type Query {
  bars: [Bar]
}

type Bar {
  id: String
}
`;

const fooTypeDefs = `
type Foo {
  id: String
}

extend type Query {
  foos: [Foo]
}
`;

const typeDefs = [barTypeDefs, fooTypeDefs];
```
## Sharing types

Schemas often contain circular dependencies, or a shared type that has been hoisted to be referenced in separate files. When exporting an array of schema strings with circular dependencies, the array can be wrapped in a function. Apollo Server will only include each type definition once, even if it is imported multiple times by different types. Since each definition is deduplicated, domains can be self-contained and fully functional regardless of how they are combined.

```js
// author.js
const Book = require('./book');

const Author = `
  type Author {
    id: Int!
    firstName: String
    lastName: String
    books: [Book]
  }
`;

// we export Author and all types it depends on
// in order to make sure we don't forget to include
// a dependency, and we wrap it in a function so the
// circular reference is resolved lazily and the
// strings can be deduplicated
exports.typeDefs = () => [Author].concat(Book.typeDefs);
```

```js
// book.js
const Author = require('./author');

const Book = `
  type Book {
    title: String
    author: Author
  }
`;

exports.typeDefs = () => [Book].concat(Author.typeDefs);
```

```js
// schema.js
const { ApolloServer } = require('apollo-server');
const Author = require('./author');
const resolvers = require('./resolvers'); // resolver map defined elsewhere

const RootQuery = `
  type RootQuery {
    author(id: Int!): Author
  }
`;

const SchemaDefinition = `
  schema {
    query: RootQuery
  }
`;

const server = new ApolloServer({
  // we may destructure Author if supported by our Node version
  typeDefs: [SchemaDefinition, RootQuery].concat(Author.typeDefs),
  resolvers,
});

server.listen().then(({ url }) => {
  console.log(`🚀 Server ready at ${url}`);
});
```

diff --git a/docs/source/best-practices/security.md b/docs/source/best-practices/security.md
deleted file mode 100644
index 816fd00b..00000000
--- a/docs/source/best-practices/security.md
+++ /dev/null
@@ -1,82 +0,0 @@
---
title: Security
---

Apollo Server is a safer way to build applications, thanks to GraphQL's strong typing and the conversion of raw operations into a trusted syntax tree.
By validating each part of an operation, GraphQL is largely exempt from the injection attacks that are a concern in other data-driven applications.

This guide discusses additional security measures that further harden the excellent foundation GraphQL is already built upon. While Apollo Server enables some additional protections automatically, others require attention on the part of the developer.
## Introspection in production

Introspection is a powerful tool to have enabled during development, allowing developers to get real-time visibility into a GraphQL server's capabilities.

In production, such insight might be less desirable unless the server is intended to be a "public" API.

Therefore, Apollo Server's introspection is automatically disabled when `NODE_ENV` is set to `production`, in order to reduce visibility into the API.

Of course, no system should rely solely on so-called "security through obscurity", and this practice should be combined with other security techniques like open security and security by design.
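If the default needs to be overridden — for example, to keep introspection enabled for a deliberately public API even in production — Apollo Server accepts an `introspection` constructor option. A minimal sketch (the schema here is a placeholder):

```js
const { ApolloServer, gql } = require('apollo-server');

// placeholder schema for illustration
const typeDefs = gql`
  type Query {
    hello: String
  }
`;

const server = new ApolloServer({
  typeDefs,
  resolvers: { Query: { hello: () => 'world' } },
  // override the NODE_ENV-based default for a deliberately public API
  introspection: true,
});
```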
## Securing with SSL/TLS

You can secure all communication between clients and your GraphQL server by using SSL/TLS. Apollo Server, with subscriptions, can be configured to use the `https` module with `apollo-server-express`. See [example server code](../essentials/server.html#ssl).

Alternatively, you can use a reverse-proxy solution like [NGINX](https://www.nginx.com/) or [Traefik](https://traefik.io/). An additional benefit of using Traefik is that you can use a free [Let's Encrypt SSL certificate](http://niels.nu/blog/2017/traefik-https-letsencrypt.html).
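A rough sketch of the `apollo-server-express` approach, assuming `express` is installed and using hypothetical certificate paths and a placeholder schema:

```js
const express = require('express');
const https = require('https');
const fs = require('fs');
const { ApolloServer, gql } = require('apollo-server-express');

const server = new ApolloServer({
  typeDefs: gql`
    type Query {
      hello: String
    }
  `,
  resolvers: { Query: { hello: () => 'world' } },
});

const app = express();
server.applyMiddleware({ app });

https
  .createServer(
    {
      // hypothetical certificate paths
      key: fs.readFileSync('./ssl/server.key'),
      cert: fs.readFileSync('./ssl/server.crt'),
    },
    app
  )
  .listen(443, () => {
    console.log(`🚀 Server ready at https://localhost${server.graphqlPath}`);
  });
```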
## Injection prevention

As we build out our schema, it can be tempting to let shortcut arguments creep in that carry security risks. This most commonly happens with filters and with mutation inputs:

```graphql
query OhNo {
  users(filter: "id = 1;' sql injection goes here!") {
    id
  }
}

mutation Dang {
  updateUser(user: { firstName: "James", id: 1 }) {
    success
  }
}
```

In the first operation, we are passing a database filter directly as a string. This opens the door for SQL injection, since the string is preserved from the client all the way to the server.

In the second operation, we are passing an `id` value which may let an attacker update information for someone else! This often happens when generic input types are created for corresponding data sources:

```graphql
# used for both creating and updating a user
input UserInput {
  id: Int
  firstName: String
}

type Mutation {
  createUser(user: UserInput): User
  updateUser(user: UserInput): User
}
```

The fix for both of these attack vectors is to create more detailed arguments and let Apollo Server's validation step filter out bad values, and to **never** pass raw values from a client into our data source.
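To make the safer pattern concrete, here is a sketch of an update resolver that takes the target id from the authenticated context rather than from client input, and passes client values only as bound parameters. `db` is a hypothetical data-access object with a pg-style parameterized `query(text, values)` method:

```js
// Sketch: never interpolate client input into SQL.
const resolvers = {
  Mutation: {
    async updateUser(_, { input }, { db, userId }) {
      // the id comes from the authenticated context, not from the client
      const result = await db.query(
        'UPDATE users SET first_name = $1 WHERE id = $2 RETURNING *',
        [input.firstName, userId] // values are bound, never concatenated
      );
      return result.rows[0];
    },
  },
};
```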
## Denial-of-Service (DoS) Protection

Apollo Server is a Node.js application, and standard precautions should be taken to avoid Denial-of-Service (DoS) attacks.

Since GraphQL involves the traversal of a graph in which circular relationships of arbitrary depth might be accessible, some additional precautions can be taken to limit the risks of complexity Denial-of-Service (CDoS) attacks, where a bad actor could craft expensive operations and lock up resources indefinitely.

There are two common techniques to mitigate CDoS risks, and they can be enabled together:

1. **Operation white-listing**

   By hashing the potential operations a client might send (e.g. based on field names) and storing these "permitted" hashes on the server (or a shared cache), it becomes possible to check incoming operations against the permitted hashes and skip execution if the hash is not allowed.

   Since many consumers of non-public APIs have their operations statically defined within their source code, this technique is often sufficient and is best implemented as an automated deployment step.

2. **Complexity limits**

   These can be used to limit the use of queries which, for example, request a list of books including the authors of each book, plus the books of those authors, and _their_ authors, and so on. By limiting operations to an application-defined depth of "_n_", such queries can be easily prevented.

   We suggest implementing complexity limits using community-provided packages like [graphql-depth-limit](https://github.com/stems/graphql-depth-limit) and [graphql-validation-complexity](https://github.com/4Catalyzer/graphql-validation-complexity).

> For additional information on securing a GraphQL server deployment, check out [Securing your GraphQL API from malicious queries](https://blog.apollographql.com/securing-your-graphql-api-from-malicious-queries-16130a324a6b) by Spectrum co-founder, Max Stoiber.
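As a sketch of the second technique, assuming the `graphql-depth-limit` package is installed, its rule can be passed to Apollo Server's `validationRules` option:

```js
const depthLimit = require('graphql-depth-limit');
const { ApolloServer } = require('apollo-server');
const { typeDefs, resolvers } = require('./schema'); // defined elsewhere

const server = new ApolloServer({
  typeDefs,
  resolvers,
  // reject any operation nested more than 5 levels deep
  validationRules: [depthLimit(5)],
});
```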
diff --git a/docs/source/best-practices/testing.md b/docs/source/best-practices/testing.md
deleted file mode 100644
index 802f2566..00000000
--- a/docs/source/best-practices/testing.md
+++ /dev/null
@@ -1,13 +0,0 @@
---
title: Testing
---

Intro section about separation of concerns making GraphQL ideal for unit testing as well as integration testing

> (James) Add API for ApolloServer to make it easy to run integration tests against? Dependency injection anyone?

## Unit testing resolvers

## Integration testing operations

## Using your schema to mock data for client testing
diff --git a/docs/source/best-practices/versioning.md b/docs/source/best-practices/versioning.md
deleted file mode 100644
index 7999c2d1..00000000
--- a/docs/source/best-practices/versioning.md
+++ /dev/null
@@ -1,10 +0,0 @@
---
title: Versioning
description: How to add and remove parts of your schema without breaking your clients
---

tl;dr don't. Use a tool like Engine (one day) to help you iterate

## Why versioning isn't needed

## Practical examples of field rollovers
\ No newline at end of file