mirror of
https://github.com/vale981/apollo-server
synced 2025-03-04 17:21:42 -05:00
Merge branch 'master' into re-stage-pr-1971
commit b4df18b31a
64 changed files with 1589 additions and 1021 deletions
20
CHANGELOG.md
@@ -2,7 +2,23 @@
### vNEXT

### v2.4.0

- Implement an in-memory cache store to save parsed and validated documents and provide performance benefits for repeat executions of the same document. [PR #2111](https://github.com/apollographql/apollo-server/pull/2111) (`>=2.4.0-alpha.0`)
- Fix: Serialize arrays as JSON on fetch in `RESTDataSource`. [PR #2219](https://github.com/apollographql/apollo-server/pull/2219)
- Fix: The `privateHeaders` configuration for `apollo-engine-reporting` now allows headers to be specified using any case and lower-cases them prior to comparison. [PR #2276](https://github.com/apollographql/apollo-server/pull/2276)

### v2.3.3

- `apollo-server` (only): Stop double-invocation of `serverWillStart` life-cycle event. (More specific integrations - e.g. Express, Koa, Hapi, etc. - were unaffected.) [PR #2239](https://github.com/apollographql/apollo-server/pull/2239)
- Avoid traversing `graphql-upload` module tree in run-time environments which aren't Node.js. [PR #2235](https://github.com/apollographql/apollo-server/pull/2235)

### v2.3.2

- Switch from `json-stable-stringify` to `fast-json-stable-stringify`. [PR #2065](https://github.com/apollographql/apollo-server/pull/2065)
- Fix cache hints of `maxAge: 0` to mean "uncachable". [#2197](https://github.com/apollographql/apollo-server/pull/2197)
- Apply `defaultMaxAge` to scalar fields on the root object. [#2210](https://github.com/apollographql/apollo-server/pull/2210)
- Don't write to the persisted query cache until execution will begin. [PR #2227](https://github.com/apollographql/apollo-server/pull/2227)

### v2.3.1

@@ -17,9 +33,9 @@
While Node.js 6.x is covered by a [Long Term Support agreement by the Node.js Foundation](https://github.com/nodejs/Release#release-schedule) until April 2019, there are substantial performance (e.g. [V8](https://v8.dev/) improvements) and language changes (e.g. "modern" ECMAScript support) offered by newer Node.js engines (e.g. 8.x, 10.x). We encourage _all users_ of Apollo Server to update to newer LTS versions of Node.js prior to the "end-of-life" dates for their current server version.

**We intend to drop support for Node.js 6.x in the next major version of Apollo Server.**

For more information, see [PR #2054](https://github.com/apollographql/apollo-server/pull/2054) and [our documentation](https://www.apollographql.com/docs/apollo-server/v2/migration-file-uploads.html).

### v2.2.7

- `apollo-engine-reporting`: When multiple instances of `apollo-engine-reporting` are loaded (an uncommon edge case), ensure that `encodedTraces` are handled only once rather than once per loaded instance. [PR #2040](https://github.com/apollographql/apollo-server/pull/2040)
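The first v2.4.0 entry above describes caching parsed and validated documents so repeat executions of the same document skip that work. A minimal standalone sketch of the idea (illustrative only, not Apollo Server's actual implementation; `DocumentStore` and its LRU policy are assumptions for the example):

```javascript
// Illustrative only: cache the result of parsing/validating a query string,
// keyed by the exact query text, with a simple LRU eviction policy.
class DocumentStore {
  constructor(maxEntries = 100) {
    this.maxEntries = maxEntries;
    this.cache = new Map(); // Map preserves insertion order, enabling LRU.
  }

  // Returns the cached document for `queryString`, or parses and caches it.
  get(queryString, parseFn) {
    if (this.cache.has(queryString)) {
      const doc = this.cache.get(queryString);
      // Refresh recency: re-inserting moves the key to the end of the Map.
      this.cache.delete(queryString);
      this.cache.set(queryString, doc);
      return doc;
    }
    const doc = parseFn(queryString); // e.g. graphql-js parse + validate
    this.cache.set(queryString, doc);
    if (this.cache.size > this.maxEntries) {
      // Evict the least-recently-used entry (the first key in the Map).
      this.cache.delete(this.cache.keys().next().value);
    }
    return doc;
  }
}
```

Repeat executions of the same document then pay only the cache-lookup cost instead of re-parsing and re-validating.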
@@ -171,7 +171,7 @@ The `applyMiddleware` method is provided by the `apollo-server-{integration}` pa

  Pass the integration-specific CORS options. `false` removes the CORS middleware and `true` uses the defaults.

* `bodyParser`: <`Object` | `boolean`> ([express](https://github.com/expressjs/body-parser#body-parser))
* `bodyParserConfig`: <`Object` | `boolean`> ([express](https://github.com/expressjs/body-parser#body-parser))

  Pass the body-parser options. `false` removes the body-parser middleware and `true` uses the defaults.

@@ -303,8 +303,10 @@ addMockFunctionsToSchema({

* `calculateSignature`: (ast: DocumentNode, operationName: string) => string

  Specify the function for creating a signature for a query. See signature.ts
  for details.
  Specify the function for creating a signature for a query.

  > See [`apollo-graphql`'s `signature.ts`](https://npm.im/apollo-graphql)
  > for more information on how the default signature is generated.

* `reportIntervalMs`: number
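The `cors` and `bodyParserConfig` options documented above can be combined when calling `applyMiddleware`. A hedged illustration (the option names follow the docs in this diff; the `app` object is a stand-in, not a real Express instance):

```javascript
// Sketch: disable the CORS middleware entirely and pass explicit
// body-parser options instead of the defaults.
const app = {}; // stands in for an Express app in this sketch

const middlewareOptions = {
  app,
  cors: false,                         // false removes the CORS middleware
  bodyParserConfig: { limit: '50mb' }, // forwarded to body-parser
};

// With a real server instance:
// server.applyMiddleware(middlewareOptions);
```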
@@ -105,7 +105,7 @@ const resolverFunctions = {
  MyCustomScalar: myCustomScalarType
};

const server = new ApolloServer({ typeDefs: schemaString, resolvers: resolveFunctions });
const server = new ApolloServer({ typeDefs: schemaString, resolvers: resolverFunctions });

server.listen().then(({ url }) => {
  console.log(`🚀 Server ready at ${url}`)

@@ -292,7 +292,7 @@ server.listen().then(({ url }) => {

<h3 id="internal-values">Internal values</h3>

Sometimes a backend forces a different value for an enum internally than in the public API. In this exmple the API contains `RED`, however in resolvers we use `#f00` instead. The `resolvers` argument to `ApolloServer` allows the addition custom values to enums that only exist internally:
Sometimes a backend forces a different value for an enum internally than in the public API. In this example the API contains `RED`, however in resolvers we use `#f00` instead. The `resolvers` argument to `ApolloServer` allows the addition of custom values to enums that only exist internally:

```js
const resolvers = {
@@ -218,7 +218,9 @@ For additional information, check out the [guide on configuring GraphQL playgrou

### File Uploads

For server integrations that support file uploads(express, hapi, koa, etc), Apollo Server enables file uploads by default. To enable file uploads, reference the `Upload` type in the schema passed to the Apollo Server construction.
> Note: This feature is incompatible with `graphql-tools`' schema stitching. See [this issue](https://github.com/apollographql/graphql-tools/issues/671) for additional details.

For server integrations that support file uploads (e.g. Express, hapi, Koa), Apollo Server enables file uploads by default. To enable file uploads, reference the `Upload` type in the schema passed to the Apollo Server construction.

```js
const { ApolloServer, gql } = require('apollo-server');

@@ -21,7 +21,7 @@ module.exports = {
  // We don't want to match `apollo-server-env` and
  // `apollo-engine-reporting-protobuf`, because these don't depend on
  // compilation but need to be initialized as part of `prepare`.
  '^(?!apollo-server-env|apollo-engine-reporting-protobuf)(apollo-(?:server|datasource|cache-control|tracing|engine)[^/]*|graphql-extensions)(?:/dist)?((?:/.*)|$)': '<rootDir>/../../packages/$1/src$2'
  '^(?!apollo-server-env|apollo-engine-reporting-protobuf)(apollo-(?:server|graphql|datasource|cache-control|tracing|engine)[^/]*|graphql-extensions)(?:/dist)?((?:/.*)|$)': '<rootDir>/../../packages/$1/src$2'
},
clearMocks: true,
globals: {
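The `moduleNameMapper` entry above is easier to follow when exercised directly. A sketch using the updated pattern (the one that adds `apollo-graphql` to the alternation), rewritten as a JS regex literal:

```javascript
// Jest uses this mapping to resolve package imports to monorepo sources.
// The negative lookahead skips the two packages that must not be remapped.
const pattern = /^(?!apollo-server-env|apollo-engine-reporting-protobuf)(apollo-(?:server|graphql|datasource|cache-control|tracing|engine)[^\/]*|graphql-extensions)(?:\/dist)?((?:\/.*)|$)/;

const mapTo = request =>
  request.replace(pattern, '<rootDir>/../../packages/$1/src$2');
```

`mapTo('apollo-graphql')` resolves to the package's `src` directory, while an excluded name like `apollo-server-env` passes through unchanged.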
1497
package-lock.json
generated
File diff suppressed because it is too large
40
package.json
@@ -40,6 +40,7 @@
"apollo-datasource-rest": "file:packages/apollo-datasource-rest",
"apollo-engine-reporting": "file:packages/apollo-engine-reporting",
"apollo-engine-reporting-protobuf": "file:packages/apollo-engine-reporting-protobuf",
"apollo-graphql": "file:packages/apollo-graphql",
"apollo-server": "file:packages/apollo-server",
"apollo-server-azure-functions": "file:packages/apollo-server-azure-functions",
"apollo-server-cache-memcached": "file:packages/apollo-server-cache-memcached",

@@ -64,23 +65,24 @@
},
"devDependencies": {
"@types/async-retry": "1.2.1",
"@types/aws-lambda": "8.10.17",
"@types/aws-lambda": "8.10.19",
"@types/body-parser": "1.17.0",
"@types/connect": "3.4.32",
"@types/fast-json-stable-stringify": "^2.0.0",
"@types/fast-json-stable-stringify": "2.0.0",
"@types/fibers": "0.0.30",
"@types/graphql": "14.0.5",
"@types/hapi": "17.8.4",
"@types/jest": "23.3.13",
"@types/hapi": "17.8.5",
"@types/jest": "23.3.14",
"@types/koa-multer": "1.0.0",
"@types/koa-router": "7.0.38",
"@types/koa-router": "7.0.39",
"@types/lodash": "4.14.120",
"@types/lodash.sortby": "4.7.4",
"@types/lru-cache": "4.1.1",
"@types/memcached": "2.2.5",
"@types/micro": "7.3.3",
"@types/multer": "1.3.7",
"@types/node": "10.12.18",
"@types/node-fetch": "2.1.4",
"@types/node": "10.12.23",
"@types/node-fetch": "2.1.5",
"@types/redis": "2.8.10",
"@types/request": "2.48.1",
"@types/request-promise": "4.1.42",

@@ -88,8 +90,8 @@
"@types/type-is": "1.6.2",
"@types/ws": "6.0.1",
"apollo-fetch": "0.7.0",
"apollo-link": "1.2.6",
"apollo-link-http": "1.5.9",
"apollo-link": "1.2.8",
"apollo-link-http": "1.5.11",
"apollo-link-persisted-queries": "0.2.2",
"body-parser": "1.18.3",
"codecov": "3.1.0",

@@ -101,26 +103,26 @@
"graphql": "14.1.1",
"graphql-subscriptions": "1.0.0",
"graphql-tag": "2.10.1",
"graphql-tools": "4.0.3",
"hapi": "17.8.1",
"graphql-tools": "4.0.4",
"hapi": "17.8.4",
"husky": "1.3.1",
"jest": "23.6.0",
"jest-junit": "5.2.0",
"jest-matcher-utils": "23.6.0",
"js-sha256": "0.9.0",
"koa": "2.6.2",
"koa": "2.7.0",
"koa-multer": "1.0.2",
"lerna": "3.10.6",
"lint-staged": "8.1.0",
"lerna": "3.11.0",
"lint-staged": "8.1.3",
"memcached-mock": "0.1.0",
"meteor-promise": "0.8.7",
"mock-req": "0.2.0",
"multer": "1.4.1",
"node-fetch": "2.3.0",
"prettier": "1.15.3",
"prettier": "1.16.4",
"prettier-check": "2.0.0",
"qs-middleware": "1.0.3",
"redis-mock": "0.42.0",
"redis-mock": "0.43.0",
"request": "2.88.0",
"request-promise": "4.2.2",
"subscriptions-transport-ws": "0.9.15",

@@ -128,9 +130,9 @@
"test-listen": "1.1.0",
"ts-jest": "23.10.5",
"tslint": "5.12.1",
"typescript": "3.2.4",
"ws": "6.1.2",
"yup": "0.26.7"
"typescript": "3.3.3",
"ws": "6.1.3",
"yup": "0.26.10"
},
"jest": {
"projects": [
@@ -1,5 +1,27 @@
# Changelog

> **A note on omitted versions**: Due to the way that the Apollo Server
> monorepo releases occur (via Lerna with _exact_ version pinning), the
> version of the `apollo-cache-control` package is sometimes bumped and
> published despite having no functional changes in its behavior. We will
> always attempt to specifically mention functional changes to the
> `apollo-cache-control` package within this particular `CHANGELOG.md`.

### v0.4.1

* Fix cache hints of `maxAge: 0` to mean "uncachable". (#2197)
* Apply `defaultMaxAge` to scalar fields on the root object. (#2210)

### v0.3.0

* Support calculating Cache-Control HTTP headers when used by `apollo-server@2.0.0`.

### v0.2.0

Moved to the `apollo-server` git repository. No code changes. (There are a
number of other 0.2.x releases with no code changes due to how the
`apollo-server` release process works.)

### v0.1.1

* Fix `defaultMaxAge` feature (introduced in 0.1.0) so that `maxAge: 0` overrides the default, as previously documented.
@@ -1,6 +1,6 @@
{
"name": "apollo-cache-control",
"version": "0.4.0",
"version": "0.5.0",
"description": "A GraphQL extension for cache control",
"main": "./dist/index.js",
"types": "./dist/index.d.ts",

@@ -30,6 +30,25 @@ describe('@cacheControl directives', () => {
    expect(hints).toContainEqual({ path: ['droid'], maxAge: 0 });
  });

  it('should set maxAge: 0 and no scope for a top-level scalar field without cache hints', async () => {
    const schema = buildSchemaWithCacheControlSupport(`
      type Query {
        name: String
      }
    `);

    const hints = await collectCacheControlHints(
      schema,
      `
        query {
          name
        }
      `,
    );

    expect(hints).toContainEqual({ path: ['name'], maxAge: 0 });
  });

  it('should set maxAge to the default and no scope for a field without cache hints', async () => {
    const schema = buildSchemaWithCacheControlSupport(`
      type Query {
@@ -0,0 +1,68 @@
import { ResponsePath } from 'graphql';
import { CacheControlExtension, CacheScope } from '../';

describe('CacheControlExtension', () => {
  let cacheControlExtension: CacheControlExtension;

  beforeEach(() => {
    cacheControlExtension = new CacheControlExtension();
  });

  describe('computeOverallCachePolicy', () => {
    const responsePath: ResponsePath = {
      key: 'test',
      prev: undefined,
    };
    const responseSubPath: ResponsePath = {
      key: 'subTest',
      prev: responsePath,
    };
    const responseSubSubPath: ResponsePath = {
      key: 'subSubTest',
      prev: responseSubPath,
    };

    it('returns undefined without cache hints', () => {
      const cachePolicy = cacheControlExtension.computeOverallCachePolicy();
      expect(cachePolicy).toBeUndefined();
    });

    it('returns lowest max age value', () => {
      cacheControlExtension.addHint(responsePath, { maxAge: 10 });
      cacheControlExtension.addHint(responseSubPath, { maxAge: 20 });

      const cachePolicy = cacheControlExtension.computeOverallCachePolicy();
      expect(cachePolicy).toHaveProperty('maxAge', 10);
    });

    it('returns undefined if any cache hint has a maxAge of 0', () => {
      cacheControlExtension.addHint(responsePath, { maxAge: 120 });
      cacheControlExtension.addHint(responseSubPath, { maxAge: 0 });
      cacheControlExtension.addHint(responseSubSubPath, { maxAge: 20 });

      const cachePolicy = cacheControlExtension.computeOverallCachePolicy();
      expect(cachePolicy).toBeUndefined();
    });

    it('returns PUBLIC scope by default', () => {
      cacheControlExtension.addHint(responsePath, { maxAge: 10 });

      const cachePolicy = cacheControlExtension.computeOverallCachePolicy();
      expect(cachePolicy).toHaveProperty('scope', CacheScope.Public);
    });

    it('returns PRIVATE scope if any cache hint has PRIVATE scope', () => {
      cacheControlExtension.addHint(responsePath, {
        maxAge: 10,
        scope: CacheScope.Public,
      });
      cacheControlExtension.addHint(responseSubPath, {
        maxAge: 10,
        scope: CacheScope.Private,
      });

      const cachePolicy = cacheControlExtension.computeOverallCachePolicy();
      expect(cachePolicy).toHaveProperty('scope', CacheScope.Private);
    });
  });
});
@@ -75,26 +75,27 @@ export class CacheControlExtension<TContext = any>
    }
  }

  // If this field is a field on an object, look for hints on the field
  // itself, taking precedence over previously calculated hints.
  const parentType = info.parentType;
  if (parentType instanceof GraphQLObjectType) {
    const fieldDef = parentType.getFields()[info.fieldName];
    if (fieldDef.astNode) {
      hint = mergeHints(
        hint,
        cacheHintFromDirectives(fieldDef.astNode.directives),
      );
    }
  // Look for hints on the field itself (on its parent type), taking
  // precedence over previously calculated hints.
  const fieldDef = info.parentType.getFields()[info.fieldName];
  if (fieldDef.astNode) {
    hint = mergeHints(
      hint,
      cacheHintFromDirectives(fieldDef.astNode.directives),
    );
  }

  // If this resolver returns an object and we haven't seen an explicit maxAge
  // hint, set the maxAge to 0 (uncached) or the default if specified in the
  // constructor. (Non-object fields by default are assumed to inherit their
  // cacheability from their parents.)
  // If this resolver returns an object or is a root field and we haven't seen
  // an explicit maxAge hint, set the maxAge to 0 (uncached) or the default if
  // specified in the constructor. (Non-object fields by default are assumed
  // to inherit their cacheability from their parents. But on the other hand,
  // while root non-object fields can get explicit hints from their definition
  // on the Query/Mutation object, if that doesn't exist then there's no
  // parent field that would assign the default maxAge, so we do it here.)
  if (
    (targetType instanceof GraphQLObjectType ||
      targetType instanceof GraphQLInterfaceType) &&
      targetType instanceof GraphQLInterfaceType ||
      !info.path.prev) &&
    hint.maxAge === undefined
  ) {
    hint.maxAge = this.defaultMaxAge;

@@ -156,16 +157,19 @@ export class CacheControlExtension<TContext = any>
  let scope: CacheScope = CacheScope.Public;

  for (const hint of this.hints.values()) {
    if (hint.maxAge) {
      lowestMaxAge = lowestMaxAge
        ? Math.min(lowestMaxAge, hint.maxAge)
        : hint.maxAge;
    if (hint.maxAge !== undefined) {
      lowestMaxAge =
        lowestMaxAge !== undefined
          ? Math.min(lowestMaxAge, hint.maxAge)
          : hint.maxAge;
    }
    if (hint.scope === CacheScope.Private) {
      scope = CacheScope.Private;
    }
  }

  // If maxAge is 0, then we consider it uncacheable so it doesn't matter what
  // the scope was.
  return lowestMaxAge
    ? {
        maxAge: lowestMaxAge,
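The hunks above implement the policy rules exercised by the tests earlier in the diff: take the lowest `maxAge` across hints, treat `maxAge: 0` (or no `maxAge` at all) as uncacheable, and downgrade to `PRIVATE` if any hint is private. A condensed standalone sketch of that logic (plain JS for illustration, not the class above):

```javascript
// Reduce a list of cache hints to an overall policy, mirroring the rules
// in computeOverallCachePolicy above.
function overallCachePolicy(hints) {
  let lowestMaxAge;
  let scope = 'PUBLIC';
  for (const hint of hints) {
    if (hint.maxAge !== undefined) {
      lowestMaxAge =
        lowestMaxAge !== undefined
          ? Math.min(lowestMaxAge, hint.maxAge)
          : hint.maxAge;
    }
    if (hint.scope === 'PRIVATE') {
      scope = 'PRIVATE';
    }
  }
  // A lowest maxAge of 0 (or no maxAge hints at all) means uncacheable,
  // so the scope no longer matters.
  return lowestMaxAge ? { maxAge: lowestMaxAge, scope } : undefined;
}
```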
@@ -1,6 +1,6 @@
{
"name": "apollo-datasource-rest",
"version": "0.2.1",
"version": "0.3.0",
"author": "opensource@apollographql.com",
"license": "MIT",
"repository": {

@@ -218,11 +218,12 @@ export abstract class RESTDataSource<TContext = any> extends DataSource {
      url.searchParams.append(name, value);
    }

    // We accept arbitrary objects as body and serialize them as JSON
    // We accept arbitrary objects and arrays as body and serialize them as JSON
    if (
      options.body !== undefined &&
      options.body !== null &&
      (options.body.constructor === Object ||
        Array.isArray(options.body) ||
        ((options.body as any).toJSON &&
          typeof (options.body as any).toJSON === 'function'))
    ) {

@@ -268,6 +268,31 @@ describe('RESTDataSource', () => {
    );
  });

  it('serializes a request body that is an array as JSON', async () => {
    const dataSource = new class extends RESTDataSource {
      baseURL = 'https://api.example.com';

      postFoo(foo) {
        return this.post('foo', foo);
      }
    }();

    dataSource.httpCache = httpCache;

    fetch.mockJSONResponseOnce();

    await dataSource.postFoo(['foo', 'bar']);

    expect(fetch.mock.calls.length).toEqual(1);
    expect(fetch.mock.calls[0][0].url).toEqual('https://api.example.com/foo');
    expect(fetch.mock.calls[0][0].body).toEqual(
      JSON.stringify(['foo', 'bar']),
    );
    expect(fetch.mock.calls[0][0].headers.get('Content-Type')).toEqual(
      'application/json',
    );
  });

  it('serializes a request body that has a toJSON method as JSON', async () => {
    const dataSource = new class extends RESTDataSource {
      baseURL = 'https://api.example.com';
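The condition in the `RESTDataSource` hunk above decides when a request body gets JSON-serialized. Extracted as a standalone predicate (a sketch, minus the TypeScript casts):

```javascript
// A body is serialized as JSON when it is a plain object, an array, or any
// value exposing a toJSON method (e.g. a Date). Strings pass through
// untouched.
function shouldSerializeAsJson(body) {
  return (
    body !== undefined &&
    body !== null &&
    (body.constructor === Object ||
      Array.isArray(body) ||
      typeof body.toJSON === 'function')
  );
}
```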
@@ -1,6 +1,6 @@
{
"name": "apollo-datasource",
"version": "0.2.1",
"version": "0.3.0",
"author": "opensource@apollographql.com",
"license": "MIT",
"repository": {

@@ -1,4 +1,10 @@
### vNext

* Initial release.
# v1.0.0

* The signature functions which were previously exported from this package's
  main module have been removed from `apollo-engine-reporting` and
  moved to the `apollo-graphql` package. They should be more universally
  helpful in that library, and should avoid tooling which needs to use them
  from needing to bring in all of `apollo-server-core`.
@@ -1,6 +1,6 @@
{
"name": "apollo-engine-reporting",
"version": "0.2.0",
"version": "1.0.0",
"description": "Send reports about your GraphQL services to Apollo Engine",
"main": "./dist/index.js",
"types": "./dist/index.d.ts",

@@ -12,9 +12,10 @@
},
"dependencies": {
"apollo-engine-reporting-protobuf": "file:../apollo-engine-reporting-protobuf",
"apollo-graphql": "file:../apollo-graphql",
"apollo-server-core": "file:../apollo-server-core",
"apollo-server-env": "file:../apollo-server-env",
"async-retry": "^1.2.1",
"graphql-extensions": "file:../graphql-extensions",
"lodash": "^4.17.10"
"graphql-extensions": "file:../graphql-extensions"
}
}

@@ -16,7 +16,7 @@ import {
import { Trace, google } from 'apollo-engine-reporting-protobuf';

import { EngineReportingOptions, GenerateClientInfo } from './agent';
import { defaultSignature } from './signature';
import { defaultEngineReportingSignature } from 'apollo-graphql';
import { GraphQLRequestContext } from 'apollo-server-core/dist/requestPipelineAPI';

const clientNameHeaderKey = 'apollographql-client-name';

@@ -125,11 +125,14 @@ export class EngineReportingExtension<TContext = any>
    for (const [key, value] of o.request.headers) {
      if (
        this.options.privateHeaders &&
        typeof this.options.privateHeaders === 'object' &&
        Array.isArray(this.options.privateHeaders) &&
        // We assume that most users only have a few private headers, or will
        // just set privateHeaders to true; we can change this linear-time
        // operation if it causes real performance issues.
        this.options.privateHeaders.includes(key.toLowerCase())
        this.options.privateHeaders.some(privateHeader => {
          // Headers are case-insensitive, and should be compared as such.
          return privateHeader.toLowerCase() === key.toLowerCase();
        })
      ) {
        continue;
      }

@@ -164,7 +167,7 @@ export class EngineReportingExtension<TContext = any>
    Object.keys(o.variables).forEach(name => {
      if (
        this.options.privateVariables &&
        typeof this.options.privateVariables === 'object' &&
        Array.isArray(this.options.privateVariables) &&
        // We assume that most users will have only a few private variables,
        // or will just set privateVariables to true; we can change this
        // linear-time operation if it causes real performance issues.

@@ -214,7 +217,7 @@ export class EngineReportingExtension<TContext = any>
    let signature;
    if (this.documentAST) {
      const calculateSignature =
        this.options.calculateSignature || defaultSignature;
        this.options.calculateSignature || defaultEngineReportingSignature;
      signature = calculateSignature(this.documentAST, operationName);
    } else if (this.queryString) {
      // We didn't get an AST, possibly because of a parse failure. Let's just

@@ -1,10 +1 @@
export {
  hideLiterals,
  dropUnusedDefinitions,
  sortAST,
  removeAliases,
  printWithReducedWhitespace,
  defaultSignature,
} from './signature';

export { EngineReportingOptions, EngineReportingAgent } from './agent';

@@ -7,6 +7,8 @@
"include": ["src/**/*"],
"exclude": ["**/__tests__", "**/__mocks__"],
"references": [
  { "path": "../graphql-extensions" }
  { "path": "../graphql-extensions" },
  { "path": "../apollo-graphql" },
  { "path": "../apollo-server-core/tsconfig.requestPipelineAPI.json" }
]
}
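The `privateHeaders` hunk above replaces an exact `includes` check with a case-insensitive comparison (the PR #2276 fix from the changelog). The comparison on its own, as a small sketch:

```javascript
// A header is omitted from the trace report when any configured private
// header matches it, compared case-insensitively on both sides.
function isPrivateHeader(headerName, privateHeaders) {
  return privateHeaders.some(
    privateHeader => privateHeader.toLowerCase() === headerName.toLowerCase(),
  );
}
```

This is why headers may now be specified in any case (`Authorization` or `authorization`) in the configuration.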
6
packages/apollo-graphql/.npmignore
Normal file

@@ -0,0 +1,6 @@
*
!src/**/*
!dist/**/*
dist/**/*.test.*
!package.json
!README.md
4
packages/apollo-graphql/CHANGELOG.md
Normal file

@@ -0,0 +1,4 @@
# Change Log

### vNEXT
1
packages/apollo-graphql/README.md
Normal file

@@ -0,0 +1 @@
# `apollo-graphql`
3
packages/apollo-graphql/jest.config.js
Normal file

@@ -0,0 +1,3 @@
const config = require('../../jest.config.base');

module.exports = Object.assign(Object.create(null), config);
19
packages/apollo-graphql/package.json
Normal file

@@ -0,0 +1,19 @@
{
  "name": "apollo-graphql",
  "version": "0.1.0",
  "description": "Apollo GraphQL utility library",
  "main": "dist/index.js",
  "types": "dist/index.d.ts",
  "keywords": [],
  "author": "Apollo <opensource@apollographql.com>",
  "license": "MIT",
  "engines": {
    "node": ">=6"
  },
  "dependencies": {
    "lodash.sortby": "^4.7.0"
  },
  "peerDependencies": {
    "graphql": "^14.0.0"
  }
}
@@ -7,82 +7,12 @@ import {
  dropUnusedDefinitions,
  sortAST,
  removeAliases,
} from '../signature';
} from '../transforms';

// The gql duplicate fragment warning feature really is just warnings; nothing
// breaks if you turn it off in tests.
disableFragmentWarnings();

describe('printWithReducedWhitespace', () => {
  const cases = [
    {
      name: 'lots of whitespace',
      // Note: there's a tab after "tab->", which prettier wants to keep as a
      // literal tab rather than \t. In the output, there should be a literal
      // backslash-t.
      input: gql`
        query Foo($a: Int) {
          user(
            name: " tab-> yay"
            other: """
              apple
              bag
              cat
            """
          ) {
            name
          }
        }
      `,
      output:
        'query Foo($a:Int){user(name:" tab->\\tyay",other:"apple\\n bag\\ncat"){name}}',
    },
  ];
  cases.forEach(({ name, input, output }) => {
    test(name, () => {
      expect(printWithReducedWhitespace(input)).toEqual(output);
    });
  });
});

describe('hideLiterals', () => {
  const cases = [
    {
      name: 'full test',
      input: gql`
        query Foo($b: Int, $a: Boolean) {
          user(name: "hello", age: 5) {
            ...Bar
            ... on User {
              hello
              bee
            }
            tz
            aliased: name
          }
        }

        fragment Bar on User {
          age @skip(if: $a)
          ...Nested
        }

        fragment Nested on User {
          blah
        }
      `,
      output:
        'query Foo($b:Int,$a:Boolean){user(name:"",age:0){...Bar...on User{hello bee}tz aliased:name}}' +
        'fragment Bar on User{age@skip(if:$a)...Nested}fragment Nested on User{blah}',
    },
  ];
  cases.forEach(({ name, input, output }) => {
    test(name, () => {
      expect(printWithReducedWhitespace(hideLiterals(input))).toEqual(output);
    });
  });
});

describe('aggressive signature', () => {
  function aggressive(ast: DocumentNode, operationName: string): string {
    return printWithReducedWhitespace(
77
packages/apollo-graphql/src/__tests__/transforms.test.ts
Normal file

@@ -0,0 +1,77 @@
import { default as gql, disableFragmentWarnings } from 'graphql-tag';

import { printWithReducedWhitespace, hideLiterals } from '../transforms';

// The gql duplicate fragment warning feature really is just warnings; nothing
// breaks if you turn it off in tests.
disableFragmentWarnings();

describe('printWithReducedWhitespace', () => {
  const cases = [
    {
      name: 'lots of whitespace',
      // Note: there's a tab after "tab->", which prettier wants to keep as a
      // literal tab rather than \t. In the output, there should be a literal
      // backslash-t.
      input: gql`
        query Foo($a: Int) {
          user(
            name: " tab-> yay"
            other: """
              apple
              bag
              cat
            """
          ) {
            name
          }
        }
      `,
      output:
        'query Foo($a:Int){user(name:" tab->\\tyay",other:"apple\\n bag\\ncat"){name}}',
    },
  ];
  cases.forEach(({ name, input, output }) => {
    test(name, () => {
      expect(printWithReducedWhitespace(input)).toEqual(output);
    });
  });
});

describe('hideLiterals', () => {
  const cases = [
    {
      name: 'full test',
      input: gql`
        query Foo($b: Int, $a: Boolean) {
          user(name: "hello", age: 5) {
            ...Bar
            ... on User {
              hello
              bee
            }
            tz
            aliased: name
          }
        }

        fragment Bar on User {
          age @skip(if: $a)
          ...Nested
        }

        fragment Nested on User {
          blah
        }
      `,
      output:
        'query Foo($b:Int,$a:Boolean){user(name:"",age:0){...Bar...on User{hello bee}tz aliased:name}}' +
        'fragment Bar on User{age@skip(if:$a)...Nested}fragment Nested on User{blah}',
    },
  ];
  cases.forEach(({ name, input, output }) => {
    test(name, () => {
      expect(printWithReducedWhitespace(hideLiterals(input))).toEqual(output);
    });
  });
});
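The `hideLiterals` tests above operate on a parsed AST. As a rough intuition for what the transform does (a toy string-based sketch for illustration, not the real AST implementation):

```javascript
// Toy approximation: blank out string literals and zero out numeric
// literals, matching the effect hideLiterals has on the printed queries
// in the tests above.
function hideLiteralsNaive(query) {
  return query
    .replace(/"(?:[^"\\]|\\.)*"/g, '""') // string literals -> ""
    .replace(/\b\d+\b/g, '0');           // numeric literals -> 0
}
```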
7
packages/apollo-graphql/src/__tests__/tsconfig.json
Normal file

@@ -0,0 +1,7 @@
{
  "extends": "../../../../tsconfig.test.base",
  "include": ["**/*"],
  "references": [
    { "path": "../../" }
  ]
}
1
packages/apollo-graphql/src/index.ts
Normal file

@@ -0,0 +1 @@
export { defaultEngineReportingSignature } from './signature';
68 packages/apollo-graphql/src/signature.ts Normal file

@ -0,0 +1,68 @@
// In Engine, we want to group requests making the same query together, and
// treat different queries distinctly. But what does it mean for two queries to
// be "the same"? And what if you don't want to send the full text of the query
// to Apollo Engine's servers, either because it contains sensitive data or
// because it contains extraneous operations or fragments?
//
// To solve these problems, EngineReportingAgent has the concept of
// "signatures". We don't (by default) send the full query string of queries to
// the Engine servers. Instead, each trace has its query string's "signature".
//
// You can specify any function mapping a GraphQL query AST (DocumentNode) to
// string as your signature algorithm by providing it as the 'signature' option
// to the EngineReportingAgent constructor. Ideally, your signature should be a
// valid GraphQL query, though as of now the Engine servers do not re-parse your
// signature and do not expect it to match the execution tree in the trace.
//
// This module utilizes several AST transformations from the adjacent
// 'transforms' module (which are also useful for writing your own signature
// method):
//
// - dropUnusedDefinitions, which removes operations and fragments that
//   aren't going to be used in execution
// - hideLiterals, which replaces all numeric and string literals as well
//   as list and object input values with "empty" values
// - removeAliases, which removes field aliasing from the query
// - sortAST, which sorts the children of most multi-child nodes
//   consistently
// - printWithReducedWhitespace, a variant on graphql-js's 'print'
//   which gets rid of unneeded whitespace
//
// defaultSignature consists of applying all of these building blocks.
//
// Historical note: the default signature algorithm of the Go engineproxy
// performed all of the above operations, and the Engine servers then re-ran a
// mostly identical signature implementation on received traces. This was
// primarily to deal with edge cases where some users used literal interpolation
// instead of GraphQL variables, included randomized alias names, etc. In
// addition, the servers relied on the fact that dropUnusedDefinitions had been
// called in order (and that the signature could be parsed as GraphQL) to
// extract the name of the operation for display. This caused confusion, as the
// query document shown in the Engine UI wasn't the same as the one actually
// sent. apollo-engine-reporting uses a new reporting API which requires it to
// explicitly include the operation name with each signature; this means that
// the server no longer needs to parse the signature or run its own signature
// algorithm on it, and the details of the signature algorithm are now up to the
// reporting agent.

import { DocumentNode } from 'graphql';
import {
  printWithReducedWhitespace,
  dropUnusedDefinitions,
  removeAliases,
  sortAST,
  hideLiterals,
} from './transforms';

// The default signature function consists of removing unused definitions
// and whitespace.
// XXX consider caching somehow
export function defaultEngineReportingSignature(
  ast: DocumentNode,
  operationName: string,
): string {
  return printWithReducedWhitespace(
    sortAST(
      removeAliases(hideLiterals(dropUnusedDefinitions(ast, operationName))),
    ),
  );
}
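The signature function above operates on a parsed AST. As a rough, string-level illustration of its visible effect (a toy stand-in under stated assumptions, not the real implementation — these regexes would mis-handle escaped quotes, block strings, and variables), the normalization can be sketched as:

```typescript
// Toy, string-level approximation of the signature normalization.
// The real transforms walk a graphql-js DocumentNode; the regexes here are
// illustrative assumptions only.
function hideStringLiterals(query: string): string {
  // Replace every quoted string with an empty string literal.
  return query.replace(/"[^"]*"/g, '""');
}

function hideNumericLiterals(query: string): string {
  // Replace ints and floats with "0".
  return query.replace(/\b\d+(\.\d+)?\b/g, '0');
}

function reduceWhitespace(query: string): string {
  // Collapse runs of whitespace, then drop spaces around punctuation.
  return query
    .replace(/\s+/g, ' ')
    .replace(/\s*([{}():,])\s*/g, '$1')
    .trim();
}

const toySignature = (query: string): string =>
  reduceWhitespace(hideNumericLiterals(hideStringLiterals(query)));
```

For example, `toySignature('query { user(name: "alice", age: 42) { id } }')` yields `query{user(name:"",age:0){id}}`, matching the shape of the expected outputs in the tests above.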
@ -1,71 +1,26 @@
// XXX maybe this should just be its own graphql-signature package

// In Engine, we want to group requests making the same query together, and
// treat different queries distinctly. But what does it mean for two queries to
// be "the same"? And what if you don't want to send the full text of the query
// to Apollo Engine's servers, either because it contains sensitive data or
// because it contains extraneous operations or fragments?
//
// To solve these problems, EngineReportingAgent has the concept of
// "signatures". We don't (by default) send the full query string of queries to
// the Engine servers. Instead, each trace has its query string's "signature".
//
// You can specify any function mapping a GraphQL query AST (DocumentNode) to
// string as your signature algorithm by providing it as the 'signature' option
// to the EngineReportingAgent constructor. Ideally, your signature should be a
// valid GraphQL query, though as of now the Engine servers do not re-parse your
// signature and do not expect it to match the execution tree in the trace.
//
// This file provides several useful building blocks for writing your own
// signature function. These are:
//
// - dropUnusedDefinitions, which removes operations and fragments that
//   aren't going to be used in execution
// - hideLiterals, which replaces all numeric and string literals as well
//   as list and object input values with "empty" values
// - removeAliases, which removes field aliasing from the query
// - sortAST, which sorts the children of most multi-child nodes
//   consistently
// - printWithReducedWhitespace, a variant on graphql-js's 'print'
//   which gets rid of unneeded whitespace
//
// defaultSignature consists of applying all of these building blocks.
//
// Historical note: the default signature algorithm of the Go engineproxy
// performed all of the above operations, and the Engine servers then re-ran a
// mostly identical signature implementation on received traces. This was
// primarily to deal with edge cases where some users used literal interpolation
// instead of GraphQL variables, included randomized alias names, etc. In
// addition, the servers relied on the fact that dropUnusedDefinitions had been
// called in order (and that the signature could be parsed as GraphQL) to
// extract the name of the operation for display. This caused confusion, as the
// query document shown in the Engine UI wasn't the same as the one actually
// sent. apollo-engine-reporting uses a new reporting API which requires it to
// explicitly include the operation name with each signature; this means that
// the server no longer needs to parse the signature or run its own signature
// algorithm on it, and the details of the signature algorithm are now up to the
// reporting agent.

import { sortBy, ListIteratee } from 'lodash';

import { visit } from 'graphql/language/visitor';
import {
  print,
  visit,
  DocumentNode,
  FloatValueNode,
  IntValueNode,
  StringValueNode,
  OperationDefinitionNode,
  SelectionSetNode,
  FieldNode,
  FragmentSpreadNode,
  InlineFragmentNode,
  FragmentDefinitionNode,
  DirectiveNode,
  IntValueNode,
  FloatValueNode,
  StringValueNode,
  ListValueNode,
  FieldNode,
  FragmentDefinitionNode,
  ObjectValueNode,
  separateOperations,
} from 'graphql';
  ListValueNode,
} from 'graphql/language/ast';
import { print } from 'graphql/language/printer';
import { separateOperations } from 'graphql/utilities';
// We'll only fetch the `ListIteratee` type from the `@types/lodash`, but get
// `sortBy` from the modularized version of the package to avoid bringing in
// all of `lodash`.
import { ListIteratee } from 'lodash';
import sortBy from 'lodash.sortby';

// Replace numeric, string, list, and object literals with "empty"
// values. Leaves enums alone (since there's no consistent "zero" enum). This
@ -93,6 +48,22 @@ export function hideLiterals(ast: DocumentNode): DocumentNode {
  });
}

// In the same spirit as the similarly named `hideLiterals` function, only
// hide string and numeric literals.
export function hideStringAndNumericLiterals(ast: DocumentNode): DocumentNode {
  return visit(ast, {
    IntValue(node: IntValueNode): IntValueNode {
      return { ...node, value: '0' };
    },
    FloatValue(node: FloatValueNode): FloatValueNode {
      return { ...node, value: '0' };
    },
    StringValue(node: StringValueNode): StringValueNode {
      return { ...node, value: '', block: false };
    },
  });
}

// A GraphQL query may contain multiple named operations, with the operation to
// use specified separately by the client. This transformation drops unused
// operations from the query, as well as any fragment definitions that are not
@ -226,17 +197,3 @@ export function printWithReducedWhitespace(ast: DocumentNode): string {
      JSON.stringify(Buffer.from(hex, 'hex').toString('utf8')),
  );
}

// The default signature function consists of removing unused definitions
// and whitespace.
// XXX consider caching somehow
export function defaultSignature(
  ast: DocumentNode,
  operationName: string,
): string {
  return printWithReducedWhitespace(
    sortAST(
      removeAliases(hideLiterals(dropUnusedDefinitions(ast, operationName))),
    ),
  );
}
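The `JSON.stringify(Buffer.from(hex, 'hex')...)` line in the hunk above is the tail end of a hex round-trip: string literals are temporarily hex-encoded so that whitespace collapsing cannot touch their contents, then decoded back. A hypothetical, dependency-free sketch of that idea (regex-based and illustrative only — the real `printWithReducedWhitespace` prints from the AST):

```typescript
// Hex-encode the contents of every string literal so later whitespace
// collapsing can't alter spaces inside strings.
function encodeLiterals(query: string): string {
  return query.replace(/"([^"]*)"/g, (_m, s: string) =>
    `"${Buffer.from(s, 'utf8').toString('hex')}"`,
  );
}

// Decode the hex back into a JSON string literal.
function decodeLiterals(query: string): string {
  return query.replace(/"([0-9a-f]*)"/g, (_m, hex: string) =>
    JSON.stringify(Buffer.from(hex, 'hex').toString('utf8')),
  );
}

const printReduced = (query: string): string =>
  decodeLiterals(
    encodeLiterals(query)
      .replace(/\s+/g, ' ')
      .replace(/\s*([{}():,])\s*/g, '$1')
      .trim(),
  );
```

With this sketch, the internal double space in `"a  b"` survives even though all other whitespace is collapsed.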
10 packages/apollo-graphql/tsconfig.json Normal file

@ -0,0 +1,10 @@
{
  "extends": "../../tsconfig.base",
  "compilerOptions": {
    "rootDir": "./src",
    "outDir": "./dist"
  },
  "include": ["src/**/*"],
  "exclude": ["**/__tests__", "**/__mocks__"],
  "references": []
}
@ -1,6 +1,6 @@
{
  "name": "apollo-server-azure-functions",
  "version": "2.3.1",
  "version": "2.4.0",
  "description": "Production-ready Node.js GraphQL server for Azure Functions",
  "keywords": [
    "GraphQL",
@ -1,6 +1,6 @@
{
  "name": "apollo-server-cache-memcached",
  "version": "0.2.1",
  "version": "0.3.0",
  "author": "opensource@apollographql.com",
  "license": "MIT",
  "repository": {
@ -1,6 +1,6 @@
{
  "name": "apollo-server-cache-redis",
  "version": "0.2.1",
  "version": "0.3.0",
  "author": "opensource@apollographql.com",
  "license": "MIT",
  "repository": {
@ -1,6 +1,6 @@
{
  "name": "apollo-server-caching",
  "version": "0.2.1",
  "version": "0.3.0",
  "author": "opensource@apollographql.com",
  "license": "MIT",
  "repository": {
@ -1,32 +1,33 @@
import LRU from 'lru-cache';
import { KeyValueCache } from './KeyValueCache';

function defaultLengthCalculation(item: any) {
  if (Array.isArray(item) || typeof item === 'string') {
    return item.length;
  }

  // Go with the lru-cache default "naive" size, in lieu of anything better:
  // https://github.com/isaacs/node-lru-cache/blob/a71be6cd/index.js#L17
  return 1;
}

export class InMemoryLRUCache<V = string> implements KeyValueCache<V> {
  private store: LRU.Cache<string, V>;

  // FIXME: Define reasonable default max size of the cache
  constructor({ maxSize = Infinity }: { maxSize?: number } = {}) {
  constructor({
    maxSize = Infinity,
    sizeCalculator = defaultLengthCalculation,
    onDispose,
  }: {
    maxSize?: number;
    sizeCalculator?: (value: V, key: string) => number;
    onDispose?: (key: string, value: V) => void;
  } = {}) {
    this.store = new LRU({
      max: maxSize,
      length(item) {
        if (Array.isArray(item) || typeof item === 'string') {
          return item.length;
        }

        // If it's an object, we'll use the length to get an approximate,
        // relative size of what it would take to store it. It's certainly not
        // 100% accurate, but it's a very, very fast implementation and it
        // doesn't require bringing in other dependencies or logic which we need
        // to maintain. In the future, we might consider something like:
        // npm.im/object-sizeof, but this should be sufficient for now.
        if (typeof item === 'object') {
          return JSON.stringify(item).length;
        }

        // Go with the lru-cache default "naive" size, in lieu of anything better:
        // https://github.com/isaacs/node-lru-cache/blob/a71be6cd/index.js#L17
        return 1;
      },
      length: sizeCalculator,
      dispose: onDispose,
    });
  }

@ -43,4 +44,7 @@ export class InMemoryLRUCache<V = string> implements KeyValueCache<V> {
  async flush(): Promise<void> {
    this.store.reset();
  }
  async getTotalSize() {
    return this.store.length;
  }
}
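The new constructor options (`sizeCalculator`, `onDispose`) delegate to the `lru-cache` package, which evicts least-recently-used entries once the weighted total exceeds `max`. The behavior they configure can be sketched with a hypothetical, dependency-free stand-in (`MiniLRU` is illustrative only, not part of the package):

```typescript
// Minimal size-bounded LRU sketch: entries are weighed by a pluggable
// sizeCalculator and the least recently used entries are evicted once the
// weighted total exceeds maxSize.
class MiniLRU<V> {
  private entries = new Map<string, V>();
  private total = 0;

  constructor(
    private maxSize: number,
    private sizeCalculator: (value: V, key: string) => number = v =>
      JSON.stringify(v).length,
    private onDispose?: (key: string, value: V) => void,
  ) {}

  set(key: string, value: V): void {
    if (this.entries.has(key)) this.remove(key);
    this.entries.set(key, value);
    this.total += this.sizeCalculator(value, key);
    // Evict in least-recently-used order until we fit within maxSize.
    for (const [k, v] of this.entries) {
      if (this.total <= this.maxSize) break;
      this.remove(k);
      this.onDispose?.(k, v);
    }
  }

  get(key: string): V | undefined {
    const value = this.entries.get(key);
    if (value !== undefined) {
      // Refresh recency by moving the entry to the back of the Map.
      this.entries.delete(key);
      this.entries.set(key, value);
    }
    return value;
  }

  private remove(key: string): void {
    const value = this.entries.get(key);
    if (value === undefined) return;
    this.total -= this.sizeCalculator(value, key);
    this.entries.delete(key);
  }
}
```

A recently-read entry survives eviction while the least recently used one is dropped, which is exactly the property the document-store tests below rely on.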
@ -1,6 +1,6 @@
{
  "name": "apollo-server-cloud-functions",
  "version": "2.3.1",
  "version": "2.4.0",
  "description": "Production-ready Node.js GraphQL server for Google Cloud Functions",
  "keywords": [
    "GraphQL",
@ -1,6 +1,6 @@
{
  "name": "apollo-server-cloudflare",
  "version": "2.3.1",
  "version": "2.4.0",
  "description": "Production-ready Node.js GraphQL server for Cloudflare workers",
  "main": "dist/index.js",
  "types": "dist/index.d.ts",
@ -1,6 +1,6 @@
{
  "name": "apollo-server-core",
  "version": "2.3.1",
  "version": "2.4.0",
  "description": "Core engine for Apollo GraphQL server",
  "main": "dist/index.js",
  "types": "dist/index.d.ts",
@ -9,12 +9,13 @@ import {
  GraphQLFieldResolver,
  ValidationContext,
  FieldDefinitionNode,
  DocumentNode,
} from 'graphql';
import { GraphQLExtension } from 'graphql-extensions';
import { EngineReportingAgent } from 'apollo-engine-reporting';
import { InMemoryLRUCache } from 'apollo-server-caching';
import { ApolloServerPlugin } from 'apollo-server-plugin-base';
import supportsUploadsInNode from './utils/supportsUploadsInNode';
import runtimeSupportsUploads from './utils/runtimeSupportsUploads';

import {
  SubscriptionServer,

@ -90,7 +91,11 @@ function getEngineServiceId(engine: Config['engine']): string | undefined {
}

const forbidUploadsForTesting =
  process && process.env.NODE_ENV === 'test' && !supportsUploadsInNode;
  process && process.env.NODE_ENV === 'test' && !runtimeSupportsUploads;

function approximateObjectSize<T>(obj: T): number {
  return Buffer.byteLength(JSON.stringify(obj), 'utf8');
}

export class ApolloServerBase {
  public subscriptionsPath?: string;

@ -114,6 +119,11 @@ export class ApolloServerBase {
  // the default version is specified in playground.ts
  protected playgroundOptions?: PlaygroundRenderPageOptions;

  // A store that, when enabled (default), will store the parsed and validated
  // versions of operations in-memory, allowing subsequent parses/validates
  // on the same operation to be executed immediately.
  private documentStore?: InMemoryLRUCache<DocumentNode>;

  // The constructor should be universal across all environments. All
  // environment-specific behavior should be set by adding or overriding methods.
  constructor(config: Config) {
    if (!config) throw new Error('ApolloServer requires options.');

@ -136,6 +146,9 @@ export class ApolloServerBase {
      ...requestOptions
    } = config;

    // Initialize the document store. This cannot currently be disabled.
    this.initializeDocumentStore();

    // Plugins will be instantiated if they aren't already, and this.plugins
    // is populated accordingly.
    this.ensurePluginInstantiation(plugins);

@ -205,7 +218,7 @@ export class ApolloServerBase {

    if (uploads !== false && !forbidUploadsForTesting) {
      if (this.supportsUploads()) {
        if (!supportsUploadsInNode) {
        if (!runtimeSupportsUploads) {
          printNodeFileUploadsMessage();
          throw new Error(
            '`graphql-upload` is no longer supported on Node.js < v8.5.0. ' +

@ -486,6 +499,18 @@ export class ApolloServerBase {
    });
  }

  private initializeDocumentStore(): void {
    this.documentStore = new InMemoryLRUCache<DocumentNode>({
      // Create an approximately 30MiB InMemoryLRUCache. This is less than
      // precise since the technique to calculate the size of a DocumentNode is
      // only using JSON.stringify on the DocumentNode (and thus doesn't account
      // for unicode characters, etc.), but it should do a reasonable job at
      // providing a caching document store for most operations.
      maxSize: Math.pow(2, 20) * 30,
      sizeCalculator: approximateObjectSize,
    });
  }

  // This function is used by the integrations to generate the graphQLOptions
  // from an object containing the request and other integration specific
  // options

@ -509,6 +534,7 @@ export class ApolloServerBase {
    return {
      schema: this.schema,
      plugins: this.plugins,
      documentStore: this.documentStore,
      extensions: this.extensions,
      context,
      // Allow overrides from options. Be explicit about a couple of them to
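The `approximateObjectSize` helper added in this diff weighs cache entries by UTF-8 byte length rather than JavaScript string length, so multi-byte characters count at their true storage cost:

```typescript
// Same calculation as the helper introduced in the hunk above: serialize the
// object and measure its UTF-8 byte length.
function approximateObjectSize<T>(obj: T): number {
  return Buffer.byteLength(JSON.stringify(obj), 'utf8');
}

// 'é' is one character but two UTF-8 bytes, so the second object weighs one
// byte more even though both JSON strings have the same .length.
const plain = approximateObjectSize({ q: 'hello' });
const accented = approximateObjectSize({ q: 'héllo' });
```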
@ -20,6 +20,11 @@ import {
import { processGraphQLRequest, GraphQLRequest } from '../requestPipeline';
import { Request } from 'apollo-server-env';
import { GraphQLOptions, Context as GraphQLContext } from 'apollo-server-core';
import {
  ApolloServerPlugin,
  GraphQLRequestListener,
} from 'apollo-server-plugin-base';
import { InMemoryLRUCache } from 'apollo-server-caching';

// This is a temporary kludge to ensure we preserve runQuery behavior with the
// GraphQLRequestProcessor refactoring.

@ -49,10 +54,12 @@ interface QueryOptions
  | 'cacheControl'
  | 'context'
  | 'debug'
  | 'documentStore'
  | 'extensions'
  | 'fieldResolver'
  | 'formatError'
  | 'formatResponse'
  | 'plugins'
  | 'rootValue'
  | 'schema'
  | 'tracing'

@ -444,6 +451,172 @@ describe('runQuery', () => {
    });
  });

  describe('parsing and validation cache', () => {
    function createLifecyclePluginMocks() {
      const validationDidStart = jest.fn();
      const parsingDidStart = jest.fn();

      const plugins: ApolloServerPlugin[] = [
        {
          requestDidStart() {
            return {
              validationDidStart,
              parsingDidStart,
            } as GraphQLRequestListener;
          },
        },
      ];

      return {
        plugins,
        events: { validationDidStart, parsingDidStart },
      };
    }

    function runRequest({
      queryString = '{ testString }',
      plugins = [],
      documentStore,
    }: {
      queryString?: string;
      plugins?: ApolloServerPlugin[];
      documentStore?: QueryOptions['documentStore'];
    }) {
      return runQuery({
        schema,
        documentStore,
        queryString,
        plugins,
        request: new MockReq(),
      });
    }

    function forgeLargerTestQuery(
      count: number,
      prefix: string = 'prefix',
    ): string {
      if (count <= 0) {
        count = 1;
      }

      let query: string = '';

      for (let q = 0; q < count; q++) {
        query += ` ${prefix}_${count}: testString\n`;
      }

      return '{\n' + query + '}';
    }

    // This should use the same logic as the calculation in InMemoryLRUCache:
    // https://github.com/apollographql/apollo-server/blob/94b98ff3/packages/apollo-server-caching/src/InMemoryLRUCache.ts#L23
    function approximateObjectSize<T>(obj: T): number {
      return Buffer.byteLength(JSON.stringify(obj), 'utf8');
    }

    it('validates each time when the documentStore is not present', async () => {
      expect.assertions(4);

      const {
        plugins,
        events: { parsingDidStart, validationDidStart },
      } = createLifecyclePluginMocks();

      // The first request will do a parse and validate. (1/1)
      await runRequest({ plugins });
      expect(parsingDidStart.mock.calls.length).toBe(1);
      expect(validationDidStart.mock.calls.length).toBe(1);

      // The second request should ALSO do a parse and validate. (2/2)
      await runRequest({ plugins });
      expect(parsingDidStart.mock.calls.length).toBe(2);
      expect(validationDidStart.mock.calls.length).toBe(2);
    });

    it('caches the DocumentNode in the documentStore when instrumented', async () => {
      expect.assertions(4);
      const documentStore = new InMemoryLRUCache<DocumentNode>();

      const {
        plugins,
        events: { parsingDidStart, validationDidStart },
      } = createLifecyclePluginMocks();

      // An uncached request will have 1 parse and 1 validate call.
      await runRequest({ plugins, documentStore });
      expect(parsingDidStart.mock.calls.length).toBe(1);
      expect(validationDidStart.mock.calls.length).toBe(1);

      // The second request should still only have 1 parse and 1 validate.
      await runRequest({ plugins, documentStore });
      expect(parsingDidStart.mock.calls.length).toBe(1);
      expect(validationDidStart.mock.calls.length).toBe(1);

      console.log(documentStore);
    });

    it("the documentStore calculates the DocumentNode's length by its JSON.stringify'd representation", async () => {
      expect.assertions(14);
      const {
        plugins,
        events: { parsingDidStart, validationDidStart },
      } = createLifecyclePluginMocks();

      const queryLarge = forgeLargerTestQuery(3, 'large');
      const querySmall1 = forgeLargerTestQuery(1, 'small1');
      const querySmall2 = forgeLargerTestQuery(1, 'small2');

      // We're going to create a smaller-than-default cache which will be the
      // size of the two smaller queries. All three of these queries will never
      // fit into this cache, so we'll roll through them all.
      const maxSize =
        approximateObjectSize(parse(querySmall1)) +
        approximateObjectSize(parse(querySmall2));

      const documentStore = new InMemoryLRUCache<DocumentNode>({
        maxSize,
        sizeCalculator: approximateObjectSize,
      });

      await runRequest({ plugins, documentStore, queryString: querySmall1 });
      expect(parsingDidStart.mock.calls.length).toBe(1);
      expect(validationDidStart.mock.calls.length).toBe(1);

      await runRequest({ plugins, documentStore, queryString: querySmall2 });
      expect(parsingDidStart.mock.calls.length).toBe(2);
      expect(validationDidStart.mock.calls.length).toBe(2);

      // This query should be large enough to evict both of the previous
      // documents from the LRU cache since it's larger than the TOTAL limit of
      // the cache (which is capped at the length of small1 + small2) — though
      // this will still fit (barely).
      await runRequest({ plugins, documentStore, queryString: queryLarge });
      expect(parsingDidStart.mock.calls.length).toBe(3);
      expect(validationDidStart.mock.calls.length).toBe(3);

      // Make sure the large query is still cached. (No incr. to parse/validate.)
      await runRequest({ plugins, documentStore, queryString: queryLarge });
      expect(parsingDidStart.mock.calls.length).toBe(3);
      expect(validationDidStart.mock.calls.length).toBe(3);

      // This small query (and the other) should both trigger parse/validate
      // since the cache had to have evicted them both after accommodating the
      // larger one.
      await runRequest({ plugins, documentStore, queryString: querySmall1 });
      expect(parsingDidStart.mock.calls.length).toBe(4);
      expect(validationDidStart.mock.calls.length).toBe(4);

      await runRequest({ plugins, documentStore, queryString: querySmall2 });
      expect(parsingDidStart.mock.calls.length).toBe(5);
      expect(validationDidStart.mock.calls.length).toBe(5);

      // Finally, make sure that the large query is gone. (It should be, after
      // the last two have taken its spot again.)
      await runRequest({ plugins, documentStore, queryString: queryLarge });
      expect(parsingDidStart.mock.calls.length).toBe(6);
      expect(validationDidStart.mock.calls.length).toBe(6);
    });
  });

  describe('async_hooks', () => {
    let asyncHooks: typeof import('async_hooks');
    let asyncHook: import('async_hooks').AsyncHook;
@ -6,7 +6,7 @@ import {
} from 'graphql';
import { GraphQLExtension } from 'graphql-extensions';
import { CacheControlExtensionOptions } from 'apollo-cache-control';
import { KeyValueCache } from 'apollo-server-caching';
import { KeyValueCache, InMemoryLRUCache } from 'apollo-server-caching';
import { DataSource } from 'apollo-datasource';
import { ApolloServerPlugin } from 'apollo-server-plugin-base';

@ -43,6 +43,7 @@ export interface GraphQLServerOptions<
  cache?: KeyValueCache;
  persistedQueries?: PersistedQueryOptions;
  plugins?: ApolloServerPlugin[];
  documentStore?: InMemoryLRUCache<DocumentNode>;
}

export type DataSources<TContext> = {
@ -41,7 +41,7 @@ export const gql: (
  ...substitutions: any[]
) => DocumentNode = gqlTag;

import supportsUploadsInNode from './utils/supportsUploadsInNode';
import runtimeSupportsUploads from './utils/runtimeSupportsUploads';
import { GraphQLScalarType } from 'graphql';
export { default as processFileUploads } from './processFileUploads';

@ -53,6 +53,6 @@ export { default as processFileUploads } from './processFileUploads';
// experimental ECMAScript modules), this conditional export is necessary
// to avoid modern ECMAScript from failing to parse by versions of Node.js
// which don't support it (yet — e.g. Node.js 6 and async/await).
export const GraphQLUpload = supportsUploadsInNode
export const GraphQLUpload = runtimeSupportsUploads
  ? (require('graphql-upload').GraphQLUpload as GraphQLScalarType)
  : undefined;
@ -1,6 +1,6 @@
/// <reference path="./types/graphql-upload.d.ts" />

import supportsUploadsInNode from './utils/supportsUploadsInNode';
import runtimeSupportsUploads from './utils/runtimeSupportsUploads';

// We'll memoize this function once at module load time since it should never
// change during runtime. In the event that we're using a version of Node.js

@ -8,7 +8,7 @@ import supportsUploadsInNode from './utils/supportsUploadsInNode';
const processFileUploads:
  | typeof import('graphql-upload').processRequest
  | undefined = (() => {
  if (supportsUploadsInNode) {
  if (runtimeSupportsUploads) {
    return require('graphql-upload')
      .processRequest as typeof import('graphql-upload').processRequest;
  }
@ -44,6 +44,7 @@ import {
|
|||
} from 'apollo-server-plugin-base';
|
||||
|
||||
import { Dispatcher } from './utils/dispatcher';
|
||||
import { InMemoryLRUCache, KeyValueCache } from 'apollo-server-caching';
|
||||
|
||||
export {
|
||||
GraphQLRequest,
|
||||
|
@ -76,6 +77,7 @@ export interface GraphQLRequestPipelineConfig<TContext> {
|
|||
formatResponse?: Function;
|
||||
|
||||
plugins?: ApolloServerPlugin[];
|
||||
documentStore?: InMemoryLRUCache<DocumentNode>;
|
||||
}
|
||||
|
||||
export type DataSources<TContext> = {
|
||||
|
@ -102,6 +104,7 @@ export async function processGraphQLRequest<TContext>(
|
|||
|
||||
let queryHash: string;
|
||||
|
||||
let persistedQueryCache: KeyValueCache | undefined;
|
||||
let persistedQueryHit = false;
|
||||
let persistedQueryRegister = false;
|
||||
|
||||
|
@ -116,10 +119,14 @@ export async function processGraphQLRequest<TContext>(
|
|||
);
|
||||
}
|
||||
|
||||
// We'll store a reference to the persisted query cache so we can actually
|
||||
// do the write at a later point in the request pipeline processing.
|
||||
persistedQueryCache = config.persistedQueries.cache;
|
||||
|
||||
queryHash = extensions.persistedQuery.sha256Hash;
|
||||
|
||||
if (query === undefined) {
|
||||
query = await config.persistedQueries.cache.get(`apq:${queryHash}`);
|
||||
query = await persistedQueryCache.get(`apq:${queryHash}`);
|
||||
if (query) {
|
||||
persistedQueryHit = true;
|
||||
} else {
|
||||
|
@ -134,11 +141,11 @@ export async function processGraphQLRequest<TContext>(
|
|||
);
|
||||
}
|
||||
|
||||
// We won't write to the persisted query cache until later.
|
||||
// Defering the writing gives plugins the ability to "win" from use of
|
||||
// the cache, but also have their say in whether or not the cache is
|
||||
// written to (by interrupting the request with an error).
|
||||
persistedQueryRegister = true;
|
||||
|
||||
Promise.resolve(
|
||||
config.persistedQueries.cache.set(`apq:${queryHash}`, query),
|
||||
).catch(console.warn);
|
||||
}
|
||||
} else if (query) {
|
||||
// FIXME: We'll compute the APQ query hash to use as our cache key for
|
||||
|
@ -162,42 +169,81 @@ export async function processGraphQLRequest<TContext>(
|
|||
requestContext,
|
||||
});
|
||||
|
||||
const parsingDidEnd = await dispatcher.invokeDidStartHook(
|
||||
'parsingDidStart',
|
||||
requestContext,
|
||||
);
|
||||
|
||||
try {
|
||||
let document: DocumentNode;
|
||||
try {
|
||||
document = parse(query);
|
||||
parsingDidEnd();
|
||||
} catch (syntaxError) {
|
||||
parsingDidEnd(syntaxError);
|
||||
return sendErrorResponse(syntaxError, SyntaxError);
|
||||
// If we're configured with a document store (by default, we are), we'll
|
||||
// utilize the operation's hash to lookup the AST from the previously
|
||||
       // parsed-and-validated operation. Failure to retrieve anything from the
       // cache just means we're committed to doing the parsing and validation.
+      if (config.documentStore) {
+        try {
+          requestContext.document = await config.documentStore.get(queryHash);
+        } catch (err) {
+          console.warn(
+            'An error occurred while attempting to read from the documentStore.',
+            err,
+          );
+        }
+      }
+
-      requestContext.document = document;
+      // If we still don't have a document, we'll need to parse and validate it.
+      // With success, we'll attempt to save it into the store for future use.
+      if (!requestContext.document) {
+        const parsingDidEnd = await dispatcher.invokeDidStartHook(
+          'parsingDidStart',
+          requestContext,
+        );
+
-      const validationDidEnd = await dispatcher.invokeDidStartHook(
-        'validationDidStart',
-        requestContext as WithRequired<typeof requestContext, 'document'>,
-      );
+        try {
+          requestContext.document = parse(query);
+          parsingDidEnd();
+        } catch (syntaxError) {
+          parsingDidEnd(syntaxError);
+          return sendErrorResponse(syntaxError, SyntaxError);
+        }
+
-      const validationErrors = validate(document);
+        const validationDidEnd = await dispatcher.invokeDidStartHook(
+          'validationDidStart',
+          requestContext as WithRequired<typeof requestContext, 'document'>,
+        );
+
-      if (validationErrors.length === 0) {
-        validationDidEnd();
-      } else {
-        validationDidEnd(validationErrors);
-        return sendErrorResponse(validationErrors, ValidationError);
+        const validationErrors = validate(requestContext.document);
+
+        if (validationErrors.length === 0) {
+          validationDidEnd();
+        } else {
+          validationDidEnd(validationErrors);
+          return sendErrorResponse(validationErrors, ValidationError);
+        }
+
+        if (config.documentStore) {
+          // The underlying cache store behind the `documentStore` returns a
+          // `Promise` which is resolved (or rejected), eventually, based on the
+          // success or failure (respectively) of the cache save attempt. While
+          // it's certainly possible to `await` this `Promise`, we don't care about
+          // whether or not it's successful at this point. We'll instead proceed
+          // to serve the rest of the request and just hope that this works out.
+          // If it doesn't work, the next request will have another opportunity to
+          // try again. Errors will surface as warnings, as appropriate.
+          //
+          // While it shouldn't normally be necessary to wrap this `Promise` in a
+          // `Promise.resolve` invocation, it seems that the underlying cache store
+          // is returning a non-native `Promise` (e.g. Bluebird, etc.).
+          Promise.resolve(
+            config.documentStore.set(queryHash, requestContext.document),
+          ).catch(err =>
+            console.warn('Could not store validated document.', err),
+          );
+        }
       }

       // FIXME: If we want to guarantee an operation has been set when invoking
       // `willExecuteOperation` and executionDidStart`, we need to throw an
       // error here and not leave this to `buildExecutionContext` in
       // `graphql-js`.
-      const operation = getOperationAST(document, request.operationName);
+      const operation = getOperationAST(
+        requestContext.document,
+        request.operationName,
+      );

       requestContext.operation = operation || undefined;
       // We'll set `operationName` to `null` for anonymous operations.
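The documentStore flow in this hunk — consult the cache by query hash, fall back to parsing and validation on a miss, then write back without awaiting the result — can be sketched outside Apollo Server. Everything below (the Map-backed store, `parseAndValidate`, the counter) is a hypothetical stand-in for the real `KeyValueCache` and graphql-js calls:

```typescript
import { createHash } from "crypto";

// Hypothetical stand-in for the parsed-and-validated document store.
const documentStore = new Map<string, string>();

function sha256(query: string): string {
  return createHash("sha256").update(query).digest("hex");
}

// Pretend "parsing" merely normalizes whitespace; the real pipeline calls
// graphql-js parse() and validate().
function parseAndValidate(query: string): string {
  return query.trim().replace(/\s+/g, " ");
}

let parseCount = 0;

function getDocument(query: string): string {
  const queryHash = sha256(query);

  // Failure to retrieve anything from the cache just means we re-parse.
  const cached = documentStore.get(queryHash);
  if (cached !== undefined) return cached;

  // Parse, validate, and save for future requests. As in the diff above,
  // the write is fire-and-forget: errors only surface as warnings.
  parseCount++;
  const document = parseAndValidate(query);
  Promise.resolve(documentStore.set(queryHash, document)).catch(err =>
    console.warn("Could not store validated document.", err),
  );
  return document;
}
```

Repeat executions of the same document then skip parsing entirely, which is the performance benefit PR #2111 describes.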
@@ -212,6 +258,16 @@ export async function processGraphQLRequest<TContext>(
       >,
     );

+    // Now that we've gone through the pre-execution phases of the request
+    // pipeline, and given plugins appropriate ability to object (by throwing
+    // an error) and not actually write, we'll write to the cache if it was
+    // determined earlier in the request pipeline that we should do so.
+    if (persistedQueryRegister && persistedQueryCache) {
+      Promise.resolve(persistedQueryCache.set(`apq:${queryHash}`, query)).catch(
+        console.warn,
+      );
+    }
+
     const executionDidEnd = await dispatcher.invokeDidStartHook(
       'executionDidStart',
       requestContext as WithRequired<
@@ -224,7 +280,7 @@ export async function processGraphQLRequest<TContext>(

     try {
       response = (await execute(
-        document,
+        requestContext.document,
         request.operationName,
         request.variables,
       )) as GraphQLResponse;
@@ -158,6 +158,7 @@ export async function runHttpQuery(
         | CacheControlExtensionOptions
         | undefined,
       dataSources: options.dataSources,
+      documentStore: options.documentStore,
       extensions: options.extensions,
       persistedQueries: options.persistedQueries,
@@ -30,7 +30,7 @@ export class Dispatcher<T> {
   public invokeDidStartHook<
     TMethodName extends FunctionPropertyNames<
       Required<T>,
-      ((...args: any[]) => AnyFunction | void)
+      (...args: any[]) => AnyFunction | void
     >,
     TEndHookArgs extends Args<ReturnType<AsFunction<T[TMethodName]>>>
   >(
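For context on the Dispatcher change above: `invokeDidStartHook` calls a plugin's "did start" hook, which may return a function to be invoked when that phase of the request ends. A minimal sketch of the pattern — simplified to one hook name and plain objects, not the actual Dispatcher implementation:

```typescript
type EndHook = (err?: Error) => void;

// A listener whose start hook may return an end hook to be invoked later.
interface Listener {
  parsingDidStart?(requestContext: { query: string }): EndHook | void;
}

const events: string[] = [];

const listener: Listener = {
  parsingDidStart(rc) {
    events.push(`start:${rc.query}`);
    return err => events.push(err ? `error:${err.message}` : "end");
  },
};

// Invoke each listener's start hook, collect the returned end hooks, and
// return a single function which invokes them all.
function invokeDidStartHook(
  listeners: Listener[],
  rc: { query: string },
): EndHook {
  const endHooks: EndHook[] = [];
  for (const l of listeners) {
    const end = l.parsingDidStart && l.parsingDidStart(rc);
    if (typeof end === "function") endHooks.push(end);
  }
  return err => endHooks.forEach(end => end(err));
}
```

This is why the diff tightens the generic constraint to `(...args: any[]) => AnyFunction | void`: a start hook is a function that may itself return a function.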
@@ -1,4 +1,4 @@
-const supportsUploadsInNode = (() => {
+const runtimeSupportsUploads = (() => {
   if (
     process &&
     process.release &&
@@ -13,9 +13,12 @@ const supportsUploadsInNode = (() => {
     if (nodeMajor < 8 || (nodeMajor === 8 && nodeMinor < 5)) {
       return false;
     }
+    return true;
+  }

-  return true;
+  // If we haven't matched any of the above criteria, we'll remain unsupported
+  // for this mysterious environment until a pull-request proves us otherwise.
+  return false;
 })();

-export default supportsUploadsInNode;
+export default runtimeSupportsUploads;
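The renamed `runtimeSupportsUploads` guard above refuses to engage `graphql-upload` unless the runtime is Node.js 8.5 or newer. The version comparison at its core can be isolated as follows (`nodeVersionSupportsUploads` is a hypothetical helper; the real guard also checks `process.release.name` and `process.versions` before parsing anything):

```typescript
// Returns true when a Node.js version string is >= 8.5, the minimum
// required by graphql-upload; false for older or unparseable versions.
function nodeVersionSupportsUploads(version: string): boolean {
  const [nodeMajor, nodeMinor] = version
    .split(".", 2)
    .map(segment => parseInt(segment, 10));
  if (Number.isNaN(nodeMajor) || Number.isNaN(nodeMinor)) {
    return false;
  }
  if (nodeMajor < 8 || (nodeMajor === 8 && nodeMinor < 5)) {
    return false;
  }
  return true;
}
```

Defaulting to `false` for anything unrecognized mirrors the diff's choice to remain unsupported in unknown environments rather than crash at module-load time.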
@@ -1,6 +1,6 @@
 {
   "name": "apollo-server-express",
-  "version": "2.3.1",
+  "version": "2.4.0",
   "description": "Production-ready Node.js GraphQL server for Express and Connect",
   "main": "dist/index.js",
   "types": "dist/index.d.ts",
@@ -30,7 +30,7 @@
     "@types/accepts": "^1.3.5",
     "@types/body-parser": "1.17.0",
     "@types/cors": "^2.8.4",
-    "@types/express": "4.16.0",
+    "@types/express": "4.16.1",
     "accepts": "^1.3.5",
     "apollo-server-core": "file:../apollo-server-core",
     "body-parser": "^1.18.3",

@@ -1,6 +1,6 @@
 {
   "name": "apollo-server-hapi",
-  "version": "2.3.1",
+  "version": "2.4.0",
   "description": "Production-ready Node.js GraphQL server for Hapi",
   "main": "dist/index.js",
   "types": "dist/index.d.ts",

@@ -1,7 +1,7 @@
 {
   "name": "apollo-server-integration-testsuite",
   "private": true,
-  "version": "2.3.1",
+  "version": "2.4.0",
   "description": "Apollo Server Integrations testsuite",
   "main": "dist/index.js",
   "types": "dist/index.d.ts",

@@ -1,6 +1,6 @@
 {
   "name": "apollo-server-koa",
-  "version": "2.3.1",
+  "version": "2.4.0",
   "description": "Production-ready Node.js GraphQL server for Koa",
   "main": "dist/index.js",
   "types": "dist/index.d.ts",
@@ -30,14 +30,14 @@
     "@types/accepts": "^1.3.5",
     "@types/cors": "^2.8.4",
     "@types/koa": "^2.0.46",
-    "@types/koa-bodyparser": "^5.0.1",
+    "@types/koa-bodyparser": "^4.2.1",
     "@types/koa-compose": "^3.2.2",
     "@types/koa__cors": "^2.2.1",
     "accepts": "^1.3.5",
     "apollo-server-core": "file:../apollo-server-core",
     "graphql-subscriptions": "^1.0.0",
     "graphql-tools": "^4.0.0",
-    "koa": "2.6.2",
+    "koa": "2.7.0",
     "koa-bodyparser": "^3.0.0",
     "koa-router": "^7.4.0",
     "type-is": "^1.6.16"

@@ -1,6 +1,6 @@
 {
   "name": "apollo-server-lambda",
-  "version": "2.3.1",
+  "version": "2.4.0",
   "description": "Production-ready Node.js GraphQL server for AWS Lambda",
   "keywords": [
     "GraphQL",

@@ -1,6 +1,6 @@
 {
   "name": "apollo-server-micro",
-  "version": "2.3.1",
+  "version": "2.4.0",
   "description": "Production-ready Node.js GraphQL server for Micro",
   "main": "dist/index.js",
   "types": "dist/index.d.ts",

@@ -1,6 +1,6 @@
 {
   "name": "apollo-server-plugin-base",
-  "version": "0.2.1",
+  "version": "0.3.0",
   "description": "Apollo Server plugin base classes",
   "main": "dist/index.js",
   "types": "dist/index.d.ts",

@@ -1,6 +1,6 @@
 {
   "name": "apollo-server-testing",
-  "version": "2.3.1",
+  "version": "2.4.0",
   "description": "Test utils for apollo-server",
   "main": "dist/index.js",
   "types": "dist/index.d.ts",

@@ -1,6 +1,6 @@
 {
   "name": "apollo-server",
-  "version": "2.3.1",
+  "version": "2.4.0",
   "description": "Production ready GraphQL Server",
   "author": "opensource@apollographql.com",
   "main": "dist/index.js",
@@ -78,8 +78,6 @@ export class ApolloServer extends ApolloServerBase {

   // Listen takes the same arguments as http.Server.listen.
   public async listen(...opts: Array<any>): Promise<ServerInfo> {
-    await this.willStart();
-
     // This class is the easy mode for people who don't create their own express
     // object, so we have to create it.
     const app = express();
@@ -1,6 +1,6 @@
 {
   "name": "apollo-tracing",
-  "version": "0.4.0",
+  "version": "0.5.0",
   "description": "Collect and expose trace data for GraphQL requests",
   "main": "./dist/index.js",
   "types": "./dist/index.d.ts",

@@ -1,6 +1,6 @@
 {
   "name": "graphql-extensions",
-  "version": "0.4.1",
+  "version": "0.5.0",
   "description": "Add extensions to GraphQL servers",
   "main": "./dist/index.js",
   "types": "./dist/index.d.ts",
@@ -9,6 +9,7 @@
     { "path": "./packages/apollo-datasource" },
     { "path": "./packages/apollo-datasource-rest" },
     { "path": "./packages/apollo-engine-reporting" },
+    { "path": "./packages/apollo-graphql" },
     { "path": "./packages/apollo-server" },
     { "path": "./packages/apollo-server-azure-functions" },
     { "path": "./packages/apollo-server-cache-memcached" },

@@ -8,6 +8,7 @@
     { "path": "./packages/apollo-cache-control/src/__tests__/" },
     { "path": "./packages/apollo-datasource-rest/src/__tests__/" },
     { "path": "./packages/apollo-engine-reporting/src/__tests__/" },
+    { "path": "./packages/apollo-graphql/src/__tests__/" },
     { "path": "./packages/apollo-server/src/__tests__/" },
     { "path": "./packages/apollo-server-azure-functions/src/__tests__/" },
     { "path": "./packages/apollo-server-cache-memcached/src/__tests__/" },