Releases: hasura/ndc-mongodb

v2.0.1

13 Mar 21:58
1dd8e88

Fixed

  • Bug and release fixes

v2.0.0

12 Mar 22:52
f31f110

Added

  • Add relational query support including projection, filtering, sorting, pagination, joins, aggregates,
    window functions, unions, and streaming (#180)
  • Add postgres-backed configuration store for on-demand schema loading, enabling dynamic schema management without connector restarts (#179)
  • Add support for grouping documents for aggregation by multiple grouping criteria (#144, #145)

Changed

  • BREAKING: Update to ndc-spec v0.2 (#139)
  • BREAKING: Remove custom count aggregation - use standard count instead (#144)
  • Results for avg and sum aggregations are coerced to consistent result types (#144)

ndc-spec v0.2

This database connector communicates with the GraphQL Engine using an IR
described by ndc-spec. Version 0.2 makes
a number of improvements to the spec, and enables features that were previously
not possible. Highlights of those new features include:

  • relationships can use a nested object field on the target side as a join key
  • grouping result documents, and aggregating on groups of documents
  • queries on fields of nested collections (document fields that are arrays of objects)
  • filtering on scalar values inside array document fields - previously it was possible to filter on fields of objects inside arrays, but not on scalars

For more details on what has changed in the spec, see the ndc-spec changelog.

Use of the new spec requires a version of GraphQL Engine that supports ndc-spec
v0.2, and there are required metadata changes.

Removed custom count aggregation

Previously there were two options for getting document counts, named count and
_count, which did the same thing. count has been removed; use _count instead.

Results for avg and sum aggregations are coerced to consistent result types

This change is required for compliance with ndc-spec.

Results for avg are always coerced to double.

Results for sum are coerced to double if the summed inputs use a fractional
numeric type, or to long if inputs use an integral numeric type.
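
The coercion rule can be illustrated with a short sketch (hypothetical Python, not the connector's actual Rust implementation; function names are invented for the example):

```python
def coerce_sum(values):
    """Sketch of the sum coercion rule: fractional inputs yield a
    double result; integral inputs yield a long result."""
    if any(isinstance(v, float) for v in values):
        return float(sum(values))  # "double" result type
    return int(sum(values))        # "long" result type

def coerce_avg(values):
    """avg results are always coerced to double."""
    return sum(values) / len(values)

print(coerce_sum([1, 2, 3]))   # → 6 (long)
print(coerce_sum([1.5, 2.5]))  # → 4.0 (double)
print(coerce_avg([1, 2]))      # → 1.5 (double)
```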

v1.8.4

23 Jul 21:44
6e8ba43

Fixed

  • Escape field names or aliases with invalid characters in field selections (#175)

v1.8.3

08 Jul 22:28
9dd4b16

Fixed

  • Filtering on a field of a related collection inside a nested object that is not selected for output (#171)
  • Ensure correct ordering of sort criteria in MongoDB query plan (#172)

v1.8.2

16 Jun 02:48
9cbf7c5

Added

  • Enable support for the MONGODB-AWS authentication mechanism.

v1.8.1

05 Jun 03:44
f4f3b8e

Fixed

  • Include TLS root certificates in docker images to fix connections to otel collectors (#167)

Root certificates

Connections to MongoDB use the Rust MongoDB driver, which uses rustls, which bundles its own root certificate store.
So there was no problem connecting to MongoDB over TLS. But the connector's OpenTelemetry library uses openssl instead
of rustls, and openssl requires a separately-installed certificate store. This release adds root certificates to the
docker images, which fixes connections to OpenTelemetry collectors over https.

v1.8.0

28 Apr 16:57
cdf780a

Added

  • Add option to skip rows on response type mismatch (#162)

Option to skip rows on response type mismatch

When sending response data for a query, if we encounter a value that does not match the type declared in the connector
schema, the default behavior is to respond with an error. That prevents the user from getting any data. This change adds
an option to silently skip rows that contain type mismatches so that the user can get a partial set of result data.

This can come up if, for example, you have database documents with a field that nearly always contains an int value,
but in a handful of cases contains a string. Introspection may determine that the type of the field is
int if the random document sampling does not happen to check one of the documents with a string. Then when you run
a query that does read one of those documents, the query fails because the connector refuses to return a value of an
unexpected type.

The new option, onResponseTypeMismatch, has two possible values: fail (the existing, default behavior), or skipRow
(the new, opt-in behavior). If you set the option to skipRow in the example case above, the connector will silently
exclude documents with unexpected string values from the response. This allows you to get access to the "good" data.
The behavior is opt-in because we don't want to exclude data if users are not aware that this might be happening.

The option is set in connector configuration in configuration.json. Here is an example configuration:

{
  "introspectionOptions": {
    "sampleSize": 1000,
    "noValidatorSchema": false,
    "allSchemaNullable": false
  },
  "serializationOptions": {
    "extendedJsonMode": "relaxed",
    "onResponseTypeMismatch": "skipRow"
  }
}

The skipRow behavior does not affect aggregations, or queries that do not request the field with the unexpected type.
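
The row-skipping logic can be sketched roughly as follows (a hypothetical Python illustration; the schema shape, function name, and type names are invented for the example and are much simpler than the connector's real configuration):

```python
# Map of declared schema type names to the Python types used for checking.
DECLARED_TYPES = {"int": int, "string": str, "double": float}

def apply_on_response_type_mismatch(rows, schema, mode="fail"):
    """Sketch of onResponseTypeMismatch: 'fail' raises on the first
    mismatched field; 'skipRow' silently drops mismatched documents."""
    result = []
    for row in rows:
        ok = all(
            isinstance(row.get(field), DECLARED_TYPES[type_name])
            for field, type_name in schema.items()
            if field in row
        )
        if ok:
            result.append(row)
        elif mode == "fail":
            raise TypeError(f"value does not match declared type in row: {row}")
        # mode == "skipRow": drop the row and continue
    return result

rows = [{"n": 1}, {"n": "oops"}, {"n": 3}]
print(apply_on_response_type_mismatch(rows, {"n": "int"}, mode="skipRow"))
# → [{'n': 1}, {'n': 3}]
```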

v1.7.2

17 Apr 00:34
c9a11e4

Fixed

  • Database introspection no longer fails if any individual collection cannot be sampled (#160)

v1.7.1

12 Mar 15:33
fcc66ef

Added

  • Add watch command while initializing metadata (#157)

v1.7.0

10 Mar 21:42

Added

  • Add uuid scalar type (#148)

Changed

  • On database introspection newly-added collection fields will be added to existing schema configurations (#152)

Fixed

  • Update dependencies to get fixes for reported security vulnerabilities (#149)

Changes to database introspection

Previously, running introspection would not update existing schema definitions; it would only add definitions for
newly-added collections. This release changes that behavior to make conservative changes to existing definitions:

  • added fields, either top-level or nested, will be added to existing schema definitions
  • types for fields that are already configured will not be changed automatically
  • fields that appear to have been removed from collections will not be removed from configurations
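
A hypothetical sketch of that conservative merge (Python, with a deliberately simplified flat field-to-type schema shape; the real configuration format is richer):

```python
def merge_schema(existing, introspected):
    """Conservative merge: add newly-discovered fields, never change
    the type of an already-configured field, never remove a field."""
    merged = dict(existing)  # keep every existing field and its type
    for field, type_ in introspected.items():
        if field not in merged:
            merged[field] = type_  # newly-added field
    return merged

existing = {"title": "string", "views": "int"}
introspected = {"title": "string", "views": "double", "tags": "array"}
print(merge_schema(existing, introspected))
# → {'title': 'string', 'views': 'int', 'tags': 'array'}
```

Note that the configured type of views is kept even though introspection saw a different type, and no field is ever dropped.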

We take such a conservative approach to schema configuration changes because we want to avoid accidental breaking API
changes, and because schema configurations can be edited by hand and we don't want to accidentally reverse such
modifications.

If you want to make type changes to fields that are already configured, or if you want to remove fields from schema
configuration, you can either make those edits to schema configurations by hand, or you can delete schema files before
running introspection.

UUID scalar type

Previously UUID values would show up in GraphQL as BinData. BinData is a generalized BSON type for binary data. It
doesn't provide a great interface for working with UUIDs because binary data must be given as a JSON object with binary
data in base64 encoding (while UUIDs are usually given in a specific hex-encoded string format), and there is also
a mandatory "subType" field. For example, a BinData value representing a UUID fetched via GraphQL looks like this:

{ "base64": "QKaT0MAKQl2vXFNeN/3+nA==", "subType":"04" }

With this change UUID fields can use the new uuid type instead of binData. Values of type uuid are represented in
JSON as strings. The same value in a field with type uuid looks like this:

"40a693d0-c00a-425d-af5c-535e37fdfe9c"

This means that you can now, for example, filter using string representations for UUIDs:

query {
  posts(where: {id: {_eq: "40a693d0-c00a-425d-af5c-535e37fdfe9c"}}) {
    title
  }
}

Introspection has been updated so that database fields containing UUIDs will use the uuid type when setting up new
collections, or when re-introspecting after deleting the existing schema configuration. To migrate, you may delete and
re-introspect, or edit schema files to change occurrences of binData to uuid.

Security Fixes

Rust dependencies have been updated to get fixes for these advisories: