The simplest way to store data in AWS Dynamo DB, with or without JSON schemas.
npm install dynamodm
import DynamoDM from 'dynamodm'
// get an instance of the API (options can be passed here)
const ddm = DynamoDM()
// get a reference to a table:
const table = ddm.Table('my-dynamodb-table')
// Create User and Comment models with their JSON schemas in this table:
const UserSchema = ddm.Schema('user', {
properties: {
emailAddress: {type: 'string'},
marketingComms: {type: 'boolean', default: false}
},
})
const CommentSchema = ddm.Schema('c', {
properties: {
text: {type: 'string' },
user: ddm.DocId,
// identify a field to be used as the creation timestamp using a
// built-in schema:
createdAt: ddm.CreatedAtField
},
additionalProperties: true
}, {
// The schema also defines the indexes (GSI) that this model needs:
index: {
findByUser: {
hashKey: 'user',
sortKey: 'createdAt'
}
}
})
const User = table.model(UserSchema)
const Comment = table.model(CommentSchema)
// wait for the table to be ready, all models should be added first.
await table.ready()
// create some documents (instances of models):
const aUser = new User({ emailAddress: '[email protected]' })
await aUser.save()
const aComment = new Comment({ user: aUser.id, text: "My first comment." })
await aComment.save()
// query for some documents:
const commentsForUser = await Comment.queryMany({ user: aUser.id })
import DynamoDM from 'dynamodm'
const ddm = DynamoDM()
const table = ddm.Table('my-dynamodb-table')
// a model that has no schema and will allow any data to be
// stored and loaded:
const Model = table.model(ddm.Schema('any'));
const doc = new Model({
aKey: 'a value',
'another key': {
a: 123, b: { c: null }
},
anArray: [
1, true, { x: 123 },
]
})
await doc.save();
// all dynamodm documents have an .id field by default, which is
// used as the table's primary (hash) key:
const loadedDoc = await Model.getById(doc.id);
// change the document and re-save:
loadedDoc.aKey = 'a different value';
await loadedDoc.save();
DynamoDM is designed to make it easy to write simple, scalable apps using DynamoDB.
It supports Single Table Design, where different model types are stored in a single DynamoDB table.
Each document has a unique ID which is used as the table hash key, ensuring documents are always evenly spread across all partitions.
Not all DynamoDB functions are available, but DynamoDM is designed to be efficient, and make it easy to write apps that make the most of DynamoDB's scalability, performance, and low cost.
The simple API is inspired by Mongoose, but there are many differences between MongoDB and DynamoDB, in particular when it comes to querying documents: DynamoDB's indexing and query capabilities are much more limited.
The DynamoDM() function returns an instance of the API. The API instance holds default options (including logging), and provides access to create Tables and Schemas, and to the built in schemas.
Schemas from one DynamoDM instance can be used with tables from another. Aside from default options no state is stored in the API instance.
import DynamoDM from 'dynamodm'
const ddm = DynamoDM({
logger: { level:'error' },
// clientOptions.endpoint can be used to connect to dynamodb-local for example:
clientOptions: { endpoint:'http://localhost:8000' },
})
const table = ddm.Table('my-table-name')
const aSchema = ddm.Schema('my-model-name', {}, {})
Options:
- logger: valid values:
  - false / undefined: logging is disabled.
  - A pino logger (or any other logger with a .child() method), in which case logger.child({module:'dynamodm'}) is called to create a logger.
  - A pino options object, which will be used to create a new pino instance. For example logger: {level:'trace'} enables trace-level logging.
- ... all other options supported by .Table or .Schema, which will be used as defaults.
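For example, configuring the logger option (a minimal sketch; assumes pino is installed as a dependency):

import pino from 'pino'
import DynamoDM from 'dynamodm'

// pass pino options to create a new logger:
const ddm1 = DynamoDM({ logger: { level: 'trace' } })

// or pass an existing pino logger, which will be used via logger.child():
const existingLogger = pino({ level: 'warn' })
const ddm2 = DynamoDM({ logger: existingLogger })

// or disable logging entirely:
const ddm3 = DynamoDM({ logger: false })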
Create a handle to a DynamoDB table. The table stores connection options, model types and indexes, and validates compatibility of all the different models being used in the same table.
All models must be added to a table before calling either .ready() (for full validation, including creating the table and indexes if necessary), or .assumeReady() (for a quick compatibility check, without checking the DynamoDB state).
const table = ddm.Table('my-table-name', tableOptions)
// add models here ...
await table.ready()
Options:
- name: The name of the dynamodb table (the table name may be passed as options.name and the tableName argument omitted).
- client: The DynamoDBClient to be used to connect to DynamoDB; if omitted then one will be created.
- clientOptions: Options for DynamoDBClient creation (ignored if options.client is passed). For available options see the dynamodb client documentation.
- retry: Options for request retries; requests are re-tried when dynamodb batching limits are exceeded. Defaults to {exponent: 2, delayRandomness: 0.75, maxRetries: 5}.
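For example, passing an existing DynamoDBClient and custom retry options (a minimal sketch; the local endpoint is an assumption, e.g. for dynamodb-local):

import { DynamoDBClient } from '@aws-sdk/client-dynamodb'
import DynamoDM from 'dynamodm'

const ddm = DynamoDM()
const client = new DynamoDBClient({ endpoint: 'http://localhost:8000' })
const table = ddm.Table('my-table-name', {
    client,
    // defaults shown; requests are re-tried when batching limits are exceeded:
    retry: { exponent: 2, delayRandomness: 0.75, maxRetries: 5 }
})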
Wait for the table to be ready. The current state of the table is queried and it is created if necessary.
If the table is missing required indexes then the creation of a missing index
will be started (but not waited on). To create and wait for all missing
indexes, use the waitForIndexes
option.
Options:
waitForIndexes
: if true then all missing indexes required by schemas in this table will also be created. This may take a long time, especially if indexes are being created that must be back-filled with existing data. Recommended for convenience during development only!
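For example (assuming the option is passed directly to .ready()):

// during development: create the table and any missing indexes, and wait
// for them to be ready (may be slow if indexes need back-filling):
await table.ready({ waitForIndexes: true })

// in production, where the table and indexes already exist:
await table.ready()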
Check the basic compatibility of the models in this table, and assume it has been set up correctly already in dynamodb. Use this instead of .ready() if using DynamoDM in a short-lived environment like a lambda function.
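A minimal sketch (assuming .assumeReady() can be called directly after all models are added):

// e.g. at the start of a lambda handler:
const table = ddm.Table('my-dynamodb-table')
const Comment = table.model(CommentSchema)
// check model compatibility only, without querying DynamoDB state:
table.assumeReady()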
Create and return a Model
in this table, using the specified
schema. Or return the existing Model type for this schema if it has
already been added.
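For example:

const Comment = table.model(CommentSchema)
// calling .model() again with the same schema returns the same class:
const SameComment = table.model(CommentSchema)
// SameComment === Comment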
Delete the DynamoDB table (sends a DeleteTableCommand
with the name of this
table). This will delete all data in the table! Will fail if deletion
protection has been enabled for the table.
Clears the state of this table connection, and if the underlying DynamoDB client was created by this table (i.e. it was not passed in as an option), calls and awaits client.destroy() before returning. Returns nothing and accepts no options.
- .name: The name of the table, as passed to the constructor.
- .client: The DynamoDB client for the table.
- .docClient: The DynamoDB document client for the table.
Create a Schema instance named name, with the given jsonSchema (which may be empty), and options.
The jsonSchema is implied to be an object (type: 'object'), and must define properties.
Other schema keywords may not be used at the top level of the schema, apart from additionalProperties and required.
Schemas may define special fields using built-in schema fragments in
.properties
. If multiple models are defined in the same table, the special
fields must all be compatible (for example all models must use the same names
for their ID fields and type fields).
Supported options:
- options.index: The indexes for this schema, if any. See Indexing Documents for details.
- options.generateId: A function used to generate a new id for documents of this type. Defaults to () => `${schema.name}.${new ObjectId()}`.
- options.versioning: Pass false to disable versioning for instances of this schema.
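For example, a schema using these options (a minimal sketch; the randomUUID-based id generator is just an illustration, any function returning a unique string will do):

import { randomUUID } from 'node:crypto'

const SessionSchema = ddm.Schema('session', {
    properties: { userAgent: { type: 'string' } }
}, {
    // custom id generation instead of the default scheme:
    generateId: () => `session.${randomUUID()}`,
    // disable the version field for documents of this model:
    versioning: false
})
const Session = table.model(SessionSchema)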
After creating a schema, .methods, .statics, .virtuals, and .converters may be defined. These will be added to the model classes and document instances created from this schema.
Because DynamoDM uses the DynamoDB document client, native javascript types such as Arrays and Objects are converted to and from their corresponding DynamoDB types automatically. The built-in schema types can also be used to conveniently convert numbers to Date objects and binary data to Buffer objects.
Defining a model of type 'any', that has no restrictions on its fields:
const AnythingSchema = table.Schema('any')
const Anything = table.model(AnythingSchema)
await (new Anything({ someField: 123 })).save()
await (new Anything({ someField: 'foo' })).save()
Defining a model with nested object fields (M
map type in DynamoDB):
const FooSchema = table.Schema('foo', {
properties: {
nested: {
type: 'object',
properties: {
field1: {type: 'number'},
field2: {type: 'string'},
}
},
}
})
const Foo = table.model(FooSchema)
const f1 = await (new Foo({ nested: { field1: 123 } })).save()
const f2 = await (new Foo({ nested: { field2: 'a string' } })).save()
// { nested: {field1: 123}, type:'foo', id: ... }
console.log(await Foo.getById(f1.id))
Defining a model with a timestamp field (a Date object on the model which is stored as a number in DynamoDB), which has an index that can be used for range queries:
const CommentSchema = table.Schema('comment', {
properties: {
text: {type: 'string'},
commentedAt: DynamoDM().Timestamp,
}
}, {
index: {
myFirstIndex: {
// every index must have a hash key for which an exact
// value is supplied to any query. The built-in .type
// field is often a sensible choice of hash key:
hashKey: "type",
sortKey: "commentedAt"
}
}
})
const Comment = table.model(CommentSchema)
const c1 = await (new Comment({ text: 'some text', commentedAt: new Date() })).save()
// { text: 'some text', commentedAt: 2028-02-29T16:43:53.656Z, type:'comment', id: ... }
console.log(await Comment.getById(c1.id))
const recentComments = await Comment.queryMany({
type: 'comment',
commentedAt: { $gt: new Date(Date.now() - 60*60*24*1000) }
})
// [ { text: 'some text', commentedAt: 2028-02-29T16:43:53.656Z, type:'comment', id: ... } ]
console.log(recentComments)
- DynamoDM().Timestamp: Converted to a Date object on load. Saved as a DynamoDB N (number) type (the .getTime() value).
- DynamoDM().Binary: Converted to a Buffer on load. Saved as a DynamoDB B (binary) type. DynamoDB binary types are otherwise returned as Uint8Arrays.
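For example, a model using both built-in types (a minimal sketch):

const FileSchema = ddm.Schema('file', {
    properties: {
        name: { type: 'string' },
        uploadedAt: ddm.Timestamp,
        contents: ddm.Binary
    }
})
const File = table.model(FileSchema)
await table.ready()
const f = await (new File({
    name: 'hello.txt',
    uploadedAt: new Date(),
    contents: Buffer.from('hello world')
})).save()
// .uploadedAt is loaded back as a Date, and .contents as a Buffer:
const loaded = await File.getById(f.id)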
Special fields are defined by using fragments of schema by value.
- DynamoDM().DocIdField: used to indicate the id field, used by getById and other methods. The default id field name is id.
- DynamoDM().TypeField: used to indicate the type field, which stores the name of the model that a saved document was created with. The default type field name is type.
- DynamoDM().VersionField: the version field, which stores a number that is incremented by 1 each time a model is saved, and is used to prevent data from being silently overwritten by multiple clients accessing the same document. The default version field name is v. See document versioning for details.
- DynamoDM().CreatedAtField: used to indicate a timestamp field that is set when a model is first created by dynamodm. This field is not used unless you include this schema fragment in a model's schema.
- DynamoDM().UpdatedAtField: used to indicate a timestamp field that is updated whenever .save() is called on a document. This field is not used unless you include this schema fragment in a model's schema.
All models in the same Table must share the same .id and .type fields, identified by the built-in DocIdField and TypeField schemas. If they don't, then an error will be thrown when calling table.ready().
For example, declaring models that use ._dynamodm_id
as the id field, instead of
the default .id
:
import DynamoDM from 'dynamodm'
const ddm = DynamoDM()
const table = ddm.Table('my-table-name');
const Model1 = table.model(ddm.Schema('m1', {
properties: {
_dynamodm_id: ddm.DocIdField
}
}));
const Model2 = table.model(ddm.Schema('m2', {
properties: {
_dynamodm_id: ddm.DocIdField
}
}));
// if any models have been added to the table that use a different id field
// name, this will throw:
await table.ready();
const m1 = await (new Model1()).save();
const m2 = await (new Model2()).save();
console.log(m1._dynamodm_id);
Instance methods on a model may be defined by assigning to schema.methods:
const CommentSchema = table.Schema('comment', {
properties: {text: {type: 'string'}}
})
CommentSchema.methods.countWords = function() {
return this.text.split(/\s+/).length
}
const Comment = table.model(CommentSchema)
const comment = new Comment({text:'text for my comment'})
const wc = comment.countWords()
Static methods on a model may be defined by assigning to schema.statics:
const CommentSchema = table.Schema('comment', {
properties: {text: {type: 'string'}, user: ddm.DocId}
})
CommentSchema.statics.createAndSaveForUser = async function(user, properties) {
// in static methods 'this' is the model prototype:
const comment = new this(properties)
comment.user = user.id
await comment.save()
return comment
}
const Comment = table.model(CommentSchema)
const aComment = await Comment.createAndSaveForUser(
aUser, {text: 'my comment text'}
)
Virtual properties for a model may be defined by assigning to schema.virtuals. Virtual properties are useful for computing properties that are required by the application but which are not saved in the database, or for making the separate parts of a compound property easily accessible.
Virtual properties can either be a string alias for another property, in which case a getter and setter for the property are defined automatically:
const CommentSchema = table.Schema('comment', {
properties: {text: {type: 'string'}}
})
CommentSchema.virtuals.someText = 'text'
const Comment = table.model(CommentSchema)
const comment = new Comment({text:'text for my comment'})
console.log(comment.someText) // 'text for my comment'
comment.someText = 'new text'
await comment.save()
console.log(comment.text) // 'new text'
Or a data descriptor or accessor descriptor that will be passed to Object.defineProperties, and which defines its own get and/or set methods:
const CommentSchema = table.Schema('comment', {
properties: {text: {type: 'string'}}
})
CommentSchema.virtuals.wordCount = {
get: function() {
return this.text.split(/\s+/).length
}
}
const Comment = table.model(CommentSchema)
const comment = new Comment({text:'text for my comment'})
const wc = comment.wordCount
Virtual properties must be synchronous, but sometimes it's useful to asynchronously compute field values. To enable this, .toObject() will asynchronously iterate over the array of Schema.converters when converting a document to a plain object.
Converters can also be used to redact fields that should be hidden from the serialised versions of documents (for example when serialising for an API).
.converters is an array, and the converters are always executed in order:
const UserSchema = table.Schema('user', {
properties: {emailAddress: {type: 'string'}, name: {type: 'string'}}
})
// converter to count the comments this user has made:
UserSchema.converters.push(async function(value, options) {
// in converters 'this' is the document being converted; get a handle to a
// previously defined Comment Model from its schema via the model's table:
const Comment = this.constructor.table.model(CommentSchema)
// update value asynchronously
value.commentCount = (await Comment.queryManyIds(
{ user: this.id },
{ limit: 100 }
)).length
// converters must return the new value
return value
})
// converter to redact the email address:
UserSchema.converters.push((value, options) => {
delete value.emailAddress
// the converted value will no longer have .emailAddress, but
// 'this.emailAddress' is still available to subsequent
// converters if they need it
return value
})
// converter that uses an option:
UserSchema.converters.push((value, options) => {
value.newField = options.someOptionForConverters
return value
})
const User = table.model(UserSchema)
const user = await User.getById('user.someid')
const asPlainObj = await user.toObject({
someOptionForConverters: 'foo'
})
// { commentCount: 4, newField: 'foo', type: 'user', id: ...}
console.log(asPlainObj)
Model types are the main way that documents stored in dynamodb are accessed. A
unique class is created for each model type in a table, with the name
Model_schemaname
. All methods are provided by an internal base class
(BaseModel
), which is not directly accessible.
Instances of a model (const doc = new MyModel(properties)
) are referred to as
Documents.
To set fields in the database, set properties on a document and then call
doc.save()
. There are no limits on field names that can be used, apart from
the normal javascript reserved names like constructor
.
Models are created by calling table.model() with a schema.
Each model class that is created has static fields:
- Model.type: The name of the schema that was used to create this model (which is the same as the value of the built-in type field for documents of this model type).
- Model.table: The table in which this model was created.
For example:
const MyFooModel = table.model(ddm.Schema('foo'));
// MyFooModel.table === table
// MyFooModel.type === 'foo'
// these are static, so only on the model class, not on its instances:
const fooDoc = new MyFooModel();
// fooDoc.table === undefined
Create a new document (a model instance) with the specified properties.
const aComment = new Comment({
text: 'some text',
user: aUser.id,
commentTime: new Date()
});
Save the current version of this document to the database. If this document was loaded from the database then the existing document will be updated, otherwise a new document will be created.
Save a new document:
const aComment = new Comment({
text: 'some text',
user: aUser.id,
});
await aComment.save();
Update and save an existing document:
const aComment = await Comment.getById(someId);
aComment.text = 'new text';
await aComment.save();
Delete a document.
const aComment = await Comment.getById(someId);
await aComment.delete();
Convert a document into a plain object representation (i.e. suitable for JSON stringification):
Note that this method is asynchronous (returns a Promise that must be awaited),
because it may execute the .converters
that the
schema defines for this model type.
const aComment = new Comment({
text: 'some text',
user: aUser.id,
});
await aComment.save();
const stringified = JSON.stringify(await aComment.toObject());
The version field of a model is incremented each time it is saved, starting at 0 for un-saved models. The .save() and .remove() methods check that the version in the database is the same as the current one using a Condition Expression before updating or deleting the data (they fail with an error if the version does not match).
Versioning can be disabled for a model by setting the Schema's options.versioning property to false.
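For example, the conflict that versioning protects against (a minimal sketch; the exact error type is not shown here):

// two clients load the same document:
const copy1 = await Comment.getById(someId)
const copy2 = await Comment.getById(someId)

copy1.text = 'edit from client 1'
await copy1.save() // ok, the stored version is incremented

copy2.text = 'edit from client 2'
try {
    // fails: the stored version no longer matches copy2's version
    await copy2.save()
} catch (err) {
    // reload the document and retry, or report a conflict
}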
Get a document by its ID. By default models use .id
as the ID field. It's
possible to change this by using the built-in schema
fragments in your model's schema.
With the default ID field (.id
):
const aComment = await Comment.getById(someId);
// aComment.id === someId
With a custom ID field:
import DynamoDM from 'dynamodm'
const ddm = DynamoDM()
const table = ddm.Table('my-table-name');
const FooSchema = ddm.Schema('foo', {
properties: {
_dynamodm_id: ddm.DocIdField
}
});
const Foo = table.model(FooSchema);
// if any models have been added to the table that use a different id field
// name, this will throw:
await table.ready();
const a = await (new Foo()).save();
const b = await Foo.getById(a._dynamodm_id);
As Model.getById
, but accepts an array of
up to 100 ids to be fetched in a batch.
The query API accepts mongo-like queries, of the form
{ fieldName: valueToSearchFor }
For indexes over a single field (where the single field is the hash key), values can only be queried by equality. However, since Global Secondary Indexes may contain multiple items with the same hash key value, multiple results may still match the query.
A limited set of non-equality query operators are supported. They may be used only on fields for which an index with a sort key (also known as a range key) has been declared, and always require a value to be specified for the corresponding index's hash key.
See Indexing Documents for declaring indexes.
- $gt: Find items where the specified field has a value strictly greater than the supplied value.
  { a: "some value", // the .a field must be the GSI hash key
    b: { $gt: 123 }  // the .b field must be the GSI sort key
  }
- $gte: Find items where the specified field has a value greater than or equal to the supplied value.
  { a: "some value",  // the .a field must be the GSI hash key
    b: { $gte: 123 }  // the .b field must be the GSI sort key
  }
- $lt: Find items where the specified field has a value strictly less than the supplied value.
  { a: "some value", // the .a field must be the GSI hash key
    b: { $lt: 123 }  // the .b field must be the GSI sort key
  }
- $lte: Find items where the specified field has a value less than or equal to the supplied value.
  { a: "some value",  // the .a field must be the GSI hash key
    b: { $lte: 123 }  // the .b field must be the GSI sort key
  }
- $between: Find items where the specified field has a value greater than or equal to the first value, and less than or equal to the second value.
  { a: "some value",            // the .a field must be the GSI hash key
    b: { $between: [123, 234] } // the .b field must be the GSI sort key
  }
- $begins: Find items where the specified field (which must be a string type) begins with the specified prefix.
  { a: "some value", // the .a field must be the GSI hash key
    // the .b field must be the GSI sort key, and the type of .b must be string:
    b: { $begins: "some prefix" }
  }
Querying for a single document property (a dynamodb attribute) named someField, equal to a value "someValue". This requires an index that includes someField as its hash key:
const result = await Comment.queryOne({
someField: "someValue"
})
Querying for two properties named field1 and field2, equal to the values "v1" and 2. This requires an index that either:
- has field1 as its hash key, and field2 as its sort key, or:
- has field2 as its hash key, and field1 as its sort key.
Note that this query may return multiple results, since neither hash key nor sort key values in global secondary indexes are necessarily unique.
const results = await Comment.queryMany({
field1: "v1",
field2: 2
})
If you are always querying for equality on two fields, then consider combining
them into a single field, and using .virtuals
to make them
separately accessible.
Querying for a value range. Using the range operators $lt
, $lte
, $gt
,
$gte
, $between
or $begins
requires a sort key, and always also requires
that a hash key is specified by value.
const MyModelSchema = ddm.Schema('myModel', {
properties: {
field1: {type: 'string'},
field2: {type: 'string'}
}
}, {
index: {
myIndexName: {
hashKey: 'field1',
sortKey: 'field2'
}
}
})
const MyModel = table.model(MyModelSchema);
const results = await MyModel.queryMany({
field1: "v1",
field2: {
$gt: "2013-01-28"
}
})
If the query includes a sort key, then results will be ordered by the sort key. Otherwise the order of query results is undefined. The order can be reversed by setting ScanIndexForward: false in options.rawQueryOptions.
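For example, returning the most recent matches first (a minimal sketch reusing the Comment model and index from the timestamp example above):

const newestFirst = await Comment.queryMany({
    type: 'comment',
    commentedAt: { $gt: new Date(Date.now() - 60*60*24*1000) }
}, {
    // reverse the sort-key order of the index:
    rawQueryOptions: { ScanIndexForward: false }
})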
Query for a single document. See query format for the supported query format.
Supported options:
- abortSignal: The .signal of an AbortController, which may be used to interrupt the asynchronous request.
- startAfter: A document after which to search for the next query result. This can be used for pagination by passing a result from a previous query.
- rawQueryOptions
- rawFetchOptions
Resolves with a document instance of the model type on which this was called, or null if there were no results. Rejects if there's an error.
Query for the ID of a single model. See query format for the supported query format.
Supported options:
- abortSignal: The .signal of an AbortController, which may be used to interrupt the asynchronous request.
- startAfter: A document after which to search for the next query result. This can be used for pagination by passing a result from a previous query.
- rawQueryOptions
Resolves with a document id (string), or null if no document matched the query. Rejects if there's an error.
Query for an array of documents. See query format for the supported query format.
Supported options:
- limit: The maximum number of models to return. May be combined with startAfter to paginate results.
- abortSignal: The .signal of an AbortController, which may be used to interrupt the asynchronous request.
- startAfter: A document after which to search for the next query result. This can be used for pagination by passing a result from a previous query.
- rawQueryOptions
- rawFetchOptions
Resolves with an array of document instances of the model type on which this was called, or an empty array if there were no results. Rejects if there's an error.
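For example, paginating results with limit and startAfter (a minimal sketch; assumes the Comment model and user index from the earlier examples):

// first page of up to 20 comments for this user:
const page1 = await Comment.queryMany({ user: aUser.id }, { limit: 20 })
// next page: pass the last document of the previous page as startAfter:
const page2 = await Comment.queryMany(
    { user: aUser.id },
    { limit: 20, startAfter: page1[page1.length - 1] }
)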
Query for an array of document Ids. See query format for the supported query format.
Supported options:
- limit: The maximum number of ids to return. May be combined with startAfter to paginate results.
- abortSignal: The .signal of an AbortController, which may be used to interrupt the asynchronous request.
- startAfter: A document after which to search for the next query result. This can be used for pagination by passing a result from a previous query.
- rawQueryOptions
Resolves with an array of document ids (strings), or an empty array if there were no results. Rejects if there's an error.
The raw query API allows queries to be executed with a raw lib-dynamodb query, of the form:
{
IndexName: <name of index to query against>,
KeyConditionExpression: <key condition expression>,
ExpressionAttributeValues: <expression attribute values>,
ExpressionAttributeNames: <expression attribute names>,
Limit: <query document limit>,
...
}
The index name is mandatory, since it cannot be determined automatically, however the table name does not need to be provided.
The key condition expression, expression attribute values, and expression attribute names must all be specified. Other values supported by the query command are optional.
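For example, a concrete raw query of this form against the findByUser index declared earlier (a sketch; the attribute names and the numeric timestamp value are assumptions based on the earlier examples, and the object would be passed to the raw query methods described below):

const rawQuery = {
    IndexName: 'findByUser',
    KeyConditionExpression: '#u = :user AND #c > :since',
    ExpressionAttributeNames: { '#u': 'user', '#c': 'createdAt' },
    ExpressionAttributeValues: {
        ':user': aUser.id,
        // timestamps are stored as numbers (.getTime() values):
        ':since': Date.now() - 60*60*24*1000
    },
    Limit: 50
}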
Send a raw query command and return a single document ID.
Supported rawOptions:
- abortSignal: The .signal of an AbortController, which may be used to interrupt the asynchronous request.
Send a raw query command and return an array of document IDs.
Supported rawOptions:
- limit: maximum number of IDs to return. Defaults to Infinity.
- abortSignal: The .signal of an AbortController, which may be used to interrupt the asynchronous request.
An async generator that yields IDs (up to rawOptions.limit, which may be Infinity).
Supported rawOptions:
- limit: maximum number of IDs to return. Defaults to Infinity.
- abortSignal: The .signal of an AbortController, which may be used to interrupt the asynchronous request.
DynamoDM supports only Global Secondary Indexes. Any document field name which
is indexed must have the same type in all documents in the table in which it
occurs (this is checked by .ready()
).
An index may have either:
- just a hash key (which need not be unique), which only supports queries by exact value.
- Or a hash key and a sort key (range key), where the hash key must be specified by exact value, but the sort key supports range queries.
To specify an index, use the .index
option when creating a
Schema:
The .index option is an object where the fields are the names of the indexes, and the value is either an object specifying the hash key and optionally the sort key for the index, or it may just be the value 1 or true, indicating that the index name is the same as the hash key of the index and there is no sort key:
{
// an index called anIndexName where .field1 is the
// hash key and .field2 is the sort key
anIndexName: {
hashKey: 'field1',
sortKey: 'field2',
},
// an index called 'field3' where `field3` is the hash
// key, and there is no sort key:
field3: 1
}
All fields referred to in the index option must be defined in the schema. This is because the types of the fields need to be known to use and create the index.
Example:
const CommentSchema = ddm.Schema('c', {
properties: {
text: {type: 'string' },
user: ddm.DocId,
section: {type: 'string' },
createdAt: ddm.CreatedAtField
}
}, {
index: {
findByUser: {
hashKey: 'user',
sortKey: 'createdAt'
},
section: 1
}
})
const Comment = table.model(CommentSchema)
// ...
console.log(await Comment.queryMany({ user: userId }))
console.log(await Comment.queryMany({ user: userId, createdAt: { $gt: new Date('2024-01-01') } }))
console.log(await Comment.queryMany({ section: 'thread-123' }))
A DynamoDB table supports up to 20 global secondary indexes in the default quota. DynamoDM creates one built-in index on the id field.
All documents in the same table share the same indexes, and all documents that include a field that is used as the hash key of an index will be included in that index, even if they are not the same type as the schema that declared the index.
This can be advantageous, by allowing multiple document types to share a single index (if multiple models declare the same index, DynamoDM will only create it once), but care must be taken to ensure that your query only returns documents of the desired type.
The easiest way to share indexes between model types is by using the built-in
type field as the hash key of the index, for
example, to allow both Comments
and Uploads
belonging to a particular user
to be found using the same index:
import DynamoDM from 'dynamodm'
// get an instance of the API (options can be passed here)
const ddm = DynamoDM()
// get a reference to a table:
const table = ddm.Table('my-dynamodb-table')
// Create User and Comment models with their JSON schemas in this table:
const UserSchema = ddm.Schema('user', { })
const CommentSchema = ddm.Schema('comment', {
properties: {
text: { type: 'string' },
user: ddm.DocId
}
}, {
index: {
findByUser: {
hashKey: 'type',
sortKey: 'user'
}
}
})
const UploadSchema = ddm.Schema('upload', {
properties: {
url: { type: 'string' },
user: ddm.DocId
}
}, {
index: {
findByUser: {
hashKey: 'type',
sortKey: 'user'
}
}
})
const User = table.model(UserSchema)
const Comment = table.model(CommentSchema)
const Upload = table.model(UploadSchema)
await table.ready()
// both these queries will use the findByUser index. Since the hash
// key of the index is `type`, we can be sure that only documents
// of the correct type are returned to each query:
const commentsForUser = await Comment.queryMany({
type: CommentSchema.name, user: aUser.id
})
const uploadsForUser = await Upload.queryMany({
type: UploadSchema.name, user: aUser.id
})
It's possible to extend this idea to take advantage of sorting within the sort
key. For example, if we want to be able to efficiently find recent uploads and
comments for a single user we can create a compound property user_and_time
that is used as the sort key of the index, and take advantage of virtual
properties to make its details transparent to model users. See
examples/fields_sharing_index.mjs
for an implementation.
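A minimal sketch of the idea (the field and index names here are illustrative, not taken from the example file):

const UploadSchema = ddm.Schema('upload', {
    properties: {
        url: { type: 'string' },
        user: ddm.DocId,
        // compound property: '<user id>#<ISO timestamp>'
        user_and_time: { type: 'string' }
    }
}, {
    index: {
        findByUserAndTime: { hashKey: 'type', sortKey: 'user_and_time' }
    }
})
// a virtual makes part of the compound property easy to read:
UploadSchema.virtuals.uploadedAt = {
    get: function() { return new Date(this.user_and_time.split('#')[1]) }
}
const Upload = table.model(UploadSchema)
await table.ready()
await (new Upload({
    url: 'https://example.com/a.jpg',
    user: aUser.id,
    user_and_time: `${aUser.id}#${new Date().toISOString()}`
})).save()
// recent uploads for one user, in time order (ISO strings sort lexicographically):
const recent = await Upload.queryMany({
    type: 'upload',
    user_and_time: { $between: [`${aUser.id}#2024-01-01`, `${aUser.id}#~`] }
})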
Please open a github issue :)
This project is supported by:
- TraitorBird, simple canary tokens.
- Coggle, simple collaborative mind maps.