From 4431b1043d025a36ca3b64c17b3398de606ec6e8 Mon Sep 17 00:00:00 2001
From: Ashley Brown <98361885+mdb-ashley@users.noreply.github.com>
Date: Thu, 18 Apr 2024 11:01:30 -0400
Subject: [PATCH] DOCSP-38724 - Merge master into v8.0 (#7470)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* DOCS-16509 Add vectorSearch to Agg Stage Table (#6496) * DOCS-16509 Add vectorSearch to Agg Stage Table * build error * atlas note * copy * add version * DOCS-12059 documents network.physicalBytesIn and network.physicalBytesOut (#6465) * DOCS-12059 documents network.physicalBytesIn and network.physicalBytesOut * internal review * internal review * DOCSP-34198: kotlin code for query nulls (#6536) * DOCSP-34198: kotlin code for query nulls * add tab * fixed source code, testing if build errors resolve * DOCSP-37157 Removed duplicate text, fixed version (#6551) * Removed duplicate text * Fixed version number * DOCSP-37115 Time series granularity recommendations and limits (#6554) * Fixed time series granularity example and added a limitation on max bucket size * Adjusted and expanded taxonomy tags * Rounding * Fixed example values * Internal PR feedback * Wording cleanup * Wording cleanup * DOCSP-34200: kotlin code for update docs (#6555) * DOCSP-34200: kotlin code for update docs * JD PR fixes 1 * fix typo * line break * DOCSP-37146 Fixed expireAfterSeconds value and table entry (#6569) * Fixed expireAfterSeconds value and table entry * Taxonomy tagging * spelling * 575 recovery (#6575) * add 2 examples to test output * add tab on host page and add "post" example * fix indent, post-code description and ref, adds c links * more examples * fix links; more testing * more examples * fix build issues * fix build issues * add 10,11,12,and13 * changing tab style * fix indentation with newer style of tabs * fix spacing * missing ` * remove other tabs sections... * add c tab to intro * adding back in tabs * add all remaining c-lang code snippets * fix indentation on post code * update dedent to match updated source code (https://github.com/mongodb/mongo-c-driver/pull/1532) * repushing to bring in latest code file * final tweak for dedentation * update delete page with C examples * add tab to see also section * found more files missing the c-lang tab * fix borked link and update text to post code * how many links can one person break? * review changes * remove cleanup code from snippets and move to the last snippet on each page only * update path to include * paths are not relative * Apply suggestions from code review Co-authored-by: ltran-mdb2 <143426234+ltran-mdb2@users.noreply.github.com> * remove extra line * Update source/tutorial/insert-documents.txt --------- Co-authored-by: ltran-mdb2 <143426234+ltran-mdb2@users.noreply.github.com> * DOCSP-37135 tcmallocAggressiveMemoryDecommit Default Value (#6556) * DOCS-16635 Fix Incorrect convertToCapped Sharding Statement (#6564) * DOCS-16635 Fix Incorrect convertToCapped Sharding Statement * warning title * DOCS-15625-shards-must-be-replica-sets (#6566) * DOCS-15625-shards-must-be-replica-sets * Centralizing content and replacing with includes. * DOCSP-37116 fsync backup clarification (#6539) * DOCSP-37116 fsync Backup Version * Adds notice on mongodump page * Fixes heading * Reworks to existing includes * Fixes build issue * Fixes build issue * Minor edit * Minor edits * Minor edits * Minor edits * Fixes build issue * Minor edits * Adds version limitation to filesystem snapshots.
* Fixes text * Reworks include * Reworks include * Fixes build issue * Reworks include * Reworks include * Vale fixes * Vale fixes * Vale fixes * Fixes link * Fixes link * Fixes per Sarah * Fixes per Nandini * DOCSP-33746 Removes Capped Collection from compact (#6571) * DOCSP-37048 7.2.1 Release Notes and Changelog (#6565) * DOCSP-37048 7.2.1 Release Notes and Changelog * adds release date * Fix indents bug (#6613) * fix indentation errors * another indentation error and missing line * spacing issues within code examples * experimenting removal to track down build error * add back in blocks * fix indentation near bottom of page * more indentation issues found. Updated C# wording to match other languages * throwing darts * two code-blocks mis-indented * DOCSP-37242 Fix Project Stage Syntax Rendering (#6585) * DOCSP-37242 Fix Project Stage Syntax Rendering * build errors * Revert "build errors" This reverts commit 7406d4aeec1fdf5ebb9dad30415b630a9156c60e. * DOCSP-36733 4.4 to 5.0 Upgrade Performance (#6464) * DOCSP-36733 4.4 to 5.0 Upgrade Performance * * * * * * * add SERVER-60176 to past release limitations * JD feedback * * * SH feedback * add note on wiredTiger idle file handles * add read concern changes * GH feedback, remove idle timeout note * fix ref error for vector search tutorial (#6618) * DOCSP 24858: common use of hidden nodes (#6607) * Updates primary use case * Typo fixes * Review suggestion * DOCS-11089 clarify syntax of sum (#6552) * DOCS-11089 clarify syntax of sum * DOCS-11089 copy edit * DOCS-11089 copy edit * DOCS-11089 copy edit * DOCS-11089 tech edit * DOCSP-25129 cursor.count() applySkipLimit limitation (#6495) * DOCSP-25129 cursor.count() applySkipLimit limitation * DOCSP-25129 updates for feedback * Apply suggestions from code review Co-authored-by: Ashley Brown <98361885+mdb-ashley@users.noreply.github.com> * DOCSP-25129 updates for copy feedback --------- Co-authored-by: Ashley Brown <98361885+mdb-ashley@users.noreply.github.com> * DOCS-14965 doc for Enable all accumulators for group pushdown (#6574) * DOCS-14965 doc for Enable all accumulators for group pushdown * DOCS-14965 updates for copy feedback * DOCSP-37098 changelog, release notes for 4.4.29 (#6620) * DOCSP-37098 changelog, release notes for 4.4.29 * DOCSP-37098 changelog, release notes for 4.4.29 * DOCS-14853 doc that $densify stage must be removed prior to downgrade (#6559) * DOCS-14853 doc that densify stage must be removed prior to downgrade * DOCS-14853 updates for copy feedback * fix indentation build error for sum page (#6636) * DOCS-15574 documents $toHashedIndexKey agg stage (#4701) (#6633) * DOCS-15574 documents toHashedIndexKey agg stage * index page * internal review Co-authored-by: jmd-mongo <73852296+jmd-mongo@users.noreply.github.com> * DOCSP-34224: kotlin code for stable api ref (#6632) * DOCSP-34202: kotlin code for create index tutorial (#6630) * DOCSP-34202: kotlin code for create index tutorial * fix driver target * (DOCSP-30364): Clarification for optimized pipeline (#6627) * (DOCSP-30364): Clarification for optimized pipeline * edit * remove admonition * nit * remove and/or * edits * add line highlights * reorder * review fixes * DOCSP-34221: kotlin code for changestream reference (#6631) * DOCSP-34221: kotlin in changestreams * completed changes * JD PR fixes 1 * DOCS-4853 Expose the Reason an Operation Fails Document Validation (#6641) * DOCSP-34201: kotlin code for delete docs (#6563) * DOCSP-34201: kotlin code for delete docs * fixes * fix errors * remove formatting changes * 
remove formatting changes * JA fix * DOCSP-37319: MongoDb -> MongoDB (#6656) * DOCSP-37045: finalize 6.0.14 release notes (#6655) * DOCSP-37099: Update 7.0.6 release notes (#6654) * DOCS-13008-sync-wired-tiger (#6578) * DOCS-13008-sync-wired-tiger * DOCS-13008-sync-wired-tiger * DOCS-13008-sync-wired-tiger * DOCS-13008-sync-wired-tiger * DOCS-13008-sync-wired-tiger * DOCS-13008-sync-wired-tiger * DOCS-13008-sync-wired-tiger * DOCS-13008-sync-wired-tiger --------- Co-authored-by: jason-price-mongodb * DOCSP-26653 doc passing Mongo instance as first argument to Mongo() (#6514) * DOCSP-26653 doc passing Mongo instance as first argument to Mongo() * DOCSP-26653 updates for feedback * Apply suggestions from code review Co-authored-by: Anna Henningsen --------- Co-authored-by: Anna Henningsen * DOCSP-15840 fix currentOp example for running index builds (#6553) * DOCSP-35213 Transaction Error Handling & Retry Logic (#6366) * DOCSP-35213 Transaction Error Handling & Retry Logic * build errors * * * ref text * external feedback * revert to original * revert to externally approved version * DOCSP-37035 release notes, changelogs 5.0.25 (#6640) * DOCSP-37035 release notes, changelogs 5.0.25 * internal review * DOCSP-31784 Raspberry Pi Compatibility Note (#6444) * DOCSP-31784-raspberry-pi-compatibility-note * Empty-Commit * Adding footnote with draft content for PM consideration. * Moving footnote to relevant section. * Changing footnote to note directive. * DOCSP-31784-raspberry-pi-compatibility-note * Empty-Commit * Adding footnote with draft content for PM consideration. * Moving footnote to relevant section. * Changing footnote to note directive. * Updating text per feedback, reformatting as footnote. * Fixing footnote format * Removing duplicate note. * Specifying RPI 4. * Reverting back to note format per style guide. 
* Update source/administration/production-notes.txt Co-authored-by: ianf-mongodb <85948430+ianf-mongodb@users.noreply.github.com> --------- Co-authored-by: ianf-mongodb <85948430+ianf-mongodb@users.noreply.github.com> * DOCS-16032 update CoordinateCommitReturnImmediatelyAfterPersistingDecision when false (#6673) * "version changed" => "version added" (#6629) * "version changed" => "version added" * update wording based on server ticket timing * specific patch versions * (DOCS-16666): Remove outdated VMWare journaling consideration (#6650) * (DOCS-16666): Remove outdated journaling consideration * more removals * wording * DOCSP-37262 7.2.2 Release Notes (#6648) * DOCSP-37262 7.2.2 Release Notes * Fixes build isuse * Fixes text * Fixes text * Fixes per Maria * DOCS-12074 adding oplog section to write commands (#6570) * DOCS-12074 adding oplog section to write commands * DOCS-12074 adding oplog section to write commands * DOCS-12074 adding oplog section to write commands * DOCS-12074 tech edit * DOCS-12074 tech edit * DOCS-12074 tech edit * change wording about name length (#6710) * update version info (#6700) * DOCSP-37207 increase number of dimensions to 4096 (#6579) * DOCSP-36046 Transaction FindOneAndUpdate Doesn't Lock Document (#6100) * DOCSP-36046 Transaction FindOneAndUpdate Doesn't Lock Document * wording * IF feedback: add stale reads to glossary, add procedure steps * formatting * typo * * * IF suggestions * io code block * typo * hide default visibility * nit edit * AK feedback * wording * * * * * * * Asya feedback * Asya feedback 2/2 * final feedback * DOCS-5400 Explain how $mod handles negative numbers (#6701) * DOCS-5400 * update * internal review * DOCSP-37346 Convert Instruqt Find Lab to Drawer (#6691) * DOCSP-37346 Convert Instruqt Find Lab to Drawer * * * * * * * DOCSP-34390 bitAnd Results NumberLong (#6716) * (DOCSP-36567): Add advisory for CVE-86 (#6500) (#6747) * (DOCSP-36567): Add advisory for CVE-86 * fix build error * remove 'mongodb server' from list * add CVE id * review edits * convert to include * add info to prior release pages * February -> Feb in Release Notes (#6754) * DOCSP-17335 document BSONRegExp in the shell (#6722) * DOCSP-17335 document BSONRegExp in the shell * DOCSP-17335 add example for false setting * DOCSP-17335 updates for copy feedback * DOCSP-17405 document bsonRegExp boolean option (#6740) * DOCSP-17405 document bsonRegExp boolean option * DOCSP-17405 updates for AH's feedback * DOCSP-17405 fix for odd rendering issue * DOCS-16665 fixes analyzeShardKey() syntax (#6766) * DOCSP-17405 fix for build errors (#6769) * (DOCS-14917): Add missing data types for boolean conversion (#6762) * (DOCS-14917): Add missing data types for boolean conversion * fix build error * standardize punctuation * remove duplicate table * typo fix * add missing types * add link to bson types page * clarify null * clarify null * (DOCSP-34568): Add cross-links for shard key analysis pages (#6744) * (DOCSP-34568): Add cross-links for shard key analysis pages * wording * wording * wording * wording * wording * wording * address review feedback * wording * wording * add section for viewing sampled queries * DOCSP-21468 document support for snapshot: true in startSession() (#6745) * DOCSP-21468 document support for snapshot: true in startSession() * DOCSP-21468 updates for AH's feedback * Update source/reference/method/Mongo.startSession.txt Co-authored-by: Nick Villahermosa * DOCSP-21468 updates for NV's feedback --------- Co-authored-by: Nick Villahermosa * DOCSP-37081 Set At 
for General Parameters (#6490) * DOCSP-37081 Standardizes phrasing on where to set parameter. * Adds callouts * Adds next set of parameters * Removes double spacing * Reverses and fixes space change. * Fixes missing periods * Reworks include text * Reworks include text * Minor edits * Fixes build issue * Fixes build issue * Fixes per Jocelyn * Fixes per Jocelyn * DOCSP-23869-elem-match-updates (#6625) * DOCSP-23869-elem-match-updates * DOCSP-23869-elem-match-updates * DOCSP-23869-elem-match-updates * DOCSP-23869-elem-match-updates * DOCSP-23869-elem-match-updates * DOCSP-23869-elem-match-updates * DOCSP-23869-elem-match-updates * DOCSP-23869-elem-match-updates * DOCSP-23869-elem-match-updates * DOCSP-23869-elem-match-updates * DOCSP-23869-elem-match-updates * DOCSP-23869-elem-match-updates * DOCSP-23869-elem-match-updates * DOCSP-23869-elem-match-updates * DOCSP-23869-elem-match-updates * DOCSP-23869-elem-match-updates * DOCSP-23869-elem-match-updates * DOCSP-23869-elem-match-updates --------- Co-authored-by: jason-price-mongodb * DOCSP-37645 Refactors userAuthorizationClaims (#6797) * DOCSP-23334 document that unique is not required (#6761) * DOCSP-29878 Reference inspectDepth shell config option (#6788) * DOCSP-29878 Reference inspectDepth shell config option * DOCSP-29878 updates for copy feedback * updated wildcard index page based on translation QA feedback (#6799) * updated wildcard index page based on translation QA feedback * grammar * queryable encryption -> Queryable Encryption (#6800) * queryable encryption -> Queryable Encryption * two more * add collection-specific snippet (#6760) * DOCSP-37592 v4.4 Ref Removal (#6787) * DOCSP-37592 v4.4 Ref Removal * remove 'version 4.4' * track 'mongodb 4.4' * track 'v4.4' * remove rest * * * build errors * add ulimit note to include * hedged read copy * remove repetitive sentence * * * * * namespace * JD + IF feedback * add minimum-lts-version * final build errors * DOCSP-37579 add changelog release notes for 7.0.7 rc2 (#6803) * DOCSP-37579-add-changelog-release-notes-for-7.0.7-rc2 * Fixing spacing, typo in link * Fixing heading on release notes * Changing date to upcoming * DOCS-16576 status rename (#6738) * DOCS-16576 stateTransition changes * Test build issue * Adds rename entry * DOCS-16576 stateTransition changes * Adds release notes and code * Reorders words for active voice * Fixes per Nick * Fixes per Vishnu * DOCSP-26745 clarify sortby limitation (#6816) * DOCSP-26745 clarify sortby limitation * DOCSP-26745 clarify sortby limitation * DOCSP-33683 Merge qe-csfle-restructure to master (#6823) * DOCSP-33685 consolidate key management topics (#5340) * Restructured topics, updated refs * Fixed more refs * Removed unused includes after consolidating * More ref fixes * Added keys-key-vaults to the toc_landing_pages list * Added CSFLE specific information to consolidated content * Added CSFLE specific information to consolidated content * Fixed ref * Internal review feedback * Rebuild * Empty rebuild * DOCSP-33688 Combine auto encryption shared library and mongocryptd install steps (#5580) * Added POC content, cleaned up tab structure, kept libmongocrypt separate * Added doc constants, removed ambiguous refs from to-be-deleted files * Updated refs, made language in install topic QE/CSFLE agnostic * More ref cleanup * Cleaned up some includes * Moved defaults to end of table cells per style guide * Fixed build errors * Moved around some info * Removed original separate topics * Nested heading rendering test * Procedure style test * Changing 
procedure style * Fixed CSFLE wording in an include * Internal review feedback: * DOCSP-34863 qe csfle compatibiity tables backport (#5880) * Update QE and CSFLE compatability tables (#5788) * wip while switching branches * wip while waiting for more deets * ready for review * add facets * fix formatting * add mongodb-crypt dependency * DOCSP-29667-bulk-command-remove-content (#5778) * DOCSP-29667-bulk-command-remove-content * DOCSP-29667-bulk-command-remove-content --------- Co-authored-by: jason-price-mongodb * DOCSP-35223 7.0.5 Release Notes (Final) (#5773) * DOCSP-35223 7.0.5 Release Notes (Final) * Fixes per Maria Prinus * DOCSP-35317 5.0.24 Release Notes (#5749) * DOCSP-35317 5.0.24 Release Notes * * * fix affected versions * Add tcmallocAggressiveMemoryDecommit (#5650) * add tcmallocAggressiveMemoryDecommit * wordsmithing * external review suggestions and clarifications * writing review * Update source/reference/parameters.txt Co-authored-by: Alison Huh <112565127+ajhuh-mdb@users.noreply.github.com> * Update source/reference/parameters.txt Co-authored-by: Alison Huh <112565127+ajhuh-mdb@users.noreply.github.com> * fix formatting * final? review * Update source/reference/parameters.txt Co-authored-by: Alison Huh <112565127+ajhuh-mdb@users.noreply.github.com> * final changes, I hope... * fix formatting --------- Co-authored-by: Alison Huh <112565127+ajhuh-mdb@users.noreply.github.com> * fix single-hash issue (#5789) * DOCSP-35006-glossary-4 (#5642) * DOCSP-35006-glossary-4 * DOCSP-35006-glossary-4 * DOCSP-35006-glossary-4 * DOCSP-35006-glossary-4 * DOCSP-35006-glossary-4 * DOCSP-35006-glossary-4 * DOCSP-35006-glossary-4 * DOCSP-35006-glossary-4 * DOCSP-35006-glossary-4 * DOCSP-35006-glossary-4 * DOCSP-35006-glossary-4 * DOCSP-35006-glossary-4 * DOCSP-35006-glossary-4 * DOCSP-35006-glossary-4 * DOCSP-35006-glossary-4 * DOCSP-35006-glossary-4 * DOCSP-35006-glossary-4 * DOCSP-35006-glossary-4 * DOCSP-35006-glossary-4 * DOCSP-35006-glossary-4 * DOCSP-35006-glossary-4 * DOCSP-35006-glossary-4 * DOCSP-35006-glossary-4 * DOCSP-35006-glossary-4 * DOCSP-35006-glossary-4 * DOCSP-35006-glossary-4 * DOCSP-35006-glossary-4 * DOCSP-35006-glossary-4 * DOCSP-35006-glossary-4 * DOCSP-35006-glossary-4 --------- Co-authored-by: jason-price-mongodb * DOCSP-34943-wildcard-index (#5764) * DOCSP-34943-wildcard-index * DOCSP-34943-wildcard-index * DOCSP-34943-wildcard-index * DOCSP-34943-wildcard-index * DOCSP-34943-wildcard-index --------- Co-authored-by: jason-price-mongodb * change scala version --------- Co-authored-by: jason-price-mongodb <69260375+jason-price-mongodb@users.noreply.github.com> Co-authored-by: jason-price-mongodb Co-authored-by: Kenneth P. J. Dyer <93145796+kennethdyer@users.noreply.github.com> Co-authored-by: Alison Huh <112565127+ajhuh-mdb@users.noreply.github.com> * Removed facet:genre:reference tag * Separate ref for Java Reactive Streams * Removed reference facet --------- Co-authored-by: MongoCaleb <32645888+MongoCaleb@users.noreply.github.com> Co-authored-by: jason-price-mongodb <69260375+jason-price-mongodb@users.noreply.github.com> Co-authored-by: jason-price-mongodb Co-authored-by: Kenneth P. J. 
Dyer <93145796+kennethdyer@users.noreply.github.com> Co-authored-by: Alison Huh <112565127+ajhuh-mdb@users.noreply.github.com> * DOCSP-33709 Create encryption schema topic (#5666) * Initial IA restructure * Ref fixes * Ref fixes * Fixed substeps * TOC updates * Rebuilding * Apply suggestions from code review Co-authored-by: ianf-mongodb <85948430+ianf-mongodb@users.noreply.github.com> * Changed substeps to alphabetical, removed step title punctuation * Added suggestion to encrypt fields in existing documents * Testing procedure numbering reset * Moved and extended the contention factor include. Added Kenn's context info to the contention factor overview to provide more examples of high and low cardinality * Testing auto enumeration on nested steps * Testing doc constant in glossary * Testing ref anchor within step content * Testing ref anchor within step content * Testing ref anchor within step content * Removed test line * Testing glossary entry with doc constant * Testing glossary entry with doc constant * Internal review feedback * Fixed include path * Taxonomy tagging * Taxonomy tagging * Apply suggestions from code review Co-authored-by: ianf-mongodb <85948430+ianf-mongodb@users.noreply.github.com> * Title underlines --------- Co-authored-by: ianf-mongodb <85948430+ianf-mongodb@users.noreply.github.com> * DOCSP-33710 Encrypted Collections topic (#5752) * Initial TOC placement and editorial pass * Added taxonomy tagging * Revert to clean commit * Taxonomy tag fix * DOCSP-33686 Combine QE and CSFLE "Compatibility" topics (#5518) * Merge fixes * Merge fixes * Merge fixes * Fixed table formatting * Merge fixes * Table fixes, moved drivers tutorials to Tutorials landing page * Added ref * Moved include and updated refs to it * Fixed refs * Taxonomy tagging * Removed reference facet tags * DOCSP-33687 Add in use encryption stub page (#5538) * Added content from POC branch, fixed some incorrect statements * Fixed refs * Removed broken TOC entry in order to run staging build * Update source/core/security-in-use-encryption.txt Formatting fix Co-authored-by: ianf-mongodb <85948430+ianf-mongodb@users.noreply.github.com> * Update source/core/security-in-use-encryption.txt Internal review Co-authored-by: ianf-mongodb <85948430+ianf-mongodb@users.noreply.github.com> * Update source/core/security-in-use-encryption.txt Co-authored-by: ianf-mongodb <85948430+ianf-mongodb@users.noreply.github.com> * Added TODO comments for related PR * Rebuild * Added taxonomy tagging --------- Co-authored-by: ianf-mongodb <85948430+ianf-mongodb@users.noreply.github.com> * DOCSP-33690 Create "Overview: Enable QE" section (#5539) * Added draft content from POC * Kept install as a separate topic, removed mention of rotating keys since that will not go under this section * Mentioned MDB deployment setup * Fixed ref names * Added stub topic for using QE * Moved stub topics into TOC * DOCSP-33688 Combine auto encryption shared library and mongocryptd install steps (#5580) * Added POC content, cleaned up tab structure, kept libmongocrypt separate * Added doc constants, removed ambiguous refs from to-be-deleted files * Updated refs, made language in install topic QE/CSFLE agnostic * More ref cleanup * Cleaned up some includes * Moved defaults to end of table cells per style guide * Fixed build errors * Moved around some info * Removed original separate topics * Nested heading rendering test * Procedure style test * Changing procedure style * Fixed CSFLE wording in an include * Internal review feedback: * Moved stub topics * 
Fixed drawer topics to not be click-through * Refactored driver instructions into a procedure * Added libmongocrypt to procedure flow * Removed duplicate TOC entries * Rendering fix * Rendering fix * Fixed drawer topics * Removed unused includes, cleaned up tasks, re-ordered topics * Cleaned up before you start sections * Clarified config for different KMS provider * Clarified qe library name * Tab rendering test * Indentation * Indentation * Whitespace * Whitespace * Indentation * typo * Un-nested some steps * Fixed next steps * Elevated AWS note to a warning * Undid the callout change * Typo fix * Apply suggestions from code review Co-authored-by: Jordan Smith <45415425+jordan-smith721@users.noreply.github.com> * Rebuild * Apply suggestions from code review Co-authored-by: Jordan Smith <45415425+jordan-smith721@users.noreply.github.com> * Rebuild * Update source/core/queryable-encryption/qe-create-application.txt Co-authored-by: Jordan Smith <45415425+jordan-smith721@users.noreply.github.com> * Update source/core/queryable-encryption/qe-create-application.txt Co-authored-by: Jordan Smith <45415425+jordan-smith721@users.noreply.github.com> * Update source/core/queryable-encryption/qe-create-application.txt Co-authored-by: Jordan Smith <45415425+jordan-smith721@users.noreply.github.com> * Update source/core/queryable-encryption/qe-create-application.txt Co-authored-by: Jordan Smith <45415425+jordan-smith721@users.noreply.github.com> * capitalization * Tabbed KMS rendering test * Rendering test * Rendering test * Rendering test * Rendering test * Rendering test * Rendering test * Removed comment syntax * Added other key provider steps * Fixed refs * Added better CMK refs * Refs and cleanup * Moved tab formatting changes into the main file and removed test file * Apply suggestions from code review Co-authored-by: Jordan Smith <45415425+jordan-smith721@users.noreply.github.com> * Update source/core/queryable-encryption/qe-create-application.txt Co-authored-by: Jordan Smith <45415425+jordan-smith721@users.noreply.github.com> * Internal review feedback * Apply suggestions from code review Co-authored-by: Ashley Brown <98361885+mdb-ashley@users.noreply.github.com> * Fixed broken links * Fixed broken links * Fixed broken links * Update source/core/queryable-encryption/qe-create-cmk.txt Co-authored-by: Ashley Brown <98361885+mdb-ashley@users.noreply.github.com> --------- Co-authored-by: Jordan Smith <45415425+jordan-smith721@users.noreply.github.com> Co-authored-by: Ashley Brown <98361885+mdb-ashley@users.noreply.github.com> * DOCSP-33704 Use queryable encryption (#6479) * Created stub topic * Initial draft commit * Draft updates * Rendering check * Fixed drawer page opening in TOC * Copy edits * Added encryption schema to topic flow * Topic ref flow and copy edits * Copy edit, removed periods in some steps * Split steps * Consolidated collection creation and document insertion * Ref fixes * Moved around topics per internal review discussion * ref fixes * Copy edits, task flow * Task flow * Update source/core/queryable-encryption/overview-use-qe.txt Co-authored-by: Jordan Smith <45415425+jordan-smith721@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Jordan Smith <45415425+jordan-smith721@users.noreply.github.com> * Moved adminition on explicitly creating a collection * Taxonomy, build fixes * Fixed before you start section * TOC fixes * Rebuild * Wording fix * Apply suggestions from code review Co-authored-by: Jordan Smith 
<45415425+jordan-smith721@users.noreply.github.com> --------- Co-authored-by: Jordan Smith <45415425+jordan-smith721@users.noreply.github.com> * Docsp 36974 Merge master to feature branch (#6535) * DOCSP-34023 Rename WiredTiger Parameters (#5400) * DOCSP-34023 Rename WiredTiger Parameters * KD feedback * * * JO feedback * DOCSP-34498 7.1.1 release notes (#5404) * Generated changelog * Bumped release constant * Generated changelog (#5405) * DOCSP-33214 Sharded Cluster Backup (#4976) * DOCSP-33214 Self-Managed Backups for Sharded Clusters * Backup procedure prep * Backups * Fixes build issue * Tests tutorial * Adds before you begin content * minor text edits * Fixes per Jeff * Fixes rendering error * Fixes rendering error * Fixes rendering error * Fixes per Jeff * Fixes per Jeff * Fixes per Jeff Co-authored-by: Jeff Allen * Fixes per Jeff * Fixes per Jeff * Fixes per Nandinin * Fixes per Nandinin * Fixes per Nandini * Fixes per Maria * fixes per Maria * Fixes per Ashley * Fixes per Nandini * Fixes per Nandini * Docsp-34017-bucket-rounding-seconds additional updates saved file (#5326) * DOCSP-34017-bucket-rounding-seconds * DOCSP-34017-bucket-rounding-seconds * DOCSP-34017-bucket-rounding-seconds * DOCSP-34017-bucket-rounding-seconds * DOCSP-34017-bucket-rounding-seconds * DOCSP-34017-bucket-rounding-seconds --------- Co-authored-by: jason-price-mongodb * Removes artifact from another branch * Removes artifact from another branch * Fixes per Tim * Fixes per Tim * Fixes per Tim * Fixes per Tim * Fixes per Tim * Fixes per Jason * Fixes per Jason * fixes per Jason * Fixes per Jason * Fixes per Jason --------- Co-authored-by: Jeff Allen Co-authored-by: jason-price-mongodb <69260375+jason-price-mongodb@users.noreply.github.com> Co-authored-by: jason-price-mongodb * DOCSP-34011 Server Docs 404 (#5455) * Removing incorrect output from command example due to consistent data set usage throughout doc (#5466) * DOCS-14772-fix renames sharding metadata refresh include (#5475) * DOCSP-34711 Restore wiredTigerMaxCacheOverflowSizeGB (#5472) * Merge 7.2 into master (7.3) (#5465) * DOCSP-29667-bulk-write (#4764) * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write 
* DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write * DOCSP-29667-bulk-write --------- Co-authored-by: jason-price-mongodb * (DOCS-16452) routingTableCacheChunkBucketSize must be greater than 0 (#5198) (#5220) * (DOCS-16452) routingTableCacheChunkBucketSize must be greater than 0 * Includes change from copy review * (DOCS-16412) minDistance and maxDistance can be expressions (#5203) * (DOCS-16412) minDistance and maxDistance can be expressions * Includes changes from tech review * (DOCS-16473) restore role can drop system.views collection (#5200) * DOCS-16384 readPreferenceCounters Metrics in serverStatus (#5196) * DOCS-16384 readPreferenceCounters Metrics in serverStatus * add availability note * rn * order * internal review feedback * AM feedback * update rn * * * (DOCS-16446) Changes initial minimum number of chunks per shard for empty collections (#5216) * (DOCS-16446) Changes initial minimum number of chunks per shard for empty collections * Includes copy review changes * (DOCS-16471) getField supports strings that are not string literal (#5197) * (DOCSP-33111) Reverts changes from #4076 (#5215) (#5236) * DOCSP-33388 Mongos Cursor Results For Non-Existant Dbs (#5244) * DOCSP-33388 Mongos Cursor Results For Non-Existant Dbs * * * * * * * IR Feedback * * * Adds new "specificShard" info and changes formatting (#5251) * replacing PR 5229 * review changes * final review change * DOCS-16389 query plan cache subdocument (#5217) * DOCS-16389 query plan cache subdocument * * * IF feedback * clarity * nit * move to compatibility * * * wording * wording * (DOCS-16390) You can't specify WT encryption options in createCollection (#5260) * (DOCS-16390) You can't specify WT encryption options in createCollection * Includes change from tech review * DOCS-16363 Document killedDueToMaxTimeMSExpired under serverStatus (#5193) * DOCS-16363 Document killedDueToMaxTimeMSExpired under serverStatus * add to TOC * wording * build error * (DOCSP-34349) Adds release notes and compatibility changes for 7.2 (#5242) * DOCS-16363 Document killedDueToMaxTimeMSExpired under serverStatus (#5193) * DOCS-16363 Document killedDueToMaxTimeMSExpired under serverStatus * add to TOC * wording * build error * Includes PM external review changes * Revert "Includes PM external review changes" This reverts commit 0263ebe646848bf840c75b414815d3daa660e9fc. * Includes changes from PM review * Includes copy review changes --------- Co-authored-by: Alison Huh <112565127+ajhuh-mdb@users.noreply.github.com> * Add ``useAuthorizationClaim`` field info. 
(#5258) * Add useAuthorizationClaim * fix formatting and add version * one last formatting tweak * review suggestions * DOCS-16389 query plan cache subdocument (#5217) * DOCS-16389 query plan cache subdocument * * * IF feedback * clarity * nit * move to compatibility * * * wording * wording * (DOCS-16390) You can't specify WT encryption options in createCollection (#5260) * (DOCS-16390) You can't specify WT encryption options in createCollection * Includes change from tech review * DOCS-16363 Document killedDueToMaxTimeMSExpired under serverStatus (#5193) * DOCS-16363 Document killedDueToMaxTimeMSExpired under serverStatus * add to TOC * wording * build error * (DOCSP-34349) Adds release notes and compatibility changes for 7.2 (#5242) * DOCS-16363 Document killedDueToMaxTimeMSExpired under serverStatus (#5193) * DOCS-16363 Document killedDueToMaxTimeMSExpired under serverStatus * add to TOC * wording * build error * Includes PM external review changes * Revert "Includes PM external review changes" This reverts commit 0263ebe646848bf840c75b414815d3daa660e9fc. * Includes changes from PM review * Includes copy review changes --------- Co-authored-by: Alison Huh <112565127+ajhuh-mdb@users.noreply.github.com> * rewording * staging to check formatting * cleaning up the process * formatting foo * more review rewording and table formatting * still trying to get the table format correct... * funwith table formatting... * table formatting... --------- Co-authored-by: Alison Huh <112565127+ajhuh-mdb@users.noreply.github.com> Co-authored-by: Sarah Simpers <82042374+sarahsimpers@users.noreply.github.com> * (DOCS-16394): persistedChunkCacheUpdateMaxBatchSize parameter (#5270) * (DOCS-16394): persistedChunkCacheUpdateMaxBatchSize parameter * add example * update version list * add 7.2 to release notes toc * address review comments * edits * minimalism * tweak * wording * address review comments * Merges master into v7.2 (#5361) * DOCSP-33154 Make covered query info discoverable (#4762) * Summarized covered query benefits. Opted not to use an include * Removed covered query info from wildcard indexes * DOCS-16216 add hint option to distinct command (#4666) * DOCS-16216 add hint option * DOCS-16216 nit fix * DOCS-16216 internal feedback * DOCS-16216 internal feedback * DOCS-16216 update example * DOCS-16216 fix code example * DOCS-16216 feedback * DOCS-16200 $currentOp - targetAllNodes (#4724) * DOCS-16200 Documents targetAllNodes option * Fixes build issue * Fixes build issue * Expands explanation of conditions * Fixes build issue * Fixes per Jeff --------- Co-authored-by: Kenneth P. J. 
Dyer * (DOCSP-32321): Add Atlas-centric content to JSON Schema Validation page (#4765) * Add Atlas compatability and step * Apply Ashley's suggestion Co-authored-by: Ashley Brown <98361885+mdb-ashley@users.noreply.github.com> * Apply Sarah's suggestion Co-authored-by: Sarah Simpers <82042374+sarahsimpers@users.noreply.github.com> --------- Co-authored-by: Ashley Brown <98361885+mdb-ashley@users.noreply.github.com> Co-authored-by: Sarah Simpers <82042374+sarahsimpers@users.noreply.github.com> * (DOCSP-32345) Adds Atlas CTA to convert standalone page (#4769) * DOCSP-33324 Fix Legacy Shell 404 (#4777) * DOCSP-33285 Add Release Date to 6.0.10 RN (#4787) * DOCSP-26175 auditConfig Cluster Parameters (#4790) * DOCSP-32722 Cluster Parameter Reference Page (#4471) (#4682) * DOCSP-32722 Cluster Parameter Reference Page * wording * * * typo * JA feedback * adjust wording under Syntax section * nit edits * DOCSP-32723 changeStreamOptions Reference Page (#4592) (#4710) * DOCSP-32723 changeStreamOptions Reference Page * text styling * indentations * move text to include * JD feedback * DOCSP-32724 auditConfig Cluster Parameter (#4691) * DOCSP-32724 auditConfig Cluster Parameter * build errors * add to release notes * build errors * JD feedback [still in progress] * deprecation warnings * compatibility changes * external review feedback * typo * (DOCSP-32355): Add steps to remove document from the Atlas UI (#4766) * Add Atlas UI steps to page * Revise for updated procedure guidelines * Apply Sarah's suggestion Co-authored-by: Sarah Simpers <82042374+sarahsimpers@users.noreply.github.com> * Move limitation note higher up --------- Co-authored-by: Sarah Simpers <82042374+sarahsimpers@users.noreply.github.com> * (DOCSP-32334) Revamps to include Atlas mention for Atlas Top 250 initiative. * (Revises per copy reviews. 
* (DOCSP-32336): Add Atlas-centric content to Log Messages page (#4772) * Add Atlas UI steps to page * Add atlas-centric content to log messages page * apply Sarah's suggestion Co-authored-by: Sarah Simpers <82042374+sarahsimpers@users.noreply.github.com> --------- Co-authored-by: Sarah Simpers <82042374+sarahsimpers@users.noreply.github.com> * DOCSP 33219 (#4803) * DOCSP-33219 add descending index limitation * empty commit * DOCSP-33219 add descending index limitation * (DOCSP-26200): queryStats aggregation stage (#4524) * (DOCSP-26200): queryStats aggregation stage * WIP * WIP * WIP * Remove accidental file * add TODO * add missing ref * formatting * WIP * WIP * WIP * wording * WIP * edits * add examples * try to fix table format * add transformed example * remove TODO * more table formatting * add atlas data collection * table formatting * clarity * add more context for data collection * table formatting * wording tweaks * address review comments * formatting * use simplified examples * add input/output * edits * add link to transformed example * spacing * edits * address review comments * fix incomplete sentence * rework transformation section * edits * fix collation link formatting * change wording for output fields * edit glossary * more edits * remove glossary entry * formatting * edits * Address review feedback * wording * review edit * wording * WIP review edits * remove Atlas data collection info * (DOCSP-33347): Re-add Atlas data collection info * typo * minimalism * typo * clarity * remove Atlas data collection info * edits * (DOCSP-33347): Re-add Atlas data collection info * add find queryShape properties * change case * formatting * abstracted > normalized consistency * clarify privilege * address review comments * review edits * (DOCSP-33099): queryStats serverStatus metrics (#4811) * (DOCSP-26200): queryStats aggregation stage * WIP * WIP * WIP * Remove accidental file * add TODO * add missing ref * formatting * WIP * WIP * WIP * wording * WIP * edits * add examples * try to fix table format * add transformed example * remove TODO * more table formatting * add atlas data collection * table formatting * clarity * add more context for data collection * table formatting * wording tweaks * address review comments * formatting * use simplified examples * add input/output * edits * add link to transformed example * spacing * edits * (DOCSP-33099): queryStats serverStatus metrics * (DOCSP-33099): serverStatus metrics for queryStats * formatting * clarity * address review comments * fix incomplete sentence * rework transformation section * edits * remove notion of a 'cache' * remove extra queryStats heading * minimalism * fix collation link formatting * move virtual collection details to the queryStats page * change wording for output fields * (DOCSP-26200): queryStats aggregation stage * WIP * WIP * WIP * Remove accidental file * add TODO * add missing ref * formatting * WIP * WIP * WIP * wording * WIP * edits * add examples * try to fix table format * add transformed example * remove TODO * more table formatting * add atlas data collection * table formatting * clarity * add more context for data collection * table formatting * wording tweaks * address review comments * formatting * use simplified examples * add input/output * edits * add link to transformed example * spacing * edits * address review comments * fix incomplete sentence * rework transformation section * edits * fix collation link formatting * change wording for output fields * edit glossary * more edits * remove 
glossary entry * formatting * edits * Address review feedback * wording * review edit * address review comments * edit * wording * WIP review edits * remove Atlas data collection info * (DOCSP-33347): Re-add Atlas data collection info * typo * minimalism * typo * clarity * remove Atlas data collection info * edits * (DOCSP-33347): Re-add Atlas data collection info * add find queryShape properties * change case * formatting * abstracted > normalized consistency * clarify privilege * address review comments * review edits * (DOCSP-33216): Document new binData subtype 8 (#4812) * (DOCSP-26200): queryStats aggregation stage * WIP * WIP * WIP * Remove accidental file * add TODO * add missing ref * formatting * WIP * WIP * WIP * wording * WIP * edits * add examples * try to fix table format * add transformed example * remove TODO * more table formatting * add atlas data collection * table formatting * clarity * add more context for data collection * table formatting * wording tweaks * address review comments * formatting * use simplified examples * add input/output * edits * add link to transformed example * spacing * edits * address review comments * fix incomplete sentence * rework transformation section * edits * fix collation link formatting * change wording for output fields * edit glossary * more edits * remove glossary entry * formatting * edits * (DOCSP-33216): Document new binData subtype 8 * edits * Address review feedback * wording * review feedback * wording * tweaks * typo * clarification * review edit * address review comments * wording * formatting * WIP review edits * remove Atlas data collection info * (DOCSP-33347): Re-add Atlas data collection info * typo * minimalism * typo * clarity * remove Atlas data collection info * edits * (DOCSP-33347): Re-add Atlas data collection info * add find queryShape properties * change case * formatting * abstracted > normalized consistency * clarify privilege * address review comments * review edits * DOCS-16225 TTL Indexes on Capped Collections (#4792) * DOCS-16225 TTL Indexes on Capped Collections * RN order * remove overview header * Docs 16242 Abort Expired Transaction Metrics (#4653) * DOCS-16242 abortExpiredTransactions serverStatus * Adds example * Minor edit * Fixes per Joe --------- Co-authored-by: Kenneth P. J. Dyer * (DOCSP-32357): Add Atlas UI steps to Update Documents page (#4785) * Add Atlas UI steps to page * copy review feedback * Copy review feedback pt 2 * DOCS 13445 adding details to query plans page (#4770) * DOCS-13445 adding details on cached query plans * DOCS-13445 adding details on cached query plans * DOCSP-33235 Add Space on Agg Project Page (#4806) * (DOCSP-32295) Adds Atlas-centric info to manual landing page (#4739) * (DOCSP-32295) Adds Atlas-centric info to manual landing page * Cleans up links and format * Includes tech review changes * Splits out connect step and removes import on first tab * Changes mongosh command line example to atlas setup and adds link * Includes copy and tech review changes * Adds MongoDB manual location for links within these docs * DOCS-16233 Expands port Description (#4649) * DOCS-16233 port 0 configuration * Fixes port number * Adjusts port description * Fixes per Joe --------- Co-authored-by: Kenneth P. J. Dyer * (DOCSP-32332) [Revamp] Adds column for Atlas compatibility. * Revises per copy review. * Revises per copy review. 
* DOCSP-33315 4.4.25 Release Notes Finalize (#4825) * 4.4.25 Release Notes Finalize * * * * * DOCSP-33319 7.0.2 Release Notes (#4826) * DOCSP-27817 manual tagging for docs-mongodb-internal (#4773) * DOCSP-27817 manual tags added to /source/reference/method/ on 9/15 * DOCSP-27817 manual tags for /manual/reference * DOCSP-27817 manual tagging /manual/core/ * manual tags & updated descriptions for /manual/tutorial/ * facet :name: fix for /core/ & remaining server pages * ran the programmatic script again after updating server to docs * Update db.collection.updateOne.txt spacing fix * Update db.collection.updateMany.txt spacing fix * Update db.collection.replaceOne.txt spacing fix * Update db.collection.insertOne.txt spacing fix * Update db.collection.insertMany.txt spacing fix * Update db.collection.findOneAndUpdate.txt spacing fix * Update db.collection.findOne.txt spacing fix * Update db.collection.findAndModify.txt spacing fix * Update db.collection.find.txt spacing fix * Update db.collection.drop.txt spacing fix * Update db.collection.distinct.txt spacing fix * Update db.collection.deleteOne.txt spacing fic * Update db.collection.deleteMany.txt spacing fix * Update db.collection.createIndex.txt spacing fix * db.collection.countDocuments.txt spacing fix * db.collection.count.txt spacing fix * batch 2 spacing fix Co-authored-by: Sarah Olson <98367156+sarah-olson-mongodb@users.noreply.github.com> * db.collection.bulkWrite.txt spacing fix * db.collection.aggregate.txt spacing fix * cursor.sort.txt spacing fix * Date.txt spacing fix * log-messages.txt spacing fix * limits.txt spacing fix * database-references.txt spacing fix * connection-string.txt spacing fix * configuration-options.txt spacing fix * spacing fix collation.txt * spacing fix timeseries-collections.txt * batch 1 spacing fix Co-authored-by: Sarah Olson <98367156+sarah-olson-mongodb@users.noreply.github.com> * Update spacing bson-types.txt * Update query-documents.txt facet name fix * Update query-arrays.txt facet name fix * Update query-array-of-documents.txt facet name fix * Update project-fields-from-query-results.txt facet name fix * Update insert-documents.txt facet name fix * Update getting-started.txt facet name fix * Update expire-data.txt facet name fix * Update deploy-replica-set.txt facet name fix * Update create-users.txt facet name fix * Update configure-ssl.txt facet name fix * Update query-embedded-documents.txt facet name fix * Update query-for-null-fields.txt facet name fix * Update remove-documents.txt facet name fix * Update update-documents-with-aggregation-pipeline.txt facet name fix * Update update-documents.txt facet name fix * type to name & value to values * spacing fixes from Sarah Olson * fixed programming misspelling * geospatial queries language value fix --------- Co-authored-by: Sarah Olson <98367156+sarah-olson-mongodb@users.noreply.github.com> * DOCSP-31180 v4.0 404s, Fixed on master redirects (#4860) * DOCSP-31180 v4.0 404s, Fixed on master redirects * * * (DOCSP-32021)(DOCSP-33353) Adds procedure to query embedded docs in Atlas and updates selector instructions (#4821) * (DOCSP-32021)(DOCSP-33353) Adds procedure to query embedded docs in Atlas and updates selector instructions * Adds some includes to use in other PRs * Includes changes from copy and tech review * Includes copy review fix to format * DOCS-16308 findAndModify and deleteOne partial shard key updates (#4798) * DOCS-16308 partial shard key * DOCS-16308 partial shard key * DOCS-16308 update delete one * DOCS-16308 fix method label * 
DOCS-16308 updates to italicized letters * (DOCS-16318): serverStatus expireAfterSeconds metric for change streams (#4850) * (DOCS-16318): serverStatus expireAfterSeconds metric for change streams * review edits * add new metric to release notes * alphabetize * copy review feedback * (DOCSP-32346) Revamps to include Atlas steps for Atlas Top 250 initiative. * (DOCSP-32333) Revamps database reference page to include Atlas UI steps (#4842) * (DOCSP-32333) Revamps database reference page to include Atlas UI steps * Includes changes from copy review * Fixes copy review issues * (DOCSP-32354): Add Atlas UI steps to Query for Null or Missing Fields page (#4804) * Add Atlas UI steps to page * copy review feedback * Use new includes files * Revises per copy review. * (DOCSP-32012) Adds procedure to project fields from query results in Atlas (#4822) * (DOCSP-32012) Adds procedure to project fields from query results in Atlas * Includes changes from copy review * DOCSP-31171 Concurrent DDL Ops (#4747) * DOCSP-31171 Concurrent DDL Ops * build error * release notes + ddl operations section * remove include * text formatting * alphabetize stages * JA feedback * SS feedback * link glossary * typo * nit feedback * Add CTA for top 250 project (#4807) * (DOCSP-32349) Revamps the expire data page to include Atlas UI (#4827) * (DOCSP-32349) Revamps the expire data page to include Atlas UI * Includes copy review changes * Includes final copy review changes * (DOCSP-32018) Adds procedure to query an array of documents in Atlas (#4824) * (DOCSP-32018) Adds procedure to query an array of documents in Atlas * Includes copy revie w change * (DOCSP-32356): Revamp Updates with Aggregation page for the top 250 project (#4793) * Draft procedures * Add OTP and missing image * Add more links and update headers * Adjust heading depth * Substitutions apparently don't work with links * Incorporate review feedback * Remove images * Incorporate review feedback * Incorporate review feedback * Arbitrary commit to trigger a staging rebuild * DOCSP-33147 improve serverStatus descriptions for metrics (#4805) * DOCSP-33147 improve serverStatus descriptions for metrics * DOCSP-33147 improve serverStatus descriptions for metrics * DOCSP-33147 improve serverStatus descriptions for metrics * DOCSP-33147 improve serverStatus descriptions for metrics * DOCSP-33147 active voice * DOCSP-33147 active voice * DOCSP-33147 consistency * DOCSP-33147 consistency * DOCSP-33147 copy edits * DOCSP-33147 copy edits * DOCSP-33147 typo * DOCSP 32904 emphasizing dot notation (#4781) * DOCSP-32904 emphasizing dot notation for querying embedded fields * DOCSP-32904 emphasizing dot notation for querying embedded fields * DOCSP-32904 tech edit * (DOCSP-32020) Adds procedure to query documents with Atlas (#4823) * DOCS-16386 Fix 7.0 Upgrade 7.0 Standalone FCV Version (#4810) * DOCS-16386 Fix 7.0 Upgrade 7.0 Standalone FCV Version * remove should * * * * * * * * * * * * * DOCSP-33284 Fix Compact Behavior Notes (#4797) * DOCSP-33284 Fix Compact Behavior Notes * * * * * * * * * * * * * DOCS-15714 Adding consideration of clustered index scan in multi planning (#4796) * DOCS-15714 adding behavior on clustered index scans * DOCS-15714 adding behavior on clustered index scans * DOCS-15714 adding behavior on clustered index scans * DOCS-15714 copy edits * DOCS-15714 copy edits * DOCS-15714 adding active voice * DOCS-15714 fixing bullet * DOCS-15714 fixing plurals * DOCS-15714 more active voice * DOCS-15714 removed seocnd note bullet * DOCS-15714 removed seocnd 
note bullet * DOCS-15714 removed seocnd note bullet * DOCS-15714 removing colon * DOCS-15714 present tense * DOCS-15714 copy edits round 2 * DOCS-15714 changing prior versioning text to past tense * DOCSP-31389 Capped Collections with TTL (#4662) * Quick fix, removed statements that TTL index aren't supported on capped collections * Internal review * Fixed comma * Rebuild * Docsp 32680 time series migration using out (#4794) * Updated to reflect correct syntax for to time series collections * Rendering fix * Rendering fixes * Metadata field is optional * syntax fixes * Empty rebuild * PR comments * External PR feedback * External PR feedback * External PR feedback * Fixed location of metadata field * External feedback * (DOCSP-32324) Adds info on how to create a view in the Atlas UI (#4872) * (DOCSP-32324) Adds info on how to create a view in the Atlas UI * Moves content to the Materialized View page * Includes copy review changes * Includes copy review change * (DOCSP-32301) Adds link to query documents collation section (#4935) * Docs 16078 support out with time series (#4795) * Draft * Externalized time series config field descriptions * Removed self references * Added missing field * Removed self refs * Added entry to out vs merge comparison table * Removed time series collection limitation from restrictions * caps * Fixed time series reference table syntax for expireAfterSeconds * Internal PR feedback * External PR feedback * External PR feedback * Internal review feedback * Used enum values for time series granularity syntax * Update shared library dropdown (#4938) * update shared-library install instructions * revert snooty formatting changes * typo & wording update * DOCSP-33406: 6.0.11 RC0 changelogs (#4937) * DOCSP-33406: 6.0.11 RC0 changelogs * DOCSP-33406: Fix some typos * DOCSP-33406: Typo * DOCSP-32322 [Revamp] /docs/manual/core/timeseries-collections/ (#4786) * DOCSP-32322 [Revamp] /docs/manual/core/timeseries-collections/ * edit * edit * Edit --------- Co-authored-by: pierwill * (DOCSP-32327): Atlas top 250 Geospatial query revamp (#4819) * Draft tabbed procedures to get started * Add steps to query geospatial data in the Atlas UI * Add links to Atlas docs * Apply suggestions from review Co-authored-by: Sarah Simpers <82042374+sarahsimpers@users.noreply.github.com> * Remove Search * Switch to procedures with lists * Switch the refs to pipeline links * Fix broken pipeline links --------- Co-authored-by: Sarah Simpers <82042374+sarahsimpers@users.noreply.github.com> * DOCSP 32572 - adding performance and considerations to $lookup (#4771) * DOCSP-32572 adding performance and considerations to lookup * DOCSP-32572 adding performance and considerations to lookup * DOCSP-32572 formatting errors * DOCSP-32572 performance and considerations for lookup * DOCSP-32572 performance and considerations for lookup * DOCSP-32572 performance and considerations for lookup * DOCSP-32572 performance and considerations for lookup * DOCSP-32572 performance and considerations for lookup * DOCSP-32572 performance and considerations for lookup * DOCSP-32572 performance and considerations for lookup * DOCSP-32572 performance and considerations for lookup * DOCSP-32572 performance and considerations for lookup * DOCSP-32572 performance and considerations for lookup * DOCSP-32572 fixing bullet spacing * DOCSP-32572 adding general strategies * DOCSP-32572 fixing spacing * DOCSP-32572 adding embedded data modeling reference * DOCSP-32572 adding embedded data modeling reference * DOCSP-32572 copy edits 
from Jeff * DOCSP-32572 copy edits from Jeff * DOCSP-32572 copy edits from Jeff * DOCSP-32572 copy edits from Jeff * DOCSP-32572 copy edits round 2 * DOCSP-32572 copy edits round 2 * DOCSP-32572 copy edits round 2 * DOCSP-32572 copy edits round 2 * DOCSP-32572 copy edits round 2 * DOCSP-32572 copy edits round 2 * DOCSP-32572 tech edit * DOCSP-32572 fixing line * DOCSP-32572 fixing line * DOCSP-32572 tech edit * (DOCSP-32347) Adds steps to create a replica set in Atlas (#4888) * (DOCSP-32347) Adds steps to create a replica set in Atlas * Includes change from copy review * (DOCSP-32016) Adds procedure to insert documents with Atlas (#4900) * (DOCSP-32016) Adds procedure to insert documents with Atlas * Includes change from cyopy review * Removed references to archived doc (#4974) * DOCS-16342 Deprecates minNumChunksForSessionsCollection (#4661) * DOCS-16342 deprecate minNumChunksForSessionsCollection * internal review feedback * external review feedback * DOCSP 31908 removing free monitoring pages and references (#4883) * DOCSP-31908 changing warning and adding redirects * DOCSP-31908 updating warning title * DOCSP-31908 removing pages and sections * DOCSP-31908 editing warning * DOCSP-31908 removing references to free monitoring * DOCSP-31908 removing references to free monitoring * DOCSP-31908 removing references to free monitoring * DOCSP-31908 removing references to free monitoring * DOCSP-31908 removing references to free monitoring * DOCSP-31908 removing references to free monitoring * DOCSP-31908 typo fix * DOCSP-31908 * DOCSP-31908 applying redirect to all versions * (DOCSP-32350): Revamped Users and Roles tutorial. (#4874) * (DOCSP-32350): Revamped Users and Roles tutorial. * (DOCSP-32350): Incorporated Sarah's feedback. * (DOCSP-32350): Incorporated Jack's feedback. * (DOCSP-32350): Incorporated Jack's feedback. * DOCSP-33129 Recommend keyVersion field for dataKeyOpts with Azure KV (#4960) * Recommended usage of keyVersion field for dataKeyOpts when using Azure as the KMS * Moved disclaimer after table rather than within * Externalized Azure KV warning to an include * Fixed MongoDB to Azure KV for which performs encryption * Docsp 30653 index compaction reword (#4867) * Cleaned up use of metadata compaction * Rebuild * Fixed metadata collection terminology in reference page * (DOCSP-32019): Add Atlas UI procedure to Query Arrays page (#4898) * Add Atlas UI procedure * copy review feedback * DOCSP-33213 fsync/fsyncUnlock Changes (#4802) * DOCSP-33213 Updates to fsync/fsyncUnlock for Self-Managed Backups of Sharded Clusters * Adds 7.1 versionadded * Reworks Larger Deployment notes * Documents new feature on fsyncUnlock and mongosh helpers * Fixes build issue * Adds lock timeout * Minor edits for cluster coverage * Minor edits for cluster coverage * Adds include for lock/unlock * Adjusts fsync text * Fixes tables * Adds notice for lock/unlock * Minor edit * Major edits to db.fsyncLock to bring it in sync * Minor edit * fsyncUnlock method edits * Minor edits * Minor edits * Adds release note * Fixes build issue * Fixes build issue * Fixes build issue * Fixes build issue * Fixes build issue * Fixes per Ali Co-authored-by: Alison Huh <112565127+ajhuh-mdb@users.noreply.github.com> * Fixes per Ali * Fixes per Ali Co-authored-by: Alison Huh <112565127+ajhuh-mdb@users.noreply.github.com> * Fixes per Nandini * Fixes per Nandini * Fixes per Nandini * Fixes per Nandini * Fixes per Nandini --------- Co-authored-by: Kenneth P. J. 
Dyer Co-authored-by: Alison Huh <112565127+ajhuh-mdb@users.noreply.github.com> * (DOCSP-32344): [Revamp] /docs/manual/tutorial/backup-and-restore-tools/ (#4918) * (DOCSP-32344): [Revamp] /docs/manual/tutorial/backup-and-restore-tools/ * More places to add Atlas * fix formatting errors * more typo fixes * Apply suggestions from code review Co-authored-by: Sarah Simpers <82042374+sarahsimpers@users.noreply.github.com> * edits from review * more little fixes * second copy review --------- Co-authored-by: Sarah Simpers <82042374+sarahsimpers@users.noreply.github.com> * DOCSP-33410-log-rotation (#5004) * DOCSP-33410-log-rotation * DOCSP-33410-log-rotation * DOCSP-33410-log-rotation * DOCSP-33410-log-rotation * DOCSP-33410-log-rotation * DOCSP-33410-log-rotation --------- Co-authored-by: jason-price-mongodb * (DOCSP-33442): Add new fields to listSearchIndexes output (#4887) * WIP * (DOCSP-33442): Add new fields to listSearchIndexes output * add line breaks to field names * add more info link for statuses * edits * typo * change examples * wording * edits * standardization * typo * wording * add possible statuses for synonyms * address review comments * simplify * typo * standardize object vs document * WIP * fix build errors * add table heading * add synonym detail link * edit fields * update examples with real output * add multi doc example * add staged index to output * better pretty printing in output docs * tweak * add version-added * Evan review edits * formatting * edits * DOCS-16280 Support Exhaust Cursors on mongos (#4901) * DOCS-16280 Support Exhaust Cursors on mongos * * * JA feedback * * * Revert "*" This reverts commit 4355501b1c270874e4f9937611c79ab17414fa4c. * DOCSP-33547: Finalize Release Notes and change logs for 6.0.11 (#5021) * DOCSP-32192 Add cursorTimeoutMillis Warning (#4355) * DOCSP-32192 Add cursorTimeoutMillis Warning * * * * * * * * * IR * * * * * * * * * * * * * IR feedback * * * * * * * XR1 * * * * * * * * * * * * * Apply suggestions from code review Co-authored-by: Jeff Allen * * * * * * --------- Co-authored-by: Jeff Allen * DOCSP 31910 removing free monitoring references (#4990) * DOCSP-31910 removing TOC free monitoring * DOCSP-31910 removing free monitoring references * DOCSP-31910 removing free monitoring references * DOCSP-31910 removing free monitoring references * DOCSP-31910 adding decomission warning to release notes * DOCS-16392 Document periodicNoopIntervalSecs (#4975) * DOCS-16392 Document periodicNoopIntervalSecs * IF feedback * DOCSP-33528 Default OpenSSL under FIPS section (#5027) * DOCS-16425-uni-link (#5032) Co-authored-by: jason-price-mongodb * (DOCSP-33497) Adds Rust to the Connection Strings page dropdown (#4942) * (DOCSP-33497) Adds Rust to the Connection Strings page dropdown * Includes changes from tech review * DOCS-33143-positional-note (#5031) * DOCSP-33143-positional-note * DOCSP-33143-positional-note * DOCSP-33143-positional-note * DOCSP-33143-positional-note * DOCSP-33143-positional-note --------- Co-authored-by: jason-price-mongodb * DOCSP-33482 Default Concurrent Read/Write Transactions (#4953) * DOCSP-33482 Default Concurrent Read/Write Transactions * move default below * JD feedback * SR feedback * more fixes * include replacement * typo * DOCSP 33220 Clarify unique index missing fields limitation (#4889) * DOCSP-33220 clarify unique index missing field limitation * DOCSP-33220 clarify unique index missing field limitation * DOCSP-33220 copy edits and adding example * DOCSP-33220 copy edits and adding example * DOCSP-33220 copy edits 
and adding example * DOCSP33220 updating example and heading update * DOCSP33220 updating example and heading update * DOCSP33220 updating example and heading update * DOCSP-33220 wording changes * DOCSP-33220 typos * DOCSP33220 copy edits * DOCSP-33198 Clarify that SRV URI has No Port (#4886) * DOCSP-33198 Clarify that SRV URI has No Port * copy * edit examples to remove multiple host names + port * SO feedback * external feedback * DOCS-16051 plan cache stats all hosts (#4845) * DOCS-16051 planCacheStats allHosts option * Adds allHosts option * Adds allHosts option * Reworks to bullet list * Minor edits * Fixes per Ian Co-authored-by: ianf-mongodb <85948430+ianf-mongodb@users.noreply.github.com> * Fixes per Ian --------- Co-authored-by: Kenneth P. J. Dyer Co-authored-by: ianf-mongodb <85948430+ianf-mongodb@users.noreply.github.com> * DOCS-16428-upsert-n (#5074) * DOCS-16428-upsert-n * DOCS-16428-upsert-n * DOCS-16428-upsert-n * DOCS-16428-upsert-n * DOCS-16428-upsert-n --------- Co-authored-by: jason-price-mongodb * DOCS-16239 Pipeline limit error codes (#4885) * DOCS-16239 agg pipe stage limit errors * DOCS-16239 fix build error * DOCS-16239 renaming release notes * DOCS-16239 internal feedback * DOCS-16239 feedback * DOCS-16239 nit * DOCS-16083 add new error warning for duplicate field types (#4936) * DOCS-16083 add new error warning * DOCS-16083 fix wording * DOCS-16083 concise term * DOCS-16083 update warning * DOCS-16083 add period * DOCS-16083 nick feedback * DOCS-16222 Removed unused parameter from sh.removeTagRange() (#4973) * Removed unused parameter from sh.removeTagRange() * Removed self references * Some PR feedback * External review feedback * DOCSP-33644 5.0.22 changelog & release notes (#5084) * DOCSP-33644 5.0.22 changelog * Updates release notes * Bugfix * DOCSP-33132 Redirect to Correct Atlas Docs Section (#5076) * DOCSP-33132 Redirect to Correct Atlas Docs Section * replace both links * DOCSP 32407 moving aggregation quick reference page (#5098) * DOCSP-32407 moving aggregation quick reference page * DOCSP-32407 updating redirects * DOCSP-32407 updating redirects * DOCSP-32407 updating page references * DOCSP-32407 updating page references * DOCSP-32407 updating page references * DOCSP-32407 updating page references * DOCSP-32407 changing doc to ref * DOCSP-32407 changing doc to ref * DOCSP-33240 indexStats Privileges (#5099) * DOCSP-33240 indexStats Privileges * wording * more text edits * ref link * copy edits * extra space * external feedback * Removes upcoming and RC mentions from 7.1 (#5104) * Removes upcoming and RC mentions from 7.1 * Fixes top-level release notes page * Minor fixes * DOCSP-33002 Mention %b and %B Specifiers in 7.0 RN (#5105) * DOCSP-33002 Mention %b and %B Specifiers in 7.0 RN * formatting * IF feedback * IF feedback * spelling * DOCSP-33330 Fix autoMergerIntervalSecs Examples (#5028) * DOCSP-33330 Fix autoMergerIntervalSecs Examples * nit * DOCS-16278 Mongos port range release notes (#5115) * DOCS-16278 update release notes * DOCS-16278 fix build error * DOCSP-33826: Fix typo (#5128) * DOCSP-33826: Fix typo * DOCSP-33826: edit * DOCSP-33846 updating agg quick reference redirect (#5129) * DOCSP-33842: moveChunk typo (#5131) * DOCSP-33304 Clarify Mirrored Reads (#4945) * DOCSP-33304 Clarify Mirrored Reads * * * punctuation * DOCSP-33298 Update vm.max_map_count (#4931) * DOCSP-33298 Update vm.max_map_count * CM feedback * DOCS-16350 fixing example (#5136) * DOCS-16350 fixing example * Empty-Commit * DOCSP23123 reformatting code example (#5078) * 
DOCSP23123 reformatting code example * DOCSP-23123 copy edits * DOCSP-23123 updating number type * (DOCSP-33407) Changes Atlas CTA format (#4820) * (DOCSP-33407) Stages alternative CTA format * Stages one page as a tip admonition * CTA banner * Changes out all pages for CTA banner * Adds remaining pages * Adds final pages and changes * Includes changes from copy review * Rebases to resolve conflicts * DOCS-16189 Phrase Text Search Limitations (#5137) * DOCS-16189 Phrase Text Search Limitations * copy edits * IF feedback * (DOCSP-33865): Security clarifications to queryStats (#5132) * (DOCSP-33865): Updates to queryStats * style guide * external review * DOCSP-33986 fsync lock after crash (#5140) * DOCSP-33986 fsync lock after crash * Adjusts text * Adjusts text * Fixes per Jeff * Fixes per Jeff * (DOCSP-33794) Adds data lake pipeline limit (#5154) * DOCSP-33283 GeoJSON data limitations (#5033) * DOCSP-33283 GeoJSON data limitations * add query limitation * remove unnecessary note * add error message * (DOCSP-33812) Adds Atlas CTA for RS conversion to sharded page (#5168) * DOCSP 34021 adding m0 cluster limit (#5192) * DOCSP-34021 adding m0 limit * Empty-Commit * DOCSP-34021 adding m0 limit * DOCSP-34021 adding m0 limit * DOCSP-34021 changing m0 to monospace * DOCSP-33218 Remove Mention of 500MB Queue Limit (#5085) * DOCSP-33660 Fix Balancer and Even Chunk Distribution Text (#5083) * DOCSP-33660 Fix Balancer and Even Chunk Distribution Text * title * nit * external feedback * (DOCS-16452) routingTableCacheChunkBucketSize must be greater than 0 (#5198) * (DOCS-16452) routingTableCacheChunkBucketSize must be greater than 0 * Includes change from copy review * DOCSP 24995 documenting performance degradation of bulk inserts (#5141) * DOCSP-24995 documenting performance degradation of bulk random insert * DOCSP-24995 small wording changes * DOCSP-24995 small wording changes * DOCSP-24995 small wording changes * DOCSP-24995 small wording changes * DOCSP-24995 rephrase * DOCSP-24995 rephrase * DOCSP-24995 copy edits * DOCSP-24995 copy edits * DOCSP-24995 copy edits * DOCSP-24995 copy edits * DOCSP-24995 copy edits * DOCSP-24995 tech edit * DOCS-16258-setWindowFields-update (#5186) Co-authored-by: jason-price-mongodb * Revert "(DOCS-16452) routingTableCacheChunkBucketSize must be greater than 0 (#5198)" (#5223) This reverts commit a55f624946c228336bac5130e2afa031c6a4d399. 
* DOCSP-33866: Finalize 5.0.22 changelogs (#5134) * DOCSP-33866: Finalize 5.0.22 changelogs * DOCSP-33866: Change logs modifications * DOCSP-33866: 5.0.22 change log updates * DOCSP-33850 Fixes typo in deleteMany example (#5150) * (DOCSP-33111) Reverts changes from #4076 (#5215) * DOCS-12534 Add Partial TTL Index Content (#5143) * DOCS-12534 Add Partial TTL Index Content * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * DOCSP 34064 Finalize changelog, release notes for 7.0.3 (#5194) * DOCSP34064 finalize changelog, release notes for 7.0.3 * DOCSP-34064 7.0.3 changelog and release notes * DOCSP-34064 7.0.3 changelog and release notes * DOCSP-34064 7.0.3 changelog and release notes * DOCSP-34064 adding date * DOCSP-27629 warning about indexes with collation size (#5224) * DOCSP-27629 warning about indexes with collation size * DOCSP-27629 fixing format * DOCSP-27629 small fix * DOCSP-27629 changing note to warning * DOCSP-27629 tech edits * DOCSP-34334 Remove References to 500MB Queue Limit for Range Deletion (#5256) * DOCSP-34090 $graphLookup limitations (#5226) * DOCSP-34090 cannot use graphLookup in transactions targeting sharded collections * DOCSP-34090 cannot use graphLookup in transactions targeting sharded collections * DOCSP-34281 7.1.1 release notes (#5261) * DOCSP-34281 7.1.1 release notes * DOCSP-34281 updating release date * DOCSP-34325 6.0.12 release notes (#5257) * DOCSP-34325 6.0.12 release notes * DOCSP-34325 fixing url * DOCSP-34325 updating date * DOCSP-33225 removing fcv section (#5201) * DOCSP-32941-resharding-update (#5254) * DOCSP-32941-resharding-update * DOCSP-32941-resharding-update --------- Co-authored-by: jason-price-mongodb * DOCSP-34410 adding reference back (#5294) * DOCS-16413 $listSearchIndexes error (#5110) * DOCS-16413 error * fixes build issue * fixes build issue * Fixes per Joe * DOCSP-34331 4.4.26 Release Notes (#5293) * DOCSP-34331 4.4.26 Release Notes * add changelog * (DOCS-16434) Specifies behavior if issuer URI is unreachable (#5255) * (DOCS-16434) Specifies behavior if issuer URI is unreachable * Includes tech review change * DOCS-16476 Resharding a unique collection (#5191) * First draft' * Copy edits * DOCSP-33543-datetime-enhancements (#5272) * DOCSP-33543-datetime-enhancements * DOCSP-33543-datetime-enhancements * DOCSP-33543-datetime-enhancements * DOCSP-33543-datetime-enhancements * DOCSP-33543-datetime-enhancements * DOCSP-33543-datetime-enhancements --------- Co-authored-by: jason-price-mongodb * DOCSP-33963-time-series-table-updates (#5302) * DOCSP-33963-time-series-table-updates * DOCSP-33963-time-series-table-updates --------- Co-authored-by: jason-price-mongodb * DOCSP-33732 Fixes Connection String (#5153) * DOCSP-33732 Fixes connection string typo * Fixes per Ian * DOCS-10757 Add page for error codes (#5319) * add error code page and update TOC * removed obsolute and not-yet-available * DOCSP-34017-bucket-rounding-seconds (#5262) * DOCSP-34017-bucket-rounding-seconds * DOCSP-34017-bucket-rounding-seconds * DOCSP-34017-bucket-rounding-seconds * DOCSP-34017-bucket-rounding-seconds --------- Co-authored-by: jason-price-mongodb * Docsp-34017-bucket-rounding-seconds additional updates saved file (#5326) * DOCSP-34017-bucket-rounding-seconds * DOCSP-34017-bucket-rounding-seconds * DOCSP-34017-bucket-rounding-seconds * DOCSP-34017-bucket-rounding-seconds * DOCSP-34017-bucket-rounding-seconds * DOCSP-34017-bucket-rounding-seconds --------- Co-authored-by: jason-price-mongodb * DOCSP-32522: updated page names to specify 
CSFLE & QE (#5288) * DOCSP-32522 updated page names for CSFLE & QE * DOCSP-32522 updated page name for CSFLE * updated lines above/below to match the title width * updated lines above/below to match the title width for QE * DOCSP-33122 fix ref (#5320) * DOCSP-33107-read-concern-operation-time (#5245) * read-concern-operation-time * DOCSP-33107-read-concern-operation-time * DOCSP-33107-read-concern-operation-time --------- Co-authored-by: jason-price-mongodb * DOCSP-34469 Update Instruqt db.collection.find Lab to V2 (#5334) * DOCSP-34413 mongodump Regex Syntax (#5307) * DOCSP-34413 mongodump Regex Syntax * update note w/ quote syntax rules * remove redundant links * split up paragraph * JM feedback * ``numad`` daemon causes performance degradation. (#5271) * Add note about numad degredating performance * external review complete * (DOCS-16478): persistedChunkCacheUpdateMaxBatchSize parameter (#5270) (#5347) * (DOCS-16394): persistedChunkCacheUpdateMaxBatchSize parameter (#5270) * (DOCS-16394): persistedChunkCacheUpdateMaxBatchSize parameter * add example * update version list * add 7.2 to release notes toc * address review comments * edits * minimalism * tweak * wording * address review comments * Adjust version numbers * DOCSP-33423-explain-version-field (#5350) * DOCSP-33423-explain-version-field * DOCSP-33423-explain-version-field * DOCSP-33423-explain-version-field * DOCSP-33423-explain-version-field * DOCSP-33423-explain-version-field * DOCSP-33423-explain-version-field --------- Co-authored-by: jason-price-mongodb * DOCSP-31721 Fixed typo in Geospatial Queries (#5352) * DOCSP-31653 Remove duplicate aggregation content (#4200) * move pages around * remove alphabetical list of stages * remove alphabetical list of operators * change header levels * remove quick reference page * fix broken refs * add missing ref * fix build errors * fix build errors * adjust page title * update toc order * fix ref link * wording * remove superfluous ref * remove superfluous ref * consistency * remove superfluous ref * fix git conflict * edits * fix ref * Fixing build errors, I hope * Fixing build errors, I still hope * Fixes from Jeff --------- Co-authored-by: Nick Villahermosa Co-authored-by: jocelyn-mendez1 <91144778+jocelyn-mendez1@users.noreply.github.com> Co-authored-by: Kenneth P. J. Dyer <93145796+kennethdyer@users.noreply.github.com> Co-authored-by: Kenneth P. J. 
Dyer Co-authored-by: davidhou17 <55004296+davidhou17@users.noreply.github.com> Co-authored-by: Sarah Simpers <82042374+sarahsimpers@users.noreply.github.com> Co-authored-by: ianf-mongodb <85948430+ianf-mongodb@users.noreply.github.com> Co-auth… * Docsp 28981 disclaimer different major versions in replica sets (#6796) * Removed outdated include * Drafted note * Taxonomy tagging * Removed duplicate sentence * add note about coordinate limits for both 2d and 2dsphere indexes (#6713) * add note about coordinate limits for both 2d and 2dsphere indexes * add note about wrapping * update with info about overrides * no longer need a shared blob * final review suggestion * DOCSP-37704 7.0.7 Release Notes (#6828) * DOCSP-37704 7.0.7 Release Notes * nit fix * revert 7.0 removal * DOCSP-31877 Remove circular definitions for clustered indexes (#6840) * (DOCSP-31877): Remove circular definitions for clustered indexes and collections * edits * edit * review edits * wording * review edits * edits * typo * edits * tweak * tweak * edit * DOCSP-36056 listSearchIndexes Visibility (#6767) * DOCSP-36056 listSearchIndexes Visibility * build errors * atlas-first edit * adjust TOC + better highlight Atlas Search index methods * DOCSP-37568 clarify options in sh.stopBalancer Documentation (#6790) * DOCSP-37568 clarify options in sh.stopBalancer Documentation * DOCSP-37568 updates for MP's feedback * Update source/reference/method/sh.stopBalancer.txt Co-authored-by: ltran-mdb2 <143426234+ltran-mdb2@users.noreply.github.com> * DOCSP-37568 updates for copy feedback --------- Co-authored-by: ltran-mdb2 <143426234+ltran-mdb2@users.noreply.github.com> * (DOCSP-26094): Clarify VMWare balloon recommendation (#6831) * (DOCSP-26094): Clarify VMWare balloon recommendation * edits * present tense * edits * edits * edits * review feedback * review feedback * add info about explain ignoring query plan (#6836) * add info about explain ignoring query plan * wording changes to be more accurate & clear * DOCSP-20504: Mention comment inheritance on listIndexes and listCollections (#6851) * DOCSP-37342 Configuration File CAFile Requirement (#6677) * DOCSP-37342 Configuration File CAFile Requirement * fixes * BM external feedback * BM edits * DOCSP-32184 Update Time-Series Sharding Admin Commands (#6780) * DOCSP-32184 Update Time-Series Sharding Admin Commands * DOCSP-32184 Sharding Admin Commands on system.buckets * nit fix * AB feedback * DOCSP-37695 5.0.26 Release Notes (#6850) * DOCSP-37695 5.0.26 Release Notes * build error * * * DOCSP-21596-storeFindAndModifyImagesInSideCollection-parameter-update-version (#6890) Co-authored-by: jason-price-mongodb * DOCSP-37384: resting connection -> idle connection (#6883) * DOCSP-26062 modify redirect for manual/reference/method/db.collection.copyTo (#6889) * DOCSP-26062 modify redirect for manual/reference/method/db.collection.copyTo * DOCSP-26062 updates for JA's feedback * DOCSP-34414 fix redirect for manual/tutorial/copy-databases-between-instances (#6900) * DOCSP-37870 Edit Release Notes Top 5 (#6903) * DOCSP-37870 Edit Release Notes Top 5 * fix * Add info about journal file drive space needs (#6837) * Add info about journal file drive space needs * review changes * refactor for clarity * drive space => disk space * DOCS-6403: Consolidate query diagnostic information (#6721) * WIP * WIP * update snooty.toml * fix indentation * WIP * WIP * WIP * fix broken refs * review edits * more review edits * more review edits * edits * edit * nits * review edits * reorder table * mention top command * 
formatting * restructure * change heading levels * reorder * tweaks * add more Atlas tools * simplify * edits * wording * minimalism * final reorg * re-add db profiler * add link to db profiler page * DOCS-15802 add details to $sample page (#6801) * DOCSP-15802 add details to sample page * DOCS-15802 updates for review feedback * DOCS-15802 updates for copy feedback * DOCS-15127 Remove support for authentication as multiple simultaneous users (#6854) * DOCS-15127 Remove support for authentication as multiple simultaneous users * DOCS-15127 updates for AH's feedback * DOCSP-34494 Document that startTransaction() is a no-op (#6905) * DOCS-14326 voting nodes in majority (#6924) * DOCSP-27214-sharding-release-note (#6887) * DOCSP-27214-sharding-release-note * Editorial feedback per review. * Adding glossary ref to 6.1 * DOCSP-34751 Clarifies STARTUP2 vote (#6676) * DOCSP-34751 Clarifies STARTUP2 vote * Fixes per Kaitlin * Update Node QE Quickstart code (#6937) * add $match to coalesce section (#6852) * add $match to coalesce section * rewording for clarity and fix formatting * updates from review * more rewording for clarity; rearrange explain output * review format changes * add note/link and (hopefully) fix code formatting * moar code formatting * DOCSP-37453: add missing monospace (#6878) * DOCSP-34283 Add resharding a collection with zones example (#6628) * DOCSP-34283 add reshard example * DOCSP-34283 nit changes * DOCSP-34283 nit change * DOCSP-34283 sarah feedback * DOCSP-34283 fix typo * DOCSP-34283 ratika feedback * DOCSP-34283 typo fix * DOCSP-34283 typo fix * DOCSP-34283 ratika feedback * DOCSP-34283 updates * DOCSP-34283 updates * DOCSP-37891 Incorrectly Formatted Redirects (#6932) * DOCSP-37891 Incorrectly Formatted Redirects * change to 8.0 * DOCSP-37846 Finalize 7.3.0 Release Notes (#6902) * DOCSP-38013 remove self referencing links page (#6958) * DOCSP-38013-Remove-self-referencing-links--page * Converting additional self-referential query directives to monospace. * Editorial per feedback. 
* DOCS-16706 updating clustered collection behavior version (#6956) * DOCSP 24949 improve low selectivity example (#6626) * empty * DOCSP-24949 improve low-selectivity example * DOCSP-24949 improve low-selectivity example * DOCSP-24949 improve low-selectivity example * DOCSP-24949 improve low-selectivity example * DOCSP-24949 improve low-selectivity example * DOCSP-24949 copy edits and changing second example * DOCSP-24949 copy edits * DOCSP-24949 copy edits * DOCSP-24949 copy edits * DOCSP-24949 copy edits * DOCSP-24949 copy edits * DOCSP-24949 copy edits * DOCSP-24949 copy edits * DOCSP-24949 copy edits * DOCSP-24949 copy edits * DOCSP-24949 tech edits * DOCSP-24949 tech edits * DOCSP-24949 tech edits from asya and katya * DOCSP-24949 tech edits from asya and katya * DOCSP-24949 tech edits from asya and katya * DOCSP-24949 tech edits from asya and katya * DOCSP-26465 Replace complete list of FAQs - downgrading (#6961) * DOCSP-38046 fix typo (#6980) * DOCSP-37896-5.0.26-release-notes (#6986) * DOCSP-37896-5.0.26-release-notes * DOCSP-37896-5.0.26-release-notes * DOCSP-37896-5.0.26-release-notes --------- Co-authored-by: jason-price-mongodb * DOCS-16564 writeConcernErrors (#5776) * DOCS-16564 writeConcernError on mongos * Adds include to other commands * Edits substitution * Minor edits * fixes minor issue * Adds facets * Minor edits * Fixes per Joe * Fixes per Brett * Fixes per Brett * Renames file * DOCS-16708 Comment Out 8.0 Redirects (#6964) * DOCSP-37082 Adds setAt text to Authentication Parameters (#6921) * DOCSP-37082 SetAt for Auth Parameters * Parameter updates * Parameter updates * Parameter updates * DOCSP-38060 fixes analyzeShardKey example (#6989) * (DOCSP-36026) Add Atlas SP mongosh commands. (#7001) * (DOCSP-36026) Add Atlas SP mongosh commands. * (DOCSP-36026) Formatting fixes. * (DOCSP-36026) * (DOCSP-36026) * (DOCSP-36026) * (DOCSP-36026) * (DOCSP-36026) * (DOCSP-36026) * (DOCSP-36026) * (DOCSP-36026) * (DOCSP-36026) Various corrections to createStreamProcessor * (DOCSP-36026) Various fixes to create and list * (DOCSP-36026) Add ToC Page + various fixes * (DOCSP-36026) ToC fixes. * (DOCSP-36026) * (DOCSP-36026) * (DOCSP-36026) Fixes to sample * (DOCSP-36026) * (DOCSP-36026) * (DOCSP-36026) * (DOCSP-36026) Copy review. 
* (DOCSP-36026) * DOCSP-34750 fixes admonition on findOneAndUpdate (#5516) * (DOCS-16193): Mention non-guaranteed order of $accumulator (#7008) * (DOCS-16193): Mention non-guaranteed order of * review edits * formatting fix * wording * formatting fix * minimalism * add copyable false * wording * change heading level * nits * DOCS-16693 Sharded Lookup Typo (#7029) * DOCS-16693 Sharded Lookup Typo * * * extra line * DOCSP-27249 Cluster Key index on id:1 (#6994) * DOCSP-27249 add cluster key limitation * DOCSP-27249 add cluster key limitation * DOCSP-27249 typo fix * DOCSP-27249 external feedback * DOCS-15936 fix fast counts behavior of the validate command (#6884) * DOCS-15936 fix fast counts behavior of the validate command * DOCS-15936 updates for AH's feedback * Apply suggestions from code review Co-authored-by: Alison Huh <112565127+ajhuh-mdb@users.noreply.github.com> * DOCS-15936 updates for AH's feedback * Apply suggestions from code review Co-authored-by: Alison Huh <112565127+ajhuh-mdb@users.noreply.github.com> * DOCS-15936 last round of edits --------- Co-authored-by: Alison Huh <112565127+ajhuh-mdb@users.noreply.github.com> * DOCSP-21467 doc tlsOptions for for CSFLE connection (#6974) * DOCSP-21467 doc tlsOptions for for CSFLE connection * DOCSP-21467 updates for AH's feedback * Apply suggestions from code review Co-authored-by: Anna Henningsen --------- Co-authored-by: Anna Henningsen * DOCSP-28264 update for feedback on TTL Indexes (#7023) * DOCSP-38000 Merge Capped Collections IA feature branch into master (#7033) * DOCS-15096 Capped collections landing page updates (#6079) * DOCS-15096 Capped collections landing page updates * move capped collections page and update snooty.toml * add facet * formatting * reorg * fix ref * update use cases * review edits * shuffle * wording * address review comments * paragraph split * typo * undo toc update * following > these * review edits * typo * wording * clarify update note * edit * (DOCSP-37995): Change max docs for capped collection (#6948) * (DOCSP-37995): Change max docs for capped collection * edit include * add ref link * add comma * (DOCSP-38002): Change byte size of a capped collection (#6953) * (DOCSP-38002): Change byte size of a capped collection * add unit * fix command * fix command * (DOCSP-38010): convert collection to capped (#6955) * (DOCSP-38010): convert collection to capped * typo * add copyable option * change learn more link * build error * add detail for db lock * edits * remove quotes around field name * (DOCSP-38019): Check if collection is capped (#6957) * (DOCSP-38019): Check if collection is capped * fix formatting for collection names * formatting * formatting * add page meta info * fix indentation * (DOCSP-38041): Create a capped collection (#6965) * (DOCSP-38041): Create a capped collection * edits * formatting * review edits * (DOCSP-38027): Query a capped collection (#6962) * (DOCSP-38027): Query a capped collection * add code block language * edits * add page metadata * typo * add natural order and use include * edits * formatting * add learn more * add step lead-in * review edits * add detail for multiple concurrent writes * add period * nit * DOCS-15150 Add dryRun flag for collMod (#7021) * WIP * WIP * WIP * wip * typos * use procedure * move tutorial * reorder * convert to task layout * add page metadata * edits * edits * remove admonition * DOCSP-38081-7.3.1-release-notes (#7014) * DOCSP-38081-7.3.1-release-notes * DOCSP-38081-7.3.1-release-notes * DOCSP-38081-7.3.1-release-notes * 
DOCSP-38081-7.3.1-release-notes * DOCSP-38081-7.3.1-release-notes --------- Co-authored-by: jason-price-mongodb * DOCSP-26451 replace $sort-WiredTiger (#7034) * DOCSP-25292 $elemMatch should describe non-object array fields (#6993) * DOCSP-25292 should describe non-object array fields * Apply suggestions from code review Co-authored-by: Mihai Andrei * DOCSP-25292 updates for MA's feedback --------- Co-authored-by: Mihai Andrei * Merge $lastN (agg) with $lastN (array) and $firstN (agg) with $firstN (array) (#6944) * WIP to see how formatting looks * updating $last docs * toic too deep (dood); remove self-referntial * re-add expression * remove extraneous file * separated group and expression directives * remove wrong info on limitations and fix $group page * DOCS-26453 authorization - collection methods doc replacments (#7045) * DOCS-16213-change-streams-update (#7049) Co-authored-by: jason-price-mongodb * DOCSP-34454 Investigate feedback about anyElementTrue (#7047) * DOCSP-37283 Fixes uniqueness bug for shard key (#6675) * DOCSP-37283 Fixes uniqueness bug for shard key * Fixes per Jocelyn * Fixes per Matt * Fixes per Matt * DOCSP-38148 Removed link (#7056) * Removed link * removed link * DOCSP 28579 restructuring project page (#7013) * DOCSP-28579 clarifying project limitations * DOCSP-28579 clarifying project limitations * DOCSP-28579 tech edit * (DOCSP-38129): Added atlas-nav folder. (#7065) * DOCSP-35805 Add Find Options to Manual (#6011) * DOCSP-35805 Add Find Options to Manual * * * * * * * * * * * * * * * * * * * * * * * * * examples * * * * * * * * * * * * * IR Joe * * * * * * * * * * * * * * * * * * * * * * * DOCSP-31880 Many-to-Many Embedded Docs (#5138) (#7055) * DOCSP-31880 many to many * DOCSP-31880 fix example * DOCSP-31880 nit push * DOCSP-31880 add page to toc * DOCSP-31880 nit fix * DOCSP-31880 update example * DOCSP-31880 test build * DOCSP-31880 internal feedback * DOCSP-31880 new staging link * DOCSP-31880 max feedback * DOCSP-31880 remove quotes * RexEx inconsistencies (#7022) * re-add expression * removing unexpected file changes * update docs to agree on the supported regex options * remove duplicate info and point to more detailed page * wordsmithing; remote info box * DOCSP-32377 ObjectId with Custom Date (#7059) * DOCSP-32377 ObjectId with Custom Date * * * * * JA feedback * DOCSP-37678 updating cluster manager actions (#6990) * DOCSP-37678 updating cluster manager actions * DOCSP-37678 updating cluster manager actions prefixes * DOCSP-37678 removing IS actions * DOCSP-37678 removing internal and unreleased actions * DOCSP-37678 removing querySettings * DOCSP-37678 tech edits * empty * DOCSP-3768 re-ordering * DOCSP-38151 QE orphan build fixes (#7057) * Fixed missing TOC entry, removed unused topic * Ref fix * Restored + moved detailed enabling topic * Fixed an overlooked orphan include * Rename * Rename * Fix attempt on SVG markup * Markup fix * DOCSP-33136 clarify getDefaultRWConcern output fields (#7012) * DOCSP-33136 clarify getDefaultRWConcern output fields * DOCSP-33136 tech edits * DOCSP-33136 tech edits * DOCSP-37928-hidden-indexes-second-person-edit (#7090) * DOCSP-26454 :doc:s command line - document validation (#7089) * DOCSP-26460 replaces hide - oplog :doc:s (#7109) * DOCSP-37599 GraphQL deprecation note (#7088) * up to date test * fix * DOCSP-38134 libmongocrypt install amazon (#7078) * DOCSP-38134 Amazon 2023 libmongocrypt install * fixes code block * Fixes per Kevin * Fixes per Kevin * DOCSP-24227 adding insert to literal example (#7107) * 
DOCSP-24231 adding insert commands to restaurants/myColl examples (#7110) * DOCSP-24231 adding insert commands to restaurants/myColl examples * DOCSP-24231 adding insert commands to restaurants/myColl examples * DOCSP-24231 adding insert commands to restaurants/myColl examples * DOCSP-24231 adding insert commands to restaurants/myColl examples * DOCSP-24231 adding insert commands to restaurants/myColl examples * DOCSP-26462 query optimization process - replica state (#7114) * DOCSP-26459 replace elections - hidden :doc:s (#7108) * Moved server network metrics (#7123) * Fixed FCV to fCV capitalization (#7122) * DOCSP-36758: Define CMK on encryption key rotation page (#7125) * DOCSP-36758: Define CMK on encryption key rotation page * add facet tags * (DOCSP-38254): Added meta description to Query for Null or Missing Fields page. (#7126) * (DOCSP-38150) Fixes definition of replication lag. (#7128) * (DOCSP-38190) Adds supported version notice. (#7124) * (DOCSP-38190) Adds supported version notice. * Revises per copy review. * DOCSP-26464 replaces replication - variable :doc:s (#7121) * DOCSP-38284 Time Series Limitation 404s (#7142) * DOCSP-38267: fix typo on toUpper page (#7120) * DOCSP-26461 overview of sharding - profiler log messages (#7112) * DOCSP-24248 add insertMany command to collection (#7141) * DOCSP-24248 add insertMany command to collection * DOCSP-24248 spacing fix * DOCSP-24248 ashley feedback * DOCSP-24248 fix quotes * DOCSP-24232 adding insert statement to oil orders collection and updating myColl (#7113) * DOCSP-24232 adding insert statement to oil collection * DOCSP-24232 adding insert statement to oil collection * DOCSP-24232 adding insert statement to oil collection * DOCSP-24232 adding insert statement to oil collection * DOCS-15181: noCursorTimeout (#7044) * re-add expression * removing unexpected file changes * add note; noCursorTimeout since 4.4.8 * fix include name * review changes * final review change * DOCSP-24236 adding insert command to superheros example (#7140) * DOCSP-24236 adding insert command to superheros example * empty * DOCSP-24236 adding insert command to superheros example * DOCSP-27486 Add public API for Mongo._uri field (#7152) * DOCSP-27486 Add public API for Mongo._uri field * Apply suggestions from code review Co-authored-by: Himanshu Singh * DOCSP-27486 updates for HS' feedback * Update source/reference/method/Mongo.getURI.txt Co-authored-by: corryroot <72401712+corryroot@users.noreply.github.com> --------- Co-authored-by: Himanshu Singh Co-authored-by: corryroot <72401712+corryroot@users.noreply.github.com> * (DOCSP-38253): Added meta descriptions for batch 13. (#7163) * (DOCSP-38253): Added meta descriptions for batch 13. * (DOCSP-38253): Incorporated Sarah's feedback. 
* DOCSP-24265 Add insert command for pow and stdDevPop (#7174) * DOCSP-24265 update code examples * DOCSP-24265 update code examples * DOCSP-24265 update code examples * DOCSP-24265 sarah feedback * DOCSP-24265 typo fix * DOCSP-24279 Add insert command for isArray and concatArrays (#7185) * DOCSP-24279 code updates * DOCSP-24279 code updates * DOCSP-24279 sarah feedback from other pr * DOCSP-24279 sarah feedback from other pr * DOCSP-24279 space bracket * DOCSP-28670 add to table in MongoDB Extended JSON (#7158) * (DOCSP-38245): Add meta tags to top 250 pages (#7201) * DOCSP-37917-client-sessions-update (#7159) * DOCSP-37917-client-sessions-update * DOCSP-37917-client-sessions-update * DOCSP-37917-client-sessions-update * DOCSP-37917-client-sessions-update * DOCSP-37917-client-sessions-update * DOCSP-37917-client-sessions-update --------- Co-authored-by: jason-price-mongodb * DOCSP-37340 Fix Outdated Links Collation Concept Links (#7196) * (DOCSP-38244): Add meta descriptions (#7211) * (DOCSP-38244): Add meta descriptions * revert command page * tweak language for find * change deprecated descriptions * remove duplicate meta tag (#7220) * DOCSP-38261 Remove Recommendation to Disable Retryable Writes (#7160) * DOCSP-37812 adding index status and queryable status (#7076) * DOCSP-38255 Add batch of meta descriptions to top 250 Manual pages (#7115) * DOCSP-33171 batch 3 added meta descriptions * DOCSP-38255 added meta descriptions for Manual pages batch 15 * strengthened the deprecated description for map-reduce per SO * resolve merge conflict * DOCSP-24273 fix Type code examples (#7226) * DOCSP-24273 fix code examples * DOCSP-24273 remove will * DOCSP-24273 remove quotations * DOCSP-24273 remove quotations * DOCSP-24273 typo fix * (DOCSP-38252): Added meta descriptions for batch 12. (#7224) * DOCSP-38120 Broken Transaction Error Examples (#7131) * DOCSP-38120 Broken Transaction Error Examples * * * DOCSP-24244 adding insert command/improving consistency of set and add Fields (#7144) * DOCSP-24244 adding insert command and improving consistency of set and addFields pages * DOCSP-24244 adding insert command and improving consistency of set and addFields pages * DOCSP-24244 typo * DOCSP-24244 typo * DOCSP-24244 improving consistency * DOCSP-24244 improving consistency * DOCSP-24244 improving consistency * DOCSP-24244 improving consistency * DOCSP-24244 indentation * DOCSP-24244 making output un copyable * (DOCSP-38251): Added meta descriptions for batch 11. (#7240) * (DOCSP-38251): Incorporated Jeff's feedback. * DOCSP-38035 x or y only (#7149) * DOCSP-24247 adding insert command and improving consistency of isoDayOfWeek (#7175) * DOCSP-24247 adding insert command and improving consistency of isoDayOfWeek * DOCSP-24247 adding insert command and improving consistency of isoDayOfWeek * DOCSP-24250 adding insert command/ improving consistency of isoWeek and split (#7254) * DOCSP-24250 adding insert command and improving consistency of isoWeek and split * DOCSP-24250 adding insert command and improving consistency of isoWeek and split * DOCSP-24250 adding insert command and improving consistency of isoWeek and split * DOCS-15862 serverstatus metrics commands validator (#7182) * Rendering check * Fixed rendering; draft content * Wording fixes * Reordered * Added 'on this mongod' * External PR feedback * (DOCSP-38250): Added meta descriptions for batch 10. (#7272) * (DOCSP-38250): Added meta descriptions for batch 10. 
* (DOCSP-38250): Incorporated Ian's feedback. * DOCS-16499 Add Cursor Behavior Note To Find Page (#7260) * * * * * * * * * * * * * * * Apply suggestions from code review Co-authored-by: corryroot <72401712+corryroot@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Will Buerger <59492746+wbuerger46@users.noreply.github.com> --------- Co-authored-by: corryroot <72401712+corryroot@users.noreply.github.com> Co-authored-by: Will Buerger <59492746+wbuerger46@users.noreply.github.com> * DOCSP-26691-clarify-default-behavior-log-rotation-replica (#7161) * DOCSP-26691-clarify-default-behavior-log-rotation-replica * Editorial feedback. * Small editorial edit. * Added repo sync to repo to keep public and private repos in sync (#7157) * (DOCSP-38241): Adds meta descriptions batch 1 (#7259) * Add meta descriptions batch 1 * copy review feedback * DOCSP-38155: 7.0.8 Release Notes (#7206) * DOCSP-38155 * DOCSP-38155: Edit * DOCSP-24254 add insert command to reduce (#7150) * DOCSP-24248 save changes * DOCSP-24254 fix examples * DOCSP-24254 add commas * DOCSP-38292 adding missing brackets + improving example consistency (#7151) * DOCSP-38292 adding missing brackets * DOCSP-38292 removing quotes * DOCSP-38292 removing quotes * DOCSP-38292 improving consistency * DOCSP-38292 improving consistency * DOCSP-38292 removing spaces before colons * DOCSP-38292 removing quotes from update field names * DOCSP-38292 removing quotes from update field names * DOCSP-38292 removing aggregate.txt from scope * DOCSP-38292 removing aggregate.txt from scope * DOCSP-38292 indentation fix * DOCSP-24249 adding inserts and improving consistency of eq page (#7239) * DOCSP-24249 adding inserts and improving consistency of eq page * DOCSP-24249 adding inserts and improving consistency of eq page * DOCSP-24249 adding inserts and improving consistency of eq page * DOCSP-24249 adding inserts and improving consistency of eq page * DOCSP-24249 fixing indentation * DOCSP-24249 fixing indentation * DOCSP-24249 fixing indentation * DOCSP-24249 build error * DOCSP-24249 build error * DOCSP-24249 build error * DOCSP-24249 build error * DOCSP-24249 build error * DOCSP-38343: Fix (#7318) * (DOCSP-33787): Better clarify limitations of index on entire object (#7257) * (DOCSP-33787): Better clarify limitations of index on entire object * missing ref * typo * add no results example * wording * review feedback * Only run workflows on master and v*.* branches (#7330) * (DOCSP-38243): Add meta descriptions (#7326) * (DOCSP-38243): Add meta descriptions * undo dot-dollar changes * adjust distinct * DOCSP-28629 Add limitation: Unsupported filter option for listDatabases (#7165) * DOCSP-28629 Add limitation: Unsupported filter option for listDatabases * Update source/reference/command/listDatabases.txt Co-authored-by: corryroot <72401712+corryroot@users.noreply.github.com> --------- Co-authored-by: corryroot <72401712+corryroot@users.noreply.github.com> * DOCSP-24272 Add insert command to computed data example (#7297) * DOCSP-24272 add insert command * DOCSP-24272 add insert command * DOCSP-24272 add insert command * DOCSP-24272 add insert command * (DOCSP-15749): Add operationMetrics to explain output (#7197) * Add operationMetrics to explain output * tech review feedback * tech review feedback 2 * copy review feedback 2 * (DOCSP-28043): Update getShardDistribution() output (#7177) * update getShardDistribution output) ggp * copy feedback * copy review feedback * DOCSP-34823-asymmetric-sync (#7176) * 
DOCSP-34823-asymmetric-sync * DOCSP-34823-asymmetric-sync * DOCSP-34823-asymmetric-sync * DOCSP-34823-asymmetric-sync --------- Co-authored-by: jason-price-mongodb * DOCSP-38319 Fix Location Query Syntax (#7288) * DOCS-16583 bounded clustered ixscan allowed in notablescan (#7066) * DOCS-16583 bounded clustered ixscan allowed in notablescan * DOCS-16583 bounded clustered ixscan allowed in notablescan * DOCS-16583 bounded clustered ixscan allowed in notablescan * DOCS-16583 bounded clustered ixscan allowed in notablescan * DOCS-16583 copy edit * DOCS-16583 tech edit * DOCS-16583 tech edit * DOCS-16583 tech edit * DOCS-16583 tech edit * DOCS-16583 build error * DOCSP-27571-batch-size (#7325) * DOCSP-27571-batch-size * DOCSP-27571-batch-size * DOCSP-27571-batch-size --------- Co-authored-by: jason-price-mongodb * Docsp 38346: Finalize release notes for 7.3.1 (#7359) * DOCSP-38346 * DOCSP-38346: Update change logs * DOCSP-38346 * DOCSP-38346 * DOCSP-38346 * DOCSP-38246-meta Add meta descriptions batch 6 (#7327) * DOCSP-38246-meta Add meta descriptions batch 6 * test * fix * test * copy typos * DOCSP-28856-secondary-throttle-update (#7258) * DOCSP-28856-secondary-throttle-update * DOCSP-28856-secondary-throttle-update * DOCSP-28856-secondary-throttle-update * DOCSP-28856-secondary-throttle-update --------- Co-authored-by: jason-price-mongodb * (DOCSP-30104): Tutorial for modifying a shard zone (#7361) * (DOCSP-30104): Tutorial for modifying a shard zone * move file to correct location * remove duplicate toc entries * change learn more links * intro tweak * fix code formatting * remove monospace from step titles * remove punctuation from step * remove punctuation from step * fix step formatting * review feedback * change syntax highlighting * remove syntax highlighting * (DOCSP-38249, DOCSP-38248, DOCSP-38247): Added meta descriptions for batches 9, 8, and 7. (#7358) * (DOCSP-38249,DOCSP-38248, DOCSP-38247): Added meta descriptions for batches 9, 8, and 7. * (DOCSP-38249): Incorporated Ashley's feedback. * (DOCSP-38381) Fix broken links. 
(#7360) * DOCSP-38154 Add Additional getLog Commands (#7305) * DOCSP-38154 Add Additional getLog Commands * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * DOCSP-38156 add the reference to the array sorting behavior (#7127) * DOCSP-38156 add the reference to the array sorting behavior * Apply suggestions from code review Co-authored-by: Anna Henningsen * DOCSP-38156 updates for AH's feedback --------- Co-authored-by: Anna Henningsen * DOCSP-27787 Update mongodb-client-encryption to 2.4.0 (#7323) * DOCSP-27787 Update mongodb-client-encryption to 2.4.0 * Apply suggestions from code review Co-authored-by: Anna Henningsen * DOCSP-27787 updates for AH's feedback --------- Co-authored-by: Anna Henningsen * update branches where script runs (#7385) * DOCSP-38431: Typo fix (#7388) * (DOCS-16718): Clarify oplog size setting is uncompressed (#7111) * (DOCS-16718): Clarify oplog size setting is uncompressed * format replacements * DOCSP-38147 "Choosing an In Use Encryption Approach" crypto team review (#7058) * New topic, TOC update, refs * TOC and link restructuring to move limitations topics up * Fixed drawer page * Shorter title * Title fixes * Added stub headings * Stub headings * More topic structure * Copied language on implementation considerations to main QE and CSFLE topics * Typo fix * Reordered headings * Fixed heading level * Parallel wording * Reorder * Fixed ref typo * Removed unused stubs and links to them * Added draft language for security considerations * Added link for client-side schema enforcement * Updated image * Internal PR feedback Co-authored-by: Ashley Brown <98361885+mdb-ashley@users.noreply.github.com> * Internal PR feedback Co-authored-by: Ashley Brown <98361885+mdb-ashley@users.noreply.github.com> * Internal PR feedback Co-authored-by: Ashley Brown <98361885+mdb-ashley@users.noreply.github.com> * Rebuild * Updated CSFLE features topic to use same image * Updated image * Image update * Image update * Removed ambiguous/misleading statement about CSFLE being a mature security product * Some re-wording of the term implementation * Consolidated some limitations for brevity * Split bullet * Changed the order of some statements to move important info up * Reverted image to master --------- Co-authored-by: Ashley Brown <98361885+mdb-ashley@users.noreply.github.com> * (DOCSP-28903): Clarify default oplog size bounds (#7374) * (DOCSP-28903): Clarify default oplog size bounds * edit * Update source/core/replica-set-oplog.txt Co-authored-by: jocelyn-mendez1 <91144778+jocelyn-mendez1@users.noreply.github.com> * WIP * review feedback * simplify * reformatting * reformatting * remove page from other pr * standardize formatting * standardize order --------- Co-authored-by: jocelyn-mendez1 <91144778+jocelyn-mendez1@users.noreply.github.com> * DOCSP-22631 Add 2dsphere example to Time Series Secondary Index (#7281) * DOCSP-22631 Add 2dsphere example to Time Series Secondary Index * DOCSP-22631 updates for NA's feedback * DOCSP-36707 improves Unique Indexes page (#7292) * DOCSP-36707 improves Unique Indexes page * external review * Adds changelog, release notes for 6.0.15 (#7406) * DOCS-15864 documentation for enforceUserClusterSeparation parameter (#6789) * DOCS-15864 documentation for enforceUserClusterSeparation parameter * DOCS-15864 updates for SJ's feedback * DOCS-15864 updates for SJ's feedback * DOCSP-15864 updates for more feedback * DOCS-15864 updates for JA's feedback * DOCS-15864 adding enforceUserClusterSeparation to Server Parameters page * Update 
source/reference/parameters.txt Co-authored-by: Jeff Allen * DOCS-15864 updates for JA's feedback --------- Co-authored-by: Jeff Allen * DOCSP-25448 Create a Collection in MongoDB SEO Optimizations (#6829) * DOCSP-25448 Create a Collection in MongoDB SEO Optimizations * DOCSP-25448 updates for copy feedback * DOCSP-25448 updates for SEO * DOCSP-25448 updates for EG's feedback * DOCSP-35031 $toHashedIndexKey results not collated (#7064) * DOCSP-35031 results not collated * adjusts text * adjusts text * adjusts text * DOCSP-37789 Field names with periods (#7375) * WIP * WIP * fix snooty.toml syntax * start new page * WIP * first draft * edit * formatting fix * review feedback * review edits * fix indentation * fix example * DOCSP-36188 KMIP on Windows admonition (#7373) * DOCSP-36188 KMIP on Windows admonition * internal review * external review * DOCSP-36150 Replica Set Data Synchronization consistent terminology (#7387) * DOCSP-36150 Replica Set Data Synchronization consistent terminology * internal review * DOCSP-25748 shardCollection regarding hashed field in shard key (#7015) * DOCSP-25748 shardCollection regarding hashed field in shard key * external review * external review * DOCSP-33893 Adds support for to timeseries collections (#7411) * DOCSP-33893 Adds support for to timeseries collections * Reworks text * Adjusts text * Fixes per Joe * DOCSP-38242 meta descriptions batch 2 (#7416) * DOCSP-38242 meta descriptions batch 2) * DOCSP-38242 meta descriptions batch 2) * DOCSP-38242 copy edits * DOCSP-38242 copy edits * DOCSP-38442 Update contention factor guidance (#7399) * Initial structural changes * Subscript/superscript rendering check * Parsing fix * Syntax fixes * Additional clarification * Syntax and wording * Line break * Wording change * Wording change * Updated wording * Updated wording * Updated wording * Fixed wording * Changed chapter to section * Apply suggestions from code review Internal review feedback Co-authored-by: Jeff Allen * Line break * Monospace --------- Co-authored-by: Jeff Allen * Adds release notes, changelog for 7.0.9 (#7421) * DOCSP-38351 Reduce changeStreams page to one meta description (#7432) * DOCSP-38374-geospatial-queries-compatibility (#7398) * DOCS-15305 Validation with encrypted fields (#7198) * Added schema validation content * Drafted partial filter content * Doc constant fix * Fixed line break * Softened wording * Internal PR feedback * Internal PR feedback * Converted restrictions to bulleted list, removed include on behavior that starts as of 5.0 * Syntax * Syntax * reverted bulleted list due to syntax issues * Syntax fix again * External review feedback * Updated with current behavior * DOCSP-28917 Define assertions (#7040) * DOCSP-28917 define assertions * DOCSP-28917 define assertions * DOCSP-28917 adam feedback * DOCSP-28917 updates * DOCSP-28917 updates * (DOCSP-38604): Add product name to manual getting started page (#7447) * (DOCSP-38604): Add product name to manual getting started page * simplify toc entry * Finalizing 7.3 (#7445) * DOCSP-34743 7.0 new concurrent read/write behavior (#7046) * DOCSP-34743 adding dynamic concurrency info * DOCSP-34743 adding dynamic concurrency info * DOCSP-34743 adding dynamic concurrency info * DOCSP-34743 fixing prefix * DOCSP-34743 simplifying wording * DOCSP-34743 adding overload info * DOCSP-34743 adding overload info * DOCSP-34743 adding overload info * DOCSP-38434 adding shell method note to objectID page (#7441) * DOCSP-38434 adding shell method note to count documents page * DOCSP-38434 adding 
shell method note to object id page * DOCSP-25321 Refactors the "wills" on Configuration Options (#7415) * Refactors the wills * Text adjustment * Text adjustment * Text adjustment in includes * Removes 'via' per Jeff * Removes 'via' per Jeff * Removes 'via' per Jeff * Fixes per Jeff * Fixes per Jeff * Fixes per Jeff * DOCSP-37741 batchSize note (#7195) * DOCSP-37741 batchSize note * Adjusts text * Adjusts text * Adjusts text * Updates repo-sync.yml --------- Co-authored-by: Alison Huh <112565127+ajhuh-mdb@users.noreply.github.com> Co-authored-by: jmd-mongo <73852296+jmd-mongo@users.noreply.github.com> Co-authored-by: Rea Rustagi <85902999+rustagir@users.noreply.github.com> Co-authored-by: Nick Villahermosa Co-authored-by: MongoCaleb <32645888+MongoCaleb@users.noreply.github.com> Co-authored-by: ltran-mdb2 <143426234+ltran-mdb2@users.noreply.github.com> Co-authored-by: Matt Maville <150086858+mmaville-mdb@users.noreply.github.com> Co-authored-by: Kenneth P. J. Dyer <93145796+kennethdyer@users.noreply.github.com> Co-authored-by: Jeff Allen Co-authored-by: kanchana-mongodb <54281287+kanchana-mongodb@users.noreply.github.com> Co-authored-by: John Williams <55147273+jwilliams-mongo@users.noreply.github.com> Co-authored-by: Sarah Olson <98367156+sarah-olson-mongodb@users.noreply.github.com> Co-authored-by: jason-price-mongodb <69260375+jason-price-mongodb@users.noreply.github.com> Co-authored-by: jason-price-mongodb Co-authored-by: Anna Henningsen Co-authored-by: ianf-mongodb <85948430+ianf-mongodb@users.noreply.github.com> Co-authored-by: Jordan Smith <45415425+jordan-smith721@users.noreply.github.com> Co-authored-by: Sarah Simpers <82042374+sarahsimpers@users.noreply.github.com> Co-authored-by: jocelyn-mendez1 <91144778+jocelyn-mendez1@users.noreply.github.com> Co-authored-by: Kenneth P. J. 
Dyer Co-authored-by: davidhou17 <55004296+davidhou17@users.noreply.github.com> Co-authored-by: Evelyn Rabil Co-authored-by: Evelyn Rabil <114026323+erabil-mdb@users.noreply.github.com> Co-authored-by: Sarah Lin <55722001+sarahemlin@users.noreply.github.com> Co-authored-by: Dachary Co-authored-by: pierwill <19642016+pierwill@users.noreply.github.com> Co-authored-by: pierwill Co-authored-by: corryroot <72401712+corryroot@users.noreply.github.com> Co-authored-by: Melissa Mahoney Co-authored-by: Jeff Allen Co-authored-by: Kevin Cherkauer <111792207+kevin-cherkauer@users.noreply.github.com> Co-authored-by: Nora Reidy Co-authored-by: Nick Larew Co-authored-by: mmeigs Co-authored-by: lmkerbey-mdb <105309825+lmkerbey-mdb@users.noreply.github.com> Co-authored-by: Mihai Andrei Co-authored-by: lindseymoore <71525840+lindseymoore@users.noreply.github.com> Co-authored-by: Chris Cho Co-authored-by: Himanshu Singh Co-authored-by: Will Buerger <59492746+wbuerger46@users.noreply.github.com> Co-authored-by: Alexander Neben --- .github/workflows/repo-sync.yml | 38 + .gitignore | 8 +- .mci.yml | 51 - .tx/config | 5244 ----------------- Makefile | 3 + config/changelog_conf.yaml | 8 + config/redirects | 34 +- draft/commands-locks.txt | 4 - repo_sync.py | 45 + requirements.txt | 3 +- snooty.toml | 42 +- .../analyzing-mongodb-performance.txt | 45 +- .../configuration-and-maintenance.txt | 5 + source/administration/configuration.txt | 8 +- .../connection-pool-overview.txt | 109 +- .../diagnose-query-performance.txt | 173 + source/administration/monitoring.txt | 13 +- .../production-checklist-development.txt | 2 +- .../production-checklist-operations.txt | 10 + source/administration/production-notes.txt | 87 +- source/administration/security-checklist.txt | 1 + .../sharded-cluster-administration.txt | 5 +- .../data-models-relationships.txt | 5 + source/applications/data-models.txt | 5 + source/applications/replication.txt | 2 +- source/changeStreams.txt | 83 +- source/contents.txt | 23 +- .../aggregation-pipeline-optimization.txt | 165 +- source/core/aggregation-pipeline.txt | 5 +- source/core/authentication.txt | 4 +- source/core/authorization.txt | 4 +- source/core/backups.txt | 2 +- source/core/bulk-write-operations.txt | 2 +- source/core/capped-collections.txt | 311 +- .../change-max-docs-capped-collection.txt | 62 + .../change-size-capped-collection.txt | 59 + .../check-if-collection-is-capped.txt | 65 + .../convert-collection-to-capped.txt | 84 + .../create-capped-collection.txt | 95 + .../query-capped-collection.txt | 176 + ...causal-consistency-read-write-concerns.txt | 2 +- source/core/clustered-collections.txt | 18 +- .../core/collection-level-access-control.txt | 2 +- source/core/crud.txt | 2 - source/core/csfle.txt | 21 +- source/core/csfle/features.txt | 6 +- source/core/csfle/fundamentals.txt | 6 +- .../core/csfle/fundamentals/create-schema.txt | 2 +- .../csfle/fundamentals/keys-key-vaults.txt | 90 - .../core/csfle/fundamentals/manage-keys.txt | 34 +- .../csfle/fundamentals/manual-encryption.txt | 12 +- source/core/csfle/quick-start.txt | 8 +- source/core/csfle/reference.txt | 8 - source/core/csfle/reference/compatibility.txt | 102 - .../csfle/reference/csfle-options-clients.txt | 8 +- source/core/csfle/reference/decryption.txt | 4 +- .../csfle/reference/encryption-components.txt | 4 +- .../csfle/reference/encryption-schemas.txt | 16 +- source/core/csfle/reference/kms-providers.txt | 183 - source/core/csfle/reference/libmongocrypt.txt | 45 +- source/core/csfle/reference/limitations.txt | 10 + 
.../csfle/reference/supported-operations.txt | 1 - source/core/csfle/tutorials.txt | 24 + .../csfle/tutorials/aws/aws-automatic.txt | 4 +- .../csfle/tutorials/azure/azure-automatic.txt | 4 +- .../csfle/tutorials/gcp/gcp-automatic.txt | 4 +- .../csfle/tutorials/kmip/kmip-automatic.txt | 4 +- source/core/data-model-operations.txt | 2 +- source/core/databases-and-collections.txt | 313 +- source/core/dot-dollar-considerations.txt | 231 +- .../dollar-prefix.txt | 217 + .../dot-dollar-considerations/periods.txt | 165 + source/core/hashed-sharding.txt | 15 +- source/core/index-case-insensitive.txt | 79 +- source/core/index-creation.txt | 37 +- source/core/index-hidden.txt | 20 +- source/core/index-partial.txt | 11 +- source/core/index-ttl.txt | 5 +- source/core/index-unique.txt | 49 +- .../core/index-unique/convert-to-unique.txt | 191 + source/core/indexes/drop-index.txt | 2 +- .../indexes/index-types/geospatial/2d.txt | 9 + .../2d/query/proximity-flat-surface.txt | 10 +- .../index-types/geospatial/2dsphere.txt | 7 + .../indexes/index-types/index-compound.txt | 7 +- .../core/indexes/index-types/index-hashed.txt | 2 +- .../indexes/index-types/index-multikey.txt | 2 + .../core/indexes/index-types/index-single.txt | 1 + .../create-embedded-object-index.txt | 107 + .../create-single-field-index.txt | 64 +- .../create-text-index-multiple-languages.txt | 10 +- .../create-wildcard-index-multiple-fields.txt | 8 +- source/core/inmemory.txt | 9 +- source/core/journaling.txt | 27 +- source/core/kerberos.txt | 9 +- source/core/map-reduce.txt | 14 +- source/core/query-plans.txt | 26 +- source/core/queryable-encryption.txt | 33 +- .../queryable-encryption/about-qe-csfle.txt | 185 + source/core/queryable-encryption/features.txt | 2 +- .../queryable-encryption/fundamentals.txt | 16 +- .../fundamentals/enable-qe.txt | 77 + .../fundamentals/encrypt-and-query.txt | 350 +- .../fundamentals/keys-key-vaults.txt | 40 +- .../fundamentals/kms-providers.txt | 86 +- .../fundamentals/manage-collections.txt | 28 +- .../fundamentals/manage-keys.txt | 8 +- .../fundamentals/manual-encryption.txt | 7 +- .../queryable-encryption/install-library.txt | 168 + source/core/queryable-encryption/install.txt | 80 +- .../overview-enable-qe.txt | 61 + .../queryable-encryption/overview-use-qe.txt | 49 + .../qe-create-application.txt | 960 +++ .../queryable-encryption/qe-create-cmk.txt | 110 + .../qe-create-encrypted-collection.txt | 469 ++ .../qe-create-encryption-schema.txt | 205 + .../qe-retrieve-encrypted-document.txt | 116 + .../core/queryable-encryption/quick-start.txt | 83 +- .../core/queryable-encryption/reference.txt | 10 - .../reference/compatibility.txt | 262 +- .../reference/libmongocrypt.txt | 40 +- .../reference/limitations.txt | 34 +- .../reference/mongocryptd.txt | 52 - .../reference/shared-library.txt | 107 - .../reference/supported-operations.txt | 33 +- .../core/queryable-encryption/tutorials.txt | 32 +- .../tutorials/aws/aws-automatic.txt | 1093 ---- .../tutorials/azure/azure-automatic.txt | 1090 ---- .../tutorials/gcp/gcp-automatic.txt | 1087 ---- .../tutorials/kmip/kmip-automatic.txt | 1103 ---- source/core/ranged-sharding.txt | 8 +- .../read-isolation-consistency-recency.txt | 2 +- source/core/read-preference-hedge-option.txt | 10 +- source/core/read-preference-mechanics.txt | 14 +- source/core/read-preference-use-cases.txt | 4 +- source/core/read-preference.txt | 18 +- ...replica-set-architecture-three-members.txt | 16 +- source/core/replica-set-architectures.txt | 13 +- source/core/replica-set-elections.txt | 17 +- 
source/core/replica-set-members.txt | 3 +- source/core/replica-set-oplog.txt | 60 +- source/core/replica-set-rollbacks.txt | 134 +- source/core/replica-set-secondary.txt | 2 +- source/core/replica-set-sync.txt | 254 +- source/core/replica-set-write-concern.txt | 6 +- source/core/retryable-writes.txt | 184 +- source/core/schema-validation.txt | 23 +- .../schema-validation/specify-json-schema.txt | 16 + .../specify-json-schema/json-schema-tips.txt | 6 + .../specify-validation-level.txt | 14 +- .../use-json-schema-query-conditions.txt | 7 +- source/core/security-in-use-encryption.txt | 49 +- source/core/security-ldap-external.txt | 6 + source/core/security-ldap.txt | 6 + source/core/security-transport-encryption.txt | 11 +- source/core/sharded-cluster-components.txt | 16 +- source/core/sharded-cluster-query-router.txt | 2 +- .../core/sharding-balancer-administration.txt | 10 +- source/core/sharding-change-a-shard-key.txt | 5 +- source/core/sharding-choose-a-shard-key.txt | 45 +- source/core/sharding-data-partitioning.txt | 9 +- source/core/sharding-refine-a-shard-key.txt | 2 - source/core/sharding-reshard-a-collection.txt | 2 +- source/core/sharding-shard-a-collection.txt | 15 +- source/core/sharding-shard-key.txt | 37 +- source/core/tailable-cursors.txt | 72 +- source/core/timeseries-collections.txt | 16 +- .../timeseries/timeseries-best-practices.txt | 61 +- .../timeseries/timeseries-granularity.txt | 52 +- .../timeseries/timeseries-limitations.txt | 61 +- .../core/timeseries/timeseries-procedures.txt | 26 +- .../timeseries/timeseries-secondary-index.txt | 14 + .../timeseries-shard-collection.txt | 7 - source/core/transactions-in-applications.txt | 17 +- source/core/transactions-operations.txt | 2 +- .../transactions-production-consideration.txt | 2 +- source/core/transactions.txt | 75 +- source/core/views.txt | 10 +- source/core/views/create-view.txt | 2 - .../core/views/join-collections-with-view.txt | 3 + source/core/wiredtiger.txt | 19 + source/crud.txt | 3 +- source/data-center-awareness.txt | 2 - source/faq.txt | 2 + source/faq/diagnostics.txt | 7 +- source/faq/indexes.txt | 6 +- source/faq/replica-sets.txt | 2 +- source/faq/sharding.txt | 7 +- source/geospatial-queries.txt | 11 +- ...nt-side-field-level-encryption-diagram.svg | 1851 +++++- source/images/crud-write-concern-ack.rst | 2 +- source/images/crud-write-concern-journal.rst | 2 +- source/images/crud-write-concern-unack.rst | 2 +- ...d-cluster-hashed-distribution.bakedsvg.svg | 2 +- .../5.0-changes/removed-parameters.rst | 16 + .../includes/7.0-concurrent-transactions.rst | 7 +- source/includes/access-create-user.rst | 2 +- .../aggregation/convert-to-bool-table.rst | 66 + .../includes/analyzeShardKey-limitations.rst | 2 +- .../analyzeShardKey-method-command-fields.rst | 8 +- .../analyzeShardKey-supporting-indexes.rst | 3 + source/includes/atlas-nav/steps.rst | 0 .../command-output/search-index-statuses.rst | 22 + .../field-definitions/type.rst | 6 + .../vector-search-index-definition-fields.rst | 17 + source/includes/autosplit-no-operation.rst | 2 +- .../capped-collections/concurrent-writes.rst | 2 + .../query-natural-order.rst | 3 + .../capped-collections/use-ttl-index.rst | 8 + ...and-post-images-additional-information.rst | 12 +- .../includes/changelogs/releases/4.4.25.rst | 4 +- .../includes/changelogs/releases/4.4.28.rst | 37 + .../includes/changelogs/releases/4.4.29.rst | 79 + .../includes/changelogs/releases/5.0.22.rst | 2 +- .../includes/changelogs/releases/5.0.24.rst | 153 + .../includes/changelogs/releases/5.0.25.rst 
| 145 + .../includes/changelogs/releases/5.0.26.rst | 108 + .../includes/changelogs/releases/6.0.10.rst | 2 +- .../includes/changelogs/releases/6.0.13.rst | 181 + .../includes/changelogs/releases/6.0.14.rst | 158 + .../includes/changelogs/releases/6.0.15.rst | 227 + source/includes/changelogs/releases/7.0.1.rst | 2 +- source/includes/changelogs/releases/7.0.2.rst | 2 +- source/includes/changelogs/releases/7.0.4.rst | 2 +- source/includes/changelogs/releases/7.0.5.rst | 11 +- source/includes/changelogs/releases/7.0.6.rst | 247 + source/includes/changelogs/releases/7.0.7.rst | 166 + source/includes/changelogs/releases/7.0.8.rst | 66 + source/includes/changelogs/releases/7.0.9.rst | 122 + source/includes/changelogs/releases/7.2.1.rst | 187 + source/includes/changelogs/releases/7.2.2.rst | 16 + source/includes/changelogs/releases/7.3.1.rst | 24 + source/includes/checkpoints.rst | 3 + source/includes/client-sessions-reuse.rst | 3 + .../clustered-collections-introduction.rst | 14 +- source/includes/clustered-index-fields.rst | 11 +- .../comment-option-getMore-inheritance.rst | 5 + .../max-connecting-use-case.rst | 6 + .../considerations-deploying-replica-set.rst | 2 + source/includes/cqa-currentOp.rst | 5 +- source/includes/cqa-limitations.rst | 2 +- .../cqa-queryAnalysisSampleExpirationSecs.rst | 5 +- .../includes/create-an-encrypted-db-conn.rst | 10 +- source/includes/csfle-warning-local-keys.rst | 10 - source/includes/currentOp-output-example.rst | 12 +- .../diagnostic-backtrace-generation.rst | 4 +- .../driver-example-c-cleanup.rst | 10 + .../driver-example-delete-55.rst | 19 + .../driver-example-delete-56.rst | 18 + .../driver-example-delete-57.rst | 19 + .../driver-example-delete-58.rst | 11 + .../driver-example-delete-result.rst | 19 +- .../driver-example-indexes-1.rst | 12 + .../driver-example-insert-1.rst | 18 + .../driver-example-insert-2.rst | 18 + .../driver-example-insert-3.rst | 19 + .../driver-example-query-10.rst | 19 + .../driver-example-query-11.rst | 18 + .../driver-example-query-12.rst | 18 + .../driver-example-query-13.rst | 18 + .../driver-example-query-14.rst | 19 + .../driver-example-query-15.rst | 18 + .../driver-example-query-16.rst | 19 + .../driver-example-query-17.rst | 18 + .../driver-example-query-18.rst | 19 + .../driver-example-query-19.rst | 19 + .../driver-example-query-20.rst | 18 + .../driver-example-query-21.rst | 19 + .../driver-example-query-22.rst | 19 + .../driver-example-query-23.rst | 19 + .../driver-example-query-24.rst | 19 + .../driver-example-query-25.rst | 19 + .../driver-example-query-26.rst | 19 + .../driver-example-query-27.rst | 19 + .../driver-example-query-28.rst | 20 + .../driver-example-query-29.rst | 19 + .../driver-example-query-30.rst | 19 + .../driver-example-query-31.rst | 19 + .../driver-example-query-32.rst | 19 + .../driver-example-query-33.rst | 19 + .../driver-example-query-34.rst | 19 + .../driver-example-query-35.rst | 18 + .../driver-example-query-36.rst | 19 + .../driver-example-query-37.rst | 18 + .../driver-example-query-38.rst | 28 +- .../driver-example-query-39.rst | 28 +- .../driver-example-query-40.rst | 28 +- .../driver-example-query-41.rst | 29 +- .../driver-example-query-42.rst | 19 +- .../driver-example-query-43.rst | 19 + .../driver-example-query-44.rst | 20 + .../driver-example-query-45.rst | 18 + .../driver-example-query-46.rst | 18 + .../driver-example-query-47.rst | 18 + .../driver-example-query-48.rst | 18 + .../driver-example-query-49.rst | 18 + .../driver-example-query-50.rst | 23 +- 
.../driver-example-query-6.rst | 18 + .../driver-example-query-7.rst | 20 + .../driver-example-query-9.rst | 19 + .../driver-example-query-find-method.rst | 12 + .../driver-example-query-intro-no-perl.rst | 145 + .../driver-example-query-intro.rst | 24 +- .../driver-example-update-51.rst | 18 + .../driver-example-update-52.rst | 21 + .../driver-example-update-53.rst | 21 + .../driver-example-update-54.rst | 20 + .../driver-procedure-indexes-1.rst | 17 +- .../includes/driver-remove-indexes-tabs.rst | 5 +- .../drop-hashed-shard-key-index-main.rst | 7 + .../includes/drop-hashed-shard-key-index.rst | 3 + source/includes/enable-KMIP-on-windows.rst | 10 + .../includes/explain-ignores-cache-plan.rst | 4 + .../includes/extracts-4.0-upgrade-prereq.yaml | 3 +- source/includes/extracts-4.2-changes.yaml | 24 +- .../includes/extracts-4.2-downgrade-fcv.yaml | 8 +- source/includes/extracts-4.4-changes.yaml | 145 +- source/includes/extracts-agg-operators.yaml | 30 +- source/includes/extracts-agg-stages.yaml | 20 +- ...xtracts-bypassDocumentValidation-base.yaml | 2 +- source/includes/extracts-changestream.yaml | 5 +- ...ts-client-side-field-level-encryption.yaml | 10 +- source/includes/extracts-collation.yaml | 6 + source/includes/extracts-command-field.yaml | 3 +- source/includes/extracts-create-cmd.yaml | 2 +- ...tools-performance-considerations-base.yaml | 2 +- .../extracts-fact-findandmodify-return.yaml | 2 +- source/includes/extracts-filter.yaml | 299 +- ...acts-inequality-operators-selectivity.yaml | 6 +- .../extracts-linux-config-expectations.yaml | 4 +- ...-missing-shard-key-equality-condition.yaml | 4 +- .../extracts-production-notes-base.yaml | 26 +- source/includes/extracts-projection.yaml | 106 +- source/includes/extracts-replSetReconfig.yaml | 4 +- source/includes/extracts-rs-stepdown.yaml | 2 +- ...xtracts-server-status-projection-base.yaml | 2 +- source/includes/extracts-ssl-facts.yaml | 6 +- source/includes/extracts-tls-facts.yaml | 6 +- source/includes/extracts-transactions.yaml | 108 +- .../extracts-upsert-unique-index.yaml | 119 +- source/includes/extracts-views.yaml | 7 +- .../includes/extracts-wildcard-indexes.yaml | 41 +- .../includes/extracts-wired-tiger-base.yaml | 8 +- source/includes/extracts-wired-tiger.yaml | 13 +- .../includes/extracts-x509-certificate.yaml | 28 + source/includes/extracts-zoned-sharding.yaml | 2 +- .../fact-5.0-multiple-partial-index.rst | 4 +- .../fact-5.0-read-concern-latency.rst | 9 + .../includes/fact-7.3-singlebatch-cursor.rst | 5 + source/includes/fact-bson-types.rst | 5 - .../fact-bulk-writeConcernError-mongos.rst | 11 + .../fact-collection-namespace-limit.rst | 14 +- .../fact-csfle-compatibility-drivers.rst | 35 - source/includes/fact-default-conf-file.rst | 8 +- .../fact-disable-javascript-with-noscript.rst | 12 +- .../fact-document-field-name-restrictions.rst | 4 + source/includes/fact-dynamic-concurrency.rst | 18 + ...orce-user-cluster-separation-parameter.rst | 18 + ...nts-atlas-support-limited-free-and-m10.rst | 4 + .../fact-explain-results-categories.rst | 13 +- .../fact-explain-verbosity-executionStats.rst | 2 +- .../fact-explain-verbosity-queryPlanner.rst | 2 +- source/includes/fact-hidden-indexes.rst | 14 + .../fact-hint-text-query-restriction.rst | 2 +- source/includes/fact-installation-ulimit.rst | 3 +- .../includes/fact-manual-enc-definition.rst | 3 - .../fact-mapreduce-deprecated-bson.rst | 9 + .../fact-merge-same-collection-behavior.rst | 10 +- source/includes/fact-meta-syntax.rst | 12 +- .../includes/fact-mongodb-cr-deprecated.rst | 3 +- 
source/includes/fact-mongokerberos.rst | 9 + source/includes/fact-mws-intro.rst | 1 - source/includes/fact-mws.rst | 2 - ...ural-sort-order-text-query-restriction.rst | 2 +- source/includes/fact-oidc-providers.rst | 6 +- source/includes/fact-qe-csfle-contention.rst | 33 - .../fact-read-concern-write-timeline.rst | 2 +- source/includes/fact-read-own-writes.rst | 2 +- source/includes/fact-runtime-parameter.rst | 4 + .../fact-runtime-startup-parameter.rst | 9 + .../fact-selinux-redhat-with-policy.rst | 2 +- ...t-global-write-concern-before-reconfig.rst | 2 +- .../fact-sharded-cluster-components.rst | 7 +- .../includes/fact-split-horizon-binding.rst | 2 +- .../fact-ssl-tlsCAFile-tlsUseSystemCA.rst | 8 + source/includes/fact-stable-api-explain.rst | 2 + source/includes/fact-startup-parameter.rst | 4 + .../fact-stop-in-progress-index-builds.rst | 8 +- .../fact-text-search-phrase-and-term.rst | 2 +- source/includes/fact-text-search-score.rst | 4 +- source/includes/fact-timeZoneInfo.rst | 2 +- source/includes/fact-ulimit-minimum.rst | 2 + .../fact-use-aggregation-not-map-reduce.rst | 2 +- source/includes/fact-validate-metadata.rst | 2 +- .../fact-writeConcernError-mongos.rst | 11 + source/includes/find-options-values-table.rst | 90 + source/includes/fsync-mongos.rst | 5 + source/includes/getMore-slow-queries.rst | 2 +- source/includes/important-hostnames.rst | 3 +- ....txt => admonition-csfle-key-rotation.rst} | 2 +- .../case-insensitive-regex-queries.rst | 3 + .../commit-quorum-vs-write-concern.rst | 2 +- source/includes/indexes/commit-quorum.rst | 4 +- .../embedded-object-need-entire-doc.rst | 3 + .../includes/introduction-write-concern.rst | 2 +- source/includes/ldap-srv-details.rst | 8 + source/includes/let-variables-match-note.rst | 2 +- .../includes/limits-sharding-index-type.rst | 14 +- .../list-cluster-x509-requirements.rst | 1 - .../list-table-auth-mechanisms-shell-only.rst | 2 - .../list-text-search-restrictions-in-agg.rst | 6 +- .../log-changes-to-database-profiler.rst | 3 +- source/includes/negative-dividend.rst | 3 + source/includes/noCursorTimeoutNote.rst | 5 + .../includes/note-key-vault-permissions.rst | 5 + source/includes/note-shard-cluster-backup.rst | 5 +- source/includes/parameters-map-reduce.rst | 35 - .../project-stage-and-array-index.rst | 2 +- source/includes/qe-connection-boilerplate.rst | 4 +- .../node/queryable-encryption-helpers.js | 8 +- .../{requirements.txt => requirements.rst} | 0 .../includes/qe-tutorials/qe-quick-start.rst | 18 + source/includes/query-password.rst | 27 +- .../csfle-driver-tutorial-table.rst | 41 + .../example-qe-csfle-contention.rst | 19 +- .../fact-csfle-compatibility-drivers.rst | 2 +- .../qe-csfle-configure-mongocryptd.rst | 30 +- .../qe-csfle-contention.rst | 16 + .../qe-csfle-install-mongocryptd.rst | 35 +- .../qe-csfle-manual-enc-overview.rst | 5 + .../qe-csfle-partial-filter-disclaimer.rst | 3 + .../qe-csfle-schema-validation.rst | 12 + .../qe-csfle-setting-contention.rst | 46 + .../qe-csfle-warning-local-keys.rst | 13 + .../qe-enable-qe-at-collection-creation.rst | 4 + .../qe-explicitly-create-collection.rst | 7 + .../qe-facts-mongocryptd-process.rst | 9 +- .../queryable-encryption/quick-start/dek.rst | 5 +- .../reference/kms-providers/aws.rst | 2 +- .../reference/kms-providers/azure.rst | 2 +- .../reference/kms-providers/gcp.rst | 2 +- .../reference/kms-providers/kmip.rst | 2 +- .../queryable-encryption/set-up/csharp.rst | 9 - .../queryable-encryption/set-up/go.rst | 10 - .../queryable-encryption/set-up/java.rst | 10 - 
.../queryable-encryption/set-up/node.rst | 17 - .../queryable-encryption/set-up/python.rst | 12 - .../tutorials/assign-app-variables.rst | 192 + .../tutorials/automatic/aws/dek.rst | 2 +- .../tutorials/automatic/azure/dek.rst | 2 +- .../tutorials/automatic/gcp/dek.rst | 2 +- .../tutorials/exp/dek.rst | 5 +- source/includes/quick-start/cmk.rst | 2 +- source/includes/quick-start/dek.rst | 5 +- source/includes/quiesce-period.rst | 6 - source/includes/rapid-release-short.rst | 4 +- source/includes/rapid-release.rst | 4 +- .../includes/read-preference-modes-table.rst | 8 +- .../includes/reference/kms-providers/aws.rst | 75 - .../reference/kms-providers/azure.rst | 78 - .../reference/kms-providers/cmk-note.rst | 5 - .../includes/reference/kms-providers/gcp.rst | 111 - .../includes/reference/kms-providers/kmip.rst | 71 - .../reference/kms-providers/local.rst | 38 - .../reference/oplog-size-setting-intro.rst | 3 + .../replica-set-nodes-cannot-be-shared.rst | 4 + source/includes/replica-states.rst | 14 +- .../note-replica-set-major-versions.rst | 5 + .../block-revoked-certificates-intro.rst | 3 + .../includes/security/cve-2024-1351-info.rst | 20 + .../shard-key-modification-warning.rst | 6 +- ...store-file-system-snapshot-restriction.rst | 9 +- .../shardedDataDistribution-orphaned-docs.rst | 17 + ...shardedDataDistribution-output-example.rst | 24 + ...backup-sharded-cluster-with-snapshots.yaml | 202 - .../steps-change-replica-set-wiredtiger.yaml | 2 +- .../steps-clear-jumbo-flag-refine-key.yaml | 2 +- .../steps-configure-ldap-mongodb.yaml | 2 +- ...-windows-with-kerberos-authentication.yaml | 2 +- ...-mongodb-with-kerberos-authentication.yaml | 2 +- .../steps-create-role-dropSystemViews.yaml | 2 +- .../steps-deploy-replica-set-with-auth.yaml | 2 +- ...entication-in-replica-set-no-downtime.yaml | 8 +- ...install-mongodb-enterprise-on-red-hat.yaml | 2 +- .../steps-install-mongodb-on-red-hat.yaml | 2 +- .../steps-install-mongodb-on-suse.yaml | 2 +- .../steps-install-verify-files-pgp.yaml | 2 +- ...s-kerberos-auth-activedirectory-authz.yaml | 2 +- .../steps-shard-a-collection-ranged.yaml | 5 +- source/includes/steps-shard-existing-tsc.yaml | 29 +- .../includes/steps-start-sharded-cluster.yaml | 2 +- .../table-transactions-operations.rst | 18 +- ...series-secondary-indexes-downgrade-FCV.rst | 4 +- .../includes/tutorials/automatic/aws/cmk.rst | 2 +- .../includes/tutorials/automatic/aws/dek.rst | 4 +- .../tutorials/automatic/azure/dek.rst | 6 +- .../includes/tutorials/automatic/gcp/dek.rst | 6 +- ...hable-node-default-quorum-index-builds.rst | 6 + source/includes/upgrade-intro.rst | 11 +- source/includes/use-expr-in-find-query.rst | 30 + ...act-compare-view-and-materialized-view.rst | 9 +- source/includes/w-1-rollback-warning.rst | 8 +- .../warning-dropDatabase-shardedCluster.rst | 9 - source/index.txt | 13 +- source/indexes.txt | 14 +- source/installation.txt | 3 + source/introduction.txt | 5 +- source/legacy-opcodes.txt | 8 +- source/reference.txt | 4 - .../aggregation-commands-comparison.txt | 17 +- source/reference/audit-message.txt | 3 +- source/reference/bson-types.txt | 5 +- source/reference/built-in-roles.txt | 24 +- source/reference/change-events.txt | 2 +- .../change-events/reshardCollection.txt | 2 +- .../reference/collation-locales-defaults.txt | 2 +- source/reference/collation.txt | 21 +- source/reference/command.txt | 17 +- source/reference/command/aggregate.txt | 12 +- source/reference/command/analyzeShardKey.txt | 17 +- source/reference/command/appendOplogNote.txt | 18 +- 
source/reference/command/applyOps.txt | 11 + .../command/balancerCollectionStatus.txt | 2 - source/reference/command/buildInfo.txt | 11 + source/reference/command/cleanupOrphaned.txt | 63 +- .../command/cloneCollectionAsCapped.txt | 14 +- source/reference/command/collMod.txt | 169 +- source/reference/command/collStats.txt | 17 +- source/reference/command/compact.txt | 64 +- .../compactStructuredEncryptionData.txt | 11 + .../command/configureQueryAnalyzer.txt | 16 +- source/reference/command/connPoolStats.txt | 13 +- source/reference/command/connectionStatus.txt | 13 +- source/reference/command/convertToCapped.txt | 16 +- source/reference/command/count.txt | 3 - source/reference/command/create.txt | 17 +- source/reference/command/createIndexes.txt | 44 +- source/reference/command/createRole.txt | 1 - .../reference/command/createSearchIndexes.txt | 66 +- source/reference/command/createUser.txt | 6 +- source/reference/command/currentOp.txt | 19 +- source/reference/command/dataSize.txt | 12 + source/reference/command/dbHash.txt | 11 + source/reference/command/dbStats.txt | 14 +- source/reference/command/delete.txt | 23 +- source/reference/command/distinct.txt | 2 - source/reference/command/driverOIDTest.txt | 11 + source/reference/command/drop.txt | 19 +- .../command/dropAllRolesFromDatabase.txt | 4 +- .../command/dropAllUsersFromDatabase.txt | 3 - source/reference/command/dropConnections.txt | 15 +- source/reference/command/dropDatabase.txt | 24 +- source/reference/command/dropIndexes.txt | 31 +- source/reference/command/dropRole.txt | 4 +- source/reference/command/dropUser.txt | 4 +- source/reference/command/explain.txt | 31 +- source/reference/command/features.txt | 11 + source/reference/command/filemd5.txt | 11 + source/reference/command/find.txt | 86 +- source/reference/command/findAndModify.txt | 45 +- .../reference/command/flushRouterConfig.txt | 9 +- source/reference/command/fsync.txt | 86 +- source/reference/command/fsyncUnlock.txt | 47 +- source/reference/command/geoSearch.txt | 3 - source/reference/command/getAuditConfig.txt | 2 +- .../reference/command/getClusterParameter.txt | 11 + source/reference/command/getCmdLineOpts.txt | 11 + .../reference/command/getDefaultRWConcern.txt | 31 +- source/reference/command/getLog.txt | 32 +- source/reference/command/getParameter.txt | 18 +- .../command/grantPrivilegesToRole.txt | 2 - source/reference/command/grantRolesToRole.txt | 4 +- source/reference/command/grantRolesToUser.txt | 1 - source/reference/command/hello.txt | 13 +- source/reference/command/hostInfo.txt | 11 + source/reference/command/insert.txt | 18 +- source/reference/command/isSelf.txt | 11 + source/reference/command/killCursors.txt | 14 +- source/reference/command/killOp.txt | 14 +- source/reference/command/listCollections.txt | 18 +- source/reference/command/listCommands.txt | 11 + source/reference/command/listDatabases.txt | 19 +- source/reference/command/listIndexes.txt | 34 +- source/reference/command/lockInfo.txt | 11 + source/reference/command/logRotate.txt | 11 + source/reference/command/logout.txt | 11 +- source/reference/command/mapReduce.txt | 80 +- source/reference/command/medianKey.txt | 28 - source/reference/command/moveChunk.txt | 8 +- source/reference/command/moveRange.txt | 2 + .../reference/command/nav-administration.txt | 4 - source/reference/command/nav-diagnostic.txt | 7 - source/reference/command/nav-sharding.txt | 9 - source/reference/command/netstat.txt | 11 + source/reference/command/ping.txt | 11 + source/reference/command/planCacheClear.txt | 5 +- 
.../command/planCacheClearFilters.txt | 2 - .../command/planCacheListFilters.txt | 2 - .../reference/command/planCacheSetFilter.txt | 4 +- source/reference/command/profile.txt | 22 +- source/reference/command/reIndex.txt | 11 + .../command/refineCollectionShardKey.txt | 8 +- source/reference/command/removeShard.txt | 3 +- source/reference/command/renameCollection.txt | 15 +- .../command/replSetAbortPrimaryCatchUp.txt | 11 + source/reference/command/replSetFreeze.txt | 11 + source/reference/command/replSetGetConfig.txt | 18 +- source/reference/command/replSetGetStatus.txt | 76 +- source/reference/command/replSetInitiate.txt | 11 + .../reference/command/replSetMaintenance.txt | 11 + source/reference/command/replSetReconfig.txt | 27 +- .../reference/command/replSetResizeOplog.txt | 39 +- source/reference/command/replSetStepDown.txt | 11 + source/reference/command/replSetSyncFrom.txt | 11 + .../reference/command/reshardCollection.txt | 17 +- .../command/revokePrivilegesFromRole.txt | 3 +- .../reference/command/revokeRolesFromRole.txt | 2 - .../reference/command/revokeRolesFromUser.txt | 3 +- source/reference/command/rolesInfo.txt | 4 +- .../reference/command/rotateCertificates.txt | 11 + source/reference/command/serverStatus.txt | 558 +- .../reference/command/setClusterParameter.txt | 10 + .../reference/command/setDefaultRWConcern.txt | 17 +- .../setFeatureCompatibilityVersion.txt | 13 +- .../command/setIndexCommitQuorum.txt | 16 +- source/reference/command/setParameter.txt | 11 + source/reference/command/shardCollection.txt | 14 +- .../reference/command/shardConnPoolStats.txt | 11 + source/reference/command/shutdown.txt | 23 +- source/reference/command/startSession.txt | 2 + source/reference/command/top.txt | 11 + source/reference/command/update.txt | 22 +- source/reference/command/updateRole.txt | 4 +- source/reference/command/updateUser.txt | 4 +- .../reference/command/updateZoneKeyRange.txt | 2 +- source/reference/command/usersInfo.txt | 2 - source/reference/command/validate.txt | 46 +- source/reference/command/whatsmyuri.txt | 11 + source/reference/config-database.txt | 76 +- ...-settings-command-line-options-mapping.txt | 24 +- source/reference/configuration-options.txt | 368 +- source/reference/connection-string.txt | 22 +- source/reference/database-profiler.txt | 9 +- source/reference/database-references.txt | 5 +- source/reference/explain-results.txt | 103 +- source/reference/glossary.txt | 574 +- source/reference/insert-methods.txt | 3 + source/reference/limits.txt | 37 +- source/reference/local-database.txt | 2 +- source/reference/log-messages.txt | 47 +- .../map-reduce-to-aggregation-pipeline.txt | 2 +- source/reference/method.txt | 115 +- source/reference/method/BSONRegExp.txt | 106 + .../reference/method/Bulk.find.collation.txt | 12 +- source/reference/method/BulkWriteResult.txt | 12 +- ...ntEncryption.createEncryptedCollection.txt | 2 +- .../method/ClientEncryption.encrypt.txt | 5 +- source/reference/method/Date.txt | 3 + .../reference/method/KeyVault.createKey.txt | 16 +- source/reference/method/KeyVault.getKey.txt | 2 +- source/reference/method/KeyVault.getKeys.txt | 2 +- .../method/KeyVault.rewrapManyDataKey.txt | 2 +- source/reference/method/Mongo.getURI.txt | 55 + .../method/Mongo.getWriteConcern.txt | 2 +- source/reference/method/Mongo.setReadPref.txt | 22 +- .../method/Mongo.setWriteConcern.txt | 4 +- .../reference/method/Mongo.startSession.txt | 23 +- source/reference/method/Mongo.txt | 75 +- source/reference/method/ObjectId.toString.txt | 3 + 
source/reference/method/ObjectId.txt | 51 +- source/reference/method/PlanCache.list.txt | 22 +- .../method/Session.abortTransaction.txt | 17 +- .../method/Session.commitTransaction.txt | 19 +- .../method/Session.startTransaction.txt | 15 +- .../method/Session.withTransaction.txt | 12 + source/reference/method/WriteResult.txt | 20 +- .../method/convertShardKeyToHashed.txt | 2 +- .../reference/method/cursor.allowDiskUse.txt | 4 +- source/reference/method/cursor.batchSize.txt | 42 +- source/reference/method/cursor.collation.txt | 12 +- source/reference/method/cursor.comment.txt | 4 +- source/reference/method/cursor.count.txt | 18 +- source/reference/method/cursor.explain.txt | 2 + source/reference/method/cursor.hint.txt | 2 +- .../method/cursor.noCursorTimeout.txt | 2 - source/reference/method/cursor.readPref.txt | 17 +- source/reference/method/cursor.returnKey.txt | 8 +- source/reference/method/cursor.sort.txt | 58 +- source/reference/method/db.aggregate.txt | 14 + source/reference/method/db.auth.txt | 65 +- .../method/db.checkMetadataConsistency.txt | 12 + .../method/db.collection.aggregate.txt | 202 +- .../method/db.collection.analyzeShardKey.txt | 92 +- .../method/db.collection.bulkWrite.txt | 8 +- ...db.collection.checkMetadataConsistency.txt | 14 + ...ection.compactStructuredEncryptionData.txt | 14 + .../db.collection.configureQueryAnalyzer.txt | 14 + .../reference/method/db.collection.count.txt | 15 +- .../method/db.collection.countDocuments.txt | 4 +- .../method/db.collection.createIndex.txt | 27 +- .../method/db.collection.createIndexes.txt | 19 +- .../db.collection.createSearchIndex.txt | 62 +- .../method/db.collection.dataSize.txt | 13 + .../method/db.collection.deleteMany.txt | 55 +- .../method/db.collection.deleteOne.txt | 105 +- .../method/db.collection.distinct.txt | 15 +- .../reference/method/db.collection.drop.txt | 8 +- .../method/db.collection.dropIndex.txt | 33 +- .../method/db.collection.dropIndexes.txt | 32 +- .../db.collection.estimatedDocumentCount.txt | 2 + .../method/db.collection.explain.txt | 19 +- .../reference/method/db.collection.find.txt | 185 +- .../method/db.collection.findAndModify.txt | 47 +- .../method/db.collection.findOne.txt | 8 +- .../method/db.collection.findOneAndDelete.txt | 8 + .../db.collection.findOneAndReplace.txt | 10 +- .../method/db.collection.findOneAndUpdate.txt | 15 +- .../method/db.collection.getIndexes.txt | 21 +- .../method/db.collection.getPlanCache.txt | 3 - .../db.collection.getShardDistribution.txt | 93 +- .../method/db.collection.getShardVersion.txt | 13 + .../method/db.collection.hideIndex.txt | 10 +- .../reference/method/db.collection.insert.txt | 13 +- .../method/db.collection.insertMany.txt | 13 +- .../method/db.collection.insertOne.txt | 15 +- .../method/db.collection.mapReduce.txt | 13 + .../method/db.collection.reIndex.txt | 15 + .../reference/method/db.collection.remove.txt | 3 +- .../method/db.collection.renameCollection.txt | 14 + .../method/db.collection.replaceOne.txt | 13 +- .../reference/method/db.collection.stats.txt | 8 +- .../reference/method/db.collection.update.txt | 24 +- .../method/db.collection.updateMany.txt | 21 +- .../method/db.collection.updateOne.txt | 65 +- .../method/db.collection.validate.txt | 20 +- .../reference/method/db.createCollection.txt | 18 +- source/reference/method/db.createUser.txt | 11 + source/reference/method/db.currentOp.txt | 22 +- source/reference/method/db.dropAllRoles.txt | 15 + source/reference/method/db.dropDatabase.txt | 20 +- source/reference/method/db.fsyncLock.txt | 25 +- 
source/reference/method/db.fsyncUnlock.txt | 25 +- source/reference/method/db.getCollection.txt | 2 +- .../method/db.getProfilingStatus.txt | 5 +- .../method/db.grantPrivilegesToRole.txt | 15 + .../reference/method/db.grantRolesToRole.txt | 15 + source/reference/method/db.hello.txt | 15 +- source/reference/method/db.hostInfo.txt | 14 + source/reference/method/db.killOp.txt | 13 + source/reference/method/db.listCommands.txt | 14 + source/reference/method/db.logout.txt | 14 + .../method/db.revokePrivilegesFromRole.txt | 14 + .../method/db.revokeRolesFromRole.txt | 15 + .../method/db.rotateCertificates.txt | 15 + .../reference/method/db.serverBuildInfo.txt | 14 + source/reference/method/db.serverStatus.txt | 21 +- source/reference/method/db.setLogLevel.txt | 198 +- .../reference/method/db.setProfilingLevel.txt | 13 +- source/reference/method/db.shutdownServer.txt | 9 - source/reference/method/db.stats.txt | 14 + source/reference/method/db.updateRole.txt | 15 + source/reference/method/getKeyVault.txt | 4 +- source/reference/method/js-atlas-search.txt | 2 + source/reference/method/js-atlas-streams.txt | 86 + source/reference/method/js-bulk.txt | 2 + .../js-client-side-field-level-encryption.txt | 2 +- source/reference/method/js-collection.txt | 5 + source/reference/method/js-connection.txt | 4 + source/reference/method/js-constructor.txt | 5 + source/reference/method/js-database.txt | 6 +- source/reference/method/js-plan-cache.txt | 3 - source/reference/method/js-replication.txt | 4 - source/reference/method/js-sharding.txt | 2 - source/reference/method/passwordPrompt.txt | 11 +- source/reference/method/rs.add.txt | 2 +- source/reference/method/rs.reconfig.txt | 11 +- .../reference/method/rs.reconfigForPSASet.txt | 2 +- .../method/sh.abortReshardCollection.txt | 12 + source/reference/method/sh.addShard.txt | 13 + source/reference/method/sh.addShardToZone.txt | 14 + source/reference/method/sh.addTagRange.txt | 2 +- .../method/sh.balancerCollectionStatus.txt | 16 +- .../method/sh.checkMetadataConsistency.txt | 12 + source/reference/method/sh.enableSharding.txt | 13 + source/reference/method/sh.moveChunk.txt | 25 +- .../method/sh.removeShardFromZone.txt | 13 + .../reference/method/sh.reshardCollection.txt | 72 +- .../reference/method/sh.shardCollection.txt | 14 +- source/reference/method/sh.stopBalancer.txt | 16 +- .../method/sh.updateZoneKeyRange.txt | 20 +- .../method/sp.createStreamProcessor.txt | 202 + .../method/sp.listStreamProcessors.txt | 198 + source/reference/method/sp.process.txt | 198 + source/reference/method/sp.processor.drop.txt | 69 + .../reference/method/sp.processor.sample.txt | 131 + .../reference/method/sp.processor.start.txt | 67 + .../reference/method/sp.processor.stats.txt | 170 + source/reference/method/sp.processor.stop.txt | 68 + source/reference/mongodb-defaults.txt | 4 +- source/reference/mongodb-extended-json.txt | 16 +- source/reference/operator.txt | 3 + .../operator/aggregation-pipeline.txt | 5 +- source/reference/operator/aggregation.txt | 6 +- .../operator/aggregation/accumulator.txt | 75 +- .../operator/aggregation/addFields.txt | 141 +- .../operator/aggregation/anyElementTrue.txt | 50 +- .../operator/aggregation/arrayElemAt.txt | 3 + .../operator/aggregation/binarySize.txt | 2 - .../reference/operator/aggregation/bitAnd.txt | 6 +- .../operator/aggregation/bsonSize.txt | 2 - .../changeStreamSplitLargeEvent.txt | 6 +- .../operator/aggregation/collStats.txt | 8 +- .../operator/aggregation/concatArrays.txt | 25 +- .../reference/operator/aggregation/cond.txt | 3 + 
.../operator/aggregation/convert.txt | 42 +- .../reference/operator/aggregation/count.txt | 3 + .../operator/aggregation/currentOp.txt | 4 - .../operator/aggregation/dateFromParts.txt | 9 +- .../operator/aggregation/dateToString.txt | 3 + .../operator/aggregation/documents.txt | 2 +- .../reference/operator/aggregation/facet.txt | 3 + .../reference/operator/aggregation/filter.txt | 399 +- .../aggregation/firstN-array-element.txt | 135 - .../reference/operator/aggregation/firstN.txt | 182 +- .../operator/aggregation/function.txt | 19 +- .../operator/aggregation/geoNear.txt | 4 +- .../reference/operator/aggregation/group.txt | 17 +- .../reference/operator/aggregation/ifNull.txt | 16 +- .../operator/aggregation/indexStats.txt | 3 +- .../operator/aggregation/isArray.txt | 19 +- .../operator/aggregation/isNumber.txt | 2 - .../operator/aggregation/isoDayOfWeek.txt | 14 +- .../operator/aggregation/isoWeek.txt | 13 +- .../aggregation/lastN-array-element.txt | 134 - .../reference/operator/aggregation/lastN.txt | 182 +- .../reference/operator/aggregation/limit.txt | 3 + .../aggregation/listSampledQueries.txt | 2 +- .../operator/aggregation/literal.txt | 7 +- .../reference/operator/aggregation/lookup.txt | 64 +- source/reference/operator/aggregation/map.txt | 3 + .../reference/operator/aggregation/match.txt | 3 + .../reference/operator/aggregation/merge.txt | 41 +- .../reference/operator/aggregation/meta.txt | 15 +- source/reference/operator/aggregation/mod.txt | 99 +- source/reference/operator/aggregation/out.txt | 16 +- source/reference/operator/aggregation/pow.txt | 64 +- .../operator/aggregation/project.txt | 41 +- .../reference/operator/aggregation/rand.txt | 2 - .../reference/operator/aggregation/rank.txt | 10 +- .../reference/operator/aggregation/reduce.txt | 126 +- .../operator/aggregation/replaceAll.txt | 2 - .../operator/aggregation/replaceOne.txt | 2 - .../reference/operator/aggregation/sample.txt | 11 + .../operator/aggregation/sampleRate.txt | 2 - source/reference/operator/aggregation/set.txt | 103 +- .../operator/aggregation/setIsSubset.txt | 2 +- .../aggregation/shardedDataDistribution.txt | 66 +- .../reference/operator/aggregation/size.txt | 3 + .../reference/operator/aggregation/sort.txt | 3 + .../reference/operator/aggregation/split.txt | 39 +- .../operator/aggregation/stdDevPop.txt | 76 +- source/reference/operator/aggregation/sum.txt | 25 +- .../reference/operator/aggregation/toBool.txt | 39 +- .../operator/aggregation/toHashedIndexKey.txt | 66 + .../operator/aggregation/toUpper.txt | 2 +- .../reference/operator/aggregation/type.txt | 2 +- .../operator/aggregation/unionWith.txt | 13 +- .../reference/operator/aggregation/unwind.txt | 3 + .../operator/aggregation/vectorSearch.txt | 2 +- source/reference/operator/meta/natural.txt | 12 +- .../operator/projection/elemMatch.txt | 43 +- .../operator/projection/positional.txt | 25 +- .../reference/operator/query-comparison.txt | 3 + source/reference/operator/query-logical.txt | 3 + source/reference/operator/query.txt | 3 + source/reference/operator/query/all.txt | 3 + source/reference/operator/query/and.txt | 5 +- .../reference/operator/query/bitsAllClear.txt | 6 +- .../reference/operator/query/bitsAllSet.txt | 4 +- .../reference/operator/query/bitsAnyClear.txt | 4 +- .../reference/operator/query/bitsAnySet.txt | 4 +- source/reference/operator/query/comment.txt | 2 +- source/reference/operator/query/elemMatch.txt | 129 +- source/reference/operator/query/eq.txt | 64 +- source/reference/operator/query/exists.txt | 3 + 
source/reference/operator/query/expr.txt | 53 +- source/reference/operator/query/gt.txt | 3 + source/reference/operator/query/gte.txt | 3 + source/reference/operator/query/in.txt | 3 + source/reference/operator/query/mod.txt | 107 +- source/reference/operator/query/ne.txt | 196 +- source/reference/operator/query/near.txt | 4 - .../reference/operator/query/nearSphere.txt | 4 - source/reference/operator/query/nin.txt | 3 + source/reference/operator/query/not.txt | 3 + source/reference/operator/query/or.txt | 3 + source/reference/operator/query/regex.txt | 38 +- source/reference/operator/query/size.txt | 3 + source/reference/operator/query/text.txt | 86 +- source/reference/operator/query/type.txt | 323 +- source/reference/operator/query/where.txt | 33 +- source/reference/operator/update.txt | 9 +- source/reference/operator/update/addToSet.txt | 3 + .../reference/operator/update/currentDate.txt | 3 + source/reference/operator/update/inc.txt | 3 + source/reference/operator/update/position.txt | 3 + source/reference/operator/update/pull.txt | 3 + source/reference/operator/update/push.txt | 3 + source/reference/operator/update/rename.txt | 101 +- source/reference/operator/update/set.txt | 20 + source/reference/operator/update/unset.txt | 3 + source/reference/parameters.txt | 588 +- source/reference/privilege-actions.txt | 2 - source/reference/program/mongod.txt | 128 +- source/reference/program/mongokerberos.txt | 9 +- source/reference/program/mongoldap.txt | 21 +- source/reference/program/mongos.txt | 16 +- source/reference/read-concern-snapshot.txt | 2 +- source/reference/read-concern.txt | 20 +- source/reference/replica-configuration.txt | 20 +- source/reference/replica-states.txt | 2 +- source/reference/replication.txt | 4 - source/reference/sbe.txt | 28 +- source/reference/sharding.txt | 8 - .../reference/sql-aggregation-comparison.txt | 2 +- source/reference/sql-comparison.txt | 5 +- source/reference/stable-api-changelog.txt | 4 +- source/reference/stable-api.txt | 31 +- source/reference/system-collections.txt | 4 +- source/reference/ulimit.txt | 2 +- source/reference/versioning.txt | 3 +- source/reference/write-concern.txt | 19 +- source/release-notes.txt | 76 +- source/release-notes/1.2.txt | 72 - source/release-notes/1.4.txt | 98 - source/release-notes/1.6.txt | 116 - source/release-notes/1.8.txt | 405 -- source/release-notes/2.0.txt | 431 -- source/release-notes/2.2.txt | 746 --- source/release-notes/2.4-changelog.txt | 97 - source/release-notes/2.4-index-types.txt | 76 - source/release-notes/2.4-javascript.txt | 447 -- source/release-notes/2.4-upgrade.txt | 616 -- source/release-notes/2.4.txt | 443 -- source/release-notes/2.6-changelog.txt | 904 --- source/release-notes/2.6-compatibility.txt | 948 --- source/release-notes/2.6-downgrade.txt | 309 - .../2.6-upgrade-authorization.txt | 90 - source/release-notes/2.6-upgrade.txt | 270 - source/release-notes/2.6.txt | 695 --- source/release-notes/3.0-changelog.txt | 663 --- source/release-notes/3.0-compatibility.txt | 589 -- source/release-notes/3.0-downgrade.txt | 207 - source/release-notes/3.0-scram.txt | 139 - source/release-notes/3.0-upgrade.txt | 229 - source/release-notes/3.0.txt | 770 --- source/release-notes/3.2-changelog.txt | 7 + source/release-notes/3.2-compatibility.txt | 9 +- source/release-notes/3.2-downgrade.txt | 7 + source/release-notes/3.2-javascript.txt | 9 +- source/release-notes/3.2-upgrade.txt | 7 + source/release-notes/3.2.txt | 13 +- source/release-notes/3.4-changelog.txt | 7 + 
source/release-notes/3.4-compatibility.txt | 7 +- .../3.4-downgrade-replica-set.txt | 7 + .../3.4-downgrade-sharded-cluster.txt | 7 + .../3.4-downgrade-standalone.txt | 7 + source/release-notes/3.4-downgrade.txt | 7 + .../release-notes/3.4-upgrade-replica-set.txt | 7 + .../3.4-upgrade-sharded-cluster.txt | 7 + .../release-notes/3.4-upgrade-standalone.txt | 6 + source/release-notes/3.4.txt | 13 +- source/release-notes/3.6-changelog.txt | 7 + source/release-notes/3.6-compatibility.txt | 9 +- .../3.6-downgrade-replica-set.txt | 7 + .../3.6-downgrade-sharded-cluster.txt | 7 + .../3.6-downgrade-standalone.txt | 7 + .../release-notes/3.6-upgrade-replica-set.txt | 7 + .../3.6-upgrade-sharded-cluster.txt | 7 + .../release-notes/3.6-upgrade-standalone.txt | 6 + source/release-notes/3.6.txt | 9 +- source/release-notes/4.0-changelog.txt | 7 + source/release-notes/4.0-compatibility.txt | 16 +- .../4.0-downgrade-replica-set.txt | 7 + .../4.0-downgrade-sharded-cluster.txt | 7 + .../4.0-downgrade-standalone.txt | 7 + .../release-notes/4.0-upgrade-replica-set.txt | 7 + .../4.0-upgrade-sharded-cluster.txt | 7 + .../release-notes/4.0-upgrade-standalone.txt | 6 + source/release-notes/4.0.txt | 27 +- source/release-notes/4.2-changelog.txt | 7 + source/release-notes/4.2-compatibility.txt | 9 +- .../4.2-downgrade-replica-set.txt | 7 + .../4.2-downgrade-sharded-cluster.txt | 7 + .../4.2-downgrade-standalone.txt | 7 + source/release-notes/4.2-downgrade.txt | 7 + .../release-notes/4.2-upgrade-replica-set.txt | 7 + .../4.2-upgrade-sharded-cluster.txt | 7 + .../release-notes/4.2-upgrade-standalone.txt | 6 + source/release-notes/4.2.txt | 27 +- source/release-notes/4.4-changelog.txt | 4 + source/release-notes/4.4-compatibility.txt | 35 +- source/release-notes/4.4.txt | 58 +- source/release-notes/5.0-changelog.txt | 6 + source/release-notes/5.0-compatibility.txt | 14 +- .../5.0-downgrade-replica-set.txt | 2 + .../5.0-downgrade-sharded-cluster.txt | 2 + .../5.0-downgrade-standalone.txt | 2 + .../release-notes/5.0-upgrade-replica-set.txt | 6 +- .../5.0-upgrade-sharded-cluster.txt | 6 +- .../release-notes/5.0-upgrade-standalone.txt | 6 +- source/release-notes/5.0.txt | 173 +- source/release-notes/5.1-changelog.txt | 7 + source/release-notes/5.1-compatibility.txt | 9 +- source/release-notes/5.1.txt | 7 + source/release-notes/5.2-changelog.txt | 7 + source/release-notes/5.2-compatibility.txt | 7 + source/release-notes/5.2.txt | 7 + source/release-notes/5.3-changelog.txt | 9 +- source/release-notes/5.3-compatibility.txt | 7 + source/release-notes/5.3.txt | 9 + source/release-notes/6.0-changelog.txt | 6 + source/release-notes/6.0-compatibility.txt | 13 +- .../release-notes/6.0-upgrade-replica-set.txt | 2 +- .../6.0-upgrade-sharded-cluster.txt | 4 +- .../release-notes/6.0-upgrade-standalone.txt | 2 +- source/release-notes/6.0.txt | 97 +- source/release-notes/6.1-changelog.txt | 7 + source/release-notes/6.1-compatibility.txt | 7 + source/release-notes/6.1.txt | 15 +- source/release-notes/6.2-changelog.txt | 7 + source/release-notes/6.2-compatibility.txt | 7 + source/release-notes/6.2.txt | 7 + source/release-notes/6.3-changelog.txt | 7 + source/release-notes/6.3-compatibility.txt | 9 +- source/release-notes/6.3.txt | 7 + source/release-notes/7.0-changelog.txt | 8 + source/release-notes/7.0-compatibility.txt | 13 + source/release-notes/7.0.txt | 89 +- source/release-notes/7.1.txt | 1 + source/release-notes/7.2-changelog.txt | 16 + source/release-notes/7.2-compatibility.txt | 2 - source/release-notes/7.2.txt | 43 +- 
source/release-notes/7.3-changelog.txt | 15 + source/release-notes/7.3-compatibility.txt | 5 + source/release-notes/7.3.txt | 41 +- .../release-notes/drivers-write-concern.txt | 2 +- source/replication.txt | 44 +- source/security.txt | 3 + source/sharding.txt | 42 +- source/text-search.txt | 1 + source/tutorial.txt | 1 + .../adjust-replica-set-member-priority.txt | 11 +- source/tutorial/analyze-query-plan.txt | 35 +- ...uthenticate-nativeldap-activedirectory.txt | 5 +- source/tutorial/backup-and-restore-tools.txt | 23 +- ...up-sharded-cluster-with-database-dumps.txt | 10 + ...rded-cluster-with-filesystem-snapshots.txt | 215 +- .../backup-with-filesystem-snapshots.txt | 6 +- .../build-indexes-on-replica-sets.txt | 2 +- .../build-indexes-on-sharded-clusters.txt | 7 +- source/tutorial/change-oplog-size.txt | 12 +- .../change-replica-set-wiredtiger.txt | 2 +- source/tutorial/clear-jumbo-flag.txt | 3 +- .../configure-a-hidden-replica-set-member.txt | 11 +- ...figure-a-non-voting-replica-set-member.txt | 7 +- source/tutorial/configure-audit-filters.txt | 2 +- source/tutorial/configure-encryption.txt | 5 + .../configure-linux-iptables-firewall.txt | 2 +- source/tutorial/configure-ssl-clients.txt | 4 +- source/tutorial/configure-ssl.txt | 150 +- .../configure-windows-netsh-firewall.txt | 49 +- .../configure-x509-member-authentication.txt | 4 +- ...b-windows-with-kerberos-authentication.txt | 11 +- ...o-mongodb-with-kerberos-authentication.txt | 11 +- .../convert-standalone-to-replica-set.txt | 2 + ...create-queries-that-ensure-selectivity.txt | 174 +- source/tutorial/create-users.txt | 3 + source/tutorial/deploy-replica-set.txt | 8 + source/tutorial/deploy-shard-cluster.txt | 2 +- .../drop-a-hashed-shard-key-index.txt | 94 + source/tutorial/enable-authentication.txt | 3 + source/tutorial/equality-sort-range-rule.txt | 16 +- .../evaluate-operation-performance.txt | 4 +- source/tutorial/expand-replica-set.txt | 2 + source/tutorial/expire-data.txt | 66 +- source/tutorial/getting-started.txt | 8 +- source/tutorial/insert-documents.txt | 145 +- .../install-mongodb-enterprise-on-red-hat.txt | 1 + .../install-mongodb-enterprise-on-ubuntu.txt | 3 +- source/tutorial/install-mongodb-on-debian.txt | 1 + source/tutorial/install-mongodb-on-os-x.txt | 1 + .../install-mongodb-on-windows-unattended.txt | 1 + .../tutorial/install-mongodb-on-windows.txt | 1 + source/tutorial/iterate-a-cursor.txt | 4 +- .../kerberos-auth-activedirectory-authz.txt | 13 +- source/tutorial/manage-indexes.txt | 7 +- source/tutorial/manage-journaling.txt | 2 +- source/tutorial/manage-mongodb-processes.txt | 3 + source/tutorial/manage-shard-zone.txt | 5 + .../update-existing-shard-zone.txt | 106 + .../manage-sharded-cluster-balancer.txt | 52 +- .../tutorial/manage-the-database-profiler.txt | 165 +- source/tutorial/manage-users-and-roles.txt | 5 +- source/tutorial/map-reduce-examples.txt | 4 +- source/tutorial/model-computed-data.txt | 69 +- ...o-many-relationships-between-documents.txt | 91 + ...o-many-relationships-between-documents.txt | 3 + ...to-one-relationships-between-documents.txt | 3 + ...o-many-relationships-between-documents.txt | 3 + ...rformance-with-indexes-and-projections.txt | 2 + .../project-fields-from-query-results.txt | 173 +- source/tutorial/query-array-of-documents.txt | 4 +- source/tutorial/query-arrays.txt | 4 +- source/tutorial/query-documents.txt | 314 +- source/tutorial/query-embedded-documents.txt | 4 +- source/tutorial/query-for-null-fields.txt | 178 +- ...ver-data-following-unexpected-shutdown.txt | 3 +- 
source/tutorial/remove-documents.txt | 133 +- source/tutorial/remove-replica-set-member.txt | 11 +- source/tutorial/rotate-encryption-key.txt | 12 +- source/tutorial/rotate-log-files.txt | 7 + .../sharding-high-availability-writes.txt | 4 +- source/tutorial/sort-results-with-indexes.txt | 7 +- .../store-javascript-function-on-server.txt | 2 +- source/tutorial/troubleshoot-kerberos.txt | 7 +- source/tutorial/troubleshoot-map-function.txt | 13 - .../tutorial/troubleshoot-reduce-function.txt | 13 - source/tutorial/troubleshoot-replica-sets.txt | 2 +- .../troubleshoot-sharded-clusters.txt | 2 +- source/tutorial/update-documents.txt | 129 +- source/tutorial/upgrade-cluster-to-ssl.txt | 2 +- source/tutorial/upgrade-keyfile-to-x509.txt | 4 +- source/tutorial/upgrade-revision.txt | 224 + 1134 files changed, 24825 insertions(+), 29466 deletions(-) create mode 100644 .github/workflows/repo-sync.yml delete mode 100644 .mci.yml delete mode 100644 .tx/config create mode 100644 repo_sync.py create mode 100644 source/administration/diagnose-query-performance.txt create mode 100644 source/core/capped-collections/change-max-docs-capped-collection.txt create mode 100644 source/core/capped-collections/change-size-capped-collection.txt create mode 100644 source/core/capped-collections/check-if-collection-is-capped.txt create mode 100644 source/core/capped-collections/convert-collection-to-capped.txt create mode 100644 source/core/capped-collections/create-capped-collection.txt create mode 100644 source/core/capped-collections/query-capped-collection.txt delete mode 100644 source/core/csfle/fundamentals/keys-key-vaults.txt delete mode 100644 source/core/csfle/reference/compatibility.txt delete mode 100644 source/core/csfle/reference/kms-providers.txt create mode 100644 source/core/dot-dollar-considerations/dollar-prefix.txt create mode 100644 source/core/dot-dollar-considerations/periods.txt create mode 100644 source/core/index-unique/convert-to-unique.txt create mode 100644 source/core/indexes/index-types/index-single/create-embedded-object-index.txt create mode 100644 source/core/queryable-encryption/about-qe-csfle.txt create mode 100644 source/core/queryable-encryption/fundamentals/enable-qe.txt create mode 100644 source/core/queryable-encryption/install-library.txt create mode 100644 source/core/queryable-encryption/overview-enable-qe.txt create mode 100644 source/core/queryable-encryption/overview-use-qe.txt create mode 100644 source/core/queryable-encryption/qe-create-application.txt create mode 100644 source/core/queryable-encryption/qe-create-cmk.txt create mode 100644 source/core/queryable-encryption/qe-create-encrypted-collection.txt create mode 100644 source/core/queryable-encryption/qe-create-encryption-schema.txt create mode 100644 source/core/queryable-encryption/qe-retrieve-encrypted-document.txt delete mode 100644 source/core/queryable-encryption/reference/mongocryptd.txt delete mode 100644 source/core/queryable-encryption/reference/shared-library.txt delete mode 100644 source/core/queryable-encryption/tutorials/aws/aws-automatic.txt delete mode 100644 source/core/queryable-encryption/tutorials/azure/azure-automatic.txt delete mode 100644 source/core/queryable-encryption/tutorials/gcp/gcp-automatic.txt delete mode 100644 source/core/queryable-encryption/tutorials/kmip/kmip-automatic.txt create mode 100644 source/includes/aggregation/convert-to-bool-table.rst create mode 100644 source/includes/atlas-nav/steps.rst create mode 100644 source/includes/atlas-search-commands/field-definitions/type.rst 
create mode 100644 source/includes/atlas-search-commands/vector-search-index-definition-fields.rst create mode 100644 source/includes/capped-collections/concurrent-writes.rst create mode 100644 source/includes/capped-collections/query-natural-order.rst create mode 100644 source/includes/capped-collections/use-ttl-index.rst create mode 100644 source/includes/changelogs/releases/4.4.28.rst create mode 100644 source/includes/changelogs/releases/4.4.29.rst create mode 100644 source/includes/changelogs/releases/5.0.24.rst create mode 100644 source/includes/changelogs/releases/5.0.25.rst create mode 100644 source/includes/changelogs/releases/5.0.26.rst create mode 100644 source/includes/changelogs/releases/6.0.13.rst create mode 100644 source/includes/changelogs/releases/6.0.14.rst create mode 100644 source/includes/changelogs/releases/6.0.15.rst create mode 100644 source/includes/changelogs/releases/7.0.6.rst create mode 100644 source/includes/changelogs/releases/7.0.7.rst create mode 100644 source/includes/changelogs/releases/7.0.8.rst create mode 100644 source/includes/changelogs/releases/7.0.9.rst create mode 100644 source/includes/changelogs/releases/7.2.1.rst create mode 100644 source/includes/changelogs/releases/7.2.2.rst create mode 100644 source/includes/changelogs/releases/7.3.1.rst create mode 100644 source/includes/checkpoints.rst create mode 100644 source/includes/client-sessions-reuse.rst create mode 100644 source/includes/comment-option-getMore-inheritance.rst create mode 100644 source/includes/connection-pool/max-connecting-use-case.rst delete mode 100644 source/includes/csfle-warning-local-keys.rst create mode 100644 source/includes/driver-examples/driver-example-c-cleanup.rst create mode 100644 source/includes/driver-examples/driver-example-query-intro-no-perl.rst create mode 100644 source/includes/drop-hashed-shard-key-index-main.rst create mode 100644 source/includes/drop-hashed-shard-key-index.rst create mode 100644 source/includes/enable-KMIP-on-windows.rst create mode 100644 source/includes/explain-ignores-cache-plan.rst create mode 100644 source/includes/fact-5.0-read-concern-latency.rst create mode 100644 source/includes/fact-7.3-singlebatch-cursor.rst create mode 100644 source/includes/fact-bulk-writeConcernError-mongos.rst delete mode 100644 source/includes/fact-csfle-compatibility-drivers.rst create mode 100644 source/includes/fact-dynamic-concurrency.rst create mode 100644 source/includes/fact-enforce-user-cluster-separation-parameter.rst create mode 100644 source/includes/fact-environments-atlas-support-limited-free-and-m10.rst create mode 100644 source/includes/fact-hidden-indexes.rst delete mode 100644 source/includes/fact-manual-enc-definition.rst create mode 100644 source/includes/fact-mapreduce-deprecated-bson.rst create mode 100644 source/includes/fact-mongokerberos.rst delete mode 100644 source/includes/fact-mws-intro.rst delete mode 100644 source/includes/fact-mws.rst delete mode 100644 source/includes/fact-qe-csfle-contention.rst create mode 100644 source/includes/fact-runtime-parameter.rst create mode 100644 source/includes/fact-runtime-startup-parameter.rst create mode 100644 source/includes/fact-ssl-tlsCAFile-tlsUseSystemCA.rst create mode 100644 source/includes/fact-stable-api-explain.rst create mode 100644 source/includes/fact-startup-parameter.rst create mode 100644 source/includes/fact-ulimit-minimum.rst create mode 100644 source/includes/fact-writeConcernError-mongos.rst create mode 100644 source/includes/find-options-values-table.rst create mode 
100644 source/includes/fsync-mongos.rst rename source/includes/in-use-encryption/{admonition-csfle-key-rotation.txt => admonition-csfle-key-rotation.rst} (64%) create mode 100644 source/includes/indexes/case-insensitive-regex-queries.rst create mode 100644 source/includes/indexes/embedded-object-need-entire-doc.rst create mode 100644 source/includes/ldap-srv-details.rst create mode 100644 source/includes/negative-dividend.rst create mode 100644 source/includes/noCursorTimeoutNote.rst create mode 100644 source/includes/note-key-vault-permissions.rst rename source/includes/qe-tutorials/python/{requirements.txt => requirements.rst} (100%) create mode 100644 source/includes/qe-tutorials/qe-quick-start.rst create mode 100644 source/includes/queryable-encryption/csfle-driver-tutorial-table.rst rename source/includes/{ => queryable-encryption}/example-qe-csfle-contention.rst (65%) create mode 100644 source/includes/queryable-encryption/qe-csfle-contention.rst create mode 100644 source/includes/queryable-encryption/qe-csfle-manual-enc-overview.rst create mode 100644 source/includes/queryable-encryption/qe-csfle-partial-filter-disclaimer.rst create mode 100644 source/includes/queryable-encryption/qe-csfle-schema-validation.rst create mode 100644 source/includes/queryable-encryption/qe-csfle-setting-contention.rst create mode 100644 source/includes/queryable-encryption/qe-csfle-warning-local-keys.rst create mode 100644 source/includes/queryable-encryption/qe-enable-qe-at-collection-creation.rst create mode 100644 source/includes/queryable-encryption/qe-explicitly-create-collection.rst delete mode 100644 source/includes/queryable-encryption/set-up/csharp.rst delete mode 100644 source/includes/queryable-encryption/set-up/go.rst delete mode 100644 source/includes/queryable-encryption/set-up/java.rst delete mode 100644 source/includes/queryable-encryption/set-up/node.rst delete mode 100644 source/includes/queryable-encryption/set-up/python.rst create mode 100644 source/includes/queryable-encryption/tutorials/assign-app-variables.rst delete mode 100644 source/includes/reference/kms-providers/aws.rst delete mode 100644 source/includes/reference/kms-providers/azure.rst delete mode 100644 source/includes/reference/kms-providers/cmk-note.rst delete mode 100644 source/includes/reference/kms-providers/gcp.rst delete mode 100644 source/includes/reference/kms-providers/kmip.rst delete mode 100644 source/includes/reference/kms-providers/local.rst create mode 100644 source/includes/reference/oplog-size-setting-intro.rst create mode 100644 source/includes/replica-set-nodes-cannot-be-shared.rst create mode 100644 source/includes/replication/note-replica-set-major-versions.rst create mode 100644 source/includes/security/block-revoked-certificates-intro.rst create mode 100644 source/includes/security/cve-2024-1351-info.rst create mode 100644 source/includes/shardedDataDistribution-orphaned-docs.rst create mode 100644 source/includes/shardedDataDistribution-output-example.rst delete mode 100644 source/includes/steps-backup-sharded-cluster-with-snapshots.yaml create mode 100644 source/includes/unreachable-node-default-quorum-index-builds.rst create mode 100644 source/includes/use-expr-in-find-query.rst delete mode 100644 source/reference/command/medianKey.txt create mode 100644 source/reference/method/BSONRegExp.txt create mode 100644 source/reference/method/Mongo.getURI.txt create mode 100644 source/reference/method/js-atlas-streams.txt create mode 100644 source/reference/method/sp.createStreamProcessor.txt create mode 
100644 source/reference/method/sp.listStreamProcessors.txt create mode 100644 source/reference/method/sp.process.txt create mode 100644 source/reference/method/sp.processor.drop.txt create mode 100644 source/reference/method/sp.processor.sample.txt create mode 100644 source/reference/method/sp.processor.start.txt create mode 100644 source/reference/method/sp.processor.stats.txt create mode 100644 source/reference/method/sp.processor.stop.txt delete mode 100644 source/reference/operator/aggregation/firstN-array-element.txt delete mode 100644 source/reference/operator/aggregation/lastN-array-element.txt create mode 100644 source/reference/operator/aggregation/toHashedIndexKey.txt delete mode 100644 source/release-notes/1.2.txt delete mode 100644 source/release-notes/1.4.txt delete mode 100644 source/release-notes/1.6.txt delete mode 100644 source/release-notes/1.8.txt delete mode 100644 source/release-notes/2.0.txt delete mode 100644 source/release-notes/2.2.txt delete mode 100644 source/release-notes/2.4-changelog.txt delete mode 100644 source/release-notes/2.4-index-types.txt delete mode 100644 source/release-notes/2.4-javascript.txt delete mode 100644 source/release-notes/2.4-upgrade.txt delete mode 100644 source/release-notes/2.4.txt delete mode 100644 source/release-notes/2.6-changelog.txt delete mode 100644 source/release-notes/2.6-compatibility.txt delete mode 100644 source/release-notes/2.6-downgrade.txt delete mode 100644 source/release-notes/2.6-upgrade-authorization.txt delete mode 100644 source/release-notes/2.6-upgrade.txt delete mode 100644 source/release-notes/2.6.txt delete mode 100644 source/release-notes/3.0-changelog.txt delete mode 100644 source/release-notes/3.0-compatibility.txt delete mode 100644 source/release-notes/3.0-downgrade.txt delete mode 100644 source/release-notes/3.0-scram.txt delete mode 100644 source/release-notes/3.0-upgrade.txt delete mode 100644 source/release-notes/3.0.txt create mode 100644 source/release-notes/7.2-changelog.txt create mode 100644 source/release-notes/7.3-changelog.txt create mode 100644 source/tutorial/drop-a-hashed-shard-key-index.txt create mode 100644 source/tutorial/manage-shard-zone/update-existing-shard-zone.txt create mode 100644 source/tutorial/model-embedded-many-to-many-relationships-between-documents.txt create mode 100644 source/tutorial/upgrade-revision.txt diff --git a/.github/workflows/repo-sync.yml b/.github/workflows/repo-sync.yml new file mode 100644 index 00000000000..1937d1f0c2d --- /dev/null +++ b/.github/workflows/repo-sync.yml @@ -0,0 +1,38 @@ +name: Repo Sync + +on: + push: + branches: + - master + - v7.3 + - v7.2 + - v7.0 + - v6.0 + - v5.0 + +permissions: + contents: write + +jobs: + deploy: + runs-on: ubuntu-latest + environment: Copy To Public + steps: + - uses: actions/checkout@v3 + with: + fetch-depth: 0 + - name: Set up Python 3.12 + uses: actions/setup-python@v3 + with: + python-version: "3.12" + - name: Install dependencies + run: | + python3 -m pip install --upgrade pip + python3 -m pip install -r requirements.txt + - name: Publish Branch + run: | + python3 repo_sync.py + env: + APP_ID: ${{ vars.APP_ID }} + INSTALLATION_ID: ${{ vars.INSTALLATION_ID }} + SERVER_DOCS_PRIVATE_KEY: ${{ secrets.SERVER_DOCS_PRIVATE_KEY }} diff --git a/.gitignore b/.gitignore index c674fa83fcf..5a411ca004d 100644 --- a/.gitignore +++ b/.gitignore @@ -82,7 +82,11 @@ primer/source/includes/*.cs *.mo .stub primer/source/includes/table-linux-kernel-version-production.yaml -venv .vscode changelogs/.mongodb-jira.yaml 
-source/includes/qe-tutorials/csharp/obj/Debug/
\ No newline at end of file
+source/includes/qe-tutorials/csharp/obj/
+source/includes/sdk/go/markdown/
+source/includes/mongosql/markdown/
+
+# ignore python venv
+.venv
diff --git a/.mci.yml b/.mci.yml
deleted file mode 100644
index d56373a882f..00000000000
--- a/.mci.yml
+++ /dev/null
@@ -1,51 +0,0 @@
-pre:
-  - command: git.get_project
-    params:
-      directory: "docs-mongodb"
-  - command: git.apply_patch
-    params:
-      directory: "docs-mongodb"
-  - command: shell.exec
-    params:
-      working_dir: "docs-mongodb"
-      script: |
-        rm -rf ~/venv
-
-        virtualenv ~/venv
-        ${venv}/pip install -r requirements.txt
-
-        # make the current branch always be master.
-        git branch -D master || true
-        git checkout -b master origin/master
-
-tasks:
-  - name: "build_manual"
-    commands:
-      - command: shell.exec
-        params:
-          working_dir: "docs-mongodb"
-          script: |
-            . ${venv}/activate
-
-            giza generate source
-            giza sphinx --builder publish --serial_sphinx
-      - command: shell.exec
-        params:
-          working_dir: "docs-mongodb"
-          script: |
-            . ${venv}/activate
-
-            giza env package --builder publish
-            giza packaging create --target push
-
-            # TODO: deploy build/archive/* to s3
-
-buildvariants:
-  - name: ubuntu1404-release
-    display_name: "Ubuntu 14.04"
-    run_on:
-      - ubuntu1404-test
-    expansions:
-      venv: "~/venv/bin"
-    tasks:
-      - name: "build_manual"
diff --git a/.tx/config b/.tx/config
deleted file mode 100644
index dace251cd17..00000000000
--- a/.tx/config
+++ /dev/null
@@ -1,5244 +0,0 @@
-[main]
-host = https://www.transifex.com
-type = PO
-
-[mongodb-manual.installation]
-file_filter = locale//LC_MESSAGES/installation.po
-source_file = locale/pot/installation.pot
-source_lang = en
-
-[mongodb-manual.about]
-file_filter = locale//LC_MESSAGES/about.po
-source_file = locale/pot/about.pot
-source_lang = en
-
-[mongodb-manual.data-center-awareness]
-file_filter = locale//LC_MESSAGES/data-center-awareness.po
-source_file = locale/pot/data-center-awareness.pot
-source_lang = en
-
-[mongodb-manual.administration]
-file_filter = locale//LC_MESSAGES/administration.po
-source_file = locale/pot/administration.pot
-source_lang = en
-
-[mongodb-manual.indexes]
-file_filter = locale//LC_MESSAGES/indexes.po
-source_file = locale/pot/indexes.pot
-source_lang = en
-
-[mongodb-manual.faq]
-file_filter = locale//LC_MESSAGES/faq.po
-source_file = locale/pot/faq.pot
-source_lang = en
-
-[mongodb-manual.contents]
-file_filter = locale//LC_MESSAGES/contents.po
-source_file = locale/pot/contents.pot
-source_lang = en
-
-[mongodb-manual.release-notes]
-file_filter = locale//LC_MESSAGES/release-notes.po
-source_file = locale/pot/release-notes.pot
-source_lang = en
-
-[mongodb-manual.tutorial]
-file_filter = locale//LC_MESSAGES/tutorial.po
-source_file = locale/pot/tutorial.pot
-source_lang = en
-
-[mongodb-manual.security]
-file_filter = locale//LC_MESSAGES/security.po
-source_file = locale/pot/security.pot
-source_lang = en
-
-[mongodb-manual.reference]
-file_filter = locale//LC_MESSAGES/reference.po
-source_file = locale/pot/reference.pot
-source_lang = en
-
-[mongodb-manual.sharding]
-file_filter = locale//LC_MESSAGES/sharding.po
-source_file = locale/pot/sharding.pot
-source_lang = en
-
-[mongodb-manual.crud]
-file_filter = locale//LC_MESSAGES/crud.po
-source_file = locale/pot/crud.pot
-source_lang = en
-
-[mongodb-manual.data-modeling]
-file_filter = locale//LC_MESSAGES/data-modeling.po
-source_file = locale/pot/data-modeling.pot
-source_lang = en
-
-[mongodb-manual.replication]
-file_filter = 
locale//LC_MESSAGES/replication.po -source_file = locale/pot/replication.pot -source_lang = en - -[mongodb-manual.index] -file_filter = locale//LC_MESSAGES/index.po -source_file = locale/pot/index.pot -source_lang = en - -[mongodb-manual.aggregation] -file_filter = locale//LC_MESSAGES/aggregation.po -source_file = locale/pot/aggregation.pot -source_lang = en - -[mongodb-manual.faq--replica-sets] -file_filter = locale//LC_MESSAGES/faq/replica-sets.po -source_file = locale/pot/faq/replica-sets.pot -source_lang = en - -[mongodb-manual.faq--fundamentals] -file_filter = locale//LC_MESSAGES/faq/fundamentals.po -source_file = locale/pot/faq/fundamentals.pot -source_lang = en - -[mongodb-manual.faq--indexes] -file_filter = locale//LC_MESSAGES/faq/indexes.po -source_file = locale/pot/faq/indexes.pot -source_lang = en - -[mongodb-manual.faq--storage] -file_filter = locale//LC_MESSAGES/faq/storage.po -source_file = locale/pot/faq/storage.pot -source_lang = en - -[mongodb-manual.faq--diagnostics] -file_filter = locale//LC_MESSAGES/faq/diagnostics.po -source_file = locale/pot/faq/diagnostics.pot -source_lang = en - -[mongodb-manual.faq--mongo] -file_filter = locale//LC_MESSAGES/faq/mongo.po -source_file = locale/pot/faq/mongo.pot -source_lang = en - -[mongodb-manual.faq--concurrency] -file_filter = locale//LC_MESSAGES/faq/concurrency.po -source_file = locale/pot/faq/concurrency.pot -source_lang = en - -[mongodb-manual.faq--sharding] -file_filter = locale//LC_MESSAGES/faq/sharding.po -source_file = locale/pot/faq/sharding.pot -source_lang = en - -[mongodb-manual.faq--developers] -file_filter = locale//LC_MESSAGES/faq/developers.po -source_file = locale/pot/faq/developers.pot -source_lang = en - -[mongodb-manual.applications--data-models-applications] -file_filter = locale//LC_MESSAGES/applications/data-models-applications.po -source_file = locale/pot/applications/data-models-applications.pot -source_lang = en - -[mongodb-manual.applications--indexes] -file_filter = locale//LC_MESSAGES/applications/indexes.po -source_file = locale/pot/applications/indexes.pot -source_lang = en - -[mongodb-manual.applications--data-models-tree-structures] -file_filter = locale//LC_MESSAGES/applications/data-models-tree-structures.po -source_file = locale/pot/applications/data-models-tree-structures.pot -source_lang = en - -[mongodb-manual.applications--drivers] -file_filter = locale//LC_MESSAGES/applications/drivers.po -source_file = locale/pot/applications/drivers.pot -source_lang = en - -[mongodb-manual.applications--crud] -file_filter = locale//LC_MESSAGES/applications/crud.po -source_file = locale/pot/applications/crud.pot -source_lang = en - -[mongodb-manual.applications--design-notes] -file_filter = locale//LC_MESSAGES/applications/design-notes.po -source_file = locale/pot/applications/design-notes.pot -source_lang = en - -[mongodb-manual.applications--data-models] -file_filter = locale//LC_MESSAGES/applications/data-models.po -source_file = locale/pot/applications/data-models.pot -source_lang = en - -[mongodb-manual.applications--geospatial-indexes] -file_filter = locale//LC_MESSAGES/applications/geospatial-indexes.po -source_file = locale/pot/applications/geospatial-indexes.pot -source_lang = en - -[mongodb-manual.applications--replication] -file_filter = locale//LC_MESSAGES/applications/replication.po -source_file = locale/pot/applications/replication.pot -source_lang = en - -[mongodb-manual.applications--aggregation] -file_filter = locale//LC_MESSAGES/applications/aggregation.po -source_file = 
locale/pot/applications/aggregation.pot -source_lang = en - -[mongodb-manual.applications--data-models-relationships] -file_filter = locale//LC_MESSAGES/applications/data-models-relationships.po -source_file = locale/pot/applications/data-models-relationships.pot -source_lang = en - -[mongodb-manual.release-notes--2_6-changes] -file_filter = locale//LC_MESSAGES/release-notes/2.6-changes.po -source_file = locale/pot/release-notes/2.6-changes.pot -source_lang = en - -[mongodb-manual.release-notes--1_4-changes] -file_filter = locale//LC_MESSAGES/release-notes/1.4-changes.po -source_file = locale/pot/release-notes/1.4-changes.pot -source_lang = en - -[mongodb-manual.release-notes--1_8] -file_filter = locale//LC_MESSAGES/release-notes/1.8.po -source_file = locale/pot/release-notes/1.8.pot -source_lang = en - -[mongodb-manual.release-notes--2_6-upgrade] -file_filter = locale//LC_MESSAGES/release-notes/2.6-upgrade.po -source_file = locale/pot/release-notes/2.6-upgrade.pot -source_lang = en - -[mongodb-manual.release-notes--replica-set-features] -file_filter = locale//LC_MESSAGES/release-notes/replica-set-features.po -source_file = locale/pot/release-notes/replica-set-features.pot -source_lang = en - -[mongodb-manual.release-notes--1_2-changes] -file_filter = locale//LC_MESSAGES/release-notes/1.2-changes.po -source_file = locale/pot/release-notes/1.2-changes.pot -source_lang = en - -[mongodb-manual.release-notes--2_2] -file_filter = locale//LC_MESSAGES/release-notes/2.2.po -source_file = locale/pot/release-notes/2.2.pot -source_lang = en - -[mongodb-manual.release-notes--drivers-write-concern] -file_filter = locale//LC_MESSAGES/release-notes/drivers-write-concern.po -source_file = locale/pot/release-notes/drivers-write-concern.pot -source_lang = en - -[mongodb-manual.release-notes--2_0] -file_filter = locale//LC_MESSAGES/release-notes/2.0.po -source_file = locale/pot/release-notes/2.0.pot -source_lang = en - -[mongodb-manual.release-notes--1_2] -file_filter = locale//LC_MESSAGES/release-notes/1.2.po -source_file = locale/pot/release-notes/1.2.pot -source_lang = en - -[mongodb-manual.release-notes--security] -file_filter = locale//LC_MESSAGES/release-notes/security.po -source_file = locale/pot/release-notes/security.pot -source_lang = en - -[mongodb-manual.release-notes--2_6] -file_filter = locale//LC_MESSAGES/release-notes/2.6.po -source_file = locale/pot/release-notes/2.6.pot -source_lang = en - -[mongodb-manual.release-notes--1_6-changes] -file_filter = locale//LC_MESSAGES/release-notes/1.6-changes.po -source_file = locale/pot/release-notes/1.6-changes.pot -source_lang = en - -[mongodb-manual.release-notes--2_4] -file_filter = locale//LC_MESSAGES/release-notes/2.4.po -source_file = locale/pot/release-notes/2.4.pot -source_lang = en - -[mongodb-manual.release-notes--1_8-changes] -file_filter = locale//LC_MESSAGES/release-notes/1.8-changes.po -source_file = locale/pot/release-notes/1.8-changes.pot -source_lang = en - -[mongodb-manual.release-notes--1_4] -file_filter = locale//LC_MESSAGES/release-notes/1.4.po -source_file = locale/pot/release-notes/1.4.pot -source_lang = en - -[mongodb-manual.release-notes--2_2-changes] -file_filter = locale//LC_MESSAGES/release-notes/2.2-changes.po -source_file = locale/pot/release-notes/2.2-changes.pot -source_lang = en - -[mongodb-manual.release-notes--2_0-changes] -file_filter = locale//LC_MESSAGES/release-notes/2.0-changes.po -source_file = locale/pot/release-notes/2.0-changes.pot -source_lang = en - -[mongodb-manual.release-notes--1_6] -file_filter = 
locale//LC_MESSAGES/release-notes/1.6.po -source_file = locale/pot/release-notes/1.6.pot -source_lang = en - -[mongodb-manual.release-notes--2_4-javascript] -file_filter = locale//LC_MESSAGES/release-notes/2.4-javascript.po -source_file = locale/pot/release-notes/2.4-javascript.pot -source_lang = en - -[mongodb-manual.release-notes--2_4-upgrade] -file_filter = locale//LC_MESSAGES/release-notes/2.4-upgrade.po -source_file = locale/pot/release-notes/2.4-upgrade.pot -source_lang = en - -[mongodb-manual.release-notes--2_4-index-types] -file_filter = locale//LC_MESSAGES/release-notes/2.4-index-types.po -source_file = locale/pot/release-notes/2.4-index-types.pot -source_lang = en - -[mongodb-manual.release-notes--2_4-changes] -file_filter = locale//LC_MESSAGES/release-notes/2.4-changes.po -source_file = locale/pot/release-notes/2.4-changes.pot -source_lang = en - -[mongodb-manual.administration--indexes-geo] -file_filter = locale//LC_MESSAGES/administration/indexes-geo.po -source_file = locale/pot/administration/indexes-geo.pot -source_lang = en - -[mongodb-manual.administration--replica-sets] -file_filter = locale//LC_MESSAGES/administration/replica-sets.po -source_file = locale/pot/administration/replica-sets.pot -source_lang = en - -[mongodb-manual.administration--sharded-cluster-maintenance] -file_filter = locale//LC_MESSAGES/administration/sharded-cluster-maintenance.po -source_file = locale/pot/administration/sharded-cluster-maintenance.pot -source_lang = en - -[mongodb-manual.administration--indexes] -file_filter = locale//LC_MESSAGES/administration/indexes.po -source_file = locale/pot/administration/indexes.pot -source_lang = en - -[mongodb-manual.administration--monitoring] -file_filter = locale//LC_MESSAGES/administration/monitoring.po -source_file = locale/pot/administration/monitoring.pot -source_lang = en - -[mongodb-manual.administration--tutorials] -file_filter = locale//LC_MESSAGES/administration/tutorials.po -source_file = locale/pot/administration/tutorials.pot -source_lang = en - -[mongodb-manual.administration--scripting] -file_filter = locale//LC_MESSAGES/administration/scripting.po -source_file = locale/pot/administration/scripting.pot -source_lang = en - -[mongodb-manual.administration--indexes-creation] -file_filter = locale//LC_MESSAGES/administration/indexes-creation.po -source_file = locale/pot/administration/indexes-creation.pot -source_lang = en - -[mongodb-manual.administration--production-notes] -file_filter = locale//LC_MESSAGES/administration/production-notes.po -source_file = locale/pot/administration/production-notes.pot -source_lang = en - -[mongodb-manual.administration--strategy] -file_filter = locale//LC_MESSAGES/administration/strategy.po -source_file = locale/pot/administration/strategy.pot -source_lang = en - -[mongodb-manual.administration--security] -file_filter = locale//LC_MESSAGES/administration/security.po -source_file = locale/pot/administration/security.pot -source_lang = en - -[mongodb-manual.administration--backup-sharded-clusters] -file_filter = locale//LC_MESSAGES/administration/backup-sharded-clusters.po -source_file = locale/pot/administration/backup-sharded-clusters.pot -source_lang = en - -[mongodb-manual.administration--sharded-cluster-data] -file_filter = locale//LC_MESSAGES/administration/sharded-cluster-data.po -source_file = locale/pot/administration/sharded-cluster-data.pot -source_lang = en - -[mongodb-manual.administration--data-management] -file_filter = locale//LC_MESSAGES/administration/data-management.po -source_file = 
locale/pot/administration/data-management.pot -source_lang = en - -[mongodb-manual.administration--indexes-text] -file_filter = locale//LC_MESSAGES/administration/indexes-text.po -source_file = locale/pot/administration/indexes-text.pot -source_lang = en - -[mongodb-manual.administration--install-on-linux] -file_filter = locale//LC_MESSAGES/administration/install-on-linux.po -source_file = locale/pot/administration/install-on-linux.pot -source_lang = en - -[mongodb-manual.administration--indexes-management] -file_filter = locale//LC_MESSAGES/administration/indexes-management.po -source_file = locale/pot/administration/indexes-management.pot -source_lang = en - -[mongodb-manual.administration--configuration] -file_filter = locale//LC_MESSAGES/administration/configuration.po -source_file = locale/pot/administration/configuration.pot -source_lang = en - -[mongodb-manual.administration--sharded-clusters] -file_filter = locale//LC_MESSAGES/administration/sharded-clusters.po -source_file = locale/pot/administration/sharded-clusters.pot -source_lang = en - -[mongodb-manual.administration--sharded-cluster-deployment] -file_filter = locale//LC_MESSAGES/administration/sharded-cluster-deployment.po -source_file = locale/pot/administration/sharded-cluster-deployment.pot -source_lang = en - -[mongodb-manual.administration--security-access-control] -file_filter = locale//LC_MESSAGES/administration/security-access-control.po -source_file = locale/pot/administration/security-access-control.pot -source_lang = en - -[mongodb-manual.administration--optimization] -file_filter = locale//LC_MESSAGES/administration/optimization.po -source_file = locale/pot/administration/optimization.pot -source_lang = en - -[mongodb-manual.administration--security-network] -file_filter = locale//LC_MESSAGES/administration/security-network.po -source_file = locale/pot/administration/security-network.pot -source_lang = en - -[mongodb-manual.administration--backup] -file_filter = locale//LC_MESSAGES/administration/backup.po -source_file = locale/pot/administration/backup.pot -source_lang = en - -[mongodb-manual.administration--replica-set-maintenance] -file_filter = locale//LC_MESSAGES/administration/replica-set-maintenance.po -source_file = locale/pot/administration/replica-set-maintenance.pot -source_lang = en - -[mongodb-manual.administration--maintenance] -file_filter = locale//LC_MESSAGES/administration/maintenance.po -source_file = locale/pot/administration/maintenance.pot -source_lang = en - -[mongodb-manual.administration--replica-set-deployment] -file_filter = locale//LC_MESSAGES/administration/replica-set-deployment.po -source_file = locale/pot/administration/replica-set-deployment.pot -source_lang = en - -[mongodb-manual.administration--replica-set-member-configuration] -file_filter = locale//LC_MESSAGES/administration/replica-set-member-configuration.po -source_file = locale/pot/administration/replica-set-member-configuration.pot -source_lang = en - -[mongodb-manual.tutorial--create-an-index] -file_filter = locale//LC_MESSAGES/tutorial/create-an-index.po -source_file = locale/pot/tutorial/create-an-index.pot -source_lang = en - -[mongodb-manual.tutorial--remove-documents] -file_filter = locale//LC_MESSAGES/tutorial/remove-documents.po -source_file = locale/pot/tutorial/remove-documents.pot -source_lang = en - -[mongodb-manual.tutorial--configure-sharded-cluster-balancer] -file_filter = locale//LC_MESSAGES/tutorial/configure-sharded-cluster-balancer.po -source_file = 
locale/pot/tutorial/configure-sharded-cluster-balancer.pot -source_lang = en - -[mongodb-manual.tutorial--create-indexes-to-support-queries] -file_filter = locale//LC_MESSAGES/tutorial/create-indexes-to-support-queries.po -source_file = locale/pot/tutorial/create-indexes-to-support-queries.pot -source_lang = en - -[mongodb-manual.tutorial--generate-test-data] -file_filter = locale//LC_MESSAGES/tutorial/generate-test-data.po -source_file = locale/pot/tutorial/generate-test-data.pot -source_lang = en - -[mongodb-manual.tutorial--roll-back-to-v1_8-index] -file_filter = locale//LC_MESSAGES/tutorial/roll-back-to-v1.8-index.po -source_file = locale/pot/tutorial/roll-back-to-v1.8-index.pot -source_lang = en - -[mongodb-manual.tutorial--manage-mongodb-processes] -file_filter = locale//LC_MESSAGES/tutorial/manage-mongodb-processes.po -source_file = locale/pot/tutorial/manage-mongodb-processes.pot -source_lang = en - -[mongodb-manual.tutorial--limit-number-of-items-scanned-for-text-search] -file_filter = locale//LC_MESSAGES/tutorial/limit-number-of-items-scanned-for-text-search.po -source_file = locale/pot/tutorial/limit-number-of-items-scanned-for-text-search.pot -source_lang = en - -[mongodb-manual.tutorial--query-a-geohaystack-index] -file_filter = locale//LC_MESSAGES/tutorial/query-a-geohaystack-index.po -source_file = locale/pot/tutorial/query-a-geohaystack-index.pot -source_lang = en - -[mongodb-manual.tutorial--install-mongodb-on-linux] -file_filter = locale//LC_MESSAGES/tutorial/install-mongodb-on-linux.po -source_file = locale/pot/tutorial/install-mongodb-on-linux.pot -source_lang = en - -[mongodb-manual.tutorial--create-text-index-on-multiple-fields] -file_filter = locale//LC_MESSAGES/tutorial/create-text-index-on-multiple-fields.po -source_file = locale/pot/tutorial/create-text-index-on-multiple-fields.pot -source_lang = en - -[mongodb-manual.tutorial--replace-config-server] -file_filter = locale//LC_MESSAGES/tutorial/replace-config-server.po -source_file = locale/pot/tutorial/replace-config-server.pot -source_lang = en - -[mongodb-manual.tutorial--define-roles] -file_filter = locale//LC_MESSAGES/tutorial/define-roles.po -source_file = locale/pot/tutorial/define-roles.pot -source_lang = en - -[mongodb-manual.tutorial--create-an-auto-incrementing-field] -file_filter = locale//LC_MESSAGES/tutorial/create-an-auto-incrementing-field.po -source_file = locale/pot/tutorial/create-an-auto-incrementing-field.pot -source_lang = en - -[mongodb-manual.tutorial--ensure-indexes-fit-ram] -file_filter = locale//LC_MESSAGES/tutorial/ensure-indexes-fit-ram.po -source_file = locale/pot/tutorial/ensure-indexes-fit-ram.pot -source_lang = en - -[mongodb-manual.tutorial--split-chunks-in-sharded-cluster] -file_filter = locale//LC_MESSAGES/tutorial/split-chunks-in-sharded-cluster.po -source_file = locale/pot/tutorial/split-chunks-in-sharded-cluster.pot -source_lang = en - -[mongodb-manual.tutorial--migrate-config-servers-with-same-hostname] -file_filter = locale//LC_MESSAGES/tutorial/migrate-config-servers-with-same-hostname.po -source_file = locale/pot/tutorial/migrate-config-servers-with-same-hostname.pot -source_lang = en - -[mongodb-manual.tutorial--configure-linux-iptables-firewall] -file_filter = locale//LC_MESSAGES/tutorial/configure-linux-iptables-firewall.po -source_file = locale/pot/tutorial/configure-linux-iptables-firewall.pot -source_lang = en - -[mongodb-manual.tutorial--upgrade-revision] -file_filter = locale//LC_MESSAGES/tutorial/upgrade-revision.po -source_file = 
locale/pot/tutorial/upgrade-revision.pot -source_lang = en - -[mongodb-manual.tutorial--view-sharded-cluster-configuration] -file_filter = locale//LC_MESSAGES/tutorial/view-sharded-cluster-configuration.po -source_file = locale/pot/tutorial/view-sharded-cluster-configuration.pot -source_lang = en - -[mongodb-manual.tutorial--convert-replica-set-to-replicated-shard-cluster] -file_filter = locale//LC_MESSAGES/tutorial/convert-replica-set-to-replicated-shard-cluster.po -source_file = locale/pot/tutorial/convert-replica-set-to-replicated-shard-cluster.pot -source_lang = en - -[mongodb-manual.tutorial--shard-gridfs-data] -file_filter = locale//LC_MESSAGES/tutorial/shard-gridfs-data.po -source_file = locale/pot/tutorial/shard-gridfs-data.pot -source_lang = en - -[mongodb-manual.tutorial--create-a-hashed-index] -file_filter = locale//LC_MESSAGES/tutorial/create-a-hashed-index.po -source_file = locale/pot/tutorial/create-a-hashed-index.pot -source_lang = en - -[mongodb-manual.tutorial--model-data-for-atomic-operations] -file_filter = locale//LC_MESSAGES/tutorial/model-data-for-atomic-operations.po -source_file = locale/pot/tutorial/model-data-for-atomic-operations.pot -source_lang = en - -[mongodb-manual.tutorial--configure-a-delayed-replica-set-member] -file_filter = locale//LC_MESSAGES/tutorial/configure-a-delayed-replica-set-member.po -source_file = locale/pot/tutorial/configure-a-delayed-replica-set-member.pot -source_lang = en - -[mongodb-manual.tutorial--build-a-2dsphere-index] -file_filter = locale//LC_MESSAGES/tutorial/build-a-2dsphere-index.po -source_file = locale/pot/tutorial/build-a-2dsphere-index.pot -source_lang = en - -[mongodb-manual.tutorial--perform-two-phase-commits] -file_filter = locale//LC_MESSAGES/tutorial/perform-two-phase-commits.po -source_file = locale/pot/tutorial/perform-two-phase-commits.pot -source_lang = en - -[mongodb-manual.tutorial--query-documents] -file_filter = locale//LC_MESSAGES/tutorial/query-documents.po -source_file = locale/pot/tutorial/query-documents.pot -source_lang = en - -[mongodb-manual.tutorial--add-shards-to-shard-cluster] -file_filter = locale//LC_MESSAGES/tutorial/add-shards-to-shard-cluster.po -source_file = locale/pot/tutorial/add-shards-to-shard-cluster.pot -source_lang = en - -[mongodb-manual.tutorial--query-a-2dsphere-index] -file_filter = locale//LC_MESSAGES/tutorial/query-a-2dsphere-index.po -source_file = locale/pot/tutorial/query-a-2dsphere-index.pot -source_lang = en - -[mongodb-manual.tutorial--configure-auditing] -file_filter = locale//LC_MESSAGES/tutorial/configure-auditing.po -source_file = locale/pot/tutorial/configure-auditing.pot -source_lang = en - -[mongodb-manual.tutorial--analyze-query-plan] -file_filter = locale//LC_MESSAGES/tutorial/analyze-query-plan.po -source_file = locale/pot/tutorial/analyze-query-plan.pot -source_lang = en - -[mongodb-manual.tutorial--use-database-commands] -file_filter = locale//LC_MESSAGES/tutorial/use-database-commands.po -source_file = locale/pot/tutorial/use-database-commands.pot -source_lang = en - -[mongodb-manual.tutorial--manage-chained-replication] -file_filter = locale//LC_MESSAGES/tutorial/manage-chained-replication.po -source_file = locale/pot/tutorial/manage-chained-replication.pot -source_lang = en - -[mongodb-manual.tutorial--backup-sharded-cluster-with-filesystem-snapshots] -file_filter = locale//LC_MESSAGES/tutorial/backup-sharded-cluster-with-filesystem-snapshots.po -source_file = locale/pot/tutorial/backup-sharded-cluster-with-filesystem-snapshots.pot -source_lang = en - 
-[mongodb-manual.tutorial--create-a-vulnerability-report] -file_filter = locale//LC_MESSAGES/tutorial/create-a-vulnerability-report.po -source_file = locale/pot/tutorial/create-a-vulnerability-report.pot -source_lang = en - -[mongodb-manual.tutorial--model-tree-structures-with-ancestors-array] -file_filter = locale//LC_MESSAGES/tutorial/model-tree-structures-with-ancestors-array.po -source_file = locale/pot/tutorial/model-tree-structures-with-ancestors-array.pot -source_lang = en - -[mongodb-manual.tutorial--modify-documents] -file_filter = locale//LC_MESSAGES/tutorial/modify-documents.po -source_file = locale/pot/tutorial/modify-documents.pot -source_lang = en - -[mongodb-manual.tutorial--install-mongodb-on-ubuntu] -file_filter = locale//LC_MESSAGES/tutorial/install-mongodb-on-ubuntu.po -source_file = locale/pot/tutorial/install-mongodb-on-ubuntu.pot -source_lang = en - -[mongodb-manual.tutorial--model-tree-structures-with-materialized-paths] -file_filter = locale//LC_MESSAGES/tutorial/model-tree-structures-with-materialized-paths.po -source_file = locale/pot/tutorial/model-tree-structures-with-materialized-paths.pot -source_lang = en - -[mongodb-manual.tutorial--control-access-to-mongodb-with-kerberos-authentication] -file_filter = locale//LC_MESSAGES/tutorial/control-access-to-mongodb-with-kerberos-authentication.po -source_file = locale/pot/tutorial/control-access-to-mongodb-with-kerberos-authentication.pot -source_lang = en - -[mongodb-manual.tutorial--model-tree-structures-with-nested-sets] -file_filter = locale//LC_MESSAGES/tutorial/model-tree-structures-with-nested-sets.po -source_file = locale/pot/tutorial/model-tree-structures-with-nested-sets.pot -source_lang = en - -[mongodb-manual.tutorial--troubleshoot-reduce-function] -file_filter = locale//LC_MESSAGES/tutorial/troubleshoot-reduce-function.po -source_file = locale/pot/tutorial/troubleshoot-reduce-function.pot -source_lang = en - -[mongodb-manual.tutorial--getting-started-with-the-mongo-shell] -file_filter = locale//LC_MESSAGES/tutorial/getting-started-with-the-mongo-shell.po -source_file = locale/pot/tutorial/getting-started-with-the-mongo-shell.pot -source_lang = en - -[mongodb-manual.tutorial--model-referenced-one-to-many-relationships-between-documents] -file_filter = locale//LC_MESSAGES/tutorial/model-referenced-one-to-many-relationships-between-documents.po -source_file = locale/pot/tutorial/model-referenced-one-to-many-relationships-between-documents.pot -source_lang = en - -[mongodb-manual.tutorial--configure-windows-netsh-firewall] -file_filter = locale//LC_MESSAGES/tutorial/configure-windows-netsh-firewall.po -source_file = locale/pot/tutorial/configure-windows-netsh-firewall.pot -source_lang = en - -[mongodb-manual.tutorial--enforce-unique-keys-for-sharded-collections] -file_filter = locale//LC_MESSAGES/tutorial/enforce-unique-keys-for-sharded-collections.po -source_file = locale/pot/tutorial/enforce-unique-keys-for-sharded-collections.pot -source_lang = en - -[mongodb-manual.tutorial--reconfigure-replica-set-with-unavailable-members] -file_filter = locale//LC_MESSAGES/tutorial/reconfigure-replica-set-with-unavailable-members.po -source_file = locale/pot/tutorial/reconfigure-replica-set-with-unavailable-members.pot -source_lang = en - -[mongodb-manual.tutorial--upgrade-cluster-to-ssl] -file_filter = locale//LC_MESSAGES/tutorial/upgrade-cluster-to-ssl.po -source_file = locale/pot/tutorial/upgrade-cluster-to-ssl.pot -source_lang = en - -[mongodb-manual.tutorial--build-a-geohaystack-index] -file_filter = 
locale//LC_MESSAGES/tutorial/build-a-geohaystack-index.po -source_file = locale/pot/tutorial/build-a-geohaystack-index.pot -source_lang = en - -[mongodb-manual.tutorial--enable-authentication-in-sharded-cluster] -file_filter = locale//LC_MESSAGES/tutorial/enable-authentication-in-sharded-cluster.po -source_file = locale/pot/tutorial/enable-authentication-in-sharded-cluster.pot -source_lang = en - -[mongodb-manual.tutorial--migrate-config-servers-with-different-hostnames] -file_filter = locale//LC_MESSAGES/tutorial/migrate-config-servers-with-different-hostnames.po -source_file = locale/pot/tutorial/migrate-config-servers-with-different-hostnames.pot -source_lang = en - -[mongodb-manual.tutorial--backup-sharded-cluster-metadata] -file_filter = locale//LC_MESSAGES/tutorial/backup-sharded-cluster-metadata.po -source_file = locale/pot/tutorial/backup-sharded-cluster-metadata.pot -source_lang = en - -[mongodb-manual.tutorial--model-tree-structures] -file_filter = locale//LC_MESSAGES/tutorial/model-tree-structures.po -source_file = locale/pot/tutorial/model-tree-structures.pot -source_lang = en - -[mongodb-manual.tutorial--create-a-sparse-index] -file_filter = locale//LC_MESSAGES/tutorial/create-a-sparse-index.po -source_file = locale/pot/tutorial/create-a-sparse-index.pot -source_lang = en - -[mongodb-manual.tutorial--access-mongo-shell-help] -file_filter = locale//LC_MESSAGES/tutorial/access-mongo-shell-help.po -source_file = locale/pot/tutorial/access-mongo-shell-help.pot -source_lang = en - -[mongodb-manual.tutorial--manage-journaling] -file_filter = locale//LC_MESSAGES/tutorial/manage-journaling.po -source_file = locale/pot/tutorial/manage-journaling.pot -source_lang = en - -[mongodb-manual.tutorial--manage-the-database-profiler] -file_filter = locale//LC_MESSAGES/tutorial/manage-the-database-profiler.po -source_file = locale/pot/tutorial/manage-the-database-profiler.pot -source_lang = en - -[mongodb-manual.tutorial--deploy-geographically-distributed-replica-set] -file_filter = locale//LC_MESSAGES/tutorial/deploy-geographically-distributed-replica-set.po -source_file = locale/pot/tutorial/deploy-geographically-distributed-replica-set.pot -source_lang = en - -[mongodb-manual.tutorial--list-indexes] -file_filter = locale//LC_MESSAGES/tutorial/list-indexes.po -source_file = locale/pot/tutorial/list-indexes.pot -source_lang = en - -[mongodb-manual.tutorial--change-oplog-size] -file_filter = locale//LC_MESSAGES/tutorial/change-oplog-size.po -source_file = locale/pot/tutorial/change-oplog-size.pot -source_lang = en - -[mongodb-manual.tutorial--deploy-shard-cluster] -file_filter = locale//LC_MESSAGES/tutorial/deploy-shard-cluster.po -source_file = locale/pot/tutorial/deploy-shard-cluster.pot -source_lang = en - -[mongodb-manual.tutorial--install-mongodb-on-os-x] -file_filter = locale//LC_MESSAGES/tutorial/install-mongodb-on-os-x.po -source_file = locale/pot/tutorial/install-mongodb-on-os-x.pot -source_lang = en - -[mongodb-manual.tutorial--shard-collection-with-a-hashed-shard-key] -file_filter = locale//LC_MESSAGES/tutorial/shard-collection-with-a-hashed-shard-key.po -source_file = locale/pot/tutorial/shard-collection-with-a-hashed-shard-key.pot -source_lang = en - -[mongodb-manual.tutorial--model-tree-structures-with-child-references] -file_filter = locale//LC_MESSAGES/tutorial/model-tree-structures-with-child-references.po -source_file = locale/pot/tutorial/model-tree-structures-with-child-references.pot -source_lang = en - -[mongodb-manual.tutorial--merge-chunks-in-sharded-cluster] -file_filter 
= locale//LC_MESSAGES/tutorial/merge-chunks-in-sharded-cluster.po -source_file = locale/pot/tutorial/merge-chunks-in-sharded-cluster.pot -source_lang = en - -[mongodb-manual.tutorial--optimize-query-performance-with-indexes-and-projections] -file_filter = locale//LC_MESSAGES/tutorial/optimize-query-performance-with-indexes-and-projections.po -source_file = locale/pot/tutorial/optimize-query-performance-with-indexes-and-projections.pot -source_lang = en - -[mongodb-manual.tutorial--write-scripts-for-the-mongo-shell] -file_filter = locale//LC_MESSAGES/tutorial/write-scripts-for-the-mongo-shell.po -source_file = locale/pot/tutorial/write-scripts-for-the-mongo-shell.pot -source_lang = en - -[mongodb-manual.tutorial--add-user-administrator] -file_filter = locale//LC_MESSAGES/tutorial/add-user-administrator.po -source_file = locale/pot/tutorial/add-user-administrator.pot -source_lang = en - -[mongodb-manual.tutorial--avoid-text-index-name-limit] -file_filter = locale//LC_MESSAGES/tutorial/avoid-text-index-name-limit.po -source_file = locale/pot/tutorial/avoid-text-index-name-limit.pot -source_lang = en - -[mongodb-manual.tutorial--sort-results-with-indexes] -file_filter = locale//LC_MESSAGES/tutorial/sort-results-with-indexes.po -source_file = locale/pot/tutorial/sort-results-with-indexes.pot -source_lang = en - -[mongodb-manual.tutorial--restore-sharded-cluster] -file_filter = locale//LC_MESSAGES/tutorial/restore-sharded-cluster.po -source_file = locale/pot/tutorial/restore-sharded-cluster.pot -source_lang = en - -[mongodb-manual.tutorial--view-roles] -file_filter = locale//LC_MESSAGES/tutorial/view-roles.po -source_file = locale/pot/tutorial/view-roles.pot -source_lang = en - -[mongodb-manual.tutorial--choose-a-shard-key] -file_filter = locale//LC_MESSAGES/tutorial/choose-a-shard-key.po -source_file = locale/pot/tutorial/choose-a-shard-key.pot -source_lang = en - -[mongodb-manual.tutorial--build-a-2d-index] -file_filter = locale//LC_MESSAGES/tutorial/build-a-2d-index.po -source_file = locale/pot/tutorial/build-a-2d-index.pot -source_lang = en - -[mongodb-manual.tutorial--recover-data-following-unexpected-shutdown] -file_filter = locale//LC_MESSAGES/tutorial/recover-data-following-unexpected-shutdown.po -source_file = locale/pot/tutorial/recover-data-following-unexpected-shutdown.pot -source_lang = en - -[mongodb-manual.tutorial--evaluate-operation-performance] -file_filter = locale//LC_MESSAGES/tutorial/evaluate-operation-performance.po -source_file = locale/pot/tutorial/evaluate-operation-performance.pot -source_lang = en - -[mongodb-manual.tutorial--install-mongodb-enterprise-on-windows] -file_filter = locale//LC_MESSAGES/tutorial/install-mongodb-enterprise-on-windows.po -source_file = locale/pot/tutorial/install-mongodb-enterprise-on-windows.pot -source_lang = en - -[mongodb-manual.tutorial--generate-key-file] -file_filter = locale//LC_MESSAGES/tutorial/generate-key-file.po -source_file = locale/pot/tutorial/generate-key-file.pot -source_lang = en - -[mongodb-manual.tutorial--add-replica-set-arbiter] -file_filter = locale//LC_MESSAGES/tutorial/add-replica-set-arbiter.po -source_file = locale/pot/tutorial/add-replica-set-arbiter.pot -source_lang = en - -[mongodb-manual.tutorial--adjust-replica-set-member-priority] -file_filter = locale//LC_MESSAGES/tutorial/adjust-replica-set-member-priority.po -source_file = locale/pot/tutorial/adjust-replica-set-member-priority.pot -source_lang = en - -[mongodb-manual.tutorial--enable-text-search] -file_filter = 
locale//LC_MESSAGES/tutorial/enable-text-search.po -source_file = locale/pot/tutorial/enable-text-search.pot -source_lang = en - -[mongodb-manual.tutorial--expire-data] -file_filter = locale//LC_MESSAGES/tutorial/expire-data.po -source_file = locale/pot/tutorial/expire-data.pot -source_lang = en - -[mongodb-manual.tutorial--restore-single-shard] -file_filter = locale//LC_MESSAGES/tutorial/restore-single-shard.po -source_file = locale/pot/tutorial/restore-single-shard.pot -source_lang = en - -[mongodb-manual.tutorial--configure-replica-set-secondary-sync-target] -file_filter = locale//LC_MESSAGES/tutorial/configure-replica-set-secondary-sync-target.po -source_file = locale/pot/tutorial/configure-replica-set-secondary-sync-target.pot -source_lang = en - -[mongodb-manual.tutorial--change-hostnames-in-a-replica-set] -file_filter = locale//LC_MESSAGES/tutorial/change-hostnames-in-a-replica-set.po -source_file = locale/pot/tutorial/change-hostnames-in-a-replica-set.pot -source_lang = en - -[mongodb-manual.tutorial--configure-secondary-only-replica-set-member] -file_filter = locale//LC_MESSAGES/tutorial/configure-secondary-only-replica-set-member.po -source_file = locale/pot/tutorial/configure-secondary-only-replica-set-member.pot -source_lang = en - -[mongodb-manual.tutorial--configure-x509] -file_filter = locale//LC_MESSAGES/tutorial/configure-x509.po -source_file = locale/pot/tutorial/configure-x509.pot -source_lang = en - -[mongodb-manual.tutorial--deploy-replica-set] -file_filter = locale//LC_MESSAGES/tutorial/deploy-replica-set.po -source_file = locale/pot/tutorial/deploy-replica-set.pot -source_lang = en - -[mongodb-manual.tutorial--force-member-to-be-primary] -file_filter = locale//LC_MESSAGES/tutorial/force-member-to-be-primary.po -source_file = locale/pot/tutorial/force-member-to-be-primary.pot -source_lang = en - -[mongodb-manual.tutorial--configure-a-non-voting-replica-set-member] -file_filter = locale//LC_MESSAGES/tutorial/configure-a-non-voting-replica-set-member.po -source_file = locale/pot/tutorial/configure-a-non-voting-replica-set-member.pot -source_lang = en - -[mongodb-manual.tutorial--schedule-backup-window-for-sharded-clusters] -file_filter = locale//LC_MESSAGES/tutorial/schedule-backup-window-for-sharded-clusters.po -source_file = locale/pot/tutorial/schedule-backup-window-for-sharded-clusters.pot -source_lang = en - -[mongodb-manual.tutorial--administer-shard-tags] -file_filter = locale//LC_MESSAGES/tutorial/administer-shard-tags.po -source_file = locale/pot/tutorial/administer-shard-tags.pot -source_lang = en - -[mongodb-manual.tutorial--deploy-config-servers] -file_filter = locale//LC_MESSAGES/tutorial/deploy-config-servers.po -source_file = locale/pot/tutorial/deploy-config-servers.pot -source_lang = en - -[mongodb-manual.tutorial--measure-index-use] -file_filter = locale//LC_MESSAGES/tutorial/measure-index-use.po -source_file = locale/pot/tutorial/measure-index-use.pot -source_lang = en - -[mongodb-manual.tutorial--manage-sharded-cluster-balancer] -file_filter = locale//LC_MESSAGES/tutorial/manage-sharded-cluster-balancer.po -source_file = locale/pot/tutorial/manage-sharded-cluster-balancer.pot -source_lang = en - -[mongodb-manual.tutorial--troubleshoot-replica-sets] -file_filter = locale//LC_MESSAGES/tutorial/troubleshoot-replica-sets.po -source_file = locale/pot/tutorial/troubleshoot-replica-sets.pot -source_lang = en - -[mongodb-manual.tutorial--backup-small-sharded-cluster-with-mongodump] -file_filter = 
locale//LC_MESSAGES/tutorial/backup-small-sharded-cluster-with-mongodump.po -source_file = locale/pot/tutorial/backup-small-sharded-cluster-with-mongodump.pot -source_lang = en - -[mongodb-manual.tutorial--configure-ssl] -file_filter = locale//LC_MESSAGES/tutorial/configure-ssl.po -source_file = locale/pot/tutorial/configure-ssl.pot -source_lang = en - -[mongodb-manual.tutorial--troubleshoot-map-function] -file_filter = locale//LC_MESSAGES/tutorial/troubleshoot-map-function.po -source_file = locale/pot/tutorial/troubleshoot-map-function.pot -source_lang = en - -[mongodb-manual.tutorial--model-embedded-one-to-many-relationships-between-documents] -file_filter = locale//LC_MESSAGES/tutorial/model-embedded-one-to-many-relationships-between-documents.po -source_file = locale/pot/tutorial/model-embedded-one-to-many-relationships-between-documents.pot -source_lang = en - -[mongodb-manual.tutorial--enable-authentication] -file_filter = locale//LC_MESSAGES/tutorial/enable-authentication.po -source_file = locale/pot/tutorial/enable-authentication.pot -source_lang = en - -[mongodb-manual.tutorial--aggregation-with-user-preference-data] -file_filter = locale//LC_MESSAGES/tutorial/aggregation-with-user-preference-data.po -source_file = locale/pot/tutorial/aggregation-with-user-preference-data.pot -source_lang = en - -[mongodb-manual.tutorial--build-indexes-on-replica-sets] -file_filter = locale//LC_MESSAGES/tutorial/build-indexes-on-replica-sets.po -source_file = locale/pot/tutorial/build-indexes-on-replica-sets.pot -source_lang = en - -[mongodb-manual.tutorial--limit-number-of-elements-in-updated-array] -file_filter = locale//LC_MESSAGES/tutorial/limit-number-of-elements-in-updated-array.po -source_file = locale/pot/tutorial/limit-number-of-elements-in-updated-array.pot -source_lang = en - -[mongodb-manual.tutorial--create-tailable-cursor] -file_filter = locale//LC_MESSAGES/tutorial/create-tailable-cursor.po -source_file = locale/pot/tutorial/create-tailable-cursor.pot -source_lang = en - -[mongodb-manual.tutorial--insert-documents] -file_filter = locale//LC_MESSAGES/tutorial/insert-documents.po -source_file = locale/pot/tutorial/insert-documents.pot -source_lang = en - -[mongodb-manual.tutorial--install-mongodb-on-debian] -file_filter = locale//LC_MESSAGES/tutorial/install-mongodb-on-debian.po -source_file = locale/pot/tutorial/install-mongodb-on-debian.pot -source_lang = en - -[mongodb-manual.tutorial--terminate-running-operations] -file_filter = locale//LC_MESSAGES/tutorial/terminate-running-operations.po -source_file = locale/pot/tutorial/terminate-running-operations.pot -source_lang = en - -[mongodb-manual.tutorial--replace-replica-set-member] -file_filter = locale//LC_MESSAGES/tutorial/replace-replica-set-member.po -source_file = locale/pot/tutorial/replace-replica-set-member.pot -source_lang = en - -[mongodb-manual.tutorial--rebuild-indexes] -file_filter = locale//LC_MESSAGES/tutorial/rebuild-indexes.po -source_file = locale/pot/tutorial/rebuild-indexes.pot -source_lang = en - -[mongodb-manual.tutorial--convert-secondary-into-arbiter] -file_filter = locale//LC_MESSAGES/tutorial/convert-secondary-into-arbiter.po -source_file = locale/pot/tutorial/convert-secondary-into-arbiter.pot -source_lang = en - -[mongodb-manual.tutorial--control-results-of-text-search] -file_filter = locale//LC_MESSAGES/tutorial/control-results-of-text-search.po -source_file = locale/pot/tutorial/control-results-of-text-search.pot -source_lang = en - -[mongodb-manual.tutorial--aggregation-zip-code-data-set] -file_filter = 
locale//LC_MESSAGES/tutorial/aggregation-zip-code-data-set.po -source_file = locale/pot/tutorial/aggregation-zip-code-data-set.pot -source_lang = en - -[mongodb-manual.tutorial--change-user-password] -file_filter = locale//LC_MESSAGES/tutorial/change-user-password.po -source_file = locale/pot/tutorial/change-user-password.pot -source_lang = en - -[mongodb-manual.tutorial--convert-standalone-to-replica-set] -file_filter = locale//LC_MESSAGES/tutorial/convert-standalone-to-replica-set.po -source_file = locale/pot/tutorial/convert-standalone-to-replica-set.pot -source_lang = en - -[mongodb-manual.tutorial--install-mongodb-on-red-hat-centos-or-fedora-linux] -file_filter = locale//LC_MESSAGES/tutorial/install-mongodb-on-red-hat-centos-or-fedora-linux.po -source_file = locale/pot/tutorial/install-mongodb-on-red-hat-centos-or-fedora-linux.pot -source_lang = en - -[mongodb-manual.tutorial--configure-replica-set-tag-sets] -file_filter = locale//LC_MESSAGES/tutorial/configure-replica-set-tag-sets.po -source_file = locale/pot/tutorial/configure-replica-set-tag-sets.pot -source_lang = en - -[mongodb-manual.tutorial--remove-shards-from-cluster] -file_filter = locale//LC_MESSAGES/tutorial/remove-shards-from-cluster.po -source_file = locale/pot/tutorial/remove-shards-from-cluster.pot -source_lang = en - -[mongodb-manual.tutorial--iterate-a-cursor] -file_filter = locale//LC_MESSAGES/tutorial/iterate-a-cursor.po -source_file = locale/pot/tutorial/iterate-a-cursor.pot -source_lang = en - -[mongodb-manual.tutorial--calculate-distances-using-spherical-geometry-with-2d-geospatial-indexes] -file_filter = locale//LC_MESSAGES/tutorial/calculate-distances-using-spherical-geometry-with-2d-geospatial-indexes.po -source_file = locale/pot/tutorial/calculate-distances-using-spherical-geometry-with-2d-geospatial-indexes.pot -source_lang = en - -[mongodb-manual.tutorial--remove-replica-set-member] -file_filter = locale//LC_MESSAGES/tutorial/remove-replica-set-member.po -source_file = locale/pot/tutorial/remove-replica-set-member.pot -source_lang = en - -[mongodb-manual.tutorial--expand-replica-set] -file_filter = locale//LC_MESSAGES/tutorial/expand-replica-set.po -source_file = locale/pot/tutorial/expand-replica-set.pot -source_lang = en - -[mongodb-manual.tutorial--resync-replica-set-member] -file_filter = locale//LC_MESSAGES/tutorial/resync-replica-set-member.po -source_file = locale/pot/tutorial/resync-replica-set-member.pot -source_lang = en - -[mongodb-manual.tutorial--manage-in-progress-indexing-operations] -file_filter = locale//LC_MESSAGES/tutorial/manage-in-progress-indexing-operations.po -source_file = locale/pot/tutorial/manage-in-progress-indexing-operations.pot -source_lang = en - -[mongodb-manual.tutorial--rotate-log-files] -file_filter = locale//LC_MESSAGES/tutorial/rotate-log-files.po -source_file = locale/pot/tutorial/rotate-log-files.pot -source_lang = en - -[mongodb-manual.tutorial--migrate-chunks-in-sharded-cluster] -file_filter = locale//LC_MESSAGES/tutorial/migrate-chunks-in-sharded-cluster.po -source_file = locale/pot/tutorial/migrate-chunks-in-sharded-cluster.pot -source_lang = en - -[mongodb-manual.tutorial--use-capped-collections-for-fast-writes-and-reads] -file_filter = locale//LC_MESSAGES/tutorial/use-capped-collections-for-fast-writes-and-reads.po -source_file = locale/pot/tutorial/use-capped-collections-for-fast-writes-and-reads.pot -source_lang = en - -[mongodb-manual.tutorial--model-tree-structures-with-parent-references] -file_filter = 
locale//LC_MESSAGES/tutorial/model-tree-structures-with-parent-references.po -source_file = locale/pot/tutorial/model-tree-structures-with-parent-references.pot -source_lang = en - -[mongodb-manual.tutorial--convert-sharded-cluster-to-replica-set] -file_filter = locale//LC_MESSAGES/tutorial/convert-sharded-cluster-to-replica-set.po -source_file = locale/pot/tutorial/convert-sharded-cluster-to-replica-set.pot -source_lang = en - -[mongodb-manual.tutorial--add-user-to-database] -file_filter = locale//LC_MESSAGES/tutorial/add-user-to-database.po -source_file = locale/pot/tutorial/add-user-to-database.pot -source_lang = en - -[mongodb-manual.tutorial--isolate-sequence-of-operations] -file_filter = locale//LC_MESSAGES/tutorial/isolate-sequence-of-operations.po -source_file = locale/pot/tutorial/isolate-sequence-of-operations.pot -source_lang = en - -[mongodb-manual.tutorial--configure-a-hidden-replica-set-member] -file_filter = locale//LC_MESSAGES/tutorial/configure-a-hidden-replica-set-member.po -source_file = locale/pot/tutorial/configure-a-hidden-replica-set-member.pot -source_lang = en - -[mongodb-manual.tutorial--migrate-sharded-cluster-to-new-hardware] -file_filter = locale//LC_MESSAGES/tutorial/migrate-sharded-cluster-to-new-hardware.po -source_file = locale/pot/tutorial/migrate-sharded-cluster-to-new-hardware.pot -source_lang = en - -[mongodb-manual.tutorial--model-data-for-keyword-search] -file_filter = locale//LC_MESSAGES/tutorial/model-data-for-keyword-search.po -source_file = locale/pot/tutorial/model-data-for-keyword-search.pot -source_lang = en - -[mongodb-manual.tutorial--backup-sharded-cluster-with-database-dumps] -file_filter = locale//LC_MESSAGES/tutorial/backup-sharded-cluster-with-database-dumps.po -source_file = locale/pot/tutorial/backup-sharded-cluster-with-database-dumps.pot -source_lang = en - -[mongodb-manual.tutorial--build-indexes-in-the-background] -file_filter = locale//LC_MESSAGES/tutorial/build-indexes-in-the-background.po -source_file = locale/pot/tutorial/build-indexes-in-the-background.pot -source_lang = en - -[mongodb-manual.tutorial--configure-ldap-sasl-authentication] -file_filter = locale//LC_MESSAGES/tutorial/configure-ldap-sasl-authentication.po -source_file = locale/pot/tutorial/configure-ldap-sasl-authentication.pot -source_lang = en - -[mongodb-manual.tutorial--map-reduce-examples] -file_filter = locale//LC_MESSAGES/tutorial/map-reduce-examples.po -source_file = locale/pot/tutorial/map-reduce-examples.pot -source_lang = en - -[mongodb-manual.tutorial--troubleshoot-sharded-clusters] -file_filter = locale//LC_MESSAGES/tutorial/troubleshoot-sharded-clusters.po -source_file = locale/pot/tutorial/troubleshoot-sharded-clusters.pot -source_lang = en - -[mongodb-manual.tutorial--deploy-replica-set-for-testing] -file_filter = locale//LC_MESSAGES/tutorial/deploy-replica-set-for-testing.po -source_file = locale/pot/tutorial/deploy-replica-set-for-testing.pot -source_lang = en - -[mongodb-manual.tutorial--restore-replica-set-from-backup] -file_filter = locale//LC_MESSAGES/tutorial/restore-replica-set-from-backup.po -source_file = locale/pot/tutorial/restore-replica-set-from-backup.pot -source_lang = en - -[mongodb-manual.tutorial--create-a-compound-index] -file_filter = locale//LC_MESSAGES/tutorial/create-a-compound-index.po -source_file = locale/pot/tutorial/create-a-compound-index.pot -source_lang = en - -[mongodb-manual.tutorial--getting-started] -file_filter = locale//LC_MESSAGES/tutorial/getting-started.po -source_file = 
locale/pot/tutorial/getting-started.pot -source_lang = en - -[mongodb-manual.tutorial--create-chunks-in-sharded-cluster] -file_filter = locale//LC_MESSAGES/tutorial/create-chunks-in-sharded-cluster.po -source_file = locale/pot/tutorial/create-chunks-in-sharded-cluster.pot -source_lang = en - -[mongodb-manual.tutorial--project-fields-from-query-results] -file_filter = locale//LC_MESSAGES/tutorial/project-fields-from-query-results.po -source_file = locale/pot/tutorial/project-fields-from-query-results.pot -source_lang = en - -[mongodb-manual.tutorial--create-a-unique-index] -file_filter = locale//LC_MESSAGES/tutorial/create-a-unique-index.po -source_file = locale/pot/tutorial/create-a-unique-index.pot -source_lang = en - -[mongodb-manual.tutorial--store-javascript-function-on-server] -file_filter = locale//LC_MESSAGES/tutorial/store-javascript-function-on-server.po -source_file = locale/pot/tutorial/store-javascript-function-on-server.pot -source_lang = en - -[mongodb-manual.tutorial--change-user-privileges] -file_filter = locale//LC_MESSAGES/tutorial/change-user-privileges.po -source_file = locale/pot/tutorial/change-user-privileges.pot -source_lang = en - -[mongodb-manual.tutorial--remove-indexes] -file_filter = locale//LC_MESSAGES/tutorial/remove-indexes.po -source_file = locale/pot/tutorial/remove-indexes.pot -source_lang = en - -[mongodb-manual.tutorial--model-embedded-one-to-one-relationships-between-documents] -file_filter = locale//LC_MESSAGES/tutorial/model-embedded-one-to-one-relationships-between-documents.po -source_file = locale/pot/tutorial/model-embedded-one-to-one-relationships-between-documents.pot -source_lang = en - -[mongodb-manual.tutorial--install-mongodb-on-windows] -file_filter = locale//LC_MESSAGES/tutorial/install-mongodb-on-windows.po -source_file = locale/pot/tutorial/install-mongodb-on-windows.pot -source_lang = en - -[mongodb-manual.tutorial--perform-incremental-map-reduce] -file_filter = locale//LC_MESSAGES/tutorial/perform-incremental-map-reduce.po -source_file = locale/pot/tutorial/perform-incremental-map-reduce.pot -source_lang = en - -[mongodb-manual.tutorial--specify-language-for-text-index] -file_filter = locale//LC_MESSAGES/tutorial/specify-language-for-text-index.po -source_file = locale/pot/tutorial/specify-language-for-text-index.pot -source_lang = en - -[mongodb-manual.tutorial--modify-chunk-size-in-sharded-cluster] -file_filter = locale//LC_MESSAGES/tutorial/modify-chunk-size-in-sharded-cluster.po -source_file = locale/pot/tutorial/modify-chunk-size-in-sharded-cluster.pot -source_lang = en - -[mongodb-manual.tutorial--create-queries-that-ensure-selectivity] -file_filter = locale//LC_MESSAGES/tutorial/create-queries-that-ensure-selectivity.po -source_file = locale/pot/tutorial/create-queries-that-ensure-selectivity.pot -source_lang = en - -[mongodb-manual.tutorial--query-a-2d-index] -file_filter = locale//LC_MESSAGES/tutorial/query-a-2d-index.po -source_file = locale/pot/tutorial/query-a-2d-index.pot -source_lang = en - -[mongodb-manual.meta--administration] -file_filter = locale//LC_MESSAGES/meta/administration.po -source_file = locale/pot/meta/administration.pot -source_lang = en - -[mongodb-manual.meta--404] -file_filter = locale//LC_MESSAGES/meta/404.po -source_file = locale/pot/meta/404.pot -source_lang = en - -[mongodb-manual.meta--translation] -file_filter = locale//LC_MESSAGES/meta/translation.po -source_file = locale/pot/meta/translation.pot -source_lang = en - -[mongodb-manual.meta--style-guide] -file_filter = 
locale//LC_MESSAGES/meta/style-guide.po -source_file = locale/pot/meta/style-guide.pot -source_lang = en - -[mongodb-manual.meta--reference] -file_filter = locale//LC_MESSAGES/meta/reference.po -source_file = locale/pot/meta/reference.pot -source_lang = en - -[mongodb-manual.meta--410] -file_filter = locale//LC_MESSAGES/meta/410.po -source_file = locale/pot/meta/410.pot -source_lang = en - -[mongodb-manual.meta--403] -file_filter = locale//LC_MESSAGES/meta/403.po -source_file = locale/pot/meta/403.pot -source_lang = en - -[mongodb-manual.meta--practices] -file_filter = locale//LC_MESSAGES/meta/practices.po -source_file = locale/pot/meta/practices.pot -source_lang = en - -[mongodb-manual.meta--build] -file_filter = locale//LC_MESSAGES/meta/build.po -source_file = locale/pot/meta/build.pot -source_lang = en - -[mongodb-manual.meta--manual] -file_filter = locale//LC_MESSAGES/meta/manual.po -source_file = locale/pot/meta/manual.pot -source_lang = en - -[mongodb-manual.meta--401] -file_filter = locale//LC_MESSAGES/meta/401.po -source_file = locale/pot/meta/401.pot -source_lang = en - -[mongodb-manual.meta--organization] -file_filter = locale//LC_MESSAGES/meta/organization.po -source_file = locale/pot/meta/organization.pot -source_lang = en - -[mongodb-manual.reference--default-mongodb-port] -file_filter = locale//LC_MESSAGES/reference/default-mongodb-port.po -source_file = locale/pot/reference/default-mongodb-port.pot -source_lang = en - -[mongodb-manual.reference--limits] -file_filter = locale//LC_MESSAGES/reference/limits.po -source_file = locale/pot/reference/limits.pot -source_lang = en - -[mongodb-manual.reference--configuration-options] -file_filter = locale//LC_MESSAGES/reference/configuration-options.po -source_file = locale/pot/reference/configuration-options.pot -source_lang = en - -[mongodb-manual.reference--replica-states] -file_filter = locale//LC_MESSAGES/reference/replica-states.po -source_file = locale/pot/reference/replica-states.pot -source_lang = en - -[mongodb-manual.reference--local-database] -file_filter = locale//LC_MESSAGES/reference/local-database.po -source_file = locale/pot/reference/local-database.pot -source_lang = en - -[mongodb-manual.reference--gridfs] -file_filter = locale//LC_MESSAGES/reference/gridfs.po -source_file = locale/pot/reference/gridfs.pot -source_lang = en - -[mongodb-manual.reference--resource-document] -file_filter = locale//LC_MESSAGES/reference/resource-document.po -source_file = locale/pot/reference/resource-document.pot -source_lang = en - -[mongodb-manual.reference--operator] -file_filter = locale//LC_MESSAGES/reference/operator.po -source_file = locale/pot/reference/operator.pot -source_lang = en - -[mongodb-manual.reference--sql-aggregation-comparison] -file_filter = locale//LC_MESSAGES/reference/sql-aggregation-comparison.po -source_file = locale/pot/reference/sql-aggregation-comparison.pot -source_lang = en - -[mongodb-manual.reference--database-references] -file_filter = locale//LC_MESSAGES/reference/database-references.po -source_file = locale/pot/reference/database-references.pot -source_lang = en - -[mongodb-manual.reference--administration] -file_filter = locale//LC_MESSAGES/reference/administration.po -source_file = locale/pot/reference/administration.pot -source_lang = en - -[mongodb-manual.reference--indexes] -file_filter = locale//LC_MESSAGES/reference/indexes.po -source_file = locale/pot/reference/indexes.pot -source_lang = en - -[mongodb-manual.reference--parameters] -file_filter = locale//LC_MESSAGES/reference/parameters.po 
-source_file = locale/pot/reference/parameters.pot -source_lang = en - -[mongodb-manual.reference--exit-codes] -file_filter = locale//LC_MESSAGES/reference/exit-codes.po -source_file = locale/pot/reference/exit-codes.pot -source_lang = en - -[mongodb-manual.reference--sql-comparison] -file_filter = locale//LC_MESSAGES/reference/sql-comparison.po -source_file = locale/pot/reference/sql-comparison.pot -source_lang = en - -[mongodb-manual.reference--replica-configuration] -file_filter = locale//LC_MESSAGES/reference/replica-configuration.po -source_file = locale/pot/reference/replica-configuration.pot -source_lang = en - -[mongodb-manual.reference--object-id] -file_filter = locale//LC_MESSAGES/reference/object-id.po -source_file = locale/pot/reference/object-id.pot -source_lang = en - -[mongodb-manual.reference--bson-types] -file_filter = locale//LC_MESSAGES/reference/bson-types.po -source_file = locale/pot/reference/bson-types.pot -source_lang = en - -[mongodb-manual.reference--security] -file_filter = locale//LC_MESSAGES/reference/security.po -source_file = locale/pot/reference/security.pot -source_lang = en - -[mongodb-manual.reference--system-roles-collection] -file_filter = locale//LC_MESSAGES/reference/system-roles-collection.po -source_file = locale/pot/reference/system-roles-collection.pot -source_lang = en - -[mongodb-manual.reference--bios-example-collection] -file_filter = locale//LC_MESSAGES/reference/bios-example-collection.po -source_file = locale/pot/reference/bios-example-collection.pot -source_lang = en - -[mongodb-manual.reference--privilege-actions] -file_filter = locale//LC_MESSAGES/reference/privilege-actions.po -source_file = locale/pot/reference/privilege-actions.pot -source_lang = en - -[mongodb-manual.reference--glossary] -file_filter = locale//LC_MESSAGES/reference/glossary.po -source_file = locale/pot/reference/glossary.pot -source_lang = en - -[mongodb-manual.reference--connection-string] -file_filter = locale//LC_MESSAGES/reference/connection-string.po -source_file = locale/pot/reference/connection-string.pot -source_lang = en - -[mongodb-manual.reference--system-users-collection] -file_filter = locale//LC_MESSAGES/reference/system-users-collection.po -source_file = locale/pot/reference/system-users-collection.pot -source_lang = en - -[mongodb-manual.reference--system-collections] -file_filter = locale//LC_MESSAGES/reference/system-collections.po -source_file = locale/pot/reference/system-collections.pot -source_lang = en - -[mongodb-manual.reference--config-database] -file_filter = locale//LC_MESSAGES/reference/config-database.po -source_file = locale/pot/reference/config-database.pot -source_lang = en - -[mongodb-manual.reference--database-profiler] -file_filter = locale//LC_MESSAGES/reference/database-profiler.po -source_file = locale/pot/reference/database-profiler.pot -source_lang = en - -[mongodb-manual.reference--privilege-documents] -file_filter = locale//LC_MESSAGES/reference/privilege-documents.po -source_file = locale/pot/reference/privilege-documents.pot -source_lang = en - -[mongodb-manual.reference--read-preference] -file_filter = locale//LC_MESSAGES/reference/read-preference.po -source_file = locale/pot/reference/read-preference.pot -source_lang = en - -[mongodb-manual.reference--mongo-shell] -file_filter = locale//LC_MESSAGES/reference/mongo-shell.po -source_file = locale/pot/reference/mongo-shell.pot -source_lang = en - -[mongodb-manual.reference--sharding] -file_filter = locale//LC_MESSAGES/reference/sharding.po -source_file = 
locale/pot/reference/sharding.pot -source_lang = en - -[mongodb-manual.reference--crud] -file_filter = locale//LC_MESSAGES/reference/crud.po -source_file = locale/pot/reference/crud.pot -source_lang = en - -[mongodb-manual.reference--write-concern] -file_filter = locale//LC_MESSAGES/reference/write-concern.po -source_file = locale/pot/reference/write-concern.pot -source_lang = en - -[mongodb-manual.reference--mongodb-extended-json] -file_filter = locale//LC_MESSAGES/reference/mongodb-extended-json.po -source_file = locale/pot/reference/mongodb-extended-json.pot -source_lang = en - -[mongodb-manual.reference--ulimit] -file_filter = locale//LC_MESSAGES/reference/ulimit.po -source_file = locale/pot/reference/ulimit.pot -source_lang = en - -[mongodb-manual.reference--command] -file_filter = locale//LC_MESSAGES/reference/command.po -source_file = locale/pot/reference/command.pot -source_lang = en - -[mongodb-manual.reference--program] -file_filter = locale//LC_MESSAGES/reference/program.po -source_file = locale/pot/reference/program.pot -source_lang = en - -[mongodb-manual.reference--data-models] -file_filter = locale//LC_MESSAGES/reference/data-models.po -source_file = locale/pot/reference/data-models.pot -source_lang = en - -[mongodb-manual.reference--replication] -file_filter = locale//LC_MESSAGES/reference/replication.po -source_file = locale/pot/reference/replication.pot -source_lang = en - -[mongodb-manual.reference--aggregation-commands-comparison] -file_filter = locale//LC_MESSAGES/reference/aggregation-commands-comparison.po -source_file = locale/pot/reference/aggregation-commands-comparison.pot -source_lang = en - -[mongodb-manual.reference--aggregation] -file_filter = locale//LC_MESSAGES/reference/aggregation.po -source_file = locale/pot/reference/aggregation.pot -source_lang = en - -[mongodb-manual.reference--server-status] -file_filter = locale//LC_MESSAGES/reference/server-status.po -source_file = locale/pot/reference/server-status.pot -source_lang = en - -[mongodb-manual.reference--method] -file_filter = locale//LC_MESSAGES/reference/method.po -source_file = locale/pot/reference/method.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation-comparison] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation-comparison.po -source_file = locale/pot/reference/operator/aggregation-comparison.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation-arithmetic] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation-arithmetic.po -source_file = locale/pot/reference/operator/aggregation-arithmetic.pot -source_lang = en - -[mongodb-manual.reference--operator--query-modifier] -file_filter = locale//LC_MESSAGES/reference/operator/query-modifier.po -source_file = locale/pot/reference/operator/query-modifier.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation-string] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation-string.po -source_file = locale/pot/reference/operator/aggregation-string.pot -source_lang = en - -[mongodb-manual.reference--operator--query] -file_filter = locale//LC_MESSAGES/reference/operator/query.po -source_file = locale/pot/reference/operator/query.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation-pipeline] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation-pipeline.po -source_file = locale/pot/reference/operator/aggregation-pipeline.pot -source_lang = en - -[mongodb-manual.reference--operator--update] -file_filter = 
locale//LC_MESSAGES/reference/operator/update.po -source_file = locale/pot/reference/operator/update.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation-set] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation-set.po -source_file = locale/pot/reference/operator/aggregation-set.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation-group] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation-group.po -source_file = locale/pot/reference/operator/aggregation-group.pot -source_lang = en - -[mongodb-manual.reference--operator--update-field] -file_filter = locale//LC_MESSAGES/reference/operator/update-field.po -source_file = locale/pot/reference/operator/update-field.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation-date] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation-date.po -source_file = locale/pot/reference/operator/aggregation-date.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation-projection] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation-projection.po -source_file = locale/pot/reference/operator/aggregation-projection.pot -source_lang = en - -[mongodb-manual.reference--operator--projection] -file_filter = locale//LC_MESSAGES/reference/operator/projection.po -source_file = locale/pot/reference/operator/projection.pot -source_lang = en - -[mongodb-manual.reference--operator--query-comparison] -file_filter = locale//LC_MESSAGES/reference/operator/query-comparison.po -source_file = locale/pot/reference/operator/query-comparison.pot -source_lang = en - -[mongodb-manual.reference--operator--query-array] -file_filter = locale//LC_MESSAGES/reference/operator/query-array.po -source_file = locale/pot/reference/operator/query-array.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation-boolean] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation-boolean.po -source_file = locale/pot/reference/operator/aggregation-boolean.pot -source_lang = en - -[mongodb-manual.reference--operator--query-element] -file_filter = locale//LC_MESSAGES/reference/operator/query-element.po -source_file = locale/pot/reference/operator/query-element.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation-conditional] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation-conditional.po -source_file = locale/pot/reference/operator/aggregation-conditional.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation-array] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation-array.po -source_file = locale/pot/reference/operator/aggregation-array.pot -source_lang = en - -[mongodb-manual.reference--operator--update-array] -file_filter = locale//LC_MESSAGES/reference/operator/update-array.po -source_file = locale/pot/reference/operator/update-array.pot -source_lang = en - -[mongodb-manual.reference--operator--query-geospatial] -file_filter = locale//LC_MESSAGES/reference/operator/query-geospatial.po -source_file = locale/pot/reference/operator/query-geospatial.pot -source_lang = en - -[mongodb-manual.reference--operator--update-bitwise] -file_filter = locale//LC_MESSAGES/reference/operator/update-bitwise.po -source_file = locale/pot/reference/operator/update-bitwise.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation.po -source_file = locale/pot/reference/operator/aggregation.pot -source_lang = en - 
-[mongodb-manual.reference--operator--update-isolation] -file_filter = locale//LC_MESSAGES/reference/operator/update-isolation.po -source_file = locale/pot/reference/operator/update-isolation.pot -source_lang = en - -[mongodb-manual.reference--operator--query-logical] -file_filter = locale//LC_MESSAGES/reference/operator/query-logical.po -source_file = locale/pot/reference/operator/query-logical.pot -source_lang = en - -[mongodb-manual.reference--operator--query-evaluation] -file_filter = locale//LC_MESSAGES/reference/operator/query-evaluation.po -source_file = locale/pot/reference/operator/query-evaluation.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--eq] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/eq.po -source_file = locale/pot/reference/operator/aggregation/eq.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--millisecond] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/millisecond.po -source_file = locale/pot/reference/operator/aggregation/millisecond.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--setDifference] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/setDifference.po -source_file = locale/pot/reference/operator/aggregation/setDifference.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--strcasecmp] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/strcasecmp.po -source_file = locale/pot/reference/operator/aggregation/strcasecmp.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--multiply] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/multiply.po -source_file = locale/pot/reference/operator/aggregation/multiply.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--or] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/or.po -source_file = locale/pot/reference/operator/aggregation/or.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--sort] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/sort.po -source_file = locale/pot/reference/operator/aggregation/sort.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--substr] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/substr.po -source_file = locale/pot/reference/operator/aggregation/substr.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--allElementsTrue] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/allElementsTrue.po -source_file = locale/pot/reference/operator/aggregation/allElementsTrue.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--literal] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/literal.po -source_file = locale/pot/reference/operator/aggregation/literal.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--group] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/group.po -source_file = locale/pot/reference/operator/aggregation/group.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--project] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/project.po -source_file = locale/pot/reference/operator/aggregation/project.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--match] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/match.po -source_file = 
locale/pot/reference/operator/aggregation/match.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--sum] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/sum.po -source_file = locale/pot/reference/operator/aggregation/sum.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--divide] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/divide.po -source_file = locale/pot/reference/operator/aggregation/divide.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--minute] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/minute.po -source_file = locale/pot/reference/operator/aggregation/minute.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--year] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/year.po -source_file = locale/pot/reference/operator/aggregation/year.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--week] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/week.po -source_file = locale/pot/reference/operator/aggregation/week.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--let] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/let.po -source_file = locale/pot/reference/operator/aggregation/let.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--geoNear] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/geoNear.po -source_file = locale/pot/reference/operator/aggregation/geoNear.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--dayOfWeek] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/dayOfWeek.po -source_file = locale/pot/reference/operator/aggregation/dayOfWeek.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--dayOfMonth] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/dayOfMonth.po -source_file = locale/pot/reference/operator/aggregation/dayOfMonth.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--setIntersection] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/setIntersection.po -source_file = locale/pot/reference/operator/aggregation/setIntersection.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--avg] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/avg.po -source_file = locale/pot/reference/operator/aggregation/avg.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--last] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/last.po -source_file = locale/pot/reference/operator/aggregation/last.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--hour] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/hour.po -source_file = locale/pot/reference/operator/aggregation/hour.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--push] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/push.po -source_file = locale/pot/reference/operator/aggregation/push.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--add] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/add.po -source_file = locale/pot/reference/operator/aggregation/add.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--cond] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/cond.po -source_file = 
locale/pot/reference/operator/aggregation/cond.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--addToSet] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/addToSet.po -source_file = locale/pot/reference/operator/aggregation/addToSet.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--skip] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/skip.po -source_file = locale/pot/reference/operator/aggregation/skip.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--month] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/month.po -source_file = locale/pot/reference/operator/aggregation/month.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--limit] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/limit.po -source_file = locale/pot/reference/operator/aggregation/limit.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--interface] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/interface.po -source_file = locale/pot/reference/operator/aggregation/interface.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--setIsSubset] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/setIsSubset.po -source_file = locale/pot/reference/operator/aggregation/setIsSubset.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--ifNull] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/ifNull.po -source_file = locale/pot/reference/operator/aggregation/ifNull.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--max] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/max.po -source_file = locale/pot/reference/operator/aggregation/max.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--dayOfYear] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/dayOfYear.po -source_file = locale/pot/reference/operator/aggregation/dayOfYear.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--anyElementTrue] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/anyElementTrue.po -source_file = locale/pot/reference/operator/aggregation/anyElementTrue.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--gt] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/gt.po -source_file = locale/pot/reference/operator/aggregation/gt.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--concat] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/concat.po -source_file = locale/pot/reference/operator/aggregation/concat.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--second] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/second.po -source_file = locale/pot/reference/operator/aggregation/second.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--out] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/out.po -source_file = locale/pot/reference/operator/aggregation/out.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--mod] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/mod.po -source_file = locale/pot/reference/operator/aggregation/mod.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--map] -file_filter = 
locale//LC_MESSAGES/reference/operator/aggregation/map.po -source_file = locale/pot/reference/operator/aggregation/map.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--ne] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/ne.po -source_file = locale/pot/reference/operator/aggregation/ne.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--unwind] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/unwind.po -source_file = locale/pot/reference/operator/aggregation/unwind.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--lt] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/lt.po -source_file = locale/pot/reference/operator/aggregation/lt.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--cmp] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/cmp.po -source_file = locale/pot/reference/operator/aggregation/cmp.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--gte] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/gte.po -source_file = locale/pot/reference/operator/aggregation/gte.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--redact] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/redact.po -source_file = locale/pot/reference/operator/aggregation/redact.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--toLower] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/toLower.po -source_file = locale/pot/reference/operator/aggregation/toLower.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--and] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/and.po -source_file = locale/pot/reference/operator/aggregation/and.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--first] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/first.po -source_file = locale/pot/reference/operator/aggregation/first.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--size] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/size.po -source_file = locale/pot/reference/operator/aggregation/size.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--lte] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/lte.po -source_file = locale/pot/reference/operator/aggregation/lte.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--not] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/not.po -source_file = locale/pot/reference/operator/aggregation/not.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--subtract] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/subtract.po -source_file = locale/pot/reference/operator/aggregation/subtract.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--min] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/min.po -source_file = locale/pot/reference/operator/aggregation/min.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--setUnion] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/setUnion.po -source_file = locale/pot/reference/operator/aggregation/setUnion.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--setEquals] -file_filter = 
locale//LC_MESSAGES/reference/operator/aggregation/setEquals.po -source_file = locale/pot/reference/operator/aggregation/setEquals.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--toUpper] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/toUpper.po -source_file = locale/pot/reference/operator/aggregation/toUpper.pot -source_lang = en - -[mongodb-manual.reference--operator--update--sort] -file_filter = locale//LC_MESSAGES/reference/operator/update/sort.po -source_file = locale/pot/reference/operator/update/sort.pot -source_lang = en - -[mongodb-manual.reference--operator--update--inc] -file_filter = locale//LC_MESSAGES/reference/operator/update/inc.po -source_file = locale/pot/reference/operator/update/inc.pot -source_lang = en - -[mongodb-manual.reference--operator--update--pop] -file_filter = locale//LC_MESSAGES/reference/operator/update/pop.po -source_file = locale/pot/reference/operator/update/pop.pot -source_lang = en - -[mongodb-manual.reference--operator--update--slice] -file_filter = locale//LC_MESSAGES/reference/operator/update/slice.po -source_file = locale/pot/reference/operator/update/slice.pot -source_lang = en - -[mongodb-manual.reference--operator--update--pushAll] -file_filter = locale//LC_MESSAGES/reference/operator/update/pushAll.po -source_file = locale/pot/reference/operator/update/pushAll.pot -source_lang = en - -[mongodb-manual.reference--operator--update--setOnInsert] -file_filter = locale//LC_MESSAGES/reference/operator/update/setOnInsert.po -source_file = locale/pot/reference/operator/update/setOnInsert.pot -source_lang = en - -[mongodb-manual.reference--operator--update--push] -file_filter = locale//LC_MESSAGES/reference/operator/update/push.po -source_file = locale/pot/reference/operator/update/push.pot -source_lang = en - -[mongodb-manual.reference--operator--update--positional] -file_filter = locale//LC_MESSAGES/reference/operator/update/positional.po -source_file = locale/pot/reference/operator/update/positional.pot -source_lang = en - -[mongodb-manual.reference--operator--update--addToSet] -file_filter = locale//LC_MESSAGES/reference/operator/update/addToSet.po -source_file = locale/pot/reference/operator/update/addToSet.pot -source_lang = en - -[mongodb-manual.reference--operator--update--each] -file_filter = locale//LC_MESSAGES/reference/operator/update/each.po -source_file = locale/pot/reference/operator/update/each.pot -source_lang = en - -[mongodb-manual.reference--operator--update--currentDate] -file_filter = locale//LC_MESSAGES/reference/operator/update/currentDate.po -source_file = locale/pot/reference/operator/update/currentDate.pot -source_lang = en - -[mongodb-manual.reference--operator--update--pullAll] -file_filter = locale//LC_MESSAGES/reference/operator/update/pullAll.po -source_file = locale/pot/reference/operator/update/pullAll.pot -source_lang = en - -[mongodb-manual.reference--operator--update--unset] -file_filter = locale//LC_MESSAGES/reference/operator/update/unset.po -source_file = locale/pot/reference/operator/update/unset.pot -source_lang = en - -[mongodb-manual.reference--operator--update--max] -file_filter = locale//LC_MESSAGES/reference/operator/update/max.po -source_file = locale/pot/reference/operator/update/max.pot -source_lang = en - -[mongodb-manual.reference--operator--update--bit] -file_filter = locale//LC_MESSAGES/reference/operator/update/bit.po -source_file = locale/pot/reference/operator/update/bit.pot -source_lang = en - -[mongodb-manual.reference--operator--update--set] 
-file_filter = locale//LC_MESSAGES/reference/operator/update/set.po -source_file = locale/pot/reference/operator/update/set.pot -source_lang = en - -[mongodb-manual.reference--operator--update--isolated] -file_filter = locale//LC_MESSAGES/reference/operator/update/isolated.po -source_file = locale/pot/reference/operator/update/isolated.pot -source_lang = en - -[mongodb-manual.reference--operator--update--position] -file_filter = locale//LC_MESSAGES/reference/operator/update/position.po -source_file = locale/pot/reference/operator/update/position.pot -source_lang = en - -[mongodb-manual.reference--operator--update--mul] -file_filter = locale//LC_MESSAGES/reference/operator/update/mul.po -source_file = locale/pot/reference/operator/update/mul.pot -source_lang = en - -[mongodb-manual.reference--operator--update--rename] -file_filter = locale//LC_MESSAGES/reference/operator/update/rename.po -source_file = locale/pot/reference/operator/update/rename.pot -source_lang = en - -[mongodb-manual.reference--operator--update--pull] -file_filter = locale//LC_MESSAGES/reference/operator/update/pull.po -source_file = locale/pot/reference/operator/update/pull.pot -source_lang = en - -[mongodb-manual.reference--operator--update--min] -file_filter = locale//LC_MESSAGES/reference/operator/update/min.po -source_file = locale/pot/reference/operator/update/min.pot -source_lang = en - -[mongodb-manual.reference--operator--query--or] -file_filter = locale//LC_MESSAGES/reference/operator/query/or.po -source_file = locale/pot/reference/operator/query/or.pot -source_lang = en - -[mongodb-manual.reference--operator--query--where] -file_filter = locale//LC_MESSAGES/reference/operator/query/where.po -source_file = locale/pot/reference/operator/query/where.pot -source_lang = en - -[mongodb-manual.reference--operator--query--center] -file_filter = locale//LC_MESSAGES/reference/operator/query/center.po -source_file = locale/pot/reference/operator/query/center.pot -source_lang = en - -[mongodb-manual.reference--operator--query--in] -file_filter = locale//LC_MESSAGES/reference/operator/query/in.po -source_file = locale/pot/reference/operator/query/in.pot -source_lang = en - -[mongodb-manual.reference--operator--query--geoIntersects] -file_filter = locale//LC_MESSAGES/reference/operator/query/geoIntersects.po -source_file = locale/pot/reference/operator/query/geoIntersects.pot -source_lang = en - -[mongodb-manual.reference--operator--query--geoWithin] -file_filter = locale//LC_MESSAGES/reference/operator/query/geoWithin.po -source_file = locale/pot/reference/operator/query/geoWithin.pot -source_lang = en - -[mongodb-manual.reference--operator--query--all] -file_filter = locale//LC_MESSAGES/reference/operator/query/all.po -source_file = locale/pot/reference/operator/query/all.pot -source_lang = en - -[mongodb-manual.reference--operator--query--centerSphere] -file_filter = locale//LC_MESSAGES/reference/operator/query/centerSphere.po -source_file = locale/pot/reference/operator/query/centerSphere.pot -source_lang = en - -[mongodb-manual.reference--operator--query--nin] -file_filter = locale//LC_MESSAGES/reference/operator/query/nin.po -source_file = locale/pot/reference/operator/query/nin.pot -source_lang = en - -[mongodb-manual.reference--operator--query--nor] -file_filter = locale//LC_MESSAGES/reference/operator/query/nor.po -source_file = locale/pot/reference/operator/query/nor.pot -source_lang = en - -[mongodb-manual.reference--operator--query--polygon] -file_filter = locale//LC_MESSAGES/reference/operator/query/polygon.po 
-source_file = locale/pot/reference/operator/query/polygon.pot -source_lang = en - -[mongodb-manual.reference--operator--query--exists] -file_filter = locale//LC_MESSAGES/reference/operator/query/exists.po -source_file = locale/pot/reference/operator/query/exists.pot -source_lang = en - -[mongodb-manual.reference--operator--query--gt] -file_filter = locale//LC_MESSAGES/reference/operator/query/gt.po -source_file = locale/pot/reference/operator/query/gt.pot -source_lang = en - -[mongodb-manual.reference--operator--query--geometry] -file_filter = locale//LC_MESSAGES/reference/operator/query/geometry.po -source_file = locale/pot/reference/operator/query/geometry.pot -source_lang = en - -[mongodb-manual.reference--operator--query--uniqueDocs] -file_filter = locale//LC_MESSAGES/reference/operator/query/uniqueDocs.po -source_file = locale/pot/reference/operator/query/uniqueDocs.pot -source_lang = en - -[mongodb-manual.reference--operator--query--mod] -file_filter = locale//LC_MESSAGES/reference/operator/query/mod.po -source_file = locale/pot/reference/operator/query/mod.pot -source_lang = en - -[mongodb-manual.reference--operator--query--regex] -file_filter = locale//LC_MESSAGES/reference/operator/query/regex.po -source_file = locale/pot/reference/operator/query/regex.pot -source_lang = en - -[mongodb-manual.reference--operator--query--near] -file_filter = locale//LC_MESSAGES/reference/operator/query/near.po -source_file = locale/pot/reference/operator/query/near.pot -source_lang = en - -[mongodb-manual.reference--operator--query--ne] -file_filter = locale//LC_MESSAGES/reference/operator/query/ne.po -source_file = locale/pot/reference/operator/query/ne.pot -source_lang = en - -[mongodb-manual.reference--operator--query--maxDistance] -file_filter = locale//LC_MESSAGES/reference/operator/query/maxDistance.po -source_file = locale/pot/reference/operator/query/maxDistance.pot -source_lang = en - -[mongodb-manual.reference--operator--query--lt] -file_filter = locale//LC_MESSAGES/reference/operator/query/lt.po -source_file = locale/pot/reference/operator/query/lt.pot -source_lang = en - -[mongodb-manual.reference--operator--query--gte] -file_filter = locale//LC_MESSAGES/reference/operator/query/gte.po -source_file = locale/pot/reference/operator/query/gte.pot -source_lang = en - -[mongodb-manual.reference--operator--query--box] -file_filter = locale//LC_MESSAGES/reference/operator/query/box.po -source_file = locale/pot/reference/operator/query/box.pot -source_lang = en - -[mongodb-manual.reference--operator--query--nearSphere] -file_filter = locale//LC_MESSAGES/reference/operator/query/nearSphere.po -source_file = locale/pot/reference/operator/query/nearSphere.pot -source_lang = en - -[mongodb-manual.reference--operator--query--type] -file_filter = locale//LC_MESSAGES/reference/operator/query/type.po -source_file = locale/pot/reference/operator/query/type.pot -source_lang = en - -[mongodb-manual.reference--operator--query--and] -file_filter = locale//LC_MESSAGES/reference/operator/query/and.po -source_file = locale/pot/reference/operator/query/and.pot -source_lang = en - -[mongodb-manual.reference--operator--query--elemMatch] -file_filter = locale//LC_MESSAGES/reference/operator/query/elemMatch.po -source_file = locale/pot/reference/operator/query/elemMatch.pot -source_lang = en - -[mongodb-manual.reference--operator--query--size] -file_filter = locale//LC_MESSAGES/reference/operator/query/size.po -source_file = locale/pot/reference/operator/query/size.pot -source_lang = en - 
-[mongodb-manual.reference--operator--query--lte] -file_filter = locale//LC_MESSAGES/reference/operator/query/lte.po -source_file = locale/pot/reference/operator/query/lte.pot -source_lang = en - -[mongodb-manual.reference--operator--query--not] -file_filter = locale//LC_MESSAGES/reference/operator/query/not.po -source_file = locale/pot/reference/operator/query/not.pot -source_lang = en - -[mongodb-manual.reference--operator--meta--maxScan] -file_filter = locale//LC_MESSAGES/reference/operator/meta/maxScan.po -source_file = locale/pot/reference/operator/meta/maxScan.pot -source_lang = en - -[mongodb-manual.reference--operator--meta--comment] -file_filter = locale//LC_MESSAGES/reference/operator/meta/comment.po -source_file = locale/pot/reference/operator/meta/comment.pot -source_lang = en - -[mongodb-manual.reference--operator--meta--query] -file_filter = locale//LC_MESSAGES/reference/operator/meta/query.po -source_file = locale/pot/reference/operator/meta/query.pot -source_lang = en - -[mongodb-manual.reference--operator--meta--snapshot] -file_filter = locale//LC_MESSAGES/reference/operator/meta/snapshot.po -source_file = locale/pot/reference/operator/meta/snapshot.pot -source_lang = en - -[mongodb-manual.reference--operator--meta--natural] -file_filter = locale//LC_MESSAGES/reference/operator/meta/natural.po -source_file = locale/pot/reference/operator/meta/natural.pot -source_lang = en - -[mongodb-manual.reference--operator--meta--max] -file_filter = locale//LC_MESSAGES/reference/operator/meta/max.po -source_file = locale/pot/reference/operator/meta/max.pot -source_lang = en - -[mongodb-manual.reference--operator--meta--showDiskLoc] -file_filter = locale//LC_MESSAGES/reference/operator/meta/showDiskLoc.po -source_file = locale/pot/reference/operator/meta/showDiskLoc.pot -source_lang = en - -[mongodb-manual.reference--operator--meta--maxTimeMS] -file_filter = locale//LC_MESSAGES/reference/operator/meta/maxTimeMS.po -source_file = locale/pot/reference/operator/meta/maxTimeMS.pot -source_lang = en - -[mongodb-manual.reference--operator--meta--orderby] -file_filter = locale//LC_MESSAGES/reference/operator/meta/orderby.po -source_file = locale/pot/reference/operator/meta/orderby.pot -source_lang = en - -[mongodb-manual.reference--operator--meta--explain] -file_filter = locale//LC_MESSAGES/reference/operator/meta/explain.po -source_file = locale/pot/reference/operator/meta/explain.pot -source_lang = en - -[mongodb-manual.reference--operator--meta--hint] -file_filter = locale//LC_MESSAGES/reference/operator/meta/hint.po -source_file = locale/pot/reference/operator/meta/hint.pot -source_lang = en - -[mongodb-manual.reference--operator--meta--min] -file_filter = locale//LC_MESSAGES/reference/operator/meta/min.po -source_file = locale/pot/reference/operator/meta/min.pot -source_lang = en - -[mongodb-manual.reference--operator--meta--returnKey] -file_filter = locale//LC_MESSAGES/reference/operator/meta/returnKey.po -source_file = locale/pot/reference/operator/meta/returnKey.pot -source_lang = en - -[mongodb-manual.reference--operator--projection--slice] -file_filter = locale//LC_MESSAGES/reference/operator/projection/slice.po -source_file = locale/pot/reference/operator/projection/slice.pot -source_lang = en - -[mongodb-manual.reference--operator--projection--positional] -file_filter = locale//LC_MESSAGES/reference/operator/projection/positional.po -source_file = locale/pot/reference/operator/projection/positional.pot -source_lang = en - -[mongodb-manual.reference--operator--projection--elemMatch] 
-file_filter = locale//LC_MESSAGES/reference/operator/projection/elemMatch.po -source_file = locale/pot/reference/operator/projection/elemMatch.pot -source_lang = en - -[mongodb-manual.reference--command--whatsmyuri] -file_filter = locale//LC_MESSAGES/reference/command/whatsmyuri.po -source_file = locale/pot/reference/command/whatsmyuri.pot -source_lang = en - -[mongodb-manual.reference--command--dropDatabase] -file_filter = locale//LC_MESSAGES/reference/command/dropDatabase.po -source_file = locale/pot/reference/command/dropDatabase.pot -source_lang = en - -[mongodb-manual.reference--command--handshake] -file_filter = locale//LC_MESSAGES/reference/command/handshake.po -source_file = locale/pot/reference/command/handshake.pot -source_lang = en - -[mongodb-manual.reference--command--shardingState] -file_filter = locale//LC_MESSAGES/reference/command/shardingState.po -source_file = locale/pot/reference/command/shardingState.pot -source_lang = en - -[mongodb-manual.reference--command--getnonce] -file_filter = locale//LC_MESSAGES/reference/command/getnonce.po -source_file = locale/pot/reference/command/getnonce.pot -source_lang = en - -[mongodb-manual.reference--command--sleep] -file_filter = locale//LC_MESSAGES/reference/command/sleep.po -source_file = locale/pot/reference/command/sleep.pot -source_lang = en - -[mongodb-manual.reference--command--logApplicationMessage] -file_filter = locale//LC_MESSAGES/reference/command/logApplicationMessage.po -source_file = locale/pot/reference/command/logApplicationMessage.pot -source_lang = en - -[mongodb-manual.reference--command--eval] -file_filter = locale//LC_MESSAGES/reference/command/eval.po -source_file = locale/pot/reference/command/eval.pot -source_lang = en - -[mongodb-manual.reference--command--getPrevError] -file_filter = locale//LC_MESSAGES/reference/command/getPrevError.po -source_file = locale/pot/reference/command/getPrevError.pot -source_lang = en - -[mongodb-manual.reference--command--clean] -file_filter = locale//LC_MESSAGES/reference/command/clean.po -source_file = locale/pot/reference/command/clean.pot -source_lang = en - -[mongodb-manual.reference--command--skewClockCommand] -file_filter = locale//LC_MESSAGES/reference/command/skewClockCommand.po -source_file = locale/pot/reference/command/skewClockCommand.pot -source_lang = en - -[mongodb-manual.reference--command--godinsert] -file_filter = locale//LC_MESSAGES/reference/command/godinsert.po -source_file = locale/pot/reference/command/godinsert.pot -source_lang = en - -[mongodb-manual.reference--command--nav-auditing] -file_filter = locale//LC_MESSAGES/reference/command/nav-auditing.po -source_file = locale/pot/reference/command/nav-auditing.pot -source_lang = en - -[mongodb-manual.reference--command--buildInfo] -file_filter = locale//LC_MESSAGES/reference/command/buildInfo.po -source_file = locale/pot/reference/command/buildInfo.pot -source_lang = en - -[mongodb-manual.reference--command--listDatabases] -file_filter = locale//LC_MESSAGES/reference/command/listDatabases.po -source_file = locale/pot/reference/command/listDatabases.pot -source_lang = en - -[mongodb-manual.reference--command--dropAllRolesFromDatabase] -file_filter = locale//LC_MESSAGES/reference/command/dropAllRolesFromDatabase.po -source_file = locale/pot/reference/command/dropAllRolesFromDatabase.pot -source_lang = en - -[mongodb-manual.reference--command--convertToCapped] -file_filter = locale//LC_MESSAGES/reference/command/convertToCapped.po -source_file = locale/pot/reference/command/convertToCapped.pot 
-source_lang = en - -[mongodb-manual.reference--command--mapReduce] -file_filter = locale//LC_MESSAGES/reference/command/mapReduce.po -source_file = locale/pot/reference/command/mapReduce.pot -source_lang = en - -[mongodb-manual.reference--command--authenticate] -file_filter = locale//LC_MESSAGES/reference/command/authenticate.po -source_file = locale/pot/reference/command/authenticate.pot -source_lang = en - -[mongodb-manual.reference--command--group] -file_filter = locale//LC_MESSAGES/reference/command/group.po -source_file = locale/pot/reference/command/group.pot -source_lang = en - -[mongodb-manual.reference--command--nav-crud] -file_filter = locale//LC_MESSAGES/reference/command/nav-crud.po -source_file = locale/pot/reference/command/nav-crud.pot -source_lang = en - -[mongodb-manual.reference--command--medianKey] -file_filter = locale//LC_MESSAGES/reference/command/medianKey.po -source_file = locale/pot/reference/command/medianKey.pot -source_lang = en - -[mongodb-manual.reference--command--replSetFreeze] -file_filter = locale//LC_MESSAGES/reference/command/replSetFreeze.po -source_file = locale/pot/reference/command/replSetFreeze.pot -source_lang = en - -[mongodb-manual.reference--command--listCommands] -file_filter = locale//LC_MESSAGES/reference/command/listCommands.po -source_file = locale/pot/reference/command/listCommands.pot -source_lang = en - -[mongodb-manual.reference--command--nav-diagnostic] -file_filter = locale//LC_MESSAGES/reference/command/nav-diagnostic.po -source_file = locale/pot/reference/command/nav-diagnostic.pot -source_lang = en - -[mongodb-manual.reference--command--grantRolesToRole] -file_filter = locale//LC_MESSAGES/reference/command/grantRolesToRole.po -source_file = locale/pot/reference/command/grantRolesToRole.pot -source_lang = en - -[mongodb-manual.reference--command--updateRole] -file_filter = locale//LC_MESSAGES/reference/command/updateRole.po -source_file = locale/pot/reference/command/updateRole.pot -source_lang = en - -[mongodb-manual.reference--command--cleanupOrphaned] -file_filter = locale//LC_MESSAGES/reference/command/cleanupOrphaned.po -source_file = locale/pot/reference/command/cleanupOrphaned.pot -source_lang = en - -[mongodb-manual.reference--command--getParameter] -file_filter = locale//LC_MESSAGES/reference/command/getParameter.po -source_file = locale/pot/reference/command/getParameter.pot -source_lang = en - -[mongodb-manual.reference--command--nav-administration] -file_filter = locale//LC_MESSAGES/reference/command/nav-administration.po -source_file = locale/pot/reference/command/nav-administration.pot -source_lang = en - -[mongodb-manual.reference--command--revokeRolesFromRole] -file_filter = locale//LC_MESSAGES/reference/command/revokeRolesFromRole.po -source_file = locale/pot/reference/command/revokeRolesFromRole.pot -source_lang = en - -[mongodb-manual.reference--command--nav-sharding] -file_filter = locale//LC_MESSAGES/reference/command/nav-sharding.po -source_file = locale/pot/reference/command/nav-sharding.pot -source_lang = en - -[mongodb-manual.reference--command--testDistLockWithSyncCluster] -file_filter = locale//LC_MESSAGES/reference/command/testDistLockWithSyncCluster.po -source_file = locale/pot/reference/command/testDistLockWithSyncCluster.pot -source_lang = en - -[mongodb-manual.reference--command--recvChunkStatus] -file_filter = locale//LC_MESSAGES/reference/command/recvChunkStatus.po -source_file = locale/pot/reference/command/recvChunkStatus.pot -source_lang = en - -[mongodb-manual.reference--command--createUser] 
-file_filter = locale//LC_MESSAGES/reference/command/createUser.po -source_file = locale/pot/reference/command/createUser.pot -source_lang = en - -[mongodb-manual.reference--command--top] -file_filter = locale//LC_MESSAGES/reference/command/top.po -source_file = locale/pot/reference/command/top.pot -source_lang = en - -[mongodb-manual.reference--command--dbHash] -file_filter = locale//LC_MESSAGES/reference/command/dbHash.po -source_file = locale/pot/reference/command/dbHash.pot -source_lang = en - -[mongodb-manual.reference--command--mapreduce_shardedfinish] -file_filter = locale//LC_MESSAGES/reference/command/mapreduce.shardedfinish.po -source_file = locale/pot/reference/command/mapreduce.shardedfinish.pot -source_lang = en - -[mongodb-manual.reference--command--updateUser] -file_filter = locale//LC_MESSAGES/reference/command/updateUser.po -source_file = locale/pot/reference/command/updateUser.pot -source_lang = en - -[mongodb-manual.reference--command--configureFailPoint] -file_filter = locale//LC_MESSAGES/reference/command/configureFailPoint.po -source_file = locale/pot/reference/command/configureFailPoint.pot -source_lang = en - -[mongodb-manual.reference--command--nav-authentication] -file_filter = locale//LC_MESSAGES/reference/command/nav-authentication.po -source_file = locale/pot/reference/command/nav-authentication.pot -source_lang = en - -[mongodb-manual.reference--command--features] -file_filter = locale//LC_MESSAGES/reference/command/features.po -source_file = locale/pot/reference/command/features.pot -source_lang = en - -[mongodb-manual.reference--command--geoNear] -file_filter = locale//LC_MESSAGES/reference/command/geoNear.po -source_file = locale/pot/reference/command/geoNear.pot -source_lang = en - -[mongodb-manual.reference--command--repairDatabase] -file_filter = locale//LC_MESSAGES/reference/command/repairDatabase.po -source_file = locale/pot/reference/command/repairDatabase.pot -source_lang = en - -[mongodb-manual.reference--command--replSetHeartbeat] -file_filter = locale//LC_MESSAGES/reference/command/replSetHeartbeat.po -source_file = locale/pot/reference/command/replSetHeartbeat.pot -source_lang = en - -[mongodb-manual.reference--command--resetError] -file_filter = locale//LC_MESSAGES/reference/command/resetError.po -source_file = locale/pot/reference/command/resetError.pot -source_lang = en - -[mongodb-manual.reference--command--split] -file_filter = locale//LC_MESSAGES/reference/command/split.po -source_file = locale/pot/reference/command/split.pot -source_lang = en - -[mongodb-manual.reference--command--shardCollection] -file_filter = locale//LC_MESSAGES/reference/command/shardCollection.po -source_file = locale/pot/reference/command/shardCollection.pot -source_lang = en - -[mongodb-manual.reference--command--writeBacksQueued] -file_filter = locale//LC_MESSAGES/reference/command/writeBacksQueued.po -source_file = locale/pot/reference/command/writeBacksQueued.pot -source_lang = en - -[mongodb-manual.reference--command--profile] -file_filter = locale//LC_MESSAGES/reference/command/profile.po -source_file = locale/pot/reference/command/profile.pot -source_lang = en - -[mongodb-manual.reference--command--journalLatencyTest] -file_filter = locale//LC_MESSAGES/reference/command/journalLatencyTest.po -source_file = locale/pot/reference/command/journalLatencyTest.pot -source_lang = en - -[mongodb-manual.reference--command--resync] -file_filter = locale//LC_MESSAGES/reference/command/resync.po -source_file = locale/pot/reference/command/resync.pot -source_lang = en - 
-[mongodb-manual.reference--command--indexStats] -file_filter = locale//LC_MESSAGES/reference/command/indexStats.po -source_file = locale/pot/reference/command/indexStats.pot -source_lang = en - -[mongodb-manual.reference--command--replSetReconfig] -file_filter = locale//LC_MESSAGES/reference/command/replSetReconfig.po -source_file = locale/pot/reference/command/replSetReconfig.pot -source_lang = en - -[mongodb-manual.reference--command--availableQueryOptions] -file_filter = locale//LC_MESSAGES/reference/command/availableQueryOptions.po -source_file = locale/pot/reference/command/availableQueryOptions.pot -source_lang = en - -[mongodb-manual.reference--command--recvChunkStart] -file_filter = locale//LC_MESSAGES/reference/command/recvChunkStart.po -source_file = locale/pot/reference/command/recvChunkStart.pot -source_lang = en - -[mongodb-manual.reference--command--validate] -file_filter = locale//LC_MESSAGES/reference/command/validate.po -source_file = locale/pot/reference/command/validate.pot -source_lang = en - -[mongodb-manual.reference--command--writebacklisten] -file_filter = locale//LC_MESSAGES/reference/command/writebacklisten.po -source_file = locale/pot/reference/command/writebacklisten.pot -source_lang = en - -[mongodb-manual.reference--command--update] -file_filter = locale//LC_MESSAGES/reference/command/update.po -source_file = locale/pot/reference/command/update.pot -source_lang = en - -[mongodb-manual.reference--command--connPoolStats] -file_filter = locale//LC_MESSAGES/reference/command/connPoolStats.po -source_file = locale/pot/reference/command/connPoolStats.pot -source_lang = en - -[mongodb-manual.reference--command--touch] -file_filter = locale//LC_MESSAGES/reference/command/touch.po -source_file = locale/pot/reference/command/touch.pot -source_lang = en - -[mongodb-manual.reference--command--isSelf] -file_filter = locale//LC_MESSAGES/reference/command/isSelf.po -source_file = locale/pot/reference/command/isSelf.pot -source_lang = en - -[mongodb-manual.reference--command--removeShard] -file_filter = locale//LC_MESSAGES/reference/command/removeShard.po -source_file = locale/pot/reference/command/removeShard.pot -source_lang = en - -[mongodb-manual.reference--command--copydb] -file_filter = locale//LC_MESSAGES/reference/command/copydb.po -source_file = locale/pot/reference/command/copydb.pot -source_lang = en - -[mongodb-manual.reference--command--getoptime] -file_filter = locale//LC_MESSAGES/reference/command/getoptime.po -source_file = locale/pot/reference/command/getoptime.pot -source_lang = en - -[mongodb-manual.reference--command--cloneCollectionAsCapped] -file_filter = locale//LC_MESSAGES/reference/command/cloneCollectionAsCapped.po -source_file = locale/pot/reference/command/cloneCollectionAsCapped.pot -source_lang = en - -[mongodb-manual.reference--command--cloneCollection] -file_filter = locale//LC_MESSAGES/reference/command/cloneCollection.po -source_file = locale/pot/reference/command/cloneCollection.pot -source_lang = en - -[mongodb-manual.reference--command--replSetSyncFrom] -file_filter = locale//LC_MESSAGES/reference/command/replSetSyncFrom.po -source_file = locale/pot/reference/command/replSetSyncFrom.pot -source_lang = en - -[mongodb-manual.reference--command--replSetStepDown] -file_filter = locale//LC_MESSAGES/reference/command/replSetStepDown.po -source_file = locale/pot/reference/command/replSetStepDown.pot -source_lang = en - -[mongodb-manual.reference--command--revokeRolesFromUser] -file_filter = 
locale//LC_MESSAGES/reference/command/revokeRolesFromUser.po -source_file = locale/pot/reference/command/revokeRolesFromUser.pot -source_lang = en - -[mongodb-manual.reference--command--forceerror] -file_filter = locale//LC_MESSAGES/reference/command/forceerror.po -source_file = locale/pot/reference/command/forceerror.pot -source_lang = en - -[mongodb-manual.reference--command--setShardVersion] -file_filter = locale//LC_MESSAGES/reference/command/setShardVersion.po -source_file = locale/pot/reference/command/setShardVersion.pot -source_lang = en - -[mongodb-manual.reference--command--geoWalk] -file_filter = locale//LC_MESSAGES/reference/command/geoWalk.po -source_file = locale/pot/reference/command/geoWalk.pot -source_lang = en - -[mongodb-manual.reference--command--rolesInfo] -file_filter = locale//LC_MESSAGES/reference/command/rolesInfo.po -source_file = locale/pot/reference/command/rolesInfo.pot -source_lang = en - -[mongodb-manual.reference--command--mergeChunks] -file_filter = locale//LC_MESSAGES/reference/command/mergeChunks.po -source_file = locale/pot/reference/command/mergeChunks.pot -source_lang = en - -[mongodb-manual.reference--command--aggregate] -file_filter = locale//LC_MESSAGES/reference/command/aggregate.po -source_file = locale/pot/reference/command/aggregate.pot -source_lang = en - -[mongodb-manual.reference--command--usersInfo] -file_filter = locale//LC_MESSAGES/reference/command/usersInfo.po -source_file = locale/pot/reference/command/usersInfo.pot -source_lang = en - -[mongodb-manual.reference--command--splitVector] -file_filter = locale//LC_MESSAGES/reference/command/splitVector.po -source_file = locale/pot/reference/command/splitVector.pot -source_lang = en - -[mongodb-manual.reference--command--nav-replication] -file_filter = locale//LC_MESSAGES/reference/command/nav-replication.po -source_file = locale/pot/reference/command/nav-replication.pot -source_lang = en - -[mongodb-manual.reference--command--transferMods] -file_filter = locale//LC_MESSAGES/reference/command/transferMods.po -source_file = locale/pot/reference/command/transferMods.pot -source_lang = en - -[mongodb-manual.reference--command--replSetElect] -file_filter = locale//LC_MESSAGES/reference/command/replSetElect.po -source_file = locale/pot/reference/command/replSetElect.pot -source_lang = en - -[mongodb-manual.reference--command--isMaster] -file_filter = locale//LC_MESSAGES/reference/command/isMaster.po -source_file = locale/pot/reference/command/isMaster.pot -source_lang = en - -[mongodb-manual.reference--command--grantRolesToUser] -file_filter = locale//LC_MESSAGES/reference/command/grantRolesToUser.po -source_file = locale/pot/reference/command/grantRolesToUser.pot -source_lang = en - -[mongodb-manual.reference--command--clone] -file_filter = locale//LC_MESSAGES/reference/command/clone.po -source_file = locale/pot/reference/command/clone.pot -source_lang = en - -[mongodb-manual.reference--command--dataSize] -file_filter = locale//LC_MESSAGES/reference/command/dataSize.po -source_file = locale/pot/reference/command/dataSize.pot -source_lang = en - -[mongodb-manual.reference--command--text] -file_filter = locale//LC_MESSAGES/reference/command/text.po -source_file = locale/pot/reference/command/text.pot -source_lang = en - -[mongodb-manual.reference--command--getLog] -file_filter = locale//LC_MESSAGES/reference/command/getLog.po -source_file = locale/pot/reference/command/getLog.pot -source_lang = en - -[mongodb-manual.reference--command--getCmdLineOpts] -file_filter = 
locale//LC_MESSAGES/reference/command/getCmdLineOpts.po
-source_file = locale/pot/reference/command/getCmdLineOpts.pot
-source_lang = en
-
[... removed .tx/config stanzas for the manual's reference/command, reference/program, reference/method, and core pages, flattened in extraction; each stanza has the same form: [mongodb-manual.<page-id>], file_filter = locale//LC_MESSAGES/<page-path>.po, source_file = locale/pot/<page-path>.pot, source_lang = en ...]
-[mongodb-manual.core--replica-set-architectures] -file_filter = locale//LC_MESSAGES/core/replica-set-architectures.po -source_file = locale/pot/core/replica-set-architectures.pot -source_lang = en - -[mongodb-manual.core--replica-set-members] -file_filter = locale//LC_MESSAGES/core/replica-set-members.po -source_file = locale/pot/core/replica-set-members.pot -source_lang = en - -[mongodb-manual.core--sharded-cluster-mechanics] -file_filter = locale//LC_MESSAGES/core/sharded-cluster-mechanics.po -source_file = locale/pot/core/sharded-cluster-mechanics.pot -source_lang = en - -[mongodb-manual.core--write-operations] -file_filter = locale//LC_MESSAGES/core/write-operations.po -source_file = locale/pot/core/write-operations.pot -source_lang = en - -[mongodb-manual.core--server-side-javascript] -file_filter = locale//LC_MESSAGES/core/server-side-javascript.po -source_file = locale/pot/core/server-side-javascript.pot -source_lang = en - -[mongodb-manual.core--sharding-chunk-splitting] -file_filter = locale//LC_MESSAGES/core/sharding-chunk-splitting.po -source_file = locale/pot/core/sharding-chunk-splitting.pot -source_lang = en - -[mongodb-manual.core--crud] -file_filter = locale//LC_MESSAGES/core/crud.po -source_file = locale/pot/core/crud.pot -source_lang = en - -[mongodb-manual.core--write-concern] -file_filter = locale//LC_MESSAGES/core/write-concern.po -source_file = locale/pot/core/write-concern.pot -source_lang = en - -[mongodb-manual.core--operational-segregation] -file_filter = locale//LC_MESSAGES/core/operational-segregation.po -source_file = locale/pot/core/operational-segregation.pot -source_lang = en - -[mongodb-manual.core--sharded-cluster-components] -file_filter = locale//LC_MESSAGES/core/sharded-cluster-components.po -source_file = locale/pot/core/sharded-cluster-components.pot -source_lang = en - -[mongodb-manual.core--auditing] -file_filter = locale//LC_MESSAGES/core/auditing.po -source_file = locale/pot/core/auditing.pot -source_lang = en - -[mongodb-manual.core--indexes-introduction] -file_filter = locale//LC_MESSAGES/core/indexes-introduction.po -source_file = locale/pot/core/indexes-introduction.pot -source_lang = en - -[mongodb-manual.core--shell-types] -file_filter = locale//LC_MESSAGES/core/shell-types.po -source_file = locale/pot/core/shell-types.pot -source_lang = en - -[mongodb-manual.core--data-models] -file_filter = locale//LC_MESSAGES/core/data-models.po -source_file = locale/pot/core/data-models.pot -source_lang = en - -[mongodb-manual.core--sharded-cluster-high-availability] -file_filter = locale//LC_MESSAGES/core/sharded-cluster-high-availability.po -source_file = locale/pot/core/sharded-cluster-high-availability.pot -source_lang = en - -[mongodb-manual.core--security-network] -file_filter = locale//LC_MESSAGES/core/security-network.po -source_file = locale/pot/core/security-network.pot -source_lang = en - -[mongodb-manual.core--map-reduce-sharded-collections] -file_filter = locale//LC_MESSAGES/core/map-reduce-sharded-collections.po -source_file = locale/pot/core/map-reduce-sharded-collections.pot -source_lang = en - -[mongodb-manual.core--index-properties] -file_filter = locale//LC_MESSAGES/core/index-properties.po -source_file = locale/pot/core/index-properties.pot -source_lang = en - -[mongodb-manual.core--replica-set-delayed-member] -file_filter = locale//LC_MESSAGES/core/replica-set-delayed-member.po -source_file = locale/pot/core/replica-set-delayed-member.pot -source_lang = en - -[mongodb-manual.core--geospatial-indexes] -file_filter = 
locale//LC_MESSAGES/core/geospatial-indexes.po -source_file = locale/pot/core/geospatial-indexes.pot -source_lang = en - -[mongodb-manual.core--sharding-introduction] -file_filter = locale//LC_MESSAGES/core/sharding-introduction.po -source_file = locale/pot/core/sharding-introduction.pot -source_lang = en - -[mongodb-manual.core--replication] -file_filter = locale//LC_MESSAGES/core/replication.po -source_file = locale/pot/core/replication.pot -source_lang = en - -[mongodb-manual.core--read-preference-mechanics] -file_filter = locale//LC_MESSAGES/core/read-preference-mechanics.po -source_file = locale/pot/core/read-preference-mechanics.pot -source_lang = en - -[mongodb-manual.core--aggregation-introduction] -file_filter = locale//LC_MESSAGES/core/aggregation-introduction.po -source_file = locale/pot/core/aggregation-introduction.pot -source_lang = en - -[mongodb-manual.core--map-reduce] -file_filter = locale//LC_MESSAGES/core/map-reduce.po -source_file = locale/pot/core/map-reduce.pot -source_lang = en - -[mongodb-manual.core--crud-introduction] -file_filter = locale//LC_MESSAGES/core/crud-introduction.po -source_file = locale/pot/core/crud-introduction.pot -source_lang = en - -[mongodb-manual.core--aggregation] -file_filter = locale//LC_MESSAGES/core/aggregation.po -source_file = locale/pot/core/aggregation.pot -source_lang = en - -[mongodb-manual.core--aggregation-pipeline-sharded-collections] -file_filter = locale//LC_MESSAGES/core/aggregation-pipeline-sharded-collections.po -source_file = locale/pot/core/aggregation-pipeline-sharded-collections.pot -source_lang = en - -[mongodb-manual.core--2dsphere] -file_filter = locale//LC_MESSAGES/core/2dsphere.po -source_file = locale/pot/core/2dsphere.pot -source_lang = en - -[mongodb-manual.core--bulk-inserts] -file_filter = locale//LC_MESSAGES/core/bulk-inserts.po -source_file = locale/pot/core/bulk-inserts.pot -source_lang = en - -[mongodb-manual.core--replica-set-rollbacks] -file_filter = locale//LC_MESSAGES/core/replica-set-rollbacks.po -source_file = locale/pot/core/replica-set-rollbacks.pot -source_lang = en - -[mongodb-manual.core--cursors] -file_filter = locale//LC_MESSAGES/core/cursors.po -source_file = locale/pot/core/cursors.pot -source_lang = en - -[mongodb-manual.core--write-performance] -file_filter = locale//LC_MESSAGES/core/write-performance.po -source_file = locale/pot/core/write-performance.pot -source_lang = en - -[mongodb-manual.core--read-operations] -file_filter = locale//LC_MESSAGES/core/read-operations.po -source_file = locale/pot/core/read-operations.pot -source_lang = en - -[mongodb-manual.tutorial--add-admin-user] -file_filter = locale//LC_MESSAGES/tutorial/add-admin-user.po -source_file = locale/pot/tutorial/add-admin-user.pot -source_lang = en - -[mongodb-manual.tutorial--verify-user-privileges] -file_filter = locale//LC_MESSAGES/tutorial/verify-user-privileges.po -source_file = locale/pot/tutorial/verify-user-privileges.pot -source_lang = en - -[mongodb-manual.tutorial--assign-role-to-user] -file_filter = locale//LC_MESSAGES/tutorial/assign-role-to-user.po -source_file = locale/pot/tutorial/assign-role-to-user.pot -source_lang = en - -[mongodb-manual.administration--security-checklist] -file_filter = locale//LC_MESSAGES/administration/security-checklist.po -source_file = locale/pot/administration/security-checklist.pot -source_lang = en - -[mongodb-manual.administration--security-user-role-management] -file_filter = locale//LC_MESSAGES/administration/security-user-role-management.po -source_file = 
locale/pot/administration/security-user-role-management.pot -source_lang = en - -[mongodb-manual.administration--install-enterprise] -file_filter = locale//LC_MESSAGES/administration/install-enterprise.po -source_file = locale/pot/administration/install-enterprise.pot -source_lang = en - -[mongodb-manual.tutorial--control-access-to-mongodb-windows-with-kerberos-authentication] -file_filter = locale//LC_MESSAGES/tutorial/control-access-to-mongodb-windows-with-kerberos-authentication.po -source_file = locale/pot/tutorial/control-access-to-mongodb-windows-with-kerberos-authentication.pot -source_lang = en - -[mongodb-manual.tutorial--enable-authentication-without-bypass] -file_filter = locale//LC_MESSAGES/tutorial/enable-authentication-without-bypass.po -source_file = locale/pot/tutorial/enable-authentication-without-bypass.pot -source_lang = en - -[mongodb-manual.tutorial--troubleshoot-kerberos] -file_filter = locale//LC_MESSAGES/tutorial/troubleshoot-kerberos.po -source_file = locale/pot/tutorial/troubleshoot-kerberos.pot -source_lang = en - -[mongodb-manual.tutorial--text-search-in-aggregation] -file_filter = locale//LC_MESSAGES/tutorial/text-search-in-aggregation.po -source_file = locale/pot/tutorial/text-search-in-aggregation.pot -source_lang = en - -[mongodb-manual.tutorial--perform-maintence-on-replica-set-members] -file_filter = locale//LC_MESSAGES/tutorial/perform-maintence-on-replica-set-members.po -source_file = locale/pot/tutorial/perform-maintence-on-replica-set-members.pot -source_lang = en - -[mongodb-manual.tutorial--authenticate-as-client] -file_filter = locale//LC_MESSAGES/tutorial/authenticate-as-client.po -source_file = locale/pot/tutorial/authenticate-as-client.pot -source_lang = en - -[mongodb-manual.meta--includes] -file_filter = locale//LC_MESSAGES/meta/includes.po -source_file = locale/pot/meta/includes.pot -source_lang = en - -[mongodb-manual.reference--text-search-languages] -file_filter = locale//LC_MESSAGES/reference/text-search-languages.po -source_file = locale/pot/reference/text-search-languages.pot -source_lang = en - -[mongodb-manual.reference--audit-message] -file_filter = locale//LC_MESSAGES/reference/audit-message.po -source_file = locale/pot/reference/audit-message.pot -source_lang = en - -[mongodb-manual.reference--built-in-roles] -file_filter = locale//LC_MESSAGES/reference/built-in-roles.po -source_file = locale/pot/reference/built-in-roles.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation-text-search] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation-text-search.po -source_file = locale/pot/reference/operator/aggregation-text-search.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation--meta] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation/meta.po -source_file = locale/pot/reference/operator/aggregation/meta.pot -source_lang = en - -[mongodb-manual.release-notes--2_4-changelog] -file_filter = locale//LC_MESSAGES/release-notes/2.4-changelog.po -source_file = locale/pot/release-notes/2.4-changelog.pot -source_lang = en - -[mongodb-manual.release-notes--2_6-compatibility] -file_filter = locale//LC_MESSAGES/release-notes/2.6-compatibility.po -source_file = locale/pot/release-notes/2.6-compatibility.pot -source_lang = en - -[mongodb-manual.release-notes--2_6-downgrade] -file_filter = locale//LC_MESSAGES/release-notes/2.6-downgrade.po -source_file = locale/pot/release-notes/2.6-downgrade.pot -source_lang = en - -[mongodb-manual.release-notes--2_6-upgrade-authorization] 
-file_filter = locale//LC_MESSAGES/release-notes/2.6-upgrade-authorization.po -source_file = locale/pot/release-notes/2.6-upgrade-authorization.pot -source_lang = en - -[mongodb-manual.administration--security-deployment] -file_filter = locale//LC_MESSAGES/administration/security-deployment.po -source_file = locale/pot/administration/security-deployment.pot -source_lang = en - -[mongodb-manual.tutorial--implement-redaction-with-multiple-tags] -file_filter = locale//LC_MESSAGES/tutorial/implement-redaction-with-multiple-tags.po -source_file = locale/pot/tutorial/implement-redaction-with-multiple-tags.pot -source_lang = en - -[mongodb-manual.tutorial--install-mongodb-enterprise-on-ubuntu] -file_filter = locale//LC_MESSAGES/tutorial/install-mongodb-enterprise-on-ubuntu.po -source_file = locale/pot/tutorial/install-mongodb-enterprise-on-ubuntu.pot -source_lang = en - -[mongodb-manual.tutorial--control-access-to-document-content-with-field-level-security] -file_filter = locale//LC_MESSAGES/tutorial/control-access-to-document-content-with-field-level-security.po -source_file = locale/pot/tutorial/control-access-to-document-content-with-field-level-security.pot -source_lang = en - -[mongodb-manual.tutorial--backup-with-mongodump] -file_filter = locale//LC_MESSAGES/tutorial/backup-with-mongodump.po -source_file = locale/pot/tutorial/backup-with-mongodump.pot -source_lang = en - -[mongodb-manual.tutorial--backup-with-filesystem-snapshots] -file_filter = locale//LC_MESSAGES/tutorial/backup-with-filesystem-snapshots.po -source_file = locale/pot/tutorial/backup-with-filesystem-snapshots.pot -source_lang = en - -[mongodb-manual.tutorial--install-mongodb-enterprise-on-amazon] -file_filter = locale//LC_MESSAGES/tutorial/install-mongodb-enterprise-on-amazon.po -source_file = locale/pot/tutorial/install-mongodb-enterprise-on-amazon.pot -source_lang = en - -[mongodb-manual.tutorial--install-mongodb-enterprise-on-red-hat-or-centos] -file_filter = locale//LC_MESSAGES/tutorial/install-mongodb-enterprise-on-red-hat-or-centos.po -source_file = locale/pot/tutorial/install-mongodb-enterprise-on-red-hat-or-centos.pot -source_lang = en - -[mongodb-manual.tutorial--deploy-replica-set-with-auth] -file_filter = locale//LC_MESSAGES/tutorial/deploy-replica-set-with-auth.po -source_file = locale/pot/tutorial/deploy-replica-set-with-auth.pot -source_lang = en - -[mongodb-manual.tutorial--change-own-password-and-custom-data] -file_filter = locale//LC_MESSAGES/tutorial/change-own-password-and-custom-data.po -source_file = locale/pot/tutorial/change-own-password-and-custom-data.pot -source_lang = en - -[mongodb-manual.tutorial--install-mongodb-enterprise-on-suse] -file_filter = locale//LC_MESSAGES/tutorial/install-mongodb-enterprise-on-suse.po -source_file = locale/pot/tutorial/install-mongodb-enterprise-on-suse.pot -source_lang = en - -[mongodb-manual.reference--aggregation-variables] -file_filter = locale//LC_MESSAGES/reference/aggregation-variables.po -source_file = locale/pot/reference/aggregation-variables.pot -source_lang = en - -[mongodb-manual.reference--operator--query--text] -file_filter = locale//LC_MESSAGES/reference/operator/query/text.po -source_file = locale/pot/reference/operator/query/text.pot -source_lang = en - -[mongodb-manual.reference--operator--projection--meta] -file_filter = locale//LC_MESSAGES/reference/operator/projection/meta.po -source_file = locale/pot/reference/operator/projection/meta.pot -source_lang = en - -[mongodb-manual.reference--command--planCacheListFilters] -file_filter = 
locale//LC_MESSAGES/reference/command/planCacheListFilters.po -source_file = locale/pot/reference/command/planCacheListFilters.pot -source_lang = en - -[mongodb-manual.reference--command--createIndexes] -file_filter = locale//LC_MESSAGES/reference/command/createIndexes.po -source_file = locale/pot/reference/command/createIndexes.pot -source_lang = en - -[mongodb-manual.reference--command--invalidateUserCache] -file_filter = locale//LC_MESSAGES/reference/command/invalidateUserCache.po -source_file = locale/pot/reference/command/invalidateUserCache.pot -source_lang = en - -[mongodb-manual.reference--command--planCacheClear] -file_filter = locale//LC_MESSAGES/reference/command/planCacheClear.po -source_file = locale/pot/reference/command/planCacheClear.pot -source_lang = en - -[mongodb-manual.reference--command--planCacheSetFilter] -file_filter = locale//LC_MESSAGES/reference/command/planCacheSetFilter.po -source_file = locale/pot/reference/command/planCacheSetFilter.pot -source_lang = en - -[mongodb-manual.reference--command--planCacheListPlans] -file_filter = locale//LC_MESSAGES/reference/command/planCacheListPlans.po -source_file = locale/pot/reference/command/planCacheListPlans.pot -source_lang = en - -[mongodb-manual.reference--command--planCacheClearFilters] -file_filter = locale//LC_MESSAGES/reference/command/planCacheClearFilters.po -source_file = locale/pot/reference/command/planCacheClearFilters.pot -source_lang = en - -[mongodb-manual.reference--command--parallelCollectionScan] -file_filter = locale//LC_MESSAGES/reference/command/parallelCollectionScan.po -source_file = locale/pot/reference/command/parallelCollectionScan.pot -source_lang = en - -[mongodb-manual.reference--command--shardConnPoolStats] -file_filter = locale//LC_MESSAGES/reference/command/shardConnPoolStats.po -source_file = locale/pot/reference/command/shardConnPoolStats.pot -source_lang = en - -[mongodb-manual.reference--command--nav-plan-cache] -file_filter = locale//LC_MESSAGES/reference/command/nav-plan-cache.po -source_file = locale/pot/reference/command/nav-plan-cache.pot -source_lang = en - -[mongodb-manual.reference--command--authSchemaUpgrade] -file_filter = locale//LC_MESSAGES/reference/command/authSchemaUpgrade.po -source_file = locale/pot/reference/command/authSchemaUpgrade.pot -source_lang = en - -[mongodb-manual.reference--command--planCacheListQueryShapes] -file_filter = locale//LC_MESSAGES/reference/command/planCacheListQueryShapes.po -source_file = locale/pot/reference/command/planCacheListQueryShapes.pot -source_lang = en - -[mongodb-manual.reference--method--PlanCache_getPlansByQuery] -file_filter = locale//LC_MESSAGES/reference/method/PlanCache.getPlansByQuery.po -source_file = locale/pot/reference/method/PlanCache.getPlansByQuery.pot -source_lang = en - -[mongodb-manual.reference--method--db_collection_initializeUnorderedBulkOp] -file_filter = locale//LC_MESSAGES/reference/method/db.collection.initializeUnorderedBulkOp.po -source_file = locale/pot/reference/method/db.collection.initializeUnorderedBulkOp.pot -source_lang = en - -[mongodb-manual.reference--method--db_collection_getPlanCache] -file_filter = locale//LC_MESSAGES/reference/method/db.collection.getPlanCache.po -source_file = locale/pot/reference/method/db.collection.getPlanCache.pot -source_lang = en - -[mongodb-manual.reference--method--WriteResult] -file_filter = locale//LC_MESSAGES/reference/method/WriteResult.po -source_file = locale/pot/reference/method/WriteResult.pot -source_lang = en - 
-[mongodb-manual.reference--method--Bulk_find_upsert] -file_filter = locale//LC_MESSAGES/reference/method/Bulk.find.upsert.po -source_file = locale/pot/reference/method/Bulk.find.upsert.pot -source_lang = en - -[mongodb-manual.reference--method--db_upgradeCheckAllDBs] -file_filter = locale//LC_MESSAGES/reference/method/db.upgradeCheckAllDBs.po -source_file = locale/pot/reference/method/db.upgradeCheckAllDBs.pot -source_lang = en - -[mongodb-manual.reference--method--WriteResult_hasWriteConcernError] -file_filter = locale//LC_MESSAGES/reference/method/WriteResult.hasWriteConcernError.po -source_file = locale/pot/reference/method/WriteResult.hasWriteConcernError.pot -source_lang = en - -[mongodb-manual.reference--method--db_upgradeCheck] -file_filter = locale//LC_MESSAGES/reference/method/db.upgradeCheck.po -source_file = locale/pot/reference/method/db.upgradeCheck.pot -source_lang = en - -[mongodb-manual.reference--method--Bulk] -file_filter = locale//LC_MESSAGES/reference/method/Bulk.po -source_file = locale/pot/reference/method/Bulk.pot -source_lang = en - -[mongodb-manual.reference--method--Bulk_find_updateOne] -file_filter = locale//LC_MESSAGES/reference/method/Bulk.find.updateOne.po -source_file = locale/pot/reference/method/Bulk.find.updateOne.pot -source_lang = en - -[mongodb-manual.reference--method--Bulk_execute] -file_filter = locale//LC_MESSAGES/reference/method/Bulk.execute.po -source_file = locale/pot/reference/method/Bulk.execute.pot -source_lang = en - -[mongodb-manual.reference--method--db_collection_initializeOrderedBulkOp] -file_filter = locale//LC_MESSAGES/reference/method/db.collection.initializeOrderedBulkOp.po -source_file = locale/pot/reference/method/db.collection.initializeOrderedBulkOp.pot -source_lang = en - -[mongodb-manual.reference--method--Bulk_find_remove] -file_filter = locale//LC_MESSAGES/reference/method/Bulk.find.remove.po -source_file = locale/pot/reference/method/Bulk.find.remove.pot -source_lang = en - -[mongodb-manual.reference--method--PlanCache_listQueryShapes] -file_filter = locale//LC_MESSAGES/reference/method/PlanCache.listQueryShapes.po -source_file = locale/pot/reference/method/PlanCache.listQueryShapes.pot -source_lang = en - -[mongodb-manual.reference--method--Bulk_find_replaceOne] -file_filter = locale//LC_MESSAGES/reference/method/Bulk.find.replaceOne.po -source_file = locale/pot/reference/method/Bulk.find.replaceOne.pot -source_lang = en - -[mongodb-manual.reference--method--Bulk_find_removeOne] -file_filter = locale//LC_MESSAGES/reference/method/Bulk.find.removeOne.po -source_file = locale/pot/reference/method/Bulk.find.removeOne.pot -source_lang = en - -[mongodb-manual.reference--method--PlanCache_help] -file_filter = locale//LC_MESSAGES/reference/method/PlanCache.help.po -source_file = locale/pot/reference/method/PlanCache.help.pot -source_lang = en - -[mongodb-manual.reference--method--PlanCache_clear] -file_filter = locale//LC_MESSAGES/reference/method/PlanCache.clear.po -source_file = locale/pot/reference/method/PlanCache.clear.pot -source_lang = en - -[mongodb-manual.reference--method--js-bulk] -file_filter = locale//LC_MESSAGES/reference/method/js-bulk.po -source_file = locale/pot/reference/method/js-bulk.pot -source_lang = en - -[mongodb-manual.reference--method--Bulk_find] -file_filter = locale//LC_MESSAGES/reference/method/Bulk.find.po -source_file = locale/pot/reference/method/Bulk.find.pot -source_lang = en - -[mongodb-manual.reference--method--Bulk_insert] -file_filter = locale//LC_MESSAGES/reference/method/Bulk.insert.po 
-source_file = locale/pot/reference/method/Bulk.insert.pot -source_lang = en - -[mongodb-manual.reference--method--Bulk_find_update] -file_filter = locale//LC_MESSAGES/reference/method/Bulk.find.update.po -source_file = locale/pot/reference/method/Bulk.find.update.pot -source_lang = en - -[mongodb-manual.reference--method--PlanCache_clearPlansByQuery] -file_filter = locale//LC_MESSAGES/reference/method/PlanCache.clearPlansByQuery.po -source_file = locale/pot/reference/method/PlanCache.clearPlansByQuery.pot -source_lang = en - -[mongodb-manual.reference--method--js-plan-cache] -file_filter = locale//LC_MESSAGES/reference/method/js-plan-cache.po -source_file = locale/pot/reference/method/js-plan-cache.pot -source_lang = en - -[mongodb-manual.reference--method--WriteResult_hasWriteError] -file_filter = locale//LC_MESSAGES/reference/method/WriteResult.hasWriteError.po -source_file = locale/pot/reference/method/WriteResult.hasWriteError.pot -source_lang = en - -[mongodb-manual.core--kerberos] -file_filter = locale//LC_MESSAGES/core/kerberos.po -source_file = locale/pot/core/kerberos.pot -source_lang = en - -[mongodb-manual.core--authorization] -file_filter = locale//LC_MESSAGES/core/authorization.po -source_file = locale/pot/core/authorization.pot -source_lang = en - -[mongodb-manual.core--authentication] -file_filter = locale//LC_MESSAGES/core/authentication.po -source_file = locale/pot/core/authentication.pot -source_lang = en - -[mongodb-manual.core--storage] -file_filter = locale//LC_MESSAGES/core/storage.po -source_file = locale/pot/core/storage.pot -source_lang = en - -[mongodb-manual.core--read-operations-introduction] -file_filter = locale//LC_MESSAGES/core/read-operations-introduction.po -source_file = locale/pot/core/read-operations-introduction.pot -source_lang = en - -[mongodb-manual.core--write-operations-introduction] -file_filter = locale//LC_MESSAGES/core/write-operations-introduction.po -source_file = locale/pot/core/write-operations-introduction.pot -source_lang = en - -[mongodb-manual.core--index-intersection] -file_filter = locale//LC_MESSAGES/core/index-intersection.po -source_file = locale/pot/core/index-intersection.pot -source_lang = en - -[mongodb-manual.release-notes--2_6-changelog] -file_filter = locale//LC_MESSAGES/release-notes/2.6-changelog.po -source_file = locale/pot/release-notes/2.6-changelog.pot -source_lang = en - -[mongodb-manual.tutorial--implement-field-level-redaction] -file_filter = locale//LC_MESSAGES/tutorial/implement-field-level-redaction.po -source_file = locale/pot/tutorial/implement-field-level-redaction.pot -source_lang = en - -[mongodb-manual.tutorial--install-mongodb-enterprise-on-debian] -file_filter = locale//LC_MESSAGES/tutorial/install-mongodb-enterprise-on-debian.po -source_file = locale/pot/tutorial/install-mongodb-enterprise-on-debian.pot -source_lang = en - -[mongodb-manual.tutorial--configure-ldap-sasl-activedirectory] -file_filter = locale//LC_MESSAGES/tutorial/configure-ldap-sasl-activedirectory.po -source_file = locale/pot/tutorial/configure-ldap-sasl-activedirectory.pot -source_lang = en - -[mongodb-manual.tutorial--configure-ldap-sasl-openldap] -file_filter = locale//LC_MESSAGES/tutorial/configure-ldap-sasl-openldap.po -source_file = locale/pot/tutorial/configure-ldap-sasl-openldap.pot -source_lang = en - -[mongodb-manual.tutorial--configure-ssl-clients] -file_filter = locale//LC_MESSAGES/tutorial/configure-ssl-clients.po -source_file = locale/pot/tutorial/configure-ssl-clients.pot -source_lang = en - 
-[mongodb-manual.tutorial--model-monetary-data] -file_filter = locale//LC_MESSAGES/tutorial/model-monetary-data.po -source_file = locale/pot/tutorial/model-monetary-data.pot -source_lang = en - -[mongodb-manual.tutorial--configure-fips] -file_filter = locale//LC_MESSAGES/tutorial/configure-fips.po -source_file = locale/pot/tutorial/configure-fips.pot -source_lang = en - -[mongodb-manual.tutorial--configure-x509-client-authentication] -file_filter = locale//LC_MESSAGES/tutorial/configure-x509-client-authentication.po -source_file = locale/pot/tutorial/configure-x509-client-authentication.pot -source_lang = en - -[mongodb-manual.tutorial--configure-x509-member-authentication] -file_filter = locale//LC_MESSAGES/tutorial/configure-x509-member-authentication.po -source_file = locale/pot/tutorial/configure-x509-member-authentication.pot -source_lang = en - -[mongodb-manual.tutorial--verify-mongodb-packages] -file_filter = locale//LC_MESSAGES/tutorial/verify-mongodb-packages.po -source_file = locale/pot/tutorial/verify-mongodb-packages.pot -source_lang = en - -[mongodb-manual.tutorial--modify-an-index] -file_filter = locale//LC_MESSAGES/tutorial/modify-an-index.po -source_file = locale/pot/tutorial/modify-an-index.pot -source_lang = en - -[mongodb-manual.meta--aggregation-quick-reference] -file_filter = locale//LC_MESSAGES/meta/aggregation-quick-reference.po -source_file = locale/pot/meta/aggregation-quick-reference.pot -source_lang = en - -[mongodb-manual.reference--operator--aggregation-literal] -file_filter = locale//LC_MESSAGES/reference/operator/aggregation-literal.po -source_file = locale/pot/reference/operator/aggregation-literal.pot -source_lang = en - -[mongodb-manual.reference--operator--query--minDistance] -file_filter = locale//LC_MESSAGES/reference/operator/query/minDistance.po -source_file = locale/pot/reference/operator/query/minDistance.pot -source_lang = en - -[mongodb-manual.reference--method--Bulk_toString] -file_filter = locale//LC_MESSAGES/reference/method/Bulk.toString.po -source_file = locale/pot/reference/method/Bulk.toString.pot -source_lang = en - -[mongodb-manual.reference--method--BulkWriteResult] -file_filter = locale//LC_MESSAGES/reference/method/BulkWriteResult.po -source_file = locale/pot/reference/method/BulkWriteResult.pot -source_lang = en - -[mongodb-manual.reference--method--db_serverCmdLineOpts] -file_filter = locale//LC_MESSAGES/reference/method/db.serverCmdLineOpts.po -source_file = locale/pot/reference/method/db.serverCmdLineOpts.pot -source_lang = en - -[mongodb-manual.reference--method--Bulk_getOperations] -file_filter = locale//LC_MESSAGES/reference/method/Bulk.getOperations.po -source_file = locale/pot/reference/method/Bulk.getOperations.pot -source_lang = en - -[mongodb-manual.reference--method--Bulk_tojson] -file_filter = locale//LC_MESSAGES/reference/method/Bulk.tojson.po -source_file = locale/pot/reference/method/Bulk.tojson.pot -source_lang = en - -[mongodb-manual.core--collection-level-access-control] -file_filter = locale//LC_MESSAGES/core/collection-level-access-control.po -source_file = locale/pot/core/collection-level-access-control.pot -source_lang = en - diff --git a/Makefile b/Makefile index dd7b2573519..30357621dea 100644 --- a/Makefile +++ b/Makefile @@ -181,5 +181,8 @@ examples: curl -SfL https://raw.githubusercontent.com/mongodb/mongo-swift-driver/master/Examples/Docs/Sources/AsyncExamples/main.swift -o ${DRIVERS_PATH}/swiftAsync.swift curl -SfL 
https://raw.githubusercontent.com/mongodb/mongo-swift-driver/master/Examples/Docs/Sources/SyncExamples/main.swift -o ${DRIVERS_PATH}/swiftSync.swift +# kotlin-coroutine + curl -SfL https://raw.githubusercontent.com/mongodb/docs-kotlin/master/source/examples/ServerManualCodeExamples.kt -o ${DRIVERS_PATH}/kotlin_examples.kt + changelogs: python3 changelogs/generatechangelogs.py diff --git a/config/changelog_conf.yaml b/config/changelog_conf.yaml index 53ca2182d3b..1868786c6b8 100644 --- a/config/changelog_conf.yaml +++ b/config/changelog_conf.yaml @@ -20,6 +20,7 @@ groups: - Index Maintenance - Geo - Text Search + - prepared-txns "Write Operations": - Write Ops "Aggregation": @@ -29,11 +30,18 @@ groups: - JavaScript "WiredTiger": - Block cache + - Truncate + - APIs + - Test Format - WiredTiger + - dhandles + - RTS + - Reconciliation "MMAP": - MMAPv1 "Storage": - Storage + - Btree "Catalog": - Catalog "TTL": diff --git a/config/redirects b/config/redirects index e697c445f9d..e067bb860b5 100644 --- a/config/redirects +++ b/config/redirects @@ -1,11 +1,11 @@ define: prefix docs define: base https://www.mongodb.com/${prefix} define: versions v2.2 v2.4 v2.6 v3.0 v3.2 v3.4 v3.6 v4.0 v4.2 v4.4 v5.0 v5.1 v5.2 v5.3 v6.0 v6.1 v6.2 v6.3 v7.0 v7.1 v7.2 master -symlink: master -> v7.2 +symlink: master -> v7.3 symlink: stable -> v7.0 -symlink: rapid -> v7.1 +symlink: rapid -> v7.3 symlink: current -> v7.0 -symlink: upcoming -> v7.2 +symlink: upcoming -> v7.3 symlink: manual -> v7.0 [v2.2]: ${prefix}/${version}/core/read-operations-introduction -> ${base}/${version}/core/read-operations/ @@ -421,7 +421,7 @@ symlink: manual -> v7.0 [v2.2]: ${prefix}/${version}/reference/command/nav-user-management -> ${base}/${version}/reference/command/ [*]: ${prefix}/${version}/tutorial/control-access-to-document-content-with-multiple-tag-sets -> ${base}/${version}/tutorial/control-access-to-document-content-with-field-level-security/ [v2.4]: ${prefix}/${version}/release-notes/2.4-changelong -> ${base}/${version}/release-notes/2.4-changelog/ -(v2.4-*]: ${prefix}/${version}/tutorial/copy-databases-between-instances -> ${base}/${version}/reference/command/copydb/ +(v2.4-*]: ${prefix}/${version}/tutorial/copy-databases-between-instances -> ${base}/${version}/tutorial/backup-and-restore-tools/ [*-v2.4]: ${prefix}/${version}/reference/method/rs.printReplicationInfo -> ${base}/${version}/reference/method/rs.status/ [*-v2.4]: ${prefix}/${version}/reference/method/rs.printSlaveReplicationInfo -> ${base}/${version}/reference/method/rs.status/ [*-v2.4]: ${prefix}/${version}/reference/operator/update/mul -> ${base}/${version}/reference/operators/#update @@ -1490,7 +1490,6 @@ raw: ${prefix}/master/release-notes/3.0-general-improvements -> ${base}/release- [v4.2-*]: ${prefix}/${version}reference/method/rand/ -> ${base}/${version}/reference/method [v4.2-*]: ${prefix}/${version}reference/method/removeFile/ -> ${base}/${version}/reference/method [v4.2-*]: ${prefix}/${version}reference/method/srand/ -> ${base}/${version}/reference/method -[v4.2-*]: ${prefix}/${version}reference/method/db.collection.copyTo// -> ${base}/${version}/reference/method/db.collection.count/ [v4.2-*]: ${prefix}/${version}reference/method/db.collection.save/ -> ${base}/${version}/reference/method/db.collection.stats/ [v4.2-*]: ${prefix}/${version}reference/method/db.collection.repairDatabase -> ${base}/${version}/reference/method/db.dropDatabase/ [v4.2-*]: ${prefix}/${version}reference/method/getHostName/ -> ${base}/${version}/reference/method/hostname @@ -1865,6 
+1864,13 @@ raw: ${prefix}/v2.8/release-notes/2.8-changes -> ${base}/v3.0/release-notes/3.0/ # DOCSP-3769 [*]: ${prefix}/${version}/applications/drivers -> ${base}/drivers/ +# DOCSP-33707 redirects for QE and CSFLE tutorials post-restructure +# [v8.0-*]: ${prefix}/${version}/core/queryable-encryption/tutorials/aws/aws-automatic -> ${base}/${version}/core/queryable-encryption/overview-enable-qe +# [v8.0-*]: ${prefix}/${version}/core/queryable-encryption/tutorials/azure/azure-automatic -> ${base}/${version}/core/queryable-encryption/overview-enable-qe +# [v8.0-*]: ${prefix}/${version}/core/queryable-encryption/tutorials/gcp/gcp-automatic -> ${base}/${version}/core/queryable-encryption/overview-enable-qe +# [v8.0-*]: ${prefix}/${version}/core/queryable-encryption/tutorials/kmip/kmip-automatic -> ${base}/${version}/core/queryable-encryption/overview-enable-qe +# [v8.0-*]: ${prefix}/${version}/core/csfle/reference/compatibility -> ${base}/${version}/core/queryable-encryption/reference/compatibility + # Redirects for 4.2 and greater [v4.2-*]: ${prefix}/${version}/reference/command/eval -> ${base}/${version}/release-notes/4.2-compatibility/#remove-support-for-the-eval-command @@ -2055,6 +2061,7 @@ raw: ${prefix}/manual/core/wildcard -> ${base}/manual/core/index-wildcard/ [v4.4-*]: ${prefix}/${version}/reference/method/PlanCache.listQueryShapes -> ${base}/${version}/release-notes/4.4/#removed-commands [v4.4-*]: ${prefix}/${version}/reference/program/mongosniff -> ${base}/${version}/reference/program/mongoreplay/ +[v4.4-*]: ${prefix}/${version}reference/method/db.collection.copyTo/ -> ${base}/${version}/reference/operator/aggregation/out/ [v3.4-v4.2]: ${prefix}/${version}/reference/command/refineCollectionShardKey -> ${base}/${version}/core/sharding-shard-key/ @@ -2128,7 +2135,6 @@ raw: ${prefix}/manual/core/wildcard -> ${base}/manual/core/index-wildcard/ [v5.0-*]: ${prefix}/${version}/reference/command/isMaster -> ${base}/${version}/reference/command/hello/ [v5.0-*]: ${prefix}/${version}/reference/method/db.isMaster -> ${base}/${version}/reference/method/db.hello/ -[v5.0-*]: ${prefix}/${version}/reference/method/db.collection.copyTo.txt -> ${base}/${version}/reference/operator/aggregation/out/ [v5.0-*]: ${prefix}/${version}/reference/method/db.collection.save.txt -> ${base}/${version}/reference/method/db.collection.insertOne/ [v5.0-*]: ${prefix}/${version}/reference/method/db.eval -> ${base}/${version}/reference/method/js-database/ [v5.0-*]: ${prefix}/${version}/reference/method/db.getProfilingLevel -> ${base}/${version}/reference/method/db.getProfilingStatus/ @@ -2367,6 +2373,14 @@ raw: ${prefix}/${version}/applications/drivers -> ${base}/drivers/ [*-v5.3]: ${prefix}/${version}/core/queryable-encryption -> ${base}/${version}/core/security-client-side-encryption # Release Notes redirects +[*-v6.0]: ${prefix}/${version}/release-notes/7.0/ -> ${base}/${version}/release-notes +[*-v6.0]: ${prefix}/${version}/release-notes/7.0-compatibility/ -> ${base}/${version}/release-notes +[*-v6.0]: ${prefix}/${version}/release-notes/7.0-changelog/ -> ${base}/${version}/release-notes +[*-v6.0]: ${prefix}/${version}/release-notes/7.0-downgrade/ -> ${base}/${version}/release-notes +[*-v6.0]: ${prefix}/${version}/release-notes/7.0-upgrade-standalone/ -> ${base}/${version}/release-notes +[*-v6.0]: ${prefix}/${version}/release-notes/7.0-upgrade-replica-set/ -> ${base}/${version}/release-notes +[*-v6.0]: ${prefix}/${version}/release-notes/7.0-upgrade-sharded-cluster/ -> ${base}/${version}/release-notes + [*-v6.1]: 
${prefix}/${version}/release-notes/6.2 -> ${base}/${version}/release-notes [*-v5.0]: ${prefix}/${version}/release-notes/6.0 -> ${base}/${version}/release-notes @@ -2572,4 +2586,10 @@ raw: https://mongodb.github.io/mongo-java-driver/ -> ${base}/drivers/java/sync/c [*-v6.3]: ${prefix}/${version}/data-modeling/schema-design-process/identify-workload -> ${base}/${version}/core/data-modeling-introduction [*-v6.3]: ${prefix}/${version}/data-modeling/schema-design-process/map-relationships -> ${base}/${version}/core/data-modeling-introduction [v7.0-*]: ${prefix}/${version}/core/data-modeling-introduction -> ${base}/${version}/data-modeling -[v7.0-*]: ${prefix}/${version}/core/data-model-design -> ${base}/${version}/data-modeling/embedding-vs-references +[v7.0-*]: ${prefix}/${version}/core/data-model-design -> ${base}/${version}/data-modeling/#link-related-data + +# DOCSP-35889 +(v6.1-*]: ${prefix}/${version}/reference/operator/aggregation/first-array-element -> ${base}/${version}/reference/operator/aggregation/first/ + +# DOCSP-36940 +(v6.1-*]: ${prefix}/${version}/reference/operator/aggregation/last-array-element -> ${base}/${version}/reference/operator/aggregation/last/ diff --git a/draft/commands-locks.txt b/draft/commands-locks.txt index 171545fb27b..93f7ac051f3 100644 --- a/draft/commands-locks.txt +++ b/draft/commands-locks.txt @@ -265,10 +265,6 @@ command>` in MongoDB. - - - - * - :dbcommand:`medianKey` - - - - - - * - :dbcommand:`moveChunk` - - diff --git a/repo_sync.py b/repo_sync.py new file mode 100644 index 00000000000..4e90c780076 --- /dev/null +++ b/repo_sync.py @@ -0,0 +1,45 @@ +import subprocess +from typing_extensions import Annotated +import typer +import github + +def get_installation_access_token(app_id: int, private_key: str, + installation_id: int) -> str: + """ + Obtain an installation access token using JWT. + + Args: + - app_id (int): The application ID for GitHub App. + - private_key (str): The private key associated with the GitHub App. + - installation_id (int): The installation ID of the GitHub App for a particular account. + + Returns + - Optional[str]: The installation access token. Returns `None` if there's an error obtaining the token. 
+ + """ + integration = github.GithubIntegration(app_id, private_key) + auth = integration.get_access_token(installation_id) + assert auth + assert auth.token + return auth.token + + +def main(branch: Annotated[str, typer.Option(envvar="GITHUB_REF_NAME")], + app_id: Annotated[int, typer.Option(envvar="APP_ID")], + installation_id: Annotated[int, typer.Option(envvar="INSTALLATION_ID")], + server_docs_private_key: Annotated[str, typer.Option(envvar="SERVER_DOCS_PRIVATE_KEY")]): + + access_token = get_installation_access_token(app_id, server_docs_private_key, installation_id) + + git_destination_url_with_token = f"https://x-access-token:{access_token}@github.com/mongodb/docs.git" + # Use a local path for testing + # git_destination_url_with_token = "path_to_local_git" + + # Taken from SO: https://stackoverflow.com/a/69979203 + subprocess.run(["git", "config", "--unset-all", "http.https://github.com/.extraheader"], check=True) + # Push the code upstream + subprocess.run(["git", "push", git_destination_url_with_token, branch], check=True) + + +if __name__ == "__main__": + typer.run(main) diff --git a/requirements.txt b/requirements.txt index 3b93d5617b3..66dfc65c0c8 100644 --- a/requirements.txt +++ b/requirements.txt @@ -1 +1,2 @@ -giza +typer +pygithub diff --git a/snooty.toml b/snooty.toml index 7384995bfac..695b1c36bae 100644 --- a/snooty.toml +++ b/snooty.toml @@ -20,6 +20,7 @@ toc_landing_pages = [ "/administration/backup-sharded-clusters", "/administration/configuration-and-maintenance", "/administration/connection-pool-overview", + "/administration/diagnose-query-performance", "/administration/health-managers", "/administration/install-community", "/administration/install-enterprise-linux", @@ -45,12 +46,14 @@ toc_landing_pages = [ "/core/authentication", "/core/authorization", "/core/backups", + "/core/capped-collections", "/core/crud", "/core/csfle", "/core/csfle/fundamentals/", "/core/csfle/reference", "/core/csfle/tutorials", "/core/databases-and-collections", + "/core/dot-dollar-considerations", "/core/geohaystack", "/core/indexes/create-index", "/core/indexes/index-types", @@ -71,10 +74,20 @@ toc_landing_pages = [ "/core/index-creation", "/core/index-text", "/core/index-ttl", + "/core/index-unique", "/core/journaling", "/core/kerberos", "/core/map-reduce", + "/core/queryable-encryption/", + "/core/queryable-encryption/about-qe-csfle", + "/core/queryable-encryption/fundamentals/keys-key-vaults", + "/core/queryable-encryption/fundamentals/", + "/core/queryable-encryption/overview-enable-qe", + "/core/queryable-encryption/overview-use-qe", + "/core/queryable-encryption/tutorials/", + "/core/queryable-encryption/reference/", "/core/query-optimization", + "/core/queryable-encryption/overview-use-qe", "/core/read-isolation-consistency-recency", "/core/read-preference", "/core/replica-set-architectures", @@ -84,12 +97,9 @@ toc_landing_pages = [ "/core/schema-validation", "/core/schema-validation/specify-json-schema", "/core/security-encryption-at-rest", - "/core/queryable-encryption/", - "/core/queryable-encryption/fundamentals/", - "/core/queryable-encryption/tutorials/", - "/core/queryable-encryption/reference/", "/core/security-hardening", "/core/security-internal-authentication", + "/core/security-in-use-encryption", "/core/security-ldap", "/core/security-scram", "/core/security-oidc", @@ -144,9 +154,11 @@ toc_landing_pages = [ "/reference/command/nav-sharding", "/reference/command/nav-user-management", "/reference/configuration-options", + "/reference/explain-results/", 
"/reference/inconsistency-type", "/reference/method", "/reference/method/js-atlas-search", + "/reference/method/js-atlas-streams", "/reference/method/js-bulk", "/reference/method/js-client-side-field-level-encryption", "/reference/method/js-collection", @@ -201,14 +213,17 @@ toc_landing_pages = [ "/release-notes/4.4-downgrade", "/release-notes/4.4", "/release-notes/5.0", + "/release-notes/5.0-downgrade", "/release-notes/5.1", "/release-notes/5.2", "/release-notes/5.3", "/release-notes/6.0", + "/release-notes/6.0-downgrade", "/release-notes/6.1", "/release-notes/6.2", "/release-notes/6.3", "/release-notes/7.0", + "/release-notes/7.0-downgrade", "/release-notes/7.1", "/release-notes/7.2", "/release-notes/7.3", @@ -219,6 +234,7 @@ toc_landing_pages = [ "/storage", "/text-search", "/tutorial/insert-documents", + "/tutorial/manage-shard-zone", "/tutorial/install-mongodb-enterprise-on-amazon", "/tutorial/install-mongodb-enterprise-on-debian", "/tutorial/install-mongodb-enterprise-on-os-x", @@ -250,7 +266,7 @@ sbe-short = "slot-based execution engine" sbe-title = "Slot-Based Query Execution Engine" version = "{+version+}" version-last = "{+version-last+}" -year = "2023" +year = "2024" ui-org-menu = ":icon-mms:`office` :guilabel:`Organizations` menu" [constants] @@ -260,14 +276,16 @@ atlas-ui = "Atlas UI" mongosh = ":binary:`~bin.mongosh`" package-branch = "testing" # testing for dev rc releases windows-dir-version = "6.0" # wizard +minimum-lts-version = "5.0" package-name-org = "mongodb-org" package-name-enterprise = "mongodb-enterprise" package-name = "mongodb" version = "8.0" +current-rapid-release = "7.3" latest-lts-version = "7.0" last-supported-version = "5.0" -release = "7.1.1" -version-dev = "7.2" +release = "7.3.1" +version-dev = "7.3" version-last = "6.0" pgp-version = "{+version+}" rsa-key = "4B7C549A058F8B6B" @@ -300,6 +318,7 @@ qe-abbr = ":abbr:`QE (Queryable Encryption)`" qe-preview = "{+qe+} Public Preview" qe-equality-ga = "{+qe+} with equality queries" qe-equality-ga-title = "{+qe+} With Equality Queries" +in-use-encryption = "In-Use Encryption" in-use-doc = "document with encrypted fields" in-use-doc-title = "Document with Encrypted Fields" in-use-docs = "documents with encrypted fields" @@ -330,7 +349,7 @@ kmip-kms-no-hover = "KMIP-compliant key provider" kmip-kms = "{+kmip-hover+}-compliant key provider" kmip-kms-title = "KMIP-Compliant Key Provider" csfle-code-snippets-gen-keys = "https://github.com/mongodb/docs/tree/master/source/includes/quick-start/generate-master-key" -libmongocrypt-version = "1.8" +libmongocrypt-version = "1.9" mongodb-crypt-version = "1.7.3" sample-app-url-csfle = "https://github.com/mongodb-university/docs-in-use-encryption-examples/tree/main/csfle" sample-app-url-qe = "https://github.com/mongodb/docs/tree/master/source/includes/qe-tutorials" @@ -340,6 +359,8 @@ enc-schema-title = "Encryption Schema" efm = "``encryptedFieldsMap``" efm-title = "encryptedFieldsMap" shared-library = "Automatic Encryption Shared Library" +shared-library-version = "7.0.0" +shared-library-version-drop-down = "{+shared-library-version+} (current)" shared-library-package = "``crypt_shared``" shared-library-download-link = "" auto-encrypt-options = "autoEncryptionOpts" @@ -355,7 +376,7 @@ java-driver-api = "https://mongodb.github.io/mongo-java-driver/{+java-driver-ver pymongo-api-docs = "https://pymongo.readthedocs.io/en/stable/api" node-libmongocrypt-binding-docs = "https://github.com/mongodb/libmongocrypt/tree/master/bindings" csharp-api-docs = 
"https://mongodb.github.io/mongo-csharp-driver/2.18/apidocs/html" -java-api-docs = "https://mongodb.github.io/mongo-java-driver/4.7/apidocs" +java-api-docs = "https://mongodb.github.io/mongo-java-driver/4.11/apidocs" go-api-docs = "https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.10.3" enterprise-download-link = "https://www.mongodb.com/try/download/enterprise" # C2C Product @@ -390,7 +411,8 @@ targets = [ ] variant = "danger" value = """ - OpenID Connect is currently available in Public Preview. + OpenID Connect is currently available in Public Preview and is only supported + on Linux binaries. """ [bundle] manpages = "manpages.tar.gz" diff --git a/source/administration/analyzing-mongodb-performance.txt b/source/administration/analyzing-mongodb-performance.txt index ac7fc94cb53..5dfbd09ee43 100644 --- a/source/administration/analyzing-mongodb-performance.txt +++ b/source/administration/analyzing-mongodb-performance.txt @@ -109,37 +109,6 @@ system-wide limits can be modified using the ``ulimit`` command, or by editing your system's ``/etc/sysctl`` file. See :ref:`ulimit` for more information. -.. _database-profiling: - -Database Profiling ------------------- - -The :ref:`database-profiler` collects detailed -information about operations run against a mongod instance. The -profiler's output can help to identify inefficient queries and -operations. - -You can enable and configure profiling for individual databases or for -all databases on a :binary:`~bin.mongod` instance. -Profiler settings affect only a single :binary:`~bin.mongod` instance -and don't propagate across a :term:`replica set` or :term:`sharded -cluster`. - -See :ref:`database-profiler` for information on -enabling and configuring the profiler. - -The following profiling levels are available: - -.. include:: /includes/database-profiler-levels.rst - -.. include:: /includes/warning-profiler-performance.rst - -.. note:: - - .. include:: /includes/fact-log-slow-queries.rst - -.. include:: /includes/extracts/4.2-changes-log-query-shapes-plan-cache-key.rst - .. _ftdc-stub: Full Time Diagnostic Data Capture @@ -201,12 +170,12 @@ one or more of the following utilization statistics: .. note:: - Starting in MongoDB 4.4, if the :binary:`~bin.mongod` process runs - in a :term:`container`, FTDC reports utilization statistics from - the perspective of the container instead of the host operating - system. For example, if a the :binary:`~bin.mongod` runs in a - container that is configured with RAM restrictions, FTDC calculates memory utilization against the container's RAM limit, as - opposed to the host operating system's RAM limit. + If the :binary:`~bin.mongod` process runs in a :term:`container`, FTDC + reports utilization statistics from the perspective of the container + instead of the host operating system. For example, if a the + :binary:`~bin.mongod` runs in a container that is configured with RAM + restrictions, FTDC calculates memory utilization against the container's + RAM limit, as opposed to the host operating system's RAM limit. FTDC collects statistics produced by the following commands on file rotation or startup: @@ -263,8 +232,8 @@ For information on MongoDB Support, visit `Get Started With MongoDB Support ` and :doc:`configuration -file ` interfaces provide MongoDB +The :ref:`command line ` and :ref:`configuration file +` interfaces provide MongoDB administrators with a large number of options and settings for controlling the operation of the database system. 
This document provides an overview of common configurations and examples of @@ -310,8 +310,8 @@ If running as a replica set, :method:`initiate ` the shard replica set and add members. For the router (i.e. :binary:`~bin.mongos`), configure at least one -:binary:`~bin.mongos` process with the following :doc:`setting -`: +:binary:`~bin.mongos` process with the following :ref:`setting +`: .. code-block:: yaml diff --git a/source/administration/connection-pool-overview.txt b/source/administration/connection-pool-overview.txt index 4c213b7826a..19a4307657f 100644 --- a/source/administration/connection-pool-overview.txt +++ b/source/administration/connection-pool-overview.txt @@ -4,6 +4,10 @@ Connection Pool Overview ======================== +.. facet:: + :name: genre + :values: reference + .. default-domain:: mongodb .. contents:: On this page @@ -79,13 +83,13 @@ to be established. Connection Pool Configuration Settings -------------------------------------- -To configure the connection pool, set the options: +You can specify connection pool settings in these locations: -- through the :ref:`MongoDB URI `, +- The :ref:`MongoDB URI ` -- programmatically when building the ``MongoClient`` instance, or +- Your application's ``MongoClient`` instance -- in your application framework's configuration files. +- Your application framework's configuration files Settings ~~~~~~~~ @@ -97,6 +101,33 @@ Settings * - Setting - Description + * - :urioption:`connectTimeoutMS` + + - Most drivers default to never time out. Some versions of the + Java drivers (for example, version 3.7) default to ``10``. + + *Default:* ``0`` for most drivers. See your :driver:`driver ` + documentation. + + * - :urioption:`maxConnecting` + + - Maximum number of connections a pool may be establishing + concurrently. + + ``maxConnecting`` is supported for all drivers **except** the + :driver:`Rust Driver `. + + .. include:: /includes/connection-pool/max-connecting-use-case.rst + + *Default:* ``2`` + + * - :urioption:`maxIdleTimeMS` + + - The maximum number of milliseconds that a connection can + remain idle in the pool before being removed and closed. + + *Default:* See your :driver:`driver ` documentation. + * - :urioption:`maxPoolSize` - .. _maxpoolsize-cp-setting: @@ -118,51 +149,31 @@ Settings *Default*: ``0`` - * - :urioption:`connectTimeoutMS` - - - Most drivers default to never time out. Some versions of the - Java drivers (for example, version 3.7) default to ``10``. - - *Default:* ``0`` for most drivers. See your :driver:`driver ` - documentation. + * - :parameter:`ShardingTaskExecutorPoolMaxSize` - * - :urioption:`socketTimeoutMS` + - Maximum number of outbound connections each TaskExecutor + connection pool can open to any given :binary:`~bin.mongod` + instance. - - Number of milliseconds to wait before timeout on a TCP - connection. - - Do *not* use :urioption:`socketTimeoutMS` as a mechanism for - preventing long-running server operations. + *Default*: 2\ :sup:`64` - 1 - Setting low socket timeouts may result in operations that error - before the server responds. - - *Default*: ``0``, which means no timeout. See your - :driver:`driver ` documentation. + Parameter only applies to sharded deployments. - * - :urioption:`maxIdleTimeMS` - - - The maximum number of milliseconds that a connection can - remain idle in the pool before being removed and closed. + * - :parameter:`ShardingTaskExecutorPoolMaxSizeForConfigServers` - *Default:* See your :driver:`driver ` documentation. + - .. 
include:: /includes/ShardingTaskExecutorPoolMaxSizeForConfigServers-parameter.rst - * - :urioption:`waitQueueTimeoutMS` + *Default*: ``-1`` - - Maximum wait time in milliseconds that a can thread wait for - a connection to become available. A value of ``0`` means there - is no limit. + .. versionadded:: 6.0 - *Default*: ``0``. See your :driver:`driver ` documentation. - * - :parameter:`ShardingTaskExecutorPoolMinSize` - Minimum number of outbound connections each TaskExecutor connection pool can open to any given :binary:`~bin.mongod` instance. - *Default*: ``1``. See - :parameter:`ShardingTaskExecutorPoolMinSize`. + *Default*: ``1`` Parameter only applies to sharded deployments. @@ -174,24 +185,26 @@ Settings .. versionadded:: 6.0 - * - :parameter:`ShardingTaskExecutorPoolMaxSize` - - - Maximum number of outbound connections each TaskExecutor - connection pool can open to any given :binary:`~bin.mongod` - instance. - - *Default*: 2\ :sup:`64` - 1. See - :parameter:`ShardingTaskExecutorPoolMaxSize`. + * - :urioption:`socketTimeoutMS` - Parameter only applies to sharded deployments. + - Number of milliseconds to wait before timeout on a TCP + connection. + + Do *not* use :urioption:`socketTimeoutMS` as a mechanism for + preventing long-running server operations. - * - :parameter:`ShardingTaskExecutorPoolMaxSizeForConfigServers` + Setting low socket timeouts may result in operations that error + before the server responds. + + *Default*: ``0``, which means no timeout. - - .. include:: /includes/ShardingTaskExecutorPoolMaxSizeForConfigServers-parameter.rst + * - :urioption:`waitQueueTimeoutMS` - *Default*: ``-1`` + - Maximum wait time in milliseconds that a can thread wait for + a connection to become available. A value of ``0`` means there + is no limit. - .. versionadded:: 6.0 + *Default*: ``0`` .. toctree:: :titlesonly: diff --git a/source/administration/diagnose-query-performance.txt b/source/administration/diagnose-query-performance.txt new file mode 100644 index 00000000000..75b04d7cc08 --- /dev/null +++ b/source/administration/diagnose-query-performance.txt @@ -0,0 +1,173 @@ +.. _server-diagnose-queries: + +========================= +Analyze Query Performance +========================= + +.. default-domain:: mongodb + +.. contents:: On this page + :local: + :backlinks: none + :depth: 2 + :class: singlecol + +.. |both| replace:: Atlas clusters and self-hosted deployments +.. |m10-atlas| replace:: M10+ Atlas clusters + +MongoDB provides several ways to examine the performance of your +workload, allowing you to understand query performance and identify +long-running queries. Understanding query performance helps you build +effective indexes and ensure your application runs critical queries +efficiently. + +Identify Slow Queries +--------------------- + +Use the following methods to identify slow queries that occur on your +deployment. + +Performance Overview +~~~~~~~~~~~~~~~~~~~~ + +The following methods provide overviews of your deployment's +performance. Use these methods to determine if there are performance +issues that need to be addressed: + +.. list-table:: + :header-rows: 1 + :widths: 10 10 20 + + * - Method + - Availability + - Description + + * - Use the Atlas Performance Advisor + - |m10-atlas| + - The Atlas Performance Advisor monitors slow queries and suggests + new indexes to improve performance. For more information, see + :ref:`performance-advisor`. 
+ + * - Check ongoing operations in Atlas + - |m10-atlas| + - You can use the :ref:`Atlas Real-Time Performance Panel + ` (RTPP) to see current network + traffic, database operations, and hardware statistics. + + * - Check ongoing operations locally + - |both| + - The :pipeline:`$currentOp` aggregation stage returns information + on active operations and cursors. Use ``$currentOp`` to identify + long-running or stuck operations that may be negatively impacting + performance. + + You can also use the :dbcommand:`top` command to get additional + operation count and latency statistics.. + + * - Check server metrics + - |both| + - For Atlas clusters, you can :ref:`view cluster metrics + ` to identify performance issues. + + For self-hosted deployments, the :dbcommand:`serverStatus` + command provides metrics that can indicate poor performance and + anomalies for query execution. + + * - View common query shapes + - |both| + - The :pipeline:`$queryStats` aggregation stage returns information + about common query shapes. ``$queryStats`` provides a holistic + view of the kinds of queries being run on your deployment. + + * - View index statistics + - |both| + - The :pipeline:`$indexStats` aggregation stage returns information + about your collection's indexes and how often individual indexes + are used. Use ``$indexStats`` to identify unused indexes that can + be removed to improve write performance. + +Analyze a Slow Query +~~~~~~~~~~~~~~~~~~~~ + +Use these methods to analyze a slow query and determine the cause of +poor performance: + +.. list-table:: + :header-rows: 1 + :widths: 10 10 20 + + * - Method + - Availability + - Description + + * - Use the Atlas Query Profiler + - |m10-atlas| + - The Atlas Query Profiler shows long-running operations and + performance statistics. For more information, see + :ref:`profile-database`. + + * - Enable the Database Profiler + - |both| + - When enabled, the database profiler stores information about slow + queries in the :data:`system.profile <.system.profile>` + collection. + + For more information, see :ref:`database-profiler`. + + * - View slow queries in the diagnostic log + - |both| + - MongoDB logs queries that exceed the slow operation threshold + (default 100 milliseconds) in the :ref:`diagnostic logs + `. + + Check the diagnostic logs to identify problematic queries and see + which queries would benefit from indexes. + + * - View explain results + - |both| + - Query explain results show information on the query plan and + execution statistics. You can use explain results to determine + the following information about a query: + + - The amount of time a query took to execute + - Whether the query used an index + - The number of documents and index keys scanned to fulfill a + query + + To view explain results, use the following methods: + + - :method:`db.collection.explain()` + - :method:`~cursor.explain()` cursor method + + To learn about explain results output, see :ref:`explain-results` + and :ref:`interpret-explain-plan`. + +Perform Advanced Query Analysis +------------------------------- + +The following methods are suited for deeper investigation of problematic +queries, and can provide fine-grained performance insights: + +.. list-table:: + :header-rows: 1 + :widths: 10 10 20 + + * - Method + - Availability + - Description + + * - View plan cache statistics + - |both| + - The :pipeline:`$planCacheStats` aggregation stage returns + information about a collection's :ref:`plan cache + `. 
+ + The plan cache contains query plans that the query planner uses + to efficiently complete queries. Generally, the plan cache should + contain entries for your most commonly-run queries. + +.. toctree:: + :titlesonly: + + /reference/explain-results + /tutorial/manage-the-database-profiler diff --git a/source/administration/monitoring.txt b/source/administration/monitoring.txt index 4bca56d8648..c11a057e859 100644 --- a/source/administration/monitoring.txt +++ b/source/administration/monitoring.txt @@ -35,8 +35,8 @@ a running MongoDB instance: - MongoDB distributes a set of utilities that provides real-time reporting of database activities. -- MongoDB provides various :doc:`database commands - ` that return statistics regarding the current +- MongoDB provides various :ref:`database commands + ` that return statistics regarding the current database state with greater fidelity. - `MongoDB Atlas `_ @@ -156,9 +156,9 @@ by the collection, and information about its indexes. ```````````````````` The :dbcommand:`replSetGetStatus` command (:method:`rs.status()` from -the shell) returns an overview of your replica set's status. The :doc:`replSetGetStatus -` document details the -state and configuration of the replica set and statistics about its members. +the shell) returns an overview of your replica set's status. The +``replSetGetStatus`` document details the state and configuration of the +replica set and statistics about its members. Use this data to ensure that replication is properly configured, and to check the connections between the current host and the other members @@ -269,8 +269,7 @@ control these options. .. note:: You can specify these configuration operations as the command line - arguments to :doc:`mongod ` or :doc:`mongos - ` + arguments to :binary:`mongod` or :binary:`mongos`. For example: diff --git a/source/administration/production-checklist-development.txt b/source/administration/production-checklist-development.txt index 9bfdaf84e9a..234b169a2c7 100644 --- a/source/administration/production-checklist-development.txt +++ b/source/administration/production-checklist-development.txt @@ -75,7 +75,7 @@ Replication .. include:: /includes/extracts/arbiters-and-pvs-with-reference.rst - Ensure that your secondaries remain up-to-date by using - :doc:`monitoring tools ` and by + :ref:`monitoring tools ` and by specifying appropriate :doc:`write concern `. diff --git a/source/administration/production-checklist-operations.txt b/source/administration/production-checklist-operations.txt index 46b30300099..1c6563ac2f3 100644 --- a/source/administration/production-checklist-operations.txt +++ b/source/administration/production-checklist-operations.txt @@ -6,6 +6,10 @@ Operations Checklist .. default-domain:: mongodb +.. facet:: + :name: genre + :values: reference + .. contents:: On this page :local: :backlinks: none @@ -275,3 +279,9 @@ Load Balancing - Avoid placing load balancers between MongoDB cluster or replica set components. + +Security +~~~~~~~~ + +For a list of security measures to protect your MongoDB installation, +see the :ref:`MongoDB Security Checklist `. 
diff --git a/source/administration/production-notes.txt b/source/administration/production-notes.txt index abcf78c93d3..785f1c16a15 100644 --- a/source/administration/production-notes.txt +++ b/source/administration/production-notes.txt @@ -46,7 +46,6 @@ x86_64 ~~~~~~ MongoDB requires the following minimum ``x86_64`` microarchitectures: -[#microarch-intel]_ - For Intel ``x86_64``, MongoDB requires one of: @@ -81,6 +80,11 @@ Starting in MongoDB 5.0, :binary:`~bin.mongod`, :binary:`~bin.mongos`, and the legacy :binary:`~bin.mongo` shell no longer support ``arm64`` platforms which do not meet this minimum microarchitecture requirement. +.. note:: MongoDB no longer supports single board hardware lacking the proper + CPU architecture (Raspberry Pi 4). See `Compatibility Changes in MongoDB 5.0 + `_ + for more information. + .. _prod-notes-supported-platforms-x86_64: .. _prod-notes-supported-platforms-PPC64LE: .. _prod-notes-supported-platforms-s390x: @@ -114,31 +118,22 @@ Recommended Platforms While MongoDB supports a variety of platforms, the following operating systems are recommended for production use on ``x86_64`` architecture: -- Amazon Linux 2 -- Debian 10 -- :abbr:`RHEL (Red Hat Enterprise Linux)` / CentOS 7 and 8 [#rocky-almalinux]_ -- SLES 12 and 15 -- Ubuntu LTS 20.04 and 22.04 -- Windows Server 2016 and 2019 - -.. [#oracle-linux] - - MongoDB only supports Oracle Linux running the Red Hat Compatible - Kernel (RHCK). MongoDB does **not** support the Unbreakable - Enterprise Kernel (UEK). +- Amazon Linux +- Debian +- :abbr:`RHEL (Red Hat Enterprise Linux)` [#rocky-almalinux]_ +- SLES +- Ubuntu LTS +- Windows Server -.. [#microarch-intel] - - MongoDB 5.0 requires use of the AVX instruction set, available on - `select Intel and AMD processors - `__. +For best results, run the latest version of your platform. If you run an +older version, make sure that your version is supported by its provider. .. [#rocky-almalinux] - MongoDB on-premises products released for RHEL version 8.0+ are - compatible with and supported on Rocky Linux version 8.0+ and - AlmaLinux version 8.0+, contingent upon those distributions meeting their - obligation to deliver full RHEL compatibility. + MongoDB on-premises products released for RHEL version 8.0+ are + compatible with Rocky Linux version 8.0+ and AlmaLinux version 8.0+, + contingent upon those distributions meeting their obligation to + deliver full RHEL compatibility. .. seealso:: @@ -149,13 +144,25 @@ Use the Latest Stable Packages Be sure you have the latest stable release. -All MongoDB releases are available on the :dl:`MongoDB Download Center <>` -page. The :dl:`MongoDB Download Center <>` is a good place to verify the -current stable release, even if you are installing via a package -manager. +MongoDB releases are available on the MongoDB Download Center: + +- `MongoDB Enterprise Advanced + `_ +- `MongoDB Community Edition + `_ -For other MongoDB products, refer either to the :dl:`MongoDB Download Center <>` -page or their `respective documentation `_. +For details on upgrading to the most current minor release, see +:ref:`upgrade-to-latest-revision`. + +The following related packages are also available on the MongoDB +Download Center: + +- `Tools `_ +- `Atlas SQL Interface `_ +- `Mobile & Edge `_ + +For other MongoDB products, see their `respective documentation +`_. MongoDB ``dbPath`` ------------------ @@ -674,7 +681,7 @@ WiredTiger uses :term:`prefix compression` on all indexes by default. 
Clock Synchronization --------------------- -MongoDB :doc:`components ` keep logical clocks for +MongoDB :ref:`components ` keep logical clocks for supporting time-dependent operations. Using `NTP `_ to synchronize host machine clocks mitigates the risk of clock drift between components. Clock drift between components increases the @@ -789,6 +796,14 @@ MongoDB performs best where swapping can be avoided or kept to a minimum. As such you should set ``vm.swappiness`` to either ``1`` or ``0`` depending on your application needs and cluster configuration. +.. note:: + + Most system and user processes run within a cgroup, which, by default, sets + the ``vm.swappiness`` to ``60``. If you are running + :abbr:`RHEL (Red Hat Enterprise Linux)` / CentOS, set + ``vm.force_cgroup_v2_swappiness`` to ``1`` to ensure that the specified + ``vm.swappiness`` value overrides any cgroup defaults. + .. [#swappiness-kernel-version] With Linux kernel versions previous to ``3.5``, or @@ -853,8 +868,7 @@ consider the following recommendations: .. note:: - Starting in MongoDB 4.4, a startup error is generated if the - ``ulimit`` value for number of open files is under ``64000``. + .. include:: /includes/fact-ulimit-minimum.rst - Disable Transparent Huge Pages. MongoDB performs better with normal (4096 bytes) virtual memory pages. See :doc:`Transparent Huge @@ -1036,12 +1050,9 @@ and affect :doc:`replica set ` and :ref:`sharded cluster ` high availability mechanisms. -It is possible to clone a virtual machine running MongoDB. -You might use this function to -spin up a new virtual host to add as a member of a replica -set. If you clone a VM with journaling enabled, the clone snapshot will -be valid. If not using journaling, first stop :binary:`~bin.mongod`, -then clone the VM, and finally, restart :binary:`~bin.mongod`. +You can clone a virtual machine running MongoDB. You might use this +function to deploy a new virtual host to add as a member of a replica +set. KVM ``` @@ -1088,4 +1099,4 @@ Backups To make backups of your MongoDB database, please refer to :ref:`MongoDB Backup Methods Overview `. -.. include:: /includes/unicode-checkmark.rst +.. include:: /includes/unicode-checkmark.rst \ No newline at end of file diff --git a/source/administration/security-checklist.txt b/source/administration/security-checklist.txt index b9cb597411d..2da04da6b43 100644 --- a/source/administration/security-checklist.txt +++ b/source/administration/security-checklist.txt @@ -67,6 +67,7 @@ Pre-production Checklist/Considerations .. seealso:: - :doc:`/core/authorization` + - :doc:`/tutorial/create-users` - :doc:`/tutorial/manage-users-and-roles` |arrow| Encrypt Communication (TLS/SSL) diff --git a/source/administration/sharded-cluster-administration.txt b/source/administration/sharded-cluster-administration.txt index 9a641ff55a3..1b71254080c 100644 --- a/source/administration/sharded-cluster-administration.txt +++ b/source/administration/sharded-cluster-administration.txt @@ -28,8 +28,9 @@ Sharded Cluster Administration /tutorial/backup-sharded-cluster-metadata /tutorial/convert-sharded-cluster-to-replica-set /tutorial/convert-replica-set-to-replicated-shard-cluster + /tutorial/drop-a-hashed-shard-key-index -:doc:`Config Server Administration ` +:ref:`Config Server Administration ` This section contains articles and tutorials related to sharded cluster config server administration @@ -67,3 +68,5 @@ Sharded Cluster Administration Convert a replica set to a sharded cluster in which each shard is its own replica set. 
+:doc:`/tutorial/drop-a-hashed-shard-key-index` + Drop a Hashed Shard Key Index. diff --git a/source/applications/data-models-relationships.txt b/source/applications/data-models-relationships.txt index cbd874eb0ce..822205c9e33 100644 --- a/source/applications/data-models-relationships.txt +++ b/source/applications/data-models-relationships.txt @@ -25,6 +25,10 @@ Model Relationships Between Documents ` to describe one-to-many relationships between documents. +:doc:`/tutorial/model-embedded-many-to-many-relationships-between-documents` + Presents a data model that uses :ref:`embedded documents + ` to describe many-to-many relationships + between connected data. .. toctree:: :titlesonly: @@ -33,3 +37,4 @@ Model Relationships Between Documents /tutorial/model-embedded-one-to-one-relationships-between-documents /tutorial/model-embedded-one-to-many-relationships-between-documents /tutorial/model-referenced-one-to-many-relationships-between-documents + /tutorial/model-embedded-many-to-many-relationships-between-documents diff --git a/source/applications/data-models.txt b/source/applications/data-models.txt index 65a510108f0..2177beb7f07 100644 --- a/source/applications/data-models.txt +++ b/source/applications/data-models.txt @@ -38,6 +38,11 @@ patterns and common schema design considerations: ` to describe one-to-many relationships between connected data. + :doc:`/tutorial/model-embedded-many-to-many-relationships-between-documents` + Presents a data model that uses :ref:`embedded documents + ` to describe many-to-many relationships + between connected data. + :doc:`/tutorial/model-referenced-one-to-many-relationships-between-documents` Presents a data model that uses :ref:`references ` to describe one-to-many diff --git a/source/applications/replication.txt b/source/applications/replication.txt index ecc914c5efd..43cc0668b38 100644 --- a/source/applications/replication.txt +++ b/source/applications/replication.txt @@ -24,7 +24,7 @@ additional read and write configurations for replica sets. write and read operations. :doc:`/core/replica-set-write-concern` - Write concern describes the level of acknowledgement requested + Write concern describes the level of acknowledgment requested from MongoDB for write operations. :doc:`/core/read-preference` diff --git a/source/changeStreams.txt b/source/changeStreams.txt index aba3df3d581..f1b518b3856 100644 --- a/source/changeStreams.txt +++ b/source/changeStreams.txt @@ -7,16 +7,13 @@ Change Streams .. default-domain:: mongodb -.. meta:: - :description: Change streams allow applications to access real-time data changes without the complexity and risk of tailing the oplog. - .. facet:: :name: genre :values: reference .. facet:: :name: programming_language - :values: c, csharp, go, java, javascript/typescript, php, python, ruby, swift + :values: c, csharp, go, java, javascript/typescript, php, python, ruby, swift, kotlin .. contents:: On this page :local: @@ -26,10 +23,10 @@ Change Streams .. meta:: :description: Change streams code examples for how to access real-time data changes in MongoDB - :keywords: database triggers, real time, code example, node.js, java sync, motor, swift sync, swift async + :keywords: database triggers, real time, code example, node.js, java sync, motor, swift sync, swift async, kotlin coroutine Change streams allow applications to access real-time data changes -without the complexity and risk of tailing the :term:`oplog`. +without the prior complexity and risk of manually tailing the :term:`oplog`. 
Applications can use change streams to subscribe to all data changes on a single collection, a database, or an entire deployment, and immediately react to them. Because change streams use the aggregation @@ -226,6 +223,19 @@ upper-right to set the language of the examples on this page. :start-after: Start Changestream Example 1 :end-before: End Changestream Example 1 + .. tab:: + :tabid: kotlin-coroutine + + The Kotlin examples below assume that you are connected to a MongoDB replica set and can access a database + that contains the ``inventory`` collection. To learn more about completing these tasks, see the + :driver:`Kotlin Coroutine Driver Databases and Collections ` guide. + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Changestream Example 1 + :end-before: End Changestream Example 1 + .. tab:: :tabid: csharp @@ -428,8 +438,31 @@ upper-right to set the language of the examples on this page. MongoCursor cursor = collection.watch(pipeline).iterator(); The ``pipeline`` list includes a single :pipeline:`$match` stage that - filters any operations where the ``username`` is ``alice``, or - operations where the ``operationType`` is ``delete``. + filters for any operations that meet one or both of the following criteria: + + - ``username`` value is ``alice`` + - ``operationType`` value is ``delete`` + + Passing the ``pipeline`` to the :method:`~db.collection.watch()` method directs the + change stream to return notifications after passing them through the + specified ``pipeline``. + + .. tab:: + :tabid: kotlin-coroutine + + .. include:: /includes/fact-change-streams-modify-output.rst + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Changestream Example 4 + :end-before: End Changestream Example 4 + + The ``pipeline`` list includes a single :pipeline:`$match` stage that + filters for any operations that meet one or both of the following criteria: + + - ``username`` value is ``alice`` + - ``operationType`` value is ``delete`` Passing the ``pipeline`` to the :method:`~db.collection.watch()` method directs the change stream to return notifications after passing them through the @@ -612,6 +645,23 @@ upper-right to set the language of the examples on this page. :start-after: Start Changestream Example 2 :end-before: End Changestream Example 2 + .. tab:: + :tabid: kotlin-coroutine + + To return the most current majority-committed version of the updated + document, pass ``FullDocument.UPDATE_LOOKUP`` to the + `ChangeStreamFlow.fullDocument() <{+java-api-docs+}/mongodb-driver-kotlin-coroutine/mongodb-driver-kotlin-coroutine/com.mongodb.kotlin.client.coroutine/-change-stream-flow/full-document.html>`__ method. + + In the example below, all update operations notifications + include a ``FullDocument`` field that represents the *current* + version of the document affected by the update operation. + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Changestream Example 2 + :end-before: End Changestream Example 2 + .. tab:: :tabid: c @@ -839,6 +889,22 @@ See :ref:`change-stream-resume-token` for more information on the resume token. :start-after: Start Changestream Example 3 :end-before: End Changestream Example 3 + .. 
tab:: + :tabid: kotlin-coroutine + + You can use the `ChangeStreamFlow.resumeAfter() + <{+java-api-docs+}/mongodb-driver-kotlin-coroutine/mongodb-driver-kotlin-coroutine/com.mongodb.kotlin.client.coroutine/-change-stream-flow/resume-after.html>`__ + method to resume notifications after the operation specified in the resume + token. The ``resumeAfter()`` method takes a value that must + resolve to a resume token, such as the ``resumeToken`` variable in the + example below. + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Changestream Example 3 + :end-before: End Changestream Example 3 + .. tab:: :tabid: csharp @@ -865,7 +931,6 @@ See :ref:`change-stream-resume-token` for more information on the resume token. operation specified. - .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c :language: C :dedent: 3 diff --git a/source/contents.txt b/source/contents.txt index a6fa891bb9e..efc82a35d65 100644 --- a/source/contents.txt +++ b/source/contents.txt @@ -12,15 +12,16 @@ project, this Manual and additional editions of this text. - :doc:`/introduction` - :doc:`/crud` - :doc:`/aggregation` -- :doc:`/data-modeling` -- :doc:`/core/transactions` - :doc:`/indexes` -- :doc:`/security` +- :doc:`/core/timeseries-collections` - :doc:`/changeStreams` +- :doc:`/core/transactions` +- :doc:`/data-modeling` - :doc:`/replication` - :doc:`/sharding` -- :doc:`/administration` - :doc:`/storage` +- :doc:`/administration` +- :doc:`/security` - :doc:`/faq` - :doc:`/reference` - :doc:`/release-notes` @@ -35,16 +36,18 @@ project, this Manual and additional editions of this text. MongoDB Shell (mongosh) /crud /aggregation - /data-modeling /indexes - /security - /replication - /sharding - /changeStreams + Atlas Search + Atlas Vector Search /core/timeseries-collections + /changeStreams /core/transactions - /administration + /data-modeling + /replication + /sharding /storage + /administration + /security /faq /reference /release-notes diff --git a/source/core/aggregation-pipeline-optimization.txt b/source/core/aggregation-pipeline-optimization.txt index 117fad75281..bdaddb595dc 100644 --- a/source/core/aggregation-pipeline-optimization.txt +++ b/source/core/aggregation-pipeline-optimization.txt @@ -73,47 +73,58 @@ Pipeline Sequence Optimization ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ For an aggregation pipeline that contains a projection stage -(:pipeline:`$project` or :pipeline:`$unset` or -:pipeline:`$addFields` or :pipeline:`$set`) followed by a -:pipeline:`$match` stage, MongoDB moves any filters in the -:pipeline:`$match` stage that do not require values computed in the -projection stage to a new :pipeline:`$match` stage before the +(:pipeline:`$addFields`, :pipeline:`$project`, :pipeline:`$set`, or +:pipeline:`$unset`) followed by a :pipeline:`$match` stage, MongoDB +moves any filters in the ``$match`` stage that do not require values +computed in the projection stage to a new ``$match`` stage before the projection. -If an aggregation pipeline contains multiple projection and/or -:pipeline:`$match` stages, MongoDB performs this optimization for each -:pipeline:`$match` stage, moving each :pipeline:`$match` filter before -all projection stages that the filter does not depend on. 
+If an aggregation pipeline contains multiple projection or ``$match`` +stages, MongoDB performs this optimization for each ``$match`` stage, +moving each ``$match`` filter before all projection stages that the +filter does not depend on. -Consider a pipeline of the following stages: +Consider a pipeline with the following stages: .. code-block:: javascript - :emphasize-lines: 9-14 + :emphasize-lines: 18-23 - { $addFields: { - maxTime: { $max: "$times" }, - minTime: { $min: "$times" } - } }, - { $project: { - _id: 1, name: 1, times: 1, maxTime: 1, minTime: 1, - avgTime: { $avg: ["$maxTime", "$minTime"] } - } }, - { $match: { - name: "Joe Schmoe", - maxTime: { $lt: 20 }, - minTime: { $gt: 5 }, - avgTime: { $gt: 7 } - } } - -The optimizer breaks up the :pipeline:`$match` stage into four -individual filters, one for each key in the :pipeline:`$match` query -document. The optimizer then moves each filter before as many projection -stages as possible, creating new :pipeline:`$match` stages as needed. -Given this example, the optimizer produces the following *optimized* -pipeline: + { + $addFields: { + maxTime: { $max: "$times" }, + minTime: { $min: "$times" } + } + }, + { + $project: { + _id: 1, + name: 1, + times: 1, + maxTime: 1, + minTime: 1, + avgTime: { $avg: ["$maxTime", "$minTime"] } + } + }, + { + $match: { + name: "Joe Schmoe", + maxTime: { $lt: 20 }, + minTime: { $gt: 5 }, + avgTime: { $gt: 7 } + } + } + +The optimizer breaks up the ``$match`` stage into four individual +filters, one for each key in the ``$match`` query document. The +optimizer then moves each filter before as many projection stages as +possible, creating new ``$match`` stages as needed. + +Given this example, the optimizer automatically produces the following +*optimized* pipeline: .. code-block:: javascript :emphasize-lines: 1, 6, 11 + :copyable: false { $match: { name: "Joe Schmoe" } }, { $addFields: { @@ -127,6 +138,14 @@ pipeline: } }, { $match: { avgTime: { $gt: 7 } } } +.. note:: + + The optimized pipeline is not intended to be run manually. The + original and optimized pipelines return the same results. + + You can see the optimized pipeline in the :ref:`explain plan + `. + The :pipeline:`$match` filter ``{ avgTime: { $gt: 7 } }`` depends on the :pipeline:`$project` stage to compute the ``avgTime`` field. The :pipeline:`$project` stage is the last projection stage in this @@ -144,14 +163,10 @@ use any values computed in either the :pipeline:`$project` or :pipeline:`$addFields` stages so it was moved to a new :pipeline:`$match` stage before both of the projection stages. -.. note:: - - After optimization, the filter ``{ name: "Joe Schmoe" }`` is in a - :pipeline:`$match` stage at the beginning of the pipeline. This has - the added benefit of allowing the aggregation to use an index on the - ``name`` field when initially querying the collection. See - :ref:`aggregation-pipeline-optimization-indexes-and-filters` for more - information. +After optimization, the filter ``{ name: "Joe Schmoe" }`` is in a +:pipeline:`$match` stage at the beginning of the pipeline. This has the +added benefit of allowing the aggregation to use an index on the +``name`` field when initially querying the collection. .. _agg-sort-match-optimization: @@ -375,45 +390,67 @@ stage .. 
_agg-lookup-unwind-coalescence: -``$lookup`` + ``$unwind`` Coalescence -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +``$lookup``, ``$unwind``, and ``$match`` Coalescence +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -When a :pipeline:`$unwind` immediately follows another -:pipeline:`$lookup`, and the :pipeline:`$unwind` operates on the ``as`` -field of the :pipeline:`$lookup`, the optimizer can coalesce the -:pipeline:`$unwind` into the :pipeline:`$lookup` stage. This avoids -creating large intermediate documents. +When :pipeline:`$unwind` immediately follows :pipeline:`$lookup`, and the +:pipeline:`$unwind` operates on the ``as`` field of the :pipeline:`$lookup`, +the optimizer coalesces the :pipeline:`$unwind` into the :pipeline:`$lookup` +stage. This avoids creating large intermediate documents. Furthermore, if +:pipeline:`$unwind` is followed by a :pipeline:`$match` on any ``as`` subfield +of the :pipeline:`$lookup`, the optimizer also coalesces the :pipeline:`$match`. For example, a pipeline contains the following sequence: .. code-block:: javascript + :copyable: false { - $lookup: { - from: "otherCollection", - as: "resultingArray", - localField: "x", - foreignField: "y" - } + $lookup: { + from: "otherCollection", + as: "resultingArray", + localField: "x", + foreignField: "y" + } }, - { $unwind: "$resultingArray"} + { $unwind: "$resultingArray" }, + { $match: { + "resultingArray.foo": "bar" + } + } -The optimizer can coalesce the :pipeline:`$unwind` stage into the -:pipeline:`$lookup` stage. If you run the aggregation with ``explain`` -option, the ``explain`` output shows the coalesced stage: +The optimizer coalesces the :pipeline:`$unwind` and :pipeline:`$match` stages +into the :pipeline:`$lookup` stage. If you run the aggregation with ``explain`` +option, the ``explain`` output shows the coalesced stages: .. code-block:: javascript + :copyable: false { - $lookup: { - from: "otherCollection", - as: "resultingArray", - localField: "x", - foreignField: "y", - unwinding: { preserveNullAndEmptyArrays: false } - } + $lookup: { + from: "otherCollection", + as: "resultingArray", + localField: "x", + foreignField: "y", + let: {}, + pipeline: [ + { + $match: { + "foo": { + "$eq": "bar" + } + } + } + ], + unwinding: { + "preserveNullAndEmptyArrays": false + } + } } +You can see this optimized pipeline in the :ref:`explain plan +`. + .. _sbe-pipeline-optimizations: |sbe-title| Pipeline Optimizations diff --git a/source/core/aggregation-pipeline.txt b/source/core/aggregation-pipeline.txt index 305fdb62c92..37e6faf3422 100644 --- a/source/core/aggregation-pipeline.txt +++ b/source/core/aggregation-pipeline.txt @@ -204,9 +204,8 @@ Some aggregation pipeline stages accept an :ref:`aggregation expression - Can contain additional nested :ref:`aggregation expressions `. -Starting in MongoDB 4.4, you can use the :group:`$accumulator` and -:expression:`$function` aggregation operators to define custom -aggregation expressions in JavaScript. +You can use the :group:`$accumulator` and :expression:`$function` aggregation +operators to define custom aggregation expressions in JavaScript. For all aggregation expressions, see :ref:`aggregation-expressions`. diff --git a/source/core/authentication.txt b/source/core/authentication.txt index 26d7b8f20c1..3d2a85daada 100644 --- a/source/core/authentication.txt +++ b/source/core/authentication.txt @@ -20,11 +20,11 @@ Authentication :class: singlecol Authentication is the process of verifying the identity of a client. 
-When access control (:doc:`authorization `) is +When access control (:ref:`authorization `) is enabled, MongoDB requires all clients to authenticate themselves in order to determine their access. -Although authentication and :doc:`authorization ` +Although authentication and :ref:`authorization ` are closely connected, authentication is distinct from authorization: - **Authentication** verifies the identity of a :ref:`user `. diff --git a/source/core/authorization.txt b/source/core/authorization.txt index d808de98306..897b9421587 100644 --- a/source/core/authorization.txt +++ b/source/core/authorization.txt @@ -69,7 +69,7 @@ Privileges A privilege consists of a specified resource and the actions permitted on the resource. -A :doc:`resource ` is a database, +A :ref:`resource ` is a database, collection, set of collections, or the cluster. If the resource is the cluster, the affiliated actions affect the state of the system rather than a specific database or collection. For information on the resource @@ -118,7 +118,7 @@ other databases. Built-In Roles and User-Defined Roles ------------------------------------- -MongoDB provides :doc:`built-in roles ` that +MongoDB provides :ref:`built-in roles ` that provide set of privileges commonly needed in a database system. If these built-in-roles cannot provide the desired set of privileges, diff --git a/source/core/backups.txt b/source/core/backups.txt index 403f7f5dac9..40ac4e10dfa 100644 --- a/source/core/backups.txt +++ b/source/core/backups.txt @@ -167,7 +167,7 @@ systems. :binary:`~bin.mongodump` and :binary:`~bin.mongorestore` operate against a running :binary:`~bin.mongod` process, and can manipulate the underlying data files directly. By default, :binary:`~bin.mongodump` does not -capture the contents of the :doc:`local database `. +capture the contents of the :ref:`local database `. :binary:`~bin.mongodump` only captures the documents in the database. The resulting backup is space efficient, but :binary:`~bin.mongorestore` or diff --git a/source/core/bulk-write-operations.txt b/source/core/bulk-write-operations.txt index 825699becdf..12eb48c5a63 100644 --- a/source/core/bulk-write-operations.txt +++ b/source/core/bulk-write-operations.txt @@ -18,7 +18,7 @@ Overview MongoDB provides clients the ability to perform write operations in bulk. Bulk write operations affect a *single* collection. MongoDB allows applications to determine the acceptable level of -acknowledgement required for bulk write operations. +acknowledgment required for bulk write operations. The :method:`db.collection.bulkWrite()` method provides the ability to perform bulk insert, update, and delete operations. diff --git a/source/core/capped-collections.txt b/source/core/capped-collections.txt index 4d5976c55b7..d8f0d3a146c 100644 --- a/source/core/capped-collections.txt +++ b/source/core/capped-collections.txt @@ -1,4 +1,5 @@ .. _manual-capped-collection: +.. _capped_collections_remove_documents: ================== Capped Collections @@ -12,271 +13,149 @@ Capped Collections :depth: 2 :class: singlecol -:term:`Capped collections ` are fixed-size -collections that support high-throughput operations that insert -and retrieve documents based on insertion order. Capped -collections work in a way similar to circular buffers: once a -collection fills its allocated space, it makes room for new documents -by overwriting the oldest documents in the collection. - -See :method:`~db.createCollection()` or :dbcommand:`create` -for more information on creating capped collections. 
- -As an alternative to capped collections, consider :ref:`TTL (Time To -Live) indexes `. TTL indexes allow you to expire and -remove data from normal collections based on the value of a date-typed -field and a TTL value for the index. You can also use a TTL index on a -capped collection to remove expired documents even if the capped -collection hasn't exceeded its size limit. For details, :ref:`ttl-collections`. - - -Behavior --------- - -Insertion Order -~~~~~~~~~~~~~~~ - -Capped collections guarantee preservation of the insertion order. As a -result, queries do not need an index to return documents in insertion -order. Without this indexing overhead, capped collections can support -higher insertion throughput. - -.. _capped_collections_remove_documents: - -Automatic Removal of Oldest Documents -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -To make room for new documents, capped collections automatically remove -the oldest documents in the collection without requiring scripts or -explicit remove operations. - -Consider the following potential use cases for capped -collections: - -- Store log information generated by high-volume systems. Inserting - documents in a capped collection without an index is close to the - speed of writing log information directly to a file - system. Furthermore, the built-in *first-in-first-out* property - maintains the order of events, while managing storage use. - For example, the :ref:`oplog ` - uses a capped collection. - -- Cache small amounts of data in a capped collections. Since caches - are read rather than write heavy, you would either need to ensure - that this collection *always* remains in the working set (i.e. in - RAM) *or* accept some write penalty for the required index or - indexes. - -.. _capped-collections-oplog: - -Oplog Collection -~~~~~~~~~~~~~~~~ - -The :term:`oplog.rs ` collection that stores a log -of the operations in a :term:`replica set` uses a capped collection. - -Starting in MongoDB 4.0, unlike other capped collections, the oplog can -grow past its configured size limit to avoid deleting the :data:`majority -commit point `. - -.. note:: - - MongoDB rounds the capped size of the oplog up to the nearest - integer multiple of 256, in bytes. - -.. note:: - - MongoDB rounds the capped size of the oplog - up to the nearest integer multiple of 256, in bytes. - -``_id`` Index -~~~~~~~~~~~~~ - -Capped collections have an ``_id`` field and an index on the ``_id`` -field by default. +.. facet:: + :name: genre + :values: reference -.. _capped-collections-recommendations-and-restrictions: +Capped collections are fixed-size collections that insert and retrieve +documents based on insertion order. Capped collections work similarly to +circular buffers: once a collection fills its allocated space, it makes +room for new documents by overwriting the oldest documents in the +collection. -Restrictions and Recommendations --------------------------------- - -Reads -~~~~~ - -.. include:: /includes/extracts/transactions-capped-collection-read-change.rst - -Updates -~~~~~~~ - -If you plan to update documents in a capped collection, create an index -so that these update operations do not require a collection scan. - -Sharding -~~~~~~~~ - -You cannot shard a capped collection. - -Query Efficiency -~~~~~~~~~~~~~~~~ +Restrictions +------------ -Use natural ordering to retrieve the most recently inserted elements -from the collection efficiently. This is similar to using the ``tail`` -command on a log file. +- Capped collections cannot be sharded. 
-Aggregation ``$out`` -~~~~~~~~~~~~~~~~~~~~ +- You cannot create capped collections on :atlas:`serverless instances + `. -The aggregation pipeline stage :pipeline:`$out` -cannot write results to a capped collection. +- Capped collections are not supported in :ref:`Stable API ` + V1. -.. include:: /includes/replacement-mms.rst +- You cannot write to capped collections in :ref:`transactions + `. -Transactions -~~~~~~~~~~~~ +- The :pipeline:`$out` aggregation pipeline stage cannot write results + to a capped collection. -.. include:: /includes/extracts/transactions-capped-collection-change.rst +- You cannot use read concern :readconcern:`"snapshot"` when reading + from a capped collection. -Stable API -~~~~~~~~~~ +Command Syntax +-------------- -Capped collections are not supported in :ref:`Stable API -` V1. - -Procedures ----------- - -Create a Capped Collection -~~~~~~~~~~~~~~~~~~~~~~~~~~ - -You must create capped collections explicitly using the -:method:`db.createCollection()` method, which is a -:binary:`~bin.mongosh` helper for the :dbcommand:`create` command. -When creating a capped collection you must specify the maximum size of -the collection in bytes, which MongoDB will pre-allocate for the -collection. The size of the capped collection includes a small amount of -space for internal overhead. +The following example creates a capped collection called ``log`` with a +maximum size of 100,000 bytes. .. code-block:: javascript db.createCollection( "log", { capped: true, size: 100000 } ) -.. note:: - - The value that you provide for the ``size`` field - must be greater than ``0`` and less than or equal to - ``1024^5`` (1 {+pb+}). MongoDB rounds the ``size`` of all capped - collections up to the nearest integer multiple of 256, in bytes. - -Additionally, you may also specify a maximum number of documents for the -collection using the ``max`` field as in the following document: - -.. code-block:: javascript +For more information on creating capped collections, see +:method:`~db.createCollection()` or :dbcommand:`create`. - db.createCollection("log", { capped : true, size : 5242880, max : - 5000 } ) +Use Cases +--------- -.. important:: - - The ``size`` field is *always* required, even when - you specify the ``max`` number of documents. MongoDB removes older - documents if a collection reaches the maximum size limit before it - reaches the maximum document count. +.. include:: /includes/capped-collections/use-ttl-index.rst -.. see:: +The most common use case for a capped collection is to store log +information. When the capped collection reaches its maximum size, old +log entries are automatically overwritten with new entries. - :method:`db.createCollection()` and :dbcommand:`create`. +Get Started +----------- -.. _capped-collections-options: +To create and query capped collections, see these pages: -Query a Capped Collection -~~~~~~~~~~~~~~~~~~~~~~~~~ +- :ref:`capped-collections-create` -If you perform a :method:`~db.collection.find()` on a capped collection -with no ordering specified, MongoDB guarantees that the ordering of -results is the same as the insertion order. +- :ref:`capped-collections-query` -To retrieve documents in reverse insertion order, issue -:method:`~db.collection.find()` along with the :method:`~cursor.sort()` -method with the :operator:`$natural` parameter set to ``-1``, as shown -in the following example: +- :ref:`capped-collections-check` -.. 
code-block:: javascript +- :ref:`capped-collections-convert` - db.cappedCollection.find().sort( { $natural: -1 } ) +- :ref:`capped-collections-change-size` -Check if a Collection is Capped -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +- :ref:`capped-collections-change-max-docs` -Use the :method:`~db.collection.isCapped()` method to determine if a -collection is capped, as follows: +.. _capped-collections-recommendations-and-restrictions: -.. code-block:: javascript +Details +------- - db.collection.isCapped() +Consider these behavioral details for capped collections. -Convert a Collection to Capped -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +.. _capped-collections-oplog: -You can convert a non-capped collection to a capped collection with -the :dbcommand:`convertToCapped` command: +Oplog Collection +~~~~~~~~~~~~~~~~ -.. code-block:: javascript +The :term:`oplog.rs ` collection that stores a log +of the operations in a :term:`replica set` uses a capped collection. - db.runCommand({"convertToCapped": "mycoll", size: 100000}); +Unlike other capped collections, the oplog can grow past its configured +size limit to avoid deleting the :data:`majority commit point +`. -The ``size`` parameter specifies the size of the capped collection in -bytes. +.. note:: -.. include:: /includes/fact-database-lock.rst + MongoDB rounds the capped size of the oplog up to the nearest + integer multiple of 256, in bytes. -Change a Capped Collection's Size -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +_id Index +~~~~~~~~~ -.. versionadded:: 6.0 +Capped collections have an ``_id`` field and an index on the ``_id`` +field by default. -You can resize a capped collection using the :dbcommand:`collMod` command's -``cappedSize`` option to set the ``cappedSize`` in bytes. ``cappedSize`` must be -greater than ``0`` and less than or equal to ``1024^5`` (1 {+pb+}). +Updates +~~~~~~~ -.. note:: +Avoid updating data in a capped collection. Because capped collections +are fixed-size, updates can cause your data to expand beyond the +collection's allocated space, which can cause unexpected behavior. - Before you can resize a capped collection, you must have already set - the :ref:`featureCompatibilityVersion ` to at least version - ``"6.0"``. +Query Efficiency +~~~~~~~~~~~~~~~~ -For example, the following command sets the maximum size of the ``"log"`` capped -collection to 100000 bytes: +.. include:: /includes/capped-collections/query-natural-order.rst -.. code-block:: javascript +Tailable Cursor +~~~~~~~~~~~~~~~ - db.runCommand( { collMod: "log", cappedSize: 100000 } ) +You can use a :term:`tailable cursor` with capped collections. Similar to the +Unix ``tail -f`` command, the tailable cursor "tails" the end of a +capped collection. As new documents are inserted into the capped +collection, you can use the tailable cursor to continue retrieving +documents. -Change the Maximum Number of Documents in a Capped Collection -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +For information on creating a tailable cursor, see +:ref:`tailable-cursors-landing-page`. -.. versionadded:: 6.0 +Multiple Concurrent Writes +~~~~~~~~~~~~~~~~~~~~~~~~~~ -To change the maximum number of documents in a capped collection, use the -:dbcommand:`collMod` command's ``cappedMax`` option. If ``cappedMax`` is less -than or equal to ``0``, there is no maximum document limit. If -``cappedMax`` is less than the current number of documents in the -collection, MongoDB removes the excess documents on the next insert operation. +.. 
include:: /includes/capped-collections/concurrent-writes.rst -For example, the following command sets the maximum number of documents in the -``"log"`` capped collection to 500: +Learn More +---------- -.. code-block:: javascript +- :ref:`index-feature-ttl` - db.runCommand( { collMod: "log", cappedMax: 500 } ) +- :ref:`index-properties` -Tailable Cursor -~~~~~~~~~~~~~~~ +- :ref:`indexing-strategies` -You can use a :term:`tailable cursor` with capped collections. Similar to the -Unix ``tail -f`` command, the tailable cursor "tails" the end of a -capped collection. As new documents are inserted into the capped -collection, you can use the tailable cursor to continue retrieving -documents. +.. toctree:: + :titlesonly: -See :doc:`/core/tailable-cursors` for information on creating -a tailable cursor. + /core/capped-collections/create-capped-collection + /core/capped-collections/query-capped-collection + /core/capped-collections/check-if-collection-is-capped + /core/capped-collections/convert-collection-to-capped + /core/capped-collections/change-size-capped-collection + /core/capped-collections/change-max-docs-capped-collection diff --git a/source/core/capped-collections/change-max-docs-capped-collection.txt b/source/core/capped-collections/change-max-docs-capped-collection.txt new file mode 100644 index 00000000000..9e7d8422db4 --- /dev/null +++ b/source/core/capped-collections/change-max-docs-capped-collection.txt @@ -0,0 +1,62 @@ +.. _capped-collections-change-max-docs: + +=============================================== +Change Maximum Documents in a Capped Collection +=============================================== + +.. contents:: On this page + :local: + :backlinks: none + :depth: 2 + :class: singlecol + +.. facet:: + :name: genre + :values: tutorial + +.. versionadded:: 6.0 + +To change the maximum number of documents in a :ref:`capped collection +`, use the :dbcommand:`collMod` command's +``cappedMax`` option. + +- If ``cappedMax`` is less than or equal to ``0``, there is no maximum + document limit. + +- If ``cappedMax`` is less than the current number of documents in the + collection, MongoDB removes the excess documents on the next insert + operation. + +About this Task +--------------- + +.. include:: /includes/capped-collections/use-ttl-index.rst + +Before you Begin +---------------- + +Create a capped collection called ``log`` that can store a maximum of +20,000 documents: + +.. code-block:: javascript + + db.createCollection( "log", { capped: true, size: 5242880, max: 20000 } ) + +Steps +----- + +Run the following command to set the maximum number of documents in the +``log`` collection to 5,000: + +.. code-block:: javascript + + db.runCommand( { collMod: "log", cappedMax: 5000 } ) + +Learn More +---------- + +- :ref:`capped-collections-change-size` + +- :ref:`capped-collections-check` + +- :ref:`capped-collections-query` diff --git a/source/core/capped-collections/change-size-capped-collection.txt b/source/core/capped-collections/change-size-capped-collection.txt new file mode 100644 index 00000000000..ef0de5357b1 --- /dev/null +++ b/source/core/capped-collections/change-size-capped-collection.txt @@ -0,0 +1,59 @@ +.. _capped-collections-change-size: + +====================================== +Change the Size of a Capped Collection +====================================== + +.. contents:: On this page + :local: + :backlinks: none + :depth: 2 + :class: singlecol + +.. facet:: + :name: genre + :values: tutorial + +.. 
versionadded:: 6.0 + +To change the size of a :ref:`capped collection +`, use the :dbcommand:`collMod` command's +``cappedSize`` option. ``cappedSize`` is specified in bytes, and must be +greater than ``0`` and less than or equal to ``1024^5`` (1 {+pb+}). + +If ``cappedSize`` is less than the current size of the collection, +MongoDB removes the excess documents on the next insert operation. + +About this Task +--------------- + +.. include:: /includes/capped-collections/use-ttl-index.rst + +Before you Begin +---------------- + +Create a capped collection called ``log`` that has a maximum size of +2,621,440 bytes: + +.. code-block:: javascript + + db.createCollection( "log", { capped: true, size: 2621440 } ) + +Steps +----- + +Run the following command to set the maximum size of the ``log`` +collection to 5,242,880 bytes: + +.. code-block:: javascript + + db.runCommand( { collMod: "log", cappedSize: 5242880 } ) + +Learn More +---------- + +- :ref:`capped-collections-change-max-docs` + +- :ref:`capped-collections-check` + +- :ref:`capped-collections-query` diff --git a/source/core/capped-collections/check-if-collection-is-capped.txt b/source/core/capped-collections/check-if-collection-is-capped.txt new file mode 100644 index 00000000000..7ff5b5c6f36 --- /dev/null +++ b/source/core/capped-collections/check-if-collection-is-capped.txt @@ -0,0 +1,65 @@ +.. _capped-collections-check: + +=============================== +Check if a Collection is Capped +=============================== + +.. contents:: On this page + :local: + :backlinks: none + :depth: 2 + :class: singlecol + +.. facet:: + :name: genre + :values: tutorial + +To check if a collection is capped, use the +:method:`~db.collection.isCapped()` method. + +About this Task +--------------- + +.. include:: /includes/capped-collections/use-ttl-index.rst + +Before you Begin +---------------- + +Create a non-capped collection and a capped collection: + +.. code-block:: javascript + + db.createCollection("nonCappedCollection1") + + db.createCollection("cappedCollection1", { capped: true, size: 100000 } ) + +Steps +----- + +To check if the collections are capped, use the +:method:`~db.collection.isCapped()` method: + +.. io-code-block:: + :copyable: true + + .. input:: + :language: javascript + + db.nonCappedCollection1.isCapped() + + db.cappedCollection1.isCapped() + + .. output:: + :language: javascript + + false + true + +Learn More +---------- + +- :ref:`capped-collections-create` + +- :ref:`capped-collections-convert` + +- :pipeline:`$collStats` diff --git a/source/core/capped-collections/convert-collection-to-capped.txt b/source/core/capped-collections/convert-collection-to-capped.txt new file mode 100644 index 00000000000..cc6e38274e6 --- /dev/null +++ b/source/core/capped-collections/convert-collection-to-capped.txt @@ -0,0 +1,84 @@ +.. _capped-collections-convert: + +============================== +Convert a Collection to Capped +============================== + +.. contents:: On this page + :local: + :backlinks: none + :depth: 2 + :class: singlecol + +.. facet:: + :name: genre + :values: tutorial + +To convert a non-capped collection to a :ref:`capped collection +`, use the :dbcommand:`convertToCapped` +database command. + +The ``convertToCapped`` command holds a database-exclusive lock for the +duration of the operation. Other operations that lock the same database +are blocked until the ``convertToCapped`` operation completes. + +About this Task +--------------- + +.. 
include:: /includes/capped-collections/use-ttl-index.rst + +Before you Begin +---------------- + +Create a non-capped collection called ``log2``: + +.. code-block:: javascript + + db.createCollection("log2") + +Steps +----- + +.. procedure:: + :style: normal + + .. step:: Convert the collection to a capped collection + + To convert the ``log2`` collection to a capped collection, run the + :dbcommand:`convertToCapped` command: + + .. code-block:: javascript + + db.runCommand( { + convertToCapped: "log2", + size: 100000 + } ) + + The ``log2`` collection has a maximum size of 100,000 bytes. + + .. step:: Confirm that the collection is capped + + To confirm that the ``log2`` collection is now capped, use the + :method:`~db.collection.isCapped()` method: + + .. io-code-block:: + :copyable: true + + .. input:: + :language: javascript + + db.log2.isCapped() + + .. output:: + :language: javascript + + true + +Learn More +---------- + +- :ref:`faq-concurrency-database-lock` + +- :ref:`capped-collections-change-size` + +- :ref:`capped-collections-query` diff --git a/source/core/capped-collections/create-capped-collection.txt b/source/core/capped-collections/create-capped-collection.txt new file mode 100644 index 00000000000..b752ccc5b46 --- /dev/null +++ b/source/core/capped-collections/create-capped-collection.txt @@ -0,0 +1,95 @@ +.. _capped-collections-create: + +========================== +Create a Capped Collection +========================== + +.. contents:: On this page + :local: + :backlinks: none + :depth: 2 + :class: singlecol + +.. facet:: + :name: genre + :values: tutorial + +To create a :ref:`capped collection `, specify +the ``capped`` option to either the :method:`db.createCollection()` +method or the :dbcommand:`create` command. + +You must create capped collections explicitly. You cannot create a +capped collection implicitly by inserting data into a non-existing +collection. + +When you create a capped collection you must specify the maximum size of +the collection. MongoDB pre-allocates the specified storage for the +collection. The size of the capped collection includes a small amount of +space for internal overhead. + +You can optionally specify a maximum number of documents for the +collection. MongoDB removes older documents if the collection reaches +the maximum size limit before it reaches the maximum document count. + +About this Task +--------------- + +.. include:: /includes/capped-collections/use-ttl-index.rst + +Steps +----- + +The following examples show you how to: + +- :ref:`create-capped-collection-max-size` +- :ref:`create-capped-collection-max-docs` + +.. _create-capped-collection-max-size: + +Create a Capped Collection with a Maximum Size +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Create a capped collection called ``log`` that has a maximum size of +100,000 bytes: + +.. code-block:: javascript + + db.createCollection( "log", { capped: true, size: 100000 } ) + +.. note:: + + The value that you provide for the ``size`` field + must be greater than ``0`` and less than or equal to + ``1024^5`` (1 {+pb+}). MongoDB rounds the ``size`` of all capped + collections up to the nearest integer multiple of 256, in bytes. + +.. _create-capped-collection-max-docs: + +Create a Capped Collection with a Maximum Number of Documents +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Create a capped collection called ``log2`` that has a maximum size of +5,242,880 bytes and can store a maximum of 5,000 documents: + +.. 
code-block:: javascript + + db.createCollection( + "log2", + { + capped: true, + size: 5242880, + max: 5000 + } + ) + +.. important:: + + The ``size`` field is always required, even when you specify the + ``max`` number of documents. + +Learn More +---------- + +- :method:`db.createCollection()` +- :ref:`capped-collections-query` +- :ref:`capped-collections-check` diff --git a/source/core/capped-collections/query-capped-collection.txt b/source/core/capped-collections/query-capped-collection.txt new file mode 100644 index 00000000000..cfc9ad7ab7c --- /dev/null +++ b/source/core/capped-collections/query-capped-collection.txt @@ -0,0 +1,176 @@ +.. _capped-collections-query: + +========================= +Query a Capped Collection +========================= + +.. contents:: On this page + :local: + :backlinks: none + :depth: 2 + :class: singlecol + +.. facet:: + :name: genre + :values: tutorial + +When you query a capped collection without specifying a sort order, +MongoDB returns results in the same order that they were inserted, +meaning the oldest documents are returned first. + +.. include:: /includes/capped-collections/query-natural-order.rst + +About this Task +--------------- + +.. include:: /includes/capped-collections/use-ttl-index.rst + +Multiple Concurrent Writes +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +.. include:: /includes/capped-collections/concurrent-writes.rst + +Before you Begin +---------------- + +.. procedure:: + :style: normal + + .. step:: Create a capped collection + + .. code-block:: javascript + + db.createCollection("log", { capped: true, size: 100000 } ) + + .. step:: Insert sample data + + .. code-block:: javascript + + db.log.insertMany( [ + { + message: "system start", + type: "startup", + time: 1711403508 + }, + { + message: "user login attempt", + type: "info", + time: 1711403907 + }, + { + message: "user login fail", + type: "warning", + time: 1711404209 + }, + { + message: "user login success", + type: "info", + time: 1711404367 + }, + { + message: "user logout", + type: "info", + time: 1711404555 + } + ] ) + +Steps +----- + +The following examples show you how to: + +- :ref:`query-capped-collection-insertion-order` +- :ref:`query-capped-collection-recent` + +.. _query-capped-collection-insertion-order: + +Return Documents in Insertion Order +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Query the ``log`` collection for documents where ``type`` is ``info``, +and use the default sort order: + +.. io-code-block:: + :copyable: true + + .. input:: + :language: javascript + + db.log.find( { type: "info" } ) + + .. output:: + :language: javascript + + [ + { + _id: ObjectId("660204b74cabd75abebadbc2"), + message: 'user login attempt', + type: 'info', + time: 1711403907 + }, + { + _id: ObjectId("660204b74cabd75abebadbc4"), + message: 'user login success', + type: 'info', + time: 1711404367 + }, + { + _id: ObjectId("660204b74cabd75abebadbc5"), + message: 'user logout', + type: 'info', + time: 1711404555 + } + ] + +Documents are returned in the order that they were inserted. + +.. _query-capped-collection-recent: + +Return Most Recent Documents +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +To return documents in reverse insertion order (meaning the most recent +documents are first), specify the :method:`~cursor.sort()` method with +the :operator:`$natural` parameter set to ``-1``. + +The following query returns the three most recent documents from the +``log`` collection, starting with the most recent document: + +.. io-code-block:: + :copyable: true + + .. 
input:: + :language: javascript + + db.log.find().sort( { $natural: -1 } ).limit(3) + + .. output:: + :language: javascript + + [ + { + _id: ObjectId("6601f2484cabd75abebadbbb"), + message: 'user logout', + type: 'info', + time: 1711404555 + }, + { + _id: ObjectId("6601f2484cabd75abebadbba"), + message: 'user login success', + type: 'info', + time: 1711404367 + }, + { + _id: ObjectId("6601f2484cabd75abebadbb9"), + message: 'user login fail', + type: 'warning', + time: 1711404209 + } + ] + +Learn More +---------- + +- :ref:`index-feature-ttl` +- :ref:`read-operations-indexing` +- :ref:`create-indexes-to-support-queries` diff --git a/source/core/causal-consistency-read-write-concerns.txt b/source/core/causal-consistency-read-write-concerns.txt index a0cb05c679c..b012559d6e5 100644 --- a/source/core/causal-consistency-read-write-concerns.txt +++ b/source/core/causal-consistency-read-write-concerns.txt @@ -64,7 +64,7 @@ guarantee causal consistency for: acknowledged by a majority of the replica set members and is durable. - Write operations with :writeconcern:`"majority"` write concern; - in other words, the write operations that request acknowledgement + in other words, the write operations that request acknowledgment that the operation has been applied to a majority of the replica set's voting members. diff --git a/source/core/clustered-collections.txt b/source/core/clustered-collections.txt index 34bc6724c14..ee2ea99bf83 100644 --- a/source/core/clustered-collections.txt +++ b/source/core/clustered-collections.txt @@ -14,9 +14,6 @@ Clustered Collections .. versionadded:: 5.3 -Overview --------- - .. include:: /includes/clustered-collections-introduction.rst .. important:: Backward-Incompatible Feature @@ -27,9 +24,7 @@ Overview Benefits -------- -Because clustered collections store documents ordered by the -:ref:`clustered index ` key value, -clustered collections have the following benefits compared to +Clustered collections have the following benefits compared to non-clustered collections: - Faster queries on clustered collections without needing a secondary @@ -70,7 +65,8 @@ Behavior -------- Clustered collections store documents ordered by the :ref:`clustered -index ` key value. +index ` key value. The clustered +index key must be ``{ _id: 1 }``. You can only have one clustered index in a collection because the documents can be stored in only one order. Only collections with a @@ -86,12 +82,12 @@ from secondary indexes: collection size returned by the :dbcommand:`collStats` command includes the clustered index size. -Starting in MongoDB 6.2, if a usable clustered index exists, the MongoDB +Starting in MongoDB 6.0.7, if a usable clustered index exists, the MongoDB query planner evaluates the clustered index against secondary indexes in the query planning process. When a query uses a clustered index, MongoDB performs a :term:`bounded collection scan`. -Prior to MongoDB 6.2, if a :term:`secondary index ` +Prior to MongoDB 6.0.7, if a :term:`secondary index ` existed on a clustered collection and the secondary index was usable by your query, the query planner selected the secondary index instead of the clustered index by default. In MongoDB 6.1 and prior, to use the @@ -104,6 +100,8 @@ Limitations Clustered collection limitations: +- The clustered index key must be ``{ _id: 1 }``. + - You cannot transform a non-clustered collection to a clustered collection, or the reverse. 
Instead, you can: @@ -115,8 +113,6 @@ Clustered collection limitations: - Export collection data with :binary:`~bin.mongodump` and import the data into another collection with :binary:`~bin.mongorestore`. -- The clustered index key must be on the ``_id`` field. - - You cannot hide a clustered index. See :doc:`Hidden indexes `. diff --git a/source/core/collection-level-access-control.txt b/source/core/collection-level-access-control.txt index e2db98be145..6dd4a62f22e 100644 --- a/source/core/collection-level-access-control.txt +++ b/source/core/collection-level-access-control.txt @@ -25,7 +25,7 @@ Privileges and Scope -------------------- A privilege consists of :ref:`actions ` -and the :doc:`resources ` upon which the +and the :ref:`resources ` upon which the actions are permissible; i.e. the resources define the scope of the actions for that privilege. diff --git a/source/core/crud.txt b/source/core/crud.txt index 48422a902d9..33e9a21eb55 100644 --- a/source/core/crud.txt +++ b/source/core/crud.txt @@ -23,7 +23,6 @@ Atomicity, consistency, and distributed operations Query Plan, Performance, and Analysis - :doc:`/core/query-plans` - :doc:`/core/query-optimization` - - :doc:`/tutorial/analyze-query-plan` - :doc:`/core/write-performance` Miscellaneous @@ -37,7 +36,6 @@ Miscellaneous .. toctree:: :titlesonly: - /tutorial/analyze-query-plan /core/write-operations-atomicity /core/distributed-queries /core/dot-dollar-considerations diff --git a/source/core/csfle.txt b/source/core/csfle.txt index b12409ea877..619d420328f 100644 --- a/source/core/csfle.txt +++ b/source/core/csfle.txt @@ -29,6 +29,17 @@ You can set up {+csfle-abbrev+} using the following mechanisms: specify the logic for encryption with this library throughout your application. +Considerations +-------------- + +When implementing an application that uses {+csfle+}, consider the points listed in :ref:`Security Considerations `. + +For limitations, see :ref:`{+csfle-abbrev+} limitations +`. + +Compatibility +~~~~~~~~~~~~~ + The following table shows which MongoDB server products support which {+csfle-abbrev+} mechanisms: @@ -85,7 +96,6 @@ The fundamentals section contains the following pages: - :ref:`csfle-fundamentals-automatic-encryption` - :ref:`csfle-fundamentals-manual-encryption` - :ref:`csfle-fundamentals-create-schema` -- :ref:`csfle-reference-keys-key-vaults` - :ref:`csfle-fundamentals-manage-keys` - :ref:`csfle-reference-encryption-algorithms` @@ -98,18 +108,15 @@ To learn how to perform specific tasks with {+csfle-abbrev+}, see the Reference --------- -To view information to help you develop your {+csfle-abbrev+} enabled applications, -see the :ref:`csfle-reference` section. +To learn about encryption key management, read :ref:`qe-reference-keys-key-vaults`. 
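For a concrete starting point, the following is a minimal sketch of inspecting the {+dek-long+}s stored in a key vault from ``mongosh``. It assumes a connection that was opened with client-side encryption options already configured; the ``getKeyVault()`` helper mirrors the usage shown later on these pages, and the key vault namespace itself is illustrative only.

.. code-block:: javascript

   // Assumes the connection was created with keyVaultNamespace and
   // kmsProviders configured for client-side field level encryption.
   keyVault = db.getKeyVault()

   // List every data encryption key (DEK) document in the key vault collection.
   keyVault.getKeys()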
-The reference section contains the following pages: +For more information about developing your {+csfle-abbrev+}-enabled applications, +see the :ref:`csfle-reference` section, which contains the following pages: -- :ref:`csfle-compatibility-reference` -- :ref:`csfle-reference-encryption-limits` - :ref:`csfle-reference-encryption-schemas` - :ref:`csfle-reference-server-side-schema` - :ref:`csfle-reference-automatic-encryption-supported-operations` - :ref:`csfle-reference-mongo-client` -- :ref:`csfle-reference-kms-providers` - :ref:`csfle-reference-encryption-components` - :ref:`csfle-reference-decryption` - :ref:`csfle-reference-cryptographic-primitives` diff --git a/source/core/csfle/features.txt b/source/core/csfle/features.txt index 6927834621a..cd5ab542a2d 100644 --- a/source/core/csfle/features.txt +++ b/source/core/csfle/features.txt @@ -51,10 +51,10 @@ read and write the encrypted data fields. {+csfle-abbrev+}, see :ref:``. To view a list of all supported KMS providers, see - :ref:``. + :ref:``. To learn more about why you should use a remote KMS, see - :ref:`csfle-reasons-to-use-remote-kms`. + :ref:`qe-reasons-to-use-remote-kms`. .. _csfle-feature-comparison: @@ -124,7 +124,7 @@ Comparison of Features The following diagram lists security features MongoDB supports and the potential security vulnerabilities that they address: -.. image:: /images/CSFLE_Security_Feature_Chart.png +.. image:: /images/QE_Security_Feature_Chart.png :alt: Diagram that describes MongoDB security features and the potential vulnerabilities that they address .. important:: Use the Mechanisms Together diff --git a/source/core/csfle/fundamentals.txt b/source/core/csfle/fundamentals.txt index b5fae6051a9..fe75174d658 100644 --- a/source/core/csfle/fundamentals.txt +++ b/source/core/csfle/fundamentals.txt @@ -12,12 +12,13 @@ Fundamentals :depth: 2 :class: singlecol -Read the following sections to learn how {+csfle+} works and how to use it: +To learn about encryption key management, read :ref:`qe-reference-keys-key-vaults`. + +To learn how {+csfle+} works and how to use it, read the following sections: - :ref:`csfle-fundamentals-automatic-encryption` - :ref:`csfle-fundamentals-manual-encryption` - :ref:`csfle-fundamentals-create-schema` -- :ref:`csfle-reference-keys-key-vaults` - :ref:`csfle-fundamentals-manage-keys` - :ref:`csfle-reference-encryption-algorithms` @@ -27,6 +28,5 @@ Read the following sections to learn how {+csfle+} works and how to use it: /core/csfle/fundamentals/automatic-encryption /core/csfle/fundamentals/manual-encryption /core/csfle/fundamentals/create-schema - /core/csfle/fundamentals/keys-key-vaults /core/csfle/fundamentals/manage-keys /core/csfle/fundamentals/encryption-algorithms diff --git a/source/core/csfle/fundamentals/create-schema.txt b/source/core/csfle/fundamentals/create-schema.txt index c45e05c64fb..cb9b4d2a2c0 100644 --- a/source/core/csfle/fundamentals/create-schema.txt +++ b/source/core/csfle/fundamentals/create-schema.txt @@ -43,7 +43,7 @@ Encryption rules must contain either the ``encrypt`` or To learn more about the encryption algorithms you can define in your encryption schema, see :ref:``. -To learn more about {+dek-long+}s, see :ref:`csfle-reference-keys-key-vaults`. +To learn more about {+dek-long+}s, see :ref:`qe-reference-keys-key-vaults`. 
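To make the shape of these encryption rules concrete, the following is a minimal illustrative schema that pairs an ``encryptMetadata`` key with a field-level ``encrypt`` rule. The ``ssn`` field name and the ``keyId`` UUID are placeholders rather than values taken from this page.

.. code-block:: javascript

   {
     bsonType: "object",
     encryptMetadata: {
       // Placeholder UUID: substitute the _id of an existing data encryption key.
       keyId: [ UUID("bffb361b-30d3-42c0-b7a4-d24ba97b6d35") ]
     },
     properties: {
       ssn: {
         encrypt: {
           bsonType: "string",
           algorithm: "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic"
         }
       }
     }
   }

Fields that have no ``encrypt`` rule in the schema are stored unencrypted.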
encrypt Keyword ~~~~~~~~~~~~~~~ diff --git a/source/core/csfle/fundamentals/keys-key-vaults.txt b/source/core/csfle/fundamentals/keys-key-vaults.txt deleted file mode 100644 index 00cb0b29081..00000000000 --- a/source/core/csfle/fundamentals/keys-key-vaults.txt +++ /dev/null @@ -1,90 +0,0 @@ -.. _csfle-reference-keys-key-vaults: - -=================== -Keys and Key Vaults -=================== - -.. default-domain:: mongodb - -.. contents:: On this page - :local: - :backlinks: none - :depth: 2 - :class: singlecol - -Overview --------- - -In this guide, you can learn details about the following components of -{+csfle+} ({+csfle-abbrev+}): - -- {+dek-long+}s ({+dek-abbr+})s -- {+cmk-long+}s ({+cmk-abbr+})s -- {+key-vault-long+}s -- {+kms-long+} ({+kms-abbr+}) - -To view step by step guides demonstrating how to use the preceding -components to set up a {+csfle-abbrev+} enabled client, see the following resources: - -- :ref:`` -- :ref:`` - -.. _csfle-key-architecture: - -Data Encryption Keys and the Customer Master Key ------------------------------------------------- - -.. include:: /includes/queryable-encryption/qe-csfle-about-dek-cmk-keys.rst - -.. include:: /includes/queryable-encryption/qe-csfle-warning-remote-kms.rst - -.. _csfle-key-rotation: - -Key Rotation -~~~~~~~~~~~~ - -.. include:: /includes/queryable-encryption/qe-csfle-key-rotation.rst - -.. _csfle-reference-key-vault: -.. _field-level-encryption-keyvault: - -{+key-vault-long-title+}s ---------------------- - -.. include:: /includes/queryable-encryption/qe-csfle-about-key-vault-collections.rst - -To view diagrams detailing how your {+dek-abbr+}, {+cmk-abbr+}, and -{+key-vault-long+} interact -in all supported {+kms-abbr+} provider architectures, see -:ref:`csfle-reference-kms-providers`. - -{+key-vault-long-title+} Name -~~~~~~~~~~~~~~~~~~~~~~~~~ - -.. include:: /includes/fact-csfle-qe-keyvault-name.rst - -Permissions -~~~~~~~~~~~ - -.. include:: /includes/queryable-encryption/qe-csfle-key-vault-permissions.rst - -To learn how to grant your application access to your {+cmk-abbr+}, see the -:ref:`` tutorial. - -Key Vault Cluster -~~~~~~~~~~~~~~~~~ - -.. include:: /includes/queryable-encryption/qe-csfle-key-vault-cluster.rst - -To specify the cluster that hosts your {+key-vault-long+}, use the -``keyVaultClient`` field of your client's ``MongoClient`` object. -To learn more about the {+csfle-abbrev+}-specific configuration options in your -client's ``MongoClient`` object, see :ref:``. - -Update a {+key-vault-long-title+} -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -.. include:: /includes/in-use-encryption/update-a-key.rst - -To view a tutorial that shows how to create a {+dek-abbr+}, see -the :ref:`Quick Start `. diff --git a/source/core/csfle/fundamentals/manage-keys.txt b/source/core/csfle/fundamentals/manage-keys.txt index 18791491ca9..1b3959b49d3 100644 --- a/source/core/csfle/fundamentals/manage-keys.txt +++ b/source/core/csfle/fundamentals/manage-keys.txt @@ -30,7 +30,7 @@ MongoDB uses the following components to perform {+csfle+}: - {+kms-long+} ({+kms-abbr+}) To learn more about keys and key vaults, see -:ref:`csfle-reference-keys-key-vaults`. +:ref:`qe-reference-keys-key-vaults`. Supported Key Management Services --------------------------------- @@ -47,31 +47,7 @@ Supported Key Management Services To learn more about these providers, including diagrams that show how your application uses them to perform {+csfle+}, see -:ref:`csfle-reference-kms-providers`. - -.. 
_csfle-reasons-to-use-remote-kms: - -Reasons to Use a Remote Key Management System -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Using a remote {+kms-long+} to manage your {+cmk-long+} -has the following advantages over using your local filesystem to host -the {+cmk-abbr+}: - -- Secure storage of the key with access auditing -- Reduced risk of access permission issues -- Availability and distribution of the key to remote clients -- Automated key backup and recovery -- Centralized encryption key lifecycle management - -Additionally, for the following {+kms-abbr+} providers, your -{+kms-abbr+} remotely encrypts and decrypts your {+dek-long+}, ensuring -your {+cmk-long+} is never exposed to your {+csfle-abbrev+}-enabled -application: - -- {+aws-long+} KMS -- {+azure-kv+} -- {+gcp-kms-abbr+} +:ref:`qe-fundamentals-kms-providers`. Manage a {+dek-long+}'s Alternate Name --------------------------------------------- @@ -147,7 +123,7 @@ Select the tab that corresponds to your driver language: :language: javascript To learn more about ``dataKeyOpts`` and ``kmsProviders`` objects, see -:ref:`csfle-reference-kms-providers`. +:ref:`qe-fundamentals-kms-providers`. Use Key Alternate Names in an Automatic Encryption Schema ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -286,7 +262,7 @@ The ``rewrapManyDataKey`` uses the following syntax: ) To learn more about the ``dataKeyOpts`` object for your KMS provider, see -:ref:`csfle-reference-kms-providers-supported-kms`. +:ref:`qe-fundamentals-kms-providers-supported-kms`. .. _field-level-encryption-data-key-delete: @@ -307,7 +283,7 @@ You can delete a {+dek-long+} from your {+key-vault-long+} using standard CRUD keyVault = db.getKeyVault() keyVault.deleteKey(UUID("")) -To learn more about {+key-vault-long+}s see :ref:`csfle-reference-key-vault`. +To learn more about {+key-vault-long+}s see :ref:`qe-reference-keys-key-vaults`. Learn More ---------- diff --git a/source/core/csfle/fundamentals/manual-encryption.txt b/source/core/csfle/fundamentals/manual-encryption.txt index 0607bf698bf..01b3ef1c32d 100644 --- a/source/core/csfle/fundamentals/manual-encryption.txt +++ b/source/core/csfle/fundamentals/manual-encryption.txt @@ -17,13 +17,9 @@ Overview -------- -Learn how to use the **{+manual-enc+}** mechanism of {+csfle+} -({+csfle-abbrev+}). +.. include:: /includes/queryable-encryption/qe-csfle-manual-enc-overview.rst -.. include:: /includes/fact-manual-enc-definition.rst - -{+manual-enc-first+} is available in the following MongoDB products -of version 4.2 or later: +{+manual-enc-first+} is available in the following MongoDB products: - MongoDB Community Server - MongoDB Enterprise Advanced @@ -219,7 +215,7 @@ Learn More ---------- To learn more about {+key-vault-long+}s, {+dek-long+}s, and {+cmk-long+}s, -see :ref:`csfle-reference-keys-key-vaults`. +see :ref:`qe-reference-keys-key-vaults`. To learn more about {+kms-abbr+} providers and ``kmsProviders`` objects, -see :ref:`csfle-reference-kms-providers`. +see :ref:`qe-fundamentals-kms-providers`. 
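As a rough sketch of what {+manual-enc+} looks like from ``mongosh``, the following example encrypts and then decrypts a single value. The connection string, the local key material, and the {+dek-long+} UUID are placeholders, and a remote {+kms-abbr+} is preferable to a local key outside of testing.

.. code-block:: javascript

   // Placeholder local master key; use a remote KMS in production.
   const localMasterKey = { key: BinData(0, "<base64-encoded 96-byte key>") }

   const encryptedClient = Mongo("mongodb://localhost:27017", {
     keyVaultNamespace: "encryption.__keyVault",
     kmsProviders: { local: localMasterKey }
   })

   const clientEncryption = encryptedClient.getClientEncryption()

   // Explicitly encrypt a value with an existing data encryption key.
   const encryptedSSN = clientEncryption.encrypt(
     UUID("<placeholder DEK UUID>"),
     "123-45-6789",
     "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic"
   )

   // Explicitly decrypt the stored value when reading it back.
   const decryptedSSN = clientEncryption.decrypt(encryptedSSN)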
diff --git a/source/core/csfle/quick-start.txt b/source/core/csfle/quick-start.txt index 9383a9231d7..0a881f3fb8d 100644 --- a/source/core/csfle/quick-start.txt +++ b/source/core/csfle/quick-start.txt @@ -265,10 +265,10 @@ To learn how {+csfle-abbrev+} works, see To learn more about the topics mentioned in this guide, see the following links: -- :ref:`{+cmk-long+}s ` -- :ref:`{+kms-long+} providers ` -- :ref:`{+dek-long+}s ` -- :ref:`{+key-vault-long+}s ` +- :ref:`{+cmk-long+}s ` +- :ref:`{+kms-long+} providers ` +- :ref:`{+dek-long+}s ` +- :ref:`{+key-vault-long+}s ` - :ref:`Encryption Schemas ` - :ref:`mongocryptd ` - :ref:`{+csfle-abbrev+}-specific MongoClient settings ` diff --git a/source/core/csfle/reference.txt b/source/core/csfle/reference.txt index 7f19c937cef..9edb96163dd 100644 --- a/source/core/csfle/reference.txt +++ b/source/core/csfle/reference.txt @@ -12,18 +12,13 @@ Reference :depth: 2 :class: singlecol -.. versionadded:: 4.2 - Read the following sections to learn about components of the {+csfle+} ({+csfle-abbrev+}) feature: -- :ref:`csfle-compatibility-reference` -- :ref:`csfle-reference-encryption-limits` - :ref:`csfle-reference-encryption-schemas` - :ref:`csfle-reference-server-side-schema` - :ref:`csfle-reference-automatic-encryption-supported-operations` - :ref:`csfle-reference-mongo-client` -- :ref:`csfle-reference-kms-providers` - :ref:`csfle-reference-encryption-components` - :ref:`csfle-reference-decryption` - :ref:`csfle-reference-cryptographic-primitives` @@ -34,13 +29,10 @@ of the {+csfle+} ({+csfle-abbrev+}) feature: .. toctree:: :titlesonly: - /core/csfle/reference/compatibility - /core/csfle/reference/limitations /core/csfle/reference/encryption-schemas /core/csfle/reference/server-side-schema /core/csfle/reference/supported-operations /core/csfle/reference/csfle-options-clients - /core/csfle/reference/kms-providers /core/csfle/reference/encryption-components /core/csfle/reference/decryption /core/csfle/reference/cryptographic-primitives diff --git a/source/core/csfle/reference/compatibility.txt b/source/core/csfle/reference/compatibility.txt deleted file mode 100644 index 6bfbe0bbbee..00000000000 --- a/source/core/csfle/reference/compatibility.txt +++ /dev/null @@ -1,102 +0,0 @@ -.. _csfle-compatibility-reference: -.. _field-level-encryption-drivers: -.. _csfle-driver-compatibility: - -=================== -CSFLE Compatibility -=================== - -This page describes the MongoDB and driver versions with which {+csfle+} -is compatible. - -MongoDB Edition and Version Compatibility ------------------------------------------ - -:ref:`Automatic encryption ` -with {+csfle+} is only available with MongoDB Enterprise Edition, -version 4.2 or later. - -:ref:`Explicit encryption ` with -{+csfle+} is available with MongoDB Community and Enterprise Edition, -version 4.2 or later. - -Driver Compatibility Table --------------------------- - -{+csfle+} is only available the following official compatible driver -versions or later: - -.. 
list-table:: - :widths: 20 20 60 - :header-rows: 1 - - * - Driver - - Supported Versions - - Quickstarts / Tutorials - - * - :driver:`Node ` - - ``3.4.0+`` - - | `Node.js Quickstart `__ - | :driver:`Client-Side Field Level Encryption Guide ` - - * - :driver:`Java ` - - ``3.12.0+`` - - | `Java Driver Quickstart `__ - | `Java Async Driver Quickstart `__ - | :driver:`Client-Side Field Level Encryption Guide ` - - * - `Java Reactive Streams `__ - - ``1.13.0+`` - - `Java RS Documentation `__ - - * - :driver:`Python (PyMongo) ` - - ``3.10.0+`` - - | `Python Driver Quickstart `__ - | :driver:`Client-Side Field Level Encryption Guide ` - - * - :driver:`C#/.NET ` - - ``2.10.0+`` - - `.NET Driver Quickstart `__ - - * - :driver:`C ` - - ``1.17.5`` - - `C Driver Client-Side Field Level Encryption `__ - - * - :driver:`Go ` - - ``1.2+`` - - `Go Driver Quickstart `__ - - * - :driver:`Scala ` - - ``2.8.0+`` - - `Scala Documentation `__ - - * - :driver:`PHP ` - - ``1.6.0+`` - - `PHP Driver Quickstart `__ - - * - `Ruby `__ - - ``2.12.1+`` - - `Ruby Driver Quickstart `__ - -.. _csfle-reference-compatability-key-rotation: - -.. important:: Key Rotation Support - - To use the key rotation API of {+csfle-abbrev+}, such as the - ``rewrapManyDateKey`` method, you must use specific versions - of either your driver's binding package or ``libmongocrypt``. - - The following list details each driver's key rotation API - dependencies: - - - If you're using Node.js driver version 6.0.0 or later, - ``mongodb-client-encryption`` must have the same major version number - as the driver. - Otherwise, use a 2.x.x version of ``mongodb-client-encryption`` that is 2.2.0 or later. - - Java Driver: Use ``mongodb-crypt`` version {+mongodb-crypt-version+} or later. - - pymongo: Use ``pymongocrypt`` version 1.3.1 or later. - - Go Driver: Use ``libmongocrypt`` version 1.5.2 or later. - - C#/.NET Driver: Use the MongoDB C#/.NET Driver version 2.17.1 or later. - -Please refer to the driver reference documentation for syntax and -implementation examples. diff --git a/source/core/csfle/reference/csfle-options-clients.txt b/source/core/csfle/reference/csfle-options-clients.txt index 81a6d2e45e4..6feddd6d611 100644 --- a/source/core/csfle/reference/csfle-options-clients.txt +++ b/source/core/csfle/reference/csfle-options-clients.txt @@ -1,7 +1,7 @@ .. _csfle-reference-mongo-client: ============================================= -{+csfle-abbrev+}-Specific MongoClient Options +MongoClient Options for {+csfle-abbrev+} ============================================= .. default-domain:: mongodb @@ -53,7 +53,7 @@ The following table describes the structure of an ``{+auto-encrypt-options+}`` configuration is used as the host of your {+key-vault-long+}. - To learn more about {+key-vault-long+}s, see :ref:`csfle-reference-key-vault`. + To learn more about {+key-vault-long+}s, see :ref:`qe-reference-keys-key-vaults`. * - ``keyVaultNamespace`` @@ -73,9 +73,9 @@ The following table describes the structure of an managing your {+cmk-long+}s (CMKs). To learn more about ``kmsProviders`` objects, see - :ref:`csfle-reference-kms-providers`. + :ref:`qe-fundamentals-kms-providers`. - To learn more about {+cmk-long+}s, see :ref:`csfle-reference-keys-key-vaults`. + To learn more about {+cmk-long+}s, see :ref:`qe-reference-keys-key-vaults`. 
* - ``tlsOptions`` diff --git a/source/core/csfle/reference/decryption.txt b/source/core/csfle/reference/decryption.txt index f52bcdedf34..51bd5589d16 100644 --- a/source/core/csfle/reference/decryption.txt +++ b/source/core/csfle/reference/decryption.txt @@ -123,7 +123,7 @@ performs the following procedure: {+dek-long+}, decryption fails and the driver returns the encrypted ``BinData`` blob. - .. include:: /includes/csfle-warning-local-keys.rst + .. include:: /includes/queryable-encryption/qe-warning-local-keys.rst #. Decrypt the ``BinData`` value using the decrypted {+dek-long+} and appropriate algorithm. @@ -152,4 +152,4 @@ To learn how to configure the database connection for {+csfle+}, see :ref:`csfle-reference-mongo-client`. To learn more about the relationship between {+dek-long+}s and -{+cmk-long+}s, see :ref:`csfle-reference-keys-key-vaults`. +{+cmk-long+}s, see :ref:`qe-reference-keys-key-vaults`. diff --git a/source/core/csfle/reference/encryption-components.txt b/source/core/csfle/reference/encryption-components.txt index d5a5f243264..d33b7f2f692 100644 --- a/source/core/csfle/reference/encryption-components.txt +++ b/source/core/csfle/reference/encryption-components.txt @@ -64,7 +64,7 @@ your {+key-vault-long+} on a different MongoDB cluster than the cluster storing your encrypted application data. To learn more about the {+key-vault-long+}, see -:ref:`csfle-reference-keys-key-vaults`. +:ref:`qe-reference-keys-key-vaults`. {+kms-long+} ~~~~~~~~~~~~~~~~~~~~~ @@ -73,7 +73,7 @@ The {+kms-long+} ({+kms-abbr+}) stores the {+cmk-long+} ({+cmk-abbr+}) used to encrypt {+dek-long+}s. To view a list of all {+kms-abbr+} providers MongoDB supports, -see :ref:`csfle-reference-kms-providers`. +see :ref:`qe-fundamentals-kms-providers`. MongoDB Cluster ~~~~~~~~~~~~~~~ diff --git a/source/core/csfle/reference/encryption-schemas.txt b/source/core/csfle/reference/encryption-schemas.txt index 630774029af..f6a3fbd63b7 100644 --- a/source/core/csfle/reference/encryption-schemas.txt +++ b/source/core/csfle/reference/encryption-schemas.txt @@ -1,3 +1,6 @@ +.. meta:: + :keywords: client-side field level encryption, encryption + .. _csfle-reference-encryption-schemas: .. _field-level-encryption-json-schema: @@ -168,7 +171,6 @@ Definition - ``bool`` - ``object`` - ``array`` - - ``javascriptWithScope`` (*Deprecated in MongoDB 4.4*) If :autoencryptkeyword:`encrypt.algorithm` or its inherited value is ``AED_AES_256_CBC_HMAC_SHA_512-Random``, ``bsonType`` is @@ -211,7 +213,7 @@ Definition Official MongoDB 4.2+ compatible drivers have language-specific requirements for specifying the UUID. Defer to the - :ref:`driver documentation ` + :ref:`driver documentation ` for complete documentation on implementing client-side field level encryption. @@ -299,14 +301,14 @@ Definition part of the automatic encryption :ref:`configuration options `. The specified configuration options must *also* include appropriate access to the - :ref:`Key Management Service (KMS) ` and + :ref:`Key Management Service (KMS) ` and {+cmk-long+} (CMK) used to create the data key. Automatic encryption fails if the {+dek-long+} does not exist *or* if the client cannot decrypt the key with the specified KMS and CMK. Official MongoDB 4.2+ compatible drivers have language-specific requirements for specifying the UUID. Defer to the - :ref:`driver documentation ` + :ref:`driver documentation ` for complete documentation on implementing client-side field level encryption. @@ -428,7 +430,7 @@ and ``medicalRecords`` fields for encryption. 
- The ``medicalRecords`` field requires randomized encryption using the specified key. -.. include:: /includes/fact-csfle-compatibility-drivers.rst +.. include:: /includes/queryable-encryption/fact-csfle-compatibility-drivers.rst .. _field-level-encryption-auto-encrypt-multiple-fields-inheritance: @@ -543,10 +545,10 @@ and ``medicalRecords`` fields for encryption. specified key. The ``encrypt`` options override those specified in the parent ``encryptMetadata`` field. -.. include:: /includes/fact-csfle-compatibility-drivers.rst +.. include:: /includes/queryable-encryption/fact-csfle-compatibility-drivers.rst To learn more about your CMK and {+key-vault-long+}, -see the :ref:`key vaults ` page. +see the :ref:`key vaults ` page. To learn more about encryption algorithms, see the :ref:`Encryption algorithms ` page. diff --git a/source/core/csfle/reference/kms-providers.txt b/source/core/csfle/reference/kms-providers.txt deleted file mode 100644 index 9152aad09cb..00000000000 --- a/source/core/csfle/reference/kms-providers.txt +++ /dev/null @@ -1,183 +0,0 @@ -.. _csfle-reference-kms-providers: -.. _field-level-encryption-kms: - -=================== -CSFLE KMS Providers -=================== - -.. default-domain:: mongodb - -.. contents:: On this page - :local: - :backlinks: none - :depth: 2 - :class: singlecol - -Overview --------- - -Learn about the {+kms-long+} ({+kms-abbr+}) providers {+csfle+} -({+csfle-abbrev+}) supports. - -{+kms-long+} Tasks -------------------------------- - -In {+csfle-abbrev+}, your {+kms-long+} performs the following -tasks: - -- :ref:`Creates and stores your {+cmk-long+} ` -- :ref:`Create and Encrypt your {+dek-long+}s ` - -To learn more about {+cmk-long+}s and {+dek-long+}s, see -:ref:`csfle-reference-keys-key-vaults`. - -.. _csfle-reference-kms-providers-create-and-store: - -Create and Store your {+cmk-long+} -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -To create a {+cmk-long+}, you must configure your {+kms-long+} -to generate your {+cmk-long+} as follows: - -.. image:: /images/CSFLE_Master_Key_KMS.png - :alt: Diagram - -To view a tutorial demonstrating how to create and store your -{+cmk-abbr+} in your preferred {+kms-abbr+}, -see :ref:`csfle-tutorial-automatic-encryption`. - -.. _csfle-reference-kms-providers-encrypt: - -Create and Encrypt a {+dek-long+} -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -When you create a {+dek-long+}, you must perform the following actions: - -- Instantiate a ``ClientEncryption`` instance in your - {+csfle-abbrev+}-enabled application: - - * Provide a ``kmsProviders`` object that specifies the credentials - your {+csfle-abbrev+}-enabled application uses to authenticate with - your {+kms-abbr+} provider. - -- Create a {+dek-long+} with the ``CreateDataKey`` method of the - ``ClientEncryption`` object in your {+csfle-abbrev+}-enabled application. - - * Provide a ``dataKeyOpts`` object that specifies with which key - your {+kms-abbr+} should encrypt your new {+dek-long+}. - -To view a tutorial demonstrating how to create and encrypt a -{+dek-long+}, see the following resources: - -- :ref:`csfle-quick-start` -- :ref:`csfle-tutorial-automatic-encryption` - -To view the structure of ``kmsProviders`` and ``dataKeyOpts`` objects -for all supported {+kms-abbr+} providers, see -:ref:`csfle-reference-kms-providers-supported-kms`. - -.. 
_csfle-reference-kms-providers-supported-kms: - -Supported Key Management Services ---------------------------------- - -The following sections of this page present the following information -for all {+kms-long+} providers: - -- Architecture of {+csfle-abbrev+}-enabled client -- Structure of ``kmsProviders`` objects -- Structure of ``dataKeyOpts`` objects - -{+csfle-abbrev+} supports the following {+kms-long+} -providers: - -- :ref:`csfle-reference-kms-providers-aws` -- :ref:`csfle-reference-kms-providers-azure` -- :ref:`csfle-reference-kms-providers-gcp` -- :ref:`csfle-reference-kms-providers-kmip` -- :ref:`csfle-reference-kms-providers-local` - -.. _csfle-reference-kms-providers-aws: -.. _field-level-encryption-aws-kms: - -Amazon Web Services KMS -~~~~~~~~~~~~~~~~~~~~~~~ - -This section provides information related to using -`AWS Key Management Service `_ -in your {+csfle-abbrev+}-enabled application. - -To view a tutorial demonstrating how to use AWS KMS in your -{+csfle-abbrev+}-enabled application, see -:ref:`csfle-tutorial-automatic-aws`. - -.. include:: /includes/reference/kms-providers/aws.rst - -.. _csfle-reference-kms-providers-azure: -.. _field-level-encryption-azure-keyvault: - -Azure Key Vault -~~~~~~~~~~~~~~~ - -This section provides information related to using -`Azure Key Vault -`_ -in your {+csfle-abbrev+}-enabled application. - -To view a tutorial demonstrating how to use Azure Key Vault in your -{+csfle-abbrev+}-enabled application, see -:ref:`csfle-tutorial-automatic-azure`. - -.. include:: /includes/reference/kms-providers/azure.rst - -.. _csfle-reference-kms-providers-gcp: -.. _field-level-encryption-gcp-kms: - -Google Cloud Platform KMS -~~~~~~~~~~~~~~~~~~~~~~~~~ - -This section provides information related to using -`Google Cloud Key Management `_ -in your {+csfle-abbrev+}-enabled application. - -To view a tutorial demonstrating how to use GCP KMS in your -{+csfle-abbrev+}-enabled application, see -:ref:`csfle-tutorial-automatic-gcp`. - -.. include:: /includes/reference/kms-providers/gcp.rst - -.. _csfle-reference-kms-providers-kmip: - -KMIP -~~~~ - -This section provides information related to using a -`KMIP `_ -compliant {+kms-long+} in your {+csfle-abbrev+}-enabled application. - -To view a tutorial demonstrating how to use a KMIP compliant -{+kms-long+} in your {+csfle-abbrev+}-enabled application, see -:ref:`csfle-tutorial-automatic-kmip`. - -To learn how to set up KMIP with HashiCorp Vault, see the `How to Set Up HashiCorp Vault KMIP Secrets Engine with MongoDB CSFLE or Queryable Encryption -`__ -blog post. - -.. include:: /includes/reference/kms-providers/kmip.rst - -.. _csfle-reference-kms-providers-local: -.. _field-level-encryption-local-kms: - -Local Key Provider -~~~~~~~~~~~~~~~~~~ - -This section provides information related to using a Local Key Provider (your filesystem) -in your {+csfle-abbrev+}-enabled application. - -.. include:: /includes/csfle-warning-local-keys.rst - -To view a tutorial demonstrating how to use a Local Key Provider -for testing {+csfle+}, see -:ref:`csfle-quick-start`. - -.. include:: /includes/reference/kms-providers/local.rst diff --git a/source/core/csfle/reference/libmongocrypt.txt b/source/core/csfle/reference/libmongocrypt.txt index 7c225052c4f..f956a925b37 100644 --- a/source/core/csfle/reference/libmongocrypt.txt +++ b/source/core/csfle/reference/libmongocrypt.txt @@ -54,7 +54,7 @@ Debian .. 
code-block:: sh - sudo sh -c 'curl -s --location https://www.mongodb.org/static/pgp/libmongocrypt.asc | gpg --dearmor >/etc/apt/trusted.gpg.d/libmongocrypt.gpg' + sudo sh -c 'curl -s --location https://pgp.mongodb.com/libmongocrypt.asc | gpg --dearmor >/etc/apt/trusted.gpg.d/libmongocrypt.gpg' .. step:: @@ -97,7 +97,7 @@ Ubuntu .. code-block:: sh - sudo sh -c 'curl -s --location https://www.mongodb.org/static/pgp/libmongocrypt.asc | gpg --dearmor >/etc/apt/trusted.gpg.d/libmongocrypt.gpg' + sudo sh -c 'curl -s --location https://pgp.mongodb.com/libmongocrypt.asc | gpg --dearmor >/etc/apt/trusted.gpg.d/libmongocrypt.gpg' .. step:: @@ -146,7 +146,7 @@ RedHat baseurl=https://libmongocrypt.s3.amazonaws.com/yum/redhat/$releasever/libmongocrypt/{+libmongocrypt-version+}/x86_64 gpgcheck=1 enabled=1 - gpgkey=https://www.mongodb.org/static/pgp/libmongocrypt.asc + gpgkey=https://pgp.mongodb.com/libmongocrypt.asc .. step:: @@ -173,7 +173,35 @@ Amazon Linux 2 baseurl=https://libmongocrypt.s3.amazonaws.com/yum/amazon/2/libmongocrypt/{+libmongocrypt-version+}/x86_64 gpgcheck=1 enabled=1 - gpgkey=https://www.mongodb.org/static/pgp/libmongocrypt.asc + gpgkey=https://pgp.mongodb.com/libmongocrypt.asc + + .. step:: + + Install the ``libmongocrypt`` package: + + .. code-block:: sh + + sudo yum install -y libmongocrypt + +Amazon Linux 2023 +~~~~~~~~~~~~~~~~~ + +.. procedure:: + :style: connected + + .. step:: + + Create a ``/etc/yum.repos.d/libmongocrypt.repo`` + repository file: + + .. code-block:: toml + + [libmongocrypt] + name=libmongocrypt repository + baseurl=https://libmongocrypt.s3.amazonaws.com/yum/amazon/2023/libmongocrypt/{+libmongocrypt-version+}/x86_64 + gpgcheck=1 + enabled=1 + gpgkey=https://pgp.mongodb.com/libmongocrypt.asc .. step:: @@ -191,16 +219,17 @@ Amazon Linux .. step:: - Create a repository file for the ``libmongocrypt`` package: + Create a ``/etc/yum.repos.d/libmongocrypt.repo`` + repository file: - .. code-block:: sh + .. code-block:: toml [libmongocrypt] name=libmongocrypt repository baseurl=https://libmongocrypt.s3.amazonaws.com/yum/amazon/2013.03/libmongocrypt/{+libmongocrypt-version+}/x86_64 gpgcheck=1 enabled=1 - gpgkey=https://www.mongodb.org/static/pgp/libmongocrypt.asc + gpgkey=https://pgp.mongodb.com/libmongocrypt.asc .. step:: @@ -222,7 +251,7 @@ Suse .. code-block:: sh - sudo rpm --import https://www.mongodb.org/static/pgp/libmongocrypt.asc + sudo rpm --import https://pgp.mongodb.com/libmongocrypt.asc .. step:: diff --git a/source/core/csfle/reference/limitations.txt b/source/core/csfle/reference/limitations.txt index 3a399d3a39e..477cb9e27d0 100644 --- a/source/core/csfle/reference/limitations.txt +++ b/source/core/csfle/reference/limitations.txt @@ -1,3 +1,6 @@ +.. meta:: + :keywords: CSFLE, in-use encryption, security, supported operations + .. _csfle-reference-encryption-limits: ================= @@ -12,6 +15,13 @@ CSFLE Limitations :depth: 1 :class: singlecol +Overview +-------- +Consider these limitations and restrictions before you enable {+csfle-abbrev+}. +Some operations are unsupported, and others behave differently. + +For compatibility limitations, see :ref:``. 
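As one concrete illustration of how operations can behave differently with {+csfle-abbrev+} enabled, assume a collection named ``patients`` whose ``ssn`` field is deterministically encrypted; both names are placeholders, not values from this page. Equality matches on that field are supported, while range operators are not.

.. code-block:: javascript

   // Supported: equality match on a deterministically encrypted field.
   db.patients.find( { ssn: "123-45-6789" } )

   // Not supported: range and similar operators cannot target encrypted
   // fields, so this query is rejected for the encrypted ssn field.
   db.patients.find( { ssn: { $gt: "100-00-0000" } } )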
+ Read and Write Operation Support -------------------------------- diff --git a/source/core/csfle/reference/supported-operations.txt b/source/core/csfle/reference/supported-operations.txt index 6482cb56b0b..e52f08981c7 100644 --- a/source/core/csfle/reference/supported-operations.txt +++ b/source/core/csfle/reference/supported-operations.txt @@ -424,4 +424,3 @@ encrypted field to the following value types: - ``decimal128`` - ``double`` - ``object`` -- ``javascriptWithScope`` (*Deprecated in MongoDB 4.4*) diff --git a/source/core/csfle/tutorials.txt b/source/core/csfle/tutorials.txt index 624e87fdb01..3fa69992906 100644 --- a/source/core/csfle/tutorials.txt +++ b/source/core/csfle/tutorials.txt @@ -1,4 +1,16 @@ +.. facet:: + :name: genre + :values: reference + +.. facet:: + :name: programming_language + :values: csharp, go, java, javascript/typescript, php, python, ruby, rust, scala + +.. meta:: + :keywords: client-side field level encryption, encryption + .. _csfle-tutorials: +.. _csfle-driver-tutorials: .. _csfle-tutorial-automatic-encryption: .. _csfle-tutorial-manual-encryption: .. _fle-convert-to-a-remote-master-key: @@ -15,6 +27,10 @@ Tutorials :depth: 2 :class: singlecol + +Key Management Tutorials +------------------------ + Read the following pages to learn how to use {+csfle+} with your preferred {+kms-long+}: @@ -75,6 +91,14 @@ access to all sample applications. | `KMIP <{+sample-app-url-csfle+}/dotnet/kmip/reader/>`__ | `Local <{+sample-app-url-csfle+}/dotnet/local/reader/>`__ + +Driver Tutorials +---------------- + +For {+csfle-abbrev+} driver tutorials, see the following: + +.. include:: /includes/queryable-encryption/csfle-driver-tutorial-table.rst + .. toctree:: :titlesonly: diff --git a/source/core/csfle/tutorials/aws/aws-automatic.txt b/source/core/csfle/tutorials/aws/aws-automatic.txt index eb53fcde707..4495eb474b1 100644 --- a/source/core/csfle/tutorials/aws/aws-automatic.txt +++ b/source/core/csfle/tutorials/aws/aws-automatic.txt @@ -264,5 +264,5 @@ To learn more about the topics mentioned in this guide, see the following links: - Learn more about CSFLE components on the :ref:`Reference ` page. -- Learn how {+cmk-long+}s and {+dek-long+}s work on the :ref:`` page -- See how KMS Providers manage your CSFLE keys on the :ref:`` page. +- Learn how {+cmk-long+}s and {+dek-long+}s work on the :ref:`` page +- See how KMS Providers manage your CSFLE keys on the :ref:`` page. diff --git a/source/core/csfle/tutorials/azure/azure-automatic.txt b/source/core/csfle/tutorials/azure/azure-automatic.txt index 0670ced583c..eb46c6986e8 100644 --- a/source/core/csfle/tutorials/azure/azure-automatic.txt +++ b/source/core/csfle/tutorials/azure/azure-automatic.txt @@ -261,5 +261,5 @@ To learn more about the topics mentioned in this guide, see the following links: - Learn more about CSFLE components on the :ref:`Reference ` page. -- Learn how {+cmk-long+}s and {+dek-long+}s work on the :ref:`` page -- See how KMS Providers manage your CSFLE keys on the :ref:`` page. +- Learn how {+cmk-long+}s and {+dek-long+}s work on the :ref:`` page +- See how KMS Providers manage your CSFLE keys on the :ref:`` page. 
diff --git a/source/core/csfle/tutorials/gcp/gcp-automatic.txt b/source/core/csfle/tutorials/gcp/gcp-automatic.txt index 31b0404057e..27321cb3d0b 100644 --- a/source/core/csfle/tutorials/gcp/gcp-automatic.txt +++ b/source/core/csfle/tutorials/gcp/gcp-automatic.txt @@ -261,5 +261,5 @@ To learn more about the topics mentioned in this guide, see the following links: - Learn more about CSFLE components on the :ref:`Reference ` page. -- Learn how {+cmk-long+}s and {+dek-long+}s work on the :ref:`` page -- See how KMS Providers manage your CSFLE keys on the :ref:`` page. +- Learn how {+cmk-long+}s and {+dek-long+}s work on the :ref:`` page +- See how KMS Providers manage your CSFLE keys on the :ref:`` page. diff --git a/source/core/csfle/tutorials/kmip/kmip-automatic.txt b/source/core/csfle/tutorials/kmip/kmip-automatic.txt index ca351cfddbc..331a4608b03 100644 --- a/source/core/csfle/tutorials/kmip/kmip-automatic.txt +++ b/source/core/csfle/tutorials/kmip/kmip-automatic.txt @@ -273,5 +273,5 @@ To learn more about the topics mentioned in this guide, see the following links: - Learn more about CSFLE components on the :ref:`Reference ` page. -- Learn how {+cmk-long+}s and {+dek-long+}s work on the :ref:`` page. -- See how KMS Providers manage your CSFLE keys on the :ref:`` page. +- Learn how {+cmk-long+}s and {+dek-long+}s work on the :ref:`` page. +- See how KMS Providers manage your CSFLE keys on the :ref:`` page. diff --git a/source/core/data-model-operations.txt b/source/core/data-model-operations.txt index 81913ff1c09..eea145fa815 100644 --- a/source/core/data-model-operations.txt +++ b/source/core/data-model-operations.txt @@ -102,7 +102,7 @@ As you create indexes, consider the following behaviors of indexes: See :doc:`/applications/indexes` for more information on indexes as well as :doc:`/tutorial/analyze-query-plan/`. Additionally, the MongoDB -:doc:`database profiler ` may +:ref:`database profiler ` may help identify inefficient queries. .. _data-model-large-number-of-collections: diff --git a/source/core/databases-and-collections.txt b/source/core/databases-and-collections.txt index dd306c1c944..84773849d6c 100644 --- a/source/core/databases-and-collections.txt +++ b/source/core/databases-and-collections.txt @@ -1,8 +1,8 @@ .. _databases-and-collections: -========================= -Databases and Collections -========================= +==================================== +Databases and Collections in MongoDB +==================================== .. default-domain:: mongodb @@ -22,8 +22,8 @@ Databases and Collections :values: reference .. meta:: - :description: Overview of what databases and collections (tables) are in MongoDB. - :keywords: atlas + :description: Learn how databases and collections work within MongoDB + :keywords: atlas, compass, MongoDB .. contents:: On this page :local: @@ -33,48 +33,154 @@ Databases and Collections Overview -------- + MongoDB stores data records as :term:`documents ` (specifically :ref:`BSON documents `) which are gathered together in :term:`collections `. A :term:`database ` stores one or more collections of documents. -.. |page-topic| replace:: manage MongoDB :atlas:`databases ` and :atlas:`collections ` in the UI +You can manage :atlas:`databases ` and +:atlas:`collections ` on the Atlas cluster from +the Atlas UI, :binary:`~bin.mongosh`, or |compass|. This page describes +how to manage databases and collections on the Atlas cluster from the +Atlas UI. 
For self-managed deployments, you can use +:binary:`~bin.mongosh` or |compass| to manage databases and collections. + +Select the client that you want to use to manage databases and +collections. + +.. tabs:: + + .. tab:: Atlas UI + :tabid: atlas + + MongoDB Atlas is a multi-cloud database service that simplifies + deploying and managing your databases on the cloud providers of + your choice. + + .. tab:: mongosh + :tabid: mongosh -.. cta-banner:: - :url: https://www.mongodb.com/docs/atlas/atlas-ui/databases/ - :icon: Cloud + The MongoDB Shell, :program:`mongosh`, is a JavaScript and Node.js + :abbr:`REPL (Read Eval Print Loop)` environment for interacting + with MongoDB deployments. To learn more, see :mongosh:`mongosh + `. - .. include:: /includes/fact-atlas-compatible.rst + .. tab:: MongoDB Compass + :tabid: compass + MongoDB Compass is a powerful GUI for querying, aggregating, and + analyzing your MongoDB data in a visual environment. To learn + more, see :compass:`MongoDB Compass `. Databases --------- -In MongoDB, databases hold one or more collections of documents. To -select a database to use, in :binary:`~bin.mongosh`, issue the -``use `` statement, as in the following example: -.. code-block:: javascript +In MongoDB, databases hold one or more collections of documents. + +.. tabs:: + :hidden: + + .. tab:: Atlas UI + :tabid: atlas + + To select a database to use, log in to Atlas and do the following: + + .. procedure:: + :style: normal - use myDB + .. step:: Navigate to the :guilabel:`Collections` tab. + .. step:: Select the database from the list of databases in the left pane. + + .. tab:: mongosh + :tabid: mongosh + + To select a database to use, in :binary:`~bin.mongosh`, issue the + ``use `` statement, as in the following example: + + .. code-block:: javascript + + use myDB + + .. tab:: MongoDB Compass + :tabid: compass + + To select a database to use, complete the following steps: + + .. procedure:: + :style: normal + + .. step:: Start |compass| and connect to your cluster. + + To learn more, see :compass:`Connect to MongoDB + `. + + .. step:: Select :guilabel:`Databases` from the left navigation. + + The :guilabel:`Databases` tab opens to list the existing databases + for your MongoDB deployment. + Create a Database ~~~~~~~~~~~~~~~~~ -If a database does not exist, MongoDB creates the database when you -first store data for that database. As such, you can switch to a -non-existent database and perform the following operation in -:binary:`~bin.mongosh`: +.. tabs:: + :hidden: -.. code-block:: javascript + .. tab:: Atlas UI + :tabid: atlas - use myNewDB + To create a new database, log in to Atlas and do the following: - db.myNewCollection1.insertOne( { x: 1 } ) + .. procedure:: + :style: normal -The :method:`~db.collection.insertOne()` operation creates both the -database ``myNewDB`` and the collection ``myNewCollection1`` if they do -not already exist. Be sure that both the database and collection names -follow MongoDB :ref:`restrictions-on-db-names`. + .. step:: Navigate to the :guilabel:`Collections` tab. + + .. step:: Click :guilabel:`Create Database`. + + .. step:: Enter the :guilabel:`Database Name` and the :guilabel:`Collection Name`. + + Enter the database and the collection name to create the + database and its first collection. + + .. step:: Click :guilabel:`Create`. + + Upon successful creation, the database and the collection + displays in the left pane in the Atlas UI. + + .. 
tab:: mongosh + :tabid: mongosh + + If a database does not exist, MongoDB creates the database when you + first store data for that database. As such, you can switch to a + non-existent database and perform the following operation in + :binary:`~bin.mongosh`: + + .. code-block:: javascript + + use myNewDB + + db.myNewCollection1.insertOne( { x: 1 } ) + + The :method:`~db.collection.insertOne()` operation creates both the + database ``myNewDB`` and the collection ``myNewCollection1`` if they do + not already exist. Be sure that both the database and collection names + follow MongoDB :ref:`restrictions-on-db-names`. + + .. tab:: MongoDB Compass + :tabid: compass + + .. procedure:: + :style: normal + + .. step:: Open the :guilabel:`Databases` tab. + + .. step:: Click the :guilabel:`Create database` button. + + .. step:: Enter database and first collection names in the :guilabel:`Create Database` dialog. + + .. step:: Click :guilabel:`Create Database` to create the database and its first collection. .. _collections: @@ -92,27 +198,136 @@ Create a Collection If a collection does not exist, MongoDB creates the collection when you first store data for that collection. -.. code-block:: javascript +.. tabs:: + :hidden: + + .. tab:: Atlas + :tabid: atlas + + To create a new collection, log in to Atlas and do the following: + + .. procedure:: + :style: normal - db.myNewCollection2.insertOne( { x: 1 } ) - db.myNewCollection3.createIndex( { y: 1 } ) + .. step:: Navigate to the :guilabel:`Collections` tab. -Both the :method:`~db.collection.insertOne()` and the -:method:`~db.collection.createIndex()` operations create their -respective collection if they do not already exist. Be sure that the -collection name follows MongoDB :ref:`restrictions-on-db-names`. + .. step:: Click the :guilabel:`+` icon for the database. + + .. step:: Enter the name of the collection. + + .. step:: Click :guilabel:`Create`. + + Upon successful creation, the collection displays underneath + the database in the Atlas UI. + + .. tab:: mongosh + :tabid: mongosh + + .. code-block:: javascript + + db.myNewCollection2.insertOne( { x: 1 } ) + db.myNewCollection3.createIndex( { y: 1 } ) + + Both the :method:`~db.collection.insertOne()` and the + :method:`~db.collection.createIndex()` operations create their + respective collection if they do not already exist. Be sure that the + collection name follows MongoDB :ref:`restrictions-on-db-names`. + + .. tab:: MongoDB Compass + :tabid: compass + + .. procedure:: + :style: normal + + .. step:: Click the name of the database where you to want to create a collection in the left navigation. + + .. step:: Click the :guilabel:`+` icon next to the database name. + + .. step:: Enter the name of the collection in the :guilabel:`Create Collection` dialog. + + .. step:: Click :guilabel:`Create Collection` to create the collection. Explicit Creation ~~~~~~~~~~~~~~~~~ -MongoDB provides the :method:`db.createCollection()` method to -explicitly create a collection with various options, such as setting -the maximum size or the documentation validation rules. If you are not -specifying these options, you do not need to explicitly create the -collection since MongoDB creates new collections when you first store -data for the collections. +.. tabs:: + :hidden: + + .. tab:: Atlas + :tabid: atlas + + To create a new collection, log in to Atlas and do the following: + + .. procedure:: + :style: normal + + .. step:: Navigate to the :guilabel:`Collections` tab. + + .. 
step:: Click the :guilabel:`+` icon for the database. + + .. step:: Enter the name of the collection. + + .. step:: Optional. From the :guilabel:`Additional Preferences` dropdown, select the type of collection that you want to create. + + You can create one of the following types of collections: + + - :ref:`Capped collection ` -To modify these collection options, see :dbcommand:`collMod`. + If you select to create a capped collection, specify the + maximum size in bytes. + + - :ref:`Time Series Collection ` + + If you select to create a time series collection, specify + the time field and granularity. You can optionally specify + the meta field and the time for old data in the collection + to expire. + + - :ref:`Clustered Index Collection ` + + If you select to create a clustered collection, you must + specify clustered index key value and a name for the + clustered index. + + .. step:: Click :guilabel:`Create`. + + Upon successful creation, the collection displays underneath + the database in the Atlas UI. + + .. tab:: mongosh + :tabid: mongosh + + MongoDB provides the :method:`db.createCollection()` method to + explicitly create a collection with various options, such as setting + the maximum size or the documentation validation rules. If you are not + specifying these options, you do not need to explicitly create the + collection since MongoDB creates new collections when you first store + data for the collections. + + To modify these collection options, see :dbcommand:`collMod`. + + .. tab:: MongoDB Compass + :tabid: compass + + .. procedure:: + :style: normal + + .. step:: Click the name of the database where you to want to create a collection in the left navigation. + + .. step:: Click the :guilabel:`Create collection` button. + + .. step:: Enter the name of the collection and optionally, configure additional preferences. + + .. step:: Click :guilabel:`Create Collection` to create the collection. + + |compass| provides the following additional preferences that + you can configure for your collection: + + - :compass:`Create a Capped Collection` + - :compass:`Create a Clustered Collection` + - :compass:`Create a Collection with Collation ` + - :compass:`Create a Collection with Encrypted Field ` + - :compass:`Create a Time Series Collection ` Document Validation ~~~~~~~~~~~~~~~~~~~ @@ -151,9 +366,21 @@ identifier)`. The collection UUID remains the same across all members of a replica set and shards in a sharded cluster. -To retrieve the UUID for a collection, run either the -:manual:`listCollections ` command -or the :method:`db.getCollectionInfos()` method. +.. tabs:: + :hidden: + + .. tab:: Atlas + :tabid: atlas + + .. tab:: mongosh + :tabid: mongosh + + To retrieve the UUID for a collection, run either the + :manual:`listCollections ` command + or the :method:`db.getCollectionInfos()` method. + + .. tab:: MongoDB Compass + :tabid: compass .. toctree:: :titlesonly: diff --git a/source/core/dot-dollar-considerations.txt b/source/core/dot-dollar-considerations.txt index 09c9fedf09a..a386df7c7b6 100644 --- a/source/core/dot-dollar-considerations.txt +++ b/source/core/dot-dollar-considerations.txt @@ -1,10 +1,13 @@ .. _field-names-periods-dollar-signs: +.. 
_crud-concepts-dot-dollar-considerations: -========================================================= -Field Names with Periods (``.``) and Dollar Signs (``$``) -========================================================= +========================================= +Field Names with Periods and Dollar Signs +========================================= -.. default-domain:: mongodb +.. facet:: + :name: genre + :values: reference .. contents:: On this page :local: @@ -12,221 +15,29 @@ Field Names with Periods (``.``) and Dollar Signs (``$``) :depth: 1 :class: singlecol -.. _crud-concepts-dot-dollar-considerations: - -Overview --------- - -MongoDB 5.0 adds improved support for field names that are dollar -(``$``) prefixed or that contain periods (``.``). The validation rules -for storing data have been updated to make it easier to work with data -sources that use these characters. +MongoDB supports field names that are dollar (``$``) prefixed or that +contain periods (``.``). In most cases data that has been stored using field names like these is not directly accessible. You need to use helper methods like :expression:`$getField`, :expression:`$setField`, and -:expression:`$literal` in queries that access those fields. +:expression:`$literal` in queries that access those fields. The field name validation rules are not the same for all types of -storage operations. This page summarizes how different insert and -update operations handle dollar (``$``) prefixed field names. - -Insert operations ------------------ - -Dollar (``$``) prefixed fields are permitted as top level and nested -field names for inserts. - -.. code-block:: javascript - :emphasize-lines: 3 - - db.sales.insertOne( { - "$price": 50.00, - "quantity": 30 - } ) - -Dollar (``$``) prefixed fields are permitted on inserts using otherwise -reserved words. Operator names like :update:`$inc` can be used as -field names as well as words like ``id``, ``db``, and ``ref``. - -.. code-block:: javascript - :emphasize-lines: 2, 4-6 - - db.books.insertOne( { - "$id": "h1961-01", - "location": { - "$db": "novels", - "$ref": "2007042768", - "$inc": true - } } ) - -An update which creates a new document during an :term:`upsert` is -treated as an ``insert`` rather than an ``update`` for field name -validation. :term:`Upserts ` can accept dollar (``$``) prefixed -fields. However, :term:`upserts ` are a special case and -similar update operations may cause an error if the ``match`` portion -of the update selects an existing document. - -This code sample has ``upsert: true`` so it will insert a new document -if the collection doesn't already contain a document that matches the -query term, ``{ "date": "2021-07-07" }``. If this sample code matches -an existing document, the update will fail since ``$hotel`` is dollar -(``$``) prefixed. - -.. code-block:: javascript - :emphasize-lines: 5 - - db.expenses.updateOne( - { "date": "2021-07-07" }, - { $set: { - "phone": 25.17, - "$hotel": 320.10 - } }, - { upsert: true } - ) - -Document Replacing Updates --------------------------- - -Update operators either replace existing fields with new documents -or else modify those fields. In cases where the update performs a -replacement, dollar (``$``) prefixed fields are not permitted as top -level field names. - -Consider a document like - -.. 
code-block:: javascript:: - - { - "_id": "E123", - "address": { - "$number": 123, - "$street": "Elm Road" - }, - "$rooms": { - "br": 2, - "bath": 1 - } - } - -You could use an update operator that replaces an existing document to -modify the ``address.$street`` field but you could not update the -``$rooms`` field that way. - -.. code-block:: - - db.housing.updateOne( - { "_id": "E123" }, - { $set: { "address.$street": "Elm Ave" } } - ) - -Use :expression:`$setField` as part of an aggregation pipeline to -:ref:`update top level ` dollar (``$``) -prefixed fields like ``$rooms``. - -Document Modifying Updates --------------------------- - -When an update modifies, rather than replaces, existing document -fields, dollar (``$``) prefixed fields can be top level field names. -Subfields can be accessed directly, but you need a helper method to -access the top level fields. - -.. seealso:: - - :expression:`$getField`, :expression:`$setField`, - :expression:`$literal`, :pipeline:`$replaceWith` - -Consider a collection with documents like this inventory record: - -.. code-block:: - :copyable: false - - { - _id: ObjectId("610023ad7d58ecda39b8d161"), - "part": "AB305", - "$bin": 200, - "quantity": 100, - "pricing": { sale: true, "$discount": 60 } - } - -The ``pricing.$discount`` subfield can be queried directly. - -.. code-block:: - - db.inventory.findAndModify( { - query: { "part": { $eq: "AB305" } }, - update: { $inc: { "pricing.$discount": 10 } } - } ) - - -Use :expression:`$getField` and :expression:`$literal` to access the -value of the top level ``$bin`` field. - -.. code-block:: - :emphasize-lines: 3 - - db.inventory.findAndModify( { - query: { $expr: { - $eq: [ { $getField: { $literal: "$bin" } }, 200 ] - } }, - update: { $inc: { "quantity": 10 } } - } ) - -.. _dotDollar-aggregate-update: - -Updates Using Aggregation Pipelines ------------------------------------ - -Use :expression:`$setField`, :expression:`$getField`, and -:expression:`$literal` in the :pipeline:`$replaceWith` stage to modify -dollar (``$``) prefixed fields in an aggregation :term:`pipeline`. - -Consider a collection of school records like: - -.. code-block:: javascript - :copyable: false - - { - "_id": 100001, - "$term": "fall", - "registered": true, - "grade": 4 - } - -Create a new collection for the spring semester using a -:term:`pipeline` to update the dollar (``$``) prefixed ``$term`` field. - -.. code-block:: javascript - :emphasize-lines: 3-5 - - db.school.aggregate( [ - { $match: { "registered": true } }, - { $replaceWith: { - $setField: { - field: { $literal: "$term" }, - input: "$$ROOT", - value: "spring" - } } }, - { $out: "spring2022" } - ] ) +storage operations. -General Restrictions --------------------- +Get Started +----------- -In addition to the storage validation rules above, there are some -general restrictions on using dollar (``$``) prefixed field names. -These fields cannot: +For examples of how to handle field names that contain periods and +dollar signs, see these pages: -- Be indexed -- Be used as part of a shard key -- Be validated using :query:`$jsonSchema` -- Be be modified with an escape sequence -- Be used with - :driver:`Field Level Encryption ` -- Be used as a subfield in an ``_id`` document +- :ref:`dollar-prefix-field-names` -.. include:: /includes/warning-possible-data-loss.rst +- :ref:`period-field-names` -.. include:: /includes/warning-dot-dollar-import-export.rst +.. 
toctree:: + :titlesonly: + /core/dot-dollar-considerations/dollar-prefix + /core/dot-dollar-considerations/periods diff --git a/source/core/dot-dollar-considerations/dollar-prefix.txt b/source/core/dot-dollar-considerations/dollar-prefix.txt new file mode 100644 index 00000000000..95612279c0d --- /dev/null +++ b/source/core/dot-dollar-considerations/dollar-prefix.txt @@ -0,0 +1,217 @@ +.. _dollar-prefix-field-names: + +=========================== +Dollar-Prefixed Field Names +=========================== + +.. facet:: + :name: genre + :values: reference + +.. contents:: On this page + :local: + :backlinks: none + :depth: 1 + :class: singlecol + +This section summarizes how different insert and update operations +handle dollar (``$``) prefixed field names. + +Insert Operations +----------------- + +Dollar (``$``) prefixed fields are permitted as top level and nested +field names for inserts. + +.. code-block:: javascript + :emphasize-lines: 3 + + db.sales.insertOne( { + "$price": 50.00, + "quantity": 30 + } ) + +Dollar (``$``) prefixed fields are permitted on inserts using otherwise +reserved words. Operator names like :update:`$inc` can be used as +field names as well as words like ``id``, ``db``, and ``ref``. + +.. code-block:: javascript + :emphasize-lines: 2, 4-6 + + db.books.insertOne( { + "$id": "h1961-01", + "location": { + "$db": "novels", + "$ref": "2007042768", + "$inc": true + } } ) + +An update which creates a new document during an :term:`upsert` is +treated as an ``insert`` rather than an ``update`` for field name +validation. :term:`Upserts ` can accept dollar (``$``) prefixed +fields. However, :term:`upserts ` are a special case and +similar update operations may cause an error if the ``match`` portion +of the update selects an existing document. + +This code sample has ``upsert: true`` so it will insert a new document +if the collection doesn't already contain a document that matches the +query term, ``{ "date": "2021-07-07" }``. If this sample code matches +an existing document, the update will fail since ``$hotel`` is dollar +(``$``) prefixed. + +.. code-block:: javascript + :emphasize-lines: 5 + + db.expenses.updateOne( + { "date": "2021-07-07" }, + { $set: { + "phone": 25.17, + "$hotel": 320.10 + } }, + { upsert: true } + ) + +Document Replacing Updates +-------------------------- + +Update operators either replace existing fields with new documents +or else modify those fields. In cases where the update performs a +replacement, dollar (``$``) prefixed fields are not permitted as top +level field names. + +Consider a document like + +.. code-block:: javascript:: + + { + "_id": "E123", + "address": { + "$number": 123, + "$street": "Elm Road" + }, + "$rooms": { + "br": 2, + "bath": 1 + } + } + +You could use an update operator that replaces an existing document to +modify the ``address.$street`` field but you could not update the +``$rooms`` field that way. + +.. code-block:: + + db.housing.updateOne( + { "_id": "E123" }, + { $set: { "address.$street": "Elm Ave" } } + ) + +Use :expression:`$setField` as part of an aggregation pipeline to +:ref:`update top level ` dollar (``$``) +prefixed fields like ``$rooms``. + +Document Modifying Updates +-------------------------- + +When an update modifies, rather than replaces, existing document +fields, dollar (``$``) prefixed fields can be top level field names. +Subfields can be accessed directly, but you need a helper method to +access the top level fields. + +.. 
seealso:: + + :expression:`$getField`, :expression:`$setField`, + :expression:`$literal`, :pipeline:`$replaceWith` + +Consider a collection with documents like this inventory record: + +.. code-block:: + :copyable: false + + { + _id: ObjectId("610023ad7d58ecda39b8d161"), + "part": "AB305", + "$bin": 200, + "quantity": 100, + "pricing": { sale: true, "$discount": 60 } + } + +The ``pricing.$discount`` subfield can be queried directly. + +.. code-block:: + + db.inventory.findAndModify( { + query: { "part": { $eq: "AB305" } }, + update: { $inc: { "pricing.$discount": 10 } } + } ) + + +Use :expression:`$getField` and :expression:`$literal` to access the +value of the top level ``$bin`` field. + +.. code-block:: + :emphasize-lines: 3 + + db.inventory.findAndModify( { + query: { $expr: { + $eq: [ { $getField: { $literal: "$bin" } }, 200 ] + } }, + update: { $inc: { "quantity": 10 } } + } ) + +.. _dotDollar-aggregate-update: + +Updates Using Aggregation Pipelines +----------------------------------- + +Use :expression:`$setField`, :expression:`$getField`, and +:expression:`$literal` in the :pipeline:`$replaceWith` stage to modify +dollar (``$``) prefixed fields in an aggregation :term:`pipeline`. + +Consider a collection of school records like: + +.. code-block:: javascript + :copyable: false + + { + "_id": 100001, + "$term": "fall", + "registered": true, + "grade": 4 + } + +Create a new collection for the spring semester using a +:term:`pipeline` to update the dollar (``$``) prefixed ``$term`` field. + +.. code-block:: javascript + :emphasize-lines: 3-5 + + db.school.aggregate( [ + { $match: { "registered": true } }, + { $replaceWith: { + $setField: { + field: { $literal: "$term" }, + input: "$$ROOT", + value: "spring" + } } }, + { $out: "spring2022" } + ] ) + +General Restrictions +-------------------- + +In addition to the storage validation rules above, there are some +general restrictions on using dollar (``$``) prefixed field names. +These fields cannot: + +- Be indexed +- Be used as part of a shard key +- Be validated using :query:`$jsonSchema` +- Be be modified with an escape sequence +- Be used with + :driver:`Field Level Encryption ` +- Be used as a subfield in an ``_id`` document + +.. include:: /includes/warning-possible-data-loss.rst + +.. include:: /includes/warning-dot-dollar-import-export.rst diff --git a/source/core/dot-dollar-considerations/periods.txt b/source/core/dot-dollar-considerations/periods.txt new file mode 100644 index 00000000000..b4ec123e478 --- /dev/null +++ b/source/core/dot-dollar-considerations/periods.txt @@ -0,0 +1,165 @@ +.. _period-field-names: + +======================== +Field Names with Periods +======================== + +.. facet:: + :name: genre + :values: reference + +.. contents:: On this page + :local: + :backlinks: none + :depth: 2 + :class: singlecol + +This section summarizes how to insert, query, and update documents with +field names that contain a period. + +Insert a Field Name with a Period +--------------------------------- + +To insert a document that contains a field name with a period, put the +field name in quotes. + +The following command inserts a document that contains a field name +``price.usd``: + +.. code-block:: javascript + + db.inventory.insertOne( + { + "item" : "sweatshirt", + "price.usd": 45.99, + "quantity": 20 + } + ) + +Query a Field that has a Period +------------------------------- + +To query for a field that has a period, use the :expression:`$getField` +operator. 
+ +The following query returns documents where the ``price.usd`` field is +greater than ``40``: + +.. io-code-block:: + :copyable: true + + .. input:: + :language: javascript + + db.inventory.find( + { + $expr: + { + $gt: [ { $getField: "price.usd" }, 40 ] + } + } + ) + + .. output:: + :language: javascript + + [ + { + _id: ObjectId("66145f9bcb1d4abffd2f1b50"), + item: 'sweatshirt', + 'price.usd': 45.99, + quantity: 20 + } + ] + +If you don't use ``$getField``, MongoDB treats the field name with a +period as an embedded object. For example, the following query matches +documents where a ``usd`` field inside of a ``price`` field is greater +than ``40``: + +.. code-block:: javascript + + db.inventory.find( { + "price.usd": { $gt: 40 } + } ) + +The preceding query would match this document: + +.. code-block:: javascript + :emphasize-lines: 3-5 + + { + "item" : "sweatshirt", + "price": { + "usd": 45.99 + }, + "quantity": 20 + } + +Update a Field that has a Period +-------------------------------- + +To update a field that has a period, use an aggregation pipeline with +the :expression:`$setField` operator. + +The following operation sets the ``price.usd`` field to ``29.99``: + +.. code-block:: javascript + + db.inventory.updateOne( + { "item": "sweatshirt" }, + [ + { + $replaceWith: { + $setField: { + field: "price.usd", + input: "$$ROOT", + value: 29.99 + } + } + } + ] + ) + +If you don't use ``$setField``, MongoDB treats the field name with a +period as an embedded object. For example, the following operation does +not update the existing ``price.usd`` field, and instead inserts a new +field ``usd``, embedded inside of a ``price`` field: + +.. code-block:: javascript + :emphasize-lines: 3 + + db.inventory.updateOne( + { "item": "sweatshirt" }, + { $set: { "price.usd": 29.99 } } + ) + +Resulting document: + +.. code-block:: javascript + :copyable: false + :emphasize-lines: 5,7 + + [ + { + _id: ObjectId("66145f9bcb1d4abffd2f1b50"), + item: 'sweatshirt', + 'price.usd': 45.99 + quantity: 20, + price: { usd: 29.99 } + } + ] + +For more examples of updates with aggregation pipelines, see +:ref:`updates-agg-pipeline`. + +Learn More +---------- + +- :expression:`$getField` + +- :expression:`$setField` + +- :expression:`$literal` + +- :ref:`dollar-prefix-field-names` diff --git a/source/core/hashed-sharding.txt b/source/core/hashed-sharding.txt index 2c5d7acf20d..40f0ed18bac 100644 --- a/source/core/hashed-sharding.txt +++ b/source/core/hashed-sharding.txt @@ -9,7 +9,7 @@ Hashed Sharding Hashed sharding uses either a :ref:`single field hashed index ` or a :ref:`compound hashed index -` (*New in 4.4*) as the shard key to +` as the shard key to partition data across your sharded cluster. Sharding on a Single Field Hashed Index @@ -28,7 +28,7 @@ Sharding on a Single Field Hashed Index value; this value is used as your shard key. [#hashvalue]_ Sharding on a Compound Hashed Index - MongoDB 4.4 adds support for creating compound indexes with a single + MongoDB includes support for creating compound indexes with a single :ref:`hashed field `. To create a compound hashed index, specify ``hashed`` as the value of any single index key when creating the index. @@ -121,11 +121,8 @@ hashed index to use as the :term:`shard key`: - Starting in MongoDB 5.0, you can :ref:`reshard a collection ` by changing a collection's shard key. - - Starting in MongoDB 4.4, you can :ref:`refine a shard key - ` by adding a suffix field or fields to the - existing shard key. 
- - In MongoDB 4.2 and earlier, the choice of shard key cannot - be changed after sharding. + - You can :ref:`refine a shard key ` by adding a suffix + field or fields to the existing shard key. Shard a Populated Collection ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -249,6 +246,10 @@ Sharding Empty Collection on Compound Hashed Shard Key with Non-Hashed Prefix :ref:`pre-define-zone-range-hashed-example`. +Drop a Hashed Shard Key Index +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +.. include:: /includes/drop-hashed-shard-key-index.rst .. seealso:: diff --git a/source/core/index-case-insensitive.txt b/source/core/index-case-insensitive.txt index 33e86b9ea04..613fbcf47da 100644 --- a/source/core/index-case-insensitive.txt +++ b/source/core/index-case-insensitive.txt @@ -1,7 +1,7 @@ .. _index-feature-case-insensitive: ======================== -Case Insensitive Indexes +Case-Insensitive Indexes ======================== .. default-domain:: mongodb @@ -12,30 +12,52 @@ Case Insensitive Indexes :depth: 2 :class: singlecol -Case insensitive indexes support queries that perform string -comparisons without regard for case. +Case-insensitive indexes support queries that perform string comparisons +without regard for case. Case insensitivity is derived from +:ref:`collation `. -You can create a case insensitive index with -:method:`db.collection.createIndex()` by specifying the ``collation`` -parameter as an option. For example: +.. important:: -.. code-block:: javascript + .. include:: /includes/indexes/case-insensitive-regex-queries.rst - db.collection.createIndex( { "key" : 1 }, - { collation: { - locale : , - strength : - } - } ) +Command Syntax +-------------- -To specify a collation for a case sensitive index, include: +You can create a case-insensitive index with +:method:`db.collection.createIndex()` by specifying the ``collation`` +option: -- ``locale``: specifies language rules. See - :ref:`Collation Locales` for a list of - available locales. +.. code-block:: javascript -- ``strength``: determines comparison rules. A value of - ``1`` or ``2`` indicates a case insensitive collation. + db.collection.createIndex( + { + : + }, + { + collation: + { + locale : , + strength : < 1 | 2 > + } + } + ) + +To specify a collation for a case-insensitive index, include the +following fields in the ``collation`` object: + +.. list-table:: + :header-rows: 1 + :widths: 10 20 + + * - Field + - Description + + * - ``locale`` + - Specifies language rules. For a list of available locales, see + :ref:`collation-languages-locales`. + * - ``strength`` + - Determines comparison rules. A ``strength`` value of 1 or 2 + indicates case-insensitive collation. For additional collation fields, see :ref:`Collation`. @@ -43,11 +65,6 @@ For additional collation fields, see Behavior -------- -Using a case insensitive index does not affect -the results of a query, but it can increase performance; see -:ref:`Indexes ` for a detailed discussion of the costs and -benefits of indexes. - To use an index that specifies a collation, query and sort operations must specify the same collation as the index. If a collection has defined a collation, all queries and indexes inherit that collation @@ -58,10 +75,10 @@ Examples .. 
_no-default-collation-example: -Create a Case Insensitive Index +Create a Case-Insensitive Index ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -To use a case insensitive index on a collection with no default +To use a case-insensitive index on a collection with no default collation, create an index with a collation and set the ``strength`` parameter to ``1`` or ``2`` (see :ref:`Collation` for a detailed @@ -69,15 +86,17 @@ description of the ``strength`` parameter). You must specify the same collation at the query level in order to use the index-level collation. The following example creates a collection with no default collation, -then adds an index on the ``type`` field with a case insensitive +then adds an index on the ``type`` field with a case-insensitive collation. .. code-block:: javascript db.createCollection("fruit") - db.fruit.createIndex( { type: 1}, - { collation: { locale: 'en', strength: 2 } } ) + db.fruit.createIndex( + { type: 1 }, + { collation: { locale: 'en', strength: 2 } } + ) To use the index, queries must specify the same collation. @@ -99,7 +118,7 @@ To use the index, queries must specify the same collation. .. _default-collation-example: -Case Insensitive Indexes on Collections with a Default Collation +Case-Insensitive Indexes on Collections with a Default Collation ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ When you create a collection with a default collation, all the indexes diff --git a/source/core/index-creation.txt b/source/core/index-creation.txt index 3f408fed7d6..0caea9a51d7 100644 --- a/source/core/index-creation.txt +++ b/source/core/index-creation.txt @@ -22,12 +22,11 @@ of the index build. The rest of the build process yields to interleaving read and write operations. For a detailed description of index build process and locking behavior, see :ref:`index-build-process`. -Starting in MongoDB 4.4, index builds on a replica set or sharded -cluster build simultaneously across all data-bearing replica set -members. The primary requires a minimum number of data-bearing voting -members (i.e. commit quorum), including itself, that must complete the -build before marking the index as ready for use. A "voting" member is -any replica set member where :rsconf:`members[n].votes` is greater than +Index builds on a replica set or sharded cluster build simultaneously across +all data-bearing replica set members. The primary requires a minimum number of +data-bearing voting members (i.e. commit quorum), including itself, that must +complete the build before marking the index as ready for use. A "voting" member +is any replica set member where :rsconf:`members[n].votes` is greater than ``0``. See :ref:`index-operations-replicated-build` for more information. @@ -175,6 +174,8 @@ Index Builds in Replicated Environments .. include:: /includes/extracts/4.4-changes-index-builds-simultaneous-nolink.rst +.. include:: /includes/unreachable-node-default-quorum-index-builds.rst + The build process is summarized as follows: 1. The primary receives the :dbcommand:`createIndexes` command and @@ -274,10 +275,7 @@ when it is restarted and continues from the saved checkpoint. In earlier versions, if the index build was interrupted it had to be restarted from the beginning. -Prior to MongoDB 4.4, the startup process stalls behind any recovered -index builds. The secondary could fall out of sync with the replica set -and require resynchronization. 
Starting in MongoDB 4.4, the -:binary:`~bin.mongod` can perform the startup process while the +The :binary:`~bin.mongod` can perform the startup process while the recovering index builds. If you restart the :binary:`~bin.mongod` as a standalone (i.e. removing @@ -304,8 +302,8 @@ there is still work to be done when the rollback concludes, the :binary:`~bin.mongod` automatically recovers the index build and continues from the saved checkpoint. -Starting in version 4.4, MongoDB can pause an in-progress -index build to perform a :doc:`rollback `. +MongoDB can pause an in-progress index build to perform a +:ref:`rollback `. - If the rollback does not revert the index build, MongoDB restarts the index build after completing the rollback. @@ -313,9 +311,6 @@ index build to perform a :doc:`rollback `. - If the rollback reverts the index build, you must re-create the index or indexes after the rollback completes. -Prior to MongoDb 4.4, rollbacks could start only after all in-progress -index builds finished. - .. _index-creation-index-consistency: Index Consistency Checks for Sharded Collections @@ -338,9 +333,8 @@ can occur, such as: fails to build the index for an associated shard or incorrectly builds an index with different specification. -Starting in MongoDB 4.4 (and in MongoDB 4.2.6), the :ref:`config server -` primary periodically checks for -index inconsistencies across the shards for sharded collections. To +The :ref:`config server ` primary periodically checks +for index inconsistencies across the shards for sharded collections. To configure these periodic checks, see :parameter:`enableShardedIndexConsistencyCheck` and :parameter:`shardedIndexConsistencyCheckIntervalMS`. @@ -476,10 +470,9 @@ process: - A :binary:`~bin.mongod` that is *not* part of a replica set skips this stage. - Starting in MongoDB 4.4, the :binary:`~bin.mongod` submits a - "vote" to the primary to commit the index. Specifically, it writes - the "vote" to an internal replicated collection on the - :term:`primary`. + The :binary:`~bin.mongod` submits a "vote" to the primary to commit the + index. Specifically, it writes the "vote" to an internal replicated + collection on the :term:`primary`. If the :binary:`~bin.mongod` is the :term:`primary`, it waits until it has a commit quorum of votes (all voting data-bearing diff --git a/source/core/index-hidden.txt b/source/core/index-hidden.txt index 200b0e53beb..37bacc2883d 100644 --- a/source/core/index-hidden.txt +++ b/source/core/index-hidden.txt @@ -13,14 +13,12 @@ Hidden Indexes :depth: 1 :class: singlecol -.. versionadded:: 4.4 - Hidden indexes are not visible to the :doc:`query planner ` and cannot be used to support a query. -By hiding an index from the planner, users can evaluate the potential +By hiding an index from the planner, you can evaluate the potential impact of dropping an index without actually dropping the index. If the -impact is negative, the user can unhide the index instead of having to +impact is negative, you can unhide the index instead of having to recreate a dropped index. Behavior @@ -51,9 +49,7 @@ Restrictions ------------ - To hide an index, you must have :ref:`featureCompatibilityVersion - ` set to ``4.4`` or greater. However, once hidden, the - index remains hidden even with :ref:`featureCompatibilityVersion - ` set to ``4.2`` on MongoDB 4.4 binaries. + ` set to ``{+minimum-lts-version+}`` or greater. - You cannot hide the ``_id`` index. 
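As a brief illustration of the hidden-index workflow described above, the
following mongosh sketch hides and later unhides an existing index. It assumes
a ``restaurants`` collection that already has an ascending index on the
``borough`` field; the collection and index names are illustrative and not part
of this patch.

.. code-block:: javascript

   // Hide the index so the query planner stops considering it
   db.restaurants.hideIndex( { borough: 1 } )

   // Evaluate query performance, then restore the index if needed
   db.restaurants.unhideIndex( { borough: 1 } )

Because hiding an index only changes its visibility to the query planner, the
index continues to be maintained and can be unhidden without a rebuild.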
@@ -73,10 +69,8 @@ To create a ``hidden`` index, use the To use the ``hidden`` option with :method:`db.collection.createIndex()`, you must have - :ref:`featureCompatibilityVersion ` set to ``4.4`` or - greater. However, once hidden, the index remains hidden even with - :ref:`featureCompatibilityVersion ` set to ``4.2`` on - MongoDB 4.4 binaries. + :ref:`featureCompatibilityVersion ` set to + ``{+minimum-lts-version+}`` or greater. For example, the following operation creates a hidden ascending index on the ``borough`` field: @@ -129,9 +123,7 @@ Hide an Existing Index .. note:: - To hide an index, you must have :ref:`featureCompatibilityVersion - ` set to ``4.4`` or greater. However, once hidden, the - index remains hidden even with :ref:`featureCompatibilityVersion - ` set to ``4.2`` on MongoDB 4.4 binaries. + ` set to ``{+minimum-lts-version+}`` or greater. - You cannot hide the ``_id`` index. diff --git a/source/core/index-partial.txt b/source/core/index-partial.txt index 9a1b605e8a1..a233980bcd5 100644 --- a/source/core/index-partial.txt +++ b/source/core/index-partial.txt @@ -174,14 +174,15 @@ specified filter expression and expire only those documents. For details, see Restrictions ------------ -.. include:: /includes/fact-5.0-multiple-partial-index.rst +- You cannot specify both the ``partialFilterExpression`` option and + the ``sparse`` option. -You cannot specify both the ``partialFilterExpression`` option and -the ``sparse`` option. +- ``_id`` indexes cannot be partial indexes. -``_id`` indexes cannot be partial indexes. +- Shard key indexes cannot be partial indexes. + +- .. include:: /includes/queryable-encryption/qe-csfle-partial-filter-disclaimer.rst -Shard key indexes cannot be partial indexes. .. _index-partial-equivalent-indexes: diff --git a/source/core/index-ttl.txt b/source/core/index-ttl.txt index dabb0334110..67aa3bb9cb4 100644 --- a/source/core/index-ttl.txt +++ b/source/core/index-ttl.txt @@ -201,9 +201,8 @@ For example, if a bucket covers data up until ``2023-03-27T18:29:59Z`` and ``expireAfterSeconds`` is 300, the TTL index expires the bucket after ``2023-03-27T18:34:59Z``. -If the indexed field in a document is not a -:ref:`date ` or an array that holds one or -more date values, the document will not expire. +If the indexed field in a document doesn't contain one or more date +values, the document will not expire. If a document does not contain the indexed field, the document will not expire. diff --git a/source/core/index-unique.txt b/source/core/index-unique.txt index 6e7d86da783..aaec9123e34 100644 --- a/source/core/index-unique.txt +++ b/source/core/index-unique.txt @@ -1,4 +1,3 @@ - .. _index-type-unique: ============== @@ -15,6 +14,9 @@ Unique Indexes :name: genre :values: reference +.. meta:: + :description: Use a unique index to ensure indexed fields do not store duplicate values. + .. contents:: On this page :local: :backlinks: none @@ -22,18 +24,11 @@ Unique Indexes :class: singlecol A unique index ensures that the indexed fields do not store duplicate -values; i.e. enforces uniqueness for the indexed fields. By default, -MongoDB creates a unique index on the :ref:`_id ` -field during the creation of a collection. - -.. note:: New Internal Format - - - Starting in MongoDB 4.2, for :ref:`featureCompatibilityVersion - ` (fCV) of 4.2 (or greater), MongoDB uses a new internal - format for unique indexes that is incompatible with earlier MongoDB - versions. 
The new format applies to both existing unique indexes as - well as newly created/rebuilt unique indexes. +values. A unique index on a single field ensures that a value appears +at most once for a given field. A unique compound index ensures that +any given *combination* of the index key values only appears at most +once. By default, MongoDB creates a unique index on the :ref:`_id +` field during the creation of a collection. .. |page-topic| replace:: :atlas:`create and manage unique indexes in the UI ` @@ -75,9 +70,9 @@ Unique Compound Index ~~~~~~~~~~~~~~~~~~~~~ You can also enforce a unique constraint on :ref:`compound indexes -`. If you use the unique constraint on a -:ref:`compound index `, then MongoDB will enforce -uniqueness on the *combination* of the index key values. +`. A unique :ref:`compound index +` enforces uniqueness on the *combination* of the +index key values. For example, to create a unique index on ``groupNumber``, ``lastname``, and ``firstname`` fields of the ``members`` collection, use the @@ -137,18 +132,17 @@ index `. Building Unique Index on Replica Sets and Sharded Clusters ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -For replica sets and sharded clusters, using a :doc:`rolling procedure -` to create a unique index +For replica sets and sharded clusters, using a :ref:`rolling procedure +` to create a unique index requires that you stop all writes to the collection during the procedure. If you cannot stop all writes to the collection during the -procedure, do not use the rolling procedure. Instead, build your unique index -on the collection by: - -- issuing :method:`db.collection.createIndex()` on the primary for a - replica set, or +procedure, do not use the rolling procedure. Instead, to build your +unique index on the collection you must either: -- issuing :method:`db.collection.createIndex()` on the - :binary:`~bin.mongos` for a sharded cluster. +- Run ``db.collection.createIndex()`` on the primary for a + replica set +- Run ``db.collection.createIndex()`` on the :binary:`~bin.mongos` + for a sharded cluster .. _unique-separate-documents: @@ -421,3 +415,8 @@ the unique index. .. code-block:: javascript db.scoreHistory.insert( { score : 3 } ) + +.. toctree:: + :titlesonly: + + /core/index-unique/convert-to-unique diff --git a/source/core/index-unique/convert-to-unique.txt b/source/core/index-unique/convert-to-unique.txt new file mode 100644 index 00000000000..3f4573ca6ed --- /dev/null +++ b/source/core/index-unique/convert-to-unique.txt @@ -0,0 +1,191 @@ +.. _index-convert-to-unique: + +=========================================== +Convert an Existing Index to a Unique Index +=========================================== + +.. facet:: + :name: genre + :values: tutorial + +.. contents:: On this page + :local: + :backlinks: none + :depth: 1 + :class: singlecol + +To convert a non-unique index to a :ref:`unique index +`, use the :dbcommand:`collMod` command. The +``collMod`` command provides options to verify that your indexed field +contains unique values before you complete the conversion. + +Before you Begin +---------------- + +.. procedure:: + :style: normal + + .. step:: Populate sample data + + Create the ``apples`` collection: + + .. code-block:: javascript + + db.apples.insertMany( [ + { type: "Delicious", quantity: 12 }, + { type: "Macintosh", quantity: 13 }, + { type: "Delicious", quantity: 13 }, + { type: "Fuji", quantity: 15 }, + { type: "Washington", quantity: 10 } + ] ) + + .. 
step:: Create a single field index + + Add a single field index on the ``type`` field: + + .. code-block:: javascript + + db.apples.createIndex( { type: 1 } ) + +Steps +----- + +.. procedure:: + :style: normal + + .. step:: Prepare the index to be converted to a unique index + + Run ``collMod`` on the ``type`` field index and set + ``prepareUnique`` to ``true``: + + .. code-block:: javascript + + db.runCommand( { + collMod: "apples", + index: { + keyPattern: { type: 1 }, + prepareUnique: true + } + } ) + + After ``prepareUnique`` is set, you cannot insert new documents + that duplicate an index key entry. For example, the following + insert operation results in an error: + + .. io-code-block:: + :copyable: true + + .. input:: + :language: javascript + + db.apples.insertOne( { type: "Delicious", quantity: 20 } ) + + .. output:: + :language: javascript + + MongoServerError: E11000 duplicate key error collection: + test.apples index: type_1 dup key: { type: "Delicious" } + + .. step:: Check for unique key violations + + To see if there are any documents that violate the unique constraint on + the ``type`` field, run ``collMod`` with ``unique: true`` and ``dryRun: + true``: + + .. io-code-block:: + :copyable: true + + .. input:: + :language: javascript + + db.runCommand( { + collMod: "apples", + index: { + keyPattern: { type: 1 }, + unique: true + }, + dryRun: true + } ) + + .. output:: + :language: javascript + + MongoServerError: Cannot convert the index to unique. Please resolve conflicting documents before running collMod again. + + Violations: [ + { + ids: [ + ObjectId("660489d24cabd75abebadbd0"), + ObjectId("660489d24cabd75abebadbd2") + ] + } + ] + + .. step:: Resolve duplicate key conflicts + + To complete the conversion, modify the duplicate entries to remove any + conflicts. For example: + + .. code-block:: javascript + + db.apples.deleteOne( + { _id: ObjectId("660489d24cabd75abebadbd2") } + ) + + .. step:: Confirm that all conflicts are resolved + + To confirm that the index can be converted, re-run the ``collMod()`` + command with ``dryRun: true``: + + .. io-code-block:: + :copyable: true + + .. input:: + :language: javascript + + db.runCommand( { + collMod: "apples", + index: { + keyPattern: { type: 1 }, + unique: true + }, + dryRun: true + } ) + + .. output:: + :language: javascript + + { ok: 1 } + + .. step:: Finalize the index conversion + + To finalize the conversion to a unique index, run the ``collMod`` + command with ``unique: true`` and remove the ``dryRun`` flag: + + .. io-code-block:: + :copyable: true + + .. input:: + :language: javascript + + db.runCommand( { + collMod: "apples", + index: { + keyPattern: { type: 1 }, + unique: true + } + } ) + + .. output:: + :language: javascript + + { unique_new: true, ok: 1 } + +Learn More +---------- + +- :ref:`manage-indexes` + +- :ref:`index-properties` + +- :ref:`indexing-strategies` diff --git a/source/core/indexes/drop-index.txt b/source/core/indexes/drop-index.txt index 483e4fb34f5..652404f09dd 100644 --- a/source/core/indexes/drop-index.txt +++ b/source/core/indexes/drop-index.txt @@ -77,7 +77,7 @@ method and specify an array of index names: .. 
code-block:: javascript - db..dropIndexes("", "", "") + db..dropIndexes( [ "", "", "" ] ) Drop All Indexes Except the ``_id`` Index ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/source/core/indexes/index-types/geospatial/2d.txt b/source/core/indexes/index-types/geospatial/2d.txt index 5ce1232ad50..7ed27b60b14 100644 --- a/source/core/indexes/index-types/geospatial/2d.txt +++ b/source/core/indexes/index-types/geospatial/2d.txt @@ -26,6 +26,15 @@ You cannot use 2d indexes for queries on :term:`GeoJSON` objects. To enable queries on GeoJSON objects, use :ref:`2dsphere indexes <2dsphere-index>`. +.. note:: + + When creating a :ref:`2d index <2d-index>`, the first value (longitude) must + be between -180 and 180, inclusive. The second value (latitude) must be between + -90 and 90, inclusive. However, these default limits can be overridden with the ``min`` + and ``max`` :ref:`options on 2d indexes <2d-index-options>`. Unlike + :ref:`2dsphere index <2dsphere-index>` coordinates, ``2d indexes`` values do + not "wrap" around a sphere. + Use Cases --------- diff --git a/source/core/indexes/index-types/geospatial/2d/query/proximity-flat-surface.txt b/source/core/indexes/index-types/geospatial/2d/query/proximity-flat-surface.txt index 51c2a1a3b3d..6812d93c41f 100644 --- a/source/core/indexes/index-types/geospatial/2d/query/proximity-flat-surface.txt +++ b/source/core/indexes/index-types/geospatial/2d/query/proximity-flat-surface.txt @@ -22,12 +22,10 @@ To query for location data near a specified point, use the db..find( { : { - $near : { - [ , ], - $maxDistance : - } - } - } ) + $near : [ , ], + $maxDistance : + } + } ) About this Task --------------- diff --git a/source/core/indexes/index-types/geospatial/2dsphere.txt b/source/core/indexes/index-types/geospatial/2dsphere.txt index 8ae903dc34c..decffabd2df 100644 --- a/source/core/indexes/index-types/geospatial/2dsphere.txt +++ b/source/core/indexes/index-types/geospatial/2dsphere.txt @@ -29,6 +29,13 @@ type: .. include:: /includes/indexes/code-examples/create-2dsphere-index.rst +.. note:: + + When :ref:`creating a a 2dsphere index <2dsphere-index-create>`, the first + value, or longitude, must be between -180 and 180, inclusive. The second value, + or latitude, must be between -90 and 90, inclusive. These coordinates "wrap" + around the sphere. For example, -179.9 and +179.9 are near neighbors. + Use Cases --------- diff --git a/source/core/indexes/index-types/index-compound.txt b/source/core/indexes/index-types/index-compound.txt index 29a9252de08..433a3437551 100644 --- a/source/core/indexes/index-types/index-compound.txt +++ b/source/core/indexes/index-types/index-compound.txt @@ -88,11 +88,8 @@ information, see :ref:`index-compound-sort-order`. Hashed Index Fields ~~~~~~~~~~~~~~~~~~~ -- In MongoDB 4.4 and later, compound indexes may contain **a single** - :ref:`hashed index field `. - -- In MongoDB 4.2 and earlier, compound indexes cannot contain any hashed - index fields. +Compound indexes may contain **a single** +:ref:`hashed index field `. .. _compound-index-prefix: diff --git a/source/core/indexes/index-types/index-hashed.txt b/source/core/indexes/index-types/index-hashed.txt index 7edb45b00df..a09178787d2 100644 --- a/source/core/indexes/index-types/index-hashed.txt +++ b/source/core/indexes/index-types/index-hashed.txt @@ -46,7 +46,7 @@ Floating-Point Numbers Hashed indexes truncate floating-point numbers to 64-bit integers before hashing. 
For example, a hashed index uses the same hash to store the -values ``2.3``, ``2.2``, and ``2.9``. This is a **collison**, where +values ``2.3``, ``2.2``, and ``2.9``. This is a **collision**, where multiple values are assigned to a single hash key. Collisions may negatively impact query performance. diff --git a/source/core/indexes/index-types/index-multikey.txt b/source/core/indexes/index-types/index-multikey.txt index 8bcc5e44927..b4f62560a29 100644 --- a/source/core/indexes/index-types/index-multikey.txt +++ b/source/core/indexes/index-types/index-multikey.txt @@ -30,6 +30,8 @@ sets that index to be a multikey index. MongoDB can create multikey indexes over arrays that hold both scalar values (for example, strings and numbers) and embedded documents. +If an array contains multiple instances of the same value, the index +only includes one entry for the value. To create a multikey index, use the following prototype: diff --git a/source/core/indexes/index-types/index-single.txt b/source/core/indexes/index-types/index-single.txt index 2b2c260dd69..c4d6469e869 100644 --- a/source/core/indexes/index-types/index-single.txt +++ b/source/core/indexes/index-types/index-single.txt @@ -97,3 +97,4 @@ Details :hidden: /core/indexes/index-types/index-single/create-single-field-index + /core/indexes/index-types/index-single/create-embedded-object-index diff --git a/source/core/indexes/index-types/index-single/create-embedded-object-index.txt b/source/core/indexes/index-types/index-single/create-embedded-object-index.txt new file mode 100644 index 00000000000..62237b46e90 --- /dev/null +++ b/source/core/indexes/index-types/index-single/create-embedded-object-index.txt @@ -0,0 +1,107 @@ +.. _index-subdocuments: +.. _index-embedded-documents: + +======================================= +Create an Index on an Embedded Document +======================================= + +.. facet:: + :name: genre + :values: tutorial + +.. contents:: On this page + :local: + :backlinks: none + :depth: 2 + :class: singlecol + +You can create indexes on embedded documents as a whole. However, only +queries that specify the **entire** embedded document use the index. +Queries on a specific field within the document do not use the index. + +About this Task +--------------- + +- To utilize an index on an embedded document, your query must specify + the entire embedded document. This can lead to unexpected behaviors if + your schema model changes and you add or remove fields from your + indexed document. + +- When you query embedded documents, the order that you specify fields + in the query matters. The embedded documents in your query and + returned document must match exactly. To see examples of queries on + embedded documents, see :ref:`read-operations-subdocuments`. + +- Before you create an index on an embedded document, consider if you + should instead index specific fields in that document, or use a + :ref:`wildcard index ` to index all of the + document's subfields. + +Before you Begin +---------------- + +Create a ``students`` collection that contains the following documents: + +.. code-block:: javascript + + db.students.insertMany( [ + { + "name": "Alice", + "gpa": 3.6, + "location": { city: "Sacramento", state: "California" } + }, + { + "name": "Bob", + "gpa": 3.2, + "location": { city: "Albany", state: "New York" } + } + ] ) + +Steps +----- + +Create an index on the ``location`` field: + +.. 
code-block:: javascript + + db.students.createIndex( { location: 1 } ) + +Results +------- + +The following query uses the index on the ``location`` field: + +.. code-block:: javascript + + db.students.find( { location: { city: "Sacramento", state: "California" } } ) + +The following queries *do not* use the index on the ``location`` field +because they query on specific fields within the embedded document: + +.. code-block:: javascript + + db.students.find( { "location.city": "Sacramento" } ) + + db.students.find( { "location.state": "New York" } ) + +In order for a :term:`dot notation` query to use an index, you must +create an index on the specific embedded field you are querying, not the +entire embedded object. For an example, see +:ref:`index-embedded-fields`. + +The following query returns no results because the embedded fields in +the query predicate are specified in a different order than they appear +in the document: + +.. code-block:: javascript + + db.students.find( { location: { state: "California", city: "Sacramento" } } ) + +Learn More +---------- + +- :ref:`indexes-single-field` + +- :ref:`server-diagnose-queries` + +- :ref:`optimize-query-performance` diff --git a/source/core/indexes/index-types/index-single/create-single-field-index.txt b/source/core/indexes/index-types/index-single/create-single-field-index.txt index edeaf00c6ab..95663c076e0 100644 --- a/source/core/indexes/index-types/index-single/create-single-field-index.txt +++ b/source/core/indexes/index-types/index-single/create-single-field-index.txt @@ -54,8 +54,6 @@ The following examples show you how to: - :ref:`index-create-ascending-single-field` -- :ref:`index-embedded-documents` - - :ref:`index-embedded-fields` .. _index-create-ascending-single-field: @@ -83,64 +81,6 @@ following: db.students.find( { gpa: { $lt: 3.4 } } ) -.. _index-subdocuments: -.. _index-embedded-documents: - -Create an Index on an Embedded Document -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -You can create indexes on embedded documents as a whole. - -Consider a social networking application where students can search for -one another by location. Student location is stored in an embedded -document called ``location``. The ``location`` document contains the -fields ``city`` and ``state``. - -You can create an index on the ``location`` field to improve performance -for queries on the ``location`` document: - -.. code-block:: javascript - - db.students.createIndex( { location: 1 } ) - -Results -``````` - -The following query uses the index on the ``location`` field: - -.. code-block:: javascript - - db.students.find( { location: { city: "Sacramento", state: "California" } } ) - -.. important:: Field Order for Embedded Documents - - When you query based on embedded documents, the order that specify - fields matters. The embedded documents in your query and returned - document must match exactly. To see more examples of queries on - embedded documents, see :ref:`read-operations-subdocuments`. - -Details -``````` - -When you create an index on an embedded document, only queries that -specify the *entire* embedded document use the index. Queries on a -specific field within the document do not use the index. - -For example, the following queries *do not* use the index on the -``location`` field because they query on specific fields within the -embedded document: - -.. 
code-block:: javascript - - db.students.find( { "location.city": "Sacramento" } ) - - db.students.find( { "location.state": "New York" } ) - -In order for a :term:`dot notation` query to use an index, you must -create an index on the specific embedded field you are querying, not the -entire embedded object. For an example, see -:ref:`index-embedded-fields`. - .. _index-embedded-fields: Create an Index on an Embedded Field @@ -172,10 +112,10 @@ following: Learn More ---------- +- :ref:`index-embedded-documents` + - :ref:`index-create-multikey-embedded` - :ref:`Check if a query uses an index ` - :ref:`Learn about other types of index types ` - -- :ref:`Learn about index properties ` diff --git a/source/core/indexes/index-types/index-text/specify-language-text-index/create-text-index-multiple-languages.txt b/source/core/indexes/index-types/index-text/specify-language-text-index/create-text-index-multiple-languages.txt index db25838c63b..8ea718dc607 100644 --- a/source/core/indexes/index-types/index-text/specify-language-text-index/create-text-index-multiple-languages.txt +++ b/source/core/indexes/index-types/index-text/specify-language-text-index/create-text-index-multiple-languages.txt @@ -93,7 +93,15 @@ The following operation creates a text index on the ``original`` and .. code-block:: javascript - db.quotes.createIndex( { original: "text", "translation.quote": "text" } ) + db.quotes.createIndex({ original: "text", "translation.quote": "text", "default_language" : "fr" }) + +.. note:: + + English is the default language for indexes. If you do not specify the + :ref:`default_language `, your query must + specify the language with the :ref:`$language ` parameter. + For more information, refer to :ref:``. + Results ------- diff --git a/source/core/indexes/index-types/index-wildcard/create-wildcard-index-multiple-fields.txt b/source/core/indexes/index-types/index-wildcard/create-wildcard-index-multiple-fields.txt index 5f8bd87b8d9..2874870f6a0 100644 --- a/source/core/indexes/index-types/index-wildcard/create-wildcard-index-multiple-fields.txt +++ b/source/core/indexes/index-types/index-wildcard/create-wildcard-index-multiple-fields.txt @@ -188,8 +188,10 @@ queries: db.products.find( { "attributes.size.height" : 10 } ) db.products.find( { "attributes.color" : "blue" } ) -The index **does not** support queries on fields not included in the -``wildcardProjection``, such as this query: +The index only supports queries on fields included in the +``wildcardProjection`` object. In this example, MongoDB performs +a collection scan for the following query because it includes +a field that is not present in the ``wildcardProjection`` object: .. code-block:: javascript @@ -200,7 +202,7 @@ The index **does not** support queries on fields not included in the Exclude Specific Fields from a Wildcard Index ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -If there document fields that you rarely query, you can create a +If there are document fields that you rarely query, you can create a wildcard index that omits those fields. The following operation creates a wildcard index on all document fields diff --git a/source/core/inmemory.txt b/source/core/inmemory.txt index 9bddb62ea28..75177921328 100644 --- a/source/core/inmemory.txt +++ b/source/core/inmemory.txt @@ -65,8 +65,13 @@ encryption at rest configuration. .. _inmemory-concurrency: -Concurrency ------------ +Transaction (Read and Write) Concurrency +---------------------------------------- + +.. 
include:: /includes/fact-dynamic-concurrency.rst + +Document Level Concurrency +-------------------------- The in-memory storage engine uses *document-level* concurrency control for write operations. As a result, multiple clients can modify different diff --git a/source/core/journaling.txt b/source/core/journaling.txt index d86f14d0276..bf190ea903f 100644 --- a/source/core/journaling.txt +++ b/source/core/journaling.txt @@ -122,14 +122,29 @@ For details, see :ref:`manage-journaling-change-wt-journal-compressor`. Journal File Size Limit ``````````````````````` -WiredTiger journal files for MongoDB have a maximum size limit of -approximately 100 MB. +WiredTiger journal files have a maximum size limit of approximately 100 MB. +Once the file exceeds that limit, WiredTiger creates a new journal file. -- Once the file exceeds that limit, WiredTiger creates a new journal - file. +WiredTiger automatically removes old journal files and maintains only +the files needed to recover from the last checkpoint. To determine how much +disk space to set aside for journal files, consider the following: + +- The default maximum size for a checkpoint is 2 GB +- Additional space may be required for MongoDB to write new journal + files while recovering from a checkpoint +- MongoDB compresses journal files +- The time it takes to restore a checkpoint is specific to your use case +- If you override the maximum checkpoint size or disable compression, your + calculations may be significantly different + +For these reasons, it is difficult to calculate exactly how much additional +space you need. Over-estimating disk space is always a safer approach. + +.. important:: + + If you do not set aside enough disk space for your journal + files, the MongoDB server will crash. -- WiredTiger automatically removes old journal files to maintain only - the files needed to recover from last checkpoint. Pre-Allocation `````````````` diff --git a/source/core/kerberos.txt b/source/core/kerberos.txt index f243741f707..c664c54cc67 100644 --- a/source/core/kerberos.txt +++ b/source/core/kerberos.txt @@ -216,11 +216,10 @@ for details. Testing and Verification ------------------------ -Introduced alongside MongoDB 4.4, the :binary:`~bin.mongokerberos` -program provides a convenient method to verify your platform's Kerberos -configuration for use with MongoDB, and to test that Kerberos -authentication from a MongoDB client works as expected. See the -:binary:`~bin.mongokerberos` documentation for more information. +The :binary:`~bin.mongokerberos` program provides a convenient method to +verify your platform's Kerberos configuration for use with MongoDB, and to +test that Kerberos authentication from a MongoDB client works as expected. +See the :binary:`~bin.mongokerberos` documentation for more information. :binary:`~bin.mongokerberos` is available in MongoDB Enterprise only. diff --git a/source/core/map-reduce.txt b/source/core/map-reduce.txt index 3d33f31d9e4..b3b23e83308 100644 --- a/source/core/map-reduce.txt +++ b/source/core/map-reduce.txt @@ -16,6 +16,7 @@ Map-Reduce .. meta:: :keywords: deprecated + :description: Map-Reduce has been deprecated and must be replaced by an aggregation pipeline. .. contents:: On this page :local: @@ -81,19 +82,6 @@ mapping. Map-reduce operations can also use a custom JavaScript function to make final modifications to the results at the end of the map and reduce operation, such as perform additional calculations. -.. 
note:: - - Starting in MongoDB 4.4, :dbcommand:`mapReduce` no longer supports - the deprecated :ref:`BSON type ` JavaScript code with - scope (BSON Type 15) for its functions. The ``map``, ``reduce``, - and ``finalize`` functions must be either BSON type String - (BSON Type 2) or BSON type JavaScript (BSON Type 13). To pass - constant values which will be accessible in the ``map``, ``reduce``, - and ``finalize`` functions, use the ``scope`` parameter. - - The use of JavaScript code with scope for the :dbcommand:`mapReduce` - functions has been deprecated since version 4.2.1. - Map-Reduce Results ------------------- diff --git a/source/core/query-plans.txt b/source/core/query-plans.txt index 83ab80f04d3..6b6667adf02 100644 --- a/source/core/query-plans.txt +++ b/source/core/query-plans.txt @@ -16,7 +16,7 @@ Query Plans .. TODO Consider moving this to the mechanics of the index section -For a query, the MongoDB query planner chooses and caches the most +For any given query, the MongoDB query planner chooses and caches the most efficient query plan given the available indexes. The evaluation of the most efficient query plan is based on the number of "work units" (``works``) performed by the query execution plan when the query planner @@ -30,6 +30,8 @@ The following diagram illustrates the query planner logic: .. include:: /images/query-planner-logic.rst +.. include:: includes/explain-ignores-cache-plan.rst + .. _cache-entry-state: Plan Cache Entry State @@ -152,13 +154,13 @@ Users can also: Plan Cache Debug Info Size Limit -------------------------------- -Starting in MongoDB 5.0 (and 4.4.3, 4.2.12, 4.0.23, and 3.6.23), the -:doc:`plan cache ` will save full ``plan cache`` -entries only if the cumulative size of the ``plan caches`` for all -collections is lower than 0.5 GB. When the cumulative size of the -``plan caches`` for all collections exceeds this threshold, additional -``plan cache`` entries are stored without the following debug -information: +Starting in MongoDB 5.0, the +:ref:`plan cache ` will save full +``plan cache`` entries only if the cumulative size of the +``plan caches`` for all collections is lower than 0.5 GB. When the +cumulative size of the ``plan caches`` for all collections exceeds this +threshold, additional ``plan cache`` entries are stored without the +following debug information: - :ref:`createdFromQuery ` - :ref:`cachedPlan ` @@ -217,13 +219,13 @@ Availability The ``queryHash`` and ``planCacheKey`` are available in: -- :doc:`explain() output ` fields: +- :ref:`explain() output ` fields: :data:`queryPlanner.queryHash ` and :data:`queryPlanner.planCacheKey ` -- :doc:`profiler log messages ` - and :doc:`diagnostic log messages (i.e. mongod/mongos log - messages)` when logging slow queries. +- :ref:`profiler log messages ` + and :ref:`diagnostic log messages (i.e. mongod/mongos log + messages)` when logging slow queries. - :pipeline:`$planCacheStats` aggregation stage (*New in MongoDB 4.2*) diff --git a/source/core/queryable-encryption.txt b/source/core/queryable-encryption.txt index 2be938bb89b..ff411a89373 100644 --- a/source/core/queryable-encryption.txt +++ b/source/core/queryable-encryption.txt @@ -1,3 +1,10 @@ +.. facet:: + :name: genre + :values: reference + +.. meta:: + :keywords: queryable encryption, encryption + .. 
_qe-manual-feature-qe: ==================== @@ -49,6 +56,12 @@ You can set up {+qe+} using the following mechanisms: Considerations -------------- +When implementing an application that uses {+qe+}, consider the points listed +in :ref:`Security Considerations `. + +For other limitations, see :ref:`{+qe+} limitations +`. + Compatibility ~~~~~~~~~~~~~ @@ -95,7 +108,7 @@ Install ------- To learn what you must install to use {+qe+}, see -the :ref:`` page. +the :ref:`` and :ref:`` pages. Quick Start ----------- @@ -105,16 +118,16 @@ To start using {+qe+}, see the :ref:``. Fundamentals ------------ -To learn how {+qe+} works and how to set it up, see the -:ref:`` section. +To learn about encryption key management, see :ref:`qe-reference-keys-key-vaults`. -The fundamentals section contains the following pages: +To learn how {+qe+} works, see the :ref:`` section, +which contains the following pages: - :ref:`qe-fundamentals-encrypt-query` +- :ref:`qe-create-encryption-schema` - :ref:`qe-fundamentals-collection-management` -- :ref:`qe-reference-keys-key-vaults` +- :ref:`qe-fundamentals-manual-encryption` - :ref:`qe-fundamentals-manage-keys` -- :ref:`qe-fundamentals-kms-providers` Tutorials --------- @@ -130,20 +143,14 @@ see the :ref:`qe-reference` section. The reference section contains the following pages: -- :ref:`qe-compatibility-reference` -- :ref:`qe-reference-encryption-limits` - :ref:`qe-reference-automatic-encryption-supported-operations` - :ref:`qe-reference-mongo-client` -- :ref:`qe-reference-shared-library` -- :ref:`qe-reference-libmongocrypt` -- :ref:`qe-reference-mongocryptd` .. toctree:: :titlesonly: /core/queryable-encryption/features - /core/queryable-encryption/install /core/queryable-encryption/quick-start /core/queryable-encryption/fundamentals /core/queryable-encryption/tutorials - /core/queryable-encryption/reference + /core/queryable-encryption/reference \ No newline at end of file diff --git a/source/core/queryable-encryption/about-qe-csfle.txt b/source/core/queryable-encryption/about-qe-csfle.txt new file mode 100644 index 00000000000..62a385fb351 --- /dev/null +++ b/source/core/queryable-encryption/about-qe-csfle.txt @@ -0,0 +1,185 @@ +.. facet:: + :name: genre + :values: reference + +.. meta:: + :keywords: queryable encryption, in-use encryption, client-side field level encryption + +.. _about-qe-csfle: + +====================================== +Choosing an In-Use Encryption Approach +====================================== + +.. contents:: On this page + :local: + :backlinks: none + :depth: 2 + :class: singlecol + +MongoDB provides two approaches to :term:`In-Use Encryption`: +:ref:`{+qe+} ` and :ref:`{+csfle+} ` +({+csfle-abbrev+}). When using either approach, you can also choose +between automatic and {+manual-enc+}. + +About {+qe+} and {+csfle-abbrev+} +------------------------------------------------------------------------ + +Both {+qe+} and {+csfle+} ({+csfle-abbrev+}) enable a client application +to encrypt data before transporting it over the network. Sensitive data is +transparently encrypted and decrypted by the client and only +communicated to and from the server in encrypted form. + +To compare features in detail, see :ref:`{+qe+} Features ` and +:ref:`{+csfle-abbrev+} Features `. + +Considerations +-------------- + +When implementing and application that uses {+qe+} or {+csfle-abbrev+}, review +the security considerations in this section. + +For the limitations of each approach, see :ref:`{+qe+} limitations +` or :ref:`{+csfle-abbrev+} limitations +`. 
+ +For MongoDB server and driver version compatibility, see :ref:`Compatibility +`. + +.. _qe-csfle-security-considerations: + +Security Considerations +~~~~~~~~~~~~~~~~~~~~~~~ + +* {+csfle-abbrev+} and {+qe+} do not provide any cryptographic integrity + guarantees against adversaries with access to your {+cmk-long+}, + {+dek-long+}s. + +* {+csfle-abbrev+} and {+qe+} do not provide any cryptographic integrity + guarantees against adversaries with arbitrary write access to collections + containing encrypted data. + +* MongoDB uses :ref:`schema validation ` to enforce + encryption of specific fields in a collection. Without a client-side schema, + the client downloads the server-side schema for the collection to determine + which fields to encrypt. To avoid this issue, use client-side schema validation. + + Because {+csfle-abbrev+} and {+qe+} do not provide a mechanism to verify + the integrity of a schema, relying on a server-side schema means + trusting that the server's schema has not been tampered with. If an adversary + compromises the server, they can modify the schema so that a previously + encrypted field is no longer labeled for encryption. This causes the client + to send plaintext values for that field. + + For an example of {+csfle-abbrev+} configuration for client and server-side + schemas, see :ref:`CSFLE Server-Side Field Level Encryption Enforcement `. + + +Using {+qe+} and {+csfle-abbrev+} +------------------------------------------------------------------------ + +You can use {+qe+}, {+csfle+}, or both in your application. However, +you can't use both approaches in the same collection. + +Consider using {+qe+} in the following scenarios: + +- You are developing a new application and want to use the latest + cryptographic advancements from MongoDB. +- You expect users to run ranged, prefix, suffix, or substring queries + against encrypted data. +- Your application can use a single key for a given field, rather than + requiring separate keys on a per-user or per-tenant basis. +- You value read performance over storage requirements. {+qe+} generates + internal :ref:`metadata collections ` and + indexes to improve query performance. As a result, a collection + encrypted with {+qe+} uses 2-4 times the storage space that it would + if it were plaintext or encrypted with {+csfle-abbrev+}. + +There are situations where {+csfle-abbrev+} may be a preferable solution: + +- Your application already uses {+csfle-abbrev+}. +- You need to use different keys for the same field. This is commonly + encountered when separating tenants or using user-specific keys. +- You need to be flexible with your data schema and potentially add more + encrypted fields. Adding encrypted fields for {+qe+} + requires rebuilding metadata collections and indexes. +- The increased storage requirements of {+qe+} are a concern. + +Querying Encrypted Fields +~~~~~~~~~~~~~~~~~~~~~~~~~ + +{+qe+} supports equality queries on encrypted fields. +Support for ranged queries is upcoming, and support for prefix, suffix, +and substring queries with {+qe+} is under development. + +{+csfle+} supports equality queries on deterministically encrypted fields. + +For more information about supported query operators, see :ref:`Supported Query +Operators for {+qe+} ` and +:ref:`Supported Query Operators for {+csfle-abbrev+} +`. For the full list of MongoDB query +operators, see :ref:`query-projection-operators-top`. 
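To illustrate, here is a minimal Node.js sketch of an equality query against an encrypted field. The namespace, field name, and ``encryptedClient`` variable are illustrative placeholders; the sketch assumes the client was created with automatic {+qe+} enabled and that ``patientRecord.ssn`` is configured as an equality-queryable encrypted field.

.. code-block:: javascript

   // The query is written like any other query. The driver encrypts the
   // supplied value before sending it and decrypts the matching document
   // before returning it. All names here are placeholders.
   const patients = encryptedClient.db("medicalRecords").collection("patients");

   const patient = await patients.findOne({
     "patientRecord.ssn": "987-65-4320",
   });
   console.log(patient);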
+ +Encryption Algorithms +~~~~~~~~~~~~~~~~~~~~~ + +The new encryption algorithm for {+qe+} uses randomized encryption based on +structured encryption, which produces different encrypted output values +from the same input. This prevents attackers from reverse-engineering +the encryption. + +For detailed information on MongoDB's approach to {+qe+}, see the +`Overview of {+qe+} +`__ +and +`Design and Analysis of a Stateless +Document Database Encryption Scheme `__ whitepapers. + +The {+csfle-abbrev+} encryption algorithm supports both randomized +encryption and :ref:`deterministic encryption +`. However, it only supports +**querying** fields that are encrypted deterministically. + +With deterministic encryption, a given input value always encrypts to +the same output value. Deterministic encryption is suitable for high +:term:`cardinality` data. If a field has many potential unique values, +such as street addresses, then it is difficult for potential attackers +to reverse engineer encrypted values to plaintext. Conversely, if a +field has very few values, like sex, then attackers can reasonably guess +them and use that information to help to decipher the cryptographic +algorithm. + +Private Querying +~~~~~~~~~~~~~~~~ + +MongoDB encrypts queries for both {+qe+} and {+csfle+} so that the +server has no information on cleartext document or query values. +With {+qe+}, private querying goes a step further and redacts logs and +metadata to scrub information around the query's existence. This ensures +stronger privacy and confidentiality. + +Choosing Between Automatic and {+manual-enc-title+} +--------------------------------------------------------- + +Using Automatic Encryption +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +We recommend automatic encryption in most situations, as it streamlines +the process of writing your client application. With automatic +encryption, MongoDB automatically encrypts and decrypts fields in read +and write operations. + +Using {+manual-enc-title+} +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +.. include:: /includes/queryable-encryption/qe-csfle-manual-enc-overview.rst + +For details, see :ref:`{+manual-enc-title+} with {+qe+} +` or :ref:`{+manual-enc-title+} with +{+csfle-abbrev+} `. + +.. toctree:: + :titlesonly: + + /core/queryable-encryption/reference/limitations + /core/csfle/reference/limitations \ No newline at end of file diff --git a/source/core/queryable-encryption/features.txt b/source/core/queryable-encryption/features.txt index da9efe6aefe..110534cc6df 100644 --- a/source/core/queryable-encryption/features.txt +++ b/source/core/queryable-encryption/features.txt @@ -33,7 +33,7 @@ and only communicated to and from the server in encrypted form. Unlike :ref:`Client-Side Field Level Encryption ` that can use :ref:`Deterministic Encryption `, -{+qe+} uses fast, searchable encryption schemes based on `Structured Encryption `__. +{+qe+} uses fast, searchable encryption schemes based on structured encryption. These schemes produce different encrypted output values even when given the same cleartext input. diff --git a/source/core/queryable-encryption/fundamentals.txt b/source/core/queryable-encryption/fundamentals.txt index 337b4365832..5449f0942f2 100644 --- a/source/core/queryable-encryption/fundamentals.txt +++ b/source/core/queryable-encryption/fundamentals.txt @@ -12,21 +12,25 @@ Fundamentals :depth: 2 :class: singlecol -Read the following sections to learn how {+qe+} works and how to use it: +To learn about encryption key management, read :ref:`qe-reference-keys-key-vaults`. 
+ +To learn how {+qe+} works and how to use it, read the following sections: - :ref:`qe-fundamentals-encrypt-query` +- :ref:`qe-create-encryption-schema` +- :ref:`qe-fundamentals-enable-qe` - :ref:`qe-fundamentals-collection-management` -- :ref:`qe-reference-keys-key-vaults` -- :ref:`qe-fundamentals-manage-keys` - :ref:`qe-fundamentals-manual-encryption` -- :ref:`qe-fundamentals-kms-providers` +- :ref:`qe-fundamentals-manage-keys` +- :ref:`qe-overview-enable-qe` +- :ref:`qe-overview-use-qe` .. toctree:: :titlesonly: /core/queryable-encryption/fundamentals/encrypt-and-query + /core/queryable-encryption/qe-create-encryption-schema + /core/queryable-encryption/fundamentals/enable-qe /core/queryable-encryption/fundamentals/manage-collections /core/queryable-encryption/fundamentals/manual-encryption - /core/queryable-encryption/fundamentals/keys-key-vaults /core/queryable-encryption/fundamentals/manage-keys - /core/queryable-encryption/fundamentals/kms-providers diff --git a/source/core/queryable-encryption/fundamentals/enable-qe.txt b/source/core/queryable-encryption/fundamentals/enable-qe.txt new file mode 100644 index 00000000000..e21577f00ad --- /dev/null +++ b/source/core/queryable-encryption/fundamentals/enable-qe.txt @@ -0,0 +1,77 @@ +.. facet:: + :name: genre + :values: tutorial + +.. facet:: + :name: programming_language + :values: javascript/typescript + +.. meta:: + :keywords: queryable encryption, code example, node.js + +.. _qe-fundamentals-enable-qe: + +======================================================================== +Enabling {+qe+} when Creating Collections +======================================================================== + +.. contents:: On this page + :local: + :backlinks: none + :depth: 2 + :class: singlecol + +Overview +-------- + +.. include:: /includes/queryable-encryption/qe-enable-qe-at-collection-creation.rst + +.. include:: /includes/queryable-encryption/qe-explicitly-create-collection.rst + +Enable {+qe+} on a Collection +------------------------------------------------------------------------ + +You can enable {+qe+} on fields in one of two ways. The following +examples use Node.js to enable {+qe+}: + +- Pass the {+enc-schema+}, represented by the ``encryptedFieldsObject`` + constant, to the client that the application uses to create the collection: + + .. code-block:: javascript + :emphasize-lines: 8-10 + + const client = new MongoClient(uri, { + autoEncryption: { + keyVaultNameSpace: "", + kmsProviders: "", + extraOptions: { + cryptSharedLibPath: "" + }, + encryptedFieldsMap: { + "": { encryptedFieldsObject } + } + } + + ... + + await client.db("").createCollection(""); + } + + For more information on ``autoEncryption`` configuration options, see the + section on :ref:`qe-reference-mongo-client`. + +- Pass the {+enc-schema+} ``encryptedFieldsObject`` to + ``createCollection()``: + + .. code-block:: javascript + + await encryptedDB.createCollection("", { + encryptedFields: encryptedFieldsObject + }); + + .. tip:: + + Specify the ``encryptedFieldsObject`` when you create the + collection, and also when you create a client to access the + collection. This ensures that if the server's security is + compromised, the information is still encrypted through the client. 
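The snippets above are abbreviated. The following is a fuller sketch of the first approach, with hypothetical database, collection, and path names; it assumes ``uri``, ``kmsProviders``, and ``encryptedFieldsObject`` are defined elsewhere in your application. Note that the Node.js driver expects the option to be spelled ``keyVaultNamespace``.

.. code-block:: javascript

   const { MongoClient } = require("mongodb");

   // All names below are placeholders; uri, kmsProviders, and
   // encryptedFieldsObject are assumed to be defined elsewhere.
   const client = new MongoClient(uri, {
     autoEncryption: {
       keyVaultNamespace: "encryption.__keyVault",
       kmsProviders: kmsProviders,
       extraOptions: {
         cryptSharedLibPath: "/path/to/crypt_shared/library",
       },
       encryptedFieldsMap: {
         "medicalRecords.patients": encryptedFieldsObject,
       },
     },
   });

   // Because the encrypted fields map already names this namespace, the
   // driver creates "patients" as an encrypted collection, along with its
   // metadata collections and indexes.
   await client.db("medicalRecords").createCollection("patients");
   await client.close();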
\ No newline at end of file diff --git a/source/core/queryable-encryption/fundamentals/encrypt-and-query.txt b/source/core/queryable-encryption/fundamentals/encrypt-and-query.txt index f7e252a6fc6..0dba872aca8 100644 --- a/source/core/queryable-encryption/fundamentals/encrypt-and-query.txt +++ b/source/core/queryable-encryption/fundamentals/encrypt-and-query.txt @@ -1,10 +1,19 @@ -.. _qe-fundamentals-encrypt-query: +.. facet:: + :name: genre + :values: tutorial + +.. meta:: + :keywords: code example, node.js + +.. meta:: + :keywords: queryable encryption, contention -================================= -Field Encryption and Queryability -================================= +.. _qe-fundamentals-encrypt-query: +.. _qe-encryption-schema: -.. default-domain:: mongodb +============================ +Encrypted Fields and Queries +============================ .. contents:: On this page :local: @@ -15,220 +24,56 @@ Field Encryption and Queryability Overview -------- -Learn about the following {+qe+} topics: +When you use {+qe+}, you define encrypted fields at the collection level +using an {+enc-schema+}. Encrypting a field and enabling queries +increases storage requirements and impacts query performance. -- Considerations when enabling queries on an encrypted field. -- How to specify fields for encryption. -- How to configure an encrypted field so that it is queryable. -- Query types and which ones you can use on encrypted fields. -- How to optimize query performance on encrypted fields. +For instructions on creating an {+enc-schema+} and configuring +querying, see :ref:`qe-create-encryption-schema` Considerations when Enabling Querying ------------------------------------- -When you use {+qe+}, you can choose whether to make an encrypted field queryable. -If you don't need to perform CRUD operations that require you -to query an encrypted field, you may not need to enable querying on that field. -You can still retrieve the entire document by querying other fields that are queryable or not encrypted. +.. warning:: -When you make encrypted fields queryable, {+qe+} creates an index for each encrypted field, which -can make write operations on that field take longer. When a write operation updates -an indexed field, MongoDB also updates the related index. + .. include:: /includes/queryable-encryption/qe-enable-qe-at-collection-creation.rst + +You can choose to make an encrypted field queryable. If you +don't need to perform CRUD operations that require you to query an +encrypted field, you may not need to enable querying on that field. You +can still retrieve the entire document by querying other fields that are +queryable or unencrypted. + +When you make encrypted fields queryable, MongoDB creates an index for +each encrypted field, which can make write operations on that field take +longer. When a write operation updates an indexed field, MongoDB also +updates the related index. When you create an encrypted collection, MongoDB creates :ref:`two metadata collections `, increasing the storage space requirements. -.. _qe-specify-fields-for-encryption: - -Specify Fields for Encryption ------------------------------ - -.. _qe-encryption-schema: - -With {+qe+}, you specify which fields you want to automatically -encrypt in your MongoDB document using a JSON {+enc-schema+}. The -{+enc-schema+} defines which fields are encrypted and which queries -are available for those fields. - -.. important:: - - You can specify any field for encryption except the - ``_id`` field. 
- -To specify fields for encryption and querying, create an {+enc-schema+} that includes the following properties: - -.. list-table:: - :header-rows: 1 - :widths: 30 30 40 - - * - Key Name - - Type - - Required - - * - ``path`` - - String - - Required - - * - ``bsonType`` - - String - - Required - - * - ``keyId`` - - Binary - - Optional. Use only if you want to use {+manual-enc+}, which - requires you to generate a key for each field in advance. - - * - ``queries`` - - Object - - Optional. Include to make the field queryable. - -Example -~~~~~~~ - -This example shows how to create the {+enc-schema+}. - -Consider the following document that contains personally identifiable information -(PII), credit card information, and sensitive medical information: - -.. code-block:: json - - { - "firstName": "Jon", - "lastName": "Snow", - "patientId": 12345187, - "address": "123 Cherry Ave", - "medications": [ - "Adderall", - "Lipitor" - ], - "patientInfo": { - "ssn": "921-12-1234", - "billing": { - "type": "visa", - "number": "1234-1234-1234-1234" - } - } - } - -To ensure the PII and sensitive medical information stays secure, create -the {+enc-schema+} and configure those fields for automatic -encryption. For example: - -.. code-block:: javascript - - const encryptedFieldsObject = { - fields: [ - { - path: "patientId", - bsonType: "int" - }, - { - path: "patientInfo.ssn", - bsonType: "string" - }, - { - path: "medications", - bsonType: "array" - }, - { - path: "patientInfo.billing", - bsonType: "object" - } - ] - } - -MongoDB creates encryption keys for each field automatically. -Configure ``AutoEncryptionSettings`` on the client, then use the -``createEncryptedCollection`` helper method to create your collections. - -If you are using :ref:`explicit encryption -`, you must create a unique -{+dek-long+} for each encrypted field in advance. Add a ``keyId`` field -to each entry that includes the key: - -.. code-block:: javascript - :emphasize-lines: 5, 10 - - const encryptedFieldsObject = { - fields: [ - { - path: "patientId", - keyId: "", - bsonType: "int" - }, - { - path: "patientInfo.ssn", - keyId: "", - bsonType: "string" - }, - . . . - ] - } - -.. _qe-enable-queries: - -Configure Fields for Querying ------------------------------ - -Include the ``queries`` property on fields to make them queryable. This -enables an authorized client to issue read and write queries against -those fields. Omitting the ``queries`` property prevents clients from querying a field. - - -Example -~~~~~~~ - -Add the ``queries`` property to the previous example schema to make the -``patientId`` and ``patientInfo.ssn`` fields queryable. - -.. code-block:: javascript - :emphasize-lines: 6, 11 - - const encryptedFieldsObject = { - fields: [ - { - path: "patientId", - bsonType: "int", - queries: { queryType: "equality" } - }, - { - path: "patientInfo.ssn", - bsonType: "string", - queries: { queryType: "equality" } - }, - { - path: "medications", - bsonType: "array" - }, - { - path: "patientInfo.billing", - bsonType: "object" - }, - ] - } - .. _qe-contention: -Configure Contention Factor -~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Contention +---------- -Include the ``contention`` property on queryable fields to prefer either -find performance, or write and update performance. +.. include:: /includes/queryable-encryption/qe-csfle-contention.rst -.. 
include:: /includes/fact-qe-csfle-contention.rst +Adjusting the Contention Factor +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Example -+++++++ +You can optionally include the ``contention`` property on queryable fields to +change the contention factor from its default value of ``8``. Before you modify +the contention factor, consider the following points: -.. include:: /includes/example-qe-csfle-contention.rst -.. _qe-query-types: +.. include:: /includes/queryable-encryption/qe-csfle-setting-contention.rst -Query Types -~~~~~~~~~~~ +.. _qe-query-types: -Passing a query type to the ``queries`` option in your encrypted fields -object sets the allowed query types for the field. Querying non-encrypted fields or encrypted fields with a supported query +Supported Query Types and Behavior +---------------------------------- +Querying non-encrypted fields or encrypted fields with a supported query type returns encrypted data that is then decrypted at the client. @@ -262,104 +107,31 @@ following :term:`BSON` types: - ``decimal128`` - ``object`` - ``array`` -- ``javascriptWithScope`` (*Deprecated in MongoDB 4.4*) Client and Server Schemas ------------------------- .. content copied from source/core/csfle/fundamentals/automatic-encryption.txt -MongoDB supports using -:ref:`schema validation ` -to enforce encryption of specific fields -in a collection. Clients using automatic {+qe+} have -specific behavior depending on the database connection -configuration: +MongoDB supports using :ref:`schema validation ` +to enforce encryption of specific fields in a collection. Clients using +automatic {+qe+} behave differently depending on the database connection configuration: -- If the connection - ``encryptedFieldsMap`` object contains a key for the specified collection, the - client uses that object to perform automatic {+qe+}, - rather than using the remote schema. At a minimum, the local rules **must** - encrypt those fields that the remote schema marks as requiring - encryption. +- If the connection ``encryptedFieldsMap`` object contains a key for the + specified collection, the client uses that object to perform + automatic {+qe+}, rather than using the remote schema. At minimum, + the local rules must encrypt all fields that the remote schema does. -- If the connection - ``encryptedFieldsMap`` object does *not* contain a key for the specified - collection, the client downloads the server-side remote schema for - the collection and uses it to perform automatic {+qe+}. +- If the connection ``encryptedFieldsMap`` object doesn't contain a + key for the specified collection, the client downloads the + server-side remote schema for the collection and uses it instead. - .. important:: Behavior Considerations + .. important:: Remote Schema Behavior - When a client does not have an encryption schema for the - specified collection, the following occurs: + When using a remote schema: - - The client trusts that the server has a valid schema with respect - to automatic {+qe+}. + - The client trusts that the server has a valid schema - - The client uses the remote schema to perform automatic - {+qe+} only. The client does not enforce any other - validation rules specified in the schema. - -To learn more about automatic {+qe+}, see the following resources: - -- :ref:`{+qe+} Introduction ` -- :ref:`` - -.. _qe-fundamentals-enable-qe: - -Enable {+qe+} ---------------------------- - -Enable {+qe+} before creating a collection. 
Enabling {+qe+} after -creating a collection does not encrypt fields on documents already in -that collection. You can enable {+qe+} on fields in one of two ways: - -- Pass the {+enc-schema+}, represented by the - ``encryptedFieldsObject`` - constant, to the client that the application uses to create the collection: - - -.. code-block:: javascript - :emphasize-lines: 8-10 - - const client = new MongoClient(uri, { - autoEncryption: { - keyVaultNameSpace: "", - kmsProviders: "", - extraOptions: { - cryptSharedLibPath: "" - }, - encryptedFieldsMap: { - "": { encryptedFieldsObject } - } - } - - ... - - await client.db("").createCollection(""); - } - -For more information on ``autoEncryption`` configuration options, see the -section on :ref:`qe-reference-mongo-client`. - -- Pass the encrypted fields object to ``createCollection()`` to create a new collection: - -.. code-block:: javascript - - await encryptedDB.createCollection("", { - encryptedFields: encryptedFieldsObject - }); - -.. tip:: - - Specify the encrypted fields when you create the collection, and also - when you create a client to access the collection. This ensures that - if the server's security is compromised, the information is still - encrypted through the client. - -.. important:: - - Explicitly create your collection, rather than creating it implicitly - with an insert operation. When you create a collection using - ``createCollection()``, MongoDB creates an index on the encrypted - fields. Without this index, queries on encrypted fields may run slowly. + - The client uses the remote schema to perform automatic {+qe+} + only. The client does not enforce any other validation rules + specified in the schema. \ No newline at end of file diff --git a/source/core/queryable-encryption/fundamentals/keys-key-vaults.txt b/source/core/queryable-encryption/fundamentals/keys-key-vaults.txt index 08386b03af3..158074de68a 100644 --- a/source/core/queryable-encryption/fundamentals/keys-key-vaults.txt +++ b/source/core/queryable-encryption/fundamentals/keys-key-vaults.txt @@ -1,8 +1,8 @@ .. _qe-reference-keys-key-vaults: -=================== -Keys and Key Vaults -=================== +============================== +Encryption Keys and Key Vaults +============================== .. default-domain:: mongodb @@ -16,7 +16,7 @@ Overview -------- In this guide, you can learn details about the following components of -{+qe+}: +{+in-use-encryption+}: - {+dek-long+}s ({+dek-abbr+})s - {+cmk-long+}s ({+cmk-abbr+})s @@ -24,13 +24,14 @@ In this guide, you can learn details about the following components of - {+kms-long+} ({+kms-abbr+}) To view step by step guides demonstrating how to use the preceding -components to set up a {+qe+} enabled client, see the following resources: +components to set up a {+qe+} or {+csfle+} enabled client, see the +following resources: -- :ref:`` -- :ref:`` - -.. _qe-envelope-encryption: -.. _qe-key-architecture: +- :ref:`{+qe+} Quick Start ` +- :ref:`{+qe+} Automatic Encryption Tutorial + ` +- :ref:`{+csfle-abbrev+} Quick Start ` +- :ref:`{+csfle-abbrev+} Automatic Encryption Tutorial ` Data Encryption Keys and the Customer Master Key ------------------------------------------------ @@ -49,7 +50,6 @@ Key Rotation For details on rotating keys, see :ref:`Rotate Encryption Keys `. .. _qe-reference-key-vault: -.. _qe-field-level-encryption-keyvault: {+key-vault-long-title+}s --------------------- @@ -71,7 +71,9 @@ Permissions .. 
include:: /includes/queryable-encryption/qe-csfle-key-vault-permissions.rst To learn how to grant your application access to your {+cmk-long+}, see the -:ref:`` tutorial. +:ref:`{+qe+} Automatic Encryption Tutorial +` or :ref:`{+csfle-abbrev+} +Automatic Encryption Tutorial `. Key Vault Cluster ~~~~~~~~~~~~~~~~~ @@ -80,8 +82,10 @@ Key Vault Cluster To specify the cluster that hosts your {+key-vault-long+}, use the ``keyVaultClient`` field of your client's ``MongoClient`` object. -To learn more about the {+qe+}-specific configuration options in your -client's ``MongoClient`` object, see :ref:``. +To learn more about the specific configuration options in your +client's ``MongoClient`` object, see the :ref:`MongoClient Options for +{+qe+} ` or :ref:`MongoClient Options for +{+csfle-abbrev+} `. Update a {+key-vault-long-title+} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -89,4 +93,10 @@ Update a {+key-vault-long-title+} .. include:: /includes/in-use-encryption/update-a-key.rst To view a tutorial that shows how to create a {+dek-long+}, see -the :ref:`Quick Start `. \ No newline at end of file +the :ref:`{+qe+} Quick Start ` or the +:ref:`{+csfle-abbrev+} Quick Start `. + +.. toctree:: + :titlesonly: + + /core/queryable-encryption/fundamentals/kms-providers \ No newline at end of file diff --git a/source/core/queryable-encryption/fundamentals/kms-providers.txt b/source/core/queryable-encryption/fundamentals/kms-providers.txt index cca8cd28176..4c0deafff2f 100644 --- a/source/core/queryable-encryption/fundamentals/kms-providers.txt +++ b/source/core/queryable-encryption/fundamentals/kms-providers.txt @@ -15,7 +15,7 @@ KMS Providers Overview -------- -Learn about the {+kms-long+} ({+kms-abbr+}) providers {+qe+} +Learn about the {+kms-long+} ({+kms-abbr+}) providers {+in-use-encryption+} supports. .. _qe-reasons-to-use-remote-kms: @@ -35,7 +35,7 @@ it: Additionally, for the following {+kms-abbr+} providers, your {+kms-abbr+} remotely encrypts and decrypts your {+dek-long+}, ensuring -your {+cmk-long+} is never exposed to your {+qe+} enabled +your {+cmk-long+} is never exposed to your {+qe+} or {+csfle-abbrev+} enabled application: - {+aws-long+} KMS @@ -45,7 +45,7 @@ application: {+kms-long+} Tasks ---------------------------- -In {+qe+}, your {+kms-long+}: +In {+in-use-encryption+}, your {+kms-long+}: - Creates and encrypts the {+cmk-long+} - Encrypts the {+dek-long+}s created by your application @@ -54,7 +54,7 @@ In {+qe+}, your {+kms-long+}: To learn more about {+cmk-long+}s and {+dek-long+}s, see :ref:`qe-reference-keys-key-vaults`. -.. _qe-reference-kms-providers-create-and-store: +.. _qe-fundamentals-kms-providers-create-and-store: Create and Store your {+cmk-long+} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -66,9 +66,11 @@ To create a {+cmk-long+}, configure your {+kms-long+} to generate your {+cmk-lon To view a tutorial that demonstrates how to create and store a {+cmk-abbr+} in your preferred {+kms-abbr+}, -see :ref:`qe-tutorial-automatic-encryption`. +see the :ref:`{+qe+} Automatic Encryption Tutorial +` or :ref:`{+csfle-abbrev+} +Automatic Encryption Tutorial `. -.. _qe-reference-kms-providers-encrypt: +.. 
_qe-fundamentals-kms-providers-encrypt: Create and Encrypt a {+dek-long+} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -76,14 +78,13 @@ Create and Encrypt a {+dek-long+} To create a {+dek-long+}: - Instantiate a ``ClientEncryption`` instance in your - {+qe+} enabled application: + {+qe+} or {+csfle-abbrev+} enabled application: * Provide a ``kmsProviders`` object that specifies the credentials - your {+qe+} enabled application uses to authenticate with - your {+kms-abbr+} provider. + your application uses to authenticate with your {+kms-abbr+} provider. - Create a {+dek-long+} with the ``CreateDataKey`` method of the - ``ClientEncryption`` object in your {+qe+} enabled application. + ``ClientEncryption`` object in your application. * Provide a ``dataKeyOpts`` object that specifies with which key your {+kms-abbr+} should encrypt your new {+dek-long+}. @@ -91,14 +92,17 @@ To create a {+dek-long+}: To view a tutorial demonstrating how to create and encrypt a {+dek-long+}, see the following resources: -- :ref:`qe-quick-start` -- :ref:`qe-tutorial-automatic-encryption` +- :ref:`{+qe+} Quick Start ` +- :ref:`{+qe+} Automatic Encryption Tutorial + ` +- :ref:`{+csfle-abbrev+} Quick Start ` +- :ref:`{+csfle-abbrev+} Automatic Encryption Tutorial ` To view the structure of ``kmsProviders`` and ``dataKeyOpts`` objects for all supported {+kms-abbr+} providers, see -:ref:`qe-reference-kms-providers-supported-kms`. +:ref:`qe-fundamentals-kms-providers-supported-kms`. -.. _qe-reference-kms-providers-supported-kms: +.. _qe-fundamentals-kms-providers-supported-kms: Supported Key Management Services --------------------------------- @@ -106,37 +110,35 @@ Supported Key Management Services The following sections of this page present the following information for all {+kms-long+} providers: -- Architecture of {+qe+} enabled client +- Architecture of {+in-use-encryption+} enabled client - Structure of ``kmsProviders`` objects - Structure of ``dataKeyOpts`` objects -{+qe+} supports the following {+kms-long+} +Both {+qe+} and {+csfle-abbrev+} support the following {+kms-long+} providers: -- :ref:`qe-reference-kms-providers-aws` -- :ref:`qe-reference-kms-providers-azure` -- :ref:`qe-reference-kms-providers-gcp` -- :ref:`qe-reference-kms-providers-kmip` -- :ref:`qe-reference-kms-providers-local` +- :ref:`qe-fundamentals-kms-providers-aws` +- :ref:`qe-fundamentals-kms-providers-azure` +- :ref:`qe-fundamentals-kms-providers-gcp` +- :ref:`qe-fundamentals-kms-providers-kmip` +- :ref:`qe-fundamentals-kms-providers-local` -.. _qe-reference-kms-providers-aws: -.. _qe-field-level-encryption-aws-kms: +.. _qe-fundamentals-kms-providers-aws: Amazon Web Services KMS ~~~~~~~~~~~~~~~~~~~~~~~ This section provides information related to using `AWS Key Management Service `_ -in your {+qe+} enabled application. +in your {+qe+} or {+csfle-abbrev+} enabled application. To view a tutorial demonstrating how to use AWS KMS in your -{+qe+} enabled application, see -:ref:`qe-tutorial-automatic-aws`. +application, see :ref:`Overview: Enable Queryable Encryption +` or :ref:`csfle-tutorial-automatic-aws`. .. include:: /includes/queryable-encryption/reference/kms-providers/aws.rst -.. _qe-reference-kms-providers-azure: -.. _qe-field-level-encryption-azure-keyvault: +.. _qe-fundamentals-kms-providers-azure: Azure Key Vault ~~~~~~~~~~~~~~~ @@ -144,38 +146,37 @@ Azure Key Vault This section provides information related to using `Azure Key Vault `_ -in your {+qe+} enabled application. +in your {+qe+} or {+csfle-abbrev+} enabled application. 
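As a concrete illustration of the ``kmsProviders`` and ``dataKeyOpts`` structures for {+azure-kv+}, the following Node.js sketch creates a {+dek-long+} wrapped by an Azure key. All credentials and key names are placeholders, and the exact import location of ``ClientEncryption`` can vary by driver version.

.. code-block:: javascript

   const { MongoClient, ClientEncryption } = require("mongodb");

   // Placeholder connection URI for the cluster hosting the key vault.
   const keyVaultClient = new MongoClient(uri);
   await keyVaultClient.connect();

   const clientEncryption = new ClientEncryption(keyVaultClient, {
     keyVaultNamespace: "encryption.__keyVault",
     // kmsProviders: credentials your application uses to authenticate
     // with Azure. All values are placeholders.
     kmsProviders: {
       azure: {
         tenantId: "<Azure tenant ID>",
         clientId: "<Azure client ID>",
         clientSecret: "<Azure client secret>",
       },
     },
   });

   // dataKeyOpts: identify the Azure key that wraps the new DEK.
   const dekId = await clientEncryption.createDataKey("azure", {
     masterKey: {
       keyVaultEndpoint: "<your-key-vault>.vault.azure.net",
       keyName: "<your key name>",
     },
   });
   console.log("Created data encryption key:", dekId);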
To view a tutorial demonstrating how to use Azure Key Vault in your -{+qe+} enabled application, see -:ref:`qe-tutorial-automatic-azure`. +application, see :ref:`Overview: Enable Queryable Encryption +` or :ref:`csfle-tutorial-automatic-azure`. .. include:: /includes/queryable-encryption/reference/kms-providers/azure.rst -.. _qe-reference-kms-providers-gcp: -.. _qe-field-level-encryption-gcp-kms: +.. _qe-fundamentals-kms-providers-gcp: Google Cloud Platform KMS ~~~~~~~~~~~~~~~~~~~~~~~~~ This section provides information related to using `Google Cloud Key Management `_ -in your {+qe+} enabled application. +in your {+qe+} or {+csfle-abbrev+} enabled application. To view a tutorial demonstrating how to use GCP KMS in your -{+qe+} enabled application, see -:ref:`qe-tutorial-automatic-gcp`. +application, see :ref:`Overview: Enable Queryable Encryption +` or :ref:`csfle-tutorial-automatic-gcp`. .. include:: /includes/queryable-encryption/reference/kms-providers/gcp.rst -.. _qe-reference-kms-providers-kmip: +.. _qe-fundamentals-kms-providers-kmip: KMIP ~~~~ This section provides information related to using a `KMIP `_ -compliant {+kms-long+} in your {+qe+} enabled application. +compliant {+kms-long+} in your {+qe+} or {+csfle-abbrev+} enabled application. To learn how to set up KMIP with HashiCorp Vault, see the `How to Set Up HashiCorp Vault KMIP Secrets Engine with MongoDB CSFLE or Queryable Encryption `__ @@ -183,19 +184,18 @@ blog post. .. include:: /includes/queryable-encryption/reference/kms-providers/kmip.rst -.. _qe-reference-kms-providers-local: -.. _qe-field-level-encryption-local-kms: +.. _qe-fundamentals-kms-providers-local: Local Key Provider ~~~~~~~~~~~~~~~~~~ This section provides information related to using a Local Key Provider (your filesystem) -in your {+qe+} enabled application. +in your {+qe+} or {+csfle-abbrev+} enabled application. .. include:: /includes/queryable-encryption/qe-warning-local-keys.rst To view a tutorial demonstrating how to use a Local Key Provider -for testing {+qe+}, see -:ref:`qe-quick-start`. +for testing {+qe+}, see the :ref:`{+qe+} Quick Start ` +or :ref:`{+csfle-abbrev+} Quick Start `. .. include:: /includes/queryable-encryption/reference/kms-providers/local.rst diff --git a/source/core/queryable-encryption/fundamentals/manage-collections.txt b/source/core/queryable-encryption/fundamentals/manage-collections.txt index b11fc52cbe2..ca8c0c699b0 100644 --- a/source/core/queryable-encryption/fundamentals/manage-collections.txt +++ b/source/core/queryable-encryption/fundamentals/manage-collections.txt @@ -1,10 +1,18 @@ -.. _qe-fundamentals-collection-management: +.. facet:: + :name: genre + :values: reference + +.. meta:: + :keywords: queryable encryption, encrypted collections, metadata collections -=============================== -Encrypted Collection Management -=============================== +.. meta:: + :keywords: code example, node.js, shell + +.. _qe-fundamentals-collection-management: -.. default-domain:: mongodb +===================== +Encrypted Collections +===================== .. contents:: On this page :local: @@ -12,14 +20,16 @@ Encrypted Collection Management :depth: 2 :class: singlecol -It is important that you understand the performance and storage costs of field level encryption. Each encrypted field: +Field level encryption comes with performance and storage costs. Every +field you choose to encrypt: - Adds writes to insert and update operations. -- Requires additional storage, because MongoDB maintains an encrypted field index. 
+- Requires additional storage, because MongoDB maintains an index of + encrypted fields to improve query performance. This section lists the writes per operation and explains how to compact encrypted collection indexes so that you can minimize write and storage -costs. +costs. If you want to encrypt fields and configure them for querying, see :ref:``. Overview -------- @@ -146,7 +156,7 @@ collections and reduces their size. Run compaction when the size of ``ECOC`` exceeds 1 GB. You can check the size of your collections using :binary:`~bin.mongosh` -and issuing the :method:`db.collection.totalSize()` command. +and running the :method:`db.collection.totalSize()` command. .. example:: diff --git a/source/core/queryable-encryption/fundamentals/manage-keys.txt b/source/core/queryable-encryption/fundamentals/manage-keys.txt index 7c6a53a264b..6d7248cb8fa 100644 --- a/source/core/queryable-encryption/fundamentals/manage-keys.txt +++ b/source/core/queryable-encryption/fundamentals/manage-keys.txt @@ -48,12 +48,8 @@ To view a list of supported {+kms-abbr+} providers, see the :ref:`qe-fundamentals-kms-providers` page. For tutorials detailing how to set up a {+qe+} enabled -application with each of the supported {+kms-abbr+} providers, see the -following pages: - -- :ref:`qe-tutorial-automatic-aws` -- :ref:`qe-tutorial-automatic-azure` -- :ref:`qe-tutorial-automatic-gcp` +application with each of the supported {+kms-abbr+} providers, see +:ref:`Overview: Enable Queryable Encryption `. Procedure --------- diff --git a/source/core/queryable-encryption/fundamentals/manual-encryption.txt b/source/core/queryable-encryption/fundamentals/manual-encryption.txt index 6cdff72fc39..91e795a4a96 100644 --- a/source/core/queryable-encryption/fundamentals/manual-encryption.txt +++ b/source/core/queryable-encryption/fundamentals/manual-encryption.txt @@ -15,12 +15,7 @@ Overview -------- -Learn how to use the {+manual-enc+} mechanism of {+qe+}. {+manual-enc-first+} -lets you specify the key material used to encrypt fields. It provides -fine-grained control over security, at the cost of increased complexity -when configuring collections and writing code for MongoDB Drivers. - -.. include:: /includes/fact-manual-enc-definition.rst +.. include:: /includes/queryable-encryption/qe-csfle-manual-enc-overview.rst {+manual-enc-first+} is available in the following MongoDB products: diff --git a/source/core/queryable-encryption/install-library.txt b/source/core/queryable-encryption/install-library.txt new file mode 100644 index 00000000000..7de358d6780 --- /dev/null +++ b/source/core/queryable-encryption/install-library.txt @@ -0,0 +1,168 @@ +.. _qe-csfle-install-library: + +======================================================================== +Install and Configure a {+qe+} Library +======================================================================== + +.. contents:: On this page + :local: + :backlinks: none + :depth: 1 + :class: singlecol + +MongoDB uses one of two libraries for translating queries into +encrypted queries, and for encrypting and decrypting data. The latest is +the {+shared-library+}. + + +Before You Start +---------------- + +Follow the preceding tasks to :ref:`install a {+qe+} compatible driver +and dependencies ` before continuing. + +Choose a Library +---------------- + +.. _qe-reference-shared-library: + +{+shared-library+} +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The {+shared-library+} is a **dynamic library** that enables your client +application to perform automatic encryption. 
A dynamic library is a set +of functionality accessed by an application at runtime rather than +compile time. The {+shared-library+} performs the following tasks: + +- Reads the :ref:`{+enc-schema+} ` to determine which fields to encrypt or decrypt +- Prevents your application from executing unsupported operations on + encrypted fields + +The {+shared-library+} *does not* do any of the following: + +- Perform data encryption or decryption +- Access the encryption key material +- Listen for data over the network + +The {+shared-library+} is a preferred alternative to ``mongocryptd`` and doesn't require you to start another process to perform automatic encryption. + +.. _qe-reference-mongocryptd: +.. _qe-mongocryptd: + +mongocryptd +~~~~~~~~~~~ + +.. important:: Use the {+shared-library+} + + If you are starting a new project, use the {+shared-library+}. The + {+shared-library+} replaces ``mongocryptd`` and does not require + you to start a new process. + +``mongocryptd`` is installed with `MongoDB Enterprise +Server <{+enterprise-download-link+}>`__. + +When you create a MongoDB client with {+in-use-encryption+}, the +``mongocryptd`` process starts automatically by default. + +.. include:: /includes/queryable-encryption/qe-facts-mongocryptd-process.rst + +Procedure +--------- + +.. tabs:: + + .. tab:: {+shared-library+} + :tabid: {+shared-library+} + + .. _qe-csfle-shared-library-download: + + To download the {+shared-library+} from the `MongoDB Download + Center `__, + select the version and platform, then the library. + + .. tip:: + + To view an expanded list of available releases and packages, see + `MongoDB Enterprise Downloads `__. + + .. procedure:: + :style: normal + + .. step:: + + In the :guilabel:`Version` dropdown, select ``{+shared-library-version-drop-down+}``. + + .. step:: + + In the :guilabel:`Platform` dropdown, select your platform. + + .. step:: + + In the :guilabel:`Package` dropdown, select ``crypt_shared``. + + .. step:: + + Click :guilabel:`Download`. + + .. _qe-csfle-configure-shared-library: + + To configure how your driver searches for the {+shared-library+}, + use the following parameters: + + .. list-table:: + :header-rows: 1 + :stub-columns: 1 + :widths: 30 70 + + * - Name + - Description + + * - cryptSharedLibPath + - Specifies the absolute path to the {+shared-library+} + package, {+shared-library-package+}. + + *Default*: ``undefined`` + + * - cryptSharedLibRequired + - Specifies if the driver must use the {+shared-library+}. If + ``true``, the driver returns an error if the + {+shared-library+} is unavailable. If ``false``, the driver + performs the following sequence of actions: + + #. Attempts to use the {+shared-library+}. + #. If the {+shared-library+} is unavailable, the driver + attempts to start and connect to ``mongocryptd``. + + *Default*: ``false`` + + To view an example demonstrating how to configure these + parameters, see the :ref:`Quick Start `. + + .. tab:: mongocryptd + :tabid: mongocryptd + + .. procedure:: + :style: normal + + .. step:: + + Install ``mongocryptd``: + + .. include:: /includes/queryable-encryption/qe-csfle-install-mongocryptd.rst + + .. step:: + + Configure the library: + + .. include:: /includes/queryable-encryption/qe-csfle-configure-mongocryptd.rst + + Examples + ~~~~~~~~ + + .. include:: /includes/queryable-encryption/qe-csfle-mongocryptd-examples.rst + +Next Steps +---------- + +After installing a library, :ref:`create a {+cmk-long+} ` +in your {+kms-long+} of choice. 
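For reference, the following hypothetical Node.js sketch shows how the ``cryptSharedLibPath`` and ``cryptSharedLibRequired`` parameters described above might be passed through the driver's ``autoEncryption.extraOptions``. The path, namespace, and ``kmsProviders`` value are placeholders.

.. code-block:: javascript

   const { MongoClient } = require("mongodb");

   const client = new MongoClient(uri, {
     autoEncryption: {
       keyVaultNamespace: "encryption.__keyVault",
       kmsProviders: kmsProviders, // assumed to be defined elsewhere
       extraOptions: {
         // Placeholder path to the downloaded crypt_shared package.
         cryptSharedLibPath: "/path/to/mongo_crypt_v1.so",
         // Return an error instead of falling back to mongocryptd.
         cryptSharedLibRequired: true,
       },
     },
   });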
diff --git a/source/core/queryable-encryption/install.txt b/source/core/queryable-encryption/install.txt index 6d175857d24..c847c0d5f18 100644 --- a/source/core/queryable-encryption/install.txt +++ b/source/core/queryable-encryption/install.txt @@ -1,11 +1,9 @@ .. _qe-install: .. _qe-implementation: -========================= -Installation Requirements -========================= - -.. default-domain:: mongodb +======================================================================== +Install a {+qe+} Compatible Driver +======================================================================== .. contents:: On this page :local: @@ -16,68 +14,34 @@ Installation Requirements Overview -------- -Learn about the applications and libraries you must install to use -{+qe+}. - -What You Need -------------- - -Before you can use {+qe+}, set up the following items -in your development environment: - -- (Optional) Download the :ref:`{+shared-library+} `. - The {+shared-library+} replaces :ref:`mongocryptd ` and - does not require spawning a new process. - -- Install a :ref:`MongoDB Driver Compatible with {+qe+} `. -- Start an - :atlas:`Atlas Cluster ` - or a - :manual:`MongoDB Enterprise instance - ` - - .. warning:: - - You can use {+qe+} only with MongoDB 7.0 and later, which - may not yet be available in MongoDB Atlas. - -- Install specific driver dependencies. To see the list of - dependencies for your driver, select the tab corresponding to the language you - would like to use to complete this guide: +To enable {+qe+} in your development environment, you must first install +a compatible driver and dependencies. .. _qe-quick-start-driver-dependencies: -.. tabs-drivers:: +Procedure +--------- - .. tab:: - :tabid: java-sync +.. procedure:: - .. include:: /includes/queryable-encryption/set-up/java.rst + .. step:: Install a {+qe+} compatible driver - .. tab:: - :tabid: nodejs + Install a :ref:`MongoDB Driver Compatible with {+qe+} `. + + .. step:: Install driver dependencies - .. include:: /includes/queryable-encryption/set-up/node.rst + See the :ref:`Drivers compatibility table ` for a list of + dependencies for your driver. - .. tab:: - :tabid: python + .. step:: Start a MongoDB Atlas Cluster or Enterprise instance. - .. include:: /includes/queryable-encryption/set-up/python.rst + Start an :atlas:`Atlas Cluster ` or a + :manual:`MongoDB Enterprise instance + `. - .. tab:: - :tabid: csharp - - .. include:: /includes/queryable-encryption/set-up/csharp.rst - - .. tab:: - :tabid: go - - .. include:: /includes/queryable-encryption/set-up/go.rst - - -Learn More +Next Steps ---------- -To start using {+qe+}, see :ref:`qe-quick-start`. - -To learn how to use {+qe+} with a remote {+kms-long+}, see :ref:`qe-tutorial-automatic-encryption`. +Once you have installed a compatible driver and dependencies, +:ref:`install and configure a {+qe+} library ` +to continue setting up your deployment and development environment. diff --git a/source/core/queryable-encryption/overview-enable-qe.txt b/source/core/queryable-encryption/overview-enable-qe.txt new file mode 100644 index 00000000000..62ba99970eb --- /dev/null +++ b/source/core/queryable-encryption/overview-enable-qe.txt @@ -0,0 +1,61 @@ +.. facet:: + :name: genre + :values: reference + +.. meta:: + :keywords: queryable encryption, in-use encryption + +.. _qe-overview-enable-qe: + +===================================== +Overview: Enable Queryable Encryption +===================================== + +.. 
contents:: On this page + :local: + :backlinks: none + :depth: 2 + :class: singlecol + +This page summarizes the tasks required to set up your MongoDB +deployment and your development environment for {+qe+}. + +Enable {+qe+} +--------------------------- + +.. procedure:: + :style: normal + + .. step:: Install a compatible MongoDB driver and dependencies + + :ref:`Install a {+qe+} compatible driver ` + + :ref:`Install libmongocrypt ` + + .. step:: Install and configure a {+qe+} library + + :ref:`Install and configure a {+qe+} library ` + + .. step:: Create a {+cmk-long+} + + :ref:`Create a {+cmk-long+} ` + + .. step:: Create your {+qe+} enabled application + + :ref:`Create a {+qe+} enabled application ` + +Use {+qe+} +--------------------------- + +After you install a {+qe+} driver and libraries, create a {+cmk-long+}, and +create your application, you can start encrypting and querying data. See +:ref:`Overview: Use {+qe+} ` for instructions. + +.. toctree:: + :titlesonly: + + /core/queryable-encryption/install + /core/queryable-encryption/reference/libmongocrypt + /core/queryable-encryption/install-library + /core/queryable-encryption/qe-create-cmk + /core/queryable-encryption/qe-create-application \ No newline at end of file diff --git a/source/core/queryable-encryption/overview-use-qe.txt b/source/core/queryable-encryption/overview-use-qe.txt new file mode 100644 index 00000000000..792ab548d41 --- /dev/null +++ b/source/core/queryable-encryption/overview-use-qe.txt @@ -0,0 +1,49 @@ +.. facet:: + :name: genre + :values: reference + +.. meta:: + :keywords: queryable encryption, in-use encryption + +.. _qe-overview-use-qe: + +================================== +Overview: Use Queryable Encryption +================================== + +.. contents:: On this page + :local: + :backlinks: none + :depth: 2 + :class: singlecol + +This page summarizes the tasks required to create a {+qe+}-enabled +collection, insert a document with encrypted fields, and query encrypted +data. + +Enable {+qe+} +--------------------------- + +Before encrypting and querying data, you must install a {+qe+}-enabled driver +and libraries, create a {+cmk-long+}, and create your application. See +:ref:`Overview: Enable {+qe+} ` for instructions. + +Use {+qe+} +--------------------------- + +.. procedure:: + :style: normal + + .. step:: Create an encrypted collection and insert a document with encrypted fields + + :ref:`Create an encrypted collection and insert documents ` + + .. step:: Query a document with encrypted fields + + :ref:`Query a document with encrypted fields ` + +.. toctree:: + :titlesonly: + + /core/queryable-encryption/qe-create-encrypted-collection + /core/queryable-encryption/qe-retrieve-encrypted-document \ No newline at end of file diff --git a/source/core/queryable-encryption/qe-create-application.txt b/source/core/queryable-encryption/qe-create-application.txt new file mode 100644 index 00000000000..53afff0906f --- /dev/null +++ b/source/core/queryable-encryption/qe-create-application.txt @@ -0,0 +1,960 @@ +.. _qe-create-application: + +======================================================================== +Create your {+qe+} Enabled Application +======================================================================== + +.. contents:: On this page + :local: + :backlinks: none + :depth: 1 + :class: singlecol + +Overview +-------- + +This guide shows you how to build an application that implements {+qe+} +to automatically encrypt and decrypt document fields. 
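For orientation, the following minimal sketch shows the kind of operation the finished application performs. The names are hypothetical, and ``encryptedClient`` is assumed to be a client configured for automatic {+qe+} as described in the preceding overview.

.. code-block:: javascript

   // Insert a document into an encrypted collection. The driver encrypts
   // the fields defined in the encryption schema before sending the document.
   const patients = encryptedClient.db("medicalRecords").collection("patients");

   await patients.insertOne({
     firstName: "Jon",
     patientRecord: {
       ssn: "987-65-4320",
       billing: { type: "Visa", number: "4111111111111111" },
     },
   });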
+ +After you complete the steps in this guide, you should have a working +client application that is ready for inserting documents with fields encrypted +with your {+cmk-long+}. + +Before You Start +---------------- + +Ensure you have completed the following prerequisite tasks before creating your application: + +#. :ref:`Install a {+qe+} compatible driver and dependencies ` + +#. :ref:`Install and configure a {+qe+} library ` + +#. :ref:`Create a {+cmk-long+} ` + +.. see:: Full Application + + To see the complete code for this sample application, select the tab + corresponding to your programming language and follow the provided + link. Each sample application repository includes a ``README.md`` + file that you can use to learn how to set up your environment and run + the application. + + .. tabs:: + + .. tab:: mongosh + :tabid: shell + + `Complete mongosh Application <{+sample-app-url-qe+}/mongosh/>`__ + + .. tab:: Node.js + :tabid: nodejs + + `Complete Node.js Application <{+sample-app-url-qe+}/node/>`__ + + .. tab:: Python + :tabid: python + + `Complete Python Application <{+sample-app-url-qe+}/python/>`__ + + .. tab:: Java + :tabid: java-sync + + `Complete Java Application <{+sample-app-url-qe+}/java/>`__ + + .. tab:: Go + :tabid: go + + `Complete Go Application <{+sample-app-url-qe+}/go/>`__ + + .. tab:: C# + :tabid: csharp + + `Complete C# Application <{+sample-app-url-qe+}/csharp/>`__ + +.. tabs-selector:: drivers + +Procedure +--------- + +Select the tab for your key provider below. + +.. tabs:: + + + .. tab:: {+aws-long+} + :tabid: create-app-aws + + .. procedure:: + + .. step:: Assign application variables + + .. include:: /includes/queryable-encryption/tutorials/assign-app-variables.rst + + .. step:: Add your KMS credentials + + Create a variable containing your KMS credentials with the + following structure. Use the Access Key ID and Secret Access + Key you used in step 2.2 when you :ref:`created an AWS IAM user `. + + .. tabs-drivers:: + + .. tab:: + :tabid: shell + + .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-helpers.js + :start-after: start-aws-kms-credentials + :end-before: end-aws-kms-credentials + :language: javascript + :dedent: + + .. tab:: + :tabid: nodejs + + .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-helpers.js + :start-after: start-aws-kms-credentials + :end-before: end-aws-kms-credentials + :language: javascript + :dedent: + + .. tab:: + :tabid: python + + .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_helpers.py + :start-after: start-aws-kms-credentials + :end-before: end-aws-kms-credentials + :language: python + :dedent: + + .. tab:: + :tabid: java-sync + + .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/util/QueryableEncryptionHelpers.java + :start-after: start-aws-kms-credentials + :end-before: end-aws-kms-credentials + :language: java + :dedent: + + .. tab:: + :tabid: go + + .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_helpers.go + :start-after: start-aws-kms-credentials + :end-before: end-aws-kms-credentials + :language: go + :dedent: + + .. tab:: + :tabid: csharp + + .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionHelpers.cs + :start-after: start-aws-kms-credentials + :end-before: end-aws-kms-credentials + :language: csharp + :dedent: + + .. include:: /includes/queryable-encryption/tutorials/automatic/aws/role-authentication.rst + + .. 
step:: Add your CMK credentials + + Create a variable containing your {+cmk-long+} credentials + with the following structure. Use the {+aws-arn-abbr+} and + Region you recorded in step 1.3 when you :ref:`created a CMK `. + + .. tabs-drivers:: + + .. tab:: + :tabid: shell + + .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-helpers.js + :start-after: start-aws-cmk-credentials + :end-before: end-aws-cmk-credentials + :language: javascript + :dedent: + + .. tab:: + :tabid: nodejs + + .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-helpers.js + :start-after: start-aws-cmk-credentials + :end-before: end-aws-cmk-credentials + :language: javascript + :dedent: + + .. tab:: + :tabid: python + + .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_helpers.py + :start-after: start-aws-cmk-credentials + :end-before: end-aws-cmk-credentials + :language: python + :dedent: + + .. tab:: + :tabid: java-sync + + .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/util/QueryableEncryptionHelpers.java + :start-after: start-aws-cmk-credentials + :end-before: end-aws-cmk-credentials + :language: java + :dedent: + + .. tab:: + :tabid: go + + .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_helpers.go + :start-after: start-aws-cmk-credentials + :end-before: end-aws-cmk-credentials + :language: go + :dedent: + + .. tab:: + :tabid: csharp + + .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionHelpers.cs + :start-after: start-aws-cmk-credentials + :end-before: end-aws-cmk-credentials + :language: csharp + :dedent: + + .. step:: Set automatic encryption options + + .. include:: /includes/queryable-encryption/shared-lib-learn-more.rst + + .. tabs-drivers:: + + .. tab:: + :tabid: shell + + Create an ``autoEncryptionOptions`` object with the following + options: + + - The namespace of your {+key-vault-long+} + - The ``kmsProviderCredentials`` object, which + contains your AWS KMS credentials + + .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-helpers.js + :start-after: start-auto-encryption-options + :end-before: end-auto-encryption-options + :language: javascript + :dedent: + + .. tab:: + :tabid: nodejs + + Create an ``autoEncryptionOptions`` object with the following + options: + + - The namespace of your {+key-vault-long+} + - The ``kmsProviders`` object, which contains your AWS KMS credentials + - The ``sharedLibraryPathOptions`` object, which + contains the path to your {+shared-library+} + + .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-helpers.js + :start-after: start-auto-encryption-options + :end-before: end-auto-encryption-options + :emphasize-lines: 5-9 + :language: javascript + :dedent: + + .. tab:: + :tabid: python + + Create an ``AutoEncryptionOpts`` object with the following + options: + + - The ``kms_provider_credentials`` object, which + contains your AWS KMS credentials + - The namespace of your {+key-vault-long+} + - The path to your {+shared-library+} + + .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_helpers.py + :start-after: start-auto-encryption-options + :end-before: end-auto-encryption-options + :language: python + :dedent: + + .. 
tab:: + :tabid: java-sync + + Create an ``AutoEncryptionSettings`` object with the following + options: + + - The namespace of your {+key-vault-long+} + - The ``kmsProviderCredentials`` object, which + contains your AWS KMS credentials + - The ``extraOptions`` object, which contains the path + to your {+shared-library+} + + .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/util/QueryableEncryptionHelpers.java + :start-after: start-auto-encryption-options + :end-before: end-auto-encryption-options + :emphasize-lines: 4-8 + :language: java + :dedent: + + .. tab:: + :tabid: go + + Create an ``AutoEncryption`` object with the following + options: + + - The namespace of your {+key-vault-long+} + - The ``kmsProviderCredentials`` object, which + contains your AWS KMS credentials + - The ``cryptSharedLibraryPath`` object, which + contains the path to your {+shared-library+} + + .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_helpers.go + :start-after: start-auto-encryption-options + :end-before: end-auto-encryption-options + :emphasize-lines: 5-8 + :language: go + :dedent: + + .. tab:: + :tabid: csharp + + Create an ``AutoEncryptionOptions`` object with the following + options: + + - The namespace of your {+key-vault-long+} + - The ``kmsProviderCredentials`` object, which + contains your AWS KMS credentials + - The ``extraOptions`` object, which contains the path + to your {+shared-library+} + + .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionHelpers.cs + :start-after: start-auto-encryption-options + :end-before: end-auto-encryption-options + :emphasize-lines: 7-10 + :language: csharp + :dedent: + + + + + + + + + + + .. tab:: {+azure-kv+} + :tabid: create-app-azure + + .. procedure:: + + .. step:: Assign application variables + + .. include:: /includes/queryable-encryption/tutorials/assign-app-variables.rst + + .. _qe-tutorials-automatic-encryption-azure-kms-providers: + + .. step:: Add your KMS credentials + + Create a variable containing your KMS credentials with the + following structure. Use the {+azure-kv+} credentials you + recorded when you :ref:`registered your application with Azure `. + + .. tabs-drivers:: + + .. tab:: + :tabid: shell + + .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-helpers.js + :start-after: start-azure-kms-credentials + :end-before: end-azure-kms-credentials + :language: javascript + :dedent: + + .. tab:: + :tabid: nodejs + + .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-helpers.js + :start-after: start-azure-kms-credentials + :end-before: end-azure-kms-credentials + :language: javascript + :dedent: + + .. tab:: + :tabid: python + + .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_helpers.py + :start-after: start-azure-kms-credentials + :end-before: end-azure-kms-credentials + :language: python + :dedent: + + .. tab:: + :tabid: java-sync + + .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/util/QueryableEncryptionHelpers.java + :start-after: start-azure-kms-credentials + :end-before: end-azure-kms-credentials + :language: java + :dedent: + + .. tab:: + :tabid: go + + .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_helpers.go + :start-after: start-azure-kms-credentials + :end-before: end-azure-kms-credentials + :language: go + :dedent: + + .. tab:: + :tabid: csharp + + .. 
literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionHelpers.cs + :start-after: start-azure-kms-credentials + :end-before: end-azure-kms-credentials + :language: csharp + :dedent: + + + .. step:: Add your CMK credentials + + Create a variable containing your {+cmk-long+} credentials + with the following structure. Use the CMK details you + recorded when you :ref:`created a CMK `. + + .. tabs-drivers:: + + .. tab:: + :tabid: shell + + .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-helpers.js + :start-after: start-azure-cmk-credentials + :end-before: end-azure-cmk-credentials + :language: javascript + :dedent: + + .. tab:: + :tabid: nodejs + + .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-helpers.js + :start-after: start-azure-cmk-credentials + :end-before: end-azure-cmk-credentials + :language: javascript + :dedent: + + .. tab:: + :tabid: python + + .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_helpers.py + :start-after: start-azure-cmk-credentials + :end-before: end-azure-cmk-credentials + :language: python + :dedent: + + .. tab:: + :tabid: java-sync + + .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/util/QueryableEncryptionHelpers.java + :start-after: start-azure-cmk-credentials + :end-before: end-azure-cmk-credentials + :language: java + :dedent: + + .. tab:: + :tabid: go + + .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_helpers.go + :start-after: start-azure-cmk-credentials + :end-before: end-azure-cmk-credentials + :language: go + :dedent: + + .. tab:: + :tabid: csharp + + .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionHelpers.cs + :start-after: start-azure-cmk-credentials + :end-before: end-azure-cmk-credentials + :language: csharp + :dedent: + + + .. step:: Create an encryption client + + To create a client for encrypting and decrypting data in + encrypted collections, instantiate a new ``MongoClient`` + using your connection URI and automatic encryption + options. + + .. tabs-drivers:: + + .. tab:: + :tabid: shell + + .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js + :start-after: start-create-client + :end-before: end-create-client + :language: javascript + :dedent: + + .. tab:: + :tabid: nodejs + + .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-tutorial.js + :start-after: start-create-client + :end-before: end-create-client + :language: javascript + :dedent: + + .. tab:: + :tabid: python + + .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_tutorial.py + :start-after: start-create-client + :end-before: end-create-client + :language: python + :dedent: + + .. tab:: + :tabid: java-sync + + .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java + :start-after: start-create-client + :end-before: end-create-client + :language: java + :dedent: + + .. tab:: + :tabid: go + + .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_tutorial.go + :start-after: start-create-client + :end-before: end-create-client + :language: go + :dedent: + + .. tab:: + :tabid: csharp + + .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionTutorial.cs + :start-after: start-create-client + :end-before: end-create-client + :language: csharp + :dedent: + + + + + + + + + + + .. tab:: {+gcp-kms-abbr+} + :tabid: create-app-gcp + + .. procedure:: + + .. 
step:: Assign application variables + + .. include:: /includes/queryable-encryption/tutorials/assign-app-variables.rst + + + .. step:: Add your KMS credentials + + Create a variable containing your KMS credentials with the + following structure. + + .. tabs-drivers:: + + .. tab:: + :tabid: shell + + .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-helpers.js + :start-after: start-gcp-kms-credentials + :end-before: end-gcp-kms-credentials + :language: javascript + :dedent: + + .. tab:: + :tabid: nodejs + + .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-helpers.js + :start-after: start-gcp-kms-credentials + :end-before: end-gcp-kms-credentials + :language: javascript + :dedent: + + .. tab:: + :tabid: python + + .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_helpers.py + :start-after: start-gcp-kms-credentials + :end-before: end-gcp-kms-credentials + :language: python + :dedent: + + .. tab:: + :tabid: java-sync + + .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/util/QueryableEncryptionHelpers.java + :start-after: start-gcp-kms-credentials + :end-before: end-gcp-kms-credentials + :language: java + :dedent: + + .. tab:: + :tabid: go + + .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_helpers.go + :start-after: start-gcp-kms-credentials + :end-before: end-gcp-kms-credentials + :language: go + :dedent: + + .. tab:: + :tabid: csharp + + .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionHelpers.cs + :start-after: start-gcp-kms-credentials + :end-before: end-gcp-kms-credentials + :language: csharp + :dedent: + + + .. step:: Add your CMK credentials + + Create a variable containing your {+cmk-long+} credentials + with the following structure. Use the credentials you recorded + when you :ref:`created a CMK `. + + .. tabs-drivers:: + + .. tab:: + :tabid: shell + + .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-helpers.js + :start-after: start-gcp-cmk-credentials + :end-before: end-gcp-cmk-credentials + :language: javascript + :dedent: + + .. tab:: + :tabid: nodejs + + .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-helpers.js + :start-after: start-gcp-cmk-credentials + :end-before: end-gcp-cmk-credentials + :language: javascript + :dedent: + + .. tab:: + :tabid: python + + .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_helpers.py + :start-after: start-gcp-cmk-credentials + :end-before: end-gcp-cmk-credentials + :language: python + :dedent: + + .. tab:: + :tabid: java-sync + + .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/util/QueryableEncryptionHelpers.java + :start-after: start-gcp-cmk-credentials + :end-before: end-gcp-cmk-credentials + :language: java + :dedent: + + .. tab:: + :tabid: go + + .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_helpers.go + :start-after: start-gcp-cmk-credentials + :end-before: end-gcp-cmk-credentials + :language: go + :dedent: + + .. tab:: + :tabid: csharp + + .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionHelpers.cs + :start-after: start-gcp-cmk-credentials + :end-before: end-gcp-cmk-credentials + :language: csharp + :dedent: + + + .. step:: Create an encryption client + + To create a client for encrypting and decrypting data in + encrypted collections, instantiate a new ``MongoClient`` + using your connection URI and automatic encryption + options. + + .. 
tabs-drivers:: + + .. tab:: + :tabid: shell + + .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js + :start-after: start-create-client + :end-before: end-create-client + :language: javascript + :dedent: + + .. tab:: + :tabid: nodejs + + .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-tutorial.js + :start-after: start-create-client + :end-before: end-create-client + :language: javascript + :dedent: + + .. tab:: + :tabid: python + + .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_tutorial.py + :start-after: start-create-client + :end-before: end-create-client + :language: python + :dedent: + + .. tab:: + :tabid: java-sync + + .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java + :start-after: start-create-client + :end-before: end-create-client + :language: java + :dedent: + + .. tab:: + :tabid: go + + .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_tutorial.go + :start-after: start-create-client + :end-before: end-create-client + :language: go + :dedent: + + .. tab:: + :tabid: csharp + + .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionTutorial.cs + :start-after: start-create-client + :end-before: end-create-client + :language: csharp + :dedent: + + + + + + + + + + + .. tab:: {+kmip-kms-no-hover+} + :tabid: create-app-kmip + + .. procedure:: + + .. step:: Assign application variables + + .. include:: /includes/queryable-encryption/tutorials/assign-app-variables.rst + + .. step:: Add your KMS credentials + + Create a variable containing the endpoint of your + {+kmip-kms-no-hover+} with the following structure: + + .. tabs-drivers:: + + .. tab:: + :tabid: shell + + .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-helpers.js + :start-after: start-kmip-kms-credentials + :end-before: end-kmip-kms-credentials + :language: javascript + :dedent: + + .. tab:: + :tabid: nodejs + + .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-helpers.js + :start-after: start-kmip-kms-credentials + :end-before: end-kmip-kms-credentials + :language: javascript + :dedent: + + .. tab:: + :tabid: python + + .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_helpers.py + :start-after: start-kmip-kms-credentials + :end-before: end-kmip-kms-credentials + :language: python + :dedent: + + .. tab:: + :tabid: java-sync + + .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/util/QueryableEncryptionHelpers.java + :start-after: start-kmip-kms-credentials + :end-before: end-kmip-kms-credentials + :language: java + :dedent: + + .. tab:: + :tabid: go + + .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_helpers.go + :start-after: start-kmip-kms-credentials + :end-before: end-kmip-kms-credentials + :language: go + :dedent: + + .. tab:: + :tabid: csharp + + .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionHelpers.cs + :start-after: start-kmip-kms-credentials + :end-before: end-kmip-kms-credentials + :language: csharp + :dedent: + + + .. step:: Add your CMK credentials + + Create an empty object as shown in the following code example. + This prompts your {+kmip-kms+} to generate a new {+cmk-long+}. + + .. tabs-drivers:: + + .. tab:: + :tabid: shell + + .. 
literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-helpers.js + :start-after: start-kmip-local-cmk-credentials + :end-before: end-kmip-local-cmk-credentials + :language: javascript + :dedent: + + .. tab:: + :tabid: nodejs + + .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-helpers.js + :start-after: start-kmip-local-cmk-credentials + :end-before: end-kmip-local-cmk-credentials + :language: javascript + :dedent: + + .. tab:: + :tabid: python + + .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_helpers.py + :start-after: start-kmip-local-cmk-credentials + :end-before: end-kmip-local-cmk-credentials + :language: python + :dedent: + + .. tab:: + :tabid: java-sync + + .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/util/QueryableEncryptionHelpers.java + :start-after: start-kmip-local-cmk-credentials + :end-before: end-kmip-local-cmk-credentials + :language: java + :dedent: + + .. tab:: + :tabid: go + + .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_helpers.go + :start-after: start-kmip-local-cmk-credentials + :end-before: end-kmip-local-cmk-credentials + :language: go + :dedent: + + .. tab:: + :tabid: csharp + + .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionHelpers.cs + :start-after: start-kmip-local-cmk-credentials + :end-before: end-kmip-local-cmk-credentials + :language: csharp + :dedent: + + + .. step:: Create an encryption client + + To create a client for encrypting and decrypting data in + encrypted collections, instantiate a new ``MongoClient`` + using your connection URI and automatic encryption + options. + + .. tabs-drivers:: + + .. tab:: + :tabid: shell + + .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js + :start-after: start-create-client + :end-before: end-create-client + :language: javascript + :dedent: + + .. tab:: + :tabid: nodejs + + .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-tutorial.js + :start-after: start-create-client + :end-before: end-create-client + :language: javascript + :dedent: + + .. tab:: + :tabid: python + + .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_tutorial.py + :start-after: start-create-client + :end-before: end-create-client + :language: python + :dedent: + + .. tab:: + :tabid: java-sync + + .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java + :start-after: start-create-client + :end-before: end-create-client + :language: java + :dedent: + + .. tab:: + :tabid: go + + .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_tutorial.go + :start-after: start-create-client + :end-before: end-create-client + :language: go + :dedent: + + .. tab:: + :tabid: csharp + + .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionTutorial.cs + :start-after: start-create-client + :end-before: end-create-client + :language: csharp + :dedent: + + +Next Steps +---------- + +After installing a driver and dependencies, creating a {+cmk-long+}, and +creating your application, see :ref:`Overview: Use {+qe+} +` to encrypt and query data. \ No newline at end of file diff --git a/source/core/queryable-encryption/qe-create-cmk.txt b/source/core/queryable-encryption/qe-create-cmk.txt new file mode 100644 index 00000000000..020fe36d9d4 --- /dev/null +++ b/source/core/queryable-encryption/qe-create-cmk.txt @@ -0,0 +1,110 @@ +.. 
_qe-create-cmk: + +============================ +Create a {+cmk-long+} +============================ + +.. contents:: On this page + :local: + :backlinks: none + :depth: 1 + :class: singlecol + +Overview +-------- + +In this guide, you will learn how to generate a {+cmk-long+} in your {+kms-long+} of choice. Generate a {+cmk-long+} before creating your {+qe+}-enabled application. + +.. tip:: Customer Master Keys + + To learn more about the {+cmk-long+}, see + :ref:`qe-reference-keys-key-vaults` + + +Before You Start +---------------- + +Complete the preceding tasks before continuing: + +#. :ref:`Install a {+qe+} compatible driver and dependencies ` + +#. :ref:`Install and configure a {+qe+} library ` + + +Procedure +--------- + +Select the tab for your key provider below. + +.. tabs:: + + .. tab:: {+aws-long+} + :tabid: cmk-aws + + .. procedure:: + + .. _qe-create-cmk-aws: + + .. step:: Create the {+cmk-long+} + + .. include:: /includes/queryable-encryption/tutorials/automatic/aws/cmk.rst + + .. _qe-create-aws-iam-user: + + .. step:: Create an AWS IAM User + + .. include:: /includes/queryable-encryption/tutorials/automatic/aws/user.rst + + .. tab:: {+azure-kv+} + :tabid: cmk-azure + + .. procedure:: + + .. _qe-register-cmk-azure: + + .. step:: Register your Application with Azure + + .. include:: /includes/queryable-encryption/tutorials/automatic/azure/register.rst + + .. _qe-create-cmk-azure: + + .. step:: Create the {+cmk-long+} + + .. include:: /includes/queryable-encryption/tutorials/automatic/azure/cmk.rst + + .. tab:: {+gcp-kms-abbr+} + :tabid: cmk-gcp + + .. procedure:: + + .. step:: Register a {+gcp-abbr+} Service Account + + .. include:: /includes/queryable-encryption/tutorials/automatic/gcp/register.rst + + .. _qe-create-cmk-gcp: + + .. step:: Create a {+gcp-abbr+} {+cmk-long+} + + .. include:: /includes/queryable-encryption/tutorials/automatic/gcp/cmk.rst + + .. tab:: {+kmip-kms-no-hover+} + :tabid: cmk-kmip + + .. procedure:: + + .. step:: Configure your {+kmip-kms-title+} + + .. include:: /includes/queryable-encryption/tutorials/automatic/kmip/configure.rst + + .. step:: Specify your Certificates + + .. _qe-kmip-tutorial-specify-your-certificates: + + .. include:: /includes/queryable-encryption/tutorials/automatic/kmip/certificates.rst + +Next Steps +---------- + +After installing drivers and dependencies and creating a {+cmk-long+}, +you can :ref:`create your {+qe+} enabled application `. + \ No newline at end of file diff --git a/source/core/queryable-encryption/qe-create-encrypted-collection.txt b/source/core/queryable-encryption/qe-create-encrypted-collection.txt new file mode 100644 index 00000000000..fe3893ced45 --- /dev/null +++ b/source/core/queryable-encryption/qe-create-encrypted-collection.txt @@ -0,0 +1,469 @@ +.. facet:: + :name: genre + :values: tutorial + +.. facet:: + :name: programming_language + :values: javascript/typescript + +.. meta:: + :keywords: queryable encryption, in-use encryption, code example, node.js + +.. _qe-create-encrypted-collection: + +=================================================== +Create an Encrypted Collection and Insert Documents +=================================================== + +.. contents:: On this page + :local: + :backlinks: none + :depth: 1 + :class: singlecol + +Overview +-------- + +This guide shows you how to create a {+qe+}-enabled collection and insert a +document with encrypted fields. 
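
At a high level, the procedure on this page amounts to three operations:
define an {+enc-schema+}, create the collection with the driver's encryption
helper, and insert through an encryption-enabled client. The following
condensed Node.js sketch is illustrative only: the ``clientEncryption`` and
``encryptedClient`` objects, the ``local`` KMS provider, and the field names
are assumptions for this sketch, not part of the tutorial's sample code.

.. code-block:: javascript

   // Assumed to exist from the earlier application setup steps:
   // const clientEncryption = new ClientEncryption(encryptedClient,
   //   { keyVaultNamespace, kmsProviders });

   // Encryption schema: which fields to encrypt and which are queryable
   const encryptedFieldsMap = {
     fields: [
       { path: "patientRecord.ssn", bsonType: "string", queries: { queryType: "equality" } },
       { path: "patientRecord.billing", bsonType: "object" },
     ],
   };

   // Creates the collection and generates a data encryption key
   // for each field listed above
   await clientEncryption.createEncryptedCollection(
     encryptedClient.db("medicalRecords"),
     "patients",
     {
       provider: "local", // assumption: a local KMS provider for illustration
       createCollectionOptions: { encryptedFields: encryptedFieldsMap },
     }
   );

   // Listed fields are encrypted client-side before the insert is sent
   await encryptedClient.db("medicalRecords").collection("patients").insertOne({
     patientName: "Jon Snow",
     patientRecord: { ssn: "921-12-1234", billing: { type: "visa" } },
   });

The steps that follow walk through each of these operations in detail for
every supported driver.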
+ +After you complete the steps in this guide, you should be able to create +an encrypted collection and insert a document with fields that are encrypted +with your {+cmk-long+}. + +Before You Start +---------------- + +:ref:`Create your {+qe+}-enabled application ` +before creating an encrypted collection. + +If you are using :ref:`{+manual-enc+} +`, you must also create a unique +{+dek-long+} for each encrypted field in advance. For more information, +see :ref:`qe-reference-keys-key-vaults`. + +.. tabs-selector:: drivers + +Procedure +--------- + +.. procedure:: + + .. step:: Specify Fields to Encrypt + + To encrypt a field, add it to the {+enc-schema+}. To enable + queries on a field, add the ``queries`` property. + + Create the {+enc-schema+} as follows. This code sample encrypts + both the ``ssn`` and ``billing`` fields, but only the ``ssn`` + field is queryable: + + .. tabs-drivers:: + + .. tab:: + :tabid: shell + + .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js + :start-after: start-encrypted-fields-map + :end-before: end-encrypted-fields-map + :language: javascript + :dedent: + + .. tab:: + :tabid: nodejs + + .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-tutorial.js + :start-after: start-encrypted-fields-map + :end-before: end-encrypted-fields-map + :language: javascript + :dedent: + + .. tab:: + :tabid: python + + .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_tutorial.py + :start-after: start-encrypted-fields-map + :end-before: end-encrypted-fields-map + :language: python + :dedent: + + .. tab:: + :tabid: java-sync + + .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java + :start-after: start-encrypted-fields-map + :end-before: end-encrypted-fields-map + :language: java + :dedent: + + .. tab:: + :tabid: go + + .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_tutorial.go + :start-after: start-encrypted-fields-map + :end-before: end-encrypted-fields-map + :language: go + :dedent: + + .. tab:: + :tabid: csharp + + .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionTutorial.cs + :start-after: start-encrypted-fields-map + :end-before: end-encrypted-fields-map + :language: csharp + :dedent: + + For an extended version of this step, see :ref:`Create an + {+enc-schema-title+} `. + + .. step:: Instantiate ``ClientEncryption`` to access the API for the encryption helper methods + + .. tabs-drivers:: + + .. tab:: + :tabid: shell + + .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js + :start-after: start-client-encryption + :end-before: end-client-encryption + :language: javascript + :dedent: + + .. tab:: + :tabid: nodejs + + .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-helpers.js + :start-after: start-client-encryption + :end-before: end-client-encryption + :language: javascript + :dedent: + + .. tab:: + :tabid: python + + .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_helpers.py + :start-after: start-client-encryption + :end-before: end-client-encryption + :language: python + :dedent: + + .. tab:: + :tabid: java-sync + + .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java + :start-after: start-client-encryption + :end-before: end-client-encryption + :language: java + :dedent: + + .. tab:: + :tabid: go + + .. 
literalinclude:: /includes/qe-tutorials/go/queryable_encryption_helpers.go + :start-after: start-client-encryption + :end-before: end-client-encryption + :language: go + :dedent: + + .. tab:: + :tabid: csharp + + .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionHelpers.cs + :start-after: start-client-encryption + :end-before: end-client-encryption + :language: csharp + :dedent: + + .. step:: Create the collection + + .. include:: /includes/queryable-encryption/qe-explicitly-create-collection.rst + + .. tabs-drivers:: + + .. tab:: + :tabid: shell + + Create your encrypted collection by using the encryption + helper method accessed through the ``ClientEncryption`` class. + This method automatically generates data encryption keys for your + encrypted fields and creates the encrypted collection: + + .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js + :start-after: start-create-encrypted-collection + :end-before: end-create-encrypted-collection + :language: javascript + :dedent: + + .. tab:: + :tabid: nodejs + + .. include:: /includes/tutorials/automatic/node-include-clientEncryption.rst + + Create your encrypted collection by using the encryption + helper method accessed through the ``ClientEncryption`` class. + This method automatically generates data encryption keys for your + encrypted fields and creates the encrypted collection: + + .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-helpers.js + :start-after: start-create-encrypted-collection + :end-before: end-create-encrypted-collection + :language: javascript + :dedent: + + .. tip:: Database vs. Database Name + + The method that creates the encrypted collection requires a reference + to a database *object* rather than the database *name*. + + .. tab:: + :tabid: python + + Create your encrypted collection by using the encryption + helper method accessed through the ``ClientEncryption`` class. + This method automatically generates data encryption keys for your + encrypted fields and creates the encrypted collection: + + .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_tutorial.py + :start-after: start-create-encrypted-collection + :end-before: end-create-encrypted-collection + :language: python + :dedent: + + .. tip:: Database vs. Database Name + + The method that creates the encrypted collection requires a reference + to a database *object* rather than the database *name*. You can + obtain this reference by using a method on your client object. + + .. tab:: + :tabid: java-sync + + Create your encrypted collection by using the encryption + helper method accessed through the ``ClientEncryption`` class. + This method automatically generates data encryption keys for your + encrypted fields and creates the encrypted collection: + + .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java + :start-after: start-create-encrypted-collection + :end-before: end-create-encrypted-collection + :language: java + :dedent: + + .. tip:: Database vs. Database Name + + The method that creates the encrypted collection requires a reference + to a database *object* rather than the database *name*. You can + obtain this reference by using a method on your client object. + + .. tab:: + :tabid: go + + The Golang version of this tutorial uses data models to + represent the document structure. Add the following + structs to your project to represent the data in your + collection: + + .. 
literalinclude:: /includes/qe-tutorials/go/models.go + :start-after: start-patient-document + :end-before: end-patient-document + :language: go + :dedent: + + .. literalinclude:: /includes/qe-tutorials/go/models.go + :start-after: start-patient-record + :end-before: end-patient-record + :language: go + :dedent: + + .. literalinclude:: /includes/qe-tutorials/go/models.go + :start-after: start-payment-info + :end-before: end-payment-info + :language: go + :dedent: + + After you've added these classes, create your encrypted + collection by using the encryption helper method accessed + through the ``ClientEncryption`` class. + This method automatically generates data encryption keys for your + encrypted fields and creates the encrypted collection: + + .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_tutorial.go + :start-after: start-create-encrypted-collection + :end-before: end-create-encrypted-collection + :language: go + :dedent: + + .. tip:: Database vs. Database Name + + The method that creates the encrypted collection requires a reference + to a database *object* rather than the database *name*. You can + obtain this reference by using a method on your client object. + + .. tab:: + :tabid: csharp + + The C# version of this tutorial uses separate classes as data models + to represent the document structure. + Add the following ``Patient``, ``PatientRecord``, and ``PatientBilling`` + classes to your project: + + .. literalinclude:: /includes/qe-tutorials/csharp/Patient.cs + :start-after: start-patient + :end-before: end-patient + :language: csharp + :dedent: + + .. literalinclude:: /includes/qe-tutorials/csharp/PatientRecord.cs + :start-after: start-patient-record + :end-before: end-patient-record + :language: csharp + :dedent: + + .. literalinclude:: /includes/qe-tutorials/csharp/PatientBilling.cs + :start-after: start-patient-billing + :end-before: end-patient-billing + :language: csharp + :dedent: + + After you've added these classes, create your encrypted collection by + using the encryption helper method accessed through the + ``ClientEncryption`` class. + This method automatically generates data encryption keys for your + encrypted fields and creates the encrypted collection: + + .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionTutorial.cs + :start-after: start-create-encrypted-collection + :end-before: end-create-encrypted-collection + :language: csharp + :dedent: + + .. tip:: Database vs. Database Name + + The method that creates the collection requires a reference + to a database *object* rather than the database *name*. + + For additional information, see :ref:`Enable {+qe+} when Creating a + Collection ` + + .. step:: Insert a Document with Encrypted Fields + + .. _qe-aws-insert: + .. _qe-azure-insert: + .. _qe-gcp-insert: + .. _qe-kmip-insert: + + .. tabs-drivers:: + + .. tab:: + :tabid: shell + + Create a sample document that describes a patient's personal information. + Use the encrypted client to insert it into the ``patients`` collection, + as shown in the following example: + + .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js + :start-after: start-insert-document + :end-before: end-insert-document + :emphasize-lines: 15 + :language: javascript + :dedent: + + .. tab:: + :tabid: nodejs + + Create a sample document that describes a patient's personal information. + Use the encrypted client to insert it into the ``patients`` collection, + as shown in the following example: + + .. 
literalinclude:: /includes/qe-tutorials/node/queryable-encryption-tutorial.js + :start-after: start-insert-document + :end-before: end-insert-document + :emphasize-lines: 17 + :language: javascript + :dedent: + + .. tab:: + :tabid: python + + Create a sample document that describes a patient's personal information. + Use the encrypted client to insert it into the ``patients`` collection, + as shown in the following example: + + .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_tutorial.py + :start-after: start-insert-document + :end-before: end-insert-document + :emphasize-lines: 15 + :language: python + :dedent: + + .. tab:: + :tabid: java-sync + + This tutorial uses POJOs as data models + to represent the document structure. To set up your application to + use POJOs, add the following code: + + .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java + :start-after: start-setup-application-pojo + :end-before: end-setup-application-pojo + :language: java + :dedent: + + To learn more about Java POJOs, see the `Plain Old Java Object + wikipedia article `__. + + This tutorial uses the following POJOs: + + - ``Patient`` + - ``PatientRecord`` + - ``PatientBilling`` + + You can view these classes in the `models package of the complete Java application + <{+sample-app-url-qe+}/java/src/main/java/com/mongodb/tutorials/qe/models>`__. + + Add these POJO classes to your application. Then, create an instance + of a ``Patient`` that describes a patient's personal information. Use + the encrypted client to insert it into the ``patients`` collection, + as shown in the following example: + + .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java + :start-after: start-insert-document + :end-before: end-insert-document + :emphasize-lines: 8 + :language: java + :dedent: + + .. tab:: + :tabid: go + + Create a sample document that describes a patient's personal information. + Use the encrypted client to insert it into the ``patients`` collection, + as shown in the following example: + + .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_tutorial.go + :start-after: start-insert-document + :end-before: end-insert-document + :emphasize-lines: 15 + :language: go + :dedent: + + .. tab:: + :tabid: csharp + + Create a sample document that describes a patient's personal information. + Use the encrypted client to insert it into the ``patients`` collection, + as shown in the following example: + + .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionTutorial.cs + :start-after: start-insert-document + :end-before: end-insert-document + :emphasize-lines: 19 + :language: csharp + :dedent: + +Next Steps +---------- + +After creating a {+qe+}-enabled collection, you can :ref:`query the +encrypted fields `. \ No newline at end of file diff --git a/source/core/queryable-encryption/qe-create-encryption-schema.txt b/source/core/queryable-encryption/qe-create-encryption-schema.txt new file mode 100644 index 00000000000..93965899cb8 --- /dev/null +++ b/source/core/queryable-encryption/qe-create-encryption-schema.txt @@ -0,0 +1,205 @@ +.. facet:: + :name: genre + :values: tutorial + +.. facet:: + :name: programming_language + :values: javascript/typescript + +.. meta:: + :keywords: queryable encryption, code example, node.js + +.. 
_qe-create-encryption-schema: + +======================================================================== +Create an {+enc-schema-title+} +======================================================================== + +.. contents:: On this page + :local: + :backlinks: none + :depth: 2 + :class: singlecol + +About this Task +--------------- + +To make encrypted fields queryable, create an :term:`{+enc-schema+}`. +This schema defines which fields are queryable, and which +query types are permitted. For more information, see :ref:`qe-fundamentals-encrypt-query`. + +.. _qe-specify-fields-for-encryption: + +Steps +----- + +.. procedure:: + :style: normal + + .. step:: + Create a JSON {+enc-schema+} document + + Include an ``encryptedFieldsObject`` with a nested ``fields`` array: + + .. code-block:: javascript + + const encryptedFieldsObject = { + fields: [] + } + + .. step:: + Specify encryption parameters for each field you want to encrypt: + + a. Add the ``path`` and ``bsonType`` strings to the ``fields`` array: + + .. code-block:: javascript + :emphasize-lines: 4, 5 + + const encryptedFieldsObject = { + fields: [ + { + path: "myDocumentField", + bsonType: "int" + } + ] + } + + .. important:: + + You can specify any field for encryption except the + ``_id`` field. + + #. If you are using :ref:`{+manual-enc+} + `, add a ``keyId`` field + with the {+dek-abbr+} + + .. code-block:: javascript + :emphasize-lines: 4 + + { + path: "myDocumentField", + bsonType: "int", + keyId: "" + } + + .. tip:: + + With Automatic Encryption, MongoDB creates encryption keys for + each field. You configure ``AutoEncryptionSettings`` on the + client, then use the ``createEncryptedCollection`` helper method + to create your collections. + + #. If you want a field to be queryable, add the ``queries`` property + and list allowed ``queryTypes`` + + .. _qe-enable-queries: + + {+qe+} currently supports ``equality`` queries only. + + .. code-block:: javascript + :emphasize-lines: 4 + + { + path: "myDocumentField", + bsonType: "int", + queries: { queryType: "equality" } + } + + #. (Optional) Include the ``contention`` property on queryable fields + to favor either find performance, or write and update performance + + .. code-block:: javascript + :emphasize-lines: 5 + + { + path: "myDocumentField", + bsonType: "int", + queries: { queryType: "equality", + contention: "0"} + } + + For more information, see :ref:`qe-contention`. + +Example +------- +This example shows how to create an {+enc-schema+} for hospital data. + +Consider the following document that contains personally identifiable information +(PII), credit card information, and sensitive medical information: + +.. code-block:: json + + { + "firstName": "Jon", + "lastName": "Snow", + "patientId": 12345187, + "address": "123 Cherry Ave", + "medications": [ + "Adderall", + "Lipitor" + ], + "patientInfo": { + "ssn": "921-12-1234", + "billing": { + "type": "visa", + "number": "1234-1234-1234-1234" + } + } + } + +To ensure the PII and sensitive medical information stays secure, this +{+enc-schema+} adds the relevant fields: + +.. code-block:: javascript + + const encryptedFieldsObject = { + fields: [ + { + path: "patientId", + bsonType: "int" + }, + { + path: "patientInfo.ssn", + bsonType: "string" + }, + { + path: "medications", + bsonType: "array" + }, + { + path: "patientInfo.billing", + bsonType: "object" + } + ] + } + +Adding the ``queries`` property makes the ``patientId`` and +``patientInfo.ssn`` fields queryable. This example enables equality queries: + +.. 
code-block:: javascript + :emphasize-lines: 6, 11 + + const encryptedFieldsObject = { + fields: [ + { + path: "patientId", + bsonType: "int", + queries: { queryType: "equality" } + }, + { + path: "patientInfo.ssn", + bsonType: "string", + queries: { queryType: "equality" } + }, + { + path: "medications", + bsonType: "array" + }, + { + path: "patientInfo.billing", + bsonType: "object" + }, + ] + } + +.. include:: /includes/queryable-encryption/example-qe-csfle-contention.rst \ No newline at end of file diff --git a/source/core/queryable-encryption/qe-retrieve-encrypted-document.txt b/source/core/queryable-encryption/qe-retrieve-encrypted-document.txt new file mode 100644 index 00000000000..fef7034112e --- /dev/null +++ b/source/core/queryable-encryption/qe-retrieve-encrypted-document.txt @@ -0,0 +1,116 @@ +.. facet:: + :name: genre + :values: tutorial + +.. facet:: + :name: programming_language + :values: javascript/typescript + +.. meta:: + :keywords: queryable encryption, in-use encryption, code example, node.js + +.. _qe-query-encrypted-document: + +======================================= +Query a Document with Encrypted Fields +======================================= + +.. contents:: On this page + :local: + :backlinks: none + :depth: 1 + :class: singlecol + +Overview +-------- + +This guide shows you how to use a {+qe+}-enabled application to retrieve +a document that has encrypted fields. + +After you complete the steps in this guide, you should be able to use +your application to query data in encrypted fields, and to decrypt those +fields as an authorized user. + +Before You Start +---------------- + +:ref:`Create an encrypted collection and insert documents +` before continuing. + +.. tabs-selector:: drivers + +Procedure +--------- + +.. procedure:: + + .. step:: Query an Encrypted Field + + The following code sample executes a find query on an encrypted field and + prints the decrypted data: + + .. tabs-drivers:: + + .. tab:: + :tabid: shell + + .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js + :start-after: start-find-document + :end-before: end-find-document + :language: javascript + :dedent: + + .. tab:: + :tabid: nodejs + + .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-tutorial.js + :start-after: start-find-document + :end-before: end-find-document + :language: javascript + :dedent: + + .. tab:: + :tabid: python + + .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_tutorial.py + :start-after: start-find-document + :end-before: end-find-document + :language: python + :dedent: + + .. tab:: + :tabid: java-sync + + .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java + :start-after: start-find-document + :end-before: end-find-document + :language: java + :dedent: + + .. tab:: + :tabid: go + + .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_tutorial.go + :start-after: start-find-document + :end-before: end-find-document + :language: go + :dedent: + + .. tab:: + :tabid: csharp + + .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionTutorial.cs + :start-after: start-find-document + :end-before: end-find-document + :language: csharp + :dedent: + + The output of the preceding code sample should look similar to the + following: + + .. literalinclude:: /includes/qe-tutorials/encrypted-document.json + :language: json + :copyable: false + :dedent: + + .. 
include:: /includes/queryable-encryption/safe-content-warning.rst \ No newline at end of file diff --git a/source/core/queryable-encryption/quick-start.txt b/source/core/queryable-encryption/quick-start.txt index 6cdcbe1fb80..e5e12a86af8 100644 --- a/source/core/queryable-encryption/quick-start.txt +++ b/source/core/queryable-encryption/quick-start.txt @@ -90,23 +90,7 @@ Procedure .. tab:: :tabid: shell - - **kmsProviderName** - The KMS you're using to store your {+cmk-long+}. - Set this variable to ``"local"`` for this tutorial. - - **uri** - Your MongoDB deployment connection URI. Set your connection - URI in the ``MONGODB_URI`` environment variable or replace the value - directly. - - **keyVaultDatabaseName** - The database in MongoDB where your data - encryption keys (DEKs) will be stored. Set this variable - to ``"encryption"``. - - **keyVaultCollectionName** - The collection in MongoDB where your DEKs - will be stored. Set this variable to ``"__keyVault"``. - - **keyVaultNamespace** - The namespace in MongoDB where your DEKs will - be stored. Set this variable to the values of the ``keyVaultDatabaseName`` - and ``keyVaultCollectionName`` variables, separated by a period. - - **encryptedDatabaseName** - The database in MongoDB where your encrypted - data will be stored. Set this variable to ``"medicalRecords"``. - - **encryptedCollectionName** - The collection in MongoDB where your encrypted - data will be stored. Set this variable to ``"patients"``. + .. include:: /includes/qe-tutorials/qe-quick-start.rst You can declare these variables by using the following code: @@ -119,23 +103,7 @@ Procedure .. tab:: :tabid: nodejs - - **kmsProviderName** - The KMS you're using to store your {+cmk-long+}. - Set this variable to ``"local"`` for this tutorial. - - **uri** - Your MongoDB deployment connection URI. Set your connection - URI in the ``MONGODB_URI`` environment variable or replace the value - directly. - - **keyVaultDatabaseName** - The database in MongoDB where your data - encryption keys (DEKs) will be stored. Set this variable - to ``"encryption"``. - - **keyVaultCollectionName** - The collection in MongoDB where your DEKs - will be stored. Set this variable to ``"__keyVault"``. - - **keyVaultNamespace** - The namespace in MongoDB where your DEKs will - be stored. Set this variable to the values of the ``keyVaultDatabaseName`` - and ``keyVaultCollectionName`` variables, separated by a period. - - **encryptedDatabaseName** - The database in MongoDB where your encrypted - data will be stored. Set this variable to ``"medicalRecords"``. - - **encryptedCollectionName** - The collection in MongoDB where your encrypted - data will be stored. Set this variable to ``"patients"``. + .. include:: /includes/qe-tutorials/qe-quick-start.rst You can declare these variables by using the following code: @@ -157,7 +125,8 @@ Procedure encryption keys (DEKs) will be stored. Set this variable to ``"encryption"``. - **key_vault_collection_name** - The collection in MongoDB where your DEKs - will be stored. Set this variable to ``"__keyVault"``. + will be stored. Set this variable to ``"__keyVault"``, which is the + convention to help prevent mistaking it for a user collection. - **key_vault_namespace** - The namespace in MongoDB where your DEKs will be stored. Set this variable to the values of the ``key_vault_database_name`` and ``key_vault_collection_name`` variables, separated by a period. @@ -177,23 +146,7 @@ Procedure .. 
tab:: :tabid: java-sync - - **kmsProviderName** - The KMS you're using to store your {+cmk-long+}. - Set this variable to ``"local"`` for this tutorial. - - **uri** - Your MongoDB deployment connection URI. Set your connection - URI in the ``MONGODB_URI`` environment variable or replace the value - directly. - - **keyVaultDatabaseName** - The database in MongoDB where your data - encryption keys (DEKs) will be stored. Set this variable - to ``"encryption"``. - - **keyVaultCollectionName** - The collection in MongoDB where your DEKs - will be stored. Set this variable to ``"__keyVault"``. - - **keyVaultNamespace** - The namespace in MongoDB where your DEKs will - be stored. Set this variable to the values of the ``keyVaultDatabaseName`` - and ``keyVaultCollectionName`` variables, separated by a period. - - **encryptedDatabaseName** - The database in MongoDB where your encrypted - data will be stored. Set this variable to ``"medicalRecords"``. - - **encryptedCollectionName** - The collection in MongoDB where your encrypted - data will be stored. Set this variable to ``"patients"``. + .. include:: /includes/qe-tutorials/qe-quick-start.rst You can declare these variables by using the following code: @@ -206,23 +159,7 @@ Procedure .. tab:: :tabid: go - - **kmsProviderName** - The KMS you're using to store your {+cmk-long+}. - Set this variable to ``"local"`` for this tutorial. - - **uri** - Your MongoDB deployment connection URI. Set your connection - URI in the ``MONGODB_URI`` environment variable or replace the value - directly. - - **keyVaultDatabaseName** - The database in MongoDB where your data - encryption keys (DEKs) will be stored. Set this variable - to ``"encryption"``. - - **keyVaultCollectionName** - The collection in MongoDB where your DEKs - will be stored. Set this variable to ``"__keyVault"``. - - **keyVaultNamespace** - The namespace in MongoDB where your DEKs will - be stored. Set this variable to the values of the ``keyVaultDatabaseName`` - and ``keyVaultCollectionName`` variables, separated by a period. - - **encryptedDatabaseName** - The database in MongoDB where your encrypted - data will be stored. Set this variable to ``"medicalRecords"``. - - **encryptedCollectionName** - The collection in MongoDB where your encrypted - data will be stored. Set this variable to ``"patients"``. + .. include:: /includes/qe-tutorials/qe-quick-start.rst You can declare these variables by using the following code: @@ -241,7 +178,8 @@ Procedure encryption keys (DEKs) will be stored. Set the value of ``keyVaultDatabaseName`` to ``"encryption"``. - **keyVaultCollectionName** - The collection in MongoDB where your DEKs - will be stored. Set the value of ``keyVaultCollectionName`` to ``"__keyVault"``. + will be stored. Set this variable to ``"__keyVault"``, which is the + convention to help prevent mistaking it for a user collection. - **keyVaultNamespace** - The namespace in MongoDB where your DEKs will be stored. Set ``keyVaultNamespace`` to a new ``CollectionNamespace`` object whose name is the values of the ``keyVaultDatabaseName`` and ``keyVaultCollectionName`` variables, @@ -264,10 +202,7 @@ Procedure .. important:: {+key-vault-long-title+} Namespace Permissions - The {+key-vault-long+} is in the ``encryption.__keyVault`` - namespace. Ensure that the database user your application uses to connect - to MongoDB has :ref:`ReadWrite ` - permissions on this namespace. + .. include:: /includes/note-key-vault-permissions .. 
include:: /includes/queryable-encryption/env-variables.rst diff --git a/source/core/queryable-encryption/reference.txt b/source/core/queryable-encryption/reference.txt index 739b660f0cb..5499bde30fb 100644 --- a/source/core/queryable-encryption/reference.txt +++ b/source/core/queryable-encryption/reference.txt @@ -15,21 +15,11 @@ Reference Read the following sections to learn about components of {+qe+}: -- :ref:`qe-compatibility-reference` -- :ref:`qe-reference-encryption-limits` - :ref:`qe-reference-automatic-encryption-supported-operations` - :ref:`qe-reference-mongo-client` -- :ref:`qe-reference-shared-library` -- :ref:`qe-reference-libmongocrypt` -- :ref:`qe-reference-mongocryptd` .. toctree:: :titlesonly: - /core/queryable-encryption/reference/compatibility - /core/queryable-encryption/reference/limitations /core/queryable-encryption/reference/supported-operations /core/queryable-encryption/reference/qe-options-clients - /core/queryable-encryption/reference/shared-library - /core/queryable-encryption/reference/libmongocrypt - /core/queryable-encryption/reference/mongocryptd diff --git a/source/core/queryable-encryption/reference/compatibility.txt b/source/core/queryable-encryption/reference/compatibility.txt index 17c4a55e2a6..7a784e59624 100644 --- a/source/core/queryable-encryption/reference/compatibility.txt +++ b/source/core/queryable-encryption/reference/compatibility.txt @@ -1,73 +1,245 @@ -.. _qe-driver-compatibility: +.. facet:: + :name: programming_language + :values: csharp, go, java, javascript/typescript, php, python, ruby, rust, scala + +.. _qe-csfle-compatibility: + +============= +Compatibility +============= + +.. contents:: On this page + :local: + :backlinks: none + :depth: 2 + :class: singlecol + +This page describes the MongoDB editions and driver versions compatible +with {+qe+} and {+csfle+} to help you determine whether your deployment +supports each in-use encryption feature. + .. _qe-compatibility-reference: -================================== + {+qe+} Compatibility -================================== +------------------------------------ + +You can use {+qe+} on a MongoDB 7.0 or later replica set or sharded +cluster, but not a standalone instance. The following table shows which +MongoDB server products support which {+qe+} mechanisms: + +.. list-table:: + :header-rows: 1 + :widths: 25 15 30 30 + + * - Product Name + - Minimum Version + - Supports {+qe+} with Automatic Encryption + - Supports {+qe+} with {+manual-enc-title+} -This page describes the MongoDB and driver versions with which {+qe+} -is compatible. + * - MongoDB Atlas [1]_ + - 7.0 + - Yes + - Yes -MongoDB Edition, Topology, and Version Compatibility ----------------------------------------------------- + * - MongoDB Enterprise Advanced + - 7.0 + - Yes + - Yes -{+qe+} with automatic encryption is only available with MongoDB Enterprise -Edition and MongoDB Atlas. You can use {+qe+} on a -MongoDB replica set or sharded cluster, but not a standalone instance. + * - MongoDB Community Edition + - 7.0 + - No + - Yes -:ref:`Explicit encryption ` is -available with MongoDB Community and Enterprise Edition. +.. [1] {+qe+} is compatible with MongoDB Atlas but not :atlas:`MongoDB Atlas Search `. -.. _qe-driver-compatibility-table: +.. 
_qe-driver-compatibility: -Driver Compatibility Table --------------------------- +{+qe+} Driver Compatibility +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -{+qe-equality-ga+} requires the following minimum versions for +{+qe+} requires the following minimum versions for compatible MongoDB drivers: .. list-table:: - :widths: 50 50 + :widths: 25 25 50 :header-rows: 1 * - Driver + - Minimum Version - Encryption Library - * - :driver:`Node.js ` versions ``5.5.0`` through ``5.8.1`` - - `mongodb-client-encryption `__ version ``2.8.0`` or later - - * - :driver:`Node.js ` version ``6.0.0`` or later - - `mongodb-client-encryption - `__ with the - same major version number as the Node.js driver. - - For example, Node.js driver v6.x.x requires ``mongodb-client-encryption`` + * - :driver:`C ` + - 1.24.0 + - :ref:`libmongocrypt ` version 1.8.0 or later. + + * - :driver:`C++ ` + - 3.8.0 + - :ref:`libmongocrypt ` version 1.8.0 or later. + + * - :driver:`C#/.NET ` + - 2.20.0 + - No additional dependency. + + * - :driver:`Go ` + - 1.12 + - :ref:`libmongocrypt ` version 1.8.0 + or later. + + * - :driver:`Java (Synchronous)` and `Java Reactive + Streams `__ + - 4.10.0 + - `mongodb-crypt `__ version 1.8.0 or later + + * - :driver:`Node.js ` + - 5.5.0 + - `mongodb-client-encryption `__ + version 2.8.0 or later. + + Node 6.0.0 or later requires ``mongodb-client-encryption`` with the + same major version number as the Node.js driver. For example, + Node.js driver v6.x.x requires ``mongodb-client-encryption`` v6.x.x. + + * - :driver:`PHP ` + - 1.16 + - No additional dependency. + + * - :driver:`PyMongo ` + - 4.4 + - `pymongocrypt `__ version + 1.6 or later. + + * - :driver:`Ruby ` + - 2.19 + - `libmongocrypt-helper `__ version 1.8.0 or later. + + * - :driver:`Rust ` + - 2.6.0 + - :ref:`libmongocrypt ` version 1.8.0 + or later. + + * - :driver:`Scala ` + - 4.10.0 + - `mongodb-crypt `__ version 1.8.0 or later + +MongoDB Support Limitations +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +.. include:: /includes/queryable-encryption/qe-supportability.rst + +.. _csfle-compatibility-reference: + +{+csfle+} Compatibility +------------------------------------------------------------------------ + +You can use {+csfle+} ({+csfle-abbrev+}) on a MongoDB 4.2 or later +replica set or sharded cluster, but not a standalone instance. The +following table shows which MongoDB server products support which +{+csfle+} mechanisms: + +.. list-table:: + :header-rows: 1 + :widths: 25 15 30 30 - * - :driver:`C#/.NET ` version ``2.20.0`` or later - - No additional dependency + * - Product Name + - Minimum Version + - Supports {+csfle-abbrev+} with Automatic Encryption + - Supports {+csfle-abbrev+} with {+manual-enc-title+} - * - :driver:`Java (Synchronous) ` version ``4.10.0`` or later - - `mongodb-crypt `__ version ``1.8.0`` or later + * - MongoDB Enterprise Advanced + - 4.2 + - Yes + - Yes - * - :driver:`PyMongo ` version ``4.4`` or later - - `pymongocrypt `__ version ``1.6`` or later + * - MongoDB Community Edition + - 4.2 + - No + - Yes - * - :driver:`Go ` version ``1.12`` or later - - :ref:`libmongocrypt ` version ``1.8.0`` or later - - * - :driver:`C ` version ``1.24.0`` or later - - :ref:`libmongocrypt ` version ``1.8.0`` or later +.. 
_csfle-driver-compatibility: - * - :driver:`C++ ` version ``3.8.0`` or later - - :ref:`libmongocrypt ` version ``1.8.0`` or later +{+csfle+} Driver Compatibility +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +{+csfle+} requires the following minimum versions for +compatible MongoDB drivers. - * - :driver:`PHP ` version ``1.16`` or later - - No additional dependency +.. important:: Key Rotation Support + + To use the Key Rotation API, you must use specific versions + of either your driver's binding package or ``libmongocrypt``. - * - :driver:`Ruby ` version ``2.19`` or later - - `libmongocrypt-helper `__ version ``1.8.0`` or later +.. _csfle-reference-compatibility-key-rotation: - * - :driver:`Rust ` version ``2.6.0`` or later - - :ref:`libmongocrypt ` version ``1.8.0`` or later +.. list-table:: + :widths: 25 25 50 + :header-rows: 1 + * - Driver + - Minimum Version + - Key Rotation Requirements + + * - :driver:`C ` + - 1.17.5 + - No additional requirements. + + * - :driver:`C++ ` + - 3.6.0 + - No additional requirements. + + * - :driver:`C#/.NET ` + - 2.10.0 + - Driver version 2.17.1 or later. + + * - :driver:`Go ` + - 1.2 + - ``libmongocrypt`` version 1.5.2 or later. + + * - :driver:`Java ` + - 3.11.0 + - ``mongodb-crypt`` version {+mongodb-crypt-version+} or later. + + * - `Java Reactive Streams + `__ + - 1.12.0 + - ``mongodb-crypt`` version {+mongodb-crypt-version+} or later. + + * - :driver:`Node.js ` + - 3.4.0 + - For driver version 6.0 or later, use the same ``mongodb-client-encryption`` + major version as the driver. + Otherwise, use ``mongodb-client-encryption`` 2.2.0 - 2.x. + + * - :driver:`PHP ` + - 1.6.0 + - No additional requirements. + + * - :driver:`Python (PyMongo) ` + - 3.10.0 + - ``pymongocrypt`` version 1.3.1 or later. + + * - `Ruby `__ + - 2.12.1 + - No additional requirements. + + * - :driver:`Scala ` + - 2.7.0 + - No additional requirements. \ No newline at end of file diff --git a/source/core/queryable-encryption/reference/libmongocrypt.txt b/source/core/queryable-encryption/reference/libmongocrypt.txt index 09ce583f367..11e74912ab2 100644 --- a/source/core/queryable-encryption/reference/libmongocrypt.txt +++ b/source/core/queryable-encryption/reference/libmongocrypt.txt @@ -1,10 +1,8 @@ .. _qe-reference-libmongocrypt: -============================================== -Install libmongocrypt for Queryable Encryption -============================================== - -.. default-domain:: mongodb +===================== +Install libmongocrypt +===================== .. contents:: On this page :local: @@ -15,14 +13,13 @@ Install libmongocrypt for Queryable Encryption Overview -------- -Learn how to install ``libmongocrypt``, a core component of {+qe+}. -This library performs encryption and decryption and manages communication -between the driver and the {+kms-long+} ({+kms-abbr+}). +The ``libmongocrypt`` library performs encryption and decryption, and +manages communication between the driver and the {+kms-long+} +({+kms-abbr+}). It is packaged with some drivers, but others require you +to install it. -You *do not* need to install this library if it is packaged with the -driver that you are using. To learn which drivers require installation of -``libmongocrypt``, check that it is listed as a dependency in the -:ref:`qe-driver-compatibility-table`. +To see if you need to install ``libmongocrypt``, check if +it is listed as a dependency in the :ref:`Drivers compatibility table `. .. warning:: @@ -58,7 +55,7 @@ Debian .. 
code-block:: sh - sudo sh -c 'curl -s --location https://www.mongodb.org/static/pgp/libmongocrypt.asc | gpg --dearmor >/etc/apt/trusted.gpg.d/libmongocrypt.gpg' + sudo sh -c 'curl -s --location https://pgp.mongodb.com/libmongocrypt.asc | gpg --dearmor >/etc/apt/trusted.gpg.d/libmongocrypt.gpg' .. step:: @@ -101,7 +98,7 @@ Ubuntu .. code-block:: sh - sudo sh -c 'curl -s --location https://www.mongodb.org/static/pgp/libmongocrypt.asc | gpg --dearmor >/etc/apt/trusted.gpg.d/libmongocrypt.gpg' + sudo sh -c 'curl -s --location https://pgp.mongodb.com/libmongocrypt.asc | gpg --dearmor >/etc/apt/trusted.gpg.d/libmongocrypt.gpg' .. step:: @@ -150,7 +147,7 @@ RedHat baseurl=https://libmongocrypt.s3.amazonaws.com/yum/redhat/$releasever/libmongocrypt/{+libmongocrypt-version+}/x86_64 gpgcheck=1 enabled=1 - gpgkey=https://www.mongodb.org/static/pgp/libmongocrypt.asc + gpgkey=https://pgp.mongodb.com/libmongocrypt.asc .. step:: @@ -177,7 +174,7 @@ Amazon Linux 2 baseurl=https://libmongocrypt.s3.amazonaws.com/yum/amazon/2/libmongocrypt/{+libmongocrypt-version+}/x86_64 gpgcheck=1 enabled=1 - gpgkey=https://www.mongodb.org/static/pgp/libmongocrypt.asc + gpgkey=https://pgp.mongodb.com/libmongocrypt.asc .. step:: @@ -204,7 +201,7 @@ Amazon Linux baseurl=https://libmongocrypt.s3.amazonaws.com/yum/amazon/2013.03/libmongocrypt/{+libmongocrypt-version+}/x86_64 gpgcheck=1 enabled=1 - gpgkey=https://www.mongodb.org/static/pgp/libmongocrypt.asc + gpgkey=https://pgp.mongodb.com/libmongocrypt.asc .. step:: @@ -226,7 +223,7 @@ Suse .. code-block:: sh - sudo rpm --import https://www.mongodb.org/static/pgp/libmongocrypt.asc + sudo rpm --import https://pgp.mongodb.com/libmongocrypt.asc .. step:: @@ -247,3 +244,10 @@ Suse .. code-block:: sh sudo zypper -n install libmongocrypt + +Next Steps +---------- + +Once you have installed your driver dependencies, :ref:`install and +configure a library ` to continue setting up +your deployment and development environment. diff --git a/source/core/queryable-encryption/reference/limitations.txt b/source/core/queryable-encryption/reference/limitations.txt index 8110417f0a6..0cdaf8795ac 100644 --- a/source/core/queryable-encryption/reference/limitations.txt +++ b/source/core/queryable-encryption/reference/limitations.txt @@ -1,8 +1,11 @@ +.. meta:: + :keywords: Queryable Encryption, in-use encryption, security, contention, redaction, topology support, supported operations + .. _qe-reference-encryption-limits: -=========== -Limitations -=========== +=============================================================================== +{+qe+} Limitations +=============================================================================== .. default-domain:: mongodb @@ -17,27 +20,26 @@ Overview Consider these limitations and restrictions before enabling {+qe+}. Some operations are unsupported, and others behave differently. -Atlas Search ------------- -{+qe+} is incompatible with :atlas:`MongoDB Atlas Search `. - +For compatibility limitations, please read :ref:``. MongoDB Support Limitations --------------------------- .. include:: /includes/queryable-encryption/qe-supportability.rst -For details, see the Redaction section. +For details, see the :ref:`Redaction ` section of this page. Contention Factor ----------------- Contention factor is a setting that helps tune performance based on the -number of concurrent connections. +number of concurrent operations. When unset, contention uses a default value of +``8``, which provides high performance for most workloads. 
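
As an illustration only (the field path and contention value below are
assumptions, not recommendations), a queryable field's contention is supplied
alongside its query type when you define the field in the {+enc-schema+}:

.. code-block:: javascript

   const encryptedFieldsMap = {
     fields: [
       {
         path: "patientInfo.ssn",
         bsonType: "string",
         // Assumed value for a field with relatively few concurrent writes;
         // omit `contention` to accept the default of 8.
         queries: { queryType: "equality", contention: 4 },
       },
     ],
   };
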
You can set the contention factor only when specifying a field for encryption. -Once you specify a field for encryption, the contention factor is immutable. If -you don't specify the contention factor, it uses the default value of ``4``. +Once you specify a field for encryption, the contention factor is immutable. + +For more information, see :ref:`Configuring contention factor `. Manual Metadata Collection Compaction ------------------------------------- @@ -147,11 +149,11 @@ Sharding CRUD ---- -- {+qe+} does not support batch operations. The following operations are - not supported: - - - :method:`db.collection.updateMany()` - - :method:`db.collection.deleteMany()` +- {+qe+} does not support multi-document update operations. + :method:`db.collection.updateMany()` is not supported. +- {+qe+} does not support multi-statement update or delete operations. + :method:`db.collection.bulkWrite()` with more than one update or + delete operation is not supported. - {+qe+} limits :method:`db.collection.findAndModify()` arguments. diff --git a/source/core/queryable-encryption/reference/mongocryptd.txt b/source/core/queryable-encryption/reference/mongocryptd.txt deleted file mode 100644 index dc60c5eeccc..00000000000 --- a/source/core/queryable-encryption/reference/mongocryptd.txt +++ /dev/null @@ -1,52 +0,0 @@ -.. _qe-reference-mongocryptd: -.. _qe-field-level-encryption-mongocryptd: -.. _qe-mongocryptd: - -========================================================== -Install and Configure mongocryptd for {+qe+} -========================================================== - -.. default-domain:: mongodb - -.. contents:: On this page - :local: - :backlinks: none - :depth: 2 - :class: singlecol - -Overview --------- - -.. tip:: Use the {+shared-library+} - - If you are starting a new project, use the - ``crypt_shared`` encryption helper, :ref:`referred to as the Shared - Library `. The {+shared-library+} replaces - ``mongocryptd`` and does not require spawning a new process. - -``mongocryptd`` is installed with `MongoDB Enterprise -Server <{+enterprise-download-link+}>`__. - - -When you create a {+qe+} enabled MongoDB client, the ``mongocryptd`` -process starts automatically by default. - -.. include:: /includes/queryable-encryption/qe-facts-mongocryptd-process.rst - -.. _qe-mongocryptd-installation: - -Installation ------------- - -.. include:: /includes/queryable-encryption/qe-csfle-install-mongocryptd.rst - - -Configuration -------------- - -.. include:: /includes/queryable-encryption/qe-csfle-configure-mongocryptd.rst - -Examples -~~~~~~~~ - -.. include:: /includes/queryable-encryption/qe-csfle-mongocryptd-examples.rst diff --git a/source/core/queryable-encryption/reference/shared-library.txt b/source/core/queryable-encryption/reference/shared-library.txt deleted file mode 100644 index c4c2222681d..00000000000 --- a/source/core/queryable-encryption/reference/shared-library.txt +++ /dev/null @@ -1,107 +0,0 @@ -.. _qe-reference-shared-library: - -============================================================ -{+shared-library+} for {+qe+} -============================================================ - -.. default-domain:: mongodb - -.. contents:: On this page - :local: - :backlinks: none - :depth: 2 - :class: singlecol - -Overview --------- - -The {+shared-library+} is a **dynamic library** that enables your client -application to perform automatic {+qe+}. -A dynamic library is a set of functionality accessed -by an application at runtime rather than compile time. 
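The driver must be able to locate the library; the ``cryptSharedLibPath`` and ``cryptSharedLibRequired`` parameters described in the Configuration section below control this. As a rough, non-authoritative sketch using the PyMongo spellings of those options (the library path and local key are placeholders):

.. code-block:: python

   import os

   from pymongo import MongoClient
   from pymongo.encryption_options import AutoEncryptionOpts

   # Placeholder 96-byte local master key; a real deployment stores this securely.
   local_master_key = os.urandom(96)

   auto_encryption_opts = AutoEncryptionOpts(
       kms_providers={"local": {"key": local_master_key}},
       key_vault_namespace="encryption.__keyVault",
       # Absolute path to the downloaded crypt_shared library (placeholder).
       crypt_shared_lib_path="/path/to/mongo_crypt_v1.so",
       # Raise an error instead of falling back to mongocryptd.
       crypt_shared_lib_required=True,
   )

   encrypted_client = MongoClient(
       "mongodb://localhost:27017", auto_encryption_opts=auto_encryption_opts
   )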
-The {+shared-library+} performs the following tasks: - -- Reads the :ref:`{+enc-schema+} ` to determine which fields to encrypt or decrypt -- Prevents your application from executing unsupported operations on - encrypted fields - -The {+shared-library+} *does not* do any of the following: - -- Perform data encryption or decryption -- Access the encryption key material -- Listen for data over the network - -.. important:: Supported MongoDB Server Products - - Automatic {+qe+} is only available in the following MongoDB server products: - - - MongoDB Atlas 7.0 or later clusters - - MongoDB Enterprise 7.0 or later - - Automatic {+qe+} is not available in any version of MongoDB - Community Server. - -The {+shared-library+} is a preferred alternative to ``mongocryptd`` and does -not require you to spawn another process to perform automatic encryption. - -.. tip:: - - While we recommend using the {+shared-library+}, ``mongocryptd`` is still supported. - - To learn more about ``mongocryptd``, see :ref:``. - -To learn more about automatic encryption, see -:ref:``. - -.. _qe-reference-shared-library-download: - -Download the {+shared-library+} ------------------------------------------------- - -Download the {+shared-library+} from the `MongoDB Download Center `__ by selecting the -version and platform, then the library: - -#. In the :guilabel:`Version` dropdown, select the version labeled as "current." -#. In the :guilabel:`Platform` dropdown, select your platform. -#. In the :guilabel:`Package` dropdown, select ``crypt_shared``. -#. Click :guilabel:`Download`. - -.. tip:: - - To view an expanded list of available releases and packages, see - `MongoDB Enterprise Downloads `__. - -.. _qe-reference-shared-library-configuration: - -Configuration -------------- - -You can configure how your driver searches for the {+shared-library+} -through the following parameters: - -.. list-table:: - :header-rows: 1 - :stub-columns: 1 - :widths: 30 70 - - * - Name - - Description - - * - cryptSharedLibPath - - | Specifies the absolute path to the {+shared-library+} package, - | {+shared-library-package+}. - | **Default**: ``undefined`` - - * - cryptSharedLibRequired - - | Specifies if the driver must use the {+shared-library+}. If ``true``, - | the driver raises an error if the {+shared-library+} is unavailable. - | If ``false``, the driver performs the following sequence of actions: - - #. Attempts to use the {+shared-library+}. - #. If the {+shared-library+} is unavailable, the driver attempts to - spawn and connect to ``mongocryptd``. - - | **Default**: ``false`` - -To view an example demonstrating how to configure these parameters, see -the :ref:`Quick Start `. diff --git a/source/core/queryable-encryption/reference/supported-operations.txt b/source/core/queryable-encryption/reference/supported-operations.txt index 3c46bb332db..6dbee194034 100644 --- a/source/core/queryable-encryption/reference/supported-operations.txt +++ b/source/core/queryable-encryption/reference/supported-operations.txt @@ -4,14 +4,19 @@ Supported Operations for {+qe+} ============================================= -.. default-domain:: mongodb - .. contents:: On this page :local: :backlinks: none :depth: 2 :class: singlecol +.. facet:: + :name: genre + :values: reference + +.. meta:: + :keywords: QE, read operations, write operations + This page documents the specific commands, query operators, update operators, aggregation stages, and aggregation expressions supported for {+qe+} compatible drivers. 
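As a non-authoritative orientation sketch (PyMongo; it assumes an ``encrypted_client`` already configured for automatic encryption, and the field names are hypothetical), the following shows two operations of the kind this page lists as supported: an equality filter on a queryable encrypted field, and a ``$group`` stage on an unencrypted field:

.. code-block:: python

   # Assumes encrypted_client is a MongoClient created with AutoEncryptionOpts,
   # and that "patientRecord.ssn" was configured with queryType "equality".
   patients = encrypted_client["medicalRecords"]["patients"]

   # Equality match on a queryable encrypted field.
   match = patients.find_one({"patientRecord.ssn": "987-65-4320"})

   # $group on an unencrypted field ("patientType" is hypothetical).
   pipeline = [{"$group": {"_id": "$patientType", "count": {"$sum": 1}}}]
   counts = list(patients.aggregate(pipeline))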
@@ -45,7 +50,6 @@ following commands: - :dbcommand:`aggregate` - :dbcommand:`count` - :dbcommand:`delete` -- :dbcommand:`distinct` - :dbcommand:`explain` - :dbcommand:`find` - :dbcommand:`findAndModify` @@ -226,8 +230,7 @@ aggregation pipeline stages: - :pipeline:`$collStats` - :pipeline:`$count` - :pipeline:`$geoNear` -- :pipeline:`$group` (For usage requirements, see - :ref:`qe-group-behavior`) +- :pipeline:`$group` on unencrypted fields - :pipeline:`$indexStats` - :pipeline:`$limit` - :pipeline:`$lookup` and :pipeline:`$graphLookup` (For usage @@ -254,25 +257,6 @@ Each supported stage must specify only supported :ref:`aggregation expressions `. -.. _qe-group-behavior: - -``$group`` Behavior -~~~~~~~~~~~~~~~~~~~ - -:pipeline:`$group` has the following behaviors specific to {+qe+}. - -``$group`` supports: - -- Grouping on encrypted fields. -- Using :group:`$addToSet` and :group:`$push` accumulators on encrypted - fields. - -``$group`` does not support: - -- Matching on the array returned by :group:`$addToSet` and :group:`$push` - accumulators. -- Arithmetic accumulators on encrypted fields. - .. _qe-csfle-lookup-graphLookup-behavior: ``$lookup`` and ``$graphLookup`` Behavior @@ -449,4 +433,3 @@ following value types: - ``decimal128`` - ``double`` - ``object`` -- ``javascriptWithScope`` (*Deprecated in MongoDB 4.4*) diff --git a/source/core/queryable-encryption/tutorials.txt b/source/core/queryable-encryption/tutorials.txt index ce7cff4c499..07052f86b12 100644 --- a/source/core/queryable-encryption/tutorials.txt +++ b/source/core/queryable-encryption/tutorials.txt @@ -15,24 +15,11 @@ Tutorials :depth: 2 :class: singlecol -Read the following pages to learn how to use {+qe+} with your preferred -{+kms-long+}: - -- AWS - - - :ref:`qe-tutorial-automatic-aws` - -- Azure - - - :ref:`qe-tutorial-automatic-azure` - -- GCP - - - :ref:`qe-tutorial-automatic-gcp` - -- Any {+kmip-kms-title+} - - - :ref:`qe-tutorial-automatic-kmip` +Read the :ref:`Overview: Enable Queryable Encryption +` section to set up your development environment and +data keys, then the :ref:`Overview: Use Queryable Encryption +` section to learn how to use {+qe+} with your +preferred {+kms-long+}. To learn how to use {+qe+} with a local key (not for production), see the :ref:`qe-quick-start`. @@ -41,8 +28,7 @@ To learn how to use {+manual-enc+} with {+qe+}, read :ref:``. Each tutorial provides a sample application in multiple languages for -each supported {+kms-long+}. See the table below for quick -access to all sample applications. +each supported {+kms-long+}. Code samples for specific language drivers: @@ -56,8 +42,6 @@ Code samples for specific language drivers: .. toctree:: :titlesonly: - /core/queryable-encryption/tutorials/aws/aws-automatic - /core/queryable-encryption/tutorials/azure/azure-automatic - /core/queryable-encryption/tutorials/gcp/gcp-automatic - /core/queryable-encryption/tutorials/kmip/kmip-automatic + /core/queryable-encryption/overview-enable-qe + /core/queryable-encryption/overview-use-qe /core/queryable-encryption/tutorials/explicit-encryption diff --git a/source/core/queryable-encryption/tutorials/aws/aws-automatic.txt b/source/core/queryable-encryption/tutorials/aws/aws-automatic.txt deleted file mode 100644 index 004d616f1da..00000000000 --- a/source/core/queryable-encryption/tutorials/aws/aws-automatic.txt +++ /dev/null @@ -1,1093 +0,0 @@ -.. _qe-tutorial-automatic-aws: -.. 
_qe-tutorial-automatic-dek-aws: - -========================================================= -Use Automatic {+qe+} with AWS -========================================================= - -.. contents:: On this page - :local: - :backlinks: none - :depth: 2 - :class: singlecol - -Overview --------- - -This guide shows you how to build an application that implements the MongoDB -{+qe+} feature to automatically encrypt and decrypt document fields and use -Amazon Web Services (AWS) {+kms-abbr+} for key management. - -After you complete the steps in this guide, you should have: - -- A {+cmk-long+} managed by AWS KMS -- An AWS IAM user with permissions to access the {+cmk-long+} - in AWS KMS -- A working client application that inserts {+in-use-docs+} - using your {+cmk-long+} - -.. tip:: Customer Master Keys - - To learn more about the {+cmk-long+}, read the - :ref:`qe-reference-keys-key-vaults` - documentation. - -Before You Get Started ----------------------- - -.. include:: /includes/queryable-encryption/set-up-section.rst - -.. see:: Full Application - - To see the complete code for this sample application, - select the tab corresponding to your programming language and follow - the provided link. Each sample application repository includes a - ``README.md`` file that you can use to learn how to set up your environment - and run the application. - - .. tabs:: - - .. tab:: mongosh - :tabid: shell - - `Complete mongosh Application <{+sample-app-url-qe+}/mongosh/>`__ - - .. tab:: Node.js - :tabid: nodejs - - `Complete Node.js Application <{+sample-app-url-qe+}/node/>`__ - - .. tab:: Python - :tabid: python - - `Complete Python Application <{+sample-app-url-qe+}/python/>`__ - - .. tab:: Java - :tabid: java-sync - - `Complete Java Application <{+sample-app-url-qe+}/java/>`__ - - .. tab:: Go - :tabid: go - - `Complete Go Application <{+sample-app-url-qe+}/go/>`__ - - .. tab:: C# - :tabid: csharp - - `Complete C# Application <{+sample-app-url-qe+}/csharp/>`__ - -.. tabs-selector:: drivers - -Set Up the KMS --------------- - -.. procedure:: - :style: normal - - .. step:: Create the {+cmk-long+} - - .. include:: /includes/queryable-encryption/tutorials/automatic/aws/cmk.rst - - .. step:: Create an AWS IAM User - - .. include:: /includes/queryable-encryption/tutorials/automatic/aws/user.rst - -Create the Application ----------------------- - -.. procedure:: - :style: normal - - .. step:: Assign Your Application Variables - - The code samples in this tutorial use the following variables to perform - the {+qe+} workflow: - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - - **kmsProviderName** - The KMS you're using to store your {+cmk-long+}. - Set this variable to ``"aws"`` for this tutorial. - - **uri** - Your MongoDB deployment connection URI. Set your connection - URI in the ``MONGODB_URI`` environment variable or replace the value - directly. - - **keyVaultDatabaseName** - The database in MongoDB where your data - encryption keys (DEKs) will be stored. Set this variable - to ``"encryption"``. - - **keyVaultCollectionName** - The collection in MongoDB where your DEKs - will be stored. Set this variable to ``"__keyVault"``. - - **keyVaultNamespace** - The namespace in MongoDB where your DEKs will - be stored. Set this variable to the values of the ``keyVaultDatabaseName`` - and ``keyVaultCollectionName`` variables, separated by a period. - - **encryptedDatabaseName** - The database in MongoDB where your encrypted - data will be stored. Set this variable to ``"medicalRecords"``. 
- - **encryptedCollectionName** - The collection in MongoDB where your encrypted - data will be stored. Set this variable to ``"patients"``. - - You can declare these variables by using the following code: - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js - :start-after: start-setup-application-variables - :end-before: end-setup-application-variables - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - - **kmsProviderName** - The KMS you're using to store your {+cmk-long+}. - Set this variable to ``"aws"`` for this tutorial. - - **uri** - Your MongoDB deployment connection URI. Set your connection - URI in the ``MONGODB_URI`` environment variable or replace the value - directly. - - **keyVaultDatabaseName** - The database in MongoDB where your data - encryption keys (DEKs) will be stored. Set this variable - to ``"encryption"``. - - **keyVaultCollectionName** - The collection in MongoDB where your DEKs - will be stored. Set this variable to ``"__keyVault"``. - - **keyVaultNamespace** - The namespace in MongoDB where your DEKs will - be stored. Set this variable to the values of the ``keyVaultDatabaseName`` - and ``keyVaultCollectionName`` variables, separated by a period. - - **encryptedDatabaseName** - The database in MongoDB where your encrypted - data will be stored. Set this variable to ``"medicalRecords"``. - - **encryptedCollectionName** - The collection in MongoDB where your encrypted - data will be stored. Set this variable to ``"patients"``. - - You can declare these variables by using the following code: - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-tutorial.js - :start-after: start-setup-application-variables - :end-before: end-setup-application-variables - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - - **kms_provider_name** - The KMS you're using to store your {+cmk-long+}. - Set this variable to ``"aws"`` for this tutorial. - - **uri** - Your MongoDB deployment connection URI. Set your connection - URI in the ``MONGODB_URI`` environment variable or replace the value - directly. - - **key_vault_database_name** - The database in MongoDB where your data - encryption keys (DEKs) will be stored. Set this variable - to ``"encryption"``. - - **key_vault_collection_name** - The collection in MongoDB where your DEKs - will be stored. Set this variable to ``"__keyVault"``. - - **key_vault_namespace** - The namespace in MongoDB where your DEKs will - be stored. Set this variable to the values of the ``key_vault_database_name`` - and ``key_vault_collection_name`` variables, separated by a period. - - **encrypted_database_name** - The database in MongoDB where your encrypted - data will be stored. Set this variable to ``"medicalRecords"``. - - **encrypted_collection_name** - The collection in MongoDB where your encrypted - data will be stored. Set this variable to ``"patients"``. - - You can declare these variables by using the following code: - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_tutorial.py - :start-after: start-setup-application-variables - :end-before: end-setup-application-variables - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - - **kmsProviderName** - The KMS you're using to store your {+cmk-long+}. - Set this variable to ``"aws"`` for this tutorial. - - **uri** - Your MongoDB deployment connection URI. Set your connection - URI in the ``MONGODB_URI`` environment variable or replace the value - directly. 
- - **keyVaultDatabaseName** - The database in MongoDB where your data - encryption keys (DEKs) will be stored. Set this variable - to ``"encryption"``. - - **keyVaultCollectionName** - The collection in MongoDB where your DEKs - will be stored. Set this variable to ``"__keyVault"``. - - **keyVaultNamespace** - The namespace in MongoDB where your DEKs will - be stored. Set this variable to the values of the ``keyVaultDatabaseName`` - and ``keyVaultCollectionName`` variables, separated by a period. - - **encryptedDatabaseName** - The database in MongoDB where your encrypted - data will be stored. Set this variable to ``"medicalRecords"``. - - **encryptedCollectionName** - The collection in MongoDB where your encrypted - data will be stored. Set this variable to ``"patients"``. - - You can declare these variables by using the following code: - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java - :start-after: start-setup-application-variables - :end-before: end-setup-application-variables - :language: java - :dedent: - - .. tab:: - :tabid: go - - - **kmsProviderName** - The KMS you're using to store your {+cmk-long+}. - Set this variable to ``"aws"`` for this tutorial. - - **uri** - Your MongoDB deployment connection URI. Set your connection - URI in the ``MONGODB_URI`` environment variable or replace the value - directly. - - **keyVaultDatabaseName** - The database in MongoDB where your data - encryption keys (DEKs) will be stored. Set this variable - to ``"encryption"``. - - **keyVaultCollectionName** - The collection in MongoDB where your DEKs - will be stored. Set this variable to ``"__keyVault"``. - - **keyVaultNamespace** - The namespace in MongoDB where your DEKs will - be stored. Set this variable to the values of the ``keyVaultDatabaseName`` - and ``keyVaultCollectionName`` variables, separated by a period. - - **encryptedDatabaseName** - The database in MongoDB where your encrypted - data will be stored. Set this variable to ``"medicalRecords"``. - - **encryptedCollectionName** - The collection in MongoDB where your encrypted - data will be stored. Set this variable to ``"patients"``. - - You can declare these variables by using the following code: - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_tutorial.go - :start-after: start-setup-application-variables - :end-before: end-setup-application-variables - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - - **kmsProviderName** - The KMS you're using to store your {+cmk-long+}. - Set this value to ``"aws"`` for this tutorial. - - **keyVaultDatabaseName** - The database in MongoDB where your data - encryption keys (DEKs) will be stored. Set the value of ``keyVaultDatabaseName`` - to ``"encryption"``. - - **keyVaultCollectionName** - The collection in MongoDB where your DEKs - will be stored. Set the value of ``keyVaultCollectionName`` to ``"__keyVault"``. - - **keyVaultNamespace** - The namespace in MongoDB where your DEKs will - be stored. Set ``keyVaultNamespace`` to a new ``CollectionNamespace`` object whose name - is the values of the ``keyVaultDatabaseName`` and ``keyVaultCollectionName`` variables, - separated by a period. - - **encryptedDatabaseName** - The database in MongoDB where your encrypted - data will be stored. Set the value of ``encryptedDatabaseName`` to ``"medicalRecords"``. - - **encryptedCollectionName** - The collection in MongoDB where your encrypted - data will be stored. 
Set the value of ``encryptedCollectionName`` to ``"patients"``. - - **uri** - Your MongoDB deployment connection URI. Set your connection - URI in the ``appsettings.json`` file or replace the value - directly. - - You can declare these variables by using the following code: - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionTutorial.cs - :start-after: start-setup-application-variables - :end-before: end-setup-application-variables - :language: csharp - :dedent: - - .. important:: {+key-vault-long-title+} Namespace Permissions - - The {+key-vault-long+} is in the ``encryption.__keyVault`` - namespace. Ensure that the database user your application uses to connect - to MongoDB has :ref:`ReadWrite ` - permissions on this namespace. - - .. include:: /includes/queryable-encryption/env-variables.rst - - .. step:: Create your Encrypted Collection - - .. procedure:: - :style: connected - - .. step:: Add Your AWS KMS Credentials - - Create a variable containing your AWS {+kms-abbr+} credentials with the - following structure. Use the Access Key ID and Secret Access Key you created - in the :ref:`Create an IAM User ` step of - this tutorial. - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-helpers.js - :start-after: start-aws-kms-credentials - :end-before: end-aws-kms-credentials - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-helpers.js - :start-after: start-aws-kms-credentials - :end-before: end-aws-kms-credentials - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_helpers.py - :start-after: start-aws-kms-credentials - :end-before: end-aws-kms-credentials - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/util/QueryableEncryptionHelpers.java - :start-after: start-aws-kms-credentials - :end-before: end-aws-kms-credentials - :language: java - :dedent: - - .. tab:: - :tabid: go - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_helpers.go - :start-after: start-aws-kms-credentials - :end-before: end-aws-kms-credentials - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionHelpers.cs - :start-after: start-aws-kms-credentials - :end-before: end-aws-kms-credentials - :language: csharp - :dedent: - - .. include:: /includes/queryable-encryption/tutorials/automatic/aws/role-authentication.rst - - .. step:: Add your {+cmk-long+} Credentials - - Create a variable containing your {+cmk-long+} credentials with the - following structure. Use the {+aws-arn-abbr+} and Region you recorded - in the :ref:`Create a {+cmk-long+} ` - step of this tutorial. - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-helpers.js - :start-after: start-aws-cmk-credentials - :end-before: end-aws-cmk-credentials - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-helpers.js - :start-after: start-aws-cmk-credentials - :end-before: end-aws-cmk-credentials - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - .. 
literalinclude:: /includes/qe-tutorials/python/queryable_encryption_helpers.py - :start-after: start-aws-cmk-credentials - :end-before: end-aws-cmk-credentials - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/util/QueryableEncryptionHelpers.java - :start-after: start-aws-cmk-credentials - :end-before: end-aws-cmk-credentials - :language: java - :dedent: - - .. tab:: - :tabid: go - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_helpers.go - :start-after: start-aws-cmk-credentials - :end-before: end-aws-cmk-credentials - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionHelpers.cs - :start-after: start-aws-cmk-credentials - :end-before: end-aws-cmk-credentials - :language: csharp - :dedent: - - .. step:: Set Your Automatic Encryption Options - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - Create an ``autoEncryptionOptions`` object that contains the following - options: - - - The namespace of your {+key-vault-long+} - - The ``kmsProviderCredentials`` object, which contains your AWS KMS - credentials - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-helpers.js - :start-after: start-auto-encryption-options - :end-before: end-auto-encryption-options - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - Create an ``autoEncryptionOptions`` object that contains the following - options: - - - The namespace of your {+key-vault-long+} - - The ``kmsProviders`` object, which contains your - AWS KMS credentials - - The ``sharedLibraryPathOptions`` object, which contains the path to - your {+shared-library+} - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-helpers.js - :start-after: start-auto-encryption-options - :end-before: end-auto-encryption-options - :emphasize-lines: 5-9 - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - Create an ``AutoEncryptionOpts`` object that contains the following - options: - - - The ``kms_provider_credentials`` object, which contains your - AWS KMS credentials - - The namespace of your {+key-vault-long+} - - The path to your {+shared-library+} - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_helpers.py - :start-after: start-auto-encryption-options - :end-before: end-auto-encryption-options - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - Create an ``AutoEncryptionSettings`` object that contains the following - options: - - - The namespace of your {+key-vault-long+} - - The ``kmsProviderCredentials`` object, which contains your - AWS KMS credentials - - The ``extraOptions`` object, which contains the path to - your {+shared-library+} - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/util/QueryableEncryptionHelpers.java - :start-after: start-auto-encryption-options - :end-before: end-auto-encryption-options - :emphasize-lines: 4-8 - :language: java - :dedent: - - .. tab:: - :tabid: go - - Create an ``AutoEncryption`` object that contains the following - options: - - - The namespace of your {+key-vault-long+} - - The ``kmsProviderCredentials`` object, which contains your - AWS KMS credentials - - The ``cryptSharedLibraryPath`` object, which contains the path to - your {+shared-library+} - - .. 
literalinclude:: /includes/qe-tutorials/go/queryable_encryption_helpers.go - :start-after: start-auto-encryption-options - :end-before: end-auto-encryption-options - :emphasize-lines: 5-8 - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - Create an ``AutoEncryptionOptions`` object that contains the following - options: - - - The namespace of your {+key-vault-long+} - - The ``kmsProviderCredentials`` object, which contains your - AWS KMS credentials - - The ``extraOptions`` object, which contains the path to - your {+shared-library+} - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionHelpers.cs - :start-after: start-auto-encryption-options - :end-before: end-auto-encryption-options - :emphasize-lines: 7-10 - :language: csharp - :dedent: - - .. include:: /includes/queryable-encryption/shared-lib-learn-more.rst - - .. step:: Create a Client to Set Up an Encrypted Collection - - To create a client used to encrypt and decrypt data in - your collection, instantiate a new ``MongoClient`` by using your - connection URI and your automatic encryption options. - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js - :start-after: start-create-client - :end-before: end-create-client - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-tutorial.js - :start-after: start-create-client - :end-before: end-create-client - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_tutorial.py - :start-after: start-create-client - :end-before: end-create-client - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java - :start-after: start-create-client - :end-before: end-create-client - :language: java - :dedent: - - .. tab:: - :tabid: go - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_tutorial.go - :start-after: start-create-client - :end-before: end-create-client - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionTutorial.cs - :start-after: start-create-client - :end-before: end-create-client - :language: csharp - :dedent: - - .. step:: Specify Fields to Encrypt - - To encrypt a field, add it to the {+enc-schema+}. - To enable queries on a field, add the "queries" - property. Create the {+enc-schema+} as follows: - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js - :start-after: start-encrypted-fields-map - :end-before: end-encrypted-fields-map - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-tutorial.js - :start-after: start-encrypted-fields-map - :end-before: end-encrypted-fields-map - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_tutorial.py - :start-after: start-encrypted-fields-map - :end-before: end-encrypted-fields-map - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - .. 
literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java - :start-after: start-encrypted-fields-map - :end-before: end-encrypted-fields-map - :language: java - :dedent: - - .. tab:: - :tabid: go - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_tutorial.go - :start-after: start-encrypted-fields-map - :end-before: end-encrypted-fields-map - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionTutorial.cs - :start-after: start-encrypted-fields-map - :end-before: end-encrypted-fields-map - :language: csharp - :dedent: - - .. note:: - - In the previous code sample, both the "ssn" and - "billing" fields are encrypted, but only the "ssn" - field can be queried. - - .. step:: Create the Collection - - Instantiate ``ClientEncryption`` to access the API for the - encryption helper methods. - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js - :start-after: start-client-encryption - :end-before: end-client-encryption - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-helpers.js - :start-after: start-client-encryption - :end-before: end-client-encryption - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_helpers.py - :start-after: start-client-encryption - :end-before: end-client-encryption - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java - :start-after: start-client-encryption - :end-before: end-client-encryption - :language: java - :dedent: - - .. tab:: - :tabid: go - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_helpers.go - :start-after: start-client-encryption - :end-before: end-client-encryption - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionHelpers.cs - :start-after: start-client-encryption - :end-before: end-client-encryption - :language: csharp - :dedent: - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - Create your encrypted collection by using the encryption - helper method accessed through the ``ClientEncryption`` class. - This method automatically generates data encryption keys for your - encrypted fields and creates the encrypted collection: - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js - :start-after: start-create-encrypted-collection - :end-before: end-create-encrypted-collection - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - .. include:: /includes/tutorials/automatic/node-include-clientEncryption.rst - - Create your encrypted collection by using the encryption - helper method accessed through the ``ClientEncryption`` class. - This method automatically generates data encryption keys for your - encrypted fields and creates the encrypted collection: - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-helpers.js - :start-after: start-create-encrypted-collection - :end-before: end-create-encrypted-collection - :language: javascript - :dedent: - - .. tip:: Database vs. 
Database Name - - The method that creates the encrypted collection requires a reference - to a database *object* rather than the database *name*. You can - obtain this reference by using a method on your client object. - - .. tab:: - :tabid: python - - Create your encrypted collection by using the encryption - helper method accessed through the ``ClientEncryption`` class. - This method automatically generates data encryption keys for your - encrypted fields and creates the encrypted collection: - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_tutorial.py - :start-after: start-create-encrypted-collection - :end-before: end-create-encrypted-collection - :language: python - :dedent: - - .. tip:: Database vs. Database Name - - The method that creates the encrypted collection requires a reference - to a database *object* rather than the database *name*. You can - obtain this reference by using a method on your client object. - - .. tab:: - :tabid: java-sync - - Create your encrypted collection by using the encryption - helper method accessed through the ``ClientEncryption`` class. - This method automatically generates data encryption keys for your - encrypted fields and creates the encrypted collection: - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java - :start-after: start-create-encrypted-collection - :end-before: end-create-encrypted-collection - :language: java - :dedent: - - .. tip:: Database vs. Database Name - - The method that creates the encrypted collection requires a reference - to a database *object* rather than the database *name*. You can - obtain this reference by using a method on your client object. - - .. tab:: - :tabid: go - - The Golang version of this tutorial uses data models to - represent the document structure. Add the following - structs to your project to represent the data in your - collection: - - .. literalinclude:: /includes/qe-tutorials/go/models.go - :start-after: start-patient-document - :end-before: end-patient-document - :language: go - :dedent: - - .. literalinclude:: /includes/qe-tutorials/go/models.go - :start-after: start-patient-record - :end-before: end-patient-record - :language: go - :dedent: - - .. literalinclude:: /includes/qe-tutorials/go/models.go - :start-after: start-payment-info - :end-before: end-payment-info - :language: go - :dedent: - - After you've added these classes, create your encrypted - collection by using the encryption helper method accessed - through the ``ClientEncryption`` class. - This method automatically generates data encryption keys for your - encrypted fields and creates the encrypted collection: - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_tutorial.go - :start-after: start-create-encrypted-collection - :end-before: end-create-encrypted-collection - :language: go - :dedent: - - .. tip:: Database vs. Database Name - - The method that creates the encrypted collection requires a reference - to a database *object* rather than the database *name*. You can - obtain this reference by using a method on your client object. - - .. tab:: - :tabid: csharp - - The C# version of this tutorial uses separate classes as data models - to represent the document structure. - Add the following ``Patient``, ``PatientRecord``, and ``PatientBilling`` - classes to your project: - - .. 
literalinclude:: /includes/qe-tutorials/csharp/Patient.cs - :start-after: start-patient - :end-before: end-patient - :language: csharp - :dedent: - - .. literalinclude:: /includes/qe-tutorials/csharp/PatientRecord.cs - :start-after: start-patient-record - :end-before: end-patient-record - :language: csharp - :dedent: - - .. literalinclude:: /includes/qe-tutorials/csharp/PatientBilling.cs - :start-after: start-patient-billing - :end-before: end-patient-billing - :language: csharp - :dedent: - - After you've added these classes, create your encrypted collection by - using the encryption helper method accessed through the - ``ClientEncryption`` class. - This method automatically generates data encryption keys for your - encrypted fields and creates the encrypted collection: - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionTutorial.cs - :start-after: start-create-encrypted-collection - :end-before: end-create-encrypted-collection - :language: csharp - :dedent: - - .. tip:: Database vs. Database Name - - The method that creates the encrypted collection requires a reference - to a database *object* rather than the database *name*. You can - obtain this reference by using a method on your client object. - - .. _qe-aws-insert: - - .. step:: Insert a Document with Encrypted Fields - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - Create a sample document that describes a patient's personal information. - Use the encrypted client to insert it into the ``patients`` collection, - as shown in the following example: - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js - :start-after: start-insert-document - :end-before: end-insert-document - :emphasize-lines: 15 - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - Create a sample document that describes a patient's personal information. - Use the encrypted client to insert it into the ``patients`` collection, - as shown in the following example: - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-tutorial.js - :start-after: start-insert-document - :end-before: end-insert-document - :emphasize-lines: 17 - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - Create a sample document that describes a patient's personal information. - Use the encrypted client to insert it into the ``patients`` collection, - as shown in the following example: - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_tutorial.py - :start-after: start-insert-document - :end-before: end-insert-document - :emphasize-lines: 15 - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - This tutorial uses POJOs as data models - to represent the document structure. To set up your application to - use POJOs, add the following code: - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java - :start-after: start-setup-application-pojo - :end-before: end-setup-application-pojo - :language: java - :dedent: - - To learn more about Java POJOs, see the `Plain Old Java Object - wikipedia article `__. - - This tutorial uses the following POJOs: - - - ``Patient`` - - ``PatientRecord`` - - ``PatientBilling`` - - You can view these classes in the `models package of the complete Java application - <{+sample-app-url-qe+}/java/src/main/java/com/mongodb/tutorials/qe/models>`__. - - Add these POJO classes to your application. 
Then, create an instance - of a ``Patient`` that describes a patient's personal information. Use - the encrypted client to insert it into the ``patients`` collection, - as shown in the following example: - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java - :start-after: start-insert-document - :end-before: end-insert-document - :emphasize-lines: 8 - :language: java - :dedent: - - .. tab:: - :tabid: go - - Create a sample document that describes a patient's personal information. - Use the encrypted client to insert it into the ``patients`` collection, - as shown in the following example: - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_tutorial.go - :start-after: start-insert-document - :end-before: end-insert-document - :emphasize-lines: 15 - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - Create a sample document that describes a patient's personal information. - Use the encrypted client to insert it into the ``patients`` collection, - as shown in the following example: - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionTutorial.cs - :start-after: start-insert-document - :end-before: end-insert-document - :emphasize-lines: 19 - :language: csharp - :dedent: - - .. step:: Query on an Encrypted Field - - The following code sample executes a find query on an encrypted field and - prints the decrypted data: - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js - :start-after: start-find-document - :end-before: end-find-document - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-tutorial.js - :start-after: start-find-document - :end-before: end-find-document - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_tutorial.py - :start-after: start-find-document - :end-before: end-find-document - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java - :start-after: start-find-document - :end-before: end-find-document - :language: java - :dedent: - - .. tab:: - :tabid: go - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_tutorial.go - :start-after: start-find-document - :end-before: end-find-document - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionTutorial.cs - :start-after: start-find-document - :end-before: end-find-document - :language: csharp - :dedent: - - The output of the preceding code sample should look similar to the - following: - - .. literalinclude:: /includes/qe-tutorials/encrypted-document.json - :language: json - :copyable: false - :dedent: - - .. include:: /includes/queryable-encryption/safe-content-warning.rst - -Learn More ----------- - -To learn how {+qe+} works, see -:ref:``. - -To learn more about the topics mentioned in this guide, see the -following links: - -- Learn more about {+qe+} components on the :ref:`Reference ` page. -- Learn how {+cmk-long+}s and {+dek-long+}s work on the :ref:`` page. -- See how KMS Providers manage your {+qe+} keys on the :ref:`` page. 
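The removed AWS tutorial above walks through the workflow step by step in several driver languages. As a compact, non-authoritative recap of the same flow (a PyMongo 4.4+ sketch; every credential, ARN, path, and field name is a placeholder):

.. code-block:: python

   import os

   from bson.binary import STANDARD
   from bson.codec_options import CodecOptions
   from pymongo import MongoClient
   from pymongo.encryption import ClientEncryption
   from pymongo.encryption_options import AutoEncryptionOpts

   # 1. AWS KMS credentials and the customer master key (placeholders).
   kms_providers = {
       "aws": {
           "accessKeyId": os.environ["AWS_ACCESS_KEY_ID"],
           "secretAccessKey": os.environ["AWS_SECRET_ACCESS_KEY"],
       }
   }
   customer_master_key = {
       "key": "arn:aws:kms:us-east-1:111122223333:key/example-key-id",
       "region": "us-east-1",
   }

   # 2. Automatic encryption options and an encrypted client.
   key_vault_namespace = "encryption.__keyVault"
   auto_encryption_opts = AutoEncryptionOpts(
       kms_providers,
       key_vault_namespace,
       crypt_shared_lib_path="/path/to/mongo_crypt_v1.so",  # placeholder
   )
   encrypted_client = MongoClient(
       os.environ["MONGODB_URI"], auto_encryption_opts=auto_encryption_opts
   )

   # 3. Create the encrypted collection; data encryption keys are generated
   # automatically for each encrypted field.
   client_encryption = ClientEncryption(
       kms_providers,
       key_vault_namespace,
       encrypted_client,
       CodecOptions(uuid_representation=STANDARD),
   )
   encrypted_fields = {
       "fields": [
           {
               "path": "patientRecord.ssn",
               "bsonType": "string",
               "queries": [{"queryType": "equality"}],
           },
           {"path": "patientRecord.billing", "bsonType": "object"},
       ]
   }
   client_encryption.create_encrypted_collection(
       encrypted_client["medicalRecords"],
       "patients",
       encrypted_fields,
       kms_provider="aws",
       master_key=customer_master_key,
   )

   # 4. Insert and query; field encryption and decryption happen automatically.
   patients = encrypted_client["medicalRecords"]["patients"]
   patients.insert_one(
       {
           "patientName": "Jon Doe",
           "patientRecord": {"ssn": "987-65-4320", "billing": {"type": "Visa"}},
       }
   )
   print(patients.find_one({"patientRecord.ssn": "987-65-4320"}))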
diff --git a/source/core/queryable-encryption/tutorials/azure/azure-automatic.txt b/source/core/queryable-encryption/tutorials/azure/azure-automatic.txt deleted file mode 100644 index f2d44016d69..00000000000 --- a/source/core/queryable-encryption/tutorials/azure/azure-automatic.txt +++ /dev/null @@ -1,1090 +0,0 @@ -.. _qe-tutorial-automatic-azure: -.. _qe-tutorial-automatic-dek-azure: - -=========================================================== -Use Automatic {+qe+} with Azure -=========================================================== - -.. contents:: On this page - :local: - :backlinks: none - :depth: 2 - :class: singlecol - -Overview --------- - -This guide shows you how to build an application that implements the MongoDB -{+qe+} feature to automatically encrypt and decrypt document fields and use -{+azure-kv+} {+kms-abbr+} for key management. - -After you complete the steps in this guide, you should have: - -- A {+cmk-long+} managed by {+azure-kv+} -- A working client application that inserts {+in-use-docs+} - using your {+cmk-long+} - -.. tip:: Customer Master Keys - - To learn more about the {+cmk-long+}, read the - :ref:`qe-reference-keys-key-vaults` - documentation. - -Before You Get Started ----------------------- - -.. include:: /includes/queryable-encryption/set-up-section.rst - -.. see:: Full Application - - To see the complete code for this sample application, - select the tab corresponding to your programming language and follow - the provided link. Each sample application repository includes a - ``README.md`` file that you can use to learn how to set up your environment - and run the application. - - .. tabs:: - - .. tab:: mongosh - :tabid: shell - - `Complete mongosh Application <{+sample-app-url-qe+}/mongosh/>`__ - - .. tab:: Node.js - :tabid: nodejs - - `Complete Node.js Application <{+sample-app-url-qe+}/node/>`__ - - .. tab:: Python - :tabid: python - - `Complete Python Application <{+sample-app-url-qe+}/python/>`__ - - .. tab:: Java - :tabid: java-sync - - `Complete Java Application <{+sample-app-url-qe+}/java/>`__ - - .. tab:: Go - :tabid: go - - `Complete Go Application <{+sample-app-url-qe+}/go/>`__ - - .. tab:: C# - :tabid: csharp - - `Complete C# Application <{+sample-app-url-qe+}/csharp/>`__ - -.. tabs-selector:: drivers - -Set Up the KMS --------------- - -.. procedure:: - :style: normal - - .. step:: Register your Application with Azure - - .. include:: /includes/queryable-encryption/tutorials/automatic/azure/register.rst - - .. step:: Create the {+cmk-long+} - - .. include:: /includes/queryable-encryption/tutorials/automatic/azure/cmk.rst - -Create the Application ----------------------- - -.. procedure:: - :style: normal - - .. step:: Assign Your Application Variables - - The code samples in this tutorial use the following variables to perform - the {+qe+} workflow: - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - - **kmsProviderName** - The KMS you're using to store your {+cmk-long+}. - Set this variable to ``"azure"`` for this tutorial. - - **uri** - Your MongoDB deployment connection URI. Set your connection - URI in the ``MONGODB_URI`` environment variable or replace the value - directly. - - **keyVaultDatabaseName** - The database in MongoDB where your data - encryption keys (DEKs) will be stored. Set this variable - to ``"encryption"``. - - **keyVaultCollectionName** - The collection in MongoDB where your DEKs - will be stored. Set this variable to ``"__keyVault"``. 
- - **keyVaultNamespace** - The namespace in MongoDB where your DEKs will - be stored. Set this variable to the values of the ``keyVaultDatabaseName`` - and ``keyVaultCollectionName`` variables, separated by a period. - - **encryptedDatabaseName** - The database in MongoDB where your encrypted - data will be stored. Set this variable to ``"medicalRecords"``. - - **encryptedCollectionName** - The collection in MongoDB where your encrypted - data will be stored. Set this variable to ``"patients"``. - - You can declare these variables by using the following code: - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js - :start-after: start-setup-application-variables - :end-before: end-setup-application-variables - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - - **kmsProviderName** - The KMS you're using to store your {+cmk-long+}. - Set this variable to ``"azure"`` for this tutorial. - - **uri** - Your MongoDB deployment connection URI. Set your connection - URI in the ``MONGODB_URI`` environment variable or replace the value - directly. - - **keyVaultDatabaseName** - The database in MongoDB where your data - encryption keys (DEKs) will be stored. Set this variable - to ``"encryption"``. - - **keyVaultCollectionName** - The collection in MongoDB where your DEKs - will be stored. Set this variable to ``"__keyVault"``. - - **keyVaultNamespace** - The namespace in MongoDB where your DEKs will - be stored. Set this variable to the values of the ``keyVaultDatabaseName`` - and ``keyVaultCollectionName`` variables, separated by a period. - - **encryptedDatabaseName** - The database in MongoDB where your encrypted - data will be stored. Set this variable to ``"medicalRecords"``. - - **encryptedCollectionName** - The collection in MongoDB where your encrypted - data will be stored. Set this variable to ``"patients"``. - - You can declare these variables by using the following code: - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-tutorial.js - :start-after: start-setup-application-variables - :end-before: end-setup-application-variables - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - - **kms_provider_name** - The KMS you're using to store your {+cmk-long+}. - Set this variable to ``"azure"`` for this tutorial. - - **uri** - Your MongoDB deployment connection URI. Set your connection - URI in the ``MONGODB_URI`` environment variable or replace the value - directly. - - **key_vault_database_name** - The database in MongoDB where your data - encryption keys (DEKs) will be stored. Set this variable - to ``"encryption"``. - - **key_vault_collection_name** - The collection in MongoDB where your DEKs - will be stored. Set this variable to ``"__keyVault"``. - - **key_vault_namespace** - The namespace in MongoDB where your DEKs will - be stored. Set this variable to the values of the ``key_vault_database_name`` - and ``key_vault_collection_name`` variables, separated by a period. - - **encrypted_database_name** - The database in MongoDB where your encrypted - data will be stored. Set this variable to ``"medicalRecords"``. - - **encrypted_collection_name** - The collection in MongoDB where your encrypted - data will be stored. Set this variable to ``"patients"``. - - You can declare these variables by using the following code: - - .. 
literalinclude:: /includes/qe-tutorials/python/queryable_encryption_tutorial.py - :start-after: start-setup-application-variables - :end-before: end-setup-application-variables - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - - **kmsProviderName** - The KMS you're using to store your {+cmk-long+}. - Set this variable to ``"azure"`` for this tutorial. - - **uri** - Your MongoDB deployment connection URI. Set your connection - URI in the ``MONGODB_URI`` environment variable or replace the value - directly. - - **keyVaultDatabaseName** - The database in MongoDB where your data - encryption keys (DEKs) will be stored. Set this variable - to ``"encryption"``. - - **keyVaultCollectionName** - The collection in MongoDB where your DEKs - will be stored. Set this variable to ``"__keyVault"``. - - **keyVaultNamespace** - The namespace in MongoDB where your DEKs will - be stored. Set this variable to the values of the ``keyVaultDatabaseName`` - and ``keyVaultCollectionName`` variables, separated by a period. - - **encryptedDatabaseName** - The database in MongoDB where your encrypted - data will be stored. Set this variable to ``"medicalRecords"``. - - **encryptedCollectionName** - The collection in MongoDB where your encrypted - data will be stored. Set this variable to ``"patients"``. - - You can declare these variables by using the following code: - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java - :start-after: start-setup-application-variables - :end-before: end-setup-application-variables - :language: java - :dedent: - - .. tab:: - :tabid: go - - - **kmsProviderName** - The KMS you're using to store your {+cmk-long+}. - Set this variable to ``"azure"`` for this tutorial. - - **uri** - Your MongoDB deployment connection URI. Set your connection - URI in the ``MONGODB_URI`` environment variable or replace the value - directly. - - **keyVaultDatabaseName** - The database in MongoDB where your data - encryption keys (DEKs) will be stored. Set this variable - to ``"encryption"``. - - **keyVaultCollectionName** - The collection in MongoDB where your DEKs - will be stored. Set this variable to ``"__keyVault"``. - - **keyVaultNamespace** - The namespace in MongoDB where your DEKs will - be stored. Set this variable to the values of the ``keyVaultDatabaseName`` - and ``keyVaultCollectionName`` variables, separated by a period. - - **encryptedDatabaseName** - The database in MongoDB where your encrypted - data will be stored. Set this variable to ``"medicalRecords"``. - - **encryptedCollectionName** - The collection in MongoDB where your encrypted - data will be stored. Set this variable to ``"patients"``. - - You can declare these variables by using the following code: - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_tutorial.go - :start-after: start-setup-application-variables - :end-before: end-setup-application-variables - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - - **kmsProviderName** - The KMS you're using to store your {+cmk-long+}. - Set this value to ``"azure"`` for this tutorial. - - **keyVaultDatabaseName** - The database in MongoDB where your data - encryption keys (DEKs) will be stored. Set the value of ``keyVaultDatabaseName`` - to ``"encryption"``. - - **keyVaultCollectionName** - The collection in MongoDB where your DEKs - will be stored. Set the value of ``keyVaultCollectionName`` to ``"__keyVault"``. 
- - **keyVaultNamespace** - The namespace in MongoDB where your DEKs will - be stored. Set ``keyVaultNamespace`` to a new ``CollectionNamespace`` object whose name - is the values of the ``keyVaultDatabaseName`` and ``keyVaultCollectionName`` variables, - separated by a period. - - **encryptedDatabaseName** - The database in MongoDB where your encrypted - data will be stored. Set the value of ``encryptedDatabaseName`` to ``"medicalRecords"``. - - **encryptedCollectionName** - The collection in MongoDB where your encrypted - data will be stored. Set the value of ``encryptedCollectionName`` to ``"patients"``. - - **uri** - Your MongoDB deployment connection URI. Set your connection - URI in the ``appsettings.json`` file or replace the value - directly. - - You can declare these variables by using the following code: - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionTutorial.cs - :start-after: start-setup-application-variables - :end-before: end-setup-application-variables - :language: csharp - :dedent: - - .. important:: {+key-vault-long-title+} Namespace Permissions - - The {+key-vault-long+} is in the ``encryption.__keyVault`` - namespace. Ensure that the database user your application uses to connect - to MongoDB has :ref:`ReadWrite ` - permissions on this namespace. - - .. include:: /includes/queryable-encryption/env-variables.rst - - .. step:: Create your Encrypted Collection - - .. procedure:: - :style: connected - - .. step:: Add Your Azure KMS Credentials - - .. _qe-tutorials-automatic-encryption-azure-kms-providers: - - Create a variable containing your Azure {+kms-abbr+} credentials with the - following structure. Use the {+azure-kv+} credentials you recorded in the - :ref:`Register your Application with Azure ` - step of this tutorial. - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-helpers.js - :start-after: start-azure-kms-credentials - :end-before: end-azure-kms-credentials - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-helpers.js - :start-after: start-azure-kms-credentials - :end-before: end-azure-kms-credentials - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_helpers.py - :start-after: start-azure-kms-credentials - :end-before: end-azure-kms-credentials - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/util/QueryableEncryptionHelpers.java - :start-after: start-azure-kms-credentials - :end-before: end-azure-kms-credentials - :language: python - :dedent: - - .. tab:: - :tabid: go - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_helpers.go - :start-after: start-azure-kms-credentials - :end-before: end-azure-kms-credentials - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionHelpers.cs - :start-after: start-azure-kms-credentials - :end-before: end-azure-kms-credentials - :language: csharp - :dedent: - - .. step:: Add your {+cmk-long+} Credentials - - Create a variable containing your {+cmk-long+} credentials with the - following structure. Use the {+cmk-long+} details you recorded in the - :ref:`Create a {+cmk-long+} ` step of this tutorial. - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - .. 
literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-helpers.js - :start-after: start-azure-cmk-credentials - :end-before: end-azure-cmk-credentials - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-helpers.js - :start-after: start-azure-cmk-credentials - :end-before: end-azure-cmk-credentials - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_helpers.py - :start-after: start-azure-cmk-credentials - :end-before: end-azure-cmk-credentials - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/util/QueryableEncryptionHelpers.java - :start-after: start-azure-cmk-credentials - :end-before: end-azure-cmk-credentials - :language: java - :dedent: - - .. tab:: - :tabid: go - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_helpers.go - :start-after: start-azure-cmk-credentials - :end-before: end-azure-cmk-credentials - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionHelpers.cs - :start-after: start-azure-cmk-credentials - :end-before: end-azure-cmk-credentials - :language: csharp - :dedent: - - .. step:: Set Your Automatic Encryption Options - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - Create an ``autoEncryptionOptions`` object that contains the following - options: - - - The namespace of your {+key-vault-long+} - - The ``kmsProviderCredentials`` object, which contains your Azure KMS - credentials - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-helpers.js - :start-after: start-auto-encryption-options - :end-before: end-auto-encryption-options - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - Create an ``autoEncryptionOptions`` object that contains the following - options: - - - The namespace of your {+key-vault-long+} - - The ``kmsProviders`` object, which contains your - Azure KMS credentials - - The ``sharedLibraryPathOptions`` object, which contains the path to - your {+shared-library+} - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-helpers.js - :start-after: start-auto-encryption-options - :end-before: end-auto-encryption-options - :emphasize-lines: 5-9 - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - Create an ``AutoEncryptionOpts`` object that contains the following - options: - - - The ``kms_provider_credentials`` object, which contains your - Azure KMS credentials - - The namespace of your {+key-vault-long+} - - The path to your {+shared-library+} - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_helpers.py - :start-after: start-auto-encryption-options - :end-before: end-auto-encryption-options - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - Create an ``AutoEncryptionSettings`` object that contains the following - options: - - - The namespace of your {+key-vault-long+} - - The ``kmsProviderCredentials`` object, which contains your - Azure KMS credentials - - The ``extraOptions`` object, which contains the path to - your {+shared-library+} - - .. 
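In Python, for example, these pieces come together in an ``AutoEncryptionOpts`` object roughly like the following sketch. It reuses the ``kms_provider_credentials`` and ``key_vault_namespace`` values from the earlier steps, and the {+shared-library+} path is a placeholder you replace with the location on your own machine.

.. code-block:: python

   # Sketch: combine the key vault namespace, the KMS credentials, and the
   # path to the Automatic Encryption Shared Library into the automatic
   # encryption options. The shared-library path below is a placeholder.
   from pymongo.encryption_options import AutoEncryptionOpts

   auto_encryption_options = AutoEncryptionOpts(
       kms_provider_credentials,    # {"azure": {...}} from the previous step
       key_vault_namespace,         # for example "encryption.__keyVault"
       crypt_shared_lib_path="/path/to/crypt_shared_library",  # placeholder
   )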
literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/util/QueryableEncryptionHelpers.java - :start-after: start-auto-encryption-options - :end-before: end-auto-encryption-options - :emphasize-lines: 4-8 - :language: java - :dedent: - - .. tab:: - :tabid: go - - Create an ``AutoEncryption`` object that contains the following - options: - - - The namespace of your {+key-vault-long+} - - The ``kmsProviderCredentials`` object, which contains your - Azure KMS credentials - - The ``cryptSharedLibraryPath`` object, which contains the path to - your {+shared-library+} - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_helpers.go - :start-after: start-auto-encryption-options - :end-before: end-auto-encryption-options - :emphasize-lines: 5-8 - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - Create an ``AutoEncryptionOptions`` object that contains the following - options: - - - The namespace of your {+key-vault-long+} - - The ``kmsProviderCredentials`` object, which contains your - Azure KMS credentials - - The ``extraOptions`` object, which contains the path to - your {+shared-library+} - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionHelpers.cs - :start-after: start-auto-encryption-options - :end-before: end-auto-encryption-options - :emphasize-lines: 7-10 - :language: csharp - :dedent: - - .. include:: /includes/queryable-encryption/shared-lib-learn-more.rst - - .. step:: Create a Client to Set Up an Encrypted Collection - - To create a client used to encrypt and decrypt data in - your collection, instantiate a new ``MongoClient`` by using your - connection URI and your automatic encryption options. - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js - :start-after: start-create-client - :end-before: end-create-client - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-tutorial.js - :start-after: start-create-client - :end-before: end-create-client - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_tutorial.py - :start-after: start-create-client - :end-before: end-create-client - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java - :start-after: start-create-client - :end-before: end-create-client - :language: java - :dedent: - - .. tab:: - :tabid: go - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_tutorial.go - :start-after: start-create-client - :end-before: end-create-client - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionTutorial.cs - :start-after: start-create-client - :end-before: end-create-client - :language: csharp - :dedent: - - .. step:: Specify Fields to Encrypt - - To encrypt a field, add it to the {+enc-schema+}. - To enable queries on a field, add the "queries" - property. Create the {+enc-schema+} as follows: - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js - :start-after: start-encrypted-fields-map - :end-before: end-encrypted-fields-map - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - .. 
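As an illustration of that structure, a Python version of the {+enc-schema+} could look like the following sketch. The ``patientRecord.ssn`` and ``patientRecord.billing`` field paths are assumptions chosen to match the note that follows the code samples; only the field that has a ``queries`` property is queryable.

.. code-block:: python

   # Sketch: an encrypted fields map for the encrypted collection. Field
   # paths are illustrative assumptions. Setting "keyId" to None lets the
   # helper that creates the collection generate a data encryption key for
   # each field.
   encrypted_fields_map = {
       "fields": [
           {
               "path": "patientRecord.ssn",
               "bsonType": "string",
               "keyId": None,
               "queries": [{"queryType": "equality"}],  # queryable field
           },
           {
               "path": "patientRecord.billing",
               "bsonType": "object",
               "keyId": None,                           # encrypted, not queryable
           },
       ]
   }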
literalinclude:: /includes/qe-tutorials/node/queryable-encryption-tutorial.js - :start-after: start-encrypted-fields-map - :end-before: end-encrypted-fields-map - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_tutorial.py - :start-after: start-encrypted-fields-map - :end-before: end-encrypted-fields-map - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java - :start-after: start-encrypted-fields-map - :end-before: end-encrypted-fields-map - :language: java - :dedent: - - .. tab:: - :tabid: go - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_tutorial.go - :start-after: start-encrypted-fields-map - :end-before: end-encrypted-fields-map - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionTutorial.cs - :start-after: start-encrypted-fields-map - :end-before: end-encrypted-fields-map - :language: csharp - :dedent: - - .. note:: - - In the previous code sample, both the "ssn" and - "billing" fields are encrypted, but only the "ssn" - field can be queried. - - .. step:: Create the Collection - - Instantiate ``ClientEncryption`` to access the API for the - encryption helper methods. - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js - :start-after: start-client-encryption - :end-before: end-client-encryption - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-helpers.js - :start-after: start-client-encryption - :end-before: end-client-encryption - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_helpers.py - :start-after: start-client-encryption - :end-before: end-client-encryption - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java - :start-after: start-client-encryption - :end-before: end-client-encryption - :language: java - :dedent: - - .. tab:: - :tabid: go - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_helpers.go - :start-after: start-client-encryption - :end-before: end-client-encryption - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionHelpers.cs - :start-after: start-client-encryption - :end-before: end-client-encryption - :language: csharp - :dedent: - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - Create your encrypted collection by using the encryption - helper method accessed through the ``ClientEncryption`` class. - This method automatically generates data encryption keys for your - encrypted fields and creates the encrypted collection: - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js - :start-after: start-create-encrypted-collection - :end-before: end-create-encrypted-collection - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - .. 
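In Python, for instance, the two pieces fit together roughly as follows. This sketch assumes the ``encrypted_client``, ``kms_provider_credentials``, ``customer_master_key_credentials``, and ``encrypted_fields_map`` values from the earlier sketches, and it uses PyMongo's ``create_encrypted_collection`` helper, which is available in recent driver versions.

.. code-block:: python

   # Sketch: instantiate ClientEncryption and use it to create the encrypted
   # collection. Assumes the variables defined in the earlier sketches.
   from bson.binary import STANDARD
   from bson.codec_options import CodecOptions
   from pymongo.encryption import ClientEncryption

   client_encryption = ClientEncryption(
       kms_provider_credentials,
       key_vault_namespace,
       encrypted_client,                      # MongoClient with auto encryption
       CodecOptions(uuid_representation=STANDARD),
   )

   # Generates the data encryption keys and creates the encrypted collection
   # in one call. Note that the first argument is a database *object*, not
   # the database name.
   client_encryption.create_encrypted_collection(
       encrypted_client[encrypted_database_name],
       encrypted_collection_name,
       encrypted_fields_map,
       kms_provider=kms_provider_name,        # "azure" in this tutorial
       master_key=customer_master_key_credentials,
   )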
include:: /includes/tutorials/automatic/node-include-clientEncryption.rst - - Create your encrypted collection by using the encryption - helper method accessed through the ``ClientEncryption`` class. - This method automatically generates data encryption keys for your - encrypted fields and creates the encrypted collection: - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-helpers.js - :start-after: start-create-encrypted-collection - :end-before: end-create-encrypted-collection - :language: javascript - :dedent: - - .. tip:: Database vs. Database Name - - The method that creates the encrypted collection requires a reference - to a database *object* rather than the database *name*. You can - obtain this reference by using a method on your client object. - - .. tab:: - :tabid: python - - Create your encrypted collection by using the encryption - helper method accessed through the ``ClientEncryption`` class. - This method automatically generates data encryption keys for your - encrypted fields and creates the encrypted collection: - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_tutorial.py - :start-after: start-create-encrypted-collection - :end-before: end-create-encrypted-collection - :language: python - :dedent: - - .. tip:: Database vs. Database Name - - The method that creates the encrypted collection requires a reference - to a database *object* rather than the database *name*. You can - obtain this reference by using a method on your client object. - - .. tab:: - :tabid: java-sync - - Create your encrypted collection by using the encryption - helper method accessed through the ``ClientEncryption`` class. - This method automatically generates data encryption keys for your - encrypted fields and creates the encrypted collection: - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java - :start-after: start-create-encrypted-collection - :end-before: end-create-encrypted-collection - :language: java - :dedent: - - .. tip:: Database vs. Database Name - - The method that creates the encrypted collection requires a reference - to a database *object* rather than the database *name*. You can - obtain this reference by using a method on your client object. - - .. tab:: - :tabid: go - - The Golang version of this tutorial uses data models to - represent the document structure. Add the following - structs to your project to represent the data in your - collection: - - .. literalinclude:: /includes/qe-tutorials/go/models.go - :start-after: start-patient-document - :end-before: end-patient-document - :language: go - :dedent: - - .. literalinclude:: /includes/qe-tutorials/go/models.go - :start-after: start-patient-record - :end-before: end-patient-record - :language: go - :dedent: - - .. literalinclude:: /includes/qe-tutorials/go/models.go - :start-after: start-payment-info - :end-before: end-payment-info - :language: go - :dedent: - - After you've added these classes, create your encrypted - collection by using the encryption helper method accessed - through the ``ClientEncryption`` class. - This method automatically generates data encryption keys for your - encrypted fields and creates the encrypted collection: - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_tutorial.go - :start-after: start-create-encrypted-collection - :end-before: end-create-encrypted-collection - :language: go - :dedent: - - .. tip:: Database vs. 
Database Name - - The method that creates the encrypted collection requires a reference - to a database *object* rather than the database *name*. You can - obtain this reference by using a method on your client object. - - .. tab:: - :tabid: csharp - - The C# version of this tutorial uses separate classes as data models - to represent the document structure. - Add the following ``Patient``, ``PatientRecord``, and ``PatientBilling`` - classes to your project: - - .. literalinclude:: /includes/qe-tutorials/csharp/Patient.cs - :start-after: start-patient - :end-before: end-patient - :language: csharp - :dedent: - - .. literalinclude:: /includes/qe-tutorials/csharp/PatientRecord.cs - :start-after: start-patient-record - :end-before: end-patient-record - :language: csharp - :dedent: - - .. literalinclude:: /includes/qe-tutorials/csharp/PatientBilling.cs - :start-after: start-patient-billing - :end-before: end-patient-billing - :language: csharp - :dedent: - - After you've added these classes, create your encrypted collection by - using the encryption helper method accessed through the - ``ClientEncryption`` class. - This method automatically generates data encryption keys for your - encrypted fields and creates the encrypted collection: - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionTutorial.cs - :start-after: start-create-encrypted-collection - :end-before: end-create-encrypted-collection - :language: csharp - :dedent: - - .. tip:: Database vs. Database Name - - The method that creates the encrypted collection requires a reference - to a database *object* rather than the database *name*. You can - obtain this reference by using a method on your client object. - - .. _qe-azure-insert: - - .. step:: Insert a Document with Encrypted Fields - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - Create a sample document that describes a patient's personal information. - Use the encrypted client to insert it into the ``patients`` collection, - as shown in the following example: - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js - :start-after: start-insert-document - :end-before: end-insert-document - :emphasize-lines: 15 - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - Create a sample document that describes a patient's personal information. - Use the encrypted client to insert it into the ``patients`` collection, - as shown in the following example: - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-tutorial.js - :start-after: start-insert-document - :end-before: end-insert-document - :emphasize-lines: 17 - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - Create a sample document that describes a patient's personal information. - Use the encrypted client to insert it into the ``patients`` collection, - as shown in the following example: - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_tutorial.py - :start-after: start-insert-document - :end-before: end-insert-document - :emphasize-lines: 15 - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - This tutorial uses POJOs as data models - to represent the document structure. To set up your application to - use POJOs, add the following code: - - .. 
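For comparison, a minimal Python sketch of this step might insert a document shaped like the one below and then read it back through the same encrypted client. The field values are invented sample data, and the document shape is an assumption based on the field paths used in the earlier {+enc-schema+} sketch.

.. code-block:: python

   # Sketch: insert a sample patient through the encrypted client, then query
   # on the queryable encrypted field. All values are invented sample data.
   patient_document = {
       "patientName": "Jon Doe",
       "patientId": 12345678,
       "patientRecord": {
           "ssn": "987-65-4320",          # encrypted and queryable
           "billing": {                    # encrypted, not queryable
               "type": "Visa",
               "number": "4111111111111111",
           },
       },
   }

   encrypted_collection = encrypted_client[encrypted_database_name][encrypted_collection_name]
   encrypted_collection.insert_one(patient_document)

   # The encrypted client transparently encrypts the query value and
   # decrypts the returned document.
   print(encrypted_collection.find_one({"patientRecord.ssn": "987-65-4320"}))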
literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java - :start-after: start-setup-application-pojo - :end-before: end-setup-application-pojo - :language: java - :dedent: - - To learn more about Java POJOs, see the `Plain Old Java Object - wikipedia article `__. - - This tutorial uses the following POJOs: - - - ``Patient`` - - ``PatientRecord`` - - ``PatientBilling`` - - You can view these classes in the `models package of the complete Java application - <{+sample-app-url-qe+}/java/src/main/java/com/mongodb/tutorials/qe/models>`__. - - Add these POJO classes to your application. Then, create an instance - of a ``Patient`` that describes a patient's personal information. Use - the encrypted client to insert it into the ``patients`` collection, - as shown in the following example: - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java - :start-after: start-insert-document - :end-before: end-insert-document - :emphasize-lines: 8 - :language: java - :dedent: - - .. tab:: - :tabid: go - - Create a sample document that describes a patient's personal information. - Use the encrypted client to insert it into the ``patients`` collection, - as shown in the following example: - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_tutorial.go - :start-after: start-insert-document - :end-before: end-insert-document - :emphasize-lines: 15 - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - Create a sample document that describes a patient's personal information. - Use the encrypted client to insert it into the ``patients`` collection, - as shown in the following example: - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionTutorial.cs - :start-after: start-insert-document - :end-before: end-insert-document - :emphasize-lines: 19 - :language: csharp - :dedent: - - .. step:: Query on an Encrypted Field - - The following code sample executes a find query on an encrypted field and - prints the decrypted data: - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js - :start-after: start-find-document - :end-before: end-find-document - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-tutorial.js - :start-after: start-find-document - :end-before: end-find-document - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_tutorial.py - :start-after: start-find-document - :end-before: end-find-document - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java - :start-after: start-find-document - :end-before: end-find-document - :language: java - :dedent: - - .. tab:: - :tabid: go - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_tutorial.go - :start-after: start-find-document - :end-before: end-find-document - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionTutorial.cs - :start-after: start-find-document - :end-before: end-find-document - :language: csharp - :dedent: - - The output of the preceding code sample should look similar to the - following: - - .. 
literalinclude:: /includes/qe-tutorials/encrypted-document.json - :language: json - :copyable: false - :dedent: - - .. include:: /includes/queryable-encryption/safe-content-warning.rst - -Learn More ----------- - -To learn how {+qe+} works, see -:ref:``. - -To learn more about the topics mentioned in this guide, see the -following links: - -- Learn more about {+qe+} components on the :ref:`Reference ` page. -- Learn how {+cmk-long+}s and {+dek-long+}s work on the :ref:`` page. -- See how KMS Providers manage your {+qe+} keys on the :ref:`` page. diff --git a/source/core/queryable-encryption/tutorials/gcp/gcp-automatic.txt b/source/core/queryable-encryption/tutorials/gcp/gcp-automatic.txt deleted file mode 100644 index ed12c93a93d..00000000000 --- a/source/core/queryable-encryption/tutorials/gcp/gcp-automatic.txt +++ /dev/null @@ -1,1087 +0,0 @@ -.. _qe-tutorial-automatic-gcp: -.. _qe-tutorial-automatic-dek-gcp: - -========================================================= -Use Automatic {+qe+} with GCP -========================================================= - -.. contents:: On this page - :local: - :backlinks: none - :depth: 2 - :class: singlecol - -Overview --------- - -This guide shows you how to build an application that implements the MongoDB -{+qe+} feature to automatically encrypt and decrypt document fields and use -{+gcp-kms+} for key management. - -After you complete the steps in this guide, you should have: - -- A {+cmk-abbr+} managed by {+gcp-kms+} -- A working client application that inserts {+in-use-docs+} - using your {+cmk-abbr+} - -.. tip:: Customer Master Keys - - To learn more about the {+cmk-long+}, read the - :ref:`qe-reference-keys-key-vaults` - documentation. - -Before You Get Started ----------------------- - -.. include:: /includes/queryable-encryption/set-up-section.rst - -.. see:: Full Application - - To see the complete code for this sample application, - select the tab corresponding to your programming language and follow - the provided link. Each sample application repository includes a - ``README.md`` file that you can use to learn how to set up your environment - and run the application. - - .. tabs:: - - .. tab:: mongosh - :tabid: shell - - `Complete mongosh Application <{+sample-app-url-qe+}/mongosh/>`__ - - .. tab:: Node.js - :tabid: nodejs - - `Complete Node.js Application <{+sample-app-url-qe+}/node/>`__ - - .. tab:: Python - :tabid: python - - `Complete Python Application <{+sample-app-url-qe+}/python/>`__ - - .. tab:: Java - :tabid: java-sync - - `Complete Java Application <{+sample-app-url-qe+}/java/>`__ - - .. tab:: Go - :tabid: go - - `Complete Go Application <{+sample-app-url-qe+}/go/>`__ - - .. tab:: C# - :tabid: csharp - - `Complete C# Application <{+sample-app-url-qe+}/csharp/>`__ - -.. tabs-selector:: drivers - -Set Up the KMS --------------- - -.. procedure:: - :style: normal - - .. step:: Register a {+gcp-abbr+} Service Account - - .. include:: /includes/queryable-encryption/tutorials/automatic/gcp/register.rst - - .. step:: Create a {+gcp-abbr+} {+cmk-long+} - - .. include:: /includes/queryable-encryption/tutorials/automatic/gcp/cmk.rst - - -Create the Application ----------------------- - -.. procedure:: - :style: normal - - .. step:: Assign Your Application Variables - - The code samples in this tutorial use the following variables to perform - the {+qe+} workflow: - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - - **kmsProviderName** - The KMS you're using to store your {+cmk-long+}. 
- Set this variable to ``"gcp"`` for this tutorial. - - **uri** - Your MongoDB deployment connection URI. Set your connection - URI in the ``MONGODB_URI`` environment variable or replace the value - directly. - - **keyVaultDatabaseName** - The database in MongoDB where your data - encryption keys (DEKs) will be stored. Set this variable - to ``"encryption"``. - - **keyVaultCollectionName** - The collection in MongoDB where your DEKs - will be stored. Set this variable to ``"__keyVault"``. - - **keyVaultNamespace** - The namespace in MongoDB where your DEKs will - be stored. Set this variable to the values of the ``keyVaultDatabaseName`` - and ``keyVaultCollectionName`` variables, separated by a period. - - **encryptedDatabaseName** - The database in MongoDB where your encrypted - data will be stored. Set this variable to ``"medicalRecords"``. - - **encryptedCollectionName** - The collection in MongoDB where your encrypted - data will be stored. Set this variable to ``"patients"``. - - You can declare these variables by using the following code: - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js - :start-after: start-setup-application-variables - :end-before: end-setup-application-variables - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - - **kmsProviderName** - The KMS you're using to store your {+cmk-long+}. - Set this variable to ``"gcp"`` for this tutorial. - - **uri** - Your MongoDB deployment connection URI. Set your connection - URI in the ``MONGODB_URI`` environment variable or replace the value - directly. - - **keyVaultDatabaseName** - The database in MongoDB where your data - encryption keys (DEKs) will be stored. Set this variable - to ``"encryption"``. - - **keyVaultCollectionName** - The collection in MongoDB where your DEKs - will be stored. Set this variable to ``"__keyVault"``. - - **keyVaultNamespace** - The namespace in MongoDB where your DEKs will - be stored. Set this variable to the values of the ``keyVaultDatabaseName`` - and ``keyVaultCollectionName`` variables, separated by a period. - - **encryptedDatabaseName** - The database in MongoDB where your encrypted - data will be stored. Set this variable to ``"medicalRecords"``. - - **encryptedCollectionName** - The collection in MongoDB where your encrypted - data will be stored. Set this variable to ``"patients"``. - - You can declare these variables by using the following code: - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-tutorial.js - :start-after: start-setup-application-variables - :end-before: end-setup-application-variables - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - - **kms_provider_name** - The KMS you're using to store your {+cmk-long+}. - Set this variable to ``"gcp"`` for this tutorial. - - **uri** - Your MongoDB deployment connection URI. Set your connection - URI in the ``MONGODB_URI`` environment variable or replace the value - directly. - - **key_vault_database_name** - The database in MongoDB where your data - encryption keys (DEKs) will be stored. Set this variable - to ``"encryption"``. - - **key_vault_collection_name** - The collection in MongoDB where your DEKs - will be stored. Set this variable to ``"__keyVault"``. - - **key_vault_namespace** - The namespace in MongoDB where your DEKs will - be stored. Set this variable to the values of the ``key_vault_database_name`` - and ``key_vault_collection_name`` variables, separated by a period. 
- - **encrypted_database_name** - The database in MongoDB where your encrypted - data will be stored. Set this variable to ``"medicalRecords"``. - - **encrypted_collection_name** - The collection in MongoDB where your encrypted - data will be stored. Set this variable to ``"patients"``. - - You can declare these variables by using the following code: - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_tutorial.py - :start-after: start-setup-application-variables - :end-before: end-setup-application-variables - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - - **kmsProviderName** - The KMS you're using to store your {+cmk-long+}. - Set this variable to ``"gcp"`` for this tutorial. - - **uri** - Your MongoDB deployment connection URI. Set your connection - URI in the ``MONGODB_URI`` environment variable or replace the value - directly. - - **keyVaultDatabaseName** - The database in MongoDB where your data - encryption keys (DEKs) will be stored. Set this variable - to ``"encryption"``. - - **keyVaultCollectionName** - The collection in MongoDB where your DEKs - will be stored. Set this variable to ``"__keyVault"``. - - **keyVaultNamespace** - The namespace in MongoDB where your DEKs will - be stored. Set this variable to the values of the ``keyVaultDatabaseName`` - and ``keyVaultCollectionName`` variables, separated by a period. - - **encryptedDatabaseName** - The database in MongoDB where your encrypted - data will be stored. Set this variable to ``"medicalRecords"``. - - **encryptedCollectionName** - The collection in MongoDB where your encrypted - data will be stored. Set this variable to ``"patients"``. - - You can declare these variables by using the following code: - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java - :start-after: start-setup-application-variables - :end-before: end-setup-application-variables - :language: java - :dedent: - - .. tab:: - :tabid: go - - - **kmsProviderName** - The KMS you're using to store your {+cmk-long+}. - Set this variable to ``"gcp"`` for this tutorial. - - **uri** - Your MongoDB deployment connection URI. Set your connection - URI in the ``MONGODB_URI`` environment variable or replace the value - directly. - - **keyVaultDatabaseName** - The database in MongoDB where your data - encryption keys (DEKs) will be stored. Set this variable - to ``"encryption"``. - - **keyVaultCollectionName** - The collection in MongoDB where your DEKs - will be stored. Set this variable to ``"__keyVault"``. - - **keyVaultNamespace** - The namespace in MongoDB where your DEKs will - be stored. Set this variable to the values of the ``keyVaultDatabaseName`` - and ``keyVaultCollectionName`` variables, separated by a period. - - **encryptedDatabaseName** - The database in MongoDB where your encrypted - data will be stored. Set this variable to ``"medicalRecords"``. - - **encryptedCollectionName** - The collection in MongoDB where your encrypted - data will be stored. Set this variable to ``"patients"``. - - You can declare these variables by using the following code: - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_tutorial.go - :start-after: start-setup-application-variables - :end-before: end-setup-application-variables - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - - **kmsProviderName** - The KMS you're using to store your {+cmk-long+}. - Set this value to ``"gcp"`` for this tutorial. 
- - **keyVaultDatabaseName** - The database in MongoDB where your data - encryption keys (DEKs) will be stored. Set the value of ``keyVaultDatabaseName`` - to ``"encryption"``. - - **keyVaultCollectionName** - The collection in MongoDB where your DEKs - will be stored. Set the value of ``keyVaultCollectionName`` to ``"__keyVault"``. - - **keyVaultNamespace** - The namespace in MongoDB where your DEKs will - be stored. Set ``keyVaultNamespace`` to a new ``CollectionNamespace`` object whose name - is the values of the ``keyVaultDatabaseName`` and ``keyVaultCollectionName`` variables, - separated by a period. - - **encryptedDatabaseName** - The database in MongoDB where your encrypted - data will be stored. Set the value of ``encryptedDatabaseName`` to ``"medicalRecords"``. - - **encryptedCollectionName** - The collection in MongoDB where your encrypted - data will be stored. Set the value of ``encryptedCollectionName`` to ``"patients"``. - - **uri** - Your MongoDB deployment connection URI. Set your connection - URI in the ``appsettings.json`` file or replace the value - directly. - - You can declare these variables by using the following code: - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionTutorial.cs - :start-after: start-setup-application-variables - :end-before: end-setup-application-variables - :language: csharp - :dedent: - - .. important:: {+key-vault-long-title+} Namespace Permissions - - The {+key-vault-long+} is in the ``encryption.__keyVault`` - namespace. Ensure that the database user your application uses to connect - to MongoDB has :ref:`ReadWrite ` - permissions on this namespace. - - .. include:: /includes/queryable-encryption/env-variables.rst - - .. step:: Create your Encrypted Collection - - .. procedure:: - :style: connected - - .. step:: Add Your {+gcp-kms+} Credentials - - Create a variable containing your {+gcp-kms+} credentials with the - following structure: - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-helpers.js - :start-after: start-gcp-kms-credentials - :end-before: end-gcp-kms-credentials - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-helpers.js - :start-after: start-gcp-kms-credentials - :end-before: end-gcp-kms-credentials - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_helpers.py - :start-after: start-gcp-kms-credentials - :end-before: end-gcp-kms-credentials - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/util/QueryableEncryptionHelpers.java - :start-after: start-gcp-kms-credentials - :end-before: end-gcp-kms-credentials - :language: java - :dedent: - - .. tab:: - :tabid: go - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_helpers.go - :start-after: start-gcp-kms-credentials - :end-before: end-gcp-kms-credentials - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionHelpers.cs - :start-after: start-gcp-kms-credentials - :end-before: end-gcp-kms-credentials - :language: csharp - :dedent: - - .. step:: Add your {+cmk-long+} Credentials - - Create a variable containing your {+cmk-long+} credentials with the - following structure. 
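Concretely, the {+gcp-kms+} service-account credentials take roughly the following shape. This Python sketch uses assumed environment-variable names, and the ``privateKey`` value is the base64-encoded private key from your service-account key file.

.. code-block:: python

   # Sketch: credentials for the "gcp" KMS provider. Environment variable
   # names are assumptions for this example.
   import os

   kms_provider_credentials = {
       "gcp": {
           "email": os.environ["GCP_EMAIL"],             # service account email
           "privateKey": os.environ["GCP_PRIVATE_KEY"],  # base64-encoded key
       }
   }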
Use the credentials you recorded - in the :ref:`Create a new {+cmk-long+} ` - step of this tutorial. - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-helpers.js - :start-after: start-gcp-cmk-credentials - :end-before: end-gcp-cmk-credentials - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-helpers.js - :start-after: start-gcp-cmk-credentials - :end-before: end-gcp-cmk-credentials - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_helpers.py - :start-after: start-gcp-cmk-credentials - :end-before: end-gcp-cmk-credentials - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/util/QueryableEncryptionHelpers.java - :start-after: start-gcp-cmk-credentials - :end-before: end-gcp-cmk-credentials - :language: java - :dedent: - - .. tab:: - :tabid: go - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_helpers.go - :start-after: start-gcp-cmk-credentials - :end-before: end-gcp-cmk-credentials - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionHelpers.cs - :start-after: start-gcp-cmk-credentials - :end-before: end-gcp-cmk-credentials - :language: csharp - :dedent: - - .. step:: Set Your Automatic Encryption Options - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - Create an ``autoEncryptionOptions`` object that contains the following - options: - - - The namespace of your {+key-vault-long+} - - The ``kmsProviderCredentials`` object, which contains your {+gcp-kms+} - credentials - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-helpers.js - :start-after: start-auto-encryption-options - :end-before: end-auto-encryption-options - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - Create an ``autoEncryptionOptions`` object that contains the following - options: - - - The namespace of your {+key-vault-long+} - - The ``kmsProviders`` object, which contains your - {+gcp-kms+} credentials - - The ``sharedLibraryPathOptions`` object, which contains the path to - your {+shared-library+} - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-helpers.js - :start-after: start-auto-encryption-options - :end-before: end-auto-encryption-options - :emphasize-lines: 5-9 - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - Create an ``AutoEncryptionOpts`` object that contains the following - options: - - - The ``kms_provider_credentials`` object, which contains your - {+gcp-kms+} credentials - - The namespace of your {+key-vault-long+} - - The path to your {+shared-library+} - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_helpers.py - :start-after: start-auto-encryption-options - :end-before: end-auto-encryption-options - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - Create an ``AutoEncryptionSettings`` object that contains the following - options: - - - The namespace of your {+key-vault-long+} - - The ``kmsProviderCredentials`` object, which contains your - {+gcp-kms+} credentials - - The ``extraOptions`` object, which contains the path to - your {+shared-library+} - - .. 
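For reference, the {+cmk-long+} details for a key stored in {+gcp-kms+} identify the key by project, location, key ring, and key name. A Python sketch with placeholder values might look like the following; replace every value with the details you recorded.

.. code-block:: python

   # Sketch: details of the Customer Master Key in Google Cloud KMS. All
   # values below are placeholders.
   customer_master_key_credentials = {
       "projectId": "your-project-id",
       "location": "global",
       "keyRing": "your-key-ring",
       "keyName": "your-key-name",
   }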
literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/util/QueryableEncryptionHelpers.java - :start-after: start-auto-encryption-options - :end-before: end-auto-encryption-options - :emphasize-lines: 4-8 - :language: java - :dedent: - - .. tab:: - :tabid: go - - Create an ``AutoEncryption`` object that contains the following - options: - - - The namespace of your {+key-vault-long+} - - The ``kmsProviderCredentials`` object, which contains your - {+gcp-kms+} credentials - - The ``cryptSharedLibraryPath`` object, which contains the path to - your {+shared-library+} - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_helpers.go - :start-after: start-auto-encryption-options - :end-before: end-auto-encryption-options - :emphasize-lines: 5-8 - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - Create an ``AutoEncryptionOptions`` object that contains the following - options: - - - The namespace of your {+key-vault-long+} - - The ``kmsProviderCredentials`` object, which contains your - {+gcp-kms+} credentials - - The ``extraOptions`` object, which contains the path to - your {+shared-library+} - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionHelpers.cs - :start-after: start-auto-encryption-options - :end-before: end-auto-encryption-options - :emphasize-lines: 7-10 - :language: csharp - :dedent: - - .. include:: /includes/queryable-encryption/shared-lib-learn-more.rst - - .. step:: Create a Client to Set Up an Encrypted Collection - - To create a client used to encrypt and decrypt data in - your collection, instantiate a new ``MongoClient`` by using your - connection URI and your automatic encryption options. - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js - :start-after: start-create-client - :end-before: end-create-client - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-tutorial.js - :start-after: start-create-client - :end-before: end-create-client - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_tutorial.py - :start-after: start-create-client - :end-before: end-create-client - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java - :start-after: start-create-client - :end-before: end-create-client - :language: java - :dedent: - - .. tab:: - :tabid: go - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_tutorial.go - :start-after: start-create-client - :end-before: end-create-client - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionTutorial.cs - :start-after: start-create-client - :end-before: end-create-client - :language: csharp - :dedent: - - .. step:: Specify Fields to Encrypt - - To encrypt a field, add it to the {+enc-schema+}. - To enable queries on a field, add the "queries" - property. Create the {+enc-schema+} as follows: - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js - :start-after: start-encrypted-fields-map - :end-before: end-encrypted-fields-map - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - .. 
literalinclude:: /includes/qe-tutorials/node/queryable-encryption-tutorial.js - :start-after: start-encrypted-fields-map - :end-before: end-encrypted-fields-map - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_tutorial.py - :start-after: start-encrypted-fields-map - :end-before: end-encrypted-fields-map - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java - :start-after: start-encrypted-fields-map - :end-before: end-encrypted-fields-map - :language: java - :dedent: - - .. tab:: - :tabid: go - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_tutorial.go - :start-after: start-encrypted-fields-map - :end-before: end-encrypted-fields-map - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionTutorial.cs - :start-after: start-encrypted-fields-map - :end-before: end-encrypted-fields-map - :language: csharp - :dedent: - - .. note:: - - In the previous code sample, both the "ssn" and - "billing" fields are encrypted, but only the "ssn" - field can be queried. - - .. step:: Create the Collection - - Instantiate ``ClientEncryption`` to access the API for the - encryption helper methods. - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js - :start-after: start-client-encryption - :end-before: end-client-encryption - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-helpers.js - :start-after: start-client-encryption - :end-before: end-client-encryption - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_helpers.py - :start-after: start-client-encryption - :end-before: end-client-encryption - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java - :start-after: start-client-encryption - :end-before: end-client-encryption - :language: java - :dedent: - - .. tab:: - :tabid: go - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_helpers.go - :start-after: start-client-encryption - :end-before: end-client-encryption - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionHelpers.cs - :start-after: start-client-encryption - :end-before: end-client-encryption - :language: csharp - :dedent: - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - Create your encrypted collection by using the encryption - helper method accessed through the ``ClientEncryption`` class. - This method automatically generates data encryption keys for your - encrypted fields and creates the encrypted collection: - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js - :start-after: start-create-encrypted-collection - :end-before: end-create-encrypted-collection - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - .. 
include:: /includes/tutorials/automatic/node-include-clientEncryption.rst - - Create your encrypted collection by using the encryption - helper method accessed through the ``ClientEncryption`` class. - This method automatically generates data encryption keys for your - encrypted fields and creates the encrypted collection: - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-helpers.js - :start-after: start-create-encrypted-collection - :end-before: end-create-encrypted-collection - :language: javascript - :dedent: - - .. tip:: Database vs. Database Name - - The method that creates the encrypted collection requires a reference - to a database *object* rather than the database *name*. You can - obtain this reference by using a method on your client object. - - .. tab:: - :tabid: python - - Create your encrypted collection by using the encryption - helper method accessed through the ``ClientEncryption`` class. - This method automatically generates data encryption keys for your - encrypted fields and creates the encrypted collection: - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_tutorial.py - :start-after: start-create-encrypted-collection - :end-before: end-create-encrypted-collection - :language: python - :dedent: - - .. tip:: Database vs. Database Name - - The method that creates the encrypted collection requires a reference - to a database *object* rather than the database *name*. You can - obtain this reference by using a method on your client object. - - .. tab:: - :tabid: java-sync - - Create your encrypted collection by using the encryption - helper method accessed through the ``ClientEncryption`` class. - This method automatically generates data encryption keys for your - encrypted fields and creates the encrypted collection: - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java - :start-after: start-create-encrypted-collection - :end-before: end-create-encrypted-collection - :language: java - :dedent: - - .. tip:: Database vs. Database Name - - The method that creates the encrypted collection requires a reference - to a database *object* rather than the database *name*. You can - obtain this reference by using a method on your client object. - - .. tab:: - :tabid: go - - The Golang version of this tutorial uses data models to - represent the document structure. Add the following - structs to your project to represent the data in your - collection: - - .. literalinclude:: /includes/qe-tutorials/go/models.go - :start-after: start-patient-document - :end-before: end-patient-document - :language: go - :dedent: - - .. literalinclude:: /includes/qe-tutorials/go/models.go - :start-after: start-patient-record - :end-before: end-patient-record - :language: go - :dedent: - - .. literalinclude:: /includes/qe-tutorials/go/models.go - :start-after: start-payment-info - :end-before: end-payment-info - :language: go - :dedent: - - After you've added these classes, create your encrypted - collection by using the encryption helper method accessed - through the ``ClientEncryption`` class. - This method automatically generates data encryption keys for your - encrypted fields and creates the encrypted collection: - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_tutorial.go - :start-after: start-create-encrypted-collection - :end-before: end-create-encrypted-collection - :language: go - :dedent: - - .. tip:: Database vs. 
Database Name - - The method that creates the encrypted collection requires a reference - to a database *object* rather than the database *name*. You can - obtain this reference by using a method on your client object. - - .. tab:: - :tabid: csharp - - The C# version of this tutorial uses separate classes as data models - to represent the document structure. - Add the following ``Patient``, ``PatientRecord``, and ``PatientBilling`` - classes to your project: - - .. literalinclude:: /includes/qe-tutorials/csharp/Patient.cs - :start-after: start-patient - :end-before: end-patient - :language: csharp - :dedent: - - .. literalinclude:: /includes/qe-tutorials/csharp/PatientRecord.cs - :start-after: start-patient-record - :end-before: end-patient-record - :language: csharp - :dedent: - - .. literalinclude:: /includes/qe-tutorials/csharp/PatientBilling.cs - :start-after: start-patient-billing - :end-before: end-patient-billing - :language: csharp - :dedent: - - After you've added these classes, create your encrypted collection by - using the encryption helper method accessed through the - ``ClientEncryption`` class. - This method automatically generates data encryption keys for your - encrypted fields and creates the encrypted collection: - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionTutorial.cs - :start-after: start-create-encrypted-collection - :end-before: end-create-encrypted-collection - :language: csharp - :dedent: - - .. tip:: Database vs. Database Name - - The method that creates the encrypted collection requires a reference - to a database *object* rather than the database *name*. You can - obtain this reference by using a method on your client object. - - .. _qe-gcip-insert: - - .. step:: Insert a Document with Encrypted Fields - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - Create a sample document that describes a patient's personal information. - Use the encrypted client to insert it into the ``patients`` collection, - as shown in the following example: - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js - :start-after: start-insert-document - :end-before: end-insert-document - :emphasize-lines: 15 - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - Create a sample document that describes a patient's personal information. - Use the encrypted client to insert it into the ``patients`` collection, - as shown in the following example: - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-tutorial.js - :start-after: start-insert-document - :end-before: end-insert-document - :emphasize-lines: 17 - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - Create a sample document that describes a patient's personal information. - Use the encrypted client to insert it into the ``patients`` collection, - as shown in the following example: - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_tutorial.py - :start-after: start-insert-document - :end-before: end-insert-document - :emphasize-lines: 15 - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - This tutorial uses POJOs as data models - to represent the document structure. To set up your application to - use POJOs, add the following code: - - .. 
literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java - :start-after: start-setup-application-pojo - :end-before: end-setup-application-pojo - :language: java - :dedent: - - To learn more about Java POJOs, see the `Plain Old Java Object - wikipedia article `__. - - This tutorial uses the following POJOs: - - - ``Patient`` - - ``PatientRecord`` - - ``PatientBilling`` - - You can view these classes in the `models package of the complete Java application - <{+sample-app-url-qe+}/java/src/main/java/com/mongodb/tutorials/qe/models>`__. - - Add these POJO classes to your application. Then, create an instance - of a ``Patient`` that describes a patient's personal information. Use - the encrypted client to insert it into the ``patients`` collection, - as shown in the following example: - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java - :start-after: start-insert-document - :end-before: end-insert-document - :emphasize-lines: 8 - :language: java - - .. tab:: - :tabid: go - - Create a sample document that describes a patient's personal information. - Use the encrypted client to insert it into the ``patients`` collection, - as shown in the following example: - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_tutorial.go - :start-after: start-insert-document - :end-before: end-insert-document - :emphasize-lines: 15 - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - Create a sample document that describes a patient's personal information. - Use the encrypted client to insert it into the ``patients`` collection, - as shown in the following example: - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionTutorial.cs - :start-after: start-insert-document - :end-before: end-insert-document - :emphasize-lines: 19 - :language: csharp - :dedent: - - .. step:: Query on an Encrypted Field - - The following code sample executes a find query on an encrypted field and - prints the decrypted data: - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js - :start-after: start-find-document - :end-before: end-find-document - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-tutorial.js - :start-after: start-find-document - :end-before: end-find-document - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_tutorial.py - :start-after: start-find-document - :end-before: end-find-document - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java - :start-after: start-find-document - :end-before: end-find-document - :language: java - :dedent: - - .. tab:: - :tabid: go - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_tutorial.go - :start-after: start-find-document - :end-before: end-find-document - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionTutorial.cs - :start-after: start-find-document - :end-before: end-find-document - :language: csharp - :dedent: - - The output of the preceding code sample should look similar to the - following: - - .. 
literalinclude:: /includes/qe-tutorials/encrypted-document.json - :language: json - :copyable: false - :dedent: - - .. include:: /includes/queryable-encryption/safe-content-warning.rst - -Learn More ----------- - -To learn how {+qe+} works, see -:ref:``. - -To learn more about the topics mentioned in this guide, see the -following links: - -- Learn more about {+qe+} components on the :ref:`Reference ` page. -- Learn how {+cmk-long+}s and {+dek-long+}s work on the :ref:`` page. -- See how KMS Providers manage your {+qe+} keys on the :ref:`` page. diff --git a/source/core/queryable-encryption/tutorials/kmip/kmip-automatic.txt b/source/core/queryable-encryption/tutorials/kmip/kmip-automatic.txt deleted file mode 100644 index 13fd1d1a727..00000000000 --- a/source/core/queryable-encryption/tutorials/kmip/kmip-automatic.txt +++ /dev/null @@ -1,1103 +0,0 @@ -.. _qe-tutorial-automatic-kmip: -.. _qe-tutorial-automatic-dek-kmip: - -=========================================================== -Use Automatic {+qe+} with KMIP -=========================================================== - -.. contents:: On this page - :local: - :backlinks: none - :depth: 2 - :class: singlecol - -Overview --------- - -This guide shows you how to build an application that implements the MongoDB -{+qe+} feature to automatically encrypt and decrypt document fields and use -a Key Management Interoperability Protocol (KMIP)-compliant key provider for -key management. - -After you complete the steps in this guide, you should have: - -- A {+cmk-long+} managed by a {+kmip-kms+} -- A working client application that inserts {+in-use-docs+} - using your {+cmk-long+} - -.. tip:: Customer Master Keys - - To learn more about the {+cmk-long+}, read the - :ref:`qe-reference-keys-key-vaults` - documentation. - -Before You Get Started ----------------------- - -.. include:: /includes/queryable-encryption/set-up-section.rst - -.. see:: Full Application - - To see the complete code for this sample application, - select the tab corresponding to your programming language and follow - the provided link. Each sample application repository includes a - ``README.md`` file that you can use to learn how to set up your environment - and run the application. - - .. tabs:: - - .. tab:: mongosh - :tabid: shell - - `Complete mongosh Application <{+sample-app-url-qe+}/mongosh/>`__ - - .. tab:: Node.js - :tabid: nodejs - - `Complete Node.js Application <{+sample-app-url-qe+}/node/>`__ - - .. tab:: Python - :tabid: python - - `Complete Python Application <{+sample-app-url-qe+}/python/>`__ - - .. tab:: Java - :tabid: java-sync - - `Complete Java Application <{+sample-app-url-qe+}/java/>`__ - - .. tab:: Go - :tabid: go - - `Complete Go Application <{+sample-app-url-qe+}/go/>`__ - - .. tab:: C# - :tabid: csharp - - `Complete C# Application <{+sample-app-url-qe+}/csharp/>`__ - -.. tabs-selector:: drivers - -Set Up the KMS --------------- - -.. procedure:: - :style: normal - - .. step:: Configure your {+kmip-kms-title+} - - .. include:: /includes/queryable-encryption/tutorials/automatic/kmip/configure.rst - - .. step:: Specify your Certificates - - .. _qe-kmip-tutorial-specify-your-certificates: - - .. include:: /includes/queryable-encryption/tutorials/automatic/kmip/certificates.rst - -Create the Application ----------------------- - -.. procedure:: - :style: normal - - .. step:: Assign Your Application Variables - - The code samples in this tutorial use the following variables to perform - the {+qe+} workflow: - - .. tabs-drivers:: - - .. 
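Because a {+kmip-kms+} is reached over mutually authenticated TLS, the client needs both the KMIP endpoint and the certificate files from the previous step. The following Python sketch shows the general shape; the endpoint and file paths are placeholders, not values from this tutorial.

.. code-block:: python

   # Sketch: a "kmip" KMS provider entry plus the TLS options that point at
   # the CA and client certificate files. Endpoint and paths are placeholders.
   kms_provider_credentials = {
       "kmip": {
           "endpoint": "localhost:5698",
       }
   }

   tls_options = {
       "kmip": {
           "tlsCAFile": "/path/to/ca.pem",
           "tlsCertificateKeyFile": "/path/to/client.pem",
       }
   }

   # These TLS options are later passed to the driver's encryption settings,
   # for example through the kms_tls_options parameter in PyMongo.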
tab:: - :tabid: shell - - - **kmsProviderName** - The KMS you're using to store your {+cmk-long+}. - Set this variable to ``"kmip"`` for this tutorial. - - **uri** - Your MongoDB deployment connection URI. Set your connection - URI in the ``MONGODB_URI`` environment variable or replace the value - directly. - - **keyVaultDatabaseName** - The database in MongoDB where your data - encryption keys (DEKs) will be stored. Set this variable - to ``"encryption"``. - - **keyVaultCollectionName** - The collection in MongoDB where your DEKs - will be stored. Set this variable to ``"__keyVault"``. - - **keyVaultNamespace** - The namespace in MongoDB where your DEKs will - be stored. Set this variable to the values of the ``keyVaultDatabaseName`` - and ``keyVaultCollectionName`` variables, separated by a period. - - **encryptedDatabaseName** - The database in MongoDB where your encrypted - data will be stored. Set this variable to ``"medicalRecords"``. - - **encryptedCollectionName** - The collection in MongoDB where your encrypted - data will be stored. Set this variable to ``"patients"``. - - You can declare these variables by using the following code: - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js - :start-after: start-setup-application-variables - :end-before: end-setup-application-variables - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - - **kmsProviderName** - The KMS you're using to store your {+cmk-long+}. - Set this variable to ``"kmip"`` for this tutorial. - - **uri** - Your MongoDB deployment connection URI. Set your connection - URI in the ``MONGODB_URI`` environment variable or replace the value - directly. - - **keyVaultDatabaseName** - The database in MongoDB where your data - encryption keys (DEKs) will be stored. Set this variable - to ``"encryption"``. - - **keyVaultCollectionName** - The collection in MongoDB where your DEKs - will be stored. Set this variable to ``"__keyVault"``. - - **keyVaultNamespace** - The namespace in MongoDB where your DEKs will - be stored. Set this variable to the values of the ``keyVaultDatabaseName`` - and ``keyVaultCollectionName`` variables, separated by a period. - - **encryptedDatabaseName** - The database in MongoDB where your encrypted - data will be stored. Set this variable to ``"medicalRecords"``. - - **encryptedCollectionName** - The collection in MongoDB where your encrypted - data will be stored. Set this variable to ``"patients"``. - - You can declare these variables by using the following code: - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-tutorial.js - :start-after: start-setup-application-variables - :end-before: end-setup-application-variables - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - - **kms_provider_name** - The KMS you're using to store your {+cmk-long+}. - Set this variable to ``"kmip"`` for this tutorial. - - **uri** - Your MongoDB deployment connection URI. Set your connection - URI in the ``MONGODB_URI`` environment variable or replace the value - directly. - - **key_vault_database_name** - The database in MongoDB where your data - encryption keys (DEKs) will be stored. Set this variable - to ``"encryption"``. - - **key_vault_collection_name** - The collection in MongoDB where your DEKs - will be stored. Set this variable to ``"__keyVault"``. - - **key_vault_namespace** - The namespace in MongoDB where your DEKs will - be stored. 
Set this variable to the values of the ``key_vault_database_name`` - and ``key_vault_collection_name`` variables, separated by a period. - - **encrypted_database_name** - The database in MongoDB where your encrypted - data will be stored. Set this variable to ``"medicalRecords"``. - - **encrypted_collection_name** - The collection in MongoDB where your encrypted - data will be stored. Set this variable to ``"patients"``. - - You can declare these variables by using the following code: - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_tutorial.py - :start-after: start-setup-application-variables - :end-before: end-setup-application-variables - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - - **kmsProviderName** - The KMS you're using to store your {+cmk-long+}. - Set this variable to ``"kmip"`` for this tutorial. - - **uri** - Your MongoDB deployment connection URI. Set your connection - URI in the ``MONGODB_URI`` environment variable or replace the value - directly. - - **keyVaultDatabaseName** - The database in MongoDB where your data - encryption keys (DEKs) will be stored. Set this variable - to ``"encryption"``. - - **keyVaultCollectionName** - The collection in MongoDB where your DEKs - will be stored. Set this variable to ``"__keyVault"``. - - **keyVaultNamespace** - The namespace in MongoDB where your DEKs will - be stored. Set this variable to the values of the ``keyVaultDatabaseName`` - and ``keyVaultCollectionName`` variables, separated by a period. - - **encryptedDatabaseName** - The database in MongoDB where your encrypted - data will be stored. Set this variable to ``"medicalRecords"``. - - **encryptedCollectionName** - The collection in MongoDB where your encrypted - data will be stored. Set this variable to ``"patients"``. - - You can declare these variables by using the following code: - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java - :start-after: start-setup-application-variables - :end-before: end-setup-application-variables - :language: java - :dedent: - - .. tab:: - :tabid: go - - - **kmsProviderName** - The KMS you're using to store your {+cmk-long+}. - Set this variable to ``"kmip"`` for this tutorial. - - **uri** - Your MongoDB deployment connection URI. Set your connection - URI in the ``MONGODB_URI`` environment variable or replace the value - directly. - - **keyVaultDatabaseName** - The database in MongoDB where your data - encryption keys (DEKs) will be stored. Set this variable - to ``"encryption"``. - - **keyVaultCollectionName** - The collection in MongoDB where your DEKs - will be stored. Set this variable to ``"__keyVault"``. - - **keyVaultNamespace** - The namespace in MongoDB where your DEKs will - be stored. Set this variable to the values of the ``keyVaultDatabaseName`` - and ``keyVaultCollectionName`` variables, separated by a period. - - **encryptedDatabaseName** - The database in MongoDB where your encrypted - data will be stored. Set this variable to ``"medicalRecords"``. - - **encryptedCollectionName** - The collection in MongoDB where your encrypted - data will be stored. Set this variable to ``"patients"``. - - You can declare these variables by using the following code: - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_tutorial.go - :start-after: start-setup-application-variables - :end-before: end-setup-application-variables - :language: go - :dedent: - - .. 
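Regardless of driver, the assignments described in these tabs reduce to the
same small set of values. The following mongosh-style sketch consolidates
them; it is a sketch only, and assumes the connection URI is supplied through
the ``MONGODB_URI`` environment variable as described above.

.. code-block:: javascript

   // Sketch only: the variable values described in the tabs above.
   const kmsProviderName = "kmip";
   const uri = process.env.MONGODB_URI; // or replace with your connection string
   const keyVaultDatabaseName = "encryption";
   const keyVaultCollectionName = "__keyVault";
   const keyVaultNamespace = `${keyVaultDatabaseName}.${keyVaultCollectionName}`; // "encryption.__keyVault"
   const encryptedDatabaseName = "medicalRecords";
   const encryptedCollectionName = "patients";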
tab:: - :tabid: csharp - - - **kmsProviderName** - The KMS you're using to store your {+cmk-long+}. - Set this value to ``"kmip"`` for this tutorial. - - **keyVaultDatabaseName** - The database in MongoDB where your data - encryption keys (DEKs) will be stored. Set the value of ``keyVaultDatabaseName`` - to ``"encryption"``. - - **keyVaultCollectionName** - The collection in MongoDB where your DEKs - will be stored. Set the value of ``keyVaultCollectionName`` to ``"__keyVault"``. - - **keyVaultNamespace** - The namespace in MongoDB where your DEKs will - be stored. Set ``keyVaultNamespace`` to a new ``CollectionNamespace`` object whose name - is the values of the ``keyVaultDatabaseName`` and ``keyVaultCollectionName`` variables, - separated by a period. - - **encryptedDatabaseName** - The database in MongoDB where your encrypted - data will be stored. Set the value of ``encryptedDatabaseName`` to ``"medicalRecords"``. - - **encryptedCollectionName** - The collection in MongoDB where your encrypted - data will be stored. Set the value of ``encryptedCollectionName`` to ``"patients"``. - - **uri** - Your MongoDB deployment connection URI. Set your connection - URI in the ``appsettings.json`` file or replace the value - directly. - - You can declare these variables by using the following code: - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionTutorial.cs - :start-after: start-setup-application-variables - :end-before: end-setup-application-variables - :language: csharp - :dedent: - - .. important:: {+key-vault-long-title+} Namespace Permissions - - The {+key-vault-long+} is in the ``encryption.__keyVault`` - namespace. Ensure that the database user your application uses to connect - to MongoDB has :ref:`ReadWrite ` - permissions on this namespace. - - .. include:: /includes/queryable-encryption/env-variables.rst - - .. step:: Create your Encrypted Collection - - .. procedure:: - :style: connected - - .. step:: Add Your {+kmip-kms-title+} KMS Credentials - - Create a variable containing the endpoint of your {+kmip-kms-no-hover+} - with the following structure: - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-helpers.js - :start-after: start-kmip-kms-credentials - :end-before: end-kmip-kms-credentials - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-helpers.js - :start-after: start-kmip-kms-credentials - :end-before: end-kmip-kms-credentials - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_helpers.py - :start-after: start-kmip-kms-credentials - :end-before: end-kmip-kms-credentials - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/util/QueryableEncryptionHelpers.java - :start-after: start-kmip-kms-credentials - :end-before: end-kmip-kms-credentials - :language: java - :dedent: - - .. tab:: - :tabid: go - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_helpers.go - :start-after: start-kmip-kms-credentials - :end-before: end-kmip-kms-credentials - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - .. 
literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionHelpers.cs - :start-after: start-kmip-kms-credentials - :end-before: end-kmip-kms-credentials - :language: csharp - :dedent: - - .. step:: Add your {+cmk-long+} Credentials - - Create an empty object as shown in the following code example. - This prompts your {+kmip-kms+} to generate a new {+cmk-long+}. - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-helpers.js - :start-after: start-kmip-local-cmk-credentials - :end-before: end-kmip-local-cmk-credentials - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-helpers.js - :start-after: start-kmip-local-cmk-credentials - :end-before: end-kmip-local-cmk-credentials - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_helpers.py - :start-after: start-kmip-local-cmk-credentials - :end-before: end-kmip-local-cmk-credentials - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/util/QueryableEncryptionHelpers.java - :start-after: start-kmip-local-cmk-credentials - :end-before: end-kmip-local-cmk-credentials - :language: java - :dedent: - - .. tab:: - :tabid: go - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_helpers.go - :start-after: start-kmip-local-cmk-credentials - :end-before: end-kmip-local-cmk-credentials - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionHelpers.cs - :start-after: start-kmip-local-cmk-credentials - :end-before: end-kmip-local-cmk-credentials - :language: csharp - :dedent: - - .. step:: Set Your Automatic Encryption Options - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - Create an ``autoEncryptionOptions`` object that contains the following - options: - - - The namespace of your {+key-vault-long+} - - The ``kmsProviderCredentials`` object, which contains your - {+kmip-hover+} endpoint - - The ``tlsOptions`` object that you created in the :ref:`Specify your - Certificates ` - step - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-helpers.js - :start-after: start-kmip-encryption-options - :end-before: end-kmip-encryption-options - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - Create an ``autoEncryptionOptions`` object that contains the following - options: - - - The namespace of your {+key-vault-long+} - - The ``kmsProviders`` object, which contains your - {+kmip-hover+} endpoint - - The ``sharedLibraryPathOptions`` object, which contains the path to - your {+shared-library+} - - The ``tlsOptions`` object that you created in the :ref:`Specify your - Certificates ` - step - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-helpers.js - :start-after: start-kmip-encryption-options - :end-before: end-kmip-encryption-options - :emphasize-lines: 5-10 - :language: javascript - :dedent: - - .. 
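The exact option names differ slightly between drivers, but the overall shape
of the KMIP credentials and automatic encryption options is similar across
languages. The following JavaScript sketch illustrates that shape. The
endpoint value and the TLS option names (``tlsCAFile``,
``tlsCertificateKeyFile``) are assumptions for illustration, not values copied
from the tutorial source files; check your driver's documentation for the
exact names it expects.

.. code-block:: javascript

   // Sketch only: illustrative shape of the KMIP provider credentials and
   // the automatic encryption options described above.
   const kmsProviderCredentials = {
     kmip: {
       endpoint: "localhost:5698", // assumption: a locally running KMIP server
     },
   };

   const autoEncryptionOptions = {
     keyVaultNamespace: keyVaultNamespace, // for example, "encryption.__keyVault"
     kmsProviders: kmsProviderCredentials,
     tlsOptions: {
       kmip: {
         tlsCAFile: "/path/to/ca.pem",                 // assumption: CA certificate path
         tlsCertificateKeyFile: "/path/to/client.pem", // assumption: client certificate path
       },
     },
   };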
tab:: - :tabid: python - - Create an ``AutoEncryptionOpts`` object that contains the following - options: - - - The ``kms_provider_credentials`` object, which contains your - {+kmip-hover+} endpoint - - The namespace of your {+key-vault-long+} - - The path to your {+shared-library+} - - The ``tls_options`` object that you created in the :ref:`Specify your - Certificates ` - step - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_helpers.py - :start-after: start-kmip-encryption-options - :end-before: end-kmip-encryption-options - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - Create an ``AutoEncryptionSettings`` object that contains the following - options: - - - The namespace of your {+key-vault-long+} - - The ``kmsProviderCredentials`` object, which contains your - {+kmip-hover+} endpoint - - The ``extraOptions`` object, which contains the path to - your {+shared-library+} - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/util/QueryableEncryptionHelpers.java - :start-after: start-auto-encryption-options - :end-before: end-auto-encryption-options - :emphasize-lines: 4-8 - :language: java - :dedent: - - .. tab:: - :tabid: go - - Create an ``AutoEncryption`` object that contains the following - options: - - - The namespace of your {+key-vault-long+} - - The ``kmsProviderCredentials`` object, which contains your - {+kmip-hover+} endpoint - - The ``cryptSharedLibraryPath`` object, which contains the path to - your {+shared-library+} - - The ``tlsConfig`` object that you created in the :ref:`Specify your - Certificates ` - step - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_helpers.go - :start-after: start-kmip-encryption-options - :end-before: end-kmip-encryption-options - :emphasize-lines: 5-9 - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - Create an ``AutoEncryptionOptions`` object that contains the following - options: - - - The namespace of your {+key-vault-long+} - - The ``kmsProviderCredentials`` object, which contains your - {+kmip-hover+} endpoint - - The ``extraOptions`` object, which contains the path to - your {+shared-library+} - - The ``tlsOptions`` object that you created in the :ref:`Specify your - Certificates ` - step - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionHelpers.cs - :start-after: start-kmip-encryption-options - :end-before: end-kmip-encryption-options - :emphasize-lines: 7-11 - :language: csharp - :dedent: - - .. include:: /includes/queryable-encryption/shared-lib-learn-more.rst - - .. step:: Create a Client to Set Up an Encrypted Collection - - To create a client used to encrypt and decrypt data in - your collection, instantiate a new ``MongoClient`` by using your - connection URI and your automatic encryption options. - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js - :start-after: start-create-client - :end-before: end-create-client - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-tutorial.js - :start-after: start-create-client - :end-before: end-create-client - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_tutorial.py - :start-after: start-create-client - :end-before: end-create-client - :language: python - :dedent: - - .. 
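In mongosh, for example, this step amounts to opening a new connection that
carries the automatic encryption options. The following is a minimal sketch,
assuming the ``uri`` and ``autoEncryptionOptions`` values from the previous
steps:

.. code-block:: javascript

   // Sketch only: a client that automatically encrypts and decrypts fields.
   const encryptedClient = Mongo(uri, autoEncryptionOptions);
   const encryptedDb = encryptedClient.getDB(encryptedDatabaseName);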
tab:: - :tabid: java-sync - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java - :start-after: start-create-client - :end-before: end-create-client - :language: java - :dedent: - - .. tab:: - :tabid: go - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_tutorial.go - :start-after: start-create-client - :end-before: end-create-client - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionTutorial.cs - :start-after: start-create-client - :end-before: end-create-client - :language: csharp - :dedent: - - .. step:: Specify Fields to Encrypt - - To encrypt a field, add it to the {+enc-schema+}. - To enable queries on a field, add the "queries" - property. Create the {+enc-schema+} as follows: - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js - :start-after: start-encrypted-fields-map - :end-before: end-encrypted-fields-map - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-tutorial.js - :start-after: start-encrypted-fields-map - :end-before: end-encrypted-fields-map - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_tutorial.py - :start-after: start-encrypted-fields-map - :end-before: end-encrypted-fields-map - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java - :start-after: start-encrypted-fields-map - :end-before: end-encrypted-fields-map - :language: java - :dedent: - - .. tab:: - :tabid: go - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_tutorial.go - :start-after: start-encrypted-fields-map - :end-before: end-encrypted-fields-map - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionTutorial.cs - :start-after: start-encrypted-fields-map - :end-before: end-encrypted-fields-map - :language: csharp - :dedent: - - .. note:: - - In the previous code sample, both the "ssn" and - "billing" fields are encrypted, but only the "ssn" - field can be queried. - - .. step:: Create the Collection - - Instantiate ``ClientEncryption`` to access the API for the - encryption helper methods. - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js - :start-after: start-client-encryption - :end-before: end-client-encryption - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-helpers.js - :start-after: start-client-encryption - :end-before: end-client-encryption - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_helpers.py - :start-after: start-client-encryption - :end-before: end-client-encryption - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java - :start-after: start-client-encryption - :end-before: end-client-encryption - :language: java - :dedent: - - .. 
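In mongosh, for example, the ``ClientEncryption`` object is obtained directly
from the encrypted connection. The following is a minimal sketch, assuming the
``encryptedClient`` created in the previous step:

.. code-block:: javascript

   // Sketch only: access the encryption helper methods.
   const clientEncryption = encryptedClient.getClientEncryption();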
tab:: - :tabid: go - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_helpers.go - :start-after: start-client-encryption - :end-before: end-client-encryption - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionHelpers.cs - :start-after: start-client-encryption - :end-before: end-client-encryption - :language: csharp - :dedent: - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - Create your encrypted collection by using the encryption - helper method accessed through the ``ClientEncryption`` class. - This method automatically generates data encryption keys for your - encrypted fields and creates the encrypted collection: - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js - :start-after: start-create-encrypted-collection - :end-before: end-create-encrypted-collection - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - .. include:: /includes/tutorials/automatic/node-include-clientEncryption.rst - - Create your encrypted collection by using the encryption - helper method accessed through the ``ClientEncryption`` class. - This method automatically generates data encryption keys for your - encrypted fields and creates the encrypted collection: - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-helpers.js - :start-after: start-create-encrypted-collection - :end-before: end-create-encrypted-collection - :language: javascript - :dedent: - - .. tip:: Database vs. Database Name - - The method that creates the encrypted collection requires a reference - to a database *object* rather than the database *name*. You can - obtain this reference by using a method on your client object. - - .. tab:: - :tabid: python - - Create your encrypted collection by using the encryption - helper method accessed through the ``ClientEncryption`` class. - This method automatically generates data encryption keys for your - encrypted fields and creates the encrypted collection: - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_tutorial.py - :start-after: start-create-encrypted-collection - :end-before: end-create-encrypted-collection - :language: python - :dedent: - - .. tip:: Database vs. Database Name - - The method that creates the encrypted collection requires a reference - to a database *object* rather than the database *name*. You can - obtain this reference by using a method on your client object. - - .. tab:: - :tabid: java-sync - - Create your encrypted collection by using the encryption - helper method accessed through the ``ClientEncryption`` class. - This method automatically generates data encryption keys for your - encrypted fields and creates the encrypted collection: - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java - :start-after: start-create-encrypted-collection - :end-before: end-create-encrypted-collection - :language: java - :dedent: - - .. tip:: Database vs. Database Name - - The method that creates the encrypted collection requires a reference - to a database *object* rather than the database *name*. You can - obtain this reference by using a method on your client object. - - .. tab:: - :tabid: go - - The Golang version of this tutorial uses data models to - represent the document structure. Add the following - structs to your project to represent the data in your - collection: - - .. 
literalinclude:: /includes/qe-tutorials/go/models.go - :start-after: start-patient-document - :end-before: end-patient-document - :language: go - :dedent: - - .. literalinclude:: /includes/qe-tutorials/go/models.go - :start-after: start-patient-record - :end-before: end-patient-record - :language: go - :dedent: - - .. literalinclude:: /includes/qe-tutorials/go/models.go - :start-after: start-payment-info - :end-before: end-payment-info - :language: go - :dedent: - - After you've added these classes, create your encrypted - collection by using the encryption helper method accessed - through the ``ClientEncryption`` class. - This method automatically generates data encryption keys for your - encrypted fields and creates the encrypted collection: - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_tutorial.go - :start-after: start-create-encrypted-collection - :end-before: end-create-encrypted-collection - :language: go - :dedent: - - .. tip:: Database vs. Database Name - - The method that creates the encrypted collection requires a reference - to a database *object* rather than the database *name*. You can - obtain this reference by using a method on your client object. - - .. tab:: - :tabid: csharp - - The C# version of this tutorial uses separate classes as data models - to represent the document structure. - Add the following ``Patient``, ``PatientRecord``, and ``PatientBilling`` - classes to your project: - - .. literalinclude:: /includes/qe-tutorials/csharp/Patient.cs - :start-after: start-patient - :end-before: end-patient - :language: csharp - :dedent: - - .. literalinclude:: /includes/qe-tutorials/csharp/PatientRecord.cs - :start-after: start-patient-record - :end-before: end-patient-record - :language: csharp - :dedent: - - .. literalinclude:: /includes/qe-tutorials/csharp/PatientBilling.cs - :start-after: start-patient-billing - :end-before: end-patient-billing - :language: csharp - :dedent: - - After you've added these classes, create your encrypted collection by - using the encryption helper method accessed through the - ``ClientEncryption`` class. - This method automatically generates data encryption keys for your - encrypted fields and creates the encrypted collection: - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionTutorial.cs - :start-after: start-create-encrypted-collection - :end-before: end-create-encrypted-collection - :language: csharp - :dedent: - - .. tip:: Database vs. Database Name - - The method that creates the encrypted collection requires a reference - to a database *object* rather than the database *name*. You can - obtain this reference by using a method on your client object. - - .. _qe-kmip-insert: - - .. step:: Insert a Document with Encrypted Fields - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - Create a sample document that describes a patient's personal information. - Use the encrypted client to insert it into the ``patients`` collection, - as shown in the following example: - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js - :start-after: start-insert-document - :end-before: end-insert-document - :emphasize-lines: 15 - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - Create a sample document that describes a patient's personal information. - Use the encrypted client to insert it into the ``patients`` collection, - as shown in the following example: - - .. 
literalinclude:: /includes/qe-tutorials/node/queryable-encryption-tutorial.js - :start-after: start-insert-document - :end-before: end-insert-document - :emphasize-lines: 17 - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - Create a sample document that describes a patient's personal information. - Use the encrypted client to insert it into the ``patients`` collection, - as shown in the following example: - - .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_tutorial.py - :start-after: start-insert-document - :end-before: end-insert-document - :emphasize-lines: 15 - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - This tutorial uses POJOs as data models - to represent the document structure. To set up your application to - use POJOs, add the following code: - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java - :start-after: start-setup-application-pojo - :end-before: end-setup-application-pojo - :language: java - :dedent: - - To learn more about Java POJOs, see the `Plain Old Java Object - wikipedia article `__. - - This tutorial uses the following POJOs: - - - ``Patient`` - - ``PatientRecord`` - - ``PatientBilling`` - - You can view these classes in the `models package of the complete Java application - <{+sample-app-url-qe+}/java/src/main/java/com/mongodb/tutorials/qe/models>`__. - - Add these POJO classes to your application. Then, create an instance - of a ``Patient`` that describes a patient's personal information. Use - the encrypted client to insert it into the ``patients`` collection, - as shown in the following example: - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java - :start-after: start-insert-document - :end-before: end-insert-document - :emphasize-lines: 8 - :language: java - :dedent: - - .. tab:: - :tabid: go - - Create a sample document that describes a patient's personal information. - Use the encrypted client to insert it into the ``patients`` collection, - as shown in the following example: - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_tutorial.go - :start-after: start-insert-document - :end-before: end-insert-document - :emphasize-lines: 15 - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - Create a sample document that describes a patient's personal information. - Use the encrypted client to insert it into the ``patients`` collection, - as shown in the following example: - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionTutorial.cs - :start-after: start-insert-document - :end-before: end-insert-document - :emphasize-lines: 19 - :language: csharp - :dedent: - - .. step:: Query on an Encrypted Field - - The following code sample executes a find query on an encrypted field and - prints the decrypted data: - - .. tabs-drivers:: - - .. tab:: - :tabid: shell - - .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js - :start-after: start-find-document - :end-before: end-find-document - :language: javascript - :dedent: - - .. tab:: - :tabid: nodejs - - .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-tutorial.js - :start-after: start-find-document - :end-before: end-find-document - :language: javascript - :dedent: - - .. tab:: - :tabid: python - - .. 
literalinclude:: /includes/qe-tutorials/python/queryable_encryption_tutorial.py - :start-after: start-find-document - :end-before: end-find-document - :language: python - :dedent: - - .. tab:: - :tabid: java-sync - - .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java - :start-after: start-find-document - :end-before: end-find-document - :language: java - :dedent: - - .. tab:: - :tabid: go - - .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_tutorial.go - :start-after: start-find-document - :end-before: end-find-document - :language: go - :dedent: - - .. tab:: - :tabid: csharp - - .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionTutorial.cs - :start-after: start-find-document - :end-before: end-find-document - :language: csharp - :dedent: - - The output of the preceding code sample should look similar to the - following: - - .. literalinclude:: /includes/qe-tutorials/encrypted-document.json - :language: json - :copyable: false - :dedent: - - .. include:: /includes/queryable-encryption/safe-content-warning.rst - -Learn More ----------- - -To learn how {+qe+} works, see -:ref:``. - -To learn more about the topics mentioned in this guide, see the -following links: - -- Learn more about {+qe+} components on the :ref:`Reference ` page. -- Learn how {+cmk-long+}s and {+dek-long+}s work on the :ref:`` page. -- See how KMS Providers manage your {+qe+} keys on the :ref:`` page. diff --git a/source/core/ranged-sharding.txt b/source/core/ranged-sharding.txt index bb3cc8a3568..0c1a0118ba0 100644 --- a/source/core/ranged-sharding.txt +++ b/source/core/ranged-sharding.txt @@ -54,12 +54,8 @@ to use as the :term:`shard key`. - Starting in MongoDB 5.0, you can :ref:`reshard a collection ` by changing a collection's shard key. - - Starting in MongoDB 4.4, you can :ref:`refine a shard key - ` by adding a suffix field or fields to the existing - shard key. - - In MongoDB 4.2 and earlier, the choice of shard key cannot - be changed after sharding. - + - You can :ref:`refine a shard key ` by adding a suffix + field or fields to the existing shard key. Shard a Populated Collection ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/source/core/read-isolation-consistency-recency.txt b/source/core/read-isolation-consistency-recency.txt index 4b9c20f1258..18203fb6544 100644 --- a/source/core/read-isolation-consistency-recency.txt +++ b/source/core/read-isolation-consistency-recency.txt @@ -117,7 +117,7 @@ For causally related operations: members and is durable. - Write operations with :writeconcern:`"majority"` write concern; - i.e. the write operations that request acknowledgement that the + i.e. the write operations that request acknowledgment that the operation has been applied to a majority of the replica set's voting members. diff --git a/source/core/read-preference-hedge-option.txt b/source/core/read-preference-hedge-option.txt index 22f50aa5690..8b5ac9b9e34 100644 --- a/source/core/read-preference-hedge-option.txt +++ b/source/core/read-preference-hedge-option.txt @@ -6,9 +6,9 @@ Hedged Read Option .. default-domain:: mongodb -Starting in MongoDB 4.4 for sharded clusters, you can specify the use -of :ref:`hedged reads ` for non-``primary`` -:doc:`read preferences `. +You can specify the use of :ref:`hedged reads ` for +non-``primary`` :ref:`read preferences ` on sharded +clusters. 
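For example, the following mongosh sketch requests hedged reads for a single
query by passing a hedge option to ``cursor.readPref()``. The collection and
filter are assumptions for illustration.

.. code-block:: javascript

   // Sketch only: use a non-primary read preference and request hedged reads.
   db.restaurants.find( { cuisine: "Italian" } ).readPref(
      "nearest",          // non-primary read preference mode
      null,               // no tag set
      { enabled: true }   // hedged read option
   )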
With hedged reads, the :binary:`~bin.mongos` instances can route read operations to two replica set members per each queried shard and return @@ -16,8 +16,8 @@ results from the first respondent per shard. .. include:: /includes/list-hedged-reads-operations.rst -To specify hedged read for a read preference, MongoDB 4.4 introduces -the hedged read option for read preferences. +To specify hedged read for a read preference, use the hedged read option for +read preferences. Enable Hedged Reads ------------------- diff --git a/source/core/read-preference-mechanics.txt b/source/core/read-preference-mechanics.txt index 85ecec7de2a..17a1c107ca8 100644 --- a/source/core/read-preference-mechanics.txt +++ b/source/core/read-preference-mechanics.txt @@ -76,14 +76,12 @@ settings. The read preference is re-evaluated for each operation. Hedged Reads ```````````` -Starting in version 4.4, :binary:`~bin.mongos` supports :ref:`hedged -reads ` for non-``primary`` :doc:`read preferences -` modes. That is, :binary:`~bin.mongos` can send -an additional read to another member, if available, to hedge the read -operation if using non-``primary`` :doc:`read preferences -`. The additional read sent to hedge the read -operation uses the ``maxTimeMS`` value of -:parameter:`maxTimeMSForHedgedReads`. +:binary:`~bin.mongos` supports :ref:`hedged reads ` for +non-``primary`` :ref:`read preference ` modes. +That is, :binary:`~bin.mongos` can send an additional read to another member, +if available, to hedge the read operation if using non-``primary`` +read preferences. The additional read sent to hedge the read operation +uses the ``maxTimeMS`` value of :parameter:`maxTimeMSForHedgedReads`. .. include:: /includes/list-hedged-reads-operations.rst diff --git a/source/core/read-preference-use-cases.txt b/source/core/read-preference-use-cases.txt index c61e88c00c0..2264a80e724 100644 --- a/source/core/read-preference-use-cases.txt +++ b/source/core/read-preference-use-cases.txt @@ -53,8 +53,8 @@ read preference modes: Use :readmode:`primaryPreferred` if you want an application to read from the primary under normal circumstances, but to - allow stale reads from secondaries when the primary is unavailable. This provides a - "read-only mode" for your application during a failover. + allow :term:`stale reads ` from secondaries when the primary is + unavailable. .. _read-preference-counter-indications: diff --git a/source/core/read-preference.txt b/source/core/read-preference.txt index 61171a50e00..fc32c9805c5 100644 --- a/source/core/read-preference.txt +++ b/source/core/read-preference.txt @@ -14,6 +14,9 @@ Read Preference :name: genre :values: reference +.. meta:: + :description: Read preference describes how MongoDB clients route read operations to the members of a replica set. + .. contents:: On this page :local: :backlinks: none @@ -27,7 +30,7 @@ Read preference consists of the :ref:`read preference mode `, the :ref:`maxStalenessSeconds ` option, and the :ref:`hedged read ` option. :ref:`Hedged read -` option is available for MongoDB 4.4+ sharded +` option is available for sharded clusters for reads that use non-``primary`` read preference. .. _read-pref-summary: @@ -40,9 +43,8 @@ The following table lists a brief summary of the read preference modes: .. note:: - Starting in version 4.4, non-``primary`` read preference modes - support :ref:`hedged read ` on sharded - clusters. + Non-``primary`` read preference modes support + :ref:`hedged read ` on sharded clusters. .. 
include:: /includes/read-preference-modes-table.rst @@ -118,7 +120,7 @@ Read Preference Modes .. note:: - Starting in version 4.4, :readmode:`primaryPreferred` supports + Read preference :readmode:`primaryPreferred` supports :ref:`hedged reads ` on sharded clusters. .. readmode:: secondary @@ -143,7 +145,7 @@ Read Preference Modes .. note:: - Starting in version 4.4, :readmode:`secondary` supports + Read preference :readmode:`secondary` supports :ref:`hedged reads ` on sharded clusters. .. readmode:: secondaryPreferred @@ -160,7 +162,7 @@ Read Preference Modes .. note:: - Starting in version 4.4, :readmode:`secondaryPreferred` supports + Read preference :readmode:`secondaryPreferred` supports :ref:`hedged reads ` on sharded clusters. .. readmode:: nearest @@ -197,7 +199,7 @@ Read Preference Modes .. note:: - Starting in version 4.4, read preference :readmode:`nearest`, by + Read preference :readmode:`nearest`, by default, specifies the use of :ref:`hedged reads ` for reads on a sharded cluster. diff --git a/source/core/replica-set-architecture-three-members.txt b/source/core/replica-set-architecture-three-members.txt index a1c67396947..35e1baa954f 100644 --- a/source/core/replica-set-architecture-three-members.txt +++ b/source/core/replica-set-architecture-three-members.txt @@ -31,11 +31,11 @@ Primary with Two Secondary Members (P-S-S) A replica set with three members that store data has: -- One :doc:`primary `. +- One :ref:`primary `. -- Two :doc:`secondary ` members. Both - secondaries can become the primary in an :doc:`election - `. +- Two :ref:`secondary ` members. Both + secondaries can become the primary in an :ref:`election + `. .. include:: /images/replica-set-primary-with-two-secondaries.rst @@ -61,11 +61,11 @@ Primary with a Secondary and an Arbiter (PSA) A three member replica set with a two members that store data has: -- One :doc:`primary `. +- One :ref:`primary `. -- One :doc:`secondary ` member. The - secondary can become primary in an :doc:`election - `. +- One :ref:`secondary ` member. The + secondary can become primary in an :ref:`election + `. - One :ref:`arbiter `. The arbiter only votes in elections. diff --git a/source/core/replica-set-architectures.txt b/source/core/replica-set-architectures.txt index 785c20ea846..42df46e7a05 100644 --- a/source/core/replica-set-architectures.txt +++ b/source/core/replica-set-architectures.txt @@ -1,3 +1,10 @@ +.. facet:: + :name: genre + :values: reference + +.. meta:: + :keywords: replica set members, replica set capacity, voting members, arbiter + .. _replica-set-deployment-overview: .. _replica-set-architecture: @@ -22,6 +29,8 @@ three-member replica set. These sets provide redundancy and fault tolerance. Avoid complexity when possible, but let your application requirements dictate the architecture. +.. include:: /includes/replication/note-replica-set-major-versions.rst + Strategies ---------- @@ -55,8 +64,6 @@ it may be possible to place an arbiter into environments that you would not place other members of the replica set. Consult your security policies. -.. include:: /includes/extracts/arbiters-and-pvs-with-reference.rst - .. include:: /includes/admonition-multiple-arbiters.rst .. _replica-set-architectures-consider-fault-tolerance: @@ -169,7 +176,7 @@ Target Operations with Tag Sets Use :ref:`replica set tag sets ` to target read operations to specific members or to customize write -concern to request acknowledgement from specific members. +concern to request acknowledgment from specific members. .. 
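For example, the following mongosh sketch targets reads at secondaries that
carry a particular tag. The tag name and value are assumptions for
illustration; use the tags defined in your own replica set configuration.

.. code-block:: javascript

   // Sketch only: route reads to secondaries tagged { "datacenter": "east" }.
   db.orders.find( { status: "pending" } ).readPref( "secondary", [ { datacenter: "east" } ] )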
seealso:: diff --git a/source/core/replica-set-elections.txt b/source/core/replica-set-elections.txt index 86c31cc7cba..136ac1ce06c 100644 --- a/source/core/replica-set-elections.txt +++ b/source/core/replica-set-elections.txt @@ -94,10 +94,9 @@ not seek election. For details, see Mirrored Reads ~~~~~~~~~~~~~~ -Starting in version 4.4, MongoDB provides :ref:`mirrored reads -` to pre-warm electable secondary members' cache with -the most recently accessed data. With mirrored reads, the primary can -mirror a subset of :ref:`operations +MongoDB provides :ref:`mirrored reads ` to pre-warm +electable secondary members' cache with the most recently accessed data. +With mirrored reads, the primary can mirror a subset of :ref:`operations ` that it receives and send them to a subset of electable secondaries. Pre-warming the cache of a secondary can help restore performance more quickly after an election. @@ -124,11 +123,11 @@ Network Partition A :term:`network partition` may segregate a primary into a partition with a minority of nodes. When the primary detects that it can only see -a minority of nodes in the replica set, the primary steps down as -primary and becomes a secondary. Independently, a member in the -partition that can communicate with a :data:`majority -` of the nodes (including itself) -holds an election to become the new primary. +a minority of voting nodes in the replica set, the primary steps down +and becomes a secondary. Independently, a member in the partition that +can communicate with a :data:`majority +` of the voting nodes (including +itself) holds an election to become the new primary. Voting Members diff --git a/source/core/replica-set-members.txt b/source/core/replica-set-members.txt index 1250c0186fb..539a2328bab 100644 --- a/source/core/replica-set-members.txt +++ b/source/core/replica-set-members.txt @@ -36,8 +36,7 @@ adding another secondary), you may choose to include an :ref:`arbiter ` but does not hold data (i.e. does not provide data redundancy). -A replica set can have up to :ref:`50 members -<3.0-replica-sets-max-members>` but only 7 voting members. +A replica set can have up to 50 members but only 7 voting members. .. seealso:: diff --git a/source/core/replica-set-oplog.txt b/source/core/replica-set-oplog.txt index ebd8e8976ff..a34be39550f 100644 --- a/source/core/replica-set-oplog.txt +++ b/source/core/replica-set-oplog.txt @@ -6,6 +6,10 @@ Replica Set Oplog .. default-domain:: mongodb +.. facet:: + :name: genre + :values: reference + .. contents:: On this page :local: :backlinks: none @@ -14,20 +18,13 @@ Replica Set Oplog The :term:`oplog` (operations log) is a special :term:`capped collection` that keeps a rolling record of all operations that modify -the data stored in your databases. +the data stored in your databases. If write operations do not modify any +data or fail, they do not create oplog entries. Unlike other capped collections, the oplog can grow past its configured size limit to avoid deleting the :data:`majority commit point `. -.. versionadded:: 4.4 - - MongoDB 4.4 supports specifying a minimum oplog retention - period in hours, where MongoDB only removes an oplog entry if: - - - The oplog has reached the maximum configured size, *and* - - The oplog entry is older than the configured number of hours. - MongoDB applies database operations on the :term:`primary` and then records the operations on the primary's oplog. 
The :term:`secondary` members then copy and apply @@ -50,36 +47,39 @@ Oplog Size ---------- When you start a replica set member for the first time, MongoDB creates -an oplog of a default size if you do not specify the oplog size. +an oplog of a default size if you do not specify the oplog size. For Unix and Windows systems The default oplog size depends on the storage engine: .. list-table:: - :widths: 30 30 20 20 + :widths: 50 50 :header-rows: 1 * - Storage Engine - Default Oplog Size - - Lower Bound - - Upper Bound - * - :doc:`/core/inmemory` + * - :ref:`storage-wiredtiger` - - 5% of physical memory + - 5% of free disk space - - 50 MB + * - :ref:`storage-inmemory` - - 50 GB + - 5% of physical memory - * - :doc:`/core/wiredtiger` - - 5% of free disk space - - 990 MB - - 50 GB + The default oplog size has the following constraints: + + - The minimum oplog size is 990 MB. If 5% of free disk space or + physical memory (whichever is applicable based on your storage + engine) is less than 990 MB, the default oplog size is 990 MB. + + - The maximum default oplog size is 50 GB. If 5% of free disk space or + physical memory (whichever is applicable based on your storage + engine) is greater than 50 GB, the default oplog size is 50 GB. For 64-bit macOS systems - The default oplog size is 192 MB of either physical memory or free - disk space depending on the storage engine: + The default oplog size is 192 MB of either free disk space or + physical memory depending on the storage engine: .. list-table:: :widths: 50 50 @@ -88,13 +88,13 @@ For 64-bit macOS systems * - Storage Engine - Default Oplog Size - * - :doc:`/core/inmemory` - - - 192 MB of physical memory + * - :ref:`storage-wiredtiger` - * - :doc:`/core/wiredtiger` - 192 MB of free disk space + * - :ref:`storage-inmemory` + + - 192 MB of physical memory In most cases, the default oplog size is sufficient. For example, if an oplog is 5% of free disk space and fills up in 24 hours of operations, @@ -115,9 +115,7 @@ oplog dynamically without restarting the :binary:`~bin.mongod` process. Minimum Oplog Retention Period ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -.. versionadded:: 4.4 - - .. include:: /includes/extracts/4.4-changes-minimum-oplog-retention-period.rst +.. include:: /includes/extracts/4.4-changes-minimum-oplog-retention-period.rst To configure the minimum oplog retention period when starting the :binary:`~bin.mongod`, either: @@ -250,7 +248,7 @@ the ``local.oplog.rs`` collection from a standalone MongoDB instance. both :ref:`replication` and recovery of a node if the node goes down. Starting in MongoDB 5.0, it is no longer possible to perform manual -write operations to the :doc:`oplog ` on a +write operations to the :ref:`oplog ` on a cluster running as a :ref:`replica set `. Performing write operations to the oplog when running as a :term:`standalone instance ` should only be done with diff --git a/source/core/replica-set-rollbacks.txt b/source/core/replica-set-rollbacks.txt index be052dbe58d..7c8d5a7df69 100644 --- a/source/core/replica-set-rollbacks.txt +++ b/source/core/replica-set-rollbacks.txt @@ -52,90 +52,57 @@ Rollback Data By default, when a rollback occurs, MongoDB writes the rollback data to :term:`BSON` files. -.. note:: Rollback Directory Change - - Starting in Mongo 4.4, the rollback directory for a collection is named - after the collection's UUID rather than the collection namespace. - -.. 
tabs:: +For each collection whose data is rolled back, the rollback files are located in +a ``/rollback/`` directory and have filenames of the +form: - .. tab:: MongoDB 4.4+ - :tabid: 4.4 +.. code-block:: none + :copyable: false - For each collection whose data is rolled back, the - rollback files are located in a ``/rollback/`` - directory and have filenames of the form: - - .. code-block:: none - :copyable: false - - removed..bson - - For example, if data for the collection ``comments`` in the - ``reporting`` database rolled back: - - .. code-block:: none - :copyable: false - - /rollback/20f74796-d5ea-42f5-8c95-f79b39bad190/removed.2020-02-19T04-57-11.0.bson - - where ```` is the :binary:`~bin.mongod`'s :setting:`~storage.dbPath`. - - .. tip:: Collection Name - - To get the collection name, you can search for ``rollback - file`` in the MongoDB log. For example, if the log file is - ``/var/log/mongodb/mongod.log``, you can use ``grep`` to - search for instances of ``"rollback file"`` in the log: - - .. code-block:: bash - - grep "rollback file" /var/log/mongodb/mongod.log - - Alternatively, you can loop through all the databases and run - :method:`db.getCollectionInfos()` for the specific UUID until - you get a match. For example: + removed..bson - .. code-block:: javascript +For example, if data for the collection ``comments`` in the ``reporting`` +database rolled back: - var mydatabases=db.adminCommand("listDatabases").databases; - var foundcollection=false; +.. code-block:: none + :copyable: false - for (var i = 0; i < mydatabases.length; i++) { - let mdb = db.getSiblingDB(mydatabases[i].name); - collections = mdb.getCollectionInfos( { "info.uuid": UUID("20f74796-d5ea-42f5-8c95-f79b39bad190") } ); + /rollback/20f74796-d5ea-42f5-8c95-f79b39bad190/removed.2020-02-19T04-57-11.0.bson - for (var j = 0; j < collections.length; j++) { // Array of 1 element - foundcollection=true; - print(mydatabases[i].name + '.' + collections[j].name); - break; - } +where ```` is the :binary:`~bin.mongod`'s :setting:`~storage.dbPath`. - if (foundcollection) { break; } - } - - .. tab:: MongoDB 4.2 - :tabid: 4.2 - - For each collection whose data is rolled back, the - rollback files are located in a ``/rollback/.`` - directory and have filenames of the form: - - .. code-block:: none - :copyable: false +.. tip:: Collection Name + + To get the collection name, you can search for ``rollback file`` in the + MongoDB log. For example, if the log file is + ``/var/log/mongodb/mongod.log``, you can use ``grep`` to search for instances + of ``"rollback file"`` in the log: - removed..bson +.. code-block:: bash + + grep "rollback file" /var/log/mongodb/mongod.log - For example, if data for the collection ``comments`` in the - ``reporting`` database rolled back: +Alternatively, you can loop through all the databases and run +:method:`db.getCollectionInfos()` for the specific UUID until you get a match. +For example: - .. code-block:: none - :copyable: false +.. code-block:: javascript + + var mydatabases=db.adminCommand("listDatabases").databases; + var foundcollection=false; - /rollback/reporting.comments/removed.2019-01-31T02-57-40.0.bson + for (var i = 0; i < mydatabases.length; i++) { + let mdb = db.getSiblingDB(mydatabases[i].name); + collections = mdb.getCollectionInfos( { "info.uuid": UUID("20f74796-d5ea-42f5-8c95-f79b39bad190") } ); - where ```` is the :binary:`~bin.mongod`'s :setting:`~storage.dbPath`. 
+ for (var j = 0; j < collections.length; j++) { // Array of 1 element + foundcollection=true; + print(mydatabases[i].name + '.' + collections[j].name); + break; + } + if (foundcollection) { break; } + } Rollback Data Exclusion ~~~~~~~~~~~~~~~~~~~~~~~ @@ -159,7 +126,7 @@ Avoid Replica Set Rollbacks --------------------------- For replica sets, the :ref:`write concern ` -:writeconcern:`{ w: 1 } <\>` only provides acknowledgement of write +:writeconcern:`{ w: 1 } <\>` only provides acknowledgment of write operations on the primary. Data may be rolled back if the primary steps down before the write operations have replicated to any of the secondaries. This includes data written in :doc:`multi-document @@ -173,7 +140,7 @@ To prevent rollbacks of data that have been acknowledged to the client, run all voting members with journaling enabled and use :ref:`{ w: "majority" } write concern ` to guarantee that the write operations propagate to a majority of the replica set nodes before returning with -acknowledgement to the issuing client. +acknowledgment to the issuing client. Starting in MongoDB 5.0, ``{ w: "majority" }`` is the default write concern for *most* MongoDB deployments. See :ref:`wc-default-behavior`. @@ -207,7 +174,26 @@ Index Operations When :readconcern:`"majority"` Read Concern is Disabled Size Limitations ~~~~~~~~~~~~~~~~ -MongoDB does not limit the amount of data you can roll back. +MongoDB supports the following rollback algorithms, which have different size limitations: + +- **Recover to a Timestamp**, where a former primary reverts to a consistent point in time and + applies operations until it catches up to the sync source's branch of history. This is the + default rollback algorithm. + + When using this algorithm, MongoDB does not limit the amount of data you can roll back. + +- **Rollback via Refetch**, where a former primary finds the common point between its :term:`oplog` + and the sync source's oplog. Then, the member examines and reverts all operations in its oplog until + it reaches this common point. Rollback via Refetch occurs only when the + :setting:`~replication.enableMajorityReadConcern` setting in your configuration file is set to + ``false``. + + When using this algorithm, MongoDB can only roll back up to 300 MB of data. + + .. note:: + + Starting in MongoDB 5.0, :setting:`~replication.enableMajorityReadConcern` is set to + ``true`` and cannot be changed. .. _rollback-time-limit: diff --git a/source/core/replica-set-secondary.txt b/source/core/replica-set-secondary.txt index e6e0649a785..150ee575b8e 100644 --- a/source/core/replica-set-secondary.txt +++ b/source/core/replica-set-secondary.txt @@ -16,7 +16,7 @@ Replica Set Secondary Members A secondary maintains a copy of the :term:`primary's ` data set. To replicate data, a secondary applies operations from the -primary's :doc:`oplog ` to its own data set +primary's :ref:`oplog ` to its own data set in an asynchronous process. [#slow-oplogs]_ A replica set can have one or more secondaries. diff --git a/source/core/replica-set-sync.txt b/source/core/replica-set-sync.txt index 22b3423433d..ecac6ae128d 100644 --- a/source/core/replica-set-sync.txt +++ b/source/core/replica-set-sync.txt @@ -6,15 +6,26 @@ Replica Set Data Synchronization .. default-domain:: mongodb +.. facet:: + :name: genre + :values: reference + .. contents:: On this page :local: :backlinks: none :depth: 1 :class: singlecol +.. |source| replace:: source member +.. |destin| replace:: destination member +.. |sources| replace:: source members +.. 
|destins| replace:: destination members +.. |Sources| replace:: Source members +.. |Destins| replace:: Destination members + In order to maintain up-to-date copies of the shared data set, -secondary members of a replica set :term:`sync` or replicate data from -other members. MongoDB uses two forms of data synchronization: +secondary members of a replica set :term:`sync` or replicate data from a +|source|. MongoDB uses two forms of data synchronization: initial sync to populate new members with the full data set, and replication to apply ongoing changes to the entire data set. @@ -24,14 +35,13 @@ entire data set. Initial Sync ------------ -Initial sync copies all the data from one member of the replica set to -another member. See :ref:`replica-set-initial-sync-source-selection` for -more information on initial sync source selection criteria. +Initial sync copies all the data from the |source| of the replica set to +a |destin|. See :ref:`replica-set-initial-sync-source-selection` for +more information on |source| selection criteria. -Starting in MongoDB 4.4, you can specify the preferred initial sync -source using the :parameter:`initialSyncSourceReadPreference` parameter. -This parameter can only be specified when starting the -:binary:`~bin.mongod`. +You can specify the preferred initial sync source using the +:parameter:`initialSyncSourceReadPreference` parameter. This parameter can +only be specified when starting the :binary:`~bin.mongod`. Starting in MongoDB 5.2, initial syncs can be *logical* or *file copy based*. @@ -45,26 +55,26 @@ When you perform a logical initial sync, MongoDB: #. Clones all databases except the :ref:`local ` database. To clone, the - :binary:`~bin.mongod` scans every collection in each source database and - inserts all data into its own copies of these collections. + :binary:`~bin.mongod` scans every collection in each |source| + database and inserts all data into its own copies of these + collections. #. Builds all collection indexes as the documents are copied for each collection. #. Pulls newly added oplog records during the data copy. Ensure that the - target member has enough disk space in the ``local`` database to - temporarily store these oplog records for the duration of this data - copy stage. + |destin| has enough disk space in the ``local`` + database to temporarily store these oplog records for the duration of + this data copy stage. #. Applies all changes to the data set. Using the oplog from the - source, the :binary:`~bin.mongod` updates its data set to reflect the - current state of the replica set. + |source|, the :binary:`~bin.mongod` updates its data set to reflect + the current state of the replica set. When the initial sync finishes, the member transitions from :replstate:`STARTUP2` to :replstate:`SECONDARY`. -To perform an initial sync, see -:doc:`/tutorial/resync-replica-set-member`. +To perform an initial sync, see :ref:`resync-replica-member`. .. _replica-set-initial-sync-file-copy-based: @@ -93,30 +103,28 @@ Enable File Copy Based Initial Sync To enable file copy based initial sync, set the :parameter:`initialSyncMethod` parameter to ``fileCopyBased`` on the -destination member for the initial sync. This parameter can only be set +|destin| for the initial sync. This parameter can only be set at startup. Behavior ```````` -File copy based initial sync replaces the ``local`` database on the -member being *synced to* with the ``local`` database from the member -being *synced from*. 
+File copy based initial sync replaces the ``local`` database of the +|destin| with the ``local`` database of the source member when +syncing. Limitations ``````````` - During a file copy based initial sync: - - You cannot run a backup on the member that is being *synced to* or - the member that is being *synced from*. + - You cannot run backups on either the |source| or the |destin|. - - You cannot write to the ``local`` database on the member that is - being *synced to*. + - You cannot write to the ``local`` database on the |destin|. -- You can only run an initial sync from one given member at a time. +- You can only run an initial sync from one |source| at a time. -- When using the encrypted storage engine, MongoDB uses the source +- When using the encrypted storage engine, MongoDB uses the |source| key to encrypt the destination. .. _init-sync-retry: @@ -124,25 +132,20 @@ Limitations Fault Tolerance ~~~~~~~~~~~~~~~ -If a secondary performing initial sync encounters a *non-transient* -(i.e. persistent) network error during the sync process, the secondary -restarts the initial sync process from the beginning. - -Starting in MongoDB 4.4, a secondary performing initial sync can attempt -to resume the sync process if interrupted by a *transient* (i.e. -temporary) network error, collection drop, or -collection rename. The sync source must also run MongoDB 4.4 to support -resumable initial sync. If the sync source runs MongoDB 4.2 or earlier, -the secondary must restart the initial sync process as if it encountered -a non-transient network error. - -By default, the secondary tries to resume initial sync for 24 hours. -MongoDB 4.4 adds the -:parameter:`initialSyncTransientErrorRetryPeriodSeconds` server -parameter for controlling the amount of time the secondary attempts to -resume initial sync. If the secondary cannot successfully resume the +If a |destin| performing initial sync encounters a persistent network +error during the sync process, the |destin| restarts the initial sync +process from the beginning. + +A |destin| performing initial sync can attempt to resume the sync process +if interrupted by a temporary network error, collection drop, or +collection rename. + +By default, the |destin| tries to resume initial sync for 24 hours. +You can use the :parameter:`initialSyncTransientErrorRetryPeriodSeconds` +server parameter to control the amount of time the |destin| attempts to +resume initial sync. If the |destin| cannot successfully resume the initial sync process during the configured time period, it selects a new -healthy source from the replica set and restarts the initial +healthy |source| from the replica set and restarts the initial synchronization process from the beginning. The secondary attempts to restart the initial sync up to ``10`` times @@ -155,24 +158,24 @@ Initial Sync Source Selection Initial sync source selection depends on the value of the :binary:`~bin.mongod` startup parameter -:parameter:`initialSyncSourceReadPreference` (*new in 4.4*): +:parameter:`initialSyncSourceReadPreference`: - For :parameter:`initialSyncSourceReadPreference` set to - :readmode:`primary` (default if :rsconf:`chaining + :readmode:`primary` (default if :rsconf:`chainingAllowed ` is disabled), select the :term:`primary` - as the sync source. If the primary is unavailable or unreachable, log + as the |source|. If the primary is unavailable or unreachable, log an error and periodically check for primary availability. 
- For :parameter:`initialSyncSourceReadPreference` set to :readmode:`primaryPreferred` (default for voting replica set - members), attempt to select the :term:`primary` as the sync source. If - the primary is unavailable or unreachable, perform sync source + members), attempt to select the :term:`primary` as the |source|. If + the primary is unavailable or unreachable, perform sync |source| selection from the remaining replica set members. -- For all other supported read modes, perform sync source selection - from the replica set members. +- For all other supported read modes, perform sync |source| selection + from the |destins|. -Members performing initial sync source selection make two passes through +Members performing initial |source| selection make two passes through the list of all replica set members: .. tabs:: @@ -182,40 +185,39 @@ the list of all replica set members: The member applies the following criteria to each replica set member when making the first pass for selecting a - initial sync source: + initial |source|: - - The sync source *must* be in the :replstate:`PRIMARY` or + - The |source| *must* be in the :replstate:`PRIMARY` or :replstate:`SECONDARY` replication state. - - The sync source *must* be online and reachable. + - The |source| *must* be online and reachable. - If :parameter:`initialSyncSourceReadPreference` is :readmode:`secondary` or :readmode:`secondaryPreferred`, - the sync source *must* be a :term:`secondary`. + the |source| *must* be a :term:`secondary`. - - The sync source *must* be :rsconf:`visible `. + - The |source| *must* be :rsconf:`visible `. - - The sync source *must* be within ``30`` seconds of the newest + - The |source| *must* be within ``30`` seconds of the newest oplog entry on the primary. - If the member :rsconf:`builds indexes - `, the sync source *must* - build indexes. + `, the |source| *must* build indexes. - If the member :rsconf:`votes ` in - replica set elections, the sync source *must* also vote. + replica set elections, the |source| *must* also vote. - If the member is *not* a :rsconf:`delayed member - `, the sync source *must not* be delayed. + `, the |source| *must not* be delayed. - If the member *is* a :rsconf:`delayed member - `, the sync source must have a shorter + `, the |source| must have a shorter configured delay. - - The sync source *must* be faster (i.e. lower latency) than + - The |source| *must* be faster than the current best sync source. - If no candidate sync sources remain after the first pass, + If no candidate |source| remains after the first pass, the member performs a second pass with relaxed criteria. See :guilabel:`Sync Source Selection (Second Pass)`. @@ -224,26 +226,26 @@ the list of all replica set members: The member applies the following criteria to each replica set member when making the second pass for selecting a - initial sync source: + initial |source|: - - The sync source *must* be in the + - The |source| *must* be in the :replstate:`PRIMARY` or :replstate:`SECONDARY` replication state. - - The sync source *must* be online and reachable. + - The |source| *must* be online and reachable. - If :parameter:`initialSyncSourceReadPreference` is - :readmode:`secondary`, the sync source *must* be a + :readmode:`secondary`, the |source| *must* be a :term:`secondary`. - If the member :rsconf:`builds indexes - `, the sync source must + `, the |source| must build indexes. - - The sync source *must* be faster (i.e. lower latency) than + - The |source| *must* be faster than the current best sync source. 
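Once a member finishes selection, you can see which sync source it chose by
inspecting the ``syncSourceHost`` field that :method:`rs.status()` reports for
each member. A minimal ``mongosh`` sketch:

.. code-block:: javascript

   // List each member and the sync source it currently replicates from;
   // an empty value means the member has no sync source (for example, the primary)
   rs.status().members.forEach( member => {
      print( member.name + " syncs from: " + ( member.syncSourceHost || "(none)" ) )
   } )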
-If the member cannot select an initial sync source after two passes, it +If the |destin| cannot select a |source| after two passes, it logs an error and waits ``1`` second before restarting the selection process. The secondary :binary:`~bin.mongod` can restart the initial sync source selection process up to ``10`` times before exiting with an @@ -252,11 +254,11 @@ error. Oplog Window ~~~~~~~~~~~~ -The :term:`oplog window` must be long enough so that a secondary can fetch any -new :term:`oplog` entries that occur between the start and end of +The :term:`oplog window` must be long enough so that a |destin| can fetch +any new :term:`oplog` entries that occur between the start and end of the :ref:`replica-set-initial-sync-logical`. If the window isn't long enough, there is a risk that some entries may -fall off the ``oplog`` before the secondary can apply them. +fall off the ``oplog`` before the |destin| can apply them. It is recommended that you size the ``oplog`` for additional time to fetch any new ``oplog`` entries. This allows for changes that may occur during initial @@ -269,29 +271,25 @@ For more information, see :ref:`replica-set-oplog-sizing`. Replication ----------- -Secondary members replicate data continuously after the initial sync. -Secondary members copy the :doc:`oplog ` from -their *sync from* source and apply these operations in an asynchronous -process. [#slow-oplogs]_ +|Destins| replicate data continuously after the initial sync. +|Destins| copy the :ref:`oplog ` from +the |source| and apply these operations in an asynchronous +process. -Secondaries may automatically change their *sync from* source as needed +|Destins| automatically change their |source| as needed based on changes in the ping time and state of other members' replication. See :ref:`replica-set-replication-sync-source-selection` -for more information on sync source selection criteria. - -.. [#slow-oplogs] - - .. include:: /includes/extracts/4.2-changes-slow-oplog-log-message-footnote.rst +for more information on |source| selection criteria. .. _replica-set-streaming-replication: Streaming Replication ~~~~~~~~~~~~~~~~~~~~~ -Starting in MongoDB 4.4, *sync from* sources send a continuous stream -of :doc:`oplog ` entries to their syncing -secondaries. Streaming replication mitigates replication lag in -high-load and high-latency networks. It also: +|Sources| send a continuous stream +of :ref:`oplog ` entries to their |destins|. +Streaming replication mitigates replication lag in high-load and +high-latency networks. It also: - Reduces staleness for reads from secondaries. - Reduces risk of losing write operations with :ref:`w: 1 ` due to @@ -300,14 +298,10 @@ high-load and high-latency networks. It also: <"majority">` and :ref:`w: >1 ` (that is, any write concern that requires waiting for replication). -Prior to MongoDB 4.4, secondaries fetched batches of :doc:`oplog -` entries by issuing a request to their *sync -from* source and waiting for a response. This required a network roundtrip -for each batch of :doc:`oplog ` entries. MongoDB -4.4 adds the :parameter:`oplogFetcherUsesExhaust` startup parameter for -disabling streaming replication and using the older replication behavior. +Use the :parameter:`oplogFetcherUsesExhaust` startup parameter to disable +streaming replication and using the older replication behavior. 
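To confirm whether streaming replication is currently enabled on a member, you
can read the parameter back. A minimal ``mongosh`` sketch:

.. code-block:: javascript

   // Returns { oplogFetcherUsesExhaust: true, ok: 1 } when streaming
   // replication is enabled (the default)
   db.adminCommand( { getParameter: 1, oplogFetcherUsesExhaust: 1 } )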
Set the :parameter:`oplogFetcherUsesExhaust` parameter to ``false`` only if -there are any resource constraints on the *sync from* source or if you wish +there are any resource constraints on the |source| or if you wish to limit MongoDB's usage of network bandwidth for replication. .. _replica-set-internals-multi-threaded-replication: @@ -347,17 +341,17 @@ For more information, see :ref:`flow-control`. Replication Sync Source Selection ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Replication sync source selection depends on the replica set +Replication |source| selection depends on the replica set :rsconf:`chaining ` setting: -- With chaining enabled (default), perform sync source selection from - the replica set members. +- With chaining enabled (default), perform |source| selection from + the |destins|. -- With chaining disabled, select the :term:`primary` as the sync - source. If the primary is unavailable or unreachable, log an - error and periodically check for primary availability. +- With chaining disabled, select the :term:`primary` as the |source|. + If the primary is unavailable or unreachable, log an error and + periodically check for primary availability. -Members performing replication sync source selection make two passes +Members performing replication |source| selection make two passes through the list of all replica set members: .. tabs:: @@ -367,39 +361,38 @@ through the list of all replica set members: The member applies the following criteria to each replica set member when making the first pass for selecting a - replication sync source: + |source|: - - The sync source *must* be in the :replstate:`PRIMARY` or + - The |source| *must* be in the :replstate:`PRIMARY` or :replstate:`SECONDARY` replication state. - - The sync source *must* be online and reachable. + - The |source| *must* be online and reachable. - - The sync source *must* have newer oplog entries than the member - (i.e. the sync source is *ahead* of the member). + - The |source| *must* have newer oplog entries than the member. + That is, the |source| must be *ahead* of the member. - - The sync source *must* be :rsconf:`visible `. + - The |source| *must* be :rsconf:`visible `. - - The sync source *must* be within ``30`` seconds of the newest + - The |source| *must* be within ``30`` seconds of the newest oplog entry on the primary. - If the member :rsconf:`builds indexes - `, the sync source *must* + `, the |source| *must* build indexes. - If the member :rsconf:`votes ` in - replica set elections, the sync source *must* also vote. + replica set elections, the |source| *must* also vote. - If the member is *not* a :rsconf:`delayed member - `, the sync source *must not* be delayed. + `, the |source| *must not* be delayed. - If the member *is* a :rsconf:`delayed member - `, the sync source must have a shorter + `, the |source| must have a shorter configured delay. - - The sync source *must* be faster (i.e. lower latency) than - the current best sync source. + - The |source| *must* be faster than the current best sync source. - If no candidate sync sources remain after the first pass, + If no candidate |sources| remain after the first pass, the member performs a second pass with relaxed criteria. See the :guilabel:`Sync Source Selection (Second Pass)`. 
@@ -408,37 +401,34 @@ through the list of all replica set members: The member applies the following criteria to each replica set member when making the second pass for selecting a - replication sync source: + |source|: - - The sync source *must* be in the + - The |source| *must* be in the :replstate:`PRIMARY` or :replstate:`SECONDARY` replication state. - - The sync source *must* be online and reachable. + - The |source| *must* be online and reachable. - If the member :rsconf:`builds indexes - `, the sync source must + `, the |source| must build indexes. - - The sync source *must* be faster (i.e. lower latency) than - the current best sync source. + - The |source| *must* be faster than the current best sync source. If the member cannot select a sync source after two passes, it logs an error and waits ``1`` second before restarting the selection process. -The number of times a source can be changed per hour is +The number of times a |source| can be changed per hour is configurable by setting the :parameter:`maxNumSyncSourceChangesPerHour` parameter. .. note:: - Starting in MongoDB 4.4, the startup parameter - :parameter:`initialSyncSourceReadPreference` takes precedence over - the replica set's :rsconf:`settings.chainingAllowed` setting when - selecting an initial sync source. After a replica set member + The startup parameter :parameter:`initialSyncSourceReadPreference` takes + precedence over the replica set's :rsconf:`settings.chainingAllowed` setting + when selecting an initial sync |source|. After a replica set member successfully performs initial sync, it defers to the value of - :rsconf:`~settings.chainingAllowed` when selecting a replication sync - source. + :rsconf:`~settings.chainingAllowed` when selecting a |source|. See :ref:`replica-set-initial-sync-source-selection` for more information on initial sync source selection. diff --git a/source/core/replica-set-write-concern.txt b/source/core/replica-set-write-concern.txt index 7840c04de6c..fbb38c73ea2 100644 --- a/source/core/replica-set-write-concern.txt +++ b/source/core/replica-set-write-concern.txt @@ -22,7 +22,7 @@ successfully. For replica sets: - A write concern of :writeconcern:`w: "majority" <"majority">` requires - acknowledgement that the write operations have been durably committed to a + acknowledgment that the write operations have been durably committed to a :ref:`calculated majority ` of the data-bearing voting members. For most replica set configurations, :writeconcern:`w: "majority" <"majority">` is the :ref:`default write concern @@ -110,10 +110,6 @@ operation. Refer to the documentation for the write operation for instructions on write concern support and syntax. For complete documentation on write concern, see :ref:`write-concern`. -.. seealso:: - - :ref:`write-methods-incompatibility` - .. _repl-set-modify-default-write-concern: Modify Default Write Concern diff --git a/source/core/retryable-writes.txt b/source/core/retryable-writes.txt index e43f983e35c..5b695a0514b 100644 --- a/source/core/retryable-writes.txt +++ b/source/core/retryable-writes.txt @@ -16,7 +16,6 @@ Retryable writes allow MongoDB drivers to automatically retry certain write operations a single time if they encounter network errors, or if they cannot find a healthy :term:`primary` in the :ref:`replica set ` or :ref:`sharded cluster `. 
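Retryable behavior is controlled by the ``retryWrites`` connection string
option, which drivers compatible with MongoDB 4.2 and later enable by default.
A minimal ``mongosh`` sketch with hypothetical host names:

.. code-block:: javascript

   // Connect with retryable writes explicitly enabled; set retryWrites=false
   // on the URI to opt out for this connection
   db = connect(
      "mongodb://mongodb0.example.net:27017,mongodb1.example.net:27017/test?replicaSet=myRepl&retryWrites=true"
   )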
-[#duplicate-key-update]_ Prerequisites ------------- @@ -31,7 +30,7 @@ Supported Deployment Topologies Supported Storage Engine Retryable writes require a storage engine supporting document-level locking, such as the :ref:`WiredTiger ` or - :doc:`in-memory ` storage engines. + :ref:`in-memory ` storage engines. 3.6+ MongoDB Drivers Clients require MongoDB drivers updated for MongoDB 3.6 or greater: @@ -102,15 +101,15 @@ cannot be :writeconcern:`{w: 0} <\>`. * - | :method:`db.collection.insertOne()` | :method:`db.collection.insertMany()` - - Insert operations. + - Insert operations * - | :method:`db.collection.updateOne()` | :method:`db.collection.replaceOne()` - - Single-document update operations. [#duplicate-key-update]_ + - Single-document update operations * - | :method:`db.collection.deleteOne()` | :method:`db.collection.remove()` where ``justOne`` is ``true`` - - Single document delete operations. + - Single document delete operations * - | :method:`db.collection.findAndModify()` | :method:`db.collection.findOneAndDelete()` @@ -144,24 +143,6 @@ cannot be :writeconcern:`{w: 0} <\>`. any multi-document write operations, such as ``update`` which specifies ``true`` for the ``multi`` option. -.. note:: Updates to Shard Key Values - - Starting in MongoDB 4.2, you can update a document's shard key value - (unless the shard key field is the immutable ``_id`` field) by - issuing single-document update/findAndModify operations either as a - retryable write or in a :ref:`transaction `. For - details, see :ref:`update-shard-key`. - -.. [#duplicate-key-update] - - MongoDB 4.2 will retry certain single-document upserts - (update with ``upsert: true`` and ``multi: false``) that encounter a - duplicate key exception. See :ref:`retryable-update-upsert` for - conditions. - - Prior to MongoDB 4.2, MongoDB would not retry upsert operations - that encountered a duplicate key error. - Behavior -------- @@ -190,163 +171,6 @@ the failover period exceeds :urioption:`serverSelectionTimeoutMS`. applications starts responding (without a restart), the write operation may be retried and applied again. -.. _retryable-update-upsert: - -Duplicate Key Errors on Upsert -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -MongoDB 4.2 will retry single-document upsert operations -(i.e ``upsert : true`` and ``multi : false``) that -fail due to a duplicate key error *only if* the operation meets -*all* of the following conditions: - -- The target collection has a unique index that caused the duplicate key - error. - -- The update match condition is either: - - - A single equality predicate - - ``{ "fieldA" : "valueA" }``, - - *or* - - - a logical AND of equality predicates - - ``{ "fieldA" : "valueA", "fieldB" : "valueB" }`` - -- The set of fields in the unique index key pattern matches the set - of fields in the update query predicate. - -- The update operation does not modify any of the fields in the - query predicate. - -The following table contains examples of upsert operations that -the server can or cannot retry on a duplicate key error: - -.. list-table:: - :header-rows: 1 - :widths: 30 40 30 - - * - Unique Index Key Pattern - - Update Operation - - Retryable - - * - .. code-block:: javascript - :copyable: false - - { _id : 1 } - - .. code-block:: javascript - :copyable: false - - db.collName.updateOne( - { _id : ObjectId("1aa1c1efb123f14aaa167aaa") }, - { $set : { fieldA : 25 } }, - { upsert : true } - ) - - Yes - - * - .. code-block:: javascript - :copyable: false - - { fieldA : 1 } - - .. 
code-block:: javascript - :copyable: false - - db.collName.updateOne( - { fieldA : { $in : [ 25 ] } }, - { $set : { fieldB : "someValue" } }, - { upsert : true } - ) - - Yes - - * - .. code-block:: javascript - :copyable: false - - { - fieldA : 1, - fieldB : 1 - } - - .. code-block:: javascript - :copyable: false - - db.collName.updateOne( - { fieldA : 25, fieldB : "someValue" }, - { $set : { fieldC : false } }, - { upsert : true } - ) - - Yes - - * - .. code-block:: javascript - :copyable: false - - { fieldA : 1 } - - .. code-block:: javascript - :copyable: false - - db.collName.updateOne( - { fieldA : { $lte : 25 } }, - { $set : { fieldC : true } }, - { upsert : true } - ) - - No - - The query predicate on ``fieldA`` is not an equality - - * - .. code-block:: javascript - :copyable: false - - { fieldA : 1 } - - .. code-block:: javascript - :copyable: false - - db.collName.updateOne( - { fieldA : { $in : [ 25 ] } }, - { $set : { fieldA : 20 } }, - { upsert : true } - ) - - No - - The update operation modifies fields specified in the - query predicate. - - * - .. code-block:: javascript - :copyable: false - - { _id : 1 } - - .. code-block:: javascript - :copyable: false - - db.collName.updateOne( - { fieldA : { $in : [ 25 ] } }, - { $set : { fieldA : 20 } }, - { upsert : true } - ) - - No - - The set of query predicate fields (``fieldA``) does not - match the set of index key fields (``_id``). - - * - .. code-block:: javascript - :copyable: false - - { fieldA : 1 } - - .. code-block:: javascript - :copyable: false - - db.collName.updateOne( - { fieldA : 25, fieldC : true }, - { $set : { fieldD : false } }, - { upsert : true } - ) - - No - - The set of query predicate fields (``fieldA``, ``fieldC``) - does not match the set of index key fields (``fieldA``). - -Prior to MongoDB 4.2, MongoDB retryable writes did not support -retrying upserts which failed due to duplicate key errors. - Diagnostics ~~~~~~~~~~~ diff --git a/source/core/schema-validation.txt b/source/core/schema-validation.txt index ad82dd137d2..a3b81a59fe2 100644 --- a/source/core/schema-validation.txt +++ b/source/core/schema-validation.txt @@ -11,6 +11,9 @@ Schema Validation :name: genre :values: reference +.. meta:: + :description: Use schema validation to ensure there are no unintended schema changes or improper data types. + .. contents:: On this page :local: :backlinks: none @@ -55,26 +58,20 @@ validation in the following scenarios: from accidentally misspelling an item name when entering sales data. - For a students collection, ensure that the ``gpa`` field is always a - positive number. This validation catches typos during data entry. + positive number. This validation prevents errors during data entry. When MongoDB Checks Validation ------------------------------ -When you create a new collection with schema validation, MongoDB checks -validation during updates and inserts in that collection. - -When you add validation to an existing, non-empty collection: - -- Newly inserted documents are checked for validation. +After you add schema validation rules to a collection: -- Documents already existing in your collection are not checked for - validation until they are modified. Specific behavior for existing - documents depends on your chosen validation level. To learn more, see +- All document inserts must match the rules. +- The schema validation level defines how the rules are applied to + existing documents and document updates. To learn more, see :ref:`schema-specify-validation-level`. 
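For example, a minimal sketch that adds rules to the existing ``students``
collection from the scenario above and chooses how they apply with
``validationLevel``:

.. code-block:: javascript

   // Require a non-negative double for gpa; "moderate" skips checks for
   // existing documents that don't already match the rules
   db.runCommand( {
      collMod: "students",
      validator: {
         $jsonSchema: {
            bsonType: "object",
            required: [ "gpa" ],
            properties: {
               gpa: { bsonType: "double", minimum: 0 }
            }
         }
      },
      validationLevel: "moderate"
   } )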
-Adding validation to an existing collection does not enforce validation -on existing documents. To check a collection for invalid documents, use -the :dbcommand:`validate` command. +To find documents in a collection that don't match the schema validation +rules, see :ref:`use-json-schema-query-conditions-find-documents`. What Happens When a Document Fails Validation --------------------------------------------- diff --git a/source/core/schema-validation/specify-json-schema.txt b/source/core/schema-validation/specify-json-schema.txt index c638c3b21a1..fb7e4e7b444 100644 --- a/source/core/schema-validation/specify-json-schema.txt +++ b/source/core/schema-validation/specify-json-schema.txt @@ -15,6 +15,9 @@ Specify JSON Schema Validation :name: genre :values: tutorial +.. meta:: + :description: Use JSON schema to specify validation rules for your fields in a human-readable format. + .. contents:: On this page :local: :backlinks: none @@ -46,6 +49,8 @@ You can't specify schema validation for: - :ref:`System collections ` +.. include:: /includes/queryable-encryption/qe-csfle-schema-validation.rst + Steps ----- @@ -134,6 +139,17 @@ document. } } + .. tip:: + + By default, :binary:`mongosh` prints nested objects up to six + levels deep. To print all nested objects to their full + depth, set ``inspectDepth`` to ``Infinity``. + + .. code-block:: shell + :copyable: true + + config.set("inspectDepth", Infinity) + .. step:: Insert a valid document. If you change the ``gpa`` field value to a ``double`` type, the diff --git a/source/core/schema-validation/specify-json-schema/json-schema-tips.txt b/source/core/schema-validation/specify-json-schema/json-schema-tips.txt index f28d8160d81..266e3146298 100644 --- a/source/core/schema-validation/specify-json-schema/json-schema-tips.txt +++ b/source/core/schema-validation/specify-json-schema/json-schema-tips.txt @@ -124,3 +124,9 @@ With the preceding validation, this document is allowed: ``null`` field values are not the same as missing fields. If a field is missing from a document, MongoDB does not validate that field. + + +Validation with Encrypted Fields +-------------------------------- + +.. include:: /includes/queryable-encryption/qe-csfle-schema-validation.rst \ No newline at end of file diff --git a/source/core/schema-validation/specify-validation-level.txt b/source/core/schema-validation/specify-validation-level.txt index f563a3fe7bb..2293762008a 100644 --- a/source/core/schema-validation/specify-validation-level.txt +++ b/source/core/schema-validation/specify-validation-level.txt @@ -31,13 +31,15 @@ MongoDB applies validation rules: - Behavior * - ``strict`` - - (*Default*) MongoDB applies validation rules to all inserts and - updates. + - (*Default*) MongoDB applies the same validation rules to all + document inserts and updates. * - ``moderate`` - - MongoDB only applies validation rules to existing valid - documents. Updates to invalid documents which exist prior to the - validation being added are not checked for validity. + - MongoDB applies the same validation rules to document inserts + and updates to existing valid documents that match the + validation rules. Updates to existing documents in the + collection that don't match the validation rules aren't checked + for validity. Prerequisite ------------ @@ -260,7 +262,7 @@ documents. upsertedCount: 0 } - The output shows that: + The output shows: - The update fails for the document with ``_id: 1``. 
This document met the initial validation requirements, and MongoDB applies diff --git a/source/core/schema-validation/use-json-schema-query-conditions.txt b/source/core/schema-validation/use-json-schema-query-conditions.txt index 8f1b64de4c1..9f62e731fdf 100644 --- a/source/core/schema-validation/use-json-schema-query-conditions.txt +++ b/source/core/schema-validation/use-json-schema-query-conditions.txt @@ -102,11 +102,14 @@ Both commands return the same result: } ] +.. _use-json-schema-query-conditions-find-documents: + Find Documents that Don't Match the Schema ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -To find all documents do not satisfy the schema:, use -:query:`$jsonSchema` with the :query:`$nor` operator: +To find documents in a collection that don't match the schema validation +rules, use :query:`$jsonSchema` with the :query:`$nor` operator. For +example: .. code-block:: javascript diff --git a/source/core/security-in-use-encryption.txt b/source/core/security-in-use-encryption.txt index bc64a693e4c..60e81297902 100644 --- a/source/core/security-in-use-encryption.txt +++ b/source/core/security-in-use-encryption.txt @@ -1,4 +1,12 @@ +.. facet:: + :name: genre + :values: reference + +.. meta:: + :keywords: client-side field level encryption, queryable encryption, in-use encryption, envelope encryption + .. _security-in-use-encryption: +.. _security-data-encryption: ================= In-Use Encryption @@ -6,10 +14,49 @@ In-Use Encryption .. default-domain:: mongodb -.. TODO: Write +.. contents:: On this page + :local: + :backlinks: none + :depth: 2 + :class: singlecol + +MongoDB provides two approaches to :term:`In-Use Encryption`: + +- :ref:`{+qe+} ` +- :ref:`{+csfle+} `. + +Choosing an In-Use Encryption Approach +-------------------------------------- +You can use both {+qe+} and {+csfle+} in the same deployment, but they are +incompatible with each other in the same collection. For a comparison +of the two, including compatibility with MongoDB versions and points to +consider when choosing one or the other, see :ref:`Choosing an In-Use +Encryption Approach `. + +Encryption Keys and Key Vaults +------------------------------ +Both {+qe+} and {+csfle+} use an :term:`envelope encryption` approach to +encrypt data, where an encrypted field in a document uses a unique +:term:`Data Encryption Key`, and those keys are encrypted using a +:term:`Customer Master Key`. + +For details, see :ref:``. + +{+qe+} +------------------------------------- +To learn how {+qe+} and its components work and how to implement it in +your application, see :ref:``. + +{+csfle+} +-------------------------------------- +To learn how {+csfle+} and its components work and how to implement it +in your application, see :ref:``. .. toctree:: :titlesonly: + /core/queryable-encryption/about-qe-csfle + /core/queryable-encryption/reference/compatibility + /core/queryable-encryption/fundamentals/keys-key-vaults /core/queryable-encryption /core/csfle diff --git a/source/core/security-ldap-external.txt b/source/core/security-ldap-external.txt index 889070a0558..5e90b7b4ee2 100644 --- a/source/core/security-ldap-external.txt +++ b/source/core/security-ldap-external.txt @@ -227,6 +227,12 @@ configuration file: - Quote-enclosed comma-separated list of LDAP servers in ``host[:port]`` format. + You can prefix LDAP servers with ``srv:`` and ``srv_raw:``. + + .. |ldap-binary| replace:: :binary:`mongod` + + .. 
include:: /includes/ldap-srv-details.rst + - **YES** * - :setting:`security.ldap.authz.queryTemplate` diff --git a/source/core/security-ldap.txt b/source/core/security-ldap.txt index 825f030f477..11435769aa7 100644 --- a/source/core/security-ldap.txt +++ b/source/core/security-ldap.txt @@ -216,6 +216,12 @@ configuration file: - Quote-enclosed comma-separated list of LDAP servers in ``host[:port]`` format. + You can prefix LDAP servers with ``srv:`` and ``srv_raw:``. + + .. |ldap-binary| replace:: :binary:`mongod` + + .. include:: /includes/ldap-srv-details.rst + - **YES** * - :setting:`security.ldap.bind.method` diff --git a/source/core/security-transport-encryption.txt b/source/core/security-transport-encryption.txt index 159a6306051..ff6503ded7f 100644 --- a/source/core/security-transport-encryption.txt +++ b/source/core/security-transport-encryption.txt @@ -223,18 +223,17 @@ OCSP (Online Certificate Status Protocol) .. include:: /includes/fact-ocsp-enabled.rst -Starting in version 4.4, to check for certificate revocation, MongoDB -:parameter:`enables ` the use of OCSP (Online Certificate -Status Protocol) by default. The use of OCSP eliminates the need to -periodically download a :setting:`Certificate Revocation List (CRL) -` and restart the +To check for certificate revocation, MongoDB :parameter:`enables ` +the use of OCSP (Online Certificate Status Protocol) by default. The use of +OCSP eliminates the need to periodically download a +:setting:`Certificate Revocation List (CRL) ` and restart the :binary:`mongod` / :binary:`mongos` with the updated CRL. In versions 4.0 and 4.2, the use of OCSP is available only through the use of :setting:`system certificate store ` on Windows or macOS. -As part of its OCSP support, MongoDB 4.4+ supports the following on +As part of its OCSP support, MongoDB supports the following on Linux: .. include:: /includes/list-ocsp-support.rst diff --git a/source/core/sharded-cluster-components.txt b/source/core/sharded-cluster-components.txt index 8ecdfc59e16..da17716a09a 100644 --- a/source/core/sharded-cluster-components.txt +++ b/source/core/sharded-cluster-components.txt @@ -16,21 +16,7 @@ Sharded Cluster Components :depth: 1 :class: singlecol -A MongoDB :term:`sharded cluster` consists of the following components: - -* :ref:`shard `: Each shard contains a - subset of the sharded data. As of MongoDB 3.6, shards must be deployed - as a :term:`replica set`. - -* :doc:`/core/sharded-cluster-query-router`: The ``mongos`` acts as a - query router, providing an interface between client applications and - the sharded cluster. Starting in MongoDB 4.4, :binary:`~bin.mongos` - can support :ref:`hedged reads ` to minimize - latencies. - -* :ref:`config servers `: Config - servers store metadata and configuration settings for the cluster. As - of MongoDB 3.4, config servers must be deployed as a replica set (CSRS). +.. include:: /includes/fact-sharded-cluster-components.rst .. _sc-production-configuration: diff --git a/source/core/sharded-cluster-query-router.txt b/source/core/sharded-cluster-query-router.txt index abaf672e306..3ba215b64fe 100644 --- a/source/core/sharded-cluster-query-router.txt +++ b/source/core/sharded-cluster-query-router.txt @@ -156,7 +156,7 @@ For details on read preference and sharded clusters, see Hedged Reads ~~~~~~~~~~~~ -Starting in version 4.4, :binary:`~bin.mongos` instances can hedge +:binary:`~bin.mongos` instances can hedge reads that use non-``primary`` :doc:`read preferences `. 
With hedged reads, the :binary:`~bin.mongos` instances route read operations to two replica set members per each diff --git a/source/core/sharding-balancer-administration.txt b/source/core/sharding-balancer-administration.txt index 8f1ac104362..9b0288f0581 100644 --- a/source/core/sharding-balancer-administration.txt +++ b/source/core/sharding-balancer-administration.txt @@ -236,16 +236,10 @@ when the migration proceeds with next document in the range. In the :data:`config.settings` collection: - If the ``_secondaryThrottle`` setting for the balancer is set to a - write concern, each document move during range migration must receive - the requested acknowledgement before proceeding with the next + :term:`write concern`, each document moved during range migration must receive + the requested acknowledgment before proceeding with the next document. -- If the ``_secondaryThrottle`` setting for the balancer is set to - ``true``, each document move during range migration must receive - acknowledgement from at least one secondary before the migration - proceeds with the next document in the range. This is equivalent to a - write concern of :writeconcern:`{ w: 2 } <\>`. - - If the ``_secondaryThrottle`` setting is unset, the migration process does not wait for replication to a secondary and instead continues with the next document. diff --git a/source/core/sharding-change-a-shard-key.txt b/source/core/sharding-change-a-shard-key.txt index 5f1faaa19f2..c6aab848dfc 100644 --- a/source/core/sharding-change-a-shard-key.txt +++ b/source/core/sharding-change-a-shard-key.txt @@ -27,9 +27,8 @@ To address these issues, MongoDB allows you to change your shard key: - Starting in MongoDB 5.0, you can :ref:`reshard a collection ` by changing a collection's shard key. -- Starting in MongoDB 4.4, you can :ref:`refine a shard key - ` by adding a suffix field or fields to the existing - shard key. +- You can :ref:`refine a shard key ` by adding a suffix field + or fields to the existing shard key. Data distribution fixes are most effective when you reshard a collection. If you want to improve data distribution and your diff --git a/source/core/sharding-choose-a-shard-key.txt b/source/core/sharding-choose-a-shard-key.txt index b0d34e2bd8a..50f4a6333ec 100644 --- a/source/core/sharding-choose-a-shard-key.txt +++ b/source/core/sharding-choose-a-shard-key.txt @@ -39,8 +39,7 @@ When you choose your shard key, consider: - Starting in MongoDB 5.0, you can :ref:`change your shard key ` and redistribute your data using the :dbcommand:`reshardCollection` command. - - Starting in MongoDB 4.4, you can use the - :dbcommand:`refineCollectionShardKey` command to refine a + - You can use the :dbcommand:`refineCollectionShardKey` command to refine a collection's shard key. The :dbcommand:`refineCollectionShardKey` command adds a suffix field or fields to the existing key to create the new shard key. @@ -181,6 +180,48 @@ This does not apply for aggregation queries that operate on a large amount of data. In these cases, scatter-gather can be a useful approach that allows the query to run in parallel on all shards. +Use Shard Key Analyzer in 7.0 to Find Your Shard Key +---------------------------------------------------- + +Starting in 7.0, MongoDB makes it easier to choose your shard key. You +can use :dbcommand:`analyzeShardKey` which calculates metrics for +evaluating a shard key for an unsharded or sharded collection. Metrics +are based on sampled queries, allowing you to make a data-driven choice +for your shard key. 
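For example, a minimal ``mongosh`` sketch, assuming a hypothetical ``orders``
collection and a candidate shard key on ``customerId``, that enables query
sampling and then analyzes the candidate key using the commands described
below:

.. code-block:: javascript

   // Sample up to 5 queries per second on the collection being evaluated
   db.orders.configureQueryAnalyzer( { mode: "full", samplesPerSecond: 5 } )

   // After a representative workload has run, evaluate the candidate key
   db.orders.analyzeShardKey( { customerId: 1 } )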
+ +Enable Query Sampling +~~~~~~~~~~~~~~~~~~~~~ + +To analyze a shard key, you must enable query sampling on the target +collection. For more information, see: + +- :dbcommand:`configureQueryAnalyzer` database command +- :method:`db.collection.configureQueryAnalyzer()` shell method + +To monitor the query sampling process, use the :pipeline:`$currentOp` +stage. For an example, see :ref:`sampled-queries-currentOp-stage`. + +Shard Key Analysis Commands +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +To analyze a shard key, see: + +- :dbcommand:`analyzeShardKey` database command +- :method:`db.collection.analyzeShardKey()` shell method + +``analyzeShardKey`` returns metrics about key characteristics of a shard +key and its read and write distribution. The metrics are based on +sampled queries. + +- The ``keyCharacteristics`` field contains metrics about the + :ref:`cardinality `, :ref:`frequency + `, and :ref:`monotonicity ` + of the shard key. + +- The ``readWriteDistribution`` field contains metrics about the query + routing patterns and the :ref:`load distribution + ` of shard key ranges. + .. seealso:: :ref:`read-operations-sharded-clusters` diff --git a/source/core/sharding-data-partitioning.txt b/source/core/sharding-data-partitioning.txt index 85180ccf562..a0ca0616c71 100644 --- a/source/core/sharding-data-partitioning.txt +++ b/source/core/sharding-data-partitioning.txt @@ -182,11 +182,10 @@ occurs with high :ref:`frequency`. Starting in MongoDB 5.0, you can :ref:`reshard a collection ` by changing a document's shard key. -Starting in MongoDB 4.4, MongoDB provides the -:dbcommand:`refineCollectionShardKey` command. Refining a collection's -shard key allows for a more fine-grained data distribution and can -address situations where the existing key insufficient cardinality -leads to jumbo chunks. +MongoDB provides the :dbcommand:`refineCollectionShardKey` command. +Refining a collection's shard key allows for a more fine-grained data +distribution and can address situations where the existing key insufficient +cardinality leads to jumbo chunks. To learn whether you should reshard your collection or refine your shard key, see :ref:`change-a-shard-key`. diff --git a/source/core/sharding-refine-a-shard-key.txt b/source/core/sharding-refine-a-shard-key.txt index 767d34b04d9..7ee309a923a 100644 --- a/source/core/sharding-refine-a-shard-key.txt +++ b/source/core/sharding-refine-a-shard-key.txt @@ -12,8 +12,6 @@ Refine a Shard Key :depth: 3 :class: singlecol -.. 
versionadded:: 4.4 - Refining a collection's shard key allows for a more fine-grained data distribution and can address situations where the existing key has led to :ref:`jumbo chunks ` due to insufficient diff --git a/source/core/sharding-reshard-a-collection.txt b/source/core/sharding-reshard-a-collection.txt index 544d0eb903d..c894e03e7ad 100644 --- a/source/core/sharding-reshard-a-collection.txt +++ b/source/core/sharding-reshard-a-collection.txt @@ -89,7 +89,7 @@ requirements: - deploy your rewritten application The following queries return an error if the query filter does not - include **both** the current shard key or a unique field (like + include either the current shard key or a unique field (like ``_id``): - :method:`~db.collection.deleteOne()` diff --git a/source/core/sharding-shard-a-collection.txt b/source/core/sharding-shard-a-collection.txt index 76b8322bdfa..305bfc39295 100644 --- a/source/core/sharding-shard-a-collection.txt +++ b/source/core/sharding-shard-a-collection.txt @@ -34,9 +34,9 @@ shard a collection: - Specify a document ``{ : <1|"hashed">, ... }`` where - - ``1`` indicates :doc:`range-based sharding ` + - ``1`` indicates :ref:`range-based sharding ` - - ``"hashed"`` indicates :doc:`hashed sharding `. + - ``"hashed"`` indicates :ref:`hashed sharding `. For more information on the sharding method, see :method:`sh.shardCollection()`. @@ -47,9 +47,9 @@ Shard Key Fields and Values Missing Shard Key Fields ~~~~~~~~~~~~~~~~~~~~~~~~ -Starting in version 4.4, documents in sharded collections can be -missing the shard key fields. A missing shard key falls into the -same range as a ``null``-valued shard key. See :ref:`shard-key-missing`. +Documents in sharded collections can be missing the shard key fields. +A missing shard key falls into the same range as a ``null``-valued shard key. +See :ref:`shard-key-missing`. In version 4.2 and earlier, shard key fields must exist in every document to be able to shard a sharded collection. To set missing shard @@ -69,9 +69,8 @@ Change a Collection's Shard Key Starting in MongoDB 5.0, you can :ref:`reshard a collection ` by changing a document's shard key. -Starting in MongoDB 4.4, you can :ref:`refine a shard key -` by adding a suffix field or fields to the existing -shard key. +You can :ref:`refine a shard key ` by adding a suffix field +or fields to the existing shard key. In MongoDB 4.2 and earlier, the choice of shard key cannot be changed after sharding. diff --git a/source/core/sharding-shard-key.txt b/source/core/sharding-shard-key.txt index 2a086c3addf..9ee95182623 100644 --- a/source/core/sharding-shard-key.txt +++ b/source/core/sharding-shard-key.txt @@ -64,30 +64,28 @@ of the shard key. For a ranged sharded collection, only the following indexes can be :ref:`unique `: -- the index on the shard key +- The index on the shard key -- a :term:`compound index` where the shard key is a :ref:`prefix +- A :term:`compound index` where the shard key is a :ref:`prefix ` -- the default ``_id`` index; **however**, the ``_id`` index only - enforces the uniqueness constraint per shard **if** the ``_id`` field - is **not** the shard key or the prefix of the shard key. +- The default ``_id`` index. - .. important:: Uniqueness and the ``_id`` Index + .. important:: - If the ``_id`` field is not the shard key or the prefix of the - shard key, ``_id`` index only enforces the uniqueness constraint - per shard and **not** across shards. 
+ Sharded clusters only enforce the uniqueness constraint on + ``_id`` fields across the cluster when the ``_id`` field is + also the shard key. - For example, consider a sharded collection (with shard key ``{x: - 1}``) that spans two shards A and B. Because the ``_id`` key is - not part of the shard key, the collection could have a document - with ``_id`` value ``1`` in shard A and another document with - ``_id`` value ``1`` in shard B. + If the ``_id`` field is not the shard key or if it is only + the prefix to the shard key, the uniqueness constraint + applies only to the shard that stores the document. This + means that two or more documents can have the same ``_id`` + value, provided they occur on different shards. - If the ``_id`` field is not the shard key nor the prefix of the - shard key, MongoDB expects applications to enforce the uniqueness - of the ``_id`` values across the shards. + In cases where the ``_id`` field is not the shard key, + MongoDB expects applications to enforce the uniqueness of + ``_id`` values across the shards. The unique index constraints mean that: @@ -115,9 +113,8 @@ You cannot specify a unique constraint on a :ref:`hashed index Missing Shard Key Fields ------------------------ -Starting in version 4.4, documents in sharded collections can be -missing the shard key fields. To set missing shard key fields, see -:ref:`shard-key-missing-set`. +Documents in sharded collections can be missing the shard key fields. +To set missing shard key fields, see :ref:`shard-key-missing-set`. Chunk Range and Missing Shard Key Fields ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/source/core/tailable-cursors.txt b/source/core/tailable-cursors.txt index fdefbe8a6f7..6f8aac14738 100644 --- a/source/core/tailable-cursors.txt +++ b/source/core/tailable-cursors.txt @@ -6,51 +6,67 @@ Tailable Cursors .. default-domain:: mongodb -By default, MongoDB will automatically close a cursor when the client -has exhausted all results in the cursor. However, for :doc:`capped -collections ` you may use a *Tailable -Cursor* that remains open after the client exhausts the results in the +.. facet:: + :name: genre + :values: reference + +.. contents:: On this page + :local: + :backlinks: none + :depth: 2 + :class: singlecol + +By default, MongoDB automatically closes a cursor when the client +exhausts all results in the cursor. However, for :ref:`capped +collections ` you can use a :term:`tailable +cursor` that remains open after the client exhausts the results in the initial cursor. Tailable cursors are conceptually equivalent to the -``tail`` Unix command with the ``-f`` option (i.e. with "follow" -mode). After clients insert new additional documents into a capped -collection, the tailable cursor will continue to retrieve -documents. +``tail`` Unix command with the ``-f`` option ("follow" mode). After +clients insert additional documents into a capped collection, the +tailable cursor continues to retrieve documents. + +Use Cases +--------- Use tailable cursors on capped collections that have high write volumes where indexes aren't practical. For instance, -MongoDB :doc:`replication ` uses tailable cursors to +MongoDB :ref:`replication ` uses tailable cursors to tail the primary's :term:`oplog`. .. note:: - If your query is on an indexed field, do not use tailable cursors, - but instead, use a regular cursor. Keep track of the last value of - the indexed field returned by the query. 
To retrieve the newly - added documents, query the collection again using the last value of - the indexed field in the query criteria, as in the following - example: + If your query is on an indexed field, use a regular cursor instead of + a tailable cursor. Keep track of the last value of the indexed field + returned by the query. To retrieve the newly added documents, query + the collection again using the last value of the indexed field in the + query criteria. For example: .. code-block:: javascript db..find( { indexedField: { $gt: } } ) -Consider the following behaviors related to tailable cursors: +Get Started +----------- -- Tailable cursors do not use indexes and return documents in - :term:`natural order`. +To create a tailable cursor in :binary:`mongosh`, see +:method:`cursor.tailable()`. -- Because tailable cursors do not use indexes, the initial scan for the - query may be expensive; but, after initially exhausting the cursor, - subsequent retrievals of the newly added documents are inexpensive. +To see tailable cursor methods for your driver, see your :driver:`driver +documentation `. -- Tailable cursors may become *dead*, or invalid, if either: +Behavior +-------- - - the query returns no match. +Consider the following behaviors related to tailable cursors: - - the cursor returns the document at the "end" of the collection and - then the application deletes that document. +- Tailable cursors do not use indexes. They return documents in + :term:`natural order`. - A *dead* cursor has an ID of ``0``. +- Because tailable cursors do not use indexes, the initial scan for the + query may be expensive. After initially exhausting the cursor, + subsequent retrievals of the newly added documents are inexpensive. -See your :driver:`driver documentation ` for the -driver-specific method to specify the tailable cursor. +- A tailable cursor can become invalid if the data at its current + position is overwritten by new data. For example, this can happen if + the speed of data insertion is faster than the speed of cursor + iteration. diff --git a/source/core/timeseries-collections.txt b/source/core/timeseries-collections.txt index d13b54a23bb..0d3741b1514 100644 --- a/source/core/timeseries-collections.txt +++ b/source/core/timeseries-collections.txt @@ -12,6 +12,7 @@ Time Series .. meta:: :keywords: iot + :description: Improve query efficiency and reduce disk usage for time series data and secondary indexes by storing time series data in time series collections. .. contents:: On this page :local: @@ -105,19 +106,20 @@ store data in time-order. This format provides the following benefits: Behavior ~~~~~~~~ -Time series collections behave like normal collections. You can insert -and query your data as you normally would. +Time series collections behave like typical collections. You insert +and query data as usual. MongoDB treats time series collections as writable non-materialized :ref:`views ` backed by an internal collection. When you insert data, the internal collection automatically organizes time series data into an optimized storage format. -Since MongoDB 6.3, when you create a new time series collection, MongodDB -generates a :ref:`compound index ` on the -:ref:`metaField and timeField ` fields. Queries on time -series collections take advantage of this index, as well as the optimized -storage format, to improve query performance. 
+Starting in MongoDB 6.3: if you create a new time series collection, +MongoDB also generates a :ref:`compound index ` +on the :ref:`metaField and timeField ` fields. To +improve query performance, queries on time series collections use the +new compound index. The compound index also uses the optimized storage +format. .. tip:: diff --git a/source/core/timeseries/timeseries-best-practices.txt b/source/core/timeseries/timeseries-best-practices.txt index 82c38a8dd2d..8ecdbdf53c7 100644 --- a/source/core/timeseries/timeseries-best-practices.txt +++ b/source/core/timeseries/timeseries-best-practices.txt @@ -143,15 +143,6 @@ Increase the Number of Clients Increasing the number of clients writing data to your collections can improve performance. -.. important:: Disable Retryable Writes - - To write data with multiple clients, you must disable retryable - writes. Retryable writes for time series collections do not combine - writes from multiple clients. - - To learn more about retryable writes and how to disable them, see - :ref:`retryable writes `. - .. _tsc-best-practice-optimize-compression: Optimize Compression @@ -261,4 +252,54 @@ To improve query performance, :ref:`create one or more secondary indexes ` on your ``timeField`` and ``metaField`` to support common query patterns. In versions 6.3 and higher, MongoDB creates a secondary index on the ``timeField`` and -``metaField`` automatically. \ No newline at end of file +``metaField`` automatically. + +Query metaFields on Sub-Fields +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +MongoDB reorders the metaFields of time-series collections, which may +cause servers to store data in a different field order than +applications. If metaFields are objects, queries on entire metaFields +may produce inconsistent results because metaField order may vary +between servers and applications. To optimize queries on time-series +metaFields, query timeseries metaFields on scalar sub-fields rather than +entire metaFields. + +The following example creates a time series collection: + +.. code-block:: javascript + + db.weather.insertMany( [ + { + "metaField": { "sensorId": 5578, "type": "temperature" }, + "timestamp": ISODate( "2021-05-18T00:00:00.000Z" ), + "temp": 12 + }, + { + "metaField": { "sensorId": 5578, "type": "temperature" }, + "timestamp": ISODate( "2021-05-18T04:00:00.000Z" ), + "temp": 11 + } + ] ) + +The following query on the ``sensorId`` and ``type`` scalar sub-fields +returns the first document that matches the query criteria: + +.. code-block:: javascript + + db.weather.findOne( { + "metaField.sensorId": 5578, + "metaField.type": "temperature" + } ) + +Example output: + +.. code-block:: javascript + :copyable: false + + { + _id: ObjectId("6572371964eb5ad43054d572"), + metaField: { sensorId: 5578, type: 'temperature' }, + timestamp: ISODate( "2021-05-18T00:00:00.000Z" ), + temp: 12 + } diff --git a/source/core/timeseries/timeseries-granularity.txt b/source/core/timeseries/timeseries-granularity.txt index cc365855bb2..19bb11ecd19 100644 --- a/source/core/timeseries/timeseries-granularity.txt +++ b/source/core/timeseries/timeseries-granularity.txt @@ -1,14 +1,19 @@ +.. meta:: + :keywords: time series, granularity, IOT, code example, node.js + +.. facet:: + :name: genre + :values: tutorial + +.. facet:: + :name: programming_language + :values: javascript/typescript + .. _timeseries-granularity: ==================================== Set Granularity for Time Series Data ==================================== - -.. default-domain:: mongodb - -.. 
facet:: - :name: genre - :values: reference .. contents:: On this page :local: @@ -16,9 +21,6 @@ Set Granularity for Time Series Data :depth: 2 :class: singlecol -.. meta:: - :keywords: Time series, granularity, IOT - When you create a time series collection, MongoDB automatically creates a ``system.buckets`` :ref:`system collection ` and groups incoming time series data @@ -87,11 +89,12 @@ bucket of data when using a given ``granularity`` value: .. include:: /includes/table-timeseries-granularity-intervals.rst -By default, ``granularity`` is set to ``seconds``. You can improve performance by setting the ``granularity`` value to the -closest match to the time span between incoming measurements from the -same data source. For example, if you are recording weather data from -thousands of sensors but only record data from each sensor once per 5 -minutes, set ``granularity`` to ``"minutes"``. +By default, ``granularity`` is set to ``seconds``. You can improve +performance by setting the ``granularity`` value to the closest match to +the time span between incoming measurements from the same data source. +For example, if you are recording weather data from thousands of sensors +but only record data from each sensor once per 5 minutes, set +``granularity`` to ``"minutes"``. .. code-block:: javascript @@ -125,8 +128,10 @@ Using Custom Bucketing Parameters In MongoDB 6.3 and higher, instead of ``granularity``, you can set bucket boundaries manually using the two custom bucketing parameters. -Consider this approach if you need the additional precision to optimize -a high volume of queries and :dbcommand:`insert` operations. +Consider this approach if you expect to query data for fixed time +intervals, such as every 4 hours starting at midnight. Ensuring buckets +don't overlap between those periods optimizes for high query volume and +:dbcommand:`insert` operations. To use custom bucketing parameters, set both parameters to the same value, and do not set ``granularity``: @@ -139,9 +144,10 @@ value, and do not set ``granularity``: bucket, MongoDB rounds down the document's timestamp value by this interval to set the minimum time for the bucket. -For the weather station example with 5 minute sensor intervals, you -could adjust bucketing by setting the custom bucketing parameters to -300 seconds, instead of using a ``granularity`` of ``"minutes"``: +For the weather station example, if you generate summary reports every +4 hours, you could adjust bucketing by setting the custom bucketing +parameters to 14400 seconds instead of using a ``granularity`` +of ``"minutes"``: .. code-block:: javascript @@ -151,15 +157,15 @@ could adjust bucketing by setting the custom bucketing parameters to timeseries: { timeField: "timestamp", metaField: "metadata", - bucketMaxSpanSeconds: 300, - bucketRoundingSeconds: 300 + bucketMaxSpanSeconds: 14400, + bucketRoundingSeconds: 14400 } } ) -If a document with a time of ``2023-03-27T18:24:35Z`` does not fit an +If a document with a time of ``2023-03-27T16:24:35Z`` does not fit an existing bucket, MongoDB creates a new bucket with a minimum time of -``2023-03-27T18:20:00Z`` and a maximum time of ``2023-03-27T18:24:59Z``. +``2023-03-27T16:00:00Z`` and a maximum time of ``2023-03-27T19:59:59Z``. 
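The following sketch assumes a hypothetical ``weather24h`` collection created
with the 14400-second parameters above; the inserted measurement lands in the
bucket that starts at ``16:00:00``:

.. code-block:: javascript

   // With bucketRoundingSeconds: 14400, the 16:24:35 timestamp is rounded
   // down to a bucket whose minimum time is 16:00:00
   db.weather24h.insertOne( {
      metadata: { sensorId: 5578, type: "temperature" },
      timestamp: ISODate( "2023-03-27T16:24:35.000Z" ),
      temp: 12
   } )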
Change Time Series Granularity ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/source/core/timeseries/timeseries-limitations.txt b/source/core/timeseries/timeseries-limitations.txt index afd2d8d76d6..2a4e39825fd 100644 --- a/source/core/timeseries/timeseries-limitations.txt +++ b/source/core/timeseries/timeseries-limitations.txt @@ -1,20 +1,22 @@ +.. meta:: + :keywords: time series, IOT + +.. facet:: + :name: genre + :values: reference + .. _manual-timeseries-collection-limitations: ================================== Time Series Collection Limitations ================================== -.. default-domain:: mongodb - .. contents:: On this page :local: :backlinks: none :depth: 2 :class: singlecol -.. meta:: - :keywords: Time Series, IOT - This page describes limitations on using :ref:`time series collections `. @@ -26,22 +28,27 @@ The following features are not supported for time series collections: * :atlas:`Atlas Search ` * :ref:`Change streams ` * :ref:`{+csfle+} ` -* :realm:`Database Triggers ` -* :realm:`GraphQL API ` +* :appservices:`Database Triggers ` +* :appservices:`GraphQL API (deprecated) ` * :ref:`Schema validation rules ` * :dbcommand:`reIndex` * :dbcommand:`renameCollection` -:realm:`Atlas Device Sync ` is only supported if the time series -collections are asymmetrically synchronized. For details, see -:realm:`Enable Atlas Device Sync `. +:appservices:`Atlas Device Sync ` support is limited to time +series collections that use :appservices:`Atlas Data Ingest +`. -Aggregation $merge -~~~~~~~~~~~~~~~~~~ +Aggregation $merge and $out +~~~~~~~~~~~~~~~~~~~~~~~~~~~ You cannot use the :pipeline:`$merge` aggregation stage to add data from another collection to a time series collection. +.. versionchanged:: 7.0.3 + + You can use the :pipeline:`$out` aggregation stage to write + documents to a time series collection. + .. _timeseries-limitations-updates-deletes: .. _timeseries-limitations-deletes: @@ -173,12 +180,23 @@ parameters later. .. _timeseries-limitations-granularity: +Granularity +~~~~~~~~~~~ + +Bucket Size +``````````` +For any configuration of granularity parameters, the maximum +size of a bucket is 1000 measurements or 125KB of data, +whichever is lower. MongoDB may also enforce a lower maximum size for +high cardinality data with many unique values, so that the working set +of buckets fits within the :ref:`WiredTiger cache `. + Modifying Bucket Parameters -~~~~~~~~~~~~~~~~~~~~~~~~~~~ +``````````````````````````` -Once you set a collection's ``granularity`` or custom bucketing +Once you set a collection's ``granularity`` or the custom bucketing parameters ``bucketMaxSpanSeconds`` and ``bucketRoundingSeconds``, you -can increase them, but not decrease them. +can increase the timespan covered by a bucket, but not decrease it. Use the :dbcommand:`collMod` command to modify the parameters. For example: .. code-block:: javascript @@ -205,15 +223,11 @@ collections. In versions earlier than MongoDB 5.0.6, you cannot shard time series collections. -Sharding Administration Commands +Sharding Administration Commands ```````````````````````````````` -Starting in MongoDB 5.2 (and 5.1.2, 5.0.6), you can run :ref:`sharding -administration commands ` (such as -:dbcommand:`moveChunk`) on the ``system.buckets`` collection. - -In versions earlier than MongoDB 5.0.6, you cannot run sharding -administration commands for sharded time series collections. +You cannot run sharding administration commands on sharded time series +collections. 
Shard Key Fields ```````````````` @@ -223,7 +237,8 @@ Shard Key Fields Resharding `````````` -You cannot reshard sharded time series collections. +You cannot reshard a sharded time series collection. However, you can +:ref:`refine its shard key `. Transactions ~~~~~~~~~~~~ diff --git a/source/core/timeseries/timeseries-procedures.txt b/source/core/timeseries/timeseries-procedures.txt index 6690c8aea1f..1c5c411cc02 100644 --- a/source/core/timeseries/timeseries-procedures.txt +++ b/source/core/timeseries/timeseries-procedures.txt @@ -1,11 +1,20 @@ +.. meta:: + :keywords: time series, IOT, code example, node.js + +.. facet:: + :name: genre + :values: tutorial + +.. facet:: + :name: programming_language + :values: javascript/typescript + .. _timeseries-create-query-procedures: ========================================= Create and Query a Time Series Collection ========================================= -.. default-domain:: mongodb - .. contents:: On this page :local: :backlinks: none @@ -61,7 +70,7 @@ Create a Time Series Collection After creation, you can modify granularity or bucket definitions using the :dbcommand:`collMod` method. However, - you can only increase the timespan covered by each bucket. You + you can only increase the time span covered by each bucket. You cannot decrease it. A. Define a ``granularity`` field: @@ -98,9 +107,9 @@ Create a Time Series Collection timeseries: { timeField: "timestamp", metaField: "metadata", - granularity: "seconds", - expireAfterSeconds: "86400" - } + granularity: "seconds" + }, + expireAfterSeconds: 86400 .. _time-series-fields: @@ -139,7 +148,7 @@ A time series collection includes the following fields: - integer - .. include:: /includes/time-series/fact-bucketroundingseconds-field-description.rst - * - ``timeseries.expireAfterSeconds`` + * - ``expireAfterSeconds`` - integer - Optional. Enable the automatic deletion of documents in a time series collection by specifying the number of seconds @@ -262,6 +271,9 @@ Example output: _id: ObjectId("62f11bbf1e52f124b84479ad") } +For more information on time series queries, see +:ref:`tsc-best-practice-optimize-query-performance`. + Run Aggregations on a Time Series Collection -------------------------------------------- diff --git a/source/core/timeseries/timeseries-secondary-index.txt b/source/core/timeseries/timeseries-secondary-index.txt index 89f752d5d81..56ee24af04b 100644 --- a/source/core/timeseries/timeseries-secondary-index.txt +++ b/source/core/timeseries/timeseries-secondary-index.txt @@ -306,6 +306,20 @@ Starting in MongoDB 6.0, you can: - Use the ``metaField`` with :ref:`2dsphere <2dsphere-index>` indexes. + .. example:: + + .. code-block:: javascript + + db.sensorData.createIndex({ "metadata.location": "2dsphere" }) + +- Create ``2dsphere`` indexes on measurements. + + .. example:: + + .. code-block:: javascript + + db.sensorData.createIndex({ "currentConditions.tempF": "2dsphere" }) + .. note:: .. include:: /includes/time-series-secondary-indexes-downgrade-FCV.rst diff --git a/source/core/timeseries/timeseries-shard-collection.txt b/source/core/timeseries/timeseries-shard-collection.txt index e48a49269d6..66b611c292c 100644 --- a/source/core/timeseries/timeseries-shard-collection.txt +++ b/source/core/timeseries/timeseries-shard-collection.txt @@ -22,13 +22,6 @@ Use this tutorial to shard a new or existing time series collection. limitations ` for time series collections. -Limitations ------------ - -You can't :ref:`reshard ` a sharded time series -collection. 
However, you can :ref:`refine its shard key -`. - Prerequisites ------------- diff --git a/source/core/transactions-in-applications.txt b/source/core/transactions-in-applications.txt index 5c6ae3dee78..f3622a7f687 100644 --- a/source/core/transactions-in-applications.txt +++ b/source/core/transactions-in-applications.txt @@ -341,6 +341,8 @@ labeled: To handle :ref:`unknown-transaction-commit-result`, applications should explicitly incorporate retry logic for the error. +.. _txn-core-api-retry: + Example ~~~~~~~ @@ -392,17 +394,18 @@ the transaction as a whole can be retried. - The core transaction API does not incorporate retry logic for ``"TransientTransactionError"``. To handle ``"TransientTransactionError"``, applications should explicitly - incorporate retry logic for the error. + incorporate retry logic for the error. To view an example that incorporates + retry logic for transient errors, see :ref:`Core API Example + `. .. _unknown-transaction-commit-result: ``UnknownTransactionCommitResult`` ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -The commit operations are :doc:`retryable write operations -`. If the commit operation encounters an error, -MongoDB drivers retry the commit regardless of the value of -:urioption:`retryWrites`. +Commit operations are :ref:`retryable write operations `. If +the commit operation encounters an error, MongoDB drivers retry the commit +regardless of the value of :urioption:`retryWrites`. If the commit operation encounters an error labeled ``"UnknownTransactionCommitResult"``, the commit can be retried. @@ -413,7 +416,9 @@ If the commit operation encounters an error labeled - The core transaction API does not incorporate retry logic for ``"UnknownTransactionCommitResult"``. To handle ``"UnknownTransactionCommitResult"``, applications should explicitly - incorporate retry logic for the error. + incorporate retry logic for the error. To view an example that incorporates + retry logic for unknown commit errors, see :ref:`Core API Example + `. .. _transactionTooLargeForCache-error: diff --git a/source/core/transactions-operations.txt b/source/core/transactions-operations.txt index 05c971646b3..228b5abb258 100644 --- a/source/core/transactions-operations.txt +++ b/source/core/transactions-operations.txt @@ -35,7 +35,7 @@ The following read/write operations are allowed in transactions: You can update a document's shard key value (unless the shard key field is the immutable ``_id`` field) by issuing single-document update / findAndModify operations either in a transaction or as a - :doc:`retryable write `. For details, see + :ref:`retryable write `. For details, see :ref:`update-shard-key`. .. _transactions-operations-count: diff --git a/source/core/transactions-production-consideration.txt b/source/core/transactions-production-consideration.txt index bd782a27974..edf1b5b3ebf 100644 --- a/source/core/transactions-production-consideration.txt +++ b/source/core/transactions-production-consideration.txt @@ -140,7 +140,7 @@ Transactions and Security ------------------------- - If running with :ref:`access control `, you must - have :doc:`privileges ` for the + have :ref:`privileges ` for the :ref:`operations in the transaction `. - If running with :ref:`auditing `, operations in an diff --git a/source/core/transactions.txt b/source/core/transactions.txt index ff42fa745a2..21e5fcb17e7 100644 --- a/source/core/transactions.txt +++ b/source/core/transactions.txt @@ -16,6 +16,7 @@ Transactions .. 
meta:: :keywords: motor, java sync, code example, node.js + :description: Use distributed transactions across multiple operations, collections, databases, documents, and shards. .. contents:: On this page :local: @@ -80,7 +81,8 @@ upper-right to set the language of the following example. /* For a replica set, include the replica set name and a seedlist of the members in the URI string; e.g. String uri = "mongodb://mongodb0.example.com:27017,mongodb1.example.com:27017/admin?replicaSet=myRepl"; - For a sharded cluster, connect to the mongos instances; e.g. + For a sharded cluster, connect to the mongos instances. + For example: String uri = "mongodb://mongos0.example.com:27017,mongos1.example.com:27017:27017/admin"; */ @@ -277,7 +279,6 @@ upper-right to set the language of the following example. .. literalinclude:: /driver-examples/DocumentationTransactionsExampleSpec.scala :language: scala - .. seealso:: For an example in :binary:`~bin.mongosh`, see @@ -290,12 +291,12 @@ Transactions and Atomicity .. include:: /includes/transactions/distributed-transaction-repl-shard-support.rst -Distributed transactions are atomic. They provide an "all-or-nothing" -proposition: +Distributed transactions are atomic: + +- Transactions either apply all data changes or roll back the changes. -- When a transaction commits, all data changes made in the transaction - are saved and visible outside the transaction. That is, a transaction - will not commit some of its changes while rolling back others. +- If a transaction commits, all data changes made in the transaction + are saved and are visible outside of the transaction. .. include:: /includes/extracts/transactions-committed-visibility.rst @@ -334,11 +335,11 @@ For a list of operations not supported in transactions, see .. _transactions-create-collections-indexes: -Create Collections and Indexes In a Transaction +Create Collections and Indexes in a Transaction ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -You can perform the following operations inside of a :ref:`distributed -transaction ` as long as the transaction is not a +You can perform the following operations in a :ref:`distributed +transaction ` if the transaction is not a cross-shard write transaction: - Create collections. @@ -352,11 +353,11 @@ When creating a collection inside a transaction: `, such as with: - an :ref:`insert operation ` - against a non-existing collection, or + for a non-existent collection, or - an :ref:`update/findAndModify operation ` with ``upsert: true`` - against a non-existing collection. + for a non-existent collection. - You can :ref:`explicitly create a collection ` using the :dbcommand:`create` @@ -366,7 +367,7 @@ When :ref:`creating an index inside a transaction ` [#create-existing-index]_, the index to create must be on either: -- a non-existing collection. The collection is created as part of the +- a non-existent collection. The collection is created as part of the operation. - a new empty collection created earlier in the same transaction. @@ -387,7 +388,10 @@ Restrictions - For explicit creation of a collection or an index inside a transaction, the transaction read concern level must be - :readconcern:`"local"`. Explicit creation is through: + :readconcern:`"local"`. + + To explicitly create collections and indexes, use the following + commands and methods: .. 
list-table:: :header-rows: 1 @@ -444,10 +448,9 @@ Restricted Operations Transactions and Sessions ------------------------- -- Transactions are associated with a session +- Transactions are associated with a session. -- At any given time, you can have at most one open transaction for a - session. +- You can have at most one open transaction at a time for a session. - When using the drivers, each operation in the transaction must be associated with the session. Refer to your driver specific @@ -487,7 +490,7 @@ Transactions and Read Concern ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Operations in a transaction use the transaction-level :doc:`read -concern `. That is, any read concern set at +concern `. This means a read concern set at the collection and database level is ignored inside the transaction. You can set the transaction-level :doc:`read concern @@ -498,8 +501,8 @@ You can set the transaction-level :doc:`read concern - If transaction-level and the session-level read concern are unset, the transaction-level read concern defaults to the client-level read - concern. By default, client-level read concern is - :readconcern:`"local"` for reads against the primary. See also: + concern. By default, the client-level read concern is + :readconcern:`"local"` for reads on the primary. See also: - :ref:`transactions-read-preference` - :doc:`/reference/mongodb-defaults` @@ -559,16 +562,16 @@ Transactions and Write Concern Transactions use the transaction-level :doc:`write concern ` to commit the write operations. Write -operations inside transactions must be issued without explicit write +operations inside transactions must be run without an explicit write concern specification and use the default write concern. At commit -time, the writes are then commited using the transaction-level write +time, the writes are committed using the transaction-level write concern. .. tip:: - Do not explicitly set the write concern for the individual write + Don't explicitly set the write concern for the individual write operations inside a transaction. Setting write concerns for the - individual write operations inside a transaction results in an error. + individual write operations inside a transaction returns an error. You can set the transaction-level :doc:`write concern ` at the transaction start: @@ -599,7 +602,7 @@ values, including: ```````` - Write concern :writeconcern:`w: 1 <\>` returns - acknowledgement after the commit has been applied to the primary. + acknowledgment after the commit has been applied to the primary. .. important:: @@ -621,9 +624,8 @@ values, including: ````````````````` - Write concern :writeconcern:`w: "majority" <"majority">` returns - acknowledgement after the commit has been applied to a majority - (M) of voting members, meaning the commit has been applied to the - primary and (M-1) voting secondaries. + acknowledgment after the commit has been applied to a majority of + voting members. - When you commit with :writeconcern:`w: "majority" <"majority">` write concern, transaction-level :readconcern:`"majority"` read @@ -652,12 +654,15 @@ values, including: General Information ------------------- +The following sections describe additional considerations for +transactions. + Production Considerations ~~~~~~~~~~~~~~~~~~~~~~~~~ -For various production considerations with using transactions, see +For transactions in production environments, see :ref:`production-considerations`. In addition, for sharded -clusters, see also :ref:`production-considerations-sharded`. 
+clusters, see :ref:`production-considerations-sharded`. Arbiters ~~~~~~~~ @@ -678,7 +683,7 @@ Shard Configuration Restriction Diagnostics ~~~~~~~~~~~ -MongoDB provides various transactions metrics: +To obtain transaction status and metrics, use the following methods: .. list-table:: :widths: 40 60 @@ -719,9 +724,9 @@ MongoDB provides various transactions metrics: * - :binary:`~bin.mongod` and :binary:`~bin.mongos` log messages - - Includes information on slow transactions, which are transactions + - Includes information on slow transactions (which are transactions that exceed the :setting:`operationProfiling.slowOpThresholdMs` - threshold) under the :data:`TXN` log component. + threshold) in the :data:`TXN` log component. .. _transactions-fcv: @@ -775,8 +780,8 @@ Starting in MongoDB 5.2 (and 5.0.4): :parameter:`metadataRefreshInTransactionMaxWaitBehindCritSecMS` parameter. -Additional Transactions Topics ------------------------------- +Learn More +---------- - :doc:`/core/transactions-in-applications` - :doc:`/core/transactions-production-consideration` diff --git a/source/core/views.txt b/source/core/views.txt index 2e6397e0b1c..d53092c4b19 100644 --- a/source/core/views.txt +++ b/source/core/views.txt @@ -10,6 +10,9 @@ Views :name: genre :values: reference +.. meta:: + :description: Understand use cases for read-only views in MongoDB. + .. contents:: On this page :local: :backlinks: none @@ -91,13 +94,6 @@ operations. .. include:: /includes/fact-allowDiskUseByDefault.rst -Sharded Views -~~~~~~~~~~~~~ - -Views are considered sharded if their underlying collection is sharded. -You cannot specify a sharded view for the ``from`` field in -:pipeline:`$lookup` and :pipeline:`$graphLookup` operations. - Time Series Collections ~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/source/core/views/create-view.txt b/source/core/views/create-view.txt index 972727d6af5..0e99743117f 100644 --- a/source/core/views/create-view.txt +++ b/source/core/views/create-view.txt @@ -75,8 +75,6 @@ Some operations are not available with views: - :query:`$text` operator, since ``$text`` operation in aggregation is valid only for the first stage. -- :pipeline:`$geoNear` pipeline stage. - - Renaming a view. For more information, see :ref:`views-supported-operations`. diff --git a/source/core/views/join-collections-with-view.txt b/source/core/views/join-collections-with-view.txt index 63b71099aaa..7f4d67d0242 100644 --- a/source/core/views/join-collections-with-view.txt +++ b/source/core/views/join-collections-with-view.txt @@ -14,6 +14,9 @@ Use a View to Join Two Collections :name: genre :values: tutorial +.. meta:: + :description: Use $lookup to create a view over two collections and then run queries against the view without having to construct or maintain complex pipelines. + .. contents:: On this page :local: :backlinks: none diff --git a/source/core/wiredtiger.txt b/source/core/wiredtiger.txt index ed6634162ac..a91754c3d51 100644 --- a/source/core/wiredtiger.txt +++ b/source/core/wiredtiger.txt @@ -11,6 +11,9 @@ WiredTiger Storage Engine :name: genre :values: reference +.. meta:: + :description: Learn how WiredTiger, MongoDB's default storage engine, works. + .. contents:: On this page :local: :backlinks: none @@ -57,6 +60,22 @@ engine: WiredTiger doesn't allocate cache on a per-database or per-collection level. +Transaction (Read and Write) Concurrency +---------------------------------------- + +.. 
include:: /includes/fact-dynamic-concurrency.rst + +To view the number of concurrent read transactions (read tickets) and +write transactions (write tickets) allowed in the WiredTiger storage +engine, use the :dbcommand:`serverStatus` command and see the +:serverstatus:`wiredTiger.concurrentTransactions` parameter. + +.. note:: + + A low value of :serverstatus:`wiredTiger.concurrentTransactions` does + not indicate a cluster overload. Use the number of queued read and + write tickets as an indication of cluster overload. + Document Level Concurrency -------------------------- diff --git a/source/crud.txt b/source/crud.txt index cde113611a9..b05285c4b36 100644 --- a/source/crud.txt +++ b/source/crud.txt @@ -11,6 +11,7 @@ MongoDB CRUD Operations :values: reference .. meta:: + :description: Manage documents in collections by running create, read, update, and delete operations. :keywords: atlas .. contents:: On this page @@ -52,7 +53,7 @@ collection: In MongoDB, insert operations target a single :term:`collection`. All write operations in MongoDB are :doc:`atomic ` on the level of a single -:doc:`document `. +:ref:`document `. .. include:: /images/crud-annotated-mongodb-insertOne.rst diff --git a/source/data-center-awareness.txt b/source/data-center-awareness.txt index 3d162215ab4..54dc3b05ad8 100644 --- a/source/data-center-awareness.txt +++ b/source/data-center-awareness.txt @@ -55,5 +55,3 @@ Further Reading :hidden: /core/workload-isolation - /core/zone-sharding - /tutorial/manage-shard-zone diff --git a/source/faq.txt b/source/faq.txt index 2e52f5a981f..d8686571ddd 100644 --- a/source/faq.txt +++ b/source/faq.txt @@ -1,3 +1,5 @@ +.. _faq-manual: + ========================== Frequently Asked Questions ========================== diff --git a/source/faq/diagnostics.txt b/source/faq/diagnostics.txt index bf9d5a92435..87bdaaaf408 100644 --- a/source/faq/diagnostics.txt +++ b/source/faq/diagnostics.txt @@ -16,8 +16,9 @@ This document provides answers to common diagnostic questions and issues. If you don't find the answer you're looking for, check -the :doc:`complete list of FAQs ` or post your question to the -`MongoDB Community `_. +the :ref:`complete list of FAQs ` or post your question to +the `MongoDB Community +`_. Where can I find information about a ``mongod`` process that stopped running unexpectedly? ------------------------------------------------------------------------------------------ @@ -275,7 +276,7 @@ consider the following options, depending on the nature of the impact: #. If the balancer is always migrating chunks to the detriment of overall cluster performance: - - You may want to attempt :doc:`decreasing the chunk size ` + - You may want to attempt :ref:`decreasing the chunk size ` to limit the size of the migration. - Your cluster may be over capacity, and you may want to attempt to diff --git a/source/faq/indexes.txt b/source/faq/indexes.txt index 6d48e08d765..8eaa37d236e 100644 --- a/source/faq/indexes.txt +++ b/source/faq/indexes.txt @@ -63,9 +63,9 @@ To terminate an in-progress index build, use the in-progress index builds in replica sets or sharded clusters. You cannot terminate a replicated index build on secondary members of a replica -set. You must first drop the index on the primary. Starting in version 4.4, the -primary stops the index build and creates an associated ``abortIndexBuild`` -:term:`oplog` entry. Secondaries that replicate the ``abortIndexBuild`` oplog +set. You must first drop the index on the primary. 
The primary stops the index +build and creates an associated ``abortIndexBuild`` :term:`oplog` entry. +Secondaries that replicate the ``abortIndexBuild`` oplog entry stop the in-progress index build and discard the build job. To learn more, see :ref:`dropIndexes-cmd-index-builds`. diff --git a/source/faq/replica-sets.txt b/source/faq/replica-sets.txt index 66da24dc760..3826d8f2671 100644 --- a/source/faq/replica-sets.txt +++ b/source/faq/replica-sets.txt @@ -27,7 +27,7 @@ What kind of replication does MongoDB support? ---------------------------------------------- MongoDB supports :ref:`Replica sets `, which can have up -to :ref:`50 nodes <3.0-replica-sets-max-members>`. +to 50 nodes. Does replication work over the Internet and WAN connections? ------------------------------------------------------------ diff --git a/source/faq/sharding.txt b/source/faq/sharding.txt index af207e34716..813d021c216 100644 --- a/source/faq/sharding.txt +++ b/source/faq/sharding.txt @@ -14,7 +14,7 @@ FAQ: Sharding with MongoDB This document answers common questions about :doc:`/sharding`. See also the :doc:`/sharding` section in the manual, which provides an -:doc:`overview of sharding `, including details on: +:ref:`overview of sharding `, including details on: - :ref:`Shard Keys and Considerations for Shard Key Selection ` @@ -45,9 +45,8 @@ that you are running: - Starting in MongoDB 5.0, you can :ref:`reshard a collection ` by changing a document's shard key. -- Starting in MongoDB 4.4, you can :ref:`refine a shard key - ` by adding a suffix field or fields to the - existing shard key. +- You can :ref:`refine a shard key ` by adding a suffix field + or fields to the existing shard key. - In MongoDB 4.2 and earlier, the choice of shard key cannot be changed after sharding. diff --git a/source/geospatial-queries.txt b/source/geospatial-queries.txt index 1f9b98b08b8..dadcecba25c 100644 --- a/source/geospatial-queries.txt +++ b/source/geospatial-queries.txt @@ -13,7 +13,9 @@ Geospatial Queries .. facet:: :name: genre :values: reference - + +.. meta:: + :description: How to query geospatial data such as geoJSON objects and legacy coordinate pairs. .. contents:: On this page :local: @@ -26,7 +28,12 @@ introduces MongoDB's geospatial features. .. |page-topic| replace:: run geospatial queries -.. include:: /includes/fact-atlas-compatible.rst +Compatibility +------------- + +.. |operator-method| replace:: geospatial queries + +.. include:: /includes/fact-compatibility.rst For deployments hosted in {+atlas+}, you can run geospatial queries in the UI by using the query :guilabel:`Filter` bar or aggregation diff --git a/source/images/client-side-field-level-encryption-diagram.svg b/source/images/client-side-field-level-encryption-diagram.svg index de5f76b3640..6394a994f6f 100644 --- a/source/images/client-side-field-level-encryption-diagram.svg +++ b/source/images/client-side-field-level-encryption-diagram.svg @@ -1 +1,1850 @@ - \ No newline at end of file + + + + \ No newline at end of file diff --git a/source/images/crud-write-concern-ack.rst b/source/images/crud-write-concern-ack.rst index 8731e0a927c..3737c51562d 100644 --- a/source/images/crud-write-concern-ack.rst +++ b/source/images/crud-write-concern-ack.rst @@ -1,3 +1,3 @@ .. figure:: /images/crud-write-concern-ack.bakedsvg.svg - :alt: Write operation to a ``mongod`` instance with write concern of ``acknowledged``. The client waits for acknowledgement of success or exception. 
+ :alt: Write operation to a ``mongod`` instance with write concern of ``acknowledged``. The client waits for acknowledgment of success or exception. :figwidth: 460px diff --git a/source/images/crud-write-concern-journal.rst b/source/images/crud-write-concern-journal.rst index 2ec2fbbd67c..7f614b9cb74 100644 --- a/source/images/crud-write-concern-journal.rst +++ b/source/images/crud-write-concern-journal.rst @@ -1,3 +1,3 @@ .. figure:: /images/crud-write-concern-journal.bakedsvg.svg - :alt: Write operation to a ``mongod`` instance with write concern of ``journaled``. The ``mongod`` sends acknowledgement after it commits the write operation to the journal. + :alt: Write operation to a ``mongod`` instance with write concern of ``journaled``. The ``mongod`` sends acknowledgment after it commits the write operation to the journal. :figwidth: 600px diff --git a/source/images/crud-write-concern-unack.rst b/source/images/crud-write-concern-unack.rst index 9b5a506d0db..c8e80275728 100644 --- a/source/images/crud-write-concern-unack.rst +++ b/source/images/crud-write-concern-unack.rst @@ -1,3 +1,3 @@ .. figure:: /images/crud-write-concern-unack.bakedsvg.svg - :alt: Write operation to a ``mongod`` instance with write concern of ``unacknowledged``. The client does not wait for any acknowledgement. + :alt: Write operation to a ``mongod`` instance with write concern of ``unacknowledged``. The client does not wait for any acknowledgment. :figwidth: 460px diff --git a/source/images/sharded-cluster-hashed-distribution.bakedsvg.svg b/source/images/sharded-cluster-hashed-distribution.bakedsvg.svg index 9241c623a8e..f3ac25d70bb 100644 --- a/source/images/sharded-cluster-hashed-distribution.bakedsvg.svg +++ b/source/images/sharded-cluster-hashed-distribution.bakedsvg.svg @@ -1 +1 @@ - \ No newline at end of file + \ No newline at end of file diff --git a/source/includes/5.0-changes/removed-parameters.rst b/source/includes/5.0-changes/removed-parameters.rst index 22b7d73e3bd..6edca637825 100644 --- a/source/includes/5.0-changes/removed-parameters.rst +++ b/source/includes/5.0-changes/removed-parameters.rst @@ -20,3 +20,19 @@ MongoDB 5.0 removes the following server parameters: parameter. In 5.0+, collection and index creation inside of transactions is always enabled. You can no longer use the server parameter to disable this behavior. + + * - ``connPoolMaxShardedConnsPerHost`` + + - MongoDB 5.0 removes the ``connPoolMaxShardedConnsPerHost`` server + parameter. + + * - ``connPoolMaxShardedInUseConnsPerHost`` + + - MongoDB 5.0 removes the ``connPoolMaxShardedInUseConnsPerHost`` server + parameter. + + * - ``shardedConnPoolIdleTimeoutMinutes`` + + - MongoDB 5.0 removes the ``shardedConnPoolIdleTimeoutMinutes`` server + parameter. + diff --git a/source/includes/7.0-concurrent-transactions.rst b/source/includes/7.0-concurrent-transactions.rst index 6a26bae6e48..f86070e72bf 100644 --- a/source/includes/7.0-concurrent-transactions.rst +++ b/source/includes/7.0-concurrent-transactions.rst @@ -1,6 +1,7 @@ -Starting in MongoDB 7.0, a default algorithm is used to dynamically adjust -the maximum number of concurrent storage engine transactions (including both -read and write tickets) to optimize database throughput during overload. +Starting in version 7.0, MongoDB uses a default algorithm to dynamically +adjust the maximum number of concurrent storage engine transactions +(including both read and write tickets) to optimize database throughput +during overload. 
The following table summarizes how to identify overload scenarios for MongoDB 7.0 and prior releases: diff --git a/source/includes/access-create-user.rst b/source/includes/access-create-user.rst index 7e22b00aa66..58d56c487ed 100644 --- a/source/includes/access-create-user.rst +++ b/source/includes/access-create-user.rst @@ -8,4 +8,4 @@ The :authrole:`userAdmin` and :authrole:`userAdminAnyDatabase` built-in roles provide :authaction:`createUser` and :authaction:`grantRole` actions on their -respective :doc:`resources `. +respective :ref:`resources `. diff --git a/source/includes/aggregation/convert-to-bool-table.rst b/source/includes/aggregation/convert-to-bool-table.rst new file mode 100644 index 00000000000..f8ab05af1fb --- /dev/null +++ b/source/includes/aggregation/convert-to-bool-table.rst @@ -0,0 +1,66 @@ +.. list-table:: + :header-rows: 1 + :widths: 55 50 + + * - Input Type + - Behavior + + * - Array + - Returns true + + * - Binary data + - Returns true + + * - Boolean + - No-op. Returns the boolean value. + + * - Code + - Returns true + + * - Date + - Returns true + + * - Decimal + - | Returns true if not zero + | Returns false if zero + + * - Double + - | Returns true if not zero + | Returns false if zero + + * - Integer + - | Returns true if not zero + | Returns false if zero + + * - JavaScript + - Returns true + + * - Long + - | Returns true if not zero + | Returns false if zero + + * - MaxKey + - Returns true + + * - MinKey + - Returns true + + * - Null + - |null-description| + + * - Object + - Returns true + + * - ObjectId + - Returns true + + * - Regular expression + - Returns true + + * - String + - Returns true + + * - Timestamp + - Returns true + +To learn more about data types in MongoDB, see :ref:`bson-types`. diff --git a/source/includes/analyzeShardKey-limitations.rst b/source/includes/analyzeShardKey-limitations.rst index eb90f24578d..cd28d1949fb 100644 --- a/source/includes/analyzeShardKey-limitations.rst +++ b/source/includes/analyzeShardKey-limitations.rst @@ -1,5 +1,5 @@ - You cannot run |analyzeShardKey| on Atlas - :atlas:`multi-tenant ` + :atlas:`multi-tenant ` configurations. - You cannot run |analyzeShardKey| on standalone deployments. - You cannot run |analyzeShardKey| directly against a diff --git a/source/includes/analyzeShardKey-method-command-fields.rst b/source/includes/analyzeShardKey-method-command-fields.rst index cd6ff1c4573..1cb7e4ec2f5 100644 --- a/source/includes/analyzeShardKey-method-command-fields.rst +++ b/source/includes/analyzeShardKey-method-command-fields.rst @@ -16,7 +16,7 @@ There is no default value. - * - ``keyCharacteristics`` + * - ``opts.keyCharacteristics`` - boolean - Optional - Whether or not the metrics about the characteristics of the shard @@ -25,7 +25,7 @@ Defaults to ``true``. - * - ``readWriteDistribution`` + * - ``opts.readWriteDistribution`` - boolean - Optional - Whether or not the metrics about the read and write distribution @@ -34,7 +34,7 @@ Defaults to ``true``. - * - ``sampleRate`` + * - ``opts.sampleRate`` - double - Optional - The proportion of the documents in the collection to sample when @@ -45,7 +45,7 @@ There is no default value. 
- * - ``sampleSize`` + * - ``opts.sampleSize`` - integer - Optional - The number of documents to sample when calculating the metrics diff --git a/source/includes/analyzeShardKey-supporting-indexes.rst b/source/includes/analyzeShardKey-supporting-indexes.rst index 2e3947b4654..de9d6dfa235 100644 --- a/source/includes/analyzeShardKey-supporting-indexes.rst +++ b/source/includes/analyzeShardKey-supporting-indexes.rst @@ -34,3 +34,6 @@ index requirements: - Index is not :ref:`multi-key ` - Index is not :ref:`sparse ` - Index is not :ref:`partial ` + +To create supporting indexes, use the +:method:`db.collection.createIndex()` method. diff --git a/source/includes/atlas-nav/steps.rst b/source/includes/atlas-nav/steps.rst new file mode 100644 index 00000000000..e69de29bb2d diff --git a/source/includes/atlas-search-commands/command-output/search-index-statuses.rst b/source/includes/atlas-search-commands/command-output/search-index-statuses.rst index 275a96b2eda..e98a77eeed8 100644 --- a/source/includes/atlas-search-commands/command-output/search-index-statuses.rst +++ b/source/includes/atlas-search-commands/command-output/search-index-statuses.rst @@ -27,19 +27,41 @@ following: - For an existing index, Atlas Search uses the old index definition for queries until the index rebuild is complete. + An index in the ``BUILDING`` state may be queryable or + non-queryable. + + * - ``DOES_NOT_EXIST`` + - The index does not exist. + + An index in the ``DOES_NOT_EXIST`` state is always non-queryable. + + * - ``DELETING`` + - Atlas is deleting the index. + + An index in the ``DELETING`` state is always non-queryable. + * - ``FAILED`` - The index build failed. Indexes can enter the ``FAILED`` state due to an invalid index definition. + + An index in the ``FAILED`` state may be queryable or + non-queryable. * - ``PENDING`` - Atlas has not yet started building the index. + An index in the ``PENDING`` state is always non-queryable. + * - ``READY`` - The index is ready and can support queries. + An index in the ``READY`` state is always queryable. + * - ``STALE`` - The index is queryable but has stopped replicating data from the indexed collection. Searches on the index may return out-of-date data. Indexes can enter the ``STALE`` state due to replication errors. + + An index in the ``STALE`` state is always queryable. diff --git a/source/includes/atlas-search-commands/field-definitions/type.rst b/source/includes/atlas-search-commands/field-definitions/type.rst new file mode 100644 index 00000000000..980cd2d84bf --- /dev/null +++ b/source/includes/atlas-search-commands/field-definitions/type.rst @@ -0,0 +1,6 @@ +Type of search index to create. You can specify either: + +- ``search`` +- ``vectorSearch`` + +If you omit the ``type`` field, the index type is ``search``. diff --git a/source/includes/atlas-search-commands/vector-search-index-definition-fields.rst b/source/includes/atlas-search-commands/vector-search-index-definition-fields.rst new file mode 100644 index 00000000000..82d564ebe46 --- /dev/null +++ b/source/includes/atlas-search-commands/vector-search-index-definition-fields.rst @@ -0,0 +1,17 @@ +The vector search index definition takes the following fields: + +.. code-block:: javascript + + { + "fields": [ + { + "type": "vector" | "filter", + "path": "", + "numDimensions": , + "similarity": "euclidean" | "cosine" | "dotProduct" + } + ] + } + +For explanations of vector search index definition fields, see +:ref:`avs-types-vector-search`. 
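+For illustration only, a filled-in definition might look like the
+following (the ``plot_embedding`` field name and the ``1536`` dimension
+count are hypothetical values, not part of this reference):
+
+.. code-block:: javascript
+
+   {
+     "fields": [
+       {
+         "type": "vector",
+         "path": "plot_embedding",   // hypothetical embedding field
+         "numDimensions": 1536,      // hypothetical dimension count
+         "similarity": "cosine"
+       }
+     ]
+   }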
diff --git a/source/includes/autosplit-no-operation.rst b/source/includes/autosplit-no-operation.rst index e0e72b53092..e69682aeeb7 100644 --- a/source/includes/autosplit-no-operation.rst +++ b/source/includes/autosplit-no-operation.rst @@ -1,4 +1,4 @@ -Starting in MongoDB 6.1, automatic chunk splitting is not performed. +Starting in MongoDB 6.0.3, automatic chunk splitting is not performed. This is because of balancing policy improvements. Auto-splitting commands still exist, but do not perform an operation. For details, see :ref:`release-notes-6.1-balancing-policy-changes`. diff --git a/source/includes/capped-collections/concurrent-writes.rst b/source/includes/capped-collections/concurrent-writes.rst new file mode 100644 index 00000000000..a0e31ef4fcf --- /dev/null +++ b/source/includes/capped-collections/concurrent-writes.rst @@ -0,0 +1,2 @@ +If there are concurrent writers to a capped collection, MongoDB does not +guarantee that documents are returned in insertion order. diff --git a/source/includes/capped-collections/query-natural-order.rst b/source/includes/capped-collections/query-natural-order.rst new file mode 100644 index 00000000000..89222e537ff --- /dev/null +++ b/source/includes/capped-collections/query-natural-order.rst @@ -0,0 +1,3 @@ +Use :term:`natural ordering ` to retrieve the most +recently inserted elements from the collection efficiently. This is +similar to using the ``tail`` command on a log file. diff --git a/source/includes/capped-collections/use-ttl-index.rst b/source/includes/capped-collections/use-ttl-index.rst new file mode 100644 index 00000000000..decd1360bdb --- /dev/null +++ b/source/includes/capped-collections/use-ttl-index.rst @@ -0,0 +1,8 @@ +Generally, :ref:`TTL (Time To Live) indexes ` offer +better performance and more flexibility than capped collections. TTL +indexes expire and remove data from normal collections based on the +value of a date-typed field and a TTL value for the index. + +Capped collections serialize inserts and therefore have worse concurrent +insert performance than non-capped collections. Before you create a +capped collection, consider if you can use a TTL index instead. diff --git a/source/includes/change-stream-pre-and-post-images-additional-information.rst b/source/includes/change-stream-pre-and-post-images-additional-information.rst index b62f892882a..0e70ae35fd7 100644 --- a/source/includes/change-stream-pre-and-post-images-additional-information.rst +++ b/source/includes/change-stream-pre-and-post-images-additional-information.rst @@ -8,7 +8,7 @@ Pre- and post-images are not available for a :ref:`change stream event ``expireAfterSeconds``. - The following example sets ``expireAfterSeconds`` to ``100`` - seconds: + seconds on an entire cluster: .. code-block:: javascript @@ -18,6 +18,16 @@ Pre- and post-images are not available for a :ref:`change stream event { changeStreamOptions: { preAndPostImages: { expireAfterSeconds: 100 } } } } ) + - The following example sets ``expireAfterSeconds`` to ``100`` + seconds on a specific collection: + + .. 
code-block:: javascript + + use admin + db.getSiblingDB("my_collection") + .sensors.watch({ changeStreamOptions: + { preAndPostImages: { expireAfterSeconds: 100 } } }) + - The following example returns the current ``changeStreamOptions`` settings, including ``expireAfterSeconds``: diff --git a/source/includes/changelogs/releases/4.4.25.rst b/source/includes/changelogs/releases/4.4.25.rst index 19b14e969f3..e780672f456 100644 --- a/source/includes/changelogs/releases/4.4.25.rst +++ b/source/includes/changelogs/releases/4.4.25.rst @@ -6,7 +6,7 @@ Operations ~~~~~~~~~~ -- :issue:`SERVER-58534` Collect FCV in FTDC +- :issue:`SERVER-58534` Collect fCV in FTDC - :issue:`SERVER-77610` Log session id associated with the backup cursor Internals @@ -53,7 +53,7 @@ Internals - :issue:`SERVER-80544` Fix incorrect wait in runSearchCommandWithRetries - :issue:`SERVER-80678` Remove an outdated test case -- :issue:`SERVER-80694` [v4.4] FCV gate null lastKnownCommittedOpTime +- :issue:`SERVER-80694` [v4.4] fCV gate null lastKnownCommittedOpTime behavior in oplog getMore - :issue:`SERVER-80703` Avoid traversing routing table in MigrationDestinationManager diff --git a/source/includes/changelogs/releases/4.4.28.rst b/source/includes/changelogs/releases/4.4.28.rst new file mode 100644 index 00000000000..bd502363d5b --- /dev/null +++ b/source/includes/changelogs/releases/4.4.28.rst @@ -0,0 +1,37 @@ +.. _4.4.28-changelog: + +4.4.28 Changelog +---------------- + +Sharding +~~~~~~~~ + +- :issue:`SERVER-82883` Recovering TransactionCoordinator on stepup may + block acquiring read/write tickets while participants are in the + prepared state +- :issue:`SERVER-84459` [test-only bug] JumboChunksNotMovedRandom must + keep chunk manager in scope in v4.4 + +Internals +~~~~~~~~~ + +- :issue:`SERVER-77506` Sharded multi-document transactions can mismatch + data and ShardVersion +- :issue:`SERVER-80886` $out may fail with a StaleDbVersion after a + movePrimary +- :issue:`SERVER-82111` In sharded_agg_helpers.cpp move invariant below + response status check +- :issue:`SERVER-82365` Optimize the construction of the balancer's + collection distribution status histogram (2nd attempt) +- :issue:`SERVER-83485` Fix multikey-path serialization code used during + validation +- :issue:`SERVER-83494` [7.0] Fix range deleter unit test case +- :issue:`SERVER-83830` On Enterprise build creating a collection in a + replica set with the storageEngine.inMemory option breaks secondaries +- :issue:`SERVER-84337` Backport new variants added to perf.yml over to + sys-perf-7.0 and sys-perf-4.4 +- :issue:`SERVER-84353` The test for stepDown deadlock with read ticket + exhaustion is flaky +- :issue:`WT-7929` Investigate a solution to avoid FTDC stalls during + checkpoint + diff --git a/source/includes/changelogs/releases/4.4.29.rst b/source/includes/changelogs/releases/4.4.29.rst new file mode 100644 index 00000000000..ced7ecc4d98 --- /dev/null +++ b/source/includes/changelogs/releases/4.4.29.rst @@ -0,0 +1,79 @@ +.. 
_4.4.29-changelog: + +4.4.29 Changelog +---------------- + +Replication +~~~~~~~~~~~ + +:issue:`SERVER-70155` Add duration of how long an oplog slot is kept +open to mongod "Slow query" log lines + +Query +~~~~~ + +:issue:`WT-11064` Skip globally visible tombstones as part of update +obsolete check + +Storage +~~~~~~~ + + +WiredTiger +`````````` + +- :issue:`WT-12036` Workaround for lock contention on Windows + +Build and Packaging +~~~~~~~~~~~~~~~~~~~ + +:issue:`SERVER-85156` dbCheck throws unexpected "invalidate" change +stream event [5.0] + +Internals +~~~~~~~~~ + +- :issue:`SERVER-72839` Server skips peer certificate validation if + neither CAFile nor clusterCAFile is provided +- :issue:`SERVER-74344` Ban use of sparse indexes on internal comparison + expression unless explicitly hinted +- :issue:`SERVER-80279` Commit on non-existing transaction then proceed + to continue can trigger an invariant +- :issue:`SERVER-80310` Update sysperf to allow running individual genny + tasks on waterfall +- :issue:`SERVER-82353` Multi-document transactions can miss documents + when movePrimary runs concurrently +- :issue:`SERVER-82815` Expose server’s index key creation via + aggregation +- :issue:`SERVER-83564` Make sure the process field is indexed in + config.locks +- :issue:`SERVER-84722` Create undocumented server parameter to skip + document validation on insert code path for internal usage +- :issue:`SERVER-84732` Fix typo in mongo-perf standalone inMemory ARM + AWS test +- :issue:`SERVER-85305` Fix sys-perf-4.4 clone issue +- :issue:`SERVER-85306` Update sys-perf config to use HTTPs github links + rather than SSH +- :issue:`SERVER-85419` Balancer pollutes logs in case no suitable + recipient is found during draining +- :issue:`SERVER-85530` Refresh Test Certificates +- :issue:`SERVER-85536` [4.4] removing unindexed unique partial index + entries generates write conflicts +- :issue:`SERVER-85652` Update DSI atlas azure tasks to use an AL2 + compile artifact. +- :issue:`SERVER-85771` Make $bucketAuto more robust in the case of an + empty string for the groupBy field +- :issue:`SERVER-85984` The test for inserting docs larger than the user + max is flaky +- :issue:`SERVER-86027` Tag + insert_docs_larger_than_max_user_size_standalone.js with + requires_persistence and requires_replication +- :issue:`SERVER-86081` Sys-perf missing required parameters due to + Evergreen Redaction +- :issue:`SERVER-86322` [v4.4] Add high value workloads to the 4.4 + branch +- :issue:`SERVER-86351` Investigate failed copybara sync operation +- :issue:`WT-11280` Generation tracking might not be properly + synchronized +- :issue:`WT-12272` Remove unnecessary module in evergreen.yml + diff --git a/source/includes/changelogs/releases/5.0.22.rst b/source/includes/changelogs/releases/5.0.22.rst index 07d226c65ba..c26eb43d503 100644 --- a/source/includes/changelogs/releases/5.0.22.rst +++ b/source/includes/changelogs/releases/5.0.22.rst @@ -22,7 +22,7 @@ Sharding Operations ~~~~~~~~~~ -- :issue:`SERVER-58534` Collect FCV in FTDC +- :issue:`SERVER-58534` Collect fCV in FTDC - :issue:`SERVER-68548` mongo shell version 4.4.15 logging asio message despite --quiet flag - :issue:`SERVER-77610` Log session id associated with the backup cursor diff --git a/source/includes/changelogs/releases/5.0.24.rst b/source/includes/changelogs/releases/5.0.24.rst new file mode 100644 index 00000000000..da667d2ce36 --- /dev/null +++ b/source/includes/changelogs/releases/5.0.24.rst @@ -0,0 +1,153 @@ +.. 
_5.0.24-changelog: + +5.0.24 Changelog +---------------- + +Sharding +~~~~~~~~ + +- :issue:`SERVER-50792` Return more useful errors when a shard key index + can't be found for shardCollection/refineCollectionShardKey +- :issue:`SERVER-73763` Resharding does not extend zone ranges for + config.tag docs, leading to config server primary fassert loop from + duplicate key error +- :issue:`SERVER-82838` ReshardingOplogApplier uses {w: "majority", + wtimeout: 60000} write concern when persisting resharding oplog + application progress +- :issue:`SERVER-82883` Recovering TransactionCoordinator on stepup may + block acquiring read/write tickets while participants are in the + prepared state +- :issue:`SERVER-82953` + CreateCollectionCoordinator::checkIfOptionsConflict should be more + verbose +- :issue:`SERVER-83146` Bulk write operation might fail with + NamespaceNotFound + +Replication +~~~~~~~~~~~ + +- :issue:`SERVER-55465` Fix Invariant upon failed request for a vote + from the current primary in the election dry-run of catchup takeover +- :issue:`SERVER-70155` Add duration of how long an oplog slot is kept + open to mongod "Slow query" log lines + +Catalog +~~~~~~~ + +:issue:`SERVER-82129` fCV 5.0 Upgrade fails due to +config.cache.collections missing UUIDs for most collections + +Storage +~~~~~~~ + +:issue:`SERVER-33494` WT SizeStorer never deletes old entries + +Internals +~~~~~~~~~ + +- :issue:`SERVER-65666` Do not create chunks on draining shards when + sharding a new collection +- :issue:`SERVER-67766` Log index and collection successful drop +- :issue:`SERVER-69063` Fix TCP keepalive option setting +- :issue:`SERVER-69615` Rollback fuzzing in WiredTiger leads to size + storer marked dirty at shutdown +- :issue:`SERVER-74074` Exclude auth consistency workloads from + concurrency simultaneous suites +- :issue:`SERVER-77311` Add a new log message when a secondary node is + skipping a two-phase index build with a subset of indexes built +- :issue:`SERVER-77506` Sharded multi-document transactions can mismatch + data and ShardVersion +- :issue:`SERVER-77926` Add LSAN suppressions for executor worker + threads +- :issue:`SERVER-78009` shardSvrCommitReshardCollection command should + fail recoverably if the node is shutting down +- :issue:`SERVER-79864` TTL deleter does not correctly handle time + series collections with extended range dates +- :issue:`SERVER-79982` Batched catalog writers can run concurrently + with HistoricalCatalogIdTracker::cleanup() and lead to incorrect PIT + find results. 
+- :issue:`SERVER-80789` Make AutoGetOplog behave consistently in replica + set node started as standalone +- :issue:`SERVER-80886` $out may fail with a StaleDbVersion after a + movePrimary +- :issue:`SERVER-80974` Unclean shutdown while dropping local.* + collection and indexes can make the catalog inconsistent +- :issue:`SERVER-81143` export_import_concurrency.js should check for + code 2 when killing child resmoke client +- :issue:`SERVER-81442` Poke WT oplog reclamation thread periodically +- :issue:`SERVER-81573` ExpressionNary::optimize crashes on initialized + children in v4.4 and 5.0 +- :issue:`SERVER-81878` startupRecoveryForRestore may not play nicely + with collection drop applied during startup recovery +- :issue:`SERVER-81949` Sync from 10gen/mongo to mongodb/mongo on v4.4 + with copybara +- :issue:`SERVER-82043` Enhancement of Commit Message Validation for + 10gen/mongo Commits +- :issue:`SERVER-82111` In sharded_agg_helpers.cpp move invariant below + response status check +- :issue:`SERVER-82223` Commit handler in fCV op observer is susceptible + to interruption +- :issue:`SERVER-82391` [v4.4] Only allow github Apps Copybara Syncer: + 10gen-to-Mongodb to syncs new commits to mongodb/mongo +- :issue:`SERVER-82447` $project incorrectly pushed down on timeseries + when $project uses $getField on a measurement field +- :issue:`SERVER-82449` [v4.4] Optimize copybara sync behavior for + specific no-change scenarios +- :issue:`SERVER-82555` Use shallow clone to speed up performance tests +- :issue:`SERVER-82640` Upload mongod --version output to S3 during + server compilation in Evergreen +- :issue:`SERVER-82708` Update variants used to performance test stable + branches +- :issue:`SERVER-82730` The validate cmd can invariant on corrupted + keystrings +- :issue:`SERVER-83091` $or query can trigger an infinite loop during + plan enumeration +- :issue:`SERVER-83161` Fix concurrent read to _errMsg from + MigrationDestinationManager without acquiring mutex +- :issue:`SERVER-83283` Modify copybara script to send slack message on + failure +- :issue:`SERVER-83336` Temporarily disable + wt_size_storer_cleanup_replica_set.js on macOS +- :issue:`SERVER-83354` Schedule copybara instance after each commit + made +- :issue:`SERVER-83485` Fix multikey-path serialization code used during + validation +- :issue:`SERVER-83494` [7.0] Fix range deleter unit test case +- :issue:`SERVER-83592` Add resmoke flag --enable_enterprise_tests + enable enterprise js tests +- :issue:`SERVER-83655` Restore legal client ns exception for + admin.system.new_users +- :issue:`SERVER-83830` On Enterprise build creating a collection in a + replica set with the storageEngine.inMemory option breaks secondaries +- :issue:`SERVER-83874` Move primary operation doesn't drop + db.system.views on the donor +- :issue:`SERVER-83916` Add LSAN Suppression for threads leaked by + unjoined thread pools +- :issue:`SERVER-84013` Incorrect results for index scan plan on query + with duplicate predicates in nested $or +- :issue:`SERVER-84353` The test for stepDown deadlock with read ticket + exhaustion is flaky +- :issue:`SERVER-84435` Deploy enterprise module consolidation to branch + v5.0 +- :issue:`SERVER-84457` [v5.0] Explicitly declare type of term field in + sync source resolver query +- :issue:`SERVER-84479` Amend burn_in test to tolerate absence of + manifest +- :issue:`SERVER-84576` [v5.0] Update 5.0 Readme +- :issue:`WT-7929` Investigate a solution to avoid FTDC stalls during + checkpoint +- :issue:`WT-9257` test_checkpoint 
WT_NOTFOUND failure on CS +- :issue:`WT-9821` Add option to verify to report all data corruption in + a file +- :issue:`WT-10601` Fix wt verify -c failure when first block on page is + corrupt +- :issue:`WT-10961` Fix OOO keys caused by racing deletion and insertion + on left subtrees +- :issue:`WT-10972` Eliminate long periods of silence when recovering + with recovery_progress verbose messages enabled +- :issue:`WT-11280` Generation tracking might not be properly + synchronized +- :issue:`WT-11774` Add diagnostic stat to investigate eviction server's + inability to queue pages +- :issue:`WT-12036` Workaround for lock contention on Windows + diff --git a/source/includes/changelogs/releases/5.0.25.rst b/source/includes/changelogs/releases/5.0.25.rst new file mode 100644 index 00000000000..9a1822d777a --- /dev/null +++ b/source/includes/changelogs/releases/5.0.25.rst @@ -0,0 +1,145 @@ +.. _5.0.25-changelog: + +5.0.25 Changelog +---------------- + +Sharding +~~~~~~~~ + +- :issue:`SERVER-76536` Increase + receiveChunkWaitForRangeDeleterTimeoutMS in concurrency suites +- :issue:`SERVER-81508` Potential double-execution of write statements + when ShardCannotRefreshDueToLocksHeld is thrown + +Replication +~~~~~~~~~~~ + +:issue:`SERVER-56756` Primary cannot stepDown when experiencing disk +failures + +Storage +~~~~~~~ + + +WiredTiger +`````````` + +- :issue:`WT-10017` Remove the unstable historical versions at the end + of rollback to stable +- :issue:`WT-12316` Fix timing stress options in test/format for 6.0 and + older branches + +Build and Packaging +~~~~~~~~~~~~~~~~~~~ + +:issue:`SERVER-85156` dbCheck throws unexpected "invalidate" change +stream event [5.0] + +Internals +~~~~~~~~~ + +- :issue:`SERVER-62763` Fix data-type used for passing options to + setsockopt +- :issue:`SERVER-64444` listIndexes fails on invalid pre-5.0 index spec + after upgrade +- :issue:`SERVER-66036` Improve future validity semantics +- :issue:`SERVER-68674` Vendor an immutable/persistent data structure + library +- :issue:`SERVER-69413` Documentation Updates +- :issue:`SERVER-71520` Dump all thread stacks on RSTL acquisition + timeout +- :issue:`SERVER-72839` Server skips peer certificate validation if + neither CAFile nor clusterCAFile is provided +- :issue:`SERVER-74874` Add typedef for immutable unordered map and set +- :issue:`SERVER-74875` Implement immutable ordered map and set +- :issue:`SERVER-74876` Evaluate which immer memory policy to use +- :issue:`SERVER-74946` Convert containers in CollectionCatalog for + collection lookup to immutable +- :issue:`SERVER-74947` Convert containers in CollectionCatalog for view + lookup to immutable +- :issue:`SERVER-74951` Convert containers in CollectionCatalog for + profile settings to immutable +- :issue:`SERVER-75263` Add immer benchmarks +- :issue:`SERVER-75497` Convert ordered containers in CollectionCatalog + to immutable +- :issue:`SERVER-75613` Add GDB pretty printers for immutable data + structures +- :issue:`SERVER-75851` Add typedef for immutable vector +- :issue:`SERVER-76789` Add immer to README.third_party.md +- :issue:`SERVER-76932` Add a way for a thread to know when the + SignalHandler thread is done with printAllThreadStacks +- :issue:`SERVER-77694` cannot compile immer header with --opt=off +- :issue:`SERVER-78911` Always suppress "Different user name was + supplied to saslSupportedMechs" log during X.509 intracluster auth +- :issue:`SERVER-80150` Log negotiated network compressor with client + metadata +- :issue:`SERVER-80279` Commit on non-existing 
transaction then proceed + to continue can trigger an invariant +- :issue:`SERVER-80978` Fix potential deadlock between + TTLMonitor::onStepUp and prepared transaction +- :issue:`SERVER-81133` Speedup logic to persist routing table cache +- :issue:`SERVER-82093` Release mongo v5 on amazon 2023 +- :issue:`SERVER-82353` Multi-document transactions can miss documents + when movePrimary runs concurrently +- :issue:`SERVER-82627` ReshardingDataReplication does not join the + ReshardingOplogFetcher thread pool causing invariant failure. +- :issue:`SERVER-82815` Expose server’s index key creation via + aggregation +- :issue:`SERVER-83050` Create a deployment of mongodb on + AL2-openssl-1.1.1 +- :issue:`SERVER-83337` Re-enable wt_size_storer_cleanup_replica_set.js + on macOS +- :issue:`SERVER-83369` Index creation does not enforce type of + bucketSize field +- :issue:`SERVER-83564` Make sure the process field is indexed in + config.locks +- :issue:`SERVER-84063` Remove BlackDuck from Security Daily Cron +- :issue:`SERVER-84722` Create undocumented server parameter to skip + document validation on insert code path for internal usage +- :issue:`SERVER-84749` Remove + sharding_update_v1_oplog_jscore_passthrough from macOS variants +- :issue:`SERVER-84772` Delete stitch-related tasks in enterprise + variant +- :issue:`SERVER-85167` Size storer can be flushed concurrently with + being destructed for rollback +- :issue:`SERVER-85263` Report escaped client application name +- :issue:`SERVER-85306` Update sys-perf config to use HTTPs github links + rather than SSH +- :issue:`SERVER-85364` [6.0] Convert resource map in CollectionCatalog + to immutable +- :issue:`SERVER-85365` [6.0] Convert shadow catalog in + CollectionCatalog to immutable +- :issue:`SERVER-85419` Balancer pollutes logs in case no suitable + recipient is found during draining +- :issue:`SERVER-85498` [5.0] Fix immutable_ordered_test on MacOS +- :issue:`SERVER-85530` Refresh Test Certificates +- :issue:`SERVER-85652` Update DSI atlas azure tasks to use an AL2 + compile artifact. 
+- :issue:`SERVER-85693` Fix potential access violation in + User::validateRestrictions +- :issue:`SERVER-85771` Make $bucketAuto more robust in the case of an + empty string for the groupBy field +- :issue:`SERVER-85984` The test for inserting docs larger than the user + max relies on a specific order of documents in the oplog, but that + order is not guaranteed +- :issue:`SERVER-86027` Tag + insert_docs_larger_than_max_user_size_standalone.js with + requires_persistence and requires_replication +- :issue:`SERVER-86062` [v5.0] directoryperdb.js relies on + fsync/checkpointing behavior which does not hold when running with + --nojournal +- :issue:`SERVER-86081` Sys-perf missing required parameters due to + Evergreen Redaction +- :issue:`SERVER-86561` Increase benchmarks_orphaned from 3h to 4h +- :issue:`WT-7712` commit and durable timestamps should be disallowed at + stable timestamp +- :issue:`WT-9824` Add testing to file manager WT connection + configurations in test/format +- :issue:`WT-11491` Log the WiredTiger time spent during startup and + shutdown +- :issue:`WT-11777` Fix units of __wt_timer_evaluate() calls: logging + and progress period +- :issue:`WT-12211` Fix PATH env variable in hang analyzer to generate + python core dump (7.0) +- :issue:`WT-12272` Remove unnecessary module in evergreen.yml + diff --git a/source/includes/changelogs/releases/5.0.26.rst b/source/includes/changelogs/releases/5.0.26.rst new file mode 100644 index 00000000000..e4e5dc6b49d --- /dev/null +++ b/source/includes/changelogs/releases/5.0.26.rst @@ -0,0 +1,108 @@ +.. _5.0.26-changelog: + +5.0.26 Changelog +---------------- + +Sharding +~~~~~~~~ + +- :issue:`SERVER-65802` mongos returns inconsistent error code when + renameCollection target already exists +- :issue:`SERVER-84368` CreateIndex fails with StaleConfig if run from a + stale mongos against a sharded non-empty collection + +Query +~~~~~ + +:issue:`SERVER-83602` $or -> $in MatchExpression rewrite should not +generate $or directly nested in another $or + +Storage +~~~~~~~ + +:issue:`WT-11062` Safe free the ref addr to allow concurrent access + +Internals +~~~~~~~~~ + +- :issue:`SERVER-56661` Increase default close_handle_minimum in + WiredTiger from 250 to 2000 +- :issue:`SERVER-60603` Allow connection reset errors without assertion + in ASIOSession::ensureSync() +- :issue:`SERVER-68128` Exceptions thrown while generating command + response lead to network error +- :issue:`SERVER-69005` $internalBoundedSort should not accept empty + sort pattern +- :issue:`SERVER-72703` Downgrade $out's db lock to MODE_IX +- :issue:`SERVER-75355` Improve explain with Queryable Encryption +- :issue:`SERVER-79235` rolling_index_builds_interrupted.js checkLog + relies on clearRawMongoProgramOutput +- :issue:`SERVER-79286` Create a query knob +- :issue:`SERVER-79400` Implement number of documents tie breaking + heuristics +- :issue:`SERVER-79575` Fix numa node counting +- :issue:`SERVER-80233` Implement index prefix heuristic +- :issue:`SERVER-80275` Add log line for detailed plan scoring +- :issue:`SERVER-81021` Improve index prefix heuristic by taking into + account closed intervals +- :issue:`SERVER-82476` Disable diagnostics latches by default +- :issue:`SERVER-84336` Timeseries inserts can leave dangling BSONObj in + WriteBatches in certain cases +- :issue:`SERVER-84612` Define a version for immer +- :issue:`SERVER-84615` Define a version for linenoise +- :issue:`SERVER-85534` Checkpoint the vector clock after committing + shard collection +- :issue:`SERVER-85633` 
Add lock around res_ninit call +- :issue:`SERVER-85843` A write operation may fail with + NamespaceNotFound if the database has been concurrently dropped + (sharding-only) +- :issue:`SERVER-85869` Exhaustive find on config shard can return stale + data +- :issue:`SERVER-85973` Update README.third_party.md to indicate that + Valgrind is licensed under BSD-4-Clause +- :issue:`SERVER-86017` Backport multi-planner tie breaking improvements + to v6.0 +- :issue:`SERVER-86214` add all bazel-* output dirs to git ignore +- :issue:`SERVER-86388` Remove fle_drivers_integration.js test from 6.0 +- :issue:`SERVER-86395` Investigate DuplicateKey error while recovering + convertToCapped from stable checkpoint +- :issue:`SERVER-86403` Fix THP startup warnings +- :issue:`SERVER-86433` Clear memory in the data_union stored on the + endpoint before use +- :issue:`SERVER-86562` Backport multi-planner tie breaking improvements + to v5.0 +- :issue:`SERVER-86619` Document::shouldSkipDeleted() accesses string + without checking for missing string +- :issue:`SERVER-86622` Resharding coordinator use possibly stale + database info +- :issue:`SERVER-86632` plan_cache_drop_database.js should catch + DatabaseDropPending errors +- :issue:`SERVER-86717` Resharding should validate user provided zone + range doesn't include $-prefixed fields. +- :issue:`SERVER-87198` [5.0] Make shard registry reads fallback to + majority readConcern if snapshot reads fail +- :issue:`SERVER-87224` Enable diagnostic latching in test variants on + old branches +- :issue:`SERVER-87259` [v5.0] Fix for atlas azure intel variant +- :issue:`SERVER-87415` Remove run_command__simple workload from + sys-perf +- :issue:`SERVER-87544` Fix up gitignore to permit git awareness of + enterprise module +- :issue:`SERVER-87567` The SessionWorkflow should correctly return a + response error on malformed requests +- :issue:`SERVER-87610` Relax shardVersionRetry tripwires on the + namespace of received stale exceptions +- :issue:`SERVER-87626` [v5.0] Add san_options to buildvariant config +- :issue:`SERVER-87653` Prevent latch_analyzer.js from being run as part + of the parallelTester +- :issue:`WT-9057` Null address read in compact walk +- :issue:`WT-12077` Incorrect hardware checksum calculation on zSeries + for buffers on stack +- :issue:`WT-12379` Incorrect python version on Windows on 6.0 +- :issue:`WT-12402` Add stats to track when eviction server skips + walking a tree +- :issue:`WT-12438` Stop using Ubuntu 18.04 Power Evergreen distro on + 5.0 +- :issue:`WT-12447` Fix incorrect version of Python in the CMake Windows + build on 5.0 + diff --git a/source/includes/changelogs/releases/6.0.10.rst b/source/includes/changelogs/releases/6.0.10.rst index 63f9310fd5f..f7eabb5e190 100644 --- a/source/includes/changelogs/releases/6.0.10.rst +++ b/source/includes/changelogs/releases/6.0.10.rst @@ -54,7 +54,7 @@ Internals option correctly - :issue:`SERVER-78650` Change stream oplog rewrite of $nor hits empty-array validation if no children are eligible for rewrite -- :issue:`SERVER-78674` Remove FCV check from feature flag check for +- :issue:`SERVER-78674` Remove fCV check from feature flag check for search batchsize project - :issue:`SERVER-78831` Make $listSearchIndexes throw an Exception when used outside of Atlas diff --git a/source/includes/changelogs/releases/6.0.13.rst b/source/includes/changelogs/releases/6.0.13.rst new file mode 100644 index 00000000000..c0ae72417e0 --- /dev/null +++ b/source/includes/changelogs/releases/6.0.13.rst @@ -0,0 +1,181 @@ +.. 
_6.0.13-changelog: + +6.0.13 Changelog +---------------- + +Sharding +~~~~~~~~ + +- :issue:`SERVER-50792` Return more useful errors when a shard key index + can't be found for shardCollection/refineCollectionShardKey +- :issue:`SERVER-73763` Resharding does not extend zone ranges for + config.tag docs, leading to config server primary fassert loop from + duplicate key error +- :issue:`SERVER-82838` ReshardingOplogApplier uses {w: "majority", + wtimeout: 60000} write concern when persisting resharding oplog + application progress +- :issue:`SERVER-82883` Recovering TransactionCoordinator on stepup may + block acquiring read/write tickets while participants are in the + prepared state +- :issue:`SERVER-82953` + CreateCollectionCoordinator::checkIfOptionsConflict should be more + verbose +- :issue:`SERVER-83146` Bulk write operation might fail with + NamespaceNotFound +- :issue:`SERVER-83775` Do not balance data between shards owning more + than the ideal data size + +Replication +~~~~~~~~~~~ + +:issue:`SERVER-70155` Add duration of how long an oplog slot is kept +open to mongod "Slow query" log lines + +Storage +~~~~~~~ + +:issue:`SERVER-33494` WT SizeStorer never deletes old entries + +Internals +~~~~~~~~~ + +- :issue:`SERVER-62955` Add a no-op oplog entry for reshardCollection + command +- :issue:`SERVER-65666` Do not create chunks on draining shards when + sharding a new collection +- :issue:`SERVER-67766` Log index and collection successful drop +- :issue:`SERVER-69615` Rollback fuzzing in WiredTiger leads to size + storer marked dirty at shutdown +- :issue:`SERVER-70338` Query yield accesses the storage engine without + locks during shutdown and rollback +- :issue:`SERVER-70974` Fix early-exits triggered when user specifies + TCP Fast Open server parameters +- :issue:`SERVER-71923` Emit change log event for + ConfigureCollectionBalancing invocations +- :issue:`SERVER-72683` increase timeout in disk/directoryperdb.js +- :issue:`SERVER-73439` Make the $inProg filter in the setup for the + killop test more specific +- :issue:`SERVER-74074` Exclude auth consistency workloads from + concurrency simultaneous suites +- :issue:`SERVER-75033` Capture core dumps from test failures on macOS +- :issue:`SERVER-76560` Time series collections not always honoring + expireAfterSeconds correctly +- :issue:`SERVER-77311` Add a new log message when a secondary node is + skipping a two-phase index build with a subset of indexes built +- :issue:`SERVER-77506` Sharded multi-document transactions can mismatch + data and ShardVersion +- :issue:`SERVER-77827` Allow restore role to drop system.views +- :issue:`SERVER-77926` Add LSAN suppressions for executor worker + threads +- :issue:`SERVER-78009` shardSvrCommitReshardCollection command should + fail recoverably if the node is shutting down +- :issue:`SERVER-79235` rolling_index_builds_interrupted.js checkLog + relies on clearRawMongoProgramOutput +- :issue:`SERVER-79864` TTL deleter does not correctly handle time + series collections with extended range dates +- :issue:`SERVER-79982` Batched catalog writers can run concurrently + with HistoricalCatalogIdTracker::cleanup() and lead to incorrect PIT + find results. 
+- :issue:`SERVER-80660` Log a summary of where mongodb spent time during + startup and shutdown +- :issue:`SERVER-80789` Make AutoGetOplog behave consistently in replica + set node started as standalone +- :issue:`SERVER-80974` Unclean shutdown while dropping local.* + collection and indexes can make the catalog inconsistent +- :issue:`SERVER-81028` Incorrect $listCatalog behavior in presence of a + concurrent collection rename in v7.0 +- :issue:`SERVER-81046` add requireSequenceTokens to + SearchCommand.CursorOptions +- :issue:`SERVER-81133` Speedup logic to persist routing table cache +- :issue:`SERVER-81143` export_import_concurrency.js should check for + code 2 when killing child resmoke client +- :issue:`SERVER-81375` Disable internal transactions resharding tests + in CSRS stepdown suite +- :issue:`SERVER-81442` Poke WT oplog reclamation thread periodically +- :issue:`SERVER-81606` Exclude untimestamped catalog durability test + from in-memory variants +- :issue:`SERVER-81949` Sync from 10gen/mongo to mongodb/mongo on v4.4 + with copybara +- :issue:`SERVER-82043` Enhancement of Commit Message Validation for + 10gen/mongo Commits +- :issue:`SERVER-82073` Fix merge chunk command generation in + collection_defragmentation.js +- :issue:`SERVER-82111` In sharded_agg_helpers.cpp move invariant below + response status check +- :issue:`SERVER-82197` Incorrect query results in SBE if $group spills + in presence of collation +- :issue:`SERVER-82223` Commit handler in fCV op observer is susceptible + to interruption +- :issue:`SERVER-82365` Optimize the construction of the balancer's + collection distribution status histogram (2nd attempt) +- :issue:`SERVER-82368` Match top/bottom N accumulators in SBE and + Classic +- :issue:`SERVER-82391` [v4.4] Only allow github Apps Copybara Syncer: + 10gen-to-Mongodb to syncs new commits to mongodb/mongo +- :issue:`SERVER-82437` db.collection.getSearchIndexes() + returns duplicate index +- :issue:`SERVER-82447` $project incorrectly pushed down on timeseries + when $project uses $getField on a measurement field +- :issue:`SERVER-82449` [v4.4] Optimize copybara sync behavior for + specific no-change scenarios +- :issue:`SERVER-82555` Use shallow clone to speed up performance tests +- :issue:`SERVER-82640` Upload mongod --version output to S3 during + server compilation in Evergreen +- :issue:`SERVER-82708` Update variants used to performance test stable + branches +- :issue:`SERVER-82730` The validate cmd can invariant on corrupted + keystrings +- :issue:`SERVER-82781` Simulate crash test hook may leave behind part + of file when copying data +- :issue:`SERVER-82967` Stepdown after calling + ActiveIndexBuilds::registerIndexBuild() during index build setup + doesn't unregister itself +- :issue:`SERVER-83091` $or query can trigger an infinite loop during + plan enumeration +- :issue:`SERVER-83099` LDAPTimer::setTimeout may run callback inline +- :issue:`SERVER-83107` Add 'type' field to search IndexDefinition + struct +- :issue:`SERVER-83161` Fix concurrent read to _errMsg from + MigrationDestinationManager without acquiring mutex +- :issue:`SERVER-83283` Modify copybara script to send slack message on + failure +- :issue:`SERVER-83336` Temporarily disable + wt_size_storer_cleanup_replica_set.js on macOS +- :issue:`SERVER-83354` Schedule copybara instance after each commit + made +- :issue:`SERVER-83389` aggregation_optimization_fuzzer fails on 6.0 and + 7.0 with a disabled disablePipelineOptimization failpoint +- :issue:`SERVER-83485` Fix multikey-path 
serialization code used during + validation +- :issue:`SERVER-83494` [7.0] Fix range deleter unit test case +- :issue:`SERVER-83567` Push in classic stores missing values. +- :issue:`SERVER-83592` Add resmoke flag --enable_enterprise_tests + enable enterprise js tests +- :issue:`SERVER-83655` Restore legal client ns exception for + admin.system.new_users +- :issue:`SERVER-83830` On Enterprise build creating a collection in a + replica set with the storageEngine.inMemory option breaks secondaries +- :issue:`SERVER-83866` Update BACKPORTS_REQUIRED_BASE_URL from + mongodb/mongo to 10gen/mongo +- :issue:`SERVER-83874` Move primary operation doesn't drop + db.system.views on the donor +- :issue:`SERVER-83916` Add LSAN Suppression for threads leaked by + unjoined thread pools +- :issue:`SERVER-83993` timeseries_union_with.js fails intermittently in + retryable_writes_downgrade suites on v6.0 +- :issue:`SERVER-84013` Incorrect results for index scan plan on query + with duplicate predicates in nested $or +- :issue:`SERVER-84130` Incorrect bucket-level filter optimization when + some events in the bucket are missing the field +- :issue:`SERVER-84353` The test for stepDown deadlock with read ticket + exhaustion is flaky +- :issue:`WT-11121` failed: format next returned OOO key +- :issue:`WT-11186` Restore ignore_prepare semantics to read with + read_committed isolation instead of read_uncommitted +- :issue:`WT-11491` Log the WiredTiger time spent during startup and + shutdown +- :issue:`WT-11774` Add diagnostic stat to investigate eviction server's + inability to queue pages +- :issue:`WT-12036` Workaround for lock contention on Windows +- :issue:`WT-12147` Temporarily disable clang-analyzer + diff --git a/source/includes/changelogs/releases/6.0.14.rst b/source/includes/changelogs/releases/6.0.14.rst new file mode 100644 index 00000000000..aee306210bb --- /dev/null +++ b/source/includes/changelogs/releases/6.0.14.rst @@ -0,0 +1,158 @@ +.. 
_6.0.14-changelog: + +6.0.14 Changelog +---------------- + +Sharding +~~~~~~~~ + +:issue:`SERVER-81508` Potential double-execution of write statements +when ShardCannotRefreshDueToLocksHeld is thrown + +Aggregation +~~~~~~~~~~~ + +:issue:`SERVER-82929` $listSearchIndexes requires find privilege action +rather than listSearchIndexes privilege action as it intended + +Storage +~~~~~~~ + + +WiredTiger +`````````` + +- :issue:`WT-12316` Fix timing stress options in test/format for 6.0 and + older branches + +Build and Packaging +~~~~~~~~~~~~~~~~~~~ + +:issue:`SERVER-62957` Add reshardCollection change stream event + +Internals +~~~~~~~~~ + +- :issue:`SERVER-64444` listIndexes fails on invalid pre-5.0 index spec + after upgrade +- :issue:`SERVER-65908` Update fields for reshardCollection noop message +- :issue:`SERVER-66503` ObjectIsBusy thrown in unindex +- :issue:`SERVER-68674` Vendor an immutable/persistent data structure + library +- :issue:`SERVER-69413` Documentation Updates +- :issue:`SERVER-72839` Server skips peer certificate validation if + neither CAFile nor clusterCAFile is provided +- :issue:`SERVER-74874` Add typedef for immutable unordered map and set +- :issue:`SERVER-74875` Implement immutable ordered map and set +- :issue:`SERVER-74876` Evaluate which immer memory policy to use +- :issue:`SERVER-74946` Convert containers in CollectionCatalog for + collection lookup to immutable +- :issue:`SERVER-74947` Convert containers in CollectionCatalog for view + lookup to immutable +- :issue:`SERVER-74951` Convert containers in CollectionCatalog for + profile settings to immutable +- :issue:`SERVER-75263` Add immer benchmarks +- :issue:`SERVER-75497` Convert ordered containers in CollectionCatalog + to immutable +- :issue:`SERVER-75613` Add GDB pretty printers for immutable data + structures +- :issue:`SERVER-75851` Add typedef for immutable vector +- :issue:`SERVER-76789` Add immer to README.third_party.md +- :issue:`SERVER-77694` cannot compile immer header with --opt=off +- :issue:`SERVER-78311` mongos does not report writeConcernError in + presence of writeErrors for insert command +- :issue:`SERVER-78662` Deadlock with index build, step down, prepared + transaction, and MODE_IS coll lock +- :issue:`SERVER-78911` Always suppress "Different user name was + supplied to saslSupportedMechs" log during X.509 intracluster auth +- :issue:`SERVER-79150` Reduce ScopedSetShardRole scope to setup stage + of index build +- :issue:`SERVER-79202` PinnedConnectionTaskExecutor can hang when + shutting down +- :issue:`SERVER-80150` Log negotiated network compressor with client + metadata +- :issue:`SERVER-80279` Commit on non-existing transaction then proceed + to continue can trigger an invariant +- :issue:`SERVER-80978` Fix potential deadlock between + TTLMonitor::onStepUp and prepared transaction +- :issue:`SERVER-81021` Improve index prefix heuristic by taking into + account closed intervals +- :issue:`SERVER-82353` Multi-document transactions can miss documents + when movePrimary runs concurrently +- :issue:`SERVER-82627` ReshardingDataReplication does not join the + ReshardingOplogFetcher thread pool causing invariant failure. 
+- :issue:`SERVER-82815` Expose server’s index key creation via + aggregation +- :issue:`SERVER-83050` Create a deployment of mongodb on + AL2-openssl-1.1.1 +- :issue:`SERVER-83119` Secondary replica crashes on clustered + collection if notablescan is enabled +- :issue:`SERVER-83145` Shared buffer fragment incorrectly tracks memory + usage in freeUnused() +- :issue:`SERVER-83337` Re-enable wt_size_storer_cleanup_replica_set.js + on macOS +- :issue:`SERVER-83369` Index creation does not enforce type of + bucketSize field +- :issue:`SERVER-83564` Make sure the process field is indexed in + config.locks +- :issue:`SERVER-83610` Consider reducing privileges required for + $documents +- :issue:`SERVER-83955` Fix wrong warning messages in ReplSetGetStatus + command +- :issue:`SERVER-84063` Remove BlackDuck from Security Daily Cron +- :issue:`SERVER-84233` Support BSON MinKey and MaxKey in BSONColumn +- :issue:`SERVER-84722` Create undocumented server parameter to skip + document validation on insert code path for internal usage +- :issue:`SERVER-84747` Deploy enterprise module consolidation to branch + v6.0 +- :issue:`SERVER-84749` Remove + sharding_update_v1_oplog_jscore_passthrough from macOS variants +- :issue:`SERVER-84772` Delete stitch-related tasks in enterprise + variant +- :issue:`SERVER-85167` Size storer can be flushed concurrently with + being destructed for rollback +- :issue:`SERVER-85171` split unittest tasks up +- :issue:`SERVER-85206` Improve performance of full_range.js and + explicit_range.js +- :issue:`SERVER-85245` FailedToParse error during setParamater of + wiredTigerConcurrentReadTransactions +- :issue:`SERVER-85263` Report escaped client application name +- :issue:`SERVER-85306` Update sys-perf config to use HTTPs github links + rather than SSH +- :issue:`SERVER-85364` [6.0] Convert resource map in CollectionCatalog + to immutable +- :issue:`SERVER-85365` [6.0] Convert shadow catalog in + CollectionCatalog to immutable +- :issue:`SERVER-85386` [v6.0] Adjust configuration to ensure + 'enterprise' module does not appear in version manifest +- :issue:`SERVER-85419` Balancer pollutes logs in case no suitable + recipient is found during draining +- :issue:`SERVER-85530` Refresh Test Certificates +- :issue:`SERVER-85631` Remove jstests/noPassthrough/ttl_expire_nan.js +- :issue:`SERVER-85652` Update DSI atlas azure tasks to use an AL2 + compile artifact. 
+- :issue:`SERVER-85693` Fix potential access violation in + User::validateRestrictions +- :issue:`SERVER-85707` [v6.0] Adding $changeStreamSplitLargeEvent stage + fails on v6.0 mongoS +- :issue:`SERVER-85771` Make $bucketAuto more robust in the case of an + empty string for the groupBy field +- :issue:`SERVER-85848` $redact inhibits change stream optimization +- :issue:`SERVER-85984` The test for inserting docs larger than the user + max is flaky +- :issue:`SERVER-86027` Tag + insert_docs_larger_than_max_user_size_standalone.js with + requires_persistence and requires_replication +- :issue:`SERVER-86081` Sys-perf missing required parameters due to + Evergreen Redaction +- :issue:`SERVER-86177` Remove extra lines added during backport +- :issue:`SERVER-86363` Make container registry login silent +- :issue:`WT-9057` Null address read in compact walk +- :issue:`WT-9824` Add testing to file manager WT connection + configurations in test/format +- :issue:`WT-12077` Incorrect hardware checksum calculation on zSeries + for buffers on stack +- :issue:`WT-12211` Fix PATH env variable in hang analyzer to generate + python core dump (7.0) +- :issue:`WT-12272` Remove unnecessary module in evergreen.yml + diff --git a/source/includes/changelogs/releases/6.0.15.rst b/source/includes/changelogs/releases/6.0.15.rst new file mode 100644 index 00000000000..3312f61d1c0 --- /dev/null +++ b/source/includes/changelogs/releases/6.0.15.rst @@ -0,0 +1,227 @@ +.. _6.0.15-changelog: + +6.0.15 Changelog +---------------- + +Sharding +~~~~~~~~ + +:issue:`SERVER-84368` CreateIndex fails with StaleConfig if run from a +stale mongos against a sharded non-empty collection + +Query +~~~~~ + +:issue:`SERVER-83602` $or -> $in MatchExpression rewrite should not +generate $or directly nested in another $or + +Write Operations +~~~~~~~~~~~~~~~~ + +:issue:`SERVER-88200` Time-series writes on manually-created buckets may +misbehave + +Storage +~~~~~~~ + +:issue:`WT-11062` Safe free the ref addr to allow concurrent access + +WiredTiger +`````````` + +- :issue:`WT-11845` Fix transaction visibility issue with truncate + +Build and Packaging +~~~~~~~~~~~~~~~~~~~ + +:issue:`WT-12587` Re-enable compile-clang tasks for older versions of +clang + +Internals +~~~~~~~~~ + +- :issue:`SERVER-68128` Exceptions thrown while generating command + response lead to network error +- :issue:`SERVER-72431` Make the commit of split chunks idempotent +- :issue:`SERVER-72703` Downgrade $out's db lock to MODE_IX +- :issue:`SERVER-74375` Failpoint should not allow escape of + FCBIS:_finishCallback +- :issue:`SERVER-75355` Improve explain with Queryable Encryption +- :issue:`SERVER-75845` Catch InterruptedDueToStorageChange in parallel + shell for fcbis_election_during_storage_change.js +- :issue:`SERVER-77559` Implement file system log handler for resmoke +- :issue:`SERVER-77737` $top/$bottom gives incorrect result for sharded + collection and constant expressions +- :issue:`SERVER-78556` Return default of internalInsertMaxBatchSize to + 64 +- :issue:`SERVER-78852` Test movePrimary and $out running concurrently +- :issue:`SERVER-79286` Create a query knob +- :issue:`SERVER-79400` Implement number of documents tie breaking + heuristics +- :issue:`SERVER-79575` Fix numa node counting +- :issue:`SERVER-79999` reduce test code coverage on macos builders +- :issue:`SERVER-80177` validate() should not return valid:false for + non-compliant documents +- :issue:`SERVER-80233` Implement index prefix heuristic +- :issue:`SERVER-80275` Add log line for detailed 
plan scoring +- :issue:`SERVER-80340` Handle and test dbCheck during initial sync +- :issue:`SERVER-80363` server default writeConcern is not honored when + wtimeout is set +- :issue:`SERVER-81163` compact.js times out when wiredTigerStressConfig + is set to true +- :issue:`SERVER-81400` Structural validation for BSONColumn +- :issue:`SERVER-82094` Release mongo v6 on amazon 2023 +- :issue:`SERVER-82476` Disable diagnostics latches by default +- :issue:`SERVER-82717` QueryPlannerIXSelect::stripInvalidAssignments + tries to strip non-existent index assignment from + $_internalSchemaAllElemMatchFromIndex +- :issue:`SERVER-83501` Write script to generate a file of all available + server parameters for sys-perf runs +- :issue:`SERVER-83508` Race between watchdog and FCBIS deleting old + storage files +- :issue:`SERVER-83952` Fix fuzzer failures for BSONColumn validation +- :issue:`SERVER-83956` Balancer wrongly emit warning message in + multiversion clusters +- :issue:`SERVER-84125` Check fieldname size in BSONColumn validation +- :issue:`SERVER-84179` Simple8b builder does not fully reset state + after writing RLE block +- :issue:`SERVER-84336` Timeseries inserts can leave dangling BSONObj in + WriteBatches in certain cases +- :issue:`SERVER-84612` Define a version for immer +- :issue:`SERVER-84615` Define a version for linenoise +- :issue:`SERVER-85368` Updates the genny module in sys-perf to point to + mongo/genny instead of 10gen/genny +- :issue:`SERVER-85534` Checkpoint the vector clock after committing + shard collection +- :issue:`SERVER-85580` Undo any update on ScopedSetShardRole + construction failure +- :issue:`SERVER-85633` Add lock around res_ninit call +- :issue:`SERVER-85694` $searchMeta aggregation pipeline stage not + passing correct query to mongot after PlanShardedSearch +- :issue:`SERVER-85714` BSONColumn validator need to treat MinKey and + MaxKey as uncompressed +- :issue:`SERVER-85716` Fix for empty buffer being passed to BSONColumn + validation +- :issue:`SERVER-85721` Point evergreen task log lobster links to + Parsley +- :issue:`SERVER-85843` A write operation may fail with + NamespaceNotFound if the database has been concurrently dropped + (sharding-only) +- :issue:`SERVER-85869` Exhaustive find on config shard can return stale + data +- :issue:`SERVER-85973` Update README.third_party.md to indicate that + Valgrind is licensed under BSD-4-Clause +- :issue:`SERVER-86017` Backport multi-planner tie breaking improvements + to v6.0 +- :issue:`SERVER-86065` BSONColumn structural validation should check + for nested interleaved mode +- :issue:`SERVER-86116` CreateCollectionCoordinator may fail to create + the chunk metadata on commit time. 
+- :issue:`SERVER-86214` add all bazel-* output dirs to git ignore +- :issue:`SERVER-86273` $search should set protocol version and search + sequence token in establishSearchCursors +- :issue:`SERVER-86388` Remove fle_drivers_integration.js test from 6.0 +- :issue:`SERVER-86395` Investigate DuplicateKey error while recovering + convertToCapped from stable checkpoint +- :issue:`SERVER-86403` Fix THP startup warnings +- :issue:`SERVER-86407` validation does not produce complete results + when it should +- :issue:`SERVER-86419` SBE and Classic behave differently for + $bitsAnyClear on NumberDecimal infinity +- :issue:`SERVER-86424` $facet should be able to generate documents with + searchSequenceToken +- :issue:`SERVER-86433` Clear memory in the data_union stored on the + endpoint before use +- :issue:`SERVER-86454` Merge canSwapWithRedact and + canSwapWithSingleDocTransform constraints +- :issue:`SERVER-86619` Document::shouldSkipDeleted() accesses string + without checking for missingg +- :issue:`SERVER-86622` Resharding coordinator use possibly stale + database info +- :issue:`SERVER-86632` plan_cache_drop_database.js should catch + DatabaseDropPending errors +- :issue:`SERVER-86634` A collection that ends with ecoc.compact must be + considered a FLE collection +- :issue:`SERVER-86646` Fix decodeRecordIdStrAtEnd handling of + unterminated size bytes +- :issue:`SERVER-86672` CollMod coordinator use possibly stale database + information +- :issue:`SERVER-86705` moveChunk op slower than TTL index in + ttl_deletes_not_targeting_orphaned_documents.js +- :issue:`SERVER-86717` Resharding should validate user provided zone + range doesn't include $-prefixed fields. +- :issue:`SERVER-86772` Fix racy watchdog_test +- :issue:`SERVER-86774` Increase oplog size for PIT (point in time + restore) tests +- :issue:`SERVER-86782` geo_axis_aligned.js takes too long +- :issue:`SERVER-86812` ClusterChunksResizePolicy may cause + setCompatibilityFeatureFeature to crash the config server while + downgrading the cluster +- :issue:`SERVER-86817` ClusterChunksResizePolicy does not fully clean + its state upon completion +- :issue:`SERVER-86822` remove sharding_gen from macOS builders +- :issue:`SERVER-86840` fix gather unittest script to handle split + unittests tasks +- :issue:`SERVER-87058` Chunk refresh from a secondary does not wait for + majority writeConcern while flushing +- :issue:`SERVER-87224` Enable diagnostic latching in test variants on + old branches +- :issue:`SERVER-87260` Fix for atlas azure intel variant 6.0 +- :issue:`SERVER-87306` Prevent accessing OCSP manager ptr during + shutdown +- :issue:`SERVER-87323` Future continuations must capture vector clock + as shared pointer +- :issue:`SERVER-87415` Remove run_command__simple workload from + sys-perf +- :issue:`SERVER-87479` Manually run SBE build variants on release + branches in evergreen to generate and add SBE $group/$lookup tests + with $skip/$limit prefixes +- :issue:`SERVER-87521` Fix race in BackgroundSync between making + RollbackImpl and shutdown +- :issue:`SERVER-87544` Fix up gitignore to permit git awareness of + enterprise module +- :issue:`SERVER-87567` The SessionWorkflow should correctly return a + response error on malformed requests +- :issue:`SERVER-87610` Relax shardVersionRetry tripwires on the + namespace of received stale exceptions +- :issue:`SERVER-87616` Create minimal trySbeEngine build variant on + release configurations which have SBE +- :issue:`SERVER-87905` BSONColumn validation integer overflow +- 
:issue:`SERVER-87979` Investigate and fix up + projection_executor_redaction_test on v6.0 +- :issue:`SERVER-88111` random_DDL_CRUD_operations.js bulk insert should + perform max internalInsertMaxBatchSize inserts +- :issue:`SERVER-88136` Fix arbiter_always_has_latest_fcv.js test to + correctly test arbiter FCV behavior +- :issue:`SERVER-88149` Tag group_lookup_with_canonical_query_prefix.js + with no_selinux +- :issue:`SERVER-88202` Fix possible integer overflow in BSON validation +- :issue:`SERVER-88262` Prevent timeouts in + read_pref_with_hedging_mode.js +- :issue:`SERVER-88650` Deadlock in VectorClockMongoD during shutdown +- :issue:`SERVER-88755` Make sure all sys-perf build variants specify a + mongodb_setup_release +- :issue:`SERVER-88942` Update db-contrib-tool version that includes fix + for downloading old binaries +- :issue:`SERVER-88971` Older sys-perf variants on 5.0 and 6.0 no longer + needed +- :issue:`SERVER-89068` Explicitly set exec_timeout and timeout_secs for + the sys-perf project +- :issue:`SERVER-89251` Revert concurrent movePrimary and aggregations + test from v7.0 and v6.0 +- :issue:`WT-10178` Fix timing stress causing format to time out with + prepare-conflict +- :issue:`WT-11241` Skip current transaction snap_min visible deleted + pages as part of the tree walk +- :issue:`WT-11987` Table's version number dropped to + version=(major=1,minor=0) +- :issue:`WT-12043` Remove obsolete HAVE_DIAGNOSTIC ifdefs to avoid + memory leak +- :issue:`WT-12227` Assertion fires in __hs_delete_record on 6.0 +- :issue:`WT-12304` RTS should provide information about how much more + work it has to do +- :issue:`WT-12321` Add stat to track how many bulk cursors are opened +- :issue:`WT-12379` Incorrect python version on Windows on 6.0 +- :issue:`WT-12402` Add stats to track when eviction server skips + walking a tree + diff --git a/source/includes/changelogs/releases/7.0.1.rst b/source/includes/changelogs/releases/7.0.1.rst index 28061366d7c..b01700fa657 100644 --- a/source/includes/changelogs/releases/7.0.1.rst +++ b/source/includes/changelogs/releases/7.0.1.rst @@ -106,7 +106,7 @@ Internals series inserts on OID collision - :issue:`SERVER-79447` The balancer stop sequence may cause the config server to crash on step down -- :issue:`SERVER-79509` Add testing of transitional FCVs with +- :issue:`SERVER-79509` Add testing of transitional fCVs with removeShard and transitionToDedicatedConfigServer - :issue:`SERVER-79515` Update task generator - :issue:`SERVER-79607` ShardRegistry shutdown should not wait diff --git a/source/includes/changelogs/releases/7.0.2.rst b/source/includes/changelogs/releases/7.0.2.rst index e8e1e7ec244..6359c8cb303 100644 --- a/source/includes/changelogs/releases/7.0.2.rst +++ b/source/includes/changelogs/releases/7.0.2.rst @@ -33,7 +33,7 @@ Sharding Operations ~~~~~~~~~~ -- :issue:`SERVER-58534` Collect FCV in FTDC +- :issue:`SERVER-58534` Collect fCV in FTDC - :issue:`SERVER-77610` Log session id associated with the backup cursor Build and Packaging diff --git a/source/includes/changelogs/releases/7.0.4.rst b/source/includes/changelogs/releases/7.0.4.rst index d4a0953d1d9..d5ef625ea0d 100644 --- a/source/includes/changelogs/releases/7.0.4.rst +++ b/source/includes/changelogs/releases/7.0.4.rst @@ -25,7 +25,7 @@ Internals - :issue:`SERVER-77113` Exclude fields containing dots from time series indexes - :issue:`SERVER-79317` Provide more documentation and helper functions - for case where feature flag checks could be run when FCV is + for case where feature flag 
checks could be run when fCV is uninitialized during initial sync - :issue:`SERVER-79470` Update shard-lite-audit infra provision for sysperf diff --git a/source/includes/changelogs/releases/7.0.5.rst b/source/includes/changelogs/releases/7.0.5.rst index eac67a2a865..aac2fcb0fc2 100644 --- a/source/includes/changelogs/releases/7.0.5.rst +++ b/source/includes/changelogs/releases/7.0.5.rst @@ -20,8 +20,6 @@ Sharding verbose - :issue:`SERVER-83061` Remove partially-released vestiges of ShardRole API from 7.0 -- :issue:`SERVER-83146` Bulk write operation might fail with - NamespaceNotFound Query ~~~~~ @@ -45,6 +43,7 @@ Internals locks during shutdown and rollback - :issue:`SERVER-70974` Fix early-exits triggered when user specifies TCP Fast Open server parameters +- :issue:`SERVER-75033` Capture core dumps from test failures on macOS - :issue:`SERVER-76560` Time series collections not always honoring expireAfterSeconds correctly - :issue:`SERVER-77311` Add a new log message when a secondary node is @@ -59,7 +58,7 @@ Internals causes invariant failure - :issue:`SERVER-79235` rolling_index_builds_interrupted.js checkLog relies on clearRawMongoProgramOutput -- :issue:`SERVER-79274` FCV checks can be racy if FCV is uninitialized +- :issue:`SERVER-79274` fCV checks can be racy if fCV is uninitialized in between the checks - :issue:`SERVER-79762` Fix initial_sync_chooses_correct_sync_source.js to wait initial sync node to find primary before starting initial sync @@ -124,7 +123,9 @@ Internals response status check - :issue:`SERVER-82143` Make clientId OIDC IdP configuration field optional -- :issue:`SERVER-82223` Commit handler in FCV op observer is susceptible +- :issue:`SERVER-82197` Incorrect query results in SBE if $group spills + in presence of collation +- :issue:`SERVER-82223` Commit handler in fCV op observer is susceptible to interruption - :issue:`SERVER-82313` Fix cancelling txn api from the caller - :issue:`SERVER-82365` Optimize the construction of the balancer's @@ -209,6 +210,8 @@ Internals - :issue:`SERVER-84087` Make sure ExecutorPool gets terminated after migrations have completed - :issue:`SERVER-84148` Fix timing issue in fle2_compact_setfcv.js test +- :issue:`SERVER-84337` Backport new variants added to perf.yml over to + sys-perf-7.0 and sys-perf-4.4 - :issue:`WT-7929` Investigate a solution to avoid FTDC stalls during checkpoint - :issue:`WT-11584` Fix test_checkpoint_stats test diff --git a/source/includes/changelogs/releases/7.0.6.rst b/source/includes/changelogs/releases/7.0.6.rst new file mode 100644 index 00000000000..f2c1c99e806 --- /dev/null +++ b/source/includes/changelogs/releases/7.0.6.rst @@ -0,0 +1,247 @@ +.. 
_7.0.6-changelog: + +7.0.6 Changelog +--------------- + +Sharding +~~~~~~~~ + +- :issue:`SERVER-75537` Handle direct operations against shards +- :issue:`SERVER-76337` Add a server status metric to track unauthorized + direct connections to shards +- :issue:`SERVER-76984` Remove check for !_isInternalClient() in service + entry point +- :issue:`SERVER-77027` Only check for direct shard connections if + sharding is enabled +- :issue:`SERVER-81508` Potential double-execution of write statements + when ShardCannotRefreshDueToLocksHeld is thrown +- :issue:`SERVER-83146` Bulk write operation might fail with + NamespaceNotFound +- :issue:`SERVER-83775` Do not balance data between shards owning more + than the ideal data size + +Replication +~~~~~~~~~~~ + +:issue:`SERVER-79191` continuous_initial_sync.py Can Be in Rollback +During FSM Teardown + +Query +~~~~~ + +:issue:`SERVER-84595` Delete invalid test +jstests/noPassthrough/out_majority_read_replset.js + +Aggregation +~~~~~~~~~~~ + +:issue:`SERVER-82929` $listSearchIndexes requires find privilege action +rather than listSearchIndexes privilege action as it intended + +Storage +~~~~~~~ + +:issue:`WT-11062` Safe free the ref addr to allow concurrent access + +WiredTiger +`````````` + +- :issue:`WT-11845` Fix transaction visibility issue with truncate + +Build and Packaging +~~~~~~~~~~~~~~~~~~~ + +:issue:`SERVER-62957` Add reshardCollection change stream event + +Internals +~~~~~~~~~ + +- :issue:`SERVER-69413` Documentation Updates +- :issue:`SERVER-72703` Downgrade $out's db lock to MODE_IX +- :issue:`SERVER-72839` Server skips peer certificate validation if + neither CAFile nor clusterCAFile is provided +- :issue:`SERVER-74875` Implement immutable ordered map and set +- :issue:`SERVER-75497` Convert ordered containers in CollectionCatalog + to immutable +- :issue:`SERVER-75613` Add GDB pretty printers for immutable data + structures +- :issue:`SERVER-75851` Add typedef for immutable vector +- :issue:`SERVER-76463` Ensure Sharding DDL locks acquired outside a + coordinator wait for DDL recovery +- :issue:`SERVER-77801` Remove + sharded_collections_jscore_passthrough_with_config_shard from the + macOS hosts +- :issue:`SERVER-78188` Permit default use of multithreaded LDAP + connection pool with libldap and OpenSSL 1.1.1 +- :issue:`SERVER-78311` mongos does not report writeConcernError in + presence of writeErrors for insert command +- :issue:`SERVER-78662` Deadlock with index build, step down, prepared + transaction, and MODE_IS coll lock +- :issue:`SERVER-78911` Always suppress "Different user name was + supplied to saslSupportedMechs" log during X.509 intracluster auth +- :issue:`SERVER-79150` Reduce ScopedSetShardRole scope to setup stage + of index build +- :issue:`SERVER-79192` Fix migration_coordinator_commit_failover.js to + use awaitReplicationBeforeStepUp: false +- :issue:`SERVER-79202` PinnedConnectionTaskExecutor can hang when + shutting down +- :issue:`SERVER-79214` Orphaned documents cause failures in indexu.js +- :issue:`SERVER-79286` Create a query knob +- :issue:`SERVER-79400` Implement number of documents tie breaking + heuristics +- :issue:`SERVER-79972` Investigate making core dump archival faster +- :issue:`SERVER-80150` Log negotiated network compressor with client + metadata +- :issue:`SERVER-80233` Implement index prefix heuristic +- :issue:`SERVER-80275` Add log line for detailed plan scoring +- :issue:`SERVER-80310` Update sysperf to allow running individual genny + tasks on waterfall +- :issue:`SERVER-80645` Amazon 2023 
community packages fail to install +- :issue:`SERVER-80978` Fix potential deadlock between + TTLMonitor::onStepUp and prepared transaction +- :issue:`SERVER-81021` Improve index prefix heuristic by taking into + account closed intervals +- :issue:`SERVER-81181` Enable featureFlagCheckForDirectShardOperations +- :issue:`SERVER-81246` FLE WriteConcernError behavior unclear +- :issue:`SERVER-81534` DDL locks musn't be acquired during step down or + shutdown +- :issue:`SERVER-82053` Use index hint for time series bucket reopening + query +- :issue:`SERVER-82221` listCollections and listIndexes should include + commit-pending namespaces +- :issue:`SERVER-82261` setup_spawnhost_coredump script may miss core + dump from crashed process on Windows +- :issue:`SERVER-82353` Multi-document transactions can miss documents + when movePrimary runs concurrently +- :issue:`SERVER-82365` Optimize the construction of the balancer's + collection distribution status histogram (2nd attempt) +- :issue:`SERVER-82450` MongoServerError: batched writes must generate a + single applyOps entry +- :issue:`SERVER-82627` ReshardingDataReplication does not join the + ReshardingOplogFetcher thread pool causing invariant failure. +- :issue:`SERVER-82640` Upload mongod --version output to S3 during + server compilation in Evergreen +- :issue:`SERVER-82815` Expose server’s index key creation via + aggregation +- :issue:`SERVER-83050` Create a deployment of mongodb on + AL2-openssl-1.1.1 +- :issue:`SERVER-83119` Secondary replica crashes on clustered + collection if notablescan is enabled +- :issue:`SERVER-83145` Shared buffer fragment incorrectly tracks memory + usage in freeUnused() +- :issue:`SERVER-83192` Always include zero cpuNanos in profiler +- :issue:`SERVER-83296` Remove column data from BSON fuzzer +- :issue:`SERVER-83337` Re-enable wt_size_storer_cleanup_replica_set.js + on macOS +- :issue:`SERVER-83369` Index creation does not enforce type of + bucketSize field +- :issue:`SERVER-83454` Range Deleter Service registration and + de-registration should not rely on onCommit ordering guarantees +- :issue:`SERVER-83492` Remove limit and skip values from SBE plan cache + key if possible +- :issue:`SERVER-83567` Push in classic stores missing values. 
+- :issue:`SERVER-83610` Consider reducing privileges required for + $documents +- :issue:`SERVER-83639` Add exception for fuzzer for BSONColumn + validation +- :issue:`SERVER-83738` db-contrib-tool fails to get release json + sometimes +- :issue:`SERVER-83825` increase log verbosity for write conflict + retries in index_build_operation_metrics.js: +- :issue:`SERVER-83874` Move primary operation doesn't drop + db.system.views on the donor +- :issue:`SERVER-83955` Fix wrong warning messages in ReplSetGetStatus + command +- :issue:`SERVER-83959` When preparing SBE plan, correctly pass + preparingFromCache argument +- :issue:`SERVER-84063` Remove BlackDuck from Security Daily Cron +- :issue:`SERVER-84130` Incorrect bucket-level filter optimization when + some events in the bucket are missing the field +- :issue:`SERVER-84147` Update vscode workspace from true to explicit +- :issue:`SERVER-84186` Add benchmark that runs math operations in + Timeseries to sys perf +- :issue:`SERVER-84233` Support BSON MinKey and MaxKey in BSONColumn +- :issue:`SERVER-84313` Exclude + coordinate_txn_recover_on_stepup_with_tickets_exhausted.js from + sharding multiversion suites on 7.0 +- :issue:`SERVER-84336` Timeseries inserts can leave dangling BSONObj in + WriteBatches in certain cases +- :issue:`SERVER-84337` Backport new variants added to perf.yml over to + sys-perf-7.0 and sys-perf-4.4 +- :issue:`SERVER-84338` Top level $or queries may lead to invalid SBE + plan cache entry which returns wrong results +- :issue:`SERVER-84353` The test for stepDown deadlock with read ticket + exhaustion is flaky +- :issue:`SERVER-84410` Do an initial refresh of the other mongos in + txn_with_several_routers.js +- :issue:`SERVER-84436` Handle skip + limit sum overflowing int64_t in + SBE +- :issue:`SERVER-84468` Fix deadlock when running + runTransactionOnShardingCatalog() +- :issue:`SERVER-84534` [7.0] Blocklist plan_cache_sbe.js from + replica_sets_initsync_jscore_passthrough +- :issue:`SERVER-84548` Using ShardServerCatalogCacheLoader on configsvr + causes excessive WT data handles / memory usage +- :issue:`SERVER-84567` writeQueryStats should log an error rather than + uassert when the feature flag is disabled +- :issue:`SERVER-84722` Create undocumented server parameter to skip + document validation on insert code path for internal usage +- :issue:`SERVER-84723` Sharded multi-document transactions can observe + partial effects of concurrent DDL operations +- :issue:`SERVER-84732` Fix typo in mongo-perf standalone inMemory ARM + AWS test +- :issue:`SERVER-84806` Ignore reshardCollection change event after + v6.0->v7.0 upgrade in test +- :issue:`SERVER-85167` Size storer can be flushed concurrently with + being destructed for rollback +- :issue:`SERVER-85171` split unittest tasks up +- :issue:`SERVER-85206` Improve performance of full_range.js and + explicit_range.js +- :issue:`SERVER-85260` SBE $mergeObjects crashes server with undefined + input +- :issue:`SERVER-85263` Report escaped client application name +- :issue:`SERVER-85306` Update sys-perf config to use HTTPs github links + rather than SSH +- :issue:`SERVER-85419` Balancer pollutes logs in case no suitable + recipient is found during draining +- :issue:`SERVER-85453` ExternalDataSourceScopeGuard should not be + compatible with multiple plan executors +- :issue:`SERVER-85530` Refresh Test Certificates +- :issue:`SERVER-85633` Add lock around res_ninit call +- :issue:`SERVER-85652` Update DSI atlas azure tasks to use an AL2 + compile artifact. 
+- :issue:`SERVER-85693` Fix potential access violation in + User::validateRestrictions +- :issue:`SERVER-85714` BSONColumn validator need to treat MinKey and + MaxKey as uncompressed +- :issue:`SERVER-85771` Make $bucketAuto more robust in the case of an + empty string for the groupBy field +- :issue:`SERVER-85848` $redact inhibits change stream optimization +- :issue:`SERVER-85956` Query Stats 7.0 Backport Batch #1 +- :issue:`SERVER-85984` The test for inserting docs larger than the user + max is flaky +- :issue:`SERVER-86027` Tag + insert_docs_larger_than_max_user_size_standalone.js with + requires_persistence and requires_replication +- :issue:`SERVER-86081` Sys-perf missing required parameters due to + Evergreen Redaction +- :issue:`SERVER-86096` Add queryable encryption workloads to 7.0 + project on Evergreen +- :issue:`SERVER-86116` CreateCollectionCoordinator may fail to create + the chunk metadata on commit time. +- :issue:`SERVER-86118` Backport Query Stats to 7.0 Batch #2 +- :issue:`SERVER-86298` Query Stats 7.0 Backport Batch #3 +- :issue:`SERVER-86363` Make container registry login silent +- :issue:`SERVER-86432` Backport Query Stats to 7.0 Batch #4 +- :issue:`WT-11777` Fix units of __wt_timer_evaluate() calls: logging + and progress period +- :issue:`WT-11987` Table's version number dropped to + version=(major=1,minor=0) +- :issue:`WT-12043` Remove obsolete HAVE_DIAGNOSTIC ifdefs to avoid + memory leak +- :issue:`WT-12077` Incorrect hardware checksum calculation on zSeries + for buffers on stack +- :issue:`WT-12147` Temporarily disable clang-analyzer +- :issue:`WT-12211` Fix PATH env variable in hang analyzer to generate + python core dump (7.0) + diff --git a/source/includes/changelogs/releases/7.0.7.rst b/source/includes/changelogs/releases/7.0.7.rst new file mode 100644 index 00000000000..c3f3034c8ad --- /dev/null +++ b/source/includes/changelogs/releases/7.0.7.rst @@ -0,0 +1,166 @@ +.. 
_7.0.7-changelog: + +7.0.7 Changelog +--------------- + +Sharding +~~~~~~~~ + +:issue:`SERVER-84368` CreateIndex fails with StaleConfig if run from a +stale mongos against a sharded non-empty collection + +Query +~~~~~ + +:issue:`SERVER-83602` $or -> $in MatchExpression rewrite should not +generate $or directly nested in another $or + +Aggregation +~~~~~~~~~~~ + +:issue:`SERVER-87313` [v7.0] [SBE] Aggregate command hits tripwire +assertion in SortStage::SortImpl::runLimitCode() + +Build and Packaging +~~~~~~~~~~~~~~~~~~~ + +:issue:`WT-11407` Fix test_txn24 test (not WiredTiger) to stop +WT_ROLLBACK errors on MacOS + +Internals +~~~~~~~~~ + +- :issue:`SERVER-70672` Merge enterprise repo into 10gen/mongo +- :issue:`SERVER-72431` Make the commit of split chunks idempotent +- :issue:`SERVER-76700` Increase window of acceptable elapsed CPU times + in OperationCPUTimerTest::TestReset +- :issue:`SERVER-79285` makeOperationContext should not be called on the + primaryOnlyService instance cleanup executor +- :issue:`SERVER-79999` reduce test code coverage on macos builders +- :issue:`SERVER-80177` validate() should not return valid:false for + non-compliant documents +- :issue:`SERVER-83501` Write script to generate a file of all available + server parameters for sys-perf runs +- :issue:`SERVER-83508` Race between watchdog and FCBIS deleting old + storage files +- :issue:`SERVER-83956` Balancer wrongly emit warning message in + multiversion clusters +- :issue:`SERVER-84008` Enable query stats sys-perf variants on 7.0 +- :issue:`SERVER-84123` Add versioning to BSON validation +- :issue:`SERVER-84125` Check fieldname size in BSONColumn validation +- :issue:`SERVER-84179` Simple8b builder does not fully reset state + after writing RLE block +- :issue:`SERVER-84240` Make replSetReconfig retry network errors +- :issue:`SERVER-84589` Error when directly dropping a sharded + time-series buckets collection is misleading. 
+- :issue:`SERVER-84612` Define a version for immer +- :issue:`SERVER-84615` Define a version for linenoise +- :issue:`SERVER-84628` Startup warning in mongos for Read/Write Concern +- :issue:`SERVER-85318` Change expireAfterSeconds in + timeseries_out_non_sharded.js +- :issue:`SERVER-85459` [v7.0] bucketRoundingSeconds param is accepted + by nodes on fCV 6.0, binary 7.0 +- :issue:`SERVER-85534` Checkpoint the vector clock after committing + shard collection +- :issue:`SERVER-85690` Wait for stepdown to finish before continuing + index build in index_build_unregisters_after_stepdown.js +- :issue:`SERVER-85716` Fix for empty buffer being passed to BSONColumn + validation +- :issue:`SERVER-85843` A write operation may fail with + NamespaceNotFound if the database has been concurrently dropped + (sharding-only) +- :issue:`SERVER-85869` Exhaustive find on config shard can return stale + data +- :issue:`SERVER-85973` Update README.third_party.md to indicate that + Valgrind is licensed under BSD-4-Clause +- :issue:`SERVER-86021` 7.0 backport testing audit +- :issue:`SERVER-86065` BSONColumn structural validation should check + for nested interleaved mode +- :issue:`SERVER-86106` shadow-utils is not on suse +- :issue:`SERVER-86158` change fail point used in TTL operation metrics + tests +- :issue:`SERVER-86273` $search should set protocol version and search + sequence token in establishSearchCursors +- :issue:`SERVER-86355` recoverRefreshDbVersion is swallowing errors +- :issue:`SERVER-86395` Investigate DuplicateKey error while recovering + convertToCapped from stable checkpoint +- :issue:`SERVER-86399` Ensure that FTDC tracks information related to + systems that could be running the new allocator +- :issue:`SERVER-86403` Fix THP startup warnings +- :issue:`SERVER-86417` Change $vectorSearch filter to owned obj +- :issue:`SERVER-86424` $facet should be able to generate documents with + searchSequenceToken +- :issue:`SERVER-86433` Clear memory in the data_union stored on the + endpoint before use +- :issue:`SERVER-86452` [v7.0] make v7.0 fle variant closer to master +- :issue:`SERVER-86454` Merge canSwapWithRedact and + canSwapWithSingleDocTransform constraints +- :issue:`SERVER-86481` Jepsen set, register, and read concern majority + tests are not running in Evergreen +- :issue:`SERVER-86523` Backport Query Stats to 7.0 Batch #5 +- :issue:`SERVER-86607` Reject access tokens with multiple audience + claims +- :issue:`SERVER-86619` Document::shouldSkipDeleted() accesses string + without checking for missingg +- :issue:`SERVER-86620` [v7.0] Backport script for sys-perf parameters +- :issue:`SERVER-86622` Resharding coordinator use possibly stale + database info +- :issue:`SERVER-86624` Make RSLocalClient also wait for a snapshot to + be available +- :issue:`SERVER-86632` plan_cache_drop_database.js should catch + DatabaseDropPending errors +- :issue:`SERVER-86634` A collection that ends with ecoc.compact must be + considered a FLE collection +- :issue:`SERVER-86646` Fix decodeRecordIdStrAtEnd handling of + unterminated size bytes +- :issue:`SERVER-86652` Query Stats 7.0 Backport Batch #6 +- :issue:`SERVER-86698` Add query stats passthroughs to + evergreen_nightly for 7.0 +- :issue:`SERVER-86700` [7.X] Fix timeseries_agg_out.js not expecting + NamespaceNotFound error +- :issue:`SERVER-86705` moveChunk op slower than TTL index in + ttl_deletes_not_targeting_orphaned_documents.js +- :issue:`SERVER-86717` Resharding should validate user provided zone + range doesn't include $-prefixed fields. 
+- :issue:`SERVER-86772` Fix racy watchdog_test +- :issue:`SERVER-86822` remove sharding_gen from macOS builders +- :issue:`SERVER-86840` fix gather unittest script to handle split + unittests tasks +- :issue:`SERVER-86841` Fix test setup for shapifying_bm.cpp on 7.0 + branch +- :issue:`SERVER-86876` Disable diagnostic latches for sys-perf variants + on 7.0 +- :issue:`SERVER-86889` Fix idl_check_compability.py to consider edge + cases +- :issue:`SERVER-86903` Backport QS to 7.0 Batch #7 +- :issue:`SERVER-87061` Sharded multi-document transactions can observe + partial effects of concurrent reshard operation +- :issue:`SERVER-87130` Backport Query Stats to 7.0 Batch #8 +- :issue:`SERVER-87177` Modify tests in expression_test.cpp to not use + $getFields. +- :issue:`SERVER-87330` Accept JWKSets with non-RSA keys +- :issue:`SERVER-87394` [v7.0] Explore fixes for broken debian11 package +- :issue:`SERVER-87415` Remove run_command__simple workload from + sys-perf +- :issue:`SERVER-87479` Manually run SBE build variants on release + branches in evergreen to generate and add SBE $group/$lookup tests + with $skip/$limit prefixes +- :issue:`SERVER-87544` Fix up gitignore to permit git awareness of + enterprise module +- :issue:`SERVER-87557` Exclude some FF tests from an invalid build + variant +- :issue:`SERVER-87567` The SessionWorkflow should correctly return a + response error on malformed requests +- :issue:`SERVER-87600` Delete older variants from system_perf.yml +- :issue:`SERVER-87612` Backport Query Stats to 7.0 Batch #9 +- :issue:`WT-10178` Fix timing stress causing format to time out with + prepare-conflict +- :issue:`WT-11239` Add CLANG_C/CXX_VERSION compile flags to the + configure wiredtiger task +- :issue:`WT-11325` Missing keys in schema-abort-predictable-test +- :issue:`WT-12304` RTS should provide information about how much more + work it has to do +- :issue:`WT-12321` Add stat to track how many bulk cursors are opened +- :issue:`WT-12402` Add stats to track when eviction server skips + walking a tree + diff --git a/source/includes/changelogs/releases/7.0.8.rst b/source/includes/changelogs/releases/7.0.8.rst new file mode 100644 index 00000000000..2deb075aa85 --- /dev/null +++ b/source/includes/changelogs/releases/7.0.8.rst @@ -0,0 +1,66 @@ +.. 
_7.0.8-changelog: + +7.0.8 Changelog +--------------- + +Internals +~~~~~~~~~ + +- :issue:`SERVER-75845` Catch InterruptedDueToStorageChange in parallel + shell for fcbis_election_during_storage_change.js +- :issue:`SERVER-77559` Implement file system log handler for resmoke +- :issue:`SERVER-77737` $top/$bottom gives incorrect result for sharded + collection and constant expressions +- :issue:`SERVER-78556` Return default of internalInsertMaxBatchSize to + 64 +- :issue:`SERVER-78832` AutoGetCollectionForReadLockFree constructor + should check the shard version when setting shard key +- :issue:`SERVER-78852` Test movePrimary and $out running concurrently +- :issue:`SERVER-79575` Fix numa node counting +- :issue:`SERVER-79999` reduce test code coverage on macos builders +- :issue:`SERVER-81108` Sharded $search fails tassert in writeQueryStats +- :issue:`SERVER-83422` Remove explain from AggQueryShape +- :issue:`SERVER-84179` Simple8b builder does not fully reset state + after writing RLE block +- :issue:`SERVER-84530` Add query stats key hash to output of + $queryStats +- :issue:`SERVER-85580` Undo any update on ScopedSetShardRole + construction failure +- :issue:`SERVER-85721` Point evergreen task log lobster links to + Parsley +- :issue:`SERVER-85799` + rollback_recovery_commit_transaction_before_stable_timestamp should + wait for system to stablize before disabling failpoint +- :issue:`SERVER-86021` [v7.0] 7.0 backport testing audit +- :issue:`SERVER-86583` Non-transactional snapshot read on unsharded + collection may execute with mismatched sharding metadata +- :issue:`SERVER-86622` Resharding coordinator use possibly stale + database info +- :issue:`SERVER-86672` CollMod coordinator use possibly stale database + information +- :issue:`SERVER-86774` Increase oplog size for PIT (point in time + restore) tests +- :issue:`SERVER-86782` geo_axis_aligned.js takes too long +- :issue:`SERVER-86798` blacklist validate_db_metadata_command.js from + tenant migrations suite +- :issue:`SERVER-86965` [v7.0] Enable query stats for $search in 7.0 +- :issue:`SERVER-87058` Chunk refresh from a secondary does not wait for + majority writeConcern while flushing +- :issue:`SERVER-87081` query stats for sharded search on v7.0 +- :issue:`SERVER-87166` [v7.0] Fix collation_bucket.js for query_stats + on 7.0 +- :issue:`SERVER-87323` Future continuations must capture vector clock + as shared pointer +- :issue:`SERVER-87610` Relax shardVersionRetry tripwires on the + namespace of received stale exceptions +- :issue:`SERVER-87616` Create minimal trySbeEngine build variant on + release configurations which have SBE +- :issue:`SERVER-87666` Query shape for $documents is unique on each + execution +- :issue:`SERVER-87982` Rename the THP_enabled field in the ftdc + systemMetrics status section +- :issue:`SERVER-88111` random_DDL_CRUD_operations.js bulk insert should + perform max internalInsertMaxBatchSize inserts +- :issue:`SERVER-88360` Remove "Sharding catalog and local catalog + collection uuid do not match" tripwire assertion + diff --git a/source/includes/changelogs/releases/7.0.9.rst b/source/includes/changelogs/releases/7.0.9.rst new file mode 100644 index 00000000000..7cd7c57b4ef --- /dev/null +++ b/source/includes/changelogs/releases/7.0.9.rst @@ -0,0 +1,122 @@ +.. 
_7.0.9-changelog: + +7.0.9 Changelog +--------------- + +Write Operations +~~~~~~~~~~~~~~~~ + +:issue:`SERVER-88200` Time-series writes on manually-created buckets may +misbehave + +Storage +~~~~~~~ + +:issue:`SERVER-30832` Fix dbCheck behavior on rollback + +WiredTiger +`````````` + +- :issue:`WT-10807` Skip in-memory deleted pages as part of the tree + walk +- :issue:`WT-11911` Fix use-after-free with bounded cursor and + search_near + +Internals +~~~~~~~~~ + +- :issue:`SERVER-79637` Incorrect query results in $lookup with TS + foreign collection using a correlated predicate +- :issue:`SERVER-80340` Handle and test dbCheck during initial sync +- :issue:`SERVER-82349` Mongo 7 crashes on applyOps index delete/drops + without a collection UUID +- :issue:`SERVER-82571` insert_with_data_size_aware_balancing.js may + occasionally fail when run against slow machine/variants +- :issue:`SERVER-82717` QueryPlannerIXSelect::stripInvalidAssignments + tries to strip non-existent index assignment from + $_internalSchemaAllElemMatchFromIndex +- :issue:`SERVER-83193` Replace deprecated BatchedCommandRequest + getters/setters for WC with the ones provided by OperationContext +- :issue:`SERVER-83984` WiredTiger verbosity level is suppressed +- :issue:`SERVER-84653` Make the auto_safe_reconfig_helper tests wait + for newly added removal +- :issue:`SERVER-84717` [SBE] Fix buildGroup() to tolerate multiple + output fields with the same name +- :issue:`SERVER-85681` Fix for negative value being passed to + BasicBufBuilder::grow() +- :issue:`SERVER-85694` $searchMeta aggregation pipeline stage not + passing correct query to mongot after PlanShardedSearch +- :issue:`SERVER-85969` Documentation Updates +- :issue:`SERVER-86201` Cluster upserts performed through the + ShardServerProcessInterface should use the operation context to + configure their write concern +- :issue:`SERVER-86327` Time-series single schema per bucket column is + not maintained in some cases +- :issue:`SERVER-86375` Make index_build_memory_tracking.js less strict +- :issue:`SERVER-86380` Allow for multiple IdP configurations with the + same issuer but unique issuer-audience pair +- :issue:`SERVER-86407` validation does not produce complete results + when it should +- :issue:`SERVER-86419` SBE and Classic behave differently for + $bitsAnyClear on NumberDecimal infinity +- :issue:`SERVER-86478` Time-series bucket min/max does not track empty + field names under certain circumstances +- :issue:`SERVER-86529` Re-enable powercycle tests in Evergreen +- :issue:`SERVER-86640` Refactor out JWKS refresh from IdentityProvider + into a IDPJWKSRefresher +- :issue:`SERVER-86642` Update IDP registration selection process +- :issue:`SERVER-86987` Ensure check_metadata_consistency.js use + retriable writes when contacting config server +- :issue:`SERVER-87306` Prevent accessing OCSP manager ptr during + shutdown +- :issue:`SERVER-87521` Fix race in BackgroundSync between making + RollbackImpl and shutdown +- :issue:`SERVER-87573` Allow token_endpoint to be optional in OpenID + Discovery Document +- :issue:`SERVER-87845` Fix watchdog unit test PauseAndResume timeout + issue +- :issue:`SERVER-87905` BSONColumn validation integer overflow +- :issue:`SERVER-87930` Unittest CaptureLogs utility allows + unsynchronized access to log statements +- :issue:`SERVER-87987` Timeseries optimization does not exclude the + timeField though it's renamed by the $addFields and excluded by a + project +- :issue:`SERVER-88034` Fix powercycle task configurations +- 
:issue:`SERVER-88097` Add the --release flag to the sys-perf compiles +- :issue:`SERVER-88136` Fix arbiter_always_has_latest_fcv.js test to + correctly test arbiter FCV behavior +- :issue:`SERVER-88173` BinData bit comparisons give wrong results in + many cases +- :issue:`SERVER-88202` Fix possible integer overflow in BSON validation +- :issue:`SERVER-88262` Prevent timeouts in + read_pref_with_hedging_mode.js +- :issue:`SERVER-88296` $group constant expression fails to re-parse +- :issue:`SERVER-88328` Namespace may become unavailable while sharding + collection during downgrade from v7.2 to v7.0 +- :issue:`SERVER-88404` checkMetadataConsistency should refresh if it + finds no cached info for database +- :issue:`SERVER-88605` sys-perf configuration: update release version + in commented out build variants +- :issue:`SERVER-88650` Deadlock in VectorClockMongoD during shutdown +- :issue:`SERVER-88676` Backport build_patch_id functionality to 7.0 +- :issue:`SERVER-88755` Make sure all sys-perf build variants specify a + mongodb_setup_release +- :issue:`SERVER-88779` FLE2 retryable write breaks if an internal + transaction is retried +- :issue:`SERVER-88864` Make + nodes_eventually_sync_from_closer_data_center.js more robust to + transient slow heartbeat issues +- :issue:`SERVER-88942` Update db-contrib-tool version that includes fix + for downloading old binaries +- :issue:`SERVER-89026` Remove bench_test_with_tenants.js on v7.0 +- :issue:`SERVER-89067` Invalidate all user requests matching a user + name +- :issue:`SERVER-89068` Explicitly set exec_timeout and timeout_secs for + the sys-perf project +- :issue:`SERVER-89235` internal_strip_invalid_assignment.js missing tag +- :issue:`SERVER-89251` Revert concurrent movePrimary and aggregations + test from v7.0 and v6.0 +- :issue:`WT-11532` Fix session reset RNG by using cursor RNG +- :issue:`WT-12225` Fix RNG generator weakness around mongodb $sample + stage + diff --git a/source/includes/changelogs/releases/7.2.1.rst b/source/includes/changelogs/releases/7.2.1.rst new file mode 100644 index 00000000000..6d846bfb97a --- /dev/null +++ b/source/includes/changelogs/releases/7.2.1.rst @@ -0,0 +1,187 @@ +.. 
_7.2.1-changelog: + +7.2.1 Changelog +--------------- + +Sharding +~~~~~~~~ + +- :issue:`SERVER-77667` Prevent mongos from starting new transactions at + shutdown +- :issue:`SERVER-81508` Potential double-execution of write statements + when ShardCannotRefreshDueToLocksHeld is thrown +- :issue:`SERVER-83775` Do not balance data between shards owning more + than the ideal data size +- :issue:`SERVER-84738` Fix Data Race in ReshardingCollectionCloner + +Query +~~~~~ + +- :issue:`SERVER-83470` Introduce internalQueryFrameworkControl setting + for 6.0-style engine selection logic +- :issue:`SERVER-84595` Delete invalid test + jstests/noPassthrough/out_majority_read_replset.js + +Aggregation +~~~~~~~~~~~ + +:issue:`SERVER-82929` $listSearchIndexes requires find privilege action +rather than listSearchIndexes privilege action as it intended + +Storage +~~~~~~~ + +:issue:`WT-11062` Safe free the ref addr to allow concurrent access + +WiredTiger +`````````` + +- :issue:`WT-11845` Fix transaction visibility issue with truncate +- :issue:`WT-11911` Fix use-after-free with bounded cursor and + search_near +- :issue:`WT-12036` Workaround for lock contention on Windows + +Internals +~~~~~~~~~ + +- :issue:`SERVER-72703` Downgrade $out's db lock to MODE_IX +- :issue:`SERVER-79486` Increase the cardinality of the new shard key +- :issue:`SERVER-80363` server default writeConcern is not honored when + wtimeout is set +- :issue:`SERVER-81313` change streams fail to re-parse their own + representative query shape serialization for ResumeToken +- :issue:`SERVER-81496` Weird shapification behavior for + $convert/$toString +- :issue:`SERVER-81517` blacklist validate_db_metadata_command.js from + migrations suite +- :issue:`SERVER-81994` $densify range doesn't re-parse correctly +- :issue:`SERVER-82197` Incorrect query results in SBE if $group spills + in presence of collation +- :issue:`SERVER-82221` listCollections and listIndexes should include + commit-pending namespaces +- :issue:`SERVER-82313` Fix cancelling txn api from the caller +- :issue:`SERVER-82353` Multi-document transactions can miss documents + when movePrimary runs concurrently +- :issue:`SERVER-82365` Optimize the construction of the balancer's + collection distribution status histogram (2nd attempt) +- :issue:`SERVER-82437` db.collection.getSearchIndexes() + returns duplicate index +- :issue:`SERVER-82676` gRPC unit tests reuse port, causing conflicts + with concurrently running tests +- :issue:`SERVER-82706` check_metadata_consistency.js should use + retriable writes when contacting config server +- :issue:`SERVER-82791` createView fails with StaleConfig if a sharded + collection already exists with the same namespace +- :issue:`SERVER-82815` Expose server’s index key creation via + aggregation +- :issue:`SERVER-82822` Remove Bad Invariant in RetryUntilMajorityCommit +- :issue:`SERVER-82967` Stepdown after calling + ActiveIndexBuilds::registerIndexBuild() during index build setup + doesn't unregister itself +- :issue:`SERVER-83003` $listSearchIndexes should throw on non-existent + DB +- :issue:`SERVER-83119` Secondary replica crashes on clustered + collection if notablescan is enabled +- :issue:`SERVER-83337` Re-enable wt_size_storer_cleanup_replica_set.js + on macOS +- :issue:`SERVER-83369` Index creation does not enforce type of + bucketSize field +- :issue:`SERVER-83454` Range Deleter Service registration and + de-registration should not rely on onCommit ordering guarantees +- :issue:`SERVER-83492` Remove limit and skip values from SBE 
plan cache + key if possible +- :issue:`SERVER-83534` Allow IDL generator to accomodate query_shape + :custom +- :issue:`SERVER-83580` Re-introduce balancer policy unittests with + multiple chunks +- :issue:`SERVER-83685` Make internalQueryFrameworkControl + "trySbeRestricted" the default query knob +- :issue:`SERVER-83765` SessionWorkflow benchmark doesn't start up + ServiceExecutors +- :issue:`SERVER-83766` SessionWorkflow benchmark's mocked sessions + cannot access their transport layer +- :issue:`SERVER-83777` Cap $in length in plan cache key with + internalQueryMaxScansToExplode + 1 +- :issue:`SERVER-83825` increase log verbosity for write conflict + retries in index_build_operation_metrics.js: +- :issue:`SERVER-83830` On Enterprise build creating a collection in a + replica set with the storageEngine.inMemory option breaks secondaries +- :issue:`SERVER-83866` Update BACKPORTS_REQUIRED_BASE_URL from + mongodb/mongo to 10gen/mongo +- :issue:`SERVER-83874` Move primary operation doesn't drop + db.system.views on the donor +- :issue:`SERVER-83959` When preparing SBE plan, correctly pass + preparingFromCache argument +- :issue:`SERVER-84013` Incorrect results for index scan plan on query + with duplicate predicates in nested $or +- :issue:`SERVER-84063` Remove BlackDuck from Security Daily Cron +- :issue:`SERVER-84087` Make sure ExecutorPool gets terminated after + migrations have completed +- :issue:`SERVER-84130` Incorrect bucket-level filter optimization when + some events in the bucket are missing the field +- :issue:`SERVER-84137` Robustify + batched_multi_deletes_with_write_conflicts.js +- :issue:`SERVER-84186` Add benchmark that runs math operations in + Timeseries to sys perf +- :issue:`SERVER-84241` AsioTransportLayer::stopAcceptingSessions can + deadlock if called before listener thread started listening +- :issue:`SERVER-84274` Make InListData sort and dedup its elements + up-front +- :issue:`SERVER-84278` Don't generate plan cache entries for EOF plans +- :issue:`SERVER-84336` Timeseries inserts can leave dangling BSONObj in + WriteBatches in certain cases +- :issue:`SERVER-84338` Top level $or queries may lead to invalid SBE + plan cache entry which returns wrong results +- :issue:`SERVER-84353` The test for stepDown deadlock with read ticket + exhaustion is flaky +- :issue:`SERVER-84369` Ineligible query reuses plan cache entry for a + COUNT_SCAN (SBE only) +- :issue:`SERVER-84436` Handle skip + limit sum overflowing int64_t in + SBE +- :issue:`SERVER-84468` Fix deadlock when running + runTransactionOnShardingCatalog() +- :issue:`SERVER-84494` [v7.2] Remove $search tests in SBE since it is + disabled in 7.2 +- :issue:`SERVER-84502` Remove test_packages_release task from v7.3 + branch +- :issue:`SERVER-84546` switch asan statically linked test to dynamic + link +- :issue:`SERVER-84548` Using ShardServerCatalogCacheLoader on configsvr + causes excessive WT data handles / memory usage +- :issue:`SERVER-84567` writeQueryStats should log an error rather than + uassert when the feature flag is disabled +- :issue:`SERVER-84731` Resharding aggregation query should not acquire + RSTL-IX when waiting lastStableRecoveryTimestamp +- :issue:`SERVER-85263` Report escaped client application name +- :issue:`SERVER-85306` Update sys-perf config to use HTTPs github links + rather than SSH +- :issue:`SERVER-85652` Update DSI atlas azure tasks to use an AL2 + compile artifact. 
+- :issue:`SERVER-85694` $searchMeta aggregation pipeline stage not + passing correct query to mongot after PlanShardedSearch +- :issue:`SERVER-85776` Disable test facet_stats in replicated settings. +- :issue:`SERVER-85792` Backport new variants added to perf.yml over to + sys-perf-7.2 +- :issue:`SERVER-85836` TenantFileImporter service should skip the + feature document while iterating through the donor mdb_catlog table. +- :issue:`SERVER-85959` Remove streams benchmarks from v7.2 +- :issue:`SERVER-86081` Sys-perf missing required parameters due to + Evergreen Redaction +- :issue:`SERVER-86165` Avoid stepdowns in merge_command_options.js +- :issue:`SERVER-86363` Make container registry login silent +- :issue:`SERVER-86381` Delete copybara staging file on v7.2 +- :issue:`SERVER-86481` Jepsen set, register, and read concern majority + tests are not running in Evergreen +- :issue:`WT-11669` Create new log record for backup ids +- :issue:`WT-11987` Table's version number dropped to + version=(major=1,minor=0) +- :issue:`WT-12043` Remove obsolete HAVE_DIAGNOSTIC ifdefs to avoid + memory leak +- :issue:`WT-12092` Update the WiredTiger version in dockerfile +- :issue:`WT-12099` race between mmap threads resizing and + reading/writing +- :issue:`WT-12100` Fix csuite-long-running timeouts under MSan +- :issue:`WT-12110` Disable timestamp_abort backup tests in the + compatibility mode +- :issue:`WT-12147` Temporarily disable clang-analyzer + diff --git a/source/includes/changelogs/releases/7.2.2.rst b/source/includes/changelogs/releases/7.2.2.rst new file mode 100644 index 00000000000..be2cb369163 --- /dev/null +++ b/source/includes/changelogs/releases/7.2.2.rst @@ -0,0 +1,16 @@ +.. _7.2.2-changelog: + +7.2.2 Changelog +--------------- + +Internals +~~~~~~~~~ + +- :issue:`SERVER-83483` Azure E2E Machine Flow Tests Getting Incorrect + Credentials from EVG +- :issue:`SERVER-84723` Sharded multi-document transactions can observe + partial effects of concurrent DDL operations +- :issue:`SERVER-86873` Exclude transitionFromDedicatedConfigServer from + running in mixed version + jstests/sharding/database_versioning_all_commands.js on 7.2 + diff --git a/source/includes/changelogs/releases/7.3.1.rst b/source/includes/changelogs/releases/7.3.1.rst new file mode 100644 index 00000000000..cf7246e2a61 --- /dev/null +++ b/source/includes/changelogs/releases/7.3.1.rst @@ -0,0 +1,24 @@ +.. _7.3.1-changelog: + +7.3.1 Changelog +--------------- + +Sharding +~~~~~~~~ + +:issue:`SERVER-87191` Update without shard key might miss documents + +Write Operations +~~~~~~~~~~~~~~~~ + +:issue:`SERVER-88200` Time-series writes on manually-created buckets may +misbehave + +Internals +~~~~~~~~~ + +- :issue:`SERVER-86120` Return write error on failure to commit txn for + retryable update that modifies doc shard key +- :issue:`SERVER-88360` Remove "Sharding catalog and local catalog + collection uuid do not match" tripwire assertion + diff --git a/source/includes/checkpoints.rst b/source/includes/checkpoints.rst new file mode 100644 index 00000000000..3ac5f3a1cbd --- /dev/null +++ b/source/includes/checkpoints.rst @@ -0,0 +1,3 @@ +To provide :term:`durable` data, :ref:`WiredTiger ` +uses :ref:`checkpoints `. For more +details, see :ref:`journaling-wiredTiger`. 
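The new ``checkpoints.rst`` include above states that WiredTiger relies on checkpoints for durability. As a minimal, non-authoritative sketch of how a reader might observe checkpoint activity, the following mongosh snippet reads the WiredTiger section of ``serverStatus()``; it assumes the deployment uses the WiredTiger storage engine, and the exact statistic names vary by server version.

.. code-block:: javascript

   // Minimal sketch: inspect checkpoint-related counters reported by WiredTiger.
   // Assumes the WiredTiger storage engine; statistic names vary by version.
   const wt = db.serverStatus().wiredTiger;
   printjson( wt.transaction );   // includes counters such as "transaction checkpoints"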
diff --git a/source/includes/client-sessions-reuse.rst b/source/includes/client-sessions-reuse.rst new file mode 100644 index 00000000000..0482eaf3684 --- /dev/null +++ b/source/includes/client-sessions-reuse.rst @@ -0,0 +1,3 @@ +A session can only be used with the ``MongoClient`` object that created +the session. A single session cannot be used concurrently. Operations +that use a single session must be run sequentially. diff --git a/source/includes/clustered-collections-introduction.rst b/source/includes/clustered-collections-introduction.rst index 8ec853dcc77..51d3e6024b3 100644 --- a/source/includes/clustered-collections-introduction.rst +++ b/source/includes/clustered-collections-introduction.rst @@ -1,3 +1,11 @@ -Starting in MongoDB 5.3, you can create a collection with a -:ref:`clustered index `. Collections -created with a clustered index are called clustered collections. +Clustered collections store indexed documents in the same +:ref:`WiredTiger ` file as the index specification. +Storing the collection's documents and index in the same file provides +benefits for storage and performance compared to regular indexes. + +Clustered collections are created with a :ref:`clustered index +`. The clustered index specifies the +order in which documents are stored. + +To create a clustered collection, see +:ref:`clustered-collections-examples`. diff --git a/source/includes/clustered-index-fields.rst b/source/includes/clustered-index-fields.rst index c858140f83c..b24c4a2fbb4 100644 --- a/source/includes/clustered-index-fields.rst +++ b/source/includes/clustered-index-fields.rst @@ -1,14 +1,15 @@ -.. include:: /includes/clustered-collections-introduction.rst +Starting in MongoDB 5.3, you can create a collection with a **clustered +index**. Clustered indexes are stored in the same :ref:`WiredTiger +` file as the collection. The resulting collection +is called a :ref:`clustered collection `. -See :ref:`clustered-collections`. - -``clusteredIndex`` has the following syntax: +The ``clusteredIndex`` field has the following syntax: .. code-block:: javascript :copyable: false clusteredIndex: { - key: { }, + key: , unique: , name: } diff --git a/source/includes/comment-option-getMore-inheritance.rst b/source/includes/comment-option-getMore-inheritance.rst new file mode 100644 index 00000000000..abff1172129 --- /dev/null +++ b/source/includes/comment-option-getMore-inheritance.rst @@ -0,0 +1,5 @@ +.. note:: + + Any comment set on a |comment-include-command| command is inherited + by any subsequent :dbcommand:`getMore` commands run on the + |comment-include-command| cursor. diff --git a/source/includes/connection-pool/max-connecting-use-case.rst b/source/includes/connection-pool/max-connecting-use-case.rst new file mode 100644 index 00000000000..cd87cd070c2 --- /dev/null +++ b/source/includes/connection-pool/max-connecting-use-case.rst @@ -0,0 +1,6 @@ +Raising the value of ``maxConnecting`` allows the client to establish +connections to the server faster, but increases the chance of +:term:`connection storms `. If the value of +``maxConnecting`` is too low, your connection pool may experience heavy +throttling and increased tail latency for clients checking out +connections. 
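To make the ``clusteredIndex`` syntax shown in ``clustered-index-fields.rst`` above concrete, here is a minimal mongosh sketch that creates a clustered collection. The collection name and index name are illustrative values; the clustered index key must be ``{ _id: 1 }`` and ``unique`` must be ``true``.

.. code-block:: javascript

   // Minimal sketch: create a clustered collection (MongoDB 5.3+).
   // "stocks" and the index name are illustrative values.
   db.createCollection( "stocks", {
      clusteredIndex: {
         key: { _id: 1 },               // clustered index key must be { _id: 1 }
         unique: true,                  // must be true
         name: "stocks clustered key"   // optional index name
      }
   } )

Because the documents are stored in clustered-index order, range queries on ``_id`` can be served without a separate index lookup.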
diff --git a/source/includes/considerations-deploying-replica-set.rst b/source/includes/considerations-deploying-replica-set.rst index de79d768d89..dea69794f91 100644 --- a/source/includes/considerations-deploying-replica-set.rst +++ b/source/includes/considerations-deploying-replica-set.rst @@ -5,6 +5,8 @@ In production, deploy each member of the replica set to its own machine. If possible, ensure that MongoDB listens on the default port of ``27017``. +.. include:: /includes/replication/note-replica-set-major-versions.rst + For more information, see :doc:`/core/replica-set-architectures`. Hostnames diff --git a/source/includes/cqa-currentOp.rst b/source/includes/cqa-currentOp.rst index 28c79844d7a..5bd00c72ec4 100644 --- a/source/includes/cqa-currentOp.rst +++ b/source/includes/cqa-currentOp.rst @@ -1,6 +1,5 @@ Query Sampling Progress ~~~~~~~~~~~~~~~~~~~~~~~ -When query sampling is enabled, you can check the progress of query -sampling using the ``$currentOp`` aggregation stage. - +To monitor the query sampling process, use the :pipeline:`$currentOp` +stage. For an example, see :ref:`sampled-queries-currentOp-stage`. diff --git a/source/includes/cqa-limitations.rst b/source/includes/cqa-limitations.rst index 2252b7d99a2..21f5816a751 100644 --- a/source/includes/cqa-limitations.rst +++ b/source/includes/cqa-limitations.rst @@ -1,5 +1,5 @@ - You cannot run |CQA| on Atlas - :atlas:`multi-tenant ` + :atlas:`multi-tenant ` configurations. - You cannot run |CQA| on standalone deployments. diff --git a/source/includes/cqa-queryAnalysisSampleExpirationSecs.rst b/source/includes/cqa-queryAnalysisSampleExpirationSecs.rst index 691070a0c73..f3fe72babef 100644 --- a/source/includes/cqa-queryAnalysisSampleExpirationSecs.rst +++ b/source/includes/cqa-queryAnalysisSampleExpirationSecs.rst @@ -2,8 +2,7 @@ queryAnalysisSampleExpirationSecs ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Sampled queries are stored in an internal collection that has a TTL -index with ``expireAfterSeconds``. Configure ``expireAfterSeconds`` -with the ``queryAnalysisSampleExpirationSecs`` server parameter. -with the :parameter:`queryAnalysisSampleExpirationSecs`. +index with ``expireAfterSeconds``. To configure ``expireAfterSeconds``, +use the :parameter:`queryAnalysisSampleExpirationSecs` server parameter. Sampled queries are automatically deleted after ``queryAnalysisSampleExpirationSecs``. diff --git a/source/includes/create-an-encrypted-db-conn.rst b/source/includes/create-an-encrypted-db-conn.rst index 0e582d09233..f659052e7c7 100644 --- a/source/includes/create-an-encrypted-db-conn.rst +++ b/source/includes/create-an-encrypted-db-conn.rst @@ -9,10 +9,10 @@ initiated with client-side field level encryption enabled, either: following Key Management Service (KMS) providers for Customer Master Key (CMK) management: - - :ref:`Amazon Web Services KMS ` - - :ref:`Azure Key Vault ` - - :ref:`Google Cloud Platform KMS ` - - :ref:`Locally Managed Key ` + - :ref:`Amazon Web Services KMS ` + - :ref:`Azure Key Vault ` + - :ref:`Google Cloud Platform KMS ` + - :ref:`Locally Managed Key ` *or* @@ -20,4 +20,4 @@ initiated with client-side field level encryption enabled, either: ` to establish a connection with the required options. The command line options only support the :ref:`Amazon Web Services KMS - ` provider for CMK management. + ` provider for CMK management. 
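As a worked illustration of the two query-analyzer includes edited above (``cqa-currentOp.rst`` and ``cqa-queryAnalysisSampleExpirationSecs.rst``), the mongosh sketch below sets the sampled-query retention window and then checks sampling progress. The 604800-second value is arbitrary, the sketch assumes the parameter can be set at runtime, and the ``desc: "query analyzer"`` filter is an assumption about how sampling operations are labeled in ``$currentOp`` output.

.. code-block:: javascript

   // Illustrative only: retain sampled queries for 7 days (604800 seconds).
   db.adminCommand( { setParameter: 1, queryAnalysisSampleExpirationSecs: 604800 } )

   // Monitor query sampling progress with the $currentOp aggregation stage.
   // The "query analyzer" description filter is an assumption.
   db.getSiblingDB( "admin" ).aggregate( [
      { $currentOp: { allUsers: true, localOps: true } },
      { $match: { desc: "query analyzer" } }
   ] )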
diff --git a/source/includes/csfle-warning-local-keys.rst b/source/includes/csfle-warning-local-keys.rst deleted file mode 100644 index d04c871cacd..00000000000 --- a/source/includes/csfle-warning-local-keys.rst +++ /dev/null @@ -1,10 +0,0 @@ -.. warning:: Do Not Use the Local Key Provider in Production - - The Local Key Provider is an insecure method of storage and is - **not recommended** for production. Instead, - you should store your {+cmk-long+}s in a remote - :wikipedia:`{+kms-long+} ` - (KMS). - - To learn how to use a remote KMS in your {+csfle-abbrev+} implementation, - see the :ref:`` guide. diff --git a/source/includes/currentOp-output-example.rst b/source/includes/currentOp-output-example.rst index d3d9f53e890..128b3dc23ce 100644 --- a/source/includes/currentOp-output-example.rst +++ b/source/includes/currentOp-output-example.rst @@ -63,8 +63,8 @@ }, "killPending" : , "numYields" : , - "dataThroughputLastSecond" : , // Starting in MongoDB 4.4 for validate operations - "dataThroughputAverage" : , // Starting in MongoDB 4.4 for validate operations + "dataThroughputLastSecond" : , + "dataThroughputAverage" : , "locks" : { "ParallelBatchWriterMode" : , "ReplicationStateTransition" : , @@ -199,8 +199,8 @@ }, "killPending" : , "numYields" : , - "dataThroughputLastSecond" : , // Starting in MongoDB 4.4 for validate operations - "dataThroughputAverage" : , // Starting in MongoDB 4.4 for validate operations + "dataThroughputLastSecond" : , + "dataThroughputAverage" : , "locks" : { "ParallelBatchWriterMode" : , "ReplicationStateTransition" : , @@ -362,8 +362,8 @@ }, "killPending" : , "numYields" : , - "dataThroughputLastSecond" : , // Starting in MongoDB 4.4 for validate operations - "dataThroughputAverage" : , // Starting in MongoDB 4.4 for validate operations + "dataThroughputLastSecond" : , + "dataThroughputAverage" : , "locks" : { "ParallelBatchWriterMode" : , "ReplicationStateTransition" : , diff --git a/source/includes/diagnostic-backtrace-generation.rst b/source/includes/diagnostic-backtrace-generation.rst index 7934dfe6787..b555db97959 100644 --- a/source/includes/diagnostic-backtrace-generation.rst +++ b/source/includes/diagnostic-backtrace-generation.rst @@ -1,4 +1,4 @@ -Starting in MongoDB 4.4 running on Linux: +For MongoDB instances running on Linux: - When the :binary:`~bin.mongod` and :binary:`~bin.mongos` processes receive a ``SIGUSR2`` signal, backtrace details are added to the logs @@ -10,4 +10,4 @@ Starting in MongoDB 4.4 running on Linux: The backtrace functionality is available for these architectures: - ``x86_64`` -- ``arm64`` (starting in MongoDB 4.4.15, 5.0.10, and 6.0) +- ``arm64`` (starting in MongoDB 5.0.10, and 6.0) diff --git a/source/includes/driver-examples/driver-example-c-cleanup.rst b/source/includes/driver-examples/driver-example-c-cleanup.rst new file mode 100644 index 00000000000..e598ec3b193 --- /dev/null +++ b/source/includes/driver-examples/driver-example-c-cleanup.rst @@ -0,0 +1,10 @@ +Be sure to also clean up any open resources by calling the +following methods, as appropriate: + +- `bson_destroy `__ + +- `mongoc_bulk_operation_destroy `__ + +- `mongoc_collection_destroy `__ + +- `mongoc_cursor_destroy `__, diff --git a/source/includes/driver-examples/driver-example-delete-55.rst b/source/includes/driver-examples/driver-example-delete-55.rst index 4d4de37fdfe..9978185a0e6 100644 --- a/source/includes/driver-examples/driver-example-delete-55.rst +++ b/source/includes/driver-examples/driver-example-delete-55.rst @@ -35,6 +35,16 @@ Compass, see the 
:ref:`Compass documentation `. + + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 55 + :end-before: End Example 55 + - id: python content: | @@ -71,6 +81,15 @@ :start-after: Start Example 55 :end-before: End Example 55 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 55 + :end-before: End Example 55 + - id: nodejs content: | diff --git a/source/includes/driver-examples/driver-example-delete-56.rst b/source/includes/driver-examples/driver-example-delete-56.rst index d1091f2872d..45559bfcf4c 100644 --- a/source/includes/driver-examples/driver-example-delete-56.rst +++ b/source/includes/driver-examples/driver-example-delete-56.rst @@ -8,6 +8,15 @@ db.inventory.deleteMany({}) + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 56 + :end-before: End Example 56 + - id: python content: | @@ -44,6 +53,15 @@ :start-after: Start Example 56 :end-before: End Example 56 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 56 + :end-before: End Example 56 + - id: nodejs content: | diff --git a/source/includes/driver-examples/driver-example-delete-57.rst b/source/includes/driver-examples/driver-example-delete-57.rst index b4046ed438c..0f88679eb8a 100644 --- a/source/includes/driver-examples/driver-example-delete-57.rst +++ b/source/includes/driver-examples/driver-example-delete-57.rst @@ -8,6 +8,16 @@ db.inventory.deleteMany({ status : "A" }) + + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 57 + :end-before: End Example 57 + - id: python content: | @@ -44,6 +54,15 @@ :start-after: Start Example 57 :end-before: End Example 57 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 57 + :end-before: End Example 57 + - id: nodejs content: | diff --git a/source/includes/driver-examples/driver-example-delete-58.rst b/source/includes/driver-examples/driver-example-delete-58.rst index 04f5f93b9b2..5ce2466c775 100644 --- a/source/includes/driver-examples/driver-example-delete-58.rst +++ b/source/includes/driver-examples/driver-example-delete-58.rst @@ -46,6 +46,17 @@ #. Click :guilabel:`Delete` to confirm. Compass deletes the document from the collection. + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 58 + :end-before: End Example 58 + + .. include:: /includes/driver-examples/driver-example-c-cleanup.rst + - id: python content: | diff --git a/source/includes/driver-examples/driver-example-delete-result.rst b/source/includes/driver-examples/driver-example-delete-result.rst index a5c355d8a5e..64a2aded777 100644 --- a/source/includes/driver-examples/driver-example-delete-result.rst +++ b/source/includes/driver-examples/driver-example-delete-result.rst @@ -7,6 +7,13 @@ more information and examples, see :method:`~db.collection.deleteMany()`. 
+ - id: c + content: | + The `mongoc_collection_delete_many `__ + method returns ``true`` if successful, or returns ``false`` and sets + an error if there are invalid arguments or a server or network error + occurs. + - id: python content: | The :py:meth:`~pymongo.collection.Collection.delete_many` @@ -37,6 +44,14 @@ object of type com.mongodb.client.result.DeleteResult_ if successful. Returns an instance of ``com.mongodb.MongoException`` if unsuccessful. + - id: kotlin-coroutine + content: | + The `MongoCollection.deleteMany() <{+java-api-docs+}/mongodb-driver-kotlin-coroutine/mongodb-driver-kotlin-coroutine/com.mongodb.kotlin.client.coroutine/-mongo-collection/delete-many.html>`__ + method returns an instance of + `com.mongodb.client.result.DeleteResult <{+java-api-docs+}/mongodb-driver-core/com/mongodb/client/result/DeleteResult.html>`__ + that describes the status of the + operation and count of deleted documents. + - id: nodejs content: | :node-api:`deleteMany() ` returns a @@ -47,10 +62,10 @@ - id: php content: | Upon successful execution, the - :phpmethod:`deleteMany() ` + :phpmethod:`deleteMany() ` method returns an instance of :phpclass:`MongoDB\\DeleteResult ` - whose :phpmethod:`getDeletedCount()` + whose :phpmethod:`getDeletedCount()` method returns the number of documents that matched the filter. - id: perl diff --git a/source/includes/driver-examples/driver-example-indexes-1.rst b/source/includes/driver-examples/driver-example-indexes-1.rst index 82c51f63d98..dde9c0580a4 100644 --- a/source/includes/driver-examples/driver-example-indexes-1.rst +++ b/source/includes/driver-examples/driver-example-indexes-1.rst @@ -51,6 +51,18 @@ collection.createIndex(Indexes.descending("name"), someCallbackFunction()); + - id: kotlin-coroutine + content: | + + This example creates a single key descending index on the + ``name`` field: + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Index Example 1 + :end-before: End Index Example 1 + - id: nodejs content: | diff --git a/source/includes/driver-examples/driver-example-insert-1.rst b/source/includes/driver-examples/driver-example-insert-1.rst index 370c2f859e6..ab829beb8dc 100644 --- a/source/includes/driver-examples/driver-example-insert-1.rst +++ b/source/includes/driver-examples/driver-example-insert-1.rst @@ -15,6 +15,15 @@ .. figure:: /images/compass-insert-document-inventory.png :alt: Compass insert new document into collection + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 1 + :end-before: End Example 1 + - id: python content: | @@ -51,6 +60,15 @@ :start-after: Start Example 1 :end-before: End Example 1 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: 8 + :start-after: Start Example 1 + :end-before: End Example 1 + - id: nodejs content: | diff --git a/source/includes/driver-examples/driver-example-insert-2.rst b/source/includes/driver-examples/driver-example-insert-2.rst index a69a5ddb0dc..a29f2811d0b 100644 --- a/source/includes/driver-examples/driver-example-insert-2.rst +++ b/source/includes/driver-examples/driver-example-insert-2.rst @@ -24,6 +24,15 @@ :compass:`Query Bar ` documentation. + - id: c + content: | + + .. 
literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 2 + :end-before: End Example 2 + - id: python content: | @@ -60,6 +69,15 @@ :start-after: Start Example 2 :end-before: End Example 2 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 2 + :end-before: End Example 2 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_insert.js diff --git a/source/includes/driver-examples/driver-example-insert-3.rst b/source/includes/driver-examples/driver-example-insert-3.rst index e655bb44f47..0a080c35503 100644 --- a/source/includes/driver-examples/driver-example-insert-3.rst +++ b/source/includes/driver-examples/driver-example-insert-3.rst @@ -33,6 +33,16 @@ For instructions on inserting documents using |compass|, see :ref:`Insert Documents `. + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 3 + :end-before: End Example 3 + + - id: python content: | @@ -69,6 +79,15 @@ :start-after: Start Example 3 :end-before: End Example 3 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: 8 + :start-after: Start Example 3 + :end-before: End Example 3 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_insert.js diff --git a/source/includes/driver-examples/driver-example-query-10.rst b/source/includes/driver-examples/driver-example-query-10.rst index 1cb7601a903..5a0af6df5f5 100644 --- a/source/includes/driver-examples/driver-example-query-10.rst +++ b/source/includes/driver-examples/driver-example-query-10.rst @@ -21,6 +21,16 @@ .. figure:: /images/compass-find-filter-query-op.png :alt: Query using query operators + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 10 + :end-before: End Example 10 + + - id: python content: | @@ -57,6 +67,15 @@ :start-after: Start Example 10 :end-before: End Example 10 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 10 + :end-before: End Example 10 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_query.js diff --git a/source/includes/driver-examples/driver-example-query-11.rst b/source/includes/driver-examples/driver-example-query-11.rst index b020bf5f7e8..0fc0a23e729 100644 --- a/source/includes/driver-examples/driver-example-query-11.rst +++ b/source/includes/driver-examples/driver-example-query-11.rst @@ -21,6 +21,15 @@ .. figure:: /images/compass-find-filter-and.png :alt: Query using multiple conditions with AND + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 11 + :end-before: End Example 11 + - id: python content: | @@ -57,6 +66,15 @@ :start-after: Start Example 11 :end-before: End Example 11 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 11 + :end-before: End Example 11 + - id: nodejs content: | .. 
literalinclude:: /driver-examples/node_query.js diff --git a/source/includes/driver-examples/driver-example-query-12.rst b/source/includes/driver-examples/driver-example-query-12.rst index d5e0cc55f56..b9cd282386c 100644 --- a/source/includes/driver-examples/driver-example-query-12.rst +++ b/source/includes/driver-examples/driver-example-query-12.rst @@ -21,6 +21,15 @@ .. figure:: /images/compass-find-filter-or.png :alt: Query using OR + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 12 + :end-before: End Example 12 + - id: python content: | @@ -57,6 +66,15 @@ :start-after: Start Example 12 :end-before: End Example 12 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 12 + :end-before: End Example 12 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_query.js diff --git a/source/includes/driver-examples/driver-example-query-13.rst b/source/includes/driver-examples/driver-example-query-13.rst index cc28fee06d4..d67da72ef4a 100644 --- a/source/includes/driver-examples/driver-example-query-13.rst +++ b/source/includes/driver-examples/driver-example-query-13.rst @@ -24,7 +24,16 @@ .. figure:: /images/compass-find-filter-and-or.png :alt: Query using AND as well as OR + - id: c + content: | + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 13 + :end-before: End Example 13 + + - id: python content: | @@ -61,6 +70,15 @@ :start-after: Start Example 13 :end-before: End Example 13 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 13 + :end-before: End Example 13 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_query.js diff --git a/source/includes/driver-examples/driver-example-query-14.rst b/source/includes/driver-examples/driver-example-query-14.rst index 5cab3bfb388..65ba150b123 100644 --- a/source/includes/driver-examples/driver-example-query-14.rst +++ b/source/includes/driver-examples/driver-example-query-14.rst @@ -29,6 +29,16 @@ For instructions on inserting documents in MongoDB Compass, see :ref:`Insert Documents `. + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 14 + :end-before: End Example 14 + + - id: python content: | @@ -65,6 +75,15 @@ :start-after: Start Example 14 :end-before: End Example 14 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 14 + :end-before: End Example 14 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_query_embedded_documents.js diff --git a/source/includes/driver-examples/driver-example-query-15.rst b/source/includes/driver-examples/driver-example-query-15.rst index 041dd727623..0535df5d57f 100644 --- a/source/includes/driver-examples/driver-example-query-15.rst +++ b/source/includes/driver-examples/driver-example-query-15.rst @@ -21,6 +21,15 @@ .. figure:: /images/compass-match-embedded.png :alt: Query embedded field + - id: c + content: | + + .. 
literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 15 + :end-before: End Example 15 + - id: python content: | @@ -57,6 +66,15 @@ :start-after: Start Example 15 :end-before: End Example 15 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 15 + :end-before: End Example 15 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_query_embedded_documents.js diff --git a/source/includes/driver-examples/driver-example-query-16.rst b/source/includes/driver-examples/driver-example-query-16.rst index 55f09bcef5b..5f1f139325f 100644 --- a/source/includes/driver-examples/driver-example-query-16.rst +++ b/source/includes/driver-examples/driver-example-query-16.rst @@ -13,6 +13,16 @@ .. figure:: /images/compass-find-embedded-no-match.png :alt: Query embedded field + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 16 + :end-before: End Example 16 + + - id: python content: | @@ -49,6 +59,15 @@ :start-after: Start Example 16 :end-before: End Example 16 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 16 + :end-before: End Example 16 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_query_embedded_documents.js diff --git a/source/includes/driver-examples/driver-example-query-17.rst b/source/includes/driver-examples/driver-example-query-17.rst index 47e5dd8a677..6e624419a1b 100644 --- a/source/includes/driver-examples/driver-example-query-17.rst +++ b/source/includes/driver-examples/driver-example-query-17.rst @@ -21,6 +21,15 @@ .. figure:: /images/compass-find-nested-field.png :alt: Query single nested field + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 17 + :end-before: End Example 17 + - id: python content: | @@ -57,6 +66,15 @@ :start-after: Start Example 17 :end-before: End Example 17 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 17 + :end-before: End Example 17 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_query_embedded_documents.js diff --git a/source/includes/driver-examples/driver-example-query-18.rst b/source/includes/driver-examples/driver-example-query-18.rst index eb257d00a00..c6f999e60db 100644 --- a/source/includes/driver-examples/driver-example-query-18.rst +++ b/source/includes/driver-examples/driver-example-query-18.rst @@ -21,6 +21,16 @@ .. figure:: /images/compass-find-nested-query-op.png :alt: Query single nested field + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 18 + :end-before: End Example 18 + + - id: python content: | @@ -57,6 +67,15 @@ :start-after: Start Example 18 :end-before: End Example 18 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 18 + :end-before: End Example 18 + - id: nodejs content: | .. 
literalinclude:: /driver-examples/node_query_embedded_documents.js diff --git a/source/includes/driver-examples/driver-example-query-19.rst b/source/includes/driver-examples/driver-example-query-19.rst index 2a56ddc6322..bb0ff20fab9 100644 --- a/source/includes/driver-examples/driver-example-query-19.rst +++ b/source/includes/driver-examples/driver-example-query-19.rst @@ -21,6 +21,16 @@ .. figure:: /images/compass-find-embedded-and.png :alt: Query multiple nested fields + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 19 + :end-before: End Example 19 + + - id: python content: | @@ -57,6 +67,15 @@ :start-after: Start Example 19 :end-before: End Example 19 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 19 + :end-before: End Example 19 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_query_embedded_documents.js diff --git a/source/includes/driver-examples/driver-example-query-20.rst b/source/includes/driver-examples/driver-example-query-20.rst index e12d922fcc5..dea273c9393 100644 --- a/source/includes/driver-examples/driver-example-query-20.rst +++ b/source/includes/driver-examples/driver-example-query-20.rst @@ -29,7 +29,16 @@ For instructions on inserting documents in MongoDB Compass, see :ref:`Insert Documents `. + - id: c + content: | + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 20 + :end-before: End Example 20 + + - id: python content: | @@ -66,6 +75,15 @@ :start-after: Start Example 20 :end-before: End Example 20 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 20 + :end-before: End Example 20 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_query_arrays.js diff --git a/source/includes/driver-examples/driver-example-query-21.rst b/source/includes/driver-examples/driver-example-query-21.rst index c0a575d6277..004d53810d9 100644 --- a/source/includes/driver-examples/driver-example-query-21.rst +++ b/source/includes/driver-examples/driver-example-query-21.rst @@ -21,6 +21,16 @@ .. figure:: /images/compass-array-match-exact.png :alt: Query array matching exactly + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 21 + :end-before: End Example 21 + + - id: python content: | @@ -57,6 +67,15 @@ :start-after: Start Example 21 :end-before: End Example 21 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 21 + :end-before: End Example 21 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_query_arrays.js diff --git a/source/includes/driver-examples/driver-example-query-22.rst b/source/includes/driver-examples/driver-example-query-22.rst index c45f13eb958..523641011fa 100644 --- a/source/includes/driver-examples/driver-example-query-22.rst +++ b/source/includes/driver-examples/driver-example-query-22.rst @@ -21,6 +21,16 @@ .. figure:: /images/compass-array-match-all.png :alt: Query array matching all criteria + - id: c + content: | + + .. 
literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 22 + :end-before: End Example 22 + + - id: python content: | @@ -57,6 +67,15 @@ :start-after: Start Example 22 :end-before: End Example 22 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 22 + :end-before: End Example 22 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_query_arrays.js diff --git a/source/includes/driver-examples/driver-example-query-23.rst b/source/includes/driver-examples/driver-example-query-23.rst index 4f481c9c291..4e25301f225 100644 --- a/source/includes/driver-examples/driver-example-query-23.rst +++ b/source/includes/driver-examples/driver-example-query-23.rst @@ -21,6 +21,16 @@ .. figure:: /images/compass-array-elem-match.png :alt: Query array matching multiple criteria + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 23 + :end-before: End Example 23 + + - id: python content: | @@ -57,6 +67,15 @@ :start-after: Start Example 23 :end-before: End Example 23 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 23 + :end-before: End Example 23 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_query_arrays.js diff --git a/source/includes/driver-examples/driver-example-query-24.rst b/source/includes/driver-examples/driver-example-query-24.rst index 35e98ba5e67..b577238f0f2 100644 --- a/source/includes/driver-examples/driver-example-query-24.rst +++ b/source/includes/driver-examples/driver-example-query-24.rst @@ -21,6 +21,16 @@ .. figure:: /images/compass-array-query-op.png :alt: Query array for at least one matching element + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 24 + :end-before: End Example 24 + + - id: python content: | @@ -57,6 +67,15 @@ :start-after: Start Example 24 :end-before: End Example 24 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 24 + :end-before: End Example 24 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_query_arrays.js diff --git a/source/includes/driver-examples/driver-example-query-25.rst b/source/includes/driver-examples/driver-example-query-25.rst index fd21527ca1a..8782bf9ab8e 100644 --- a/source/includes/driver-examples/driver-example-query-25.rst +++ b/source/includes/driver-examples/driver-example-query-25.rst @@ -21,6 +21,16 @@ .. figure:: /images/compass-array-compound-filter.png :alt: Query array using a compound filter + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 25 + :end-before: End Example 25 + + - id: python content: | @@ -57,6 +67,15 @@ :start-after: Start Example 25 :end-before: End Example 25 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 25 + :end-before: End Example 25 + - id: nodejs content: | .. 
literalinclude:: /driver-examples/node_query_arrays.js diff --git a/source/includes/driver-examples/driver-example-query-26.rst b/source/includes/driver-examples/driver-example-query-26.rst index be53400c194..e6f6817ce1c 100644 --- a/source/includes/driver-examples/driver-example-query-26.rst +++ b/source/includes/driver-examples/driver-example-query-26.rst @@ -21,6 +21,16 @@ .. figure:: /images/compass-array-compound-multiple-criteria.png :alt: Query array by multiple conditions + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 26 + :end-before: End Example 26 + + - id: python content: | @@ -57,6 +67,15 @@ :start-after: Start Example 26 :end-before: End Example 26 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 26 + :end-before: End Example 26 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_query_arrays.js diff --git a/source/includes/driver-examples/driver-example-query-27.rst b/source/includes/driver-examples/driver-example-query-27.rst index c4b224af31d..6b9abe669e4 100644 --- a/source/includes/driver-examples/driver-example-query-27.rst +++ b/source/includes/driver-examples/driver-example-query-27.rst @@ -21,6 +21,16 @@ .. figure:: /images/compass-array-match-by-index.png :alt: Query array by index + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 27 + :end-before: End Example 27 + + - id: python content: | @@ -57,6 +67,15 @@ :start-after: Start Example 26 :end-before: End Example 26 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 27 + :end-before: End Example 27 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_query_arrays.js diff --git a/source/includes/driver-examples/driver-example-query-28.rst b/source/includes/driver-examples/driver-example-query-28.rst index 49df5a49853..df690fab865 100644 --- a/source/includes/driver-examples/driver-example-query-28.rst +++ b/source/includes/driver-examples/driver-example-query-28.rst @@ -21,6 +21,17 @@ .. figure:: /images/compass-array-query-by-size.png :alt: Query for array by number of elements + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 28 + :end-before: End Example 28 + + .. include:: /includes/driver-examples/driver-example-c-cleanup.rst + - id: python content: | @@ -57,6 +68,15 @@ :start-after: Start Example 28 :end-before: End Example 28 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 28 + :end-before: End Example 28 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_query_arrays.js diff --git a/source/includes/driver-examples/driver-example-query-29.rst b/source/includes/driver-examples/driver-example-query-29.rst index a0e07a4dd7a..7b92827b25a 100644 --- a/source/includes/driver-examples/driver-example-query-29.rst +++ b/source/includes/driver-examples/driver-example-query-29.rst @@ -29,6 +29,16 @@ For instructions on inserting documents in MongoDB Compass, see :ref:`Insert Documents `. + - id: c + content: | + + .. 
literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 29 + :end-before: End Example 29 + + - id: python content: | @@ -65,6 +75,15 @@ :start-after: Start Example 29 :end-before: End Example 29 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 29 + :end-before: End Example 29 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_query_array_of_documents.js diff --git a/source/includes/driver-examples/driver-example-query-30.rst b/source/includes/driver-examples/driver-example-query-30.rst index 33bb00c8416..0d7222e69ae 100644 --- a/source/includes/driver-examples/driver-example-query-30.rst +++ b/source/includes/driver-examples/driver-example-query-30.rst @@ -21,6 +21,16 @@ .. figure:: /images/compass-find-nested-in-array.png :alt: Query for nested array element + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 30 + :end-before: End Example 30 + + - id: python content: | @@ -57,6 +67,15 @@ :start-after: Start Example 30 :end-before: End Example 30 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 30 + :end-before: End Example 30 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_query_array_of_documents.js diff --git a/source/includes/driver-examples/driver-example-query-31.rst b/source/includes/driver-examples/driver-example-query-31.rst index 468f0cf7a30..a0244af2f18 100644 --- a/source/includes/driver-examples/driver-example-query-31.rst +++ b/source/includes/driver-examples/driver-example-query-31.rst @@ -13,6 +13,16 @@ .. figure:: /images/compass-find-nested-array-no-match.png :alt: Query for nested array element + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 31 + :end-before: End Example 31 + + - id: python content: | @@ -49,6 +59,15 @@ :start-after: Start Example 31 :end-before: End Example 31 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 31 + :end-before: End Example 31 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_query_array_of_documents.js diff --git a/source/includes/driver-examples/driver-example-query-32.rst b/source/includes/driver-examples/driver-example-query-32.rst index ddf786d967a..0c0d1ac2eca 100644 --- a/source/includes/driver-examples/driver-example-query-32.rst +++ b/source/includes/driver-examples/driver-example-query-32.rst @@ -21,6 +21,16 @@ .. figure:: /images/compass-find-array-index-embedded-doc.png :alt: Query for array element matching single condition + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 32 + :end-before: End Example 32 + + - id: python content: | @@ -57,6 +67,15 @@ :start-after: Start Example 32 :end-before: End Example 32 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 32 + :end-before: End Example 32 + - id: nodejs content: | .. 
literalinclude:: /driver-examples/node_query_array_of_documents.js diff --git a/source/includes/driver-examples/driver-example-query-33.rst b/source/includes/driver-examples/driver-example-query-33.rst index 64b951aa19f..c547570adb7 100644 --- a/source/includes/driver-examples/driver-example-query-33.rst +++ b/source/includes/driver-examples/driver-example-query-33.rst @@ -21,6 +21,16 @@ .. figure:: /images/compass-find-array-embedded-field-condition.png :alt: Query for embedded field matching single condition + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 33 + :end-before: End Example 33 + + - id: python content: | @@ -57,6 +67,15 @@ :start-after: Start Example 33 :end-before: End Example 33 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 33 + :end-before: End Example 33 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_query_array_of_documents.js diff --git a/source/includes/driver-examples/driver-example-query-34.rst b/source/includes/driver-examples/driver-example-query-34.rst index 464e151e60f..6336750633a 100644 --- a/source/includes/driver-examples/driver-example-query-34.rst +++ b/source/includes/driver-examples/driver-example-query-34.rst @@ -21,6 +21,16 @@ .. figure:: /images/compass-array-multiple-cond-single-doc.png :alt: Query for single document matching multiple conditions + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 34 + :end-before: End Example 34 + + - id: python content: | @@ -57,6 +67,15 @@ :start-after: Start Example 34 :end-before: End Example 34 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 34 + :end-before: End Example 34 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_query_array_of_documents.js diff --git a/source/includes/driver-examples/driver-example-query-35.rst b/source/includes/driver-examples/driver-example-query-35.rst index d69fd03fd05..09d8ab20e4a 100644 --- a/source/includes/driver-examples/driver-example-query-35.rst +++ b/source/includes/driver-examples/driver-example-query-35.rst @@ -20,6 +20,15 @@ .. figure:: /images/compass-array-multiple-cond-single-doc-2.png :alt: Query for single document matching multiple conditions + + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 35 + :end-before: End Example 35 - id: python content: | @@ -57,6 +66,15 @@ :start-after: Start Example 35 :end-before: End Example 35 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 35 + :end-before: End Example 35 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_query_array_of_documents.js diff --git a/source/includes/driver-examples/driver-example-query-36.rst b/source/includes/driver-examples/driver-example-query-36.rst index a039bb8ab5c..400a7cb2999 100644 --- a/source/includes/driver-examples/driver-example-query-36.rst +++ b/source/includes/driver-examples/driver-example-query-36.rst @@ -21,6 +21,16 @@ .. 
figure:: /images/compass-array-match-combination-of-elements.png :alt: Query quantity value within range + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 36 + :end-before: End Example 36 + + - id: python content: | @@ -57,6 +67,15 @@ :start-after: Start Example 36 :end-before: End Example 36 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 36 + :end-before: End Example 36 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_query_array_of_documents.js diff --git a/source/includes/driver-examples/driver-example-query-37.rst b/source/includes/driver-examples/driver-example-query-37.rst index cacb26bbb12..1fa7801c08f 100644 --- a/source/includes/driver-examples/driver-example-query-37.rst +++ b/source/includes/driver-examples/driver-example-query-37.rst @@ -21,6 +21,15 @@ .. figure:: /images/compass-array-match-combination-of-elements-2.png :alt: Query matching quantity and warehouse location + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 37 + :end-before: End Example 37 + - id: python content: | @@ -57,6 +66,15 @@ :start-after: Start Example 37 :end-before: End Example 37 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 37 + :end-before: End Example 37 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_query_array_of_documents.js diff --git a/source/includes/driver-examples/driver-example-query-38.rst b/source/includes/driver-examples/driver-example-query-38.rst index 301a5718bce..2057bd7a462 100644 --- a/source/includes/driver-examples/driver-example-query-38.rst +++ b/source/includes/driver-examples/driver-example-query-38.rst @@ -23,6 +23,16 @@ For instructions on inserting documents in MongoDB Compass, see :ref:`Insert Documents `. + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 38 + :end-before: End Example 38 + + - id: python content: | @@ -59,6 +69,15 @@ :start-after: Start Example 38 :end-before: End Example 38 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 38 + :end-before: End Example 38 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_query_for_null_fields.js @@ -76,15 +95,6 @@ :start-after: Start Example 38 :end-before: End Example 38 - - id: perl - content: | - - .. literalinclude:: /driver-examples/driver-examples.t - :language: perl - :dedent: 4 - :start-after: Start Example 38 - :end-before: End Example 38 - - id: ruby content: | diff --git a/source/includes/driver-examples/driver-example-query-39.rst b/source/includes/driver-examples/driver-example-query-39.rst index c6878bd7dee..84d680b432c 100644 --- a/source/includes/driver-examples/driver-example-query-39.rst +++ b/source/includes/driver-examples/driver-example-query-39.rst @@ -22,6 +22,16 @@ .. figure:: /images/compass-find-null-field.png :alt: Query null value or missing field + - id: c + content: | + + .. 
literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 39 + :end-before: End Example 39 + + - id: python content: | @@ -58,6 +68,15 @@ :start-after: Start Example 39 :end-before: End Example 39 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 39 + :end-before: End Example 39 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_query_for_null_fields.js @@ -75,15 +94,6 @@ :start-after: Start Example 39 :end-before: End Example 39 - - id: perl - content: | - - .. literalinclude:: /driver-examples/driver-examples.t - :language: perl - :dedent: 4 - :start-after: Start Example 39 - :end-before: End Example 39 - - id: ruby content: | diff --git a/source/includes/driver-examples/driver-example-query-40.rst b/source/includes/driver-examples/driver-example-query-40.rst index 3777e0764b6..ff4d3519d16 100644 --- a/source/includes/driver-examples/driver-example-query-40.rst +++ b/source/includes/driver-examples/driver-example-query-40.rst @@ -22,6 +22,16 @@ .. figure:: /images/compass-find-null-type-check.png :alt: Find null type + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 40 + :end-before: End Example 40 + + - id: python content: | @@ -58,6 +68,15 @@ :start-after: Start Example 40 :end-before: End Example 40 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 40 + :end-before: End Example 40 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_query_for_null_fields.js @@ -75,15 +94,6 @@ :start-after: Start Example 40 :end-before: End Example 40 - - id: perl - content: | - - .. literalinclude:: /driver-examples/driver-examples.t - :language: perl - :dedent: 4 - :start-after: Start Example 40 - :end-before: End Example 40 - - id: ruby content: | diff --git a/source/includes/driver-examples/driver-example-query-41.rst b/source/includes/driver-examples/driver-example-query-41.rst index ec979093970..6844c21cc8f 100644 --- a/source/includes/driver-examples/driver-example-query-41.rst +++ b/source/includes/driver-examples/driver-example-query-41.rst @@ -22,6 +22,17 @@ .. figure:: /images/compass-find-null-existence-check.png :alt: Query for null value + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 41 + :end-before: End Example 41 + + .. include:: /includes/driver-examples/driver-example-c-cleanup.rst + - id: python content: | @@ -58,6 +69,15 @@ :start-after: Start Example 41 :end-before: End Example 41 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 41 + :end-before: End Example 41 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_query_for_null_fields.js @@ -75,15 +95,6 @@ :start-after: Start Example 41 :end-before: End Example 41 - - id: perl - content: | - - .. 
literalinclude:: /driver-examples/driver-examples.t - :language: perl - :dedent: 4 - :start-after: Start Example 41 - :end-before: End Example 41 - - id: ruby content: | diff --git a/source/includes/driver-examples/driver-example-query-42.rst b/source/includes/driver-examples/driver-example-query-42.rst index 71360043c3a..52f2ce24119 100644 --- a/source/includes/driver-examples/driver-example-query-42.rst +++ b/source/includes/driver-examples/driver-example-query-42.rst @@ -27,9 +27,17 @@ ] For instructions on inserting documents in MongoDB Compass, - see :doc:`Insert Documents `. + see :ref:`Insert Documents `. + - id: c + content: | + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 42 + :end-before: End Example 42 + - id: python content: | @@ -66,6 +74,15 @@ :start-after: Start Example 42 :end-before: End Example 42 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 42 + :end-before: End Example 42 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_project.js diff --git a/source/includes/driver-examples/driver-example-query-43.rst b/source/includes/driver-examples/driver-example-query-43.rst index 92fb492d4af..15f0c5db9b0 100644 --- a/source/includes/driver-examples/driver-example-query-43.rst +++ b/source/includes/driver-examples/driver-example-query-43.rst @@ -19,6 +19,16 @@ #. Click :guilabel:`Find`. + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 43 + :end-before: End Example 43 + + - id: python content: | @@ -55,6 +65,15 @@ :start-after: Start Example 43 :end-before: End Example 43 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 43 + :end-before: End Example 43 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_project.js diff --git a/source/includes/driver-examples/driver-example-query-44.rst b/source/includes/driver-examples/driver-example-query-44.rst index 02ffc83aba5..33d60d3da24 100644 --- a/source/includes/driver-examples/driver-example-query-44.rst +++ b/source/includes/driver-examples/driver-example-query-44.rst @@ -30,6 +30,17 @@ #. Click :guilabel:`Find`. + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 44 + :end-before: End Example 44 + + .. include:: /includes/driver-examples/driver-example-c-cleanup.rst + - id: python content: | @@ -72,6 +83,15 @@ :start-after: Start Example 44 :end-before: End Example 44 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 44 + :end-before: End Example 44 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_project.js diff --git a/source/includes/driver-examples/driver-example-query-45.rst b/source/includes/driver-examples/driver-example-query-45.rst index 72c98231c12..a5e8bd794e8 100644 --- a/source/includes/driver-examples/driver-example-query-45.rst +++ b/source/includes/driver-examples/driver-example-query-45.rst @@ -29,6 +29,15 @@ #. Click :guilabel:`Find`. + - id: c + content: | + + .. 
literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 45 + :end-before: End Example 45 + - id: python content: | @@ -71,6 +80,15 @@ :start-after: Start Example 45 :end-before: End Example 45 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 45 + :end-before: End Example 45 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_project.js diff --git a/source/includes/driver-examples/driver-example-query-46.rst b/source/includes/driver-examples/driver-example-query-46.rst index be365b1e253..9bb8264df25 100644 --- a/source/includes/driver-examples/driver-example-query-46.rst +++ b/source/includes/driver-examples/driver-example-query-46.rst @@ -29,6 +29,15 @@ #. Click :guilabel:`Find`. + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 46 + :end-before: End Example 46 + - id: python content: | @@ -71,6 +80,15 @@ :start-after: Start Example 46 :end-before: End Example 46 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 46 + :end-before: End Example 46 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_project.js diff --git a/source/includes/driver-examples/driver-example-query-47.rst b/source/includes/driver-examples/driver-example-query-47.rst index 7b9cb4e70fe..dc13d21c30b 100644 --- a/source/includes/driver-examples/driver-example-query-47.rst +++ b/source/includes/driver-examples/driver-example-query-47.rst @@ -32,6 +32,15 @@ #. Click :guilabel:`Find`. + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 47 + :end-before: End Example 47 + - id: python content: | @@ -74,6 +83,15 @@ :start-after: Start Example 47 :end-before: End Example 47 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 47 + :end-before: End Example 47 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_project.js diff --git a/source/includes/driver-examples/driver-example-query-48.rst b/source/includes/driver-examples/driver-example-query-48.rst index 84eec839a2e..d710fb4b03a 100644 --- a/source/includes/driver-examples/driver-example-query-48.rst +++ b/source/includes/driver-examples/driver-example-query-48.rst @@ -32,6 +32,15 @@ #. Click :guilabel:`Find`. + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 48 + :end-before: End Example 48 + - id: python content: | @@ -74,6 +83,15 @@ :start-after: Start Example 48 :end-before: End Example 48 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 48 + :end-before: End Example 48 + - id: nodejs content: | .. 
literalinclude:: /driver-examples/node_project.js diff --git a/source/includes/driver-examples/driver-example-query-49.rst b/source/includes/driver-examples/driver-example-query-49.rst index 198ac71ad0e..6a6d9962fa3 100644 --- a/source/includes/driver-examples/driver-example-query-49.rst +++ b/source/includes/driver-examples/driver-example-query-49.rst @@ -28,6 +28,15 @@ { item: 1, status: 1, "instock.qty": 1 } #. Click :guilabel:`Find`. + + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 49 + :end-before: End Example 49 - id: python content: | @@ -71,6 +80,15 @@ :start-after: Start Example 49 :end-before: End Example 49 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 49 + :end-before: End Example 49 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_project.js diff --git a/source/includes/driver-examples/driver-example-query-50.rst b/source/includes/driver-examples/driver-example-query-50.rst index 1e62f862971..f72d7e4cad7 100644 --- a/source/includes/driver-examples/driver-example-query-50.rst +++ b/source/includes/driver-examples/driver-example-query-50.rst @@ -30,7 +30,14 @@ #. Click :guilabel:`Find`. - + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 50 + :end-before: End Example 50 - id: python content: | @@ -74,6 +81,20 @@ :start-after: Start Example 50 :end-before: End Example 50 + - id: kotlin-coroutine + content: | + To specify a projection document, chain the + `FindFlow.projection() <{+java-api-docs+}/mongodb-driver-kotlin-coroutine/mongodb-driver-kotlin-coroutine/com.mongodb.kotlin.client.coroutine/-find-flow/projection.html>`__ method to the + ``find()`` method. The example uses the + `com.mongodb.client.model.Projections <{+java-api-docs+}/mongodb-driver-core/com/mongodb/client/model/Projections.html>`__ class to create the + projection documents. + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 50 + :end-before: End Example 50 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_project.js diff --git a/source/includes/driver-examples/driver-example-query-6.rst b/source/includes/driver-examples/driver-example-query-6.rst index c6979b420ed..6ad44f09f44 100644 --- a/source/includes/driver-examples/driver-example-query-6.rst +++ b/source/includes/driver-examples/driver-example-query-6.rst @@ -29,6 +29,15 @@ For instructions on inserting documents in MongoDB Compass, see :ref:`Insert Documents `. + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 6 + :end-before: End Example 6 + - id: python content: | @@ -65,6 +74,15 @@ :start-after: Start Example 6 :end-before: End Example 6 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 6 + :end-before: End Example 6 + - id: nodejs content: | .. 
literalinclude:: /driver-examples/node_query.js diff --git a/source/includes/driver-examples/driver-example-query-7.rst b/source/includes/driver-examples/driver-example-query-7.rst index 68eeb414057..bcdbc8a2a94 100644 --- a/source/includes/driver-examples/driver-example-query-7.rst +++ b/source/includes/driver-examples/driver-example-query-7.rst @@ -14,6 +14,17 @@ .. figure:: /images/compass-select-all.png :alt: Compass select all documents in collection + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 7 + :end-before: End Example 7 + + .. include:: /includes/driver-examples/driver-example-c-cleanup.rst + - id: python content: | @@ -50,6 +61,15 @@ :start-after: Start Example 7 :end-before: End Example 7 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 7 + :end-before: End Example 7 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_query.js diff --git a/source/includes/driver-examples/driver-example-query-9.rst b/source/includes/driver-examples/driver-example-query-9.rst index ba9e1824dac..50f13cbfa49 100644 --- a/source/includes/driver-examples/driver-example-query-9.rst +++ b/source/includes/driver-examples/driver-example-query-9.rst @@ -21,6 +21,16 @@ .. figure:: /images/compass-find-filter-inventory.png :alt: Query using equality condition + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 9 + :end-before: End Example 9 + + - id: python content: | @@ -57,6 +67,15 @@ :start-after: Start Example 9 :end-before: End Example 9 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 9 + :end-before: End Example 9 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_query.js diff --git a/source/includes/driver-examples/driver-example-query-find-method.rst b/source/includes/driver-examples/driver-example-query-find-method.rst index 9fc26cda700..3afb1b39826 100644 --- a/source/includes/driver-examples/driver-example-query-find-method.rst +++ b/source/includes/driver-examples/driver-example-query-find-method.rst @@ -15,6 +15,12 @@ :ref:`query filter parameter ` determines the select criteria: + - id: c + content: | + To select all documents in the collection, pass an empty + document as the query filter parameter to the find method. The + query filter parameter determines the select criteria: + - id: python content: | To select all documents in the collection, pass an empty @@ -39,6 +45,12 @@ document as the query filter parameter to the find method. The query filter parameter determines the select criteria: + - id: kotlin-coroutine + content: | + To select all documents in the collection, pass an empty + document as the query filter parameter to the find method. The + query filter parameter determines the select criteria: + - id: nodejs content: | To select all documents in the collection, pass an empty diff --git a/source/includes/driver-examples/driver-example-query-intro-no-perl.rst b/source/includes/driver-examples/driver-example-query-intro-no-perl.rst new file mode 100644 index 00000000000..91a1392c21c --- /dev/null +++ b/source/includes/driver-examples/driver-example-query-intro-no-perl.rst @@ -0,0 +1,145 @@ +.. 
tabs-drivers:: + + tabs: + - id: shell + content: | + + This page provides examples of |query_operations| using the + :method:`db.collection.find()` method in :binary:`mongosh`. + + .. include:: /includes/driver-examples/examples-intro.rst + + - id: compass + content: | + + This page provides examples of |query_operations| using + :ref:`MongoDB Compass `. + + .. include:: /includes/driver-examples/examples-intro.rst + + - id: c + content: | + + This page provides examples of |query_operations| using + `mongoc_collection_find_with_opts `__. + .. include:: /includes/driver-examples/examples-intro.rst + + - id: python + content: | + + This page provides examples of |query_operations| using the + :py:meth:`pymongo.collection.Collection.find` method in the + :api:`PyMongo ` + Python driver. + + .. include:: /includes/driver-examples/examples-intro.rst + + - id: motor + content: | + + This page provides examples of |query_operations| using the + :py:meth:`motor.motor_asyncio.AsyncIOMotorCollection.find` + method in the `Motor `_ + driver. + + .. include:: /includes/driver-examples/examples-intro.rst + + - id: java-sync + content: | + + This page provides examples of |query_operations| using the + com.mongodb.client.MongoCollection.find_ method in the MongoDB + `Java Synchronous Driver`_. + + .. tip:: + + The driver provides com.mongodb.client.model.Filters_ + helper methods to facilitate the creation of filter + documents. The examples on this page use these methods to + create the filter documents. + + .. include:: /includes/driver-examples/examples-intro.rst + + - id: java-async + content: | + + This page provides examples of |query_operations| using the + `com.mongodb.reactivestreams.client.MongoCollection.find `_ + method in the MongoDB `Java Reactive Streams Driver `_. + + .. include:: /includes/driver-examples/examples-intro.rst + + - id: kotlin-coroutine + content: | + This page provides examples of |query_operations| by using the + `MongoCollection.find() <{+java-api-docs+}/mongodb-driver-kotlin-coroutine/mongodb-driver-kotlin-coroutine/com.mongodb.kotlin.client.coroutine/-mongo-collection/find.html>`__ method in the MongoDB + :driver:`Kotlin Coroutine Driver `. + + .. tip:: + + The driver provides `com.mongodb.client.model.Filters <{+java-api-docs+}/mongodb-driver-core/com/mongodb/client/model/Filters.html>`__ + helper methods to facilitate the creation of filter + documents. The examples on this page use these methods to + create the filter documents. + + .. include:: /includes/driver-examples/examples-intro.rst + + - id: nodejs + content: | + + This page provides examples of |query_operations| using the + :node-api:`Collection.find() ` method in + the :node-docs:`MongoDB Node.js Driver <>`. + + .. include:: /includes/driver-examples/examples-intro.rst + + - id: php + content: | + + This page provides examples of |query_operations| using the + :phpmethod:`MongoDB\\Collection::find() ` + method in the + `MongoDB PHP Library `_. + + .. include:: /includes/driver-examples/examples-intro.rst + + - id: ruby + content: | + + This page provides examples of |query_operations| using the + :ruby-api:`Mongo::Collection#find()` + method in the + :ruby:`MongoDB Ruby Driver `. + + .. include:: /includes/driver-examples/examples-intro.rst + + - id: scala + content: | + + This page provides examples of |query_operations| using the + :scala-api:`collection.find()` method + in the + `MongoDB Scala Driver `_. + + .. 
include:: /includes/driver-examples/examples-intro.rst + + - id: csharp + content: | + + This page provides examples of |query_operations| using the + :csharp-api:`MongoCollection.Find() ` + method in the + `MongoDB C# Driver `_. + + .. include:: /includes/driver-examples/examples-intro.rst + + - id: go + content: | + + This page provides examples of |query_operations| using the + :go-api:`Collection.Find ` + function in the + `MongoDB Go Driver `_. + + .. include:: /includes/driver-examples/examples-intro.rst + diff --git a/source/includes/driver-examples/driver-example-query-intro.rst b/source/includes/driver-examples/driver-example-query-intro.rst index 7dc285e13b9..5d083b3e9d0 100644 --- a/source/includes/driver-examples/driver-example-query-intro.rst +++ b/source/includes/driver-examples/driver-example-query-intro.rst @@ -8,6 +8,13 @@ .. include:: /includes/driver-examples/examples-intro.rst + - id: c + content: | + This page provides examples of |query_operations| using + `mongoc_collection_find_with_opts `__. + + .. include:: /includes/driver-examples/examples-intro.rst + - id: compass content: | This page provides examples of |query_operations| using @@ -56,6 +63,21 @@ .. include:: /includes/driver-examples/examples-intro.rst + - id: kotlin-coroutine + content: | + This page provides examples of |query_operations| by using the + `MongoCollection.find() <{+java-api-docs+}/mongodb-driver-kotlin-coroutine/mongodb-driver-kotlin-coroutine/com.mongodb.kotlin.client.coroutine/-mongo-collection/find.html>`__ method in the MongoDB + :driver:`Kotlin Coroutine Driver `. + + .. tip:: + + The driver provides `com.mongodb.client.model.Filters <{+java-api-docs+}/mongodb-driver-core/com/mongodb/client/model/Filters.html>`__ + helper methods to facilitate the creation of filter + documents. The examples on this page use these methods to + create the filter documents. + + .. include:: /includes/driver-examples/examples-intro.rst + - id: nodejs content: | This page provides examples of |query_operations| using the @@ -67,7 +89,7 @@ - id: php content: | This page provides examples of |query_operations| using the - :phpmethod:`MongoDB\\Collection::find() ` + :phpmethod:`MongoDB\\Collection::find() ` method in the `MongoDB PHP Library `_. diff --git a/source/includes/driver-examples/driver-example-update-51.rst b/source/includes/driver-examples/driver-example-update-51.rst index 1c012409598..93141232fd3 100644 --- a/source/includes/driver-examples/driver-example-update-51.rst +++ b/source/includes/driver-examples/driver-example-update-51.rst @@ -39,6 +39,15 @@ For instructions on inserting documents using |compass|, see :ref:`Insert Documents `. + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 51 + :end-before: End Example 51 + - id: python content: | @@ -75,6 +84,15 @@ :start-after: Start Example 51 :end-before: End Example 51 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 51 + :end-before: End Example 51 + - id: nodejs content: | .. 
literalinclude:: /driver-examples/node_update.js diff --git a/source/includes/driver-examples/driver-example-update-52.rst b/source/includes/driver-examples/driver-example-update-52.rst index 888a80cc5d0..d4f16398c54 100644 --- a/source/includes/driver-examples/driver-example-update-52.rst +++ b/source/includes/driver-examples/driver-example-update-52.rst @@ -107,6 +107,16 @@ :ref:`Field Update Operators `, you must manually enter the date value in Compass. + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 52 + :end-before: End Example 52 + + - id: python content: | @@ -149,6 +159,17 @@ .. include:: /includes/fact-update-operation-uses.rst + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 52 + :end-before: End Example 52 + + .. include:: /includes/fact-update-operation-uses.rst + - id: nodejs content: | .. literalinclude:: /driver-examples/node_update.js diff --git a/source/includes/driver-examples/driver-example-update-53.rst b/source/includes/driver-examples/driver-example-update-53.rst index ecd1c7ee557..b3a8e6a273a 100644 --- a/source/includes/driver-examples/driver-example-update-53.rst +++ b/source/includes/driver-examples/driver-example-update-53.rst @@ -16,6 +16,16 @@ .. include:: /includes/fact-update-many-operation-uses.rst + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 53 + :end-before: End Example 53 + + - id: python content: | @@ -58,6 +68,17 @@ .. include:: /includes/fact-update-many-operation-uses.rst + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 53 + :end-before: End Example 53 + + .. include:: /includes/fact-update-many-operation-uses.rst + - id: nodejs content: | .. literalinclude:: /driver-examples/node_update.js diff --git a/source/includes/driver-examples/driver-example-update-54.rst b/source/includes/driver-examples/driver-example-update-54.rst index 25ff83053ea..68436526e65 100644 --- a/source/includes/driver-examples/driver-example-update-54.rst +++ b/source/includes/driver-examples/driver-example-update-54.rst @@ -11,6 +11,17 @@ { item: "paper", instock: [ { warehouse: "A", qty: 60 }, { warehouse: "B", qty: 40 } ] } ) + - id: c + content: | + + .. literalinclude:: /driver-examples/test-mongoc-sample-commands.c + :language: c + :dedent: 3 + :start-after: Start Example 54 + :end-before: End Example 54 + + .. include:: /includes/driver-examples/driver-example-c-cleanup.rst + - id: python content: | @@ -47,6 +58,15 @@ :start-after: Start Example 54 :end-before: End Example 54 + - id: kotlin-coroutine + content: | + + .. literalinclude:: /driver-examples/kotlin_examples.kt + :language: kotlin + :dedent: + :start-after: Start Example 54 + :end-before: End Example 54 + - id: nodejs content: | .. literalinclude:: /driver-examples/node_update.js diff --git a/source/includes/driver-examples/driver-procedure-indexes-1.rst b/source/includes/driver-examples/driver-procedure-indexes-1.rst index fbfe1ae8617..fef66d8a11d 100644 --- a/source/includes/driver-examples/driver-procedure-indexes-1.rst +++ b/source/includes/driver-examples/driver-procedure-indexes-1.rst @@ -44,7 +44,7 @@ ..
code-block:: java - collection.createIndex( , )å + collection.createIndex(, ) - id: java-async content: | @@ -58,6 +58,19 @@ collection.createIndex( , , ) + - id: kotlin-coroutine + content: | + + To create an index by using the + :driver:`Kotlin Coroutine Driver `, + use the `MongoCollection.createIndex() + <{+java-api-docs+}/mongodb-driver-kotlin-coroutine/mongodb-driver-kotlin-coroutine/com.mongodb.kotlin.client.coroutine/-mongo-collection/create-index.html>`__ + method. + + .. code-block:: kotlin + + collection.createIndex(, ) + - id: nodejs content: | @@ -77,7 +90,7 @@ To create an index using the `PHP driver `_, use - :phpmethod:`MongoDB\\Collection::createIndex() `. + :phpmethod:`MongoDB\\Collection::createIndex() `. .. code-block:: php diff --git a/source/includes/driver-remove-indexes-tabs.rst b/source/includes/driver-remove-indexes-tabs.rst index bf07fedd87e..43771ccec25 100644 --- a/source/includes/driver-remove-indexes-tabs.rst +++ b/source/includes/driver-remove-indexes-tabs.rst @@ -41,10 +41,9 @@ :method:`db.collection.dropIndexes()` can accept an array of index names. - Starting in MongoDB 4.4, :method:`db.collection.dropIndexes()` can stop - in-progress index builds. See - :ref:`dropIndexes-method-index-builds` for more information. + in-progress index builds. See :ref:`dropIndexes-method-index-builds` + for more information. Remove All Indexes ~~~~~~~~~~~~~~~~~~ diff --git a/source/includes/drop-hashed-shard-key-index-main.rst b/source/includes/drop-hashed-shard-key-index-main.rst new file mode 100644 index 00000000000..7cac62a24c9 --- /dev/null +++ b/source/includes/drop-hashed-shard-key-index-main.rst @@ -0,0 +1,7 @@ +Starting in MongoDB 7.0.3 (and 6.0.12 and 5.0.22), you can drop the +index for a hashed shard key. + +This can speed up data insertion for collections sharded with a hashed +shard key. It can also speed up data ingestion when using +``mongosync``. + diff --git a/source/includes/drop-hashed-shard-key-index.rst b/source/includes/drop-hashed-shard-key-index.rst new file mode 100644 index 00000000000..f1549f780e7 --- /dev/null +++ b/source/includes/drop-hashed-shard-key-index.rst @@ -0,0 +1,3 @@ +.. include:: /includes/drop-hashed-shard-key-index-main.rst + +For details, see :ref:`drop-a-hashed-shard-key-index`. diff --git a/source/includes/enable-KMIP-on-windows.rst b/source/includes/enable-KMIP-on-windows.rst new file mode 100644 index 00000000000..d46b7f0353c --- /dev/null +++ b/source/includes/enable-KMIP-on-windows.rst @@ -0,0 +1,10 @@ +.. important:: + + Enabling encryption using a KMIP server on Windows fails when using + |kmip-client-cert-file| and the KMIP server enforces TLS 1.2. + + To enable encryption at rest with KMIP on Windows, you must: + + - Import the client certificate into the Windows Certificate Store. + - Use the |kmip-client-cert-selector| option. + diff --git a/source/includes/explain-ignores-cache-plan.rst b/source/includes/explain-ignores-cache-plan.rst new file mode 100644 index 00000000000..cab12d6ecb6 --- /dev/null +++ b/source/includes/explain-ignores-cache-plan.rst @@ -0,0 +1,4 @@ +.. note:: + + Using ``explain`` ignores all existing plan cache entries and prevents + the MongoDB query planner from creating a new plan cache entry. 
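The Kotlin Coroutine Driver ``createIndex()`` tab added above contains only a placeholder call; the following is a minimal, self-contained sketch of a filled-in call, assuming a local deployment and an illustrative ``test.inventory`` namespace (connection string, database, collection, and field name are not part of the patch):

.. code-block:: kotlin

   import com.mongodb.client.model.Indexes
   import com.mongodb.kotlin.client.coroutine.MongoClient
   import kotlinx.coroutines.runBlocking
   import org.bson.Document

   fun main() = runBlocking {
       // Illustrative connection string and namespace.
       val client = MongoClient.create("mongodb://localhost:27017")
       val collection = client.getDatabase("test").getCollection<Document>("inventory")

       // Create an ascending index on "item"; createIndex() suspends and returns the index name.
       val indexName = collection.createIndex(Indexes.ascending("item"))
       println("Created index: $indexName")

       client.close()
   }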
\ No newline at end of file diff --git a/source/includes/extracts-4.0-upgrade-prereq.yaml b/source/includes/extracts-4.0-upgrade-prereq.yaml index 8e4b318f443..8e7f893ee9b 100644 --- a/source/includes/extracts-4.0-upgrade-prereq.yaml +++ b/source/includes/extracts-4.0-upgrade-prereq.yaml @@ -3,8 +3,7 @@ content: | If your deployment has user credentials stored in ``MONGODB-CR`` schema, you must upgrade to :ref:`Salted Challenge Response Authentication Mechanism (SCRAM) ` **before** you - upgrade to version 4.0. For information on upgrading to ``SCRAM``, see - :doc:`/release-notes/3.0-scram`. + upgrade to version 4.0. --- ref: 4.0-upgrade-prereq-isolated content: | diff --git a/source/includes/extracts-4.2-changes.yaml b/source/includes/extracts-4.2-changes.yaml index 6c7b4f24de7..4e2e022eb95 100644 --- a/source/includes/extracts-4.2-changes.yaml +++ b/source/includes/extracts-4.2-changes.yaml @@ -111,15 +111,7 @@ content: | - Do not depend on the profiling level. - - May be affected by :setting:`~operationProfiling.slowOpSampleRate`, - depending on your MongoDB version: - - - In MongoDB 4.2, these slow oplog entries are not - affected by the :setting:`~operationProfiling.slowOpSampleRate`. - MongoDB logs all slow oplog entries regardless of the sample rate. - - - In MongoDB 4.4 and later, these slow oplog entries are affected by - the :setting:`~operationProfiling.slowOpSampleRate`. + - Are affected by :setting:`~operationProfiling.slowOpSampleRate`. The profiler does not capture slow oplog entries. --- @@ -137,7 +129,7 @@ content: | Starting in MongoDB 4.2 (and in 4.0.9), for slow operations, the :doc:`profiler entries ` and - :doc:`diagnostic log messages ` include + :ref:`diagnostic log messages ` include :data:`~system.profile.storage` information. --- ref: 4.2-changes-log-query-shapes-plan-cache-key @@ -410,7 +402,7 @@ content: | will use FIPS compliant connections to :binary:`mongod` / :binary:`mongos` if the :binary:`mongod` / :binary:`mongos` instances are - :doc:`configured to use FIPS mode `. + :ref:`configured to use FIPS mode `. --- ref: 4.2-changes-fips-program-mongod content: | @@ -419,7 +411,7 @@ content: | will use FIPS compliant connections to the :binary:`~bin.mongod` if the :binary:`~bin.mongod` instance is - :doc:`configured to use FIPS mode `. + :ref:`configured to use FIPS mode `. --- ref: 4.2-changes-fips @@ -439,7 +431,7 @@ content: | The programs will use FIPS compliant connections to :binary:`mongod` / :binary:`mongos` if the :binary:`mongod` / :binary:`mongos` instances are - :doc:`configured to use FIPS mode `. + :ref:`configured to use FIPS mode `. --- ref: 4.2-changes-count-syntax-validation content: | @@ -558,7 +550,7 @@ content: | .. include:: /includes/autosplit-no-operation.rst - In MongoDB versions earlier than 6.1, :method:`sh.startBalancer()` + In MongoDB versions earlier than 6.0.3, :method:`sh.startBalancer()` also enables auto-splitting for the sharded cluster. --- ref: 4.2-changes-stop-balancer-autosplit @@ -566,7 +558,7 @@ content: | .. include:: /includes/autosplit-no-operation.rst - In MongoDB versions earlier than 6.1, :method:`sh.stopBalancer()` + In MongoDB versions earlier than 6.0.3, :method:`sh.stopBalancer()` also disables auto-splitting for the sharded cluster. 
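The balancer notes above refer to the ``sh.startBalancer()`` and ``sh.stopBalancer()`` shell helpers; as a hedged sketch, a driver can reach the same behavior through the ``balancerStart`` and ``balancerStop`` admin commands, shown here with the Kotlin Coroutine Driver (the ``mongos`` host name is illustrative):

.. code-block:: kotlin

   import com.mongodb.kotlin.client.coroutine.MongoClient
   import kotlinx.coroutines.runBlocking
   import org.bson.Document

   fun main() = runBlocking {
       // Illustrative mongos address; these commands apply to sharded clusters only.
       val client = MongoClient.create("mongodb://mongos.example.net:27017")
       val admin = client.getDatabase("admin")

       // Disable, then re-enable, the balancer via the admin commands.
       println(admin.runCommand(Document("balancerStop", 1)))
       println(admin.runCommand(Document("balancerStart", 1)))

       client.close()
   }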
--- ref: 4.2-changes-global-lock-reporting @@ -607,7 +599,7 @@ content: | --- ref: 4.2-changes-flow-control-general-desc content: | - Starting in MongoDB 4.2, administrators can limit the rate at which + Administrators can limit the rate at which the primary applies its writes with the goal of keeping the :data:`majority committed ` lag under a configurable maximum value :parameter:`flowControlTargetLagSeconds`. diff --git a/source/includes/extracts-4.2-downgrade-fcv.yaml b/source/includes/extracts-4.2-downgrade-fcv.yaml index 888ac694325..b3f8f1db984 100644 --- a/source/includes/extracts-4.2-downgrade-fcv.yaml +++ b/source/includes/extracts-4.2-downgrade-fcv.yaml @@ -2,15 +2,15 @@ ref: 4.2-downgrade-fcv-index-key content: | Starting in MongoDB 4.2, for ``featureCompatibilityVersion`` (fCV) - set to ``"4.2"`` or greater, MongoDB removes the :limit:`Index Key - Limit`. For fCV set to ``"4.0"``, the limit still applies. + set to ``"4.2"`` or greater, MongoDB removes the Index Key + Limit. For fCV set to ``"4.0"``, the limit still applies. If you have an index with keys that exceed the :limit:`Index Key Limit` once fCV is set to ``"4.0"``, consider changing the index to a hashed index or to indexing a computed value. You can also **temporarily** use - :parameter:`failIndexKeyTooLong` set to ``false`` before resolving - the problem. However, with :parameter:`failIndexKeyTooLong` set to + ``failIndexKeyTooLong`` set to ``false`` before resolving + the problem. However, with ``failIndexKeyTooLong`` set to ``false``, queries that use these indexes can return incomplete results. --- diff --git a/source/includes/extracts-4.4-changes.yaml b/source/includes/extracts-4.4-changes.yaml index b8375ce62f5..e74ff141cea 100644 --- a/source/includes/extracts-4.4-changes.yaml +++ b/source/includes/extracts-4.4-changes.yaml @@ -1,20 +1,17 @@ ref: 4.4-changes-certificate-expiry-warning content: | - .. versionchanged:: 4.4 - - :binary:`~bin.mongod` / :binary:`~bin.mongos` logs a warning on - connection if the presented x.509 certificate expires within ``30`` - days of the ``mongod/mongos`` host system time. See - :ref:`4.4-rel-notes-certificate-expiration-warning` for more - information. + :binary:`~bin.mongod` / :binary:`~bin.mongos` logs a warning on + connection if the presented x.509 certificate expires within ``30`` + days of the ``mongod/mongos`` host system time. See + :ref:`4.4-rel-notes-certificate-expiration-warning` for more + information. --- ref: 4.4-changes-passwordPrompt content: | - Starting in MongoDB 4.4, if you use the - ``db.auth(, )`` syntax and omit the password, - the user is prompted to enter a password. + If you use the ``db.auth(, )`` syntax and omit the + password, the user is prompted to enter a password. --- ref: 4.4-changes-removed-commands content: | @@ -129,9 +126,8 @@ content: | ref: 4.4-changes-timestamp-format content: | - Starting in MongoDB 4.4, |timestampfmt| no longer supports ``ctime``. - An example of ``ctime`` formatted date is: ``Wed Dec 31 - 18:17:54.811``. + |timestampfmt| no longer supports ``ctime``. An example of ``ctime`` + formatted date is: ``Wed Dec 31 18:17:54.811``. --- ref: 4.4-changes-meta-convergence content: | @@ -144,10 +140,9 @@ content: | ref: 4.4-changes-projection-sort-meta-list content: | - - Starting in MongoDB 4.4, you can specify the - :expression:`{ $meta: "textScore" } <$meta>` expression in the - :method:`~cursor.sort()` without also specifying the expression in - the projection. 
For example, + - You can specify the :expression:`{ $meta: "textScore" } <$meta>` + expression in the :method:`~cursor.sort()` without also specifying the + expression in the projection. For example: .. code-block:: javascript @@ -158,16 +153,10 @@ content: | As a result, you can sort the resulting documents by their search relevance without projecting the ``textScore``. - | In earlier versions, to include - :expression:`{ $meta: "textScore" } <$meta>` expression in the - :method:`~cursor.sort()`, you must also include the same - expression in the projection. - - - Starting in MongoDB 4.4, if you include the - :expression:`{ $meta: "textScore" } <$meta>` expression in both the - :ref:`projection ` and :method:`~cursor.sort()`, - the projection and sort documents can have different field names - for the expression. + - If you include the :expression:`{ $meta: "textScore" } <$meta>` expression + in both the :ref:`projection ` and + :method:`~cursor.sort()`, the projection and sort documents can have + different field names for the expression. | For example, in the following operation, the projection uses a field named ``score`` for the expression and the @@ -180,10 +169,6 @@ content: | { score: { $meta: "textScore" } } ).sort( { ignoredName: { $meta: "textScore" } } ) - In previous versions of MongoDB, if ``{ $meta: "textScore" }`` is - included in both the projection and sort, you must specify the - same field name for the expression. - --- ref: 4.4-changes-textscore-predicate content: | @@ -227,42 +212,36 @@ content: | ref: 4.4-changes-natural-sort-views content: | - Starting in MongoDB 4.4, you can specify a :operator:`$natural` - sort when running a :dbcommand:`find` operation against a - :ref:`view `. + You can specify a :operator:`$natural` sort when running a :dbcommand:`find` + operation against a :ref:`view `. --- ref: 4.4-changes-drop-in-progress-indexes content: | - Starting in MongoDB 4.4, the :method:`db.collection.drop()` method and - :dbcommand:`drop` command abort any in-progress index builds on the - target collection before dropping the collection. Prior to MongoDB - 4.4, attempting to drop a collection with in-progress index builds - results in an error, and the collection is not dropped. + The :method:`db.collection.drop()` method and :dbcommand:`drop` command + abort any in-progress index builds on the target collection before dropping + the collection. .. include:: /includes/fact-abort-index-build-replica-sets.rst --- ref: 4.4-changes-drop-database-in-progress-indexes content: | - Starting in MongoDB 4.4, the :method:`db.dropDatabase()` method and - :dbcommand:`dropDatabase` command abort any in-progress index builds - on collections in the target database before dropping the database. - Aborting an index build has the same effect as dropping the built - index. Prior to MongoDB 4.4, attempting to drop a database that - contains a collection with an in-progress index build results in an - error, and the database is not dropped. + The :method:`db.dropDatabase()` method and :dbcommand:`dropDatabase` command + abort any in-progress index builds on collections in the target database + before dropping the database. Aborting an index build has the same effect as + dropping the built index. --- ref: 4.4-changes-minimum-oplog-retention-period content: | - Starting in MongoDB 4.4, you can specify the minimum number of hours - to preserve an oplog entry. 
The :binary:`~bin.mongod` only removes - an oplog entry *if*: + You can specify the minimum number of hours to preserve an oplog entry + where :binary:`~bin.mongod` only removes an oplog entry *if* both of the + following criteria are met: - The oplog has reached the :ref:`maximum configured size - `, *and* + `. - The oplog entry is older than the configured number of hours based on the host system clock. @@ -300,8 +279,6 @@ content: | .. include:: /includes/extracts/transactions-cross-shard-collection-restriction.rst - For fcv ``"4.2"`` or less, the collection must already exist for - insert and ``upsert: true`` operations. --- ref: 4.4-changes-transactions-save content: | @@ -338,26 +315,20 @@ ref: 4.4-changes-index-builds-simultaneous-fcv content: | .. note:: Requires ``featureCompatibilityVersion`` 4.4+ - Each :binary:`~bin.mongod` in the replica set or sharded cluster *must* have :ref:`featureCompatibilityVersion ` set to at least ``4.4`` to start index builds simultaneously across replica set members. - - MongoDB 4.4 running ``featureCompatibilityVersion: "4.2"`` builds - indexes on the primary before replicating the index build to - secondaries. --- ref: 4.4-changes-index-builds-simultaneous content: | - Starting with MongoDB 4.4, index builds on a replica set or sharded - cluster build simultaneously across all data-bearing replica set - members. For sharded clusters, the index build occurs only on shards - containing data for the collection being indexed. The primary - requires a minimum number of data-bearing :rsconf:`voting + Index builds on a replica set or sharded cluster build simultaneously across + all data-bearing replica set members. For sharded clusters, the index build + occurs only on shards containing data for the collection being indexed. + The primary requires a minimum number of data-bearing :rsconf:`voting ` members (i.e commit quorum), including itself, that must complete the build before marking the index as ready for use. See :ref:`index-operations-replicated-build` for more @@ -366,11 +337,10 @@ content: | --- ref: 4.4-changes-index-builds-simultaneous-nolink content: | - Starting with MongoDB 4.4, index builds on a replica set or sharded - cluster build simultaneously across all data-bearing replica set - members. For sharded clusters, the index build occurs only on shards - containing data for the collection being indexed. The primary - requires a minimum number of data-bearing :rsconf:`voting + Index builds on a replica set or sharded cluster build simultaneously across + all data-bearing replica set members. For sharded clusters, the index build + occurs only on shards containing data for the collection being indexed. The + primary requires a minimum number of data-bearing :rsconf:`voting ` members (i.e commit quorum), including itself, that must complete the build before marking the index as ready for use. @@ -388,16 +358,15 @@ content: | --- ref: 4.4-validate-data-throughput content: | - Starting in version MongoDB 4.4, - - - The :pipeline:`$currentOp` and the :dbcommand:`currentOp` command - include :data:`~$currentOp.dataThroughputAverage` and - :data:`~$currentOp.dataThroughputLastSecond` information for - validate operations in progress. + + The :pipeline:`$currentOp` and the :dbcommand:`currentOp` command + include :data:`~$currentOp.dataThroughputAverage` and + :data:`~$currentOp.dataThroughputLastSecond` information for + validate operations in progress. 
- - The log messages for validate operations include - ``dataThroughputAverage`` and ``dataThroughputLastSecond`` - information. + The log messages for validate operations include + ``dataThroughputAverage`` and ``dataThroughputLastSecond`` + information. --- ref: 4.4-replSetReconfig-majority content: | @@ -465,9 +434,8 @@ content: | ref: 4.4-changes-repl-state-restrictions content: | - Starting in MongoDB 4.4, to run on a replica set member, the - following operations require the member to be in - :replstate:`PRIMARY` or :replstate:`SECONDARY` state. + To run on a replica set member, the following operations require the member + to be in :replstate:`PRIMARY` or :replstate:`SECONDARY` state. - :dbcommand:`listDatabases` - :dbcommand:`listCollections` @@ -479,30 +447,20 @@ content: | If the member is in another state, such as :replstate:`STARTUP2`, the operation errors. - In previous versions, the operations can also be run when the member - is in :replstate:`STARTUP2`. However, the operations wait - until the member transitions to :replstate:`RECOVERING`. - --- ref: 4.4-changes-repl-state-restrictions-operation content: | - Starting in MongoDB 4.4, to run on a replica set member, - |operations| operations require the member to be in - :replstate:`PRIMARY` or :replstate:`SECONDARY` state. If the member + To run on a replica set member, |operations| operations require the member + to be in :replstate:`PRIMARY` or :replstate:`SECONDARY` state. If the member is in another state, such as :replstate:`STARTUP2`, the operation errors. - In previous versions, the operations also run when the member - is in :replstate:`STARTUP2`. The operations wait until the member - transitioned to :replstate:`RECOVERING`. - --- ref: 4.4-changes-mapreduce-ignore-verbose content: | - Starting in version 4.4, MongoDB ignores the :ref:`verbose - ` option. + MongoDB ignores the :ref:`verbose ` option. --- ref: 4.4-changes-getLastErrorDefaults-deprecation content: | @@ -518,8 +476,7 @@ content: | ref: 4.4-changes-tools content: | - Starting in version 4.4, the - :doc:`Windows MSI installer + The :doc:`Windows MSI installer ` for both Community and Enterprise editions does not include the :dbtools:`MongoDB Database Tools <>` (``mongoimport``, diff --git a/source/includes/extracts-agg-operators.yaml b/source/includes/extracts-agg-operators.yaml index be3fc6f1f30..5029df8a05e 100644 --- a/source/includes/extracts-agg-operators.yaml +++ b/source/includes/extracts-agg-operators.yaml @@ -539,7 +539,6 @@ content: | - Description * - :expression:`$getField` - - Returns the value of a specified field from a document. You can use :expression:`$getField` to retrieve the value of fields with names that contain periods (``.``) or start @@ -551,12 +550,16 @@ content: | - Returns a random float between 0 and 1 * - :expression:`$sampleRate` - - Randomly select documents at a given rate. Although the exact number of documents selected varies on each run, the quantity chosen approximates the sample rate expressed as a percentage of the total number of documents. + * - :expression:`$toHashedIndexKey` + - Computes and returns the hash of the input expression using + the same hash function that MongoDB uses to create a hashed + index. + --- ref: agg-operators-objects content: | @@ -721,14 +724,10 @@ content: | - Replaces the first instance of a matched string in a given input. - .. versionadded:: 4.4 - * - :expression:`$replaceAll` - Replaces all instances of a matched string in a given input. - .. 
versionadded:: 4.4 - * - :expression:`$rtrim` - Removes whitespace or the specified characters from the @@ -896,14 +895,10 @@ content: | - Defines a custom accumulator function. - .. versionadded:: 4.4 - * - :expression:`$function` - Defines a custom function. - .. versionadded:: 4.4 - --- ref: agg-operators-type content: | @@ -932,8 +927,6 @@ content: | other :ref:`BSON type `, ``null``, or a missing field. - .. versionadded:: 4.4 - * - :expression:`$toBool` - Converts value to a boolean. @@ -1133,6 +1126,19 @@ content: | Available in the :pipeline:`$setWindowFields` stage. + * - :group:`$minN` + + - Returns an aggregation of the ``n`` minimum valued elements + in a group. + Distinct from the :expression:`$minN` array operator. + + .. versionadded:: 5.2 + + Available in :pipeline:`$group`, + :pipeline:`$setWindowFields` + and as an :ref:`expression `. + + * - :group:`$percentile` - .. include:: /includes/aggregation/fact-return-percentile.rst diff --git a/source/includes/extracts-agg-stages.yaml b/source/includes/extracts-agg-stages.yaml index 4933954dec2..66bd87b52f5 100644 --- a/source/includes/extracts-agg-stages.yaml +++ b/source/includes/extracts-agg-stages.yaml @@ -150,7 +150,7 @@ content: | * - :pipeline:`$planCacheStats` - - Returns :doc:`plan cache ` information for a + - Returns :ref:`plan cache ` information for a collection. * - :pipeline:`$project` @@ -274,8 +274,6 @@ content: | pipeline results from two collections into a single result set. - .. versionadded:: 4.4 - * - :pipeline:`$unset` - Removes/excludes fields from documents. @@ -291,6 +289,22 @@ content: | document, outputs *n* documents where *n* is the number of array elements and can be zero for an empty array. + * - :pipeline:`$vectorSearch` + + - Performs an :abbr:`ANN (Approximate Nearest Neighbor)` search on a + vector in the specified field of an + :atlas:`Atlas ` collection. + + .. versionadded:: 7.0.2 + + .. note:: + + ``$vectorSearch`` is only available for MongoDB Atlas clusters + running MongoDB v6.0.11 or higher, and is not available for + self-managed deployments. To learn more, see + :atlas:`Atlas Search Aggregation Pipeline Stages + `. + --- ref: agg-stages-db.aggregate content: | diff --git a/source/includes/extracts-bypassDocumentValidation-base.yaml b/source/includes/extracts-bypassDocumentValidation-base.yaml index 4a66930e814..f1d39e0605c 100644 --- a/source/includes/extracts-bypassDocumentValidation-base.yaml +++ b/source/includes/extracts-bypassDocumentValidation-base.yaml @@ -2,7 +2,7 @@ ref: _bypassDocValidation content: | The {{role}} {{interface}} adds support for the ``bypassDocumentValidation`` option, which lets you bypass - :doc:`document validation ` when + :ref:`document validation ` when inserting or updating documents in a collection with validation rules. ... diff --git a/source/includes/extracts-changestream.yaml b/source/includes/extracts-changestream.yaml index 45a7f756d52..827129cce03 100644 --- a/source/includes/extracts-changestream.yaml +++ b/source/includes/extracts-changestream.yaml @@ -188,15 +188,12 @@ content: | ref: changestream-rc-majority-4.2 content: | - Starting in MongoDB 4.2, :doc:`change streams ` are + :ref:`Change streams ` are available regardless of the :readconcern:`"majority"` read concern support; that is, read concern ``majority`` support can be either enabled (default) or :ref:`disabled ` to use change streams. 
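The change stream note above only states when change streams are available; a minimal sketch of actually opening one with the Kotlin Coroutine Driver follows, assuming a replica set or sharded cluster and an illustrative ``test.inventory`` namespace:

.. code-block:: kotlin

   import com.mongodb.kotlin.client.coroutine.MongoClient
   import kotlinx.coroutines.flow.take
   import kotlinx.coroutines.runBlocking
   import org.bson.Document

   fun main() = runBlocking {
       // Illustrative connection string; change streams require a replica set or sharded cluster.
       val client = MongoClient.create("mongodb://localhost:27017")
       val collection = client.getDatabase("test").getCollection<Document>("inventory")

       // Print the first three change events observed on the collection.
       collection.watch().take(3).collect { change ->
           println("${change.operationType}: ${change.fullDocument}")
       }

       client.close()
   }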
- In MongoDB 4.0 and earlier, :doc:`change streams ` are - available only if :readconcern:`"majority"` read concern support is - enabled (default). --- ref: changestream-available-pipeline-stages content: | diff --git a/source/includes/extracts-client-side-field-level-encryption.yaml b/source/includes/extracts-client-side-field-level-encryption.yaml index 7e31ac3b0ac..b07f9d9e0ca 100644 --- a/source/includes/extracts-client-side-field-level-encryption.yaml +++ b/source/includes/extracts-client-side-field-level-encryption.yaml @@ -12,10 +12,10 @@ content: | following Key Management Service (KMS) providers for Customer Master Key (CMK) management: - - :ref:`Amazon Web Services KMS ` - - :ref:`Azure Key Vault ` - - :ref:`Google Cloud Platform KMS ` - - :ref:`Locally Managed Key ` + - :ref:`Amazon Web Services KMS ` + - :ref:`Azure Key Vault ` + - :ref:`Google Cloud Platform KMS ` + - :ref:`Locally Managed Key ` *or* @@ -23,7 +23,7 @@ content: | ` to establish a connection with the required options. The command line options only support the :ref:`Amazon Web Services KMS - ` provider for CMK management. + ` provider for CMK management. --- ref: csfle-keyvault-unique-index content: | diff --git a/source/includes/extracts-collation.yaml b/source/includes/extracts-collation.yaml index 40dd16bc0f6..866a5c954a1 100644 --- a/source/includes/extracts-collation.yaml +++ b/source/includes/extracts-collation.yaml @@ -168,7 +168,13 @@ content: |- .. code-block:: javascript db.myColl.find( { score: 5, category: "cafe" } ) + + .. important:: + Matches against document keys, including embedded document keys, + use simple binary comparison. This means that a query for a key + like "foo.bár" will not match the key "foo.bar", regardless of the value you + set for the :ref:`strength ` parameter. --- ref: collation-index diff --git a/source/includes/extracts-command-field.yaml b/source/includes/extracts-command-field.yaml index 5b33a51e547..a2906b289e6 100644 --- a/source/includes/extracts-command-field.yaml +++ b/source/includes/extracts-command-field.yaml @@ -50,8 +50,7 @@ content: | (...) at the end of the string. The ``comment`` field is present if a comment was passed to the operation. - Starting in MongoDB 4.4, a comment may be attached to any :ref:`database - command `. + A comment may be attached to any :ref:`database command `. --- ref: command-field-currentOp diff --git a/source/includes/extracts-create-cmd.yaml b/source/includes/extracts-create-cmd.yaml index 4350f3399aa..187c56d40df 100644 --- a/source/includes/extracts-create-cmd.yaml +++ b/source/includes/extracts-create-cmd.yaml @@ -41,7 +41,7 @@ content: | A user with the :authrole:`readWrite` built in role on the database has the required privileges to run the listed operations. Either - :doc:`create a user ` with the required role + :ref:`create a user ` with the required role or :ref:`grant the role to an existing user `. replacement: diff --git a/source/includes/extracts-export-tools-performance-considerations-base.yaml b/source/includes/extracts-export-tools-performance-considerations-base.yaml index 75fb22fc94c..1686519cf1c 100644 --- a/source/includes/extracts-export-tools-performance-considerations-base.yaml +++ b/source/includes/extracts-export-tools-performance-considerations-base.yaml @@ -15,7 +15,7 @@ content: | backup as well as the point in time that the backup reflects. 
- Use an alternative backup strategy such as - :doc:`Filesystem Snapshots ` + :ref:`Filesystem Snapshots ` or :atlas:`Cloud Backups in MongoDB Atlas ` if the performance impact of {{out_tool}} and {{in_tool}} is unacceptable for your use case. diff --git a/source/includes/extracts-fact-findandmodify-return.yaml b/source/includes/extracts-fact-findandmodify-return.yaml index 88747a44775..6c1677daa80 100644 --- a/source/includes/extracts-fact-findandmodify-return.yaml +++ b/source/includes/extracts-fact-findandmodify-return.yaml @@ -8,7 +8,7 @@ content: | - If ``new`` is ``true``: - - the modified document if the query returns a match; + - the updated document if the query returns a match; - the inserted document if ``upsert: true`` and no document matches the query; diff --git a/source/includes/extracts-filter.yaml b/source/includes/extracts-filter.yaml index 3ed7baf4b36..d086244f55f 100644 --- a/source/includes/extracts-filter.yaml +++ b/source/includes/extracts-filter.yaml @@ -24,13 +24,23 @@ content: | { : , ... } + - id: c + content: | + To specify equality conditions, use ``:`` + expressions in the + :ref:`query filter document `: + + .. code-block:: c + + { : , ... } + - id: python content: | To specify equality conditions, use ``:`` expressions in the - :ref:`query filter document `: + :ref:`query filter document `: - .. code-block:: python + .. code-block:: python { : , ... } @@ -39,89 +49,100 @@ content: | To specify equality conditions, use ``:`` expressions in the :ref:`query filter document `: - + .. code-block:: python - + { : , ... } - + - id: java-sync content: | To specify equality conditions, use the ``com.mongodb.client.model.Filters.eq_`` method to create the :ref:`query filter document `: - + .. code-block:: java - - and(eq( , ), eq( , ) ...) - + + and(eq(, ), eq(, ) ...) + - id: java-async content: | To specify equality conditions, use the com.mongodb.client.model.Filters.eq_ method to create the :ref:`query filter document `: - + .. code-block:: java - - and(eq( , ), eq( , ) ...) - + + and(eq(, ), eq(, ) ...) + + - id: kotlin-coroutine + content: | + To specify equality conditions, use the + `Filters.eq() + <{+java-api-docs+}/mongodb-driver-core/com/mongodb/client/model/Filters.html#eq(java.lang.String,TItem)>`__ + method to create the :ref:`query filter document `: + + .. code-block:: kotlin + + and(eq(, ), eq(, ) ...) + - id: nodejs content: | To specify equality conditions, use ``:`` expressions in the :ref:`query filter document `: - + .. code-block:: javascript - + { : , ... } - + - id: php content: | To specify equality conditions, use `` => `` expressions in the :ref:`query filter document `: - + .. code-block:: php - + [ => , ... ] - + - id: perl content: | To specify equality conditions, use `` => `` expressions in the :ref:`query filter document `: - + .. code-block:: perl - + { => , ... } - + - id: ruby content: | To specify equality conditions, use `` => `` expressions in the :ref:`query filter document `: - + .. code-block:: ruby - + { => , ... } - + - id: scala content: | To specify equality conditions, use the ``com.mongodb.client.model.Filters.eq_`` method to create the :ref:`query filter document `: - + .. code-block:: scala - + and(equal(, ), equal(, ) ...) - + - id: csharp content: | To specify equality conditions, construct a filter using the :csharp-api:`Eq ` method: - + .. code-block:: c# - + Builders.Filter.Eq(, ); --- ref: filter-equality-embedded @@ -146,6 +167,14 @@ content: | ``{ : }`` where ```` is the document to match. 
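For readers following along in ``mongosh``, here is a small illustrative sketch of the embedded-document equality match described above (the ``inventory`` collection and ``size`` field are examples only); the match compares the entire embedded document, including field order:

.. code-block:: javascript

   // Matches only documents whose size field equals this exact embedded
   // document, with the fields in this exact order.
   db.inventory.find( { size: { h: 14, w: 21, uom: "cm" } } )

   // To match on a single nested field instead, use dot notation.
   db.inventory.find( { "size.uom": "cm" } )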
+ - id: c + content: | + To specify an equality condition on a field that is an + embedded/nested document, use the + :ref:`query filter document ` + ``{ : }`` where ```` is the document + to match. + - id: python content: | To specify an equality condition on a field that is an @@ -221,13 +250,13 @@ content: | embedded/nested document, construct a filter using the :csharp-api:`Eq ` - method: + method, where ```` is the document to match: .. code-block:: c# Builders.Filter.Eq(, ) - ```` is the document to match. + --- ref: filter-equality-array content: | @@ -247,6 +276,12 @@ content: | document ``{ : }`` where ```` is the exact array to match, including the order of the elements. + - id: c + content: | + To specify equality condition on an array, use the query + document ``{ : }`` where ```` is the + exact array to match, including the order of the elements. + - id: python content: | To specify equality condition on an array, use the query @@ -308,14 +343,16 @@ content: | content: | To specify equality condition on an array, construct a filter using the :csharp-api:`Eq - ` method: + ` method, + where ```` is the exact array to match, including the + order of the elements: .. code-block:: c# Builders.Filter.Eq(, ) - ```` is the exact array to match, including the - order of the elements. + + --- ref: filter-equality-array-element content: | @@ -329,6 +366,12 @@ content: | with the specified value, use the filter ``{ : }`` where ```` is the element value. + - id: c + content: | + To query if the array field contains at least *one* element + with the specified value, use the filter + ``{ : }`` where ```` is the element value. + - id: python content: | To query if the array field contains at least *one* element @@ -392,139 +435,162 @@ content: | To query if the array field contains at least *one* element with the specified value, construct a filter using the :csharp-api:`Eq - ` method: + ` method, + where ```` is the element value to match: .. code-block:: c# Builders.Filter.Eq(, ) - ```` is the element value to match. + --- ref: filter-query-operators content: | .. tabs-drivers:: - + tabs: - id: shell content: | A :ref:`query filter document ` can use the :ref:`query operators ` to specify conditions in the following form: - + .. code-block:: javascript - + { : { : }, ... } - + - id: compass content: | A :ref:`query filter document ` can use the :ref:`query operators ` to specify conditions in the following form: - + .. code-block:: javascript - + { : { : }, ... } + + - id: c + content: | + A :ref:`query filter document ` can + use the :ref:`query operators ` to specify + conditions in the following form: + + .. code-block:: c + + { : { : }, ... } - id: python content: | A :ref:`query filter document ` can use the :ref:`query operators ` to specify conditions in the following form: - - .. code-block:: python - + + .. code-block:: python + { : { : }, ... } - + - id: motor content: | A :ref:`query filter document ` can use the :ref:`query operators ` to specify conditions in the following form: - + .. code-block:: python - + { : { : }, ... } - + - id: java-sync content: | - + In addition to the equality condition, MongoDB provides various :ref:`query operators ` to specify filter conditions. Use the com.mongodb.client.model.Filters_ helper methods to facilitate the creation of filter documents. For example: - + .. 
code-block:: java - + and(gte(, ), lt(, ), eq(, )) - + - id: java-async content: | - + In addition to the equality condition, MongoDB provides various :ref:`query operators ` to specify filter conditions. Use the com.mongodb.client.model.Filters_ helper methods to facilitate the creation of filter documents. For example: - + .. code-block:: java - + and(gte(, ), lt(, ), eq(, )) - + + - id: kotlin-coroutine + content: | + + In addition to the equality condition, MongoDB provides + various :ref:`query operators ` to specify + filter conditions. Use the `com.mongodb.client.model.Filters <{+java-api-docs+}/mongodb-driver-core/com/mongodb/client/model/Filters.html>`__ helper methods to + facilitate the creation of filter documents. For example: + + .. code-block:: kotlin + + and(gte(, ), lt(, ), eq(, )) + - id: nodejs content: | A :ref:`query filter document ` can use the :ref:`query operators ` to specify conditions in the following form: - + .. code-block:: javascript - + { : { : }, ... } - + - id: php content: | A :ref:`query filter document ` can use the :ref:`query operators ` to specify conditions in the following form: - + .. code-block:: php - + [ => [ => ], ... ] - + - id: perl content: | A :ref:`query filter document ` can use the :ref:`query operators ` to specify conditions in the following form: - + .. code-block:: perl - + { => { => }, ... } - + - id: ruby content: | A :ref:`query filter document ` can use the :ref:`query operators ` to specify conditions in the following form: - + .. code-block:: ruby - + { => { => }, ... } - + - id: scala content: | - + In addition to the equality condition, MongoDB provides various :ref:`query operators ` to specify filter conditions. Use the ``com.mongodb.client.model.Filters_`` helper methods to facilitate the creation of filter documents. For example: - + .. code-block:: scala - + and(gte(, ), lt(, ), equal(, )) - + - id: csharp content: | In addition to the equality filter, MongoDB provides @@ -532,56 +598,57 @@ content: | filter conditions. Use the :csharp-api:`FilterDefinitionBuilder ` methods to create a filter document. For example: - + .. code-block:: c# - + var builder = Builders.Filter; builder.And(builder.Eq(, ), builder.Lt(, )); + --- ref: filter-query-operators-array content: | .. tabs-drivers:: - + tabs: - id: shell content: | To specify conditions on the elements in the array field, use :ref:`query operators ` in the :ref:`query filter document `: - + .. code-block:: javascript - + { : { : , ... } } - + - id: compass content: | To specify conditions on the elements in the array field, use :ref:`query operators ` in the :ref:`query filter document `: - + .. code-block:: javascript - + { : { : , ... } } - - id: python + - id: c content: | To specify conditions on the elements in the array field, use :ref:`query operators ` in the :ref:`query filter document `: - .. code-block:: python + .. code-block:: c { : { : , ... } } - - - id: motor + + - id: python content: | To specify conditions on the elements in the array field, use :ref:`query operators ` in the :ref:`query filter document `: .. code-block:: python - + { : { : , ... } } - id: java-sync @@ -589,79 +656,89 @@ content: | To specify conditions on the elements in the array field, use :ref:`query operators ` in the :ref:`query filter document `. For example: - + .. code-block:: java - + and(gte(, ), lt(, ) ...) - + - id: java-async content: | To specify conditions on the elements in the array field, use :ref:`query operators ` in the :ref:`query filter document `. 
For example: - + .. code-block:: java - + and(gte(, ), lt(, ) ...) - + + - id: kotlin-coroutine + content: | + To specify conditions on the elements in the array field, + use :ref:`query operators ` in the + :ref:`query filter document `. For example: + + .. code-block:: kotlin + + and(gte(, ), lt(, ) ...) + - id: nodejs content: | To specify conditions on the elements in the array field, use :ref:`query operators ` in the :ref:`query filter document `: - + .. code-block:: javascript - + { : { : , ... } } - + - id: php content: | To specify conditions on the elements in the array field, use :ref:`query operators ` in the :ref:`query filter document `: - + .. code-block:: php - + [ => [ => , ... ] ] - + - id: perl content: | To specify conditions on the elements in the array field, use :ref:`query operators ` in the :ref:`query filter document `: - + .. code-block:: perl - + { => { => , ... } } - + - id: ruby content: | To specify conditions on the elements in the array field, use :ref:`query operators ` in the :ref:`query filter document `: - + .. code-block:: ruby - + { => { => , ... } } - + - id: scala content: | To specify conditions on the elements in the array field, use :ref:`query operators ` in the :ref:`query filter document `: - + .. code-block:: scala - + and(gte(, ), lt(, ) ...) - + - id: csharp content: | To specify conditions on the elements in the array field, use :ref:`query operators ` in the :ref:`query filter document `. For example: - + .. code-block:: c# - + var builder = Builders.Filter; builder.And(builder.Eq(, ), builder.Lt(, )); -... + diff --git a/source/includes/extracts-inequality-operators-selectivity.yaml b/source/includes/extracts-inequality-operators-selectivity.yaml index 939653b93ad..6735bdad1e3 100644 --- a/source/includes/extracts-inequality-operators-selectivity.yaml +++ b/source/includes/extracts-inequality-operators-selectivity.yaml @@ -18,9 +18,9 @@ content: | --- ref: ne_operators_selectivity content: | - The inequality operator :query:`$ne` is *not* very selective since + The inequality operator ``$ne`` is *not* very selective since it often matches a large portion of the index. As a result, in many - cases, a :query:`$ne` query with an index may perform no better - than a :query:`$ne` query that must scan all documents in a + cases, a ``$ne`` query with an index may perform no better + than a ``$ne`` query that must scan all documents in a collection. See also :ref:`read-operations-query-selectivity`. ... diff --git a/source/includes/extracts-linux-config-expectations.yaml b/source/includes/extracts-linux-config-expectations.yaml index b6ede6c28ad..f59780cf28f 100644 --- a/source/includes/extracts-linux-config-expectations.yaml +++ b/source/includes/extracts-linux-config-expectations.yaml @@ -1,8 +1,8 @@ ref: _linux-config-expectations content: | The Linux package init scripts do not expect {{option}} to change from the - defaults. If you use the Linux packages and change {{option}}, you will have - to use your own init scripts and disable the built-in scripts. + defaults. If you use the Linux packages and change {{option}}, you must + use your own init scripts and disable the built-in scripts. 
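Returning to the ``$ne`` selectivity note above, the following ``mongosh`` sketch (the ``orders`` collection and ``status`` index are hypothetical) shows one way to confirm that such a query examines most of the index:

.. code-block:: javascript

   // Hypothetical collection with an index on the status field.
   db.orders.createIndex( { status: 1 } )

   // $ne matches every document whose status is not "A", so the index
   // scan still visits most index entries and selectivity is poor.
   db.orders.find( { status: { $ne: "A" } } ).explain( "executionStats" )

   // In the explain output, compare totalKeysExamined and totalDocsExamined
   // with nReturned to gauge how much work the query does.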
--- ref: linux-config-expectations-systemlog-path replacement: diff --git a/source/includes/extracts-missing-shard-key-equality-condition.yaml b/source/includes/extracts-missing-shard-key-equality-condition.yaml index 5baffc7490a..4d4606aed50 100644 --- a/source/includes/extracts-missing-shard-key-equality-condition.yaml +++ b/source/includes/extracts-missing-shard-key-equality-condition.yaml @@ -1,7 +1,7 @@ ref: missing-shard-key-equality-condition-findAndModify content: | - Starting in version 4.4, documents in a sharded collection can be + Documents in a sharded collection can be :ref:`missing the shard key fields `. To target a document that is missing the shard key, you can use the ``null`` equality match :red:`in conjunction with` another filter condition @@ -15,7 +15,7 @@ content: | ref: missing-shard-key-equality-condition-update content: | - However, starting in version 4.4, documents in a sharded collection can be + However, documents in a sharded collection can be :ref:`missing the shard key fields `. To target a document that is missing the shard key, you can use the ``null`` equality match :red:`in conjunction with` another filter condition diff --git a/source/includes/extracts-production-notes-base.yaml b/source/includes/extracts-production-notes-base.yaml index a153957f964..f9db2f44536 100644 --- a/source/includes/extracts-production-notes-base.yaml +++ b/source/includes/extracts-production-notes-base.yaml @@ -8,15 +8,21 @@ content: | virtual machines. {{software}}'s balloon driver {{balloonDriverLiteral}} reclaims the pages that are considered least valuable. - The balloon driver resides inside the guest operating system. When the balloon driver expands, - it may induce the guest operating system to reclaim memory from guest - applications, which can interfere with MongoDB's memory management and - affect MongoDB's performance. + The balloon driver resides inside the guest operating system. Under + certain configurations, when the balloon driver expands, it can + interfere with MongoDB's memory management and affect MongoDB's + performance. - Do not disable the balloon driver and memory - overcommitment features. This can cause the hypervisor to use its swap which - will affect performance. Instead, map and reserve the full amount of - memory for the virtual machine running MongoDB. This ensures that the balloon - will not be inflated in the local operating system if there is memory - pressure in the hypervisor due to an overcommitted configuration. + To prevent negative performance impact from the balloon driver and + memory overcommitment features, reserve the full amount of memory for + the virtual machine running MongoDB. Reserving the appropriate amount + of memory for the virtual machine prevents the balloon from inflating + in the local operating system when there is memory pressure in the + hypervisor. + + Even though the balloon driver and memory overcommitment features can + negatively affect MongoDB performance under certain configurations, + **do not disable these features**. If you disable these features, the + hypervisor may use its swap space to fulfill memory requests, which + negatively affects performance. ... 
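As a concrete illustration of the missing-shard-key targeting described in the missing shard key extracts above, here is a hedged ``mongosh`` sketch; the ``contacts`` collection, the ``zipcode`` shard key, and the ``_id`` value are assumptions for the example only:

.. code-block:: javascript

   // Assume a collection sharded on { zipcode: 1 } in which some
   // documents are missing the zipcode field.
   db.contacts.updateOne(
      { _id: 123, zipcode: null },   // null equality match plus another filter condition
      { $set: { status: "inactive" } }
   )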
diff --git a/source/includes/extracts-projection.yaml b/source/includes/extracts-projection.yaml index 4342c3155ca..d85391ca06f 100644 --- a/source/includes/extracts-projection.yaml +++ b/source/includes/extracts-projection.yaml @@ -1,8 +1,8 @@ ref: projection-path-collision-embedded-document-full content: | - Starting in MongoDB 4.4, it is illegal to project an embedded - document with any of the embedded document's fields. + You cannot project an embedded document with any of the embedded + document's fields. For example, consider a collection ``inventory`` with documents that contain a ``size`` field: @@ -12,14 +12,13 @@ content: | { ..., size: { h: 10, w: 15.25, uom: "cm" }, ... } - Starting in MongoDB 4.4, the following operation fails with a ``Path - collision`` error because it attempts to project both ``size`` document - and the ``size.uom`` field: + The following operation fails with a ``Path collision`` error because it + attempts to project both ``size`` document and the ``size.uom`` field: .. code-block:: javascript :copyable: false - db.inventory.find( {}, { size: 1, "size.uom": 1 } ) // Invalid starting in 4.4 + db.inventory.find( {}, { size: 1, "size.uom": 1 } ) In previous versions, lattermost projection between the embedded documents and its fields determines the projection: @@ -38,9 +37,8 @@ content: | ref: projection-path-collision-slice-embedded-field-full content: | - Starting in MongoDB 4.4, |findoperation| projection - cannot contain both a :projection:`$slice` of an array and a field - embedded in the array. + |findoperation| projection cannot contain both a :projection:`$slice` of an + array and a field embedded in the array. For example, consider a collection ``inventory`` that contains an array field ``instock``: @@ -50,13 +48,13 @@ content: | { ..., instock: [ { warehouse: "A", qty: 35 }, { warehouse: "B", qty: 15 }, { warehouse: "C", qty: 35 } ], ... } - Starting in MongoDB 4.4, the following operation fails with a ``Path + The following operation fails with a ``Path collision`` error: .. code-block:: javascript :copyable: false - db.inventory.find( {}, { "instock": { $slice: 1 }, "instock.warehouse": 0 } ) // Invalid starting in 4.4 + db.inventory.find( {}, { "instock": { $slice: 1 }, "instock.warehouse": 0 } ) In previous versions, the projection applies both projections and returns the first element (``$slice: 1``) in the ``instock`` array @@ -69,35 +67,29 @@ content: | ref: projection-dollar-prefixed-field-full content: | - Starting in MongoDB 4.4, the |findoperation| projection cannot - project a field that starts with ``$`` with the exception of the - :ref:`DBRef fields `. + The |findoperation| projection cannot project a field that starts with + ``$`` with the exception of the :ref:`DBRef fields `. - For example, starting in MongoDB 4.4, the following operation is - invalid: + For example, the following operation is invalid: .. code-block:: javascript :copyable: false - db.inventory.find( {}, { "$instock.warehouse": 0, "$item": 0, "detail.$price": 1 } ) // Invalid starting in 4.4 + db.inventory.find( {}, { "$instock.warehouse": 0, "$item": 0, "detail.$price": 1 } ) - In earlier version, MongoDB ignores the ``$``-prefixed field - projections. --- ref: projection-positional-operator-slice-full content: | - Starting in MongoDB 4.4, |findoperation| projection - cannot include :projection:`$slice` projection expression as part of a - :projection:`$` projection expression. 
+ |findoperation| projection cannot include :projection:`$slice` projection + expression as part of a :projection:`$` projection expression. - For example, starting in MongoDB 4.4, the following operation is - invalid: + For example, the following operation is invalid: .. code-block:: javascript :copyable: false - db.inventory.find( { "instock.qty": { $gt: 25 } }, { "instock.$": { $slice: 1 } } ) // Invalid starting in 4.4 + db.inventory.find( { "instock.qty": { $gt: 25 } }, { "instock.$": { $slice: 1 } } ) In previous versions, MongoDB returns the first element (``instock.$``) in the ``instock`` array that matches the query @@ -108,16 +100,16 @@ content: | --- ref: projection-empty-field-full content: | - Starting in MongoDB 4.4, |findoperation| projection - cannot include a projection of an empty field name. + + |findoperation| projection cannot include a projection of an empty field + name. - For example, starting in MongoDB 4.4, the following operation is - invalid: + For example, the following operation is invalid: .. code-block:: javascript :copyable: false - db.inventory.find( { }, { "": 0 } ) // Invalid starting in 4.4 + db.inventory.find( { }, { "": 0 } ) In previous versions, MongoDB treats the inclusion/exclusion of the empty field as it would the projection of non-existing fields. @@ -172,10 +164,10 @@ content: | - Specifies the value of the projected field. - Starting in MongoDB 4.4, with the use of :ref:`aggregation - expressions and syntax `, including - the use of literals and aggregation variables, you can project - new fields or project existing fields with new values. + With the use of :ref:`aggregation expressions and syntax + `, including the use of literals and + aggregation variables, you can project new fields or project existing + fields with new values. - If you specify a non-numeric, non-boolean literal (such as a literal string or an array or an operator expression) for @@ -199,7 +191,6 @@ content: | or ``false`` to indicate the inclusion or exclusion of the field. - .. versionadded:: 4.4 --- ref: projection-values-table-without-meta content: | @@ -242,7 +233,7 @@ content: | - Specifies the value of the projected field. - Starting in MongoDB 4.4, with the use of :ref:`aggregation + With the use of :ref:`aggregation expressions and syntax `, including the use of literals and aggregation variables, you can project new fields or project existing fields with new values. @@ -269,7 +260,6 @@ content: | or ``false`` to indicate the inclusion or exclusion of the field. - .. versionadded:: 4.4 --- ref: projection-embedded-field-format content: | @@ -278,8 +268,7 @@ content: | - :ref:`dot notation `, for example ``"field.nestedfield": `` - - nested form, for example ``{ field: { nestedfield: } }`` (*Starting in - MongoDB 4.4*) + - nested form, for example ``{ field: { nestedfield: } }`` --- ref: projection-language-consistency-admonition @@ -288,8 +277,7 @@ content: | .. important:: Language Consistency - Starting in MongoDB 4.4, as part of making - :method:`~db.collection.find` and + As part of making :method:`~db.collection.find` and :method:`~db.collection.findAndModify` projection consistent with aggregation's :pipeline:`$project` stage, @@ -304,10 +292,9 @@ content: | ref: projection-elemMatch-projection-field-order content: | - Starting in MongoDB 4.4, regardless of the ordering of the fields - in the document, the :projection:`$elemMatch` projection of an - existing field returns the field after the other existing field - inclusions. 
+ Regardless of the ordering of the fields in the document, the + :projection:`$elemMatch` projection of an existing field returns + the field after the other existing field inclusions. For example, consider a ``players`` collection with the following document: @@ -320,7 +307,7 @@ content: | lastLogin: new Date("2020-05-01") } ) - In version 4.4+, the following projection returns the ``games`` field + The following projection returns the ``games`` field after the other existing fields included in the projection even though in the document, the field is listed before ``joined`` and ``lastLogin`` fields: @@ -357,31 +344,26 @@ content: | ref: projection-positional-operator-path content: | - Starting in MongoDB 4.4, the :projection:`$` projection operator can - only appear at the end of the field path, for example ``"field.$"`` - or ``"fieldA.fieldB.$"``. + The :projection:`$` projection operator can only appear at the end of the + field path, for example ``"field.$"`` or ``"fieldA.fieldB.$"``. - For example, starting in MongoDB 4.4, the following operation is - invalid: + For example, the following operation is invalid: .. code-block:: javascript :copyable: false - db.inventory.find( { }, { "instock.$.qty": 1 } ) // Invalid starting in 4.4 + db.inventory.find( { }, { "instock.$.qty": 1 } ) To resolve, remove the component of the field path that follows the :projection:`$` projection operator. - - In previous versions, MongoDB ignores the part of the path that follows - the ``$``; i.e. the projection is treated as ``"instock.$"``. + --- ref: projection-slice-operator-inclusion content: | - Starting in MongoDB 4.4, the :projection:`$slice` projection of an - array in an nested document no longer returns the other fields in - the nested document when the projection - is part of an inclusion - projection. + The :projection:`$slice` projection of an array in a nested document no + longer returns the other fields in the nested document when the projection + is part of an inclusion projection. For example, consider a collection ``inventory`` with documents that contain a ``size`` field: @@ -391,9 +373,9 @@ content: | { item: "socks", qty: 100, details: { colors: [ "blue", "red" ], sizes: [ "S", "M", "L"] } } - Starting in MongoDB 4.4, the following operation projects the - ``_id`` field (by default), the ``qty`` field, and the ``details`` - field with just the specified slice of the ``colors`` array: + The following operation projects the ``_id`` field (by default), the + ``qty`` field, and the ``details`` field with just the specified slice + of the ``colors`` array: .. code-block:: javascript diff --git a/source/includes/extracts-replSetReconfig.yaml b/source/includes/extracts-replSetReconfig.yaml index afcd0f994e4..19b7255cd0d 100644 --- a/source/includes/extracts-replSetReconfig.yaml +++ b/source/includes/extracts-replSetReconfig.yaml @@ -1,7 +1,7 @@ ref: replSetReconfig-majority content: | - Starting in MongoDB 4.4, |reconfig| waits until a majority of voting + |reconfig| waits until a majority of voting replica set members install the new replica configuration before returning success. A voting member is *any* replica set member where :rsconf:`members[n].votes` is ``1``, including arbiters. @@ -73,7 +73,7 @@ content: | ref: replSetReconfig-single-node content: | - Starting in MongoDB 4.4, |reconfig| by default allows adding or + |reconfig| by default allows adding or removing no more than ``1`` :rsconf:`voting ` member at a time. 
For example, a new configuration can make at most *one* of the following changes to the cluster :rsconf:`membership diff --git a/source/includes/extracts-rs-stepdown.yaml b/source/includes/extracts-rs-stepdown.yaml index 8b1e43ef75b..b0970a56ddb 100644 --- a/source/includes/extracts-rs-stepdown.yaml +++ b/source/includes/extracts-rs-stepdown.yaml @@ -39,7 +39,7 @@ content: | ``secondaryCatchUpPeriodSeconds``, by default 10 seconds, for a secondary to become up-to-date with the primary. The primary only steps down if a secondary is up-to-date with the primary during the - catchup period to prevent :doc:`rollbacks `. + catchup period to prevent :ref:`rollbacks `. If no electable secondary meets this criterion by the end of the waiting period, the primary does not step down and the |command-method| errors. diff --git a/source/includes/extracts-server-status-projection-base.yaml b/source/includes/extracts-server-status-projection-base.yaml index 937e2d0f6fb..516b206d0a4 100644 --- a/source/includes/extracts-server-status-projection-base.yaml +++ b/source/includes/extracts-server-status-projection-base.yaml @@ -5,7 +5,7 @@ content: | - some content in the :ref:`server-status-repl` document. - - :ref:`server-status-mirroredReads` document. (*Available starting in version 4.4*) + - :ref:`server-status-mirroredReads` document. To include fields that are excluded by default, specify the top-level field and set it to ``1`` in the command. To exclude fields that are included diff --git a/source/includes/extracts-ssl-facts.yaml b/source/includes/extracts-ssl-facts.yaml index fe7d3a04a90..38833f8bd73 100644 --- a/source/includes/extracts-ssl-facts.yaml +++ b/source/includes/extracts-ssl-facts.yaml @@ -16,9 +16,9 @@ content: | If ``--tlsCAFile``/``net.tls.CAFile`` (or their aliases ``--sslCAFile``/``net.ssl.CAFile``) is not specified - and you are not using x.509 authentication, the system-wide CA - certificate store will be used when connecting to an TLS/SSL-enabled - server. + and you are not using x.509 authentication, you must set the + :parameter:`tlsUseSystemCA` parameter to ``true``. This makes MongoDB use + the system-wide CA certificate store when connecting to a TLS-enabled server. .. include:: /includes/extracts/ssl-facts-x509-ca-file.rst diff --git a/source/includes/extracts-tls-facts.yaml b/source/includes/extracts-tls-facts.yaml index 2bcec88b450..397ce0379e6 100644 --- a/source/includes/extracts-tls-facts.yaml +++ b/source/includes/extracts-tls-facts.yaml @@ -13,9 +13,9 @@ ref: tls-facts-ca-file content: | If ``--tlsCAFile`` or ``tls.CAFile`` is not - specified and you are not using x.509 authentication, the - system-wide CA certificate store will be used when connecting to an - TLS-enabled server. + specified and you are not using x.509 authentication, you must set the + :parameter:`tlsUseSystemCA` parameter to ``true``. This makes MongoDB use + the system-wide CA certificate store when connecting to a TLS-enabled server. .. include:: /includes/extracts/tls-facts-x509-ca-file.rst diff --git a/source/includes/extracts-transactions.yaml b/source/includes/extracts-transactions.yaml index 0d11e300d12..5b42098e719 100644 --- a/source/includes/extracts-transactions.yaml +++ b/source/includes/extracts-transactions.yaml @@ -60,8 +60,6 @@ content: | ref: transactions-operations-restrictions content: | - .. versionchanged:: 4.4 - The following operations are not allowed in transactions: - Creating new collections in cross-shard write transactions. 
For @@ -207,37 +205,87 @@ content: | ref: transactions-stale-reads content: | - Read operations inside a transaction can return stale data. That is, - read operations inside a transaction are not guaranteed to see - writes performed by other committed transactions or - non-transactional writes. For - example, consider the following sequence: 1) a transaction is - in-progress 2) a write outside the transaction deletes a document 3) - a read operation inside the transaction is able to read the - now-deleted document since the operation is using a snapshot from - before the write. + Read operations inside a transaction can return old data, which is known as a + :term:`stale read`. Read operations inside a transaction are not guaranteed + to see writes performed by other committed transactions or + non-transactional writes. For example, consider the following sequence: + + #. A transaction is in-progress. + + #. A write outside the transaction deletes a document. + + #. A read operation inside the transaction can read the now-deleted document + since the operation uses a snapshot from before the write operation. To avoid stale reads inside transactions for a single document, you - can use the :method:`db.collection.findOneAndUpdate()` method. For - example: - - .. code-block:: javascript - - session.startTransaction( { readConcern: { level: "snapshot" }, writeConcern: { w: "majority" } } ); - - employeesCollection = session.getDatabase("hr").employees; - - employeeDoc = employeesCollection.findOneAndUpdate( - { _id: 1, employee: 1, status: "Active" }, - { $set: { employee: 1 } }, - { returnNewDocument: true } - ); - - - If the employee document has changed outside the transaction, then - the transaction aborts. + can use the :method:`db.collection.findOneAndUpdate()` method. The following + :binary:`~bin.mongosh` example demonstrates how you can use + ``db.collection.findOneAndUpdate()`` to take a :term:`write lock` and ensure + that your reads are up to date: + + .. procedure:: + :style: normal + + .. step:: Insert a document into the ``employees`` collection + + .. code-block:: javascript + :copyable: true + + db.getSiblingDB("hr").employees.insertOne( + { _id: 1, status: "Active" } + ) + + .. step:: Start a session + + .. code-block:: javascript + :copyable: true + + session = db.getMongo().startSession( { readPreference: { mode: "primary" } } ) + + .. step:: Start a transaction + + .. code-block:: javascript + :copyable: true + + session.startTransaction( { readConcern: { level: "snapshot" }, writeConcern: { w: "majority" } } ) + + employeesCollection = session.getDatabase("hr").employees + + .. step:: Use ``db.collection.findOneAndUpdate()`` inside the transaction + + .. code-block:: javascript + :copyable: true + + employeeDoc = employeesCollection.findOneAndUpdate( + { _id: 1, status: "Active" }, + { $set: { lockId: ObjectId() } }, + { returnNewDocument: true } + ) + + Note that inside the transaction, the ``findOneAndUpdate`` operation + sets a new ``lockId`` field. You can set ``lockId`` field to any + value, as long as it modifies the document. By updating the + document, the transaction acquires a lock. + + If an operation outside of the transaction attempts to modify the + document before you commit the transaction, MongoDB returns a write + conflict error to the external operation. + + .. step:: Commit the transaction + + .. code-block:: javascript + :copyable: true + + session.commitTransaction() + + After you commit the transaction, MongoDB releases the lock. + + .. 
note:: + + If any operation in the transaction fails, the transaction + aborts and all data changes made in the transaction are discarded + without ever becoming visible in the collection. - - If the employee document has not changed, the transaction returns - the document and locks the document. --- ref: transactions-read-concern-majority content: | diff --git a/source/includes/extracts-upsert-unique-index.yaml b/source/includes/extracts-upsert-unique-index.yaml index 70304426a80..94da321373a 100644 --- a/source/includes/extracts-upsert-unique-index.yaml +++ b/source/includes/extracts-upsert-unique-index.yaml @@ -1,12 +1,8 @@ ref: _upsert-unique-index-base content: | - When using the {{upsert}} option with the {{command}} - {{commandOrMethod}}, **and not** using a :ref:`unique index - ` on the query field(s), multiple - instances of {{aOrAn}} {{command}} operation with similar query - field(s) could result in duplicate documents being inserted in - certain circumstances. + Upserts can create duplicate documents, unless there is a + :ref:`unique index ` to prevent duplicates. Consider an example where no document with the name ``Andy`` exists and multiple clients issue the following command at roughly the same @@ -14,26 +10,105 @@ content: | {{codeExample}} - If all {{command}} operations finish the query phase - before any client successfully inserts data, **and** there is no - :ref:`unique index ` on the ``name`` field, each - {{command}} operation may result in an insert, creating multiple - documents with ``name: Andy``. - - To ensure that only one such document is created, and the other - {{command}} operations update this new document instead, create a - :ref:`unique index ` on the ``name`` field. This - guarantees that only one document with ``name: Andy`` is permitted - in the collection. - - With this unique index in place, the multiple {{command}} operations - now exhibit the following behavior: + If all {{command}} operations finish the query phase before any + client successfully inserts data, **and** there is no unique index on + the ``name`` field, each {{command}} operation may result in an + insert, creating multiple documents with ``name: Andy``. + + A unique index on the ``name`` field ensures that only one document + is created. With a unique index in place, the multiple {{command}} + operations now exhibit the following behavior: - Exactly one {{command}} operation will successfully insert a new document. - - All other {{command}} operations will update the newly-inserted - document, incrementing the ``score`` value. + - Other {{command}} operations either update the newly-inserted + document or fail due to a unique key collision. + + In order for other {{command}} operations to update the + newly-inserted document, **all** of the following conditions must + be met: + + - The target collection has a unique index that would cause a + duplicate key error. + + - The update operation is not ``updateMany`` or ``multi`` is + ``false``. + + - The update match condition is either: + + - A single equality predicate. For example ``{ "fieldA" : "valueA" }`` + + - A logical AND of equality predicates. For example ``{ "fieldA" : + "valueA", "fieldB" : "valueB" }`` + + - The fields in the equality predicate match the fields in the + unique index key pattern. + + - The update operation does not modify any fields in the + unique index key pattern. + + The following table shows examples of ``upsert`` operations that, + when a key collision occurs, either result in an update or fail. 
+ + .. list-table:: + :header-rows: 1 + :widths: 30 40 30 + + * - Unique Index Key Pattern + - Update Operation + - Result + + * - .. code-block:: javascript + :copyable: false + + { name : 1 } + + - .. code-block:: javascript + :copyable: false + + db.people.updateOne( + { name: "Andy" }, + { $inc: { score: 1 } }, + { upsert: true } + ) + - The ``score`` field of the matched document is incremented by + 1. + + * - .. code-block:: javascript + :copyable: false + + { name : 1 } + + - .. code-block:: javascript + :copyable: false + + db.people.updateOne( + { name: { $ne: "Joe" } }, + { $set: { name: "Andy" } }, + { upsert: true } + ) + + - The operation fails because it modifies the field in the + unique index key pattern (``name``). + + * - .. code-block:: javascript + :copyable: false + + { name : 1 } + - .. code-block:: javascript + :copyable: false + + db.people.updateOne( + { name: "Andy", email: "andy@xyz.com" }, + { $set: { active: false } }, + { upsert: true } + ) + - The operation fails because the equality predicate fields + (``name``, ``email``) do not match the index key field + (``name``). + + --- ref: upsert-unique-index-findAndModify-command diff --git a/source/includes/extracts-views.yaml b/source/includes/extracts-views.yaml index 621353fe853..60ba05f9979 100644 --- a/source/includes/extracts-views.yaml +++ b/source/includes/extracts-views.yaml @@ -14,11 +14,6 @@ ref: views-unsupported-rename content: | You cannot rename :ref:`views `. --- -ref: views-unsupported-geoNear -content: | - :ref:`Views ` do not support geoNear operations - (specifically, the :pipeline:`$geoNear` pipeline stage). ---- ref: views-unsupported-projection-operators content: | :method:`~db.collection.find()` operations on views do not support @@ -119,7 +114,7 @@ content: | A user with the :authrole:`readWrite` built in role on the database has the required privileges to run the listed operations. Either - :doc:`create a user ` with the required role + :ref:`create a user ` with the required role or :ref:`grant the role to an existing user ` diff --git a/source/includes/extracts-wildcard-indexes.yaml b/source/includes/extracts-wildcard-indexes.yaml index e243bbc64c6..9e63b8ffdd5 100644 --- a/source/includes/extracts-wildcard-indexes.yaml +++ b/source/includes/extracts-wildcard-indexes.yaml @@ -6,29 +6,28 @@ content: | contain arbitrary nested fields, including embedded documents and arrays: - .. code-block:: text - :copyable: false + .. code-block:: javascript - { - "_id" : ObjectId("5c1d358bf383fbee028aea0b"), - "product_name" : "Blaster Gauntlet", - "product_attributes" : { - "price" : { - "cost" : 299.99 - "currency" : USD + db.products_catalog.insertMany( [ + { + _id : ObjectId("5c1d358bf383fbee028aea0b"), + product_name: "Blaster Gauntlet", + product_attributes: { + price: { + cost: 299.99, + currency: "USD" } - ... - } - }, - { - "_id" : ObjectId("5c1d358bf383fbee028aea0c"), - "product_name" : "Super Suit", - "product_attributes" : { - "superFlight" : true, - "resistance" : [ "Bludgeoning", "Piercing", "Slashing" ] - ... 
- }, - } + } + }, + { + _id: ObjectId("5c1d358bf383fbee028aea0c"), + product_name: "Super Suit", + product_attributes: { + superFlight: true, + resistance: [ "Bludgeoning", "Piercing", "Slashing" ] + } + } + ] ) --- ref: wildcard-index-summary content: | diff --git a/source/includes/extracts-wired-tiger-base.yaml b/source/includes/extracts-wired-tiger-base.yaml index ba58a15c8a8..7172d72b4a4 100644 --- a/source/includes/extracts-wired-tiger-base.yaml +++ b/source/includes/extracts-wired-tiger-base.yaml @@ -22,10 +22,10 @@ content: | .. note:: The {{cachesetting}} limits the size of the WiredTiger internal - cache. The operating system will use the available free memory + cache. The operating system uses the available free memory for filesystem cache, which allows the compressed MongoDB data - files to stay in memory. In addition, the operating system will - use any free RAM to buffer file system blocks and file system + files to stay in memory. In addition, the operating system + uses any free RAM to buffer file system blocks and file system cache. To accommodate the additional consumers of RAM, you may have to @@ -37,7 +37,7 @@ content: | accommodate the other :binary:`~bin.mongod` instances. - If you run :binary:`~bin.mongod` in a container (e.g. ``lxc``, + If you run :binary:`~bin.mongod` in a container (for example, ``lxc``, ``cgroups``, Docker, etc.) that does *not* have access to all of the RAM available in a system, you must set {{cachesetting}} to a value less than the amount of RAM available in the container. The exact diff --git a/source/includes/extracts-wired-tiger.yaml b/source/includes/extracts-wired-tiger.yaml index fadc4ecd4f4..a22fc54dc4e 100644 --- a/source/includes/extracts-wired-tiger.yaml +++ b/source/includes/extracts-wired-tiger.yaml @@ -176,11 +176,12 @@ content: | - 256 MB. - For example, on a system with a total of 4GB of RAM the WiredTiger - cache will use 1.5GB of RAM (``0.5 * (4 GB - 1 GB) = 1.5 GB``). - Conversely, a system with a total of 1.25 GB of RAM will allocate 256 - MB to the WiredTiger cache because that is more than half of the - total RAM minus one gigabyte (``0.5 * (1.25 GB - 1 GB) = 128 MB < 256 MB``). + For example, on a system with a total of 4GB of RAM the + WiredTiger cache uses 1.5GB of RAM (``0.5 * (4 GB - 1 GB) = + 1.5 GB``). Conversely, on a system with a total of 1.25 GB of + RAM WiredTiger allocates 256 MB to the WiredTiger cache + because that is more than half of the total RAM minus one + gigabyte (``0.5 * (1.25 GB - 1 GB) = 128 MB < 256 MB``). .. note:: @@ -194,7 +195,7 @@ content: | --- ref: wt-filesystem-cache content: | - Via the filesystem cache, MongoDB automatically uses all free memory + With the filesystem cache, MongoDB automatically uses all free memory that is not used by the WiredTiger cache or by other processes. --- ref: wt-snapshot-frequency diff --git a/source/includes/extracts-x509-certificate.yaml b/source/includes/extracts-x509-certificate.yaml index c707885fe26..b69ba671bf5 100644 --- a/source/includes/extracts-x509-certificate.yaml +++ b/source/includes/extracts-x509-certificate.yaml @@ -29,6 +29,13 @@ content: | - Organizational Unit (``OU``) - Domain Component (``DC``) + .. note:: + + You can also disable the ``enforceUserClusterSeparation`` + parameter during startup to automatically disable the + ``O/OU/DC`` check. This allows member certificates to + authenticate as users stored in the ``$external`` database. 
+ - The ``subject`` of a client x.509 certificate, which contains the Distinguished Name (``DN``), must be **different** than the ``subject``\s of :ref:`member x.509 certificates `. @@ -64,6 +71,27 @@ content: | .. include:: /includes/list-cluster-x509-requirements.rst + .. note:: + + If you disable the ``enforceUserClusterSeparation`` parameter, the + following behaviors apply: + + - The ``O/OU/DC`` check is disabled if ``clusterAuthMode`` is + ``keyFile`` in your configuration file. This allows clients + possessing member certificates to authenticate as users stored in + the ``$external`` database. + - The server won't start if ``clusterAuthMode`` isn't ``keyFile`` in + your configuration file. + + .. include:: /includes/fact-enforce-user-cluster-separation-parameter.rst + + To set the ``enforceUserClusterSeparation`` parameter to + ``false``, run the following command during startup: + + .. code-block:: javascript + + mongod --setParameter enforceUserClusterSeparation=false + The certificates have the following requirements: .. include:: /includes/list-tls-certificate-requirements.rst diff --git a/source/includes/extracts-zoned-sharding.yaml b/source/includes/extracts-zoned-sharding.yaml index 40876e82009..c443e1b739c 100644 --- a/source/includes/extracts-zoned-sharding.yaml +++ b/source/includes/extracts-zoned-sharding.yaml @@ -57,7 +57,7 @@ content: | ref: zoned-sharding-shard-operation-chunk-distribution-hashed-short content: | - Starting in version 4.4, MongoDB supports sharding collections on + MongoDB supports sharding collections on :ref:`compound hashed indexes `. When sharding an empty or non-existing collection using a compound hashed shard key, additional requirements apply in order for MongoDB to diff --git a/source/includes/fact-5.0-multiple-partial-index.rst b/source/includes/fact-5.0-multiple-partial-index.rst index 9f44b95f3ab..29a1b06a46b 100644 --- a/source/includes/fact-5.0-multiple-partial-index.rst +++ b/source/includes/fact-5.0-multiple-partial-index.rst @@ -1,10 +1,10 @@ Starting in MongoDB 5.0, multiple -:doc:`partial indexes ` +:ref:`partial indexes ` can be created using the same :ref:`key pattern` as long as the :ref:`partialFilterExpression ` fields do not express equivalent filters. In earlier versions of MongoDB, creating multiple -:doc:`partial indexes` is not allowed when +:ref:`partial indexes ` is not allowed when using the same key pattern with different :ref:`partialFilterExpressions `. diff --git a/source/includes/fact-5.0-read-concern-latency.rst b/source/includes/fact-5.0-read-concern-latency.rst new file mode 100644 index 00000000000..512b29d6aa2 --- /dev/null +++ b/source/includes/fact-5.0-read-concern-latency.rst @@ -0,0 +1,9 @@ +Starting in MongoDB 5.0, :readconcern:`"local"` is the default read +concern level for read operations against the primary and secondaries. + +This may introduce a significant latency increase for count queries that +use a filter and for :ref:`covered queries `. + +You can opt out of this behavior by setting the cluster-wide +:ref:`read concern ` with +:dbcommand:`setDefaultRWConcern`. \ No newline at end of file diff --git a/source/includes/fact-7.3-singlebatch-cursor.rst b/source/includes/fact-7.3-singlebatch-cursor.rst new file mode 100644 index 00000000000..0504aea9dec --- /dev/null +++ b/source/includes/fact-7.3-singlebatch-cursor.rst @@ -0,0 +1,5 @@ +Starting in MongoDB 7.3, when you use a find command on a view +with the ``singleBatch: true`` and ``batchSize: 1`` options, a cursor +is no longer returned. 
In previous versions of MongoDB these find queries +would return a cursor even when you set the :ref:`single batch` +option to ``true``. \ No newline at end of file diff --git a/source/includes/fact-bson-types.rst b/source/includes/fact-bson-types.rst index 479f8312592..a5cdd66b913 100644 --- a/source/includes/fact-bson-types.rst +++ b/source/includes/fact-bson-types.rst @@ -76,11 +76,6 @@ - "symbol" - Deprecated. - * - JavaScript code with scope - - 15 - - "javascriptWithScope" - - Deprecated in MongoDB 4.4. - * - 32-bit integer - 16 - "int" diff --git a/source/includes/fact-bulk-writeConcernError-mongos.rst b/source/includes/fact-bulk-writeConcernError-mongos.rst new file mode 100644 index 00000000000..9ad10d31113 --- /dev/null +++ b/source/includes/fact-bulk-writeConcernError-mongos.rst @@ -0,0 +1,11 @@ + +.. versionchanged:: 7.1 + + When |cmd| is received from :program:`mongos`, write concern + errors are always reported, even when one or more write + errors occur. + + In previous releases, the occurrence of write errors could + cause the |cmd| to not report write concern errors. + + diff --git a/source/includes/fact-collection-namespace-limit.rst b/source/includes/fact-collection-namespace-limit.rst index 19e37eb2c69..692ebbce52d 100644 --- a/source/includes/fact-collection-namespace-limit.rst +++ b/source/includes/fact-collection-namespace-limit.rst @@ -1,10 +1,4 @@ -- For :ref:`featureCompatibilityVersion ` set to ``"4.4"`` or - greater, MongoDB raises the limit for unsharded collections and views to - 255 bytes, and to 235 bytes for sharded collections. For a collection or - a view, the namespace includes the database name, the dot (``.``) - separator, and the collection/view name - (e.g. ``.``), - -- For :ref:`featureCompatibilityVersion ` set to ``"4.2"`` or - earlier, the maximum length of unsharded collections and views namespace - remains 120 bytes and 100 bytes for sharded collection. +The namespace length limit for unsharded collections and views is 255 bytes, +and 235 bytes for sharded collections. For a collection or a view, the namespace +includes the database name, the dot (``.``) separator, and the collection/view +name (e.g. ``.``). diff --git a/source/includes/fact-csfle-compatibility-drivers.rst b/source/includes/fact-csfle-compatibility-drivers.rst deleted file mode 100644 index 15a91943d86..00000000000 --- a/source/includes/fact-csfle-compatibility-drivers.rst +++ /dev/null @@ -1,35 +0,0 @@ -While {+csfle+} does not support encrypting -individual array elements, randomized encryption supports encrypting the -*entire* array field rather than individual elements in the field. The -example automatic encryption rules specify randomized encryption for the -``medicalRecords`` field to encrypt the entire array. If the automatic -encryption rules specified :autoencryptkeyword:`encrypt` or -:autoencryptkeyword:`encryptMetadata` within ``medicalRecords.items`` or -``medicalRecords.additionalItems``, automatic field level encryption -fails and returns an errors. - -The official MongoDB 4.2+ compatible drivers, :binary:`~bin.mongosh`, -and the 4.2 or later legacy ``mongo`` shell require specifying the -automatic encryption rules as part of creating the database connection -object: - -- For ``mongosh``, use the :method:`Mongo()` - constructor to create a database connection. Specify the automatic - encryption rules to the ``schemaMap`` key of the - :ref:`<{+auto-encrypt-options+}>` parameter. 
See - :ref:`mongo-connection-automatic-client-side-encryption-enabled` - for a complete example. - -- For the official MongoDB 4.2+ compatible drivers, use the - driver-specific database connection constructor (``MongoClient``) - to create the database connection with the automatic encryption rules - included as part of the {+csfle+} - configuration object. Defer to the :ref:`driver API reference - ` for more complete documentation and - tutorials. - -For all clients, the ``keyVault`` and ``kmsProviders`` specified -to the {+csfle+} parameter *must* grant -access to both the {+dek-long+}s specified in the automatic -encryption rules *and* the {+cmk-long+} used to encrypt the -{+dek-long+}s. diff --git a/source/includes/fact-default-conf-file.rst b/source/includes/fact-default-conf-file.rst index 87f4e1d97b0..1c7dd4bfed7 100644 --- a/source/includes/fact-default-conf-file.rst +++ b/source/includes/fact-default-conf-file.rst @@ -27,7 +27,7 @@ - MSI Installer - ``\bin\mongod.cfg`` -- If you :ref:`installed MongoDB ` via a downloaded - ``TGZ`` or ``ZIP`` file, you will need to create your own configuration - file. The :ref:`basic example configuration ` is a good - place to start. +- If you :ref:`installed MongoDB ` + through a downloaded ``TGZ`` or ``ZIP`` file, you must create + your own configuration file. The :ref:`basic example + configuration ` is a good place to start. diff --git a/source/includes/fact-disable-javascript-with-noscript.rst b/source/includes/fact-disable-javascript-with-noscript.rst index 1baf9f00d5e..d6a16df8967 100644 --- a/source/includes/fact-disable-javascript-with-noscript.rst +++ b/source/includes/fact-disable-javascript-with-noscript.rst @@ -5,10 +5,8 @@ You can disable all server-side execution of JavaScript: line or setting :setting:`security.javascriptEnabled` to false in the configuration file. -- Starting in MongoDB 4.4, for a :binary:`~bin.mongos` instance by - passing the :option:`--noscripting ` option on - the command line or setting :setting:`security.javascriptEnabled` to - false in the configuration file. - - | In earlier versions, MongoDB does not allow JavaScript execution on - :binary:`~bin.mongos` instances. +- For a :binary:`~bin.mongos` instance by passing the + :option:`--noscripting ` option on the command + line or setting :setting:`security.javascriptEnabled` to false in the + configuration file. + \ No newline at end of file diff --git a/source/includes/fact-document-field-name-restrictions.rst b/source/includes/fact-document-field-name-restrictions.rst index 59e0467255f..9a37a81e302 100644 --- a/source/includes/fact-document-field-name-restrictions.rst +++ b/source/includes/fact-document-field-name-restrictions.rst @@ -7,3 +7,7 @@ in field names. There are some restrictions. See :ref:`Field Name Considerations ` for more details. +- Each field name must be unique within the document. You must not store + documents with duplicate fields because MongoDB :ref:`CRUD ` + operations might behave unexpectedly if a document has duplicate + fields. diff --git a/source/includes/fact-dynamic-concurrency.rst b/source/includes/fact-dynamic-concurrency.rst new file mode 100644 index 00000000000..c9e01fee56c --- /dev/null +++ b/source/includes/fact-dynamic-concurrency.rst @@ -0,0 +1,18 @@ +Starting in version 7.0, MongoDB uses a default algorithm to dynamically +adjust the maximum number of concurrent storage engine transactions +(read and write tickets). 
The dynamic concurrent storage engine +transaction algorithm optimizes database throughput during cluster +overload. The maximum number of concurrent storage engine transactions +(read and write tickets) never exceeds 128 read tickets and 128 +write tickets and may differ across nodes in a cluster. The maximum +number of read tickets and write tickets within a single node are always +equal. + +To specify a maximum number of read and write transactions (read and +write tickets) that the dynamic maximum cannot exceed, use +:parameter:`storageEngineConcurrentReadTransactions` and +:parameter:`storageEngineConcurrentWriteTransactions`. + +If you want to disable the dynamic concurrent storage engine +transactions algorithm, file a support request to work with a MongoDB +Technical Services Engineer. \ No newline at end of file diff --git a/source/includes/fact-enforce-user-cluster-separation-parameter.rst b/source/includes/fact-enforce-user-cluster-separation-parameter.rst new file mode 100644 index 00000000000..2dceb5a9454 --- /dev/null +++ b/source/includes/fact-enforce-user-cluster-separation-parameter.rst @@ -0,0 +1,18 @@ +If you set the ``enforceUserClusterSeparation`` parameter to ``false``, +the server doesn't distinguish between client certificates, which +applications use to authenticate, and intra-cluster certificates, which +have privileged access. This has no effect if your ``clusterAuthMode`` +is ``keyFile``. However, if your ``clusterAuthMode`` is ``x509``, user +certificates that use the allowed scheme are conflated with cluster +certificates and granted privileged access. + +Your existing certificates are granted internal privileges if you do the +following: + +1. Create a user with a name allowed by this parameter. +#. Set the ``enforceUserClusterSeparation`` parameter to ``false``. +#. Set ``clusterAuthMode`` to ``x509``. + +You must not upgrade from ``keyFile`` to ``x509`` without validating +that you've removed users with elevated privileges that the +``enforceUserClusterSeparation`` flag allowed you to create. \ No newline at end of file diff --git a/source/includes/fact-environments-atlas-support-limited-free-and-m10.rst b/source/includes/fact-environments-atlas-support-limited-free-and-m10.rst new file mode 100644 index 00000000000..2764605c051 --- /dev/null +++ b/source/includes/fact-environments-atlas-support-limited-free-and-m10.rst @@ -0,0 +1,4 @@ +.. note:: + + This command has *limited support* in M0, M2, M5, and M10 clusters. + For more information, see :atlas:`Unsupported Commands `. diff --git a/source/includes/fact-explain-results-categories.rst b/source/includes/fact-explain-results-categories.rst index 37bd0549620..d076e9519c2 100644 --- a/source/includes/fact-explain-results-categories.rst +++ b/source/includes/fact-explain-results-categories.rst @@ -1,21 +1,22 @@ |explain| operations can return information regarding: -- ``explainVersion``, the output format version (for example, ``"1"``); +- ``explainVersion``, the output format version (for example, ``"1"``). -- ``command``, which details the command being explained; +- ``command``, which details the command being explained. - :ref:`queryPlanner`, which details the plan selected by the - :doc:`query optimizer ` and lists the rejected - plans; + :ref:`query optimizer ` and lists the rejected + plans. - :ref:`executionStats`, which details the execution of the winning - plan and the rejected plans; + plan and the rejected plans.
- :ref:`serverInfo`, which provides information on the - MongoDB instance; and + MongoDB instance. - ``serverParameters``, which details internal parameters. + The verbosity mode (i.e. ``queryPlanner``, ``executionStats``, ``allPlansExecution``) determines whether the results include :ref:`executionStats` and whether :ref:`executionStats` includes data diff --git a/source/includes/fact-explain-verbosity-executionStats.rst b/source/includes/fact-explain-verbosity-executionStats.rst index c80e0d08bc2..cafa7d8623d 100644 --- a/source/includes/fact-explain-verbosity-executionStats.rst +++ b/source/includes/fact-explain-verbosity-executionStats.rst @@ -1,4 +1,4 @@ -MongoDB runs the :doc:`query optimizer ` to choose +MongoDB runs the :ref:`query optimizer ` to choose the winning plan, executes the winning plan to completion, and returns statistics describing the execution of the winning plan. diff --git a/source/includes/fact-explain-verbosity-queryPlanner.rst b/source/includes/fact-explain-verbosity-queryPlanner.rst index 6b9c79102b2..ae302c6f74a 100644 --- a/source/includes/fact-explain-verbosity-queryPlanner.rst +++ b/source/includes/fact-explain-verbosity-queryPlanner.rst @@ -1,4 +1,4 @@ -MongoDB runs the :doc:`query optimizer ` to choose +MongoDB runs the :ref:`query optimizer ` to choose the winning plan for the operation under evaluation. |explain| returns the :data:`~explain.queryPlanner` information for the evaluated |operation|. diff --git a/source/includes/fact-hidden-indexes.rst b/source/includes/fact-hidden-indexes.rst new file mode 100644 index 00000000000..989dcef1cc1 --- /dev/null +++ b/source/includes/fact-hidden-indexes.rst @@ -0,0 +1,14 @@ +MongoDB offers the ability to hide or unhide indexes from the query planner. +By hiding an index from the planner, you can evaluate the potential impact of +dropping an index without actually dropping the index. + +If, after the evaluation, you decide to drop the index, you +can drop the hidden index; you do not need to unhide it first to +drop it. + +If, however, the impact is negative, you can unhide the index +instead of having to recreate a dropped index. And because indexes are +fully maintained while hidden, the indexes are immediately available +for use once unhidden. + +For more information on hidden indexes, see :doc:`/core/index-hidden`. \ No newline at end of file diff --git a/source/includes/fact-hint-text-query-restriction.rst b/source/includes/fact-hint-text-query-restriction.rst index 3f346990683..f5840275cd6 100644 --- a/source/includes/fact-hint-text-query-restriction.rst +++ b/source/includes/fact-hint-text-query-restriction.rst @@ -1,4 +1,4 @@ .. hint-and-text-query -If a query includes a :query:`$text` expression, you cannot use +If a query includes a ``$text`` expression, you cannot use :method:`~cursor.hint()` to specify which index to use for the query. diff --git a/source/includes/fact-installation-ulimit.rst b/source/includes/fact-installation-ulimit.rst index 79c2e655d16..f475d949334 100644 --- a/source/includes/fact-installation-ulimit.rst +++ b/source/includes/fact-installation-ulimit.rst @@ -5,5 +5,4 @@ settings for your platform. .. note:: - Starting in MongoDB 4.4, a startup error is generated if the - ``ulimit`` value for number of open files is under ``64000``. + ..
include:: /includes/fact-ulimit-minimum.rst diff --git a/source/includes/fact-manual-enc-definition.rst b/source/includes/fact-manual-enc-definition.rst deleted file mode 100644 index 6abcf69a624..00000000000 --- a/source/includes/fact-manual-enc-definition.rst +++ /dev/null @@ -1,3 +0,0 @@ -{+manual-enc-first+} is a mechanism in which you specify how to encrypt -and decrypt fields in your document for each operation you perform on -your database. \ No newline at end of file diff --git a/source/includes/fact-mapreduce-deprecated-bson.rst b/source/includes/fact-mapreduce-deprecated-bson.rst new file mode 100644 index 00000000000..7b70fba4b50 --- /dev/null +++ b/source/includes/fact-mapreduce-deprecated-bson.rst @@ -0,0 +1,9 @@ +:dbcommand:`mapReduce` no longer supports the deprecated +:ref:`BSON Type ` JavaScript code with scope (BSON Type 15) for its +functions. The ``map``, ``reduce``, and ``finalize`` functions must be either +BSON type String (BSON Type 2) or BSON Type JavaScript (BSON Type 13). To +pass constant values which will be accessible in the ``map``, ``reduce``, and +``finalize`` functions, use the ``scope`` parameter. + +The use of JavaScript code with scope for the :dbcommand:`mapReduce` functions +has been deprecated since version 4.2.1. \ No newline at end of file diff --git a/source/includes/fact-merge-same-collection-behavior.rst b/source/includes/fact-merge-same-collection-behavior.rst index 784c92087fc..dc1e96a90dd 100644 --- a/source/includes/fact-merge-same-collection-behavior.rst +++ b/source/includes/fact-merge-same-collection-behavior.rst @@ -1,7 +1,3 @@ -Starting in MongoDB 4.4, :pipeline:`$merge` can output to the same -collection that is being aggregated. You can also output to a -collection which appears in other stages of the pipeline, such as -:pipeline:`$lookup`. - -Versions of MongoDB prior to 4.4 did not allow :pipeline:`$merge` to -output to the same collection as the collection being aggregated. +:pipeline:`$merge` can output to the same collection that is being aggregated. +You can also output to a collection which appears in other stages of the +pipeline, such as :pipeline:`$lookup`. diff --git a/source/includes/fact-meta-syntax.rst b/source/includes/fact-meta-syntax.rst index 0d34c05f9c2..4167fcd19df 100644 --- a/source/includes/fact-meta-syntax.rst +++ b/source/includes/fact-meta-syntax.rst @@ -25,7 +25,7 @@ The |meta-object| expression can specify the following values as the signifies how well the document matched the :ref:`search term or terms `. - Starting in MongoDB 4.4, must be used in conjunction with a + ``{ $meta: "textScore" }`` must be used in conjunction with a :query:`$text` query. In earlier versions, if not used in conjunction with a @@ -39,15 +39,11 @@ The |meta-object| expression can specify the following values as the application logic, and is preferred over :method:`cursor.returnKey()`. - .. versionadded:: 4.4 - - :atlas:`MongoDB Atlas Search ` provides additional ``$meta`` keywords, such as: -- :atlas:`"searchScore" ` and - -- :atlas:`"searchHighlights" - `. +- :atlas:`"searchScore" ` +- :atlas:`"searchHighlights" ` +- :atlas:`"searchSequenceToken" ` Refer to the Atlas Search documentation for details. 
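To make the ``{ $meta: "textScore" }`` usage described above concrete, here is a minimal mongosh sketch; the ``articles`` collection and its text index are assumptions used only for illustration:

.. code-block:: javascript

   // Assumes an existing text index, for example:
   // db.articles.createIndex( { subject: "text" } )
   db.articles.find(
      { $text: { $search: "coffee" } },     // $text query, required for textScore
      { score: { $meta: "textScore" } }     // project the relevance score
   ).sort(
      { score: { $meta: "textScore" } }     // order results by relevance
   )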
diff --git a/source/includes/fact-mongodb-cr-deprecated.rst b/source/includes/fact-mongodb-cr-deprecated.rst index aed97bf6ca8..39cd0f094c1 100644 --- a/source/includes/fact-mongodb-cr-deprecated.rst +++ b/source/includes/fact-mongodb-cr-deprecated.rst @@ -1,3 +1,2 @@ As of MongoDB 3.6, ``MONGODB-CR`` authentication mechanism is -deprecated. If you have not upgraded your ``MONGODB-CR`` authentication -schema to SCRAM, see :doc:`/release-notes/3.0-scram`. +deprecated. diff --git a/source/includes/fact-mongokerberos.rst b/source/includes/fact-mongokerberos.rst new file mode 100644 index 00000000000..815e2010c7a --- /dev/null +++ b/source/includes/fact-mongokerberos.rst @@ -0,0 +1,9 @@ +After completing the configuration steps, you can validate your +configuration with the :binary:`~bin.mongokerberos` tool. + +:binary:`~bin.mongokerberos` provides a convenient method to verify your +platform's Kerberos configuration for use with MongoDB, and to test that +Kerberos authentication from a MongoDB client works as expected. See the +:binary:`~bin.mongokerberos` documentation for more information. + +:binary:`~bin.mongokerberos` is available in MongoDB Enterprise only. \ No newline at end of file diff --git a/source/includes/fact-mws-intro.rst b/source/includes/fact-mws-intro.rst deleted file mode 100644 index f0f7330978b..00000000000 --- a/source/includes/fact-mws-intro.rst +++ /dev/null @@ -1 +0,0 @@ -You can run the operation in the web shell below: diff --git a/source/includes/fact-mws.rst b/source/includes/fact-mws.rst deleted file mode 100644 index a137412d860..00000000000 --- a/source/includes/fact-mws.rst +++ /dev/null @@ -1,2 +0,0 @@ -.. mongo-web-shell:: - :version: latest diff --git a/source/includes/fact-natural-sort-order-text-query-restriction.rst b/source/includes/fact-natural-sort-order-text-query-restriction.rst index 2ce2ddf108f..adb76bc7a2f 100644 --- a/source/includes/fact-natural-sort-order-text-query-restriction.rst +++ b/source/includes/fact-natural-sort-order-text-query-restriction.rst @@ -1,2 +1,2 @@ You cannot specify :operator:`$natural` sort order if the query -includes a :query:`$text` expression. +includes a ``$text`` expression. diff --git a/source/includes/fact-oidc-providers.rst b/source/includes/fact-oidc-providers.rst index 768cd64939a..b4e045b7244 100644 --- a/source/includes/fact-oidc-providers.rst +++ b/source/includes/fact-oidc-providers.rst @@ -117,6 +117,8 @@ - Optional + - boolean + - Determines if the ``authorizationClaim`` is required. The default value is ``true``. @@ -149,9 +151,7 @@ { user: "mdbinc/spencer.jackson@example.com", db: "$external" } - .. versionadded:: 7.2 - - - boolean + .. versionadded:: 7.2 (Also, available in 7.0.5). * - ``authorizationClaim`` diff --git a/source/includes/fact-qe-csfle-contention.rst b/source/includes/fact-qe-csfle-contention.rst deleted file mode 100644 index 87e187c51ad..00000000000 --- a/source/includes/fact-qe-csfle-contention.rst +++ /dev/null @@ -1,33 +0,0 @@ -Inserting the same field/value pair into multiple documents in close -succession can cause conflicts that delay insert operations. - -MongoDB tracks the occurrences of each field/value pair in an -encrypted collection using an internal counter. The contention factor -partitions this counter, similar to an array. This minimizes issues with -incrementing the counter when using ``insert``, ``update``, or ``findAndModify`` to add or modify an encrypted field -with the same field/value pair in close succession. 
``contention = 0`` -creates an array with one element -at index 0. ``contention = 4`` creates an array with 5 elements at -indexes 0-4. MongoDB increments a random array element during insert. If -unset, ``contention`` defaults to 8. - -High contention improves the performance of insert and update operations on low cardinality fields, but decreases find performance. - -Consider increasing ``contention`` above the default value of 8 only if: - -- The field has low cardinality or low selectivity. A ``state`` field - may have 50 values, but if 99% of the data points use ``{state: NY}``, - that pair is likely to cause contention. - -- Write and update operations frequently modify the field. Since high - contention values sacrifice find performance in favor of write and - update operations, the benefit of a high contention factor for a - rarely updated field is unlikely to outweigh the drawback. - -Consider decreasing ``contention`` if: - -- The field is high cardinality and contains entirely unique values, - such as a credit card number. - -- The field is often queried, but never or rarely updated. In this - case, find performance is preferable to write and update performance. diff --git a/source/includes/fact-read-concern-write-timeline.rst b/source/includes/fact-read-concern-write-timeline.rst index a73c549f4e9..53dfdb46bd7 100644 --- a/source/includes/fact-read-concern-write-timeline.rst +++ b/source/includes/fact-read-concern-write-timeline.rst @@ -62,7 +62,7 @@ a three member replica set: | **Secondary**\ :sub:`2`: Write\ :sub:`prev` * - t\ :sub:`3` - - Primary is aware of successful replication to Secondary\ :sub:`1` and sends acknowledgement to client + - Primary is aware of successful replication to Secondary\ :sub:`1` and sends acknowledgment to client - | **Primary**: Write\ :sub:`0` | **Secondary**\ :sub:`1`: Write\ :sub:`0` | **Secondary**\ :sub:`2`: Write\ :sub:`0` diff --git a/source/includes/fact-read-own-writes.rst b/source/includes/fact-read-own-writes.rst index 400db1b2da2..a1948b9ecbb 100644 --- a/source/includes/fact-read-own-writes.rst +++ b/source/includes/fact-read-own-writes.rst @@ -1,6 +1,6 @@ Starting in MongoDB 3.6, you can use :ref:`causally consistent sessions ` to read your own writes, if the writes request -acknowledgement. +acknowledgment. Prior to MongoDB 3.6, in order to read your own writes you must issue your write operation with :writeconcern:`{ w: "majority" } <"majority">` diff --git a/source/includes/fact-runtime-parameter.rst b/source/includes/fact-runtime-parameter.rst new file mode 100644 index 00000000000..ccec50d4f30 --- /dev/null +++ b/source/includes/fact-runtime-parameter.rst @@ -0,0 +1,4 @@ + +This parameter is only available at runtime. To set the +parameter, use the :dbcommand:`setParameter` command. 
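As a quick illustration of the runtime-only pattern described in ``fact-runtime-parameter.rst``, the following mongosh sketch sets a parameter with the ``setParameter`` command; ``logLevel`` is used here only as an example of a settable parameter:

.. code-block:: javascript

   // Run against the admin database; substitute the parameter in question.
   db.adminCommand( { setParameter: 1, logLevel: 1 } )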
+ diff --git a/source/includes/fact-runtime-startup-parameter.rst b/source/includes/fact-runtime-startup-parameter.rst new file mode 100644 index 00000000000..1f28076626a --- /dev/null +++ b/source/includes/fact-runtime-startup-parameter.rst @@ -0,0 +1,9 @@ + +This parameter is available both at runtime and at startup: + +- To set the parameter at runtime, use the + :dbcommand:`setParameter` command + +- To set the parameter at startup, use the + :setting:`setParameter` setting + diff --git a/source/includes/fact-selinux-redhat-with-policy.rst b/source/includes/fact-selinux-redhat-with-policy.rst index 5b753f5a549..726a774ee0f 100644 --- a/source/includes/fact-selinux-redhat-with-policy.rst +++ b/source/includes/fact-selinux-redhat-with-policy.rst @@ -15,7 +15,7 @@ packages. If your MongoDB deployment uses custom settings for any of the following: - - :doc:`MongoDB connection ports ` + - :ref:`MongoDB connection ports ` - :setting:`~storage.dbPath` - :setting:`systemLog.path` - :setting:`~processManagement.pidFilePath` diff --git a/source/includes/fact-set-global-write-concern-before-reconfig.rst b/source/includes/fact-set-global-write-concern-before-reconfig.rst index e56e787e6ab..007722b14dd 100644 --- a/source/includes/fact-set-global-write-concern-before-reconfig.rst +++ b/source/includes/fact-set-global-write-concern-before-reconfig.rst @@ -1,6 +1,6 @@ Starting in MongoDB 5.0, you must explicitly set the global default :ref:`write concern ` before attempting to reconfigure a :term:`replica set ` with a -:doc:`configuration ` +:ref:`configuration ` that would change the implicit default write concern. To set the global default write concern, use the :dbcommand:`setDefaultRWConcern` command. diff --git a/source/includes/fact-sharded-cluster-components.rst b/source/includes/fact-sharded-cluster-components.rst index 3e5078d6755..fa0dac4a77a 100644 --- a/source/includes/fact-sharded-cluster-components.rst +++ b/source/includes/fact-sharded-cluster-components.rst @@ -1,15 +1,14 @@ A MongoDB :term:`sharded cluster` consists of the following components: - :ref:`shard `: Each shard contains a - subset of the sharded data. Each shard can be deployed as a :term:`replica + subset of the sharded data. Each shard must be deployed as a :term:`replica set`. - :doc:`/core/sharded-cluster-query-router`: The ``mongos`` acts as a query router, providing an interface between client applications and the - sharded cluster. + sharded cluster. :binary:`~bin.mongos` can support + :ref:`hedged reads ` to minimize latencies. - :ref:`config servers `: Config servers store metadata and configuration settings for the cluster. As of MongoDB 3.4, config servers must be deployed as a replica set (CSRS). - -.. COMMENT TODO post code review, use this include file in /core/sharded-cluster-components.txt and /sharding.txt since they had duplicate content. diff --git a/source/includes/fact-split-horizon-binding.rst b/source/includes/fact-split-horizon-binding.rst index bb7b16dfb93..693879ce672 100644 --- a/source/includes/fact-split-horizon-binding.rst +++ b/source/includes/fact-split-horizon-binding.rst @@ -13,7 +13,7 @@ configuration commands. :binary:`mongod` and :binary:`mongos` do not rely on :parameter:`disableSplitHorizonIPCheck` for validation at startup. Legacy :binary:`mongod` and :binary:`mongos` instances that use IP -addresses instead of host names will start after an upgrade. +addresses instead of host names can start after an upgrade. 
Instances that are configured with IP addresses log a warning to use host names instead of IP addresses. diff --git a/source/includes/fact-ssl-tlsCAFile-tlsUseSystemCA.rst b/source/includes/fact-ssl-tlsCAFile-tlsUseSystemCA.rst new file mode 100644 index 00000000000..6336f9f4d6f --- /dev/null +++ b/source/includes/fact-ssl-tlsCAFile-tlsUseSystemCA.rst @@ -0,0 +1,8 @@ +When starting a :binary:`~bin.mongod` instance with +:ref:`TLS/SSL enabled `, you must +specify a value for the :option:`--tlsCAFile ` flag, the +:setting:`net.tls.CAFile` configuration option, or the :parameter:`tlsUseSystemCA` +parameter. + +``--tlsCAFile``, ``tls.CAFile``, and ``tlsUseSystemCA`` are all mutually +exclusive. diff --git a/source/includes/fact-stable-api-explain.rst b/source/includes/fact-stable-api-explain.rst new file mode 100644 index 00000000000..d9887b69643 --- /dev/null +++ b/source/includes/fact-stable-api-explain.rst @@ -0,0 +1,2 @@ +MongoDB does not guarantee any specific output format from the +:dbcommand:`explain` command, even when using the Stable API. \ No newline at end of file diff --git a/source/includes/fact-startup-parameter.rst b/source/includes/fact-startup-parameter.rst new file mode 100644 index 00000000000..e070d6e765c --- /dev/null +++ b/source/includes/fact-startup-parameter.rst @@ -0,0 +1,4 @@ + +This parameter is only available at startup. To set the +parameter, use the :setting:`setParameter` setting. + diff --git a/source/includes/fact-stop-in-progress-index-builds.rst b/source/includes/fact-stop-in-progress-index-builds.rst index 0367d1680bc..bc57d87d59d 100644 --- a/source/includes/fact-stop-in-progress-index-builds.rst +++ b/source/includes/fact-stop-in-progress-index-builds.rst @@ -1,8 +1,6 @@ -Starting in MongoDB 4.4, if an index specified to |drop-index| is still -building, |drop-index| attempts to stop the in-progress build. Stopping -an index build has the same effect as dropping the built index. In -versions earlier than MongoDB 4.4, |drop-index| returns an error if -there are any index builds in progress on the collection. +If an index specified to |drop-index| is still building, |drop-index| attempts +to stop the in-progress build. Stopping an index build has the same effect as +dropping the built index. For replica sets, run |drop-index| on the :term:`primary`. The primary stops the index build and creates an associated diff --git a/source/includes/fact-text-search-phrase-and-term.rst b/source/includes/fact-text-search-phrase-and-term.rst index 21f105a51ad..a2c878e8bb7 100644 --- a/source/includes/fact-text-search-phrase-and-term.rst +++ b/source/includes/fact-text-search-phrase-and-term.rst @@ -1,3 +1,3 @@ -If the ``$search`` string of a :query:`$text` operation includes a phrase and +If the ``$search`` string of a ``$text`` operation includes a phrase and individual terms, text search only matches the documents that include the phrase. diff --git a/source/includes/fact-text-search-score.rst b/source/includes/fact-text-search-score.rst index a176429775e..b060cd2eedf 100644 --- a/source/includes/fact-text-search-score.rst +++ b/source/includes/fact-text-search-score.rst @@ -1,8 +1,8 @@ -The :query:`$text` operator assigns a score to each document that +The ``$text`` operator assigns a score to each document that contains the search term in the indexed fields. The score represents the relevance of a document to a given text search query. The score can be part of a |sort-object| specification as well as part of the projection expression. 
The ``{ $meta: "textScore" }`` expression -provides information on the processing of the :query:`$text` operation. +provides information on the processing of the ``$text`` operation. See |meta-object| for details on accessing the score for projection or sort. diff --git a/source/includes/fact-timeZoneInfo.rst b/source/includes/fact-timeZoneInfo.rst index fad8926aa94..1d37a77f755 100644 --- a/source/includes/fact-timeZoneInfo.rst +++ b/source/includes/fact-timeZoneInfo.rst @@ -1,5 +1,5 @@ The full path from which to load the time zone database. If this option -is not provided, then MongoDB will use its built-in time zone database. +is not provided, then MongoDB uses its built-in time zone database. The configuration file included with Linux and macOS packages sets the time zone database path to ``/usr/share/zoneinfo`` by default. diff --git a/source/includes/fact-ulimit-minimum.rst b/source/includes/fact-ulimit-minimum.rst new file mode 100644 index 00000000000..5c7756e8c78 --- /dev/null +++ b/source/includes/fact-ulimit-minimum.rst @@ -0,0 +1,2 @@ +If the ``ulimit`` value for number of open files is under ``64000``, MongoDB +generates a startup warning. diff --git a/source/includes/fact-use-aggregation-not-map-reduce.rst b/source/includes/fact-use-aggregation-not-map-reduce.rst index d71fb83c7d4..169dea4d62d 100644 --- a/source/includes/fact-use-aggregation-not-map-reduce.rst +++ b/source/includes/fact-use-aggregation-not-map-reduce.rst @@ -11,7 +11,7 @@ deprecated: - For map-reduce operations that require custom functionality, you can use the :group:`$accumulator` and :expression:`$function` aggregation - operators, available starting in version 4.4. You can use those + operators. You can use those operators to define custom aggregation expressions in JavaScript. For examples of aggregation pipeline alternatives to map-reduce, see: diff --git a/source/includes/fact-validate-metadata.rst b/source/includes/fact-validate-metadata.rst index 953c37b4960..e98448f8a7b 100644 --- a/source/includes/fact-validate-metadata.rst +++ b/source/includes/fact-validate-metadata.rst @@ -17,7 +17,7 @@ The ``metadata`` validation option: only collections metadata. - Provides an alternative to dropping and recreating multiple invalid - indexes when used with the :doc:`collMod ` + indexes when used with the :dbcommand:`collMod` command. The ``metadata`` validation option only scans collection metadata to diff --git a/source/includes/fact-writeConcernError-mongos.rst b/source/includes/fact-writeConcernError-mongos.rst new file mode 100644 index 00000000000..7a10ebfcfc8 --- /dev/null +++ b/source/includes/fact-writeConcernError-mongos.rst @@ -0,0 +1,11 @@ + +.. versionchanged:: 7.1 + + When |cmd| executes on :program:`mongos`, write concern + errors are always reported, even when one or more write + errors occur. + + In previous releases, the occurrence of write errors could + cause the |cmd| to not report write concern errors. + + diff --git a/source/includes/find-options-values-table.rst b/source/includes/find-options-values-table.rst new file mode 100644 index 00000000000..7b714f56133 --- /dev/null +++ b/source/includes/find-options-values-table.rst @@ -0,0 +1,90 @@ +.. Note to author: This page duplicates the content from the github.io page: +.. https://mongodb.github.io/node-mongodb-native/6.5/interfaces/FindOptions.html +.. All the options defined here also work in mongosh + +.. 
list-table:: + :header-rows: 1 + :widths: 25 75 + + * - Option + - Description + + * - allowDiskUse + - Whether pipelines that require more than 100 megabytes of + memory to execute can write to temporary files on disk. For details, + see :method:`cursor.allowDiskUse()`. + + * - allowPartialResults + - For queries against a sharded collection, allows the command + (or subsequent getMore commands) to return partial results, + rather than an error, if one or more queried shards are + unavailable. + + * - awaitData + - If the cursor is a tailable-await cursor. + Requires ``tailable`` to be ``true``. + + * - collation + - Collation settings for the operation. + + * - comment + - Adds a ``$comment`` to the query that shows in the + :ref:`profiler ` logs. + + * - explain + - Adds explain output based on the verbosity mode provided. + + * - hint + - Forces the query optimizer to use specific indexes in the + query. + + * - limit + - Sets a limit of documents returned in the result set. + + * - max + - The exclusive upper bound for a specific index. + + * - maxAwaitTimeMS + - The maximum amount of time for the server to wait on + new documents to satisfy a tailable cursor query. Requires + ``tailable`` and ``awaitData`` to be ``true``. + + * - maxTimeMS + - The maximum amount of time (in milliseconds) the + server should allow the query to run. + + * - min + - The inclusive lower bound for a specific index. + + * - noCursorTimeout + - Whether the server should time out the cursor + after a period of inactivity (by default 10 minutes). + + * - readConcern + - Specifies the read concern level for the query. + + * - readPreference + - Specifies the read preference level for the query. + + * - returnKey + - Whether only the index keys are returned for a + query. + + * - showRecordId + - If the ``$recordId`` field is added to the returned + documents. The ``$recordId`` indicates the position of the + document in the result set. + + * - skip + - How many documents to skip before returning the + first document in the result set. + + * - sort + - The order of the documents returned in the result + set. Fields specified in the sort must have an index. + + * - tailable + - Indicates if the cursor is tailable. Tailable cursors remain + open after the initial results of the query are exhausted. + Tailable cursors are only available on + :ref:`manual-capped-collection`. \ No newline at end of file diff --git a/source/includes/fsync-mongos.rst b/source/includes/fsync-mongos.rst new file mode 100644 index 00000000000..dd4c1fd9cbb --- /dev/null +++ b/source/includes/fsync-mongos.rst @@ -0,0 +1,5 @@ + +Starting in MongoDB 7.1 (also available starting in 7.0.2, +6.0.11, and 5.0.22) |fsyncLockUnlock| can run on +:program:`mongos` to lock and unlock a sharded cluster. + diff --git a/source/includes/getMore-slow-queries.rst b/source/includes/getMore-slow-queries.rst index 0c2d507da01..d964f9212ba 100644 --- a/source/includes/getMore-slow-queries.rst +++ b/source/includes/getMore-slow-queries.rst @@ -2,4 +2,4 @@ Starting in MongoDB 5.1, when a :dbcommand:`getMore` command is logged as a :ref:`slow query `, the :ref:`queryHash ` and :ref:`planCacheKey ` fields are added to the :ref:`slow query log message ` and the -:doc:`profiler log message `. +:ref:`profiler log message `.
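As a sketch of how the find options listed in ``find-options-values-table.rst`` above are passed in practice, the following Node.js fragment supplies a few of them (``sort``, ``limit``, ``maxTimeMS``) in the options document of ``find()``; the connection URI and the ``test.movies`` namespace are placeholders:

.. code-block:: javascript

   const { MongoClient } = require("mongodb");

   async function run() {
     const client = new MongoClient(process.env.MONGODB_URI); // placeholder URI
     try {
       const docs = await client
         .db("test")
         .collection("movies")               // hypothetical namespace
         .find(
           { year: { $gte: 2000 } },         // filter
           {
             sort: { year: -1 },             // newest first
             limit: 10,                      // at most 10 documents
             maxTimeMS: 5000                 // server-side time limit
           }
         )
         .toArray();
       console.log(docs.length);
     } finally {
       await client.close();
     }
   }

   run();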
diff --git a/source/includes/important-hostnames.rst b/source/includes/important-hostnames.rst index 9dde5ed597d..cba1a54757b 100644 --- a/source/includes/important-hostnames.rst +++ b/source/includes/important-hostnames.rst @@ -7,6 +7,5 @@ Use hostnames instead of IP addresses to configure clusters across a split network horizon. Starting in MongoDB 5.0, nodes that are only - configured with an IP address will fail startup validation and will - not start. + configured with an IP address fail startup validation and do not start. diff --git a/source/includes/in-use-encryption/admonition-csfle-key-rotation.txt b/source/includes/in-use-encryption/admonition-csfle-key-rotation.rst similarity index 64% rename from source/includes/in-use-encryption/admonition-csfle-key-rotation.txt rename to source/includes/in-use-encryption/admonition-csfle-key-rotation.rst index aadfd26e3e2..8936e6dc07a 100644 --- a/source/includes/in-use-encryption/admonition-csfle-key-rotation.txt +++ b/source/includes/in-use-encryption/admonition-csfle-key-rotation.rst @@ -1,4 +1,4 @@ .. important:: Key Rotation Support To view your driver's dependencies for the key rotation API, see - :ref:`Compatibility `. + :ref:`Compatibility `. diff --git a/source/includes/indexes/case-insensitive-regex-queries.rst b/source/includes/indexes/case-insensitive-regex-queries.rst new file mode 100644 index 00000000000..f4aed20c612 --- /dev/null +++ b/source/includes/indexes/case-insensitive-regex-queries.rst @@ -0,0 +1,3 @@ +Case-insensitive indexes typically do not improve performance for +:query:`$regex` queries. The ``$regex`` implementation is not +collation-aware and cannot utilize case-insensitive indexes efficiently. diff --git a/source/includes/indexes/commit-quorum-vs-write-concern.rst b/source/includes/indexes/commit-quorum-vs-write-concern.rst index 49b752e1f0c..2139043042d 100644 --- a/source/includes/indexes/commit-quorum-vs-write-concern.rst +++ b/source/includes/indexes/commit-quorum-vs-write-concern.rst @@ -12,7 +12,7 @@ which voting members, including the primary, must be prepared to commit a :ref:`simultaneous index build `. before the primary will execute the commit. -The **write concern** is the level of acknowledgement that the write has +The **write concern** is the level of acknowledgment that the write has propagated to the specified number of instances. The **commit quorum** specifies how many nodes must be *ready* to finish diff --git a/source/includes/indexes/commit-quorum.rst b/source/includes/indexes/commit-quorum.rst index 2dce10f7f6f..f5aba8dc00f 100644 --- a/source/includes/indexes/commit-quorum.rst +++ b/source/includes/indexes/commit-quorum.rst @@ -1,6 +1,6 @@ Index creation is a :ref:`multistage process `. -Starting in MongoDB 4.4, the index creation process uses the ``commit -quorum`` to minimize replication lag on secondary nodes. +The index creation process uses the ``commit quorum`` to minimize replication +lag on secondary nodes. When a secondary node receives a ``commitIndexBuild`` oplog entry, the node stops further oplog applications until the local index build can be diff --git a/source/includes/indexes/embedded-object-need-entire-doc.rst b/source/includes/indexes/embedded-object-need-entire-doc.rst new file mode 100644 index 00000000000..c413f2022e0 --- /dev/null +++ b/source/includes/indexes/embedded-object-need-entire-doc.rst @@ -0,0 +1,3 @@ +When you create an index on an embedded document, only queries that +specify the entire embedded document use the index. 
Queries on a +specific field within the document do not use the index. diff --git a/source/includes/introduction-write-concern.rst b/source/includes/introduction-write-concern.rst index 0684487bcfd..7114f51e80e 100644 --- a/source/includes/introduction-write-concern.rst +++ b/source/includes/introduction-write-concern.rst @@ -1,5 +1,5 @@ :ref:`Write Concern ` describes the level of -acknowledgement requested from MongoDB for write operations. The level +acknowledgment requested from MongoDB for write operations. The level of the write concerns affects how quickly the write operation returns. When write operations have a *weak* write concern, they return quickly. With *stronger* write concerns, clients must wait after sending a write diff --git a/source/includes/ldap-srv-details.rst b/source/includes/ldap-srv-details.rst new file mode 100644 index 00000000000..9c35dcff453 --- /dev/null +++ b/source/includes/ldap-srv-details.rst @@ -0,0 +1,8 @@ +If your connection string specifies ``"srv:"``, |ldap-binary| +verifies that ``"_ldap._tcp.gc._msdcs."`` exists for SRV to +support Active Directory. If not found, |ldap-binary| verifies that +``"_ldap._tcp."`` exists for SRV. If an SRV record cannot be +found, |ldap-binary| warns you to use ``"srv_raw:"`` instead. + +If your connection string specifies ``"srv_raw:"``, +|ldap-binary| performs an SRV record lookup for ``""``. diff --git a/source/includes/let-variables-match-note.rst b/source/includes/let-variables-match-note.rst index f30e445ba0e..b6526df949a 100644 --- a/source/includes/let-variables-match-note.rst +++ b/source/includes/let-variables-match-note.rst @@ -1,5 +1,5 @@ To define variables that you can access elsewhere in the command, use -the |let-option| option. +the ``let`` option. .. note:: diff --git a/source/includes/limits-sharding-index-type.rst b/source/includes/limits-sharding-index-type.rst index 6988b6f4bad..03f45c07c96 100644 --- a/source/includes/limits-sharding-index-type.rst +++ b/source/includes/limits-sharding-index-type.rst @@ -3,9 +3,15 @@ key, a compound index that starts with the shard key and specifies ascending order for the shard key, or a :ref:`hashed index `. -A :term:`shard key` index cannot be a descending index on the shard key, -an index that specifies a :ref:`multikey index `, a -:ref:`text index ` or a :ref:`geospatial index -` on the :term:`shard key` fields. +A :term:`shard key` index *cannot* be: + +- A descending index on the shard key +- A :ref:`partial index ` +- Any of the following index types: + + - :ref:`Geospatial ` + - :ref:`Multikey ` + - :ref:`Text ` + - :ref:`Wildcard ` .. COMMENT seealso extracts-geospatial-index-shard-key-restriction.yaml diff --git a/source/includes/list-cluster-x509-requirements.rst b/source/includes/list-cluster-x509-requirements.rst index cec0e20877f..336319bbca7 100644 --- a/source/includes/list-cluster-x509-requirements.rst +++ b/source/includes/list-cluster-x509-requirements.rst @@ -19,4 +19,3 @@ tls: clusterAuthX509: attributes: O=MongoDB, OU=MongoDB Server - diff --git a/source/includes/list-table-auth-mechanisms-shell-only.rst b/source/includes/list-table-auth-mechanisms-shell-only.rst index b713c31529a..93461fbe826 100644 --- a/source/includes/list-table-auth-mechanisms-shell-only.rst +++ b/source/includes/list-table-auth-mechanisms-shell-only.rst @@ -33,8 +33,6 @@ `MongoDB Atlas `_ cluster. See :ref:`example-connect-mongo-using-aws-iam`. - .. versionadded:: 4.4 - * - :ref:`GSSAPI ` (Kerberos) - External authentication using Kerberos. 
This mechanism is diff --git a/source/includes/list-text-search-restrictions-in-agg.rst b/source/includes/list-text-search-restrictions-in-agg.rst index 78c6ddb4de5..e4e76b7628b 100644 --- a/source/includes/list-text-search-restrictions-in-agg.rst +++ b/source/includes/list-text-search-restrictions-in-agg.rst @@ -1,9 +1,9 @@ -- The :pipeline:`$match` stage that includes a :query:`$text` must be +- The :pipeline:`$match` stage that includes a ``$text`` must be the **first** stage in the pipeline. -- A :query:`$text` operator can only occur once in the stage. +- A ``$text`` operator can only occur once in the stage. -- The :query:`$text` operator expression cannot appear in +- The ``$text`` operator expression cannot appear in :expression:`$or` or :expression:`$not` expressions. - The text search, by default, does not return the matching documents diff --git a/source/includes/log-changes-to-database-profiler.rst b/source/includes/log-changes-to-database-profiler.rst index 6ef61f92a81..58e0d04571b 100644 --- a/source/includes/log-changes-to-database-profiler.rst +++ b/source/includes/log-changes-to-database-profiler.rst @@ -1,5 +1,4 @@ -Starting in MongoDB 5.0 (also available starting in 4.4.2, and 4.2.12), -changes made to the :ref:`database profiler +Starting in MongoDB 5.0, changes made to the :ref:`database profiler ` ``level``, ``slowms``, ``sampleRate``, or ``filter`` using the :dbcommand:`profile` command or :method:`db.setProfilingLevel()` wrapper method are recorded in the diff --git a/source/includes/negative-dividend.rst b/source/includes/negative-dividend.rst new file mode 100644 index 00000000000..29386fb4543 --- /dev/null +++ b/source/includes/negative-dividend.rst @@ -0,0 +1,3 @@ +When the dividend is negative, the remainder is also negative. For +more details on this behavior, see the `official JavaScript documentation +`_. diff --git a/source/includes/noCursorTimeoutNote.rst b/source/includes/noCursorTimeoutNote.rst new file mode 100644 index 00000000000..f0392ebbdab --- /dev/null +++ b/source/includes/noCursorTimeoutNote.rst @@ -0,0 +1,5 @@ +.. note:: + + Since MongoDB version 4.4.8, cursors that are part of a session ignore + the ``noCursorTimeout`` option. MongoDB automatically closes these + cursors when the session ends or times out. \ No newline at end of file diff --git a/source/includes/note-key-vault-permissions.rst b/source/includes/note-key-vault-permissions.rst new file mode 100644 index 00000000000..ffb548028d3 --- /dev/null +++ b/source/includes/note-key-vault-permissions.rst @@ -0,0 +1,5 @@ +To complete this tutorial, the database user your application uses to connect to +MongoDB must have :authrole:`dbAdmin` permissions on the following namespaces: + +- ``encryption.__keyVault`` +- ``medicalRecords`` database diff --git a/source/includes/note-shard-cluster-backup.rst b/source/includes/note-shard-cluster-backup.rst index 4c01ba33cd7..cd7a51b24ab 100644 --- a/source/includes/note-shard-cluster-backup.rst +++ b/source/includes/note-shard-cluster-backup.rst @@ -1,4 +1,3 @@ -.. important:: +.. important:: - To capture a consistent backup from a sharded - cluster you **must** stop *all* writes to the cluster. + To back up a sharded cluster you **must** stop *all* writes to the cluster. 
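Tying the ``fsync-mongos.rst`` include above to this backup note, a minimal mongosh sketch of stopping and resuming writes around a sharded cluster backup might look like the following; it assumes a connection to a ``mongos`` on a release that supports locking through ``mongos`` (7.1, or 7.0.2, 6.0.11, and 5.0.22 and later):

.. code-block:: javascript

   // Lock the cluster: flush pending writes and block new ones.
   db.getSiblingDB("admin").runCommand( { fsync: 1, lock: true } )

   // ... take the file system snapshot or other backup here ...

   // Unlock the cluster so writes can resume.
   db.getSiblingDB("admin").runCommand( { fsyncUnlock: 1 } )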
diff --git a/source/includes/parameters-map-reduce.rst b/source/includes/parameters-map-reduce.rst index 3c5af0434a1..07c78600e2a 100644 --- a/source/includes/parameters-map-reduce.rst +++ b/source/includes/parameters-map-reduce.rst @@ -27,21 +27,6 @@ The ``map`` function has the following requirements: - The ``map`` function may optionally call ``emit(key,value)`` any number of times to create an output document associating ``key`` with ``value``. -- In MongoDB 4.2 and earlier, a single emit can only hold half of - MongoDB's :ref:`maximum BSON document size - `. MongoDB removes this restriction - starting in version 4.4. - -- Starting in MongoDB 4.4, :dbcommand:`mapReduce` no longer supports - the deprecated :ref:`BSON Type ` JavaScript code with - scope (BSON Type 15) for its functions. The ``map`` function must be - either BSON Type String (BSON Type 2) or BSON Type JavaScript - (BSON Type 13). To pass constant values which will be - accessible in the ``map`` function, use the ``scope`` parameter. - - | The use of JavaScript code with scope for the ``map`` function has - been deprecated since version 4.2.1. - The following ``map`` function will call ``emit(key,value)`` either 0 or 1 times depending on the value of the input document's ``status`` field: @@ -100,16 +85,6 @@ The ``reduce`` function exhibits the following behaviors: requirement may be violated when large documents are returned and then joined together in subsequent ``reduce`` steps. -- Starting in MongoDB 4.4, :dbcommand:`mapReduce` no longer supports - the deprecated BSON Type JavaScript code with scope (BSON Type 15) - for its functions. The ``reduce`` function must be either BSON Type - String (BSON Type 2) or BSON Type JavaScript (BSON Type 13). To pass - constant values which will be accessible in the ``reduce`` function, - use the ``scope`` parameter. - - | The use of JavaScript code with scope for the ``reduce`` function - has been deprecated since version 4.2.1. - Because it is possible to invoke the ``reduce`` function more than once for the same key, the following properties need to be true: @@ -264,14 +239,4 @@ aware that: - The ``finalize`` function can access the variables defined in the ``scope`` parameter. -- Starting in MongoDB 4.4, :dbcommand:`mapReduce` no longer supports - the deprecated BSON Type JavaScript code with scope (BSON Type 15) for - its functions. The ``finalize`` function must be either BSON Type - String (BSON Type 2) or BSON Type JavaScript (BSON Type 13). To pass - constant values which will be accessible in the ``finalize`` function, - use the ``scope`` parameter. - - | The use of JavaScript code with scope for the ``finalize`` function - has been deprecated since version 4.2.1. - .. end-finalize diff --git a/source/includes/project-stage-and-array-index.rst b/source/includes/project-stage-and-array-index.rst index 4e9f1877cac..ecefd11b83f 100644 --- a/source/includes/project-stage-and-array-index.rst +++ b/source/includes/project-stage-and-array-index.rst @@ -1,2 +1,2 @@ You cannot use an array index with the :pipeline:`$project` stage. -See :ref:`example-project-array-indexes`. +For more information, see :ref:`example-project-array-indexes`. diff --git a/source/includes/qe-connection-boilerplate.rst b/source/includes/qe-connection-boilerplate.rst index 302aca19666..d54379e288a 100644 --- a/source/includes/qe-connection-boilerplate.rst +++ b/source/includes/qe-connection-boilerplate.rst @@ -11,7 +11,7 @@ .. 
step:: Generate Your Key - To configure queryable encryption for a locally managed key, + To configure Queryable Encryption for a locally managed key, generate a base64-encoded 96-byte string with no line breaks. .. code-block:: javascript @@ -20,7 +20,7 @@ .. step:: Create the Queryable Encryption Options - Create the queryable encryption options using the generated local key string: + Create the Queryable Encryption options using the generated local key string: .. code-block:: javascript :emphasize-lines: 5 diff --git a/source/includes/qe-tutorials/node/queryable-encryption-helpers.js b/source/includes/qe-tutorials/node/queryable-encryption-helpers.js index 543c13ee965..b2fb8a7ebc0 100644 --- a/source/includes/qe-tutorials/node/queryable-encryption-helpers.js +++ b/source/includes/qe-tutorials/node/queryable-encryption-helpers.js @@ -147,28 +147,28 @@ export async function getAutoEncryptionOptions( const tlsOptions = getKmipTlsOptions(); // start-kmip-encryption-options - const sharedLibraryPathOptions = { + const extraOptions = { cryptSharedLibPath: process.env.SHARED_LIB_PATH, // Path to your Automatic Encryption Shared Library }; const autoEncryptionOptions = { keyVaultNamespace, kmsProviders, - sharedLibraryPathOptions, + extraOptions, tlsOptions, }; // end-kmip-encryption-options return autoEncryptionOptions; } else { // start-auto-encryption-options - const sharedLibraryPathOptions = { + const extraOptions = { cryptSharedLibPath: process.env.SHARED_LIB_PATH, // Path to your Automatic Encryption Shared Library }; const autoEncryptionOptions = { keyVaultNamespace, kmsProviders, - sharedLibraryPathOptions, + extraOptions, }; // end-auto-encryption-options diff --git a/source/includes/qe-tutorials/python/requirements.txt b/source/includes/qe-tutorials/python/requirements.rst similarity index 100% rename from source/includes/qe-tutorials/python/requirements.txt rename to source/includes/qe-tutorials/python/requirements.rst diff --git a/source/includes/qe-tutorials/qe-quick-start.rst b/source/includes/qe-tutorials/qe-quick-start.rst new file mode 100644 index 00000000000..ee001b1ba2a --- /dev/null +++ b/source/includes/qe-tutorials/qe-quick-start.rst @@ -0,0 +1,18 @@ +- **kmsProviderName** - The KMS you're using to store your {+cmk-long+}. + Set this variable to ``"local"`` for this tutorial. +- **uri** - Your MongoDB deployment connection URI. Set your connection + URI in the ``MONGODB_URI`` environment variable or replace the value + directly. +- **keyVaultDatabaseName** - The database in MongoDB where your data + encryption keys (DEKs) will be stored. Set this variable + to ``"encryption"``. +- **keyVaultCollectionName** - The collection in MongoDB where your DEKs + will be stored. Set this variable to ``"__keyVault"``, which is the convention + to help prevent mistaking it for a user collection. +- **keyVaultNamespace** - The namespace in MongoDB where your DEKs will + be stored. Set this variable to the values of the ``keyVaultDatabaseName`` + and ``keyVaultCollectionName`` variables, separated by a period. +- **encryptedDatabaseName** - The database in MongoDB where your encrypted + data will be stored. Set this variable to ``"medicalRecords"``. +- **encryptedCollectionName** - The collection in MongoDB where your encrypted + data will be stored. Set this variable to ``"patients"``. 
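For the Node.js version of this quick start, the variable list above might translate into a sketch like the following; the values are the tutorial defaults described in the list, and the URI comes from your environment:

.. code-block:: javascript

   const kmsProviderName = "local";                      // KMS used in this tutorial
   const uri = process.env.MONGODB_URI;                  // your deployment connection URI
   const keyVaultDatabaseName = "encryption";
   const keyVaultCollectionName = "__keyVault";
   const keyVaultNamespace =
     `${keyVaultDatabaseName}.${keyVaultCollectionName}`; // "encryption.__keyVault"
   const encryptedDatabaseName = "medicalRecords";
   const encryptedCollectionName = "patients";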
diff --git a/source/includes/query-password.rst b/source/includes/query-password.rst index 35fe0f40b11..15544fb6393 100644 --- a/source/includes/query-password.rst +++ b/source/includes/query-password.rst @@ -16,11 +16,10 @@ bind to the LDAP server. You can configure this setting on a running :binary:`~bin.mongod` or :binary:`~bin.mongos` using :dbcommand:`setParameter`. -Starting in MongoDB 4.4, the ``ldapQueryPassword`` -:dbcommand:`setParameter` command accepts either a string or -an array of strings. If ``ldapQueryPassword`` is set to an array, MongoDB tries -each password in order until one succeeds. Use a password array to roll over the -LDAP account password without downtime. +The ``ldapQueryPassword`` :dbcommand:`setParameter` command accepts either a +string or an array of strings. If ``ldapQueryPassword`` is set to an array, +MongoDB tries each password in order until one succeeds. Use a password array +to roll over the LDAP account password without downtime. .. note:: @@ -46,11 +45,10 @@ If not set, :program:`mongod` does not attempt to bind to the LDAP server. You can configure this setting on a running :program:`mongod` using :dbcommand:`setParameter`. -Starting in MongoDB 4.4, the ``ldapQueryPassword`` -:dbcommand:`setParameter` command accepts either a string or -an array of strings. If ``ldapQueryPassword`` is set to an array, MongoDB tries -each password in order until one succeeds. Use a password array to roll over the -LDAP account password without downtime. +The ``ldapQueryPassword`` :dbcommand:`setParameter` command accepts either a +string or an array of strings. If ``ldapQueryPassword`` is set to an array, +MongoDB tries each password in order until one succeeds. Use a password array +to roll over the LDAP account password without downtime. .. note:: @@ -75,11 +73,10 @@ If not set, :program:`mongoldap` does not attempt to bind to the LDAP server. You can configure this setting on a running :program:`mongoldap` using :dbcommand:`setParameter`. -Starting in MongoDB 4.4, the ``ldapQueryPassword`` -:dbcommand:`setParameter` command accepts either a string or -an array of strings. If ``ldapQueryPassword`` is set to an array, MongoDB tries -each password in order until one succeeds. Use a password array to roll over the -LDAP account password without downtime. +The ``ldapQueryPassword`` :dbcommand:`setParameter` command accepts either a +string or an array of strings. If ``ldapQueryPassword`` is set to an array, +MongoDB tries each password in order until one succeeds. Use a password array +to roll over the LDAP account password without downtime. .. note:: diff --git a/source/includes/queryable-encryption/csfle-driver-tutorial-table.rst b/source/includes/queryable-encryption/csfle-driver-tutorial-table.rst new file mode 100644 index 00000000000..72d4131dc9b --- /dev/null +++ b/source/includes/queryable-encryption/csfle-driver-tutorial-table.rst @@ -0,0 +1,41 @@ +..
list-table:: + :widths: 30 70 + :header-rows: 1 + + * - Driver + - Quickstarts / Tutorials + + * - :driver:`Node.js ` + - | `Node.js Quickstart `__ + | :driver:`Client-Side Field Level Encryption Guide ` + + * - :driver:`Java ` + - | `Java Driver Quickstart `__ + | `Java Async Driver Quickstart `__ + | :driver:`Client-Side Field Level Encryption Guide ` + + * - `Java Reactive Streams `__ + - `Java RS Documentation `__ + + * - :driver:`Python (PyMongo) ` + - | `Python Driver Quickstart `__ + | :driver:`Client-Side Field Level Encryption Guide ` + + * - :driver:`C#/.NET ` + - `.NET Driver Quickstart `__ + + * - :driver:`C ` + - `C Driver Client-Side Field Level Encryption `__ + + * - :driver:`Go ` + - `Go Driver Quickstart `__ + + * - :driver:`Scala ` + - `Scala Documentation `__ + + * - :driver:`PHP ` + - `PHP Driver Quickstart `__ + + * - `Ruby `__ + - `Ruby Driver Quickstart + `__ \ No newline at end of file diff --git a/source/includes/example-qe-csfle-contention.rst b/source/includes/queryable-encryption/example-qe-csfle-contention.rst similarity index 65% rename from source/includes/example-qe-csfle-contention.rst rename to source/includes/queryable-encryption/example-qe-csfle-contention.rst index 1d7a90f48ea..59e66acc01a 100644 --- a/source/includes/example-qe-csfle-contention.rst +++ b/source/includes/queryable-encryption/example-qe-csfle-contention.rst @@ -1,8 +1,6 @@ -The Social Security Number (SSN) and patient identifier fields are high -:term:`cardinality` fields that contain unique values in a data set. For -high cardinality fields, you can set ``contention`` to a low value. The -following example sets ``contention`` to ``0`` for the ``patientId`` and -``patientInfo.ssn`` fields: +The example below sets ``contention`` to 0 for the low cardinality +Social Security Number (SSN) and patient ID fields, since these are +unique identifiers that shouldn't repeat in the data set. .. code-block:: javascript :emphasize-lines: 7,13 @@ -21,7 +19,14 @@ following example sets ``contention`` to ``0`` for the ``patientId`` and queries: { queryType: "equality", contention: "0"} }, - ... + { + path: "medications", + bsonType: "array" + }, + { + path: "patientInfo.billing", + bsonType: "object" + } ] } @@ -33,4 +38,4 @@ following example sets ``contention`` to ``0`` for the ``patientId`` and .. - DOB between 1930-1990 (unencrypted, ~22K values) .. - gender (encrypted, Male/Female/Non-binary) .. - creditCard.type (encrypted, 4 types) -.. - creditCard.expiry (encrypted, ~84 possible values) +.. - creditCard.expiry (encrypted, ~84 possible values) \ No newline at end of file diff --git a/source/includes/queryable-encryption/fact-csfle-compatibility-drivers.rst b/source/includes/queryable-encryption/fact-csfle-compatibility-drivers.rst index 0377f4d7bb5..cb242f5fa57 100644 --- a/source/includes/queryable-encryption/fact-csfle-compatibility-drivers.rst +++ b/source/includes/queryable-encryption/fact-csfle-compatibility-drivers.rst @@ -25,7 +25,7 @@ object: to create the database connection with the automatic encryption rules included as part of the {+qe+} configuration object. Defer to the :ref:`driver API reference - ` for more complete documentation and + ` for more complete documentation and tutorials. 
For all clients, the ``keyVault`` and ``kmsProviders`` specified diff --git a/source/includes/queryable-encryption/qe-csfle-configure-mongocryptd.rst b/source/includes/queryable-encryption/qe-csfle-configure-mongocryptd.rst index 6c57be60b73..398bb60c7b3 100644 --- a/source/includes/queryable-encryption/qe-csfle-configure-mongocryptd.rst +++ b/source/includes/queryable-encryption/qe-csfle-configure-mongocryptd.rst @@ -1,13 +1,10 @@ If the driver has access to the ``mongocryptd`` process, it spawns the process by default. -.. note:: mongocryptd Port In Use +.. important:: Start on Boot - If a ``mongocryptd`` process is already running on the port specified - by the driver, the driver may log a warning and continue without - spawning a new process. Any settings specified by the driver only - apply once the existing process exits and a new encrypted client - attempts to connect. + If possible, start ``mongocryptd`` on boot, rather than launching it + on demand. Configure how the driver starts ``mongocryptd`` through the following parameters: @@ -22,27 +19,28 @@ following parameters: * - port - | The port from which ``mongocryptd`` listens for messages. - | **Default**: ``27020`` + | *Default*: ``27020`` * - idleShutdownTimeoutSecs - | Number of idle seconds the ``mongocryptd`` process waits before exiting. - | **Default**: ``60`` + | *Default*: ``60`` * - mongocryptdURI - | The URI on which to run the ``mongocryptd`` process. - | **Default**: ``"mongodb://localhost:27020"`` + | *Default*: ``"mongodb://localhost:27020"`` * - mongocryptdBypassSpawn - | When ``true``, prevents the driver from automatically spawning ``mongocryptd``. - | **Default**: ``false`` + | *Default*: ``false`` * - mongocryptdSpawnPath - | The full path to ``mongocryptd``. - | **Default**: Defaults to empty string and spawns from the system path. - -.. important:: Start on Boot - - If possible, start ``mongocryptd`` on boot, rather than launching it - on demand. \ No newline at end of file + | *Default*: Defaults to empty string and spawns from the system + path. + +If a ``mongocryptd`` process is already running on the port specified by +the driver, the driver may log a warning and continue without spawning a +new process. Any settings specified by the driver only apply once the +existing process exits and a new encrypted client attempts to connect. \ No newline at end of file diff --git a/source/includes/queryable-encryption/qe-csfle-contention.rst b/source/includes/queryable-encryption/qe-csfle-contention.rst new file mode 100644 index 00000000000..905ff4ca796 --- /dev/null +++ b/source/includes/queryable-encryption/qe-csfle-contention.rst @@ -0,0 +1,16 @@ +Concurrent write operations, such as inserting the same field/value pair into +multiple documents in close succession, can cause contention: conflicts that +delay operations. + +With {+qe+}, MongoDB tracks the occurrences of each field/value pair in an +encrypted collection using an internal counter. The contention factor +partitions this counter, similar to an array. This minimizes issues with +incrementing the counter when using ``insert``, ``update``, or ``findAndModify`` to add or modify an encrypted field +with the same field/value pair in close succession. ``contention = 0`` +creates an array with one element at index 0. ``contention = 4`` creates an +array with 5 elements at indexes 0-4. MongoDB increments a random array element +during insert. + +When unset, ``contention`` defaults to ``8``, which provides high performance +for most workloads. 
Higher contention improves the performance of insert and +update operations on low cardinality fields, but decreases find performance. \ No newline at end of file diff --git a/source/includes/queryable-encryption/qe-csfle-install-mongocryptd.rst b/source/includes/queryable-encryption/qe-csfle-install-mongocryptd.rst index 67a6a909843..775414636d3 100644 --- a/source/includes/queryable-encryption/qe-csfle-install-mongocryptd.rst +++ b/source/includes/queryable-encryption/qe-csfle-install-mongocryptd.rst @@ -1,23 +1,24 @@ -For supported Linux Operating Systems, install the Server package by following the -:ref:`install on Linux tutorial ` -, follow the documented installation instructions and install the +**For supported Linux Operating Systems:** +To install the Server package, follow the :ref:`install on Linux +tutorial ` and install the ``mongodb-enterprise`` server package. Alternatively, specify ``mongodb-enterprise-cryptd`` instead to install only the ``mongocryptd`` binary. The package manager installs -the binaries to a location in the system PATH (e.g. ``/usr/bin/``) +the binaries to a location in the system PATH. -For OSX, install the Server package by following the -:ref:`install on MacOS tutorial `. -The package manager installs binaries to a location in the system -PATH. +**For OSX:** +To install the Server package, follow the :ref:`install on MacOS +tutorial `. The package manager installs +binaries to a location in the system PATH. -For Windows, install the Server package by following the -:ref:`install on Windows tutorial `. -You must add the ``mongocryptd`` package to your system PATH after -installation. Defer to documented best practices for your Windows -installation for instructions on adding the ``mongocryptd`` binary to -the system PATH. +**For Windows:** +To install the Server package, follow the :ref:`install on Windows +tutorial `. You must add the ``mongocryptd`` +package to your system PATH after installation. Follow the documented +best practices for your Windows installation to add the ``mongocryptd`` +binary to the system PATH. -For installations via an official tarball or ZIP archive, -follow the documented best practices for your operating system to add -the ``mongocryptd`` binary to your system PATH. \ No newline at end of file +**For installation from an official tarball or ZIP archive:** +Follow the documented best practices for your operating system to add +the ``mongocryptd`` binary to your system PATH. \ No newline at end of file diff --git a/source/includes/queryable-encryption/qe-csfle-manual-enc-overview.rst b/source/includes/queryable-encryption/qe-csfle-manual-enc-overview.rst new file mode 100644 index 00000000000..58eb46818d7 --- /dev/null +++ b/source/includes/queryable-encryption/qe-csfle-manual-enc-overview.rst @@ -0,0 +1,5 @@ +{+manual-enc-first+} provides fine-grained control over security, at +the cost of increased complexity when configuring collections and +writing code for MongoDB Drivers. With {+manual-enc+}, you specify how +to encrypt fields in your document for each operation you +perform on the database, and you include this logic throughout your application.
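+
+As a minimal illustrative sketch, the following ``mongosh`` snippet shows
+the general shape of {+manual-enc+}: the application encrypts each value
+itself before writing it. The field names and sample values are
+hypothetical, and the snippet assumes a connection that was already
+configured with ``kmsProviders`` and a key vault namespace.
+
+.. code-block:: javascript
+
+   // Assumes the Mongo() connection was created with encryption options,
+   // so the key vault and client encryption helpers are available.
+   const keyVault = db.getMongo().getKeyVault()
+   const clientEncryption = db.getMongo().getClientEncryption()
+
+   // Use the _id of an existing data encryption key from the key vault.
+   const dekId = keyVault.getKeys().next()._id
+
+   // Explicitly encrypt the value before inserting the document.
+   const encryptedSsn = clientEncryption.encrypt(
+     dekId,
+     "987-65-4320",
+     "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic"
+   )
+
+   db.patients.insertOne({ name: "Jon Doe", ssn: encryptedSsn })
+
+With automatic encryption, the driver performs this step for you on every
+read and write.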
diff --git a/source/includes/queryable-encryption/qe-csfle-partial-filter-disclaimer.rst b/source/includes/queryable-encryption/qe-csfle-partial-filter-disclaimer.rst new file mode 100644 index 00000000000..3cac66b4ee2 --- /dev/null +++ b/source/includes/queryable-encryption/qe-csfle-partial-filter-disclaimer.rst @@ -0,0 +1,3 @@ +If you are using :ref:`{+csfle+} ` or :ref:`{+qe+} +`, a ``partialFilterExpression`` cannot reference an +encrypted field. \ No newline at end of file diff --git a/source/includes/queryable-encryption/qe-csfle-schema-validation.rst b/source/includes/queryable-encryption/qe-csfle-schema-validation.rst new file mode 100644 index 00000000000..b2d1c8bd8ef --- /dev/null +++ b/source/includes/queryable-encryption/qe-csfle-schema-validation.rst @@ -0,0 +1,12 @@ +If you have :ref:`{+csfle+} ` or :ref:`{+qe+} +` enabled on a collection, validation is +subject to the following restrictions: + +* For {+csfle-abbrev+}, when running :dbcommand:`collMod`, the + :ref:`libmongocrypt` library prefers the JSON + :ref:`{+enc-schema+} ` specified in the + command. This enables setting a schema on a collection that does not yet + have one. + +* For {+qe+}, any JSON schema that includes an encrypted field results in a + query analysis error. \ No newline at end of file diff --git a/source/includes/queryable-encryption/qe-csfle-setting-contention.rst b/source/includes/queryable-encryption/qe-csfle-setting-contention.rst new file mode 100644 index 00000000000..ab23de5dac7 --- /dev/null +++ b/source/includes/queryable-encryption/qe-csfle-setting-contention.rst @@ -0,0 +1,46 @@ +Consider increasing ``contention`` above the default value of ``8`` only if the +field has frequent concurrent write operations. Since high contention values +sacrifice find performance in favor of insert and update operations, the +benefit of a high contention factor for a rarely updated field is unlikely to +outweigh the drawback. + +Consider decreasing ``contention`` if a field is often queried, but +rarely written. In this case, find performance is preferable to write and +update performance. + +You can calculate the contention factor for a field by using a formula where: + +- ``ω`` is the number of concurrent write operations on the field in a short + time, such as 30ms. If unknown, you can use the server's number of virtual + cores. +- ``valinserts`` is the number of unique field/value pairs inserted since last + performing :ref:`metadata compaction `. +- ``ω``:sup:`∗` is ``ω/valinserts`` rounded up to the nearest integer. For a + workload of 100 operations with 1000 recent values, ``100/1000 = 0.1``, + which rounds up to ``1``. + +A reasonable contention factor, ``cf``, is the result of the following +formula, rounded up to the nearest positive integer: + +``(ω``:sup:`∗` ``· (ω``:sup:`∗` ``− 1)) / 0.2`` + +For example, if there are 100 concurrent write operations on a field in 30ms, +then ``ω = 100``. If there are 50 recent unique values for that field, then +``ω``:sup:`∗` ``= 100/50 = 2``. This results in ``cf = (2·1)/0.2 = 10``. + +.. warning:: + + Don't set the contention factor based on properties of the data itself, such as + the frequency of field/value pairs (:term:`cardinality`). Only set the contention factor based on your workload. + + Consider a case + where ``ω = 100`` and ``valinserts = 1000``, resulting in ``ω``:sup:`∗` ``= + 100/1000 = 0.1 ≈ 1`` and ``cf = (1·0)/0.2 = 0 ≈ 1``. 20 of + the values appear very frequently, so you set ``contention = 3`` instead.
An + attacker with access to multiple database snapshots can infer that the high + setting indicates frequent field/value pairs. In this case, leaving + ``contention`` unset so that it defaults to ``8`` would prevent the attacker + from having that information. + +For thorough information on contention and its cryptographic implications, see +"Section 9: Guidelines" in MongoDB's `Queryable Encryption Technical Paper `_ \ No newline at end of file diff --git a/source/includes/queryable-encryption/qe-csfle-warning-local-keys.rst b/source/includes/queryable-encryption/qe-csfle-warning-local-keys.rst new file mode 100644 index 00000000000..c3e1ed32ba1 --- /dev/null +++ b/source/includes/queryable-encryption/qe-csfle-warning-local-keys.rst @@ -0,0 +1,13 @@ +.. warning:: Do Not Use a Local Key File in Production + + A local key file in your filesystem is insecure and is + **not recommended** for production. Instead, + you should store your {+cmk-long+}s in a remote + :wikipedia:`{+kms-long+} ` + ({+kms-abbr+}). + + To learn how to use a remote {+kms-abbr+} in your + {+in-use-encryption+} enabled application, + see the :ref:`{+qe+} Automatic Encryption Tutorial + ` or :ref:`{+csfle-abbrev+} + Automatic Encryption Tutorial `. \ No newline at end of file diff --git a/source/includes/queryable-encryption/qe-enable-qe-at-collection-creation.rst b/source/includes/queryable-encryption/qe-enable-qe-at-collection-creation.rst new file mode 100644 index 00000000000..f5b8048cb9f --- /dev/null +++ b/source/includes/queryable-encryption/qe-enable-qe-at-collection-creation.rst @@ -0,0 +1,4 @@ +Enable {+qe+} at collection creation. You can't encrypt fields on +documents that are already in a collection. If you have existing data +that needs encryption, consider explicitly creating a new collection and +then using the :pipeline:`$out` aggregation stage to move documents into it. \ No newline at end of file diff --git a/source/includes/queryable-encryption/qe-explicitly-create-collection.rst b/source/includes/queryable-encryption/qe-explicitly-create-collection.rst new file mode 100644 index 00000000000..97d2eab6819 --- /dev/null +++ b/source/includes/queryable-encryption/qe-explicitly-create-collection.rst @@ -0,0 +1,7 @@ +.. important:: + + Explicitly create your collection, rather than creating it implicitly + with an insert operation. When you create a collection using + ``createCollection()``, MongoDB creates an index on the encrypted + fields. Without this index, queries on encrypted fields may run + slowly. \ No newline at end of file diff --git a/source/includes/queryable-encryption/qe-facts-mongocryptd-process.rst b/source/includes/queryable-encryption/qe-facts-mongocryptd-process.rst index 59eba9e9d9d..47f763b91dc 100644 --- a/source/includes/queryable-encryption/qe-facts-mongocryptd-process.rst +++ b/source/includes/queryable-encryption/qe-facts-mongocryptd-process.rst @@ -12,11 +12,14 @@ The ``mongocryptd`` process: :query:`document validation <$jsonSchema>` syntax, ``mongocryptd`` returns an error. 
-``mongocryptd`` only performs the previous functions, and doesn't perform any of the following: +``mongocryptd`` only performs the previous functions, and doesn't +perform any of the following: - ``mongocryptd`` doesn't perform encryption or decryption - ``mongocryptd`` doesn't access any encryption key material - ``mongocryptd`` doesn't listen over the network -To perform client-side field level encryption and automatic decryption, Drivers use the Apache-licensed `libmongocrypt -`__ library \ No newline at end of file +To perform field encryption and automatic decryption, the drivers use +the Apache-licensed `libmongocrypt +`__ +library. \ No newline at end of file diff --git a/source/includes/queryable-encryption/quick-start/dek.rst b/source/includes/queryable-encryption/quick-start/dek.rst index 22821ad09d1..fd6ab12494d 100644 --- a/source/includes/queryable-encryption/quick-start/dek.rst +++ b/source/includes/queryable-encryption/quick-start/dek.rst @@ -80,10 +80,7 @@ .. note:: {+key-vault-long-title+} Namespace Permissions - The {+key-vault-long+} is in the ``encryption.__keyVault`` - namespace. Ensure that the database user your application uses to connect - to MongoDB has :ref:`ReadWrite ` - permissions on this namespace. + .. include:: /includes/note-key-vault-permissions .. tabs-drivers:: diff --git a/source/includes/queryable-encryption/reference/kms-providers/aws.rst b/source/includes/queryable-encryption/reference/kms-providers/aws.rst index 1af611ae961..8f8b52c63be 100644 --- a/source/includes/queryable-encryption/reference/kms-providers/aws.rst +++ b/source/includes/queryable-encryption/reference/kms-providers/aws.rst @@ -1,4 +1,4 @@ -.. _qe-reference-kms-providers-aws-architecture: +.. _qe-fundamentals-kms-providers-aws-architecture: Architecture ```````````` diff --git a/source/includes/queryable-encryption/reference/kms-providers/azure.rst b/source/includes/queryable-encryption/reference/kms-providers/azure.rst index c346e873cca..1327fb2a518 100644 --- a/source/includes/queryable-encryption/reference/kms-providers/azure.rst +++ b/source/includes/queryable-encryption/reference/kms-providers/azure.rst @@ -1,4 +1,4 @@ -.. _qe-reference-kms-providers-azure-architecture: +.. _qe-fundamentals-kms-providers-azure-architecture: Architecture ```````````` diff --git a/source/includes/queryable-encryption/reference/kms-providers/gcp.rst b/source/includes/queryable-encryption/reference/kms-providers/gcp.rst index e54b66633c4..580a5a929bc 100644 --- a/source/includes/queryable-encryption/reference/kms-providers/gcp.rst +++ b/source/includes/queryable-encryption/reference/kms-providers/gcp.rst @@ -1,4 +1,4 @@ -.. _qe-reference-kms-providers-gcp-architecture: +.. _qe-fundamentals-kms-providers-gcp-architecture: Architecture ```````````` diff --git a/source/includes/queryable-encryption/reference/kms-providers/kmip.rst b/source/includes/queryable-encryption/reference/kms-providers/kmip.rst index 8a791aadd3a..941faa90839 100644 --- a/source/includes/queryable-encryption/reference/kms-providers/kmip.rst +++ b/source/includes/queryable-encryption/reference/kms-providers/kmip.rst @@ -38,7 +38,7 @@ object for a KMIP compliant {+kms-long+}: - Yes - Specifies a hostname and port number for the authentication server. -.. _qe-reference-kms-providers-kmip-datakeyopts: +.. 
_qe-fundamentals-kms-providers-kmip-datakeyopts: dataKeyOpts Object `````````````````` diff --git a/source/includes/queryable-encryption/set-up/csharp.rst b/source/includes/queryable-encryption/set-up/csharp.rst deleted file mode 100644 index a85c289b444..00000000000 --- a/source/includes/queryable-encryption/set-up/csharp.rst +++ /dev/null @@ -1,9 +0,0 @@ -.. list-table:: - :header-rows: 1 - :widths: 30 70 - - * - Dependency Name - - Description - - * - x64 Support - - {+qe+} requires x64 support. diff --git a/source/includes/queryable-encryption/set-up/go.rst b/source/includes/queryable-encryption/set-up/go.rst deleted file mode 100644 index 65a3a2b3653..00000000000 --- a/source/includes/queryable-encryption/set-up/go.rst +++ /dev/null @@ -1,10 +0,0 @@ -.. list-table:: - :header-rows: 1 - :widths: 30 70 - - * - Dependency Name - - Description - - * - :ref:`qe-reference-libmongocrypt` - - The ``libmongocrypt`` library contains bindings to communicate - with the native library that manages the encryption. diff --git a/source/includes/queryable-encryption/set-up/java.rst b/source/includes/queryable-encryption/set-up/java.rst deleted file mode 100644 index 9187fb26e99..00000000000 --- a/source/includes/queryable-encryption/set-up/java.rst +++ /dev/null @@ -1,10 +0,0 @@ -.. list-table:: - :header-rows: 1 - :widths: 30 70 - - * - Dependency Name - - Description - - * - `mongodb-crypt `__ - - The ``mongodb-crypt`` library contains bindings to communicate - with the native library that manages the encryption. diff --git a/source/includes/queryable-encryption/set-up/node.rst b/source/includes/queryable-encryption/set-up/node.rst deleted file mode 100644 index c02b42bde64..00000000000 --- a/source/includes/queryable-encryption/set-up/node.rst +++ /dev/null @@ -1,17 +0,0 @@ -.. list-table:: - :header-rows: 1 - :widths: 30 70 - - * - Dependency Name - - Description - - * - `mongodb-client-encryption - `_ - - - NodeJS wrapper for the ``libmongocrypt`` encryption library. - The ``libmongocrypt`` library contains bindings to communicate - with the native library that manages the encryption. - - .. note:: - - .. include:: /includes/in-use-encryption/node-mongodb-client-encryption-note.rst diff --git a/source/includes/queryable-encryption/set-up/python.rst b/source/includes/queryable-encryption/set-up/python.rst deleted file mode 100644 index 73c97deb2da..00000000000 --- a/source/includes/queryable-encryption/set-up/python.rst +++ /dev/null @@ -1,12 +0,0 @@ -.. list-table:: - :header-rows: 1 - :widths: 30 70 - - * - Dependency Name - - Description - - * - `pymongocrypt - `_ - - Python wrapper for the ``libmongocrypt`` encryption library. - The ``libmongocrypt`` library contains bindings to communicate - with the native library that manages the encryption. \ No newline at end of file diff --git a/source/includes/queryable-encryption/tutorials/assign-app-variables.rst b/source/includes/queryable-encryption/tutorials/assign-app-variables.rst new file mode 100644 index 00000000000..ff4d99b15c3 --- /dev/null +++ b/source/includes/queryable-encryption/tutorials/assign-app-variables.rst @@ -0,0 +1,192 @@ +The code samples in this tutorial use the following variables to perform +the {+qe+} workflow: + +.. tabs-drivers:: + + .. tab:: + :tabid: shell + + - **kmsProviderName** - The KMS you use to store your + {+cmk-long+}. Set this to your key provider: ``"aws"``, + ``"azure"``, ``"gcp"``, or ``"kmip"``. + - **uri** - Your MongoDB deployment connection URI. 
Set your + connection URI in the ``MONGODB_URI`` environment variable or + replace the value directly. + - **keyVaultDatabaseName** - The MongoDB database where your + data encryption keys (DEKs) will be stored. Set this to ``"encryption"``. + - **keyVaultCollectionName** - The collection in MongoDB where + your DEKs will be stored. Set this to ``"__keyVault"``. + - **keyVaultNamespace** - The namespace in MongoDB where your DEKs + will be stored. Set this to the values of the + ``keyVaultDatabaseName`` and ``keyVaultCollectionName`` + variables, separated by a period. + - **encryptedDatabaseName** - The MongoDB database where your + encrypted data will be stored. Set this to ``"medicalRecords"``. + - **encryptedCollectionName** - The collection in MongoDB where + your encrypted data will be stored. Set this to ``"patients"``. + + You can declare these variables by using the following code: + + .. literalinclude:: /includes/qe-tutorials/mongosh/queryable-encryption-tutorial.js + :start-after: start-setup-application-variables + :end-before: end-setup-application-variables + :language: javascript + :dedent: + + .. tab:: + :tabid: nodejs + + - **kmsProviderName** - The KMS you use to store your + {+cmk-long+}. Set this to your key provider: ``"aws"``, + ``"azure"``, ``"gcp"``, or ``"kmip"``. + - **uri** - Your MongoDB deployment connection URI. Set your connection + URI in the ``MONGODB_URI`` environment variable or replace the value + directly. + - **keyVaultDatabaseName** - The MongoDB database where your data + encryption keys (DEKs) will be stored. Set this to ``"encryption"``. + - **keyVaultCollectionName** - The collection in MongoDB where your DEKs + will be stored. Set this to ``"__keyVault"``. + - **keyVaultNamespace** - The namespace in MongoDB where your DEKs + will be stored. Set this to the values of the ``keyVaultDatabaseName`` + and ``keyVaultCollectionName`` variables, separated by a period. + - **encryptedDatabaseName** - The MongoDB database where your encrypted + data will be stored. Set this to ``"medicalRecords"``. + - **encryptedCollectionName** - The collection in MongoDB where your encrypted + data will be stored. Set this to ``"patients"``. + + You can declare these variables by using the following code: + + .. literalinclude:: /includes/qe-tutorials/node/queryable-encryption-tutorial.js + :start-after: start-setup-application-variables + :end-before: end-setup-application-variables + :language: javascript + :dedent: + + .. tab:: + :tabid: python + + - **kms_provider_name** - The KMS you use to store your + {+cmk-long+}. Set this to your key provider: ``"aws"``, + ``"azure"``, ``"gcp"``, or ``"kmip"``. + - **uri** - Your MongoDB deployment connection URI. Set your connection + URI in the ``MONGODB_URI`` environment variable or replace the value + directly. + - **key_vault_database_name** - The MongoDB database where your data + encryption keys (DEKs) will be stored. Set this to ``"encryption"``. + - **key_vault_collection_name** - The collection in MongoDB where your DEKs + will be stored. Set this to ``"__keyVault"``. + - **key_vault_namespace** - The namespace in MongoDB where your DEKs + will be stored. Set this to the values of the ``key_vault_database_name`` + and ``key_vault_collection_name`` variables, separated by a period. + - **encrypted_database_name** - The MongoDB database where your encrypted + data will be stored. Set this to ``"medicalRecords"``. + - **encrypted_collection_name** - The collection in MongoDB where + your encrypted data will be stored. 
Set this to ``"patients"``. + + You can declare these variables by using the following code: + + .. literalinclude:: /includes/qe-tutorials/python/queryable_encryption_tutorial.py + :start-after: start-setup-application-variables + :end-before: end-setup-application-variables + :language: python + :dedent: + + .. tab:: + :tabid: java-sync + + - **kmsProviderName** - The KMS you use to store your + {+cmk-long+}. Set this to your key provider: ``"aws"``, + ``"azure"``, ``"gcp"``, or ``"kmip"``. + - **uri** - Your MongoDB deployment connection URI. Set your connection + URI in the ``MONGODB_URI`` environment variable or replace the value + directly. + - **keyVaultDatabaseName** - The MongoDB database where your data + encryption keys (DEKs) will be stored. Set this to ``"encryption"``. + - **keyVaultCollectionName** - The collection in MongoDB where your DEKs + will be stored. Set this to ``"__keyVault"``. + - **keyVaultNamespace** - The namespace in MongoDB where your DEKs + will be stored. Set this to the values of the ``keyVaultDatabaseName`` + and ``keyVaultCollectionName`` variables, separated by a period. + - **encryptedDatabaseName** - The MongoDB database where your encrypted + data will be stored. Set this to ``"medicalRecords"``. + - **encryptedCollectionName** - The collection in MongoDB where your encrypted + data will be stored. Set this to ``"patients"``. + + You can declare these variables by using the following code: + + .. literalinclude:: /includes/qe-tutorials/java/src/main/java/com/mongodb/tutorials/qe/QueryableEncryptionTutorial.java + :start-after: start-setup-application-variables + :end-before: end-setup-application-variables + :language: java + :dedent: + + .. tab:: + :tabid: go + + - **kmsProviderName** - The KMS you use to store your + {+cmk-long+}. Set this to your key provider: ``"aws"``, + ``"azure"``, ``"gcp"``, or ``"kmip"``. + - **uri** - Your MongoDB deployment connection URI. Set your connection + URI in the ``MONGODB_URI`` environment variable or replace the value + directly. + - **keyVaultDatabaseName** - The MongoDB database where your data + encryption keys (DEKs) will be stored. Set this to ``"encryption"``. + - **keyVaultCollectionName** - The collection in MongoDB where your DEKs + will be stored. Set this to ``"__keyVault"``. + - **keyVaultNamespace** - The namespace in MongoDB where your DEKs + will be stored. Set this to the values of the + ``keyVaultDatabaseName`` and ``keyVaultCollectionName`` + variables, separated by a period. + - **encryptedDatabaseName** - The MongoDB database where your encrypted + data will be stored. Set this to ``"medicalRecords"``. + - **encryptedCollectionName** - The collection in MongoDB where your encrypted + data will be stored. Set this to ``"patients"``. + + You can declare these variables by using the following code: + + .. literalinclude:: /includes/qe-tutorials/go/queryable_encryption_tutorial.go + :start-after: start-setup-application-variables + :end-before: end-setup-application-variables + :language: go + :dedent: + + .. tab:: + :tabid: csharp + + - **kmsProviderName** - The KMS you use to store your + {+cmk-long+}. Set this to your key provider: ``"aws"``, + ``"azure"``, ``"gcp"``, or ``"kmip"``. + - **keyVaultDatabaseName** - The MongoDB database where your data + encryption keys (DEKs) will be stored. Set ``keyVaultDatabaseName`` + to ``"encryption"``. + - **keyVaultCollectionName** - The collection in MongoDB where your DEKs + will be stored. Set ``keyVaultCollectionName`` to ``"__keyVault"``. 
+ - **keyVaultNamespace** - The namespace in MongoDB where your DEKs + will be stored. Set ``keyVaultNamespace`` to a new + ``CollectionNamespace`` object whose name is the values of the + ``keyVaultDatabaseName`` and ``keyVaultCollectionName`` + variables, separated by a period. + - **encryptedDatabaseName** - The MongoDB database where your encrypted + data will be stored. Set ``encryptedDatabaseName`` to ``"medicalRecords"``. + - **encryptedCollectionName** - The collection in MongoDB where your encrypted + data will be stored. Set ``encryptedCollectionName`` to ``"patients"``. + - **uri** - Your MongoDB deployment connection URI. Set your connection + URI in the ``appsettings.json`` file or replace the value + directly. + + You can declare these variables by using the following code: + + .. literalinclude:: /includes/qe-tutorials/csharp/QueryableEncryptionTutorial.cs + :start-after: start-setup-application-variables + :end-before: end-setup-application-variables + :language: csharp + :dedent: + +.. important:: {+key-vault-long-title+} Namespace Permissions + + The {+key-vault-long+} is in the ``encryption.__keyVault`` + namespace. Ensure that the database user your application uses to connect + to MongoDB has :ref:`ReadWrite ` + permissions on this namespace. + +.. include:: /includes/queryable-encryption/env-variables.rst \ No newline at end of file diff --git a/source/includes/queryable-encryption/tutorials/automatic/aws/dek.rst b/source/includes/queryable-encryption/tutorials/automatic/aws/dek.rst index f969d60a6b3..fe5dc7018cc 100644 --- a/source/includes/queryable-encryption/tutorials/automatic/aws/dek.rst +++ b/source/includes/queryable-encryption/tutorials/automatic/aws/dek.rst @@ -271,7 +271,7 @@ The output from the code in this section should resemble the following: To view a diagram showing how your client application creates your {+dek-long+} when using an AWS KMS, see - :ref:`qe-reference-kms-providers-aws-architecture`. + :ref:`qe-fundamentals-kms-providers-aws-architecture`. To learn more about the options for creating a {+dek-long+} encrypted with a {+cmk-long+} hosted in AWS KMS, see diff --git a/source/includes/queryable-encryption/tutorials/automatic/azure/dek.rst b/source/includes/queryable-encryption/tutorials/automatic/azure/dek.rst index 63554345681..ecdc3f908cf 100644 --- a/source/includes/queryable-encryption/tutorials/automatic/azure/dek.rst +++ b/source/includes/queryable-encryption/tutorials/automatic/azure/dek.rst @@ -266,7 +266,7 @@ To view a diagram showing how your client application creates your {+dek-long+} when using an {+azure-kv+}, see - :ref:`qe-reference-kms-providers-azure-architecture`. + :ref:`qe-fundamentals-kms-providers-azure-architecture`. To learn more about the options for creating a {+dek-long+} encrypted with a {+cmk-long+} hosted in {+azure-kv+}, see diff --git a/source/includes/queryable-encryption/tutorials/automatic/gcp/dek.rst b/source/includes/queryable-encryption/tutorials/automatic/gcp/dek.rst index 623359854c7..defc6be4cab 100644 --- a/source/includes/queryable-encryption/tutorials/automatic/gcp/dek.rst +++ b/source/includes/queryable-encryption/tutorials/automatic/gcp/dek.rst @@ -284,7 +284,7 @@ The output from the code in this section should resemble the following: To view a diagram showing how your client application creates your {+dek-long+} when using an {+gcp-kms+}, see - :ref:`qe-reference-kms-providers-gcp-architecture`. + :ref:`qe-fundamentals-kms-providers-gcp-architecture`. 
To learn more about the options for creating a {+dek-long+} encrypted with a {+cmk-long+} hosted in {+azure-kv+}, see diff --git a/source/includes/queryable-encryption/tutorials/exp/dek.rst b/source/includes/queryable-encryption/tutorials/exp/dek.rst index ef8274e906d..a075b28eb08 100644 --- a/source/includes/queryable-encryption/tutorials/exp/dek.rst +++ b/source/includes/queryable-encryption/tutorials/exp/dek.rst @@ -71,10 +71,7 @@ .. note:: {+key-vault-long-title+} Namespace Permissions - The {+key-vault-long+} is in the ``encryption.__keyVault`` - namespace. Ensure that the database user your application uses to connect - to MongoDB has :ref:`ReadWrite ` - permissions on this namespace. + .. include:: /includes/note-key-vault-permissions .. tabs-drivers:: diff --git a/source/includes/quick-start/cmk.rst b/source/includes/quick-start/cmk.rst index 00f62cb2c0e..76b00aa51f9 100644 --- a/source/includes/quick-start/cmk.rst +++ b/source/includes/quick-start/cmk.rst @@ -54,6 +54,6 @@ as the file ``master-key.txt``: :language: csharp :dedent: -.. include:: /includes/csfle-warning-local-keys.rst +.. include:: /includes/queryable-encryption/qe-warning-local-keys.rst .. include:: /includes/in-use-encryption/cmk-bash.rst diff --git a/source/includes/quick-start/dek.rst b/source/includes/quick-start/dek.rst index bbb0c2ba2ef..06715fd3f6a 100644 --- a/source/includes/quick-start/dek.rst +++ b/source/includes/quick-start/dek.rst @@ -69,10 +69,7 @@ .. note:: {+key-vault-long-title+} Namespace Permissions - The {+key-vault-long+} is in the ``encryption.__keyVault`` - namespace. Ensure that the database user your application uses to connect - to MongoDB has :ref:`ReadWrite ` - permissions on this namespace. + .. include:: /includes/note-key-vault-permissions .. tabs-drivers:: diff --git a/source/includes/quiesce-period.rst b/source/includes/quiesce-period.rst index 60c53ab2a8e..9953c8e5e78 100644 --- a/source/includes/quiesce-period.rst +++ b/source/includes/quiesce-period.rst @@ -64,9 +64,3 @@ quiesce period, which allows existing operations to complete. New operations are sent to other :binary:`~bin.mongos` nodes. In MongoDB versions earlier than 5.0, :binary:`~bin.mongos` shuts down immediately and does not use |timeout|. - -For a :binary:`~bin.mongod` :term:`primary` in MongoDB 4.4 and earlier, -``timeoutSecs`` specifies the time in seconds that the :term:`primary` -waits for a :term:`secondary` to catch up for the ``shutdownServer`` -command. If no secondaries catch up within ``timeoutSecs``, the -``shutdownServer`` command fails. diff --git a/source/includes/rapid-release-short.rst b/source/includes/rapid-release-short.rst index c8d47517e84..d87e480962c 100644 --- a/source/includes/rapid-release-short.rst +++ b/source/includes/rapid-release-short.rst @@ -1,6 +1,6 @@ .. important:: - MongoDB |version| is a rapid release and is only supported for - MongoDB Atlas. MongoDB |version| is not supported for use + MongoDB {+current-rapid-release+} is a rapid release and is only supported for + MongoDB Atlas. MongoDB {+current-rapid-release+} is not supported for use on-premises. For more information, see :ref:`release-version-numbers`. diff --git a/source/includes/rapid-release.rst b/source/includes/rapid-release.rst index f3f2e281570..337848face7 100644 --- a/source/includes/rapid-release.rst +++ b/source/includes/rapid-release.rst @@ -1,7 +1,7 @@ .. important:: - MongoDB |version| is a rapid release and is only supported for - MongoDB Atlas. 
MongoDB |version| is not supported for use + MongoDB {+current-rapid-release+} is a rapid release and is only supported for + MongoDB Atlas. MongoDB {+current-rapid-release+} is not supported for use on-premises. For more information, see :ref:`release-version-numbers`. diff --git a/source/includes/read-preference-modes-table.rst b/source/includes/read-preference-modes-table.rst index 66a2f15b110..1bd768c1a7e 100644 --- a/source/includes/read-preference-modes-table.rst +++ b/source/includes/read-preference-modes-table.rst @@ -16,20 +16,20 @@ if it is unavailable, operations read from :term:`secondary` members. - Starting in version 4.4, :readmode:`primaryPreferred` supports + Read preference :readmode:`primaryPreferred` supports :ref:`hedged reads ` on sharded clusters. * - :readmode:`secondary` - All operations read from the :term:`secondary` members of the replica set. - Starting in version 4.4, :readmode:`secondary` supports + Read preference :readmode:`secondary` supports :ref:`hedged reads ` on sharded clusters. * - :readmode:`secondaryPreferred` - .. include:: /includes/secondaryPreferred-read-mode.rst - Starting in version 4.4, :readmode:`secondaryPreferred` supports + Read preference :readmode:`secondaryPreferred` supports :ref:`hedged reads ` on sharded clusters. * - :readmode:`nearest` @@ -45,6 +45,6 @@ - Any specified :doc:`tag set lists ` - Starting in version 4.4, :readmode:`nearest` supports + Read preference :readmode:`nearest` supports :ref:`hedged reads ` on sharded clusters and enables the hedged read option by default. diff --git a/source/includes/reference/kms-providers/aws.rst b/source/includes/reference/kms-providers/aws.rst deleted file mode 100644 index fe4776879ca..00000000000 --- a/source/includes/reference/kms-providers/aws.rst +++ /dev/null @@ -1,75 +0,0 @@ -.. _csfle-reference-kms-providers-aws-architecture: - -Architecture -```````````` - -The following diagram describes the architecture of a -{+csfle-abbrev+}-enabled application using {+aws-abbr+} KMS. - -.. image:: /images/CSFLE_Data_Key_KMS.png - :alt: Diagram KMS - -.. include:: /includes/reference/kms-providers/cmk-note.rst - -.. _csfle-kms-provider-object-aws: - -kmsProviders Object -``````````````````` - -The following table presents the structure of a ``kmsProviders`` -object for AWS KMS: - -.. list-table:: - :header-rows: 1 - :stub-columns: 1 - :widths: 25 15 15 45 - - * - Field - - Required for IAM User - - Required for IAM Role - - Description - - * - Access Key ID - - Yes - - Yes - - Identifies the account user. - - * - Secret Access Key - - Yes - - Yes - - Contains the authentication credentials of the account user. - - * - Session Token - - No - - Yes - - Contains a token obtained from AWS Security Token Service (STS). - -.. _csfle-kms-datakeyopts-aws: - -dataKeyOpts Object -`````````````````` - -The following table presents the structure of a ``dataKeyOpts`` -object for AWS KMS: - -.. list-table:: - :header-rows: 1 - :stub-columns: 1 - :widths: 30 15 45 - - * - Field - - Required - - Description - - * - key - - Yes - - `Amazon Resource Number (ARN) `__ - of the master key. - - * - region - - No - - AWS region of your master key, e.g. "us-west-2"; required only if not specified in your ARN. - - * - endpoint - - No - - Custom hostname for the AWS endpoint if configured for your account. 
diff --git a/source/includes/reference/kms-providers/azure.rst b/source/includes/reference/kms-providers/azure.rst deleted file mode 100644 index faabb480abb..00000000000 --- a/source/includes/reference/kms-providers/azure.rst +++ /dev/null @@ -1,78 +0,0 @@ -.. _csfle-reference-kms-providers-azure-architecture: - -Architecture -```````````` - -The following diagram describes the architecture of a -{+csfle-abbrev+}-enabled application using Azure Key Vault. - -.. image:: /images/CSFLE_Data_Key_KMS.png - :alt: Diagram KMS - -.. include:: /includes/reference/kms-providers/cmk-note.rst - -.. _csfle-kms-provider-object-azure: - -kmsProviders Object -``````````````````` - -The following table presents the structure of a ``kmsProviders`` -object for Azure Key Vault: - -.. list-table:: - :header-rows: 1 - :stub-columns: 1 - :widths: 30 15 45 - - * - Field - - Required - - Description - - * - azure.tenantId - - Yes - - Identifies the organization of the account. - - * - azure.clientId - - Yes - - Identifies the clientId to authenticate your registered application. - - * - azure.clientSecret - - Yes - - Used to authenticate your registered application. - - * - azure.identityPlatformEndpoint - - No - - Specifies a hostname and port number for the authentication server. - Defaults to login.microsoftonline.com and is only needed for - non-commercial Azure instances such as a government or China account. - -.. _csfle-kms-datakeyopts-azure: - -dataKeyOpts Object -`````````````````` - -The following table presents the structure of a ``dataKeyOpts`` object for -Azure Key Vault: - -.. list-table:: - :header-rows: 1 - :stub-columns: 1 - :widths: 30 15 45 - - * - Field - - Required - - Description - - * - keyName - - Yes - - Name of the master key - - * - keyVersion - - No, but strongly recommended - - Version of the master key - - * - keyVaultEndpoint - - Yes - - URL of the key vault. E.g. myVaultName.vault.azure.net - -.. include:: /includes/queryable-encryption/qe-csfle-warning-azure-keyversion.rst diff --git a/source/includes/reference/kms-providers/cmk-note.rst b/source/includes/reference/kms-providers/cmk-note.rst deleted file mode 100644 index 991df9e6350..00000000000 --- a/source/includes/reference/kms-providers/cmk-note.rst +++ /dev/null @@ -1,5 +0,0 @@ -.. note:: Client Can't Access {+cmk-long+} - - When using the preceding {+kms-long+}, your - {+csfle-abbrev+}-enabled application does not have access to - your {+cmk-long+}. diff --git a/source/includes/reference/kms-providers/gcp.rst b/source/includes/reference/kms-providers/gcp.rst deleted file mode 100644 index f7d5c62330c..00000000000 --- a/source/includes/reference/kms-providers/gcp.rst +++ /dev/null @@ -1,111 +0,0 @@ -.. _csfle-reference-kms-providers-gcp-architecture: - -Architecture -```````````` - -The following diagram describes the architecture of a -{+csfle-abbrev+}-enabled application using GCP KMS. - -.. image:: /images/CSFLE_Data_Key_KMS.png - :alt: Diagram KMS - -.. include:: /includes/reference/kms-providers/cmk-note.rst - -.. _csfle-kms-provider-object-gcp: - -kmsProviders Object -``````````````````` - -The following table presents the structure of a ``kmsProviders`` -object for GCP KMS: - -.. list-table:: - :header-rows: 1 - :stub-columns: 1 - :widths: 20 12 68 - - * - Field - - Required - - Description - - * - email - - Yes - - Identifies your service account email address. 
- - * - privateKey - - Yes - - | Identifies your service account private key in either - `base64 string `__ or - :manual:`Binary subtype 0 ` - format without the prefix and suffix markers. - | - | Suppose your service account private key value is as follows: - - .. code-block:: none - :copyable: false - - -----BEGIN PRIVATE KEY-----\nyour-private-key\n-----END PRIVATE KEY-----\n - - | The value you would specify for this field is: - - .. code-block:: none - :copyable: false - - your-private-key - - | If you have a ``user-key.json`` credential file, you can extract - the string by executing the following command in a bash or - similar shell. The following command requires that you - install `OpenSSL `__: - - .. code-block:: shell - - cat user-key.json | jq -r .private_key | openssl pkcs8 -topk8 -nocrypt -inform PEM -outform DER | base64 -w 0 - - * - endpoint - - No - - Specifies a hostname and port number for the authentication server. - Defaults to oauth2.googleapis.com. - -.. _csfle-kms-datakeyopts-gcp: - -dataKeyOpts Object -`````````````````` - -The following table presents the structure of a ``dataKeyOpts`` object for -GCP KMS: - -.. list-table:: - :header-rows: 1 - :stub-columns: 1 - :widths: 30 15 45 - - * - Field - - Required - - Description - - * - projectId - - Yes - - Identifier for your project in which you created the key. - - * - location - - Yes - - Region specified for your key. - - * - keyRing - - Yes - - Identifier for the group of keys your key belongs to. - - * - keyName - - Yes - - Identifier for the symmetric master key. - - * - keyVersion - - No - - Specifies the version of the named key. If not specified, the default - version of the key is used. - - * - endpoint - - No - - Specifies the host and optional port of the Cloud KMS. The default - is ``cloudkms.googleapis.com``. diff --git a/source/includes/reference/kms-providers/kmip.rst b/source/includes/reference/kms-providers/kmip.rst deleted file mode 100644 index 37808d7d572..00000000000 --- a/source/includes/reference/kms-providers/kmip.rst +++ /dev/null @@ -1,71 +0,0 @@ -Architecture -```````````` - -The following diagram describes the architecture of a -{+csfle-abbrev+}-enabled application using a {+kmip-kms+}. - -.. image:: /images/CSFLE_Data_Key_KMIP.png - :alt: Diagram - -.. important:: Client Accesses {+cmk-long+} - - When your {+csfle-abbrev+}-enabled application uses - a {+kmip-kms+}, your application - directly accesses your {+cmk-long+}. - -kmsProviders Object -``````````````````` - -The following table presents the structure of a ``kmsProviders`` -object for a {+kmip-kms+}: - -.. note:: Authenticate through TLS/SSL - - Your {+csfle-abbrev+}-enabled application authenticates through - :abbr:`TLS/SSL (Transport Layer Security/Secure Sockets Layer)` - when using KMIP. - -.. list-table:: - :header-rows: 1 - :stub-columns: 1 - :widths: 20 12 68 - - * - Field - - Required - - Description - - * - endpoint - - Yes - - Specifies a hostname and port number for the authentication server. - -.. _csfle-reference-kms-providers-kmip-datakeyopts: - -dataKeyOpts Object -`````````````````` - -The following table presents the structure of a ``dataKeyOpts`` object -for a KMIP compliant {+kms-long+}: - -.. list-table:: - :header-rows: 1 - :stub-columns: 1 - :widths: 30 15 45 - - * - Field - - Required - - Description - - * - keyId - - No - - The ``keyId`` field of a 96 byte - `Secret Data managed object `__ - stored in your {+kmip-kms+}. 
- - If you do not specify the ``keyId`` field in the ``masterKey`` document - you send to your {+kmip-kms+}, the driver creates a new - 96 Byte Secret Data managed object in your {+kmip-kms+} to act as your - master key. - - * - endpoint - - Yes - - The URI of your {+kmip-kms+}. diff --git a/source/includes/reference/kms-providers/local.rst b/source/includes/reference/kms-providers/local.rst deleted file mode 100644 index cff26f9716e..00000000000 --- a/source/includes/reference/kms-providers/local.rst +++ /dev/null @@ -1,38 +0,0 @@ -Architecture -```````````` - -When you use a Local Key Provider in your {+csfle-abbrev+}-enabled -application, your application retrieves your {+cmk-long+} from -the filesystem of the computer on which your application is running. - -The following diagram describes the architecture of a -{+csfle-abbrev+}-enabled application using a Local Key Provider. - -.. image:: /images/CSFLE_Data_Key_Local.png - :alt: Local Key Provider architecture diagram. - -kmsProviders Object -``````````````````` - -The following table presents the structure of a ``kmsProviders`` -object for a Local Key Provider: - -.. list-table:: - :header-rows: 1 - :stub-columns: 1 - :widths: 30 15 45 - - * - Field - - Required - - Description - - * - key - - Yes - - The master key used to encrypt/decrypt data keys. - The master key is passed as a base64 encoded string. - -dataKeyOpts Object -`````````````````` - -When you use a Local Key Provider, you specify your {+cmk-long+} -through your ``kmsProviders`` object. diff --git a/source/includes/reference/oplog-size-setting-intro.rst b/source/includes/reference/oplog-size-setting-intro.rst new file mode 100644 index 00000000000..8f70cd37f8f --- /dev/null +++ b/source/includes/reference/oplog-size-setting-intro.rst @@ -0,0 +1,3 @@ +The maximum size in megabytes for the :term:`oplog`. The +|oplog-size-setting| setting configures the uncompressed size of the +oplog, not the size on disk. diff --git a/source/includes/replica-set-nodes-cannot-be-shared.rst b/source/includes/replica-set-nodes-cannot-be-shared.rst new file mode 100644 index 00000000000..c85d4de1d9d --- /dev/null +++ b/source/includes/replica-set-nodes-cannot-be-shared.rst @@ -0,0 +1,4 @@ +.. warning:: + + Each replica set node must belong to one, and only one, replica + set. Replica set nodes cannot belong to more than one replica set. diff --git a/source/includes/replica-states.rst b/source/includes/replica-states.rst index b23356cb6f2..94e6fa91f77 100644 --- a/source/includes/replica-states.rst +++ b/source/includes/replica-states.rst @@ -18,28 +18,28 @@ * - 1 - :replstate:`PRIMARY` - - The member in state :doc:`primary ` + - The member in state :ref:`primary ` is the only member that can accept write operations. Eligible to vote. * - 2 - :replstate:`SECONDARY` - - A member in state :doc:`secondary ` + - A member in state :ref:`secondary ` is replicating the data store. Eligible to vote. * - 3 - :replstate:`RECOVERING` - Members either perform startup self-checks, or transition from - completing a :doc:`rollback ` or - :doc:`resync `. Data is not + completing a :ref:`rollback ` or + :ref:`resync `. Data is not available for reads from this member. Eligible to vote. * - 5 - :replstate:`STARTUP2` - - The member has joined the set and is running an initial sync. Not - eligible to vote. + - The member is running an initial sync. Eligible to vote, + except when newly added to the replica set. 
* - 6 - :replstate:`UNKNOWN` @@ -55,7 +55,7 @@ * - 9 - :replstate:`ROLLBACK` - - This member is actively performing a :doc:`rollback `. Eligible to + - This member is actively performing a :ref:`rollback `. Eligible to vote. Data is not available for reads from this member. .. include:: /includes/extracts/4.2-changes-rollback-user-ops.rst diff --git a/source/includes/replication/note-replica-set-major-versions.rst b/source/includes/replication/note-replica-set-major-versions.rst new file mode 100644 index 00000000000..18ea3f539f4 --- /dev/null +++ b/source/includes/replication/note-replica-set-major-versions.rst @@ -0,0 +1,5 @@ +.. note:: + + Outside of a rolling upgrade, all :binary:`~bin.mongod` members of + a :term:`replica set` should use the same major version of + MongoDB. \ No newline at end of file diff --git a/source/includes/security/block-revoked-certificates-intro.rst b/source/includes/security/block-revoked-certificates-intro.rst new file mode 100644 index 00000000000..0415be53db7 --- /dev/null +++ b/source/includes/security/block-revoked-certificates-intro.rst @@ -0,0 +1,3 @@ +To prevent clients with revoked certificates from connecting to the +:binary:`~bin.mongod` or :binary:`~bin.mongos` instance, you can use a +Certificate Revocation List (CRL). diff --git a/source/includes/security/cve-2024-1351-info.rst b/source/includes/security/cve-2024-1351-info.rst new file mode 100644 index 00000000000..eda0847c11c --- /dev/null +++ b/source/includes/security/cve-2024-1351-info.rst @@ -0,0 +1,20 @@ +.. important:: Fix for MongoDB Server may allow successful untrusted connection + + Due to CVE-2024-1351, in |cve-version-list|, under certain + configurations of :option:`--tlsCAFile ` and + :setting:`~net.tls.CAFile`, MongoDB Server may skip peer certificate + validation which may result in untrusted connections to succeed. + + This may effectively reduce the security guarantees provided by TLS + and open connections that should have been closed due to failing + certificate validation. This issue affects the following MongoDB + Server versions: + + - 7.0.0 - 7.0.5 + - 6.0.0 - 6.0.13 + - 5.0.0 - 5.0.24 + - 4.4.0 - 4.4.28 + + **CVSS Score**: 8.8 + + **CWE**: CWE-295: Improper Certificate Validation diff --git a/source/includes/shard-key-modification-warning.rst b/source/includes/shard-key-modification-warning.rst index c805c011d26..2efc10de027 100644 --- a/source/includes/shard-key-modification-warning.rst +++ b/source/includes/shard-key-modification-warning.rst @@ -1,5 +1,5 @@ .. warning:: - Starting in version 4.4, documents in sharded collections can be - missing the shard key fields. Take precaution to avoid accidentally - removing the shard key when changing a document's shard key value. + Documents in sharded collections can be missing the shard key fields. + Take precaution to avoid accidentally removing the shard key when changing + a document's shard key value. diff --git a/source/includes/sharded-clusters-backup-restore-file-system-snapshot-restriction.rst b/source/includes/sharded-clusters-backup-restore-file-system-snapshot-restriction.rst index 480466ea3f1..83fb6d00634 100644 --- a/source/includes/sharded-clusters-backup-restore-file-system-snapshot-restriction.rst +++ b/source/includes/sharded-clusters-backup-restore-file-system-snapshot-restriction.rst @@ -1,7 +1,8 @@ -In MongoDB 4.2+, you cannot use :doc:`file system snapshots -` for backups that involve -transactions across shards because those backups do not maintain -atomicity. 
Instead, use one of the following to perform the backups: +To take a backup with a file system snapshot, you must first stop the balancer, +stop writes, and stop any schema transformation operations on the cluster. + +MongoDB provides backup and restore operations that can run with the balancer +and running transactions through the following services: - `MongoDB Atlas `_ diff --git a/source/includes/shardedDataDistribution-orphaned-docs.rst b/source/includes/shardedDataDistribution-orphaned-docs.rst new file mode 100644 index 00000000000..c96329cab5c --- /dev/null +++ b/source/includes/shardedDataDistribution-orphaned-docs.rst @@ -0,0 +1,17 @@ +Starting in MongoDB 6.0.3, you can run an aggregation using the +:pipeline:`$shardedDataDistribution` stage to confirm no orphaned +documents remain: + +.. code-block:: javascript + + db.aggregate([ + { $shardedDataDistribution: { } }, + { $match: { "ns": "." } } + ]) + +``$shardedDataDistribution`` has output similar to the following: + +.. include:: /includes/shardedDataDistribution-output-example.rst + +Ensure that ``"numOrphanedDocs"`` is ``0`` for each shard in the +cluster. diff --git a/source/includes/shardedDataDistribution-output-example.rst b/source/includes/shardedDataDistribution-output-example.rst new file mode 100644 index 00000000000..ba33bd2da41 --- /dev/null +++ b/source/includes/shardedDataDistribution-output-example.rst @@ -0,0 +1,24 @@ +.. code-block:: json + + [ + { + "ns": "test.names", + "shards": [ + { + "shardName": "shard-1", + "numOrphanedDocs": 0, + "numOwnedDocuments": 6, + "ownedSizeBytes": 366, + "orphanedSizeBytes": 0 + }, + { + "shardName": "shard-2", + "numOrphanedDocs": 0, + "numOwnedDocuments": 6, + "ownedSizeBytes": 366, + "orphanedSizeBytes": 0 + } + ] + } + ] + diff --git a/source/includes/steps-backup-sharded-cluster-with-snapshots.yaml b/source/includes/steps-backup-sharded-cluster-with-snapshots.yaml deleted file mode 100644 index 29e1464c298..00000000000 --- a/source/includes/steps-backup-sharded-cluster-with-snapshots.yaml +++ /dev/null @@ -1,202 +0,0 @@ -title: Disable the balancer. -stepnum: 1 -ref: disable-balancer -pre: | - - Connect :binary:`~bin.mongosh` to a cluster - :binary:`~bin.mongos` instance. Use the :method:`sh.stopBalancer()` - method to stop the balancer. If a balancing round is in progress, the - operation waits for balancing to complete before stopping the - balancer. - - .. code-block:: javascript - - use config - sh.stopBalancer() - -post: | - .. include:: /includes/extracts/4.2-changes-stop-balancer-autosplit.rst - - For more information, see the - :ref:`sharding-balancing-disable-temporarily` procedure. ---- -title: "If necessary, lock one secondary member of each replica set." -stepnum: 2 -ref: lock -pre: | - If your secondary does not have journaling enabled *or* its - journal and data files are on different volumes, you **must** lock - the secondary's :binary:`~bin.mongod` instance before capturing a backup. - - If your secondary has journaling enabled and its journal and data - files are on the same volume, you may skip this step. - - .. important:: - - If your deployment requires this step, you must perform it on one - secondary of each shard and one secondary of the - :ref:`config server replica set (CSRS) `. - - Ensure that the :term:`oplog` has sufficient capacity to allow these - secondaries to catch up to the state of the primaries after finishing - the backup procedure. See :ref:`replica-set-oplog-sizing` for more - information. 
- -action: - - heading: Lock shard replica set secondary. - pre: | - For each shard replica set in the sharded cluster, confirm that - the member has replicated data up to some control point. To - verify, first connect :binary:`~bin.mongosh` to the shard - primary and perform a write operation with - :writeconcern:`"majority"` write concern on a control - collection: - language: javascript - code: | - use config - db.BackupControl.findAndModify( - { - query: { _id: 'BackupControlDocument' }, - update: { $inc: { counter : 1 } }, - new: true, - upsert: true, - writeConcern: { w: 'majority', wtimeout: 15000 } - } - ); - - pre: | - The operation should return the modified (or inserted) control - document: - language: javascript - code: | - { "_id" : "BackupControlDocument", "counter" : 1 } - - pre: | - Query the shard secondary member for the returned control - document. Connect :binary:`~bin.mongosh` to the shard - secondary to lock and use :method:`db.collection.find()` to query - for the control document: - language: javascript - code: | - rs.secondaryOk(); - - use config; - - db.BackupControl.find( - { "_id" : "BackupControlDocument", "counter" : 1 } - ).readConcern('majority'); - - post: | - If the secondary member contains the latest control document, - it is safe to lock the member. Otherwise, wait until the member - contains the document or select a different secondary member - that contains the latest control document. - - pre: | - To lock the secondary member, run :method:`db.fsyncLock()` on - the member: - language: javascript - code: | - db.fsyncLock() - - - heading: Lock config server replica set secondary. - pre: | - If locking a secondary of the CSRS, confirm that the member has - replicated data up to some control point. To verify, first connect - :binary:`~bin.mongosh` to the CSRS primary and perform a write - operation with :writeconcern:`"majority"` write concern on a - control collection: - language: javascript - code: | - use config - db.BackupControl.findAndModify( - { - query: { _id: 'BackupControlDocument' }, - update: { $inc: { counter : 1 } }, - new: true, - upsert: true, - writeConcern: { w: 'majority', wtimeout: 15000 } - } - ); - - pre: | - The operation should return the modified (or inserted) control - document: - language: javascript - code: | - { "_id" : "BackupControlDocument", "counter" : 1 } - - pre: | - Query the CSRS secondary member for the returned control - document. Connect :binary:`~bin.mongosh` to the CSRS secondary - to lock and use :method:`db.collection.find()` to query for the - control document: - language: javascript - code: | - rs.secondaryOk(); - - use config; - - db.BackupControl.find( - { "_id" : "BackupControlDocument", "counter" : 1 } - ).readConcern('majority'); - - post: | - If the secondary member contains the latest control document, it - is safe to lock the member. Otherwise, wait until the member - contains the document or select a different secondary member - that contains the latest control document. - - pre: | - To lock the secondary member, run :method:`db.fsyncLock()` on - the member: - language: javascript - code: | - db.fsyncLock() ---- -title: Back up one of the config servers. -stepnum: 3 -ref: backup-config-server -content: | - - .. note:: - - Backing up a :ref:`config server ` backs - up the sharded cluster's metadata. You only need to back up one - config server, as they all hold the same data. Perform this step - against the locked CSRS secondary member. 
- - To create a file-system snapshot of the config server, follow the - procedure in :ref:`lvm-backup-operation`. ---- -title: Back up a replica set member for each shard. -stepnum: 4 -ref: backup-locked-shards -content: | - If you locked a member of the replica set shards, perform this step - against the locked secondary. - - You may back up the shards in parallel. For each shard, create a - snapshot, using the procedure in - :doc:`/tutorial/backup-with-filesystem-snapshots`. ---- -title: Unlock all locked replica set members. -stepnum: 5 -ref: unlock -pre: | - If you locked any :binary:`~bin.mongod` instances to capture the backup, - unlock them. - - To unlock the replica set members, use :method:`db.fsyncUnlock()` - method in :binary:`~bin.mongosh`. -action: - language: javascript - code: | - db.fsyncUnlock() ---- -title: Enable the balancer. -stepnum: 6 -ref: enable-balancer -pre: | - To re-enable to balancer, connect :binary:`~bin.mongosh` to a - :binary:`~bin.mongos` instance and run - :method:`sh.startBalancer()`. -action: - language: javascript - code: | - sh.startBalancer() -post: | - .. include:: /includes/extracts/4.2-changes-start-balancer-autosplit.rst -... diff --git a/source/includes/steps-change-replica-set-wiredtiger.yaml b/source/includes/steps-change-replica-set-wiredtiger.yaml index a23f602b90f..d8379f8744a 100644 --- a/source/includes/steps-change-replica-set-wiredtiger.yaml +++ b/source/includes/steps-change-replica-set-wiredtiger.yaml @@ -65,7 +65,7 @@ content: | post: | Since no data exists in the ``--dbpath``, the ``mongod`` will perform an - :doc:`initial sync `. The length of the + :ref:`initial sync `. The length of the initial sync process depends on the size of the database and network connection between members of the replica set. diff --git a/source/includes/steps-clear-jumbo-flag-refine-key.yaml b/source/includes/steps-clear-jumbo-flag-refine-key.yaml index 64be6f8030d..e08c140af6b 100644 --- a/source/includes/steps-clear-jumbo-flag-refine-key.yaml +++ b/source/includes/steps-clear-jumbo-flag-refine-key.yaml @@ -80,7 +80,7 @@ content: | } ) The :dbcommand:`refineCollectionShardKey` command updates the - :doc:`chunk ranges ` and + :ref:`chunk ranges ` and :ref:`zone ranges ` to incorporate the new fields without modifying the range values of the existing key fields. That is, the refinement of the shard key does not diff --git a/source/includes/steps-configure-ldap-mongodb.yaml b/source/includes/steps-configure-ldap-mongodb.yaml index 36aca98b65e..9330dc9d966 100644 --- a/source/includes/steps-configure-ldap-mongodb.yaml +++ b/source/includes/steps-configure-ldap-mongodb.yaml @@ -3,7 +3,7 @@ stepnum: 1 ref: add-ldap-sasl-auth-user pre: | Add the user to the ``$external`` database in MongoDB. To specify the - user's privileges, assign :doc:`roles ` to the + user's privileges, assign :ref:`roles ` to the user. .. include:: /includes/extracts/sessions-external-username-limit.rst diff --git a/source/includes/steps-control-access-to-mongodb-windows-with-kerberos-authentication.yaml b/source/includes/steps-control-access-to-mongodb-windows-with-kerberos-authentication.yaml index b879a2a1508..6ef94fb906c 100644 --- a/source/includes/steps-control-access-to-mongodb-windows-with-kerberos-authentication.yaml +++ b/source/includes/steps-control-access-to-mongodb-windows-with-kerberos-authentication.yaml @@ -34,7 +34,7 @@ pre: | **ALL UPPERCASE**. The ``$external`` database allows :binary:`mongod.exe` to consult an external source (e.g. Kerberos) to authenticate. 
To specify the user's privileges, assign - :doc:`roles ` to the user. + :ref:`roles ` to the user. .. include:: /includes/extracts/sessions-external-username-limit.rst diff --git a/source/includes/steps-control-access-to-mongodb-with-kerberos-authentication.yaml b/source/includes/steps-control-access-to-mongodb-with-kerberos-authentication.yaml index 340be414cb4..f484e86a0b2 100644 --- a/source/includes/steps-control-access-to-mongodb-with-kerberos-authentication.yaml +++ b/source/includes/steps-control-access-to-mongodb-with-kerberos-authentication.yaml @@ -35,7 +35,7 @@ pre: | ``$external`` database. Specify the Kerberos realm in all uppercase. The ``$external`` database allows :binary:`~bin.mongod` to consult an external source (e.g. Kerberos) to authenticate. To specify the - user's privileges, assign :doc:`roles ` to the + user's privileges, assign :ref:`roles ` to the user. .. include:: /includes/extracts/sessions-external-username-limit.rst diff --git a/source/includes/steps-create-role-dropSystemViews.yaml b/source/includes/steps-create-role-dropSystemViews.yaml index f20b0f747b9..774f4f32c0e 100644 --- a/source/includes/steps-create-role-dropSystemViews.yaml +++ b/source/includes/steps-create-role-dropSystemViews.yaml @@ -22,7 +22,7 @@ pre: | - an ``actions`` array that contains the :authaction:`dropCollection` action, and - - a :doc:`resource document ` that + - a :ref:`resource document ` that specifies an empty string (``""``) for the database and the string ``"system.views"`` for the collection. See :ref:`resource-specific-collection` for more information. diff --git a/source/includes/steps-deploy-replica-set-with-auth.yaml b/source/includes/steps-deploy-replica-set-with-auth.yaml index 1e4d3a226b7..1214053df54 100644 --- a/source/includes/steps-deploy-replica-set-with-auth.yaml +++ b/source/includes/steps-deploy-replica-set-with-auth.yaml @@ -62,7 +62,7 @@ action: mongod --config post: | For more information on the configuration file, see - :doc:`configuration options `. + :ref:`configuration options `. - heading: Command Line pre: | If using the command line options, start the :binary:`~bin.mongod` with the following options: diff --git a/source/includes/steps-enable-authentication-in-replica-set-no-downtime.yaml b/source/includes/steps-enable-authentication-in-replica-set-no-downtime.yaml index 645db1766b3..b3c85da7112 100644 --- a/source/includes/steps-enable-authentication-in-replica-set-no-downtime.yaml +++ b/source/includes/steps-enable-authentication-in-replica-set-no-downtime.yaml @@ -254,7 +254,7 @@ action: post: | For more information on the configuration file, see - :doc:`configuration options`. + :ref:`configuration options `. post: | @@ -341,7 +341,7 @@ action: mongod --config post: | For more information on the configuration file, see - :doc:`configuration options`. + :ref:`configuration options `. post: | @@ -413,7 +413,7 @@ action: post: | For more information on the configuration file, see - :doc:`configuration options`. + :ref:`configuration options `. post: | You can also use the equivalent :binary:`~bin.mongod` options when starting your @@ -499,7 +499,7 @@ action: mongod --config post: | For more information on the configuration file, see - :doc:`configuration options`. + :ref:`configuration options `. 
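For the ``dropSystemViews`` role step described above, the resulting
:method:`db.createRole()` call might look like the following sketch;
the role name is a placeholder:

.. code-block:: javascript

   db.getSiblingDB( "admin" ).createRole( {
      role: "dropSystemViewsAnyDatabase",   // placeholder role name
      privileges: [
         {
            resource: { db: "", collection: "system.views" },
            actions: [ "dropCollection" ]
         }
      ],
      roles: []
   } )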
post: | You can also use the equivalent :binary:`~bin.mongod` options when starting your diff --git a/source/includes/steps-install-mongodb-enterprise-on-red-hat.yaml b/source/includes/steps-install-mongodb-enterprise-on-red-hat.yaml index a8feac64b6e..2b01ca0a8dc 100644 --- a/source/includes/steps-install-mongodb-enterprise-on-red-hat.yaml +++ b/source/includes/steps-install-mongodb-enterprise-on-red-hat.yaml @@ -13,7 +13,7 @@ action: baseurl=https://repo.mongodb.com/yum/{{distro_name}}/{{distro_release}}/mongodb-enterprise/{+version+}/$basearch/ gpgcheck=1 enabled=1 - gpgkey=https://www.mongodb.org/static/pgp/server-{+pgp-version+}.asc + gpgkey=https://pgp.mongodb.com/server-{+pgp-version+}.asc post: | .. note:: diff --git a/source/includes/steps-install-mongodb-on-red-hat.yaml b/source/includes/steps-install-mongodb-on-red-hat.yaml index bdc9c78440f..a93c4f3cf90 100644 --- a/source/includes/steps-install-mongodb-on-red-hat.yaml +++ b/source/includes/steps-install-mongodb-on-red-hat.yaml @@ -13,7 +13,7 @@ action: baseurl=https://repo.mongodb.org/yum/{{distro_name}}/{{distro_release}}/mongodb-org/{+version+}/x86_64/ gpgcheck=1 enabled=1 - gpgkey=https://www.mongodb.org/static/pgp/server-{+pgp-version+}.asc + gpgkey=https://pgp.mongodb.com/server-{+pgp-version+}.asc post: | You can also download the ``.rpm`` files directly from the {{distro_link}}. Downloads are organized by {{distro_name_full}} diff --git a/source/includes/steps-install-mongodb-on-suse.yaml b/source/includes/steps-install-mongodb-on-suse.yaml index 8f3716685d1..5e1205dd74c 100644 --- a/source/includes/steps-install-mongodb-on-suse.yaml +++ b/source/includes/steps-install-mongodb-on-suse.yaml @@ -5,7 +5,7 @@ ref: import-key action: language: sh code: | - sudo rpm --import https://www.mongodb.org/static/pgp/server-{+pgp-version+}.asc + sudo rpm --import https://pgp.mongodb.com/server-{+pgp-version+}.asc --- title: Add the MongoDB repository. stepnum: 2 diff --git a/source/includes/steps-install-verify-files-pgp.yaml b/source/includes/steps-install-verify-files-pgp.yaml index 58166c520c0..f5ef96d9dcc 100644 --- a/source/includes/steps-install-verify-files-pgp.yaml +++ b/source/includes/steps-install-verify-files-pgp.yaml @@ -48,7 +48,7 @@ action: language: sh copyable: true code: | - curl -LO https://www.mongodb.org/static/pgp/server-{+release+}.asc + curl -LO https://pgp.mongodb.com/server-{+release+}.asc gpg --import server-{+release+}.asc - pre: | PGP should return this response: diff --git a/source/includes/steps-kerberos-auth-activedirectory-authz.yaml b/source/includes/steps-kerberos-auth-activedirectory-authz.yaml index 15944fa8e1c..8964990750b 100644 --- a/source/includes/steps-kerberos-auth-activedirectory-authz.yaml +++ b/source/includes/steps-kerberos-auth-activedirectory-authz.yaml @@ -169,7 +169,7 @@ ref: security-kerberos-activedirectory-authauthz-configfile level: 4 pre: | - A MongoDB :doc:`configuration file ` is a + A MongoDB :ref:`configuration file ` is a plain-text YAML file with the ``.conf`` file extension. * If you are upgrading an existing MongoDB deployment, copy the diff --git a/source/includes/steps-shard-a-collection-ranged.yaml b/source/includes/steps-shard-a-collection-ranged.yaml index 7b90b43eb07..bccffe076cc 100644 --- a/source/includes/steps-shard-a-collection-ranged.yaml +++ b/source/includes/steps-shard-a-collection-ranged.yaml @@ -40,9 +40,8 @@ pre: | - Starting in MongoDB 5.0, you can :ref:`reshard a collection ` by changing a document's shard key. 
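A minimal sketch of the resharding operation referenced in the
preceding item; the namespace and new shard key are placeholders:

.. code-block:: javascript

   db.adminCommand( {
      reshardCollection: "sales.orders",   // placeholder namespace
      key: { order_id: 1 }                 // placeholder new shard key
   } )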
- - Starting in MongoDB 4.4, you can :ref:`refine a shard key - ` by adding a suffix field or fields to the - existing shard key. + - You can :ref:`refine a shard key ` by adding a suffix + field or fields to the existing shard key. - In MongoDB 4.2 and earlier, the choice of shard key cannot be changed after sharding. --- diff --git a/source/includes/steps-shard-existing-tsc.yaml b/source/includes/steps-shard-existing-tsc.yaml index 5dd32a0aa23..ea418d27cca 100644 --- a/source/includes/steps-shard-existing-tsc.yaml +++ b/source/includes/steps-shard-existing-tsc.yaml @@ -34,13 +34,13 @@ content: | ... --- -title: Shard the collection. -ref: new-sharded-tsc-create +title: Create a hashed index on your collection. +ref: new-sharded-tsc-index stepnum: 3 level: 4 content: | - Use the :method:`~sh.shardCollection()` method to shard the - collection. + Enable sharding on your collection by creating an index that supports + the :ref:`shard key `. Consider a time series collection with the following properties: @@ -67,11 +67,28 @@ content: | "speed": 50 } ) - To shard the collection, run the following command: + Run the following command to create a hashed index on the + ``metadata.location`` field: + + .. code-block:: javascript + + db.deliverySensor.createIndex( { "metadata.location" : "hashed" } ) + +--- +title: Shard your collection. +ref: new-sharded-tsc-create +stepnum: 4 +level: 4 +content: | + Use the :method:`~sh.shardCollection()` method to shard the + collection. + + To shard the ``deliverySensor`` collection described in the preceding step, run + the following command: .. code-block:: javascript - sh.shardCollection( "test.deliverySensor", { "metadata.location": 1 } ) + sh.shardCollection( "test.deliverySensor", { "metadata.location": "hashed" } ) In this example, :method:`sh.shardCollection()`: diff --git a/source/includes/steps-start-sharded-cluster.yaml b/source/includes/steps-start-sharded-cluster.yaml index cc4329b5bfd..135af887ea0 100644 --- a/source/includes/steps-start-sharded-cluster.yaml +++ b/source/includes/steps-start-sharded-cluster.yaml @@ -86,7 +86,7 @@ action: mongos --config post: | For more information on the configuration file, see - :doc:`configuration options`. + :ref:`configuration options `. - pre: | **Command Line** diff --git a/source/includes/table-transactions-operations.rst b/source/includes/table-transactions-operations.rst index c1dc52cb80b..bc306351985 100644 --- a/source/includes/table-transactions-operations.rst +++ b/source/includes/table-transactions-operations.rst @@ -18,6 +18,7 @@ - :pipeline:`$merge` - :pipeline:`$out` - :pipeline:`$planCacheStats` + - :pipeline:`$unionWith` * - :method:`db.collection.countDocuments()` - @@ -57,9 +58,8 @@ - :dbcommand:`findAndModify` - - Starting in MongoDB 4.4, if the update or replace operation is - run with ``upsert: true`` on a non-existing collection, the - collection is implicitly created. + - If the update or replace operation is run with ``upsert: true`` on a + non-existing collection, the collection is implicitly created. In MongoDB 4.2 and earlier, if ``upsert: true``, the operation must be run on an existing collection. @@ -73,8 +73,8 @@ - :dbcommand:`insert` - - Starting in MongoDB 4.4, if run on a non-existing - collection, the collection is implicitly created. + - If run on a non-existing collection, the collection is implicitly + created. In MongoDB 4.2 and earlier, the operation must be run on an existing collection. 
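To make the implicit-creation behavior described in this table
concrete, the following :binary:`~bin.mongosh` sketch inserts into a
collection that does not yet exist inside a transaction; the database
and collection names are placeholders:

.. code-block:: javascript

   const session = db.getMongo().startSession()
   session.startTransaction()

   // "newOrders" does not exist yet; the insert implicitly creates it
   session.getDatabase( "test" ).getCollection( "newOrders" ).insertOne(
      { _id: 1, status: "created" }
   )

   session.commitTransaction()
   session.endSession()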
@@ -89,8 +89,8 @@ - :dbcommand:`update` - - Starting in MongoDB 4.4, if run on a non-existing - collection, the collection is implicitly created. + - If run on a non-existing collection, the collection is implicitly + created. In MongoDB 4.2 and earlier, the operation must be run on an existing collection. @@ -102,8 +102,8 @@ * - | :method:`db.collection.bulkWrite()` | Various :doc:`/reference/method/js-bulk` - - - Starting in MongoDB 4.4, if run on a non-existing - collection, the collection is implicitly created. + - If run on a non-existing collection, the collection is implicitly + created. In MongoDB 4.2 and earlier, the operation must be run on an existing collection. diff --git a/source/includes/time-series-secondary-indexes-downgrade-FCV.rst b/source/includes/time-series-secondary-indexes-downgrade-FCV.rst index 3acc363d12a..b7382255250 100644 --- a/source/includes/time-series-secondary-indexes-downgrade-FCV.rst +++ b/source/includes/time-series-secondary-indexes-downgrade-FCV.rst @@ -1,5 +1,5 @@ If there are :term:`secondary indexes ` on :ref:`time series collections ` and you need to -downgrade the feature compatibility version (FCV), you must first drop -any secondary indexes that are incompatible with the downgraded FCV. +downgrade the feature compatibility version (fCV), you must first drop +any secondary indexes that are incompatible with the downgraded fCV. See :dbcommand:`setFeatureCompatibilityVersion`. diff --git a/source/includes/tutorials/automatic/aws/cmk.rst b/source/includes/tutorials/automatic/aws/cmk.rst index 1e2263d22b0..0fd9987d426 100644 --- a/source/includes/tutorials/automatic/aws/cmk.rst +++ b/source/includes/tutorials/automatic/aws/cmk.rst @@ -32,7 +32,7 @@ .. tip:: Learn More To learn more about your {+cmk-long+}s, see - :ref:`csfle-reference-keys-key-vaults`. + :ref:`qe-reference-keys-key-vaults`. To learn more about key policies, see `Key Policies in AWS KMS `__ diff --git a/source/includes/tutorials/automatic/aws/dek.rst b/source/includes/tutorials/automatic/aws/dek.rst index 99181850784..99002ed02a2 100644 --- a/source/includes/tutorials/automatic/aws/dek.rst +++ b/source/includes/tutorials/automatic/aws/dek.rst @@ -172,8 +172,8 @@ To view a diagram showing how your client application creates your {+dek-long+} when using an AWS KMS, see - :ref:`csfle-reference-kms-providers-aws-architecture`. + :ref:`qe-fundamentals-kms-providers-aws-architecture`. To learn more about the options for creating a {+dek-long+} encrypted with a {+cmk-long+} hosted in AWS KMS, see - :ref:`csfle-kms-datakeyopts-aws`. + :ref:`qe-kms-datakeyopts-aws`. diff --git a/source/includes/tutorials/automatic/azure/dek.rst b/source/includes/tutorials/automatic/azure/dek.rst index b91430d599f..f3e3992ddb9 100644 --- a/source/includes/tutorials/automatic/azure/dek.rst +++ b/source/includes/tutorials/automatic/azure/dek.rst @@ -174,9 +174,9 @@ To view a diagram showing how your client application creates your {+dek-long+} when using an {+azure-kv+}, see - :ref:`csfle-reference-kms-providers-azure-architecture`. + :ref:`qe-fundamentals-kms-providers-azure-architecture`. To learn more about the options for creating a {+dek-long+} encrypted with a {+cmk-long+} hosted in {+azure-kv+}, see - :ref:`csfle-kms-provider-object-azure` and - :ref:`csfle-kms-datakeyopts-azure`. + :ref:`qe-kms-provider-object-azure` and + :ref:`qe-kms-datakeyopts-azure`. 
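To make the Azure Key Vault options above concrete, creating a data
encryption key from :binary:`~bin.mongosh` is sketched below. The
endpoint and key name are placeholders, and the exact
``KeyVault.createKey()`` arguments shown here are an assumption, so
verify them against that method's reference:

.. code-block:: javascript

   // Assumes an existing key vault object obtained from the client's
   // getKeyVault() helper; the masterKey fields mirror the Azure
   // options discussed above.
   const dekId = keyVault.createKey( "azure", {
      keyName: "your-azure-key-name",                     // placeholder
      keyVaultEndpoint: "your-key-vault.vault.azure.net"  // placeholder
   } )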
diff --git a/source/includes/tutorials/automatic/gcp/dek.rst b/source/includes/tutorials/automatic/gcp/dek.rst index f76acc48a3e..cd591cd8fa3 100644 --- a/source/includes/tutorials/automatic/gcp/dek.rst +++ b/source/includes/tutorials/automatic/gcp/dek.rst @@ -181,9 +181,9 @@ To view a diagram showing how your client application creates your {+dek-long+} when using an {+gcp-kms+}, see - :ref:`csfle-reference-kms-providers-gcp-architecture`. + :ref:`qe-fundamentals-kms-providers-gcp-architecture`. To learn more about the options for creating a {+dek-long+} encrypted with a {+cmk-long+} hosted in {+azure-kv+}, see - :ref:`csfle-kms-provider-object-gcp` and - :ref:`csfle-kms-datakeyopts-gcp`. + :ref:`qe-kms-provider-object-gcp` and + :ref:`qe-kms-datakeyopts-gcp`. diff --git a/source/includes/unreachable-node-default-quorum-index-builds.rst b/source/includes/unreachable-node-default-quorum-index-builds.rst new file mode 100644 index 00000000000..00c7a086eca --- /dev/null +++ b/source/includes/unreachable-node-default-quorum-index-builds.rst @@ -0,0 +1,6 @@ +.. important:: + + If a data-bearing voting node becomes unreachable and the + :ref:`commitQuorum ` is set to the + default ``votingMembers``, index builds can hang until that node + comes back online. diff --git a/source/includes/upgrade-intro.rst b/source/includes/upgrade-intro.rst index 0593b2b195a..426cf60ac2b 100644 --- a/source/includes/upgrade-intro.rst +++ b/source/includes/upgrade-intro.rst @@ -1,4 +1,7 @@ -Use this tutorial to upgrade from a previous major release or upgrade -to the latest patch release of your current release series. Familiarize -yourself with the content of this document, including thoroughly reviewing the -prerequisites, prior to upgrading to MongoDB |newversion|. +Use this tutorial to upgrade from MongoDB |oldversion| to MongoDB +|newversion|. To upgrade to a new patch release within the same release +series, see :ref:`upgrade-to-latest-revision`. + +Familiarize yourself with the content of this document, including +thoroughly reviewing the prerequisites, prior to upgrading to MongoDB +|newversion|. diff --git a/source/includes/use-expr-in-find-query.rst b/source/includes/use-expr-in-find-query.rst new file mode 100644 index 00000000000..9859259555e --- /dev/null +++ b/source/includes/use-expr-in-find-query.rst @@ -0,0 +1,30 @@ +Compare Two Fields from A Single Document +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Consider a ``monthlyBudget`` collection with the following documents: + +.. code-block:: javascript + + db.monthlyBudget.insertMany( [ + { _id : 1, category : "food", budget : 400, spent : 450 }, + { _id : 2, category : "drinks", budget : 100, spent : 150 }, + { _id : 3, category : "clothes", budget : 100, spent : 50 }, + { _id : 4, category : "misc", budget : 500, spent : 300 }, + { _id : 5, category : "travel", budget : 200, spent : 650 } + ] ) + +The following operation uses :query:`$expr` to find documents +where the ``spent`` amount exceeds the ``budget``: + +.. code-block:: javascript + + db.monthlyBudget.find( { $expr: { $gt: [ $spent , $budget ] } } ) + +The operation returns the following results: + +.. 
code-block:: javascript + :copyable: false + + { _id : 1, category : "food", budget : 400, spent : 450 } + { _id : 2, category : "drinks", budget : 100, spent : 150 } + { _id : 5, category : "travel", budget : 200, spent : 650 } diff --git a/source/includes/views/fact-compare-view-and-materialized-view.rst b/source/includes/views/fact-compare-view-and-materialized-view.rst index a5e9850395c..de060ef7a03 100644 --- a/source/includes/views/fact-compare-view-and-materialized-view.rst +++ b/source/includes/views/fact-compare-view-and-materialized-view.rst @@ -7,7 +7,14 @@ from an aggregation pipeline. - On-demand materialized views are stored on and read from disk. They use a :pipeline:`$merge` or :pipeline:`$out` stage to update the saved - data. + data. + + .. note:: + + When using :pipeline:`$merge`, you can use :ref:`change streams + ` to watch for changes on the materialized view. + When using :pipeline:`$out`, you can't watch for changes on the + materialized view. Indexes ~~~~~~~ diff --git a/source/includes/w-1-rollback-warning.rst b/source/includes/w-1-rollback-warning.rst index 40d2237d14e..ba2b1893e1e 100644 --- a/source/includes/w-1-rollback-warning.rst +++ b/source/includes/w-1-rollback-warning.rst @@ -1,6 +1,6 @@ .. warning:: - In MongoDB 4.4 and later, if write operations use - :writeconcern:`{ w: 1 } <\>` write concern, the rollback - directory may exclude writes submitted after an :term:`oplog hole` - if the primary restarts before the write operation completes. + If write operations use :writeconcern:`{ w: 1 } <\>` write concern, + the rollback directory may exclude writes submitted after an + :term:`oplog hole` if the primary restarts before the write operation + completes. diff --git a/source/includes/warning-dropDatabase-shardedCluster.rst b/source/includes/warning-dropDatabase-shardedCluster.rst index a5403874462..657ea8af892 100644 --- a/source/includes/warning-dropDatabase-shardedCluster.rst +++ b/source/includes/warning-dropDatabase-shardedCluster.rst @@ -7,15 +7,6 @@ database, you must follow these additional steps for using the #. Run the :dbcommand:`dropDatabase` command on a :binary:`~bin.mongos`, no additional steps required. -- For **MongoDB 4.4**, you must: - - #. Run the :dbcommand:`dropDatabase` command on a - :binary:`~bin.mongos`. - - #. Once the command successfully completes, run the - :dbcommand:`dropDatabase` command once more on a - :binary:`~bin.mongos`. - - For **MongoDB 4.2**, you must: #. Run the :dbcommand:`dropDatabase` command on a diff --git a/source/index.txt b/source/index.txt index 25174372a81..b72f224b4d3 100644 --- a/source/index.txt +++ b/source/index.txt @@ -21,8 +21,6 @@ What is MongoDB? .. button:: Get started with MongoDB Atlas :uri: https://www.mongodb.com/cloud?tck=docs_server - .. include:: /includes/rc-available.rst - .. image:: /images/hero.png :alt: Homepage hero image @@ -251,15 +249,18 @@ Explore libraries and tools for MongoDB. MongoDB Shell (mongosh) /crud /aggregation - /data-modeling - /core/transactions /indexes - /security + Atlas Search + Atlas Vector Search + /core/timeseries-collections /changeStreams + /core/transactions + /data-modeling /replication /sharding - /administration /storage + /administration + /security /faq /reference /release-notes diff --git a/source/indexes.txt b/source/indexes.txt index fe7bc627ab7..d27456d9d4b 100644 --- a/source/indexes.txt +++ b/source/indexes.txt @@ -10,6 +10,9 @@ Indexes :name: genre :values: reference +.. 
meta:: + :description: Create and manage indexes on collections to improve query performance. + .. contents:: On this page :local: :backlinks: none @@ -52,10 +55,15 @@ example, consider the following scenarios: * - A salesperson often needs to look up client information by location. Location is stored in an embedded object with fields like ``state``, ``city``, and ``zipcode``. You can create an - index on the entire ``location`` object to improve performance - for queries on any field in that object. + index on the ``location`` object to improve performance for + queries on that object. + + .. note:: + + .. include:: /includes/indexes/embedded-object-need-entire-doc.rst - - :ref:`Single Field Index ` on an object + - :ref:`Single Field Index ` on an embedded + document * - A grocery store manager often needs to look up inventory items by name and quantity to determine which items are low stock. You can diff --git a/source/installation.txt b/source/installation.txt index 031cb9c3e50..d60dfb4ff08 100644 --- a/source/installation.txt +++ b/source/installation.txt @@ -14,6 +14,9 @@ Install MongoDB .. default-domain:: mongodb +.. meta:: + :description: How to install MongoDB Community or Enterprise. + .. facet:: :name: genre :values: reference diff --git a/source/introduction.txt b/source/introduction.txt index aeba501a41f..1b8ff5c7310 100644 --- a/source/introduction.txt +++ b/source/introduction.txt @@ -11,6 +11,7 @@ Introduction to MongoDB :values: reference .. meta:: + :description: Learn about the advantages of MongoDB document databases. :keywords: atlas .. contents:: On this page @@ -19,8 +20,6 @@ Introduction to MongoDB :depth: 1 :class: singlecol -.. include:: /includes/rapid-release-short.rst - You can create a MongoDB database in the following environments: .. include:: /includes/fact-environments.rst @@ -137,7 +136,7 @@ third parties to develop storage engines for MongoDB. :titlesonly: :hidden: - /tutorial/getting-started + Getting Started Create an Atlas Free Tier Cluster /core/databases-and-collections /core/document diff --git a/source/legacy-opcodes.txt b/source/legacy-opcodes.txt index 1c70f21843c..4d5e7b19e3e 100644 --- a/source/legacy-opcodes.txt +++ b/source/legacy-opcodes.txt @@ -312,10 +312,10 @@ collection. The format of the OP_QUERY message is: - ``2`` corresponds to SlaveOk. Allow query of replica slave. Normally these return an error except for namespace "local". - - ``3`` corresponds to OplogReplay. Starting in MongoDB 4.4, you - need not specify this flag because the optimization - automatically happens for eligible queries on the oplog. See - :ref:`oplogReplay ` for more information. + - ``3`` corresponds to OplogReplay. You need not specify this flag + because the optimization automatically happens for eligible queries on + the oplog. See :ref:`oplogReplay ` for more + information. - ``4`` corresponds to NoCursorTimeout. The server normally times out idle cursors after an inactivity period (10 minutes) diff --git a/source/reference.txt b/source/reference.txt index ba84c9c231c..12c258a0313 100644 --- a/source/reference.txt +++ b/source/reference.txt @@ -41,9 +41,6 @@ Reference :ref:`server-error-codes` Details the error codes that MongoDB returns. -:ref:`explain-results` - Documentation on information returned from explain operations. - :ref:`glossary` A glossary of common terms and concepts specific to MongoDB. 
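To illustrate the embedded-document indexing guidance above, a minimal
sketch; the ``clients`` collection and its fields are placeholders. An
index on the whole ``location`` object supports equality matches on the
entire embedded document, not queries on its individual fields:

.. code-block:: javascript

   db.clients.createIndex( { location: 1 } )

   // Can use the index: the query matches the entire embedded document
   db.clients.find( { location: { state: "NY", city: "Albany", zipcode: "12207" } } )

   // Cannot use this index: the query targets a single embedded field
   db.clients.find( { "location.state": "NY" } )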
@@ -118,7 +115,6 @@ Reference /reference/mongodb-defaults /reference/exit-codes /reference/error-codes - /reference/explain-results /reference/glossary /reference/log-messages /reference/limits diff --git a/source/reference/aggregation-commands-comparison.txt b/source/reference/aggregation-commands-comparison.txt index 811b524508d..d6820235287 100644 --- a/source/reference/aggregation-commands-comparison.txt +++ b/source/reference/aggregation-commands-comparison.txt @@ -77,17 +77,12 @@ MongoDB aggregation commands. * - Flexibility - - Starting in version 4.4, can define custom aggregation - expressions with :group:`$accumulator` and - :expression:`$function`. - - In previous versions, can only use operators and expressions - supported by the aggregation pipeline. - - However, can add computed - fields, create new virtual sub-objects, and extract sub-fields - into the top-level of results by using the :pipeline:`$project` - pipeline operator. + - You can define custom aggregation expressions with :group:`$accumulator` + and :expression:`$function`. + + You can also add computed fields, create new virtual sub-objects, and + extract sub-fields into the top-level of results by using the + :pipeline:`$project` pipeline operator. See :pipeline:`$project` for more information as well as :doc:`/reference/operator/aggregation` for more information on all diff --git a/source/reference/audit-message.txt b/source/reference/audit-message.txt index ca90bf044a2..5813baf0ff8 100644 --- a/source/reference/audit-message.txt +++ b/source/reference/audit-message.txt @@ -19,7 +19,7 @@ System Event Audit Messages Audit Message ------------- -The :doc:`event auditing feature ` can record events in +The :ref:`event auditing feature ` can record events in JSON format. To configure auditing output, see :doc:`/tutorial/configure-auditing`. @@ -211,6 +211,7 @@ associated ``param`` details and the ``result`` values, if any. - | ``0`` - Success | ``18`` - Authentication Failed | ``334`` - Mechanism Unavailable + | ``337`` - Authentication Abandoned * - .. _audit-message-authCheck: diff --git a/source/reference/bson-types.txt b/source/reference/bson-types.txt index 43e47214ff0..74a0d1cb318 100644 --- a/source/reference/bson-types.txt +++ b/source/reference/bson-types.txt @@ -10,6 +10,9 @@ BSON Types :name: programming_language :values: shell +.. meta:: + :description: MongoDB uses BSON (Binary JSON) field types to store and serialize documents. + .. contents:: On this page :local: :backlinks: none @@ -35,7 +38,7 @@ following table: - The :expression:`$isNumber` aggregation operator returns ``true`` if its argument is a BSON integer, decimal, double, - or long. *New in version 4.4* + or long. To determine a field's type, see :ref:`check-types-in-shell`. diff --git a/source/reference/built-in-roles.txt b/source/reference/built-in-roles.txt index cfb782ca0ac..07dbd24683b 100644 --- a/source/reference/built-in-roles.txt +++ b/source/reference/built-in-roles.txt @@ -6,6 +6,9 @@ Built-In Roles .. default-domain:: mongodb +.. meta:: + :description: Control access to local MongoDB and MongoDB Atlas deployments by using built-in roles and privileges. + .. 
contents:: On this page :local: :backlinks: none @@ -308,7 +311,8 @@ Cluster Administration Roles - :authaction:`checkMetadataConsistency` (New in version 7.0) - :authaction:`cleanupOrphaned` - :authaction:`flushRouterConfig` - - :authaction:`getDefaultRWConcern` (New in version 4.4) + - :dbcommand:`getClusterParameter` + - :authaction:`getDefaultRWConcern` - :authaction:`listSessions` - :authaction:`listShards` - :authaction:`removeShard` @@ -317,7 +321,8 @@ Cluster Administration Roles - :authaction:`replSetGetStatus` - :authaction:`replSetStateChange` - :authaction:`resync` - - :authaction:`setDefaultRWConcern` (New in version 4.4) + - :dbcommand:`setClusterParameter` + - :authaction:`setDefaultRWConcern` - :authaction:`setFeatureCompatibilityVersion` * - *All* :ref:`databases ` @@ -325,10 +330,13 @@ Cluster Administration Roles - .. hlist:: :columns: 1 - - :authaction:`clearJumboFlag` (New in 4.2.3) + - :dbcommand:`analyzeShardKey` (New in version 7.0) + - :authaction:`clearJumboFlag` + - :dbcommand:`configureQueryAnalyzer` - :authaction:`enableSharding` - - :authaction:`refineCollectionShardKey` (New in 4.4) - :authaction:`moveChunk` + - :authaction:`refineCollectionShardKey` + - :authaction:`reshardCollection` - :authaction:`splitVector` :authrole:`clusterManager` provides additional privileges for the @@ -446,7 +454,7 @@ Cluster Administration Roles - :authaction:`connPoolStats` - :authaction:`getCmdLineOpts` - - :authaction:`getDefaultRWConcern` (New in version 4.4) + - :authaction:`getDefaultRWConcern` - :authaction:`getLog` - :authaction:`getParameter` - :authaction:`getShardMap` @@ -611,12 +619,6 @@ Cluster Administration Roles - :authaction:`shutdown` - :authaction:`touch` - :authaction:`unlock` - - .. versionchanged:: 4.4 - - Starting in version 4.4, :authrole:`hostManager` no longer - provides the :authaction:`cpuProfiler` privilege action on the - cluster. On *all* databases in the cluster, provides the following actions: diff --git a/source/reference/change-events.txt b/source/reference/change-events.txt index 1f9cb34a650..c855abd07b0 100644 --- a/source/reference/change-events.txt +++ b/source/reference/change-events.txt @@ -115,7 +115,7 @@ Operation Types - Occurs when the shard key for a collection and the distribution of data changes. - .. versionadded:: 6.1 + .. versionadded:: 6.1 *(Also available in 6.0.14)* * - :data:`shardCollection` diff --git a/source/reference/change-events/reshardCollection.txt b/source/reference/change-events/reshardCollection.txt index 3b04f50d9f0..e2ba8e8fde9 100644 --- a/source/reference/change-events/reshardCollection.txt +++ b/source/reference/change-events/reshardCollection.txt @@ -19,7 +19,7 @@ Summary .. data:: reshardCollection - .. versionadded:: 6.0 + .. versionadded:: 6.1 *(Also available in 6.0.14)* A ``reshardCollection`` event occurs when the shard key for a collection and the distribution of your data is changed. diff --git a/source/reference/collation-locales-defaults.txt b/source/reference/collation-locales-defaults.txt index 3b1c7c4c78e..0b938001e18 100644 --- a/source/reference/collation-locales-defaults.txt +++ b/source/reference/collation-locales-defaults.txt @@ -28,7 +28,7 @@ Supported Languages and Locales MongoDB's collation feature supports the following languages. The following table lists the supported languages and the associated locales as defined by `ICU Locale -ID `_. [#missing-locale]_ +ID `_. [#missing-locale]_ .. 
include:: /includes/collation-locale-table.rst diff --git a/source/reference/collation.txt b/source/reference/collation.txt index c4705a1030d..f43aa1e558f 100644 --- a/source/reference/collation.txt +++ b/source/reference/collation.txt @@ -11,6 +11,9 @@ Collation :name: programming_language :values: shell +.. meta:: + :description: Compare data in different languages by using collation and defining collation documents. + .. contents:: On this page :local: :backlinks: none @@ -61,15 +64,17 @@ parameters and the locales they are associated with, see To specify simple binary comparison, specify ``locale`` value of ``"simple"``. - + * - ``strength`` - integer - - Optional. The level of comparison to perform. + - .. _collation-parameter-strength: + + Optional. The level of comparison to perform. Corresponds to `ICU Comparison Levels - `_. + `_. Possible values are: .. list-table:: @@ -118,7 +123,7 @@ parameters and the locales they are associated with, see breaker. See `ICU Collation: Comparison Levels - `_ + `_ for details. @@ -142,7 +147,7 @@ parameters and the locales they are associated with, see ``2``. The default is ``false``. For more information, see `ICU Collation: Case Level - `_. + `_. @@ -172,7 +177,7 @@ parameters and the locales they are associated with, see - Default value. Similar to ``"lower"`` with slight differences. See - ``_ + ``_ for details of differences. @@ -222,7 +227,7 @@ parameters and the locales they are associated with, see and are only distinguished at strength levels greater than 3. See `ICU Collation: Comparison Levels - `_ + `_ for more information. Default is ``"non-ignorable"``. @@ -289,7 +294,7 @@ parameters and the locales they are associated with, see The default value is ``false``. See - ``_ for details. + ``_ for details. diff --git a/source/reference/command.txt b/source/reference/command.txt index 2cee51dcb42..ef2a397ecf6 100644 --- a/source/reference/command.txt +++ b/source/reference/command.txt @@ -6,6 +6,9 @@ Database Commands .. default-domain:: mongodb +.. meta:: + :description: How to run MongoDB commands and their parameters with examples. + .. contents:: On this page :local: :backlinks: none @@ -566,8 +569,6 @@ Sharding Commands - Returns information on whether the chunks of a sharded collection are balanced. - .. versionadded:: 4.4 - - No support for :atlas:`serverless instances `. * - :dbcommand:`balancerStart` @@ -685,12 +686,6 @@ Sharding Commands - No support for :atlas:`serverless instances `. - * - :dbcommand:`medianKey` - - - Deprecated internal command. See :dbcommand:`splitVector`. - - - Yes - * - :dbcommand:`moveChunk` - Internal command that migrates chunks between shards. @@ -720,8 +715,6 @@ Sharding Commands - Refines a collection's shard key by adding a suffix to the existing key. - .. versionadded:: 4.4 - - No support for :atlas:`M10 clusters ` and :atlas:`serverless instances `. @@ -927,8 +920,6 @@ Administration Commands - Retrieves the global default read and write concern options for the deployment. - .. versionadded:: 4.4 - - Yes * - :dbcommand:`getClusterParameter` @@ -1039,8 +1030,6 @@ Administration Commands - Sets the global default read and write concern options for the deployment. - .. versionadded:: 4.4 - - Yes * - :dbcommand:`shutdown` diff --git a/source/reference/command/aggregate.txt b/source/reference/command/aggregate.txt index e536166e4f0..8e8804e96be 100644 --- a/source/reference/command/aggregate.txt +++ b/source/reference/command/aggregate.txt @@ -207,15 +207,10 @@ arguments: - any - .. 
include:: /includes/extracts/comment-content.rst - - .. note:: - Any comment set on an :dbcommand:`aggregate` command is inherited - by any subsequent :dbcommand:`getMore` commands running with the - same ``cursorId`` returned from the ``aggregate`` command. + .. |comment-include-command| replace:: ``aggregate`` - *Changed in version 4.4.* Prior to 4.4, comments could only be strings. - + .. include:: /includes/comment-option-getMore-inheritance.rst * - ``writeConcern`` @@ -290,6 +285,7 @@ However, the following stages are not allowed within transactions: - :pipeline:`$merge` - :pipeline:`$out` - :pipeline:`$planCacheStats` +- :pipeline:`$unionWith` You also cannot specify the ``explain`` option. @@ -587,4 +583,4 @@ Use Variables in ``let`` .. seealso:: - :method:`db.collection.aggregate()` + :method:`db.collection.aggregate()` \ No newline at end of file diff --git a/source/reference/command/analyzeShardKey.txt b/source/reference/command/analyzeShardKey.txt index 3dedfaec75e..2ff104e6b4e 100644 --- a/source/reference/command/analyzeShardKey.txt +++ b/source/reference/command/analyzeShardKey.txt @@ -1,4 +1,3 @@ -.. _analyzeShardKey-command: =============== analyzeShardKey @@ -112,7 +111,7 @@ Limitations Access Control -------------- -|analyzeShardKey| requires one of the following roles: +|analyzeShardKey| requires one of these roles: - :authaction:`enableSharding` privilege action against the collection being analyzed. @@ -156,6 +155,14 @@ Examples .. include:: /includes/analyzeShardKey-example-intro.rst +.. note:: + + Before you run ``analyzeShardKey`` commands, read the + :ref:`supporting-indexes-ref` section earlier on this page. If you + require supporting indexes for the shard key you are analyzing, use + the :method:`db.collection.createIndex()` method to create the + indexes. + { lastName: 1 } keyCharacteristics ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -164,12 +171,12 @@ This |analyzeShardKey| command provides metrics on the .. code-block:: javascript - use social - db.post.analyzeShardKey( + db.adminCommand( { + analyzeShardKey: "social.post", key: { lastName: 1 }, keyCharacteristics: true, - readWriteDistribution: false, + readWriteDistribution: false } ) diff --git a/source/reference/command/appendOplogNote.txt b/source/reference/command/appendOplogNote.txt index c1a6cd1e783..4b1f2c13263 100644 --- a/source/reference/command/appendOplogNote.txt +++ b/source/reference/command/appendOplogNote.txt @@ -17,17 +17,20 @@ Definition Writes a non-operational entry to the :term:`oplog`. - Syntax ------ -You can only issue the ``appendOplogNote`` command against the ``admin`` database. +You can only run the ``appendOplogNote`` command on the ``admin`` +database. + +The command has this syntax: .. code-block:: javascript + :copyable: false db.adminCommand( { - appendOplogNote: 1 + appendOplogNote: 1, data: } ) @@ -42,9 +45,11 @@ Command Fields * - Field - Type - Description + * - ``appendOplogNote`` - any - Set to any value. + * - ``data`` - document - The document to append to the :term:`oplog`. 
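After you append a note (as shown in the example that follows), you can
read it back from the oplog on a replica set member. This is an
illustrative query against the ``local`` database, not a documented
interface:

.. code-block:: javascript

   // Most recent no-op ("n") entry that carries a message document
   db.getSiblingDB( "local" ).getCollection( "oplog.rs" ).find(
      { op: "n", "o.msg": { $exists: true } }
   ).sort( { $natural: -1 } ).limit( 1 )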
@@ -59,9 +64,9 @@ To append a non-operational entry to the :term:`oplog`, use the db.adminCommand( { - appendOplogNote: 1 + appendOplogNote: 1, data: { - msg: "Appending test msg to oplog" + msg: "Appending test message to oplog" } } ) @@ -75,11 +80,10 @@ Example ``oplog`` entry: op: "n", ns: "", o: { - msg: "Appending test msg to oplog" + msg: "Appending test message to oplog" }, ts: Timestamp({ t: 1689177321, i: 1 }), t: Long("1"), v: Long("2"), wall: ISODate("2023-07-12T15:55:21.180Z") } - diff --git a/source/reference/command/applyOps.txt b/source/reference/command/applyOps.txt index ca2d5da6c2b..d0f9ba5db81 100644 --- a/source/reference/command/applyOps.txt +++ b/source/reference/command/applyOps.txt @@ -19,6 +19,17 @@ Definition instance. The :dbcommand:`applyOps` command is an internal command. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-serverless.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Behavior -------- diff --git a/source/reference/command/balancerCollectionStatus.txt b/source/reference/command/balancerCollectionStatus.txt index c0327839057..4b74617ffb4 100644 --- a/source/reference/command/balancerCollectionStatus.txt +++ b/source/reference/command/balancerCollectionStatus.txt @@ -15,8 +15,6 @@ Definition .. dbcommand:: balancerCollectionStatus - .. versionadded:: 4.4 - Returns a document that contains information about whether the chunks of a sharded collection are balanced (i.e. do not need to be moved) as of the time the command is run or need to be moved because diff --git a/source/reference/command/buildInfo.txt b/source/reference/command/buildInfo.txt index 885b8ae5e52..a79db10a857 100644 --- a/source/reference/command/buildInfo.txt +++ b/source/reference/command/buildInfo.txt @@ -19,6 +19,17 @@ Definition returns a build summary for the current :binary:`~bin.mongod`. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ diff --git a/source/reference/command/cleanupOrphaned.txt b/source/reference/command/cleanupOrphaned.txt index 62a8d0683ff..a3706f5d6c0 100644 --- a/source/reference/command/cleanupOrphaned.txt +++ b/source/reference/command/cleanupOrphaned.txt @@ -10,27 +10,23 @@ cleanupOrphaned :depth: 1 :class: singlecol +.. important:: + + Starting in MongoDB 6.0.3, you should run an aggregation using the + :pipeline:`$shardedDataDistribution` stage to confirm no orphaned + documents remain. For details, + see :ref:`shardedDataDistribution-no-orphaned-docs`. + Definition ---------- .. dbcommand:: cleanupOrphaned - .. versionchanged:: 4.4 - - For orphaned documents generated after upgrading to MongoDB 4.4, - :term:`chunk` migrations and orphaned document cleanup are more - resilient to failover. The cleanup process automatically resumes in - the event of a failover. You no longer need to run the - :dbcommand:`cleanupOrphaned` command to clean up orphaned documents. 
- Instead, use this command to wait for orphaned documents in a chunk + Use this command to wait for orphaned documents in a chunk range from a shard key's :bsontype:`MinKey` to its :bsontype:`MaxKey` for a specified namespace to be cleaned up from a majority of a shard's members. - In MongoDB 4.2 and earlier, :dbcommand:`cleanupOrphaned` initiated - the cleanup process for orphaned documents in a specified namespace - and shard key range. - To run, issue :dbcommand:`cleanupOrphaned` in the ``admin`` database directly on the :binary:`~bin.mongod` instance that is the primary replica set member of the shard. You do not need to disable the @@ -92,42 +88,6 @@ Command Fields of the sharded collection for which to wait for cleanup of the orphaned data. - - * - ``startingFromKey`` - - - document - - - Deprecated. Starting in MongoDB 4.4, the value of this field - is not used to determine the bounds of the cleanup range. The - :dbcommand:`cleanupOrphaned` command waits until - all orphaned documents in all ranges are cleaned up from the - shard before completing, regardless of the presence of or the - value of ``startingFromKey``. - - .. note:: - - The :binary:`~bin.mongod` continues to validate that the - ``startingFromKey`` value matches the shard key pattern, - even though it is not used to determine the bounds of the - cleanup range. - - - * - ``secondaryThrottle`` - - - boolean - - - Deprecated. Starting in MongoDB 4.4, this field has no effect. - - * - ``writeConcern`` - - - document - - - Deprecated. Starting in MongoDB 4.4, this field has no effect. - Orphaned documents are always cleaned up from a majority of a - shard's members (``{ writeConcern: { w: "majority" } }``) - before the :dbcommand:`cleanupOrphaned` command returns a - response. - Behavior -------- @@ -136,10 +96,9 @@ Behavior Determine Range ~~~~~~~~~~~~~~~ -Starting in MongoDB 4.4, the value of this field is not used to -determine the bounds of the cleanup range. The -:dbcommand:`cleanupOrphaned` command waits until all orphaned documents -in all ranges in the namespace are cleaned up from the shard before +The value of this field is not used to determine the bounds of the cleanup +range. The :dbcommand:`cleanupOrphaned` command waits until all orphaned +documents in all ranges in the namespace are cleaned up from the shard before completing, regardless of the presence of or value of ``startingFromKey``. diff --git a/source/reference/command/cloneCollectionAsCapped.txt b/source/reference/command/cloneCollectionAsCapped.txt index c533786e9bf..25fef0617f2 100644 --- a/source/reference/command/cloneCollectionAsCapped.txt +++ b/source/reference/command/cloneCollectionAsCapped.txt @@ -20,6 +20,16 @@ Definition within the same database. The operation does not affect the original non-capped collection. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-free-or-serverless.rst + +.. include:: /includes/fact-environments-onprem-only.rst Syntax ------ @@ -72,9 +82,7 @@ The command takes the following fields: * - ``comment`` - .. include:: /includes/extracts/comment-content.rst - - .. versionadded:: 4.4 - + The command copies an ``existing collection`` and creates a new ``capped collection`` with a maximum size specified by the ``capped size`` in bytes. 
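For example, a sketch that clones an existing ``events`` collection
into a 10 megabyte capped collection; the collection names and size are
placeholders:

.. code-block:: javascript

   db.runCommand( {
      cloneCollectionAsCapped: "events",   // existing source collection
      toCollection: "eventsCapped",        // new capped collection
      size: 10485760                       // maximum size in bytes (10 MB)
   } )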
diff --git a/source/reference/command/collMod.txt b/source/reference/command/collMod.txt index 500b00000e9..a0dff90e170 100644 --- a/source/reference/command/collMod.txt +++ b/source/reference/command/collMod.txt @@ -32,6 +32,17 @@ Definition views. For discussion of on-demand materialized views, see :pipeline:`$merge` instead. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ @@ -58,6 +69,27 @@ Options Change Index Properties ~~~~~~~~~~~~~~~~~~~~~~~ +To change index options, specify either the key pattern or name of the +existing index options you want to change: + +.. code-block:: javascript + :copyable: false + + db.runCommand( { + collMod: , + index: { + keyPattern: | name: , + expireAfterSeconds: , // Set the TTL expiration threshold + hidden: , // Change index visibility in the query planner + prepareUnique: , // Reject new duplicate index entries + unique: // Convert an index to a unique index + }, + dryRun: + } ) + +If the index does not exist, the command errors with the message +``"cannot find index for ns "``. + .. collflag:: index The ``index`` option can change the following properties of @@ -109,8 +141,6 @@ Change Index Properties Modifying the index option ``hidden`` resets the :pipeline:`$indexStats` for the index if the value changes. - .. versionadded:: 4.4 - * - ``prepareUnique`` - A boolean that determines whether the index will accept new duplicate entries. @@ -145,32 +175,23 @@ Change Index Properties To end a conversion, set ``prepareUnique`` to ``false``. - .. versionadded:: 6.0 - - To change index options, specify either the key pattern or name of - the existing index and the index option or options you wish to - change: + To see an example of how to convert a non-unique index to a + unique index, see :ref:`index-convert-to-unique`. - .. code-block:: javascript - :copyable: false - - db.runCommand( { - collMod: , - index: { - keyPattern: | name: , - expireAfterSeconds: , // Set the TTL expiration threshold - hidden: , // Change index visibility in the query planner - prepareUnique: , // Reject new duplicate index entries - unique: // Convert an index to a unique index - } - } ) + .. versionadded:: 6.0 - If the index does not exist, the command errors with the message - ``"cannot find index for ns "``. +.. collflag:: dryRun - .. seealso:: + *Default value:* ``false`` + + Only used when ``index.unique`` is ``true``. + + Before you convert a non-unique index to a unique index, you can run + the ``collMod`` command with ``dryRun: true``. If you do, MongoDB + checks the collection for duplicate keys and returns any violations. - - :ref:`index-type-hidden` + Use ``dryRun: true`` to confirm that you can convert an index to be + unique without any errors. Validate Documents ~~~~~~~~~~~~~~~~~~ @@ -429,8 +450,6 @@ To disable change stream pre- and post-images for a collection, set Attach Comment ~~~~~~~~~~~~~~ -.. versionadded:: 4.4 - .. collflag:: comment Optional. You can attach a comment to this command. The comment must be @@ -524,9 +543,7 @@ Hide an Index from the Query Planner .. note:: To hide an index, you must have :ref:`featureCompatibilityVersion - ` set to ``4.4`` or greater. 
However, once hidden, the - index remains hidden even with ``featureCompatibilityVersion`` - set to ``4.2`` on MongoDB 4.4 binaries. + ` set to ``{+minimum-lts-version+}`` or greater. The following example :ref:`hides ` an existing index on the ``orders`` collection. Specifically, the operation hides @@ -565,97 +582,3 @@ To hide a text index, you must specify the index by ``name`` and not by - :ref:`index-type-hidden` - :method:`db.collection.hideIndex()` - :method:`db.collection.unhideIndex()` - -Convert an Existing Index to a Unique Index -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Create the ``apples`` collection: - -.. code-block:: javscript - - db.apples.insertMany( [ - { type: "Delicious", quantity: 12 }, - { type: "Macintosh", quantity: 13 }, - { type: "Delicious", quantity: 13 }, - { type: "Fuji", quantity: 15 }, - { type: "Washington", quantity: 10 }, - ] ) - -Add a single field index on ``type``: - -.. code-block:: javscript - - db.apples.createIndex( { type: 1 } ) - -Prepare the index on the ``type`` field for conversion: - -.. code-block:: javscript - - db.runCommand( { - collMod: "apples", - index: { - keyPattern: { type: 1 }, - prepareUnique: true - } - } ) - -The existing index may contain duplicate entries, but it will not -accept new documents that duplicate an index entry when -``prepareUnique`` is ``true``. - -Try to insert a document with a duplicate index value: - -.. code-block:: javscript - - db.apples.insertOne( { type: "Delicious", quantity: 200 } ) - -The operation returns an error. The index will not accept new -duplicate entries. - -Use the ``unique``option to convert the index to a unique index. -``collMod`` checks the collection for duplicate index entries before -converting the index: - -.. code-block:: javscript - - db.runCommand( { - collMod: "apples", - index: { - keyPattern: { type: 1 }, - unique: true - } - } ) - -The response to this operation varies by driver. You will always -receive an error message about the duplicate entries. - -.. code-block:: shell - :copyable: false - - "errmsg" : "Cannot convert the index to unique. Please resolve - conflicting documents before running collMod again." - -Some drivers also return a list of ``ObjectIds`` for the duplicate -entries: - -.. code-block:: shell - :copyable: false - - { - "ok" : 0, - "errmsg" : "Cannot convert the index to unique. Please resolve \ - conflicting documents before running collMod again.", - "code" : 359, - "codeName" : "CannotConvertIndexToUnique", - "violations" : [ - { - "ids" : [ - ObjectId("62a2015777e2d47c4da33146"), - ObjectId("62a2015777e2d47c4da33148") - ] - } - ] - } - -To complete the conversion, modify the duplicate entries to remove any -conflicts and re-run ``collMod()`` with the ``unique`` option. diff --git a/source/reference/command/collStats.txt b/source/reference/command/collStats.txt index 4644bd6fc32..e36a3b8e007 100644 --- a/source/reference/command/collStats.txt +++ b/source/reference/command/collStats.txt @@ -41,6 +41,17 @@ Definition .. include:: /includes/fact-dbcommand.rst +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ @@ -398,7 +409,7 @@ collection and the storage engine, the output fields may vary. 
, ], "totalIndexSize" : , - "totalSize" : , // Available starting in MongoDB 4.4 + "totalSize" : , "indexSizes" : { "_id_" : , "" : , @@ -477,8 +488,6 @@ Output The field is only available if storage is available for reuse (i.e. greater than zero). - .. versionadded:: 4.4 - .. data:: collStats.nindexes The number of indexes on the collection. All collections have at @@ -526,8 +535,6 @@ Output :data:`~collStats.totalIndexSize`. The ``scale`` argument affects this value. - .. versionadded:: 4.4 - .. data:: collStats.indexSizes This field specifies the key and size of every existing index on diff --git a/source/reference/command/compact.txt b/source/reference/command/compact.txt index 14814fb0114..f1a8e8b70fc 100644 --- a/source/reference/command/compact.txt +++ b/source/reference/command/compact.txt @@ -19,6 +19,17 @@ Definition Attempts to release unneeded disk space to the operating system. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-free.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ @@ -82,8 +93,6 @@ The command takes the following fields: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - .. versionadded:: 4.4 - .. warning:: Always have an up-to-date backup before performing server maintenance such as the ``compact`` operation. @@ -165,21 +174,11 @@ Behavior Blocking ~~~~~~~~ -Blocking behavior is version specific. - -.. list-table:: - - * - Version - - Blocking Behavior - - * - 4.4 - - - .. include:: /includes/reference/compact-methods-list.rst - - All other operations are permitted. +- .. include:: /includes/reference/compact-methods-list.rst - * - Post 4.4.17, 5.0.12, 6.0.2, 6.1.0 - - - .. include:: /includes/reference/compact-methods-list.rst - - All other operations are permitted. - - The :ref:`locking order ` changes. +- All other operations are permitted. + +- The :ref:`locking order ` changes. To run ``compact`` in a replica set, see :ref:`compact-cmd-replica-sets` for additional considerations. @@ -239,7 +238,7 @@ replica set, however there are some important considerations: - You should run ``compact`` on secondary nodes whenever possible. If you cannot run ``compact`` on secondaries, see the :ref:`force ` option. -- Starting in MongoDB 6.1.0 (and 6.0.2, 5.0.12, and 4.4.17): +- Starting in MongoDB 6.1.0 (and 6.0.2 and 5.0.12): - A secondary node can replicate while ``compact`` is running. - Reads are permitted. @@ -271,29 +270,12 @@ To run ``compact`` on a cluster Version Specific Considerations for Secondary Nodes ``````````````````````````````````````````````````` -Blocking behavior on secondary nodes is version specific. +- A secondary node can replicate while ``compact`` is running. -.. list-table:: - - * - Version - - Blocking Behavior - - * - 4.4 - - - No replication is possible. - - Reads are not permitted. - - * - Post 4.4.17, 5.0.12, 6.0.2, 6.1.0 - - - A secondary node can replicate while ``compact`` is running. - - Reads permitted. +- Reads are permitted. -Replication status while the ``compact`` command is running depends on -your specific MongoDB version: - -- In MongoDB versions ``4.4.9`` and later, the replica set remains in a - :replstate:`SECONDARY` status. - -- In MongoDB versions earlier than ``4.4.9``, ``compact`` forces - the replica set into the :replstate:`RECOVERING` status. 
+While the ``compact`` command is running, the replica set remains in a +:replstate:`SECONDARY` status. For more information about replica set member states, see See :ref:`replica-set-member-states`. @@ -310,12 +292,6 @@ as a maintenance operation. You cannot issue ``compact`` against a :binary:`~bin.mongos` instance. -Capped Collections -~~~~~~~~~~~~~~~~~~ - -On :ref:`WiredTiger `, the ``compact`` -command will attempt to compact the collection. - Index Building ~~~~~~~~~~~~~~ diff --git a/source/reference/command/compactStructuredEncryptionData.txt b/source/reference/command/compactStructuredEncryptionData.txt index 6a14f91687d..f75c00f3803 100644 --- a/source/reference/command/compactStructuredEncryptionData.txt +++ b/source/reference/command/compactStructuredEncryptionData.txt @@ -20,6 +20,17 @@ Definition Compacts documents specified in the metadata collections and deletes redundant documents. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ diff --git a/source/reference/command/configureQueryAnalyzer.txt b/source/reference/command/configureQueryAnalyzer.txt index 169638bdf48..493842658ab 100644 --- a/source/reference/command/configureQueryAnalyzer.txt +++ b/source/reference/command/configureQueryAnalyzer.txt @@ -118,6 +118,12 @@ Consider the following behavior when running |CQA|: .. include:: /includes/cqa-currentOp.rst +View Sampled Queries +~~~~~~~~~~~~~~~~~~~~ + +To see sampled queries for all collections or a specific collection, use +the :pipeline:`$listSampledQueries` aggregation stage. + Limitations ~~~~~~~~~~~ @@ -132,15 +138,6 @@ Output .. _cqa-examples: -Query Sampling Progress -~~~~~~~~~~~~~~~~~~~~~~~ - -When query sampling is enabled, you can check the progress of the -query sampling using the :pipeline:`$currentOp` aggregation stage. - -For details on the query sampling-related fields, see the -:ref:`related fields `. - Examples -------- @@ -181,3 +178,4 @@ Learn More - :method:`db.collection.configureQueryAnalyzer()` - :ref:`currentOp Query Sampling Metrics ` +- :pipeline:`$listSampledQueries` diff --git a/source/reference/command/connPoolStats.txt b/source/reference/command/connPoolStats.txt index 518400e8d96..d9e9333afda 100644 --- a/source/reference/command/connPoolStats.txt +++ b/source/reference/command/connPoolStats.txt @@ -26,6 +26,17 @@ Definition .. include:: /includes/note-conn-pool-stats.rst +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ @@ -431,7 +442,7 @@ Output .. data:: connPoolStats.replicaSetMatchingStrategy - .. versionadded:: 5.0 (*Also available starting in 4.4.5 and 4.2.13*) + .. 
versionadded:: 5.0 On a :binary:`~bin.mongos` instance, this value reports the policy used by the instance to determine the minimum size limit of its diff --git a/source/reference/command/connectionStatus.txt b/source/reference/command/connectionStatus.txt index 5e8c610c342..46077929256 100644 --- a/source/reference/command/connectionStatus.txt +++ b/source/reference/command/connectionStatus.txt @@ -18,6 +18,17 @@ Definition Returns information about the current connection, specifically the state of authenticated users and their available permissions. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ @@ -55,7 +66,7 @@ field: - Optional. Set ``showPrivileges`` to true to instruct :dbcommand:`connectionStatus` to return the full set of - :doc:`privileges ` that + :ref:`privileges ` that currently-authenticated users possess. By default, this field is ``false``. diff --git a/source/reference/command/convertToCapped.txt b/source/reference/command/convertToCapped.txt index 932319d0b3b..2864752eb7d 100644 --- a/source/reference/command/convertToCapped.txt +++ b/source/reference/command/convertToCapped.txt @@ -15,16 +15,27 @@ Definition .. dbcommand:: convertToCapped - .. warning:: Do Not Run This Command In Sharded Clusters + .. warning:: Do Not Run This Command On Sharded Collections MongoDB does **not** support the :dbcommand:`convertToCapped` - command in a sharded cluster. + command on sharded collections. The :dbcommand:`convertToCapped` command converts an existing, non-capped collection to a :term:`capped collection` within the same database. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-serverless.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ @@ -68,7 +79,6 @@ The command takes the following fields: * - ``comment`` - .. include:: /includes/extracts/comment-content.rst - .. versionadded:: 4.4 :dbcommand:`convertToCapped` takes an existing collection (````) and transforms it into a capped collection with diff --git a/source/reference/command/count.txt b/source/reference/command/count.txt index 50cdd99a8ba..37933e42a3d 100644 --- a/source/reference/command/count.txt +++ b/source/reference/command/count.txt @@ -158,9 +158,6 @@ Command Fields - any - .. include:: /includes/extracts/comment-content.rst - - .. versionadded:: 4.4 - Stable API Support ------------------ diff --git a/source/reference/command/create.txt b/source/reference/command/create.txt index 569036f092f..b4e4a4bc8ff 100644 --- a/source/reference/command/create.txt +++ b/source/reference/command/create.txt @@ -23,6 +23,17 @@ Definition views. For discussion of on-demand materialized views, see :pipeline:`$merge` instead. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ @@ -346,12 +357,12 @@ The ``create`` command has the following fields: * - ``encryptedFields`` - document - - Optional. 
A document that configures :ref:`queryable encryption - ` for the collection being created. + - Optional. A document that configures :ref:`{+qe+} + ` for the collection being created. .. include:: /includes/fact-encryptedFieldsConfig-intro.rst - For details, see :ref:``. + For details, see :ref:``. * - ``comment`` - any diff --git a/source/reference/command/createIndexes.txt b/source/reference/command/createIndexes.txt index 0cc3a4b2412..1b990b5de76 100644 --- a/source/reference/command/createIndexes.txt +++ b/source/reference/command/createIndexes.txt @@ -21,6 +21,16 @@ Definition :method:`db.collection.createIndexes()` helper methods. .. include:: /includes/fact-dbcommand-tip +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst Syntax ------ @@ -136,17 +146,13 @@ The :dbcommand:`createIndexes` command takes the following fields: - A replica set :doc:`tag name `. - - .. versionadded:: 4.4 - + * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - - .. versionadded:: 4.4 - + .. _createIndexes-options: Each document in the ``indexes`` array can take the following fields: @@ -210,6 +216,8 @@ Each document in the ``indexes`` array can take the following fields: .. include:: /includes/fact-partial-filter-expression-operators.rst + .. include:: /includes/queryable-encryption/qe-csfle-partial-filter-disclaimer.rst + You can specify a ``partialFilterExpression`` option for all MongoDB :ref:`index types `. @@ -259,14 +267,12 @@ Each document in the ``indexes`` array can take the following fields: - .. _cmd-createIndexes-hidden: Optional. A flag that determines whether the index is - :doc:`hidden ` from the query planner. A + :ref:`hidden ` from the query planner. A hidden index is not evaluated as part of query plan selection. Default is ``false``. - - .. versionadded:: 4.4 - + * - ``storageEngine`` - document @@ -300,13 +306,14 @@ Each document in the ``indexes`` array can take the following fields: :ref:`control-text-search-results` to adjust the scores. The default value is ``1``. - - + * - ``default_language`` - string - - Optional. For :ref:`text ` indexes, the language that + - .. _createIndexes-default-language: + + Optional. For :ref:`text ` indexes, the language that determines the list of stop words and the rules for the stemmer and tokenizer. See :ref:`text-search-languages` for the available languages and @@ -486,9 +493,8 @@ Replica Sets and Sharded Clusters To start an index build with a non-default commit quorum, specify the :ref:`commitQuorum `. -MongoDB 4.4 adds the :dbcommand:`setIndexCommitQuorum` command for -modifying the commit quorum of an in-progress index build. - +Use the :dbcommand:`setIndexCommitQuorum` command to modify the commit quorum +of an in-progress index build. To minimize the impact of building an index on replica sets and sharded clusters, use a rolling index build procedure @@ -524,6 +530,8 @@ Concurrency .. include:: /includes/extracts/createIndexes-resource-lock.rst +.. include:: /includes/unreachable-node-default-quorum-index-builds.rst + Memory Usage Limit ~~~~~~~~~~~~~~~~~~ @@ -566,8 +574,6 @@ Collation Option Hidden Option `````````````` -.. 
versionadded:: 4.4 - To change the ``hidden`` option for existing indexes, you can use the following :binary:`~bin.mongosh` methods: @@ -613,8 +619,6 @@ To learn more, see: Transactions ~~~~~~~~~~~~ -.. versionchanged:: 4.4 - .. include:: /includes/extracts/transactions-explicit-ddl.rst .. |operation| replace:: :dbcommand:`createIndexes` diff --git a/source/reference/command/createRole.txt b/source/reference/command/createRole.txt index af431f7a64d..c0d7fbf8368 100644 --- a/source/reference/command/createRole.txt +++ b/source/reference/command/createRole.txt @@ -105,7 +105,6 @@ The :dbcommand:`createRole` command has the following fields: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - .. versionadded:: 4.4 .. |local-cmd-name| replace:: :dbcommand:`createRole` diff --git a/source/reference/command/createSearchIndexes.txt b/source/reference/command/createSearchIndexes.txt index 112152c5d74..93ab4721688 100644 --- a/source/reference/command/createSearchIndexes.txt +++ b/source/reference/command/createSearchIndexes.txt @@ -4,6 +4,9 @@ createSearchIndexes .. default-domain:: mongodb +.. meta:: + :keywords: atlas search + .. contents:: On this page :local: :backlinks: none @@ -17,7 +20,7 @@ Definition .. versionadded:: 7.0 (*Also available starting in 6.0.7*) -.. |fts-indexes| replace:: :atlas:`{+fts+} indexes ` +.. |fts-indexes| replace:: :atlas:`{+fts+} indexes ` or :atlas:`Vector Search indexes ` .. include:: /includes/atlas-search-commands/command-descriptions/createSearchIndexes-description.rst @@ -39,6 +42,7 @@ Command syntax: indexes: [ { name: "", + type: "", definition: { /* search index definition fields */ } @@ -82,11 +86,20 @@ The ``createSearchIndexes`` command takes the following fields: If you do not specify a ``name``, the index is named ``default``. + * - ``indexes.type`` + - string + - Optional + - .. include:: /includes/atlas-search-commands/field-definitions/type.rst + * - ``indexes.definition`` - document - Required - - Document describing the index to create. For details on - ``definition`` syntax, see :ref:`search-index-definition-create`. + - Document describing the index to create. The ``definition`` syntax + depends on whether you create a standard search index or a Vector + Search index. For the ``definition`` syntax, see: + + - :ref:`search-index-definition-create` + - :ref:`vector-search-index-definition-create` .. _search-index-definition-create: @@ -95,6 +108,13 @@ Search Index Definition Syntax .. include:: /includes/atlas-search-commands/search-index-definition-fields.rst +.. _vector-search-index-definition-create: + +Vector Search Index Definition Syntax +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +.. include:: /includes/atlas-search-commands/vector-search-index-definition-fields.rst + Behavior -------- @@ -241,3 +261,43 @@ or digits. ``searchIndex03`` uses a dynamic field mapping, meaning the index contains all fields in the collection that have :ref:`supported data types `. + +Create a Vector Search Index +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The following example creates a vector search index named +``vectorSearchIndex01`` on the ``movies`` collection: + +.. code-block:: javascript + + db.runCommand( { + createSearchIndexes: "movies", + indexes: [ + { + name: "vectorSearchIndex01", + type: "vectorSearch", + definition: { + fields: [ + { + type: "vector", + numDimensions: 1, + path: "genre", + similarity: "cosine" + } + ] + } + } + ] + } ) + +The vector search index contains one dimension and indexes the +``genre`` field. 
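As an illustrative follow-up, a hypothetical :pipeline:`$vectorSearch` query could use this index. The single-element ``queryVector`` is assumed here only so that it matches the one dimension defined above:

.. code-block:: javascript

   // Hypothetical query against the vectorSearchIndex01 index created
   // above. queryVector contains one element because the index defines
   // numDimensions: 1.
   db.movies.aggregate( [
      {
         $vectorSearch: {
            index: "vectorSearchIndex01",
            path: "genre",
            queryVector: [ 0.5 ],
            numCandidates: 10,
            limit: 5
         }
      }
   ] )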
+ +Learn More +---------- + +- :pipeline:`$vectorSearch` aggregation stage + +- :ref:`Tutorial: Semantic Search ` + +- :atlas:`Atlas Vector Search Changelog ` diff --git a/source/reference/command/createUser.txt b/source/reference/command/createUser.txt index ca88c7ebfbc..42933f33e7d 100644 --- a/source/reference/command/createUser.txt +++ b/source/reference/command/createUser.txt @@ -4,6 +4,9 @@ createUser .. default-domain:: mongodb +.. meta:: + :description: Create a new database user with defined database permissions. + .. contents:: On this page :local: :backlinks: none @@ -176,9 +179,6 @@ Command Fields - any - .. include:: /includes/extracts/comment-content.rst - .. versionadded:: 4.4 - - Roles ~~~~~ diff --git a/source/reference/command/currentOp.txt b/source/reference/command/currentOp.txt index 2268fa4799e..e21c8ab55eb 100644 --- a/source/reference/command/currentOp.txt +++ b/source/reference/command/currentOp.txt @@ -26,6 +26,18 @@ Definition .. include:: /includes/fact-currentOp-aggregation-stage.rst +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + + Syntax ------ @@ -81,9 +93,6 @@ it can accept several optional fields. - .. include:: /includes/extracts/comment-content.rst - .. versionadded:: 4.4 - - ``currentOp`` and the :ref:`database profiler` report the same basic diagnostic information for CRUD operations, including the following: @@ -964,8 +973,6 @@ Output Fields "dataThroughputLastSecond" : 15.576952934265137, "dataThroughputAverage" : 15.375944137573242, - .. versionadded:: 4.4 - .. data:: currentOp.dataThroughputAverage The average amount of data (in MiB) processed by the @@ -985,8 +992,6 @@ Output Fields "dataThroughputLastSecond" : 15.576952934265137, "dataThroughputAverage" : 15.375944137573242, - .. versionadded:: 4.4 - .. data:: currentOp.fsyncLock Specifies if database is currently locked for :method:`fsync diff --git a/source/reference/command/dataSize.txt b/source/reference/command/dataSize.txt index 5ab2ef025de..6df27db6a52 100644 --- a/source/reference/command/dataSize.txt +++ b/source/reference/command/dataSize.txt @@ -18,6 +18,18 @@ Definition The :dbcommand:`dataSize` command returns the size in bytes for the specified data. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-free.rst + +.. include:: /includes/fact-environments-onprem-only.rst + + Syntax ------ diff --git a/source/reference/command/dbHash.txt b/source/reference/command/dbHash.txt index 44258fc017e..bba5e136e98 100644 --- a/source/reference/command/dbHash.txt +++ b/source/reference/command/dbHash.txt @@ -25,6 +25,17 @@ Definition The :dbcommand:`dbHash` command obtains a shared (S) lock on the database, which prevents writes until the command completes. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-free-or-serverless.rst + +.. 
include:: /includes/fact-environments-onprem-only.rst + Syntax ------ diff --git a/source/reference/command/dbStats.txt b/source/reference/command/dbStats.txt index cef6159a63e..c2afb6dfefc 100644 --- a/source/reference/command/dbStats.txt +++ b/source/reference/command/dbStats.txt @@ -18,6 +18,18 @@ Definition The :dbcommand:`dbStats` command returns storage statistics for a given database. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-limited-free.rst + +.. include:: /includes/fact-environments-onprem-only.rst + + Syntax ------ @@ -209,8 +221,6 @@ Output This is the sum of :data:`~dbStats.storageSize` and :data:`~dbStats.indexSize`. - .. versionadded:: 4.4 - .. data:: dbStats.totalFreeStorageSize Sum of the free storage space allocated for both documents and diff --git a/source/reference/command/delete.txt b/source/reference/command/delete.txt index a8519914e1a..4a43c065044 100644 --- a/source/reference/command/delete.txt +++ b/source/reference/command/delete.txt @@ -4,6 +4,10 @@ delete .. default-domain:: mongodb +.. facet:: + :name: genre + :values: reference + .. contents:: On this page :local: :backlinks: none @@ -102,9 +106,7 @@ The command takes the following fields: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - - .. versionadded:: 4.4 - + * - :ref:`let ` - document - .. _delete-let-syntax: @@ -199,9 +201,6 @@ Each element of the ``deletes`` array contains the following fields: For an example, see :ref:`ex-delete-command-hint`. - .. versionadded:: 4.4 - - Behavior -------- @@ -370,8 +369,6 @@ option: Specify ``hint`` for Delete Operations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -.. versionadded:: 4.4 - In :binary:`~bin.mongosh`, create a ``members`` collection with the following documents: @@ -486,8 +483,12 @@ The returned document contains a subset of the following fields: .. data:: delete.writeConcernError - Document that describe error related to write concern and contains - the fields: + Document describing errors that relate to the write concern. + + .. |cmd| replace:: :dbcommand:`delete` + .. include:: /includes/fact-writeConcernError-mongos + + The ``writeConcernError`` documents contian the following fields: .. data:: delete.writeConcernError.code @@ -499,8 +500,6 @@ The returned document contains a subset of the following fields: .. data:: delete.writeConcernError.errInfo.writeConcern - .. versionadded:: 4.4 - .. include:: /includes/fact-errInfo-wc.rst .. data:: delete.writeConcernError.errInfo.writeConcern.provenance diff --git a/source/reference/command/distinct.txt b/source/reference/command/distinct.txt index 9afd5ef8d52..c3f7ec59bbf 100644 --- a/source/reference/command/distinct.txt +++ b/source/reference/command/distinct.txt @@ -96,8 +96,6 @@ The command takes the following fields: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - - .. versionadded:: 4.4 * - ``hint`` - string or document diff --git a/source/reference/command/driverOIDTest.txt b/source/reference/command/driverOIDTest.txt index a1a2be5ff23..aef86d436fe 100644 --- a/source/reference/command/driverOIDTest.txt +++ b/source/reference/command/driverOIDTest.txt @@ -15,3 +15,14 @@ driverOIDTest :dbcommand:`driverOIDTest` is an internal command. .. 
slave-ok + +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst diff --git a/source/reference/command/drop.txt b/source/reference/command/drop.txt index 2ec91d02650..ee5f22e0c2a 100644 --- a/source/reference/command/drop.txt +++ b/source/reference/command/drop.txt @@ -21,6 +21,18 @@ Definition .. |method| replace:: :method:`~db.collection.drop` helper method .. include:: /includes/fact-dbcommand-tip +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + + Syntax ------ @@ -62,9 +74,7 @@ The command takes the following fields: * - ``comment`` - .. include:: /includes/extracts/comment-content.rst - - .. versionadded:: 4.4 - + :binary:`~bin.mongosh` provides the equivalent helper method :method:`db.collection.drop()`. @@ -73,8 +83,7 @@ Behavior - Starting in MongoDB 5.0, the :dbcommand:`drop` command and the :method:`db.collection.drop()` method will raise an error if passed an - unrecognized parameter. In MongoDB 4.4 and earlier, unrecognized - parameters are silently ignored. + unrecognized parameter. - This command also removes any indexes associated with the dropped collection. diff --git a/source/reference/command/dropAllRolesFromDatabase.txt b/source/reference/command/dropAllRolesFromDatabase.txt index 0c627a00026..5acfdb82499 100644 --- a/source/reference/command/dropAllRolesFromDatabase.txt +++ b/source/reference/command/dropAllRolesFromDatabase.txt @@ -77,9 +77,7 @@ The command has the following fields: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - - .. versionadded:: 4.4 - + Required Access --------------- diff --git a/source/reference/command/dropAllUsersFromDatabase.txt b/source/reference/command/dropAllUsersFromDatabase.txt index 018e70027ba..8dbdcd2844a 100644 --- a/source/reference/command/dropAllUsersFromDatabase.txt +++ b/source/reference/command/dropAllUsersFromDatabase.txt @@ -75,9 +75,6 @@ The command takes the following fields: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - - .. versionadded:: 4.4 - Required Access --------------- diff --git a/source/reference/command/dropConnections.txt b/source/reference/command/dropConnections.txt index 7cb8f5cdd3e..fe69fb009ef 100644 --- a/source/reference/command/dropConnections.txt +++ b/source/reference/command/dropConnections.txt @@ -22,6 +22,17 @@ Definition connections to the specified hosts. The :dbcommand:`dropConnections` must be run against the ``admin`` database. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ @@ -58,9 +69,7 @@ The command requires the following field: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - - .. 
versionadded:: 4.4 - + Access Control -------------- diff --git a/source/reference/command/dropDatabase.txt b/source/reference/command/dropDatabase.txt index f5472c7ce3c..f79a9514a91 100644 --- a/source/reference/command/dropDatabase.txt +++ b/source/reference/command/dropDatabase.txt @@ -18,6 +18,18 @@ Definition The :dbcommand:`dropDatabase` command drops the current database, deleting the associated data files. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + + Syntax ------ @@ -58,7 +70,7 @@ The command takes the following optional fields: :writeconcern:`"majority"`. When issued on a replica set, if the specified write concern - results in fewer member acknowledgements than write concern + results in fewer member acknowledgments than write concern :writeconcern:`"majority"`, the operation uses :writeconcern:`"majority"`. Otherwise, the specified write concern is used. @@ -69,9 +81,7 @@ The command takes the following optional fields: * - ``comment`` - .. include:: /includes/extracts/comment-content.rst - - .. versionadded:: 4.4 - + :binary:`~bin.mongosh` also provides the helper method :method:`db.dropDatabase()`. @@ -105,18 +115,16 @@ Indexes Replica Set and Sharded Clusters ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -.. versionchanged:: 4.4 - Replica Sets At minimum, :dbcommand:`dropDatabase` waits until all collections drops in the database have propagated to a majority of the replica set members (i.e. uses the write concern :writeconcern:`"majority"`). - If you specify a write concern that requires acknowledgement from + If you specify a write concern that requires acknowledgment from fewer than the majority, the command uses write concern :writeconcern:`"majority"`. - If you specify a write concern that requires acknowledgement from + If you specify a write concern that requires acknowledgment from more than the majority, the command uses the specified write concern. Sharded Clusters diff --git a/source/reference/command/dropIndexes.txt b/source/reference/command/dropIndexes.txt index ca5bd62a59f..551fa51afbb 100644 --- a/source/reference/command/dropIndexes.txt +++ b/source/reference/command/dropIndexes.txt @@ -25,6 +25,17 @@ Definition :method:`db.collection.dropIndexes()` helper methods. .. include:: /includes/fact-dbcommand-tip +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ @@ -85,9 +96,7 @@ The command takes the following fields: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - - .. versionadded:: 4.4 - + Behavior -------- @@ -137,21 +146,7 @@ Stop In-Progress Index Builds Hidden Indexes ~~~~~~~~~~~~~~ -Starting in version 4.4, MongoDB adds the ability to hide or unhide -indexes from the query planner. By hiding an index from the planner, -users can evaluate the potential impact of dropping an index without -actually dropping the index. - -If after the evaluation, the user decides to drop the index, the user -can drop the hidden index; i.e. you do not need to unhide it first to -drop it. 
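For illustration only, a sketch of that workflow using a hypothetical ``rating_1`` index on an ``orders`` collection: hide the index, evaluate query performance, and then drop it without unhiding it first:

.. code-block:: javascript

   // Hide the index from the query planner to evaluate the impact of
   // removing it. The collection and index names are hypothetical.
   db.orders.hideIndex( "rating_1" )

   // If the evaluation confirms the index is not needed, drop the
   // still-hidden index directly.
   db.runCommand( { dropIndexes: "orders", index: "rating_1" } )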
- -If, however, the impact is negative, the user can unhide the index -instead of having to recreate a dropped index. And because indexes are -fully maintained while hidden, the indexes are immediately available -for use once unhidden. - -For more information on hidden indexes, see :doc:`/core/index-hidden`. +.. include:: /includes/fact-hidden-indexes.rst Examples -------- diff --git a/source/reference/command/dropRole.txt b/source/reference/command/dropRole.txt index ef5e3bef895..f0590a229eb 100644 --- a/source/reference/command/dropRole.txt +++ b/source/reference/command/dropRole.txt @@ -72,9 +72,7 @@ The command has the following fields: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - - .. versionadded:: 4.4 - + Behavior -------- diff --git a/source/reference/command/dropUser.txt b/source/reference/command/dropUser.txt index 38a8d8b6eae..4ed5d21c6f4 100644 --- a/source/reference/command/dropUser.txt +++ b/source/reference/command/dropUser.txt @@ -72,9 +72,7 @@ The command has the following fields: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - - .. versionadded:: 4.4 - + .. include:: /includes/check-before-dropping-useradmin.rst Required Access diff --git a/source/reference/command/explain.txt b/source/reference/command/explain.txt index 1925067308e..f72cf0d0a96 100644 --- a/source/reference/command/explain.txt +++ b/source/reference/command/explain.txt @@ -26,6 +26,19 @@ Definition .. include:: /includes/fact-dbcommand-tip + .. include:: includes/explain-ignores-cache-plan.rst + +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ @@ -137,6 +150,20 @@ For write operations, the :dbcommand:`explain` command returns information about the write operation that would be performed but does not actually modify the database. +Stable API +~~~~~~~~~~ + +The :ref:`Stable API ` V1 supports the following +verbosity modes for the ``explain`` command: + +- :ref:`allPlansExecution ` +- :ref:`executionStats ` +- :ref:`queryPlanner` + +.. warning:: + + .. include:: /includes/fact-stable-api-explain.rst + Restrictions ~~~~~~~~~~~~ @@ -174,8 +201,8 @@ verbosity mode to return the query planning information for a .. _ex-executionStats: -``executionStats`` Mode -~~~~~~~~~~~~~~~~~~~~~~~~ +``executionStats`` Mode +~~~~~~~~~~~~~~~~~~~~~~~ The following :dbcommand:`explain` operation runs in ``"executionStats"`` verbosity mode to return the query planning and execution information diff --git a/source/reference/command/features.txt b/source/reference/command/features.txt index a566c6deaf8..0eb2160ef47 100644 --- a/source/reference/command/features.txt +++ b/source/reference/command/features.txt @@ -16,3 +16,14 @@ features feature settings. .. slave-ok + +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. 
include:: /includes/fact-environments-onprem-only.rst diff --git a/source/reference/command/filemd5.txt b/source/reference/command/filemd5.txt index e9f7c80f86e..c6b944ef07c 100644 --- a/source/reference/command/filemd5.txt +++ b/source/reference/command/filemd5.txt @@ -21,6 +21,17 @@ Definition The command takes the ``files_id`` of the file in question and the name of the GridFS root collection as arguments. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ diff --git a/source/reference/command/find.txt b/source/reference/command/find.txt index 86a5289cb73..6c2d666d9b1 100644 --- a/source/reference/command/find.txt +++ b/source/reference/command/find.txt @@ -36,14 +36,6 @@ This command is available in deployments hosted in the following environments: Syntax ------ -.. versionchanged:: 4.4 - - MongoDB deprecates the ``oplogReplay`` option to the :dbcommand:`find` - command. The optimization enabled by this flag in previous versions now - happens automatically for eligible queries on the oplog. Therefore, - you don't need to specify this flag. If specified, the server accepts - the flag for backwards compatibility, but the flag has no effect. - The :dbcommand:`find` command has the following syntax: .. versionchanged:: 5.0 @@ -149,7 +141,9 @@ The command accepts the following fields: Unlike the previous wire protocol version, a batchSize of 1 for the :dbcommand:`find` command does not close the cursor. - * - ``singleBatch`` + * - .. _find-single-batch: + + ``singleBatch`` - boolean - Optional. Determines whether to close the cursor after the first batch. Defaults to false. @@ -158,13 +152,9 @@ The command accepts the following fields: - any - .. include:: /includes/extracts/comment-content.rst - .. note:: - - Any comment set on a :dbcommand:`find` command is inherited - by any subsequent :dbcommand:`getMore` commands run on the - ``find`` cursor. + .. |comment-include-command| replace:: ``find`` - *Changed in version 4.4.* Prior to 4.4, comments could only be strings. + .. include:: /includes/comment-option-getMore-inheritance.rst * - ``maxTimeMS`` - non-negative integer @@ -224,50 +214,14 @@ The command accepts the following fields: :dbcommand:`getMore` command on the cursor temporarily if at the end of data rather than returning no data. After a timeout period, :dbcommand:`find` returns as normal. - - * - ``oplogReplay`` - - boolean - - .. deprecated:: 4.4 - - Optional. An internal command for replaying a :ref:`replica set's oplog - `. - - To use ``oplogReplay``, the ``find`` field must provide a ``filter`` - option comparing the ``ts`` document field to a - :bsontype:`timestamp ` using one of the following - comparison operators: - - * :expression:`$gte` - * :expression:`$gt` - * :expression:`$eq` - - For example, the following command replays documents from the ``data`` - :doc:`capped collection ` with a timestamp - later than or equal to January 1st, 2018 UTC: - - .. code-block:: javascript - - { find: "data", - oplogReplay: true, - filter: { ts: { $gte: new Timestamp(1514764800, 0) } } } - - .. note:: Deprecated - - - .. versionchanged:: 4.4 - - Starting in MongoDB 4.4, the ``oplogReplay`` field is deprecated. 
- ``find`` fields that use the :expression:`$gte`, :expression:`$gt`, - or :expression:`$eq` ``filter`` predicated on the ``ts`` field - will automatically utilize the storage format of the :ref:`replica - set's oplog ` to execute the command more - efficiently. If specified, the server accepts the ``oplogReplay`` - flag for backwards compatibility, but the flag has no effect. * - ``noCursorTimeout`` - boolean - - Optional. Prevents the server from timing out idle cursors after an inactivity - period (10 minutes). + - Optional. Prevents the server from timing out non-session idle cursors + after an inactivity period of 30 minutes. Ignored for cursors that are + part of a session. For more information, refer to + :ref:`Session Idle Timeout `. + * - :ref:`allowPartialResults ` - boolean @@ -305,8 +259,6 @@ The command accepts the following fields: For more information on memory restrictions for large blocking sorts, see :ref:`sort-index-use`. - .. versionadded:: 4.4 - * - :ref:`let ` - document - .. _find-let-syntax: @@ -348,7 +300,7 @@ collection: "x" : 1 } ], - "partialResultsReturned" : true, // Starting in version 4.4 + "partialResultsReturned" : true, "id" : NumberLong("668860441858272439"), "ns" : "test.contacts" }, @@ -375,11 +327,10 @@ collection: - Contains the cursor information, including the cursor ``id`` and the ``firstBatch`` of documents. - Starting in 4.4, if the operation against a sharded collection - returns partial results due to the unavailability of the queried - shard(s), the ``cursor`` document includes a - ``partialResultsReturned`` field. To return partial results, - rather than error, due to the unavailability of the queried + If the operation against a sharded collection returns partial results + due to the unavailability of the queried shard(s), the ``cursor`` + document includes a ``partialResultsReturned`` field. To return partial + results, rather than error, due to the unavailability of the queried shard(s), the :dbcommand:`find` command must run with :ref:`allowPartialResults ` set to ``true``. See :ref:`allowPartialResults @@ -426,6 +377,8 @@ For cursors created inside a session, you cannot call Similarly, for cursors created outside of a session, you cannot call :dbcommand:`getMore` inside a session. +.. _session-idle-timeout: + Session Idle Timeout ```````````````````` @@ -473,6 +426,11 @@ Index Filters and Collations .. include:: /includes/index-filters-and-collations.rst +Find Cursor Behavior on Views +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +.. include:: /includes/fact-7.3-singlebatch-cursor.rst + Examples -------- diff --git a/source/reference/command/findAndModify.txt b/source/reference/command/findAndModify.txt index bb70a624760..2ba596fcebb 100644 --- a/source/reference/command/findAndModify.txt +++ b/source/reference/command/findAndModify.txt @@ -21,7 +21,7 @@ Definition .. dbcommand:: findAndModify - The :dbcommand:`findAndModify` command modifies and returns a single + The :dbcommand:`findAndModify` command updates and returns a single document. By default, the returned document does not include the modifications made on the update. To return the document with the modifications made on the update, use the ``new`` option. @@ -92,7 +92,7 @@ The command takes the following fields: employs the same :ref:`query selectors ` as used in the :method:`db.collection.find()` method. Although the query may match multiple documents, |operation| - **will only select one document to modify**. + **will only select one document to update**. 
If unspecified, defaults to an empty document. @@ -104,9 +104,9 @@ The command takes the following fields: ``sort`` - document - - Optional. Determines which document the operation modifies if the query selects - multiple documents. |operation| modifies - the first document in the sort order specified by this argument. + - Optional. Determines which document the operation updates if the query + selects multiple documents. |operation| updates the first document in the + sort order specified by this argument. Starting in MongoDB 4.2 (and 4.0.12+, 3.6.14+, and 3.4.23+), the operation errors if the sort argument is not a document. @@ -135,14 +135,14 @@ The command takes the following fields: - Starting in MongoDB 4.2, if passed an :ref:`aggregation pipeline ` ``[ , , ... ]``, - |operation| modifies the document per the pipeline. The pipeline + |operation| updates the document per the pipeline. The pipeline can consist of the following stages: .. include:: /includes/list-update-agg-stages.rst * - ``new`` - boolean - - Optional. When ``true``, returns the modified document rather than the original. + - Optional. When ``true``, returns the updated document rather than the original. The default is ``false``. * - ``fields`` @@ -216,14 +216,10 @@ The command takes the following fields: For an example, see :ref:`ex-findAndModify-hint`. - .. versionadded:: 4.4 - * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - - .. versionadded:: 4.4 - + * - :ref:`let ` - document - .. _findAndModify-let-syntax: @@ -288,13 +284,19 @@ The ``lastErrorObject`` embedded document contains the following fields: - Description + * - ``n`` + - integer + - Contains the number of documents that matched the update + predicate or the number of documents that the command inserted or + deleted. + * - ``updatedExisting`` - boolean - Contains ``true`` if an ``update`` operation: - - Modified an existing document. + - Updated an existing document. - Found the document, but it was already in the desired destination state so no update actually occurred. @@ -346,6 +348,9 @@ To use :dbcommand:`findAndModify` on a sharded collection: - You can provide an equality condition on a full shard key in the ``query`` field. +- Starting in version 7.1, you do not need to provide the :term:`shard key` + or ``_id`` field in the query specification. + .. include:: /includes/extracts/missing-shard-key-equality-condition-findAndModify.rst Shard Key Modification @@ -355,7 +360,7 @@ Shard Key Modification .. include:: /includes/shard-key-modification-warning.rst -To modify the **existing** shard key value with +To update the **existing** shard key value with :dbcommand:`findAndModify`: - You :red:`must` run on a :binary:`~bin.mongos`. Do :red:`not` @@ -372,7 +377,7 @@ To modify the **existing** shard key value with Missing Shard Key ````````````````` -Starting in version 4.4, documents in a sharded collection can be +Documents in a sharded collection can be :ref:`missing the shard key fields `. To use :dbcommand:`findAndModify` to set the document's **missing** shard key: @@ -499,7 +504,7 @@ This command performs the following actions: "ok" : 1 } -To return the modified document in the ``value`` field, add the +To return the updated document in the ``value`` field, add the ``new:true`` option to the command. 
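For example, a minimal sketch with hypothetical collection and field names; because ``new`` is ``true``, the command returns the document as it appears after the update:

.. code-block:: javascript

   // Hypothetical collection and fields. With new: true, the value
   // field in the response contains the post-update document.
   db.runCommand( {
      findAndModify: "people",
      query: { state: "active" },
      sort: { rating: 1 },
      update: { $inc: { score: 1 } },
      new: true
   } )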
If no document match the ``query`` condition, the command @@ -525,7 +530,7 @@ following form: However, the :method:`~db.collection.findAndModify()` shell helper method returns only the unmodified document, or if ``new`` is -``true``, the modified document. +``true``, the updated document. .. code-block:: javascript @@ -745,7 +750,7 @@ Create a collection ``students`` with the following documents: { "_id" : 3, "grades" : [ 95, 110, 100 ] } ] ) -To modify all elements that are greater than or equal to ``100`` in the +To update all elements that are greater than or equal to ``100`` in the ``grades`` array, use the positional :update:`$[\]` operator with the ``arrayFilters`` option: @@ -802,7 +807,7 @@ Create a collection ``students2`` with the following documents: The following operation finds a document where the ``_id`` field equals ``1`` and uses the filtered positional operator :update:`$[\]` with -the ``arrayFilters`` to modify the ``mean`` for all elements in the +the ``arrayFilters`` to update the ``mean`` for all elements in the ``grades`` array where the grade is greater than or equal to ``85``. .. code-block:: javascript @@ -918,8 +923,6 @@ After the operation, the collection has the following documents: Specify ``hint`` for ``findAndModify`` Operations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -.. versionadded:: 4.4 - In :binary:`~bin.mongosh`, create a ``members`` collection with the following documents: diff --git a/source/reference/command/flushRouterConfig.txt b/source/reference/command/flushRouterConfig.txt index 12b2f99d209..c84732778a1 100644 --- a/source/reference/command/flushRouterConfig.txt +++ b/source/reference/command/flushRouterConfig.txt @@ -25,11 +25,10 @@ Definition .. note:: - **Starting in MongoDB 4.4,** running :dbcommand:`flushRouterConfig` - is no longer required after executing the :dbcommand:`movePrimary` or - :dbcommand:`dropDatabase` commands. These two commands now - automatically refresh a sharded cluster's routing table as needed - when run. + Running :dbcommand:`flushRouterConfig` is no longer required after executing + the :dbcommand:`movePrimary` or :dbcommand:`dropDatabase` commands. These + two commands now automatically refresh a sharded cluster's routing table as + needed when run. Compatibility ------------- diff --git a/source/reference/command/fsync.txt b/source/reference/command/fsync.txt index 0c1c997ef8a..2d701800944 100644 --- a/source/reference/command/fsync.txt +++ b/source/reference/command/fsync.txt @@ -4,6 +4,10 @@ fsync .. default-domain:: mongodb +.. facet:: + :name: genre + :values: reference + .. contents:: On this page :local: :backlinks: none @@ -13,27 +17,26 @@ fsync .. meta:: :description: fsync, fsynclock, fsync lock, lock :keywords: fsync, fsynclock, fsync lock, lock - + Definition ---------- .. dbcommand:: fsync - Flushes all pending writes from the storage layer to disk. When the ``lock`` + Flushes all pending writes from the storage layer to disk. When the ``lock`` field is set to ``true``, it sets a lock on the server or cluster to prevent additional writes until the lock is released. - - - .. versionadded:: 7.1 - - When the ``fsync`` command runs on :program:`mongos`, it performs the - fsync operation on each shard in the cluster. + .. |fsyncLockUnlock| replace:: the ``fsync`` and + :dbcommand:`fsyncUnlock` commands + .. 
include:: /includes/fsync-mongos As applications write data, MongoDB records the data in the storage layer - and then writes the data to disk within the - :setting:`~storage.syncPeriodSecs` interval, which is 60 seconds by default. - Run ``fsync`` when you want to flush writes to disk ahead of that interval. + and then writes the data to disk. + + Run ``fsync`` when you want to flush writes to disk. + + .. include:: /includes/checkpoints.rst .. include:: /includes/fsync-lock-command @@ -42,8 +45,19 @@ Definition .. |method| replace:: :method:`db.fsyncLock` helper method .. include:: /includes/fact-dbcommand-tip - - + + +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-free-or-serverless.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ @@ -52,11 +66,11 @@ The command has the following syntax: .. code-block:: javascript db.adminCommand( - { - fsync: 1, - lock: , + { + fsync: 1, + lock: , fsyncLockAcquisitionTimeout: , - comment: + comment: } ) @@ -67,20 +81,20 @@ The command has the following fields: .. list-table:: :header-rows: 1 - :widths: 20 20 60 - + :widths: 20 20 60 + * - Field - Type - Description - + * - ``fsync`` - integer - Enter "1" to apply :dbcommand:`fsync`. - * - ``fsyncLockAcquisitionTimeoutMillis`` + * - ``fsyncLockAcquisitionTimeoutMillis`` - integer - Optional. Specifies the amount of time in milliseconds to wait to - acquire locks. If the lock acquisition operation times out, the + acquire locks. If the lock acquisition operation times out, the command returns a failed response. Default: ``90000`` @@ -92,13 +106,11 @@ The command has the following fields: - Optional. Takes a lock on the server or cluster and blocks all write operations. Each ``fsync`` with ``lock`` operation takes a lock. - + * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - - .. versionadded:: 4.4 - + Considerations -------------- @@ -110,8 +122,8 @@ Impact on Larger Deployments .. versionadded:: 7.1 When the ``fsync`` command runs on :program:`mongos`, it performs the fsync -operation on the entire cluster. By setting the ``lock`` field to ``true``, -it sets a lock on the cluster, preventing additional writes. +operation on the entire cluster. By setting the ``lock`` field to ``true``, +it sets a lock on the cluster, preventing additional writes. To take a usable self-managed backup, before locking a sharded cluster: @@ -132,7 +144,7 @@ Lock Count The ``fsync`` command returns a document includes a ``lockCount`` field. When run on :program:`mongod`, the count indicates the number of fsync locks set on -the server. +the server. When run on a sharded cluster, :program:`mongos` sends the fsync operation to each shard and returns the results, which includes the ``lockCount`` for each. @@ -140,9 +152,9 @@ each shard and returns the results, which includes the ``lockCount`` for each. .. note:: - If the ``lockCount`` field is non-zero, all writes are blocked on the server - and cluster. To reduce the lock count, use the :dbcommand:`fsyncUnlock` - command. + If the ``lockCount`` field is greater than zero, all writes + are blocked on the server and cluster. To reduce the lock + count, use the :dbcommand:`fsyncUnlock` command. Fsync Locks after Failures ~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -152,7 +164,7 @@ Fsync locks execute on the primary in a replica set or sharded cluster. 
If the primary goes down or becomes unreachable due to network issues, the cluster :ref:`elects ` a new primary from the available secondaries. If a primary with an fsync lock goes down, the new primary does -**not** retain the fsync lock and can handle write operations. When elections +**not** retain the fsync lock and can handle write operations. When elections occur during backup operations, the resulting backup may be inconsistent or unusable. @@ -167,7 +179,7 @@ To recover from the primary going down: #. Restart the backup. -Additionally, fsync locks are persistent. When the old primary comes online +Additionally, fsync locks are persistent. When the old primary comes online again, you need to use the :dbcommand:`fsyncUnlock` command to release the lock on the node. @@ -183,7 +195,7 @@ Fsync Lock .. include:: /includes/extracts/wt-fsync-lock-compatibility-command.rst The ``fsync`` command can lock an individual :program:`mongod` instance or a -sharded cluster through :program:`mongos`. When run with the ``lock`` field +sharded cluster through :program:`mongos`. When run with the ``lock`` field set to ``true``, the fsync operation flushes all data to the storage layer and blocks all additional write operations until you unlock the instance or cluster. @@ -207,7 +219,7 @@ operation and the ``lockCount``: "ok" : 1 } -When locked, write operations are blocked. Separate connections may continue +When locked, write operations are blocked. Separate connections may continue read operations until the first attempt at a write operation, then they also wait until the sever or cluster is unlocked. @@ -221,7 +233,7 @@ wait until the sever or cluster is unlocked. lock, you must issue a corresponding number of unlock operations to unlock the server or cluster for writes. -Fsync Unlock +Fsync Unlock ~~~~~~~~~~~~ To unlock a server of cluster, use the :dbcommand:`fsyncUnlock` command: diff --git a/source/reference/command/fsyncUnlock.txt b/source/reference/command/fsyncUnlock.txt index f172bcd6be3..ceaf522ab57 100644 --- a/source/reference/command/fsyncUnlock.txt +++ b/source/reference/command/fsyncUnlock.txt @@ -4,6 +4,10 @@ fsyncUnlock .. default-domain:: mongodb +.. facet:: + :name: genre + :values: reference + .. contents:: On this page :local: :backlinks: none @@ -19,14 +23,12 @@ Definition .. dbcommand:: fsyncUnlock - Reduces the lock count on the server or cluster. To enable write operations, + Reduces the lock count on the server or cluster. To enable write operations, the lock count must be zero. - - - .. versionadded:: 7.1 - When the ``fsyncUnlock`` command runs on :program:`mongos`, it - reduces the lock count for each shard in the cluster. + .. |fsyncLockUnlock| replace:: the :dbcommand:`fsync` and + ``fsyncUnlock`` commands + .. include:: /includes/fsync-mongos Use this command to unblock writes after you finish a backup operation. @@ -38,7 +40,18 @@ Definition .. |method| replace:: :method:`db.fsyncUnlock` helper method .. include:: /includes/fact-dbcommand-tip - + +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-free.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ @@ -46,11 +59,11 @@ The command has the following syntax: .. 
code-block:: javascript - db.adminCommand( - { - fsyncUnlock: 1, - comment: - } + db.adminCommand( + { + fsyncUnlock: 1, + comment: + } ) The ``comment`` field is optional and may contain a comment of any data @@ -64,19 +77,19 @@ The operation returns a document with the following fields: .. list-table:: :header-rows: 1 :widths: 30 70 - + * - Field - Description - + * - ``info`` - Information on the status of the operation - + * - ``lockCount`` (*New in version 3.4*) - The number of locks remaining on the instance after the operation. - + * - ``ok`` - The status code. - + Examples -------- diff --git a/source/reference/command/geoSearch.txt b/source/reference/command/geoSearch.txt index 941977197c4..89d767919db 100644 --- a/source/reference/command/geoSearch.txt +++ b/source/reference/command/geoSearch.txt @@ -94,9 +94,6 @@ geoSearch - any - .. include:: /includes/extracts/comment-content.rst - - .. versionadded:: 4.4 - Compatibility ------------- diff --git a/source/reference/command/getAuditConfig.txt b/source/reference/command/getAuditConfig.txt index 056f2deeb0c..9153b411870 100644 --- a/source/reference/command/getAuditConfig.txt +++ b/source/reference/command/getAuditConfig.txt @@ -124,5 +124,5 @@ has a filter which captures the desired operations and the .. seealso:: :method:`db.adminCommand`, :dbcommand:`setAuditConfig`, - :doc:`configure audit filters` + :ref:`configure audit filters ` diff --git a/source/reference/command/getClusterParameter.txt b/source/reference/command/getClusterParameter.txt index 23b9fa1da72..ef86b220cd8 100644 --- a/source/reference/command/getClusterParameter.txt +++ b/source/reference/command/getClusterParameter.txt @@ -24,6 +24,17 @@ Definition .. include:: /includes/fact-getClusterParameter-availability.rst +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ diff --git a/source/reference/command/getCmdLineOpts.txt b/source/reference/command/getCmdLineOpts.txt index 2af853f1a6e..0dd5620f32b 100644 --- a/source/reference/command/getCmdLineOpts.txt +++ b/source/reference/command/getCmdLineOpts.txt @@ -20,6 +20,17 @@ Definition :binary:`~bin.mongod` or :binary:`~bin.mongos`. Run :dbcommand:`getCmdLineOpts` in the ``admin`` database. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ diff --git a/source/reference/command/getDefaultRWConcern.txt b/source/reference/command/getDefaultRWConcern.txt index baa16680e53..62429490429 100644 --- a/source/reference/command/getDefaultRWConcern.txt +++ b/source/reference/command/getDefaultRWConcern.txt @@ -13,8 +13,6 @@ getDefaultRWConcern Definition ---------- -.. versionadded:: 4.4 - .. dbcommand:: getDefaultRWConcern The :dbcommand:`getDefaultRWConcern` administrative command retrieves @@ -23,6 +21,17 @@ Definition - For sharded clusters, issue the :dbcommand:`getDefaultRWConcern` on a :binary:`~bin.mongos`. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. 
include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ @@ -75,13 +84,11 @@ The command has the following fields: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - - .. versionadded:: 4.4 - + Output ------ -The output includes the following fields: +The output may include the following fields: .. list-table:: :header-rows: 1 @@ -96,16 +103,20 @@ The output includes the following fields: - .. _getDefaultRWConcern-cmd-defaultWriteConcern: The global default :ref:`write concern ` - configuration. If this field is absent, the deployment has no - global default write concern settings. + configuration. + + If the deployment has no global default write concern settings, + this field is absent from ``getDefaultRWConcern`` output. * - :ref:`defaultReadConcern ` - ``object`` - .. _getDefaultRWConcern-cmd-defaultReadConcern: The global default :ref:`read concern ` - configuration. If this field is absent, the deployment has no - global default read concern settings. + configuration. + + If the deployment has no global default read concern settings, + this field is absent from ``getDefaultRWConcern`` output. * - :ref:`defaultWriteConcernSource ` - String diff --git a/source/reference/command/getLog.txt b/source/reference/command/getLog.txt index 7823c0c3b84..297abd9e475 100644 --- a/source/reference/command/getLog.txt +++ b/source/reference/command/getLog.txt @@ -21,10 +21,20 @@ Definition from a RAM cache of logged :binary:`~bin.mongod` events. To run :dbcommand:`getLog`, use the :method:`db.adminCommand()` method. - Starting in MongoDB 4.4, :dbcommand:`getLog` returns log data in - escaped :doc:`Relaxed Extended JSON v2.0 - ` format. Previously, log data - was returned as plaintext. + :dbcommand:`getLog` returns log data in escaped + :ref:`Relaxed Extended JSON v2.0 ` format. + Previously, log data was returned as plaintext. + +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst Syntax ------ @@ -66,10 +76,9 @@ The possible values for :dbcommand:`getLog` are: .. note:: - Starting in MongoDB 4.4, the :dbcommand:`getLog` command no longer - accepts the ``rs`` value, as this categorization of message type - has been deprecated. Instead, log messages are now always - identified by their :ref:`component ` - + The :dbcommand:`getLog` command no longer accepts the ``rs`` value, as this + categorization of message type has been deprecated. Instead, log messages are + now always identified by their :ref:`component ` - including *REPL* for replication messages. See :ref:`log-message-parsing-example-filter-component` for log parsing examples that filter on the component field. @@ -106,10 +115,9 @@ that contains more than 1024 characters. In earlier versions, Character Escaping ~~~~~~~~~~~~~~~~~~ -Starting in MongoDB 4.4, :dbcommand:`getLog` returns log data in -escaped :doc:`Relaxed Extended JSON v2.0 -` format, using the following -escape sequences to render log output as valid JSON: +:dbcommand:`getLog` returns log data in escaped :ref:`Relaxed Extended JSON v2.0 +` format, using the following escape sequences +to render log output as valid JSON: .. 
include:: /includes/fact-json-escape-sequences.rst diff --git a/source/reference/command/getParameter.txt b/source/reference/command/getParameter.txt index c84f42f2375..70a86a456af 100644 --- a/source/reference/command/getParameter.txt +++ b/source/reference/command/getParameter.txt @@ -19,7 +19,18 @@ Definition retrieving the values of parameters. Use the :method:`db.adminCommand( { command } )` method to run the :dbcommand:`getParameter` command in the ``admin`` database. - + +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-limited-free.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ @@ -82,10 +93,7 @@ The command takes the following fields: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - - .. versionadded:: 4.4 - - + Behavior -------- diff --git a/source/reference/command/grantPrivilegesToRole.txt b/source/reference/command/grantPrivilegesToRole.txt index 267f1b3547a..4c7669a196e 100644 --- a/source/reference/command/grantPrivilegesToRole.txt +++ b/source/reference/command/grantPrivilegesToRole.txt @@ -80,8 +80,6 @@ The command has the following fields: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - .. versionadded:: 4.4 - Behavior -------- diff --git a/source/reference/command/grantRolesToRole.txt b/source/reference/command/grantRolesToRole.txt index ab7e027b1dd..bef1810a035 100644 --- a/source/reference/command/grantRolesToRole.txt +++ b/source/reference/command/grantRolesToRole.txt @@ -64,9 +64,7 @@ The command has the following fields: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - .. versionadded:: 4.4 - - + .. |local-cmd-name| replace:: :dbcommand:`grantRolesToRole` .. include:: /includes/fact-roles-array-contents.rst diff --git a/source/reference/command/grantRolesToUser.txt b/source/reference/command/grantRolesToUser.txt index f1cc3aadb46..9778f73ca59 100644 --- a/source/reference/command/grantRolesToUser.txt +++ b/source/reference/command/grantRolesToUser.txt @@ -75,7 +75,6 @@ The command takes the following fields: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - .. versionadded:: 4.4 .. |local-cmd-name| replace:: :dbcommand:`grantRolesToUser` .. include:: /includes/fact-roles-array-contents.rst diff --git a/source/reference/command/hello.txt b/source/reference/command/hello.txt index fed02ec34e7..dd9a2ce7ce5 100644 --- a/source/reference/command/hello.txt +++ b/source/reference/command/hello.txt @@ -18,7 +18,7 @@ Definition .. dbcommand:: hello - .. versionadded:: 5.0 (and 4.4.2, 4.2.10, 4.0.21, and 3.6.21) + .. versionadded:: 5.0 :dbcommand:`hello` returns a document that describes the role of the :binary:`~bin.mongod` instance. If the optional field @@ -40,6 +40,17 @@ Definition members and to discover additional members of a :term:`replica set`. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. 
include:: /includes/fact-environments-onprem-only.rst + Syntax ------ diff --git a/source/reference/command/hostInfo.txt b/source/reference/command/hostInfo.txt index fdf9e698620..e8fe14afd75 100644 --- a/source/reference/command/hostInfo.txt +++ b/source/reference/command/hostInfo.txt @@ -23,6 +23,17 @@ Definition You must run the :dbcommand:`hostInfo` command, which takes no arguments, against the ``admin`` database. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-free.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ diff --git a/source/reference/command/insert.txt b/source/reference/command/insert.txt index 06529be9a14..fe394c5569a 100644 --- a/source/reference/command/insert.txt +++ b/source/reference/command/insert.txt @@ -4,6 +4,10 @@ insert .. default-domain:: mongodb +.. facet:: + :name: genre + :values: reference + .. contents:: On this page :local: :backlinks: none @@ -109,9 +113,7 @@ The command takes the following fields: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - - .. versionadded:: 4.4 - + Behavior -------- @@ -327,8 +329,12 @@ The returned document contains a subset of the following fields: .. data:: insert.writeConcernError - Document that describe error related to write concern and contains - the field: + Document describing errors that relate to the write concern. + + .. |cmd| replace:: :dbcommand:`insert` + .. include:: /includes/fact-writeConcernError-mongos + + The ``writeConcernError`` documents contain the following fields: .. data:: insert.writeConcernError.code @@ -340,8 +346,6 @@ The returned document contains a subset of the following fields: .. data:: insert.writeConcernError.errInfo.writeConcern - .. versionadded:: 4.4 - .. include:: /includes/fact-errInfo-wc.rst .. data:: insert.writeConcernError.errInfo.writeConcern.provenance diff --git a/source/reference/command/isSelf.txt b/source/reference/command/isSelf.txt index 3e90cfe5689..ab5c5de69a2 100644 --- a/source/reference/command/isSelf.txt +++ b/source/reference/command/isSelf.txt @@ -15,3 +15,14 @@ isSelf :dbcommand:`_isSelf` is an internal command. .. slave-ok + +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst diff --git a/source/reference/command/killCursors.txt b/source/reference/command/killCursors.txt index c9e604e4357..6bdf2e69002 100644 --- a/source/reference/command/killCursors.txt +++ b/source/reference/command/killCursors.txt @@ -31,6 +31,17 @@ Definition .. include:: /includes/fact-dbcommand.rst +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ @@ -69,9 +80,6 @@ The command takes the following fields: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - - .. 
versionadded:: 4.4 - Required Access --------------- diff --git a/source/reference/command/killOp.txt b/source/reference/command/killOp.txt index 3ba62f032df..e6e5010c224 100644 --- a/source/reference/command/killOp.txt +++ b/source/reference/command/killOp.txt @@ -27,6 +27,17 @@ Definition .. include:: /includes/fact-dbcommand.rst +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-limited-free-and-m10.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ @@ -60,9 +71,6 @@ Command Fields * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - - .. versionadded:: 4.4 - .. include:: /includes/extracts/warning-terminating-ops-command.rst diff --git a/source/reference/command/listCollections.txt b/source/reference/command/listCollections.txt index f8c3ae9e0cd..e1187bf41bd 100644 --- a/source/reference/command/listCollections.txt +++ b/source/reference/command/listCollections.txt @@ -25,6 +25,17 @@ Definition :binary:`~bin.mongosh` provides the :method:`db.getCollectionInfos()` and the :method:`db.getCollectionNames()` helper methods. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ @@ -109,8 +120,10 @@ The command can take the following optional fields: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - - .. versionadded:: 4.4 + + .. |comment-include-command| replace:: ``listCollections`` + + .. include:: /includes/comment-option-getMore-inheritance.rst .. _listCollections-behavior: @@ -353,4 +366,3 @@ For collection information: - :method:`db.getCollectionInfos()` - :ref:`mongosh built-in commands ` - diff --git a/source/reference/command/listCommands.txt b/source/reference/command/listCommands.txt index 2fd6166f400..31fed0d2437 100644 --- a/source/reference/command/listCommands.txt +++ b/source/reference/command/listCommands.txt @@ -19,6 +19,17 @@ Definition database commands implemented for the current :binary:`~bin.mongod` or :binary:`~bin.mongos` instance. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ diff --git a/source/reference/command/listDatabases.txt b/source/reference/command/listDatabases.txt index e72fe7d62dc..5adb82ac2a8 100644 --- a/source/reference/command/listDatabases.txt +++ b/source/reference/command/listDatabases.txt @@ -20,6 +20,17 @@ Definition :dbcommand:`listDatabases` must run against the ``admin`` database, as in the following example: +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ @@ -61,6 +72,12 @@ The command can take the following optional fields: - ``empty`` - ``shards`` + + .. 
note:: + + The ``filter`` option isn't supported on Atlas :atlas:`Free + and Shared tier ` clusters and + :atlas:`Serverless Instances `. * - ``nameOnly`` - boolean @@ -85,8 +102,6 @@ The command can take the following optional fields: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - - .. versionadded:: 4.4 .. _listDatabases-behavior: diff --git a/source/reference/command/listIndexes.txt b/source/reference/command/listIndexes.txt index 84707a9b74e..a28d4a8a933 100644 --- a/source/reference/command/listIndexes.txt +++ b/source/reference/command/listIndexes.txt @@ -22,7 +22,18 @@ Definition .. |method| replace:: :method:`db.collection.getIndexes()` helper method .. include:: /includes/fact-dbcommand-tip - + +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ @@ -62,7 +73,10 @@ The command takes the following fields: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - .. versionadded:: 4.4 + + .. |comment-include-command| replace:: ``listIndexes`` + + .. include:: /includes/comment-option-getMore-inheritance.rst Required Access --------------- @@ -74,6 +88,13 @@ database. Behavior -------- +Atlas Search Indexes +~~~~~~~~~~~~~~~~~~~~ + +``listIndexes`` does not return information on :atlas:`{+fts+} indexes +`. Instead, use +:pipeline:`$listSearchIndexes`. + .. |operation| replace:: :dbcommand:`listIndexes` .. |operations| replace:: :dbcommand:`listIndexes` @@ -92,12 +113,6 @@ Wildcard Indexes .. include:: /includes/indexes/fact-wildcard-index-ordering.rst -Atlas Search Indexes -~~~~~~~~~~~~~~~~~~~~ - -``listIndexes`` does not return information on :atlas:`{+fts+} indexes -`. - Output ------ @@ -128,8 +143,7 @@ Output * - firstBatch - document - Index information includes the keys and options used to create the - index. The index option hidden, available starting in MongoDB 4.4, - is only present if the value is true. + index. The index option hidden is only present if the value is true. Use :dbcommand:`getMore` to retrieve additional results as needed. diff --git a/source/reference/command/lockInfo.txt b/source/reference/command/lockInfo.txt index 7d213562195..bd23b5d4b5c 100644 --- a/source/reference/command/lockInfo.txt +++ b/source/reference/command/lockInfo.txt @@ -19,6 +19,17 @@ Definition pending. :dbcommand:`lockInfo` is an internal command available on :binary:`~bin.mongod` instances only. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-free.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ diff --git a/source/reference/command/logRotate.txt b/source/reference/command/logRotate.txt index 595449dff93..8de10a2d399 100644 --- a/source/reference/command/logRotate.txt +++ b/source/reference/command/logRotate.txt @@ -22,6 +22,17 @@ Definition You must issue the :dbcommand:`logRotate` command against the :term:`admin database`. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. 
include:: /includes/fact-environments-atlas-support-no-free-or-m10.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ diff --git a/source/reference/command/logout.txt b/source/reference/command/logout.txt index fb4288e9230..38639ac71ea 100644 --- a/source/reference/command/logout.txt +++ b/source/reference/command/logout.txt @@ -17,16 +17,17 @@ Definition .. deprecated:: 5.0 - Attempting to use the :dbcommand:`logout` command will write an - error message to the log once per logout attempt. + If you enabled auditing, an attempt to use the :dbcommand:`logout` + command will create an entry in the audit log. This command will be removed in a future release. .. note:: - This command was used when you could log in as multiple users on a single physical connection. - Because this is no longer possible, running ``logout`` may cause connections to fail. - Going forward, you can achieve the same results by closing your connection. + This command was used when you could log in as multiple users on a + single logical connection. Because this is no longer possible, + running ``logout`` is no longer supported. Going forward, you can + achieve the same results by closing your connection. Compatibility ------------- diff --git a/source/reference/command/mapReduce.txt b/source/reference/command/mapReduce.txt index d6a4f8c24cb..e28a21f0f89 100644 --- a/source/reference/command/mapReduce.txt +++ b/source/reference/command/mapReduce.txt @@ -266,8 +266,6 @@ The command takes the following fields as arguments: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - - .. versionadded:: 4.4 .. _map-reduce-usage: @@ -380,76 +378,26 @@ If you set the :ref:`out ` parameter to write the results to a collection, the :dbcommand:`mapReduce` command returns a document in the following form: -.. tabs:: - - .. tab:: MongoDB 4.4+ - :tabid: 4.4 - - .. code-block:: javascript - - { "result" : "map_reduce_example", "ok" : 1 } - - .. tab:: MongoDB 4.2 and earlier - :tabid: 4.2 - - .. code-block:: javascript +.. code-block:: javascript - { - "result" : , - "timeMillis" : , - "counts" : { - "input" : , - "emit" : , - "reduce" : , - "output" : - }, - "ok" : , - } + { "result" : "map_reduce_example", "ok" : 1 } If you set the :ref:`out ` parameter to output the results inline, the :dbcommand:`mapReduce` command returns a document in the following form: -.. tabs:: - - .. tab:: MongoDB 4.4+ - :tabid: 4.4 - - .. code-block:: javascript - - { - "results" : [ - { - "_id" : , - "value" : - }, - ... - ], - "ok" : - } - - .. tab:: MongoDB 4.2 and earlier - :tabid: 4.2 - - .. code-block:: javascript - - { - "results" : [ - { - "_id" : , - "value" : - }, - ... - ], - "timeMillis" : , - "counts" : { - "input" : , - "emit" : , - "reduce" : , - "output" : - }, - "ok" : - } +.. code-block:: javascript + + { + "results" : [ + { + "_id" : , + "value" : + }, + ... + ], + "ok" : + } .. data:: mapReduce.result diff --git a/source/reference/command/medianKey.txt b/source/reference/command/medianKey.txt deleted file mode 100644 index 1ef8d6734db..00000000000 --- a/source/reference/command/medianKey.txt +++ /dev/null @@ -1,28 +0,0 @@ -========= -medianKey -========= - -.. default-domain:: mongodb - -.. contents:: On this page - :local: - :backlinks: none - :depth: 1 - :class: singlecol - -.. dbcommand:: medianKey - - :dbcommand:`medianKey` is an internal command. - - .. 
slave-ok, read-lock - -Compatibility -------------- - -This command is available in deployments hosted in the following environments: - -.. include:: /includes/fact-environments-atlas-only.rst - -.. include:: /includes/fact-environments-atlas-support-all.rst - -.. include:: /includes/fact-environments-onprem-only.rst diff --git a/source/reference/command/moveChunk.txt b/source/reference/command/moveChunk.txt index 28558fafc99..a4ab2608d8f 100644 --- a/source/reference/command/moveChunk.txt +++ b/source/reference/command/moveChunk.txt @@ -31,7 +31,7 @@ Definition db.adminCommand( { moveChunk : , find : , to : , - forceJumbo: , // Starting in MongoDB 4.4 + forceJumbo: , _secondaryThrottle : , writeConcern: , _waitForDelete : } ) @@ -43,7 +43,7 @@ Definition db.adminCommand( { moveChunk : , bounds : , to : , - forceJumbo: , // Starting in MongoDB 4.4 + forceJumbo: , _secondaryThrottle : , writeConcern: , _waitForDelete : } ) @@ -126,8 +126,6 @@ Definition blocking period, see :ref:`balance-chunks-that-exceed-size-limit` instead. - .. versionadded:: 4.4 - * - ``_secondaryThrottle`` - boolean @@ -240,7 +238,7 @@ side effects. ``maxCatchUpPercentageBeforeBlockingWrites`` Server Parameter ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Starting in MongoDB 5.0 (and 4.4.7, 4.2.15, 4.0.26), you can set the +Starting in MongoDB 5.0, you can set the :parameter:`maxCatchUpPercentageBeforeBlockingWrites` to specify the maximum allowed percentage of data not yet migrated during a :dbcommand:`moveChunk` operation when compared to the diff --git a/source/reference/command/moveRange.txt b/source/reference/command/moveRange.txt index 8ade538e0a4..7212f62a2fe 100644 --- a/source/reference/command/moveRange.txt +++ b/source/reference/command/moveRange.txt @@ -15,6 +15,8 @@ Definition .. dbcommand:: moveRange + .. versionadded:: 6.0 + Moves :term:`ranges ` between :term:`shards `. Run the :dbcommand:`moveRange` command with a :binary:`~bin.mongos` instance while using the :term:`admin database`. diff --git a/source/reference/command/nav-administration.txt b/source/reference/command/nav-administration.txt index f865524f90c..16df68a9c58 100644 --- a/source/reference/command/nav-administration.txt +++ b/source/reference/command/nav-administration.txt @@ -94,8 +94,6 @@ Administration Commands - Retrieves the global default read and write concern options for the deployment. - .. versionadded:: 4.4 - * - :dbcommand:`getAuditConfig` - .. include:: /includes/deprecated-get-set-auditconfig.rst @@ -177,8 +175,6 @@ Administration Commands - Sets the global default read and write concern options for the deployment. - .. versionadded:: 4.4 - * - :dbcommand:`shutdown` - Shuts down the :binary:`~bin.mongod` or :binary:`~bin.mongos` process. diff --git a/source/reference/command/nav-diagnostic.txt b/source/reference/command/nav-diagnostic.txt index 39acd69a4bd..c549823754f 100644 --- a/source/reference/command/nav-diagnostic.txt +++ b/source/reference/command/nav-diagnostic.txt @@ -103,13 +103,6 @@ Diagnostic Commands - Returns a collection metrics on instance-wide resource utilization and status. - * - :dbcommand:`shardConnPoolStats` - - - *Deprecated in 4.4 Use :dbcommand:`connPoolStats` instead.* - - Reports statistics on a :binary:`~bin.mongos`'s connection pool for client - operations against shards. - * - :dbcommand:`top` - Returns raw usage statistics for each database in the :binary:`~bin.mongod` instance. 
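The ``maxCatchUpPercentageBeforeBlockingWrites`` paragraph above describes a tunable that pairs with :dbcommand:`moveChunk`. The following is a minimal sketch, not part of the patched pages themselves: it assumes the parameter can be changed at runtime with ``setParameter``, and the namespace, shard key value, and shard name are placeholders you would replace for your own cluster.

.. code-block:: javascript

   // Minimal sketch -- "test.people", the zipcode value, and "shard0019"
   // are hypothetical placeholders.
   // Allow up to 20% of the chunk's data to remain unmigrated before the
   // migration blocks writes during its catch-up phase.
   db.adminCommand( { setParameter: 1, maxCatchUpPercentageBeforeBlockingWrites: 20 } )

   // Move the chunk containing the matching document to the target shard,
   // using the field order shown in the moveChunk syntax above.
   db.adminCommand( {
      moveChunk: "test.people",
      find: { zipcode: "53187" },
      to: "shard0019",
      forceJumbo: false
   } )
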
diff --git a/source/reference/command/nav-sharding.txt b/source/reference/command/nav-sharding.txt index 282d03e3318..211cd3c6568 100644 --- a/source/reference/command/nav-sharding.txt +++ b/source/reference/command/nav-sharding.txt @@ -47,8 +47,6 @@ Sharding Commands - Returns information on whether the chunks of a sharded collection are balanced. - .. versionadded:: 4.4 - * - :dbcommand:`balancerStart` - Starts a balancer thread. @@ -128,10 +126,6 @@ Sharding Commands * - :dbcommand:`listShards` - Returns a list of configured shards. - - * - :dbcommand:`medianKey` - - - Deprecated internal command. See :dbcommand:`splitVector`. * - :dbcommand:`moveChunk` @@ -158,8 +152,6 @@ Sharding Commands - Refines a collection's shard key by adding a suffix to the existing key. - .. versionadded:: 4.4 - * - :dbcommand:`removeShard` - Starts the process of removing a shard from a sharded cluster. @@ -237,7 +229,6 @@ Sharding Commands /reference/command/getShardVersion /reference/command/isdbgrid /reference/command/listShards - /reference/command/medianKey /reference/command/moveChunk /reference/command/movePrimary /reference/command/moveRange diff --git a/source/reference/command/netstat.txt b/source/reference/command/netstat.txt index b0d8e99114e..73ab3417c29 100644 --- a/source/reference/command/netstat.txt +++ b/source/reference/command/netstat.txt @@ -14,3 +14,14 @@ netstat :dbcommand:`netstat` is an internal command that is only available on :binary:`~bin.mongos` instances. + +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst diff --git a/source/reference/command/ping.txt b/source/reference/command/ping.txt index 770c813052d..13c16baea75 100644 --- a/source/reference/command/ping.txt +++ b/source/reference/command/ping.txt @@ -19,6 +19,17 @@ Definition server is responding to commands. This command will return immediately even if the server is write-locked: +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ diff --git a/source/reference/command/planCacheClear.txt b/source/reference/command/planCacheClear.txt index 2e1ce2c3331..98a9ab8d286 100644 --- a/source/reference/command/planCacheClear.txt +++ b/source/reference/command/planCacheClear.txt @@ -81,10 +81,7 @@ The command takes the following optional fields: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - - .. versionadded:: 4.4 - - + To see the query shapes for which cached query plans exist, see :ref:`planCacheStats-examples`. diff --git a/source/reference/command/planCacheClearFilters.txt b/source/reference/command/planCacheClearFilters.txt index e3524fd3501..fa18885a6ab 100644 --- a/source/reference/command/planCacheClearFilters.txt +++ b/source/reference/command/planCacheClearFilters.txt @@ -100,8 +100,6 @@ The command has the following fields: - any - .. include:: /includes/extracts/comment-content.rst - .. 
versionadded:: 4.4 - Required Access --------------- diff --git a/source/reference/command/planCacheListFilters.txt b/source/reference/command/planCacheListFilters.txt index a35a4cdd96b..53166f34c0c 100644 --- a/source/reference/command/planCacheListFilters.txt +++ b/source/reference/command/planCacheListFilters.txt @@ -67,8 +67,6 @@ The command has the following fields: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - - .. versionadded:: 4.4 Required Access --------------- diff --git a/source/reference/command/planCacheSetFilter.txt b/source/reference/command/planCacheSetFilter.txt index 0833c028b8f..af20fe3300f 100644 --- a/source/reference/command/planCacheSetFilter.txt +++ b/source/reference/command/planCacheSetFilter.txt @@ -107,7 +107,7 @@ The command has the following fields: ]``. - Index names. For example, ``[ "x_1", ... ]``. - The :doc:`query optimizer ` uses either a + The :ref:`query optimizer ` uses either a collection scan or the index arrays for the query plan. If the specified indexes do not exist or are :doc:`hidden `, the optimizer uses a collection scan. @@ -118,8 +118,6 @@ The command has the following fields: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - - .. versionadded:: 4.4 Index filters only exist for the duration of the server process and do not persist after shutdown. To clear the index filters, use the diff --git a/source/reference/command/profile.txt b/source/reference/command/profile.txt index 01c365c64b2..c4c668ca071 100644 --- a/source/reference/command/profile.txt +++ b/source/reference/command/profile.txt @@ -67,6 +67,17 @@ Definition .. include:: /includes/warning-profiler-performance.rst +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ @@ -159,9 +170,7 @@ The command takes the following fields: When ``filter`` is set, the ``slowms`` and ``sampleRate`` options are not used for profiling and slow-query log lines. - - .. versionadded:: 4.4.2 - + The :method:`db.getProfilingStatus()` and :method:`db.setProfilingLevel()` :ref:`shell methods ` provide wrappers around the @@ -177,11 +186,8 @@ database while enabling or disabling the profiler. This is typically a short operation. The lock blocks other operations until the :dbcommand:`profile` command has completed. -Starting in MongoDB 4.4.2, when connected to a sharded cluster through -:binary:`~bin.mongos`, you can run the :dbcommand:`profile` command -against any database. In previous versions of MongoDB, when connected -through :binary:`~bin.mongos`, you can only run the :dbcommand:`profile` -command against the ``admin`` database. +When connected to a sharded cluster through :binary:`~bin.mongos`, you can run +the :dbcommand:`profile` command against any database. .. seealso:: diff --git a/source/reference/command/reIndex.txt b/source/reference/command/reIndex.txt index d86995d72b0..d538c9993f1 100644 --- a/source/reference/command/reIndex.txt +++ b/source/reference/command/reIndex.txt @@ -34,6 +34,17 @@ Definition instances. - For most users, the :dbcommand:`reIndex` command is unnecessary. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. 
include:: /includes/fact-environments-atlas-support-no-free.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ diff --git a/source/reference/command/refineCollectionShardKey.txt b/source/reference/command/refineCollectionShardKey.txt index c596786287f..9c6ce46b972 100644 --- a/source/reference/command/refineCollectionShardKey.txt +++ b/source/reference/command/refineCollectionShardKey.txt @@ -17,8 +17,6 @@ Definition .. dbcommand:: refineCollectionShardKey - .. versionadded:: 4.4 - Modifies the collection's :ref:`shard key ` by adding new field(s) as a suffix to the existing key. Refining a collection's shard key can address situations @@ -29,7 +27,7 @@ Definition As part of refining the shard key, the :dbcommand:`refineCollectionShardKey` command updates the - :doc:`chunk ranges ` and + :ref:`chunk ranges ` and :ref:`zone ranges ` to incorporate the new fields without modifying the range values of the existing key fields. That is, the refinement of the shard key does not @@ -111,7 +109,7 @@ The command takes the following fields: For the suffix fields, set the field values to either: - - ``1`` for :doc:`ranged based sharding ` + - ``1`` for :ref:`range-based sharding ` - ``"hashed"`` to specify a :ref:`hashed shard key ` *if* no other field in the shard key has @@ -165,7 +163,7 @@ Index Considerations .. note:: - - The supporting index cannot be a :doc:`partial index `. + - The supporting index cannot be a :ref:`partial index `. - The supporting index cannot be a :ref:`sparse index `. diff --git a/source/reference/command/removeShard.txt b/source/reference/command/removeShard.txt index 35c2b30f9bc..9d865b628ce 100644 --- a/source/reference/command/removeShard.txt +++ b/source/reference/command/removeShard.txt @@ -55,8 +55,7 @@ You cannot back up the cluster data during shard removal. Concurrent ``removeShard`` Operations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Starting in MongoDB 4.4 (and 4.2.1), you can have more than one -:dbcommand:`removeShard` operation in progress. +You can have more than one :dbcommand:`removeShard` operation in progress. In MongoDB 4.2.0 and earlier, :dbcommand:`removeShard` returns an error if another :dbcommand:`removeShard` operation is in progress. diff --git a/source/reference/command/renameCollection.txt b/source/reference/command/renameCollection.txt index dacdc3c6413..29ded9ea579 100644 --- a/source/reference/command/renameCollection.txt +++ b/source/reference/command/renameCollection.txt @@ -25,6 +25,17 @@ Definition Issue the :dbcommand:`renameCollection` command against the :term:`admin database`. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ @@ -84,9 +95,7 @@ The command contains the following fields: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - - .. versionadded:: 4.4 - + Behavior -------- diff --git a/source/reference/command/replSetAbortPrimaryCatchUp.txt b/source/reference/command/replSetAbortPrimaryCatchUp.txt index 30dc8f7f34e..4e584d5bf6c 100644 --- a/source/reference/command/replSetAbortPrimaryCatchUp.txt +++ b/source/reference/command/replSetAbortPrimaryCatchUp.txt @@ -19,6 +19,17 @@ Definition :term:`primary` member of the replica set to abort sync (catch up) then complete the transition to primary. 
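As a quick illustration of the command described above, the following is a minimal sketch of invoking :dbcommand:`replSetAbortPrimaryCatchUp` from ``mongosh``. It assumes the command takes no fields other than the command name itself and is run against the ``admin`` database on the newly elected primary.

.. code-block:: javascript

   // Minimal sketch: ask the newly elected primary to abort the remaining
   // catch-up period and complete its transition to primary.
   db.adminCommand( { replSetAbortPrimaryCatchUp: 1 } )
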
+Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ diff --git a/source/reference/command/replSetFreeze.txt b/source/reference/command/replSetFreeze.txt index fd2cb71e2ab..76e2f3328ad 100644 --- a/source/reference/command/replSetFreeze.txt +++ b/source/reference/command/replSetFreeze.txt @@ -24,6 +24,17 @@ Definition .. |method| replace:: :method:`rs.freeze` helper method .. include:: /includes/fact-dbcommand-tip +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-free-or-m10.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ diff --git a/source/reference/command/replSetGetConfig.txt b/source/reference/command/replSetGetConfig.txt index 4c199631614..4823a5f601b 100644 --- a/source/reference/command/replSetGetConfig.txt +++ b/source/reference/command/replSetGetConfig.txt @@ -21,6 +21,18 @@ Definition .. |method| replace:: :method:`rs.conf` helper method .. include:: /includes/fact-dbcommand-tip +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-free.rst + +.. include:: /includes/fact-environments-onprem-only.rst + + Syntax ------ @@ -69,13 +81,9 @@ Command Fields running the command on the primary. The command errors if run with ``commitmentStatus: true`` on a secondary. - .. versionadded:: 4.4 - * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - - .. versionadded:: 4.4 :binary:`~bin.mongosh` provides the :method:`rs.conf()` method that wraps the :dbcommand:`replSetGetConfig` command: @@ -160,7 +168,7 @@ command run with :ref:`commitmentStatus: true "replicaSetId" : ObjectId("5eaa1e9ac4d650aa7817623d") } }, - "commitmentStatus" : true, // Available in MongoDB 4.4 + "commitmentStatus" : true, "ok" : 1, "$clusterTime" : { "clusterTime" : Timestamp(1588212091, 1), diff --git a/source/reference/command/replSetGetStatus.txt b/source/reference/command/replSetGetStatus.txt index d2f2c4a2cf5..87ee79c0218 100644 --- a/source/reference/command/replSetGetStatus.txt +++ b/source/reference/command/replSetGetStatus.txt @@ -33,6 +33,16 @@ Definition .. |method| replace:: :method:`rs.status` helper method .. include:: /includes/fact-dbcommand-tip +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-free.rst + +.. include:: /includes/fact-environments-onprem-only.rst Syntax ------ @@ -77,6 +87,18 @@ The command has the following syntax: You cannot specify ``initialSync: 1`` in the :binary:`~bin.mongosh` helper :method:`rs.status()`. +.. note:: + + If you haven't yet :method:`initialized ` your replica + set, the ``replSetGetStatus`` command returns the following error: + + .. code-block:: shell + :copyable: false + + MongoServerError: no replset config has been received + + Run the :dbcommand:`replSetInitiate` command and try again. + .. 
_rs-status-output: Example @@ -113,8 +135,8 @@ Example "heartbeatIntervalMillis" : NumberLong(2000), "majorityVoteCount" : 2, "writeMajorityCount" : 2, - "votingMembersCount" : 3, // Available starting in v4.4 - "writableVotingMembersCount" : 3, // Available starting in v4.4 + "votingMembersCount" : 3, + "writableVotingMembersCount" : 3, "optimes" : { "lastCommittedOpTime" : { "ts" : Timestamp(1583385878, 1), @@ -293,8 +315,8 @@ Example "heartbeatIntervalMillis" : NumberLong(2000), "majorityVoteCount" : 2, "writeMajorityCount" : 2, - "votingMembersCount" : 3, // Available starting in v4.4 - "writableVotingMembersCount" : 3, // Available starting in v4.4 + "votingMembersCount" : 3, + "writableVotingMembersCount" : 3, "optimes" : { "lastCommittedOpTime" : { "ts" : Timestamp(1583386018, 1), @@ -498,8 +520,8 @@ Example "heartbeatIntervalMillis" : NumberLong(2000), "majorityVoteCount" : 2, "writeMajorityCount" : 2, - "votingMembersCount" : 2, // Available starting in v4.4 - "writableVotingMembersCount" : 2, // Available starting in v4.4 + "votingMembersCount" : 2, + "writableVotingMembersCount" : 2, "optimes" : { "lastCommittedOpTime" : { "ts" : Timestamp(0, 0), @@ -525,9 +547,9 @@ Example "initialSyncAttempts" : [ ], "appliedOps" : 0, "initialSyncOplogStart" : Timestamp(1583431411, 1), - "syncSourceUnreachableSince" : ISODate("2020-03-05T18:04:15.587Z"), // Available starting in v4.4 - "currentOutageDurationMillis" : NumberLong(8687), // Available starting in v4.4 - "totalTimeUnreachableMillis" : NumberLong(8687), // Available starting in v4.4 + "syncSourceUnreachableSince" : ISODate("2020-03-05T18:04:15.587Z"), + "currentOutageDurationMillis" : NumberLong(8687), + "totalTimeUnreachableMillis" : NumberLong(8687), "databases" : { "databasesCloned" : 3, "admin" : { @@ -710,12 +732,6 @@ following fields: member. The :data:`~replSetGetStatus.term` is used by the distributed consensus algorithm to ensure correctness. -.. data:: replSetGetStatus.syncingTo - - *Removed in MongoDB 4.4* - - See :data:`replSetGetStatus.syncSourceHost` instead. - .. data:: replSetGetStatus.syncSourceHost The :data:`~replSetGetStatus.syncSourceHost` field holds the @@ -759,15 +775,11 @@ following fields: .. data:: replSetGetStatus.votingMembersCount - .. versionadded:: 4.4 - The number of members configured with :rsconf:`votes: 1 `, including arbiters. .. data:: replSetGetStatus.writableVotingMembersCount - .. versionadded:: 4.4 - The number of *data-bearing* members configured with :rsconf:`votes: 1 ` (this does not include arbiters). @@ -1116,9 +1128,9 @@ following fields: "durationMillis" : 59539, "status" : "InvalidOptions: error fetching oplog during initial sync :: caused by :: Error while getting the next batch in the oplog fetcher :: caused by :: readConcern afterClusterTime value must not be greater than the current clusterTime. Requested clusterTime: { ts: Timestamp(0, 1) }; current clusterTime: { ts: Timestamp(0, 0) }", "syncSource" : "m1.example.net:27017", - "rollBackId" : 1, // Available starting in v4.4 - "operationsRetried" : 120, // Available starting in v4.4 - "totalTimeUnreachableMillis" : 52601 // Available starting in v4.4 + "rollBackId" : 1, + "operationsRetried" : 120, + "totalTimeUnreachableMillis" : 52601 } ], @@ -1150,20 +1162,14 @@ following fields: :ref:`file copy based initial sync `. - .. versionadded:: 4.4 - * - operationsRetried - Total number of all operation retry attempts. - .. versionadded:: 4.4 - * - totalTimeUnreachableMillis - Total time spent for retry operation attempts. - .. 
versionadded:: 4.4 - See also :data:`~replSetGetStatus.initialSyncStatus.failedInitialSyncAttempts`. @@ -1206,8 +1212,6 @@ following fields: Only present if the if sync source is unavailable during the current initial sync. - .. versionadded:: 4.4 - .. data:: replSetGetStatus.initialSyncStatus.currentOutageDurationMillis The time in milliseconds that the sync source has been unavailable. @@ -1215,15 +1219,11 @@ following fields: Only present if the if sync source is unavailable during the current initial sync. - .. versionadded:: 4.4 - .. data:: replSetGetStatus.initialSyncStatus.totalTimeUnreachableMillis The total time in milliseconds that the member has been unavailable during the current initial sync. - .. versionadded:: 4.4 - .. data:: replSetGetStatus.initialSyncStatus.databases Detail on the databases cloned during :ref:`initial sync @@ -1540,12 +1540,6 @@ following fields: This value does not appear for the member that returns the :method:`rs.status()` data. - .. data:: replSetGetStatus.members[n].syncingTo - - *Removed in MongoDB 4.4* - - See :data:`replSetGetStatus.members[n].syncSourceHost` instead. - .. data:: replSetGetStatus.members[n].syncSourceHost The :data:`~replSetGetStatus.members[n].syncSourceHost` field @@ -1579,7 +1573,5 @@ following fields: the ``RECOVERING`` state. This field is only included in the :dbcommand:`replSetGetStatus` output if its value is ``true``. - .. versionadded:: 4.4 - See also :ref:`command-response` for details on the ``ok`` status field, the ``operationTime`` field and the ``$clusterTime`` field. diff --git a/source/reference/command/replSetInitiate.txt b/source/reference/command/replSetInitiate.txt index 2c661b1f601..a37d1c0c7fd 100644 --- a/source/reference/command/replSetInitiate.txt +++ b/source/reference/command/replSetInitiate.txt @@ -29,6 +29,17 @@ Definition Run the command on only one of the :binary:`~bin.mongod` instances for the replica set. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-free-or-m10.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ diff --git a/source/reference/command/replSetMaintenance.txt b/source/reference/command/replSetMaintenance.txt index 571be71f0f0..546473d1d2d 100644 --- a/source/reference/command/replSetMaintenance.txt +++ b/source/reference/command/replSetMaintenance.txt @@ -19,6 +19,17 @@ Definition maintenance mode for a :term:`secondary` member of a :term:`replica set`. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-free-or-m10.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ diff --git a/source/reference/command/replSetReconfig.txt b/source/reference/command/replSetReconfig.txt index 43414db968c..8c7350a3785 100644 --- a/source/reference/command/replSetReconfig.txt +++ b/source/reference/command/replSetReconfig.txt @@ -24,6 +24,16 @@ Definition .. |method| replace:: :method:`rs.reconfig` helper method .. include:: /includes/fact-dbcommand-tip +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. 
include:: /includes/fact-environments-atlas-support-no-free-or-m10.rst + +.. include:: /includes/fact-environments-onprem-only.rst Syntax ------ @@ -76,9 +86,7 @@ The command takes the following optional field: in the operation failing *before* it can apply the new configuration. See :ref:`replSetReconfig-cmd-majority-install` for more information. - - .. versionadded:: 4.4 - + You may also run :dbcommand:`replSetReconfig` with the shell's :method:`rs.reconfig()` method. @@ -95,10 +103,8 @@ Global Write Concern ``term`` Replica Configuration Field ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -MongoDB 4.4 adds the :rsconf:`term` field to the replica set -configuration document. The :rsconf:`term` field is set by the -:term:`primary` replica set member. The primary ignores the -:rsconf:`term` field if set explicitly in the +The :rsconf:`term` field is set by the :term:`primary` replica set member. +The primary ignores the :rsconf:`term` field if set explicitly in the :dbcommand:`replSetReconfig` operation. .. |reconfig| replace:: :dbcommand:`replSetReconfig` @@ -160,10 +166,9 @@ primary to step down in some situations. Primary step-down triggers an :ref:`election ` to select a new :term:`primary`: -- Starting in MongoDB 4.4, when the new primary steps up, it - increments the :rsconf:`term` field to distinguish configuration - changes made on the new primary from changes made on the previous - primary. +- When the new primary steps up, it increments the :rsconf:`term` field to + distinguish configuration changes made on the new primary from changes made + on the previous primary. - Starting in MongoDB 4.2, when the primary steps down, it no longer closes all client connections; however, writes that were in progress diff --git a/source/reference/command/replSetResizeOplog.txt b/source/reference/command/replSetResizeOplog.txt index c4822eb890d..be941c4cf54 100644 --- a/source/reference/command/replSetResizeOplog.txt +++ b/source/reference/command/replSetResizeOplog.txt @@ -14,22 +14,31 @@ Definition ---------- .. dbcommand:: replSetResizeOplog + + :dbcommand:`replSetResizeOplog` also supports specifying the minimum number + of hours to preserve an oplog entry. + + .. versionchanged:: 5.0 + + To set the ``replSetOplog`` size in :binary:`~bin.mongosh`, use + the ``Double()`` constructor. - .. versionadded:: 4.4 + :dbcommand:`replSetResizeOplog` enables you to resize the oplog or + its minimum retention period dynamically without restarting the + :binary:`~bin.mongod` process. - :dbcommand:`replSetResizeOplog` also supports specifying the - minimum number of hours to preserve an oplog entry. + You must run this command against the ``admin`` database. - .. versionchanged:: 5.0 +Compatibility +------------- - To set the ``replSetOplog`` size in :binary:`~bin.mongosh`, use - the ``Double()`` constructor. +This command is available in deployments hosted in the following environments: - :dbcommand:`replSetResizeOplog` enables you to resize the oplog or - its minimum retention period dynamically without restarting the - :binary:`~bin.mongod` process. +.. include:: /includes/fact-environments-atlas-only.rst - You must run this command against the ``admin`` database. +.. include:: /includes/fact-environments-atlas-support-no-m10.rst + +.. include:: /includes/fact-environments-onprem-only.rst Syntax ------ @@ -99,8 +108,6 @@ The command takes the following fields: period, see the :serverstatus:`oplogTruncation.oplogMinRetentionHours` in the output of the :dbcommand:`serverStatus` command. - - .. 
versionadded:: 4.4 .. seealso:: @@ -146,7 +153,7 @@ you use: Reducing the maximum oplog size results in truncation of the oldest oplog entries until the oplog reaches the new configured size. - Similarly, reducing the minimum oplog retention period (*new in 4.4*) + Similarly, reducing the minimum oplog retention period results in truncation of oplog entries older that the specified period *if* the oplog has exceeded the maximum configured size. @@ -170,7 +177,7 @@ Minimum Oplog Retention Period ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ A :binary:`~bin.mongod` has the following behavior when configured with -a minimum oplog retention period (*New in 4.4*): +a minimum oplog retention period: - The oplog can grow without constraint so as to retain oplog entries for the configured number of hours. This may result in reduction or @@ -192,7 +199,7 @@ a minimum oplog retention period (*New in 4.4*): ``replSetResizeOplog`` Does Not Replicate To Other Members ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Changing the oplog size or minimum oplog retention period (*new in 4.4*) +Changing the oplog size or minimum oplog retention period of a given replica set member with :dbcommand:`replSetResizeOplog` does not change the oplog size of any other member in the replica set. You must run :dbcommand:`replSetResizeOplog` on each replica set member in @@ -207,7 +214,7 @@ Reducing Oplog Size Does Not Immediately Return Disk Space Reducing the oplog size does not immediately reclaim that disk space. This includes oplog size reduction due to truncation of oplog events older than of the :ref:`minimum oplog retention period -` (*New in 4.4*). +`. To immediately free unused disk space after reducing the oplog size, run :dbcommand:`compact` against the ``oplog.rs`` collection in the diff --git a/source/reference/command/replSetStepDown.txt b/source/reference/command/replSetStepDown.txt index a0eb68e5ca3..33b102a6023 100644 --- a/source/reference/command/replSetStepDown.txt +++ b/source/reference/command/replSetStepDown.txt @@ -26,6 +26,17 @@ Description The :dbcommand:`replSetStepDown` can only run on the ``admin`` database. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-free-or-m10.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ diff --git a/source/reference/command/replSetSyncFrom.txt b/source/reference/command/replSetSyncFrom.txt index c712806b9f1..541c6589237 100644 --- a/source/reference/command/replSetSyncFrom.txt +++ b/source/reference/command/replSetSyncFrom.txt @@ -25,6 +25,17 @@ Description Run :dbcommand:`replSetSyncFrom` in the ``admin`` database. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-free-or-m10.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ diff --git a/source/reference/command/reshardCollection.txt b/source/reference/command/reshardCollection.txt index a0396a970b7..f712cad3d63 100644 --- a/source/reference/command/reshardCollection.txt +++ b/source/reference/command/reshardCollection.txt @@ -58,7 +58,7 @@ The command has the following syntax: }, ... 
], - forceDistribution: + forceRedistribution: } ) @@ -89,7 +89,7 @@ The command takes the following fields: Set the field values to either: - - ``1`` for :doc:`ranged based sharding ` + - ``1`` for :ref:`range-based sharding ` - ``"hashed"`` to specify a :ref:`hashed shard key `. @@ -113,7 +113,7 @@ The command takes the following fields: * - ``collation`` - document - - Optional. If the collection specified to ``reshardCollection`` + - Optional. If the collection specified in ``reshardCollection`` has a default :ref:`collation `, you *must* include a collation document with ``{ locale : "simple" }``, or the ``reshardCollection`` command fails. @@ -125,8 +125,8 @@ The command takes the following fields: * - ``forceRedistribution`` - boolean - - Optional. When set to ``true``, the operation executes even if the new - shard key is the same as the old shard key. Use with the + - Optional. If set to ``true``, the operation runs even if the new + shard key is the same as the old shard key. Use with the ``zones`` option to move data to specific zones. .. versionadded:: 7.2 @@ -201,8 +201,8 @@ Commit Phase :ref:`sharding-resharding` -Example -------- +Examples +-------- Reshard a Collection ~~~~~~~~~~~~~~~~~~~~ @@ -217,9 +217,10 @@ new shard key ``{ order_id: 1 }``: key: { order_id: 1 } }) -MongoDB returns the following: +Output: .. code-block:: javascript + :copyable: false { ok: 1, diff --git a/source/reference/command/revokePrivilegesFromRole.txt b/source/reference/command/revokePrivilegesFromRole.txt index b558ea95878..eedfb30b1ba 100644 --- a/source/reference/command/revokePrivilegesFromRole.txt +++ b/source/reference/command/revokePrivilegesFromRole.txt @@ -83,8 +83,7 @@ The command takes the following fields: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - .. versionadded:: 4.4 - + Behavior -------- diff --git a/source/reference/command/revokeRolesFromRole.txt b/source/reference/command/revokeRolesFromRole.txt index 9aee093d3fa..d038597d996 100644 --- a/source/reference/command/revokeRolesFromRole.txt +++ b/source/reference/command/revokeRolesFromRole.txt @@ -78,8 +78,6 @@ The command has the following fields: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - .. versionadded:: 4.4 - .. |local-cmd-name| replace:: :dbcommand:`revokeRolesFromRole` .. include:: /includes/fact-roles-array-contents.rst diff --git a/source/reference/command/revokeRolesFromUser.txt b/source/reference/command/revokeRolesFromUser.txt index ba5a595c52b..c82a52cd906 100644 --- a/source/reference/command/revokeRolesFromUser.txt +++ b/source/reference/command/revokeRolesFromUser.txt @@ -79,8 +79,7 @@ The command takes the following fields: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - .. versionadded:: 4.4 - + .. |local-cmd-name| replace:: :dbcommand:`revokeRolesFromUser` .. include:: /includes/fact-roles-array-contents.rst diff --git a/source/reference/command/rolesInfo.txt b/source/reference/command/rolesInfo.txt index 68e0d02cb1d..4f3ba4157f1 100644 --- a/source/reference/command/rolesInfo.txt +++ b/source/reference/command/rolesInfo.txt @@ -93,9 +93,7 @@ The command takes the following fields: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - - .. versionadded:: 4.4 - + .. 
_rolesinfo-behavior: Behavior diff --git a/source/reference/command/rotateCertificates.txt b/source/reference/command/rotateCertificates.txt index c51fc820890..a94f9dc89f1 100644 --- a/source/reference/command/rotateCertificates.txt +++ b/source/reference/command/rotateCertificates.txt @@ -23,6 +23,17 @@ Definition certificates defined in the :doc:`configuration file `. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ diff --git a/source/reference/command/serverStatus.txt b/source/reference/command/serverStatus.txt index 7dd60562193..c17cad4b155 100644 --- a/source/reference/command/serverStatus.txt +++ b/source/reference/command/serverStatus.txt @@ -4,6 +4,14 @@ serverStatus .. default-domain:: mongodb +.. facet:: + :name: programming_language + :values: shell + +.. facet:: + :name: genre + :values: reference + .. contents:: On this page :local: :backlinks: none @@ -25,6 +33,17 @@ Definition applications can run this command at a regular interval to collect statistics about the instance. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ @@ -114,10 +133,9 @@ After you run an update query, ``db.serverStatus()`` and Include ``mirroredReads`` ~~~~~~~~~~~~~~~~~~~~~~~~~ -By default, the :serverstatus:`mirroredReads` information (available -starting in version 4.4) is not included in the output. To return -:serverstatus:`mirroredReads` information, you must explicitly specify -the inclusion: +By default, the :serverstatus:`mirroredReads` information is not included in +the output. To return :serverstatus:`mirroredReads` information, you must +explicitly specify the inclusion: .. code-block:: javascript @@ -181,16 +199,17 @@ asserts .. serverstatus:: asserts A document that reports on the number of assertions raised since the - MongoDB process started. While assert errors are typically uncommon, - if there are non-zero values for the :serverstatus:`asserts`, you - should examine the log file for more information. In many cases, - these errors are trivial, but are worth investigating. + MongoDB process started. Assertions are internal checks for errors that + occur while the database is operating and can help diagnose issues with the + MongoDB server. Non-zero asserts values indicate assertion errors, which are + uncommon and not an immediate cause for concern. Errors that generate asserts + can be recorded in the log file or returned directly to a client application + for more information. .. serverstatus:: asserts.regular The number of regular assertions raised since the MongoDB process - started. Examine the log file for more information about these - messages. + started. Examine the MongoDB log for more information. .. serverstatus:: asserts.warning @@ -208,7 +227,8 @@ asserts the MongoDB process started. These are errors that user may generate, such as out of disk space or duplicate key. You can prevent these assertions by fixing a problem with your application - or deployment. Examine the MongoDB log for more information. + or deployment. 
Examine the log file for more information about these + messages. .. serverstatus:: asserts.rollovers @@ -427,14 +447,12 @@ connections If you are running MongoDB 5.0 or later, do not use the ``isMaster`` command. Instead, use :dbcommand:`hello`. - .. versionadded:: 4.4 - .. serverstatus:: connections.exhaustHello The number of connections whose last request was a :dbcommand:`hello` request with :ref:`exhaustAllowed `. - .. versionadded:: 5.0 (and 4.4.2) + .. versionadded:: 5.0 .. serverstatus:: connections.awaitingTopologyChanges @@ -446,8 +464,6 @@ connections If you are running MongoDB 5.0 or later, do not use the ``isMaster`` command. Instead, use :dbcommand:`hello`. - .. versionadded:: 4.4 - .. serverstatus:: connections.loadBalanced .. versionadded:: 5.3 @@ -460,8 +476,6 @@ connections defaultRWConcern ~~~~~~~~~~~~~~~~ -*Available starting in 4.4* - The ``defaultRWConcern`` section provides information on the local copy of the global default read or write concern settings. The data may be stale or out of date. See :dbcommand:`getDefaultRWConcern` for more @@ -489,8 +503,6 @@ information. The last known global default read or write concern settings. - .. versionadded:: 4.4 - .. serverstatus:: defaultRWConcern.defaultReadConcern The last known global default :ref:`read concern ` @@ -500,8 +512,6 @@ information. default read concern has either not been set *or* has not yet propagated to the instance. - .. versionadded:: 4.4 - .. serverstatus:: defaultRWConcern.defaultReadConcern.level The last known global default :ref:`read concern level @@ -511,8 +521,6 @@ information. default for this setting has either not been set *or* has not yet propagated to the instance. - .. versionadded:: 4.4 - .. serverstatus:: defaultRWConcern.defaultWriteConcern The last known global default :ref:`write concern ` @@ -522,8 +530,6 @@ information. default write concern has either not been set *or* has not yet propagated to the instance. - .. versionadded:: 4.4 - .. serverstatus:: defaultRWConcern.defaultWriteConcern.w The last known global default :ref:`w ` setting. @@ -532,8 +538,6 @@ information. default for this setting has either not been set *or* has not yet propagated to the instance. - .. versionadded:: 4.4 - .. serverstatus:: defaultRWConcern.defaultWriteConcern.wtimeout The last known global default :ref:`wtimeout ` setting. @@ -542,8 +546,6 @@ information. default for this setting has either not been set *or* has not yet propagated to the instance. - .. versionadded:: 4.4 - .. serverstatus:: defaultRWConcern.defaultWriteConcernSource .. include:: /includes/fact-defaultWriteConcernSource-possible-values.rst @@ -565,8 +567,6 @@ information. absent, this field indicates the timestamp when the defaults were last unset. - .. versionadded:: 4.4 - .. serverstatus:: defaultRWConcern.updateWallClockTime The wall clock time when the instance last updated its copy of any @@ -576,8 +576,6 @@ information. absent, this field indicates the time when the defaults were last unset. - .. versionadded:: 4.4 - .. serverstatus:: defaultRWConcern.localUpdateWallClockTime The local system wall clock time when the instance last updated its @@ -586,9 +584,6 @@ information. has never had knowledge of a global default read or write concern setting. - .. versionadded:: 4.4 - - .. _server-status-electionMetrics: electionMetrics @@ -694,7 +689,7 @@ primary: The :serverstatus:`electionMetrics.freezeTimeout` includes both the number of elections called and the number of elections that succeeded. - ..versionadded:: 4.2.1 + .. 
versionadded:: 4.2.1 .. serverstatus:: electionMetrics.numStepDownsCausedByHigherTerm @@ -821,7 +816,7 @@ flowControl enabled : , targetRateLimit : , timeAcquiringMicros : Long(""), - locksPerKiloOp : , // Available in 4.4+. In 4.2, returned locksPerOp instead. + locksPerKiloOp : , sustainerRate : , isLagged : , isLaggedCount : , @@ -868,36 +863,11 @@ flowControl .. serverstatus:: flowControl.locksPerKiloOp - .. note:: Starting in MongoDB 4.4 - - - :serverstatus:`~flowControl.locksPerKiloOp` replaces - :serverstatus:`~flowControl.locksPerOp` field. - :serverstatus:`~flowControl.locksPerOp` field is available only - on version 4.2. - When run on the primary, an approximation of the number of locks taken per 1000 operations. When run on a secondary, the returned number is a placeholder. - .. versionadded:: 4.4 - -.. serverstatus:: flowControl.locksPerOp - - .. note:: Available on MongoDB 4.2 only - - - MongoDB 4.4 replaces :serverstatus:`~flowControl.locksPerOp` with - :serverstatus:`flowControl.locksPerKiloOp`. - - When run on the primary, an approximation of the number of locks taken - per operation. - - When run on a secondary, the returned number is a placeholder. - - .. versionadded:: 4.2 - .. serverstatus:: flowControl.sustainerRate When run on the primary, an approximation of operations applied per @@ -1040,9 +1010,7 @@ globalLock hedgingMetrics ~~~~~~~~~~~~~~ -.. versionadded:: 4.4 - - For :binary:`~bin.mongos` instances only. +For :binary:`~bin.mongos` instances only. .. code-block:: javascript @@ -1057,9 +1025,7 @@ hedgingMetrics Provides metrics on :ref:`hedged reads ` for the :binary:`~bin.mongos` instance. - .. versionadded:: 4.4 - - For :binary:`~bin.mongos` instances only. + For :binary:`~bin.mongos` instances only. .. serverstatus:: hedgingMetrics.numTotalOperations @@ -1067,9 +1033,7 @@ hedgingMetrics option enabled ` to this :binary:`~bin.mongos` instance. - .. versionadded:: 4.4 - - For :binary:`~bin.mongos` instances only. + For :binary:`~bin.mongos` instances only. .. serverstatus:: hedgingMetrics.numTotalHedgedOperations @@ -1078,9 +1042,7 @@ hedgingMetrics i.e. sent the operation to an additional member of each queried shard. - .. versionadded:: 4.4 - - For :binary:`~bin.mongos` instances only. + For :binary:`~bin.mongos` instances only. .. serverstatus:: hedgingMetrics.numAdvantageouslyHedgedOperations @@ -1088,9 +1050,7 @@ hedgingMetrics :ref:`hedge the read operation ` fulfilled the client request. - .. versionadded:: 4.4 - - For :binary:`~bin.mongos` instances only. + For :binary:`~bin.mongos` instances only. .. _server-status-indexBuilds: @@ -1217,6 +1177,56 @@ indexBulkBuilder The current bytes of memory allocated for building indexes. +.. _server-status-indexStats: + +indexStats +~~~~~~~~~~ + +.. 
code-block:: json + + indexStats: { + count: Long(""), + features: { + '2d': { count: Long(""), accesses: Long("") }, + '2dsphere': { count: Long(""), accesses: Long("") }, + '2dsphere_bucket': { count: Long(""), accesses: Long("") }, + collation: { count: Long(""), accesses: Long("") }, + compound: { count: Long(""), accesses: Long("") }, + hashed: { count: Long(""), accesses: Long("") }, + id: { count: Long(""), accesses: Long("") }, + normal: { count: Long(""), accesses: Long("") }, + partial: { count: Long(""), accesses: Long("") }, + single: { count: Long(""), accesses: Long("") }, + sparse: { count: Long(""), accesses: Long("") }, + text: { count: Long(""), accesses: Long("") }, + ttl: { count: Long(""), accesses: Long("") }, + unique: { count: Long(""), accesses: Long("") }, + wildcard: { count: Long(""), accesses: Long("") } + } + } + +.. serverstatus:: indexStats + + A document that reports statistics on all indexes on databases and collections. + + .. versionadded:: 6.0 + +.. serverstatus:: indexStats.count + + The total number of indexes. + + .. versionadded:: 6.0 + +.. serverstatus:: indexStats.features + + A document that provides counters for each index type and the number of + accesses on each index. Each index type under ``indexStats.features`` + has a ``count`` field that counts the total number of indexes for that + type, and an ``accesses`` field that counts the number of accesses on that + index. + + .. versionadded:: 6.0 + .. _server-status-instance-information: Instance Information @@ -1534,6 +1544,11 @@ metrics commands: { : { failed: Long(""), + validator: { + total: Long(""), + failed: Long(""), + jsonSchema: Long("") + }, total: Long("") } }, @@ -1736,15 +1751,20 @@ metrics reconfig : { numAutoReconfigsForRemovalOfNewlyAddedFields : Long("") }, - stepDown : { - userOperationsKilled : Long(""), - userOperationsRunning : Long("") + stateTransition : { + lastStateTransition : , + totalOperationsKilled : Long(""), + totalOperationsRunning : Long("") }, syncSource : { numSelections : Long(""), numTimesChoseSame : Long(""), numTimesChoseDifferent : Long(""), numTimesCouldNotFind : Long("") + }, + waiters : { + opTime : Long(""), + replication : Long("") } }, storage : { @@ -1790,7 +1810,6 @@ metrics :dbcommand:`serverStatus` reports the number of times that stage has been executed. - *New in version 4.4 (and 4.2.6).* *Updated in version 5.2 (and 5.0.6).* .. _server-status-apiVersions: @@ -1882,8 +1901,6 @@ metrics The counter for ``$expr`` increments when the query runs. The counter for ``$gt`` does not. - .. versionadded:: 5.1 - .. serverstatus:: metrics.changeStreams.largeEventsSplit The number of change stream events larger than 16 MB that were split @@ -1906,8 +1923,7 @@ metrics MB. To prevent the exception, see :pipeline:`$changeStreamSplitLargeEvent`. - .. versionadded:: 7.0 - + .. versionadded:: 7.0 (*Also available in 6.0.9 and 5.0.19*) .. serverstatus:: metrics.changeStreams.showExpandedEvents @@ -1941,11 +1957,38 @@ metrics The number of times ```` failed on this :binary:`~bin.mongod`. + +.. serverstatus:: metrics.commands..validator + + For the :dbcommand:`create` and :dbcommand:`collMod` commands, a document + that reports on non-empty ``validator`` objects passed to the command to + specify :ref:`validation rules or expressions ` + for the collection. + + +.. serverstatus:: metrics.commands..validator.total + + The number of times a non-empty ``validator`` object was passed as an option + to the command on this :binary:`~bin.mongod`. + +.. 
serverstatus:: metrics.commands..validator.failed + + The number of times a call to the command on this :binary:`~bin.mongod` + failed with a non-empty ``validator`` object due to a schema validation + error. + +.. serverstatus:: metrics.commands..validator.jsonSchema + + The number of times a ``validator`` object with a ``$jsonSchema`` was passed + as an option to the command on this :binary:`~bin.mongod`. + + .. serverstatus:: metrics.commands..total The number of times ```` executed on this :binary:`~bin.mongod`. + .. serverstatus:: metrics.commands.update.pipeline The number of times an @@ -2108,6 +2151,52 @@ metrics .. versionadded:: 5.0 +.. serverstatus:: metrics.network + + .. versionadded:: 6.3 + + A document that reports server network metrics. + +.. serverstatus:: metrics.network.totalEgressConnectionEstablishmentTimeMillis + + .. versionadded:: 6.3 + + The total time in milliseconds to establish server connections. + +.. serverstatus:: metrics.network.totalIngressTLSConnections + + .. versionadded:: 6.3 + + The total number of incoming connections to the server that use TLS. + The number is cumulative and is the total after the server was + started. + +.. serverstatus:: metrics.network.totalIngressTLSHandshakeTimeMillis + + .. versionadded:: 6.3 + + The total time in milliseconds that incoming connections to the + server have to wait for the TLS network handshake to complete. The + number is cumulative and is the total after the server was started. + +.. serverstatus:: metrics.network.totalTimeForEgressConnectionAcquiredToWireMicros + + .. versionadded:: 6.3 + + The total time in microseconds that operations wait between + acquisition of a server connection and writing the bytes to send to + the server over the network. The number is cumulative and is the + total after the server was started. + +.. serverstatus:: metrics.network.totalTimeToFirstNonAuthCommandMillis + + .. versionadded:: 6.3 + + The total time in milliseconds from accepting incoming connections to + the server and receiving the first operation that isn't part of the + connection authentication handshake. The number is cumulative and is + the total after the server was started. + .. serverstatus:: metrics.operation A document that holds counters for several types of update and query @@ -2174,7 +2263,7 @@ metrics These metrics are primarily intended for internal use by MongoDB. - *New in version 6.0.0, 5.0.9, and 4.4.15* + *New in version 6.0.0 and 5.0.9* .. serverstatus:: metrics.query.sort @@ -2297,23 +2386,17 @@ metrics A document that reports on the number of queries that performed a collection scan. - .. versionadded:: 4.4 - .. serverstatus:: metrics.queryExecutor.collectionScans.nonTailable The number of queries that performed a collection scan that did not use a :ref:`tailable cursor `. - .. versionadded:: 4.4 - .. serverstatus:: metrics.queryExecutor.collectionScans.total The total number queries that performed a collection scan. The total consists of queries that did and did not use a :doc:`tailable cursor `. - .. versionadded:: 4.4 - .. serverstatus:: metrics.record A document that reports on data related to record allocation in the @@ -2445,8 +2528,6 @@ metrics number reports on the empty batches received when it was a secondary. Otherwise, for a primary, this number is ``0``. - .. versionadded:: 4.4 - .. 
serverstatus:: metrics.repl.network.notPrimaryLegacyUnacknowledgedWrites The number of unacknowledged (``w: 0``) legacy write operations (see @@ -2469,23 +2550,17 @@ metrics commands to fetch the :term:`oplog` that a node processed as a sync source. - .. versionadded:: 4.4 - .. serverstatus:: metrics.repl.network.oplogGetMoresProcessed.num The number of :dbcommand:`getMore` commands to fetch the :term:`oplog` that a node processed as a sync source. - .. versionadded:: 4.4 - .. serverstatus:: metrics.repl.network.oplogGetMoresProcessed.totalMillis The time, in milliseconds, that a node spent processing the :dbcommand:`getMore` commands counted in :serverstatus:`metrics.repl.network.oplogGetMoresProcessed.num`. - .. versionadded:: 4.4 - .. serverstatus:: metrics.repl.network.ops The total @@ -2505,8 +2580,6 @@ metrics A document that reports the number of ``replSetUpdatePosition`` commands a node sent to its sync source. - .. versionadded:: 4.4 - .. serverstatus:: metrics.repl.network.replSetUpdatePosition.num The number of ``replSetUpdatePosition`` commands a node sent @@ -2514,8 +2587,6 @@ metrics replication commands that communicate replication progress from nodes to their sync sources. - .. versionadded:: 4.4 - .. note:: Replica set members in the :replstate:`STARTUP2` state do not send @@ -2544,34 +2615,82 @@ metrics .. versionadded:: 5.0 -.. serverstatus:: metrics.repl.stepDown +.. serverstatus:: metrics.repl.stateTransition - Information on user operations that were running when the - :binary:`~bin.mongod` stepped down. + Information on user operations when the member undergoes one of the + following transitions that can stop user operations: - .. versionadded:: 4.2 + - The member steps up to become a primary. -.. serverstatus:: metrics.repl.stepDown.userOperationsKilled + - The member steps down to become a secondary. - The number of user operations killed when the :binary:`~bin.mongod` - stepped down. + - The member is actively performing a rollback. - .. versionadded:: 4.2 +.. serverstatus:: metrics.repl.stateTransition.lastStateTransition -.. serverstatus:: metrics.repl.stepDown.userOperationsRunning + The transition being reported: - The number of user operations that remained running when the - :binary:`~bin.mongod` stepped down. + .. list-table:: + :widths: 20 80 + :header-rows: 1 - .. versionadded:: 4.2 + * - State Change + - Description + + * - ``"stepUp"`` + + - The member steps up to become a primary. + + * - ``"stepDown"`` + - The member steps down to become a secondary. + + * - ``"rollback"`` + + - The member is actively performing a rollback. + + * - ``""`` + + - The member has not undergone any state changes. + +.. serverstatus:: metrics.repl.stateTransition.totalOperationsKilled + + The total number of operations stopped during the + :binary:`~bin.mongod` instance's state change. + + .. versionadded:: 7.3 + + ``totalOperationsKilled`` replaces + :serverstatus:`~metrics.repl.stateTransition.userOperationsKilled` + +.. serverstatus:: metrics.repl.stateTransition.totalOperationsRunning + + The total number of operations that remained running during the + :binary:`~bin.mongod` instance's state change. + + .. versionadded:: 7.3 + + ``totalOperationsRunning`` replaces + :serverstatus:`~metrics.repl.stateTransition.userOperationsRunning` + +.. serverstatus:: metrics.repl.stateTransition.userOperationsKilled + + .. deprecated:: 7.3 + + :serverstatus:`~metrics.repl.stateTransition.totalOperationsKilled` + replaces ``userOperationsKilled``. + +.. 
serverstatus:: metrics.repl.stateTransition.userOperationsRunning + + .. deprecated:: 7.3 + + :serverstatus:`~metrics.repl.stateTransition.totalOperationsRunning` + replaces ``userOperationsRunning``. .. serverstatus:: metrics.repl.syncSource Information on a replica set node's :ref:`sync source selection ` process. - .. versionadded:: 4.4 - .. serverstatus:: metrics.repl.syncSource.numSelections Number of times a node attempted to choose a node to sync from among @@ -2579,28 +2698,33 @@ metrics to sync from if, for example, the sync source is re-evaluated or the node receives an error from its current sync source. - .. versionadded:: 4.4 - .. serverstatus:: metrics.repl.syncSource.numTimesChoseSame Number of times a node kept its original sync source after re-evaluating if its current sync source was optimal. - .. versionadded:: 4.4 - .. serverstatus:: metrics.repl.syncSource.numTimesChoseDifferent Number of times a node chose a new sync source after re-evaluating if its current sync source was optimal. - .. versionadded:: 4.4 - .. serverstatus:: metrics.repl.syncSource.numTimesCouldNotFind Number of times a node could not find an available sync source when attempting to choose a node to sync from. - .. versionadded:: 4.4 +.. serverstatus:: metrics.repl.waiters.replication + + The number of threads waiting for replicated or journaled :ref:`write concern + acknowledgments `. + + .. versionadded:: 7.3 + +.. serverstatus:: metrics.repl.waiters.opTime + + The number of threads queued for local replication :term:`optime` assignments. + + .. versionadded:: 7.3 .. serverstatus:: metrics.storage.freelist.search.bucketExhausted @@ -2803,8 +2927,6 @@ mirroredReads .. serverstatus:: mirroredReads.seen - .. versionadded:: 4.4 - The number of :ref:`operations that support mirroring ` received by this member. @@ -2814,8 +2936,6 @@ mirroredReads .. serverstatus:: mirroredReads.sent - .. versionadded:: 4.4 - The number of mirrored reads sent by this member when primary. For example, if a read is mirrored and sent to two secondaries, the number of mirrored reads is ``2``. @@ -2835,6 +2955,8 @@ network network : { bytesIn : Long(""), bytesOut : Long(""), + physicalBytesIn : Long(""), + physicalBytesOut : Long(""), numSlowDNSOperations : Long(""), numSlowSSLOperations : Long(""), numRequests : Long(""), @@ -2888,27 +3010,39 @@ network .. serverstatus:: network.bytesIn - The total number of bytes that the server has *received* over network - connections initiated by clients or other :binary:`~bin.mongod` or - :binary:`~bin.mongos` instances. + The total number of logical bytes that the server has *received* over + network connections initiated by clients or other ``mongod`` or + ``mongos`` instances. Logical bytes are the exact number of bytes + that a given file contains. .. serverstatus:: network.bytesOut - The total number of bytes that the server has *sent* over network - connections initiated by clients or other :binary:`~bin.mongod` or - :binary:`~bin.mongos` instances. + The total number of logical bytes that the server has *sent* over + network connections initiated by clients or other ``mongod`` or + ``mongos`` instances. Logical bytes correspond to the number of + bytes that a given file contains. -.. serverstatus:: network.numSlowDNSOperations +.. serverstatus:: network.physicalBytesIn + + The total number of physical bytes that the server has *received* + over network connections initiated by clients or other ``mongod`` or + ``mongos`` instances. 
Physical bytes are the number of bytes that + actually reside on disk. + +.. serverstatus:: network.physicalBytesOut + + The total number of physical bytes that the server has *sent* over + network connections initiated by clients or other ``mongod`` or + ``mongos`` instances. Physical bytes are the number of bytes that + actually reside on disk. - .. versionadded:: 4.4 +.. serverstatus:: network.numSlowDNSOperations The total number of DNS resolution operations which took longer than 1 second. .. serverstatus:: network.numSlowSSLOperations - .. versionadded:: 4.4 - The total number of SSL handshake operations which took longer than 1 second. @@ -2921,16 +3055,12 @@ network with expectations and application use. .. serverstatus:: network.tcpFastOpen - - .. versionadded:: 4.4 A document that reports data on MongoDB's support and use of TCP Fast Open (TFO) connections. .. serverstatus:: network.tcpFastOpen.kernelSetting - .. versionadded:: 4.4 - *Linux only* Returns the value of ``/proc/sys/net/ipv4/tcp_fastopen``: @@ -2946,8 +3076,6 @@ network .. serverstatus:: network.tcpFastOpen.serverSupported - .. versionadded:: 4.4 - - Returns ``true`` if the host operating system supports inbound TCP Fast Open (TFO) connections. @@ -2956,8 +3084,6 @@ network .. serverstatus:: network.tcpFastOpen.clientSupported - .. versionadded:: 4.4 - - Returns ``true`` if the host operating system supports outbound TCP Fast Open (TFO) connections. @@ -2966,8 +3092,6 @@ network .. serverstatus:: network.tcpFastOpen.accepted - .. versionadded:: 4.4 - The total number of accepted incoming TCP Fast Open (TFO) connections to the :binary:`~bin.mongod` or :binary:`~bin.mongos` since the ``mongod`` or ``mongos`` last started. @@ -3731,17 +3855,13 @@ oplogTruncation .. serverstatus:: oplogTruncation - A document that reports on :doc:`oplog ` + A document that reports on :ref:`oplog ` truncations. The field only appears when the current instance is a member of a replica set and uses either the :doc:`/core/wiredtiger` or :doc:`/core/inmemory`. - .. versionchanged:: 4.4 - - Also available in :doc:`/core/inmemory`. - .. versionadded:: 4.2.1 Available in the :doc:`/core/wiredtiger`. @@ -3757,10 +3877,6 @@ oplogTruncation See :serverstatus:`oplogTruncation.processingMethod` - .. versionchanged:: 4.4 - - Also available in :doc:`/core/inmemory`. - .. versionadded:: 4.2.1 Available in the :doc:`/core/wiredtiger`. @@ -3774,22 +3890,16 @@ oplogTruncation if the :binary:`~bin.mongod` instance started on existing data files (i.e. not meaningful for :doc:`/core/inmemory`). - .. versionchanged:: 4.4 - - Also available in :doc:`/core/inmemory`. - .. versionadded:: 4.2.1 Available in the :doc:`/core/wiredtiger`. .. serverstatus:: oplogTruncation.oplogMinRetentionHours - .. versionadded:: 4.4 - - The minimum retention period for the oplog in hours. If the oplog - has exceeded the oplog size, the :binary:`~bin.mongod` only - truncates oplog entries older than the configured retention - value. + The minimum retention period for the oplog in hours. If the oplog + has exceeded the oplog size, the :binary:`~bin.mongod` only + truncates oplog entries older than the configured retention + value. Only visible if the :binary:`~bin.mongod` is a member of a replica set *and*: @@ -3809,10 +3919,6 @@ oplogTruncation The cumulative time spent, in microseconds, performing oplog truncations. - .. versionchanged:: 4.4 - - Also available in :doc:`/core/inmemory`. - .. versionadded:: 4.2.1 Available in the :doc:`/core/wiredtiger`. 
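
For example, these counters can be inspected from :binary:`~bin.mongosh` by reading the corresponding subdocuments of the :dbcommand:`serverStatus` output. The sketch below assumes a replica set member on a recent server version, since ``network.physicalBytesIn`` and ``oplogTruncation`` are not reported by every deployment:

.. code-block:: javascript

   // Read the full serverStatus output once.
   const status = db.adminCommand( { serverStatus: 1 } )

   // Compare the logical and physical network byte counters.
   printjson( {
      bytesIn: status.network.bytesIn,
      physicalBytesIn: status.network.physicalBytesIn
   } )

   // Oplog truncation counters (replica set members only).
   printjson( status.oplogTruncation )
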
@@ -3822,10 +3928,6 @@ oplogTruncation The cumulative number of oplog truncations. - .. versionchanged:: 4.4 - - Also available in :doc:`/core/inmemory`. - .. versionadded:: 4.2.1 Available in the :doc:`/core/wiredtiger`. @@ -3844,11 +3946,13 @@ planCache totalSizeEstimateBytes : Long(""), classic : { hits : Long(""), - misses : Long("") + misses : Long(""), + skipped : Long("") }, sbe : { hits : Long(""), - misses: Long("") + misses: Long(""), + skipped : Long("") } } @@ -3885,6 +3989,13 @@ planCache Number of classic execution engine query plans which were not found in the query cache and went through the query planning phase. +.. serverstatus:: planCache.classic.skipped + + Number of classic execution engine query plans that were not found in the + query cache because the query is ineligible for caching. + + .. versionadded:: 7.3 + .. serverstatus:: planCache.sbe.hits Number of |sbe-short| query plans found in the query @@ -3895,6 +4006,13 @@ planCache Number of |sbe-short| plans which were not found in the query cache and went through the query planning phase. +.. serverstatus:: planCache.sbe.skipped + + Number of |sbe-short| query plans that were not found in the + query cache because the query is ineligible for caching. + + .. versionadded:: 7.3 + .. _server-status-queryStats: queryStats @@ -4497,7 +4615,7 @@ security - The number of times a given authentication mechanism has been used to authenticate against the :binary:`~bin.mongod` or - :binary:`~bin.mongos` instance. (New in MongoDB 4.4) + :binary:`~bin.mongos` instance. - The :binary:`mongod` / :binary:`mongos` instance's TLS/SSL certificate. (Only appears for :binary:`~bin.mongod` or @@ -4518,8 +4636,6 @@ security values in the document distinguish standard authentication and speculative authentication. [#speculative-auth]_ - .. versionadded:: 4.4 - .. note:: The fields in the ``mechanisms`` document depend on the @@ -4543,39 +4659,29 @@ security subset of those attempts which were speculative. [#speculative-auth]_ - .. versionadded:: 4.4 - .. serverstatus:: security.authentication.mechanisms.MONGODB-X509.speculativeAuthenticate.received Number of speculative authentication attempts received using :ref:`x.509 `. Includes both successful and failed speculative authentication attempts. [#speculative-auth]_ - .. versionadded:: 4.4 - .. serverstatus:: security.authentication.mechanisms.MONGODB-X509.speculativeAuthenticate.successful Number of successful speculative authentication attempts received using x.509. [#speculative-auth]_ - .. versionadded:: 4.4 - .. serverstatus:: security.authentication.mechanisms.MONGODB-X509.authenticate.received Number of successful and failed authentication attempts received using x.509. This value includes speculative authentication attempts received using x.509. - .. versionadded:: 4.4 - .. serverstatus:: security.authentication.mechanisms.MONGODB-X509.authenticate.successful Number of successful authentication attempts received using x.508. This value includes successful speculative authentication attempts which used x.509. - .. versionadded:: 4.4 - .. [#speculative-auth] Speculative authentication minimizes the number of network round @@ -5014,8 +5120,6 @@ shardingStatistics *Only present when run on a shard.* - .. versionadded:: 4.4 - .. serverstatus:: shardingStatistics.chunkMigrationConcurrency The number of threads on the source shard and the receiving shard for @@ -5084,8 +5188,6 @@ shardingStatistics *Only present when run on a shard member.* - .. versionadded:: 4.4 - .. 
serverstatus:: shardingStatistics.resharding A document with statistics about :ref:`resharding operations @@ -5683,17 +5785,14 @@ shardedIndexConsistency sharded collections. The returned metrics are meaningful only when run on the primary of - the :ref:`config server replica set - ` for a version 4.4+ (and - 4.2.6+) sharded cluster. + the :ref:`config server replica set ` for a + sharded cluster. .. seealso:: - :parameter:`enableShardedIndexConsistencyCheck` parameter - :parameter:`shardedIndexConsistencyCheckIntervalMS` parameter - *New in version 4.4. (and 4.2.6)* - .. serverstatus:: shardedIndexConsistency.numShardedCollectionsWithInconsistentIndexes *Available only on config server instances.* @@ -5708,16 +5807,13 @@ shardedIndexConsistency The returned metrics are meaningful only when run on the primary of the :ref:`config server replica set - ` for a version 4.4+ (and - 4.2.6+) sharded cluster. + ` for a sharded cluster. .. seealso:: - :parameter:`enableShardedIndexConsistencyCheck` parameter - :parameter:`shardedIndexConsistencyCheckIntervalMS` parameter - *New in version 4.4. (and 4.2.6)* - .. _server-status-storage-engine: storageEngine @@ -5842,57 +5938,11 @@ transactions .. serverstatus:: transactions When run on a :binary:`~bin.mongod`, a document with data about the - :doc:`retryable writes ` and + :ref:`retryable wrties ` and :ref:`transactions `. When run on a :binary:`~bin.mongos`, a document with data about the - :doc:`transactions ` run on the instance. - -.. serverstatus:: metrics.network - - .. versionadded:: 6.3 - - A document that reports server network metrics. - -.. serverstatus:: metrics.network.totalEgressConnectionEstablishmentTimeMillis - - .. versionadded:: 6.3 - - The total time in milliseconds to establish server connections. - -.. serverstatus:: metrics.network.totalIngressTLSConnections - - .. versionadded:: 6.3 - - The total number of incoming connections to the server that use TLS. - The number is cumulative and is the total after the server was - started. - -.. serverstatus:: metrics.network.totalIngressTLSHandshakeTimeMillis - - .. versionadded:: 6.3 - - The total time in milliseconds that incoming connections to the - server have to wait for the TLS network handshake to complete. The - number is cumulative and is the total after the server was started. - -.. serverstatus:: metrics.network.totalTimeForEgressConnectionAcquiredToWireMicros - - .. versionadded:: 6.3 - - The total time in microseconds that operations wait between - acquisition of a server connection and writing the bytes to send to - the server over the network. The number is cumulative and is the - total after the server was started. - -.. serverstatus:: metrics.network.totalTimeToFirstNonAuthCommandMillis - - .. versionadded:: 6.3 - - The total time in milliseconds from accepting incoming connections to - the server and receiving the first operation that isn't part of the - connection authentication handshake. The number is cumulative and is - the total after the server was started. + :ref:`transactions ` run on the instance. .. serverstatus:: transactions.retriedCommandsCount diff --git a/source/reference/command/setClusterParameter.txt b/source/reference/command/setClusterParameter.txt index 59104e8529b..9355521b976 100644 --- a/source/reference/command/setClusterParameter.txt +++ b/source/reference/command/setClusterParameter.txt @@ -26,6 +26,16 @@ Definition .. 
include:: /includes/reference/fact-setClusterParameter-availability.rst +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst Syntax ------ diff --git a/source/reference/command/setDefaultRWConcern.txt b/source/reference/command/setDefaultRWConcern.txt index 5772c1b2811..970e4e8343a 100644 --- a/source/reference/command/setDefaultRWConcern.txt +++ b/source/reference/command/setDefaultRWConcern.txt @@ -13,8 +13,6 @@ setDefaultRWConcern Definition ---------- -.. versionadded:: 4.4 - .. dbcommand:: setDefaultRWConcern The :dbcommand:`setDefaultRWConcern` administrative command sets the @@ -28,6 +26,17 @@ Definition - For sharded clusters, issue the :dbcommand:`setDefaultRWConcern` on a :binary:`~bin.mongos`. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ @@ -133,9 +142,7 @@ The command takes the following fields: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - - .. versionadded:: 4.4 - + :dbcommand:`setDefaultRWConcern` returns an object that contains the currently configured global default read and write concern. See :dbcommand:`getDefaultRWConcern` for more complete documentation on diff --git a/source/reference/command/setFeatureCompatibilityVersion.txt b/source/reference/command/setFeatureCompatibilityVersion.txt index d2123ebd98d..da0bb8000c0 100644 --- a/source/reference/command/setFeatureCompatibilityVersion.txt +++ b/source/reference/command/setFeatureCompatibilityVersion.txt @@ -31,6 +31,17 @@ Definition to ensure the likelihood of downgrade is minimal. When you are confident that the likelihood of downgrade is minimal, enable these features. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-free-or-m10.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ @@ -163,7 +174,7 @@ To move the cluster out of the ``downgrading`` state, either: the {+fcv+} downgrade began. If a failed {+fcv+} downgrade's internal metadata is not - cleaned up, any subsequent FCV upgrade attempt fails with an + cleaned up, any subsequent fCV upgrade attempt fails with an error message. You must complete the {+fcv+} downgrade before trying to upgrade the {+fcv+}. diff --git a/source/reference/command/setIndexCommitQuorum.txt b/source/reference/command/setIndexCommitQuorum.txt index 2a6592d0c99..ebdf532e898 100644 --- a/source/reference/command/setIndexCommitQuorum.txt +++ b/source/reference/command/setIndexCommitQuorum.txt @@ -10,14 +10,23 @@ setIndexCommitQuorum :depth: 1 :class: singlecol -.. versionadded:: 4.4 - .. dbcommand:: setIndexCommitQuorum The ``setIndexCommitQuorum`` command sets minimum number of data-bearing members that must be prepared to commit their local index builds before the primary node will commit the index. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. 
include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ @@ -107,9 +116,6 @@ The command takes the following fields: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - - .. versionadded:: 4.4 - Behavior -------- diff --git a/source/reference/command/setParameter.txt b/source/reference/command/setParameter.txt index b51ba94fa48..870f7676e5b 100644 --- a/source/reference/command/setParameter.txt +++ b/source/reference/command/setParameter.txt @@ -21,6 +21,17 @@ Definition the :dbcommand:`setParameter` command against the :term:`admin database`. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-free-or-m10.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ diff --git a/source/reference/command/shardCollection.txt b/source/reference/command/shardCollection.txt index a956bafc1b9..59c2fe7c67b 100644 --- a/source/reference/command/shardCollection.txt +++ b/source/reference/command/shardCollection.txt @@ -92,7 +92,7 @@ The command takes the following fields: Set the field values to either: - - ``1`` for :doc:`ranged based sharding ` + - ``1`` for :ref:`range-based sharding ` - ``"hashed"`` to specify a :ref:`hashed shard key `. @@ -135,14 +135,12 @@ The command takes the following fields: no zones and zone ranges are defined for the empty collection, MongoDB attempts to evenly distributed the specified number of chunks across the shards in the cluster. - + - If sharding with :ref:`presplitHashedZones: false ` or omitted and zones and zone ranges have been defined for the empty - collection, ``numInitChunks`` has no effect. - - .. versionchanged:: 4.4 - + collection, ``numInitialChunks`` has no effect. + * - ``collation`` - document - Optional. If the collection specified to ``shardCollection`` @@ -175,9 +173,7 @@ The command takes the following fields: - The defined zone range or ranges do not meet the :ref:`requirements `. - - .. versionadded:: 4.4 - + * - :ref:`timeseries ` - object - .. _cmd-shard-collection-timeseries: diff --git a/source/reference/command/shardConnPoolStats.txt b/source/reference/command/shardConnPoolStats.txt index 652475f5121..816bff6141e 100644 --- a/source/reference/command/shardConnPoolStats.txt +++ b/source/reference/command/shardConnPoolStats.txt @@ -40,6 +40,17 @@ Definition MongoDB returns the connection to the connection pool after every operation. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-serverless.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Output ------ diff --git a/source/reference/command/shutdown.txt b/source/reference/command/shutdown.txt index d6c58f63bf9..4555716d47b 100644 --- a/source/reference/command/shutdown.txt +++ b/source/reference/command/shutdown.txt @@ -18,6 +18,17 @@ shutdown and then terminates the process. You must issue the :dbcommand:`shutdown` command against the :term:`admin database`. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. 
include:: /includes/fact-environments-atlas-support-no-free-or-m10.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ @@ -75,8 +86,7 @@ The command takes these fields: * - ``comment`` - .. include:: /includes/extracts/comment-content.rst - .. versionadded:: 4.4 - + .. seealso:: :method:`db.shutdownServer()` @@ -111,15 +121,6 @@ Shutting Down the Replica Set Primary, Secondary, or ``mongos`` .. include:: /includes/quiesce-period.rst -In MongoDB 4.4 and earlier, if running :dbcommand:`shutdown` against the -replica set :term:`primary`, the operation implicitly uses -:dbcommand:`replSetStepDown` to step down the primary before shutting -down the :binary:`~bin.mongod`. If no secondary in the replica set can -catch up to the primary within ``10`` seconds, the shutdown operation -fails. You can issue :dbcommand:`shutdown` with :ref:`force: true -` to shut down the primary *even if* the step down -fails. - .. warning:: Force shutdown of the primary can result in the diff --git a/source/reference/command/startSession.txt b/source/reference/command/startSession.txt index 1f36f04790d..5475dece85a 100644 --- a/source/reference/command/startSession.txt +++ b/source/reference/command/startSession.txt @@ -86,6 +86,8 @@ session. If the deployment transitions to auth without any downtime, any sessions without an owner cannot be used. +.. include:: /includes/client-sessions-reuse.rst + Output ------ diff --git a/source/reference/command/top.txt b/source/reference/command/top.txt index 2f35c46ce22..f66d2dbc539 100644 --- a/source/reference/command/top.txt +++ b/source/reference/command/top.txt @@ -38,6 +38,17 @@ Redaction When using :ref:`Queryable Encryption `, the ``top`` command only returns the collection name. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-serverless.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ diff --git a/source/reference/command/update.txt b/source/reference/command/update.txt index 3b6ce629a72..04cc7202f14 100644 --- a/source/reference/command/update.txt +++ b/source/reference/command/update.txt @@ -4,6 +4,10 @@ update .. default-domain:: mongodb +.. facet:: + :name: genre + :values: reference + .. contents:: On this page :local: :backlinks: none @@ -125,9 +129,6 @@ The command takes the following fields: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - - .. versionadded:: 4.4 - * - :ref:`let ` - document @@ -185,7 +186,8 @@ Each document contains the following fields: - document - .. _update-command-c: - Optional. + Optional. You can specify ``c`` only if :ref:`u + ` is a pipeline. .. include:: /includes/let-variables-syntax.rst @@ -548,7 +550,7 @@ See also :ref:`cmd-update-sharded-upsert`. Missing Shard Key ````````````````` -Starting in version 4.4, documents in a sharded collection can be +Documents in a sharded collection can be :ref:`missing the shard key fields `. To use :dbcommand:`update` to set the document's **missing** shard key, you :red:`must` run on a @@ -1238,8 +1240,12 @@ The returned document contains a subset of the following fields: .. data:: update.writeConcernError - Document that describe error related to write concern and contains - the field: + Document describing errors that relate to the write concern. + + .. |cmd| replace:: :dbcommand:`update` + .. 
include:: /includes/fact-writeConcernError-mongos + + The ``writeConcernError`` documents contain the following fields: .. data:: update.writeConcernError.code @@ -1251,8 +1257,6 @@ The returned document contains a subset of the following fields: .. data:: update.writeConcernError.errInfo.writeConcern - .. versionadded:: 4.4 - .. include:: /includes/fact-errInfo-wc.rst .. data:: update.writeConcernError.errInfo.writeConcern.provenance diff --git a/source/reference/command/updateRole.txt b/source/reference/command/updateRole.txt index 8f08cd3190f..80e248bc5f2 100644 --- a/source/reference/command/updateRole.txt +++ b/source/reference/command/updateRole.txt @@ -126,9 +126,7 @@ The command takes the following fields: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - .. versionadded:: 4.4 - - + .. |local-cmd-name| replace:: :dbcommand:`updateRole` Roles diff --git a/source/reference/command/updateUser.txt b/source/reference/command/updateUser.txt index 283014c916e..e4bf3111879 100644 --- a/source/reference/command/updateUser.txt +++ b/source/reference/command/updateUser.txt @@ -169,9 +169,7 @@ The command takes the following fields: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - - .. versionadded:: 4.4 - + Roles ~~~~~ diff --git a/source/reference/command/updateZoneKeyRange.txt b/source/reference/command/updateZoneKeyRange.txt index 4d909cfcc94..f040fcc9889 100644 --- a/source/reference/command/updateZoneKeyRange.txt +++ b/source/reference/command/updateZoneKeyRange.txt @@ -171,7 +171,7 @@ distribution, see :ref:`pre-define-zone-range-example`. Initial Chunk Distribution with Compound Hashed Shard Keys `````````````````````````````````````````````````````````` -Starting in version 4.4, MongoDB supports sharding collections on +MongoDB supports sharding collections on :ref:`compound hashed indexes `. MongoDB can perform optimized initial chunk creation and distribution when sharding the empty or non-existing collection on a compound hashed shard key. diff --git a/source/reference/command/usersInfo.txt b/source/reference/command/usersInfo.txt index 70f09005e4d..011513044c5 100644 --- a/source/reference/command/usersInfo.txt +++ b/source/reference/command/usersInfo.txt @@ -111,8 +111,6 @@ The command takes the following fields: * - ``comment`` - any - .. include:: /includes/extracts/comment-content.rst - - .. versionadded:: 4.4 .. _usersInfo-field-specification: diff --git a/source/reference/command/validate.txt b/source/reference/command/validate.txt index cccea205397..1ef1a6ead0c 100644 --- a/source/reference/command/validate.txt +++ b/source/reference/command/validate.txt @@ -18,7 +18,10 @@ Definition .. dbcommand:: validate The :dbcommand:`validate` command checks a collection's data and - indexes for correctness and returns the results. + indexes for correctness and returns the results. If you don't run the + command in the :ref:`background `, the + command also repairs any inconsistencies in the count and data size + of a collection. .. |method| replace:: :method:`~db.collection.validate` helper method .. include:: /includes/fact-dbcommand-tip @@ -39,6 +42,17 @@ Definition :binary:`~bin.mongosh` provides a wrapper around :dbcommand:`validate`. +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-free-or-serverless.rst + +.. 
include:: /includes/fact-environments-onprem-only.rst + Syntax ------ @@ -53,6 +67,7 @@ The command has the following syntax: repair: , // Optional, added in MongoDB 5.0 metadata: , // Optional, added in MongoDB 5.0.4 checkBSONConformance: // Optional, added in MongoDB 6.2 + background: // Optional } ) @@ -84,9 +99,9 @@ The command takes the following fields: - If ``true``, performs a more thorough check with the following exception: - - Starting in MongoDB 4.4, full validation on the ``oplog`` - for WiredTiger skips the more thorough check. The - :data:`validate.warnings` includes a notice of the behavior. + - Full validation on the ``oplog`` for WiredTiger skips the more + thorough check. The :data:`validate.warnings` includes a notice of + the behavior. - If ``false``, omits some checks for a faster but less thorough check. @@ -113,6 +128,16 @@ The command takes the following fields: .. include:: /includes/fact-validate-conformance.rst + * - ``background`` + - boolean + - .. _cmd-validate-background: + + *Optional*. If ``true``, MongoDB runs the ``validate`` command in the + background. If ``false``, the command repairs any inconsistencies + in the count and data size of a collection. + + The default is ``false``. + Behavior -------- @@ -164,6 +189,19 @@ Starting in MongoDB 6.0, the ``validate`` command returns a message if a :ref:`unique index ` has a key format that is incompatible. The message indicates an old format is used. +Count and Data Size Statistics +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The :dbcommand:`validate` command updates the collection's count and +data size statistics in the :dbcommand:`collStats` :ref:`output +` with their correct values. + +.. note:: + + In the event of an :ref:`unclean shutdown + `, the count and data size + statistics might be inaccurate. + Examples -------- diff --git a/source/reference/command/whatsmyuri.txt b/source/reference/command/whatsmyuri.txt index cc30d6ed732..affd37966d2 100644 --- a/source/reference/command/whatsmyuri.txt +++ b/source/reference/command/whatsmyuri.txt @@ -15,3 +15,14 @@ whatsmyuri :dbcommand:`whatsmyuri` is an internal command. .. slave-ok + +Compatibility +------------- + +This command is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst diff --git a/source/reference/config-database.txt b/source/reference/config-database.txt index 51aa8be30bc..a24b7392929 100644 --- a/source/reference/config-database.txt +++ b/source/reference/config-database.txt @@ -20,6 +20,26 @@ The collections in the ``config`` database support: replica sets, and sharded clusters and retryable writes for replica sets and sharded clusters. +.. note:: + + Sharded clusters may show different collections in the + ``config`` database, depending on whether you connect to + :program:`mongos` or :program:`mongod`: + + - On ``mongos``, the ``config`` database shows collections + located on the config servers, such as + :data:`~config.collections` or :data:`~config.chunks`. + + - On ``mongod``, the ``config`` database shows + collections specific to the given shard, such as + :data:`~config.migrationCoordinators` or + :data:`~config.rangeDeletions`. + + When a config server and a shard are hosted on the same node, + :program:`mongos` may have access to some shard-local + collections in the ``config`` database. 
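
   For example, you can list the ``config`` collections that are visible
   from the current connection in :binary:`~bin.mongosh`; the set returned
   depends on whether the connection is to :program:`mongos` or to a
   shard's :program:`mongod`:

   .. code-block:: javascript

      // List the config collections visible on this connection.
      // On mongos this typically includes cluster metadata such as
      // config.collections and config.chunks; on a shard mongod it
      // includes shard-local collections such as config.rangeDeletions.
      db.getSiblingDB( "config" ).getCollectionNames()
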
+ + Restrictions ------------ @@ -164,7 +184,7 @@ to support sharding: .. data:: config.changelog.what - Reflects the type of change recorded. Possible values include: + The type of change recorded. Possible values include: - ``dropCollection`` - ``dropCollection.start`` @@ -181,7 +201,7 @@ to support sharding: .. data:: config.changelog.details - A :term:`document` that contains additional details regarding + A :term:`document` that contains additional details for the change. The structure of the :data:`~config.changelog.details` document depends on the type of change. @@ -189,30 +209,40 @@ to support sharding: .. include:: /includes/admonition-config-db-is-internal.rst - The :data:`~config.chunks` collection stores a document for each chunk in - the cluster. Consider the following example of a document for a - chunk named ``mydb.foo-a_\"cat\"``: + The :data:`config.chunks` collection stores a document for each + chunk in the cluster. The following example shows a document: .. code-block:: javascript { - "_id" : "mydb.foo-a_\"cat\"", - "lastmod" : Timestamp(2, 1), - "uuid": "c025d039-e626-435e-b2d2-c1d436038041", - "min" : { - "animal" : "cat" - }, - "max" : { - "animal" : "dog" - }, - "shard" : "shard0004", - "history" : [ { "validAfter" : Timestamp(1569368571, 27), "shard" : "shard0004" } ] + _id: ObjectId('65a954c0de11596e08e7c1dc'), + uuid: UUID('a4479215-a38d-478f-a82b-e5e95d455e55'), + min: { a: Long('121204345') }, + max: { a: Long('993849349') }, + shard: 'shard01', + lastmod: Timestamp({ t: 1, i: 0 }), + history: [ + { + validAfter: Timestamp({ t: 1705596095, i: 14 }), + shard: 'shard01' + } + ] } - These documents store the range of values for the shard key that - describe the chunk in the ``min`` and ``max`` fields. Additionally - the ``shard`` field identifies the shard in the cluster that "owns" - the chunk. + In the document: + + - ``_id`` is the chunk identifier. + - ``min`` and ``max`` are the range of values for the chunk's shard + key. + - ``shard`` is the name of the shard that stores the chunk in the + cluster. + + .. tip:: + + To find the chunks in a collection, retrieve the collection's + ``uuid`` identifier from the :data:`config.collections` + collection. Then, use ``uuid`` to retrieve the matching document + with the same ``uuid`` from the ``config.chunks`` collection. .. data:: config.collections @@ -271,8 +301,6 @@ to support sharding: .. data:: config.migrationCoordinators - .. versionadded:: 4.4 - The :data:`~config.migrationCoordinators` collection exists on each shard and stores a document for each in-progress :term:`chunk` migration from this shard to another shard. The chunk migration fails @@ -412,7 +440,7 @@ to support sharding: .. include:: /includes/admonition-config-db-is-internal.rst The :data:`~config.version` collection holds the current metadata version number. This - collection contains only one document. For example: + collection contains only one document. For example: .. 
code-block:: javascript @@ -430,7 +458,7 @@ to support sharding: Collections to Support Sessions ------------------------------- -Starting in MongoDB 3.6, the ``config`` database contains the +The ``config`` database contains the *internal* collections to support :ref:`causally consistent sessions ` for standalones, replica sets, and sharded clusters and retryable writes and :ref:`transactions ` for diff --git a/source/reference/configuration-file-settings-command-line-options-mapping.txt b/source/reference/configuration-file-settings-command-line-options-mapping.txt index 92d6a87f02f..c0f1fe7e470 100644 --- a/source/reference/configuration-file-settings-command-line-options-mapping.txt +++ b/source/reference/configuration-file-settings-command-line-options-mapping.txt @@ -484,19 +484,11 @@ Starting in version 5.0: MongoDB removes the ``--serviceExecutor`` command-line option and the corresponding ``net.serviceExecutor`` configuration option. -Starting in version 4.4: - MongoDB removes the ``--noIndexBuildRetry`` command-line option and - the corresponding ``storage.indexBuildRetry`` option. - -Starting in version 4.2: - MongoDB removes the deprecated MMAPv1 storage engine and the - MMAPv1-specific configuration options: - - .. include:: /includes/removed-mmapv1-options.rst - - For earlier versions of MongoDB, refer to the corresponding version of - the manual. For example: - - - :v4.0:`https://www.mongodb.com/docs/v4.0 ` - - :v3.6:`https://www.mongodb.com/docs/v3.6 ` - - :v3.4:`https://www.mongodb.com/docs/v3.4 ` + For earlier versions of MongoDB, refer to the corresponding version of + the manual. For example: + + - :v4.4:`https://www.mongodb.com/docs/v4.4 ` + - :v4.2:`https://www.mongodb.com/docs/v4.2 ` + - :v4.0:`https://www.mongodb.com/docs/v4.0 ` + - :v3.6:`https://www.mongodb.com/docs/v3.6 ` + - :v3.4:`https://www.mongodb.com/docs/v3.4 ` diff --git a/source/reference/configuration-options.txt b/source/reference/configuration-options.txt index e177562512d..aff21bf9ae2 100644 --- a/source/reference/configuration-options.txt +++ b/source/reference/configuration-options.txt @@ -10,6 +10,9 @@ Configuration File Options :name: programming_language :values: shell +.. meta:: + :description: Specify configuration file options to manage large scale deployments and control MongoDB behavior. + .. contents:: On this page :local: :backlinks: none @@ -27,6 +30,10 @@ versions of MongoDB, see the appropriate version of the MongoDB Manual. to configure settings for your {+atlas+} deployment, see :atlas:`Configure Additional Settings `. +In addition to using the configuration file options, the default +configuration for the MongoDB binaries also uses the operating system +environment variables. + .. _conf-file: Configuration File @@ -74,7 +81,8 @@ settings that you may adapt to your local configuration: The Linux package init scripts included in the official MongoDB packages depend on specific values for :setting:`systemLog.path`, :setting:`storage.dbPath`, and -:setting:`processManagement.fork`. If you modify these settings in the default +:setting:`processManagement.fork` or ``MONGODB_CONFIG_OVERRIDE_NOFORK`` +system environment variable. If you modify these settings in the default configuration file, :binary:`~bin.mongod` may not start. .. [#yaml-json] YAML is a superset of :term:`JSON`. @@ -194,7 +202,7 @@ Core Options *Default*: 0 - The default :doc:`log message ` + The default :ref:`log message ` verbosity level for :ref:`components `. 
The verbosity level determines the amount of :ref:`Informational and Debug ` messages MongoDB outputs. [#log-message]_ @@ -273,8 +281,10 @@ Core Options *Default*: false - When ``true``, :binary:`~bin.mongos` or :binary:`~bin.mongod` appends new entries to the end of the existing log file when the :binary:`~bin.mongos` or :binary:`~bin.mongod` - instance restarts. Without this option, :binary:`~bin.mongod` will back up the + When ``true``, :binary:`~bin.mongos` or :binary:`~bin.mongod` + appends new entries to the end of the existing log file when + the instance restarts. Without this option, + :binary:`~bin.mongod` or :binary:`~bin.mongos` backs up the existing log and create a new file. @@ -902,15 +912,24 @@ Core Options *Default*: false - Enable a :term:`daemon` mode that runs the :binary:`~bin.mongos` or :binary:`~bin.mongod` process in the - background. By default :binary:`~bin.mongos` or :binary:`~bin.mongod` does not run as a daemon: - typically you will run :binary:`~bin.mongos` or :binary:`~bin.mongod` as a daemon, either by using - :setting:`processManagement.fork` or by using a controlling process that handles the - daemonization process (e.g. as with ``upstart`` and ``systemd``). - + Enable a :term:`daemon` mode that runs the :binary:`~bin.mongos` or + :binary:`~bin.mongod` process in the background. By default + :binary:`~bin.mongos` or :binary:`~bin.mongod` does not run as a daemon. + To use :binary:`~bin.mongos` or :binary:`~bin.mongod` as a daemon, set + :setting:`processManagement.fork` or use a controlling process that + handles the daemonization process (for example, ``systemd``). + The :setting:`processManagement.fork` option is not supported on Windows. .. include:: /includes/extracts/linux-config-expectations-processmanagement-fork.rst + + .. note:: + + Alternatively, you can set the ``MONGODB_CONFIG_OVERRIDE_NOFORK`` + environment variable on your system to ``true`` to run the + :binary:`~bin.mongos` or :binary:`~bin.mongod` process in the + background. If you set the environment variable, it overrides the + setting for ``processManagement.fork``. .. setting:: processManagement.pidFilePath @@ -1128,15 +1147,15 @@ Core Options | *Default (Linux):* (`RLIMIT_NOFILE `__) * 0.8 - The maximum number of simultaneous connections that :binary:`~bin.mongos` or :binary:`~bin.mongod` will - accept. This setting has no effect if it is higher than your operating + The maximum number of simultaneous connections that :binary:`~bin.mongos` or :binary:`~bin.mongod` + accepts. This setting has no effect if it is higher than your operating system's configured maximum connection tracking threshold. - - Do not assign too low of a value to this option, or you will + + Do not assign too low of a value to this option, or you may encounter errors during normal application operation. .. include:: /includes/fact-maxconns-mongos.rst - + .. setting:: net.wireObjectCheck @@ -1368,15 +1387,16 @@ Core Options :setting:`~net.tls.certificateKeyFile`). Use the :setting:`net.tls.certificateKeyFilePassword` option only if the certificate-key file is encrypted. In all cases, the - :binary:`~bin.mongos` or :binary:`~bin.mongod` will redact the + :binary:`~bin.mongos` or :binary:`~bin.mongod` redacts the password from all logging and reporting output. Starting in MongoDB 4.0: - On Linux/BSD, if the private key in the PEM file is encrypted and - you do not specify the - :setting:`net.tls.certificateKeyFilePassword` option, MongoDB will - prompt for a passphrase. See :ref:`ssl-certificate-password`. 
+ you do not specify the :setting:`net.tls.certificateKeyFilePassword` + option, MongoDB prompts for a passphrase. + + For more information, see :ref:`ssl-certificate-password`. - On macOS, if the private key in the PEM file is encrypted, you must explicitly specify the @@ -1422,9 +1442,14 @@ Core Options full certificate chain of the specified TLS certificate. Specifically, the secure certificate store must contain the root CA and any intermediate CA certificates required to build the full - certificate chain to the TLS certificate. Do **not** use - :setting:`net.tls.CAFile` or :setting:`net.tls.clusterFile` to - specify the root and intermediate CA certificate + certificate chain to the TLS certificate. + + .. warning:: + + If you use ``net.tls.certificateSelector`` and/or + :setting:`net.tls.clusterCertificateSelector`, we **do not** recommend + using :setting:`net.tls.CAFile` or :setting:`net.tls.clusterFile` to + specify the root and intermediate CA certificate For example, if the TLS certificate was signed with a single root CA certificate, the secure certificate store must contain that root @@ -1463,9 +1488,14 @@ Core Options full certificate chain of the specified cluster certificate. Specifically, the secure certificate store must contain the root CA and any intermediate CA certificates required to build the full - certificate chain to the cluster certificate. Do **not** use - :setting:`net.tls.CAFile` or :setting:`net.tls.clusterCAFile` to - specify the root and intermediate CA certificate. + certificate chain to the cluster certificate. + + .. warning:: + + If you use :setting:`net.tls.certificateSelector` and/or + ``net.tls.clusterCertificateSelector``, we **do not** recommend using + :setting:`net.tls.CAFile` or :setting:`net.tls.clusterCAFile` to specify + the root and intermediate CA certificate. For example, if the cluster certificate was signed with a single root CA certificate, the secure certificate store must contain that root @@ -1523,17 +1553,18 @@ Core Options The password to de-crypt the x.509 certificate-key file specified with ``--sslClusterFile``. Use the :setting:`net.tls.clusterPassword` option only if the - certificate-key file is encrypted. In all cases, the - :binary:`~bin.mongos` or :binary:`~bin.mongod` will redact the + certificate-key file is encrypted. In all cases, + :binary:`~bin.mongos` or :binary:`~bin.mongod` redacts the password from all logging and reporting output. - + Starting in MongoDB 4.0: - + - On Linux/BSD, if the private key in the x.509 file is encrypted and you do not specify the :setting:`net.tls.clusterPassword` option, - MongoDB will prompt for a passphrase. See - :ref:`ssl-certificate-password`. - + MongoDB prompts for a passphrase. + + For more information, see :ref:`ssl-certificate-password`. + - On macOS, if the private key in the x.509 file is encrypted, you must explicitly specify the :setting:`net.tls.clusterPassword` option. Alternatively, you can either use a certificate from the @@ -1675,8 +1706,8 @@ Core Options MongoDB 4.0 and :setting:`net.tls.certificateSelector` in MongoDB 4.2+ to use the system SSL certificate store. - - Starting in version 4.4, to check for certificate revocation, - MongoDB :parameter:`enables ` the use of OCSP + - To check for certificate revocation, MongoDB + :parameter:`enables ` the use of OCSP (Online Certificate Status Protocol) by default as an alternative to specifying a CRL file or using the system SSL certificate store. @@ -1721,11 +1752,11 @@ Core Options .. 
include:: /includes/extracts/tls-facts-x509-invalid-certificate.rst - When using - the :setting:`net.tls.allowInvalidCertificates` setting, MongoDB + When using the ``net.tls.allowInvalidCertificates`` setting, MongoDB logs a warning regarding the use of the invalid certificate. - .. include:: /includes/extracts/tls-facts-see-more.rst + For more information about TLS and MongoDB, see + :ref:`configure-mongod-mongos-for-tls-ssl` and :ref:`inter-process-auth`. .. setting:: net.tls.allowInvalidHostnames @@ -1734,12 +1765,14 @@ Core Options *Default*: false - When :setting:`net.tls.allowInvalidHostnames` is ``true``, MongoDB disables the validation of the - hostnames in TLS certificates, allowing :binary:`~bin.mongod` to connect to - MongoDB instances if the hostname their certificates do not match the - specified hostname. + When ``net.tls.allowInvalidHostnames`` is ``true``, MongoDB disables + the validation of the hostnames in TLS certificates. This allows + :binary:`~bin.mongod` or :binary:`~bin.mongos` to connect to other MongoDB + instances in the cluster, even if the hostname of their certificates does not + match the specified hostname. - .. include:: /includes/extracts/tls-facts-see-more.rst + For more information about TLS and MongoDB, see + :ref:`configure-mongod-mongos-for-tls-ssl`. .. setting:: net.tls.disabledProtocols @@ -1952,15 +1985,16 @@ Core Options :setting:`~net.ssl.PEMKeyFile`). Use the :setting:`net.ssl.PEMKeyPassword` option only if the certificate-key file is encrypted. In all cases, the :binary:`~bin.mongos` or - :binary:`~bin.mongod` will redact the password from all logging and + :binary:`~bin.mongod` redacts the password from all logging and reporting output. Starting in MongoDB 4.0: - On Linux/BSD, if the private key in the PEM file is encrypted and you do not specify the :setting:`net.ssl.PEMKeyPassword` option, - MongoDB will prompt for a passphrase. See - :ref:`ssl-certificate-password`. + MongoDB prompts for a passphrase. + + For more information, see :ref:`ssl-certificate-password`. - On macOS, if the private key in the PEM file is encrypted, you must explicitly specify the :setting:`net.ssl.PEMKeyPassword` option. @@ -2112,15 +2146,16 @@ Core Options The password to de-crypt the x.509 certificate-key file specified with ``--sslClusterFile``. Use the :setting:`net.ssl.clusterPassword` option only if the certificate-key file is encrypted. In all cases, - the :binary:`~bin.mongos` or :binary:`~bin.mongod` will redact the + the :binary:`~bin.mongos` or :binary:`~bin.mongod` redacts the password from all logging and reporting output. Starting in MongoDB 4.0: - On Linux/BSD, if the private key in the x.509 file is encrypted and you do not specify the :setting:`net.ssl.clusterPassword` option, - MongoDB will prompt for a passphrase. See - :ref:`ssl-certificate-password`. + MongoDB prompts for a passphrase. + + For more information, see :ref:`ssl-certificate-password`. - On macOS, if the private key in the x.509 file is encrypted, you must explicitly specify the :setting:`net.ssl.clusterPassword` @@ -2227,11 +2262,10 @@ Core Options MongoDB 4.0 and :setting:`net.tls.certificateSelector` in MongoDB 4.2 to use the system SSL certificate store. - - Starting in version 4.4, MongoDB :ref:`enables `, - by default, the use of OCSP (Online Certificate Status - Protocol) to check for certificate revocation as an alternative - to specifying a CRL file or using the system SSL certificate - store. 
+ - MongoDB :ref:`enables `, by default, the use of OCSP + (Online Certificate Status Protocol) to check for certificate revocation + as an alternative to specifying a CRL file or using the system SSL + certificate store. .. include:: /includes/extracts/ssl-facts-see-more.rst @@ -2321,7 +2355,7 @@ Core Options - To list multiple protocols, specify as a comma separated list of protocols. For example ``TLS1_0,TLS1_1``. - - Specifying an unrecognized protocol will prevent the server from + - Specifying an unrecognized protocol prevents the server from starting. - The specified disabled protocols overrides any default disabled @@ -2586,10 +2620,9 @@ Core Options If you do not use these operations, disable server-side scripting. - Starting in version 4.4, the :setting:`security.javascriptEnabled` - is available for both :binary:`~bin.mongod` and - :binary:`~bin.mongos`. In earlier versions, the setting is only - available for :binary:`~bin.mongod`. + The :setting:`security.javascriptEnabled` is available for both + :binary:`~bin.mongod` and :binary:`~bin.mongos`. In earlier versions, the + setting is only available for :binary:`~bin.mongod`. .. setting:: security.redactClientLogData @@ -2765,9 +2798,9 @@ Key Management Configuration Options *Type*: string - The path to the local keyfile when managing keys via process *other - than* KMIP. Only set when managing keys via process other than KMIP. - If data is already encrypted using KMIP, MongoDB will throw an error. + The path to the local keyfile when managing keys through a process *other + than* KMIP. Only set when managing keys through a process other than KMIP. + If data is already encrypted using KMIP, MongoDB throws an error. Requires :setting:`security.enableEncryption` to be ``true``. @@ -2786,12 +2819,11 @@ Key Management Configuration Options encryption for the :binary:`~bin.mongod` instance. Requires :setting:`security.enableEncryption` to be true. - If unspecified, MongoDB will request that the KMIP server create a + If unspecified, MongoDB requests that the KMIP server create a new key to utilize as the system key. If the KMIP server cannot locate a key with the specified identifier - or the data is already encrypted with a key, MongoDB will throw an - error. + or the data is already encrypted with a key, MongoDB throws an error. .. include:: /includes/fact-enterprise-only-admonition.rst @@ -2824,10 +2856,10 @@ Key Management Configuration Options :setting:`security.enableEncryption` to be true. Starting in MongoDB 4.2.1 (and 4.0.14), you can specify multiple KMIP - servers as a comma-separated list, e.g. + servers as a comma-separated list, for example ``server1.example.com,server2.example.com``. On startup, the - :binary:`~bin.mongod` will attempt to establish a connection to each - server in the order listed, and will select the first server to + :binary:`~bin.mongod` attempts to establish a connection to each + server in the order listed, and selects the first server to which it can successfully establish a connection. KMIP server selection occurs only at startup. @@ -2851,8 +2883,8 @@ Key Management Configuration Options :setting:`security.enableEncryption` to be true. If specifying multiple KMIP servers with - :setting:`security.kmip.serverName`, the :binary:`~bin.mongod` will - use the port specified with :setting:`security.kmip.port` for all + :setting:`security.kmip.serverName`, the :binary:`~bin.mongod` + uses the port specified with :setting:`security.kmip.port` for all provided KMIP servers. .. 
include:: /includes/fact-enterprise-only-admonition.rst @@ -2870,11 +2902,10 @@ Key Management Configuration Options To use this setting, you must also specify the :setting:`security.kmip.serverName` setting. - .. note:: - - Starting in 4.0, on macOS or Windows, you can use a certificate - from the operating system's secure store instead of a PEM key - file. See :setting:`security.kmip.clientCertificateSelector`. + .. |kmip-client-cert-file| replace:: ``security.kmip.clientCertificateFile`` + .. |kmip-client-cert-selector| replace:: :setting:`security.kmip.clientCertificateSelector` + + .. include:: /includes/enable-KMIP-on-windows.rst .. include:: /includes/fact-enterprise-only-admonition.rst @@ -2898,7 +2929,7 @@ Key Management Configuration Options *Type*: string - .. versionadded:: 4.0 (and 4.2.15, 4.4.7, and 5.0) + .. versionadded:: 4.0 (and 5.0) Available on Windows and macOS as an alternative to :setting:`security.kmip.clientCertificateFile`. @@ -2942,9 +2973,6 @@ Key Management Configuration Options *Default*: 0 - - .. versionadded:: 4.4 - How many times to retry the initial connection to the KMIP server. Use together with :setting:`~security.kmip.connectTimeoutMS` to control how long the :binary:`~bin.mongod` waits for a response @@ -2959,12 +2987,9 @@ Key Management Configuration Options *Default*: 5000 - - .. versionadded:: 4.4 - Timeout in milliseconds to wait for a response from the KMIP server. If the :setting:`~security.kmip.connectRetries` setting is specified, - the :binary:`~bin.mongod` will wait up to the value specified with + the :binary:`~bin.mongod` waits up to the value specified with :setting:`~security.kmip.connectTimeoutMS` for each retry. Value must be ``1000`` or greater. @@ -2986,10 +3011,10 @@ Key Management Configuration Options When ``security.kmip.activateKeys`` is ``true`` and you have existing keys on a KMIP server, the key must be activated first or the :binary:`mongod` - node will fail to start. + node fails to start. If the key being used by the mongod transitions into a non-active state, - the :binary:`mongod` node will shut down unless ``kmipActivateKeys`` is + the :binary:`mongod` node shuts down unless ``kmipActivateKeys`` is false. To ensure you have an active key, rotate the KMIP master key by using :setting:`security.kmip.rotateMasterKey`. @@ -3128,6 +3153,12 @@ Key Management Configuration Options :setting:`security.ldap.servers`. MongoDB supports following LDAP referrals as defined in `RFC 4511 4.1.10 `_. Do not use :setting:`security.ldap.servers` for listing every LDAP server in your infrastructure. + + You can prefix LDAP servers with ``srv:`` and ``srv_raw:``. + + .. |ldap-binary| replace:: :binary:`mongod` + + .. include:: /includes/ldap-srv-details.rst This setting can be configured on a running :binary:`~bin.mongod` or :binary:`~bin.mongos` using :dbcommand:`setParameter`. @@ -3151,11 +3182,14 @@ Key Management Configuration Options - Using an LDAP query for :setting:`security.ldap.userToDNMapping`. - The LDAP server disallows anonymous binds - You must use :setting:`~security.ldap.bind.queryUser` with :setting:`~security.ldap.bind.queryPassword`. + You must use :setting:`~security.ldap.bind.queryUser` with + :setting:`~security.ldap.bind.queryPassword`. - If unset, :binary:`~bin.mongod` or :binary:`~bin.mongos` will not attempt to bind to the LDAP server. + If unset, :binary:`~bin.mongod` or :binary:`~bin.mongos` does not + attempt to bind to the LDAP server. 
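As an illustration of the bind settings described above, the following is a minimal sketch of a ``security.ldap`` block that binds to the LDAP server with a query user; the hostnames, DN, and password are placeholders, not recommendations.

.. code-block:: yaml

   security:
     ldap:
       # Placeholder hosts; an srv: prefix would instead resolve hosts from DNS SRV records.
       servers: "ldap1.example.net,ldap2.example.net"
       bind:
         # Credentials used when the LDAP server disallows anonymous binds
         # or when userToDNMapping uses an LDAP query.
         queryUser: "cn=mongodbreader,dc=example,dc=net"
         queryPassword: "replaceWithASecurePassword"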
- This setting can be configured on a running :binary:`~bin.mongod` or :binary:`~bin.mongos` using + This setting can be configured on a running + :binary:`~bin.mongod` or :binary:`~bin.mongos` using :dbcommand:`setParameter`. .. note:: @@ -3295,7 +3329,7 @@ Key Management Configuration Options For Linux deployments, you must configure the appropriate TLS Options in ``/etc/openldap/ldap.conf`` file. Your operating system's package manager - creates this file as part of the MongoDB Enterprise installation, via the + creates this file as part of the MongoDB Enterprise installation, through the ``libldap`` dependency. See the documentation for ``TLS Options`` in the `ldap.conf OpenLDAP documentation `_ @@ -3368,7 +3402,7 @@ Key Management Configuration Options ` that requires a DN. - Transforming the usernames of clients authenticating to Mongo DB using - different authentication mechanisms (e.g. x.509, kerberos) to a full LDAP + different authentication mechanisms (for example, x.509, kerberos) to a full LDAP DN for authorization. :setting:`~security.ldap.userToDNMapping` expects a quote-enclosed JSON-string representing an ordered array @@ -3407,7 +3441,7 @@ Key Management Configuration Options Each curly bracket-enclosed numeric value is replaced by the corresponding `regex capture group `_ extracted - from the authentication username via the ``match`` regex. + from the authentication username through the ``match`` regex. The result of the substitution must be an `RFC4514 `_ escaped string. @@ -3422,7 +3456,7 @@ Key Management Configuration Options respecting RFC4515 and RFC4516. Each curly bracket-enclosed numeric value is replaced by the corresponding `regex capture group `_ extracted - from the authentication username via the ``match`` expression. + from the authentication username through the ``match`` expression. :binary:`~bin.mongod` or :binary:`~bin.mongos` executes the query against the LDAP server to retrieve the LDAP DN for the authenticated user. :binary:`~bin.mongod` or :binary:`~bin.mongos` requires exactly one returned result for the transformation to be @@ -3434,7 +3468,7 @@ Key Management Configuration Options .. note:: An explanation of `RFC4514 `_, - `RFC4515 `_, + `RFC4515 `_, `RFC4516 `_, or LDAP queries is out of scope for the MongoDB Documentation. Please review the RFC directly or use your preferred LDAP resource. @@ -3456,9 +3490,8 @@ Key Management Configuration Options describes fails, :binary:`~bin.mongod` or :binary:`~bin.mongos` returns an error. - Starting in MongoDB 4.4, :binary:`~bin.mongod` or - :binary:`~bin.mongos` also returns an error if one of the - transformations cannot be evaluated due to networking or + :binary:`~bin.mongod` or :binary:`~bin.mongos` also returns an error if one + of the transformations cannot be evaluated due to networking or authentication failures to the LDAP server. :binary:`~bin.mongod` or :binary:`~bin.mongos` rejects the connection request and does not check the remaining documents in the array. @@ -3466,7 +3499,7 @@ Key Management Configuration Options Starting in MongoDB 5.0, :setting:`~security.ldap.userToDNMapping` accepts an empty string ``""`` or empty array ``[ ]`` in place of a mapping document. If providing an empty string or empty array to - :setting:`~security.ldap.userToDNMapping`, MongoDB will map the + :setting:`~security.ldap.userToDNMapping`, MongoDB maps the authenticated username as the LDAP DN. Previously, providing an empty mapping document would cause mapping to fail. 
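To make the ``match``/``substitution`` format more concrete, the following is a hedged sketch of a ``security.ldap.userToDNMapping`` value that captures the entire authenticated username and substitutes it into an RFC 4514 DN; the DN components are placeholders.

.. code-block:: yaml

   security:
     ldap:
       # Quote-enclosed JSON-string: an ordered array of mapping documents.
       # {0} is replaced by the first regex capture group from the match expression.
       userToDNMapping: '[ { match: "(.+)", substitution: "CN={0},OU=Users,DC=example,DC=net" } ]'

A mapping document could instead use ``ldapQuery`` in place of ``substitution`` to look the DN up on the LDAP server.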
@@ -3521,7 +3554,7 @@ Key Management Configuration Options *Available in MongoDB Enterprise only.* A relative LDAP query URL formatted conforming to `RFC4515 - `_ and `RFC4516 + `_ and `RFC4516 `_ that :binary:`~bin.mongod` executes to obtain the LDAP groups to which the authenticated user belongs to. The query is relative to the host or hosts specified in :setting:`security.ldap.servers`. @@ -3589,7 +3622,7 @@ Key Management Configuration Options .. note:: - An explanation of `RFC4515 `_, + An explanation of `RFC4515 `_, `RFC4516 `_ or LDAP queries is out of scope for the MongoDB Documentation. Please review the RFC directly or use your preferred LDAP resource. @@ -3680,15 +3713,6 @@ LDAP Parameters - .. include:: /includes/journal-always-enabled-change.rst -.. versionchanged:: 4.4 - - - MongoDB removes the ``storage.indexBuildRetry`` option and the - corresponding ``--noIndexBuildRetry`` command-line option. - - - MongoDB deprecates - ``storage.wiredTiger.engineConfig.maxCacheOverflowFileSizeGB`` - option. The option has no effect starting in MongoDB 4.4. - .. code-block:: yaml storage: @@ -3703,7 +3727,7 @@ LDAP Parameters cacheSizeGB: journalCompressor: directoryForIndexes: - maxCacheOverflowFileSizeGB: // deprecated in MongoDB 4.4 + maxCacheOverflowFileSizeGB: collectionConfig: blockCompressor: indexConfig: @@ -3776,7 +3800,7 @@ LDAP Parameters On WiredTiger, the default journal commit interval is 100 milliseconds. Additionally, a write that includes or implies - ``j:true`` will cause an immediate sync of the journal. For details + ``j:true`` causes an immediate sync of the journal. For details or additional conditions that affect the frequency of the sync, see :ref:`journal-process`. @@ -3841,26 +3865,22 @@ LDAP Parameters *Default*: 60 The amount of time that can pass before MongoDB flushes data to the data - files via an :term:`fsync` operation. + files. **Do not set this value on production systems.** In almost every situation, you should use the default setting. - .. warning:: - - If you set :setting:`storage.syncPeriodSecs` to ``0``, MongoDB will not sync the - memory mapped files to disk. - The :binary:`~bin.mongod` process writes data very quickly to the journal and lazily to the data files. :setting:`storage.syncPeriodSecs` has no effect on :ref:``, but if :setting:`storage.syncPeriodSecs` is - set to ``0`` the journal will eventually consume all available disk space. + set to ``0`` the journal eventually consumes all available disk space. The :setting:`storage.syncPeriodSecs` setting is available only for :binary:`~bin.mongod`. .. include:: /includes/not-available-for-inmemory-storage-engine.rst + .. include:: /includes/checkpoints.rst .. setting:: storage.engine @@ -3891,9 +3911,9 @@ LDAP Parameters Available in MongoDB Enterprise only. If you attempt to start a :binary:`~bin.mongod` with a - :setting:`storage.dbPath` that contains data files produced by a - storage engine other than the one specified by :setting:`storage.engine`, :binary:`~bin.mongod` - will refuse to start. + :setting:`storage.dbPath` that contains data files produced + by a storage engine other than the one specified by + :setting:`storage.engine`, :binary:`~bin.mongod` refuses to start. @@ -3901,17 +3921,15 @@ LDAP Parameters *Type*: double - .. versionadded:: 4.4 - - Specifies the minimum number of hours to preserve an oplog entry, - where the decimal values represent the fractions of an hour. For - example, a value of ``1.5`` represents one hour and thirty - minutes. 
+ Specifies the minimum number of hours to preserve an oplog entry, + where the decimal values represent the fractions of an hour. For + example, a value of ``1.5`` represents one hour and thirty + minutes. - The value must be greater than or equal to ``0``. A value of ``0`` - indicates that the :binary:`~bin.mongod` should truncate the oplog - starting with the oldest entries to maintain the configured - maximum oplog size. + The value must be greater than or equal to ``0``. A value of ``0`` + indicates that the :binary:`~bin.mongod` should truncate the oplog + starting with the oldest entries to maintain the configured + maximum oplog size. Defaults to ``0``. @@ -3965,7 +3983,7 @@ LDAP Parameters cacheSizeGB: journalCompressor: directoryForIndexes: - maxCacheOverflowFileSizeGB: // Deprecated in MongoDB 4.4 + maxCacheOverflowFileSizeGB: collectionConfig: blockCompressor: indexConfig: @@ -3975,8 +3993,8 @@ LDAP Parameters *Type*: float - Defines the maximum size of the internal cache that WiredTiger will - use for all data. The memory consumed by an index build (see + Defines the maximum size of the internal cache that WiredTiger + uses for all data. The memory consumed by an index build (see :parameter:`maxIndexBuildMemoryUsageMegabytes`) is separate from the WiredTiger cache memory. @@ -4034,48 +4052,6 @@ LDAP Parameters create a symbolic link named ``index`` under the data directory to the new destination. - -.. setting:: storage.wiredTiger.engineConfig.maxCacheOverflowFileSizeGB - - *Type*: float - - .. note:: Deprecated in MongoDB 4.4 - - - MongoDB deprecates the - ``storage.wiredTiger.engineConfig.maxCacheOverflowFileSizeGB`` - option. The option has no effect starting in MongoDB 4.4. - - Specifies the maximum size (in GB) for the "lookaside (or cache - overflow) table" file :file:`WiredTigerLAS.wt` for MongoDB - 4.2.1-4.2.x and 4.0.12-4.0.x. The file no longer exists starting in - version 4.4. - - The setting can accept the following values: - - .. list-table:: - :header-rows: 1 - :widths: 20 80 - - * - Value - - Description - - * - ``0`` - - - The default value. If set to ``0``, the file size is - unbounded. - - * - number >= 0.1 - - - The maximum size (in GB). If the :file:`WiredTigerLAS.wt` - file exceeds this size, :binary:`~bin.mongod` exits with a - fatal assertion. You can clear the :file:`WiredTigerLAS.wt` - file and restart :binary:`~bin.mongod`. - - To change the maximum size during runtime, use the - :parameter:`wiredTigerMaxCacheOverflowSizeGB` parameter. - - *Available starting in MongoDB 4.2.1 (and 4.0.12)* .. setting:: storage.wiredTiger.engineConfig.zstdCompressionLevel @@ -4116,8 +4092,8 @@ LDAP Parameters :setting:`storage.wiredTiger.collectionConfig.blockCompressor` affects all collections created. If you change the value of :setting:`storage.wiredTiger.collectionConfig.blockCompressor` on an existing MongoDB deployment, all new - collections will use the specified compressor. Existing collections - will continue to use the compressor specified when they were + collections uses the specified compressor. Existing collections + continue to use the compressor specified when they were created, or the default compressor at that time. @@ -4132,7 +4108,7 @@ LDAP Parameters The :setting:`storage.wiredTiger.indexConfig.prefixCompression` setting affects all indexes created. If you change the value of :setting:`storage.wiredTiger.indexConfig.prefixCompression` on an existing MongoDB deployment, all new - indexes will use prefix compression. 
Existing indexes + indexes use prefix compression. Existing indexes are not affected. @@ -4303,8 +4279,6 @@ LDAP Parameters mode: all filter: '{ op: "query", millis: { $gt: 2000 } }' - .. versionadded:: 4.4.2 - .. _replication-options: ``replication`` Options @@ -4321,19 +4295,20 @@ LDAP Parameters *Type*: integer - The maximum size in megabytes for the replication operation log - (i.e., the :term:`oplog`). - + .. |oplog-size-setting| replace:: ``oplogSizeMB`` + + .. include:: /includes/reference/oplog-size-setting-intro.rst + .. note:: .. include:: /includes/fact-oplog-size.rst - By default, the :binary:`~bin.mongod` process creates an :term:`oplog` based on - the maximum amount of space available. For 64-bit systems, the oplog - is typically 5% of available disk space. + By default, the :binary:`~bin.mongod` process creates an oplog based + on the maximum amount of space available. For 64-bit systems, the + oplog is typically 5% of available disk space. Once the :binary:`~bin.mongod` has created the oplog for the first - time, changing the :setting:`replication.oplogSizeMB` option will not + time, changing the :setting:`replication.oplogSizeMB` option does not affect the size of the oplog. To change the maximum oplog size after starting the :binary:`~bin.mongod`, use :dbcommand:`replSetResizeOplog`. :dbcommand:`replSetResizeOplog` @@ -4372,7 +4347,7 @@ LDAP Parameters :setting:`~replication.enableMajorityReadConcern` cannot be changed and is always set to ``true``. Attempting to start a storage engine that does not support majority read concern with the - ``--enableMajorityReadConcern`` option will fail and return an error + ``--enableMajorityReadConcern`` option fails and returns an error message. In earlier versions of MongoDB, @@ -4500,8 +4475,8 @@ LDAP Parameters and a facility level of ``user``. The syslog message limit can result in the truncation of - audit messages. The auditing system will neither detect the - truncation nor error upon its occurrence. + audit messages. The auditing system neither detects the + truncation nor errors upon its occurrence. * - ``console`` @@ -4618,18 +4593,19 @@ LDAP Parameters the default value in all of the client :driver:`drivers `. When :binary:`~bin.mongos` receives a request that permits reads to - :term:`secondary` members, the :binary:`~bin.mongos` will: + :term:`secondary` members, the :binary:`~bin.mongos`: - - Find the member of the set with the lowest ping time. + - Finds the member of the set with the lowest ping time. - - Construct a list of replica set members that is within a ping time of + - Constructs a list of replica set members that is within a ping time of 15 milliseconds of the nearest suitable member of the set. - If you specify a value for the :setting:`replication.localPingThresholdMs` option, :binary:`~bin.mongos` will - construct the list of replica members that are within the latency - allowed by this value. + If you specify a value for the + :setting:`replication.localPingThresholdMs` option, + :binary:`~bin.mongos` constructs the list of replica members that are + within the latency allowed by this value. - - Select a member to read from at random from this list. + - Selects a member to read from at random from this list. 
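Taken together, the replication options above might appear in configuration files as in the following sketch; the values are illustrative only, and ``localPingThresholdMs`` applies to :binary:`~bin.mongos` rather than :binary:`~bin.mongod`.

.. code-block:: yaml

   # mongod configuration file (illustrative values)
   replication:
     replSetName: rs0
     oplogSizeMB: 16384   # omit to let mongod size the oplog from available disk space

   ---
   # mongos configuration file (illustrative value)
   replication:
     localPingThresholdMs: 15   # ping-time window used when selecting secondaries to read from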
The ping time used for a member compared by the :setting:`replication.localPingThresholdMs` setting is a moving average of recent ping times, calculated at most every 10 diff --git a/source/reference/connection-string.txt b/source/reference/connection-string.txt index f3b4bfe075c..ff2cac23f9e 100644 --- a/source/reference/connection-string.txt +++ b/source/reference/connection-string.txt @@ -15,6 +15,9 @@ Connection Strings .. meta:: :keywords: atlas, drivers +.. meta:: + :description: Use connection strings to establish connections between MongoDB instances, tools, and applications that use drivers. + .. contents:: On this page :local: :backlinks: none @@ -697,6 +700,16 @@ connecting to the MongoDB deployment. drivers. For information on your driver, see the :driver:`Drivers ` documentation. + * - .. urioption:: maxConnecting + + - Maximum number of connections a pool may be establishing + concurrently. The default value is ``2``. + + ``maxConnecting`` is supported for all drivers **except** the + :driver:`Rust Driver `. + + .. include:: /includes/connection-pool/max-connecting-use-case.rst + * - .. urioption:: maxIdleTimeMS - The maximum number of milliseconds that a connection can remain @@ -768,7 +781,7 @@ timeout using the :urioption:`wtimeoutMS` write concern parameter: * - .. urioption:: w - Corresponds to the write concern :ref:`wc-w`. The ``w`` option - requests acknowledgement that the write operation has propagated + requests acknowledgment that the write operation has propagated to a specified number of :binary:`~bin.mongod` instances or to :binary:`~bin.mongod` instances with specified tags. @@ -790,7 +803,7 @@ timeout using the :urioption:`wtimeoutMS` write concern parameter: * - .. urioption:: journal - Corresponds to the write concern :ref:`wc-j` option. The - :urioption:`journal` option requests acknowledgement from + :urioption:`journal` option requests acknowledgment from MongoDB that the write operation has been written to the :ref:`journal `. For details, see :ref:`wc-j`. @@ -1013,7 +1026,7 @@ credentials are authenticated against the ``admin`` database. - :ref:`MONGODB-X509 ` - - ``MONGODB-AWS`` (*Added in MongoDB 4.4*) + - ``MONGODB-AWS`` - :ref:`GSSAPI ` (Kerberos) @@ -1300,8 +1313,6 @@ deployment. {+atlas+} Cluster that Authenticates with AWS IAM credentials ````````````````````````````````````````````````````````````````` -.. versionadded:: 4.4 - The following connects to a `MongoDB Atlas `_ cluster which has been configured to support authentication via `AWS IAM credentials @@ -1481,4 +1492,3 @@ The following connects to a sharded cluster with three :binary:`~bin.mongos` ins ``D1fficultP%40ssw0rd``: .. include:: /includes/connection-examples-by-language-sharded.rst - diff --git a/source/reference/database-profiler.txt b/source/reference/database-profiler.txt index b0f554d4d70..83f74660494 100644 --- a/source/reference/database-profiler.txt +++ b/source/reference/database-profiler.txt @@ -41,9 +41,8 @@ operations against encrypted collections are omitted from the :data:`system.profile <.system.profile>` collection. For details, see :ref:`qe-redaction`. -Starting in MongoDB 4.4, it is no longer possible to perform any -operation, including reads, on the :data:`system.profile -<.system.profile>` collection from within a +It is no longer possible to perform any operation, including reads, on the +:data:`system.profile <.system.profile>` collection from within a :ref:`transaction `. .. include:: /includes/database-profiler-note.rst @@ -191,8 +190,6 @@ operation. .. 
data:: system.profile.replanReason - .. versionadded:: 4.4 - A string that indicates the specific reason a :ref:`cached plan` was evicted. @@ -441,7 +438,7 @@ operation. .. versionadded:: 6.2 - The time, in milliseconds, that the ``find`` or ``aggregate`` command + The time, in microseconds, that the ``find`` or ``aggregate`` command spent in :ref:`query planning `. .. data:: system.profile.planSummary diff --git a/source/reference/database-references.txt b/source/reference/database-references.txt index 8751d6be06b..0f232b92d87 100644 --- a/source/reference/database-references.txt +++ b/source/reference/database-references.txt @@ -13,6 +13,9 @@ Database References .. meta:: :keywords: drivers +.. meta:: + :description: MongoDB database references store related information in separate documents in different collections or databases. + .. contents:: On this page :local: :backlinks: none @@ -267,7 +270,7 @@ Driver Support for DBRefs .. list-table:: :header-rows: 1 :stub-columns: 1 - :widths: 20 25 80 + :widths: 20 25 55 * - Driver - DBRef Support diff --git a/source/reference/explain-results.txt b/source/reference/explain-results.txt index 3aaa44a62b1..14f7f92a2f5 100644 --- a/source/reference/explain-results.txt +++ b/source/reference/explain-results.txt @@ -12,22 +12,29 @@ Explain Results :depth: 2 :class: singlecol -To return information on query plans and execution statistics of the -query plans, MongoDB provides: +To return information on :ref:`query plans +` and execution statistics of the query +plans, MongoDB provides the following methods: -- the :method:`db.collection.explain()` method, +- :method:`db.collection.explain()` -- the :method:`cursor.explain()` method, and +- :method:`cursor.explain()` -- the :dbcommand:`explain` command. +To learn about important explain result fields and how to interpret +them, see :ref:`interpret-explain-plan`. .. important:: - - Only the most important output fields are shown on this page. + ``explain`` ignores the plan cache. Instead, a set + of candidate plans are generated, and a winner is chosen without consulting + the plan cache. Furthermore, ``explain`` prevents the MongoDB query planner + from caching the winning plan. - - The output is subject to change. +.. note:: - - Some fields are for internal use and are not documented. + Only the most important output fields are shown on this page, and fields for + internal use are not documented. The fields listed in the output are subject + to change. .. _explain-output-structure: @@ -137,7 +144,7 @@ documentation for that version. ~~~~~~~~~~~~~~~~ :data:`explain.queryPlanner` information details the plan selected by -the :doc:`query optimizer `. +the :ref:`query optimizer `. These examples may combine the output structures of MongoDB's classic and slot-based execution engines. They are not meant to be @@ -231,7 +238,7 @@ representative. Your output may differ significantly. .. data:: explain.queryPlanner Contains information on the selection of the query plan by the - :doc:`query optimizer `. + :ref:`query optimizer `. .. data:: explain.queryPlanner.namespace @@ -463,6 +470,22 @@ representative. Your output may differ significantly. }, ... ] + operationMetrics: { + cpuNanos: , + cursorSeeks: , + docBytesRead: , + docBytesWritten: , + docUnitsRead: , + docUnitsReturned: , + docUnitsWritten: , + idxEntryBytesRead: , + idxEntryBytesWritten: , + idxEntryUnitsRead: , + idxEntryUnitsWritten: , + totalUnitsWritten: , + keysSorted: , + sorterSpills: + } } - id: sharded @@ -558,16 +581,24 @@ representative. 
Your output may differ significantly. .. data:: explain.executionStats.nReturned - Number of documents that match the query condition. + Number of documents returned by the winning query plan. :data:`~explain.executionStats.nReturned` corresponds to the ``n`` field returned by ``cursor.explain()`` in earlier versions of MongoDB. .. data:: explain.executionStats.executionTimeMillis Total time in milliseconds required for query plan selection and - query execution. :data:`explain.executionStats.executionTimeMillis` corresponds - to the ``millis`` field returned by ``cursor.explain()`` in - earlier versions of MongoDB. + query execution. It includes the time it takes to run the trial phase + part of the plan selection process, but does not include the network time + to transmit the data back to the client. + + The time reported by ``explain.executionStats.executionTimeMillis`` is + not necessarily representative of actual query time. During steady + state operations (when the query plan is cached), or when using + :method:`cursor.hint()` with ``cursor.explain()``, MongoDB bypasses the + plan selection process, resulting in a faster actual time, leading to + a lower ``explain.executionStats.executionTimeMillis`` value. + .. data:: explain.executionStats.totalKeysExamined @@ -738,6 +769,13 @@ representative. Your output may differ significantly. both the winning and rejected plans. The field is present only if ``explain`` runs in ``allPlansExecution`` verbosity mode. + .. data:: explain.executionStats.operationMetrics + + Contains resource consumption statistics, as long as they + are not zero. The field is present only if ``explain`` + runs in ``executionStats`` verbosity mode or higher and if + :parameter:`profileOperationResourceConsumptionMetrics` is enabled. + .. _serverInfo: ``serverInfo`` @@ -834,7 +872,6 @@ The following fields are included in the explain results for a totalKeysExamined: , collectionScans: , indexesUsed: [ , , ..., ], - nReturned: , executionTimeMillisEstimate: To see the descriptions for the fields in the ``$lookup`` section, see @@ -862,15 +899,10 @@ The other fields are: Array of strings with the names of the indexes used by the query. -.. data:: explain.nReturned - - Number of documents that match the query condition. - .. data:: explain.executionTimeMillisEstimate Estimated time in milliseconds for the query execution. - .. _explain-output-collection-scan: Collection Scan @@ -885,10 +917,23 @@ key pattern, direction of traversal, and index bounds. Starting in MongoDB 5.3, if the query planner selects a :ref:`clustered index ` for a :ref:`clustered -collection `, the explain result includes a +collection ` and the query contains bounds that +define the portion of the index to search, the explain result includes a ``CLUSTERED_IXSCAN`` stage. The stage includes information about the clustered index key and index bounds. +If the query planner selects a :ref:`clustered index +` for a :ref:`clustered collection +` and the query *does not* contain bounds, the +query performs an unbounded collection scan and the explain result +includes a ``COLLSCAN`` stage. + +.. note:: + + The :parameter:`notablescan` parameter does not allow unbounded + queries that use a clustered index because the queries require a + full collection scan. + For more information on execution statistics of collection scans, see :doc:`/tutorial/analyze-query-plan`. @@ -1028,9 +1073,13 @@ documents, blocking the flow of data for that specific query. 
If MongoDB requires using more than 100 megabytes of system memory for the blocking sort operation, MongoDB returns an error *unless* the query -specifies :method:`cursor.allowDiskUse()` (*New in MongoDB 4.4*). -:method:`cursor.allowDiskUse()` allows MongoDB to use temporary files -on disk to store data exceeding the 100 megabyte system memory limit -while processing a blocking sort operation. If the explain plan does not -contain an explicit ``SORT`` stage, then MongoDB can use an index to -obtain the sort order. +specifies :method:`cursor.allowDiskUse()`. :method:`cursor.allowDiskUse()` +allows MongoDB to use temporary files on disk to store data exceeding the 100 +megabyte system memory limit while processing a blocking sort operation. If the +explain plan does not contain an explicit ``SORT`` stage, then MongoDB can use +an index to obtain the sort order. + +.. toctree:: + :titlesonly: + + /tutorial/analyze-query-plan diff --git a/source/reference/glossary.txt b/source/reference/glossary.txt index e0507b7e2ae..c413c199163 100644 --- a/source/reference/glossary.txt +++ b/source/reference/glossary.txt @@ -9,6 +9,9 @@ Glossary .. default-domain:: mongodb +.. meta:: + :description: A glossary of MongoDB terms including operations and features. + .. contents:: On this page :local: :backlinks: none @@ -25,7 +28,7 @@ Glossary :sorted: $cmd - A special virtual :term:`collection` that exposes MongoDB's + A virtual :term:`collection` that exposes MongoDB's :term:`database commands `. To use database commands, see :ref:`issue-commands`. @@ -35,7 +38,7 @@ Glossary think of the ``_id`` field as the document's :term:`primary key`. If you create a new document without an ``_id`` field, MongoDB automatically creates the field and assigns a unique - BSON :term:`ObjectId`. + BSON :term:`ObjectId` to the field. accumulator An :term:`expression` in an :term:`aggregation pipeline` that @@ -55,20 +58,23 @@ Glossary see :ref:`admin-commands`. aggregation - Any of a variety of operations that reduces and summarizes large + An operation that reduces and summarizes large sets of data. MongoDB's :method:`~db.collection.aggregate()` and :method:`~db.collection.mapReduce()` methods are two examples of aggregation operations. For more information, see :ref:`aggregation`. aggregation pipeline - The set of MongoDB operators that let you calculate aggregate - values without having to use :term:`map-reduce`. For a list of - operators, see :doc:`/reference/aggregation`. + Consists of one or more stages that process documents. Aggregation + operators calculate aggregate values without having to use + :term:`map-reduce`. For a list of operators, see + :doc:`/reference/aggregation`. arbiter - A member of a :term:`replica set` that exists solely to vote in - :term:`elections `. Arbiters do not replicate data. See + A :term:`replica set` member that exists just to vote in + :term:`elections `. Arbiters do not replicate data. An + arbiter participates in elections for a :term:`primary` but cannot + become a primary. For more details, see :ref:`replica-set-arbiter-configuration`. Atlas @@ -76,8 +82,8 @@ Glossary is a cloud-hosted database-as-a-service. atomic operation - An atomic operation is a write operation which either completes - entirely, or does not complete at all. In the case of + An atomic operation is a write operation that either completes + entirely or doesn't complete at all. 
For :ref:`distributed transactions `, which involve writes to multiple documents, all writes to each document must succeed for the transaction to succeed. Atomic operations cannot @@ -110,7 +116,7 @@ Glossary Backup cursors are for internal use only. blocking sort - A sort that must be performed in memory before output is returned. + A sort that must be performed in memory before the output is returned. Blocking sorts may impact performance for large data sets. Use an :term:`indexed sort` to avoid a blocking sort. @@ -118,15 +124,15 @@ Glossary operations. bounded collection scan - A plan used by the :doc:`query optimizer ` that - eliminates documents with specific field value ranges. For + A plan used by the :ref:`query optimizer ` that + excludes documents with specific field value ranges. For example, if a range of date field values is outside of a specified - date range, the documents in that range are eliminated from the + date range, the documents in that range are excluded from the query plan. See :ref:`explain-output-collection-scan`. BSON A serialization format used to store :term:`documents ` and make - remote procedure calls in MongoDB. "BSON" is a portmanteau of the words + remote procedure calls in MongoDB. "BSON" is a combination of the words "binary" and "JSON". Think of BSON as a binary representation of JSON (JavaScript Object Notation) documents. See :ref:`bson-types` and @@ -138,7 +144,7 @@ Glossary B-tree A data structure commonly used by database management systems to - store indexes. MongoDB uses B-trees for its indexes. + store indexes. MongoDB uses B-tree indexes. CAP Theorem Given three properties of computing systems, consistency, @@ -148,7 +154,7 @@ Glossary capped collection A fixed-sized :term:`collection ` that automatically - overwrites its oldest entries when it reaches its maximum size. + overwrites its oldest entries when the collection reaches its maximum size. The MongoDB :term:`oplog` that is used in :term:`replication` is a capped collection. See :doc:`/core/capped-collections`. @@ -158,11 +164,11 @@ Glossary and has a cardinality of 3. See :ref:`shard-key-cardinality`. cartesian product - The result of combining two data sets such that the combined set + The result of combining two data sets where the combined set contains every possible combination of values. cfq - Complete Fairness Queueing (cfq) is a I/O operation scheduler + Complete Fairness Queueing (cfq) is an I/O operation scheduler that allocates bandwidth for incoming request processes. checksum @@ -170,20 +176,22 @@ Glossary The :term:`md5` algorithm is sometimes used as a checksum. chunk - A contiguous range of :term:`shard key` values within a particular + A contiguous range of :term:`shard key` values within a :term:`shard`. Chunk ranges are inclusive of the lower boundary and exclusive of the upper boundary. MongoDB splits chunks when - they grow beyond the configured chunk size, which by default is - 128 megabytes. MongoDB migrates chunks when a shard contains too - many chunks of a collection relative to other shards. See - :ref:`sharding-data-partitioning` and :ref:`sharding-balancing`. + they grow bigger than the configured chunk size. The default chunk + size is 128 megabytes. MongoDB migrates chunks when a shard + contains too many chunks of a collection relative to other shards. + For more details, see :ref:`sharding-data-partitioning`, + :ref:`sharding-balancing`, :ref:`sharded-cluster-balancer`, and + :ref:`release-notes-6.1-balancing-policy-changes`. 
client The application layer that uses a database for data persistence and storage. :term:`Drivers ` provide the interface level between the application layer and the database server. - Client can also refer to a single thread or process. + A client can also be a single thread or process. client affinity A consistent client connection to a specified data source. @@ -206,16 +214,16 @@ Glossary collection A grouping of MongoDB :term:`documents `. A collection - is the equivalent of an :term:`RDBMS` table. A collection exists - within a single :term:`database`. Collections do not enforce a - schema. Documents within a collection can have different fields. - Typically, all documents in a collection have a similar or related + is the equivalent of an :term:`RDBMS` table. A collection is + in a single :term:`database`. Collections do not enforce a + schema. Documents in a collection can have different fields. + Typically, documents in a collection have a similar or related purpose. See :ref:`faq-dev-namespace`. collection scan Collection scans are a query execution strategy where MongoDB must inspect every document in a collection to see if it matches the - query criteria. These queries are very inefficient and do not use + query criteria. These queries are very inefficient and don't use indexes. See :doc:`/core/query-optimization` for details about query execution strategies. @@ -229,7 +237,7 @@ Glossary During an :ref:`index build ` the :ref:`commit quorum ` specifies how many secondaries must be ready to commit their local - index build before the primary node will execute the commit. + index build before the primary node performs the commit. compound index An :term:`index` consisting of two or more keys. See @@ -238,20 +246,28 @@ Glossary concurrency control Concurrency control ensures that database operations can be executed concurrently without compromising correctness. - Pessimistic concurrency control, such as used in systems - with :term:`locks `, will block any potentially - conflicting operations even if they may not turn out to - actually conflict. Optimistic concurrency control, the approach - used by :ref:`WiredTiger `, will delay - checking until after a conflict may have occurred, aborting and - retrying one of the operations involved in any :term:`write - conflict` that arises. + Pessimistic concurrency control, such as that used in systems + with :term:`locks `, blocks any potentially + conflicting operations even if they may not conflict. + Optimistic concurrency control, the approach + used by :ref:`WiredTiger `, delays + checking until after a conflict may have occurred, ending and + retrying one of the operations in any :term:`write + conflict`. + + connection storm + A scenario where a driver attempts to open more connections to a + deployment than that deployment can handle. When requests for new + connections fail, the driver requests to establish even more + connections in response to the deployment slowing down or failing + to open new connections. These continuous requests can overload + the deployment and lead to outages. config database - An internal database that holds the metadata associated with a - :term:`sharded cluster`. Applications and administrators should - not modify the ``config`` database in the course of normal - operation. See :doc:`/reference/config-database`. + An internal database with metadata for a :term:`sharded cluster`. + Typically, you don't modify the ``config`` database. 
For more + information about the ``config`` database, see + :doc:`/reference/config-database`. config server A :binary:`~bin.mongod` instance that stores all the metadata @@ -259,7 +275,7 @@ Glossary See :ref:`sharding-config-server`. connection pool - A cache of database connections maintained by the driver. These + A cache of database connections maintained by the driver. The cached connections are re-used when connections to the database are required, instead of opening new connections. @@ -276,31 +292,30 @@ Glossary Read, Update, and Delete. See :ref:`crud`. CSV - A text-based data format consisting of comma-separated values. - This format is commonly used to exchange data between relational - databases since the format is well-suited to tabular data. You can + A text data format with comma-separated values. + CSV files can be used to exchange data between relational + databases because CSV files have tabular data. You can import CSV files using :binary:`~bin.mongoimport`. cursor A pointer to the result set of a :term:`query`. Clients can iterate through a cursor to retrieve results. By default, cursors not opened within a session automatically timeout after 10 - minutes of inactivity. Cursors opened under a session close with + minutes of inactivity. Cursors opened in a session close with the end or timeout of the session. See :ref:`read-operations-cursors`. Customer Master Key - A key that is used to encrypt your :term:`Data Encryption Key`. - The customer master key should be hosted in a remote key + A key that encrypts your :term:`Data Encryption Key`. + The customer master key must be hosted in a remote key provider. daemon - The conventional name for a background, non-interactive - process. + A background, non-interactive process. data directory - The file-system location where the :binary:`~bin.mongod` stores data - files. The :setting:`~storage.dbPath` option specifies the data directory. + The file system location where :binary:`~bin.mongod` stores data + files. :setting:`~storage.dbPath` specifies the data directory. Data Encryption Key A key you use to encrypt the fields in your MongoDB @@ -324,9 +339,9 @@ Glossary :doc:`/data-center-awareness`. database - A physical container for :term:`collections `. - Each database gets its own set of files on the file - system. A single MongoDB server typically has multiple + A container for :term:`collections `. + Each database has a set of files in the file + system. One MongoDB server typically has multiple databases. database command @@ -352,12 +367,12 @@ Glossary delayed member A :term:`replica set` member that cannot become primary and applies operations at a specified delay. The delay is useful for - protecting data from human error (i.e. unintentionally deleted + protecting data from human error (unintentionally deleted databases) or updates that have unforeseen effects on the production database. See :ref:`replica-set-delayed-members`. DEK - Abbreviation of Data Encryption Key, see + Data Encryption Key. For more details, see :term:`Data Encryption Key`. document @@ -379,24 +394,28 @@ Glossary driver A client library for interacting with MongoDB in a particular - language. See :driver:`driver `. + computer language. See :driver:`driver `. durable - A write operation is durable when it will persist across a - shutdown (or crash) and restart of one or more server processes. - For a single :binary:`~bin.mongod` server, a write operation is - considered durable when it has been written to the server's - :term:`journal` file. 
For a :doc:`replica set - `, a write operation is - considered durable once the write operation is durable on a - majority of voting nodes; i.e. written to a majority of voting - nodes' journals. + A write operation is durable when it persists after a shutdown (or + crash) and restart of one or more server processes. For a single + :binary:`~bin.mongod` server, a write operation is considered + durable when it has been written to the server's :term:`journal` + file. For a :doc:`replica set `, a write operation + is considered durable after the write operation achieves + durability on a majority of voting nodes and written to a majority + of voting nodes' journals. election - The process by which members of a :term:`replica set` select a + The process where members of a :term:`replica set` select a :term:`primary` on startup and in the event of a failure. See :ref:`replica-set-elections`. + {+enc-schema+} + In :ref:`{+qe+} `, the :ref:`JSON schema ` + that defines which fields are queryable and which query types are + permitted on those fields. + explicit encryption When using :term:`In-Use Encryption`, explicitly specifying the encryption or decryption operation, keyID, and @@ -408,20 +427,20 @@ Glossary where the number of seconds or milliseconds since this point is counted. envelope encryption - An encryption practice where data is encrypted using a - :term:`Data Encryption Key` and the data encryption key is + An encryption procedure where data is encrypted using a + :term:`Data Encryption Key` and the data encryption key is encrypted by another key called the :term:`Customer Master Key`. - Encrypted keys are stored within a MongoDB collection referred to - as the KeyVault as :term:`BSON` documents. + The encrypted keys are stored as :term:`BSON` documents in a + MongoDB collection called the KeyVault. eventual consistency A property of a distributed system that allows changes to the system to propagate gradually. In a database system, this means - that readable members are not required to reflect the latest - writes at all times. + that readable members aren't required to have the latest + updates. expression - In the context of an :term:`aggregation pipeline`, expressions are + In an :term:`aggregation pipeline`, expressions are the stateless transformations that operate on the data that passes through a :term:`pipeline`. See :ref:`aggregation-pipeline`. @@ -436,19 +455,20 @@ Glossary databases. See :ref:`document-structure`. field path - Path to a field in the document. To specify a field path, use a + Path to a field in a document. To specify a field path, use a string that prefixes the field name with a dollar sign (``$``). firewall - A system level networking filter that restricts access based on, - among other things, IP address. Firewalls form a part of an - effective network security strategy. See - :ref:`security-firewalls`. + A system level network filter that restricts access based on + IP addresses and other parameters. Firewalls are part of a + secure network. See :ref:`security-firewalls`. fsync - A system call that flushes all dirty, in-memory pages to - disk. MongoDB calls ``fsync()`` on its database files at least - every 60 seconds. See :dbcommand:`fsync`. + A system call that flushes all dirty, in-memory pages to storage. + As applications write data, MongoDB records the data in the + storage layer. + + .. 
include:: /includes/checkpoints.rst geohash A geohash value is a binary representation of the location on a @@ -457,7 +477,7 @@ Glossary GeoJSON A :term:`geospatial` data interchange format based on JavaScript Object Notation (:term:`JSON`). GeoJSON is used in - :doc:`geospatial queries `. For + :ref:`geospatial queries `. For supported GeoJSON objects, see :ref:`geo-overview-location-data`. For the GeoJSON format specification, see ``_. @@ -467,25 +487,25 @@ Glossary GridFS A convention for storing large files in a MongoDB database. All of - the official MongoDB drivers support this convention, as does the + the official MongoDB drivers support the GridFS convention, as does the :binary:`~bin.mongofiles` program. See :doc:`/core/gridfs`. hashed shard key - A special type of :term:`shard key` that uses a hash of the value + A type of :term:`shard key` that uses a hash of the value in the shard key field to distribute documents among members of the :term:`sharded cluster`. See :ref:`index-type-hashed`. health manager A health manager runs health checks on a :term:`health manager facet` at a specified :ref:`intensity level - `. Health manager checks run at + `. The health manager checks are run at specified time intervals. A health manager can be configured to move a failing :ref:`mongos ` out of a cluster - automatically. + automatically. health manager facet - A specific set of features and functionality that a :term:`health - manager` can be configured to run health checks against. For + A set of features that a :term:`health + manager` can be configured to run health checks for. For example, you can configure a health manager to monitor and manage DNS or LDAP cluster health issues automatically. See :ref:`health-managers-facets` for details. @@ -497,33 +517,33 @@ Glossary high availability High availability indicates a system designed for durability, - redundancy, and automatic failover such that the applications - supported by the system can operate continuously and without - downtime for a long period of time. MongoDB + redundancy, and automatic failover. Applications + supported by the system can operate without + downtime for a long time period. MongoDB :ref:`replica sets ` support - high availability when deployed according to our documented + high availability when deployed according to the :ref:`best practices `. For guidance on replica set deployment architecture, see :ref:`replica-set-architecture`. idempotent - The quality of an operation to produce the same result given the - same input, whether run once or run multiple times. + An operation produces the same result with the + same input when run multiple times. index A data structure that optimizes queries. See :doc:`/indexes`. index bounds The range of index values that MongoDB searches when using an - index to fulfill a query. To learn more, see + index to run a query. To learn more, see :ref:`multikey-index-bounds`. init script - A simple shell script used by a Linux platform's + A shell script used by a Linux platform's :term:`init system` to start, restart, or stop a :term:`daemon` - process. If you installed MongoDB via a package manager, an init - script has been provided for your system as part of the + process. If you installed MongoDB using a package manager, an init + script is provided for your system as part of the installation. See the respective :ref:`Installation Guide ` for your operating system. @@ -533,12 +553,11 @@ Glossary after the kernel starts, and manages all other processes on the system. 
The init system uses an :term:`init script` to start, restart, or stop a :term:`daemon` process, such as - :binary:`~bin.mongod` or :binary:`~bin.mongos`. Recent versions of - Linux tend to use the **systemd** init system, which uses the - ``systemctl`` command, while older versions tend to use the - **System V** init system, which uses the ``service`` command. - See the respective Installation Guide for - your operating system. + :binary:`~bin.mongod` or :binary:`~bin.mongos`. Recent Linux + versions typically use the **systemd** init system and the + ``systemctl`` command. Older Linux versions typically use the + **System V** init system and the ``service`` command. See + the Installation Guide for your operating system. initial sync The :term:`replica set` operation that replicates data from an @@ -546,45 +565,43 @@ Glossary :ref:`replica-set-initial-sync`. intent lock - A :term:`lock` on a resource that indicates that the holder - of the lock will read (intent shared) or write (intent + A :term:`lock` on a resource that indicates the lock holder + will read from (intent shared) or write to (intent exclusive) the resource using :term:`concurrency control` at a finer granularity than that of the resource with the intent lock. Intent locks allow concurrent readers and writers of a - resource. See :ref:`faq-concurrency-locking`. + resource. See :ref:`faq-concurrency-locking`. In-Use Encryption - Encryption that secures data while being transmitted, stored, and + Encryption that secures data when transmitted, stored, and processed, and enables supported queries on that encrypted data. MongoDB provides two approaches to In-Use Encryption: :ref:`{+qe+} ` and :ref:`{+csfle+} `. IPv6 - A revision to the IP (Internet Protocol) standard that - provides a significantly larger address space to more effectively - support the number of hosts on the contemporary Internet. + A revision to the IP (Internet Protocol) standard with a + large address space to support Internet hosts. ISODate The international date format used by :binary:`~bin.mongosh` - to display dates. The format is: ``YYYY-MM-DD HH:MM.SS.millis``. + to display dates. The format is ``YYYY-MM-DD HH:MM.SS.millis``. indexed sort - A sort in which an index provides the sorted result. Sort operations that + A sort where an index provides the sorted result. Sort operations that use an index often have better performance than a :term:`blocking sort`. See :ref:`Use Indexed to Sort Query Results ` for more information. interrupt point - A point in an operation's lifecycle when it can - safely abort. MongoDB only terminates an operation + A point in an operation when it can + safely end. MongoDB only ends an operation at designated interrupt points. See :doc:`/tutorial/terminate-running-operations`. JavaScript - A popular scripting language originally designed for web - browsers. :mongosh:`mongosh `, the legacy - :binary:`mongo ` shell, and certain server-side + A scripting language. :mongosh:`mongosh `, the legacy + :binary:`mongo ` shell, and certain server functions use a JavaScript interpreter. See :doc:`/core/server-side-javascript` for more information. @@ -598,7 +615,7 @@ Glossary See :doc:`/core/journaling/`. JSON - JavaScript Object Notation. A human-readable, plain text format + JavaScript Object Notation. A plain text format for expressing structured data with support in many programming languages. For more information, see ``_. Certain MongoDB tools render an approximation of MongoDB @@ -611,30 +628,35 @@ Glossary ``_. 
JSONP - :term:`JSON` with Padding. Refers to a method of injecting JSON + :term:`JSON` with padding. Refers to a method of injecting JSON into applications. **Presents potential security concerns**. + jumbo chunk + A :term:`chunk` that grows beyond the :ref:`specified chunk size + ` and cannot split into smaller chunks. For + more details, see :ref:`jumbo-chunks`. + key material The random string of bits used by an encryption algorithm to encrypt and decrypt data. Key Vault Collection - A MongoDB collection used to store the encrypted + A MongoDB collection that stores the encrypted :term:`Data Encryption Keys ` as :term:`BSON` documents. least privilege - An authorization policy that gives a user only the amount of access - that is essential to that user's work and no more. + An authorization policy that grants a user only the access + that is essential to that user's work. legacy coordinate pairs - The format used for :term:`geospatial` data prior to MongoDB + The format used for :term:`geospatial` data before MongoDB version 2.4. This format stores geospatial data as points on a - planar coordinate system (e.g. ``[ x, y ]``). See + planar coordinate system (for example, ``[ x, y ]``). See :doc:`/geospatial-queries`. LineString - A LineString is defined by an array of two or more positions. A + A LineString is an array of two or more positions. A closed LineString with four or more positions is called a LinearRing, as described in the GeoJSON LineString specification: ``_. To use a @@ -642,12 +664,17 @@ Glossary :ref:`geospatial-indexes-store-geojson`. lock - MongoDB uses locks to ensure that :doc:`concurrency ` - does not affect correctness. MongoDB uses :term:`read locks + MongoDB uses locks to ensure that :ref:`concurrency ` + does not affect correctness. MongoDB uses :term:`read locks `, :term:`write locks ` and :term:`intent locks `. For more information, see :ref:`faq-concurrency-locking`. + log files + Contain server events, such as incoming connections, commands run, + and issues encountered. For more details, see + :ref:`log-messages-ref`. + LVM Logical volume manager. LVM is a program that abstracts disk images from physical devices and provides a number of raw disk @@ -656,26 +683,29 @@ Glossary :ref:`lvm-backup-and-restore`. mapping type - A Structure in programming languages that associate keys with - values, where keys may nest other pairs of keys and values - (e.g. dictionaries, hashes, maps, and associative arrays). + A structure in programming languages that associate keys with + values. Keys may contain embedded pairs of keys and values + (for example, dictionaries, hashes, maps, and associative arrays). The properties of these structures depend on the language - specification and implementation. Generally the order of keys in + specification and implementation. Typically, the order of keys in mapping types is arbitrary and not guaranteed. map-reduce - A data processing and aggregation paradigm consisting of a "map" - phase that selects data and a "reduce" phase that transforms the + An aggregation process that has a "map" + phase that selects the data and a "reduce" phase that transforms the data. In MongoDB, you can run arbitrary aggregations over data - using map-reduce. For map-reduce implementation, see + using map-reduce. For the map-reduce implementation, see :doc:`/core/map-reduce`. For all approaches to aggregation, see :ref:`aggregation`. 
md5 - A hashing algorithm used to efficiently provide - reproducible unique strings to identify and :term:`checksum` - data. MongoDB uses md5 to identify chunks of data for - :term:`GridFS`. See :doc:`/reference/command/filemd5`. + A hashing algorithm that calculates a :term:`checksum` for the + supplied data. The algorithm returns a unique value + to identify the data. MongoDB uses md5 to identify chunks of data + for :term:`GridFS`. See :doc:`/reference/command/filemd5`. + + mean + Average of a set of numbers. median In a dataset, the median is the percentile value where 50% of the @@ -697,6 +727,9 @@ Glossary :binary:`~bin.mongofiles` tool provides an option to specify a MIME type to describe a file inserted into :term:`GridFS` storage. + mode + Number that occurs most frequently in a set of numbers. + mongo The legacy MongoDB shell. The :binary:`~bin.mongo` process starts the legacy shell as a :term:`daemon` connected to either a @@ -710,15 +743,15 @@ Glossary mongod The MongoDB database server. The :binary:`~bin.mongod` process starts the MongoDB server as a :term:`daemon`. The MongoDB server - manages data requests and formats and manages background - operations. See :doc:`/reference/program/mongod`. + manages data requests and background operations. See + :doc:`/reference/program/mongod`. mongos The MongoDB sharded cluster query router. The :binary:`~bin.mongos` process starts the MongoDB router as a :term:`daemon`. The MongoDB router acts as an interface between an application and a MongoDB :term:`sharded cluster` and - handles all routing and load balancing across the cluster. See + handles all routing and load balancing across the cluster. See :doc:`/reference/program/mongos`. mongosh @@ -730,14 +763,13 @@ Glossary :binary:`~bin.mongo` as the preferred shell. namespace - The canonical name for a collection or index in MongoDB. - The namespace is a combination of the database name and - the name of the collection or index, like so: - ``[database-name].[collection-or-index-name]``. All documents + A namespace is a combination of the database name and + the name of the collection or index: + ``.``. All documents belong to a namespace. See :ref:`faq-dev-namespace`. natural order - The order in which the database refers to documents on disk. This is the + The order that the database stores documents on disk. Natural order is the default sort order. See :operator:`$natural` and :ref:`return-natural-order`. @@ -746,14 +778,16 @@ Glossary partitions such that nodes in one partition cannot communicate with the nodes in the other partition. - Sometimes, partitions are partial or asymmetric. An example of a - partial partition would be a division of the nodes of a network + Sometimes, partitions are partial or asymmetric. An example + partial partition is the a division of the nodes of a network into three sets, where members of the first set cannot - communicate with members of the second set, and vice versa, but - all nodes can communicate with members of the third set. In an + communicate with members of the second set, and the reverse, but + all nodes can communicate with members of the third set. + + In an asymmetric partition, communication may be possible only when it originates with certain nodes. For example, nodes on one side of - the partition can communicate to the other side only if they + the partition can communicate with the other side only if they originate the communications channel. node @@ -769,9 +803,9 @@ Glossary See :term:`natural order`. 
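A short ``mongosh`` sketch of the ``namespace`` and ``natural order`` entries above; the ``test`` database and ``inventory`` collection are assumptions for the example.

.. code-block:: javascript

   // For a database named "test", this collection has the namespace "test.inventory"
   // (database name + "." + collection or index name).
   // $natural returns documents in natural (on-disk) order, the default sort order.
   db.inventory.find().sort( { $natural: 1 } )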
ObjectId - A special 12-byte :term:`BSON` type that guarantees uniqueness - within the :term:`collection`. The ObjectId is generated based on - timestamp, machine ID, process ID, and a process-local incremental + A 12-byte :term:`BSON` type that is unique + within a :term:`collection`. The ObjectId is generated using the + timestamp, computer ID, process ID, and a local process incremental counter. MongoDB uses ObjectId values as the default values for :term:`_id` fields. @@ -797,10 +831,10 @@ Glossary See :doc:`/core/replica-set-oplog`. oplog hole - A temporary gap in the oplog due to oplog writes not occurring in + A temporary gap in the oplog because the oplog writes aren't in sequence. Replica set :ref:`primaries ` apply oplog entries in parallel as a batch operation. As a result, - temporary gaps in the oplog can occur from entries that are not + temporary gaps in the oplog can occur from entries that aren't yet written from a batch. oplog window @@ -825,17 +859,16 @@ Glossary orphaned document In a sharded cluster, orphaned documents are those documents on a - shard that also exist in chunks on other shards as a result of - failed migrations or incomplete migration cleanup due to abnormal - shutdown. + shard that also exist in chunks on other shards. This is caused by + a failed migration or an incomplete migration cleanup because of + an atypical shutdown. - Starting in MongoDB 4.4, orphaned documents are cleaned up - automatically after a chunk migration completes. You no longer - need to run :dbcommand:`cleanupOrphaned` to delete orphaned - documents. + Orphaned documents are cleaned up automatically after a chunk migration + completes. You no longer need to run :dbcommand:`cleanupOrphaned` to + delete orphaned documents. orphaned cursor - A cursor that is not properly closed or iterated over + A cursor that is not correctly closed or iterated over in your application code. Orphaned cursors can cause performance issues in your MongoDB deployment. @@ -845,8 +878,8 @@ Glossary ``0``. See :doc:`/core/replica-set-priority-0-member`. percentile - In a dataset, a given percentile is a value where that percentage - of the data falls at or below that value. For details, see + In a dataset, a percentile is a value where that percentage + of the data is at or below the specified value. For details, see :ref:`percentile-calculation-considerations`. PID @@ -862,7 +895,7 @@ Glossary the input of another. pipeline - A series of operations in an :term:`aggregation` process. + A series of operations in an :term:`aggregation`. See :ref:`aggregation-pipeline`. Point @@ -888,10 +921,10 @@ Glossary :ref:`db.collection.watch-change-streams-pre-and-post-images-example`. powerOf2Sizes - A per-collection setting that changes and normalizes the way - MongoDB allocates space for each :term:`document`, in an effort to - maximize storage reuse and to reduce fragmentation. This is the - default for :ref:`TTL Collections `. See + A setting for each collection that allocates space for each + :term:`document` to maximize storage reuse and reduce + fragmentation. ``powerOf2Sizes`` is the default for :ref:`TTL + Collections `. To change collection settings, see :dbcommand:`collMod`. prefix compression @@ -919,13 +952,13 @@ Glossary :ref:`replica-set-primary-member`. primary key - A record's unique immutable identifier. In an :term:`RDBMS`, the primary + A record's unique immutable identifier. In :term:`RDBMS` software, the primary key is typically an integer stored in each row's ``id`` field. 
- In MongoDB, the :term:`_id` field holds a document's primary - key which is usually a BSON :term:`ObjectId`. + In MongoDB, the :term:`_id` field stores a document's primary + key, which is typically a BSON :term:`ObjectId`. primary shard - The :term:`shard` that holds all the un-sharded collections. See + The :term:`shard` that stores all the unsharded collections. See :ref:`primary-shard`. priority @@ -939,23 +972,23 @@ Glossary :ref:`privilege `. projection - A document given to a :term:`query` that specifies which fields - MongoDB returns in the result set. See :ref:`projection`. For a - list of projection operators, see + A document supplied to a :term:`query` that specifies the fields + MongoDB returns in the result set. For more information about projections, + see :ref:`projection` and :doc:`/reference/operator/projection`. query - A read request. MongoDB uses a :term:`JSON`-like query language - that includes a variety of :term:`query operators ` with + A read request. MongoDB uses a :term:`JSON` form of query language + that includes :term:`query operators ` with names that begin with a ``$`` character. In - :binary:`~bin.mongosh`, you can issue queries using the + :binary:`~bin.mongosh`, you can run queries using the :method:`db.collection.find()` and :method:`db.collection.findOne()` methods. See :ref:`read-operations-queries`. query framework A combination of the :term:`query optimizer` and query execution engine - used to process an operation. + that processes an operation. query operator A keyword beginning with ``$`` in a query. For example, @@ -965,19 +998,23 @@ Glossary query optimizer A process that generates query plans. For each query, the optimizer generates a plan that matches the query to the index - that will return results as efficiently as possible. The + that returns the results as efficiently as possible. The optimizer reuses the query plan each time the query runs. If a collection changes significantly, the optimizer creates a new query plan. See :ref:`read-operations-query-optimization`. + query plan + Most efficient execution plan chosen by the query planner. For + more details, see :ref:`query-plans-query-optimization`. + query shape A combination of query predicate, sort, projection, and :ref:`collation `. The query shape allows MongoDB to identify logically equivalent queries and analyze their performance. For the query predicate, only the structure of the predicate, - including the field names, are significant; the values in the - query predicate are insignificant. As such, a query predicate ``{ + including the field names, are significant. The values in the + query predicate are insignificant. Therefore, a query predicate ``{ type: 'food' }`` is equivalent to the query predicate ``{ type: 'utensil' }`` for a query shape. @@ -1013,19 +1050,14 @@ Glossary RDBMS Relational Database Management System. A database management - system based on the relational model, typically using - :term:`SQL` as the query language. + system based on the relational model, typically using :term:`SQL` + as the query language. recovering A :term:`replica set` member status indicating that a member - is not ready to begin normal activities of a secondary or primary. + is not ready to begin activities of a secondary or primary. Recovering members are unavailable for reads. - replica pairs - The precursor to the MongoDB :term:`replica sets `. - - .. deprecated:: 1.6 - replica set A cluster of MongoDB servers that implements replication and automated failover. 
MongoDB's recommended @@ -1033,20 +1065,20 @@ Glossary replication A feature allowing multiple database servers to share the same - data, thereby ensuring redundancy and facilitating load balancing. - See :doc:`/replication`. + data. Replication ensures data redundancy and enables load + balancing. See :doc:`/replication`. replication lag - The length of time between the last operation in the + The time period between the last operation in the :term:`primary's ` :term:`oplog` and the last operation - applied to a particular :term:`secondary`. In general, you want to - keep replication lag as small as possible. See :ref:`Replication + applied to a particular :term:`secondary`. You typically want + replication lag as short as possible. See :ref:`Replication Lag `. resident memory The subset of an application's memory currently stored in physical RAM. Resident memory is a subset of :term:`virtual memory`, - which includes memory mapped to physical RAM and to disk. + which includes memory mapped to physical RAM and to storage. resource A database, collection, set of collections, or cluster. A @@ -1060,19 +1092,19 @@ Glossary :doc:`/security`. rollback - A process that reverts writes operations to ensure the consistency + A process that reverts write operations to ensure the consistency of all replica set members. See :ref:`replica-set-rollback`. secondary A :term:`replica set` member that replicates the contents of the - master database. Secondary members may handle read requests, but - only the :term:`primary` members can handle write operations. See + master database. Secondary members may run read requests, but + only the :term:`primary` members can run write operations. See :ref:`replica-set-secondary-members`. secondary index A database :term:`index` that improves query performance by minimizing the amount of work that the query engine must perform - to fulfill a query. See :doc:`/indexes`. + to run a query. See :doc:`/indexes`. secondary member See :term:`secondary`. Also known as a secondary node. @@ -1082,7 +1114,8 @@ Glossary :binary:`~bin.mongosh`) for initial discovery of the replica set configuration. Seed lists can be provided as a list of ``host:port`` pairs (see :ref:`connections-standard-connection-string-format` - or via DNS entries (see :ref:`connections-dns-seedlist`). + or through DNS entries.) For more information, + see :ref:`connections-dns-seedlist`. set name The arbitrary name given to a replica set. All members of a @@ -1090,10 +1123,11 @@ Glossary :setting:`~replication.replSetName` setting or the :option:`--replSet ` option. shard - A single :binary:`~bin.mongod` instance or :term:`replica set` that - stores some portion of a :term:`sharded cluster's ` total data set. In production, all shards should be - replica sets. See :doc:`/core/sharded-cluster-shards`. + A single :binary:`~bin.mongod` instance or :term:`replica set` + that stores part of a :term:`sharded cluster's ` + total data set. Typically, in a production deployment, ensure all + shards are part of replica sets. See + :doc:`/core/sharded-cluster-shards`. shard key The field MongoDB uses to distribute documents among members of a @@ -1111,22 +1145,22 @@ Glossary Sharding enables horizontal scaling. See :doc:`/sharding`. shell helper - A method in ``mongosh`` that provides a more concise - syntax for a :doc:`database command `. Shell helpers - improve the general interactive experience. See + A method in ``mongosh`` that has a concise + syntax for a :ref:`database command `. 
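For example (a sketch; the ``orders`` collection name is an assumption), the ``db.collection.drop()`` helper wraps the ``drop`` database command:

.. code-block:: javascript

   // Shell helper
   db.orders.drop()

   // Equivalent database command
   db.runCommand( { drop: "orders" } )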
Shell helpers + improve the interactive experience. See :doc:`/reference/method`. single-master replication A :term:`replication` topology where only a single database instance accepts writes. Single-master replication ensures - consistency and is the replication topology employed by MongoDB. + consistency and is the replication topology used by MongoDB. See :doc:`/core/replica-set-primary`. snapshot .. include:: /includes/snapshot-definition.rst snappy - A compression/decompression library designed to balance + A compression/decompression library to balance efficient computation requirements with reasonable compression rates. Snappy is the default compression library for MongoDB's use of :ref:`WiredTiger @@ -1140,26 +1174,23 @@ Glossary cluster`. See :doc:`/core/sharding-data-partitioning`. SQL - Structured Query Language (SQL) is a common special-purpose - programming language used for interaction with a relational - database, including access control, insertions, - updates, queries, and deletions. There are some similar - elements in the basic SQL syntax supported by different database - vendors, but most implementations have their own dialects, data - types, and interpretations of proposed SQL standards. Complex - SQL is generally not directly portable between major - :term:`RDBMS` products. ``SQL`` is often used as - metonym for relational databases. + Structured Query Language (SQL) is used for interaction with + relational databases. SSD - Solid State Disk. A high-performance disk drive that uses solid - state electronics for persistence, as opposed to the rotating platters - and movable read/write heads used by traditional mechanical hard drives. - + Solid State Disk. High-performance storage that uses solid + state electronics for persistence instead of rotating platters + and movable read/write heads used by mechanical hard drives. + + stale read + A stale read refers to when a transaction reads old (stale) data that has + been modified by another transaction but not yet committed to the + database. + standalone - An instance of :binary:`~bin.mongod` that is running as a single - server and not as part of a :term:`replica set`. To convert a - standalone into a replica set, see + An instance of :binary:`~bin.mongod` that runs as a single server + and not as part of a :term:`replica set`. To convert it to a + replica set, see :doc:`/tutorial/convert-standalone-to-replica-set`. stash collection @@ -1178,14 +1209,14 @@ Glossary Subject Alternative Name Subject Alternative Name (SAN) is an extension of the X.509 certificate which allows an array of values such as IP addresses - and domain names that specify which resources a single security + and domain names that specify the resources a single security certificate may secure. strict consistency A property of a distributed system requiring that all members - always reflect the latest changes to the system. In a database + contain the latest changes to the system. In a database system, this means that any system that can provide data must - reflect the latest writes at all times. + contain the latest writes. sync The :term:`replica set` operation where members replicate data @@ -1203,13 +1234,10 @@ Glossary tag A label applied to a replica set member and used by clients to issue data-center-aware operations. For more information - on using tags with replica sets, see the following - sections of this manual: :ref:`replica-set-read-preference-tag-sets`. - - .. 
versionchanged:: 3.4 + on using tags with replica sets, see :ref:`replica-set-read-preference-tag-sets`. - In MongoDB 3.4, sharded cluster :term:`zones ` replace - :term:`tags `. + In MongoDB 3.4, sharded cluster :term:`zones ` replace + :term:`tags `. tag set A document containing zero or more :term:`tags `. @@ -1230,11 +1258,12 @@ Glossary :doc:`/core/timeseries-collections`. topology - The state of a deployment of MongoDB instances, including - the type of deployment (i.e. standalone, replica set, or sharded - cluster) as well as the availability of servers, and the role of - each server (i.e. :term:`primary`, :term:`secondary`, - :term:`config server`, or :binary:`~bin.mongos`.) + The state of a deployment of MongoDB instances. Includes: + + - Type of deployment (standalone, replica set, or sharded cluster). + - Availability of servers. + - Role of each server (:term:`primary`, :term:`secondary`, + :term:`config server`, or :binary:`~bin.mongos`). transaction Group of read or write operations. For details, see @@ -1250,11 +1279,11 @@ Glossary TSV A text-based data format consisting of tab-separated values. This format is commonly used to exchange data between relational - databases, since the format is well-suited to tabular data. You can + databases because the format is suited to tabular data. You can import TSV files using :binary:`~bin.mongoimport`. TTL - Stands for "time to live" and represents an expiration time or + Time-to-live (TTL) is an expiration time or period for a given piece of information to remain in a cache or other temporary storage before the system deletes it or ages it out. MongoDB has a TTL collection feature. See @@ -1267,7 +1296,7 @@ Glossary arrays. unique index - An index that enforces uniqueness for a particular field across + An index that enforces uniqueness for a particular field in a single collection. See :ref:`index-type-unique`. unordered query plan @@ -1276,13 +1305,16 @@ Glossary See :ref:`read-operations-query-optimization`. upsert - An option for update operations; e.g. + An option for update operations. For example: :method:`db.collection.updateOne()`, - :method:`db.collection.findAndModify()`. If set to true, the - update operation will either update the document(s) matched by - the specified query or if no documents match, insert a new - document. The new document will have the fields indicated in the - operation. See :ref:`upsert-parameter`. + :method:`db.collection.findAndModify()`. If upsert is ``true``, + the update operation either: + + - updates the document(s) matched by the query. + - or if no documents match, inserts a new document. The new + document has the field values specified in the update operation. + + For more information about upserts, see :ref:`upsert-parameter`. virtual memory An application's working memory, typically residing on both @@ -1310,11 +1342,11 @@ Glossary specified number of members. See :doc:`/reference/write-concern`. write conflict - A situation in which two concurrent operations, at least - one of which is a write, attempt to use a resource in a way - that would violate constraints imposed by a storage engine - using optimistic :term:`concurrency control`. MongoDB will - transparently abort and retry one of the conflicting operations. + A situation where two concurrent operations, at least one of which + is a write, try to use a resource that violates the + constraints for a storage engine that uses optimistic + :term:`concurrency control`. 
MongoDB automatically ends and + retries one of the conflicting write operations. write lock An exclusive :term:`lock` on a resource such as a collection @@ -1324,9 +1356,9 @@ Glossary locks, see :doc:`/faq/concurrency`. writeBacks - The process within the sharding system that ensures that writes - issued to a :term:`shard` that *is not* responsible for the - relevant chunk get applied to the proper shard. For related + The process in the sharding system that ensures writes + sent to a :term:`shard` that *is not* responsible for the + relevant chunk are applied to the correct shard. For more information, see :ref:`faq-writebacklisten` and :ref:`server-status-writebacksqueued`. @@ -1348,9 +1380,7 @@ Glossary zone A grouping of documents based on ranges of :term:`shard key` values for a given sharded collection. Each shard in the sharded cluster can - associate with one or more zones. In a balanced cluster, MongoDB - directs reads and writes covered by a zone only to those shards - inside the zone. See the :ref:`zone-sharding` manual page for more - information. - - Zones supersede functionality described by :term:`tags ` in MongoDB 3.2. + be in one or more zones. In a balanced cluster, MongoDB + directs reads and writes for a zone only to those shards + inside that zone. See the :ref:`zone-sharding` manual page for more + information. \ No newline at end of file diff --git a/source/reference/insert-methods.txt b/source/reference/insert-methods.txt index 767fd121255..d1bc46ba86d 100644 --- a/source/reference/insert-methods.txt +++ b/source/reference/insert-methods.txt @@ -4,6 +4,9 @@ Insert Methods .. default-domain:: mongodb +.. meta:: + :description: MongoDB provides insert methods for adding documents into a collection. + MongoDB provides the following methods for inserting :ref:`documents ` into a collection: diff --git a/source/reference/limits.txt b/source/reference/limits.txt index b2e5ae0f302..ffc9bdf01fd 100644 --- a/source/reference/limits.txt +++ b/source/reference/limits.txt @@ -13,6 +13,9 @@ MongoDB Limits and Thresholds .. meta:: :keywords: case sensitive +.. meta:: + :description: Hard and soft limitations of the MongoDB system in Atlas, Enterprise, and Community. + .. contents:: On this page :local: :backlinks: none @@ -210,7 +213,7 @@ tiers: - 32000 * - ``M80`` - - 96000 + - 64000 * - ``M140`` - 96000 @@ -498,7 +501,7 @@ Naming Restrictions .. limit:: Length of Database Names - Database names cannot be empty and must have fewer than 64 characters. + Database names cannot be empty and must be less than 64 bytes. .. limit:: Restriction on Collection Names @@ -679,7 +682,7 @@ Indexes .. limit:: Hidden Indexes - - You cannot :doc:`hide ` the ``_id`` index. + - You cannot :ref:`hide ` the ``_id`` index. - You cannot use :method:`~cursor.hint()` on a :doc:`hidden index `. @@ -771,12 +774,6 @@ Sharding Operational Restrictions Shard Key Limitations ~~~~~~~~~~~~~~~~~~~~~ -.. limit:: Shard Key Size - - Starting in version 4.4, MongoDB removes the limit on the shard key size. - - For MongoDB 4.2 and earlier, a shard key cannot exceed 512 bytes. - .. limit:: Shard Key Index Type .. include:: /includes/limits-sharding-index-type.rst @@ -788,9 +785,8 @@ Shard Key Limitations - Starting in MongoDB 5.0, you can :ref:`reshard a collection ` by changing a document's shard key. - - Starting in MongoDB 4.4, you can :ref:`refine a shard key - ` by adding a suffix field or fields to the - existing shard key. 
+ - You can :ref:`refine a shard key ` by adding a suffix + field or fields to the existing shard key. - In MongoDB 4.2 and earlier, the choice of shard key cannot be changed after sharding. @@ -814,15 +810,10 @@ Operations If MongoDB requires using more than 100 megabytes of system memory for the blocking sort operation, MongoDB returns an error *unless* - the query specifies :method:`cursor.allowDiskUse()` (*New in MongoDB - 4.4*). :method:`~cursor.allowDiskUse()` allows MongoDB to use - temporary files on disk to store data exceeding the 100 megabyte - system memory limit while processing a blocking sort operation. - - .. versionchanged:: 4.4 - - For MongoDB 4.2 and prior, blocking sort operations could not - exceed 32 megabytes of system memory. + the query specifies :method:`cursor.allowDiskUse()`. + :method:`~cursor.allowDiskUse()` allows MongoDB to use temporary files on + disk to store data exceeding the 100 megabyte system memory limit while + processing a blocking sort operation. For more information on sorts and index use, see :ref:`sort-index-use`. @@ -902,12 +893,8 @@ Operations - .. include:: /includes/extracts/views-unsupported-mapReduce.rst - - .. include:: /includes/extracts/views-unsupported-geoNear.rst - .. limit:: Projection Restrictions - .. versionadded:: 4.4 - ``$``-Prefixed Field Path Restriction .. include:: /includes/extracts/projection-dollar-prefixed-field-full.rst diff --git a/source/reference/local-database.txt b/source/reference/local-database.txt index e376da98e4a..8c811d47f55 100644 --- a/source/reference/local-database.txt +++ b/source/reference/local-database.txt @@ -126,7 +126,7 @@ Collections on Replica Set Members .. include:: /includes/fact-oplog-size.rst Starting in MongoDB 5.0, it is no longer possible to perform manual - write operations to the :doc:`oplog ` on a + write operations to the :ref:`oplog ` on a cluster running as a :ref:`replica set `. Performing write operations to the oplog when running as a :term:`standalone instance ` should only be done with diff --git a/source/reference/log-messages.txt b/source/reference/log-messages.txt index e9712f6cb20..4c84ad597ca 100644 --- a/source/reference/log-messages.txt +++ b/source/reference/log-messages.txt @@ -10,6 +10,9 @@ Log Messages :name: programming_language :values: shell +.. meta:: + :description: MongoDB maintains a log of events such as incoming connections, commands run, and issues encountered for diagnosing issues, monitoring your deployment, and tuning performance. + .. contents:: On this page :local: :backlinks: none @@ -36,12 +39,11 @@ following methods: Structured Logging ------------------ -Starting in MongoDB 4.4, :binary:`~bin.mongod` / :binary:`~bin.mongos` -instances output all log messages in :ref:`structured JSON format -`. Log entries are written as a series -of key-value pairs, where each key indicates a log message field type, -such as "severity", and each corresponding value records the associated -logging information for that field type, such as "informational". +:binary:`~bin.mongod` / :binary:`~bin.mongos` instances output all log messages +in :ref:`structured JSON format `. Log entries +are written as a series of key-value pairs, where each key indicates a log +message field type, such as "severity", and each corresponding value records +the associated logging information for that field type, such as "informational". Previously, log entries were output as plaintext. .. 
example:: @@ -101,8 +103,7 @@ analyzing structured log messages can be found in the JSON Log Output Format ~~~~~~~~~~~~~~~~~~~~~~ -Starting in MongoDB 4.4, all log output is in JSON format including -output sent to: +All log output is in JSON format including output sent to: - Log file - Syslog @@ -466,8 +467,7 @@ examples that filter on the timestamp field. .. note:: - Starting in MongoDB 4.4, the ``ctime`` timestamp format is no longer - supported. + The ``ctime`` timestamp format is no longer supported. .. [#syslog-ts] @@ -1177,7 +1177,7 @@ Parsing Structured Log Messages Log parsing is the act of programmatically searching through and analyzing log files, often in an automated manner. With the introduction -of structured logging in MongoDB 4.4, log parsing is made simpler and +of structured logging, log parsing is made simpler and more powerful. For example: - Log message fields are presented as key-value pairs. Log parsers can @@ -1264,7 +1264,7 @@ following returns only the slow operations that took above .. code-block:: bash - jq '. | select(.attr.durationMillis>=2000)' /var/log/mongodb/mongod.log + jq 'select(.attr.durationMillis>=2000)' /var/log/mongodb/mongod.log Consult the `jq documentation `_ for more information on the ``jq`` filters shown in this example. @@ -1285,14 +1285,14 @@ The following example prints only the log messages of .. code-block:: bash - jq '. | select(.c=="REPL")' /var/log/mongodb/mongod.log + jq 'select(.c=="REPL")' /var/log/mongodb/mongod.log The following example prints all log messages *except* those of :ref:`component ` type **REPL**: .. code-block:: bash - jq '. | select(.c!="REPL")' /var/log/mongodb/mongod.log + jq 'select(.c!="REPL")' /var/log/mongodb/mongod.log The following example print log messages of :ref:`component ` type **REPL** *or* @@ -1300,7 +1300,7 @@ The following example print log messages of .. code-block:: bash - jq '. | select( .c as $c | ["REPL", "STORAGE"] | index($c) )' /var/log/mongodb/mongod.log + jq 'select( .c as $c | ["REPL", "STORAGE"] | index($c) )' /var/log/mongodb/mongod.log Consult the `jq documentation `_ for more information on the ``jq`` filters shown in this example. @@ -1329,7 +1329,7 @@ following ``jq`` syntax: .. code-block:: bash - jq '. | select( .id as $id | [22943, 22944] | index($id) )' /var/log/mongodb/mongod.log + jq 'select( .id as $id | [22943, 22944] | index($id) )' /var/log/mongodb/mongod.log Consult the `jq documentation `_ for more information on the ``jq`` filters shown in this example. @@ -1345,7 +1345,7 @@ the following returns all log entries that occurred on April 15th, 2020: .. code-block:: bash - jq '. | select(.t["$date"] >= "2020-04-15T00:00:00.000" and .t["$date"] <= "2020-04-15T23:59:59.999")' /var/log/mongodb/mongod.log + jq 'select(.t["$date"] >= "2020-04-15T00:00:00.000" and .t["$date"] <= "2020-04-15T23:59:59.999")' /var/log/mongodb/mongod.log Note that this syntax includes the full timestamp, including milliseconds but excluding the timezone offset. @@ -1357,7 +1357,7 @@ limit results to the month of May, 2020: .. code-block:: bash - jq '. | select(.t["$date"] >= "2020-05-01T00:00:00.000" and .t["$date"] <= "2020-05-31T23:59:59.999" and .attr.remote)' /var/log/mongodb/mongod.log + jq 'select(.t["$date"] >= "2020-05-01T00:00:00.000" and .t["$date"] <= "2020-05-31T23:59:59.999" and .attr.remote)' /var/log/mongodb/mongod.log Consult the `jq documentation `_ for more information on the ``jq`` filters shown in this example. 
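As a related ``mongosh`` sketch (an addition here, not part of the ``jq`` examples above), recent structured log messages held in memory can also be retrieved with the ``getLog`` command and filtered in the shell, mirroring the ``REPL`` component filter:

.. code-block:: javascript

   // getLog returns recent in-memory log entries as JSON strings.
   const res = db.adminCommand( { getLog: "global" } )

   // Parse each entry and keep only REPL component messages.
   const replEntries = res.log
      .map( entry => JSON.parse( entry ) )
      .filter( entry => entry.c === "REPL" )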
@@ -1369,12 +1369,11 @@ Log ingestion services are third-party products that intake and aggregate log files, usually from a distributed cluster of systems, and provide ongoing analysis of that data in a central location. -The :ref:`JSON log format `, introduced -with MongoDB 4.4, allows for more flexibility when working with log -ingestion and analysis services. Whereas plaintext logs generally -require some manner of transformation before being eligible for use -with these products, JSON files can often be consumed out of the box, -depending on the service. Further, JSON-formatted logs offer more +The :ref:`JSON log format ` allows for more +flexibility when working with log ingestion and analysis services. Whereas +plaintext logs generally require some manner of transformation before being +eligible for use with these products, JSON files can often be consumed out of +the box, depending on the service. Further, JSON-formatted logs offer more control when performing filtering for these services, as the key-value structure offers the ability to specifically import only the fields of interest, while omitting the rest. diff --git a/source/reference/map-reduce-to-aggregation-pipeline.txt b/source/reference/map-reduce-to-aggregation-pipeline.txt index ca9e85b36ee..ce6344ec03b 100644 --- a/source/reference/map-reduce-to-aggregation-pipeline.txt +++ b/source/reference/map-reduce-to-aggregation-pipeline.txt @@ -14,7 +14,7 @@ stages ` such as For map-reduce operations that require custom functionality, MongoDB provides the :group:`$accumulator` and :expression:`$function` -aggregation operators starting in version 4.4. Use these operators to +aggregation operators. Use these operators to define custom aggregation expressions in JavaScript. Map-reduce expressions can be re-written as shown in the following diff --git a/source/reference/method.txt b/source/reference/method.txt index a85e992cb29..8fc767a50b9 100644 --- a/source/reference/method.txt +++ b/source/reference/method.txt @@ -6,6 +6,9 @@ .. default-domain:: mongodb +.. meta:: + :description: Mongosh methods for interacting with your data and deployments. + .. contents:: On this page :local: :backlinks: none @@ -16,6 +19,87 @@ .. include:: /includes/extracts/methods-toc-explanation.rst +Atlas Search Index Methods +-------------------------- + +.. include:: /includes/atlas-search-commands/mongosh-method-intro.rst + +.. |fts-index| replace:: {+fts+} index + +.. |fts-indexes| replace:: {+fts+} indexes + +.. include:: /includes/atlas-search-commands/mongosh-method-table.rst + + +.. toctree:: + :titlesonly: + :hidden: + + /reference/method/js-atlas-search + +Atlas Stream Processing Methods +------------------------------------------------------ + +:atlas:`Atlas Stream Processors +` +let you perform aggregation operations against streams of +continuous data using the same data model and query API that +you use with at-rest data. + +Use the following methods to manage Stream Processors: + +.. important:: + + The following methods can only be run on deployments hosted on + :atlas:`MongoDB Atlas `. + +.. list-table:: + :widths: 30 70 + :header-rows: 1 + + * - Name + + - Description + + * - :method:`sp.createStreamProcessor()` + + - Creates a stream processor. + + * - :method:`sp.listStreamProcessors()` + + - Lists all existing stream processors on the current stream + processing instance. + + * - :method:`sp.process()` + + - Creates an ephemeral stream processor. + + * - :method:`sp.processor.drop()` + + - Deletes an existing stream processor. 
+ + * - :method:`sp.processor.sample()` + + - Returns an array of sampled results from a currently running stream processor. + + * - :method:`sp.processor.start()` + + - Starts an existing stream processor. + + * - :method:`sp.processor.stats()` + + - Returns statistics summarizing an existing stream processor. + + * - :method:`sp.processor.stop()` + + - Stops a currently running stream processor. + +.. toctree:: + :titlesonly: + :hidden: + + /reference/method/js-atlas-streams + Collection ---------- @@ -503,10 +587,6 @@ Database - Prints a report of the sharding configuration and the chunk ranges. - * - :method:`db.printSlaveReplicationInfo()` - - - .. include:: /includes/deprecated-db.printSlaveReplicationInfo.rst - * - :method:`db.resetError()` - *Removed in MongoDB 5.0.* Resets the last error status. @@ -518,7 +598,7 @@ Database * - :method:`db.runCommand()` - - Runs a :doc:`database command `. + - Runs a :ref:`database command `. * - :method:`db.serverBuildInfo()` @@ -603,8 +683,6 @@ Query Plan Cache - Returns the plan cache information for a collection. Accessible through the plan cache object of a specific collection, i.e. ``db.collection.getPlanCache().list()``. - - .. versionadded:: 4.4 .. toctree:: :titlesonly: @@ -885,10 +963,6 @@ Replication - Prints a formatted report of the replica set status from the perspective of the secondaries. - * - :method:`rs.printSlaveReplicationInfo()` - - - .. include:: /includes/deprecated-rs.printSlaveReplicationInfo.rst - * - :method:`rs.reconfig()` - Re-configures a replica set by applying a new replica set configuration object. @@ -954,8 +1028,6 @@ Sharding - Returns information on whether the chunks of a sharded collection are balanced. - .. versionadded:: 4.4 - * - :method:`sh.commitReshardCollection()` - Forces a :ref:`resharding operation ` to @@ -1336,20 +1408,3 @@ Client-Side Field Level Encryption :hidden: /reference/method/js-client-side-field-level-encryption - -Atlas Search Index Methods --------------------------- - -.. include:: /includes/atlas-search-commands/mongosh-method-intro.rst - -.. |fts-index| replace:: {+fts+} index - -.. |fts-indexes| replace:: {+fts+} indexes - -.. include:: /includes/atlas-search-commands/mongosh-method-table.rst - -.. toctree:: - :titlesonly: - :hidden: - - /reference/method/js-atlas-search diff --git a/source/reference/method/BSONRegExp.txt b/source/reference/method/BSONRegExp.txt new file mode 100644 index 00000000000..a2a2957d585 --- /dev/null +++ b/source/reference/method/BSONRegExp.txt @@ -0,0 +1,106 @@ +.. _server-bsonRegExp-method: + +============ +BSONRegExp() +============ + +.. default-domain:: mongodb + +.. contents:: On this page + :local: + :backlinks: none + :depth: 1 + :class: singlecol + +Definition +---------- + +Creates a new :ref:`BSON type ` for a regular expression. + +Syntax +------ + +``BSONRegExp`` has the following syntax: + +.. method:: BSONRegExp(", "") + + .. list-table:: + :header-rows: 1 + :widths: 20 20 60 + + * - Parameter + + - Type + + - Description + + * - ``pattern`` + + - string + + - The regular expression pattern. You must not wrap the pattern + with delimiter characters. + + * - ``flag`` + + - string + + - The regular expression flags. Characters in this argument are + sorted alphabetically. + +.. _bsonRegExp-examples: + +Examples +-------- + +Insert a ``BSONRegExp()`` Object +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Use the ``BSONRegExp()`` constructor to create the BSON regular expression. + +.. 
code-block:: javascript + + var bsonRegExp = BSONRegExp("(?-i)AA_", "i") + +Insert the object into the ``testbson`` collection. + +.. code-block:: javascript + + db.testbson.insertOne( { foo: bsonRegExp } ) + +Retrieve a ``BSONRegExp()`` Object +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Query the ``testbson`` collection for the inserted document. + +.. code-block:: javascript + + db.testbson.find( {}, {}, { bsonRegExp: true } ) + +You can see the binary BSON regular expressions stored in the collection. + +.. code-block:: javascript + :copyable: false + + [ + { + _id: ObjectId('65e8ba8a4b3c33a76e6cacca'), + foo: BSONRegExp('(?-i)AA_', 'i') + } + ] + +If you set ``bsonRegExp`` to ``false``, ``mongosh`` returns an error: + +.. io-code-block:: + :copyable: true + + .. input:: + :language: javascript + + db.testbson.find( {}, {}, { bsonRegExp: false }) + + .. output:: + :language: javascript + + Uncaught: + SyntaxError: Invalid regular expression: /(?-i)AA_/i: Invalid group diff --git a/source/reference/method/Bulk.find.collation.txt b/source/reference/method/Bulk.find.collation.txt index d522eaf84f1..a46d0af274f 100644 --- a/source/reference/method/Bulk.find.collation.txt +++ b/source/reference/method/Bulk.find.collation.txt @@ -53,7 +53,7 @@ Description - Optional. The level of comparison to perform. Corresponds to `ICU Comparison Levels - `_. + `_. Possible values are: .. list-table:: @@ -102,7 +102,7 @@ Description breaker. See `ICU Collation: Comparison Levels - `_ + `_ for details. @@ -126,7 +126,7 @@ Description ``2``. The default is ``false``. For more information, see `ICU Collation: Case Level - `_. + `_. @@ -156,7 +156,7 @@ Description - Default value. Similar to ``"lower"`` with slight differences. See - ``_ + ``_ for details of differences. @@ -203,7 +203,7 @@ Description and are only distinguished at strength levels greater than 3. See `ICU Collation: Comparison Levels - `_ + `_ for more information. Default is ``"non-ignorable"``. @@ -270,7 +270,7 @@ Description The default value is ``false``. See - ``_ for details. + ``_ for details. diff --git a/source/reference/method/BulkWriteResult.txt b/source/reference/method/BulkWriteResult.txt index 9bb2e256062..dd84d48a936 100644 --- a/source/reference/method/BulkWriteResult.txt +++ b/source/reference/method/BulkWriteResult.txt @@ -6,6 +6,10 @@ BulkWriteResult() .. default-domain:: mongodb +.. facet:: + :name: programming_language + :values: shell + .. contents:: On this page :local: :backlinks: none @@ -129,8 +133,12 @@ property with the following fields: .. data:: writeConcernError - Document that describes the error related to write concern and - contains the fields: + Document describing errors that relate to the write concern. + + .. |cmd| replace:: :method:`BulkWriteResult` + .. include:: /includes/fact-bulk-writeConcernError-mongos + + The ``writeConcernError`` documents contains the following fields: .. data:: writeConcernError.code diff --git a/source/reference/method/ClientEncryption.createEncryptedCollection.txt b/source/reference/method/ClientEncryption.createEncryptedCollection.txt index b2baa7bb238..05e67fdb88a 100644 --- a/source/reference/method/ClientEncryption.createEncryptedCollection.txt +++ b/source/reference/method/ClientEncryption.createEncryptedCollection.txt @@ -93,7 +93,7 @@ Example ------- The following example uses a locally managed KMS for the -queryable encryption configuration. +Queryable Encryption configuration. .. 
procedure:: :style: normal diff --git a/source/reference/method/ClientEncryption.encrypt.txt b/source/reference/method/ClientEncryption.encrypt.txt index 8a05a628452..796850f2d8b 100644 --- a/source/reference/method/ClientEncryption.encrypt.txt +++ b/source/reference/method/ClientEncryption.encrypt.txt @@ -54,7 +54,7 @@ Syntax that identifies a specific data encryption key. If the data encryption key does not exist in the key vault configured for the database connection, :method:`~ClientEncryption.encrypt()` - returns an error. See :ref:`field-level-encryption-keyvault` + returns an error. See :ref:`qe-reference-key-vault` for more information on key vaults and data encryption keys. * - ``value`` @@ -136,7 +136,6 @@ BSON types: - ``bool`` - ``object`` - ``array`` -- ``javascriptWithScope`` (*Deprecated in MongoDB 4.4*) Examples -------- @@ -205,7 +204,7 @@ Queryable Encryption ~~~~~~~~~~~~~~~~~~~~ The following example uses a locally managed KMS for the -queryable encryption configuration. +Queryable Encryption configuration. .. procedure:: :style: normal diff --git a/source/reference/method/Date.txt b/source/reference/method/Date.txt index bd24c507a0b..b4f083e4490 100644 --- a/source/reference/method/Date.txt +++ b/source/reference/method/Date.txt @@ -4,6 +4,9 @@ Date() and Datetime .. default-domain:: mongodb +.. meta:: + :description: Use the date method to return a new date. You can specify a date to return or return the current date. + .. facet:: :name: programming_language :values: shell diff --git a/source/reference/method/KeyVault.createKey.txt b/source/reference/method/KeyVault.createKey.txt index fbbed6428e4..b06a06ef1e9 100644 --- a/source/reference/method/KeyVault.createKey.txt +++ b/source/reference/method/KeyVault.createKey.txt @@ -50,29 +50,29 @@ KeyVault.createKey() - *Required* The :ref:`Key Management Service (KMS) - ` to use for retrieving the + ` to use for retrieving the Customer Master Key (CMK). Accepts the following parameters: - ``aws`` for :ref:`Amazon Web Services KMS - `. Requires specifying a + `. Requires specifying a Customer Master Key (CMK) string for ``customerMasterKey``. - ``azure`` for :ref:`Azure Key Vault - `. Requires + `. Requires specifying a Customer Master Key (CMK) document for ``customerMasterKey``. .. versionadded:: 5.0 - ``gcp`` for :ref:`Google Cloud Platform KMS - `. Requires specifying a + `. Requires specifying a Customer Master Key (CMK) document for ``customerMasterKey``. .. versionadded:: 5.0 - ``local`` for a :ref:`locally managed key - `. + `. If the :method:`database connection ` was not configured with the specified KMS, data encryption key @@ -89,13 +89,13 @@ KeyVault.createKey() Provide the CMK as follows depending on your KMS provider: - For the :ref:`Amazon Web Services KMS - `, specify the full + `, specify the full `Amazon Resource Name (ARN) `__ of the master key as a single string. - For the :ref:`Azure Key Vault - ` KMS, specify a + ` KMS, specify a document containing the following key value pairs: - ``keyName`` - The `Azure Key Vault Name @@ -108,7 +108,7 @@ KeyVault.createKey() .. 
versionadded:: 5.0 - For the :ref:`Google Cloud Platform KMS - `, specify a + `, specify a document containing the following key value pairs: - ``projectId`` - The GCP project name diff --git a/source/reference/method/KeyVault.getKey.txt b/source/reference/method/KeyVault.getKey.txt index 97323ba26aa..52f60776a87 100644 --- a/source/reference/method/KeyVault.getKey.txt +++ b/source/reference/method/KeyVault.getKey.txt @@ -47,7 +47,7 @@ Example ------- The following example uses a :ref:`locally managed KMS -` for the client-side field level +` for the client-side field level encryption configuration. .. include:: /includes/csfle-connection-boilerplate-example.rst diff --git a/source/reference/method/KeyVault.getKeys.txt b/source/reference/method/KeyVault.getKeys.txt index 225c264a11f..e5f268fcd72 100644 --- a/source/reference/method/KeyVault.getKeys.txt +++ b/source/reference/method/KeyVault.getKeys.txt @@ -45,7 +45,7 @@ Example ------- The following example uses a :ref:`locally managed KMS -` for the client-side field level +` for the client-side field level encryption configuration. .. include:: /includes/csfle-connection-boilerplate-example.rst diff --git a/source/reference/method/KeyVault.rewrapManyDataKey.txt b/source/reference/method/KeyVault.rewrapManyDataKey.txt index 02bf22fdb6e..5962948747c 100644 --- a/source/reference/method/KeyVault.rewrapManyDataKey.txt +++ b/source/reference/method/KeyVault.rewrapManyDataKey.txt @@ -18,7 +18,7 @@ KeyVault.rewrapManyDataKey() and re-encrypts them with a new {+cmk-long+} ({+cmk-abbr-no-hover+}). Use this method to rotate the {+cmk-abbr-no-hover+} that encrypts your {+dek-abbr-no-hover+}s. To learn more about {+cmk-abbr-no-hover+}s - and {+dek-abbr-no-hover+}s, see :ref:``. + and {+dek-abbr-no-hover+}s, see :ref:``. You specify a {+cmk-abbr-no-hover+} through the ``masterKey`` parameter. If you do not include a ``masterKey`` argument, the method decrypts diff --git a/source/reference/method/Mongo.getURI.txt b/source/reference/method/Mongo.getURI.txt new file mode 100644 index 00000000000..51196c93719 --- /dev/null +++ b/source/reference/method/Mongo.getURI.txt @@ -0,0 +1,55 @@ +============== +Mongo.getURI() +============== + +.. default-domain:: mongodb + +.. contents:: On this page + :local: + :backlinks: none + :depth: 1 + :class: singlecol + +Definition +---------- + +.. method:: Mongo.getURI() + + :returns: The connection string for the current active connection. + + See the :ref:`mongodb-uri` for more information. + +Syntax +------ + +The command takes the following form: + +.. code-block:: javascript + + db.getMongo().getURI() + +You can use this method to return a URI string for a connection, which +you can then use to create a new ``Mongo()`` instance: + +.. code-block:: javascript + + new Mongo(db.getMongo().getURI()) + +Example +------- + +To return the current connection string, enter the following: + +.. io-code-block:: + :copyable: true + + .. input:: + :language: js + + db.getMongo().getURI() + + .. output:: + :language: js + + mongodb://127.0.0.1:27019/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+2.1.4 + diff --git a/source/reference/method/Mongo.getWriteConcern.txt b/source/reference/method/Mongo.getWriteConcern.txt index 39a2cdf038b..4365fedfc99 100644 --- a/source/reference/method/Mongo.getWriteConcern.txt +++ b/source/reference/method/Mongo.getWriteConcern.txt @@ -64,7 +64,7 @@ The fields are: `. 
* - ``wtimeout`` - - The number of milliseconds to wait for acknowledgement of the + - The number of milliseconds to wait for acknowledgment of the write concern. ``wtimeout`` is only applicable when ``w`` has a value greater than ``1``. diff --git a/source/reference/method/Mongo.setReadPref.txt b/source/reference/method/Mongo.setReadPref.txt index 9de4a83b7ba..73923385ee4 100644 --- a/source/reference/method/Mongo.setReadPref.txt +++ b/source/reference/method/Mongo.setReadPref.txt @@ -82,19 +82,16 @@ Parameters document ``{ }`` is equivalent to specifying ``{ enabled: true }``. - Hedged reads are available starting in MongoDB 4.4 for sharded - clusters. To use hedged reads, the :binary:`~bin.mongos` must - have :parameter:`enabled support ` for hedged - reads (the default) and the non-``primary`` :doc:`read - preferences ` must enable the use of - hedged reads. + Hedged reads are available for sharded clusters. To use hedged reads, + the :binary:`~bin.mongos` must have :parameter:`enabled support + ` for hedged reads (the default) and the + non-``primary`` :ref:`read preference ` must + enable the use of hedged reads. Read preference :readmode:`nearest` enables the use of hedged reads on sharded clusters by default; i.e. by default, has ``{ enabled: true }``. - .. versionadded:: 4.4 - :method:`Mongo.setReadPref()` does not support the :ref:`replica-set-read-preference-max-staleness` option for read preference. @@ -172,15 +169,14 @@ See :ref:`read-pref-order-matching` for details. Specify Hedged Read ~~~~~~~~~~~~~~~~~~~ -Starting in MongoDB 4.4 for sharded clusters, you can enable -:ref:`hedged reads ` for non-primary :doc:`read -preferences `. To use hedged reads, the -:binary:`~bin.mongos` must have :parameter:`enabled support +For sharded clusters, you can enable :ref:`hedged reads ` +for non-primary :ref:`read preferences `. To use hedged +reads, the :binary:`~bin.mongos` must have :parameter:`enabled support ` for hedged reads (the default) and the non-``primary`` :ref:`read preferences ` must enable the use of hedged reads. -To target secondaries on 4.4+ sharded cluster using hedged reads, +To target secondaries on sharded clusters using hedged reads, include both the :ref:`mode ` and the :ref:`hedgeOptions `, as in the following examples: diff --git a/source/reference/method/Mongo.setWriteConcern.txt b/source/reference/method/Mongo.setWriteConcern.txt index 9d136e5d8dc..d2f4a3c4060 100644 --- a/source/reference/method/Mongo.setWriteConcern.txt +++ b/source/reference/method/Mongo.setWriteConcern.txt @@ -57,7 +57,7 @@ The fields are: `. * - ``wtimeout`` - - The number of milliseconds to wait for acknowledgement of the + - The number of milliseconds to wait for acknowledgment of the write concern. ``wtimeout`` is only applicable when ``w`` has a value greater than ``1``. @@ -68,7 +68,7 @@ In the following example: - Two :binary:`~bin.mongod` or :binary:`~bin.mongod` instances must acknowledge writes. -- There is a ``1`` second timeout to wait for write acknowledgements. +- There is a ``1`` second timeout to wait for write acknowledgments. .. code-block:: javascript diff --git a/source/reference/method/Mongo.startSession.txt b/source/reference/method/Mongo.startSession.txt index 47749ed2ac2..994c013540c 100644 --- a/source/reference/method/Mongo.startSession.txt +++ b/source/reference/method/Mongo.startSession.txt @@ -22,6 +22,8 @@ Definition .. |dbcommand| replace:: :dbcommand:`startSession` command .. include:: /includes/fact-mongosh-shell-method-alt + .. 
include:: /includes/client-sessions-reuse.rst + The :method:`~Mongo.startSession()` method can take a document with session options. The options available are: @@ -37,7 +39,7 @@ Definition - Boolean. Enables or disables :ref:`causal consistency ` for the session. :method:`Mongo.startSession()` enables ``causalConsistency`` - by default. + by default. Mutually exclusive with ``snapshot``. After starting a session, you cannot modify its ``causalConsistency`` setting. @@ -85,6 +87,12 @@ Definition After starting a session, you cannot modify its ``retryWrites`` setting. + * - snapshot + + - Boolean. Enables :ref:`snapshot reads ` + for the session for MongoDB 5.0+ deployments. Mutually + exclusive with ``causalConsistency``. + * - writeConcern - Document. Specifies the :ref:`write concern `. @@ -93,6 +101,19 @@ Definition :method:`Session.getOptions().setWriteConcern() `. +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Examples -------- diff --git a/source/reference/method/Mongo.txt b/source/reference/method/Mongo.txt index 9f44d9529f4..e635ce3f44e 100644 --- a/source/reference/method/Mongo.txt +++ b/source/reference/method/Mongo.txt @@ -34,10 +34,15 @@ Description * - ``host`` - - string + - string or ``Mongo`` instance - - *Optional*. The host, either in the form of ```` or - ``<:port>``. + - *Optional*. Host or connection string. + + The host can either be a connection string or in the form of ```` or + ``<:port>``. The connection string can be in the form of + a ``Mongo`` instance. If you specify a ``Mongo`` instance, the + :method:`Mongo()` constructor uses the connection string of + the specified Mongo instance. If omitted, :method:`Mongo` instantiates a connection to the localhost interface on the default port ``27017``. @@ -102,7 +107,8 @@ syntax: "keyVaultNamespace" : "", "kmsProviders" : , "schemaMap" : , - "bypassAutoEncryption" : + "bypassAutoEncryption" : , + "tlsOptions": } The ``{+auto-encrypt-options+}`` document takes the @@ -155,7 +161,7 @@ following parameters: - document - *(Required)* The :ref:`Key Management Service (KMS) - ` used by client-side field level + ` used by client-side field level encryption for managing a Customer Master Key (CMK). Client-side field level encryption uses the CMK for encrypting and decrypting data encryption keys. @@ -163,19 +169,18 @@ following parameters: {+csfle+} supports the following KMS providers: - - :ref:`Amazon Web Services KMS ` - - :ref:`Azure Key Vault ` - - :ref:`Google Cloud Platform KMS ` - - :ref:`Locally Managed Key ` + - :ref:`Amazon Web Services KMS ` + - :ref:`Azure Key Vault ` + - :ref:`Google Cloud Platform KMS ` + - :ref:`Locally Managed Key ` If possible, consider defining the credentials provided in ``kmsProviders`` as environment variables, and then passing them to :binary:`~bin.mongosh` using the :option:`--eval ` option. This minimizes the chances of credentials - leaking into logs. See - :ref:`Create a Data Key + leaking into logs. See :ref:`Create a Data Key ` for examples of - this approach for each supported KMS. + this approach for each supported KMS. Amazon Web Services KMS .. 
include:: /includes/extracts/csfle-aws-kms-4.2.0-4.2.1-broken.rst @@ -195,6 +200,17 @@ following parameters: The specified ``accessKeyId`` must correspond to an IAM user with all ``List`` and ``Read`` permissions for the KMS service. + In some environments, the AWS SDK can pick up credentials + automatically. To enable AWS KMS usage without providing + explicit credentials to the AWS SDK, you can pass the + ``kmsProvider`` details to the ``Mongo()`` constructor. + + .. code-block:: json + + { + "kmsProviders" : { "aws" : { } } + } + Azure Key Vault Specify the ``azure`` document to ``kmsProviders`` with the following fields: @@ -247,7 +263,8 @@ following parameters: - *(Optional)* The automatic client-side field level encryption rules specified using the JSON schema Draft 4 standard syntax and - encryption-specific keywords. + encryption-specific keywords. This option is mutually exclusive + with ``explicitEncryptionOnly``. For complete documentation, see :ref:`csfle-fundamentals-create-schema`. @@ -268,6 +285,21 @@ following parameters: indexed fields without the ``crypt_shared`` library. For details, see :ref:`qe-reference-mongo-client`. + * - ``explicitEncryptionOnly`` + - boolean + - *(Optional)* Specify ``true`` to use neither automatic encryption + nor automatic decryption. You can use :method:`getKeyVault()` and + :method:`getClientEncryption()` to perform explicit + encryption. This option is mutually exclusive with ``schemaMap``. + If omitted, defaults to ``false``. + + * - ``tlsOptions`` + - object + - *(Optional)* The path to the TLS client certificate and private key file in PEM format + (``tlsCertificateKeyFile``), TLS client certificate and private key file password + (``tlsCertificateKeyFilePassword``), or TLS certificate authority file + (``tlsCAFile``) to use to connect to the KMS in PEM format. To learn more + about these options, see :mongosh:`TLS Options `. .. _mongo-api-options: @@ -425,6 +457,23 @@ The specified automatic encryption rules encrypt the ``taxid`` and algorithm. Only clients configured for the correct KMS *and* access to the specified data encryption key can decrypt the field. +The following operation creates a new connection object from within a +:binary:`~bin.mongosh` session. The ``mongo.tlsOptions`` option enables +a connection using KMIP as the KMS provider: + +.. code-block:: javascript + + var csfleConnection = { + keyVaultNamespace: "encryption.__keyVault", + kmsProviders: { kmip: { endpoint: "kmip.example.com:123" } }, + tlsOptions: { kmip: { tlsCertificateKeyFile: "/path/to/client/cert-and-key-bundle.pem" } } + } + + cluster = Mongo( + "mongodb://mymongo.example.net:27017/?replicaSet=myMongoCluster", + csfleConnection + ); + See :doc:`/reference/method/js-client-side-field-level-encryption` for a complete list of client-side field level encryption methods. diff --git a/source/reference/method/ObjectId.toString.txt b/source/reference/method/ObjectId.toString.txt index 7f872164352..8c05f541911 100644 --- a/source/reference/method/ObjectId.toString.txt +++ b/source/reference/method/ObjectId.toString.txt @@ -4,6 +4,9 @@ ObjectId.toString() .. default-domain:: mongodb +.. meta:: + :description: Return the string representation of an ObjectId. + .. facet:: :name: programming_language :values: shell diff --git a/source/reference/method/ObjectId.txt b/source/reference/method/ObjectId.txt index 37f2ca05901..c67146767c0 100644 --- a/source/reference/method/ObjectId.txt +++ b/source/reference/method/ObjectId.txt @@ -6,9 +6,8 @@ ObjectId() .. 
default-domain:: mongodb -.. facet:: - :name: programming_language - :values: shell +.. meta:: + :description: Create a new ObjectId. .. facet:: :name: programming_language @@ -27,6 +26,8 @@ Description .. method:: ObjectId() + .. include:: /includes/fact-mongosh-shell-method + Returns a new :ref:`objectid`. The 12-byte :ref:`objectid` consists of: @@ -121,6 +122,49 @@ The method returns: 507f191e810c19729de860ea +Specify a Date +~~~~~~~~~~~~~~ + +You can use a custom :ref:`document-bson-type-date` to specify an ObjectId. + +.. procedure:: + :style: normal + + .. step:: Set a variable for your specified date + + Internally, Date objects are stored as signed + 64-bit integer that represents the number of milliseconds since the + :wikipedia:`Unix epoch`. To learn more, see :method:`Date()`. + + .. code-block:: javascript + :copyable: true + + myDate = new Date( "2024-01-01" ) + + .. step:: Convert your Date object to seconds + + .. code-block:: javascript + :copyable: true + + timestamp = Math.floor( myDate / 1000 ) + + .. step:: Set your new ObjectId with ``timestamp`` as the argument + + You can verify the Date by using :method:`ObjectId.getTimestamp()`. + + .. io-code-block:: + :copyable: true + + .. input:: + :language: javascript + + newObjectId = ObjectId(timestamp) + + .. output:: + :language: javascript + + ObjectId("6592008029c8c3e4dc76256c") + Specify an Integer String ~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -161,4 +205,3 @@ unique, 24 character hexadecimal value when you call .. seealso:: :ref:`ObjectId BSON Type ` - diff --git a/source/reference/method/PlanCache.list.txt b/source/reference/method/PlanCache.list.txt index 244f113e218..7c7e4033ff9 100644 --- a/source/reference/method/PlanCache.list.txt +++ b/source/reference/method/PlanCache.list.txt @@ -15,9 +15,7 @@ Definition .. method:: PlanCache.list() - .. versionadded:: 4.4 - - Returns an array of :doc:`plan cache ` entries + Returns an array of :ref:`plan cache ` entries for a collection. The method is only available from the :method:`plan cache object @@ -249,7 +247,7 @@ associated with the following shapes: 1.5002 ], "indexFilterSet" : false, - "estimatedSizeBytes" : NumberLong(3160), // Available starting in MongoDB 5.0, 4.4.3, 4.2.12, 4.0.23, 3.6.23 + "estimatedSizeBytes" : NumberLong(3160), // Available starting in MongoDB 5.0 "host" : "mongodb1.example.net:27018", "shard" : "shardA" // Available if run on sharded cluster }, @@ -300,15 +298,11 @@ For details on the output, see :ref:`$planCacheStats output List Query Shapes ~~~~~~~~~~~~~~~~~ -MongoDB 4.4 removes the deprecated ``planCacheListQueryShapes`` command -and its helper method ``PlanCache.listQueryShapes()``. - -As an alternative, you can use the :method:`PlanCache.list()` to obtain -a list of all of the query shapes for which there is a cached plan. For -example, the following operation passes in a pipeline with a -:pipeline:`$project` stage to only output the :ref:`createdFromQuery -` field and the :ref:`queryHash -` field. +To obtain a list of all of the query shapes for which there is a cached plan, +you can use the :method:`PlanCache.list()`. For example, the following operation +passes in a pipeline with a :pipeline:`$project` stage to only output the +:ref:`createdFromQuery ` field and the +:ref:`queryHash ` field. .. 
code-block:: javascript @@ -523,7 +517,7 @@ The operation returns the following: 1.5002 ], "indexFilterSet" : false, - "estimatedSizeBytes" : NumberLong(3160), // Available starting in MongoDB 5.0, 4.4.3, 4.2.12, 4.0.23, 3.6.23 + "estimatedSizeBytes" : NumberLong(3160), // Available starting in MongoDB 5.0 "host" : "mongodb1.example.net:27018", "shard" : "shardA" // Available if run on sharded cluster } diff --git a/source/reference/method/Session.abortTransaction.txt b/source/reference/method/Session.abortTransaction.txt index 16b00d97262..9f5500da17b 100644 --- a/source/reference/method/Session.abortTransaction.txt +++ b/source/reference/method/Session.abortTransaction.txt @@ -29,6 +29,19 @@ Definition .. include:: /includes/fact-mongosh-shell-method-alt +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Behavior -------- @@ -102,7 +115,7 @@ or in committing the transaction, the session aborts the transaction. break; } catch (error) { // If transient error, retry the whole transaction - if ( error.hasOwnProperty("errorLabels") && error.errorLabels.includes("TransientTransactionError") ) { + if (error?.errorLabels?.includes("TransientTransactionError")) { print("TransientTransactionError, retrying transaction ..."); continue; } else { @@ -122,7 +135,7 @@ or in committing the transaction, the session aborts the transaction. break; } catch (error) { // Can retry commit - if (error.hasOwnProperty("errorLabels") && error.errorLabels.includes("UnknownTransactionCommitResult") ) { + if (error?.errorLabels?.includes("UnknownTransactionCommitResult") ) { print("UnknownTransactionCommitResult, retrying commit operation ..."); continue; } else { diff --git a/source/reference/method/Session.commitTransaction.txt b/source/reference/method/Session.commitTransaction.txt index 79aa83efe6f..4e340c536ea 100644 --- a/source/reference/method/Session.commitTransaction.txt +++ b/source/reference/method/Session.commitTransaction.txt @@ -26,6 +26,21 @@ Definition .. include:: /includes/fact-mongosh-shell-method-alt + +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + + Behavior -------- @@ -101,7 +116,7 @@ as a single transaction. break; } catch (error) { // If transient error, retry the whole transaction - if ( error.hasOwnProperty("errorLabels") && error.errorLabels.includes("TransientTransactionError") ) { + if (error?.errorLabels?.includes("TransientTransactionError") ) { print("TransientTransactionError, retrying transaction ..."); continue; } else { @@ -121,7 +136,7 @@ as a single transaction. 
break; } catch (error) { // Can retry commit - if (error.hasOwnProperty("errorLabels") && error.errorLabels.includes("UnknownTransactionCommitResult") ) { + if (error?.errorLabels?.includes("UnknownTransactionCommitResult") ) { print("UnknownTransactionCommitResult, retrying commit operation ..."); continue; } else { diff --git a/source/reference/method/Session.startTransaction.txt b/source/reference/method/Session.startTransaction.txt index 70bd4dca020..f045ef48241 100644 --- a/source/reference/method/Session.startTransaction.txt +++ b/source/reference/method/Session.startTransaction.txt @@ -17,7 +17,14 @@ Definition Starts a :ref:`multi-document transaction ` associated with the session. At any given time, you can have at most - one open transaction for a session. + one open transaction for a session. + + .. note:: + + This operation is a no-op. The transaction won't start on the + server until the first command is sent on the session. Therefore, + the snapshot time of the transaction won't be set until the first + command is sent on the session. .. include:: /includes/transaction-support @@ -71,7 +78,7 @@ Definition If you commit using :writeconcern:`"w: 1" <\>` write concern, your transaction can be rolled back during the - :doc:`failover process `. + :ref:`failover process `. For MongoDB Drivers, transactions use the client-level write concern as the default. @@ -168,7 +175,7 @@ as a single transaction. break; } catch (error) { // If transient error, retry the whole transaction - if ( error.hasOwnProperty("errorLabels") && error.errorLabels.includes("TransientTransactionError") ) { + if (error?.errorLabels?.includes("TransientTransactionError") ) { print("TransientTransactionError, retrying transaction ..."); continue; } else { @@ -188,7 +195,7 @@ as a single transaction. break; } catch (error) { // Can retry commit - if (error.hasOwnProperty("errorLabels") && error.errorLabels.includes("UnknownTransactionCommitResult") ) { + if (error?.errorLabels?.includes("UnknownTransactionCommitResult") ) { print("UnknownTransactionCommitResult, retrying commit operation ..."); continue; } else { diff --git a/source/reference/method/Session.withTransaction.txt b/source/reference/method/Session.withTransaction.txt index bfb730f80a2..2b4e80cb4a9 100644 --- a/source/reference/method/Session.withTransaction.txt +++ b/source/reference/method/Session.withTransaction.txt @@ -34,6 +34,18 @@ Definition .. |dbcommand| replace:: :dbcommand:`commitTransaction` command .. include:: /includes/fact-mongosh-shell-method-alt +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst Behavior -------- diff --git a/source/reference/method/WriteResult.txt b/source/reference/method/WriteResult.txt index bd538e376b7..9eac52c9841 100644 --- a/source/reference/method/WriteResult.txt +++ b/source/reference/method/WriteResult.txt @@ -4,6 +4,10 @@ WriteResult() .. default-domain:: mongodb +.. facet:: + :name: programming_language + :values: shell + .. contents:: On this page :local: :backlinks: none @@ -87,11 +91,21 @@ The :method:`WriteResult` has the following properties: A description of the error. + .. 
data:: WriteResult.writeError.errInfo + + A document that contains information regarding any write errors, + excluding write concern errors, that were encountered during the + write operation. When an operation fails document validation, the + server produces an error under this field explaining why the + document did not match against the collection's validator + expression. .. data:: WriteResult.writeConcernError - A document that contains information regarding any write concern errors encountered - during the write operation. + Document describing errors that relate to the write concern. + + .. |cmd| replace:: :method:`WriteResult` + .. include:: /includes/fact-bulk-writeConcernError-mongos .. data:: WriteResult.writeConcernError.code @@ -103,8 +117,6 @@ The :method:`WriteResult` has the following properties: .. data:: WriteResult.writeConcernError.errInfo.writeConcern - .. versionadded:: 4.4 - .. include:: /includes/fact-errInfo-wc.rst .. data:: WriteResult.writeConcernError.errInfo.writeConcern.provenance diff --git a/source/reference/method/convertShardKeyToHashed.txt b/source/reference/method/convertShardKeyToHashed.txt index 3ddf480df7e..b61064209e8 100644 --- a/source/reference/method/convertShardKeyToHashed.txt +++ b/source/reference/method/convertShardKeyToHashed.txt @@ -18,7 +18,7 @@ Description Returns the hashed value for the input. The :method:`convertShardKeyToHashed()` method uses the same hashing function as the hashed index and can be used to see what the - :doc:`hashed value ` would be for a key. + :ref:`hashed value ` would be for a key. Example ------- diff --git a/source/reference/method/cursor.allowDiskUse.txt b/source/reference/method/cursor.allowDiskUse.txt index bde6bbedc35..c40332ae1df 100644 --- a/source/reference/method/cursor.allowDiskUse.txt +++ b/source/reference/method/cursor.allowDiskUse.txt @@ -13,8 +13,6 @@ cursor.allowDiskUse() Definition ---------- -.. versionadded:: 4.4 - .. method:: cursor.allowDiskUse() @@ -75,7 +73,7 @@ documentation on blocking sorts and sort index use, see To check if MongoDB must perform an blocking sort, append :method:`cursor.explain()` to the query and check the -:doc:`explain results `. If the query plan +:ref:`explain results `. If the query plan contains a ``SORT`` stage, then MongoDB must perform an blocking sort operation subject to the 100 megabyte memory limit. diff --git a/source/reference/method/cursor.batchSize.txt b/source/reference/method/cursor.batchSize.txt index 0269c6e9a31..8cea5216ab0 100644 --- a/source/reference/method/cursor.batchSize.txt +++ b/source/reference/method/cursor.batchSize.txt @@ -17,10 +17,8 @@ Definition .. method:: cursor.batchSize(size) - .. include:: /includes/fact-mongosh-shell-method.rst - Specifies the number of documents to return in each batch of the response from the MongoDB instance. In most cases, modifying the batch size will @@ -28,35 +26,49 @@ Definition most :driver:`drivers ` return results as if MongoDB returned a single batch. - The :method:`~cursor.batchSize()` method takes the - following parameter: + .. note:: + + If the batch size is too large, the cursor allocates more + resources than it requires, which can negatively impact + query performance. If the batch size is too small, the + cursor requires more network round trips to retrieve the + query results, which can negatively impact query + performance. + Adjust ``batchSize`` to a value appropriate to your + database, load, and application needs. 
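
   For example, one way to observe batching from :binary:`~bin.mongosh` is
   sketched below. This is an illustrative sketch only, using the page's
   ``inventory`` example collection and standard cursor helpers; adjust the
   collection name and batch size for your own deployment.

   .. code-block:: javascript

      // Request batches of 50 documents
      var cursor = db.inventory.find().batchSize( 50 )

      cursor.next()              // retrieves the first batch from the server
      cursor.objsLeftInBatch()   // number of documents left in the current batch

   A larger ``batchSize`` reduces the number of :dbcommand:`getMore` round
   trips needed to exhaust the cursor, at the cost of more memory held per
   batch.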
+ + The :method:`~cursor.batchSize()` method takes the + following field: .. list-table:: :header-rows: 1 :widths: 20 20 80 - * - Parameter - + * - Field - Type - - Description * - ``size`` - - integer - - - The number of documents to return per batch. + - The initial number of documents to return for a batch. The + default initial batch size is 101 documents. Subsequent + batches are 16 megabytes. The default applies to drivers and + Mongo Shell. For details, see :ref:`cursor-batches`. Example ------- -The following example sets the batch size for the results of a query -(i.e. :method:`~db.collection.find()`) to ``10``. The -:method:`~cursor.batchSize()` method does not change the -output in :binary:`~bin.mongosh`, which, by default, iterates over the -first 20 documents. +The following example sets ``batchSize`` for the results of a query +(specifically, :method:`~db.collection.find()`) to ``10``: .. code-block:: javascript db.inventory.find().batchSize(10) + +Learn More +---------- + +- :ref:`cursor-batches` +- :method:`cursor.next()` +- :dbcommand:`getMore` diff --git a/source/reference/method/cursor.collation.txt b/source/reference/method/cursor.collation.txt index 99174e311bd..8bbf6dc3e94 100644 --- a/source/reference/method/cursor.collation.txt +++ b/source/reference/method/cursor.collation.txt @@ -58,7 +58,7 @@ Definition - Optional. The level of comparison to perform. Corresponds to `ICU Comparison Levels - `_. + `_. Possible values are: .. list-table:: @@ -107,7 +107,7 @@ Definition breaker. See `ICU Collation: Comparison Levels - `_ + `_ for details. @@ -131,7 +131,7 @@ Definition ``2``. The default is ``false``. For more information, see `ICU Collation: Case Level - `_. + `_. @@ -161,7 +161,7 @@ Definition - Default value. Similar to ``"lower"`` with slight differences. See - ``_ + ``_ for details of differences. @@ -208,7 +208,7 @@ Definition and are only distinguished at strength levels greater than 3. See `ICU Collation: Comparison Levels - `_ + `_ for more information. Default is ``"non-ignorable"``. @@ -275,7 +275,7 @@ Definition The default value is ``false``. See - ``_ for details. + ``_ for details. diff --git a/source/reference/method/cursor.comment.txt b/source/reference/method/cursor.comment.txt index 59dd90050bb..9d80f1feccc 100644 --- a/source/reference/method/cursor.comment.txt +++ b/source/reference/method/cursor.comment.txt @@ -57,7 +57,7 @@ find operation. This can make it easier to track a particular query in the following diagnostic outputs: - The :data:`system.profile <.system.profile>` -- The :data:`QUERY` :doc:`log ` component +- The :data:`QUERY` :ref:`log ` component - :method:`db.currentOp()` See :ref:`configure log verbosity ` for the @@ -103,7 +103,7 @@ The following is an excerpt from the } -:binary:`~bin.mongod` :doc:`log ` +:binary:`~bin.mongod` :ref:`log ` ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The following is an excerpt from the :binary:`~bin.mongod` log. It has been diff --git a/source/reference/method/cursor.count.txt b/source/reference/method/cursor.count.txt index 9c290ef3464..3442e0f27dc 100644 --- a/source/reference/method/cursor.count.txt +++ b/source/reference/method/cursor.count.txt @@ -59,16 +59,20 @@ Definition - boolean - - Optional. Specifies whether to consider the effects of the - :method:`cursor.skip()` and :method:`cursor.limit()` methods in the - count. By default, the :method:`~cursor.count()` method + - Optional. :binary:`~bin.mongosh` ignores any value you set for + this option. 
The value defaults to ``true``. + + The option specifies whether to consider the effects of the + :method:`cursor.skip()` and :method:`cursor.limit()` methods + in the count. By default, the :method:`~cursor.count()` method ignores the effects of the :method:`cursor.skip()` and - :method:`cursor.limit()`. Set ``applySkipLimit`` to ``true`` to - consider the effect of these methods. + :method:`cursor.limit()`. You must set ``applySkipLimit`` to + ``true`` to consider the effect of these methods. + .. note:: - - + The legacy :binary:`~bin.mongo` shell, which is now + deprecated, used your setting for this option. MongoDB also provides an equivalent :method:`db.collection.count()` as an alternative to the ``db.collection.find().count()`` diff --git a/source/reference/method/cursor.explain.txt b/source/reference/method/cursor.explain.txt index f02e859ef54..9f0416b2ef3 100644 --- a/source/reference/method/cursor.explain.txt +++ b/source/reference/method/cursor.explain.txt @@ -64,6 +64,8 @@ Definition Behavior -------- +.. include:: includes/explain-ignores-cache-plan.rst + .. _explain-cursor-method-verbosity: Verbosity Modes diff --git a/source/reference/method/cursor.hint.txt b/source/reference/method/cursor.hint.txt index c6f397228ca..b23c999638f 100644 --- a/source/reference/method/cursor.hint.txt +++ b/source/reference/method/cursor.hint.txt @@ -22,7 +22,7 @@ Definition Call this method on a query to override MongoDB's default index - selection and :doc:`query optimization process `. + selection and :ref:`query optimization process `. Use :method:`db.collection.getIndexes()` to return the list of current indexes on a collection. diff --git a/source/reference/method/cursor.noCursorTimeout.txt b/source/reference/method/cursor.noCursorTimeout.txt index 9230ad3dd10..0bdcecdb4d0 100644 --- a/source/reference/method/cursor.noCursorTimeout.txt +++ b/source/reference/method/cursor.noCursorTimeout.txt @@ -15,10 +15,8 @@ Definition .. method:: cursor.noCursorTimeout() - .. include:: /includes/fact-mongosh-shell-method.rst - Instructs the server to avoid closing a cursor automatically after a period of inactivity. diff --git a/source/reference/method/cursor.readPref.txt b/source/reference/method/cursor.readPref.txt index cc590543051..c3ceb0a1e2c 100644 --- a/source/reference/method/cursor.readPref.txt +++ b/source/reference/method/cursor.readPref.txt @@ -79,9 +79,9 @@ Parameters document ``{ }`` is equivalent to specifying ``{ enabled: true }``. - Hedged reads are available starting in MongoDB 4.4 for sharded - clusters. To use hedged reads, the :binary:`~bin.mongos` must - have :parameter:`enabled support ` for hedged + Hedged reads are available for sharded clusters. To use hedged reads, + the :binary:`~bin.mongos` must have + :parameter:`enabled support ` for hedged reads (the default) and the non-``primary`` :doc:`read preferences ` must enable the use of hedged reads. @@ -90,8 +90,6 @@ Parameters reads on sharded clusters by default; i.e. by default, has ``{ enabled: true }``. - .. versionadded:: 4.4 - :method:`~cursor.readPref()` does not support the :ref:`replica-set-read-preference-max-staleness` option for read preference. @@ -151,15 +149,14 @@ See :ref:`read-pref-order-matching` for details. Specify Hedged Read ~~~~~~~~~~~~~~~~~~~ -Starting in MongoDB 4.4 for sharded clusters, you can enable -:ref:`hedged reads ` for non-primary :doc:`read -preferences `. 
To use hedged reads, the -:binary:`~bin.mongos` must have :parameter:`enabled support +For sharded clusters, you can enable :ref:`hedged reads ` +for non-primary :ref:`read preference `. To use hedged +reads, the :binary:`~bin.mongos` must have :parameter:`enabled support ` for hedged reads (the default) and the non-``primary`` :ref:`read preferences ` must enable the use of hedged reads. -To target secondaries on 4.4+ sharded cluster using hedged reads, +To target secondaries on sharded clusters using hedged reads, include both the :ref:`mode ` and the :ref:`hedgeOptions `, as in the following examples: diff --git a/source/reference/method/cursor.returnKey.txt b/source/reference/method/cursor.returnKey.txt index 1324d827e61..aeffa810b89 100644 --- a/source/reference/method/cursor.returnKey.txt +++ b/source/reference/method/cursor.returnKey.txt @@ -20,10 +20,10 @@ Definition .. tip:: - Starting in MongoDB 4.4, :expression:`$meta` supports the keyword - ``"indexKey"`` to return index key metadata if an index is used. - The use of :expression:`{ $meta: "indexKey" } <$meta>` is - preferred over :method:`cursor.returnKey()`. + :expression:`$meta` supports the keyword ``"indexKey"`` to return index + key metadata if an index is used. The use of + :expression:`{ $meta: "indexKey" } <$meta>` is preferred over + :method:`cursor.returnKey()`. Modifies the cursor to return index keys rather than the documents. diff --git a/source/reference/method/cursor.sort.txt b/source/reference/method/cursor.sort.txt index 7fe1940d5b5..49044d3d10c 100644 --- a/source/reference/method/cursor.sort.txt +++ b/source/reference/method/cursor.sort.txt @@ -8,6 +8,9 @@ cursor.sort() :name: programming_language :values: shell +.. meta:: + :description: The MongoDB cursor sort method specifies the order of matching documents that a query returns. + .. contents:: On this page :local: :backlinks: none @@ -81,8 +84,6 @@ Limits Sort Consistency ~~~~~~~~~~~~~~~~ -.. versionchanged:: 4.4 - .. include:: /includes/fact-sort-consistency.rst Consider the following ``restaurant`` collection: @@ -179,13 +180,57 @@ The following sample document specifies a descending sort by the db.users.find( { $text: { $search: "operating" } }, - { score: { $meta: "textScore" }} // Optional starting in MongoDB 4.4 + { score: { $meta: "textScore" }} ).sort({ score: { $meta: "textScore" } }) The ``"textScore"`` metadata sorts in descending order. For more information, see :expression:`$meta` for details. +.. _sort-by-array: + +Sort by an Array Field +~~~~~~~~~~~~~~~~~~~~~~ + +When you sort by an array field, the following behaviors apply: + +- MongoDB uses the lowest or highest member of an array for sorting in + ascending or descending order, respectively. +- MongoDB ignores the query filter when selecting the elements to sort by. + +.. example:: + + Consider a collection named ``shoes`` with the following documents: + + .. code-block:: json + + db.shoes.insertMany([ + { _id: 'A', sizes: [7, 11] }, + { _id: 'B', sizes: [8, 9, 10] } + ]) + + The following queries sort the results by shoe size in ascending + and descending order. + + .. code-block:: json + + db.shoes.find().sort( { sizes: 1 } ) + db.shoes.find().sort( { sizes: -1 } ) + + Both of these queries return document with ``_id: 'A'`` first because + sizes ``7`` and ``11`` are the lowest and highest in the entries in + the ``sizes`` array, respectively. + + The following query finds shoes with sizes greater than 10 and sorts + the results by shoe size in ascending order. 
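+
+   .. code-block:: javascript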
+ + db.shoes.find( { sizes: { $gte: 10 } } ).sort( { sizes: 1 } ) + + This query returns document with ``_id: 'A'`` first even though the + filter includes a condition on ``sizes`` greater than ``7`` because + MongoDB ignores the query filter when selecting the elements to sort + by. + .. _sort-index-use: Sort and Index Use @@ -208,14 +253,14 @@ sort operations, see :ref:`sorting-with-indexes`. If MongoDB requires using more than 100 megabytes of system memory for the blocking sort operation, MongoDB returns an error *unless* the query -specifies :method:`cursor.allowDiskUse()` (*New in MongoDB 4.4*). +specifies :method:`cursor.allowDiskUse()`. :method:`~cursor.allowDiskUse()` allows MongoDB to use temporary files on disk to store data exceeding the 100 megabyte system memory limit while processing a blocking sort operation. To check if MongoDB must perform a blocking sort, append :method:`cursor.explain()` to the query and check the -:doc:`explain results `. If the query plan +:ref:`explain results `. If the query plan contains a ``SORT`` stage, then MongoDB must perform a blocking sort operation subject to the 100 megabyte memory limit. @@ -247,8 +292,7 @@ uses a top-k sort algorithm. This algorithm buffers the first ``k`` results (or last, depending on the sort order) seen so far by the underlying index or collection access. If at any point the memory footprint of these ``k`` results exceeds 100 megabytes, the query will -fail *unless* the query specifies :method:`cursor.allowDiskUse()` -(*New in MongoDB 4.4*). +fail *unless* the query specifies :method:`cursor.allowDiskUse()`. .. seealso:: diff --git a/source/reference/method/db.aggregate.txt b/source/reference/method/db.aggregate.txt index 265ba5c08b9..c5d75c72a53 100644 --- a/source/reference/method/db.aggregate.txt +++ b/source/reference/method/db.aggregate.txt @@ -142,6 +142,20 @@ Definition :pipeline:`$merge` stage. +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-limited-free.rst + +.. include:: /includes/fact-environments-onprem-only.rst + + Example ------- diff --git a/source/reference/method/db.auth.txt b/source/reference/method/db.auth.txt index 49ecd86e812..b35142ea4ad 100644 --- a/source/reference/method/db.auth.txt +++ b/source/reference/method/db.auth.txt @@ -26,60 +26,51 @@ Definition .. include:: /includes/extracts/4.4-changes-passwordPrompt.rst -Syntax ------- -The :method:`db.auth()` has the following syntax forms: +Compatibility +------------- -.. _db-auth-syntax-username-password: - -``db.auth(, )`` -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +.. |command| replace:: method -.. tabs:: +This method is available in deployments hosted in the following environments: - .. tab:: MongoDB 4.4 - :tabid: mdb-4-4 +.. include:: /includes/fact-environments-atlas-only.rst - Starting in MongoDB 4.4, you can either: - - - Omit the password to prompt the user to enter a password: +.. include:: /includes/fact-environments-atlas-support-no-free.rst - .. code-block:: javascript +.. include:: /includes/fact-environments-onprem-only.rst - db.auth( ) + +Syntax +------ - - Use :method:`passwordPrompt()` to prompt the user to enter - a password: - - .. code-block:: javascript +The :method:`db.auth()` has the following syntax forms: - db.auth( , passwordPrompt() ) +.. _db-auth-syntax-username-password: - - Specify a cleartext password. 
+``db.auth(, )`` +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - .. code-block:: javascript +You can either: - db.auth( , ) +- Omit the password to prompt the user to enter a password: - .. tab:: MongoDB 4.2 - :tabid: mdb-4-2 + .. code-block:: javascript - Starting in MongoDB 4.2, you can either: - - - Use :method:`passwordPrompt()` to prompt the user to enter - a password: + db.auth( ) - .. code-block:: javascript +- Use :method:`passwordPrompt()` to prompt the user to enter + a password: + + .. code-block:: javascript - db.auth( , passwordPrompt() ) + db.auth( , passwordPrompt() ) - - Specify a cleartext password: - - .. code-block:: javascript +- Specify a cleartext password. - db.auth( , ) + .. code-block:: javascript + db.auth( , ) .. _db-auth-syntax-user-document: @@ -201,8 +192,8 @@ To authenticate after connecting :binary:`~bin.mongosh`, issue use test db.auth( "myTestDBUser", passwordPrompt() ) -Starting in MongoDB 4.4, you can omit the ``password`` value entirely to -prompt the user to enter their password: +You can omit the ``password`` value entirely to prompt the user to enter their +password: .. code-block:: javascript diff --git a/source/reference/method/db.checkMetadataConsistency.txt b/source/reference/method/db.checkMetadataConsistency.txt index 0c6e00e2794..85f9f7c6d31 100644 --- a/source/reference/method/db.checkMetadataConsistency.txt +++ b/source/reference/method/db.checkMetadataConsistency.txt @@ -39,6 +39,18 @@ Definition which contains a document for each inconsistency found in the sharding metadata. +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst Syntax ------- diff --git a/source/reference/method/db.collection.aggregate.txt b/source/reference/method/db.collection.aggregate.txt index b2b65904521..5ae31f573f6 100644 --- a/source/reference/method/db.collection.aggregate.txt +++ b/source/reference/method/db.collection.aggregate.txt @@ -4,6 +4,9 @@ db.collection.aggregate() .. default-domain:: mongodb +.. meta:: + :description: Run an aggregation pipeline on a collection or view. + .. facet:: :name: programming_language :values: shell @@ -68,131 +71,20 @@ parameters: * - ``pipeline`` - array - A sequence of data aggregation operations or stages. See the - :ref:`aggregation pipeline operators ` - for details. + :ref:`aggregation pipeline operators ` + for details. - The method can still accept the pipeline stages as separate - arguments instead of as elements in an array; however, if you do - not specify the ``pipeline`` as an array, you cannot specify the - ``options`` parameter. + The method can still accept the pipeline stages as separate + arguments instead of as elements in an array; however, if you do + not specify the ``pipeline`` as an array, you cannot specify the + ``options`` parameter. * - ``options`` - document - Optional. Additional options that :method:`~db.collection.aggregate()` passes - to the :dbcommand:`aggregate` command. Available only if you - specify the ``pipeline`` as an array. - -The ``options`` document can contain the following fields and values: - -.. versionchanged:: 5.0 - -.. list-table:: - :header-rows: 1 - :widths: 20 20 80 - - * - Field - - Type - - Description - - * - ``explain`` - - boolean - - Optional. 
Specifies to return the information on the processing of the pipeline. See - :ref:`example-aggregate-method-explain-option` for an example. - - Not available in :ref:`multi-document transactions `. - - * - ``allowDiskUse`` - - boolean - - Optional. Enables writing to temporary files. When set to ``true``, aggregation - operations can write data to the :file:`_tmp` subdirectory in the - :setting:`~storage.dbPath` directory. See - :ref:`example-aggregate-method-external-sort` for an example. - - .. include:: /includes/extracts/4.2-changes-usedDisk.rst - - * - ``cursor`` - - document - - Optional. Specifies the *initial* batch size for the cursor. The value of the ``cursor`` - field is a document with the field ``batchSize``. See - :ref:`example-aggregate-method-initial-batch-size` for syntax and example. - - * - ``maxTimeMS`` - - non-negative integer - - Optional. Specifies a time limit in milliseconds for processing - operations on a cursor. If you do not specify a value for maxTimeMS, - operations will not time out. A value of ``0`` explicitly - specifies the default unbounded behavior. - - MongoDB terminates operations that exceed their allotted time limit - using the same mechanism as :method:`db.killOp()`. MongoDB only - terminates an operation at one of its designated :term:`interrupt - points `. - - * - ``bypassDocumentValidation`` - - boolean - - Optional. Applicable only if you specify the :pipeline:`$out` or :pipeline:`$merge` aggregation - stages. - - Enables :method:`db.collection.aggregate` to bypass document validation - during the operation. This lets you insert documents that do not - meet the validation requirements. - - * - ``readConcern`` - - document - - Optional. Specifies the :term:`read concern`. - - .. include:: /includes/fact-readConcern-syntax.rst - .. include:: /includes/fact-readConcern-option-description.rst - .. include:: /includes/extracts/4.2-changes-out-linearizable.rst - .. include:: /includes/extracts/4.2-changes-linearizable-merge-restriction.rst - - * - :ref:`collation ` - - document - - .. _method-collection-aggregate-collation: - - Optional. - - .. include:: /includes/extracts/collation-option.rst - - * - ``hint`` - - string or document - - Optional. The index to use for the aggregation. The index is on the initial - collection/view against which the aggregation is run. - - Specify the index either by the index name or by the index - specification document. - - .. note:: - - The ``hint`` does not apply to :pipeline:`$lookup` and - :pipeline:`$graphLookup` stages. - - * - ``comment`` - - string - - Optional. Users can specify an arbitrary string to help trace the operation - through the database profiler, currentOp, and logs. - - * - ``writeConcern`` - - document - - Optional. A document that expresses the :ref:`write concern ` - to use with the :pipeline:`$out` or :pipeline:`$merge` stage. - - Omit to use the default write concern with the :pipeline:`$out` or - :pipeline:`$merge` stage. - - * - ``let`` - - document - - .. _db.collection.aggregate-let-option: - - Optional. - - .. include:: /includes/let-variables-syntax.rst - .. include:: /includes/let-variables-aggregate-syntax-note.rst - - For a complete example using ``let`` and variables, see - :ref:`db.collection.aggregate-let-example`. - - .. versionadded:: 5.0 + to the :dbcommand:`aggregate` command. Available only if you + specify the ``pipeline`` as an array. To see available options, + see `AggregateOptions `__. Behavior -------- @@ -295,11 +187,13 @@ following documents: .. 
code-block:: javascript - { _id: 1, cust_id: "abc1", ord_date: ISODate("2012-11-02T17:04:11.102Z"), status: "A", amount: 50 } - { _id: 2, cust_id: "xyz1", ord_date: ISODate("2013-10-01T17:04:11.102Z"), status: "A", amount: 100 } - { _id: 3, cust_id: "xyz1", ord_date: ISODate("2013-10-12T17:04:11.102Z"), status: "D", amount: 25 } - { _id: 4, cust_id: "xyz1", ord_date: ISODate("2013-10-11T17:04:11.102Z"), status: "D", amount: 125 } - { _id: 5, cust_id: "abc1", ord_date: ISODate("2013-11-12T17:04:11.102Z"), status: "A", amount: 25 } + db.orders.insertMany( [ + { _id: 1, cust_id: "abc1", ord_date: ISODate("2012-11-02T17:04:11.102Z"), status: "A", amount: 50 }, + { _id: 2, cust_id: "xyz1", ord_date: ISODate("2013-10-01T17:04:11.102Z"), status: "A", amount: 100 }, + { _id: 3, cust_id: "xyz1", ord_date: ISODate("2013-10-12T17:04:11.102Z"), status: "D", amount: 25 }, + { _id: 4, cust_id: "xyz1", ord_date: ISODate("2013-10-11T17:04:11.102Z"), status: "D", amount: 125 }, + { _id: 5, cust_id: "abc1", ord_date: ISODate("2013-11-12T17:04:11.102Z"), status: "A", amount: 25 } + ] ) Group by and Calculate a Sum ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -312,18 +206,20 @@ descending order: .. code-block:: javascript - db.orders.aggregate([ - { $match: { status: "A" } }, - { $group: { _id: "$cust_id", total: { $sum: "$amount" } } }, - { $sort: { total: -1 } } - ]) + db.orders.aggregate( [ + { $match: { status: "A" } }, + { $group: { _id: "$cust_id", total: { $sum: "$amount" } } }, + { $sort: { total: -1 } } + ] ) The operation returns a cursor with the following documents: .. code-block:: javascript - { "_id" : "xyz1", "total" : 100 } - { "_id" : "abc1", "total" : 75 } + [ + { _id: "xyz1", total: 100 }, + { _id: "abc1", total: 75 } + ] .. include:: /includes/note-mongo-shell-automatically-iterates-cursor.rst @@ -338,11 +234,11 @@ pipeline. .. code-block:: javascript - db.orders.explain().aggregate([ + db.orders.explain().aggregate( [ { $match: { status: "A" } }, { $group: { _id: "$cust_id", total: { $sum: "$amount" } } }, { $sort: { total: -1 } } - ]) + ] ) The operation returns a document that details the processing of the aggregation pipeline. For example, the document may show, among other @@ -413,20 +309,22 @@ Specify a Collation .. include:: /includes/extracts/collation-versionadded.rst -A collection ``myColl`` has the following documents: +A collection ``restaurants`` has the following documents: .. code-block:: javascript - { _id: 1, category: "café", status: "A" } - { _id: 2, category: "cafe", status: "a" } - { _id: 3, category: "cafE", status: "a" } + db.restaurants.insertMany( [ + { _id: 1, category: "café", status: "A" }, + { _id: 2, category: "cafe", status: "a" }, + { _id: 3, category: "cafE", status: "a" } + ] ) The following aggregation operation includes the :ref:`collation ` option: .. code-block:: javascript - db.myColl.aggregate( + db.restaurants.aggregate( [ { $match: { status: "A" } }, { $group: { _id: "$category", count: { $sum: 1 } } } ], { collation: { locale: "fr", strength: 1 } } ); @@ -441,11 +339,11 @@ For descriptions on the collation fields, see Hint an Index ~~~~~~~~~~~~~ -Create a collection ``foodColl`` with the following documents: +Create a collection ``food`` with the following documents: .. 
code-block:: javascript - db.foodColl.insertMany( [ + db.food.insertMany( [ { _id: 1, category: "cake", type: "chocolate", qty: 10 }, { _id: 2, category: "cake", type: "ice cream", qty: 25 }, { _id: 3, category: "pie", type: "boston cream", qty: 20 }, @@ -456,15 +354,15 @@ Create the following indexes: .. code-block:: javascript - db.foodColl.createIndex( { qty: 1, type: 1 } ); - db.foodColl.createIndex( { qty: 1, category: 1 } ); + db.food.createIndex( { qty: 1, type: 1 } ); + db.food.createIndex( { qty: 1, category: 1 } ); The following aggregation operation includes the ``hint`` option to force the usage of the specified index: .. code-block:: javascript - db.foodColl.aggregate( + db.food.aggregate( [ { $sort: { qty: 1 }}, { $match: { category: "cake", qty: 10 } }, { $sort: { type: -1 } } ], { hint: { qty: 1, category: 1 } } ) @@ -505,12 +403,14 @@ A collection named ``movies`` contains documents formatted as such: .. code-block:: javascript - { - "_id" : ObjectId("599b3b54b8ffff5d1cd323d8"), - "title" : "Jaws", - "year" : 1975, - "imdb" : "tt0073195" - } + db.movies.insertOne( + { + _id: ObjectId("599b3b54b8ffff5d1cd323d8"), + title: "Jaws", + year: 1975, + imdb: "tt0073195" + } + ) The following aggregation operation finds movies created in 1995 and includes the ``comment`` option to provide tracking information in the ``logs``, @@ -564,8 +464,6 @@ Use Variables in ``let`` .. versionadded:: 5.0 -.. |let-option| replace:: :ref:`let ` - .. include:: /includes/let-variables-match-note.rst .. include:: /includes/let-variables-example.rst diff --git a/source/reference/method/db.collection.analyzeShardKey.txt b/source/reference/method/db.collection.analyzeShardKey.txt index a3dfe684f06..56d5d5c41b5 100644 --- a/source/reference/method/db.collection.analyzeShardKey.txt +++ b/source/reference/method/db.collection.analyzeShardKey.txt @@ -15,13 +15,27 @@ db.collection.analyzeShardKey() Definition ---------- -.. method:: db.collection.analyzeShardKey(key, options) +.. method:: db.collection.analyzeShardKey(key, opts) Calculates metrics for evaluating a shard key for an unsharded or sharded collection. Metrics are based on sampled queries. You can use :dbcommand:`configureQueryAnalyzer` to configure query sampling on a collection. + +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ @@ -30,8 +44,8 @@ Syntax .. code-block:: javascript db.collection.analyzeShardKey( + , { - key: , keyCharacteristics: , readWriteDistribution: , sampleRate: , @@ -63,7 +77,79 @@ For sample output, see :ref:`analyzeShardKey Output `. Examples -------- -For examples, see :ref:`analyzeShardKey Examples `. +.. |analyzeShardKey| replace:: ``db.collection.analyzeShardKey`` + +.. include:: /includes/analyzeShardKey-example-intro.rst + +.. note:: + + Before you run the |analyzeShardKey| method, read the + :ref:`supporting-indexes-ref` section. If you require supporting + indexes for the shard key you are analyzing, use the + :method:`db.collection.createIndex()` method to create the indexes. + +{ lastName: 1 } keyCharacteristics +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +This |analyzeShardKey| method provides metrics on the +``{ lastName: 1 }`` shard key on the ``social.post`` collection: + +.. 
code-block:: javascript + + use social + db.post.analyzeShardKey( + { lastName: 1 }, + { + keyCharacteristics: true, + readWriteDistribution: false + } + ) + +The output for this command is similar to the following: + +.. include:: /includes/analyzeShardKey-example1-output.rst + +{ userId: 1 } keyCharacteristics +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +This |analyzeShardKey| method provides metrics on the +``{ userId: 1 }`` shard key on the ``social.post`` collection: + +.. code-block:: javascript + + use social + db.post.analyzeShardKey( + { userId: 1 }, + { + keyCharacteristics: true, + readWriteDistribution: false + } + ) + +The output for this method is similar to the following: + +.. include:: /includes/analyzeShardKey-example2-output.rst + +{ userId: 1 } readWriteDistribution +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +This |analyzeShardKey| command provides metrics on the +``{ userId: 1 }`` shard key on the ``social.post`` collection: + +.. code-block:: javascript + + use social + db.post.analyzeShardKey( + { userId: 1 }, + { + keyCharacteristics: false, + readWriteDistribution: true + } + ) + +The output for this method is similar to the following: + +.. include:: /includes/analyzeShardKey-example3-output.rst Learn More ---------- diff --git a/source/reference/method/db.collection.bulkWrite.txt b/source/reference/method/db.collection.bulkWrite.txt index bb13d35df8e..6fdcfabce9c 100644 --- a/source/reference/method/db.collection.bulkWrite.txt +++ b/source/reference/method/db.collection.bulkWrite.txt @@ -4,6 +4,9 @@ db.collection.bulkWrite() .. default-domain:: mongodb +.. meta:: + :description: Perform a series of ordered or unordered write operations. + .. facet:: :name: programming_language :values: shell @@ -42,9 +45,8 @@ Compatibility You can't perform :ref:`bulk write ` operations in the :ref:`Atlas UI `. - To insert multiple documents, you must insert an array of documents. - To learn more, see :atlas:`Create, View, Update, and Delete Documents - ` in the Atlas documentation. + To insert multiple documents, you must insert an array of documents. + To learn more, see :ref:`atlas-ui-docs` in the Atlas documentation. Syntax ------ diff --git a/source/reference/method/db.collection.checkMetadataConsistency.txt b/source/reference/method/db.collection.checkMetadataConsistency.txt index ce09fd1dba0..6d301f73afd 100644 --- a/source/reference/method/db.collection.checkMetadataConsistency.txt +++ b/source/reference/method/db.collection.checkMetadataConsistency.txt @@ -35,6 +35,20 @@ Definition the sharding metadata. +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + + Syntax ------- diff --git a/source/reference/method/db.collection.compactStructuredEncryptionData.txt b/source/reference/method/db.collection.compactStructuredEncryptionData.txt index 90e4158e91d..a6c719aa7db 100644 --- a/source/reference/method/db.collection.compactStructuredEncryptionData.txt +++ b/source/reference/method/db.collection.compactStructuredEncryptionData.txt @@ -25,3 +25,17 @@ db.collection.compactStructuredEncryptionData() only works on connections that have :ref:`automatic encryption ` enabled. + + +Compatibility +------------- + +.. 
|command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst diff --git a/source/reference/method/db.collection.configureQueryAnalyzer.txt b/source/reference/method/db.collection.configureQueryAnalyzer.txt index e0ec6038e52..2497551bc0f 100644 --- a/source/reference/method/db.collection.configureQueryAnalyzer.txt +++ b/source/reference/method/db.collection.configureQueryAnalyzer.txt @@ -29,6 +29,19 @@ Definition details, see :ref:``. +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ @@ -106,3 +119,4 @@ Learn More - :dbcommand:`analyzeShardKey` - :dbcommand:`configureQueryAnalyzer` +- :pipeline:`$listSampledQueries` diff --git a/source/reference/method/db.collection.count.txt b/source/reference/method/db.collection.count.txt index 8abaf1d6907..e3618b061c2 100644 --- a/source/reference/method/db.collection.count.txt +++ b/source/reference/method/db.collection.count.txt @@ -4,6 +4,10 @@ db.collection.count() .. default-domain:: mongodb +.. meta:: + :keywords: deprecated + :description: The count method is deprecated and should be replaced by the countDocuments or estimatedDocumentCount method + .. facet:: :name: programming_language :values: shell @@ -36,12 +40,19 @@ Definition :method:`~db.collection.find()` operation but instead counts and returns the number of results that match a query. + Compatibility ------------- -.. |operator-method| replace:: ``db.collection.count()`` +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-limited-free.rst -.. include:: /includes/fact-compatibility.rst +.. include:: /includes/fact-environments-onprem-only.rst Syntax ------ diff --git a/source/reference/method/db.collection.countDocuments.txt b/source/reference/method/db.collection.countDocuments.txt index d5b795c1c0a..0d9a44e5c92 100644 --- a/source/reference/method/db.collection.countDocuments.txt +++ b/source/reference/method/db.collection.countDocuments.txt @@ -4,6 +4,9 @@ db.collection.countDocuments() .. default-domain:: mongodb +.. meta:: + :description: Return the number of documents in a collection or view. + .. facet:: :name: programming_language :values: shell @@ -210,4 +213,3 @@ Date('01/01/2012')``: - :dbcommand:`count` - :ref:`collStats pipeline stage with the count ` option - diff --git a/source/reference/method/db.collection.createIndex.txt b/source/reference/method/db.collection.createIndex.txt index d8a72e2d763..8335958e046 100644 --- a/source/reference/method/db.collection.createIndex.txt +++ b/source/reference/method/db.collection.createIndex.txt @@ -6,6 +6,9 @@ db.collection.createIndex() .. default-domain:: mongodb +.. meta:: + :description: Create an index on a collection to improve performance for queries. + .. facet:: :name: programming_language :values: shell @@ -124,8 +127,6 @@ parameters: - A replica set :doc:`tag name `. - .. versionadded:: 4.4 - .. _ensureIndex-options: .. 
_createIndex-options: @@ -262,15 +263,12 @@ otherwise specified: - .. _method-createIndex-hidden: Optional. A flag that determines whether the index is - :doc:`hidden ` from the query planner. A + :ref:`hidden ` from the query planner. A hidden index is not evaluated as part of the query plan selection. Default is ``false``. - .. versionadded:: 4.4 - - * - ``storageEngine`` - document @@ -398,9 +396,6 @@ indexes only: For available versions, see :ref:`text-index-versions`. - - - Options for ``2dsphere`` Indexes ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -429,7 +424,7 @@ indexes only: For the available versions, see :ref:`2dsphere-v2`. - +.. _2d-index-options: Options for ``2d`` Indexes ~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -568,8 +563,6 @@ Collation Option Hidden Option `````````````` -.. versionadded:: 4.4 - To hide or unhide existing indexes, you can use the following :binary:`~bin.mongosh` methods: @@ -600,8 +593,6 @@ For example, Transactions ~~~~~~~~~~~~ -.. versionchanged:: 4.4 - .. include:: /includes/extracts/transactions-explicit-ddl.rst .. |operation| replace:: :method:`db.collection.createIndex()` @@ -645,11 +636,9 @@ field (in descending order.) db.collection.createIndex( { orderDate: 1, zipcode: -1 } ) -.. versionchanged:: 4.4 - - Starting in MongoDB 4.4, compound indexes can include a single - :ref:`hashed ` field. Compound hashed indexes - require :ref:`featureCompatibilityVersion ` set to ``4.4``. +Compound indexes can include a single :ref:`hashed ` field. +Compound hashed indexes require :ref:`featureCompatibilityVersion ` +set to at least ``5.0``. The following example creates a compound index on the ``state`` field (in ascending order) and the ``zipcode`` field (hashed): diff --git a/source/reference/method/db.collection.createIndexes.txt b/source/reference/method/db.collection.createIndexes.txt index a8187168217..867918cb32f 100644 --- a/source/reference/method/db.collection.createIndexes.txt +++ b/source/reference/method/db.collection.createIndexes.txt @@ -93,7 +93,18 @@ Definition - A replica set :ref:`tag name `. - .. versionadded:: 4.4 +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst .. _createIndexes-method-options: @@ -266,8 +277,6 @@ otherwise specified: Default is ``false``. - .. versionadded:: 4.4 - * - ``storageEngine`` - document @@ -564,8 +573,6 @@ Collation Option Hidden Option `````````````` -.. versionadded:: 4.4 - To hide or unhide existing indexes, you can use the following :binary:`~bin.mongosh` methods: @@ -611,8 +618,6 @@ To learn more, see: Transactions ~~~~~~~~~~~~ -.. versionchanged:: 4.4 - .. include:: /includes/extracts/transactions-explicit-ddl.rst .. |operation| replace:: :method:`db.collection.createIndexes()` diff --git a/source/reference/method/db.collection.createSearchIndex.txt b/source/reference/method/db.collection.createSearchIndex.txt index 3bb5b49cc7a..87ae6045df9 100644 --- a/source/reference/method/db.collection.createSearchIndex.txt +++ b/source/reference/method/db.collection.createSearchIndex.txt @@ -4,6 +4,9 @@ db.collection.createSearchIndex() .. default-domain:: mongodb +.. meta:: + :keywords: atlas search + .. contents:: On this page :local: :backlinks: none @@ -17,7 +20,7 @@ Definition .. versionadded:: 7.0 (*Also available starting in 6.0.7*) -.. 
|fts-index| replace:: :atlas:`{+fts+} index ` +.. |fts-index| replace:: :atlas:`{+fts+} index ` or :atlas:`Vector Search index ` .. include:: /includes/atlas-search-commands/command-descriptions/createSearchIndex-method.rst @@ -36,6 +39,7 @@ Command syntax: db..createSearchIndex( , + , { } @@ -65,12 +69,20 @@ Command Fields If you do not specify a ``name``, the index is named ``default``. + * - ``type`` + - string + - Optional + - .. include:: /includes/atlas-search-commands/field-definitions/type.rst + * - ``definition`` - document - Required - - Document describing the index to create. For details on - ``definition`` syntax, see - :ref:`search-index-definition-create-mongosh`. + - Document describing the index to create. The ``definition`` syntax + depends on whether you create a standard search index or a Vector + Search index. For the ``definition`` syntax, see: + + - :ref:`search-index-definition-create-mongosh` + - :ref:`vector-search-index-definition-create-mongosh` .. _search-index-definition-create-mongosh: @@ -79,6 +91,13 @@ Search Index Definition Syntax .. include:: /includes/atlas-search-commands/search-index-definition-fields.rst +.. _vector-search-index-definition-create-mongosh: + +Vector Search Index Definition Syntax +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +.. include:: /includes/atlas-search-commands/vector-search-index-definition-fields.rst + Behavior -------- @@ -166,3 +185,38 @@ with the name ``default`` on the ``food`` collection: } } ) + +Create a Vector Search Index +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The following example creates a vector search index named +``vectorSearchIndex01`` on the ``movies`` collection: + +.. code-block:: javascript + + db.movies.createSearchIndex( + "vectorSearchIndex01", + "vectorSearch", + { + fields: [ + { + type: "vector", + numDimensions: 1, + path: "genre", + similarity: "cosine" + } + ] + } + ) + +The vector search index contains one dimension and indexes the ``genre`` +field. + +Learn More +---------- + +- :pipeline:`$vectorSearch` aggregation stage + +- :ref:`Tutorial: Semantic Search ` + +- :atlas:`Atlas Vector Search Changelog ` diff --git a/source/reference/method/db.collection.dataSize.txt b/source/reference/method/db.collection.dataSize.txt index 8ecac35b0d6..1111f8346ab 100644 --- a/source/reference/method/db.collection.dataSize.txt +++ b/source/reference/method/db.collection.dataSize.txt @@ -27,3 +27,16 @@ db.collection.dataSize() .. |operations| replace:: :dbcommand:`collStats` + +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-free.rst + +.. include:: /includes/fact-environments-onprem-only.rst diff --git a/source/reference/method/db.collection.deleteMany.txt b/source/reference/method/db.collection.deleteMany.txt index 87a31b28031..d1fcbd4e561 100644 --- a/source/reference/method/db.collection.deleteMany.txt +++ b/source/reference/method/db.collection.deleteMany.txt @@ -4,6 +4,9 @@ db.collection.deleteMany() .. default-domain:: mongodb +.. meta:: + :description: Delete all documents that match a specified filter from a collection. + .. facet:: :name: programming_language :values: shell @@ -122,8 +125,6 @@ syntax: For an example, see :ref:`ex-deleteMany-hint`. - .. 
versionadded:: 4.4 - Behavior -------- @@ -165,6 +166,16 @@ If the primary node fails during a :method:`db.collection.deleteMany()` operation, documents that were not yet deleted from secondary nodes are not deleted from the collection. +Oplog Entries +~~~~~~~~~~~~~ + +If a ``db.collection.deleteMany()`` operation successfully deletes one +or more documents, the operation adds an entry for each deleted document +on the :term:`oplog` (operations log). If the operation fails or does +not find any documents to delete, the operation does not add an entry on +the oplog. + + Examples -------- @@ -177,16 +188,18 @@ The ``orders`` collection has documents with the following structure: .. code-block:: javascript - { - _id: ObjectId("563237a41a4d68582c2509da"), - stock: "Brent Crude Futures", - qty: 250, - type: "buy-limit", - limit: 48.90, - creationts: ISODate("2015-11-01T12:30:15Z"), - expiryts: ISODate("2015-11-01T12:35:15Z"), - client: "Crude Traders Inc." - } + db.orders.insertOne( + { + _id: ObjectId("563237a41a4d68582c2509da"), + stock: "Brent Crude Futures", + qty: 250, + type: "buy-limit", + limit: 48.90, + creationts: ISODate("2015-11-01T12:30:15Z"), + expiryts: ISODate("2015-11-01T12:35:15Z"), + client: "Crude Traders Inc." + } + ) The following operation deletes all documents where ``client : "Crude Traders Inc."``: @@ -241,7 +254,7 @@ Given a three member replica set, the following operation specifies a print (e); } -If the acknowledgement takes longer than the ``wtimeout`` limit, the following +If the acknowledgment takes longer than the ``wtimeout`` limit, the following exception is thrown: .. code-block:: javascript @@ -251,7 +264,7 @@ exception is thrown: "errmsg" : "waiting for replication timed out", "errInfo" : { "wtimeout" : true, - "writeConcern" : { // Added in MongoDB 4.4 + "writeConcern" : { "w" : "majority", "wtimeout" : 100, "provenance" : "getLastErrorDefaults" @@ -268,20 +281,22 @@ Specify Collation .. include:: /includes/extracts/collation-versionadded.rst -A collection ``myColl`` has the following documents: +A collection ``restaurants`` has the following documents: .. code-block:: javascript - { _id: 1, category: "café", status: "A" } - { _id: 2, category: "cafe", status: "a" } - { _id: 3, category: "cafE", status: "a" } + db.restaurants.insertMany( [ + { _id: 1, category: "café", status: "A" }, + { _id: 2, category: "cafe", status: "a" }, + { _id: 3, category: "cafE", status: "a" } + ] ) The following operation includes the :ref:`collation ` option: .. code-block:: javascript - db.myColl.deleteMany( + db.restaurants.deleteMany( { category: "cafe", status: "A" }, { collation: { locale: "fr", strength: 1 } } ) @@ -291,8 +306,6 @@ option: Specify ``hint`` for Delete Operations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -.. versionadded:: 4.4 - In :binary:`~bin.mongosh`, create a ``members`` collection with the following documents: diff --git a/source/reference/method/db.collection.deleteOne.txt b/source/reference/method/db.collection.deleteOne.txt index 8df3059f7b6..150abb979ce 100644 --- a/source/reference/method/db.collection.deleteOne.txt +++ b/source/reference/method/db.collection.deleteOne.txt @@ -4,6 +4,9 @@ db.collection.deleteOne() .. default-domain:: mongodb +.. meta:: + :description: Delete a single document from a collection. + .. 
facet:: :name: programming_language :values: shell @@ -51,7 +54,7 @@ The :method:`~db.collection.deleteOne()` method has the following form: { writeConcern: , collation: , - hint: // Available starting in MongoDB 4.4 + hint: } ) @@ -109,8 +112,6 @@ parameters: For an example, see :ref:`ex-deleteOne-hint`. - .. versionadded:: 4.4 - Behavior -------- @@ -131,8 +132,8 @@ To use :method:`db.collection.deleteOne` on a sharded collection: - If you only target one shard, you can use a partial shard key in the query specification or, -- You can provide the :term:`shard key` or the ``_id`` field in the query - specification. +- If you set ``limit: 1``, you do not need to provide the :term:`shard key` + or ``_id`` field in the query specification. Transactions ~~~~~~~~~~~~ @@ -145,6 +146,14 @@ Transactions .. |operation| replace:: :method:`db.collection.deleteOne()` +Oplog Entries +~~~~~~~~~~~~~ + +If a ``db.collection.deleteOne()`` operation successfully deletes a +document, the operation adds an entry on the :term:`oplog` (operations +log). If the operation fails or does not find a document to delete, the +operation does not add an entry on the oplog. + Examples -------- @@ -157,16 +166,18 @@ The ``orders`` collection has documents with the following structure: .. code-block:: javascript - { - _id: ObjectId("563237a41a4d68582c2509da"), - stock: "Brent Crude Futures", - qty: 250, - type: "buy-limit", - limit: 48.90, - creationts: ISODate("2015-11-01T12:30:15Z"), - expiryts: ISODate("2015-11-01T12:35:15Z"), - client: "Crude Traders Inc." - } + db.orders.insertOne( + { + _id: ObjectId("563237a41a4d68582c2509da"), + stock: "Brent Crude Futures", + qty: 250, + type: "buy-limit", + limit: 48.90, + creationts: ISODate("2015-11-01T12:30:15Z"), + expiryts: ISODate("2015-11-01T12:35:15Z"), + client: "Crude Traders Inc." + } + ) The following operation deletes the order with ``_id: ObjectId("563237a41a4d68582c2509da")`` : @@ -174,7 +185,7 @@ ObjectId("563237a41a4d68582c2509da")`` : .. code-block:: javascript try { - db.orders.deleteOne( { "_id" : ObjectId("563237a41a4d68582c2509da") } ); + db.orders.deleteOne( { _id: ObjectId("563237a41a4d68582c2509da") } ); } catch (e) { print(e); } @@ -183,7 +194,7 @@ The operation returns: .. code-block:: javascript - { "acknowledged" : true, "deletedCount" : 1 } + { acknowledged: true, deletedCount: 1 } The following operation deletes the first document with ``expiryts`` greater than ``ISODate("2015-11-01T12:40:15Z")`` @@ -191,7 +202,7 @@ than ``ISODate("2015-11-01T12:40:15Z")`` .. code-block:: javascript try { - db.orders.deleteOne( { "expiryts" : { $lt: ISODate("2015-11-01T12:40:15Z") } } ); + db.orders.deleteOne( { expiryts: { $lt: ISODate("2015-11-01T12:40:15Z") } } ); } catch (e) { print(e); } @@ -200,7 +211,7 @@ The operation returns: .. code-block:: javascript - { "acknowledged" : true, "deletedCount" : 1 } + { acknowledged: true, deletedCount: 1 } .. _deleteOne-example-update-with-write-concern: @@ -214,27 +225,27 @@ Given a three member replica set, the following operation specifies a try { db.orders.deleteOne( - { "_id" : ObjectId("563237a41a4d68582c2509da") }, - { w : "majority", wtimeout : 100 } + { _id: ObjectId("563237a41a4d68582c2509da") }, + { w: "majority", wtimeout: 100 } ); } catch (e) { print (e); } -If the acknowledgement takes longer than the ``wtimeout`` limit, the following +If the acknowledgment takes longer than the ``wtimeout`` limit, the following exception is thrown: .. 
code-block:: javascript WriteConcernError({ - "code" : 64, - "errmsg" : "waiting for replication timed out", - "errInfo" : { - "wtimeout" : true, - "writeConcern" : { // Added in MongoDB 4.4 - "w" : "majority", - "wtimeout" : 100, - "provenance" : "getLastErrorDefaults" + code: 64, + errmsg: "waiting for replication timed out", + errInfo: { + wtimeout: true, + writeConcern: { + w: "majority", + wtimeout: 100, + provenance: "getLastErrorDefaults" } } }) @@ -248,20 +259,22 @@ Specify Collation .. include:: /includes/extracts/collation-versionadded.rst -A collection ``myColl`` has the following documents: +A collection ``restaurants`` has the following documents: .. code-block:: javascript - { _id: 1, category: "café", status: "A" } - { _id: 2, category: "cafe", status: "a" } - { _id: 3, category: "cafE", status: "a" } + db.restaurants.insertMany( [ + { _id: 1, category: "café", status: "A" }, + { _id: 2, category: "cafe", status: "a" }, + { _id: 3, category: "cafE", status: "a" } + ] ) The following operation includes the :ref:`collation ` option: .. code-block:: javascript - db.myColl.deleteOne( + db.restaurants.deleteOne( { category: "cafe", status: "A" }, { collation: { locale: "fr", strength: 1 } } ) @@ -271,21 +284,19 @@ option: Specify ``hint`` for Delete Operations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -.. versionadded:: 4.4 - In :binary:`~bin.mongosh`, create a ``students`` collection with the following documents: .. code-block:: javascript - db.members.insertMany([ - { "_id" : 1, "student" : "Richard", "grade" : "F", "points" : 0 }, - { "_id" : 2, "student" : "Jane", "grade" : "A", "points" : 60 }, - { "_id" : 3, "student" : "Adam", "grade" : "F", "points" : 0 }, - { "_id" : 4, "student" : "Ronan", "grade" : "D", "points" : 20 }, - { "_id" : 5, "student" : "Noah", "grade" : "F", "points" : 0 }, - { "_id" : 6, "student" : "Henry", "grade" : "A", "points" : 86 } - ]) + db.members.insertMany( [ + { _id: 1, student: "Richard", grade: "F", points: 0 }, + { _id: 2, student: "Jane", grade: "A", points: 60 }, + { _id: 3, student: "Adam", grade: "F", points: 0 }, + { _id: 4, student: "Ronan", grade: "D", points: 20 }, + { _id: 5, student: "Noah", grade: "F", points: 0 }, + { _id: 6, student: "Henry", grade: "A", points: 86 } + ] ) Create the following index on the collection: @@ -299,7 +310,7 @@ The following delete operation explicitly hints to use the index .. code-block:: javascript db.members.deleteOne( - { "points": { $lte: 20 }, "grade": "F" }, + { points: { $lte: 20 }, grade: "F" }, { hint: { grade: 1 } } ) @@ -311,7 +322,7 @@ The delete command returns the following: .. code-block:: javascript - { "acknowledged" : true, "deletedCount" : 1 } + { acknowledged: true, deletedCount: 1 } To view the indexes used, you can use the :pipeline:`$indexStats` pipeline: diff --git a/source/reference/method/db.collection.distinct.txt b/source/reference/method/db.collection.distinct.txt index ecea1087c76..ae98399be8d 100644 --- a/source/reference/method/db.collection.distinct.txt +++ b/source/reference/method/db.collection.distinct.txt @@ -4,6 +4,9 @@ db.collection.distinct() .. default-domain:: mongodb +.. meta:: + :description: Find distinct values that occur in a field within a collection. + .. facet:: :name: programming_language :values: shell @@ -28,12 +31,19 @@ Definition values for a specified field across a single collection or view and returns the results in an array. + Compatibility ------------- -.. |operator-method| replace:: ``db.collection.distinct()`` +.. 
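For a quick illustration of that definition (the ``inventory`` collection, its fields, and the filter are hypothetical), ``db.collection.distinct()`` takes a field name and an optional query document and returns an array of distinct values:

.. code-block:: javascript

   // Returns an array of the distinct "dept" values among documents that have a "sku" field
   db.inventory.distinct( "dept", { sku: { $exists: true } } )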
|command| replace:: method + +This method is available in deployments hosted in the following environments: -.. include:: /includes/fact-compatibility.rst +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-limited-free.rst + +.. include:: /includes/fact-environments-onprem-only.rst Syntax ------ @@ -265,4 +275,3 @@ option: For descriptions on the collation fields, see :ref:`collation-document-fields`. - diff --git a/source/reference/method/db.collection.drop.txt b/source/reference/method/db.collection.drop.txt index 7db96f4cc04..7eb218af9f0 100644 --- a/source/reference/method/db.collection.drop.txt +++ b/source/reference/method/db.collection.drop.txt @@ -6,6 +6,9 @@ db.collection.drop() .. default-domain:: mongodb +.. meta:: + :description: Delete a collection or view from a database. + .. facet:: :name: programming_language :values: shell @@ -113,9 +116,8 @@ For a sharded cluster running **MongoDB 5.0 or later**, no special action is required. Use the ``drop()`` method and then create a new collection with the same name. -For a sharded cluster running **MongoDB 4.4 or earlier**, -if you use the ``drop()`` method and then create a new collection with -the same name, you must either: +For a sharded cluster, if you use the ``drop()`` method and then create a new +collection with the same name, you must either: - Flush the cached routing table on every :binary:`~bin.mongos` using :dbcommand:`flushRouterConfig`. diff --git a/source/reference/method/db.collection.dropIndex.txt b/source/reference/method/db.collection.dropIndex.txt index 908d68f82dd..758c71fbb0a 100644 --- a/source/reference/method/db.collection.dropIndex.txt +++ b/source/reference/method/db.collection.dropIndex.txt @@ -65,18 +65,11 @@ Definition all non-``_id`` indexes. Use :method:`db.collection.dropIndexes()` instead. - .. versionadded:: 4.4 - - If an index specified to - :method:`db.collection.dropIndex()` is still building, - :method:`db.collection.dropIndex()` attempts to stop the - in-progress build. Stopping an index build has the same - effect as dropping the built index. Prior to MongoDB 4.4, - :method:`db.collection.dropIndex()` returned an error if - the specified index was still building. See - :ref:`dropIndex-method-index-builds` for more complete - documentation. - + If an index specified to :method:`db.collection.dropIndex()` is still + building, :method:`db.collection.dropIndex()` attempts to stop the + in-progress build. Stopping an index build has the same effect as + dropping the built index. See :ref:`dropIndex-method-index-builds` + for more complete documentation. Behavior -------- @@ -102,21 +95,7 @@ Stop In-Progress Index Builds Hidden Indexes ~~~~~~~~~~~~~~ -Starting in version 4.4, MongoDB adds the ability to hide or unhide -indexes from the query planner. By hiding an index from the planner, -users can evaluate the potential impact of dropping an index without -actually dropping the index. - -If after the evaluation, the user decides to drop the index, the user -can drop the hidden index; i.e. you do not need to unhide it first to -drop it. - -If, however, the impact is negative, the user can unhide the index -instead of having to recreate a dropped index. And because indexes are -fully maintained while hidden, the indexes are immediately available -for use once unhidden. - -For more information on hidden indexes, see :doc:`/core/index-hidden`. +.. 
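As a brief sketch of that workflow (the ``orders`` collection and the ``status_1`` index name are placeholders), you can hide the index, evaluate query performance, and then either drop it or unhide it:

.. code-block:: javascript

   // Hide the index so the query planner stops considering it
   db.orders.hideIndex( "status_1" )

   // If queries still perform well, drop the index; a hidden index can be dropped directly
   db.orders.dropIndex( "status_1" )

   // Otherwise, make the index available to the planner again
   db.orders.unhideIndex( "status_1" )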
include:: /includes/fact-hidden-indexes.rst Example ------- diff --git a/source/reference/method/db.collection.dropIndexes.txt b/source/reference/method/db.collection.dropIndexes.txt index 2b65616a7e5..49cb4abaa00 100644 --- a/source/reference/method/db.collection.dropIndexes.txt +++ b/source/reference/method/db.collection.dropIndexes.txt @@ -104,7 +104,21 @@ Definition **To drop multiple indexes** (Available starting in MongoDB 4.2), specify an array of the index names. - + +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + + Behavior -------- @@ -154,18 +168,4 @@ Stop In-Progress Index Builds Hidden Indexes ~~~~~~~~~~~~~~ -Starting in version 4.4, MongoDB adds the ability to hide or unhide -indexes from the query planner. By hiding an index from the planner, -users can evaluate the potential impact of dropping an index without -actually dropping the index. - -If after the evaluation, the user decides to drop the index, the user -can drop the hidden index; i.e. you do not need to unhide it first to -drop it. - -If, however, the impact is negative, the user can unhide the index -instead of having to recreate a dropped index. And because indexes are -fully maintained while hidden, the indexes are immediately available -for use once unhidden. - -For more information on hidden indexes, see :doc:`/core/index-hidden`. +.. include:: /includes/fact-hidden-indexes.rst diff --git a/source/reference/method/db.collection.estimatedDocumentCount.txt b/source/reference/method/db.collection.estimatedDocumentCount.txt index 75d3456cb4a..4575b1990ee 100644 --- a/source/reference/method/db.collection.estimatedDocumentCount.txt +++ b/source/reference/method/db.collection.estimatedDocumentCount.txt @@ -77,6 +77,8 @@ Sharded Clusters On a sharded cluster, the resulting count will not correctly filter out :term:`orphaned documents `. +.. _estimated-document-count-unclean-shutdown: + Unclean Shutdown ~~~~~~~~~~~~~~~~ diff --git a/source/reference/method/db.collection.explain.txt b/source/reference/method/db.collection.explain.txt index ee8c9e1d764..c27472e3fce 100644 --- a/source/reference/method/db.collection.explain.txt +++ b/source/reference/method/db.collection.explain.txt @@ -32,9 +32,7 @@ Description - :method:`~db.collection.distinct()` - :method:`~db.collection.findAndModify()` - .. versionadded:: 4.4 - - Returns information on :method:`~db.collection.mapReduce()`. + Returns information on :method:`~db.collection.mapReduce()`. To use :method:`db.collection.explain()`, append one of the aforementioned methods to :method:`db.collection.explain()`: @@ -88,12 +86,27 @@ Description +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + .. _explain-method-behavior: Behavior -------- +.. include:: includes/explain-ignores-cache-plan.rst + .. _explain-method-verbosity: .. 
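For a minimal sketch of the usual pattern (the ``products`` collection and filter are hypothetical), append the operation to ``explain()`` and pass the desired verbosity:

.. code-block:: javascript

   // Returns the winning query plan and execution statistics for the find operation
   db.products.explain( "executionStats" ).find( { quantity: { $gt: 50 } } )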
_explain-method-queryPlanner: diff --git a/source/reference/method/db.collection.find.txt b/source/reference/method/db.collection.find.txt index 29855bd4595..47116b119d7 100644 --- a/source/reference/method/db.collection.find.txt +++ b/source/reference/method/db.collection.find.txt @@ -4,6 +4,9 @@ db.collection.find() .. default-domain:: mongodb +.. meta:: + :description: Find documents in a collection or view. + .. facet:: :name: programming_language :values: shell @@ -14,6 +17,10 @@ db.collection.find() :depth: 1 :class: singlecol +.. instruqt:: /mongodb-docs/tracks/db-collection-find-v2?token=em_J9Ddg3fzU3sHnFZN + :title: Finding Documents Lab + :drawer: + Definition ---------- @@ -35,9 +42,15 @@ Definition Compatibility ------------- -.. |operator-method| replace:: ``db.collection.find()`` +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: -.. include:: /includes/fact-compatibility.rst +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-limited-free.rst + +.. include:: /includes/fact-environments-onprem-only.rst Syntax ------ @@ -80,7 +93,9 @@ parameters: - document - .. _method-find-options: - .. include:: /includes/find-options-description.rst + Optional. Specifies additional options for the query. These options + modify query behavior and how results are returned. For details, + see :ref:`find-options`. Behavior -------- @@ -102,6 +117,13 @@ of the following form: .. include:: /includes/extracts/projection-values-table.rst +.. _find-options: + +Options +~~~~~~~ + +.. include:: /includes/find-options-values-table.rst + Embedded Field Specification ```````````````````````````` @@ -139,7 +161,7 @@ cursor handling mechanism for the :driver:`driver language Read Concern ~~~~~~~~~~~~ -To specify the :doc:`read concern ` for +To specify the :ref:`read concern ` for :method:`db.collection.find()`, use the :method:`cursor.readConcern()` method. @@ -151,7 +173,7 @@ Type Bracketing MongoDB treats some data types as equivalent for comparison purposes. For instance, numeric types undergo conversion before comparison. For most data types, however, -:doc:`comparison operators` only +:ref:`comparison operators ` only perform comparisons on documents where the :ref:`BSON type ` of the target field matches the type of the query operand. Consider the @@ -225,16 +247,6 @@ Client Disconnection .. include:: /includes/extracts/4.2-changes-disconnect.rst -Try It Yourself ---------------- - -The following lab walks you through how to use the ``db.collection.find()`` -method to find documents using equality match and the :query:`$in` operator. - -.. include:: /includes/fact-instruqt-intro.rst - -.. instruqt:: /mongodb-docs/tracks/db-collection-find-v2?token=em_J9Ddg3fzU3sHnFZN - Examples -------- @@ -265,7 +277,7 @@ Find All Documents in a Collection The :method:`find() ` method with no parameters returns all documents from a collection and returns all fields for the documents. For example, the following operation returns all documents in -the :doc:`bios collection `: +the :ref:`bios collection `: .. code-block:: javascript @@ -373,6 +385,8 @@ field does not exists: For a list of the query operators, see :ref:`query-selectors`. +.. include:: /includes/use-expr-in-find-query.rst + .. _query-subdocuments: .. 
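As a minimal sketch of the kind of query the ``use-expr-in-find-query`` include describes (the ``monthlyBudget`` collection and its fields are hypothetical), :query:`$expr` lets a ``find()`` filter compare two fields of the same document:

.. code-block:: javascript

   // Matches documents where the value of "spent" is greater than the value of "budget"
   db.monthlyBudget.find( { $expr: { $gt: [ "$spent", "$budget" ] } } )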
_query-embedded-documents: @@ -380,7 +394,7 @@ Query Embedded Documents ~~~~~~~~~~~~~~~~~~~~~~~~ The following examples query the ``name`` embedded field in the -:doc:`bios collection `. +:ref:`bios collection `. Query Exact Matches on Embedded Documents ````````````````````````````````````````` @@ -543,6 +557,36 @@ For more information and examples of querying an array, see: For a list of array specific query operators, see :ref:`operator-query-array`. +Query for BSON Regular Expressions +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +To find documents that contain BSON regular expressions as values, call +:method:`~db.collection.find()` with the ``bsonRegExp`` option set to +``true``. The ``bsonRegExp`` option allows you to return regular +expressions that can't be represented as JavaScript regular expressions. + +The following operation returns documents in a collection named +``testbson`` where the value of a field named ``foo`` is a +``BSONRegExp`` type: + +.. io-code-block:: + :copyable: true + + .. input:: + :language: javascript + + db.testbson.find( {}, {}, { bsonRegExp: true } ) + + .. output:: + :language: javascript + + [ + { + _id: ObjectId('65e8ba8a4b3c33a76e6cacca'), + foo: BSONRegExp('(?-i)AA_', 'i') + } + ] + .. _find-projection-examples: Projections @@ -621,8 +665,7 @@ array: { }, { _id: 0, 'name.last': 1, contribs: { $slice: 2 } } ) -Starting in MongoDB 4.4, you can also specify embedded fields using the -nested form, for example: +You can also specify embedded fields using the nested form. For example: .. code-block:: javascript @@ -631,14 +674,11 @@ nested form, for example: { _id: 0, name: { last: 1 }, contribs: { $slice: 2 } } ) - - Use Aggregation Expression `````````````````````````` -Starting in MongoDB 4.4, :method:`db.collection.find()` projection can -accept :ref:`aggregation expressions and syntax -`. +:method:`db.collection.find()` projection can accept +:ref:`aggregation expressions and syntax `. With the use of aggregation expressions and syntax, you can project new fields or project existing fields with new values. For example, the @@ -794,7 +834,7 @@ Limit the Number of Documents to Return The :method:`~cursor.limit()` method limits the number of documents in the result set. The following operation returns at most ``5`` documents -in the :doc:`bios collection `: +in the :ref:`bios collection `: .. code-block:: javascript @@ -808,7 +848,7 @@ Set the Starting Point of the Result Set The :method:`~cursor.skip()` method controls the starting point of the results set. The following operation skips the first ``5`` documents in -the :doc:`bios collection ` and +the :ref:`bios collection ` and returns all remaining documents: .. code-block:: javascript @@ -961,8 +1001,97 @@ Perform the following steps to retrieve the documents accessible to .. include:: /includes/user-roles-system-variable-example-output-jane.rst +Modify a Query with options +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The following examples show how you can use the ``options`` field +in a ``find()`` query. Use the following +:method:`~db.collection.insertMany()` to setup the ``users`` collection: + +.. code-block:: javascript + :copyable: true + + db.users.insertMany( [ + { username: "david", age: 27 }, + { username: "amanda", age: 25 }, + { username: "rajiv", age: 32 }, + { username: "rajiv", age: 90 } + ] ) + +limit with options +`````````````````` + +The following query limits the number of documents in the result set +with the ``limit`` options parameter: + +.. 
code-block:: javascript + :copyable: true + :emphasize-lines: 4 + + db.users.find( + { username : "rajiv"}, // query + { age : 1 }, // projection + { limit : 1 } // options + ) + +allowDiskUse with options +````````````````````````` + +The following query uses the ``options`` parameter to enable +``allowDiskUse``: + +.. code-block:: javascript + :copyable: true + :emphasize-lines: 4 + + db.users.find( + { username : "david" }, + { age : 1 }, + { allowDiskUse : true } + ) + +explain with options +```````````````````` + +The following query uses the ``options`` parameter to get the +``executionStats`` explain output: + +.. code-block:: javascript + :copyable: true + :emphasize-lines: 4 + + var cursor = db.users.find( + { username: "amanda" }, + { age : 1 }, + { explain : "executionStats" } + ) + cursor.next() + +Specify Multiple options in a query +``````````````````````````````````` + +The following query uses multiple ``options`` in a single query. This +query uses ``limit`` set to ``2`` to return only two documents, and +``showRecordId`` set to ``true`` to return the position of the document +in the result set: + +.. code-block:: javascript + :copyable: true + :emphasize-lines: 4-7 + + db.users.find( + {}, + { username: 1, age: 1 }, + { + limit: 2, + showRecordId: true + } + ) + Learn More ---------- -To see all available query options, see :node-api-4.0:`FindOptions -`. +- :method:`~db.collection.findOne()` +- :method:`~db.collection.findAndModify()` +- :method:`~db.collection.findOneAndDelete()` +- :method:`~db.collection.findOneAndReplace()` diff --git a/source/reference/method/db.collection.findAndModify.txt b/source/reference/method/db.collection.findAndModify.txt index 33b46af0210..6afc5912980 100644 --- a/source/reference/method/db.collection.findAndModify.txt +++ b/source/reference/method/db.collection.findAndModify.txt @@ -4,6 +4,9 @@ db.collection.findAndModify() .. default-domain:: mongodb +.. meta:: + :description: Update or delete a single document. + .. facet:: :name: programming_language :values: shell @@ -26,17 +29,23 @@ Definition .. |dbcommand| replace:: :dbcommand:`findAndModify` command .. include:: /includes/fact-mongosh-shell-method-alt.rst - Modifies and returns a single document. By default, the returned + Updates and returns a single document. By default, the returned document does not include the modifications made on the update. To return the document with the modifications made on the update, use - the ``new`` option. + the ``new`` option. Compatibility ------------- -.. |operator-method| replace:: ``db.collection.findAndModify()`` +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst -.. include:: /includes/fact-compatibility.rst +.. include:: /includes/fact-environments-onprem-only.rst Syntax ------ @@ -95,8 +104,8 @@ parameter with the following embedded document fields: ``sort`` - document - - Optional. Determines which document the operation modifies if the query selects - multiple documents. |operation| modifies + - Optional. Determines which document the operation updates if the query + selects multiple documents. |operation| updates the first document in the sort order specified by this argument. 
Starting in MongoDB 4.2 (and 4.0.12+, 3.6.14+, and 3.4.23+), the operation @@ -126,14 +135,14 @@ parameter with the following embedded document fields: - Starting in MongoDB 4.2, if passed an :ref:`aggregation pipeline ` ``[ , , ... ]``, - |operation| modifies the document per the pipeline. The pipeline + |operation| updates the document per the pipeline. The pipeline can consist of the following stages: .. include:: /includes/list-update-agg-stages.rst * - ``new`` - boolean - - Optional. When ``true``, returns the modified document rather than the original. + - Optional. When ``true``, returns the updated document rather than the original. The default is ``false``. * - ``fields`` @@ -219,7 +228,6 @@ one of the following: Behavior -------- - .. _fields-projection: ``fields`` Projection @@ -272,6 +280,9 @@ To use :dbcommand:`findAndModify` on a sharded collection: - You can provide an equality condition on a full shard key in the ``query`` field. +- Starting in version 7.1, you do not need to provide the :term:`shard key` + or ``_id`` field in the query specification. + .. include:: /includes/extracts/missing-shard-key-equality-condition-findAndModify.rst Shard Key Modification @@ -281,7 +292,7 @@ Shard Key Modification .. include:: /includes/shard-key-modification-warning.rst -To modify the **existing** shard key value with +To update the **existing** shard key value with :method:`db.collection.findAndModify()`: - You :red:`must` run on a :binary:`~bin.mongos`. Do :red:`not` @@ -299,7 +310,7 @@ To modify the **existing** shard key value with Missing Shard Key ````````````````` -Starting in version 4.4, documents in a sharded collection can be +Documents in a sharded collection can be :ref:`missing the shard key fields `. To use :method:`db.collection.findAndModify()` to set the document's **missing** shard key: @@ -354,6 +365,14 @@ Write Concerns and Transactions .. include:: /includes/extracts/transactions-operations-write-concern.rst +Oplog Entries +~~~~~~~~~~~~~ + +If a ``db.collection.findAndModify()`` operation successfully finds and +modifies a document, the operation adds an entry on the :term:`oplog` +(operations log). If the operation fails or does not find a document to +modify, the operation does not add an entry on the oplog. + Examples -------- @@ -399,7 +418,7 @@ This method performs the following actions: "score" : 5 } - To return the modified document, add the ``new:true`` option to + To return the updated document, add the ``new:true`` option to the method. If no document matched the ``query`` condition, the method @@ -568,7 +587,7 @@ Create a collection ``students`` with the following documents: { "_id" : 3, "grades" : [ 95, 110, 100 ] } ] ) -To modify all elements that are greater than or equal to ``100`` in the +To update all elements that are greater than or equal to ``100`` in the ``grades`` array, use the filtered positional operator :update:`$[\]` with the ``arrayFilters`` option in the :method:`db.collection.findAndModify` method: @@ -624,7 +643,7 @@ Create a collection ``students2`` with the following documents: The following operation finds a document where the ``_id`` field equals ``1`` and uses the filtered positional operator :update:`$[\]` with -the ``arrayFilters`` to modify the ``mean`` for all elements in the +the ``arrayFilters`` to update the ``mean`` for all elements in the ``grades`` array where the grade is greater than or equal to ``85``. .. 
code-block:: javascript diff --git a/source/reference/method/db.collection.findOne.txt b/source/reference/method/db.collection.findOne.txt index e57ea61e221..522d0bd618c 100644 --- a/source/reference/method/db.collection.findOne.txt +++ b/source/reference/method/db.collection.findOne.txt @@ -4,6 +4,9 @@ db.collection.findOne() .. default-domain:: mongodb +.. meta:: + :description: Find a single document in a collection or view. + .. facet:: :name: programming_language :values: shell @@ -105,10 +108,9 @@ Projection .. important:: Language Consistency - Starting in MongoDB 4.4, as part of making - :method:`~db.collection.find` and + As part of making :method:`~db.collection.find` and :method:`~db.collection.findAndModify` projection consistent with - aggregation's :pipeline:`$project` stage, + aggregation's :pipeline:`$project` stage: - The :method:`~db.collection.find` and :method:`~db.collection.findAndModify` projection can accept diff --git a/source/reference/method/db.collection.findOneAndDelete.txt b/source/reference/method/db.collection.findOneAndDelete.txt index 8c65f543860..63ce29271de 100644 --- a/source/reference/method/db.collection.findOneAndDelete.txt +++ b/source/reference/method/db.collection.findOneAndDelete.txt @@ -174,6 +174,14 @@ Transactions .. |operation| replace:: :method:`db.collection.findOneAndDelete()` +Oplog Entries +~~~~~~~~~~~~~ + +If a ``db.collection.findOneAndDelete()`` operation successfully deletes +a document, the operation adds an entry on the :term:`oplog` (operations +log). If the operation fails or does not find a document to delete, the +operation does not add an entry on the oplog. + .. _findOneAndDelete-examples: Examples diff --git a/source/reference/method/db.collection.findOneAndReplace.txt b/source/reference/method/db.collection.findOneAndReplace.txt index 5d37bbb74c7..5e17d3561f3 100644 --- a/source/reference/method/db.collection.findOneAndReplace.txt +++ b/source/reference/method/db.collection.findOneAndReplace.txt @@ -253,7 +253,7 @@ To modify the **existing** shard key value with Missing Shard Key ````````````````` -Starting in version 4.4, documents in a sharded collection can be +Documents in a sharded collection can be :ref:`missing the shard key fields `. To use :method:`db.collection.findOneAndReplace()` to set the document's **missing** shard key, @@ -296,6 +296,14 @@ Write Concerns and Transactions .. |operation| replace:: :method:`db.collection.findOneAndReplace()` +Oplog Entries +~~~~~~~~~~~~~ + +If a ``db.collection.findOneAndReplace()`` operation successfully +replaces a document, the operation adds an entry on the :term:`oplog` +(operations log). If the operation fails or does not find a document to +replace, the operation does not add an entry on the oplog. + .. _findOneAndReplace-examples: Examples diff --git a/source/reference/method/db.collection.findOneAndUpdate.txt b/source/reference/method/db.collection.findOneAndUpdate.txt index 3a876c3b5d4..2e63f93d02d 100644 --- a/source/reference/method/db.collection.findOneAndUpdate.txt +++ b/source/reference/method/db.collection.findOneAndUpdate.txt @@ -4,6 +4,9 @@ db.collection.findOneAndUpdate() .. default-domain:: mongodb +.. meta:: + :description: Update a single document. + .. facet:: :name: programming_language :values: shell @@ -19,7 +22,7 @@ Definition .. method:: db.collection.findOneAndUpdate( filter, update, options ) - .. |dbcommand| replace:: :dbcommand:`update` command + .. |dbcommand| replace:: :dbcommand:`findAndModify` command .. 
include:: /includes/fact-mongosh-shell-method-alt Updates a single document based on the ``filter`` and @@ -271,7 +274,7 @@ To modify the **existing** shard key value with Missing Shard Key ````````````````` -Starting in version 4.4, documents in a sharded collection can be +Documents in a sharded collection can be :ref:`missing the shard key fields `. To use :method:`db.collection.findOneAndUpdate()` to set the document's **missing** shard key, @@ -313,6 +316,14 @@ Write Concerns and Transactions .. |operation| replace:: :method:`db.collection.findOneAndUpdate()` +Oplog Entries +~~~~~~~~~~~~~ + +If a ``db.collection.findOneAndUpdate()`` operation successfully updates +a document, the operation adds an entry on the :term:`oplog` (operations +log). If the operation fails or does not find a document to update, the +operation does not add an entry on the oplog. + .. _findOneAndUpdate-examples: Examples diff --git a/source/reference/method/db.collection.getIndexes.txt b/source/reference/method/db.collection.getIndexes.txt index 52d5f9dc0d7..a5b043ba03a 100644 --- a/source/reference/method/db.collection.getIndexes.txt +++ b/source/reference/method/db.collection.getIndexes.txt @@ -37,6 +37,13 @@ Definition Behavior -------- +Atlas Search Indexes +~~~~~~~~~~~~~~~~~~~~ + +``getIndexes()`` does not return information on :atlas:`{+fts+} indexes +`. For information on Atlas +Search indexes, use :pipeline:`$listSearchIndexes`. + .. |operation| replace:: :method:`db.collection.getIndexes()` .. |operations| replace:: :dbcommand:`listIndexes` @@ -55,12 +62,6 @@ Wildcard Indexes .. include:: /includes/indexes/fact-wildcard-index-ordering.rst -Atlas Search Indexes -~~~~~~~~~~~~~~~~~~~~ - -``getIndexes()`` does not return information on :atlas:`{+fts+} indexes -`. - Required Access --------------- @@ -78,11 +79,6 @@ Output :method:`db.collection.getIndexes()` returns an array of documents that hold index information for the collection. For example: -.. note:: - - Starting in MongoDB 4.4, :method:`db.collection.getIndexes()` no - longer includes the ``ns`` field. - .. code-block:: javascript :copyable: false @@ -112,8 +108,7 @@ hold index information for the collection. For example: Index information includes the keys and options used to create the -index. The index option ``hidden``, available starting in MongoDB 4.4, -is only available if the value is ``true``. +index. The index option ``hidden`` is only available if the value is ``true``. For information on the keys and index options, see :method:`db.collection.createIndex()`. diff --git a/source/reference/method/db.collection.getPlanCache.txt b/source/reference/method/db.collection.getPlanCache.txt index b053726deed..f3016a0c828 100644 --- a/source/reference/method/db.collection.getPlanCache.txt +++ b/source/reference/method/db.collection.getPlanCache.txt @@ -65,6 +65,3 @@ The following methods are available through the interface: - Returns the plan cache information for a collection. Accessible through the plan cache object of a specific collection, i.e. ``db.collection.getPlanCache().list()``. - - .. versionadded:: 4.4 - diff --git a/source/reference/method/db.collection.getShardDistribution.txt b/source/reference/method/db.collection.getShardDistribution.txt index b761e097295..8b598c419e7 100644 --- a/source/reference/method/db.collection.getShardDistribution.txt +++ b/source/reference/method/db.collection.getShardDistribution.txt @@ -22,20 +22,15 @@ Definition Prints the data distribution statistics for a :term:`sharded ` collection. - .. 
tip:: - - Before running the method, use the :dbcommand:`flushRouterConfig` - command to refresh the cached routing table to avoid returning - stale distribution information for the collection. Once - refreshed, run :method:`db.collection.getShardDistribution()` for - the collection you wish to build the index. +Syntax +------ - For example: +The :method:`~db.collection.getShardDistribution()` method has the following +form: - .. code-block:: javascript +.. code-block:: javascript - db.adminCommand( { flushRouterConfig: "test.myShardedCollection" } ); - db.getSiblingDB("test").myShardedCollection.getShardDistribution(); + db.collection.getShardDistribution() .. seealso:: @@ -55,20 +50,32 @@ collection: .. code-block:: none :copyable: false - Shard shard-a at shard-a/MyMachine.local:30000,MyMachine.local:30001,MyMachine.local:30002 - data : 38.14Mb docs : 1000003 chunks : 2 - estimated data per chunk : 19.07Mb - estimated docs per chunk : 500001 - - Shard shard-b at shard-b/MyMachine.local:30100,MyMachine.local:30101,MyMachine.local:30102 - data : 38.14Mb docs : 999999 chunks : 3 - estimated data per chunk : 12.71Mb - estimated docs per chunk : 333333 - + Shard shard01 at shard01/localhost:27018 + { + data: '38.14MB', + docs: 1000003, + chunks: 2, + 'estimated data per chunk': '19.07B', + 'estimated docs per chunk': 500001 + } + --- + Shard shard02 at shard02/localhost:27019 + { + data: '38.14B', + docs: 999999, + chunks: 3, + 'estimated data per chunk': '12.71B', + 'estimated docs per chunk': 333333 + } + --- Totals - data : 76.29Mb docs : 2000002 chunks : 5 - Shard shard-a contains 50% data, 50% docs in cluster, avg obj size on shard : 40b - Shard shard-b contains 49.99% data, 49.99% docs in cluster, avg obj size on shard : 40b + { + data: '76.29B', + docs: 2000002, + chunks: 5, + 'Shard shard01': [ '50 % data', '50 % docs in cluster', '40B avg obj size on shard' ], + 'Shard shard02': [ '49.99 % data', '49.99 % docs in cluster', '40B avg obj size on shard' ] + } Output Fields ~~~~~~~~~~~~~ @@ -76,21 +83,31 @@ Output Fields .. code-block:: none :copyable: false - Shard at - data : docs : chunks : - estimated data per chunk : / - estimated docs per chunk : / - - Shard at - data : docs : chunks : - estimated data per chunk : / - estimated docs per chunk : / - + Shard shard01 at { + data: , + docs: , + chunks: , + 'estimated data per chunk': /, + 'estimated docs per chunk': / + } + --- + Shard shard02 at + { + data: , + docs: , + chunks: , + 'estimated data per chunk': /, + 'estimated docs per chunk': / + } + --- Totals - data : docs : chunks : - Shard contains % data, % docs in cluster, avg obj size on shard : stats.shards[ ].avgObjSize - Shard contains % data, % docs in cluster, avg obj size on shard : stats.shards[ ].avgObjSize - + { + data: , + docs: , + chunks: , + Shard shard01: [ % data, % docs in cluster, stats.shards[ ].avgObjSize avg obj size on shard ], + Shard shard02: [ % data, % docs in cluster, stats.shards[ ].avgObjSize avg obj size on shard ] + } The output information displays: diff --git a/source/reference/method/db.collection.getShardVersion.txt b/source/reference/method/db.collection.getShardVersion.txt index ad13d694986..8e9a913dee6 100644 --- a/source/reference/method/db.collection.getShardVersion.txt +++ b/source/reference/method/db.collection.getShardVersion.txt @@ -21,3 +21,16 @@ db.collection.getShardVersion() with a sharded cluster. For internal and diagnostic use only. + +Compatibility +------------- + +.. 
|command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-free-or-serverless.rst + +.. include:: /includes/fact-environments-onprem-only.rst diff --git a/source/reference/method/db.collection.hideIndex.txt b/source/reference/method/db.collection.hideIndex.txt index ea1b6476bcf..97b9828d578 100644 --- a/source/reference/method/db.collection.hideIndex.txt +++ b/source/reference/method/db.collection.hideIndex.txt @@ -17,8 +17,6 @@ Definition .. method:: db.collection.hideIndex() - .. versionadded:: 4.4 - .. |dbcommand| replace:: ``index.hidden`` collection option set using the :dbcommand:`collMod` command .. include:: /includes/fact-mongosh-shell-method-alt @@ -27,9 +25,9 @@ Definition hidden from the query planner ` is not evaluated as part of query plan selection. - By hiding an index from the planner, users can evaluate the + By hiding an index from the planner, you can evaluate the potential impact of dropping an index without actually dropping the - index. If the impact is negative, the user can unhide the index + index. If the impact is negative, you can unhide the index instead of having to recreate a dropped index. And because indexes are fully maintained while hidden, the indexes are immediately available for use once unhidden. @@ -85,9 +83,7 @@ Feature Compatibility Version ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ To hide an index, you must have :ref:`featureCompatibilityVersion -` set to ``4.4`` or greater. However, once hidden, the index -remains hidden even with :ref:`featureCompatibilityVersion ` -set to ``4.2`` on MongoDB 4.4 binaries. +` set to ``{+minimum-lts-version+}`` or greater. Restrictions ~~~~~~~~~~~~ diff --git a/source/reference/method/db.collection.insert.txt b/source/reference/method/db.collection.insert.txt index 3acc7fcac3f..a1fc4a46143 100644 --- a/source/reference/method/db.collection.insert.txt +++ b/source/reference/method/db.collection.insert.txt @@ -4,8 +4,9 @@ db.collection.insert() .. default-domain:: mongodb -.. meta:: +.. meta:: :keywords: deprecated + :description: The insert method is deprecated should be replaced by insertOne or insertMany. .. contents:: On this page :local: @@ -152,6 +153,14 @@ Write Concerns and Transactions .. |operation| replace:: :method:`db.collection.insert()` +Oplog Entries +~~~~~~~~~~~~~ + +If a ``db.collection.insert()`` operation successfully inserts a document, +the operation adds an entry on the :term:`oplog` (operations log). +If the operation fails, the operation does not add an entry on the +oplog. + Examples -------- @@ -303,7 +312,7 @@ concern errors, the results include the "errmsg" : "waiting for replication timed out", "errInfo" : { "wtimeout" : true, - "writeConcern" : { // Added in MongoDB 4.4 + "writeConcern" : { "w" : "majority", "wtimeout" : 100, "provenance" : "getLastErrorDefaults" diff --git a/source/reference/method/db.collection.insertMany.txt b/source/reference/method/db.collection.insertMany.txt index 50878697223..f512ade1a10 100644 --- a/source/reference/method/db.collection.insertMany.txt +++ b/source/reference/method/db.collection.insertMany.txt @@ -4,6 +4,9 @@ db.collection.insertMany() .. default-domain:: mongodb +.. meta:: + :description: Insert multiple documents into a collection. + .. facet:: :name: programming_language :values: shell @@ -186,6 +189,14 @@ Performance Consideration for Random Data .. 
include:: /includes/indexes/random-data-performance.rst +Oplog Entries +~~~~~~~~~~~~~ + +If a ``db.collection.insertMany()`` operation successfully inserts one +or more documents, the operation adds an entry on the :term:`oplog` +(operations log) for each inserted document. If the operation fails, the +operation does not add an entry on the oplog. + .. _insertMany-examples: Examples @@ -419,7 +430,7 @@ This operation returns: "errmsg" : "waiting for replication timed out", "errInfo" : { "wtimeout" : true, - "writeConcern" : { // Added in MongoDB 4.4 + "writeConcern" : { "w" : "majority", "wtimeout" : 100, "provenance" : "getLastErrorDefaults" diff --git a/source/reference/method/db.collection.insertOne.txt b/source/reference/method/db.collection.insertOne.txt index c66b543b8f0..b02120c1834 100644 --- a/source/reference/method/db.collection.insertOne.txt +++ b/source/reference/method/db.collection.insertOne.txt @@ -4,6 +4,9 @@ db.collection.insertOne() .. default-domain:: mongodb +.. meta:: + :description: Insert a single document into a collection. + .. facet:: :name: programming_language :values: shell @@ -130,6 +133,14 @@ Write Concerns and Transactions .. |operation| replace:: :method:`db.collection.insertOne()` +Oplog Entries +~~~~~~~~~~~~~ + +If a ``db.collection.insertOne()`` operation successfully inserts a +document, the operation adds an entry on the :term:`oplog` (operations +log). If the operation fails, the operation does not add an entry on the +oplog. + .. _insertOne-examples: Examples @@ -234,7 +245,7 @@ Given a three member replica set, the following operation specifies a print (e); } -If the acknowledgement takes longer than the ``wtimeout`` limit, the following +If the acknowledgment takes longer than the ``wtimeout`` limit, the following exception is thrown: .. code-block:: javascript @@ -244,7 +255,7 @@ exception is thrown: "errmsg" : "waiting for replication timed out", "errInfo" : { "wtimeout" : true, - "writeConcern" : { // Added in MongoDB 4.4 + "writeConcern" : { "w" : "majority", "wtimeout" : 100, "provenance" : "getLastErrorDefaults" diff --git a/source/reference/method/db.collection.mapReduce.txt b/source/reference/method/db.collection.mapReduce.txt index 67ea230c763..4b063c938c8 100644 --- a/source/reference/method/db.collection.mapReduce.txt +++ b/source/reference/method/db.collection.mapReduce.txt @@ -23,6 +23,19 @@ db.collection.mapReduce() .. include:: /includes/extracts/views-unsupported-mapReduce.rst +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-free.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ diff --git a/source/reference/method/db.collection.reIndex.txt b/source/reference/method/db.collection.reIndex.txt index 3344a954a5c..d166a5be06a 100644 --- a/source/reference/method/db.collection.reIndex.txt +++ b/source/reference/method/db.collection.reIndex.txt @@ -35,6 +35,21 @@ Definition - For most users, the :method:`db.collection.reIndex()` command is unnecessary. + +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-free.rst + +.. 
include:: /includes/fact-environments-onprem-only.rst + + Behavior -------- diff --git a/source/reference/method/db.collection.remove.txt b/source/reference/method/db.collection.remove.txt index 6684d9f3fea..b4e08fa1082 100644 --- a/source/reference/method/db.collection.remove.txt +++ b/source/reference/method/db.collection.remove.txt @@ -6,6 +6,7 @@ db.collection.remove() .. meta:: :keywords: deprecated + :description: The remove method is deprecated should be replaced by deleteOne or deleteMany. .. contents:: On this page :local: @@ -314,7 +315,7 @@ concern errors, the results include the "errmsg" : "waiting for replication timed out", "errInfo" : { "wtimeout" : true, - "writeConcern" : { // Added in MongoDB 4.4 + "writeConcern" : { "w" : "majority", "wtimeout" : 1, "provenance" : "getLastErrorDefaults" diff --git a/source/reference/method/db.collection.renameCollection.txt b/source/reference/method/db.collection.renameCollection.txt index cd78faf04b9..c024aa6b017 100644 --- a/source/reference/method/db.collection.renameCollection.txt +++ b/source/reference/method/db.collection.renameCollection.txt @@ -50,6 +50,20 @@ Definition the collection. The default value is ``false``. +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + + Behavior -------- diff --git a/source/reference/method/db.collection.replaceOne.txt b/source/reference/method/db.collection.replaceOne.txt index 47b8a77952d..94a6e58b67e 100644 --- a/source/reference/method/db.collection.replaceOne.txt +++ b/source/reference/method/db.collection.replaceOne.txt @@ -4,6 +4,9 @@ db.collection.replaceOne() .. default-domain:: mongodb +.. meta:: + :description: Replace a matched document with a new document. + .. facet:: :name: programming_language :values: shell @@ -173,9 +176,7 @@ replacement document. Shard Key Requirements In Replacement Document `````````````````````````````````````````````` -Starting in MongoDB 4.4, the replacement document does not need to -include the shard key. In MongoDB 4.2 and earlier, the replacement -document must include the shard key. +The replacement document does not need to include the shard key. .. include:: /includes/shard-key-modification-warning.rst @@ -217,7 +218,7 @@ To modify the **existing** shard key value with Missing Shard Key ````````````````` -Starting in version 4.4, documents in a sharded collection can be +Documents in a sharded collection can be :ref:`missing the shard key fields `. To use :method:`db.collection.replaceOne()` to set the document's **missing** shard key, you :red:`must` run on a @@ -396,11 +397,9 @@ Given a three member replica set, the following operation specifies a print(e); } -If the acknowledgement takes longer than the ``wtimeout`` limit, the following +If the acknowledgment takes longer than the ``wtimeout`` limit, the following exception is thrown: -.. versionchanged:: 4.4 - .. 
code-block:: javascript WriteConcernError({ diff --git a/source/reference/method/db.collection.stats.txt b/source/reference/method/db.collection.stats.txt index 0ddb2fd2ff2..c354b3d033b 100644 --- a/source/reference/method/db.collection.stats.txt +++ b/source/reference/method/db.collection.stats.txt @@ -405,7 +405,7 @@ The operation returns: "nindexes" : 4, "indexBuilds" : [ ], // Available starting in MongoDB 4.2 "totalIndexSize" : 704512, - "totalSize" : 10375168, // Available starting in MongoDB 4.4 + "totalSize" : 10375168, "indexSizes" : { "_id_" : 241664, "cuisine_1" : 147456, @@ -447,7 +447,7 @@ The operation returns: "nindexes" : 4, "indexBuilds" : [ ], // Available starting in MongoDB 4.2 "totalIndexSize" : 688, - "totalSize" : 10132, // Available starting in MongoDB 4.4 + "totalSize" : 10132, "indexSizes" : { "_id_" : 236, "cuisine_1" : 144, @@ -534,7 +534,7 @@ The operation returns: }, "indexBuilds" : [ ], // Available starting in MongoDB 4.2 "totalIndexSize" : 704512, - "totalSize" : 10375168, // Available starting in MongoDB 4.4 + "totalSize" : 10375168, "indexSizes" : { "_id_" : 241664, "cuisine_1" : 147456, @@ -651,7 +651,7 @@ Both operations will return the same output: }, "indexBuilds" : [ ], // Available starting in MongoDB 4.2 "totalIndexSize" : 704512, - "totalSize" : 10375168, // Available starting in MongoDB 4.4 + "totalSize" : 10375168, "indexSizes" : { "_id_" : 241664, "cuisine_1" : 147456, diff --git a/source/reference/method/db.collection.update.txt b/source/reference/method/db.collection.update.txt index 64039a146d3..8eb59c0cb98 100644 --- a/source/reference/method/db.collection.update.txt +++ b/source/reference/method/db.collection.update.txt @@ -8,6 +8,7 @@ db.collection.update() .. meta:: :keywords: deprecated + :description: The update method is deprecated should be replaced by updateOne or updateMany. .. contents:: On this page :local: @@ -39,8 +40,8 @@ Compatibility .. include:: /includes/fact-compatibility.rst To learn how to update documents hosted in {+atlas+} by -using the Atlas UI, see :atlas:`Create, View, Update, and Delete Documents -`. +using the Atlas UI, see :ref:``. + Syntax ------ @@ -326,7 +327,7 @@ See also :ref:`method-update-sharded-upsert`. Missing Shard Key ````````````````` -Starting in version 4.4, documents in a sharded collection can be +Documents in a sharded collection can be :ref:`missing the shard key fields `. To use :method:`db.collection.update()` to set the document's **missing** shard key, you :red:`must` run on a @@ -403,6 +404,14 @@ Write Concerns and Transactions .. _example-update-replace-fields: .. _update-behavior-replacement-document: +Oplog Entries +~~~~~~~~~~~~~ + +If a ``db.collection.update()`` operation successfully updates one or +more documents, the operation adds an entry on the :term:`oplog` +(operations log). If the operation fails or does not find any documents +to update, the operation does not add an entry on the oplog. + Examples -------- @@ -1005,9 +1014,11 @@ with :method:`~db.collection.update()`. :method:`WriteResult()` .. _update-with-unique-indexes: +.. _retryable-update-upsert: +.. _upsert-duplicate-key-error: -Upsert with Unique Index -```````````````````````` +Upsert with Duplicate Values +```````````````````````````` .. include:: /includes/extracts/upsert-unique-index-update-method.rst @@ -1469,8 +1480,6 @@ If the :method:`db.collection.update()` method encounters write concern errors, the results include the :data:`WriteResult.writeConcernError` field: -.. versionchanged:: 4.4 - .. 
code-block:: javascript WriteResult({ @@ -1522,4 +1531,3 @@ field: .. seealso:: :method:`WriteResult.hasWriteError()` - diff --git a/source/reference/method/db.collection.updateMany.txt b/source/reference/method/db.collection.updateMany.txt index b37e053e2c4..9a0fe6b3771 100644 --- a/source/reference/method/db.collection.updateMany.txt +++ b/source/reference/method/db.collection.updateMany.txt @@ -4,6 +4,9 @@ db.collection.updateMany() .. default-domain:: mongodb +.. meta:: + :description: Update multiple documents that match a specified filter. + .. facet:: :name: programming_language :values: shell @@ -185,6 +188,8 @@ The method returns a document that contains: - ``upsertedId`` containing the ``_id`` for the upserted document +- ``upsertedCount`` containing the number of upserted documents + Access Control -------------- @@ -340,6 +345,15 @@ Write Concerns and Transactions .. |operation| replace:: :method:`db.collection.updateMany()` +Oplog Entries +~~~~~~~~~~~~~ + +If a ``db.collection.updateMany()`` operation successfully updates one +or more documents, the operation adds an entry on the :term:`oplog` +(operations log) for each updated document. If the operation fails or +does not find any documents to update, the operation does not add an +entry on the oplog. + .. _updateMany-method-examples: Examples @@ -589,7 +603,8 @@ The operation returns: "acknowledged" : true, "matchedCount" : 0, "modifiedCount" : 0, - "upsertedId" : ObjectId("56fc5dcb39ee682bdc609b02") + "upsertedId" : ObjectId("56fc5dcb39ee682bdc609b02"), + "upsertedCount": 1 } The collection now contains the following documents: @@ -627,11 +642,9 @@ Given a three member replica set, the following operation specifies a print(e); } -If the acknowledgement takes longer than the ``wtimeout`` limit, the following +If the acknowledgment takes longer than the ``wtimeout`` limit, the following exception is thrown: -.. versionchanged:: 4.4 - .. code-block:: javascript WriteConcernError({ diff --git a/source/reference/method/db.collection.updateOne.txt b/source/reference/method/db.collection.updateOne.txt index 57712b3f152..72f4da164c8 100644 --- a/source/reference/method/db.collection.updateOne.txt +++ b/source/reference/method/db.collection.updateOne.txt @@ -4,6 +4,9 @@ db.collection.updateOne() .. default-domain:: mongodb +.. meta:: + :description: Update a single document that matches a specified filter. + .. facet:: :name: programming_language :values: shell @@ -180,7 +183,9 @@ The method returns a document that contains: - ``modifiedCount`` containing the number of modified documents -- ``upsertedId`` containing the ``_id`` for the upserted document. +- ``upsertedId`` containing the ``_id`` for the upserted document + +- ``upsertedCount`` containing the number of upserted documents - A boolean ``acknowledged`` as ``true`` if the operation ran with :term:`write concern` or ``false`` if write concern was disabled @@ -344,40 +349,41 @@ See also :ref:`updateOne-sharded-upsert`. Missing Shard Key ````````````````` -Starting in version 4.4, documents in a sharded collection can be -:ref:`missing the shard key fields `. To use -:method:`db.collection.updateOne()` to set the document's -**missing** shard key, you :red:`must` run on a -:binary:`~bin.mongos`. Do :red:`not` issue the operation directly on -the shard. +- Starting in version 7.1, you do not need to provide the :term:`shard key` + or ``_id`` field in the query specification. 
-In addition, the following requirements also apply: +- Documents in a sharded collection can be + :ref:`missing the shard key fields `. To use + :method:`db.collection.updateOne()` to set a **missing** shard key, + you :red:`must` run on a :binary:`~bin.mongos`. Do :red:`not` issue + the operation directly on the shard. -.. list-table:: - :header-rows: 1 - :widths: 30 70 + In addition, the following requirements also apply: - * - Task + .. list-table:: + :header-rows: 1 + :widths: 30 70 - - Requirements + * - Task - * - To set to ``null`` + - Requirements - - - Requires equality filter on the full shard key if + * - To set to ``null`` + + - Requires equality filter on the full shard key if ``upsert: true``. - * - To set to a non-``null`` value + * - To set to a non-``null`` value - - - :red:`Must` be performed either inside a + - :red:`Must` be performed either inside a :ref:`transaction ` or as a - :doc:`retryable write `. + :ref:`retryable write `. - - Requires equality filter on the full shard key if ``upsert: - true``. + Requires equality filter on the full shard key if ``upsert: true``. -.. tip:: + .. tip:: - .. include:: /includes/extracts/missing-shard-key-equality-condition-abridged.rst + .. include:: /includes/extracts/missing-shard-key-equality-condition-abridged.rst See also: @@ -410,6 +416,14 @@ Write Concerns and Transactions .. |operation| replace:: :method:`db.collection.updateOne()` +Oplog Entries +~~~~~~~~~~~~~ + +If a ``db.collection.updateOne()`` operation successfully updates a +document, the operation adds an entry on the :term:`oplog` (operations +log). If the operation fails or does not find a document to update, the +operation does not add an entry on the oplog. + .. _updateOne-method-examples: Examples @@ -659,7 +673,8 @@ Since ``upsert:true`` the document is ``inserted`` based on the ``filter`` and "acknowledged" : true, "matchedCount" : 0, "modifiedCount" : 0, - "upsertedId" : 4 + "upsertedId" : 4, + "upsertedCount": 1 } The collection now contains the following documents: @@ -741,11 +756,9 @@ within 100 milliseconds, it returns: { "acknowledged" : true, "matchedCount" : 1, "modifiedCount" : 1 } -If the acknowledgement takes longer than the ``wtimeout`` limit, the following +If the acknowledgment takes longer than the ``wtimeout`` limit, the following exception is thrown: -.. versionchanged:: 4.4 - .. code-block:: javascript WriteConcernError({ diff --git a/source/reference/method/db.collection.validate.txt b/source/reference/method/db.collection.validate.txt index b3cfa36d3b3..06ee941a41c 100644 --- a/source/reference/method/db.collection.validate.txt +++ b/source/reference/method/db.collection.validate.txt @@ -34,10 +34,24 @@ Description The :method:`db.collection.validate()` method is a wrapper around the :dbcommand:`validate` command. + +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-free-or-serverless.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ -.. note:: Changed in version 4.4 +.. note:: :method:`db.collection.validate()` no longer accepts just a boolean parameter. See :ref:`4.4-validate-method-signature`. 
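For example, instead of passing a boolean, pass a document parameter (the ``myCollection`` name is a placeholder):

.. code-block:: javascript

   // Performs a more thorough full validation; omit "full" (or set it to false) for a faster check
   db.myCollection.validate( { full: true } )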
@@ -78,8 +92,8 @@ following optional document parameter with the fields: - If ``true``, performs a more thorough check with the following exception: - - Starting in MongoDB 4.4, full validation on the ``oplog`` - for WiredTiger skips the more thorough check. + - Full validation on the ``oplog`` for WiredTiger skips the more + thorough check. - If ``false``, omits some checks for a faster but less thorough check. diff --git a/source/reference/method/db.createCollection.txt b/source/reference/method/db.createCollection.txt index 173888199f5..cb7dd76719a 100644 --- a/source/reference/method/db.createCollection.txt +++ b/source/reference/method/db.createCollection.txt @@ -4,6 +4,9 @@ db.createCollection() .. default-domain:: mongodb +.. meta:: + :description: Create a new collection. + .. facet:: :name: programming_language :values: shell @@ -551,8 +554,17 @@ options when you create a collection with This operation creates a new collection named ``users`` with a specific configuration string that MongoDB will pass to the -``wiredTiger`` storage engine. See the :wtdocs-v5.0:`WiredTiger documentation of -collection level options ` -for specific ``wiredTiger`` options. +``wiredTiger`` storage engine. + +For example, to specify the ``zlib`` compressor for file blocks in the +``users`` collection, set the ``block_compressor`` option with the +following command: + +.. code-block:: javascript + + db.createCollection( + "users", + { storageEngine: { wiredTiger: { configString: "block_compressor=zlib" } } } + ) .. include:: /includes/fact-encryption-options-create-collection.rst diff --git a/source/reference/method/db.createUser.txt b/source/reference/method/db.createUser.txt index 7b4bbc51077..c6c29fbd835 100644 --- a/source/reference/method/db.createUser.txt +++ b/source/reference/method/db.createUser.txt @@ -187,7 +187,18 @@ Definition The client digests the password and passes the digested password to the server. +Compatibility +------------- +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-free-or-m10.rst + +.. include:: /includes/fact-environments-onprem-only.rst Roles ~~~~~ diff --git a/source/reference/method/db.currentOp.txt b/source/reference/method/db.currentOp.txt index dd7c10f2611..c67c37b82f6 100644 --- a/source/reference/method/db.currentOp.txt +++ b/source/reference/method/db.currentOp.txt @@ -29,6 +29,21 @@ Definition .. include:: /includes/5.0-fact-currentop.rst + +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + + Syntax ~~~~~~ @@ -118,7 +133,7 @@ are equivalent: db.currentOp( { "$all": true } ) :method:`db.currentOp` and the -:doc:`database profiler` report the same +:ref:`database profiler ` report the same basic diagnostic information for all CRUD operations, including the following: @@ -206,7 +221,8 @@ database ``db1`` that have been running longer than 3 seconds: Active Indexing Operations ~~~~~~~~~~~~~~~~~~~~~~~~~~ -The following example returns information on index creation operations: +The following example returns information on index creation operations +on any number of fields: .. 
code-block:: javascript @@ -215,12 +231,12 @@ The following example returns information on index creation operations: currentOp: true, $or: [ { op: "command", "command.createIndexes": { $exists: true } }, + { op: "command", "command.$truncated": /^\{ createIndexes/ }, { op: "none", "msg" : /^Index Build/ } ] } ) - Output Example -------------- diff --git a/source/reference/method/db.dropAllRoles.txt b/source/reference/method/db.dropAllRoles.txt index bcccb8de08f..4b3c05b9c89 100644 --- a/source/reference/method/db.dropAllRoles.txt +++ b/source/reference/method/db.dropAllRoles.txt @@ -49,6 +49,21 @@ Definition .. |local-cmd-name| replace:: :method:`db.dropAllRoles()` + +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-free.rst + +.. include:: /includes/fact-environments-onprem-only.rst + + Behavior -------- diff --git a/source/reference/method/db.dropDatabase.txt b/source/reference/method/db.dropDatabase.txt index bba3c4b4d6d..51b4bd53556 100644 --- a/source/reference/method/db.dropDatabase.txt +++ b/source/reference/method/db.dropDatabase.txt @@ -4,6 +4,9 @@ db.dropDatabase() .. default-domain:: mongodb +.. meta:: + :description: Delete a database. + .. facet:: :name: programming_language :values: shell @@ -21,12 +24,19 @@ Definition Removes the current database, deleting the associated data files. + Compatibility ------------- -.. |operator-method| replace:: ``db.dropDatabase()`` +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst -.. include:: /includes/fact-compatibility.rst +.. include:: /includes/fact-environments-onprem-only.rst Syntax ------ @@ -54,7 +64,7 @@ The :method:`db.dropDatabase()` method takes an optional parameter: :writeconcern:`"majority"`. When issued on a replica set, if the specified write concern - results in fewer member acknowledgements than write concern + results in fewer member acknowledgments than write concern :writeconcern:`"majority"`, the operation uses :writeconcern:`"majority"`. Otherwise, the specified write concern is used. @@ -105,11 +115,11 @@ Replica Sets members (i.e. uses the write concern :writeconcern:`"majority"`). Starting in MongoDB 4.2, you can specify a write concern to the - method. If you specify a write concern that requires acknowledgement + method. If you specify a write concern that requires acknowledgment from fewer than the majority, the method uses write concern :writeconcern:`"majority"`. - If you specify a write concern that requires acknowledgement from + If you specify a write concern that requires acknowledgment from more than the majority, the method uses the specified write concern. Sharded Clusters diff --git a/source/reference/method/db.fsyncLock.txt b/source/reference/method/db.fsyncLock.txt index 4b0603f7f6c..441c7c6222b 100644 --- a/source/reference/method/db.fsyncLock.txt +++ b/source/reference/method/db.fsyncLock.txt @@ -4,6 +4,10 @@ db.fsyncLock() .. default-domain:: mongodb +.. facet:: + :name: genre + :values: reference + .. contents:: On this page :local: :backlinks: none @@ -20,13 +24,12 @@ Definition .. 
method:: db.fsyncLock() Flushes all pending writes from the storage layer to disk and locks the - server to prevent any additional writes until the lock is released. + server to prevent any additional writes until the lock is released. - .. versionadded:: 7.1 + .. |fsyncLockUnlock| replace:: the ``db.fsyncLock()`` and + :method:`db.fsyncUnlock` methods + .. include:: /includes/fsync-mongos - When the ``db.fsyncLock`` method runs on :program:`mongos`, it applies an - fsync lock to each shard in the cluster. - .. |dbcommand| replace:: :dbcommand:`fsync` command .. include:: /includes/fact-mongosh-shell-method-alt.rst @@ -49,14 +52,14 @@ Definition * - ``info`` - Information on the status of the operation. - * - ``lockCount`` + * - ``lockCount`` - Number of locks currently on the instance. - * - ``seeAlso`` + * - ``seeAlso`` - Link to the :dbcommand:`fsync` command documentation. - * - ``ok`` + * - ``ok`` - The status code. - :method:`db.fsyncLock()` is an administrative command. Use this method to + :method:`db.fsyncLock()` is an administrative command. Use this method to lock a server or cluster before :ref:`backup operations `. Behavior @@ -74,7 +77,7 @@ Fsync locks execute on the primary in a replica set or sharded cluster. If the primary goes down or becomes unreachable due to network issues, the cluster :ref:`elects ` a new primary from the available secondaries. If a primary with an fsync lock goes down, the new primary does -**not** retain the fsync lock and can handle write operations. When elections +**not** retain the fsync lock and can handle write operations. When elections occur during backup operations, the resulting backup may be inconsistent or unusable. @@ -90,7 +93,7 @@ To recover from the primary going down: #. Restart the backup. -Additionally, fsync locks are persistent. When the old primary comes online +Additionally, fsync locks are persistent. When the old primary comes online again, you need to run the :method:`db.fsyncUnlock` command to release the lock on the node. diff --git a/source/reference/method/db.fsyncUnlock.txt b/source/reference/method/db.fsyncUnlock.txt index d04c336e323..9c36708ede4 100644 --- a/source/reference/method/db.fsyncUnlock.txt +++ b/source/reference/method/db.fsyncUnlock.txt @@ -4,6 +4,10 @@ db.fsyncUnlock() .. default-domain:: mongodb +.. facet:: + :name: genre + :values: reference + .. contents:: On this page :local: :backlinks: none @@ -21,10 +25,9 @@ Definition Reduces the lock count on the server to renable write operations. - .. versionadded:: 7.1 - - When the ``db.fsyncUnlock()`` method runs on :program:`mongos`, it - reduces the lock count for each shard in the cluster. + .. |fsyncLockUnlock| replace:: the :method:`db.fsyncLock` and + ``db.fsyncUnlock()`` methods + .. include:: /includes/fsync-mongos .. |dbcommand| replace:: :dbcommand:`fsyncUnlock` command .. include:: /includes/fact-mongosh-shell-method-alt.rst @@ -58,6 +61,20 @@ Definition The :method:`db.fsyncUnlock()` method wraps the :dbcommand:`fsyncUnlock` command. + +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-free.rst + +.. 
include:: /includes/fact-environments-onprem-only.rst + Compatibility with WiredTiger ----------------------------- diff --git a/source/reference/method/db.getCollection.txt b/source/reference/method/db.getCollection.txt index d697ae7be1e..bf83b64e169 100644 --- a/source/reference/method/db.getCollection.txt +++ b/source/reference/method/db.getCollection.txt @@ -49,7 +49,7 @@ Behavior -------- The :method:`db.getCollection()` object can access any -:doc:`collection methods`. +:ref:`collection methods `. The collection specified may or may not exist on the server. If the collection does not exist, MongoDB creates it implicitly as part of diff --git a/source/reference/method/db.getProfilingStatus.txt b/source/reference/method/db.getProfilingStatus.txt index dae1082205b..2e5c51d8bc4 100644 --- a/source/reference/method/db.getProfilingStatus.txt +++ b/source/reference/method/db.getProfilingStatus.txt @@ -17,9 +17,8 @@ db.getProfilingStatus() and :setting:`~operationProfiling.slowOpSampleRate` setting. - Starting in MongoDB 4.4.2, you can set a ``filter`` to - control which operations are logged by the profiler. When - set, any configured filters are also returned by + You can set a ``filter`` to control which operations are logged by + the profiler. When set, any configured filters are also returned by :method:`db.getProfilingStatus()`, along with a ``note`` explaining filter behavior. diff --git a/source/reference/method/db.grantPrivilegesToRole.txt b/source/reference/method/db.grantPrivilegesToRole.txt index 53f0a09de3c..7d90dc86515 100644 --- a/source/reference/method/db.grantPrivilegesToRole.txt +++ b/source/reference/method/db.grantPrivilegesToRole.txt @@ -64,6 +64,21 @@ Definition .. |local-cmd-name| replace:: :method:`db.grantPrivilegesToRole()` + +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-free.rst + +.. include:: /includes/fact-environments-onprem-only.rst + + Behavior -------- diff --git a/source/reference/method/db.grantRolesToRole.txt b/source/reference/method/db.grantRolesToRole.txt index fec555f9a58..54b0b780fcf 100644 --- a/source/reference/method/db.grantRolesToRole.txt +++ b/source/reference/method/db.grantRolesToRole.txt @@ -50,6 +50,21 @@ Definition .. |local-cmd-name| replace:: :method:`db.grantRolesToRole()` .. include:: /includes/fact-roles-array-contents.rst + +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-free.rst + +.. include:: /includes/fact-environments-onprem-only.rst + + Behavior -------- diff --git a/source/reference/method/db.hello.txt b/source/reference/method/db.hello.txt index 9b5aaf817c7..9c461ce197f 100644 --- a/source/reference/method/db.hello.txt +++ b/source/reference/method/db.hello.txt @@ -12,7 +12,7 @@ db.hello() .. method:: db.hello() - .. versionadded:: 5.0 (and 4.4.2, 4.2.10, 4.0.21, and 3.6.21) + .. versionadded:: 5.0 Returns a document that describes the role of the :binary:`~bin.mongod` instance. @@ -26,3 +26,16 @@ db.hello() :dbcommand:`hello` for the complete documentation of the output of :method:`db.hello()`. + +Compatibility +------------- + +.. 
|command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst diff --git a/source/reference/method/db.hostInfo.txt b/source/reference/method/db.hostInfo.txt index cc0a4a3501c..512c8296f5b 100644 --- a/source/reference/method/db.hostInfo.txt +++ b/source/reference/method/db.hostInfo.txt @@ -54,3 +54,17 @@ db.hostInfo() See :data:`hostInfo` for full documentation of the output of :method:`db.hostInfo()`. + + +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-free.rst + +.. include:: /includes/fact-environments-onprem-only.rst diff --git a/source/reference/method/db.killOp.txt b/source/reference/method/db.killOp.txt index 5bb3fb600b7..ba5cc7e7db7 100644 --- a/source/reference/method/db.killOp.txt +++ b/source/reference/method/db.killOp.txt @@ -46,6 +46,19 @@ Description .. include:: /includes/extracts/warning-terminating-ops-method.rst +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-limited-free-and-m10.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Sharded Cluster --------------- diff --git a/source/reference/method/db.listCommands.txt b/source/reference/method/db.listCommands.txt index 79796232c0f..4f78b1b02c9 100644 --- a/source/reference/method/db.listCommands.txt +++ b/source/reference/method/db.listCommands.txt @@ -15,3 +15,17 @@ db.listCommands() Provides a list of all database commands. See the :doc:`/reference/command` document for a more extensive index of these options. + + +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst diff --git a/source/reference/method/db.logout.txt b/source/reference/method/db.logout.txt index cf6844ffb8c..9a010e59568 100644 --- a/source/reference/method/db.logout.txt +++ b/source/reference/method/db.logout.txt @@ -39,3 +39,17 @@ db.logout() :method:`db.logout()` function provides a wrapper around the database command :dbcommand:`logout`. + + +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst diff --git a/source/reference/method/db.revokePrivilegesFromRole.txt b/source/reference/method/db.revokePrivilegesFromRole.txt index 1dd088d0462..839c8b776d4 100644 --- a/source/reference/method/db.revokePrivilegesFromRole.txt +++ b/source/reference/method/db.revokePrivilegesFromRole.txt @@ -58,6 +58,20 @@ Definition - .. include:: /includes/fact-write-concern-spec-link.rst +Compatibility +------------- + +.. 
|command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-free.rst + +.. include:: /includes/fact-environments-onprem-only.rst + + Behavior -------- diff --git a/source/reference/method/db.revokeRolesFromRole.txt b/source/reference/method/db.revokeRolesFromRole.txt index 56529aad112..e1ea51c9d00 100644 --- a/source/reference/method/db.revokeRolesFromRole.txt +++ b/source/reference/method/db.revokeRolesFromRole.txt @@ -49,6 +49,21 @@ Definition .. |local-cmd-name| replace:: :method:`db.revokeRolesFromRole()` .. include:: /includes/fact-roles-array-contents.rst + +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-free.rst + +.. include:: /includes/fact-environments-onprem-only.rst + + Behavior -------- diff --git a/source/reference/method/db.rotateCertificates.txt b/source/reference/method/db.rotateCertificates.txt index e52161e96c2..63f1c31f1d8 100644 --- a/source/reference/method/db.rotateCertificates.txt +++ b/source/reference/method/db.rotateCertificates.txt @@ -50,6 +50,21 @@ Definition The :method:`db.rotateCertificates()` method wraps the :dbcommand:`rotateCertificates` command. + +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + + Output ------ diff --git a/source/reference/method/db.serverBuildInfo.txt b/source/reference/method/db.serverBuildInfo.txt index 759326176a3..f3ade46b704 100644 --- a/source/reference/method/db.serverBuildInfo.txt +++ b/source/reference/method/db.serverBuildInfo.txt @@ -16,3 +16,17 @@ db.serverBuildInfo() command`. :dbcommand:`buildInfo` returns a document that contains an overview of parameters used to compile this :binary:`~bin.mongod` instance. + + +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst diff --git a/source/reference/method/db.serverStatus.txt b/source/reference/method/db.serverStatus.txt index 179e810119e..8a0b09af064 100644 --- a/source/reference/method/db.serverStatus.txt +++ b/source/reference/method/db.serverStatus.txt @@ -18,6 +18,20 @@ db.serverStatus() This command provides a wrapper around the database command :dbcommand:`serverStatus`. + +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. 
include:: /includes/fact-environments-onprem-only.rst + Behavior -------- @@ -88,10 +102,9 @@ After you run an update query, subsequent calls to Include ``mirroredReads`` ~~~~~~~~~~~~~~~~~~~~~~~~~ -By default, the :serverstatus:`mirroredReads` information (available -starting in version 4.4) is not included in the output. To return -:serverstatus:`mirroredReads` information, you must explicitly specify -the inclusion: +By default, the :serverstatus:`mirroredReads` information is not included in +the output. To return :serverstatus:`mirroredReads` information, you must +explicitly specify the inclusion: .. code-block:: javascript diff --git a/source/reference/method/db.setLogLevel.txt b/source/reference/method/db.setLogLevel.txt index a5d6d90e935..b3d7d9d3aef 100644 --- a/source/reference/method/db.setLogLevel.txt +++ b/source/reference/method/db.setLogLevel.txt @@ -1,3 +1,4 @@ + ================ db.setLogLevel() ================ @@ -91,9 +92,89 @@ Omit the ```` parameter to set the default verbosity for all components; i.e. the :setting:`systemLog.verbosity` setting. The operation sets the default verbosity to ``1``: -.. code-block:: javascript +.. io-code-block:: + :copyable: true + + .. input:: + :language: javascript + + db.setLogLevel(1) - db.setLogLevel(1) + .. output:: + :language: javascript + :visible: false + + { + was: { + verbosity: 1, + accessControl: { verbosity: -1 }, + assert: { verbosity: -1 }, + command: { verbosity: -1 }, + control: { verbosity: -1 }, + executor: { verbosity: -1 }, + geo: { verbosity: -1 }, + globalIndex: { verbosity: -1 }, + index: { verbosity: -1 }, + network: { + verbosity: -1, + asio: { verbosity: -1 }, + bridge: { verbosity: -1 }, + connectionPool: { verbosity: -1 } + }, + processHealth: { verbosity: -1 }, + query: { + verbosity: -1, + optimizer: { verbosity: -1 }, + ce: { verbosity: -1 } + }, + queryStats: { verbosity: -1 }, + replication: { + verbosity: -1, + election: { verbosity: -1 }, + heartbeats: { verbosity: -1 }, + initialSync: { verbosity: -1 }, + rollback: { verbosity: -1 } + }, + sharding: { + verbosity: -1, + rangeDeleter: { verbosity: -1 }, + shardingCatalogRefresh: { verbosity: -1 }, + migration: { verbosity: -1 }, + reshard: { verbosity: -1 }, + migrationPerf: { verbosity: -1 } + }, + storage: { + verbosity: -1, + recovery: { verbosity: -1 }, + journal: { verbosity: 2 }, + wt: { + verbosity: -1, + wtBackup: { verbosity: -1 }, + wtCheckpoint: { verbosity: -1 }, + wtCompact: { verbosity: -1 }, + wtEviction: { verbosity: -1 }, + wtHS: { verbosity: -1 }, + wtRecovery: { verbosity: -1 }, + wtRTS: { verbosity: -1 }, + wtSalvage: { verbosity: -1 }, + wtTiered: { verbosity: -1 }, + wtTimestamp: { verbosity: -1 }, + wtTransaction: { verbosity: -1 }, + wtVerify: { verbosity: -1 }, + wtWriteLog: { verbosity: -1 } + } + }, + write: { verbosity: -1 }, + ftdc: { verbosity: -1 }, + tracking: { verbosity: -1 }, + transaction: { verbosity: -1 }, + tenantMigration: { verbosity: -1 }, + test: { verbosity: -1 }, + resourceConsumption: { verbosity: -1 }, + streams: { verbosity: -1 } + }, + ok: 1 + } Set Verbosity Level for a Component ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -102,6 +183,115 @@ Specify the ```` parameter to set the verbosity for the component. The following operation updates the :setting:`systemLog.component.storage.journal.verbosity` to ``2``: -.. code-block:: javascript +.. io-code-block:: + :copyable: true + + .. input:: + :language: javascript + + db.setLogLevel(2, "storage.journal" ) + + .. 
output:: + :language: javascript + :visible: false + + { + was: { + verbosity: 1, + accessControl: { verbosity: -1 }, + assert: { verbosity: -1 }, + command: { verbosity: -1 }, + control: { verbosity: -1 }, + executor: { verbosity: -1 }, + geo: { verbosity: -1 }, + globalIndex: { verbosity: -1 }, + index: { verbosity: -1 }, + network: { + verbosity: -1, + asio: { verbosity: -1 }, + bridge: { verbosity: -1 }, + connectionPool: { verbosity: -1 } + }, + processHealth: { verbosity: -1 }, + query: { + verbosity: -1, + optimizer: { verbosity: -1 }, + ce: { verbosity: -1 } + }, + queryStats: { verbosity: -1 }, + replication: { + verbosity: -1, + election: { verbosity: -1 }, + heartbeats: { verbosity: -1 }, + initialSync: { verbosity: -1 }, + rollback: { verbosity: -1 } + }, + sharding: { + verbosity: -1, + rangeDeleter: { verbosity: -1 }, + shardingCatalogRefresh: { verbosity: -1 }, + migration: { verbosity: -1 }, + reshard: { verbosity: -1 }, + migrationPerf: { verbosity: -1 } + }, + storage: { + verbosity: -1, + recovery: { verbosity: -1 }, + journal: { verbosity: -1 }, + wt: { + verbosity: -1, + wtBackup: { verbosity: -1 }, + wtCheckpoint: { verbosity: -1 }, + wtCompact: { verbosity: -1 }, + wtEviction: { verbosity: -1 }, + wtHS: { verbosity: -1 }, + wtRecovery: { verbosity: -1 }, + wtRTS: { verbosity: -1 }, + wtSalvage: { verbosity: -1 }, + wtTiered: { verbosity: -1 }, + wtTimestamp: { verbosity: -1 }, + wtTransaction: { verbosity: -1 }, + wtVerify: { verbosity: -1 }, + wtWriteLog: { verbosity: -1 } + } + }, + write: { verbosity: -1 }, + ftdc: { verbosity: -1 }, + tracking: { verbosity: -1 }, + transaction: { verbosity: -1 }, + tenantMigration: { verbosity: -1 }, + test: { verbosity: -1 }, + resourceConsumption: { verbosity: -1 }, + streams: { verbosity: -1 } + }, + ok: 1 + } + +Get Global Log Level For a Deployment +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The following operation gets the default logging level verbosity for a +deployment: + +.. io-code-block:: + :copyable: true + + .. input:: + :language: javascript + + db.adminCommand({getParameter: 1, logLevel: 1}); + + .. output:: + :language: javascript + :emphasize-lines: 2 + :visible: false + + { + logLevel: 0, + ok: 1 + } + +.. note:: - db.setLogLevel(2, "storage.journal" ) + You can also get log verbosity levels for MongoDB components. + For details, see :method:`db.getLogComponents()`. \ No newline at end of file diff --git a/source/reference/method/db.setProfilingLevel.txt b/source/reference/method/db.setProfilingLevel.txt index 0bc1f10eb78..f10fd88297e 100644 --- a/source/reference/method/db.setProfilingLevel.txt +++ b/source/reference/method/db.setProfilingLevel.txt @@ -59,10 +59,9 @@ Definition collections that the profiler can write to. The ``profile`` level must be ``0`` for a :binary:`~bin.mongos` instance. - Starting in MongoDB 4.4.2, you can specify a :ref:`filter - ` on both :binary:`~bin.mongod` - and :binary:`~bin.mongos` instances to control which operations are - logged by the profiler. When you specify a ``filter`` for the + You can specify a :ref:`filter ` on both + :binary:`~bin.mongod` and :binary:`~bin.mongos` instances to control which + operations are logged by the profiler. When you specify a ``filter`` for the profiler, the :ref:`slowms `, and :ref:`sampleRate ` options are not used for profiling and slow-query log lines. @@ -187,8 +186,6 @@ Parameters For an example of a filter used to control logged operations, see :ref:`profiler-filter-example`. - .. versionadded:: 4.4.2 - .. 
note:: When a profiling :ref:`filter @@ -317,11 +314,9 @@ Where: - ``filter`` is the **previous** :ref:`filter ` setting. - (*New in MongoDB 4.4.2*) - ``note`` is a string explaining the behavior of ``filter``. This field only appears in the output when ``filter`` is also present. - (*New in MongoDB 4.4.2*) .. note:: @@ -383,8 +378,6 @@ The following example sets for a :binary:`~bin.mongod` or Set a Filter to Determine Profiled Operations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -.. versionadded:: 4.4.2 - The following example sets for a :binary:`~bin.mongod` instance: - the :ref:`profiling level ` to ``1``, diff --git a/source/reference/method/db.shutdownServer.txt b/source/reference/method/db.shutdownServer.txt index 406d47070f2..06ac5a40a3c 100644 --- a/source/reference/method/db.shutdownServer.txt +++ b/source/reference/method/db.shutdownServer.txt @@ -92,15 +92,6 @@ Shutting Down the Replica Set Primary, Secondary, or ``mongos`` .. include:: /includes/quiesce-period.rst -In MongoDB 4.4 and earlier, if running :method:`db.shutdownServer()` -against the replica set :term:`primary`, the operation implicitly uses -:dbcommand:`replSetStepDown` to step down the primary before shutting -down the :binary:`~bin.mongod`. If no secondary in the replica set can -catch up to the primary within ``10`` seconds, the shutdown operation -fails. You can issue :method:`db.shutdownServer()` with :ref:`force: -true ` to shut down the primary *even if* -the step down fails. - .. warning:: Force shutdown of the primary can result in the diff --git a/source/reference/method/db.stats.txt b/source/reference/method/db.stats.txt index bfd332ac6e1..e85b16a9225 100644 --- a/source/reference/method/db.stats.txt +++ b/source/reference/method/db.stats.txt @@ -20,6 +20,20 @@ Description The :method:`db.stats()` method is a wrapper around the :dbcommand:`dbStats` database command. + +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-limited-free.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Parameters ~~~~~~~~~~ diff --git a/source/reference/method/db.updateRole.txt b/source/reference/method/db.updateRole.txt index ac61c516f9b..5161567918a 100644 --- a/source/reference/method/db.updateRole.txt +++ b/source/reference/method/db.updateRole.txt @@ -139,6 +139,21 @@ Definition The :method:`db.updateRole()` method wraps the :dbcommand:`updateRole` command. + +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-free-or-m10.rst + +.. include:: /includes/fact-environments-onprem-only.rst + + Roles ~~~~~ diff --git a/source/reference/method/getKeyVault.txt b/source/reference/method/getKeyVault.txt index 6b3a50cc039..276866a7b2f 100644 --- a/source/reference/method/getKeyVault.txt +++ b/source/reference/method/getKeyVault.txt @@ -46,7 +46,7 @@ Requires Configuring Client-Side Field Level Encryption on Database Connection ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The following example uses a :ref:`locally managed key -` for the client-side field level +` for the client-side field level encryption configuration. .. 
include:: /includes/extracts/csfle-requires-enabling-encryption.rst @@ -60,7 +60,7 @@ Example ------- The following example uses a :ref:`locally managed key -` for the client-side field level +` for the client-side field level encryption configuration. .. include:: /includes/csfle-connection-boilerplate-example.rst diff --git a/source/reference/method/js-atlas-search.txt b/source/reference/method/js-atlas-search.txt index b35976e87dd..6be1aa1b624 100644 --- a/source/reference/method/js-atlas-search.txt +++ b/source/reference/method/js-atlas-search.txt @@ -1,3 +1,5 @@ +.. _atlas-search-index-methods: + ========================== Atlas Search Index Methods ========================== diff --git a/source/reference/method/js-atlas-streams.txt b/source/reference/method/js-atlas-streams.txt new file mode 100644 index 00000000000..cde9e794591 --- /dev/null +++ b/source/reference/method/js-atlas-streams.txt @@ -0,0 +1,86 @@ +.. _doc-stream-methods: + +=============================== +Atlas Stream Processing Methods +=============================== + +.. default-domain:: mongodb + +.. contents:: On this page + :local: + :backlinks: none + :depth: 1 + :class: singlecol + +.. note:: ``mongosh`` Methods + + .. include:: /includes/fact-mongosh-shell-method-toc.rst + +:atlas:`Atlas Stream Processors +` +let you perform aggregation operations against streams of +continuous data using the same data model and query API that +you use with at-rest data. + +Use the following methods to manage Stream Processors + +.. important:: + + The following methods can only be run on deployments hosted on + :atlas:`MongoDB Atlas `. + +.. include:: /includes/extracts/methods-toc-explanation.rst + +.. list-table:: + :widths: 30 70 + :header-rows: 1 + + * - Name + + - Description + + * - :method:`sp.createStreamProcessor()` + + - Creates a stream processor. + + * - :method:`sp.listStreamProcessors()` + + - Lists all existing stream processors on the current stream + processing instance. + + * - :method:`sp.process()` + + - Creates an ephemeral stream processor. + + * - :method:`sp.processor.drop()` + + - Deletes an existing stream processor. + + * - :method:`sp.processor.sample()` + + - Returns an array of sampled results from a currently running stream processor. + + * - :method:`sp.processor.start()` + + - Starts an existing stream processor. + + * - :method:`sp.processor.stats()` + + - Returns statistics summarizing an existing stream processor. + + * - :method:`sp.processor.stop()` + + - Stops a currently running stream processor. + +.. toctree:: + :titlesonly: + :hidden: + + /reference/method/sp.createStreamProcessor + /reference/method/sp.listStreamProcessors + /reference/method/sp.process + /reference/method/sp.processor.drop + /reference/method/sp.processor.sample + /reference/method/sp.processor.start + /reference/method/sp.processor.stats + /reference/method/sp.processor.stop diff --git a/source/reference/method/js-bulk.txt b/source/reference/method/js-bulk.txt index c49f4e49b81..485afe83841 100644 --- a/source/reference/method/js-bulk.txt +++ b/source/reference/method/js-bulk.txt @@ -1,3 +1,5 @@ +.. 
_js-bulk-methods: + ====================== Bulk Operation Methods ====================== diff --git a/source/reference/method/js-client-side-field-level-encryption.txt b/source/reference/method/js-client-side-field-level-encryption.txt index 39c28911a9c..9b7e4207b46 100644 --- a/source/reference/method/js-client-side-field-level-encryption.txt +++ b/source/reference/method/js-client-side-field-level-encryption.txt @@ -17,7 +17,7 @@ Client-Side Field Level Encryption Methods The following methods are for :binary:`~bin.mongosh` *only*. For instructions on implementing client-side field level encryption using a MongoDB 4.2+ compatible driver, defer to the -driver documentation. See :ref:`field-level-encryption-drivers` for +driver documentation. See :ref:`csfle-driver-compatibility` for a complete list of 4.2+ compatible drivers with support for client-side field level encryption. diff --git a/source/reference/method/js-collection.txt b/source/reference/method/js-collection.txt index 86e7924f9c7..141a923edab 100644 --- a/source/reference/method/js-collection.txt +++ b/source/reference/method/js-collection.txt @@ -212,6 +212,11 @@ Collection Methods - Performs diagnostic operations on a collection. +.. seealso:: + + To manage :atlas:`{+fts+} indexes `, + see :ref:`atlas-search-index-methods`. + .. toctree:: :titlesonly: diff --git a/source/reference/method/js-connection.txt b/source/reference/method/js-connection.txt index ea4c696748d..5a4c4c1ab4f 100644 --- a/source/reference/method/js-connection.txt +++ b/source/reference/method/js-connection.txt @@ -40,6 +40,9 @@ Connection Methods * - :method:`Mongo.getReadPrefTagSet()` - Returns the read preference tag set for the MongoDB connection. + * - :method:`Mongo.getURI()` + - Returns the connection string for the current active connection. + * - :method:`Mongo.getWriteConcern` - Returns the :term:`write concern` for the connection object. @@ -78,6 +81,7 @@ Connection Methods /reference/method/Mongo.getDBs /reference/method/Mongo.getReadPrefMode /reference/method/Mongo.getReadPrefTagSet + /reference/method/Mongo.getURI /reference/method/Mongo.getWriteConcern /reference/method/Mongo.setCausalConsistency /reference/method/Mongo.setReadPref diff --git a/source/reference/method/js-constructor.txt b/source/reference/method/js-constructor.txt index 152f9120966..678d42fac85 100644 --- a/source/reference/method/js-constructor.txt +++ b/source/reference/method/js-constructor.txt @@ -32,6 +32,10 @@ Object Constructors and Methods - Returns a :ref:`binary data object `. + * - :method:`BSONRegExp()` + + - Creates a :method:`BSONRegExp()`. + * - :method:`BulkWriteResult()` - Wrapper around the result set from :method:`Bulk.execute()`. @@ -88,6 +92,7 @@ Object Constructors and Methods /reference/method/Binary.createFromBase64 /reference/method/Binary.createFromHexString /reference/method/BinData + /reference/method/BSONRegExp /reference/method/BulkWriteResult /reference/method/Date /reference/method/ObjectId diff --git a/source/reference/method/js-database.txt b/source/reference/method/js-database.txt index 5bdc6ec64e0..74081a80e22 100644 --- a/source/reference/method/js-database.txt +++ b/source/reference/method/js-database.txt @@ -136,10 +136,6 @@ Database Methods - Prints a report of the sharding configuration and the chunk ranges. - * - :method:`db.printSlaveReplicationInfo()` - - - .. include:: /includes/deprecated-db.printSlaveReplicationInfo.rst - * - :method:`db.resetError()` - *Removed in MongoDB 5.0.* Resets the last error status. 
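For the :method:`Mongo.getURI()` entry added to the connection methods table above, a minimal ``mongosh`` sketch (the connection string shown is illustrative, not taken from this patch):

.. code-block:: javascript

   // Open a connection, then return its connection string
   const conn = new Mongo("mongodb://localhost:27017")
   conn.getURI()
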
@@ -151,7 +147,7 @@ Database Methods * - :method:`db.runCommand()` - - Runs a :doc:`database command `. + - Runs a :ref:`database command `. * - :method:`db.serverBuildInfo()` diff --git a/source/reference/method/js-plan-cache.txt b/source/reference/method/js-plan-cache.txt index 875a8f0e2d2..9ebe9c3399a 100644 --- a/source/reference/method/js-plan-cache.txt +++ b/source/reference/method/js-plan-cache.txt @@ -57,9 +57,6 @@ cache object. To retrieve the plan cache object, use the through the plan cache object of a specific collection, i.e. ``db.collection.getPlanCache().list()``. - .. versionadded:: 4.4 - - .. toctree:: :titlesonly: diff --git a/source/reference/method/js-replication.txt b/source/reference/method/js-replication.txt index 2e8c02fe3f9..b1dbc46ee83 100644 --- a/source/reference/method/js-replication.txt +++ b/source/reference/method/js-replication.txt @@ -54,10 +54,6 @@ Replication Methods - Prints a formatted report of the replica set status from the perspective of the secondaries. - * - :method:`rs.printSlaveReplicationInfo()` - - - .. include:: /includes/deprecated-rs.printSlaveReplicationInfo.rst - * - :method:`rs.reconfig()` - Reconfigures a replica set by applying a new replica set configuration object. diff --git a/source/reference/method/js-sharding.txt b/source/reference/method/js-sharding.txt index f0a2db3d7a8..cf1639de98f 100644 --- a/source/reference/method/js-sharding.txt +++ b/source/reference/method/js-sharding.txt @@ -60,8 +60,6 @@ Sharding Methods - Returns information on whether the chunks of a sharded collection are balanced. - .. versionadded:: 4.4 - * - :method:`sh.checkMetadataConsistency` - Checks the cluster for inconsistent sharding metadata. diff --git a/source/reference/method/passwordPrompt.txt b/source/reference/method/passwordPrompt.txt index 6c0b6f2929d..a233a229cd6 100644 --- a/source/reference/method/passwordPrompt.txt +++ b/source/reference/method/passwordPrompt.txt @@ -52,19 +52,14 @@ Starting in MongoDB 4.2, when you run the :ref:`db-auth-syntax-username-password` command you can replace the password with the :method:`passwordPrompt()` method. -Starting in MongoDB 4.4, if you omit the password from the -:ref:`db-auth-syntax-username-password` command, the user is -prompted to enter a password. +If you omit the password from the :ref:`db-auth-syntax-username-password` +command, the user is prompted to enter a password. -Both of the following examples prompt the user to enter a password +The following example prompts the user to enter a password which is not displayed in the shell: .. code-block:: javascript - // Starting in MongoDB 4.2 - db.auth("user123", passwordPrompt()) - - // Starting in MongoDB 4.4 db.auth("user123") Use ``passwordPrompt()`` with ``db.changeUserPassword()`` diff --git a/source/reference/method/rs.add.txt b/source/reference/method/rs.add.txt index 1c0aeefb9e3..bb8d9b599f1 100644 --- a/source/reference/method/rs.add.txt +++ b/source/reference/method/rs.add.txt @@ -148,7 +148,7 @@ Add a Priority 0 Member to a Replica Set The following operation adds a :binary:`~bin.mongod` instance, running on the host ``mongodb4.example.net`` and accessible on the default port -``27017``, as a :doc:`priority 0 ` +``27017``, as a :ref:`priority 0 ` secondary member: .. 
code-block:: javascript diff --git a/source/reference/method/rs.reconfig.txt b/source/reference/method/rs.reconfig.txt index 93c68514392..ff660ff59d5 100644 --- a/source/reference/method/rs.reconfig.txt +++ b/source/reference/method/rs.reconfig.txt @@ -50,7 +50,7 @@ Definition - document - - A :doc:`document ` that specifies + - A :ref:`document ` that specifies the configuration of a replica set. * - .. _rs-reconfig-method-force: @@ -82,9 +82,6 @@ Definition replica configuration to propagate to a majority of replica set members. - .. versionadded:: 4.4 - - To reconfigure an existing replica set, first retrieve the current configuration with :method:`rs.conf()`, modify the configuration document as needed, and then pass the modified @@ -104,10 +101,8 @@ Global Write Concern ``term`` Replica Configuration Field ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -MongoDB 4.4 adds the :rsconf:`term` field to the replica set -configuration document. The :rsconf:`term` field is set by the -:term:`primary` replica set member. The primary ignores the -:rsconf:`term` field if set explicitly in the +The :rsconf:`term` field is set by the :term:`primary` replica set member. +The primary ignores the :rsconf:`term` field if set explicitly in the :method:`rs.reconfig()` operation. .. |reconfig| replace:: :method:`rs.reconfig()` diff --git a/source/reference/method/rs.reconfigForPSASet.txt b/source/reference/method/rs.reconfigForPSASet.txt index 6512006d4de..84e2a8d8635 100644 --- a/source/reference/method/rs.reconfigForPSASet.txt +++ b/source/reference/method/rs.reconfigForPSASet.txt @@ -75,7 +75,7 @@ The :method:`rs.reconfigForPSASet()` method has the following syntax: - document - - A :doc:`document ` that + - A :ref:`document ` that specifies the new configuration of a replica set. * - ``force`` diff --git a/source/reference/method/sh.abortReshardCollection.txt b/source/reference/method/sh.abortReshardCollection.txt index 7df9c11fa6b..fd3313fdc18 100644 --- a/source/reference/method/sh.abortReshardCollection.txt +++ b/source/reference/method/sh.abortReshardCollection.txt @@ -32,6 +32,18 @@ Definition .. |dbcommand| replace:: :dbcommand:`abortReshardCollection` command .. include:: /includes/fact-mongosh-shell-method-alt.rst +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst Syntax ------ diff --git a/source/reference/method/sh.addShard.txt b/source/reference/method/sh.addShard.txt index 797b128ff49..7d867c88fb5 100644 --- a/source/reference/method/sh.addShard.txt +++ b/source/reference/method/sh.addShard.txt @@ -67,6 +67,19 @@ Definition .. include:: /includes/extracts/mongos-operations-wc-add-shard.rst +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-serverless.rst + +.. 
include:: /includes/fact-environments-onprem-only.rst + Considerations -------------- diff --git a/source/reference/method/sh.addShardToZone.txt b/source/reference/method/sh.addShardToZone.txt index 0041a1acbb8..3130329cf18 100644 --- a/source/reference/method/sh.addShardToZone.txt +++ b/source/reference/method/sh.addShardToZone.txt @@ -57,6 +57,20 @@ Definition Only issue :method:`sh.addShardToZone()` when connected to a :binary:`~bin.mongos` instance. + +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-serverless.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Behavior -------- diff --git a/source/reference/method/sh.addTagRange.txt b/source/reference/method/sh.addTagRange.txt index a37bdf7977e..b4f24fe47d4 100644 --- a/source/reference/method/sh.addTagRange.txt +++ b/source/reference/method/sh.addTagRange.txt @@ -116,7 +116,7 @@ distribution, see :ref:`pre-define-zone-range-example`. Initial Chunk Distribution with Compound Hashed Shard Keys `````````````````````````````````````````````````````````` -Starting in version 4.4, MongoDB supports sharding collections on +MongoDB supports sharding collections on :ref:`compound hashed indexes `. MongoDB can perform optimized initial chunk creation and distribution when sharding the empty or non-existing collection on a compound hashed shard key. diff --git a/source/reference/method/sh.balancerCollectionStatus.txt b/source/reference/method/sh.balancerCollectionStatus.txt index f206bef0d39..222d2790585 100644 --- a/source/reference/method/sh.balancerCollectionStatus.txt +++ b/source/reference/method/sh.balancerCollectionStatus.txt @@ -15,8 +15,6 @@ Definition .. method:: sh.balancerCollectionStatus(namespace) - .. versionadded:: 4.4 - Returns a document that contains information about whether the chunks of a sharded collection are balanced (i.e. do not need to be moved) as of the time the command is run or need to be moved because @@ -26,6 +24,20 @@ Definition .. |dbcommand| replace:: :dbcommand:`balancerCollectionStatus` command .. include:: /includes/fact-mongosh-shell-method-alt.rst + +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-serverless.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ diff --git a/source/reference/method/sh.checkMetadataConsistency.txt b/source/reference/method/sh.checkMetadataConsistency.txt index 9445cd7d22d..4f8c8873aff 100644 --- a/source/reference/method/sh.checkMetadataConsistency.txt +++ b/source/reference/method/sh.checkMetadataConsistency.txt @@ -36,6 +36,18 @@ Definition which contains a document for each inconsistency found in the sharding metadata. +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. 
include:: /includes/fact-environments-onprem-only.rst Syntax ------- diff --git a/source/reference/method/sh.enableSharding.txt b/source/reference/method/sh.enableSharding.txt index dc09af7aa02..dc273305a48 100644 --- a/source/reference/method/sh.enableSharding.txt +++ b/source/reference/method/sh.enableSharding.txt @@ -31,6 +31,19 @@ Definition .. include:: /includes/fact-mongosh-shell-method-alt.rst +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-serverless.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Syntax ------ diff --git a/source/reference/method/sh.moveChunk.txt b/source/reference/method/sh.moveChunk.txt index cc6900b6717..f7c7fafb1f8 100644 --- a/source/reference/method/sh.moveChunk.txt +++ b/source/reference/method/sh.moveChunk.txt @@ -76,12 +76,11 @@ Definition By default, MongoDB cannot move a chunk if the number of documents in the chunk is greater than 1.3 times the result of dividing the configured :ref:`chunk size` by the average - document size. Starting in MongoDB 4.4, the :dbcommand:`moveChunk` - command can specify a new option :ref:`forceJumbo - ` to allow for the manual migration of chunks - too large to move, with or without the :ref:`jumbo ` - label. See :ref:`moveChunk ` command for - details. + document size. The :dbcommand:`moveChunk` command can specify the + :ref:`forceJumbo ` option to allow for the manual + migration of chunks too large to move, with or without the + :ref:`jumbo ` label. See :ref:`moveChunk ` + command for details. .. seealso:: @@ -91,6 +90,20 @@ Definition - :doc:`/sharding`, and :ref:`chunk migration ` + +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-serverless.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Example ------- diff --git a/source/reference/method/sh.removeShardFromZone.txt b/source/reference/method/sh.removeShardFromZone.txt index 8cf0b9db1bb..37472d0eea7 100644 --- a/source/reference/method/sh.removeShardFromZone.txt +++ b/source/reference/method/sh.removeShardFromZone.txt @@ -56,6 +56,19 @@ Definition Only issue :method:`sh.removeShardFromZone()` when connected to a :binary:`~bin.mongos` instance. +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-serverless.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Behavior -------- diff --git a/source/reference/method/sh.reshardCollection.txt b/source/reference/method/sh.reshardCollection.txt index 5836e332941..09cf1c24a9c 100644 --- a/source/reference/method/sh.reshardCollection.txt +++ b/source/reference/method/sh.reshardCollection.txt @@ -51,7 +51,7 @@ Definition Set the field values to either: - - ``1`` for :doc:`ranged based sharding ` + - ``1`` for :ref:`range-based sharding ` - ``"hashed"`` to specify a :ref:`hashed shard key `. @@ -111,6 +111,20 @@ The ``options`` field supports the following fields: ... ] + +Compatibility +------------- + +.. 
|command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-all.rst + +.. include:: /includes/fact-environments-onprem-only.rst + .. _resharding-process-details: Resharding Process @@ -211,6 +225,62 @@ Example output: operationTime: Timestamp(1, 1624887947) } +Reshard a Collection with Zones +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Reshard a collection with zones when you need to adjust the distribution +of data across the shards in your cluster to meet changing requirements or +to improve performance. + +In the following example, the ``test.scores`` collection resides on ``shard0`` +and ``shard1``. The current shard key is ``{ _id: 1}``. + +.. procedure:: + :style: normal + + .. step:: Add shards to a new zone + + In this example, this zone is called ``NewZone``. + + .. code-block:: javascript + + sh.addShardToZone( “shard2”, ‘NewZone’ ) + sh.addShardToZone( “shard3”, ‘NewZone’ ) + + .. step:: Run ``sh.reshardCollection`` with the new zone information + + .. code-block:: javascript + + sh.reshardCollection( + "test.scores", + { "studentId": 1, "testId": 1}, + { zones: [ { + min: { "studentId": MinKey(), "testId": MinKey() }, + max: { "studentId": MaxKey(), "testId": MaxKey() }, + zone: "NewZone" } + ] + } ) + + The resharding operation adds the shards in zone ``NewZone`` as recipients. + The database primary shard is added as a recipient as a backstop for any + missing ranges in the zone definition. If there are no missing ranges, the + collection is cloned on shards in the "NewZone", such as ``shard2`` and + ``shard3`` in this example. ``sh.reshardCollection`` returns the following: + + .. code-block:: javascript + :copyable: false + + { + ok: 1, + '$clusterTime': { + clusterTime: Timestamp( { t: 1699484530, i: 54 } ), + signature: { + hash: Binary.createFromBase64( "90ApBDrSSi4XnCpV3OWIH4OGO0Y=", 0 ), + keyId: Long( "7296989036055363606" ) + } }, + operationTime: Timestamp( { t: 1699484530, i: 54 } ) + } + .. seealso:: :ref:`sharding-resharding` diff --git a/source/reference/method/sh.shardCollection.txt b/source/reference/method/sh.shardCollection.txt index 738a090fc96..fe96177eb07 100644 --- a/source/reference/method/sh.shardCollection.txt +++ b/source/reference/method/sh.shardCollection.txt @@ -64,7 +64,7 @@ Definition Set the field value to either: - - ``1`` for :doc:`ranged based sharding ` + - ``1`` for :ref:`range-based sharding ` - ``"hashed"`` to specify a :ref:`hashed shard key `. @@ -90,8 +90,10 @@ Definition You cannot specify ``true`` when using :ref:`hashed shard keys `. - If specifying the ``options`` document, you must explicitly - specify the value for ``unique``. + For :ref:`Legacy mongo Shell `, you must explicitly + specify the value for ``unique`` if you specify the + ``options`` document. :binary:`~bin.mongosh` doesn't require + ``unique`` when you specify the ``options`` document. * - ``options`` @@ -145,9 +147,7 @@ Definition - If sharding with :ref:`presplitHashedZones: false ` or omitted and zones and zone ranges have been defined for the empty - collection, ``numInitChunks`` has no effect. - - .. versionchanged:: 4.4 + collection, ``numInitialChunks`` has no effect. * - ``collation`` @@ -186,8 +186,6 @@ Definition :ref:`requirements `. - .. 
versionadded:: 4.4 - * - :ref:`timeseries ` - document diff --git a/source/reference/method/sh.stopBalancer.txt b/source/reference/method/sh.stopBalancer.txt index 935a0fc8d26..872a3fc5a16 100644 --- a/source/reference/method/sh.stopBalancer.txt +++ b/source/reference/method/sh.stopBalancer.txt @@ -32,30 +32,30 @@ Definition .. list-table:: :header-rows: 1 - :widths: 20 20 80 + :widths: 20 20 60 * - Parameter - Type - + - Description * - ``timeout`` - integer - - - Time limit for disabling the balancer. + + - Optional. Time limit for disabling the balancer. Defaults to 60000 milliseconds. * - ``interval`` - integer - - - The interval (in milliseconds) at which to check if the balancing - round has stopped. - + - Optional. The interval (in milliseconds) at which to check if + the balancing round has stopped. + + If you omit both options, MongoDB disables the balancer indefinitely. You can only run :method:`sh.stopBalancer()` on a :binary:`~bin.mongos` instance. :method:`sh.stopBalancer()` errors diff --git a/source/reference/method/sh.updateZoneKeyRange.txt b/source/reference/method/sh.updateZoneKeyRange.txt index fa56c582da1..25bb5e1d35d 100644 --- a/source/reference/method/sh.updateZoneKeyRange.txt +++ b/source/reference/method/sh.updateZoneKeyRange.txt @@ -88,6 +88,20 @@ Definition Only issue :method:`sh.updateZoneKeyRange()` when connected to a :binary:`~bin.mongos` instance. + +Compatibility +------------- + +.. |command| replace:: method + +This method is available in deployments hosted in the following environments: + +.. include:: /includes/fact-environments-atlas-only.rst + +.. include:: /includes/fact-environments-atlas-support-no-serverless.rst + +.. include:: /includes/fact-environments-onprem-only.rst + Behavior -------- @@ -127,7 +141,7 @@ distribution, see :ref:`pre-define-zone-range-example`. Initial Chunk Distribution with Compound Hashed Shard Keys `````````````````````````````````````````````````````````` -Starting in version 4.4, MongoDB supports sharding collections on +MongoDB supports sharding collections on :ref:`compound hashed indexes `. MongoDB can perform optimized initial chunk creation and distribution when sharding the empty or non-existing collection on a compound hashed shard key. @@ -321,7 +335,7 @@ Compound Hashed Shard Key with Hashed Prefix For example, ``{ "_id" : "hashed", "facility" : 1 }`` -Starting in version 4.4, MongoDB supports sharding collections on +MongoDB supports sharding collections on :ref:`compound hashed indexes `. When sharding on a compound hashed shard key, MongoDB can perform optimized initial chunk creation and distribution on the empty or @@ -349,7 +363,7 @@ Compound Hashed Shard Key with Non-Prefix Hashed Field For example, ``{ "facility" : 1, "_id" : "hashed" }`` -Starting in version 4.4, MongoDB supports sharding collections on +MongoDB supports sharding collections on :ref:`compound hashed indexes `. When sharding on a compound hashed shard key, MongoDB can perform optimized initial chunk creation and distribution on the empty or diff --git a/source/reference/method/sp.createStreamProcessor.txt b/source/reference/method/sp.createStreamProcessor.txt new file mode 100644 index 00000000000..be04417bba7 --- /dev/null +++ b/source/reference/method/sp.createStreamProcessor.txt @@ -0,0 +1,202 @@ +========================== +sp.createStreamProcessor() +========================== + +.. default-domain:: mongodb + +.. contents:: On this page + :local: + :backlinks: none + :depth: 2 + :class: singlecol + +Definition +----------- + +.. 
method:: sp.createStreamProcessor() + +.. versionadded:: 7.0 + + Creates a :atlas:`Stream Processor + ` on + the current :atlas:`Stream Processing Instance + `. + +Syntax +----------- + +The :method:`sp.createStreamProcessor()` method has the following +syntax: + +.. code-block:: json + + sp.createStreamProcessor( + , + [ + + ], + { + + } + ) + +Command Fields +--------------------------- + +``sp.createStreamProcessor()`` takes these fields: + +.. list-table:: + :header-rows: 1 + :widths: 20 20 20 40 + + * - Field + - Type + - Necessity + - Description + + * - ``name`` + - string + - Required + - Logical name for the stream processor. This must be unique + within the stream processing instance. + + * - ``pipeline`` + - array + - Required + - :ref:`Stream aggregation pipeline ` you + want to apply to your streaming data. + + * - ``options`` + - object + - Optional + - Object defining various optional settings for your stream + processor. + + * - ``options.dlq`` + - object + - Conditional + - Object assigning a + :term:`dead letter queue` for your stream processing instance. + This field is necessary if you define the ``options`` field. + + * - ``options.dlq.connectionName`` + - string + - Conditional + - Label that identifies a connection in your + connection registry. This connection must reference an + Atlas cluster. This field is necessary if you define the + ``options.dlq`` field. + + * - ``options.dlq.db`` + - string + - Conditional + - Name of an Atlas database on the cluster specified + in ``options.dlq.connectionName``. This field is necessary if + you define the ``options.dlq`` field. + + * - ``options.dlq.coll`` + - string + - Conditional + - Name of a collection in the database specified in + ``options.dlq.db``. This field is necessary if you + define the ``options.dlq`` field. + + +Behavior +--------------- + +``sp.createStreamProcessor()`` creates a persistent, named stream +processor on the current stream processing instance. You can +initialize this stream processor with +:method:`sp.processor.start()`. If you try to create a stream +processor with the same name as an existing stream processor, +``mongosh`` will return an error. + +Access Control +------------------------ + +The user running ``sp.createStreamProcessor()`` must have the +:atlasrole:`atlasAdmin` role. + +Example +---------------- + +The following example creates a stream processor named ``solarDemo`` +which ingests data from the ``sample_stream_solar`` connection. The +processor excludes all documents where the value of the ``device_id`` +field is ``device_8``, passing the rest to a :atlas:`tumbling window +` with a 10-second +duration. Each window groups the documents it receives, then returns +various useful statistics of each group. The stream processor then +merges these records to ``solar_db.solar_coll`` over the ``mongodb1`` +connection. + +.. 
code-block:: json + :copyable: true + + sp.createStreamProcessor( + 'solarDemo', + [ + { + $source: { + connectionName: 'sample_stream_solar', + timeField: { + $dateFromString: { + dateString: '$timestamp' + } + } + } + }, + { + $match: { + $expr: { + $ne: [ + "$device_id", + "device_8" + ] + } + } + }, + { + $tumblingWindow: { + interval: { + size: NumberInt(10), + unit: "second" + }, + "pipeline": [ + { + $group: { + "_id": { "device_id": "$device_id" }, + "max_temp": { $max: "$obs.temp" }, + "max_watts": { $max: "$obs.watts" }, + "min_watts": { $min: "$obs.watts" }, + "avg_watts": { $avg: "$obs.watts" }, + "median_watts": { + $median: { + input: "$obs.watts", + method: "approximate" + } + } + } + } + ] + } + }, + { + $merge: { + into: { + connectionName: "mongodb1", + db: "solar_db", + coll: "solar_coll" + }, + on: ["_id"] + } + } + ] + ) + +Learn More +------------------ + +- :atlas:`Stream Aggregation ` +- :atlas:`Manage Stream Processors ` diff --git a/source/reference/method/sp.listStreamProcessors.txt b/source/reference/method/sp.listStreamProcessors.txt new file mode 100644 index 00000000000..b02dff05c1c --- /dev/null +++ b/source/reference/method/sp.listStreamProcessors.txt @@ -0,0 +1,198 @@ +========================= +sp.listStreamProcessors() +========================= + +.. default-domain:: mongodb + +.. contents:: On this page + :local: + :backlinks: none + :depth: 2 + :class: singlecol + +Definition +----------- + +.. method:: sp.listStreamProcessors() + +.. versionadded:: 7.0 + + Returns documents for each named + :atlas:`Stream Processor + ` on + the current :atlas:`Stream Processing Instance + `. Each + document provides descriptive information including the name, + current state, and pipeline of a stream processor. + +Syntax +----------- + +The :method:`sp.listStreamProcessors()` method has the following syntax: + +.. code-block:: json + + sp.listStreamProcessors( + { + + } + ) + + +Command Fields +--------------------------- + +``sp.listStreamProcessors()`` takes these fields: + +.. list-table:: + :header-rows: 1 + :widths: 20 20 20 40 + + * - Field + - Type + - Necessity + - Description + + * - ``filter`` + - document + - Optional + - Document specifying which fields to filter stream processors + on. If you provide a filter, the command will only return + those processors which match the values for all + the fields you specify. + +Behavior +--------------- + +``sp.listStreamProcessors()`` returns documents describing all of +the named stream processors on the current stream processing instance +to ``STDOUT``. + +Access Control +------------------------ + +The user running ``sp.listStreamProcessors()`` must have the +:atlasrole:`atlasAdmin` role. + +Example +---------------- + +The following example shows an expected response from +``sp.listStreamProcessors()`` when the command is called without any +filter: + +.. io-code-block:: + :copyable: true + + .. input:: + :language: sh + + sp.listStreamProcessors() + + .. 
output:: + :language: json + :linenos: + + { + id: '0135', + name: "proc01", + last_modified: ISODate("2023-03-20T20:15:54.601Z"), + state: "RUNNING", + error_msg: '', + pipeline: [ + { + $source: { + connectionName: "myKafka", + topic: "stuff" + } + }, + { + $match: { + temperature: 46 + } + }, + { + $emit: { + connectionName: "mySink", + topic: "output", + } + } + ], + lastStateChange: ISODate("2023-03-20T20:15:59.442Z") + }, + { + id: '0218', + name: "proc02", + last_modified: ISODate("2023-03-21T20:17:33.601Z"), + state: "STOPPED", + error_msg: '', + pipeline: [ + { + $source: { + connectionName: "myKafka", + topic: "things" + } + }, + { + $match: { + temperature: 41 + } + }, + { + $emit: { + connectionName: "mySink", + topic: "results", + } + } + ], + lastStateChange: ISODate("2023-03-21T20:18:26.139Z") + } + +The following example shows an expected response if you invoke +``sp.listStreamProcessors()`` filtering for only those stream +processors with a ``state`` of ``running``. + +.. io-code-block:: + :copyable: true + + .. input:: + :language: sh + + sp.listStreamProcessors({"state": "running"}) + + .. output:: + :language: json + :linenos: + + { + id: '0135', + name: "proc01", + last_modified: ISODate("2023-03-20T20:15:54.601Z"), + state: "RUNNING", + error_msg: '', + pipeline: [ + { + $source: { + connectionName: "myKafka", + topic: "stuff" + } + }, + { + $match: { + temperature: 46 + } + }, + { + $emit: { + connectionName: "mySink", + topic: "output", + } + } + ], + lastStateChange: ISODate("2023-03-20T20:15:59.442Z") + } + +Learn More +------------------ + +- :atlas:`Manage Stream Processors ` diff --git a/source/reference/method/sp.process.txt b/source/reference/method/sp.process.txt new file mode 100644 index 00000000000..c763d32d2da --- /dev/null +++ b/source/reference/method/sp.process.txt @@ -0,0 +1,198 @@ +============ +sp.process() +============ + +.. default-domain:: mongodb + +.. contents:: On this page + :local: + :backlinks: none + :depth: 2 + :class: singlecol + +Definition +----------- + +.. method:: sp.process() + +.. versionadded:: 7.0 + + Creates an ephemeral :atlas:`Stream Processor + ` on + the current :atlas:`Stream Processing Instance + `. + +Syntax +----------- + +The :method:`sp.process()` method has the following +syntax: + +.. code-block:: json + + sp.process( + [ + + ], + { + + } + ) + +Command Fields +--------------------------- + +``sp.createStreamProcessor()`` takes these fields: + +.. list-table:: + :header-rows: 1 + :widths: 20 20 20 40 + + * - Field + - Type + - Necessity + - Description + + * - ``name`` + - string + - Required + - Logical name for the stream processor. This must be unique + within the stream processing instance. + + * - ``pipeline`` + - array + - Required + - :ref:`Stream aggregation pipeline ` you + want to apply to your streaming data. + + * - ``options`` + - object + - Optional + - Object defining various optional settings for your stream + processor. + + * - ``options.dlq`` + - object + - Conditional + - Object assigning a + :term:`dead letter queue` for your stream processing instance. + This field is necessary if you define the ``options`` field. + + * - ``options.dlq.connectionName`` + - string + - Conditional + - Label that identifies a connection in your + connection registry. This connection must reference an + Atlas cluster. This field is necessary if you define the + ``options.dlq`` field. 
+ + * - ``options.dlq.db`` + - string + - Conditional + - Name of an Atlas database on the cluster specified + in ``options.dlq.connectionName``. This field is necessary if + you define the ``options.dlq`` field. + + * - ``options.dlq.coll`` + - string + - Conditional + - Name of a collection in the database specified in + ``options.dlq.db``. This field is necessary if you + define the ``options.dlq`` field. + +Behavior +--------------- + +``sp.process()`` creates an ephemeral, unnamed stream +processor on the current stream processing instance and immediately +initializes it. This stream processor only persists as long as it +runs. If you terminate an ephemeral stream processor, you must create +it again in order to use it. + +Access Control +------------------------ + +The user running ``sp.process()`` must have the +:atlasrole:`atlasAdmin` role. + +Example +---------------- + +The following example creates an ephemeral stream processor +which ingests data from the ``sample_stream_solar`` connection. The +processor excludes all documents where the value of the ``device_id`` +field is ``device_8``, passing the rest to a :atlas:`tumbling window +` with a 10-second +duration. Each window groups the documents it receives, then returns +various useful statistics of each group. The stream processor then +merges these records to ``solar_db.solar_coll`` over the ``mongodb1`` +connection. + +.. code-block:: json + :copyable: true + + sp.process( + [ + { + $source: { + connectionName: 'sample_stream_solar', + timeField: { + $dateFromString: { + dateString: '$timestamp' + } + } + } + }, + { + $match: { + $expr: { + $ne: [ + "$device_id", + "device_8" + ] + } + } + }, + { + $tumblingWindow: { + interval: { + size: NumberInt(10), + unit: "second" + }, + "pipeline": [ + { + $group: { + "_id": { "device_id": "$device_id" }, + "max_temp": { $max: "$obs.temp" }, + "max_watts": { $max: "$obs.watts" }, + "min_watts": { $min: "$obs.watts" }, + "avg_watts": { $avg: "$obs.watts" }, + "median_watts": { + $median: { + input: "$obs.watts", + method: "approximate" + } + } + } + } + ] + } + }, + { + $merge: { + into: { + connectionName: "mongodb1", + db: "solar_db", + coll: "solar_coll" + }, + on: ["_id"] + } + } + ] + ) + +Learn More +------------------ + +- :atlas:`Stream Aggregation ` +- :atlas:`Manage Stream Processors ` diff --git a/source/reference/method/sp.processor.drop.txt b/source/reference/method/sp.processor.drop.txt new file mode 100644 index 00000000000..d8b33d2049a --- /dev/null +++ b/source/reference/method/sp.processor.drop.txt @@ -0,0 +1,69 @@ +=================== +sp.processor.drop() +=================== + +.. default-domain:: mongodb + +.. contents:: On this page + :local: + :backlinks: none + :depth: 2 + :class: singlecol + +Definition +----------- + +.. method:: sp.processor.drop() + +.. versionadded:: 7.0 + + Deletes a named + :atlas:`Stream Processor + ` from + the current :atlas:`Stream Processing Instance + `. + +Syntax +----------- + +The :method:`sp.processor.drop()` method has the following +syntax: + +.. code-block:: json + + sp.processor.drop() + +Command Fields +--------------------------- + +``sp.processor.drop()`` takes no fields. + +Behavior +--------------- + +``sp.processor.drop()`` deletes the given named stream processor +from the current stream processing instance. If you invoke this +command on a currently running stream processor, it stops that +processor before deleting it. 
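For example, assuming a stream processor named ``solarDemo`` like the one
used elsewhere on this page, a minimal sketch of this behavior is:

.. code-block:: javascript
   :copyable: false

   // Confirm that the processor exists; its state can be RUNNING or STOPPED
   sp.listStreamProcessors( { name: "solarDemo" } )

   // Delete it; a running processor is stopped automatically before deletion
   sp.solarDemo.drop()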
+ +Access Control +------------------------ + +The user running ``sp.processor.drop()`` must have the +:atlasrole:`atlasAdmin` role. + +Example +---------------- + +The following example stops a stream processor named ``solarDemo`` + +.. code-block:: + :copyable: true + + sp.solarDemo.drop() + + +Learn More +------------------ + +- :atlas:`Manage Stream Processors ` diff --git a/source/reference/method/sp.processor.sample.txt b/source/reference/method/sp.processor.sample.txt new file mode 100644 index 00000000000..322731be0b7 --- /dev/null +++ b/source/reference/method/sp.processor.sample.txt @@ -0,0 +1,131 @@ +===================== +sp.processor.sample() +===================== + +.. default-domain:: mongodb + +.. contents:: On this page + :local: + :backlinks: none + :depth: 2 + :class: singlecol + +Definition +----------- + +.. method:: sp.processor.sample() + +.. versionadded:: 7.0 + + Returns arrays of sampled results from a currently running + :atlas:`Stream Processor + ` on + the current :atlas:`Stream Processing Instance + `. + +Syntax +----------- + +The :method:`sp.processor.sample()` method has the following syntax: + +.. code-block:: json + + sp.processor.sample() + +Command Fields +--------------------------- + +``sp.processor.sample()`` takes no fields. + +Behavior +--------------- + +``sp.processor.sample()`` returns arrays of sampled results +from the named, currently running stream processor to ``STDOUT``. This +command runs continuously until you cancel it using ``CTRL-C``, or until +the returned samples cumulatively reach ``40 MB``. + +Access Control +------------------------ + +The user running ``sp.processor.sample()`` must have the +:atlasrole:`atlasAdmin` role. + +Example +---------------- + +The following example shows an expected response from calling ``sp.solarDemo.sample()`` +to sample from a stream processor called ``solarDemo``: + +.. io-code-block:: + :copyable: true + + .. input:: + :language: sh + + sp.solarDemo.sample() + + .. output:: + :language: json + + { + _id: { + device_id: 'device_5' + }, + max_temp: 8, + max_watts: 66, + min_watts: 66, + avg_watts: 66, + median_watts: 66, + _stream_meta: { + windowStartTimestamp: ISODate('2024-03-19T22:09:10.000Z'), + windowEndTimestamp: ISODate('2024-03-19T22:09:20.000Z') + } + } + { + _id: { + device_id: 'device_0' + }, + max_temp: 18, + max_watts: 210, + min_watts: 68, + avg_watts: 157, + median_watts: 193, + _stream_meta: { + windowStartTimestamp: ISODate('2024-03-19T22:09:10.000Z'), + windowEndTimestamp: ISODate('2024-03-19T22:09:20.000Z') + } + } + { + _id: { + device_id: 'device_10' + }, + max_temp: 21, + max_watts: 128, + min_watts: 4, + avg_watts: 66, + median_watts: 4, + _stream_meta: { + windowStartTimestamp: ISODate('2024-03-19T22:09:10.000Z'), + windowEndTimestamp: ISODate('2024-03-19T22:09:20.000Z') + } + } + { + _id: { + device_id: 'device_9' + }, + max_temp: 10, + max_watts: 227, + min_watts: 66, + avg_watts: 131.4, + median_watts: 108, + _stream_meta: { + windowStartTimestamp: ISODate('2024-03-19T22:09:10.000Z'), + windowEndTimestamp: ISODate('2024-03-19T22:09:20.000Z') + } + } + +Learn More +------------------ + +- :atlas:`Manage Stream Processors ` diff --git a/source/reference/method/sp.processor.start.txt b/source/reference/method/sp.processor.start.txt new file mode 100644 index 00000000000..7c7c9ebf82c --- /dev/null +++ b/source/reference/method/sp.processor.start.txt @@ -0,0 +1,67 @@ +==================== +sp.processor.start() +==================== + +.. default-domain:: mongodb + +.. 
contents:: On this page + :local: + :backlinks: none + :depth: 2 + :class: singlecol + +Definition +----------- + +.. method:: sp.processor.start() + +.. versionadded:: 7.0 + + Starts a named + :atlas:`Stream Processor + ` on + the current :atlas:`Stream Processing Instance + `. + +Syntax +----------- + +The :method:`sp.processor.start()` method has the following syntax: + +.. code-block:: json + + sp.processor.start() + + +Command Fields +--------------------------- + +``sp.processor.start()`` takes no fields. + +Behavior +--------------- + +``sp.processor.start()`` starts a named stream processor on the +current stream processing instance. The stream processor must be in a +``STOPPED`` state. If you invoke ``sp.processor.start()`` for a +stream processor that is not ``STOPPED``, ``mongosh`` will return an error. + +Access Control +------------------------ + +The user running ``sp.processor.start()`` must have the +:atlasrole:`atlasAdmin` role. + +Example +---------------- + +The following example starts a stream processor named ``solarDemo``. + +.. code-block:: sh + + sp.solarDemo.start() + +Learn More +------------------ + +- :atlas:`Manage Stream Processors ` diff --git a/source/reference/method/sp.processor.stats.txt b/source/reference/method/sp.processor.stats.txt new file mode 100644 index 00000000000..b5c8dd0d586 --- /dev/null +++ b/source/reference/method/sp.processor.stats.txt @@ -0,0 +1,170 @@ +==================== +sp.processor.stats() +==================== + +.. default-domain:: mongodb + +.. contents:: On this page + :local: + :backlinks: none + :depth: 2 + :class: singlecol + +Definition +----------- + +.. method:: sp.processor.stats() + +.. versionadded:: 7.0 + + Returns a document containing statistics of a currently running + :atlas:`Stream Processor + ` on + the current :atlas:`Stream Processing Instance + `. + +Syntax +----------- + +The :method:`sp.processor.stats()` method has the following syntax: + +.. code-block:: json + + sp.processor.stats() + +Command Fields +-------------- + +``sp.processor.stats()`` takes these fields: + +.. list-table:: + :header-rows: 1 + :widths: 20 20 20 40 + + * - Field + - Type + - Necessity + - Description + + * - ``options`` + - object + - Optional + - Object defining various optional settings for your + statistics report. + + * - ``options.scale`` + - integer + - Optional + - Unit to use for the size of items described in the + output. If set to ``1024``, the output document shows sizes in + kibibytes. Defaults to bytes. + + * - ``verbose`` + - boolean + - Optional + - Flag that specifies the verbosity level of the output + document. If set to ``true``, the output document contains a + subdocument that reports the statistics of each individual + operator in your pipeline. Defaults to false. + +Behavior +--------------- + +``sp.processor.stats()`` returns a document containing statistics about +the specified stream processor to ``STDOUT``. These statistics include +but are not limited to: + +- The number of messages ingested and processed +- The total size of all input and output +- The amount of memory used to store processor state + +You can only invoke ``sp.processor.stats()`` on a currently running +stream processor. If you try to invoke this command on a stopped stream +processor, ``mongosh`` will return an error. + +Access Control +------------------------ + +The user running ``sp.processor.stats()`` must have the +:atlasrole:`atlasAdmin` role. 
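For example, assuming the ``solarDemo`` processor used elsewhere in these
pages is currently stopped, a minimal sketch is to start it and then request
its statistics; calling ``stats()`` while the processor is stopped returns an
error:

.. code-block:: javascript
   :copyable: false

   // Start the stopped processor so that statistics are available
   sp.solarDemo.start()

   // Report cumulative statistics; sizes are shown in bytes unless you
   // apply a scale, as described in the fields table above
   sp.solarDemo.stats()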
+ +Example +---------------- + +The following example shows an expected response from calling +``sp.solarDemo.stats()`` to get the statistics of a stream processor +called ``solarDemo``: + +.. io-code-block:: + :copyable: true + + .. input:: + :language: sh + + sp.solarDemo.stats() + + .. output:: + :language: json + + { + ok: 1, + ns: '6500aa277fdbdb6e443a992e.63c1928d768e39423386aa16.solarDemo', + stats: { + name: 'solarDemo', + processorId: '65f9fea5c5154385174af71e', + status: 'running', + scaleFactor: Long('1'), + inputMessageCount: Long('926'), + inputMessageSize: 410310, + outputMessageCount: Long('383'), + outputMessageSize: 425513, + dlqMessageCount: Long('0'), + dlqMessageSize: Long('0'), + stateSize: Long('4504'), + watermark: ISODate('2024-03-19T22:16:49.523Z'), + ok: 1 + }, + pipeline: [ + { + '$source': { + connectionName: 'sample_stream_solar', + timeField: { '$dateFromString': { dateString: '$timestamp' } } + } + }, + { + '$match': { '$expr': { '$ne': [ '$device_id', 'device_8' ] } } + }, + { + '$tumblingWindow': { + interval: { size: 10, unit: 'second' }, + pipeline: [ + { + '$group': { + _id: [Object], + max_temp: [Object], + max_watts: [Object], + min_watts: [Object], + avg_watts: [Object], + median_watts: [Object] + } + } + ] + } + }, + { + '$merge': { + into: { + connectionName: 'mongodb1', + db: 'solar_db', + coll: 'solar_coll' + }, + on: [ '_id' ] + } + } + ] + } + +Learn More +------------------ + +- :atlas:`Manage Stream Processors ` diff --git a/source/reference/method/sp.processor.stop.txt b/source/reference/method/sp.processor.stop.txt new file mode 100644 index 00000000000..d64eb33a057 --- /dev/null +++ b/source/reference/method/sp.processor.stop.txt @@ -0,0 +1,68 @@ +=================== +sp.processor.stop() +=================== + +.. default-domain:: mongodb + +.. contents:: On this page + :local: + :backlinks: none + :depth: 2 + :class: singlecol + +Definition +----------- + +.. method:: sp.processor.stop() + +.. versionadded:: 7.0 + + Stops a named + :atlas:`Stream Processor + ` on + the current :atlas:`Stream Processing Instance + `. + +Syntax +----------- + +The :method:`sp.processor.stop()` method has the following syntax: + +.. code-block:: sh + + sp.processor.stop() + + +Command Fields +--------------------------- + +``sp.processor.stop()`` takes no fields. + +Behavior +--------------- + +``sp.processor.stop()`` stops a named stream processor on the +current stream processing instance. The stream processor must be in a +``running`` state. If you invoke ``sp.processor.stop()`` for a +stream processor that is not ``running``, ``mongosh`` will return an error. + +Access Control +------------------------ + +The user running ``sp.processor.stop()`` must have the +:atlasrole:`atlasAdmin` role. + +Example +---------------- + +The following example stops a stream processor named ``solarDemo``. + +.. code-block:: + :copyable: true + + sp.solarDemo.stop() + +Learn More +------------------ + +- :atlas:`Manage Stream Processors ` diff --git a/source/reference/mongodb-defaults.txt b/source/reference/mongodb-defaults.txt index e58878deebd..33b7634b09c 100644 --- a/source/reference/mongodb-defaults.txt +++ b/source/reference/mongodb-defaults.txt @@ -29,7 +29,7 @@ Read Concern Default Read Concern ~~~~~~~~~~~~~~~~~~~~ -The :red:`default` :doc:`read concern ` is as +The :red:`default` :ref:`read concern ` is as follows: .. list-table:: @@ -86,7 +86,7 @@ Specify Read Concern: MongoDB Drivers Transactions`` tab. 
Using the :driver:`MongoDB drivers `, you can override the default - :doc:`read concern ` and set read concern for operations + :ref:`read concern ` and set read concern for operations at the following levels: .. list-table:: diff --git a/source/reference/mongodb-extended-json.txt b/source/reference/mongodb-extended-json.txt index 1edf21e351d..d55cfe3ff7c 100644 --- a/source/reference/mongodb-extended-json.txt +++ b/source/reference/mongodb-extended-json.txt @@ -585,15 +585,9 @@ Where the values are as follows: - ``""`` - - A string that specifies BSON regular expression options ('g', 'i', - 'm' and 's') or an empty string ``""``. - - - Options other than ('g', 'i', 'm' and 's') will be dropped when - converting to this representation. - - .. important:: - - The options MUST be in alphabetical order. + - A string that specifies BSON regular expression options. You must specify + the options in alphabetical order. For information on the supported options, + see :query:`$options`. .. bsontype:: Timestamp @@ -703,6 +697,10 @@ Type Representations - {"$timestamp":{"t":1565545664,"i":1}} - {"$timestamp":{"t":1565545664,"i":1}} + * - "uuid": + - {"$uuid":"3b241101-e2bb-4255-8caf-4136c566a962"} + - {"$uuid":"3b241101-e2bb-4255-8caf-4136c566a962"} + .. _ex-obj-conversions: Extended JSON Object Conversions diff --git a/source/reference/operator.txt b/source/reference/operator.txt index d7b287cddd3..541e0aed8cf 100644 --- a/source/reference/operator.txt +++ b/source/reference/operator.txt @@ -6,6 +6,9 @@ Operators .. default-domain:: mongodb +.. meta:: + :description: Contains links to MongoDB query and aggregation operators. + .. contents:: On this page :local: :backlinks: none diff --git a/source/reference/operator/aggregation-pipeline.txt b/source/reference/operator/aggregation-pipeline.txt index 55d450b6fdb..026f7d2035f 100644 --- a/source/reference/operator/aggregation-pipeline.txt +++ b/source/reference/operator/aggregation-pipeline.txt @@ -6,6 +6,9 @@ Aggregation Stages .. default-domain:: mongodb +.. meta:: + :description: Contains a list of aggregation stages used to build aggregation pipelines. + .. contents:: On this page :local: :backlinks: none @@ -44,7 +47,7 @@ times in a pipeline. .. include:: /includes/extracts/agg-stages-db.collection.aggregate.rst For aggregation expression operators to use in the pipeline stages, see -:ref:`aggregation-pipeline-operator-reference`. +:ref:`aggregation-pipeline-operators`. db.aggregate() Stages --------------------- diff --git a/source/reference/operator/aggregation.txt b/source/reference/operator/aggregation.txt index 47a7834e304..e92af2f8151 100644 --- a/source/reference/operator/aggregation.txt +++ b/source/reference/operator/aggregation.txt @@ -11,6 +11,9 @@ Aggregation Operators .. default-domain:: mongodb +.. meta:: + :description: Contains a list of aggregation operators to use in aggregation stages. + .. 
contents:: On this page :local: :backlinks: none @@ -243,7 +246,6 @@ Window Operators /reference/operator/aggregation/filter /reference/operator/aggregation/first /reference/operator/aggregation/firstN - /reference/operator/aggregation/firstN-array-element /reference/operator/aggregation/floor /reference/operator/aggregation/function /reference/operator/aggregation/getField @@ -263,7 +265,6 @@ Window Operators /reference/operator/aggregation/isoWeekYear /reference/operator/aggregation/last /reference/operator/aggregation/lastN - /reference/operator/aggregation/lastN-array-element /reference/operator/aggregation/let /reference/operator/aggregation/linearFill /reference/operator/aggregation/literal @@ -342,6 +343,7 @@ Window Operators /reference/operator/aggregation/toDate /reference/operator/aggregation/toDecimal /reference/operator/aggregation/toDouble + /reference/operator/aggregation/toHashedIndexKey /reference/operator/aggregation/toInt /reference/operator/aggregation/toLong /reference/operator/aggregation/toObjectId diff --git a/source/reference/operator/aggregation/accumulator.txt b/source/reference/operator/aggregation/accumulator.txt index 296d39732de..c474de0f73b 100644 --- a/source/reference/operator/aggregation/accumulator.txt +++ b/source/reference/operator/aggregation/accumulator.txt @@ -15,8 +15,6 @@ Definition .. group:: $accumulator - .. versionadded:: 4.4 - Defines a custom :ref:`accumulator operator `. Accumulators are operators that maintain their state (e.g. totals, maximums, minimums, and related @@ -248,7 +246,13 @@ may need to merge two separate, intermediate states. The :ref:`merge ` function specifies how the operator should merge two states. -For example, :group:`$accumulator` may need to combine two states when: +The :ref:`merge ` function always merges two +states at a time. In the event that more than two states must be merged, +the resulting merge of two states is merged with a single state. This +process repeats until all states are merged. + +For example, :group:`$accumulator` may need to combine two states in the +following scenarios: - :group:`$accumulator` is run on a sharded cluster. The operator needs to merge the results from each shard to obtain the final @@ -261,16 +265,64 @@ For example, :group:`$accumulator` may need to combine two states when: Once the operation finishes, the results from disk and memory are merged together using the :ref:`merge ` function. - .. seealso:: +Document Processing Order +~~~~~~~~~~~~~~~~~~~~~~~~~ - :ref:`group-memory-limit` +The order that MongoDB processes documents for the ``init()``, +``accumulate()``, and ``merge()`` functions can vary, and might differ +from the order that those documents are specified to the +``$accumulator`` function. -.. note:: +For example, consider a series of documents where the ``_id`` fields are +the letters of the alphabet: + +.. code-block:: javascript + :copyable: false + + { _id: 'a' }, + { _id: 'b' }, + { _id: 'c' } + ... + { _id: 'z' } + +Next, consider an aggregation pipeline that sorts the documents by the +``_id`` field and then uses an ``$accumulator`` function to concatenate +the ``_id`` field values: + +.. 
code-block:: javascript + + [ + { + $sort: { _id: 1 } + }, + { + $group: { + _id: null, + alphabet: { + $accumulator: { + init: function() { + return "" + }, + accumulate: function(state, letter) { + return(state + letter) + }, + accumulateArgs: [ "$_id" ], + merge: function(state1, state2) { + return(state1 + state2) + }, + lang: "js" + } + } + } + } + ] + +MongoDB does not guarantee that the documents are processed in the +sorted order, meaning the ``alphabet`` field does not necessarily get +set to ``abc...z``. - The :ref:`merge ` function always merges two - states at a time. In the event that more than two states must be - merged, the resulting merge of two states is merged with a single - state. This process repeats until all states are merged. +Due to this behavior, ensure that your ``$accumulator`` function does +not need to process and return documents in a specific order. Javascript Enabled ~~~~~~~~~~~~~~~~~~ @@ -288,8 +340,7 @@ scripting: - For a :binary:`~bin.mongos` instance, see :setting:`security.javascriptEnabled` configuration option or the - :option:`--noscripting ` command-line option - starting in MongoDB 4.4. + :option:`--noscripting ` command-line option. | In earlier versions, MongoDB does not allow JavaScript execution on :binary:`~bin.mongos` instances. diff --git a/source/reference/operator/aggregation/addFields.txt b/source/reference/operator/aggregation/addFields.txt index 265803b8d93..37f4f791d3c 100644 --- a/source/reference/operator/aggregation/addFields.txt +++ b/source/reference/operator/aggregation/addFields.txt @@ -8,6 +8,9 @@ $addFields (aggregation) :name: programming_language :values: shell +.. meta:: + :description: Learn how to use an aggregation stage to add new fields to your documents. + .. contents:: On this page :local: :backlinks: none @@ -90,20 +93,22 @@ A collection called ``scores`` contains the following documents: .. code-block:: javascript - { - _id: 1, - student: "Maya", - homework: [ 10, 5, 10 ], - quiz: [ 10, 8 ], - extraCredit: 0 - } - { - _id: 2, - student: "Ryan", - homework: [ 5, 6, 5 ], - quiz: [ 8, 8 ], - extraCredit: 8 - } + db.scores.insertMany( [ + { + _id: 1, + student: "Maya", + homework: [ 10, 5, 10 ], + quiz: [ 10, 8 ], + extraCredit: 0 + }, + { + _id: 2, + student: "Ryan", + homework: [ 5, 6, 5 ], + quiz: [ 8, 8 ], + extraCredit: 8 + } + ] ) The following operation uses two :pipeline:`$addFields` stages to include three new fields in the output documents: @@ -126,27 +131,30 @@ include three new fields in the output documents: The operation returns the following documents: .. code-block:: javascript + :copyable: false - { - "_id" : 1, - "student" : "Maya", - "homework" : [ 10, 5, 10 ], - "quiz" : [ 10, 8 ], - "extraCredit" : 0, - "totalHomework" : 25, - "totalQuiz" : 18, - "totalScore" : 43 - } - { - "_id" : 2, - "student" : "Ryan", - "homework" : [ 5, 6, 5 ], - "quiz" : [ 8, 8 ], - "extraCredit" : 8, - "totalHomework" : 16, - "totalQuiz" : 16, - "totalScore" : 40 - } + [ + { + _id: 1, + student: "Maya", + homework: [ 10, 5, 10 ], + quiz: [ 10, 8 ], + extraCredit: 0, + totalHomework: 25, + totalQuiz: 18, + totalScore: 43 + }, + { + _id: 2, + student: "Ryan", + homework: [ 5, 6, 5 ], + quiz: [ 8, 8 ], + extraCredit: 8, + totalHomework: 16, + totalQuiz: 16, + totalScore: 40 + } + ] Adding Fields to an Embedded Document ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -160,13 +168,11 @@ the following documents: .. 
code-block:: javascript - db.vehicles.insertMany( - [ + db.vehicles.insertMany( [ { _id: 1, type: "car", specs: { doors: 4, wheels: 4 } }, { _id: 2, type: "motorcycle", specs: { doors: 0, wheels: 2 } }, { _id: 3, type: "jet ski" } - ] - ) + ] ) The following aggregation operation adds a new field ``fuel_type`` to the embedded document ``specs``. @@ -174,23 +180,22 @@ the embedded document ``specs``. .. code-block:: javascript db.vehicles.aggregate( [ - { - $addFields: { - "specs.fuel_type": "unleaded" - } - } - ] ) + { $addFields: { "specs.fuel_type": "unleaded" } } + ] ) The operation returns the following results: .. code-block:: javascript + :copyable: false - { _id: 1, type: "car", - specs: { doors: 4, wheels: 4, fuel_type: "unleaded" } } - { _id: 2, type: "motorcycle", - specs: { doors: 0, wheels: 2, fuel_type: "unleaded" } } - { _id: 3, type: "jet ski", - specs: { fuel_type: "unleaded" } } + [ + { _id: 1, type: "car", + specs: { doors: 4, wheels: 4, fuel_type: "unleaded" } }, + { _id: 2, type: "motorcycle", + specs: { doors: 0, wheels: 2, fuel_type: "unleaded" } }, + { _id: 3, type: "jet ski", + specs: { fuel_type: "unleaded" } } + ] Overwriting an existing field ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -202,7 +207,9 @@ A collection called ``animals`` contains the following document: .. code-block:: javascript - { _id: 1, dogs: 10, cats: 15 } + db.animals.insertOne( + { _id: 1, dogs: 10, cats: 15 } + ) The following ``$addFields`` operation specifies the ``cats`` field. @@ -210,15 +217,16 @@ The following ``$addFields`` operation specifies the ``cats`` field. db.animals.aggregate( [ { - $addFields: { "cats": 20 } + $addFields: { cats: 20 } } ] ) The operation returns the following document: .. code-block:: javascript + :copyable: false - { _id: 1, dogs: 10, cats: 20 } + [ { _id: 1, dogs: 10, cats: 20 } ] It is possible to replace one field with another. In the following example the ``item`` field substitutes for the ``_id`` field. @@ -227,9 +235,11 @@ A collection called ``fruit`` contains the following documents: .. code-block:: javascript - { "_id" : 1, "item" : "tangerine", "type" : "citrus" } - { "_id" : 2, "item" : "lemon", "type" : "citrus" } - { "_id" : 3, "item" : "grapefruit", "type" : "citrus" } + db.fruit.insertMany( [ + { _id: 1, item: "tangerine", type: "citrus" }, + { _id: 2, item: "lemon", type: "citrus" }, + { _id: 3, item: "grapefruit", type: "citrus" } + ] ) The following aggregration operation uses ``$addFields`` to replace the ``_id`` field of each document with the value of the ``item`` @@ -249,10 +259,13 @@ field, and replaces the ``item`` field with a static value. The operation returns the following: .. code-block:: javascript + :copyable: false - { "_id" : "tangerine", "item" : "fruit", "type" : "citrus" } - { "_id" : "lemon", "item" : "fruit", "type" : "citrus" } - { "_id" : "grapefruit", "item" : "fruit", "type" : "citrus" } + [ + { _id: "tangerine", item: "fruit", type: "citrus" }, + { _id: "lemon", item: "fruit", type: "citrus" }, + { _id: "grapefruit", item: "fruit", type: "citrus" } + ] .. _addFields-add-element-to-array: @@ -263,10 +276,10 @@ Create a sample ``scores`` collection with the following: .. 
code-block:: javascript - db.scores.insertMany([ + db.scores.insertMany( [ { _id: 1, student: "Maya", homework: [ 10, 5, 10 ], quiz: [ 10, 8 ], extraCredit: 0 }, { _id: 2, student: "Ryan", homework: [ 5, 6, 5 ], quiz: [ 8, 8 ], extraCredit: 8 } - ]) + ] ) You can use :pipeline:`$addFields` with a :expression:`$concatArrays` expression to add an element to an existing array field. For example, @@ -277,14 +290,14 @@ score ``[ 7 ]``. .. code-block:: javascript - db.scores.aggregate([ + db.scores.aggregate( [ { $match: { _id: 1 } }, { $addFields: { homework: { $concatArrays: [ "$homework", [ 7 ] ] } } } - ]) + ] ) The operation returns the following: .. code-block:: javascript :copyable: false - { "_id" : 1, "student" : "Maya", "homework" : [ 10, 5, 10, 7 ], "quiz" : [ 10, 8 ], "extraCredit" : 0 } + [ { _id: 1, student: "Maya", homework: [ 10, 5, 10, 7 ], quiz: [ 10, 8 ], extraCredit: 0 } ] diff --git a/source/reference/operator/aggregation/anyElementTrue.txt b/source/reference/operator/aggregation/anyElementTrue.txt index dc5e3faa0db..bb25ef3bccf 100644 --- a/source/reference/operator/aggregation/anyElementTrue.txt +++ b/source/reference/operator/aggregation/anyElementTrue.txt @@ -86,25 +86,41 @@ The following operation uses the :expression:`$anyElementTrue` operator to determine if the ``responses`` array contains any value that evaluates to ``true``: -.. code-block:: javascript +.. io-code-block:: + :copyable: true + + .. input:: + :language: javascript + + db.survey.aggregate( + [ + { $project: { responses: 1, isAnyTrue: { $anyElementTrue: [ "$responses" ] }, _id: 1 } } + ] + ) + + .. output:: + :language: javascript - db.survey.aggregate( [ - { $project: { responses: 1, isAnyTrue: { $anyElementTrue: [ "$responses" ] }, _id: 0 } } + { _id: 1, responses: [ true ], isAnyTrue: true }, + { _id: 2, responses: [ true, false ], isAnyTrue: true }, + { _id: 3, responses: [], isAnyTrue: false }, + { _id: 4, responses: [ 1, true, 'seven' ], isAnyTrue: true }, + { _id: 5, responses: [ 0 ], isAnyTrue: false }, + { _id: 6, responses: [ [] ], isAnyTrue: true }, + { _id: 7, responses: [ [ 0 ] ], isAnyTrue: true }, + { _id: 8, responses: [ [ false ] ], isAnyTrue: true }, + { _id: 9, responses: [ null ], isAnyTrue: false }, + { _id: 10, responses: [ null ], isAnyTrue: false } ] - ) - -The operation returns the following results: -.. code-block:: javascript +In the results: - { "responses" : [ true ], "isAnyTrue" : true } - { "responses" : [ true, false ], "isAnyTrue" : true } - { "responses" : [ ], "isAnyTrue" : false } - { "responses" : [ 1, true, "seven" ], "isAnyTrue" : true } - { "responses" : [ 0 ], "isAnyTrue" : false } - { "responses" : [ [ ] ], "isAnyTrue" : true } - { "responses" : [ [ 0 ] ], "isAnyTrue" : true } - { "responses" : [ [ false ] ], "isAnyTrue" : true } - { "responses" : [ null ], "isAnyTrue" : false } - { "responses" : [ undefined ], "isAnyTrue" : false } +- Document with ``_id: 1`` is ``true`` because the element inside the + ``responses`` array evaluates as ``true``. +- Documents with ``_id: 2`` and ``_id: 4`` are ``true`` because at least + one element inside the ``responses`` array evaluates as ``true``. +- Documents with ``_id: 6``, ``_id: 7``, and ``_id: 8`` are ``true`` + because the ``responses`` array, which is the array that + ``$anyElementTrue`` evaluated for the operation, contains a nested + array, which ``$anyElementTrue`` always evaluates as ``true``. 
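If you instead want the contents of a nested array to drive the result, one
possible approach (a sketch that is not part of the original example) is to
evaluate the inner array itself, for example with :expression:`$arrayElemAt`:

.. code-block:: javascript
   :copyable: false

   // For { _id: 8, responses: [ [ false ] ] }, this evaluates the inner
   // array [ false ] rather than the outer array, so the result is false
   db.survey.aggregate( [
      { $match: { _id: 8 } },
      { $project: {
         innerAnyTrue: {
            $anyElementTrue: [ { $arrayElemAt: [ "$responses", 0 ] } ]
         }
      } }
   ] )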
diff --git a/source/reference/operator/aggregation/arrayElemAt.txt b/source/reference/operator/aggregation/arrayElemAt.txt index 38c0adb1c71..35eac729c80 100644 --- a/source/reference/operator/aggregation/arrayElemAt.txt +++ b/source/reference/operator/aggregation/arrayElemAt.txt @@ -8,6 +8,9 @@ $arrayElemAt (aggregation) :name: programming_language :values: shell +.. meta:: + :description: Learn how to use an aggregation operator to return an array element at a specific index. + .. contents:: On this page :local: :backlinks: none diff --git a/source/reference/operator/aggregation/binarySize.txt b/source/reference/operator/aggregation/binarySize.txt index fd67064d5ae..4901271465d 100644 --- a/source/reference/operator/aggregation/binarySize.txt +++ b/source/reference/operator/aggregation/binarySize.txt @@ -15,8 +15,6 @@ Definition .. expression:: $binarySize - .. versionadded:: 4.4 - Returns the size of a given string or binary data value's content in bytes. diff --git a/source/reference/operator/aggregation/bitAnd.txt b/source/reference/operator/aggregation/bitAnd.txt index 666bf61de62..690aacb4fbd 100644 --- a/source/reference/operator/aggregation/bitAnd.txt +++ b/source/reference/operator/aggregation/bitAnd.txt @@ -109,9 +109,9 @@ The operation returns the following results: :copyable: false [ - { _id: 0, result: Long("0") } - { _id: 1, result: Long("2") } - { _id: 2, result: Long("3") } + { _id: 0, result: NumberLong("0") } + { _id: 1, result: NumberLong("2") } + { _id: 2, result: NumberLong("3") } ] Learn More diff --git a/source/reference/operator/aggregation/bsonSize.txt b/source/reference/operator/aggregation/bsonSize.txt index 201c9ba7571..b2a4c73af79 100644 --- a/source/reference/operator/aggregation/bsonSize.txt +++ b/source/reference/operator/aggregation/bsonSize.txt @@ -15,8 +15,6 @@ Definition .. expression:: $bsonSize - .. versionadded:: 4.4 - Returns the size in bytes of a given document (i.e. bsontype ``Object``) when encoded as :term:`BSON`. You can use :expression:`$bsonSize` as an alternative to the diff --git a/source/reference/operator/aggregation/changeStreamSplitLargeEvent.txt b/source/reference/operator/aggregation/changeStreamSplitLargeEvent.txt index fa371de5bd5..c7fe2503c19 100644 --- a/source/reference/operator/aggregation/changeStreamSplitLargeEvent.txt +++ b/source/reference/operator/aggregation/changeStreamSplitLargeEvent.txt @@ -13,12 +13,12 @@ Definition .. pipeline:: $changeStreamSplitLargeEvent -.. versionadded:: 7.0 (*Also available in 6.0.9*) +*New in MongoDB 7.0 (and 6.0.9).* If a :ref:`change stream ` has large events that exceed 16 MB, a ``BSONObjectTooLarge`` exception is returned. Starting in -MongoDB 6.0.9, you can use a ``$changeStreamSplitLargeEvent`` stage to -split the events into smaller fragments. +MongoDB 7.0 (and 6.0.9), you can use a ``$changeStreamSplitLargeEvent`` +stage to split the events into smaller fragments. You should only use ``$changeStreamSplitLargeEvent`` when strictly necessary. For example, if your application requires full document pre- diff --git a/source/reference/operator/aggregation/collStats.txt b/source/reference/operator/aggregation/collStats.txt index fc86c5c316b..cbabfeb2972 100644 --- a/source/reference/operator/aggregation/collStats.txt +++ b/source/reference/operator/aggregation/collStats.txt @@ -93,8 +93,6 @@ Definition - Adds :ref:`query execution statistics ` to the return document. - .. 
versionadded:: 4.4 - For a collection in a replica set or a :ref:`non-sharded collection` in a cluster, ``$collStats`` outputs a single document. For a @@ -282,7 +280,7 @@ This query returns a result similar to the following: "count" : 1104369, "avgObjSize" : 550, "storageSize" : 352878592, - "freeStorageSize" : 2490380, // Starting in MongoDB 4.4 + "freeStorageSize" : 2490380, "capped" : false, "wiredTiger" : { ... @@ -295,7 +293,7 @@ This query returns a result similar to the following: "_id_1_abc_1" ], "totalIndexSize" : 260337664, - "totalSize" : 613216256, // Starting in MongoDB 4.4 + "totalSize" : 613216256, "indexSizes" : { "_id_" : 9891840, "_id_1_abc_1" : 250445824 @@ -359,8 +357,6 @@ information, see :ref:`storage-stats-document`. ``queryExecStats`` Document ~~~~~~~~~~~~~~~~~~~~~~~~~~~ -.. versionadded:: 4.4 - The ``queryExecStats`` embedded document only exists in the output if you specify the ``queryExecStats`` option. diff --git a/source/reference/operator/aggregation/concatArrays.txt b/source/reference/operator/aggregation/concatArrays.txt index 1bb0d0bac6f..e31a5fdd557 100644 --- a/source/reference/operator/aggregation/concatArrays.txt +++ b/source/reference/operator/aggregation/concatArrays.txt @@ -69,30 +69,33 @@ Behavior Example ------- -A collection named ``warehouses`` contains the following documents: +Create a collection named ``warehouses`` with the following documents: .. code-block:: javascript - { "_id" : 1, instock: [ "chocolate" ], ordered: [ "butter", "apples" ] } - { "_id" : 2, instock: [ "apples", "pudding", "pie" ] } - { "_id" : 3, instock: [ "pears", "pecans"], ordered: [ "cherries" ] } - { "_id" : 4, instock: [ "ice cream" ], ordered: [ ] } + db.warehouses.insertMany( [ + { _id : 1, instock: [ "chocolate" ], ordered: [ "butter", "apples" ] }, + { _id : 2, instock: [ "apples", "pudding", "pie" ] }, + { _id : 3, instock: [ "pears", "pecans" ], ordered: [ "cherries" ] }, + { _id : 4, instock: [ "ice cream" ], ordered: [ ] } + ] ) The following example concatenates the ``instock`` and the ``ordered`` arrays: .. code-block:: javascript - db.warehouses.aggregate([ + db.warehouses.aggregate( [ { $project: { items: { $concatArrays: [ "$instock", "$ordered" ] } } } - ]) + ] ) .. code-block:: javascript + :copyable: false - { "_id" : 1, "items" : [ "chocolate", "butter", "apples" ] } - { "_id" : 2, "items" : null } - { "_id" : 3, "items" : [ "pears", "pecans", "cherries" ] } - { "_id" : 4, "items" : [ "ice cream" ] } + { _id : 1, items : [ "chocolate", "butter", "apples" ] } + { _id : 2, items : null } + { _id : 3, items : [ "pears", "pecans", "cherries" ] } + { _id : 4, items : [ "ice cream" ] } .. seealso:: diff --git a/source/reference/operator/aggregation/cond.txt b/source/reference/operator/aggregation/cond.txt index 91412537259..9fdf26ff660 100644 --- a/source/reference/operator/aggregation/cond.txt +++ b/source/reference/operator/aggregation/cond.txt @@ -8,6 +8,9 @@ $cond (aggregation) :name: programming_language :values: shell +.. meta:: + :description: Learn how to use an aggreagation operator to return an expression based on a condition. + .. contents:: On this page :local: :backlinks: none diff --git a/source/reference/operator/aggregation/convert.txt b/source/reference/operator/aggregation/convert.txt index 381c107a3e1..8f6bb57c912 100644 --- a/source/reference/operator/aggregation/convert.txt +++ b/source/reference/operator/aggregation/convert.txt @@ -8,6 +8,9 @@ $convert (aggregation) :name: programming_language :values: shell +.. 
meta:: + :description: Learn how to use an aggregation operator to convert a value to a specified data type. + .. contents:: On this page :local: :backlinks: none @@ -161,43 +164,9 @@ Converting to a Boolean The following table lists the input types that can be converted to a boolean: -.. list-table:: - :header-rows: 1 - :widths: 55 50 - - * - Input Type - - Behavior - - * - Boolean - - No-op. Returns the boolean value. - - * - Double - - | Returns true if not zero. - | Return false if zero. - - * - Decimal - - | Returns true if not zero. - | Return false if zero. - - * - Integer - - | Returns true if not zero. - | Return false if zero. +.. |null-description| replace:: Returns the value specified for the ``onNull`` option. By default, returns null. - * - Long - - | Returns true if not zero. - | Return false if zero. - - * - ObjectId - - | Returns true. - - * - String - - | Returns true. - - * - Date - - | Returns true. - - * - Timestamp - - | Returns true. +.. include:: /includes/aggregation/convert-to-bool-table.rst The following table lists some conversion to boolean examples: @@ -1242,4 +1211,3 @@ The operation returns the following documents: These examples use :binary:`mongosh`. The default types are different in the legacy :binary:`mongo` shell. - diff --git a/source/reference/operator/aggregation/count.txt b/source/reference/operator/aggregation/count.txt index 7445b2354dc..6a8633fd52b 100644 --- a/source/reference/operator/aggregation/count.txt +++ b/source/reference/operator/aggregation/count.txt @@ -8,6 +8,9 @@ $count (aggregation) :name: programming_language :values: shell +.. meta:: + :description: Learn how to use an aggregation stage to count documents input to the stage. + .. contents:: On this page :local: :backlinks: none diff --git a/source/reference/operator/aggregation/currentOp.txt b/source/reference/operator/aggregation/currentOp.txt index 35c6c0c32ca..ad26e902b49 100644 --- a/source/reference/operator/aggregation/currentOp.txt +++ b/source/reference/operator/aggregation/currentOp.txt @@ -1373,8 +1373,6 @@ relevant for the operation: "dataThroughputLastSecond" : 15.576952934265137, "dataThroughputAverage" : 15.375944137573242, - .. versionadded:: 4.4 - .. data:: $currentOp.dataThroughputAverage The average amount of data (in MiB) processed by the @@ -1394,8 +1392,6 @@ relevant for the operation: "dataThroughputLastSecond" : 15.576952934265137, "dataThroughputAverage" : 15.375944137573242, - .. versionadded:: 4.4 - .. data:: $currentOp.locks The :data:`~$currentOp.locks` document reports the type and mode of diff --git a/source/reference/operator/aggregation/dateFromParts.txt b/source/reference/operator/aggregation/dateFromParts.txt index ce5d927f2ba..f18c06b6e1b 100644 --- a/source/reference/operator/aggregation/dateFromParts.txt +++ b/source/reference/operator/aggregation/dateFromParts.txt @@ -194,9 +194,7 @@ Definition .. |outofrange-4.4| replace:: If the number specified is outside this range, - :expression:`$dateFromParts` errors. Starting in MongoDB 4.4, the - lower bound for this value is ``1``. In previous versions of MongoDB, - the lower bound was ``0``. + :expression:`$dateFromParts` errors. The lower bound for this value is ``1``. Behavior -------- @@ -206,10 +204,7 @@ Behavior Value Range ~~~~~~~~~~~ -Starting in MongoDB 4.4, the supported value range for ``year`` and -``isoWeekYear`` is ``1-9999``. In prior versions of MongoDB, the lower -bound for these values was ``0`` and the supported value range was -``0-9999``. 
+The supported value range for ``year`` and ``isoWeekYear`` is ``1-9999``. If the value specified for fields other than ``year``, ``isoWeekYear``, and ``timezone`` is outside the valid range, :expression:`$dateFromParts` diff --git a/source/reference/operator/aggregation/dateToString.txt b/source/reference/operator/aggregation/dateToString.txt index 3c530003023..5c3c0b24350 100644 --- a/source/reference/operator/aggregation/dateToString.txt +++ b/source/reference/operator/aggregation/dateToString.txt @@ -8,6 +8,9 @@ $dateToString (aggregation) :name: programming_language :values: shell +.. meta:: + :description: Learn how to use an aggregation operator to convert a date object to a string. + .. contents:: On this page :local: :backlinks: none diff --git a/source/reference/operator/aggregation/documents.txt b/source/reference/operator/aggregation/documents.txt index 18cc359feab..2b9548d7997 100644 --- a/source/reference/operator/aggregation/documents.txt +++ b/source/reference/operator/aggregation/documents.txt @@ -15,7 +15,7 @@ Definition .. pipeline:: $documents - .. versionchanged:: 5.1 + .. versionadded:: 5.1 (*Also available in 5.0.3*) Returns literal documents from input values. diff --git a/source/reference/operator/aggregation/facet.txt b/source/reference/operator/aggregation/facet.txt index e750e1229c4..c3508c187c1 100644 --- a/source/reference/operator/aggregation/facet.txt +++ b/source/reference/operator/aggregation/facet.txt @@ -8,6 +8,9 @@ $facet (aggregation) :name: programming_language :values: shell +.. meta:: + :description: Learn how to use the $facet aggregation stage to process multiple aggregation pipelines in a single stage. + .. contents:: On this page :local: :backlinks: none diff --git a/source/reference/operator/aggregation/filter.txt b/source/reference/operator/aggregation/filter.txt index 406bdfabb54..c0a5679050c 100644 --- a/source/reference/operator/aggregation/filter.txt +++ b/source/reference/operator/aggregation/filter.txt @@ -8,10 +8,13 @@ $filter (aggregation) :name: programming_language :values: shell +.. meta:: + :description: Learn how to use an aggregation operator to return a subset of an array based on a specified condition. + .. contents:: On this page :local: :backlinks: none - :depth: 1 + :depth: 2 :class: singlecol Definition @@ -41,8 +44,8 @@ Syntax $filter: { input: , - cond: , as: , + cond: , limit: } } @@ -59,6 +62,13 @@ Syntax - An :ref:`expression ` that resolves to an array. + * - ``as`` + + - Optional. A name for the :doc:`variable + ` that represents each + individual element of the ``input`` array. If no name is + specified, the variable name defaults to ``this``. + * - ``cond`` - An :ref:`expression ` that resolves @@ -67,13 +77,6 @@ Syntax element of the ``input`` array individually with the variable name specified in ``as``. - * - ``as`` - - - Optional. A name for the :doc:`variable - ` that represents each - individual element of the ``input`` array. If no name is - specified, the variable name defaults to ``this``. - * - ``limit`` - Optional. A number expression that restricts the number of matching array elements that :expression:`$filter` returns. You cannot @@ -105,10 +108,7 @@ Behavior $filter: { input: [ 1, "a", 2, null, 3.1, NumberLong(4), "5" ], as: "num", - cond: { $and: [ - { $gte: [ "$$num", NumberLong("-9223372036854775807") ] }, - { $lte: [ "$$num", NumberLong("9223372036854775807") ] } - ] } + cond: { $isNumber: "$$num" } } } @@ -116,16 +116,13 @@ Behavior * - .. 
code-block:: javascript :copyable: false - :emphasize-lines: 9 + :emphasize-lines: 6 { $filter: { input: [ 1, "a", 2, null, 3.1, NumberLong(4), "5" ], as: "num", - cond: { $and:[ - { $gte: [ "$$num", NumberLong("-9223372036854775807") ] }, - { $lte: [ "$$num", NumberLong("9223372036854775807") ] } - ] }, + cond: { $isNumber: "$$num" }, limit: 2 } } @@ -134,17 +131,14 @@ Behavior * - .. code-block:: javascript :copyable: false - :emphasize-lines: 9 + :emphasize-lines: 6 { $filter: { input: [ 1, "a", 2, null, 3.1, NumberLong(4), "5" ], as: "num", - cond: { $and:[ - { $gte: [ "$$num", NumberLong("-9223372036854775807") ] }, - { $lte: [ "$$num", NumberLong("9223372036854775807") ] } - ] }, - limit: { $add: [ 0, 1 ]} + cond: { $isNumber: "$$num" }, + limit: { $add: [ 0, 1 ] } } } @@ -161,204 +155,255 @@ A collection ``sales`` has the following documents: { _id: 0, items: [ - { item_id: 43, quantity: 2, price: 10 }, - { item_id: 2, quantity: 1, price: 240 } + { item_id: 43, quantity: 2, price: 10, name: "pen" }, + { item_id: 2, quantity: 1, price: 240, name: "briefcase" } ] }, { _id: 1, items: [ - { item_id: 23, quantity: 3, price: 110 }, - { item_id: 103, quantity: 4, price: 5 }, - { item_id: 38, quantity: 1, price: 300 } + { item_id: 23, quantity: 3, price: 110, name: "notebook" }, + { item_id: 103, quantity: 4, price: 5, name: "pen" }, + { item_id: 38, quantity: 1, price: 300, name: "printer" } ] }, { _id: 2, items: [ - { item_id: 4, quantity: 1, price: 23 } + { item_id: 4, quantity: 1, price: 23, name: "paper" } ] } ] ) +Filter Based on Number Comparison +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + The following example filters the ``items`` array to only include documents that have a ``price`` greater than or equal to ``100``: -.. code-block:: javascript - - db.sales.aggregate( [ - { - $project: { - items: { - $filter: { - input: "$items", - as: "item", - cond: { $gte: [ "$$item.price", 100 ] } +.. io-code-block:: + :copyable: true + + .. input:: + :language: javascript + + db.sales.aggregate( [ + { + $project: { + items: { + $filter: { + input: "$items", + as: "item", + cond: { $gte: [ "$$item.price", 100 ] } + } } } } - } - ] ) - -The operation produces the following results: - -.. code-block:: javascript - - { - "_id" : 0, - "items" : [ - { "item_id" : 2, "quantity" : 1, "price" : 240 } + ] ) + + .. output:: + :language: javascript + + [ + { + _id: 0, + items: [ { item_id: 2, quantity: 1, price: 240, name: 'briefcase' } ] + }, + { + _id: 1, + items: [ + { item_id: 23, quantity: 3, price: 110, name: 'notebook' }, + { item_id: 38, quantity: 1, price: 300, name: 'printer' } + ] + }, + { _id: 2, items: [] } ] - } - { - "_id" : 1, - "items" : [ - { "item_id" : 23, "quantity" : 3, "price" : 110 }, - { "item_id" : 38, "quantity" : 1, "price" : 300 } - ] - } - { "_id" : 2, "items" : [ ] } -Using the ``limit`` field -~~~~~~~~~~~~~~~~~~~~~~~~~ +Use the limit Field +~~~~~~~~~~~~~~~~~~~ This example uses the ``sales`` collection from the previous example. -The example uses the ``limit`` field to specifiy the number of matching elements -returned in each ``items`` array. - -.. code-block:: javascript - :emphasize-lines: 9 - - db.sales.aggregate( [ - { - $project: { - items: { - $filter: { - input: "$items", - cond: { $gte: [ "$$item.price", 100 ] }, - as: "item", - limit: 1 +The example uses the ``limit`` field to specify the number of matching +elements returned in each ``items`` array. + +.. io-code-block:: + :copyable: true + + .. 
input:: + :language: javascript + :emphasize-lines: 9 + + db.sales.aggregate( [ + { + $project: { + items: { + $filter: { + input: "$items", + as: "item", + cond: { $gte: [ "$$item.price", 100 ] }, + limit: 1 + } } } } - } - ] ) - -The operation produces the following results: - -.. code-block:: javascript - - { - "_id" : 0, - "items" : [ - { "item_id" : 2, "quantity" : 1, "price" : 240 } - ] - } - { - "_id" : 1, - "items" : [ - { "item_id" : 23, "quantity" : 3, "price" : 110 } + ] ) + + .. output:: + :language: javascript + + [ + { + _id: 0, + items: [ { item_id: 2, quantity: 1, price: 240, name: 'briefcase' } ] + }, + { + _id: 1, + items: [ { item_id: 23, quantity: 3, price: 110, name: 'notebook' } ] + }, + { _id: 2, items: [] } ] - } - { "_id" : 2, "items" : [ ] } -``limit`` as a Numeric Expression -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +limit Greater than Possible Matches +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This example uses the ``sales`` collection from the previous example. -The following example uses a numeric expression for the ``limit`` field to -specifiy the number of matching elements returned in each ``items`` array. - -.. code-block:: javascript - :emphasize-lines: 9 - - db.sales.aggregate( [ - { - $project: { - items: { - $filter: { - input: "$items", - cond: { $lte: [ "$$item.price", 150] }, - as: "item", - limit: 2.000 +The example uses a ``limit`` field value that is larger than the +possible number of matching elements that can be returned. In this case, +``limit`` does not affect the query results and returns all documents +matching the ``$gte`` filter criteria. + +.. io-code-block:: + :copyable: true + + .. input:: + :language: javascript + :emphasize-lines: 9 + + db.sales.aggregate( [ + { + $project: { + items: { + $filter: { + input: "$items", + as: "item", + cond: { $gte: [ "$$item.price", 100] }, + limit: 5 + } } } } - } - ] ) - -The operation produces the following results: - -.. code-block:: javascript - - { - "_id": 0, - "items": [ - { "item_id": 43, "quantity": 2, "price": 10 } - ] - }, - { - "_id": 1, - "items": [ - { "item_id": 23, "quantity": 3, "price": 110 }, - { "item_id": 103, "quantity": 4, "price": 5 } - ] - }, - { - "_id": 2, - "items": [ - { "item_id": 4, "quantity": 1, "price": 23 } - ] - } + ] ) + + .. output:: + :language: javascript + + [ + { + _id: 0, + items: [ { item_id: 2, quantity: 1, price: 240, name: 'briefcase' } ] + }, + { + _id: 1, + items: [ + { item_id: 23, quantity: 3, price: 110, name: 'notebook' }, + { item_id: 38, quantity: 1, price: 300, name: 'printer' } + ] + }, + { _id: 2, items: [] } + ] -``limit`` Greater than Possible Matches -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Filter Based on String Equality Match +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This example uses the ``sales`` collection from the previous example. -The example uses a ``limit`` field value that is larger than the possible -number of matching elements that can be returned. +The following aggregation filters for ``items`` that have a ``name`` +value of ``pen``. -.. code-block:: javascript - :emphasize-lines: 9 +.. io-code-block:: + :copyable: true - db.sales.aggregate( [ - { - $project: { - items: { - $filter: { - input: "$items", - cond: { $gte: [ "$$item.price", 100] }, - as: "item", - limit: 5 + .. input:: + :language: javascript + + db.sales.aggregate( [ + { + $project: { + items: { + $filter: { + input: "$items", + as: "item", + cond: { $eq: [ "$$item.name", "pen"] } + } } } } - } - ] ) + ] ) + + .. 
output:: + :language: javascript + + [ + { + _id: 0, + items: [ { item_id: 43, quantity: 2, price: 10, name: 'pen' } ] + }, + { + _id: 1, + items: [ { item_id: 103, quantity: 4, price: 5, name: 'pen' } ] + }, + { _id: 2, items: [] } + ] -The operation produces the following results: +Filter Based on Regular Expression Match +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -.. code-block:: javascript +This example uses the ``sales`` collection from the previous example. - [ - { - "_id": 0, - "items": [ - { "item_id": 2, "quantity": 1, "price": 240 } - ] - }, - { - "_id": 1, - "items": [ - { "item_id": 23, "quantity": 3, "price": 110 }, - { "item_id": 38, "quantity": 1, "price": 300 } - ] - }, - { - "_id": 2, - "items": [] - } - ] +The following aggregation uses :expression:`$regexMatch` to filter for +``items`` that have a ``name`` value that starts with ``p``: + +.. io-code-block:: + :copyable: true + + .. input:: + :language: javascript + + db.sales.aggregate( [ + { + $project: { + items: { + $filter: { + input: "$items", + as: "item", + cond: { + $regexMatch: { input: "$$item.name", regex: /^p/ } + } + } + } + } + } + ] ) + + .. output:: + :language: javascript + + [ + { + _id: 0, + items: [ { item_id: 43, quantity: 2, price: 10, name: 'pen' } ] + }, + { + _id: 1, + items: [ + { item_id: 103, quantity: 4, price: 5, name: 'pen' }, + { item_id: 38, quantity: 1, price: 300, name: 'printer' } + ] + }, + { + _id: 2, + items: [ { item_id: 4, quantity: 1, price: 23, name: 'paper' } ] + } + ] diff --git a/source/reference/operator/aggregation/firstN-array-element.txt b/source/reference/operator/aggregation/firstN-array-element.txt deleted file mode 100644 index a940da08783..00000000000 --- a/source/reference/operator/aggregation/firstN-array-element.txt +++ /dev/null @@ -1,135 +0,0 @@ -======================== -$firstN (array operator) -======================== - -.. default-domain:: mongodb - -.. contents:: On this page - :local: - :backlinks: none - :depth: 1 - :class: singlecol - -Definition ----------- - -.. expression:: $firstN - - .. versionadded:: 5.2 - - Returns a specified number of elements from the beginning of an - array. - -.. seealso:: - - - :expression:`$lastN` - - - :expression:`$sortArray` - -Syntax ------- - -:expression:`$firstN` has the following syntax: - -.. code-block:: javascript - - { $firstN: { n: , input: } } - -.. list-table:: - :header-rows: 1 - :class: border-table - - * - Field - - Description - - * - ``n`` - - An :ref:`expression ` that resolves to a - positive integer. The integer specifies the number of array elements - that :expression:`$firstN` returns. - - * - ``input`` - - An :ref:`expression ` that resolves to the - array from which to return ``n`` elements. - -Behavior --------- - -- :expression:`$firstN` returns elements in the same order they appear in - the input array. - -- :expression:`$firstN` does not filter out ``null`` values in the input - array. - -- You cannot specify a value of ``n`` less than ``1``. - -- If the specified ``n`` is greater than or equal to the number of elements - in the ``input`` array, :expression:`$firstN` returns the ``input`` array. - -- If ``input`` resolves to a non-array value, the aggregation operation - errors. - -Example -------- - -The collection ``games`` has the following documents: - -.. 
code-block:: javascript - :copyable: true - - db.games.insertMany([ - { "playerId" : 1, "score" : [ 1, 2, 3 ] }, - { "playerId" : 2, "score" : [ 12, 90, 7, 89, 8 ] }, - { "playerId" : 3, "score" : [ null ] }, - { "playerId" : 4, "score" : [ ] }, - { "playerId" : 5, "score" : [ 1293, null, 3489, 9 ]}, - { "playerId" : 6, "score" : [ "12.1", 2, NumberLong("2090845886852"), 23 ]} - ]) - -The following example uses the :expression:`$firstN` operator to retrieve the -first three scores for each player. The scores are returned in the new field -``firstScores`` created by :pipeline:`$addFields`. - -.. code-block:: javascript - :copyable: true - - db.games.aggregate([ - { $addFields: { firstScores: { $firstN: { n: 3, input: "$score" } } } } - ]) - -The operation returns the following results: - -.. code-block:: javascript - :copyable: true - :emphasize-lines: 4, 9, 14, 19, 24, 29 - - [{ - "playerId": 1, - "score": [ 1, 2, 3 ], - "firstScores": [ 1, 2, 3 ] - }, - { - "playerId": 2, - "score": [ 12, 90, 7, 89, 8 ], - "firstScores": [ 12, 90, 7 ] - }, - { - "playerId": 3, - "score": [ null ], - "firstScores": [ null ] - }, - { - "playerId": 4, - "score": [ ], - "firstScores": [ ] - }, - { - "playerId": 5, - "score": [ 1293, null, 3489, 9 ], - "firstScores": [ 1293, null, 3489 ] - }, - { - "playerId": 6, - "score": [ "12.1", 2, NumberLong("2090845886852"), 23 ], - "firstScores": [ "12.1", 2, NumberLong("2090845886852") ] - }] - diff --git a/source/reference/operator/aggregation/firstN.txt b/source/reference/operator/aggregation/firstN.txt index 99783a2c702..83bd7811253 100644 --- a/source/reference/operator/aggregation/firstN.txt +++ b/source/reference/operator/aggregation/firstN.txt @@ -1,29 +1,37 @@ -================================= -$firstN (aggregation accumulator) -================================= +======= +$firstN +======= .. default-domain:: mongodb .. contents:: On this page :local: :backlinks: none - :depth: 1 + :depth: 2 :class: singlecol Definition ---------- -.. group:: $firstN +.. versionadded:: 5.2 + + +``$firstN`` can be used as an aggregation accumulator or array operator. As +an aggregation accumulator, it returns an aggregation of the first ``n`` +elements within a group. As an array operator, it returns the +specified number of elements from the beginning of an array. - .. versionadded:: 5.2 +Aggregation Accumulator +----------------------- +.. group:: $firstN - Returns an aggregation of the first ``n`` elements within a group. - The elements returned are meaningful only if in a specified sort order. - If the group contains fewer than ``n`` elements, ``$firstN`` - returns all elements in the group. +When ``$firstN`` is used as an aggregation accumulator, the elements returned +are meaningful only if they are in a specified sort order. If the group contains +fewer than ``n`` elements, ``$firstN`` returns all elements in the group. Syntax ------- +~~~~~~ +When used as an aggregation accumulator, ``$firstN`` has the following syntax: .. code-block:: none :copyable: false @@ -43,10 +51,10 @@ Syntax For details see :ref:`group key example`. Behavior --------- +~~~~~~~~ Null and Missing Values -~~~~~~~~~~~~~~~~~~~~~~~ +``````````````````````` - ``$firstN`` does not filter out null values. - ``$firstN`` converts missing values to null. 
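A minimal sketch of the null and missing value behavior described above, using a hypothetical ``nullAndMissingScores`` collection (the collection, field names, and values are illustrative only; without a ``$sort`` stage the document order is not guaranteed):

.. code-block:: javascript

   db.nullAndMissingScores.insertMany( [
      { _id: 1, gameId: "G1", score: 8 },
      { _id: 2, gameId: "G1", score: null },
      { _id: 3, gameId: "G1" }  // the score field is missing
   ] )

   db.nullAndMissingScores.aggregate( [
      {
         $group: {
            _id: "$gameId",
            firstThreeScores: { $firstN: { input: "$score", n: 3 } }
         }
      }
   ] )

The null value is kept and the missing ``score`` field is converted to ``null``, so the expected result is ``[ { _id: "G1", firstThreeScores: [ 8, null, null ] } ]``.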
@@ -107,7 +115,7 @@ In this example: ] Comparison of ``$firstN`` and ``$topN`` Accumulators -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +```````````````````````````````````````````````````` Both ``$firstN`` and ``$topN`` accumulators can accomplish similar results. @@ -121,31 +129,19 @@ In general: - ``$firstN`` can be used as an aggregation expression, ``$topN`` cannot. Restrictions ------------- +~~~~~~~~~~~~ Window Function and Aggregation Expression Support -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +`````````````````````````````````````````````````` ``$firstN`` is supported as an :ref:`aggregation expression `. -For details on aggregation expression usage see -:ref:`Using $firstN as an Aggregation Expression -`. - ``$firstN`` is supported as a :pipeline:`window operator <$setWindowFields>`. -Memory Limit Considerations -~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Aggregation pipelines which call ``$firstN`` are subject to the -:ref:`100 MB limit `. If this -limit is exceeded for an individual group, the aggregation fails -with an error. - Examples --------- +~~~~~~~~ Consider a ``gamescores`` collection with the following documents: @@ -163,7 +159,7 @@ Consider a ``gamescores`` collection with the following documents: ]) Find the First Three Player Scores for a Single Game -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +```````````````````````````````````````````````````` You can use the ``$firstN`` accumulator to find the first three scores in a single game. @@ -214,7 +210,7 @@ The operation returns the following results: ] Finding the First Three Player Scores Across Multiple Games -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +``````````````````````````````````````````````````````````` You can use the ``$firstN`` accumulator to find the first ``n`` input fields in each game. @@ -262,7 +258,7 @@ The operation returns the following results: ] Using ``$sort`` With ``$firstN`` -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +```````````````````````````````` Using a ``$sort`` stage earlier in the pipeline can influence the results of the ``$firstN`` accumulator. @@ -311,7 +307,7 @@ The operation returns the following results: .. _first-n-with-group-key: Computing ``n`` Based on the Group Key for ``$group`` -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +````````````````````````````````````````````````````` You can also assign the value of ``n`` dynamically. In this example, the :expression:`$cond` expression is used on the ``gameId`` field. @@ -357,7 +353,7 @@ The operation returns the following results: .. _firstN-aggregation-expression: Using ``$firstN`` as an Aggregation Expression -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +`````````````````````````````````````````````` You can also use ``$firstN`` as an aggregation expression. @@ -399,3 +395,121 @@ The operation returns the following results: [ { firstThreeElements: [ 10, 20, 30 ] } ] + +Array Operator +-------------- + +.. expression:: $firstN + + +Syntax +~~~~~~ + +When used as an array operator, ``$firstN`` has the following syntax: + +.. code-block:: javascript + + { $firstN: { n: , input: } } + +.. list-table:: + :header-rows: 1 + :class: border-table + + * - Field + - Description + + * - ``n`` + - An :ref:`expression ` that resolves to a + positive integer. The integer specifies the number of array elements + that :expression:`$firstN` returns. + + * - ``input`` + - An :ref:`expression ` that resolves to the + array from which to return ``n`` elements. 
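Both ``n`` and ``input`` accept expressions, so ``n`` does not have to be a literal. A minimal sketch, assuming a hypothetical ``playlists`` collection in which each document stores how many tracks to preview:

.. code-block:: javascript

   db.playlists.insertMany( [
      { _id: 1, previewCount: 2, trackIds: [ 11, 12, 13, 14 ] },
      { _id: 2, previewCount: 1, trackIds: [ 21, 22 ] }
   ] )

   db.playlists.aggregate( [
      {
         $project: {
            previewTracks: { $firstN: { n: "$previewCount", input: "$trackIds" } }
         }
      }
   ] )

The expected result is ``[ { _id: 1, previewTracks: [ 11, 12 ] }, { _id: 2, previewTracks: [ 21 ] } ]``.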
+ +Behavior +~~~~~~~~ + +- :expression:`$firstN` returns elements in the same order they appear in + the input array. + +- :expression:`$firstN` does not filter out ``null`` values in the input + array. + +- You cannot specify a value of ``n`` less than ``1``. + +- If the specified ``n`` is greater than or equal to the number of elements + in the ``input`` array, :expression:`$firstN` returns the ``input`` array. + +- If ``input`` resolves to a non-array value, the aggregation operation + errors. + +Example +~~~~~~~ + +The collection ``games`` has the following documents: + +.. code-block:: javascript + :copyable: true + + db.games.insertMany([ + { "playerId" : 1, "score" : [ 1, 2, 3 ] }, + { "playerId" : 2, "score" : [ 12, 90, 7, 89, 8 ] }, + { "playerId" : 3, "score" : [ null ] }, + { "playerId" : 4, "score" : [ ] }, + { "playerId" : 5, "score" : [ 1293, null, 3489, 9 ]}, + { "playerId" : 6, "score" : [ "12.1", 2, NumberLong("2090845886852"), 23 ]} + ]) + +The following example uses the :expression:`$firstN` operator to retrieve the +first three scores for each player. The scores are returned in the new field +``firstScores`` created by :pipeline:`$addFields`. + +.. code-block:: javascript + :copyable: true + + db.games.aggregate([ + { $addFields: { firstScores: { $firstN: { n: 3, input: "$score" } } } } + ]) + +The operation returns the following results: + +.. code-block:: javascript + :copyable: true + :emphasize-lines: 4, 9, 14, 19, 24, 29 + + [{ + "playerId": 1, + "score": [ 1, 2, 3 ], + "firstScores": [ 1, 2, 3 ] + }, + { + "playerId": 2, + "score": [ 12, 90, 7, 89, 8 ], + "firstScores": [ 12, 90, 7 ] + }, + { + "playerId": 3, + "score": [ null ], + "firstScores": [ null ] + }, + { + "playerId": 4, + "score": [ ], + "firstScores": [ ] + }, + { + "playerId": 5, + "score": [ 1293, null, 3489, 9 ], + "firstScores": [ 1293, null, 3489 ] + }, + { + "playerId": 6, + "score": [ "12.1", 2, NumberLong("2090845886852"), 23 ], + "firstScores": [ "12.1", 2, NumberLong("2090845886852") ] + }] + +.. seealso:: + + - :expression:`$lastN` + - :expression:`$sortArray` \ No newline at end of file diff --git a/source/reference/operator/aggregation/function.txt b/source/reference/operator/aggregation/function.txt index 524f6a96f48..29eb427d024 100644 --- a/source/reference/operator/aggregation/function.txt +++ b/source/reference/operator/aggregation/function.txt @@ -15,8 +15,6 @@ Definition .. expression:: $function - .. versionadded:: 4.4 - Defines a custom aggregation function or expression in JavaScript. You can use the :expression:`$function` operator to define custom @@ -120,8 +118,7 @@ scripting: - For a :binary:`~bin.mongos` instance, see :setting:`security.javascriptEnabled` configuration option or the - :option:`--noscripting ` command-line option - starting in MongoDB 4.4. + :option:`--noscripting ` command-line option. | In earlier versions, MongoDB does not allow JavaScript execution on :binary:`~bin.mongos` instances. @@ -138,10 +135,9 @@ JavaScript expression. However: :ref:`aggregation expressions ` within the query language. -- Starting in MongoDB 4.4, the :expression:`$function` and - :group:`$accumulator` allows users to define custom aggregation - expressions in JavaScript if the provided pipeline operators - cannot fulfill your application's needs. +- The :expression:`$function` and :group:`$accumulator` allows users to define + custom aggregation expressions in JavaScript if the provided pipeline + operators cannot fulfill your application's needs. 
Given the available aggregation operators: @@ -237,10 +233,9 @@ Example 2: Alternative to ``$where`` The :query:`$expr` operator allows the use of :ref:`aggregation expressions ` within the - query language. And, starting in MongoDB 4.4, the - :expression:`$function` and :group:`$accumulator` allows users to - define custom aggregation expressions in JavaScript if the provided - pipeline operators cannot fulfill your application's needs. + query language. And the :expression:`$function` and :group:`$accumulator` + allow users to define custom aggregation expressions in JavaScript if the + provided pipeline operators cannot fulfill your application's needs. Given the available aggregation operators: diff --git a/source/reference/operator/aggregation/geoNear.txt b/source/reference/operator/aggregation/geoNear.txt index c16a4fa5b96..292e2ff3ffc 100644 --- a/source/reference/operator/aggregation/geoNear.txt +++ b/source/reference/operator/aggregation/geoNear.txt @@ -184,8 +184,6 @@ When using :pipeline:`$geoNear`, consider that: - .. include:: /includes/fact-geoNear-restrict-near-in-query.rst -- .. include:: /includes/extracts/views-unsupported-geoNear.rst - - Starting in version 4.2, :pipeline:`$geoNear` no longer has a default limit of 100 documents. @@ -300,7 +298,7 @@ In this example: - The ``let`` option is used to set an array value of ``[-73.99279,40.719296]`` to the variable ``$pt``. -- ``$pt`` is specified as a let option to the ``near`` parameter in the +- ``$pt`` is specified as a ``let`` option to the ``near`` parameter in the ``$geoNear`` stage. .. code-block:: javascript diff --git a/source/reference/operator/aggregation/group.txt b/source/reference/operator/aggregation/group.txt index 3a27b8e136e..814d0f63c6c 100644 --- a/source/reference/operator/aggregation/group.txt +++ b/source/reference/operator/aggregation/group.txt @@ -7,6 +7,9 @@ $group (aggregation) .. facet:: :name: programming_language :values: shell + +.. meta:: + :description: Learn how to use an aggregation stage to separate documents into unique groups. .. contents:: On this page :local: @@ -101,15 +104,11 @@ operators: ``$group`` and Memory Restrictions ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -The :pipeline:`$group` stage has a limit of 100 megabytes of RAM. By -default, if the stage exceeds this limit, :pipeline:`$group` returns an -error. To allow more space for stage processing, use the -:ref:`allowDiskUse ` option to enable -aggregation pipeline stages to write data to temporary files. - -.. seealso:: - - :doc:`/core/aggregation-pipeline-limits` +If the :pipeline:`$group` stage exceeds 100 megabytes of RAM, MongoDB writes +data to temporary files. However, if the +:ref:`allowDiskUse ` option is set to ``false``, +``$group`` returns an error. For more information, refer to +:doc:`/core/aggregation-pipeline-limits`. .. _group-pipeline-optimization: diff --git a/source/reference/operator/aggregation/ifNull.txt b/source/reference/operator/aggregation/ifNull.txt index d4b10d2e25c..8937967c68a 100644 --- a/source/reference/operator/aggregation/ifNull.txt +++ b/source/reference/operator/aggregation/ifNull.txt @@ -8,6 +8,9 @@ $ifNull (aggregation) :name: programming_language :values: shell +.. meta:: + :description: Learn how to use an aggregation operator to evaluate expressions for null values and return or replace null values. + .. contents:: On this page :local: :backlinks: none @@ -55,19 +58,6 @@ Syntax ] } -In MongoDB 4.4 and earlier versions, :expression:`$ifNull` only -accepts a single input expression: - -.. 
code-block:: none - :copyable: false - - { - $ifNull: [ - , - - ] - } - Examples -------- diff --git a/source/reference/operator/aggregation/indexStats.txt b/source/reference/operator/aggregation/indexStats.txt index ecda04698ae..678d24275cf 100644 --- a/source/reference/operator/aggregation/indexStats.txt +++ b/source/reference/operator/aggregation/indexStats.txt @@ -81,8 +81,7 @@ Definition The full specfication document for the index, which includes the index key specification document. - The index option ``hidden``, available starting in MongoDB - 4.4, is only included if the value is ``true``. + The index option ``hidden`` is only included if the value is ``true``. .. versionadded:: 4.2.4 diff --git a/source/reference/operator/aggregation/isArray.txt b/source/reference/operator/aggregation/isArray.txt index bc5c7095f3d..4e6661bee82 100644 --- a/source/reference/operator/aggregation/isArray.txt +++ b/source/reference/operator/aggregation/isArray.txt @@ -58,15 +58,15 @@ The ```` can be any valid :ref:`expression Example ------- -Create the ``warehouses`` collection: +Create a collection named ``warehouses`` with the following documents: .. code-block:: javascript db.warehouses.insertMany( [ - { "_id" : 1, instock: [ "chocolate" ], ordered: [ "butter", "apples" ] }, - { "_id" : 2, instock: [ "apples", "pudding", "pie" ] }, - { "_id" : 3, instock: [ "pears", "pecans"], ordered: [ "cherries" ] }, - { "_id" : 4, instock: [ "ice cream" ], ordered: [ ] } + { _id : 1, instock: [ "chocolate" ], ordered: [ "butter", "apples" ] }, + { _id : 2, instock: [ "apples", "pudding", "pie" ] }, + { _id : 3, instock: [ "pears", "pecans" ], ordered: [ "cherries" ] }, + { _id : 4, instock: [ "ice cream" ], ordered: [ ] } ] ) Check if the ``instock`` and the ``ordered`` fields are arrays. If both @@ -91,11 +91,12 @@ fields are arrays, concatenate them: ] ) .. code-block:: javascript + :copyable: false - { "_id" : 1, "items" : [ "chocolate", "butter", "apples" ] } - { "_id" : 2, "items" : "One or more fields is not an array." } - { "_id" : 3, "items" : [ "pears", "pecans", "cherries" ] } - { "_id" : 4, "items" : [ "ice cream" ] } + { _id : 1, items : [ "chocolate", "butter", "apples" ] } + { _id : 2, items : "One or more fields is not an array." } + { _id : 3, items : [ "pears", "pecans", "cherries" ] } + { _id : 4, items : [ "ice cream" ] } .. seealso:: diff --git a/source/reference/operator/aggregation/isNumber.txt b/source/reference/operator/aggregation/isNumber.txt index aeeb6091532..d20ad6dd765 100644 --- a/source/reference/operator/aggregation/isNumber.txt +++ b/source/reference/operator/aggregation/isNumber.txt @@ -15,8 +15,6 @@ Definition .. expression:: $isNumber - .. versionadded:: 4.4 - ``$isNumber`` checks if the specified :ref:`expression ` resolves to one of the following numeric :term:`BSON types`: diff --git a/source/reference/operator/aggregation/isoDayOfWeek.txt b/source/reference/operator/aggregation/isoDayOfWeek.txt index e0f34a5a2c5..1033db673f5 100644 --- a/source/reference/operator/aggregation/isoDayOfWeek.txt +++ b/source/reference/operator/aggregation/isoDayOfWeek.txt @@ -110,8 +110,10 @@ A collection called ``birthdays`` contains the following documents: .. 
code-block:: javascript - { "_id" : 1, "name" : "Betty", "birthday" : ISODate("1993-09-21T00:00:00Z") } - { "_id" : 2, "name" : "Veronica", "birthday" : ISODate("1981-11-07T00:00:00Z") } + db.birthdays.insertMany( [ + { _id: 1, name: "Betty", birthday: ISODate("1993-09-21T00:00:00Z") }, + { _id: 2, name: "Veronica", birthday: ISODate("1981-11-07T00:00:00Z") } + ] ) The following operation returns the weekday number for each ``birthday`` field. @@ -119,7 +121,7 @@ The following operation returns the weekday number for each .. code-block:: javascript - db.dates.aggregate( [ + db.birthdays.aggregate( [ { $project: { _id: 0, @@ -133,8 +135,10 @@ The operation returns the following results: .. code-block:: javascript - { "name" : "Betty", "dayOfWeek" : 2 } - { "name" : "Veronica", "dayOfWeek" : 6 } + [ + { name: "Betty", dayOfWeek: 2 }, + { name: "Veronica", dayOfWeek: 6 } + ] .. seealso:: diff --git a/source/reference/operator/aggregation/isoWeek.txt b/source/reference/operator/aggregation/isoWeek.txt index a61a59dbaeb..7244ce6f3f7 100644 --- a/source/reference/operator/aggregation/isoWeek.txt +++ b/source/reference/operator/aggregation/isoWeek.txt @@ -111,8 +111,10 @@ A collection called ``deliveries`` contains the following documents: .. code-block:: javascript - { "_id" : 1, "date" : ISODate("2006-10-24T00:00:00Z"), "city" : "Boston" } - { "_id" : 2, "date" : ISODate("2011-08-18T00:00:00Z"), "city" : "Detroit" } + db.deliveries.insertMany( [ + { _id: 1, date: ISODate("2006-10-24T00:00:00Z"), city: "Boston" }, + { _id: 2, date: ISODate("2011-08-18T00:00:00Z"), city: "Detroit" } + ] ) The following operation returns the week number for each ``date`` field. @@ -132,9 +134,12 @@ The following operation returns the week number for each ``date`` field. The operation returns the following results: .. code-block:: javascript + :copyable: false - { "city" : "Boston", "weekNumber" : 43 } - { "city" : "Detroit", "weekNumber" : 33 } + [ + { city: "Boston", weekNumber: 43 }, + { city: "Detroit", weekNumber: 33 } + ] .. seealso:: diff --git a/source/reference/operator/aggregation/lastN-array-element.txt b/source/reference/operator/aggregation/lastN-array-element.txt deleted file mode 100644 index 74f2feb5c2d..00000000000 --- a/source/reference/operator/aggregation/lastN-array-element.txt +++ /dev/null @@ -1,134 +0,0 @@ -======================== -$lastN (array operator) -======================== - -.. default-domain:: mongodb - -.. contents:: On this page - :local: - :backlinks: none - :depth: 1 - :class: singlecol - -Definition ----------- - -.. expression:: $lastN - - .. versionadded:: 5.2 - - Returns a specified number of elements from the end of an - array. - -.. seealso:: - - - :expression:`$firstN` - - - :expression:`$sortArray` - -Syntax ------- - -:expression:`$lastN` has the following syntax: - -.. code-block:: javascript - - { $lastN: { n: , input: } } - -.. list-table:: - :header-rows: 1 - :class: border-table - - * - Field - - Description - - * - ``n`` - - An :ref:`expression ` that resolves to a - positive integer. The integer specifies the number of array elements - that :expression:`$lastN` returns. - - * - ``input`` - - An :ref:`expression ` that resolves to the - array from which to return ``n`` elements. - -Behavior --------- - -- :expression:`$lastN` returns elements in the same order they appear in - the input array. - -- :expression:`$lastN` does not filter out ``null`` values in the input - array. - -- You cannot specify a value of ``n`` less than ``1``. 
- -- If the specified ``n`` is greater than or equal to the number of elements - in the ``input`` array, :expression:`$lastN` returns the ``input`` array. - -- If ``input`` resolves to a non-array value, the aggregation operation - errors. - -Example -------- - -The collection ``games`` has the following documents: - -.. code-block:: javascript - :copyable: true - - db.games.insertMany([ - { "playerId" : 1, "score" : [ 1, 2, 3 ] }, - { "playerId" : 2, "score" : [ 12, 90, 7, 89, 8 ] }, - { "playerId" : 3, "score" : [ null ] }, - { "playerId" : 4, "score" : [ ] }, - { "playerId" : 5, "score" : [ 1293, null, 3489, 9 ]}, - { "playerId" : 6, "score" : [ "12.1", 2, NumberLong("2090845886852"), 23 ]} - ]) - -The following example uses the :expression:`$lastN` operator to retrieve the -last three scores for each player. The scores are returned in the new field -``lastScores`` created by :pipeline:`$addFields`. - -.. code-block:: javascript - :copyable: true - - db.games.aggregate([ - { $addFields: { lastScores: { $lastN: { n: 3, input: "$score" } } } } - ]) - -The operation returns the following results: - -.. code-block:: javascript - :copyable: true - :emphasize-lines: 4, 9, 14, 19, 24, 29 - - [{ - "playerId": 1, - "score": [ 1, 2, 3 ], - "lastScores": [ 1, 2, 3 ] - }, - { - "playerId": 2, - "score": [ 12, 90, 7, 89, 8 ], - "lastScores": [ 7, 89, 8 ] - }, - { - "playerId": 3, - "score": [ null ], - "lastScores": [ null ] - }, - { - "playerId": 4, - "score": [ ], - "lastScores": [ ] - }, - { - "playerId": 5, - "score": [ 1293, null, 3489, 9 ], - "lastScores": [ null, 3489, 9 ] - }, - { - "playerId": 6, - "score": [ "12.1", 2, NumberLong("2090845886852"), 23 ], - "lastScores": [ 2, NumberLong("2090845886852"), 23 ] - }] diff --git a/source/reference/operator/aggregation/lastN.txt b/source/reference/operator/aggregation/lastN.txt index eb90d0dcc67..b5dedb75369 100644 --- a/source/reference/operator/aggregation/lastN.txt +++ b/source/reference/operator/aggregation/lastN.txt @@ -1,29 +1,36 @@ -================================== -$lastN (aggregation accumulator) -================================== +====== +$lastN +====== .. default-domain:: mongodb .. contents:: On this page :local: :backlinks: none - :depth: 1 + :depth: 2 :class: singlecol Definition ---------- -.. group:: $lastN +.. versionadded:: 5.2 + +``$lastN`` can be used as an aggregation accumulator or array operator. As +an aggregation accumulator, it returns an aggregation of the last ``n`` elements within +a group. As an array operator, it returns the specified number of elements +from the end of an array. + +Aggregation Accumulator +----------------------- - .. versionadded:: 5.2 +.. group:: $lastN - Returns an aggregation of the last ``n`` elements within a group. - The elements returned are meaningful only if in a specified sort order. - If the group contains fewer than ``n`` elements, ``$lastN`` - returns all elements in the group. +When ``$lastN`` is used as an aggregation accumulator, the elements returned +are meaningful only if they are in a specified sort order. If the group contains +fewer than ``n`` elements, ``$lastN`` returns all elements in the group. Syntax ------- +~~~~~~ .. code-block:: none :copyable: false @@ -43,10 +50,10 @@ Syntax For details see :ref:`group key example `. Behavior --------- +~~~~~~~~ Null and Missing Values -~~~~~~~~~~~~~~~~~~~~~~~ +``````````````````````` - ``$lastN`` does not filter out null values. - ``$lastN`` converts missing values to null. 
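A minimal sketch of the accumulator behavior described above, using a hypothetical ``lastNGroupSizes`` collection; because the group contains fewer than ``n`` documents, ``$lastN`` returns every element in the group (the collection and values are illustrative only):

.. code-block:: javascript

   db.lastNGroupSizes.insertMany( [
      { _id: 1, gameId: "G1", score: 5 },
      { _id: 2, gameId: "G1", score: 7 }
   ] )

   db.lastNGroupSizes.aggregate( [
      {
         $group: {
            _id: "$gameId",
            lastThreeScores: { $lastN: { input: "$score", n: 3 } }
         }
      }
   ] )

The expected result is ``[ { _id: "G1", lastThreeScores: [ 5, 7 ] } ]``.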
@@ -107,7 +114,7 @@ In this example: ] Comparison of ``$lastN`` and ``$bottomN`` -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +````````````````````````````````````````` Both ``$lastN`` and ``$bottomN`` accumulators can accomplish similar results. @@ -121,31 +128,19 @@ In general: - ``$lastN`` can be used as an aggregation expression, ``$bottomN`` cannot. Restrictions ------------- +~~~~~~~~~~~~ Window Function and Aggregation Expression Support -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +`````````````````````````````````````````````````` ``$lastN`` is supported as an :ref:`aggregation expression `. -For details on aggregation expression usage see -:ref:`Using $lastN as an Aggregation Expression -`. - ``$lastN`` is supported as a :pipeline:`window operator <$setWindowFields>`. -Memory Limit Considerations -~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Aggregation pipelines which call ``$lastN`` are subject to the -:ref:`100 MB limit `. If this -limit is exceeded for an individual group, the aggregation fails -with an error. - Examples --------- +~~~~~~~~ Consider a ``gamescores`` collection with the following documents: @@ -163,7 +158,7 @@ Consider a ``gamescores`` collection with the following documents: ]) Find the Last Three Player Scores for a Single Game -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +``````````````````````````````````````````````````` You can use the ``$lastN`` accumulator to find the last three scores in a single game. @@ -214,7 +209,7 @@ The operation returns the following results: ] Finding the Last Three Player Scores Across Multiple Games -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +`````````````````````````````````````````````````````````` You can use the ``$lastN`` accumulator to find the last ``n`` input fields in each game. @@ -262,7 +257,7 @@ The operation returns the following results: ] Using ``$sort`` With ``$lastN`` -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +``````````````````````````````` Using a ``$sort`` stage earlier in the pipeline can influence the results of the ``$lastN`` accumulator. @@ -311,7 +306,7 @@ The operation returns the following results: .. _last-n-with-group-key: Computing ``n`` Based on the Group Key for ``$group`` -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +````````````````````````````````````````````````````` You can also assign the value of ``n`` dynamically. In this example, the :expression:`$cond` expression is used on the ``gameId`` field. @@ -356,7 +351,7 @@ The operation returns the following results: .. _lastN-aggregation-expression: Using ``$lastN`` as an Aggregation Expression -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +````````````````````````````````````````````` You can also use ``$lastN`` as an aggregation expression. @@ -396,3 +391,122 @@ The operation returns the following results: :copyable: false [ { lastThreeElements: [ 20, 30, 40 ] } ] + + +Array Operator +-------------- + +.. expression:: $lastN + +Syntax +~~~~~~ + +:expression:`$lastN` has the following syntax: + +.. code-block:: javascript + + { $lastN: { n: , input: } } + +.. list-table:: + :header-rows: 1 + :class: border-table + + * - Field + - Description + + * - ``n`` + - An :ref:`expression ` that resolves to a + positive integer. The integer specifies the number of array elements + that :expression:`$lastN` returns. + + * - ``input`` + - An :ref:`expression ` that resolves to the + array from which to return ``n`` elements. 
+ +Behavior +~~~~~~~~ + +- :expression:`$lastN` returns elements in the same order they appear in + the input array. + +- :expression:`$lastN` does not filter out ``null`` values in the input + array. + +- You cannot specify a value of ``n`` less than ``1``. + +- If the specified ``n`` is greater than or equal to the number of elements + in the ``input`` array, :expression:`$lastN` returns the ``input`` array. + +- If ``input`` resolves to a non-array value, the aggregation operation + errors. + +Example +~~~~~~~ + +The collection ``games`` has the following documents: + +.. code-block:: javascript + :copyable: true + + db.games.insertMany([ + { "playerId" : 1, "score" : [ 1, 2, 3 ] }, + { "playerId" : 2, "score" : [ 12, 90, 7, 89, 8 ] }, + { "playerId" : 3, "score" : [ null ] }, + { "playerId" : 4, "score" : [ ] }, + { "playerId" : 5, "score" : [ 1293, null, 3489, 9 ]}, + { "playerId" : 6, "score" : [ "12.1", 2, NumberLong("2090845886852"), 23 ]} + ]) + +The following example uses the :expression:`$lastN` operator to retrieve the +last three scores for each player. The scores are returned in the new field +``lastScores`` created by :pipeline:`$addFields`. + +.. code-block:: javascript + :copyable: true + + db.games.aggregate([ + { $addFields: { lastScores: { $lastN: { n: 3, input: "$score" } } } } + ]) + +The operation returns the following results: + +.. code-block:: javascript + :copyable: true + :emphasize-lines: 4, 9, 14, 19, 24, 29 + + [{ + "playerId": 1, + "score": [ 1, 2, 3 ], + "lastScores": [ 1, 2, 3 ] + }, + { + "playerId": 2, + "score": [ 12, 90, 7, 89, 8 ], + "lastScores": [ 7, 89, 8 ] + }, + { + "playerId": 3, + "score": [ null ], + "lastScores": [ null ] + }, + { + "playerId": 4, + "score": [ ], + "lastScores": [ ] + }, + { + "playerId": 5, + "score": [ 1293, null, 3489, 9 ], + "lastScores": [ null, 3489, 9 ] + }, + { + "playerId": 6, + "score": [ "12.1", 2, NumberLong("2090845886852"), 23 ], + "lastScores": [ 2, NumberLong("2090845886852"), 23 ] + }] + + +.. seealso:: + + - :expression:`$firstN` + - :expression:`$sortArray` \ No newline at end of file diff --git a/source/reference/operator/aggregation/limit.txt b/source/reference/operator/aggregation/limit.txt index 3bcbd46073e..0c8dd7a6383 100644 --- a/source/reference/operator/aggregation/limit.txt +++ b/source/reference/operator/aggregation/limit.txt @@ -8,6 +8,9 @@ $limit (aggregation) :name: programming_language :values: shell +.. meta:: + :description: Learn about the $limit aggregation stage, which restricts the number of documents passed to the subsequent stage. + .. contents:: On this page :local: :backlinks: none diff --git a/source/reference/operator/aggregation/listSampledQueries.txt b/source/reference/operator/aggregation/listSampledQueries.txt index 2e6e49b760e..3648ab4cff0 100644 --- a/source/reference/operator/aggregation/listSampledQueries.txt +++ b/source/reference/operator/aggregation/listSampledQueries.txt @@ -50,7 +50,7 @@ Limitations ----------- - You cannot use ``$listSampledQueries`` on Atlas - :atlas:`multitenant ` + :atlas:`multi-tenant ` configurations. - You cannot use ``$listSampledQueries`` on standalone deployments. 
- You cannot use ``$listSampledQueries`` directly against a diff --git a/source/reference/operator/aggregation/literal.txt b/source/reference/operator/aggregation/literal.txt index fcc478ceed1..6717ef67608 100644 --- a/source/reference/operator/aggregation/literal.txt +++ b/source/reference/operator/aggregation/literal.txt @@ -102,8 +102,10 @@ A ``books`` collection has the following documents: .. code-block:: javascript - { "_id" : 1, "title" : "Dracula", "condition": "new" } - { "_id" : 2, "title" : "The Little Prince", "condition": "new" } + db.books.insertMany([ + { "_id" : 1, "title" : "Dracula", "condition": "new" }, + { "_id" : 2, "title" : "The Little Prince", "condition": "new" } + ]) The :expression:`{ $literal: 1 } <$literal>` expression returns a new ``editionNumber`` field set to the value ``1``: @@ -117,6 +119,7 @@ The :expression:`{ $literal: 1 } <$literal>` expression returns a new The operation results in the following documents: .. code-block:: javascript + :copyable: false { "_id" : 1, "title" : "Dracula", "editionNumber" : 1 } { "_id" : 2, "title" : "The Little Prince", "editionNumber" : 1 } diff --git a/source/reference/operator/aggregation/lookup.txt b/source/reference/operator/aggregation/lookup.txt index b5075b6dd3f..2d964245277 100644 --- a/source/reference/operator/aggregation/lookup.txt +++ b/source/reference/operator/aggregation/lookup.txt @@ -6,6 +6,7 @@ $lookup (aggregation) .. meta:: :keywords: atlas + :description: Learn about the $lookup aggregation stage, which performs left outer joins with collections in the same database, adding a new array field to each input document with matching documents from the joined collection. .. contents:: On this page :local: @@ -27,7 +28,7 @@ Definition the "joined" collection. The :pipeline:`$lookup` stage passes these reshaped documents to the next stage. - Starting in MongoDB 5.1, :pipeline:`$lookup` works across sharded + Starting in MongoDB 5.1, you can use :pipeline:`$lookup` with sharded collections. To combine elements from two different collections, use the @@ -120,19 +121,25 @@ The :pipeline:`$lookup` takes a document with these fields: already exists in the input document, the existing field is *overwritten*. -The operation would correspond to the following pseudo-SQL statement: +The operation corresponds to this pseudo-SQL statement: .. code-block:: sql + :copyable: false - SELECT *, - FROM collection - WHERE IN ( - SELECT * + SELECT *, ( + SELECT ARRAY_AGG(*) FROM WHERE = - ); + ) AS + FROM collection; + +.. note:: + + The SQL statements on this page are included for comparison to the + MongoDB aggregation pipeline syntax. The SQL statements aren't + runnable. -See these examples: +For MongoDB examples, see these pages: - :ref:`lookup-single-equality-example` - :ref:`unwind-example` @@ -249,6 +256,7 @@ The :pipeline:`$lookup` stage accepts a document with these fields: The operation corresponds to this pseudo-SQL statement: .. code-block:: sql + :copyable: false SELECT *, FROM collection @@ -380,6 +388,7 @@ The :pipeline:`$lookup` accepts a document with these fields: The operation corresponds to this pseudo-SQL statement: .. code-block:: sql + :copyable: false SELECT *, FROM localCollection @@ -505,7 +514,8 @@ Starting in MongoDB 5.1, you can specify :ref:`sharded collections ` in the ``from`` parameter of :pipeline:`$lookup` stages. -.. include:: /includes/graphLookup-sharded-coll-transaction-note.rst +You **cannot** use the ``$lookup`` stage within a transaction while +targeting a sharded collection. 
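The performance considerations that follow recommend an index on the ``foreignField`` for equality-match joins. A minimal sketch using the ``orders`` and ``inventory`` collections from the single equality join example, assuming the join is on ``item`` and ``sku``:

.. code-block:: javascript

   // Index the foreignField on the joined collection to support the equality match
   db.inventory.createIndex( { sku: 1 } )

   db.orders.aggregate( [
      {
         $lookup: {
            from: "inventory",
            localField: "item",
            foreignField: "sku",
            as: "inventory_docs"
         }
      }
   ] )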
|sbe-title| ~~~~~~~~~~~ @@ -536,33 +546,36 @@ different ``$lookup`` operations. - .. _equality-match-performance: - ``$lookup`` operations that perform equality matches with a - single join typically perform better when the source collection - contains an index on the ``foreignField``. + single join perform better when the foreign collection contains + an index on the ``foreignField``. + + .. important:: - * - :ref:`Uncorrelated Subqueries` + If a supporting index on the ``foreignField`` does not + exist, a ``$lookup`` operation that performs an equality + match with a single join will likely have poor performance. - - .. _uncorrelated-subqueries-performance: + * - :ref:`Uncorrelated Subqueries ` + - .. _uncorrelated-subqueries-performance: + - ``$lookup`` operations that contain uncorrelated subqueries - typically perform better when the inner pipeline can reference - an index on the ``foreignField``. + perform better when the inner pipeline can reference an + index of the foreign collection. - MongoDB only needs to run the ``$lookup`` subquery once before caching the query because there is no relationship between the - source and foreign collections. The ``$lookup`` subquery is not - based on any value in the source collection. This behavior - improves performance for subsequent executions of this query. - + source and foreign collections. The subquery is not based on + any value in the source collection. This behavior improves + performance for subsequent executions of the ``$lookup`` + operation. * - :ref:`Correlated Subqueries ` - .. _correlated-subqueries-performance: - ``$lookup`` operations that contain correlated subqueries - typically perform better when the following conditions apply: - - - The source collection contains an index on the - ``localField``. + perform better when the following conditions apply: - The foreign collection contains an index on the ``foreignField``. @@ -681,6 +694,7 @@ The operation returns these documents: The operation corresponds to this pseudo-SQL statement: .. code-block:: sql + :copyable: false SELECT *, inventory_docs FROM orders @@ -691,7 +705,7 @@ The operation corresponds to this pseudo-SQL statement: ); For more information, see -:ref:`Equality Match Performance Considerations`. +:ref:`Equality Match Performance Considerations `. .. _unwind-example: @@ -942,6 +956,7 @@ The operation returns these documents: The operation corresponds to this pseudo-SQL statement: .. code-block:: sql + :copyable: false SELECT *, stockdata FROM orders @@ -1057,6 +1072,7 @@ The operation returns the following: The operation corresponds to this pseudo-SQL statement: .. code-block:: sql + :copyable: false SELECT *, holidays FROM absences diff --git a/source/reference/operator/aggregation/map.txt b/source/reference/operator/aggregation/map.txt index 42cd9429251..2b6d121b09f 100644 --- a/source/reference/operator/aggregation/map.txt +++ b/source/reference/operator/aggregation/map.txt @@ -8,6 +8,9 @@ $map (aggregation) :name: programming_language :values: shell +.. meta:: + :description: Learn about the $map aggregation operator, which applies an expression to each item in an array and returns an array with the applied results. + .. 
contents:: On this page :local: :backlinks: none diff --git a/source/reference/operator/aggregation/match.txt b/source/reference/operator/aggregation/match.txt index 3139f56b424..ede719b962a 100644 --- a/source/reference/operator/aggregation/match.txt +++ b/source/reference/operator/aggregation/match.txt @@ -8,6 +8,9 @@ $match (aggregation) :name: programming_language :values: shell +.. meta:: + :description: Learn about the $match aggregation stage, which filters documents to pass only those that match specified conditions to the next pipeline stage. + .. contents:: On this page :local: :backlinks: none diff --git a/source/reference/operator/aggregation/merge.txt b/source/reference/operator/aggregation/merge.txt index d7b24f390b3..39fa9e1856e 100644 --- a/source/reference/operator/aggregation/merge.txt +++ b/source/reference/operator/aggregation/merge.txt @@ -41,27 +41,23 @@ Definition - Can output to a collection in the same or different database. - - Starting in MongoDB 4.4: - - - :pipeline:`$merge` can output to the - same collection that is being aggregated. For more - information, see :ref:`merge-behavior-same-collection`. - - - Pipelines with the :pipeline:`$merge` stage can run on - replica set secondary nodes if all the nodes in cluster have - :ref:`featureCompatibilityVersion ` set - to ``4.4`` or higher and the :doc:`/core/read-preference` - allows secondary reads. - - - Read operations of the :pipeline:`$merge` statement are sent to - secondary nodes, while the write operations occur only on the - primary node. - - - Not all driver versions support targeting of :pipeline:`$merge` - operations to replica set secondary nodes. Check your - :driver:`driver ` documentation to see when your driver added - support for :pipeline:`$merge` read operations running on - secondary nodes. + - Can output to the same collection that is being aggregated. For more + information, see :ref:`merge-behavior-same-collection`. + + - Pipelines with the :pipeline:`$merge` stage can run on + replica set secondary nodes if all the nodes in cluster have + :ref:`featureCompatibilityVersion ` set + to ``5.0`` or higher and the :doc:`/core/read-preference` + allows secondary reads. + + - Read operations of the :pipeline:`$merge` statement are sent to + secondary nodes, while the write operations occur only on the + primary node. + + - Not all driver versions support targeting of :pipeline:`$merge` + operations to replica set secondary nodes. Check your + :driver:`driver ` documentation to see when your driver added support + for :pipeline:`$merge` read operations running on secondary nodes. - Creates a new collection if the output collection does not already exist. @@ -698,8 +694,7 @@ results of the aggregation pipeline to a collection: - :pipeline:`$out` * - - Can output to a collection in the same or different database. - - - Can output to a collection in the same or, starting in - MongoDB 4.4, different database. + - - Can output to a collection in the same or different database. * - - Creates a new collection if the output collection does not already exist. diff --git a/source/reference/operator/aggregation/meta.txt b/source/reference/operator/aggregation/meta.txt index 53c83324629..8126e52af83 100644 --- a/source/reference/operator/aggregation/meta.txt +++ b/source/reference/operator/aggregation/meta.txt @@ -56,9 +56,8 @@ Requires $text Search :pipeline:`$match` stage, the operation fails. - In find, you must specify the :query:`$text` operator in the - query predicate to use ``{ $meta: "textScore" }``. 
Starting - in MongoDB 4.4, if you do not specify the :query:`$text` - operator in the query predicate, the operation fails. + query predicate to use ``{ $meta: "textScore" }``. If you do not specify + the :query:`$text` operator in the query predicate, the operation fails. Availability ```````````` @@ -120,9 +119,8 @@ Sort without Projection $meta: "textScore" }`` without also having to project the ``textScore``. -- In find, starting in MongoDB 4.4, you can sort the resulting - documents by ``{ $meta: "textScore" }`` without also having to - project the ``textScore``. +- In find, you can sort the resulting documents by ``{ $meta: "textScore" }`` + without also having to project the ``textScore``. | In MongoDB 4.2 and earlier, to use :expression:`{ $meta: "textScore" } <$meta>` expression with @@ -138,9 +136,8 @@ Sort with Projection expression. The field name in the sort is disregarded by the query system. -- In find, starting in MongoDB 4.4, if you include the - :expression:`{ $meta: "textScore" } <$meta>` expression in - both the projection and sort, the projection and sort can have +- In find, if you include the :expression:`{ $meta: "textScore" } <$meta>` + expression in both the projection and sort, the projection and sort can have different field names for the expression. The field name in the sort is disregarded by the query system. diff --git a/source/reference/operator/aggregation/mod.txt b/source/reference/operator/aggregation/mod.txt index 6d18d6bfa25..7c2cc672bcb 100644 --- a/source/reference/operator/aggregation/mod.txt +++ b/source/reference/operator/aggregation/mod.txt @@ -4,6 +4,13 @@ $mod (aggregation) .. default-domain:: mongodb +.. facet:: + :name: genre + :values: reference + +.. meta:: + :keywords: code example + .. contents:: On this page :local: :backlinks: none @@ -17,19 +24,48 @@ Definition Divides one number by another and returns the *remainder*. - The :expression:`$mod` expression has the following syntax: +Syntax +------ + +The ``$mod`` expression has the following syntax: + +.. code-block:: javascript + + { $mod: [ , ] } + +The first argument is the dividend, and the second argument is the +divisor. That is, the first argument is divided by the second +argument. - .. code-block:: javascript +Behavior +-------- - { $mod: [ , ] } +The arguments can be any valid :ref:`expression +` as long as they resolve to numbers. For +more information on expressions, see :ref:`aggregation-expressions`. - The first argument is the dividend, and the second argument is the - divisor; i.e. first argument is divided by the second argument. +Starting in version 7.2, the output data type of the ``$mod`` operator is +the larger of the two input data types. - The arguments can be any valid :ref:`expression - ` as long as they resolve to numbers. For - more information on expressions, see :ref:`aggregation-expressions`. +.. note:: + Prior to version 7.2, the value and field type of inputs determine + the ``$mod`` output type if: + + - The divisor is type ``double`` but has an integer value. + - The dividend is type ``int`` or ``long``. + +In this case, MongoDB converts the divisor to the dividend data +type before it performs the mod operation. The output data type +is the dividend data type. + +Negative Dividend +~~~~~~~~~~~~~~~~~ + +.. include:: /includes/negative-dividend.rst + +For an example, see :ref:``. 
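A minimal sketch of the output data type behavior described above, assuming a hypothetical ``modTypeExample`` collection and MongoDB 7.2 or later; the aggregation :expression:`$type` operator is included only to show the resulting BSON type:

.. code-block:: javascript

   db.modTypeExample.insertOne(
      { _id: 1, dividend: NumberLong(7), divisor: NumberInt(4) }
   )

   db.modTypeExample.aggregate( [
      {
         $project: {
            remainder: { $mod: [ "$dividend", "$divisor" ] },
            remainderType: { $type: { $mod: [ "$dividend", "$divisor" ] } }
         }
      }
   ] )

Because ``long`` is the larger of the two input types, the expected remainder is ``NumberLong(3)`` and ``remainderType`` is ``"long"``.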
+ Example ------- @@ -42,22 +78,53 @@ Consider a ``conferencePlanning`` collection with the following documents: { "_id" : 2, "city" : "Singapore", "hours" : 40, "tasks" : 4 } ] ) -The following aggregation uses the :expression:`$mod` expression to +The following aggregation uses the ``$mod`` expression to return the remainder of the ``hours`` field divided by the ``tasks`` field: .. code-block:: javascript - db.conferencePlanning.aggregate( - [ - { $project: { remainder: { $mod: [ "$hours", "$tasks" ] } } } - ] - ) + db.conferencePlanning.aggregate( [ + { $project: { remainder: { $mod: [ "$hours", "$tasks" ] } } } + ] ) The operation returns the following results: +.. code-block:: json + :copyable: false + + [ + { '_id' : 1, 'remainder' : 3 }, + { '_id' : 2, 'remainder' : 0 } + ] + +.. _mod-negative-dividend-example: + +Negative Dividend +~~~~~~~~~~~~~~~~~ + +Consider a ``modExample`` collection that contains the following +document: + +.. code-block:: javascript + + db.modExample.insertOne( [ + { "_id" : 1, "dividend": -13, "divisor": 9 } + ] ) + +This aggregation uses the ``$mod`` expression to return the remainder of +``dividend`` divided by the ``divisor`` field: + .. code-block:: javascript + + db.modExample.aggregate( [ + { $project: { remainder: { $mod: [ "$dividend", "$divisor" ] } } } + ] ) + +The operation returns the following results: + +.. code-block:: json :copyable: false - { "_id" : 1, "remainder" : 3 } - { "_id" : 2, "remainder" : 0 } + [ { '_id' : 1, 'remainder' : -4 } ] + diff --git a/source/reference/operator/aggregation/out.txt b/source/reference/operator/aggregation/out.txt index 405c503dc97..8dec38bd8b3 100644 --- a/source/reference/operator/aggregation/out.txt +++ b/source/reference/operator/aggregation/out.txt @@ -18,8 +18,7 @@ Definition .. pipeline:: $out Takes the documents returned by the aggregation pipeline and writes - them to a specified collection. Starting in MongoDB 4.4, you can - specify the output database. + them to a specified collection. You can specify the output database. The ``$out`` stage must be *the last stage* in the pipeline. The ``$out`` operator lets the aggregation @@ -44,8 +43,8 @@ The ``$out`` stage has the following syntax: { $out: "" } // Output collection is in the same database -- Starting in MongoDB 4.4, ``$out`` can take a document to - specify the output database as well as the output collection: +- ``$out`` can take a document to specify the output database as well as the + output collection: .. code-block:: javascript @@ -137,7 +136,7 @@ The ``$out`` stage has the following syntax: collection. The input collection for a pipeline can be sharded. To output to a sharded collection, see :pipeline:`$merge`. - The ``$out`` operator cannot write results to a - :doc:`capped collection `. + :ref:`capped collection `. - If you modify a collection with an :atlas:`Atlas Search ` index, you must first delete and then re-create the search index. Consider using :pipeline:`$merge` instead. @@ -157,8 +156,7 @@ following summarizes the capabilities of the two stages: * - ``$out`` - :pipeline:`$merge` - * - - Can output to a collection in the same or, starting in - MongoDB 4.4, different database. + * - - Can output to a collection in the same or different database. - - Can output to a collection in the same or different database. * - - Creates a new collection if the output collection does not already exist. @@ -407,8 +405,8 @@ Output to a Different Database For a :ref:`sharded cluster `, the specified output database must already exist. 
-Starting in MongoDB 4.4, ``$out`` can output to a collection in -a database different from where the aggregation is run. +``$out`` can output to a collection in a database different from where the +aggregation is run. The following aggregation operation pivots the data in the ``books`` collection to have titles grouped by authors and then writes the diff --git a/source/reference/operator/aggregation/pow.txt b/source/reference/operator/aggregation/pow.txt index 1df0a952cf6..3b395d46eb0 100644 --- a/source/reference/operator/aggregation/pow.txt +++ b/source/reference/operator/aggregation/pow.txt @@ -69,57 +69,41 @@ cannot be represented accurately in that type. In these cases: Example ------- -A collection named ``quizzes`` contains the following documents: +Create a collection called ``quizzes`` with the following documents: .. code-block:: javascript - { - "_id" : 1, - "scores" : [ - { - "name" : "dave123", - "score" : 85 - }, - { - "name" : "dave2", - "score" : 90 - }, - { - "name" : "ahn", - "score" : 71 - } - ] - } - { - "_id" : 2, - "scores" : [ - { - "name" : "li", - "quiz" : 2, - "score" : 96 - }, - { - "name" : "annT", - "score" : 77 - }, - { - "name" : "ty", - "score" : 82 - } - ] - } + db.quizzes.insertMany( [ + { + _id : 1, + scores : [ + { name : "dave123", score : 85 }, + { name : "dave2", score : 90 }, + { name : "ahn", score : 71 } + ] + }, + { + _id : 2, + scores : [ + { name : "li", quiz : 2, score : 96 }, + { name : "annT", score : 77 }, + { name : "ty", score : 82 } + ] + } + ] ) The following example calculates the variance for each quiz: .. code-block:: javascript - db.quizzes.aggregate([ + db.quizzes.aggregate( [ { $project: { variance: { $pow: [ { $stdDevPop: "$scores.score" }, 2 ] } } } - ]) + ] ) The operation returns the following results: .. code-block:: javascript + :copyable: false - { "_id" : 1, "variance" : 64.66666666666667 } - { "_id" : 2, "variance" : 64.66666666666667 } + { _id : 1, variance : 64.66666666666667 } + { _id : 2, variance : 64.66666666666667 } diff --git a/source/reference/operator/aggregation/project.txt b/source/reference/operator/aggregation/project.txt index 6139e853557..acf9c63e872 100644 --- a/source/reference/operator/aggregation/project.txt +++ b/source/reference/operator/aggregation/project.txt @@ -8,6 +8,9 @@ $project (aggregation) :name: programming_language :values: shell +.. meta:: + :description: Learn about the $project aggregation stage, which allows you to pass documents with specified fields to the next stage. $project supports including or excluding existing fields, adding new fields, and computing field values. + .. contents:: On this page :local: :backlinks: none @@ -55,8 +58,8 @@ The :pipeline:`$project` specifications have the following forms: * - ``: <1 or true>`` - - Specifies the inclusion of a field. Non-zero integers are also - treated as ``true``. + - Specifies the inclusion of a field. Non-zero integers are also treated + as ``true``. * - ``_id: <0 or false>`` - Specifies the suppression of the ``_id`` field. @@ -86,11 +89,11 @@ The :pipeline:`$project` specifications have the following forms: See also the :pipeline:`$unset` stage to exclude fields. -Considerations --------------- +Behavior +-------- -Include Existing Fields -~~~~~~~~~~~~~~~~~~~~~~~ +Include Fields +~~~~~~~~~~~~~~ - The ``_id`` field is, by default, included in the output documents. 
To include any other fields from the input documents in the output @@ -101,8 +104,8 @@ Include Existing Fields document, :pipeline:`$project` ignores that field inclusion and does not add the field to the document. -Suppress the ``_id`` Field -~~~~~~~~~~~~~~~~~~~~~~~~~~ +``_id`` Field +~~~~~~~~~~~~~ By default, the ``_id`` field is included in the output documents. To exclude the ``_id`` field from the output documents, you @@ -226,11 +229,25 @@ fails with the same error: .. include:: /includes/aggregation/fact-project-stage-placement.rst -Restrictions -~~~~~~~~~~~~ +Considerations +-------------- + +Empty Specification +~~~~~~~~~~~~~~~~~~~ + +MongoDB returns an error if the :pipeline:`$project` stage is passed an +empty document. + +For example, running the following pipeline produces an error: + +.. code-block:: javascript + + db.myCollection.aggregate( [ { + $project: { } + } ] ) -An error is returned if the :pipeline:`$project` specification is -an empty document. +Array Index +~~~~~~~~~~~ .. include:: /includes/project-stage-and-array-index.rst diff --git a/source/reference/operator/aggregation/rand.txt b/source/reference/operator/aggregation/rand.txt index b5557eb5d14..a356b008bce 100644 --- a/source/reference/operator/aggregation/rand.txt +++ b/source/reference/operator/aggregation/rand.txt @@ -15,8 +15,6 @@ Definition .. expression:: $rand - .. versionadded:: 4.4.2 - Returns a random float between 0 and 1 each time it is called. :expression:`$rand` has the following syntax: diff --git a/source/reference/operator/aggregation/rank.txt b/source/reference/operator/aggregation/rank.txt index dba54103c83..462003111fe 100644 --- a/source/reference/operator/aggregation/rank.txt +++ b/source/reference/operator/aggregation/rank.txt @@ -21,10 +21,12 @@ Returns the document position (known as the rank) relative to other documents in the :pipeline:`$setWindowFields` stage :ref:`partition `. -The :pipeline:`$setWindowFields` stage :ref:`sortBy -` field value determines the document rank. For -more information on how MongoDB compares fields with different types, -see :ref:`BSON comparison order `. +The :ref:`sortBy ` field value in the +:pipeline:`$setWindowFields` stage determines the document rank. When +used with the ``$rank`` operator, ``sortBy`` can only take one field as +its value. For more information on how MongoDB compares fields with +different types, see :ref:`BSON comparison order +`. If multiple documents occupy the same rank, :group:`$rank` places the document with the subsequent value at a rank with a gap diff --git a/source/reference/operator/aggregation/reduce.txt b/source/reference/operator/aggregation/reduce.txt index 6a36a6a69ad..22325b8997d 100644 --- a/source/reference/operator/aggregation/reduce.txt +++ b/source/reference/operator/aggregation/reduce.txt @@ -73,7 +73,7 @@ Definition During evaluation of the ``in`` expression, two variables will be available: - - ``value`` is the :doc:`variable ` + - ``value`` is the :ref:`variable ` that represents the cumulative value of the expression. - ``this`` is the :ref:`variable ` @@ -154,14 +154,16 @@ probability of each event in the experiment. .. 
code-block:: javascript - {_id:1, "type":"die", "experimentId":"r5", "description":"Roll a 5", "eventNum":1, "probability":0.16666666666667} - {_id:2, "type":"card", "experimentId":"d3rc", "description":"Draw 3 red cards", "eventNum":1, "probability":0.5} - {_id:3, "type":"card", "experimentId":"d3rc", "description":"Draw 3 red cards", "eventNum":2, "probability":0.49019607843137} - {_id:4, "type":"card", "experimentId":"d3rc", "description":"Draw 3 red cards", "eventNum":3, "probability":0.48} - {_id:5, "type":"die", "experimentId":"r16", "description":"Roll a 1 then a 6", "eventNum":1, "probability":0.16666666666667} - {_id:6, "type":"die", "experimentId":"r16", "description":"Roll a 1 then a 6", "eventNum":2, "probability":0.16666666666667} - {_id:7, "type":"card", "experimentId":"dak", "description":"Draw an ace, then a king", "eventNum":1, "probability":0.07692307692308} - {_id:8, "type":"card", "experimentId":"dak", "description":"Draw an ace, then a king", "eventNum":2, "probability":0.07843137254902} + db.events.insertMany( [ + { _id : 1, type : "die", experimentId :"r5", description : "Roll a 5", eventNum : 1, probability : 0.16666666666667 }, + { _id : 2, type : "card", experimentId :"d3rc", description : "Draw 3 red cards", eventNum : 1, probability : 0.5 }, + { _id : 3, type : "card", experimentId :"d3rc", description : "Draw 3 red cards", eventNum : 2, probability : 0.49019607843137 }, + { _id : 4, type : "card", experimentId :"d3rc", description : "Draw 3 red cards", eventNum : 3, probability : 0.48 }, + { _id : 5, type : "die", experimentId :"r16", description : "Roll a 1 then a 6", eventNum : 1, probability : 0.16666666666667 }, + { _id : 6, type : "die", experimentId :"r16", description : "Roll a 1 then a 6", eventNum : 2, probability : 0.16666666666667 }, + { _id : 7, type : "card", experimentId :"dak", description : "Draw an ace, then a king", eventNum : 1, probability : 0.07692307692308 }, + { _id : 8, type : "card", experimentId :"dak", description : "Draw an ace, then a king", eventNum : 2, probability : 0.07843137254902 } + ] ) **Steps**: @@ -178,13 +180,13 @@ probability of each event in the experiment. { $group: { _id: "$experimentId", - "probabilityArr": { $push: "$probability" } + probabilityArr: { $push: "$probability" } } }, { $project: { - "description": 1, - "results": { + description: 1, + results: { $reduce: { input: "$probabilityArr", initialValue: 1, @@ -199,11 +201,12 @@ probability of each event in the experiment. The operation returns the following: .. code-block:: javascript + :copyable: false - { "_id" : "dak", "results" : 0.00603318250377101 } - { "_id" : "r5", "results" : 0.16666666666667 } - { "_id" : "r16", "results" : 0.027777777777778886 } - { "_id" : "d3rc", "results" : 0.11764705882352879 } + { _id : "dak", results : 0.00603318250377101 } + { _id : "r5", results : 0.16666666666667 } + { _id : "r16", results : 0.027777777777778886 } + { _id : "d3rc", results : 0.11764705882352879 } Discounted Merchandise `````````````````````` @@ -212,11 +215,13 @@ A collection named ``clothes`` contains the following documents: .. 
code-block:: javascript - { "_id" : 1, "productId" : "ts1", "description" : "T-Shirt", "color" : "black", "size" : "M", "price" : 20, "discounts" : [ 0.5, 0.1 ] } - { "_id" : 2, "productId" : "j1", "description" : "Jeans", "color" : "blue", "size" : "36", "price" : 40, "discounts" : [ 0.25, 0.15, 0.05 ] } - { "_id" : 3, "productId" : "s1", "description" : "Shorts", "color" : "beige", "size" : "32", "price" : 30, "discounts" : [ 0.15, 0.05 ] } - { "_id" : 4, "productId" : "ts2", "description" : "Cool T-Shirt", "color" : "White", "size" : "L", "price" : 25, "discounts" : [ 0.3 ] } - { "_id" : 5, "productId" : "j2", "description" : "Designer Jeans", "color" : "blue", "size" : "30", "price" : 80, "discounts" : [ 0.1, 0.25 ] } + db.clothes.insertMany( [ + { _id : 1, productId : "ts1", description : "T-Shirt", color : "black", size : "M", price : 20, discounts : [ 0.5, 0.1 ] }, + { _id : 2, productId : "j1", description : "Jeans", color : "blue", size : "36", price : 40, discounts : [ 0.25, 0.15, 0.05 ] }, + { _id : 3, productId : "s1", description : "Shorts", color : "beige", size : "32", price : 30, discounts : [ 0.15, 0.05 ] }, + { _id : 4, productId : "ts2", description : "Cool T-Shirt", color : "White", size : "L", price : 25, discounts : [ 0.3 ] }, + { _id : 5, productId : "j2", description : "Designer Jeans", color : "blue", size : "30", price : 80, discounts : [ 0.1, 0.25 ] } + ] ) Each document contains a ``discounts`` array containing the currently available percent-off coupons for each item. If each discount can be @@ -230,7 +235,7 @@ applied to the product once, we can calculate the lowest price by using [ { $project: { - "discountedPrice": { + discountedPrice: { $reduce: { input: "$discounts", initialValue: "$price", @@ -245,12 +250,13 @@ applied to the product once, we can calculate the lowest price by using The operation returns the following: .. code-block:: javascript + :copyable: false - { "_id" : ObjectId("57c893067054e6e47674ce01"), "discountedPrice" : 9 } - { "_id" : ObjectId("57c9932b7054e6e47674ce12"), "discountedPrice" : 24.224999999999998 } - { "_id" : ObjectId("57c993457054e6e47674ce13"), "discountedPrice" : 24.224999999999998 } - { "_id" : ObjectId("57c993687054e6e47674ce14"), "discountedPrice" : 17.5 } - { "_id" : ObjectId("57c993837054e6e47674ce15"), "discountedPrice" : 54 } + { _id : ObjectId("57c893067054e6e47674ce01"), discountedPrice : 9 } + { _id : ObjectId("57c9932b7054e6e47674ce12"), discountedPrice : 24.224999999999998 } + { _id : ObjectId("57c993457054e6e47674ce13"), discountedPrice : 24.224999999999998 } + { _id : ObjectId("57c993687054e6e47674ce14"), discountedPrice : 17.5 } + { _id : ObjectId("57c993837054e6e47674ce15"), discountedPrice : 54 } String Concatenation ~~~~~~~~~~~~~~~~~~~~ @@ -259,12 +265,15 @@ A collection named ``people`` contains the following documents: .. 
code-block:: javascript - { "_id" : 1, "name" : "Melissa", "hobbies" : [ "softball", "drawing", "reading" ] } - { "_id" : 2, "name" : "Brad", "hobbies" : [ "gaming", "skateboarding" ] } - { "_id" : 3, "name" : "Scott", "hobbies" : [ "basketball", "music", "fishing" ] } - { "_id" : 4, "name" : "Tracey", "hobbies" : [ "acting", "yoga" ] } - { "_id" : 5, "name" : "Josh", "hobbies" : [ "programming" ] } - { "_id" : 6, "name" : "Claire" } + db.people.insertMany( [ + { _id : 1, name : "Melissa", hobbies : [ "softball", "drawing", "reading" ] }, + { _id : 2, name : "Brad", hobbies : [ "gaming", "skateboarding" ] }, + { _id : 3, name : "Scott", hobbies : [ "basketball", "music", "fishing" ] }, + { _id : 4, name : "Tracey", hobbies : [ "acting", "yoga" ] }, + { _id : 5, name : "Josh", hobbies : [ "programming" ] }, + { _id : 6, name : "Claire" } + ] ) + The following example reduces the ``hobbies`` array of strings into a single string ``bio``: @@ -277,8 +286,8 @@ The following example reduces the ``hobbies`` array of strings into a single str { $match: { "hobbies": { $gt: [ ] } } }, { $project: { - "name": 1, - "bio": { + name: 1, + bio: { $reduce: { input: "$hobbies", initialValue: "My hobbies include:", @@ -305,12 +314,13 @@ The following example reduces the ``hobbies`` array of strings into a single str The operation returns the following: .. code-block:: javascript + :copyable: false - { "_id" : 1, "name" : "Melissa", "bio" : "My hobbies include: softball, drawing, reading" } - { "_id" : 2, "name" : "Brad", "bio" : "My hobbies include: gaming, skateboarding" } - { "_id" : 3, "name" : "Scott", "bio" : "My hobbies include: basketball, music, fishing" } - { "_id" : 4, "name" : "Tracey", "bio" : "My hobbies include: acting, yoga" } - { "_id" : 5, "name" : "Josh", "bio" : "My hobbies include: programming" } + { _id : 1, name : "Melissa", bio : "My hobbies include: softball, drawing, reading" } + { _id : 2, name : "Brad", bio : "My hobbies include: gaming, skateboarding" } + { _id : 3, name : "Scott", bio : "My hobbies include: basketball, music, fishing" } + { _id : 4, name : "Tracey", bio : "My hobbies include: acting, yoga" } + { _id : 5, name : "Josh", bio : "My hobbies include: programming" } Array Concatenation ~~~~~~~~~~~~~~~~~~~ @@ -319,10 +329,12 @@ A collection named ``matrices`` contains the following documents: .. code-block:: javascript - { "_id" : 1, "arr" : [ [ 24, 55, 79 ], [ 14, 78, 35 ], [ 84, 90, 3 ], [ 50, 89, 70 ] ] } - { "_id" : 2, "arr" : [ [ 39, 32, 43, 7 ], [ 62, 17, 80, 64 ], [ 17, 88, 11, 73 ] ] } - { "_id" : 3, "arr" : [ [ 42 ], [ 26, 59 ], [ 17 ], [ 72, 19, 35 ] ] } - { "_id" : 4 } + db.matrices.insertMany( [ + { _id : 1, arr : [ [ 24, 55, 79 ], [ 14, 78, 35 ], [ 84, 90, 3 ], [ 50, 89, 70 ] ] }, + { _id : 2, arr : [ [ 39, 32, 43, 7 ], [ 62, 17, 80, 64 ], [ 17, 88, 11, 73 ] ] }, + { _id : 3, arr : [ [ 42 ], [ 26, 59 ], [ 17 ], [ 72, 19, 35 ] ] }, + { _id : 4 } + ] ) Computing a Single Reduction ```````````````````````````` @@ -335,7 +347,7 @@ The following example collapses the two dimensional arrays into a single array ` [ { $project: { - "collapsed": { + collapsed: { $reduce: { input: "$arr", initialValue: [ ], @@ -350,11 +362,12 @@ The following example collapses the two dimensional arrays into a single array ` The operation returns the following: .. 
code-block:: javascript + :copyable: false - { "_id" : 1, "collapsed" : [ 24, 55, 79, 14, 78, 35, 84, 90, 3, 50, 89, 70 ] } - { "_id" : 2, "collapsed" : [ 39, 32, 43, 7, 62, 17, 80, 64, 17, 88, 11, 73 ] } - { "_id" : 3, "collapsed" : [ 42, 26, 59, 17, 72, 19, 35 ] } - { "_id" : 4, "collapsed" : null } + { _id : 1, collapsed : [ 24, 55, 79, 14, 78, 35, 84, 90, 3, 50, 89, 70 ] } + { _id : 2, collapsed : [ 39, 32, 43, 7, 62, 17, 80, 64, 17, 88, 11, 73 ] } + { _id : 3, collapsed : [ 42, 26, 59, 17, 72, 19, 35 ] } + { _id : 4, collapsed : null } Computing a Multiple Reductions ``````````````````````````````` @@ -368,15 +381,15 @@ creates a new array containing only the first element of each array. [ { $project: { - "results": { + results: { $reduce: { input: "$arr", initialValue: [ ], in: { - "collapsed": { + collapsed: { $concatArrays: [ "$$value.collapsed", "$$this" ] }, - "firstValues": { + firstValues: { $concatArrays: [ "$$value.firstValues", { $slice: [ "$$this", 1 ] } ] } } @@ -390,8 +403,9 @@ creates a new array containing only the first element of each array. The operation returns the following: .. code-block:: javascript + :copyable: false - { "_id" : 1, "results" : { "collapsed" : [ 24, 55, 79, 14, 78, 35, 84, 90, 3, 50, 89, 70 ], "firstValues" : [ 24, 14, 84, 50 ] } } - { "_id" : 2, "results" : { "collapsed" : [ 39, 32, 43, 7, 62, 17, 80, 64, 17, 88, 11, 73 ], "firstValues" : [ 39, 62, 17 ] } } - { "_id" : 3, "results" : { "collapsed" : [ 42, 26, 59, 17, 72, 19, 35 ], "firstValues" : [ 42, 26, 17, 72 ] } } - { "_id" : 4, "results" : null } + { _id : 1, results : { collapsed : [ 24, 55, 79, 14, 78, 35, 84, 90, 3, 50, 89, 70 ], firstValues : [ 24, 14, 84, 50 ] } } + { _id : 2, results : { collapsed : [ 39, 32, 43, 7, 62, 17, 80, 64, 17, 88, 11, 73 ], firstValues : [ 39, 62, 17 ] } } + { _id : 3, results : { collapsed : [ 42, 26, 59, 17, 72, 19, 35 ], firstValues : [ 42, 26, 17, 72 ] } } + { _id : 4, results : null } diff --git a/source/reference/operator/aggregation/replaceAll.txt b/source/reference/operator/aggregation/replaceAll.txt index dfaa05d584f..52359157a82 100644 --- a/source/reference/operator/aggregation/replaceAll.txt +++ b/source/reference/operator/aggregation/replaceAll.txt @@ -15,8 +15,6 @@ Definition .. expression:: $replaceAll - .. versionadded:: 4.4 - Replaces all instances of a search string in an input string with a replacement string. diff --git a/source/reference/operator/aggregation/replaceOne.txt b/source/reference/operator/aggregation/replaceOne.txt index 6a3e2d1e974..5519a52ec03 100644 --- a/source/reference/operator/aggregation/replaceOne.txt +++ b/source/reference/operator/aggregation/replaceOne.txt @@ -15,8 +15,6 @@ Definition .. expression:: $replaceOne - .. versionadded:: 4.4 - Replaces the first instance of a search string in an input string with a replacement string. diff --git a/source/reference/operator/aggregation/sample.txt b/source/reference/operator/aggregation/sample.txt index 00f038d5f1e..76136853b25 100644 --- a/source/reference/operator/aggregation/sample.txt +++ b/source/reference/operator/aggregation/sample.txt @@ -34,6 +34,17 @@ pseudo-random cursor to select the ``N`` documents: - :pipeline:`$sample` is the first stage of the pipeline. - ``N`` is less than 5% of the total documents in the collection. + + .. note:: + + You can't configure the threshold that :pipeline:`$sample` uses to + determine when to scan the entire collection. The thresholds is 5%. 
+ If the size is greater than 5% of the total number of documents in + the collection, :pipeline:`$sample` performs a + :ref:`top-k ` sort by a generated random value. + The top-k sort could spill to disk if the sample documents are + larger than 100MB. + - The collection contains more than 100 documents. If any of the previous conditions are false, :pipeline:`$sample`: diff --git a/source/reference/operator/aggregation/sampleRate.txt b/source/reference/operator/aggregation/sampleRate.txt index 31eda1277fa..1a5c4924317 100644 --- a/source/reference/operator/aggregation/sampleRate.txt +++ b/source/reference/operator/aggregation/sampleRate.txt @@ -15,8 +15,6 @@ Definition .. expression:: $sampleRate - .. versionadded:: 4.4.2 - Matches a random selection of input documents. The number of documents selected approximates the sample rate expressed as a percentage of the total number of documents. diff --git a/source/reference/operator/aggregation/set.txt b/source/reference/operator/aggregation/set.txt index 94fa0501865..da451966c92 100644 --- a/source/reference/operator/aggregation/set.txt +++ b/source/reference/operator/aggregation/set.txt @@ -8,6 +8,9 @@ $set (aggregation) :name: programming_language :values: shell +.. meta:: + :description: Learn about the $set aggregation stage, which adds new fields to documents in the aggregation pipeline. $set allows you to add new fields based on aggregation expressions or empty objects. + .. contents:: On this page :local: :backlinks: none @@ -95,10 +98,10 @@ Create a sample ``scores`` collection with the following: .. code-block:: javascript - db.scores.insertMany([ + db.scores.insertMany( [ { _id: 1, student: "Maya", homework: [ 10, 5, 10 ], quiz: [ 10, 8 ], extraCredit: 0 }, { _id: 2, student: "Ryan", homework: [ 5, 6, 5 ], quiz: [ 8, 8 ], extraCredit: 8 } - ]) + ] ) The following operation uses two :pipeline:`$set` stages to include three new fields in the output documents: @@ -121,27 +124,30 @@ include three new fields in the output documents: The operation returns the following documents: .. code-block:: javascript + :copyable: false - { - "_id" : 1, - "student" : "Maya", - "homework" : [ 10, 5, 10 ], - "quiz" : [ 10, 8 ], - "extraCredit" : 0, - "totalHomework" : 25, - "totalQuiz" : 18, - "totalScore" : 43 - } - { - "_id" : 2, - "student" : "Ryan", - "homework" : [ 5, 6, 5 ], - "quiz" : [ 8, 8 ], - "extraCredit" : 8, - "totalHomework" : 16, - "totalQuiz" : 16, - "totalScore" : 40 - } + [ + { + _id: 1, + student: "Maya", + homework: [ 10, 5, 10 ], + quiz: [ 10, 8 ], + extraCredit: 0, + totalHomework: 25, + totalQuiz: 18, + totalScore: 43 + }, + { + _id: 2, + student: "Ryan", + homework: [ 5, 6, 5 ], + quiz: [ 8, 8 ], + extraCredit: 8, + totalHomework: 16, + totalQuiz: 16, + totalScore: 40 + } + ] Adding Fields to an Embedded Document ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -154,11 +160,11 @@ Create a sample collection ``vehicles`` with the following: .. code-block:: javascript - db.vehicles.insertMany([ + db.vehicles.insertMany( [ { _id: 1, type: "car", specs: { doors: 4, wheels: 4 } }, { _id: 2, type: "motorcycle", specs: { doors: 0, wheels: 2 } }, { _id: 3, type: "jet ski" } - ]) + ] ) The following aggregation operation adds a new field ``fuel_type`` to the embedded document ``specs``. @@ -172,10 +178,13 @@ the embedded document ``specs``. The operation returns the following results: .. 
code-block:: javascript + :copyable: false - { _id: 1, type: "car", specs: { doors: 4, wheels: 4, fuel_type: "unleaded" } } - { _id: 2, type: "motorcycle", specs: { doors: 0, wheels: 2, fuel_type: "unleaded" } } - { _id: 3, type: "jet ski", specs: { fuel_type: "unleaded" } } + [ + { _id: 1, type: "car", specs: { doors: 4, wheels: 4, fuel_type: "unleaded" } }, + { _id: 2, type: "motorcycle", specs: { doors: 0, wheels: 2, fuel_type: "unleaded" } }, + { _id: 3, type: "jet ski", specs: { fuel_type: "unleaded" } } + ] Overwriting an existing field ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -194,14 +203,15 @@ The following :pipeline:`$set` operation overrides the ``cats`` field: .. code-block:: javascript db.animals.aggregate( [ - { $set: { "cats": 20 } } + { $set: { cats: 20 } } ] ) The operation returns the following document: .. code-block:: javascript + :copyable: false - { _id: 1, dogs: 10, cats: 20 } + [ { _id: 1, dogs: 10, cats: 20 } ] It is possible to replace one field with another. In the following example the ``item`` field substitutes for the ``_id`` field. @@ -211,11 +221,11 @@ documents: .. code-block:: javascript - db.fruits.insertMany([ - { "_id" : 1, "item" : "tangerine", "type" : "citrus" }, - { "_id" : 2, "item" : "lemon", "type" : "citrus" }, - { "_id" : 3, "item" : "grapefruit", "type" : "citrus" } - ]) + db.fruits.insertMany( [ + { _id: 1, item: "tangerine", type: "citrus" }, + { _id: 2, item: "lemon", type: "citrus" }, + { _id: 3, item: "grapefruit", type: "citrus" } + ] ) The following aggregration operation uses ``$set`` to replace the ``_id`` field of each document with the value of the ``item`` field, @@ -224,16 +234,19 @@ and replaces the ``item`` field with a string ``"fruit"``. .. code-block:: javascript db.fruits.aggregate( [ - { $set: { _id : "$item", item: "fruit" } } + { $set: { _id: "$item", item: "fruit" } } ] ) The operation returns the following: .. code-block:: javascript + :copyable: false - { "_id" : "tangerine", "item" : "fruit", "type" : "citrus" } - { "_id" : "lemon", "item" : "fruit", "type" : "citrus" } - { "_id" : "grapefruit", "item" : "fruit", "type" : "citrus" } + [ + { _id: "tangerine", item: "fruit", type: "citrus" }, + { _id: "lemon", item: "fruit", type: "citrus" }, + { _id: "grapefruit", item: "fruit", type: "citrus" } + ] .. _set-add-element-to-array: @@ -244,10 +257,10 @@ Create a sample ``scores`` collection with the following: .. code-block:: javascript - db.scores.insertMany([ + db.scores.insertMany( [ { _id: 1, student: "Maya", homework: [ 10, 5, 10 ], quiz: [ 10, 8 ], extraCredit: 0 }, { _id: 2, student: "Ryan", homework: [ 5, 6, 5 ], quiz: [ 8, 8 ], extraCredit: 8 } - ]) + ] ) You can use :pipeline:`$set` with a :expression:`$concatArrays` expression to add an element to an existing array field. For example, @@ -258,17 +271,17 @@ score ``[ 7 ]``. .. code-block:: javascript - db.scores.aggregate([ + db.scores.aggregate( [ { $match: { _id: 1 } }, { $set: { homework: { $concatArrays: [ "$homework", [ 7 ] ] } } } - ]) + ] ) The operation returns the following: .. code-block:: javascript :copyable: false - { "_id" : 1, "student" : "Maya", "homework" : [ 10, 5, 10, 7 ], "quiz" : [ 10, 8 ], "extraCredit" : 0 } + [ { _id: 1, student: "Maya", homework: [ 10, 5, 10, 7 ], quiz: [ 10, 8 ], extraCredit: 0 } ] Creating a New Field with Existing Fields ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -277,10 +290,10 @@ Create a sample ``scores`` collection with the following: .. 
code-block:: javascript - db.scores.insertMany([ + db.scores.insertMany( [ { _id: 1, student: "Maya", homework: [ 10, 5, 10 ], quiz: [ 10, 8 ], extraCredit: 0 }, { _id: 2, student: "Ryan", homework: [ 5, 6, 5 ], quiz: [ 8, 8 ], extraCredit: 8 } - ]) + ] ) The following aggregation operation adds a new field ``quizAverage`` to each document that contains the average of the ``quiz`` array. diff --git a/source/reference/operator/aggregation/setIsSubset.txt b/source/reference/operator/aggregation/setIsSubset.txt index ff6e8165002..6c7290faa46 100644 --- a/source/reference/operator/aggregation/setIsSubset.txt +++ b/source/reference/operator/aggregation/setIsSubset.txt @@ -103,7 +103,7 @@ The operation returns the following results: { "flowerFieldA" : [ "rose", "orchid" ], "flowerFieldB" : [ "rose", "orchid" ], "AisSubset" : true } { "flowerFieldA" : [ "rose", "orchid" ], "flowerFieldB" : [ "orchid", "rose", "orchid" ], "AisSubset" : true } - { "flowerFieldA" : [ "rose", "orchid" ], "flowerFieldB" : [ "rose", "blue", "jasmine" ], "AisSubset" : true } + { "flowerFieldA" : [ "rose", "orchid" ], "flowerFieldB" : [ "rose", "orchid", "jasmine" ], "AisSubset" : true } { "flowerFieldA" : [ "rose", "orchid" ], "flowerFieldB" : [ "jasmine", "rose" ], "AisSubset" : false } { "flowerFieldA" : [ "rose", "orchid" ], "flowerFieldB" : [ ], "AisSubset" : false } { "flowerFieldA" : [ "rose", "orchid" ], "flowerFieldB" : [ [ "rose" ], [ "orchid" ] ], "AisSubset" : false } diff --git a/source/reference/operator/aggregation/shardedDataDistribution.txt b/source/reference/operator/aggregation/shardedDataDistribution.txt index f820818a01c..63a249be322 100644 --- a/source/reference/operator/aggregation/shardedDataDistribution.txt +++ b/source/reference/operator/aggregation/shardedDataDistribution.txt @@ -51,35 +51,51 @@ following fields: Examples -------- +Return All Sharded Data Distibution Metrics +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +To return all sharded data distribution metrics, run the following: + .. code-block:: javascript - db.aggregate( [ + db.aggregate([ { $shardedDataDistribution: { } } - ] ) + ]) Example output: -.. code-block:: json - - [ - { - "ns": "test.names", - "shards": [ - { - "shardName": "shard-1", - "numOrphanedDocs": 0, - "numOwnedDocuments": 6, - "ownedSizeBytes": 366, - "orphanedSizeBytes": 0 - }, - { - "shardName": "shard-2", - "numOrphanedDocs": 0, - "numOwnedDocuments": 6, - "ownedSizeBytes": 366, - "orphanedSizeBytes": 0 - } - ] - } - ] +.. include:: /includes/shardedDataDistribution-output-example.rst + +Return Metrics for a Specific Shard +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +To return sharded data distribution metrics for a specific shard, +run the following: + +.. code-block:: javascript + + db.aggregate([ + { $shardedDataDistribution: { } }, + { $match: { "shards.shardName": "" } } + ]) + +Return Metrics for a Namespace +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +To return sharded data distribution data for a namespace, run the +following: + +.. code-block:: javascript + + db.aggregate([ + { $shardedDataDistribution: { } }, + { $match: { "ns": "." } } + ]) + +.. _shardedDataDistribution-no-orphaned-docs: + +Confirm No Orphaned Documents Remain +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +.. 
include:: /includes/shardedDataDistribution-orphaned-docs.rst diff --git a/source/reference/operator/aggregation/size.txt b/source/reference/operator/aggregation/size.txt index b0b66a16ae0..9c88d05f83f 100644 --- a/source/reference/operator/aggregation/size.txt +++ b/source/reference/operator/aggregation/size.txt @@ -8,6 +8,9 @@ $size (aggregation) :name: programming_language :values: shell +.. meta:: + :description: Learn about the $size aggregation operator, which counts and returns the total number of items in an array. + .. contents:: On this page :local: :backlinks: none diff --git a/source/reference/operator/aggregation/sort.txt b/source/reference/operator/aggregation/sort.txt index afcda13657d..c65283a9a4e 100644 --- a/source/reference/operator/aggregation/sort.txt +++ b/source/reference/operator/aggregation/sort.txt @@ -8,6 +8,9 @@ $sort (aggregation) :name: programming_language :values: shell +.. meta:: + :description: Learn about the $sort aggregation operator, which sorts all input documents and returns them to the pipeline in sorted order. + .. contents:: On this page :local: :backlinks: none diff --git a/source/reference/operator/aggregation/split.txt b/source/reference/operator/aggregation/split.txt index ae02591d100..54d709cdf68 100644 --- a/source/reference/operator/aggregation/split.txt +++ b/source/reference/operator/aggregation/split.txt @@ -145,13 +145,15 @@ A collection named ``deliveries`` contains the following documents: .. code-block:: javascript - { "_id" : 1, "city" : "Berkeley, CA", "qty" : 648 } - { "_id" : 2, "city" : "Bend, OR", "qty" : 491 } - { "_id" : 3, "city" : "Kensington, CA", "qty" : 233 } - { "_id" : 4, "city" : "Eugene, OR", "qty" : 842 } - { "_id" : 5, "city" : "Reno, NV", "qty" : 655 } - { "_id" : 6, "city" : "Portland, OR", "qty" : 408 } - { "_id" : 7, "city" : "Sacramento, CA", "qty" : 574 } + db.deliveries.insertMany( [ + { _id: 1, city: "Berkeley, CA", qty: 648 }, + { _id: 2, city: "Bend, OR", qty: 491 }, + { _id: 3, city: "Kensington, CA", qty: 233 }, + { _id: 4, city: "Eugene, OR", qty: 842 }, + { _id: 5, city: "Reno, NV", qty: 655 }, + { _id: 6, city: "Portland, OR", qty: 408 }, + { _id: 7, city: "Sacramento, CA", qty: 574 } + ] ) The goal of following aggregation operation is to find the total quantity of deliveries for each state and sort the list in @@ -176,18 +178,21 @@ descending order. It has five pipeline stages: .. code-block:: javascript - db.deliveries.aggregate([ - { $project : { city_state : { $split: ["$city", ", "] }, qty : 1 } }, - { $unwind : "$city_state" }, - { $match : { city_state : /[A-Z]{2}/ } }, - { $group : { _id: { "state" : "$city_state" }, total_qty : { "$sum" : "$qty" } } }, - { $sort : { total_qty : -1 } } - ]); + db.deliveries.aggregate( [ + { $project: { city_state: { $split: ["$city", ", "] }, qty: 1 } }, + { $unwind: "$city_state" }, + { $match: { city_state: /[A-Z]{2}/ } }, + { $group: { _id: { state: "$city_state" }, total_qty: { $sum: "$qty" } } }, + { $sort: { total_qty: -1 } } + ] ) The operation returns the following results: .. 
code-block:: javascript + :copyable: false - { "_id" : { "state" : "OR" }, "total_qty" : 1741 } - { "_id" : { "state" : "CA" }, "total_qty" : 1455 } - { "_id" : { "state" : "NV" }, "total_qty" : 655 } + [ + { _id: { state: "OR" }, total_qty: 1741 }, + { _id: { state: "CA" }, total_qty: 1455 }, + { _id: { state: "NV" }, total_qty: 655 } + ] diff --git a/source/reference/operator/aggregation/stdDevPop.txt b/source/reference/operator/aggregation/stdDevPop.txt index e39c2bc60e1..43f44353f73 100644 --- a/source/reference/operator/aggregation/stdDevPop.txt +++ b/source/reference/operator/aggregation/stdDevPop.txt @@ -100,24 +100,26 @@ Examples Use in ``$group`` Stage ~~~~~~~~~~~~~~~~~~~~~~~ -A collection named ``users`` contains the following documents: +Create a collection called ``users`` with the following documents: .. code-block:: javascript - { "_id" : 1, "name" : "dave123", "quiz" : 1, "score" : 85 } - { "_id" : 2, "name" : "dave2", "quiz" : 1, "score" : 90 } - { "_id" : 3, "name" : "ahn", "quiz" : 1, "score" : 71 } - { "_id" : 4, "name" : "li", "quiz" : 2, "score" : 96 } - { "_id" : 5, "name" : "annT", "quiz" : 2, "score" : 77 } - { "_id" : 6, "name" : "ty", "quiz" : 2, "score" : 82 } + db.users.insertMany( [ + { _id : 1, name : "dave123", quiz : 1, score : 85 }, + { _id : 2, name : "dave2", quiz : 1, score : 90 }, + { _id : 3, name : "ahn", quiz : 1, score : 71 }, + { _id : 4, name : "li", quiz : 2, score : 96 }, + { _id : 5, name : "annT", quiz : 2, score : 77 }, + { _id : 6, name : "ty", quiz : 2, score : 82 } + ] ) The following example calculates the standard deviation of each quiz: .. code-block:: javascript - db.users.aggregate([ + db.users.aggregate( [ { $group: { _id: "$quiz", stdDev: { $stdDevPop: "$score" } } } - ]) + ] ) The operation returns the following results: @@ -135,40 +137,40 @@ documents: .. code-block:: javascript - db.quizzes.insertMany([ + db.quizzes.insertMany( [ { - "_id" : 1, - "scores" : [ - { "name" : "dave123", "score" : 85 }, - { "name" : "dave2", "score" : 90 }, - { "name" : "ahn", "score" : 71 } + _id : 1, + scores : [ + { name : "dave123", score : 85 }, + { name : "dave2", score : 90 }, + { name : "ahn", score : 71 } ] }, { - "_id" : 2, - "scores" : [ - { "name" : "li", "quiz" : 2, "score" : 96 }, - { "name" : "annT", "score" : 77 }, - { "name" : "ty", "score" : 82 } + _id : 2, + scores : [ + { name : "li", quiz : 2, score : 96 }, + { name : "annT", score : 77 }, + { name : "ty", score : 82 } ] } - ]) + ] ) The following example calculates the standard deviation of each quiz: .. code-block:: javascript - db.quizzes.aggregate([ + db.quizzes.aggregate( [ { $project: { stdDev: { $stdDevPop: "$scores.score" } } } - ]) + ] ) The operation returns the following results: .. code-block:: javascript :copyable: false - { "_id" : 1, "stdDev" : 8.04155872120988 } - { "_id" : 2, "stdDev" : 8.04155872120988 } + { _id : 1, stdDev : 8.04155872120988 } + { _id : 2, stdDev : 8.04155872120988 } Use in ``$setWindowFields`` Stage ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -222,15 +224,15 @@ value for ``CA`` and ``WA`` is shown in the .. 
code-block:: javascript :copyable: false - { "_id" : 4, "type" : "strawberry", "orderDate" : ISODate("2019-05-18T16:09:01Z"), - "state" : "CA", "price" : 41, "quantity" : 162, "stdDevPopQuantityForState" : 0 } - { "_id" : 0, "type" : "chocolate", "orderDate" : ISODate("2020-05-18T14:10:30Z"), - "state" : "CA", "price" : 13, "quantity" : 120, "stdDevPopQuantityForState" : 21 } - { "_id" : 2, "type" : "vanilla", "orderDate" : ISODate("2021-01-11T06:31:15Z"), - "state" : "CA", "price" : 12, "quantity" : 145, "stdDevPopQuantityForState" : 17.249798710580816 } - { "_id" : 5, "type" : "strawberry", "orderDate" : ISODate("2019-01-08T06:12:03Z"), - "state" : "WA", "price" : 43, "quantity" : 134, "stdDevPopQuantityForState" : 0 } - { "_id" : 3, "type" : "vanilla", "orderDate" : ISODate("2020-02-08T13:13:23Z"), - "state" : "WA", "price" : 13, "quantity" : 104, "stdDevPopQuantityForState" : 15 } - { "_id" : 1, "type" : "chocolate", "orderDate" : ISODate("2021-03-20T11:30:05Z"), - "state" : "WA", "price" : 14, "quantity" : 140, "stdDevPopQuantityForState" : 15.748015748023622 } + { _id : 4, type : "strawberry", orderDate : ISODate("2019-05-18T16:09:01Z"), + state : "CA", price : 41, quantity : 162, stdDevPopQuantityForState : 0 } + { _id : 0, type : "chocolate", orderDate : ISODate("2020-05-18T14:10:30Z"), + state : "CA", price : 13, quantity : 120, stdDevPopQuantityForState : 21 } + { _id : 2, type : "vanilla", orderDate : ISODate("2021-01-11T06:31:15Z"), + state : "CA", price : 12, quantity : 145, stdDevPopQuantityForState : 17.249798710580816 } + { _id : 5, type : "strawberry", orderDate : ISODate("2019-01-08T06:12:03Z"), + state : "WA", price : 43, quantity : 134, stdDevPopQuantityForState : 0 } + { _id : 3, type : "vanilla", orderDate : ISODate("2020-02-08T13:13:23Z"), + state : "WA", price : 13, quantity : 104, stdDevPopQuantityForState : 15 } + { _id : 1, type : "chocolate", orderDate : ISODate("2021-03-20T11:30:05Z"), + state : "WA", price : 14, quantity : 140, stdDevPopQuantityForState : 15.748015748023622 } diff --git a/source/reference/operator/aggregation/sum.txt b/source/reference/operator/aggregation/sum.txt index 1277de12edf..850746a6539 100644 --- a/source/reference/operator/aggregation/sum.txt +++ b/source/reference/operator/aggregation/sum.txt @@ -8,6 +8,9 @@ $sum (aggregation) :name: programming_language :values: shell +.. meta:: + :description: Learn about the $sum aggregation operator, which calculates and return the collective sum of numeric values. $sum ignores non-numeric values. + .. contents:: On this page :local: :backlinks: none @@ -36,8 +39,7 @@ Compatibility Syntax ------ -When used in the :pipeline:`$bucket`, :pipeline:`$bucketAuto`, -:pipeline:`$group`, and :pipeline:`$setWindowFields` stages, +When used as an :ref:`accumulator `, :group:`$sum` has this syntax: .. code-block:: none @@ -45,23 +47,12 @@ When used in the :pipeline:`$bucket`, :pipeline:`$bucketAuto`, { $sum: } -When used in other supported stages, :group:`$sum` has one of -two syntaxes: - -- :group:`$sum` has one specified expression as its operand: - - .. code-block:: none - :copyable: false - - { $sum: } +When not used as an accumulator, :group:`$sum` has this syntax: -- :group:`$sum` has a list of specified expressions as its - operand: - - .. code-block:: none - :copyable: false +.. code-block:: none + :copyable: false - { $sum: [ , ... ] } + { $sum: [ , ... ] } For more information on expressions, see :ref:`aggregation-expressions`. 
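For example, the two forms can be compared side by side. The following is a minimal sketch that assumes a hypothetical ``orders`` collection with numeric ``qty``, ``qtyOnHand``, and ``qtyOnOrder`` fields; it illustrates the syntax above and is not part of the reference itself.

.. code-block:: javascript

   // Accumulator form: a single expression, summed across the documents in each group.
   db.orders.aggregate( [
      { $group: { _id: "$state", totalQty: { $sum: "$qty" } } }
   ] )

   // Expression form: an array of expressions, summed within each individual document.
   db.orders.aggregate( [
      { $project: { combinedQty: { $sum: [ "$qtyOnHand", "$qtyOnOrder" ] } } }
   ] )

In both forms, ``$sum`` ignores non-numeric values.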
diff --git a/source/reference/operator/aggregation/toBool.txt b/source/reference/operator/aggregation/toBool.txt index 59fa09a0b13..9a2d8fd17f8 100644 --- a/source/reference/operator/aggregation/toBool.txt +++ b/source/reference/operator/aggregation/toBool.txt @@ -52,44 +52,9 @@ Behavior The following table lists the input types that can be converted to a boolean: -.. list-table:: - :header-rows: 1 - :widths: 55 50 - - * - Input Type - - Behavior - - * - Boolean - - No-op. Returns the boolean value. - - * - Double - - | Returns true if not zero. - | Return false if zero. - - * - Decimal - - | Returns true if not zero. - | Return false if zero. - - * - Integer - - - | Returns true if not zero. - | Return false if zero. - - * - Long - - - | Returns true if not zero. - | Return false if zero. - - * - ObjectId - - - | Returns true. - - * - String - - | Returns true. - - * - Date +.. |null-description| replace:: Returns null - - | Returns true. +.. include:: /includes/aggregation/convert-to-bool-table.rst The following table lists some conversion to boolean examples: diff --git a/source/reference/operator/aggregation/toHashedIndexKey.txt b/source/reference/operator/aggregation/toHashedIndexKey.txt new file mode 100644 index 00000000000..85ead5e86c2 --- /dev/null +++ b/source/reference/operator/aggregation/toHashedIndexKey.txt @@ -0,0 +1,66 @@ +=============================== +$toHashedIndexKey (aggregation) +=============================== + +.. default-domain:: mongodb + +.. contents:: On this page + :local: + :backlinks: none + :depth: 1 + :class: singlecol + +Definition +---------- + +.. expression:: $toHashedIndexKey + + Computes and returns the hash value of the input expression using + the same hash function that MongoDB uses to create a hashed index. + A hash function maps a key or string to a fixed-size numeric + value. + + .. note:: + + Unlike hashed indexes, the ``$toHashedIndexKey`` + aggregation operator does **not** account for collation. + This means the operator can produce a hash that does not + match that of a hashed index based on the same data. + +Syntax +------ + +``$toHashedIndexKey`` has the following syntax: + +.. code-block:: javascript + + { $toHashedIndexKey: } + +Example +------- + +You can use ``$toHashedIndexKey`` to compute the hashed value of a +string in an aggregation pipeline. This example computes the hashed +value of the string ``"string to hash"``: + +.. code-block:: javascript + :emphasize-lines: 4 + + db.aggregate( + [ + { $documents: [ { val: "string to hash" } ] }, + { $addFields: { hashedVal: { $toHashedIndexKey: "$val" } } } + ] + ) + +Example output: + +.. code-block:: javascript + :copyable: false + + [ { val: 'string to hash', hashedVal: Long("763543691661428748") } ] + +Learn More +---------- + +- :method:`convertShardKeyToHashed()` diff --git a/source/reference/operator/aggregation/toUpper.txt b/source/reference/operator/aggregation/toUpper.txt index 36ae121f9aa..e9eb147bc82 100644 --- a/source/reference/operator/aggregation/toUpper.txt +++ b/source/reference/operator/aggregation/toUpper.txt @@ -48,7 +48,7 @@ Consider a ``inventory`` collection with the following documents: { "_id" : 2, "item" : "abc2", quarter: "13Q4", "description" : "Product 2" } { "_id" : 3, "item" : "xyz1", quarter: "14Q2", "description" : null } -The following operation uses the :expression:`$toUpper` operator return +The following operation uses the :expression:`$toUpper` operator to return uppercase ``item`` and uppercase ``description`` values: .. 
code-block:: javascript diff --git a/source/reference/operator/aggregation/type.txt b/source/reference/operator/aggregation/type.txt index 0a1a9498e88..2071eb2bc6d 100644 --- a/source/reference/operator/aggregation/type.txt +++ b/source/reference/operator/aggregation/type.txt @@ -30,7 +30,7 @@ Definition .. seealso:: - - :expression:`$isNumber` - checks if the argument is a number. *New in MongoDB 4.4* + - :expression:`$isNumber` - checks if the argument is a number. - :query:`$type (Query) <$type>` - filters fields based on BSON type. Behavior diff --git a/source/reference/operator/aggregation/unionWith.txt b/source/reference/operator/aggregation/unionWith.txt index 270d1daee07..a4397e01e5d 100644 --- a/source/reference/operator/aggregation/unionWith.txt +++ b/source/reference/operator/aggregation/unionWith.txt @@ -15,8 +15,6 @@ Definition .. pipeline:: $unionWith - .. versionadded:: 4.4 - Performs a union of two collections. :pipeline:`$unionWith` combines pipeline results from two collections into a single result set. The stage outputs the combined result set (including duplicates) to the next stage. @@ -190,14 +188,13 @@ sharded: Collation ~~~~~~~~~ -If the :method:`db.collection.aggregate()` includes a :ref:`collation -`, that collation is used for -the operation, ignoring any other collations. +If the :method:`db.collection.aggregate()` includes a ``collation`` +document, that collation is used for the operation, ignoring any other +collations. If the :method:`db.collection.aggregate()` does not include a -:ref:`collation `, the -:method:`db.collection.aggregate()` method uses the collation for the -top-level collection/view on which the +``collation`` document, the :method:`db.collection.aggregate()` method +uses the collation for the top-level collection/view on which the :method:`db.collection.aggregate()` is run: - If the :ref:`$unionWith coll ` is a collection, its diff --git a/source/reference/operator/aggregation/unwind.txt b/source/reference/operator/aggregation/unwind.txt index a48530d2f80..819b31adc2c 100644 --- a/source/reference/operator/aggregation/unwind.txt +++ b/source/reference/operator/aggregation/unwind.txt @@ -8,6 +8,9 @@ $unwind (aggregation) :name: programming_language :values: shell +.. meta:: + :description: Learn about the $unwind aggregation stage, which deconstructs array fields to output a document for each element. + .. contents:: On this page :local: :backlinks: none diff --git a/source/reference/operator/aggregation/vectorSearch.txt b/source/reference/operator/aggregation/vectorSearch.txt index dfda1476dc9..d5024a12a11 100644 --- a/source/reference/operator/aggregation/vectorSearch.txt +++ b/source/reference/operator/aggregation/vectorSearch.txt @@ -6,7 +6,7 @@ $vectorSearch (aggregation) ``$vectorSearch`` performs a semantic search on data in your Atlas cluster. If you store vector embeddings that are less than or equal to -2048 dimensions in width for any kind of data along with other data in +4096 dimensions in width for any kind of data along with other data in your collection on the Atlas cluster, you can seamlessly index the vector data along with your other data. You can then use the :pipeline:`$vectorSearch` stage to pre-filter your data and perform diff --git a/source/reference/operator/meta/natural.txt b/source/reference/operator/meta/natural.txt index bf959dd036a..f55f132aed5 100644 --- a/source/reference/operator/meta/natural.txt +++ b/source/reference/operator/meta/natural.txt @@ -10,12 +10,10 @@ Definition .. operator:: $natural -.. 
versionchanged:: 4.4 +Use in conjunction with :method:`cursor.hint()` to perform a +collection scan to return documents in :term:`natural order`. - Use in conjunction with :method:`cursor.hint()` to perform a - collection scan to return documents in :term:`natural order`. +For usage, see :ref:`hint-collection-scans` example in the +:method:`cursor.hint()` reference page. - For usage, see :ref:`hint-collection-scans` example in the - :method:`cursor.hint()` reference page. - - .. include:: /includes/extracts/4.4-changes-natural-sort-views.rst +.. include:: /includes/extracts/4.4-changes-natural-sort-views.rst diff --git a/source/reference/operator/projection/elemMatch.txt b/source/reference/operator/projection/elemMatch.txt index 9b2ab309788..eb3b31fcc51 100644 --- a/source/reference/operator/projection/elemMatch.txt +++ b/source/reference/operator/projection/elemMatch.txt @@ -68,7 +68,8 @@ assumes a collection ``schools`` with the following documents: students: [ { name: "ajax", school: 100, age: 7 }, { name: "achilles", school: 100, age: 8 }, - ] + ], + athletics: [ "swimming", "basketball", "football" ] } { _id: 3, @@ -76,7 +77,8 @@ assumes a collection ``schools`` with the following documents: students: [ { name: "ajax", school: 100, age: 7 }, { name: "achilles", school: 100, age: 8 }, - ] + ], + athletics: [ "baseball", "basketball", "soccer" ] } { _id: 4, @@ -151,6 +153,43 @@ The operation returns the three documents that have ``zipcode`` equal to ``63109 The document with ``_id`` equal to ``3`` does not contain the ``students`` field since no array element matched the :projection:`$elemMatch` criteria. +The argument to :projection:`$elemMatch` matches elements of the array that +``$elemMatch`` is projecting. If you specify an equality with a field +name to ``$elemMatch``, it attempts to match objects within the array. +For example, ``$elemMatch`` attempts to match objects, instead of scalar +values, within the array for the following in the projection: + +.. code-block:: javascript + + db.schools.find( { zipcode: "63109" }, + { athletics: { $elemMatch: { athletics: "basketball" } } }) + +To match scalar values, use the equality operator along with the scalar +value that you want to match (``{$eq: }``). For example, +the following :method:`~db.collection.find()` operation queries for all +documents where the value of the ``zipcode`` field is ``63109``. The +projection includes the matching element of the ``athletics`` array +where the value is ``basketball``: + +.. code-block:: javascript + + db.schools.find( { zipcode: "63109" }, + { athletics: { $elemMatch: { $eq: "basketball" } } }) + +The operation returns the three documents that have ``zipcode`` equal to +``63109``: + +.. code-block:: javascript + + [ + { _id : 1 }, + { _id: 3, athletics: [ 'basketball' ] }, + { _id : 4 } + ] + +The document with ``_id`` equal to ``3`` is the only document that +matched the :projection:`$elemMatch` criteria. + .. seealso:: :projection:`$ (projection) <$>` operator diff --git a/source/reference/operator/projection/positional.txt b/source/reference/operator/projection/positional.txt index e8614f4373f..8f17bb5e483 100644 --- a/source/reference/operator/projection/positional.txt +++ b/source/reference/operator/projection/positional.txt @@ -59,22 +59,19 @@ condition on the array: db.collection.find( { : ...}, { ".$": 1 } ) -.. versionchanged:: 4.4 +You can use the :projection:`$` operator to limit an ```` field, which +does not appear in the :ref:`query document `. 
+In previous versions of MongoDB, the ```` field being limited +**must** appear in the query document. - You can use the :projection:`$` operator to limit an ```` - field which does not appear in the - :ref:`query document `. In previous - versions of MongoDB, the ```` field being limited - **must** appear in the query document. - - .. code-block:: javascript - - db.collection.find( { : ... }, - { ".$" : 1 } ) - - .. important:: +.. code-block:: javascript + + db.collection.find( { : ... }, + { ".$" : 1 } ) - .. include:: /includes/fact-behavior-project-different-array.rst +.. important:: + + .. include:: /includes/fact-behavior-project-different-array.rst .. _array-field-limitation: diff --git a/source/reference/operator/query-comparison.txt b/source/reference/operator/query-comparison.txt index 6541041176d..edeaafdae88 100644 --- a/source/reference/operator/query-comparison.txt +++ b/source/reference/operator/query-comparison.txt @@ -8,6 +8,9 @@ Comparison operators return data based on value comparisons. .. default-domain:: mongodb +.. meta:: + :description: Learn about the comparison query operators in MongoDB. The $eq, $gt, $gte, $in, $lt, $lte, $ne, and $nin operators filter documents based on value conditions. + .. contents:: On this page :local: :backlinks: none diff --git a/source/reference/operator/query-logical.txt b/source/reference/operator/query-logical.txt index 1ca19e0a917..573b0437d7d 100644 --- a/source/reference/operator/query-logical.txt +++ b/source/reference/operator/query-logical.txt @@ -4,6 +4,9 @@ Logical Query Operators .. default-domain:: mongodb +.. meta:: + :description: Learn about logical query operators in MongoDB. The $and, $not, $nor, and $or operators help you build complex queries based on logical conditions. + .. contents:: On this page :local: :backlinks: none diff --git a/source/reference/operator/query.txt b/source/reference/operator/query.txt index b407fdca4cd..aab49307aa6 100644 --- a/source/reference/operator/query.txt +++ b/source/reference/operator/query.txt @@ -6,6 +6,9 @@ Query and Projection Operators .. default-domain:: mongodb +.. meta:: + :description: Learn about the query and projection operators in MongoDB. These query selectors, projection operators, and miscellaneous operators help with advanced querying and projection. + .. contents:: On this page :local: :backlinks: none diff --git a/source/reference/operator/query/all.txt b/source/reference/operator/query/all.txt index 7a7b9333bb1..87ec3ce2634 100644 --- a/source/reference/operator/query/all.txt +++ b/source/reference/operator/query/all.txt @@ -8,6 +8,9 @@ $all :name: programming_language :values: shell +.. meta:: + :description: Use the $all operator to efficiently query documents that contain arrays with specific elements. + .. contents:: On this page :local: :backlinks: none diff --git a/source/reference/operator/query/and.txt b/source/reference/operator/query/and.txt index 22d87bfca7e..5bfa5c1f568 100644 --- a/source/reference/operator/query/and.txt +++ b/source/reference/operator/query/and.txt @@ -8,6 +8,9 @@ $and :name: programming_language :values: shell +.. meta:: + :description: Use the $and operator to perform a logical AND operation on expressions, selecting documents satisfying all conditions. MongoDB's query optimizer considers available indexes for optimization. + .. 
contents:: On this page :local: :backlinks: none @@ -50,7 +53,7 @@ Behavior When evaluating the clauses in the :query:`$and` expression, MongoDB's query optimizer considers which indexes are available that could help satisfy clauses of the :query:`$and` expression when -:doc:`selecting the best plan to execute`. +:ref:`selecting the best plan to execute `. .. include:: /includes/and-or-behavior.rst diff --git a/source/reference/operator/query/bitsAllClear.txt b/source/reference/operator/query/bitsAllClear.txt index 862794ae2e2..edccc53d0c8 100644 --- a/source/reference/operator/query/bitsAllClear.txt +++ b/source/reference/operator/query/bitsAllClear.txt @@ -84,13 +84,13 @@ The following query uses the :query:`$bitsAllClear` operator: .. code-block:: javascript - db.collection.find( { a: { $bitsAllClear: BinData(0, "ID==") } } ) + db.collection.find( { a: { $bitsAllClear: BinData(0, "IA==") } } ) The query: - Specifies ``0`` as the first value for :bsontype:`BinData - `, which indicates ``ID==`` is to be interpreted as - binary. The base-64 value ``ID==`` in binary is ``00100000``, which + `, which indicates ``IA==`` should be interpreted as + binary. The base-64 value ``IA==`` in binary is ``00100000``, which has ``1`` in position 5. - Uses :query:`$bitsAllClear` to return documents where the ``a`` field diff --git a/source/reference/operator/query/bitsAllSet.txt b/source/reference/operator/query/bitsAllSet.txt index a86a4ecfa05..b1f1b5431e0 100644 --- a/source/reference/operator/query/bitsAllSet.txt +++ b/source/reference/operator/query/bitsAllSet.txt @@ -81,11 +81,11 @@ BinData Bitmask The following query uses the :query:`$bitsAllSet` operator to test whether field ``a`` has bits set at positions ``4`` and ``5`` -(the binary representation of ``BinData(0, "MC==")`` is ``00110000``). +(the binary representation of ``BinData(0, "MA==")`` is ``00110000``). .. code-block:: javascript - db.collection.find( { a: { $bitsAllSet: BinData(0, "MC==") } } ) + db.collection.find( { a: { $bitsAllSet: BinData(0, "MA==") } } ) The query matches the following document: diff --git a/source/reference/operator/query/bitsAnyClear.txt b/source/reference/operator/query/bitsAnyClear.txt index c84297fad31..9f065f365db 100644 --- a/source/reference/operator/query/bitsAnyClear.txt +++ b/source/reference/operator/query/bitsAnyClear.txt @@ -83,11 +83,11 @@ BinData Bitmask ~~~~~~~~~~~~~~~ The following query uses the :query:`$bitsAnyClear` operator to test whether field ``a`` has any bits clear at positions ``4`` and ``5`` -(the binary representation of ``BinData(0, "MC==")`` is ``00110000``). +(the binary representation of ``BinData(0, "MA==")`` is ``00110000``). .. code-block:: javascript - db.collection.find( { a: { $bitsAnyClear: BinData(0, "MC==") } } ) + db.collection.find( { a: { $bitsAnyClear: BinData(0, "MA==") } } ) The query matches the following documents: diff --git a/source/reference/operator/query/bitsAnySet.txt b/source/reference/operator/query/bitsAnySet.txt index 8c62434be1f..a7d87277f5c 100644 --- a/source/reference/operator/query/bitsAnySet.txt +++ b/source/reference/operator/query/bitsAnySet.txt @@ -82,11 +82,11 @@ BinData Bitmask The following query uses the :query:`$bitsAnySet` operator to test whether field ``a`` has any bits set at positions ``4``, and ``5`` -(the binary representation of ``BinData(0, "MC==")`` is ``00110000``). +(the binary representation of ``BinData(0, "MA==")`` is ``00110000``). .. 
code-block:: javascript - db.collection.find( { a: { $bitsAnySet: BinData(0, "MC==") } } ) + db.collection.find( { a: { $bitsAnySet: BinData(0, "MA==") } } ) The query matches the following documents: diff --git a/source/reference/operator/query/comment.txt b/source/reference/operator/query/comment.txt index 054aacbe7ba..e5cd6fa4449 100644 --- a/source/reference/operator/query/comment.txt +++ b/source/reference/operator/query/comment.txt @@ -77,7 +77,7 @@ the following output shows the comment in the "comment" : "Find even values.", ... -Comments also appear in the :doc:`MongoDB log ` +Comments also appear in the :ref:`MongoDB log ` if the :ref:`database profiler level ` is set to 2 and :ref:`slowms ` is set to 0ms. This :method:`db.setProfilingLevel()` command sets these two diff --git a/source/reference/operator/query/elemMatch.txt b/source/reference/operator/query/elemMatch.txt index 8907fbb5447..f5ff82ce948 100644 --- a/source/reference/operator/query/elemMatch.txt +++ b/source/reference/operator/query/elemMatch.txt @@ -8,6 +8,9 @@ $elemMatch (query) :name: programming_language :values: shell +.. meta:: + :description: Use the $elemMatch operator to match documents with array fields containing elements that meet specified criteria. + .. contents:: On this page :local: :backlinks: none @@ -41,12 +44,6 @@ Syntax { : { $elemMatch: { , , ... } } } -If you specify only a single ```` condition in the -:query:`$elemMatch` expression, and are not using the :query:`$not` -or :query:`$ne` operators inside of :query:`$elemMatch`, -:query:`$elemMatch` can be omitted. See -:ref:`elemmatch-single-query-condition`. - Behavior -------- @@ -79,7 +76,7 @@ to ``80`` and is less than ``85``: { results: { $elemMatch: { $gte: 80, $lt: 85 } } } ) -The query returns the following document since the element ``82`` is +The query returns the following document because the element ``82`` is both greater than or equal to ``80`` and is less than ``85``: .. code-block:: javascript @@ -104,12 +101,17 @@ This statement inserts documents into the ``survey`` collection: { "_id": 3, "results": [ { "product": "abc", "score": 7 }, { "product": "xyz", "score": 8 } ] }, { "_id": 4, "results": [ { "product": "abc", "score": 7 }, - { "product": "def", "score": 8 } ] } + { "product": "def", "score": 8 } ] }, + { "_id": 5, "results": { "product": "xyz", "score": 7 } } ] ) -The following query matches only those documents where the ``results`` -array contains at least one element with both ``product`` equal to -``"xyz"`` and ``score`` greater than or equal to ``8``: +The document with an ``_id`` of ``5`` doesn't contain an array. That +document is included to show that ``$elemMatch`` only matches array +elements, which you will see in the following examples. + +The following query matches documents where ``results`` contains at +least one element where ``product`` is ``"xyz"`` and ``score`` is +greater than or equal to ``8``: .. code-block:: javascript @@ -130,14 +132,13 @@ Specifically, the query matches the following document: Single Query Condition ~~~~~~~~~~~~~~~~~~~~~~ -If you specify a single query predicate in the :query:`$elemMatch` -expression, and are not using the :query:`$not` or :query:`$ne` -operators inside of :query:`$elemMatch`, :query:`$elemMatch` can be -omitted. +The following sections show the output differences when you use +``$elemMatch`` with a single query condition, and omit ``$elemMatch``. -The following examples return the same documents. 
+Example 1 +````````` -With :query:`$elemMatch`: +Query with ``$elemMatch``: .. code-block:: javascript @@ -145,37 +146,76 @@ With :query:`$elemMatch`: { results: { $elemMatch: { product: "xyz" } } } ) -Without :query:`$elemMatch`: +The query returns documents where any ``product`` in ``results`` is +``"xyz"``: .. code-block:: javascript + :copyable: false - db.survey.find( - { "results.product": "xyz" } - ) - -However, if your :query:`$elemMatch` expression contains the -:query:`$not` or :query:`$ne` operators then omitting the -:query:`$elemMatch` expression changes the documents returned. - -The following examples return different documents. - -With :query:`$elemMatch`: + [ + { + _id: 1, + results: [ { product: 'abc', score: 10 }, { product: 'xyz', score: 5 } ] + }, + { + _id: 2, + results: [ { product: 'abc', score: 8 }, { product: 'xyz', score: 7 } ] + }, + { + _id: 3, + results: [ { product: 'abc', score: 7 }, { product: 'xyz', score: 8 } ] + } + ] + +Query without ``$elemMatch``: .. code-block:: javascript db.survey.find( - { "results": { $elemMatch: { product: { $ne: "xyz" } } } } + { "results.product": "xyz" } ) -Without :query:`$elemMatch`: +In the following output, notice that the document with an ``_id`` of +``5`` (which doesn't contain an array) is also included: + +.. code-block:: javascript + :copyable: false + :emphasize-lines: 14 + + [ + { + _id: 1, + results: [ { product: 'abc', score: 10 }, { product: 'xyz', score: 5 } ] + }, + { + _id: 2, + results: [ { product: 'abc', score: 8 }, { product: 'xyz', score: 7 } ] + }, + { + _id: 3, + results: [ { product: 'abc', score: 7 }, { product: 'xyz', score: 8 } ] + }, + { _id: 5, results: { product: 'xyz', score: 7 } } + ] + +Example 2 +````````` + +Consider the following queries: + +- First query has a single ```` condition in ``$elemMatch``. +- Second query omits ``$elemMatch``. + +First query with ``$elemMatch``: .. code-block:: javascript db.survey.find( - { "results.product": { $ne: "xyz" } } + { "results": { $elemMatch: { product: { $ne: "xyz" } } } } ) -With :query:`$elemMatch`, the first query returns these documents: +The query returns documents that has a ``product`` with value other than +``"xyz"``: .. code-block:: javascript :copyable: false @@ -189,8 +229,16 @@ With :query:`$elemMatch`, the first query returns these documents: { "_id" : 4, "results" : [ { "product" : "abc", "score" : 7 }, { "product" : "def", "score" : 8 } ] } -Without :query:`$elemMatch`, the second query returns this -document: +Second query without ``$elemMatch``: + +.. code-block:: javascript + + db.survey.find( + { "results.product": { $ne: "xyz" } } + ) + +The query returns documents where none of the ``product`` ``results`` +are ``"xyz"``: .. code-block:: javascript :copyable: false @@ -198,17 +246,14 @@ document: { "_id" : 4, "results" : [ { "product" : "abc", "score" : 7 }, { "product" : "def", "score" : 8 } ] } -The first query returns the documents where any product in the -``results`` array is not ``"xyz"``. The second query returns the -documents where all of the products in the ``results`` array are not -``"xyz"``. +Both queries include the document with an ``_id`` of ``4``, and omit the +document with an ``_id`` of ``5`` because the ``product`` is ``"xyz"``. -Additional Examples -------------------- +Learn More +---------- .. include:: /includes/extracts/additional-examples-arrays.rst .. 
seealso:: :method:`db.collection.find()` - diff --git a/source/reference/operator/query/eq.txt b/source/reference/operator/query/eq.txt index 0a3fdbaf022..c1245a4ad7e 100644 --- a/source/reference/operator/query/eq.txt +++ b/source/reference/operator/query/eq.txt @@ -8,6 +8,9 @@ $eq :name: programming_language :values: shell +.. meta:: + :description: Use the $eq operator to match documents where a field value equals a specified value. $eq helps with precise equality comparisons in MongoDB queries. + .. contents:: On this page :local: :backlinks: none @@ -94,11 +97,13 @@ the following documents: .. code-block:: javascript - { _id: 1, item: { name: "ab", code: "123" }, qty: 15, tags: [ "A", "B", "C" ] } - { _id: 2, item: { name: "cd", code: "123" }, qty: 20, tags: [ "B" ] } - { _id: 3, item: { name: "ij", code: "456" }, qty: 25, tags: [ "A", "B" ] } - { _id: 4, item: { name: "xy", code: "456" }, qty: 30, tags: [ "B", "A" ] } - { _id: 5, item: { name: "mn", code: "000" }, qty: 20, tags: [ [ "A", "B" ], "C" ] } + db.inventory.insertMany( [ + { _id: 1, item: { name: "ab", code: "123" }, qty: 15, tags: [ "A", "B", "C" ] }, + { _id: 2, item: { name: "cd", code: "123" }, qty: 20, tags: [ "B" ] }, + { _id: 3, item: { name: "ij", code: "456" }, qty: 25, tags: [ "A", "B" ] }, + { _id: 4, item: { name: "xy", code: "456" }, qty: 30, tags: [ "B", "A" ] }, + { _id: 5, item: { name: "mn", code: "000" }, qty: 20, tags: [ [ "A", "B" ], "C" ] } + ] ) Equals a Specified Value ~~~~~~~~~~~~~~~~~~~~~~~~ @@ -119,9 +124,12 @@ The query is equivalent to: Both queries match the following documents: .. code-block:: javascript + :copyable: false - { _id: 2, item: { name: "cd", code: "123" }, qty: 20, tags: [ "B" ] } - { _id: 5, item: { name: "mn", code: "000" }, qty: 20, tags: [ [ "A", "B" ], "C" ] } + [ + { _id: 2, item: { name: "cd", code: "123" }, qty: 20, tags: [ "B" ] }, + { _id: 5, item: { name: "mn", code: "000" }, qty: 20, tags: [ [ "A", "B" ], "C" ] } + ] Field in Embedded Document Equals a Value ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -144,8 +152,9 @@ The query is equivalent to: Both queries match the following document: .. code-block:: javascript + :copyable: false - { _id: 1, item: { name: "ab", code: "123" }, qty: 15, tags: [ "A", "B", "C" ] } + [ { _id: 1, item: { name: "ab", code: "123" }, qty: 15, tags: [ "A", "B", "C" ] } ] .. seealso:: @@ -171,11 +180,14 @@ The query is equivalent to: Both queries match the following documents: .. code-block:: javascript + :copyable: false - { _id: 1, item: { name: "ab", code: "123" }, qty: 15, tags: [ "A", "B", "C" ] } - { _id: 2, item: { name: "cd", code: "123" }, qty: 20, tags: [ "B" ] } - { _id: 3, item: { name: "ij", code: "456" }, qty: 25, tags: [ "A", "B" ] } - { _id: 4, item: { name: "xy", code: "456" }, qty: 30, tags: [ "B", "A" ] } + [ + { _id: 1, item: { name: "ab", code: "123" }, qty: 15, tags: [ "A", "B", "C" ] }, + { _id: 2, item: { name: "cd", code: "123" }, qty: 20, tags: [ "B" ] }, + { _id: 3, item: { name: "ij", code: "456" }, qty: 25, tags: [ "A", "B" ] }, + { _id: 4, item: { name: "xy", code: "456" }, qty: 30, tags: [ "B", "A" ] } + ] .. seealso:: @@ -208,10 +220,12 @@ The query is equivalent to: Both queries match the following documents: .. 
code-block:: javascript + :copyable: false - { _id: 3, item: { name: "ij", code: "456" }, qty: 25, tags: [ "A", "B" ] } - { _id: 5, item: { name: "mn", code: "000" }, qty: 20, tags: [ [ "A", "B" ], "C" ] } - + [ + { _id: 3, item: { name: "ij", code: "456" }, qty: 25, tags: [ "A", "B" ] }, + { _id: 5, item: { name: "mn", code: "000" }, qty: 20, tags: [ [ "A", "B" ], "C" ] } + ] .. _eq-regex-matching: @@ -224,8 +238,10 @@ with these documents: .. code-block:: javascript - { _id: 001, company: "MongoDB" } - { _id: 002, company: "MongoDB2" } + db.companies.insertMany( [ + { _id: 001, company: "MongoDB" }, + { _id: 002, company: "MongoDB2" } + ] ) $eq match on a string A string expands to return the same values whether an implicit match @@ -241,7 +257,7 @@ $eq match on a string .. code-block:: javascript :copyable: false - { "company" : "MongoDB" } + [ { company: "MongoDB" } ] $eq match on a regular expression An explicit query using ``$eq`` and a regular expression will only @@ -251,7 +267,7 @@ $eq match on a regular expression .. code-block:: javascript - db.collection.find( { company: { $eq: /MongoDB/ } }, {_id: 0 } ) + db.companies.find( { company: { $eq: /MongoDB/ } }, {_id: 0 } ) Regular expression matches A query with an implicit match against a regular expression is @@ -260,14 +276,16 @@ Regular expression matches .. code-block:: javascript - db.collection.find( { company: /MongoDB/ }, {_id: 0 }) - db.collection.find( { company: { $regex: /MongoDB/ } }, {_id: 0 } ) + db.companies.find( { company: /MongoDB/ }, {_id: 0 }) + db.companies.find( { company: { $regex: /MongoDB/ } }, {_id: 0 } ) return the same results: .. code-block:: javascript :copyable: false - { "company" : "MongoDB" } - { "company" : "MongoDB2" } + [ + { company: "MongoDB" }, + { company: "MongoDB2" } + ] diff --git a/source/reference/operator/query/exists.txt b/source/reference/operator/query/exists.txt index 8428c1fda36..10a87820bcb 100644 --- a/source/reference/operator/query/exists.txt +++ b/source/reference/operator/query/exists.txt @@ -4,6 +4,9 @@ $exists .. default-domain:: mongodb +.. meta:: + :description: Use the $exists operator to match documents with or without a specified field, including those with null values. + .. contents:: On this page :local: :backlinks: none diff --git a/source/reference/operator/query/expr.txt b/source/reference/operator/query/expr.txt index 381f4edb3c2..294d5f2f972 100644 --- a/source/reference/operator/query/expr.txt +++ b/source/reference/operator/query/expr.txt @@ -8,6 +8,9 @@ $expr :name: programming_language :values: shell +.. meta:: + :description: Use the $expr operator to use aggregation expressions in queries. $expr allows for advanced filtering and comparison based on document fields. + .. contents:: On this page :local: :backlinks: none @@ -59,34 +62,7 @@ If the :pipeline:`$match` stage is part of a :pipeline:`$lookup` stage, Examples -------- -Compare Two Fields from A Single Document -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Consider an ``monthlyBudget`` collection with the following documents: - -.. 
code-block:: javascript - - { "_id" : 1, "category" : "food", "budget": 400, "spent": 450 } - { "_id" : 2, "category" : "drinks", "budget": 100, "spent": 150 } - { "_id" : 3, "category" : "clothes", "budget": 100, "spent": 50 } - { "_id" : 4, "category" : "misc", "budget": 500, "spent": 300 } - { "_id" : 5, "category" : "travel", "budget": 200, "spent": 650 } - -The following operation uses :query:`$expr` to find documents -where the ``spent`` amount exceeds the ``budget``: - -.. code-block:: javascript - - db.monthlyBudget.find( { $expr: { $gt: [ "$spent" , "$budget" ] } } ) - -The operation returns the following results: - -.. code-block:: javascript - - { "_id" : 1, "category" : "food", "budget" : 400, "spent" : 450 } - { "_id" : 2, "category" : "drinks", "budget" : 100, "spent" : 150 } - { "_id" : 5, "category" : "travel", "budget" : 200, "spent" : 650 } - +.. include:: /includes/use-expr-in-find-query.rst Using ``$expr`` With Conditional Statements ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -101,13 +77,13 @@ Create a sample ``supplies`` collection with the following documents: .. code-block:: javascript - db.supplies.insertMany([ - { "_id" : 1, "item" : "binder", "qty" : NumberInt("100"), "price" : NumberDecimal("12") }, - { "_id" : 2, "item" : "notebook", "qty" : NumberInt("200"), "price" : NumberDecimal("8") }, - { "_id" : 3, "item" : "pencil", "qty" : NumberInt("50"), "price" : NumberDecimal("6") }, - { "_id" : 4, "item" : "eraser", "qty" : NumberInt("150"), "price" : NumberDecimal("3") }, - { "_id" : 5, "item" : "legal pad", "qty" : NumberInt("42"), "price" : NumberDecimal("10") } - ]) + db.supplies.insertMany( [ + { _id : 1, item : "binder", qty : NumberInt("100"), price : NumberDecimal("12") }, + { _id : 2, item : "notebook", qty : NumberInt("200"), price : NumberDecimal("8") }, + { _id : 3, item : "pencil", qty : NumberInt("50"), price : NumberDecimal("6") }, + { _id : 4, item : "eraser", qty : NumberInt("150"), price : NumberDecimal("3") }, + { _id : 5, item : "legal pad", qty : NumberInt("42"), price : NumberDecimal("10") } + ] ) Assume that for an upcoming sale next month, you want to discount the prices such that: @@ -174,10 +150,11 @@ The :method:`db.collection.find()` operation returns the documents whose calculated discount price is less than ``NumberDecimal("5")``: .. code-block:: javascript + :copyable: false - { "_id" : 2, "item" : "notebook", "qty": 200 , "price": NumberDecimal("8") } - { "_id" : 3, "item" : "pencil", "qty": 50 , "price": NumberDecimal("6") } - { "_id" : 4, "item" : "eraser", "qty": 150 , "price": NumberDecimal("3") } + { _id : 2, item : "notebook", qty : 200 , price : NumberDecimal("8") } + { _id : 3, item : "pencil", qty : 50 , price : NumberDecimal("6") } + { _id : 4, item : "eraser", qty : 150 , price : NumberDecimal("3") } Even though :expression:`$cond` calculates an effective discounted price, that price is not reflected in the returned documents. Instead, diff --git a/source/reference/operator/query/gt.txt b/source/reference/operator/query/gt.txt index 759c00cb81f..ae278983216 100644 --- a/source/reference/operator/query/gt.txt +++ b/source/reference/operator/query/gt.txt @@ -8,6 +8,9 @@ $gt :name: programming_language :values: shell +.. meta:: + :description: Use the $gt operator to select documents where the value of the specified field is greater than (>) the specified value. + .. 
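As a minimal illustration of the ``$gt`` description above, the following sketch matches documents whose ``quantity`` value is strictly greater than ``20``; the ``inventory`` collection and field name are placeholders rather than part of the original page.

.. code-block:: javascript

   // Placeholder collection and field: returns documents where quantity > 20.
   // Documents whose quantity equals exactly 20 are not matched.
   db.inventory.find( { quantity: { $gt: 20 } } )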
contents:: On this page :local: :backlinks: none diff --git a/source/reference/operator/query/gte.txt b/source/reference/operator/query/gte.txt index f11462238cb..b910c9fdc0a 100644 --- a/source/reference/operator/query/gte.txt +++ b/source/reference/operator/query/gte.txt @@ -7,6 +7,9 @@ $gte .. facet:: :name: programming_language :values: shell + +.. meta:: + :description: Use the $gte operator to select documents where the value of the specified field is greater than or equal to a specified value. .. contents:: On this page :local: diff --git a/source/reference/operator/query/in.txt b/source/reference/operator/query/in.txt index c68d4e7a4f8..a1ad4613f10 100644 --- a/source/reference/operator/query/in.txt +++ b/source/reference/operator/query/in.txt @@ -4,6 +4,9 @@ $in .. default-domain:: mongodb +.. meta:: + :description: Use the $in operator to select documents where the value of a field equals any value in the specified array. $in allows you to query based on multiple possible values for a field. + .. contents:: On this page :local: :backlinks: none diff --git a/source/reference/operator/query/mod.txt b/source/reference/operator/query/mod.txt index f3edca8d52c..dd939f3713d 100644 --- a/source/reference/operator/query/mod.txt +++ b/source/reference/operator/query/mod.txt @@ -4,33 +4,49 @@ $mod .. default-domain:: mongodb +.. facet:: + :name: genre + :values: reference + +.. meta:: + :keywords: code example + .. contents:: On this page :local: :backlinks: none :depth: 1 :class: singlecol +Definition +---------- + .. query:: $mod Select documents where the value of a field divided by a divisor has - the specified remainder (i.e. perform a modulo operation to select - documents). To specify a :query:`$mod` expression, use the following - syntax: + the specified remainder. That is, ``$mod`` performs a modulo + operation to select documents. The first argument is the divisor, + and the second argument is the remainder. - .. code-block:: javascript +Syntax +------ - { field: { $mod: [ divisor, remainder ] } } +To specify a ``$mod`` expression, use the following syntax: + +.. code-block:: javascript + + { field: { $mod: [ divisor, remainder ] } } .. _mod-behavior: Behavior -------- -The ``$mod`` operator returns an error if the ``[ divisor, remainder ]`` array -contains fewer or more than two elements. For examples, see -:ref:`mod-not-enough-elements` and :ref:`mod-too-many-elements` respectively. +``$mod`` returns an error if the ``[ divisor, remainder ]`` array +doesn't contain two elements. For examples, see +:ref:`mod-not-enough-elements` and :ref:`mod-too-many-elements` +respectively. -Also, starting in MongoDB 5.1 (and 5.0.4 and 4.4.10), ``$mod`` +Also, starting in MongoDB 5.1 (and 5.0.4), ``$mod`` returns an error if the ``divisor`` or ``remainder`` values evaluate to: - ``NaN`` (not a number). @@ -43,6 +59,13 @@ If a document in the collection contains a field where the value is ``NaN`` (not a number) or ``Infinity``, ``$mod`` doesn't include the document in the output. +Negative Dividend +~~~~~~~~~~~~~~~~~ + +.. include:: /includes/negative-dividend.rst + +For an example, see :ref:`mod-qo-negative-dividend-example`. + Examples -------- @@ -69,11 +92,13 @@ Then, the following query selects those documents in the The query returns the following documents: -.. code-block:: javascript +.. 
code-block:: json :copyable: false - { "_id" : 1, "item" : "abc123", "qty" : 0 } - { "_id" : 3, "item" : "ijk123", "qty" : 12 } + [ + { '_id' : 1, 'item' : 'abc123', 'qty' : 0 }, + { '_id' : 3, 'item' : 'ijk123', 'qty' : 12 } + ] .. _mod-not-enough-elements: @@ -156,11 +181,13 @@ The following examples demonstrate this behavior: Results: - .. code-block:: javascript + .. code-block:: json :copyable: false - { _id: 1, item: 'abc123', qty: 0 } - { _id: 3, item: 'ijk123', qty: 12 } + [ + { _id: 1, item: 'abc123', qty: 0 }, + { _id: 3, item: 'ijk123', qty: 12 } + ] .. example:: @@ -172,11 +199,13 @@ The following examples demonstrate this behavior: Results: - .. code-block:: javascript + .. code-block:: json :copyable: false - { _id: 1, item: 'abc123', qty: 0 } - { _id: 3, item: 'ijk123', qty: 12 } + [ + { _id: 1, item: 'abc123', qty: 0 }, + { _id: 3, item: 'ijk123', qty: 12 } + ] .. example:: @@ -188,11 +217,47 @@ The following examples demonstrate this behavior: Results: - .. code-block:: javascript + .. code-block:: json :copyable: false - { _id: 1, item: 'abc123', qty: 0 } - { _id: 3, item: 'ijk123', qty: 12 } + [ + { _id: 1, item: 'abc123', qty: 0 }, + { _id: 3, item: 'ijk123', qty: 12 } + ] Each query applies ``4`` to the ``$mod`` expression regardless of decimal points, resulting in the same result set. + +.. _mod-qo-negative-dividend-example: + +Negative Dividend +~~~~~~~~~~~~~~~~~ + +The ``$mod`` expression produces a negative result when the dividend +is negative. + +The following example demonstrates this behavior: + +.. example:: + + Input query: + + .. code-block:: javascript + + db.inventory.find( { qty: { $mod: [ -4, -0 ] } } ) + + This query returns two documents because the ``qty`` has a remainder + of ``-0`` when the dividend is negative and ``-0`` equals ``0`` in + JavaScript. For details on this equality, see the + `official JavaScript documentation + `_. + + Results: + + .. code-block:: json + :copyable: false + + [ + { _id: 1, item: 'abc123', qty: 0 }, + { _id: 3, item: 'ijk123', qty: 12 } + ] diff --git a/source/reference/operator/query/ne.txt b/source/reference/operator/query/ne.txt index a47a015b316..d8b04eeaeb9 100644 --- a/source/reference/operator/query/ne.txt +++ b/source/reference/operator/query/ne.txt @@ -8,10 +8,13 @@ $ne :name: programming_language :values: shell +.. meta:: + :description: Use the $ne operator to select documents where the field value is not equal to (≠) the specified value, including those that lack the field. + .. contents:: On this page :local: :backlinks: none - :depth: 1 + :depth: 2 :class: singlecol Definition @@ -19,7 +22,7 @@ Definition .. query:: $ne - :query:`$ne` selects the documents where the value of the + ``$ne`` selects the documents where the value of the specified field is not equal to the specified value. This includes documents that do not contain the specified field. @@ -35,100 +38,129 @@ Compatibility Syntax ------ -The :query:`$ne` operator has the following form: +The ``$ne`` operator has the following form: .. code-block:: javascript { field: { $ne: value } } +.. note:: + + If the value of the ``$ne`` operator is null, see + :ref:`non-equality-filter` for more information. + Examples -------- -The following examples use the ``inventory`` collection. Create the -collection: +The following examples use the ``inventory`` collection. To create the +collection run the following :method:`insertMany() ` +command in :binary:`~bin.mongosh`: .. 
include:: /includes/examples-create-inventory.rst -Match Document Fields -~~~~~~~~~~~~~~~~~~~~~ +Match Document Fields That Are Not Equal +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Select all documents in the ``inventory`` collection where ``quantity`` -is not equal to ``20``: - -.. code-block:: javascript - - db.inventory.find( { quantity: { $ne: 20 } } ) - -The query will also select documents that do not have the ``quantity`` -field. - -Example output: - -.. code-block:: javascript - - { - _id: ObjectId("61ba667dfe687fce2f042420"), - item: 'nuts', - quantity: 30, - carrier: { name: 'Shipit', fee: 3 } - }, - { - _id: ObjectId("61ba667dfe687fce2f042421"), - item: 'bolts', - quantity: 50, - carrier: { name: 'Shipit', fee: 4 } - }, - { - _id: ObjectId("61ba667dfe687fce2f042422"), - item: 'washers', - quantity: 10, - carrier: { name: 'Shipit', fee: 1 } - } - -Perform an Update Based on Embedded Document Fields -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The following example sets the ``price`` field based on a :query:`$ne` -comparison against a field in an embedded document. - -.. code-block:: javascript - - db.inventory.updateMany( { "carrier.fee": { $ne: 1 } }, { $set: { "price": 9.99 } } ) - -Example output: - -.. code-block:: javascript - - { - _id: ObjectId("61ba66e2fe687fce2f042423"), - item: 'nuts', - quantity: 30, - carrier: { name: 'Shipit', fee: 3 }, - price: 9.99 - }, - { - _id: ObjectId("61ba66e2fe687fce2f042424"), - item: 'bolts', - quantity: 50, - carrier: { name: 'Shipit', fee: 4 }, - price: 9.99 - }, - { - _id: ObjectId("61ba66e2fe687fce2f042425"), - item: 'washers', - quantity: 10, - carrier: { name: 'Shipit', fee: 1 } - } - -This :method:`~db.collection.updateMany()` operation searches for an -embedded document, ``carrier``, with a subfield named ``fee``. It sets -``{ price: 9.99 }`` in each document where ``fee`` has a value that -does not equal 1 or where the ``fee`` subfield does not exist. +is not equal to ``20``. This query also selects documents that do not +have the ``quantity`` field: + +.. io-code-block:: + :copyable: true + + .. input:: + :language: javascript + + db.inventory.find( { quantity: { $ne: 20 } } ) + + .. output:: + :language: javascript + :visible: false + + { + _id: ObjectId("61ba667dfe687fce2f042420"), + item: 'nuts', + quantity: 30, + carrier: { name: 'Shipit', fee: 3 } + }, + { + _id: ObjectId("61ba667dfe687fce2f042421"), + item: 'bolts', + quantity: 50, + carrier: { name: 'Shipit', fee: 4 } + }, + { + _id: ObjectId("61ba667dfe687fce2f042422"), + item: 'washers', + quantity: 10, + carrier: { name: 'Shipit', fee: 1 } + } + +The SQL equivalent to this query is: + +.. code-block:: sql + :copyable: false + + SELECT * FROM INVENTORY WHERE QUANTITIY != 20 + +Update Based on Not Equal Embedded Document Fields +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The following example sets the ``price`` field based on a ``$ne`` +comparison against a field in an embedded document. The +:method:`~db.collection.updateMany()` operation searches for an +embedded document, ``carrier``, with a subfield named ``fee``. It uses +:update:`$set` to update the ``price`` field to ``9.99`` in each +document where ``fee`` has a value that does not equal ``1`` or +where the ``fee`` subfield does not exist: + +.. io-code-block:: + :copyable: true + + .. input:: + :language: javascript + + db.inventory.updateMany( + { "carrier.fee" : { $ne: 1 } }, + { $set: { "price": 9.99 } } + ) + + .. 
output:: + :language: javascript + :visible: false + + { + _id: ObjectId("61ba66e2fe687fce2f042423"), + item: 'nuts', + quantity: 30, + carrier: { name: 'Shipit', fee: 3 }, + price: 9.99 + }, + { + _id: ObjectId("61ba66e2fe687fce2f042424"), + item: 'bolts', + quantity: 50, + carrier: { name: 'Shipit', fee: 4 }, + price: 9.99 + }, + { + _id: ObjectId("61ba66e2fe687fce2f042425"), + item: 'washers', + quantity: 10, + carrier: { name: 'Shipit', fee: 1 } + } + +The SQL equivalent to this query is: + +.. code-block:: sql + :copyable: false + + UPDATE INVENTORY SET PRICE = '9.99' WHERE carrierfee != 1 .. include:: /includes/extracts/ne_operators_selectivity.rst -.. seealso:: - - - :method:`~db.collection.find()` - - :update:`$set` +Learn More +---------- +- :ref:`sql-to-mongodb-mapping` +- :ref:`read-operations-query-document` \ No newline at end of file diff --git a/source/reference/operator/query/near.txt b/source/reference/operator/query/near.txt index 44682596a01..38c44685031 100644 --- a/source/reference/operator/query/near.txt +++ b/source/reference/operator/query/near.txt @@ -94,10 +94,6 @@ Sort Operation .. |geo-operation| replace:: :query:`$near` -.. seealso:: - - :ref:`3.0-geo-near-compatibility` - Examples -------- diff --git a/source/reference/operator/query/nearSphere.txt b/source/reference/operator/query/nearSphere.txt index 2db031a9f70..6e89aeeb937 100644 --- a/source/reference/operator/query/nearSphere.txt +++ b/source/reference/operator/query/nearSphere.txt @@ -75,10 +75,6 @@ Definition If you use longitude and latitude for legacy coordinates, specify the longitude first, then latitude. - .. seealso:: - - :ref:`3.0-geo-near-compatibility` - Behavior -------- diff --git a/source/reference/operator/query/nin.txt b/source/reference/operator/query/nin.txt index c6536048c20..775fe329283 100644 --- a/source/reference/operator/query/nin.txt +++ b/source/reference/operator/query/nin.txt @@ -8,6 +8,9 @@ $nin :name: programming_language :values: shell +.. meta:: + :description: Use the $nin operator to select documents where the field value isn't in the specified array or the field doesn't exist. + .. contents:: On this page :local: :backlinks: none diff --git a/source/reference/operator/query/not.txt b/source/reference/operator/query/not.txt index b690b5a7c5e..51e69622d0c 100644 --- a/source/reference/operator/query/not.txt +++ b/source/reference/operator/query/not.txt @@ -8,6 +8,9 @@ $not :name: programming_language :values: shell, python +.. meta:: + :description: Use the $not operator to perform a logical NOT operation on an operator expression. $not selects documents that do not match the operator expression, even those without the field. Use $not for logical disjunctions, not for direct field checks. + .. contents:: On this page :local: :backlinks: none diff --git a/source/reference/operator/query/or.txt b/source/reference/operator/query/or.txt index b7f2d71e357..e03f8231f87 100644 --- a/source/reference/operator/query/or.txt +++ b/source/reference/operator/query/or.txt @@ -8,6 +8,9 @@ $or :name: programming_language :values: shell +.. meta:: + :description: Use the $or operator to perform a logical OR operation on an array of expressions and select documents that satisfy at least one of the expressions. + .. 
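As a minimal sketch of the ``$or`` behavior described above, the following query matches documents that satisfy at least one of the two conditions; the collection and field names are placeholders.

.. code-block:: javascript

   // Matches documents where quantity is less than 20 OR price equals 10.
   db.inventory.find( { $or: [ { quantity: { $lt: 20 } }, { price: 10 } ] } )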
contents:: On this page :local: :backlinks: none diff --git a/source/reference/operator/query/regex.txt b/source/reference/operator/query/regex.txt index b58d7dd506c..a42456ed574 100644 --- a/source/reference/operator/query/regex.txt +++ b/source/reference/operator/query/regex.txt @@ -87,16 +87,14 @@ expression. .. list-table:: :header-rows: 1 - :widths: 10 60 30 + :widths: 20 80 * - Option - Description - - Syntax Restrictions * - ``i`` - - Case insensitivity to match upper and lower cases. - For an example, see :ref:`regex-case-insensitive`. - - + - Case insensitivity to match upper and lower cases. For an + example, see :ref:`regex-case-insensitive`. * - ``m`` @@ -110,8 +108,6 @@ expression. no newline characters (e.g. ``\n``), the ``m`` option has no effect. - - - * - ``x`` - "Extended" capability to ignore all white space characters in @@ -128,20 +124,21 @@ expression. The ``x`` option does not affect the handling of the VT character (i.e. code 11). - - Requires ``$regex`` with ``$options`` syntax - * - ``s`` - Allows the dot character (i.e. ``.``) to match all characters *including* newline characters. For an example, see :ref:`regex-dot-new-line`. - - Requires ``$regex`` with ``$options`` syntax + * - ``u`` + + - Supports Unicode. This flag is accepted, but is redundant. UTF is set by + default in the ``$regex`` operator, making the ``u`` option + unnecessary. .. note:: - The ``$regex`` operator does not support the global search - modifier ``g``. + The ``$regex`` operator *does not* support the global search modifier ``g``. Behavior -------- @@ -250,10 +247,16 @@ operation on both: Index Use ~~~~~~~~~~ +Index use and performance for ``$regex`` queries varies depending on +whether the query is case-sensitive or case-insensitive. + +Case-Sensitive Queries +`````````````````````` + .. TODO Probably should clean up a bit of the writing here -For case sensitive regular expression queries, if an index exists for -the field, then MongoDB matches the regular expression against the +For case sensitive regular expression queries, if an index exists +for the field, then MongoDB matches the regular expression against the values in the index, which can be faster than a collection scan. Further optimization can occur if the regular expression is a "prefix @@ -273,9 +276,10 @@ All of these expressions use an index if an appropriate index exists; however, ``/^a.*/``, and ``/^a.*$/`` are slower. ``/^a/`` can stop scanning after matching the prefix. -Case insensitive regular expression queries generally cannot use indexes -effectively. The ``$regex`` implementation is not collation-aware -and is unable to utilize case-insensitive indexes. +Case-Insensitive Queries +```````````````````````` + +.. include:: /includes/indexes/case-insensitive-regex-queries.rst Examples -------- diff --git a/source/reference/operator/query/size.txt b/source/reference/operator/query/size.txt index 3b65fa59527..742437fc69d 100644 --- a/source/reference/operator/query/size.txt +++ b/source/reference/operator/query/size.txt @@ -8,6 +8,9 @@ $size :name: programming_language :values: shell +.. meta:: + :description: Use the $size operator to match arrays with the specified number of elements. $size helps you query documents based on the size of an array field. + .. 
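A minimal sketch of the ``$size`` behavior described above, assuming a collection with an array-valued ``tags`` field:

.. code-block:: javascript

   // Matches documents whose tags array contains exactly three elements.
   // $size takes a single number; it does not accept a range of sizes.
   db.inventory.find( { tags: { $size: 3 } } )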
contents:: On this page :local: :backlinks: none diff --git a/source/reference/operator/query/text.txt b/source/reference/operator/query/text.txt index 299efaa0532..9c7a5cade84 100644 --- a/source/reference/operator/query/text.txt +++ b/source/reference/operator/query/text.txt @@ -10,6 +10,7 @@ $text .. meta:: :keywords: search + :description: Use the $text operator to perform text searches on indexed fields. $text supports language-specific rules and scoring. .. contents:: On this page :local: @@ -19,14 +20,14 @@ $text .. include:: /includes/extracts/fact-text-search-legacy-atlas.rst -This page describes :query:`$text` operator for self-managed deployments. +This page describes the ``$text`` operator for self-managed deployments. Definition ---------- .. query:: $text - :query:`$text` performs a text search on the content of the fields + ``$text`` performs a text search on the content of the fields indexed with a :ref:`text index `. Compatibility @@ -39,7 +40,7 @@ Compatibility Syntax ------ -A :query:`$text` expression has the following syntax: +A ``$text`` expression has the following syntax: .. code-block:: javascript @@ -52,7 +53,7 @@ A :query:`$text` expression has the following syntax: } } -The :query:`$text` operator accepts a text query document with the +The ``$text`` operator accepts a text query document with the following fields: .. |object-behavior| replace:: :ref:`text-query-operator-behavior` @@ -75,7 +76,10 @@ following fields: * - ``$language`` - string - - Optional. The language that determines the list of stop words for the search and + + - .. _language-field: + + Optional. The language that determines the list of stop words for the search and the rules for the stemmer and tokenizer. If not specified, the search uses the default language of the index. For supported languages, see :ref:`text-search-languages`. @@ -104,7 +108,7 @@ following fields: For more information, see :ref:`text-operator-diacritic-sensitivity`. -The :query:`$text` operator, by default, does *not* return results +The ``$text`` operator, by default, does *not* return results sorted in terms of the results' scores. For more information on sorting by the text search scores, see the :ref:`text-operator-text-score` documentation. @@ -117,35 +121,35 @@ Behavior Restrictions ~~~~~~~~~~~~ -- A query can specify, at most, one :query:`$text` expression. +- A query can specify, at most, one ``$text`` expression. -- The :query:`$text` query can not appear in :query:`$nor` expressions. +- The ``$text`` query can not appear in :query:`$nor` expressions. -- The :query:`$text` query can not appear in :query:`$elemMatch` query +- The ``$text`` query can not appear in :query:`$elemMatch` query expressions or :projection:`$elemMatch` projection expressions. -- To use a :query:`$text` query in an :query:`$or` expression, all +- To use a ``$text`` query in an :query:`$or` expression, all clauses in the :query:`$or` array must be indexed. - .. include:: /includes/fact-hint-text-query-restriction.rst - .. include:: /includes/fact-natural-sort-order-text-query-restriction.rst - .. |operation| replace:: :query:`$text` expression + .. |operation| replace:: ``$text`` expression - .. include:: /includes/fact-special-indexes-and-text.rst - .. include:: /includes/extracts/views-unsupported-text-search.rst -- :query:`$text` is unsupported for creating indexes using the +- ``$text`` is unsupported for creating indexes using the :ref:`Stable API ` V1. 
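For orientation before the aggregation-specific restrictions below, a basic ``$text`` query has the following shape. This sketch assumes a collection (here called ``articles``) that already has a text index on the searched field:

.. code-block:: javascript

   // Assumes an existing text index, for example:
   //    db.articles.createIndex( { subject: "text" } )
   // Returns documents whose indexed text contains "coffee" or "shop".
   db.articles.find( { $text: { $search: "coffee shop" } } )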
-If using the :query:`$text` operator in aggregation, the following +If using the ``$text`` operator in aggregation, the following restrictions also apply. .. include:: /includes/list-text-search-restrictions-in-agg.rst -.. |text-object| replace:: :query:`$text` +.. |text-object| replace:: ``$text`` .. |meta-object| replace:: :expression:`$meta` projection operator .. |sort-object| replace:: :method:`~cursor.sort()` method @@ -153,16 +157,16 @@ restrictions also apply. ~~~~~~~~~~~~~~~~~ In the ``$search`` field, specify a string of words that the -:query:`$text` operator parses and uses to query the :ref:`text index +``$text`` operator parses and uses to query the :ref:`text index `. -The :query:`$text` operator treats most punctuation +The ``$text`` operator treats most punctuation in the string as delimiters, except a hyphen-minus (``-``) that negates term or an escaped double quotes ``\"`` that specifies a phrase. .. note:: - The ``$search`` field for the :query:`$text` expression is different + The ``$search`` field for the ``$text`` expression is different than the :atlas:`$search aggregation stage ` provided by :atlas:`Atlas Search `. The ``$search`` aggregation @@ -189,7 +193,7 @@ For example, passed a ``$search`` string: "\"ssl certificate\" authority key" -The :query:`$text` operator searches for the phrase ``"ssl +The ``$text`` operator searches for the phrase ``"ssl certificate"``. .. note:: @@ -210,12 +214,12 @@ Prefixing a word with a hyphen-minus (``-``) negates a word: search will not match any documents. - A hyphenated word, such as ``pre-market``, is not a negation. If used - in a hyphenated word, :query:`$text` operator treats the hyphen-minus + in a hyphenated word, the ``$text`` operator treats the hyphen-minus (``-``) as a delimiter. To negate the word ``market`` in this instance, include a space between ``pre`` and ``-market``, i.e., ``pre -market``. -The :query:`$text` operator adds all negations to the query with the +The ``$text`` operator adds all negations to the query with the logical ``AND`` operator. Match Operation @@ -224,7 +228,7 @@ Match Operation Stop Words `````````` -The :query:`$text` operator ignores language-specific stop words, such +The ``$text`` operator ignores language-specific stop words, such as ``the`` and ``and`` in English. .. _match-operation-stemmed-words: @@ -233,7 +237,7 @@ Stemmed Words ````````````` For case insensitive and diacritic insensitive text searches, the -:query:`$text` operator matches on the complete *stemmed* word. So if a +``$text`` operator matches on the complete *stemmed* word. So if a document field contains the word ``blueberry``, a search on the term ``blue`` will not match. However, ``blueberry`` or ``blueberries`` will match. @@ -245,7 +249,7 @@ Case Sensitive Search and Stemmed Words For :ref:`case sensitive ` search (i.e. ``$caseSensitive: true``), if the suffix stem contains uppercase -letters, the :query:`$text` operator matches on the exact word. +letters, the ``$text`` operator matches on the exact word. .. _diacritic-sensitivity-and-stemming: @@ -254,7 +258,7 @@ Diacritic Sensitive Search and Stemmed Words For :ref:`diacritic sensitive ` search (i.e. ``$diacriticSensitive: true``), if the suffix stem -contains the diacritic mark or marks, the :query:`$text` operator +contains the diacritic mark or marks, the ``$text`` operator matches on the exact word. .. _text-operator-case-sensitivity: @@ -262,7 +266,7 @@ matches on the exact word. 
Case Insensitivity ~~~~~~~~~~~~~~~~~~ -The :query:`$text` operator defaults to the case insensitivity of the +The ``$text`` operator defaults to the case insensitivity of the :ref:`text ` index: - The version 3 :ref:`text index ` is @@ -283,19 +287,19 @@ Case Sensitive Search Process ````````````````````````````` When performing a case sensitive search (``$caseSensitive: true``) -where the ``text`` index is case insensitive, the :query:`$text` +where the ``text`` index is case insensitive, the ``$text`` operator: - First searches the ``text`` index for case insensitive and diacritic matches. - Then, to return just the documents that match the case of the search - terms, the :query:`$text` query operation includes an additional + terms, the ``$text`` query operation includes an additional stage to filter out the documents that do not match the specified case. For case sensitive search (i.e. ``$caseSensitive: true``), if -the suffix stem contains uppercase letters, the :query:`$text` operator +the suffix stem contains uppercase letters, the ``$text`` operator matches on the exact word. Specifying ``$caseSensitive: true`` may impact performance. @@ -309,7 +313,7 @@ Specifying ``$caseSensitive: true`` may impact performance. Diacritic Insensitivity ~~~~~~~~~~~~~~~~~~~~~~~ -The :query:`$text` operator defaults to the diacritic insensitivity of +The ``$text`` operator defaults to the diacritic insensitivity of the :ref:`text ` index: - The version 3 :ref:`text index ` is @@ -327,30 +331,30 @@ specify ``$diacriticSensitive: true``. Text searches against earlier versions of the ``text`` index are inherently diacritic sensitive and cannot be diacritic insensitive. As -such, the ``$diacriticSensitive`` option for the :query:`$text` +such, the ``$diacriticSensitive`` option for the ``$text`` operator has no effect with earlier versions of the ``text`` index. Diacritic Sensitive Search Process `````````````````````````````````` To perform a diacritic sensitive text search (``$diacriticSensitive: -true``) against a version 3 ``text`` index, the :query:`$text` operator: +true``) against a version 3 ``text`` index, the ``$text`` operator: - First searches the ``text`` index, which is diacritic insensitive. - Then, to return just the documents that match the diacritic marked - characters of the search terms, the :query:`$text` query operation + characters of the search terms, the ``$text`` query operation includes an additional stage to filter out the documents that do not match. Specifying ``$diacriticSensitive: true`` may impact performance. To perform a diacritic sensitive search against an earlier version of -the ``text`` index, the :query:`$text` operator searches the ``text`` +the ``text`` index, the ``$text`` operator searches the ``text`` index, which is diacritic sensitive. For diacritic sensitive search, if the suffix stem contains the -diacritic mark or marks, the :query:`$text` operator matches on the +diacritic mark or marks, the ``$text`` operator matches on the exact word. .. seealso:: @@ -419,7 +423,7 @@ the word: Match Any of the Search Terms ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -If the search string is a space-delimited string, :query:`$text` +If the search string is a space-delimited string, the ``$text`` operator performs a logical ``OR`` search on each term and returns documents that contains any of the terms. @@ -493,7 +497,7 @@ Exclude Documents That Contain a Term ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ A *negated* term is a term that is prefixed by a minus sign ``-``. 
If -you negate a term, the :query:`$text` operator will exclude the +you negate a term, the ``$text`` operator will exclude the documents that contain those terms from the results. The following example searches for documents that contain the words @@ -519,7 +523,7 @@ The query returns the following documents: Search a Different Language ~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Use the optional ``$language`` field in the :query:`$text` expression +Use the optional ``$language`` field in the ``$text`` expression to specify a language that determines the list of stop words and the rules for the stemmer and tokenizer for the search string. @@ -541,7 +545,7 @@ The query returns the following documents: { "_id" : 5, "subject" : "Café Con Leche", "author" : "abc", "views" : 200 } { "_id" : 8, "subject" : "Cafe con Leche", "author" : "xyz", "views" : 10 } -The :query:`$text` expression can also accept the language by name, +The ``$text`` expression can also accept the language by name, ``spanish``. See :ref:`text-search-languages` for the supported languages. @@ -552,7 +556,7 @@ languages. Case and Diacritic Insensitive Search ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -The :query:`$text` operator defers to the case and diacritic +The ``$text`` operator defers to the case and diacritic insensitivity of the ``text`` index. The version 3 ``text`` index is diacritic insensitive and expands its case insensitivity to include the Cyrillic alphabet as well as characters with diacritics. For details, @@ -642,7 +646,7 @@ Case Sensitivity with Negated Term `````````````````````````````````` A *negated* term is a term that is prefixed by a minus sign ``-``. If -you negate a term, the :query:`$text` operator will exclude the +you negate a term, the ``$text`` operator will exclude the documents that contain those terms from the results. You can also specify case sensitivity for negated terms. @@ -700,7 +704,7 @@ Diacritic Sensitivity with Negated Term The ``$diacriticSensitive`` option applies also to negated terms. A negated term is a term that is prefixed by a minus sign ``-``. If you -negate a term, the :query:`$text` operator will exclude the documents that +negate a term, the ``$text`` operator will exclude the documents that contain those terms from the results. The following query performs a diacritic sensitive text search for diff --git a/source/reference/operator/query/type.txt b/source/reference/operator/query/type.txt index 0a335ed749e..0902d6a7418 100644 --- a/source/reference/operator/query/type.txt +++ b/source/reference/operator/query/type.txt @@ -8,6 +8,9 @@ $type :name: programming_language :values: shell +.. meta:: + :description: Query MongoDB documents by data type using the $type operator, which can help you query unstructured data with unpredictable types. + .. contents:: On this page :local: :backlinks: none @@ -51,7 +54,7 @@ types and has the following syntax: { field: { $type: [ , , ... ] } } -The above query will match documents where the ``field`` value is +The above query matches documents where the ``field`` value is any of the listed types. The types specified in the array can be either numeric or string aliases. @@ -63,7 +66,7 @@ their corresponding numeric and string aliases. .. seealso:: - :expression:`$isNumber` - checks if the argument is a number. - *New in MongoDB 4.4* + - :expression:`$type (Aggregation) <$type>` - returns the BSON type of the argument. @@ -94,7 +97,7 @@ types. [#type0]_ .. 
include:: /includes/fact-bson-types.rst -:query:`$type` supports the ``number`` alias, which will match against +:query:`$type` supports the ``number`` alias, which matches against the following :term:`BSON` types: - :bsontype:`double ` @@ -113,40 +116,41 @@ For examples, see :ref:`query-type-examples`. .. seealso:: - :expression:`$isNumber` *New in MongoDB 4.4* + :expression:`$isNumber` MinKey and MaxKey -~~~~~~~~~~~~~~~~~~~~~~~~~~ +~~~~~~~~~~~~~~~~~ :bsontype:`MinKey` and :bsontype:`MaxKey` are used in comparison operations and exist primarily for internal use. -For all possible :term:`BSON` element values, ``MinKey`` will always be the -smallest value while ``MaxKey`` will always be the greatest value. +For all possible :term:`BSON` element values, ``MinKey`` is always the +smallest value while ``MaxKey`` is always the greatest value. -Querying for ``minKey`` or ``maxKey`` with :query:`$type` -will only return fields that match -the special ``MinKey`` or ``MaxKey`` values. +Querying for ``minKey`` or ``maxKey`` with :query:`$type` only returns fields +that match the special ``MinKey`` or ``MaxKey`` values. Suppose that the ``data`` collection has two documents with ``MinKey`` and ``MaxKey``: .. code-block:: javascript - { "_id" : 1, x : { "$minKey" : 1 } } - { "_id" : 2, y : { "$maxKey" : 1 } } + db.data.insertMany( [ + { _id : 1, x : { "$minKey" : 1 } }, + { _id : 2, y : { "$maxKey" : 1 } } + ] ) -The following query will return the document with ``_id: 1``: +The following query returns the document with ``_id: 1``: .. code-block:: javascript db.data.find( { x: { $type: "minKey" } } ) -The following query will return the document with ``_id: 2``: +The following query returns the document with ``_id: 2``: .. code-block:: javascript - db.data.find( { y: { $type: "maxKey" } } ) + db.data.find( { y: { $type: "maxKey" } } ) .. _query-type-examples: @@ -164,15 +168,13 @@ values: .. code-block:: javascript - db.addressBook.insertMany( - [ - { "_id" : 1, address : "2030 Martian Way", zipCode : "90698345" }, - { "_id" : 2, address: "156 Lunar Place", zipCode : 43339374 }, - { "_id" : 3, address : "2324 Pluto Place", zipCode: NumberLong(3921412) }, - { "_id" : 4, address : "55 Saturn Ring" , zipCode : NumberInt(88602117) }, - { "_id" : 5, address : "104 Venus Drive", zipCode : ["834847278", "1893289032"]} - ] - ) + db.addressBook.insertMany( [ + { _id : 1, address : "2030 Martian Way", zipCode : "90698345" }, + { _id : 2, address : "156 Lunar Place", zipCode : 43339374 }, + { _id : 3, address : "2324 Pluto Place", zipCode : NumberLong(3921412) }, + { _id : 4, address : "55 Saturn Ring" , zipCode : NumberInt(88602117) }, + { _id : 5, address : "104 Venus Drive", zipCode : ["834847278", "1893289032"] } + ] ) The following queries return all documents where ``zipCode`` is the :term:`BSON` type ``string`` *or* is an array containing an element of @@ -181,15 +183,16 @@ the specified type: .. code-block:: javascript - db.addressBook.find( { "zipCode" : { $type : 2 } } ); - db.addressBook.find( { "zipCode" : { $type : "string" } } ); + db.addressBook.find( { zipCode : { $type : 2 } } ); + db.addressBook.find( { zipCode : { $type : "string" } } ); These queries return: .. 
code-block:: javascript + :copyable: false - { "_id" : 1, "address" : "2030 Martian Way", "zipCode" : "90698345" } - { "_id" : 5, "address" : "104 Venus Drive", "zipCode" : [ "834847278", "1893289032" ] } + { _id : 1, address : "2030 Martian Way", zipCode : "90698345" } + { _id : 5, address : "104 Venus Drive", zipCode : [ "834847278", "1893289032" ] } The following queries return all documents where ``zipCode`` is the :term:`BSON` type ``double`` *or* is an array containing an element of @@ -197,14 +200,15 @@ the specified type: .. code-block:: javascript - db.addressBook.find( { "zipCode" : { $type : 1 } } ) - db.addressBook.find( { "zipCode" : { $type : "double" } } ) + db.addressBook.find( { zipCode : { $type : 1 } } ); + db.addressBook.find( { zipCode : { $type : "double" } } ); These queries return: .. code-block:: javascript + :copyable: false - { "_id" : 2, "address" : "156 Lunar Place", "zipCode" : 43339374 } + { _id : 2, address : "156 Lunar Place", zipCode : 43339374 } The following query uses the ``number`` alias to return documents where ``zipCode`` is the :term:`BSON` type ``double``, ``int``, or ``long`` @@ -212,34 +216,33 @@ The following query uses the ``number`` alias to return documents where .. code-block:: javascript - db.addressBook.find( { "zipCode" : { $type : "number" } } ) + db.addressBook.find( { zipCode : { $type : "number" } } ) These queries return: .. code-block:: javascript + :copyable: false - { "_id" : 2, "address" : "156 Lunar Place", "zipCode" : 43339374 } - { "_id" : 3, "address" : "2324 Pluto Place", "zipCode" : NumberLong(3921412) } - { "_id" : 4, "address" : "55 Saturn Ring", "zipCode" : 88602117 } + { _id : 2, address : "156 Lunar Place", zipCode : 43339374 } + { _id : 3, address : "2324 Pluto Place", zipCode : NumberLong(3921412) } + { _id : 4, address : "55 Saturn Ring", zipCode : 88602117 } .. _document-querying-by-multiple-data-types: -Querying by Multiple Data Type -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Querying by Multiple Data Types +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The ``grades`` collection contains names and averages, where ``classAverage`` has ``string``, ``int``, and ``double`` values: .. code-block:: javascript - db.grades.insertMany( - [ - { "_id" : 1, name : "Alice King" , classAverage : 87.333333333333333 }, - { "_id" : 2, name : "Bob Jenkins", classAverage : "83.52" }, - { "_id" : 3, name : "Cathy Hart", classAverage: "94.06" }, - { "_id" : 4, name : "Drew Williams" , classAverage : NumberInt("93") } - ] - ) + db.grades.insertMany( [ + { _id : 1, name : "Alice King" , classAverage : 87.333333333333333 }, + { _id : 2, name : "Bob Jenkins", classAverage : "83.52" }, + { _id : 3, name : "Cathy Hart", classAverage: "94.06" }, + { _id : 4, name : "Drew Williams" , classAverage : NumberInt("93") } + ] ) The following queries return all documents where ``classAverage`` is the :term:`BSON` type ``string`` or ``double`` *or* is an array containing @@ -248,16 +251,17 @@ while the second query uses string aliases. .. code-block:: javascript - db.grades.find( { "classAverage" : { $type : [ 2 , 1 ] } } ); - db.grades.find( { "classAverage" : { $type : [ "string" , "double" ] } } ); + db.grades.find( { classAverage : { $type : [ 2 , 1 ] } } ); + db.grades.find( { classAverage : { $type : [ "string" , "double" ] } } ); These queries return the following documents: .. 
code-block:: javascript + :copyable: false - { "_id" : 1, "name" : "Alice King", "classAverage" : 87.33333333333333 } - { "_id" : 2, "name" : "Bob Jenkins", "classAverage" : "83.52" } - { "_id" : 3, "name" : "Cathy Hart", "classAverage" : "94.06" } + { _id : 1, name : "Alice King", classAverage : 87.33333333333333 } + { _id : 2, name : "Bob Jenkins", classAverage : "83.52" } + { _id : 3, name : "Cathy Hart", classAverage : "94.06" } .. _document-querying-by-MinKey-And-MaxKey: @@ -269,49 +273,53 @@ failing grade: .. code-block:: javascript - { - "_id": 1, - "address": { - "building": "230", - "coord": [ -73.996089, 40.675018 ], - "street": "Huntington St", - "zipcode": "11231" - }, - "borough": "Brooklyn", - "cuisine": "Bakery", - "grades": [ - { "date": new Date(1393804800000), "grade": "C", "score": 15 }, - { "date": new Date(1378857600000), "grade": "C", "score": 16 }, - { "date": new Date(1358985600000), "grade": MinKey(), "score": 30 }, - { "date": new Date(1322006400000), "grade": "C", "score": 15 } - ], - "name": "Dirty Dan's Donuts", - "restaurant_id": "30075445" - } + db.restaurants.insertOne( [ + { + _id: 1, + address: { + building: "230", + coord: [ -73.996089, 40.675018 ], + street: "Huntington St", + zipcode: "11231" + }, + borough: "Brooklyn", + cuisine: "Bakery", + grades: [ + { date : new Date(1393804800000), grade : "C", score : 15 }, + { date : new Date(1378857600000), grade : "C", score : 16 }, + { date : new Date(1358985600000), grade : MinKey(), score : 30 }, + { date : new Date(1322006400000), grade : "C", score : 15 } + ], + name : "Dirty Dan's Donuts", + restaurant_id : "30075445" + } + ] ) And ``maxKey`` for any grade that is the highest passing grade: .. code-block:: javascript - { - "_id": 2, - "address": { - "building": "1166", - "coord": [ -73.955184, 40.738589 ], - "street": "Manhattan Ave", - "zipcode": "11222" - }, - "borough": "Brooklyn", - "cuisine": "Bakery", - "grades": [ - { "date": new Date(1393804800000), "grade": MaxKey(), "score": 2 }, - { "date": new Date(1378857600000), "grade": "B", "score": 6 }, - { "date": new Date(1358985600000), "grade": MaxKey(), "score": 3 }, - { "date": new Date(1322006400000), "grade": "B", "score": 5 } - ], - "name": "Dainty Daisey's Donuts", - "restaurant_id": "30075449" - } + db.restaurants.insertOne( [ + { + _id : 2, + address : { + building : "1166", + coord : [ -73.955184, 40.738589 ], + street : "Manhattan Ave", + zipcode : "11222" + }, + borough: "Brooklyn", + cuisine: "Bakery", + grades: [ + { date : new Date(1393804800000), grade : MaxKey(), score : 2 }, + { date : new Date(1378857600000), grade : "B", score : 6 }, + { date : new Date(1358985600000), grade : MaxKey(), score : 3 }, + { date : new Date(1322006400000), grade : "B", score : 5 } + ], + name : "Dainty Daisey's Donuts", + restaurant_id : "30075449" + } + ] ) The following query returns any restaurant whose ``grades.grade`` field contains ``minKey`` *or* is an array containing an element of @@ -323,28 +331,29 @@ the specified type: { "grades.grade" : { $type : "minKey" } } ) -This returns +This returns the following results: .. 
code-block:: javascript + :copyable: false { - "_id" : 1, - "address" : { - "building" : "230", - "coord" : [ -73.996089, 40.675018 ], - "street" : "Huntington St", - "zipcode" : "11231" + _id : 1, + address : { + building : "230", + coord : [ -73.996089, 40.675018 ], + street : "Huntington St", + zipcode : "11231" }, - "borough" : "Brooklyn", - "cuisine" : "Bakery", - "grades" : [ - { "date" : ISODate("2014-03-03T00:00:00Z"), "grade" : "C", "score" : 15 }, - { "date" : ISODate("2013-09-11T00:00:00Z"), "grade" : "C", "score" : 16 }, - { "date" : ISODate("2013-01-24T00:00:00Z"), "grade" : { "$minKey" : 1 }, "score" : 30 }, - { "date" : ISODate("2011-11-23T00:00:00Z"), "grade" : "C", "score" : 15 } + borough : "Brooklyn", + cuisine : "Bakery", + grades : [ + { date : ISODate("2014-03-03T00:00:00Z"), grade : "C", score : 15 }, + { date : ISODate("2013-09-11T00:00:00Z"), grade : "C", score : 16 }, + { date : ISODate("2013-01-24T00:00:00Z"), grade : { "$minKey" : 1 }, score : 30 }, + { date : ISODate("2011-11-23T00:00:00Z"), grade : "C", score : 15 } ], - "name" : "Dirty Dan's Donuts", - "restaurant_id" : "30075445" + name : "Dirty Dan's Donuts", + restaurant_id : "30075445" } The following query returns any restaurant whose ``grades.grade`` field @@ -358,28 +367,29 @@ the specified type: { "grades.grade" : { $type : "maxKey" } } ) -This returns +This returns the following results: .. code-block:: javascript + :copyable: false { - "_id" : 2, - "address" : { - "building" : "1166", - "coord" : [ -73.955184, 40.738589 ], - "street" : "Manhattan Ave", - "zipcode" : "11222" + _id : 2, + address : { + building : "1166", + coord : [ -73.955184, 40.738589 ], + street : "Manhattan Ave", + zipcode : "11222" }, - "borough" : "Brooklyn", - "cuisine" : "Bakery", - "grades" : [ - { "date" : ISODate("2014-03-03T00:00:00Z"), "grade" : { "$maxKey" : 1 }, "score" : 2 }, - { "date" : ISODate("2013-09-11T00:00:00Z"), "grade" : "B", "score" : 6 }, - { "date" : ISODate("2013-01-24T00:00:00Z"), "grade" : { "$maxKey" : 1 }, "score" : 3 }, - { "date" : ISODate("2011-11-23T00:00:00Z"), "grade" : "B", "score" : 5 } + borough : "Brooklyn", + cuisine : "Bakery", + grades : [ + { date : ISODate("2014-03-03T00:00:00Z"), grade : { "$maxKey" : 1 }, score : 2 }, + { date : ISODate("2013-09-11T00:00:00Z"), grade : "B", score : 6 }, + { date : ISODate("2013-01-24T00:00:00Z"), grade : { "$maxKey" : 1 }, score : 3 }, + { date : ISODate("2011-11-23T00:00:00Z"), grade : "B", score : 5 } ], - "name" : "Dainty Daisey's Donuts", - "restaurant_id" : "30075449" + name : "Dainty Daisey's Donuts", + restaurant_id : "30075449" } @@ -388,59 +398,33 @@ This returns Querying by Array Type ---------------------- -A collection named ``SensorReading`` contains the following documents: +A collection named ``sensorReading`` contains the following documents: .. 
code-block:: javascript - { - "_id": 1, - "readings": [ - 25, - 23, - [ "Warn: High Temp!", 55 ], - [ "ERROR: SYSTEM SHUTDOWN!", 66 ] - ] - }, - { - "_id": 2, - "readings": [ - 25, - 25, - 24, - 23 - ] - }, - { - "_id": 3, - "readings": [ - 22, - 24, - [] - ] - }, - { - "_id": 4, - "readings": [] - }, - { - "_id": 5, - "readings": 24 - } + db.sensorReading.insertMany( [ + { _id : 1, readings : [ 25, 23, [ "Warn: High Temp!", 55 ], [ "ERROR: SYSTEM SHUTDOWN!", 66 ] ] }, + { _id : 2, readings : [ 25, 25, 24, 23 ] }, + { _id : 3, readings : [ 22, 24, [] ] }, + { _id : 4, readings : [] }, + { _id : 5, readings : 24 } + ] ) The following query returns any document in which the ``readings`` field is an array, empty or non-empty. .. code-block:: javascript - db.SensorReading.find( { "readings" : { $type: "array" } } ) + db.SensorReading.find( { readings : { $type: "array" } } ) The above query returns the following documents: .. code-block:: javascript + :copyable: false { - "_id": 1, - "readings": [ + _id : 1, + readings : [ 25, 23, [ "Warn: High Temp!", 55 ], @@ -448,29 +432,20 @@ The above query returns the following documents: ] }, { - "_id": 2, - "readings": [ - 25, - 25, - 24, - 23 - ] + _id : 2, + readings : [ 25, 25, 24, 23 ] }, { - "_id": 3, - "readings": [ - 22, - 24, - [] - ] + _id : 3, + readings : [ 22, 24, [] ] }, { - "_id": 4, - "readings": [] + _id : 4, + readings : [] } In the documents with ``_id : 1``, ``_id : 2``, ``_id : 3``, and -``_id : 4``, the ``readings`` field is an array. +``_id : 4``, the ``readings`` field is an array. Additional Information diff --git a/source/reference/operator/query/where.txt b/source/reference/operator/query/where.txt index fe057928bee..9364b0e2e1d 100644 --- a/source/reference/operator/query/where.txt +++ b/source/reference/operator/query/where.txt @@ -8,6 +8,9 @@ $where :name: programming_language :values: javascript/typescript +.. meta:: + :description: Use the $where operator to pass JavaScript expressions or functions to the query system. This operator provides greater flexibility, but might impact performance. + .. contents:: On this page :local: :backlinks: none @@ -45,20 +48,19 @@ The :query:`$where` operator has the following form: .. note:: - Starting in MongoDB 4.4, :query:`$where` no longer supports the - deprecated :ref:`BSON type ` JavaScript code with scope - (BSON Type 15). The :query:`$where` operator only supports BSON type - String (BSON Type 2) or BSON type JavaScript (BSON Type 13). The use - of BSON type JavaScript with scope for :query:`$where` has been - deprecated since MongoDB 4.2.1. + :query:`$where` no longer supports the deprecated + :ref:`BSON type ` JavaScript code with scope (BSON Type 15). + The :query:`$where` operator only supports BSON type String (BSON Type 2) or + BSON type JavaScript (BSON Type 13). The use of BSON type JavaScript with + scope for :query:`$where` has been deprecated since MongoDB 4.2.1. .. note:: Aggregation Alternatives Preferred The :query:`$expr` operator allows the use of :ref:`aggregation expressions ` within - the query language. And, starting in MongoDB 4.4, the - :expression:`$function` and :group:`$accumulator` allows users to - define custom aggregation expressions in JavaScript if the provided + the query language. The :expression:`$function` and + :group:`$accumulator` allows users to define custom aggregation expressions + in JavaScript if the provided :ref:`pipeline operators ` cannot fulfill your application's needs. 
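As a minimal sketch of that preference, the same field-to-field comparison can be written with ``$where`` or, without any server-side JavaScript, with :query:`$expr`; the ``players`` collection and field names here are placeholders:

.. code-block:: javascript

   // $where evaluates a JavaScript predicate against each document:
   db.players.find( { $where: "this.credits >= this.debits" } )

   // The equivalent filter using $expr and an aggregation comparison,
   // which avoids JavaScript execution entirely:
   db.players.find( { $expr: { $gte: [ "$credits", "$debits" ] } } )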
@@ -131,11 +133,7 @@ scripting: - For a :binary:`~bin.mongos` instance, see :setting:`security.javascriptEnabled` configuration option or the - :option:`--noscripting ` command-line option - starting in MongoDB 4.4. - - | In earlier versions, MongoDB does not allow JavaScript execution on - :binary:`~bin.mongos` instances. + :option:`--noscripting ` command-line option. See also :ref:`security-checklist-javascript`. @@ -181,10 +179,9 @@ The operation returns the following result: } As an alternative, the previous example can be rewritten using -:query:`$expr` and :expression:`$function`. Starting in MongoDB 4.4, -you can define custom aggregation expression in JavaScript with the -aggregation operator :expression:`$function`. To -access :expression:`$function` and other aggregation operators in +:query:`$expr` and :expression:`$function`. You can define custom aggregation +expression in JavaScript with the aggregation operator :expression:`$function`. +To access :expression:`$function` and other aggregation operators in :method:`db.collection.find()`, use with :query:`$expr`: .. code-block:: javascript diff --git a/source/reference/operator/update.txt b/source/reference/operator/update.txt index a15bd80a66f..f0a2d3b6838 100644 --- a/source/reference/operator/update.txt +++ b/source/reference/operator/update.txt @@ -4,6 +4,9 @@ Update Operators .. default-domain:: mongodb +.. meta:: + :description: Use update operators to modify MongoDB documents. You can set field values, manipulate arrays, and more. + .. contents:: On this page :local: :backlinks: none @@ -52,9 +55,6 @@ Starting in MongoDB 5.0, update operators process document fields with string-based names in lexicographic order. Fields with numeric names are processed in numeric order. -In MongoDB 4.4 and earlier, update operators process all document fields -in lexicographic order. - Consider this example :update:`$set` command: .. code-block:: javascript @@ -64,9 +64,6 @@ Consider this example :update:`$set` command: In MongoDB 5.0 and later, ``"a.2"`` is processed before ``"a.10"`` because ``2`` comes before ``10`` in numeric order. -In MongoDB 4.4 and earlier, ``"a.10"`` is processed before ``"a.2"`` -because ``10`` comes before ``2`` in lexicographic order. - Fields ~~~~~~ diff --git a/source/reference/operator/update/addToSet.txt b/source/reference/operator/update/addToSet.txt index b9f7d22d09d..c6f4b7d834f 100644 --- a/source/reference/operator/update/addToSet.txt +++ b/source/reference/operator/update/addToSet.txt @@ -8,6 +8,9 @@ $addToSet :name: programming_language :values: shell +.. meta:: + :description: Use the $addToSet operator to add unique values to arrays in MongoDB, ensuring no new duplicates. $addToSet creates fields and appends arrays, but doesn't guarantee element order. + .. contents:: On this page :local: :backlinks: none diff --git a/source/reference/operator/update/currentDate.txt b/source/reference/operator/update/currentDate.txt index dee53d1135d..a146b9b31bf 100644 --- a/source/reference/operator/update/currentDate.txt +++ b/source/reference/operator/update/currentDate.txt @@ -8,6 +8,9 @@ $currentDate :name: programming_language :values: shell +.. meta:: + :description: Use the $currentDate operator to set field values to the current date or timestamp. + .. 
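As a brief sketch of the ``$currentDate`` behavior described above (the ``customers`` collection and field names are placeholders):

.. code-block:: javascript

   // Sets lastModified to the current date and lastSeen to the current
   // timestamp in the matched document, creating the fields if needed.
   db.customers.updateOne(
      { _id: 1 },
      { $currentDate: { lastModified: true, lastSeen: { $type: "timestamp" } } }
   )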
contents:: On this page :local: :backlinks: none diff --git a/source/reference/operator/update/inc.txt b/source/reference/operator/update/inc.txt index 7b4f974b6c3..6f3d5a732f2 100644 --- a/source/reference/operator/update/inc.txt +++ b/source/reference/operator/update/inc.txt @@ -8,6 +8,9 @@ $inc :name: programming_language :values: shell +.. meta:: + :description: Use the $inc operator to increment field values by specified amounts. $inc creates fields if absent, errors on null values, and is atomic within a document. + .. contents:: On this page :local: :backlinks: none diff --git a/source/reference/operator/update/position.txt b/source/reference/operator/update/position.txt index 06181ca4f95..87938971d5e 100644 --- a/source/reference/operator/update/position.txt +++ b/source/reference/operator/update/position.txt @@ -4,6 +4,9 @@ $position .. default-domain:: mongodb +.. meta:: + :description: Use the $position operator to update array elements without specifying their position. + .. contents:: On this page :local: :backlinks: none diff --git a/source/reference/operator/update/pull.txt b/source/reference/operator/update/pull.txt index 86114d4adc6..5420b98884f 100644 --- a/source/reference/operator/update/pull.txt +++ b/source/reference/operator/update/pull.txt @@ -8,6 +8,9 @@ $pull :name: programming_language :values: shell +.. meta:: + :description: Use the $pull operator to remove all instances of a value or values from an array that match a specified condition. + .. contents:: On this page :local: :backlinks: none diff --git a/source/reference/operator/update/push.txt b/source/reference/operator/update/push.txt index 7202f478254..518d8d6909c 100644 --- a/source/reference/operator/update/push.txt +++ b/source/reference/operator/update/push.txt @@ -8,6 +8,9 @@ $push :name: programming_language :values: shell +.. meta:: + :description: Use the $push operator to append a specified value to an array. $push supports the $each, $slice, $sort, and $position modifiers for advanced array operations. + .. contents:: On this page :local: :backlinks: none diff --git a/source/reference/operator/update/rename.txt b/source/reference/operator/update/rename.txt index 20d777af415..7a5cbf009fe 100644 --- a/source/reference/operator/update/rename.txt +++ b/source/reference/operator/update/rename.txt @@ -15,52 +15,79 @@ Definition .. update:: $rename - The :update:`$rename` operator updates the name of a field and has the following form: + The :update:`$rename` operator updates the name of a field. + +Syntax +------ - .. code-block:: javascript +.. code-block:: javascript - {$rename: { : , : , ... } } + { $rename: { <field1>: <newName1>, <field2>: <newName2>, ... } } - The new field name must differ from the existing field name. To - specify a ```` in an embedded document, use :term:`dot - notation`. +The new field name must differ from the existing field name. To +specify a ``<field>`` in an embedded document, use :term:`dot +notation`. - Consider the following example: +Consider the following example: - .. code-block:: javascript +.. code-block:: javascript - db.students.updateOne( - { _id: 1 }, - { $rename: { 'nickname': 'alias', 'cell': 'mobile' } } - ) + db.students.updateOne( + { _id: 1 }, { $rename: { 'nickname': 'alias', 'cell': 'mobile' } } + ) - This operation renames the field ``nickname`` to ``alias``, and the - field ``cell`` to ``mobile``. +The preceding operation renames the ``nickname`` field to ``alias`` and +the ``cell`` field to ``mobile`` in a document where ``_id`` is 1. 
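To make the effect of the preceding ``$rename`` concrete, a document shaped like the one in the example would change roughly as follows; the field values shown are assumptions for illustration only:

.. code-block:: javascript

   // Illustrative document shape before the rename (values are assumptions):
   // { _id: 1, nickname: "Gonzo", cell: "111-222-3333" }

   // After the updateOne() above, reading the document back:
   db.students.findOne( { _id: 1 } )
   // { _id: 1, alias: "Gonzo", mobile: "111-222-3333" }
   // Note that $rename may not preserve the original field order.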
Behavior -------- +When you run a ``$rename`` operation, MongoDB performs the following +actions: + +- Delete both the old ``<field>`` and any existing ``<newName>`` field from the + document (using :update:`$unset`). + +- Perform a :update:`$set` operation with ``<newName>``, using the value + from ``<field>``. + +Atomicity +~~~~~~~~~ + +Each document matched by an update command is updated in an individual +operation. Update operations (like ``$rename``) only guarantee atomicity +on a single-document level. + +Field Order +~~~~~~~~~~~ + +The ``$rename`` operation might not preserve the order of the fields in +the document. + +Update Processing Order +~~~~~~~~~~~~~~~~~~~~~~~ + .. include:: /includes/fact-update-operator-processing-order.rst -The :update:`$rename` operator logically performs an :update:`$unset` -of both the old name and the new name, and then performs a -:update:`$set` operation with the new name. As such, the operation may -not preserve the order of the fields in the document; i.e. the renamed -field may move within the document. +Rename Embedded Document Fields +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The ``$rename`` operator can move fields into and out of embedded +documents. -If the document already has a field with the ``<newName>``, the -:update:`$rename` operator removes that field and renames the specified -``<field>`` to ``<newName>``. +``$rename`` does not work on embedded documents in arrays. -If the field to rename does not exist in a document, :update:`$rename` -does nothing (i.e. no operation). +Other Considerations +~~~~~~~~~~~~~~~~~~~~ -For fields in embedded documents, the :update:`$rename` operator can -rename these fields as well as move the fields in and out of embedded -documents. :update:`$rename` does not work if these fields are in array -elements. +- If the document already has a field with the ``<newName>``, the + :update:`$rename` operator removes that field and renames the + specified ``<field>`` to ``<newName>``. -.. include:: /includes/extracts/update-operation-empty-operand-expressions-rename.rst +- If the field to rename does not exist in a document, :update:`$rename` + does nothing. + +- .. include:: /includes/extracts/update-operation-empty-operand-expressions-rename.rst Examples -------- @@ -102,10 +129,13 @@ name of the field and the new name: .. code-block:: javascript - db.students.updateMany( {}, { $rename: { "nmae": "name" } } ) + db.students.updateMany( + { "nmae": { $ne: null } }, + { $rename: { "nmae": "name" } } + ) -This operation renames the field ``nmae`` to ``name`` for all documents -in the collection: +This operation checks for documents where the ``nmae`` field is not null +and updates those documents to rename the ``nmae`` field to ``name``: .. code-block:: javascript @@ -123,10 +153,12 @@ in the collection: "name" : { "first" : "abigail", "last" : "adams" } } - { "_id" : 3, + { + "_id" : 3, "alias" : [ "Amazing grace" ], "mobile" : "111-111-1111", - "name" : { "first" : "grace", "last" : "hopper" } } + "name" : { "first" : "grace", "last" : "hopper" } + } .. _rename-field-in-embedded-document: @@ -170,4 +202,3 @@ This operation does nothing because there is no field named ``wife``. - :method:`db.collection.updateMany()` - :method:`db.collection.findAndModify()` - diff --git a/source/reference/operator/update/set.txt b/source/reference/operator/update/set.txt index fc122353508..edaddc2f88a 100644 --- a/source/reference/operator/update/set.txt +++ b/source/reference/operator/update/set.txt @@ -8,6 +8,9 @@ $set :name: programming_language :values: shell +.. 
meta:: + :description: Use the $set operator to replace the value of a field with the specified value. $set supports updating or creating fields at the top level, in embedded documents, or within arrays. + .. contents:: On this page :local: :backlinks: none @@ -153,6 +156,23 @@ After updating, the document has the following values: ratings: [ { by: 'Customer007', rating: 4 } ] } +.. important:: + + The above code uses ``dot notation`` to update the ``make`` field of the + embedded ``details`` document. The code format looks similar to the following + code example, which instead *replaces the entire embedded document*, removing + all other fields in the embedded ``details`` document: + + .. code-block:: javascript + :copyable: false + + db.products.updateOne( + { _id: 100 }, + { $set: { details: + {make: "Kustom Kidz"} + } + }) + Set Elements in Arrays ~~~~~~~~~~~~~~~~~~~~~~ diff --git a/source/reference/operator/update/unset.txt b/source/reference/operator/update/unset.txt index 480a87ba07d..03bf9b46e75 100644 --- a/source/reference/operator/update/unset.txt +++ b/source/reference/operator/update/unset.txt @@ -8,6 +8,9 @@ $unset :name: programming_language :values: shell +.. meta:: + :description: Learn how to delete specific fields in MongoDB documents using the $unset operator. + .. contents:: On this page :local: :backlinks: none diff --git a/source/reference/parameters.txt b/source/reference/parameters.txt index 4b6167290f0..f443f942831 100644 --- a/source/reference/parameters.txt +++ b/source/reference/parameters.txt @@ -73,8 +73,7 @@ Authentication Parameters .. include:: /includes/list-table-auth-mechanisms.rst - You can only set :parameter:`authenticationMechanisms` during - start-up. + .. include:: /includes/fact-startup-parameter For example, to specify both ``PLAIN`` and ``SCRAM-SHA-256`` as the authentication mechanisms, use the following command: @@ -87,7 +86,7 @@ Authentication Parameters .. parameter:: awsSTSRetryCount - .. versionchanged:: 7.0 (Also starting in 6.0.7, 5.0.18, 4.4.22) + .. versionchanged:: 7.0 (Also starting in 6.0.7 and 5.0.18) In previous versions, AWS IAM authentication retried only when the server returned an HTTP 500 error. @@ -130,6 +129,8 @@ Authentication Parameters .. include:: /includes/extracts/ssl-facts-see-more.rst + .. include:: /includes/fact-runtime-parameter + .. code-block:: bash db.adminCommand( { setParameter: 1, clusterAuthMode: "sendX509" } ) @@ -138,17 +139,34 @@ Authentication Parameters |both| + *Default*: ``true`` + Specify ``0`` or ``false`` to disable localhost authentication bypass. Enabled by default. - :parameter:`enableLocalhostAuthBypass` is not available using - :dbcommand:`setParameter` database command. Use the - :setting:`setParameter` option in the configuration file or the - :option:`--setParameter ` option on the - command line. + .. include:: /includes/fact-startup-parameter See :ref:`localhost-exception` for more information. +.. parameter:: enforceUserClusterSeparation + + |both| + + Set to ``false`` to disable the ``O/OU/DC`` check when + ``clusterAuthMode`` is ``keyFile`` in your configuration file. This + allows clients possessing member certificates to authenticate as + users stored in the ``$external`` database. The server won't start if + ``clusterAuthMode`` isn't ``keyFile`` in your configuration file. + + To set the ``enforceUserClusterSeparation`` parameter to ``false``, + run the following command during startup: + + .. 
code-block:: javascript + + mongod --setParameter enforceUserClusterSeparation=false + + .. include:: /includes/fact-enforce-user-cluster-separation-parameter.rst + .. parameter:: KeysRotationIntervalSec *Default*: 7776000 seconds (90 days) @@ -158,9 +176,7 @@ Authentication Parameters is valid before rotating to the next one. This parameter is intended primarily to facilitate authentication testing. - You can only set :parameter:`KeysRotationIntervalSec` during - start-up, and cannot change this setting with the - :dbcommand:`setParameter` database command. + .. include:: /includes/fact-startup-parameter .. parameter:: ldapForceMultiThreadMode @@ -187,8 +203,6 @@ Authentication Parameters libldap version, please contact MongoDB Support. .. parameter:: ldapQueryPassword - - .. versionadded:: 4.4 |both| @@ -200,8 +214,6 @@ Authentication Parameters If not set, mongod or mongos does not attempt to bind to the LDAP server. .. parameter:: ldapQueryUser - - .. versionadded:: 4.4 |both| @@ -400,7 +412,7 @@ Authentication Parameters .. versionadded:: 4.2.1 - *Changed starting in MongoDB versions 4.4.15, 5.0.9, and 6.0.0* + *Changed starting in MongoDB versions 5.0.9 and 6.0.0* Changed default value to ``2147483647``. In previous versions, the default is unset. @@ -417,7 +429,7 @@ Authentication Parameters .. versionadded:: 4.2.1 - *Changed starting in MongoDB versions 4.4.15, 5.0.9, and 6.0.0* + *Changed starting in MongoDB versions 5.0.9 and 6.0.0* Changed default value to ``2``. In previous versions, the default is unset. @@ -484,7 +496,7 @@ Authentication Parameters .. parameter:: maxValidateMemoryUsageMB - .. versionadded:: 5.0 (*Also available starting in 4.4.7.*) + .. versionadded:: 5.0 *Default*: 200 @@ -493,9 +505,7 @@ Authentication Parameters :dbcommand:`validate` returns as many results as possible and warns that not all corruption might be reported because of the limit. - You can set :parameter:`maxValidateMemoryUsageMB` during startup, and - can change this setting using the :dbcommand:`setParameter` database - command. + .. include:: /includes/fact-runtime-startup-parameter .. parameter:: oidcIdentityProviders @@ -508,31 +518,26 @@ Authentication Parameters (IDP) configurations. An empty array (default) indicates no OpenID Connect support is enabled. When more than one IDP is defined, ``oidcIdentityProviders`` uses the ``matchPattern`` field to select an IDP. Array order determines the - priority and the first IDP is always selected. + priority and the first IDP is always selected. oidcIdentityProviders Fields ```````````````````````````` .. include:: /includes/fact-oidc-providers.rst - You can only set ``oidcIdentityProviders`` during startup in the - :setting:`configuration file ` or with the - ``--setParameter`` option on the command line. + .. include:: /includes/fact-startup-parameter .. parameter:: ocspEnabled - .. versionadded:: 4.4 - - Available on Linux and macOS. + Available on Linux and macOS. *Default*: true The flag that enables or disables OCSP. - You can only set :parameter:`ocspEnabled` during startup in the - :setting:`configuration file ` or with the - ``--setParameter`` option on the command line. For example, the - following disables OCSP: + .. include:: /includes/fact-startup-parameter + + For example, the following disables OCSP: .. code-block:: bash @@ -548,9 +553,7 @@ Authentication Parameters .. parameter:: ocspValidationRefreshPeriodSecs - .. versionadded:: 4.4 - - Available on Linux. + Available on Linux. 
The number of seconds to wait before refreshing the stapled OCSP status response. Specify a number greater than or equal to 1. @@ -593,9 +596,7 @@ Authentication Parameters cipher suites for use with TLS 1.3, use the :parameter:`opensslCipherSuiteConfig` parameter. - You can only set :parameter:`opensslCipherConfig` during start-up, - and cannot change this setting using the :dbcommand:`setParameter` - database command. + .. include:: /includes/fact-startup-parameter For version 4.2 and greater, the use of ``TLS`` options is preferred over ``SSL`` options. The TLS options have the same functionality as @@ -626,11 +627,10 @@ Authentication Parameters strings for use with TLS 1.2 or earlier, use the :parameter:`opensslCipherConfig` parameter. - You can only set :parameter:`opensslCipherSuiteConfig` during - start-up, and cannot change this setting using the - :dbcommand:`setParameter` database command. For example, the - following configures a :binary:`~bin.mongod` with a - :parameter:`opensslCipherSuiteConfig` cipher suite of + .. include:: /includes/fact-startup-parameter + + For example, the following configures a :binary:`~bin.mongod` + with a :parameter:`opensslCipherSuiteConfig` cipher suite of ``'TLS_AES_256_GCM_SHA384'`` for use with TLS 1.3: .. code-block:: bash @@ -667,9 +667,7 @@ Authentication Parameters not supported with Java 6 and 7 unless extended support has been purchased from Oracle. - You can only set :parameter:`opensslDiffieHellmanParameters` during - startup, and cannot change this setting using the - :dbcommand:`setParameter` database command. + .. include:: /includes/fact-startup-parameter If for performance reasons, you need to disable support for DHE cipher suites, use the :parameter:`opensslCipherConfig` parameter: @@ -687,6 +685,8 @@ Authentication Parameters Specify the path to the Unix Domain Socket of the ``saslauthd`` instance to use for proxy authentication. + .. include:: /includes/fact-startup-parameter + .. parameter:: saslHostName |both| @@ -699,9 +699,7 @@ Authentication Parameters :binary:`~bin.mongod` or :binary:`~bin.mongos` instance for any purpose beyond the configuration of SASL and Kerberos. - You can only set :parameter:`saslHostName` during start-up, and - cannot change this setting using the :dbcommand:`setParameter` - database command. + .. include:: /includes/fact-startup-parameter .. note:: @@ -725,9 +723,7 @@ Authentication Parameters principal name, on a per-instance basis. If unspecified, the default value is ``mongodb``. - MongoDB only permits setting :parameter:`saslServiceName` at - startup. The :dbcommand:`setParameter` command can not change - this setting. + .. include:: /includes/fact-startup-parameter :parameter:`saslServiceName` is only available in MongoDB Enterprise. @@ -752,6 +748,8 @@ Authentication Parameters existing passwords. The :parameter:`scramIterationCount` value must be ``5000`` or greater. + .. include:: /includes/fact-runtime-startup-parameter + For example, the following sets the :parameter:`scramIterationCount` to ``12000``. @@ -788,6 +786,8 @@ Authentication Parameters existing passwords. The :parameter:`scramSHA256IterationCount` value must be ``5000`` or greater. + .. include:: /includes/fact-runtime-startup-parameter + For example, the following sets the :parameter:`scramSHA256IterationCount` to ``20000``. @@ -818,6 +818,8 @@ Authentication Parameters .. include:: /includes/extracts/ssl-facts-see-more.rst + .. include:: /includes/fact-runtime-parameter + .. 
code-block:: bash db.adminCommand( { setParameter: 1, sslMode: "preferSSL" } ) @@ -842,6 +844,8 @@ Authentication Parameters upgrade to TLS/SSL ` to minimize downtime. + .. include:: /includes/fact-runtime-parameter + .. code-block:: bash db.adminCommand( { setParameter: 1, tlsMode: "preferTLS" } ) @@ -870,11 +874,11 @@ Authentication Parameters Use this parameter to rotate certificates when the new certificates have different attributes or extension values. -.. parameter:: tlsOCSPStaplingTimeoutSecs + .. include:: /includes/fact-startup-parameter - .. versionadded:: 4.4 +.. parameter:: tlsOCSPStaplingTimeoutSecs - Available for Linux. + Available for Linux. The maximum number of seconds the :binary:`mongod` / :binary:`mongos` instance should wait to @@ -884,11 +888,10 @@ Authentication Parameters :parameter:`tlsOCSPStaplingTimeoutSecs` uses the :parameter:`tlsOCSPVerifyTimeoutSecs` value. - You can only set :parameter:`tlsOCSPStaplingTimeoutSecs` during - startup in the :setting:`configuration file ` or with - the ``--setParameter`` option on the command line. For example, the - following sets the :parameter:`tlsOCSPStaplingTimeoutSecs` to 20 - seconds: + .. include:: /includes/fact-startup-parameter + + For example, the following sets the + :parameter:`tlsOCSPStaplingTimeoutSecs` to 20 seconds: .. code-block:: bash @@ -902,9 +905,7 @@ Authentication Parameters .. parameter:: tlsOCSPVerifyTimeoutSecs - .. versionadded:: 4.4 - - Available for Linux and Windows. + Available for Linux and Windows. *Default*: 5 @@ -914,11 +915,10 @@ Authentication Parameters Specify an integer greater than or equal to (``>=``) 1. - You can only set :parameter:`tlsOCSPVerifyTimeoutSecs` during - startup in the :setting:`configuration file ` or with - the ``--setParameter`` option on the command line. For example, the - following sets the :parameter:`tlsOCSPVerifyTimeoutSecs` to 20 - seconds: + .. include:: /includes/fact-startup-parameter + + For example, the following sets the + :parameter:`tlsOCSPVerifyTimeoutSecs` to 20 seconds: .. code-block:: bash @@ -930,6 +930,31 @@ Authentication Parameters - :parameter:`ocspValidationRefreshPeriodSecs` - :parameter:`tlsOCSPStaplingTimeoutSecs` +.. parameter:: tlsUseSystemCA + + |mongod-only| + + *Type*: boolean + + *Default*: false + + Specifies whether MongoDB loads TLS certificates that are already + available to the operating system's certificate authority. + + .. important:: + + .. include:: /includes/fact-ssl-tlsCAFile-tlsUseSystemCA.rst + + .. include:: /includes/fact-startup-parameter + + For example, to set ``tlsUseSystemCA`` to ``true``: + + .. code-block:: bash + + mongod --setParameter tlsUseSystemCA=true + + .. include:: /includes/extracts/ssl-facts-see-more.rst + .. parameter:: tlsWithholdClientCertificate .. versionadded:: 4.2 @@ -954,6 +979,8 @@ Authentication Parameters deployment. ``tlsWithholdClientCertificate`` is mutually exclusive with :option:`--clusterAuthMode x509 `. + .. include:: /includes/fact-startup-parameter + .. parameter:: tlsX509ClusterAuthDNOverride .. versionadded:: 4.2 @@ -988,6 +1015,8 @@ Authentication Parameters If set, you must set this parameter on all members of the deployment. + .. include:: /includes/fact-runtime-startup-parameter + You can use this parameter for a rolling update of certificates to new certificates that contain a new ``DN`` value. See :doc:`/tutorial/rotate-x509-membership-certificates`. @@ -997,16 +1026,14 @@ Authentication Parameters .. parameter:: tlsX509ExpirationWarningThresholdDays - .. 
versionadded:: 4.4 - |both| *Default* : 30 - Starting in MongoDB 4.4, :binary:`mongod` / :binary:`mongos` - logs a warning on connection if the presented x.509 certificate - expires within ``30`` days of the ``mongod/mongos`` system clock. - Use the :parameter:`tlsX509ExpirationWarningThresholdDays` parameter + :binary:`mongod` / :binary:`mongos` logs a warning on connection if the + presented x.509 certificate expires within ``30`` days of the + ``mongod/mongos`` system clock. Use the + :parameter:`tlsX509ExpirationWarningThresholdDays` parameter to control the certificate expiration warning threshold: - Increase the parameter value to trigger warnings farther ahead of @@ -1019,17 +1046,10 @@ Authentication Parameters This parameter has a minimum value of ``0``. - You can only set :parameter:`tlsX509ExpirationWarningThresholdDays` - during ``mongod/mongos`` startup using either: - - - The :setting:`setParameter` configuration setting, *or* - - - The :option:`mongod --setParameter ` / - :option:`mongos --setParameter ` command - line option. + .. include:: /includes/fact-startup-parameter See :ref:`4.4-rel-notes-certificate-expiration-warning` for more - information on x.509 expiration warnings in MongoDB 4.4. + information on x.509 expiration warnings. For more information on x.509 certificate validity, see `RFC 5280 4.1.2.5 `__. @@ -1076,6 +1096,8 @@ Authentication Parameters This parameter has a minimum value of ``1`` second and a maximum value of ``86400`` seconds (24 hours). + .. include:: /includes/fact-runtime-startup-parameter + .. parameter:: authFailedDelayMs |both| @@ -1103,8 +1125,7 @@ Authentication Parameters A boolean flag that allows or disallows the retrieval of authorization roles from client x.509 certificates. - You can only set :parameter:`allowRolesFromX509Certificates` during - startup in the config file or on the command line. + .. include:: /includes/fact-startup-parameter General Parameters ~~~~~~~~~~~~~~~~~~ @@ -1117,6 +1138,8 @@ General Parameters .. include:: /includes/fact-allowDiskUseByDefault.rst + .. include:: /includes/fact-runtime-startup-parameter + .. code-block:: bash mongod --setParameter allowDiskUseByDefault=false @@ -1137,55 +1160,6 @@ General Parameters } ) -.. parameter:: connPoolMaxShardedConnsPerHost - - |both| - - *Default*: 200 - - Sets the maximum size of the legacy connection pools for communication to the - shards. The size of a pool does not prevent the creation of - additional connections, but *does* prevent the connection pools from - retaining connections above this limit. - - .. note:: - - The parameter is separate from the connections in TaskExecutor - pools. See :parameter:`ShardingTaskExecutorPoolMaxSize`. - - Increase the :parameter:`connPoolMaxShardedConnsPerHost` value - **only** if the number of connections in a connection pool has a - high level of churn or if the total number of created connections - increase. - - You can only set :parameter:`connPoolMaxShardedConnsPerHost` during - startup in the config file or on the command line. For example: - - .. code-block:: bash - - mongos --setParameter connPoolMaxShardedConnsPerHost=250 - - -.. parameter:: connPoolMaxShardedInUseConnsPerHost - - |both| - - Sets the maximum number of in-use connections at any given time for - the legacy sharded cluster connection pools. - - By default, the parameter is unset. - - You can only set :parameter:`connPoolMaxShardedConnsPerHost` during - startup in the config file or on the command line. For example: - - .. 
code-block:: bash - - mongos --setParameter connPoolMaxShardedInUseConnsPerHost=100 - - .. seealso:: - - :parameter:`connPoolMaxShardedConnsPerHost` - .. parameter:: httpVerboseLogging |both| @@ -1194,32 +1168,12 @@ General Parameters By default, the parameter is unset. - You can only set ``httpVerboseLogging`` during - startup in the config file or on the command line. For example: + .. include:: /includes/fact-runtime-startup-parameter .. code-block:: bash mongos --setParameter httpVerboseLogging=true -.. parameter:: shardedConnPoolIdleTimeoutMinutes - - |both| - - Sets the time limit that a connection in the legacy sharded cluster - connection pool can remain idle before being closed. - - By default, the parameter is unset. - - You can only set :parameter:`shardedConnPoolIdleTimeoutMinutes` during - startup in the config file or on the command line. For example: - - .. code-block:: bash - - mongos --setParameter shardedConnPoolIdleTimeoutMinutes=10 - - .. seealso:: - - :parameter:`connPoolMaxShardedConnsPerHost` .. parameter:: slowConnectionThresholdMillis @@ -1237,6 +1191,8 @@ General Parameters added to the :ref:`log ` with the message ``msg`` field set to ``"Slow connection establishment"``. + .. include:: /includes/fact-runtime-startup-parameter + The following example sets :parameter:`slowConnectionThresholdMillis` to ``250`` milliseconds. @@ -1272,8 +1228,7 @@ General Parameters connections and you're using authentication in the context of a sharded cluster. - You can only set :parameter:`connPoolMaxConnsPerHost` during startup - in the config file or on the command line. For example: + .. include:: /includes/fact-startup-parameter .. code-block:: bash @@ -1289,8 +1244,7 @@ General Parameters By default, the parameter is unset. - You can only set :parameter:`connPoolMaxInUseConnsPerHost` during - startup in the config file or on the command line. For example: + .. include:: /includes/fact-startup-parameter .. code-block:: bash @@ -1309,18 +1263,12 @@ General Parameters By default, the parameter is unset. - You can only set :parameter:`globalConnPoolIdleTimeoutMinutes` - during startup in the config file or on the command line. For - example: + .. include:: /includes/fact-startup-parameter .. code-block:: bash mongos --setParameter globalConnPoolIdleTimeoutMinutes=10 - .. seealso:: - - :parameter:`connPoolMaxShardedConnsPerHost` - .. parameter:: cursorTimeoutMillis |both| @@ -1331,6 +1279,8 @@ General Parameters MongoDB removes them; specifically, MongoDB removes cursors that have been idle for the specified :parameter:`cursorTimeoutMillis`. + .. include:: /includes/fact-runtime-startup-parameter + For example, the following sets the :parameter:`cursorTimeoutMillis` to ``300000`` milliseconds (5 minutes). @@ -1354,10 +1304,9 @@ General Parameters .. warning:: - Starting in MongoDB 4.4.8, MongoDB cleans up - :term:`orphaned cursors ` linked to sessions as - part of session management. This means that orphaned cursors with - session ids do not use ``cursorTimeoutMillis`` to control the + MongoDB cleans up :term:`orphaned cursors ` linked to + sessions as part of session management. This means that orphaned cursors + with session ids do not use ``cursorTimeoutMillis`` to control the timeout. For operations that return a cursor and have an idle period @@ -1367,47 +1316,6 @@ General Parameters the :dbcommand:`refreshSessions` command. For details, see :ref:``. -.. parameter:: failIndexKeyTooLong - - *Removed in 4.4* - - .. 
important:: - - - **MongoDB 4.4** *removes* the deprecated - :parameter:`failIndexKeyTooLong` parameter. Attempting to use - this parameter with MongoDB 4.4 will result in an error. - - - **MongoDB 4.2** *deprecates* the - :parameter:`failIndexKeyTooLong` parameter and *removes* the - :limit:`Index Key Length Limit ` for - :ref:`featureCompatibilityVersion ` (fCV) set to - ``"4.2"`` or greater. - - Setting :parameter:`failIndexKeyTooLong` to ``false`` is - a temporary workaround, not a permanent solution to the - problem of oversized index keys. With - :parameter:`failIndexKeyTooLong` set to ``false``, queries can - return incomplete results if they use indexes that skip over - documents whose indexed fields exceed the - :limit:`Index Key Length Limit `. - - :parameter:`failIndexKeyTooLong` defaults to ``true``. - - Issue the following command to disable the index key length - validation: - - .. code-block:: javascript - - db.adminCommand( { setParameter: 1, failIndexKeyTooLong: false } ) - - You can also set :parameter:`failIndexKeyTooLong` at startup with the - following option: - - .. code-block:: bash - - mongod --setParameter failIndexKeyTooLong=false - - .. parameter:: maxNumActiveUserIndexBuilds |mongod-only| @@ -1439,6 +1347,8 @@ General Parameters Too many index builds running simultaneously, waiting until the number of active index builds is below the threshold. + .. include:: /includes/fact-runtime-startup-parameter + The following command sets a limit of 4 concurrent index builds: .. code-block:: javascript @@ -1480,6 +1390,15 @@ General Parameters :parameter:`notablescan` because preventing collection scans can potentially affect queries in all databases, including administrative queries. + .. include:: /includes/fact-runtime-startup-parameter + + .. note:: + + ``notablescan`` does not allow unbounded queries that use a + clustered index because the queries require a full collection + scan. For more information, see :ref:`Collection Scans + `. + .. parameter:: ttlMonitorEnabled |mongod-only| @@ -1490,6 +1409,8 @@ General Parameters instances have a background thread that is responsible for deleting documents from collections with TTL indexes. + .. include:: /includes/fact-runtime-startup-parameter + To disable this worker thread for a :binary:`~bin.mongod`, set :parameter:`ttlMonitorEnabled` to ``false``, as in the following operations: @@ -1514,8 +1435,6 @@ General Parameters .. parameter:: tcpFastOpenServer - .. versionadded:: 4.4 - |both| *Default*: ``true`` @@ -1547,9 +1466,7 @@ General Parameters This parameter has no effect if the host operating system does not support *or* is not configured to support TFO connections. - You can only set this parameter on startup, using either the - :setting:`setParameter` configuration file setting or the - :option:`--setParameter ` command line option. + .. include:: /includes/fact-startup-parameter See :ref:`4.4-rel-notes-tcp-fast-open` for more information on MongoDB TFO support. @@ -1560,8 +1477,6 @@ General Parameters .. parameter:: tcpFastOpenClient - .. versionadded:: 4.4 - |both| *Default*: ``true`` @@ -1584,9 +1499,7 @@ General Parameters This parameter has no effect if the host operating system does not support *or* is not configured to support TFO connections. - You can only set this parameter on startup, using either the - :setting:`setParameter` configuration file setting or the - :option:`--setParameter ` command line option. + .. 
include:: /includes/fact-startup-parameter See :ref:`4.4-rel-notes-tcp-fast-open` for more information on MongoDB TFO support. @@ -1597,8 +1510,6 @@ General Parameters .. parameter:: tcpFastOpenQueueSize - .. versionadded:: 4.4 - |both| *Default*: ``1024`` @@ -1633,6 +1544,8 @@ General Parameters :ref:`4.4-rel-notes-tcp-fast-open` for more information on MongoDB TFO support. + .. include:: /includes/fact-startup-parameter + .. seealso:: - `RFC7413 TCP Fast Open Section 5: Security Considerations @@ -1648,6 +1561,8 @@ General Parameters The MongoDB JavaScript engine uses SpiderMonkey, which implements Just-in-Time (JIT) compilation for improved performance when running scripts. + .. include:: /includes/fact-runtime-startup-parameter + To enable the JIT, set :parameter:`disableJavaScriptJIT` to ``false``, as in the following example: @@ -1693,9 +1608,9 @@ General Parameters disk space and ``indexBuildMinAvailableDiskSpaceMB`` could be set lower. - To modify ``indexBuildMinAvailableDiskSpaceMB`` during runtime, use - the :dbcommand:`setParameter` command. The following example sets - ``indexBuildMinAvailableDiskSpaceMB`` to 650 MB: + .. include:: /includes/fact-runtime-startup-parameter + + The following example sets ``indexBuildMinAvailableDiskSpaceMB`` to 650 MB: .. code-block:: javascript @@ -1720,6 +1635,8 @@ General Parameters :parameter:`indexMaxNumGeneratedKeysPerDocument` parameter specifies, the operation will fail. + .. include:: /includes/fact-startup-parameter + .. parameter:: maxIndexBuildMemoryUsageMegabytes *Default*: @@ -1735,6 +1652,8 @@ General Parameters :dbcommand:`createIndexes` command or its shell helper :method:`db.collection.createIndexes()`. + .. include:: /includes/fact-runtime-startup-parameter + The memory consumed by an index build is separate from the WiredTiger cache memory (see :setting:`~storage.wiredTiger.engineConfig.cacheSizeGB`). @@ -1751,9 +1670,7 @@ General Parameters :method:`db.serverStatus()` method and :dbcommand:`serverStatus` command return :serverstatus:`opWriteConcernCounters` information. [#perf]_ - You can only set - :parameter:`reportOpWriteConcernCountersInServerStatus` during - startup in the config file or on the command line. For example: + .. include:: /includes/fact-startup-parameter .. code-block:: bash @@ -1844,6 +1761,32 @@ General Parameters .. seealso: :ref:`storage-node-watchdog` +.. parameter:: tcmallocAggressiveMemoryDecommit + + *Type*: integer (``0`` or ``1`` only) + + Default: 0 + + If you enable ``tcmallocAggressiveMemoryDecommit``, MongoDB: + + - releases a :term:`chunk ` of memory to the system, and + + - attempts to return all neighboring free chunks. + + A value of ``1`` enables ``tcmallocAggressiveMemoryDecommit``; + ``0`` disables this parameter. + + .. include:: /includes/fact-runtime-startup-parameter + + If you enable this parameter, the system will require new memory allocations + for use. Consider enabling ``tcmallocAggressiveMemoryDecommit`` + only on memory-constrained systems and after pursuing other memory and + performance options. + + Despite the potential performance degradation when using + ``tcmallocAggressiveMemoryDecommit``, it is often preferred over using + :parameter:`tcmallocReleaseRate`. + .. parameter:: tcmallocReleaseRate .. versionadded:: 4.2.3 @@ -1860,6 +1803,14 @@ General Parameters return memory faster; decrease it to return memory slower. Reasonable rates are in the range [0,10]." + .. 
note:: + + Consider using :parameter:`tcmallocAggressiveMemoryDecommit` instead of + :parameter:`tcmallocReleaseRate`, unless you see a significant performance + degradation when using ``tcmallocAggressiveMemoryDecommit``. + + .. include:: /includes/fact-runtime-startup-parameter + To modify the release rate during run time, you can use the :dbcommand:`setParameter` command; for example: @@ -1890,6 +1841,8 @@ General Parameters ``fassertOnLockTimeoutForStepUpDown`` defaults to 15 seconds. To disable nodes from fasserting, set ``fassertOnLockTimeoutForStepUpDown=0``. + .. include:: /includes/fact-runtime-startup-parameter + The following example disables nodes from fasserting: .. code-block:: bash @@ -1904,7 +1857,7 @@ Logging Parameters |both| Specify an integer between ``0`` and ``5`` signifying the verbosity - of the :doc:`logging `, where ``5`` is the + of the :ref:`logging `, where ``5`` is the most verbose. [#log-message]_ The default :parameter:`logLevel` is ``0`` (Informational). @@ -2072,6 +2025,23 @@ Logging Parameters mongod --setParameter maxLogSizeKB=20 +.. parameter:: profileOperationResourceConsumptionMetrics + + |mongod-only| + + *Type*: boolean + + *Default*: false + + Flag that determines whether operations collect resource + consumption metrics and report them in the slow query logs. + If you enable :ref:`profiling `, + these metrics are also included. + + If set to ``true``, running the :dbcommand:`explain` command + returns :ref:`operationMetrics ` when the verbosity + is ``executionStats`` or higher. + .. parameter:: quiet |both| @@ -2203,7 +2173,7 @@ Logging Parameters *Default*: false By default, a :binary:`~bin.mongod` or :binary:`~bin.mongos` with - :doc:`TLS/SSL enabled ` and + :ref:`TLS/SSL enabled ` and :setting:`net.ssl.allowConnectionsWithoutCertificates` : ``true`` lets clients connect without providing a certificate for validation while logging an warning. Set @@ -2682,7 +2652,7 @@ If you attempt to update ``disableSplitHorizonIPCheck`` at run time, .. parameter:: storeFindAndModifyImagesInSideCollection - .. versionadded:: 5.1 + .. versionadded:: 5.0 |both| @@ -2695,7 +2665,7 @@ If you attempt to update ``disableSplitHorizonIPCheck`` at run time, commands are stored in the *side* collection (``config.image_collection``). - If :parameter:`storeFindAndModifyImagesInSideCollection` is: + If ``storeFindAndModifyImagesInSideCollection`` is: - ``true``, the temporary documents are stored in the side collection. @@ -2703,7 +2673,7 @@ If you attempt to update ``disableSplitHorizonIPCheck`` at run time, - ``false``, the temporary documents are stored in the :ref:`replica set oplog `. - Keep :parameter:`storeFindAndModifyImagesInSideCollection` set to + Keep ``storeFindAndModifyImagesInSideCollection`` set to ``true`` if you: - Have a large :ref:`retryable ` @@ -2717,11 +2687,11 @@ If you attempt to update ``disableSplitHorizonIPCheck`` at run time, .. note:: :term:`Secondaries ` may experience increased CPU - usage when :parameter:`storeFindAndModifyImagesInSideCollection` + usage when ``storeFindAndModifyImagesInSideCollection`` is ``true``. For example, to set - :parameter:`storeFindAndModifyImagesInSideCollection` to ``false`` + ``storeFindAndModifyImagesInSideCollection`` to ``false`` during startup: .. code-block:: bash @@ -2818,8 +2788,6 @@ If you attempt to update ``disableSplitHorizonIPCheck`` at run time, .. parameter:: initialSyncTransientErrorRetryPeriodSeconds - .. 
versionadded:: 4.4 - *Type*: integer *Default*: 86400 @@ -2830,8 +2798,6 @@ If you attempt to update ``disableSplitHorizonIPCheck`` at run time, .. parameter:: initialSyncSourceReadPreference - .. versionadded:: 4.4 - |mongod-only| *Type*: String @@ -2883,19 +2849,19 @@ If you attempt to update ``disableSplitHorizonIPCheck`` at run time, *Default*: ``logical`` - Available only in MongoDB Enterprise. + Available only in MongoDB Enterprise. - Method used for :ref:`initial sync `. + Method used for :ref:`initial sync `. Set to ``logical`` to use :ref:`logical initial sync `. Set to ``fileCopyBased`` to use :ref:`file copy based initial sync - `. + `. This parameter only affects the sync method for the member on which it is specified. Setting this parameter on a single replica set member does not affect the sync method of any other replica set - members. + members. You can only set this parameter on startup, using either the :setting:`setParameter` configuration file setting or the @@ -2925,8 +2891,6 @@ If you attempt to update ``disableSplitHorizonIPCheck`` at run time, .. parameter:: oplogFetcherUsesExhaust - .. versionadded:: 4.4 - |mongod-only| *Type*: boolean @@ -2938,7 +2902,7 @@ If you attempt to update ``disableSplitHorizonIPCheck`` at run time, streaming replication. Set the value to ``false`` to disable streaming replication. If - disabled, secondaries fetch batches of :doc:`oplog ` + disabled, secondaries fetch batches of :ref:`oplog ` entries by issuing a request to their *sync from* source and waiting for a response. This requires a network roundtrip for each batch of :doc:`oplog ` entries. @@ -3108,8 +3072,6 @@ If you attempt to update ``disableSplitHorizonIPCheck`` at run time, .. parameter:: mirrorReads - .. versionadded:: 4.4 - |mongod-only| *Type*: Document @@ -3548,8 +3510,6 @@ Sharding Parameters .. parameter:: disableResumableRangeDeleter - .. versionadded:: 4.4 - |mongod-only| *Type*: boolean @@ -3591,8 +3551,6 @@ Sharding Parameters .. parameter:: enableShardedIndexConsistencyCheck - .. versionadded:: 4.4 (*Also available starting in 4.2.6.*) - |mongod-only| *Type*: boolean @@ -3662,8 +3620,6 @@ Sharding Parameters .. parameter:: shardedIndexConsistencyCheckIntervalMS - .. versionadded:: 4.4 (*Also available starting in 4.2.6.*) - |mongod-only| *Type*: integer @@ -3693,8 +3649,6 @@ Sharding Parameters .. parameter:: enableFinerGrainedCatalogCacheRefresh - .. versionadded:: 4.4 - |both| *Type*: boolean @@ -3722,8 +3676,6 @@ Sharding Parameters .. parameter:: maxTimeMSForHedgedReads - .. versionadded:: 4.4 - |mongos-only| *Type*: integer @@ -3759,7 +3711,7 @@ Sharding Parameters .. parameter:: maxCatchUpPercentageBeforeBlockingWrites - .. versionadded:: 5.0 (*Also available starting in 4.4.7, 4.2.15*) + .. versionadded:: 5.0 |mongod-only| @@ -3998,8 +3950,6 @@ Sharding Parameters .. parameter:: readHedgingMode - .. versionadded:: 4.4 - |mongos-only| *Type*: string @@ -4295,12 +4245,6 @@ Sharding Parameters startup of the :binary:`~bin.mongos` instance before it begins accepting incoming client connections. - .. note:: - - In MongoDB 4.4, the - :parameter:`warmMinConnectionsInShardingTaskExecutorPoolOnStartup` - parameter is enabled by default for the :binary:`~bin.mongos`. - The following example sets :parameter:`ShardingTaskExecutorPoolMinSize` to ``2`` during startup: @@ -4359,7 +4303,7 @@ Sharding Parameters Default: 60000 (1 minute) Maximum time the :binary:`~bin.mongos` waits before attempting to - heartbeat a resting connection in the pool. 
An idle connection may be + heartbeat an idle connection in the pool. An idle connection may be discarded during the refresh if the pool is above its :ref:`minimum size `. @@ -4419,7 +4363,7 @@ Sharding Parameters .. parameter:: ShardingTaskExecutorPoolReplicaSetMatching .. versionadded:: 4.2 - .. versionchanged:: 5.0 (*Also starting in 4.4.5 and 4.2.13*) + .. versionchanged:: 5.0 |both| @@ -4449,8 +4393,7 @@ Sharding Parameters * - ``"automatic"`` (Default) - - Starting in 5.0 (and 4.4.5 and 4.2.13), ``"automatic"`` is the - new default value. + - Starting in 5.0, ``"automatic"`` is the new default value. When set for a :binary:`~bin.mongos`, the instance follows the behavior specified for the ``"matchPrimaryNode"`` option. @@ -4587,8 +4530,6 @@ Sharding Parameters .. parameter:: loadRoutingTableOnStartup - .. versionadded:: 4.4 - |mongos-only| Type: boolean @@ -4619,8 +4560,6 @@ Sharding Parameters .. parameter:: warmMinConnectionsInShardingTaskExecutorPoolOnStartup - .. versionadded:: 4.4 - |mongos-only| Type: boolean @@ -4658,8 +4597,6 @@ Sharding Parameters .. parameter:: warmMinConnectionsInShardingTaskExecutorPoolOnStartupWaitMS - .. versionadded:: 4.4 - |mongos-only| Type: integer @@ -4790,7 +4727,7 @@ Sharding Parameters .. parameter:: persistedChunkCacheUpdateMaxBatchSize - .. versionadded:: 7.2 (and 7.1.1, 7.0.4) + .. versionadded:: 7.2 (and 7.1.1, 7.0.4, 6.0.13, 5.0.25) |mongod-only| @@ -4874,8 +4811,7 @@ Sharding Parameters Type: Non-negative integer - Default: 2147483647 starting in MongoDB 5.1.2, 5.0.6, and 4.4.12 (128 - in earlier MongoDB versions) + Default: 2147483647 starting in MongoDB 5.1.2 and 5.0.6 The maximum number of documents in each batch to delete during the cleanup stage of :ref:`range migration ` @@ -5142,8 +5078,6 @@ Storage Parameters .. parameter:: processUmask - .. versionadded:: 4.4 - |mongod-only| Overrides the default permissions used for groups and other users @@ -5247,8 +5181,8 @@ Storage Parameters |mongod-only| - Specify the interval in seconds between :term:`fsync` operations - where :binary:`~bin.mongod` flushes its working memory to disk. By + Specify the interval in seconds when + :binary:`~bin.mongod` flushes its working memory to disk. By default, :binary:`~bin.mongod` flushes memory to disk every 60 seconds. In almost every situation you should not set this value and use the default setting. @@ -5260,6 +5194,8 @@ Storage Parameters db.adminCommand( { setParameter: 1, syncdelay: 60 } ) + .. include:: /includes/checkpoints.rst + .. seealso:: - :parameter:`journalCommitInterval` @@ -5338,57 +5274,6 @@ Storage Parameters WiredTiger Parameters ~~~~~~~~~~~~~~~~~~~~~ -.. parameter:: wiredTigerMaxCacheOverflowSizeGB - - .. note:: Deprecated in MongoDB 4.4 - - - MongoDB deprecates the ``wiredTigerMaxCacheOverflowSizeGB`` - parameter. The parameter has no effect starting in MongoDB 4.4. - - |mongod-only| - - *Default*: 0 (No specified maximum) - - Specify the maximum size (in GB) for the "lookaside (or cache - overflow) table" file :file:`WiredTigerLAS.wt` for MongoDB - 4.2.1-4.2.x. The file no longer exists starting in - version 4.4. - - The parameter can accept the following values: - - .. list-table:: - :header-rows: 1 - :widths: 20 80 - - * - Value - - Description - - * - ``0`` - - - The default value. If set to ``0``, the file size is - unbounded. - - * - number >= 0.1 - - - The maximum size (in GB). If the :file:`WiredTigerLAS.wt` - file exceeds this size, :binary:`~bin.mongod` exits with a - fatal assertion. 
You can clear the :file:`WiredTigerLAS.wt` - file and restart :binary:`~bin.mongod`. - - You can only set this parameter during run time using the - :dbcommand:`setParameter` database command: - - .. code-block:: javascript - - db.adminCommand( { setParameter: 1, wiredTigerMaxCacheOverflowSizeGB: 100 } ) - - To set the maximum size during start up, use the - :setting:`storage.wiredTiger.engineConfig.maxCacheOverflowFileSizeGB` - instead. - - .. versionadded:: 4.2.1 - .. parameter:: wiredTigerConcurrentReadTransactions .. versionchanged:: 7.0 @@ -5475,8 +5360,29 @@ WiredTiger Parameters "wiredTigerEngineRuntimeConfig": "` are +:ref:`Change streams ` are available for replica sets and sharded clusters. Change streams allow applications to access real-time data changes without the complexity and risk of tailing the oplog. Applications can use change streams to diff --git a/source/security.txt b/source/security.txt index 74e188a7c82..01844d9cceb 100644 --- a/source/security.txt +++ b/source/security.txt @@ -16,6 +16,9 @@ Security :name: genre :values: reference +.. meta:: + :description: Secure MongoDB deployments. Use authentication, access control, and encryption features to safeguard data. + MongoDB provides various features, such as authentication, access control, encryption, to secure your MongoDB deployments. Some key security features include: diff --git a/source/sharding.txt b/source/sharding.txt index 7626e6a7e6e..545d6834de7 100644 --- a/source/sharding.txt +++ b/source/sharding.txt @@ -16,6 +16,9 @@ Sharding :name: genre :values: reference +.. meta:: + :description: Scale MongoDB deployments horizontally. Use sharding to distribute data across multiple machines, supporting large datasets and high throughput operations. + .. contents:: On this page :local: :backlinks: none @@ -81,19 +84,7 @@ MongoDB supports *horizontal scaling* through :term:`sharding`. Sharded Cluster --------------- -A MongoDB :term:`sharded cluster` consists of the following components: - -* :ref:`shard `: Each shard contains a - subset of the sharded data. Each shard can be deployed as a :term:`replica - set`. - -* :doc:`/core/sharded-cluster-query-router`: The ``mongos`` acts as a - query router, providing an interface between client applications and - the sharded cluster. Starting in MongoDB 4.4, ``mongos`` can support - :ref:`hedged reads ` to minimize latencies. - -* :ref:`config servers `: Config - servers store metadata and configuration settings for the cluster. +.. include:: /includes/fact-sharded-cluster-components.rst The following graphic describes the interaction of components within a sharded cluster: @@ -110,25 +101,18 @@ MongoDB uses the :ref:`shard key ` to distribute the collection's documents across shards. The shard key consists of a field or multiple fields in the documents. -- Starting in version 4.4, documents in sharded collections can be - missing the shard key fields. Missing shard key fields are treated as - having null values when distributing the documents across shards but - not when routing queries. For more information, see - :ref:`shard-key-missing`. - -- In version 4.2 and earlier, shard key fields must exist in every - document for a sharded collection. +Documents in sharded collections can be missing the shard key fields. Missing +shard key fields are treated as having null values when distributing the +documents across shards but not when routing queries. For more information, see +:ref:`shard-key-missing`. You select the shard key when :ref:`sharding a collection `. 
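As a minimal sketch of selecting a shard key when you shard a collection (the ``test.orders`` namespace and ``customerId`` field here are assumptions for illustration only):

.. code-block:: javascript

   // Enable sharding for the hypothetical database, then shard the
   // collection on a hashed customerId shard key.
   sh.enableSharding( "test" )
   sh.shardCollection( "test.orders", { customerId: "hashed" } )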
- Starting in MongoDB 5.0, you can :ref:`reshard a collection ` by changing a collection's shard key. -- Starting in MongoDB 4.4, you can :ref:`refine a shard key - ` by adding a suffix field or fields to the existing - shard key. -- In MongoDB 4.2 and earlier, the choice of shard key cannot - be changed after sharding. +- You can :ref:`refine a shard key ` by adding a suffix field + or fields to the existing shard key. A document's shard key value determines its distribution across the shards. @@ -199,8 +183,8 @@ specific shard or set of shards. These :ref:`targeted operations` are generally more efficient than :ref:`broadcasting ` to every shard in the cluster. -Starting in MongoDB 4.4, :binary:`~bin.mongos` can support :ref:`hedged -reads ` to minimize latencies. +:binary:`~bin.mongos` can support :ref:`hedged reads ` +to minimize latencies. Storage Capacity ~~~~~~~~~~~~~~~~ @@ -387,7 +371,7 @@ and collation. Change Streams -------------- -Starting in MongoDB 3.6, :doc:`change streams ` are +:ref:`Change streams ` are available for replica sets and sharded clusters. Change streams allow applications to access real-time data changes without the complexity and risk of tailing the oplog. Applications can use change streams to diff --git a/source/text-search.txt b/source/text-search.txt index a9f7bb165ff..8c5de1d9c67 100644 --- a/source/text-search.txt +++ b/source/text-search.txt @@ -12,6 +12,7 @@ Text Search .. meta:: :keywords: search + :description: MongoDB offers robust text search capabilities for Atlas-hosted and self-managed deployments, including Atlas Search for fine-grained indexing and a rich query language. .. contents:: On this page :local: diff --git a/source/tutorial.txt b/source/tutorial.txt index f05ce0b2ba5..f48bbb8b7bc 100644 --- a/source/tutorial.txt +++ b/source/tutorial.txt @@ -134,6 +134,7 @@ Data Modeling Patterns - :doc:`/tutorial/model-embedded-one-to-one-relationships-between-documents` - :doc:`/tutorial/model-embedded-one-to-many-relationships-between-documents` +- :doc:`/tutorial/model-embedded-many-to-many-relationships-between-documents` - :doc:`/tutorial/model-referenced-one-to-many-relationships-between-documents` - :doc:`/tutorial/model-data-for-atomic-operations` - :doc:`/tutorial/model-tree-structures-with-parent-references` diff --git a/source/tutorial/adjust-replica-set-member-priority.txt b/source/tutorial/adjust-replica-set-member-priority.txt index 538f38d77f3..c37c30e8c63 100644 --- a/source/tutorial/adjust-replica-set-member-priority.txt +++ b/source/tutorial/adjust-replica-set-member-priority.txt @@ -16,7 +16,7 @@ Overview -------- The ``priority`` settings of replica set members affect both the timing -and the outcome of :doc:`elections ` for +and the outcome of :ref:`elections ` for primary. Higher-priority members are more likely to call elections, and are more likely to win. Use this setting to ensure that some members are more likely to become primary and that others can never become primary. @@ -72,11 +72,10 @@ priority of a non-voting member, consider the following: priority of any remaining members in the replica set to be greater than ``0``. -- Starting in MongoDB 4.4, replica reconfiguration can add or remove - no more than *one* voting member at a time. To change multiple - non-voting members to have a priority greater than ``0``, issue a - series of :dbcommand:`replSetReconfig` or :method:`rs.reconfig()` - operations to modify one member at a time. 
See +- Replica reconfiguration can add or remove no more than *one* voting member at + a time. To change multiple non-voting members to have a priority greater than + ``0``, issue a series of :dbcommand:`replSetReconfig` or + :method:`rs.reconfig()` operations to modify one member at a time. See :ref:`replSetReconfig-cmd-single-node` for more information. Procedure diff --git a/source/tutorial/analyze-query-plan.txt b/source/tutorial/analyze-query-plan.txt index ab4d9b0cfef..011f07dce5b 100644 --- a/source/tutorial/analyze-query-plan.txt +++ b/source/tutorial/analyze-query-plan.txt @@ -1,17 +1,32 @@ -========================= -Analyze Query Performance -========================= +.. _interpret-explain-plan: + +============================== +Interpret Explain Plan Results +============================== .. default-domain:: mongodb +.. facet:: + :name: programming_language + :values: shell + .. contents:: On this page :local: :backlinks: none :depth: 1 :class: singlecol -The :ref:`explain plan results ` for queries are -subject to change between MongoDB versions. +You can use explain results to determine the following information about +a query: + +- The amount of time a query took to complete +- Whether the query used an index +- The number of documents and index keys scanned to fulfill a query + +.. note:: + + Explain plan results for queries are subject to change between + MongoDB versions. .. tabs-drivers:: @@ -141,7 +156,7 @@ Query with No Index - :data:`executionStats.nReturned ` displays ``3`` to - indicate that the query matches and returns three documents. + indicate that the winning query plan returns three documents. - :data:`executionStats.totalKeysExamined ` displays ``0`` @@ -202,7 +217,7 @@ Query with No Index execution stats of the query: - :guilabel:`Documents Returned` displays ``3`` to indicate - that the query matches and returns three documents. + that the winning query plan returns three documents. - :guilabel:`Index Keys Examined` displays ``0`` to indicate that this query is not using an index. @@ -301,8 +316,8 @@ To support the query on the ``quantity`` field, add an index on the ``IXSCAN`` to indicate index use. - :data:`executionStats.nReturned ` - displays ``3`` to indicate that the query matches and - returns three documents. + displays ``3`` to indicate that the winning query plan returns + three documents. - :data:`executionStats.totalKeysExamined ` displays ``3`` @@ -368,7 +383,7 @@ To support the query on the ``quantity`` field, add an index on the execution stats of the query: - :guilabel:`Documents Returned` displays ``3`` to indicate - that the query matches and returns three documents. + that the winning query plan returns three documents. - :guilabel:`Index Keys Examined` displays ``3`` to indicate that MongoDB scanned three index entries. The diff --git a/source/tutorial/authenticate-nativeldap-activedirectory.txt b/source/tutorial/authenticate-nativeldap-activedirectory.txt index 3d7c6e66b0a..02719c0c660 100644 --- a/source/tutorial/authenticate-nativeldap-activedirectory.txt +++ b/source/tutorial/authenticate-nativeldap-activedirectory.txt @@ -48,9 +48,8 @@ This tutorial assumes prior knowledge of SASL and its related subject matter. Configure Internal Member Authentication ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Starting in MongoDB 4.4, you must configure -:ref:`internal member authentication ` before you -can set up LDAP authentication or authorization for a cluster. 
+You must configure :ref:`internal member authentication ` +before you can set up LDAP authentication or authorization for a cluster. Considerations -------------- diff --git a/source/tutorial/backup-and-restore-tools.txt b/source/tutorial/backup-and-restore-tools.txt index c3a726b10c8..ba1656948a2 100644 --- a/source/tutorial/backup-and-restore-tools.txt +++ b/source/tutorial/backup-and-restore-tools.txt @@ -6,6 +6,9 @@ Back Up and Restore with MongoDB Tools .. default-domain:: mongodb +.. meta:: + :description: Back up MongoDB deployments. Use Cloud Backups for managed deployments in Atlas. Use mongodump and mongorestore for self-managed deployments. + .. contents:: On this page :local: :backlinks: none @@ -36,7 +39,7 @@ Deployments The :binary:`~bin.mongorestore` and :binary:`~bin.mongodump` utilities work with :ref:`BSON ` data dumps, and are useful for creating backups of small deployments. For resilient and -non-disruptive backups, use :doc:`file system snapshots ` +non-disruptive backups, use :ref:`file system snapshots ` or block-level disk snapshots with :atlas:`Cloud Backups ` from {+atlas+}. @@ -49,7 +52,25 @@ Performance Impacts .. include:: /includes/extracts/tools-performance-considerations-dump-restore.rst +.. _considerations-output-format: + +Output Format +~~~~~~~~~~~~~ + +:binary:`~bin.mongorestore` and :binary:`~bin.mongodump` can output data +to an archive file, which is a single-file alternative to multiple BSON +files. Archive files are special-purpose formats that support +non-contiguous file writes. They enable concurrent backups from MongoDB, +as well as restores to MongoDB. Using archive files optimizes disk I/O +while backup and restore operations execute. + +You can also output archive files to the standard output (``stdout``). +Writing to the standard output allows for data migration over networks, +reduced disk I/O footprint, and concurrency gains in both the MongoDB +tools and your storage engine. +For more information on archive files, see the +:option:`--archive ` option. .. _backup-mongodump: .. _backup-and-restore-tools: diff --git a/source/tutorial/backup-sharded-cluster-with-database-dumps.txt b/source/tutorial/backup-sharded-cluster-with-database-dumps.txt index a5225ab7121..d234f4da302 100644 --- a/source/tutorial/backup-sharded-cluster-with-database-dumps.txt +++ b/source/tutorial/backup-sharded-cluster-with-database-dumps.txt @@ -42,6 +42,16 @@ Before you Begin This task uses :program:`mongodump` to back up a sharded cluster. Ensure that you have a cluster running that contains data in sharded collections. +Version Compatibility +~~~~~~~~~~~~~~~~~~~~~ + +This procedure requires a version of MongoDB that supports fsync +locking from :program:`mongos`. + +.. |fsyncLockUnlock| replace:: the :dbcommand:`fsync` and + :dbcommand:`fsyncUnlock` commands +.. include:: /includes/fsync-mongos + Admin Privileges ~~~~~~~~~~~~~~~~ diff --git a/source/tutorial/backup-sharded-cluster-with-filesystem-snapshots.txt b/source/tutorial/backup-sharded-cluster-with-filesystem-snapshots.txt index 1717b04abcd..377a671fd6e 100644 --- a/source/tutorial/backup-sharded-cluster-with-filesystem-snapshots.txt +++ b/source/tutorial/backup-sharded-cluster-with-filesystem-snapshots.txt @@ -6,15 +6,12 @@ Back Up a Sharded Cluster with File System Snapshots .. default-domain:: mongodb - - .. 
contents:: On this page :local: :backlinks: none :depth: 1 :class: singlecol - Overview -------- @@ -40,7 +37,7 @@ Encrypted Storage Engine (MongoDB Enterprise Only) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. include:: /includes/fact-aes256-backups.rst - + Balancer ~~~~~~~~ @@ -48,7 +45,7 @@ It is *essential* that you stop the :ref:`balancer ` before capturing a backup. If the balancer is active while you capture backups, the backup -artifacts may be incomplete and/or have duplicate data, as :term:`chunks +artifacts may be incomplete or have duplicate data, as :term:`chunks ` may migrate while recording backups. Precision @@ -58,28 +55,202 @@ In this procedure, you will stop the cluster balancer and take a backup up of the :term:`config database`, and then take backups of each shard in the cluster using a file-system snapshot tool. If you need an exact moment-in-time snapshot of the system, you will need to stop all -application writes before taking the file system snapshots; otherwise -the snapshot will only approximate a moment in time. - -For approximate point-in-time snapshots, you can minimize the impact on -the cluster by taking the backup from a secondary member of each -replica set shard. +writes before taking the file system snapshots; otherwise the snapshot will +only approximate a moment in time. Consistency ~~~~~~~~~~~ -If the journal and data files are on the same logical volume, you can -use a single point-in-time snapshot to capture a consistent copy of the -data files. - -If the journal and data files are on different file systems, you must -use :method:`db.fsyncLock()` and :method:`db.fsyncUnlock()` to ensure -that the data files do not change, providing consistency for the -purposes of creating backups. +To back up a sharded cluster, you must use the :dbcommand:`fsync` command or +:method:`db.fsyncLock` method to stop writes on the cluster. This ensures that +data files do not change during the backup. .. include:: /includes/fact-backup-snapshots-with-ebs-in-raid10.rst -Procedure ---------- +Version Compatibility +~~~~~~~~~~~~~~~~~~~~~ + +This procedure requires a version of MongoDB that supports fsync +locking from :program:`mongos`. + +.. |fsyncLockUnlock| replace:: the :dbcommand:`fsync` and + :dbcommand:`fsyncUnlock` commands +.. include:: /includes/fsync-mongos + + +Steps +----- + +To take a self-managed backup of a sharded cluster, complete the following +steps: + +.. procedure:: + :style: normal + + .. step:: Find a Backup Window + + Chunk migrations, resharding, and schema migration operations can cause + inconsistencies in backups. To find a good time to perform a backup, + monitor your application and database usage and find a time when these + operations are unlikely to occur. + + For more information, see :ref:`sharded-schedule-backup`. + + .. step:: Stop the Balancer + + To prevent chunk migrations from disrupting the backup, use + the :method:`sh.stopBalancer` method to stop the balancer: + + .. code-block:: javascript + + sh.stopBalancer() + + If a balancing round is currently in progress, the operation waits for + balancing to complete. + + To confirm that the balancer is stopped, use the + :method:`sh.getBalancerState` method: + + .. io-code-block:: + + .. input:: + :language: javascript + + sh.getBalancerState() + + .. output:: + :language: javascript + + false + + The command returns ``false`` when the balancer is stopped. + + .. step:: Lock the Cluster + + Writes to the database can cause backup inconsistencies. 
Lock your + sharded cluster to protect the database from writes. + + To lock a sharded cluster, use the :method:`db.fsyncLock` method: + + .. code-block:: javascript + + db.getSiblingDB("admin").fsyncLock() + + Run the following aggregation pipeline on both :program:`mongos` and + the primary :program:`mongod` of the config servers. To confirm the + lock, ensure that the ``fsyncLocked`` field returns ``true`` and + the ``fsyncUnlocked`` field returns ``false``. + + .. io-code-block:: + + .. input:: + :language: javascript + + db.getSiblingDB("admin").aggregate( [ + { $currentOp: { } }, + { $facet: { + "locked": [ + { $match: { $and: [ + { fsyncLock: { $exists: true } }, + { fsyncLock: true } + ] } }], + "unlocked": [ + { $match: { fsyncLock: { $exists: false } } } + ] + } }, + { $project: { + "fsyncLocked": { $gt: [ { $size: "$locked" }, 0 ] }, + "fsyncUnlocked": { $gt: [ { $size: "$unlocked" }, 0 ] } + } } + ] ) + + .. output:: + :language: json + + [ { fsyncLocked: true }, { fsyncUnlocked: false } ] + + .. step:: Back up the Primary Config Server + + .. note:: + + Backing up a :ref:`config server ` backs + up the sharded cluster's metadata. You only need to back up one + config server, as they all hold the same data. Perform this step + against the CSRS primary member. + + To create a filesystem snapshot of the config server, follow the + procedure in :ref:`lvm-backup-operation`. + + .. step:: Back up the Primary Shards + + Perform a filesystem snapshot against the primary member of each shard, + using the procedure found in :ref:`backup-restore-filesystem-snapshots`. + + .. step:: Unlock the Cluster + + After the backup completes, you can unlock the cluster to allow writes + to resume. + + To unlock the cluster, use the :method:`db.fsyncUnlock` method: + + .. code-block:: javascript + + db.getSiblingDB("admin").fsyncUnlock() + + Run the following aggregation pipeline on both :program:`mongos` and + the primary :program:`mongod` of the config servers. To confirm the + unlock, ensure that the ``fsyncLocked`` field returns ``false`` and + the ``fsyncUnlocked`` field returns ``true``. + + .. io-code-block:: + + .. input:: + :language: javascript + + db.getSiblingDB("admin").aggregate( [ + { $currentOp: { } }, + { $facet: { + "locked": [ + { $match: { $and: [ + { fsyncLock: { $exists: true } }, + { fsyncLock: true } + ] } }], + "unlocked": [ + { $match: { fsyncLock: { $exists: false } } } + ] + } }, + { $project: { + "fsyncLocked": { $gt: [ { $size: "$locked" }, 0 ] }, + "fsyncUnlocked": { $gt: [ { $size: "$unlocked" }, 0 ] } + } } + ] ) + + .. output:: + :language: json + + [ { fsyncLocked: false }, { fsyncUnlocked: true } ] + + .. step:: Restart the Balancer + + To restart the balancer, use the :method:`sh.startBalancer` method: + + .. code-block:: javascript + + sh.startBalancer() + + To confirm that the balancer is running, use the + :method:`sh.getBalancerState` method: + + .. io-code-block:: + + .. input:: + :language: javascript + + sh.getBalancerState() + + .. output:: + :language: javascript + + true -.. include:: /includes/steps/backup-sharded-cluster-with-snapshots.rst + The command returns ``true`` when the balancer is running. diff --git a/source/tutorial/backup-with-filesystem-snapshots.txt b/source/tutorial/backup-with-filesystem-snapshots.txt index 17e78603e7f..60486db9f35 100644 --- a/source/tutorial/backup-with-filesystem-snapshots.txt +++ b/source/tutorial/backup-with-filesystem-snapshots.txt @@ -6,8 +6,6 @@ Back Up and Restore with Filesystem Snapshots .. default-domain:: mongodb - - .. 
contents:: On this page :local: :backlinks: none @@ -73,7 +71,7 @@ that all writes accepted by the database need to be fully written to disk: either to the :term:`journal` or to data files. If there are writes that are not on disk when the backup occurs, the backup -will not reflect these changes. +will not reflect these changes. For the WiredTiger storage engine, the data files reflect a consistent state as of the last :ref:`checkpoint @@ -227,7 +225,7 @@ snapshot image, such as with the following procedure: The above command sequence does the following: -- Ensures that the ``/dev/vg0/mdb-snap01`` device is not mounted. Never +- Ensures that the ``/dev/vg0/mdb-snap01`` device is not mounted. Never take a block level copy of a filesystem or filesystem snapshot that is mounted. diff --git a/source/tutorial/build-indexes-on-replica-sets.txt b/source/tutorial/build-indexes-on-replica-sets.txt index fe2bebd08e5..9306a873e9d 100644 --- a/source/tutorial/build-indexes-on-replica-sets.txt +++ b/source/tutorial/build-indexes-on-replica-sets.txt @@ -16,7 +16,7 @@ Rolling Index Builds on Replica Sets :class: singlecol Index builds can impact replica set performance. By default, -MongoDB 4.4 and later build indexes simultaneously on all data-bearing +MongoDB builds indexes simultaneously on all data-bearing replica set members. For workloads which cannot tolerate performance decrease due to index builds, consider using the following procedure to build indexes in a rolling fashion. diff --git a/source/tutorial/build-indexes-on-sharded-clusters.txt b/source/tutorial/build-indexes-on-sharded-clusters.txt index 61625ef6ce4..c825b5c7417 100644 --- a/source/tutorial/build-indexes-on-sharded-clusters.txt +++ b/source/tutorial/build-indexes-on-sharded-clusters.txt @@ -16,7 +16,7 @@ Rolling Index Builds on Sharded Clusters :class: singlecol Index builds can impact sharded cluster performance. By default, MongoDB -4.4 and later build indexes simultaneously on all data-bearing replica +builds indexes simultaneously on all data-bearing replica set members. Index builds on sharded clusters occur only on those shards which contain data for the collection being indexed. For workloads which cannot tolerate performance decrease due to index @@ -444,9 +444,8 @@ can occur, such as: manner but either fails to build the index for an associated shard or incorrectly builds an index with different specification. -Starting in MongoDB 4.4 (and 4.2.6), the :ref:`config server -` primary periodically checks for -index inconsistencies across the shards for sharded collections. To +The :ref:`config server ` primary periodically checks +for index inconsistencies across the shards for sharded collections. To configure these periodic checks, see :parameter:`enableShardedIndexConsistencyCheck` and :parameter:`shardedIndexConsistencyCheckIntervalMS`. diff --git a/source/tutorial/change-oplog-size.txt b/source/tutorial/change-oplog-size.txt index cd480a95a65..5313bea52a8 100644 --- a/source/tutorial/change-oplog-size.txt +++ b/source/tutorial/change-oplog-size.txt @@ -99,14 +99,10 @@ oplog size. .. important:: - Starting in MongoDB v4.4, a replica set member can replicate oplog - entries while the ``compact`` operation is ongoing. Previously, oplog - replication would be paused during compaction. Because of this, it was - recommended that oplog compaction only be performed during maintenance - windows, where writes could be minimized or stopped. 
In MongoDB 4.4 and - later, it is no longer necessary to limit compaction operations on the - oplog to maintenance windows, as oplog replication can continue as normal - during compaction. + A replica set member can replicate oplog entries while the ``compact`` + operation is ongoing. As a result, it is no longer necessary to limit + compaction operations on the oplog to maintenance windows, as oplog + replication can continue as normal during compaction. Do **not** run ``compact`` against the primary replica set member. Connect a :binary:`mongo ` shell directly to the primary diff --git a/source/tutorial/change-replica-set-wiredtiger.txt b/source/tutorial/change-replica-set-wiredtiger.txt index 21e1f633114..82b7134789b 100644 --- a/source/tutorial/change-replica-set-wiredtiger.txt +++ b/source/tutorial/change-replica-set-wiredtiger.txt @@ -79,7 +79,7 @@ down the :term:`primary`, and updates the stepped-down member. To update a member to WiredTiger, the procedure removes a member's data, starts :binary:`~bin.mongod` with WiredTiger, and performs an -:doc:`initial sync `. +:ref:`initial sync `. A. Update the secondary members to WiredTiger. diff --git a/source/tutorial/clear-jumbo-flag.txt b/source/tutorial/clear-jumbo-flag.txt index 94102ffd721..66b4e82dadb 100644 --- a/source/tutorial/clear-jumbo-flag.txt +++ b/source/tutorial/clear-jumbo-flag.txt @@ -47,8 +47,7 @@ clear the flag. Refine the Shard Key ```````````````````` -Starting in 4.4, MongoDB provides the -:dbcommand:`refineCollectionShardKey` command. Using the +MongoDB provides the :dbcommand:`refineCollectionShardKey` command. Using the :dbcommand:`refineCollectionShardKey` command, you can refine a collection's shard key by adding a suffix field or fields to the existing key. By adding new field(s) to the shard key, indivisible diff --git a/source/tutorial/configure-a-hidden-replica-set-member.txt b/source/tutorial/configure-a-hidden-replica-set-member.txt index f9674ac04e9..bf2426cf9b6 100644 --- a/source/tutorial/configure-a-hidden-replica-set-member.txt +++ b/source/tutorial/configure-a-hidden-replica-set-member.txt @@ -19,10 +19,13 @@ more information on hidden members and their uses, see Considerations -------------- -The most common use of hidden nodes is to support :doc:`delayed -members `. If you only need to prevent a member from -becoming primary, configure a :doc:`priority 0 member -`. +The most common use of hidden nodes is to support +:ref:`backups `. + +You can also use hidden nodes to support :ref:`delayed +members `. However, if you only need +to prevent a member from becoming primary, configure a +:ref:`priority 0 member `. .. include:: /includes/fact-replica-set-sync-prefers-non-hidden.rst diff --git a/source/tutorial/configure-a-non-voting-replica-set-member.txt b/source/tutorial/configure-a-non-voting-replica-set-member.txt index beb9fa9aa31..93275cc6348 100644 --- a/source/tutorial/configure-a-non-voting-replica-set-member.txt +++ b/source/tutorial/configure-a-non-voting-replica-set-member.txt @@ -22,10 +22,9 @@ To configure a member as non-voting, use the .. note:: - Starting in MongoDB 4.4, replica reconfiguration can add or remove no - more than *one* voting replica set member at a time. To modify the - votes of multiple members, issue a series of - :dbcommand:`replSetReconfig` or :method:`rs.reconfig()` operations to + Replica reconfiguration can add or remove no more than *one* voting replica + set member at a time. 
To modify the votes of multiple members, issue a series + of :dbcommand:`replSetReconfig` or :method:`rs.reconfig()` operations to modify one member at a time. See :ref:`replSetReconfig-cmd-single-node` for more information. diff --git a/source/tutorial/configure-audit-filters.txt b/source/tutorial/configure-audit-filters.txt index 5f7b101958e..e2e33a9be56 100644 --- a/source/tutorial/configure-audit-filters.txt +++ b/source/tutorial/configure-audit-filters.txt @@ -25,7 +25,7 @@ Configure Audit Filters :products:`MongoDB Enterprise ` supports :ref:`auditing ` of various operations. When -:doc:`enabled `, the audit facility, by +:ref:`enabled `, the audit facility, by default, records all auditable operations as detailed in :ref:`audit-action-details-results`. You can specify event filters to limit which events are recorded. Filters can be configured at diff --git a/source/tutorial/configure-encryption.txt b/source/tutorial/configure-encryption.txt index 55faeca9dd3..00bf2e1e281 100644 --- a/source/tutorial/configure-encryption.txt +++ b/source/tutorial/configure-encryption.txt @@ -95,6 +95,11 @@ following options to start ``mongod``: .. include:: /includes/extracts/default-bind-ip-security-additional-command-line.rst +.. |kmip-client-cert-file| replace:: :option:`--kmipClientCertificateFile` +.. |kmip-client-cert-selector| replace:: :option:`--kmipClientCertificateSelector` + +.. include:: /includes/enable-KMIP-on-windows.rst + The following operation creates a new master key in your key manager. ``mongod`` uses the master key to encrypt the keys that ``mongod`` generates for each database. diff --git a/source/tutorial/configure-linux-iptables-firewall.txt b/source/tutorial/configure-linux-iptables-firewall.txt index adc79ca951d..c29b694f14d 100644 --- a/source/tutorial/configure-linux-iptables-firewall.txt +++ b/source/tutorial/configure-linux-iptables-firewall.txt @@ -37,7 +37,7 @@ following two chains: ``OUTPUT`` Controls all outgoing traffic. -Given the :doc:`default ports ` of all +Given the :ref:`default ports ` of all MongoDB processes, you must configure networking rules that permit *only* required communication between your application and the appropriate :binary:`~bin.mongod` and :binary:`~bin.mongos` instances. diff --git a/source/tutorial/configure-ssl-clients.txt b/source/tutorial/configure-ssl-clients.txt index 41749bf2716..86d4eb2b39a 100644 --- a/source/tutorial/configure-ssl-clients.txt +++ b/source/tutorial/configure-ssl-clients.txt @@ -13,8 +13,8 @@ TLS/SSL Configuration for Clients :class: singlecol Clients must have support for TLS/SSL to connect to a -:binary:`~bin.mongod` or a :binary:`~bin.mongos` instance that require -:doc:`TLS/SSL connections`. +:binary:`~bin.mongod` or a :binary:`~bin.mongos` instance that require +:ref:`TLS/SSL connections `. .. note:: diff --git a/source/tutorial/configure-ssl.txt b/source/tutorial/configure-ssl.txt index ea29f25c56d..4393a81990a 100644 --- a/source/tutorial/configure-ssl.txt +++ b/source/tutorial/configure-ssl.txt @@ -10,6 +10,9 @@ Configure ``mongod`` and ``mongos`` for TLS/SSL :name: programming_language :values: shell +.. meta:: + :description: Configure MongoDB instances for TLS or SSL encryption using native OS libraries. Ensure strong ciphers with a minimum 128-bit key length for secure connections. + .. 
contents:: On this page :local: :backlinks: none @@ -250,16 +253,15 @@ settings ` in your bindIp: localhost,mongodb0.example.net port: 27017 -A :binary:`~bin.mongod` instance that uses the above configuration -can only use TLS/SSL connections: +A :binary:`~bin.mongod` instance that uses the above configuration can +only accept TLS/SSL connections: .. code-block:: bash mongod --config -That is, clients must specify TLS/SSL connections. See -:ref:`tls-client-connection-only` for more information on -connecting with TLS/SSL. +See :ref:`tls-client-connection-only` for more information on connecting +with TLS/SSL. .. seealso:: @@ -345,6 +347,10 @@ your :binary:`mongod` / :binary:`mongos` instance's certificate chain includes the certificate of the root Certificate Authority. +.. important:: + + .. include:: /includes/fact-ssl-tlsCAFile-tlsUseSystemCA.rst + For example, consider the following :ref:`configuration file ` for a :binary:`~bin.mongod` instance: @@ -368,16 +374,16 @@ For example, consider the following :ref:`configuration file bindIp: localhost,mongodb0.example.net port: 27017 -A :binary:`~bin.mongod` instance that uses the above configuration -can only use TLS/SSL connections and requires valid certificate from +A :binary:`~bin.mongod` instance that uses the above configuration can +only accept TLS/SSL connections and requires a valid certificate from its clients: .. code-block:: bash mongod --config -That is, clients must specify TLS/SSL connections and presents its -certificate key file to the instance. See +Clients must specify TLS/SSL connections and present their certificate +key file to the instance. See :ref:`mongo-connect-require-client-certificates-tls` for more information on connecting with TLS/SSL. @@ -395,6 +401,7 @@ information on connecting with TLS/SSL. :option:`--tlsCertificateKeyFile `, :option:`--tlsCAFile `. +.. _block-revoked-certs-tls: Block Revoked Certificates for Clients `````````````````````````````````````` @@ -405,46 +412,36 @@ Block Revoked Certificates for Clients MongoDB 4.2). For procedures using the ``net.ssl`` settings, see :ref:`configure-ssl`. -To prevent clients with revoked certificates from connecting to the -:binary:`~bin.mongod` or :binary:`~bin.mongos` instance, you can use: - -- Online Certificate Status Protocol (OCSP) - Starting in version 4.4, to check for certificate revocation, - MongoDB :parameter:`enables ` the use of OCSP - (Online Certificate Status Protocol) by default as an alternative - to specifying a CRL file or using the :setting:`system SSL - certificate store `. +.. include:: /includes/security/block-revoked-certificates-intro.rst - In versions 4.0 and 4.2, the use of OCSP is available only - through the use of :setting:`system certificate store - ` on Windows or macOS. - -- Certificate Revocation List (CRL) - To specify a CRL file, include - :setting:`net.tls.CRLFile` set to a file that contains revoked - certificates. +To specify a :abbr:`CRL (Certificate Revocation List)` file, include +:setting:`net.tls.CRLFile` set to a file that contains revoked +certificates. - For example: +For example: - .. code-block:: yaml - :emphasize-lines: 6 +.. 
code-block:: yaml + :emphasize-lines: 6 - net: - tls: - mode: requireTLS - certificateKeyFile: /etc/ssl/mongodb.pem - CAFile: /etc/ssl/caToValidateClientCertificates.pem - CRLFile: /etc/ssl/revokedCertificates.pem + net: + tls: + mode: requireTLS + certificateKeyFile: /etc/ssl/mongodb.pem + CAFile: /etc/ssl/caToValidateClientCertificates.pem + CRLFile: /etc/ssl/revokedCertificates.pem - Clients that present certificates that are listed in the - :file:`/etc/ssl/revokedCertificates.pem` will not be able to connect. +Clients that present certificates that are listed in the +:file:`/etc/ssl/revokedCertificates.pem` file are not able to connect. - .. seealso:: +.. seealso:: - You can also configure the revoked certificate list using the command-line option. + You can also configure the revoked certificate list using the + command-line option. - - For :binary:`~bin.mongod`, see :option:`--tlsCRLFile `. - - For :binary:`~bin.mongos`, see :option:`--tlsCRLFile `. + - For :binary:`~bin.mongod`, see :option:`--tlsCRLFile `. + - For :binary:`~bin.mongos`, see :option:`--tlsCRLFile `. .. _ssl-mongod-weak-certification: @@ -797,16 +794,15 @@ your :binary:`mongod` / :binary:`mongos` instance's bindIp: localhost,mongodb0.example.net port: 27017 -A :binary:`~bin.mongod` instance that uses the above configuration -can only use TLS/SSL connections: +A :binary:`~bin.mongod` instance that uses the above configuration can +only accept TLS/SSL connections: .. code-block:: bash mongod --config -That is, clients must specify TLS/SSL connections. See -:ref:`tls-client-connection-only` for more information on -connecting with TLS/SSL. +See :ref:`tls-client-connection-only` for more information on connecting +with TLS/SSL. .. seealso:: @@ -910,16 +906,16 @@ For example, consider the following :ref:`configuration file bindIp: localhost,mongodb0.example.net port: 27017 -A :binary:`~bin.mongod` instance that uses the above configuration -can only use TLS/SSL connections and requires valid certificate from +A :binary:`~bin.mongod` instance that uses the above configuration can +only accept TLS/SSL connections and requires a valid certificate from its clients: .. code-block:: bash mongod --config -That is, clients must specify TLS/SSL connections and present their -certificate key file to the instance. See +Clients must specify TLS/SSL connections and present their certificate +key file to the instance. See :ref:`mongo-connect-require-client-certificates-tls` for more information on connecting with TLS/SSL. @@ -937,50 +933,38 @@ information on connecting with TLS/SSL. :option:`--sslPEMKeyFile `, and :option:`--sslCAFile `. +.. _block-revoked-certs-ssl: + Block Revoked Certificates for Clients `````````````````````````````````````` -To prevent clients with revoked certificates from connecting to the -:binary:`~bin.mongod` or :binary:`~bin.mongos` instance, you can use: - -- Online Certificate Status Protocol (OCSP) - Starting in version 4.4, to check for certificate revocation, - MongoDB :parameter:`enables ` the use of OCSP - (Online Certificate Status Protocol) by default as an alternative - to specifying a CRL file or using the :setting:`system SSL - certificate store `. - - - In versions 4.0 and 4.2, the use of OCSP is available only - through the use of :setting:`system certificate store - ` on Windows or macOS. +.. 
include:: /includes/security/block-revoked-certificates-intro.rst -- Certificate Revocation List (CRL) - To specify a CRL file, include - :setting:`net.ssl.CRLFile` set to a file that contains revoked - certificates. +To specify a :abbr:`CRL (Certificate Revocation List)` file, include +:setting:`net.ssl.CRLFile` set to a file that contains revoked +certificates. - For example: +For example: - .. code-block:: yaml - :emphasize-lines: 6 +.. code-block:: yaml + :emphasize-lines: 6 - net: - ssl: - mode: requireSSL - PEMKeyFile: /etc/ssl/mongodb.pem - CAFile: /etc/ssl/caToValidateClientCertificates.pem - CRLFile: /etc/ssl/revokedCertificates.pem + net: + ssl: + mode: requireSSL + PEMKeyFile: /etc/ssl/mongodb.pem + CAFile: /etc/ssl/caToValidateClientCertificates.pem + CRLFile: /etc/ssl/revokedCertificates.pem - Clients that present certificates that are listed in the - :file:`/etc/ssl/revokedCertificates.pem` will not be able to connect. +Clients that present certificates that are listed in the +:file:`/etc/ssl/revokedCertificates.pem` file are not able to connect. - .. seealso:: +.. seealso:: - You can also configure the revoked certificate list using the command-line option. + You can also configure the revoked certificate list using the command-line option. - - For :binary:`~bin.mongod`, see :option:`--sslCRLFile `. - - For :binary:`~bin.mongos`, see :option:`--sslCRLFile `. + - For :binary:`~bin.mongod`, see :option:`--sslCRLFile `. + - For :binary:`~bin.mongos`, see :option:`--sslCRLFile `. Validate Only if a Client Presents a Certificate ```````````````````````````````````````````````` diff --git a/source/tutorial/configure-windows-netsh-firewall.txt b/source/tutorial/configure-windows-netsh-firewall.txt index 4e204dd9b31..5e3d0520b97 100644 --- a/source/tutorial/configure-windows-netsh-firewall.txt +++ b/source/tutorial/configure-windows-netsh-firewall.txt @@ -10,6 +10,10 @@ Configure Windows ``netsh`` Firewall for MongoDB :depth: 1 :class: singlecol +.. facet:: + :name: genre + :values: tutorial + On Windows Server systems, the ``netsh`` program provides methods for managing the :guilabel:`Windows Firewall`. These firewall rules make it possible for administrators to control what hosts can connect to the system, @@ -44,7 +48,7 @@ by rule type, and parsed in the following order: By default, the policy in :guilabel:`Windows Firewall` allows all outbound connections and blocks all incoming connections. -Given the :doc:`default ports ` of all +Given the :ref:`default ports ` of all MongoDB processes, you must configure networking rules that permit *only* required communication between your application and the appropriate :binary:`mongod.exe` and :binary:`mongos.exe` instances. @@ -209,16 +213,16 @@ servers, and the :binary:`mongos.exe` instances. .. include:: /includes/fact-deprecated-http-interface.rst -Manage and Maintain *Windows Firewall* Configurations ------------------------------------------------------ +Manage Windows Firewall Configurations +-------------------------------------- This section contains a number of basic operations for managing and using ``netsh``. While you can use the GUI front ends to manage the :guilabel:`Windows Firewall`, all core functionality is accessible from ``netsh``. 
-Delete all *Windows Firewall* Rules -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Delete Windows Firewall Rules for Default MongoDB Ports +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ To delete the firewall rule allowing :binary:`mongod.exe` traffic: @@ -228,8 +232,8 @@ To delete the firewall rule allowing :binary:`mongod.exe` traffic: netsh advfirewall firewall delete rule name="Open mongod shard port 27018" protocol=tcp localport=27018 -List All *Windows Firewall* Rules -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +List All Windows Firewall Rules +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ To return a list of all :guilabel:`Windows Firewall` rules: @@ -237,8 +241,8 @@ To return a list of all :guilabel:`Windows Firewall` rules: netsh advfirewall firewall show rule name=all -Reset *Windows Firewall* -~~~~~~~~~~~~~~~~~~~~~~~~ +Reset Windows Firewall +~~~~~~~~~~~~~~~~~~~~~~ To reset the :guilabel:`Windows Firewall` rules: @@ -246,22 +250,25 @@ To reset the :guilabel:`Windows Firewall` rules: netsh advfirewall reset -Backup and Restore *Windows Firewall* Rules -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Backup and Restore Windows Firewall Rules +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -To simplify administration of larger collection of systems, you can export or -import firewall systems from different servers) rules very easily on Windows: +To simplify administration of larger systems, you can export or import +Windows Firewall rules. -Export all firewall rules with the following command: +- To export all Windows Firewall rules, run the following command: -.. code-block:: bat + .. code-block:: bat - netsh advfirewall export "C:\temp\MongoDBfw.wfw" + netsh advfirewall export "C:\temp\MongoDBfw.wfw" -Replace ``"C:\temp\MongoDBfw.wfw"`` with a path of your choosing. You -can use a command in the following form to import a file created using -this operation: + Replace ``"C:\temp\MongoDBfw.wfw"`` with a path of your choosing. -.. code-block:: bat +- To import Windows Firewall rules, run the following command: + + .. code-block:: bat + + netsh advfirewall import "C:\temp\MongoDBfw.wfw" - netsh advfirewall import "C:\temp\MongoDBfw.wfw" + Replace ``"C:\temp\MongoDBfw.wfw"`` with the path to the file that + contains your Windows Firewall rules. diff --git a/source/tutorial/configure-x509-member-authentication.txt b/source/tutorial/configure-x509-member-authentication.txt index b949507a6e3..26ebde1067c 100644 --- a/source/tutorial/configure-x509-member-authentication.txt +++ b/source/tutorial/configure-x509-member-authentication.txt @@ -13,7 +13,7 @@ Use x.509 Certificate for Membership Authentication :class: singlecol MongoDB supports x.509 certificate authentication for use with a secure -:doc:`TLS/SSL connection `. Sharded cluster +:ref:`TLS/SSL connection `. Sharded cluster members and replica set members can use x.509 certificates to verify their membership to the cluster or the replica set instead of using :ref:`keyfiles `. The membership authentication is @@ -35,7 +35,7 @@ connect and perform operations in the deployment. * See the :doc:`/tutorial/configure-x509-client-authentication` tutorial for instructions on using x.509 certificates for user authentication. -.. _`default distribution of MongoDB`: http://www.mongodb.org/downloads?tck=docs_server +.. _`default distribution of MongoDB`: http://mongodb.com/downloads?tck=docs_server .. _`MongoDB Enterprise`: http://www.mongodb.com/products/mongodb-enterprise-advanced?tck=docs_server .. 
important:: diff --git a/source/tutorial/control-access-to-mongodb-windows-with-kerberos-authentication.txt b/source/tutorial/control-access-to-mongodb-windows-with-kerberos-authentication.txt index 6a2bde9b2bd..57bdff76a05 100644 --- a/source/tutorial/control-access-to-mongodb-windows-with-kerberos-authentication.txt +++ b/source/tutorial/control-access-to-mongodb-windows-with-kerberos-authentication.txt @@ -123,13 +123,4 @@ not affect MongoDB's internal authentication of cluster members. Testing and Verification ------------------------ -After completing the configuration steps, you can validate your -configuration with the :binary:`~bin.mongokerberos` tool. - -Introduced alongside MongoDB 4.4, :binary:`~bin.mongokerberos` -provides a convenient method to verify your platform's Kerberos -configuration for use with MongoDB, and to test that Kerberos -authentication from a MongoDB client works as expected. See the -:binary:`~bin.mongokerberos` documentation for more information. - -:binary:`~bin.mongokerberos` is available in MongoDB Enterprise only. +.. include:: /includes/fact-mongokerberos.rst \ No newline at end of file diff --git a/source/tutorial/control-access-to-mongodb-with-kerberos-authentication.txt b/source/tutorial/control-access-to-mongodb-with-kerberos-authentication.txt index 8750de5b369..e324e1eeeb9 100644 --- a/source/tutorial/control-access-to-mongodb-with-kerberos-authentication.txt +++ b/source/tutorial/control-access-to-mongodb-with-kerberos-authentication.txt @@ -311,13 +311,4 @@ not affect MongoDB's internal authentication of cluster members. Testing and Verification ------------------------ -After completing the configuration steps, you can validate your -configuration with the :binary:`~bin.mongokerberos` tool. - -Introduced alongside MongoDB 4.4, :binary:`~bin.mongokerberos` -provides a convenient method to verify your platform's Kerberos -configuration for use with MongoDB, and to test that Kerberos -authentication from a MongoDB client works as expected. See the -:binary:`~bin.mongokerberos` documentation for more information. - -:binary:`~bin.mongokerberos` is available in MongoDB Enterprise only. +.. include:: /includes/fact-mongokerberos.rst diff --git a/source/tutorial/convert-standalone-to-replica-set.txt b/source/tutorial/convert-standalone-to-replica-set.txt index 199e32344ca..ce972dc4db7 100644 --- a/source/tutorial/convert-standalone-to-replica-set.txt +++ b/source/tutorial/convert-standalone-to-replica-set.txt @@ -6,6 +6,8 @@ Convert a Standalone mongod to a Replica Set .. default-domain:: mongodb +.. meta:: + :description: Convert a standalone mongod to a replica set for production, ensuring redundancy. Check security before exposing the cluster. A :term:`standalone` :binary:`mongod` instance is useful for testing and development. A standalone instance isn't a good choice for a production diff --git a/source/tutorial/create-queries-that-ensure-selectivity.txt b/source/tutorial/create-queries-that-ensure-selectivity.txt index 67142e7c76b..a32a036765a 100644 --- a/source/tutorial/create-queries-that-ensure-selectivity.txt +++ b/source/tutorial/create-queries-that-ensure-selectivity.txt @@ -1,8 +1,8 @@ .. _index-selectivity: -====================================== -Create Queries that Ensure Selectivity -====================================== +========================================== +Create Indexes to Ensure Query Selectivity +========================================== .. 
default-domain:: mongodb @@ -12,80 +12,98 @@ Create Queries that Ensure Selectivity :depth: 1 :class: singlecol -Selectivity is the ability of a query to narrow results using the index. -Effective indexes are more selective and allow MongoDB to use the index +Selectivity is the ability of a query to narrow results using indexes. +Effective queries are more selective and allow MongoDB to use indexes for a larger portion of the work associated with fulfilling the query. -To ensure selectivity, -write queries that limit the number of possible documents with the -indexed field. Write queries that are appropriately selective relative -to your indexed data. - -.. example:: - - Suppose you have a field called ``status`` where the possible values - are ``new`` and ``processed``. If you add an index on ``status`` - you've created a low-selectivity index. The index will - be of little help in locating records. - - A better strategy, depending on your queries, would be to create a - :ref:`compound index ` that includes the - low-selectivity field and another field. For example, you could - create a compound index on ``status`` and ``created_at.`` - - Another option, again depending on your use case, might be to use - separate collections, one for each status. - -.. example:: - - Consider an index ``{ a : 1 }`` (i.e. an index on the key ``a`` - sorted in ascending order) on a collection where ``a`` has three - values evenly distributed across the collection: - - .. code-block:: javascript - - { _id: ObjectId(), a: 1, b: "ab" } - { _id: ObjectId(), a: 1, b: "cd" } - { _id: ObjectId(), a: 1, b: "ef" } - { _id: ObjectId(), a: 2, b: "jk" } - { _id: ObjectId(), a: 2, b: "lm" } - { _id: ObjectId(), a: 2, b: "no" } - { _id: ObjectId(), a: 3, b: "pq" } - { _id: ObjectId(), a: 3, b: "rs" } - { _id: ObjectId(), a: 3, b: "tv" } - - If you query for ``{ a: 2, b: "no" }`` MongoDB must scan 3 - :term:`documents ` in the collection to return the one - matching result. Similarly, a query for ``{ a: { $gt: 1}, b: "tv" }`` - must scan 6 documents, also to return one result. - - Consider the same index on a collection where ``a`` has *nine* values - evenly distributed across the collection: - - .. code-block:: javascript - - { _id: ObjectId(), a: 1, b: "ab" } - { _id: ObjectId(), a: 2, b: "cd" } - { _id: ObjectId(), a: 3, b: "ef" } - { _id: ObjectId(), a: 4, b: "jk" } - { _id: ObjectId(), a: 5, b: "lm" } - { _id: ObjectId(), a: 6, b: "no" } - { _id: ObjectId(), a: 7, b: "pq" } - { _id: ObjectId(), a: 8, b: "rs" } - { _id: ObjectId(), a: 9, b: "tv" } - - If you query for ``{ a: 2, b: "cd" }``, MongoDB must scan only one - document to fulfill the query. The index and query are more selective - because the values of ``a`` are evenly distributed *and* the query - can select a specific document using the index. - - However, although the index on ``a`` is more selective, a query such - as ``{ a: { $gt: 5 }, b: "tv" }`` would still need to scan 4 - documents. - - .. todo:: is there an answer to that last "However" paragraph? - -If overall selectivity is low, and if MongoDB must read a number of -documents to return results, then some queries may perform faster -without indexes. To determine performance, see -:ref:`indexes-measuring-use`. +To ensure selectivity, write queries that limit the number of possible +documents with the indexed field or fields. Write queries that are +appropriately selective relative to your indexed data. 
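+
+To gauge how selective a query is for a given index, you can compare
+the number of index keys examined to the number of documents returned
+in the query's execution statistics. The following sketch assumes a
+hypothetical ``orders`` collection with an index on ``status``; adjust
+the names for your own data:
+
+.. code-block:: javascript
+
+   // Hypothetical collection and index, shown for illustration only
+   db.orders.createIndex( { status: 1 } )
+
+   // Inspect execution statistics for a query that uses the index
+   db.orders.find( { status: "processed" } ).explain( "executionStats" ).executionStats
+
+If ``totalKeysExamined`` is much larger than ``nReturned``, the index
+does little to narrow the results for that query. For details, see
+:ref:`indexes-measuring-use`.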
+ +Examples +-------- + +Selectivity with Many Common Values +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Consider a collection of documents that have the following form: + +.. code-block:: javascript + + { + status: "processed", + product_type: "electronics" + } + +In this example, the ``status`` of 99% of documents in the collection is +``processed``. If you add an index on ``status`` and query for documents +with the ``status`` of ``processed``, the index has low selectivity with +this query. However, if you want to query for documents that do **not** +have the ``status`` of ``processed``, this index has high selectivity +because the query only reads 1% of the index. + +Selectivity with Distributed Values +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Consider a collection of documents where the ``status`` field has three +values distributed across the collection: + +.. code-block:: javascript + + [ + { _id: ObjectId(), "status": "processed", "product_type": "electronics" }, + { _id: ObjectId(), "status": "processed", "product_type": "grocery" }, + { _id: ObjectId(), "status": "processed", "product_type": "household" }, + { _id: ObjectId(), "status": "pending", "product_type": "electronics" }, + { _id: ObjectId(), "status": "pending", "product_type": "grocery" }, + { _id: ObjectId(), "status": "pending", "product_type": "household" }, + { _id: ObjectId(), "status": "new", "product_type": "electronics" }, + { _id: ObjectId(), "status": "new", "product_type": "grocery" }, + { _id: ObjectId(), "status": "new", "product_type": "household" } + ] + +If you add an index on ``status`` and query for ``{ "status": "pending", +"product_type": "electronics" }``, MongoDB must read three index keys to +return the one matching result. Similarly, a query for ``{ "status": { +$in: ["processed", "pending"] }, "product_type" : "electronics" }`` must +read six documents to return the two matching documents. + +Consider the same index on a collection where ``status`` has *nine* +values distributed across the collection: + +.. code-block:: javascript + + [ + { _id: ObjectId(), "status": 1, "product_type": "electronics" }, + { _id: ObjectId(), "status": 2, "product_type": "grocery" }, + { _id: ObjectId(), "status": 3, "product_type": "household"}, + { _id: ObjectId(), "status": 4, "product_type": "electronics" }, + { _id: ObjectId(), "status": 5, "product_type": "grocery"}, + { _id: ObjectId(), "status": 6, "product_type": "household"}, + { _id: ObjectId(), "status": 7, "product_type": "electronics" }, + { _id: ObjectId(), "status": 8, "product_type": "grocery" }, + { _id: ObjectId(), "status": 9, "product_type": "household" } + ] + +If you query for ``{ "status": 2, "product_type": "grocery" }``, MongoDB +only reads one document to fulfill the query. The index and query are +more selective because there is only one matching document and the query +can select that specific document using the index. + +Although this example's query on ``status`` equality is more selective, +a query such as ``{ "status": { $gt: 5 }, "product_type": "grocery" }`` +would still need to read four documents. However, if you create a +compound index on ``product_type`` and ``status``, a query for ``{ +"status": { $gt: 5 }, "product_type": "grocery" }`` would only need to +read two documents. + +To improve selectivity, you can create a :ref:`compound index +` that narrows the documents that queries read. 
For +example, if you want to improve selectivity for queries on ``status`` +and ``product_type``, you could create a compound index on those two +fields. + +If MongoDB reads a high number of documents to return results, some +queries may perform faster without indexes. To determine performance, +see :ref:`indexes-measuring-use`. + diff --git a/source/tutorial/create-users.txt b/source/tutorial/create-users.txt index 3b3e7198158..9ba9ca540a4 100644 --- a/source/tutorial/create-users.txt +++ b/source/tutorial/create-users.txt @@ -10,6 +10,9 @@ Create a User :name: programming_language :values: shell +.. meta:: + :description: Enable access control for MongoDB security. Assign users roles for specific actions. Follow the least privilege principle for user maintenance. + .. contents:: On this page :local: :backlinks: none diff --git a/source/tutorial/deploy-replica-set.txt b/source/tutorial/deploy-replica-set.txt index 14e8af6288b..9a3c0c89671 100644 --- a/source/tutorial/deploy-replica-set.txt +++ b/source/tutorial/deploy-replica-set.txt @@ -1,3 +1,11 @@ +.. facet:: + :name: genre + :values: tutorial + +.. meta:: + :keywords: code example, shell + :description: Deploy a MongoDB replica set for redundancy and distributed reads. Ensure an odd number of members for smooth elections. + .. _server-replica-set-deploy: ==================== diff --git a/source/tutorial/deploy-shard-cluster.txt b/source/tutorial/deploy-shard-cluster.txt index 152ff82369f..c36341ef87a 100644 --- a/source/tutorial/deploy-shard-cluster.txt +++ b/source/tutorial/deploy-shard-cluster.txt @@ -155,7 +155,7 @@ command line parameter to specify the config servers. mongos --config For more information on the configuration file, see - :doc:`configuration options`. + :ref:`configuration options `. .. tab:: Command Line :tabid: command-line diff --git a/source/tutorial/drop-a-hashed-shard-key-index.txt b/source/tutorial/drop-a-hashed-shard-key-index.txt new file mode 100644 index 00000000000..e4f1352d223 --- /dev/null +++ b/source/tutorial/drop-a-hashed-shard-key-index.txt @@ -0,0 +1,94 @@ +.. _drop-a-hashed-shard-key-index: + +============================= +Drop a Hashed Shard Key Index +============================= + +.. default-domain:: mongodb + +.. contents:: On this page + :local: + :backlinks: none + :depth: 1 + :class: singlecol + +.. include:: /includes/drop-hashed-shard-key-index-main.rst + +About this Task +--------------- + +Dropping an `unnecessary index +`_ +can speed up CRUD operations. Each CRUD operation has to update all the +indexes related to a document. Removing one index can increase the +speed of all CRUD operations. + +When you drop a hashed shard key index, the server disables balancing +for that collection and excludes the collection from future balancing +rounds. In order to include the collection in balancing rounds once +again, you must recreate the shard key index. + +Steps +----- + +.. procedure:: + :style: normal + + .. step:: Stop the balancer + + Run the following command to stop the balancer: + + .. code-block:: javascript + + sh.stopBalancer() + + You can only run ``sh.stopBalancer()`` on ``mongos``. + ``sh.stopBalancer()`` produces an error if run on ``mongod``. + + .. step:: Confirm there are no orphaned documents in your collection + + Starting in MongoDB 6.0.3, you can run an aggregation using the + :pipeline:`$shardedDataDistribution` stage to confirm no orphaned + documents remain: + + .. 
code-block:: javascript + + db.aggregate([ + { $shardedDataDistribution: {} }, + { $match: { "ns": "." } } + ]) + + ``$shardedDataDistribution`` has output similar to the following: + + .. include:: /includes/shardedDataDistribution-output-example.rst + + Ensure that ``"numOrphanedDocs"`` is ``0`` for each shard in the + cluster. + + .. step:: Drop the hashed shard key index + + Run the following command to drop the index: + + .. code-block:: javascript + + db.collection.dropIndex("") + + .. step:: Restart the balancer + + Run the following command to restart the balancer on the cluster: + + .. code-block:: javascript + + sh.startBalancer() + + +Learn More +---------- + +- :ref:`sharding-hashed` +- :ref:`sharding-balancing` +- :method:`db.collection.dropIndex()` +- :method:`sh.stopBalancer()` +- :method:`sh.startBalancer()` +- :method:`sh.getBalancerState()` + diff --git a/source/tutorial/enable-authentication.txt b/source/tutorial/enable-authentication.txt index ee39495716c..3de53f25f00 100644 --- a/source/tutorial/enable-authentication.txt +++ b/source/tutorial/enable-authentication.txt @@ -6,6 +6,9 @@ Enable Access Control .. default-domain:: mongodb +.. meta:: + :description: Enable authentication on MongoDB deployments for secure user access control. + .. contents:: On this page :local: :backlinks: none diff --git a/source/tutorial/equality-sort-range-rule.txt b/source/tutorial/equality-sort-range-rule.txt index 838f8503caa..6daaa753879 100644 --- a/source/tutorial/equality-sort-range-rule.txt +++ b/source/tutorial/equality-sort-range-rule.txt @@ -119,15 +119,17 @@ non-blocking index sort. For more information on blocking sorts, see Additional Considerations ------------------------- -Inequality operators such as :query:`$ne` or :query:`$nin` are range -operators, not equality operators. +- Inequality operators such as :query:`$ne` or :query:`$nin` are range + operators, not equality operators. -:query:`$regex` is a range operator. +- :query:`$regex` is a range operator. -:query:`$in` can be an equality operator or a range operator. -When :query:`$in` is used alone, it is an equality operator that -does a series of equality matches. :query:`$in` acts like a range -operator when it is used with ``.sort()``. +- :query:`$in` can be an equality operator or a range operator. + + - When ``$in`` is used alone, it is an equality operator that performs a + series of equality matches. + - When ``$in`` is used with ``.sort()``, ``$in`` can act like a range + operator. Example ------- diff --git a/source/tutorial/evaluate-operation-performance.txt b/source/tutorial/evaluate-operation-performance.txt index f7512370a13..6ff6508ff50 100644 --- a/source/tutorial/evaluate-operation-performance.txt +++ b/source/tutorial/evaluate-operation-performance.txt @@ -16,7 +16,7 @@ performance. Use the Database Profiler to Evaluate Operations Against the Database --------------------------------------------------------------------- -MongoDB provides a :doc:`database profiler ` that shows performance +MongoDB provides a :ref:`database profiler ` that shows performance characteristics of each operation against the database. Use the profiler to locate any queries or write operations that are running slow. You can use this information, for example, to determine what indexes to create. @@ -59,7 +59,7 @@ Starting in MongoDB 4.2, the explain output includes: with the same :term:`query shape`. - :data:`~explain.queryPlanner.planCacheKey` to provide more insight - into the :doc:`query plan cache ` for slow queries. 
+ into the :ref:`query plan cache ` for slow queries. For more information, see :doc:`/reference/explain-results`, diff --git a/source/tutorial/expand-replica-set.txt b/source/tutorial/expand-replica-set.txt index b28680306ff..a74acaf7735 100644 --- a/source/tutorial/expand-replica-set.txt +++ b/source/tutorial/expand-replica-set.txt @@ -40,6 +40,8 @@ Existing Members You can use these procedures to add new members to an existing replica set. +.. include:: /includes/replica-set-nodes-cannot-be-shared.rst + Restore Former Members ~~~~~~~~~~~~~~~~~~~~~~ diff --git a/source/tutorial/expire-data.txt b/source/tutorial/expire-data.txt index 124c5f374bb..3b55a09bd0a 100644 --- a/source/tutorial/expire-data.txt +++ b/source/tutorial/expire-data.txt @@ -10,6 +10,9 @@ Expire Data from Collections by Setting TTL :name: programming_language :values: shell +.. meta:: + :description: Automatically remove MongoDB data after a set time with TTL or time to live collections. + .. contents:: On this page :local: :backlinks: none @@ -303,37 +306,39 @@ To avoid problems, either drop or correct any misconfigured TTL indexes. .. code-block:: javascript function getNaNIndexes() { - const nan_index = []; - - const dbs = db.adminCommand({ listDatabases: 1 }).databases; - - dbs.forEach((d) => { - const listCollCursor = db - .getSiblingDB(d.name) - .runCommand({ listCollections: 1 }).cursor; - - const collDetails = { - db: listCollCursor.ns.split(".$cmd")[0], - colls: listCollCursor.firstBatch.map((c) => c.name), - }; - - collDetails.colls.forEach((c) => - db - .getSiblingDB(collDetails.db) - .getCollection(c) - .getIndexes() - .forEach((entry) => { - if (Object.is(entry.expireAfterSeconds, NaN)) { - nan_index.push({ ns: `${collDetails.db}.${c}`, index: entry }); - } - }) - ); - }); - - return nan_index; + const nan_index = []; + + const dbs = db.adminCommand({ listDatabases: 1 }).databases; + + dbs.forEach((d) => { + if (d.name != 'local') { + const listCollCursor = db + .getSiblingDB(d.name) + .runCommand({ listCollections: 1 }).cursor; + + const collDetails = { + db: listCollCursor.ns.split(".$cmd")[0], + colls: listCollCursor.firstBatch.map((c) => c.name), + }; + + collDetails.colls.forEach((c) => + db + .getSiblingDB(collDetails.db) + .getCollection(c) + .getIndexes() + .forEach((entry) => { + if (Object.is(entry.expireAfterSeconds, NaN)) { + nan_index.push({ ns: `${collDetails.db}.${c}`, index: entry }); + } + }) + ); + } + }); + + return nan_index; }; - - getNaNIndexes(); + + getNaNIndexes(); .. step:: Correct misconfigured indexes. @@ -343,4 +348,3 @@ To avoid problems, either drop or correct any misconfigured TTL indexes. As an alternative, you can :dbcommand:`drop ` any misconfigured TTL indexes and recreate them later using the :dbcommand:`createIndexes` command. - diff --git a/source/tutorial/getting-started.txt b/source/tutorial/getting-started.txt index c93b01d1c37..64da49c969e 100644 --- a/source/tutorial/getting-started.txt +++ b/source/tutorial/getting-started.txt @@ -2,9 +2,9 @@ .. _getting-started: -=============== -Getting Started -=============== +============================ +Getting Started with MongoDB +============================ .. default-domain:: mongodb @@ -14,6 +14,7 @@ Getting Started .. meta:: :keywords: sample dataset, lab + :description: Experience MongoDB in 5 minutes with an interactive tutorial on MongoDB Atlas. Learn how to insert, query, and delete data. No installation required. 
To get started and explore MongoDB, try the 5 minute interactive tutorial that connects you to a `MongoDB Atlas `__ @@ -27,6 +28,7 @@ You do not need to install anything. Click the :guilabel:`Launch` button of the in-browser Integrated Development Environment to start the tutorial. .. instruqt:: /mongodb-docs/tracks/getting-started-with-mongodb-v2?token=em_Yadrk-QVCMfR6Zh3 + :title: Getting Started with MongoDB Lab After completing the tutorial, see :atlas:`Getting Started with Atlas ` to deploy a free cluster without any installation overhead. diff --git a/source/tutorial/insert-documents.txt b/source/tutorial/insert-documents.txt index 003bec206d2..c54b483e3be 100644 --- a/source/tutorial/insert-documents.txt +++ b/source/tutorial/insert-documents.txt @@ -11,11 +11,11 @@ Insert Documents .. facet:: :name: programming_language - :values: shell, csharp, go, java, python, perl, php, ruby, scala, javascript/typescript + :values: c, shell, csharp, go, java, python, perl, php, ruby, scala, javascript/typescript, kotlin .. meta:: :description: Examples of how to insert documents using MongoDB, including creating a collection upon first insert. - :keywords: motor, java sync, java async, reactive streams, code example, node.js, compass + :keywords: c, motor, java sync, java async, reactive streams, code example, node.js, compass .. contents:: On this page :local: @@ -137,6 +137,15 @@ Insert a Single Document The following example inserts a new document into the ``test.inventory`` collection: + .. tab:: + :tabid: c + + The following example inserts a new document into the + ``inventory`` collection. If the document does not specify + an ``_id`` field, the C driver adds the ``_id`` field + with an ObjectId value to the new document. For more information, see + :ref:`write-op-insert-behavior`. + .. tab:: :tabid: python @@ -194,6 +203,19 @@ Insert a Single Document ObjectId value to the new document. See :ref:`write-op-insert-behavior`. + .. tab:: + :tabid: kotlin-coroutine + + `MongoCollection.insertOne <{+java-api-docs+}/mongodb-driver-kotlin-coroutine/mongodb-driver-kotlin-coroutine/com.mongodb.kotlin.client.coroutine/-mongo-collection/insert-one.html>`__ + inserts a *single* :ref:`document` into + a collection. + + The following example inserts a new document into the + ``inventory`` collection. If the document does not specify + an ``_id`` field, the driver adds the ``_id`` field with an + ObjectId value to the new document. See + :ref:`write-op-insert-behavior`. + .. tab:: :tabid: nodejs @@ -210,7 +232,7 @@ Insert a Single Document .. tab:: :tabid: php - :phpmethod:`MongoDB\\Collection::insertOne() ` + :phpmethod:`MongoDB\\Collection::insertOne() ` inserts a *single* :ref:`document` into a collection. @@ -312,6 +334,13 @@ Insert a Single Document For more information on the ``_id`` field, see :ref:`_id Field `. + .. tab:: + :tabid: c + + `mongoc_collection_insert_one `__ + returns ``true`` if successful, or returns ``false`` and sets error if + there are invalid arguments or a server or network error. + .. tab:: :tabid: python @@ -331,10 +360,28 @@ Insert a Single Document .. tab:: :tabid: java-sync + `com.mongodb.client.MongoCollection.insertOne <{+java-api-docs+}/mongodb-driver-sync/com/mongodb/client/MongoCollection.html#insertOne(TDocument)>`__ returns an + instance of `InsertOneResult + <{+java-api-docs+}/mongodb-driver-core/com/mongodb/client/result/InsertOneResult.html>`__. 
+ You can access the ``_id`` field of the inserted document by + calling the `getInsertedId() <{+java-api-docs+}/mongodb-driver-core/com/mongodb/client/result/InsertOneResult.html#getInsertedId()>`__ method on the result. + + .. tab:: + :tabid: java-async + `com.mongodb.reactivestreams.client.MongoCollection.insertOne `_ returns a `Publisher `_ object. The ``Publisher`` inserts the document into a collection when subscribers request data. + .. tab:: + :tabid: kotlin-coroutine + + `MongoCollection.insertOne <{+java-api-docs+}/mongodb-driver-kotlin-coroutine/mongodb-driver-kotlin-coroutine/com.mongodb.kotlin.client.coroutine/-mongo-collection/insert-one.html>`__ returns an + instance of `InsertOneResult + <{+java-api-docs+}/mongodb-driver-core/com/mongodb/client/result/InsertOneResult.html>`__. + You can access the ``_id`` field of the inserted document by + accessing the ``insertedId`` field of the result. + .. tab:: :tabid: nodejs @@ -346,11 +393,11 @@ Insert a Single Document :tabid: php Upon successful insert, the - :phpmethod:`insertOne() ` + :phpmethod:`insertOne() ` method returns an instance of :phpclass:`MongoDB\\InsertOneResult ` whose - :phpmethod:`getInsertedId() ` + :phpmethod:`getInsertedId() ` method returns the ``_id`` of the newly inserted document. .. tab:: @@ -405,13 +452,6 @@ To retrieve the document that you just inserted, :ref:`query the collection Insert Multiple Documents ------------------------- ----------- - -|arrow| Use the **Select your language** drop-down menu in the -upper-right to set the language of the examples on this page. - ----------- - .. tabs-drivers:: .. tab:: @@ -429,6 +469,19 @@ upper-right to set the language of the examples on this page. .. tab:: :tabid: compass + + .. tab:: + :tabid: c + + `mongoc_bulk_operation_insert_with_opts `__ + inserts *multiple* :ref:`documents ` into a + collection. You must pass an iterable of documents to the method. + + The following example inserts three new documents into the + ``inventory`` collection. If the documents do not specify an + ``_id`` field, the C driver adds the ``_id`` field with + an ObjectId value to each document. See :ref:`write-op-insert-behavior`. + .. tab:: :tabid: python @@ -484,6 +537,18 @@ upper-right to set the language of the examples on this page. ``_id`` field, the driver adds the ``_id`` field with an ObjectId value to each document. See :ref:`write-op-insert-behavior`. + .. tab:: + :tabid: kotlin-coroutine + + `MongoCollection.insertMany <{+java-api-docs+}/mongodb-driver-kotlin-coroutine/mongodb-driver-kotlin-coroutine/com.mongodb.kotlin.client.coroutine/-mongo-collection/insert-many.html>`__ + inserts *multiple* :ref:`documents ` + into a collection. Pass a list of documents as a parameter to the method. + + The following example inserts three new documents into the + ``inventory`` collection. If the documents do not specify an + ``_id`` field, the driver adds an ObjectId value to each document. + See :ref:`write-op-insert-behavior`. + .. tab:: :tabid: nodejs @@ -500,7 +565,7 @@ upper-right to set the language of the examples on this page. .. tab:: :tabid: php - :phpmethod:`MongoDB\\Collection::insertMany() ` + :phpmethod:`MongoDB\\Collection::insertMany() ` can insert *multiple* :ref:`documents ` into a collection. Pass an array of documents to the method. @@ -570,7 +635,6 @@ upper-right to set the language of the examples on this page. ``_id`` field, the driver adds the ``_id`` field with an ObjectId value to each document. See :ref:`write-op-insert-behavior`. - .. 
include:: /includes/driver-examples/driver-example-insert-3.rst .. tabs-drivers:: @@ -585,6 +649,16 @@ upper-right to set the language of the examples on this page. To retrieve the inserted documents, :ref:`query the collection `: + .. tab:: + :tabid: c + + `mongoc_bulk_operation_insert_with_opts `__ + returns ``true`` on success, or ``false`` if passed invalid arguments. + + To retrieve the inserted documents, use + `mongoc_collection_find_with_opts `__ to + :ref:`query the collection `: + .. tab:: :tabid: python @@ -625,6 +699,17 @@ upper-right to set the language of the examples on this page. To retrieve the inserted documents, :ref:`query the collection `: + .. tab:: + :tabid: kotlin-coroutine + + `MongoCollection.insertMany() <{+java-api-docs+}/mongodb-driver-kotlin-coroutine/mongodb-driver-kotlin-coroutine/com.mongodb.kotlin.client.coroutine/-mongo-collection/insert-many.html>`_ + returns an ``InsertManyResult`` instance. The ``insertedIds`` + field of ``InsertManyResult`` contains the ``_id`` values of the + inserted documents. + + To retrieve the inserted documents, :ref:`query the collection + `: + .. tab:: :tabid: nodejs @@ -640,12 +725,12 @@ upper-right to set the language of the examples on this page. :tabid: php Upon successful insert, the - :phpmethod:`insertMany() ` + :phpmethod:`insertMany() ` method returns an instance of :phpclass:`MongoDB\\InsertManyResult ` whose - :phpmethod:`getInsertedIds() ` + :phpmethod:`getInsertedIds() ` method returns the ``_id`` of each newly inserted document. To retrieve the inserted documents, :ref:`query the collection @@ -702,7 +787,6 @@ upper-right to set the language of the examples on this page. To retrieve the inserted documents, :ref:`query the collection `: - .. include:: /includes/driver-examples/driver-example-query-7.rst .. _write-op-insert-behavior: @@ -733,7 +817,7 @@ document. For more information on MongoDB and atomicity, see Write Acknowledgement ~~~~~~~~~~~~~~~~~~~~~ -With write concerns, you can specify the level of acknowledgement +With write concerns, you can specify the level of acknowledgment requested from MongoDB for write operations. For details, see :doc:`/reference/write-concern`. @@ -750,6 +834,15 @@ requested from MongoDB for write operations. For details, see - :ref:`additional-inserts` + .. tab:: + :tabid: c + + .. seealso:: + + - `mongoc_bulk_operation_insert_with_opts `__ + + - :ref:`additional-inserts` + .. tab:: :tabid: python @@ -794,6 +887,18 @@ requested from MongoDB for write operations. For details, see - `Java Reactive Streams Driver Quick Tour `_ + .. tab:: + :tabid: kotlin-coroutine + + .. seealso:: + + - `MongoCollection.insertOne <{+java-api-docs+}/mongodb-driver-kotlin-coroutine/mongodb-driver-kotlin-coroutine/com.mongodb.kotlin.client.coroutine/-mongo-collection/insert-one.html>`__ + + - `MongoCollection.insertMany <{+java-api-docs+}/mongodb-driver-kotlin-coroutine/mongodb-driver-kotlin-coroutine/com.mongodb.kotlin.client.coroutine/-mongo-collection/insert-many.html>`__ + + - :driver:`Kotlin Driver Write Operation Examples + ` + .. tab:: :tabid: nodejs @@ -810,9 +915,9 @@ requested from MongoDB for write operations. For details, see .. 
seealso:: - - :phpmethod:`MongoDB\\Collection::insertOne() ` + - :phpmethod:`MongoDB\\Collection::insertOne() ` - - :phpmethod:`MongoDB\\Collection::insertMany() ` + - :phpmethod:`MongoDB\\Collection::insertMany() ` - :ref:`additional-inserts` diff --git a/source/tutorial/install-mongodb-enterprise-on-red-hat.txt b/source/tutorial/install-mongodb-enterprise-on-red-hat.txt index 8e9bc789388..990b1fd338d 100644 --- a/source/tutorial/install-mongodb-enterprise-on-red-hat.txt +++ b/source/tutorial/install-mongodb-enterprise-on-red-hat.txt @@ -6,6 +6,7 @@ .. meta:: :robots: noindex, nosnippet + :description: Install MongoDB Community Edition on Red Hat, CentOS, Oracle, Rocky, or AlmaLinux using the yum package manager. ======================================================= Install MongoDB Enterprise Edition on Red Hat or CentOS diff --git a/source/tutorial/install-mongodb-enterprise-on-ubuntu.txt b/source/tutorial/install-mongodb-enterprise-on-ubuntu.txt index 76c31a19428..258adaf8e24 100644 --- a/source/tutorial/install-mongodb-enterprise-on-ubuntu.txt +++ b/source/tutorial/install-mongodb-enterprise-on-ubuntu.txt @@ -13,7 +13,8 @@ Install MongoDB Enterprise Edition on Ubuntu .. default-domain:: mongodb - +.. meta:: + :description: Install MongoDB Community Edition on Ubuntu LTS releases using the apt package manager. .. contents:: On this page :local: diff --git a/source/tutorial/install-mongodb-on-debian.txt b/source/tutorial/install-mongodb-on-debian.txt index 18d28dc349d..662b13458bf 100644 --- a/source/tutorial/install-mongodb-on-debian.txt +++ b/source/tutorial/install-mongodb-on-debian.txt @@ -6,6 +6,7 @@ .. meta:: :robots: noindex, nosnippet + :description: Install MongoDB Community Edition on Debian using the apt package manager. =========================================== Install MongoDB Community Edition on Debian diff --git a/source/tutorial/install-mongodb-on-os-x.txt b/source/tutorial/install-mongodb-on-os-x.txt index 9f2ca5fe230..d7bd9e73ef9 100644 --- a/source/tutorial/install-mongodb-on-os-x.txt +++ b/source/tutorial/install-mongodb-on-os-x.txt @@ -6,6 +6,7 @@ .. meta:: :robots: noindex, nosnippet + :description: Install MongoDB Community Edition on macOS using the Homebrew package manager. ========================================== Install MongoDB Community Edition on macOS diff --git a/source/tutorial/install-mongodb-on-windows-unattended.txt b/source/tutorial/install-mongodb-on-windows-unattended.txt index 43955f31af1..4df6e06c162 100644 --- a/source/tutorial/install-mongodb-on-windows-unattended.txt +++ b/source/tutorial/install-mongodb-on-windows-unattended.txt @@ -6,6 +6,7 @@ .. meta:: :robots: noindex, nosnippet + :description: Install MongoDB Community Edition on Windows using msiexec.exe for unattended, automated deployment. ========================================================== Install MongoDB Community on Windows using ``msiexec.exe`` diff --git a/source/tutorial/install-mongodb-on-windows.txt b/source/tutorial/install-mongodb-on-windows.txt index 7fa67040408..4eecf217979 100644 --- a/source/tutorial/install-mongodb-on-windows.txt +++ b/source/tutorial/install-mongodb-on-windows.txt @@ -6,6 +6,7 @@ .. meta:: :robots: noindex, nosnippet + :description: Install MongoDB Community Edition on Windows using the default installation wizard. 
============================================ Install MongoDB Community Edition on Windows diff --git a/source/tutorial/iterate-a-cursor.txt b/source/tutorial/iterate-a-cursor.txt index e07a6c8c066..511b0371bc5 100644 --- a/source/tutorial/iterate-a-cursor.txt +++ b/source/tutorial/iterate-a-cursor.txt @@ -121,8 +121,8 @@ Cursor Behaviors Cursors Opened Within a Session ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Starting in MongoDB 5.0 (and 4.4.8), cursors created within a -:doc:`client session ` close +Starting in MongoDB 5.0, cursors created within a +:ref:`client session ` close when the corresponding :ref:`server session ` ends with the :dbcommand:`killSessions` command, if the session times out, or if the client has exhausted the cursor. diff --git a/source/tutorial/kerberos-auth-activedirectory-authz.txt b/source/tutorial/kerberos-auth-activedirectory-authz.txt index f6a41504f4c..0d0d8bc482e 100644 --- a/source/tutorial/kerberos-auth-activedirectory-authz.txt +++ b/source/tutorial/kerberos-auth-activedirectory-authz.txt @@ -207,18 +207,9 @@ For more information on configuring roles and privileges, see: - :ref:`privilege actions ` -- :doc:`collection level access control ` +- :ref:`collection level access control ` Testing and Verification ------------------------ -After completing the configuration steps, you can validate your -configuration with the :binary:`~bin.mongokerberos` tool. - -Introduced alongside MongoDB 4.4, :binary:`~bin.mongokerberos` -provides a convenient method to verify your platform's Kerberos -configuration for use with MongoDB, and to test that Kerberos -authentication from a MongoDB client works as expected. See the -:binary:`~bin.mongokerberos` documentation for more information. - -:binary:`~bin.mongokerberos` is available in MongoDB Enterprise only. +.. include:: /includes/fact-mongokerberos.rst \ No newline at end of file diff --git a/source/tutorial/manage-indexes.txt b/source/tutorial/manage-indexes.txt index 37652323b05..99520e1cb86 100644 --- a/source/tutorial/manage-indexes.txt +++ b/source/tutorial/manage-indexes.txt @@ -45,6 +45,8 @@ To learn how to remove an index in |compass|, see :compass:`Manage Indexes in Co .. .. include:: /includes/driver-remove-indexes-tabs.rst +.. _manage-indexes-modify: + Modify an Index --------------- @@ -191,9 +193,8 @@ can occur , such as: fails to build the index for an associated shard or incorrectly builds an index with different specification. -Starting in MongoDB 4.4 (and 4.2.6), the :ref:`config server -` primary, by default, checks for -index inconsistencies across the shards for sharded collections, and +The :ref:`config server ` primary, by default, checks +for index inconsistencies across the shards for sharded collections, and the command :dbcommand:`serverStatus`, when run on the config server primary, returns the field :serverstatus:`shardedIndexConsistency` field to report on the number of sharded collections with index diff --git a/source/tutorial/manage-journaling.txt b/source/tutorial/manage-journaling.txt index 31420ea3e51..d151f005fbe 100644 --- a/source/tutorial/manage-journaling.txt +++ b/source/tutorial/manage-journaling.txt @@ -36,7 +36,7 @@ Procedures Get Commit Acknowledgement ~~~~~~~~~~~~~~~~~~~~~~~~~~ -You can get commit acknowledgement with the :ref:`write-concern` and +You can get commit acknowledgment with the :ref:`write-concern` and the :writeconcern:`j` option. For details, see :ref:`write-concern-operation`. 
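For example, a write that requests journal acknowledgment might look like the
following sketch in :binary:`~bin.mongosh` (the collection name and document
are illustrative):

.. code-block:: javascript

   // Request acknowledgment that the write has been committed to the
   // on-disk journal on a majority of data-bearing members.
   db.products.insertOne(
      { sku: "abc123", qty: 100 },
      { writeConcern: { w: "majority", j: true, wtimeout: 5000 } }
   )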
diff --git a/source/tutorial/manage-mongodb-processes.txt b/source/tutorial/manage-mongodb-processes.txt index ada6e705617..df32eef925d 100644 --- a/source/tutorial/manage-mongodb-processes.txt +++ b/source/tutorial/manage-mongodb-processes.txt @@ -6,6 +6,9 @@ Manage ``mongod`` Processes .. default-domain:: mongodb +.. meta:: + :description: Start MongoDB from the command line with mongod. Learn about mongod, mongos, and mongosh. Stop and troubleshoot the mongod process. Stop a Replica Set. + .. contents:: On this page :local: :backlinks: none diff --git a/source/tutorial/manage-shard-zone.txt b/source/tutorial/manage-shard-zone.txt index 98ac3197765..e5b560adaa3 100644 --- a/source/tutorial/manage-shard-zone.txt +++ b/source/tutorial/manage-shard-zone.txt @@ -123,3 +123,8 @@ return any range associated to the ``NYC`` zone. use config db.tags.find({ tag: "NYC" }) + +.. toctree:: + :hidden: + + /tutorial/manage-shard-zone/update-existing-shard-zone diff --git a/source/tutorial/manage-shard-zone/update-existing-shard-zone.txt b/source/tutorial/manage-shard-zone/update-existing-shard-zone.txt new file mode 100644 index 00000000000..95655ba15c3 --- /dev/null +++ b/source/tutorial/manage-shard-zone/update-existing-shard-zone.txt @@ -0,0 +1,106 @@ +.. _sharding-update-existing-zone: + +============================= +Update an Existing Shard Zone +============================= + +.. contents:: On this page + :local: + :backlinks: none + :depth: 2 + :class: singlecol + +.. facet:: + :name: genre + :values: tutorial + +After you specify a range of values for a shard zone, you can update the +shard zone range if your application's requirements change. + +To update an existing shard zone, perform these steps: + +#. Stop the :term:`balancer`. + +#. Remove the old range from the zone. + +#. Update the zone's range. + +#. Restart the balancer. + +About this Task +--------------- + +- Zone ranges are inclusive of the lower boundary and exclusive of the + upper boundary. + +- After you modify a zone, the balancer must migrate chunks to the + appropriate zones based on the new range of values. Until balancing + completes, some chunks may reside on the wrong shard given the + configured zones for the sharded cluster. + +Before you Begin +---------------- + +To complete this tutorial, you must :ref:`deploy a sharded cluster +` with a sharded collection and create a zone +to modify. + +This example uses a sharded collection named ``users`` in the +``records`` database, sharded by the ``zipcode`` field. + +.. procedure:: + :style: normal + + .. step:: Add a shard to a zone called NYC + + .. code-block:: none + + sh.addShardToZone(, "NYC") + + .. step:: Specify a range of zipcode values for the NYC zone + + .. code-block:: javascript + + sh.updateZoneKeyRange("records.users", { zipcode: "10001" }, { zipcode: "10281" }, "NYC" ) + +Steps +----- + +The following procedure modifies the range of ``zipcode`` values for the +``NYC`` zone to be ``11201`` through ``11240``. + +.. procedure:: + :style: normal + + .. step:: Stop the balancer + + .. code-block:: javascript + + sh.stopBalancer() + + .. step:: Remove the current NYC range from the zone + + .. code-block:: javascript + + sh.removeRangeFromZone("records.user", { zipcode: "10001" }, { zipcode: "10281" } ) + + .. step:: Update the zone key range for the NYC zone + + .. code-block:: javascript + + sh.updateZoneKeyRange("records.users", { zipcode: "11201" }, { zipcode: "11240" }, "NYC" ) + + .. step:: Restart the balancer + + .. 
code-block:: javascript + + sh.startBalancer() + +Learn More +---------- + +- :method:`sh.removeRangeFromZone()` + +- :method:`sh.updateZoneKeyRange()` + +- :ref:`workload-isolation` diff --git a/source/tutorial/manage-sharded-cluster-balancer.txt b/source/tutorial/manage-sharded-cluster-balancer.txt index 9b31eaa62a2..222868b7e80 100644 --- a/source/tutorial/manage-sharded-cluster-balancer.txt +++ b/source/tutorial/manage-sharded-cluster-balancer.txt @@ -31,7 +31,7 @@ Check the Balancer State actively migrating data. To see if the balancer is enabled in your :term:`sharded cluster`, -issue the following command, which returns a boolean: +run the following command, which returns a boolean: .. code-block:: javascript @@ -39,7 +39,7 @@ issue the following command, which returns a boolean: You can also see if the balancer is enabled using :method:`sh.status()`. The :data:`~sh.status.balancer.currently-enabled` field indicates -whether the balancer is enabled, while the +whether the balancer is enabled, and the :data:`~sh.status.balancer.currently-running` field indicates if the balancer is currently running. @@ -88,13 +88,27 @@ Schedule the Balancing Window In some situations, particularly when your data set grows slowly and a migration can impact performance, it is useful to ensure -that the balancer is active only at certain times. The following -procedure specifies the ``activeWindow``, -which is the timeframe during which the :term:`balancer` will -be able to migrate chunks: +that the balancer is active only at certain times. By default, the +balancer process is always enabled and migrating chunks. The following +procedure specifies the ``activeWindow``, which is the timeframe during +which the :term:`balancer` is able to migrate chunks: .. include:: /includes/steps/schedule-balancer-window.rst +.. _sharding-check-balancing-window: + +Check Balancing Window +---------------------- + +To see the current balancing window, run the following +command: + +.. code-block:: javascript + :copyable: true + + use config + db.settings.find( { _id: "balancer" } ) + .. _sharding-balancing-remove-window: Remove a Balancing Window Schedule @@ -134,7 +148,7 @@ all migration, use the following procedure: .. include:: /includes/extracts/4.2-changes-stop-balancer-autosplit.rst -#. To verify that the balancer will not start, issue the following command, +#. To verify that the balancer won't start, run the following command, which returns ``false`` if the balancer is disabled: .. code-block:: javascript @@ -142,7 +156,7 @@ all migration, use the following procedure: sh.getBalancerState() Optionally, to verify no migrations are in progress after disabling, - issue the following operation in the :binary:`~bin.mongosh` shell: + run the following operation in the :binary:`~bin.mongosh` shell: .. code-block:: javascript @@ -320,18 +334,11 @@ when the migration proceeds with next document in the chunk. In the :data:`config.settings` collection: - - If the ``_secondaryThrottle`` setting for the balancer is set to a - write concern, each document move during chunk migration must receive - the requested acknowledgement before proceeding with the next + :term:`write concern`, each document moved during chunk migration must receive + the requested acknowledgment before proceeding with the next document. 
-- If the ``_secondaryThrottle`` setting for the balancer is set to - ``true``, each document move during chunk migration must receive - acknowledgement from at least one secondary before the migration - proceeds with the next document in the chunk. This is equivalent to a - write concern of :writeconcern:`{ w: 2 } <\>`. - - If the ``_secondaryThrottle`` setting is unset, the migration process does not wait for replication to a secondary and instead continues with the next document. @@ -342,7 +349,7 @@ To change the ``_secondaryThrottle`` setting, connect to a :binary:`~bin.mongos` instance and directly update the ``_secondaryThrottle`` value in the :data:`~config.settings` collection of the :ref:`config database `. For example, from a -:binary:`~bin.mongosh` shell connected to a :binary:`~bin.mongos`, issue +:binary:`~bin.mongosh` shell connected to a :binary:`~bin.mongos`, run the following command: .. code-block:: javascript @@ -432,16 +439,15 @@ the range is greater than 2 times the result of dividing the configured :ref:`range size` by the average document size. -Starting in MongoDB 4.4, by specifying the balancer setting -``attemptToBalanceJumboChunks`` to ``true``, the balancer can migrate -these large ranges as long as they have not been labeled as -:ref:`jumbo `. +By specifying the balancer setting ``attemptToBalanceJumboChunks`` to ``true``, +the balancer can migrate these large ranges as long as they have not been +labeled as :ref:`jumbo `. To set the balancer's ``attemptToBalanceJumboChunks`` setting, connect to a :binary:`~bin.mongos` instance and directly update the :data:`config.settings` collection. For example, from a :binary:`~bin.mongosh` shell connected to a :binary:`~bin.mongos` -instance, issue the following command: +instance, run the following command: .. code-block:: javascript diff --git a/source/tutorial/manage-the-database-profiler.txt b/source/tutorial/manage-the-database-profiler.txt index e6a04225b1f..a867d1a1c24 100644 --- a/source/tutorial/manage-the-database-profiler.txt +++ b/source/tutorial/manage-the-database-profiler.txt @@ -1,4 +1,5 @@ .. _database-profiler: +.. _database-profiling: ================= Database Profiler @@ -16,9 +17,10 @@ The database profiler collects detailed information about :ref:`database-commands` executed against a running :binary:`~bin.mongod` instance. This includes CRUD operations as well as configuration and administration commands. + The profiler writes all the data it collects to a :data:`system.profile <.system.profile>` collection, a -:doc:`capped collection ` in each profiled +:ref:`capped collection ` in each profiled database. See :doc:`/reference/database-profiler` for an overview of the :data:`system.profile <.system.profile>` documents created by the profiler. @@ -30,8 +32,8 @@ levels `. When enabled, profiling has an effect on database performance and disk use. See :ref:`Database Profiler Overhead` for more information. -This document outlines a number of key administration options for the -database profiler. For additional related information, see: +This page shows important administration options for the +database profiler. For additional information, see: - :ref:`profiler` - :ref:`Profile Command ` @@ -56,31 +58,32 @@ Enable and Configure Database Profiling You can enable database profiling for :binary:`~bin.mongod` instances. -This section uses :binary:`~bin.mongosh` helper -:method:`db.setProfilingLevel()` helper to enable profiling. 
For -instructions using the driver, see your :driver:`driver -documentation `. +This section shows how you use the :binary:`~bin.mongosh` helper method +:method:`db.setProfilingLevel()` to enable profiling. To use a driver +method instead, see the :api:`driver documentation <>`. -When you enable profiling for a :binary:`~bin.mongod` instance, you set +To enable profiling for a :binary:`~bin.mongod` instance, set the :ref:`profiling level ` to a value -greater than 0. The profiler records data in the :data:`system.profile +greater than ``0``. The profiler records data in the :data:`system.profile <.system.profile>` collection. MongoDB creates the :data:`system.profile <.system.profile>` collection in a database after you enable profiling for that database. To enable profiling and set the profiling level, pass the profiling level to the :method:`db.setProfilingLevel()` helper. For example, to -enable profiling for all database operations, consider the following -operation in :binary:`~bin.mongosh`: +enable profiling for all database operations for the currently connected +database, run this operation in :binary:`~bin.mongosh`: .. code-block:: javascript db.setProfilingLevel(2) -The shell returns a document showing the *previous* level of profiling. -The ``"ok" : 1`` key-value pair indicates the operation succeeded: +The shell returns the *previous* profiling level in the ``was`` field +and sets the new level. In the following output, the ``"ok" : +1`` key-value pair indicates the operation succeeded: .. code-block:: javascript + :copyable: false { "was" : 0, "slowms" : 100, "sampleRate" : 1.0, "ok" : 1 } @@ -97,7 +100,7 @@ The :ref:`slowms ` and settings are *global*. When set, these settings affect all databases in your process. -When set via the :dbcommand:`profile` command or +When set through the :dbcommand:`profile` command or :method:`db.setProfilingLevel()` shell helper method, :ref:`profiling level ` and :ref:`filter ` settings are set at the *database* @@ -111,7 +114,7 @@ Specify the Threshold for Slow Operations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ By default, the slow operation threshold is 100 milliseconds. To change -the slow operation threshold, specify the desired threshold value in +the slow operation threshold, specify the required threshold value in one of the following ways: - Set the value of ``slowms`` using the :dbcommand:`profile` command or @@ -122,37 +125,38 @@ one of the following ways: - Set the value of :setting:`~operationProfiling.slowOpThresholdMs` in a :ref:`configuration file `. -For example, the following code sets the profiling level for the -current :binary:`~bin.mongod` instance to ``1`` and sets the slow -operation threshold for the :binary:`~bin.mongod` instance to 20 +The following example sets the profiling level for the +currently connected database to ``1`` and sets the slow +operation threshold for the :binary:`~bin.mongod` instance to ``20`` milliseconds: .. code-block:: javascript - db.setProfilingLevel(1, { slowms: 20 }) + db.setProfilingLevel( 1, { slowms: 20 } ) -Profiling level of ``1`` will profile operations slower than the -threshold. +A profiling level of ``1`` causes the profiler to record operations +slower than the ``slowms`` threshold. .. important:: + The slow operation threshold applies to all databases in a :binary:`~bin.mongod` instance. It is used by both the database profiler and the diagnostic log and should be set to the highest useful value to avoid performance degradation. 
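To confirm the profiling level and threshold currently in effect for the
connected database, you can check the profiler status in
:binary:`~bin.mongosh`. For example:

.. code-block:: javascript

   // Returns the current profiling level, slowms, and sampleRate
   // for the connected database.
   db.getProfilingStatus()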
-Starting in MongoDB 4.0, you can use :method:`db.setProfilingLevel()` +You can use :method:`db.setProfilingLevel()` to configure ``slowms`` and ``sampleRate`` for :binary:`~bin.mongos`. For the :binary:`~bin.mongos`, the ``slowms`` and ``sampleRate`` configuration settings only affect the diagnostic log and not the profiler since profiling is not available on :binary:`~bin.mongos`. [#mongos-systemlog]_ -For example, the following sets a :binary:`~bin.mongos` instance's slow -operation threshold for logging slow operations: +The following example sets a :binary:`~bin.mongos` instance's slow +operation threshold for logging slow operations to ``20``: .. code-block:: javascript - db.setProfilingLevel(0, { slowms: 20 }) + db.setProfilingLevel( 0, { slowms: 20 } ) .. include:: /includes/extracts/4.2-changes-log-query-shapes-plan-cache-key.rst @@ -161,7 +165,7 @@ operation threshold for logging slow operations: Profile a Random Sample of Slow Operations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -To profile only a randomly sampled subset of all *slow* operations , +To profile only a randomly sampled subset of all *slow* operations, specify the desired sample rate in one of the following ways: [#slow-oplogs]_ @@ -176,25 +180,25 @@ specify the desired sample rate in one of the following ways: :ref:`configuration file `. By default, ``sampleRate`` is set to ``1.0``, meaning all *slow* -operations are profiled. When ``sampleRate`` is set between 0 and 1, -databases with profiling level ``1`` will only profile a randomly sampled -percentage of *slow* operations according to ``sampleRate``. +operations are profiled. When ``sampleRate`` is set between ``0`` and ``1``, +databases with a profiling level ``1`` only profile a randomly sampled +percentage of *slow* operations based on ``sampleRate``. -For example, the following method sets the profiling level for the -:binary:`~bin.mongod` to ``1`` and sets the profiler to sample 42% of -all *slow* operations: +The following example sets the profiling level for the currently +connected database to ``1`` and sets the profiler to sample 42% of all +*slow* operations: .. code-block:: javascript - db.setProfilingLevel(1, { sampleRate: 0.42 }) + db.setProfilingLevel( 1, { sampleRate: 0.42 } ) The modified sample rate value also applies to the system log. -Starting in MongoDB 4.0, you can use :method:`db.setProfilingLevel()` +You can use :method:`db.setProfilingLevel()` to configure ``slowms`` and ``sampleRate`` for :binary:`~bin.mongos`. For the :binary:`~bin.mongos`, the ``slowms`` and ``sampleRate`` configuration settings only affect the diagnostic log -and not the profiler since profiling is not available on +and not the profiler because profiling isn't available on :binary:`~bin.mongos`. [#mongos-systemlog]_ For example, the following sets a :binary:`~bin.mongos` instance's @@ -202,7 +206,7 @@ sampling rate for logging slow operations: .. code-block:: javascript - db.setProfilingLevel(0, { sampleRate: 0.42 }) + db.setProfilingLevel( 0, { sampleRate: 0.42 } ) .. important:: @@ -215,13 +219,11 @@ sampling rate for logging slow operations: Set a Filter to Determine Profiled Operations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -.. versionadded:: 4.4.2 - You can set a filter to control which operations are profiled and logged. You can set the profiling filter in one of the following ways: - Set the value of ``filter`` using the :dbcommand:`profile` command - or :method:`db.setProfilingLevel()` shell helper method. 
+ or the :method:`db.setProfilingLevel()` shell helper method. - Set the value of :setting:`~operationProfiling.filter` in a :ref:`configuration file `. @@ -240,8 +242,8 @@ available on :binary:`~bin.mongos`. ` options do not affect the diagnostic log or the profiler. -For example, the following :method:`db.setProfilingLevel()` method sets -for a :binary:`~bin.mongod` instance: +The following :method:`db.setProfilingLevel()` example sets +the profile level for the currently connected database: - the :ref:`profiling level ` to ``2``, @@ -259,8 +261,8 @@ for a :binary:`~bin.mongod` instance: Check Profiling Level ~~~~~~~~~~~~~~~~~~~~~ -To view the :ref:`profiling level `, issue -the following from :binary:`~bin.mongosh`: +To view the :ref:`profiling level `, run +the following example in :binary:`~bin.mongosh`: .. code-block:: javascript @@ -283,17 +285,23 @@ that should be profiled. Disable Profiling ~~~~~~~~~~~~~~~~~ -To disable profiling, use the following helper in +To disable profiling, run the following example in :binary:`~bin.mongosh`: .. code-block:: javascript db.setProfilingLevel(0) +.. note:: + + Disabling profiling can improve database performance and lower disk + use. For more information, see :ref:`Database Profiler + Overhead` . + Enable Profiling for an Entire ``mongod`` Instance ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -For development purposes in testing environments, you can enable +For development and test environments, you can enable database profiling for an entire :binary:`~bin.mongod` instance. The profiling level applies to all databases provided by the :binary:`~bin.mongod` instance. @@ -313,10 +321,10 @@ This sets the profiling level to ``1``, defines slow operations as those that last longer than ``15`` milliseconds, and specifies that only 50% of *slow* operations should be profiled. [#slow-oplogs]_ -The ``slowms`` and ``slowOpSampleRate`` also affect which operations -are recorded to the diagnostic log when :parameter:`logLevel` is -set to ``0``. The ``slowms`` and ``slowOpSampleRate`` are also -available to configure diagnostic logging for :binary:`~bin.mongos`. [#slow-oplogs]_ +The ``slowms`` and ``slowOpSampleRate`` also affect the operations that +are recorded in the diagnostic log when :parameter:`logLevel` is set to +``0``. The ``slowms`` and ``slowOpSampleRate`` are also available to +configure diagnostic logging for :binary:`~bin.mongos`. [#slow-oplogs]_ .. seealso:: @@ -350,9 +358,8 @@ To view profiling information, query the :data:`system.profile :ref:`database-profiling-example-queries`. For an explanation of the output data, see :doc:`/reference/database-profiler`. -Starting in MongoDB 4.4, it is no longer possible to perform any -operation, including reads, on the :data:`system.profile -<.system.profile>` collection from within a +It is no longer possible to perform any operation, including reads, on the +:data:`system.profile <.system.profile>` collection from within a :ref:`transaction `. .. tip:: @@ -365,8 +372,8 @@ operation, including reads, on the :data:`system.profile Example Profiler Data Queries ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -This section displays example queries to the :data:`system.profile <.system.profile>` -collection. For an explanation of the query output, see +This section shows example queries on the :data:`system.profile +<.system.profile>` collection. For query output details, see :doc:`/reference/database-profiler`. 
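You can also aggregate the profiler output. For instance, the following sketch
summarizes profiled operations by namespace, assuming the default ``ns`` and
``millis`` fields in the profiler documents:

.. code-block:: javascript

   // Group profiled operations by namespace and sort by average duration.
   db.system.profile.aggregate( [
      { $group: { _id: "$ns", count: { $sum: 1 }, avgMillis: { $avg: "$millis" } } },
      { $sort: { avgMillis: -1 } }
   ] )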
To return the most recent 10 log entries in the :data:`system.profile <.system.profile>` @@ -391,24 +398,23 @@ the following. This example returns operations in the ``mydb`` database's db.system.profile.find( { ns : 'mydb.test' } ).pretty() -To return operations slower than ``5`` milliseconds, run a query -similar to the following: +To return operations that take longer than 5 milliseconds to complete, +run: .. code-block:: javascript db.system.profile.find( { millis : { $gt : 5 } } ).pretty() -To return information from a certain time range, run a query similar to -the following: +To return operations for a specific time range, run: .. code-block:: javascript - db.system.profile.find({ - ts : { - $gt: new ISODate("2012-12-09T03:00:00Z"), - $lt: new ISODate("2012-12-09T03:40:00Z") - } - }).pretty() + db.system.profile.find( { + ts : { + $gt: new ISODate("2012-12-09T03:00:00Z"), + $lt: new ISODate("2012-12-09T03:40:00Z") + } + } ).pretty() The following example looks at the time range, suppresses the ``user`` field from the output to make it easier to read, and sorts the results @@ -416,20 +422,20 @@ by how long each operation took to run: .. code-block:: javascript - db.system.profile.find({ - ts : { - $gt: new ISODate("2011-07-12T03:00:00Z"), - $lt: new ISODate("2011-07-12T03:40:00Z") - } - }, { user: 0 }).sort( { millis: -1 } ) + db.system.profile.find( { + ts : { + $gt: new ISODate("2011-07-12T03:00:00Z"), + $lt: new ISODate("2011-07-12T03:40:00Z") + } + }, { user: 0 } ).sort( { millis: -1 } ) Show the Five Most Recent Events ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ On a database that has profiling enabled, the ``show profile`` helper in :binary:`~bin.mongosh` displays the 5 most recent operations -that took at least 1 millisecond to execute. Issue ``show profile`` -from :binary:`~bin.mongosh`, as follows: +that took at least 1 millisecond to execute. Run ``show profile`` +from :binary:`~bin.mongosh`: .. code-block:: javascript @@ -444,11 +450,16 @@ When enabled, profiling has an effect on database performance, especially when configured with a :ref:`profiling level` of 2, or when using a low :ref:`threshold` value -with a profiling level of 1. Profiling also consumes disk space, as it -logs to both the :data:`system.profile <.system.profile>` -collection and also the MongoDB :option:`logfile `. -Carefully consider any performance and security implications before -configuring and enabling the profiler on a production deployment. +with a profiling level of 1. + +Profiling also uses disk space, because profiling +writes logs to the :data:`system.profile <.system.profile>` +collection and the MongoDB :option:`logfile `. + +.. warning:: + + Consider performance and storage implications before + you enable the profiler in a production deployment. The ``system.profile`` Collection ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/source/tutorial/manage-users-and-roles.txt b/source/tutorial/manage-users-and-roles.txt index 301578f43d0..a9d5432748e 100644 --- a/source/tutorial/manage-users-and-roles.txt +++ b/source/tutorial/manage-users-and-roles.txt @@ -4,6 +4,9 @@ Manage Users and Roles .. default-domain:: mongodb +.. meta:: + :description: Manage MongoDB users and roles. Learn user and role management under MongoDB's authorization model. Create custom roles, modify existing users, and view user roles and privileges. + .. contents:: On this page :local: :backlinks: none @@ -64,7 +67,7 @@ Create a User-Defined Role -------------------------- Roles grant users access to MongoDB resources. 
MongoDB provides a -number of :doc:`built-in roles ` that +number of :ref:`built-in roles ` that administrators can use to control access to a MongoDB system. However, if these roles cannot describe the desired set of privileges, you can create new roles in a particular database. diff --git a/source/tutorial/map-reduce-examples.txt b/source/tutorial/map-reduce-examples.txt index 7eb1debdcad..1a22ecd6460 100644 --- a/source/tutorial/map-reduce-examples.txt +++ b/source/tutorial/map-reduce-examples.txt @@ -22,8 +22,8 @@ Map-Reduce Examples For map-reduce operations that require custom functionality, MongoDB provides the :group:`$accumulator` and :expression:`$function` - aggregation operators starting in version 4.4. Use these operators to - define custom aggregation expressions in JavaScript. + aggregation operators. Use these operators to define custom aggregation + expressions in JavaScript. In :binary:`~bin.mongosh`, the :method:`db.collection.mapReduce()` method is a wrapper around the :dbcommand:`mapReduce` command. The diff --git a/source/tutorial/model-computed-data.txt b/source/tutorial/model-computed-data.txt index 779b5416d4d..dc03d1045e3 100644 --- a/source/tutorial/model-computed-data.txt +++ b/source/tutorial/model-computed-data.txt @@ -48,30 +48,30 @@ An application displays movie viewer and revenue information. Consider the following ``screenings`` collection: .. code-block:: javascript - - // screenings collection - - { - "theater": "Alger Cinema", - "location": "Lakeview, OR", - "movie_title": "Reservoir Dogs", - "num_viewers": 344, - "revenue": 3440 - } - { - "theater": "City Cinema", - "location": "New York, NY", - "movie_title": "Reservoir Dogs", - "num_viewers": 1496, - "revenue": 22440 - } - { - "theater": "Overland Park Cinema", - "location": "Boise, ID", - "movie_title": "Reservoir Dogs", - "num_viewers": 760, - "revenue": 7600 - } + + db.screenings.insertMany( [ + { + theater : "Alger Cinema", + location : "Lakeview, OR", + movie_title : "Reservoir Dogs", + num_viewers : 344, + revenue : 3440 + }, + { + theater : "City Cinema", + location : "New York, NY", + movie_title : "Reservoir Dogs", + num_viewers : 1496, + revenue : 22440 + }, + { + theater : "Overland Park Cinema", + location : "Boise, ID", + movie_title : "Reservoir Dogs", + num_viewers : 760, + revenue : 7600 + } + ] ) Users often want to know how many people saw a certain movie and how much money that movie made. In this example, to total ``num_viewers`` @@ -82,16 +82,17 @@ is requested, you can compute the total values and store them in a ``movies`` collection with the movie record itself: .. code-block:: javascript - :emphasize-lines: 5-6 - - // movies collection - - { - "title": "Reservoir Dogs", - "total_viewers": 2600, - "total_revenue": 33480, - ... - } + :copyable: false + :emphasize-lines: 4-5 + + db.movies.insertOne( [ + { + title : "Reservoir Dogs", + total_viewers : 2600, + total_revenue : 33480, + ... + } + ] ) In a low write environment, the computation could be done in conjunction with any update of the ``screenings`` data. diff --git a/source/tutorial/model-embedded-many-to-many-relationships-between-documents.txt b/source/tutorial/model-embedded-many-to-many-relationships-between-documents.txt new file mode 100644 index 00000000000..c2149ed3492 --- /dev/null +++ b/source/tutorial/model-embedded-many-to-many-relationships-between-documents.txt @@ -0,0 +1,91 @@ +.. 
_data-modeling-example-many-to-many: + +======================================================== +Model Many-to-Many Relationships with Embedded Documents +======================================================== + +.. default-domain:: mongodb + +.. contents:: On this page + :local: + :backlinks: none + :depth: 2 + :class: singlecol + +Create a data model that uses :ref:`embedded +` documents to describe a many-to-many relationship +between connected data. Embedding connected data in a single document can +reduce the number of read operations required to obtain data. In general, +structure your schema so your application receives all of its required +information in a single read operation. For example, you can use the embedded +many-to-many model to describe the following relationships: + +- Students to classes +- Actors to movies +- Doctors to patients + +About this Task +--------------- + +The following example schema contains information regarding ``book one`` +and ``book two`` and their authors. You can represent the relationship +differently based on whether you anticipate application users querying by +book or by author. + +If you expect more users to query by book than by author, the example schema +is an effective choice. However, if you expect more queries by author, make +the author the top-level information and place the author's books in an embedded +field. + +Example +------- + +You can use a many-to-many relationship to describe books and authors. A book +can have multiple authors, and an author can write multiple books. + +Embedded Document Pattern +~~~~~~~~~~~~~~~~~~~~~~~~~ + +The application needs to display information for the book and author +objects on a single page. To allow your application to retrieve all +necessary information with a single query, embed author information +inside of the corresponding book document: + +.. code-block:: javascript + + { + _id: "book001", + title: "Cell Biology", + authors: [ + { + author_id: "author124", + name: "Ellie Smith" + }, + { + author_id: "author381", + name: "John Palmer" + } + ] + } + + { + _id: "book002", + title: "Organic Chemistry", + authors: [ + { + author_id: "author290", + name: "Jane James" + }, + { + author_id: "author381", + name: "John Palmer" + } + ] + } + +Learn More +---------- + +- :ref:`data-modeling-example-one-to-one` + +- :ref:`data-modeling-example-one-to-many` diff --git a/source/tutorial/model-embedded-one-to-many-relationships-between-documents.txt b/source/tutorial/model-embedded-one-to-many-relationships-between-documents.txt index a1a66848579..4dfc4f97ecd 100644 --- a/source/tutorial/model-embedded-one-to-many-relationships-between-documents.txt +++ b/source/tutorial/model-embedded-one-to-many-relationships-between-documents.txt @@ -10,6 +10,9 @@ Model One-to-Many Relationships with Embedded Documents :name: programming_language :values: shell +.. meta:: + :description: Learn how to model one-to-many relationships between MongoDB documents using embedded documents. Embedding connected data in a single document can reduce the number of read operations required to obtain data. + .. 
contents:: On this page :local: :backlinks: none diff --git a/source/tutorial/model-embedded-one-to-one-relationships-between-documents.txt b/source/tutorial/model-embedded-one-to-one-relationships-between-documents.txt index f4e1d691258..ad8e55daa60 100644 --- a/source/tutorial/model-embedded-one-to-one-relationships-between-documents.txt +++ b/source/tutorial/model-embedded-one-to-one-relationships-between-documents.txt @@ -10,6 +10,9 @@ Model One-to-One Relationships with Embedded Documents :name: programming_language :values: shell +.. meta:: + :description: Learn how to model one-to-one relationships between MongoDB documents using embedded documents. Embedding connected data in a single document can reduce the number of read operations required to obtain data. + .. contents:: On this page :local: :backlinks: none diff --git a/source/tutorial/model-referenced-one-to-many-relationships-between-documents.txt b/source/tutorial/model-referenced-one-to-many-relationships-between-documents.txt index 8ff1d3ba17a..5b47e5cfae2 100644 --- a/source/tutorial/model-referenced-one-to-many-relationships-between-documents.txt +++ b/source/tutorial/model-referenced-one-to-many-relationships-between-documents.txt @@ -10,6 +10,9 @@ Model One-to-Many Relationships with Document References :name: programming_language :values: shell +.. meta:: + :description: Learn how to model one-to-many relationships between MongoDB documents using document references. This approach avoids repeating data by storing related information in separate collections. + .. contents:: On this page :local: :backlinks: none diff --git a/source/tutorial/optimize-query-performance-with-indexes-and-projections.txt b/source/tutorial/optimize-query-performance-with-indexes-and-projections.txt index a0320785209..ee2cd414ccc 100644 --- a/source/tutorial/optimize-query-performance-with-indexes-and-projections.txt +++ b/source/tutorial/optimize-query-performance-with-indexes-and-projections.txt @@ -1,3 +1,5 @@ +.. _optimize-query-performance: + ========================== Optimize Query Performance ========================== diff --git a/source/tutorial/project-fields-from-query-results.txt b/source/tutorial/project-fields-from-query-results.txt index b626374eb1f..835826f0519 100644 --- a/source/tutorial/project-fields-from-query-results.txt +++ b/source/tutorial/project-fields-from-query-results.txt @@ -13,10 +13,11 @@ Project Fields to Return from Query .. facet:: :name: programming_language - :values: shell, csharp, go, java, python, perl, php, ruby, scala, javascript/typescript + :values: shell, csharp, go, java, python, perl, php, ruby, scala, javascript/typescript, kotlin .. meta:: - :keywords: motor, java sync, java async, reactive streams, code example, node.js, compass + :keywords: motor, java sync, java async, reactive streams, code example, node.js, compass, kotlin coroutine + :description: Limit the data that MongoDB queries return using projection documents to specify or restrict fields. .. contents:: On this page :local: @@ -79,6 +80,12 @@ Return All Fields in Matching Documents `__ method returns all fields in the matching documents. + - id: kotlin-coroutine + content: | + If you do not specify a :term:`projection` document, the + `MongoCollection.find() <{+java-api-docs+}/mongodb-driver-kotlin-coroutine/mongodb-driver-kotlin-coroutine/com.mongodb.kotlin.client.coroutine/-mongo-collection/find.html>`__ method returns all + fields in the matching documents. 
+ - id: nodejs content: | If you do not specify a :term:`projection` document, the @@ -88,7 +95,7 @@ Return All Fields in Matching Documents - id: php content: | If you do not specify a :term:`projection` document, the - :phpmethod:`find() ` + :phpmethod:`find() ` method returns all fields in the matching documents. - id: perl @@ -195,8 +202,8 @@ The ``uom`` field remains embedded in the ``size`` document. .. include:: /includes/driver-examples/driver-example-query-47.rst -Starting in MongoDB 4.4, you can also specify embedded fields using the -nested form, e.g. ``{ item: 1, status: 1, size: { uom: 1 } }``. +You can also specify embedded fields using the nested form. For example, +``{ item: 1, status: 1, size: { uom: 1 } }``. Suppress Specific Fields in Embedded Documents ---------------------------------------------- @@ -211,8 +218,8 @@ the matching documents: .. include:: /includes/driver-examples/driver-example-query-48.rst -Starting in MongoDB 4.4, you can also specify embedded fields using the -nested form, e.g. ``{ size: { uom: 0 } }``. +You can also specify embedded fields using the nested form. For example, +``{ size: { uom: 0 } }``. Projection on Embedded Documents in an Array -------------------------------------------- @@ -244,10 +251,10 @@ Project Specific Array Elements in the Returned Array .. include:: /includes/driver-examples/driver-example-query-50.rst :projection:`$elemMatch`, :projection:`$slice`, and - :projection:`$` are the *only* way to project specific elements + :projection:`$` are the *only* operators that you can use to project specific elements to include in the returned array. For instance, you *cannot* project specific array elements using the array index; e.g. - ``{ "instock.0": 1 }`` projection will *not* project the array + ``{ "instock.0": 1 }`` projection does *not* project the array with the first element. - id: compass @@ -267,10 +274,10 @@ Project Specific Array Elements in the Returned Array .. include:: /includes/driver-examples/driver-example-query-50.rst :projection:`$elemMatch`, :projection:`$slice`, and - :projection:`$` are the *only* way to project specific elements + :projection:`$` are the *only* operators that you can use to project specific elements to include in the returned array. For instance, you *cannot* project specific array elements using the array index; e.g. - ``{ "instock.0": 1 }`` projection will *not* project the array + ``{ "instock.0": 1 }`` projection does *not* project the array with the first element. - id: java-sync @@ -282,10 +289,10 @@ Project Specific Array Elements in the Returned Array .. include:: /includes/driver-examples/driver-example-query-50.rst :projection:`$elemMatch`, :projection:`$slice`, and - :projection:`$` are the *only* way to project specific elements + :projection:`$` are the *only* operators that you can use to project specific elements to include in the returned array. For instance, you *cannot* project specific array elements using the array index; e.g. - ``include("instock.0")`` projection will *not* project the array + ``include("instock.0")`` projection does *not* project the array with the first element. - id: java-async @@ -297,10 +304,25 @@ Project Specific Array Elements in the Returned Array .. 
include:: /includes/driver-examples/driver-example-query-50.rst :projection:`$elemMatch`, :projection:`$slice`, and - :projection:`$` are the *only* way to project specific elements + :projection:`$` are the *only* operators that you can use to project specific elements to include in the returned array. For instance, you *cannot* project specific array elements using the array index; e.g. - ``include("instock.0")`` projection will *not* project the array + ``include("instock.0")`` projection does *not* project the array + with the first element. + + - id: kotlin-coroutine + content: | + .. include:: /includes/fact-projection-ops.rst + + .. include:: /includes/fact-projection-slice-example.rst + + .. include:: /includes/driver-examples/driver-example-query-50.rst + + :projection:`$elemMatch`, :projection:`$slice`, and + :projection:`$` are the *only* operators that you can use to project specific elements + to include in the returned array. For instance, you *cannot* + project specific array elements using the array index; e.g. + ``include("instock.0")`` projection does *not* project the array with the first element. - id: nodejs @@ -312,10 +334,10 @@ Project Specific Array Elements in the Returned Array .. include:: /includes/driver-examples/driver-example-query-50.rst :projection:`$elemMatch`, :projection:`$slice`, and - :projection:`$` are the *only* way to project specific elements + :projection:`$` are the *only* operators that you can use to project specific elements to include in the returned array. For instance, you *cannot* project specific array elements using the array index; e.g. - ``{ "instock.0": 1 }`` projection will *not* project the array + ``{ "instock.0": 1 }`` projection does *not* project the array with the first element. - id: php @@ -327,10 +349,10 @@ Project Specific Array Elements in the Returned Array .. include:: /includes/driver-examples/driver-example-query-50.rst :projection:`$elemMatch`, :projection:`$slice`, and - :projection:`$` are the *only* way to project specific elements + :projection:`$` are the *only* operators that you can use to project specific elements to include in the returned array. For instance, you *cannot* project specific array elements using the array index; e.g. - ``[ "instock.0" => 1 ]`` projection will *not* project the array + ``[ "instock.0" => 1 ]`` projection does *not* project the array with the first element. - id: perl @@ -342,10 +364,10 @@ Project Specific Array Elements in the Returned Array .. include:: /includes/driver-examples/driver-example-query-50.rst :projection:`$elemMatch`, :projection:`$slice`, and - :projection:`$` are the *only* way to project specific elements + :projection:`$` are the *only* operators that you can use to project specific elements to include in the returned array. For instance, you *cannot* project specific array elements using the array index; e.g. - ``{ "instock.0" => 1 }`` projection will *not* project the array + ``{ "instock.0" => 1 }`` projection does *not* project the array with the first element. - id: ruby @@ -357,10 +379,10 @@ Project Specific Array Elements in the Returned Array .. include:: /includes/driver-examples/driver-example-query-50.rst :projection:`$elemMatch`, :projection:`$slice`, and - :projection:`$` are the *only* way to project specific elements + :projection:`$` are the *only* operators that you can use to project specific elements to include in the returned array. For instance, you *cannot* project specific array elements using the array index; e.g. 
- ``{ "instock.0" => 1 }`` projection will *not* project the array + ``{ "instock.0" => 1 }`` projection does *not* project the array with the first element. - id: scala @@ -372,10 +394,10 @@ Project Specific Array Elements in the Returned Array .. include:: /includes/driver-examples/driver-example-query-50.rst :projection:`$elemMatch`, :projection:`$slice`, and - :projection:`$` are the *only* way to project specific elements + :projection:`$` are the *only* operators that you can use to project specific elements to include in the returned array. For instance, you *cannot* project specific array elements using the array index; e.g. - ``include("instock.0")`` projection will *not* project the array + ``include("instock.0")`` projection does *not* project the array with the first element. - id: csharp @@ -387,7 +409,7 @@ Project Specific Array Elements in the Returned Array .. include:: /includes/driver-examples/driver-example-query-50.rst :projection:`$elemMatch`, :projection:`$slice`, and - :projection:`$` are the *only* way to project specific elements + :projection:`$` are the *only* operators that you can use to project specific elements to include in the returned array. For example, the following operation will not project the array @@ -406,12 +428,102 @@ Project Specific Array Elements in the Returned Array .. include:: /includes/driver-examples/driver-example-query-50.rst :projection:`$elemMatch`, :projection:`$slice`, and - :projection:`$` are the *only* way to project specific elements + :projection:`$` are the *only* operators that you can use to project specific elements to include in the returned array. For instance, you *cannot* project specific array elements using the array index; e.g. - ``include("instock.0")`` projection will *not* project the array + ``include("instock.0")`` projection does *not* project the array with the first element. +Project Fields with Aggregation Expressions +------------------------------------------- + +You can specify :ref:`aggregation expressions ` +in a query projection. Aggregation expressions let you project new +fields and modify the values of existing fields. + +For example, the following operation uses aggregation expressions to +override the value of the ``status`` field, and project new fields +``area`` and ``reportNumber``. + +.. note:: + + The following example uses MongoDB Shell syntax. For driver examples + of projection with aggregation, see your :driver:`driver + documentation `. + +.. io-code-block:: + :copyable: true + + .. input:: + :language: javascript + + db.inventory.find( + { }, + { + _id: 0, + item: 1, + status: { + $switch: { + branches: [ + { + case: { $eq: [ "$status", "A" ] }, + then: "Available" + }, + { + case: { $eq: [ "$status", "D" ] }, + then: "Discontinued" + }, + ], + default: "No status found" + } + }, + area: { + $concat: [ + { $toString: { $multiply: [ "$size.h", "$size.w" ] } }, + " ", + "$size.uom" + ] + }, + reportNumber: { $literal: 1 } + } + ) + + .. output:: + :language: javascript + + [ + { + item: 'journal', + status: 'Available', + area: '294 cm', + reportNumber: 1 + }, + { + item: 'planner', + status: 'Discontinued', + area: '685.5 cm', + reportNumber: 1 + }, + { + item: 'notebook', + status: 'Available', + area: '93.5 in', + reportNumber: 1 + }, + { + item: 'paper', + status: 'Discontinued', + area: '93.5 in', + reportNumber: 1 + }, + { + item: 'postcard', + status: 'Available', + area: '152.5 cm', + reportNumber: 1 + } + ] + .. 
_project-fields-atlas-ui: Project Fields to Return from a Query with {+atlas+} @@ -476,9 +588,8 @@ steps: Additional Considerations ------------------------- -Starting in MongoDB 4.4, MongoDB enforces additional restrictions with -regards to projections. See :limit:`Projection Restrictions` for -details. +MongoDB enforces additional restrictions with regards to projections. +See :limit:`Projection Restrictions` for details. .. seealso:: diff --git a/source/tutorial/query-array-of-documents.txt b/source/tutorial/query-array-of-documents.txt index fcbeafbbae9..b6d22a87c70 100644 --- a/source/tutorial/query-array-of-documents.txt +++ b/source/tutorial/query-array-of-documents.txt @@ -11,11 +11,11 @@ Query an Array of Embedded Documents .. facet:: :name: programming_language - :values: shell, csharp, go, java, python, perl, php, ruby, rust, scala, javascript/typescript + :values: shell, csharp, go, java, python, perl, php, ruby, rust, scala, javascript/typescript, kotlin .. meta:: :description: MongoDB Manual code examples for how to query an array of documents, including nested or embedded documents. - :keywords: motor, java sync, java async, reactive streams, code example, node.js, compass + :keywords: motor, java sync, java async, reactive streams, code example, node.js, compass, kotlin coroutine .. contents:: On this page :local: diff --git a/source/tutorial/query-arrays.txt b/source/tutorial/query-arrays.txt index d634a981d88..e6aeb256812 100644 --- a/source/tutorial/query-arrays.txt +++ b/source/tutorial/query-arrays.txt @@ -11,11 +11,11 @@ Query an Array .. facet:: :name: programming_language - :values: shell, csharp, go, java, javascript/typescript, python, perl, php, ruby, scala + :values: shell, csharp, go, java, javascript/typescript, python, perl, php, ruby, scala, kotlin .. meta:: :description: MongoDB Manual: code examples for query operations on array fields. Learn how to query an array and an array element or field, query on the array field as a whole, query if a field is in an array, and query by array size. - :keywords: compass, code example, motor, java sync, java async, reactive streams, node.js + :keywords: compass, code example, motor, java sync, java async, reactive streams, node.js, kotlin coroutine .. contents:: On this page :local: diff --git a/source/tutorial/query-documents.txt b/source/tutorial/query-documents.txt index a0994ff3f66..673ad73bc19 100644 --- a/source/tutorial/query-documents.txt +++ b/source/tutorial/query-documents.txt @@ -12,11 +12,11 @@ Query Documents .. facet:: :name: programming_language - :values: csharp, go, java, go, javascript/typescript, perl, php, python, ruby, scala, shell + :values: c, csharp, go, java, go, javascript/typescript, perl, php, python, ruby, scala, shell, kotlin .. meta:: :description: MongoDB Manual: how to query documents and top-level fields, perform equality match, query with query operators, and specify compound query conditions. - :keywords: code example, compass, java sync, java async, reactive streams, motor, atlas, drivers, node.js + :keywords: code example, compass, java sync, java async, reactive streams, motor, atlas, drivers, node.js, kotlin coroutine .. contents:: On this page :local: @@ -67,6 +67,11 @@ the following SQL statement: For more information on the MongoDB Compass query bar, see :ref:`Query Bar `. + - id: c + content: | + For more information on the syntax of the method, see + `mongoc_collection_find_with_opts `__. 
+ - id: python content: | For more information on the syntax of the method, see @@ -83,6 +88,12 @@ the following SQL statement: `com.mongodb.reactivestreams.client.MongoCollection.find `_. + - id: kotlin-coroutine + content: | + For more information on the syntax of the method, see + `MongoCollection.find() + <{+java-api-docs+}/mongodb-driver-kotlin-coroutine/mongodb-driver-kotlin-coroutine/com.mongodb.kotlin.client.coroutine/-mongo-collection/find.html>`__. + - id: nodejs content: | @@ -92,7 +103,7 @@ the following SQL statement: - id: php content: | For more information on the syntax of the method, see - :phpmethod:`find() `. + :phpmethod:`find() `. - id: perl content: | @@ -328,6 +339,12 @@ Cursor For more information on sampling in MongoDB Compass, see the :ref:`Compass FAQ `. + - id: c + content: | + The `mongoc_collection_find `__ + method returns a :doc:`cursor ` to the + matching documents. + - id: python content: | The :py:meth:`pymongo.collection.Collection.find` method @@ -347,6 +364,14 @@ Cursor returns an instance of the `com.mongodb.reactivestreams.client.FindPublisher `_ interface. + - id: kotlin-coroutine + content: | + The `MongoCollection.find() + <{+java-api-docs+}/mongodb-driver-kotlin-coroutine/mongodb-driver-kotlin-coroutine/com.mongodb.kotlin.client.coroutine/-mongo-collection/find.html>`__ method returns an + instance of the + `FindFlow <{+java-api-docs+}/mongodb-driver-kotlin-coroutine/mongodb-driver-kotlin-coroutine/com.mongodb.kotlin.client.coroutine/-find-flow/index.html>`__ + class. + - id: nodejs content: | The :node-api:`Collection.find() ` method @@ -354,7 +379,7 @@ Cursor - id: php content: | - The :phpmethod:`MongoDB\\Collection::find() ` + The :phpmethod:`MongoDB\\Collection::find() ` method returns a :doc:`cursor ` to the matching documents. See the MongoDB PHP Library documentation for @@ -424,203 +449,202 @@ Additional Methods and Options .. tabs-drivers:: - tabs: - - id: shell - content: | + .. tab:: + :tabid: shell - The following methods can also read documents from a collection: + The following can also read documents from a collection: - - :method:`db.collection.findOne` + - The :method:`db.collection.findOne` method - - In :ref:`aggregation pipeline `, the - :pipeline:`$match` pipeline stage provides access to MongoDB - queries. + - The :pipeline:`$match` pipeline stage in an + :ref:`aggregation pipeline `. The + ``$match`` pipeline stage provides access to MongoDB + queries - .. note:: - The :method:`db.collection.findOne()` method also performs a read - operation to return a single document. Internally, the - :method:`db.collection.findOne()` method is the - :method:`db.collection.find()` method with a limit of 1. + .. note:: - - id: compass - content: | + The :method:`db.collection.findOne()` method performs the same + operation as the :method:`db.collection.find()` method with a limit of 1. - In addition to ``filter``, MongoDB Compass also allows the - following options to be passed to the query bar: + .. tab:: + :tabid: compass + + In addition to ``filter``, MongoDB Compass also allows you to pass the + following options to the query bar: - .. list-table:: - :widths: 25 75 + .. list-table:: + :widths: 25 75 - * - :compass:`Project ` + * - :compass:`Project ` - - Specify which fields to return in the resulting data. + - Specify which fields to return in the resulting data. - * - :compass:`Sort ` + * - :compass:`Sort ` - - Specify the sort order of the returned documents. + - Specify the sort order of the returned documents. 
- * - :compass:`Skip ` + * - :compass:`Skip ` - - Specify the first n-number of document to skip before returning the result set. + - Specify the first n-number of document to skip before returning the result set. - * - :compass:`Limit ` + * - :compass:`Limit ` - - Specify the maximum number of documents to return. + - Specify the maximum number of documents to return. - - id: python - content: | + .. tab:: + :tabid: c - The following methods can also read documents from a collection: + The following method can also read documents from a collection: - - :py:meth:`pymongo.collection.Collection.find_one` + - `mongoc_find_and_modify_opts_t `__ - - In :ref:`aggregation pipeline `, the - :pipeline:`$match` pipeline stage provides access to MongoDB - queries. See the `PyMongo Aggregation Examples `_. + .. tab:: + :tabid: python - .. note:: - The :py:meth:`pymongo.collection.Collection.find_one` - method also performs a read operation to return a single - document. Internally, the - :py:meth:`pymongo.collection.Collection.find_one` method is - the :py:meth:`pymongo.collection.Collection.find` method - with a limit of 1. + The following can also read documents from a collection: - - id: java-sync - content: | + - :py:meth:`pymongo.collection.Collection.find_one` - The following methods can also read documents from a collection: + - In an :ref:`aggregation pipeline `, the + :pipeline:`$match` pipeline stage provides access to MongoDB + queries. See the `PyMongo Aggregation Examples `_. + + .. note:: - - In the :ref:`aggregation pipeline `, - the :pipeline:`$match` pipeline stage provides access to - MongoDB queries. See the `Java Synchronous Driver Aggregation - Examples`_. + The :py:meth:`pymongo.collection.Collection.find_one` + method performs the same operation as the + the :py:meth:`pymongo.collection.Collection.find` method + with a limit of 1. - - id: java-async - content: | + .. tab:: + :tabid: java-sync - The following methods can also read documents from a collection: + The following can also read documents from a collection: - - In :ref:`aggregation pipeline `, - the :pipeline:`$match` pipeline stage provides access to - MongoDB queries. See `com.mongodb.reactivestreams.client.MongoCollection.aggregate - `_ - for more information. + - In the :ref:`aggregation pipeline `, + the :pipeline:`$match` pipeline stage provides access to + MongoDB queries. See the `Java Synchronous Driver Aggregation + Examples`_. - - id: nodejs - content: | - The following methods can also read documents from a collection: + .. tab:: + :tabid: kotlin-coroutine - - :node-api:`Collection.findOne() ` + The following methods can also read documents from a collection: - - In :ref:`aggregation pipeline `, the - :pipeline:`$match` pipeline stage provides access to MongoDB - queries. See the MongoDB Node.js Driver's - :node-docs:`aggregation tutorial`. + - In an :ref:`aggregation pipeline `, + the :pipeline:`$match` pipeline stage allows you to perform + MongoDB queries. See the :driver:`Kotlin Coroutine Driver + Find Operation Examples + ` + to learn more. - .. note:: - The :node-api:`Collection.findOne() ` - method also performs a read operation to return a single - document. Internally, the - :node-api:`Collection.findOne() ` - method is the - :node-api:`Collection.find() ` method - with a limit of 1. + .. 
tab:: + :tabid: nodejs - - id: php - content: | + The following can also read documents from a collection: - The following methods can also read documents from a collection: + - :node-api:`Collection.findOne() ` + + - In :ref:`aggregation pipeline `, the + :pipeline:`$match` pipeline stage provides access to MongoDB + queries. See the MongoDB Node.js Driver's + :node-docs:`aggregation tutorial`. + + .. note:: - - :phpmethod:`MongoDB\\Collection::findOne() ` + The :node-api:`Collection.findOne() ` + method performs the same operation as the + :node-api:`Collection.find() ` method + with a limit of 1. - - In :ref:`aggregation pipeline `, the - :pipeline:`$match` pipeline stage provides access to MongoDB - queries. See the MongoDB PHP Library's - :ref:`aggregation example `. + .. tab:: + :tabid: php - .. note:: - The :phpmethod:`MongoDB\\Collection::findOne() ` - method also performs a read operation to return a single - document. Internally, the - :phpmethod:`MongoDB\\Collection::findOne() ` - method is the - :phpmethod:`MongoDB\\Collection::find() ` - method with a limit of 1. + The following can also read documents from a collection: - - id: perl - content: | + - :phpmethod:`MongoDB\\Collection::findOne() ` - The following methods can also read documents from a collection: + - In :ref:`aggregation pipeline `, the + :pipeline:`$match` pipeline stage provides access to MongoDB + queries. See the MongoDB PHP Library's + :ref:`aggregation example `. - - :perl-api:`MongoDB::Collection::find_one()` + .. note:: - - In :ref:`aggregation pipeline `, the - :pipeline:`$match` pipeline stage provides access to MongoDB - queries. See the MongoDB Perl driver's - `aggregation examples `__. + The :phpmethod:`MongoDB\\Collection::findOne() ` + method performs the same operation as the + :phpmethod:`MongoDB\\Collection::find() ` + method with a limit of 1. - .. note:: - The :perl-api:`MongoDB::Collection::find_one()` - method also performs a read operation to return a single - document. Internally, the - :perl-api:`MongoDB::Collection::find_one()` - method is the - :perl-api:`MongoDB::Collection::find()` - method with a limit of 1. + .. tab:: + :tabid: perl - - id: ruby - content: | + The following can also read documents from a collection: - The following methods can also read documents from a collection: + - :perl-api:`MongoDB::Collection::find_one()` - - In :ref:`aggregation pipeline `, the - :pipeline:`$match` pipeline stage provides access to MongoDB - queries. See the MongoDB Ruby driver's - :ruby:`aggregation examples `. + - In :ref:`aggregation pipeline `, the + :pipeline:`$match` pipeline stage provides access to MongoDB + queries. See the MongoDB Perl driver's + `aggregation examples `__. - - id: scala - content: | + .. note:: - The following methods can also read documents from a collection: + The :perl-api:`MongoDB::Collection::find_one()` + method performs the same operation as the + :perl-api:`MongoDB::Collection::find()` + method with a limit of 1. - - In :ref:`aggregation pipeline `, the - :pipeline:`$match` pipeline stage provides access to MongoDB - queries. See the MongoDB Scala driver's :scala-api:`aggregate method `. + .. tab:: + :tabid: ruby - - id: csharp - content: | + The following can also read documents from a collection: - The following methods can also read documents from a collection: + - In :ref:`aggregation pipeline `, the + :pipeline:`$match` pipeline stage provides access to MongoDB + queries. See the MongoDB Ruby driver's + :ruby:`aggregation examples `. 
- - :csharp-api:`MongoCollection.FindOne() ` + .. tab:: + :tabid: scala - - In :ref:`aggregation pipeline `, the - :pipeline:`$match` pipeline stage provides access to MongoDB - queries. See the MongoDB C# driver's - :csharp-docs:`LINQ documentation `. + The following can also read documents from a collection: - .. note:: - The :csharp-api:`MongoCollection.FindOne() ` - method also performs a read operation to return a single - document. Internally, the - :csharp-api:`MongoCollection.FindOne() ` - method is the - :csharp-api:`MongoCollection.Find() ` - method with a limit of 1. + - In :ref:`aggregation pipeline `, the + :pipeline:`$match` pipeline stage provides access to MongoDB + queries. See the MongoDB Scala driver's :scala-api:`aggregate method `. - - id: go - content: | + .. tab:: + :tabid: csharp + + The following can also read documents from a collection: + + - :csharp-api:`MongoCollection.FindOne() ` + + - In :ref:`aggregation pipeline `, the + :pipeline:`$match` pipeline stage provides access to MongoDB + queries. See the MongoDB C# driver's + :csharp-docs:`LINQ documentation `. + + .. note:: + + The :csharp-api:`MongoCollection.FindOne() ` + method performs the same operation as the + :csharp-api:`MongoCollection.Find() ` + method with a limit of 1. + + .. tab:: + :tabid: go - The following methods can also read documents from a collection: + The following can also read documents from a collection: - - :go-api:`Collection.FindOne ` + - :go-api:`Collection.FindOne ` - - In :ref:`aggregation pipeline `, - the :pipeline:`$match` pipeline stage provides access to - MongoDB queries. See - :go-api:`Collection.Aggregate`. + - In :ref:`aggregation pipeline `, + the :pipeline:`$match` pipeline stage provides access to + MongoDB queries. See + :go-api:`Collection.Aggregate`. .. toctree:: :titlesonly: diff --git a/source/tutorial/query-embedded-documents.txt b/source/tutorial/query-embedded-documents.txt index 68e3cd25c7b..b4702caa226 100644 --- a/source/tutorial/query-embedded-documents.txt +++ b/source/tutorial/query-embedded-documents.txt @@ -12,11 +12,11 @@ Query on Embedded/Nested Documents .. facet:: :name: programming_language - :values: shell, csharp, go, java, python, perl, php, ruby, scala, javascript/typescript + :values: shell, csharp, go, java, python, perl, php, ruby, scala, javascript/typescript, kotlin .. meta:: :description: MongoDB Manual: How to query or select on embedded or nested documents, subdocuments and fields. - :keywords: filter, nested documents, subdocuments, nested fields, compound conditions, motor, java sync, java async, reactive streams, code example, node.js, compass + :keywords: filter, nested documents, subdocuments, nested fields, compound conditions, motor, java sync, java async, reactive streams, code example, node.js, compass, kotlin coroutine .. contents:: On this page :local: diff --git a/source/tutorial/query-for-null-fields.txt b/source/tutorial/query-for-null-fields.txt index 7fe6ad2fd03..767334e98fb 100644 --- a/source/tutorial/query-for-null-fields.txt +++ b/source/tutorial/query-for-null-fields.txt @@ -6,15 +6,16 @@ ================================ Query for Null or Missing Fields ================================ - + .. default-domain:: mongodb .. facet:: :name: programming_language - :values: shell, csharp, go, java, javascript/typescript, perl, php, python, ruby, scala + :values: shell, csharp, go, java, javascript/typescript, php, python, ruby, scala, kotlin .. 
meta:: - :keywords: java sync, java async, reactive streams, motor, code example, node.js, compass + :description: Learn how to query for null or missing fields in MongoDB using various methods including the MongoDB Atlas UI and MongoDB Compass. Understand different query operators' treatment of null values. + :keywords: java sync, java async, reactive streams, motor, code example, node.js, compass, kotlin coroutine .. contents:: On this page :local: @@ -36,11 +37,17 @@ Different query operators in MongoDB treat ``null`` values differently. .. |query_operations| replace:: operations that query for ``null`` values -.. include:: /includes/driver-examples/driver-example-query-intro.rst +.. include:: /includes/driver-examples/driver-example-query-intro-no-perl.rst .. tabs-drivers:: tabs: + - id: c + content: | + .. important:: + Use ``BCON_NULL`` with the MongoDB C driver to + query for ``null`` or missing fields in MongoDB. + - id: python content: | .. important:: @@ -53,10 +60,10 @@ Different query operators in MongoDB treat ``null`` values differently. Use ``None`` with the Motor driver to query for ``null`` or missing fields in MongoDB. - - id: perl + - id: kotlin-coroutine content: | .. important:: - Use ``undef`` with the MongoDB Perl driver to + Use ``null`` with the Kotlin Coroutine driver to query for ``null`` or missing fields in MongoDB. - id: ruby @@ -84,7 +91,6 @@ Different query operators in MongoDB treat ``null`` values differently. Use ``nil`` with the MongoDB Go driver to query for ``null`` or missing fields in MongoDB. - .. include:: /includes/driver-examples/driver-example-query-38.rst .. _faq-comparison-with-null: @@ -107,6 +113,12 @@ Equality Filter contain the ``item`` field whose value is ``null`` *or* that do not contain the ``item`` field. + - id: c + content: | + The ``{ item, BCON_NULL }`` query matches documents that either + contain the ``item`` field whose value is ``null`` *or* that + do not contain the ``item`` field. + - id: python content: | The ``{ item : None }`` query matches documents that either @@ -131,21 +143,21 @@ Equality Filter contain the ``item`` field whose value is ``null`` *or* that do not contain the ``item`` field. - - id: nodejs + - id: kotlin-coroutine content: | - The ``{ item : null }`` query matches documents that either + The ``eq("item", null)`` query matches documents that either contain the ``item`` field whose value is ``null`` *or* that do not contain the ``item`` field. - - id: php + - id: nodejs content: | - The ``[ item => undef ]`` query matches documents that either + The ``{ item : null }`` query matches documents that either contain the ``item`` field whose value is ``null`` *or* that do not contain the ``item`` field. - - id: perl + - id: php content: | - The ``{ item => undef }`` query matches documents that either + The ``[ item => undef ]`` query matches documents that either contain the ``item`` field whose value is ``null`` *or* that do not contain the ``item`` field. @@ -178,6 +190,110 @@ Equality Filter The query returns both documents in the collection. +.. _non-equality-filter: + +Non-Equality Filter +------------------- + +To query for fields that **exist** and are **not null**, use the ``{ $ne +: null }`` filter. The ``{ item : { $ne : null } }`` query matches +documents where the ``item`` field exists *and* has a non-null value. + +.. tabs-drivers:: + + tabs: + - id: shell + content: | + .. code-block:: sh + + db.inventory.find( { item: { $ne : null } } ) + + - id: compass + content: | + .. 
code-block:: javascript + + { item: { $ne : null } } + + - id: c + content: | + .. code-block:: c + + filter = BCON_NEW ("item", BCON_NULL); + cursor = mongoc_collection_find_with_opts (collection, filter, NULL, NULL); + + - id: python + content: | + .. code-block:: python + + cursor = db.inventory.find( { "item": { "$ne": None } } ) + + - id: motor + content: | + .. code-block:: python + + cursor = db.inventory.find( { "item": { "$ne": None } } ) + + - id: java-sync + content: | + .. code-block:: java + + collection.find(ne("item", null)); + + - id: java-async + content: | + .. code-block:: java + + db.inventory.find( { item: { $ne : null} } ) + + - id: kotlin-coroutine + content: | + .. code-block:: kotlin + + collection.find(ne("item", null)) + + - id: nodejs + content: | + .. code-block:: javascript + + const cursor = db.collection('inventory') + .find({ item: { $ne : null } + }); + + - id: php + content: | + .. code-block:: php + + $cursor = $db->inventory->find(['item' => ['$ne' => null ]]); + + - id: ruby + content: | + .. code-block:: ruby + + client[:inventory].find(item: { '$ne' => nil }) + + - id: scala + content: | + .. code-block:: scala + + collection.find($ne("item", null)); + + - id: csharp + content: | + .. code-block:: csharp + + var filter = Builders.Filter.Ne("item", BsonNull.Value); + var result = collection.Find(filter).ToList(); + + - id: go + content: | + .. code-block:: go + + cursor, err := coll.Find( + context.TODO(), + bson.D{ + {"item", bson.D{"$ne": nil}}, + }) + Type Check ---------- @@ -198,6 +314,13 @@ Type Check ``null``; i.e. the value of the ``item`` field is of :ref:`BSON Type ` ``Null`` (BSON Type 10): + - id: c + content: | + The ``{ item, { $type, BCON_NULL } }`` query matches *only* + documents that contain the ``item`` field whose value is + ``null``; i.e. the value of the ``item`` field is of + :ref:`BSON Type ` ``Null`` (BSON Type 10): + - id: python content: | The ``{ item : { $type: 10 } }`` query matches *only* @@ -226,23 +349,23 @@ Type Check ``null``; i.e. the value of the ``item`` field is of :ref:`BSON Type ` ``Null`` (BSON Type 10): - - id: nodejs + - id: kotlin-coroutine content: | - The ``{ item : { $type: 10 } }`` query matches *only* + The ``type("item", BsonType.NULL)`` query matches *only* documents that contain the ``item`` field whose value is - ``null``; i.e. the value of the ``item`` field is of + ``null``. This means the value of the ``item`` field is of :ref:`BSON Type ` ``Null`` (BSON Type 10): - - id: php + - id: nodejs content: | - The ``[ item => [ $type => 10 ] ]`` query matches *only* + The ``{ item : { $type: 10 } }`` query matches *only* documents that contain the ``item`` field whose value is ``null``; i.e. the value of the ``item`` field is of :ref:`BSON Type ` ``Null`` (BSON Type 10): - - id: perl + - id: php content: | - The ``{ item => { $type => 10 } }`` query matches *only* + The ``[ item => [ $type => 10 ] ]`` query matches *only* documents that contain the ``item`` field whose value is ``null``; i.e. the value of the ``item`` field is of :ref:`BSON Type ` ``Null`` (BSON Type 10): @@ -300,6 +423,11 @@ field. [#type0]_ The ``{ item : { $exists: false } }`` query matches documents that do not contain the ``item`` field: + - id: c + content: | + The ``{ item, { $exists, BCON_BOOL (false) } }`` query matches documents + that do not contain the ``item`` field: + - id: python content: | The ``{ item : { $exists: False } }`` query matches documents @@ -320,6 +448,11 @@ field. 
[#type0]_ The ``exists("item", false)`` query matches documents that do not contain the ``item`` field: + - id: kotlin-coroutine + content: | + The ``exists("item", false)`` query matches documents that + do not contain the ``item`` field: + - id: nodejs content: | The ``{ item : { $exists: false } }`` query matches documents @@ -330,11 +463,6 @@ field. [#type0]_ The ``[ item => [ $exists => false ] ]`` query matches documents that do not contain the ``item`` field: - - id: perl - content: | - The ``{ item => { $exists => false } }`` query matches documents - that do not contain the ``item`` field: - - id: ruby content: | The ``{ item => { $exists => false } }`` query matches documents diff --git a/source/tutorial/recover-data-following-unexpected-shutdown.txt b/source/tutorial/recover-data-following-unexpected-shutdown.txt index 6455630f0c1..0174ca8cc48 100644 --- a/source/tutorial/recover-data-following-unexpected-shutdown.txt +++ b/source/tutorial/recover-data-following-unexpected-shutdown.txt @@ -50,8 +50,7 @@ these cases: The operation removes and does not save any corrupt data during the repair process. -Starting in MongoDB 4.4, for the WiredTiger storage engine, -:option:`mongod --repair`: +For the WiredTiger storage engine, :option:`mongod --repair`: - Rebuilds all indexes for collections with one or more inconsistent indexes. diff --git a/source/tutorial/remove-documents.txt b/source/tutorial/remove-documents.txt index 70291081144..16b05f2d4ea 100644 --- a/source/tutorial/remove-documents.txt +++ b/source/tutorial/remove-documents.txt @@ -11,11 +11,11 @@ Delete Documents .. facet:: :name: programming_language - :values: shell, csharp, go, java, javascript/typescript, perl, php, python, ruby, scala + :values: shell, csharp, go, java, javascript/typescript, perl, php, python, ruby, scala, kotlin .. meta:: :description: MongoDB Manual: How to delete documents in MongoDB. How to remove documents in MongoDB. How to specify conditions for removing or deleting documents in MongoDB. - :keywords: delete collection, remove document, java sync, java async, reactive streams, motor, code example, node.js, compass + :keywords: delete collection, remove document, java sync, java async, reactive streams, motor, code example, node.js, compass, kotlin coroutine .. contents:: On this page :local: @@ -54,6 +54,16 @@ You can delete documents in MongoDB using the following methods: Populate the ``inventory`` collection with the following documents: + - id: c + content: | + This page uses the following `MongoDB C Driver `__ + methods: + + - `mongoc_collection_delete_one `__ + - `mongoc_collection_delete_many `__ + + .. include:: /includes/driver-examples/examples-intro.rst + - id: python content: | This page uses the following @@ -101,6 +111,16 @@ You can delete documents in MongoDB using the following methods: .. include:: /includes/driver-examples/examples-intro.rst + - id: kotlin-coroutine + content: | + This page uses the + following :driver:`Kotlin Coroutine Driver ` methods: + + - `MongoCollection.deleteOne() <{+java-api-docs+}/mongodb-driver-kotlin-coroutine/mongodb-driver-kotlin-coroutine/com.mongodb.kotlin.client.coroutine/-mongo-collection/delete-one.html>`__ + - `MongoCollection.deleteMany() <{+java-api-docs+}/mongodb-driver-kotlin-coroutine/mongodb-driver-kotlin-coroutine/com.mongodb.kotlin.client.coroutine/-mongo-collection/delete-many.html>`__ + + .. 
include:: /includes/driver-examples/examples-intro.rst + - id: nodejs content: | This page uses the @@ -116,8 +136,8 @@ You can delete documents in MongoDB using the following methods: This page uses the following `MongoDB PHP Library `_ methods: - - :phpmethod:`MongoDB\\Collection::deleteMany() ` - - :phpmethod:`MongoDB\\Collection::deleteOne() ` + - :phpmethod:`MongoDB\\Collection::deleteMany() ` + - :phpmethod:`MongoDB\\Collection::deleteOne() ` .. include:: /includes/driver-examples/examples-intro.rst @@ -193,6 +213,18 @@ Delete All Documents .. include:: /includes/fact-delete-all-inventory.rst + - id: c + content: | + + To delete all documents from a collection, pass the + `mongoc_collection_t `__ + and a `bson_t `__ + that matches all documents to the + `mongoc_collection_delete_many `__ + method. + + .. include:: /includes/fact-delete-all-inventory.rst + - id: python content: | @@ -234,6 +266,15 @@ Delete All Documents .. include:: /includes/fact-delete-all-inventory.rst + - id: kotlin-coroutine + content: | + + To delete all documents from a collection, pass an empty + ``Bson`` object as the :ref:`filter ` to the + `MongoCollection.deleteMany() <{+java-api-docs+}/mongodb-driver-kotlin-coroutine/mongodb-driver-kotlin-coroutine/com.mongodb.kotlin.client.coroutine/-mongo-collection/delete-many.html>`__ method. + + .. include:: /includes/fact-delete-all-inventory.rst + - id: nodejs content: | @@ -249,7 +290,7 @@ Delete All Documents To delete all documents from a collection, pass an empty :ref:`filter` document ``[]`` to the - :phpmethod:`MongoDB\\Collection::deleteMany() ` + :phpmethod:`MongoDB\\Collection::deleteMany() ` method. .. include:: /includes/fact-delete-all-inventory.rst @@ -329,6 +370,22 @@ Delete All Documents that Match a Condition .. include:: /includes/fact-remove-condition-inv-example.rst + - id: c + content: | + + .. include:: /includes/fact-delete-condition-inventory.rst + + .. include:: /includes/extracts/filter-equality.rst + + .. include:: /includes/extracts/filter-query-operators.rst + + To delete all documents that match a deletion criteria, pass the + `mongoc_collection_t `__ + and a `bson_t `__ + that matches the documents to be deleted to the + `mongoc_collection_delete_many `__ + method. + - id: python content: | @@ -392,6 +449,21 @@ Delete All Documents that Match a Condition .. include:: /includes/fact-remove-condition-inv-example.rst + - id: kotlin-coroutine + content: | + + .. include:: /includes/fact-delete-condition-inventory.rst + + .. include:: /includes/extracts/filter-equality.rst + + .. include:: /includes/extracts/filter-query-operators.rst + + To delete all documents that match a deletion criteria, pass a + :ref:`filter ` parameter to the + `MongoCollection.deleteMany() <{+java-api-docs+}/mongodb-driver-kotlin-coroutine/mongodb-driver-kotlin-coroutine/com.mongodb.kotlin.client.coroutine/-mongo-collection/delete-many.html>`__ method. + + .. include:: /includes/fact-remove-condition-inv-example.rst + - id: nodejs content: | @@ -419,7 +491,7 @@ Delete All Documents that Match a Condition To delete all documents that match a deletion criteria, pass a :ref:`filter ` parameter to the - :phpmethod:`deleteMany() ` + :phpmethod:`deleteMany() ` method. .. include:: /includes/fact-remove-condition-inv-example.rst @@ -542,6 +614,18 @@ Delete Only One Document that Matches a Condition and List View in Compass, refer to the :ref:`Compass documentation `. 
+ - id: c + content: | + + To delete a single document from a collection, pass the + `mongoc_collection_t `__ + and a `bson_t `__ + that matches the document you want to delete to the + `mongoc_collection_delete_one `__ + method. + + .. include:: /includes/fact-delete-all-inventory.rst + - id: python content: | @@ -584,6 +668,16 @@ Delete Only One Document that Matches a Condition .. include:: /includes/fact-remove-one-condition-inv-example.rst + - id: kotlin-coroutine + content: | + + To delete at most a single document that matches a specified + filter, even if multiple documents match the specified + filter, you can use the `MongoCollection.deleteOne() <{+java-api-docs+}/mongodb-driver-kotlin-coroutine/mongodb-driver-kotlin-coroutine/com.mongodb.kotlin.client.coroutine/-mongo-collection/delete-one.html>`__ + method. + + .. include:: /includes/fact-remove-one-condition-inv-example.rst + - id: nodejs content: | @@ -601,7 +695,7 @@ Delete Only One Document that Matches a Condition To delete at most a single document that matches a specified filter (even though multiple documents may match the specified filter) use the - :phpmethod:`MongoDB\\Collection::deleteOne() ` + :phpmethod:`MongoDB\\Collection::deleteOne() ` method. .. include:: /includes/fact-remove-one-condition-inv-example.rst @@ -741,7 +835,7 @@ document. For more information on MongoDB and atomicity, see Write Acknowledgement ~~~~~~~~~~~~~~~~~~~~~ -With write concerns, you can specify the level of acknowledgement +With write concerns, you can specify the level of acknowledgment requested from MongoDB for write operations. For details, see :doc:`/reference/write-concern`. @@ -766,6 +860,16 @@ requested from MongoDB for write operations. For details, see - :ref:`Compass Query Bar ` + - id: c + content: | + .. seealso:: + + - `mongoc_collection_delete_one `__ + + - `mongoc_collection_delete_many `__ + + - :ref:`additional-deletes` + - id: python content: | .. seealso:: @@ -808,6 +912,15 @@ requested from MongoDB for write operations. For details, see - `Java Reactive Streams Driver Quick Tour `_ + - id: kotlin-coroutine + content: | + .. seealso:: + + - `MongoCollection.deleteOne() <{+java-api-docs+}/mongodb-driver-kotlin-coroutine/mongodb-driver-kotlin-coroutine/com.mongodb.kotlin.client.coroutine/-mongo-collection/delete-one.html>`__ + + - `MongoCollection.deleteMany() <{+java-api-docs+}/mongodb-driver-kotlin-coroutine/mongodb-driver-kotlin-coroutine/com.mongodb.kotlin.client.coroutine/-mongo-collection/delete-many.html>`__ + + - :driver:`Kotlin Coroutine Driver Delete Documents Guide ` - id: nodejs content: | @@ -823,9 +936,9 @@ requested from MongoDB for write operations. For details, see content: | .. seealso:: - - :phpmethod:`MongoDB\\Collection::deleteMany() ` + - :phpmethod:`MongoDB\\Collection::deleteMany() ` - - :phpmethod:`MongoDB\\Collection::deleteOne() ` + - :phpmethod:`MongoDB\\Collection::deleteOne() ` - :ref:`additional-deletes` diff --git a/source/tutorial/remove-replica-set-member.txt b/source/tutorial/remove-replica-set-member.txt index b1bec592700..3d01090c925 100644 --- a/source/tutorial/remove-replica-set-member.txt +++ b/source/tutorial/remove-replica-set-member.txt @@ -50,12 +50,11 @@ using a :doc:`replica configuration document ` where that member is removed from the :rsconf:`members` array. -Starting in MongoDB 4.4, :method:`rs.reconfig()` allows adding or -removing no more than ``1`` :rsconf:`voting ` member -at a time. 
To remove multiple voting members from the replica set, issue -a series of :method:`rs.reconfig()` operations to remove one member -at a time. See :ref:`replSetReconfig-cmd-single-node` for more -information. +:method:`rs.reconfig()` allows adding or removing no more than ``1`` +:rsconf:`voting ` member at a time. To remove multiple voting +members from the replica set, issue a series of :method:`rs.reconfig()` +operations to remove one member at a time. See +:ref:`replSetReconfig-cmd-single-node` for more information. Procedure ~~~~~~~~~ diff --git a/source/tutorial/rotate-encryption-key.txt b/source/tutorial/rotate-encryption-key.txt index 7a85b966c4e..7388adea636 100644 --- a/source/tutorial/rotate-encryption-key.txt +++ b/source/tutorial/rotate-encryption-key.txt @@ -4,7 +4,13 @@ Rotate Encryption Keys ====================== -.. default-domain:: mongodb +.. facet:: + :name: genre + :values: tutorial + +.. meta:: + :keywords: key management interoperability protocol, customer master key, key management, mongosh, mongod, security + .. contents:: On this page :local: @@ -24,7 +30,7 @@ year. MongoDB provides two options for key rotation. You can rotate out the binary with a new instance that uses a new key. Or, if you are using a -KMIP server for key management, you can rotate the master key. +KMIP server for key management, you can rotate the :term:`Customer Master Key`. Rotate a Replica Set Member --------------------------- @@ -72,7 +78,7 @@ KMIP Master Key Rotation ------------------------ If you are using a KMIP server for key management, you can rotate -the master key, the only externally managed key. With the new +the :term:`Customer Master Key`, the only externally managed key. With the new master key, the internal keystore will be re-encrypted but the database keys will be otherwise left unchanged. This obviates the need to re-encrypt the entire data set. diff --git a/source/tutorial/rotate-log-files.txt b/source/tutorial/rotate-log-files.txt index b5e2126ab9b..ab9a7f4275f 100644 --- a/source/tutorial/rotate-log-files.txt +++ b/source/tutorial/rotate-log-files.txt @@ -41,6 +41,13 @@ Finally, you can configure :binary:`~bin.mongod` to send log data to the ``syslog`` using the :option:`--syslog ` option. In this case, you can take advantage of alternate log rotation tools. +.. note:: + + :dbcommand:`logRotate` isn't a replicated command. You must connect + to each instance of a replica set and run :dbcommand:`logRotate` + to rotate the logs for replica set members. + + To rotate the log files, you must perform one of these steps: - Send a ``SIGUSR1`` signal to the :binary:`~bin.mongod` or diff --git a/source/tutorial/sharding-high-availability-writes.txt b/source/tutorial/sharding-high-availability-writes.txt index c202fe2f31a..3347899cfbc 100644 --- a/source/tutorial/sharding-high-availability-writes.txt +++ b/source/tutorial/sharding-high-availability-writes.txt @@ -271,7 +271,7 @@ For example, the application attempts to write the following document to the } If the application receives an error on attempted write, or if the write -acknowledgement takes too long, the application logs the datacenter as +acknowledgment takes too long, the application logs the datacenter as unavailable and alters the ``datacenter`` field to point to the ``bravo`` datacenter. 
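The retry logic described above can be sketched in :binary:`~bin.mongosh` roughly as
follows. This is a minimal illustration rather than part of the tutorial's setup: the
``deliveryData`` collection name and the five-second ``wtimeout`` are assumptions, not
values taken from this page.

.. code-block:: javascript

   const doc = { message_id: 329620, datacenter: "alfa" };
   const opts = { writeConcern: { w: "majority", wtimeout: 5000 } };

   try {
      // Write to the preferred datacenter and require majority
      // acknowledgment within five seconds so a slow or unreachable
      // datacenter surfaces as an error.
      db.deliveryData.insertOne(doc, opts);
   } catch (e) {
      // The write errored or the acknowledgment timed out. The
      // application records "alfa" as unavailable, points the document
      // at "bravo", and retries. The first write may still have
      // completed on the server, which is how a delayed acknowledgment
      // can produce the duplicate document shown later on this page.
      doc.datacenter = "bravo";
      db.deliveryData.insertOne(doc, opts);
   }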
@@ -318,7 +318,7 @@ include the ``datacenter`` field, and therefore does not perform a The results show that the document with ``message_id`` of ``329620`` has been inserted into MongoDB twice, probably as a result of a delayed write -acknowledgement. +acknowledgment. .. code-block:: javascript diff --git a/source/tutorial/sort-results-with-indexes.txt b/source/tutorial/sort-results-with-indexes.txt index a45964e1b20..8dd248591f7 100644 --- a/source/tutorial/sort-results-with-indexes.txt +++ b/source/tutorial/sort-results-with-indexes.txt @@ -27,10 +27,9 @@ not block concurrent operations on the collection or database. Starting in MongoDB 6.0, if the server requires more than 100 megabytes of memory for a pipeline execution stage, MongoDB automatically writes temporary files to disk unless that query specifies -``{ allowDiskUse: false }``. In versions 4.4 and 5.0, if the server -needs more than 100 megabytes of system memory for the blocking sort -operation, MongoDB returns an error unless that query specifies -:method:`cursor.allowDiskUse()`. For details, see +``{ allowDiskUse: false }``. If the server needs more than 100 megabytes of +system memory for the blocking sort operation, MongoDB returns an error unless +that query specifies :method:`cursor.allowDiskUse()`. For details, see :parameter:`allowDiskUseByDefault`. Sort operations that use an index often have better performance than diff --git a/source/tutorial/store-javascript-function-on-server.txt b/source/tutorial/store-javascript-function-on-server.txt index 5466059896f..ad78f9ae07b 100644 --- a/source/tutorial/store-javascript-function-on-server.txt +++ b/source/tutorial/store-javascript-function-on-server.txt @@ -52,6 +52,6 @@ for use from any JavaScript context, such as :dbcommand:`mapReduce` and Functions saved as the deprecated BSON type :ref:`JavaScript (with scope) `, however, cannot be used by -:dbcommand:`mapReduce` and :query:`$where` starting in MongoDB 4.4. +:dbcommand:`mapReduce` and :query:`$where`. diff --git a/source/tutorial/troubleshoot-kerberos.txt b/source/tutorial/troubleshoot-kerberos.txt index 4d7eecef550..02af2f43eb1 100644 --- a/source/tutorial/troubleshoot-kerberos.txt +++ b/source/tutorial/troubleshoot-kerberos.txt @@ -14,10 +14,9 @@ Troubleshoot Kerberos Authentication ``mongokerberos`` Validation Tool --------------------------------- -Introduced alongside MongoDB 4.4, the :binary:`~bin.mongokerberos` -program provides a convenient method to verify your platform's Kerberos -configuration for use with MongoDB, and to test that Kerberos -authentication from a MongoDB client works as expected. +The :binary:`~bin.mongokerberos` program provides a convenient method to +verify your platform's Kerberos configuration for use with MongoDB, and to +test that Kerberos authentication from a MongoDB client works as expected. The :binary:`~bin.mongokerberos` tool can help diagnose common configuration issues, and is the recommended place to start when diff --git a/source/tutorial/troubleshoot-map-function.txt b/source/tutorial/troubleshoot-map-function.txt index 9fbd3d46314..8fde5b399a1 100644 --- a/source/tutorial/troubleshoot-map-function.txt +++ b/source/tutorial/troubleshoot-map-function.txt @@ -23,19 +23,6 @@ The ``map`` function is a JavaScript function that associates or “maps” a value with a key and emits the key and value pair during a :ref:`map-reduce ` operation. -.. 
note:: - - Starting in MongoDB 4.4, :dbcommand:`mapReduce` no longer supports - the deprecated :ref:`BSON Type ` JavaScript code with - scope (BSON Type 15) for its functions. The ``map``, ``reduce``, - and ``finalize`` functions must be either BSON type String - (BSON Type 2) or BSON Type JavaScript (BSON Type 13). To pass constant - values which will be accessible in the ``map``, ``reduce``, and - ``finalize`` functions, use the ``scope`` parameter. - - The use of JavaScript code with scope for the :dbcommand:`mapReduce` - functions has been deprecated since version 4.2.1. - Verify Key and Value Pairs -------------------------- diff --git a/source/tutorial/troubleshoot-reduce-function.txt b/source/tutorial/troubleshoot-reduce-function.txt index 439daea6989..5ef6ebbf1b2 100644 --- a/source/tutorial/troubleshoot-reduce-function.txt +++ b/source/tutorial/troubleshoot-reduce-function.txt @@ -38,19 +38,6 @@ For a list of all the requirements for the ``reduce`` function, see :dbcommand:`mapReduce`, or :binary:`~bin.mongosh` helper method :method:`db.collection.mapReduce()`. -.. note:: - - Starting in MongoDB 4.4, :dbcommand:`mapReduce` no longer supports - the deprecated :ref:`BSON type ` JavaScript code with - scope (BSON Type 15) for its functions. The - ``map``, ``reduce``, and ``finalize`` functions must be either BSON - type String (BSON Type 2) or BSON type JavaScript (BSON Type 13). To - pass constant values which will be accessible in the ``map``, - ``reduce``, and ``finalize`` functions, use the ``scope`` parameter. - - The use of JavaScript code with scope for the :dbcommand:`mapReduce` - functions has been deprecated since version 4.2.1. - Confirm Output Type ------------------- diff --git a/source/tutorial/troubleshoot-replica-sets.txt b/source/tutorial/troubleshoot-replica-sets.txt index b1dba32e4bd..4b564ffc989 100644 --- a/source/tutorial/troubleshoot-replica-sets.txt +++ b/source/tutorial/troubleshoot-replica-sets.txt @@ -129,7 +129,7 @@ Possible causes of replication lag include: <\>`, the secondaries will not be able to read the oplog fast enough to keep up with changes. - To prevent this, request :doc:`write acknowledgement + To prevent this, request :doc:`write acknowledgment write concern ` after every 100, 1,000, or another interval to provide an opportunity for secondaries to catch up with the primary. diff --git a/source/tutorial/troubleshoot-sharded-clusters.txt b/source/tutorial/troubleshoot-sharded-clusters.txt index d34aa9d42f5..2c322992515 100644 --- a/source/tutorial/troubleshoot-sharded-clusters.txt +++ b/source/tutorial/troubleshoot-sharded-clusters.txt @@ -128,7 +128,7 @@ To ensure cluster availability: Config Database String Error ---------------------------- -Config servers can be deployed as replica +Config servers must be deployed as replica sets. The :binary:`~bin.mongos` instances for the sharded cluster must specify the same config server replica set name but can specify hostname and port of different members of the replica set. diff --git a/source/tutorial/update-documents.txt b/source/tutorial/update-documents.txt index b711ad2e3aa..12732c7ca8b 100644 --- a/source/tutorial/update-documents.txt +++ b/source/tutorial/update-documents.txt @@ -11,11 +11,11 @@ Update Documents .. facet:: :name: programming_language - :values: shell, csharp, go, java, python, perl, php, ruby, rust, scala, javascript/typescript + :values: shell, csharp, go, java, python, perl, php, ruby, rust, scala, javascript/typescript, kotlin .. 
meta:: :description: How to update single or multiple documents in MongoDB. How to update all or replace documents in MongoDB. How to update fields in documents in MongoDB. - :keywords: update collection, motor, java sync, java async, reactive streams, code example, node.js, compass + :keywords: update collection, motor, java sync, java async, reactive streams, code example, node.js, compass, kotlin coroutine .. contents:: On this page :local: @@ -69,6 +69,17 @@ upper-right to set the language of the following examples. Populate the ``inventory`` collection with the following documents: + .. tab:: + :tabid: c + + This page uses the following `MongoDB C Driver `__ + methods: + + - `mongoc_collection_update_one `__ + - `mongoc_collection_replace_one `__ + + |populate-inventory| + .. tab:: :tabid: python @@ -131,6 +142,18 @@ upper-right to set the language of the following examples. |populate-inventory| + .. tab:: + :tabid: kotlin-coroutine + + This page uses the + following :driver:`Kotlin Coroutine Driver ` methods: + + - `MongoCollection.updateOne() <{+java-api-docs+}/mongodb-driver-kotlin-coroutine/mongodb-driver-kotlin-coroutine/com.mongodb.kotlin.client.coroutine/-mongo-collection/update-one.html>`__ + - `MongoCollection.updateMany() <{+java-api-docs+}/mongodb-driver-kotlin-coroutine/mongodb-driver-kotlin-coroutine/com.mongodb.kotlin.client.coroutine/-mongo-collection/update-many.html>`__ + - `MongoCollection.replaceOne() <{+java-api-docs+}/mongodb-driver-kotlin-coroutine/mongodb-driver-kotlin-coroutine/com.mongodb.kotlin.client.coroutine/-mongo-collection/replace-one.html>`__ + + |populate-inventory| + .. tab:: :tabid: nodejs @@ -151,11 +174,11 @@ upper-right to set the language of the following examples. This page uses the following `MongoDB PHP Library `_ methods: - - :phpmethod:`MongoDB\\Collection::updateOne() ` + - :phpmethod:`MongoDB\\Collection::updateOne() ` - - :phpmethod:`MongoDB\\Collection::updateMany() ` + - :phpmethod:`MongoDB\\Collection::updateMany() ` - - :phpmethod:`MongoDB\\Collection::replaceOne() ` + - :phpmethod:`MongoDB\\Collection::replaceOne() ` |populate-inventory| @@ -336,7 +359,7 @@ Update Documents in a Collection .. code-block:: java - combine(set( , ), set(, ) ) + combine(set(, ), set(, )) For a list of the update helpers, see `com.mongodb.client.model.Updates @@ -357,7 +380,7 @@ Update Documents in a Collection .. code-block:: java - combine(set( , ), set(, ) ) + combine(set(, ), set(, )) For a list of the update helpers, see `com.mongodb.client.model.Updates @@ -365,6 +388,25 @@ Update Documents in a Collection .. include:: /includes/fact-update-set-create-fields.rst + .. tab:: + :tabid: kotlin-coroutine + + To update a document, MongoDB provides + :ref:`update operators ` such + as :update:`$set` to modify field values. + + The driver provides the `com.mongodb.client.model.Updates + <{+java-api-docs+}/mongodb-driver-core/com/mongodb/client/model/Updates.html>`__ + class to facilitate the creation of update documents. The + following code shows an update document that uses methods + from the ``Updates`` builder class: + + .. code-block:: kotlin + + combine(set(, ), set(, )) + + .. include:: /includes/fact-update-set-create-fields.rst + .. tab:: :tabid: nodejs @@ -567,6 +609,15 @@ Update a Single Document on the ``inventory`` collection to update the *first* document where ``item`` equals ``"paper"``: + .. 
tab:: + :tabid: kotlin-coroutine + + The following example uses the + `MongoCollection.updateOne() + <{+java-api-docs+}/mongodb-driver-kotlin-coroutine/mongodb-driver-kotlin-coroutine/com.mongodb.kotlin.client.coroutine/-mongo-collection/update-one.html>`__ + method on the ``inventory`` collection to update the *first* + document where ``item`` equals ``"paper"``: + .. tab:: :tabid: nodejs @@ -579,7 +630,7 @@ Update a Single Document :tabid: php The following example uses the :phpmethod:`updateOne() - ` method on the + ` method on the ``inventory`` collection to update the *first* document where ``item`` equals ``"paper"``: @@ -683,6 +734,15 @@ Update Multiple Documents method on the ``inventory`` collection to update all documents where ``qty`` is less than ``50``: + .. tab:: + :tabid: kotlin-coroutine + + The following example uses the + `MongoCollection.updateMany() + <{+java-api-docs+}/mongodb-driver-kotlin-coroutine/mongodb-driver-kotlin-coroutine/com.mongodb.kotlin.client.coroutine/-mongo-collection/update-many.html>`__ + method on the ``inventory`` collection to update all documents + where ``qty`` is less than ``50``: + .. tab:: :tabid: nodejs @@ -695,7 +755,7 @@ Update Multiple Documents :tabid: php The following example uses the :phpmethod:`updateMany() - ` method on the + ` method on the ``inventory`` collection to update all documents where ``qty`` is less than ``50``: @@ -801,6 +861,15 @@ Replace a Document .. include:: /includes/fact-update-replace-example.rst + .. tab:: + :tabid: kotlin-coroutine + + To replace the entire content of a document except for the ``_id`` + field, pass an entirely new document as the second argument to + the `MongoCollection.replaceOne() <{+java-api-docs+}/mongodb-driver-kotlin-coroutine/mongodb-driver-kotlin-coroutine/com.mongodb.kotlin.client.coroutine/-mongo-collection/replace-one.html>`__ method. + + .. include:: /includes/fact-update-replace-example.rst + .. tab:: :tabid: nodejs @@ -817,7 +886,7 @@ Replace a Document To replace the entire content of a document except for the ``_id`` field, pass an entirely new document as the second argument to :phpmethod:`replaceOne() - `. + `. .. include:: /includes/fact-update-replace-example.rst @@ -1040,6 +1109,21 @@ Upsert Option For details on the new document created, see the individual reference pages for the methods. + .. tab:: + :tabid: kotlin-coroutine + + If the update and replace methods include the + `com.mongodb.client.model.UpdateOptions + <{+java-api-docs+}/mongodb-driver-core/com/mongodb/client/model/UpdateOptions.html>`__ + parameter that specifies ``upsert(true)``, + **and** no documents match the specified filter, then the + operation creates a new document and inserts it. If there are + matching documents, then the operation modifies or replaces + the matching document or documents. + + For details on the new document created, see the individual + reference pages for the methods. + .. tab:: :tabid: nodejs @@ -1059,11 +1143,11 @@ Upsert Option :tabid: php If :phpmethod:`updateOne() - `, + `, :phpmethod:`updateMany() - `, or + `, or :phpmethod:`replaceOne() - ` includes ``upsert => + ` includes ``upsert => true`` **and** no documents match the specified filter, then the operation creates a new document and inserts it. 
If there are matching documents, then the operation modifies or replaces the @@ -1153,7 +1237,7 @@ Upsert Option Write Acknowledgement ~~~~~~~~~~~~~~~~~~~~~ -With write concerns, you can specify the level of acknowledgement +With write concerns, you can specify the level of acknowledgment requested from MongoDB for write operations. For details, see :doc:`/reference/write-concern`. @@ -1238,6 +1322,17 @@ requested from MongoDB for write operations. For details, see - `Java Reactive Streams Driver Quick Tour `_ + .. tab:: + :tabid: kotlin-coroutine + + .. seealso:: + + - `MongoCollection.updateOne() <{+java-api-docs+}/mongodb-driver-kotlin-coroutine/mongodb-driver-kotlin-coroutine/com.mongodb.kotlin.client.coroutine/-mongo-collection/update-one.html>`__ + - `MongoCollection.updateMany() <{+java-api-docs+}/mongodb-driver-kotlin-coroutine/mongodb-driver-kotlin-coroutine/com.mongodb.kotlin.client.coroutine/-mongo-collection/update-many.html>`__ + - `MongoCollection.replaceOne() <{+java-api-docs+}/mongodb-driver-kotlin-coroutine/mongodb-driver-kotlin-coroutine/com.mongodb.kotlin.client.coroutine/-mongo-collection/replace-one.html>`__ + + - :driver:`Kotlin Coroutine Driver Modify Documents Guide ` + .. tab:: :tabid: nodejs @@ -1256,11 +1351,11 @@ requested from MongoDB for write operations. For details, see .. seealso:: - - :phpmethod:`MongoDB\\Collection::updateOne() ` + - :phpmethod:`MongoDB\\Collection::updateOne() ` - - :phpmethod:`MongoDB\\Collection::updateMany() ` + - :phpmethod:`MongoDB\\Collection::updateMany() ` - - :phpmethod:`MongoDB\\Collection::replaceOne() ` + - :phpmethod:`MongoDB\\Collection::replaceOne() ` - :ref:`additional-updates` diff --git a/source/tutorial/upgrade-cluster-to-ssl.txt b/source/tutorial/upgrade-cluster-to-ssl.txt index 5fdca5855f2..0b374bfa9ac 100644 --- a/source/tutorial/upgrade-cluster-to-ssl.txt +++ b/source/tutorial/upgrade-cluster-to-ssl.txt @@ -61,7 +61,7 @@ process. .. code-block:: bash - mongod --replSet --tlsMode allowTLS --tlsCertificateKeyFile --sslCAFile + mongod --replSet --tlsMode allowTLS --tlsCertificateKeyFile --tlsCAFile - id: config name: Configuration File Options diff --git a/source/tutorial/upgrade-keyfile-to-x509.txt b/source/tutorial/upgrade-keyfile-to-x509.txt index 59229d87063..b52ae6c147f 100644 --- a/source/tutorial/upgrade-keyfile-to-x509.txt +++ b/source/tutorial/upgrade-keyfile-to-x509.txt @@ -68,7 +68,7 @@ cluster authentication, use the following rolling upgrade process: :binary:`mongod` / :binary:`mongos` presents this file to other members of the cluster to identify itself as a member. - Include other :doc:`TLS/SSL options ` and + Include other :ref:`TLS/SSL options ` and any other options as appropriate for your specific configuration. For example: @@ -221,7 +221,7 @@ to x.509 membership authentication and TLS/SSL connections: each node can receive either a keyfile or an x.509 certificate from other members to authenticate those members. - Include other :doc:`TLS/SSL options ` and + Include other :ref:`TLS/SSL options ` and any other options as appropriate for your specific configuration. For example: diff --git a/source/tutorial/upgrade-revision.txt b/source/tutorial/upgrade-revision.txt new file mode 100644 index 00000000000..3a63b383477 --- /dev/null +++ b/source/tutorial/upgrade-revision.txt @@ -0,0 +1,224 @@ +.. _upgrade-to-latest-revision: + +============================================== +Upgrade to the Latest Patch Release of MongoDB +============================================== + +.. 
default-domain:: mongodb + +.. contents:: On this page + :local: + :backlinks: none + :depth: 2 + :class: singlecol + +MongoDB version numbers have the form ``X.Y.Z`` where ``Z`` refers to +the patch release number. Patch releases provide security patches, bug +fixes, and new or changed features that generally do not contain any +backward breaking changes. Always upgrade to the latest patch release in +your release series. + +For more information on versioning, see :ref:`release-version-numbers`. + +About this Task +--------------- + +This page describes upgrade procedures for the MongoDB +{+latest-lts-version+} release series. To upgrade a different release +series, refer to the corresponding version of the manual. + +.. _upgrade-options: + +Before You Begin +---------------- + +Review the following sections to ensure that your deployment is ready to +be upgraded. + +Backup +~~~~~~ + +Ensure you have an up-to-date backup of your data set. See +:ref:`backup-methods`. + +Compatibility Considerations +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Consult the following documents for any special considerations or +compatibility issues specific to your MongoDB release: + +- :ref:`Release notes ` + +- :driver:`Driver documentation ` + +Maintenance Window +~~~~~~~~~~~~~~~~~~ + +If your installation includes :term:`replica sets `, set +the upgrade to occur during a predefined maintenance window. + +Staging Environment Check +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Before you upgrade a production environment, use the procedures in this +document to upgrade a *staging* environment that reproduces your +production environment. Ensure that your production configuration is +compatible with all changes before upgrading. + +.. _upgrade-procedure: + +Steps +----- + +Upgrade each :binary:`~bin.mongod` and :binary:`~bin.mongos` binary +separately. Follow this upgrade procedure: + +#. For deployments that use authentication, first upgrade all of your + MongoDB Drivers. To upgrade, see the + :driver:`documentation for your driver `. + +#. Upgrade any standalone instances. See + :ref:`upgrade-mongodb-instance`. + +#. Upgrade any replica sets that are not part of a sharded cluster, as + described in :ref:`upgrade-replica-set`. + +#. Upgrade sharded clusters, as described in + :ref:`upgrade-sharded-cluster`. + +.. _upgrade-mongodb-instance: + +Upgrade a MongoDB Instance +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +To upgrade a {+latest-lts-version+} :binary:`~bin.mongod` or +:binary:`~bin.mongos` instance, use one of these approaches: + +- Upgrade the instance using the operating system's package management + tool and the official MongoDB packages. This is the preferred + approach. See :doc:`/installation`. + +- Upgrade the instance by replacing the existing binaries with new + binaries. See :ref:`upgrade-replace-binaries`. + +.. _upgrade-replace-binaries: + +Replace the Existing Binaries +````````````````````````````` + +This section describes how to upgrade MongoDB by replacing the existing +binaries. The preferred approach to an upgrade is to use the operating +system's package management tool and the official MongoDB packages, as +described in :doc:`/installation`. + +To upgrade a :binary:`~bin.mongod` or :binary:`~bin.mongos` instance by +replacing the existing binaries: + +1. Download the binaries for the latest MongoDB patch release from the + `MongoDB Download Page`_ and store the binaries in a temporary + location. The binaries download as compressed files that uncompress + to the directory structure used by the MongoDB installation. + +#. 
Shutdown the instance. + +#. Replace the existing MongoDB binaries with the downloaded binaries. + +#. Make any required configuration file changes. + +#. Restart the instance. + +.. _`MongoDB Download Page`: https://www.mongodb.com/try/download/community?tck=docs_server + +.. _upgrade-replica-set: + +Upgrade Replica Sets +~~~~~~~~~~~~~~~~~~~~ + +To upgrade a {+latest-lts-version+} replica set, upgrade each member +individually, starting with the :term:`secondaries ` and +finishing with the :term:`primary`. Plan the upgrade during a predefined +maintenance window. + +.. include:: /includes/upgrade-downgrade-replica-set.rst + +Upgrade Secondaries +``````````````````` + +Upgrade each secondary separately as follows: + +1. Upgrade the secondary's :binary:`~bin.mongod` binary by following the + instructions in :ref:`upgrade-mongodb-instance`. + +#. After upgrading a secondary, wait for the secondary to recover to + the ``SECONDARY`` state before upgrading the next instance. To + check the member's state, issue :method:`rs.status()` in + :binary:`~bin.mongosh`. + + The secondary may briefly go into ``STARTUP2`` or ``RECOVERING``. + This is normal. Make sure to wait for the secondary to fully recover + to ``SECONDARY`` before you continue the upgrade. + +Upgrade the Primary +``````````````````` + +1. Step down the primary to initiate the normal :ref:`failover + ` procedure. Using one of the following: + + - The :method:`rs.stepDown()` helper in :binary:`~bin.mongosh`. + + - The :dbcommand:`replSetStepDown` database command. + + During failover, the set cannot accept writes. Typically this takes + 10-20 seconds. Plan the upgrade during a predefined maintenance + window. + + .. note:: Stepping down the primary is preferable to directly + *shutting down* the primary. Stepping down expedites the + failover procedure. + +#. Once the primary has stepped down, call the :method:`rs.status()` + method from :binary:`~bin.mongosh` until you see that another + member has assumed the ``PRIMARY`` state. + +#. Shut down the original primary and upgrade its instance by + following the instructions in :ref:`upgrade-mongodb-instance`. + +.. _upgrade-sharded-cluster: + +Upgrade Sharded Clusters +~~~~~~~~~~~~~~~~~~~~~~~~ + +To upgrade a {+latest-lts-version+} sharded cluster: + +1. Disable the cluster's balancer as described in + :ref:`sharding-balancing-disable-temporarily`. + +#. Upgrade the :ref:`config servers `. + + To upgrade the config server replica set, use the procedures in + :ref:`upgrade-replica-set`. + +#. Upgrade each shard. + + - If a shard is a replica set, upgrade the shard using the + procedure titled :ref:`upgrade-replica-set`. + + - If a shard is a standalone instance, upgrade the shard using the + procedure titled + :ref:`upgrade-mongodb-instance`. + +#. Once the config servers and the shards have been upgraded, upgrade + each :binary:`~bin.mongos` instance by following the instructions in + :ref:`upgrade-mongodb-instance`. You can upgrade the + :binary:`~bin.mongos` instances in any order. + +#. Re-enable the balancer, as described in :ref:`sharding-balancing-re-enable`. + +Learn More +---------- + +- :ref:`production-notes` + +- :ref:`sharding-manage-shards` + +- :ref:`replica-set-sync`
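
The step-down check described in the Upgrade the Primary steps above can be sketched in
:binary:`~bin.mongosh` as follows. This is a minimal illustration under assumptions: the
one-second polling interval is arbitrary, and the loop assumes you stay connected to the
former primary while it steps down.

.. code-block:: javascript

   // Ask the current primary to step down and start a normal failover.
   rs.stepDown()

   // Wait until another member reports the PRIMARY state before shutting
   // down and upgrading the former primary's binaries.
   while (!rs.status().members.some((m) => m.stateStr === "PRIMARY" && !m.self)) {
      sleep(1000)   // check once per second
   }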