From 38e5e1dfb46b5fcedb3ece9e41feea864d6518bc Mon Sep 17 00:00:00 2001 From: Christoph Strobl Date: Thu, 31 Aug 2023 18:22:09 +0200 Subject: [PATCH] Fix indentation for all pages --- .../reference/aggregation-framework.adoc | 40 +++++++++---------- .../ROOT/pages/reference/change-streams.adoc | 8 ++-- .../pages/reference/document-references.adoc | 4 +- .../modules/ROOT/pages/reference/gridfs.adoc | 2 +- .../ROOT/pages/reference/mongo-auditing.adoc | 2 +- .../reference/mongo-custom-conversions.adoc | 8 ++-- .../pages/reference/mongo-json-schema.adoc | 10 ++--- .../reference/mongo-property-converters.adoc | 12 +++--- .../mongo-repositories-aggregation.adoc | 2 +- .../ROOT/pages/reference/observability.adoc | 2 +- .../pages/reference/query-by-example.adoc | 4 +- .../pages/reference/tailable-cursors.adoc | 6 +-- .../ROOT/pages/reference/time-series.adoc | 2 +- .../pages/reference/unwrapping-entities.adoc | 22 +++++----- 14 files changed, 62 insertions(+), 62 deletions(-) diff --git a/src/main/antora/modules/ROOT/pages/reference/aggregation-framework.adoc b/src/main/antora/modules/ROOT/pages/reference/aggregation-framework.adoc index 728c54e22..f58df0db7 100644 --- a/src/main/antora/modules/ROOT/pages/reference/aggregation-framework.adoc +++ b/src/main/antora/modules/ROOT/pages/reference/aggregation-framework.adoc @@ -1,12 +1,12 @@ [[mongo.aggregation]] -== Aggregation Framework Support += Aggregation Framework Support Spring Data MongoDB provides support for the Aggregation Framework introduced to MongoDB in version 2.2. For further information, see the full https://docs.mongodb.org/manual/aggregation/[reference documentation] of the aggregation framework and other data aggregation tools for MongoDB. [[mongo.aggregation.basic-concepts]] -=== Basic Concepts +== Basic Concepts The Aggregation Framework support in Spring Data MongoDB is based on the following key abstractions: `Aggregation`, `AggregationDefinition`, and `AggregationResults`. 
@@ -52,12 +52,12 @@ List mappedResult = results.getMappedResults(); Note that, if you provide an input class as the first parameter to the `newAggregation` method, the `MongoTemplate` derives the name of the input collection from this class. Otherwise, if you do not specify an input class, you must provide the name of the input collection explicitly. If both an input class and an input collection are provided, the latter takes precedence. [[mongo.aggregation.supported-aggregation-operations]] -=== Supported Aggregation Operations & Stages +== Supported Aggregation Operations & Stages The MongoDB Aggregation Framework provides the following types of aggregation stages and operations: [[aggregation-stages]] -==== Aggregation Stages +=== Aggregation Stages * addFields - `AddFieldsOperation` * bucket / bucketAuto - `BucketOperation` / `BucketAutoOperation` @@ -104,7 +104,7 @@ Aggregation.stage(""" ==== [[aggregation-operators]] -==== Aggregation Operators +=== Aggregation Operators * Group/Accumulator Aggregation Operators * Boolean Aggregation Operators @@ -173,7 +173,7 @@ At the time of this writing, we provide support for the following Aggregation Op Note that the aggregation operations not listed here are currently not supported by Spring Data MongoDB. Comparison aggregation operators are expressed as `Criteria` expressions. [[mongo.aggregation.projection]] -=== Projection Expressions +== Projection Expressions Projection expressions are used to define the fields that are the outcome of a particular aggregation step. Projection expressions can be defined through the `project` method of the `Aggregation` class, either by passing a list of `String` objects or an aggregation framework `Fields` object. The projection can be extended with additional fields through a fluent API by using the `and(String)` method and aliased by using the `as(String)` method.
Note that you can also define fields with aliases by using the `Fields.field` static factory method of the aggregation framework, which you can then use to construct a new `Fields` instance. References to projected fields in later aggregation stages are valid only for the field names of included fields or their aliases (including newly defined fields and their aliases). Fields not included in the projection cannot be referenced in later aggregation stages. The following listings show examples of projection expressions: @@ -211,12 +211,12 @@ project().and("firstname").as("name"), sort(ASC, "firstname") More examples for project operations can be found in the `AggregationTests` class. Note that further details regarding the projection expressions can be found in the https://docs.mongodb.org/manual/reference/operator/aggregation/project/#pipe._S_project[corresponding section] of the MongoDB Aggregation Framework reference documentation. [[mongo.aggregation.facet]] -=== Faceted Classification +== Faceted Classification As of version 3.4, MongoDB supports faceted classification by using the Aggregation Framework. A faceted classification uses semantic categories (either general or subject-specific) that are combined to create the full classification entry. Documents flowing through the aggregation pipeline are classified into buckets. A multi-faceted classification enables various aggregations on the same set of input documents, without needing to retrieve the input documents multiple times. [[buckets]] -==== Buckets +=== Buckets Bucket operations categorize incoming documents into groups, called buckets, based on a specified expression and bucket boundaries. Bucket operations require a grouping field or a grouping expression. You can define them by using the `bucket()` and `bucketAuto()` methods of the `Aggregation` class. `BucketOperation` and `BucketAutoOperation` can expose accumulations based on aggregation expressions for input documents.
You can extend the bucket operation with additional parameters through a fluent API by using the `with…()` methods and the `andOutput(String)` method. You can alias the operation by using the `as(String)` method. Each bucket is represented as a document in the output. @@ -263,7 +263,7 @@ Note that further details regarding bucket expressions can be found in the https https://docs.mongodb.org/manual/reference/operator/aggregation/bucketAuto/[`$bucketAuto` section] of the MongoDB Aggregation Framework reference documentation. [[multi-faceted-aggregation]] -==== Multi-faceted Aggregation +=== Multi-faceted Aggregation Multiple aggregation pipelines can be used to create multi-faceted aggregations that characterize data across multiple dimensions (or facets) within a single aggregation stage. Multi-faceted aggregations provide multiple filters and categorizations to guide data browsing and analysis. A common implementation of faceting is how many online retailers provide ways to narrow down search results by applying filters on product price, manufacturer, size, and other factors. @@ -294,7 +294,7 @@ facet(project("title").and("publicationDate").extractYear().as("publicationYear" Note that further details regarding facet operation can be found in the https://docs.mongodb.org/manual/reference/operator/aggregation/facet/[`$facet` section] of the MongoDB Aggregation Framework reference documentation. [[mongo.aggregation.sort-by-count]] -==== Sort By Count +=== Sort By Count Sort by count operations group incoming documents based on the value of a specified expression, compute the count of documents in each distinct group, and sort the results by count. It offers a handy shortcut to apply sorting when using <>. Sort by count operations require a grouping field or grouping expression. 
The following listing shows a sort by count example: @@ -315,12 +315,12 @@ A sort by count operation is equivalent to the following BSON (Binary JSON): ---- [[mongo.aggregation.projection.expressions]] -==== Spring Expression Support in Projection Expressions +=== Spring Expression Support in Projection Expressions We support the use of SpEL expressions in projection expressions through the `andExpression` method of the `ProjectionOperation` and `BucketOperation` classes. This feature lets you define the desired expression as a SpEL expression. On running a query, the SpEL expression is translated into a corresponding MongoDB projection expression part. This arrangement makes it much easier to express complex calculations. [[complex-calculations-with-spel-expressions]] -===== Complex Calculations with SpEL expressions +==== Complex Calculations with SpEL expressions Consider the following SpEL expression: @@ -389,12 +389,12 @@ In addition to the transformations shown in the preceding table, you can use sta ---- [[mongo.aggregation.examples]] -==== Aggregation Framework Examples +=== Aggregation Framework Examples The examples in this section demonstrate the usage patterns for the MongoDB Aggregation Framework with Spring Data MongoDB. [[mongo.aggregation.examples.example1]] -===== Aggregation Framework Example 1 +==== Aggregation Framework Example 1 In this introductory example, we want to aggregate a list of tags to get the occurrence count of a particular tag from a MongoDB collection (called `tags`) sorted by the occurrence count in descending order. This example demonstrates the usage of grouping, sorting, projections (selection), and unwinding (result splitting). @@ -435,7 +435,7 @@ The preceding listing uses the following algorithm: Note that the input collection is explicitly specified as the `tags` parameter to the `aggregate` method.
If the name of the input collection is not specified explicitly, it is derived from the input class passed as the first parameter to the `newAggregation` method. [[mongo.aggregation.examples.example2]] -===== Aggregation Framework Example 2 +==== Aggregation Framework Example 2 This example is based on the https://docs.mongodb.org/manual/tutorial/aggregation-examples/#largest-and-smallest-cities-by-state[Largest and Smallest Cities by State] example from the MongoDB Aggregation Framework documentation. We added additional sorting to produce stable results with different MongoDB versions. Here we want to return the smallest and largest cities by population for each state by using the aggregation framework. This example demonstrates grouping, sorting, and projections (selection). @@ -501,7 +501,7 @@ The preceding listings use the following algorithm: Note that we derive the name of the input collection from the `ZipInfo` class passed as the first parameter to the `newAggregation` method. [[mongo.aggregation.examples.example3]] -===== Aggregation Framework Example 3 +==== Aggregation Framework Example 3 This example is based on the https://docs.mongodb.org/manual/tutorial/aggregation-examples/#states-with-populations-over-10-million[States with Populations Over 10 Million] example from the MongoDB Aggregation Framework documentation. We added additional sorting to produce stable results with different MongoDB versions. Here we want to return all states with a population greater than 10 million, using the aggregation framework. This example demonstrates grouping, sorting, and matching (filtering). @@ -537,7 +537,7 @@ The preceding listings use the following algorithm: Note that we derive the name of the input collection from the `ZipInfo` class passed as the first parameter to the `newAggregation` method.
[[mongo.aggregation.examples.example4]] -===== Aggregation Framework Example 4 +==== Aggregation Framework Example 4 This example demonstrates the use of simple arithmetic operations in the projection operation. @@ -571,7 +571,7 @@ List resultList = result.getMappedResults(); Note that we derive the name of the input collection from the `Product` class passed as first parameter to the `newAggregation` method. [[mongo.aggregation.examples.example5]] -===== Aggregation Framework Example 5 +==== Aggregation Framework Example 5 This example demonstrates the use of simple arithmetic operations derived from SpEL Expressions in the projection operation. @@ -605,7 +605,7 @@ List resultList = result.getMappedResults(); ---- [[mongo.aggregation.examples.example6]] -===== Aggregation Framework Example 6 +==== Aggregation Framework Example 6 This example demonstrates the use of complex arithmetic operations derived from SpEL Expressions in the projection operation. @@ -639,7 +639,7 @@ List resultList = result.getMappedResults(); Note that we can also refer to other fields of the document within the SpEL expression. [[mongo.aggregation.examples.example7]] -===== Aggregation Framework Example 7 +==== Aggregation Framework Example 7 This example uses conditional projection. It is derived from the https://docs.mongodb.com/manual/reference/operator/aggregation/cond/[$cond reference documentation]. diff --git a/src/main/antora/modules/ROOT/pages/reference/change-streams.adoc b/src/main/antora/modules/ROOT/pages/reference/change-streams.adoc index 09650785c..ed2e169c9 100644 --- a/src/main/antora/modules/ROOT/pages/reference/change-streams.adoc +++ b/src/main/antora/modules/ROOT/pages/reference/change-streams.adoc @@ -1,5 +1,5 @@ [[change-streams]] -== Change Streams += Change Streams As of MongoDB 3.6, https://docs.mongodb.com/manual/changeStreams/[Change Streams] let applications get notified about changes without having to tail the oplog. 
@@ -13,7 +13,7 @@ changes from all collections within the database. When subscribing to a database If in doubt, use `Document`. [[change-streams-with-messagelistener]] -=== Change Streams with `MessageListener` +== Change Streams with `MessageListener` Listening to a https://docs.mongodb.com/manual/tutorial/change-streams-example/[Change Stream by using a Sync Driver] creates a long-running, blocking task that needs to be delegated to a separate component. In this case, we need to first create a `MessageListenerContainer`, which will be the main entry point for running the specific `SubscriptionRequest` tasks. @@ -51,7 +51,7 @@ Please use `register(request, body, errorHandler)` to provide additional functio ==== [[reactive-change-streams]] -=== Reactive Change Streams +== Reactive Change Streams Subscribing to Change Streams with the reactive API is a more natural approach to work with streams. Still, the essential building blocks, such as `ChangeStreamOptions`, remain the same. The following example shows how to use Change Streams emitting ``ChangeStreamEvent``s: @@ -70,7 +70,7 @@ Flux> flux = reactiveTemplate.changeStream(User.class) < ==== [[resuming-change-streams]] -=== Resuming Change Streams +== Resuming Change Streams Change Streams can be resumed and resume emitting events where you left off. To resume the stream, you need to supply either a resume token or the last known server time (in UTC). Use `ChangeStreamOptions` to set the value accordingly. diff --git a/src/main/antora/modules/ROOT/pages/reference/document-references.adoc index a3ddd1cba..0056ce7e8 100644 --- a/src/main/antora/modules/ROOT/pages/reference/document-references.adoc +++ b/src/main/antora/modules/ROOT/pages/reference/document-references.adoc @@ -1,5 +1,5 @@ [[mapping-usage-references]] -=== Using DBRefs += Using DBRefs The mapping framework does not have to store child objects embedded within the document.
You can also store them separately and use a `DBRef` to refer to that document. @@ -53,7 +53,7 @@ CAUTION: Lazy loading may require class proxies, that in turn, might need access For those cases, please consider falling back to an interface type (e.g. switch from `ArrayList` to `List`) or provide the required `--add-opens` argument. [[mapping-usage.document-references]] -=== Using Document References += Using Document References Using `@DocumentReference` offers a flexible way of referencing entities in MongoDB. While the goal is the same as when using <>, the store representation is different. diff --git a/src/main/antora/modules/ROOT/pages/reference/gridfs.adoc index 0362b9b49..6193a072a 100644 --- a/src/main/antora/modules/ROOT/pages/reference/gridfs.adoc +++ b/src/main/antora/modules/ROOT/pages/reference/gridfs.adoc @@ -1,5 +1,5 @@ [[gridfs]] -== GridFS Support += GridFS Support MongoDB supports storing binary files inside its filesystem, GridFS. Spring Data MongoDB provides a `GridFsOperations` interface as well as the corresponding implementation, `GridFsTemplate`, to let you interact with the filesystem.
You can set up a `GridFsTemplate` instance by handing it a `MongoDatabaseFactory` as well as a `MongoConverter`, as the following example shows: diff --git a/src/main/antora/modules/ROOT/pages/reference/mongo-auditing.adoc index dfcf6b522..4feaad6f7 100644 --- a/src/main/antora/modules/ROOT/pages/reference/mongo-auditing.adoc +++ b/src/main/antora/modules/ROOT/pages/reference/mongo-auditing.adoc @@ -1,5 +1,5 @@ [[mongo.auditing]] -== General Auditing Configuration for MongoDB += General Auditing Configuration for MongoDB Since Spring Data MongoDB 1.4, auditing can be enabled by annotating a configuration class with the `@EnableMongoAuditing` annotation, as the following example shows: diff --git a/src/main/antora/modules/ROOT/pages/reference/mongo-custom-conversions.adoc index 8c05e4225..1ba7815a4 100644 --- a/src/main/antora/modules/ROOT/pages/reference/mongo-custom-conversions.adoc +++ b/src/main/antora/modules/ROOT/pages/reference/mongo-custom-conversions.adoc @@ -1,5 +1,5 @@ [[mongo.custom-converters]] -== Custom Conversions - Overriding Default Mapping += Custom Conversions - Overriding Default Mapping The most trivial way of influencing the mapping result is by specifying the desired native MongoDB target type via the `@Field` annotation. This allows you to work with non-MongoDB types like `BigDecimal` in the domain model while persisting @@ -43,7 +43,7 @@ The `MappingMongoConverter` checks to see if any Spring converters can handle a NOTE: For more information on the Spring type conversion service, see the reference docs link:{springDocsUrl}/core.html#validation[here].
[[mongo.custom-converters.writer]] -=== Saving by Using a Registered Spring Converter +== Saving by Using a Registered Spring Converter The following example shows an implementation of the `Converter` that converts from a `Person` object to an `org.bson.Document`: @@ -66,7 +66,7 @@ public class PersonWriteConverter implements Converter { ---- [[mongo.custom-converters.reader]] -=== Reading by Using a Spring Converter +== Reading by Using a Spring Converter The following example shows an implementation of a `Converter` that converts from a `Document` to a `Person` object: @@ -83,7 +83,7 @@ public class PersonReadConverter implements Converter { ---- [[mongo.custom-converters.xml]] -=== Registering Spring Converters with the `MongoConverter` +== Registering Spring Converters with the `MongoConverter` [source,java] ---- diff --git a/src/main/antora/modules/ROOT/pages/reference/mongo-json-schema.adoc index 928b1fc9b..9b487aa9b 100644 --- a/src/main/antora/modules/ROOT/pages/reference/mongo-json-schema.adoc +++ b/src/main/antora/modules/ROOT/pages/reference/mongo-json-schema.adoc @@ -1,5 +1,5 @@ [[mongo.jsonSchema]] -=== JSON Schema += JSON Schema As of version 3.6, MongoDB supports collections that validate documents against a provided https://docs.mongodb.com/manual/core/schema-validation/#json-schema[JSON Schema]. The schema itself and both validation action and level can be defined when creating the collection, as the following example shows: @@ -87,7 +87,7 @@ template.createCollection(Person.class, CollectionOptions.empty().schema(schema) ==== [[mongo.jsonSchema.generated]] -==== Generating a Schema +== Generating a Schema Setting up a schema can be a time-consuming task, and we encourage everyone who decides to do so to really take the time it needs; schema changes can be hard.
@@ -288,7 +288,7 @@ class B extends Root { ==== [[mongo.jsonSchema.query]] -==== Query a collection for matching JSON Schema +== Query a collection for matching JSON Schema You can use a schema to query any collection for documents that match a given structure defined by a JSON schema, as the following example shows: @@ -303,7 +303,7 @@ template.find(query(matchingDocumentStructure(schema)), Person.class); ==== [[mongo.jsonSchema.encrypted-fields]] -==== Encrypted Fields +== Encrypted Fields MongoDB 4.2 https://docs.mongodb.com/master/core/security-client-side-encryption/[Field Level Encryption] allows you to directly encrypt individual properties. @@ -399,7 +399,7 @@ public class EncryptionExtension implements EvaluationContextExtension { ==== [[mongo.jsonSchema.types]] -==== JSON Schema Types +== JSON Schema Types The following table shows the supported JSON schema types: diff --git a/src/main/antora/modules/ROOT/pages/reference/mongo-property-converters.adoc index 0cd9fe16f..72d4d82d4 100644 --- a/src/main/antora/modules/ROOT/pages/reference/mongo-property-converters.adoc +++ b/src/main/antora/modules/ROOT/pages/reference/mongo-property-converters.adoc @@ -1,5 +1,5 @@ [[mongo.property-converters]] -== Property Converters - Mapping specific fields += Property Converters - Mapping specific fields While <> already offers ways to influence the conversion and representation of certain types within the target store, it has limitations when only certain values or properties of a particular type should be considered for conversion. Property-based converters allow configuring conversion rules on a per-property basis, either declaratively (via `@ValueConverter`) or programmatically (by registering a `PropertyValueConverter` for a specific property).
@@ -35,7 +35,7 @@ You can use `PropertyValueConverterFactory.beanFactoryAware(…)` to obtain a `P You can change the default behavior through `ConverterConfiguration`. [[mongo.property-converters.declarative]] -=== Declarative Value Converter +== Declarative Value Converter The most straightforward usage of a `PropertyValueConverter` is by annotating properties with the `@ValueConverter` annotation that defines the converter type: @@ -52,7 +52,7 @@ class Person { ==== [[mongo.property-converters.programmatic]] -=== Programmatic Value Converter Registration +== Programmatic Value Converter Registration Programmatic registration registers `PropertyValueConverter` instances for properties within an entity model by using a `PropertyValueConverterRegistrar`, as the following example shows. The difference between declarative registration and programmatic registration is that programmatic registration happens entirely outside of the entity model. @@ -79,18 +79,18 @@ registrar.registerConverter(Person.class, Person::getSsn()) WARNING: Dot notation (such as `registerConverter(Person.class, "address.street", …)`) for navigating across properties into subdocuments is *not* supported when registering converters. [[mongo.property-converters.value-conversions]] -=== MongoDB property value conversions +== MongoDB property value conversions The preceding sections outlined the purpose and overall structure of `PropertyValueConverters`. This section focuses on MongoDB-specific aspects. [[mongovalueconverter-and-mongoconversioncontext]] -==== MongoValueConverter and MongoConversionContext +=== MongoValueConverter and MongoConversionContext `MongoValueConverter` offers a pre-typed `PropertyValueConverter` interface that uses `MongoConversionContext`.
[[mongocustomconversions-configuration]] -==== MongoCustomConversions configuration +=== MongoCustomConversions configuration By default, `MongoCustomConversions` can handle declarative value converters, depending on the configured `PropertyValueConverterFactory`. `MongoConverterConfigurationAdapter` helps to set up programmatic value conversions or define the `PropertyValueConverterFactory` to be used. diff --git a/src/main/antora/modules/ROOT/pages/reference/mongo-repositories-aggregation.adoc b/src/main/antora/modules/ROOT/pages/reference/mongo-repositories-aggregation.adoc index d83f3e871..6f4852923 100644 --- a/src/main/antora/modules/ROOT/pages/reference/mongo-repositories-aggregation.adoc +++ b/src/main/antora/modules/ROOT/pages/reference/mongo-repositories-aggregation.adoc @@ -1,5 +1,5 @@ [[mongodb.repositories.queries.aggregation]] -=== Aggregation Repository Methods += Aggregation Repository Methods The repository layer offers means to interact with <> via annotated repository query methods. Similar to the <>, you can define a pipeline using the `org.springframework.data.mongodb.repository.Aggregation` annotation. diff --git a/src/main/antora/modules/ROOT/pages/reference/observability.adoc b/src/main/antora/modules/ROOT/pages/reference/observability.adoc index 5d4679556..80a9ad8b5 100644 --- a/src/main/antora/modules/ROOT/pages/reference/observability.adoc +++ b/src/main/antora/modules/ROOT/pages/reference/observability.adoc @@ -1,7 +1,7 @@ :root-target: ../../../../target/ [[mongodb.observability]] -== Observability += Observability Spring Data MongoDB currently has the most up-to-date code to support Observability in your MongoDB application. These changes, however, haven't been picked up by Spring Boot (yet). 
diff --git a/src/main/antora/modules/ROOT/pages/reference/query-by-example.adoc b/src/main/antora/modules/ROOT/pages/reference/query-by-example.adoc index 342ade291..34b816961 100644 --- a/src/main/antora/modules/ROOT/pages/reference/query-by-example.adoc +++ b/src/main/antora/modules/ROOT/pages/reference/query-by-example.adoc @@ -1,5 +1,5 @@ [[query-by-example.running]] -== Running an Example += Running an Example The following example shows how to query by example when using a repository (of `Person` objects, in this case): @@ -73,7 +73,7 @@ Spring Data MongoDB provides support for the following matching options: |=== [[query-by-example.untyped]] -== Untyped Example += Untyped Example By default `Example` is strictly typed. This means that the mapped query has an included type match, restricting it to probe assignable types. For example, when sticking with the default type key (`_class`), the query has restrictions such as (`_class : { $in : [ com.acme.Person] }`). diff --git a/src/main/antora/modules/ROOT/pages/reference/tailable-cursors.adoc b/src/main/antora/modules/ROOT/pages/reference/tailable-cursors.adoc index 09a8950f9..f884e5281 100644 --- a/src/main/antora/modules/ROOT/pages/reference/tailable-cursors.adoc +++ b/src/main/antora/modules/ROOT/pages/reference/tailable-cursors.adoc @@ -1,6 +1,6 @@ // carry over the old bookmarks to prevent external links from failing [[tailable-cursors]] -== [[mongo.reactive.repositories.infinite-streams]] Infinite Streams with Tailable Cursors += [[mongo.reactive.repositories.infinite-streams]] Infinite Streams with Tailable Cursors By default, MongoDB automatically closes a cursor when the client exhausts all results supplied by the cursor. Closing a cursor on exhaustion turns a stream into a finite stream. For https://docs.mongodb.com/manual/core/capped-collections/[capped collections], @@ -14,7 +14,7 @@ reactive variant, as it is less resource-intensive. 
However, if you cannot use the reactive API, you can still use the messaging concept that is already prevalent in the Spring ecosystem. [[tailable-cursors.sync]] -=== Tailable Cursors with `MessageListener` +== Tailable Cursors with `MessageListener` Listening to a capped collection using a Sync Driver creates a long-running, blocking task that needs to be delegated to a separate component. In this case, we need to first create a `MessageListenerContainer`, which will be the main entry point @@ -54,7 +54,7 @@ container.stop(); ==== [[tailable-cursors.reactive]] -=== Reactive Tailable Cursors +== Reactive Tailable Cursors Using tailable cursors with reactive data types allows construction of infinite streams. A tailable cursor remains open until it is closed externally. It emits data as new documents arrive in a capped collection. diff --git a/src/main/antora/modules/ROOT/pages/reference/time-series.adoc index 54601a8ed..7f30b3640 100644 --- a/src/main/antora/modules/ROOT/pages/reference/time-series.adoc +++ b/src/main/antora/modules/ROOT/pages/reference/time-series.adoc @@ -1,5 +1,5 @@ [[time-series]] -== Time Series += Time Series MongoDB 5.0 introduced https://docs.mongodb.com/manual/core/timeseries-collections/[Time Series] collections that are optimized to efficiently store documents over time such as measurements or events. Those collections need to be created as such before inserting any data.
diff --git a/src/main/antora/modules/ROOT/pages/reference/unwrapping-entities.adoc index 532fc0282..8c79deff1 100644 --- a/src/main/antora/modules/ROOT/pages/reference/unwrapping-entities.adoc +++ b/src/main/antora/modules/ROOT/pages/reference/unwrapping-entities.adoc @@ -1,10 +1,10 @@ [[unwrapped-entities]] -== Unwrapping Types += Unwrapping Types Unwrapped entities are used to design value objects in your Java domain model whose properties are flattened out into the parent's MongoDB Document. [[unwrapped-entities.mapping]] -=== Unwrapped Types Mapping +== Unwrapped Types Mapping Consider the following domain model where `User.name` is annotated with `@Unwrapped`. The `@Unwrapped` annotation signals that all properties of `UserName` should be flattened out into the `user` document that owns the `name` property. @@ -53,7 +53,7 @@ However, those must not be, nor contain unwrapped fields themselves. ==== [[unwrapped-entities.mapping.field-names]] -=== Unwrapped Types field names +== Unwrapped Types field names A value object can be unwrapped multiple times by using the optional `prefix` attribute of the `@Unwrapped` annotation. By doing so the chosen prefix is prepended to each property or `@Field("…")` name in the unwrapped object. @@ -136,7 +136,7 @@ public class UserName { ==== [[unwrapped-entities.queries]] -=== Query on Unwrapped Objects +== Query on Unwrapped Objects Defining queries on unwrapped properties is possible on type- as well as field-level as the provided `Criteria` is matched against the domain type. Prefixes and potential custom field names will be considered when rendering the actual query. @@ -179,7 +179,7 @@ db.collection.find({ ==== [[unwrapped-entities.queries.sort]] -==== Sort by unwrapped field. +=== Sort by unwrapped field. Fields of unwrapped objects can be used for sorting via their property path as shown in the sample below.
@@ -205,7 +205,7 @@ Though possible, using the unwrapped object itself as sort criteria includes all ==== [[unwrapped-entities.queries.project]] -==== Field projection on unwrapped objects +=== Field projection on unwrapped objects Fields of unwrapped objects can be subject for projection either as a whole or via single fields as shown in the samples below. @@ -253,13 +253,13 @@ db.collection.find({ ==== [[unwrapped-entities.queries.by-example]] -==== Query By Example on unwrapped object. +=== Query By Example on unwrapped object. Unwrapped objects can be used within an `Example` probe just as any other type. Please review the <> section, to learn more about this feature. [[unwrapped-entities.queries.repository]] -==== Repository Queries on unwrapped objects. +=== Repository Queries on unwrapped objects. The `Repository` abstraction allows deriving queries on fields of unwrapped objects as well as the entire object. @@ -284,7 +284,7 @@ Index creation for unwrapped objects is suspended even if the repository `create ==== [[unwrapped-entities.update]] -=== Update on Unwrapped Objects +== Update on Unwrapped Objects Unwrapped objects can be updated as any other object that is part of the domain model. The mapping layer takes care of flattening structures into their surroundings. @@ -338,14 +338,14 @@ db.collection.update({ ==== [[unwrapped-entities.aggregations]] -=== Aggregations on Unwrapped Objects +== Aggregations on Unwrapped Objects The <> will attempt to map unwrapped values of typed aggregations. Please make sure to work with the property path including the wrapper object when referencing one of its values. Other than that no special action is required. [[unwrapped-entities.indexes]] -=== Index on Unwrapped Objects +== Index on Unwrapped Objects It is possible to attach the `@Indexed` annotation to properties of an unwrapped type just as it is done with regular objects. 
It is not possible to use `@Indexed` along with the `@Unwrapped` annotation on the owning property.