[[mapping]]
= Mapping

Rich mapping support is provided by the `MappingR2dbcConverter`. `MappingR2dbcConverter` has a rich metadata model that allows mapping domain objects to a data row. The mapping metadata model is populated by using annotations on your domain objects. However, the infrastructure is not limited to using annotations as the only source of metadata information. The `MappingR2dbcConverter` also lets you map objects to rows without providing any additional metadata, by following a set of conventions.

This section describes the features of the `MappingR2dbcConverter`, including how to use conventions for mapping objects to rows and how to override those conventions with annotation-based mapping metadata.

include::../{spring-data-commons-docs}/object-mapping.adoc[leveloffset=+1]

[[mapping.conventions]]
== Convention-based Mapping

`MappingR2dbcConverter` has a few conventions for mapping objects to rows when no additional mapping metadata is provided. The conventions are:

* The short Java class name is mapped to the table name in the following manner: the `com.bigbank.SavingsAccount` class maps to the `SAVINGS_ACCOUNT` table name. The same name mapping is applied for mapping fields to column names. For example, the `firstName` field maps to the `FIRST_NAME` column. You can control this mapping by providing a custom `NamingStrategy`. See <<mapping.configuration>> for more detail. Table and column names that are derived from property or class names are used in SQL statements without quotes by default. You can control this behavior by setting `R2dbcMappingContext.setForceQuote(true)`.
* Nested objects are not supported.
* The converter uses any Spring Converters registered with it to override the default mapping of object properties to row columns and values.
* The fields of an object are used to convert to and from columns in the row. Public `JavaBean` properties are not used.
* If you have a single non-zero-argument constructor whose constructor argument names match top-level column names of the row, that constructor is used. Otherwise, the zero-argument constructor is used. If there is more than one non-zero-argument constructor, an exception is thrown.

[[mapping.configuration]]
== Mapping Configuration

By default (unless explicitly configured), an instance of `MappingR2dbcConverter` is created when you create a `DatabaseClient`. You can create your own instance of the `MappingR2dbcConverter`. By creating your own instance, you can register Spring converters to map specific classes to and from the database. You can configure the `MappingR2dbcConverter` as well as `DatabaseClient` and `ConnectionFactory` by using Java-based metadata.

If you set `setForceQuote` of the `R2dbcMappingContext` to `true`, table and column names derived from classes and properties are used with database-specific quotes. This means that it is OK to use reserved SQL words (such as `order`) in these names. You can do so by overriding `r2dbcMappingContext(Optional)` of `AbstractR2dbcConfiguration`. Spring Data converts the letter casing of such a name to the form that is also used by the configured database when no quoting is used. Therefore, you can use unquoted names when creating tables, as long as you do not use keywords or special characters in your names. For databases that adhere to the SQL standard, this means that names are converted to upper case. The quoting character and the way names get capitalized are controlled by the used `Dialect`. See <<r2dbc.drivers>> for how to configure custom dialects.
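The following sketch shows one way to enable forced quoting by overriding the mapping context bean. The two-argument signature of `r2dbcMappingContext` shown here is an assumption and may differ between Spring Data R2DBC versions, so align it with the method declared in your version of `AbstractR2dbcConfiguration`:

====
[source,java]
----
@Configuration
public class QuotingConfiguration extends AbstractR2dbcConfiguration {

  public ConnectionFactory connectionFactory() {
    return ConnectionFactories.get("r2dbc:…");
  }

  // Override the mapping context bean so that all derived table and column
  // names are quoted in SQL statements. The parameter list is an assumption;
  // check the signature in your version of AbstractR2dbcConfiguration.
  @Override
  public R2dbcMappingContext r2dbcMappingContext(Optional<NamingStrategy> namingStrategy,
      R2dbcCustomConversions r2dbcCustomConversions) {

    R2dbcMappingContext context = super.r2dbcMappingContext(namingStrategy, r2dbcCustomConversions);
    context.setForceQuote(true);
    return context;
  }
}
----
====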
The following example uses Spring's Java-based configuration:

.@Configuration class to configure R2DBC mapping support
====
[source,java]
----
@Configuration
public class MyAppConfig extends AbstractR2dbcConfiguration {

  public ConnectionFactory connectionFactory() {
    return ConnectionFactories.get("r2dbc:…");
  }

  // the following are optional

  @Override
  protected List<Object> getCustomConverters() {
    return List.of(new PersonReadConverter(), new PersonWriteConverter());
  }
}
----
====

`AbstractR2dbcConfiguration` requires you to implement a method that defines a `ConnectionFactory`. You can add additional converters to the converter by overriding the `r2dbcCustomConversions` method.

You can configure a custom `NamingStrategy` by registering it as a bean. The `NamingStrategy` controls how the names of classes and properties get converted to the names of tables and columns.
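As a rough sketch, the following configuration registers a `NamingStrategy` that prefixes every derived table name. The `app_` prefix and the configuration class are illustrative only; `getTableName` is a default method of `org.springframework.data.relational.core.mapping.NamingStrategy`:

====
[source,java]
----
@Configuration
public class NamingConfiguration {

  // Registering the strategy as a bean lets the mapping context pick it up.
  @Bean
  public NamingStrategy namingStrategy() {
    return new NamingStrategy() {

      // Prefix derived table names; column name derivation keeps its default.
      @Override
      public String getTableName(Class<?> type) {
        return "app_" + NamingStrategy.super.getTableName(type);
      }
    };
  }
}
----
====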
NOTE: `AbstractR2dbcConfiguration` creates a `DatabaseClient` instance and registers it with the container under the name of `databaseClient`.

[[mapping.usage]]
== Metadata-based Mapping

To take full advantage of the object mapping functionality inside the Spring Data R2DBC support, you should annotate your mapped objects with the `@Table` annotation. Although it is not necessary for the mapping framework to have this annotation (your POJOs are mapped correctly, even without any annotations), it lets the classpath scanner find and pre-process your domain objects to extract the necessary metadata. If you do not use this annotation, your application takes a slight performance hit the first time you store a domain object, because the mapping framework needs to build up its internal metadata model so that it knows about the properties of your domain object and how to persist them. The following example shows a domain object:

.Example domain object
====
[source,java]
----
package com.mycompany.domain;

@Table
public class Person {

  @Id
  private Long id;

  private Integer ssn;

  private String firstName;

  private String lastName;
}
----
====

IMPORTANT: The `@Id` annotation tells the mapper which property you want to use as the primary key.

[[mapping.types]]
=== Default Type Mapping

The following table explains how property types of an entity affect mapping:

|===
|Source Type |Target Type |Remarks

|Primitive types and wrapper types
|Passthru
|Can be customized using <<mapping.explicit.converters,Explicit Converters>>.

|JSR-310 Date/Time types
|Passthru
|Can be customized using <<mapping.explicit.converters,Explicit Converters>>.

|`String`, `BigInteger`, `BigDecimal`, and `UUID`
|Passthru
|Can be customized using <<mapping.explicit.converters,Explicit Converters>>.

|`Enum`
|String
|Can be customized by registering an <<mapping.explicit.enum.converters,Explicit Converter>>.

|`Blob` and `Clob`
|Passthru
|Can be customized using <<mapping.explicit.converters,Explicit Converters>>.

|`byte[]`, `ByteBuffer`
|Passthru
|Considered a binary payload.

|`Collection<T>`
|Array of `T`
|Conversion to Array type if supported by the configured <<r2dbc.drivers,driver>>, not supported otherwise.

|Arrays of primitive types, wrapper types, and `String`
|Array of wrapper type (e.g. `int[]` -> `Integer[]`)
|Conversion to Array type if supported by the configured <<r2dbc.drivers,driver>>, not supported otherwise.

|Driver-specific types
|Passthru
|Contributed as a simple type by the used `R2dbcDialect`.

|Complex objects
|Target type depends on registered `Converter`.
|Requires an <<mapping.explicit.converters,Explicit Converter>>, not supported otherwise.
|===

NOTE: The native data type for a column depends on the R2DBC driver type mapping. Drivers can contribute additional simple types such as Geometry types.

[[mapping.usage.annotations]]
=== Mapping Annotation Overview

The `MappingR2dbcConverter` can use metadata to drive the mapping of objects to rows. The following annotations are available:

* `@Id`: Applied at the field level to mark the primary key.
* `@Table`: Applied at the class level to indicate that this class is a candidate for mapping to the database. You can specify the name of the table to which the object is mapped.
* `@Transient`: By default, all fields are mapped to the row. This annotation excludes the field where it is applied from being stored in the database. Transient properties cannot be used within a persistence constructor, as the converter cannot materialize a value for the constructor argument.
* `@PersistenceConstructor`: Marks a given constructor -- even a package protected one -- to use when instantiating the object from the database. Constructor arguments are mapped by name to the values in the retrieved row.
* `@Value`: This annotation is part of the Spring Framework. Within the mapping framework, it can be applied to constructor arguments. This lets you use a Spring Expression Language statement to transform a key's value retrieved in the database before it is used to construct a domain object. To reference a column of a given row, use expressions such as `@Value("#root.myProperty")`, where `root` refers to the root of the given `Row`.
* `@Column`: Applied at the field level to describe the name of the column as it is represented in the row, letting the name be different from the field name of the class. Names specified with a `@Column` annotation are always quoted when used in SQL statements. For most databases, this means that these names are case-sensitive. It also means that you can use special characters in these names. However, this is not recommended, since it may cause problems with other tools.
* `@Version`: Applied at the field level, this annotation is used for optimistic locking and checked for modification on save operations. A value of `null` (`zero` for primitive types) is considered a marker for entities to be new. The initially stored value is `zero` (`one` for primitive types). The version gets incremented automatically on every update. See <<r2dbc.optimistic-locking>> for further reference.
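The following sketch pulls several of these annotations together on one entity. The `Subscription` class and the `renewal_date` column name are illustrative only:

====
[source,java]
----
@Table("subscription")
class Subscription {

  // primary key
  @Id
  private Long id;

  // column name differs from the field name; it is quoted in SQL statements
  @Column("renewal_date")
  private LocalDate renewalDate;

  // used for optimistic locking; incremented automatically on every update
  @Version
  private Long version;

  // excluded from persistence
  @Transient
  private boolean dirty;
}
----
====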
The mapping metadata infrastructure is defined in the separate `spring-data-commons` project, which is technology-agnostic. Specific subclasses are used in the R2DBC support to support annotation-based metadata. Other strategies can also be put in place (if there is demand).

[[mapping.custom.object.construction]]
=== Customized Object Construction

The mapping subsystem allows the customization of the object construction by annotating a constructor with the `@PersistenceConstructor` annotation. The values to be used for the constructor parameters are resolved in the following way:

* If a parameter is annotated with the `@Value` annotation, the given expression is evaluated, and the result is used as the parameter value.
* If the Java type has a property whose name matches the given field of the input row, then its property information is used to select the appropriate constructor parameter to which to pass the input field value. This works only if the parameter name information is present in the Java `.class` files, which you can achieve by compiling the source with debug information or using the `-parameters` command-line switch for `javac` in Java 8.
* Otherwise, a `MappingException` is thrown to indicate that the given constructor parameter could not be bound.

The following example shows an entity whose single constructor is used to instantiate it:

====
[source,java]
----
class OrderItem {

  private @Id final String id;
  private final int quantity;
  private final double unitPrice;

  OrderItem(String id, int quantity, double unitPrice) {
    this.id = id;
    this.quantity = quantity;
    this.unitPrice = unitPrice;
  }

  // getters/setters omitted
}
----
====

[[mapping.explicit.converters]]
=== Overriding Mapping with Explicit Converters

When storing and querying your objects, it is often convenient to have an `R2dbcConverter` instance handle the mapping of all Java types to `OutboundRow` instances. However, you may sometimes want the `R2dbcConverter` instances to do most of the work but let you selectively handle the conversion for a particular type -- perhaps to optimize performance.

To selectively handle the conversion yourself, register one or more `org.springframework.core.convert.converter.Converter` instances with the `R2dbcConverter`. You can use the `r2dbcCustomConversions` method in `AbstractR2dbcConfiguration` to configure converters. The examples in <<mapping.configuration>> show how to perform the configuration with Java.

NOTE: Custom top-level entity conversion requires asymmetric types for conversion. Inbound data is extracted from R2DBC's `Row`. Outbound data (to be used with `INSERT`/`UPDATE` statements) is represented as `OutboundRow` and later assembled to a statement.

The following example of a Spring Converter implementation converts from a `Row` to a `Person` POJO:

====
[source,java]
----
@ReadingConverter
public class PersonReadConverter implements Converter<Row, Person> {

  public Person convert(Row source) {
    Person p = new Person(source.get("id", String.class), source.get("name", String.class));
    p.setAge(source.get("age", Integer.class));
    return p;
  }
}
----
====

Note that converters are applied to singular properties. Collection properties (e.g. `Collection<Person>`) are iterated and converted element-wise. Collection converters (e.g. `Converter<Collection<Person>, OutboundRow>`) are not supported.

NOTE: R2DBC uses boxed primitives (`Integer.class` instead of `int.class`) to return primitive values.

The following example converts from a `Person` to an `OutboundRow`:

====
[source,java]
----
@WritingConverter
public class PersonWriteConverter implements Converter<Person, OutboundRow> {

  public OutboundRow convert(Person source) {
    OutboundRow row = new OutboundRow();
    row.put("id", Parameter.from(source.getId()));
    row.put("name", Parameter.from(source.getFirstName()));
    row.put("age", Parameter.from(source.getAge()));
    return row;
  }
}
----
====

[[mapping.explicit.enum.converters]]
==== Overriding Enum Mapping with Explicit Converters

Some databases, such as https://github.com/pgjdbc/r2dbc-postgresql#postgres-enum-types[Postgres], can natively write enum values using their database-specific enumerated column type. Spring Data converts `Enum` values by default to `String` values for maximum portability. To retain the actual enum value, register a converter annotated with `@WritingConverter` whose source and target types use the actual enum type, to avoid `Enum.name()` conversion. Additionally, you need to configure the enum type on the driver level so that the driver is aware of how to represent the enum type.

The following example shows the involved components to read and write `Color` enum values natively:

====
[source,java]
----
enum Color {
  Grey, Blue
}

class ColorConverter extends EnumWriteSupport<Color> {

}

class Product {

  @Id long id;
  Color color;

  // …
}
----
====
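For Postgres, the driver-level registration could look like the following sketch. It uses the `EnumCodec` API described in the r2dbc-postgresql README linked above; the `color` type name and the connection details are illustrative only:

====
[source,java]
----
PostgresqlConnectionConfiguration configuration = PostgresqlConnectionConfiguration.builder()
    .host("localhost")
    .database("mydb")
    .username("user")
    .password("secret")
    // teach the driver about the Postgres enum type named "color"
    .codecRegistrar(EnumCodec.builder().withEnum("color", Color.class).build())
    .build();

ConnectionFactory connectionFactory = new PostgresqlConnectionFactory(configuration);
----
====

Together with registering `ColorConverter` (for example, through `getCustomConverters()`), this lets Spring Data write `Color` values through the driver's native enum representation instead of `String`.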