Refactor content to a natural flow, remove duplications, extract partials. See #1597 (pull/1604/head)
30 changed files with 774 additions and 666 deletions
@@ -1,64 +0,0 @@
[[jdbc.java-config]]
= Configuration

The Spring Data JDBC repositories support can be activated by an annotation through Java configuration, as the following example shows:

.Spring Data JDBC repositories using Java configuration
[source,java]
----
@Configuration
@EnableJdbcRepositories                                                                // <1>
class ApplicationConfig extends AbstractJdbcConfiguration {                            // <2>

	@Bean
	DataSource dataSource() {                                                          // <3>

		EmbeddedDatabaseBuilder builder = new EmbeddedDatabaseBuilder();
		return builder.setType(EmbeddedDatabaseType.HSQL).build();
	}

	@Bean
	NamedParameterJdbcOperations namedParameterJdbcOperations(DataSource dataSource) { // <4>
		return new NamedParameterJdbcTemplate(dataSource);
	}

	@Bean
	TransactionManager transactionManager(DataSource dataSource) {                     // <5>
		return new DataSourceTransactionManager(dataSource);
	}
}
----
<1> `@EnableJdbcRepositories` creates implementations for interfaces derived from `Repository`.
<2> `AbstractJdbcConfiguration` provides various default beans required by Spring Data JDBC.
<3> Creates a `DataSource` connecting to a database.
This is required by the following two bean methods.
<4> Creates the `NamedParameterJdbcOperations` used by Spring Data JDBC to access the database.
<5> Spring Data JDBC utilizes the transaction management provided by Spring JDBC.

The configuration class in the preceding example sets up an embedded HSQL database by using the `EmbeddedDatabaseBuilder` API of `spring-jdbc`.
The `DataSource` is then used to set up `NamedParameterJdbcOperations` and a `TransactionManager`.
Finally, we activate Spring Data JDBC repositories by using `@EnableJdbcRepositories`.
If no base package is configured, it uses the package in which the configuration class resides.
Extending `AbstractJdbcConfiguration` ensures that various beans get registered.
Overriding its methods can be used to customize the setup (see below).

This configuration can be further simplified by using Spring Boot.
With Spring Boot, a `DataSource` is sufficient once the starter `spring-boot-starter-data-jdbc` is included in the dependencies.
Everything else is done by Spring Boot.

There are a couple of things one might want to customize in this setup.

[[jdbc.dialects]]
== Dialects

Spring Data JDBC uses implementations of the interface `Dialect` to encapsulate behavior that is specific to a database or its JDBC driver.
By default, `AbstractJdbcConfiguration` tries to determine the database in use and register the correct `Dialect`.
This behavior can be changed by overriding `jdbcDialect(NamedParameterJdbcOperations)` (see the sketch after the list below).

If you use a database for which no dialect is available, your application won't start up. In that case, you have to ask your vendor to provide a `Dialect` implementation. Alternatively, you can:

1. Implement your own `Dialect`.

2. Implement a `JdbcDialectProvider` returning the `Dialect`.

3. Register the provider by creating a `spring.factories` resource under `META-INF` and perform the registration by adding a line +
`org.springframework.data.jdbc.repository.config.DialectResolver$JdbcDialectProvider=<fully qualified name of your JdbcDialectProvider>`

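The following is a minimal sketch of such an override, assuming you want to force a specific dialect (here `PostgresDialect` from `org.springframework.data.relational.core.dialect`) instead of relying on auto-detection. The class name `ForcedDialectConfiguration` is an illustrative placeholder, and the exact `jdbcDialect(...)` signature should be checked against your `AbstractJdbcConfiguration` version:

[source,java]
----
import org.springframework.context.annotation.Configuration;
import org.springframework.data.jdbc.repository.config.AbstractJdbcConfiguration;
import org.springframework.data.relational.core.dialect.Dialect;
import org.springframework.data.relational.core.dialect.PostgresDialect;
import org.springframework.jdbc.core.namedparam.NamedParameterJdbcOperations;

@Configuration
class ForcedDialectConfiguration extends AbstractJdbcConfiguration {

	@Override
	public Dialect jdbcDialect(NamedParameterJdbcOperations operations) {
		// Skip database detection and always use the PostgreSQL dialect.
		return PostgresDialect.INSTANCE;
	}
}
----
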
@@ -1,75 +0,0 @@
[[jdbc.custom-converters]]
= Custom Conversions

Spring Data JDBC allows registration of custom converters to influence how values are mapped in the database.
Currently, converters are applied on the property level only.

[[jdbc.custom-converters.writer]]
== Writing a Property by Using a Registered Spring Converter

The following example shows an implementation of a `Converter` that converts from a `Boolean` object to a `String` value:

[source,java]
----
import org.springframework.core.convert.converter.Converter;
import org.springframework.data.convert.WritingConverter;

@WritingConverter
public class BooleanToStringConverter implements Converter<Boolean, String> {

	@Override
	public String convert(Boolean source) {
		return source != null && source ? "T" : "F";
	}
}
----

There are a couple of things to notice here: `Boolean` and `String` are both simple types, hence Spring Data requires a hint about the direction in which this converter should apply (reading or writing).
By annotating this converter with `@WritingConverter`, you instruct Spring Data to write every `Boolean` property as a `String` to the database.

[[jdbc.custom-converters.reader]]
== Reading by Using a Spring Converter

The following example shows an implementation of a `Converter` that converts from a `String` to a `Boolean` value:

[source,java]
----
import org.springframework.core.convert.converter.Converter;
import org.springframework.data.convert.ReadingConverter;

@ReadingConverter
public class StringToBooleanConverter implements Converter<String, Boolean> {

	@Override
	public Boolean convert(String source) {
		return source != null && source.equalsIgnoreCase("T") ? Boolean.TRUE : Boolean.FALSE;
	}
}
----

There are a couple of things to notice here: `String` and `Boolean` are both simple types, hence Spring Data requires a hint about the direction in which this converter should apply (reading or writing).
By annotating this converter with `@ReadingConverter`, you instruct Spring Data to convert every `String` value from the database that should be assigned to a `Boolean` property.

[[jdbc.custom-converters.configuration]]
== Registering Spring Converters with the `JdbcConverter`

[source,java]
----
class MyJdbcConfiguration extends AbstractJdbcConfiguration {

	// …

	@Override
	protected List<?> userConverters() {
		return Arrays.asList(new BooleanToStringConverter(), new StringToBooleanConverter());
	}
}
----

NOTE: In previous versions of Spring Data JDBC it was recommended to directly override `AbstractJdbcConfiguration.jdbcCustomConversions()`.
This is no longer necessary, or even recommended, since that method assembles conversions intended for all databases, conversions registered by the `Dialect` in use, and conversions registered by the user.
If you are migrating from an older version of Spring Data JDBC and have overridden `AbstractJdbcConfiguration.jdbcCustomConversions()`, conversions from your `Dialect` will not get registered.

[[jdbc.custom-converters.jdbc-value]]
== JdbcValue

Value conversion uses `JdbcValue` to enrich values propagated to JDBC operations with a `java.sql.Types` type.
Register a custom write converter if you need to specify a JDBC-specific type instead of using type derivation.
This converter should convert the value to a `JdbcValue`, which has a field for the value and one for the actual `JDBCType`.

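As a hedged illustration, such a write converter might look as follows. The `Color` enum is a hypothetical example type, and the sketch assumes the `JdbcValue.of(value, sqlType)` factory method together with `java.sql.JDBCType`:

[source,java]
----
import java.sql.JDBCType;

import org.springframework.core.convert.converter.Converter;
import org.springframework.data.convert.WritingConverter;
import org.springframework.data.jdbc.core.mapping.JdbcValue;

enum Color { RED, GREEN, BLUE }

@WritingConverter
class ColorToJdbcValueConverter implements Converter<Color, JdbcValue> {

	@Override
	public JdbcValue convert(Color source) {
		// Store the enum name and explicitly tell JDBC to treat it as VARCHAR.
		return JdbcValue.of(source.name(), JDBCType.VARCHAR);
	}
}
----
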
@@ -1,5 +0,0 @@
[[jdbc.examples-repo]]
= Examples Repository
:page-section-summary-toc: 1

There is a https://github.com/spring-projects/spring-data-examples[GitHub repository with several examples] that you can download and play around with to get a feel for how the library works.

@@ -1,28 +0,0 @@
[[jdbc.loading-aggregates]]
= Loading Aggregates

Spring Data JDBC offers two ways to load aggregates.
The traditional way, and before version 3.2 the only way, is really simple:
Each query loads the aggregate roots, independently of whether the query is based on a `CrudRepository` method, a derived query, or an annotated query.
If the aggregate root references other entities, those are loaded with separate statements.

Spring Data JDBC now also allows the use of _Single Query Loading_.
With this, an arbitrary number of aggregates can be fully loaded with a single SQL query.
This should be significantly more efficient, especially for complex aggregates consisting of many entities.

Currently, this feature is very restricted.

1. It only works for aggregates that reference only one entity collection. The plan is to remove this constraint in the future.

2. The aggregate must also not use `AggregateReference` or embedded entities. The plan is to remove this constraint in the future.

3. The database dialect must support it. Of the dialects provided by Spring Data JDBC, all but H2 and HSQL support this. H2 and HSQL don't support analytic functions (also known as windowing functions).

4. It only works for the find methods in `CrudRepository`, not for derived queries and not for annotated queries. The plan is to remove this constraint in the future.

5. Single Query Loading needs to be enabled in the `JdbcMappingContext` by calling `setSingleQueryLoadingEnabled(true)`, as the sketch after this list shows.

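One way to flip that flag is a `BeanPostProcessor` that customizes the `JdbcMappingContext` created by your JDBC configuration. This is a minimal sketch, not the only option (overriding the mapping-context bean method of your `AbstractJdbcConfiguration` subclass works as well); the configuration class name is an illustrative placeholder:

[source,java]
----
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.jdbc.core.mapping.JdbcMappingContext;

@Configuration
class SingleQueryLoadingConfiguration {

	@Bean
	static BeanPostProcessor singleQueryLoadingEnabler() {
		return new BeanPostProcessor() {

			@Override
			public Object postProcessBeforeInitialization(Object bean, String beanName) {
				if (bean instanceof JdbcMappingContext mappingContext) {
					// Turn on Single Query Loading for the mapping context used by Spring Data JDBC.
					mappingContext.setSingleQueryLoadingEnabled(true);
				}
				return bean;
			}
		};
	}
}
----
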
NOTE: Single Query Loading is to be considered experimental. We appreciate feedback on how it works for you.

NOTE: Single Query Loading could be abbreviated as SQL, but we highly discourage that, since confusion with Structured Query Language is almost guaranteed.

@@ -1,28 +0,0 @@
[[jdbc.locking]]
= JDBC Locking

Spring Data JDBC supports locking on derived query methods.
To enable locking on a given derived query method inside a repository, you annotate it with `@Lock`.
The required value of type `LockMode` offers two values: `PESSIMISTIC_READ`, which guarantees that the data you are reading doesn't get modified, and `PESSIMISTIC_WRITE`, which obtains a lock to modify the data.
Some databases do not make this distinction.
In those cases, both modes are equivalent to `PESSIMISTIC_WRITE`.

.Using @Lock on derived query method
[source,java]
----
interface UserRepository extends CrudRepository<User, Long> {

	@Lock(LockMode.PESSIMISTIC_READ)
	List<User> findByLastname(String lastname);
}
----

As you can see above, the method `findByLastname(String lastname)` is executed with a pessimistic read lock. If you are using a database with the MySQL dialect, this results, for example, in the following query:

.Resulting SQL query for MySQL dialect
[source,sql]
----
Select * from user u where u.lastname = :lastname LOCK IN SHARE MODE
----

As an alternative to `LockMode.PESSIMISTIC_READ`, you can use `LockMode.PESSIMISTIC_WRITE`.

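For illustration, the write-lock variant might look like the following sketch; the `findByFirstname` method is just a hypothetical example:

[source,java]
----
interface UserRepository extends CrudRepository<User, Long> {

	@Lock(LockMode.PESSIMISTIC_WRITE)
	List<User> findByFirstname(String firstname);
}
----
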
@@ -1,8 +0,0 @@
[[jdbc.logging]]
= Logging
:page-section-summary-toc: 1

Spring Data JDBC does little to no logging on its own.
Instead, the mechanics of `JdbcTemplate` used to issue SQL statements provide logging.
Thus, if you want to inspect what SQL statements are run, activate logging for Spring's {spring-framework-docs}/data-access.html#jdbc-JdbcTemplate[`NamedParameterJdbcTemplate`] or https://www.mybatis.org/mybatis-3/logging.html[MyBatis].

@@ -1,67 +1,33 @@
[[query-by-example.running]]
= Query by Example

include::{commons}@data-commons::page$query-by-example.adoc[leveloffset=+1]

In Spring Data JDBC and R2DBC, you can use Query by Example with repositories, as shown in the following example:

.Query by Example using a Repository
[source,java]
----
public interface PersonRepository
	extends CrudRepository<Person, String>,
			QueryByExampleExecutor<Person> { … }

public class PersonService {

	@Autowired PersonRepository personRepository;

	public List<Person> findPeople(Person probe) {
		return personRepository.findAll(Example.of(probe));
	}
}
----

NOTE: Currently, only `SingularAttribute` properties can be used for property matching.

The property specifier accepts property names (such as `firstname` and `lastname`). You can navigate by chaining properties together with dots (`address.city`). You can also tune it with matching options and case sensitivity (see the sketch after the table below).

The following table shows the various `StringMatcher` options that you can use and the result of using them on a field named `firstname`:

[cols="1,2", options="header"]
.`StringMatcher` options
|===
| Matching
| Logical result

| `DEFAULT` (case-sensitive)
| `firstname = ?0`

| `DEFAULT` (case-insensitive)
| `LOWER(firstname) = LOWER(?0)`

| `EXACT` (case-sensitive)
| `firstname = ?0`

| `EXACT` (case-insensitive)
| `LOWER(firstname) = LOWER(?0)`

| `STARTING` (case-sensitive)
| `firstname like ?0 + '%'`

| `STARTING` (case-insensitive)
| `LOWER(firstname) like LOWER(?0) + '%'`

| `ENDING` (case-sensitive)
| `firstname like '%' + ?0`

| `ENDING` (case-insensitive)
| `LOWER(firstname) like '%' + LOWER(?0)`

| `CONTAINING` (case-sensitive)
| `firstname like '%' + ?0 + '%'`

| `CONTAINING` (case-insensitive)
| `LOWER(firstname) like '%' + LOWER(?0) + '%'`
|===

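As a small sketch of those tuning options, reusing the `PersonRepository` above; the `probe`, its setter, and the field names are illustrative only:

[source,java]
----
Person probe = new Person();
probe.setFirstname("Fro");

ExampleMatcher matcher = ExampleMatcher.matching()
		.withMatcher("firstname", ExampleMatcher.GenericPropertyMatchers.startsWith()) // firstname like 'Fro%'
		.withIgnoreCase("lastname");                                                    // case-insensitive lastname matching

Iterable<Person> result = personRepository.findAll(Example.of(probe, matcher));
----
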
@@ -1,15 +0,0 @@
[[r2dbc.core]]
= R2DBC Core Support
:page-section-summary-toc: 1

R2DBC contains a wide range of features:

* Spring configuration support with Java-based `@Configuration` classes for an R2DBC driver instance.
* `R2dbcEntityTemplate` as the central class for entity-bound operations that increases productivity when performing common R2DBC operations, with integrated object mapping between rows and POJOs.
* Feature-rich object mapping integrated with Spring's Conversion Service.
* Annotation-based mapping metadata that is extensible to support other metadata formats.
* Automatic implementation of Repository interfaces, including support for custom query methods.

For most tasks, you should use `R2dbcEntityTemplate` or the repository support, which both use the rich mapping functionality.
`R2dbcEntityTemplate` is the place to look for accessing functionality such as ad-hoc CRUD operations.

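To give a feel for the fluent API, the following minimal sketch selects entities by criteria; `Person`, its `name` column, and the `PersonFinder` class are hypothetical, and the template is assumed to be built from an existing `ConnectionFactory`:

[source,java]
----
import org.springframework.data.r2dbc.core.R2dbcEntityTemplate;
import org.springframework.data.relational.core.query.Criteria;
import org.springframework.data.relational.core.query.Query;

import io.r2dbc.spi.ConnectionFactory;
import reactor.core.publisher.Flux;

class PersonFinder {

	private final R2dbcEntityTemplate template;

	PersonFinder(ConnectionFactory connectionFactory) {
		this.template = new R2dbcEntityTemplate(connectionFactory);
	}

	Flux<Person> findByName(String name) {
		// Entity-bound, fluent query: SELECT … FROM person WHERE name = :name
		return template.select(Person.class)
				.matching(Query.query(Criteria.where("name").is(name)))
				.all();
	}
}
----
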
@@ -1,38 +0,0 @@
[[r2dbc.repositories.queries.query-by-example]]
= Query By Example

Spring Data R2DBC also lets you use xref:query-by-example.adoc[Query By Example] to fashion queries.
This technique allows you to use a "probe" object.
Essentially, any field that isn't empty or `null` is used to match.

Here's an example:

[source,java,indent=0]
----
include::example$r2dbc/QueryByExampleTests.java[tag=example]
----

<1> Create a domain object with the criteria (`null` fields will be ignored).
<2> Using the domain object, create an `Example`.
<3> Through the `R2dbcRepository`, execute the query (use `findOne` for a `Mono`).

This illustrates how to craft a simple probe using a domain object.
In this case, it will query based on the `Employee` object's `name` field being equal to `Frodo`.
`null` fields are ignored.

[source,java,indent=0]
----
include::example$r2dbc/QueryByExampleTests.java[tag=example-2]
----

<1> Create a custom `ExampleMatcher` that matches on ALL fields (use `matchingAny()` to match on *ANY* fields).
<2> For the `name` field, use a wildcard that matches against the end of the field.
<3> Match columns against `null` (don't forget that `NULL` doesn't equal `NULL` in relational databases).
<4> Ignore the `role` field when forming the query.
<5> Plug the custom `ExampleMatcher` into the probe.

It's also possible to apply a `withTransform()` against any property, allowing you to transform a property before forming the query.
For example, you can apply a `toUpperCase()` to a `String`-based property before the query is created.

Query By Example really shines when you don't know all the fields needed in a query in advance.
If you were building a filter on a web page where the user can pick the fields, Query By Example is a great way to flexibly capture that into an efficient query.

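To make the shape of such a reactive query concrete, here is a small sketch under a few assumptions: an `Employee` class with a `name` setter and a `role` field, and an `EmployeeRepository` that extends `ReactiveQueryByExampleExecutor<Employee>`. The matcher methods come from Spring Data Commons' `ExampleMatcher`:

[source,java]
----
import org.springframework.data.domain.Example;
import org.springframework.data.domain.ExampleMatcher;
import org.springframework.data.domain.ExampleMatcher.StringMatcher;

import reactor.core.publisher.Flux;

class EmployeeQueries {

	private final EmployeeRepository repository;

	EmployeeQueries(EmployeeRepository repository) {
		this.repository = repository;
	}

	Flux<Employee> findFrodos() {

		Employee probe = new Employee();
		probe.setName("Frodo");                            // only non-null fields take part in matching

		ExampleMatcher matcher = ExampleMatcher.matching()
				.withStringMatcher(StringMatcher.STARTING) // "Frodo" also matches "Frodo Baggins"
				.withIgnorePaths("role");                  // leave the role field out of the query

		return repository.findAll(Example.of(probe, matcher));
	}
}
----
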
@@ -1 +1,3 @@
include::{commons}@data-commons::page$repositories/core-concepts.adoc[]

include::{commons}@data-commons::page$is-new-state-detection.adoc[leveloffset=+1]

@@ -0,0 +1,16 @@
[[entity-persistence.id-generation]]
== ID Generation

Spring Data uses the identifier property to identify entities.
The ID of an entity must be annotated with Spring Data's https://docs.spring.io/spring-data/commons/docs/current/api/org/springframework/data/annotation/Id.html[`@Id`] annotation.

When your database has an auto-increment column for the ID column, the generated value gets set in the entity after inserting it into the database.

Spring Data does not attempt to insert values of identifier columns when the entity is new and the identifier value defaults to its initial value.
That is `0` for primitive types and `null` if the identifier property uses a numeric wrapper type such as `Long`.

xref:repositories/core-concepts.adoc#is-new-state-detection[Entity State Detection] explains in detail the strategies for detecting whether an entity is new or whether it is expected to exist in your database.

One important constraint is that, after saving an entity, the entity must not be new anymore.
Note that whether an entity is new is part of the entity's state.
With auto-increment columns, this happens automatically, because the ID gets set by Spring Data with the value from the ID column.

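A minimal sketch of such an entity follows; the `Minion` class is purely illustrative and assumes an auto-increment primary key column:

[source,java]
----
import org.springframework.data.annotation.Id;

class Minion {

	@Id Long id; // null while the entity is new; populated from the auto-increment column after the insert

	String name;

	Minion(String name) {
		this.name = name;
	}
}
----
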
@@ -0,0 +1,24 @@
The `RelationalConverter` can use metadata to drive the mapping of objects to rows.
The following annotations are available (a combined sketch follows the list):

* `@Id`: Applied at the field level to mark the primary key.
* `@Table`: Applied at the class level to indicate this class is a candidate for mapping to the database.
You can specify the name of the table in the database where the entity is stored.
* `@Transient`: By default, all fields are mapped to the row.
This annotation excludes the field where it is applied from being stored in the database.
Transient properties cannot be used within a persistence constructor, as the converter cannot materialize a value for the constructor argument.
* `@PersistenceCreator`: Marks a given constructor or static factory method -- even a package protected one -- to use when instantiating the object from the database.
Constructor arguments are mapped by name to the values in the retrieved row.
* `@Value`: This annotation is part of the Spring Framework.
Within the mapping framework it can be applied to constructor arguments.
This lets you use a Spring Expression Language statement to transform a key's value retrieved from the database before it is used to construct a domain object.
In order to reference a column of a given row, you have to use expressions like `@Value("#root.myProperty")`, where `root` refers to the root of the given `Row`.
* `@Column`: Applied at the field level to describe the name of the column as it is represented in the row, letting the name be different from the field name of the class.
Names specified with a `@Column` annotation are always quoted when used in SQL statements.
For most databases, this means that these names are case-sensitive.
It also means that you can use special characters in these names.
However, this is not recommended, since it may cause problems with other tools.
* `@Version`: Applied at the field level and used for optimistic locking; it is checked for modification on save operations.
A value of `null` (`zero` for primitive types) is considered a marker for entities being new.
The initially stored value is `zero` (`one` for primitive types).
The version gets incremented automatically on every update.

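To show several of these annotations together, here is a hedged sketch; the `Manuscript` class, its table, and its column names are invented for illustration:

[source,java]
----
import org.springframework.data.annotation.Id;
import org.springframework.data.annotation.Transient;
import org.springframework.data.annotation.Version;
import org.springframework.data.relational.core.mapping.Column;
import org.springframework.data.relational.core.mapping.Table;

@Table("MANUSCRIPT")                    // maps the class to the MANUSCRIPT table
class Manuscript {

	@Id Long id;                        // primary key

	@Column("TITLE_TEXT") String title; // column name differs from the field name

	@Version Long version;              // optimistic locking attribute

	@Transient String draftNotes;       // not persisted
}
----
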
@@ -0,0 +1,190 @@
[[entity-persistence.naming-strategy]]
== Naming Strategy

By convention, Spring Data applies a `NamingStrategy` to determine table, column, and schema names, defaulting to https://en.wikipedia.org/wiki/Snake_case[snake case].
An object property named `firstName` becomes `first_name`.
You can tweak that by providing a {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/NamingStrategy.html[`NamingStrategy`] in your application context, as the following sketch shows.

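This is a minimal sketch of such a bean, assuming you want to derive table names from the simple class name with a fixed prefix; the prefix and the configuration class name are illustrative only:

[source,java]
----
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.relational.core.mapping.NamingStrategy;

@Configuration
class NamingStrategyConfiguration {

	@Bean
	NamingStrategy namingStrategy() {
		return new NamingStrategy() {

			@Override
			public String getTableName(Class<?> type) {
				// Person -> APP_PERSON; column and schema names keep their defaults.
				return "APP_" + type.getSimpleName().toUpperCase();
			}
		};
	}
}
----
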
[[entity-persistence.custom-table-name]]
== Override table names

When the table naming strategy does not match your database table names, you can override the table name with the {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/Table.html[`@Table`] annotation.
The element `value` of this annotation provides the custom table name.
The following example maps the `MyEntity` class to the `CUSTOM_TABLE_NAME` table in the database:

[source,java]
----
@Table("CUSTOM_TABLE_NAME")
class MyEntity {

	@Id
	Integer id;

	String name;
}
----

[[entity-persistence.custom-column-name]]
== Override column names

When the column naming strategy does not match your database column names, you can override the column name with the {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/Column.html[`@Column`] annotation.
The element `value` of this annotation provides the custom column name.
The following example maps the `name` property of the `MyEntity` class to the `CUSTOM_COLUMN_NAME` column in the database:

[source,java]
----
class MyEntity {

	@Id
	Integer id;

	@Column("CUSTOM_COLUMN_NAME")
	String name;
}
----

ifdef::mapped-collection[]

The {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/MappedCollection.html[`@MappedCollection`]
annotation can be used on a reference type (one-to-one relationship) or on Sets, Lists, and Maps (one-to-many relationship).
The `idColumn` element of the annotation provides a custom name for the foreign key column referencing the id column in the other table.
In the following example, the table corresponding to the `MySubEntity` class has a `NAME` column and a `CUSTOM_MY_ENTITY_ID_COLUMN_NAME` column that references the `id` of `MyEntity`:

[source,java]
----
class MyEntity {

	@Id
	Integer id;

	@MappedCollection(idColumn = "CUSTOM_MY_ENTITY_ID_COLUMN_NAME")
	Set<MySubEntity> subEntities;
}

class MySubEntity {
	String name;
}
----

When using `List` and `Map`, you must have an additional column for the position of a dataset in the `List` or the key value of the entity in the `Map`.
This additional column name may be customized with the `keyColumn` element of the {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/MappedCollection.html[`@MappedCollection`] annotation:

[source,java]
----
class MyEntity {

	@Id
	Integer id;

	@MappedCollection(idColumn = "CUSTOM_COLUMN_NAME", keyColumn = "CUSTOM_KEY_COLUMN_NAME")
	List<MySubEntity> name;
}

class MySubEntity {
	String name;
}
----
endif::[]

ifdef::embedded-entities[]

[[entity-persistence.embedded-entities]]
== Embedded entities

Embedded entities are used to have value objects in your Java data model, even if there is only one table in your database.
In the following example, `MyEntity` is mapped with the `@Embedded` annotation.
The consequence of this is that, in the database, a table `my_entity` with the two columns `id` and `name` (from the `EmbeddedEntity` class) is expected.

However, if the `name` column is actually `null` within the result set, the entire property `embeddedEntity` will be set to `null` according to the `onEmpty` of `@Embedded`, which ``null``s objects when all nested properties are `null`. +
Opposite to this behavior, `USE_EMPTY` tries to create a new instance using either a default constructor or one that accepts nullable parameter values from the result set.

.Sample Code of embedding objects
====
[source,java]
----
class MyEntity {

	@Id
	Integer id;

	@Embedded(onEmpty = USE_NULL) <1>
	EmbeddedEntity embeddedEntity;
}

class EmbeddedEntity {
	String name;
}
----

<1> ``Null``s `embeddedEntity` if `name` is `null`.
Use `USE_EMPTY` to instantiate `embeddedEntity` with a potential `null` value for the `name` property.
====

If you need a value object multiple times in an entity, this can be achieved with the optional `prefix` element of the `@Embedded` annotation.
This element represents a prefix that is prepended to each column name of the embedded object, as the following sketch shows.

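A hedged sketch of the `prefix` element, assuming a static import of `USE_NULL` as in the sample above; the `Order` and `Address` classes and the `shipping_`/`billing_` prefixes are invented for illustration:

[source,java]
----
class Order {

	@Id
	Integer id;

	@Embedded(onEmpty = USE_NULL, prefix = "shipping_") // columns shipping_street and shipping_city
	Address shippingAddress;

	@Embedded(onEmpty = USE_NULL, prefix = "billing_")  // columns billing_street and billing_city
	Address billingAddress;
}

class Address {
	String street;
	String city;
}
----
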
[TIP]
====
Make use of the shortcuts `@Embedded.Nullable` and `@Embedded.Empty` for `@Embedded(onEmpty = USE_NULL)` and `@Embedded(onEmpty = USE_EMPTY)` to reduce verbosity and simultaneously set JSR-305 `@javax.annotation.Nonnull` accordingly.

[source,java]
----
class MyEntity {

	@Id
	Integer id;

	@Embedded.Nullable <1>
	EmbeddedEntity embeddedEntity;
}
----

<1> Shortcut for `@Embedded(onEmpty = USE_NULL)`.
====

Embedded entities containing a `Collection` or a `Map` will always be considered non-empty, since they will at least contain the empty collection or map.
Such an entity will therefore never be `null`, even when using `@Embedded(onEmpty = USE_NULL)`.
endif::[]

[[entity-persistence.read-only-properties]]
== Read Only Properties

Attributes annotated with `@ReadOnlyProperty` will not be written to the database by Spring Data, but they will be read when an entity gets loaded.

Spring Data will not automatically reload an entity after writing it.
Therefore, you have to reload it explicitly if you want to see data that was generated in the database for such columns.

If the annotated attribute is an entity or a collection of entities, it is represented by one or more separate rows in separate tables.
Spring Data will not perform any insert, delete, or update for these rows.

[[entity-persistence.insert-only-properties]]
== Insert Only Properties

Attributes annotated with `@InsertOnlyProperty` will only be written to the database by Spring Data during insert operations.
For updates, these properties will be ignored.

`@InsertOnlyProperty` is only supported for the aggregate root.
A short sketch combining both annotations follows.

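This is a hedged sketch; the `PurchaseOrder` class and its fields are invented, and it assumes `createdAt` is set by the application on insert while `lastModified` is maintained by the database (for example, by a trigger):

[source,java]
----
import java.time.Instant;

import org.springframework.data.annotation.Id;
import org.springframework.data.annotation.ReadOnlyProperty;
import org.springframework.data.relational.core.mapping.InsertOnlyProperty;

class PurchaseOrder {

	@Id Long id;

	@InsertOnlyProperty Instant createdAt;  // written on INSERT, ignored on UPDATE

	@ReadOnlyProperty Instant lastModified; // never written by Spring Data, read when the entity is loaded
}
----
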
[[mapping.custom.object.construction]]
== Customized Object Construction

The mapping subsystem allows the customization of the object construction by annotating a constructor with the `@PersistenceConstructor` annotation.
The values to be used for the constructor parameters are resolved in the following way:

* If a parameter is annotated with the `@Value` annotation, the given expression is evaluated, and the result is used as the parameter value.
* If the Java type has a property whose name matches the given field of the input row, then its property information is used to select the appropriate constructor parameter to which to pass the input field value.
This works only if the parameter name information is present in the Java `.class` files, which you can achieve by compiling the source with debug information or by using the `-parameters` command-line switch for `javac` (available since Java 8).
* Otherwise, a `MappingException` is thrown to indicate that the given constructor parameter could not be bound.

[source,java]
----
class OrderItem {

	private @Id final String id;
	private final int quantity;
	private final double unitPrice;

	OrderItem(String id, int quantity, double unitPrice) {
		this.id = id;
		this.quantity = quantity;
		this.unitPrice = unitPrice;
	}

	// getters/setters omitted
}
----

@@ -0,0 +1,12 @@
Spring Data supports optimistic locking by means of a numeric attribute that is annotated with
https://docs.spring.io/spring-data/commons/docs/current/api/org/springframework/data/annotation/Version.html[`@Version`] on the aggregate root.
Whenever Spring Data saves an aggregate with such a version attribute, two things happen:

* The update statement for the aggregate root will contain a where clause checking that the version stored in the database is actually unchanged.
* If this isn't the case, an `OptimisticLockingFailureException` will be thrown.

Also, the version attribute gets increased both in the entity and in the database, so a concurrent action will notice the change and throw an `OptimisticLockingFailureException`, if applicable, as described above.

This process also applies to inserting new aggregates, where a `null` or `0` version indicates a new instance and the increased version afterwards marks the instance as no longer new.
This works rather nicely with cases where the id is generated during object construction, for example when UUIDs are used.

During deletes, the version check also applies, but no version is increased.

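To make the failure mode concrete, here is a hedged sketch; `Account`, its `owner` field, and `accountRepository` (a `CrudRepository` over an aggregate with a `@Version` attribute) are invented for illustration:

[source,java]
----
Account first = accountRepository.findById(id).orElseThrow();
Account second = accountRepository.findById(id).orElseThrow(); // same row, same version

first.owner = "Sam";
accountRepository.save(first);   // UPDATE … WHERE id = ? AND version = ?; version gets incremented

second.owner = "Rosie";
accountRepository.save(second);  // stale version in the where clause: OptimisticLockingFailureException
----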