78 changed files with 1934 additions and 2092 deletions
@ -1,54 +0,0 @@ |
|||||||
= Spring Data R2DBC - Reference Documentation |
|
||||||
Mark Paluch, Jay Bryant, Stephen Cohen |
|
||||||
:revnumber: {version} |
|
||||||
:revdate: {localdate} |
|
||||||
ifdef::backend-epub3[:front-cover-image: image:epub-cover.png[Front Cover,1050,1600]] |
|
||||||
:spring-data-commons-docs: ../../../../../spring-data-commons/src/main/asciidoc |
|
||||||
:spring-data-r2dbc-javadoc: https://docs.spring.io/spring-data/r2dbc/docs/{version}/api |
|
||||||
:spring-framework-ref: https://docs.spring.io/spring/docs/{springVersion}/reference/html |
|
||||||
:reactiveStreamsJavadoc: https://www.reactive-streams.org/reactive-streams-{reactiveStreamsVersion}-javadoc |
|
||||||
:example-root: ../../../src/test/java/org/springframework/data/r2dbc/documentation |
|
||||||
:tabsize: 2 |
|
||||||
:include-xml-namespaces: false |
|
||||||
|
|
||||||
(C) 2018-2022 The original authors. |
|
||||||
|
|
||||||
NOTE: Copies of this document may be made for your own use and for distribution to others, provided that you do not charge any fee for such copies and further provided that each copy contains this Copyright Notice, whether distributed in print or electronically. |
|
||||||
|
|
||||||
toc::[] |
|
||||||
|
|
||||||
// The blank line before each include prevents content from running together in a bad way |
|
||||||
// (because an included bit does not have its own blank lines). |
|
||||||
|
|
||||||
include::preface.adoc[] |
|
||||||
|
|
||||||
include::{spring-data-commons-docs}/upgrade.adoc[leveloffset=+1] |
|
||||||
|
|
||||||
include::{spring-data-commons-docs}/dependencies.adoc[leveloffset=+1] |
|
||||||
|
|
||||||
include::{spring-data-commons-docs}/repositories.adoc[leveloffset=+1] |
|
||||||
|
|
||||||
[[reference]] |
|
||||||
= Reference Documentation |
|
||||||
|
|
||||||
include::reference/introduction.adoc[leveloffset=+1] |
|
||||||
|
|
||||||
include::reference/r2dbc.adoc[leveloffset=+1] |
|
||||||
|
|
||||||
include::reference/r2dbc-repositories.adoc[leveloffset=+1] |
|
||||||
|
|
||||||
include::{spring-data-commons-docs}/auditing.adoc[leveloffset=+1] |
|
||||||
|
|
||||||
include::reference/r2dbc-auditing.adoc[leveloffset=+1] |
|
||||||
|
|
||||||
include::reference/mapping.adoc[leveloffset=+1] |
|
||||||
|
|
||||||
include::reference/kotlin.adoc[leveloffset=+1] |
|
||||||
|
|
||||||
[[appendix]] |
|
||||||
= Appendix |
|
||||||
|
|
||||||
:numbered!: |
|
||||||
include::{spring-data-commons-docs}/repository-query-keywords-reference.adoc[leveloffset=+1] |
|
||||||
include::{spring-data-commons-docs}/repository-query-return-types-reference.adoc[leveloffset=+1] |
|
||||||
include::reference/r2dbc-upgrading.adoc[leveloffset=+1] |
|
||||||
@ -1,10 +0,0 @@
[[introduction]]
= Introduction

== Document Structure

This part of the reference documentation explains the core functionality offered by Spring Data R2DBC.

"`<<r2dbc.core>>`" introduces the R2DBC module feature set.

"`<<r2dbc.repositories>>`" introduces the repository support for R2DBC.
@ -1,442 +0,0 @@ |
|||||||
[[r2dbc.repositories]] |
|
||||||
= R2DBC Repositories |
|
||||||
|
|
||||||
[[r2dbc.repositories.intro]] |
|
||||||
This chapter points out the specialties for repository support for R2DBC. |
|
||||||
This chapter builds on the core repository support explained in <<repositories>>. |
|
||||||
Before reading this chapter, you should have a sound understanding of the basic concepts explained there. |
|
||||||
|
|
||||||
[[r2dbc.repositories.usage]] |
|
||||||
== Usage |
|
||||||
|
|
||||||
To access domain entities stored in a relational database, you can use our sophisticated repository support that eases implementation quite significantly. |
|
||||||
To do so, create an interface for your repository. |
|
||||||
Consider the following `Person` class: |
|
||||||
|
|
||||||
.Sample Person entity |
|
||||||
==== |
|
||||||
[source,java] |
|
||||||
---- |
|
||||||
public class Person { |
|
||||||
|
|
||||||
@Id |
|
||||||
private Long id; |
|
||||||
private String firstname; |
|
||||||
private String lastname; |
|
||||||
|
|
||||||
// … getters and setters omitted |
|
||||||
} |
|
||||||
---- |
|
||||||
==== |
|
||||||
|
|
||||||
The following example shows a repository interface for the preceding `Person` class: |
|
||||||
|
|
||||||
.Basic repository interface to persist Person entities |
|
||||||
==== |
|
||||||
[source,java] |
|
||||||
---- |
|
||||||
public interface PersonRepository extends ReactiveCrudRepository<Person, Long> { |
|
||||||
|
|
||||||
// additional custom query methods go here |
|
||||||
} |
|
||||||
---- |
|
||||||
==== |
|
||||||
|
|
||||||
To configure R2DBC repositories, you can use the `@EnableR2dbcRepositories` annotation. |
|
||||||
If no base package is configured, the infrastructure scans the package of the annotated configuration class. |
|
||||||
The following example shows how to use Java configuration for a repository: |
|
||||||
|
|
||||||
.Java configuration for repositories |
|
||||||
==== |
|
||||||
[source,java] |
|
||||||
---- |
|
||||||
@Configuration |
|
||||||
@EnableR2dbcRepositories |
|
||||||
class ApplicationConfig extends AbstractR2dbcConfiguration { |
|
||||||
|
|
||||||
@Override |
|
||||||
public ConnectionFactory connectionFactory() { |
|
||||||
return … |
|
||||||
} |
|
||||||
} |
|
||||||
---- |
|
||||||
==== |
|
||||||
|
|
||||||
Because our domain repository extends `ReactiveCrudRepository`, it provides you with reactive CRUD operations to access the entities. |
|
||||||
On top of `ReactiveCrudRepository`, there is also `ReactiveSortingRepository`, which adds sorting functionality similar to that of `PagingAndSortingRepository`.
|
||||||
Working with the repository instance is merely a matter of dependency injecting it into a client. |
|
||||||
Consequently, you can retrieve all `Person` objects with the following code: |
|
||||||
|
|
||||||
.Paging access to Person entities |
|
||||||
==== |
|
||||||
[source,java,indent=0] |
|
||||||
---- |
|
||||||
include::../{example-root}/PersonRepositoryTests.java[tags=class] |
|
||||||
---- |
|
||||||
==== |
|
||||||
|
|
||||||
The preceding example creates an application context with Spring's unit test support, which performs annotation-based dependency injection into test cases. |
|
||||||
Inside the test method, we use the repository to query the database. |
|
||||||
We use `StepVerifier` as a test aid to verify our expectations against the results. |
|
||||||
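
The included test is not reproduced here; a minimal sketch of such a test, assuming the `ApplicationConfig` class from above and a single `Person` row in the test data set, could look like this:

====
[source,java]
----
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit.jupiter.SpringExtension;

import reactor.test.StepVerifier;

@ExtendWith(SpringExtension.class)
@ContextConfiguration(classes = ApplicationConfig.class)
class PersonRepositoryTests {

	@Autowired PersonRepository repository;

	@Test
	void readsAllEntities() {

		repository.findAll()
			.as(StepVerifier::create)   // bridge the Flux into a StepVerifier
			.expectNextCount(1)         // assumes a single Person row in the test data
			.verifyComplete();
	}
}
----
====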
|
|
||||||
[[r2dbc.repositories.queries]] |
|
||||||
== Query Methods |
|
||||||
|
|
||||||
Most of the data access operations you usually trigger on a repository result in a query being run against the database.
|
||||||
Defining such a query is a matter of declaring a method on the repository interface, as the following example shows: |
|
||||||
|
|
||||||
.PersonRepository with query methods |
|
||||||
==== |
|
||||||
[source,java] |
|
||||||
---- |
|
||||||
interface ReactivePersonRepository extends ReactiveSortingRepository<Person, Long> { |
|
||||||
|
|
||||||
Flux<Person> findByFirstname(String firstname); <1> |
|
||||||
|
|
||||||
Flux<Person> findByFirstname(Publisher<String> firstname); <2> |
|
||||||
|
|
||||||
Flux<Person> findByFirstnameOrderByLastname(String firstname, Pageable pageable); <3> |
|
||||||
|
|
||||||
Mono<Person> findByFirstnameAndLastname(String firstname, String lastname); <4> |
|
||||||
|
|
||||||
Mono<Person> findFirstByLastname(String lastname); <5> |
|
||||||
|
|
||||||
@Query("SELECT * FROM person WHERE lastname = :lastname") |
|
||||||
Flux<Person> findByLastname(String lastname); <6> |
|
||||||
|
|
||||||
@Query("SELECT firstname, lastname FROM person WHERE lastname = $1") |
|
||||||
Mono<Person> findFirstByLastname(String lastname); <7> |
|
||||||
} |
|
||||||
---- |
|
||||||
<1> The method shows a query for all people with the given `firstname`. The query is derived by parsing the method name for constraints that can be concatenated with `And` and `Or`. Thus, the method name results in a query expression of `SELECT … FROM person WHERE firstname = :firstname`. |
|
||||||
<2> The method shows a query for all people with the given `firstname` once the `firstname` is emitted by the given `Publisher`. |
|
||||||
<3> Use `Pageable` to pass offset and sorting parameters to the database. |
|
||||||
<4> Find a single entity for the given criteria. It completes with `IncorrectResultSizeDataAccessException` on non-unique results. |
|
||||||
<5> Unlike <4>, the first entity is always emitted, even if the query yields more result rows.
|
||||||
<6> The `findByLastname` method shows a query for all people with the given last name. |
|
||||||
<7> A query for a single `Person` entity projecting only `firstname` and `lastname` columns. |
|
||||||
The annotated query uses native bind markers, which are Postgres bind markers in this example. |
|
||||||
==== |
|
||||||
|
|
||||||
Note that the columns of a select statement used in a `@Query` annotation must match the names generated by the `NamingStrategy` for the respective property. |
|
||||||
If a select statement does not include a matching column, that property is not set. If that property is required by the persistence constructor, either null or (for primitive types) the default value is provided. |
|
||||||
|
|
||||||
The following table shows the keywords that are supported for query methods: |
|
||||||
|
|
||||||
[cols="1,2,3", options="header", subs="quotes"] |
|
||||||
.Supported keywords for query methods |
|
||||||
|=== |
|
||||||
| Keyword |
|
||||||
| Sample |
|
||||||
| Logical result |
|
||||||
|
|
||||||
| `After` |
|
||||||
| `findByBirthdateAfter(Date date)` |
|
||||||
| `birthdate > date` |
|
||||||
|
|
||||||
| `GreaterThan` |
|
||||||
| `findByAgeGreaterThan(int age)` |
|
||||||
| `age > age` |
|
||||||
|
|
||||||
| `GreaterThanEqual` |
|
||||||
| `findByAgeGreaterThanEqual(int age)` |
|
||||||
| `age >= age` |
|
||||||
|
|
||||||
| `Before` |
|
||||||
| `findByBirthdateBefore(Date date)` |
|
||||||
| `birthdate < date` |
|
||||||
|
|
||||||
| `LessThan` |
|
||||||
| `findByAgeLessThan(int age)` |
|
||||||
| `age < age` |
|
||||||
|
|
||||||
| `LessThanEqual` |
|
||||||
| `findByAgeLessThanEqual(int age)` |
|
||||||
| `age \<= age` |
|
||||||
|
|
||||||
| `Between` |
|
||||||
| `findByAgeBetween(int from, int to)` |
|
||||||
| `age BETWEEN from AND to` |
|
||||||
|
|
||||||
| `NotBetween` |
|
||||||
| `findByAgeNotBetween(int from, int to)` |
|
||||||
| `age NOT BETWEEN from AND to` |
|
||||||
|
|
||||||
| `In` |
|
||||||
| `findByAgeIn(Collection<Integer> ages)` |
|
||||||
| `age IN (age1, age2, ageN)` |
|
||||||
|
|
||||||
| `NotIn` |
|
||||||
| `findByAgeNotIn(Collection ages)` |
|
||||||
| `age NOT IN (age1, age2, ageN)` |
|
||||||
|
|
||||||
| `IsNotNull`, `NotNull` |
|
||||||
| `findByFirstnameNotNull()` |
|
||||||
| `firstname IS NOT NULL` |
|
||||||
|
|
||||||
| `IsNull`, `Null` |
|
||||||
| `findByFirstnameNull()` |
|
||||||
| `firstname IS NULL` |
|
||||||
|
|
||||||
| `Like`, `StartingWith`, `EndingWith` |
|
||||||
| `findByFirstnameLike(String name)` |
|
||||||
| `firstname LIKE name` |
|
||||||
|
|
||||||
| `NotLike`, `IsNotLike` |
|
||||||
| `findByFirstnameNotLike(String name)` |
|
||||||
| `firstname NOT LIKE name` |
|
||||||
|
|
||||||
| `Containing` on String |
|
||||||
| `findByFirstnameContaining(String name)` |
|
||||||
| `firstname LIKE '%' + name +'%'` |
|
||||||
|
|
||||||
| `NotContaining` on String |
|
||||||
| `findByFirstnameNotContaining(String name)` |
|
||||||
| `firstname NOT LIKE '%' + name +'%'` |
|
||||||
|
|
||||||
| `(No keyword)` |
|
||||||
| `findByFirstname(String name)` |
|
||||||
| `firstname = name` |
|
||||||
|
|
||||||
| `Not` |
|
||||||
| `findByFirstnameNot(String name)` |
|
||||||
| `firstname != name` |
|
||||||
|
|
||||||
| `IsTrue`, `True` |
|
||||||
| `findByActiveIsTrue()` |
|
||||||
| `active IS TRUE` |
|
||||||
|
|
||||||
| `IsFalse`, `False` |
|
||||||
| `findByActiveIsFalse()` |
|
||||||
| `active IS FALSE` |
|
||||||
|=== |
|
||||||
|
|
||||||
[[r2dbc.repositories.modifying]] |
|
||||||
=== Modifying Queries |
|
||||||
|
|
||||||
The previous sections describe how to declare queries to access a given entity or collection of entities. |
|
||||||
Keywords from the preceding table can be used in conjunction with `delete…By` or `remove…By` to create derived queries that delete matching rows.
|
||||||
|
|
||||||
.`Delete…By` Query |
|
||||||
==== |
|
||||||
[source,java] |
|
||||||
---- |
|
||||||
interface ReactivePersonRepository extends ReactiveSortingRepository<Person, String> { |
|
||||||
|
|
||||||
Mono<Integer> deleteByLastname(String lastname); <1> |
|
||||||
|
|
||||||
Mono<Void> deletePersonByLastname(String lastname); <2> |
|
||||||
|
|
||||||
Mono<Boolean> deletePersonByLastname(String lastname); <3> |
|
||||||
} |
|
||||||
---- |
|
||||||
<1> Using a return type of `Mono<Integer>` returns the number of affected rows. |
|
||||||
<2> Using `Void` just reports whether the rows were successfully deleted without emitting a result value. |
|
||||||
<3> Using `Boolean` reports whether at least one row was removed. |
|
||||||
==== |
|
||||||
|
|
||||||
While this approach covers comprehensive custom functionality, you can turn queries that only need parameter binding into modifying queries by annotating the query method with `@Modifying`, as shown in the following example:
|
||||||
|
|
||||||
==== |
|
||||||
[source,java,indent=0] |
|
||||||
---- |
|
||||||
include::../{example-root}/PersonRepository.java[tags=atModifying] |
|
||||||
---- |
|
||||||
==== |
|
||||||
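
The included listing is not shown above; a minimal sketch of such a modifying query method, with an illustrative method name and SQL, looks like this:

====
[source,java]
----
interface PersonRepository extends ReactiveCrudRepository<Person, Long> {

	@Modifying
	@Query("UPDATE person SET firstname = :firstname WHERE lastname = :lastname")
	Mono<Integer> updateFirstnameForLastname(String firstname, String lastname);
}
----
====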
|
|
||||||
The result of a modifying query can be: |
|
||||||
|
|
||||||
* `Void` (or Kotlin `Unit`) to discard update count and await completion. |
|
||||||
* `Integer` or another numeric type emitting the affected rows count. |
|
||||||
* `Boolean` to emit whether at least one row was updated. |
|
||||||
|
|
||||||
The `@Modifying` annotation is only relevant in combination with the `@Query` annotation. |
|
||||||
Derived custom methods do not require this annotation. |
|
||||||
|
|
||||||
Modifying queries are executed directly against the database. |
|
||||||
No events or callbacks get called. |
|
||||||
Therefore, fields with auditing annotations do not get updated unless the annotated query updates them itself.
|
||||||
|
|
||||||
Alternatively, you can add custom modifying behavior by using the facilities described in <<repositories.custom-implementations,Custom Implementations for Spring Data Repositories>>. |
|
||||||
|
|
||||||
[[r2dbc.repositories.queries.spel]] |
|
||||||
=== Queries with SpEL Expressions |
|
||||||
|
|
||||||
Query string definitions can be used together with SpEL expressions to create dynamic queries at runtime. |
|
||||||
SpEL expressions can provide predicate values which are evaluated right before running the query. |
|
||||||
|
|
||||||
Expressions expose method arguments through an array that contains all the arguments. |
|
||||||
The following query uses `[0]` |
|
||||||
to declare the predicate value for `lastname` (which is equivalent to the `:lastname` parameter binding): |
|
||||||
|
|
||||||
==== |
|
||||||
[source,java,indent=0] |
|
||||||
---- |
|
||||||
include::../{example-root}/PersonRepository.java[tags=spel] |
|
||||||
---- |
|
||||||
==== |
|
||||||
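
The included listing is not shown above; the pattern it demonstrates is roughly the following, with an illustrative method name:

====
[source,java]
----
@Query("SELECT * FROM person WHERE lastname = :#{[0]}")
Flux<Person> findByQueryWithExpression(String lastname);
----
====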
|
|
||||||
SpEL in query strings can be a powerful way to enhance queries. |
|
||||||
However, they can also accept a broad range of unwanted arguments. |
|
||||||
You should make sure to sanitize strings before passing them to the query to avoid unwanted changes to your query. |
|
||||||
|
|
||||||
Expression support is extensible through the Query SPI: `org.springframework.data.spel.spi.EvaluationContextExtension`. |
|
||||||
The Query SPI can contribute properties and functions and can customize the root object. |
|
||||||
Extensions are retrieved from the application context at the time of SpEL evaluation when the query is built. |
|
||||||
|
|
||||||
TIP: When using SpEL expressions in combination with plain parameters, use named parameter notation instead of native bind markers to ensure a proper binding order. |
|
||||||
|
|
||||||
[[r2dbc.repositories.queries.query-by-example]] |
|
||||||
=== Query By Example |
|
||||||
|
|
||||||
Spring Data R2DBC also lets you use Query By Example to fashion queries. |
|
||||||
This technique allows you to use a "probe" object. |
|
||||||
Essentially, any field that isn't empty or `null` will be used to match. |
|
||||||
|
|
||||||
Here's an example: |
|
||||||
|
|
||||||
==== |
|
||||||
[source,java,indent=0] |
|
||||||
---- |
|
||||||
include::../{example-root}/QueryByExampleTests.java[tag=example] |
|
||||||
---- |
|
||||||
<1> Create a domain object with the criteria (`null` fields will be ignored). |
|
||||||
<2> Using the domain object, create an `Example`. |
|
||||||
<3> Through the `R2dbcRepository`, execute the query (use `findOne` for a `Mono`).
|
||||||
==== |
|
||||||
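
The included listing is not shown above; the flow it demonstrates looks roughly like this, assuming an `Employee` domain type with a `name` property and a repository extending `ReactiveQueryByExampleExecutor<Employee>`:

====
[source,java]
----
Employee employee = new Employee();                      // probe carrying the criteria
employee.setName("Frodo");                               // null fields are ignored for matching

Example<Employee> example = Example.of(employee);        // wrap the probe in an Example

Flux<Employee> employees = repository.findAll(example);  // findOne(example) returns a Mono instead
----
====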
|
|
||||||
This illustrates how to craft a simple probe using a domain object. |
|
||||||
In this case, it will query based on the `Employee` object's `name` field being equal to `Frodo`. |
|
||||||
`null` fields are ignored. |
|
||||||
|
|
||||||
==== |
|
||||||
[source,java,indent=0] |
|
||||||
---- |
|
||||||
include::../{example-root}/QueryByExampleTests.java[tag=example-2] |
|
||||||
---- |
|
||||||
<1> Create a custom `ExampleMatcher` that matches on ALL fields (use `matchingAny()` to match on *ANY* fields) |
|
||||||
<2> For the `name` field, use a wildcard that matches against the end of the field |
|
||||||
<3> Match columns against `null` (don't forget that `NULL` doesn't equal `NULL` in relational databases). |
|
||||||
<4> Ignore the `role` field when forming the query. |
|
||||||
<5> Plug the custom `ExampleMatcher` into the probe. |
|
||||||
==== |
|
||||||
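
The included listing is not shown above; applying a custom matcher looks roughly like this, again assuming the hypothetical `Employee` type:

====
[source,java]
----
Employee employee = new Employee();
employee.setName("Baggins");

ExampleMatcher matcher = ExampleMatcher.matchingAll()       // match on ALL fields
	.withMatcher("name", match -> match.endsWith())           // wildcard match against the end of 'name'
	.withIncludeNullValues()                                   // match null properties against NULL columns
	.withIgnorePaths("role");                                  // ignore 'role' when forming the query

Example<Employee> example = Example.of(employee, matcher);

Flux<Employee> employees = repository.findAll(example);
----
====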
|
|
||||||
It's also possible to apply a `withTransform()` against any property, allowing you to transform a property before forming the query. |
|
||||||
For example, you can apply `toUpperCase()` to a `String`-based property before the query is created.
|
||||||
|
|
||||||
Query By Example really shines when you don't know all the fields needed in a query in advance.
|
||||||
If you were building a filter on a web page where the user can pick the fields, Query By Example is a great way to flexibly capture that into an efficient query. |
|
||||||
|
|
||||||
[[r2dbc.entity-persistence.state-detection-strategies]] |
|
||||||
include::../{spring-data-commons-docs}/is-new-state-detection.adoc[leveloffset=+2] |
|
||||||
|
|
||||||
[[r2dbc.entity-persistence.id-generation]] |
|
||||||
=== ID Generation |
|
||||||
|
|
||||||
Spring Data R2DBC uses the ID to identify entities. |
|
||||||
The ID of an entity must be annotated with Spring Data's https://docs.spring.io/spring-data/commons/docs/current/api/org/springframework/data/annotation/Id.html[`@Id`] annotation. |
|
||||||
|
|
||||||
When your database has an auto-increment column for the ID column, the generated value gets set in the entity after inserting it into the database. |
|
||||||
|
|
||||||
Spring Data R2DBC does not attempt to insert values of identifier columns when the entity is new and the identifier value defaults to its initial value. |
|
||||||
That is `0` for primitive types and `null` if the identifier property uses a numeric wrapper type such as `Long`. |
|
||||||
|
|
||||||
One important constraint is that, after saving an entity, the entity must not be new anymore. |
|
||||||
Note that whether an entity is new is part of the entity's state. |
|
||||||
With auto-increment columns, this happens automatically, because the ID gets set by Spring Data with the value from the ID column. |
|
||||||
|
|
||||||
[[r2dbc.optimistic-locking]] |
|
||||||
=== Optimistic Locking |
|
||||||
|
|
||||||
The `@Version` annotation provides syntax similar to that of JPA in the context of R2DBC and makes sure updates are only applied to rows with a matching version. |
|
||||||
Therefore, the actual value of the version property is added to the update query in such a way that the update does not have any effect if another operation altered the row in the meantime. |
|
||||||
In that case, an `OptimisticLockingFailureException` is thrown. |
|
||||||
The following example shows these features: |
|
||||||
|
|
||||||
==== |
|
||||||
[source,java] |
|
||||||
---- |
|
||||||
@Table |
|
||||||
class Person { |
|
||||||
|
|
||||||
@Id Long id; |
|
||||||
String firstname; |
|
||||||
String lastname; |
|
||||||
@Version Long version; |
|
||||||
} |
|
||||||
|
|
||||||
R2dbcEntityTemplate template = …; |
|
||||||
|
|
||||||
Person daenerys = template.insert(new Person("Daenerys")).block(); <1>
|
||||||
|
|
||||||
Person other = template.select(Person.class) |
|
||||||
.matching(query(where("id").is(daenerys.getId()))) |
|
||||||
.first().block(); <2> |
|
||||||
|
|
||||||
daenerys.setLastname("Targaryen"); |
|
||||||
template.update(daenerys).block(); <3>
|
||||||
|
|
||||||
template.update(other).subscribe(); // emits OptimisticLockingFailureException <4> |
|
||||||
---- |
|
||||||
<1> Initially insert row. `version` is set to `0`. |
|
||||||
<2> Load the just inserted row. `version` is still `0`. |
|
||||||
<3> Update the row with `version = 0`. Set the `lastname` and bump `version` to `1`.
<4> Try to update the previously loaded row that still has `version = 0`. The operation fails with an `OptimisticLockingFailureException`, as the current `version` is `1`.
|
||||||
==== |
|
||||||
|
|
||||||
:projection-collection: Flux |
|
||||||
include::../{spring-data-commons-docs}/repository-projections.adoc[leveloffset=+2] |
|
||||||
|
|
||||||
[[projections.resultmapping]] |
|
||||||
==== Result Mapping |
|
||||||
|
|
||||||
A query method returning an interface-based or DTO projection is backed by results produced by the actual query.
Interface projections generally rely on mapping results onto the domain type first, so that potential `@Column` type mappings are considered, and the actual projection proxy uses a potentially partially materialized entity to expose projection data.
|
||||||
|
|
||||||
Result mapping for DTO projections depends on the actual query type. |
|
||||||
Derived queries use the domain type to map results, and Spring Data creates DTO instances solely from properties available on the domain type. |
|
||||||
Declaring properties in your DTO that are not available on the domain type is not supported. |
|
||||||
|
|
||||||
String-based queries use a different approach since the actual query, specifically the field projection, and result type declaration are close together. |
|
||||||
DTO projections used with query methods annotated with `@Query` map query results directly into the DTO type. |
|
||||||
Field mappings on the domain type are not considered. |
|
||||||
Using the DTO type directly, your query method can benefit from a more dynamic projection that isn't restricted to the domain model. |
|
||||||
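
For illustration, a string-based query method can map directly into a hypothetical `PersonSummary` DTO that is not part of the domain model:

====
[source,java]
----
record PersonSummary(String firstname, String lastname) {
}

interface PersonRepository extends ReactiveCrudRepository<Person, Long> {

	@Query("SELECT firstname, lastname FROM person WHERE lastname = :lastname")
	Flux<PersonSummary> findSummariesByLastname(String lastname);
}
----
====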
|
|
||||||
include::../{spring-data-commons-docs}/entity-callbacks.adoc[leveloffset=+1] |
|
||||||
include::./r2dbc-entity-callbacks.adoc[leveloffset=+2] |
|
||||||
|
|
||||||
[[r2dbc.multiple-databases]] |
|
||||||
== Working with multiple Databases |
|
||||||
|
|
||||||
When working with multiple, potentially different databases, your application will require a different approach to configuration. |
|
||||||
The provided `AbstractR2dbcConfiguration` support class assumes a single `ConnectionFactory` from which the `Dialect` gets derived. |
|
||||||
That being said, you need to define a few beans yourself to configure Spring Data R2DBC to work with multiple databases. |
|
||||||
|
|
||||||
R2DBC repositories require `R2dbcEntityOperations` to implement repositories. |
|
||||||
A simple configuration to scan for repositories without using `AbstractR2dbcConfiguration` looks like: |
|
||||||
|
|
||||||
[source,java] |
|
||||||
---- |
|
||||||
@Configuration |
|
||||||
@EnableR2dbcRepositories(basePackages = "com.acme.mysql", entityOperationsRef = "mysqlR2dbcEntityOperations") |
|
||||||
static class MySQLConfiguration { |
|
||||||
|
|
||||||
@Bean |
|
||||||
@Qualifier("mysql") |
|
||||||
public ConnectionFactory mysqlConnectionFactory() { |
|
||||||
return … |
|
||||||
} |
|
||||||
|
|
||||||
@Bean |
|
||||||
public R2dbcEntityOperations mysqlR2dbcEntityOperations(@Qualifier("mysql") ConnectionFactory connectionFactory) { |
|
||||||
|
|
||||||
DatabaseClient databaseClient = DatabaseClient.create(connectionFactory); |
|
||||||
|
|
||||||
return new R2dbcEntityTemplate(databaseClient, MySqlDialect.INSTANCE); |
|
||||||
} |
|
||||||
} |
|
||||||
---- |
|
||||||
|
|
||||||
Note that `@EnableR2dbcRepositories` allows configuration either through `databaseClientRef` or `entityOperationsRef`. |
|
||||||
Using various `DatabaseClient` beans is useful when connecting to multiple databases of the same type. |
|
||||||
When using different database systems that differ in their dialect, use `@EnableR2dbcRepositories(entityOperationsRef = …)` instead.
|
||||||
@ -1,6 +0,0 @@
[[r2dbc.core]]
= R2DBC support

include::r2dbc-core.adoc[]

include::r2dbc-template.adoc[leveloffset=+1]
@ -0,0 +1,42 @@ |
|||||||
|
# PACKAGES antora@3.2.0-alpha.2 @antora/atlas-extension:1.0.0-alpha.1 @antora/collector-extension@1.0.0-alpha.3 @springio/antora-extensions@1.1.0-alpha.2 @asciidoctor/tabs@1.0.0-alpha.12 @opendevise/antora-release-line-extension@1.0.0-alpha.2 |
||||||
|
# |
||||||
|
# The purpose of this Antora playbook is to build the docs in the current branch. |
||||||
|
antora: |
||||||
|
extensions: |
||||||
|
- '@antora/collector-extension' |
||||||
|
- require: '@springio/antora-extensions/root-component-extension' |
||||||
|
root_component_name: 'data-relational' |
||||||
|
site: |
||||||
|
title: Spring Data Relational |
||||||
|
url: https://docs.spring.io/spring-data-relational/reference/ |
||||||
|
content: |
||||||
|
sources: |
||||||
|
- url: ./../../.. |
||||||
|
branches: HEAD |
||||||
|
start_path: src/main/antora |
||||||
|
worktrees: true |
||||||
|
- url: https://github.com/spring-projects/spring-data-commons |
||||||
|
# Refname matching: |
||||||
|
# https://docs.antora.org/antora/latest/playbook/content-refname-matching/ |
||||||
|
branches: [ main, 3.2.x ] |
||||||
|
start_path: src/main/antora |
||||||
|
asciidoc: |
||||||
|
attributes: |
||||||
|
page-pagination: '' |
||||||
|
hide-uri-scheme: '@' |
||||||
|
tabs-sync-option: '@' |
||||||
|
chomp: 'all' |
||||||
|
extensions: |
||||||
|
- '@asciidoctor/tabs' |
||||||
|
- '@springio/asciidoctor-extensions' |
||||||
|
sourcemap: true |
||||||
|
urls: |
||||||
|
latest_version_segment: '' |
||||||
|
runtime: |
||||||
|
log: |
||||||
|
failure_level: warn |
||||||
|
format: pretty |
||||||
|
ui: |
||||||
|
bundle: |
||||||
|
url: https://github.com/spring-io/antora-ui-spring/releases/download/v0.3.3/ui-bundle.zip |
||||||
|
snapshot: true |
||||||
@ -0,0 +1,12 @@ |
|||||||
|
name: data-relational |
||||||
|
version: true |
||||||
|
title: Spring Data Relational |
||||||
|
nav: |
||||||
|
- modules/ROOT/nav.adoc |
||||||
|
ext: |
||||||
|
collector: |
||||||
|
- run: |
||||||
|
command: ./mvnw validate process-resources -pl :spring-data-jdbc-distribution -am -Pantora-process-resources |
||||||
|
local: true |
||||||
|
scan: |
||||||
|
dir: spring-data-jdbc-distribution/target/classes/ |
||||||
@ -0,0 +1 @@ |
|||||||
|
../../../../../../spring-data-r2dbc/src/test/java/org/springframework/data/r2dbc/documentation |
||||||
@ -0,0 +1,55 @@ |
|||||||
|
* xref:index.adoc[Overview] |
||||||
|
** xref:commons/upgrade.adoc[] |
||||||
|
* xref:repositories/introduction.adoc[] |
||||||
|
** xref:repositories/core-concepts.adoc[] |
||||||
|
** xref:repositories/definition.adoc[] |
||||||
|
** xref:repositories/create-instances.adoc[] |
||||||
|
** xref:repositories/query-methods-details.adoc[] |
||||||
|
** xref:repositories/projections.adoc[] |
||||||
|
** xref:object-mapping.adoc[] |
||||||
|
** xref:commons/custom-conversions.adoc[] |
||||||
|
** xref:repositories/custom-implementations.adoc[] |
||||||
|
** xref:repositories/core-domain-events.adoc[] |
||||||
|
** xref:commons/entity-callbacks.adoc[] |
||||||
|
** xref:repositories/core-extensions.adoc[] |
||||||
|
** xref:repositories/null-handling.adoc[] |
||||||
|
** xref:repositories/query-keywords-reference.adoc[] |
||||||
|
** xref:repositories/query-return-types-reference.adoc[] |
||||||
|
* xref:jdbc.adoc[] |
||||||
|
** xref:jdbc/why.adoc[] |
||||||
|
** xref:jdbc/domain-driven-design.adoc[] |
||||||
|
** xref:jdbc/getting-started.adoc[] |
||||||
|
** xref:jdbc/examples-repo.adoc[] |
||||||
|
** xref:jdbc/configuration.adoc[] |
||||||
|
** xref:jdbc/entity-persistence.adoc[] |
||||||
|
** xref:jdbc/loading-aggregates.adoc[] |
||||||
|
** xref:jdbc/query-methods.adoc[] |
||||||
|
** xref:jdbc/mybatis.adoc[] |
||||||
|
** xref:jdbc/events.adoc[] |
||||||
|
** xref:jdbc/logging.adoc[] |
||||||
|
** xref:jdbc/transactions.adoc[] |
||||||
|
** xref:jdbc/auditing.adoc[] |
||||||
|
** xref:jdbc/mapping.adoc[] |
||||||
|
** xref:jdbc/custom-conversions.adoc[] |
||||||
|
** xref:jdbc/locking.adoc[] |
||||||
|
** xref:query-by-example.adoc[] |
||||||
|
** xref:jdbc/schema-support.adoc[] |
||||||
|
* xref:r2dbc.adoc[] |
||||||
|
** xref:r2dbc/getting-started.adoc[] |
||||||
|
** xref:r2dbc/core.adoc[] |
||||||
|
** xref:r2dbc/template.adoc[] |
||||||
|
** xref:r2dbc/repositories.adoc[] |
||||||
|
** xref:r2dbc/query-methods.adoc[] |
||||||
|
** xref:r2dbc/entity-callbacks.adoc[] |
||||||
|
** xref:r2dbc/auditing.adoc[] |
||||||
|
** xref:r2dbc/mapping.adoc[] |
||||||
|
** xref:r2dbc/query-by-example.adoc[] |
||||||
|
** xref:r2dbc/kotlin.adoc[] |
||||||
|
** xref:r2dbc/migration-guide.adoc[] |
||||||
|
* xref:kotlin.adoc[] |
||||||
|
** xref:kotlin/requirements.adoc[] |
||||||
|
** xref:kotlin/null-safety.adoc[] |
||||||
|
** xref:kotlin/object-mapping.adoc[] |
||||||
|
** xref:kotlin/extensions.adoc[] |
||||||
|
** xref:kotlin/coroutines.adoc[] |
||||||
|
* https://github.com/spring-projects/spring-data-commons/wiki[Wiki] |
||||||
@ -0,0 +1 @@
include::{commons}@data-commons::page$custom-conversions.adoc[]
@ -0,0 +1 @@
include::{commons}@data-commons::page$entity-callbacks.adoc[]
@ -0,0 +1 @@
include::{commons}@data-commons::page$upgrade.adoc[]
@ -0,0 +1,21 @@ |
|||||||
|
[[spring-data-jdbc-reference-documentation]]
||||||
|
= Spring Data JDBC and R2DBC |
||||||
|
:revnumber: {version} |
||||||
|
:revdate: {localdate} |
||||||
|
:feature-scroll: true |
||||||
|
|
||||||
|
_Spring Data JDBC and R2DBC provide repository support for the Java Database Connectivity (JDBC) and Reactive Relational Database Connectivity (R2DBC) APIs, respectively.
They ease development of applications that need to access SQL data sources by providing a consistent programming model._
||||||
|
|
||||||
|
[horizontal] |
||||||
|
xref:repositories/introduction.adoc[Introduction] :: Introduction to Repositories |
||||||
|
xref:jdbc.adoc[JDBC] :: JDBC Object Mapping and Repositories |
||||||
|
xref:r2dbc.adoc[R2DBC] :: R2DBC Object Mapping and Repositories |
||||||
|
xref:kotlin.adoc[Kotlin] :: Kotlin-specific Support |
||||||
|
https://github.com/spring-projects/spring-data-commons/wiki[Wiki] :: What's New, Upgrade Notes, Supported Versions, additional cross-version information. |
||||||
|
|
||||||
|
Jens Schauder, Jay Bryant, Mark Paluch, Bastian Wilhelm |
||||||
|
|
||||||
|
(C) 2008-2023 VMware, Inc. |
||||||
|
|
||||||
|
Copies of this document may be made for your own use and for distribution to others, provided that you do not charge any fee for such copies and further provided that each copy contains this Copyright Notice, whether distributed in print or electronically. |
||||||
@ -0,0 +1,16 @@ |
|||||||
|
[[jdbc.repositories]] |
||||||
|
= JDBC |
||||||
|
:page-section-summary-toc: 1 |
||||||
|
|
||||||
|
The Spring Data JDBC module applies core Spring concepts to the development of solutions that use JDBC database drivers aligned with xref:jdbc/domain-driven-design.adoc[Domain-driven design principles]. |
||||||
|
We provide a "`template`" as a high-level abstraction for storing and querying aggregates. |
||||||
|
|
||||||
|
This document is the reference guide for Spring Data JDBC support. |
||||||
|
It explains concepts, semantics, and syntax.
||||||
|
|
||||||
|
This chapter points out the specialties for repository support for JDBC. |
||||||
|
This builds on the core repository support explained in xref:repositories/introduction.adoc[Working with Spring Data Repositories]. |
||||||
|
You should have a sound understanding of the basic concepts explained there. |
||||||
|
|
||||||
|
|
||||||
|
|
||||||
@ -0,0 +1,23 @@ |
|||||||
|
[[jdbc.auditing]] |
||||||
|
= JDBC Auditing |
||||||
|
:page-section-summary-toc: 1 |
||||||
|
|
||||||
|
In order to activate auditing, add `@EnableJdbcAuditing` to your configuration, as the following example shows: |
||||||
|
|
||||||
|
.Activating auditing with Java configuration |
||||||
|
[source,java] |
||||||
|
---- |
||||||
|
@Configuration |
||||||
|
@EnableJdbcAuditing |
||||||
|
class Config { |
||||||
|
|
||||||
|
@Bean |
||||||
|
AuditorAware<AuditableUser> auditorProvider() { |
||||||
|
return new AuditorAwareImpl(); |
||||||
|
} |
||||||
|
} |
||||||
|
---- |
||||||
|
|
||||||
|
If you expose a bean of type `AuditorAware` to the `ApplicationContext`, the auditing infrastructure automatically picks it up and uses it to determine the current user to be set on domain types. |
||||||
|
If you have multiple implementations registered in the `ApplicationContext`, you can select the one to be used by explicitly setting the `auditorAwareRef` attribute of `@EnableJdbcAuditing`. |
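
With auditing activated, the auditing annotations on an aggregate get populated when it is saved. A minimal sketch, assuming a hypothetical `Order` aggregate and the `AuditableUser` type from the configuration above:

[source,java]
----
@Table
class Order {

	@Id Long id;

	@CreatedBy AuditableUser createdBy;          // resolved through the AuditorAware bean
	@CreatedDate Instant createdDate;            // set when the aggregate is first saved
	@LastModifiedBy AuditableUser lastModifiedBy;
	@LastModifiedDate Instant lastModifiedDate;  // updated on every save
}
----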
||||||
|
|
||||||
@ -0,0 +1,64 @@ |
|||||||
|
[[jdbc.java-config]] |
||||||
|
= Configuration |
||||||
|
|
||||||
|
The Spring Data JDBC repositories support can be activated by an annotation through Java configuration, as the following example shows: |
||||||
|
|
||||||
|
.Spring Data JDBC repositories using Java configuration |
||||||
|
[source,java] |
||||||
|
---- |
||||||
|
@Configuration |
||||||
|
@EnableJdbcRepositories // <1> |
||||||
|
class ApplicationConfig extends AbstractJdbcConfiguration { // <2> |
||||||
|
|
||||||
|
@Bean |
||||||
|
DataSource dataSource() { // <3> |
||||||
|
|
||||||
|
EmbeddedDatabaseBuilder builder = new EmbeddedDatabaseBuilder(); |
||||||
|
return builder.setType(EmbeddedDatabaseType.HSQL).build(); |
||||||
|
} |
||||||
|
|
||||||
|
@Bean |
||||||
|
NamedParameterJdbcOperations namedParameterJdbcOperations(DataSource dataSource) { // <4> |
||||||
|
return new NamedParameterJdbcTemplate(dataSource); |
||||||
|
} |
||||||
|
|
||||||
|
@Bean |
||||||
|
TransactionManager transactionManager(DataSource dataSource) { // <5> |
||||||
|
return new DataSourceTransactionManager(dataSource); |
||||||
|
} |
||||||
|
} |
||||||
|
---- |
||||||
|
<1> `@EnableJdbcRepositories` creates implementations for interfaces derived from `Repository` |
||||||
|
<2> `AbstractJdbcConfiguration` provides various default beans required by Spring Data JDBC |
||||||
|
<3> Creates a `DataSource` connecting to a database. |
||||||
|
This is required by the following two bean methods. |
||||||
|
<4> Creates the `NamedParameterJdbcOperations` used by Spring Data JDBC to access the database. |
||||||
|
<5> Spring Data JDBC utilizes the transaction management provided by Spring JDBC. |
||||||
|
|
||||||
|
The configuration class in the preceding example sets up an embedded HSQL database by using the `EmbeddedDatabaseBuilder` API of `spring-jdbc`. |
||||||
|
The `DataSource` is then used to set up `NamedParameterJdbcOperations` and a `TransactionManager`. |
||||||
|
We finally activate Spring Data JDBC repositories by using the `@EnableJdbcRepositories` annotation.
If no base package is configured, it uses the package in which the configuration class resides.
Extending `AbstractJdbcConfiguration` ensures various beans get registered.
You can override its methods to customize the setup (see below).
||||||
|
|
||||||
|
This configuration can be further simplified by using Spring Boot. |
||||||
|
With Spring Boot a `DataSource` is sufficient once the starter `spring-boot-starter-data-jdbc` is included in the dependencies. |
||||||
|
Everything else is done by Spring Boot. |
||||||
|
|
||||||
|
There are a couple of things one might want to customize in this setup. |
||||||
|
|
||||||
|
[[jdbc.dialects]] |
||||||
|
== Dialects |
||||||
|
|
||||||
|
Spring Data JDBC uses implementations of the interface `Dialect` to encapsulate behavior that is specific to a database or its JDBC driver. |
||||||
|
By default, the `AbstractJdbcConfiguration` tries to determine the database in use and register the correct `Dialect`. |
||||||
|
This behavior can be changed by overwriting `jdbcDialect(NamedParameterJdbcOperations)`. |
||||||
|
|
||||||
|
If you use a database for which no dialect is available, your application won’t start up. In that case, you’ll have to ask your vendor to provide a `Dialect` implementation. Alternatively, you can:
||||||
|
|
||||||
|
1. Implement your own `Dialect`. |
||||||
|
2. Implement a `JdbcDialectProvider` returning the `Dialect`. |
||||||
|
3. Register the provider by creating a `spring.factories` resource under `META-INF` and perform the registration by adding a line + |
||||||
|
`org.springframework.data.jdbc.repository.config.DialectResolver$JdbcDialectProvider=<fully qualified name of your JdbcDialectProvider>` |
||||||
|
|
||||||
@ -0,0 +1,26 @@ |
|||||||
|
[[jdbc.domain-driven-design]] |
||||||
|
= Domain Driven Design and Relational Databases |
||||||
|
|
||||||
|
All Spring Data modules are inspired by the concepts of "`repository`", "`aggregate`", and "`aggregate root`" from Domain Driven Design. |
||||||
|
These are possibly even more important for Spring Data JDBC, because they are, to some extent, contrary to normal practice when working with relational databases. |
||||||
|
|
||||||
|
An aggregate is a group of entities that is guaranteed to be consistent between atomic changes to it. |
||||||
|
A classic example is an `Order` with `OrderItems`. |
||||||
|
A property on `Order` (for example, `numberOfItems`) remains consistent with the actual number of `OrderItems` as changes are made.
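
As an illustration only (these classes are not from the reference code), such an aggregate could be modeled like this:

[source,java]
----
class Order {                        // aggregate root

	@Id Long id;
	int numberOfItems;                 // kept consistent with 'items' within one atomic change
	List<OrderItem> items = new ArrayList<>();
}

class OrderItem {                    // only reachable through its Order

	String product;
	int quantity;
}
----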
||||||
|
|
||||||
|
References across aggregates are not guaranteed to be consistent at all times. |
||||||
|
They are guaranteed to become consistent eventually. |
||||||
|
|
||||||
|
Each aggregate has exactly one aggregate root, which is one of the entities of the aggregate. |
||||||
|
The aggregate gets manipulated only through methods on that aggregate root. |
||||||
|
These are the atomic changes mentioned earlier. |
||||||
|
|
||||||
|
A repository is an abstraction over a persistent store that looks like a collection of all the aggregates of a certain type. |
||||||
|
For Spring Data in general, this means you want to have one `Repository` per aggregate root. |
||||||
|
In addition, for Spring Data JDBC this means that all entities reachable from an aggregate root are considered to be part of that aggregate root. |
||||||
|
Spring Data JDBC assumes that only the aggregate has a foreign key to a table storing non-root entities of the aggregate and no other entity points toward non-root entities. |
||||||
|
|
||||||
|
WARNING: In the current implementation, entities referenced from an aggregate root are deleted and recreated by Spring Data JDBC. |
||||||
|
|
||||||
|
You can overwrite the repository methods with implementations that match your style of working and designing your database. |
||||||
|
|
||||||
@ -0,0 +1,44 @@ |
|||||||
|
[[jdbc.entity-persistence]] |
||||||
|
= Persisting Entities |
||||||
|
|
||||||
|
Saving an aggregate can be performed with the `CrudRepository.save(…)` method. |
||||||
|
If the aggregate is new, this results in an insert for the aggregate root, followed by insert statements for all directly or indirectly referenced entities. |
||||||
|
|
||||||
|
If the aggregate root is not new, all referenced entities get deleted, the aggregate root gets updated, and all referenced entities get inserted again. |
||||||
|
Note that whether an instance is new is part of the instance's state. |
||||||
|
|
||||||
|
NOTE: This approach has some obvious downsides. |
||||||
|
If only a few of the referenced entities have actually changed, the deletion and re-insertion is wasteful.
While this process could and probably will be improved, there are certain limitations to what Spring Data JDBC can offer.
It does not know the previous state of an aggregate.
So any update always has to take whatever it finds in the database and convert it to the state of the entity passed to the save method.
||||||
|
|
||||||
|
[[jdbc.entity-persistence.state-detection-strategies]] |
||||||
|
include::{commons}@data-commons::page$is-new-state-detection.adoc[leveloffset=+1] |
||||||
|
|
||||||
|
[[jdbc.entity-persistence.id-generation]] |
||||||
|
== ID Generation |
||||||
|
|
||||||
|
Spring Data JDBC uses the ID to identify entities. |
||||||
|
The ID of an entity must be annotated with Spring Data's https://docs.spring.io/spring-data/commons/docs/current/api/org/springframework/data/annotation/Id.html[`@Id`] annotation. |
||||||
|
|
||||||
|
When your database has an auto-increment column for the ID column, the generated value gets set in the entity after inserting it into the database. |
||||||
|
|
||||||
|
One important constraint is that, after saving an entity, the entity must not be new any more. |
||||||
|
Note that whether an entity is new is part of the entity's state. |
||||||
|
With auto-increment columns, this happens automatically, because the ID gets set by Spring Data with the value from the ID column. |
||||||
|
If you are not using auto-increment columns, you can use a `BeforeConvertCallback` to set the ID of the entity (covered later in this document). |
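
A minimal sketch of such a callback, assuming a hypothetical `Minion` aggregate with a `String` ID:

[source,java]
----
@Bean
BeforeConvertCallback<Minion> idGeneratingCallback() {

	// Runs after the is-new check and before the SQL for the save is generated,
	// so an ID assigned here is used for the subsequent INSERT.
	return minion -> {

		if (minion.getId() == null) {
			minion.setId(UUID.randomUUID().toString());
		}

		return minion;
	};
}
----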
||||||
|
|
||||||
|
[[jdbc.entity-persistence.optimistic-locking]] |
||||||
|
== Optimistic Locking |
||||||
|
|
||||||
|
Spring Data JDBC supports optimistic locking by means of a numeric attribute that is annotated with |
||||||
|
https://docs.spring.io/spring-data/commons/docs/current/api/org/springframework/data/annotation/Version.html[`@Version`] on the aggregate root. |
||||||
|
Whenever Spring Data JDBC saves an aggregate with such a version attribute, two things happen:
The update statement for the aggregate root contains a where clause checking that the version stored in the database is actually unchanged.
If this isn't the case, an `OptimisticLockingFailureException` is thrown.
Also, the version attribute gets increased both in the entity and in the database, so a concurrent action will notice the change and throw an `OptimisticLockingFailureException`, if applicable, as described above.
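
A minimal sketch of the effect, assuming a `PersonRepository extends CrudRepository<Person, Long>` and a `Person` aggregate that carries a `@Version` attribute:

[source,java]
----
class Person {

	@Id Long id;
	String firstname;
	@Version Long version;
}

Person person = repository.save(new Person());               // INSERT, version is now 0
Person copy = repository.findById(person.id).orElseThrow();  // still carries version 0

person.firstname = "Daenerys";
repository.save(person);                                     // UPDATE ... WHERE version = 0, version becomes 1

repository.save(copy);                                       // version 0 no longer matches -> OptimisticLockingFailureException
----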
||||||
|
|
||||||
|
This process also applies to inserting new aggregates, where a `null` or `0` version indicates a new instance, and the incremented version afterwards marks the instance as no longer new. This works rather nicely with cases where the ID is generated during object construction, for example when UUIDs are used.
||||||
|
|
||||||
|
During deletes the version check also applies but no version is increased. |
||||||
@ -0,0 +1,110 @@ |
|||||||
|
[[jdbc.events]] |
||||||
|
= Lifecycle Events |
||||||
|
|
||||||
|
Spring Data JDBC publishes lifecycle events to `ApplicationListener` objects, typically beans in the application context. |
||||||
|
Events are notifications about a certain lifecycle phase. |
||||||
|
In contrast to entity callbacks, events are intended for notification. |
||||||
|
Transactional listeners will receive events when the transaction completes. |
||||||
|
Events and callbacks get only triggered for aggregate roots. |
||||||
|
If you want to process non-root entities, you need to do that through a listener for the containing aggregate root. |
||||||
|
|
||||||
|
Entity lifecycle events can be costly, and you may notice a change in the performance profile when loading large result sets. |
||||||
|
You can disable lifecycle events on the link:{spring-data-jdbc-javadoc}org/springframework/data/jdbc/core/JdbcAggregateTemplate.html#setEntityLifecycleEventsEnabled(boolean)[Template API]. |
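
For example, given an injected `JdbcAggregateTemplate`, switching the events off is a one-liner:

[source,java]
----
// Skip publishing lifecycle events, for example when loading large result sets.
jdbcAggregateTemplate.setEntityLifecycleEventsEnabled(false);
----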
||||||
|
|
||||||
|
For example, the following listener gets invoked before an aggregate gets saved: |
||||||
|
|
||||||
|
[source,java] |
||||||
|
---- |
||||||
|
@Bean |
||||||
|
ApplicationListener<BeforeSaveEvent<Object>> loggingSaves() { |
||||||
|
|
||||||
|
return event -> { |
||||||
|
|
||||||
|
Object entity = event.getEntity(); |
||||||
|
LOG.info("{} is getting saved.", entity); |
||||||
|
}; |
||||||
|
} |
||||||
|
---- |
||||||
|
|
||||||
|
If you want to handle events only for a specific domain type, you may derive your listener from `AbstractRelationalEventListener` and override one or more of the `onXXX` methods, where `XXX` stands for an event type.
The callback methods only get invoked for events related to the domain type and its subtypes, so no further casting is required.
||||||
|
|
||||||
|
[source,java] |
||||||
|
---- |
||||||
|
class PersonLoadListener extends AbstractRelationalEventListener<Person> { |
||||||
|
|
||||||
|
@Override |
||||||
|
protected void onAfterLoad(AfterLoadEvent<Person> personLoad) { |
||||||
|
LOG.info(personLoad.getEntity()); |
||||||
|
} |
||||||
|
} |
||||||
|
---- |
||||||
|
|
||||||
|
The following table describes the available events. For more details about the exact relation between process steps, see the link:#jdbc.entity-callbacks[description of available callbacks], which map 1:1 to events.
||||||
|
|
||||||
|
.Available events |
||||||
|
|=== |
||||||
|
| Event | When It Is Published |
||||||
|
|
||||||
|
| {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/event/BeforeDeleteEvent.html[`BeforeDeleteEvent`] |
||||||
|
| Before an aggregate root gets deleted. |
||||||
|
|
||||||
|
| {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/event/AfterDeleteEvent.html[`AfterDeleteEvent`] |
||||||
|
| After an aggregate root gets deleted. |
||||||
|
|
||||||
|
| {spring-data-jdbc-javadoc}/org/springframework/data/relational/core/mapping/event/BeforeConvertEvent.html[`BeforeConvertEvent`] |
||||||
|
| Before an aggregate root gets converted into a plan for executing SQL statements, but after the decision was made whether the aggregate is new or not, that is, whether an insert or an update is in order.
||||||
|
|
||||||
|
| {spring-data-jdbc-javadoc}/org/springframework/data/relational/core/mapping/event/BeforeSaveEvent.html[`BeforeSaveEvent`] |
||||||
|
| Before an aggregate root gets saved (that is, inserted or updated, but after the decision about whether it gets inserted or updated was made).
||||||
|
|
||||||
|
| {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/event/AfterSaveEvent.html[`AfterSaveEvent`] |
||||||
|
| After an aggregate root gets saved (that is, inserted or updated). |
||||||
|
|
||||||
|
| {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/event/AfterConvertEvent.html[`AfterConvertEvent`] |
||||||
|
| After an aggregate root gets created from a database `ResultSet` and all its properties get set. |
||||||
|
|=== |
||||||
|
|
||||||
|
WARNING: Lifecycle events depend on an `ApplicationEventMulticaster`, which in the case of the `SimpleApplicationEventMulticaster` can be configured with a `TaskExecutor` and therefore gives no guarantees about when an event is processed.
||||||
|
|
||||||
|
|
||||||
|
[[jdbc.entity-callbacks]] |
||||||
|
== Store-specific EntityCallbacks |
||||||
|
|
||||||
|
Spring Data JDBC uses the xref:commons/entity-callbacks.adoc[`EntityCallback` API] for its auditing support and reacts on the callbacks listed in the following table. |
||||||
|
|
||||||
|
.Process Steps and Callbacks of the Different Processes performed by Spring Data JDBC. |
||||||
|
|=== |
||||||
|
| Process | `EntityCallback` / Process Step | Comment |
||||||
|
|
||||||
|
.3+| Delete | {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/event/BeforeDeleteCallback.html[`BeforeDeleteCallback`] |
||||||
|
| Before the actual deletion. |
||||||
|
|
||||||
|
2+| The aggregate root and all the entities of that aggregate get removed from the database. |
||||||
|
|
||||||
|
| {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/event/AfterDeleteCallback.html[`AfterDeleteCallback`] |
||||||
|
| After an aggregate gets deleted. |
||||||
|
|
||||||
|
|
||||||
|
.6+| Save 2+| Determine whether an insert or an update of the aggregate is to be performed, depending on whether it is new or not.
||||||
|
| {spring-data-jdbc-javadoc}/org/springframework/data/relational/core/mapping/event/BeforeConvertCallback.html[`BeforeConvertCallback`] |
||||||
|
| This is the correct callback if you want to set an ID programmatically. In the previous step, new aggregates got detected as such, and an ID generated in this step is used in the following step.
||||||
|
|
||||||
|
2+| Convert the aggregate to an aggregate change, that is, a sequence of SQL statements to be executed against the database. In this step, the decision is made whether an ID is provided by the aggregate or whether the ID is still empty and is expected to be generated by the database.
||||||
|
|
||||||
|
| {spring-data-jdbc-javadoc}/org/springframework/data/relational/core/mapping/event/BeforeSaveCallback.html[`BeforeSaveCallback`] |
||||||
|
| Changes made to the aggregate root may get considered, but the decision whether an ID value is sent to the database was already made in the previous step.
Do not use this callback for creating IDs for new aggregates. Use `BeforeConvertCallback` instead.
||||||
|
|
||||||
|
2+| The SQL statements determined above get executed against the database. |
||||||
|
|
||||||
|
| {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/event/AfterSaveCallback.html[`AfterSaveCallback`] |
||||||
|
| After an aggregate root gets saved (that is, inserted or updated). |
||||||
|
|
||||||
|
|
||||||
|
.2+| Load 2+| Load the aggregate using one or more SQL queries. Construct the aggregate from the result set.
||||||
|
| {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/event/AfterConvertCallback.html[`AfterConvertCallback`] |
||||||
|
| |
||||||
|
|=== |
||||||
|
|
||||||
|
We encourage the use of callbacks over events since they support the use of immutable classes and therefore are more powerful and versatile than events. |
||||||
@ -0,0 +1,5 @@ |
|||||||
|
[[jdbc.examples-repo]] |
||||||
|
= Examples Repository |
||||||
|
:page-section-summary-toc: 1 |
||||||
|
|
||||||
|
There is a https://github.com/spring-projects/spring-data-examples[GitHub repository with several examples] that you can download and play around with to get a feel for how the library works. |
||||||
@ -0,0 +1,68 @@ |
|||||||
|
[[jdbc.getting-started]] |
||||||
|
= Getting Started |
||||||
|
|
||||||
|
An easy way to bootstrap setting up a working environment is to create a Spring-based project in https://spring.io/tools[Spring Tools] or from https://start.spring.io[Spring Initializr]. |
||||||
|
|
||||||
|
First, you need to set up a running database server. Refer to your vendor documentation on how to configure your database for JDBC access. |
||||||
|
|
||||||
|
[[requirements]] |
||||||
|
== Requirements |
||||||
|
|
||||||
|
Spring Data JDBC requires https://spring.io/docs[Spring Framework] {springVersion} and above. |
||||||
|
|
||||||
|
In terms of databases, Spring Data JDBC requires a xref:jdbc/configuration.adoc#jdbc.dialects[dialect] to abstract common SQL functionality over vendor-specific flavours. |
||||||
|
Spring Data JDBC includes direct support for the following databases: |
||||||
|
|
||||||
|
* DB2 |
||||||
|
* H2 |
||||||
|
* HSQLDB |
||||||
|
* MariaDB |
||||||
|
* Microsoft SQL Server |
||||||
|
* MySQL |
||||||
|
* Oracle |
||||||
|
* Postgres |
||||||
|
|
||||||
|
If you use a different database, your application won’t start up.
The xref:jdbc/configuration.adoc#jdbc.dialects[dialect] section contains further detail on how to proceed in such a case.
||||||
|
|
||||||
|
To create a Spring project in STS: |
||||||
|
|
||||||
|
. Go to File -> New -> Spring Template Project -> Simple Spring Utility Project, and press Yes when prompted. |
||||||
|
Then enter a project and a package name, such as `org.spring.jdbc.example`. |
||||||
|
. Add the following to the `pom.xml` file's `dependencies` element:
||||||
|
+ |
||||||
|
[source,xml,subs="+attributes"] |
||||||
|
---- |
||||||
|
<dependencies> |
||||||
|
|
||||||
|
<!-- other dependency elements omitted --> |
||||||
|
|
||||||
|
<dependency> |
||||||
|
<groupId>org.springframework.data</groupId> |
||||||
|
<artifactId>spring-data-jdbc</artifactId> |
||||||
|
<version>{version}</version> |
||||||
|
</dependency> |
||||||
|
|
||||||
|
</dependencies> |
||||||
|
---- |
||||||
|
. Change the version of Spring in the pom.xml to be |
||||||
|
+ |
||||||
|
[source,xml,subs="+attributes"] |
||||||
|
---- |
||||||
|
<spring.framework.version>{springVersion}</spring.framework.version> |
||||||
|
---- |
||||||
|
. Add the following location of the Spring Milestone repository for Maven to your `pom.xml` such that it is at the same level of your `<dependencies/>` element: |
||||||
|
+ |
||||||
|
[source,xml] |
||||||
|
---- |
||||||
|
<repositories> |
||||||
|
<repository> |
||||||
|
<id>spring-milestone</id> |
||||||
|
<name>Spring Maven MILESTONE Repository</name> |
||||||
|
<url>https://repo.spring.io/milestone</url> |
||||||
|
</repository> |
||||||
|
</repositories> |
||||||
|
---- |
||||||
|
|
||||||
|
The repository is also https://repo.spring.io/milestone/org/springframework/data/[browseable here]. |
||||||
|
|
||||||
@ -0,0 +1,28 @@ |
|||||||
|
[[jdbc.loading-aggregates]] |
||||||
|
= Loading Aggregates |
||||||
|
|
||||||
|
Spring Data JDBC offers two ways to load aggregates.
The traditional way, and before version 3.2 the only way, is really simple:
Each query loads the aggregate roots, independently of whether the query is based on a `CrudRepository` method, a derived query, or an annotated query.
If the aggregate root references other entities, those are loaded with separate statements.
||||||
|
|
||||||
|
Spring Data JDBC now also allows the use of _Single Query Loading_.
With this, an arbitrary number of aggregates can be fully loaded with a single SQL query.
This should be significantly more efficient, especially for complex aggregates consisting of many entities.
||||||
|
|
||||||
|
Currently, this feature is very restricted. |
||||||
|
|
||||||
|
1. It only works for aggregates that only reference one entity collection. The plan is to remove this constraint in the future.

2. The aggregate must also not use `AggregateReference` or embedded entities. The plan is to remove this constraint in the future.

3. The database dialect must support it. Of the dialects provided by Spring Data JDBC, all but H2 and HSQL support this. H2 and HSQL don't support analytic functions (also known as window functions).

4. It only works for the find methods in `CrudRepository`, not for derived queries and not for annotated queries. The plan is to remove this constraint in the future.

5. Single Query Loading needs to be enabled in the `JdbcMappingContext` by calling `setSingleQueryLoadingEnabled(true)`. See the sketch after these notes.

NOTE: Single Query Loading is to be considered experimental. We appreciate feedback on how it works for you.

NOTE: Single Query Loading can be abbreviated as SQL, but we highly discourage that, since confusion with Structured Query Language is almost guaranteed.
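
A minimal sketch of enabling the flag during configuration follows. The surrounding class and bean are illustrative; only `setSingleQueryLoadingEnabled(true)` on the `JdbcMappingContext` comes from the note above:

[source,java]
----
@Configuration
class SingleQueryLoadingConfiguration {

	// Enables Single Query Loading once the JdbcMappingContext bean is available.
	@Bean
	InitializingBean enableSingleQueryLoading(JdbcMappingContext mappingContext) {
		return () -> mappingContext.setSingleQueryLoadingEnabled(true);
	}
}
----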
||||||
|
|
||||||
@ -0,0 +1,28 @@ |
|||||||
|
[[jdbc.locking]] |
||||||
|
= JDBC Locking |
||||||
|
|
||||||
|
Spring Data JDBC supports locking on derived query methods. |
||||||
|
To enable locking on a given derived query method inside a repository, you annotate it with `@Lock`. |
||||||
|
The required value of type `LockMode` offers two values: `PESSIMISTIC_READ`, which guarantees that the data you are reading doesn't get modified, and `PESSIMISTIC_WRITE`, which obtains a lock to modify the data.
||||||
|
Some databases do not make this distinction. |
||||||
|
In those cases, both modes are equivalent to `PESSIMISTIC_WRITE`.
||||||
|
|
||||||
|
.Using @Lock on derived query method |
||||||
|
[source,java] |
||||||
|
---- |
||||||
|
interface UserRepository extends CrudRepository<User, Long> { |
||||||
|
|
||||||
|
@Lock(LockMode.PESSIMISTIC_READ) |
||||||
|
List<User> findByLastname(String lastname); |
||||||
|
} |
||||||
|
---- |
||||||
|
|
||||||
|
As you can see above, the method `findByLastname(String lastname)` runs with a pessimistic read lock. If you are using a database with the MySQL dialect, this results, for example, in the following query:
||||||
|
|
||||||
|
.Resulting SQL query for MySQL dialect
||||||
|
[source,sql] |
||||||
|
---- |
||||||
|
Select * from user u where u.lastname = lastname LOCK IN SHARE MODE |
||||||
|
---- |
||||||
|
|
||||||
|
As an alternative to `LockMode.PESSIMISTIC_READ`, you can use `LockMode.PESSIMISTIC_WRITE`.
||||||
@ -0,0 +1,8 @@ |
|||||||
|
[[jdbc.logging]] |
||||||
|
= Logging |
||||||
|
:page-section-summary-toc: 1 |
||||||
|
|
||||||
|
Spring Data JDBC does little to no logging on its own. |
||||||
|
Instead, the `JdbcTemplate` used to issue SQL statements provides logging.
||||||
|
Thus, if you want to inspect what SQL statements are run, activate logging for Spring's {spring-framework-docs}/data-access.html#jdbc-JdbcTemplate[`NamedParameterJdbcTemplate`] or https://www.mybatis.org/mybatis-3/logging.html[MyBatis]. |
||||||
|
|
||||||
@ -0,0 +1,273 @@ |
|||||||
|
[[mapping]] |
||||||
|
= Mapping |
||||||
|
|
||||||
|
Rich mapping support is provided by the `BasicJdbcConverter`. `BasicJdbcConverter` has a rich metadata model that allows mapping domain objects to a data row. |
||||||
|
The mapping metadata model is populated by using annotations on your domain objects. |
||||||
|
However, the infrastructure is not limited to using annotations as the only source of metadata information. |
||||||
|
The `BasicJdbcConverter` also lets you map objects to rows without providing any additional metadata, by following a set of conventions. |
||||||
|
|
||||||
|
This section describes the features of the `BasicJdbcConverter`, including how to use conventions for mapping objects to rows and how to override those conventions with annotation-based mapping metadata. |
||||||
|
|
||||||
|
Read up on the basics of xref:object-mapping.adoc[] before continuing with this chapter.
||||||
|
|
||||||
|
[[mapping.conventions]] |
||||||
|
== Convention-based Mapping |
||||||
|
|
||||||
|
`BasicJdbcConverter` has a few conventions for mapping objects to rows when no additional mapping metadata is provided. |
||||||
|
The conventions are: |
||||||
|
|
||||||
|
* The short Java class name is mapped to the table name in the following manner. |
||||||
|
The `com.bigbank.SavingsAccount` class maps to the `SAVINGS_ACCOUNT` table name. |
||||||
|
The same name mapping is applied for mapping fields to column names. |
||||||
|
For example, the `firstName` field maps to the `FIRST_NAME` column. |
||||||
|
You can control this mapping by providing a custom `NamingStrategy`. |
||||||
|
Table and column names that are derived from property or class names are used in SQL statements without quotes by default. |
||||||
|
You can control this behavior by setting `JdbcMappingContext.setForceQuote(true)`. |
||||||
|
|
||||||
|
* Nested objects are not supported. |
||||||
|
|
||||||
|
* The converter uses any Spring Converters registered with it to override the default mapping of object properties to row columns and values. |
||||||
|
|
||||||
|
* The fields of an object are used to convert to and from columns in the row. |
||||||
|
Public `JavaBean` properties are not used. |
||||||
|
|
||||||
|
* If you have a single non-zero-argument constructor whose constructor argument names match top-level column names of the row, that constructor is used. |
||||||
|
Otherwise, the zero-argument constructor is used. |
||||||
|
If there is more than one non-zero-argument constructor, an exception is thrown. |
||||||
|
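As an illustration of the conventions above, consider the following hypothetical entity (it is not part of the example code base).
It requires no mapping annotations beyond `@Id`; all table and column names are derived from the class and field names.

[source,java]
----
import org.springframework.data.annotation.Id;

// Maps to the SAVINGS_ACCOUNT table by convention.
class SavingsAccount {

	@Id Long id;        // maps to the ID column
	String firstName;   // maps to the FIRST_NAME column
}
----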
|
||||||
|
[[jdbc.entity-persistence.types]] |
||||||
|
== Supported Types in Your Entity |
||||||
|
|
||||||
|
The properties of the following types are currently supported: |
||||||
|
|
||||||
|
* All primitive types and their boxed types (`int`, `float`, `Integer`, `Float`, and so on) |
||||||
|
|
||||||
|
* Enums get mapped to their name. |
||||||
|
|
||||||
|
* `String` |
||||||
|
|
||||||
|
* `java.util.Date`, `java.time.LocalDate`, `java.time.LocalDateTime`, and `java.time.LocalTime` |
||||||
|
|
||||||
|
* Arrays and Collections of the types mentioned above can be mapped to columns of array type if your database supports that. |
||||||
|
|
||||||
|
* Anything your database driver accepts. |
||||||
|
|
||||||
|
* References to other entities. |
||||||
|
They are considered a one-to-one relationship, or an embedded type. |
||||||
|
It is optional for one-to-one relationship entities to have an `id` attribute. |
||||||
|
The table of the referenced entity is expected to have an additional column with a name based on the referencing entity; see xref:jdbc/entity-persistence.adoc#jdbc.entity-persistence.types.backrefs[Back References].
||||||
|
Embedded entities do not need an `id`. |
||||||
|
If one is present it gets ignored. |
||||||
|
|
||||||
|
* `Set<some entity>` is considered a one-to-many relationship. |
||||||
|
The table of the referenced entity is expected to have an additional column with a name based on the referencing entity; see xref:jdbc/entity-persistence.adoc#jdbc.entity-persistence.types.backrefs[Back References].
||||||
|
|
||||||
|
* `Map<simple type, some entity>` is considered a qualified one-to-many relationship. |
||||||
|
The table of the referenced entity is expected to have two additional columns: One named based on the referencing entity for the foreign key (see xref:jdbc/entity-persistence.adoc#jdbc.entity-persistence.types.backrefs[Back References]) and one with the same name and an additional `_key` suffix for the map key. |
||||||
|
You can change this behavior by implementing `NamingStrategy.getReverseColumnName(PersistentPropertyPathExtension path)` and `NamingStrategy.getKeyColumn(RelationalPersistentProperty property)`, respectively. |
||||||
|
Alternatively, you may annotate the attribute with `@MappedCollection(idColumn="your_column_name", keyColumn="your_key_column_name")`.
||||||
|
|
||||||
|
* `List<some entity>` is mapped as a `Map<Integer, some entity>`. |
||||||
|
|
||||||
|
[[jdbc.entity-persistence.types.referenced-entities]] |
||||||
|
=== Referenced Entities |
||||||
|
|
||||||
|
The handling of referenced entities is limited. |
||||||
|
This is based on the idea of aggregate roots as described above. |
||||||
|
If you reference another entity, that entity is, by definition, part of your aggregate. |
||||||
|
So, if you remove the reference, the previously referenced entity gets deleted. |
||||||
|
This also means references are 1-1 or 1-n, but not n-1 or n-m. |
||||||
|
|
||||||
|
If you have n-1 or n-m references, you are, by definition, dealing with two separate aggregates. |
||||||
|
References between those may be encoded as simple `id` values, which map properly with Spring Data JDBC. |
||||||
|
A better way to encode these is to make them instances of `AggregateReference`.
||||||
|
An `AggregateReference` is a wrapper around an id value which marks that value as a reference to a different aggregate. |
||||||
|
Also, the type of that aggregate is encoded in a type parameter. |
||||||
|
|
||||||
|
[[jdbc.entity-persistence.types.backrefs]] |
||||||
|
=== Back References |
||||||
|
|
||||||
|
All references in an aggregate result in a foreign key relationship in the opposite direction in the database. |
||||||
|
By default, the name of the foreign key column is the table name of the referencing entity. |
||||||
|
|
||||||
|
Alternatively, you may choose to have them named by the entity name of the referencing entity, ignoring `@Table` annotations.
||||||
|
You activate this behaviour by calling `setForeignKeyNaming(ForeignKeyNaming.IGNORE_RENAMING)` on the `RelationalMappingContext`. |
||||||
|
|
||||||
|
For `List` and `Map` references an additional column is required for holding the list index or map key. |
||||||
|
It is based on the foreign key column with an additional `_KEY` suffix. |
||||||
|
|
||||||
|
If you want a completely different way of naming these back references you may implement `NamingStrategy.getReverseColumnName(PersistentPropertyPathExtension path)` in a way that fits your needs. |
||||||
|
|
||||||
|
.Declaring and setting an `AggregateReference` |
||||||
|
[source,java] |
||||||
|
---- |
||||||
|
class Person { |
||||||
|
@Id long id; |
||||||
|
AggregateReference<Person, Long> bestFriend; |
||||||
|
} |
||||||
|
|
||||||
|
// ... |
||||||
|
|
||||||
|
Person p1, p2 = // some initialization |
||||||
|
|
||||||
|
p1.bestFriend = AggregateReference.to(p2.id); |
||||||
|
|
||||||
|
---- |
||||||
|
|
||||||
|
* Types for which you registered suitable <<jdbc.custom-converters,custom conversions>>.
||||||
|
|
||||||
|
[[jdbc.entity-persistence.naming-strategy]] |
||||||
|
== `NamingStrategy` |
||||||
|
|
||||||
|
When you use the standard implementations of `CrudRepository` that Spring Data JDBC provides, they expect a certain table structure. |
||||||
|
You can tweak that by providing a {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/NamingStrategy.html[`NamingStrategy`] in your application context. |
||||||
|
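A minimal sketch of such a bean is shown below, assuming you only want to change how table names are derived.
The `APP_` prefix is purely illustrative.

[source,java]
----
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.relational.core.mapping.NamingStrategy;

@Configuration
class NamingConfiguration {

	// NamingStrategy provides defaults for all names; only the table name derivation is overridden here.
	@Bean
	NamingStrategy namingStrategy() {
		return new NamingStrategy() {

			@Override
			public String getTableName(Class<?> type) {
				return "APP_" + NamingStrategy.super.getTableName(type);
			}
		};
	}
}
----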
|
||||||
|
[[jdbc.entity-persistence.custom-table-name]] |
||||||
|
== Custom table names
||||||
|
|
||||||
|
When the `NamingStrategy` does not match your database table names, you can customize the names with the {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/Table.html[`@Table`] annotation.
||||||
|
The element `value` of this annotation provides the custom table name. |
||||||
|
The following example maps the `MyEntity` class to the `CUSTOM_TABLE_NAME` table in the database: |
||||||
|
|
||||||
|
[source,java] |
||||||
|
---- |
||||||
|
@Table("CUSTOM_TABLE_NAME") |
||||||
|
class MyEntity { |
||||||
|
@Id |
||||||
|
Integer id; |
||||||
|
|
||||||
|
String name; |
||||||
|
} |
||||||
|
---- |
||||||
|
|
||||||
|
[[jdbc.entity-persistence.custom-column-name]] |
||||||
|
== Custom column names
||||||
|
|
||||||
|
When the `NamingStrategy` does not match your database column names, you can customize the names with the {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/Column.html[`@Column`] annotation.
||||||
|
The element `value` of this annotation provides the custom column name. |
||||||
|
The following example maps the `name` property of the `MyEntity` class to the `CUSTOM_COLUMN_NAME` column in the database: |
||||||
|
|
||||||
|
[source,java] |
||||||
|
---- |
||||||
|
class MyEntity { |
||||||
|
@Id |
||||||
|
Integer id; |
||||||
|
|
||||||
|
@Column("CUSTOM_COLUMN_NAME") |
||||||
|
String name; |
||||||
|
} |
||||||
|
---- |
||||||
|
|
||||||
|
The {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/MappedCollection.html[`@MappedCollection`] |
||||||
|
annotation can be used on a reference type (one-to-one relationship) or on Sets, Lists, and Maps (one-to-many relationship). |
||||||
|
The `idColumn` element of the annotation provides a custom name for the foreign key column referencing the id column in the other table.
||||||
|
In the following example, the corresponding table for the `MySubEntity` class has a `NAME` column and a `CUSTOM_MY_ENTITY_ID_COLUMN_NAME` column that holds the id of the `MyEntity` for relationship purposes:
||||||
|
|
||||||
|
[source,java] |
||||||
|
---- |
||||||
|
class MyEntity { |
||||||
|
@Id |
||||||
|
Integer id; |
||||||
|
|
||||||
|
@MappedCollection(idColumn = "CUSTOM_MY_ENTITY_ID_COLUMN_NAME") |
||||||
|
Set<MySubEntity> subEntities; |
||||||
|
} |
||||||
|
|
||||||
|
class MySubEntity { |
||||||
|
String name; |
||||||
|
} |
||||||
|
---- |
||||||
|
|
||||||
|
When using `List` and `Map`, you must have an additional column for the position of an entry in the `List` or the key value of the entity in the `Map`.
||||||
|
This additional column name may be customized with the `keyColumn` element of the {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/MappedCollection.html[`@MappedCollection`] annotation:
||||||
|
|
||||||
|
[source,java] |
||||||
|
---- |
||||||
|
class MyEntity { |
||||||
|
@Id |
||||||
|
Integer id; |
||||||
|
|
||||||
|
@MappedCollection(idColumn = "CUSTOM_COLUMN_NAME", keyColumn = "CUSTOM_KEY_COLUMN_NAME") |
||||||
|
List<MySubEntity> name; |
||||||
|
} |
||||||
|
|
||||||
|
class MySubEntity { |
||||||
|
String name; |
||||||
|
} |
||||||
|
---- |
||||||
|
|
||||||
|
[[jdbc.entity-persistence.embedded-entities]] |
||||||
|
== Embedded entities |
||||||
|
|
||||||
|
Embedded entities are used to have value objects in your Java data model, even if there is only one table in your database.
||||||
|
In the following example, you see that `MyEntity` is mapped with the `@Embedded` annotation.
||||||
|
The consequence of this is that, in the database, a table `my_entity` with the two columns `id` and `name` (from the `EmbeddedEntity` class) is expected.
||||||
|
|
||||||
|
However, if the `name` column is actually `null` within the result set, the entire `embeddedEntity` property is set to `null`, according to the `onEmpty` element of `@Embedded`, which ``null``s objects when all nested properties are `null`. +
||||||
|
Opposite to this behavior, `USE_EMPTY` tries to create a new instance using either a default constructor or one that accepts nullable parameter values from the result set.
||||||
|
|
||||||
|
.Sample Code of embedding objects |
||||||
|
==== |
||||||
|
[source,java] |
||||||
|
---- |
||||||
|
class MyEntity { |
||||||
|
|
||||||
|
@Id |
||||||
|
Integer id; |
||||||
|
|
||||||
|
@Embedded(onEmpty = USE_NULL) <1> |
||||||
|
EmbeddedEntity embeddedEntity; |
||||||
|
} |
||||||
|
|
||||||
|
class EmbeddedEntity { |
||||||
|
String name; |
||||||
|
} |
||||||
|
---- |
||||||
|
|
||||||
|
<1> ``Null``s `embeddedEntity` if `name` is `null`.
||||||
|
Use `USE_EMPTY` to instantiate `embeddedEntity` with a potential `null` value for the `name` property. |
||||||
|
==== |
||||||
|
|
||||||
|
If you need a value object multiple times in an entity, this can be achieved with the optional `prefix` element of the `@Embedded` annotation. |
||||||
|
This element represents a prefix and is prepended to each column name in the embedded object.
||||||
|
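The following is a short sketch of the `prefix` element.
The class is hypothetical, and the derived column name in the comment assumes the default naming conventions.

[source,java]
----
class MyEntity {

	@Id Integer id;

	// USE_NULL is statically imported from Embedded.OnEmpty, as in the example above.
	@Embedded(onEmpty = USE_NULL, prefix = "PRE_")
	EmbeddedEntity embeddedEntity;   // its name property maps to the PRE_NAME column
}
----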
|
||||||
|
[TIP] |
||||||
|
==== |
||||||
|
Make use of the shortcuts `@Embedded.Nullable` & `@Embedded.Empty` for `@Embedded(onEmpty = USE_NULL)` and `@Embedded(onEmpty = USE_EMPTY)` to reduce verbosity and simultaneously set JSR-305 `@javax.annotation.Nonnull` accordingly. |
||||||
|
|
||||||
|
[source,java] |
||||||
|
---- |
||||||
|
class MyEntity { |
||||||
|
|
||||||
|
@Id |
||||||
|
Integer id; |
||||||
|
|
||||||
|
@Embedded.Nullable <1> |
||||||
|
EmbeddedEntity embeddedEntity; |
||||||
|
} |
||||||
|
---- |
||||||
|
|
||||||
|
<1> Shortcut for `@Embedded(onEmpty = USE_NULL)`. |
||||||
|
==== |
||||||
|
|
||||||
|
Embedded entities containing a `Collection` or a `Map` will always be considered non-empty since they will at least contain the empty collection or map.
||||||
|
Such an entity will therefore never be `null`, even when using `@Embedded(onEmpty = USE_NULL)`.
||||||
|
|
||||||
|
[[jdbc.entity-persistence.read-only-properties]] |
||||||
|
== Read Only Properties |
||||||
|
|
||||||
|
Attributes annotated with `@ReadOnlyProperty` will not be written to the database by Spring Data JDBC, but they will be read when an entity gets loaded. |
||||||
|
|
||||||
|
Spring Data JDBC will not automatically reload an entity after writing it. |
||||||
|
Therefore, you have to reload it explicitly if you want to see data that was generated in the database for such columns. |
||||||
|
|
||||||
|
If the annotated attribute is an entity or collection of entities, it is represented by one or more separate rows in separate tables. |
||||||
|
Spring Data JDBC will not perform any insert, delete or update for these rows. |
||||||
|
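A brief sketch of a read-only attribute follows.
The field and the assumption that the database computes its value are illustrative only.

[source,java]
----
import org.springframework.data.annotation.Id;
import org.springframework.data.annotation.ReadOnlyProperty;

class MyEntity {

	@Id Integer id;

	String name;

	@ReadOnlyProperty
	String searchKey;   // assumed to be computed by the database; read on load, never written
}
----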
|
||||||
|
[[jdbc.entity-persistence.insert-only-properties]] |
||||||
|
== Insert Only Properties |
||||||
|
|
||||||
|
Attributes annotated with `@InsertOnlyProperty` will only be written to the database by Spring Data JDBC during insert operations. |
||||||
|
For updates these properties will be ignored. |
||||||
|
|
||||||
|
`@InsertOnlyProperty` is only supported for the aggregate root. |
||||||
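A minimal sketch of an insert-only attribute on an aggregate root follows; the field name is purely illustrative.

[source,java]
----
import java.time.Instant;

import org.springframework.data.annotation.Id;
import org.springframework.data.relational.core.mapping.InsertOnlyProperty;

class MyAggregateRoot {

	@Id Integer id;

	@InsertOnlyProperty
	Instant createdAt;   // written on insert, ignored on subsequent updates
}
----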
@ -0,0 +1,120 @@ |
|||||||
|
[[jdbc.mybatis]] |
||||||
|
= MyBatis Integration |
||||||
|
|
||||||
|
The CRUD operations and query methods can be delegated to MyBatis. |
||||||
|
This section describes how to configure Spring Data JDBC to integrate with MyBatis and which conventions to follow to hand over the running of the queries as well as the mapping to the library. |
||||||
|
|
||||||
|
[[jdbc.mybatis.configuration]] |
||||||
|
== Configuration |
||||||
|
|
||||||
|
The easiest way to properly plug MyBatis into Spring Data JDBC is by importing `MyBatisJdbcConfiguration` into your application configuration:
||||||
|
|
||||||
|
[source,java] |
||||||
|
---- |
||||||
|
@Configuration |
||||||
|
@EnableJdbcRepositories |
||||||
|
@Import(MyBatisJdbcConfiguration.class) |
||||||
|
class Application { |
||||||
|
|
||||||
|
@Bean |
||||||
|
SqlSessionFactoryBean sqlSessionFactoryBean() { |
||||||
|
// Configure MyBatis here |
||||||
|
} |
||||||
|
} |
||||||
|
---- |
||||||
|
|
||||||
|
As you can see, all you need to declare is a `SqlSessionFactoryBean`, as `MyBatisJdbcConfiguration` relies on a `SqlSession` bean eventually being available in the `ApplicationContext`.
||||||
|
|
||||||
|
[[jdbc.mybatis.conventions]] |
||||||
|
== Usage conventions |
||||||
|
|
||||||
|
For each operation in `CrudRepository`, Spring Data JDBC runs multiple statements. |
||||||
|
If there is a https://github.com/mybatis/mybatis-3/blob/master/src/main/java/org/apache/ibatis/session/SqlSessionFactory.java[`SqlSessionFactory`] in the application context, Spring Data checks, for each step, whether the `SqlSessionFactory` offers a statement.
||||||
|
If one is found, that statement (including its configured mapping to an entity) is used. |
||||||
|
|
||||||
|
The name of the statement is constructed by concatenating the fully qualified name of the entity type with `Mapper.` and a `String` determining the kind of statement. |
||||||
|
For example, if an instance of `org.example.User` is to be inserted, Spring Data JDBC looks for a statement named `org.example.UserMapper.insert`. |
||||||
|
|
||||||
|
When the statement is run, an instance of `MyBatisContext` gets passed as an argument, which makes various arguments available to the statement.
||||||
|
|
||||||
|
The following table describes the available MyBatis statements: |
||||||
|
|
||||||
|
[cols="default,default,default,asciidoc"] |
||||||
|
|=== |
||||||
|
| Name | Purpose | CrudRepository methods that might trigger this statement | Attributes available in the `MyBatisContext` |
||||||
|
|
||||||
|
| `insert` | Inserts a single entity. This also applies for entities referenced by the aggregate root. | `save`, `saveAll`. | |
||||||
|
`getInstance`: the instance to be saved |
||||||
|
|
||||||
|
`getDomainType`: The type of the entity to be saved. |
||||||
|
|
||||||
|
`get(<key>)`: ID of the referencing entity, where `<key>` is the name of the back reference column provided by the `NamingStrategy`. |
||||||
|
|
||||||
|
|
||||||
|
| `update` | Updates a single entity. This also applies for entities referenced by the aggregate root. | `save`, `saveAll`.| |
||||||
|
`getInstance`: The instance to be saved |
||||||
|
|
||||||
|
`getDomainType`: The type of the entity to be saved. |
||||||
|
|
||||||
|
| `delete` | Deletes a single entity. | `delete`, `deleteById`.| |
||||||
|
`getId`: The ID of the instance to be deleted |
||||||
|
|
||||||
|
`getDomainType`: The type of the entity to be deleted. |
||||||
|
|
||||||
|
| `deleteAll-<propertyPath>` | Deletes all entities referenced by any aggregate root of the type used as prefix with the given property path. |
||||||
|
Note that the type used for prefixing the statement name is the name of the aggregate root, not the one of the entity to be deleted. | `deleteAll`.| |
||||||
|
|
||||||
|
`getDomainType`: The types of the entities to be deleted. |
||||||
|
|
||||||
|
| `deleteAll` | Deletes all aggregate roots of the type used as the prefix | `deleteAll`.| |
||||||
|
|
||||||
|
`getDomainType`: The type of the entities to be deleted. |
||||||
|
|
||||||
|
| `delete-<propertyPath>` | Deletes all entities referenced by an aggregate root with the given propertyPath | `deleteById`.| |
||||||
|
|
||||||
|
`getId`: The ID of the aggregate root for which referenced entities are to be deleted. |
||||||
|
|
||||||
|
`getDomainType`: The type of the entities to be deleted. |
||||||
|
|
||||||
|
| `findById` | Selects an aggregate root by ID | `findById`.| |
||||||
|
|
||||||
|
`getId`: The ID of the entity to load. |
||||||
|
|
||||||
|
`getDomainType`: The type of the entity to load. |
||||||
|
|
||||||
|
| `findAll` | Select all aggregate roots | `findAll`.| |
||||||
|
|
||||||
|
`getDomainType`: The type of the entity to load. |
||||||
|
|
||||||
|
| `findAllById` | Select a set of aggregate roots by ID values | `findAllById`.| |
||||||
|
|
||||||
|
`getId`: A list of ID values of the entities to load. |
||||||
|
|
||||||
|
`getDomainType`: The type of the entity to load. |
||||||
|
|
||||||
|
| `findAllByProperty-<propertyName>` | Select a set of entities that is referenced by another entity. The type of the referencing entity is used for the prefix. The referenced entities type is used as the suffix. _This method is deprecated. Use `findAllByPath` instead_ | All `find*` methods. If no query is defined for `findAllByPath`| |
||||||
|
|
||||||
|
`getId`: The ID of the entity referencing the entities to be loaded. |
||||||
|
|
||||||
|
`getDomainType`: The type of the entity to load. |
||||||
|
|
||||||
|
|
||||||
|
| `findAllByPath-<propertyPath>` | Select a set of entities that is referenced by another entity via a property path. | All `find*` methods.| |
||||||
|
|
||||||
|
`getIdentifier`: The `Identifier` holding the id of the aggregate root plus the keys and list indexes of all path elements. |
||||||
|
|
||||||
|
`getDomainType`: The type of the entity to load. |
||||||
|
|
||||||
|
| `findAllSorted` | Select all aggregate roots, sorted | `findAll(Sort)`.| |
||||||
|
|
||||||
|
`getSort`: The sorting specification. |
||||||
|
|
||||||
|
| `findAllPaged` | Select a page of aggregate roots, optionally sorted | `findAll(Page)`.| |
||||||
|
|
||||||
|
`getPageable`: The paging specification. |
||||||
|
|
||||||
|
| `count` | Count the number of aggregate roots of the type used as the prefix | `count` |
||||||
|
|
||||||
|
`getDomainType`: The type of aggregate roots to count. |
||||||
|
|=== |
||||||
|
|
||||||
@ -0,0 +1,251 @@ |
|||||||
|
[[jdbc.query-methods]] |
||||||
|
= Query Methods |
||||||
|
|
||||||
|
This section offers some specific information about the implementation and use of Spring Data JDBC. |
||||||
|
|
||||||
|
Most of the data access operations you usually trigger on a repository result in a query being run against the database.
||||||
|
Defining such a query is a matter of declaring a method on the repository interface, as the following example shows: |
||||||
|
|
||||||
|
.PersonRepository with query methods |
||||||
|
[source,java] |
||||||
|
---- |
||||||
|
interface PersonRepository extends PagingAndSortingRepository<Person, String> { |
||||||
|
|
||||||
|
List<Person> findByFirstname(String firstname); <1> |
||||||
|
|
||||||
|
List<Person> findByFirstnameOrderByLastname(String firstname, Pageable pageable); <2> |
||||||
|
|
||||||
|
Slice<Person> findByLastname(String lastname, Pageable pageable); <3> |
||||||
|
|
||||||
|
Page<Person> findByLastname(String lastname, Pageable pageable); <4> |
||||||
|
|
||||||
|
Person findByFirstnameAndLastname(String firstname, String lastname); <5> |
||||||
|
|
||||||
|
Person findFirstByLastname(String lastname); <6> |
||||||
|
|
||||||
|
@Query("SELECT * FROM person WHERE lastname = :lastname") |
||||||
|
List<Person> findByLastname(String lastname); <7> |
||||||
|
@Query("SELECT * FROM person WHERE lastname = :lastname") |
||||||
|
Stream<Person> streamByLastname(String lastname); <8> |
||||||
|
|
||||||
|
@Query("SELECT * FROM person WHERE username = :#{ principal?.username }") |
||||||
|
Person findActiveUser(); <9> |
||||||
|
} |
||||||
|
---- |
||||||
|
<1> The method shows a query for all people with the given `firstname`. |
||||||
|
The query is derived by parsing the method name for constraints that can be concatenated with `And` and `Or`. |
||||||
|
Thus, the method name results in a query expression of `SELECT … FROM person WHERE firstname = :firstname`. |
||||||
|
<2> Use `Pageable` to pass offset and sorting parameters to the database. |
||||||
|
<3> Return a `Slice<Person>`. Selects `LIMIT+1` rows to determine whether there's more data to consume. `ResultSetExtractor` customization is not supported.
||||||
|
<4> Run a paginated query returning `Page<Person>`. Selects only data within the given page bounds and potentially runs a count query to determine the total count. `ResultSetExtractor` customization is not supported.
||||||
|
<5> Find a single entity for the given criteria. |
||||||
|
It completes with `IncorrectResultSizeDataAccessException` on non-unique results. |
||||||
|
<6> In contrast to <5>, the first entity is always emitted even if the query yields more result rows.
||||||
|
<7> The `findByLastname` method shows a query for all people with the given `lastname`. |
||||||
|
<8> The `streamByLastname` method returns a `Stream`, which makes it possible to consume values as soon as they are returned from the database.
||||||
|
<9> You can use the Spring Expression Language to dynamically resolve parameters. |
||||||
|
In the sample, Spring Security is used to resolve the username of the current user. |
||||||
|
|
||||||
|
The following table shows the keywords that are supported for query methods: |
||||||
|
|
||||||
|
[cols="1,2,3",options="header",subs="quotes"] |
||||||
|
.Supported keywords for query methods |
||||||
|
|=== |
||||||
|
| Keyword |
||||||
|
| Sample |
||||||
|
| Logical result |
||||||
|
|
||||||
|
| `After` |
||||||
|
| `findByBirthdateAfter(Date date)` |
||||||
|
| `birthdate > date` |
||||||
|
|
||||||
|
| `GreaterThan` |
||||||
|
| `findByAgeGreaterThan(int age)` |
||||||
|
| `age > age` |
||||||
|
|
||||||
|
| `GreaterThanEqual` |
||||||
|
| `findByAgeGreaterThanEqual(int age)` |
||||||
|
| `age >= age` |
||||||
|
|
||||||
|
| `Before` |
||||||
|
| `findByBirthdateBefore(Date date)` |
||||||
|
| `birthdate < date` |
||||||
|
|
||||||
|
| `LessThan` |
||||||
|
| `findByAgeLessThan(int age)` |
||||||
|
| `age < age` |
||||||
|
|
||||||
|
| `LessThanEqual` |
||||||
|
| `findByAgeLessThanEqual(int age)` |
||||||
|
| `age \<= age` |
||||||
|
|
||||||
|
| `Between` |
||||||
|
| `findByAgeBetween(int from, int to)` |
||||||
|
| `age BETWEEN from AND to` |
||||||
|
|
||||||
|
| `NotBetween` |
||||||
|
| `findByAgeNotBetween(int from, int to)` |
||||||
|
| `age NOT BETWEEN from AND to` |
||||||
|
|
||||||
|
| `In` |
||||||
|
| `findByAgeIn(Collection<Integer> ages)` |
||||||
|
| `age IN (age1, age2, ageN)` |
||||||
|
|
||||||
|
| `NotIn` |
||||||
|
| `findByAgeNotIn(Collection ages)` |
||||||
|
| `age NOT IN (age1, age2, ageN)` |
||||||
|
|
||||||
|
| `IsNotNull`, `NotNull` |
||||||
|
| `findByFirstnameNotNull()` |
||||||
|
| `firstname IS NOT NULL` |
||||||
|
|
||||||
|
| `IsNull`, `Null` |
||||||
|
| `findByFirstnameNull()` |
||||||
|
| `firstname IS NULL` |
||||||
|
|
||||||
|
| `Like`, `StartingWith`, `EndingWith` |
||||||
|
| `findByFirstnameLike(String name)` |
||||||
|
| `firstname LIKE name` |
||||||
|
|
||||||
|
| `NotLike`, `IsNotLike` |
||||||
|
| `findByFirstnameNotLike(String name)` |
||||||
|
| `firstname NOT LIKE name` |
||||||
|
|
||||||
|
| `Containing` on String |
||||||
|
| `findByFirstnameContaining(String name)` |
||||||
|
| `firstname LIKE '%' + name + '%'` |
||||||
|
|
||||||
|
| `NotContaining` on String |
||||||
|
| `findByFirstnameNotContaining(String name)` |
||||||
|
| `firstname NOT LIKE '%' + name + '%'` |
||||||
|
|
||||||
|
| `(No keyword)` |
||||||
|
| `findByFirstname(String name)` |
||||||
|
| `firstname = name` |
||||||
|
|
||||||
|
| `Not` |
||||||
|
| `findByFirstnameNot(String name)` |
||||||
|
| `firstname != name` |
||||||
|
|
||||||
|
| `IsTrue`, `True` |
||||||
|
| `findByActiveIsTrue()` |
||||||
|
| `active IS TRUE` |
||||||
|
|
||||||
|
| `IsFalse`, `False` |
||||||
|
| `findByActiveIsFalse()` |
||||||
|
| `active IS FALSE` |
||||||
|
|=== |
||||||
|
|
||||||
|
NOTE: Query derivation is limited to properties that can be used in a `WHERE` clause without using joins. |
||||||
|
|
||||||
|
[[jdbc.query-methods.strategies]] |
||||||
|
== Query Lookup Strategies |
||||||
|
|
||||||
|
The JDBC module supports defining a query manually as a String in a `@Query` annotation or as named query in a property file. |
||||||
|
|
||||||
|
Deriving a query from the name of the method is currently limited to simple properties, that is, properties present directly in the aggregate root.
||||||
|
Also, only select queries are supported by this approach. |
||||||
|
|
||||||
|
[[jdbc.query-methods.at-query]] |
||||||
|
== Using `@Query` |
||||||
|
|
||||||
|
The following example shows how to use `@Query` to declare a query method: |
||||||
|
|
||||||
|
.Declare a query method by using @Query |
||||||
|
[source,java] |
||||||
|
---- |
||||||
|
interface UserRepository extends CrudRepository<User, Long> { |
||||||
|
|
||||||
|
@Query("select firstName, lastName from User u where u.emailAddress = :email") |
||||||
|
User findByEmailAddress(@Param("email") String email); |
||||||
|
} |
||||||
|
---- |
||||||
|
|
||||||
|
For converting the query result into entities, the same `RowMapper` is used by default as for the queries that Spring Data JDBC generates itself.
||||||
|
The query you provide must match the format the `RowMapper` expects. |
||||||
|
Columns for all properties that are used in the constructor of an entity must be provided. |
||||||
|
Columns for properties that get set via setter, wither or field access are optional. |
||||||
|
Properties that don't have a matching column in the result will not be set. |
||||||
|
The query is used for populating the aggregate root, embedded entities, and one-to-one relationships, including arrays of primitive types which get stored and loaded as SQL array types.
||||||
|
Separate queries are generated for maps, lists, sets and arrays of entities. |
||||||
|
|
||||||
|
NOTE: Spring fully supports Java 8’s parameter name discovery based on the `-parameters` compiler flag. |
||||||
|
By using this flag in your build as an alternative to debug information, you can omit the `@Param` annotation for named parameters. |
||||||
|
|
||||||
|
NOTE: Spring Data JDBC supports only named parameters. |
||||||
|
|
||||||
|
[[jdbc.query-methods.named-query]] |
||||||
|
== Named Queries |
||||||
|
|
||||||
|
If no query is given in an annotation, as described in the previous section, Spring Data JDBC will try to locate a named query.
||||||
|
There are two ways in which the name of the query can be determined.
||||||
|
The default is to take the _domain class_ of the query (that is, the aggregate root of the repository), take its simple name, and append the name of the method, separated by a `.`.
||||||
|
Alternatively the `@Query` annotation has a `name` attribute which can be used to specify the name of a query to be looked up. |
||||||
|
|
||||||
|
Named queries are expected to be provided in the property file `META-INF/jdbc-named-queries.properties` on the classpath. |
||||||
|
|
||||||
|
The location of that file may be changed by setting a value to `@EnableJdbcRepositories.namedQueriesLocation`. |
||||||
|
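As a hypothetical illustration, a repository method without an annotated query would be resolved against a named query whose key follows the convention described above:

[source,java]
----
interface PersonRepository extends CrudRepository<Person, Long> {

	// With no @Query value, Spring Data JDBC looks for a named query called
	// "Person.findByFirstname" in META-INF/jdbc-named-queries.properties, for example:
	//   Person.findByFirstname=SELECT * FROM person WHERE firstname = :firstname
	// (the SQL shown in this comment is an assumed example, not part of this document)
	List<Person> findByFirstname(String firstname);
}
----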
|
||||||
|
[[jdbc.query-methods.at-query.streaming-results]] |
||||||
|
=== Streaming Results |
||||||
|
|
||||||
|
When you specify `Stream` as the return type of a query method, Spring Data JDBC returns elements as soon as they become available.
||||||
|
When dealing with large amounts of data this is suitable for reducing latency and memory requirements. |
||||||
|
|
||||||
|
The stream contains an open connection to the database. |
||||||
|
To avoid memory leaks, that connection needs to be closed eventually, by closing the stream. |
||||||
|
The recommended way to do that is a `try-with-resources` block.
||||||
|
It also means that, once the connection to the database is closed, the stream cannot obtain further elements and likely throws an exception. |
||||||
|
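A minimal usage sketch, assuming the `streamByLastname` method shown earlier in this chapter and a `getFirstname()` accessor on `Person`:

[source,java]
----
try (Stream<Person> people = personRepository.streamByLastname("Matthews")) {
	people.forEach(person -> System.out.println(person.getFirstname()));
} // closing the stream releases the underlying database connection
----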
|
||||||
|
[[jdbc.query-methods.at-query.custom-rowmapper]] |
||||||
|
=== Custom `RowMapper` |
||||||
|
|
||||||
|
You can configure which `RowMapper` to use, either by using the `@Query(rowMapperClass = …)` annotation or by registering a `RowMapperMap` bean and registering a `RowMapper` per method return type.
||||||
|
The following example shows how to register `DefaultQueryMappingConfiguration`: |
||||||
|
|
||||||
|
[source,java] |
||||||
|
---- |
||||||
|
@Bean |
||||||
|
QueryMappingConfiguration rowMappers() { |
||||||
|
return new DefaultQueryMappingConfiguration() |
||||||
|
.register(Person.class, new PersonRowMapper()) |
||||||
|
.register(Address.class, new AddressRowMapper()); |
||||||
|
} |
||||||
|
---- |
||||||
|
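The `PersonRowMapper` registered above is not defined in this document.
A minimal sketch of such a `RowMapper`, assuming setter-based `Person` properties and conventionally named columns, might look like this:

[source,java]
----
import java.sql.ResultSet;
import java.sql.SQLException;

import org.springframework.jdbc.core.RowMapper;

class PersonRowMapper implements RowMapper<Person> {

	@Override
	public Person mapRow(ResultSet rs, int rowNum) throws SQLException {
		Person person = new Person();
		person.setFirstname(rs.getString("first_name"));   // column names are assumptions
		person.setLastname(rs.getString("last_name"));
		return person;
	}
}
----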
|
||||||
|
When determining which `RowMapper` to use for a method, the following steps are followed, based on the return type of the method: |
||||||
|
|
||||||
|
. If the type is a simple type, no `RowMapper` is used. |
||||||
|
+ |
||||||
|
Instead, the query is expected to return a single row with a single column, and a conversion to the return type is applied to that value. |
||||||
|
. The entity classes in the `QueryMappingConfiguration` are iterated until one is found that is a superclass or interface of the return type in question. |
||||||
|
The `RowMapper` registered for that class is used. |
||||||
|
+ |
||||||
|
Iterating happens in the order of registration, so make sure to register more general types after specific ones. |
||||||
|
|
||||||
|
If applicable, wrapper types such as collections or `Optional` are unwrapped. |
||||||
|
Thus, a return type of `Optional<Person>` uses the `Person` type in the preceding process. |
||||||
|
|
||||||
|
NOTE: Using a custom `RowMapper` through `QueryMappingConfiguration`, `@Query(rowMapperClass=…)`, or a custom `ResultSetExtractor` disables Entity Callbacks and Lifecycle Events as the result mapping can issue its own events/callbacks if needed. |
||||||
|
|
||||||
|
[[jdbc.query-methods.at-query.modifying]] |
||||||
|
=== Modifying Query |
||||||
|
|
||||||
|
You can mark a query as being a modifying query by using the `@Modifying` annotation on a query method, as the following example shows:
||||||
|
|
||||||
|
[source,java] |
||||||
|
---- |
||||||
|
@Modifying |
||||||
|
@Query("UPDATE DUMMYENTITY SET name = :name WHERE id = :id") |
||||||
|
boolean updateName(@Param("id") Long id, @Param("name") String name); |
||||||
|
---- |
||||||
|
|
||||||
|
You can specify the following return types: |
||||||
|
|
||||||
|
* `void` |
||||||
|
* `int` (updated record count) |
||||||
|
* `boolean` (whether a record was updated)
||||||
|
|
||||||
|
Modifying queries are executed directly against the database. |
||||||
|
No events or callbacks get called. |
||||||
|
Therefore, fields with auditing annotations also do not get updated unless the annotated query itself updates them.
||||||
@ -0,0 +1,92 @@ |
|||||||
|
[[jdbc.transactions]] |
||||||
|
= Transactionality |
||||||
|
|
||||||
|
The methods of `CrudRepository` instances are transactional by default. |
||||||
|
For reading operations, the transaction configuration `readOnly` flag is set to `true`. |
||||||
|
All others are configured with a plain `@Transactional` annotation so that default transaction configuration applies. |
||||||
|
For details, see the Javadoc of link:{spring-data-jdbc-javadoc}org/springframework/data/jdbc/repository/support/SimpleJdbcRepository.html[`SimpleJdbcRepository`]. |
||||||
|
If you need to tweak transaction configuration for one of the methods declared in a repository, redeclare the method in your repository interface, as follows: |
||||||
|
|
||||||
|
.Custom transaction configuration for CRUD |
||||||
|
[source,java] |
||||||
|
---- |
||||||
|
interface UserRepository extends CrudRepository<User, Long> { |
||||||
|
|
||||||
|
@Override |
||||||
|
@Transactional(timeout = 10) |
||||||
|
List<User> findAll(); |
||||||
|
|
||||||
|
// Further query method declarations |
||||||
|
} |
||||||
|
---- |
||||||
|
|
||||||
|
The preceding example causes the `findAll()` method to run with a timeout of 10 seconds and without the `readOnly` flag.
||||||
|
|
||||||
|
Another way to alter transactional behavior is by using a facade or service implementation that typically covers more than one repository. |
||||||
|
Its purpose is to define transactional boundaries for non-CRUD operations. |
||||||
|
The following example shows how to create such a facade: |
||||||
|
|
||||||
|
.Using a facade to define transactions for multiple repository calls |
||||||
|
[source,java] |
||||||
|
---- |
||||||
|
@Service |
||||||
|
public class UserManagementImpl implements UserManagement { |
||||||
|
|
||||||
|
private final UserRepository userRepository; |
||||||
|
private final RoleRepository roleRepository; |
||||||
|
|
||||||
|
UserManagementImpl(UserRepository userRepository, |
||||||
|
RoleRepository roleRepository) { |
||||||
|
this.userRepository = userRepository; |
||||||
|
this.roleRepository = roleRepository; |
||||||
|
} |
||||||
|
|
||||||
|
@Transactional |
||||||
|
public void addRoleToAllUsers(String roleName) { |
||||||
|
|
||||||
|
Role role = roleRepository.findByName(roleName); |
||||||
|
|
||||||
|
for (User user : userRepository.findAll()) { |
||||||
|
user.addRole(role); |
||||||
|
userRepository.save(user); |
||||||
|
} |
||||||
|
	}
}
||||||
|
---- |
||||||
|
|
||||||
|
The preceding example causes calls to `addRoleToAllUsers(…)` to run inside a transaction (participating in an existing one or creating a new one if none are already running). |
||||||
|
The transaction configuration at the repositories is then neglected, as the outer transaction configuration determines the actual one being used.
||||||
|
Note that you have to explicitly activate `<tx:annotation-driven />` or use `@EnableTransactionManagement` to get annotation-based configuration for facades working. |
||||||
|
Note that the preceding example assumes you use component scanning. |
||||||
|
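For example, a Java-based configuration that activates annotation-driven transaction management could look like the following sketch.
The transaction manager, `DataSource`, and repository beans are omitted and assumed to be provided elsewhere (for example, by extending `AbstractJdbcConfiguration`).

[source,java]
----
@Configuration
@EnableTransactionManagement
@EnableJdbcRepositories
class TransactionConfiguration {
	// DataSource, TransactionManager, and further infrastructure beans omitted;
	// they are assumed to be defined elsewhere in your application.
}
----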
|
||||||
|
[[jdbc.transaction.query-methods]] |
||||||
|
== Transactional Query Methods |
||||||
|
|
||||||
|
To let your query methods be transactional, use `@Transactional` at the repository interface you define, as the following example shows: |
||||||
|
|
||||||
|
.Using @Transactional at query methods |
||||||
|
[source,java] |
||||||
|
---- |
||||||
|
@Transactional(readOnly = true) |
||||||
|
interface UserRepository extends CrudRepository<User, Long> { |
||||||
|
|
||||||
|
List<User> findByLastname(String lastname); |
||||||
|
|
||||||
|
@Modifying |
||||||
|
@Transactional |
||||||
|
@Query("delete from User u where u.active = false") |
||||||
|
void deleteInactiveUsers(); |
||||||
|
} |
||||||
|
---- |
||||||
|
|
||||||
|
Typically, you want the `readOnly` flag to be set to true, because most of the query methods only read data. |
||||||
|
In contrast to that, `deleteInactiveUsers()` uses the `@Modifying` annotation and overrides the transaction configuration. |
||||||
|
Thus, the method runs with the `readOnly` flag set to `false`.
||||||
|
|
||||||
|
NOTE: It is highly recommended to make query methods transactional. These methods might execute more than one query in order to populate an entity.
||||||
|
Without a common transaction Spring Data JDBC executes the queries in different connections. |
||||||
|
This may put excessive strain on the connection pool and might even lead to deadlocks when multiple methods request a fresh connection while holding on to one.
||||||
|
|
||||||
|
NOTE: It is definitely reasonable to mark read-only queries as such by setting the `readOnly` flag. |
||||||
|
This does not, however, act as a check that you do not trigger a manipulating query (although some databases reject `INSERT` and `UPDATE` statements inside a read-only transaction). |
||||||
|
Instead, the `readOnly` flag is propagated as a hint to the underlying JDBC driver for performance optimizations. |
||||||
|
|
||||||
@ -0,0 +1,31 @@ |
|||||||
|
[[jdbc.why]] |
||||||
|
= Why Spring Data JDBC? |
||||||
|
|
||||||
|
The main persistence API for relational databases in the Java world is certainly JPA, which has its own Spring Data module. |
||||||
|
Why is there another one? |
||||||
|
|
||||||
|
JPA does a lot of things in order to help the developer. |
||||||
|
Among other things, it tracks changes to entities. |
||||||
|
It does lazy loading for you. |
||||||
|
It lets you map a wide array of object constructs to an equally wide array of database designs. |
||||||
|
|
||||||
|
This is great and makes a lot of things really easy. |
||||||
|
Just take a look at a basic JPA tutorial. |
||||||
|
But it often gets really confusing as to why JPA does a certain thing. |
||||||
|
Also, things that are really simple conceptually get rather difficult with JPA. |
||||||
|
|
||||||
|
Spring Data JDBC aims to be much simpler conceptually, by embracing the following design decisions: |
||||||
|
|
||||||
|
* If you load an entity, SQL statements get run. |
||||||
|
Once this is done, you have a completely loaded entity. |
||||||
|
No lazy loading or caching is done. |
||||||
|
|
||||||
|
* If you save an entity, it gets saved. |
||||||
|
If you do not, it does not. |
||||||
|
There is no dirty tracking and no session. |
||||||
|
|
||||||
|
* There is a simple model of how to map entities to tables. |
||||||
|
It probably only works for rather simple cases. |
||||||
|
If you do not like that, you should code your own strategy. |
||||||
|
Spring Data JDBC offers only very limited support for customizing the strategy with annotations. |
||||||
|
|
||||||
@ -0,0 +1 @@ |
|||||||
|
include::{commons}@data-commons::page$kotlin.adoc[] |
||||||
@ -0,0 +1 @@ |
|||||||
|
include::{commons}@data-commons::page$kotlin/coroutines.adoc[] |
||||||
@ -0,0 +1 @@ |
|||||||
|
include::{commons}@data-commons::page$kotlin/extensions.adoc[] |
||||||
@ -0,0 +1 @@ |
|||||||
|
include::{commons}@data-commons::page$kotlin/null-safety.adoc[] |
||||||
@ -0,0 +1 @@ |
|||||||
|
include::{commons}@data-commons::page$kotlin/object-mapping.adoc[] |
||||||
@ -0,0 +1 @@ |
|||||||
|
include::{commons}@data-commons::page$kotlin/requirements.adoc[] |
||||||
@ -0,0 +1 @@ |
|||||||
|
include::{commons}@data-commons::page$object-mapping.adoc[] |
||||||
@ -0,0 +1,16 @@ |
|||||||
|
[[r2dbc.repositories]] |
||||||
|
= R2DBC |
||||||
|
:page-section-summary-toc: 1 |
||||||
|
|
||||||
|
The Spring Data R2DBC module applies core Spring concepts to the development of solutions that use R2DBC database drivers aligned with xref:jdbc/domain-driven-design.adoc[Domain-driven design principles]. |
||||||
|
We provide a "`template`" as a high-level abstraction for storing and querying aggregates. |
||||||
|
|
||||||
|
This document is the reference guide for Spring Data R2DBC support. |
||||||
|
It explains the module's concepts, semantics, and syntax.
||||||
|
|
||||||
|
This chapter points out the specialties for repository support for R2DBC.
||||||
|
This builds on the core repository support explained in xref:repositories/introduction.adoc[Working with Spring Data Repositories]. |
||||||
|
You should have a sound understanding of the basic concepts explained there. |
||||||
|
|
||||||
|
|
||||||
|
|
||||||
@ -0,0 +1,15 @@ |
|||||||
|
[[r2dbc.core]] |
||||||
|
= R2DBC Core Support |
||||||
|
:page-section-summary-toc: 1 |
||||||
|
|
||||||
|
R2DBC contains a wide range of features: |
||||||
|
|
||||||
|
* Spring configuration support with Java-based `@Configuration` classes for an R2DBC driver instance. |
||||||
|
* `R2dbcEntityTemplate` as the central class for entity-bound operations, which increases productivity when performing common R2DBC operations with integrated object mapping between rows and POJOs.
||||||
|
* Feature-rich object mapping integrated with Spring's Conversion Service. |
||||||
|
* Annotation-based mapping metadata that is extensible to support other metadata formats. |
||||||
|
* Automatic implementation of Repository interfaces, including support for custom query methods. |
||||||
|
|
||||||
|
For most tasks, you should use `R2dbcEntityTemplate` or the repository support, which both use the rich mapping functionality. |
||||||
|
`R2dbcEntityTemplate` is the place to look for accessing functionality such as ad-hoc CRUD operations. |
||||||
|
|
||||||
@ -1,7 +1,7 @@ |
|||||||
[[r2dbc.entity-callbacks]] |
[[r2dbc.entity-callbacks]] |
||||||
= Store specific EntityCallbacks |
= EntityCallbacks |
||||||
|
|
||||||
Spring Data R2DBC uses the `EntityCallback` API for its auditing support and reacts on the following callbacks. |
Spring Data R2DBC uses the xref:commons/entity-callbacks.adoc[`EntityCallback` API] for its auditing support and reacts on the following callbacks. |
||||||
|
|
||||||
.Supported Entity Callbacks |
.Supported Entity Callbacks |
||||||
[%header,cols="4"] |
[%header,cols="4"] |
||||||
@ -0,0 +1,38 @@ |
|||||||
|
[[r2dbc.repositories.queries.query-by-example]] |
||||||
|
= Query By Example |
||||||
|
|
||||||
|
Spring Data R2DBC also lets you use xref:query-by-example.adoc[Query By Example] to fashion queries. |
||||||
|
This technique allows you to use a "probe" object. |
||||||
|
Essentially, any field that isn't empty or `null` will be used to match. |
||||||
|
|
||||||
|
Here's an example: |
||||||
|
|
||||||
|
[source,java,indent=0] |
||||||
|
---- |
||||||
|
include::example$r2dbc/QueryByExampleTests.java[tag=example] |
||||||
|
---- |
||||||
|
|
||||||
|
<1> Create a domain object with the criteria (`null` fields will be ignored). |
||||||
|
<2> Using the domain object, create an `Example`. |
||||||
|
<3> Through the `R2dbcRepository`, execute query (use `findOne` for a `Mono`). |
||||||
|
|
||||||
|
This illustrates how to craft a simple probe using a domain object. |
||||||
|
In this case, it will query based on the `Employee` object's `name` field being equal to `Frodo`. |
||||||
|
`null` fields are ignored. |
||||||
|
|
||||||
|
[source,java,indent=0] |
||||||
|
---- |
||||||
|
include::example$r2dbc/QueryByExampleTests.java[tag=example-2] |
||||||
|
---- |
||||||
|
|
||||||
|
<1> Create a custom `ExampleMatcher` that matches on ALL fields (use `matchingAny()` to match on *ANY* fields) |
||||||
|
<2> For the `name` field, use a wildcard that matches against the end of the field |
||||||
|
<3> Match columns against `null` (don't forget that `NULL` doesn't equal `NULL` in relational databases). |
||||||
|
<4> Ignore the `role` field when forming the query. |
||||||
|
<5> Plug the custom `ExampleMatcher` into the probe. |
||||||
|
|
||||||
|
It's also possible to apply a `withTransform()` against any property, allowing you to transform a property before forming the query. |
||||||
|
For example, you can apply `toUpperCase()` to a `String`-based property before the query is created.
||||||
|
|
||||||
|
Query By Example really shines when you don't know all the fields needed in a query in advance. |
||||||
|
If you were building a filter on a web page where the user can pick the fields, Query By Example is a great way to flexibly capture that into an efficient query. |
||||||
@ -0,0 +1,208 @@ |
|||||||
|
[[r2dbc.repositories.queries]] |
||||||
|
= Query Methods |
||||||
|
|
||||||
|
Most of the data access operations you usually trigger on a repository result in a query being run against the database.
||||||
|
Defining such a query is a matter of declaring a method on the repository interface, as the following example shows: |
||||||
|
|
||||||
|
.PersonRepository with query methods |
||||||
|
==== |
||||||
|
[source,java] |
||||||
|
---- |
||||||
|
interface ReactivePersonRepository extends ReactiveSortingRepository<Person, Long> { |
||||||
|
|
||||||
|
Flux<Person> findByFirstname(String firstname); <1> |
||||||
|
|
||||||
|
Flux<Person> findByFirstname(Publisher<String> firstname); <2> |
||||||
|
|
||||||
|
Flux<Person> findByFirstnameOrderByLastname(String firstname, Pageable pageable); <3> |
||||||
|
|
||||||
|
Mono<Person> findByFirstnameAndLastname(String firstname, String lastname); <4> |
||||||
|
|
||||||
|
Mono<Person> findFirstByLastname(String lastname); <5> |
||||||
|
|
||||||
|
@Query("SELECT * FROM person WHERE lastname = :lastname") |
||||||
|
Flux<Person> findByLastname(String lastname); <6> |
||||||
|
|
||||||
|
@Query("SELECT firstname, lastname FROM person WHERE lastname = $1") |
||||||
|
Mono<Person> findFirstByLastname(String lastname); <7> |
||||||
|
} |
||||||
|
---- |
||||||
|
|
||||||
|
<1> The method shows a query for all people with the given `firstname`. |
||||||
|
The query is derived by parsing the method name for constraints that can be concatenated with `And` and `Or`. |
||||||
|
Thus, the method name results in a query expression of `SELECT … FROM person WHERE firstname = :firstname`. |
||||||
|
<2> The method shows a query for all people with the given `firstname` once the `firstname` is emitted by the given `Publisher`. |
||||||
|
<3> Use `Pageable` to pass offset and sorting parameters to the database. |
||||||
|
<4> Find a single entity for the given criteria. |
||||||
|
It completes with `IncorrectResultSizeDataAccessException` on non-unique results. |
||||||
|
<5> In contrast to <4>, the first entity is always emitted even if the query yields more result rows.
||||||
|
<6> The `findByLastname` method shows a query for all people with the given last name. |
||||||
|
<7> A query for a single `Person` entity projecting only `firstname` and `lastname` columns. |
||||||
|
The annotated query uses native bind markers, which are Postgres bind markers in this example. |
||||||
|
==== |
||||||
|
|
||||||
|
Note that the columns of a select statement used in a `@Query` annotation must match the names generated by the `NamingStrategy` for the respective property. |
||||||
|
If a select statement does not include a matching column, that property is not set. |
||||||
|
If that property is required by the persistence constructor, either null or (for primitive types) the default value is provided. |
||||||
|
|
||||||
|
The following table shows the keywords that are supported for query methods: |
||||||
|
|
||||||
|
[cols="1,2,3",options="header",subs="quotes"] |
||||||
|
.Supported keywords for query methods |
||||||
|
|=== |
||||||
|
| Keyword |
||||||
|
| Sample |
||||||
|
| Logical result |
||||||
|
|
||||||
|
| `After` |
||||||
|
| `findByBirthdateAfter(Date date)` |
||||||
|
| `birthdate > date` |
||||||
|
|
||||||
|
| `GreaterThan` |
||||||
|
| `findByAgeGreaterThan(int age)` |
||||||
|
| `age > age` |
||||||
|
|
||||||
|
| `GreaterThanEqual` |
||||||
|
| `findByAgeGreaterThanEqual(int age)` |
||||||
|
| `age >= age` |
||||||
|
|
||||||
|
| `Before` |
||||||
|
| `findByBirthdateBefore(Date date)` |
||||||
|
| `birthdate < date` |
||||||
|
|
||||||
|
| `LessThan` |
||||||
|
| `findByAgeLessThan(int age)` |
||||||
|
| `age < age` |
||||||
|
|
||||||
|
| `LessThanEqual` |
||||||
|
| `findByAgeLessThanEqual(int age)` |
||||||
|
| `age \<= age` |
||||||
|
|
||||||
|
| `Between` |
||||||
|
| `findByAgeBetween(int from, int to)` |
||||||
|
| `age BETWEEN from AND to` |
||||||
|
|
||||||
|
| `NotBetween` |
||||||
|
| `findByAgeNotBetween(int from, int to)` |
||||||
|
| `age NOT BETWEEN from AND to` |
||||||
|
|
||||||
|
| `In` |
||||||
|
| `findByAgeIn(Collection<Integer> ages)` |
||||||
|
| `age IN (age1, age2, ageN)` |
||||||
|
|
||||||
|
| `NotIn` |
||||||
|
| `findByAgeNotIn(Collection ages)` |
||||||
|
| `age NOT IN (age1, age2, ageN)` |
||||||
|
|
||||||
|
| `IsNotNull`, `NotNull` |
||||||
|
| `findByFirstnameNotNull()` |
||||||
|
| `firstname IS NOT NULL` |
||||||
|
|
||||||
|
| `IsNull`, `Null` |
||||||
|
| `findByFirstnameNull()` |
||||||
|
| `firstname IS NULL` |
||||||
|
|
||||||
|
| `Like`, `StartingWith`, `EndingWith` |
||||||
|
| `findByFirstnameLike(String name)` |
||||||
|
| `firstname LIKE name` |
||||||
|
|
||||||
|
| `NotLike`, `IsNotLike` |
||||||
|
| `findByFirstnameNotLike(String name)` |
||||||
|
| `firstname NOT LIKE name` |
||||||
|
|
||||||
|
| `Containing` on String |
||||||
|
| `findByFirstnameContaining(String name)` |
||||||
|
| `firstname LIKE '%' + name +'%'` |
||||||
|
|
||||||
|
| `NotContaining` on String |
||||||
|
| `findByFirstnameNotContaining(String name)` |
||||||
|
| `firstname NOT LIKE '%' + name +'%'` |
||||||
|
|
||||||
|
| `(No keyword)` |
||||||
|
| `findByFirstname(String name)` |
||||||
|
| `firstname = name` |
||||||
|
|
||||||
|
| `Not` |
||||||
|
| `findByFirstnameNot(String name)` |
||||||
|
| `firstname != name` |
||||||
|
|
||||||
|
| `IsTrue`, `True` |
||||||
|
| `findByActiveIsTrue()` |
||||||
|
| `active IS TRUE` |
||||||
|
|
||||||
|
| `IsFalse`, `False` |
||||||
|
| `findByActiveIsFalse()` |
||||||
|
| `active IS FALSE` |
||||||
|
|=== |
||||||
|
|
||||||
|
[[r2dbc.repositories.modifying]] |
||||||
|
== Modifying Queries |
||||||
|
|
||||||
|
The previous sections describe how to declare queries to access a given entity or collection of entities. |
||||||
|
Keywords from the preceding table can be used in conjunction with `delete…By` or `remove…By` to create derived queries that delete matching rows.
||||||
|
|
||||||
|
.`Delete…By` Query |
||||||
|
==== |
||||||
|
[source,java] |
||||||
|
---- |
||||||
|
interface ReactivePersonRepository extends ReactiveSortingRepository<Person, String> { |
||||||
|
|
||||||
|
Mono<Integer> deleteByLastname(String lastname); <1> |
||||||
|
|
||||||
|
Mono<Void> deletePersonByLastname(String lastname); <2> |
||||||
|
|
||||||
|
Mono<Boolean> deletePersonByLastname(String lastname); <3> |
||||||
|
} |
||||||
|
---- |
||||||
|
|
||||||
|
<1> Using a return type of `Mono<Integer>` returns the number of affected rows. |
||||||
|
<2> Using `Void` just reports whether the rows were successfully deleted without emitting a result value. |
||||||
|
<3> Using `Boolean` reports whether at least one row was removed. |
||||||
|
==== |
||||||
|
|
||||||
|
While this approach is feasible for comprehensive custom functionality, you can modify queries that only need parameter binding by annotating the query method with `@Modifying`, as shown in the following example:
||||||
|
|
||||||
|
[source,java,indent=0] |
||||||
|
---- |
||||||
|
include::example$r2dbc/PersonRepository.java[tags=atModifying] |
||||||
|
---- |
||||||
|
|
||||||
|
The result of a modifying query can be: |
||||||
|
|
||||||
|
* `Void` (or Kotlin `Unit`) to discard update count and await completion. |
||||||
|
* `Integer` or another numeric type emitting the affected rows count. |
||||||
|
* `Boolean` to emit whether at least one row was updated. |
||||||
|
|
||||||
|
The `@Modifying` annotation is only relevant in combination with the `@Query` annotation. |
||||||
|
Derived custom methods do not require this annotation. |
||||||
|
|
||||||
|
Modifying queries are executed directly against the database. |
||||||
|
No events or callbacks get called. |
||||||
|
Therefore, fields with auditing annotations also do not get updated unless the annotated query itself updates them.
||||||
|
|
||||||
|
Alternatively, you can add custom modifying behavior by using the facilities described in xref:repositories/custom-implementations.adoc[Custom Implementations for Spring Data Repositories]. |
||||||
|
|
||||||
|
[[r2dbc.repositories.queries.spel]]
=== Queries with SpEL Expressions

Query string definitions can be used together with SpEL expressions to create dynamic queries at runtime.
SpEL expressions can provide predicate values, which are evaluated right before the query runs.

Expressions expose method arguments through an array that contains all the arguments.
The following query uses `[0]` to declare the predicate value for `lastname` (which is equivalent to the `:lastname` parameter binding):

[source,java,indent=0]
----
include::example$r2dbc/PersonRepository.java[tags=spel]
----

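The included example is likewise not reproduced here. Following the `[0]` convention described above, a sketch of such a query method might look like this; the `person` table name is an assumption carried over from the earlier examples:

[source,java]
----
@Query("SELECT * FROM person WHERE lastname = :#{[0]}")
Flux<Person> findByQueryWithExpression(String lastname);
----
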
SpEL in query strings can be a powerful way to enhance queries.
However, SpEL expressions can also accept a broad range of unwanted arguments.
Make sure to sanitize strings before passing them to the query, to avoid unwanted changes to your query.

Expression support is extensible through the Query SPI: `org.springframework.data.spel.spi.EvaluationContextExtension`.
The Query SPI can contribute properties and functions and can customize the root object.
Extensions are retrieved from the application context at the time of SpEL evaluation when the query is built.

TIP: When using SpEL expressions in combination with plain parameters, use named parameter notation instead of native bind markers to ensure a proper binding order.

@ -0,0 +1,178 @@
[[r2dbc.repositories]]
= R2DBC Repositories

[[r2dbc.repositories.intro]]
This chapter points out the specialties for repository support for R2DBC.
It builds on the core repository support explained in xref:repositories/introduction.adoc[Working with Spring Data Repositories].
Before reading this chapter, you should have a sound understanding of the basic concepts explained there.

[[r2dbc.repositories.usage]]
== Usage

To access domain entities stored in a relational database, you can use our sophisticated repository support, which eases implementation quite significantly.
To do so, create an interface for your repository.
Consider the following `Person` class:

.Sample Person entity
[source,java]
----
public class Person {

  @Id
  private Long id;
  private String firstname;
  private String lastname;

  // … getters and setters omitted
}
----

The following example shows a repository interface for the preceding `Person` class:

.Basic repository interface to persist Person entities
[source,java]
----
public interface PersonRepository extends ReactiveCrudRepository<Person, Long> {

  // additional custom query methods go here
}
----

To configure R2DBC repositories, you can use the `@EnableR2dbcRepositories` annotation.
If no base package is configured, the infrastructure scans the package of the annotated configuration class.
The following example shows how to use Java configuration for a repository:

.Java configuration for repositories
[source,java]
----
@Configuration
@EnableR2dbcRepositories
class ApplicationConfig extends AbstractR2dbcConfiguration {

  @Override
  public ConnectionFactory connectionFactory() {
    return …
  }
}
----

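The `connectionFactory()` body is deliberately elided above. Purely as an illustration, and assuming the `io.r2dbc:r2dbc-h2` driver is on the classpath, the method could obtain the factory from an R2DBC URL:

[source,java]
----
@Override
public ConnectionFactory connectionFactory() {
  // Illustrative only: an in-memory H2 database resolved through the R2DBC SPI
  return ConnectionFactories.get("r2dbc:h2:mem:///testdb");
}
----
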
Because our domain repository extends `ReactiveCrudRepository`, it provides you with reactive CRUD operations to access the entities.
On top of `ReactiveCrudRepository`, there is also `ReactiveSortingRepository`, which adds sorting functionality similar to that of `PagingAndSortingRepository`.
Working with the repository instance is merely a matter of injecting it into a client as a dependency.
Consequently, you can retrieve all `Person` objects with the following code:

.Paging access to Person entities
[source,java,indent=0]
----
include::example$r2dbc/PersonRepositoryTests.java[tags=class]
----

The preceding example creates an application context with Spring's unit test support, which performs annotation-based dependency injection into test cases.
Inside the test method, we use the repository to query the database.
We use `StepVerifier` as a test aid to verify our expectations against the results.

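The referenced test class lives in the examples module and is not shown in this diff. A minimal sketch of such a test, assuming JUnit 5 and the configuration class shown earlier, could look like this:

[source,java]
----
@ExtendWith(SpringExtension.class)
@ContextConfiguration(classes = ApplicationConfig.class)
class PersonRepositoryTests {

  @Autowired PersonRepository repository;

  @Test
  void readsAllEntities() {

    repository.findAll()
        .as(StepVerifier::create)
        .expectNextCount(1)    // depends on the test data present in the database
        .verifyComplete();
  }
}
----
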
[[r2dbc.entity-persistence.state-detection-strategies]]
include::{commons}@data-commons::page$is-new-state-detection.adoc[leveloffset=+1]

[[r2dbc.entity-persistence.id-generation]]
=== ID Generation

Spring Data R2DBC uses the ID to identify entities.
The ID of an entity must be annotated with Spring Data's https://docs.spring.io/spring-data/commons/docs/current/api/org/springframework/data/annotation/Id.html[`@Id`] annotation.

When your database has an auto-increment column for the ID column, the generated value gets set in the entity after inserting it into the database.

Spring Data R2DBC does not attempt to insert values of identifier columns when the entity is new and the identifier value defaults to its initial value.
That is `0` for primitive types and `null` if the identifier property uses a numeric wrapper type such as `Long`.

One important constraint is that, after saving an entity, the entity must not be new anymore.
Note that whether an entity is new is part of the entity's state.
With auto-increment columns, this happens automatically, because the ID gets set by Spring Data with the value from the ID column.

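As a short illustration (the setters and the `getId()` accessor were omitted from the `Person` sample above and are assumed here), a new entity with a `null` identifier receives the generated value once the save completes:

[source,java]
----
Person person = new Person();        // id is null, so the entity is considered new
person.setFirstname("Eddard");
person.setLastname("Stark");

repository.save(person)
    .map(Person::getId)              // populated from the auto-increment column
    .subscribe(id -> System.out.println("generated id: " + id));
----
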
[[r2dbc.optimistic-locking]]
=== Optimistic Locking

The `@Version` annotation provides syntax similar to that of JPA in the context of R2DBC and makes sure updates are only applied to rows with a matching version.
Therefore, the actual value of the version property is added to the update query in such a way that the update does not have any effect if another operation altered the row in the meantime.
In that case, an `OptimisticLockingFailureException` is thrown.
The following example shows these features:

[source,java]
----
@Table
class Person {

  @Id Long id;
  String firstname;
  String lastname;
  @Version Long version;
}

R2dbcEntityTemplate template = …;

Person daenerys = template.insert(new Person("Daenerys")).block();         <1>

Person other = template.select(Person.class)
    .matching(query(where("id").is(daenerys.getId())))
    .first().block();                                                       <2>

daenerys.setLastname("Targaryen");
template.update(daenerys).block();                                          <3>

template.update(other).subscribe(); // emits OptimisticLockingFailureException <4>
----
<1> Initially insert a row. `version` is set to `0`.
<2> Load the just-inserted row. `version` is still `0`.
<3> Update the row with `version = 0`. Set the `lastname` and bump `version` to `1`.
<4> Try to update the previously loaded row, which still has `version = 0`. The operation fails with an `OptimisticLockingFailureException`, because the current `version` is `1`.

[[projections.resultmapping]]
==== Result Mapping

A query method returning an interface or DTO projection is backed by results produced by the actual query.
Interface projections generally rely on mapping results onto the domain type first, to consider potential `@Column` type mappings, and the actual projection proxy uses a potentially partially materialized entity to expose projection data.

Result mapping for DTO projections depends on the actual query type.
Derived queries use the domain type to map results, and Spring Data creates DTO instances solely from properties available on the domain type.
Declaring properties in your DTO that are not available on the domain type is not supported.

String-based queries use a different approach, since the actual query (specifically the field projection) and the result type declaration are close together.
DTO projections used with query methods annotated with `@Query` map query results directly into the DTO type.
Field mappings on the domain type are not considered.
Using the DTO type directly, your query method can benefit from a more dynamic projection that isn't restricted to the domain model.

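To make the distinction concrete, the following sketch contrasts an interface projection with a DTO projection bound to a string-based query; the types and the query are illustrative assumptions, not taken from the examples module:

[source,java]
----
// Interface projection: results are mapped onto the domain type first,
// then exposed through a proxy backed by the (partially) materialized entity
interface PersonSummary {
  String getFirstname();
  String getLastname();
}

// DTO projection: with @Query, columns are mapped directly onto the DTO properties
record PersonDto(String firstname, String lastname) {
}

interface PersonRepository extends ReactiveCrudRepository<Person, Long> {

  Flux<PersonSummary> findByLastname(String lastname);

  @Query("SELECT firstname, lastname FROM person WHERE lastname = :lastname")
  Flux<PersonDto> findDtoByLastname(String lastname);
}
----
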
[[r2dbc.multiple-databases]]
== Working with multiple Databases

When working with multiple, potentially different databases, your application requires a different approach to configuration.
The provided `AbstractR2dbcConfiguration` support class assumes a single `ConnectionFactory` from which the `Dialect` gets derived.
That being said, you need to define a few beans yourself to configure Spring Data R2DBC to work with multiple databases.

R2DBC repositories require `R2dbcEntityOperations` to implement repositories.
A simple configuration to scan for repositories without using `AbstractR2dbcConfiguration` looks like this:

[source,java]
----
@Configuration
@EnableR2dbcRepositories(basePackages = "com.acme.mysql", entityOperationsRef = "mysqlR2dbcEntityOperations")
static class MySQLConfiguration {

  @Bean
  @Qualifier("mysql")
  public ConnectionFactory mysqlConnectionFactory() {
    return …
  }

  @Bean
  public R2dbcEntityOperations mysqlR2dbcEntityOperations(@Qualifier("mysql") ConnectionFactory connectionFactory) {

    DatabaseClient databaseClient = DatabaseClient.create(connectionFactory);

    return new R2dbcEntityTemplate(databaseClient, MySqlDialect.INSTANCE);
  }
}
----

Note that `@EnableR2dbcRepositories` allows configuration either through `databaseClientRef` or through `entityOperationsRef`.
Using various `DatabaseClient` beans is useful when connecting to multiple databases of the same type.
When using different database systems that differ in their dialect, use `@EnableR2dbcRepositories(entityOperationsRef = …)` instead.

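A second configuration class for another database follows the same pattern. The sketch below is illustrative only; the package name, the PostgreSQL connection URL, and the bean names are assumptions made for the example:

[source,java]
----
@Configuration
@EnableR2dbcRepositories(basePackages = "com.acme.postgres", entityOperationsRef = "postgresR2dbcEntityOperations")
static class PostgresConfiguration {

  @Bean
  @Qualifier("postgres")
  public ConnectionFactory postgresConnectionFactory() {
    // Assumes the r2dbc-postgresql driver is on the classpath
    return ConnectionFactories.get("r2dbc:postgresql://localhost:5432/mydb");
  }

  @Bean
  public R2dbcEntityOperations postgresR2dbcEntityOperations(@Qualifier("postgres") ConnectionFactory connectionFactory) {

    DatabaseClient databaseClient = DatabaseClient.create(connectionFactory);

    return new R2dbcEntityTemplate(databaseClient, PostgresDialect.INSTANCE);
  }
}
----
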
@ -0,0 +1 @@
include::{commons}@data-commons::page$auditing.adoc[leveloffset=+1]
@ -0,0 +1 @@
include::{commons}@data-commons::page$repositories/core-concepts.adoc[]
@ -0,0 +1 @@
include::{commons}@data-commons::page$repositories/core-domain-events.adoc[]
@ -0,0 +1 @@
include::{commons}@data-commons::page$repositories/core-extensions.adoc[]
@ -0,0 +1 @@
include::{commons}@data-commons::page$repositories/create-instances.adoc[]
@ -0,0 +1 @@
include::{commons}@data-commons::page$repositories/custom-implementations.adoc[]
@ -0,0 +1 @@
include::{commons}@data-commons::page$repositories/definition.adoc[]
@ -0,0 +1,8 @@
[[common.basics]]
= Introduction
:page-section-summary-toc: 1

This chapter explains the basic foundations of Spring Data repositories.
Before continuing to the JDBC or R2DBC specifics, make sure you have a sound understanding of the basic concepts explained here.

The goal of the Spring Data repository abstraction is to significantly reduce the amount of boilerplate code required to implement data access layers for various persistence stores.
@ -0,0 +1 @@
include::{commons}@data-commons::page$repositories/null-handling.adoc[]
@ -0,0 +1,4 @@
[[relational.projections]]
= Projections

include::{commons}@data-commons::page$repositories/projections.adoc[leveloffset=+1]
@ -0,0 +1 @@
include::{commons}@data-commons::page$repositories/query-keywords-reference.adoc[]
@ -0,0 +1 @@
include::{commons}@data-commons::page$repositories/query-methods-details.adoc[]
@ -0,0 +1 @@
include::{commons}@data-commons::page$repositories/query-return-types-reference.adoc[]
@ -0,0 +1,22 @@
version: ${antora-component.version}
prerelease: ${antora-component.prerelease}

asciidoc:
  attributes:
    version: ${project.version}
    springversionshort: ${spring.short}
    springversion: ${spring}
    attribute-missing: 'warn'
    commons: ${springdata.commons.docs}
    include-xml-namespaces: false
    spring-data-commons-docs-url: https://docs.spring.io/spring-data-commons/reference
    spring-data-commons-javadoc-base: https://docs.spring.io/spring-data/commons/docs/${springdata.commons}/api/
    spring-data-jdbc-javadoc: https://docs.spring.io/spring-data/jdbc/docs/${version}/api/
    spring-data-r2dbc-javadoc: https://docs.spring.io/spring-data/r2dbc/docs/${version}/api/
    springdocsurl: https://docs.spring.io/spring-framework/reference/{springversionshort}
    springjavadocurl: https://docs.spring.io/spring-framework/docs/${spring}/javadoc-api
    spring-framework-docs: '{springdocsurl}'
    spring-framework-javadoc: '{springjavadocurl}'
    springhateoasversion: ${spring-hateoas}
    releasetrainversion: ${releasetrain}
    store: Jdbc
@ -1,19 +0,0 @@
[[glossary]]
[appendix,glossary]
= Glossary

AOP::
Aspect-Oriented Programming

CRUD::
Create, Read, Update, Delete - Basic persistence operations

Dependency Injection::
Pattern to hand a component's dependencies to the component from the outside, so that the component does not have to look up its dependencies itself.
For more information, see link:$$https://en.wikipedia.org/wiki/Dependency_Injection$$[https://en.wikipedia.org/wiki/Dependency_Injection].

JPA::
Java Persistence API

Spring::
Java application framework -- link:$$https://projects.spring.io/spring-framework$$[https://projects.spring.io/spring-framework]
@ -1,36 +0,0 @@
= Spring Data JDBC - Reference Documentation
Jens Schauder, Jay Bryant, Mark Paluch, Bastian Wilhelm
:revnumber: {version}
:revdate: {localdate}
:javadoc-base: https://docs.spring.io/spring-data/jdbc/docs/{revnumber}/api/
ifdef::backend-epub3[:front-cover-image: image:epub-cover.png[Front Cover,1050,1600]]
:spring-data-commons-docs: ../../../../spring-data-commons/src/main/asciidoc
:spring-framework-docs: https://docs.spring.io/spring-framework/docs/{springVersion}/reference/html
:include-xml-namespaces: false

(C) 2018-2022 The original authors.

NOTE: Copies of this document may be made for your own use and for distribution to others, provided that you do not charge any fee for such copies and further provided that each copy contains this Copyright Notice, whether distributed in print or electronically.

include::preface.adoc[]

include::{spring-data-commons-docs}/upgrade.adoc[leveloffset=+1]
include::{spring-data-commons-docs}/dependencies.adoc[leveloffset=+1]
include::{spring-data-commons-docs}/repositories.adoc[leveloffset=+1]

[[reference]]
= Reference Documentation

include::jdbc.adoc[leveloffset=+1]
include::schema-support.adoc[leveloffset=+1]

[[appendix]]
= Appendix

:numbered!:
include::glossary.adoc[leveloffset=+1]
include::{spring-data-commons-docs}/repository-populator-namespace-reference.adoc[leveloffset=+1]
include::{spring-data-commons-docs}/repository-query-keywords-reference.adoc[leveloffset=+1]
include::{spring-data-commons-docs}/repository-query-return-types-reference.adoc[leveloffset=+1]
@ -1,82 +0,0 @@
[[preface]]
= Preface

The Spring Data JDBC project applies core Spring concepts to the development of solutions that use JDBC databases aligned with <<jdbc.domain-driven-design,Domain-driven design principles>>.
We provide a "`template`" as a high-level abstraction for storing and querying aggregates.

This document is the reference guide for Spring Data JDBC support.
It explains the concepts, semantics, and syntax.

This section provides some basic introduction.
The rest of the document refers only to Spring Data JDBC features and assumes the user is familiar with SQL and Spring concepts.

[[get-started:first-steps:spring]]
== Learning Spring

Spring Data uses Spring framework's {spring-framework-docs}/core.html[core] functionality, including:

* {spring-framework-docs}/core.html#beans[IoC] container
* {spring-framework-docs}/core.html#validation[type conversion system]
* {spring-framework-docs}/core.html#expressions[expression language]
* {spring-framework-docs}/integration.html#jmx[JMX integration]
* {spring-framework-docs}/data-access.html#dao-exceptions[DAO exception hierarchy]

While you need not know the Spring APIs, understanding the concepts behind them is important.
At a minimum, the idea behind Inversion of Control (IoC) should be familiar, and you should be familiar with whatever IoC container you choose to use.

The core functionality of the JDBC Aggregate support can be used directly, with no need to invoke the IoC services of the Spring Container.
This is much like `JdbcTemplate`, which can be used "`standalone`" without any other services of the Spring container.
To leverage all the features of Spring Data JDBC, such as the repository support, you need to configure some parts of the library to use Spring.

To learn more about Spring, you can refer to the comprehensive documentation that explains the Spring Framework in detail.
There are a lot of articles, blog entries, and books on the subject.
See the Spring framework https://spring.io/docs[home page] for more information.

[[requirements]]
== Requirements

The Spring Data JDBC binaries require JDK level 8.0 and above, as well as https://spring.io/docs[Spring Framework] {springVersion} and above.

In terms of databases, Spring Data JDBC requires a <<jdbc.dialects,dialect>> to abstract common SQL functionality over vendor-specific flavours.
Spring Data JDBC includes direct support for the following databases:

* DB2
* H2
* HSQLDB
* MariaDB
* Microsoft SQL Server
* MySQL
* Oracle
* Postgres

If you use a different database, your application won't start up. The <<jdbc.dialects,dialect>> section contains further detail on how to proceed in that case.

[[get-started:help]]
== Additional Help Resources

Learning a new framework is not always straightforward.
In this section, we try to provide what we think is an easy-to-follow guide for starting with the Spring Data JDBC module.
However, if you encounter issues or you need advice, feel free to use one of the following links:

[[get-started:help:community]]
Community Forum :: Spring Data on https://stackoverflow.com/questions/tagged/spring-data[Stack Overflow] is a tag for all Spring Data (not just Document) users to share information and help each other.
Note that registration is needed only for posting.

[[get-started:help:professional]]
Professional Support :: Professional, from-the-source support, with guaranteed response time, is available from https://pivotal.io/[Pivotal Software, Inc.], the company behind Spring Data and Spring.

[[get-started:up-to-date]]
== Following Development

For information on the Spring Data JDBC source code repository, nightly builds, and snapshot artifacts, see the Spring Data JDBC https://spring.io/projects/spring-data-jdbc/[homepage].
You can help make Spring Data best serve the needs of the Spring community by interacting with developers through the community on https://stackoverflow.com/questions/tagged/spring-data[Stack Overflow].
If you encounter a bug or want to suggest an improvement, please create a ticket on the https://github.com/spring-projects/spring-data-jdbc/issues[Spring Data issue tracker].
To stay up to date with the latest news and announcements in the Spring ecosystem, subscribe to the Spring Community https://spring.io[Portal].
You can also follow the Spring https://spring.io/blog[blog] or the project team on Twitter (https://twitter.com/SpringData[SpringData]).

[[project]]
== Project Metadata

* Release repository: https://repo1.maven.org/maven2/
* Milestone repository: https://repo.spring.io/milestone
* Snapshot repository: https://repo.spring.io/snapshot