
Migrate documentation to Antora.

Closes #1597
pull/1619/head
Mark Paluch 2 years ago
parent
commit
4202b090ed
GPG Key ID: 4406B84C1661DCD1
  1. 6
      .gitignore
  2. 4
      README.adoc
  3. 23
      spring-data-jdbc-distribution/pom.xml
  4. 54
      spring-data-r2dbc/src/main/asciidoc/index.adoc
  5. 122
      spring-data-r2dbc/src/main/asciidoc/preface.adoc
  6. 10
      spring-data-r2dbc/src/main/asciidoc/reference/introduction.adoc
  7. 442
      spring-data-r2dbc/src/main/asciidoc/reference/r2dbc-repositories.adoc
  8. 6
      spring-data-r2dbc/src/main/asciidoc/reference/r2dbc.adoc
  9. 6
      spring-data-r2dbc/src/test/java/org/springframework/data/r2dbc/documentation/PersonRepositoryTests.java
  10. 19
      spring-data-r2dbc/src/test/java/org/springframework/data/r2dbc/documentation/QueryByExampleTests.java
  11. 7
      spring-data-r2dbc/src/test/java/org/springframework/data/r2dbc/documentation/R2dbcApp.java
  12. 14
      spring-data-r2dbc/src/test/java/org/springframework/data/r2dbc/documentation/R2dbcEntityTemplateSnippets.java
  13. 42
      src/main/antora/antora-playbook.yml
  14. 12
      src/main/antora/antora.yml
  15. 1
      src/main/antora/modules/ROOT/examples/r2dbc
  16. 55
      src/main/antora/modules/ROOT/nav.adoc
  17. 1
      src/main/antora/modules/ROOT/pages/commons/custom-conversions.adoc
  18. 1
      src/main/antora/modules/ROOT/pages/commons/entity-callbacks.adoc
  19. 1
      src/main/antora/modules/ROOT/pages/commons/upgrade.adoc
  20. 21
      src/main/antora/modules/ROOT/pages/index.adoc
  21. 16
      src/main/antora/modules/ROOT/pages/jdbc.adoc
  22. 23
      src/main/antora/modules/ROOT/pages/jdbc/auditing.adoc
  23. 64
      src/main/antora/modules/ROOT/pages/jdbc/configuration.adoc
  24. 17
      src/main/antora/modules/ROOT/pages/jdbc/custom-conversions.adoc
  25. 26
      src/main/antora/modules/ROOT/pages/jdbc/domain-driven-design.adoc
  26. 44
      src/main/antora/modules/ROOT/pages/jdbc/entity-persistence.adoc
  27. 110
      src/main/antora/modules/ROOT/pages/jdbc/events.adoc
  28. 5
      src/main/antora/modules/ROOT/pages/jdbc/examples-repo.adoc
  29. 68
      src/main/antora/modules/ROOT/pages/jdbc/getting-started.adoc
  30. 28
      src/main/antora/modules/ROOT/pages/jdbc/loading-aggregates.adoc
  31. 28
      src/main/antora/modules/ROOT/pages/jdbc/locking.adoc
  32. 8
      src/main/antora/modules/ROOT/pages/jdbc/logging.adoc
  33. 273
      src/main/antora/modules/ROOT/pages/jdbc/mapping.adoc
  34. 120
      src/main/antora/modules/ROOT/pages/jdbc/mybatis.adoc
  35. 251
      src/main/antora/modules/ROOT/pages/jdbc/query-methods.adoc
  36. 0
      src/main/antora/modules/ROOT/pages/jdbc/schema-support.adoc
  37. 92
      src/main/antora/modules/ROOT/pages/jdbc/transactions.adoc
  38. 31
      src/main/antora/modules/ROOT/pages/jdbc/why.adoc
  39. 1
      src/main/antora/modules/ROOT/pages/kotlin.adoc
  40. 1
      src/main/antora/modules/ROOT/pages/kotlin/coroutines.adoc
  41. 1
      src/main/antora/modules/ROOT/pages/kotlin/extensions.adoc
  42. 1
      src/main/antora/modules/ROOT/pages/kotlin/null-safety.adoc
  43. 1
      src/main/antora/modules/ROOT/pages/kotlin/object-mapping.adoc
  44. 1
      src/main/antora/modules/ROOT/pages/kotlin/requirements.adoc
  45. 1
      src/main/antora/modules/ROOT/pages/object-mapping.adoc
  46. 10
      src/main/antora/modules/ROOT/pages/query-by-example.adoc
  47. 16
      src/main/antora/modules/ROOT/pages/r2dbc.adoc
  48. 4
      src/main/antora/modules/ROOT/pages/r2dbc/auditing.adoc
  49. 15
      src/main/antora/modules/ROOT/pages/r2dbc/core.adoc
  50. 4
      src/main/antora/modules/ROOT/pages/r2dbc/entity-callbacks.adoc
  51. 46
      src/main/antora/modules/ROOT/pages/r2dbc/getting-started.adoc
  52. 10
      src/main/antora/modules/ROOT/pages/r2dbc/kotlin.adoc
  53. 44
      src/main/antora/modules/ROOT/pages/r2dbc/mapping.adoc
  54. 2
      src/main/antora/modules/ROOT/pages/r2dbc/migration-guide.adoc
  55. 38
      src/main/antora/modules/ROOT/pages/r2dbc/query-by-example.adoc
  56. 208
      src/main/antora/modules/ROOT/pages/r2dbc/query-methods.adoc
  57. 178
      src/main/antora/modules/ROOT/pages/r2dbc/repositories.adoc
  58. 38
      src/main/antora/modules/ROOT/pages/r2dbc/template.adoc
  59. 1
      src/main/antora/modules/ROOT/pages/repositories/auditing.adoc
  60. 1
      src/main/antora/modules/ROOT/pages/repositories/core-concepts.adoc
  61. 1
      src/main/antora/modules/ROOT/pages/repositories/core-domain-events.adoc
  62. 1
      src/main/antora/modules/ROOT/pages/repositories/core-extensions.adoc
  63. 1
      src/main/antora/modules/ROOT/pages/repositories/create-instances.adoc
  64. 1
      src/main/antora/modules/ROOT/pages/repositories/custom-implementations.adoc
  65. 1
      src/main/antora/modules/ROOT/pages/repositories/definition.adoc
  66. 8
      src/main/antora/modules/ROOT/pages/repositories/introduction.adoc
  67. 1
      src/main/antora/modules/ROOT/pages/repositories/null-handling.adoc
  68. 4
      src/main/antora/modules/ROOT/pages/repositories/projections.adoc
  69. 1
      src/main/antora/modules/ROOT/pages/repositories/query-keywords-reference.adoc
  70. 1
      src/main/antora/modules/ROOT/pages/repositories/query-methods-details.adoc
  71. 1
      src/main/antora/modules/ROOT/pages/repositories/query-return-types-reference.adoc
  72. 22
      src/main/antora/resources/antora-resources/antora.yml
  73. 19
      src/main/asciidoc/glossary.adoc
  74. BIN
      src/main/asciidoc/images/epub-cover.png
  75. 8
      src/main/asciidoc/images/epub-cover.svg
  76. 36
      src/main/asciidoc/index.adoc
  77. 1165
      src/main/asciidoc/jdbc.adoc
  78. 82
      src/main/asciidoc/preface.adoc

6
.gitignore

@@ -11,7 +11,11 @@ target/
*.graphml
*.json
build/
node_modules
node
# prevent the license acceptance file from accidentally being committed to git
container-license-acceptance.txt
spring-data-jdbc/src/test/java/org/springframework/data/ProxyImageNameSubstitutor.java
spring-data-r2dbc/src/test/java/org/springframework/data/ProxyImageNameSubstitutor.java

4
README.adoc

@@ -184,10 +184,10 @@ Building the documentation builds also the project without running tests.
[source,bash]
----
$ ./mvnw clean install -Pdistribute
$ ./mvnw clean install -Pantora
----
The generated documentation is available from `target/site/reference/html/index.html`.
The generated documentation is available from `spring-data-jdbc-distribution/target/antora/site/index.html`.
== Modules

23
spring-data-jdbc-distribution/pom.xml

@@ -20,18 +20,33 @@
<properties>
<project.root>${basedir}/..</project.root>
<dist.key>SDJDBC</dist.key>
<antora.playbook>${project.basedir}/../src/main/antora/antora-playbook.yml</antora.playbook>
</properties>
<build>
<resources>
<resource>
<directory>${project.basedir}/../src/main/antora/resources/antora-resources</directory>
<filtering>true</filtering>
</resource>
</resources>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-assembly-plugin</artifactId>
<artifactId>maven-resources-plugin</artifactId>
<executions>
<execution>
<goals>
<goal>resources</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.asciidoctor</groupId>
<artifactId>asciidoctor-maven-plugin</artifactId>
<groupId>io.spring.maven.antora</groupId>
<artifactId>antora-maven-plugin</artifactId>
</plugin>
</plugins>
</build>

54
spring-data-r2dbc/src/main/asciidoc/index.adoc

@@ -1,54 +0,0 @@
= Spring Data R2DBC - Reference Documentation
Mark Paluch, Jay Bryant, Stephen Cohen
:revnumber: {version}
:revdate: {localdate}
ifdef::backend-epub3[:front-cover-image: image:epub-cover.png[Front Cover,1050,1600]]
:spring-data-commons-docs: ../../../../../spring-data-commons/src/main/asciidoc
:spring-data-r2dbc-javadoc: https://docs.spring.io/spring-data/r2dbc/docs/{version}/api
:spring-framework-ref: https://docs.spring.io/spring/docs/{springVersion}/reference/html
:reactiveStreamsJavadoc: https://www.reactive-streams.org/reactive-streams-{reactiveStreamsVersion}-javadoc
:example-root: ../../../src/test/java/org/springframework/data/r2dbc/documentation
:tabsize: 2
:include-xml-namespaces: false
(C) 2018-2022 The original authors.
NOTE: Copies of this document may be made for your own use and for distribution to others, provided that you do not charge any fee for such copies and further provided that each copy contains this Copyright Notice, whether distributed in print or electronically.
toc::[]
// The blank line before each include prevents content from running together in a bad way
// (because an included bit does not have its own blank lines).
include::preface.adoc[]
include::{spring-data-commons-docs}/upgrade.adoc[leveloffset=+1]
include::{spring-data-commons-docs}/dependencies.adoc[leveloffset=+1]
include::{spring-data-commons-docs}/repositories.adoc[leveloffset=+1]
[[reference]]
= Reference Documentation
include::reference/introduction.adoc[leveloffset=+1]
include::reference/r2dbc.adoc[leveloffset=+1]
include::reference/r2dbc-repositories.adoc[leveloffset=+1]
include::{spring-data-commons-docs}/auditing.adoc[leveloffset=+1]
include::reference/r2dbc-auditing.adoc[leveloffset=+1]
include::reference/mapping.adoc[leveloffset=+1]
include::reference/kotlin.adoc[leveloffset=+1]
[[appendix]]
= Appendix
:numbered!:
include::{spring-data-commons-docs}/repository-query-keywords-reference.adoc[leveloffset=+1]
include::{spring-data-commons-docs}/repository-query-return-types-reference.adoc[leveloffset=+1]
include::reference/r2dbc-upgrading.adoc[leveloffset=+1]

122
spring-data-r2dbc/src/main/asciidoc/preface.adoc

@@ -1,122 +0,0 @@
[[preface]]
= Preface
The Spring Data R2DBC project applies core Spring concepts to the development of solutions that use the https://r2dbc.io[R2DBC] drivers for relational databases.
We provide a `DatabaseClient` as a high-level abstraction for storing and querying rows.
This document is the reference guide for Spring Data - R2DBC Support.
It explains R2DBC module concepts and semantics.
This section provides some basic introduction to Spring and databases.
[[get-started:first-steps:spring]]
== Learning Spring
Spring Data uses Spring framework's {spring-framework-ref}/core.html[core] functionality, including:
* {spring-framework-ref}/core.html#beans[IoC] container
* {spring-framework-ref}/core.html#validation[type conversion system]
* {spring-framework-ref}/core.html#expressions[expression language]
* {spring-framework-ref}/integration.html#jmx[JMX integration]
* {spring-framework-ref}/data-access.html#dao-exceptions[DAO exception hierarchy].
While you need not know the Spring APIs, understanding the concepts behind them is important.
At a minimum, the idea behind Inversion of Control (IoC) should be familiar, and you should be familiar with whatever IoC container you choose to use.
You can use the core functionality of the R2DBC support directly, with no need to invoke the IoC services of the Spring Container.
This is much like `JdbcTemplate`, which can be used "`standalone`" without any other services of the Spring container.
To use all the features of Spring Data R2DBC, such as the repository support, you need to configure some parts of the library to use Spring.
To learn more about Spring, refer to the comprehensive documentation that explains the Spring Framework in detail.
There are a lot of articles, blog entries, and books on the subject.
See the Spring framework https://spring.io/docs[home page] for more information.
[[get-started:first-steps:what]]
== What is R2DBC?
https://r2dbc.io[R2DBC] is the acronym for Reactive Relational Database Connectivity.
R2DBC is an API specification initiative that declares a reactive API to be implemented by driver vendors to access their relational databases.
Part of the answer as to why R2DBC was created is the need for a non-blocking application stack to handle concurrency with a small number of threads and scale with fewer hardware resources.
This need cannot be satisfied by reusing standardized relational database access APIs -- namely JDBC -- as JDBC is a fully blocking API.
Attempts to compensate for blocking behavior with a `ThreadPool` are of limited use.
The other part of the answer is that most applications use a relational database to store their data.
While several NoSQL database vendors provide reactive database clients for their databases, migration to NoSQL is not an option for most projects.
This was the motivation for a new common API to serve as a foundation for any non-blocking database driver.
While the open source ecosystem hosts various non-blocking relational database driver implementations, each client comes with a vendor-specific API, so a generic layer on top of these libraries is not possible.
[[get-started:first-steps:reactive]]
== What is Reactive?
The term, "`reactive`", refers to programming models that are built around reacting to change, availability, and processability: network components reacting to I/O events, UI controllers reacting to mouse events, resources being made available, and others.
In that sense, non-blocking is reactive, because, instead of being blocked, we are now in the mode of reacting to notifications as operations complete or data becomes available.
There is also another important mechanism that we on the Spring team associate with reactive, and that is non-blocking back pressure.
In synchronous, imperative code, blocking calls serve as a natural form of back pressure that forces the caller to wait.
In non-blocking code, it becomes essential to control the rate of events so that a fast producer does not overwhelm its destination.
https://github.com/reactive-streams/reactive-streams-jvm/blob/v{reactiveStreamsVersion}/README.md#specification[Reactive Streams is a small spec] (also https://docs.oracle.com/javase/9/docs/api/java/util/concurrent/Flow.html[adopted in Java 9]) that defines the interaction between asynchronous components with back pressure.
For example, a data repository (acting as a {reactiveStreamsJavadoc}/org/reactivestreams/Publisher.html[`Publisher`]) can produce data that an HTTP server (acting as a {reactiveStreamsJavadoc}/org/reactivestreams/Subscriber.html[`Subscriber`]) can then write to the response.
The main purpose of Reactive Streams is to let the subscriber control how quickly or how slowly the publisher produces data.
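The subscriber-controlled demand described above can be demonstrated with only the JDK, using the `java.util.concurrent.Flow` API mentioned earlier (the Java 9 adoption of Reactive Streams). This is a minimal illustrative sketch, not Spring Data code; class and method names are invented for the demo:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class BackpressureDemo {

    // Publishes 1..count and lets the subscriber pull one element at a time.
    static List<Integer> consumeWithBackpressure(int count) throws InterruptedException {
        List<Integer> received = new CopyOnWriteArrayList<>();
        CountDownLatch done = new CountDownLatch(1);

        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<Integer>() {
                private Flow.Subscription subscription;

                @Override public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1); // initial demand: a single element
                }
                @Override public void onNext(Integer item) {
                    received.add(item);
                    subscription.request(1); // signal demand for the next element
                }
                @Override public void onError(Throwable t) { done.countDown(); }
                @Override public void onComplete() { done.countDown(); }
            });

            for (int i = 1; i <= count; i++) {
                publisher.submit(i); // blocks when the subscriber's buffer is saturated
            }
        } // close() completes the stream once all buffered items are delivered

        done.await();
        return List.copyOf(received);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(consumeWithBackpressure(5)); // [1, 2, 3, 4, 5]
    }
}
```

The key point is that the publisher never outruns the subscriber: each `request(1)` grants permission for exactly one more `onNext` call.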
[[get-started:first-steps:reactive-api]]
== Reactive API
Reactive Streams plays an important role for interoperability. It is of interest to libraries and infrastructure components but less useful as an application API, because it is too low-level.
Applications need a higher-level and richer, functional API to compose async logic -- similar to the Java 8 Stream API but not only for tables.
This is the role that reactive libraries play.
https://github.com/reactor/reactor[Project Reactor] is the reactive library of choice for Spring Data R2DBC.
It provides the https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Mono.html[`Mono`] and https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html[`Flux`] API types to work on data sequences of `0..1` (`Mono`) and `0..N` (`Flux`) through a rich set of operators aligned with the ReactiveX vocabulary of operators.
Reactor is a Reactive Streams library, and, therefore, all of its operators support non-blocking back pressure.
Reactor has a strong focus on server-side Java. It is developed in close collaboration with Spring.
Spring Data R2DBC requires Project Reactor as a core dependency, but it is interoperable with other reactive libraries through the Reactive Streams specification.
As a general rule, a Spring Data R2DBC repository accepts a plain `Publisher` as input, adapts it to a Reactor type internally, uses that, and returns either a `Mono` or a `Flux` as output.
So, you can pass any `Publisher` as input and apply operations on the output, but you need to adapt the output for use with another reactive library.
Whenever feasible, Spring Data adapts transparently to the use of RxJava or another reactive library.
[[requirements]]
== Requirements
The Spring Data R2DBC 3.x binaries require:
* JDK level 17 and above
* https://spring.io/docs[Spring Framework] {springVersion} and above
* https://r2dbc.io[R2DBC] {r2dbcVersion} and above
[[get-started:help]]
== Additional Help Resources
Learning a new framework is not always straightforward.
In this section, we try to provide what we think is an easy-to-follow guide for starting with the Spring Data R2DBC module.
However, if you encounter issues or you need advice, use one of the following links:
[[get-started:help:community]]
Community Forum :: Spring Data on https://stackoverflow.com/questions/tagged/spring-data[Stack Overflow] is a tag for all Spring Data (not just R2DBC) users to share information and help each other.
Note that registration is needed only for posting.
[[get-started:help:professional]]
Professional Support :: Professional, from-the-source support, with guaranteed response time, is available from https://pivotal.io/[Pivotal Software, Inc.], the company behind Spring Data and Spring.
[[get-started:up-to-date]]
== Following Development
* For information on the Spring Data R2DBC source code repository, nightly builds, and snapshot artifacts, see the Spring Data R2DBC https://projects.spring.io/spring-data-r2dbc/[home page].
* You can help make Spring Data best serve the needs of the Spring community by interacting with developers through the community on https://stackoverflow.com/questions/tagged/spring-data[Stack Overflow].
* If you encounter a bug or want to suggest an improvement, please create a ticket on the Spring Data R2DBC https://github.com/spring-projects/spring-data-r2dbc/issues[issue tracker].
* To stay up to date with the latest news and announcements in the Spring ecosystem, subscribe to the Spring Community https://spring.io[Portal].
* You can also follow the Spring https://spring.io/blog[blog] or the Spring Data project team on Twitter (https://twitter.com/SpringData[SpringData]).
[[project-metadata]]
== Project Metadata
* Version control: https://github.com/spring-projects/spring-data-r2dbc
* Bugtracker: https://github.com/spring-projects/spring-data-relational/issues
* Release repository: https://repo1.maven.org/maven2/
* Milestone repository: https://repo.spring.io/milestone
* Snapshot repository: https://repo.spring.io/snapshot

10
spring-data-r2dbc/src/main/asciidoc/reference/introduction.adoc

@@ -1,10 +0,0 @@
[[introduction]]
= Introduction
== Document Structure
This part of the reference documentation explains the core functionality offered by Spring Data R2DBC.
"`<<r2dbc.core>>`" introduces the R2DBC module feature set.
"`<<r2dbc.repositories>>`" introduces the repository support for R2DBC.

442
spring-data-r2dbc/src/main/asciidoc/reference/r2dbc-repositories.adoc

@@ -1,442 +0,0 @@
[[r2dbc.repositories]]
= R2DBC Repositories
[[r2dbc.repositories.intro]]
This chapter points out the specialties for repository support for R2DBC.
This chapter builds on the core repository support explained in <<repositories>>.
Before reading this chapter, you should have a sound understanding of the basic concepts explained there.
[[r2dbc.repositories.usage]]
== Usage
To access domain entities stored in a relational database, you can use our sophisticated repository support that eases implementation quite significantly.
To do so, create an interface for your repository.
Consider the following `Person` class:
.Sample Person entity
====
[source,java]
----
public class Person {
@Id
private Long id;
private String firstname;
private String lastname;
// … getters and setters omitted
}
----
====
The following example shows a repository interface for the preceding `Person` class:
.Basic repository interface to persist Person entities
====
[source,java]
----
public interface PersonRepository extends ReactiveCrudRepository<Person, Long> {
// additional custom query methods go here
}
----
====
To configure R2DBC repositories, you can use the `@EnableR2dbcRepositories` annotation.
If no base package is configured, the infrastructure scans the package of the annotated configuration class.
The following example shows how to use Java configuration for a repository:
.Java configuration for repositories
====
[source,java]
----
@Configuration
@EnableR2dbcRepositories
class ApplicationConfig extends AbstractR2dbcConfiguration {
@Override
public ConnectionFactory connectionFactory() {
return …
}
}
----
====
Because our domain repository extends `ReactiveCrudRepository`, it provides you with reactive CRUD operations to access the entities.
On top of `ReactiveCrudRepository`, there is also `ReactiveSortingRepository`, which adds additional sorting functionality similar to that of `PagingAndSortingRepository`.
Working with the repository instance is merely a matter of dependency injecting it into a client.
Consequently, you can retrieve all `Person` objects with the following code:
.Paging access to Person entities
====
[source,java,indent=0]
----
include::../{example-root}/PersonRepositoryTests.java[tags=class]
----
====
The preceding example creates an application context with Spring's unit test support, which performs annotation-based dependency injection into test cases.
Inside the test method, we use the repository to query the database.
We use `StepVerifier` as a test aid to verify our expectations against the results.
[[r2dbc.repositories.queries]]
== Query Methods
Most of the data access operations you usually trigger on a repository result in a query being run against the databases.
Defining such a query is a matter of declaring a method on the repository interface, as the following example shows:
.PersonRepository with query methods
====
[source,java]
----
interface ReactivePersonRepository extends ReactiveSortingRepository<Person, Long> {
Flux<Person> findByFirstname(String firstname); <1>
Flux<Person> findByFirstname(Publisher<String> firstname); <2>
Flux<Person> findByFirstnameOrderByLastname(String firstname, Pageable pageable); <3>
Mono<Person> findByFirstnameAndLastname(String firstname, String lastname); <4>
Mono<Person> findFirstByLastname(String lastname); <5>
@Query("SELECT * FROM person WHERE lastname = :lastname")
Flux<Person> findByLastname(String lastname); <6>
@Query("SELECT firstname, lastname FROM person WHERE lastname = $1")
Mono<Person> findFirstByLastname(String lastname); <7>
}
----
<1> The method shows a query for all people with the given `firstname`. The query is derived by parsing the method name for constraints that can be concatenated with `And` and `Or`. Thus, the method name results in a query expression of `SELECT … FROM person WHERE firstname = :firstname`.
<2> The method shows a query for all people with the given `firstname` once the `firstname` is emitted by the given `Publisher`.
<3> Use `Pageable` to pass offset and sorting parameters to the database.
<4> Find a single entity for the given criteria. It completes with `IncorrectResultSizeDataAccessException` on non-unique results.
<5> Unlike <4>, the first entity is always emitted even if the query yields more result rows.
<6> The `findByLastname` method shows a query for all people with the given last name.
<7> A query for a single `Person` entity projecting only `firstname` and `lastname` columns.
The annotated query uses native bind markers, which are Postgres bind markers in this example.
====
Note that the columns of a select statement used in a `@Query` annotation must match the names generated by the `NamingStrategy` for the respective property.
If a select statement does not include a matching column, that property is not set. If that property is required by the persistence constructor, either null or (for primitive types) the default value is provided.
The following table shows the keywords that are supported for query methods:
[cols="1,2,3", options="header", subs="quotes"]
.Supported keywords for query methods
|===
| Keyword
| Sample
| Logical result
| `After`
| `findByBirthdateAfter(Date date)`
| `birthdate > date`
| `GreaterThan`
| `findByAgeGreaterThan(int age)`
| `age > age`
| `GreaterThanEqual`
| `findByAgeGreaterThanEqual(int age)`
| `age >= age`
| `Before`
| `findByBirthdateBefore(Date date)`
| `birthdate < date`
| `LessThan`
| `findByAgeLessThan(int age)`
| `age < age`
| `LessThanEqual`
| `findByAgeLessThanEqual(int age)`
| `age \<= age`
| `Between`
| `findByAgeBetween(int from, int to)`
| `age BETWEEN from AND to`
| `NotBetween`
| `findByAgeNotBetween(int from, int to)`
| `age NOT BETWEEN from AND to`
| `In`
| `findByAgeIn(Collection<Integer> ages)`
| `age IN (age1, age2, ageN)`
| `NotIn`
| `findByAgeNotIn(Collection ages)`
| `age NOT IN (age1, age2, ageN)`
| `IsNotNull`, `NotNull`
| `findByFirstnameNotNull()`
| `firstname IS NOT NULL`
| `IsNull`, `Null`
| `findByFirstnameNull()`
| `firstname IS NULL`
| `Like`, `StartingWith`, `EndingWith`
| `findByFirstnameLike(String name)`
| `firstname LIKE name`
| `NotLike`, `IsNotLike`
| `findByFirstnameNotLike(String name)`
| `firstname NOT LIKE name`
| `Containing` on String
| `findByFirstnameContaining(String name)`
| `firstname LIKE '%' + name +'%'`
| `NotContaining` on String
| `findByFirstnameNotContaining(String name)`
| `firstname NOT LIKE '%' + name +'%'`
| `(No keyword)`
| `findByFirstname(String name)`
| `firstname = name`
| `Not`
| `findByFirstnameNot(String name)`
| `firstname != name`
| `IsTrue`, `True`
| `findByActiveIsTrue()`
| `active IS TRUE`
| `IsFalse`, `False`
| `findByActiveIsFalse()`
| `active IS FALSE`
|===
[[r2dbc.repositories.modifying]]
=== Modifying Queries
The previous sections describe how to declare queries to access a given entity or collection of entities.
Keywords from the preceding table can be used in conjunction with `delete…By` or `remove…By` to create derived queries that delete matching rows.
.`Delete…By` Query
====
[source,java]
----
interface ReactivePersonRepository extends ReactiveSortingRepository<Person, String> {
Mono<Integer> deleteByLastname(String lastname); <1>
Mono<Void> deletePersonByLastname(String lastname); <2>
Mono<Boolean> deletePersonByLastname(String lastname); <3>
}
----
<1> Using a return type of `Mono<Integer>` returns the number of affected rows.
<2> Using `Void` just reports whether the rows were successfully deleted without emitting a result value.
<3> Using `Boolean` reports whether at least one row was removed.
====
While the derived-query approach above is feasible for comprehensive custom functionality, you can declare queries that only need parameter binding as modifying by annotating the query method with `@Modifying`, as shown in the following example:
====
[source,java,indent=0]
----
include::../{example-root}/PersonRepository.java[tags=atModifying]
----
====
The result of a modifying query can be:
* `Void` (or Kotlin `Unit`) to discard update count and await completion.
* `Integer` or another numeric type emitting the affected rows count.
* `Boolean` to emit whether at least one row was updated.
The `@Modifying` annotation is only relevant in combination with the `@Query` annotation.
Derived custom methods do not require this annotation.
Modifying queries are executed directly against the database.
No events or callbacks get called.
Therefore also fields with auditing annotations do not get updated if they don't get updated in the annotated query.
Alternatively, you can add custom modifying behavior by using the facilities described in <<repositories.custom-implementations,Custom Implementations for Spring Data Repositories>>.
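The `@Modifying` include referenced above is not rendered inline here. As a hedged sketch of what such a declaration can look like (the method name, query string, and `Person` entity are illustrative assumptions, not taken from the elided include):

```java
import org.springframework.data.r2dbc.repository.Modifying;
import org.springframework.data.r2dbc.repository.Query;
import org.springframework.data.repository.reactive.ReactiveCrudRepository;
import reactor.core.publisher.Mono;

interface PersonRepository extends ReactiveCrudRepository<Person, Long> {

    // @Modifying marks the annotated @Query as an update statement;
    // Mono<Integer> emits the number of affected rows.
    @Modifying
    @Query("UPDATE person SET firstname = :firstname WHERE lastname = :lastname")
    Mono<Integer> setFirstnameForLastname(String firstname, String lastname);
}
```

Swapping the return type to `Mono<Void>` or `Mono<Boolean>` yields the other result semantics listed above.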
[[r2dbc.repositories.queries.spel]]
=== Queries with SpEL Expressions
Query string definitions can be used together with SpEL expressions to create dynamic queries at runtime.
SpEL expressions can provide predicate values which are evaluated right before running the query.
Expressions expose method arguments through an array that contains all the arguments.
The following query uses `[0]`
to declare the predicate value for `lastname` (which is equivalent to the `:lastname` parameter binding):
====
[source,java,indent=0]
----
include::../{example-root}/PersonRepository.java[tags=spel]
----
====
SpEL in query strings can be a powerful way to enhance queries.
However, SpEL expressions can also accept a broad range of unwanted arguments.
Make sure to sanitize strings before passing them to the query to avoid unwanted changes to your query.
Expression support is extensible through the Query SPI: `org.springframework.data.spel.spi.EvaluationContextExtension`.
The Query SPI can contribute properties and functions and can customize the root object.
Extensions are retrieved from the application context at the time of SpEL evaluation when the query is built.
TIP: When using SpEL expressions in combination with plain parameters, use named parameter notation instead of native bind markers to ensure a proper binding order.
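The SpEL include is not rendered inline here. A hedged sketch of the `[0]` argument-array notation described above (repository and entity names are illustrative assumptions):

```java
import org.springframework.data.r2dbc.repository.Query;
import org.springframework.data.repository.reactive.ReactiveCrudRepository;
import reactor.core.publisher.Flux;

interface PersonRepository extends ReactiveCrudRepository<Person, Long> {

    // :#{[0]} evaluates a SpEL expression against the argument array;
    // [0] refers to the first method argument, equivalent to :lastname.
    @Query("SELECT * FROM person WHERE lastname = :#{[0]}")
    Flux<Person> findByQueryWithExpression(String lastname);
}
```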
[[r2dbc.repositories.queries.query-by-example]]
=== Query By Example
Spring Data R2DBC also lets you use Query By Example to fashion queries.
This technique allows you to use a "probe" object.
Essentially, any field that isn't empty or `null` will be used to match.
Here's an example:
====
[source,java,indent=0]
----
include::../{example-root}/QueryByExampleTests.java[tag=example]
----
<1> Create a domain object with the criteria (`null` fields will be ignored).
<2> Using the domain object, create an `Example`.
<3> Through the `R2dbcRepository`, execute the query (use `findOne` for a `Mono`).
====
This illustrates how to craft a simple probe using a domain object.
In this case, it will query based on the `Employee` object's `name` field being equal to `Frodo`.
`null` fields are ignored.
====
[source,java,indent=0]
----
include::../{example-root}/QueryByExampleTests.java[tag=example-2]
----
<1> Create a custom `ExampleMatcher` that matches on ALL fields (use `matchingAny()` to match on *ANY* fields)
<2> For the `name` field, use a wildcard that matches against the end of the field
<3> Match columns against `null` (don't forget that `NULL` doesn't equal `NULL` in relational databases).
<4> Ignore the `role` field when forming the query.
<5> Plug the custom `ExampleMatcher` into the probe.
====
It's also possible to apply a `withTransform()` against any property, allowing you to transform a property before forming the query.
For example, you can apply a `toUpperCase()` to a `String`-based property before the query is created.
Query By Example really shines when you don't know all the fields needed in a query in advance.
If you were building a filter on a web page where the user can pick the fields, Query By Example is a great way to flexibly capture that into an efficient query.
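The `QueryByExampleTests` includes are not rendered inline here. As a hedged reconstruction of the matcher-based variant matching the numbered callouts above (the `Employee` entity, `repository` variable, and field names are assumptions):

```java
import org.springframework.data.domain.Example;
import org.springframework.data.domain.ExampleMatcher;
import static org.springframework.data.domain.ExampleMatcher.GenericPropertyMatchers.endsWith;
import reactor.core.publisher.Flux;

Employee employee = new Employee();                       // the probe
employee.setName("Baggins");

ExampleMatcher matcher = ExampleMatcher.matchingAll()     // <1> match ALL fields
        .withMatcher("name", endsWith())                  // <2> wildcard against the end
        .withIncludeNullValues()                          // <3> match columns against null
        .withIgnorePaths("role");                         // <4> ignore the role field

Example<Employee> example = Example.of(employee, matcher); // <5> plug the matcher in
Flux<Employee> employees = repository.findAll(example);
```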
[[r2dbc.entity-persistence.state-detection-strategies]]
include::../{spring-data-commons-docs}/is-new-state-detection.adoc[leveloffset=+2]
[[r2dbc.entity-persistence.id-generation]]
=== ID Generation
Spring Data R2DBC uses the ID to identify entities.
The ID of an entity must be annotated with Spring Data's https://docs.spring.io/spring-data/commons/docs/current/api/org/springframework/data/annotation/Id.html[`@Id`] annotation.
When your database has an auto-increment column for the ID column, the generated value gets set in the entity after inserting it into the database.
Spring Data R2DBC does not attempt to insert values of identifier columns when the entity is new and the identifier value defaults to its initial value.
That is `0` for primitive types and `null` if the identifier property uses a numeric wrapper type such as `Long`.
One important constraint is that, after saving an entity, the entity must not be new anymore.
Note that whether an entity is new is part of the entity's state.
With auto-increment columns, this happens automatically, because the ID gets set by Spring Data with the value from the ID column.
[[r2dbc.optimistic-locking]]
=== Optimistic Locking
The `@Version` annotation provides syntax similar to that of JPA in the context of R2DBC and makes sure updates are only applied to rows with a matching version.
Therefore, the actual value of the version property is added to the update query in such a way that the update does not have any effect if another operation altered the row in the meantime.
In that case, an `OptimisticLockingFailureException` is thrown.
The following example shows these features:
====
[source,java]
----
@Table
class Person {
@Id Long id;
String firstname;
String lastname;
@Version Long version;
}
R2dbcEntityTemplate template = …;
Person daenerys = template.insert(new Person("Daenerys")).block(); <1>
Person other = template.select(Person.class)
.matching(query(where("id").is(daenerys.getId())))
.first().block(); <2>
daenerys.setLastname("Targaryen");
template.update(daenerys).block(); <3>
template.update(other).subscribe(); // emits OptimisticLockingFailureException <4>
----
<1> Initially insert row. `version` is set to `0`.
<2> Load the just inserted row. `version` is still `0`.
<3> Update the row with `version = 0`. Set the `lastname` and bump `version` to `1`.
<4> Try to update the previously loaded row that still has `version = 0`. The operation fails with an `OptimisticLockingFailureException`, as the current `version` is `1`.
====
:projection-collection: Flux
include::../{spring-data-commons-docs}/repository-projections.adoc[leveloffset=+2]
[[projections.resultmapping]]
==== Result Mapping
A query method returning an interface or DTO projection is backed by results produced by the actual query.
Interface projections generally rely on mapping results onto the domain type first, considering potential `@Column` type mappings; the actual projection proxy then uses a potentially partially materialized entity to expose projection data.
Result mapping for DTO projections depends on the actual query type.
Derived queries use the domain type to map results, and Spring Data creates DTO instances solely from properties available on the domain type.
Declaring properties in your DTO that are not available on the domain type is not supported.
String-based queries use a different approach since the actual query, specifically the field projection, and result type declaration are close together.
DTO projections used with query methods annotated with `@Query` map query results directly into the DTO type.
Field mappings on the domain type are not considered.
Using the DTO type directly, your query method can benefit from a more dynamic projection that isn't restricted to the domain model.
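A sketch of a string-based query mapping directly into a DTO (the `Person` domain type, table, and column names are assumptions for illustration):

```java
import org.springframework.data.r2dbc.repository.Query;
import org.springframework.data.repository.Repository;

import reactor.core.publisher.Flux;

// DTO type; its properties need not mirror the Person domain model.
record PersonSummary(String firstname, String lastname) {
}

interface PersonRepository extends Repository<Person, Long> {

  // Results are mapped directly into PersonSummary;
  // @Column mappings on the Person domain type are not considered.
  @Query("SELECT firstname, lastname FROM person")
  Flux<PersonSummary> findAllSummaries();
}
```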
include::../{spring-data-commons-docs}/entity-callbacks.adoc[leveloffset=+1]
include::./r2dbc-entity-callbacks.adoc[leveloffset=+2]
[[r2dbc.multiple-databases]]
== Working with multiple Databases
When working with multiple, potentially different databases, your application will require a different approach to configuration.
The provided `AbstractR2dbcConfiguration` support class assumes a single `ConnectionFactory` from which the `Dialect` gets derived.
Therefore, you need to define a few beans yourself to configure Spring Data R2DBC to work with multiple databases.
R2DBC repositories require `R2dbcEntityOperations` for their implementation.
A simple configuration to scan for repositories without using `AbstractR2dbcConfiguration` looks like:
[source,java]
----
@Configuration
@EnableR2dbcRepositories(basePackages = "com.acme.mysql", entityOperationsRef = "mysqlR2dbcEntityOperations")
static class MySQLConfiguration {
@Bean
@Qualifier("mysql")
public ConnectionFactory mysqlConnectionFactory() {
return …
}
@Bean
public R2dbcEntityOperations mysqlR2dbcEntityOperations(@Qualifier("mysql") ConnectionFactory connectionFactory) {
DatabaseClient databaseClient = DatabaseClient.create(connectionFactory);
return new R2dbcEntityTemplate(databaseClient, MySqlDialect.INSTANCE);
}
}
----
Note that `@EnableR2dbcRepositories` allows configuration either through `databaseClientRef` or `entityOperationsRef`.
Using various `DatabaseClient` beans is useful when connecting to multiple databases of the same type.
When using different database systems that differ in their dialect, use `@EnableR2dbcRepositories(entityOperationsRef = …)` instead.

6
spring-data-r2dbc/src/main/asciidoc/reference/r2dbc.adoc

@@ -1,6 +0,0 @@
[[r2dbc.core]]
= R2DBC support
include::r2dbc-core.adoc[]
include::r2dbc-template.adoc[leveloffset=+1]

6
spring-data-r2dbc/src/test/java/org/springframework/data/r2dbc/documentation/PersonRepositoryTests.java

@@ -1,5 +1,5 @@
/*
* Copyright 2020-2023 the original author or authors.
* Copyright 2023 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -15,11 +15,11 @@
*/
package org.springframework.data.r2dbc.documentation;
import reactor.test.StepVerifier;
import org.junit.jupiter.api.Disabled;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import reactor.test.StepVerifier;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit.jupiter.SpringExtension;

19
spring-data-r2dbc/src/test/java/org/springframework/data/r2dbc/documentation/QueryByExampleTests.java

@@ -1,5 +1,5 @@
/*
* Copyright 2021-2023 the original author or authors.
* Copyright 2023 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -15,19 +15,20 @@
*/
package org.springframework.data.r2dbc.documentation;
import org.junit.jupiter.api.Test;
import org.springframework.data.annotation.Id;
import org.springframework.data.domain.Example;
import org.springframework.data.domain.ExampleMatcher;
import org.springframework.data.r2dbc.repository.R2dbcRepository;
import static org.mockito.Mockito.*;
import static org.springframework.data.domain.ExampleMatcher.*;
import static org.springframework.data.domain.ExampleMatcher.GenericPropertyMatchers.endsWith;
import reactor.core.publisher.Flux;
import reactor.test.StepVerifier;
import java.util.Objects;
import static org.mockito.Mockito.*;
import static org.springframework.data.domain.ExampleMatcher.GenericPropertyMatchers.endsWith;
import static org.springframework.data.domain.ExampleMatcher.*;
import org.junit.jupiter.api.Test;
import org.springframework.data.annotation.Id;
import org.springframework.data.domain.Example;
import org.springframework.data.domain.ExampleMatcher;
import org.springframework.data.r2dbc.repository.R2dbcRepository;
/**
* Code to demonstrate Query By Example in reference documentation.

7
spring-data-r2dbc/src/test/java/org/springframework/data/r2dbc/documentation/R2dbcApp.java

@@ -1,5 +1,5 @@
/*
* Copyright 2020-2023 the original author or authors.
* Copyright 2023 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -15,12 +15,13 @@
*/
package org.springframework.data.r2dbc.documentation;
// tag::class[]
import io.r2dbc.spi.ConnectionFactories;
import io.r2dbc.spi.ConnectionFactory;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import reactor.test.StepVerifier;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.data.r2dbc.core.R2dbcEntityTemplate;
public class R2dbcApp {

14
spring-data-r2dbc/src/test/java/org/springframework/data/r2dbc/documentation/R2dbcEntityTemplateSnippets.java

@@ -1,5 +1,5 @@
/*
* Copyright 2020-2023 the original author or authors.
* Copyright 2023 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -15,17 +15,17 @@
*/
package org.springframework.data.r2dbc.documentation;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import org.springframework.data.r2dbc.core.R2dbcEntityTemplate;
import static org.springframework.data.domain.Sort.*;
import static org.springframework.data.domain.Sort.by;
import static org.springframework.data.domain.Sort.Order.*;
import static org.springframework.data.relational.core.query.Criteria.*;
import static org.springframework.data.relational.core.query.Query.*;
import static org.springframework.data.relational.core.query.Update.*;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import org.springframework.data.r2dbc.core.R2dbcEntityTemplate;
/**
* @author Mark Paluch
*/

42
src/main/antora/antora-playbook.yml

@@ -0,0 +1,42 @@
# PACKAGES antora@3.2.0-alpha.2 @antora/atlas-extension:1.0.0-alpha.1 @antora/collector-extension@1.0.0-alpha.3 @springio/antora-extensions@1.1.0-alpha.2 @asciidoctor/tabs@1.0.0-alpha.12 @opendevise/antora-release-line-extension@1.0.0-alpha.2
#
# The purpose of this Antora playbook is to build the docs in the current branch.
antora:
extensions:
- '@antora/collector-extension'
- require: '@springio/antora-extensions/root-component-extension'
root_component_name: 'data-relational'
site:
title: Spring Data Relational
url: https://docs.spring.io/spring-data-relational/reference/
content:
sources:
- url: ./../../..
branches: HEAD
start_path: src/main/antora
worktrees: true
- url: https://github.com/spring-projects/spring-data-commons
# Refname matching:
# https://docs.antora.org/antora/latest/playbook/content-refname-matching/
branches: [ main, 3.2.x ]
start_path: src/main/antora
asciidoc:
attributes:
page-pagination: ''
hide-uri-scheme: '@'
tabs-sync-option: '@'
chomp: 'all'
extensions:
- '@asciidoctor/tabs'
- '@springio/asciidoctor-extensions'
sourcemap: true
urls:
latest_version_segment: ''
runtime:
log:
failure_level: warn
format: pretty
ui:
bundle:
url: https://github.com/spring-io/antora-ui-spring/releases/download/v0.3.3/ui-bundle.zip
snapshot: true

12
src/main/antora/antora.yml

@@ -0,0 +1,12 @@
name: data-relational
version: true
title: Spring Data Relational
nav:
- modules/ROOT/nav.adoc
ext:
collector:
- run:
command: ./mvnw validate process-resources -pl :spring-data-jdbc-distribution -am -Pantora-process-resources
local: true
scan:
dir: spring-data-jdbc-distribution/target/classes/

1
src/main/antora/modules/ROOT/examples/r2dbc

@@ -0,0 +1 @@
../../../../../../spring-data-r2dbc/src/test/java/org/springframework/data/r2dbc/documentation

55
src/main/antora/modules/ROOT/nav.adoc

@@ -0,0 +1,55 @@
* xref:index.adoc[Overview]
** xref:commons/upgrade.adoc[]
* xref:repositories/introduction.adoc[]
** xref:repositories/core-concepts.adoc[]
** xref:repositories/definition.adoc[]
** xref:repositories/create-instances.adoc[]
** xref:repositories/query-methods-details.adoc[]
** xref:repositories/projections.adoc[]
** xref:object-mapping.adoc[]
** xref:commons/custom-conversions.adoc[]
** xref:repositories/custom-implementations.adoc[]
** xref:repositories/core-domain-events.adoc[]
** xref:commons/entity-callbacks.adoc[]
** xref:repositories/core-extensions.adoc[]
** xref:repositories/null-handling.adoc[]
** xref:repositories/query-keywords-reference.adoc[]
** xref:repositories/query-return-types-reference.adoc[]
* xref:jdbc.adoc[]
** xref:jdbc/why.adoc[]
** xref:jdbc/domain-driven-design.adoc[]
** xref:jdbc/getting-started.adoc[]
** xref:jdbc/examples-repo.adoc[]
** xref:jdbc/configuration.adoc[]
** xref:jdbc/entity-persistence.adoc[]
** xref:jdbc/loading-aggregates.adoc[]
** xref:jdbc/query-methods.adoc[]
** xref:jdbc/mybatis.adoc[]
** xref:jdbc/events.adoc[]
** xref:jdbc/logging.adoc[]
** xref:jdbc/transactions.adoc[]
** xref:jdbc/auditing.adoc[]
** xref:jdbc/mapping.adoc[]
** xref:jdbc/custom-conversions.adoc[]
** xref:jdbc/locking.adoc[]
** xref:query-by-example.adoc[]
** xref:jdbc/schema-support.adoc[]
* xref:r2dbc.adoc[]
** xref:r2dbc/getting-started.adoc[]
** xref:r2dbc/core.adoc[]
** xref:r2dbc/template.adoc[]
** xref:r2dbc/repositories.adoc[]
** xref:r2dbc/query-methods.adoc[]
** xref:r2dbc/entity-callbacks.adoc[]
** xref:r2dbc/auditing.adoc[]
** xref:r2dbc/mapping.adoc[]
** xref:r2dbc/query-by-example.adoc[]
** xref:r2dbc/kotlin.adoc[]
** xref:r2dbc/migration-guide.adoc[]
* xref:kotlin.adoc[]
** xref:kotlin/requirements.adoc[]
** xref:kotlin/null-safety.adoc[]
** xref:kotlin/object-mapping.adoc[]
** xref:kotlin/extensions.adoc[]
** xref:kotlin/coroutines.adoc[]
* https://github.com/spring-projects/spring-data-commons/wiki[Wiki]

1
src/main/antora/modules/ROOT/pages/commons/custom-conversions.adoc

@@ -0,0 +1 @@
include::{commons}@data-commons::page$custom-conversions.adoc[]

1
src/main/antora/modules/ROOT/pages/commons/entity-callbacks.adoc

@@ -0,0 +1 @@
include::{commons}@data-commons::page$entity-callbacks.adoc[]

1
src/main/antora/modules/ROOT/pages/commons/upgrade.adoc

@@ -0,0 +1 @@
include::{commons}@data-commons::page$upgrade.adoc[]

21
src/main/antora/modules/ROOT/pages/index.adoc

@@ -0,0 +1,21 @@
[[spring-data-relational-reference-documentation]]
= Spring Data JDBC and R2DBC
:revnumber: {version}
:revdate: {localdate}
:feature-scroll: true
_Spring Data JDBC and R2DBC provide repository support for the Java Database Connectivity (JDBC) and Reactive Relational Database Connectivity (R2DBC) APIs, respectively.
They ease development of applications that need to access SQL data sources by providing a consistent programming model._
[horizontal]
xref:repositories/introduction.adoc[Introduction] :: Introduction to Repositories
xref:jdbc.adoc[JDBC] :: JDBC Object Mapping and Repositories
xref:r2dbc.adoc[R2DBC] :: R2DBC Object Mapping and Repositories
xref:kotlin.adoc[Kotlin] :: Kotlin-specific Support
https://github.com/spring-projects/spring-data-commons/wiki[Wiki] :: What's New, Upgrade Notes, Supported Versions, additional cross-version information.
Jens Schauder, Jay Bryant, Mark Paluch, Bastian Wilhelm
(C) 2008-2023 VMware, Inc.
Copies of this document may be made for your own use and for distribution to others, provided that you do not charge any fee for such copies and further provided that each copy contains this Copyright Notice, whether distributed in print or electronically.

16
src/main/antora/modules/ROOT/pages/jdbc.adoc

@@ -0,0 +1,16 @@
[[jdbc.repositories]]
= JDBC
:page-section-summary-toc: 1
The Spring Data JDBC module applies core Spring concepts to the development of solutions that use JDBC database drivers aligned with xref:jdbc/domain-driven-design.adoc[Domain-driven design principles].
We provide a "`template`" as a high-level abstraction for storing and querying aggregates.
This document is the reference guide for Spring Data JDBC support.
It explains concepts, semantics, and syntax.
This chapter points out the specialties for repository support for JDBC.
This builds on the core repository support explained in xref:repositories/introduction.adoc[Working with Spring Data Repositories].
You should have a sound understanding of the basic concepts explained there.

23
src/main/antora/modules/ROOT/pages/jdbc/auditing.adoc

@@ -0,0 +1,23 @@
[[jdbc.auditing]]
= JDBC Auditing
:page-section-summary-toc: 1
In order to activate auditing, add `@EnableJdbcAuditing` to your configuration, as the following example shows:
.Activating auditing with Java configuration
[source,java]
----
@Configuration
@EnableJdbcAuditing
class Config {
@Bean
AuditorAware<AuditableUser> auditorProvider() {
return new AuditorAwareImpl();
}
}
----
If you expose a bean of type `AuditorAware` to the `ApplicationContext`, the auditing infrastructure automatically picks it up and uses it to determine the current user to be set on domain types.
If you have multiple implementations registered in the `ApplicationContext`, you can select the one to be used by explicitly setting the `auditorAwareRef` attribute of `@EnableJdbcAuditing`.
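A minimal `AuditorAware` sketch (the `AuditableUser` type and the hard-coded user are assumptions; real implementations typically consult the security context):

```java
import java.util.Optional;

import org.springframework.data.domain.AuditorAware;

class AuditorAwareImpl implements AuditorAware<AuditableUser> {

  @Override
  public Optional<AuditableUser> getCurrentAuditor() {
    // Look up the currently authenticated user here,
    // e.g. from Spring Security's SecurityContextHolder.
    return Optional.of(new AuditableUser("admin"));
  }
}
```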

64
src/main/antora/modules/ROOT/pages/jdbc/configuration.adoc

@@ -0,0 +1,64 @@
[[jdbc.java-config]]
= Configuration
The Spring Data JDBC repositories support can be activated by an annotation through Java configuration, as the following example shows:
.Spring Data JDBC repositories using Java configuration
[source,java]
----
@Configuration
@EnableJdbcRepositories // <1>
class ApplicationConfig extends AbstractJdbcConfiguration { // <2>
@Bean
DataSource dataSource() { // <3>
EmbeddedDatabaseBuilder builder = new EmbeddedDatabaseBuilder();
return builder.setType(EmbeddedDatabaseType.HSQL).build();
}
@Bean
NamedParameterJdbcOperations namedParameterJdbcOperations(DataSource dataSource) { // <4>
return new NamedParameterJdbcTemplate(dataSource);
}
@Bean
TransactionManager transactionManager(DataSource dataSource) { // <5>
return new DataSourceTransactionManager(dataSource);
}
}
----
<1> `@EnableJdbcRepositories` creates implementations for interfaces derived from `Repository`
<2> `AbstractJdbcConfiguration` provides various default beans required by Spring Data JDBC
<3> Creates a `DataSource` connecting to a database.
This is required by the following two bean methods.
<4> Creates the `NamedParameterJdbcOperations` used by Spring Data JDBC to access the database.
<5> Spring Data JDBC utilizes the transaction management provided by Spring JDBC.
The configuration class in the preceding example sets up an embedded HSQL database by using the `EmbeddedDatabaseBuilder` API of `spring-jdbc`.
The `DataSource` is then used to set up `NamedParameterJdbcOperations` and a `TransactionManager`.
We finally activate Spring Data JDBC repositories by using the `@EnableJdbcRepositories` annotation.
If no base package is configured, it uses the package in which the configuration class resides.
Extending `AbstractJdbcConfiguration` ensures various beans get registered.
You can override its methods to customize the setup (see below).
This configuration can be further simplified by using Spring Boot.
With Spring Boot a `DataSource` is sufficient once the starter `spring-boot-starter-data-jdbc` is included in the dependencies.
Everything else is done by Spring Boot.
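For a Maven build, including the starter is a single dependency declaration (sketch; adjust to your build tool and Boot version management):

```xml
<!-- Pulls in Spring Data JDBC, spring-jdbc, and transaction support. -->
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-data-jdbc</artifactId>
</dependency>
```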
There are a couple of things one might want to customize in this setup.
[[jdbc.dialects]]
== Dialects
Spring Data JDBC uses implementations of the interface `Dialect` to encapsulate behavior that is specific to a database or its JDBC driver.
By default, the `AbstractJdbcConfiguration` tries to determine the database in use and register the correct `Dialect`.
You can change this behavior by overriding `jdbcDialect(NamedParameterJdbcOperations)`.
If you use a database for which no dialect is available, your application won't start up. In that case, you have to ask your vendor to provide a `Dialect` implementation. Alternatively, you can:
1. Implement your own `Dialect`.
2. Implement a `JdbcDialectProvider` returning the `Dialect`.
3. Register the provider by creating a `spring.factories` resource under `META-INF` and perform the registration by adding a line +
`org.springframework.data.jdbc.repository.config.DialectResolver$JdbcDialectProvider=<fully qualified name of your JdbcDialectProvider>`
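A sketch of steps 1 and 2 (`MyDatabaseDialect` is a hypothetical `Dialect` implementation; check the exact `JdbcDialectProvider` signature against your Spring Data JDBC version):

```java
import java.util.Optional;

import org.springframework.data.jdbc.repository.config.DialectResolver;
import org.springframework.data.relational.core.dialect.Dialect;
import org.springframework.jdbc.core.JdbcOperations;

// Step 2: a provider that returns the custom Dialect (step 1).
class MyDialectProvider implements DialectResolver.JdbcDialectProvider {

  @Override
  public Optional<Dialect> getDialect(JdbcOperations operations) {
    return Optional.of(MyDatabaseDialect.INSTANCE);
  }
}
```

For step 3, the `spring.factories` entry would then name your provider class (hypothetical package shown), for example `org.springframework.data.jdbc.repository.config.DialectResolver$JdbcDialectProvider=com.example.MyDialectProvider`.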

17
src/main/asciidoc/jdbc-custom-conversions.adoc → src/main/antora/modules/ROOT/pages/jdbc/custom-conversions.adoc

@@ -1,13 +1,11 @@
[[jdbc.custom-converters]]
// for backward compatibility only:
[[jdbc.entity-persistence.custom-converters]]
== Custom Conversions
= Custom Conversions
Spring Data JDBC allows registration of custom converters to influence how values are mapped in the database.
Currently, converters are only applied on property-level.
[[jdbc.custom-converters.writer]]
=== Writing a Property by Using a Registered Spring Converter
== Writing a Property by Using a Registered Spring Converter
The following example shows an implementation of a `Converter` that converts from a `Boolean` object to a `String` value:
@@ -29,7 +27,7 @@ There are a couple of things to notice here: `Boolean` and `String` are both sim
By annotating this converter with `@WritingConverter` you instruct Spring Data to write every `Boolean` property as `String` in the database.
[[jdbc.custom-converters.reader]]
=== Reading by Using a Spring Converter
== Reading by Using a Spring Converter
The following example shows an implementation of a `Converter` that converts from a `String` to a `Boolean` value:
@@ -49,7 +47,7 @@ There are a couple of things to notice here: `String` and `Boolean` are both sim
By annotating this converter with `@ReadingConverter` you instruct Spring Data to convert every `String` value from the database that should be assigned to a `Boolean` property.
[[jdbc.custom-converters.configuration]]
=== Registering Spring Converters with the `JdbcConverter`
== Registering Spring Converters with the `JdbcConverter`
[source,java]
----
@@ -70,13 +68,8 @@ This is no longer necessary or even recommended, since that method assembles con
If you are migrating from an older version of Spring Data JDBC and have overwritten `AbstractJdbcConfiguration.jdbcCustomConversions()`, conversions from your `Dialect` will not get registered.
[[jdbc.custom-converters.jdbc-value]]
// for backward compatibility only:
[[jdbc.entity-persistence.custom-converters.jdbc-value]]
=== JdbcValue
== JdbcValue
Value conversion uses `JdbcValue` to enrich values propagated to JDBC operations with a `java.sql.Types` type.
Register a custom write converter if you need to specify a JDBC-specific type instead of using type derivation.
This converter should convert the value to `JdbcValue` which has a field for the value and for the actual `JDBCType`.
include::{spring-data-commons-docs}/custom-conversions.adoc[leveloffset=+3]

26
src/main/antora/modules/ROOT/pages/jdbc/domain-driven-design.adoc

@@ -0,0 +1,26 @@
[[jdbc.domain-driven-design]]
= Domain Driven Design and Relational Databases
All Spring Data modules are inspired by the concepts of "`repository`", "`aggregate`", and "`aggregate root`" from Domain Driven Design.
These are possibly even more important for Spring Data JDBC, because they are, to some extent, contrary to normal practice when working with relational databases.
An aggregate is a group of entities that is guaranteed to be consistent between atomic changes to it.
A classic example is an `Order` with `OrderItems`.
A property on `Order` (for example, `numberOfItems`, which must be consistent with the actual number of `OrderItems`) remains consistent as changes are made.
References across aggregates are not guaranteed to be consistent at all times.
They are guaranteed to become consistent eventually.
Each aggregate has exactly one aggregate root, which is one of the entities of the aggregate.
The aggregate gets manipulated only through methods on that aggregate root.
These are the atomic changes mentioned earlier.
A repository is an abstraction over a persistent store that looks like a collection of all the aggregates of a certain type.
For Spring Data in general, this means you want to have one `Repository` per aggregate root.
In addition, for Spring Data JDBC this means that all entities reachable from an aggregate root are considered to be part of that aggregate root.
Spring Data JDBC assumes that only the aggregate has a foreign key to a table storing non-root entities of the aggregate and no other entity points toward non-root entities.
WARNING: In the current implementation, entities referenced from an aggregate root are deleted and recreated by Spring Data JDBC.
You can overwrite the repository methods with implementations that match your style of working and designing your database.

44
src/main/antora/modules/ROOT/pages/jdbc/entity-persistence.adoc

@@ -0,0 +1,44 @@
[[jdbc.entity-persistence]]
= Persisting Entities
Saving an aggregate can be performed with the `CrudRepository.save(…)` method.
If the aggregate is new, this results in an insert for the aggregate root, followed by insert statements for all directly or indirectly referenced entities.
If the aggregate root is not new, all referenced entities get deleted, the aggregate root gets updated, and all referenced entities get inserted again.
Note that whether an instance is new is part of the instance's state.
NOTE: This approach has some obvious downsides.
If only a few of the referenced entities have actually been changed, the deletion and insertion is wasteful.
While this process could and probably will be improved, there are certain limitations to what Spring Data JDBC can offer.
It does not know the previous state of an aggregate.
Therefore, any update process has to take whatever it finds in the database and transform it into the state of the entity passed to the save method.
[[jdbc.entity-persistence.state-detection-strategies]]
include::{commons}@data-commons::page$is-new-state-detection.adoc[leveloffset=+1]
[[jdbc.entity-persistence.id-generation]]
== ID Generation
Spring Data JDBC uses the ID to identify entities.
The ID of an entity must be annotated with Spring Data's https://docs.spring.io/spring-data/commons/docs/current/api/org/springframework/data/annotation/Id.html[`@Id`] annotation.
When your database has an auto-increment column for the ID column, the generated value gets set in the entity after inserting it into the database.
One important constraint is that, after saving an entity, the entity must not be new any more.
Note that whether an entity is new is part of the entity's state.
With auto-increment columns, this happens automatically, because the ID gets set by Spring Data with the value from the ID column.
If you are not using auto-increment columns, you can use a `BeforeConvertCallback` to set the ID of the entity (covered later in this document).
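A sketch of such a callback (assuming a `Person` aggregate with a mutable `UUID` identifier exposed via `getId`/`setId`):

```java
import java.util.UUID;

import org.springframework.context.annotation.Bean;
import org.springframework.data.relational.core.mapping.event.BeforeConvertCallback;

// Assign an ID before Spring Data decides how to persist the aggregate,
// so the subsequent insert carries the generated identifier.
@Bean
BeforeConvertCallback<Person> idGeneratingCallback() {
  return person -> {
    if (person.getId() == null) {
      person.setId(UUID.randomUUID());
    }
    return person;
  };
}
```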
[[jdbc.entity-persistence.optimistic-locking]]
== Optimistic Locking
Spring Data JDBC supports optimistic locking by means of a numeric attribute that is annotated with
https://docs.spring.io/spring-data/commons/docs/current/api/org/springframework/data/annotation/Version.html[`@Version`] on the aggregate root.
Whenever Spring Data JDBC saves an aggregate with such a version attribute, two things happen:
The update statement for the aggregate root contains a where clause that checks that the version stored in the database is unchanged.
If that is not the case, an `OptimisticLockingFailureException` is thrown.
Also, the version attribute gets increased both in the entity and in the database, so a concurrent action notices the change and throws an `OptimisticLockingFailureException`, as described above.
This process also applies to inserting new aggregates: a `null` or `0` version indicates a new instance, and the increased version afterwards marks the instance as no longer new. This works nicely in cases where the ID is generated during object construction, for example when UUIDs are used.
During deletes, the version check also applies, but no version is increased.
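A sketch of the failure case (assuming a `CrudRepository<Person, Long>` for an aggregate carrying a `@Version` attribute):

```java
// Load two copies of the same row; saving the first bumps the version,
// so saving the second, now stale, copy fails.
Person fresh = repository.findById(id).orElseThrow();
Person stale = repository.findById(id).orElseThrow();

fresh.setName("Jens");
repository.save(fresh); // version increased in entity and database

stale.setName("Mark");
repository.save(stale); // throws OptimisticLockingFailureException
```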

110
src/main/antora/modules/ROOT/pages/jdbc/events.adoc

@@ -0,0 +1,110 @@
[[jdbc.events]]
= Lifecycle Events
Spring Data JDBC publishes lifecycle events to `ApplicationListener` objects, typically beans in the application context.
Events are notifications about a certain lifecycle phase.
In contrast to entity callbacks, events are intended for notification.
Transactional listeners will receive events when the transaction completes.
Events and callbacks get only triggered for aggregate roots.
If you want to process non-root entities, you need to do that through a listener for the containing aggregate root.
Entity lifecycle events can be costly, and you may notice a change in the performance profile when loading large result sets.
You can disable lifecycle events on the link:{spring-data-jdbc-javadoc}org/springframework/data/jdbc/core/JdbcAggregateTemplate.html#setEntityLifecycleEventsEnabled(boolean)[Template API].
For example, the following listener gets invoked before an aggregate gets saved:
[source,java]
----
@Bean
ApplicationListener<BeforeSaveEvent<Object>> loggingSaves() {
return event -> {
Object entity = event.getEntity();
LOG.info("{} is getting saved.", entity);
};
}
----
If you want to handle events only for a specific domain type you may derive your listener from `AbstractRelationalEventListener` and overwrite one or more of the `onXXX` methods, where `XXX` stands for an event type.
Callback methods are only invoked for events related to the domain type and its subtypes, so you do not need further casting.
[source,java]
----
class PersonLoadListener extends AbstractRelationalEventListener<Person> {
@Override
protected void onAfterLoad(AfterLoadEvent<Person> personLoad) {
LOG.info(personLoad.getEntity());
}
}
----
The following table describes the available events. For more details about the exact relation between process steps, see the link:#jdbc.entity-callbacks[description of available callbacks], which map 1:1 to events.
.Available events
|===
| Event | When It Is Published
| {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/event/BeforeDeleteEvent.html[`BeforeDeleteEvent`]
| Before an aggregate root gets deleted.
| {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/event/AfterDeleteEvent.html[`AfterDeleteEvent`]
| After an aggregate root gets deleted.
| {spring-data-jdbc-javadoc}/org/springframework/data/relational/core/mapping/event/BeforeConvertEvent.html[`BeforeConvertEvent`]
| Before an aggregate root gets converted into a plan for executing SQL statements, but after the decision was made whether the aggregate is new, that is, whether an insert or an update is in order.
| {spring-data-jdbc-javadoc}/org/springframework/data/relational/core/mapping/event/BeforeSaveEvent.html[`BeforeSaveEvent`]
| Before an aggregate root gets saved (that is, inserted or updated, but after the decision whether it gets inserted or updated was made).
| {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/event/AfterSaveEvent.html[`AfterSaveEvent`]
| After an aggregate root gets saved (that is, inserted or updated).
| {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/event/AfterConvertEvent.html[`AfterConvertEvent`]
| After an aggregate root gets created from a database `ResultSet` and all its properties get set.
|===
WARNING: Lifecycle events depend on an `ApplicationEventMulticaster`, which in the case of the `SimpleApplicationEventMulticaster` can be configured with a `TaskExecutor` and therefore gives no guarantees about when an event is processed.
[[jdbc.entity-callbacks]]
== Store-specific EntityCallbacks
Spring Data JDBC uses the xref:commons/entity-callbacks.adoc[`EntityCallback` API] for its auditing support and reacts on the callbacks listed in the following table.
.Process Steps and Callbacks of the Different Processes performed by Spring Data JDBC.
|===
| Process | `EntityCallback` / Process Step | Comment
.3+| Delete | {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/event/BeforeDeleteCallback.html[`BeforeDeleteCallback`]
| Before the actual deletion.
2+| The aggregate root and all the entities of that aggregate get removed from the database.
| {spring-data-jdbc-javadoc}/org/springframework/data/relational/core/mapping/event/AfterDeleteCallback.html[`AfterDeleteCallback`]
| After an aggregate gets deleted.
.6+| Save 2+| Determine whether an insert or an update of the aggregate is to be performed, depending on whether it is new or not.
| {spring-data-jdbc-javadoc}/org/springframework/data/relational/core/mapping/event/BeforeConvertCallback.html[`BeforeConvertCallback`]
| This is the correct callback if you want to set an ID programmatically. In the previous step, new aggregates got detected as such, and an ID generated in this step would be used in the following step.
2+| Convert the aggregate to an aggregate change, that is, a sequence of SQL statements to be executed against the database. In this step, the decision is made whether an ID is provided by the aggregate or whether the ID is still empty and is expected to be generated by the database.
| {spring-data-jdbc-javadoc}/org/springframework/data/relational/core/mapping/event/BeforeSaveCallback.html[`BeforeSaveCallback`]
| Changes made to the aggregate root may get considered, but the decision whether an ID value will be sent to the database was already made in the previous step.
Do not use this for creating IDs for new aggregates. Use `BeforeConvertCallback` instead.
2+| The SQL statements determined above get executed against the database.
| {spring-data-jdbc-javadoc}/org/springframework/data/relational/core/mapping/event/AfterSaveCallback.html[`AfterSaveCallback`]
| After an aggregate root gets saved (that is, inserted or updated).
.2+| Load 2+| Load the aggregate using one or more SQL queries. Construct the aggregate from the result set.
| {spring-data-jdbc-javadoc}/org/springframework/data/relational/core/mapping/event/AfterConvertCallback.html[`AfterConvertCallback`]
|
|===
We encourage the use of callbacks over events since they support the use of immutable classes and therefore are more powerful and versatile than events.
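As noted in the table, assigning IDs programmatically belongs in a `BeforeConvertCallback`. A minimal sketch, assuming a hypothetical `Person` aggregate with a settable `id` and a hypothetical `idGenerator` (the callback type and its single `onBeforeConvert` step are real):

```java
// Sketch: a BeforeConvertCallback that assigns an ID to new aggregates.
// Person and idGenerator are hypothetical; the callback interface is real.
@Bean
BeforeConvertCallback<Person> idAssigningCallback() {
	return person -> {
		if (person.getId() == null) {
			person.setId(idGenerator.nextId()); // hypothetical ID source
		}
		return person; // return the (possibly replaced) aggregate
	};
}
```

Returning the aggregate rather than mutating it in place is what makes callbacks work with immutable classes.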

src/main/antora/modules/ROOT/pages/jdbc/examples-repo.adoc

@@ -0,0 +1,5 @@
[[jdbc.examples-repo]]
= Examples Repository
:page-section-summary-toc: 1
There is a https://github.com/spring-projects/spring-data-examples[GitHub repository with several examples] that you can download and play around with to get a feel for how the library works.

src/main/antora/modules/ROOT/pages/jdbc/getting-started.adoc

@@ -0,0 +1,68 @@
[[jdbc.getting-started]]
= Getting Started
An easy way to bootstrap setting up a working environment is to create a Spring-based project in https://spring.io/tools[Spring Tools] or from https://start.spring.io[Spring Initializr].
First, you need to set up a running database server. Refer to your vendor documentation on how to configure your database for JDBC access.
[[requirements]]
== Requirements
Spring Data JDBC requires https://spring.io/docs[Spring Framework] {springVersion} and above.
In terms of databases, Spring Data JDBC requires a xref:jdbc/configuration.adoc#jdbc.dialects[dialect] to abstract common SQL functionality over vendor-specific flavours.
Spring Data JDBC includes direct support for the following databases:
* DB2
* H2
* HSQLDB
* MariaDB
* Microsoft SQL Server
* MySQL
* Oracle
* Postgres
If you use a different database, your application won't start up.
The xref:jdbc/configuration.adoc#jdbc.dialects[dialect] section contains further detail on how to proceed in such a case.
To create a Spring project in STS:
. Go to File -> New -> Spring Template Project -> Simple Spring Utility Project, and press Yes when prompted.
Then enter a project and a package name, such as `org.spring.jdbc.example`.
. Add the following to the `pom.xml` file's `dependencies` element:
+
[source,xml,subs="+attributes"]
----
<dependencies>
<!-- other dependency elements omitted -->
<dependency>
<groupId>org.springframework.data</groupId>
<artifactId>spring-data-jdbc</artifactId>
<version>{version}</version>
</dependency>
</dependencies>
----
. Change the version of Spring in the `pom.xml` to be
+
[source,xml,subs="+attributes"]
----
<spring.framework.version>{springVersion}</spring.framework.version>
----
. Add the following location of the Spring Milestone repository for Maven to your `pom.xml` such that it is at the same level as your `<dependencies/>` element:
+
[source,xml]
----
<repositories>
<repository>
<id>spring-milestone</id>
<name>Spring Maven MILESTONE Repository</name>
<url>https://repo.spring.io/milestone</url>
</repository>
</repositories>
----
The repository is also https://repo.spring.io/milestone/org/springframework/data/[browseable here].

src/main/antora/modules/ROOT/pages/jdbc/loading-aggregates.adoc

@@ -0,0 +1,28 @@
[[jdbc.loading-aggregates]]
= Loading Aggregates
Spring Data JDBC offers two ways of loading aggregates.
The traditional way, and before version 3.2 the only way, is really simple:
Each query loads the aggregate roots, regardless of whether the query is based on a `CrudRepository` method, a derived query, or an annotated query.
If the aggregate root references other entities, those are loaded with separate statements.
Spring Data JDBC now also allows the use of _Single Query Loading_.
With this, an arbitrary number of aggregates can be fully loaded with a single SQL query.
This should be significantly more efficient, especially for complex aggregates consisting of many entities.
Currently, this feature is very restricted.
1. It only works for aggregates that only reference one entity collection. The plan is to remove this constraint in the future.
2. The aggregate must also not use `AggregateReference` or embedded entities. The plan is to remove this constraint in the future.
3. The database dialect must support it. Of the dialects provided by Spring Data JDBC, all but H2 and HSQL support this. H2 and HSQL don't support analytic functions (also known as windowing functions).
4. It only works for the find methods in `CrudRepository`, not for derived queries and not for annotated queries. The plan is to remove this constraint in the future.
5. Single Query Loading needs to be enabled in the `JdbcMappingContext` by calling `setSingleQueryLoadingEnabled(true)`.
NOTE: Single Query Loading is to be considered experimental. We appreciate feedback on how it works for you.

NOTE: Single Query Loading can be abbreviated as SQL, but we highly discourage that, since confusion with Structured Query Language is almost guaranteed.
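The opt-in named in step 5 can be sketched as follows; how you obtain the `JdbcMappingContext` depends on your configuration style, so treat the direct instantiation here as illustrative:

```java
// Sketch: opting in to Single Query Loading on the mapping context.
// In a real application the JdbcMappingContext usually comes from your
// AbstractJdbcConfiguration subclass rather than being created directly.
JdbcMappingContext mappingContext = new JdbcMappingContext();
mappingContext.setSingleQueryLoadingEnabled(true);
```

Repositories created against a context configured this way use single-query loading for the eligible `CrudRepository` find methods.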

src/main/antora/modules/ROOT/pages/jdbc/locking.adoc

@@ -0,0 +1,28 @@
[[jdbc.locking]]
= JDBC Locking
Spring Data JDBC supports locking on derived query methods.
To enable locking on a given derived query method inside a repository, you annotate it with `@Lock`.
The required value of type `LockMode` offers two values: `PESSIMISTIC_READ` which guarantees that the data you are reading doesn't get modified and `PESSIMISTIC_WRITE` which obtains a lock to modify the data.
Some databases do not make this distinction.
In those cases, both modes are equivalent to `PESSIMISTIC_WRITE`.
.Using @Lock on derived query method
[source,java]
----
interface UserRepository extends CrudRepository<User, Long> {
@Lock(LockMode.PESSIMISTIC_READ)
List<User> findByLastname(String lastname);
}
----
As you can see above, the method `findByLastname(String lastname)` is executed with a pessimistic read lock. If you are using a database with the MySQL dialect, this results, for example, in the following query:
.Resulting SQL query for the MySQL dialect
[source,sql]
----
SELECT * FROM user u WHERE u.lastname = :lastname LOCK IN SHARE MODE
----
As an alternative to `LockMode.PESSIMISTIC_READ`, you can use `LockMode.PESSIMISTIC_WRITE`.
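A sketch of the same derived query acquiring an exclusive write lock instead:

```java
// Sketch: the derived query from above, this time with a write lock,
// suitable when the loaded rows are about to be modified.
interface UserRepository extends CrudRepository<User, Long> {

	@Lock(LockMode.PESSIMISTIC_WRITE)
	List<User> findByLastname(String lastname);
}
```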

src/main/antora/modules/ROOT/pages/jdbc/logging.adoc

@@ -0,0 +1,8 @@
[[jdbc.logging]]
= Logging
:page-section-summary-toc: 1
Spring Data JDBC does little to no logging on its own.
Instead, the mechanics of `JdbcTemplate` to issue SQL statements provide logging.
Thus, if you want to inspect what SQL statements are run, activate logging for Spring's {spring-framework-docs}/data-access.html#jdbc-JdbcTemplate[`NamedParameterJdbcTemplate`] or https://www.mybatis.org/mybatis-3/logging.html[MyBatis].

src/main/antora/modules/ROOT/pages/jdbc/mapping.adoc

@@ -0,0 +1,273 @@
[[mapping]]
= Mapping
Rich mapping support is provided by the `BasicJdbcConverter`. `BasicJdbcConverter` has a rich metadata model that allows mapping domain objects to a data row.
The mapping metadata model is populated by using annotations on your domain objects.
However, the infrastructure is not limited to using annotations as the only source of metadata information.
The `BasicJdbcConverter` also lets you map objects to rows without providing any additional metadata, by following a set of conventions.
This section describes the features of the `BasicJdbcConverter`, including how to use conventions for mapping objects to rows and how to override those conventions with annotation-based mapping metadata.
Read up on the basics of xref:object-mapping.adoc[] before continuing with this chapter.
[[mapping.conventions]]
== Convention-based Mapping
`BasicJdbcConverter` has a few conventions for mapping objects to rows when no additional mapping metadata is provided.
The conventions are:
* The short Java class name is mapped to the table name in the following manner.
The `com.bigbank.SavingsAccount` class maps to the `SAVINGS_ACCOUNT` table name.
The same name mapping is applied for mapping fields to column names.
For example, the `firstName` field maps to the `FIRST_NAME` column.
You can control this mapping by providing a custom `NamingStrategy`.
Table and column names that are derived from property or class names are used in SQL statements without quotes by default.
You can control this behavior by setting `JdbcMappingContext.setForceQuote(true)`.
* Nested objects are not supported.
* The converter uses any Spring Converters registered with it to override the default mapping of object properties to row columns and values.
* The fields of an object are used to convert to and from columns in the row.
Public `JavaBean` properties are not used.
* If you have a single non-zero-argument constructor whose constructor argument names match top-level column names of the row, that constructor is used.
Otherwise, the zero-argument constructor is used.
If there is more than one non-zero-argument constructor, an exception is thrown.
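The class-name and field-name convention from the first bullet can be illustrated with a small stand-alone sketch of the camel-case to upper-snake-case rule. This is an illustration only, not the actual `NamingStrategy` implementation:

```java
// Illustrative sketch of the default naming convention:
// camel-case Java names map to upper snake-case SQL names.
public class NamingConventionDemo {

	static String toSqlName(String javaName) {
		StringBuilder sql = new StringBuilder();
		for (char c : javaName.toCharArray()) {
			if (Character.isUpperCase(c) && sql.length() > 0) {
				sql.append('_'); // word boundary before each interior capital
			}
			sql.append(Character.toUpperCase(c));
		}
		return sql.toString();
	}

	public static void main(String[] args) {
		System.out.println(toSqlName("SavingsAccount")); // SAVINGS_ACCOUNT
		System.out.println(toSqlName("firstName"));      // FIRST_NAME
	}
}
```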
[[jdbc.entity-persistence.types]]
== Supported Types in Your Entity
The properties of the following types are currently supported:
* All primitive types and their boxed types (`int`, `float`, `Integer`, `Float`, and so on)
* Enums get mapped to their name.
* `String`
* `java.util.Date`, `java.time.LocalDate`, `java.time.LocalDateTime`, and `java.time.LocalTime`
* Arrays and Collections of the types mentioned above can be mapped to columns of array type if your database supports that.
* Anything your database driver accepts.
* References to other entities.
They are considered a one-to-one relationship, or an embedded type.
It is optional for one-to-one relationship entities to have an `id` attribute.
The table of the referenced entity is expected to have an additional column with a name based on the referencing entity; see xref:jdbc/entity-persistence.adoc#jdbc.entity-persistence.types.backrefs[Back References].
Embedded entities do not need an `id`.
If one is present it gets ignored.
* `Set<some entity>` is considered a one-to-many relationship.
The table of the referenced entity is expected to have an additional column with a name based on the referencing entity; see xref:jdbc/entity-persistence.adoc#jdbc.entity-persistence.types.backrefs[Back References].
* `Map<simple type, some entity>` is considered a qualified one-to-many relationship.
The table of the referenced entity is expected to have two additional columns: One named based on the referencing entity for the foreign key (see xref:jdbc/entity-persistence.adoc#jdbc.entity-persistence.types.backrefs[Back References]) and one with the same name and an additional `_key` suffix for the map key.
You can change this behavior by implementing `NamingStrategy.getReverseColumnName(PersistentPropertyPathExtension path)` and `NamingStrategy.getKeyColumn(RelationalPersistentProperty property)`, respectively.
Alternatively you may annotate the attribute with `@MappedCollection(idColumn="your_column_name", keyColumn="your_key_column_name")`
* `List<some entity>` is mapped as a `Map<Integer, some entity>`.
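The relationship rules above can be sketched with a hypothetical aggregate (all entity names here are made up for illustration):

```java
// Hypothetical aggregate illustrating the relationship mappings listed above.
class PurchaseOrder {

	@Id Long id;

	ShippingAddress address;       // one-to-one: SHIPPING_ADDRESS carries a back-reference column

	Set<OrderItem> items;          // one-to-many: ORDER_ITEM carries a back-reference column

	Map<String, Attachment> files; // qualified one-to-many: ATTACHMENT carries a back-reference
	                               // column plus a column with the additional _KEY suffix
}
```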
[[jdbc.entity-persistence.types.referenced-entities]]
=== Referenced Entities
The handling of referenced entities is limited.
This is based on the idea of aggregate roots as described above.
If you reference another entity, that entity is, by definition, part of your aggregate.
So, if you remove the reference, the previously referenced entity gets deleted.
This also means references are 1-1 or 1-n, but not n-1 or n-m.
If you have n-1 or n-m references, you are, by definition, dealing with two separate aggregates.
References between those may be encoded as simple `id` values, which map properly with Spring Data JDBC.
A better way to encode these is to make them instances of `AggregateReference`.
An `AggregateReference` is a wrapper around an id value which marks that value as a reference to a different aggregate.
Also, the type of that aggregate is encoded in a type parameter.
[[jdbc.entity-persistence.types.backrefs]]
=== Back References
All references in an aggregate result in a foreign key relationship in the opposite direction in the database.
By default, the name of the foreign key column is the table name of the referencing entity.
Alternatively, you may choose to have them named by the entity name of the referencing entity, ignoring `@Table` annotations.
You activate this behavior by calling `setForeignKeyNaming(ForeignKeyNaming.IGNORE_RENAMING)` on the `RelationalMappingContext`.
For `List` and `Map` references an additional column is required for holding the list index or map key.
It is based on the foreign key column with an additional `_KEY` suffix.
If you want a completely different way of naming these back references you may implement `NamingStrategy.getReverseColumnName(PersistentPropertyPathExtension path)` in a way that fits your needs.
.Declaring and setting an `AggregateReference`
[source,java]
----
class Person {
@Id long id;
AggregateReference<Person, Long> bestFriend;
}
// ...
Person p1, p2; // some initialization

p1.bestFriend = AggregateReference.to(p2.id);
----
* Types for which you registered suitable xref:jdbc/custom-conversions.adoc[custom conversions].
[[jdbc.entity-persistence.naming-strategy]]
== `NamingStrategy`
When you use the standard implementations of `CrudRepository` that Spring Data JDBC provides, they expect a certain table structure.
You can tweak that by providing a {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/NamingStrategy.html[`NamingStrategy`] in your application context.
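A minimal sketch of such a bean, overriding only table naming (the `APP_` prefix is made up for illustration; `NamingStrategy` provides defaults for everything you don't override):

```java
// Sketch: a NamingStrategy bean that prefixes every derived table name.
// Only getTableName is overridden; all other names keep their defaults.
@Bean
NamingStrategy namingStrategy() {
	return new NamingStrategy() {

		@Override
		public String getTableName(Class<?> type) {
			return "APP_" + type.getSimpleName().toUpperCase();
		}
	};
}
```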
[[jdbc.entity-persistence.custom-table-name]]
== Custom Table Names
When the `NamingStrategy` does not match your database table names, you can customize the names with the {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/Table.html[`@Table`] annotation.
The element `value` of this annotation provides the custom table name.
The following example maps the `MyEntity` class to the `CUSTOM_TABLE_NAME` table in the database:
[source,java]
----
@Table("CUSTOM_TABLE_NAME")
class MyEntity {
@Id
Integer id;
String name;
}
----
[[jdbc.entity-persistence.custom-column-name]]
== Custom Column Names
When the `NamingStrategy` does not match your database column names, you can customize the names with the {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/Column.html[`@Column`] annotation.
The element `value` of this annotation provides the custom column name.
The following example maps the `name` property of the `MyEntity` class to the `CUSTOM_COLUMN_NAME` column in the database:
[source,java]
----
class MyEntity {
@Id
Integer id;
@Column("CUSTOM_COLUMN_NAME")
String name;
}
----
The {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/MappedCollection.html[`@MappedCollection`]
annotation can be used on a reference type (one-to-one relationship) or on Sets, Lists, and Maps (one-to-many relationship).
The `idColumn` element of the annotation provides a custom name for the foreign key column referencing the ID column in the other table.
In the following example, the corresponding table for the `MySubEntity` class has a `NAME` column and a `CUSTOM_MY_ENTITY_ID_COLUMN_NAME` column, which references the `id` of `MyEntity`:
[source,java]
----
class MyEntity {
@Id
Integer id;
@MappedCollection(idColumn = "CUSTOM_MY_ENTITY_ID_COLUMN_NAME")
Set<MySubEntity> subEntities;
}
class MySubEntity {
String name;
}
----
When using `List` and `Map`, you must have an additional column for the position of an element in the `List` or the key value of the entity in the `Map`.
This additional column name may be customized with the `keyColumn` element of the {spring-data-jdbc-javadoc}org/springframework/data/relational/core/mapping/MappedCollection.html[`@MappedCollection`] annotation:
[source,java]
----
class MyEntity {
@Id
Integer id;
@MappedCollection(idColumn = "CUSTOM_COLUMN_NAME", keyColumn = "CUSTOM_KEY_COLUMN_NAME")
List<MySubEntity> name;
}
class MySubEntity {
String name;
}
----
[[jdbc.entity-persistence.embedded-entities]]
== Embedded entities
Embedded entities are used to have value objects in your Java data model, even if there is only one table in your database.
In the following example you see that `MyEntity` is mapped with the `@Embedded` annotation.
The consequence of this is that, in the database, a table `my_entity` with the two columns `id` and `name` (from the `EmbeddedEntity` class) is expected.
However, if the `name` column is actually `null` within the result set, the entire `embeddedEntity` property is set to `null` according to the `onEmpty` element of `@Embedded`, which ``null``s objects when all nested properties are `null`. +
Opposite to this behavior, `USE_EMPTY` tries to create a new instance using either a default constructor or one that accepts nullable parameter values from the result set.
.Sample Code of embedding objects
====
[source,java]
----
class MyEntity {
@Id
Integer id;
@Embedded(onEmpty = USE_NULL) <1>
EmbeddedEntity embeddedEntity;
}
class EmbeddedEntity {
String name;
}
----
<1> ``Null``s `embeddedEntity` if `name` is `null`.
Use `USE_EMPTY` to instantiate `embeddedEntity` with a potential `null` value for the `name` property.
====
If you need a value object multiple times in an entity, this can be achieved with the optional `prefix` element of the `@Embedded` annotation.
This element represents a prefix that is prepended to each column name of the embedded object.
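A sketch of embedding the same type twice, assuming the `EmbeddedEntity` from above (the field and prefix names are made up for illustration):

```java
// Sketch: embedding the same value object twice, disambiguated by prefixes.
// Expects columns id, per_name, and org_name in the my_entity table.
class MyEntity {

	@Id Integer id;

	@Embedded(onEmpty = USE_NULL, prefix = "per_")
	EmbeddedEntity personal;

	@Embedded(onEmpty = USE_NULL, prefix = "org_")
	EmbeddedEntity organizational;
}
```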
[TIP]
====
Make use of the shortcuts `@Embedded.Nullable` & `@Embedded.Empty` for `@Embedded(onEmpty = USE_NULL)` and `@Embedded(onEmpty = USE_EMPTY)` to reduce verbosity and simultaneously set JSR-305 `@javax.annotation.Nonnull` accordingly.
[source,java]
----
class MyEntity {
@Id
Integer id;
@Embedded.Nullable <1>
EmbeddedEntity embeddedEntity;
}
----
<1> Shortcut for `@Embedded(onEmpty = USE_NULL)`.
====
Embedded entities containing a `Collection` or a `Map` are always considered non-empty, since they at least contain the empty collection or map.
Such an entity is therefore never `null`, even when using `@Embedded(onEmpty = USE_NULL)`.
[[jdbc.entity-persistence.read-only-properties]]
== Read Only Properties
Attributes annotated with `@ReadOnlyProperty` will not be written to the database by Spring Data JDBC, but they will be read when an entity gets loaded.
Spring Data JDBC will not automatically reload an entity after writing it.
Therefore, you have to reload it explicitly if you want to see data that was generated in the database for such columns.
If the annotated attribute is an entity or collection of entities, it is represented by one or more separate rows in separate tables.
Spring Data JDBC will not perform any insert, delete or update for these rows.
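A sketch, assuming a timestamp column populated by the database (for example via a `DEFAULT` clause or a trigger):

```java
// Sketch: createdAt is populated by the database, read on load,
// and never included in INSERT or UPDATE statements.
class Order {

	@Id Long id;

	String state;

	@ReadOnlyProperty
	Instant createdAt; // reload the entity to see the database-generated value
}
```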
[[jdbc.entity-persistence.insert-only-properties]]
== Insert Only Properties
Attributes annotated with `@InsertOnlyProperty` will only be written to the database by Spring Data JDBC during insert operations.
For updates these properties will be ignored.
`@InsertOnlyProperty` is only supported for the aggregate root.
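A sketch analogous to the read-only case, with a hypothetical aggregate root:

```java
// Sketch: createdAt is written once on INSERT and ignored on every UPDATE.
class AuditedOrder {

	@Id Long id;

	String state;

	@InsertOnlyProperty
	Instant createdAt;
}
```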

src/main/antora/modules/ROOT/pages/jdbc/mybatis.adoc

@@ -0,0 +1,120 @@
[[jdbc.mybatis]]
= MyBatis Integration
The CRUD operations and query methods can be delegated to MyBatis.
This section describes how to configure Spring Data JDBC to integrate with MyBatis and which conventions to follow to hand over the running of the queries as well as the mapping to the library.
[[jdbc.mybatis.configuration]]
== Configuration
The easiest way to properly plug MyBatis into Spring Data JDBC is by importing `MyBatisJdbcConfiguration` into your application configuration:
[source,java]
----
@Configuration
@EnableJdbcRepositories
@Import(MyBatisJdbcConfiguration.class)
class Application {
@Bean
SqlSessionFactoryBean sqlSessionFactoryBean() {
// Configure MyBatis here
}
}
----
As you can see, all you need to declare is a `SqlSessionFactoryBean` as `MyBatisJdbcConfiguration` relies on a `SqlSession` bean to be available in the `ApplicationContext` eventually.
[[jdbc.mybatis.conventions]]
== Usage conventions
For each operation in `CrudRepository`, Spring Data JDBC runs multiple statements.
If there is a https://github.com/mybatis/mybatis-3/blob/master/src/main/java/org/apache/ibatis/session/SqlSessionFactory.java[`SqlSessionFactory`] in the application context, Spring Data checks, for each step, whether the `SessionFactory` offers a statement.
If one is found, that statement (including its configured mapping to an entity) is used.
The name of the statement is constructed by concatenating the fully qualified name of the entity type with `Mapper.` and a `String` determining the kind of statement.
For example, if an instance of `org.example.User` is to be inserted, Spring Data JDBC looks for a statement named `org.example.UserMapper.insert`.
When the statement is run, an instance of `MyBatisContext` gets passed as an argument, which makes various arguments available to the statement.
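The naming rule can be sketched as a tiny helper. This is an illustration of the convention only, not part of the Spring Data JDBC API:

```java
// Illustrative sketch of the statement-name convention described above:
// fully qualified entity name + "Mapper." + kind of statement.
public class StatementNameDemo {

	static String statementName(Class<?> entityType, String operation) {
		return entityType.getName() + "Mapper." + operation;
	}

	public static void main(String[] args) {
		// e.g. inserting an org.example.User looks up "org.example.UserMapper.insert"
		System.out.println(statementName(java.util.Date.class, "insert"));
	}
}
```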
The following table describes the available MyBatis statements:
[cols="default,default,default,asciidoc"]
|===
| Name | Purpose | CrudRepository methods that might trigger this statement | Attributes available in the `MyBatisContext`
| `insert` | Inserts a single entity. This also applies for entities referenced by the aggregate root. | `save`, `saveAll`. |
`getInstance`: the instance to be saved
`getDomainType`: The type of the entity to be saved.
`get(<key>)`: ID of the referencing entity, where `<key>` is the name of the back reference column provided by the `NamingStrategy`.
| `update` | Updates a single entity. This also applies for entities referenced by the aggregate root. | `save`, `saveAll`.|
`getInstance`: The instance to be saved
`getDomainType`: The type of the entity to be saved.
| `delete` | Deletes a single entity. | `delete`, `deleteById`.|
`getId`: The ID of the instance to be deleted
`getDomainType`: The type of the entity to be deleted.
| `deleteAll-<propertyPath>` | Deletes all entities referenced by any aggregate root of the type used as prefix with the given property path.
Note that the type used for prefixing the statement name is the name of the aggregate root, not the one of the entity to be deleted. | `deleteAll`.|
`getDomainType`: The types of the entities to be deleted.
| `deleteAll` | Deletes all aggregate roots of the type used as the prefix | `deleteAll`.|
`getDomainType`: The type of the entities to be deleted.
| `delete-<propertyPath>` | Deletes all entities referenced by an aggregate root with the given propertyPath | `deleteById`.|
`getId`: The ID of the aggregate root for which referenced entities are to be deleted.
`getDomainType`: The type of the entities to be deleted.
| `findById` | Selects an aggregate root by ID | `findById`.|
`getId`: The ID of the entity to load.
`getDomainType`: The type of the entity to load.
| `findAll` | Select all aggregate roots | `findAll`.|
`getDomainType`: The type of the entity to load.
| `findAllById` | Select a set of aggregate roots by ID values | `findAllById`.|
`getId`: A list of ID values of the entities to load.
`getDomainType`: The type of the entity to load.
| `findAllByProperty-<propertyName>` | Select a set of entities that is referenced by another entity. The type of the referencing entity is used for the prefix. The referenced entities type is used as the suffix. _This method is deprecated. Use `findAllByPath` instead_ | All `find*` methods. If no query is defined for `findAllByPath`|
`getId`: The ID of the entity referencing the entities to be loaded.
`getDomainType`: The type of the entity to load.
| `findAllByPath-<propertyPath>` | Select a set of entities that is referenced by another entity via a property path. | All `find*` methods.|
`getIdentifier`: The `Identifier` holding the id of the aggregate root plus the keys and list indexes of all path elements.
`getDomainType`: The type of the entity to load.
| `findAllSorted` | Select all aggregate roots, sorted | `findAll(Sort)`.|
`getSort`: The sorting specification.
| `findAllPaged` | Select a page of aggregate roots, optionally sorted | `findAll(Page)`.|
`getPageable`: The paging specification.
| `count` | Count the number of aggregate root of the type used as prefix | `count` |
`getDomainType`: The type of aggregate roots to count.
|===

src/main/antora/modules/ROOT/pages/jdbc/query-methods.adoc

@@ -0,0 +1,251 @@
[[jdbc.query-methods]]
= Query Methods
This section offers some specific information about the implementation and use of Spring Data JDBC.
Most of the data access operations you usually trigger on a repository result in a query being run against the databases.
Defining such a query is a matter of declaring a method on the repository interface, as the following example shows:
.PersonRepository with query methods
[source,java]
----
interface PersonRepository extends PagingAndSortingRepository<Person, String> {
List<Person> findByFirstname(String firstname); <1>
List<Person> findByFirstnameOrderByLastname(String firstname, Pageable pageable); <2>
Slice<Person> findByLastname(String lastname, Pageable pageable); <3>
Page<Person> findByLastname(String lastname, Pageable pageable); <4>
Person findByFirstnameAndLastname(String firstname, String lastname); <5>
Person findFirstByLastname(String lastname); <6>
@Query("SELECT * FROM person WHERE lastname = :lastname")
List<Person> findByLastname(String lastname); <7>
@Query("SELECT * FROM person WHERE lastname = :lastname")
Stream<Person> streamByLastname(String lastname); <8>
@Query("SELECT * FROM person WHERE username = :#{ principal?.username }")
Person findActiveUser(); <9>
}
----
<1> The method shows a query for all people with the given `firstname`.
The query is derived by parsing the method name for constraints that can be concatenated with `And` and `Or`.
Thus, the method name results in a query expression of `SELECT … FROM person WHERE firstname = :firstname`.
<2> Use `Pageable` to pass offset and sorting parameters to the database.
<3> Return a `Slice<Person>`. Selects `LIMIT+1` rows to determine whether there's more data to consume. `ResultSetExtractor` customization is not supported.
<4> Run a paginated query returning `Page<Person>`. Selects only data within the given page bounds and potentially runs a count query to determine the total count. `ResultSetExtractor` customization is not supported.
<5> Find a single entity for the given criteria.
It completes with `IncorrectResultSizeDataAccessException` on non-unique results.
<6> In contrast to <5>, the first entity is always emitted even if the query yields more results.
<7> The `findByLastname` method shows a query for all people with the given `lastname`.
<8> The `streamByLastname` method returns a `Stream`, which emits values as soon as they are returned from the database.
<9> You can use the Spring Expression Language to dynamically resolve parameters.
In the sample, Spring Security is used to resolve the username of the current user.
The following table shows the keywords that are supported for query methods:
[cols="1,2,3",options="header",subs="quotes"]
.Supported keywords for query methods
|===
| Keyword
| Sample
| Logical result
| `After`
| `findByBirthdateAfter(Date date)`
| `birthdate > date`
| `GreaterThan`
| `findByAgeGreaterThan(int age)`
| `age > age`
| `GreaterThanEqual`
| `findByAgeGreaterThanEqual(int age)`
| `age >= age`
| `Before`
| `findByBirthdateBefore(Date date)`
| `birthdate < date`
| `LessThan`
| `findByAgeLessThan(int age)`
| `age < age`
| `LessThanEqual`
| `findByAgeLessThanEqual(int age)`
| `age \<= age`
| `Between`
| `findByAgeBetween(int from, int to)`
| `age BETWEEN from AND to`
| `NotBetween`
| `findByAgeNotBetween(int from, int to)`
| `age NOT BETWEEN from AND to`
| `In`
| `findByAgeIn(Collection<Integer> ages)`
| `age IN (age1, age2, ageN)`
| `NotIn`
| `findByAgeNotIn(Collection ages)`
| `age NOT IN (age1, age2, ageN)`
| `IsNotNull`, `NotNull`
| `findByFirstnameNotNull()`
| `firstname IS NOT NULL`
| `IsNull`, `Null`
| `findByFirstnameNull()`
| `firstname IS NULL`
| `Like`, `StartingWith`, `EndingWith`
| `findByFirstnameLike(String name)`
| `firstname LIKE name`
| `NotLike`, `IsNotLike`
| `findByFirstnameNotLike(String name)`
| `firstname NOT LIKE name`
| `Containing` on String
| `findByFirstnameContaining(String name)`
| `firstname LIKE '%' + name + '%'`
| `NotContaining` on String
| `findByFirstnameNotContaining(String name)`
| `firstname NOT LIKE '%' + name + '%'`
| `(No keyword)`
| `findByFirstname(String name)`
| `firstname = name`
| `Not`
| `findByFirstnameNot(String name)`
| `firstname != name`
| `IsTrue`, `True`
| `findByActiveIsTrue()`
| `active IS TRUE`
| `IsFalse`, `False`
| `findByActiveIsFalse()`
| `active IS FALSE`
|===
NOTE: Query derivation is limited to properties that can be used in a `WHERE` clause without using joins.
[[jdbc.query-methods.strategies]]
== Query Lookup Strategies
The JDBC module supports defining a query manually as a String in a `@Query` annotation or as named query in a property file.
Deriving a query from the name of the method is currently limited to simple properties, that is, properties present in the aggregate root directly.
Also, only select queries are supported by this approach.
[[jdbc.query-methods.at-query]]
== Using `@Query`
The following example shows how to use `@Query` to declare a query method:
.Declare a query method by using @Query
[source,java]
----
interface UserRepository extends CrudRepository<User, Long> {
@Query("select firstName, lastName from User u where u.emailAddress = :email")
User findByEmailAddress(@Param("email") String email);
}
----
For converting the query result into entities the same `RowMapper` is used by default as for the queries Spring Data JDBC generates itself.
The query you provide must match the format the `RowMapper` expects.
Columns for all properties that are used in the constructor of an entity must be provided.
Columns for properties that get set via setter, wither or field access are optional.
Properties that don't have a matching column in the result will not be set.
The query is used for populating the aggregate root, embedded entities and one-to-one relationships including arrays of primitive types which get stored and loaded as SQL-array-types.
Separate queries are generated for maps, lists, sets and arrays of entities.
NOTE: Spring fully supports Java 8’s parameter name discovery based on the `-parameters` compiler flag.
By using this flag in your build as an alternative to debug information, you can omit the `@Param` annotation for named parameters.
NOTE: Spring Data JDBC supports only named parameters.
[[jdbc.query-methods.named-query]]
== Named Queries
If no query is given in an annotation as described in the previous section, Spring Data JDBC tries to locate a named query.
The name of the query can be determined in two ways.
The default is to take the simple name of the _domain class_ of the query (that is, the aggregate root of the repository) and append the name of the method, separated by a `.`.
Alternatively, the `@Query` annotation has a `name` attribute, which can be used to specify the name of the query to be looked up.
Named queries are expected to be provided in the property file `META-INF/jdbc-named-queries.properties` on the classpath.
The location of that file may be changed by setting a value to `@EnableJdbcRepositories.namedQueriesLocation`.
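Following the default naming rule, a named query for a hypothetical `findByEmailAddress(…)` method on a repository whose aggregate root is `User` could look like this in `META-INF/jdbc-named-queries.properties` (table and column names here are assumptions for illustration):

[source,properties]
----
User.findByEmailAddress=SELECT * FROM USER WHERE EMAIL_ADDRESS = :email
----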
[[jdbc.query-methods.at-query.streaming-results]]
=== Streaming Results
When you specify `Stream` as the return type of a query method, Spring Data JDBC returns elements as soon as they become available.
When dealing with large amounts of data this is suitable for reducing latency and memory requirements.
The stream contains an open connection to the database.
To avoid memory leaks, that connection needs to be closed eventually, by closing the stream.
The recommended way to do that is a try-with-resources block.
It also means that, once the connection to the database is closed, the stream cannot obtain further elements and likely throws an exception.
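The lifecycle can be illustrated with plain `java.util.stream` machinery. In this sketch a boolean flag stands in for the database connection, and the repository-style method name is hypothetical:

[source,java]
----
import java.util.stream.Stream;

public class StreamingDemo {

    // Stands in for the connection that backs the stream
    static boolean connectionOpen;

    // Stands in for a repository method returning Stream<T>
    static Stream<String> findAllNames() {
        connectionOpen = true;
        // onClose releases the underlying resource when the stream is closed
        return Stream.of("Ada", "Grace").onClose(() -> connectionOpen = false);
    }

    public static void main(String[] args) {
        try (Stream<String> names = findAllNames()) {
            names.forEach(System.out::println);
        } // try-with-resources closes the stream and thus the "connection"
        System.out.println("connection open: " + connectionOpen);
    }
}
----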
[[jdbc.query-methods.at-query.custom-rowmapper]]
=== Custom `RowMapper`
You can configure which `RowMapper` to use, either by using `@Query(rowMapperClass = ....)` or by registering a `RowMapperMap` bean with a `RowMapper` per method return type.
The following example shows how to register `DefaultQueryMappingConfiguration`:
[source,java]
----
@Bean
QueryMappingConfiguration rowMappers() {
return new DefaultQueryMappingConfiguration()
.register(Person.class, new PersonRowMapper())
.register(Address.class, new AddressRowMapper());
}
----
When determining which `RowMapper` to use for a method, the following steps are followed, based on the return type of the method:
. If the type is a simple type, no `RowMapper` is used.
+
Instead, the query is expected to return a single row with a single column, and a conversion to the return type is applied to that value.
. The entity classes in the `QueryMappingConfiguration` are iterated until one is found that is a superclass or interface of the return type in question.
The `RowMapper` registered for that class is used.
+
Iterating happens in the order of registration, so make sure to register more general types after specific ones.
If applicable, wrapper types such as collections or `Optional` are unwrapped.
Thus, a return type of `Optional<Person>` uses the `Person` type in the preceding process.
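The selection order can be sketched as follows. The classes and mapper names are hypothetical; the point is that the first registration whose entity class is assignable from the return type wins, which is why general types must be registered after specific ones:

[source,java]
----
import java.util.List;

public class MapperLookupDemo {

    static class Person { }
    static class Employee extends Person { }

    record Registration(Class<?> type, String mapperName) { }

    // First registration whose entity class is a superclass/interface of the return type wins
    static String findMapper(List<Registration> registrations, Class<?> returnType) {
        for (Registration registration : registrations) {
            if (registration.type().isAssignableFrom(returnType)) {
                return registration.mapperName();
            }
        }
        return "defaultRowMapper";
    }

    public static void main(String[] args) {
        List<Registration> specificFirst = List.of(
                new Registration(Employee.class, "EmployeeRowMapper"),
                new Registration(Person.class, "PersonRowMapper"));
        System.out.println(findMapper(specificFirst, Employee.class)); // EmployeeRowMapper

        List<Registration> generalFirst = List.of(
                new Registration(Person.class, "PersonRowMapper"),
                new Registration(Employee.class, "EmployeeRowMapper"));
        // The general Person mapper shadows the more specific Employee mapper
        System.out.println(findMapper(generalFirst, Employee.class)); // PersonRowMapper
    }
}
----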
NOTE: Using a custom `RowMapper` through `QueryMappingConfiguration`, `@Query(rowMapperClass=…)`, or a custom `ResultSetExtractor` disables Entity Callbacks and Lifecycle Events as the result mapping can issue its own events/callbacks if needed.
[[jdbc.query-methods.at-query.modifying]]
=== Modifying Query
You can mark a query as being a modifying query by using `@Modifying` on a query method, as the following example shows:
[source,java]
----
@Modifying
@Query("UPDATE DUMMYENTITY SET name = :name WHERE id = :id")
boolean updateName(@Param("id") Long id, @Param("name") String name);
----
You can specify the following return types:
* `void`
* `int` (updated record count)
* `boolean` (whether a record was updated)
Modifying queries are executed directly against the database.
No events or callbacks get called.
Consequently, fields with auditing annotations are not updated unless the annotated query itself updates them.
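The three return types map onto the JDBC update count in a straightforward way. The following sketch uses a hard-coded count in place of a real statement execution; the method name is hypothetical:

[source,java]
----
public class ModifyingReturnTypesDemo {

    // Stands in for the update count a real execution would report
    static int executeUpdate(String sql) {
        return 1; // pretend exactly one row matched
    }

    public static void main(String[] args) {
        int updatedRows = executeUpdate("UPDATE DUMMYENTITY SET name = 'x' WHERE id = 1");

        // int return type: the updated record count
        System.out.println("updated rows: " + updatedRows);

        // boolean return type: whether at least one record was updated
        System.out.println("any updated: " + (updatedRows != 0));

        // void return type: the count is simply discarded
    }
}
----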

0
src/main/asciidoc/schema-support.adoc → src/main/antora/modules/ROOT/pages/jdbc/schema-support.adoc

92
src/main/antora/modules/ROOT/pages/jdbc/transactions.adoc

@@ -0,0 +1,92 @@
[[jdbc.transactions]]
= Transactionality
The methods of `CrudRepository` instances are transactional by default.
For reading operations, the transaction configuration `readOnly` flag is set to `true`.
All others are configured with a plain `@Transactional` annotation so that default transaction configuration applies.
For details, see the Javadoc of link:{spring-data-jdbc-javadoc}org/springframework/data/jdbc/repository/support/SimpleJdbcRepository.html[`SimpleJdbcRepository`].
If you need to tweak transaction configuration for one of the methods declared in a repository, redeclare the method in your repository interface, as follows:
.Custom transaction configuration for CRUD
[source,java]
----
interface UserRepository extends CrudRepository<User, Long> {
@Override
@Transactional(timeout = 10)
List<User> findAll();
// Further query method declarations
}
----
The preceding causes the `findAll()` method to be run with a timeout of 10 seconds and without the `readOnly` flag.
Another way to alter transactional behavior is by using a facade or service implementation that typically covers more than one repository.
Its purpose is to define transactional boundaries for non-CRUD operations.
The following example shows how to create such a facade:
.Using a facade to define transactions for multiple repository calls
[source,java]
----
@Service
public class UserManagementImpl implements UserManagement {
private final UserRepository userRepository;
private final RoleRepository roleRepository;
UserManagementImpl(UserRepository userRepository,
RoleRepository roleRepository) {
this.userRepository = userRepository;
this.roleRepository = roleRepository;
}
@Transactional
public void addRoleToAllUsers(String roleName) {
Role role = roleRepository.findByName(roleName);
for (User user : userRepository.findAll()) {
user.addRole(role);
userRepository.save(user);
}
}
}
----
The preceding example causes calls to `addRoleToAllUsers(…)` to run inside a transaction (participating in an existing one or creating a new one if none are already running).
The individual transaction configuration of the repositories is neglected, as the outer transaction configuration determines the actual one used.
Note that you have to explicitly activate `<tx:annotation-driven />` or use `@EnableTransactionManagement` to get annotation-based configuration for facades working.
Note that the preceding example assumes you use component scanning.
[[jdbc.transaction.query-methods]]
== Transactional Query Methods
To let your query methods be transactional, use `@Transactional` at the repository interface you define, as the following example shows:
.Using @Transactional at query methods
[source,java]
----
@Transactional(readOnly = true)
interface UserRepository extends CrudRepository<User, Long> {
List<User> findByLastname(String lastname);
@Modifying
@Transactional
@Query("delete from User u where u.active = false")
void deleteInactiveUsers();
}
----
Typically, you want the `readOnly` flag to be set to true, because most of the query methods only read data.
In contrast to that, `deleteInactiveUsers()` uses the `@Modifying` annotation and overrides the transaction configuration.
Thus, the method runs with the `readOnly` flag set to `false`.
NOTE: It is highly recommended to make query methods transactional. These methods might execute more than one query in order to populate an entity.
Without a common transaction, Spring Data JDBC executes the queries in different connections.
This may put excessive strain on the connection pool and might even lead to deadlocks when multiple methods request a fresh connection while holding on to one.
NOTE: It is definitely reasonable to mark read-only queries as such by setting the `readOnly` flag.
This does not, however, act as a check that you do not trigger a manipulating query (although some databases reject `INSERT` and `UPDATE` statements inside a read-only transaction).
Instead, the `readOnly` flag is propagated as a hint to the underlying JDBC driver for performance optimizations.

31
src/main/antora/modules/ROOT/pages/jdbc/why.adoc

@@ -0,0 +1,31 @@
[[jdbc.why]]
= Why Spring Data JDBC?
The main persistence API for relational databases in the Java world is certainly JPA, which has its own Spring Data module.
Why is there another one?
JPA does a lot of things in order to help the developer.
Among other things, it tracks changes to entities.
It does lazy loading for you.
It lets you map a wide array of object constructs to an equally wide array of database designs.
This is great and makes a lot of things really easy.
Just take a look at a basic JPA tutorial.
But it often gets really confusing as to why JPA does a certain thing.
Also, things that are really simple conceptually get rather difficult with JPA.
Spring Data JDBC aims to be much simpler conceptually, by embracing the following design decisions:
* If you load an entity, SQL statements get run.
Once this is done, you have a completely loaded entity.
No lazy loading or caching is done.
* If you save an entity, it gets saved.
If you do not, it does not.
There is no dirty tracking and no session.
* There is a simple model of how to map entities to tables.
It probably only works for rather simple cases.
If you do not like that, you should code your own strategy.
Spring Data JDBC offers only very limited support for customizing the strategy with annotations.

1
src/main/antora/modules/ROOT/pages/kotlin.adoc

@@ -0,0 +1 @@
include::{commons}@data-commons::page$kotlin.adoc[]

1
src/main/antora/modules/ROOT/pages/kotlin/coroutines.adoc

@@ -0,0 +1 @@
include::{commons}@data-commons::page$kotlin/coroutines.adoc[]

1
src/main/antora/modules/ROOT/pages/kotlin/extensions.adoc

@@ -0,0 +1 @@
include::{commons}@data-commons::page$kotlin/extensions.adoc[]

1
src/main/antora/modules/ROOT/pages/kotlin/null-safety.adoc

@@ -0,0 +1 @@
include::{commons}@data-commons::page$kotlin/null-safety.adoc[]

1
src/main/antora/modules/ROOT/pages/kotlin/object-mapping.adoc

@@ -0,0 +1 @@
include::{commons}@data-commons::page$kotlin/object-mapping.adoc[]

1
src/main/antora/modules/ROOT/pages/kotlin/requirements.adoc

@@ -0,0 +1 @@
include::{commons}@data-commons::page$kotlin/requirements.adoc[]

1
src/main/antora/modules/ROOT/pages/object-mapping.adoc

@@ -0,0 +1 @@
include::{commons}@data-commons::page$object-mapping.adoc[]

10
src/main/asciidoc/query-by-example.adoc → src/main/antora/modules/ROOT/pages/query-by-example.adoc

@@ -1,11 +1,12 @@
[[query-by-example.running]]
== Running an Example
= Query by Example
In Spring Data JDBC, you can use Query by Example with Repositories, as shown in the following example:
include::{commons}@data-commons::page$query-by-example.adoc[leveloffset=+1]
In Spring Data JDBC and R2DBC, you can use Query by Example with Repositories, as shown in the following example:
.Query by Example using a Repository
====
[source, java]
[source,java]
----
public interface PersonRepository
extends CrudRepository<Person, String>,
@@ -20,7 +21,6 @@ public class PersonService {
}
}
----
====
NOTE: Currently, only `SingularAttribute` properties can be used for property matching.

16
src/main/antora/modules/ROOT/pages/r2dbc.adoc

@@ -0,0 +1,16 @@
[[r2dbc.repositories]]
= R2DBC
:page-section-summary-toc: 1
The Spring Data R2DBC module applies core Spring concepts to the development of solutions that use R2DBC database drivers aligned with xref:jdbc/domain-driven-design.adoc[Domain-driven design principles].
We provide a "`template`" as a high-level abstraction for storing and querying aggregates.
This document is the reference guide for Spring Data R2DBC support.
It explains concepts, semantics, and syntax.
This chapter points out the specialties of repository support for R2DBC.
This builds on the core repository support explained in xref:repositories/introduction.adoc[Working with Spring Data Repositories].
You should have a sound understanding of the basic concepts explained there.

4
spring-data-r2dbc/src/main/asciidoc/reference/r2dbc-auditing.adoc → src/main/antora/modules/ROOT/pages/r2dbc/auditing.adoc

@@ -1,10 +1,9 @@
[[r2dbc.auditing]]
== General Auditing Configuration for R2DBC
= Auditing
Since Spring Data R2DBC 1.2, auditing can be enabled by annotating a configuration class with the `@EnableR2dbcAuditing` annotation, as the following example shows:
.Activating auditing using JavaConfig
====
[source,java]
----
@Configuration
@@ -17,7 +16,6 @@ class Config {
}
}
----
====
If you expose a bean of type `ReactiveAuditorAware` to the `ApplicationContext`, the auditing infrastructure picks it up automatically and uses it to determine the current user to be set on domain types.
If you have multiple implementations registered in the `ApplicationContext`, you can select the one to be used by explicitly setting the `auditorAwareRef` attribute of `@EnableR2dbcAuditing`.

15
src/main/antora/modules/ROOT/pages/r2dbc/core.adoc

@@ -0,0 +1,15 @@
[[r2dbc.core]]
= R2DBC Core Support
:page-section-summary-toc: 1
R2DBC contains a wide range of features:
* Spring configuration support with Java-based `@Configuration` classes for an R2DBC driver instance.
* `R2dbcEntityTemplate` as central class for entity-bound operations that increases productivity when performing common R2DBC operations with integrated object mapping between rows and POJOs.
* Feature-rich object mapping integrated with Spring's Conversion Service.
* Annotation-based mapping metadata that is extensible to support other metadata formats.
* Automatic implementation of Repository interfaces, including support for custom query methods.
For most tasks, you should use `R2dbcEntityTemplate` or the repository support, which both use the rich mapping functionality.
`R2dbcEntityTemplate` is the place to look for accessing functionality such as ad-hoc CRUD operations.

4
spring-data-r2dbc/src/main/asciidoc/reference/r2dbc-entity-callbacks.adoc → src/main/antora/modules/ROOT/pages/r2dbc/entity-callbacks.adoc

@@ -1,7 +1,7 @@
[[r2dbc.entity-callbacks]]
= Store specific EntityCallbacks
= EntityCallbacks
Spring Data R2DBC uses the `EntityCallback` API for its auditing support and reacts on the following callbacks.
Spring Data R2DBC uses the xref:commons/entity-callbacks.adoc[`EntityCallback` API] for its auditing support and reacts on the following callbacks.
.Supported Entity Callbacks
[%header,cols="4"]

46
spring-data-r2dbc/src/main/asciidoc/reference/r2dbc-core.adoc → src/main/antora/modules/ROOT/pages/r2dbc/getting-started.adoc

@@ -1,23 +1,11 @@
R2DBC contains a wide range of features:
* Spring configuration support with Java-based `@Configuration` classes for an R2DBC driver instance.
* `R2dbcEntityTemplate` as central class for entity-bound operations that increases productivity when performing common R2DBC operations with integrated object mapping between rows and POJOs.
* Feature-rich object mapping integrated with Spring's Conversion Service.
* Annotation-based mapping metadata that is extensible to support other metadata formats.
* Automatic implementation of Repository interfaces, including support for custom query methods.
For most tasks, you should use `R2dbcEntityTemplate` or the repository support, which both use the rich mapping functionality.
`R2dbcEntityTemplate` is the place to look for accessing functionality such as ad-hoc CRUD operations.
[[r2dbc.getting-started]]
== Getting Started
= Getting Started
An easy way to set up a working environment is to create a Spring-based project through https://start.spring.io[start.spring.io].
To do so:
. Add the following to the pom.xml file's `dependencies` element:
+
====
[source,xml,subs="+attributes"]
----
<dependencies>
@@ -39,20 +27,16 @@ To do so:
</dependencies>
----
====
. Change the version of Spring in the pom.xml to be
+
====
[source,xml,subs="+attributes"]
----
<spring-framework.version>{springVersion}</spring-framework.version>
----
====
. Add the following location of the Spring Milestone repository for Maven to your `pom.xml` such that it is at the same level as your `<dependencies/>` element:
+
====
[source,xml]
----
<repositories>
@@ -63,32 +47,26 @@ To do so:
</repository>
</repositories>
----
====
The repository is also https://repo.spring.io/milestone/org/springframework/data/[browseable here].
You may also want to set the logging level to `DEBUG` to see some additional information.
To do so, edit the `application.properties` file to have the following content:
====
[source]
----
logging.level.org.springframework.r2dbc=DEBUG
----
====
Then you can, for example, create a `Person` class to persist, as follows:
====
[source,java,indent=0]
----
include::../{example-root}/Person.java[tags=class]
include::example$r2dbc/Person.java[tags=class]
----
====
Next, you need to create a table structure in your database, as follows:
====
[source,sql]
----
CREATE TABLE person
@@ -96,21 +74,16 @@ CREATE TABLE person
name VARCHAR(255),
age INT);
----
====
You also need a main application to run, as follows:
====
[source,java,indent=0]
----
include::../{example-root}/R2dbcApp.java[tag=class]
include::example$r2dbc/R2dbcApp.java[tag=class]
----
====
When you run the main program, the preceding examples produce output similar to the following:
====
[source]
----
2018-11-28 10:47:03,893 DEBUG amework.core.r2dbc.DefaultDatabaseClient: 310 - Executing SQL statement [CREATE TABLE person
@@ -121,12 +94,11 @@ When you run the main program, the preceding examples produce output similar to
2018-11-28 10:47:04,092 DEBUG amework.core.r2dbc.DefaultDatabaseClient: 575 - Executing SQL statement [SELECT id, name, age FROM person]
2018-11-28 10:47:04,436 INFO org.spring.r2dbc.example.R2dbcApp: 43 - Person [id='joe', name='Joe', age=34]
----
====
Even in this simple example, there are a few things to notice:
* You can create an instance of the central helper class in Spring Data R2DBC (`R2dbcEntityTemplate`) by using a standard `io.r2dbc.spi.ConnectionFactory` object.
* The mapper works against standard POJO objects without the need for any additional metadata (though you can, optionally, provide that information -- see <<mapping,here>>.).
* The mapper works against standard POJO objects without the need for any additional metadata (though you can, optionally, provide that information -- see xref:r2dbc/mapping.adoc[here].).
* Mapping conventions can use field access. Notice that the `Person` class has only getters.
* If the constructor argument names match the column names of the stored row, they are used to instantiate the object.
@@ -138,15 +110,14 @@ There is a https://github.com/spring-projects/spring-data-examples[GitHub reposi
[[r2dbc.connecting]]
== Connecting to a Relational Database with Spring
One of the first tasks when using relational databases and Spring is to create a `io.r2dbc.spi.ConnectionFactory` object by using the IoC container. Make sure to use a <<r2dbc.drivers,supported database and driver>>.
One of the first tasks when using relational databases and Spring is to create a `io.r2dbc.spi.ConnectionFactory` object by using the IoC container. Make sure to use a xref:r2dbc/getting-started.adoc#r2dbc.drivers[supported database and driver].
[[r2dbc.connectionfactory]]
=== Registering a `ConnectionFactory` Instance using Java-based Metadata
== Registering a `ConnectionFactory` Instance using Java-based Metadata
The following example shows an example of using Java-based bean metadata to register an instance of `io.r2dbc.spi.ConnectionFactory`:
.Registering a `io.r2dbc.spi.ConnectionFactory` object using Java-based bean metadata
====
[source,java]
----
@Configuration
@@ -159,14 +130,13 @@ public class ApplicationConfiguration extends AbstractR2dbcConfiguration {
}
}
----
====
This approach lets you use the standard `io.r2dbc.spi.ConnectionFactory` instance, with the container using Spring's `AbstractR2dbcConfiguration`. As compared to registering a `ConnectionFactory` instance directly, the configuration support has the added advantage of also providing the container with an `ExceptionTranslator` implementation that translates R2DBC exceptions to exceptions in Spring's portable `DataAccessException` hierarchy for data access classes annotated with the `@Repository` annotation. This hierarchy and the use of `@Repository` is described in {spring-framework-ref}/data-access.html[Spring's DAO support features].
This approach lets you use the standard `io.r2dbc.spi.ConnectionFactory` instance, with the container using Spring's `AbstractR2dbcConfiguration`. As compared to registering a `ConnectionFactory` instance directly, the configuration support has the added advantage of also providing the container with an `ExceptionTranslator` implementation that translates R2DBC exceptions to exceptions in Spring's portable `DataAccessException` hierarchy for data access classes annotated with the `@Repository` annotation. This hierarchy and the use of `@Repository` is described in {spring-framework-docs}/data-access.html[Spring's DAO support features].
`AbstractR2dbcConfiguration` also registers `DatabaseClient`, which is required for database interaction and for Repository implementation.
[[r2dbc.drivers]]
=== R2DBC Drivers
== R2DBC Drivers
Spring Data R2DBC supports drivers through R2DBC's pluggable SPI mechanism.
You can use any driver that implements the R2DBC spec with Spring Data R2DBC.

10
spring-data-r2dbc/src/main/asciidoc/reference/kotlin.adoc → src/main/antora/modules/ROOT/pages/r2dbc/kotlin.adoc

@@ -1,6 +1,8 @@
include::../{spring-data-commons-docs}/kotlin.adoc[]
[[kotlin]]
= Kotlin
include::../{spring-data-commons-docs}/kotlin-extensions.adoc[leveloffset=+1]
This part of the reference documentation explains the specific Kotlin functionality offered by Spring Data R2DBC.
See xref:kotlin.adoc[] for the general functionality provided by Spring Data.
To retrieve a list of `SWCharacter` objects in Java, you would normally write the following:
@@ -23,6 +25,4 @@ As in Java, `characters` in Kotlin is strongly typed, but Kotlin's clever type i
Spring Data R2DBC provides the following extensions:
* Reified generics support for `DatabaseClient` and `Criteria`.
* <<kotlin.coroutines>> extensions for `DatabaseClient`.
include::../{spring-data-commons-docs}/kotlin-coroutines.adoc[leveloffset=+1]
* xref:kotlin/coroutines.adoc[] extensions for `DatabaseClient`.

44
spring-data-r2dbc/src/main/asciidoc/reference/mapping.adoc → src/main/antora/modules/ROOT/pages/r2dbc/mapping.adoc

@@ -8,7 +8,7 @@ The `MappingR2dbcConverter` also lets you map objects to rows without providing
This section describes the features of the `MappingR2dbcConverter`, including how to use conventions for mapping objects to rows and how to override those conventions with annotation-based mapping metadata.
include::../{spring-data-commons-docs}/object-mapping.adoc[leveloffset=+1]
Read on the basics about xref:object-mapping.adoc[] before continuing with this chapter.
[[mapping.conventions]]
== Convention-based Mapping
@ -20,7 +20,8 @@ The conventions are: @@ -20,7 +20,8 @@ The conventions are:
The `com.bigbank.SavingsAccount` class maps to the `SAVINGS_ACCOUNT` table name.
The same name mapping is applied for mapping fields to column names.
For example, the `firstName` field maps to the `FIRST_NAME` column.
You can control this mapping by providing a custom `NamingStrategy`. See <<mapping.configuration>> for more detail.
You can control this mapping by providing a custom `NamingStrategy`.
See xref:r2dbc/mapping.adoc#mapping.configuration[Mapping Configuration] for more detail.
Table and column names that are derived from property or class names are used in SQL statements without quotes by default.
You can control this behavior by setting `R2dbcMappingContext.setForceQuote(true)`.
@@ -42,7 +43,8 @@ By default (unless explicitly configured) an instance of `MappingR2dbcConverter`
You can create your own instance of the `MappingR2dbcConverter`.
By creating your own instance, you can register Spring converters to map specific classes to and from the database.
You can configure the `MappingR2dbcConverter` as well as `DatabaseClient` and `ConnectionFactory` by using Java-based metadata. The following example uses Spring's Java-based configuration:
You can configure the `MappingR2dbcConverter` as well as `DatabaseClient` and `ConnectionFactory` by using Java-based metadata.
The following example uses Spring's Java-based configuration:
If you set `setForceQuote` of the `R2dbcMappingContext` to `true`, table and column names derived from classes and properties are used with database specific quotes.
This means that it is OK to use reserved SQL words (such as `order`) in these names.
@@ -51,10 +53,9 @@ Spring Data converts the letter casing of such a name to that form which is also
Therefore, you can use unquoted names when creating tables, as long as you do not use keywords or special characters in your names.
For databases that adhere to the SQL standard, this means that names are converted to upper case.
The quoting character and the way names get capitalized is controlled by the used `Dialect`.
See <<r2dbc.drivers>> for how to configure custom dialects.
See xref:r2dbc/core.adoc#r2dbc.drivers[R2DBC Drivers] for how to configure custom dialects.
.@Configuration class to configure R2DBC mapping support
====
[source,java]
----
@Configuration
@@ -72,7 +73,6 @@ public class MyAppConfig extends AbstractR2dbcConfiguration {
}
}
----
====
`AbstractR2dbcConfiguration` requires you to implement a method that defines a `ConnectionFactory`.
@@ -92,7 +92,6 @@ If you do not use this annotation, your application takes a slight performance h
The following example shows a domain object:
.Example domain object
====
[source,java]
----
package com.mycompany.domain;
@@ -110,7 +109,6 @@ public class Person {
private String lastName;
}
----
====
IMPORTANT: The `@Id` annotation tells the mapper which property you want to use as the primary key.
@@ -124,24 +122,24 @@ The following table explains how property types of an entity affect mapping:
|Primitive types and wrapper types
|Passthru
|Can be customized using <<mapping.explicit.converters, Explicit Converters>>.
|Can be customized using <<mapping.explicit.converters,Explicit Converters>>.
|JSR-310 Date/Time types
|Passthru
|Can be customized using <<mapping.explicit.converters, Explicit Converters>>.
|Can be customized using <<mapping.explicit.converters,Explicit Converters>>.
|`String`, `BigInteger`, `BigDecimal`, and `UUID`
|Passthru
|Can be customized using <<mapping.explicit.converters, Explicit Converters>>.
|Can be customized using <<mapping.explicit.converters,Explicit Converters>>.
|`Enum`
|String
|Can be customized by registering a <<mapping.explicit.converters, Explicit Converters>>.
|Can be customized by registering <<mapping.explicit.converters,Explicit Converters>>.
|`Blob` and `Clob`
|Passthru
|Can be customized using <<mapping.explicit.converters, Explicit Converters>>.
|Can be customized using <<mapping.explicit.converters,Explicit Converters>>.
|`byte[]`, `ByteBuffer`
|Passthru
@@ -149,11 +147,11 @@ The following table explains how property types of an entity affect mapping:
|`Collection<T>`
|Array of `T`
|Conversion to Array type if supported by the configured <<r2dbc.drivers, driver>>, not supported otherwise.
|Conversion to Array type if supported by the configured xref:r2dbc/core.adoc#r2dbc.drivers[driver], not supported otherwise.
|Arrays of primitive types, wrapper types and `String`
|Array of wrapper type (e.g. `int[]` -> `Integer[]`)
|Conversion to Array type if supported by the configured <<r2dbc.drivers, driver>>, not supported otherwise.
|Conversion to Array type if supported by the configured xref:r2dbc/core.adoc#r2dbc.drivers[driver], not supported otherwise.
|Driver-specific types
|Passthru
@@ -161,7 +159,7 @@ The following table explains how property types of an entity affect mapping:
|Complex objects
|Target type depends on registered `Converter`.
|Requires a <<mapping.explicit.converters, Explicit Converters>>, not supported otherwise.
|Requires a <<mapping.explicit.converters,Explicit Converters>>, not supported otherwise.
|===
@@ -195,7 +193,7 @@ However, this is not recommended, since it may cause problems with other tools.
A value of `null` (`zero` for primitive types) is considered a marker for entities to be new.
The initially stored value is `zero` (`one` for primitive types).
The version gets incremented automatically on every update.
See <<r2dbc.optimistic-locking>> for further reference.
See xref:r2dbc/repositories.adoc#r2dbc.optimistic-locking[Optimistic Locking] for further reference.
The mapping metadata infrastructure is defined in the separate `spring-data-commons` project that is technology-agnostic.
Specific subclasses are used in the R2DBC support to support annotation based metadata.
@@ -211,7 +209,6 @@ The mapping subsystem allows the customization of the object construction by ann
This works only if the parameter name information is present in the Java `.class` files, which you can achieve by compiling the source with debug information or using the `-parameters` command-line switch for `javac` in Java 8.
* Otherwise, a `MappingException` is thrown to indicate that the given constructor parameter could not be bound.
====
[source,java]
----
class OrderItem {
@@ -226,10 +223,9 @@ class OrderItem {
this.unitPrice = unitPrice;
}
// getters/setters ommitted
// getters/setters omitted
}
----
====
[[mapping.explicit.converters]]
=== Overriding Mapping with Explicit Converters
@@ -240,7 +236,7 @@ However, you may sometimes want the `R2dbcConverter` instances to do most of the
To selectively handle the conversion yourself, register one or more `org.springframework.core.convert.converter.Converter` instances with the `R2dbcConverter`.
You can use the `r2dbcCustomConversions` method in `AbstractR2dbcConfiguration` to configure converters.
The examples <<mapping.configuration, at the beginning of this chapter>> show how to perform the configuration with Java.
The examples xref:r2dbc/mapping.adoc#mapping.configuration[at the beginning of this chapter] show how to perform the configuration with Java.
NOTE: Custom top-level entity conversion requires asymmetric types for conversion.
Inbound data is extracted from R2DBC's `Row`.
@@ -248,7 +244,6 @@ Outbound data (to be used with `INSERT`/`UPDATE` statements) is represented as `
The following example of a Spring Converter implementation converts from a `Row` to a `Person` POJO:
====
[source,java]
----
@ReadingConverter
@@ -261,7 +256,6 @@ The following example of a Spring Converter implementation converts from a `Row`
}
}
----
====
Please note that converters get applied on singular properties.
Collection properties (e.g. `Collection<Person>`) are iterated and converted element-wise.
@@ -271,7 +265,6 @@ NOTE: R2DBC uses boxed primitives (`Integer.class` instead of `int.class`) to re
The following example converts from a `Person` to an `OutboundRow`:
====
[source,java]
----
@WritingConverter
@@ -286,7 +279,6 @@ public class PersonWriteConverter implements Converter<Person, OutboundRow> {
}
}
----
====
[[mapping.explicit.enum.converters]]
==== Overriding Enum Mapping with Explicit Converters
@@ -298,7 +290,6 @@ Additionally, you need to configure the enum type on the driver level so that th
The following example shows the involved components to read and write `Color` enum values natively:
====
[source,java]
----
enum Color {
@@ -317,4 +308,3 @@ class Product {
// …
}
----
====
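The elided part of the snippet above typically registers a pass-through converter. With Spring Data R2DBC this can be done by extending `EnumWriteSupport`; the following is a minimal sketch (the converter class name is illustrative, and it assumes the driver has been configured with a native enum codec for `Color`):

```java
// Sketch: lets Color enum values pass through to the driver unconverted,
// so the driver can bind them against the database's native enum type.
class ColorConverter extends EnumWriteSupport<Color> {

}
```

Register the converter through `r2dbcCustomConversions`, as shown earlier in this chapter.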

2
spring-data-r2dbc/src/main/asciidoc/reference/r2dbc-upgrading.adoc → src/main/antora/modules/ROOT/pages/r2dbc/migration-guide.adoc

@@ -1,4 +1,3 @@
[appendix]
[[migration-guide]]
= Migration Guide
@@ -49,6 +48,7 @@ Specifically the following classes are changed:
We recommend that you review and update your imports if you work with these types directly.
[[breaking-changes]]
=== Breaking Changes
* `OutboundRow` and statement mappers switched from using `SettableValue` to `Parameter`

38
src/main/antora/modules/ROOT/pages/r2dbc/query-by-example.adoc

@@ -0,0 +1,38 @@
[[r2dbc.repositories.queries.query-by-example]]
= Query By Example
Spring Data R2DBC also lets you use xref:query-by-example.adoc[Query By Example] to fashion queries.
This technique allows you to use a "probe" object.
Essentially, any field that isn't empty or `null` will be used to match.
Here's an example:
[source,java,indent=0]
----
include::example$r2dbc/QueryByExampleTests.java[tag=example]
----
<1> Create a domain object with the criteria (`null` fields will be ignored).
<2> Using the domain object, create an `Example`.
<3> Through the `R2dbcRepository`, execute the query (use `findOne` for a `Mono`).
This illustrates how to craft a simple probe using a domain object.
In this case, it will query based on the `Employee` object's `name` field being equal to `Frodo`.
`null` fields are ignored.
[source,java,indent=0]
----
include::example$r2dbc/QueryByExampleTests.java[tag=example-2]
----
<1> Create a custom `ExampleMatcher` that matches on ALL fields (use `matchingAny()` to match on *ANY* field).
<2> For the `name` field, use a wildcard that matches against the end of the field.
<3> Match columns against `null` (don't forget that `NULL` doesn't equal `NULL` in relational databases).
<4> Ignore the `role` field when forming the query.
<5> Plug the custom `ExampleMatcher` into the probe.
It's also possible to apply a `withTransform()` against any property, allowing you to transform a property before forming the query.
For example, you can apply a `toUpperCase()` to a `String`-based property before the query is created.
Query By Example really shines when you don't know all the fields needed in a query in advance.
If you were building a filter on a web page where the user can pick the fields, Query By Example is a great way to flexibly capture that into an efficient query.
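Taken together, the two examples referenced above can be sketched as follows (assuming an `Employee` entity and a repository extending `R2dbcRepository`, as in the referenced test class):

```java
// Probe: only non-null fields participate in matching.
Employee probe = new Employee();
probe.setName("Frodo");

// Matcher: match all fields, wildcard-match the end of "name",
// and never include "role" in the generated WHERE clause.
ExampleMatcher matcher = ExampleMatcher.matching()
		.withMatcher("name", match -> match.endsWith())
		.withIgnorePaths("role");

Flux<Employee> result = repository.findAll(Example.of(probe, matcher));
```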

208
src/main/antora/modules/ROOT/pages/r2dbc/query-methods.adoc

@@ -0,0 +1,208 @@
[[r2dbc.repositories.queries]]
= Query Methods
Most of the data access operations you usually trigger on a repository result in a query being run against the databases.
Defining such a query is a matter of declaring a method on the repository interface, as the following example shows:
.PersonRepository with query methods
====
[source,java]
----
interface ReactivePersonRepository extends ReactiveSortingRepository<Person, Long> {
Flux<Person> findByFirstname(String firstname); <1>
Flux<Person> findByFirstname(Publisher<String> firstname); <2>
Flux<Person> findByFirstnameOrderByLastname(String firstname, Pageable pageable); <3>
Mono<Person> findByFirstnameAndLastname(String firstname, String lastname); <4>
Mono<Person> findFirstByLastname(String lastname); <5>
@Query("SELECT * FROM person WHERE lastname = :lastname")
Flux<Person> findByLastname(String lastname); <6>
@Query("SELECT firstname, lastname FROM person WHERE lastname = $1")
Mono<Person> findFirstByLastname(String lastname); <7>
}
----
<1> The method shows a query for all people with the given `firstname`.
The query is derived by parsing the method name for constraints that can be concatenated with `And` and `Or`.
Thus, the method name results in a query expression of `SELECT … FROM person WHERE firstname = :firstname`.
<2> The method shows a query for all people with the given `firstname` once the `firstname` is emitted by the given `Publisher`.
<3> Use `Pageable` to pass offset and sorting parameters to the database.
<4> Find a single entity for the given criteria.
It completes with `IncorrectResultSizeDataAccessException` on non-unique results.
<5> Unlike <4>, the first entity is always emitted even if the query yields more result rows.
<6> The `findByLastname` method shows a query for all people with the given last name.
<7> A query for a single `Person` entity projecting only `firstname` and `lastname` columns.
The annotated query uses native bind markers, which are Postgres bind markers in this example.
====
Note that the columns of a select statement used in a `@Query` annotation must match the names generated by the `NamingStrategy` for the respective property.
If a select statement does not include a matching column, that property is not set.
If that property is required by the persistence constructor, either null or (for primitive types) the default value is provided.
The following table shows the keywords that are supported for query methods:
[cols="1,2,3",options="header",subs="quotes"]
.Supported keywords for query methods
|===
| Keyword
| Sample
| Logical result
| `After`
| `findByBirthdateAfter(Date date)`
| `birthdate > date`
| `GreaterThan`
| `findByAgeGreaterThan(int age)`
| `age > age`
| `GreaterThanEqual`
| `findByAgeGreaterThanEqual(int age)`
| `age >= age`
| `Before`
| `findByBirthdateBefore(Date date)`
| `birthdate < date`
| `LessThan`
| `findByAgeLessThan(int age)`
| `age < age`
| `LessThanEqual`
| `findByAgeLessThanEqual(int age)`
| `age \<= age`
| `Between`
| `findByAgeBetween(int from, int to)`
| `age BETWEEN from AND to`
| `NotBetween`
| `findByAgeNotBetween(int from, int to)`
| `age NOT BETWEEN from AND to`
| `In`
| `findByAgeIn(Collection<Integer> ages)`
| `age IN (age1, age2, ageN)`
| `NotIn`
| `findByAgeNotIn(Collection ages)`
| `age NOT IN (age1, age2, ageN)`
| `IsNotNull`, `NotNull`
| `findByFirstnameNotNull()`
| `firstname IS NOT NULL`
| `IsNull`, `Null`
| `findByFirstnameNull()`
| `firstname IS NULL`
| `Like`, `StartingWith`, `EndingWith`
| `findByFirstnameLike(String name)`
| `firstname LIKE name`
| `NotLike`, `IsNotLike`
| `findByFirstnameNotLike(String name)`
| `firstname NOT LIKE name`
| `Containing` on String
| `findByFirstnameContaining(String name)`
| `firstname LIKE '%' + name +'%'`
| `NotContaining` on String
| `findByFirstnameNotContaining(String name)`
| `firstname NOT LIKE '%' + name +'%'`
| `(No keyword)`
| `findByFirstname(String name)`
| `firstname = name`
| `Not`
| `findByFirstnameNot(String name)`
| `firstname != name`
| `IsTrue`, `True`
| `findByActiveIsTrue()`
| `active IS TRUE`
| `IsFalse`, `False`
| `findByActiveIsFalse()`
| `active IS FALSE`
|===
[[r2dbc.repositories.modifying]]
== Modifying Queries
The previous sections describe how to declare queries to access a given entity or collection of entities.
Keywords from the preceding table can be used in conjunction with `delete…By` or `remove…By` to create derived queries that delete matching rows.
.`Delete…By` Query
====
[source,java]
----
interface ReactivePersonRepository extends ReactiveSortingRepository<Person, String> {
Mono<Integer> deleteByLastname(String lastname); <1>
Mono<Void> deletePersonByLastname(String lastname); <2>
Mono<Boolean> deletePersonByLastname(String lastname); <3>
}
----
<1> Using a return type of `Mono<Integer>` returns the number of affected rows.
<2> Using `Void` just reports whether the rows were successfully deleted without emitting a result value.
<3> Using `Boolean` reports whether at least one row was removed.
====
While the preceding approach is feasible for comprehensive custom functionality, you can modify queries that only need parameter binding by annotating the query method with `@Modifying`, as shown in the following example:
[source,java,indent=0]
----
include::example$r2dbc/PersonRepository.java[tags=atModifying]
----
The result of a modifying query can be:
* `Void` (or Kotlin `Unit`) to discard update count and await completion.
* `Integer` or another numeric type emitting the affected rows count.
* `Boolean` to emit whether at least one row was updated.
The `@Modifying` annotation is only relevant in combination with the `@Query` annotation.
Derived custom methods do not require this annotation.
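A modifying method in the spirit of the referenced example might look like the following sketch (the table and column names are illustrative):

```java
// @Modifying is required because the method carries a @Query;
// the emitted Integer is the affected row count.
interface PersonRepository extends ReactiveCrudRepository<Person, Long> {

	@Modifying
	@Query("UPDATE person SET firstname = :firstname WHERE lastname = :lastname")
	Mono<Integer> updateFirstname(String firstname, String lastname);
}
```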
Modifying queries are executed directly against the database.
No events or callbacks get called.
Therefore, fields with auditing annotations are not updated unless the annotated query itself updates them.
Alternatively, you can add custom modifying behavior by using the facilities described in xref:repositories/custom-implementations.adoc[Custom Implementations for Spring Data Repositories].
[[r2dbc.repositories.queries.spel]]
=== Queries with SpEL Expressions
Query string definitions can be used together with SpEL expressions to create dynamic queries at runtime.
SpEL expressions can provide predicate values which are evaluated right before running the query.
Expressions expose method arguments through an array that contains all the arguments.
The following query uses `[0]`
to declare the predicate value for `lastname` (which is equivalent to the `:lastname` parameter binding):
[source,java,indent=0]
----
include::example$r2dbc/PersonRepository.java[tags=spel]
----
SpEL in query strings can be a powerful way to enhance queries.
However, they can also accept a broad range of unwanted arguments.
You should make sure to sanitize strings before passing them to the query to avoid unwanted changes to your query.
Expression support is extensible through the Query SPI: `org.springframework.data.spel.spi.EvaluationContextExtension`.
The Query SPI can contribute properties and functions and can customize the root object.
Extensions are retrieved from the application context at the time of SpEL evaluation when the query is built.
TIP: When using SpEL expressions in combination with plain parameters, use named parameter notation instead of native bind markers to ensure a proper binding order.
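A query method in the spirit of the referenced `spel` example might look like this sketch (the method name is illustrative):

```java
// [0] inside the SpEL expression refers to the first method argument.
interface PersonRepository extends ReactiveCrudRepository<Person, Long> {

	@Query("SELECT * FROM person WHERE lastname = :#{[0]}")
	Flux<Person> findByQueriedLastname(String lastname);
}
```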

178
src/main/antora/modules/ROOT/pages/r2dbc/repositories.adoc

@@ -0,0 +1,178 @@
[[r2dbc.repositories]]
= R2DBC Repositories
[[r2dbc.repositories.intro]]
This chapter points out the specialties for repository support for R2DBC.
This builds on the core repository support explained in xref:repositories/introduction.adoc[Working with Spring Data Repositories].
Before reading this chapter, you should have a sound understanding of the basic concepts explained there.
[[r2dbc.repositories.usage]]
== Usage
To access domain entities stored in a relational database, you can use our sophisticated repository support that eases implementation quite significantly.
To do so, create an interface for your repository.
Consider the following `Person` class:
.Sample Person entity
[source,java]
----
public class Person {
@Id
private Long id;
private String firstname;
private String lastname;
// … getters and setters omitted
}
----
The following example shows a repository interface for the preceding `Person` class:
.Basic repository interface to persist Person entities
[source,java]
----
public interface PersonRepository extends ReactiveCrudRepository<Person, Long> {
// additional custom query methods go here
}
----
To configure R2DBC repositories, you can use the `@EnableR2dbcRepositories` annotation.
If no base package is configured, the infrastructure scans the package of the annotated configuration class.
The following example shows how to use Java configuration for a repository:
.Java configuration for repositories
[source,java]
----
@Configuration
@EnableR2dbcRepositories
class ApplicationConfig extends AbstractR2dbcConfiguration {
@Override
public ConnectionFactory connectionFactory() {
return …
}
}
----
Because our domain repository extends `ReactiveCrudRepository`, it provides you with reactive CRUD operations to access the entities.
On top of `ReactiveCrudRepository`, there is also `ReactiveSortingRepository`, which adds additional sorting functionality similar to that of `PagingAndSortingRepository`.
Working with the repository instance is merely a matter of dependency injecting it into a client.
Consequently, you can retrieve all `Person` objects with the following code:
.Paging access to Person entities
[source,java,indent=0]
----
include::example$r2dbc/PersonRepositoryTests.java[tags=class]
----
The preceding example creates an application context with Spring's unit test support, which performs annotation-based dependency injection into test cases.
Inside the test method, we use the repository to query the database.
We use `StepVerifier` as a test aid to verify our expectations against the results.
[[r2dbc.entity-persistence.state-detection-strategies]]
include::{commons}@data-commons::page$is-new-state-detection.adoc[leveloffset=+1]
[[r2dbc.entity-persistence.id-generation]]
=== ID Generation
Spring Data R2DBC uses the ID to identify entities.
The ID of an entity must be annotated with Spring Data's https://docs.spring.io/spring-data/commons/docs/current/api/org/springframework/data/annotation/Id.html[`@Id`] annotation.
When your database has an auto-increment column for the ID column, the generated value gets set in the entity after inserting it into the database.
Spring Data R2DBC does not attempt to insert values of identifier columns when the entity is new and the identifier value defaults to its initial value.
That is `0` for primitive types and `null` if the identifier property uses a numeric wrapper type such as `Long`.
One important constraint is that, after saving an entity, the entity must not be new anymore.
Note that whether an entity is new is part of the entity's state.
With auto-increment columns, this happens automatically, because the ID gets set by Spring Data with the value from the ID column.
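These rules can be sketched as follows (assuming an auto-increment `id` column and the `Person` entity from earlier):

```java
Person person = new Person();        // id is null, so the entity counts as new
person.setFirstname("Jens");

template.insert(person)              // the INSERT omits the id column
		.doOnNext(saved -> {
			// the database-generated key is set on the returned entity,
			// so it is no longer considered new afterwards
			Long generatedId = saved.getId();
		})
		.subscribe();
```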
[[r2dbc.optimistic-locking]]
=== Optimistic Locking
The `@Version` annotation provides syntax similar to that of JPA in the context of R2DBC and makes sure updates are only applied to rows with a matching version.
Therefore, the actual value of the version property is added to the update query in such a way that the update does not have any effect if another operation altered the row in the meantime.
In that case, an `OptimisticLockingFailureException` is thrown.
The following example shows these features:
[source,java]
----
@Table
class Person {
@Id Long id;
String firstname;
String lastname;
@Version Long version;
}
R2dbcEntityTemplate template = …;
Mono<Person> daenerys = template.insert(new Person("Daenerys")); <1>
Person other = template.select(Person.class)
.matching(query(where("id").is(daenerys.getId())))
.first().block(); <2>
daenerys.setLastname("Targaryen");
template.update(daenerys); <3>
template.update(other).subscribe(); // emits OptimisticLockingFailureException <4>
----
<1> Initially insert row. `version` is set to `0`.
<2> Load the just inserted row. `version` is still `0`.
<3> Update the row with `version = 0`. Set the `lastname` and bump `version` to `1`.
<4> Try to update the previously loaded row that still has `version = 0`. The operation fails with an `OptimisticLockingFailureException`, as the current `version` is `1`.
[[projections.resultmapping]]
==== Result Mapping
A query method returning an Interface- or DTO projection is backed by results produced by the actual query.
Interface projections generally rely on mapping results onto the domain type first to consider potential `@Column` type mappings and the actual projection proxy uses a potentially partially materialized entity to expose projection data.
Result mapping for DTO projections depends on the actual query type.
Derived queries use the domain type to map results, and Spring Data creates DTO instances solely from properties available on the domain type.
Declaring properties in your DTO that are not available on the domain type is not supported.
String-based queries use a different approach since the actual query, specifically the field projection, and result type declaration are close together.
DTO projections used with query methods annotated with `@Query` map query results directly into the DTO type.
Field mappings on the domain type are not considered.
Using the DTO type directly, your query method can benefit from a more dynamic projection that isn't restricted to the domain model.
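For instance, a string-based query can target a DTO whose column mapping is independent of the domain type; a sketch (the `PersonSummary` type and method name are illustrative):

```java
// The columns of the @Query are mapped directly onto the DTO;
// field mappings on the Person domain type are not consulted.
record PersonSummary(String firstname, String lastname) {}

interface PersonRepository extends ReactiveCrudRepository<Person, Long> {

	@Query("SELECT firstname, lastname FROM person WHERE lastname = :lastname")
	Flux<PersonSummary> findSummariesByLastname(String lastname);
}
```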
[[r2dbc.multiple-databases]]
== Working with multiple Databases
When working with multiple, potentially different databases, your application will require a different approach to configuration.
The provided `AbstractR2dbcConfiguration` support class assumes a single `ConnectionFactory` from which the `Dialect` gets derived.
That being said, you need to define a few beans yourself to configure Spring Data R2DBC to work with multiple databases.
R2DBC repositories require an `R2dbcEntityOperations` instance for their implementation.
A simple configuration to scan for repositories without using `AbstractR2dbcConfiguration` looks like:
[source,java]
----
@Configuration
@EnableR2dbcRepositories(basePackages = "com.acme.mysql", entityOperationsRef = "mysqlR2dbcEntityOperations")
static class MySQLConfiguration {
@Bean
@Qualifier("mysql")
public ConnectionFactory mysqlConnectionFactory() {
return …
}
@Bean
public R2dbcEntityOperations mysqlR2dbcEntityOperations(@Qualifier("mysql") ConnectionFactory connectionFactory) {
DatabaseClient databaseClient = DatabaseClient.create(connectionFactory);
return new R2dbcEntityTemplate(databaseClient, MySqlDialect.INSTANCE);
}
}
----
Note that `@EnableR2dbcRepositories` allows configuration either through `databaseClientRef` or `entityOperationsRef`.
Using various `DatabaseClient` beans is useful when connecting to multiple databases of the same type.
When using database systems that differ in their dialect, use `@EnableR2dbcRepositories(entityOperationsRef = …)` instead.

38
spring-data-r2dbc/src/main/asciidoc/reference/r2dbc-template.adoc → src/main/antora/modules/ROOT/pages/r2dbc/template.adoc

@@ -1,6 +1,5 @@
[[r2dbc.datbaseclient.fluent-api]]
[[r2dbc.entityoperations]]
= R2dbcEntityOperations Data Access API
= Data Access API
`R2dbcEntityTemplate` is the central entrypoint for Spring Data R2DBC.
It provides direct entity-oriented methods and a more narrow, fluent interface for typical ad-hoc use-cases, such as querying, inserting, updating, and deleting data.
@@ -28,12 +27,10 @@ Consequently, for auto-generation the type of the `Id` property or field in your
The following example shows how to insert a row and retrieving its contents:
.Inserting and retrieving entities using the `R2dbcEntityTemplate`
====
[source,java,indent=0]
----
include::../{example-root}/R2dbcEntityTemplateSnippets.java[tag=insertAndSelect]
include::example$r2dbc/R2dbcEntityTemplateSnippets.java[tag=insertAndSelect]
----
====
The following insert and update operations are available:
@@ -48,17 +45,15 @@ Table names can be customized by using the fluent API.
== Selecting Data
The `select(…)` and `selectOne(…)` methods on `R2dbcEntityTemplate` are used to select data from a table.
Both methods take a <<r2dbc.datbaseclient.fluent-api.criteria,`Query`>> object that defines the field projection, the `WHERE` clause, the `ORDER BY` clause and limit/offset pagination.
Both methods take a xref:r2dbc/template.adoc#r2dbc.datbaseclient.fluent-api.criteria[`Query`] object that defines the field projection, the `WHERE` clause, the `ORDER BY` clause and limit/offset pagination.
Limit/offset functionality is transparent to the application regardless of the underlying database.
This functionality is supported by the <<r2dbc.drivers,`R2dbcDialect` abstraction>> to cater for differences between the individual SQL flavors.
This functionality is supported by the xref:r2dbc/core.adoc#r2dbc.drivers[`R2dbcDialect` abstraction] to cater for differences between the individual SQL flavors.
.Selecting entities using the `R2dbcEntityTemplate`
====
[source,java,indent=0]
----
include::../{example-root}/R2dbcEntityTemplateSnippets.java[tag=select]
include::example$r2dbc/R2dbcEntityTemplateSnippets.java[tag=select]
----
====
[[r2dbc.entityoperations.fluent-api]]
== Fluent API
@@ -66,31 +61,28 @@ include::../{example-root}/R2dbcEntityTemplateSnippets.java[tag=select]
This section explains the fluent API usage.
Consider the following simple query:
====
[source,java,indent=0]
----
include::../{example-root}/R2dbcEntityTemplateSnippets.java[tag=simpleSelect]
include::example$r2dbc/R2dbcEntityTemplateSnippets.java[tag=simpleSelect]
----
<1> Using `Person` with the `select(…)` method maps tabular results on `Person` result objects.
<2> Fetching `all()` rows returns a `Flux<Person>` without limiting results.
====
The following example declares a more complex query that specifies the table name by name, a `WHERE` condition, and an `ORDER BY` clause:
====
[source,java,indent=0]
----
include::../{example-root}/R2dbcEntityTemplateSnippets.java[tag=fullSelect]
include::example$r2dbc/R2dbcEntityTemplateSnippets.java[tag=fullSelect]
----
<1> Selecting from a table by name returns row results using the given domain type.
<2> The issued query declares a `WHERE` condition on `firstname` and `lastname` columns to filter results.
<3> Results can be ordered by individual column names, resulting in an `ORDER BY` clause.
<4> Selecting the one result fetches only a single row.
This way of consuming rows expects the query to return exactly a single result.
`Mono` emits an `IncorrectResultSizeDataAccessException` if the query yields more than a single result.
====
TIP: You can directly apply <<projections,Projections>> to results by providing the target type via `select(Class<?>)`.
TIP: You can directly apply xref:repositories/projections.adoc[Projections] to results by providing the target type via `select(Class<?>)`.
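The fluent calls described above compose as in the following sketch (the criteria values are illustrative):

```java
// Select from the "person" table, filter, order, and fetch all matching rows.
Flux<Person> people = template.select(Person.class)
		.from("person")
		.matching(Query.query(Criteria.where("firstname").is("John"))
				.sort(Sort.by("lastname").ascending()))
		.all();
```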
You can switch between retrieving a single entity and retrieving multiple entities through the following terminating methods:
@@ -138,17 +130,15 @@ You can use the `insert()` entry point to insert data.
Consider the following simple typed insert operation:
====
[source,java,indent=0]
----
include::../{example-root}/R2dbcEntityTemplateSnippets.java[tag=insert]
include::example$r2dbc/R2dbcEntityTemplateSnippets.java[tag=insert]
----
<1> Using `Person` with the `into(…)` method sets the `INTO` table, based on mapping metadata.
It also prepares the insert statement to accept `Person` objects for inserting.
<2> Provide a scalar `Person` object.
Alternatively, you can supply a `Publisher` to run a stream of `INSERT` statements.
This method extracts all non-`null` values and inserts them.
====
[[r2dbc.entityoperations.fluent-api.update]]
== Updating Data
@@ -159,19 +149,17 @@ It also accepts `Query` to create a `WHERE` clause.
Consider the following simple typed update operation:
====
[source,java]
----
Person modified = …
include::../{example-root}/R2dbcEntityTemplateSnippets.java[tag=update]
include::example$r2dbc/R2dbcEntityTemplateSnippets.java[tag=update]
----
<1> Update `Person` objects and apply mapping based on mapping metadata.
<2> Set a different table name by calling the `inTable(…)` method.
<3> Specify a query that translates into a `WHERE` clause.
<4> Apply the `Update` object.
Set in this case `age` to `42` and return the number of affected rows.
====
[[r2dbc.entityoperations.fluent-api.delete]]
== Deleting Data
@@ -181,13 +169,11 @@ Removing data starts with a specification of the table to delete from and, optio
Consider the following simple delete operation:
====
[source,java]
----
include::../{example-root}/R2dbcEntityTemplateSnippets.java[tag=delete]
include::example$r2dbc/R2dbcEntityTemplateSnippets.java[tag=delete]
----
<1> Delete `Person` objects and apply mapping based on mapping metadata.
<2> Set a different table name by calling the `from(…)` method.
<3> Specify a query that translates into a `WHERE` clause.
<4> Apply the delete operation and return the number of affected rows.
====

1
src/main/antora/modules/ROOT/pages/repositories/auditing.adoc

@@ -0,0 +1 @@
include::{commons}@data-commons::page$auditing.adoc[leveloffset=+1]

1
src/main/antora/modules/ROOT/pages/repositories/core-concepts.adoc

@@ -0,0 +1 @@
include::{commons}@data-commons::page$repositories/core-concepts.adoc[]

1
src/main/antora/modules/ROOT/pages/repositories/core-domain-events.adoc

@@ -0,0 +1 @@
include::{commons}@data-commons::page$repositories/core-domain-events.adoc[]

1
src/main/antora/modules/ROOT/pages/repositories/core-extensions.adoc

@@ -0,0 +1 @@
include::{commons}@data-commons::page$repositories/core-extensions.adoc[]

1
src/main/antora/modules/ROOT/pages/repositories/create-instances.adoc

@@ -0,0 +1 @@
include::{commons}@data-commons::page$repositories/create-instances.adoc[]

1
src/main/antora/modules/ROOT/pages/repositories/custom-implementations.adoc

@@ -0,0 +1 @@
include::{commons}@data-commons::page$repositories/custom-implementations.adoc[]

1
src/main/antora/modules/ROOT/pages/repositories/definition.adoc

@@ -0,0 +1 @@
include::{commons}@data-commons::page$repositories/definition.adoc[]

8
src/main/antora/modules/ROOT/pages/repositories/introduction.adoc

@@ -0,0 +1,8 @@
[[common.basics]]
= Introduction
:page-section-summary-toc: 1
This chapter explains the basic foundations of Spring Data repositories.
Before continuing to the JDBC or R2DBC specifics, make sure you have a sound understanding of the basic concepts explained here.
The goal of the Spring Data repository abstraction is to significantly reduce the amount of boilerplate code required to implement data access layers for various persistence stores.

1
src/main/antora/modules/ROOT/pages/repositories/null-handling.adoc

@@ -0,0 +1 @@
include::{commons}@data-commons::page$repositories/null-handling.adoc[]

4
src/main/antora/modules/ROOT/pages/repositories/projections.adoc

@@ -0,0 +1,4 @@
[[relational.projections]]
= Projections
include::{commons}@data-commons::page$repositories/projections.adoc[leveloffset=+1]

1
src/main/antora/modules/ROOT/pages/repositories/query-keywords-reference.adoc

@@ -0,0 +1 @@
include::{commons}@data-commons::page$repositories/query-keywords-reference.adoc[]

1
src/main/antora/modules/ROOT/pages/repositories/query-methods-details.adoc

@@ -0,0 +1 @@
include::{commons}@data-commons::page$repositories/query-methods-details.adoc[]

1
src/main/antora/modules/ROOT/pages/repositories/query-return-types-reference.adoc

@@ -0,0 +1 @@
include::{commons}@data-commons::page$repositories/query-return-types-reference.adoc[]

22
src/main/antora/resources/antora-resources/antora.yml

@@ -0,0 +1,22 @@
version: ${antora-component.version}
prerelease: ${antora-component.prerelease}
asciidoc:
attributes:
version: ${project.version}
springversionshort: ${spring.short}
springversion: ${spring}
attribute-missing: 'warn'
commons: ${springdata.commons.docs}
include-xml-namespaces: false
spring-data-commons-docs-url: https://docs.spring.io/spring-data-commons/reference
spring-data-commons-javadoc-base: https://docs.spring.io/spring-data/commons/docs/${springdata.commons}/api/
spring-data-jdbc-javadoc: https://docs.spring.io/spring-data/jdbc/docs/${version}/api/
spring-data-r2dbc-javadoc: https://docs.spring.io/spring-data/r2dbc/docs/${version}/api/
springdocsurl: https://docs.spring.io/spring-framework/reference/{springversionshort}
springjavadocurl: https://docs.spring.io/spring-framework/docs/${spring}/javadoc-api
spring-framework-docs: '{springdocsurl}'
spring-framework-javadoc: '{springjavadocurl}'
springhateoasversion: ${spring-hateoas}
releasetrainversion: ${releasetrain}
store: Jdbc

19
src/main/asciidoc/glossary.adoc

@@ -1,19 +0,0 @@
[[glossary]]
[appendix,glossary]
= Glossary
AOP::
Aspect-Oriented Programming
CRUD::
Create, Read, Update, Delete - Basic persistence operations
Dependency Injection::
Pattern to hand a component's dependencies to the component from outside, freeing the component from having to look up its dependencies itself.
For more information, see link:$$https://en.wikipedia.org/wiki/Dependency_Injection$$[https://en.wikipedia.org/wiki/Dependency_Injection].
JPA::
Java Persistence API
Spring::
Java application framework -- link:$$https://projects.spring.io/spring-framework$$[https://projects.spring.io/spring-framework]

BIN
src/main/asciidoc/images/epub-cover.png


8
src/main/asciidoc/images/epub-cover.svg

File diff suppressed because one or more lines are too long


36
src/main/asciidoc/index.adoc

@@ -1,36 +0,0 @@
= Spring Data JDBC - Reference Documentation
Jens Schauder, Jay Bryant, Mark Paluch, Bastian Wilhelm
:revnumber: {version}
:revdate: {localdate}
:javadoc-base: https://docs.spring.io/spring-data/jdbc/docs/{revnumber}/api/
ifdef::backend-epub3[:front-cover-image: image:epub-cover.png[Front Cover,1050,1600]]
:spring-data-commons-docs: ../../../../spring-data-commons/src/main/asciidoc
:spring-framework-docs: https://docs.spring.io/spring-framework/docs/{springVersion}/reference/html
:include-xml-namespaces: false
(C) 2018-2022 The original authors.
NOTE: Copies of this document may be made for your own use and for distribution to others, provided that you do not charge any fee for such copies and further provided that each copy contains this Copyright Notice, whether distributed in print or electronically.
include::preface.adoc[]
include::{spring-data-commons-docs}/upgrade.adoc[leveloffset=+1]
include::{spring-data-commons-docs}/dependencies.adoc[leveloffset=+1]
include::{spring-data-commons-docs}/repositories.adoc[leveloffset=+1]
[[reference]]
= Reference Documentation
include::jdbc.adoc[leveloffset=+1]
include::schema-support.adoc[leveloffset=+1]
[[appendix]]
= Appendix
:numbered!:
include::glossary.adoc[leveloffset=+1]
include::{spring-data-commons-docs}/repository-populator-namespace-reference.adoc[leveloffset=+1]
include::{spring-data-commons-docs}/repository-query-keywords-reference.adoc[leveloffset=+1]
include::{spring-data-commons-docs}/repository-query-return-types-reference.adoc[leveloffset=+1]

1165
src/main/asciidoc/jdbc.adoc

File diff suppressed because it is too large

82
src/main/asciidoc/preface.adoc

@@ -1,82 +0,0 @@
[[preface]]
= Preface
The Spring Data JDBC project applies core Spring concepts to the development of solutions that use JDBC databases aligned with <<jdbc.domain-driven-design,Domain-driven design principles>>.
We provide a "`template`" as a high-level abstraction for storing and querying aggregates.
This document is the reference guide for Spring Data JDBC Support.
It explains the concepts, semantics, and syntax.
This section provides some basic introduction.
The rest of the document refers only to Spring Data JDBC features and assumes the user is familiar with SQL and Spring concepts.
[[get-started:first-steps:spring]]
== Learning Spring
Spring Data uses Spring framework's {spring-framework-docs}/core.html[core] functionality, including:
* {spring-framework-docs}/core.html#beans[IoC] container
* {spring-framework-docs}/core.html#validation[type conversion system]
* {spring-framework-docs}/core.html#expressions[expression language]
* {spring-framework-docs}/integration.html#jmx[JMX integration]
* {spring-framework-docs}/data-access.html#dao-exceptions[DAO exception hierarchy]
While you need not know the Spring APIs, understanding the concepts behind them is important.
At a minimum, the idea behind Inversion of Control (IoC) should be familiar, and you should be familiar with whatever IoC container you choose to use.
The core functionality of the JDBC Aggregate support can be used directly, with no need to invoke the IoC services of the Spring Container.
This is much like `JdbcTemplate`, which can be used "`standalone`" without any other services of the Spring container.
To leverage all the features of Spring Data JDBC, such as the repository support, you need to configure some parts of the library to use Spring.
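To make the contrast concrete, here is a minimal sketch of the repository support (the `Person` and `PersonRepository` names are illustrative, not part of the library): you only declare an interface, and Spring Data JDBC provides the implementation at runtime once repositories are enabled in your Spring configuration.

```java
import org.springframework.data.annotation.Id;
import org.springframework.data.repository.CrudRepository;

// An aggregate root; by default, table and column names are derived from the class.
class Person {
	@Id Long id;
	String name;
}

// Spring Data JDBC creates the implementation of this interface at runtime,
// provided repository support is enabled in the Spring configuration.
interface PersonRepository extends CrudRepository<Person, Long> {
}
```

Because the implementation is generated by the container, this declarative style is exactly the part of the library that requires Spring configuration, whereas template-style usage does not.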
To learn more about Spring, you can refer to the comprehensive documentation that explains the Spring Framework in detail.
There are a lot of articles, blog entries, and books on the subject.
See the Spring framework https://spring.io/docs[home page] for more information.
[[requirements]]
== Requirements
The Spring Data JDBC binaries require JDK level 8.0 and above and https://spring.io/docs[Spring Framework] {springVersion} and above.
In terms of databases, Spring Data JDBC requires a <<jdbc.dialects,dialect>> to abstract common SQL functionality over vendor-specific flavours.
Spring Data JDBC includes direct support for the following databases:
* DB2
* H2
* HSQLDB
* MariaDB
* Microsoft SQL Server
* MySQL
* Oracle
* Postgres
If you use a different database, your application will not start up. The <<jdbc.dialects,dialect>> section contains further detail on how to proceed in such a case.
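As a sketch of one way to proceed (assuming Java configuration; `H2Dialect` is used here purely as a placeholder for whichever dialect matches your database), you can supply a `Dialect` yourself by overriding the `jdbcDialect` bean method of `AbstractJdbcConfiguration`:

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.data.jdbc.repository.config.AbstractJdbcConfiguration;
import org.springframework.data.relational.core.dialect.Dialect;
import org.springframework.data.relational.core.dialect.H2Dialect;
import org.springframework.jdbc.core.namedparam.NamedParameterJdbcOperations;

@Configuration
class MyJdbcConfiguration extends AbstractJdbcConfiguration {

	// Overrides the dialect that is otherwise resolved from the JDBC connection.
	@Override
	public Dialect jdbcDialect(NamedParameterJdbcOperations operations) {
		return H2Dialect.INSTANCE; // replace with the dialect for your database
	}
}
```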
[[get-started:help]]
== Additional Help Resources
Learning a new framework is not always straightforward.
In this section, we try to provide what we think is an easy-to-follow guide for starting with the Spring Data JDBC module.
However, if you encounter issues or you need advice, feel free to use one of the following links:
[[get-started:help:community]]
Community Forum :: The https://stackoverflow.com/questions/tagged/spring-data[`spring-data`] tag on Stack Overflow is for all Spring Data (not just JDBC) users to share information and help each other.
Note that registration is needed only for posting.
[[get-started:help:professional]]
Professional Support :: Professional, from-the-source support, with guaranteed response time, is available from https://pivotal.io/[Pivotal Software, Inc.], the company behind Spring Data and Spring.
[[get-started:up-to-date]]
== Following Development
For information on the Spring Data JDBC source code repository, nightly builds, and snapshot artifacts, see the Spring Data JDBC https://spring.io/projects/spring-data-jdbc/[homepage].
You can help make Spring Data best serve the needs of the Spring community by interacting with developers through the Community on https://stackoverflow.com/questions/tagged/spring-data[Stack Overflow].
If you encounter a bug or want to suggest an improvement, please create a ticket on the https://github.com/spring-projects/spring-data-jdbc/issues[Spring Data issue tracker].
To stay up to date with the latest news and announcements in the Spring ecosystem, subscribe to the Spring Community https://spring.io[Portal].
You can also follow the Spring https://spring.io/blog[blog] or the project team on Twitter (https://twitter.com/SpringData[SpringData]).
[[project]]
== Project Metadata
* Release repository: https://repo1.maven.org/maven2/
* Milestone repository: https://repo.spring.io/milestone
* Snapshot repository: https://repo.spring.io/snapshot