[[jdbc.repositories]]
= JDBC Repositories
This chapter points out the specialties for repository support for JDBC. This builds on the core repository support explained in <<repositories>>.
You should have a sound understanding of the basic concepts explained there.
[[jdbc.why]]
== Why Spring Data JDBC?
The main persistence API for relational databases in the Java world is certainly JPA, which has its own Spring Data module.
Why is there another one?
JPA does a lot of things in order to help the developer.
Among other things, it tracks changes to entities.
It does lazy loading for you.
It lets you map a wide array of object constructs to an equally wide array of database designs.
This is great and makes a lot of things really easy.
Just take a look at a basic JPA tutorial.
But it often gets really confusing as to why JPA does a certain thing.
Also, things that are really simple conceptually get rather difficult with JPA.
Spring Data JDBC aims to be much simpler conceptually, by embracing the following design decisions:
* If you load an entity, SQL statements get run.
Once this is done, you have a completely loaded entity.
No lazy loading or caching is done.
* If you save an entity, it gets saved.
If you do not, it does not.
There is no dirty tracking and no session.
* There is a simple model of how to map entities to tables.
It probably only works for rather simple cases.
If you do not like that, you should code your own strategy.
Spring Data JDBC offers only very limited support for customizing the strategy with annotations.
[[jdbc.domain-driven-design]]
== Domain Driven Design and Relational Databases
All Spring Data modules are inspired by the concepts of "`repository`", "`aggregate`", and "`aggregate root`" from Domain Driven Design.
These are possibly even more important for Spring Data JDBC, because they are, to some extent, contrary to normal practice when working with relational databases.
An aggregate is a group of entities that is guaranteed to be consistent between atomic changes to it.
A classic example is an `Order` with `OrderItems`.
A property on `Order` (for example, `numberOfItems`) is kept consistent with the actual number of `OrderItems` as changes are made.
References across aggregates are not guaranteed to be consistent at all times.
They are guaranteed to become consistent eventually.
Each aggregate has exactly one aggregate root, which is one of the entities of the aggregate.
The aggregate gets manipulated only through methods on that aggregate root.
These are the atomic changes mentioned earlier.
A repository is an abstraction over a persistent store that looks like a collection of all the aggregates of a certain type.
For Spring Data in general, this means you want to have one `Repository` per aggregate root.
In addition, for Spring Data JDBC this means that all entities reachable from an aggregate root are considered to be part of that aggregate root.
Spring Data JDBC assumes that only the aggregate has a foreign key to a table storing non-root entities of the aggregate and no other entity points toward non-root entities.
WARNING: In the current implementation, entities referenced from an aggregate root are deleted and recreated by Spring Data JDBC.
You can overwrite the repository methods with implementations that match your style of working and designing your database.
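As a minimal sketch of these concepts, an `Order` aggregate might look as follows. All class, property, and repository names here are illustrative assumptions, not names prescribed by Spring Data JDBC:

====
[source,java]
----
public class Order {

    @Id Long id;

    String customerName;

    Set<OrderItem> items = new HashSet<>(); // part of the aggregate: loaded and persisted together with the root

    Long shoppingCartId;                    // a reference to another aggregate, encoded as a simple id value
}

public class OrderItem {

    String product;
    int quantity;
}

// one repository per aggregate root; there is no repository for OrderItem
interface OrderRepository extends CrudRepository<Order, Long> {}
----
====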
[[jdbc.getting-started]]
== Getting Started
An easy way to bootstrap setting up a working environment is to create a Spring-based project in https://spring.io/tools/sts[STS] or from https://start.spring.io[Spring Initializr].
First, you need to set up a running database server. Refer to your vendor documentation on how to configure your database for JDBC access.
To create a Spring project in STS:
. Go to File -> New -> Spring Template Project -> Simple Spring Utility Project, and press Yes when prompted. Then enter a project and a package name, such as `org.spring.jdbc.example`.
. Add the following to the `pom.xml` file's `dependencies` element:
+
[source,xml,subs="+attributes"]
----
<dependencies>
    <!-- other dependency elements omitted -->
    <dependency>
        <groupId>org.springframework.data</groupId>
        <artifactId>spring-data-jdbc</artifactId>
        <version>{version}</version>
    </dependency>
</dependencies>
----
. Change the version of Spring in the pom.xml to be
+
[source,xml,subs="+attributes"]
----
<spring.framework.version>{springVersion}</spring.framework.version>
----
. Add the following location of the Spring Milestone repository for Maven to your `pom.xml` such that it is at the same level of your `<dependencies/>` element:
+
[source,xml]
----
<repositories>
    <repository>
        <id>spring-milestone</id>
        <name>Spring Maven MILESTONE Repository</name>
        <url>https://repo.spring.io/libs-milestone</url>
    </repository>
</repositories>
----
The repository is also https://repo.spring.io/milestone/org/springframework/data/[browseable here].
[[jdbc.examples-repo]]
== Examples Repository
There is a https://github.com/spring-projects/spring-data-examples[GitHub repository with several examples] that you can download and play around with to get a feel for how the library works.
[[jdbc.java-config]]
== Annotation-based Configuration
The Spring Data JDBC repositories support can be activated by an annotation through Java configuration, as the following example shows:
.Spring Data JDBC repositories using Java configuration
====
[source,java]
----
@Configuration
@EnableJdbcRepositories // <1>
class ApplicationConfig extends AbstractJdbcConfiguration { // <2>

    @Bean
    public DataSource dataSource() { // <3>

        EmbeddedDatabaseBuilder builder = new EmbeddedDatabaseBuilder();
        return builder.setType(EmbeddedDatabaseType.HSQL).build();
    }

    @Bean
    NamedParameterJdbcOperations namedParameterJdbcOperations(DataSource dataSource) { // <4>
        return new NamedParameterJdbcTemplate(dataSource);
    }

    @Bean
    TransactionManager transactionManager(DataSource dataSource) { // <5>
        return new DataSourceTransactionManager(dataSource);
    }
}
----
<1> `@EnableJdbcRepositories` creates implementations for interfaces derived from `Repository`.
<2> `AbstractJdbcConfiguration` provides various default beans required by Spring Data JDBC.
<3> Creates a `DataSource` connecting to a database.
This is required by the following two bean methods.
<4> Creates the `NamedParameterJdbcOperations` used by Spring Data JDBC to access the database.
<5> Spring Data JDBC utilizes the transaction management provided by Spring JDBC.
====
The configuration class in the preceding example sets up an embedded HSQL database by using the `EmbeddedDatabaseBuilder` API of `spring-jdbc`.
The `DataSource` is then used to set up `NamedParameterJdbcOperations` and a `TransactionManager`.
We finally activate Spring Data JDBC repositories by using the `@EnableJdbcRepositories` annotation.
If no base package is configured, it uses the package in which the configuration class resides.
Extending `AbstractJdbcConfiguration` ensures various beans get registered.
Overwriting its methods can be used to customize the setup (see below).
This configuration can be further simplified by using Spring Boot.
With Spring Boot a `DataSource` is sufficient once the starter `spring-boot-starter-data-jdbc` is included in the dependencies.
Everything else is done by Spring Boot.
There are a couple of things one might want to customize in this setup.
[[jdbc.dialects]]
=== Dialects
Spring Data JDBC uses implementations of the interface `Dialect` to encapsulate behavior that is specific to a database or its JDBC driver.
By default, the `AbstractJdbcConfiguration` tries to determine the database in use and register the correct `Dialect`.
This behavior can be changed by overwriting `jdbcDialect(NamedParameterJdbcOperations)`.
If you use a database for which no dialect is available, your application won't start up. In that case, you will have to ask your vendor to provide a `Dialect` implementation. Alternatively, you can take the following steps (see the sketch after this list):
1. Implement your own `Dialect`.
2. Implement a `JdbcDialectProvider` returning the `Dialect`.
3. Register the provider by creating a `spring.factories` resource under `META-INF` and perform the registration by adding a line +
`org.springframework.data.jdbc.repository.config.DialectResolver$JdbcDialectProvider=<fully qualified name of your JdbcDialectProvider>`
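The following sketch covers the first two steps, assuming a hypothetical `MyDialect` for your database. `DialectResolver.JdbcDialectProvider` and `AnsiDialect` are existing types; everything else is an assumption:

====
[source,java]
----
import java.sql.Connection;
import java.util.Optional;

import org.springframework.data.jdbc.repository.config.DialectResolver;
import org.springframework.data.relational.core.dialect.AnsiDialect;
import org.springframework.data.relational.core.dialect.Dialect;

public class MyDialectProvider implements DialectResolver.JdbcDialectProvider {

    @Override
    public Optional<Dialect> getDialect(Connection connection) {
        // a real provider would inspect the connection metadata before answering
        return Optional.of(new MyDialect());
    }

    // step 1: extending AnsiDialect is one possible starting point for a custom Dialect
    static class MyDialect extends AnsiDialect {}
}
----
====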
[[jdbc.entity-persistence]]
== Persisting Entities
Saving an aggregate can be performed with the `CrudRepository.save(…)` method.
If the aggregate is new, this results in an insert for the aggregate root, followed by insert statements for all directly or indirectly referenced entities.
If the aggregate root is not new, all referenced entities get deleted, the aggregate root gets updated, and all referenced entities get inserted again.
Note that whether an instance is new is part of the instance's state.
NOTE: This approach has some obvious downsides.
If only a few of the referenced entities have actually changed, the deletion and re-insertion is wasteful.
While this process could and probably will be improved, there are certain limitations to what Spring Data JDBC can offer.
It does not know the previous state of an aggregate.
So any update process always has to take whatever it finds in the database and make sure it converts it to whatever is the state of the entity passed to the save method.
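As a usage sketch, reusing the hypothetical `Order` aggregate from above:

====
[source,java]
----
Order order = new Order();          // id is null, so the aggregate counts as new
order.items.add(new OrderItem());
order = repository.save(order);     // INSERT for the order, then INSERTs for all items

order.items.add(new OrderItem());
order = repository.save(order);     // items get deleted, the order updated, all items re-inserted
----
====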
include::{spring-data-commons-docs}/object-mapping.adoc[leveloffset=+2]
[[jdbc.entity-persistence.types]]
=== Supported Types in Your Entity
The properties of the following types are currently supported:
* All primitive types and their boxed types (`int`, `float`, `Integer`, `Float`, and so on)
* Enums get mapped to their name.
* `String`
* `java.util.Date`, `java.time.LocalDate`, `java.time.LocalDateTime`, and `java.time.LocalTime`
* Arrays and Collections of the types mentioned above can be mapped to columns of array type if your database supports that.
* Anything your database driver accepts.
* References to other entities.
They are considered a one-to-one relationship, or an embedded type.
It is optional for one-to-one relationship entities to have an `id` attribute.
The table of the referenced entity is expected to have an additional column named the same as the table of the referencing entity.
You can change this name by implementing `NamingStrategy.getReverseColumnName(PersistentPropertyPathExtension path)`.
Embedded entities do not need an `id`.
If one is present it gets ignored.
* `Set<some entity>` is considered a one-to-many relationship.
The table of the referenced entity is expected to have an additional column named the same as the table of the referencing entity.
You can change this name by implementing `NamingStrategy.getReverseColumnName(PersistentPropertyPathExtension path)`.
* `Map<simple type, some entity>` is considered a qualified one-to-many relationship.
The table of the referenced entity is expected to have two additional columns: One named the same as the table of the referencing entity for the foreign key and one with the same name and an additional `_key` suffix for the map key.
You can change this behavior by implementing `NamingStrategy.getReverseColumnName(PersistentPropertyPathExtension path)` and `NamingStrategy.getKeyColumn(RelationalPersistentProperty property)`, respectively.
Alternatively you may annotate the attribute with `@MappedCollection(idColumn="your_column_name", keyColumn="your_key_column_name")`
* `List<some entity>` is mapped as a `Map<Integer, some entity>`.
The handling of referenced entities is limited.
This is based on the idea of aggregate roots as described above.
If you reference another entity, that entity is, by definition, part of your aggregate.
So, if you remove the reference, the previously referenced entity gets deleted.
This also means references are 1-1 or 1-n, but not n-1 or n-m.
If you have n-1 or n-m references, you are, by definition, dealing with two separate aggregates.
References between those should be encoded as simple `id` values, which should map properly with Spring Data JDBC.
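A sketch of an entity combining several of the mappings above (all names are illustrative):

====
[source,java]
----
public class Library {

    @Id Long id;

    String name;              // simple column
    LocalDateTime createdAt;  // supported date/time type

    Address address;          // one-to-one: the ADDRESS table gets a LIBRARY column
    Set<Book> books;          // one-to-many: the BOOK table gets a LIBRARY column
    Map<String, Room> rooms;  // qualified one-to-many: ROOM gets LIBRARY and LIBRARY_KEY columns
    List<Shelf> shelves;      // treated like Map<Integer, Shelf>

    Long ownerId;             // reference to another aggregate, stored as a simple id value
}
----
====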
[[jdbc.entity-persistence.custom-converters]]
=== Custom converters
Custom converters can be registered, for types that are not supported by default, by inheriting your configuration from `AbstractJdbcConfiguration` and overwriting the method `jdbcCustomConversions()`.
====
[source,java]
----
@Configuration
public class DataJdbcConfiguration extends AbstractJdbcConfiguration {

    @Override
    public JdbcCustomConversions jdbcCustomConversions() {
        return new JdbcCustomConversions(Collections.singletonList(TimestampTzToDateConverter.INSTANCE));
    }

    @ReadingConverter
    enum TimestampTzToDateConverter implements Converter<TIMESTAMPTZ, Date> {

        INSTANCE;

        @Override
        public Date convert(TIMESTAMPTZ source) {
            //...
        }
    }
}
----
====
The constructor of `JdbcCustomConversions` accepts a list of `org.springframework.core.convert.converter.Converter`.
Converters should be annotated with `@ReadingConverter` or `@WritingConverter` to control whether they apply only to reading from or writing to the database.
`TIMESTAMPTZ` in the example is a database-specific data type that needs conversion into something more suitable for a domain model.
[[jdbc.entity-persistence.custom-converters.jdbc-value]]
==== JdbcValue
Value conversion uses `JdbcValue` to enrich values propagated to JDBC operations with a `java.sql.Types` type.
Register a custom write converter if you need to specify a JDBC-specific type instead of using type derivation.
This converter should convert the value to `JdbcValue` which has a field for the value and for the actual `JDBCType`.
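A sketch of such a converter, assuming a `Color` enum in the domain model that should be written as an explicit `VARCHAR`:

====
[source,java]
----
@WritingConverter
enum ColorToJdbcValueConverter implements Converter<Color, JdbcValue> {

    INSTANCE;

    @Override
    public JdbcValue convert(Color color) {
        // pair the converted value with an explicit java.sql.JDBCType instead of relying on type derivation
        return JdbcValue.of(color.name(), JDBCType.VARCHAR);
    }
}
----
====

Register it through `jdbcCustomConversions()` as shown in the previous section.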
[[jdbc.entity-persistence.naming-strategy]]
=== `NamingStrategy`
When you use the standard implementations of `CrudRepository` that Spring Data JDBC provides, they expect a certain table structure.
You can tweak that by providing a {javadoc-base}org/springframework/data/relational/core/mapping/NamingStrategy.html[`NamingStrategy`] in your application context.
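For example, a `NamingStrategy` that prefixes every derived table name could be provided as follows (a sketch; the `APP_` prefix is an arbitrary assumption):

====
[source,java]
----
@Bean
public NamingStrategy namingStrategy() {
    return new NamingStrategy() {

        @Override
        public String getTableName(Class<?> type) {
            // delegate to the default derivation and prepend a prefix
            return "APP_" + NamingStrategy.super.getTableName(type);
        }
    };
}
----
====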
[[jdbc.entity-persistence.custom-table-name]]
=== Custom table names
When the `NamingStrategy` does not match your database table names, you can customize the names with the {javadoc-base}org/springframework/data/relational/core/mapping/Table.html[`@Table`] annotation.
The element `value` of this annotation provides the custom table name.
The following example maps the `MyEntity` class to the `CUSTOM_TABLE_NAME` table in the database:
====
[source,java]
----
@Table("CUSTOM_TABLE_NAME")
public class MyEntity {
@Id
Integer id;
String name;
}
----
====
[[jdbc.entity-persistence.custom-column-name]]
=== Custom column names
When the `NamingStrategy` does not match your database column names, you can customize the names with the {javadoc-base}org/springframework/data/relational/core/mapping/Column.html[`@Column`] annotation.
The element `value` of this annotation provides the custom column name.
The following example maps the `name` property of the `MyEntity` class to the `CUSTOM_COLUMN_NAME` column in the database:
====
[source,java]
----
public class MyEntity {

    @Id
    Integer id;

    @Column("CUSTOM_COLUMN_NAME")
    String name;
}
----
====
The {javadoc-base}org/springframework/data/relational/core/mapping/MappedCollection.html[`@MappedCollection`]
annotation can be used on a reference type (one-to-one relationship) or on Sets, Lists, and Maps (one-to-many relationship).
The `idColumn` element of the annotation provides a custom name for the foreign key column referencing the id column in the other table.
In the following example, the table corresponding to the `MySubEntity` class has a `NAME` column and a `CUSTOM_MY_ENTITY_ID_COLUMN_NAME` column referencing the id of `MyEntity`:
====
[source,java]
----
public class MyEntity {

    @Id
    Integer id;

    @MappedCollection(idColumn = "CUSTOM_MY_ENTITY_ID_COLUMN_NAME")
    Set<MySubEntity> subEntities;
}

public class MySubEntity {
    String name;
}
----
====
When using `List` and `Map`, you must have an additional column for the position of an element in the `List` or the key value of the entity in the `Map`.
This additional column name may be customized with the `keyColumn` element of the {javadoc-base}org/springframework/data/relational/core/mapping/MappedCollection.html[`@MappedCollection`] annotation:
====
[source,java]
----
public class MyEntity {

    @Id
    Integer id;

    @MappedCollection(idColumn = "CUSTOM_COLUMN_NAME", keyColumn = "CUSTOM_KEY_COLUMN_NAME")
    List<MySubEntity> name;
}

public class MySubEntity {
    String name;
}
----
====
[[jdbc.entity-persistence.embedded-entities]]
=== Embedded entities
Embedded entities are used to have value objects in your Java data model, even if there is only one table in your database.
In the following example, `MyEntity` is mapped with the `@Embedded` annotation.
The consequence of this is that, in the database, a table `my_entity` with the two columns `id` and `name` (from the `EmbeddedEntity` class) is expected.
However, if the `name` column is actually `null` within the result set, the entire property `embeddedEntity` is set to `null` according to the `onEmpty` element of `@Embedded`, which ``null``s objects when all nested properties are `null`. +
Contrary to this behavior, `USE_EMPTY` tries to create a new instance using either a default constructor or one that accepts nullable parameter values from the result set.
.Sample Code of embedding objects
====
[source,java]
----
public class MyEntity {

    @Id
    Integer id;

    @Embedded(onEmpty = USE_NULL) <1>
    EmbeddedEntity embeddedEntity;
}

public class EmbeddedEntity {
    String name;
}
----
<1> ``Null``s `embeddedEntity` if `name` is `null`.
Use `USE_EMPTY` to instantiate `embeddedEntity` with a potential `null` value for the `name` property.
====
If you need a value object multiple times in an entity, this can be achieved with the optional `prefix` element of the `@Embedded` annotation.
This element represents a prefix that is prepended to each column name in the embedded object.
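A sketch using the `prefix` element; the `Address` type and the column names are assumptions:

====
[source,java]
----
public class MyEntity {

    @Id Integer id;

    @Embedded(onEmpty = USE_NULL, prefix = "work_")
    Address workAddress; // expects columns work_street, work_city

    @Embedded(onEmpty = USE_NULL, prefix = "home_")
    Address homeAddress; // expects columns home_street, home_city
}

public class Address {

    String street;
    String city;
}
----
====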
[TIP]
====
Make use of the shortcuts `@Embedded.Nullable` & `@Embedded.Empty` for `@Embedded(onEmpty = USE_NULL)` and `@Embedded(onEmpty = USE_EMPTY)` to reduce verbosity and simultaneously set JSR-305 `@javax.annotation.Nonnull` accordingly.
[source,java]
----
public class MyEntity {

    @Id
    Integer id;

    @Embedded.Nullable <1>
    EmbeddedEntity embeddedEntity;
}
----
<1> Shortcut for `@Embedded(onEmpty = USE_NULL)`.
====
Embedded entities containing a `Collection` or a `Map` are always considered non-empty, since they at least contain the empty collection or map.
Such an entity is therefore never `null`, even when using `@Embedded(onEmpty = USE_NULL)`.
[[jdbc.entity-persistence.state-detection-strategies]]
include::{spring-data-commons-docs}/is-new-state-detection.adoc[leveloffset=+2]
[[jdbc.entity-persistence.id-generation]]
=== ID Generation
Spring Data JDBC uses the ID to identify entities.
The ID of an entity must be annotated with Spring Data's https://docs.spring.io/spring-data/commons/docs/current/api/org/springframework/data/annotation/Id.html[`@Id`] annotation.
When your database has an auto-increment column for the ID column, the generated value gets set in the entity after inserting it into the database.
One important constraint is that, after saving an entity, the entity must not be new any more.
Note that whether an entity is new is part of the entity's state.
With auto-increment columns, this happens automatically, because the ID gets set by Spring Data with the value from the ID column.
If you are not using auto-increment columns, you can use a `BeforeSave` listener, which sets the ID of the entity (covered later in this document).
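A sketch of setting the id programmatically, assuming a hypothetical `MyAggregate` with a `UUID` id. It uses a `BeforeConvertCallback`, which the callbacks section later in this document names as the correct place to set an id programmatically:

====
[source,java]
----
@Bean
BeforeConvertCallback<MyAggregate> idGeneratingCallback() {
    return aggregate -> {

        if (aggregate.getId() == null) { // only aggregates without an id count as new
            aggregate.setId(UUID.randomUUID());
        }
        return aggregate;
    };
}
----
====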
[[jdbc.entity-persistence.optimistic-locking]]
=== Optimistic Locking
Spring Data JDBC supports optimistic locking by means of a numeric attribute that is annotated with
https://docs.spring.io/spring-data/commons/docs/current/api/org/springframework/data/annotation/Version.html[`@Version`] on the aggregate root.
Whenever Spring Data JDBC saves an aggregate with such a version attribute, two things happen:
The update statement for the aggregate root contains a where clause checking that the version stored in the database is actually unchanged.
If this is not the case, an `OptimisticLockingFailureException` is thrown.
Also, the version attribute gets increased both in the entity and in the database, so a concurrent action notices the change and throws an `OptimisticLockingFailureException`, if applicable, as described above.
This process also applies to inserting new aggregates, where a `null` or `0` version indicates a new instance, and the increased version afterwards marks the instance as no longer new.
This works rather nicely with cases where the id is generated during object construction, for example when UUIDs are used.
During deletes the version check also applies, but no version is increased.
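A sketch of an aggregate root with a version attribute (the class and its other properties are assumptions):

====
[source,java]
----
public class Account {

    @Id Long id;

    @Version Long version; // null or 0 marks the instance as new; incremented on every save

    String owner;
}
----
====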
[[jdbc.query-methods]]
== Query Methods
This section offers some specific information about the implementation and use of Spring Data JDBC.
Most of the data access operations you usually trigger on a repository result in a query being run against the database.
Defining such a query is a matter of declaring a method on the repository interface, as the following example shows:
.PersonRepository with query methods
====
[source,java]
----
interface PersonRepository extends PagingAndSortingRepository<Person, String> {

    List<Person> findByFirstname(String firstname);                                   <1>

    List<Person> findByFirstnameOrderByLastname(String firstname, Pageable pageable); <2>

    Person findByFirstnameAndLastname(String firstname, String lastname);             <3>

    Person findFirstByLastname(String lastname);                                      <4>

    @Query("SELECT * FROM person WHERE lastname = :lastname")
    List<Person> findByLastname(String lastname);                                     <5>
}
----
<1> The method shows a query for all people with the given `firstname`.
The query is derived by parsing the method name for constraints that can be concatenated with `And` and `Or`.
Thus, the method name results in a query expression of `SELECT … FROM person WHERE firstname = :firstname`.
<2> Use `Pageable` to pass offset and sorting parameters to the database.
<3> Find a single entity for the given criteria.
It completes with `IncorrectResultSizeDataAccessException` on non-unique results.
<4> In contrast to <3>, the first entity is always emitted even if the query yields more rows.
<5> The `findByLastname` method shows a query for all people with the given last name.
====
The following table shows the keywords that are supported for query methods:
[cols="1,2,3",options="header",subs="quotes"]
.Supported keywords for query methods
|===
| Keyword
| Sample
| Logical result
| `After`
| `findByBirthdateAfter(Date date)`
| `birthdate > date`
| `GreaterThan`
| `findByAgeGreaterThan(int age)`
| `age > age`
| `GreaterThanEqual`
| `findByAgeGreaterThanEqual(int age)`
| `age >= age`
| `Before`
| `findByBirthdateBefore(Date date)`
| `birthdate < date`
| `LessThan`
| `findByAgeLessThan(int age)`
| `age < age`
| `LessThanEqual`
| `findByAgeLessThanEqual(int age)`
| `age \<= age`
| `Between`
| `findByAgeBetween(int from, int to)`
| `age BETWEEN from AND to`
| `NotBetween`
| `findByAgeNotBetween(int from, int to)`
| `age NOT BETWEEN from AND to`
| `In`
| `findByAgeIn(Collection<Integer> ages)`
| `age IN (age1, age2, ageN)`
| `NotIn`
| `findByAgeNotIn(Collection ages)`
| `age NOT IN (age1, age2, ageN)`
| `IsNotNull`, `NotNull`
| `findByFirstnameNotNull()`
| `firstname IS NOT NULL`
| `IsNull`, `Null`
| `findByFirstnameNull()`
| `firstname IS NULL`
| `Like`, `StartingWith`, `EndingWith`
| `findByFirstnameLike(String name)`
| `firstname LIKE name`
| `NotLike`, `IsNotLike`
| `findByFirstnameNotLike(String name)`
| `firstname NOT LIKE name`
| `Containing` on String
| `findByFirstnameContaining(String name)`
| `firstname LIKE '%' name +'%'`
| `NotContaining` on String
| `findByFirstnameNotContaining(String name)`
| `firstname NOT LIKE '%' name +'%'`
| `(No keyword)`
| `findByFirstname(String name)`
| `firstname = name`
| `Not`
| `findByFirstnameNot(String name)`
| `firstname != name`
| `IsTrue`, `True`
| `findByActiveIsTrue()`
| `active IS TRUE`
| `IsFalse`, `False`
| `findByActiveIsFalse()`
| `active IS FALSE`
|===
NOTE: Query derivation is limited to properties that can be used in a `WHERE` clause without using joins.
[[jdbc.query-methods.strategies]]
=== Query Lookup Strategies
The JDBC module supports defining a query manually as a String in a `@Query` annotation or as a named query in a properties file.
Deriving a query from the name of the method is currently limited to simple properties, that is, properties present directly in the aggregate root.
Also, only select queries are supported by this approach.
[[jdbc.query-methods.at-query]]
=== Using `@Query`
The following example shows how to use `@Query` to declare a query method:
.Declare a query method by using @Query
====
[source,java]
----
public interface UserRepository extends CrudRepository<User, Long> {

    @Query("select firstName, lastName from User u where u.emailAddress = :email")
    User findByEmailAddress(@Param("email") String email);
}
----
====
For converting the query result into entities, the same `RowMapper` is used by default as for the queries Spring Data JDBC generates itself.
The query you provide must match the format the `RowMapper` expects.
Columns for all properties that are used in the constructor of an entity must be provided.
Columns for properties that get set via setter, wither or field access are optional.
Properties that don't have a matching column in the result will not be set.
The query is used for populating the aggregate root, embedded entities and one-to-one relationships including arrays of primitive types which get stored and loaded as SQL-array-types.
Separate queries are generated for maps, lists, sets and arrays of entities.
NOTE: Spring fully supports Java 8’s parameter name discovery based on the `-parameters` compiler flag.
By using this flag in your build as an alternative to debug information, you can omit the `@Param` annotation for named parameters.
NOTE: Spring Data JDBC supports only named parameters.
[[jdbc.query-methods.named-query]]
=== Named Queries
If no query is given in an annotation, as described in the previous section, Spring Data JDBC tries to locate a named query.
There are two ways the name of the query can be determined.
The default is to take the _domain class_ of the query, that is, the aggregate root of the repository, take its simple name, and append the name of the method, separated by a `.`.
Alternatively, the `@Query` annotation has a `name` attribute, which can be used to specify the name of a query to be looked up.
Named queries are expected to be provided in the property file `META-INF/jdbc-named-queries.properties` on the classpath.
The location of that file may be changed by setting a value to `@EnableJdbcRepositories.namedQueriesLocation`.
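For example, for a repository managing `Person` aggregates, a query method named `findByFirstname` is by default looked up under the name `Person.findByFirstname`. A sketch of the corresponding entry (the table and column names are assumptions):

====
[source,properties]
----
Person.findByFirstname=SELECT * FROM person WHERE firstname = :firstname
----
====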
[[jdbc.query-methods.at-query.custom-rowmapper]]
==== Custom `RowMapper`
You can configure which `RowMapper` to use, either by using `@Query(rowMapperClass = …)` or by registering a `QueryMappingConfiguration` bean and registering a `RowMapper` per method return type.
The following example shows how to register `DefaultQueryMappingConfiguration`:
====
[source,java]
----
@Bean
QueryMappingConfiguration rowMappers() {
    return new DefaultQueryMappingConfiguration()
            .register(Person.class, new PersonRowMapper())
            .register(Address.class, new AddressRowMapper());
}
----
====
When determining which `RowMapper` to use for a method, the following steps are followed, based on the return type of the method:
. If the type is a simple type, no `RowMapper` is used.
+
Instead, the query is expected to return a single row with a single column, and a conversion to the return type is applied to that value.
. The entity classes in the `QueryMappingConfiguration` are iterated until one is found that is a superclass or interface of the return type in question.
The `RowMapper` registered for that class is used.
+
Iterating happens in the order of registration, so make sure to register more general types after specific ones.
If applicable, wrapper types such as collections or `Optional` are unwrapped.
Thus, a return type of `Optional<Person>` uses the `Person` type in the preceding process.
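A sketch of the `PersonRowMapper` registered above; `RowMapper` is the standard Spring JDBC interface, while the `Person` properties and column names are assumptions:

====
[source,java]
----
class PersonRowMapper implements RowMapper<Person> {

    @Override
    public Person mapRow(ResultSet rs, int rowNum) throws SQLException {

        Person person = new Person();
        person.setId(rs.getLong("id"));
        person.setFirstname(rs.getString("firstname"));
        return person;
    }
}
----
====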
NOTE: Using a custom `RowMapper` through `QueryMappingConfiguration`, `@Query(rowMapperClass=…)`, or a custom `ResultSetExtractor` disables Entity Callbacks and Lifecycle Events as the result mapping can issue its own events/callbacks if needed.
[[jdbc.query-methods.at-query.modifying]]
==== Modifying Query
You can mark a query as a modifying query by using `@Modifying` on a query method, as the following example shows:
====
[source,java]
----
@Modifying
@Query("UPDATE DUMMYENTITY SET name = :name WHERE id = :id")
boolean updateName(@Param("id") Long id, @Param("name") String name);
----
====
You can specify the following return types:
* `void`
* `int` (updated record count)
* `boolean` (whether a record was updated)
[[jdbc.mybatis]]
== MyBatis Integration
The CRUD operations and query methods can be delegated to MyBatis.
This section describes how to configure Spring Data JDBC to integrate with MyBatis and which conventions to follow to hand over the running of the queries as well as the mapping to the library.
[[jdbc.mybatis.configuration]]
=== Configuration
The easiest way to properly plug MyBatis into Spring Data JDBC is by importing `MyBatisJdbcConfiguration` into your application configuration:
[source,java]
----
@Configuration
@EnableJdbcRepositories
@Import(MyBatisJdbcConfiguration.class)
class Application {

    @Bean
    SqlSessionFactoryBean sqlSessionFactoryBean() {
        // Configure MyBatis here
    }
}
----
As you can see, all you need to declare is a `SqlSessionFactoryBean`, as `MyBatisJdbcConfiguration` relies on a `SqlSession` bean being available in the `ApplicationContext` eventually.
[[jdbc.mybatis.conventions]]
=== Usage conventions
For each operation in `CrudRepository`, Spring Data JDBC runs multiple statements.
If there is a https://github.com/mybatis/mybatis-3/blob/master/src/main/java/org/apache/ibatis/session/SqlSessionFactory.java[`SqlSessionFactory`] in the application context, Spring Data checks, for each step, whether the `SqlSessionFactory` offers a statement.
If one is found, that statement (including its configured mapping to an entity) is used.
The name of the statement is constructed by concatenating the fully qualified name of the entity type with `Mapper.` and a `String` determining the kind of statement.
For example, if an instance of `org.example.User` is to be inserted, Spring Data JDBC looks for a statement named `org.example.UserMapper.insert`.
When the statement is run, an instance of `MyBatisContext` gets passed as an argument, which makes various arguments available to the statement.
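A sketch of a MyBatis mapper providing the `org.example.UserMapper.insert` statement mentioned above. `#{instance.name}` navigates into the entity returned by `MyBatisContext.getInstance()`; the table and property names are assumptions:

====
[source,xml]
----
<mapper namespace="org.example.UserMapper">

    <insert id="insert">
        INSERT INTO users (name) VALUES (#{instance.name})
    </insert>

</mapper>
----
====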
The following table describes the available MyBatis statements:
[cols="default,default,default,asciidoc"]
|===
| Name | Purpose | CrudRepository methods that might trigger this statement | Attributes available in the `MyBatisContext`
| `insert` | Inserts a single entity. This also applies for entities referenced by the aggregate root. | `save`, `saveAll`. |
`getInstance`: the instance to be saved
`getDomainType`: The type of the entity to be saved.
`get(<key>)`: ID of the referencing entity, where `<key>` is the name of the back reference column provided by the `NamingStrategy`.
| `update` | Updates a single entity. This also applies for entities referenced by the aggregate root. | `save`, `saveAll`.|
`getInstance`: The instance to be saved
`getDomainType`: The type of the entity to be saved.
| `delete` | Deletes a single entity. | `delete`, `deleteById`.|
`getId`: The ID of the instance to be deleted
`getDomainType`: The type of the entity to be deleted.
| `deleteAll-<propertyPath>` | Deletes all entities referenced by any aggregate root of the type used as prefix with the given property path.
Note that the type used for prefixing the statement name is the name of the aggregate root, not the one of the entity to be deleted. | `deleteAll`.|
`getDomainType`: The type of the entities to be deleted.
| `deleteAll` | Deletes all aggregate roots of the type used as the prefix | `deleteAll`.|
`getDomainType`: The type of the entities to be deleted.
| `delete-<propertyPath>` | Deletes all entities referenced by an aggregate root with the given propertyPath | `deleteById`.|
`getId`: The ID of the aggregate root for which referenced entities are to be deleted.
`getDomainType`: The type of the entities to be deleted.
| `findById` | Selects an aggregate root by ID | `findById`.|
`getId`: The ID of the entity to load.
`getDomainType`: The type of the entity to load.
| `findAll` | Select all aggregate roots | `findAll`.|
`getDomainType`: The type of the entity to load.
| `findAllById` | Select a set of aggregate roots by ID values | `findAllById`.|
`getId`: A list of ID values of the entities to load.
`getDomainType`: The type of the entity to load.
| `findAllByProperty-<propertyName>` | Select a set of entities that is referenced by another entity. The type of the referencing entity is used for the prefix. The referenced entities type is used as the suffix. _This method is deprecated. Use `findAllByPath` instead_ | All `find*` methods. If no query is defined for `findAllByPath`|
`getId`: The ID of the entity referencing the entities to be loaded.
`getDomainType`: The type of the entity to load.
| `findAllByPath-<propertyPath>` | Select a set of entities that is referenced by another entity via a property path. | All `find*` methods.|
`getIdentifier`: The `Identifier` holding the id of the aggregate root plus the keys and list indexes of all path elements.
`getDomainType`: The type of the entity to load.
| `findAllSorted` | Select all aggregate roots, sorted | `findAll(Sort)`.|
`getSort`: The sorting specification.
| `findAllPaged` | Select a page of aggregate roots, optionally sorted | `findAll(Pageable)`.|
`getPageable`: The paging specification.
| `count` | Count the number of aggregate roots of the type used as prefix | `count` |
`getDomainType`: The type of aggregate roots to count.
|===
[[jdbc.events]]
== Lifecycle Events
Spring Data JDBC triggers events that get published to any matching `ApplicationListener` beans in the application context.
For example, the following listener gets invoked before an aggregate gets saved:
====
[source,java]
----
@Bean
public ApplicationListener<BeforeSaveEvent<Object>> loggingSaves() {

    return event -> {

        Object entity = event.getEntity();
        LOG.info("{} is getting saved.", entity);
    };
}
----
====
If you want to handle events only for a specific domain type, you may derive your listener from `AbstractRelationalEventListener` and overwrite one or more of the `onXXX` methods, where `XXX` stands for an event type.
Callback methods are only invoked for events related to the domain type and its subtypes, so no further casting is required.
====
[source,java]
----
public class PersonLoadListener extends AbstractRelationalEventListener<Person> {

    @Override
    protected void onAfterLoad(AfterLoadEvent<Person> personLoad) {
        LOG.info(personLoad.getEntity());
    }
}
----
====
The following table describes the available events:
.Available events
|===
| Event | When It Is Published
| {javadoc-base}org/springframework/data/relational/core/mapping/event/BeforeDeleteEvent.html[`BeforeDeleteEvent`]
| Before an aggregate root gets deleted.
| {javadoc-base}org/springframework/data/relational/core/mapping/event/AfterDeleteEvent.html[`AfterDeleteEvent`]
| After an aggregate root gets deleted.
| {javadoc-base}/org/springframework/data/relational/core/mapping/event/BeforeConvertEvent.html[`BeforeConvertEvent`]
| Before an aggregate root gets converted into a plan for executing SQL statements, but after the decision was made whether the aggregate is new or not, that is, whether an update or an insert is in order.
This is the correct event if you want to set an id programmatically.
| {javadoc-base}/org/springframework/data/relational/core/mapping/event/BeforeSaveEvent.html[`BeforeSaveEvent`]
| Before an aggregate root gets saved (that is, inserted or updated, but after the decision whether it gets inserted or updated was made).
| {javadoc-base}org/springframework/data/relational/core/mapping/event/AfterSaveEvent.html[`AfterSaveEvent`]
| After an aggregate root gets saved (that is, inserted or updated).
| {javadoc-base}org/springframework/data/relational/core/mapping/event/AfterLoadEvent.html[`AfterLoadEvent`]
| After an aggregate root gets created from a database `ResultSet` and all its properties get set.
|===
WARNING: Lifecycle events depend on an `ApplicationEventMulticaster`, which in the case of the `SimpleApplicationEventMulticaster` can be configured with a `TaskExecutor` and therefore gives no guarantees about when an event is processed.
[[jdbc.entity-callbacks]]
=== Store-specific EntityCallbacks
Spring Data JDBC uses the `EntityCallback` API for its auditing support and reacts on the following callbacks:
.Available Callbacks
|===
| `EntityCallback` | When It Is Published
| {javadoc-base}org/springframework/data/relational/core/mapping/event/BeforeDeleteCallback.html[`BeforeDeleteCallback`]
| Before an aggregate root gets deleted.
| {javadoc-base}org/springframework/data/relational/core/mapping/event/AfterDeleteCallback.html[`AfterDeleteCallback`]
| After an aggregate root gets deleted.
| {javadoc-base}/org/springframework/data/relational/core/mapping/event/BeforeConvertCallback.html[`BeforeConvertCallback`]
| Before an aggregate root gets converted into a plan for executing SQL statements, but after the decision was made whether the aggregate is new or not, that is, whether an update or an insert is in order.
This is the correct callback if you want to set an id programmatically.
| {javadoc-base}/org/springframework/data/relational/core/mapping/event/BeforeSaveCallback.html[`BeforeSaveCallback`]
| Before an aggregate root gets saved (that is, inserted or updated, but after the decision whether it gets inserted or updated was made).
| {javadoc-base}org/springframework/data/relational/core/mapping/event/AfterSaveCallback.html[`AfterSaveCallback`]
| After an aggregate root gets saved (that is, inserted or updated).
| {javadoc-base}org/springframework/data/relational/core/mapping/event/AfterLoadCallback.html[`AfterLoadCallback`]
| After an aggregate root gets created from a database `ResultSet` and all its properties get set.
|===
include::{spring-data-commons-docs}/entity-callbacks.adoc[leveloffset=+1]
include::jdbc-custom-conversions.adoc[]
[[jdbc.logging]]
== Logging
Spring Data JDBC does little to no logging on its own.
Instead, the mechanics of `JdbcTemplate` to issue SQL statements provide logging.
Thus, if you want to inspect what SQL statements are run, activate logging for Spring's {spring-framework-docs}/data-access.html#jdbc-JdbcTemplate[`NamedParameterJdbcTemplate`] or https://www.mybatis.org/mybatis-3/logging.html[MyBatis].
[[jdbc.transactions]]
== Transactionality
CRUD methods on repository instances are transactional by default.
For reading operations, the transaction configuration `readOnly` flag is set to `true`.
All others are configured with a plain `@Transactional` annotation so that default transaction configuration applies.
For details, see the Javadoc of link:{javadoc-base}org/springframework/data/jdbc/repository/support/SimpleJdbcRepository.html[`SimpleJdbcRepository`].
If you need to tweak transaction configuration for one of the methods declared in a repository, redeclare the method in your repository interface, as follows:
.Custom transaction configuration for CRUD
====
[source,java]
----
public interface UserRepository extends CrudRepository<User, Long> {

    @Override
    @Transactional(timeout = 10)
    public List<User> findAll();

    // Further query method declarations
}
----
====
The preceding example causes the `findAll()` method to run with a timeout of 10 seconds and without the `readOnly` flag.
Another way to alter transactional behavior is by using a facade or service implementation that typically covers more than one repository.
Its purpose is to define transactional boundaries for non-CRUD operations.
The following example shows how to create such a facade:
.Using a facade to define transactions for multiple repository calls
====
[source,java]
----
@Service
class UserManagementImpl implements UserManagement {

    private final UserRepository userRepository;
    private final RoleRepository roleRepository;

    @Autowired
    public UserManagementImpl(UserRepository userRepository,
            RoleRepository roleRepository) {
        this.userRepository = userRepository;
        this.roleRepository = roleRepository;
    }

    @Transactional
    public void addRoleToAllUsers(String roleName) {

        Role role = roleRepository.findByName(roleName);

        for (User user : userRepository.findAll()) {
            user.addRole(role);
            userRepository.save(user);
        }
    }
}
----
====
The preceding example causes calls to `addRoleToAllUsers(…)` to run inside a transaction (participating in an existing one or creating a new one if none are already running).
The transaction configuration for the repositories is neglected, as the outer transaction configuration determines the actual one used.
Note that you have to explicitly activate `<tx:annotation-driven />` or use `@EnableTransactionManagement` to get annotation-based configuration for facades working.
Note that the preceding example assumes you use component scanning.
[[jdbc.transaction.query-methods]]
=== Transactional Query Methods
To let your query methods be transactional, use `@Transactional` at the repository interface you define, as the following example shows:
.Using @Transactional at query methods
====
[source,java]
----
@Transactional(readOnly = true)
public interface UserRepository extends CrudRepository<User, Long> {

    List<User> findByLastname(String lastname);

    @Modifying
    @Transactional
    @Query("delete from User u where u.active = false")
    void deleteInactiveUsers();
}
----
====
Typically, you want the `readOnly` flag to be set to `true`, because most of the query methods only read data.
In contrast to that, `deleteInactiveUsers()` uses the `@Modifying` annotation and overrides the transaction configuration.
Thus, the method runs with the `readOnly` flag set to `false`.
NOTE: It is definitely reasonable to use transactions for read-only queries, and we can mark them as such by setting the `readOnly` flag.
This does not, however, act as a check that you do not trigger a manipulating query (although some databases reject `INSERT` and `UPDATE` statements inside a read-only transaction).
Instead, the `readOnly` flag is propagated as a hint to the underlying JDBC driver for performance optimizations.
include::{spring-data-commons-docs}/auditing.adoc[leveloffset=+1]
[[jdbc.auditing]]
== JDBC Auditing
In order to activate auditing, add `@EnableJdbcAuditing` to your configuration, as the following example shows:
.Activating auditing with Java configuration
====
[source,java]
----
@Configuration
@EnableJdbcAuditing
class Config {

    @Bean
    public AuditorAware<AuditableUser> auditorProvider() {
        return new AuditorAwareImpl();
    }
}
----
====
If you expose a bean of type `AuditorAware` to the `ApplicationContext`, the auditing infrastructure automatically picks it up and uses it to determine the current user to be set on domain types.
If you have multiple implementations registered in the `ApplicationContext`, you can select the one to be used by explicitly setting the `auditorAwareRef` attribute of `@EnableJdbcAuditing`.
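A sketch of the `AuditorAwareImpl` referenced above. How the current user gets determined is an assumption; a real implementation would typically consult the security context:

====
[source,java]
----
class AuditorAwareImpl implements AuditorAware<AuditableUser> {

    @Override
    public Optional<AuditableUser> getCurrentAuditor() {
        // a fixed user keeps the sketch simple
        return Optional.of(new AuditableUser("admin"));
    }
}
----
====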