For mapping Java objects to a relational database we use the Java Persistence API (JPA). As JPA implementation we recommend Hibernate. For general documentation about JPA and Hibernate follow the links above, as we will not replicate that documentation here. This guide only contains guidelines and examples of how we recommend to use them properly. The following examples show how to map the data of a database to an entity. As we use JPA, we abstract from SQL here. However, you will still need a DDL script for your schema and, during maintenance, database migrations. Please follow our SQL guide for such artifacts.
Entities are part of the persistence layer and contain the actual data. They are POJOs (Plain Old Java Objects) onto which the relational data of a database is mapped and vice versa. The mapping is configured via JPA annotations (javax.persistence). Usually an entity class corresponds to a table of the database and a property to a column of that table. A persistent entity instance then represents a row of the database table.
The following listing shows a simple example:
@Entity
@Table(name="TEXTMESSAGE")
public class MessageEntity extends ApplicationPersistenceEntity implements Message {
private String text;
public String getText() {
return this.text;
}
public void setText(String text) {
this.text = text;
}
}
The @Entity
annotation defines that instances of this class will be entities which can be stored in the database. The @Table
annotation is optional and can be used to define the name of the corresponding table in the database. If it is not specified, the simple name of the entity class is used instead.
In order to specify how to map the attributes to columns, we annotate the corresponding getter methods (technically, annotating private fields is also possible, but the two approaches must not be mixed).
The @Id
annotation specifies that a property should be used as primary key.
With the help of the @Column
annotation it is possible to define the name of the column that an attribute is mapped to as well as other aspects such as nullable
or unique
. If no column name is specified, the name of the property is used as default.
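For illustration, here is a hedged sketch of a standalone entity showing @Id and @Column on the getters. The class NoteEntity, the table and the column names are invented for this example; in devon4j the id is normally inherited from ApplicationPersistenceEntity as described further below:
@Entity
@Table(name = "NOTE")
public class NoteEntity {

  private Long id;

  private String text;

  // primary key, generated automatically (see the section on primary keys below)
  @Id
  @GeneratedValue(strategy = GenerationType.AUTO)
  public Long getId() {
    return this.id;
  }

  public void setId(Long id) {
    this.id = id;
  }

  // mapped to the column NOTE_TEXT, null values are rejected by the database
  @Column(name = "NOTE_TEXT", nullable = false)
  public String getText() {
    return this.text;
  }

  public void setText(String text) {
    this.text = text;
  }
}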
Note that every entity class needs a constructor with public or protected visibility that does not have any arguments. Moreover, neither the class nor its getters and setters may be final.
Entities should be simple POJOs and not contain business logic.
Standard datatypes like Integer, BigDecimal, String, etc. are mapped automatically by JPA. Custom datatypes are mapped as a serialized BLOB by default, which is typically undesired.
In order to map atomic custom datatypes (implementations of SimpleDatatype) we implement an AttributeConverter. Here is a simple example:
@Converter(autoApply = true)
public class MoneyAttributeConverter implements AttributeConverter<Money, BigDecimal> {

  @Override
  public BigDecimal convertToDatabaseColumn(Money attribute) {
    return attribute.getValue();
  }

  @Override
  public Money convertToEntityAttribute(BigDecimal dbData) {
    return new Money(dbData);
  }
}
The annotation @Converter
is detected by the JPA vendor if the annotated class is in the packages to scan. Further, autoApply = true
implies that the converter is automatically used for all properties of the handled datatype. Therefore all entities with properties of that datatype will automatically be mapped properly (in our example Money
is mapped as BigDecimal
).
In case you have a composite datatype that you need to map to multiple columns, JPA does not offer a real solution. As a workaround you can use a bean instead of a real datatype and declare it as @Embeddable. If you are using Hibernate you can implement CompositeUserType instead. Via the @TypeDef annotation it can be registered to Hibernate. If you want to annotate the CompositeUserType implementation itself, you also need another annotation (e.g. @MappedSuperclass, though not technically correct) so it is found by the scan.
By default JPA maps Enums via their ordinal. Therefore the database will only contain the ordinals (0, 1, 2, etc.), so inside the database you cannot easily understand their meaning. Using @Enumerated with EnumType.STRING allows mapping the enum values to their name (Enum.name()). Both approaches are fragile when it comes to code changes and refactoring (if you change the order of the enum values or rename them) after the application is deployed to production. If you want to avoid this and get a robust mapping, you can define a dedicated string in each enum value for the database representation that you keep untouched. Then you treat the enum just like any other custom datatype, as sketched below.
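As an illustration, here is a hedged sketch of such a robust enum mapping via an AttributeConverter. The enum OrderStatus, its database strings and the converter name are invented for this example; in a real project each type would live in its own source file:
public enum OrderStatus {

  OPEN("OPN"), PAID("PAY"), CANCELLED("CNL");

  // dedicated, stable string for the database representation
  private final String dbValue;

  OrderStatus(String dbValue) {
    this.dbValue = dbValue;
  }

  public String getDbValue() {
    return this.dbValue;
  }

  public static OrderStatus ofDbValue(String dbValue) {
    for (OrderStatus status : values()) {
      if (status.dbValue.equals(dbValue)) {
        return status;
      }
    }
    throw new IllegalArgumentException("Unknown db value: " + dbValue);
  }
}

@Converter(autoApply = true)
public class OrderStatusAttributeConverter implements AttributeConverter<OrderStatus, String> {

  @Override
  public String convertToDatabaseColumn(OrderStatus attribute) {
    return (attribute == null) ? null : attribute.getDbValue();
  }

  @Override
  public OrderStatus convertToEntityAttribute(String dbData) {
    return (dbData == null) ? null : OrderStatus.ofDbValue(dbData);
  }
}
Renaming or reordering the enum values later will then not break the data already stored in the database, as long as the dedicated strings stay untouched.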
If binary or character large objects (BLOB/CLOB) should be used to store the value of an attribute, e.g. to store an icon, the @Lob
annotation should be used as shown in the following listing:
@Lob
public byte[] getIcon() {
return this.icon;
}
Warning: Using a byte array will cause problems if BLOBs get large, because the entire BLOB is loaded into the RAM of the server and has to be processed by the garbage collector. For larger BLOBs the type Blob and streaming should be used.
public Blob getAttachment() {
return this.attachment;
}
To store date and time related values, the temporal annotation can be used as shown in the listing below:
@Temporal(TemporalType.TIMESTAMP)
public java.util.Date getStart() {
return start;
}
Up to Java 8, the Java data type java.util.Date (or JodaTime) has to be used for such properties.
TemporalType defines the granularity. In this case, a precision of nanoseconds is used. If this granularity is not wanted, TemporalType.DATE can be used instead, which only stores the date without the time of day.
Mixing these two granularities can cause problems when comparing one value to another. This is why we only use TemporalType.TIMESTAMP.
Using the Aliases API of QueryDSL might result in an InvalidDataAccessApiUsageException
when using custom datatypes in entity properties. This can be circumvented in two steps:
- Ensure you have the following Maven dependencies in your project (core module) to support custom types via the Aliases API:
<dependency>
  <groupId>org.ow2.asm</groupId>
  <artifactId>asm</artifactId>
</dependency>
<dependency>
  <groupId>cglib</groupId>
  <artifactId>cglib</artifactId>
</dependency>
- Make sure that all your custom types used in entities provide a no-argument constructor with at least visibility level protected.
We only use simple Long values as primary keys (IDs).
By default it is auto generated (@GeneratedValue(strategy=GenerationType.AUTO)
).
This is already provided by the class com.devonfw.<projectName>.general.dataaccess.api.AbstractPersistenceEntity within the classic project structure, or com.devonfw.<projectName>.general.domain.model.AbstractPersistenceEntity within the modern project structure, which you can extend.
The reason for this recommendation is simply that using a number (Long) is the most efficient representation for the database.
You may also consider using other types like String or UUID or even composite custom datatypes, and this is technically possible.
However, please consider that the primary key is used to look up the row from the database table, as well as in foreign keys and thus in JOINs.
Please note that your project sooner or later may reach a complexity where performance really matters.
Working on big data and performing JOINs using types such as String (VARCHAR) as primary and foreign keys will kill your performance.
You are still free to make a different choice; devonfw only gives recommendations and does not want to dictate what to do.
However, you have been warned about the consequences.
If you are well aware of what you are doing, you can still use different types of primary keys.
In that case, create your own entity not extending AbstractPersistenceEntity
or create your own copy of AbstractPersistenceEntity
with a different name and a different type of primary key.
In case you have business oriented keys (often as String
), you can define an additional property for it and declare it as unique (@Column(unique=true)
).
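A hedged sketch of such an additional business key property inside an entity; the property name ticketNumber and the column name are invented for this example:
private String ticketNumber;

// business-oriented key, enforced to be unique in addition to the technical Long id
@Column(name = "TICKET_NUMBER", unique = true)
public String getTicketNumber() {
  return this.ticketNumber;
}

public void setTicketNumber(String ticketNumber) {
  this.ticketNumber = ticketNumber;
}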
Be sure to declare the ID column of your SQL table as "AUTO_INCREMENT" (or the equivalent for your database) to be able to persist data.
Entities often do not exist independently but are related to each other. For example, in the restaurant example every period of time a StaffMember has worked is represented by the class WorkingTime, and each such WorkingTime has a relationship to its StaffMember.
The following listing shows how this can be modeled using JPA:
...
@Entity
public class WorkingTimeEntity {
...
private StaffMemberEntity staffMember;
@ManyToOne
@JoinColumn(name="STAFFMEMBER")
public StaffMemberEntity getStaffMember() {
return this.staffMember;
}
public void setStaffMember(StaffMemberEntity staffMember) {
this.staffMember = staffMember;
}
}
To represent the relationship, an attribute of the type of the referenced entity class has been introduced. The relationship is an n:1 relationship, because every WorkingTime belongs to exactly one StaffMember, but a StaffMember has usually worked more than once.
This is why the @ManyToOne
annotation is used here. For 1:1 relationships the @OneToOne
annotation can be used which works basically the same way. To be able to save information about the relation in the database, an additional column in the corresponding table of WorkingTime is needed which contains the primary key of the referenced StaffMember. With the name
element of the @JoinColumn
annotation it is possible to specify the name of this column.
The relationship in the example listed above is currently a unidirectional one, as there is a getter method for retrieving the StaffMember from the WorkingTime object, but not vice versa.
To make it bidirectional, the following code has to be added to StaffMember:
private Set<WorkingTimeEntity> workingTimes;
@OneToMany(mappedBy="staffMember")
public Set<WorkingTimeEntity> getWorkingTimes() {
return this.workingTimes;
}
public void setWorkingTimes(Set<WorkingTimeEntity> workingTimes) {
this.workingTimes = workingTimes;
}
To make the relationship bidirectional, the tables in the database do not have to be changed. Instead the column that corresponds to the attribute staffMember
in class WorkingTime
is used, which is specified by the mappedBy
element of the @OneToMany
annotation. Hibernate will search for corresponding WorkingTime
objects automatically when a StaffMember
is loaded.
The problem with bidirectional relationships is that if a WorkingTime
object is added to the set or list workingTimes
in StaffMember
, this does not have any effect in the database unless
the staffMember
attribute of that WorkingTime
object is set. That is why devon4j advises not to use bidirectional relationships but to use queries instead. How to do this is shown here. If a bidirectional relationship should be used nevertheless, appropriate add and remove methods must be used, for example as sketched below.
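If you use a bidirectional relationship nevertheless, a hedged sketch of such add and remove methods inside StaffMemberEntity could look like this (the method names are only a suggestion, and workingTimes is assumed to be initialized, e.g. with a HashSet):
public void addWorkingTime(WorkingTimeEntity workingTime) {
  this.workingTimes.add(workingTime);
  // keep the owning side in sync so the foreign key actually gets written
  workingTime.setStaffMember(this);
}

public void removeWorkingTime(WorkingTimeEntity workingTime) {
  this.workingTimes.remove(workingTime);
  workingTime.setStaffMember(null);
}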
For 1:n and n:m relations, devon4j demands that (unordered) Sets and no other collection types are used, as shown in the listing above. The only exception is when an ordering is really needed; in that case (sorted) lists can be used.
For example, if WorkingTime
objects should be sorted by their start time, this could be done like this:
private List<WorkingTimeEntity> workingTimes;
@OneToMany(mappedBy = "staffMember")
@OrderBy("startTime asc")
public List<WorkingTimeEntity> getWorkingTimes() {
return this.workingTimes;
}
public void setWorkingTimes(List<WorkingTimeEntity> workingTimes) {
this.workingTimes = workingTimes;
}
The value of the @OrderBy
annotation consists of an attribute name of the class followed by asc
(ascending) or desc
(descending).
To store information about a n:m relationship, a separate table has to be used, as one column cannot store several values (at least if the database schema is in first normal form).
For example if one wanted to extend the example application so that all ingredients of one FoodDrink
can be saved and to model the ingredients themselves as entities (e.g. to store additional information about them), this could be modeled as follows (extract of class FoodDrink
):
private Set<IngredientEntity> ingredients;
@ManyToMany()
@JoinTable
public Set<IngredientEntity> getIngredients() {
return this.ingredients;
}
public void setIngredients(Set<IngredientEntity> ingredients) {
this.ingredients = ingredients;
}
Information about the relation is stored in a separate join table that has two columns, one referencing the FoodDrink and the other one referencing the Ingredient. Note that the @JoinTable annotation is not strictly needed in this case because a separate table is the default solution here (the same applies to unidirectional 1:n relations) unless a mappedBy element is specified.
For 1:n relationships this solution has the disadvantage that more joins (in the database system) are needed to get an entity together with all the entities it references. This might have a negative impact on performance, so storing a reference to the "one" side in the table of the "many" side (as shown above with WorkingTime referencing its StaffMember) is probably the better solution in most cases.
Note that bidirectional n:m relationships are not allowed for applications based on devon4j. Instead a third entity has to be introduced, which "represents" the relationship (it has two n:1 relationships).
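A hedged sketch of such a third entity representing the relationship; the names FoodDrinkIngredientEntity and FoodDrinkEntity as well as the join column names are assumptions for this example, and further properties (e.g. an amount) could be added to it:
@Entity
public class FoodDrinkIngredientEntity extends ApplicationPersistenceEntity {

  private FoodDrinkEntity foodDrink;

  private IngredientEntity ingredient;

  // first side of the former n:m relation
  @ManyToOne(fetch = FetchType.LAZY)
  @JoinColumn(name = "FOOD_DRINK")
  public FoodDrinkEntity getFoodDrink() {
    return this.foodDrink;
  }

  public void setFoodDrink(FoodDrinkEntity foodDrink) {
    this.foodDrink = foodDrink;
  }

  // second side of the former n:m relation
  @ManyToOne(fetch = FetchType.LAZY)
  @JoinColumn(name = "INGREDIENT")
  public IngredientEntity getIngredient() {
    return this.ingredient;
  }

  public void setIngredient(IngredientEntity ingredient) {
    this.ingredient = ingredient;
  }
}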
Using JPA it is possible to use either lazy or eager loading. Eager loading means that for entities retrieved from the database, other entities that are referenced by these entities are also retrieved, whereas lazy loading means that this is only done when they are actually needed, i.e. when the corresponding getter method is invoked.
Applications based on devon4j are strongly advised to always use lazy loading. The JPA defaults are:
- @OneToMany: LAZY
- @ManyToMany: LAZY
- @ManyToOne: EAGER
- @OneToOne: EAGER
So at least for @ManyToOne
and @OneToOne
you always need to override the default by providing fetch = FetchType.LAZY
.
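Applied to the WorkingTime example from above, such an override could look like this:
// the StaffMember is only loaded from the database when the getter is actually invoked
@ManyToOne(fetch = FetchType.LAZY)
@JoinColumn(name = "STAFFMEMBER")
public StaffMemberEntity getStaffMember() {
  return this.staffMember;
}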
Important: Please read the performance guide.
For relations it is also possible to define whether operations are cascaded (like a recursion) to the related entity.
By default, nothing is cascaded. This can be changed by using the cascade property of the annotation that specifies the relation type (@OneToOne, @OneToMany, @ManyToOne, @ManyToMany). This property accepts a CascadeType that offers the following options:
- PERSIST (for EntityManager.persist, relevant to insert transient entities into the DB)
- REMOVE (for EntityManager.remove to delete the entity from the DB)
- MERGE (for EntityManager.merge)
- REFRESH (for EntityManager.refresh)
- DETACH (for EntityManager.detach)
- ALL (cascades all of the above operations)
See here for more information.
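As an illustration, here is a hedged sketch that cascades the remove operation from the StaffMember example to its WorkingTime entries; whether such cascading is desirable depends on your business requirements:
// deleting a StaffMemberEntity will also delete all its WorkingTimeEntity rows
@OneToMany(mappedBy = "staffMember", cascade = CascadeType.REMOVE)
public Set<WorkingTimeEntity> getWorkingTimes() {
  return this.workingTimes;
}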
For simple usage you can use Long
for all your foreign keys.
However, as an optional pattern for advanced and type-safe usage, we offer IdRef.
An embeddable object is a way to group properties of an entity into a separate Java (child) object. Unlike with relationships, the embeddable is not a separate entity and its properties are stored (embedded) in the same table together with the entity. This is helpful to structure and reuse groups of properties.
The following example shows an Address
implemented as an embeddable class:
@Embeddable
public class AddressEmbeddable {
private String street;
private String number;
private Integer zipCode;
private String city;
@Column(name="STREETNUMBER")
public String getNumber() {
return number;
}
public void setNumber(String number) {
this.number = number;
}
... // other getter and setter methods, equals, hashCode
}
As you can see an embeddable is similar to an entity class, but with an @Embeddable
annotation instead of the @Entity
annotation and without primary key or modification counter.
An Embeddable does not exist on its own but in the context of an entity.
As a simplification Embeddables do not require a separate interface and ETO as the bean-mapper will create a copy automatically when converting the owning entity to an ETO.
However, in this case the embeddable becomes part of your api module, which therefore needs a dependency on the JPA.
In addition to that the methods equals(Object)
and hashCode()
need to be implemented as this is required by Hibernate (it is not required for entities because they can be unambiguously identified by their primary key). For some hints on how to implement the hashCode()
method please have a look here.
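A hedged sketch of how equals(Object) and hashCode() could be implemented for the AddressEmbeddable above, based on java.util.Objects; your project may prefer a generated or library-based implementation instead:
@Override
public int hashCode() {
  return java.util.Objects.hash(this.street, this.number, this.zipCode, this.city);
}

@Override
public boolean equals(Object obj) {
  if (this == obj) {
    return true;
  }
  if (!(obj instanceof AddressEmbeddable)) {
    return false;
  }
  AddressEmbeddable other = (AddressEmbeddable) obj;
  // two embeddables are equal if all their properties are equal
  return java.util.Objects.equals(this.street, other.street)
      && java.util.Objects.equals(this.number, other.number)
      && java.util.Objects.equals(this.zipCode, other.zipCode)
      && java.util.Objects.equals(this.city, other.city);
}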
Using this AddressEmbeddable
inside an entity class can be done like this:
private AddressEmbeddable address;
@Embedded
public AddressEmbeddable getAddress() {
return this.address;
}
public void setAddress(AddressEmbeddable address) {
this.address = address;
}
The @Embedded annotation needs to be used for embedded attributes. Note that if all columns of the embeddable (here Address) are null, then the embeddable object itself is also null inside the entity. This has to be considered to avoid NullPointerExceptions. Further, this causes some issues with primitive types in embeddable classes, which can be avoided by only using object types instead.
Just like normal Java classes, entity classes can inherit from others. The only difference is that you need to specify how to map a class hierarchy to database tables. Generic abstract super-classes for entities can simply be annotated with @MappedSuperclass
.
For all other cases JPA offers the annotation @Inheritance with the property strategy taking an InheritanceType that has the following options:
- SINGLE_TABLE: This strategy uses a single table that contains all columns needed to store all entity types of the entire inheritance hierarchy. If a column is not needed for an entity because of its type, there is a null value in this column. An additional column is introduced, which denotes the type of the entity (called dtype).
- TABLE_PER_CLASS: For each concrete entity class there is a table in the database that can store such an entity with all its attributes. An entity is only saved in the table corresponding to its most concrete type. To get all entities of a super type, joins are needed.
- JOINED: In this case there is a table for every entity class including abstract classes, which contains only the columns for the persistent properties of that particular class. Additionally there is a primary key column in every table. To get an entity of a class that is a subclass of another one, joins are needed.
Each of the three approaches has its advantages and drawbacks, which are discussed in detail here. In most cases, the first one should be used, because it is usually the fastest way to do the mapping, as no joins are needed when retrieving, searching or persisting entities. Moreover it is rather simple and easy to understand. One major disadvantage is that the first approach could lead to a table with a lot of null values, which might have a negative impact on the database size.
The inheritance strategy has to be annotated to the top-most entity of the class hierarchy (where @MappedSuperclass
classes are not considered) like in the following example:
@Entity
@Inheritance(strategy=InheritanceType.SINGLE_TABLE)
public abstract class MyParentEntity extends ApplicationPersistenceEntity implements MyParent {
...
}
@Entity
public class MyChildEntity extends MyParentEntity implements MyChild {
...
}
@Entity
public class MyOtherEntity extends MyParentEntity implements MyChild {
...
}
As a best practice we advise you to avoid entity hierarchies at all where possible and otherwise to keep the hierarchy as small as possible. In order to just ensure reuse or establish a common API you can consider a shared interface, a @MappedSuperclass
or an @Embeddable
instead of an entity hierarchy.
For each entity, a code unit is created that groups all database operations for that entity. We recommend using spring-data repositories for that, as it is most efficient for developers. As an alternative, there is still the classic approach of using DAOs.
The concurrency control defines the way concurrent access to the same data of a database is handled. When several users (or threads of application servers) concurrently access a database, anomalies may happen, e.g. a transaction is able to see changes from another transaction although that one has not yet committed these changes. Most of these anomalies are automatically prevented by the database system, depending on the isolation level (property hibernate.connection.isolation
in the jpa.xml
, see here, or quarkus.datasource.jdbc.transaction-isolation-level
in the application.properties
).
Another anomaly is when two stakeholders concurrently access a record, do some changes and write them back to the database. The JPA addresses this with different locking strategies (see here).
The class com.devonfw.module.jpa.persistence.api.AbstractPersistenceEntity
already provides optimistic locking via a modificationCounter
with the @Version
annotation. Therefore JPA takes care of optimistic locking for you. When entities are transferred to clients, modified and sent back for update you need to ensure the modificationCounter
is part of the game. If you follow our guides about transfer-objects and services this will also work out of the box.
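For illustration, here is a hedged sketch of what this @Version-based modification counter roughly looks like; you normally do not write this yourself because it is inherited from AbstractPersistenceEntity:
private int modificationCounter;

// incremented automatically by JPA on every update; a stale value on write
// results in an OptimisticLockException
@Version
public int getModificationCounter() {
  return this.modificationCounter;
}

public void setModificationCounter(int modificationCounter) {
  this.modificationCounter = modificationCounter;
}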
You only have to care about two things:
- How to deal with optimistic locking in relationships? Assume an entity A contains a collection of B entities. Should there be a locking conflict if one user modifies an instance of A while another user in parallel modifies an instance of B that is contained in the other instance? To address this, take a look at FeatureForceIncrementModificationCounter.
- What should happen in the UI if an OptimisticLockException occurred?
According to KISS, our recommendation is that the user gets an error displayed telling them to redo their change on the recent data. Try to design your system and the work processing in a way that keeps such conflicts rare and you are fine.
For back-end services and especially for batches, optimistic locking is not suitable. A human user shall not cause a large batch process to fail just because he was editing the same entity. Therefore such use-cases use pessimistic locking, which gives them a kind of priority over the human users.
In your DAO implementation you can provide methods that do pessimistic locking via EntityManager
operations that take a LockModeType
. Here is a simple example:
getEntityManager().lock(entity, LockModeType.READ);
When using the lock(Object, LockModeType)
method with LockModeType.READ
, Hibernate will issue a SELECT … FOR UPDATE
. This means that no one else can update the entity (see here for more information on the statement). If LockModeType.WRITE
is specified, Hibernate issues a SELECT … FOR UPDATE NOWAIT
instead, which has the same meaning as the statement above, but if there is already a lock, the program will not wait for this lock to be released. Instead, an exception is raised.
Use one of these types if you want to modify the entity later on; for read-only access no lock is required.
As you might have noticed, the behavior of Hibernate deviates from what one would expect by looking at the LockModeType
(especially LockModeType.READ
should not cause a SELECT … FOR UPDATE
to be issued). The framework actually deviates from what is specified in the JPA for unknown reasons.
See auditing guide.
For testing of Entities and Repositories or DAOs see testing guide.
We strongly recommend these principles:
- Use the JPA wherever possible and use vendor (Hibernate) specific features only for situations when JPA does not provide a solution. In the latter case, consider first whether you really need the feature.
- Create your entities as simple POJOs and use JPA to annotate the getters in order to define the mapping.
- Keep your entities simple and avoid putting advanced logic into entity methods.
For details on the configuration of the database connection and database logging of the individual framework, please refer to the respective configuration guide.
See database migration.
A common security threat is SQL-injection. Never build queries with string concatenation or your code might be vulnerable as in the following example:
String query = "Select op from OrderPosition op where op.comment = " + userInput;
return getEntityManager().createQuery(query).getResultList();
Via the parameter userInput
an attacker can inject SQL (JPQL) and execute arbitrary statements in the database causing extreme damage.
In order to prevent such injections, you have to strictly follow our rules for queries (a safe variant of the query above is sketched below):
- Use named queries for static queries.
- Use QueryDSL for dynamic queries.
- Please also consult the SQL Injection Prevention Cheat Sheet.
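For comparison, here is a hedged sketch of the vulnerable example above rewritten with a named bind parameter, so the user input is never concatenated into the statement; named queries and QueryDSL achieve the same effect via parameter binding:
// the user input is passed as a bind variable, not as part of the JPQL string
String query = "Select op from OrderPosition op where op.comment = :comment";
return getEntityManager().createQuery(query)
    .setParameter("comment", userInput)
    .getResultList();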
We suggest that you operate your application with a database user that has limited permissions, so it cannot modify the SQL schema (e.g. drop tables). For initializing the schema (DDL) or for schema migrations, use a separate user that is not used by the application itself.