Semla is a lightweight library driven by the Java Persistence API. It supports most of the features required to persist, query, and serialize/deserialize entities, as well as dependency injection.
It could be seen as Hibernate + Jackson + Guava + Guice, all in one.
Using reflection and static/dynamic source generation, it provides fluent, typed interfaces that can be used as DAOs. The query language is independent of the storage vendor and stays the same if you migrate from one database vendor to another.
One of the biggest differences with other JPA frameworks is that Semla has no persistence context: every object you get back is ready to use and won't introduce side effects caused by uninitialized proxies.
Semla is fully extensible and comes with the following Maven modules:
- semla-common: common library including a lot of utils as well as the json and yaml serializers
- semla-common-test: common test library based on tzatziki
- semla-inject: dependency injection library.
- semla-jpa: the base JPA library.
- semla-jpa-test: test library for any datasource based on the JPA module
- semla-jdbi: generic SQL database support using jdbi
- semla-logging: Logging support using logback
- semla-maven-plugin: maven plugin to generate typed interfaces
- semla-memcached: memcached support using spymemcached
- semla-mongodb: Mongodb support using mongo-java-driver
- semla-mysql: Mysql support extending semla-jdbi
- semla-postgresql: Postgresql support extending semla-jdbi
- semla-redis: Redis support using jedis
- semla-graphql: GraphQL support and auto-generated schema from your entities using graphql-java
- semla-jackson: module to allow using semla's serializer/deserializer in jackson
The examples below use MySQL, but you can replace the module with the vendor of your choice!
Get it from Maven Central:
<dependency>
    <groupId>io.semla</groupId>
    <artifactId>semla-mysql</artifactId>
    <version>1.x.x</version>
    <scope>compile</scope>
</dependency>
Semla uses names very similar to those used by JPA, but their usage and interfaces might differ a bit, for example:
- io.semla.datasource.Datasource<T> is the low level datasource translating the query to the vendor API
- io.semla.persistence.EntityManager<T> is the class implementing all the query logic
- io.semla.persistence.EntityManagerFactory is the class generating the EntityManagers
- io.semla.persistence.PersistenceContext is local to a user query and will keep track of which entities and relations have already been fetched.
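To make these relationships concrete, here is a minimal sketch of how the main pieces are obtained from a configured Semla instance (everything shown here is covered in detail below):

// a sketch: all of these types are reachable from a configured Semla instance (see the configuration section below)
EntityManagerFactory factory = semla.getInstance(EntityManagerFactory.class);
EntityManager<User> users = factory.of(User.class);  // generic query logic for the User entity
PersistenceContext context = factory.newContext();   // tracks what has already been fetched for one user query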
Semla comes with a plugin to generate typed EntityManagers extending io.semla.persistence.TypedEntityManager and having type-safe methods for all the properties of your types.
Given that you annotate a User class with io.semla.persistence.annotations.Managed and that you add this plugin to your project:
<plugin>
    <groupId>io.semla</groupId>
    <artifactId>semla-maven-plugin</artifactId>
    <version>1.x.x</version>
    <configuration>
        <sources>
            <source>/src/main/java/package/of/your/model/**</source>
        </sources>
    </configuration>
    <executions>
        <execution>
            <phase>generate-sources</phase>
            <goals>
                <goal>generate</goal>
            </goals>
        </execution>
    </executions>
</plugin>
Then running mvn generate-sources should generate a new class UserManager extending TypedEntityManager.
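The generated manager gives you type-safe builders and filters; as a quick preview of what it looks like in use (these calls are shown in context in the sections below, assuming a User entity with a name property):

UserManager userManager = semla.getInstance(UserManager.class);
User bob = userManager.newUser("bob").create();                        // typed creation builder
Optional<User> first = userManager.where().name().is("bob").first();  // typed filtering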
The main class is io.semla.Semla, which can be configured, for example, with a default MySQL datasource:
Semla semla = Semla.configure()
    .withDefaultDatasource(MysqlDatasource.configure()
        .withJdbcUrl("url")
        .withUsername("username")
        .withPassword("password"))
    .create();
A datasource configuration can also be shared by a set of entities:
Semla semla = Semla.configure()
    .withDatasourceOf(User.class, Group.class)
        .as(MysqlDatasource.configure()
            .withJdbcUrl("url")
            .withUsername("username")
            .withPassword("password"))
    .withDatasourceOf(Cache.class)
        .as(RedisDatasource.configure()
            .withHost("1.2.3.4"))
    .create();
Or directly a specific datasource:
Semla semla = Semla.configure()
    .withDatasource(MysqlDatasource.configure()
        .withJdbcUrl("url")
        .withUsername("username")
        .withPassword("password")
        .create(EntityModel.of(User.class)))
    .create();
Semla can easily mix different datasources and recursively query them. You can even write a Datasource for your favorite vendor if it's not already supported!
By default, the following implementations are included in the library:
- Postgresql
- MySQL
- MongoDB
- Redis
- Memcached
As well as some useful datasources:
- InMemoryDatasource: useful for prototyping, it is a non-expiring in-memory relational datasource backed by a HashMap.
- SoftKeyValueDatasource: SoftHashMap backed datasource that can be used for caching.
- KeyValueDatasource: NoSQL interface to extend in other Datasources (like memcached or redis)
- CachedDatasource: two-layer datasource using a KeyValueDatasource as a cache layer
- MasterSlaveDatasource: "write one, read all" replicated datasource, to use for example with a MySQL cluster.
- ReadOneWriteAllDatasource: when you want replication to be handled by Semla.
- ShardedDatasource: shards on primary key and automatically rebalances if a shard is added.
Semla will create a model for each type it manages, mostly holding instances of everything obtained through reflection.
If the type is annotated with javax.persistence.Entity, it will create an io.semla.model.EntityModel that will also contain information about the relational and column annotations present on the type.
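For illustration, the model built for an annotated entity can be obtained statically, as already used in the datasource configuration example above (the exact return type shown here is an assumption):

EntityModel<User> userModel = EntityModel.of(User.class); // EntityModel.of(...) also appears in the configuration example above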
Semla packs its own dependency injection framework, which can be set up as part of the Semla configuration:
Semla semla = Semla.configure()
    .withBindings(binder -> binder
        .bind(String.class).named("applicationName").to("myAwesomeService")
    )
    .create();
Bindings can also be organized in modules through the io.semla.inject.Module class:
Semla semla = Semla.configure()
.withModules(new YourCustomModule())
.create();
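A minimal sketch of such a module, assuming io.semla.inject.Module exposes a single configure(Binder) method (check the actual contract in the semla-inject module):

// hypothetical module; the configure(Binder) signature is an assumption
public class YourCustomModule implements Module {

    @Override
    public void configure(Binder binder) {
        binder.bind(String.class).named("applicationName").to("myAwesomeService");
    }
}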
Explicit binding can be required with:
Semla semla = Semla.configure()
.withBindings(Binder::requireExplicitBinding)
.create();
Multibinding can be achieved with:
Semla semla = Semla.configure()
    .withBindings(binder -> binder
        .multiBind(Action.class).named("actions").add(ActionA.class)
        .multiBind(Action.class).named("actions").add(Lists.of(ActionB.class)) // annotated
        .multiBind(Action.class).named("actions").add(new ActionC()) // will always return the same instance
    )
    .create();

// actions will contain a new instance of ActionA, of ActionB and the implicit singleton of ActionC
Set<Action> actions = injector.getInstance(Types.parameterized(Set.class).of(Action.class), Annotations.named("actions"));
You can intercept an injection (for debugging or testing purposes):
Semla semla = Semla.configure()
    .withBindings(binder -> binder
        .intercept(SomeObject.class).with(someObject -> {
            // do something with the object or swap it for another one
            return someObject;
        }))
    .create();
All the injector methods are available on the semla instance for convenience:
semla.getInstance(EntityManagerFactory.class);
semla.getInstance(new TypeReference<EntityManager<User>>(){});
semla.getInstance(YourType.class);
semla.inject(yourInstance);
And if you are not interested in the entity management part of Semla, you can include solely the semla-inject module and create the injector manually:
Injector injector = SemlaInjector.create(
binder -> binder.bind(YourType.class).to(yourInstance));
Factories are used by the injector to create all the instances and hold the singletons. A factory must implement the io.semla.inject.Factory interface.
Three singleton factories are preconfigured:
- io.semla.datasource.DatasourceFactory: creates and holds all the io.semla.datasource.Datasource<T> instances (one per type)
- io.semla.persistence.EntityManagerFactory: creates and holds all the generic io.semla.persistence.EntityManager<T> instances
- io.semla.persistence.TypedEntityManagerFactory: creates and holds all the io.semla.persistence.TypedEntityManager implementations.
Let's consider the following two classes:
@Entity
@Managed
public class User {

    @Id
    @GeneratedValue
    public int id;

    @NotNull
    public String name;

    @ManyToOne
    public Group group;
}

@Entity
@Managed
public class Group {

    @Id
    @GeneratedValue
    public int id;

    @NotNull
    public String name;

    @OneToMany(mappedBy = "group")
    public List<User> users;
}
Once your factory is configured, you can get an io.semla.persistence.EntityManager instance:
EntityManager<User> userManager = semla.getInstance(EntityManagerFactory.class).of(User.class);
This is a generic entity manager that will let you manipulate your entities and query your datasource.
However, if you have run the maven plugin to generate your TypedEntityManager classes, these two typed managers are available:
UserManager userManager = semla.getInstance(UserManager.class);
GroupManager groupManager = semla.getInstance(GroupManager.class);
You can use either the generic or the generated manager to query your entities. Since the latter is mostly a wrapper around the former, their behaviour is the same.
The methods on the generic EntityManager are the same, but they take String parameters in place of field names and enum values.
To manipulate your entities, the following operations are available:
Group defaultGroup = groupManager.newGroup("default").create();
User user = userManager.newUser("bob").group(defaultGroup).create();
Optional<User> user = userManager.get(1);
Map<Integer, User> users = userManager.get(1, 2, 3); // values not found will be returned as null in the map
You can either update a modified entity:
user.name = "tom";
userManager.update(user);
Or patch it directly through the manager:
userManager.set().name("tom").where().id().is(1).patch();
boolean deleted = userManager.delete(1);
long deleted = userManager.delete(1, 2, 3);
long deleted = userManager.delete(Lists.of(1, 2, 3));
long deleted = userManager.where().name().is("bob").delete();
long deleted = userManager.where().name().in("bob", "tom").delete();
Optional<User> user = userManager.where().name().is("bob").first();
List<User> users = userManager.where().name().like("b.*").list();
long count = userManager.where().name().like("b.*").count();
Semla supports all the relations defined by the JPA annotations, so sub-entities can easily be fetched in the same query:
List<Group> groups = groupManager.list(group -> group.users());
Optional<User> bob = userManager.where().name().is("bob").first(user -> user.group());
Note that we pass a function as a parameter. The query can be read as: get the first user named bob, and for this user get its group.
Relations can be traversed in both directions. For example, we can fetch all the users in Bob's group:
List<User> users = userManager.where().name().is("bob")
.first(user -> user.group(group -> group.users()))
.get().group.users;
Semla will expose an async() method whenever it can be applied, usually just before the method you would otherwise call.
The type returned by the async() method contains the same methods and parameters as its synchronous equivalent, but they will all return a CompletionStage of the result.
For example:
userManager.where().name().is("bob")
.async()
.list(user -> user.group(group -> group.users()))
.thenAccept(users -> ...)
userManager.async().get(1).thenApply(user -> ...)
CompletionStage<Long> count = userManager.async().count();
By default, all the asynchronous queries will be run on the common ForkJoinPool.
To avoid thread depletion when running blocking calls, Semla uses the ManagedBlocker interface so that the ForkJoinPool elastically grows up to 256 threads before queueing the extra jobs.
This behaviour can be tweaked by providing your own ExecutorService with:
Async.setDefaultExecutorService(yourExecutorService)
Note: if you provide your own instance of a ForkJoinPool, it will also be extended to follow the demand of blocking threads; the parallelism parameter will not be honored.
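For example, to use a dedicated fixed-size pool instead (the pool size here is arbitrary):

Async.setDefaultExecutorService(Executors.newFixedThreadPool(32)); // any java.util.concurrent.ExecutorService works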
To select entities, the following predicates are available:
- is(Object object)
- not(Object object)
- in(Object[] objects)
- in(Object object, Object... objects)
- notIn(Object[] objects)
- notIn(Object object, Object... objects)
- greaterOrEquals(Number number)
- greaterThan(Number number)
- lessOrEquals(Number number)
- lessThan(Number number)
- like(String pattern)
- notLike(String pattern)
- contains(String pattern)
- doesNotContain(String pattern)
- containedIn(String pattern)
- notContainedIn(String pattern)
They can be chained to make a query filter:
List<User> users = userManager.where().name().like("b.*").and().id().lessThan(10).list();
Semla comes with its own simple query language that maps directly to the executed queries.
Query<Group, Optional<Group>> query = Query.<Group, Optional<Group>>parse("get the group where id is 1 including its users");
Optional<Group> group = query.in(entityManagerFactory.newContext());
It is mostly used by the tests and for debugging, as it allows reparsing the queries printed in the logs.
Every query is thus mapped to a human-readable expression; for example, the above query would output:
DEBUG [i.s.p.EntityManager] executing: list all the users where group is 1 ordered by id took 0.130142ms and returned [{id: 1, name: bob, group: 1}]
DEBUG [i.s.p.EntityManager] executing: get the group where id is 1 including its users took 0.196899ms and returned {id: 1, name: admin, users: [{id: 1, name: bob, group: 1}]}
Entities can be ordered using:
List<User> users = userManager.orderedBy(name().desc()).startAt(10).limitTo(30).list();
If the injector is configured to use a Cache:
Semla semla = Semla.configure()
    .withBindings(binder -> binder
        .bind(Cache.class).to(MemcachedDatasource.configure().withHosts("ip:port").asCache())
    )
    .create();
Then you can easily cache all the read queries with:
userManager.where().name().is("bob").cachedFor(Duration.ofMinutes(3)).first();
userManager.cachedFor(Duration.ofMinutes(3)).get(1);
To manually refresh the cache:
userManager.where().name().is("bob").invalidateCache().cachedFor(Duration.ofMinutes(3)).first();
Or evict it:
userManager.where().name().is("bob").evictCache().first(); // this returns void
You can also use your cache for custom queries:
long users = semla.getInstance(Cache.class).get("onlineUsers", () -> computeUserCounts(), Duration.ofMinutes(1));
If you need multiple caches backed by different datasources, you should name them:
Semla semla = Semla.configure()
    .withBindings(binder -> binder
        .bind(Cache.class).named("shared").to(MemcachedDatasource.configure().withHosts("ip:port").asCache())
    )
    .create();

semla.getInstance(Cache.class, Annotations.named("shared")).get(...);
All the datasources can be used as a cache, even the SQL ones.
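For example, assuming the SQL configurators also expose asCache() like the Memcached one above, a MySQL-backed cache could be bound like this:

Semla semla = Semla.configure()
    .withBindings(binder -> binder
        .bind(Cache.class).to(MysqlDatasource.configure()
            .withJdbcUrl("url")
            .withUsername("username")
            .withPassword("password")
            .asCache()) // asCache() on the MySQL configurator is assumed here
    )
    .create();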
If @StrictIndices is added to the class, then only the primary key and the explicitly indexed properties will be queryable. The typed manager will not have the non-indexed methods, and the generic manager will reject such queries at runtime.
Indices on columns can be defined on the class as:
@StrictIndices
@Indices(
@Index(name = "idx_name_value", properties = {"name", "value"}, unique = true)
)
public class YourEntity...
Or directly on the field:
@Indexed(unique = false)
public String name;
Semla includes both a JSON and a YAML serializer/deserializer. Available as singletons through the Json and Yaml classes, they are thread-safe and can take options directly as parameters. However, if you want those options to be applied by default, you can either configure the default instances or create your own instance locally.
Here are some usage examples:
List<Integer> list = Json.read("[1,2,3,4,5]");
List<Integer> list = Json.read("[1,2,3,4,5]", LinkedList.class);
Set<Integer> set = Json.read("[1,2,3,4,5]", new TypeReference<LinkedHashSet<Integer>>(){});
Map<String, Integer> map = Yaml.read(inputStream);
String content = Json.write(list);
String content = Yaml.write(list);
String content = Json.write(list, JsonSerializer.PRETTY); // enable pretty serialization only for this method call
Json.defaultSerializer().defaultOptions().add(JsonSerializer.PRETTY); // enable pretty serialization for all
While less configurable than Jackson, it should be sufficient for most projects. Current options are:
| option | description |
|---|---|
| YamlSerializer.NO_BREAK | will not split the yaml at 80 columns |
| JsonSerializer.PRETTY | indented pretty json |
| Deserializer.IGNORE_UNKNOWN_PROPERTIES | will ignore unknown properties instead of throwing an exception |
| Deserializer.UNWRAP_STRINGS | will unwrap string properties if the expected type is something else |
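For example, one of these options can also be passed for a single call (assuming the read methods accept trailing options the same way write does above):

// hypothetical: ignoring unknown properties for this call only
User user = Json.read(content, User.class, Deserializer.IGNORE_UNKNOWN_PROPERTIES);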
However, contrary to Jackson, it does support references and anchors, as well as including sub-files through the !include tag:
data:
  <<: !include base.yaml
  more: value
Field serialization/deserialization can be controlled with the @Serialize and @Deserialize annotations.
By default, all getters/setters with matching fields are serialized/deserialized. Chained setters are also supported (i.e. public T withName(String value)). Regular methods have to be explicitly annotated to be serialized/deserialized.
Relational graphs are handled natively, so references to values should be preserved after deserialization.
An enum When is also available to serialize/deserialize only in some cases; the supported values are ALWAYS, NEVER, NOT_NULL, NOT_EMPTY and NOT_DEFAULT.
For example:
public class Character {

    private String internalName;

    @Serialize(When.NOT_NULL)
    public String alias;

    @Serialize(as = "name")
    public String name() {
        return internalName;
    }

    @Deserialize(from = "name")
    public Character withName(String name) {
        this.internalName = name;
        // do something with the name
        return this;
    }
}
Finally, polymorphism is supported via the @TypeInfo(property = "type") and @TypeName("typename") annotations, for example:
@TypeInfo // type is the default value
public abstract class Character {

    public String name;
}

@TypeName("hero")
public class Hero extends Character {
}
The Hero type needs to be registered:
Types.registerSubTypes(Hero.class);
Then it can be serialized and deserialized properly:
List<Character> characters = Yaml.read(
    "- type: hero\n" +
    "  name: Luke\n" +
    "- type: hero\n" +
    "  name: Leia",
    Types.parameterized(List.class).of(Character.class)
);
Note: subtypes can also be deserialized from their typenames only:
List<Character> characters = Yaml.read("[hero, hero]", Types.parameterized(List.class).of(Character.class)); // this will return 2 default heroes
semla-logging provides a nice wrapper around Logback.
Setting the log level in your application or tests is as simple as:
Logging.setTo(Level.ERROR);
However, you can also customize the logger:
Logging
    .withLogLevel(Level.INFO) // set the default log level to INFO
    .withAppenderLevel("io.semla", Level.ALL) // but a specific appender to ALL
    .withPattern("%-5p [%t]: %m%n")
    .setup();
Capture all your logs to a specific appender:
ListAppender listAppender = new ListAppender();
Logging.withAppender(listAppender).noConsole().withPattern("%-5p [%t]: %m%n").setup();
Or log to a file, optionally rolling:
Logging.configure()
    .withPattern("%-5p [%t]: %m%n")
    .noConsole()
    .withFileAppender()
    .withLogFilename("test.log")
    // if you want to keep the last 30 days
    .keep(30).withLogFilenamePattern("test-%d.log.gz")
    .setup();
The semla-graphql module provides support for GraphQL. You can enable it by adding the dependency to your project and the GraphQLModule module to your configuration:
Semla semla = Semla.configure()
.withModules(new GraphQLModule())
.create();
This will make a GraphQL and a GraphQLProvider instance available in your injector. The GraphQL instance will be configured with the base schema for all your entities, so you should be able to access your database right away.
The generated schema is available through:
String schema = semla.getInstance(GraphQLProvider.class).getSchema();
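And since the GraphQL instance is the graphql-java engine, queries can be executed directly against it; the query below is only illustrative, as the exact field names depend on the schema generated from your entities:

ExecutionResult result = semla.getInstance(GraphQL.class).execute("{ users { id name } }"); // illustrative query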
See the tests and the configuration for more examples, or for how to add your own queries, types and mutations to the base schema.
Check out https://github.com/mimfgg/semla-examples for more examples!