This library is a set of extensions to the Lagom Java/Scala DSL.

Note: We try not to change the API, but it may change before the stable 1.0.0 release.
| Lagom Extensions | Lagom | Scala |
|---|---|---|
| 0.+ | 1.5.+, 1.6.+ | 2.12, 2.13 |
### MessageProtocols

`MessageProtocols` provides constants for the most commonly used message protocols (`application/json`, `application/json; charset=utf-8`, etc.). See the Javadoc for more information.
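For intuition about what such constants encapsulate: a content-type header like `application/json; charset=utf-8` carries a MIME type plus an optional charset. A plain-Java sketch of that split (the helper methods here are illustrative stand-ins, not the library API):

```java
import java.util.Optional;

public class ContentTypeDemo {
    // Hypothetical helper: extracts the MIME type from a Content-Type header.
    static String mimeType(String contentType) {
        return contentType.split(";")[0].trim();
    }

    // Hypothetical helper: extracts the optional charset parameter.
    static Optional<String> charset(String contentType) {
        for (String part : contentType.split(";")) {
            String p = part.trim();
            if (p.startsWith("charset=")) {
                return Optional.of(p.substring("charset=".length()));
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        String header = "application/json; charset=utf-8";
        System.out.println(mimeType(header));            // application/json
        System.out.println(charset(header).orElse("-")); // utf-8
    }
}
```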
### ResponseHeaders

`ResponseHeaders` provides `ResponseHeader` constants and utility functions for instantiating `Pair<ResponseHeader, T>`.
Code example:
```java
// Lagom
(headers, request) -> {
    ...
    return completedFuture(
        new Pair<>(
            ResponseHeader.OK.withProtocol(MessageProtocol.fromContentTypeHeader(Optional.of("application/json"))),
            result
        )
    );
};

// Lagom Extensions
(headers, request) -> {
    ...
    return completedFuture(okJson(result));
};
```
### SimpleTopicProducer

At the moment, Lagom (1.4.+) doesn't provide any framework-level API for producing records to topics declared in subscriber-only service descriptors. In such cases, the underlying Alpakka Kafka has to be used directly for publishing.
- It is useful to place a `TopicDescriptor` in the subscriber-only service descriptor:
```java
public interface FooTopicService extends Service {

    TopicDescriptor<FooTopicRecord> FOO_TOPIC = TopicDescriptor.of("foo-topic", FooTopicRecord.class);

    Topic<FooTopicRecord> fooTopic();

    @Override
    default Descriptor descriptor() {
        return named("foo-topic-service")
            .withTopics(topic(FOO_TOPIC.getId(), this::fooTopic))
            .withAutoAcl(true);
    }
}
```
In the topic call declaration you may also specify `.withProperty(KafkaProperties.partitionKeyStrategy(), ...)` to support topic record key generation (see the Lagom docs).
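Conceptually, a partition key strategy is just a function from a record to its partitioning key: records with the same key land on the same Kafka partition, which preserves their relative order. A self-contained sketch of that idea (the `FooRecord` class below is an illustrative stand-in, not Lagom's actual type):

```java
import java.util.function.Function;

public class PartitionKeyDemo {
    // Illustrative stand-in for a topic record (not Lagom's actual type).
    static class FooRecord {
        final String entityId;
        final String payload;
        FooRecord(String entityId, String payload) {
            this.entityId = entityId;
            this.payload = payload;
        }
    }

    public static void main(String[] args) {
        // A partition key strategy is essentially record -> key.
        Function<FooRecord, String> partitionKey = r -> r.entityId;

        System.out.println(partitionKey.apply(new FooRecord("entity-42", "hello"))); // entity-42
    }
}
```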
- You should inject `SimpleTopicProducersRegistry` and register producers for the declared topics (other details are intentionally omitted):
```java
public class BarServiceImpl implements BarService {

    private final SimpleTopicProducersRegistry registry;

    @Inject
    public BarServiceImpl(FooTopicService fooTopicService, SimpleTopicProducersRegistry registry) {
        this.registry = registry.register(fooTopicService);
    }
}
```
- Now you are able to retrieve the producer for the desired topic from the registry and publish records easily:
```java
@Override
public ServiceCall<FooTopicRecord, NotUsed> publishToFoo() {
    return fooTopicRecord ->
        registry.get(FooTopicService.FOO_TOPIC).publish(fooTopicRecord)
            .thenApply(x -> NotUsed.getInstance());
}
```
`SimpleTopicProducer` relies on the `akka.kafka.producer` config by default (see Akka producer, Akka source). You may also provide a separate config for each topic producer; in that case, the config path should be `<topic-name>.producer` instead of `akka.kafka.producer`:
```hocon
foo-topic.producer {
  # Tuning parameter of how many sends that can run in parallel.
  parallelism = 100

  # Duration to wait for `KafkaProducer.close` to finish.
  close-timeout = 60s

  # Fully qualified config path which holds the dispatcher configuration
  # to be used by the producer stages. Some blocking may occur.
  # When this value is empty, the dispatcher configured for the stream
  # will be used.
  use-dispatcher = "akka.kafka.default-dispatcher"

  # The time interval to commit a transaction when using the `Transactional.sink` or `Transactional.flow`
  eos-commit-interval = 100ms

  # Size of buffer in element count
  buffer-size = 100

  # Strategy that is used when incoming elements cannot fit inside the buffer.
  # Possible values: "dropHead", "backpressure", "dropBuffer", "dropNew", "dropTail", "fail".
  overflow-strategy = "dropHead"

  # Minimum (initial) duration until the child actor is started again, if it is terminated.
  min-backoff = 3s

  # The exponential back-off is capped to this duration.
  max-backoff = 30s

  # After calculation of the exponential back-off an additional random delay based on this factor is added,
  # e.g. 0.2 adds up to 20% delay. In order to skip this additional delay pass in 0.
  random-factor = 0.2

  # Properties defined by org.apache.kafka.clients.producer.ProducerConfig
  # can be defined in this configuration section.
  kafka-clients {
  }
}
```
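The backoff parameters above interact as follows: the restart delay starts at `min-backoff`, doubles on each consecutive failure, and is capped at `max-backoff`, with up to `random-factor` extra jitter added on top. A quick illustration of the deterministic part (arithmetic for intuition only, not Akka's actual implementation):

```java
import java.time.Duration;

public class BackoffDemo {
    // Exponential backoff capped at maxBackoff: min * 2^restartCount, capped.
    // (Illustrative only; Akka additionally adds random jitter.)
    static Duration backoff(Duration min, Duration max, int restartCount) {
        double millis = min.toMillis() * Math.pow(2, restartCount);
        return millis >= max.toMillis() ? max : Duration.ofMillis((long) millis);
    }

    public static void main(String[] args) {
        Duration min = Duration.ofSeconds(3);
        Duration max = Duration.ofSeconds(30);
        for (int i = 0; i < 6; i++) {
            // restarts 0..5 -> 3s, 6s, 12s, 24s, 30s, 30s
            System.out.println(backoff(min, max, i).getSeconds() + "s");
        }
    }
}
```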
- You can also use the `serviceName` property to look up bootstrap servers via Lagom's `ServiceLocator`, and customize the topic name with the `topic-name` property (useful for following naming conventions across different environments):
```hocon
foo-topic {
  serviceName = "kafka_native"
  topic-name = "foo-topic-envXY"
}
```
### ConfiguredAhcWSClient

Unfortunately, Lagom doesn't support request/response logging for strict client HTTP calls out of the box. `ConfiguredAhcWSClient` is a simple custom implementation of `play.api.libs.ws.WSClient`, which Lagom uses to perform strict client HTTP calls. It allows you to enable request/response logging (including bodies). Logging can be enabled in your `application.conf` as follows:
```hocon
configured-ahc-ws-client.logging.enabled = true
```
Also, you can exclude some URLs by specifying a list of matching regexps:

```hocon
configured-ahc-ws-client.logging.skip-urls = ["(foo|bar)\\.acme\\.com/some/path"]
```
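The entries are regular expressions matched against the request URL. A quick stdlib check of how the example pattern behaves (here we assume substring matching in the style of `Matcher.find`; this only illustrates the regex itself, not the library's matching code):

```java
import java.util.regex.Pattern;

public class SkipUrlDemo {
    public static void main(String[] args) {
        // The same pattern as in the config example above.
        Pattern skip = Pattern.compile("(foo|bar)\\.acme\\.com/some/path");

        System.out.println(skip.matcher("https://foo.acme.com/some/path").find()); // true
        System.out.println(skip.matcher("https://bar.acme.com/some/path").find()); // true
        System.out.println(skip.matcher("https://baz.acme.com/some/path").find()); // false
    }
}
```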
Enjoy!
### CoroutineService

Using `CoroutineService` you can implement service calls with coroutines. Example:
```kotlin
class TestService @Inject constructor(actorSystem: ActorSystem) : Service, CoroutineService {

    override val dispatcher: CoroutineDispatcher = actorSystem.dispatcher.asCoroutineDispatcher()

    private fun testMethod(): ServiceCall<NotUsed, String> = serviceCall {
        "Hello, from coroutine!"
    }

    override fun descriptor(): Descriptor {
        return Service.named("test-service")
            .withCalls(
                Service.restCall<NotUsed, String>(Method.GET, "/test", TestService::testMethod.javaMethod)
            )
    }
}
```
You must define the `CoroutineDispatcher` on which the coroutines will run. Typically, you should use the Akka default execution context.
### CoroutineSecuredService

`CoroutineSecuredService` allows you to execute authorized requests using `org.pac4j.lagom`. Example:
```kotlin
class TestService @Inject constructor(actorSystem: ActorSystem) : Service, CoroutineSecuredService {

    override fun getSecurityConfig(): Config {
        TODO("Return security config")
    }

    override val dispatcher: CoroutineDispatcher = actorSystem.dispatcher.asCoroutineDispatcher()

    private fun testMethod(): ServiceCall<NotUsed, String> = authenticatedServiceCall { request, profile ->
        "Hello, from coroutine!"
    }

    override fun descriptor(): Descriptor {
        return Service.named("test-service")
            .withCalls(
                Service.restCall<NotUsed, String>(Method.GET, "/test", TestService::testMethod.javaMethod)
            )
    }
}
```
It is also possible to set the coroutine context by overriding the `context` property. This allows you to supply a `CoroutineContext.Element`. Example of changing the name of a coroutine:
```kotlin
class TestService @Inject constructor(actorSystem: ActorSystem) : Service, CoroutineService {

    override val dispatcher: CoroutineDispatcher = actorSystem.dispatcher.asCoroutineDispatcher()

    override val context: CoroutineContext = CoroutineName("custom-coroutine-name")

    private fun testMethod(): ServiceCall<NotUsed, String> = serviceCall {
        "Hello, from coroutine!"
    }

    override fun descriptor(): Descriptor {
        return Service.named("test-service")
            .withCalls(
                Service.restCall<NotUsed, String>(Method.GET, "/test", TestService::testMethod.javaMethod)
            )
    }
}
```
### AsyncCacheApi

`org.taymyr.lagom.kotlindsl.cache.AsyncCacheApi` allows using the methods of `play.cache.AsyncCacheApi` from suspend functions. To use it, call `org.taymyr.lagom.kotlindsl.cache.AsyncCacheApiKt#suspend`. Example:
```kotlin
class TestCache @Inject constructor(playCache: play.cache.AsyncCacheApi) {

    private val cacheApi = playCache.suspend()

    suspend fun cacheSomeData(someData: String) {
        cacheApi.set("key", someData)
        cacheApi.set("key", Duration.ofSeconds(10), someData)
        cacheApi.getOrElseUpdate("key") { someData }
        cacheApi.getOrElseUpdate("key", Duration.ofSeconds(10)) { someData }
        val cacheValue = cacheApi.get<String>("key")
        cacheApi.remove("key")
        cacheApi.removeAll()
    }
}
```
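The `getOrElseUpdate` call above follows the usual cache contract: return the cached value if the key is present, otherwise compute the value, store it, and return it. A plain-Java sketch of that contract, ignoring expiry and concurrency (illustrative only, not Play's implementation):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class CacheContractDemo {
    private final Map<String, Object> store = new HashMap<>();

    // Illustrative mirror of the getOrElseUpdate contract:
    // return the cached value if present, otherwise compute, cache, and return it.
    @SuppressWarnings("unchecked")
    <T> T getOrElseUpdate(String key, Supplier<T> compute) {
        return (T) store.computeIfAbsent(key, k -> compute.get());
    }

    public static void main(String[] args) {
        CacheContractDemo cache = new CacheContractDemo();
        System.out.println(cache.<String>getOrElseUpdate("key", () -> "first"));  // first
        System.out.println(cache.<String>getOrElseUpdate("key", () -> "second")); // first (already cached)
    }
}
```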
Supported cache implementations:
### KotlinJsonSerializer

Using `KotlinJsonSerializer` you can serialize/deserialize service messages with kotlinx-serialization. Serializable classes must be annotated with `kotlinx.serialization.Serializable`; otherwise an `IllegalArgumentException` will be thrown when the service starts.
To create a `KotlinJsonSerializer`, use the `KotlinJsonSerializer.serializer` function.
To set the message serializer, use the `withKotlinJsonSerializer` extension function on `Descriptor`.
However, this function is not intended for parameterized types, since Lagom would use a single serializer for all variants; calling `withKotlinJsonSerializer` with a parameterized type therefore throws an `UnsupportedOperationException`.
For parameterized types (and not only them), use the `withRequestKotlinJsonSerializer` and `withResponseKotlinJsonSerializer` extension functions on `Descriptor.Call`.
Example:
```kotlin
@Serializable
data class TestData(
    val field1: String,
    val field2: Int
)

@Serializable
data class TestGenericData<T>(val data: T)

val json = Json { ignoreUnknownKeys = true }

interface TestService : Service {

    fun testSerialization(): ServiceCall<TestData, TestData>

    fun testGenericSerialization(): ServiceCall<TestGenericData<TestData>, TestGenericData<TestData>>

    override fun descriptor(): Descriptor = named("test-service").withCalls(
        restCall<TestData, TestData>(
            Method.POST,
            "/api/test/serialization",
            TestService::testSerialization.javaMethod
        ),
        restCall<TestGenericData<TestData>, TestGenericData<TestData>>(
            Method.POST,
            "/api/test/serialization/generic",
            TestService::testGenericSerialization.javaMethod
        ).withRequestKotlinJsonSerializer(json)
            .withResponseKotlinJsonSerializer(json),
    ).withKotlinJsonSerializer<TestData>(json)
}
```
All released artifacts are available in the Maven Central repository. Just add `lagom-extensions` to your service dependencies:
- SBT
```scala
libraryDependencies += "org.taymyr.lagom" %% "lagom-extensions-java" % "X.Y.Z"
```
- Maven
```xml
<dependency>
    <groupId>org.taymyr.lagom</groupId>
    <artifactId>lagom-extensions-java_${scala.binary.version}</artifactId>
    <version>X.Y.Z</version>
</dependency>
```
All snapshot artifacts are available in the Sonatype snapshots repository. This repository must be added to your build system.
- SBT
```scala
resolvers += Resolver.sonatypeRepo("snapshots")
```
- Maven
```xml
<repositories>
    <repository>
        <id>snapshots-repo</id>
        <url>https://oss.sonatype.org/content/repositories/snapshots</url>
        <releases><enabled>false</enabled></releases>
        <snapshots><enabled>true</enabled></snapshots>
    </repository>
</repositories>
```
Contributions are very welcome.
Copyright © 2018-2020 Digital Economy League (https://www.digitalleague.ru/).
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this project except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0.
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.