The Boxfuse Java log appender for AWS CloudWatch Logs is a Logback and Log4J2 appender that ships your log events directly and securely to AWS CloudWatch Logs via HTTPS.
All log events are structured and standardized. Each Boxfuse environment maps to an AWS CloudWatch Logs LogGroup which contains one LogStream per application.
More info: https://boxfuse.com/blog/cloudwatch-logs
To include the Boxfuse Java log appender for AWS CloudWatch Logs in your application, all you need to do is add the dependency to your build file.
For Maven, start by adding the Boxfuse Maven repository to your list of repositories in your pom.xml:
<repositories>
    <repository>
        <id>central</id>
        <url>https://repo1.maven.org/maven2/</url>
    </repository>
    <repository>
        <id>boxfuse-repo</id>
        <url>https://files.boxfuse.com</url>
    </repository>
</repositories>
Then add the dependency:
<dependency>
    <groupId>com.boxfuse.cloudwatchlogs</groupId>
    <artifactId>cloudwatchlogs-java-appender</artifactId>
    <version>1.1.9.62</version>
</dependency>
For Gradle, start by adding the Boxfuse Maven repository to your list of repositories in your build.gradle:
repositories {
    mavenCentral()
    maven {
        url "https://files.boxfuse.com"
    }
}
Then add the dependency:
dependencies {
    compile 'com.boxfuse.cloudwatchlogs:cloudwatchlogs-java-appender:1.1.9.62'
}
Besides Logback or Log4J2, this appender also requires the following dependency (declared as a transitive dependency in the pom.xml): com.amazonaws:aws-java-sdk-logs:1.11.143 (or newer)
To use the appender you must add it to the configuration of your logging system.
Add the appender to your logback.xml file at the root of your classpath. In a Maven or Gradle project you can find it under src/main/resources:
<configuration>
    <appender name="Boxfuse-CloudwatchLogs" class="com.boxfuse.cloudwatchlogs.logback.CloudwatchLogsLogbackAppender">
        <!-- Optional config parameters -->
        <config>
            <!-- Whether to fall back to stdout instead of disabling the appender when running outside of a Boxfuse instance. Default: false -->
            <stdoutFallback>false</stdoutFallback>
            <!-- The maximum size of the async log event queue. Default: 1000000.
                 Increase to avoid dropping log events at very high throughput.
                 Decrease to reduce maximum memory usage, at the risk of the occasional dropped log event when the queue fills up. -->
            <maxEventQueueSize>1000000</maxEventQueueSize>
            <!-- The maximum delay in milliseconds before forcing a flush of the buffered log events to CloudWatch Logs. Default: 500. -->
            <maxFlushDelay>500</maxFlushDelay>
            <!-- Custom MDC keys to include in the log events along with their values. -->
            <customMdcKey>my-custom-key</customMdcKey>
            <customMdcKey>my-other-key</customMdcKey>
            <!-- The AWS CloudWatch Logs LogGroup to use. This is determined automatically within Boxfuse environments. -->
            <!--
            <logGroup>my-custom-log-group</logGroup>
            -->
        </config>
    </appender>
    <root level="debug">
        <appender-ref ref="Boxfuse-CloudwatchLogs" />
    </root>
</configuration>
Add the appender to your log4j2.xml file at the root of your classpath. In a Maven or Gradle project you can find it under src/main/resources:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration packages="com.boxfuse.cloudwatchlogs.log4j2">
    <Appenders>
        <Boxfuse-CloudwatchLogs>
            <!-- Optional config parameters -->
            <!-- Whether to fall back to stdout instead of disabling the appender when running outside of a Boxfuse instance. Default: false -->
            <stdoutFallback>false</stdoutFallback>
            <!-- The maximum size of the async log event queue. Default: 1000000.
                 Increase to avoid dropping log events at very high throughput.
                 Decrease to reduce maximum memory usage, at the risk of the occasional dropped log event when the queue fills up. -->
            <maxEventQueueSize>1000000</maxEventQueueSize>
            <!-- The maximum delay in milliseconds before forcing a flush of the buffered log events to CloudWatch Logs. Default: 500. -->
            <maxFlushDelay>500</maxFlushDelay>
            <!-- Custom MDC (ThreadContext) keys to include in the log events along with their values. -->
            <customMdcKey key="my-custom-key"/>
            <customMdcKey key="my-other-key"/>
            <!-- The AWS CloudWatch Logs LogGroup to use. This is determined automatically within Boxfuse environments. -->
            <!--
            <logGroup>my-custom-log-group</logGroup>
            -->
        </Boxfuse-CloudwatchLogs>
    </Appenders>
    <Loggers>
        <Root level="debug">
            <AppenderRef ref="Boxfuse-CloudwatchLogs"/>
        </Root>
    </Loggers>
</Configuration>
All log events are structured and standardized. What this means is that instead of shipping log events as strings like this:
2014-03-05 10:57:51.702 INFO 45469 --- [ost-startStop-1] o.s.b.c.embedded.FilterRegistrationBean : Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
events are shipped as JSON documents with all required metadata:
{
    "image": "myuser/myapp:123",
    "instance": "i-607b5ddc",
    "level": "INFO",
    "logger": "org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping",
    "message": "Mapping filter: 'hiddenHttpMethodFilter' to: [/*]",
    "thread": "main"
}
This has several advantages:
- It cleanly separates presentation and formatting from log event content
- Log events are now machine-searchable, as the filter example below shows
- All log events from all applications now have exactly the same attributes, which enables searches across application boundaries
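For example, with structured events a CloudWatch Logs filter pattern can match on individual JSON attributes, both in the CloudWatch console and with aws logs filter-log-events. A minimal illustration, using the attribute values from the sample event above:
{ $.level = "INFO" && $.thread = "main" }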
When the appender is run within a Boxfuse instance, it will send the log events to the AWS CloudWatch Logs log group for the current Boxfuse environment. Within that log group the events will be placed in the log stream for the current Boxfuse application.
A number of log event attributes are populated automatically when the appender is run within a Boxfuse instance:
- image is the current Boxfuse image
- instance is the current AWS instance id
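As a quick way to verify this mapping, you can list a log group's streams with the AWS CLI. The group name below is only a placeholder; substitute the LogGroup of your Boxfuse environment:
aws logs describe-log-streams --log-group-name <your-environment-log-group>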
When logging a message from your code using SLF4J as follows:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

Logger log = LoggerFactory.getLogger(MyClass.class);
...
log.info("My log message");
the timestamp of the log event is added to its metadata and the following attributes are also automatically extracted:
- level is the log level (INFO in this case)
- logger is the logger used (com.mypkg.MyClass in this case)
- thread is the thread the message was logged from (main for the main application thread)
- message is the actual log message (My log message in this case)
Using an SLF4J marker also makes it much easier to filter for specific event types. The following code:
import org.slf4j.Marker;
import org.slf4j.MarkerFactory;

Logger log = LoggerFactory.getLogger(MyClass.class);
Marker USER_CREATED = MarkerFactory.getMarker("USER_CREATED");
String username = "MyUser";
...
log.info(USER_CREATED, "Created user {}", username);
now also automatically defines an additional log event attribute:
- event is the exact type of the event, making it easy to search and filter for it (USER_CREATED in this case)
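For the marker example above, the shipped event document would then look along these lines (the image and instance values are illustrative):
{
    "image": "myuser/myapp:123",
    "instance": "i-607b5ddc",
    "level": "INFO",
    "logger": "com.mypkg.MyClass",
    "message": "Created user MyUser",
    "event": "USER_CREATED",
    "thread": "main"
}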
Additionally, a number of optional attributes can be defined via the MDC to provide further information about the log event:
- account is the current account in the system
- action is the current action in the system (for grouping all log events related to the same domain-specific thing, like the current order for example)
- user is the user of the account (for systems with the concept of teams or multiple users per account)
- session is the ID of the current session of the user
- request is the ID of the request
They are populated in the MDC as follows:
import org.slf4j.MDC;

// CloudwatchLogsMDCPropertyNames is the constants class shipped with the appender
MDC.put(CloudwatchLogsMDCPropertyNames.ACCOUNT, "MyCurrentAccount");
MDC.put(CloudwatchLogsMDCPropertyNames.ACTION, "order-12345");
MDC.put(CloudwatchLogsMDCPropertyNames.USER, "MyUser");
MDC.put(CloudwatchLogsMDCPropertyNames.SESSION, "session-9876543210");
MDC.put(CloudwatchLogsMDCPropertyNames.REQUEST, "req-111222333");
When processing finishes (after sending out a response, for example) these attributes should be removed again to prevent mix-ups:
MDC.remove(CloudwatchLogsMDCPropertyNames.ACCOUNT);
MDC.remove(CloudwatchLogsMDCPropertyNames.ACTION);
MDC.remove(CloudwatchLogsMDCPropertyNames.USER);
MDC.remove(CloudwatchLogsMDCPropertyNames.SESSION);
MDC.remove(CloudwatchLogsMDCPropertyNames.REQUEST);
In a microservices architecture these attributes should be included in all requests sent between systems, so that each individual service can place them in its MDC and the log events can be correlated later. This is very powerful, as it allows you to retrieve, for example, all the logs pertaining to a specific request across all microservices in your environment.
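As an illustration only (this filter is not part of the appender), here is a minimal sketch of that pattern for a servlet-based service, assuming a hypothetical X-Request-ID header and the assumed import path for the appender's constants class:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import org.slf4j.MDC;
import com.boxfuse.cloudwatchlogs.CloudwatchLogsMDCPropertyNames; // import path assumed

// Hypothetical filter: copies the correlation ID of the incoming request into
// the MDC so the appender attaches it to every log event of this request.
public class RequestIdFilter implements Filter {
    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        String requestId = ((HttpServletRequest) req).getHeader("X-Request-ID");
        if (requestId != null) {
            MDC.put(CloudwatchLogsMDCPropertyNames.REQUEST, requestId);
        }
        try {
            chain.doFilter(req, res);
        } finally {
            // Always clean up, otherwise the ID leaks onto the next request
            // handled by the same pooled thread.
            MDC.remove(CloudwatchLogsMDCPropertyNames.REQUEST);
        }
    }

    @Override public void init(FilterConfig filterConfig) {}
    @Override public void destroy() {}
}

Outgoing calls to downstream services would then add the same header, so every service in the chain logs with the same request attribute.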
The log events are shipped asynchronously on a separate background thread, leaving the performance of your application threads unaffected. To make this possible the appender buffers your messages in a concurrent bounded queue. By default the buffer allows for 1,000,000 messages. If the buffer fills up it will not expand further; this is done to prevent OutOfMemoryErrors. Instead, log events are dropped in a FIFO fashion.
If you are seeing dropped messages without having been affected by AWS CloudWatch Logs availability issues, you should consider increasing maxEventQueueSize in the config to allow more log events to be buffered before any have to be dropped.
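For example, to give the appender more headroom in a Logback setup (the value here is arbitrary; size it to your throughput and memory budget):
<config>
    <maxEventQueueSize>5000000</maxEventQueueSize>
</config>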
Release notes:
- Fixed stdoutFallback handling
- Improved polling logic under high load
- Added optional maxFlushDelay configuration param
- Added optional customMdcKey configuration param
- Added thread name
- Improved polling logic
- Added optional logGroup configuration param
- Fixed: Handling of DataAlreadyAcceptedException
- Prevent creation of AWS CloudWatch Logs client when disabled
- Fixed: Flushing under high load caused maximum batch size to be exceeded
- Fixed: Maximum batch size restored to 1,048,576 bytes
- Added warning when an individual message exceeds the maximum allowed batch size
- Fixed: Reduced maximum batch size to 1,000,000 bytes to avoid occasional batch size exceeded errors
- Fixed: Better handling of temporary network connectivity loss
- Fixed: Exception name is now part of the message along with the stacktrace
- Added stdoutFallback configuration property
- Fixed: Maximum batch size enforcement before flushing events to CloudWatch Logs
- Fixed: Do not let log thread die after an exception / auto-restart if possible
- Fixed: Enforce that all events within a single PutLogEvents call are always chronological
- Initial release
Copyright (C) 2018 Boxfuse GmbH
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.