Clog - a library for integrating slf4j with Spring Shell, with colorization

Spring Shell is a fantastic piece of kit for building commandline applications.

In this post I share some ways we can integrate slf4j into a Spring Shell application. This lets us avoid peppering our code with System.out.println(), avoid untidy string concatenation by using slf4j’s {} string interpolation, and retain colorization through a neat abstraction over markers, so that ANSI color constants don’t leak all over our code.

When I was building my application, all the tutorials that explained how to write back to the user did so using System.out.println(). At first I thought this was maybe a stop-gap, and that there was some other way in the library to write back to the user. Writing to stdout felt wrong; we were always taught not to do it, and to use a logging framework if we wanted to log something. Right? Well, technically we’re not logging, we’re talking to the user. Even so, there are other reasons not to use stdout.

For a start, how do you test it? How do you assert what was emitted to the user? Intercepting stdout in a unit test, checking what’s being written, and then forwarding it on is a royal pain. On top of that, it’s just plain ugly. Sure, you could wrap it up in a method and call the method instead, but there still feels like something dirty about writing to stdout directly. This is a command line application, and ultimately something has to write to stdout, but I don’t feel that it should be done at the application level.

So what were all the reasons for not liking stdout?

  • It’s not easily tested in unit tests.
  • Direct inline usage would pepper the code with System.out.println(""), which is rather verbose.
  • It has an ugly concatenation syntax, and the alternative of wrapping every string in String.format() is not very pretty either.
  • How would I colorize things? ANSI.color everywhere?
  • It just feels nasty.
  • If I wanted to log a response, I would need an additional log statement for each print statement.
  • You can’t easily make the output to stdout less verbose via configuration.
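To make the testability point concrete, here is a minimal sketch (all names here are illustrative, not from any library) of how routing user-facing output through an injected abstraction makes it trivial to assert in a unit test:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: a tiny output abstraction plus a recording fake for tests.
interface Printer {
    void print(String message);
}

// Production code would use an implementation that writes to stdout (or a logger);
// tests inject this recording fake and assert on what was emitted.
class RecordingPrinter implements Printer {
    final List<String> lines = new ArrayList<>();
    public void print(String message) {
        lines.add(message);
    }
}

class GreetingCommand {
    private final Printer printer;
    GreetingCommand(Printer printer) {
        this.printer = printer;
    }
    void greet(String name) {
        printer.print("Hello " + name);
    }
}
```

A test can now construct GreetingCommand with a RecordingPrinter and assert on the captured lines, with no stdout interception in sight.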

Let us consider slf4j and how it might solve the above issues.

  • Instead of duplicating output in stdout and logger, just log once and set an additional STDOUT appender.
  • An slf4j logger could be injected in unit tests; better yet, so could an object that wraps it.
  • A call like is slightly shorter than System.out.println().
  • Invoking a logger feels more natural.
  • Setting a logger level could make the output more or less verbose as necessary.
  • Colorization can be done with Markers (we’ll see shortly).

We can see that slf4j solves some of the problems. One outstanding problem is ensuring that if we change our design, we won’t have to change everything in our code. It looks like we need a flexible abstraction: clear enough to be understood, but terse enough not to clutter up our code; an abstraction that can hide the implementation details of how we use slf4j to colorize output, and that can be injected into our classes.

Let’s start with our abstraction: an interface called Out.

  • Try to mimic the slf4j interface.
  • Create a method for each color we might use.
  • Create a generic method to allow us to extend.
  • Take varargs that can be interpolated into the message.

public interface Out {
    void green(String message, Object... loggables);
    void magenta(String message, Object... loggables);
    void white(String message, Object... loggables);
    void color(OutputColor color, String message, Object... loggables);
    Out info();
    void info(String message, Object... loggables);
    Out debug();
    void debug(String message, Object... loggables);
    Out warn();
    void warn(String message, Object... loggables);
    Out error();
    void error(String message, Object... loggables);
    Out level(LogLevel logLevel);
}

We would set the default log level to be something sensible like info.

Here are a few examples

private static final Out out = SimpleOut.getOut(MyClass.class);

//output green at default info level
"How many words in this green info text? {}", 7);

//warning without specifying color (yellow by default)
out.warn("This warning would be in yellow by default");

//warning level message but in red.
out.warn().red("However this warning would be in red");

//custom color at info level
out.color(customcolor(), "message {} {}", param1, param2);

//custom pink color at error level
out.error().color(pink(), "message {} {}", param1, param2);

//custom level with custom color for the ultimate cheese on toast.
out.level(trace()).color(pink(), "message {} {}", param1, param2);

We could implement these classes as follows:

public class SimpleOut implements Out {
    public enum Color {
        RED, YELLOW, GREEN, WHITE
    }

    private final Class<?> clazz;
    private final LogLevel logLevel;
    private final Logger logger;

    public static Out getOut(Class<?> clazz) {
        return new SimpleOut(clazz, INFO);
    }

    public static Out getOut(Class<?> clazz, LogLevel logLevel) {
        return new SimpleOut(clazz, logLevel);
    }

    public SimpleOut(Class<?> clazz, LogLevel logLevel) {
        this.clazz = clazz;
        this.logLevel = logLevel;
        this.logger = LoggerFactory.getLogger(clazz);
    }

    public void green(String msg, Object... args) {
        log(GREEN, msg, args);
    }

    public Out error() {
        return new SimpleOut(clazz, ERROR);
    }

    public void log(Color color, String msg, Object... args) {
        //marker(color) is a helper (not shown) mapping a Color to the slf4j Marker the converter looks for
        if (logLevel.equals(INFO)) {
  , msg, args);
        } else if (logLevel.equals(WARN)) {
            logger.warn(marker(color), msg, args);
        } // error & debug too
    }
}

Custom Highlighter

public class CustomColorConverter extends HighlightingCompositeConverter {

    @Override
    protected String getForegroundColorCode(ILoggingEvent e) {
        if (e.getMarker() != null) {
            if (e.getMarker().contains(red())) {
                return RED_FG;
            } else if (e.getMarker().contains(yellow())) {
                return YELLOW_FG;
            }
        }
        return WHITE_FG;
    }
}
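For context, logback’s highlighting converters work by returning ANSI foreground colour codes (RED_FG resolves to "31", YELLOW_FG to "33"), which the terminal interprets as escape sequences. Here is a stdlib-only sketch of that mechanism, independent of logback (the class itself is illustrative):

```java
// How an ANSI foreground code turns into a coloured terminal string.
// The code values match the usual ANSI constants.
class Ansi {
    static final String RED_FG = "31";
    static final String YELLOW_FG = "33";

    static String colorize(String code, String message) {
        // ESC[<code>m sets the foreground colour; ESC[0;39m resets to the default.
        return "\u001b[" + code + "m" + message + "\u001b[0;39m";
    }
}
```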

The log configuration in logback would look something like this:

<?xml version="1.0" encoding="UTF-8" ?>
<configuration debug="false">
  <include resource="org/springframework/boot/logging/logback/defaults.xml" />
  <conversionRule conversionWord="highlight"
    converterClass="" />
  <statusListener class="ch.qos.logback.core.status.NopStatusListener" />
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
      <level>INFO</level>
    </filter>
    <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
      <pattern>%highlight(%msg%n)</pattern>
    </encoder>
  </appender>

  <!-- this logger writes all classes within the package space to STDOUT but also to a file based log -->
  <logger name="" level="INFO" additivity="false">
    <appender-ref ref="FILE-AUDIT" />
    <appender-ref ref="STDOUT" />
  </logger>

  <root>
    <level value="INFO" />
    <appender-ref ref="FILE-AUDIT" />
  </root>
</configuration>

The log file has a logger which binds to the package name so that only the log statements from classes contained within this package namespace will be fed to the appenders.

I’ve only shown one appender here; the other, FILE-AUDIT, you can configure however you like. Note that FILE-AUDIT is bound to the root logger to ensure that everything is logged in the log file, while STDOUT is bound to the package-level logger to reduce noise from 3rd party libraries.

We include the springframework defaults for some reason I can’t remember. The conversionRule exists to process the highlight() conversion word using the CustomColorConverter; this is where the main highlighting responsibility lives. The msg section is just the regular message string without any timestamps attached, which is different to how you would normally format a log line.

A complete implementation of this blog post will be made available in a library I am writing called Clog (as in Color Log, not the Danish footwear). Unlike the Danish clog, this library is hopefully comfortable to use and, compared to using stdout, looks good.



ActiveMQ vs RabbitMQ

In this article I highlight some of the differences between RabbitMQ and ActiveMQ and explain why I am hugely biased towards RabbitMQ.

I’ve used ActiveMQ in production for 3 years and recently made the switch from ActiveMQ to RabbitMQ. Switching to RabbitMQ requires some investment from a code perspective, since each technology implements a different messaging specification; however, having made the switch 6 months ago, I feel that the investment was well worth it. I have not looked back at ActiveMQ; my only regret is that I didn’t make the switch sooner.

Let me start off by saying that from a specification perspective, both technologies are very much ‘RONSEAL’ (they do exactly what they say on the tin); they both “do messaging”, but exactly how they do it is different, and the perceived effort made to achieve those ends in each project differs greatly.

I always pick technology stacks based on how well the technology solves the problem in practice rather than on paper, and favour this over any argument about how modern or established it is. To be clear, I am not a technology hipster, nor am I irrationally fearful of bleeding edge tech. The fact that the tech-stacks I choose happen to be modern has less to do with their “cool” factor and more to do with a belief that modern, popular technologies enjoy better overall support, stemming from their popularity, than their ageing established counterparts. Having said that, I’m open to changing stacks at any time if I see a need to do so, i.e. a fundamental change in requirements that the existing stack cannot cope with, functional regression/instability, high maintenance cost, or an increasing lack of free support.

With this in mind, the reasons for moving from ActiveMQ to RabbitMQ centre around some under-the-hood fundamentals, combined with a perceived lack of enthusiasm in ActiveMQ to go beyond the minimum set of requirements and produce a well polished product like RabbitMQ. A common theme throughout this article will be “ActiveMQ does x, but RabbitMQ does it better”. Each of my reasons taken in isolation might not compel someone to switch, but the whole is greater than the sum of its parts here; the polish on RabbitMQ makes for a much smoother development experience, and as a result I have a lot more confidence in the continued quality of RabbitMQ. Here are the highlights of RabbitMQ.

  • Just works out of the box
  • Ability to provision easily in puppet
  • Native package manager based installation
  • Knowledge/Expert principle applied to routing
  • Spring integration
  • Console
  • Support

It works out of the box

One of the first things I do with any technology is “demo” it. When you’re doing a demo of something, you typically don’t care about the configs and obscure parameters; you just want to see the tool in a sensible default state so you can start tinkering with it as soon as possible. ActiveMQ works out of the box, but RabbitMQ does it better. Like most Apache-owned project sites, if you eventually manage to claw your way through the labyrinth that is the ActiveMQ home page and figure out which of the versions is the latest, you might eventually get hold of the zip containing the binaries. After you read the manual and edit the config file, you might finally start the broker and (only because of the JMS spec) be able to immediately post messages to a queue. RabbitMQ’s signposting around artifacts is much clearer, and it’s easy to get hold of and install. Starting a broker on Windows or Linux comes as part of the package, with native run scripts/service mechanisms available; there are proper start and stop scripts to let you run RabbitMQ, and with zero configuration (sensible defaults) a broker can be started immediately.

Distribution Availability

ActiveMQ gives you a choice between a zip file distribution and source code. Rabbit by contrast comes with a native installer for Windows, a .deb file you can download for Ubuntu (or repository information for fetching from RabbitMQ-maintained PPAs via the native package manager), an RPM for Redhat people, some artifacts for Mac users, and also a plain old zip file. This is a big deal. Aside from Windows and all the other distros, ActiveMQ have failed to provide up-to-date native Debian packaged versions of their software; the last package I saw was on Ubuntu 10.04, it was broken, and it remained broken for the lifetime of 10.04. Same story with 12.04.

One of the golden rules of server provisioning with Puppet is to use the native package, and to avoid the “download -> extract” anti-pattern, since this always ends in tears. RabbitMQ provide their own actively maintained Debian PPA repositories. This means that not only are the latest upgrades of the software available to you on Ubuntu from day 1, it is also exceptionally easy to provision a Rabbit instance even without any further custom Puppet modules: you can begin using RabbitMQ with Puppet immediately using nothing more than the package expression and an Augeas lens. The fact that they go the extra mile and provide a module that lets you configure just about everything under the sun without writing your own Augeas lens is just the icing on the big rabbit-shaped cake that is RabbitMQ. The three or so ActiveMQ Puppet modules that do exist work by either downloading and extracting a zip file (see the anti-pattern above), or by relying on the Debian packages shipped with Ubuntu, which are massively out of date and just don’t work out of the box; just the bacon bits on the Fawlty Towers waldorf salad that is ActiveMQ.
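As a sketch of how little Puppet is needed once the software is available through the native package manager (the package name below is the one shipped in the Debian repositories; apt source setup for the RabbitMQ PPA is omitted):

```puppet
# Install and run RabbitMQ straight from the native package manager.
package { 'rabbitmq-server':
  ensure => installed,
}

service { 'rabbitmq-server':
  ensure  => running,
  require => Package['rabbitmq-server'],
}
```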


Routing

Routing in RabbitMQ is very different to ActiveMQ. In ActiveMQ, the clients know about the queues they’re sending to, but they also know about how those queues work; for example, the client cares about where the message is going and how it’s being broadcast (Topic or Queue). In RabbitMQ by contrast, the clients know nothing about the topology. They know they need to publish a message, but they don’t know how it will be routed or who will consume it; it is the responsibility of a middle entity known as the ‘exchange’ to manage the message routing, and rightly so. Since when should my clients care about how a message gets routed? Clients should not be the experts in routing; this is just another form of unnecessary coupling, and the responsibility should lie somewhere else. If I decide to change the routing, I should be able to do so freely and silently without the clients knowing. The implication is that you cannot just push to or listen on a queue without first having created it, so in this sense RabbitMQ is less “out of the box” than ActiveMQ in that it requires the extra step of creating queues; however, I feel this is a small price to pay for the ability to route dynamically from an exchange.
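The decoupling is easy to illustrate with a toy model (plain Java, not RabbitMQ code): the publisher hands the exchange a routing key and a message, and only the exchange knows which queues are bound to that key.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of exchange-based routing. Publishers know only the exchange and a
// routing key; the exchange owns the bindings and decides which queues receive
// the message. Rebinding queues changes routing without touching any publisher.
class ToyExchange {
    private final Map<String, List<Deque<String>>> bindings = new HashMap<>();

    void bind(String routingKey, Deque<String> queue) {
        bindings.computeIfAbsent(routingKey, k -> new ArrayList<>()).add(queue);
    }

    void publish(String routingKey, String message) {
        for (Deque<String> queue : bindings.getOrDefault(routingKey, new ArrayList<>())) {
            queue.add(message);
        }
    }
}
```

Binding a second queue to the same key fans the message out to both consumers, with the publishing code left completely unchanged.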

Spring Integration

Both ActiveMQ and RabbitMQ have excellent support within the Spring framework; however, ActiveMQ is an implementation of the JMS spec, whereas RabbitMQ is an implementation of the AMQP protocol (yes, just to be confusing, ActiveMQ and AMQP share 3 letters; pure coincidence). As a result, you cannot just switch out ActiveMQ for RabbitMQ without first investing in some code changes. Specifically you will need to move from «InsertActiveMQ» to «InsertRabbitMQ», and as part of your configuration process you will need to configure the creation of the queues in RabbitMQ before you can write to them. The investment is well worth it.


Console

ActiveMQ’s console used to look like a website from the 90s: ugly, clunky, form based. They recently improved it, but last time I saw it, it was in beta, I ended up not being able to do what I wanted, and I went back to the classic console. RabbitMQ on the other hand comes with an all-in-one realtime console which is obviously fit for purpose, clearly designed from the ground up rather than as an afterthought.


Support

Try going into the ActiveMQ IRC channel and you’ll see how dead quiet it is. RabbitMQ’s IRC channel actually has people frequenting it; not that this matters much, because RabbitMQ just works and you probably won’t need to ask anybody for help.



Deploying deb artifacts to Gemfury

In a previous post I explained how to package a service up into a deb package. Once your service is packaged up, it can be installed by provisioners like Puppet. However, we still need to publish the package. For open-source projects there are plenty of PPA providers out there that will host your package for free; however, occasionally it’s desirable to keep the Debian artifacts private. Open-source PPA providers do not (by definition) allow private artifacts to be hosted with them, so typically you have to pay for such hosting.

For my own projects I have been using Gemfury to host the packages for about $5 per month. Gemfury provides a nice realtime dashboard (probably implemented with websockets) that enables you to manage deb packages. However as part of continuous integration, I needed a way to get my debian artifacts into Gemfury’s repository without manual intervention.

To this end I built a maven plugin, ‘gemfury-maven-plugin’, which can be used as shown below to automatically deploy the Debian artifacts to your repository.

            <!-- include sources -->
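The full configuration is omitted above; as a hedged sketch only, the shape of such a plugin block might look like the following, where the goal name and coordinates are assumptions, and only the deploy-phase binding, the secrettoken, and the ignoreHttpsCertificateWarnings parameter are taken from this post:

```xml
<!-- Sketch only: element names not mentioned in the post are assumptions -->
<plugin>
  <artifactId>gemfury-maven-plugin</artifactId>
  <executions>
    <execution>
      <phase>deploy</phase>
      <goals>
        <goal>deploy</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <!-- keep the secret token in an uncommitted file, e.g. referenced as a property -->
    <secrettoken>${gemfury.token}</secrettoken>
    <ignoreHttpsCertificateWarnings>false</ignoreHttpsCertificateWarnings>
  </configuration>
</plugin>
```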

We can see the maven plugin binds to the deploy phase while in release mode, and that the configuration allows us to keep the secrettoken in a separate file which we do not commit to source control.

Note that Gemfury use a cheap and nasty SSL provider, so the HTTPS certificates are not in the Java certificate store. To get around this you can use the ignoreHttpsCertificateWarnings parameter; however, we strongly recommend you install Gemfury’s certificate for production systems.

The plugin will retry up to 99 times in the event that Gemfury’s servers are unavailable; a situation which at the time of writing was quite frequent (1 in 5 deployment attempts required a retry).

You can then use puppet to provision your servers with that package by writing a manifest similar to this:

apt::source { 'fury':
  location       => '',
  release        => ' ',
  repos          => '/',
  allow_unsigned => true,
}

package { 'yourpackage':
  name    => 'yourpackage',
  ensure  => latest,
  require => Apt::Source['fury'],
}


Steam Authentication with Steam4J-Auth

Steam’s OpenID-based authentication enables site visitors to log in using their Steam credentials without your site having to store or manage the credentials directly, thereby reducing the amount of sensitive data that could be leaked in the event of a data breach.

Steam authorisation is based on the OpenID spec; as such, when implemented properly it should provide sufficient confidence that the user is who they say they are.

This article explains how to authenticate a steam account using the Steam4J-Auth module from the 3rd party Steam4J library.

The workflow

Let’s look at the workflow from the user’s perspective, using a hypothetical Steam-integrated site as an example.

A user visiting the site decides they want to make an edit to their wishlist. Before they can do this, they must first log in, so they click the login button. The user is redirected to Steam, where they are asked to provide their username and password (and their two factor authentication code if necessary). If they successfully log in, they are redirected back to the site and can now edit their wishlist.

The workflow is relatively simple from the user’s perspective, which probably explains why OpenID and similar authentication systems have gained so much traction.

Behind the scenes

This is what needs to happen behind the scenes in order to authenticate a user:

  • The website must be secured by HTTPS.
  • The user clicks a login button which links to the Steam OpenID site.
  • Steam examines the URL query parameters, noting the openid.mode parameter together with a redirect parameter, which should be an endpoint on your site.
  • The user enters their credentials into Steam’s login page.
  • Steam validates these credentials, and then redirects the user to the endpoint which was specified in the initial request.
  • In order to validate the user, your server examines the query parameters provided by Steam, in particular the ‘claimedId’ and the signature.
  • Your server (never the client side) calls Steam back, passing the given request parameters and the signature.
  • Steam checks if the signature is valid for the set of parameters provided.
  • Steam also handles nonce expiration, so the same signature + claimedId pair cannot be replayed.

In summary, Steam provides a signed set of claims to the client, the client submits these claims to the backend, and the backend verifies these claims with Steam. It is then up to your site to store the user in a signed/encrypted cookie and check the entitlements of that user against its own database.

Note that you absolutely must make the server-side call to Steam to verify the claim provided by the client; without this signature-based check, the claimedId is meaningless (i.e. anybody can change the claimedId).
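Steam4j performs this verification for you, but the mechanics come straight from the OpenID 2.0 spec: the server echoes the signed parameters back to the provider with openid.mode switched to ‘check_authentication’, and the provider replies with is_valid:true or false. A simplified, stdlib-only sketch of building that verification payload (the actual HTTP POST is omitted):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Builds the OpenID 2.0 direct-verification payload from the parameters the
// provider sent to the redirect endpoint: the same fields are echoed back,
// with openid.mode switched to check_authentication.
class OpenIdVerification {
    static Map<String, String> toVerificationParams(Map<String, String> callbackParams) {
        Map<String, String> params = new LinkedHashMap<>(callbackParams);
        params.put("openid.mode", "check_authentication");
        return params;
    }
}
```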


Steam4J-Auth

The steam4j library provides a module called steam4j-auth which can help us implement this authentication workflow in our applications.

In particular, the SteamAuth class provides several useful helper methods for implementing the above workflow.

SteamAuth is thread safe, so a Spring-managed singleton used across your application is fine. To construct one, simply provide the ‘realm’ (the domain of your site) and the ‘redirect’ URL, for example:

SteamAuth sa = new SteamAuth("", "");

sa.getLoginUrl() returns a string representation of the URL which the login button should link to. It contains the realm and redirect URL, as well as the boilerplate parameters required. You can of course expose this link via an endpoint.

sa.authenticate(HttpServletRequest r) examines the given HttpServletRequest for the claimedId and the signature, verifies these with Steam, and returns an authorisation result which states whether the login was successful; based on this, you can then issue a signed cookie containing this information. Assuming you are using Spring’s REST annotations, this method is convenient since you can pass the HttpServletRequest from your endpoint directly to the authenticate method.

Example usage

Finally, here is an illustration of what a Spring rest endpoint might look like:

//request mapping annotations omitted; assumes a class-level slf4j Logger named 'log'
public class LoginController {
    private final SteamAuth steamAuth;

    //this method can be called by your javascript client to figure out where to redirect
    //the user when the login button is clicked
    public @ResponseBody JsonNode generateSteamUrl(HttpServletRequest request) {"Logging in");
        try {
            return new TextNode(steamAuth.getLoginUrl());
        } catch (Exception e) {
            throw new SteamAPIDown();
        }
    }

    //this method gets called when Steam redirects back to the site on successful login.
    //You must call steamAuth.authenticate(request) in order to verify the claimedId.
    //We don’t show how to set a signed cookie here, only where it should be done.
    public @ResponseBody boolean openIDCallback(HttpServletRequest request, HttpServletResponse response) {
        AuthorizationResult result = steamAuth.authenticate(request);
        if (result.isSuccess()) {
            //perform signed cookie set here
  "{} logged in", result.getUserId());
            return true;
        } else {
            return false;
        }
    }

    //SteamAuth instance provided via dependency injection
    public LoginController(SteamAuth steamAuth) {
        this.steamAuth = steamAuth;
    }
}



RabbitMQ Lossless Installation via Puppet

This article explains how to install a production ready RabbitMQ instance via Puppet in a way that allows the host OS to be destroyed and recreated without data loss relating to the queue state.

Most VPS providers allow you to define and mount multiple disks on your server. A typical split might look like this:

  • OS
  • Swap
  • Data

Typically an administrator will create these disks once, and when required will destroy the OS disk but leave the Data disk intact. If RabbitMQ is installed directly on the OS disk without a Data mount, then reinstalling the OS will result in the loss of your data; however, if we configure RabbitMQ to use the data disk for storage, then we can resume our application’s state in the event that we have to rebuild the server.

Puppet manifest

We must create the following manifest:

include ufw

class { 'jdk_oracle': }

ufw::allow { "allow-rabbitmq-from-dev":
  from => "",
  port => 5672,
}

ufw::allow { "allow-rabbitmq-admin-from-dev":
  from => "",
  port => 15672,
}

mount { 'data':
  device  => '/dev/sdc',
  ensure  => 'mounted',
  name    => '/data',
  atboot  => true,
  dump    => 0,
  fstype  => 'ext4',
  options => 'noatime,errors=remount-ro',
  pass    => 1,
  require => File['datadir'],
}

file { 'datadir':
  path   => '/data',
  ensure => 'directory',
  mode   => '0777',
}

file { 'rabbitmqdir':
  path    => '/data/rabbitmq',
  ensure  => 'directory',
  mode    => '0777',
  require => Mount['data'],
}

file { '/data/rabbitmq/logs':
  ensure  => 'directory',
  mode    => '0777',
  require => File['rabbitmqdir'],
}

file { '/data/rabbitmq/data':
  ensure  => 'directory',
  mode    => '0777',
  require => File['rabbitmqdir'],
}

class { 'rabbitmq':
  service_manage        => true,
  port                  => '5672',
  delete_guest_user     => true,
  environment_variables => {
    'RABBITMQ_MNESIA_BASE' => '/data/rabbitmq/data',
    'RABBITMQ_LOG_BASE'    => '/data/rabbitmq/logs',
    'RABBITMQ_MNESIA_DIR'  => '/data/rabbitmq/data/localhost',
  },
  require               => [File['/data/rabbitmq/data'], File['/data/rabbitmq/logs']],
}

rabbitmq_user { 'prodid':
  admin    => true,
  password => 'prodidPassword',
}

rabbitmq_user { 'user':
  admin    => true,
  password => 'rabbitMQPa$$word',
}

rabbitmq_vhost { 'domain': ensure => present, }

rabbitmq_user_permissions { 'user@domain':
  configure_permission => '.*',
  read_permission      => '.*',
  write_permission     => '.*',
}

rabbitmq_user_permissions { 'prodid@domain':
  configure_permission => '.*',
  read_permission      => '.*',
  write_permission     => '.*',
}

The above Puppet manifest mounts the data disk, creates a data folder, and creates a rabbitmq directory inside it.

In the event that the operating system disk is replaced, the RabbitMQ data folder is safe on another disk. Note how we also configure the firewall and segregate responsibilities between the prodid (application) user and the human user.

