Let's Encrypt Microsoft Remote Desktop via Route53

I recently got some UniFi equipment, and decided that with my USG router capable of routing hostnames properly, it might be fun to ditch the self-signed certificate presented when connecting to Microsoft's Remote Desktop service. Not so much from a security perspective (there's nothing wrong with self-signed certificates per se), but to avoid the extra step of having to accept the certificate every time on my Mac.

So in this post I'm going to cover how to replace the default self-signed certificate that identifies Remote Desktop when connecting to a Windows 10 Pro PC. In particular, we're going to use Let's Encrypt to generate the certificates for free. We're going to use the DNS-01 challenge because that's the only one that allows a certificate to be issued to a machine on a private network - that is, we get the certificate issued by proving that we own the domain, rather than exposing any ports. I'd use this method anyway, even if we were not behind a private network, as I think it's a much cleaner approach architecturally, and also in terms of security and reliability.

I've written this post because the information on the different aspects of this use case was fragmented, with no single robust script to solve the problem. There were separate examples of:

  • How to import non-Let's Encrypt third-party certificates into Remote Desktop.
  • How to generate Let's Encrypt certificates.
  • How to import certificates into Remote Desktop Services (not Remote Desktop).
  • How to manually import a certificate on Windows 10.
  • How to work with Posh-ACME to produce a certificate.
  • How to integrate with Route53.

This post cherry-picks the best parts of each to let you install a Let's Encrypt certificate via Route53, replacing the self-signed certificate used for Remote Desktop connections to Windows 10 Pro hosts.

Summary

Here’s what we’re going to do to achieve this:

  • Install the PowerShell modules we need, and adjust the execution policy so we can run scripts.
  • Make use of Posh-ACME and the Route53 API to acquire a certificate for our target machine mydesktop.lan.mydomain.com
  • Install that certificate in our target machine’s Certificate repository.
  • Make Microsoft Remote Desktop use that certificate.

Security

Before we go any further, a word on security. This method relies on supplying AWS credentials that effectively allow the creation and destruction of DNS records, so it is vital to follow security best practices by:

  • Creating a separate non-root IAM identity to access the Route53 API.
  • Properly defining the policy that provides the various permissions.
  • Keeping these permissions to a minimum.
  • Not leaving the credentials in an unsecured location.
  • Not being lazy and re-using the credentials for other projects.

Assumptions

I'm going to assume a certain competence with AWS, in particular the creation of IAM users, Route53, and policy creation, as well as knowledge of how DNS works and the various record types, in particular TXT and CNAME.

I’m also going to assume you already have a domain, and have already created a zone in Route53 with an A Record. This domain must be real and you must actually have ownership such that DNS records resolve correctly via Route53.

I'm also going to assume you have a router with a local subdomain (which isn't on the public internet), for example:

  • Your domain on the public internet, e.g. burrell.co. This MUST exist.
  • Your subdomain for your LAN, e.g. lan.burrell.co. This is a figment of your router's imagination.

This means that if the hostname of the machine you want to RDP to (known henceforth as the target) is mydesktop, then the FQDN is mydesktop.lan.burrell.co. This should already be reachable via RDP within your LAN - this tutorial is about certificates, not hostname resolution!

The whole script

Before we go further I must take a moment to acknowledge the inspiration for this script. It is the result of cobbling together code from other parts of the internet, then tailoring it to this particular use case - Let's Encrypt, Route53, Windows PowerShell, and Remote Desktop (as opposed to Remote Desktop Services, which is a totally different kettle of fish!). Some parts of the original script have not aged well due to backward-incompatible evolution of the API over time, so I've fixed those too.

Let’s take a look at it in its entirety and then examine the key parts.

<#
.Synopsis
   Script to automate certificate renewal for Remote Desktop via Let's Encrypt and Route53.
.DESCRIPTION
   Script to automate certificate renewal for Windows 10 Remote Desktop via Let's Encrypt and Route53.
   Using Let's Encrypt and Route53 (Posh-ACME, AWSPowerShell) we can automate the issuance of certificates for
   Remote Desktop running on Windows 10.
.EXAMPLE
   .\letsencrypt-rdp.ps1 -LEServer le_prod 
              -domain desktop-w2005.lan.example.com
              -challengeDomain desktop-w2005.lan.example.com
              -contact daniel@example.com
              -r53secret (Get-Content secretkey.txt)
              -r53key AKIABC123

Run this script once you've configured a CNAME record _acme-challenge.desktop-w2005.lan.example.com -> desktop-w2005.lan.example.com

You can separately automate the creation and destruction of that CNAME record, but if not, you only need to create it once.

#>

param(
    [string]$LEServer,
    [string]$matchDomain,
    [string]$domain,
    [string]$challengeDomain,
    [string]$contact,
    [string]$r53Secret,
    [string]$r53Key,
    [int]$delay = 60
)

$cdomain = "cn",$domain -join "="
$date = [datetime]::Now
$expires = $date.addHours(48)
$cert = Invoke-Command -ScriptBlock {$cdomain = $args[0];get-childitem cert:\localmachine\my | where { $_.Subject -eq $cdomain} } -ArgumentList $cdomain
$thumbprint = $cert.Thumbprint
if(($cert.NotAfter) -le $expires)
{
    # If the certificate is due to expire within 48 hours, request a new certificate and install it for Remote Desktop.
    Write-Output "Certificate Requires Replacement"
    # Get the Certificate
    Set-PAServer $LESERVER
    if(($LESERVER) -eq "LE_STAGE")
    {
        $certName = "ACME-STAGE"
    }
    if(($LESERVER) -eq "LE_PROD")
    {
        $certName = "ACME-PROD"
    }
    $SecurePassword =  $r53Secret | ConvertTo-SecureString -AsPlainText -Force
    $r53Params = @{R53AccessKey=$r53Key; R53SecretKey=$SecurePassword}
    try{
        add-type -AssemblyName System.Web
        $randomPassword = [System.Web.Security.Membership]::GeneratePassword(15,2) | ConvertTo-SecureString -AsPlainText -Force 
        $certificate = New-PACertificate $domain -PfxPass $randomPassword -AcceptTOS -Contact $contact -DnsPlugin Route53 -DnsAlias $challengeDomain -PluginArgs $r53Params -Verbose -force -ErrorAction Stop
        Start-Sleep $delay

        Write-Output "Import Certificate"
        certutil -v -p $randomPassword -importPFX $certificate.pfxFile noExport
        
        Write-Output "Install imported certificate"
        $tp = (ls cert:\localmachine\my | WHERE {$_.Subject -match $matchDomain } | Select -First 1).Thumbprint
        wmic /namespace:\\root\CIMV2\TerminalServices PATH Win32_TSGeneralSetting Set SSLCertificateSHA1Hash="$tp" 
    }
    catch{
        $_.exception.message
    }
}
else
{
    Write-Output "Certificate does not need replacing"
    $expiry = $cert.NotAfter
    Write-Output "Expires : $expiry"
}
Get-PsSession | Remove-PSSession

We can see the main structure comes from Robert Pearman’s guide for RDS; I’ve trimmed a few unnecessary params and replaced the RDS section with the code for importing into RD instead.

Let’s break the script down into its key parts.

Configure PowerShell

  1. Start an Administrator PowerShell session on the target machine.
  2. Ensure PowerShell's execution policy allows you to execute scripts (see the sketch below, after the prerequisites script).
  3. From the GitHub repo danielburrell/rds-certs, execute the prerequisites.ps1 script shown below to configure the target machine with the necessary PowerShell modules.

    prerequisites.ps1

     Install-Module -Name Posh-ACME -RequiredVersion 3.6.0
     Install-Module -Name AWSPowerShell -RequiredVersion 3.3.563.1
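
For step 2, if script execution is blocked, a minimal way to relax the policy for the current session only is sketched below; this is just one option, so adjust the policy and scope to your own security requirements.

     # Allow script execution for this PowerShell session only; nothing is changed machine-wide
     Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope Process -Force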
    

Set up the CNAME record

As a one-time operation we must create a CNAME record _acme-challenge.mydesktop.lan.example.com which points to mydesktop.lan.example.com. Remember that when creating a CNAME in Route53 the suffix .example.com is appended for you, so entering _acme-challenge.mydesktop.lan as the record name is fine (the value must be the full mydesktop.lan.example.com).

Ideally our script would create this record via the API and delete it again afterwards (since it's not nice to leave details of our network in a public record).
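
If you'd rather script the one-off creation, here's a sketch using the AWS CLI; this isn't part of the script in the repo, YOURZONEIDHERE is a placeholder for your hosted zone ID, and the CLI must be configured with suitable credentials.

  aws route53 change-resource-record-sets \
      --hosted-zone-id YOURZONEIDHERE \
      --change-batch '{
        "Changes": [{
          "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "_acme-challenge.mydesktop.lan.example.com",
            "Type": "CNAME",
            "TTL": 300,
            "ResourceRecords": [{"Value": "mydesktop.lan.example.com"}]
          }
        }]
      }'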

Acquire the certificate

The following snippet creates a Let's Encrypt certificate by fiddling with Route53 DNS records. Specifically, it publishes a TXT record containing the challenge token; Let's Encrypt then looks up the challenge name (via the CNAME we created above), which must resolve to that TXT record.

We start by generating a password using the System.Web.Security.Membership password generator (the add-type command loads the assembly, so we can use this library). We convert the plaintext value into a SecureString so the New-PACertificate cmdlet accepts it.

  add-type -AssemblyName System.Web
  $randomPassword = [System.Web.Security.Membership]::GeneratePassword(15,2) | ConvertTo-SecureString -AsPlainText -Force 
  $certificate = New-PACertificate $domain -PfxPass $randomPassword -AcceptTOS -Contact $contact -DnsPlugin Route53 -DnsAlias $challengeDomain -PluginArgs $r53Params -Verbose -force -ErrorAction Stop

Import the certificate

This section of the script uses certutil to import the newly created certificate into the target machine’s certificate store. Note we need to pass the password we generated earlier in order to read the pfx content.

  certutil -v -p $randomPassword -importPFX $certificate.pfxFile noExport

Tell Remote Desktop to use this certificate

This section of the script tells Microsoft’s Remote Desktop to use the given certificate.

  $tp = (ls cert:\localmachine\my | WHERE {$_.Subject -match $matchDomain } | Select -First 1).Thumbprint
  wmic /namespace:\\root\CIMV2\TerminalServices PATH Win32_TSGeneralSetting Set SSLCertificateSHA1Hash="$tp"
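
To verify the change, you can read the setting back with a quick WMI query (a sketch; the value shown should match $tp):

  Get-WmiObject -Namespace root\CIMV2\TerminalServices -Class Win32_TSGeneralSetting | Select-Object SSLCertificateSHA1Hash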

Run this script periodically

Let's Encrypt certificates are designed to expire after 90 days as a matter of security and automation best practice. Our script ensures that we don't bother the Let's Encrypt servers unless it's time to renew, so we can simply run it daily in the background to keep the certificate rotated. Here's how to set that up programmatically.


 $Time = New-ScheduledTaskTrigger -Daily -At "1:00PM"

 $Arguments = '-File letsencrypt-rdp.ps1 ' +
              '-LEServer le_prod ' +
              '-domain desktop-w2005.lan.example.com ' +
              '-challengeDomain desktop-w2005.lan.example.com ' +
              '-contact daniel@example.com ' +
              '-r53secret xxxxxxx ' +
              '-r53key AKIABC123'

 $Action = New-ScheduledTaskAction -Execute PowerShell.exe -WorkingDirectory C:/Scripts -Argument $Arguments

 Register-ScheduledTask -TaskName "Renew RDP Certificate" -Trigger $Time -Action $Action -RunLevel Highest

The above code creates a trigger that fires daily at 1pm, builds the argument string for our script, defines the action that runs it, and finally registers the task so it runs with the highest privileges.

Packaging it all up

You can download the above as an installable zip from the GitHub repo danielburrell/rds-certs.


Let's Encrypt UniFi Cloud Key

Having put up with the stock router my ISP gave me for the last 10 years, I finally shelled out for Ubiquiti's UniFi Controller, USG, and IWHD-AP. On my old router I'd always have to click "Proceed" every time my browser complained about the self-signed certificate. On top of that, remembering that the router address was 192.168.1.1 or 10.1.1.1 and typing it in every time I needed to configure something became annoying.

So in this post I'm going to cover how to set up the UniFi Controller to be reachable by hostname, and then how to replace the tedious self-signed certificates that are generated when you set up the Cloud Key.

Note: there's nothing wrong with self-signed certificates, and other than avoiding 'Just Click Yes' override fatigue, there isn't much to be said for replacing a self-signed certificate with an official one in security terms. So for me, aside from learning about the services deployed on the USG and where they store their certificates, this is about convenience and avoiding interruptions to my workflow. Yes, it's a bit like the Fly episode from Breaking Bad, where Walter stops his production line and the audience is subjected to a whole hour of watching him hunt around his lab to eliminate a fly before production can resume. Just as Walter found that fly annoying, I find IP addresses and self-signed certificates annoying too.

We're going to use Let's Encrypt to generate the certificates for free. In particular, we're going to use the DNS-01 challenge because that's the only one that allows a certificate to be issued to a machine on a private network - that is, we get the certificate issued by proving to Let's Encrypt that we own the domain, rather than exposing any ports on our network. I'd use this method anyway, even if we were not behind a private network, as I think it's a much cleaner approach architecturally, and also in terms of security and reliability (port forwarding is not an elegant solution in this situation).

Summary

Everything I do here has been written up as a nice collection of scripts which you can get at danielburrell/unifi-certs

Here’s what we’re going to do to achieve this:

  • Configure CloudKey to associate a domain with the network.
  • Configure AWS with a dedicated IAM and Policy for the automatic creation of DNS records under our domain of choice.
  • Install pip3, because apparently the UniFi Cloud Key Gen2 doesn't come with it installed.
  • Make use of CertBot and the Route53 plugin to acquire a certificate for our target machine cloudkey.home.example.com
  • Use openssl to mangle the certificates into an intermediate format.
  • Replace the certificates, and import them into the key store
  • Restart the relevant services so they pick up the new certificate.
  • Set a cron job so that we refresh the certificates before the LE 90 day window.

Security

Before we go any further, a word on security. This method relies on supplying AWS credentials that effectively allow the creation and destruction of DNS records, so it is vital to follow security best practices by:

  • Creating a separate non-root IAM identity to access the Route53 API.
  • Properly defining the policy that provides the various permissions.
  • Keeping these permissions to a minimum
  • Not leaving the credentials in an unsecured location.
  • Not being lazy and re-using the credentials for other projects.

Assumptions

I'm going to assume a certain competence with AWS, in particular the creation of IAM users, Route53, and policy creation, as well as knowledge of how DNS works and the various record types, in particular TXT - whether that be through the UI, Terraform, or similar.

I’m also going to assume you already have a domain, and have already created a zone in Route53 with an A Record. This domain must be real, and you must actually have ownership such that DNS records resolve correctly via Route53. For example:

  • Your domain on the public internet, e.g. example.com. This MUST exist.

In this tutorial we will then create:

  • Your desired subdomain for your LAN, e.g. home.example.com. This is a figment of your router's imagination.

Configure Cloud Key with a domain

We're going to configure your network with a name. This is the suffix that all your devices will be named under, so if you have a device with the hostname dans-iphone and your domain is home.example.com, you'll be able to reach that device at dans-iphone.home.example.com.

  • Log into your Ubiquiti Cloud Key, which I'll assume is at 192.168.1.2.
  • Go to Settings -> Networks
  • Find the LAN (most people only have one!), and click Edit (you might need to hover over the entry to see the edit button, it’s on the far right).
  • Open the Advanced Section
  • Find the Domain Name setting and set it to a subdomain e.g. home.example.com.
  • Click Apply Changes

Installing Prerequisite Packages

We'll be using certbot and certbot's Route53 plugin. Certbot can be installed with the package manager. However, the Route53 plugin is a Python module, so we'll also need to install pip3, as it's not installed by default even though python3 is.

With the previous changes you should be able to ssh to the controller via its hostname. You can find the hostname by going to Settings -> System Settings and looking at the Device Name field.

Assuming the hostname is cloudkey, you should be able to log on using:

ssh cloudkey.home.example.com

apt-get install python3-pip
apt-get install certbot
pip3 install certbot-dns-route53

Creating the policy

Log into AWS and look for Route53. Find the hosted zone ID for your domain and make a note of it.

The following policy, when bound to an IAM group or user, allows the user to perform the necessary DNS record operations in Route53. The key feature of this policy is that it allows record changes only for the given hosted zone.

Create this policy, replacing the YOURZONEIDHERE placeholder with your own zone ID:

{
    "Version": "2012-10-17",
    "Id": "update-rdp-dns-route53",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "route53:ListHostedZones",
                "route53:GetChange"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect" : "Allow",
            "Action" : [
                "route53:ChangeResourceRecordSets",
                "route53:ListResourceRecordSets"
            ],
            "Resource" : [
                "arn:aws:route53:::hostedzone/YOURZONEIDHERE"
            ]
        }
    ]
}

Now create a new IAM user, create a group for it, and bind the newly created policy to the group. When you create your IAM user you will be given a pair of credentials. Note these down and store them somewhere sensible like a password manager.
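
If you prefer to script this rather than click through the console, a sketch with the AWS CLI might look like the following; the certbot-route53 names and YOURACCOUNTID are placeholders, and policy.json is the policy document above saved to a file.

# Create the policy, group and user, then wire them together (names are placeholders)
aws iam create-policy --policy-name certbot-route53 --policy-document file://policy.json
aws iam create-group --group-name certbot-route53
aws iam attach-group-policy --group-name certbot-route53 --policy-arn arn:aws:iam::YOURACCOUNTID:policy/certbot-route53
aws iam create-user --user-name certbot-route53
aws iam add-user-to-group --group-name certbot-route53 --user-name certbot-route53
aws iam create-access-key --user-name certbot-route53   # note down the credentials this prints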

Stashing and securing AWS credentials

We need to put the AWS credentials on our cloudkey so that certbot’s route53 plugin can use them to create the necessary DNS records to issue our certificate. Note there are a number of ways for certbot to read the credentials so if you prefer an alternative approach, then feel free to deviate from these next instructions.

  • SSH to the Cloud Key again ssh cloudkey.home.example.com
  • Create the file mkdir -p ~/.aws && touch ~/.aws/config && chmod 600 ~/.aws/config
  • Open the file for editing vi ~/.aws/config
  • Set the content substituting your keys as appropriate (if you’re confused which key is which, the access_key_id is the shorter value that starts with AK)
    [default]
    aws_access_key_id=*****************
    aws_secret_access_key=************************************
    
  • Save the contents :wq
  • Secure the file some more chmod 400 ~/.aws/config

Acquiring the certificates

Let's acquire the certificates by calling certbot. I'm going to request a wildcard certificate for *.home.example.com, but you don't have to; you can request a non-wildcard certificate using -d cloudkey.home.example.com instead. I'm not going to go into the benefits and drawbacks of wildcard certificates.

certbot certonly \
        --dns-route53 \
        --dns-route53-propagation-seconds 30 \
        --noninteractive \
        -d '*.home.example.com'

The --noninteractive flag stops certbot from asking questions, for example about what to do if renewal isn't yet needed. By default certbot will not renew unless the certificate is close to expiry, which is exactly what we want for the cron job we will create later.

If this command runs successfully then certbot will create

  • a private key /etc/letsencrypt/live/home.example.com/privkey.pem
  • a certificate /etc/letsencrypt/live/home.example.com/cert.pem

Import the certificate into the nginx and unifi services

The next sections are dedicated to deploying these keys to two services

  • nginx
  • unifi

Ultimately from our privkey.pem and cert.pem we’re aiming to create

  • a cert.tar file /etc/ssl/private/cert.tar containing
    • key file /etc/ssl/private/cloudkey.key
    • cer file /etc/ssl/private/cloudkey.crt
    • Java Key Store /etc/ssl/private/unifi.keystore.jks

To build the keystore we will first need to create a temporary p12 (PKCS#12) bundle of the pems. From this we will create:

  • The keystore /usr/lib/unifi/data/keystore which is actually just a symlink to /etc/ssl/private/unifi.keystore.jks

I presume this is because one of the services requires the key and cert file (probably nginx) and the other is written in java and requires a Java Key Store.

I’m not going to question this certificate management approach except to remark that it’s not how I would have done it…

Let’s begin!

Stop the unifi service (nobody likes to have the certificate rug pulled from under them)

service unifi stop

Delete the current self-signed configuration located in /etc/ssl/private/

rm -f /etc/ssl/private/cert.tar
rm -f /etc/ssl/private/unifi.keystore.jks
rm -f /etc/ssl/private/unifi.keystore.jks.md5
rm -f /etc/ssl/private/cloudkey.crt
rm -f /etc/ssl/private/cloudkey.key

Import to the JKS for the unifi service

Let's use openssl to generate a random password for us. This is just to secure the ephemeral p12 bundle.

ephemeralPassword=$(openssl rand -base64 32)

openssl pkcs12 \
         -export \
         -out /etc/ssl/private/cloudkey.p12 \
         -inkey /etc/letsencrypt/live/home.example.com/privkey.pem \
         -in /etc/letsencrypt/live/home.example.com/cert.pem \
         -name unifi \
         -password pass:$ephemeralPassword
keytool -importkeystore -deststorepass aircontrolenterprise \
        -destkeypass aircontrolenterprise \
        -destkeystore /usr/lib/unifi/data/keystore \
        -srcstorepass $ephemeralPassword \
        -srckeystore /etc/ssl/private/cloudkey.p12 \
        -srcstoretype PKCS12 -alias unifi

rm -f /etc/ssl/private/cloudkey.p12

The keytool command above imports the ephemeral p12 into the unifi keystore, after which we dispose of the p12. Note that the destination keystore password is not random: it must be aircontrolenterprise, as this hardcoded value is what the service expects and it is non-configurable at the time of writing. Ubiquiti's decision to use a known hardcoded value rather defeats the purpose of encrypting the keys, but that's none of my business.

Importing for nginx

So far we've fixed unifi; now we just need to get nginx to use the LE certs. To do this we copy the LE certs across verbatim (they're already in the ASCII PEM format we need), then tar them up and leave the tar file in the same directory, presumably to be discovered by nginx.

cp /etc/letsencrypt/live/home.example.com/privkey.pem /etc/ssl/private/cloudkey.key
cp /etc/letsencrypt/live/home.example.com/cert.pem /etc/ssl/private/cloudkey.crt

tar -cvf /etc/ssl/private/cert.tar -C /etc/ssl/private/ .

chown root:ssl-cert /etc/ssl/private/*
chmod 640 /etc/ssl/private/*

Restart the services

Let’s check that the nginx config is still valid and then restart the services

/usr/sbin/nginx -t

service nginx restart
service unifi start

All done!

Testing it

Visit your controller at https://cloudkey.home.example.com; the certificate should be valid and signed by R3 (the Let's Encrypt intermediate).
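
If you prefer to check from the command line, a quick openssl sketch (run from any machine on the LAN) will show the subject, issuer and validity dates:

openssl s_client -connect cloudkey.home.example.com:443 \
        -servername cloudkey.home.example.com </dev/null 2>/dev/null \
        | openssl x509 -noout -subject -issuer -dates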

Automating Fully

Recall that all of the above has been wrapped up in a nice script for you at danielburrell/unifi-certs. You can run the get-certs.sh script every Sunday at midnight with a crontab entry:

echo "$(echo '0 0 * * 0 /root/get-certs.sh home.example.com' ; crontab -l)" | crontab -

We run the job weekly even though the certificate lasts 90 days because a less frequent schedule risks letting the certificate lapse: if we only attempted renewal every 60 days and the first attempt was refused for being too far from expiry, the next attempt wouldn't happen until day 120 - 30 days after the certificate had expired.

Improvements

I think it should be possible to do all the above as a non-root user and improve the security there.

Feedback

I spent a while reverse engineering this process and figuring out how to reduce the number of intermediate stages required to get the LE certs in the right place despite all the confusing file extensions.

I think it all works rather well so if you have any comments or feedback let me know below.


How a keyboard should be

And now for something completely different. My rather pricey Logitech G910 keyboard has broken down after only 18 months, with no fewer than 3 of the keys currently stuck on the wrong colour. It looks like a physical issue with the LEDs rather than anything software related. It's actually not a very nice keyboard to type on, and frankly Logitech have gone down in my estimation for the lack of quality in their products over the last 15 years.

So what’s the alternative? As a developer I’m quite particular about my keyboards and their functionality. There are literally thousands of combinations to choose from, and yet I still haven’t found my perfect keyboard.

Yes, I've seen Jeff Atwood's Code Keyboard, and in many respects it comes so close to being what I want: USB-C, cable management, black, backlit. But they only ship the Cherry version to the UK, and if the reviews are to be believed, a few of their keyboards have issues.

The Code Keyboard is an expensive keyboard, built by a guy who wanted to build a keyboard, so it should be a great keyboard! Yet only 60% of people rate it 5 stars. And that's glossing over the fact that it costs a fortune to ship to the UK - that's if they have it in stock, which right now, they don't!

So what now? Spend my nights learning to build mechanical keyboards? Ain’t Nobody Got Time For That!

Instead, I’m going to set out what I think a keyboard should be, and then compromise on something - probably the backlight as working in an unlit room is superbad for you and I should probably not do it.

This wishlist for a keyboard is a bit of lighthearted fun and a way to convey my frustration at a universe of keyboards in which my perfect keyboard does not exist.

Power and Connectivity

  • Ships with a detachable USB-C braided cable. The DAS 4 keyboard actually comes pretty close to my overall keyboard requirements; unfortunately the cable is made of nasty rubber and is fixed to the back of the keyboard rather than being detachable. The Filco Convertible 2 also comes pretty close since its cable is actually detachable, so I could provide my own braided cable. Sadly they missed a trick and supplied a Micro-USB connection instead of USB-C. This is unfortunate as the socket sits behind the keyboard, so it would be nice if it were reversible and I didn't have to deal with the USB superposition problem.
  • Recessed connector. The connector should be recessed so that the cable head doesn’t stick out.
  • Open recess. The recess should be open so that you can disconnect the keyboard without having to turn it upside down. An open recess should also allow the cable to exit via the rear of the keyboard.
  • Ducts on the underside running parallel allowing an exit to the left of the keyboard.
  • Duct runs parallel with the length of the keyboard. This means that the cable terminator won’t stick out the back of the keyboard.
  • Wireless Mode. While USB-C allows the keyboard to be ‘wired’, detaching it should put it in wireless mode.
  • Use while charging. May be obvious but I’ve seen devices that can’t be used while they’re charging.
  • Lithium-Ion Battery would provide 6 months of 8 hour a day use (without back-lighting).
  • Headroom for magnetically detaching usb systems as nobody wants to snag a usb-c connection. Apple implemented this as ‘MagSafe’ for their power systems and there are some options out there for doing this with USB-C (with data transfer).

Lighting, Colour, Materials

  • Back-lit in white This is one thing I admire about Jeff Atwood’s Code Keyboard, a nice back-lit white.
  • Adjustable brightness with dip switches or similar.
  • White under-key backplate. Again this is one thing that makes ‘Code Keyboard’ look beautiful - a solid white backplate for light to reflect back.
  • Fixed colour back-light. No flashing rainbow colours just because the host OS isn't connected yet. Looking at you, Logitech. This is a keyboard, not a Mario Kart level.
  • Black Double Shot Keycaps Throughout Because Double Shot requires no printing, there’s no print to wear away. Yes this includes the swappable keycaps like the OS key and the Menu key. I’d look to MaxKeyboard for this.
  • Black anodised steel case. This thing should be rugged and heavy.
  • Illuminated keycaps. Back-lights are no good if you can’t see the keys.
  • Soft cap design. A lot of people complain about the Code Keyboard caps being 'sharp' and painful to type on.
  • Caps/Scroll/Num lock lights would be within line of sight, dim, and either white (to match the rest of the keyboard) or red like on a BBC Computer. No idea why people choose blue. It’s no good placing them behind a back row of tall keys if I can’t see it at the angle I’m working at.

Ruggedness

  • Made of steel Did I mention this already?
  • Cherry MX do all the hard work of producing moving parts that withstand 50 million strokes, so there are no other moving parts, so it should be easy to slap a 10 year guarantee on it. Right?
  • 10 Year Warranty I’m tired of having to re-evaluate the keyboard I want every 2 years. On top of that think of all the electronic waste from Logitech keyboards breaking after 1 year.
  • Steel Keyboard Hinges. Ever had a keyboard hinge break off? These ones should be made from steel too.
  • Hefty. This thing should not slide around the desk. Not even for pair programming.
  • Thick grippy firm rubber feet - seriously this thing should not slide around the desk.
  • Spill Proof. This is a big ask, but why can't keyboards be spill proof? If they were, they'd also be easier to clean.
  • User Serviceable Parts. OK, maybe a PCB isn't that serviceable, but I should be able to unscrew this keyboard to clean it, and maybe even replace the USB connector that will inevitably get damaged somehow.

Keys

  • Cherry MX Brown. I plan to use this keyboard in an office, blues are out of the question and reds give no feedback.
  • O Rings by default so that again, this thing is quiet.
  • OS Key Deactivation toggle. I still like to game, and this is a really great feature that Logitech have incorporated into most of their gaming keyboards. Trying to implement this at the software level requires a machine reboot which is impractical for most people.
  • Big Fat Instant Mute toggle - sometimes a call comes in; got to know where your mute button is.
  • [Optional] Play pause rewind forward media buttons - make them easy to press, in fact, make them actual keys like this.
  • [Optional] Tactile volume wheel exactly like the one on DAS 4, exposed fully like a hi-fi wheel. The partially exposed ones on Corsair and Logitech keyboards are not as great as they don’t enable fast and precise adjustment of volume. A full wheel like some old school 1980s stereo is what you need.
  • Exactly 5 Custom keys. In logitech land they’re known as G-Keys. Five. No more. No less. These are incredibly useful.
  • Comes in 105 UK form. Yes with a menu button. No, not with an FN button. Full size Enter key with the @ and “ in the right place.

Availability

  • Ships to UK It’s no good having the best keyboard in the world if you don’t have partners around the world who can ship it.

Definite No-Nos

  • No hello kitty prints. I said black. Black looks professional. Try and blend
  • No screens. You already have a screen in front of you. This is a keyboard. It takes input. Need more output or indicators? get a second screen! Or an Elgato console - Seriously. Logitech G15 required drivers to be installed for the screen to operate and nobody writes code for these sorts of things anymore.
  • No embedded wrist rest. Again, this is orthogonal to a keyboard. If you're desperate, have a custom wrist rest 3D printed.
  • No speakers. I already have speakers, and my computer is already spoilt for choice from the plethora of tinny-sounding audio output devices. This is largely due to ASUS's unhealthy obsession with stuffing speakers into their monitors and every other device they manufacture. A toaster does not need a speaker. A kettle does not need a speaker. If your kettle has something to say, let it speak its piece through your network's smart speaker.
  • No Gaudy branding. Ever seen a SuperDry t-shirt? Loud and obnoxious, isn’t it? They say pets are like their owners, well the same seems to be true for keyboards. Subtle and refined, classy and tasteful and professional. Leave the ‘FRAGZ_GAMER_WINZOR_STEALTH_PRO_ALPHA_9_EDITION’ branding at home.
  • No Power Buttons. Whose idea was it to place a shutdown button next to media keys? I’m looking at you DAS 4. Really this is exceptionally bad design, so bad that I simply cannot contemplate the DAS 4 as an option.

Summary

In short, the ideal keyboard would have

  • DAS 4 media controls.
  • Code Keyboard’s backlighting and elegant form with quiet Cherry MX Browns.
  • Would use a detachable USB-C not Micro USB cable to charge/toggle its wired and wireless form.
  • Would have a 105 UK layout.
  • Would be subtle and understated i.e. black except for the back-light which would be white.
  • Would be made of rugged materials like steel.
  • Would not have anything that is orthogonal to the form and function of a keyboard.
  • Would actually exist.

I think I'll settle for the UK Filco Convertible 2 MX Brown Tactile when it's back in stock. I can at least add my own magnetically detachable and reversible USB cable, and make up for the lack of backlighting and illuminated keycaps with a keyboard lamp.


The Trouble With Maven Release Plugin

In a previous post I talked about what the maven-release-plugin does and explained how a typical mvn release:clean release:prepare release:perform command actually works.

Last week I had to integrate a maven project I had written into a GitLab CI environment. Up until then I had been cutting releases using the Maven release plugin locally on the command line.

The workflow is something like this:

  • Make sure there’s no uncommitted source code.
  • Clean out the working directory of any temporary release files (like backup poms, release.properties etc) and then get to work on the actual release.

Remember, performing a release should be nothing more than cutting SNAPSHOT releases into solid releases, committing this change, and tagging it with the solid version. You can then build and release artifacts from this commit if necessary.

As a quick recap, mvn release:prepare makes the solid release and then promotes the pom to a new SNAPSHOT version. You can then compile and ship the artifact by running mvn release:perform, which builds and releases the code from the tagged commit.

When integrating with CI this is what we’d like to happen

  • Developer commits a new feature to master (or makes a merge request for an entire feature to master).
  • Continuous integration builds and tests the code.
  • Upon successful build, the pipeline would ask "do you want to release?"

If you say yes, then the build job should cut, tag and push the release commit. It then increments the project version to a new SNAPSHOT and commits this so developers can continue working. In total, two additional pushes have occurred. As a result of the tag being pushed, the CI system triggers another pipeline, which is responsible for building the project to generate the artefact, uploading the artefact to the release area, and generating a release on the releases page. The push of the new SNAPSHOT commit triggers a second build, which makes sure the new SNAPSHOT version still builds fine.

This all sounds great, but when you actually try to build this workflow in practice using something like GitLab, you quickly find that the implementation details of the release plugin cause problems with pipeline creation.

But why? Here are the logs:

01 [INFO] Executing: /bin/sh -c cd /builds/pet-projects/project && git commit --verbose -F /tmp/maven-scm-51222345.commit pom.xml project-common/pom.xml project-silent/pom.xml project-slack/pom.xml project-server/pom.xml project-test/pom.xml project-application/pom.xml project-installer/pom.xml
02 [INFO] Working directory: /builds/pet-projects/project
03 [INFO] Executing: /bin/sh -c cd /builds/pet-projects/project && git symbolic-ref HEAD
04 [INFO] Working directory: /builds/pet-projects/project
05 [INFO] Executing: /bin/sh -c cd /builds/pet-projects/project && git push git@gitlab.solong.co.uk/pet-projects/project.git refs/heads/master:refs/heads/master
06 [INFO] Working directory: /builds/pet-projects/project
07 [INFO] Tagging release with the label project-1.17...
08 [INFO] Executing: /bin/sh -c cd /builds/pet-projects/project && git tag -F /tmp/maven-scm-67289022.commit project-1.17
09 [INFO] Working directory: /builds/pet-projects/project
10 [INFO] Executing: /bin/sh -c cd /builds/pet-projects/project && git push git@gitlab.solong.co.uk/pet-projects/project.git refs/tags/project-1.17
11 [INFO] Working directory: /builds/pet-projects/project
12 [INFO] Executing: /bin/sh -c cd /builds/pet-projects/project && git ls-files
13 [INFO] Working directory: /builds/pet-projects/project

If we look closely at the logs, we can see that for the initial cut, mvn release actually does two pushes instead of one. The first comes after the untagged commit containing the solid release pom; the second pushes the tag upstream once it has been created. Remember, every push results in a pipeline trigger. The effect is that two release pipelines appear, with one of them cancelled. This is simply the way the plugin has been implemented and there's nothing that can be done about it (short of changing the plugin's implementation).

As a result, the way mvn release:prepare operates is wrong for a CI context. release:prepare is the most useful part of the release plugin, as it is the part that actually does the work of manipulating the pom and the SCM, so not being able to invoke it renders the plugin almost useless.

What is truly absurd here is that the maven-release-plugin doesn't actually do anything with commits itself; it delegates the whole process to the SCM plugin, which is quite capable of committing without pushing. I'm sure I'm not the only person who has encountered this problem, and having read the documentation from top to bottom and found no additional flags to solve it, I assume the Maven folks won't fix it.

The alternatives are:

  • Reimplement a better maven release plugin.
  • Contribute a fix to the existing one.
  • Or just write the rather short set of instructions directly in .gitlab-ci.yml

Here's the equivalent behaviour in GitLab CI (as a replacement for the maven-release-plugin):

script:
  #release the current version
  - mvn org.codehaus.mojo:versions-maven-plugin:2.7:set -DremoveSnapshot=true
  - CURRENT_VERSION=$(mvn -q -Dexec.executable=echo -Dexec.args='${project.version}' --non-recursive exec:exec)
  - git add pom.xml **/pom.xml
  - git commit -am "Releasing Project $CURRENT_VERSION"
  - git tag -a $CURRENT_VERSION -m "Releasing Project $CURRENT_VERSION"
  - git push git@gitlab.solong.co.uk:pet-projects/project.git $CURRENT_VERSION

  #roll the next version
  - mvn org.codehaus.mojo:versions-maven-plugin:2.7:set -DnextSnapshot=true
  - CURRENT_VERSION=$(mvn -q -Dexec.executable=echo -Dexec.args='${project.version}' --non-recursive exec:exec)
  - git add pom.xml **/pom.xml
  - git commit -am "Bumping pom.xml version to $CURRENT_VERSION"
  - git push git@gitlab.solong.co.uk:pet-projects/project.git

When you push the tag, the corresponding commit is pushed with it. When CI sees the tag, it starts a pipeline on the basis of a tag ref; since our publish stage only listens to tag references, we end up building the code as expected.
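
For completeness, here's a sketch of what that publish job might look like in .gitlab-ci.yml; the job name, stage name and deploy command are assumptions, so adapt them to your own pipeline.

publish:
  stage: publish
  only:
    - tags
  script:
    # runs only for tag pipelines, i.e. the tag pushed by the release steps above
    - mvn -B deploy -DskipTests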

It has to be said, with so many individual Maven plugins having to be configured to produce a released artefact and a tagged commit, it might be worth sitting down and writing a better Maven release plugin - one that ideally orchestrates the whole flow using the existing plugins.


Packaging java applications as RPMs in maven.

I recently migrated away from Debian-based servers to RHEL, and with that came the requirement to repackage all my services as RPMs. All my software is packaged as an RPM; this makes it easy to install on bare metal or within a Docker container, with Ansible or with Puppet. It's easy to manage and vastly superior to zipping things up: you can ensure that the correct permissions are applied to the files for a secure installation, that the appropriate users are created, and that there is a consistent installation path, which makes support and documentation easier.

Up until this migration, I had been using a trusty Maven plugin called jdeb, which works wonders on both Windows and Linux. I can't sing the praises of this plugin enough; it works out of the box cross-platform and it's incredibly simple to use. The cross-platform aspect was key for me as I was primarily a Windows developer deploying to a Linux environment.

However, having recently committed to development on a Mac for home projects and a Linux machine at work, the requirement to develop on Windows has fallen by the wayside. I've provided the above context because the RPM plugin I'm going to talk about only works in an environment that has the rpmbuild tooling, which effectively means Linux. That said, with the recent release of the Windows Subsystem for Linux (WSL), I'm sure it's possible to build projects that use this RPM plugin on Windows too.

For this article, I will assume we want to package a Java Maven project that has a start script generated by the appassembler plugin.

The requirements for this service are as follows.

  • Have maven generate an rpm
  • The rpm should install the application.
  • The rpm should force a particular Java Runtime to be installed if necessary.
  • The service should be configurable to run automatically on startup
  • Any log directories should be created.
  • The service should be installed under a particular user (which should be created if necessary)
  • The service’s configuration directory should be private and secure
  • The service should clean itself up upon uninstallation
  • The service should preserve data files during upgrades

Let's start with the RPM plugin declaration.

This code goes in the pom of whichever modules have artifacts that need to be packaged as an RPM. In a multi-module Maven project it's fine to have two RPM declarations; you just have to do the work for each module. For the purposes of this guide, we'll only do it for one module.

To assist with installation (like ensuring certain users are created, or certain directories exist) we can provide some shell scripts - four in total. When a package is installed, upgraded or removed, these four scripts are run at different stages of the process. The stages are:

  • preinstall
  • postinstall
  • preuninstall
  • postuninstall

We can give the scripts whatever names we like, so long as the names match what is declared in the pom file.

Here’s what happens when you run the install, uninstall and upgrade scenarios respectively

yum install mypackage

# preinstall
# files copied
# postinstall
yum remove mypackage

# preuninstall
# files removed
# postuninstall
yum upgrade mypackage

# preinstall (new package)
# new files copied
# postinstall (new package)

# preuninstall (old package)
# old files removed
# postuninstall (old package)

Notice that there's no 'upgrade' script. On the face of it this looks like it could be a problem, as we probably only want to delete the data directory if the package is being uninstalled for good. It turns out that an upgrade can be detected via an argument, $1, that RPM passes into each script. RPM counts how many versions of the package will be present as a result of the ongoing transaction and passes that count in as the argument (recall that an upgrade is effectively two operations: an install of the new version followed by an uninstall of the old one).

  • When you install for the first time, the parameter is 1 (in the context of the install scripts).
  • When you upgrade (which is essentially an install of the new version followed by an uninstall of the previous one):
    • The parameter value is 2 or more (in the context of the install scripts)
    • The parameter value is 1 or more (in the context of uninstall scripts).
  • When you uninstall entirely, the parameter value is 0 (in the context of uninstall scripts).

Using this information, we can ensure that the install scripts only create users when $1 == 1, and that the uninstall scripts only remove users when $1 == 0, as sketched below.
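
For example, minimal preinstall and postuninstall scripts might look like this (a sketch - myuser is a placeholder for whatever service account your application needs):

# preinstall.sh
if [ "$1" -eq 1 ]; then
    # first install only: create the service user if it doesn't already exist
    getent passwd myuser >/dev/null || useradd -r -s /sbin/nologin myuser
fi

# postremove.sh
if [ "$1" -eq 0 ]; then
    # full uninstall (not an upgrade): remove the service user
    userdel myuser
fi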

Now let's look at how we define the RPM's creation in Maven.

Start by creating the four scripts and placing them in a scripts directory in the project (src/main/rpm-scripts in the pom below).

Then add the following to the <build><plugins> section of the pom:

  <plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>rpm-maven-plugin</artifactId>
    <version>2.2.0</version>
    <extensions>true</extensions>
    <executions>
      <execution>
        <id>build-rpm</id>
        <goals>
          <goal>attached-rpm</goal>
        </goals>
        <phase>package</phase>
      </execution>
    </executions>
    <configuration>
      <license>MIT (c) ${project.inceptionYear} ${project.organization.name}</license>
      <distribution>Project Distribution</distribution>
      <icon>src/main/resources/icon.xpm</icon>
      <group>Applications/Editor</group>
      <packager>Dan Burrell daniel@solong.co.uk</packager>
      <prefix>/usr/local</prefix>
      <changelogFile>src/changelog</changelogFile>
      <defineStatements>
        <defineStatement>_unpackaged_files_terminate_build 0</defineStatement>
      </defineStatements>
      <mappings>
        <mapping>
          <directory>/opt/companyname/product/repo</directory>
          <filemode>755</filemode>
          <sources>
            <source>
              <location>target/appassembler/repo</location>
            </source>
          </sources>
        </mapping>
        <mapping>
          <directory>/usr/lib/systemd/system/</directory>
          <filemode>644</filemode>
          <sources>
            <source>
              <location>src/main/scripts/service/myproduct.service</location>
              <destination>myproduct.service</destination>
            </source>
          </sources>
          <directoryIncluded>false</directoryIncluded>
          <configuration>false</configuration>
        </mapping>
        <mapping>
          <directory>/opt/companyname/product/bin</directory>
          <filemode>755</filemode>
          <sources>
            <source>
              <location>target/appassembler/bin</location>
            </source>
          </sources>
        </mapping>
      </mappings>
      <preinstallScriptlet>
        <scriptFile>src/main/rpm-scripts/preinstall.sh</scriptFile>
        <fileEncoding>utf-8</fileEncoding>
        <filter>true</filter>
      </preinstallScriptlet>
      <postinstallScriptlet>
        <scriptFile>src/main/rpm-scripts/postinstall.sh</scriptFile>
        <fileEncoding>utf-8</fileEncoding>
        <filter>true</filter>
      </postinstallScriptlet>
      <preremoveScriptlet>
        <scriptFile>src/main/rpm-scripts/preremove.sh</scriptFile>
        <fileEncoding>utf-8</fileEncoding>
        <filter>true</filter>
      </preremoveScriptlet>
      <postremoveScriptlet>
        <scriptFile>src/main/rpm-scripts/postremove.sh</scriptFile>
        <fileEncoding>utf-8</fileEncoding>
        <filter>true</filter>
      </postremoveScriptlet>
      <requires>
        <require>java-1.8.0-openjdk &gt; 1.8</require>
      </requires>
    </configuration>
  </plugin>

  • The use of the filter tags ensures that we can inject relevant property information in all our scripts.
  • The service section ensures we can configure the service to startup automatically (if we enable it).
  • The permissions 644 root:root are correct for the systemd unit file.
  • You’ll need an xpm file as the icon.
  • The license normally has a license type, but I often put the copyright details there (because I can).
  • Note how we map from the target/appassembler/bin directory where executable scripts which start the application are located.
    • These are mapped to the /bin directory on the destination system under an opt path. Ensure that the filemode is correct for your application. (755 is reasonable).
  • Note how we also map our jar binaries from the repo directory in the build folder to /opt/companyname/product/repo on the destination system.
    • Again 755 is fine.
  • That last requires section ensures a Java 8 runtime is installed automatically so our service works.

The preinstall script should do things like create users with nologin

   useradd -s /sbin/nologin myuser

Here’s an example .service file

[Unit]
Description=My Service
[Service]
ExecStart=/opt/companyname/product/bin/myproduct
User=myuser
Type=simple
SuccessExitStatus=143
[Install]
WantedBy=multi-user.target

What's with SuccessExitStatus=143? Well, the JVM exits with code 143 when it receives SIGTERM; it conflates its exit status with the signal received (128 + 15). Without being told that this is fine, systemd will think something went wrong. This setting solves that.
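
If you want the service to start on boot, the postinstall script is a reasonable place to enable it; a minimal sketch (assuming systemd, as above) is:

# postinstall.sh
systemctl daemon-reload
systemctl enable myproduct.service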

That's it! Once you know how the scripts are invoked and what each one is responsible for, the rest is fairly easy.
