Let's Encrypt UniFi Cloud Key

Having put up with the stock router my ISP gave me for the last 10 years, I finally shelled out for Ubiquiti’s UniFi Controller, USG, and IW-HD AP. On my old router I’d always have to click “Proceed” every time my browser complained about the self-signed certificate. On top of that, remembering the router’s IP address every time I needed to configure something, and typing it in, became annoying.

So in this post I’m going to cover how to set up the UniFi Controller to be reachable by hostname, and then how to replace the tedious self-signed certificates that are generated when you set up the Cloud Key.

Note: There’s nothing wrong with self-signed certificates, and beyond sparing the user ‘Just Click Yes’ override fatigue, there isn’t much of a security improvement in replacing a self-signed certificate with an official one. So for me, aside from learning about the services deployed on the Cloud Key and where they store their certificates, this is about convenience and avoiding interruptions in my workflow. Yes, it’s a bit like the ‘Fly’ episode of Breaking Bad, where Walter stops his production line and the audience is subjected to a whole hour of watching him hunt round his lab to eliminate a fly before production can resume. Just as Walter found that fly annoying, I find IP addresses and self-signed certificates annoying too.

We’re going to use Let’s Encrypt to generate the certificates for free. In particular, we’re going to use the DNS-01 challenge because it’s the only one that allows a certificate to be issued for a host inside a private network - that is, we get the certificate by proving to Let’s Encrypt that we control the domain’s DNS, rather than by exposing any ports from our network. I’d personally use this method even if we weren’t behind a private network, as I think it’s a much cleaner approach architecturally, and also in terms of security and reliability (port forwarding is not an elegant solution in this situation).


Everything I do here has been written up as a nice collection of scripts which you can get at danielburrell/unifi-certs

Here’s what we’re going to do to achieve this:

  • Configure the Cloud Key to associate a domain with the network.
  • Configure AWS with a dedicated IAM identity and policy for the automatic creation of DNS records under our domain of choice.
  • Install pip3, because apparently the UniFi Cloud Key Gen2 doesn’t come with this installed.
  • Use certbot and its Route53 plugin to acquire a certificate for our target machine cloudkey.home.example.com
  • Use openssl to convert the certificates into an intermediate format.
  • Replace the certificates, and import them into the key store.
  • Restart the relevant services so they pick up the new certificate.
  • Set up a cron job so that we refresh the certificates before Let’s Encrypt’s 90-day expiry.


Before we go any further, a word on security. This method relies on supplying AWS credentials that effectively allow the creation and destruction of DNS records, so it is vital to follow security best practices by:

  • Creating a separate non-root IAM identity to access the Route53 API.
  • Properly defining the policy that provides the various permissions.
  • Keeping these permissions to a minimum.
  • Not leaving the credentials in an unsecured location.
  • Not being lazy and re-using the credentials for other projects.


I’m going to assume a certain competence with AWS, in particular creating IAM users and policies and using Route53, along with knowledge of how DNS works and its various record types (in particular TXT), whether that’s through the UI, terraform, or similar.

I’m also going to assume you already have a domain, and have already created a zone in Route53 with an A record. This domain must be real, and you must actually own it, such that DNS records resolve correctly via Route53. For example:

  • Your domain on the public internet, e.g. example.com. This MUST exist.

In this tutorial we will then create:

  • Your desired subdomain for your LAN, e.g. home.example.com. This is a figment of your router’s imagination.

Configure Cloud Key with a domain

We’re going to configure your network with a name. This is the suffix that all your devices will be named on, so if you have a device with a hostname dans-iphone, and your domain is home.example.com then you’ll be able to reach this device at dans-iphone.home.example.com

  • Log into your Ubiquiti Cloud Key’s web interface.
  • Go to Settings -> Networks
  • Find the LAN (most people only have one!), and click Edit (you might need to hover over the entry to see the edit button, it’s on the far right).
  • Open the Advanced Section
  • Find the Domain Name setting and set it to a subdomain e.g. home.example.com.
  • Click Apply Changes

Installing Prerequisite Packages

We’ll be using certbot and certbot’s route53 plugin. Certbot can be installed with the package manager. However, the route53 plugin is a Python module, so we’ll also need to install pip3, as it’s not installed by default even though python3 is.

With the previous changes you should be able to ssh to the controller via its hostname. You can find out the hostname by going to Settings -> System Settings, and looking at the Device Name field.

Assuming that the hostname is cloudkey, you should be able to log on using:

ssh cloudkey.home.example.com

apt-get install python3-pip
apt-get install certbot
pip3 install certbot-dns-route53

Creating the policy

Log into AWS and look for Route53. Find the hosted zone ID for your domain and make a note of it.

The following policy, when bound to an IAM group or user, allows the user to perform the necessary DNS record operations in Route53. The key feature of this policy is that it allows record changes for the given domain.

Create this policy, replacing the YOURZONEIDHERE placeholder with your own zone ID:

    "Version": "2012-10-17",
    "Id": "update-rdp-dns-route53",
    "Statement": [
            "Effect": "Allow",
            "Action": [
            "Resource": [
            "Effect" : "Allow",
            "Action" : [
            "Resource" : [

Now create a new IAM user, create a group for it, and bind the newly created policy to the group. When you create your IAM user you will be given a pair of credentials. Note these down and store them somewhere sensible, like a password manager.

Stashing and securing AWS credentials

We need to put the AWS credentials on our cloudkey so that certbot’s route53 plugin can use them to create the necessary DNS records to issue our certificate. Note there are a number of ways for certbot to read the credentials so if you prefer an alternative approach, then feel free to deviate from these next instructions.

  • Go to the cloudkey again ssh cloudkey.home.example.com
  • Create the file mkdir -p ~/.aws && touch ~/.aws/config && chmod 600 ~/.aws/config
  • Open the file for editing vi ~/.aws/config
  • Set the content, substituting your keys as appropriate (if you’re not sure which key is which, the access_key_id is the shorter value that starts with AK)
  • Save the contents :wq
  • Secure the file some more chmod 400 ~/.aws/config
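
certbot’s route53 plugin picks up credentials through boto’s standard search path, so a minimal ~/.aws/config looks like this (the key values below are obviously placeholders; substitute the pair you noted down when creating the IAM user):

```ini
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
```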

Acquiring the certificates

Let’s acquire the certificates by calling certbot. I’m going to request a wildcard by requesting *.home.example.com but you don’t have to. For example you can request a non-wildcard certificate using -d cloudkey.home.example.com. I’m not going to go into the benefits and drawbacks of wildcard certificates.

certbot certonly \
        --dns-route53 \
        --dns-route53-propagation-seconds 30 \
        --noninteractive \
        -d '*.home.example.com'

The --noninteractive flag stops certbot from asking questions about what to do if the renewal isn’t yet needed. By default certbot will not renew unless the certificate is close to expiry. This is useful for the cron job that we will create later.

If this command runs successfully then certbot will create

  • a private key /etc/letsencrypt/live/home.example.com/privkey.pem
  • a certificate /etc/letsencrypt/live/home.example.com/cert.pem

Importing the certificate into the nginx and unifi services

The next sections are dedicated to deploying these keys to two services:

  • nginx
  • unifi

Ultimately, from our privkey.pem and cert.pem, we’re aiming to create:

  • a cert.tar file /etc/ssl/private/cert.tar containing
    • key file /etc/ssl/private/cloudkey.key
    • crt file /etc/ssl/private/cloudkey.crt
    • Java Key Store /etc/ssl/private/unifi.keystore.jks

To build the keystore we will first need to create a temporary p12 (PKCS#12) version of the pems. From this we will create:

  • The keystore /usr/lib/unifi/data/keystore which is actually just a symlink to /etc/ssl/private/unifi.keystore.jks

I presume this is because one of the services requires the key and cert files (probably nginx), while the other is written in Java and requires a Java Key Store.

I’m not going to question this certificate management approach except to remark that it’s not how I would have done it…

Let’s begin!

Stop the unifi service (nobody likes to have the certificate rug pulled from under them):

service unifi stop

Delete the current self-signed configuration located in /etc/ssl/private/

rm -f /etc/ssl/private/cert.tar
rm -f /etc/ssl/private/unifi.keystore.jks
rm -f /etc/ssl/private/unifi.keystore.jks.md5
rm -f /etc/ssl/private/cloudkey.crt
rm -f /etc/ssl/private/cloudkey.key

Import to the JKS for unifi service

Let’s use openssl to generate a random password for us. This is just to secure an ephemeral p12 bundle:

ephemeralPassword=$(openssl rand -base64 32)

openssl pkcs12 \
         -export \
         -out /etc/ssl/private/cloudkey.p12 \
         -inkey /etc/letsencrypt/live/home.example.com/privkey.pem \
         -in /etc/letsencrypt/live/home.example.com/cert.pem \
         -name unifi \
         -password pass:$ephemeralPassword
keytool -importkeystore -deststorepass aircontrolenterprise \
        -destkeypass aircontrolenterprise \
        -destkeystore /usr/lib/unifi/data/keystore \
        -srcstorepass $ephemeralPassword \
        -srckeystore /etc/ssl/private/cloudkey.p12 \
        -srcstoretype PKCS12 -alias unifi

rm -f /etc/ssl/private/cloudkey.p12

The keytool command above imports the key from the ephemeral p12 bundle into the keystore, after which we dispose of the p12 file. Note that the password for the keystore is not random: it must be aircontrolenterprise, as this hardcoded value is what the service expects, and it is non-configurable at the time of writing. Ubiquiti’s decision to have a known hardcoded value completely defeats the purpose of encrypting the keys, but that’s none of my business.
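
If you’d like to sanity-check the openssl pkcs12 step without touching the real keys, the same invocation round-trips on a throwaway self-signed pair. The temp-dir paths below are stand-ins for the real privkey.pem and cert.pem:

```shell
# Throwaway key and cert standing in for the Let's Encrypt pems
tmp=$(mktemp -d)
pw=$(openssl rand -base64 32)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=cloudkey.home.example.com" \
    -keyout "$tmp/privkey.pem" -out "$tmp/cert.pem" 2>/dev/null

# Same export as above, against the stand-in files
openssl pkcs12 -export -out "$tmp/cloudkey.p12" \
    -inkey "$tmp/privkey.pem" -in "$tmp/cert.pem" \
    -name unifi -password "pass:$pw"

# The bundle should open again with the same ephemeral password
openssl pkcs12 -in "$tmp/cloudkey.p12" -noout -password "pass:$pw" && echo "p12 OK"
```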

Importing for nginx

So far we’ve fixed unifi; now we just need to get nginx to use the LE certs. To do this we copy the LE certs across verbatim (they’re already in the ASCII PEM format we need). We then tar these up and leave the tar file in the same directory, presumably to be discovered by nginx.

cp /etc/letsencrypt/live/home.example.com/privkey.pem /etc/ssl/private/cloudkey.key
cp /etc/letsencrypt/live/home.example.com/cert.pem /etc/ssl/private/cloudkey.crt

tar -cvf /etc/ssl/private/cert.tar -C /etc/ssl/private cloudkey.crt cloudkey.key unifi.keystore.jks

chown root:ssl-cert /etc/ssl/private/*
chmod 640 /etc/ssl/private/*
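
On the Cloud Key, tar -tf /etc/ssl/private/cert.tar should list exactly the three files described earlier. The sketch below dry-runs the same assembly on dummy files in a temp dir, so it can be tried anywhere:

```shell
# Dummy stand-ins for the three real files
work=$(mktemp -d)
touch "$work/cloudkey.crt" "$work/cloudkey.key" "$work/unifi.keystore.jks"

# Same tar invocation shape as above, naming the files explicitly
tar -cvf "$work/cert.tar" -C "$work" cloudkey.crt cloudkey.key unifi.keystore.jks

# List the archive contents to confirm
tar -tf "$work/cert.tar"
```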

Restart the services

Let’s check that the nginx config is still valid and then restart the services

/usr/sbin/nginx -t

service nginx restart
service unifi start

All done!

Testing it

Visit your controller at https://cloudkey.home.example.com. The certificate should be valid, and signed by Let’s Encrypt’s intermediate (R3 at the time of writing).
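
You can also inspect a certificate from the command line: openssl x509 prints the subject and expiry of any PEM certificate, so on the Cloud Key you can point it at /etc/letsencrypt/live/home.example.com/cert.pem. The sketch below generates a throwaway self-signed cert so the commands are runnable anywhere:

```shell
# Throwaway self-signed cert so the inspection commands run anywhere;
# on the Cloud Key, inspect /etc/letsencrypt/live/home.example.com/cert.pem instead.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 90 \
    -subj "/CN=cloudkey.home.example.com" \
    -keyout "$tmp/key.pem" -out "$tmp/cert.pem" 2>/dev/null

# Who is this cert for, and when does it expire?
openssl x509 -in "$tmp/cert.pem" -noout -subject -enddate

# To check what the live server is actually serving:
#   openssl s_client -connect cloudkey.home.example.com:443 </dev/null | openssl x509 -noout -enddate
```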

Automating Fully

Recall that all of the above has been wrapped up in a nice script for you at danielburrell/unifi-certs. You can run the get-certs.sh script every Sunday at midnight by running:

echo "$(echo '0 0 * * 0 /root/get-certs.sh home.example.com' ; crontab -l)" | crontab -

We run the job weekly even though the certificate expires every 90 days, because if you don’t run it often you risk letting the certificates lapse. E.g. if we ask to renew every 60 days and get told we’re not close enough to the 90-day mark to renew, then our next renewal attempt would be on day 120. That’s 30 days late!
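
The arithmetic behind that, as a sketch (assuming, as in the example above, that a day-60 run is refused as too early):

```shell
# Under a 60-day cadence, a refused day-60 run means the next
# attempt lands on day 120: 30 days after the 90-day expiry.
cadence=60
refused_run=60
next_run=$((refused_run + cadence))
echo "next attempt: day $next_run ($((next_run - 90)) days past expiry)"
```

A weekly cadence instead gives several eligible runs inside the renewal window before the certificate lapses.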


I think it should be possible to do all the above as a non-root user and improve the security there.


I spent a while reverse engineering this process and figuring out how to reduce the number of intermediate stages required to get the LE certs in the right place despite all the confusing file extensions.

I think it all works rather well so if you have any comments or feedback let me know below.