Managing SSH host keys in a reliable way
I’ve been managing my virtual machines using Foreman for close to 2 years now, and that’s brought me a huge set of benefits in terms of how I test new code (or changes to existing code), and new packages. That’s just awesome :)
But repeated rebuilds of a machine lead to one small niggling problem. One which bites you on every rebuild. One which doesn’t stop you working, but requires a few extra keypresses after every rebuild, and possibly at every login.
Not got it yet? Does this look familiar?
[greg:~]$ ssh test2
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
c1:63:5e:c2:e4:c7:2a:19:fd:80:11:a2:73:c2:f6:b1.
Please contact your system administrator.
Add correct host key in /home/greg/.ssh/known_hosts to get rid of this message.
Offending RSA key in /home/greg/.ssh/known_hosts:217
RSA host key for test2 has changed and you have requested strict checking.
Host key verification failed.
(If it doesn’t, the answer is changed SSH host keys, but the rest of this blog probably won’t make much sense :P)
Yeah. Irritating, isn’t it? Every time, every rebuild, a few seconds wasted. It adds up. There’s got to be a way to make this go away, right?
The ‘Traditional’ Way
The way most Puppet users might approach this problem would be to use one of Puppet’s greatest features: exported resources. This feature allows you to exchange information between hosts that the puppet master doesn’t know a priori. It’s a quick solution, from a code perspective, and making sure all the machines in your infrastructure have each other’s keys is really simple (example courtesy of Puppet Labs’ documentation):
class ssh {
  # Declare:
  @@sshkey { $hostname:
    type => dsa,
    key  => $sshdsakey,
  }
  # Collect:
  Sshkey <<| |>>
}
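(In case you’ve not met the sshkey type before: each collected resource becomes an entry in the system-wide /etc/ssh/ssh_known_hosts on every node that collects it - the equivalent of hand-writing something like this per host, with the key value here invented purely for illustration:)
# One collected entry boils down to an ssh_known_hosts record like this.
sshkey { 'test2':
  ensure => present,
  type   => dsa,
  key    => 'AAAAB3NzaC1kc3MAAACBA...',  # invented/truncated example value
}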
The problem with this is two-fold. Firstly, it’s a quick win in code, but only if you’re already using exported resources. See, to use them, you have to be using a database backend (storeconfigs) for Puppet itself. Given the database backend is otherwise optional, not everyone has one set up. If I want to write a nice solution for everyone to use, this isn’t going to work.
Second, it’s also slow. When a machine changes its host keys, it’s going to take two Puppet runs for the systems to catch up - the first when the new VM uploads its new key to the Puppet database, and the second when my laptop retrieves the new key and updates its key list.
So that’s actually slower than just deleting the changed key from my known_hosts. No good.
The ‘Foreman’ Way
My next thought was to look at how Foreman can be used to replace exported resources. This has been covered in other blogs (the Foreman Blog covered this a while back). Since the facts of a host are uploaded to Foreman, we could replicate the above code by doing a Foreman search for all the $sshkey facts and writing them to a file. Pretty neat.
However, this doesn’t help either. Not everyone uses Foreman (even if I think they should :P), and we still have the two-run problem, which makes it slower to fix the problem than to suffer it. We’re not getting closer…
Inspiration Via Services
The solution came to me when working on a module for backups. I needed a way to allow the backup system to SSH onto the backup targets to initiate rsync. Ideally, I didn’t want to store the private key in Puppet, since I was planning to publish the repo. But I also wanted the module to work without an end user having to manually add an admin SSH key for the service. I Googled around and came across this: Github: fup/puppet-ssh
This is a function to generate keys on demand, store them on the Puppet master, and make it possible to read them back (both the private and public parts) for use in Puppet manifests.
Bingo! With this function I can create a key the first time it is requested; thereafter it will be re-read from the keystore dir on the puppetmaster. Since the puppetmaster isn’t the VM being rebuilt, when I recreate my VM it’ll get the same SSH key as the last time it was built.
The function didn’t quite fit my needs, as I want to keep some types of keys separated by environment (development backup servers shouldn’t have access to production machines, for example), so I forked the function and extended it a little. You can find the result at Github: GregSutcliffe/puppet-modules, but let’s take a quick look at how the module works.
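To give a flavour of what that separation looks like, the simplest approach is to embed the environment in the keystore path. This is a hypothetical sketch (the fork’s actual layout may differ - see the repo), using the dir parameter we’ll meet properly in a moment:
# Hypothetical: one keystore per environment, so a development agent
# is never handed a key generated for production.
$rsa_priv = ssh_keygen({
  name => "ssh_host_rsa_${::fqdn}",
  dir  => "ssh/${::environment}/hostkeys",
})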
The Code
Most of the ssh module is fairly tedious - make sure it’s installed, manage the config file, start the service, yadda yadda… The only interesting bit is the key handling. Let’s take a snippet straight from the repo:
$rsa_priv = ssh_keygen({name => "ssh_host_rsa_${::fqdn}", dir => 'ssh/hostkeys'})
$rsa_pub  = ssh_keygen({name => "ssh_host_rsa_${::fqdn}", dir => 'ssh/hostkeys', public => 'true'})
What’s happening here? Well, a couple of things. Firstly, the dir parameter is my extension to Fup’s original function - it allows me to specify where to store the keys on the puppet master. Otherwise, we’re asking the function to read (and, if required, create) a key named ssh_host_rsa_myvm.fqdn.com and read both the private and public parts into appropriately named variables.
file { '/etc/ssh/ssh_host_rsa_key':
  owner   => 'root',
  group   => 'root',
  mode    => '0600',
  content => $rsa_priv,
}

file { '/etc/ssh/ssh_host_rsa_key.pub':
  owner   => 'root',
  group   => 'root',
  mode    => '0644',
  content => "ssh-rsa ${rsa_pub} host_rsa_${::hostname}\n",
}
Here we take those variables and apply them to the server in question. Thus, when this module runs, it will overwrite the auto-generated ssh key with the one from the puppetmaster. As such, whenever I log into the machine, it will always have the same key (and the same fingerprint) so my known_hosts file is happy.
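One extra wrinkle worth knowing about: sshd only reads its host keys at startup, so if Puppet replaces the key file on a running machine, the old key stays in use until the daemon restarts. The module’s service handling should already cover this, but a minimal sketch of the idea looks like so (the service name varies by distro - sshd on Red Hat, ssh on Debian):
# Restart the daemon whenever the managed host key changes, so the
# running sshd actually serves the key Puppet just wrote.
service { 'sshd':
  ensure    => running,
  enable    => true,
  subscribe => File['/etc/ssh/ssh_host_rsa_key'],
}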
Success!
Security Caveat
There is a minor security issue - all the keys generated by the function live on the puppet master. Technically, if they got into the wrong hands, that could be bad. However, the puppet master is the machine which hands out configuration data to your whole infrastructure; if someone compromises it to the point where they can read those keys, it’s already game over. Plus, if they leak some other way, regenerating all the keys in your infrastructure is only a rm -rf /etc/puppet/ssh away ;)
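If that still makes you nervous, it costs nothing to lock the keystore down on the master. A sketch, assuming the /etc/puppet/ssh path from above and a master running as the puppet user (adjust to taste):
# Hypothetical hardening: keep the generated keys readable only by the
# master's own user.
file { '/etc/puppet/ssh':
  ensure => directory,
  owner  => 'puppet',
  group  => 'puppet',
  mode   => '0700',
}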
So there you have it - consistent SSH host keys for your machines every time! Better still, it works for everyone, regardless of database backends or other external stores of data. It’s also fast - since I do a small Puppet run as part of provisioning my machines, the host key is already set when the machine comes up at first boot. All my requirements met. Wonderful :)