How To Manage SSH Keys Using Ansible

by Applied Informatics

Key management is an issue whenever access to servers must be controlled. Keys must be added when new users are created, old keys must be removed when users are deleted, and keys must be updated when someone forgets a passphrase.

We should also not allow individual users to have control over their own authorized_keys file. Instead, we should make use of the AuthorizedKeysFile option for SSHD and place the keys under the /etc/ssh/authorized_keys directory. This prevents users from adding/changing their respective ssh-keys and also prevents an intruder from adding their own key. This approach centralizes the control and location of all ssh-keys using standard SSHD configuration.

Here is how we can use Ansible as a configuration manager to manage the servers. This will:

  • add authorized_keys files for new users
  • disable existing users
  • maintain authorized_keys file for existing users

The following is a list of routine maintenance tasks and how to perform them:

Adding a new user

When we add a new user, we add them to SSH users and create their authorized_keys file at /home/ansible/crossplatform/etc/ssh/authorized_keys/ on our Ansible server. We then invoke the playbook, and the authorized_keys files are copied to /etc/ssh/authorized_keys on all servers.
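The convention here is one file per username, named exactly after the account. A quick local illustration (a temp directory stands in for the control node's key directory, and the key string is a placeholder):

```shell
# One file per username, named exactly after the account.
# A temp dir stands in for /home/ansible/crossplatform/etc/ssh/authorized_keys/.
keydir=$(mktemp -d)
echo "ssh-rsa AAAAB3Nza... user6@laptop" > "$keydir/user6"
ls "$keydir"    # prints: user6
```

After the playbook runs, that file would land on every server as /etc/ssh/authorized_keys/user6.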

Disabling a user

To disable a user, delete the contents of their respective file at /home/ansible/crossplatform/etc/ssh/authorized_keys/, then invoke the playbook. We could easily go a step further and lock the account's password (e.g. passwd -l user-X). We do not remove the user from SSH users yet; that will not be done until we delete the user from the system.
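Emptying the file (rather than deleting it) matters, because the playbook then pushes an empty authorized_keys file that overwrites the user's keys on every server. A local illustration of the truncation idiom (the temp file stands in for the user's key file):

```shell
# ':' is the shell no-op; redirecting its empty output truncates the file in place.
keyfile=$(mktemp)                                 # stand-in for .../authorized_keys/user5
echo "ssh-rsa AAAAB3Nza... user5@laptop" > "$keyfile"
: > "$keyfile"
wc -c < "$keyfile"                                # prints 0: file exists, but is empty
```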

We could also create a playbook to remove a given user from the /etc/ssh/authorized_keys directory on each server. Presumably that would be part of the ‘Remove User’ process, which is outside the scope of this blog.

Updating keys

To update the authorized_keys for a user, update the file at /home/ansible/crossplatform/etc/ssh/authorized_keys/, then invoke the playbook.

It would be fairly straightforward to create another playbook that scans /etc/passwd for any rogue users which have been added. We could take that a step further and maintain the users themselves via Ansible's existing user module.
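Such an audit is not part of this solution, but a sketch could look like the following (untested; the nologin/false shell patterns are an assumption about which accounts count as login-capable):

```yaml
- hosts: webserver
  user: ansible
  sudo: yes

  tasks:
    - name: list accounts that still have a login shell
      action: shell awk -F':' '$7 !~ /(nologin|false)$/ {print $1}' /etc/passwd
      register: login_accounts
      changed_when: false

    - name: report rogue users not present in our sshusers list
      debug: msg="rogue user {{ item }}"
      with_items: login_accounts.stdout_lines
      when: item not in sshusers
```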

Adding authorized keys for new users

Usually the public SSH key strings are placed directly in the playbook vars. But SSH keys are long, they may carry specific options (although the authorized_key module allows you to configure those), and a list of inline keys is hard to maintain. Here, our target is to keep the public SSH keys for users as static files in an Ansible role. Essentially, we will be populating group_vars files by reading files inside roles.

  • First, we add the public key files in the ‘files’ directory of the role we will be using to configure the users.
  • Next, we have to find a way to “read” the key files and set them in the vars file. Ansible provides lookup plugins that allow us to do this.
  • So, the relevant part of the vars file should look like this (the .pub file names are illustrative; each lookup reads a key file from the role’s ‘files’ directory):

  ssh_users:
    - name: user1
      key: "{{ lookup('file', 'user1.pub') }}"
    - name: user2
      key: "{{ lookup('file', 'user2.pub') }}"
    - name: user3
      key: "{{ lookup('file', 'user3.pub') }}"
    - name: user4
      key: "{{ lookup('file', 'user4.pub') }}"
  • Next, all we need to do is call the authorized_key module as usual:

  - name: Add ssh user keys
    authorized_key: user={{ item.name }} key="{{ item.key }}"
    with_items: ssh_users

Key files are neatly tucked away in the files directory and are easy to maintain, and no wrapped lines or cluttered key options will mess up your vars files.
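Assuming the conventions above, the role ends up looking something like this (names are illustrative):

```
roles/users/
├── files/
│   ├── user1.pub
│   ├── user2.pub
│   ├── user3.pub
│   └── user4.pub
├── tasks/
│   └── main.yml      # the authorized_key task
└── vars/
    └── main.yml      # the ssh_users list built from lookups
```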

Managing sshd configuration

The Ansible configuration tool will need SSH access to each managed node. By its nature, this user will need to have root privileges, and in our case, that will be achieved via sudo. The ansible user will log in via ssh key, and the passphrase for this key will need to be protected and entrusted to a few individuals. Access to systems by the ansible user can be restricted to connections originating from a predetermined IP address (via the authorized_keys file and/or Match options in sshd_config). This ansible user is permitted unrestricted sudo access (but that can be restricted via the sudoers file).
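For illustration, the source-address restriction can be expressed either way; the addresses below are placeholders, and this fragment is a sketch rather than part of the solution above:

```
# In /etc/ssh/authorized_keys/ansible — only accept this key from the control node:
from="10.0.0.5" ssh-rsa AAAAB3Nza... ansible@control

# Or in sshd_config — refuse the ansible account from anywhere but the control node:
Match User ansible Address !10.0.0.5,*
    DenyUsers ansible
```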

Here are a few of the key items from the sshd_config file which are central to this solution:

AuthorizedKeysFile /etc/ssh/authorized_keys/%u
PasswordAuthentication no
ChallengeResponseAuthentication no

Note that each user’s authorized keys are kept in a file named after the username, in the directory /etc/ssh/authorized_keys. Our sshd_config directs SSHD to look in that directory.

Here are the users who can access our servers via SSH. We create this file: /etc/ansible/group_vars/ssh_users

#users who get SSH access to webservers
sshusers:
  - user1
  - user2
  - user3
  - user4
  - user5

With the sshd_config options mentioned in the previous section, here is what /etc/ansible/configs/etc/ssh/sshd_config.j2 (.j2 indicates this is a Jinja2 template, which is what Ansible uses for creating files) will contain once comments and blank lines are removed:

#{{ ansible_managed }}
ListenAddress {{ ansible_ssh_host }}
PermitRootLogin without-password
AuthorizedKeysFile /etc/ssh/authorized_keys/%u
PasswordAuthentication no
ChallengeResponseAuthentication no
UsePAM yes

The following Ansible playbook manages both the sshd configuration and the authorized_keys files:

- hosts: webserver
  user: ansible
  sudo: yes

  tasks:
    - name: create key directory
      action: file path=/etc/ssh/authorized_keys state=directory
        owner=0 group=0 mode=0755

    - name: upload user key
      action: copy src=/home/ansible/etc/ssh/authorized_keys/{{ item }}
        dest=/etc/ssh/authorized_keys/{{ item }}
        owner=0 group=0 mode=0644
      with_items: sshusers

    - name: sshd configuration file update
      template: src=/etc/ansible/configs/etc/ssh/sshd_config.j2
        dest=/etc/ssh/sshd_config
        owner=0 group=0 mode=0644
        validate='/usr/sbin/sshd -T -f %s'
      notify:
        - restart sshd

  handlers:
    - name: restart sshd
      service: name=sshd state=restarted

The first task creates the directory for key storage. We do not allow users to upload authorized_keys files for their own account. We don’t want an intruder to add their own key to a user account. Instead, each user’s authorized keys are in a file named after the username, in the directory /etc/ssh/authorized_keys, and our sshd_config tells SSHD to look in that directory.

The second task copies the user keys listed in the sshusers variable defined in the group_vars file. While Ansible has an authorized_key module specifically for handling these files, it has problems with quotes in restricted keys.
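For reference, a "restricted" key is one carrying options in front of the key material; the quoted values are exactly the sort of thing that can trip up automated handling. The address, options, and command below are made up for illustration:

```
from="203.0.113.10",no-port-forwarding,command="/usr/local/bin/backup.sh" ssh-rsa AAAAB3Nza... backup@host
```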

The third task reads the jinja2 template for sshd_config, adds the necessary information, and copies the file to the server. It also validates that the configuration is legitimate — not that it will do what you want, mind you, but it will verify that SSHD understands this sshd_config file.

Finally, we restart SSHD.

Using this configuration, users can log in only via ssh-key and those public keys are centrally controlled.

Drop a comment and share your experience with Ansible here.

7 thoughts on “How To Manage SSH Keys Using Ansible”

  1. Very well done, just a couple of things

    ansible_ssh_host is now deprecated

    In the j2 file you define

    AuthorizedKeysFile /etc/ssh/authorized_keys/%u

    but the playbook

    action: copy src=/home/ansible/etc/ssh/authorized_keys/{{ item }}

    copies the key in /etc/ssh/authorized_keys/sshusers/%u

  2. Thanks for the post, nice read, #yamlbeautification

    I have one question 🙂
    How do you iterate over complex structures like:

    - name:
        - user1
        - user12
      key: "{{ lookup('file', '') }}"
    - name:
        - user2
        - user22
        - user23
      key: "{{ lookup('file', '') }}"
