farblog

by Malcolm Rowe

Simple privilege separation using ssh

If you have a privileged process that needs to invoke a less-trusted child process, one easy way to reduce what the child is able to do is to run it under a separate user account and use ssh to handle the delegation.

This is pretty simple stuff, but as I’ve just wasted a day trying to achieve the same thing in a much more complicated way, I’m writing it up now to make sure that I don’t forget about it again.

(Note that this is about implementing privilege separation using ssh, not about how ssh itself implements privilege separation; if you came here for that, see the paper Preventing Privilege Escalation by Niels Provos et al.)

In my case, I’ve been migrating my home server to a new, less unhappy machine, and one of the things I thought I’d clean up was how push-to-deploy works for this site, which is stored in a Mercurial repository.

What used to happen was that I’d push from wherever I was editing, over ssh, to a repository in my home directory on my home server; a changegroup hook would then update the working copy (hg up) to include whatever I’d just pushed, and run a script (from the repository) to deploy to my webserver. The hook’s stdout is sent back to me as part of the push, so I also get to see what happened.

(This may sound a bit convoluted, but I’m not always able to deploy directly from where I’m editing to the webserver. This also has the nice property that I can’t accidentally push an old version live by running from the wrong place, since history is serialised through a single repository.)
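
For reference, the old behaviour boiled down to a couple of lines in that repository’s .hg/hgrc, roughly like the following (the hook name and the deploy script’s path are illustrative rather than exactly what I had):

[hooks]
# after a push: update the working copy, then run the deploy script stored in it
changegroup.deploy = hg update && ./deploy.sh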

The two main problems here are that pushing to the repository has the surprising side-effect of updating the working copy in my home directory (and so falls apart if I accidentally leave uncommitted changes lying around), and that the hook script runs as the user who owns the repository (i.e. me), which is largely unnecessary.

For entirely separate reasons, I’ve recently needed to set up shared Mercurial hosting (which I found to be fairly simple, using mercurial-server), so I now have various repositories owned by a single hg user.

I don’t want to run the (untrusted) push-to-deploy scripts directly as that shared user, because they’d then have write access to all repositories on the server. (This doesn’t matter so much for my repositories, since only I can write to them, and it’s my machine anyway, but it will for some of the others.)

In other words, I want a way to allow one privileged process (the Mercurial server-side process running as the hg user) to invoke another (a push-to-deploy script) in such a way that the child process doesn’t retain the first process’s privileges.

There are lots of ways to achieve this, but one of the simplest is to run the two processes under different user accounts, then either find a way to communicate between two always-running processes (named pipes or shared memory, for example), or for one to invoke the other directly.

The latter is more appropriate in this case, and while the obvious way for a (non-root) user to run a process as another user is via sudo, the policy specification for that (in /etc/sudoers) is… complicated. Happily, there’s a simpler way that only requires editing configuration files owned by the two users in question: ssh.
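
For comparison, the sudo route would need a sudoers rule along these lines, letting the hg user run one fixed script as hg-blog (the script path is purely illustrative, and unlike everything that follows, installing the rule requires root):

hg ALL=(hg-blog) NOPASSWD: /home/hg-blog/bin/push-to-deploy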

The setup is fairly easy: I’ve created a separate user that will run the push-to-deploy script (hg-blog), generated a password-less keypair for the calling (hg) user, and added the public key (with from= and command= options) to /home/hg-blog/.ssh/authorized_keys.
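
Concretely, that works out to something like the sketch below; the key type, the from= pattern, and the forced-command path are my choices for illustration (and the adduser invocation is Debian-flavoured):

# as root: create the unprivileged user that will run the deploy script
adduser --disabled-password hg-blog

# as the hg user: generate a password-less keypair at the default path,
# so that a plain `ssh hg-blog@localhost` will find it
ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519

# and one line in /home/hg-blog/.ssh/authorized_keys:
from="127.0.0.1,::1",command="/home/hg-blog/bin/push-to-deploy" ssh-ed25519 AAAA... hg@home

The from= option limits which hosts the key can be used from, and command= forces every connection made with that key to run the one script, whatever the client asked for.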

Now the Mercurial server-side process can trigger the push script simply by creating a $REPOS/.hg/hgrc containing:

[hooks]
changegroup.autopush = ssh hg-blog@localhost

This automatically runs the command I specified in the target user’s authorized_keys, so I don’t even have to worry about listing it here¹.
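
The forced command ends up being a small script that fetches and runs whatever was just pushed; a minimal sketch, assuming it lives in hg-blog’s home directory and pulls back from the hg-owned repository over ssh (the paths and repository URL are illustrative):

#!/bin/sh
# Runs as hg-blog whenever the hg user connects with the deploy key.
set -e
cd /home/hg-blog/blog
hg pull ssh://hg@localhost/blog   # fetch whatever was just pushed
hg update                         # bring hg-blog's working copy up to date
exec ./deploy.sh                  # run the deploy script from the repository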

In conclusion, ssh is a pretty good tool for creating a simple privilege separation between two processes. It’s ubiquitous, it doesn’t require root to do anything special, and while the case I’m using it for here involves two processes on the same machine, there’s no reason they couldn’t be on different machines.

The ‘right’ answer may well be to run each of these as Docker containers, completely isolating them from each other. I’m not at that point yet, and in the meantime, hopefully by writing this up I won’t forget about it the next time I need to do something similar!


  1. In this case, adding a command restriction doesn’t protect against a malicious caller, since the command that’s run immediately turns around and fetches the next script to run from that same caller. It does protect against someone else obtaining the (password-less by necessity) keypair, I suppose, though the main reason is the one listed above: it means that ‘what to do when something changes’ is specified entirely in one place.