If you run SSHD in your Docker containers, you're doing it wrong!

When they start using Docker, people often ask: “How do I get inside my containers?” and other people will tell them “Run an SSH server in your containers!” but that’s a very bad practice. We will see why it’s wrong, and what you should do instead.

Note: if you want to comment or share this article, use the canonical version hosted on the Docker Blog. Thank you!

Your containers should not run an SSH server

…Unless your container is an SSH server, of course.

It’s tempting to run an SSH server, because it gives an easy way to “get inside” the container. Virtually everybody in our craft has used SSH at least once in their life. Most of us use it on a daily basis, and are familiar with public and private keys, password-less logins, key agents, and even sometimes port forwarding and other niceties.

With that in mind, it’s not surprising that people would advise you to run SSH within your container. But you should think twice.

Let’s say that you are building a Docker image for a Redis server or a Java webservice. I would like to ask you a few questions.

But how do I …

Backup my data?

Your data should be in a volume. Then, you can run another container, and with the --volumes-from option, share that volume with the first one. The new container will be dedicated to the backup job, and will have access to the required data.

Added benefit: if you need to install new tools to make your backups or to ship them to long term storage (like s3cmd or the like), you can do that in the special-purpose backup container instead of the main service container. It’s cleaner.
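For instance, a minimal sketch (the official redis image keeps its data in /data; the choice of the ubuntu image for the backup job is just a placeholder):

# Start the service with its data in a volume
CID=$(docker run -d -v /data redis)
# Run a one-off backup container sharing that volume,
# streaming a tarball of the data to the host
docker run --rm --volumes-from $CID ubuntu tar cf - /data > backup.tar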

Check logs?

Use a volume! Yes, again. If you write all your logs under a specific directory, and that directory is a volume, then you can start another “log inspection” container (with --volumes-from, remember?) and do everything you need there.

Again, if you need special tools (or just a fancy ack-grep), you can install them in the other container, keeping your main container in pristine condition.
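For example (myservice and its log path are hypothetical; adapt them to your setup):

# Start the service with its log directory as a volume
CID=$(docker run -d -v /var/log/myservice myservice)
# Follow a log file from a separate, disposable container
docker run --rm --volumes-from $CID ubuntu tail -f /var/log/myservice/access.log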

Restart my service?

Virtually all services can be restarted with signals. When you issue /etc/init.d/foo restart or service foo restart, it almost always results in sending a specific signal to a process. You can send that signal with docker kill -s <signal> <container_name_or_ID>.
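For instance, Nginx reloads its configuration when it receives SIGHUP; so, assuming a container named webserver running Nginx:

docker kill -s HUP webserver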

Some services won’t listen to signals, but will accept commands on a special socket. If it is a TCP socket, just connect over the network. If it is a UNIX socket, you will use… a volume, one more time. Set up the container and the service so that the control socket is in a specific directory, and that directory is a volume. Then you can start a new container with access to that volume; it will be able to use the socket.

“But, this is complicated!” - not really. Let’s say that your service foo creates a socket in /var/run/foo.sock, and requires you to run fooctl restart to be restarted cleanly. Just start the service with -v /var/run (or add VOLUME /var/run in the Dockerfile). When you want to restart, run the exact same image, but with the --volumes-from option and a different command. It looks like this:

# Starting the service
CID=$(docker run -d -v /var/run fooservice)
# Restarting the service with a sidekick container
docker run --volumes-from $CID fooservice fooctl restart

It’s that simple!

Edit my configuration?

If you are performing a durable change to the configuration, it should be done in the image - because if you start a new container, the old configuration will be there again, and your changes will be lost. So, no SSH access for you!
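For example, a durable configuration change can be baked into the image at build time (myapp.conf and its destination are hypothetical):

# In the Dockerfile: copy the configuration file into the image
COPY myapp.conf /etc/myapp/myapp.conf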

“But I need to change my configuration over the lifetime of my service; for instance to add new virtual hosts!”

In that case, you should use… wait for it… a volume! The configuration should be in a volume, and that volume should be shared with a special-purpose “config editor” container. You can use anything you like in this container: SSH + your favorite editor, or a web service accepting API calls, or a crontab fetching the information from an outside source; whatever.

Again, you’re separating concerns: one container runs the service, another deals with configuration updates.
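A quick sketch of that setup (image names and paths are hypothetical):

# Start the service with its configuration directory as a volume
CID=$(docker run -d -v /etc/myapp myservice)
# Edit the configuration from a separate “config editor” container
docker run --rm -it --volumes-from $CID ubuntu vi /etc/myapp/myapp.conf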

“But I’m doing temporary changes, because I’m testing different values!”

In that case, check the next section!

Debug my service?

That’s the only scenario where you really need to get a shell into the container, because you’re going to run gdb, strace, tweak the configuration, and so on.

In that case, you need nsenter.

Introducing nsenter

nsenter is a small tool that allows you to enter namespaces. Technically, it can enter existing namespaces, or spawn a process into a new set of namespaces. “What are those namespaces you’re blabbering about?” They are one of the essential constituents of containers.

The short version is: with nsenter, you can get a shell into an existing container, even if that container doesn’t run SSH or any kind of special-purpose daemon.

Where do I get nsenter?

Check jpetazzo/nsenter on GitHub. The short version is that if you run:

docker run -v /usr/local/bin:/target jpetazzo/nsenter

… this will install nsenter in /usr/local/bin and you will be able to use it immediately.

nsenter might also be available in your distro (in the util-linux package).

How do I use it?

First, figure out the PID of the container you want to enter:

PID=$(docker inspect --format '{{.State.Pid}}' <container_name_or_ID>)

Then enter the container:

nsenter --target $PID --mount --uts --ipc --net --pid

You will get a shell inside the container. That’s it.

If you want to run a specific script or program in an automated manner, add it as an argument to nsenter. It works a bit like chroot, except that it works with containers instead of plain directories.
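For example, to list the processes running inside the container without opening an interactive shell (assuming the container has ps installed):

nsenter --target $PID --mount --uts --ipc --net --pid ps aux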

What about remote access?

If you need to enter a container from a remote host, you have (at least) two ways to do it: SSH into the Docker host and run nsenter yourself; or SSH into the Docker host with a special key that forces a specific command (namely, nsenter).

The first solution is pretty easy; but it requires root access to the Docker host (which is not great from a security point of view).

The second solution uses the command= pattern in SSH’s authorized_keys file. You are probably familiar with “classic” authorized_keys files, which look like this:

ssh-rsa AAAAB3N…QOID== jpetazzo@tarrasque

(Of course, a real key is much longer; it lives on a single line in the file, even if it wraps over several lines on your screen.)

You can also force a specific command. If you want to be able to check the available memory on your system from a remote host, using SSH keys, but you don’t want to give full shell access, you can put this in the authorized_keys file:

command="free" ssh-rsa AAAAB3N…QOID== jpetazzo@tarrasque

Now, when that specific key connects, instead of getting a shell, it will execute the free command. It won’t be able to do anything else.

(Technically, you probably want to add no-port-forwarding; check the manpage authorized_keys(5) for more information.)
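Putting the pieces together, a forced-command entry could look like this. This is a sketch: it assumes the key is installed for root (since nsenter requires root privileges), and mycontainer is a placeholder name.

no-port-forwarding,command="nsenter --target $(docker inspect --format '{{.State.Pid}}' mycontainer) --mount --uts --ipc --net --pid" ssh-rsa AAAAB3N…QOID== jpetazzo@tarrasque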

The crux of this mechanism is to split responsibilities. Alice puts services within containers; she doesn’t deal with remote access, logging, and so on. Betty will add the SSH layer, to be used only in exceptional circumstances (to debug weird issues). Charlotte will take care of logging. And so on.

Wrapping up

Is it really Wrong (with a capital W) to run an SSH server in a container? Let’s be honest, it’s not that bad. It’s even super convenient when you don’t have access to the Docker host, but still need to get a shell within the container.

But we saw here that there are many ways to not run an SSH server in a container, and still get all the features we want, with a much cleaner architecture.

Docker allows you to use whatever workflow is best for you. But before jumping on the “my container is really a small VPS” bandwagon, be aware that there are other solutions, so you can make an informed decision!

This work by Jérôme Petazzoni is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.