Ok, I’m confused. Ok, nothing new, lol. I followed this guy’s directions to create a container that runs sshd. I expected to be able to SSH from any host on the network to this container, and I was able to, fine. The issue is: should I be able to SSH from the container back to the host OS? I did not think so, as I thought the only network path to the container was port 22, which was mapped to a port on the host, in my case 32772. But I was able to. How is this possible?
I thought the only network connection to the container was through the port exposed on the host?

$ docker ps
CONTAINER ID  IMAGE        COMMAND              CREATED         STATUS         PORTS                  NAMES
ef            centos7-ssh  "/usr/sbin/sshd -D"  38 minutes ago  Up 38 minutes  0.0.0.0:32772->22/tcp  centos7-1

Macedemo: The issue is: should I be able to SSH from the container back to the host OS? I did not think so, as I thought the only network path to the container was port 22, which was mapped to a port on the host, in my case 32772. But I was able to.

Are you sure you were able to? You’re 100% positive that you weren’t SSHed into somewhere else, e.g. into the container’s localhost (different than the host’s localhost)?
You can’t ssh -X from another terminal into the same machine and go back and use the .Xauthority; ssh -X changes the .Xauthority every time for the most recent terminal. I’ve only gotten it to work by copying the .Xauthority every time I ssh -X into my machine and try to share the screen with my container.
What are the commands you used here? The only way to SSH from the container to the host would be to run the container in the same network namespace as the host (--net host), or to SSH into the host’s public IP address. Even then, you would have to properly configure your setup to allow this, either through password-based authentication or by mounting your .ssh directory into the container, etc. -p just forwards a port exposed by the container to the host, but never vice versa (and you would need access to the port where sshd is listening on the host to SSH into it).

Well, each of the systems is on the same network (dockermain and attacker on a 10.x.x.x, but the container on 172.17.0.4; see the ip addr output below), and yes, it did prompt me for the host’s password. But I thought this was not even possible: to SSH into a container from another machine, then SSH again to the host system? I’m concerned, as I’m looking at Docker as a way to provide another layer of security/protection.
I had thought that if an attacker had 100% control of a container, they had no way to access the host system without, say, a bug or compromising the host Docker daemon. That does not seem to be the case. I followed that guy’s directions/Dockerfile, even removing the other ports (80 and 443) from the Dockerfile. The commands I ran are below.

On the host running Docker:

$ docker run -d -P -t --name centos7-1 centos7-ssh

docker@dockermain sshcontainer$ hostname
dockermain.localdomain
docker@dockermain sshcontainer$ docker ps
CONTAINER ID  IMAGE        COMMAND              CREATED        STATUS        PORTS                  NAMES
62ee503a4392  centos7-ssh  "/usr/sbin/sshd -D"  3 minutes ago  Up 3 minutes  0.0.0.0:32768->22/tcp  centos7-1

From another CentOS VM (attacker) on the same physical host, I ran:

attacker@attacker$ ssh admin@dockermain -p 32768
The authenticity of host ‘dockermain:32768 (10.0.2.225:32768)’ can’t be established.
ECDSA key fingerprint is b2:6d:2e:71:57:e2:33:87:be:df:3f:82:6e:cd:cf:55.
Are you sure you want to continue connecting (yes/no)?
yes
Warning: Permanently added ‘dockermain:32768,10.0.2.225:32768’ (ECDSA) to the list of known hosts.
admin@dockermain’s password:
admin@62ee503a4392$ sudo -i

We trust you have received the usual lecture from the local System Administrator. It usually boils down to these three things:
#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.
sudo password for admin:
root@62ee503a4392# ssh 10.0.2.225
The authenticity of host ‘10.0.2.225 (10.0.2.225)’ can’t be established.
ECDSA key fingerprint is b6:91:79:df:17:64:02:68:93:26:7e:de:73:54:35:40.
Are you sure you want to continue connecting (yes/no)?
yes
Warning: Permanently added ‘10.0.2.225’ (ECDSA) to the list of known hosts.

If the container has egress access to that IP address (10.0.2.225) on the network, it’s just like any other process. That seems to be the case here: egress and ingress to that IP are allowed. Any other process running on that host could do this, no? If it shouldn’t be allowed, firewall it. And/or use password-protected keys instead of plaintext passwords.
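To make the “firewall it” suggestion concrete, here is a hedged sketch, assuming the containers sit on the default docker0 bridge and the host’s sshd listens on port 22 (adjust interface and port to your setup):

```shell
# Sketch, not a definitive setup: packets from a container to the host's
# own IP arrive on the docker0 interface and traverse the INPUT chain,
# so a single rule there blocks container-to-host SSH.
sudo iptables -I INPUT -i docker0 -p tcp --dport 22 -j DROP
```

Traffic from containers to *other* machines goes through the FORWARD chain instead, where (on newer Docker releases) the DOCKER-USER chain is the supported place to add such restrictions.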
Docker doesn’t provide an additional layer of protection at the network level in this regard. Just like ssh [email protected] shouldn’t be an attack vector of concern for Facebook, you should be making sure that any process which has SSH access to your internal IP addresses can’t, if under attack, do any more damage than has already been done.

Macedemo: I’m concerned, as I’m looking at Docker as a way to provide another layer of security/protection.
Docker could be this layer, but I 100% guarantee you it’s not out of the box. You have to calibrate this yourself. DO NOT expect the out-of-the-box defaults to be safe for arbitrary payloads (although we try our best); they are intended for running trusted payloads.

Macedemo: I had thought that if an attacker had 100% control of a container, they had no way to access the host system without, say, a bug or compromising the host Docker daemon.

You are not guaranteed to be safe from an attacker who has access to your container. There are many steps you can take to mitigate this risk, but they are far from the defaults. Your biggest attack vector, by far, is likely to be the Linux kernel itself. Containers have access to a subset of Linux kernel capabilities by default, which is modifiable using the --cap-add / --cap-drop flags.
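One hedged way to see this default capability reduction for yourself is to compare the capability bounding set of a containerized process with that of a host process (the centos:7 image name here is just an example; any image with grep works):

```shell
# Sketch: print the capability bounding set (a hex bitmask) of PID 1
# inside a default container, then of a shell on the host, for comparison.
# Either value can be decoded with: capsh --decode=<mask>
docker run --rm centos:7 grep CapBnd /proc/1/status
grep CapBnd /proc/self/status
```

The container’s mask should come out noticeably smaller than the host’s, reflecting the reduced default set discussed above.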
Some suggestions:
- Don’t run your containers as the root user (use USER).
- Don’t expose host directories to the container with -v.
- Disable inter-container communication if not needed.
- Use grsec / AppArmor / SELinux heavily.
- Run containers with --net none if they don’t need the network.
- --cap-drop kernel capabilities that your container does not need.

Nathan, thanks for the information, this is very helpful for the study I’m doing of Docker/containers.
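Several of those suggestions can be combined into a single docker run invocation. A minimal sketch, assuming a hypothetical image named myapp that runs fine as an unprivileged user:

```shell
# Illustrative hardening sketch, not a definitive recipe:
# - run as an unprivileged uid:gid instead of root
# - no network namespace access at all
# - drop every kernel capability (re-add only what the app proves to need)
# - make the container's root filesystem read-only
docker run -d \
  --user 1000:1000 \
  --net none \
  --cap-drop ALL \
  --read-only \
  myapp
```

Starting from --cap-drop ALL and adding back individual capabilities with --cap-add tends to be easier to audit than starting from the default set and guessing what to remove.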
Obviously I’m new to Docker and am researching it purely from a security angle: mainly, how one could take existing applications and dockerize them, or at least their network-facing components, to add another layer of security/defence in depth. Systems with web services/Tomcat, for example, seem to be a natural fit; I was able to do it very easily. I was left with the impression, after viewing a number of Docker webinars, that doing so would get me that result “out of the box”: a better security posture. From your response, that does not seem to be the case. Some things I got from one of your webinars:

- Process restrictions.
- Docker containers have reduced capabilities: less than half of the capabilities of normal processes by default. Reduced capabilities help mitigate the impact of root escalation.
- By default, containers have no access to devices; access needs to be explicitly granted.
- Containers are immutable, which allows an audit trail and easy rollback to a known secure state. For persistent storage you use volumes, so a new container just mounts the same volume.
- A copy-on-write file system isolates changes made by one application from affecting others.
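The volume-based rollback pattern mentioned above might look like the following sketch (image name, volume name, and mount path are all placeholders):

```shell
# State lives in a named volume, so a compromised or misbehaving container
# can simply be discarded and replaced while the data survives.
docker volume create appdata
docker run -d --name app-v1 -v appdata:/var/lib/app myapp:1.0

# Roll back: throw the container away and start a fresh one on the
# same volume (or on a known-good image tag).
docker rm -f app-v1
docker run -d --name app-v2 -v appdata:/var/lib/app myapp:1.0
```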
My original hypothesis was that I could encourage Docker adoption in my organization purely on the security benefit for legacy apps. Your response seems to indicate that my hypothesis is incorrect. A lot of what you listed I could/should do with existing apps anyway; it is not so much inherent (ok, a little bit) in containers.
So selling Docker adoption purely on a security benefit does not seem possible.

Macedemo: My original hypothesis was that I could encourage Docker adoption in my organization purely on the security benefit for legacy apps. Your response seems to indicate that my hypothesis is incorrect.

Docker certainly can add extra security to existing applications, but my point is to caution you against assuming that you can treat Docker as a black box which automatically makes applications more secure, simply by virtue of the fact that they’re run in Docker.
Just like you wouldn’t feel comfortable that an application running on a server was perfectly secure only because it’s running in a virtual machine (e.g. it won’t help if your SSH password is easily guessable), you can’t simply assume that an application will be more secure just because it is running in Docker. A lot of the bullet points you have there are true. Docker definitely can improve security dramatically for applications running inside of it, and it already does for some organizations today. But today, I highly recommend you also learn the risk profile and attack surface of applications running inside of containers if you’re going to avoid pitfalls. Partially, I’m laying this on thick because a lot of content available on the Internet recommends insecure practices, like running containers with --privileged or --net host, dangerous bind mounts, etc.,
like it’s no big deal. Likewise, access to the Docker API is equivalent to root privileges; that’s an important fact to know (since users in the docker group are effectively sudoers). If you learn where these potential hazards are and avoid them, Docker could potentially do great good for you. But again, it’s not a black box you can throw things into to make them automatically more secure.

Nathan, thanks so much for the straightforward information; please keep laying it on thick, it’s much appreciated. And I am following you. I was just getting the impression, from what I read initially about Docker, that it did a lot more isolation than it really does.
I sort of thought it was an “if not explicitly allowed, it is denied” black-box model: I would not have to do a whole lot to tighten a container down, nor know a lot about Docker, etc. Part of my task here was to separate the hype from fiction. And since I’ve been working in the field for almost 30 years, I’ve seen/learned you can’t always believe what you read; you need to play with and test a few things yourself to get the real story, even do a small prototype before going all-in on a new technology/approach, to understand what you’re getting yourself into. So I guess, then, you are saying (and I should present this to my leadership) that improved security should not be the driver for adopting Docker/containers; but if you use Docker/containers for their other benefits, there are ways to make systems more secure than they would probably be without Docker. Security is probably not the main driver for Docker adoption, as there will be a fair learning curve to understanding how to use it correctly. Also, would you concur that what is stated here is, for the most part, a fair assessment of Docker?
“Container technology increases the default security for applications in two ways:
- Docker’s default settings are designed to limit Linux capabilities. The default bounding set of capabilities inside a Docker container is less than half the total capabilities assigned to a Linux process (see Linux Capabilities figure).
- Containers can run with a reduced capability set that does not negatively impact the application, yet improves overall system security levels and makes running applications more secure by default.
- Containers have no default device access and have to be explicitly granted device access.
- Using Docker brings immediate benefits, not only in terms of application development and deployment speed and ease, but also in terms of security.
- Containers provide an additional layer of protection by isolating the applications from the host, and the applications from each other, without using incremental resources of the underlying infrastructure and by reducing the surface area of the host itself.
- Linux hosts can be hardened in many other ways; deploying Docker enhances host security and does not preclude the use of additional security tools.
- The simple deployment of Docker increases overall system security levels by default, through isolation, confinement, and by implicitly implementing a number of best practices that would otherwise require explicit configuration in every OS used within the organization.”
As a wise person once said, “Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense.” Docker will not automatically make your infrastructure more secure. It doesn’t automatically make your infrastructure more insecure either.
Some things it does, such as limiting capabilities, are great! In fact the upstream maintainers have put a great deal of work into making the defaults as secure as possible while still being very usable, and I commend them for that. Other common practices, such as having users in the docker group or running containers as root, are perhaps not as carefully guarded as they should be, so you should know what you’re signing up for.
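The docker-group point deserves a concrete illustration. A hedged sketch of why membership in the docker group is effectively root (the alpine image is just an example; any image with cat works):

```shell
# Any user who can talk to the Docker daemon can bind-mount the host's
# entire filesystem into a container and read, or modify, anything on it,
# with no sudo and no password prompt.
docker run --rm -v /:/host alpine cat /host/etc/shadow
```

This is why access to the Docker API (the socket, or the docker group) should be guarded as carefully as sudo itself.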
As with everything in security, there are tradeoffs, and it’s up to you to do your homework and decide where the risk tradeoffs are.
docker-machine ssh

Log into or run a command on a machine using SSH. To log in, just run docker-machine ssh machinename:

$ docker-machine ssh dev
(boot2docker ASCII-art welcome banner)
$ docker-machine ssh default -L 8080:localhost:8080

Different types of SSH

When Docker Machine is invoked, it checks to see if you have the venerable ssh binary around locally and attempts to use that for the SSH commands it needs to run, whether they are part of an operation such as creation or have been requested by the user directly. If it does not find an external ssh binary locally, it defaults to using a native Go implementation.
This is useful in situations where you may not have access to traditional UNIX tools, such as if you are using Docker Machine on Windows without having msysgit installed alongside it. In most situations, you do not need to worry about this implementation detail, and Docker Machine acts sensibly out of the box. However, if you deliberately want to use the Go-native version, you can do so with a global command-line flag / environment variable.
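For instance, using the --native-ssh global flag (a hedged example; “dev” is a placeholder machine name):

```shell
# Force Docker Machine's built-in Go SSH client rather than the
# external ssh binary for this invocation.
docker-machine --native-ssh ssh dev
```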