Docker, Kubernetes, OpenStack, and OpenShift Explained
From Gustavo Muslera's answer via Quora:
As a very big simplification, you can see Docker (and containers in general) as thin VMs, OpenShift as having your own Heroku, and OpenStack as having your own AWS.
Docker containers use Linux kernel functionality (namespaces and cgroups) that lets you run apps with their own isolated network, memory, and process spaces.
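As a rough sketch of what that isolation looks like in practice (assuming Docker is installed; the `alpine` image and these commands are just illustrative, and they need a running Docker daemon):

```shell
# Inside the container, the process runs in its own PID namespace,
# so `ps` sees only the container's processes, with the command as PID 1.
docker run --rm alpine ps aux

# The container also gets its own network namespace: its interfaces
# and addresses are separate from the host's.
docker run --rm alpine ip addr
```

Compare that with running `ps aux` directly on the host: the host sees every process on the machine, while the container sees only its own.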
And that API was picked up by several projects, like CoreOS (think of a Linux distribution meant to run containers rather than applications, with a few included components that help manage/distribute containers in clusters), Google's Kubernetes (also meant for running containers in clusters, associating several of them in groups that should run together), or Fig (where you can also define groups of containers and how they are related).
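Kubernetes' "groups that should run together" are its pods; a minimal sketch of one, with made-up names and example images, looks roughly like this:

```yaml
# A hypothetical pod: two containers scheduled together on the same
# machine, sharing the pod's network and able to share volumes.
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
    - name: web
      image: nginx
    - name: log-collector
      image: fluentd
```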
OpenShift already existed when Docker came to light; I think it was based on LXC back then. The workflow I saw in a presentation was developers committing to a repository, and that getting published to a site, or at least becoming functional enough to go through the testing/staging/production stages.
OpenStack goes down to the Infrastructure as a Service level: it lets you build something of AWS's scale, giving you a way to get virtual machines so you can run individual VMs (running Linux or other operating systems) or deploy clusters with their own networks/storage/etc. But it also has drivers to deploy Docker containers instead of full VMs, to get a higher density of services out of your virtualized/bare-metal machines.
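The "get virtual machines" part of that, on an OpenStack cloud of this era, was roughly a call to nova (OpenStack's compute service); the image, flavor, and server name here are made-up examples, and the commands need an actual OpenStack deployment and credentials:

```shell
# Boot a VM on your own cloud -- loosely the equivalent of launching
# an EC2 instance on AWS, but against your own OpenStack deployment.
nova boot --image ubuntu-14.04 --flavor m1.small my-first-vm

# List the instances you have running.
nova list
```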
Those three sit at different levels of abstraction, and each can be used on its own, but each one can be improved by combining it with the others.