As a very big simplification, you can think of Docker (and containers in general) as thin VMs, of OpenShift as having your own Heroku, and of OpenStack as having your own AWS.
Docker containers use Linux kernel features that let you run apps in an isolated network/memory/process/filesystem environment, and add to that a union filesystem, so you can have a read-only "parent" disk image with a writable child filesystem on top. Files from the parent are copied on write when you modify them, which lets several children share the same parent: if you have several containers that use the Ubuntu base installation, you store it only once (and even cache it in memory only once). Docker has a flexible API and command-line utilities for creating containers, doing some basic management, pushing them to central or private repositories, and a lot more. It also lets containers link to each other more easily when they are interrelated (think one for the DB and another for the web server of a not-so-trivial web application), and the systems that manage them give their own names to bunches of interrelated containers (gears, cartridges, pods, etc.). And since containers run natively as applications under a normal Linux kernel, they can run even on a Linux that is itself already running inside a VM.
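To make the layer sharing concrete, here is a minimal sketch of a Dockerfile (the image contents are made up for illustration): every image built `FROM` the same base reuses the exact same read-only base layers on disk, and each build step only adds its own copy-on-write layer on top.

```dockerfile
# Hypothetical Dockerfile for a database image.
# The ubuntu:14.04 layers are downloaded and stored once, and shared
# read-only by every other image and container built FROM them;
# only the RUN step below adds a new copy-on-write layer.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y postgresql
```

Linking works the same way from the command line: assuming images named `mydb` and `myweb`, running `docker run -d --name db mydb` and then `docker run --link db:db myweb` makes the database container reachable from the web container under the alias `db`.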
And that API has been used by several projects, like CoreOS (think of a Linux distribution meant to run containers rather than plain applications, with a few included components that help manage and distribute containers across clusters), Google's Kubernetes (also meant for running containers in clusters, associating several of them into groups that should run together), or Fig (which also lets you define groups of containers and how they are related).
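As a sketch of what such a group definition looks like in Fig (the port number and image choices here are illustrative): a `fig.yml` tying a web container to a database container, so both start together as one unit.

```yaml
# fig.yml — hypothetical two-container group.
# "web" is built from the local Dockerfile and linked to "db",
# so Fig starts both containers and wires up the link between them.
web:
  build: .
  links:
    - db
  ports:
    - "8000:8000"
db:
  image: postgres
```

Running `fig up` then builds or pulls the images and brings up both containers together.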
OpenShift already existed when Docker came to light; I think it was based on LXC back then. The workflow I saw in a presentation was developers committing to a repository, and that getting published to a site, or at least becoming functional enough to go through the testing/staging/production stages, all coordinated by OpenShift. Using Docker optimized that workflow a lot, and the latest iteration also uses Kubernetes to orchestrate the containers. You dedicate a bunch of machines (bare-metal hardware or VMs) to run it, and it manages that workflow, providing containers as they are needed. It's not the only PaaS based on Docker; Dokku, Flynn, and Deis are a few examples of others.
OpenStack goes down to the Infrastructure-as-a-Service level: it lets you build something at AWS scale, giving you a way to get virtual machines so you can run individual VMs (running Linux or other operating systems) or deploy clusters with their own networks, storage, etc. It also has drivers to deploy Docker containers instead of full VMs, to get a higher density of services on virtualized or bare-metal machines.
Those three sit at different levels of abstraction, and each can be used on its own, but each one can be improved by using the others.