Colin Humphreys is CEO of CloudCredo while Paula Kennedy is the COO. What was interesting in this presentation was the different points of view of the two speakers.
On one hand, Paula has some business requirements: deploy an application easily and quickly. For her, a PaaS like Heroku or Cloud Foundry meets these requirements. With a PaaS:
- you don’t have to worry about where your app is deployed
- you don’t have to worry about “how” to deploy
- deploying an app, and re-deploying any app update, is quick
Furthermore, to manage distributed instances, a PaaS offers:
- centralized logging
- dynamic routing
- support for heavy loads
- health management
- access control
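To illustrate the “easily and quickly” point: with Cloud Foundry, a deployment boils down to a single command. A sketch, where the app name, artifact path, and sizing flags are made up for illustration (the command is echoed rather than run, since it needs a logged-in `cf` CLI):

```shell
# Sketch: pushing an app to Cloud Foundry. Assumes the `cf` CLI is installed
# and you are logged in to a target; "myapp" and the flags are hypothetical.
# `echo` prints the command instead of running it.
echo cf push myapp -p target/myapp.war -i 2 -m 512M
```

The platform takes care of building, routing, and running the instances; none of the underlying machines are visible to you.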
On the other hand, for Colin, Docker is sooo coooool. But beyond this totally unbiased point of view, Docker containers give you full control over the system on which you install your app:
- you can choose which libraries to install in your container alongside your app. For instance, you can control which version of OpenSSL you use, whereas you have no control over such a library with a PaaS
- you can specialize your Docker container, which you cannot do with a PaaS since you have no control over it
- you have broader network flexibility with Docker containers: you can choose which protocols to use, whereas with a PaaS you mainly have access to HTTP
- you get fast feedback on what you install in a Docker container
- you’re not locked in to a vendor
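To make the library-control point concrete, here is a minimal sketch of a Dockerfile that pins an exact package version, something a PaaS does not let you do. The base image and version number are illustrative, not from the talk:

```shell
# Sketch: write a Dockerfile pinning an exact OpenSSL build instead of
# accepting whatever the platform ships. Image and version are illustrative.
cat > Dockerfile <<'EOF'
FROM debian:wheezy
# Pin the exact OpenSSL package version we want in the container.
RUN apt-get update && apt-get install -y openssl=1.0.1e-2+deb7u25
EOF
grep "openssl=" Dockerfile
```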
Colin mentioned that you may also be tempted to build your own PaaS, but it will not match the quality of vendor PaaSes if you need to add lots of extras such as centralized logging, health management, etc.
Nonetheless, Colin and Paula agree that there is room for both PaaSes and containers:
- if your micro-services need to fit the 12 factors, then a stateless PaaS can be your holy grail
- if your micro-services don’t need to fit the 12 factors, then Docker containers with volume management can do the job
Nice talk. By the way, as Colin noticed, “micro-services” had not been mentioned in the first 35 minutes of the talk: a bonus!
Docker clustering – batteries included – by Jessie Frazelle
Jessie Frazelle’s second talk, this time about a new tool in the Docker ecosystem: Swarm, a cluster management system for Docker containers.
It is a native clustering system for Docker with:
- native discovery of containers (an optional feature based on either etcd, Consul, or ZooKeeper)
- schedulers (bin-packing and random are supported natively; Mesos is coming soon)
- constraints management
- affinity management
That’s almost all…
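For the record, constraints and affinities are passed to classic Swarm as environment variables on `docker run`. A sketch, where the manager endpoint, node label, and images are assumptions (the commands are echoed since there is no Swarm manager here):

```shell
# Sketch of Swarm scheduling hints; no real Swarm manager is available,
# so `echo` prints the commands instead of running them.
SWARM=tcp://swarm-manager:2375   # hypothetical Swarm manager endpoint

# Constraint: only place the container on nodes whose engine was started
# with --label storage=ssd.
echo docker -H "$SWARM" run -d -e constraint:storage==ssd redis

# Affinity: place the container on the same node as container "web".
echo docker -H "$SWARM" run -d -e affinity:container==web nginx
```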
Docker, Fig & Flocker – by Luke Marsden
For Luke, Docker needs:
- composition, which can be addressed by Fig (now part of the Docker ecosystem under the name Docker Compose) and by Flocker
- scheduling, which can be addressed by Swarm or Kubernetes
- containers talking to each other, which can be addressed by Weave
- portable storage, which can be addressed by Flocker
Fig, now Docker Compose, enables composition at the host level. For instance, if you have an application that runs in a servlet container and needs a database, a micro-services approach may use one Docker container for the servlet container and one for the database. But you need to deploy and run these containers in the right order (the database first, then the servlet container), link them to each other, set up the endpoints and ports, and so on… Fig helps with that. All you need is to declare a YAML file that describes the properties of each container (e.g. its ports) in deployment order; then a single “fig up” command deploys both containers in the right order, with the correct port mappings and links. This works at the host level: Fig runs containers on the same host. What if you wish to deploy your container infrastructure across several hosts? Here comes Flocker!
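A minimal fig.yml for that servlet-container-plus-database example might look like this; the service names, image, and ports are my own illustration, not from the talk:

```shell
# Sketch of a fig.yml (today a docker-compose.yml) wiring a web service to
# a database. Images and ports are illustrative.
cat > fig.yml <<'EOF'
web:
  build: .
  ports:
    - "8080:8080"
  links:
    - db
db:
  image: postgres
EOF
# fig up   # would start db first, then web, with the link and port mapping
grep -q "links:" fig.yml && echo "fig.yml written"
```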
Flocker can be seen as the companion of Fig. In addition to a Fig YAML configuration file, Flocker needs a second YAML file that describes the topology of your Docker container cluster: it states on which node each of your Docker containers has to be installed, while the description of the containers themselves is held by the Fig file.
A second issue addressed by Flocker is the migration of a Docker container from one node to another. Let’s say you have a database wrapped in a container. To persist the data stored in the database, you can use Docker volumes, which store data outside the container, in the filesystem of the host. What happens if, for one reason or another, you wish to migrate the database from one node to another? Flocker does the job: it can migrate such a container from one node to another, on the fly, thanks to a Flocker network proxy that re-routes network traffic to the new node. Looks great, no?
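Concretely, the Flocker workflow of that era took two files: the Fig-style application file and a deployment file mapping applications to nodes; a migration is done by editing the mapping and re-running `flocker-deploy`. A sketch, where node IPs and application names are made up:

```shell
# Sketch of a Flocker deployment file; node IPs and application names are
# illustrative. Migration = move an application under another node, re-deploy.
cat > deployment.yml <<'EOF'
version: 1
nodes:
  "192.168.1.10": ["web"]
  "192.168.1.11": ["db"]
EOF
# flocker-deploy deployment.yml application.yml
# To migrate "db" to 192.168.1.10, list it under that node instead and
# re-run flocker-deploy: Flocker moves the volume and re-routes traffic.
grep -q "nodes:" deployment.yml && echo "deployment.yml written"
```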
Another thing Docker lacks is a plugin / extension mechanism. Right now, it’s hard to glue together tools based on Docker (for instance, Weave and Flocker). Powerstrip may circumvent this issue. It’s an open-source project that aims to rapidly prototype extensions to Docker and glue them together. To do so, Powerstrip uses the JSON API of Docker to act as a proxy between the Docker client and the Docker server. One can add pre-hooks and post-hooks to Powerstrip: pre-hooks trigger when the Docker client sends a command to the Docker server, and post-hooks trigger when the Docker server sends its answer back to the Docker client. With such a mechanism, you can plug in extensions for Flocker and Weave: a Flocker or Weave command can be activated in a Powerstrip pre-hook when the Docker client sends a given command to the Docker server. Interesting idea. Some extensions are already available: powerstrip-flocker and powerstrip-weave are two of them.
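From memory of the Powerstrip README, hooks are declared in a small YAML file that maps Docker API endpoints to adapter URLs; treat the exact keys and URLs below as an assumption rather than gospel:

```shell
# Sketch of a Powerstrip adapters.yml: pre/post hooks keyed by Docker API
# endpoint, each referring to an adapter URL. Keys and URLs are assumptions.
cat > adapters.yml <<'EOF'
version: 1
endpoints:
  "POST /*/containers/create":
    pre: [flocker]
    post: [weave]
adapters:
  flocker: http://flocker/flocker-adapter
  weave: http://weave/weave-adapter
EOF
grep -qF "pre: [flocker]" adapters.yml && echo "adapters.yml written"
```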
That was a nice presentation. I also learned that using Docker volumes couples the Docker container to its host; hence you can hit issues when you attempt to migrate such a container to another host.
How to Train your Docker Cloud – by Andrew Kennedy
Clocker is an Apache-licensed open-source project that creates and manages a cloud of Docker containers. It relies on Brooklyn and jclouds, two other Apache projects. From what I understood:
- jclouds is an agnostic toolkit for cloud infrastructure. It is agnostic in the sense that it can communicate with AWS, Google Cloud Platform, CloudStack, and many more thanks to drivers that adapt to these platforms. These drivers are wrapped into a common API that can be exposed as a REST API.
- Brooklyn is an application management platform that lets you deploy, manage, and monitor applications thanks to blueprints. A blueprint is a description of the application that contains all the elements required for its installation and deployment (such as scaling policies, etc.). Blueprints can be expressed in several formats (YAML blueprints and TOSCA).
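A Brooklyn YAML blueprint looks roughly like this; the entity type, location name, and WAR URL are illustrative, not from the talk:

```shell
# Sketch of a Brooklyn YAML blueprint: an app entity plus the location
# (e.g. a Clocker-managed Docker cloud) to deploy it to. Names are assumptions.
cat > blueprint.yml <<'EOF'
name: my-web-app
location: my-docker-cloud
services:
- type: brooklyn.entity.webapp.tomcat.TomcatServer
  brooklyn.config:
    wars.root: http://example.com/myapp.war
EOF
grep -q "services:" blueprint.yml && echo "blueprint.yml written"
```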
Clocker is a Docker container cloud manager that can deploy applications described in the Brooklyn blueprint format. It can deploy an application onto containers spread across several nodes and multiple hosts. Clocker seems to have lots of features:
- autonomics: scaling policies that can be driven by sensors, cluster resizing
- headroom: to ensure resource availability (CPU, memory, etc.)
- container management: with a Docker image catalog, support for Dockerfiles, and automatic image creation
- placement and provisioning: on demand, with several possible placement strategies (random, CPU, memory, geography, and so on)
- network management: with network creation, IP pool control, Docker port forwarding for debugging purposes, pluggable network providers (Weave, Kubernetes, libswarm), and network virtualisation
So, lots of things… It was the end of the day. I didn’t feel much enthusiasm about Clocker among the audience, but I think it is worth having a look at since it seems to do many things.
Lastly, management can be done through a web interface (Brooklyn’s, if I am correct).