Like all conferences, the first session of the first day covers the intro and the boring stuff - like where to get first aid if your leg comes off, where the toilets are and where to get food. The last two items I paid attention to.
Next was morning tea, which meant a mad scramble to find a seat near a working power point for my notebook. Think musical chairs, but with geeks and higher stakes.
The first two days of the Conference are devoted to "mini-conferences". I was mildly interested in the kernel mini-conference; however, it was being held in the Wool Museum, which is off campus. I wasn't THAT interested - certainly not enough to make the trek - so I settled on the Cloud Symposium.
I was impressed. The last time I looked deeply into Linux cloud deployments was in 2007, when /dev/kvm was being merged into the 2.6.20 kernel. To my chagrin, I quickly realised I had neglected this area of development. Although the mini-conference was divided into separate presentations by different speakers, it all coalesced into a coherent picture.
Each presentation was full of detail. By lunchtime, my brain was mush - and there were still five more presentations to go.
The Overview
If there is one thing to take away from the Cloud Symposium, it is that there is a strong push to manage infrastructure as if it were code, by making it immutable. Basically:
1) Treat your infrastructure like cattle, not pets. Never modify infrastructure; slaughter it on a regular basis.
2) Document first. Use the documentation to create infrastructure containers. This makes the process repeatable.
3) Execute apps as one or more stateless processes.
4) Repeatability, Reliability, Resiliency.
5) Automate the development lifecycle - this extends to PAAS - People-As-A-Service.
6) Create a DevOps culture in the organisation.
The latter I find particularly interesting, as I have encountered the converse many times. Usually you have separate, non-intersecting groups of developers (or implementers) and operators (or support). This creates a no-man's-land where, once the project is committed, dev figures their job is done. What follows is a stand-off something like:
op: This app keeps crashing, you need to fix it.
dev: It works fine in the dev environment, it must be a problem with ops.
op: Sure it works fine for a single user with a 2GB database, but with 300 users and a 1.5TB db it keeps falling over.
dev: Well, how am I supposed to debug it if you don't let me work on the live environment?
op: There's no way I'm letting you near the live environment, you already produce crappy code!
Enter DevOps, with a separate team devoted to the deployment process and aimed at establishing a culture and environment where building, testing, and releasing can happen rapidly, frequently, and more reliably.
The specific talks focused on individual aspects of this process and culture, including the psychology involved. There was a strong emphasis that an organisation's processes will reflect its structure and communication patterns - if an organisation is compartmentalised in its thinking, with little knowledge sharing, then its processes will share in this deficiency.
Now to the specific talks.
1/1: Continuous Delivery using blue-green deployments and immutable infrastructure by Ruben Rubio.
The traditional dev model introduces risk due to it encouraging or permitting the following as part of ongoing development:
- Workarounds during upgrade
- Different people performing upgrade
- Lack of continuous documentation.
Following a blue/green deployment model, use containers to create an environment where the infrastructure is immutable and only the data is mutable.
During upgrades:
- Never modify the infrastructure
- Recreate everything that is not data.
This makes rollback easy and avoids configuration drift. It also means updated and accurate infrastructure documentation.
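The mechanics of the switch can be sketched in a few lines. This is a minimal illustration, not anything from the talk: the `Router`, `deploy`, and environment names are all hypothetical stand-ins for whatever load balancer and build tooling you actually use.

```python
# Illustrative sketch of a blue/green cut-over (all names are hypothetical).
# Two complete, immutable environments exist side by side; a router flips
# all traffic from one to the other in a single step, so rollback is just
# flipping back.

class Router:
    """Points traffic at exactly one environment at a time."""
    def __init__(self, live):
        self.live = live

    def switch_to(self, env):
        self.live = env  # single cut-over step: no in-place modification


def deploy(router, build_env, health_check):
    """Build a fresh environment, verify it, then cut traffic over."""
    green = build_env()            # recreate everything that is not data
    if not health_check(green):    # verify before any traffic arrives
        return router.live         # blue keeps serving; nothing changed
    previous = router.live
    router.switch_to(green)
    return previous                # keep the old env around for rollback


if __name__ == "__main__":
    router = Router(live="app-v1")
    old = deploy(router, build_env=lambda: "app-v2",
                 health_check=lambda env: True)
    router.switch_to(old)  # rollback is just flipping back
```

Because the old environment is never modified, rolling back is a second flip of the router rather than an attempted repair of a half-upgraded system.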
1/2 The Twelve-Factor Container by Casey West
The second talk dovetailed nicely with the first by codifying 12 factors with best practices for container deployments:
1: One codebase tracked in revision control, many versions
Best Practice: use the environment and/or feature flags. Use devops.
2: Explicitly declare and isolate dependencies
Best Practice: Depend upon base images for default filesystem and runtimes
3: Store configuration in environment
Best Practice: Use environment variables, not config files
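A minimal sketch of what this factor looks like in practice - the variable names (`DATABASE_URL`, `DEBUG`) and defaults here are my own illustration, not from the talk:

```python
import os

def load_config():
    """Build app configuration from environment variables only.

    The same image can then run unchanged in dev, staging and prod;
    only the environment differs between deployments.
    """
    return {
        # Fall back to a local dev default when the variable is unset.
        "database_url": os.environ.get("DATABASE_URL", "sqlite:///dev.db"),
        # Environment variables are strings, so parse booleans explicitly.
        "debug": os.environ.get("DEBUG", "0") == "1",
    }
```

Keeping configuration out of files baked into the image is what lets the image itself stay immutable across environments.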
4: Treat data as local
Best Practice: Connect to network attached services using connection info from the environment.
5: Strictly separate build and run stages
Best Practice: Build immutable images and then run those images
Best Practice: Lifecycle - Build, Run, Destroy
6: Execute the app as one or more stateless processes
BP: Schedule long-running processes (LRPs) by distributing them across a cluster of physical hardware.
7: Export services via port binding, don't make assumptions about addresses or ports.
8: Scale out horizontally by adding instances
9: Maximise robustness with fast startup and graceful shutdown
10: Keep dev, staging and prod as similar as possible.
Best Practice: Run containers in dev.
11: Treat logs as event streams
Best Practice: Log to stdout
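In Python terms, this factor just means wiring the logger to stdout and letting the container runtime or log router handle collection - a sketch, with the logger name and format being my own choices:

```python
import logging
import sys

def make_logger(name="app"):
    """Return a logger that treats logs as an event stream on stdout.

    The app never manages log files or rotation; whatever runs the
    container captures stdout and routes it onward.
    """
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    handler = logging.StreamHandler(sys.stdout)  # stdout, not a log file
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    return logger
```
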
12: Run admin/management tasks as one-offs
BP: Reuse application images with specific entry points for tasks
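One way to read that last best practice: the same artifact that serves the app also runs its one-off tasks, selected by entry point. A hypothetical sketch (the task names `serve` and `migrate` are illustrative):

```python
import sys

# One image, multiple entry points: the container command line picks the
# task, so admin jobs run in exactly the same environment as the app.

def serve():
    return "serving"

def migrate():
    return "migrated"

TASKS = {"serve": serve, "migrate": migrate}

def main(argv):
    """Dispatch to a task by name; default to serving the app."""
    name = argv[0] if argv else "serve"
    task = TASKS.get(name)
    if task is None:
        raise SystemExit(f"unknown task: {name}")
    return task()

if __name__ == "__main__":
    print(main(sys.argv[1:]))
```

Running `migrate` this way guarantees the one-off job sees the same dependencies and configuration as the long-running processes, rather than a hand-built admin box.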
The mantra:
- Repeatability
- Reliability
- Resiliency