Several important terms come up when discussing containers, DevOps, CI/CD and agile:
- Canaries / Canary testing: Canary tests are minimal tests to quickly and automatically verify that all dependencies are satisfied. Canary tests are run before other time-consuming tests, and before wasting time investigating code. See also Sentinel Species.
- Immutable / Mutable: In object-oriented and functional programming, an immutable object (unchangeable object) is an object whose state cannot be modified after it is created. This is in contrast to a mutable object (changeable object), which can be modified after it is created. Containers are considered to be immutable. See also Immutable object.
- CI / CD: Whereas CI (continuous integration) deals with the build/test part of the development cycle for each version, CD (continuous delivery) focuses on what happens with a committed change after that point. See also Continuous integration / continuous delivery.
- DevOps: a software engineering culture and practice that aims at unifying software development (Dev) and software operation (Ops). Note that containers are not DevOps, but a technology that can assist in enabling the DevOps journey. See also DevOps and the Puppet State of DevOps report for 2018 in the Puppet blog.
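As a minimal illustration of the canary-test idea above, a canary might do nothing more than confirm that every dependency loads before the expensive test suites run. The module names below are hypothetical stand-ins for an application's real dependencies:

```python
import importlib

# Hypothetical dependency list; a real canary would name the application's actual requirements.
REQUIRED_MODULES = ["json", "sqlite3", "ssl"]

def run_canary(modules):
    """Try to import each dependency and return the names of any that failed to load."""
    failures = []
    for name in modules:
        try:
            importlib.import_module(name)
        except ImportError:
            failures.append(name)
    return failures

if __name__ == "__main__":
    failed = run_canary(REQUIRED_MODULES)
    if failed:
        raise SystemExit(f"Canary failed, missing dependencies: {failed}")
    print("Canary passed - safe to run the full test suite")
```

If the canary fails, the pipeline stops immediately, which is exactly the "fail fast before wasting time" behaviour the definition describes.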
Areas of Change – Introducing Containers into the Organisation
The use of container architectures fundamentally changes the culture of an organisation and the interactions between teams. The traditional role of operations teams, which required an extensive understanding of application architectures in order to manage the deployment, running and debugging of production systems, morphs into a more business-focused role concerned with availability, performance and versioning.
Containers can be used to deploy both system and application level resources. In general, systems tend to be more mutable and benefit strongly from a base image which is fast to deploy, with a configuration management system layered on top. This allows overall system services to be regionalised easily, without requiring a separate image for every location where systems are deployed, and it also provides a useful mechanism for detecting and reporting on change (which may or may not be authorised). The adoption of cloud services changes this requirement somewhat, but the ability to have writable infrastructure where change is clearly reported and tracked is invaluable (day two operations).
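The change-detection role described above can be sketched as a simple drift check: compare the desired state held by the configuration management system against what is actually on a host, and report any difference. All keys and values here are illustrative:

```python
def detect_drift(desired: dict, actual: dict) -> dict:
    """Report settings that differ from, or are missing in, the actual host state."""
    drift = {}
    for key, want in desired.items():
        have = actual.get(key)
        if have != want:
            drift[key] = {"expected": want, "found": have}
    return drift

# Illustrative desired vs. actual configuration for one host.
desired = {"ntp_server": "ntp.example.com", "timezone": "UTC", "ssh_root_login": "no"}
actual = {"ntp_server": "ntp.example.com", "timezone": "AEST", "ssh_root_login": "no"}

print(detect_drift(desired, actual))
```

Whether a reported difference is an authorised change or not is then a policy decision, but the tracking itself is mechanical.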
Application level containers allow for clear separation between applications and the platforms they are deployed upon. Developers can release versions to operations teams as required, and because applications are packaged as self-contained micro-services via containers, there is little or no hand-over around installation procedures, tuning or rollback of application deployments. Architectural design and security testing continue to be key factors in the application process, but the interactions with host operating systems become less important, due to the container abstraction layer.
Much of the benefit of adopting a container model sits at the extreme ends of the overall software life cycle. Developers can complete their work and ‘freeze’ application images, ruling out any mistakes incurred by manual processes during the production deployment of the application. Operations teams have a predictable release and rollback framework, removing the potential for mistakes due to unclear procedures, bugs or fatigue. The design, validation and testing of application modifications remain generally unchanged for much of the cycle. However, committing small changes with automated testing, results validation and automatic release (CI and CD) brings increasing benefits the more fully it is implemented.
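The ‘freeze’ and predictable-rollback behaviour described above can be modelled as an immutable map of version to image digest, where deploying or rolling back only moves a pointer and never alters a frozen image. Version numbers and digests below are made up for illustration:

```python
class ReleaseTracker:
    """Track immutable image versions; deploy and rollback only move a pointer."""

    def __init__(self):
        self._images = {}    # version -> digest, never changed once registered
        self._history = []   # deployment order, newest last

    def register(self, version, digest):
        if version in self._images:
            raise ValueError(f"{version} is frozen and cannot be changed")
        self._images[version] = digest

    def deploy(self, version):
        self._history.append(version)
        return self._images[version]

    def rollback(self):
        self._history.pop()                  # discard the failed release
        return self._images[self._history[-1]]

tracker = ReleaseTracker()
tracker.register("1.4.0", "sha256:aaa")      # illustrative digests
tracker.register("1.5.0", "sha256:bbb")
tracker.deploy("1.4.0")
tracker.deploy("1.5.0")
print(tracker.rollback())                    # back on the 1.4.0 image
```

Because an image can never be re-registered under the same version, rollback is always a return to a known-good artifact rather than a rebuild.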
Canary testing in containers is often used by enterprises to validate a new release, providing a gate between a pre-production image and the deployment of code that the developer considers consumable. Canaries come in a number of forms, from short-lived deployments used specifically to validate commits to a code base, through to long-lived ‘near production’ systems that may be made available to end users. When deploying new releases, near-production canaries should receive the release first and serve as a key monitoring point for operations staff, surfacing problems before actual production systems are touched.
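The gating role of a near-production canary can be sketched as a simple threshold decision: compare the canary's observed error rate against the current production baseline, and only promote the release if it stays within tolerance. The traffic figures and thresholds below are invented for illustration:

```python
def promote_canary(canary_errors, canary_requests, baseline_rate, tolerance=0.01):
    """Promote only if the canary's error rate is within tolerance of the baseline."""
    if canary_requests == 0:
        return False  # no traffic observed: not enough evidence to promote
    canary_rate = canary_errors / canary_requests
    return canary_rate <= baseline_rate + tolerance

# Illustrative figures: 12 errors in 5,000 canary requests vs. a 0.2% production baseline.
print(promote_canary(12, 5000, baseline_rate=0.002))   # within tolerance
print(promote_canary(400, 5000, baseline_rate=0.002))  # well outside tolerance
```

In practice the error rate would come from the monitoring stack rather than hand-fed numbers, but the promote-or-hold decision is this simple at its core.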
Freeing operations teams from needing a deep understanding of how each application works brings benefits to the organisation. These teams can now focus on overall infrastructure state, performance and availability. In environments with hundreds or thousands of applications, expecting a single operations team to have specialist knowledge of all of them is unrealistic. In a container-based environment, application failures on deployment are simply rolled back via the container framework, without any special knowledge. Being able to treat every application as simply a performance metric with a number of instances and an availability footprint is key to improving team performance, and that focus transfers to ensuring consistent and well-managed infrastructure and application spread.
For example, for a modification to an existing application:
- Design for the modification is still a relatively manual process (unless you’ve moved to a machine-learning model of design and development).
- Code changes are, in most cases, made and unit-tested by developers or DevOps engineers, with immediate validation of code quality, style and other conformance checks via the CI pipeline.
- Visual and functional changes are still to be validated and signed off on by end users.
- Developers / DevOps engineers have already created a container image for end-user testing.
- The process to sign the container and allocate a version may occur prior to end-user testing or after validation; either way, there are no changes to code at this point and no potential to miss steps required to progress to a production deployment (this depends on a solid design for the dev/UAT/prod pipeline – i.e. no ’tweaking’ of code to differentiate environments).
- All other functional and vulnerability testing is automated, to ensure that there is no regression of features or introduction of other bugs.
- Once the above testing succeeds, the container is automatically made available to a production application registry.
- Operations are not involved if errors or vulnerabilities are detected up to this point in the process.
- Any deployment of the container to production can then be made via other change processes.
- If a production deployment of a new container version causes failures, container management frameworks allow quick and easy rollback to a previous version (and in many cases, because availability monitoring is built into the container framework, thresholds can be set to trigger this automatically on a new release).
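The steps above can be sketched as a sequence of gates, where each stage must pass before the next runs and operations is only involved once an image reaches the production registry. The stage names are illustrative; in a real pipeline each stage would call out to actual build and test tooling:

```python
def run_pipeline(stages):
    """Run each stage in order; stop at the first failure and report which gate blocked it."""
    for name, stage in stages:
        if not stage():
            return f"FAILED at {name}: release blocked before production"
    return "Image published to production registry"

# Illustrative gates matching the steps in the text; each lambda stands in for real tooling.
stages = [
    ("unit tests", lambda: True),
    ("build and sign image", lambda: True),
    ("functional tests", lambda: True),
    ("vulnerability scan", lambda: True),
]

print(run_pipeline(stages))
```

The key property is that a failure at any gate stops the pipeline without human intervention, which is why errors caught here never involve the operations team.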
As application code and necessary dependencies are encapsulated within the container image, time spent on installation and validation is eliminated. The IT operations team is able to focus on the health, stability and performance of the overall infrastructure. Automating every step of the process builds further efficiency and reliability into the pipeline, with consequent savings, and enables teams to focus on delivering more business value.
Containers are a useful tool on the DevOps journey, helping with the standardisation, trust, automation and potentially self-service components of the software development life cycle (SDLC). The removal of team silos and the cultural shift come in time, with the development of trust between application developers and operations teams. Increased stability and agility are just two of the benefits of new partnerships in a transformation journey.
There are many modifications to process and behaviour as a result of introducing container technology into an organisation – enough for numerous articles. However, the following links provide background reading and references as a precursor to more in future blogs.
Some process separation and container history
A little out of date on the management tools (e.g. Docker Swarm), but a lot of good information (note that a diagram is used from this document).
Basic architectural concerns
Generally useful articles