That’s the right vision, but unfortunately the term never became more than a cultural destination, which made it too easy for too many teams to prematurely plant a victory flag on Mount Automation. Claiming DevOps “success” too soon is like naming your kids after Greek deities and then telling the nanny they’re immortal (for the record, don’t: “Hermes” will never be kissed and “Oedipus” is destined for Freudian conflict). Few agree on when the ambiguous “DevOps” merit badge is earned… but all agree on a few things.
Here’s what we know…
- Modern applications change frequently. IT budgets don’t.
- Users expect new features to be introduced continuously. Ops teams have been trained to loathe change.
- Businesses are more dependent than ever on apps and websites for revenue and brand equity. IT teams can’t support them with tools developed during the Cold War.
What is changing is the nature of change. Shifting expectations about app quality, availability, and performance are as cataclysmic as the shift from DOS to Windows or from hand looms to cotton gins. DevOps as a term has long since been incarcerated in the tech-cliché prison alongside cellmates Bubba “.com” Bonecrusher and Tiny “IoT” Livereater. Yet the underlying need for new processes and tools to support faster dev cycles remains as dire as ever.
Here are four things teams capable of achieving “DevOps velocity” do differently than the rest:
- They implement modular architectures: they split monolithic apps into components, often called microservices, to avoid a single point of failure and to accelerate root cause analysis when monitoring changes and fixing bugs. It’s much easier to take a single microservice, say the analytics service, offline for fixing than to take the entire app down.
- They have agile dev/test pipelines: containers, the ops movement associated with Docker, make it easy to manage microservices at scale. Unlike traditional virtual machines, containers share a common OS kernel, which makes them far lighter to deploy in bulk. Popular web services like Google, Twitter, and LinkedIn may provision hundreds of thousands of containers per day; each one may live for only seconds. [For the record, while containers are enjoying a renaissance thanks to DevOps and Docker, the concept of independent systems sharing a common OS kernel has been around since at least 1979, when the chroot command was introduced to Unix.]
- They continuously integrate and deliver: microservices and containers are most effective when code is continuously integrated as changes are committed, tests are automated as builds are compiled, and builds are released continuously as tests pass. Case in point: Amazon deploys an average of 23,000 changes per day to production. Compare that with the typical enterprise, which pushes code once every nine months.
- They have service-aware monitoring: to prevent downtime or user-impacting performance issues, DevOps teams automatically map infrastructure to services and services to business impact. That way, monitoring alerts can identify not only what went wrong but, more important, what it means. Automated impact analysis is to DevOps what oven timers are to chefs: cook without them at your peril; nobody enjoys burnt food.
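The continuous-delivery gate described above reduces to a simple loop: integrate every commit, run the tests, and release only the builds that pass. A minimal sketch in Python, where `run_tests` and the build names are purely illustrative stand-ins for a real test suite and real artifacts:

```python
# Sketch of a CI/CD gate: every commit is built and tested;
# only green builds are released. All names here are hypothetical.

def run_tests(build: str) -> bool:
    # Placeholder for invoking the real test suite against a build.
    # Here we pretend any build whose name ends in "broken" fails.
    return not build.endswith("broken")

def pipeline(commits: list[str]) -> list[str]:
    """Return the builds that passed tests and were released."""
    released = []
    for build in commits:
        if run_tests(build):
            released.append(build)  # continuous delivery: ship every green build
    return released

print(pipeline(["build-1", "build-2-broken", "build-3"]))
```

The point of the sketch is the gating, not the test logic: releases happen automatically on every passing build rather than on a quarterly calendar.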
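The service-aware monitoring idea in the last bullet can be sketched in a few lines: map each host to the service it backs, and each service to its business impact, so an alert arrives already annotated with what it means. The host names, service names, and the `enrich_alert` helper below are all invented for illustration, not any particular monitoring product’s API:

```python
# Sketch: service-aware alert enrichment (all names are hypothetical).

# Infrastructure -> service mapping.
HOST_TO_SERVICE = {
    "web-01": "checkout",
    "db-03": "analytics",
}

# Service -> business-impact mapping.
SERVICE_TO_IMPACT = {
    "checkout": "customers cannot complete purchases; direct revenue loss",
    "analytics": "internal dashboards delayed; no revenue impact",
}

def enrich_alert(host: str, symptom: str) -> str:
    """Turn a raw infrastructure alert into one that states business impact."""
    service = HOST_TO_SERVICE.get(host, "unknown service")
    impact = SERVICE_TO_IMPACT.get(service, "impact unknown; escalate to triage")
    return f"{host}: {symptom} -> service '{service}' -> {impact}"

print(enrich_alert("web-01", "CPU saturated"))
```

With the mapping in place, the same low-level symptom (“CPU saturated”) produces very different alerts depending on whether it hits the checkout service or an internal dashboard, which is exactly the “what it means” half of the bullet.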
Soon, we’ll evaluate products and services based on the ability of their dev teams to rapidly push features. Just like I won’t eat in a restaurant the Department of Health grades a “B” (one grade above “rat-infested”), I also won’t buy online from a company with a crash report that would make NASCAR proud or a feature release timeline that resembles the EKG of a cadaver.
…Which spells opportunity for tech entrepreneurs: be the first to commercialize Diamond Certification for DevOps …and you just may invent the next Silicon Valley totem that we’ll co-opt, abuse, cliché, …and worship.