Continuous Engineering - Continue Design and Automation all the way to Production

One of the terms that gets tossed around a lot in discussions of Continuous Engineering is DevOps. That is because an organization that is effectively using DevOps methods and tools is, by default, practicing Continuous Engineering. Unfortunately, DevOps also tends to mean different things to different people. There is, however, a fairly concise set of properties that determines the level of success of an organization adopting DevOps. Of course, since it is an IT methodology, there is an acronym for it: CALMS.

·         Culture
·         Automation
·         Lean
·         Measurement
·         Sharing
 
Culture
I’ve always liked this quote, which came from CEB DevOps promotional materials:
“DevOps is ultimately not about the organizational structure, but about the ability to break out of siloed mindsets and build stronger mechanisms for Infrastructure-Applications collaboration.”
That sums up the culture aspect very well. It doesn’t matter what your role is, who your supervisor is, or where you land in the pecking order. What does matter is that we think and act as “A TEAM” as opposed to part of a team.
I was on a support call recently and I heard a comment that went something like this:
“What’s wrong with your software now?”
When I heard that I cringed a little, because that is the polar opposite of where we need to be as an organization. It is not yours, mine, or theirs; it is ours.
Automation
 
I am a huge proponent of automation and have always leaned toward automated environments. A quick example: when we first implemented a SAN at CJIS, it was one of the early LSI Logic and Brocade configurations. It was an extremely complex data storage system with full redundancy, plus a secondary SAN to which all of the data was replicated. It was some very important data. As early adopters, there weren’t a lot of tools available to us for monitoring and managing at the level we desired. We would take 4 different reports from the switches and controllers, combine the data in a spreadsheet, manipulate it, and produce another report that showed us exactly where our data was in the SAN and where the hot spots were during operations. It took about 2 hours/day to produce the report.

Using Perl and some shell scripts, I was able to gather the data, manipulate it, produce the report, and provide a rudimentary search of the final output for the information we required. It took me about 3 weeks to write and test, and it consumed most of my day (or evening really; I was on second shift at the time). Once completed, producing the desired results took between 10 and 15 minutes. We used that tool, with modifications, every day for 4 years (automatically, of course) until we implemented an Enterprise SAN using EMC.
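To give a flavor of that kind of report consolidation, here is a minimal sketch in Python (the original was Perl and shell scripts; the file format, the `lun` and `io_rate` fields, and the 80% hot-spot threshold are all hypothetical stand-ins for the real switch and controller output):

```python
import csv
from collections import defaultdict

def build_hotspot_report(report_paths):
    """Merge per-device CSV reports keyed on LUN id and flag hot spots.

    Assumes each report has 'lun' and 'io_rate' columns -- hypothetical
    stand-ins for the real switch/controller report formats.
    """
    io_by_lun = defaultdict(float)
    for path in report_paths:
        with open(path, newline="") as fh:
            for row in csv.DictReader(fh):
                io_by_lun[row["lun"]] += float(row["io_rate"])
    if not io_by_lun:
        return []
    # Treat anything above 80% of the busiest LUN's rate as a hot spot.
    threshold = 0.8 * max(io_by_lun.values())
    return sorted(
        (lun, rate, rate >= threshold) for lun, rate in io_by_lun.items()
    )
```

The shape is the point: gather several machine-produced reports, merge on a common key, and derive the one view (hot spots) that used to take two hours of spreadsheet work.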
 
The point I want to make is that the time you think you don’t have to spend automating your repetitive tasks can be more than recovered by just doing it. The more successful DevOps organizations have their entire deployment chain automated. Every step along the way has been automated to the point that once code is committed it can be promoted to Production in a few hours or less, and releases are continuous, not periodic as we operate today. We have some of the technology necessary to make this work in the organization; it just needs to be more widely used. We will introduce some of these tools in later discussions.
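Stripped of any particular tooling, the gating logic of such a chain reduces to a sequence of automated stages, each of which must pass before promotion continues. A sketch, with generic stage names (not a specific product’s pipeline):

```python
def promote(commit, stages):
    """Run each automated gate in order; stop at the first failure.

    Each stage is a (name, check) pair, where check is a callable that
    returns True on success. Returns the name of the stage that blocked
    promotion, or None if the commit reached production.
    """
    for name, check in stages:
        if not check(commit):
            return name
    return None

# Hypothetical stages; real checks would invoke build, test, and deploy tools.
pipeline = [
    ("build", lambda commit: True),
    ("unit-tests", lambda commit: True),
    ("integration-tests", lambda commit: commit != "broken-commit"),
    ("deploy-to-prod", lambda commit: True),
]
```

The key property is that no human decision sits between the stages: a commit either flows all the way to production or is stopped, automatically, at a named gate.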
 
Lean
 
Minimum Viable Product. This is one of my favorite terms within DevOps and Design Thinking. The implication is that a release should provide only as much functionality as is needed to deliver the desired/required business value. Maybe someday I’ll address what business value really means; in the meantime, there is a very good book by Mark Schwartz called “The Art of Business Value” that explains the term from multiple viewpoints. I highly recommend it for anyone who wants to understand business value. That is not all that Lean methodologies could offer our systems; we will also discuss this more in future posts.
Measurement
 
In the “Agile Manifesto,” the first principle states, “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.” There are many metrics that can be used to determine what “valuable” really means. What key metrics do we use today? For the most part it comes down to the user experience: How many quotes are completed? How many bills are paid online without issue? How fast is it to pay that bill? Etc.
 
Part of that value includes the need to measure ourselves. Closely tracking defects, catching them early in the process, and measuring how long they take to correct are just a few metrics that can be applied to gauge the value of IT as a service to the company. The introduction of ServiceNow should help a great deal in this regard: it is easier and more open (with the right permissions) to drill into the data being collected and to create reports, trends, and other visual displays. In the past we depended primarily on subjective data to show improvement (or the lack of it); we now have a way to show improvement based on objective data.
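One of the simplest objective metrics of that kind is the mean time to correct a defect. A sketch, assuming defect records exported with open/fix timestamps (the record shape and field names are hypothetical, not ServiceNow’s actual schema):

```python
from datetime import datetime

def mean_hours_to_correct(defects):
    """Average hours from 'opened' to 'fixed' across corrected defects.

    Each defect is a dict with ISO-8601 'opened' and 'fixed' timestamps;
    defects that are still open (no 'fixed' value) are skipped.
    """
    hours = [
        (datetime.fromisoformat(d["fixed"])
         - datetime.fromisoformat(d["opened"])).total_seconds() / 3600.0
        for d in defects
        if d.get("fixed")
    ]
    return sum(hours) / len(hours) if hours else 0.0
```

Tracked release over release, a number like this replaces “it feels like we’re getting faster” with a trend line anyone can inspect.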
 
Sharing
 
We were all taught to share as children; some are better at it than others. The sharing aspect of DevOps is very broad. It includes sharing people, data, configurations, processes, resources, roles, documentation, and, most importantly, information. One of the themes our senior leadership has been pushing is the broadening of roles within IT, allowing individuals to take on responsibility beyond what might be considered part of their defined role. This plays very well in the DevOps methodology, since the idea is to have resources available that can handle multiple types of tasks at different times. In a fast-paced environment we don’t want to be waiting on a single individual because they are the only one who can perform the work. Ideally there would be several interested individuals who seek out those skills and can be made available as needed.
 
In a world where automation is the norm, broadening becomes easier, because automation setup is very often a self-documenting process. The configuration files and scripts become the source of the documentation. Anyone with the proper permissions can review that information and either re-use it or modify it without much difficulty. This type of environment also prompts greater sharing among the team(s).
 
The outcomes delivered by the DevOps methodology have been proven across the industry: higher quality, faster time to market, lower cost, and easier maintenance. It is a huge cultural shift to implement, but well worth the effort. We can all expect to see more on this topic in the not-too-distant future.
 
Next week we will discuss Requirements Management.