By Cyrus Howells | Date Published: April 05, 2021 | Last Updated: April 01, 2021

This article first appeared in HDI.
I want to share an approach I've used in improvement registries and in project and task prioritization. Most significantly, it is a great way to calculate the risk associated with proposed changes. When I was asked to run the change process at a hospital, the expectation was to improve the change control process and prevent changes that could pose risk to production operations. With a change approval board (CAB) in place, meeting daily for thirty minutes to review ten to twenty change requests of all sizes and risk levels, I began the journey.

Not all changes are the same, and we needed to understand what levels like "high" or "low" actually meant. We also needed efficient and effective CAB meetings, with adequate time to review and focus on the right changes, done at the proper time, with no surprises resulting from the change.
The Problem
There were unclear definitions around the process, with subjective methods to understand priority, impact, and risk levels. There were also minimal boundaries such as lead times and change windows. The result was that all change requests were being treated the same, generally labeled as high priority, high positive impact, and low risk.
This led to time-consuming CAB meetings to talk through each request because, in reality, they weren't all the same. We talked through every change, focusing on requests based on who was talking rather than on actual risk level, and we had little or no time to review the requests before the CAB meetings. The result was changes made at the wrong times, with many surprises.
The Vision
Above all else, change management is about risk. Have we clearly identified the risk of implementing a change? Have we mitigated risk to an acceptable level to prevent production issues? Do we know whether we can quickly test the change and back it out if a failure occurs?
The Challenge
First things first: we had to fix the risk calculator. Risk calculators come in all forms, from simple and subjective to complex and time-consuming. We needed a calculator that was quick, simple, clear, consistent, complete, and foundational; something that could be completed in five to fifteen minutes and provide a realistic risk level while answering the core questions the CAB needs answered during review.
A Unique Approach
Hoshin Kanri is a strategic planning process in which strategic goals are communicated and put into action. It is a top-down approach, meaning everything in an organization is evaluated against those goals. Early in my career, I saw it used to score every project in an organization against the same goals, with each question having three possible answers scored 1, 2, or 3. Then, and this is key, the answers were multiplied, not added. Multiplying amplifies the scores, letting you see subtle differences in value between two projects that otherwise appear the same.
I found this approach worked for anything from projects to evaluating tasks on my personal to-do list when I couldn't decide what I should work on first. If it worked in these areas, I believed it could work for calculating risks for proposed changes.
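To make the multiply-versus-add point concrete, here is a minimal sketch. The two projects and their per-question answers are invented for illustration; only the scoring mechanic comes from the article.

```python
from math import prod

# Two hypothetical projects scored on three goals (1 = weak fit, 3 = strong fit).
project_a = [2, 2, 2]
project_b = [3, 2, 1]

# Added, the projects look identical...
sum_a, sum_b = sum(project_a), sum(project_b)

# ...but multiplied, the single weak answer in project_b drags it down.
prod_a, prod_b = prod(project_a), prod(project_b)

print(sum_a, sum_b)    # 6 6
print(prod_a, prod_b)  # 8 6
```

The weak answer acts like a penalty factor: one "1" can halve or third the total, which is exactly the kind of subtle difference an additive score hides.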
Implementation
We identified a set of questions, each with three possible answers, to use with this new approach. After months of testing and adjusting both the questions and the score ranges for low-, medium-, and high-risk changes, we implemented this as our risk calculator.
We settled on nine questions, each with three simple, objective, and distinct answers scored 1 to 3. A person could answer the questions in a matter of minutes. Our guidance was that no single question dictates the result, and that if you spend more than fifteen minutes answering the questions, you're either overanalyzing or you probably aren't ready to submit your change request. Once the questions were answered, the scores were multiplied.
The questions we chose were as follows:
- How many end users/customers are impacted by this change?
- How many business days between submission and start date?
- How complex is the change?
- How easily is the change verified?
- How long would it take to back out the change?
- If the change fails, can service be restored within the change window?
- Will there be an outage to end users, regardless of how short?
- What is the highest tier level system this change will touch?
- Will this be implemented during a standard maintenance window?
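The calculator itself can be sketched in a few lines. The nine questions below are abbreviated from the article; the 1/2/3 answer definitions and the low/medium/high cut-offs are illustrative placeholders, not the values the team arrived at (the article doesn't publish its tuned score ranges).

```python
from math import prod

# Abbreviated versions of the article's nine CAB questions, each answered
# 1 (lowest risk) to 3 (highest risk).
QUESTIONS = [
    "End users/customers impacted",
    "Business days between submission and start date",
    "Complexity of the change",
    "Ease of verification",
    "Time to back out the change",
    "Service restorable within the change window on failure",
    "Outage to end users, however short",
    "Highest tier system touched",
    "Implemented during a standard maintenance window",
]

# Placeholder thresholds: the product of nine 1-3 answers ranges
# from 1 to 3**9 = 19683. Real cut-offs would be tuned over months,
# as the article describes.
LOW_MAX = 64
MEDIUM_MAX = 1024

def risk_level(answers):
    """Multiply the nine 1-3 answers and map the product to a risk band."""
    if len(answers) != len(QUESTIONS) or any(a not in (1, 2, 3) for a in answers):
        raise ValueError("expected nine answers, each 1, 2, or 3")
    score = prod(answers)
    if score <= LOW_MAX:
        return score, "low"
    if score <= MEDIUM_MAX:
        return score, "medium"
    return score, "high"

print(risk_level([1, 1, 2, 1, 1, 1, 2, 1, 1]))  # (4, 'low')
print(risk_level([3, 2, 2, 2, 3, 2, 2, 3, 2]))  # (1728, 'high')
```

Because the answers multiply, a change that scores 3 on even two or three questions lands in a much higher band than one that scores 2 across the board, which is the behavior the CAB wants.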
Expected Results
The approach was easy for practitioners to adopt. With a clear view into risk, we were able to focus on the right changes in the CAB meetings, spending little time on low-risk changes and concentrating on medium- and high-risk changes. This simplified our meetings: we only asked whether there were questions on low-risk changes, which were the majority of change requests, and talked through the medium-risk changes. Each high-risk change was covered in a separate CAB meeting dedicated to that change request.
Our daily meeting went from thirty minutes to ten to fifteen minutes, there were few surprises when changes were implemented, and our top-priority incident count dropped significantly, because failed changes had accounted for most of those incidents.
We also had some pleasant surprises, primarily because we started asking more specific questions:
- Because we asked when a change would be implemented, we implemented more changes off hours.
- When we asked how long the approvers were given to review, we got longer lead times.
- The approach allowed us to refine our management information page to clearly show significant changes.
- We started focusing more on utilizing maintenance windows, and we found this improvement reduced the need for other process improvements.
What You Need To Know
We didn't live on ITIL Island, where the sun shines, birds sing, and our processes are in perfect harmony, but we did harness the best practices of ITIL, tailored to our organization. This can look different as you adapt it to yours. The approach focuses on people, process, and the tool, and it is unique without being complex.
In the end, we drove behavior change with a pull-versus-push method, using a data-driven, repeatable, and predictable approach.