Proactively set quality expectations by introducing closed-loop self-assessments

[Photo: the four seasons in one photo. Photo credit: eirikso via Compfight cc]

Humans crave knowledge–the ability to predict what comes next–as part of their planning process. Understanding the seasons is a type of knowledge: spring turns into summer, summer into fall, fall into winter, winter into spring.

In the same way, we want knowledge about our work environment. If I’m going to be giving an exam via the learning management system, I want to be sure the system won’t crash.

What happens when people don’t have much knowledge–when they have little idea of what’s to come?

People start reading into little things to guess about the future. For example, the Federal Reserve has historically been secretive about its policy decisions, leading people to watch Alan Greenspan’s facial expressions to guess whether he was going to change monetary policy. Or, Google hasn’t made a statement of direction but they seem to be dropping standards compliance, so maybe they’ll drop IMAP support for Gmail one day. Or, singular experiences stay with us: the last time I bought a pair of Acme jeans they ripped, so if I bought another pair they would probably rip too.

These conclusions aren’t necessarily valid, but they’re the best knowledge people have of the situation. The same dynamic applies to IT service: by default, IT’s quality of service will be judged by the stories people tell and by the (emotional) experiences people have when service breaks.

The great thing is, IT management tools exist to give people more knowledge about IT service quality. These tools can supplant stories and individual experiences by providing better information from the source. Then, by building closed-loop processes, you can transform this information into knowledge of the future that you and campus can use for planning.

In ITIL, reviewing service quality is primarily the domain of service level management (and to a lesser extent, continual service improvement). But you don’t have to rely just on ITIL’s (fairly formal) process recommendations to build reliable IT service quality expectations.

The simplest way to think about building a quality review process is through the Deming (PDCA) cycle:

  1. Plan: What quality should be expected? Set service level targets that make sense to customers: What does uptime look like during peak demand? How long will turnaround be? What matters to the customer, and what can IT reasonably deliver? Communicate this clearly to the customer. Start by focusing on the customer’s key areas of concern.
  2. Do: Implement a quality/service level scorecard to review planned vs. actual quality. Try very hard to think of these metrics as a reflection of reality and not reality itself: don’t manage to the metrics.
  3. Check: Review the scorecard with the customer. Dig into any areas of concern, and capture qualitative feedback too–the scorecard could be green while the customer is unhappy. This is a great opportunity to learn more about what the customer really cares about. Be open and honest–acknowledging any problems in service quality shows people that the information you present is reliable.
  4. Adjust: Build quality improvement plans/continual service improvement plans to respond to what you found. Maybe different service level targets are needed. Maybe countermeasures are needed to prevent certain types of issues. Maybe process improvement work is needed to reduce process variation.
    This step is also what “closes the loop” on this process–here you’re using process outputs to improve the measuring process as well as service quality.
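To make the “Do” and “Check” steps concrete, here’s a minimal sketch of a planned-vs-actual service level scorecard. Everything here is illustrative–the metric names, targets, and measured values are invented for the example, not drawn from any real system or tool:

```python
# Minimal scorecard sketch: compare planned service level targets
# against measured actuals and flag each metric green or red.
# All names, targets, and values below are hypothetical.

from dataclasses import dataclass


@dataclass
class Metric:
    name: str
    target: float              # the planned service level (from the "Plan" step)
    actual: float              # the measured result for the review period
    higher_is_better: bool = True  # False for metrics like turnaround time

    @property
    def met(self) -> bool:
        """Did actual performance meet the planned target?"""
        if self.higher_is_better:
            return self.actual >= self.target
        return self.actual <= self.target


def scorecard(metrics: list[Metric]) -> str:
    """Render a simple planned-vs-actual table for the customer review."""
    lines = []
    for m in metrics:
        status = "GREEN" if m.met else "RED"
        lines.append(
            f"{m.name:<30} target={m.target:<8} actual={m.actual:<8} {status}"
        )
    return "\n".join(lines)


# Hypothetical quarterly review for a learning management system:
lms_quarter = [
    Metric("Uptime during exam weeks (%)", 99.9, 99.95),
    Metric("Ticket turnaround (hours)", 24, 30, higher_is_better=False),
]
print(scorecard(lms_quarter))
```

A red line here is an input to the “Adjust” step, not a verdict–remember that the metrics are a reflection of reality, not reality itself, so pair the table with the qualitative feedback you gather in the review.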

Throughout the whole review, your goal is transparency to customers and users. The more visible and reliable your information is, the more it supplants apocryphal tales of poor quality. The story your information tells doesn’t have to be perfect–the primary goal is to build a new source of truth for understanding IT service quality.

Then, as your predictions improve–and as you prove they are improving in the “check” step by reviewing reality vs. prediction with the customer–you create knowledge the customer can rely on for their own planning.