Learning vs. Quality Cop

The line between learning (for improvement) and assessment (for measurement) can be blurry. When education tilts too far towards assessment, learners may feel that you are being a 'quality cop'. A quality cop, at its worst, is someone following a caregiver around with a clipboard, marking off what they did or did not do. Unfortunately, this form of auditing has become popular in healthcare, even though it calls to mind the cliché "the firings will continue until morale improves".

The clipboard approach improves compliance only as long as the assessor is present. The moment an assessor comes onto a unit (for example, to observe hand washing), behavior changes. Within moments all of the caregivers know they are being watched and go out of their way to comply (or sometimes out of their way to misbehave). Caregivers will even overact (hold up their hands and say "I'm washing in…"). Once the scores are compiled, they are 'put on the board', often color-coded red or green for everyone to see. A board that is often red becomes a board of shame, and rather than encouraging improved performance, it discourages caregivers.

To use a common example, let's look at patient experience (HCAHPS) scores. The surveys we receive can be as much as six months delayed and represent a very small sample of patients (as low as 1% in some units). HCAHPS is therefore a heavily lagging measure that may not be possible to influence directly at all. So hospitals have turned to measuring leading indicators such as bedside report, hourly rounding, and patient care board compliance. Under this system, caregivers only need to remember what they are being assessed on today and focus on 'passing the test' rather than caring for the patient. From a learning standpoint, the repetitive, mostly negative assessments result in lower motivation to learn and a feeling of hopelessness.

If we focused on learning rather than assessment (the quality cop), we might be able to move the measures without making them the focus. A great example of this comes from the grocery industry:

Grocery stores often have associates with short tenure, limited formal education (mostly high school students and graduates), and limited training (similar to patient care associates in hospitals). They are also very similar to healthcare in that they measure everything and try to improve outcomes by driving those measures. As an alternative, one chain ran an experiment in the produce section. Rather than assessing each associate and constantly measuring the produce section, they put the responsibility on the associate. Before going on break, each associate has to ask three customers to rate the produce section from 1 to 10 and to name one thing that could be improved. The associate must then document the three responses and complete the improvements. Customers often make suggestions such as reorganizing an area or cleaning up, which teaches the associate what is important to the customer. The result was that the customers improved the associates, sales increased in the produce section, and the associates were more engaged in the process.

In healthcare we cannot ask for a numerical score (this conflicts with HCAHPS), but we can ask patients how we can improve and have our caregivers document the improvement(s) completed. An EVS (Environmental Services) department implemented this process, and the answers from the patients were insightful and sometimes surprising. My favorite was a patient who very kindly asked if the EVS associate could get rid of the spider web in the corner. The patient could see it from where they were lying, but it was not obvious from other areas of the room. The patient was very appreciative, as that spider web had led them to believe that other areas of the room must not be clean, or that there was a lack of attention to detail (and it would have resulted in a less-than-'Always' score on the HCAHPS survey).

The problem this approach causes is that it challenges management's mental models. Since caregivers themselves are reporting how they improved, the results can be 'faked', and some managers worry that caregivers may be learning the 'wrong' things. I am biased, but if you do not trust your staff to honestly report what they are learning, why do you trust them to take care of patients? Additionally, if a patient says "you can do XYZ better", they are correct in their own perception (even if it is not a general rule), and I can start by being better for that one patient.

There is a place for the quality cop, clipboards, and checklists. It is fair to use a clipboard if you are looking to ensure 100% compliance with the safe surgery checklist; a surgical checklist is there to hold people accountable, not to teach them. If you are looking to change behaviors, then the checklists need to be secondary to the learning. It is entirely possible to measure something (and have 100% compliance) and still not get the outcome we need.

The other assessment tool that needs to be used sparingly (if at all) in healthcare is the multiple-choice test. A multiple-choice test is just a quality cop for knowledge. The Joint Commission has been very clear that we need to demonstrate competency, not knowledge, and the two are not the same. Just because someone 'passed' an EKG test does not mean they can accurately diagnose rhythms. A more accurate measure would be a chart review, and better still a peer chart review with feedback (focused on learning, not assessment).

We have gotten used to treating multiple-choice exams as proof that someone has 'learned'. In online courses, this means our staff has learned how to click really fast through slides, take the test, fail it, and retake it. Multiple-choice exams do not correlate with patient outcomes. The pre- and post-test process is not much better. For example, a safe patient handling class had a 5-question pre- and post-test. If students failed the exam, they would simply retake it (although very few failed). The scores were not reported anywhere; they were only used by the educator to show that the students had 'learned'. Unfortunately, all this test was doing was making the educator feel better; it had no impact on the patient or the learner (and therefore should be removed). If an assessment of a safe patient handling class is required, it should be competency-based.

A last word on quality cops and assessments: as we move towards a high-stakes environment (such as when a job is on the line), we need to make sure that whatever assessment we use is valid, job-related, and non-discriminatory. You should have your test reviewed by a psychometrician (someone who tests tests) and by your legal department. If challenged in court, many of our assessments and tests would quickly be thrown out. What the courts look for are reliable and valid tools, coupled with chances for the individual to improve (learn) with feedback. Focus on learning; let the rest fall into place.

Application experience:

Participants: 8-12 (in pairs)

Time to complete: 30 minutes.

With a partner, list 5 assessments you are currently doing. Discuss how each assessment impacts the patient and what process was used to validate it. Next, discuss ways to shift from an assessment focus to a learning focus. In particular, discuss whether anything would be 'lost' by removing the assessment. Share your findings with the group.

This is part of a series called "Learning That Works" by Jason Zigmont, Ph.D. (jay.zigmont@gmail.com). For a video on this topic and more information, visit http://L18.LearningInHealthcare.com. The principles above are part of the core content (Learning Card 18) of the Foundations of Experiential Learning Manual (http://FEL.LearninginHealthcare.com).
