A Critical Incident is a significant event, good or bad, that happens during a usability study. Usability tests are not the only forum where the concept applies, though. Critical Incidents are not trivial: thousands of users may be affected by a problem before anyone complains.
Critical Incident–centric analysis within a visit is something you can absolutely do. First, select Critical Incident criteria that are measurable and relevant to your product; these will be a mix of sentiment and behavioral signals. As an example, the following short list contains some Critical Incidents I've used in the past:
Criteria for Identifying Bad Critical Incidents
- The user does not succeed in their objective within N minutes.
- The user articulates a goal, tries several things (or the same thing over and over), and then gives up.
- The user expresses negative affect or says something is a problem.
Critical incidents can be collected with web analytics too. These can include:
- Very long paths
- Bounces from some keywords
- Clicks on a “Help” button
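The analytics signals above can be turned into an automated flagging pass. Here is a minimal sketch; the session schema, field names, and thresholds are all illustrative assumptions, not a standard analytics format:

```python
# Sketch of flagging Critical Incidents in web-analytics session data.
# The field names and thresholds below are assumptions for illustration.

def flag_incidents(session, max_path_length=15, bounce_keywords=("error", "refund")):
    """Return a list of Critical Incident labels for one session record."""
    incidents = []
    # Very long paths: the user wandered far before reaching (or abandoning) a goal.
    if len(session.get("pages", [])) > max_path_length:
        incidents.append("long_path")
    # A bounce after arriving on a troubling search keyword.
    if session.get("bounced") and any(
        kw in session.get("entry_keyword", "").lower() for kw in bounce_keywords
    ):
        incidents.append("keyword_bounce")
    # Any click on a "Help" control is a distress signal.
    if "help" in (e.lower() for e in session.get("clicks", [])):
        incidents.append("help_click")
    return incidents

session = {
    "pages": ["/"] * 20,
    "bounced": False,
    "entry_keyword": "refund policy",
    "clicks": ["nav", "Help"],
}
print(flag_incidents(session))  # ['long_path', 'help_click']
```

Tuning the thresholds (how long is a "very long" path for your product?) is itself a research question; start from the distribution of successful visits.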
Customer feedback may also capture a critical incident:
- Expletives (cussing) or other devices used to word feedback strongly
- Strong negative satisfaction measures
- Multiple feedback submissions within a visit
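The feedback signals above can be scanned in the same spirit. This sketch assumes a tiny expletive list, a 1–5 satisfaction scale, and hypothetical field names; adapt all three to your own feedback pipeline:

```python
import re

# Hypothetical sketch: scanning one visit's customer feedback for Critical
# Incident signals. The expletive list, satisfaction scale, and field names
# are assumptions for illustration.

EXPLETIVE_PATTERN = re.compile(r"\b(damn|hell|wtf)\b", re.IGNORECASE)

def feedback_incidents(submissions, low_satisfaction=2):
    """submissions: list of feedback dicts from one visit; returns labels."""
    incidents = set()
    for sub in submissions:
        if EXPLETIVE_PATTERN.search(sub.get("text", "")):
            incidents.add("strong_language")
        # Satisfaction assumed on a 1-5 scale; 1 or 2 counts as strongly negative.
        if sub.get("satisfaction", 5) <= low_satisfaction:
            incidents.add("low_satisfaction")
    # Multiple submissions within one visit is itself a distress signal.
    if len(submissions) > 1:
        incidents.add("repeat_feedback")
    return sorted(incidents)

visit = [
    {"text": "This checkout is broken, damn it", "satisfaction": 1},
    {"text": "Still cannot pay", "satisfaction": 2},
]
print(feedback_incidents(visit))
# ['low_satisfaction', 'repeat_feedback', 'strong_language']
```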
What are the signs that someone had a bad experience on your product?
Get out your shovels!
Critical incidents can be an excellent KPI, but knowing only how often they occur doesn't always provide the guidance to act.
When a doctor hears a patient coughing, they don't just remark "That's interesting" and move on. They keep digging with the tools of their trade.
Critical incidents cry for attention. Analysts and researchers must become diagnosticians. The pursuit of fact cannot stop at the limit of any method or mindset. We must cross borders, methods, and tools in pursuing an understanding of the events we see.