David Housman

You Gotta Know When to Hold em… (aka Too Legit to Quit)

One day in every researcher’s career, you’ll check in on the early returns from a study and find that they are disappointing. Perhaps none of your hypotheses worked out, or you’re not finding answers to any of the questions you brought into the study. This circumstance doesn’t necessarily mean that you have to revamp the study and start over. Before you go back to the drawing board, here are some steps you can take to handle the situation, or to avoid it altogether:

Position your study for success.

When you design your study, try to broaden your scope just enough to collect information that is indirectly related to the objective. For instance, if the objective of a study is “How much do Honda Civic owners love their cars?”, consider also asking:

  • “What was your last car, and how much did you love it?”
  • “What were the other cars you were considering, and why did you decide not to buy them?”

At the very least, that additional context is going to help you understand the data you collect. And if you strike out at your primary objective, you may find something interesting in the auxiliary data.

Believe in the method. Follow the best practices needed to build a good study.

  • Don’t re-invent the wheel. Your background research shouldn’t just focus on the findings of other studies; as a researcher, you should take the time to learn from both the findings and the execution of other studies. One time, I wrote a survey that asked whether someone was a real estate professional. Simple question, right? Wrong. Part-time realtors may be housewives or professionals in other industries who grow into real estate, and they don’t always identify themselves as real estate professionals. Had I done my homework, I would have phrased the question differently and improved my results. You’ll get better results if you take the time to learn from what others have done and build on it.
  • QA your work. Get peer feedback on your design before, during, and after running a study. Pilot test the method. Find the bugs in your study BEFORE it reaches a participant. Try talking to your manager or your peers. If you’re worried about the study, talk to the stakeholder and get their opinion.

When to Quit

If you’ve built a solid study and you can’t find a reason that explains why you’re striking out, don’t quit. Early results might shake your confidence, but they don’t necessarily indicate that you will have an unsuccessful outcome. My recommendation is to continue running the study until one of the following happens.

  • You identify the reason the study isn’t working. Try to understand what happened. Did you ask the wrong questions? Did you ask the wrong people? Knowing why you failed is a finding. That’s knowledge you can apply to future research. If you identify a serious flaw in a study, there’s an opportunity to rework the study around that issue. Beware: if you’re not sure why a study isn’t working, don’t point the finger at what might have been wrong or make up an excuse. If you convince your stakeholder that you know why a study didn’t work, you may be asked to re-run it, only to fail again for the real, unknown reason. If you don’t know the reason, start your analysis with the statistic that is telling you something is wrong, and trace it back to an underlying cause.
  • You decide that what you’re looking for just isn’t there. Disproving a hypothesis is an acceptable outcome of a research study. Your job as a researcher is to put yourself in a position to collect the information if it’s out there. Guaranteeing that you will find what you’re looking for inherently assumes that what you’re looking for exists. For instance, you might be asked to “learn about the shades of lipstick preferred by gun owners” and find that gun owners don’t wear lipstick. If this happens, it’s generally a good idea to point your stakeholders in a more fruitful direction: “Gun owners don’t really use lipstick… but a few of them mentioned that they use nail polish…”
  • Your stakeholder pulls the plug. Hopefully you’ve checked in on the data as the study progressed and shared your concerns with your stakeholder. If you’re genuinely worried, give your stakeholder the choice to continue or not before the study concludes. Provide a recommendation for how to proceed and a rationale for why you made that recommendation. Make sure you share the risks and opportunities of continuing the study. Your stakeholder will appreciate being kept in the loop, and if they contribute to the decision to proceed, it will take the edge off if nothing works out. You may not realize it, but even before the study ends, they might be telling their own stakeholders about the fantastic study you’re running. If things aren’t working out, they need to know sooner rather than later.
  • The study ends. If your study ends and you’ve found nothing, be up front about it. Your stakeholders will appreciate it if you address it directly in your report. Saying “I know you wanted to know X. We didn’t answer your question for these reasons” or “This information just isn’t there” shows confidence and awareness. If you wait for them to ask (and they will), it comes off as “I was hoping you wouldn’t notice that I screwed up.”