One of the common questions I hear from clients is whether research results from our private, branded, online communities are valid: Are the findings biased? Can we generalize what we learn from community members to other groups? Are the observed differences significant?
These are just a few of the go-to questions market researchers ask themselves when they want to assess how confident they can be in their interpretation of results; a “yes” on all three counts has, historically, been a necessary precursor for meaningful action. But as I have argued over the years, times are changing and our methods need to evolve as well. We would be well served by asking a more fundamental and ultimately more useful question, namely: What makes market research valid in the first place?
When you think about it, there is a lot more to research validity than sample size, random selection of participants, and blinding your study. In fact, a number of research characteristics (either entirely overlooked or seen only as drawbacks) can strengthen validity and are inherent to opt-in, socially dynamic research methods such as online communities.
For example, and referencing the table above, online communities, by definition, are not a random sample of the general population. The social glue and common purpose that bring people together and keep them engaged pretty much ensure they will not represent a perfect cross-section of the United States or any other nation. But is this really so bad? Maybe not, when you consider that many research questions can be better answered by key customer or consumer groups who have experience with a brand or product than by a larger, generic sample of relatively unengaged people. And let’s not forget the power of conducting research in naturalistic settings. Participating in online communities, social networks, and the like is an increasingly common activity (according to Forrester’s Social Technographics profiling tool, nearly three quarters of Americans at least passively consume content online). Thus research conducted through these online venues is done in context, enhancing external validity.
When we turn our attention to internal validity, or bias, we can see a similar dynamic: the very things that undermine confidence from one perspective can increase it when you take a broader view. The demand characteristics of the research setting, when uncontrolled, can easily introduce bias into a study. Market researchers fear that people in online communities will behave differently because they know they are being studied (i.e., the Hawthorne effect), interact with and are potentially influenced by community facilitators (i.e., the Rosenthal effect), and become sensitized to the research over time, improving their performance in any number of ways (i.e., the practice effect). These are all red flags, but they are also the cost of doing business when you want highly engaged, committed, motivated, and honest participation in your research. And that is an important trade-off to understand. Our research shows that, in fact, community members remain candid and honest over time, despite months—or even years—of ongoing participation. We encourage our clients to be as transparent with people as possible about their identity, the purpose of the research, and what the brand will do with what they learn; and our research shows that transparency is linked to greater engagement. And rather than fight the influence process that occurs in all social settings (which is life, after all), we look for ways to leverage it and use it to further our understanding of consumer behavior.
Ultimately, the question of research validity is not a clear-cut, yes/no dichotomy. Rather, it comes down to what you value in an epistemological sense, what method is most pragmatic given what you are trying to learn, and what your stakeholders will regard as “valid.”