[Image: bubble soccer, illustrating how influencer product testing bursts your bias bubble]

Burst Your Bias Bubble with Influencer Product Testing

October 11, 2016 - Influencer marketing, influencer product testing, Marketing
By: Chris Riley

There is a lot of luck associated with the success of a new product or major feature. But success is also predicated on an organization’s openness to gathering unbiased, external feedback before the product is launched. In psychology, there are two cognitive biases working against everyone in the SaaS space: the availability heuristic, and the bandwagon effect.

The availability heuristic forces us to see our product from the point of view of its nearest reference point. If that reference point is existing success, it can give the false sense that the new product will be just as successful. If you are launching your product for the first time, your reference point could be an application your product will be compared to, a competitor’s solution, or just your investors’ favorite go-to unicorn. The availability heuristic can also create a false sense of certainty, convincing teams that they have done their due diligence and have a strong, educated guess about how well the product is going to do.

The bandwagon effect is easier to understand. The internal enthusiasm about the new product launch can give a false sense of how the market will perceive it. (Clearly they feel the same way, right?)

The go-to for most product people is market research, but even knee-deep in the fact-laden Internet, the information you find will still be tainted by the biases above. The only true measure of potential success comes from real-world, blind product testing by the assumed target persona.

At Fixate, we call this influencer product testing. Influencer product testing is performed based on the assumptions you have already made: who you think wants your product, how you think they will use it, and what problem they will perceive it solves.

The benefits your product provides are just guesses until someone actually uses it. And everything in your launch, from trial to purchase, will impact that perception. So the end-to-end process needs to be tested with the backing of good data collection.

This is how Fixate IO conducts influencer product testing:

  1. Intro survey: A quick survey that says nothing about the product but gathers personal data on the influencer, including their experience and current usage of similar solutions.
  2. Testing: The testing is discreet, and follows two separate types of testing: organic and staged. All testing is time-limited. We use a one-hour limit. The time limit is very important, because the test should be as realistic as possible. Most people are busy and allocate very little time to test new solutions. If they get stuck before they hit your “Wow” feature, then the entire product could be perceived as a failure simply due to a lack of time.
  3. Organic: In organic testing, the tester is given no prior information. After they complete the intro survey, they are given the trial. Then, they test the product as they would naturally test any product. With the modern tool ecosystem, organic testing is habitual. Your prospects know how to test.
  4. Staged: In staged testing, the testers are asked to test in the specific way that you believe users should use the product. If you have a data-supported expectation of trial user behaviors, then in staged testing, you’ll have the testers do the same.

When we run tests, there is a 50/50 split of the testing types.

  5. Outro survey: The outro survey is longer than the intro survey, and it goes into more detail about the specific types of testing outcomes. The outro survey also covers tester perception of the solution.
  6. Interview: We conclude with a 30-minute interview of the testers with an unbiased analyst. The interviewer should not be focused on the details that are gathered in the survey. The goal of the interview is to test assumptions, surface perception, and look for trends.

Testers should be given absolutely no prior information about the product. Effective testing mimics the user scenario, where a user organically has landed on your site and found the trial. Prior information given to testers will taint the results.

In all the projects completed with this approach, the results have always been surprising. While some assumptions are validated, there is always an unexpected turn, usually in the perception of the solution. Here are the surprises we see most often:

  1. You think your solution is a must-have, but it’s only a “nice to have.”
  2. Your target user has attributes you did not anticipate. Most often it is the ecosystem they work in that has a direct influence on how they use the solution they are evaluating.
  3. The trial experience is your product’s worst enemy. The actual process of the trial causes more significant problems than anything else.
  4. Your documentation misleads target users. The product itself may be fine; the issue might be everything else you have available around it.

Internal enthusiasm and standard market research are a must, but so is influencer product testing. For any new product or major release, there should always be a test of your assumptions against what real target-persona customers think.

There is another bonus of influencer product testing: the “influencer” part. A good set of testers numbers no fewer than 10. After testing, you’ll have 10 people who know your product and can potentially become influencer content creators or advocates.

If you want to find out more about what Fixate’s Influencer Product Testing can do for you, contact us.

Image Source: bumpervoetbal.veweb.be


Chris Riley (@HoardingInfo) is a technologist who has spent 12 years helping organizations transition from traditional development practices to a modern set of culture, processes and tooling. In addition to being a research analyst, he is an O’Reilly author, regular speaker, and subject matter expert in the areas of DevOps strategy and culture. Chris believes the biggest challenges faced in the tech market are not tools, but rather people and planning.