Surveying Mechanical Turk to Validate a Startup Idea

I was intrigued by Lindsey Harper’s post, “How I Used Amazon’s Mechanical Turk to Validate my Startup Idea.” If you’ve ever worked with market research firms, built your own panels, or hit the pavement trying to collect your own market research, you know it can be expensive, time-consuming, or both. The idea of having a broad, cheap sounding board available online was very appealing.

I figured I would give it a shot and run a few tests through Mechanical Turk to see how it stacked up against some more traditional market research options. I grabbed my latest business idea (viability untested) and set off for Amazon.

Testing Business Viability

Dude! We totally just made 15 cents

It’s worth noting that the startup I was working on was a subscription-based consumer service geared toward parents of younger children and their grandparents. As Lindsey described in her article, you get no segmentation or guaranteed panel refinement on Mechanical Turk, so I was at the mercy of self-selection. I specified in the task description that I was looking for parents of children of a certain age and let it go.

I posed some very basic demographic questions (e.g. gender, their own age, the ages of their children). Once I had some basic information on the respondents, I asked whether they faced the problem my service intends to solve. Once the service was described, the survey asked how likely they would be to use it and how likely they would be to recommend it to others. There were also a few service-specific questions, some open-ended prompts including “Why would you not use the service?”, and a general thoughts-and-feedback form.

I actually had some fancier survey question types in mind than I cared to implement through Amazon’s Mechanical Turk API, so instead I hosted the survey over at SurveyMonkey and had respondents enter a confirmation code into MTurk upon completion.
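For the mechanically curious, here is a minimal sketch of how that confirmation-code handoff can work. Everything here is my own invention for illustration (the code format, the function names); the real setup just needs the survey to show a code on its final page and the HIT to collect it back.

```python
import secrets
import string

# Hypothetical sketch of the confirmation-code handoff: the external survey
# shows each respondent a random code on completion, and we only approve
# MTurk submissions whose code matches one we actually issued.

def make_confirmation_code(length=8):
    """Generate a random alphanumeric code to display at the end of the survey."""
    alphabet = string.ascii_uppercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def verify_submission(submitted_code, issued_codes):
    """Accept a worker's HIT only if their pasted code matches an issued one."""
    return submitted_code.strip().upper() in issued_codes

# Issue codes as survey responses come in, then check MTurk submissions
# against the issued set (workers often add whitespace or change case).
issued = {make_confirmation_code() for _ in range(3)}
example = next(iter(issued))
print(verify_submission(f"  {example.lower()} ", issued))
```

The only real requirement is that codes be hard to guess, so workers can’t claim the reward without finishing the survey.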

Results: Well look at that…

Not bad. MTurkers ended up providing answers fairly similar to those I collected in the wild. After some light data trimming, the data sets lined up well. On the “How likely would you be to use this service?” question, a statistical comparison of the MTurk panel against my other groups gave a p-value of about 0.8 — in other words, no evidence the response groups were drawn from different populations. The response patterns were slightly shifted, but the overall outcome was the same.
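To make that comparison concrete, here is a sketch of the kind of test that applies: a chi-squared test of homogeneity on the two panels’ response counts. The counts below are invented for illustration — they are not my survey data.

```python
# Hypothetical sketch: comparing two groups' Likert-scale response counts
# ("How likely would you be to use this service?") with a chi-squared test
# of homogeneity. The counts are made up for illustration.

def chi_squared_stat(group_a, group_b):
    """Chi-squared statistic for a 2 x k contingency table of counts."""
    col_totals = [a + b for a, b in zip(group_a, group_b)]
    grand_total = sum(col_totals)
    stat = 0.0
    for row in (group_a, group_b):
        row_total = sum(row)
        for observed, col_total in zip(row, col_totals):
            expected = row_total * col_total / grand_total
            stat += (observed - expected) ** 2 / expected
    return stat  # degrees of freedom = (2 - 1) * (k - 1)

mturk = [10, 20, 30, 25, 15]   # counts per Likert level, MTurk panel
field = [15, 35, 65, 55, 30]   # counts per Likert level, other respondents
stat = chi_squared_stat(mturk, field)
# With df = 4 the 5% critical value is 9.488; a statistic below it means
# no evidence the two panels answered differently.
print(stat < 9.488)
```

For real work you would let `scipy.stats.chi2_contingency` handle the statistic and the p-value; the point is just that “similar distributions” is a testable claim, not an eyeball call.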

The data trimming was done to account for a larger-than-expected number of MTurk respondents who were very price-conscious. Their responses described ongoing harsh economic conditions, the need to save money, and other general hardships. These folks were generally not represented or targeted in my other surveys.

As a bonus, the optional open-ended responses given by MTurk respondents were thoughtful and very useful. I was not expecting this level of detail. The optional open-ended question about general thoughts and feedback elicited a 47% response rate with an average of 40 words per response. That mean came with a standard deviation of 32 words per response; there were some really thoughtful responses in there.
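If you want to run the same arithmetic on your own open-ended responses, Python’s standard library covers it. The response strings below are placeholders, not actual survey feedback.

```python
import statistics

# Hypothetical sketch of the response-rate and word-count arithmetic above.
# The response strings are placeholders, not real survey responses.
responses = [
    "Love the idea but the price would have to be right for me.",
    "",  # respondent skipped the optional question
    "My parents live out of state and this would help us stay in touch.",
    "Not for me.",
]

answered = [r for r in responses if r.strip()]
response_rate = len(answered) / len(responses)
word_counts = [len(r.split()) for r in answered]

print(f"response rate: {response_rate:.0%}")            # skipped answers excluded
print(f"mean words: {statistics.mean(word_counts):.1f}")
print(f"stdev words: {statistics.stdev(word_counts):.1f}")
```

A standard deviation nearly as large as the mean, as in my results, is exactly what a mix of terse and essay-length answers produces.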

Semifinal Thoughts

Would I use Amazon’s Mechanical Turk for this purpose again? I think so. It seems to be a good way to get a general feel for your idea and certainly grab some helpful feedback. The responses I received led me to believe it was a very thoughtful community.

In no way am I endorsing a survey of this type as the entirety of your market research. It is a cheap and easy way to get some feelers out there and validate that you’re not (too) crazy. In the end, it is still very important to get out there yourself and talk with potential customers early on.

FYI: The result of this work has become GlitterDuck. I am getting ready to start some pilot runs soon. If you are interested in learning more or becoming a beta tester please sign up over at the site.