Recap of the Squirrel Hordes

I gave my PyCon talk this weekend, “Militarizing Your Backyard With Python: Computer Vision and the Squirrel Hordes.” I was not prepared for the number of people who caught me after the talk and throughout the conference to tell me about their own battles. Thanks again for all the recommendations on how to improve my firepower, tracking, and classification accuracy.

Horde Scout (source: Wikipedia)

As requested, I’ve posted the slides on SlideShare. Here are the final squirrel encounter video and the actual PyCon presentation.

One resource that was really helpful for getting ideas about sentry guns is Project Sentry Gun. There is Wiring and Processing code to get you started, as well as a premade Arduino shield if you’re interested. The folks at Servocity were also very helpful in sizing servos for my project.

As time permits, I’ll post some additional articles detailing the various steps of my project that folks seem to be interested in.

Thanks all. Good luck and don’t get captured.

Python Hack Night #3

We had a good turnout for TriZPUG’s third Python Hack Night tonight. All in all, nine local Pythonistas showed up at MetaMetrics in Durham and dug right in. There was good conversation, and it seems like progress was made on most fronts. We had a wide range of projects, including personal websites, a scrum workflow tool, a computational teaching problem, a game project, a nose plugin, a Django-based charting framework, and more. We even had an impromptu game-AI-building competition emerge.

I was pleased with the results and would love to see these happen at least once a month. Let’s see what August brings.

Surveying Mechanical Turk to Validate a Startup Idea

I was intrigued by Lindsey Harper’s post, “How I Used Amazon’s Mechanical Turk to Validate my Startup Idea.” If you’ve ever worked with market research firms, built your own panels, or have hit the pavement trying to collect your own market research you know it can be expensive and/or time consuming. The idea of having a broad and cheap sounding board available online was very appealing.

I figured I would give it a shot and run a few tests through Mechanical Turk to see how it stacked up against some more traditional market research options. I grabbed my latest business idea (viability untested) and set off for Amazon.

Testing Business Viability

Dude! We totally just made 15 cents!

It’s worth noting that the startup I was working on was a subscription-based consumer service geared towards parents of younger children and their grandparents. As Lindsey described in her article, you get no segmentation or guaranteed panel refinement on Mechanical Turk, so I was at the mercy of self-selection. I specified in the task description that I was looking for parents of children of a certain age and let it go.

I posed some very basic demographic questions (e.g., gender, ages of their children, their own age). Once I had some basic information on the respondents, I probed whether they faced the problem my service intends to solve. After describing the service, the survey asked how likely they would be to use it and how likely they would be to recommend it to others. There were also a few service-specific questions, some open-ended responses including “Why would you not use the service?”, and a general thoughts-and-feedback form.

I actually had some fancier survey question types than I cared to implement through Amazon’s Mechanical Turk API, so instead I hosted the survey over at SurveyMonkey and had the respondents enter a confirmation code into MTurk upon completion.
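For anyone who does want to go through the API, the SurveyMonkey-plus-confirmation-code pattern above can be sketched roughly like this. This is an assumption-heavy sketch: it uses today's boto3 MTurk client (which postdates this post), and the reward, assignment counts, and wording are placeholders, not the values I actually used.

```python
# Sketch of the "external survey + confirmation code" HIT pattern.
# The QuestionForm schema is Mechanical Turk's standard XML question
# format; the HIT parameters below are illustrative placeholders.

QUESTION_FORM = """\
<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
  <Overview>
    <Text>Complete the survey at the link in the task description,
then paste the confirmation code below.</Text>
  </Overview>
  <Question>
    <QuestionIdentifier>confirmation_code</QuestionIdentifier>
    <IsRequired>true</IsRequired>
    <QuestionContent><Text>Confirmation code:</Text></QuestionContent>
    <AnswerSpecification><FreeTextAnswer/></AnswerSpecification>
  </Question>
</QuestionForm>"""

def create_survey_hit(client):
    """Post the survey HIT and return its id.

    `client` is a boto3 MTurk client; this call needs AWS credentials.
    """
    response = client.create_hit(
        Title="Short survey for parents of young children",
        Description="Take a 5-minute survey (link inside), then enter "
                    "the confirmation code you receive at the end.",
        Reward="0.15",                      # the famous 15 cents
        MaxAssignments=50,                  # hypothetical panel size
        LifetimeInSeconds=3 * 24 * 3600,    # listed for three days
        AssignmentDurationInSeconds=30 * 60,
        Question=QUESTION_FORM,
    )
    return response["HIT"]["HITId"]

# Usage (requires AWS credentials and the boto3 package):
# import boto3
# client = boto3.client("mturk", region_name="us-east-1")
# hit_id = create_survey_hit(client)
```

After workers submit, you match the free-text codes against the codes SurveyMonkey handed out, approving only assignments whose codes check out.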

Results: Well look at that…

Not bad. After some light data trimming, MTurkers ended up providing answers very similar to those I received in the wild. Responses to the “How likely would you be to use this service?” question were close between the MTurk panel and my other groups; statistically, there was about an 80% chance the response groups were drawn from the same population. The response patterns were slightly shifted, but the overall outcome was the same.
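The post doesn't say which test produced that figure, but as an illustration, a chi-square test of homogeneity on the two panels' answer counts can be done in a few lines of pure Python. The counts below are hypothetical, not my actual survey data.

```python
# Hypothetical example: comparing Likert-scale answer counts from an
# MTurk panel vs. a traditional panel with a chi-square test of
# homogeneity. The counts are illustrative, not the post's real data.

def chi_square_statistic(table):
    """Chi-square statistic for a contingency table (list of rows)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Counts of "How likely would you be to use this service?" answers
# (1 = very unlikely ... 5 = very likely) -- made-up numbers.
mturk_panel       = [4, 7, 12, 15, 10]
traditional_panel = [5, 6, 10, 16, 11]

stat = chi_square_statistic([mturk_panel, traditional_panel])
df = (2 - 1) * (5 - 1)     # (rows - 1) * (columns - 1) = 4
critical_0_05 = 9.488      # chi-square critical value, df=4, alpha=0.05

# A statistic below the critical value means we cannot reject the
# hypothesis that both panels came from the same population.
print(stat < critical_0_05)  # → True for these counts
```

With real data you'd likely reach for `scipy.stats.chi2_contingency`, which also returns a p-value, but the hand-rolled version makes the expected-count arithmetic visible.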

The data trimming was done to account for a larger-than-expected number of MTurk respondents who were very price conscious. Their responses described ongoing harsh economic conditions, the need to save money, and other general hardships. These folks were generally not represented or targeted in my other surveys.

As a bonus, the optional open-ended responses given by MTurk respondents were thoughtful and very useful. I was not expecting this level of detail. The optional question about general thoughts and feedback elicited a 47% response rate with an average of 40 words per response. That mean came with a standard deviation of 32 words per response: there were some really thoughtful responses in there.

Semifinal Thoughts

Would I use Amazon’s Mechanical Turk for this purpose again? I think so. It seems to be a good way to get a general feel for your idea and certainly grab some helpful feedback. The responses I received led me to believe it was a very thoughtful community.

In no way am I endorsing a survey of this type as the entirety of your market research. This is a cheap and easy way to put some feelers out there and validate that you’re not (too) crazy. In the end, it is still very important to get out there yourself and talk with potential customers early on.

FYI: The result of this work has become GlitterDuck. I am getting ready to start some pilot runs soon. If you are interested in learning more or becoming a beta tester please sign up over at the site.