We had a good turnout for TriZPUG’s third Python Hack Night tonight. All in all, nine local pythonistas showed up at MetaMetrics in Durham and dug right in. There was good conversation and it seems like progress was made on most fronts. We had a wide range of projects including: personal websites, a scrum workflow tool, a computational teaching problem, a game project, a nose plugin, a Django-based charting framework, and more. We even had an impromptu game AI-building competition emerge.
I was pleased with the results and would love to see these happen at least once a month. Let’s see what August brings.
I wrote my first Nose plugin this weekend and I’ve got to say it was dead simple.
I was looking around for ways to keep Nose from searching for tests in certain directories. You see, Nose is a nosey little critter that will scour every nook and cranny of your directory structure looking for unit tests to run. Really, it wants your code to work and it loves making dots. But I knew better, and I knew that if that little Nose test discoverer ventured down a few rabbit holes it would end up in a world of pain: segfaults or worse. I wanted to avoid the whole mess and just have Nose exclude a few directories from its massive test hunt.
I asked a few people I know who use Nose regularly about my options. “Write your own plugin,” was the consensus. Nose’s architecture makes it very easy to write plugins for just about every aspect of its behavior; check out its plugin API. The project documentation is also very helpful in this regard.
I spent most of my time trying to figure out how to test my testing plugin. In the end it only took a couple of hours to go from Nose novice to having written a packaged, testable Nose plugin. So, here’s a quick example of how you would use this new nose-exclude plugin:
$ ls test_dir
In this example, I want Nose to ignore a couple of directories and not even bother searching them. You would run:
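The directory names below are illustrative, not the actual layout from the listing above, but the invocation has this shape:

```shell
$ ls test_dir
test_keep_me  test_not_me

$ nosetests --exclude-dir=test_dir/test_not_me test_dir
```

Nose will then collect and run everything under `test_dir` except the excluded directory.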
You could further specify to exclude subdirectories if you wanted that level of control. There is also an --exclude-dir-file= option available that allows you to specify a file containing paths to be excluded.
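Under the hood, a plugin like this mostly needs to answer one question: given a directory Nose is about to descend into, should it be skipped? Here is a plain-Python sketch of that check. The function name and the absolute-path prefix matching are my assumptions for illustration, not nose-exclude’s actual implementation:

```python
import os

def is_excluded(dirname, exclude_dirs):
    """Return True if `dirname` is one of the excluded paths or lies beneath one.

    `dirname` is the directory the test collector is about to search;
    `exclude_dirs` holds the paths supplied via --exclude-dir or
    --exclude-dir-file.
    """
    dirname = os.path.abspath(dirname)
    for excluded in exclude_dirs:
        excluded = os.path.abspath(excluded)
        # Match the excluded directory itself, or anything under it.
        # Appending os.sep prevents "test_not_me_extra" from matching
        # an exclusion of "test_not_me".
        if dirname == excluded or dirname.startswith(excluded + os.sep):
            return True
    return False
```

A directory-exclusion plugin would run a check like this from its `wantDirectory()` hook and return False on a match, telling the collector not to descend.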
People are often asking me how and why my department shifted from an ASP.NET environment to Django. I’ve finally gotten around to writing about the process leading up to our decision. I hope people out there find it useful in their own development groups and discussions.
Almost two years ago I was in the rather unlikely situation of running a software engineering department containing both a C# team and a Python team. The Python group was focused on building scientific computing and NLP-type applications, whereas the C# team was focused on building web applications.
A few of us Python folks in the department had already started playing around with Django–building internal web applications and projects outside of work. It did not take long for us to realize the power of Django and how quickly we were able to produce high-quality applications with little effort. This was my (strong) impression, but in order to propose a corporate platform shift I was going to need some data to support my claims.
It slowly dawned on me that I had a perfect test bed. Here we had two teams using different technology stacks within the same department. The same department. That means they shared the same development processes, project management tools, quality control measures, and defect management processes. Everything was the same between these groups except for the technologies. Perfect! So like any good manager I turned my teams into unwitting guinea pigs.
We can accomplish more with Python + Django than with C# + ASP.NET given the same amount of time without sacrificing quality
For the sake of this study, I defined productivity as a normalized team velocity: story points completed per developer per week. I recorded the normalized team velocity for each team’s sprint for later analysis.
WAIT! You can’t compare story points between teams!
I hear this a lot. Yes, you can. The problem is that most people do not bother creating a common scale or continually calibrating their estimations (within or between groups). Generally, that calibration is more work than it’s worth for most groups, so it isn’t often discussed or practiced.
The methods described below should outline the additional calibration work that was performed to ensure a common estimation scale between the two teams.
Both teams continued business as usual, working on projects in parallel. Each team fielded 3-4 developers per sprint. It is worth noting that Team ASP.NET did not make use of the MS MVC Framework, but they did use Linq-to-SQL for its ORMy powers.
Special care was taken to maintain linkage between the two teams’ effort estimates. During sprint planning, each team would use a common story point calibration reference when making estimates. In order to detect any potential deviations in calibration, during several planning poker sessions I included stories that had already been estimated during previous sprints or by the other team; no significant deviations were found.
At the end of each sprint I calculated the normalized developer velocity (# of completed story points / developer / week). These values were recorded for both teams. It should be noted that only Django-based sprints were used in the analysis for Team Python.
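The normalization is just an arithmetic rescaling. The numbers below are invented for illustration, not the study’s data:

```python
def normalized_velocity(story_points_completed, num_developers, sprint_weeks):
    """Normalized team velocity: story points / developer / week."""
    return story_points_completed / num_developers / sprint_weeks

# Illustrative numbers only: 4 developers finishing 96 points
# in a 2-week sprint yields 12 points per developer per week.
print(normalized_velocity(96, 4, 2))  # 12.0
```

Dividing by both team size and sprint length is what lets sprints of different sizes and durations land on one comparable scale.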
I recorded results for approximately 6 months.
Normalized Developer Velocities: C# + ASP.NET and Python + Django
The above histogram shows the distribution of normalized velocities associated with each completed sprint. The table below summarizes the distribution of velocities associated with each team.
Summary statistics of each team’s normalized developer velocities
Units: story points / developer / week
The velocity distributions of the two samples are similarly shaped, but clearly differ in their means. The average velocity of a C#/ASP.NET developer was found to be 5.8 story points/week. A Python/Django developer had an average velocity of 11.6 story points/week. An independent t-test shows this difference to be statistically significant (t(15) = 4.19, p < 7.8e-4).
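The test itself is a standard pooled-variance two-sample t-test. A plain-Python sketch follows; the per-sprint velocity lists are invented for illustration (only their sizes, which give the reported 15 degrees of freedom, are informed by the write-up), and in practice you would get the p-value from scipy.stats.ttest_ind rather than computing it by hand:

```python
from math import sqrt
from statistics import mean, variance

def independent_t(a, b):
    """Pooled-variance two-sample t statistic and its degrees of freedom."""
    na, nb = len(a), len(b)
    df = na + nb - 2
    # Pool the two sample variances, weighted by their degrees of freedom.
    pooled_var = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / df
    t = (mean(a) - mean(b)) / sqrt(pooled_var * (1 / na + 1 / nb))
    return t, df

# Invented per-sprint normalized velocities, NOT the study's data:
django = [11.0, 12.5, 11.5, 12.0, 10.8, 11.9, 12.2, 11.3]
csharp = [5.0, 6.5, 5.5, 6.0, 5.8, 6.2, 5.5, 5.9, 6.1]
t, df = independent_t(django, csharp)
```

With 8 Django sprints and 9 ASP.NET sprints, df = 8 + 9 - 2 = 15, matching the t(15) reported above.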
Discussions and Conclusion
Given our development processes, we found the average productivity of a single Django developer to be equivalent to the output generated by two C#/ASP.NET developers. In other words, given equal-sized teams, our Django developers were twice as productive as our ASP.NET team.
I suspect these results may actually reflect a lower bound of the productivity differences. It should be noted that about half of the Team Python developers, while fluent in Python, had not used Django before. They quickly learned Django, but it is possible this fluency disparity biased the results, handicapping overall Django velocity.
The productivity differences quantified by our findings were then included as part of an overall rationale to shift web-based development platforms. Along with overall velocity differences, the costs associated with maintaining each environment were considered: OS licensing and database licensing for development and production environments, as well as costs associated with development tools. I’m happy to say we are now a Python and Django shop.