[Adium-devl] A Unit-Testing Dissertation
disposable at infinitenexus.com
Wed Jan 17 23:36:42 UTC 2007
A little while back David (CatfishMan) asked about unit testing,
specifically how and why it would apply to Adium. I was thinking about
this topic a few months ago and, in my research, drafted an email I
never sent. I've resurrected a large swath of it and touched it up
below. Hopefully this will shed some light on the topic or at least
spark some discussion.
-------------
One area in which we're lacking is automated testing of any form. We
primarily rely on print statements, debug logs, crash reports, trac
tickets, forum posts, and semi-private betas to shake out bugs, which
requires a proactive user base and gives us no guarantee of any
breadth of manual testing. Looking at professional organizations (such
as Apple and Google) and open source projects (from OpenOffice.org to
Apache to Mozilla to WebKit), automated testing is not just an
expectation but a requirement.
The reasons are many, but the key one is detecting failures,
regressions, and other repeatable issues without user intervention.
For example, teams at Apple (yes, I'm aware they have different
hardware/software setups and access) typically add unit tests for bugs
and features, allowing a build bot to run the suite automatically on
every commit and report any unexpected result. During the Google
Mentors Summit I learned that some open source projects require
testing suites, full developer and user documentation, and several
levels of peer review before a commit can occur. Obviously there's a
whole gamut of options, and the choice lies in the hands of the
developers themselves.
How would this apply to Adium? The typical argument against adding
unit testing to Adium might be "it's a user-responsive app that
doesn't just do x with y, how would you unit test it?" And on the
surface it's a great one. I'm not saying we need to unit test every
line of code, though. There are several frameworks within Adium that
are ripe for testing. Each framework, especially hyperlinking and
AIUtilities, contains dozens (or hundreds) of methods that actually do
X with Y. Since we know what the expectations are for these methods,
we can wrap each one in a test that articulates those expectations and
lets any developer [or build bot in the future] run the suite with our
expectations in place, whether we're around to explain them or not.
This is the opposite of the current situation described above, and
highly desirable. Beyond the frameworks, you can also test a
user-centric application (target/action), as described in Chris
Hanson's excellent series of unit testing articles at
http://chanson.livejournal.com/tag/unit+testing and
http://chanson.livejournal.com/119097.html.
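
For anyone who hasn't seen one, here's roughly the shape of a test
under OCUnit (the SenTestingKit framework bundled with recent versions
of Xcode). The class and method names below are made up, and the
assertion only exercises Foundation so the snippet stands on its own;
a real test would call one of our own framework methods in the same
spot.

#import <SenTestingKit/SenTestingKit.h>
#import <Foundation/Foundation.h>

// ExampleTest is a made-up name; a test bundle is just a collection
// of SenTestCase subclasses like this one.
@interface ExampleTest : SenTestCase
@end

@implementation ExampleTest

// Any zero-argument method whose name starts with "test" is picked up
// and run automatically when the test bundle target is built.
- (void)testExpectationsAreEnforced
{
    // A real test would call a hyperlinking or AIUtilities method
    // here; Foundation is used only so this snippet stands alone.
    NSString *result = [@"www." stringByAppendingString:@"adiumx.com"];
    STAssertEqualObjects(result, @"www.adiumx.com",
                         @"appending should produce the expected host name");
}

@end

With the standard test bundle setup the tests run as part of the
build and failures are reported like build errors, which is exactly
what would let a build bot flag a bad commit without anyone
babysitting it.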
A couple of examples (both hypothetical; rough sketches of how each
might look as a test follow the list):
- let's say the hyperlinking framework has a method that takes in a
string, applies a set of attributes, and returns an attributed string.
We know that sending this method "www.adiumx.com" should return
"www.adiumx.com" with a specific set of attributes applied. That can
be codified in a test and verified each time the framework is built
(or by a separate testing target).
- or, for an AppKit-level example, a method in AIUtilities that takes
a window and changes its mask to match the parameter passed in. We
know that passing a window (say, an outlet, or even any old NSWindow)
and a mask constant should, depending on the implementation, return
the window [after changing it] or simply perform its task [unless it
has a boolean return for status]. Either way, we can pass the window
and constant and then check the window's mask via its accessor. Each
mask could be passed and verified in sequence if so desired.
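
Here's a rough sketch of what those two examples could look like as
OCUnit tests. Since the framework methods themselves are hypothetical,
the lines marked as stand-ins just build the expected result directly
with Foundation/AppKit; a real test would call the actual hyperlinking
or AIUtilities method at that point, and the assertions would stay the
same.

#import <SenTestingKit/SenTestingKit.h>
#import <AppKit/AppKit.h>

@interface ExampleFrameworkTests : SenTestCase
@end

@implementation ExampleFrameworkTests

- (void)testHyperlinkAttributesAreApplied
{
    NSDictionary *expectedAttributes =
        [NSDictionary dictionaryWithObject:@"http://www.adiumx.com"
                                    forKey:NSLinkAttributeName];

    // Stand-in for the hypothetical hyperlinking method that takes
    // "www.adiumx.com" and hands back an attributed string.
    NSAttributedString *result =
        [[[NSAttributedString alloc] initWithString:@"www.adiumx.com"
                                         attributes:expectedAttributes] autorelease];

    STAssertEqualObjects([result string], @"www.adiumx.com",
                         @"the text itself should be unchanged");
    STAssertEqualObjects([result attributesAtIndex:0 effectiveRange:NULL],
                         expectedAttributes,
                         @"the expected link attributes should be applied");
}

- (void)testWindowMaskCanBeVerifiedViaAccessor
{
    unsigned int mask = (NSTitledWindowMask | NSClosableWindowMask);

    // Stand-in for passing any old NSWindow plus a mask constant
    // through the hypothetical AIUtilities method, then checking the
    // window's accessor afterward.
    NSWindow *window =
        [[NSWindow alloc] initWithContentRect:NSMakeRect(0, 0, 200, 100)
                                    styleMask:mask
                                      backing:NSBackingStoreBuffered
                                        defer:YES];

    STAssertTrue(([window styleMask] & NSTitledWindowMask) != 0,
                 @"the titled bit should be set");
    STAssertTrue(([window styleMask] & NSClosableWindowMask) != 0,
                 @"the closable bit should be set");

    [window release];
}

@end

Either test is only a handful of lines, and once it's in the suite
that expectation gets checked on every build from here on out.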
The flip side of verifying that no regressions occur is testing how
you handle failure. Another aspect of unit testing is creating tests
that knowingly break a method's expectations. In the case of our
second example, what happens when you pass a nil window reference or
garbage for the mask constant? Adding tests that do just that verifies
the method can handle the unthinkable, making stability another check
on the automated testing list.
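
A sketch of what that might look like, again with a made-up stand-in
function in place of the real AIUtilities method (the interesting part
is the shape of the tests, not the stand-in's implementation):

#import <SenTestingKit/SenTestingKit.h>
#import <AppKit/AppKit.h>

// Stand-in for the hypothetical mask-changing method: it reports
// failure for a nil window rather than doing anything destructive,
// and ignores mask bits it doesn't recognize.
static BOOL ExampleSetWindowMask(NSWindow *window, unsigned int mask)
{
    if (window == nil) return NO;
    return YES;
}

@interface ExampleFailureTests : SenTestCase
@end

@implementation ExampleFailureTests

- (void)testNilWindowIsHandledGracefully
{
    STAssertNoThrow(ExampleSetWindowMask(nil, NSTitledWindowMask),
                    @"a nil window should not raise");
    STAssertFalse(ExampleSetWindowMask(nil, NSTitledWindowMask),
                  @"a nil window should report failure, not pretend to succeed");
}

- (void)testGarbageMaskIsHandledGracefully
{
    NSWindow *window =
        [[NSWindow alloc] initWithContentRect:NSMakeRect(0, 0, 200, 100)
                                    styleMask:NSTitledWindowMask
                                      backing:NSBackingStoreBuffered
                                        defer:YES];

    // 0xFFFFFFFF is not a meaningful combination of style mask bits.
    STAssertNoThrow(ExampleSetWindowMask(window, 0xFFFFFFFF),
                    @"garbage mask bits should not raise");

    [window release];
}

@end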
Do I think we need to cease all development and go back and instrument
every framework and line of code in the trunk? Not at all. But I do
think it's something we need to start focusing on as developers and as
a community, because increasing the stability of our growing code base
is a vital component of longevity. None of us will likely be with
Adium forever, so we must articulate what's in our heads within the
code, by comment and by test, for those who come after us.
Practically speaking, I've created a branch (adium_unit_testing)
specifically for going back and adding unit tests to a large swath of
the previously discussed areas. As time allows I hope to set up a
solid testing rig for the above-mentioned frameworks as well as build
the beginnings of a test suite for each. I invite you all to review
and contribute to this branch so that we can build a first-class test
suite to keep our first-class messenger in top shape, and to start
turning your minds toward thinking in this pattern.
- brian 'bgannin' ganninger
* - I heartily recommend reading "Code Complete" by Steven McConnell
(http://cc2e.com); it articulates software development methodologies
in a readable, thorough treatment and is a great handbook on
augmenting your toolset with good practices.