Monday, August 30, 2010

Executable bug tracker

Disclaimer: I have no practical experience (yet) with the concept I describe below; this is a "thinking out loud" kind of post. The context is a team working on a product in the maintenance/legacy phase of its life-cycle, with developers who are already comfortable with automated testing.1

This post is about the small, nice-to-have priority bugs/known issues. The ones that never get formally prioritized into any release, because there are always more important feature issues. The ones that you record in your issue tracker to keep your conscience at peace, and which will be closed as "won't fix" in the end.

Some advocate2 that you should save yourself the trouble of maintaining these bugs at all, and simply not record them until clients/management push for it.

To clarify: I'm fine with clients not prioritizing these issues. However, I would love to find a way to give them a chance to be fixed, without compromising the delivery of business features.

One of the factors contributing to these issues never getting fixed (IMHO, of course :)) is that it takes a lot of effort to actually find a bug to fix when you have some slack time. You have to search your tracker for open bugs, scan them to pick one, build up the context to actually begin working on it (a.k.a. getting into the zone), and so on. All this makes it too much of a hassle when all you have is a few spare minutes; you would happily fix an issue near the module you are currently working on, but not with this extra burden added.

A possible solution is to have a collection of automated tests reproducing the bugs, with asserts that fail on the current codebase. These tests live separately from the main test suite (extra jar/DLL, categories, namespace, etc.), but live together with the app in the IDE (to aid refactoring). There could even be a custom test runner, or an additional step in the build process, to notify you if any of these bugs get fixed - you might even fix one accidentally.
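As a rough sketch of what such a runner could look like (the `parse_price` function and the bug number are invented for illustration), a known-bug suite simply inverts the usual pass/fail logic: a repro test that suddenly passes means someone fixed the bug.

```python
def parse_price(text):
    # Known issue #123 (invented example): doesn't strip currency symbols.
    return float(text)

def bug_123_price_with_currency_symbol():
    # Repro for issue #123; it asserts the DESIRED behaviour,
    # so it fails for as long as the bug is present.
    assert parse_price("$9.99") == 9.99

KNOWN_BUGS = [bug_123_price_with_currency_symbol]

def run_known_bugs(tests):
    """Run each repro test; report any that unexpectedly PASS (bug fixed!)."""
    fixed = []
    for test in tests:
        try:
            test()
        except Exception:
            continue                   # still broken, as expected
        fixed.append(test.__name__)    # passed: the bug got fixed
    return fixed

print(run_known_bugs(KNOWN_BUGS))  # [] while the bug is still open
```

Test frameworks typically support this out of the box too (e.g. "expected failure" markers), but a hand-rolled runner like the above makes the intent explicit.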

With such a setup we can rely on static code analysis to find bugs in the area of the code we are about to start working on, or have just finished with; thus lowering the cost of starting work on a bug. Even if one doesn't fix it straight away, the test itself could simply be improved upon (remember the boy scout rule?).
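A very rough sketch of the lookup side, assuming the repro tests' sources are available to scan (the test names and module paths below are made up; a real version would read the bug-test files from disk):

```python
import re

# Stand-in for reading the known-bug test files from disk;
# maps a repro test's name to its source text.
BUG_TEST_SOURCES = {
    "bug_123_price_parsing": "from shop.pricing import parse_price",
    "bug_207_login_timeout": "from auth.session import refresh_token",
}

def bugs_near(module_name):
    """Return the known-bug repro tests that reference the given module."""
    pattern = re.compile(re.escape(module_name))
    return sorted(name for name, source in BUG_TEST_SOURCES.items()
                  if pattern.search(source))

print(bugs_near("shop.pricing"))  # ['bug_123_price_parsing']
```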

The one concern I have is with the recording phase of this process - often the most costly part of fixing a bug is actually finding a way to reproduce it :) However, if the original "bug report" is the programmatic equivalent of "open this form, enter these values, then right-click and observe the application crash", it might not add noticeable overhead (especially compared to filing a bug report in the issue tracker).
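For instance, those manual steps might translate into something like the following - here `OrderForm` is a fake stand-in for whatever UI-driver API the real application would expose, and the zero-quantity crash is an invented bug:

```python
class OrderForm:
    """Fake UI driver, standing in for the real application's form."""
    def __init__(self):
        self.quantity = None

    def enter_quantity(self, value):
        self.quantity = value

    def right_click(self):
        # The (invented) bug: the context-menu calculation divides by quantity.
        return 100 / self.quantity

def bug_repro_crash_on_zero_quantity():
    # Programmatic "bug report": open the form, enter the values, right-click.
    form = OrderForm()
    form.enter_quantity(0)
    form.right_click()  # currently raises ZeroDivisionError
```

The repro test is barely longer than the prose description would have been.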


1. or open source projects
2. see disclaimer - it might actually make sense when working on a well-kept codebase with frequent releases.

Thursday, August 19, 2010

Executable documentation

It's good to see build and release automation becoming more and more common, but I'm curious whether this wave of automation will stop at releasing, or flow over into other areas of software development and change attitudes in general.

Automated testing and continuous integration took some time to spread: software developers - who spend their days automating other people's mundane tasks so clients can focus on adding value, and thus should have been easy to convince - have been (and some still are) opposed to automating the mundane tasks they themselves perform. Still, I'm hoping that one of my pet peeves - stale documentation - will become more and more extinct as automation goes mainstream.

Below are just some document types that could be made live and executable:
  • Specifications. I'm not the first to suggest this - acceptance testing tools and books have been around for a while, though they haven't caught on yet. I was introduced to this concept by Gojko Adzic, and I can recommend his past talks/videos and books for getting started on this topic.
  • New developer getting-started instructions. In addition to local machine setup (though the approach Tamas Sipos described, of using virtual machines per project, is even better than scripting it), this usually includes gaining access to all sorts of file shares, web services, machines, and databases, mapping them to proper local names, and so forth. This is usually presented as a list of commands that you copy-paste into the command line. There is no reason this couldn't be scripted. Mirroring access from an existing developer is sometimes easier than keeping the setup scripts up to date.
  • Revoking access from departing developers. This might be more applicable to bigger enterprise environments, but it is just as important as setting up a new developer. Script it.
  • Installation instructions, and fixlogs/workarounds for 3rd party applications (or even your own applications). These are the ones that warn you to only ever run a given script as a given user. Or from a particular machine. And to execute the following commands, containing loops and decision branches, written in plain text. And to make SQL calls, send xml/json messages, where you just need to substitute <this> with the current value, etc. Script them, and reduce the document to a single instruction: execute myFix.sh with the following two parameters.
  • Code review guidelines, coding standards. Naming conventions, indentation, method length/complexity, and all sorts of other static code analysis (the domain shouldn't call the UI code directly! We shouldn't have commented-out code! There should be no Windows Forms controls with Visible = false that is never changed to true elsewhere in the class! etc.) should not be done by hand if they can be automated - and there are quite a number of mature, extensible tools out there, such as StyleCop, FxCop, Checkstyle, FindBugs, and xDepend. Focus code reviews on the more important things.
  • Data flow diagrams. For the live, production system, you are better off generating this dependency graph from the scheduling tool you use - which guarantees it actually represents production - as opposed to maintaining a Visio diagram or similar by hand.
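As an illustration of the "script it" advice above, here is a minimal sketch of a setup-script runner; the repository URL and share name are placeholders, not real resources:

```python
import subprocess

# Placeholder steps; a real list would come from the team's
# getting-started document.
SETUP_STEPS = [
    ["git", "clone", "https://example.com/team/repo.git"],
    ["net", "use", "X:", r"\\fileserver\builds"],  # map a network share
]

def run_setup(steps, runner=subprocess.run):
    """Execute each setup step in order, stopping at the first failure."""
    for command in steps:
        result = runner(command)
        if result.returncode != 0:
            print("setup failed at:", " ".join(command))
            return False
    return True
```

The `runner` parameter is injectable, so the script itself can be tested without touching a real machine - fitting, given the topic of these posts.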
Hope it was inspiring :) Do you know of other document types I have missed?

Saturday, August 7, 2010

On hiring programmers - writing code before the interview

What prompted this post was this job ad for an experienced web developer by Netpositive and the discussions it sparked - and the realization that there is no way I could explain my view within Twitter's limitations.

I liked the ad because, as a prerequisite for being invited to an interview, applicants are required to write a little web application (regularly read an RSS feed, store the items locally, display the posts on a page with paging, and add Facebook's "like" functionality to the page).
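Just to illustrate the scale of the exercise: the paging requirement alone boils down to a few lines (the feed fetching, storage, and "like" button are of course the bulk of the work).

```python
def page_of(posts, page_number, page_size=10):
    """Return the 1-based page_number'th page of the stored posts."""
    start = (page_number - 1) * page_size
    return posts[start:start + page_size]

posts = ["post %d" % i for i in range(1, 26)]  # 25 stored feed items
print(page_of(posts, 3))  # last, partial page: posts 21..25
```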

Being able to conduct an interview based on code the candidate wrote in her own time before the interview has the following benefits:
  • the interview-fever problem is eliminated - some smart people can't solve simple problems during an interview that they would be able to solve in a minute under normal conditions
  • those that can talk the talk but can't apply the theoretical knowledge to real problems don't get past this filter
  • those that cannot sell themselves during the interview but are good programmers can be considered
  • as an interviewer, I can focus on what the candidate knows, and can ask them to suggest ways to implement new features in an application they are already familiar with
  • it is more fair for people who might not know the jargon and terminology (though they certainly should learn it later), but are good at programming
  • you can learn a lot that might not be uncovered in a regular interview, e.g. how likely the candidate is to reinvent the wheel rather than look for existing solutions
  • the interviewer can screen applicants for requirements-analysis skills if needed - just give an ambiguous enough spec
  • candidates who just want a job, rather than a job at this particular company, are likely not to apply because of the extra effort required. Some great candidates will not apply either; however, I think that is an acceptable risk to take.
The hardest thing with this approach is picking the problem to be solved -
  • should be big enough - not just a one-off script, but something where some design is required - so the interviewers can learn about the candidate;
  • should be small enough so a good candidate can complete it in only a few hours;
  • should be a problem relevant to the job the opening is for, not another hello world program;
  • should be such that it's not a problem if it's googleable - we all google all the time; the important part is that the candidate demonstrates an understanding of the googled solution;
  • should obviously not be real work for the company - I would not want to apply to a company that wants to use my work for free to deliver to their customers.
I'm not even going to attempt a silver-bullet solution that satisfies all of the above, because - as with everything in software - it depends on your context. However, the ideas below could serve as starting points:
  • problems from online programming competition archives, e.g.: UVa Online Judge, Sphere Online Judge, etc.
  • dedicated online screening applications, like Codility
  • using tasks (bugfixes, new features, etc.) from OSS projects. Yes, it is free work in a sense, but it contributes to the applicant's resume and makes the world a better place! :)