Sunday, October 31, 2010

Dealing with crunch mode

Before going any further: I think that crunch mode is not sustainable and is best avoided. I won't analyze the phenomenon or its causes; much has been written about it already. However, it definitely exists, though usually only for shorter periods of time. Having been through it a number of times, I felt I should organize my thoughts for future reference (for myself, and maybe for others too).

Some of the points below might sound like good practice for software development in general - often we only start working on process problems after they've blown up in our faces. Holding retrospectives after such periods can bring quite a change, so don't miss out on the opportunity! However, we haven't even started yet, so let's step back in time.

Knowing from the beginning why the close deadline is important is crucial to keeping the team motivated during this period. Working overtime for no apparent reason makes an already bad situation worse.

Have well-defined boundaries for both time and scope. This is good practice in general, but here it is a must - setting out on a death march without a clear purpose is unlikely to succeed. Break things down into small (4-8 hour) tasks and prioritize them (I mean an ordered list, not marking all of them as critical). If the added-up estimates already exceed the available time, cut scope. Most likely the only way it can be cut is vertically - e.g. we will have logging, but we will work against only a single library and won't make it pluggable for this deadline, or we won't make the logging destination configurable (a contrived example, but it helps make the point - see the sketch below).
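
To make that contrived logging example a bit more concrete, here is a minimal sketch (assuming Java and log4j; the class and method names are made up) of what the vertically cut version could look like - we log directly against the one library we picked, and defer both the pluggable abstraction and the configurable destination:

```java
import org.apache.log4j.Logger;

// Vertical cut: we still ship logging in this release, but only through
// log4j directly - no home-grown pluggable logging abstraction, and the
// destination stays whatever the default log4j configuration provides.
public class OrderService {

    private static final Logger LOG = Logger.getLogger(OrderService.class);

    public void placeOrder(String orderId) {
        LOG.info("Placing order " + orderId);
        // ... the business logic that actually has to make the deadline ...
    }
}
```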

Taking on technical debt can help. Be sure you all understand that this does not equal crap code, unconditional copy-paste, etc. Only introduce debt knowingly (you might want to keep an actual list of the shortcuts you have taken, so you know what to fix in the upcoming releases).
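
One cheap way to keep that list honest (a hypothetical sketch, not a standard) is to mark every deliberate shortcut with the same tag right next to the code, so the debt can be collected later with the IDE's task-tag search instead of relying on memory:

```java
import java.util.List;

public class ReportExporter {

    // TECHDEBT(2010-10-31): export format is hardcoded to CSV for the
    // deadline; introduce a proper exporter interface in the next release.
    public String export(List<String> csvLines) {
        StringBuilder out = new StringBuilder();
        for (String line : csvLines) {
            out.append(line).append('\n');
        }
        return out.toString();
    }
}
```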

Stop experimenting and move back to the comfort zone. Almost all projects involve some new element that the team is not familiar with. If the team is just learning unit testing, they likely write the tests after they get the code to work, and take a long time to write them. In that situation, not writing tests for the time being could be the right choice. However, be careful to get the right message across - you are not dropping the practice because it slows development down, you are only rescheduling the learning period to some other time. Also, just like with performance optimizations, make these decisions after measuring - don't do it based on speculation.

On the non-technical front, you can cut back on meetings, both in number and in duration. You might also want to schedule them so they don't interrupt the workflow. Have a status meeting in the morning and at the end of the day, with a team lunch if more discussion is needed.

Work from home. Of course, some infrastructure is needed, and it might not be an option for everyone - especially for the person living next door to a construction site. However, saving the commute time might give you some extra hours during the day without cutting too much into your personal life. Significant others are more understanding about you being a bit more tired when they at least get to see you.


[Scope section was updated based on @ljszalai's comment on twitter]

Friday, October 22, 2010

Evaluating software products

This topic has been on my mind for some time, having had to use products (that were chosen after evaluation) with much pain. The first reaction (well, maybe the second) to such situations is "if only I had been the one tasked with picking the right tool...". However, after thinking more about it, I'm not quite sure how we can evaluate products in a way that keeps everyone happy.

So when Pusztai László brought up the buy vs. build topic at the 2nd Microsoft Hungary Application Lifecycle Management Conference, I just couldn't resist asking him: how do you evaluate software products properly and effectively? Unfortunately there wasn't enough time to discuss it in detail during his presentation, and I couldn't find him to follow up during the break, so here I am, elaborating on (and trying to answer) my own questions, hoping you can correct, confirm, or add to my ideas.

The context is:
  • there is a recognized problem (that can be solved by tooling)
  • the impact of the tool is large enough to warrant an evaluation
  • there is commitment to get a tool
  • the potential tools have been narrowed down to a reasonable number of candidates
Resist the temptation to just start playing with them at this point. It's a mistake, because if we don't know what the success criteria are, the chance of success is greatly reduced. Even when we think they're obvious, let's state them explicitly - the same argument applies here as when clarifying user requirements. Try to involve as many stakeholders as possible. (Note: if the goal is to replace an existing system, be sure to explicitly list the things that are good about the current one too, just in case those aspects didn't make the list because we were too busy focusing on improvements for the current pain points!) This list will serve as a reminder, when we are awed by the bells, whistles, and other extra features a system brings, to check whether they solve the actual problem.
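
For illustration only (my own sketch, with made-up names and weights - not something from the conference talk), the agreed criteria can even be written down as a simple weighted scorecard, which also makes it obvious that a shiny extra feature nobody listed doesn't add to a candidate's score:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy scorecard: each success criterion gets a stakeholder-agreed weight,
// and every candidate product is rated against the same list.
public class EvaluationScorecard {

    private final Map<String, Integer> weights = new LinkedHashMap<String, Integer>();

    public void addCriterion(String name, int weight) {
        weights.put(name, weight);
    }

    // ratings: criterion name -> 0..5 rating given during the evaluation
    public int score(Map<String, Integer> ratings) {
        int total = 0;
        for (Map.Entry<String, Integer> criterion : weights.entrySet()) {
            Integer rating = ratings.get(criterion.getKey());
            // A feature that never made the criteria list contributes nothing.
            total += criterion.getValue() * (rating == null ? 0 : rating);
        }
        return total;
    }
}
```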

The next step varies depending on the number of people available - a big enterprise might be able to evaluate all the selected products in parallel and decide once all the results are in. A smaller company might need to do the evaluation in sequence, and maybe stop the process once a good enough candidate is found (kind of like hiring - perfect is the enemy of the good).

Beware that programmers like to get things to work (even if the things themselves wouldn't want to!). Pilot teams might go to great lengths to resolve issues during the pilot (with duct tape, if needed). If we ended up writing about 70% of the features in use ourselves, should we choose that product (think about applying upgrades!)? Also, if people are already complaining during the pilot, that's a sure sign to stop investing any more time in that product.

Solutions found during a pilot should be treated like spikes - learn from them, carry the ideas into the final implementation, but most of the time the pilot solution itself should not be promoted to production use as-is.

One thing that is hard to find out during the pilot is how the application scales beyond the pilot team. If some kind of partitioning (separate servers for separate teams, load balancing, etc.) is possible, that's great, but it's better to know the limits of the app before we hit them. It's worth searching the support forums for information (in addition to noting the long-outstanding unresolved issues), and inviting people who have been using the product for a longer time out to lunch to hear the war stories.

Another aspect to consider is data migration to (and from) the platform - when replacing an existing system, consider the cost and feasibility of migrating to the new one. In some cases, not migrating might be the best choice (e.g. keep a read-only CVS instance so people can dig back through the revision history, and only import the mainlines into the new system), but the descriptions of all the manual test cases for our flagship product are probably something that should be migrated.

One of the things I'm still unsure about is how many products to evaluate in a row if none seems to meet the requirements we set. Should we lower the bar and review whether we really need all the things we listed? Or is that the point when we just write our own tool?


Jason Yip has a good post on criteria for evaluating Off-The-Shelf-Software


Updates: posts that I've found after publishing this piece that deal with the same topic, but from other angles.

Sunday, October 10, 2010

Slides for the Continuous Delivery talk

I gave a talk on Friday at the firm's tech user group about the build/release evolution towards Continuous Delivery. The slides are up under my github presentations project. It was built on top of the Continuous Integration presentation I gave at the Agile Hungary Meetup earlier this year (also on github, Hungarian slides only). Unlike that one, here I was talking about things I haven't yet accomplished and am only looking forward to doing, which worried me a bit - however, declaring this upfront was well received, and it took the pressure off from that point on.

Though I wasn't as good as Tim Fitz (especially lacking the experience), the talk generated questions and discussion, which was the purpose, so I'm quite happy with how it went.

My only regret is forgetting to record the talk so I could review my presentation skills.