- creating lots of simple DTO classes in Python instead of using tuples (see the sketch after this list)
- trying to create a synchronous API for a web service call in Flex
- creating an indexer for the immutable list in Scala
- ...
- ...and of course, procedural PL/SQL code with lots of loops and ifs instead of set based operations
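To make the first item concrete, here is a minimal Python sketch of the difference between fighting the platform and embracing it (the PointDto/Point names are made up for illustration):

```python
from collections import namedtuple

# Fighting the platform: a hand-written, Java-style DTO class
# for every record type.
class PointDto(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

# Doing as the Romans do: one line yields an immutable record
# with equality, unpacking, and a readable repr for free.
Point = namedtuple("Point", ["x", "y"])

p = Point(x=1, y=2)
print(p)        # Point(x=1, y=2)
x, y = p        # tuple unpacking still works
```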
Friday, December 3, 2010
When in Rome, do as the Romans do
Wednesday, November 10, 2010
My #citcon London 2010 experience
This was both the first citcon (apparently pronounced "kitcon") and the first open space conference I've attended, and I'm sure it won't be the last of either!
The venue was the Skillsmatter Exchange, which was simple in appearance (I guess it's a former warehouse of sorts), but had all the space, flipcharts, tables, projectors, and seats this event needed. There was enough room to chat without disturbing sessions, or to just grab a coffee - which seemed to be sufficient for this conference (and for my needs/expectations).
Friday evening was dedicated to introductions and to session proposals, topic merging, voting, and of course, some beer. It was a nice reminder that my problems are not unique - I actually stepped out of the proposals line because a number of people had already suggested topics that covered my problem. Or maybe it had more to do with the fact that I'm still not comfortable speaking in front of big audiences (more than 20-30 people make me nervous; I guess I'll just have to practice, given that there was a time when any public speaking scared me).
On Saturday morning, the schedule was supposed to be finalized by the end of the provided breakfast. It later turned out not to be the case, so I missed a couple of sessions, though all the ones I attended were great. Having to choose from many good sessions is a much better problem than not having any good ones to choose from. Nonetheless, I'll keep this in mind for future open spaces: don't take anything for granted until you are in the session.
The first session was about database testing, versioning, and deployment. This topic is near and dear to my heart, and it was interesting to realize the boundaries of the boxes I've been thinking in - e.g.: I've never worked in multiversion deployment scenarios, where you need to be able to upgrade from any version to the current one. I've also learned to be more explicit in my communication: when I asked about a tool like Agitar's for databases, to generate characteristic tests for regression, a few people nodded that it might be useful, so I assumed everyone knew what I was talking about - which led to a misunderstanding between me and Gojko Adzic about testing with "random data". Though we clarified this after the session, I'm still wondering why I couldn't explain it during the session.
Before lunch, I attended the module dependency handling session, where I learned there is no tool yet for handling versioned dependencies (i.e.: one that would tell me whether the version/feature branches of a given project are compatible with all versions of the projects that depend on it). The discussion was nice, but I had some trouble following once things got Java specific.
Next up was the Overcoming Organisational Defensiveness session. I thought I knew a lot about this topic, and was pleasantly surprised by all the new aspects that I learned, including, but not limited to:
- Don't fix your problem, fix their problem
- while not everyone agrees, in certain contexts un-branding a practice helps (e.g.: we referenced Dan North's example from another conference of not calling it pair programming, but simply asking two people to look into a problem together)
- when developers don't have time to improve, bringing in a coach doesn't solve the problem. You first need to dedicate time for them to learn.
- instead of trying to change people, step back and see if you can create some change that would affect them in the ways you would like, but doesn't require their prior approval/cooperation and that you can do yourself
- don't just do the 5 whys, do the 5 "so what?"s too
The next session's title was Using KPIs/Getting a data-driven organization; however, the topic shifted to organizational culture and psychology. Not surprisingly, we didn't find a silver bullet, but there were a number of good laughs and ideas (transparency, exposure, accountability, responsibility, over- and under-controlling, and ownership were the main themes). It was a smaller session, but in my experience that made it much more focused and interactive than the bigger ones. Plus, I was surprised to learn there are people using vim on the Mac :)
The final session was Beyond Basic TDD. Its essence boiled down to TDD having a marketing message that design will just evolve, despite Kent Beck admitting that might not always be the case (i.e.: if you've got people with good design sense, no methodology will prevent them from writing well designed code). There should be more focus on teaching programmers about design. Gojko took over from there to facilitate a discussion around what can be done to bring those familiar with TDD basics to the next level. I have to admit I became quite exhausted by this session - something to keep in mind next time I organize sessions around a whiteboard.
Overall, it was great fun: it was good to bounce ideas off people, to chat with random strangers, and to finally talk with people I'd known only from twitter. The one thing I regret is that I had to skip the after-event pubbing (the first evening I was just way too tired, while on Saturday evening I was catching up with some friends living in London). Next time I'll try to allocate one more day for the trip, because the hallway conversations were great during the event, and I would be surprised if they were any worse in the pub.
Sunday, October 31, 2010
Dealing with crunch mode
Friday, October 22, 2010
Evaluating software products
- there is a recognized problem (that can be solved by tooling)
- the impact of the tool is large enough to warrant an evaluation
- there is commitment to get a tool
- the potential tools have been narrowed down to a reasonable number of candidates
Jason Yip has a good post on criteria for evaluating Off-The-Shelf Software.
Updates: posts that I've found after publishing this piece that deal with the same topic, but cover other aspects of it.
Sunday, October 10, 2010
Slides for the Continuous Delivery talk
Sunday, September 26, 2010
Don't repeat yourself, even across platforms
Why would you use multiple platforms, or do a polyglot project?
- Web applications need the same input sanity validation performed both client side (JavaScript) for usability and server side (whatever technology you use) for security. The same argument can be made for DTOs in any N-tier application (see the sketch after this list).
- For a portfolio of applications in the same domain, there is a need for consistency - e.g.: a reference data catalog for input fields (you want to use the same terminology across the applications)
- There can be similar logic applicable to different platforms - e.g.: some static code analysis is the same across platforms, whether we talk about Java or C# code, and it'd be nice to have just a single implementation.
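One way to avoid the duplication in the first bullet is to keep the validation rules in a single declarative structure, interpret it on the server, and ship the very same structure to the client as JSON for a JavaScript interpreter. A minimal Python sketch follows; the field names and the rule vocabulary are made up for illustration:

```python
import json
import re

# The single source of truth: declarative rules, shared verbatim
# between the server and the client.
RULES = {
    "email":    {"required": True, "pattern": r"^[^@\s]+@[^@\s]+$"},
    "nickname": {"required": False, "max_length": 20},
}

def validate(data, rules=RULES):
    """Server-side interpreter for the shared rule set."""
    errors = {}
    for field, rule in rules.items():
        value = data.get(field, "")
        if rule.get("required") and not value:
            errors[field] = "required"
        elif value and "pattern" in rule and not re.match(rule["pattern"], value):
            errors[field] = "invalid format"
        elif value and "max_length" in rule and len(value) > rule["max_length"]:
            errors[field] = "too long"
    return errors

# The client-side (JavaScript) validator interprets the same rules,
# so the two implementations cannot silently drift apart.
client_rules_json = json.dumps(RULES)

print(validate({"email": "not-an-email"}))  # {'email': 'invalid format'}
```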
Some approaches
Testing
(Potential) Problems
- Diversity vs. monoculture. The library becomes a single point of failure, and any bug in it has far reaching consequences. On the other hand, the reverse is also true: any problem has to be fixed only once, and the benefits can be reaped by all that use the library. However, there might be fewer people looking at the shared domain for corner cases...
- Shared dependency overhead - shared libraries can slow down development both for the clients of the library and for the library itself. Processes for integration must be in place, etc. Gojko Adzic has a great post on shared library usage.
- False sense of security - users of the library might assume that's all they need to do, and not think through every problem carefully. E.g.: a DTO validation library might be confused with entity/business rules validation.
- Ayende has recently written a post about Maintainability, Code Size & Code Complexity that is (slightly) relevant to this discussion ("The problem with the smaller and more complex code base is that the complexity tends to explode very quickly."). In my reading, his points apply more to the data-driven (or code-generated-from-data) approach, where the smart framework becomes overly complex and fragile. Note that he talks about a single application; it's known that when dealing with a portfolio (NHProf, EFProf, etc.), he chose to use a single shared infrastructure.
Monday, August 30, 2010
Executable bug tracker
1. or open source projects
Thursday, August 19, 2010
Executable documentation
- Specifications. I'm not the first to suggest this; acceptance testing tools and books have been around, but the practice hasn't caught on yet. I was introduced to this concept by Gojko Adzic, and I can recommend his past talks/videos or books for getting started on this topic.
- New developer getting started instructions. In addition to local machine setup (though the approach Tamas Sipos described - using virtual machines per project - is even better than scripting it), this usually includes gaining access to all sorts of file shares, web services, machines, and databases, mapping them to proper local names, and so forth. This is usually presented in the form of a list, from which you copy-paste commands into the command line. There is no reason this couldn't be scripted. Mirroring the access rights of an existing developer is sometimes easier than keeping the setup scripts up to date.
- Revoking access from departing developers. This might be more applicable to bigger enterprise environments, but it is just as important as setting up a new developer. Script it.
- Installation instructions, and fixlogs/workarounds for 3rd party applications (or even your own applications). These are the ones that warn you to only ever run this script as a given user. Or from a particular machine. And to execute the following commands, containing loops and decision branches, written in plain text. And to make SQL calls or send xml/json messages, where you just need to substitute <this> with the current value, etc. Script them, and reduce the document to a single instruction: execute myFix.sh with the following two parameters.
- Code review guidelines, coding standards. Naming conventions, indentation, method length/complexity, and all sorts of other static code analysis (the domain shouldn't call the UI code directly! We shouldn't have commented-out code! There should be no Windows Forms control with Visible = false that is never set back to true anywhere in the class! etc.) should not be done by hand if they can be automated - and there are quite a number of mature, extensible tools out there, such as StyleCop, FxCop, Checkstyle, FindBugs, and xDepend (sketched after this list). Focus code reviews on the more important things.
- Data flow diagrams. For the live, production system, you are better off generating the dependency graph from the scheduling tool you use - that way it is guaranteed to represent production, unlike a manually maintained Visio diagram or similar (also sketched after this list).
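To make the code review item concrete: a minimal Python sketch of a hand-rolled "the domain shouldn't call the UI code directly" check. The package names are hypothetical, and in practice you'd more likely express such a rule in one of the tools above:

```python
import ast
import os

FORBIDDEN = ("ui", "gui")   # hypothetical packages the domain must not import

def forbidden_imports(domain_dir):
    """Yield (file, module) pairs where domain code imports UI code."""
    for root, _dirs, files in os.walk(domain_dir):
        for name in files:
            if not name.endswith(".py"):
                continue
            path = os.path.join(root, name)
            with open(path) as f:
                tree = ast.parse(f.read(), filename=path)
            for node in ast.walk(tree):
                modules = []
                if isinstance(node, ast.Import):
                    modules = [alias.name for alias in node.names]
                elif isinstance(node, ast.ImportFrom) and node.module:
                    modules = [node.module]
                for module in modules:
                    if module.split(".")[0] in FORBIDDEN:
                        yield path, module

# "myapp/domain" is a made-up path; wire this into the build and
# fail it on any violation.
for path, module in forbidden_imports("myapp/domain"):
    print("VIOLATION: %s imports %s" % (path, module))
```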
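And to illustrate the data flow diagram item: a sketch of deriving a Graphviz DOT graph from the scheduler's own job definitions. The job table below is made up; in practice you would read it from your scheduling tool's export:

```python
# Hypothetical job -> prerequisites table, as exported from the scheduler.
JOBS = {
    "load_customers":  [],
    "load_orders":     [],
    "build_warehouse": ["load_customers", "load_orders"],
    "send_reports":    ["build_warehouse"],
}

def to_dot(jobs):
    """Emit a Graphviz DOT graph; render with: dot -Tpng flow.dot -o flow.png"""
    lines = ["digraph dataflow {"]
    for job in sorted(jobs):
        lines.append('    "%s";' % job)  # declare every node, even isolated ones
    for job, deps in sorted(jobs.items()):
        for dep in deps:
            lines.append('    "%s" -> "%s";' % (dep, job))
    lines.append("}")
    return "\n".join(lines)

print(to_dot(JOBS))
```

Regenerate this as part of the build or deployment, and the diagram can never go stale.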
Saturday, August 7, 2010
On hiring programmers - writing code before the interview
I liked the ad because, as a prerequisite for being invited to an interview, applicants are required to write a little web application (regularly read an RSS feed, store the items locally, display the posts on a page with paging, and add Facebook's "like" functionality to the page).
Being able to conduct an interview based on code the candidate has written in her own time before the interview has the following benefits:
- the interview-fever problem is eliminated - some smart people can't solve simple problems during an interview that they would solve in a minute under normal conditions
- those that can talk the talk but can't apply the theoretical knowledge to real problems don't get past this filter
- those that cannot sell themselves during the interview but are good programmers can be considered
- as an interviewer, I can focus on what the candidate knows, and can ask them to suggest ways to implement new features in an application they are already familiar with
- it is more fair for people who might not know the jargon and terminology (though they certainly should learn it later), but are good at programming
- you can learn a lot that might not be uncovered in a regular interview, e.g.: how likely the candidate is to reinvent the wheel rather than look for existing solutions
- the interviewer can screen applicants for requirement analysis skills if needed - just give an ambiguous enough spec
- those candidates who just want a job, rather than a job at the given company, are likely not going to apply because of the extra effort required here. Some great candidates will not apply either; however, I think that is an OK risk to take.
- should be big enough - not just a one-off script, but something where some design is required - so the interviewers can learn about the candidate
- should be small enough that a good candidate can complete it in only a few hours;
- should be a problem relevant to the job the opening is for, not another hello world program;
- should be such that it's not a problem if it's googleable - we all google all the time; the important part is that the candidate demonstrates an understanding of the googled solution
- should obviously not be real work for the company - I would not want to apply to a company that wants to use my work for free to deliver to their customers.
- problems from online programming competition archives, e.g.: UVa Online Judge, Sphere Online Judge, etc.
- dedicated online screening applications, like Codility
- using tasks (bugfix, new features, etc.) from OSS projects. Yes, it is free work in a sense, but it contributes to the applicant's resume and makes the world a better place! :)