Our build structure is pretty stable, but the exact content of the steps varies as we discover more smoke tests we'd like to add, or when we rearrange where these checks run.
The CI servers I've used made this a rather cumbersome process:
- First, I have to leave my development environment and go to the build server's configuration - most of the time a web interface, sometimes a config file
- I have to point and click, and if the step is a shell script, I have to make my modifications without syntax highlighting (even config files usually take the shell command to execute as a string, so there is no highlighting there either)
- If it's a web interface, I have (or had) no versioning/backup/diff support for my changes (config files are better in this respect).
- If it's a config file, then I need to get it to the build server (we version control our config files), so that's at least one more command
- I need to save my changes and run a whole build to see whether they worked - a rather costly feedback loop.
- Most places have only one build server, so when I'm changing the step, I either edit the real job (bad idea) or make a copy of it, edit it, and then integrate it back to the real job. Of course, integrating back means: copy and paste.
- If the build failed, I need to go back to the point-and-click, no-syntax-highlighting step to fix the failures
- Last, but not least, with web interfaces, concurrent modifications of a build step lead to nasty surprises!
Normal development workflow
- I have an idea what I want to do
- I write the tests and code to make it happen
- I run the relevant tests and repeat until it's working
- I check for source control updates
- I run the pre-commit test suite (for dvcs people: pre-push)
- Once all tests pass I commit, and move on to the next problem
Quite a contrast, isn't it? And even the concurrent editing problem is solved!
Quick'n'Dirty Inversion of Control for builds
Disclaimer: the solution described below is a really basic, low tech, proof of concept implementation.
Since most build servers at the end of the day
- invoke a shell command
- and interpret exit codes, stdout, stderr, and/or log files
we defined the basic steps (update from version control, initialize database, run tests, run checks, generate documentation, notify) using the standard build server configuration, but the non-built-in steps (all except the version control update and the notification) simply invoke a shell script that resides in the project's own repository (e.g. under bin/ci/oncommit/runchecks.sh). The results of these scripts can be interpreted in the standard ways CI servers are already familiar with - exceptions and stack traces, (unit) test output, and exit codes.
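To make this concrete, here is a minimal sketch of what such a repository-resident step script could look like. The path bin/ci/oncommit/runchecks.sh comes from the setup above; the individual checks and the `run_check` helper are placeholder assumptions - substitute the project's real linters and smoke tests.

```shell
#!/bin/sh
# bin/ci/oncommit/runchecks.sh -- sketch of a repository-resident build step.
# The concrete checks below are placeholders; plug in your own tools.
set -e   # abort on the first failure, so the CI server sees a non-zero exit code

# Run one named check; on failure, report it on stderr and fail the build.
run_check() {
    desc="$1"; shift
    echo "CHECK: $desc"
    "$@" || { echo "FAILED: $desc" >&2; exit 1; }
}

# Placeholder checks -- replace with the project's actual commands.
run_check "workspace directory is present" test -d .
run_check "shell can run commands" true

echo "All checks passed."
```

The CI server only needs to run this script and interpret its exit code and output, exactly the contract described above.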
The result:
- adding an extra smoke test no longer breaks my flow: I can easily test my changes locally, and integrating them back into the main build means just committing to the repository - the next build picks them up
- I can run the same checks locally whenever I want to
- if I were to support a bigger team/organization with their builds, this would make it rather easy to maintain a standard build across teams, yet allow each of them to customize their builds as they see fit
- if I were to evaluate a new build server product, I could easily and automatically see how it would work under production load, just by:
- creating a single parameterized build (checkout directory, source code repository)
- defining the schedule for each build I have
- and then replaying the past few weeks'/months' load. OK, I would still need to write the script that queues the builds for the replay, but that is still more effective than running the product with only a small pilot group and then watching it crash under production load
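The replay script mentioned above could be as small as the following sketch. Everything here is an assumption for illustration: the log format (one line per build, "epoch-seconds job-name") and the trigger command (BUILD_TRIGGER_CMD, which would typically wrap a call to the trial server's API) are invented, not part of any existing tool.

```shell
#!/bin/sh
# replay-builds.sh -- sketch of replaying historical build load against a
# trial CI server. Input lines: "<epoch-seconds> <job-name>", oldest first.
# BUILD_TRIGGER_CMD is whatever triggers one build on the server under test;
# it defaults to a dry-run echo so the sketch is safe to run as-is.
BUILD_TRIGGER_CMD="${BUILD_TRIGGER_CMD:-echo TRIGGER}"

replay() {
    prev=""
    while read -r stamp job; do
        # Reproduce the original spacing between builds.
        [ -n "$prev" ] && sleep $(( stamp - prev ))
        $BUILD_TRIGGER_CMD "$job"
        prev="$stamp"
    done
}

# Usage: replay < build-history.log
```

Pointing BUILD_TRIGGER_CMD at the candidate server and feeding it a few weeks of real history would show how the product behaves under production-shaped load.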
Shortcomings, Possible Improvements
As said, the above is a basic implementation, but it has served as a successful proof of concept for us. However, our builds are simple:
- no dependencies between the build steps - they simply run in sequence
- no inter-project dependencies, such as component build hierarchy (if the server component is built successfully, rerun the UI component's integration tests in case the server's API changed, etc.)
- the tests are executed in a single thread and process, on a single machine - no parallelization or sharding
All of the above shortcomings could be addressed by writing a build-server-specific interpreter that reads our declarative build config file (mapping steps to scripts, defining step/build dependencies and workflows) and redefines the build's definition on the server. With a standard build definition format, we could move our builds between different servers as easily as we currently can with blogs - pity Google is not a player in the CI space, so the Data Liberation Front cannot help :).
Does this idea make sense to you? Does such a solution already exist? Or are the required building blocks available? Let me know in the comments!