Wednesday 8 April 2020

Junkins: writing a continuous integration server

Just some notes, while I'm about it.

We tend to use Jenkins at work; but I once tried setting up an instance and found even working out how to configure it too much like hard work; so when I found myself wanting one recently I decided to write my own. How hard could it be? And it would certainly be more fun than configuring someone else's. I should mention that my main reason for wanting this was someone else saying they were going to give up maintaining the Jenkins instance whose results I rely on; so that again pointed away from Jenkins.

The bits I want are:


  1. something to run every now and again, and check for new changes; if there are any, kick off the builds, if they aren't already running;
  2. something to kick off a given list of builds;
  3. something to do each individual build;
  4. something to show me the results.


Point 1 is a cron job; every 5 minutes is a reasonable interval: responsive enough to new changes, and not too much load on the server. 2 is a shell script, using qrsh. 3 is a shell script. And 4 is a cgi script. The "database" of results is just a (Linux) directory structure.
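
In skeleton form, the cron side is just a line per branch (the script name, paths and branch names here are made up for the sake of illustration):

    # poll every 5 minutes; one line per branch
    */5 * * * * /home/me/junkins/check-and-build.sh main
    */5 * * * * /home/me/junkins/check-and-build.sh release

check-and-build.sh is the step-1 script sketched below; cron gives it a deliberately dumb schedule and the script decides for itself whether there's anything to do.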

Note that I'm allowed to skip fiddly edge cases and reliability; this is only for my own use, at least in the first instance.

Wildly exciting details


My changes live in Perforce, but that is - or so I believe - isomorphic to Git for these purposes; so it really doesn't matter what change control you use.

Step 1 checks if there's a lockfile in place; if there is, it gives up, because it means there's a build in progress. If there isn't, it syncs the workspace and checks whether the sync did anything (crudely: I just grep -q up-to-date in the sync output, which is what Perforce prints when there's nothing new; but it's all I need); if it did, it writes a lockfile and kicks off the builds.

Actually there's a slight complication, which is that all of the previous step is per branch. That adds a wrinkle: doing a sync in the middle of building a branch would be bad, so the lockfile has to be at least per branch. In terms of (compile) server load, it might be good to only permit so many branches to be built at once. But I only have two branches right now, so I'll worry about that later.
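
Put together, the per-branch check comes out something like this (a sketch only: the paths, check-and-build.sh and run-builds.sh are invented names, and the grep leans on Perforce printing "up-to-date" when the sync had nothing to do):

    #!/bin/sh
    # check-and-build.sh <branch> -- step 1, sketched
    branch="$1"
    lock="/scratch/junkins/$branch.lock"

    # a build is already in flight for this branch; try again next time round
    [ -e "$lock" ] && exit 0

    cd "/scratch/junkins/workspaces/$branch" || exit 1

    # p4 sync says "File(s) up-to-date." when there is nothing to fetch,
    # so seeing that means there's nothing new and we can stop here
    if p4 sync 2>&1 | grep -q up-to-date; then
        exit 0
    fi

    # something changed: take the lock and hand over to step 2
    touch "$lock"
    run-builds.sh "$branch"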

Step 2 is boring; it is just a shell script that iterates through a list of builds and qrshes them off into the grid, with params I nicked off another script. At the moment the list is the same for all branches; it could vary, and might one day. When it finishes, it deletes the branch lockfile. It could fire off the qrshes in parallel, and then worry about exactly how many it can responsibly fling onto the server, and about working out when they are all finished so the lockfile can die (perhaps the lockfile becomes a directory that each build writes a file into, to be removed once empty?).
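
For what it's worth, the shape of it (the build names and qrsh options are placeholders; the real params were, as I say, nicked off another script):

    #!/bin/sh
    # run-builds.sh <branch> -- step 2, sketched
    branch="$1"

    # same list for every branch, for now
    builds="tools libs app tests"

    for b in $builds; do
        # run each build synchronously on a grid node
        qrsh -cwd do-build.sh "$branch" "$b"
    done

    # everything finished (serially, for now), so the next sync may proceed
    rm -f "/scratch/junkins/$branch.lock"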

Step 3 is also dull, and simply builds whatever build has been asked for on whatever branch. Note that this is all done onto scratch (unbacked-up) space, so I don't have to care about leaving piles of build products lying around.
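
A sketch of that too (the build command and the results layout are invented; the real builds are whatever the product needs):

    #!/bin/sh
    # do-build.sh <branch> <build> -- step 3, sketched
    branch="$1"; build="$2"
    stamp=$(date +%Y%m%d-%H%M%S)
    out="/scratch/junkins/results/$stamp-$branch-$build"

    mkdir -p "$out"
    cd "/scratch/junkins/workspaces/$branch" || exit 1

    # build onto scratch space, keep the log, and record the verdict
    # as an empty marker file for step 4 to find
    if make "$build" > "$out/log" 2>&1; then
        touch "$out/PASS"
    else
        touch "$out/FAIL"
    fi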

Step 4 is the fun bit, and I'm still tinkering with it, since it's susceptible to all kinds of bells and whistles. Most simply, it needs to show the builds and whether they have passed or failed (only when you ask it to display; I don't want it to email me on build failures or anything like that). But in what order? Time; per branch; pass/fail; whatever. I use time, most recent first, because then all the fossil ones that I am no longer interested in fall off the bottom of the list. Then the bells: colourising pass/fail; noting how long ago the build was (and switching from secs to mins to hours as they age). Then it's nice to see the most recent changes, and for it to tell you which are in which build, and so on. Since this gets complex it's in Perl rather than shell.
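
The real thing is Perl, as I say, but the heart of it is only walking the results directory newest first and colourising the verdict; a much-abbreviated shell rendition of the idea, using the made-up layout from the sketches above:

    #!/bin/sh
    # results.cgi -- the bare bones of step 4, sketched
    echo "Content-type: text/html"
    echo
    echo "<html><body><table>"

    cd /scratch/junkins/results || exit 1

    # newest first, so the fossils fall off the bottom
    for d in $(ls -t); do
        if [ -e "$d/PASS" ]; then
            colour=green; verdict=pass
        else
            colour=red; verdict=FAIL
        fi
        echo "<tr><td>$d</td><td style=\"color:$colour\">$verdict</td></tr>"
    done

    echo "</table></body></html>"

The bells (ages in secs/mins/hours, recent changes, which change went into which build) are what push it into Perl.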


