Continuous Integration @ SISCOG - Automating our way to better software

Witness SISCOG's evolution with Continuous Integration (CI), transforming hesitant beginnings into an essential practice. Learn how CI has enhanced our software's stability, efficiency, and reliability, thanks to the persistence and innovation of our dedicated developers.


Fábio Almeida, Platform Engineering Team Lead, @SISCOG | 6 min read



Our journey with Continuous Integration (CI) started more than a decade ago, when a group of developers wanted to explore the potential of automated testing processes. They implemented a proof of concept using Jenkins to build and test binaries, in an attempt to ensure a basic level of quality and reduce the inefficiencies created by broken builds. Initially, this effort didn't gather much enthusiasm, as many developers preferred to stick with their manual routines of building and testing. However, the persistence of those early adopters has proved fruitful.

Over the years, what began as a somewhat rogue initiative has grown into a vital part of our development workflow. The transformation was significantly accelerated five years ago following a major compiler update that necessitated daily builds of all components. This marked a turning point, as unit, integration, and end-to-end tests were swiftly incorporated, leading to a substantial improvement in stability and code quality. Today, CI is deeply embedded in our processes, making us more efficient and our software more reliable, underscoring the importance of continuous innovation and adaptation in software development. This is what we’ll explore in the remainder of this article.



Soon after Jenkins was born, some of our developers felt the need to bring its power into our daily lives. They set out to demonstrate the usefulness of these tools and automated processes for the company. As a result, a proof of concept based on Jenkins was created.
The initial idea was to take the suites of automated tests that were run locally and execute them automatically, in order to ensure an additional level of quality in our Automatic Modes(1). When this proof of concept was presented to the company, it did not gather much interest. Some said, “I already run tests on my machine”, while others remarked, “Our QA team already tests the system”.

“Today, CI is deeply embedded in our processes, making us more efficient and our software more reliable.”



Being the pioneers of Continuous Integration at SISCOG, these developers were not about to call it quits in the face of a little adversity. And the automatic mechanisms continued to evolve and expand.

Soon, our products were being compiled and tested automatically every day (at least for these developers' team). Binaries were published on a network share for developers to use, and, in case of failure, the authors of the latest commits were warned of any compilation errors. Eventually, some developers implemented tooling to warn them if their local build increased the number of compilation warnings compared to the last successful daily build.
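The idea behind that warning-tracking tooling can be sketched in a few lines. This is a minimal Python sketch under stated assumptions, not our actual implementation: it assumes the compiler emits GCC/MSVC-style logs where each warning appears on a line containing the word "warning".

```python
import re

# Assumption: compiler logs report warnings on lines containing "warning"
# (GCC/MSVC style); SISCOG's actual log format may differ.
WARNING_RE = re.compile(r"\bwarning\b", re.IGNORECASE)

def count_warnings(log_text: str) -> int:
    """Count the lines in a build log that report a compiler warning."""
    return sum(1 for line in log_text.splitlines() if WARNING_RE.search(line))

def compare_builds(baseline_log: str, local_log: str) -> str:
    """Compare a local build's warning count against the last successful daily build."""
    baseline = count_warnings(baseline_log)
    local = count_warnings(local_log)
    if local > baseline:
        return f"REGRESSION: {local} warnings locally vs {baseline} in the daily build"
    return f"OK: {local} warnings (daily build had {baseline})"

# Example: the local build adds one new warning on top of the daily baseline.
daily = "foo.c:10: warning: unused variable 'x'\n"
local = daily + "bar.c:20: warning: implicit conversion\n"
print(compare_builds(daily, local))
# → REGRESSION: 2 warnings locally vs 1 in the daily build
```

Comparing against the last *successful* daily build is the key design choice: it gives each developer a stable, shared baseline rather than a moving target.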

Slowly but surely, people began to depend on this straightforward daily mechanism, and stability improved: the number of somewhat angry e-mails demanding to know who had broken the build diminished considerably (unfortunately, we don’t have statistics). Instead of receiving a message from someone complaining about spending their day debugging someone else’s error, developers received an e-mail from an automated service politely asking them to check whether their recent work might have caused the failure.

But even then, things were happening under the radar. The company acknowledged the existence of Jenkins, but the official development process did not include it as a validation tool. Nonetheless, developers learned to rely on these mechanisms, many of them heavily. Some, like myself, would start their day by grabbing freshly baked binaries published by Jenkins.

At this point in time, however, there was still a big gap: the majority of our clients’ customisations were not being covered by these automatic processes. But then, five years ago, it all changed, almost overnight.



As common practices dictate nowadays: you shall test before you shall change! When confronted with big changes, we developers all feel more comfortable having a comprehensive suite of validations to help guarantee the quality of those changes. And that is what pushed us over to the proverbial “dark side”.

Five years ago, when faced with a complex software migration full of technical challenges, we sought the reassurance of our automated processes. Within a week, all client specifications were building daily; in less than a month, unit and integration tests started being executed; and shortly thereafter, end-to-end tests joined the party.

This allowed us to face the uncertainty of changing our software with a safety net that would give us some feedback on submitted changes.

With time-based executions (mostly hourly and daily) booming and fully integrated into our day-to-day workflows, we began to explore additional ways to ensure the quality and stability of our software and the changes we produce. “More tests” is always an answer (and we have kept at it too), but there was something we had not yet tried, something that is standard in modern processes: build and test before merging changes.

Since we use Gerrit Code Review as our Git host and code review platform, it was a “simple” matter of triggering builds when a developer pushes a Change Request to Gerrit.
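The other half of that loop is reporting the build result back so Gerrit can gate the merge. As a minimal sketch (not our actual job configuration), a CI job could POST a JSON body like the one below to Gerrit's "set review" REST endpoint (`/a/changes/{change-id}/revisions/{revision}/review`), voting Verified +1 or -1; the message text and log URL here are purely illustrative.

```python
import json

def build_review_payload(build_ok: bool, log_url: str) -> str:
    """Build the JSON body for Gerrit's 'set review' REST endpoint,
    voting Verified +1 on success and -1 on failure."""
    payload = {
        "message": f"Build {'succeeded' if build_ok else 'failed'}: {log_url}",
        "labels": {"Verified": 1 if build_ok else -1},
    }
    return json.dumps(payload)

# A failed pre-merge build would block the merge with a Verified -1 vote.
print(build_review_payload(False, "https://ci.example.com/build/123"))
```

With a submit rule requiring Verified +1, a Change Request simply cannot be merged until it builds and passes its tests.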

The landscape today is quite different from a decade ago. All our applications are built and tested for every Change Request and at the end of the day. We have unit, integration, and end-to-end tests that are executed every night to find regressions and errors. Performance and quality assurance tests are also executed on weekends, in addition to functional tests conducted from a user’s point of view using test automation tools.


Change is difficult, especially when you have been developing software for over 35 years, as SISCOG has. When you have that much luggage on your back, sometimes it gets difficult to lug it around. When you began by time-sharing a Lisp Machine among developers, it can be difficult to envision the value of off-loading your personal processes to a bunch of idle machines.

I am not bringing anything new to the table by saying that there is actual value added in employing these practices. Using CI mechanisms in software development workflows helps us move faster and with added confidence that the changes being made are sound and safe. If nothing else, the time to discover certain types of bugs has improved, with some types of bugs having been completely eradicated (for example, breaking the build is now a thing of the past). Additionally, developers’ precious time (we have all heard the adage: “programmer time is more expensive than computer time”) can be allocated to more important tasks: introducing more bugs!


“Using CI mechanisms in software development workflows helps us move faster and with added confidence that the changes being made are sound and safe.”



For me, the most important question that has arisen throughout our experience with Continuous Integration is: how can we introduce more stability, improve the quality of checks, move faster, and deliver more value to our clients? Should we code more unit tests? Or are end-to-end tests the answer? Should we be more selective about what we build and test when validating a Change Request? Or should we build and test everything for each Change Request?

Unfortunately, there are no silver bullets. Personally, I believe in balance; therefore, all of these strategies are necessary to achieve superior quality.

Nonetheless, we are on the right track, with plenty of rails still to cover.



(1) Automatic Mode is a mode of operation in the SISCOG Suite for optimised planning of resources, such as vehicles and staff. For more information see article “The Magic Lamp”.