Majestic Pipeline: A CI/CD Love Story

Illustration split in two halves showing two people having a conversation: on the left, one person says "It works on my computer"; on the right, the other responds, "Yes, but we are not going to give your computer to the client."

There is a famous meme in the computer world. Daenerys Targaryen appears smiling at Thor and saying "It works on my computer", and Thor answers "Yes, but we are not going to give your computer to the client". I think it's a great illustration of the reason why pipelines appeared on the market.

If we look back and remember how software deployments worked many, many years ago, we'll see how much more complicated life was.

There's a basic concept to understand before we proceed: two repeated actions can only give the same result if they are executed in exactly the same environment. Imagine, for example, throwing a ball against a wall with great force and watching it bounce back. We can't know whether the ball will bounce the same way again unless we are sure that the second throw used exactly the same force, from the same place, with the same air humidity, atmospheric pressure and so on, down to billions of parameters.

Taking this concept into account, let's move on to software. Monolithic code written by a developer on one hardware setup at home produced a binary that was not compatible with other hardware, other instruction sets, another memory model, another set of operating system environment variables, and so on (I'm talking about code written before SCM tools like git arrived).

So the tests that this developer ran at home were not valid for production. Now imagine that we add more developers and many more environments to deploy the application to. This can quickly turn into a disaster.

To avoid this disaster you need a system capable of collecting the code, compiling it, and running it in different environments. That's what CI/CD systems are for.

CI/CD stands for Continuous Integration/Continuous Delivery or Continuous Deployment, and together they make up a workflow that enables development teams to quickly and consistently automate code integration, testing, and deployment of their applications.

First, let's define these 3 elements:

  • Continuous Integration (CI) is a software development practice of integrating code from multiple developers into a central repository frequently and automatically. Every time we do a git push of our code, CI starts (see the sketch after this list).
  • Continuous Delivery (CD) is a software development practice that automates the entire process of building, testing, and packaging an application so that it can be deployed at any time. This part of the pipeline covers all the tests that developers used to run on their own computers.
  • Continuous Deployment (CD) is a software development practice of automatically deploying new functionality to the production environment as soon as all tests have passed. We'll use this component in the end-to-end pipeline.
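
To make the CI part a bit more tangible, here is a minimal sketch, assuming the same Node.js placeholder project used in the examples later in this article (the repository URL and commands are placeholders, not a real setup): every push triggers a clean checkout, a dependency install, and the test suite.

#!/bin/bash
# Minimal "CI on every push" sketch; repo URL and commands are placeholders
set -euo pipefail
git clone https://github.com/mi-repositorio.git ci-workspace
cd ci-workspace
npm install   # install dependencies from scratch, not from someone's laptop
npm test      # the push is only "green" if the tests pass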

The Beginnings: Shell Scripts

In the early days of software development, code integration and deployment were done manually. Programmers wrote their code on their computers, uploaded it to a server, and then manually tested it. This was slow and error-prone.

To get around this, programmers started writing shell scripts that automated the integration and deployment process. These scripts did tasks like downloading code from the repository, compiling, running tests, and deploying to a server.

Shell scripts were an effective solution for automating the integration and deployment process, but they had some significant limitations. For example, they were difficult to maintain, especially when working with large and complex projects. Also, they were not very flexible and did not allow fine customization of the workflow.

This is what such a script could look like:

#!/bin/bash
# Stop at the first failing command
set -e
# Clone the repo
git clone https://github.com/mi-repositorio.git
cd mi-repositorio
# Install dependencies
npm install
# Run the unit tests
npm test
# Package the application
npm run build
# Copy the files to the server
rsync -avz build/ usuario@servidor:/var/www/mi-aplicacion/

The Jenkins Era

In 2005, Kohsuke Kawaguchi created Hudson, the tool that would later be renamed Jenkins, a CI/CD automation tool that became one of the most popular of its time. Jenkins is based on the concept of "jobs", tasks that can be executed independently on a CI/CD server. You can learn a bit more about Jenkins and other tools here.

With Jenkins, development teams could set up entire pipelines, from code integration to deployment, visually and with ease. Jenkins allowed the execution of shell scripts, but also allowed integration with other tools and technologies.

Jenkins was for many years, and still is, a very popular solution, but it also has its limitations. For example, its user interface is quite cluttered and difficult to navigate. Also, it's difficult to maintain complex pipelines in Jenkins due to the large amount of configuration that has to be done.

In Jenkins, the example that we previously wrote as a shell script becomes a pipeline of jobs that could be executed as follows:

pipeline {
    agent any
    stages {
        stage('Clone repo') {
            steps {
                git 'https://github.com/mi-repositorio.git'
            }
        }
        stage('Install dependencies') {
            steps {
                sh 'npm install'
            }
        }
        stage('Execute unittest') {
            steps {
                sh 'npm test'
            }
        }
        stage('Pack the app') {
            steps {
                sh 'npm run build'
            }
        }
        stage('Deploy the app') {
            steps {
                sshagent(['mi-credencial-ssh']) {
                    sh 'rsync -avz build/ usuario@servidor:/var/www/mi-aplicacion/'
                }
            }
        }
    }
}

We must also remember that TeamCity and some other tools already existed at the time; Jenkins was not the only tool that defined the CI/CD world.

The Arrival of Containers and Kubernetes

As containers became more popular, new CI/CD tools emerged that focused on automating pipelines for containers. These tools allowed for automated building and deployment of container images, simplifying the integration and deployment process.
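
As a rough sketch of what such a step looks like (the registry and image name are placeholders, not taken from the article's examples), the "copy files to a server" step of the earlier script turns into building and pushing a container image:

#!/bin/bash
# Sketch of a container build-and-push step; registry and image name are placeholders
set -euo pipefail
IMAGE="registry.example.com/mi-aplicacion"
TAG=$(git rev-parse --short HEAD)   # tag the image with the commit it was built from
docker build -t "$IMAGE:$TAG" .
docker push "$IMAGE:$TAG"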

One of the most popular tools to emerge in this area was Docker Compose, which allowed you to define and run multi-container applications. However, as applications became more complex and scalable, the need arose for more powerful tools that would allow you to automate the entire workflow, from code integration to production deployment.

That's where Kubernetes, a container orchestration system, came into the picture. Kubernetes became the de facto tool for running containerized applications in production, and also made it possible to automate the entire process of building, testing, and deploying applications.
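
As a hedged illustration (the deployment and container names below are placeholders), the deploy step of a pipeline targeting Kubernetes can be as small as pointing an existing Deployment at the freshly pushed image and waiting for the rollout to finish:

#!/bin/bash
# Sketch of a Kubernetes deploy step; deployment and container names are placeholders
set -euo pipefail
IMAGE="registry.example.com/mi-aplicacion"
TAG=$(git rev-parse --short HEAD)
kubectl set image deployment/mi-aplicacion app="$IMAGE:$TAG"
kubectl rollout status deployment/mi-aplicacion --timeout=120s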

As Kubernetes became more popular, CI/CD tools also emerged that natively integrated with this orchestration system. Argo CD, CircleCI, Travis CI, the great Spinnaker, Bamboo, GoCD and many more appeared or were adapted to work with Kubernetes, which made it possible to implement CI/CD pipelines on Kubernetes in an automated and flexible way.

For example, Argo CD is based on declaring the desired state as Kubernetes resources: it applies plain Kubernetes manifests, which makes pipelines easier for development teams to understand and maintain.

In addition, Argo CD enables integration with multiple CI/CD tools, which means development teams can use their favourite tools to build, test and package their applications, and then automatically deploy them to Kubernetes.
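
For illustration only (the repository URL, path and namespaces are placeholders, not a real setup), an Argo CD Application is itself just a Kubernetes manifest: you apply it once, and Argo CD keeps the cluster in sync with whatever the repository declares.

#!/bin/bash
# Sketch: registering an application with Argo CD by applying a manifest (placeholder values)
kubectl apply -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: mi-aplicacion
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/mi-repositorio.git
    targetRevision: main
    path: k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: mi-aplicacion
  syncPolicy:
    automated: {}   # re-sync automatically whenever the repo changes
EOF

This is the GitOps, pull-based model: instead of the pipeline pushing artifacts to the cluster, the cluster pulls the desired state from git.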

If you want to know more about Argo CD, I recommend our free course on YouTube, where you will learn how to use this amazing tool.

The future of CI/CD pipelines

As technology continues to advance, it is likely that new tools and technologies will emerge to implement CI/CD pipelines more effectively. However, the fundamental principles of continuous integration and continuous delivery are likely to remain the same for a long time.

Since cloud services began to dominate the world, developers realized they had a problem: they couldn't let pipeline executions run unchecked and burn precious dollars. Google created Cloud Build, AWS made CodePipeline, and Azure created Azure DevOps to integrate everything. On top of that, all cloud platforms allow and encourage users to keep their pipelines within their environments, with all the services already pre-installed in their marketplaces. None of them (except Azure, for obvious reasons) want you to run your pipelines on GitHub Actions, for example.

It is very clear that automation is not going to stop, and that pipelines are and always will be necessary. The future is here, don't miss it.

At mkdev, we are CI/CD experts, and we know how to deploy majestic pipelines, so that our customers are happy customers. Don’t hesitate to contact us if you also need help with CI/CD.

DevOps consulting: DevOps is a cultural and technological journey. We'll be thrilled to be your guides on any part of this journey.