Production OpenShift Cluster in 35 Minutes: First look at OKD 4 and the new OpenShift Installer

An illustration of a contented person with a feathered hat, leaning back with their feet up on a ship's wheel while holding a sandwich, with sailing ship elements and a cloudy sky in the background.

Let's talk about OpenShift.

OpenShift is one of the most popular Kubernetes distributions in the world.

It has existed for almost 10 years and has gone through 4 major version releases.

Already back in 2011, OpenShift was based on containers, but Red Hat developed their own custom container technology and container orchestration tools.

It all changed with the release of OpenShift 3, back in 2015.

With this release, Red Hat completely changed every technology underneath OpenShift.

It went from custom-developed services to fully embracing Kubernetes.

At that moment, OpenShift re-emerged as one of the first full-featured Platform as a Service systems built on top of Kubernetes.

Many features of Kubernetes that you love and use, like Deployments or Role-Based Access Control, initially appeared in OpenShift and were later merged or adopted upstream to become the core of our favourite container orchestration tool.

In 2019, OpenShift jumped further into the bright container future with the release of version 4.

I was lucky to be one of the first production users of OpenShift 4, architecting and implementing multiple clusters for one of my clients.

I was amazed by some of the new features that OpenShift 4 brought. The only problem was that Red Hat did not release the open source edition of OpenShift 4 until the middle of 2020.

If you don't know, almost every Red Hat product has an enterprise and an open source edition.

There is an enterprise Satellite and there is an open source Foreman, for example.

And there is an enterprise OpenShift and an open source OKD, the Community Distribution of Kubernetes.

Don't ask me how Community Distribution of Kubernetes abbreviates to OKD. Naming is very strange these days.

You can think of the open source edition as the upstream, and the enterprise edition as the more stable, tested release that costs money in return for Red Hat support, Red Hat branding and sometimes slightly different features.

In the case of OpenShift 4, this model was reversed. For over a year, enterprise OpenShift was the cutting-edge, state-of-the-art technology, while OKD was stuck at version 3. But now things have finally changed back to normal.

I thought that with the release of OKD 4, it was a great time to show you all the good things that are happening in OpenShift and also demonstrate how OpenShift differs from Kubernetes these days.

Because OpenShift is a big and complex technology, I won't be able to show everything in one article.

That means you are going to see a whole series of articles on cool new technologies and concepts in OpenShift and Kubernetes.

Even though I will mostly show OpenShift, remember that OKD is an open source tool, and this tool consists of many open source technologies. Chances are that many of the things you are going to see will either make it into the Kubernetes core or somehow re-appear in a different form in the Kubernetes community.

In this article, I will show you how to install a new OpenShift cluster on AWS.

I will keep writing OpenShift in this article, because you can roughly put an equals sign between OpenShift and OKD.

One of the biggest changes since version 3 is the installation process.

Instead of using Ansible, there is now a dedicated openshift-install tool that takes care of the complete cluster installation in only a few command line prompts.

It's a bit similar to Kubernetes kops, if you have ever used it before.

I already installed it beforehand - it's just a single binary that you need to drop into one of your PATH directories. Let me run the openshift-install create cluster command.
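In case you want to reproduce it, the setup is roughly this (the archive name is a placeholder - grab the actual release from the OKD releases page on GitHub):

```
$ tar xzf openshift-install-linux-<version>.tar.gz
$ sudo mv openshift-install /usr/local/bin/
$ openshift-install version
```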

First, I need to choose the SSH key so that I can later log in to the cluster if I really have to. I can also decide to skip this step.

Then I need to select the platform.

OpenShift Installer works with all the major cloud providers, as well as private virtualization solutions like VMware or oVirt.

Under the hood, this installer uses Terraform to provision the infrastructure. It uses it as an internal library, meaning that you don't have to install Terraform yourself.

This installation type is called "installer-provisioned infrastructure", which means that the OpenShift installer will not only install the OpenShift cluster, but will also configure the complete underlying infrastructure.

There is also a "user provided infrastructure" option, in which you have to prepare all the infrastructure and then installer will make an openshift cluster out if it. This option is especially useful for bare metal clusters.

I am going to use AWS, so that we can also see how this installer natively integrates with various cloud services.

Then, I need to choose the base domain.

The installer pulls all the existing hosted zones from my AWS account and lets me select one.

I've pre-created this hosted zone before writing this article.

The next step is to choose the cluster name.

After that, I need to set the pull secret, which can be either a real secret to access your registry or a fake string from the documentation ({"auths":{"fake":{"auth": "bar"}}}).
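Put together, the interactive part looks roughly like this (all the answers here are examples, not something you have to copy):

```
$ openshift-install create cluster
? SSH Public Key /home/user/.ssh/id_ed25519.pub
? Platform aws
? Region eu-central-1
? Base Domain example.com
? Cluster Name okd4
? Pull Secret [? for help] ******************
```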

Now I can finally trigger the cluster installation. It will take around 30 minutes.

Already at the installation stage, we can see how different OpenShift 4 cluster management is compared with version 3.

We only had to configure a handful of options, and what we get in the end is a complete platform as a service, tailored to the particular cloud provider.

Let me time travel to the end of the installation.

It took the installer almost 35 minutes to install the complete cluster.

Command line interface showing successful installation of an OpenShift cluster with details including SSH public key, AWS region, domain, and access information for the OpenShift web console.

It might sound like a lot, but once you see what exactly was installed and configured during this process, you might find it's actually pretty fast.

When finished, the OpenShift installer gives us the password for the default kubeadmin user.

With these initial credentials we can further configure the cluster, but I won't go too deep into the cluster configuration in this article.
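Just as a sketch, the very first steps with these credentials could look like this (the installation directory, cluster name and domain are placeholders):

```
# Either use the generated kubeconfig directly...
$ export KUBECONFIG=<installation-dir>/auth/kubeconfig
$ oc get nodes

# ...or log in with the kubeadmin password printed by the installer.
$ oc login -u kubeadmin -p '<password>' https://api.<cluster-name>.<base-domain>:6443
```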

There are many things that you can adjust to your needs, and such a topic requires a dedicated article. Tell me in the comments section below if you want to learn more about it.

Let's check the AWS console to see what exactly the installer did for us on the infrastructure side.

I am first going to check running EC2 instances.

We can see here 6 new instances, 3 for the control plane and 3 for the worker nodes.

Screenshot of AWS EC2 Management Console displaying a list of running instances with instance IDs, types, status checks, alarm status, and availability zones.
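If you prefer the command line, the same instances can be listed with the AWS CLI - assuming the usual kubernetes.io/cluster/<infra-id> tag that the installer puts on them (the infra ID is a placeholder):

```
$ aws ec2 describe-instances \
    --filters "Name=tag-key,Values=kubernetes.io/cluster/<infra-id>" \
    --query "Reservations[].Instances[].[InstanceId,InstanceType,Placement.AvailabilityZone]" \
    --output table
```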

There are dedicated IAM roles created for masters and workers. We will see them in a bit.

OKD is based on Fedora CoreOS, which is the default operating system for every cluster node. There is a separate article about Fedora CoreOS on the website; check it out to learn how it differs from a regular operating system.

There are also security groups with all the required rules in place. If you think that the descriptions of the security group rules could be a bit nicer, then I am with you.

Screenshot of an AWS EC2 instance details page focusing on security settings with inbound and outbound rules displayed.

Let's check the load balancers.

There are 3 of them - 2 external ones and 1 internal. One of the external ones, for example, makes the OpenShift API available to you over the Internet.

Screenshot of an AWS EC2 Management Console displaying load balancer settings with details including DNS name, state, type, and creation date for network load balancers.
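The same check works from the command line (a sketch):

```
$ aws elbv2 describe-load-balancers \
    --query "LoadBalancers[].[LoadBalancerName,Scheme,Type]" \
    --output table
```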

Let's now check the IAM roles.

The master role has lots of permissions to manage storage, load balancers and security groups.

The worker role doesn't have anything too specific inside.

Screenshot of the AWS IAM Management Console displaying a list of IAM roles with their associated trusted entities and last activity timestamps.

Now let's jump to Route 53.

There is a new private zone for our cluster's base domain.

Inside the zone, the OpenShift installer pre-created many different records, so that we can resolve the API and the applications running on the cluster.

Screenshot of AWS Route 53 Console with hosted zones tab open showing information on how hosted zones work and a list of two domain names with details such as domain type, creator, record count, and hosted zone ID.
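To peek at these records without the console, something like this should do (the hosted zone ID is a placeholder):

```
$ aws route53 list-hosted-zones \
    --query "HostedZones[].[Name,Id,Config.PrivateZone]" --output table

$ aws route53 list-resource-record-sets --hosted-zone-id <zone-id> \
    --query "ResourceRecordSets[].[Name,Type]" --output table
```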

Speaking of applications, let's check the third load balancer created by the installer.

For whatever reason, the 4.4 release of OKD uses the classic ELB here.

Screenshot of AWS Management Console showing EC2 Load Balancers with details on names, states, types, and various instance parameters.
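Because it's a classic load balancer, it shows up under the older elb subcommand of the AWS CLI rather than elbv2 (again, just a sketch):

```
$ aws elb describe-load-balancers \
    --query "LoadBalancerDescriptions[].[LoadBalancerName,Scheme,DNSName]" \
    --output table
```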

And this load balancer is the one that gets traffic from the outside to your applications running in the cluster. Finally, let's check the VPC.

There is a new VPC just for the cluster, with 6 new subnets - 3 public and 3 private, one per connectivity type per availability zone.

Screenshot of AWS Management Console displaying a list of subnet details within a VPC interface, showing names, IDs, states, IPv4 CIDRs, and availability zones.

There are also 3 NAT gateways, once again one per availability zone.
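A quick way to verify this layout with the AWS CLI, assuming you look up the VPC ID first (placeholders again):

```
# 6 subnets - 3 public and 3 private, one per availability zone
$ aws ec2 describe-subnets --filters "Name=vpc-id,Values=<vpc-id>" \
    --query "Subnets[].[SubnetId,CidrBlock,AvailabilityZone]" --output table

# 3 NAT gateways, one per availability zone
$ aws ec2 describe-nat-gateways --filter "Name=vpc-id,Values=<vpc-id>" \
    --query "NatGateways[].[NatGatewayId,SubnetId,State]" --output table
```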

As you just saw, the OpenShift Installer provisioned many different AWS resources, following best practices for security and availability.

You can also provide the installer with a config file to further customize the installation to fit your particular environment.
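The usual workflow is to generate the config first, edit it, and only then create the cluster. Here is a sketch of what that looks like, with example values in the generated install-config.yaml:

```
$ openshift-install create install-config --dir=okd4
$ cat okd4/install-config.yaml
apiVersion: v1
baseDomain: example.com
metadata:
  name: okd4
controlPlane:
  name: master
  replicas: 3
compute:
- name: worker
  replicas: 3
platform:
  aws:
    region: eu-central-1
pullSecret: '{"auths":{"fake":{"auth": "bar"}}}'
sshKey: 'ssh-ed25519 AAAA...'
$ openshift-install create cluster --dir=okd4
```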

Before we wrap up, let me show you some of the files that the installer created on my laptop.

First of all, let's check the metadata file.

A screenshot of a terminal window displaying commands and a file called 'metadata.json' which contains JSON formatted data related to Kubernetes/OpenShift cluster identifiers and Amazon Web Services (AWS) region information. The terminal prompt is "okd".

Here we can see the cluster name and the unique cluster identifier.
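Roughly, the file looks like this (all identifiers here are made up):

```
$ cat metadata.json
{
  "clusterName": "okd4",
  "clusterID": "4d3e8f1a-....-....",
  "infraID": "okd4-abc12",
  "aws": {
    "region": "eu-central-1",
    "identifier": [
      { "kubernetes.io/cluster/okd4-abc12": "owned" }
    ]
  }
}
```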

There is also a Terraform state file.

Screenshot of a Terraform state file (terraform.tfstate) displayed in a terminal with syntax highlighting, showing code related to AWS resources management.

Keep in mind that the installer uses Terraform internally. Despite this fact, you won't be able to use Terraform directly, because the installer does not generate any Terraform templates.

Still, you should probably save this state file to a git repository. There is a destroy cluster command that needs this state to properly clean up everything.
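Cleaning up is then a single command, pointed at the directory that holds the state (the directory name is a placeholder):

```
$ openshift-install destroy cluster --dir=<installation-dir>
```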

There is also a default kubeconfig file that you can use to perform the initial cluster configuration. You probably don't want to use it after you finish with the post-configuration activities.

And that's it for this article.

As you just saw, the new OpenShift installer is very easy to use. It creates a whole bunch of cloud infrastructure resources, and it does so by following the best practices of the cloud providers it supports.

Of course, every organization has different conventions and approaches to managing cloud infrastructure. The default setup that the OpenShift installer provides might not be enough. Luckily, there are different ways to further customize the installation if you need to.

One problem with this installer is that it works outside of your organization's infrastructure automation.

If you use CloudFormation or Terraform to describe your complete infrastructure, then you might not like the idea of another tool generating dozens of different cloud resources outside your primary infrastructure as code codebase.

In the next article we will see a few technologies that lie at the core of the OpenShift cluster and that make cluster management and operations a breath of fresh air.


