Deploying a GitLab repo to OpenShift
In this post, I’ll go over how to take a GitLab repo, build a Docker image from its Dockerfile using GitLab’s CI/CD pipeline, authenticate with OpenShift, and then deploy a service. I recently had to do this and it took a lot of trial and error, so hopefully this guide will save someone some time.
Building the Docker image
For this step to work, you will already need a Dockerfile for your project. There are plenty of guides on writing one, so I won’t go into detail here. One thing that makes deploying to OpenShift less painful is to include an `EXPOSE` instruction in your Dockerfile; this will be helpful later when we set up routes.
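As an illustrative sketch (the base image, port, and entrypoint here are all hypothetical, not from any particular project), the relevant part of a Dockerfile might look like this:

```dockerfile
# Hypothetical Dockerfile for a small Node.js service.
FROM node:18-alpine
WORKDIR /app
COPY . .
RUN npm ci --omit=dev
# Document the port the service listens on; OpenShift can use
# this later to pick the right port when exposing a route.
EXPOSE 8080
CMD ["node", "server.js"]
```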
Once we have a Dockerfile, we will need a `.gitlab-ci.yml` file containing the configuration GitLab CI needs to build the Docker image. A good starting point is an adaptation of the file specified in the GitLab docs:
```yaml
stages:
  - build

build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG --destination $CI_REGISTRY_IMAGE:latest
  rules:
    - if: $CI_COMMIT_TAG
```
The only modification here is that the build also tags the image as `latest`. This will be helpful later, as we can point the OpenShift image-stream at this tag to update the deployment automatically on each build. Another important detail is the `if: $CI_COMMIT_TAG` rule: it means the pipeline only runs the build on a new tag, not on every commit, which leaves the developer with more fine-grained control over when builds run.

Kaniko has an advantage here over using Docker to build the image, since the latter would require Docker-in-Docker (DinD), which poses security risks as well as larger overheads. Kaniko can build Docker images entirely in userspace, which is much better suited to running inside a container.
`git push` will not push local tags by default. This is usually helpful, as it prevents conflicts with other tags on the remote. In this case, the `git push --follow-tags` command will push annotated tags to the remote along with the commits, thus triggering a build.
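As a small sketch of this behaviour, using a local bare repository as a stand-in for the GitLab remote (all paths, names, and the tag are placeholders):

```shell
set -e
# A local bare repo standing in for the GitLab remote.
remote=$(mktemp -d)
git init -q --bare "$remote"

# A throwaway working repo with a single commit.
work=$(mktemp -d)
cd "$work"
git init -q
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "initial"
git branch -M main
git remote add origin "$remote"

# An annotated tag (-a); this is the kind --follow-tags pushes.
git -c user.email=ci@example.com -c user.name=ci tag -a v1.0.0 -m "Release v1.0.0"
git push -q --follow-tags origin main

# The remote now has the tag, which is what triggers the CI build.
git ls-remote --tags "$remote"
```

Note that `--follow-tags` only pushes annotated tags (`git tag -a`); a lightweight tag would stay local and never trigger the pipeline.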
Once we push a tag, the pipeline will build the image and upload it to the project’s container registry.
Creating the image-stream in OpenShift
The first step to deploying to OpenShift is to have a working OpenShift instance, as well as the `oc` CLI tool installed. You can log in using either an API token or a username and password with the following command:

```shell
$ oc login https://openshift.example.com -u your_username
```

You will then be prompted for your password. If you do not already have a project, create one with `oc new-project project_name`; this will also automatically switch you to the new project.
Now that we have logged in to OpenShift, we need to create the secrets to authenticate with GitLab. This is where we run into the main difficulty I encountered: GitLab requires authentication against two endpoints. These might change depending on your deployment of GitLab, but for me, they were `registry.git.example.com` and `git.example.com`. For this example, you can use either your GitLab password or, more securely, an access token generated by GitLab. These tokens can be either user-specific or project-specific; as long as the token has registry read access, it will work. We first need to run the following commands to create a secret for the first authentication endpoint:
```shell
$ oc create secret docker-registry gitlab \
    --docker-server=registry.git.example.com \
    --docker-username=your_username \
    --docker-password=your_password_or_token \
    --docker-email=your_email@example.com
$ oc secrets link builder gitlab --for=pull
$ oc secrets link default gitlab --for=pull
$ oc secrets link deployer gitlab --for=pull
```
We will then need to create the corresponding secret for the second endpoint:
```shell
$ oc create secret docker-registry gitlab-delegated \
    --docker-server=git.example.com \
    --docker-username=your_username \
    --docker-password=your_password_or_token \
    --docker-email=your_email@example.com
$ oc secrets link builder gitlab-delegated --for=pull
$ oc secrets link default gitlab-delegated --for=pull
$ oc secrets link deployer gitlab-delegated --for=pull
```
Once these secrets are set up, we should be able to add an image-stream that pulls the `latest` tag of the image:
```shell
$ oc import-image image_stream_name \
    --from=registry.git.example.com/project/path \
    --scheduled \
    --confirm
```
The `--scheduled` flag means that the OpenShift instance automatically checks for a new image with the `latest` tag, every 15 minutes by default. We can check the image-stream details with `oc describe is/image_stream_name`.
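For reference, the ImageStream object this creates looks roughly like the following sketch (using the placeholder names from above); the scheduled import shows up as `importPolicy.scheduled`:

```yaml
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: image_stream_name
spec:
  tags:
    - name: latest
      from:
        kind: DockerImage
        name: registry.git.example.com/project/path:latest
      importPolicy:
        # Re-import the tag periodically (every 15 minutes by default).
        scheduled: true
```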
Deploying an application using the image-stream
This is now the easy part. To deploy a new application, we can use the following command:

```shell
$ oc new-app image_stream_name
```

This will create all of the OpenShift objects needed for the application to run, under the name `image_stream_name`.
The final step is to expose the application to the outside world. We can do this with the following command:

```shell
$ oc expose svc/image_stream_name
```

This uses the `EXPOSE` instruction from earlier to work out automatically which port on the pod to open. By default, the application will then be available on port 80 with no SSL. Adding SSL is better documented than importing images from GitLab and is therefore beyond the scope of this post. To get the DNS name of the application, you can run the following command:

```shell
$ oc describe routes/image_stream_name
```

The DNS name can be found under the `Requested Host` label.
I hope that my trial and error saves you some time and some Kubernetes-related pain.