
[1/3] Complete guide to CI/CD pipelines with Drone.io on kubernetes — Kube runner and private docker registry

Cogarius


original article https://blog.cogarius.com/index.php/2020/04/05/complete-guide-ci-cd-drone-io-on-kubernetes-kube-runner-private-registry/

TL;DR

You are running Kubernetes and using an expensive yet easy and maintainable CI/CD pipeline.

You want to save money, but you don’t want to spend too much time migrating and don’t want to give up on features.

You want to be able to:

  • Push your images to your private Docker registry
  • Monitor your builds with Prometheus
  • Access your HashiCorp Vault secrets from your pipeline

This series of three articles will walk you through it with Drone CI!

In this post we will go through the installation and setup of a private Docker registry, the Drone CI server and the Kubernetes runner on a Kubernetes cluster with Helm v3.

The Kubernetes runner is a standalone service that executes pipelines inside Pods. It is very similar to the Docker runner. Note that the Kubernetes runner is still in beta.

We migrated to Drone CI inside our Kubernetes cluster and saved some bucks compared to our previous CI/CD solution. We also gained much more flexibility thanks to all the plugins available. With a little bit of Go you can quite easily create your own plugins to suit your specific needs.

Why Drone CI? Drone is a Continuous Delivery system built on container technology. Drone uses a simple YAML configuration file, a superset of docker-compose, to define and execute pipelines inside Docker containers.

  • The learning curve is ridiculously gentle compared to products like Jenkins
  • It massively uses Docker for... everything! So you can basically do whatever you need
  • It is open source (the Community Edition is licensed under the Apache License) and written in Go. There are also enterprise and cloud versions with a different pricing model.
  • Configurable secret management (Vault, Kubernetes, AWS, etc.)
  • The plugin list is huge

Here is the action plan:

  • Deploy a Docker registry via helm chart
  • Deploy Drone server via helm chart
  • Deploy Kube Drone Runner via helm chart
  • Activate the GitHub webhook to our Drone deployment
  • Configure a pipeline with a .drone.yml file with these steps:
  1. Build a Docker image
  2. Push the image to our private Docker registry
  3. Run services like an API and a DB
  4. Run some end-to-end tests with those services
  5. Deploy a Helm chart of our application
  • Follow the build status on the beautiful Drone UI (thanks to Pixelpoint)
  • Make Prometheus scrape metrics from the Drone server and display them in a Grafana dashboard
  • Configure Vault for the drone-vault extension
  • Deploy the drone-vault extension

Deploy a Docker registry (optional)

We will be deploying the official Helm chart for docker-registry.

$ helm install docker-registry stable/docker-registry

Don’t forget to set the persistence.enabled value to true to store your images on a persistent volume.

Let’s also add some security by defining the htpasswd value. Note that an htpasswd file contains user:password associations. To generate one, run this Docker command, which creates an htpasswd file containing the entry for user "manu" with password "superpassWorD":

$ docker run --entrypoint htpasswd registry:2.7.1 \
-Bbn manu superpassWorD > htpasswd

If you want to verify the username and password of an htpasswd file in the current directory:

$ docker run -v "$(pwd)":/tmp --entrypoint htpasswd \
registry:2.7.1 -bv /tmp/htpasswd manu superpassWorD
Password for user manu correct.

Finally we will define an ingress with TLS at https://docker.mycompany.com. We will not detail how we achieve automatic TLS termination with Let’s Encrypt and cert-manager, as it has been covered in many articles already.
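Putting these pieces together, the registry values file might look like the sketch below. This is a hedged example: `persistence.enabled`, `secrets.htpasswd` and the `ingress` block are the value names used by the stable/docker-registry chart at the time of writing, so double-check them against your chart version, and paste your own generated htpasswd content.

```yaml
# docker-registry-values.yaml -- sketch, verify against your chart version
persistence:
  enabled: true                    # keep images on a persistent volume
secrets:
  htpasswd: "manu:$2y$05$..."      # paste the content of the generated htpasswd file
ingress:
  enabled: true
  hosts:
    - docker.mycompany.com
  tls:
    - secretName: docker-registry-tls
      hosts:
        - docker.mycompany.com
```

Install it with `helm install docker-registry stable/docker-registry -f docker-registry-values.yaml`.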

For information, I tried several UI images for the Docker registry but only konradkleine/docker-registry-frontend:v2 kind of works with our Kubernetes setup.

Pull images from the private registry

To pull images from your private registry in Kubernetes you need to specify a secret name in the imagePullSecrets field of your resource spec. The secret is bound to a namespace.

$ kubectl create secret docker-registry regcred \
--docker-server=docker.mycompany.com \
--docker-username=manu \
--docker-password=superpassWorD

This will create a secret named regcred in the current namespace.

If we decode the secret we will see that the .dockerconfigjson data is in this form:

{"auths": {"docker.mycompany.com": {
"auth": "bWFudTpzdXBlcnBhc3NXb3JECg=="}
}}
$ echo 'bWFudTpzdXBlcnBhc3NXb3JECg==' | base64 --decode
manu:superpassWorD

For our pipeline to be able to pull images from our registry, you will need a Drone secret named dockerconfigjson containing that .dockerconfigjson data. Indeed, in our pipeline we will see a section like this:

image_pull_secrets:
- dockerconfigjson

This feeds the runner with the needed credentials to be able to pull the image.
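If you already created the regcred secret above, you can print its decoded content and paste it as the value of the dockerconfigjson secret in Drone. A sketch, assuming kubectl points at the namespace that holds regcred:

```shell
$ kubectl get secret regcred \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode
```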

Push images to the private registry

To push images into our private registry we fill in the username and password settings of the Docker plugin. In our example the username will be manu and the password superpassWorD.

Deploy Drone CI

We will be deploying the official Helm chart for Drone.

$ helm repo add drone https://charts.drone.io
$ helm repo update

First let’s fill in the Drone server values for the chart installation.

Let’s only allow access to a few GitHub accounts with the env variable DRONE_USER_FILTER. The DRONE_USER_CREATE env variable defines an administrator.

Here is an extract of the values file used to deploy the chart. We will detail the variables hereafter.

server:
  host: "drone.mycompany.com"
  adminUser: "zgorizzo69"
  env:
    DRONE_LOGS_DEBUG: "false"
    DRONE_DATABASE_DRIVER: "sqlite3"
    DRONE_USER_CREATE: "username:zgorizzo,admin:true"
    DRONE_USER_FILTER: "zgorizzo,manureva"
    DRONE_SERVER_HOST: "drone.mycompany.com"
    DRONE_SERVER_PROTO: https
    DRONE_RPC_SECRET: XXXXXXXX42424242
    DRONE_GITHUB_CLIENT_ID: "46456d465z5d45z5za64d"
    DRONE_GITHUB_CLIENT_SECRET: "6544daz8310az21544"

For the Drone server to talk to its runners we set a shared secret with the DRONE_RPC_SECRET env variable. The Drone hostname is defined as drone.mycompany.com by the DRONE_SERVER_HOST env variable.
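Any sufficiently random string will do for this shared secret; the Drone documentation suggests generating it with openssl:

```shell
# Generate a 32-character hex string to use as DRONE_RPC_SECRET
openssl rand -hex 16
```

Use the same value in both the server values file and the runner values file.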


Let’s add an ingress pointing to the Drone server UI:

...
ingress:
  enabled: true
  hosts:
    - "drone.mycompany.com"
...

For our drone server to be able to access GitHub resources we need to fill in the DRONE_GITHUB_CLIENT_ID and DRONE_GITHUB_CLIENT_SECRET.

You will need to create an OAuth application to get your keys.

  • Go to your GitHub account and, in the upper-right corner of any page, click your profile photo, then click Settings.
  • In the left sidebar, click Developer settings.
  • In the left sidebar, click OAuth Apps.
  • Click New OAuth App.
  • In “Application name”, type the name of your app, for instance drone
  • In “Homepage URL”, type the full URL to your Drone server. Here in our example it would be `https://drone.mycompany.com`
  • In “Authorization callback URL”, type the callback URL `https://drone.mycompany.com/login`. Note that the authorization callback URL must match this format and path, and must use your exact server scheme and host.
  • Click Register application.

And voilà! You end up on a screen showing the client ID and the client secret.

Kube runner

For the Drone kube runner chart we pretty much keep the default values at this point, except for DRONE_RPC_SECRET, which MUST match the server’s (see above).
We can now deploy the Helm charts with our values. If you want a complete example go here.

$ kubectl create namespace drone
$ helm install --namespace drone drone drone/drone \
-f drone-values.yaml
$ helm install --namespace drone drone-runner-kube \
drone/drone-runner-kube -f drone-runner-kube-values.yaml

Drone UI

Once the deployment is ready, you should be able to log into Drone’s website. If you haven’t set up any ingress, you can port-forward from the Drone server pod:

$ kubectl port-forward $(kubectl get pods \
-l app.kubernetes.io/component=server,\
app.kubernetes.io/instance=drone \
-o jsonpath='{.items[*].metadata.name}') 8184:80

Accept to grant permissions to your newly created Drone application.

You will end up on a page like this one

Simply click Activate to enable a Drone pipeline on this repository. Then navigate to the Settings tab.

Add these two secrets to push images to our private docker registry.

  • docker_username is the Docker registry user defined earlier. Let’s put manu as the secret value
  • docker_password is the Docker registry password defined earlier. Let’s put superpassWorD as the secret value

Add this secret to pull images from our private docker registry.

  • dockerconfigjson is the Docker registry JSON config that we described earlier.

Github configuration

Go to the repository you have activated and make sure the webhook is properly set up and working.

Navigate to Settings / Webhooks, click on the webhook, and at the bottom of the page you will see the recent deliveries. You should see 200 responses.

Drone pipeline configuration

Let’s create a branch and add a .drone.yml file to our repository. I will share a full example of a Drone file and explain each section.

kind: pipeline
type: kubernetes
name: MyApp

globals:
- &docker_creds
  username:
    from_secret: docker_username
  password:
    from_secret: docker_password
- &conf_test_api
  NODE_ENV: production
  FRONT_URL: localhost
  MONGODB_URI: mongodb://localhost/myapp
  PORT: 3000
  API_URL: localhost

image_pull_secrets:
- dockerconfigjson

steps:
- name: api # building the API docker image
  image: plugins/docker
  settings:
    repo: docker.mycompany.com/myapp/api
    registry: docker.mycompany.com
    dockerfile: ./api/Dockerfile
    tags: ["${DRONE_COMMIT_SHA:0:7}", "latest"]
    <<: *docker_creds
- name: front # building the Front docker image
  image: plugins/docker
  settings:
    repo: docker.mycompany.com/myapp/web
    registry: docker.mycompany.com
    dockerfile: ./web/Dockerfile
    tags: ["${DRONE_COMMIT_SHA:0:7}", "latest"]
    <<: *docker_creds
- name: mongodb # launching a mongodb for integration tests
  image: mongo:latest
  detach: true
  ports:
  - 27017
- name: apitotest # launching an API for integration tests
  image: docker.mycompany.com/myapp/api
  detach: true
  environment:
    <<: *conf_test_api
  ports:
  - 3000
  depends_on:
  - mongodb
- name: run_integration_tests
  image: docker.mycompany.com/myapp/api
  commands:
  - "cd /app"
  - "npm run test:e2e"
  environment:
    <<: *conf_test_api
  depends_on:
  - apitotest
- name: deploy # deploy to kubernetes using a Helm chart
  image: pelotech/drone-helm3
  settings:
    mode: upgrade
    chart: ./charts/my-app
    release: my-app-staging
    namespace: my-app-staging
    debug: true
    kube_service_account: cicd
    kube_api_server: "https://kube.mycompany.com:6443"
    kube_token:
      from_secret: kube_token
    kube_certificate:
      from_secret: kube_ca_certificate
    values:
    - "api.url=staging.api.myapp.mycompany.com"
    - "front.url=staging.myapp.mycompany.com"
    cleanup_failed_upgrade: true
    force_upgrade: true
  depends_on:
  - api
  - front
  - run_integration_tests
- name: notification
  image: appleboy/drone-telegram
  settings:
    token:
      from_secret: telegram_token
    to: "-558454548"
    message: >
      📝 {{repo.name}} / {{commit.branch}} - {{commit.message}}
      {{#success build.status}}
      ✅ succeeded for 👷‍♂️ build {{build.number}}
      {{else}}
      🛑 failed for 👷‍♂️ build {{build.number}}
      {{/success}}
  when:
    status:
    - failure
    - success
  depends_on:
  - deploy

Note that since all the containers are launched inside the same Pod, all the services can be reached at localhost in conf_test_api.

With this pipeline we pulled the source code from GitHub, then built an API image and a front image and pushed them to our private registry. Next we launched a Mongo database along with the API to run the API’s end-to-end tests. Finally we deployed the whole solution thanks to the charts present in our repository, and we received an update through Telegram.

Kubernetes pipeline differences

Kubernetes pipelines are scheduled to execute in the same Pod and therefore share the same network. This means services are accessible at a localhost address instead of a custom hostname.

Kubernetes pipelines are scheduled by Kubernetes which provides advanced affinity options. The Kubernetes runner exposes Node Selector capabilities to the pipeline using the node_selector attribute.

Kubernetes containers automatically mount service account credentials to /var/run/secrets/kubernetes.io/serviceaccount. This may have security implications and may impact plugins that integrate with Kubernetes.

Helm v3 chart

For the deployment to work we need to create a Kubernetes service account with the roles needed to create resources like pods, deployments, services, secrets, ingresses, etc. Here we created a service account named cicd. The kube_certificate is the Kubernetes certificate authority encoded in base64. For more info check the plugin documentation.
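The service account and its permissions can be sketched with the RBAC manifests below. This is only an illustration: the cicd name and my-app-staging namespace come from the pipeline example above, but the exact resources and verbs your chart needs are an assumption you should narrow down for your own deployment.

```yaml
# rbac-cicd.yaml -- sketch only: trim the resources/verbs to what your chart actually deploys
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cicd
  namespace: my-app-staging
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cicd-deployer
  namespace: my-app-staging
rules:
- apiGroups: ["", "apps", "networking.k8s.io"]
  resources: ["pods", "deployments", "services", "secrets", "configmaps", "ingresses"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cicd-deployer
  namespace: my-app-staging
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cicd-deployer
subjects:
- kind: ServiceAccount
  name: cicd
  namespace: my-app-staging
```

Apply it with `kubectl apply -f rbac-cicd.yaml`, then feed the service account’s token and the cluster CA certificate to the kube_token and kube_ca_certificate secrets used by the deploy step.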

Extra plugins

We will not explain how to set up the Telegram plugin, but it is quite easy and a nice way to get updates about our CI/CD pipeline. Note the when field on the notification step, which allows it to run even if the pipeline fails.

That’s it for this first article. In the next one we will connect Drone with Vault to retrieve our secrets directly from Vault! If you want to know more about Vault and how to set it up on Kubernetes, don’t miss our previous article.

If you have questions or remarks you can PM me: telegram: @Zgorizzo mail: ben@cogarius.com

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.
