Originally posted on FillIn Tech Blog

There are a number of guides across the internet that demonstrate how to set up Rails on GKE/Amazon/Azure, but few do a proper walkthrough with a generic provider (i.e. one that doesn't require much overhead), and most assume you already have some knowledge of Kubernetes, which chances are, you don't.

As an aside, all this time I've been pronouncing Kubernetes wrong, so to save others from embarrassment: it's pronounced koo-burr-NET-eez.

So the purpose of this guide is to give a full walkthrough of setting up Rails on Digital Ocean's Kubernetes for devs with little to no DevOps background, and to be generic enough to apply to any sort of backend, not just Rails (hopefully all in one part too!)

Prerequisites

I won't go into too much detail about setting up Kubernetes itself, as there's a guide when creating a cluster on Digital Ocean. All you need to continue with this guide is a connection to your cluster and a domain name (or a few).

To save your sanity, set the environment variable KUBECONFIG to the full path of your Kubernetes config file.
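For example (this path is just an assumption; use wherever you saved the config file Digital Ocean gave you):

```shell
# Point kubectl at your cluster for this shell session.
# The filename below is an example; substitute the path to your own config.
export KUBECONFIG="$HOME/.kube/do-cluster-kubeconfig.yaml"
```

You can confirm it took effect with kubectl config current-context.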

We're going to have one cluster for both our Staging and Production environments, so there's no need to repeat this guide multiple times. You don't have to do it all on one cluster; that's up to you.

Step 1 - Creating the Docker image

Here we'll be dockerising our Rails project so we can eventually deploy it.

The first file we need is the entrypoint file, which waits for required services to be available before proceeding.

#!/bin/sh

set -e

if [ -f tmp/pids/server.pid ]; then
  rm tmp/pids/server.pid
fi

echo "Waiting for Postgres to start..."
while ! nc -z postgres 5432; do sleep 0.1; done
echo "Postgres is up"

echo "Waiting for Redis to start..."
while ! nc -z redis 6379; do sleep 0.1; done
echo "Redis is up - executing command"

exec bundle exec "$@"
docker-entrypoint.sh

And of course, make this executable with

chmod +x docker-entrypoint.sh

Dockerfile

The next file needed is the Dockerfile which defines the actual image that will eventually be pulled by Kubernetes.

You can modify the image dependencies and base image based on what your app actually uses, but this is a pretty catch-all configuration.
The reason we're using Alpine rather than, say, the standard Ruby image is to keep it slim: the standard Ruby image comes out to around 900MB, whereas this one is around 20MB.

FROM ruby:2.7-alpine

RUN apk --update add netcat-openbsd postgresql-dev sqlite-dev
RUN apk --update add --virtual build-dependencies make g++

RUN mkdir /app

WORKDIR /app

COPY Gemfile /app/Gemfile
COPY Gemfile.lock /app/Gemfile.lock

RUN bundle install
RUN apk del build-dependencies && rm -rf /var/cache/apk/*

COPY . /app

COPY docker-entrypoint.sh /usr/local/bin

ENTRYPOINT ["docker-entrypoint.sh"]
Dockerfile

docker-compose.yml

Next is the docker-compose file, which defines all the services and how they're exposed to each other. This is primarily for local testing and won't actually be used in the Kubernetes process itself.

The setup service will essentially provision your database without you needing to execute the commands yourself; you can of course do that manually, but that's up to you. In our case we also need to seed data, so we do that after migrating.
Update the POSTGRES_USER and POSTGRES_PASSWORD variables to match whatever is in your app's database.yml file.

This snippet is tailored to our app, so you'll probably need to tweak it based on your setup (e.g. if you're using Unicorn or something else). Leave the URLs as they are.

version: '3'
services:
  setup:
    build: .
    depends_on:
      - postgres
    environment:
      - REDIS_PORT=6379
      - DATABASE_URL=postgres
      - DATABASE_PORT=5432
      - RAILS_ENV=development
    command: "bin/rails db:create db:migrate db:seed"
  postgres:
    image: postgres:12.1-alpine
    volumes:
      - /var/lib/postgresql/data
    environment:
      - POSTGRES_USER={POSTGRES_USER}
      - POSTGRES_PASSWORD={POSTGRES_PASSWORD}
      - PGDATA=/var/lib/postgresql/data
  web:
    build: .
    command: "bin/bundle exec puma -C config/puma.rb"
    volumes:
      - .:/app
    ports:
      - "3000:3000"
    depends_on:
      - postgres
  redis:
    image: redis:alpine
    ports:
      - "6379:6379"
docker-compose.yml
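Note that the environment variables above only matter if your app's config/database.yml actually reads them. As a rough sketch (the exact keys depend on your app; here DATABASE_URL carries the database host name, matching the compose file above):

```yaml
# config/database.yml (sketch) -- host and credentials come from the environment,
# so the same file works under docker-compose and, later, in the cluster.
default: &default
  adapter: postgresql
  host: <%= ENV.fetch("DATABASE_URL", "localhost") %>
  username: <%= ENV["DATABASE_USER"] %>
  password: <%= ENV["DATABASE_PASSWORD"] %>
  pool: 5

development:
  <<: *default
```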

Once you have all 3 files in your project's root, run docker-compose build to verify everything builds smoothly. If all goes well, you can run docker images and see your image. Commit and push these files, as we'll need them in the next step.

Step 2 - Uploading the image to a registry

All the guides I've seen so far upload to the public registry, which frankly is pretty bad if you're working on a private application. If you're working on an open source app, then great, upload wherever you want.

We're going to upload to a (free) private registry so our cluster can pull from it. For this, we'll be using Codefresh, which also happens to do CI.

We're not endorsed by or affiliated with Codefresh.

Codefresh has pretty alright documentation so I won't run through that, but essentially you link your Git repo and it'll create a project based on the files that we created above. Start a build and it'll create an image that can be pulled.

Regardless of your provider, you'll need credentials for the registry in order to actually pull the image (unless you're using a public registry).
So generate a secret/key and keep it safe for the next step.
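Before moving on, it's worth sanity-checking that the credentials actually work from your machine. Assuming a Codefresh-style registry (substitute your own registry host and details):

```shell
# Log in with the generated secret, then try pulling your image manually.
docker login r.cfcr.io -u <username> -p <registry secret>
docker pull r.cfcr.io/<username>/<username>/<project-name>:master
```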

Step 3 - Helm and Let's Encrypt

The reason we're starting with Helm and Let's Encrypt first is that if something goes wrong (which it likely will), you can just recreate the cluster without losing too much progress. I didn't do this the first time around, and it got tedious setting up everything else just to find that Let's Encrypt broke.

First things first is to install Helm and Tiller. The easiest way to install Helm is through Homebrew, but there are other ways in Helm's documentation:

brew install kubernetes-helm

Once Helm is installed, we'll need to set up our cluster to support Tiller. For organisation, I recommend creating a folder to hold all the Kubernetes manifests. Ours is in our project's root, but it doesn't really matter where you put yours.

Since Digital Ocean (and most Kubernetes providers) use Role-Based Access Control (RBAC), there are some additional steps to get Tiller working, which are documented here, but we'll go over the required ones below.

First create a file called tiller-rbac.yaml and enter the following:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
tiller-rbac.yaml

This creates a service account for Tiller to use, otherwise you'll end up with errors since Tiller doesn't have access to some things.

Apply this config with kubectl apply -f tiller-rbac.yaml, then run
helm init --service-account tiller to set up Tiller in your cluster. Hopefully everything works out and we can proceed with installing cert-manager.

Next is to install cert-manager, which will create and manage our SSL certificates. We're going to install 0.5.2, as anything later just caused a massive headache for me personally, and it appears I'm not the only one.

helm install --name cert-manager --namespace kube-system stable/cert-manager --version v0.5.2

You may have to run helm repo update.

If you run into Error: could not find a ready tiller pod, wait a bit and run it again. If it keeps happening, just keep retrying.
For me it was super frustrating, but it eventually worked after a couple of retries. Thankfully we only need to do this once, and now you know why I opted to do this step first rather than later.

Once cert-manager is finally installed, we create our issuers, which define how cert-manager requests certificates.

Create a file called production-issuer.yaml. You can create a staging issuer if you want (don't confuse this with your app's environment; this relates to Let's Encrypt's environments), but it honestly just adds extra complications.

apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: EMAIL
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    http01: {}
production-issuer.yaml

Change EMAIL to your email, or whatever email you want. At this point, you can decide whether you want to do HTTP validation or DNS validation depending on what's easier for you, but that's out of scope here.

Then apply it with kubectl apply -f production-issuer.yaml; again, if you run into Tiller issues, just keep retrying.

Step 4 - Setting up Nginx and the namespaces

From here on out it should be smooth sailing, as all we're doing is creating manifests for Kubernetes and applying them.

We'll start with the Nginx ingress controller first, which will create a load balancer on Digital Ocean and will expose your cluster to the world.

Simply run:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
and
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml

This will create a load balancer and assign your cluster a public IP in about 4 minutes. Create any DNS records you need and point them to this IP; we'll be handling the routing ourselves.
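You can watch for the public IP with something like this (the service and namespace names come from the manifests applied above):

```shell
# EXTERNAL-IP shows <pending> until Digital Ocean finishes provisioning.
kubectl get service --namespace ingress-nginx ingress-nginx --watch
```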

Once Nginx is set up, all that's left is to set up our app. We're going to create two namespaces, which are sort of like clusters within a cluster.

kubectl create namespace staging
kubectl create namespace production

You can switch between namespaces with kubectl config set-context <context name> --namespace <namespace> (the context name is listed by kubectl config get-contexts)

Start on whichever env you want; the steps are exactly the same.
First up is to create our secrets:

kubectl create secret generic db-user --from-literal=username=<Postgres username> --from-literal=password=<Postgres Password>

kubectl create secret generic rails-key --from-literal=key=<Rails master key>

kubectl create secret generic environment --from-literal=env=<Rails env>

kubectl create secret docker-registry regcred --docker-server=r.cfcr.io --docker-username=<Username> --docker-password=<Secret from registry> --docker-email=<Email>

Quick rundown of these commands:

  • Create env variables containing the Postgres username and password
  • Since we're on Rails 5.2 and using credentials, we need to pass the master key as an env variable.
  • Set the rails env (staging, production, etc)
  • Set the docker login details for your private registry

Do this for all your environments, and change where necessary.

Step 5 - Setup our services

We're going to create a bunch of files first, and then apply them at the end. You can split each of these up into individual files if you want.

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      name: postgres
      labels:
        app: postgres
    spec:
      volumes:
      - name: postgres-pv
        persistentVolumeClaim:
          claimName: postgres-pvc
      containers:
      - name: postgres
        image: postgres:10.3-alpine
        env:
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: "db-user"
              key: "username"
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: "db-user"
              key: "password"
        - name: PGDATA
          value: "/var/lib/postgresql/data/pgdata"
        ports:
        - containerPort: 5432
        volumeMounts:
        - mountPath: "/var/lib/postgresql/data"
          name: postgres-pv
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres
  ports:
    - port: 5432
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv
  labels:
    type: local
spec:
  capacity:
    storage: 4Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
postgres.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: redis-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      name: redis
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:3.2-alpine
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: redis
  ports:
    - port: 6379
redis.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: setup
spec:
  template:
    metadata:
      name: setup
    spec:
      containers:
      - name: setup
        image: <image path>
        args: ["rails db:create db:migrate db:seed:categories"]
        env:
        - name: DATABASE_USER
          valueFrom:
            secretKeyRef:
              name: "db-user"
              key: "username"
        - name: DATABASE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: "db-user"
              key: "password"
        - name: RAILS_MASTER_KEY
          valueFrom:
            secretKeyRef:
              name: "rails-key"
              key: "key"
        - name: RAILS_ENV
          valueFrom:
            secretKeyRef:
              name: "environment"
              key: "env"
      imagePullSecrets:
      - name: regcred
      restartPolicy: Never
setup.yaml

Update the image field to the location of your image on the registry. If you're using Codefresh, it'll look something like r.cfcr.io/username/username/project-name:master

Apply all of these with kubectl apply -f postgres.yaml,redis.yaml,setup.yaml

If everything goes well, the output of kubectl get pods should have postgres and redis as running, and the setup job as completed.

Step 6 - Setting up Rails

Again, create another file for Rails:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: rails-deployment
spec:
  replicas: 1
  minReadySeconds: 5
  selector:
    matchLabels:
      app: rails
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      name: rails
      labels:
        app: rails
    spec:
      imagePullSecrets:
      - name: regcred
      containers:
      - name: rails
        image: <image path>
        args: ["puma -C config/puma.rb"]
        env:
        - name: RAILS_LOG_TO_STDOUT
          value: "true"
        - name: DATABASE_USER
          valueFrom:
            secretKeyRef:
              name: "db-user"
              key: "username"
        - name: DATABASE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: "db-user"
              key: "password"
        - name: RAILS_MASTER_KEY
          valueFrom:
            secretKeyRef:
              name: "rails-key"
              key: "key"
        - name: RAILS_ENV
          valueFrom:
            secretKeyRef:
              name: "environment"
              key: "env"
        ports:
          - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: rails
spec:
  selector:
    app: rails
  ports:
    - port: 80
      targetPort: 3000
rails.yaml

We're mapping port 80 externally to port 3000 internally; otherwise you'll be getting 404s.

Apply this file too and verify it's all working with kubectl get pods.

If you want to debug or see the output of a pod, use

kubectl describe pod <pod name>
kubectl logs <pod name>

Step 7 - Setting up the Ingresses

Almost done! All that's left is to set up the Ingresses, which tell Nginx how to route to your app. For this one you'll need a file for each env, as they'll route differently, of course.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: production
  namespace: production
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - <Domain>
    secretName: letsencrypt-prod
  rules:
  - host: <Domain>
    http:
      paths:
      - backend:
          serviceName: rails
          servicePort: 80
        path: /
production-ingress.yaml

Change <Domain> to your actual domain and the namespace to whatever you set up originally, apply this file, and you're done!
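The staging file is the same shape; only the names, namespace and host change. A sketch, assuming your staging namespace is called staging and you have a separate staging domain:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: staging
  namespace: staging
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - <Staging domain>
    secretName: letsencrypt-prod
  rules:
  - host: <Staging domain>
    http:
      paths:
      - backend:
          serviceName: rails
          servicePort: 80
        path: /
```
staging-ingress.yaml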

cert-manager takes a bit of time to request certificates; sometimes it'll happen quickly, other times it'll error out a few times, but eventually it'll come around and fix itself.
Once it does, you'll be able to curl your domain and hopefully see it route successfully to the Rails pod, as well as see requests come through in the pod's logs.

Conclusion

This guide is probably not one-size-fits-all, and there will have to be some tweaking in some cases, but it's a good starting point that covers the Rails-specific steps and doesn't lean too much on one provider like GKE.
I purposely didn't go on to explain the inner workings of Kubernetes or what a pod, deployment, node, etc. is, as the official documentation will do a much better job than my fragmented understanding would.

From here, you could set up cool stuff like (auto)scaling, rolling updates, etc., but that's beyond the scope of this guide and, personally, I have just barely scratched the surface of Kubernetes!

Let us know how you go with setting up Rails on Kubernetes; you can tweet at me on Twitter @bidluo!