Originally posted on FillIn Tech Blog
There are a number of guides across the internet that demonstrate how to set up Rails on GKE/Amazon/Azure, but few that do a proper walkthrough using a generic provider (i.e. one that doesn't require much overhead), and most kind of assume you already have knowledge of Kubernetes, which, chances are, you don't.
As an aside, all this time I've been pronouncing Kubernetes wrong, so to save others from embarrassment, it's pronounced koo-burr-NET-eez.
So the purpose of this is to give a full walkthrough on setting up Rails on Digital Ocean's Kubernetes for devs with little to no DevOps background, and to be generic enough to apply to any sort of backend, not just Rails (hopefully all in one part too!).
I won't go into too much detail here about setting up Kubernetes itself, as there's a guide when creating a cluster on Digital Ocean. All you need to continue with this guide is a connection to your cluster and a domain name (or a few).
To save your sanity, set the environment variable
KUBECONFIG to the full path of your Kubernetes config file.
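For example (the path here is an assumption; use wherever you saved the config file Digital Ocean gave you):

```shell
# Point kubectl (and later helm) at your cluster's config file.
export KUBECONFIG="$HOME/.kube/do-cluster-kubeconfig.yaml"
```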
We're going to have one cluster for both our Staging and Production environments, so there's no need to repeat this guide multiple times. You don't have to do it all on one cluster though; that's up to you.
Step 1 - Creating the Docker image
Here we'll be dockerising our Rails project so we can eventually deploy it.
The first file we need is the entrypoint file, which waits for required services to be available before proceeding.
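A minimal sketch of such an entrypoint, assuming your Postgres and Redis services are reachable at the hostnames postgres and redis (adjust to match your setup):

```sh
#!/bin/sh
set -e

# Wait for Postgres and Redis to accept connections before booting Rails.
# The hostnames are assumptions; they must match your compose/Kubernetes
# service names.
until nc -z postgres 5432; do
  echo "Waiting for Postgres..."
  sleep 1
done
until nc -z redis 6379; do
  echo "Waiting for Redis..."
  sleep 1
done

# Clear a stale server PID left over from an unclean shutdown, then hand
# control to whatever command the image was started with.
rm -f tmp/pids/server.pid
exec "$@"
```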
And of course, make this executable with
chmod +x docker-entrypoint.sh
The next file needed is the Dockerfile which defines the actual image that will eventually be pulled by Kubernetes.
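A rough sketch of the kind of Dockerfile this could be; the Ruby version and Alpine packages here are assumptions, so swap in whatever your app needs:

```dockerfile
FROM ruby:2.6-alpine

# Build and runtime dependencies; trim these to what your app actually uses.
RUN apk add --no-cache build-base postgresql-dev nodejs tzdata netcat-openbsd

WORKDIR /app

# Install gems before copying the app so this layer caches between builds.
COPY Gemfile Gemfile.lock ./
RUN bundle install --jobs 4

COPY . .

# The entrypoint created above waits for Postgres/Redis before starting.
COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]

EXPOSE 3000
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
```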
You can modify the image dependencies and base image based on what your app actually uses, but this is a pretty catch-all configuration.
The reason we're using Alpine rather than, say, the standard Ruby image is to keep it slim: the standard Ruby image comes out to around 900MB, whereas this is around 20MB.
Next is the docker compose file which defines all the services and how they're exposed to each other. This is primarily for testing and won't actually be used in the whole Kubernetes process.
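As a sketch (image tags, the password, and the URLs are assumptions to adapt):

```yaml
version: "3"
services:
  postgres:
    image: postgres:11-alpine
    environment:
      POSTGRES_PASSWORD: password   # match config/database.yml
  redis:
    image: redis:5-alpine
  setup:
    build: .
    # One-off job: provision, migrate, and seed the database.
    command: ["bundle", "exec", "rails", "db:create", "db:migrate", "db:seed"]
    environment: &app_env
      DATABASE_URL: postgres://postgres:password@postgres:5432
      REDIS_URL: redis://redis:6379
    depends_on:
      - postgres
  app:
    build: .
    environment: *app_env
    ports:
      - "3000:3000"
    depends_on:
      - postgres
      - redis
```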
The setup block will essentially provision your database without you needing to execute the command yourself; you can of course do that manually, but it's up to you. In our case we also need to seed data, so we do that after migrating. Change the
POSTGRES_PASSWORD variable to match whatever is in your app's database.yml file.
This snippet is tailored for our app, so you'll probably need to tweak it based on whatever your setup is (e.g. if you're using Unicorn or something else). Leave the URLs as they are.
Once you have all 3 files in your project's root, run
docker-compose build to verify everything runs smoothly. If all goes well you can try
docker images and see your image built. Commit and push these files, as we'll need them in the next step.
Step 2 - Uploading the image to a registry
All the guides I've seen so far upload to the public registry, which frankly is pretty bad if you're working on a private application. If you're working on an open source app then great, upload wherever you want.
We're going to be uploading to a (free) private registry so our cluster can pull from it. For this, we'll be using Codefresh, which also happens to do CI.
We're not endorsed by or affiliated with Codefresh.
Codefresh has pretty alright documentation so I won't run through that, but essentially you link your Git repo and it'll create a project based on the files that we created above. Start a build and it'll create an image that can be pulled.
Regardless of your provider, you'll need credentials for the registry in order to actually pull the image (unless you're using a public registry).
So generate a secret/key and keep it safe for the next step.
Step 3 - Helm and Let's Encrypt
The reason we're starting with Helm and Let's Encrypt first is that if something goes wrong (which it likely will), you can just recreate the cluster without losing too much progress. I didn't do this the first time around, and it got tedious setting up everything else just to find that Let's Encrypt broke.
First things first is to install Helm and Tiller. The easiest way to install Helm is through Homebrew, but there are other ways in Helm's documentation:
brew install kubernetes-helm
Once Helm is installed, we'll need to setup our cluster to support Tiller. For organisation, I recommend creating a folder to hold all the Kubernetes scripts. Ours is in our project's root, but it doesn't really matter where you put yours.
Since Digital Ocean (and most Kubernetes providers) uses Role Based Access Control, or RBAC, there are some additional steps to get Tiller working, which are documented in Helm's documentation, but we'll go over the required steps here.
First create a file called
tiller-rbac.yaml and enter the following:
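This is the standard service account and cluster-admin binding from Helm's RBAC documentation:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
```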
This creates a service account for Tiller to use, otherwise you'll end up with errors since Tiller doesn't have access to some things.
Apply this config with
kubectl apply -f tiller-rbac.yaml and then run
helm init --service-account tiller to set up Tiller in your cluster. Hopefully everything works out and we can proceed with installing cert-manager.
Next is to install cert-manager to create and manage our SSL certificates. We're going to install version 0.5.2, as anything later just caused a massive headache for me personally, and it appears I'm not the only one.
helm install --name cert-manager --namespace kube-system stable/cert-manager --version v0.5.2
You may have to run
helm repo update.
If you run into
Error: could not find a ready tiller pod, wait a bit and run it again. If it keeps happening, just keep retrying.
For me it was super frustrating, but it eventually worked after a couple of retries. Thankfully we only need to do this once, and now you know why I opted to do this step first rather than later.
Once cert-manager is finally installed, we create our issuers, which define how cert-manager requests certificates.
Create a file called
production-issuer.yaml. You can create a staging issuer if you want (don't confuse this with your app's environment; this relates to Let's Encrypt's environments), but honestly it just adds extra complications.
Change the email to your own, or whatever email you want. At this point you can decide whether you want to do HTTP validation or DNS validation, depending on what's easier for you, but that's out of scope here.
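If you go with HTTP validation, a minimal ClusterIssuer for cert-manager 0.5 might look like this (the issuer name and email are placeholders):

```yaml
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    # Let's Encrypt's production ACME endpoint.
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com   # change this
    privateKeySecretRef:
      name: letsencrypt-production
    # Solve challenges over HTTP via the ingress controller.
    http01: {}
```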
And then apply it with
kubectl apply -f production-issuer.yaml, again if you run into tiller issues, just keep retrying.
Step 4 - Setting up Nginx and the namespaces
From here on out it should be smooth sailing as all we're doing is creating manifests for Kubernetes and executing them
We'll start with the Nginx ingress controller first, which will create a load balancer on Digital Ocean and will expose your cluster to the world.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml
This will create a load balancer and assign your cluster a public IP in about 4 minutes. Create any DNS records and point them to this IP; we'll be handling the routing ourselves.
Once Nginx is set up, all that's left is to set up our app. We're going to set up two namespaces, which are sort of like clusters within a cluster.
kubectl create namespace staging
kubectl create namespace production
You can switch between namespaces with
kubectl config set-context <cluster-name> --namespace <namespace>
Start with whichever env you want; the steps are exactly the same.
First up is to create our secrets:
kubectl create secret generic db-user --from-literal=username=<Postgres username> --from-literal=password=<Postgres Password>
kubectl create secret generic rails-key --from-literal=key=<Rails master key>
kubectl create secret generic environment --from-literal=env=<Rails env>
kubectl create secret docker-registry regcred --docker-server=r.cfcr.io --docker-username=<Username> --docker-password=<Secret from registry> --docker-email=<Email>
Quick rundown of these commands:
- Create env variables containing the postgres password and username
- Since we're on Rails 5.2 and using credentials, we need to pass the master key as an env variable.
- Set the rails env (staging, production, etc)
- Set the docker login details for your private registry
Do this for all your environments, and change where necessary.
Step 5 - Setup our services
We're going to create a bunch of files first and then apply them at the end. You can split each of these up into individual files if you want.
Update the image variable to the location of your image on the registry. If you're using Codefresh, it'll be hosted on r.cfcr.io, the same server used for the regcred secret above.
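As a rough sketch, a minimal postgres.yaml might look like this; redis.yaml and setup.yaml follow the same Deployment/Service pattern. The db-user secret comes from the commands above, but the image tag and names are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:11-alpine
          ports:
            - containerPort: 5432
          # Pull the database credentials from the db-user secret.
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: db-user
                  key: username
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-user
                  key: password
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres
  ports:
    - port: 5432
```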
Run all of these
kubectl apply -f postgres.yaml,redis.yaml,setup.yaml
If everything goes well, the output of
kubectl get pods should have postgres and redis as running, and the setup job as completed.
Step 6 - Setting up Rails
Again, create another file for Rails:
We're mapping port 80 externally to port 3000 internally, otherwise you'll be getting 404s.
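A sketch of that Rails manifest, wiring in the regcred, rails-key, and environment secrets created earlier (the image path and replica count are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rails
spec:
  replicas: 2
  selector:
    matchLabels:
      app: rails
  template:
    metadata:
      labels:
        app: rails
    spec:
      # Credentials for the private registry, from the regcred secret.
      imagePullSecrets:
        - name: regcred
      containers:
        - name: rails
          image: r.cfcr.io/<username>/<repository>:latest
          ports:
            - containerPort: 3000
          env:
            - name: RAILS_ENV
              valueFrom:
                secretKeyRef:
                  name: environment
                  key: env
            - name: RAILS_MASTER_KEY
              valueFrom:
                secretKeyRef:
                  name: rails-key
                  key: key
---
apiVersion: v1
kind: Service
metadata:
  name: rails
spec:
  selector:
    app: rails
  # Expose port 80 and forward to the Rails server on 3000.
  ports:
    - port: 80
      targetPort: 3000
```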
Apply this file too and verify it's all working with
kubectl get pods.
If you want to debug or see the output of a pod use
kubectl describe pod <pod name>
kubectl logs <pod name>
Step 7 - Setting up the Ingresses
Almost done! All that's left is to set up the Ingresses, which tell Nginx how to route to your app. For this one you'll need a file for each env, as they'll route differently of course.
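A rough sketch of the production one, assuming the letsencrypt-production issuer from earlier and a placeholder example.com domain:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rails-ingress
  namespace: production
  annotations:
    kubernetes.io/ingress.class: nginx
    # Tells cert-manager which issuer to use for the TLS certificate.
    certmanager.k8s.io/cluster-issuer: letsencrypt-production
spec:
  tls:
    - hosts:
        - example.com
      secretName: example-com-tls
  rules:
    - host: example.com
      http:
        paths:
          - backend:
              serviceName: rails
              servicePort: 80
```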
Change your domain to your actual domain and the namespace to whatever you setup originally, apply this file and you're done!
Cert-manager takes a bit of time to request certificates; sometimes it'll happen quickly, other times it'll error out a few times, but eventually it'll come around and fix itself.
But eventually, you'll be able to curl your domain and hopefully see it route successfully to the Rails pod as well as see requests come through in the pod's logs.
This guide is probably not one-size-fits-all, and there will have to be some tweaking in some cases, but it's a good starting point that covers the Rails-specific steps without leaning too much on one provider like GKE.
I purposely didn't go on to explain the inner workings of Kubernetes or what a pod, deployment, node, etc. is, as the official documentation does a much better job than my fragmented understanding would.
From here, you could setup cool stuff like (auto) scaling, rolling updates, etc but that's beyond the scope of this guide and personally, I have just barely scratched the surface of Kubernetes!
Let us know how you go with setting up Rails on Kubernetes; you can tweet at me on Twitter @bidluo!