So in the first part we talked about what the problem was, and at the end I mentioned that we wanted to explore our chances with k8s, since we already wanted developers to get more familiar with it. And what better way to get familiar with something than using it every day?
So the real work started. I had to read more about k8s, since this time I wasn't going to use it the way I did before, just running some ambiguous commands I barely understood. I now needed to set up a whole environment with k8s, and boy, that was shocking!
I knew a little about k8s, but now I know even more: Kubernetes is configured through YAML files in which you describe your environment as services, applications, secrets and more. This was quite interesting for me, because now I needed to do the exact same thing for our tiny local development setup.
Minikube was the first thing I installed; I already had VirtualBox to run Vagrant, so that dependency was already checked. Then I started playing a bit. First, of course, I looked at some examples and explored some already-written code on GitHub, and started to connect some dots, besides actually looking at the Kubernetes code we were already using in production. Our DevOps team had written some good documentation for it, and that was my real starting point.
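If you want to follow along, getting a local cluster up looked roughly like this. A sketch, not a prescription: flag names vary between minikube versions (older releases used `--vm-driver` instead of `--driver`), and the resource sizes are just examples.

```shell
# Start a local single-node Kubernetes cluster inside a VirtualBox VM.
# On older minikube versions the flag was --vm-driver=virtualbox.
minikube start --driver=virtualbox --cpus=2 --memory=4g

# Point kubectl at the new local cluster and sanity-check it:
# the single minikube node should show as Ready.
kubectl config use-context minikube
kubectl get nodes
```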
It is worth mentioning that one of my main resources was the Kubernetes documentation. They have written excellent docs that helped me get up and running and even understand some concepts, especially the basics.
At this stage I was kind of ready to slowly start building our environment. But first, you need to know about the tech stack itself.
The tech stack
Well, we obviously have a microservices architecture, and that's why we use Kubernetes in production to orchestrate these services for us. But what is the actual tech stack?
It's a SaaS web application so our tech stack consists of:
Now I needed to write some YAML configuration files for Kubernetes to get our local development environment ready. But wait... we already had all the YAML files written for production, so I could reuse those and just tweak them a bit for the development environment.
Tweaking the configurations
It was not as easy as it seems, though. It turns out Kubernetes is designed to orchestrate production-ready pods, not development applications. But what's the difference?
Well, in a development environment what developers really care about is the ability to immediately see their code changes: with a simple page refresh if it's a web app, or just by running the same command again if it's a CLI app. But for a production app you'll have to rebuild the whole pod, and then maybe clear some cache, before you'll be able to see those changes, and that is of course time consuming.
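To make the difference concrete, here is roughly what "seeing a change" costs with a production-style pod. Illustrative commands only; the names follow the super-doper example used in this post, and `kubectl rollout restart` requires a reasonably recent kubectl.

```shell
# Every code change in a production-style setup means a full rebuild cycle:
# rebuild the image, restart the deployment, wait for the new pod to be ready.
docker build -t mrgeek/super-doper:latest .
kubectl rollout restart deployment/super-doper-nodejs-service
kubectl rollout status deployment/super-doper-nodejs-service
```

Compare that to hitting refresh in the browser, and you see why nobody wants this loop in development.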
So hot reloading is required, and to get it with Docker you must have volumes mounted from the host machine, so that whenever you change a file on your host machine, the file also changes inside the container, and hence inside the pod.
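One wrinkle worth noting: with minikube, a hostPath volume resolves inside the minikube VM, not on your laptop, so the project directory has to be mounted into the VM first. A sketch, with example paths:

```shell
# Mount a host directory into the minikube VM so that hostPath volumes can
# see it (left side: path on your machine; right side: path inside the VM).
# This command blocks while the mount is active, so run it in its own terminal.
minikube mount /home/me/projects/super-doper:/host/path/super-doper
```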
So now you understand the main issue: given this requirement, k8s is going to be a pain to configure, since by design it's not built for this.
Let's try anyway
So who cares? We knew it wouldn't be easy, so let's try anyway! Let me walk you through the whole process.
I started, of course, by writing some YAML files that define how I wanted our apps, services and secrets to communicate. Here is an example: the deployment file I wrote for one of our Node.js services.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: super-doper-nodejs-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: super-doper-nodejs-service
      group: super-doper
  template:
    metadata:
      labels:
        app: super-doper-nodejs-service
        group: super-doper
    spec:
      volumes:
        - hostPath:
            path: /host/path/super-doper/
          name: super-doper-persistent-storage
      containers:
        - name: super-doper
          image: mrgeek/super-doper:latest
          imagePullPolicy: IfNotPresent
          command:
            - /var/app/run-super-doper.sh
          volumeMounts:
            - mountPath: /var/app/super-doper
              name: super-doper-persistent-storage
          env:
            - name: LOG_LEVEL
              value: debug
            - name: PROCESSORS
              value: '2'
            - name: DOMAIN_NAME
              value: super-doper.default.svc.cluster.local
            - name: MONGO_MAIN_DB
              value: awesome-db
            - name: QUEUE_SERVER
              value: rabbitmq-svc.default.svc.cluster.local
            - name: QUEUE_USER
              value: test
            - name: QUEUE_PWD
              value: test
            - name: ENCRYPT_KEY
              valueFrom:
                secretKeyRef:
                  key: encrypt_key
                  name: common-secret
          livenessProbe:
            httpGet:
              path: /health-check
              port: 3002
              scheme: HTTP
            initialDelaySeconds: 300
            periodSeconds: 60
            timeoutSeconds: 58
```
No crazy stuff, of course. Looking at this you can deduce the approach I'm taking to start this service: first by filling in some metadata, like the deployment name `super-doper-nodejs-service`, and putting it under the group `super-doper`; then by passing some ENV variables to the pod that will later be used by our Node.js application. For example, our RabbitMQ server address is taken from the ENV variable `QUEUE_SERVER`, which has the value `rabbitmq-svc.default.svc.cluster.local`, the FQDN of our RabbitMQ service inside the Kubernetes cluster.
The important part is `volumeMounts`, as this is how we tell Kubernetes to mount the host path into the pod, so that whenever we change some files they are reflected in the pod, and a simple refresh is all that's needed.
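The mount alone doesn't restart the app when files change; the process inside the container has to watch the mounted files too. The actual contents of `run-super-doper.sh` aren't shown here, but a hypothetical version using nodemon (an assumption, not the real script) could look something like this:

```shell
#!/bin/sh
# Hypothetical sketch of a run script that enables hot reloading.
# nodemon and the server.js entry point are assumptions, not the real setup.
cd /var/app/super-doper
npm install
# --legacy-watch polls for file changes, which tends to be more reliable
# on mounted volumes than filesystem events.
npx nodemon --legacy-watch server.js
```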
And this is the service file for that deployment, since we also need to define how we are going to communicate with this Node.js app:
```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: super-doper
  name: super-doper-svc
spec:
  ports:
    - port: 3002
      protocol: TCP
  selector:
    # must match the pod labels in the deployment above,
    # otherwise the service has no endpoints
    app: super-doper-nodejs-service
```
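With both manifests in place, applying them and checking that the service actually picked up the pods looks roughly like this (the filenames are assumptions):

```shell
# Apply the deployment and the service (filenames are examples).
kubectl apply -f super-doper-deployment.yaml
kubectl apply -f super-doper-svc.yaml

# If the service's selector doesn't match the pod labels, ENDPOINTS will be
# empty here, and other services won't be able to reach this one by its FQDN.
kubectl get endpoints super-doper-svc
kubectl get pods -l group=super-doper
```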
So now the app runs, and we start experiencing the issues: hot reloading not actually working as expected, along with other problems stemming from it, mostly services not being able to communicate with each other, file changes causing a pod to crash, or even some internal configurations changing entirely.
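When pods start crashing after file changes, the usual kubectl debugging steps apply. These are generic commands, not specific to this setup; `<pod-name>` is a placeholder:

```shell
# Find the unhealthy pods in the group.
kubectl get pods -l group=super-doper

# The Events section at the bottom of describe often shows the real cause:
# liveness-probe failures, mount errors, image pull problems.
kubectl describe pod <pod-name>

# Logs of the previous (crashed) container instance, not the current one.
kubectl logs <pod-name> --previous
```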
It's worth mentioning that I tried all this a long time ago, which is why I'm not so sure about the exact issues, but that's the gist of it. I'm definitely going to try it again, though, especially now that I've fixed most of the issues using Docker Compose, as we're going to see in the next part. Maybe Kubernetes isn't the problem after all, and we can actually use it as a development cluster.