Pacman on the Cloud: how to deploy a Kubernetes application on zCloud using a fully integrated EKS distribution
- isabelabrilhante
- Apr 7
- 3 min read
Updated: Apr 8

Written by Osmar R Leão
SA @Zadara
This article is the second part of my first one (How to Install and Use an EKS Distribution on the Zadara Cloud); it shows how to deploy an application and how tightly the EKS distribution integrates with Zadara’s Cloud Orchestrator.
My first step was to find an application that needs data persistence (storage) and a load balancer. I found one that uses both and is also fun: Pacman! This app is part of a GitHub repository of Kubernetes examples: https://github.com/font/k8s-example-apps.

The repository explains how to download, build, and push your own images of this application to a Docker container registry.
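As a rough sketch of those steps (the directory name and the registry user below are my assumptions; check the repository’s README for the exact commands):

$ git clone https://github.com/font/k8s-example-apps.git
$ cd k8s-example-apps/pacman-nodejs-app
$ docker build -t <your-registry-user>/pacman-nodejs-app:latest .
$ docker push <your-registry-user>/pacman-nodejs-app:latest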
The Node.js Pacman app uses MongoDB to store the game’s high scores and user data, an internal load balancer as the MongoDB frontend, and another load balancer as the frontend of the Pacman game itself. For this example, I will use one MongoDB pod and two Pacman pods.
The first step is to create a namespace:
$ kubectl create namespace pacman
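Just as a sanity check, confirm that the namespace exists before creating resources in it:

$ kubectl get namespace pacman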
Now, create the MongoDB PVC (requesting 8Gi of storage):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
$ kubectl -n pacman apply -f mongo-pvc-new.yaml
persistentvolumeclaim/mongo-storage created
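You can confirm the claim was created; note that, depending on the StorageClass binding mode, it may stay Pending until the MongoDB pod that uses it is scheduled:

$ kubectl -n pacman get pvc mongo-storage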
The MongoDB service (load balancer):
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mongo
  name: mongo
spec:
  type: LoadBalancer
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    name: mongo
$ kubectl -n pacman apply -f ../services/mongo-service.yaml
service/mongo created
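If you want to watch the Cloud Orchestrator assign an external address to the service (press Ctrl-C to stop watching):

$ kubectl -n pacman get svc mongo --watch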
And the MongoDB deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: mongo
  name: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      name: mongo
  template:
    metadata:
      labels:
        name: mongo
    spec:
      containers:
        - image: mongo
          name: mongo
          ports:
            - name: mongo
              containerPort: 27017
          volumeMounts:
            - name: mongo-db
              mountPath: /data/db
      volumes:
        - name: mongo-db
          persistentVolumeClaim:
            claimName: mongo-storage
Note that the volume is mounted at /data/db inside the MongoDB pod (a quick check for this follows the apply command below).
$ kubectl -n pacman apply -f mongo-deployment-new.yaml
deployment.apps/mongo created
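One way to verify the mount once the rollout finishes (this assumes df is available inside the official mongo image, which is Debian-based):

$ kubectl -n pacman rollout status deployment/mongo
$ kubectl -n pacman exec deploy/mongo -- df -h /data/db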
Now, let’s check how EKS integrates with Zadara zCompute, starting with storage:
$ kubectl -n pacman get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-d3a8b520-af36-41a1-95e0-d32cc071b6a1   8Gi        RWO            Delete           Bound    pacman/mongo-storage   ebs-sc         <unset>                          5h9m
$ kubectl -n pacman describe pv pvc-d3a8b520-af36-41a1-95e0-d32cc071b6a1
Name:              pvc-d3a8b520-af36-41a1-95e0-d32cc071b6a1
Labels:            <none>
Annotations:       pv.kubernetes.io/provisioned-by: ebs.csi.aws.com
                   volume.kubernetes.io/provisioner-deletion-secret-name:
                   volume.kubernetes.io/provisioner-deletion-secret-namespace:
Finalizers:        [kubernetes.io/pv-protection external-attacher/ebs-csi-aws-com]
StorageClass:      ebs-sc
Status:            Bound
Claim:             pacman/mongo-storage
Reclaim Policy:    Delete
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          8Gi
Node Affinity:
  Required Terms:
    Term 0:        topology.ebs.csi.aws.com/zone in [symphony]
Message:
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            ebs.csi.aws.com
    FSType:            ext4
    VolumeHandle:      vol-30ec110a21c94a929be9b76988c43ffd
    ReadOnly:          false
    VolumeAttributes:  storage.kubernetes.io/csiProvisionerIdentity=1732818873883-5119-ebs.csi.aws.com
Events:                <none>
Here is the same volume shown in the zCompute GUI:

And the load balancer:
$ kubectl -n pacman get svc
NAME    TYPE           CLUSTER-IP     CLUSTER-EXTERNAL-IP
mongo   LoadBalancer   10.102.59.17   elb-ce73483f-d64b-4f9f-aac3-bd5f0e45018e.zadara.demo   27017:30376/TCP   5h22m
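Since this is the internal load balancer for MongoDB, the test below only works from a machine that can reach the VPC network and resolve the ELB hostname, and it assumes mongosh is installed:

$ mongosh "mongodb://elb-ce73483f-d64b-4f9f-aac3-bd5f0e45018e.zadara.demo:27017" --eval "db.runCommand({ ping: 1 })"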


And now, the Pacman part! First, the service (internet-facing):
apiVersion: v1
kind: Service
metadata:
  name: pacman
  labels:
    name: pacman
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  selector:
    name: pacman
The annotations section of this YAML is what tells the Zadara Cloud Orchestrator to create a public (internet-facing) load balancer.
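For comparison, leaving the annotation out (as we did for the mongo service) gives an internal load balancer; if the orchestrator follows the same AWS-style convention, the internal scheme can also be requested explicitly with a fragment like this:

metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-scheme: internal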
$ kubectl -n pacman apply -f pacman-service.yaml
service/pacman created
You can check the newly created load balancer both in Kubernetes and in Zadara’s GUI:
$ kubectl -n pacman get svc
NAME     TYPE           CLUSTER-IP     EXTERNAL-IP                                                             PORT(S)           AGE
mongo    LoadBalancer   10.102.59.17   elb-ce73483f-d64b-4f9f-aac3-bd5f0e45018e.zadara.demo                    27017:30376/TCP   8m20s
pacman   LoadBalancer   10.110.35.73   elb-3a3bfac2-7c58-46f6-87fc-0a5bdf1ebb92.elb.services.symphony.public   80:30510/TCP      2m11s

Now, the deployment of the Pacman app itself:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: pacman
  name: pacman
spec:
  replicas: 2
  selector:
    matchLabels:
      name: pacman
  template:
    metadata:
      labels:
        name: pacman
    spec:
      containers:
        - image: zosmarleao/pacman-nodejs-app:latest
          name: pacman
          ports:
            - containerPort: 8080
              name: http-server
The image lives in my account on the Docker container registry; I had already built and pushed it there.
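To apply the deployment (pacman-deployment.yaml is simply the file name I am assuming the manifest was saved as):

$ kubectl -n pacman apply -f pacman-deployment.yaml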
Checking the pods:
$ kubectl -n pacman get pods
NAME                      READY   STATUS    RESTARTS   AGE
mongo-57695f7544-5zl6n    1/1     Running   0          21h
pacman-5555f97879-47b24   1/1     Running   0          21h
pacman-5555f97879-wddjc   1/1     Running   0          21h
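With both Pacman pods running, and assuming the public ELB hostname resolves from your machine, a quick smoke test of the frontend looks like this:

$ curl -I http://elb-3a3bfac2-7c58-46f6-87fc-0a5bdf1ebb92.elb.services.symphony.public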
Each time you refresh the browser, the load balancer may switch to the other backend pod:
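One way to see which pod handled each request is to tail the logs of both Pacman pods while refreshing (this assumes the Node.js app logs incoming requests):

$ kubectl -n pacman logs -l name=pacman --prefix --follow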


And, for the grand finale, the Pacman game! Enjoy!
