If you’ve ever run “kubectl apply” on a Deployment resource like the one above, you know that Kubernetes takes care of deploying the containers inside Pods, scheduling them onto Nodes in your cluster. And you know that if you try to manually delete the Pods, the Kubernetes control plane will bring them back up: it still knows your intent, that you want three copies of your “helloworld” container. The job of Kubernetes is to reconcile your intent with the running state of its resources, not just once, but continuously.
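A Deployment like the one referenced above might look like the following minimal sketch (the `helloworld` name, labels, and image path are illustrative, not from a real project):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
spec:
  replicas: 3              # the declared intent: three copies
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: helloworld
        # hypothetical image path for illustration
        image: gcr.io/example-project/helloworld:latest
```

Delete one of the three Pods with `kubectl delete pod`, and the Deployment’s controller notices the drift between declared and actual state and creates a replacement.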
So how does this relate to platforms, and to the other products in that diagram? Because deploying and scaling containers is only the beginning of what the Kubernetes control plane can do. While Kubernetes has a core set of APIs, it is also extensible, allowing developers and providers to build Kubernetes controllers for their own resources, even resources that live outside the cluster. In fact, nearly every Google Cloud product in the diagram above, from Cloud SQL to IAM to Firewall Rules, can be managed with Kubernetes-style YAML. This lets organizations manage those different platform pieces with one configuration language and one reconciliation engine. And because KRM is based on OpenAPI, platform teams can abstract KRM away from developers, building tools and UIs on top.
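For example, with Google Cloud’s Config Connector, a Cloud SQL instance can be declared as a Kubernetes resource and reconciled just like a Deployment. This is a sketch; the instance name and settings are illustrative, and the exact fields depend on your Config Connector version:

```yaml
apiVersion: sql.cnrm.cloud.google.com/v1beta1
kind: SQLInstance
metadata:
  name: my-sql-instance        # illustrative name
spec:
  databaseVersion: POSTGRES_13
  region: us-central1
  settings:
    tier: db-f1-micro          # illustrative machine tier
```

Apply this with `kubectl apply`, and the Config Connector controller creates (and continuously reconciles) the corresponding Cloud SQL instance in your project.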
Further, because KRM is typically expressed in YAML files, users can store their KRM in Git and sync it down to multiple clusters at once, allowing for easy scaling, as well as repeatability, reliability, and increased control. With KRM tools, you can make sure that your security policies are always present on your clusters, even if someone manually deletes them from a cluster.
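Tools like Config Sync implement this Git-to-cluster sync. A RootSync resource pointing a cluster at a shared policy repository might look like this sketch (the repository URL and directory are hypothetical):

```yaml
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: root-sync
  namespace: config-management-system
spec:
  sourceFormat: unstructured
  git:
    # hypothetical repo holding the clusters' KRM policies
    repo: https://github.com/example-org/policy-repo
    branch: main
    dir: clusters/prod
    auth: none
```

If someone deletes a synced policy from the cluster, the sync controller detects the drift on its next reconcile and restores the resource from Git.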
In short, Kubernetes is not just the “compute” block in a platform diagram: it can also be the powerful declarative control plane that manages large swaths of your platform. Ultimately, KRM can get you several big steps closer to a developer platform that helps you deliver software quickly and securely.
The rest of this series will use concrete examples, with accompanying demos, to show you how to build a platform with the Kubernetes Resource Model. Head over to the GitHub repository to follow Part 1 – Setup, which will spin up a sample GKE environment in your Google Cloud project.
And stay tuned for Part 2, where we’ll dive into how the Kubernetes Resource Model works.