Kubernetes without YAML - InfoQ #
Summary #
David Flanagan discusses using programming languages to describe Kubernetes resources, sharing constructs to deploy Kubernetes resources, and making Kubernetes resources testable and policy-driven.
Bio #
David Flanagan is the founder of the Rawkode Academy and an open source contributor. David has been developing software professionally for nearly 20 years, starting with embedded systems written in C, and has spent that entire time learning the paradigms of different programming languages, including C++, PHP, Java, and Haskell - though more recently he prefers to work with Go, Rust, and Pony.
About the conference #
Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.
Transcript #
Flanagan: I just want to know what resources you would consider mandatory before you ship an application to production on Kubernetes. From the Kubernetes resource model, the first one is a deployment.
Obviously, we need a deployment, we need a service. We've got a little bit of RBAC sprinkled in there. Someone's mentioned some service mesh, networking policies, load balancers, Argo Rollouts, observability stacks, and so forth. The point here is that we try and pretend that you can just write a deployment YAML in 20 lines and get something running - and you can. Actually, the deployment landscape and the complexity of deploying to Kubernetes, especially in production, is very vast. This is not slowing down. We have a lot of operators and custom resources that we need to create next to all these more standard core APIs as well.
Now, let's understand the tools that you're using. How do you deploy to Kubernetes at the moment? Option 12. Pretty vanilla, then. Everyone's just writing straight up YAML. Using Helm, I'm assuming, for third-party dependencies. Perhaps you've written general purpose charts you don't use. That's perfectly all right. Forty-five percent of you hate yourselves and use Terraform. I'm going to touch on Terraform a little bit and the tooling. It's getting better, but there are definitely some huge challenges there. Kustomize, I would like to see a bit higher, just because it's now baked into kubectl. While it's a little bit further back version-wise, you still get some pretty strong, powerful things out of it. That's cool. Then Jsonnet, I don't see much of that in the world anymore. We have no Pulumi, no Carvel, no CUE, no CDK8s, no Kapitan, and no Kpt, which is pretty standard. I never see anybody using these tools, but they are some of the killer tools on this list.
Rate your experience of happiness of deploying to Kubernetes? That's quite a split. More angry faces, which is good. Some happy faces and rockets, which I don't believe you. Then the pear emoji that I expected to be a bit higher.
Background #
My name is David. I'm a Kubernetes educator and consultant. I work with teams to help them build Kubernetes platforms, and to enable their developers to deploy and ship faster. I put masochist on the slide just because I knew that someone was going to review this and have no context. Plus, people will read this online and have absolutely no context either. I'm not really a masochist, but I do something pretty stupid that I want to share with you on my YouTube channel, which is the Rawkode Academy. I do have a show on my channel called Klustered. It's the single best and worst idea I've ever had. It started off where I would spin up two bare metal Kubernetes clusters, give them to random people on the internet, and tell them to break it. I would then go on to my live stream and try to fix the cluster and get it working again. I enjoy fixing clusters live on the stream, it keeps me humble. I also enjoy watching other people try this too. This was a team episode with people from Red Hat and people from Talos. They're very smart people. They've been in the Kubernetes space for a long time, and even worked on their own Linux based operating system to help make Kubernetes easier.
To give you a taste of it, here is a very short clip. Before I hit play, let's just get the context here. They've exported their kubeconfig. They've tried to run kubectl get nodes, and they get permission denied. We all know how to fix that. You go down a couple more lines, they tried to chmod the kubectl binary, and the executable bit on chmod is gone. Who knows how to fix this? That's all right. With Kubernetes you don't learn this stuff until you have to learn it through trial and tribulations. You throw some Linux experts onto it and you find some cool ways. You can actually execute any binary via the dynamic linker, the ld-linux.so file. Really cool hack. This hack was actually a little bit more sneaky, because they then tried to run ls on bin. If you understand how colors work on ls, you can see they removed the executable bit from kubectl, kubeadm, oc, scp, chattr, chmod, and so forth. They really went to town, and this was done in the first 30 seconds of this cluster. It was good fun. I like doing stuff like this. Check it out if you want to learn more about Kubernetes and cloud native on my channel.
What Does a Kubernetes Deployment Look Like? #
What does a Kubernetes deployment look like in the real world? Again, I'm focusing on production here, not development, local, staging, and so forth. You probably need most of these resources. We don't see them all listed. We have a deployment, we have a service, we have our ConfigMap and secrets, we have the Horizontal Pod Autoscaler, the pod disruption budget, pod monitors, networking policies, and so forth. At the smallest amount of YAML I could write, we got about 120 lines, which looks like this. That's as good as it's going to get. It's not actually that important. This is not even comprehensive in terms of what I would want to deploy something to production. We need much more. The chances are, we need to create and provision namespaces. We definitely need service accounts, because we're not going to use the default injection. We've got roles and role bindings. Our applications usually have state; if there's no state, they're probably not making any money. You need databases, you need queues, you need caches. People aren't applying seccomp profiles to their applications - why not? This is more stuff that we need to add to our applications. Then we've got LimitRanges, ingresses, and so forth. Even at this, this is still not comprehensive. There are a lot of things that we need to do to deploy to Kubernetes.
Is this a problem? We don't deploy monolithic applications to Kubernetes. You could, but you probably don't get many of the tangible benefits of deploying a monolith to a Kubernetes cluster, especially with the operational complexity of running a Kubernetes cluster. We have to deploy our microservices to Kubernetes. We have to take all of these resources, and we copy and paste the YAML. Then we do it again and again. Then if you want to make a change in one application, or one service, and it's a best practice we want to apply across all of our applications, we have to start looking at more tooling, because straight up YAML isn't going to cut it anymore. Which is why, when we asked about tools, we're seeing Helm so high up there, because Helm does provide a nice way of handling some of this complexity. It's not without its own challenges too.
What do we want? What do we need from our tooling to be able to tame this complexity of deploying to Kubernetes? These are the things that I think we need. We don't want to repeat ourselves, DRY. Something that we do with our code is that if we can find a way to make something reusable, then we should. I've also added shareable here, because it might be that we want to expose these things to other people, other teams, other organizations, other divisions, even publicly, by providing best practices and libraries for others to consume. While we want opinionated libraries or configurations for deploying to Kubernetes, they have to be composable. You have to be able to opt in to the features that you're ready for. Maybe you're not ready to jump into the service mesh and do loads of networking policies and seccomp profiles. It doesn't mean that you're not going to want to come back to that at some point in the future. Working with our YAML or our Kubernetes resources, we want documentation. Anyone know how to get the documentation or understand the spec of a custom resource definition without going to the code? It's my favorite trick. I don't know why the other commands exist. Kubectl explain is the best command in the world. This works on any Kubernetes cluster. Anyone got a favorite resource? No. Ingress. We can say that we want to look at the spec.
Now we see all the fields and the documentation, we can see that we need a default backend, or we can have a default backend, ingressClassNames, rules, and so forth. This works for any degree of nesting, so you can then continue to add and work your way down the tree. Fantastic little trick. Documentation is not great when you're writing YAML. You'd want to be able to understand what the spec of a resource looked like. The documentation, the LSPs, they're really not where they need to be to make this easier and provide a strong, enjoyable experience. Testable. Anyone here testing their Kubernetes YAML? No. There are tools for it, but we're just not doing it either.
DRY - Can I, Should I? #
We're going to play a game of can I, should I? We want to DRY with YAML, so what do we go for? Anyone familiar with YAML anchors? They look like this. You can name standard parts within a YAML document and reference them in other parts. It does actually allow us to clean up some of the more annoying duplication within our YAML, especially in our deployment spec where we have label selectors, and so forth. It only works within a single YAML document, though. Even if you have one file with multiple documents, you can't have a nice bit of shared stuff at the top and reference it all the way down. It's not really where it needs to be for us to be able to do that. Plus, it just doesn't look that nice and it's difficult to understand. This is why we then have Kustomize. This provides multiple ways for us not to repeat ourselves, using overlays, patches, remote includes, and they've even got a small function library that allows you to do some stuff as well. However, Kustomize is a good first step. If you are just shipping Kubernetes YAML, go for it. Enjoy taking some of those benefits. We just have to remember that there are better tools with a better, consistent developer experience. Kustomize solves some challenges, but it's not really solving all of the problems that we have when working with YAML. Can I, should I? Yes, do use Kustomize. Hopefully by the end, you'll see that there are better ways as well.
Shareable - Can I, Should I? #
If we want to make things shareable, we use Helm. No, that's a lie. I don't want to say bad things about Helm. Helm is great. I love that we can go out and get any Helm chart for any third-party piece of software and deploy it. Helm is not without many challenges. I'm not a fan of Go's template language, which is what we're looking at here. I think working with YAML is very painful, because we have to worry about white space, which is why we have these little dashes next to the braces. If you get them wrong, the indentation is all wrong.
Then your YAML doesn't validate, conform, or even apply. We then have these magical context variables with the dots, so you have to throw an [inaudible 00:14:49]. While it was true five years ago that everyone working with Kubernetes was probably a Go developer, that's not true anymore, yet we're forcing this on people who have only ever written YAML. We then have the ability to include files. Then we have to make sure we indent them correctly, and they also use the magic context dot. We can then print blank lines, which you see in many Helm charts. I don't know why this exists, but it's painful. It's just not pleasant for people to work with. The only way to see what the output is, is to run a helm template or a helm install. Then there are even more challenges with Helm, where you've got the CRD conundrum. Do we have Helm install the CRDs? Do we install them ourselves? How do they get managed from that point forward? I'm not going to say anything bad about Helm, but I'm also not going to say too much that's good.
The main problem with Helm is the values file. This is our point of customization. This is where we tweak the chart to make it do the things that we want it to do. The problem is, there are no strongly opinionated Helm charts out there. I can't cast that wide a net and say there are none, but there are very few. The problem is that these are all very general purpose, with loads of configuration options to handle every single edge case that every other person, developer, and team has, to the point where, when you look at a Helm chart in the Bitnami repositories and so forth, every single line of YAML is wrapped in either a range or a conditional, because it may or may not be populated by the default values, or the custom values from the developer. It's extremely painful to work with. That's not a real number of lines and values of YAML, but I wouldn't be surprised. We've heard about your two-and-a-half gig YAML file, so who knows? Can I, should I? Don't get me wrong, you should definitely be using Helm. It's still one of the best things that we've got if you want to go and deploy a third-party piece of software, like Redis, Postgres, CockroachDB, and so forth. Those charts exist, use them. Smart people work on them. It may not be what you need longer term for your own software, for your own deployments.
Composability - Can I, Should I? #
Composability. We can do a little bit of this in Kustomize, but we do have to remember that this is nothing but copy and paste. We don't actually have true composability. All we're doing is saying, take these bits of YAML, this little snippet, and put it in this bit of this file, do a JSON patch, and then don't touch it anymore. We can't change it. We can't do anything else unless we apply another patch on top of it. If you've ever had to work with Git merges and conflicts, you probably don't enjoy working with JSON patches anyway. This is 100% a you can, but you definitely shouldn't. If you're looking at Kustomize as a way to provide composability, it's going to be very painful.
Documented - Can I, Should I? #
From a documented point of view, there really is nothing here whatsoever, except for dropping down to the terminal and using kubectl explain. I'll caveat that: there is a VS Code extension for Kubernetes that most people deploying to Kubernetes probably have installed. What you don't realize is that it isn't magically understanding the custom resources that are in your cluster, it's going out to your cluster. If your VS Code on your local machine can speak to your production Kubernetes cluster, that's not exactly an ideal situation. Sure, you could point it to dev, and it will give you some type hinting and a decent LSP implementation where you can tab complete your resources. I just don't think that we're going the right route with this. It's also horrendously slow. It has to continually speak to the cluster, request the API versions, and then make that LSP available to you. Not ideal. Yes, you can, but you definitely shouldn't.
Testable - Can I, Should I? #
Testable. Who knows what Rego is? This is what it looks like. It's just not that understandable, unless you've been working with it for a long time. Even then, I came back to some of the Rego policies that I wrote a year ago, and I had no idea what they were doing. If you're not working with it day in and day out, you're going to lose that muscle memory and context. Then it's just going to be really painful coming back to it. While I love what Open Policy Agent are doing, and the Rego language does provide some very good functionality and features, I just wish the language was more familiar to people coming from a C-style background. It does share some things. If you can work out what this is doing, you're doing better than I can. For anyone who keeps their finger on the pulse of Kubernetes policy, there's Common Expression Language. This was added very recently, in Kubernetes 1.26. It allows us to write validating admission policies using CEL, which is a very small expression language from Google, where a policy is just an expression: object.spec.replicas is less than or equal to 5. This is a fantastic addition to Kubernetes because it makes it easier for developers - you - to get policies added to your cluster and make sure that you're not doing anything too bad with your deployment and production artifacts. It is very new. Can I, should I? CEL, yes. This is a no-brainer: if you are on Kubernetes 1.26 or 1.27, start to bring that in whenever possible. If you want to invest in learning and working on your Rego knowledge, then I do encourage you to. I'm here to show you that there are other tools that are more familiar, that you will understand, and that support your own programming languages of choice.
What's Missing? #
What is missing from all of these things where we can do it currently, but we probably shouldn't? That's the developer experience. Making sure that when you're sat down at your laptop, or your computer, or your monitor, you at least enjoy deploying to your Kubernetes cluster as much as you can. I always think of developer experience as just: how do we achieve what we want by leveraging existing skills, and hopefully enjoy it? I'm going to do something totally pretentious and quote myself. I love this quote, where I talk about being successful with our experience and intuition, rather than always having to go to kubectl explain, go to Stack Overflow, go to Google, search the documentation. Why can't we just take these 10, 20 years that we have as developers and apply all of those skills to what we're doing day in and day out? That is what a strong and good developer experience has to be.
What Are Our Options? #
If we want a strong developer experience and we want to deploy to Kubernetes, what are our options? I am going to be talking primarily about CDK8s today, and we're going to be jumping into my terminal and some code to show you how CDK8s works. I won't say too much about it right now, but I do want to cover some of the other tools available in the landscape. The next one is Pulumi. Pulumi is a fantastic tool, but I probably wouldn't use it for deploying to Kubernetes. I'm going to caveat this with two facts. One, I used to work for Pulumi, so I know Pulumi very well. Two, my entire time at Pulumi was spent trying to improve the developer experience of working with Kubernetes. The challenge is, Pulumi is based on the Terraform resource model. That means if you want to generate types and have Pulumi create custom resources in a cluster, you then need to write a provider that has a binary that is publicly available on the internet and can be distributed to your local machine. Then you have to install the SDK that describes those types and allows the resource creation to happen. What we're going to see when I pick a random CRD from the internet to show you how it works with CDK8s is that Pulumi just doesn't work that way. I have sent them loads of feedback and I hope that they change it. Right now, it's not the best bet for Kubernetes. Terraform unfortunately is in the same boat. It used to be that they supported deployments, services, all the core APIs. Then of course, none of us are deploying to Kubernetes with just the core APIs. Now they do Kubernetes manifest support, where everything renders to a Kubernetes manifest. That does get you so far, but again, we're not getting that developer experience of what the spec is. What can I put in here? Is it a ConfigMap reference that has a ConfigMap name? What is it I'm working with? We don't get that with Terraform. Also, just because I ran into it the other day when I was writing some actual Terraform for Kubernetes: if your custom resource definition has a property called status, it automatically won't deploy to the cluster, because Terraform treats that as a protected term that shouldn't appear in a Kubernetes custom resource. That does not work. It's been an open issue for three years. It pains me and hurts me all the way down to my core.
Anyone heard of CUE? CUE is a fantastic project, also out of Google, that wants to remove YAML and JSON from your lives. It has some very great features. Everything is a merge, even across arbitrary formats. It does recursive parsing of YAML and JSON into a structured CUE value. It's very cool. Stefan Prodan, who is the maintainer of the Flux project, is working on Timoni, which allows you to take CUE-based Kubernetes resources, have a GitOps workflow, and deploy them to your cluster. It's super early, but it's very promising. They just shipped support for generating CUE definitions, which gives you LSP support based on custom resource definition YAMLs from the OpenAPI spec. Very cool. I'd love to be able to say that you can start playing with that now, but you probably need to hold off a little bit more. Then there's Go. Again, five years ago, it was safe to say that every single Kubernetes operator was probably written in Go. That is not the case anymore. We're seeing much more Java, Rust, Zig, and other languages popping up as people want to explore and use the languages that are familiar to them. It used to be that, sure, we could always just import the type definitions from a Go package and deploy it. That's getting less true. It's not really a viable option any longer.
Which means we've got CDK8s, which is good, because that's what we're going to take a look at. CDK8s is a CDK implementation that allows you to deploy to Kubernetes. It does this in multiple languages, which is provided by the jsii project, which allows you to have a low-level description language that generates SDKs in the programming languages of your choice. It supports Go, JavaScript, TypeScript, Python, and Java. They keep threatening to add more languages, and I keep upvoting the Rust issue, but it's not happened quite yet. Hopefully soon. The benefits here are: we want to use our IDEs. We want to use VS Code. We want to use our extensions. We want language server protocols. We want to be able to click around and debug. All of that is 100% possible. CDK8s is also a CNCF project. It has that governance; it's not owned by AWS. It's safe to use and trust. It has constructs and charts, which are just terminology from the CDK project, which we'll see as we start to type some code. It does some dependency management, which is typically a difficult problem with Kubernetes because it's declarative, and you just throw everything at it and let it reconcile eventually. The CRD support is unbelievable. Literally, I will show you just how good that is. It actually integrates with Helm too, and does this via Helm templating. It still spits out YAML. We do lose some of the functionality of Helm hooks. Then they really tried to elevate the project with something called CDK8s+, which provides new APIs for describing Kubernetes resources that we just haven't seen in the past.
Demo (CDK8s) #
What does it look like? I could do it this way, and show you loads of code on a boring slide. We're not going to do it like this. Let's do some live coding. This is a CDK8s TypeScript project, where we have a chart. A chart just means we're going to print out some YAML. If you need more control over your CDK8s output, you can have more charts and nested charts; these will all be rendered to their own YAML file. It depends on how you want to work with the YAML afterwards. All we need to do now is start defining some resources. Then this is just the boilerplate at the bottom that creates a new chart and synthesizes it. All CDKs do this, whether it's Terraform CDK, AWS CDK, or CDK8s: they all go through a synthesizing step where they spit out some artifact that you can then run. The Terraform CDK is actually really cool, because you get Terraform JSON, and you can terraform apply it. However, we just want Kubernetes YAML. If we think back to that list of resources that we had for deploying to Kubernetes, we probably want a deployment first. I hope I did an npm install beforehand, otherwise this will be a short demo. This is a just-in-time generated SDK for whatever version of Kubernetes I want. This is all handled through a cdk8s.yaml. All I've said here is import the Kubernetes API. If you want, you can then say that you want a specific version. For right now we're just pulling Kubernetes and letting it do its thing. It will pull the latest. This SDK generation happens because we can run the CDK8s command line and ask it to import, which runs through all of the imports in that file, like so. This is importing Kubernetes 1.25. I'm not sure why that is. It's probably the version of CDK8s that I've got, and that's the most recent it is aware of. This means here, I can import a class called KubeDeployment. Obviously, I know what a deployment looks like, but I'm going to pretend that I don't, and just say let's have a new KubeDeployment. Let's just call this nginx, because that's what everybody downloads. Now we get our tab completion. My LSP tells me we need something called metadata and we need something called a spec, or at least they're optional types. Let's do the metadata. We've got all these optionals, so we're going to provide our name. This wasn't part of the demo, but the fact that Copilot is filling out some of this for me speaks to that developer experience, which is nice. I need a spec, which doesn't need a promise. Spec, template, spec, containers. Then we've got a container which needs an image - I'm not sure that version is real, but I'll trust Copilot, maybe - and a name. This is now giving me error detection in real time. I'm missing the selector and the metadata on the pod. We can work our way through that too, where we have to say we need a selector, which is going to be matchLabels. We'll just say, app: nginx. Then we'll need that down here too, labels. We now have a Kubernetes deployment.
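For readers following along without the video, a minimal sketch of what that live-typed file ends up looking like. It assumes the `imports/k8s` module generated by `cdk8s import`; the chart class name (`MyChart`) and the image tag are illustrative, not from the talk's repository.

```typescript
import { App, Chart } from 'cdk8s';
import { Construct } from 'constructs';
import { KubeDeployment } from './imports/k8s'; // generated by `cdk8s import`

class MyChart extends Chart {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    // Every field is typed, so the LSP can tab-complete metadata, spec,
    // selector, and so on, and flag anything that is missing.
    new KubeDeployment(this, 'nginx', {
      metadata: { name: 'nginx' },
      spec: {
        replicas: 1,
        selector: { matchLabels: { app: 'nginx' } },
        template: {
          metadata: { labels: { app: 'nginx' } },
          spec: {
            containers: [{ name: 'nginx', image: 'nginx:1.25' }],
          },
        },
      },
    });
  }
}

// The boilerplate mentioned above: create a chart and synthesize it to YAML.
const app = new App();
new MyChart(app, 'nginx');
app.synth();
```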
From here, we can say cdk8s synth. That spits out our YAML file. Now we have a Kubernetes deployment. We have hopefully enough working stuff, but I'm not actually going to try and deploy it. It doesn't really matter. That works for all of the core resources: we can do KubeService, Secrets, ConfigMaps, network policies, and so forth. That's pretty cool off the bat, but it does offer more. Let's assume that we actually have a whole bunch of different deployments in this file; we then get to start to refactor it. This is where using languages and tools that you're already familiar with becomes a huge advantage. Because who knows how to refactor their own code? Hopefully, everybody. Let's just take this and say, let's provide a function. I'm going to have this in the same file. Obviously, this is not the way we'd do it for a real production deployment. We can just say, create a deployment, which needs a name and an image. Then we'll just drop in our KubeDeployment. This is where one of the first weird things happens. In CDK8s we have this scope, or a context, or a construct. We always need to provide a base level construct that exists behind the scenes. What CDK8s is doing is adding them all to a big list, so it knows what to synthesize and what to render. It also allows us to do some aspect oriented programming on top of this, if you really want to: we could grab the scope, which would be our chart, and say, loop over all of the resources that you're about to render, and do augmentation or enrichment of those resources with common labels, annotations, seccomp profiles, whatever you want. These are composable, and can be higher order functions that we ship on npm, or within a mono repository, or a file, and so forth. The flexibility and the power is pretty neat. Here, we'll just say this is nginx, nginx, and we're going to pass the context as this. We'll just take that down here. The scope is just a construct, which is a primitive of any CDK, like so. Now if we run a synth, we should get an error. Values never read. Yes, I should probably use it. That's another thing. It ships with a very opinionated tsconfig. If you assign things or don't use values, it will complain and tell you that that is a terrible idea. Now we need to clean up our code, where this should be name. We can use short syntax here and just say that's our name, do the same here, and here, and the image, like so. You can see already we're starting to tidy up this definition of what our deployment looks like. We can take this a little bit further. We could assign our labels to a value like this. We say this is labels, like so. We're just continuing to evolve. Our deployment to Kubernetes now becomes a software project, which brings us all the benefits of existing knowledge and being successful with our experience and intuition. We can take a look at our YAML again. I'll just pop open here. This is not really going to change that much, because we're just refactoring, which then brings out another benefit. We can now do snapshot based testing, where we say, has this file changed in a way that we expect or don't expect, and bring this into our pipeline too. That's something that CDK8s does set up by default for you. If you open main.test.ts, we can see here that it does match against the snapshot. It's always creating snapshots when you run it; at least you can configure this. That's pretty cool, too.
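A rough sketch of that refactor and of the snapshot test that `cdk8s init` scaffolds, assuming the same generated `imports/k8s` module and the `MyChart` class from the earlier sketch; the helper name `createDeployment` follows the talk, everything else is illustrative.

```typescript
import { Testing } from 'cdk8s';
import { Construct } from 'constructs';
import { KubeDeployment } from './imports/k8s';

// Reusable helper: every deployment in the chart goes through one function,
// so a best practice only needs to change in one place.
function createDeployment(scope: Construct, name: string, image: string): KubeDeployment {
  const labels = { app: name };
  return new KubeDeployment(scope, name, {
    metadata: { name, labels },
    spec: {
      selector: { matchLabels: labels },
      template: {
        metadata: { labels },
        spec: { containers: [{ name, image }] },
      },
    },
  });
}

// main.test.ts - roughly what the default scaffolded snapshot test looks like.
test('synthesized YAML has not changed unexpectedly', () => {
  const app = Testing.app();
  const chart = new MyChart(app, 'test-chart');
  expect(Testing.synth(chart)).toMatchSnapshot();
});
```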
Hopefully, I showed you a few things. We refactored to a function, but we could put this into its own class, which allows us to do something even more cool. We can say, let's have a RawkodeDeployment, which extends a KubeDeployment, which needs a constructor. We're just going to have our scope and our ID, which is going to call super, ID, like so. That would actually need an interface. We can say config. These names are going to be terrible, just because I'm doing this very quickly. We could take our config here. Everything is typed, we get successful messages, things are happy. I'm going to cheat and just use our createDeployment function, and pass in the things that we need. Now we could say, what if we want to change or enrich this deployment? Let's assume we could have createService. We could also just say, why don't we exposeWithService. Then we get a fluent API. We could then have our deployment, where we say exposeAsService. I'm not actually going to make this code work, because CDK8s already provides all of these examples for you, in a project called cdk8s-examples. If we pop open one of the TypeScript ones, we can pop down to this web cache, pop open this. We can see here that we have something that extends a chart. They're making sure that the blast radius is a single YAML file. They're using the kplus library, which is where things get really slick from a developer experience, where now we can start to see that we need labels and annotations, and it does this in an interesting API. We define our stateful set. Now we can say, from a scheduling point of view, we want to attract it to a particular set of nodes. From the deployment perspective, we can then say that we need environment variables, and they come from values. We can also have them come from a ConfigMap reference, from a Secret reference, and so forth. Then there's the exposeViaService function here. This is all available to you out of the box. In order to use this, all you do is go to your package.json, and say that you want cdk8s-plus and then the Kubernetes version that you want, in this case 27. You could just do that. This becomes immediately available to you, and you can do stuff.
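A hedged sketch of both ideas described above: a small class wrapping the raw KubeDeployment as an organizational opinion, and the cdk8s-plus-27 style with exposeViaService. The RawkodeDeployment name comes from the talk; the config interface, the WebService construct, and the specific ports are illustrative assumptions.

```typescript
import { Construct } from 'constructs';
import * as kplus from 'cdk8s-plus-27';
import { KubeDeployment } from './imports/k8s';

interface RawkodeDeploymentConfig {
  name: string;
  image: string;
}

// Wrap the raw KubeDeployment in an opinionated class that teams consume.
class RawkodeDeployment extends KubeDeployment {
  constructor(scope: Construct, id: string, config: RawkodeDeploymentConfig) {
    const labels = { app: config.name };
    super(scope, id, {
      metadata: { name: config.name, labels },
      spec: {
        selector: { matchLabels: labels },
        template: {
          metadata: { labels },
          spec: { containers: [{ name: config.name, image: config.image }] },
        },
      },
    });
  }
}

// The cdk8s-plus flavor: a higher-level Deployment with a fluent API.
class WebService extends Construct {
  constructor(scope: Construct, id: string) {
    super(scope, id);
    const deployment = new kplus.Deployment(this, 'deployment', {
      containers: [{ image: 'nginx', portNumber: 80 }],
    });
    // Creates a Service pointing at the container port defined above.
    deployment.exposeViaService();
  }
}
```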
The last part of the demo is custom resource definitions. Everything I've shown you here, you could do with Pulumi. I said the challenge with Pulumi is the Terraform resource model: having to ship and make a binary available to everyone in order for them to generate an SDK for the custom resources. With CDK8s, that's not true. Let's go to cert-manager on GitHub. The reason I'm using cert-manager is they publish their custom resource definitions as a release artifact, which is handy for the demo, where we can copy this link. From this link, I can find my terminal. I don't need to save any of that crappy code. We can open our cdk8s.yaml, and all we need to do is paste that in. When we run the cdk8s import, we get Kubernetes and all the cert-manager resources, which means I can now come back to our main.ts, import from imports/cert-manager, where I can say that I need a new issuer. You don't need to know what this custom resource definition is. Why? Because we have a good developer experience, even if I'm a terrible typist. I'll call this production, and just tab my way through it. What do I need? I need a spec, selfSigned, like so. Copilot already knows how to do this. I'm not sure how. It just seems to be magic, but we'll let it. Too many braces. Now we have an issuer. Let's just run a synth one more time. That's that tsconfig where I've generated stuff that I'm not using. Let's just delete this. If it fails again, I'm going to delete the tsconfig. Good. Now if we come into our dist, we have our Kubernetes deployment and our cert-manager custom resource. Pretty painless. It doesn't get much easier than that.
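Assuming the cert-manager CRD bundle URL has been added under `imports:` in cdk8s.yaml and `cdk8s import` has been re-run, the generated module can be used like any other typed resource. A sketch, with the caveat that the generated module's filename is derived from the CRD's API group, and the chart class name here is illustrative:

```typescript
import { Construct } from 'constructs';
import { Chart } from 'cdk8s';
import { Issuer } from './imports/cert-manager.io'; // module name comes from the CRD group

class CertChart extends Chart {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    // A self-signed cert-manager Issuer, fully typed and tab-completable,
    // even though we never opened the CRD's documentation.
    new Issuer(this, 'production', {
      metadata: { name: 'production' },
      spec: { selfSigned: {} },
    });
  }
}
```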
With this new power and functionality from CDK8s, and a great developer experience, you can now start to build internal pattern libraries. You can decide as an organization what a deployment looks like and what artifacts are needed, encapsulate them in a class or a function, whatever you want, distribute them on GitHub, npm, and so forth, and make them freely available for everyone else to use. You can do policy and tests. With policies, we can use the aspect oriented stuff to say, loop over the resources and check that there's always a security context where we don't allow people to run as root. Make sure they're dropping CAP_NET_ADMIN if they don't need it, and so forth. Then the snapshot based testing works. If you're familiar with jest, or pytest, or even gotest, the tests are just against a structure that you can build yourself. You can say, we expect this to look like this, and it works. What's really nice, and I didn't show it, is we can hook into existing tools. We can do Helm deploys. We can pull in Kustomize with the include directive pointed at a remote Git repository. We could even integrate this with Terraform CDK. Our applications aren't that simple anymore. Sometimes we need to provision cloud resources like DNS, S3 buckets, DynamoDB tables, whatever, so why not have a Terraform CDK that does all that, and that can actually wrap and consume your CDK8s constructs, and do it all in one smooth motion with GitOps based operators too. There's a lot of power, a lot of flexibility. I hope that it makes your lives a little bit easier.
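One way to express that "loop over the resources" policy pass is with the plain constructs tree API; a sketch under stated assumptions - the function name, the label value, and the runAsNonRoot rule are examples, not recommendations from the talk:

```typescript
import { ApiObject, Chart } from 'cdk8s';

// Walk every resource the chart is about to render and enrich or validate it.
function applyOrgPolicy(chart: Chart): void {
  for (const child of chart.node.findAll()) {
    if (!(child instanceof ApiObject)) {
      continue;
    }
    // Enrichment: stamp common labels onto everything.
    child.metadata.addLabel('app.kubernetes.io/managed-by', 'cdk8s');

    // Policy: fail the build if a Deployment does not run as non-root.
    if (child.kind === 'Deployment') {
      const podSpec = child.toJson().spec?.template?.spec;
      if (podSpec?.securityContext?.runAsNonRoot !== true) {
        throw new Error(`${child.name} must set securityContext.runAsNonRoot`);
      }
    }
  }
}
```

Called after the chart's resources are defined and before `app.synth()`, this runs at synth time in CI rather than at admission time in the cluster.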
Questions and Answers #
Participant 1: It's super cool and solves a lot of the problems I've had with Helm in particular for a long time. Do you know if there's a good way to wrap CDK8s with Helm? I can see, for a lot of operations teams, that's going to be a problem. It's like, I only know how to do Helm things, you can't give me a new thing to do.
Flanagan: You can't wrap CDK8s with Helm, but you can still consume your Helm charts. From here, we can just import Helm. It's actually part of CDK8s, so there we go. We could just say, Helm, if we want to deploy a new Helm chart, and it still needs our scope. Say it was going to be cert-manager. Then we can provide the repository, we can provide the chart name, we can provide the version. Then we've got these values where we can just drop in whatever we need. You can describe all of your Helm chart deployments in CDK8s; it will actually consume and fetch the chart, template it out for you, and it all gets rendered to the YAML directory. You don't have to throw away Helm. You're not losing Helm. Helm still has its place, it's still got its purpose, even if it's not got the developer experience that we typically want. What I would love to see, which doesn't work yet, is this: we're now in a position where, through the Artifact Hub, a lot of these Helm charts have type definitions on the values. I'd love to see those pulled into this kind of situation, where we can then get our LSP and autocomplete in here too.
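A sketch of what that looks like with the Helm construct that ships in the cdk8s library; the chart, repository URL, version, and values below are illustrative, and the repo/version properties assume a recent cdk8s release:

```typescript
import { Construct } from 'constructs';
import { Helm } from 'cdk8s';

// Call this from inside a chart. CDK8s shells out to `helm template` at synth
// time and folds the rendered manifests into the chart's YAML output, so the
// helm binary still needs to be installed locally.
function addCertManager(scope: Construct): Helm {
  return new Helm(scope, 'cert-manager', {
    chart: 'cert-manager',
    repo: 'https://charts.jetstack.io',
    version: 'v1.14.4', // illustrative version
    values: {
      installCRDs: true,
    },
  });
}
```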
Participant 2: There aren't actually that many people who fill out type definitions in Helm values; it'd be impressive to see that.
Flanagan: A lot of the more popular charts now have this. The syntax is terrible. It's embedded in the YAML, but it does help. It does provide a better developer experience, but we can do better.
Participant 3: I was wondering if anyone has solved using this in something like Java. A case where you have multiple microservices that have a Postgres database. Right now, we use a complicated Helm syntax to determine whether or not one's already been established. We will build one if it doesn't already exist. Is it like a singleton pattern where it's like, you've just developed the Postgres database, and then anyone else that has a CDI file and dependency and all that, could then either bring it in, or if it doesn't already exist, we could create another instance? Does that make sense?
Flanagan: Yes, it does make sense. If you were to encapsulate this in Terraform CDK, you would be able to have multiple stacks, where there's a platform team that handles the Postgres operator, or the Postgres instance itself. Then that could be a stack dependency further down from the CDK8s point of view, but it would actually interact with that directly. Even without wrapping it in Terraform CDK, just having a platform team that provides all these building blocks, so that your teams can pick them up and use them as intents, is the better foot forward, rather than trying to bake it in with conditionals and weird Helm syntax.
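One hedged way to express that "create it only if it doesn't already exist" pattern in CDK8s itself, using the constructs tree rather than Helm conditionals. SharedPostgres and ensurePostgres are hypothetical names for something a platform team might publish, for example wrapping a database operator's custom resource:

```typescript
import { Construct } from 'constructs';

// Hypothetical platform-team construct: everything a Postgres instance needs
// (operator custom resource, credentials secret, network policy, ...).
class SharedPostgres extends Construct {
  constructor(scope: Construct, id: string) {
    super(scope, id);
    // ...define the database resources here
  }
}

// Return the Postgres already registered in this scope, or create it the
// first time a service asks for one - a singleton per chart, with no Helm
// conditionals involved.
function ensurePostgres(scope: Construct, id = 'shared-postgres'): SharedPostgres {
  return (scope.node.tryFindChild(id) as SharedPostgres) ?? new SharedPostgres(scope, id);
}
```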
Participant 4: The expectation is, a platform team or some team was going out and building your charts, and your custom constructs. Then the developer teams are actually just writing code that they're getting served for their [inaudible 00:46:49].
Flanagan: Kubernetes is not a platform. We don't just give people a cluster and say, carte blanche, have fun. We need to put guardrails in place. Especially with microservices and multiple teams and loads of things happening, you need to make it as easy as possible for people to get that code deployed into production. Typically, what we're seeing is platform engineering teams that have all their infrastructure as code, spin up the clusters, bootstrap them with enough GitOps to get the operators in place, and set up the GitOps pipeline, so that the developer teams just come along and provide an artifact in an OCI registry that then gets consumed and deployed. Once you get to that stage, the developers are then coming to this bit, where they're starting to write their KubeDeployments, their services, and so forth. You're going to notice patterns.
It's now a software project. Let's say, we're all writing the deployment, we're all writing the service - let's wrap it in an API and just make it easier. What most developer teams need is opinionated ways to deploy, not just giving the developers the ability to write arbitrary YAML and off they go. Policy is important. Testing is important. Security is important. Get that into their hands and give them an SDK like this, where they don't have to worry about generating seccomp profiles in Kubernetes for the Kubelet to pick up; it's just done for them.
Recorded at: Jun 19, 2024