6 posts tagged with "kubernetes"

Glasskube v0.1.0 – Introducing Dependency Management

· 3 min read
Jake Page
Developer Relations Engineer

Glasskube v0.1.0 was released on March 21st, introducing new features like Dependency Management and Dark Mode, as well as many useful improvements for a better CLI and GUI experience.

Glasskube is fully open-source. Support us by leaving a star: ⭐ glasskube/glasskube ⭐

πŸ™ Special thanks to all our contributors πŸ₯°β€‹

Once again, we can't thank our community enough for their valuable input and exciting contributions to Glasskube. We are happy you chose to be part of our journey in making Kubernetes package management easier for everyone.

This release includes:

  • 👥 a total of 10 contributors
  • 🛠️ 53 commits
  • 💥 no breaking changes

Watch our release video to get an overview of what has changed.

Most notable changes

Dependency management 🔗

Building on the package installation features shipped in the previous weeks, we now offer a built-in way of managing and installing the dependencies of your desired package. Package maintainers can now define the appropriate dependencies for each Glasskube-supported package and recommend a range of suitable dependency versions, ensuring that packages always stay compatible with their accompanying dependencies.

Package dependencies are shown in the installation dropdown in the GUI. They are also shown when installing a package via the CLI.
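
To illustrate the idea, here is a rough sketch of what a dependency declaration with a recommended version range could look like in a package manifest. The field names are hypothetical, for illustration only – not the exact Glasskube manifest schema:

# hypothetical sketch – illustrative field names, not the real Glasskube schema
name: my-package
dependencies:
  - name: cert-manager
    # recommended range of compatible dependency versions
    version: ">= 1.13.0, < 1.15.0"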

Dark Mode 🕶️

Great news: we shipped the "must-have" feature no OSS project can survive without, Dark Mode. Glasskube's dark mode is linked to the display mode defined on your local system. Access your system preferences menu and try it out!


Further improvements

We also worked hard on improving existing commands and our GUI. Here's a list of further notable changes:

  • The --latest flag was added to the glasskube bootstrap command to ensure the latest version of the package controller is being used.
  • The GUI now reflects the available dependencies for selected packages.
  • We expanded the describe command to include the installed dependencies.
  • You will now be greeted by a notification pop-up if "open" fails.
  • Version mismatch detection was added to the package controller.

BREAKING changes

This release does not contain any breaking changes.

All Release Notes

The release notes can be found here: v0.1.0 release on GitHub

All changes can be found here: Comparing v0.0.4 to v0.1.0

How to upgrade

Follow the installation instructions below to download the latest version of the Glasskube client. After that you need to upgrade the server component (Package Operator) by bootstrapping Glasskube again:

glasskube bootstrap --latest

Getting started

Follow our Getting Started guide if you want to try Glasskube for yourself and install your first package.

On macOS, you can use Homebrew to install and update Glasskube.

brew install glasskube/tap/glasskube

After installing Glasskube on your local machine, make sure to install the necessary components in your Kubernetes cluster by running glasskube bootstrap. For more information, check out our bootstrap guide.

Get involved

The easiest way to get involved is to tackle one of our open issues. You are also welcome to join our Discord.

If you are a cloud native developer, please submit your package.

As Glasskube is still in its very early days, your feedback is highly appreciated. Let us know what you think – we would love to hear from you. You can also support us by leaving a star on GitHub:

⭐ glasskube/glasskube ⭐

Glasskube v0.0.3 – Introducing Package Updates

· 4 min read
Philip Miglinci
Co-Founder

Glasskube v0.0.3 was released on February 27th, introducing package updates and many useful features for an improved CLI and GUI experience.

Glasskube is fully open-source. Support us by leaving a star: ⭐ glasskube/glasskube ⭐

πŸ™ Special thanks to all our contributors πŸ₯°β€‹

Once again, we can't thank our community enough for their valuable input and exciting contributions to Glasskube. We are happy you chose to be part of our journey in making Kubernetes package management easier for everyone.

This release includes:

  • 👥 a total of 10 contributors
  • 🛠️ 64 commits
  • 💥 no breaking changes

Watch our release video to get an overview of what has changed.

Most notable changes

Package Updates

Updating packages is one of the most essential features of any package manager. That's why most of our efforts in the past two weeks have gone into the support for such updates via CLI and GUI. It's as easy as typing glasskube update into the command line, or a button click in the GUI.

During package installation you can now decide whether you want automatic updates for the package, in which case the Glasskube package operator will take care of keeping you up to date at all times. You can of course opt out of this feature if you prefer to handle updates manually. Apart from that, the --version flag lets you choose which version of a package should be installed, if your use case requires a specific one.
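
For example (the package name and version below are placeholders):

glasskube update
glasskube install <package> --version v1.2.3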


For technical details, please have a look at the package operator documentation.

Further improvements

We also worked hard on improving existing commands and our GUI. Here's a list of further notable changes:

  • The glasskube describe command – and its GUI complement, the package detail page – have been implemented to give you a more detailed view of each of the packages available in Glasskube.
  • glasskube list has been extended with the new flags --outdated (listing only outdated packages) and --show-latest (showing the latest available version for each package) – see the example after this list.
  • All commands check for a newer version of Glasskube and notify you if you are not up to date anymore.
  • Glasskube is now even easier to set up: the GUI comes with handy pages for selecting a kubeconfig and for bootstrapping Glasskube in a cluster. All CLI commands will also support you in setting up Glasskube in your cluster.
  • Any kind of package status change is now streamed directly into the GUI, making it easier for you to recognize when something is happening in the background.
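
For example, to list only the packages that have a newer version available:

glasskube list --outdated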

BREAKING changes

This release does not contain any breaking changes.

All Release Notes

The release notes can be found here: v0.0.3 release on GitHub

All changes can be found here: Comparing v0.0.2 to v0.0.3

How to upgrade

Follow the installation instructions below to download the latest version of the Glasskube client. After that you need to upgrade the server component (Package Operator) by bootstrapping Glasskube again:

glasskube bootstrap

Getting started

Follow our Getting Started guide if you want to try Glasskube for yourself and install your first package.

On macOS, you can use Homebrew to install and update Glasskube.

brew install glasskube/tap/glasskube

After installing Glasskube on your local machine, make sure to install the necessary components in your Kubernetes cluster by running glasskube bootstrap. For more information, check out our bootstrap guide.

Get involved

The easiest way to get involved is to tackle one of our open issues. You are also welcome to join our Discord.

If you are a cloud native developer, please submit your package.

As Glasskube is still in its very early days, your feedback is highly appreciated. Let us know what you think – we would love to hear from you. You can also support us by leaving a star on GitHub:

⭐ glasskube/glasskube ⭐

The Inner Workings of Kubernetes Management Frontends – A Software Engineer's Perspective

· 11 min read
Christoph Enne
Software Engineer

In this blog post, we review Kubernetes management frontends and discuss how these tools are built.

The rise of Kubernetes in recent years has led to an astonishing number of open-source Kubernetes management tools seemingly appearing out of nowhere. The goal of the research leading to this article was merely to understand the architecture of some of these tools and to subsequently provide a brief overview and options for developers trying to get started with their own Kubernetes frontend. We will not dive deep into the actual tools and what problems they are trying to solve, but instead focus on the software engineering aspect. We are also exclusively exploring open-source and self-hosted tools, leaving the PaaS/IaaS platforms of cloud providers aside – that would be a whole different article.

Setting up and interacting with your first cluster can be overwhelming. Just like me, you might have come across the infamous kubernetes/dashboard, followed the installation instructions, and asked yourself: "What did I just do and why exactly does this work the way it works?" And after some tinkering with your cluster, you might have installed even more external tools that help you with some specific aspects of cluster management, providing you with either a CLI or a Web UI.

As a software engineer mostly engaged in web development in recent years, I was curious about how these kinds of tools are built and deployed.

Let's first clarify some of the basics needed for the following exploration of different Kubernetes UIs. After that, we will see what they have in common and what makes this kind of software special, to finally form a recommendation of how one could build a Kubernetes Web UI themselves.

Kubernetes Basics

The official documentation is more than helpful anyway; there is just one important thing to remember: whenever and wherever you interact with your cluster, you do it via the Kubernetes API – this holds true at least for the scope of this article, though there might be other use cases.

As a consumer of this API, one needs to know where it is hosted and how to authenticate against it. The Kubernetes API can be accessed both from inside a cluster (i.e., from an application running on a pod) and outside a cluster (e.g., from your command line). In some cases however, the API is only available from within a VPN.

Since we are looking at tools with a web UI, this UI and its backend need to be exposed such that a user can access them. The options are:

  • accessing them through the Kubernetes API server via kubectl proxy,
  • forwarding a port of the respective pod or service via kubectl port-forward, or
  • exposing them permanently via an Ingress or a LoadBalancer service.

Alternatively, the web server could be running on the local machine of a user as well, in which case one doesn't need to worry about these options. However, a user needs to have a valid kube config on their machine for any of these approaches to work.

Management Frontends​

Now, let's take a look at some commonly used frontends and how they are built.

kubernetes-dashboard

The Kubernetes Dashboard is a popular Web UI used to view and manage all kinds of Kubernetes resources within a cluster. In the latest stable version 2.7, both the backend and frontend are part of the same container. The Go backend serves both the API and the Angular UI assets. This deployment strategy requires users to use kubectl proxy to access the web application.
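
With kubectl proxy, the UI is reached through the API server's service proxy on localhost. A session looks something like this (the exact namespace and service name depend on the installation):

kubectl proxy
# the dashboard is then served at a URL of the form:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/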

In the newer 3.0 version, which is still in alpha, the deployment strategy has changed: both the backend and frontend are each running in a dedicated container. Therefore, accessing it via kubectl proxy no longer works because the UI needs to access the backend, which is running on a different pod and port. The port-forwarding approach described here should be used instead.
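
A port-forwarding session could look roughly like this (the service name and ports are placeholders – check the linked instructions for the real values):

kubectl port-forward -n kubernetes-dashboard svc/<dashboard-web-service> 8080:80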

ArgoCD

ArgoCD is a GitOps continuous delivery tool for Kubernetes. It comes with several components, including its own API server and a web UI. All the backend components are written in Go, and the UI is a React application.

As with the Kubernetes Dashboard, the server (including the UI assets) is deployed inside the cluster, making it necessary for the user to perform port-forwarding or use a LoadBalancer. This is described in their documentation.

Lens

Lens is a desktop UI, but still interesting for our exploration. It is being developed with Electron, React, and TypeScript. The Lens app uses the TypeScript Kubernetes client to connect to a cluster, and since the desktop app is clearly running outside a cluster, it uses a locally provided kubeconfig to connect to it.

glasskube

Yes, a pretty shameless plug (I work there), but it's also an interesting alternative to consider. For the UI of the Glasskube package manager, we spin up the web server locally via a CLI command and serve the UI assets from there. We decided to go this route because it simply makes more sense in our use case. Whenever the user needs the Glasskube UI, they host it themselves for as long or as short as they want – there is no need to have it running 24/7 inside the cluster.

Findings

Many open-source Kubernetes management UIs are coded in a similar way – with a Go backend utilizing the powerful Kubernetes Go client, and a single-page application in JavaScript as the frontend. In most cases, the web assets (e.g., JS files) are served together with the backend, meaning one container serves both the backend and frontend. It was actually difficult to find something that is not built like that.

Inside cluster vs. out of cluster

When it comes to deploying such a web tool, there are only two options:

  • The webserver is deployed on a pod inside the cluster and is accessible either via proxy, port-forwarding, or ingress.
  • The webserver is deployed outside the cluster, directly (locally) on the users' machine.

The Kubernetes clients (e.g., Go client) support developers with both methods to connect to a cluster, as we can see in the following examples.

The piece of code it all comes down to: the following simplified examples are heavily based on the official examples seen here and here.

Let's have a look at how to connect to the Kubernetes API when running the application inside the cluster:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func main() {
    // retrieve the config for the cluster we are currently in:
    config, err := rest.InClusterConfig()
    if err != nil {
        panic(err.Error())
    }

    // create the clientset for this config:
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err.Error())
    }

    // do something with the clientset, e.g. list all pods in the cluster:
    pods, err := clientset.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err.Error())
    }
    fmt.Printf("there are %d pods in the cluster\n", len(pods.Items))
}

The Go client implementation uses the pod's service account and the environment variables KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to identify which cluster it is placed in. Subsequently, it creates the REST config object, with which the clientset can be obtained.

Similarly, when running outside the cluster, one needs to create the config object, but this config is read from the local kube config:

package main

import (
    "context"
    "flag"
    "fmt"
    "path/filepath"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/homedir"
)

func main() {
    // get the passed (or default) kubeconfig file path
    var kubeconfig *string
    if home := homedir.HomeDir(); home != "" {
        kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
    } else {
        kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
    }
    flag.Parse()

    // build the config from the kubeconfig file:
    config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
    if err != nil {
        panic(err.Error())
    }

    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err.Error())
    }

    // do something with the clientset, e.g. list all pods in the cluster:
    pods, err := clientset.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err.Error())
    }
    fmt.Printf("there are %d pods in the cluster\n", len(pods.Items))
}

Again, the Kubernetes Go client has us covered with a simple function to parse a kubeconfig file to get a config, which can then be used to create a clientset.

When trying to run these simple examples, you will also come across one difference between these two approaches: Running the local tool is easier because you don't need to build an image, push it to a registry, and then pull it into the cluster.

Which one to choose?

Say you were to build your own Kubernetes UI in a similar fashion. When it comes to the decision of where the webserver of your tool should run, there are several things to consider:

  • Distribution: Running your tool inside the cluster means you have to build and distribute a Docker image. Conversely, you will have to distribute a native binary if you want users to install it on their machines. For both cases, there are lots of tools and resources online.
  • Availability: When your cluster is down for whatever reason, users might not be able to reach the tool hosted inside the cluster. This leads us to the next point:
  • Onboarding User Experience: This might be an edge case, but a locally hosted web tool is available before all of its components are installed in the cluster. This means you could implement some sort of UI onboarding flow for new users, making the tool easier to install and set up.
  • Compatibility: Multiple users of the same cluster could have different versions of your (locally hosted) tool installed. This cannot happen if there is just one webserver running inside the cluster.
  • Persistence: When needing to store tool-specific data (i.e., non-Kubernetes resources), you could store it inside the cluster (e.g., in a ConfigMap). For the locally deployed variant, you could additionally store user-specific data like settings on the users' machine. This decision is highly use case dependent.
  • Developer Experience: There seems to be no significant difference, but it is worth noting that an in-cluster webserver still needs to support the out-of-cluster config approach somehow during development; otherwise, one would have to build and deploy an image into the cluster after every change.

Eventually, whether the tool is deployed inside or outside of the cluster is completely up to you, but it's always important to consider the use cases and be aware of the context in which it is used. You can also opt for providing both options to your users.

For us at Glasskube, it was clear that we wanted to provide an easy-to-use interface for new users (especially those new to the Kubernetes world), who might not yet have set up all the Glasskube cluster components. These users can be supported by having a CLI command host the local webserver with a supportive Web UI.

Conclusion

In this article, we have explored a few Kubernetes tools offering a web UI and analyzed the web aspect of those tools from a software engineer's point of view. There is clearly no ultimate one-size-fits-all solution for how to design and develop such tools, but the list above hopefully gives some hints in the right direction. As always in software engineering: It depends.

One more plug: I work at Glasskube, where we are building the missing Kubernetes package manager. If you are interested in our work, make sure to star us: glasskube/glasskube. We are also working on an article shedding some light on different CLI frameworks, if you are more of a command line person. And if that's not enough, we might write about using htmx soon, because it's trending and we need your attention. I can already see the headline: "How we reduced our codebase by 95% by using seemingly old-school technology" – I think this has not been done before ;)

Glasskube v0.0.2 – Open Command

· 3 min read
Philip Miglinci
Co-Founder

Glasskube v0.0.2 was released on February 9th, just 9 days after the initial technical preview release.

Glasskube is fully open-source. Support us by leaving a star: ⭐ glasskube/glasskube ⭐

πŸ™ Special thanks to all our contributors πŸ₯°β€‹

We didn't anticipate rolling out our second preview release just one week after the initial one, but thanks to the remarkable contributions from our community, that's exactly what we've done.

This release includes:

  • 👥 a total of 14 (mostly new) contributors
  • 🛠️ 58 commits
  • 💥 no breaking changes

Watch our release video to get an overview of what has changed.

Most notable changes

Of all the changes, bug fixes, and improvements, two new features stand out in this second release:

The open command

Introducing Glasskube's newest feature: the open command! Gone are the days of laboriously setting up multiple TCP tunnels with kubectl port-forward just to access specific services. With Glasskube, accessing your desired services is now as easy as a click or a simple command. Say goodbye to unnecessary complexity and hello to effortless convenience.
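
Accessing a service is reduced to a single command (the package name below is a placeholder):

glasskube open <package>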

Realtime updates with htmx

The second-biggest achievement of this release: The integration of htmx for real-time updates. This advanced feature enables automatic and instantaneous updates to your graphical user interface, eliminating the need for manual page refreshes. With htmx, your application stays dynamically synchronized with backend changes, ensuring a seamless and responsive user experience.

BREAKING changes

This release does not contain any breaking changes.

All Release Notes

The release notes can be found here: v0.0.2 release on GitHub

All changes can be found here: Comparing v0.0.1 to v0.0.2

How to upgrade

Follow the installation instructions below to download the latest version of the Glasskube client. After that you need to upgrade the server component (Package Operator) by bootstrapping Glasskube again:

glasskube bootstrap

Getting started

Follow our Getting Started guide if you want to try Glasskube for yourself and install your first package.

On macOS, you can use Homebrew to install and update Glasskube.

brew install glasskube/tap/glasskube

After installing Glasskube on your local machine, make sure to install the necessary components in your Kubernetes cluster by running glasskube bootstrap. For more information, check out our bootstrap guide.

Get involved

The easiest way to get involved is to tackle one of our open issues. You are also welcome to join our Discord.

If you are a cloud native developer, please submit your package.

As Glasskube is still in its very early days, your feedback is highly appreciated. Let us know what you think – we would love to hear from you. You can also support us by leaving a star on GitHub:

⭐ glasskube/glasskube ⭐

Glasskube v0.0.1 – Technical Preview

· 5 min read
Philip Miglinci
Co-Founder
Louis Weston
Co-Founder

The aim of this post is to share our technical preview of how a cloud native package manager could work and what challenges need to be solved.

Glasskube is fully open-source. Support us by leaving a star: ⭐ glasskube/glasskube ⭐

Introducing Glasskube – The Next-Generation Package Manager for Kubernetes

Glasskube GUI Mockup

Package management on Kubernetes is one of the most pressing issues in the cloud native community. A concept widely known from other ecosystems like desktop and mobile computing has not yet been realized for cloud computing. On Android and iOS, for example, millions of developers publish their packages in the Play Store or App Store to reach their users. The package manager also makes sure all users receive the latest version published by the developer, and the developer receives crash reports and user feedback in return to improve their applications. But as a cloud native developer, there is no package manager you can rely on – yet.

Our first release (v0.0.1) already features a working prototype that can install basic packages, but a lot of challenges still need to be solved.

A cloud native architecture

Glasskube itself is designed as a cloud native application, featuring an easy-to-install client that comes with a graphical user interface and autocompletion for your favorite shell.

At the heart of the Glasskube package ecosystem lies our central package registry, which holds the package manifests. In a future version we also plan to support third-party registries and the possibility to use multiple registries in a cluster.

The Glasskube package operator syncs the latest manifest into the cluster and makes sure it will be updated as soon as a new manifest is available.

Challenges that need to be solved

We already covered some of our upcoming features in our public roadmap, but I would also like to take this opportunity to briefly speak about broader challenges.

Kubernetes version compatibility

Kubernetes releases minor versions every 4 months, which often come with new API versions. Package authors need to adapt their packages to these changes. In Kubernetes, a particular release might include more than one API version of a resource, so that packages can be compatible with a broader range of Kubernetes versions. These compatible versions are often only documented in the package distributors' changelog. Glasskube aims to incorporate this kind of metadata in combination with automatic checks from tools like kube-no-trouble or Pluto.

The user should not be required to tediously check all packages for compatibility, and package developers should get feedback if their package is not compatible with the latest API versions.

Package dependencies

Cloud native applications often interoperate, and there are some packages that can be found in almost every Kubernetes cluster, for example cert-manager, Ingress controllers, or database operators. Due to the lack of a package manager and ecosystem, these dependencies are still often only documented in the Getting started section of an application.

In an ideal world, a package author could simply specify a dependency of their package, and the package manager would ensure that all these prerequisites are fulfilled.

Testing

In order to support multiple Kubernetes versions, dependencies, and packages, Glasskube needs to build a massive automated testing infrastructure for all packages in its central package registry.

Feedback and package quality

As seen in other package managers like the Arch User Repository, the Play Store, and the App Store, user feedback and reviews help other users decide between different packages. Application developers will also incorporate user feedback to gain popularity and better ratings in the package manager.

Glasskube and Helm

Glasskube is no replacement for Helm. Helm has its strengths in configuring releases through templating and having the ability to perform upgrades and rollbacks.

Glasskube is laser-focused on the administrator who needs to only install and kustomize (pun intended 😉) a single application, but who also needs to make sure multiple packages are kept up-to-date and secure throughout multiple Kubernetes version upgrades while adapting to inevitable breaking changes.

Getting started

Follow our Getting Started guide if you want to try Glasskube for yourself and install your first package.

On macOS, you can use Homebrew to install and update Glasskube.

brew install glasskube/tap/glasskube

After installing Glasskube on your local machine, make sure to install the necessary components in your Kubernetes cluster by running glasskube bootstrap. For more information, check out our bootstrap guide.

Release Notes

All release notes can be found on GitHub: https://github.com/glasskube/glasskube/releases/tag/v0.0.1


Get involved

The easiest way to get involved is to tackle one of our open issues. You are also welcome to join our Discord.

If you are a cloud native developer, please submit your package.

As Glasskube is still in its very early days, your feedback is highly appreciated.

Let us know what you think – we would love to hear from you. You can also support us by leaving a star: ⭐ glasskube/glasskube ⭐

5 shortcomings of Helm

· 10 min read
Philip Miglinci
Co-Founder
Jakob Steiner
Software Engineer


5 reasons we are trying to build the next generation of deployment automation for Kubernetes.

Glasskube is fully open-source. Support us by leaving a star: ⭐ glasskube/glasskube ⭐

Introduction

As a seasoned DevOps engineer, I found that Helm, the popular Kubernetes deployment tool, has some shocking shortcomings. In this post I want to discuss some of those which, in my opinion, require a new vision for a more modern deployment solution. If you have never heard of Helm before, in a nutshell it is:

A framework for packaging Kubernetes resources (apps) into charts, publishing them, and letting them be easily installed via a command line interface.

The goal of this post is not to hate on the smart and talented people who built Helm; rather, maybe we can kindle a productive and healthy discussion about where we need to go as the DevOps industry to stay relevant in the coming years. But to fully understand the following, I think it is important to understand which developments led us to where we are today. So, before we start, let's quickly dive into the history of Helm.

In 2015, a company called Deis created Helm, a package manager for Kubernetes. Deis is now part of Azure Kubernetes Service, but the original project still exists as Helm Classic. At the same time, Google had a project called Kubernetes Deployment Manager, which was similar to Google Deployment Manager but for Kubernetes resources rather than GCP resources.

At the beginning of 2016, the two projects decided to merge, which resulted in the release of Helm v2 later that year. Helm v2 consists of a client and server component (Helm and tiller, respectively), where the latter was the continuation of the original Kubernetes Deployment Manager project. Tiller was designed to handle deployment states to make it easier for multiple users to use Helm without interfering with each other.

In 2018, Helm launched the Helm Hub as a central place to discover charts which are otherwise found in distributed repositories. Helm Hub was rebranded to Artifact Hub in 2020.

With the release of Kubernetes 1.6, which had Role-Based Access Control (RBAC) enabled by default, production deployments of helm became more difficult because of the many security policies required by tiller. So, people started to experiment with a new approach that could do the same thing without requiring a server component, which resulted in the release of Helm v3 in 2019.

As you can see, helm has a very rich history. It became the gold standard for packaging apps for Kubernetes and is used by DevOps engineers all over the world. But just because helm is the biggest player on the field doesn't mean it is without flaws. So then, why say goodbye to helm?

5 shortcomings:

1. Helm doesn't provide a mechanism for upgrading Custom Resource Definitions

helm does provide a method of packaging Custom Resource Definitions (CRDs) by placing them in a dedicated crds directory, but these are ignored during upgrades! This is intentional and designed to prevent accidental data loss. Therefore, upgrading a chart does not automatically upgrade its associated CRDs, which is unexpected for many engineers and leads to more manually involved, error-prone upgrade procedures and other anti-patterns.

To combat this major design flaw, chart developers have come up with several strategies, the most popular of which are:

  • Putting the CRDs into the chart's templates directory
  • Creating separate sub-charts just for CRDs

An alternative way to overcome this shortcoming is to not invoke helm commands directly, but rather use a CI/CD solution, like Flux. Flux provides a setting to automatically update CRDs during a helm upgrade, but it is off by default.
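
With Flux's helm-controller, enabling this looks roughly like the sketch below, based on the v2beta1 HelmRelease API (release, chart, and repository names are placeholders):

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: my-release
spec:
  interval: 10m
  chart:
    spec:
      chart: my-chart
      sourceRef:
        kind: HelmRepository
        name: my-repo
  upgrade:
    # also upgrade CRDs during helm upgrades; off (Skip) by default
    crds: CreateReplace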

2. Helm dependency management

The way to specify a dependency in a helm chart is to reference it as a sub-chart. This method can work great for tightly coupled dependencies that you might want to install separately or as part of another chart, but it has some weaknesses that are important to understand:

  • Sub-charts are always installed in the same namespace as the primary release and there is no way to change this.
  • There exists no mechanism to share a dependency between two releases.

For example, our Glasskube Operator Helm Chart depends on kube-prometheus-stack, velero, and a bunch of other dependencies, some of which are already installed in many Kubernetes clusters. To provide an installation experience that is as simple as possible, the chart references all those dependencies as sub-charts. But using this approach, all those dependencies are bundled in the Glasskube Operator release and cannot be changed or updated separately. Additionally, there is no way to check whether a dependency is already installed, so a user might end up with two separate installations of the same helm chart!

Ideal tooling would allow chart developers to specify external dependencies and simply ensure that those are present in a cluster before a chart can be installed. This way, dependencies could be shared among consumers. This is how package managers for operating systems have always worked. Why does Kubernetes need to be different?

3. Helm chart creation is not user-friendly

So far, we discussed problems that affect you as a chart user. But what does the situation look like for chart developers?

Well, let's start by creating a new chart. This can be achieved by calling helm create your-chart. I invite you to quickly open a terminal, run this command, and go through all the files it creates. As I'm sure you will agree, it's… a lot. I still remember the moment when I wanted to create my first helm chart and saw the results of this command, thinking: "this can't be right."

.:
total 8,0K
drwxr-xr-x. 2 kosmoz kosmoz 40 7. Dez 13:23 charts/
-rw-r--r--. 1 kosmoz kosmoz 1,2K 7. Dez 13:23 Chart.yaml
drwxr-xr-x. 3 kosmoz kosmoz 200 7. Dez 13:23 templates/
-rw-r--r--. 1 kosmoz kosmoz 1,9K 7. Dez 13:23 values.yaml

./charts:
total 0

./templates:
total 28K
-rw-r--r--. 1 kosmoz kosmoz 1,9K 7. Dez 13:23 deployment.yaml
-rw-r--r--. 1 kosmoz kosmoz 1,8K 7. Dez 13:23 _helpers.tpl
-rw-r--r--. 1 kosmoz kosmoz 925 7. Dez 13:23 hpa.yaml
-rw-r--r--. 1 kosmoz kosmoz 2,1K 7. Dez 13:23 ingress.yaml
-rw-r--r--. 1 kosmoz kosmoz 1,8K 7. Dez 13:23 NOTES.txt
-rw-r--r--. 1 kosmoz kosmoz 326 7. Dez 13:23 serviceaccount.yaml
-rw-r--r--. 1 kosmoz kosmoz 370 7. Dez 13:23 service.yaml
drwxr-xr-x. 2 kosmoz kosmoz 60 7. Dez 13:23 tests/

./templates/tests:
total 4,0K
-rw-r--r--. 1 kosmoz kosmoz 388 7. Dez 13:23 test-connection.yaml

In total, helm create generates 10 files in different sub-directories, and it is not immediately apparent which of those are actually essential for a chart and which are just example code. I once complained about this to a DevOps engineer who had already created dozens of charts, and they laughed and said:

"The first step in creating a chart is running helm create. The second is deleting 90% of the results."

Really? That's the best we can do? Okay, let's accept that and say you figured out the structure of your new chart. Now, you probably want to add some resources. Of course, you can just drop your existing YAML files into the chart's templates directory, but you're probably interested in using some parameters from your values.yaml in your resources. After all, that would kind of be the point of creating a helm chart in the first place. To look at an example, go back to your terminal (where, previously, you created your helm chart) and check out the file templates/serviceaccount.yaml:

{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "your-chart.serviceAccountName" . }}
  labels:
    {{- include "your-chart.labels" . | nindent 4 }}
  {{- with .Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
{{- end }}

Now, I know what you're thinking:

That doesn't look like the YAML I know! What are include, toYaml, and nindent, and what's up with all the -, {{, and |?

It's true: although helm template files use the YAML file name extension, they are actually just templates. Helm templates are based on the Go template language, which is very flexible and powerful but doesn't really know anything about YAML or Kubernetes. That's why it's necessary to call lots of these conversion functions inside the template files.

As a result, many popular charts end up with template files that contain more template language than actual YAML. This makes them hard to read and maintain, especially for someone who wasn't involved in their creation.
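
To make this concrete, here is a minimal, self-contained Go sketch (my own illustration, not Helm's actual implementation) that registers an nindent-like helper on a plain text/template. The engine emits whatever the data produces; the output is only valid YAML if the input happens to be shaped correctly:

package main

import (
    "os"
    "strings"
    "text/template"
)

func main() {
    // Helm-style helpers are just functions registered on a Go template;
    // the template engine itself has no notion of YAML structure.
    funcs := template.FuncMap{
        "nindent": func(n int, s string) string {
            pad := "\n" + strings.Repeat(" ", n)
            return pad + strings.ReplaceAll(s, "\n", pad)
        },
    }
    tpl := template.Must(template.New("labels").Funcs(funcs).Parse(
        "metadata:\n  labels:{{ nindent 4 .Labels }}\n"))
    // renders valid YAML only because the input data is shaped correctly:
    if err := tpl.Execute(os.Stdout, map[string]string{"Labels": "app: demo\ntier: web"}); err != nil {
        panic(err)
    }
}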

4. The values.yaml file is an anti-pattern

Now, let's go back to something that's a little more tangible for you as a helm user. As a Kubernetes application developer who writes resources as YAML files, you are probably used to having rich support in your development environment, including strict schema validation and super comprehensive autocomplete. Creating a values.yaml file for a chart release is a little different. See, there is no general schema for what goes and doesn't go inside a values.yaml file. Thus, your development environment cannot help you beyond basic YAML syntax highlighting. The only way to verify whether your values.yaml file is valid is to run it through helm and see what happens: helm template renders the chart's templates locally, which surfaces possible errors in the configuration file.
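
For example, rendering a chart locally with your values file catches template errors before anything touches the cluster (release name, chart path, and values file are placeholders):

helm template my-release ./your-chart -f my-values.yaml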

A lot of chart developers want to give users the possibility to fine-tune most aspects of the final deployment. As a result, the number of configuration possibilities is often unreasonably large and complicated, mimicking the actual resources they want to create, but without any schema validation!

5. Inability to interact with the Kubernetes API

We already discussed 4 shortcomings of helm, but in my opinion the biggest downside of helm is this: A helm release is strictly a one-shot operation. Once a helm release is successfully installed, helm's job is done. But here's the thing: Installing an application is usually not the hard part, maintaining an installation and keeping it running is. Unfortunately, helm doesn't really help you with that.

After finishing the installation of a release, helm cannot perform any additional changes due to its design as a strictly client-side application. This inability to interact with the release during later stages of its life-cycle means that helm as a deployment method is inherently static, but modern software deployments are often required to be very dynamic.

A simple example of something an operator can do, but helm can't, would be setting the Ingress class and annotations dynamically based on the detected Kubernetes environment:

Detecting the cloud environment:

private val dynamicCloudProvider
    get() = when {
        kubernetesClient.configMaps().inNamespace("kube-system").withName("shoot-info")
            .get()?.data?.get("extensions")?.contains("shoot-dns-service") == true ->
            CloudProvider.gardener

        kubernetesClient.nodes().withLabel("eks.amazonaws.com/nodegroup").list().items.isNotEmpty() ->
            CloudProvider.aws

        kubernetesClient.nodes().withLabel("csi.hetzner.cloud/location").list().items.isNotEmpty() ->
            CloudProvider.hcloud

        else ->
            CloudProvider.generic
    }

Applying configurations based on the environment:

protected val defaultIngressClassName: String?
    get() = when (configService.cloudProvider) {
        CloudProvider.aws -> "alb"
        else -> configService.ingressClassName
    }

protected fun getDefaultAnnotations(primary: P, context: Context<P>): Map<String, String> =
    configService.getCommonIngressAnnotations(primary) +
        when (configService.cloudProvider) {
            CloudProvider.aws -> awsDefaultAnnotations
            CloudProvider.gardener -> gardenerDefaultAnnotations
            else -> getCertManagerDefaultAnnotations(context) + ingressNginxDefaultAnnotations
        }
Conclusion

Although many developers are a bit scared of helm at first, its simple design gave helm the lead it currently holds in the space of Kubernetes deployment methods. Helm is currently the de-facto standard for managing complex application deployments, but that doesn't mean we shouldn't challenge its design and point out its shortcomings. New requirements for applications will demand more dynamic deployment methods, and we DevOps engineers and application developers must be prepared.

This is why we started Glasskube: an easier way to deploy applications and infrastructure on Kubernetes – glasskube/glasskube.

If you want to follow our progress, make sure to star glasskube/glasskube and join our Discord.