explaining-linkerd

17 October 2024

Kubernetes is a central component of many infrastructures, so improving its security is essential. Linkerd is a security-focused service mesh that provides observability and security for Kubernetes network communications.

Explaining Linkerd

In this article, I will discuss the first steps in understanding Linkerd and setting it up on a Kubernetes cluster.

Let's start by explaining what Linkerd is.

Linkerd, a Service Mesh

It is often written that Linkerd is a service mesh for Kubernetes. That alone doesn't tell us much, so let's go into more detail.

A service mesh is a tool for developers to manage service-to-service communications in a microservice architecture. It abstracts the logic of service-to-service communication from the services and puts it into a layer of infrastructure.

A service mesh uses network proxies deployed alongside the different services to do this.

Explaining Linkerd's Architecture

Now, let's get back to Linkerd specifically.

Linkerd has two main parts: the control plane and the data plane.

Detailing Linkerd’s Control Plane

The control plane is a set of services running in a dedicated namespace that together provide control over Linkerd as a whole. Its main components are the following:

  • The destination service is used by the data plane to fetch various pieces of configuration:
    • Service discovery information (where to send a request and which TLS identity to use).
    • Information on which requests are allowed.
    • Service profile information, used to enable per-route metrics, retries, timeouts, and more.
  • The identity service acts as a TLS certificate authority for the Linkerd proxies, issuing a certificate at proxy initialization to enable mTLS (mutual TLS) communication between proxies.
  • The proxy injector is an admission webhook that inspects pods for the linkerd.io/inject: enabled annotation. If it is present, the injector mutates the pod to add the linkerd-init and linkerd-proxy containers.
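
Once Linkerd is installed (the installation is covered later in this article), these control plane components show up as deployments in their dedicated namespace. As a quick sanity check, assuming the default linkerd namespace (deployment names may vary slightly between Linkerd versions):

# List the control plane deployments in the linkerd namespace.
kubectl get deployments -n linkerd
# Typically shows linkerd-destination, linkerd-identity and linkerd-proxy-injector.
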
Explaining Linkerd's data plane

The data plane consists of micro-proxies that run alongside each service instance as sidecar containers in the pods.

These proxies automatically handle all TCP traffic to and from the service. They can do that thanks to iptables rules put in place by the linkerd-init init container. Each proxy communicates with the control plane to retrieve its configuration.

Note that between the execution of linkerd-init and the start of linkerd-proxy, the pod's network is not usable: traffic is already redirected through the proxy, which is not up yet. This can cause issues if you have other init containers that need network access, since init containers run sequentially in the order they are declared, and any that run after linkerd-init will hit this window. In contrast, this isn't a concern for regular containers, as linkerd-proxy will start first.
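
If you want to see how a given workload was injected, you can list its init containers and regular containers. This is a sketch with placeholder names (my-app-pod and my-namespace); adapt them to your own workload:

# Placeholder pod and namespace names.
kubectl get pod my-app-pod -n my-namespace \
  -o jsonpath='{.spec.initContainers[*].name}{"\n"}{.spec.containers[*].name}{"\n"}'
# The first line should include linkerd-init; any of your own init containers that
# run after it will lack working network access until linkerd-proxy is up.
# The second line should show linkerd-proxy alongside your application container.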

[Figure: Linkerd data plane]

Explaining Linkerd's advantages compared to other service meshes

I will mainly compare Linkerd to Istio, its primary alternative as a service mesh for Kubernetes.

  • Easy to install: Linkerd is easy to set up, especially on existing infrastructure, compared to Istio. This is because its architecture is simpler; for example, it doesn't require its own ingress controller.
  • Performance: Thanks to its Rust-based linkerd2-proxy micro-proxy, Linkerd has excellent performance, low latency, and a small resource footprint.
  • Security: Linkerd provides mTLS (mutual TLS) by default to encrypt pod-to-pod communications.

Installation

First steps to install Linkerd

There are two main ways of installing Linkerd: with the Linkerd CLI or with Helm.

  • Install with the Linkerd CLI: quick and easy.
  • Install with Helm:
    • Allows repeatability,
    • But is a bit more involved to set up:
      • Requires a trust anchor certificate, plus an issuer certificate and key pair.

We will focus on the installation process through Helm.

I won't go into detail about generating the certificates. Documentation is available for creating your own certificates, or for installing Linkerd with certificates managed by cert-manager.
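
For illustration only, here is a sketch of the approach used in the Linkerd documentation, which relies on the smallstep step CLI to generate a trust anchor and an issuer certificate. Treat it as an example, not a security recommendation:

# Create the trust anchor (root CA) for the cluster.
step certificate create root.linkerd.cluster.local ca.crt ca.key \
  --profile root-ca --no-password --insecure

# Create the issuer certificate and key, signed by the trust anchor.
step certificate create identity.linkerd.cluster.local issuer.crt issuer.key \
  --profile intermediate-ca --not-after 8760h --no-password --insecure \
  --ca ca.crt --ca-key ca.key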

Once your certificates are generated, let's walk through Linkerd's installation.

  • Adding Linkerd’s Helm repository
# Add the Helm repo for Linkerd edge releases:
helm repo add linkerd-edge https://helm.linkerd.io/edge
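
Then refresh the local chart index so the latest edge charts are visible:

# Update the local cache of available charts.
helm repo update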

You need to install two separate charts in succession: first linkerd-crds and then linkerd-control-plane.

  • Linkerd-crds
helm install linkerd-crds linkerd-edge/linkerd-crds \
  -n linkerd --create-namespace
  • Linkerd-control-plane
    • In the case of using locally generated certificates:

      helm install linkerd-control-plane \
        -n linkerd \
        --set-file identityTrustAnchorsPEM=ca.crt \
        --set-file identity.issuer.tls.crtPEM=issuer.crt \
        --set-file identity.issuer.tls.keyPEM=issuer.key \
        linkerd-edge/linkerd-control-plane

      If your infrastructure is managed by ArgoCD following a GitOps model, this method is not recommended, as it would require storing issuer.key in your values file in your Git repository. The second method, using cert-manager, is a better fit.

    • In the case of using certificates managed by cert-manager

      helm install linkerd-control-plane -n linkerd \
        --set-file identityTrustAnchorsPEM=ca.crt \
        --set identity.issuer.scheme=kubernetes.io/tls \
        linkerd-edge/linkerd-control-plane
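
Once both charts are installed, and if you also have the Linkerd CLI available, you can verify that the control plane is healthy:

# Verify that the control plane is correctly installed and running.
linkerd check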

Adding resources to the mesh

There are two ways of adding resources to the mesh.

  • Adding a single pod to the mesh

To do that, add the annotation linkerd.io/inject: enabled to the pod (or its pod template); see the sketch after this list.

  • Meshing a full namespace

If you add the annotation linkerd.io/inject: enabled to the namespace, all new pods created in this namespace will be added to the mesh.
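
As a sketch, assuming a namespace my-namespace and a Deployment my-app (both placeholder names), the two approaches look like this:

# Mesh a single workload: the annotation goes on the pod template, not on the
# Deployment object itself.
kubectl patch deployment my-app -n my-namespace -p \
  '{"spec":{"template":{"metadata":{"annotations":{"linkerd.io/inject":"enabled"}}}}}'

# Mesh a whole namespace: every new pod created in it will be injected.
kubectl annotate namespace my-namespace linkerd.io/inject=enabled

# For the namespace approach, pods that were already running are only meshed
# once they are recreated, for example with:
kubectl rollout restart deployment my-app -n my-namespace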

Limitations

In this part, I will explain Linkerd's limitations compared to other service meshes (mainly Istio).

  • Support

Linkerd no longer publishes open-source stable releases. Edge versions continue to be released, but they can introduce breaking changes between versions.

  • Fewer features

Certain features present in Istio are missing from Linkerd, because Linkerd is more straightforward. So, Istio may still be preferred for more advanced use cases.

Conclusion

Now, you have a first understanding of what a service mesh, and specifically Linkerd, brings to the security of your Kubernetes cluster. You should also understand how to set it up on your existing infrastructure.