Our client is hosted on OVH. OVHcloud provides a managed Kubernetes cluster service, but unlike AWS, GCP, or Azure, it does not provide an out-of-the-box authentication plugin. When a Kubernetes cluster is created, a kubeconfig file is generated and shared among administrators.
The client uses certificate-based authentication:
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: LS0tLS1...
    server: https://clusterid.region.k8s.ovh.net
current-context: kubernetes-admin@example-cluster
preferences: {}
users:
- name: kubernetes-admin-example-cluster
  user:
    client-certificate-data: LS0tLS1...
    client-key-data: LS0tLS1...
❯ kubectl auth whoami
cluster-admin
Our client also uses various administration tools, such as ArgoCD, Grafana, and Prometheus, each configured with standalone authentication. Administrators currently share login credentials for each application.
Due to security and regulatory concerns, we were hired to find a simple solution that links all authentication mechanisms, including the Kubernetes cluster API, to a single SSO provider, in this case GitHub. This improves both security (no more shared credentials, and access is granted and revoked in one place) and operability (one login for every tool).
Our solution leverages the fact that ArgoCD ships with Dex, an identity service that can front GitHub and act as a standard OIDC provider for everything else.
In the target architecture, Dex, deployed as part of ArgoCD, bridges GitHub OAuth on one side and the OIDC consumers (the Kubernetes API server, ArgoCD, and the other administration tools) on the other.
The authentication flow for Kubernetes is outlined below. The steps to take are:
1. Create a GitHub OAuth app
2. Configure Dex through the ArgoCD Helm chart
3. Configure the Kubernetes API server to trust Dex as an OIDC issuer
4. Create clusterRoles and clusterRoleBindings for the GitHub groups
5. Configure the kubeconfig with the oidc-login plugin
6. Map GitHub groups to roles in the remaining tools
Throughout this article, we use the theodo GitHub organization and its theodo-ops-team team as examples.
First, the OAuth app: nothing special here, simply create a GitHub OAuth app with the authorization callback URL pointing at Dex (here, https://argocd.example.com/api/dex/callback). The application will generate a client ID and a client secret, which we will use in the Dex configuration below.
The following are example values for the ArgoCD Helm chart. The connectors section enables GitHub as a connector; only members of the theodo-ops-team team of the theodo organization can authenticate. The staticClients section creates an OIDC application that uses GitHub for backend authentication. This is necessary because Kubernetes only supports OIDC, and GitHub is not an OIDC provider.
---
## Argo Configs
configs:
  dex.config: |
    connectors:
      # GitHub connector to our GitHub OAuth app
      - type: github
        id: github
        name: GitHub
        config:
          clientID: OE36jaZUkajr1RdViSSZ
          clientSecret: 3470825c79c1c97556e3bf5802cd4a1d
          useLoginAsID: true
          # Authorize these users to authenticate. This does not grant them any rights.
          orgs:
            - name: theodo
              teams:
                - theodo-ops-team
    # For Kubernetes authentication
    staticClients:
      - id: OE36jaZUkajr1RdViSSZ
        name: Kubernetes
        # These are kubectl oidc-login plugin internal URLs
        redirectURIs:
          - http://localhost:8000
          - http://localhost:18000
        # Communicate this secret to users
        secret: 8d52926efe879ee505391b75f4b046cf
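One caveat: the example above embeds the client secrets directly in the Helm values. ArgoCD supports referencing values in dex.config from the argocd-secret Secret using a $ prefix, so a safer sketch (assuming a dex.github.clientSecret key has been added to argocd-secret) would be:

configs:
  dex.config: |
    connectors:
      - type: github
        id: github
        name: GitHub
        config:
          clientID: OE36jaZUkajr1RdViSSZ
          # Resolved by ArgoCD at runtime from the argocd-secret Secret
          clientSecret: $dex.github.clientSecret
          useLoginAsID: true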
This step is highly dependent on your Kubernetes environment. On OVH, we achieved it through a call to the cloud provider's API, which we configured using Terraform:
resource "ovh_cloud_project_kube_oidc" "my-oidc" {
  service_name = var.project_id
  kube_id      = ovh_cloud_project_kube.this.id
  client_id    = var.oidc_client_id
  issuer_url   = "https://argocd.example.com/api/dex"

  # JWT claim to use as the user name.
  oidc_username_claim = "name"
  # Array of JWT claims to use as the user's groups.
  oidc_groups_claim = ["groups"]
}
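Before pointing the API server at Dex, it is worth verifying that the issuer is reachable and that the issuer_url you configured matches the iss claim Dex puts in its tokens (OIDC requires an exact match). Every OIDC provider exposes a discovery document:

❯ curl -s https://argocd.example.com/api/dex/.well-known/openid-configuration | jq .issuer
"https://argocd.example.com/api/dex"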
To authenticate, Kubernetes uses the id_token from Dex as a bearer token. Configure your Kubernetes API server with these options:
- --oidc-issuer-url: https://argocd.example.com/api/dex
- --oidc-client-id: OE36jaZUkajr1RdViSSZ
- --oidc-username-claim: "name" (JWT claim to use as the user name)
- --oidc-groups-claim: ["groups"] (JWT claim to use as the user's groups; this is what lets us map clusterRoles to GitHub teams)
As of Kubernetes 1.29, authentication can also be configured through an AuthenticationConfiguration object, sketched below.
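A minimal sketch of such an AuthenticationConfiguration, assuming the same issuer and client ID as above (the file is passed to the API server with the --authentication-config flag; the feature is alpha in 1.29 and beta since 1.30):

apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
jwt:
  - issuer:
      url: https://argocd.example.com/api/dex
      audiences:
        - OE36jaZUkajr1RdViSSZ
    claimMappings:
      username:
        claim: "name"
        # Mimic the legacy flags, which prefix user names with "<issuer>#"
        prefix: "https://argocd.example.com/api/dex#"
      groups:
        claim: "groups"
        prefix: ""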
Now that Kubernetes trusts Dex as an authentication source, we need to create clusterRoleBindings and clusterRoles for the GitHub groups, assigning RBAC privileges to theodo-ops-team members.
The simplest configuration would be:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-admins-binding
subjects:
  - kind: Group
    name: theodo:theodo-ops-team
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
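cluster-admin grants full control over the cluster, which is appropriate for the ops team but not for everyone. As an illustration, a hypothetical theodo:theodo-dev-team (not part of this setup) could be bound to the built-in view ClusterRole instead; note that such a team would also have to be added to the Dex connector's orgs section to be able to authenticate at all:

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-readonly-binding
subjects:
  # Hypothetical GitHub team, shown for illustration only
  - kind: Group
    name: theodo:theodo-dev-team
    apiGroup: rbac.authorization.k8s.io
roleRef:
  # Built-in read-only ClusterRole
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io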
The only thing left to do is to configure the kubeconfig. For that, we will use the kubelogin kubectl plugin, also known as oidc-login. Install it with krew:
kubectl krew install oidc-login
Then it can be used to create an appropriate configuration. The token created must include groups and email claims.
kubectl oidc-login setup \
  --oidc-issuer-url=https://argocd.example.com/api/dex \
  --oidc-client-id=OE36jaZUkajr1RdViSSZ \
  --oidc-client-secret=8d52926efe879ee505391b75f4b046cf \
  --oidc-extra-scope=email,groups,profile
It will create a new user in your kube configuration:
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: LS0tLS1...
    server: https://clusterid.region.k8s.ovh.net
current-context: kubernetes-admin@example-cluster
users:
- name: oidc
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://argocd.example.com/api/dex
      - --oidc-client-id=OE36jaZUkajr1RdViSSZ
      - --oidc-client-secret=8d52926efe879ee505391b75f4b046cf
      - --oidc-extra-scope=email
      - --oidc-extra-scope=groups
      - --oidc-extra-scope=profile
      command: kubectl
      env: null
      interactiveMode: IfAvailable
      provideClusterInfo: true
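To use this user, pair it with the cluster in a context. A quick sketch, assuming the cluster entry in the kubeconfig is named example-cluster:

❯ kubectl config set-context oidc@example-cluster --cluster=example-cluster --user=oidc
❯ kubectl config use-context oidc@example-cluster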
This configuration is fully static, so it can be shared among administrators. The only sensitive value is the oidc-client-secret, which on its own is not sufficient for an attacker to access the cluster.
Whenever an administrator tries to connect, the plugin will:
- look for a valid cached token in ~/.kube/cache/oidc-login/
- if there is none (or it has expired), open a browser to authenticate against Dex, and through it GitHub, then cache the new token
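Note that this cache also means a token stays valid until its exp claim, even if the user's GitHub access is revoked in the meantime. To force a fresh browser login locally, simply drop the cache:

❯ rm -rf ~/.kube/cache/oidc-login/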
If we decode a token generated by this plugin, we can see what a Dex JWT looks like:
❯ cat ~/.kube/cache/oidc-login/029206f9ec27e68a5e5bd572bb8e14e1605b216f89d477976595bae14c56403d | jq '.id_token' | cut -d '.' -f 2 | base64 -d | jq
{
  "iss": "https://argocd.example.com/api/dex",
  "sub": "ChAUaGlaYXZsdExlbadhZ25lEgZnaXRodWI",
  "aud": "OE36jaZUkajr1RdViSSZ",
  "exp": 1728601368,
  "iat": 1728514968,
  "nonce": "FZxCmU-qrSZb-gddZfClwZDm0GqMA4Kd4noBmtem-KM",
  "at_hash": "xgMTpD8GXtWMnLVg9yRqww",
  "c_hash": "8tr6ErTcCH1ODobhY9UDIQ",
  "email": "thibault.lengagne@theodo.com",
  "email_verified": true,
  "groups": [
    "theodo:theodo-ops-team"
  ],
  "name": "ThibaultLengagne",
  "preferred_username": "ThibaultLengagne"
}
You can see the name and groups claims that we specified in the Kubernetes API server configuration:
❯ kubectl auth whoami
ATTRIBUTE   VALUE
Username    https://argocd.example.com/api/dex#ThibaultLengagne
Groups      [theodo:theodo-ops-team system:authenticated]
If no clusterRoles had been defined, we would get an error such as this one:
❯ kubectl get pods
Error from server (Forbidden): pods is forbidden: User "https://argocd.example.com/api/dex#ThibaultLengagne" cannot list resource "pods" in API group "" in the namespace "default"
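kubectl can also check permissions without attempting the operation, which is handy when debugging these bindings:

❯ kubectl auth can-i list pods
yes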
We will focus on ArgoCD in this article, but most tools can easily be plugged into Dex, as it is a compliant OIDC provider. Since Argo already trusts Dex (it is one of ArgoCD's components), you only need to map GitHub groups to a role:
# Argo CD RBAC policy configuration
## Ref: https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/rbac.md
rbac:
  # -- Create the argocd-rbac-cm configmap with Argo CD RBAC policy definitions.
  create: true
  policy.csv: |
    g, theodo:theodo-ops-team, role:admin
  # -- OIDC scopes to examine during rbac enforcement (in addition to `sub` scope).
  # The scope value can be a string, or a list of strings.
  scopes: "[groups, email]"
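The same policy.csv can hold finer-grained mappings. For instance, sticking with the hypothetical theodo-dev-team from earlier, ArgoCD's built-in role:readonly could be assigned like this:

rbac:
  create: true
  policy.csv: |
    # Ops team keeps full admin rights
    g, theodo:theodo-ops-team, role:admin
    # Hypothetical dev team gets ArgoCD's built-in read-only role
    g, theodo:theodo-dev-team, role:readonly
  scopes: "[groups, email]"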
The login process is now as simple as clicking the “GitHub” button in the ArgoCD web interface and being redirected to GitHub to log in.