Kubernetes & OpenShift

Out-of-the-box extraction of runtime information

The Kubernetes integration provides a simple mechanism to understand which version of your service is running on which cluster. We show how the software packages built in your CI/CD pipeline are deployed on your K8s clusters. The integration also gives you other key details of K8s objects, such as the cluster configuration, jobs, stateful sets, persistent volume claims, and more.

For existing users, refer to the release notes and migration docs to upgrade from version 4.0.0 to 5.0.0.

Enable the integration on your workspace

Please contact your CSM to enable this integration on your workspace.

To use the integration, deploy a lightweight agent (discussed below) to your cluster; it scans your K8s objects and sends the information directly to VSM.

Synchronized data

The integration runs on a schedule. While setting up the integration, you can specify a schedule that works for your requirements; we advise running a scan every 6 hours. The table below gives a high-level overview of the data objects that we scan in your K8s clusters and how they translate into LeanIX Fact Sheets.

| Data Object in K8s | LeanIX VSM |
| --- | --- |
| Deployment with label | Deployment Fact Sheet of sub-type K8s Deployment |
| StatefulSet | Deployment Fact Sheet of sub-type K8s Stateful Set |
| Job | Deployment Fact Sheet of sub-type K8s Job |
| Label: app.kubernetes.io/version | We use this best-practice label to determine the version of the deployment. The version is used as an identifying factor to understand which clusters an image of the same service in the same version has been deployed to. |
| K8s Label or Namespace | Microservice Fact Sheet. We use either a dedicated K8s label or your namespaces to discover microservices; you can choose during the implementation. |
| CRD | Technical Component Fact Sheet of sub-type Custom Resource Definition |
| Persistent Volume | Technical Component Fact Sheet of sub-type Persistent Volume Claim |
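
As a concrete illustration of the rows above, a Deployment would typically carry both the version label and the label used for microservice discovery. The manifest below is a minimal sketch: the service name, version, and the kubernetes.io/name discovery label (matching the example connector configuration further down this page) are illustrative assumptions, not required values.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-service
  labels:
    # example discovery label; the key is configured via resolveLabel (see below)
    kubernetes.io/name: payment-service
    # best-practice version label evaluated by the integration
    app.kubernetes.io/version: "1.4.2"
spec:
  replicas: 2
  selector:
    matchLabels:
      app: payment-service
  template:
    metadata:
      labels:
        app: payment-service
        app.kubernetes.io/version: "1.4.2"
    spec:
      containers:
        - name: payment-service
          image: registry.example.com/payment-service:1.4.2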

Labels can become Tags in VSM

We not only show all the labels you set in K8s on your Deployment Fact Sheet, but we can also transform specific labels (identified by their key) into tags in VSM. You can use these tags to filter your data or display specific views in our reporting.

Setup

The Kubernetes integration works with the Integration Hub in "self-start" mode: the connector scans the K8s objects, processes them into an LDIF, and automatically triggers the inbound Integration API processor that updates the information in VSM.

Setting up Integration Hub data source

Create a new "DataSource" for the "mi-k8s-connector" connector on the workspace, under Integration Hub in the Admin area. Populate the connector configuration parameters with the right details. Details about ongoing and previous data source runs can be found under "Sync logging".

"self-start" mode in Integration Hub

The Kubernetes connector starts in "self-start" mode. The Integration Hub data source should not be started or scheduled manually; the actual scheduling is taken care of by the K8s agent deployed on your K8s cluster.
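
Since the scheduling is driven by the connector's CronJob on the cluster, the scan interval (for example, the six-hourly scan recommended above) is ultimately expressed as a cron expression. A minimal sketch, assuming your chart version exposes the CronJob schedule as a value named schedule (the key name is an assumption; check the values.yaml of the chart version you deploy):

# values.yaml fragment -- key name is an assumption, verify against your chart version
schedule: "0 */6 * * *"   # run the connector scan every 6 hours, on the hour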

Data source connector configuration

| Name | Format | Value |
| --- | --- | --- |
| resolveStrategy | plain text | "label" or "namespace" |
| resolveLabel | plain text | "{NAME_OF_LABEL}" |

Examples:

Resolve microservices by label:

{
	"connectorConfiguration": {
		"resolveLabel": "kubernetes.io/name",
		"resolveStrategy": "label"
	},
	"secretsConfiguration": {},
	"bindingKey": {
		"connectorType": "leanix-mi-connector",
		"connectorId": "leanix-k8s-connector",
		"connectorVersion": "1.0.0",
		"processingDirection": "inbound",
		"processingMode": "full",
		"lxVersion": "1.0.0"
	}
}

Resolve microservices by namespace:

{
	"connectorConfiguration": {
		"resolveStrategy": "namespace"
	},
	"secretsConfiguration": {},
	"bindingKey": {
		"connectorType": "leanix-mi-connector",
		"connectorId": "leanix-k8s-connector",
		"connectorVersion": "1.0.0",
		"processingDirection": "inbound",
		"processingMode": "full",
		"lxVersion": "1.0.0"
	}
}

Setting up k8s connector on a cluster

To set up the K8s connector, deploy the open-source connector (https://github.com/leanix/leanix-k8s-connector) as a Helm chart to every cluster you wish to scan. Make sure to use the latest version. The LeanIX Kubernetes Connector is deployed via a Helm chart into the Kubernetes cluster as a CronJob. It creates the kubernetes.ldif file and writes its logs to the leanix-k8s-connector.log file.

Installing k8s connector

Before you can install the Kubernetes Connector via the provided Helm chart, you must first add the Helm chart repository.

helm repo add leanix 'https://raw.githubusercontent.com/leanix/leanix-k8s-connector/master/helm/'
helm repo update
helm repo list

The output of the helm repo list command should look like this.

NAME                  URL
stable                https://kubernetes-charts.storage.googleapis.com
local                 http://127.0.0.1:8879/charts
leanix                https://raw.githubusercontent.com/leanix/leanix-k8s-connector/master/helm/
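
With the repository added, you can also check which chart version will be installed and confirm that it is the latest one, as recommended above:

helm search repo leanix/leanix-k8s-connector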

LeanIX API Token

See the LeanIX Technical Documentation for instructions on how to obtain an API token.

Create a Kubernetes secret with the LeanIX API token.

kubectl create secret generic api-token --from-literal=token={LEANIX_API_TOKEN}
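
Note that Kubernetes secrets are namespaced, so the secret must exist in the same namespace that the connector will later run in. If you install the connector into a dedicated namespace rather than default, create the secret there as well (the namespace name below is only an example):

kubectl create secret generic api-token \
  --from-literal=token={LEANIX_API_TOKEN} \
  --namespace leanix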

Deploying k8s connector using Helm chart

We use the Helm chart to deploy the LeanIX Kubernetes Connector to the Kubernetes cluster. The command after the table below deploys the connector to the Kubernetes cluster and overwrites the listed parameters from the values.yaml file.

| Parameter | Default value | Provided value | Notes |
| --- | --- | --- | --- |
| integrationApi.fqdn | app.leanix.net | | The FQDN or host of your LeanIX VSM |
| integrationApi.secretName | | api-token | The name of the Kubernetes secret containing the LeanIX API token |
| integrationApi.datasourceName | | aks-cluster-k8s-connector | The name of the Integration Hub data source configured on the workspace |
| clustername | kubernetes | aks-cluster | The name of the Kubernetes cluster |
| lxWorkspace | | 00000000-0000-0000-0000-000000000000 | The UUID of your LeanIX VSM workspace |
| verbose | false | true | Enables verbose logging on the stdout interface of the container |
| enableCustomStorage | false | | Enables the additional custom storage backend option to store kubernetes.ldif on your preferred storage |
| storageBackend | file | | The storage backend; defaults to file if not provided. Not required if enableCustomStorage is disabled |
| localFilePath | /mnt/leanix-k8s-connector | | The path used for mounting the PVC into the container and storing the kubernetes.ldif and leanix-k8s-connector.log files. Not required if enableCustomStorage is disabled |
| claimName | | pv0002-pvc | The name of the PVC used to store the kubernetes.ldif and leanix-k8s-connector.log files. Not required if enableCustomStorage is disabled |
| blacklistNamespaces | kube-system | kube-system, default | Namespaces that are not scanned by the connector. Must be provided in the format "{kube-system,default}" when using the --set option |

helm upgrade --install leanix-k8s-connector leanix/leanix-k8s-connector \
--set integrationApi.fqdn=app.leanix.net \
--set integrationApi.secretName=api-token \
--set integrationApi.datasourceName=aks-cluster-k8s-connector \
--set args.clustername=aks-cluster \
--set args.lxWorkspace=00000000-0000-0000-0000-000000000000 \
--set args.verbose=true \
--set args.blacklistNamespaces="{kube-system,default}"
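
After the release is installed, you can verify that the connector was deployed and inspect its scheduled runs. The commands below are a sketch; the CronJob and pod names depend on the release name used above, so adjust them if yours differ:

# the connector is deployed as a CronJob (name assumed from the Helm release above)
kubectl get cronjobs
# pods spawned by the scheduled runs, and the logs of one of them
kubectl get pods
kubectl logs <name-of-a-connector-pod>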

Advanced

Two storage backend types are natively supported:

  • file
  • azureblob

Set the enableCustomStorage flag to enable one of the storage options above. Even if the flag is disabled, the connector works as expected using the Integration Hub; this is only an additional storage option for the kubernetes.ldif file.

"File" storage option

The following discussion uses file as the storage option. The file storage backend lets you use any storage that can be provided to Kubernetes through a PersistentVolume and a PersistentVolumeClaim.

When using the file storage backend, you must pre-create the PersistentVolume and PersistentVolumeClaim. Follow the official Kubernetes documentation to set up the PersistentVolume and PersistentVolumeClaim.
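
As one possible setup, the sketch below defines a PersistentVolumeClaim named pv0002-pvc (matching the claimName used in the Helm command further down), backed by the azurefile storage class that AKS ships with; the storage class and size are assumptions for illustration, not requirements. With a dynamically provisioning storage class such as azurefile, the PersistentVolume is created for you; for statically provisioned storage you would also define the PersistentVolume as described in the Kubernetes documentation.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv0002-pvc
spec:
  accessModes:
    - ReadWriteMany            # the connector CronJob only needs shared file access
  storageClassName: azurefile  # assumption: AKS built-in Azure File storage class
  resources:
    requests:
      storage: 1Gi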

Limitations

The connector runs as a non-root user in the Kubernetes environment. You can only use persistent volumes that do not require root access to write the LDIF contents to a file.
For example, an Azure File share can be used as a persistent volume, while Amazon EBS cannot.

helm upgrade --install leanix-k8s-connector leanix/leanix-k8s-connector \
--set integrationApi.fqdn=app.leanix.net \
--set integrationApi.secretName=api-token \
--set integrationApi.datasourceName=aks-cluster-k8s-connector \
--set args.clustername=aks-cluster \
--set args.lxWorkspace=00000000-0000-0000-0000-000000000000 \
--set args.verbose=true \
--set args.enableCustomStorage=true \
--set args.file.claimName=pv0002-pvc \
--set args.blacklistNamespaces="{kube-system,default}"

Compatibility

Our integration is independent of the K8s distribution you are using, whether it is self-hosted, Amazon EKS, Azure AKS, or OpenShift.
