Harness
This walkthrough shows you how to pass CloudTruth parameters to Harness with a K8s operator.
You have a working knowledge of Kubernetes and Harness.
You have an AWS account.
You have the AWS CLI installed and configured.
You have created one or more CloudTruth projects with parameters.
You have created a CloudTruth API key.
The first step in the process is to deploy our Kubernetes cluster on our cloud provider. For this tutorial we are going to be using Amazon Elastic Kubernetes Service (EKS), which is a managed container service. It would be fairly trivial to replicate these steps using another managed container service such as Google Kubernetes Engine (GKE) or Azure Kubernetes Service (AKS). If your cluster is already deployed, you can skip ahead to setting up Harness.
The easiest way to deploy an Amazon EKS cluster is with eksctl, a simple CLI tool for managing EKS clusters. Instructions for installing or upgrading eksctl are available in the eksctl documentation. If another option is preferred, the Amazon EKS getting-started guide gives details on how to deploy via eksctl, the AWS Management Console, or the AWS CLI.
To deploy a cluster with eksctl in your default region you can execute the following:
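For example (YOUR-CLUSTER-NAME is a placeholder of your choosing; the --without-nodegroup flag skips the default node group so we can attach one with an appropriately sized instance type below):
eksctl create cluster --name YOUR-CLUSTER-NAME --without-nodegroup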
Once the EKS cluster is created, you must make sure there is a node group attached that can handle your workloads. Harness delegates require a minimum of 8GB of memory, so we are using t3.xlarge nodes. If the cluster was created without a node group (as above), you can attach worker nodes with the following commands.
Create a key pair to allow access to the node group instances:
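For example, with the AWS CLI (KEY_NAME is a placeholder; use the same name in the cleanup step at the end):
aws ec2 create-key-pair --key-name KEY_NAME --query 'KeyMaterial' --output text > KEY_NAME.pem
chmod 400 KEY_NAME.pem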
Create a node group with the following command:
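A sketch of the command, assuming the placeholder names above (NODEGROUPNAME should match the cleanup step at the end):
eksctl create nodegroup --cluster YOUR-CLUSTER-NAME --name NODEGROUPNAME --node-type t3.xlarge --nodes 2 --ssh-access --ssh-public-key KEY_NAME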
Note: If you used a method other than eksctl to spin up your EKS cluster, make sure your local kubectl points to your cluster, which can be done with aws eks update-kubeconfig --name YOUR-CLUSTER-NAME.
Now that our cluster is up and running, it is time to set up Harness. The first step is installing a Harness delegate with the Harness First Generation Community Edition. For EKS, this is as simple as downloading the Kubernetes delegate YAML from the Harness UI and applying it with:
kubectl apply -f harness-delegate.yaml
Let's verify that our Harness delegate is running by executing the command:
kubectl --namespace harness-delegate get pods
The next step is to add our cluster as a Cloud Provider. This is trivially easy to do when you inherit the authentication settings from the Harness delegate we installed in the previous step. To add a Cloud Provider, go to Setup > Cloud Providers > Add Cloud Provider > "Kubernetes Cluster", and provide the following details:
Cluster Details: "Inherit from selected Delegate"
Delegate Selector: "eks-qqiuw-0"
We also need to add our KubeTruth Helm repository. KubeTruth is a K8s operator that continuously pulls CloudTruth parameters into ConfigMaps and Secrets in our Kubernetes cluster; you can read more about it in the KubeTruth GitHub repository. To add the Helm repository, go to Setup > Connectors (on the right-hand menu) > Artifact Servers > "Add Artifact Server", and provide the following details:
Type: Helm Repository
Name: "KubeTruthHelm"
Repository URL: "https://packages.cloudtruth.com/charts/" (the KubeTruth chart repository; verify the URL against the KubeTruth README)
To add a new application navigate to Setup > "Add Application", and give it whatever name you like, "KubeTruthDemo" in our case.
Once we create the application, we must add a service to it. Navigate to Services > "Add Service". Provide the service a name ("DemoService" in our case), and the deployment type must be "Kubernetes".
On the service overview page click on the 3 dots in the upper right corner of "Manifests" and select the option that says "Link Remote Manifests".
Provide the following details and submit the service's remote manifest:
Manifest Format: Helm Chart from Helm Repository
Helm Repository: KubeTruthHelm (that we set up earlier)
Chart Name: "kubetruth" (case sensitive)
Helm Version: v3 (or v2 if you prefer to use that)
Back on the service overview page, scroll down to "Values YAML Override" and click "Add Values". Choose the "inline" option; here you are going to want to add the CloudTruth API key you created as a prerequisite. The YAML is as follows:
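A minimal sketch of the inline values, assuming the appSettings keys documented in the KubeTruth chart (replace the placeholder with your own API key):
appSettings:
  apiKey: YOUR-CLOUDTRUTH-API-KEY
  environment: default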
The next step is to set up an environment on Harness to do our deployment. Navigate to Setup > Our App (KubeTruthDemo) > Environments > "Add Environment". Provide a name; we will call it "DemoEnv". You can select either Production or Non-Production for Environment Type; we will create a Production environment for the purposes of this tutorial.
Once the environment is created, we need to add an infrastructure definition for it. On the Environment Overview page click "Add Infrastructure Definition" and provide the following details.
Name: "EKSCluster"
Cloud Provider Type: Kubernetes Cluster
Deployment Type: Kubernetes
Select the bubble for "Use Already Provisioned Infrastructure"
Cloud Provider: Kubernetes Cluster
Namespace is fine as "default"
Release Name: keep default
It is time to create a deployment workflow. To do so, navigate to Setup > Our App (KubeTruthDemo) > Workflows > "Add Workflow" and provide the following details.
Name: "RollingWorkflow"
Workflow Type: Rolling Deployment
Environment: DemoEnv (created in earlier step)
Service: DemoService (created in earlier step)
Infrastructure Definition: EKSCluster (created in earlier step)
From the Workflow Overview, we need to add a pre-deploy step to apply our KubeTruth CRD. Under the "Deploy" section, click "Add Step".
Navigate to Utility > Shell Script.
Provide the following step details:
Name: "Shell Script"
Script Type: BASH
Script:
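A sketch of the script, assuming the ProjectMapping CRD manifest path in the KubeTruth GitHub repository (check the KubeTruth README for the current location):
kubectl apply -f https://raw.githubusercontent.com/cloudtruth/kubetruth/master/helm/kubetruth/crds/projectmapping.yaml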
Important: Make sure to hit the little up arrow directly to the right of "Shell Script" under the "Deploy" section, since we want it to execute before the rollout deployment.
Now you can hit "Deploy" on the upper right (then "Submit")!
To verify that our KubeTruth deployment was successful and that our demo CloudTruth parameters are in our Kubernetes cluster, we need to look at the ConfigMaps and Secrets for the "default" Kubernetes namespace (or whatever namespace you deployed to).
The easiest way to do this is to navigate to your terminal and list all of your configmaps. You will see that you now have configmaps created from your CloudTruth Projects.
kubectl get configmap -A
Describe details from a specific configmap created from your workflow:
kubectl describe configmap "generated-configmap-name"
To check the value of a specific key, run:
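For example, with jsonpath (PARAMETER_NAME is a placeholder for a key in the generated ConfigMap):
kubectl get configmap "generated-configmap-name" -o jsonpath='{.data.PARAMETER_NAME}'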
If you want to verify the rolling updates to the ConfigMap, try changing a parameter value in CloudTruth and checking for the updated value using kubectl after a minute or two.
You have successfully deployed a Kubernetes operator that dynamically generates and updates configmaps and secrets! To clean up your AWS resources, you can run the following:
eksctl delete nodegroup NODEGROUPNAME --cluster YOUR-CLUSTER-NAME
aws ec2 delete-key-pair --key-name KEY_NAME
eksctl delete cluster YOUR-CLUSTER-NAME