Harness

This walkthrough shows you how to pass CloudTruth parameters to Harness with a K8s operator.

Prerequisites

  • You have a working knowledge of Harness and K8s.

  • You have kubectl installed.

  • The AWS CLI is configured.

  • You have created one or more CloudTruth Parameters (an example follows this list).

  • You have created a CloudTruth API Access token.
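
For reference, parameters can be created in the CloudTruth web UI or with the CloudTruth CLI. A minimal sketch, assuming a hypothetical project named "MyProject" (flag names can vary by CLI version, so confirm with cloudtruth parameters set --help):

# Create (or update) a parameter in the "default" CloudTruth environment.
cloudtruth --project MyProject parameters set my_param --value "hello from CloudTruth"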

Create Kubernetes Cluster

The first step in the process is to deploy our Kubernetes cluster on our cloud provider. For this tutorial we are going to use Amazon EKS, which is a managed container service. It would be fairly trivial to replicate these steps using another managed container service such as Google Kubernetes Engine (GKE) or Azure Kubernetes Service (AKS). If your cluster is already deployed, you can skip to "Setting up Harness".

Amazon EKS Cluster

The easiest way to deploy an Amazon EKS cluster is with eksctl, a simple CLI tool for managing EKS clusters. The instructions for installing or upgrading eksctl are in the eksctl documentation. If another option is preferred, the Amazon EKS documentation gives details on how to deploy a cluster via eksctl, the AWS Management Console, or the AWS CLI.
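
As a quick reference, a common Linux install looks like the sketch below (based on the eksctl project's published instructions at the time this page was written; check the eksctl docs for the current command and for macOS/Windows):

# Download the latest eksctl release and place it on the PATH.
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin

# Verify the installation.
eksctl version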

To deploy a cluster with eksctl in your default region you can execute the following:

eksctl create cluster  \
--name YOUR-CLUSTER-NAME \
--version 1.21 \
--with-oidc \
--without-nodegroup

Create Nodes for EKS Cluster

Once the EKS cluster is created, you must make sure there is a node group attached that can handle your workloads. Harness delegates require a minimum of 8Gb of memory, so we are using t3.xlarge nodes.

If the basic eksctl command above was used, the cluster is created without any node groups attached. To attach worker nodes, run the following commands.

Create a key pair to allow access to the node group instances:

aws ec2 create-key-pair \
  --key-name harness-key \
  --query "KeyMaterial" \
  --output text > harness-key.pem

Create a node group with the following command:

eksctl create nodegroup \
  --cluster YOUR-CLUSTER-NAME \
  --name delegate \
  --node-type t3.xlarge \
  --nodes 2 \
  --nodes-min 1 \
  --nodes-max 3 \
  --ssh-access \
  --ssh-public-key harness-key

Note: If you used a method other than eksctl to spin up your EKS cluster, make sure your local kubectl points to your cluster, which can be done by updating your kubeconfig.
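
For EKS, updating the kubeconfig is typically a single AWS CLI call; a minimal sketch (the region shown is only an example, use the one your cluster lives in):

# Merge the EKS cluster's credentials into your local kubeconfig.
aws eks update-kubeconfig --region us-east-1 --name YOUR-CLUSTER-NAME

# Confirm kubectl is pointed at the right cluster.
kubectl get nodes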

Setting up Harness

Now that our cluster is up and running, it is time to set up Harness. The first step is adding a Harness delegate to our cluster with the Harness First Generation Community Edition. For EKS, this is as simple as downloading the Harness delegate YAML and applying it with:

kubectl apply -f harness-delegate.yaml

Let's verify that our Harness delegate is running by executing the following command:

kubectl --namespace harness-delegate get pods

We can also verify this in the Harness GUI under Setup > Harness Delegates (right-hand side menu) by checking that our delegate shows up with a "Connected" status.
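
If you would rather wait from the command line, a minimal sketch (this assumes the delegate runs in the harness-delegate namespace created by the delegate YAML):

# Block for up to five minutes until every pod in the delegate namespace is Ready.
kubectl --namespace harness-delegate wait --for=condition=Ready pods --all --timeout=300s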

Adding Cloud Provider and Helm Repository

The next step is to add our Kubernetes cluster cloud provider to Harness. This is trivially easy to do when you inherit the authentication settings from the Harness delegate we installed in the previous step. To add a Cloud Provider, go to Setup > Cloud Providers > Add Cloud Provider > "Kubernetes Cluster" and provide the following details:

  • Cluster Details: "Inherit from selected Delegate"

  • Delegate Selector: the name of your delegate ("eks-qqiuw-0" in our case)

We also need to add our KubeTruth Helm repository. KubeTruth is a K8s operator that continuously pulls CloudTruth parameters into ConfigMaps and Secrets in our Kubernetes cluster; read more in the KubeTruth documentation. To add the Helm repository, go to Setup > Connectors (on the right hand menu) > Artifact Servers > "Add Artifact Server" and provide the following details:

  • Type: Helm Repository

  • Name: "KubeTruthHelm"

  • Repository URL: "https://packages.cloudtruth.com/charts"

Adding a new Application and Service

To add a new application, navigate to Setup > "Add Application" and give it whatever name you like, "KubeTruthDemo" in our case.

Once we create the application, we must add a service to it. Navigate to Services > "Add Service". Give the service a name ("DemoService" in our case) and set the deployment type to "Kubernetes".

On the service overview page click on the 3 dots in the upper right corner of "Manifests" and select the option that says "Link Remote Manifests".

Provide the following details and submit the service's remote manifest:

  • Manifest Format: Helm Chart from Helm Repository

  • Helm Repository: KubeTruthHelm (the repository we set up earlier)

  • Chart Name: "kubetruth" (case sensitive)

  • Helm Version: v3 (or v2 if you prefer to use that)

On the service overview page, scroll down to "Values YAML Override" and click "Add Values". Choose the "inline" option; here you are going to want to add the CloudTruth API key you created as a prerequisite. The YAML is as follows:

appSettings:
    apiKey: YOUR_CLOUDTRUTH_API_KEY

Optionally, we can also add a CloudTruth environment using the appSettings.environment key, but since we created our parameters in the "default" CloudTruth environment we can omit that key.
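
For example, if your parameter values lived in a CloudTruth environment named "production" (a hypothetical name used only for illustration), the inline override would look like this:

appSettings:
    apiKey: YOUR_CLOUDTRUTH_API_KEY
    environment: production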

Adding an Environment

The next step is to set up an environment on Harness to do our deployment. Navigate to Setup > Our App (KubeTruthDemo) > Environments > "Add Environment". Provide a name; we will call it "DemoEnv". You can select either Production or Non-Production for Environment Type; we will use Production for the purposes of this tutorial.

Once the environment is created, we need to add an infrastructure definition for it. On the Environment Overview page click "Add Infrastructure Definition" and provide the following details.

  • Name: "EKSCluster"

  • Cloud Provider Type: Kubernetes Cluster

  • Deployment Type: Kubernetes

  • Select bubble for "Use Already Provisioned Infrastructure"

  • Cloud Provider: Kubernetes Cluster

  • Namespace is fine as "default"

  • Release Name: keep default

Creating Deployment Workflow

It is time to create a deployment workflow. Navigate to Setup > Our App (KubeTruthDemo) > Workflows > "Add Workflow" and provide the following details.

  • Name: "RollingWorkflow"

  • Workflow Type: Rolling Deployment

  • Environment: DemoEnv (created in earlier step)

  • Service: DemoService (created in earlier step)

  • Infrastructure Definition: EKSCluster (created in earlier step)

From the Workflow Overview, we need to add a pre-deploy step to apply our KubeTruth CRD. Under the "Deploy" section, click "Add Step".

Navigate to Utility > Shell Script.

Provide the following step details:

  • Name: "Shell Script"

  • Script Type: BASH

  • Script:

kubectl apply -f  <(curl --silent https://raw.githubusercontent.com/cloudtruth/kubetruth/main/helm/kubetruth/crds/projectmapping.yaml)
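
If the delegate's shell does not support process substitution, the same CRD apply can be written as a pipe:

# Equivalent form for shells without <() process substitution.
curl --silent https://raw.githubusercontent.com/cloudtruth/kubetruth/main/helm/kubetruth/crds/projectmapping.yaml | kubectl apply -f -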

Important: Make sure to hit the little up arrow directly to the right of "Shell Script" under the "Deploy" section, since we want it to execute before the rollout deployment.

If done as a canary or blue/green deployment, make sure the shell script is entered as a "Pre-Deployment" step, since it needs to execute before the actual deployment (or else the CRD will not exist).

Now you can hit "Deploy" on the upper right (then "Submit")!

Verifying our Deployment

To verify that our KubeTruth deployment was successful and that our demo CloudTruth parameters are in our Kubernetes cluster, we need to look at the ConfigMaps and Secrets for the "default" Kubernetes namespace (or whatever namespace you deployed to).

The easiest way to do this is to navigate to your terminal and list all of your configmaps. You will see that you now have configmaps created from your CloudTruth Projects.

kubectl get configmap -A

Describe details from a specific configmap created from your workflow:

kubectl describe configmap "generated-configmap-name"

To check the value of a specific key, run:

kubectl get configmap "generated-configmap-name" -o jsonpath='{.data.KEY_NAME}'

If you want to verify the rolling updates to the ConfigMap, try changing a parameter value in CloudTruth and checking for the updated value using kubectl after a minute or two.
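
One way to watch for the change is to poll the generated ConfigMap; a minimal sketch with placeholder names:

# Re-run the lookup every 30 seconds until the new value shows up.
# "generated-configmap-name" and "KEY_NAME" are placeholders for your own names.
watch -n 30 "kubectl get configmap generated-configmap-name -o jsonpath='{.data.KEY_NAME}'"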

You have successfully deployed a Kubernetes operator that dynamically generates and updates ConfigMaps and Secrets! To clean up your AWS resources, you can run the following:

eksctl delete nodegroup NODEGROUPNAME --cluster YOUR-CLUSTER-NAME

aws ec2 delete-key-pair --key-name KEY_NAME

eksctl delete cluster YOUR-CLUSTER-NAME
