Take every precaution to prevent a compromise and limit the blast radius:
- A compromised customer account should not allow access to CloudTruth data or to other customers' data.
- A compromised CloudTruth account should not allow access to customer secrets.
- No CloudTruth service shall ever have direct public network access. Access to any service must be via a load balancer.
- Secrets must be encrypted at rest and in transit.
- Limit the amount of code that has access to cleartext secrets. This includes third-party libraries.
- CloudTruth employees should not have access to customer secrets in plaintext without customer approval. All secret access must be audited.
Our multitenant infrastructure runs in AWS spread across 3 availability zones in the us-east-2 region. Our service is split into 2 ECS clusters, with strict security group configurations on each. We strongly follow the principle of least privilege on all security groups and IAM policies.
The 2 clusters:
1. The main app: the api, docs, and web services
2. Internal vault: CloudTruth's main vault, used to manage certificates and JWT validation
All ECS clusters are in a private VPC, with the only public access being from the Application Load Balancers to the API, docs, and Web App endpoints. We also use a Bastion host for admin access.
Non-secret parameters and other non-secret data are stored in a multitenant Postgresql database hosted by AWS RDS. All queries are scoped by the organization id of the user performing the operation. The scoping is performed on the backend and is not dependent on the user's information. The organization id is fetched from the user's JWT, which is created and signed by Auth0.
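A minimal sketch of this backend-enforced scoping, using hypothetical table and claim names (not CloudTruth's actual code). The point is that the org id comes from the verified JWT, never from a user-supplied query parameter:

```python
import sqlite3

def scoped_parameters(db, jwt_claims):
    """Return only the parameters belonging to the caller's organization.

    The org id is taken from the JWT claims (set and signed by Auth0 and
    verified upstream), so a caller cannot query outside their own org.
    """
    org_id = jwt_claims["org_id"]
    cur = db.execute(
        "SELECT name, value FROM parameters WHERE org_id = ?", (org_id,)
    )
    return cur.fetchall()
```

Because the scoping clause is appended on the backend for every query, there is no API surface through which a client could ask for another organization's rows.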
In addition to the guardrails provided for non-secret data, an additional level of encryption is used to store secrets in our Postgresql database. The encryption keys for this come from a unique data key generated for each customer organization using AWS KMS (the Envelope Encryption method). The capability for a customer to provide their own KMS keys is available at our Enterprise subscription level. Doing so grants an additional level of assurance by allowing customers to revoke our access to their KMS key, immediately removing our ability to decrypt their secrets without needing us in the loop.
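Envelope encryption can be sketched as follows. This is illustrative stand-in code only: in production the data key comes from a real KMS call (e.g. `GenerateDataKey`) and the payload cipher would be AES-GCM, whereas this sketch uses a toy SHA-256 keystream so it runs with the standard library alone. The structure is what matters: each secret is encrypted under a per-org data key, and only the *wrapped* data key is stored, so revoking the master key revokes all decryption ability.

```python
import hashlib
import secrets

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy keystream cipher (NOT real cryptography) standing in for AES.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def fake_kms_generate_data_key(master_key: bytes):
    """Stand-in for KMS GenerateDataKey: a fresh data key, plus a copy
    wrapped under the customer's master key."""
    data_key = secrets.token_bytes(32)
    wrapped = _keystream_xor(master_key, data_key)
    return data_key, wrapped

def encrypt_secret(master_key: bytes, plaintext: bytes):
    data_key, wrapped_key = fake_kms_generate_data_key(master_key)
    ciphertext = _keystream_xor(data_key, plaintext)
    # Persist ciphertext + wrapped key; the plaintext data key is discarded.
    return ciphertext, wrapped_key

def decrypt_secret(master_key: bytes, ciphertext: bytes, wrapped_key: bytes):
    # In production this unwrap is a KMS Decrypt call the customer can revoke.
    data_key = _keystream_xor(master_key, wrapped_key)
    return _keystream_xor(data_key, ciphertext)
```

If the customer revokes access to the master key, the stored wrapped keys become undecryptable, and with them every secret they protect.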
CloudTruth offers several operating modes that control how customer data is stored.
1. Import into CloudTruth
   - In this mode, your config data is stored in a CloudTruth-managed AWS RDS instance.
2. Integrate with an external parameter or secret store
   - In this mode, your config data remains in a location you control. This could be AWS Parameter Store, AWS Secrets Manager, Azure Key Vault, HashiCorp Vault, or Git repositories.
   - CloudTruth makes a "symlink" to the external location and passes the values through the platform encrypted. The customer ultimately enforces the security controls and grants CloudTruth access to those locations.
We use Auth0 as our authentication provider. Auth0 allows users to authenticate with a username/password or with various social identity providers, such as Google, Microsoft, and GitHub. This also means we never have access to user passwords: they are known only to Auth0 (when using username/password auth) or to the identity provider (Google, Microsoft, GitHub, etc.)
When a user logs into the system (via Auth0), we get a JSON Web Token (JWT) in return with the user's id and their organization membership. All operations in the system require the JWT and are restricted to the organization contained in the JWT. The only exceptions are access to documentation (no auth required) and API-key authentication.
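The organization claim lives in the JWT's payload. The sketch below shows only where that claim sits; it deliberately skips signature verification, which in production must be done first against Auth0's public signing keys (typically via a JWT library). The claim names are hypothetical:

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the payload segment of a JWT (header.payload.signature).

    NOTE: illustrative only. A real service verifies the RS256 signature
    against Auth0's JWKS before trusting any claim in the payload.
    """
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

Every API handler can then read the org id from the verified claims rather than from anything the caller typed, which is what makes cross-org queries impossible.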
API Keys provide a convenient mechanism for authenticating automation systems, such as scripts and CI/CD pipelines. Any user with owner or admin permissions can create an API key. The user cannot create an API key with permissions greater than their own permissions, but they can reduce the access of the API key. Behind the scenes, the system will exchange the API key for a JWT from Auth0. The JWT is then used in all other operations. We'll cover more about how this exchange happens in another post.
We have 2 different authorization systems:
1. Org membership, determined by the JWT
2. Role-based access controls (RBAC)
The org membership authorization is performed early in the lifecycle of the request. You'll notice that our APIs don't ask for an organization id in any calls because the organization id is part of the JWT. This makes it impossible to query for things outside your currently active org. This carries all the way down to secret access.
We have implemented 4 roles as described below:
1. Owner - the org owner(s). Full permissions in the account.
2. Admin - full permissions in the account except for account management (subscription changes, org deletion, etc.)
3. Contributor - full permissions on projects, environments, parameters, and secrets. Cannot perform system operations like account management, user management, adding/removing integrations, or viewing audit logs.
4. Viewer - read-only access to parameters.
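The four roles form a strict superset hierarchy, which a simple capability table captures. This is a hedged sketch of that structure (the permission names are hypothetical, not CloudTruth's internal identifiers):

```python
# Each role's permission set is a superset of the role below it.
PERMISSIONS = {
    "viewer": {"read_parameters"},
}
PERMISSIONS["contributor"] = PERMISSIONS["viewer"] | {
    "write_parameters", "read_secrets", "write_secrets",
}
PERMISSIONS["admin"] = PERMISSIONS["contributor"] | {
    "manage_users", "manage_integrations", "view_audit_log",
}
PERMISSIONS["owner"] = PERMISSIONS["admin"] | {"manage_account"}

def allowed(role: str, action: str) -> bool:
    """Check whether a role is permitted to perform an action."""
    return action in PERMISSIONS.get(role, set())
```

Modeling roles as nested sets also makes the API-key rule above easy to enforce: a key's permission set must be a subset of its creator's.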
All operations are logged with the user who performed the operation and when it occurred. Some operations have more detail logged, such as whether a push integration was successful. This immutable log records all operations regardless of source (UI, CLI, or API). The audit log is available to owners and admins and can be exported in JSON format.
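An audit entry of this shape can be sketched as an append-only JSON line. The field names here are illustrative, not CloudTruth's export schema:

```python
import datetime
import json

def audit_record(user: str, operation: str, detail=None) -> str:
    """Serialize one immutable audit entry as a JSON line (illustrative)."""
    entry = {
        "user": user,
        "operation": operation,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    if detail:
        entry["detail"] = detail  # optional extras, e.g. push success/failure
    return json.dumps(entry, sort_keys=True)
```

Writing entries as append-only JSON lines keeps the log immutable in practice and trivially exportable.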
We take advantage of AWS encryption services for all supported products. For example, all EBS volumes and all S3 buckets are encrypted. In addition, we maintain a chain of certs and keys used for TLS encryption. Our internal Vault service rotates the certs and keys frequently and rolls them out automatically.
All network communications are encrypted with TLS v1.2 or later. All certificates are verified. This includes all internal service-to-service communications. There are no exceptions.
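The stated client-side policy maps directly onto Python's `ssl` module: a default context already enforces certificate and hostname verification, and the minimum protocol version can be pinned to TLS 1.2. This is a generic sketch of such a policy, not CloudTruth's service code:

```python
import ssl

def strict_tls_context() -> ssl.SSLContext:
    """A client TLS context enforcing TLS 1.2+ with full cert verification."""
    ctx = ssl.create_default_context()            # verifies certs + hostnames
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse anything older
    return ctx
```

Service-to-service callers built on a context like this will fail closed: an unverifiable certificate or a downgraded protocol aborts the connection rather than proceeding unencrypted.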
All customer data is encrypted at rest. This includes our main Postgresql database, all assets stored in S3, and all node-local storage to EBS volumes.
Our multitenant infrastructure runs in the us-east-2 region, is redundant across 3 availability zones, and is configured for autoscaling via Spot by NetApp.
We take regular, automated snapshots of our database and additional snapshots preceding any admin operation.
Access to our internal repositories requires SSO through our corporate identity provider. Our code is scanned for viruses and vulnerabilities on every build, a process baked into our build automation. We also use multiple static code analysis tools to ensure we adhere to best practices. We measure code coverage on all components and maintain a high level of coverage. We use Dependabot to keep all of our dependencies up to date.
We have no "God" accounts, but we have admins with privileged access to various systems. Access is achieved via a bastion host, which is limited to a whitelist of IP addresses of our admins. We also require MFA for admins. This includes running any out-of-cycle deploy code.
All privileged account access is audited using a combination of tools, notifications, and scheduled procedures.