Eve Guard Installer v2.3.0

Eve Gateway Install (Umbrella + TLS)

Deploy Eve Gateway with TLS using the umbrella chart. All traffic is always encrypted; choose how TLS is provided.


1. Prerequisites


2. TLS strategies

There are three ways to provide TLS for the gateway:

  1. cert-manager — cert-manager requests and renews the Kong leaf certificate (the umbrella chart usually installs the cert-manager subchart; use --no-bundle-cert-manager when the cluster already runs cert-manager). Either the installer creates a private CA and a namespace Issuer, or you pass --use-existing-issuer with any Issuer / ClusterIssuer (Let’s Encrypt, Vault, corporate CA, etc.). Suitable for any environment: use a publicly trusted issuer when clients rely on default OS/browser trust, or a private CA when you control trust distribution (internal fleets, lab, air-gapped, custom roots).
  2. BYOC — Bring your own certificate: an existing Kubernetes TLS Secret or inline PEM. You handle renewal. No cert-manager.
  3. external — TLS terminates outside Kong (e.g. AWS NLB + ACM, Istio, ingress). Kong may listen on HTTP only. No cert-manager.

install-eve.sh / Helm values: strategy 1 is selected with --tls-strategy self-signed and tls.strategy: self-signed (legacy name for the cert-manager path). The interactive installer labels this option cert-manager.

Tip — existing issuer (cert-manager strategy): A platform engineer can install cert-manager and a ClusterIssuer once, test it, then every Eve install uses --use-existing-issuer <name> so DNS credentials / cloud IAM stay out of each app deploy. Clients get publicly trusted certs when the issuer chains to a public CA.

Using an existing Issuer (Let’s Encrypt, Vault, etc.)

Same as the cert-manager strategy with --use-existing-issuer: no installer-generated CA; cert-manager issues the leaf using your pre-defined issuer.

./install-eve.sh \
  --tls-strategy self-signed \
  --use-existing-issuer letsencrypt-prod \
  --domain gateway.acme-corp.com \
  --namespace eve-guard \
  --openai-key "$OPENAI_KEY" \
  --ghcr-username "$GHCR_USERNAME" \
  --ghcr-token "$GHCR_TOKEN"

The installer lists existing ClusterIssuers and namespace-scoped Issuers during setup so you can pick one interactively.

Supported cert-manager issuers

Eve works with any cert-manager Issuer or ClusterIssuer. Set it up following the official docs, then pass the name to --use-existing-issuer:

Issuer type Official documentation
ACME / Let’s Encrypt (general) https://cert-manager.io/docs/configuration/acme/
DNS-01 challenge solvers https://cert-manager.io/docs/configuration/acme/dns01/
Route53 (AWS) https://cert-manager.io/docs/configuration/acme/dns01/route53/
Cloudflare https://cert-manager.io/docs/configuration/acme/dns01/cloudflare/
Cloud DNS (GCP) https://cert-manager.io/docs/configuration/acme/dns01/google/
Azure DNS https://cert-manager.io/docs/configuration/acme/dns01/azuredns/
HashiCorp Vault https://cert-manager.io/docs/configuration/vault/
Venafi https://cert-manager.io/docs/configuration/venafi/
AWS Private CA (external issuer) https://cert-manager.io/docs/configuration/
All supported issuers https://cert-manager.io/docs/configuration/
AWS EKS + Let’s Encrypt tutorial https://cert-manager.io/docs/tutorials/getting-started-aws-letsencrypt/

See Section 7: Cloud providers for cloud-specific setup guides (IRSA for AWS, Workload Identity for GCP/Azure).


3. Quick start with installer

Quick start via curl (no repo needed)

The installer is published at https://install.eve.security. You do not need to clone the repo — curl the script directly and pass all flags on the command line.

One-liner (pipe to bash):

curl -fsSL https://install.eve.security | bash -s -- \
  --non-interactive \
  --context "$KUBE_CONTEXT" \
  --namespace eve-guard \
  --tls-strategy self-signed \
  --ghcr-username "$GHCR_USERNAME" \
  --ghcr-token    "$GHCR_TOKEN" \
  --openai-key    "$OPENAI_KEY" \
  --database-url  "$DATABASE_URL" \
  --database-key  "$DATABASE_KEY" \
  --database-email    "$DATABASE_EMAIL" \
  --database-password "$DATABASE_PASSWORD" \
  --database-tenant-id "$DATABASE_TENANT_ID"

Download first, inspect, then run (recommended):

curl -fsSL https://install.eve.security/install-eve.sh -o install-eve.sh
chmod +x install-eve.sh
./install-eve.sh --help       # inspect available flags
./install-eve.sh --context "$KUBE_CONTEXT" --namespace eve-guard ...

Pin to a specific version:

curl -fsSL https://install.eve.security/v2.3.0/install-eve.sh -o install-eve.sh
chmod +x install-eve.sh
./install-eve.sh --chart-version 2.3.0 ...

Uninstaller:

curl -fsSL https://install.eve.security/uninstall-eve.sh -o uninstall-eve.sh
chmod +x uninstall-eve.sh
./uninstall-eve.sh --namespace eve-guard

When run from a directory without a local eve-installer/Chart.yaml, the script automatically switches to OCI mode and pulls the chart from oci://ghcr.io/evesecurityinc/eve-installer. --ghcr-username and --ghcr-token are required in that mode for both the Helm pull and the image pull secret.
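The local-vs-OCI switch amounts to a check like the following sketch (illustrative only; the real logic lives in install-eve.sh):

```shell
#!/usr/bin/env bash
# Sketch of the local-chart check (illustrative; not the installer's actual code).
chart_dir="${CHART_PATH:-$(cd "$(dirname "$0")" && pwd)}"
if [ -f "$chart_dir/Chart.yaml" ] && grep -q '^name: eve-installer' "$chart_dir/Chart.yaml"; then
  echo "local chart mode: $chart_dir"
else
  echo "OCI mode: ${EVE_INSTALLER_CHART_OCI:-oci://ghcr.io/evesecurityinc/eve-installer}"
fi
```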


Interactive install (secrets in env, passed explicitly)

Omit --non-interactive so the script prompts for cluster, TLS strategy, namespace, /etc/hosts, CA trust, etc. Fill in the exports below, then run from the eve-installer directory (or with CHART_PATH set, or with the OCI flags from Script only + chart from GHCR):

export OPENAI_KEY=''
export DATABASE_URL=''
export DATABASE_KEY=''
export DATABASE_EMAIL=''
export DATABASE_PASSWORD=''
export DATABASE_TENANT_ID=''
export GHCR_USERNAME=''
export GHCR_TOKEN=''

./install-eve.sh --openai-key "$OPENAI_KEY" \
  --database-url "$DATABASE_URL" \
  --database-key "$DATABASE_KEY" \
  --database-email "$DATABASE_EMAIL" \
  --database-password "$DATABASE_PASSWORD" \
  --database-tenant-id "$DATABASE_TENANT_ID" \
  --ghcr-username "$GHCR_USERNAME" \
  --ghcr-token "$GHCR_TOKEN"

The installer still reads OPENAI_KEY, DATABASE_*, GHCR_* if set, even without passing flags; passing --openai-key / --database-* / --ghcr-* as above keeps the command explicit and works well in copy-paste runbooks.

From the repo (local Helm chart)

If you run the script from the eve-installer folder (or set CHART_PATH to that folder) so that a Chart.yaml with name: eve-installer is present, the installer uses the local chart. When Chart.lock exists it syncs subcharts with helm dependency build (reproducible, and avoids refreshing unrelated helm repo entries on your machine); if there is no lock or the build fails, it falls back to helm dependency update.

cd traffic-orchestrator/k8s/helm/eve-installer
./install-eve.sh

Script only + chart from GHCR (OCI)

Use this when install-eve.sh is not next to a local eve-installer/Chart.yaml (e.g. you copied only the script). Helm then installs from eve-installer on GHCR. You need GHCR auth (--ghcr-username / --ghcr-token) for the image pull secret and for helm pull against OCI.

Custom OCI registry + chart version (fork or your org’s GHCR):

./install-eve.sh --non-interactive \
  --chart-oci oci://ghcr.io/yourorg/eve-installer \
  --chart-version 2.2.0 \
  --ghcr-username YOUR_USER --ghcr-token ghp_... \
  --namespace eve-guard --release-name eve-guard \
  --tls-strategy self-signed \
  --openai-key "$OPENAI_KEY" \
  ...other required flags...

Add whatever else your environment needs (see ./install-eve.sh --help and the flag table below), for example: --context (kube context), --gateway-domain or --domain, --use-existing-issuer, and database backend --database-* flags or DATABASE_* env vars.

Default OCI is oci://ghcr.io/evesecurityinc/eve-installer — then you only need --chart-version (and the same GHCR / cluster / TLS / secrets flags):

./install-eve.sh --non-interactive --chart-version 2.2.0 \
  --ghcr-username YOUR_USER --ghcr-token ghp_... \
  ...other flags...

Equivalent env vars: EVE_INSTALLER_CHART_OCI, EVE_INSTALLER_CHART_VERSION. The chart is published by CI — see HELM_OCI_GHCR.md. macOS, Linux, and Windows are all supported: the script runs Helm and kubectl on your machine, and workload images are pulled by the nodes using the pull secret created from your GHCR token.

The installer will:

  1. Select cluster — List kube contexts; you pick one.
  2. Preflight — Check kubectl, Helm, cluster connectivity. Detects cloud provider (EKS, GKE, AKS, TKG, Rancher), cert-manager, and existing Issuers. Node listing is best-effort (RBAC may prevent it on managed clusters — that’s fine).
  3. TLS strategy — cert-manager (self-signed) / BYOC / external.
  4. TLS config — Depends on strategy: existing issuer selection, ACM certificate ARN (external on AWS), domain, or cert/key paths.
  5. Credentials — OpenAI key, database backend (URL, API key, auth email/password, tenant ID; use --database-* or DATABASE_* env vars), GHCR token.
  6. Cluster settings — Namespace, expose mode (LoadBalancer / ClusterIP), LB visibility (external / internal), MetalLB pool (on-prem).
  7. Generate values — Write eve-values-generated.yaml.
  8. Helm install — From local chart: helm upgrade --install eve-guard . -f eve-values-generated.yaml. From OCI: same release/values against oci://…/eve-installer. cert-manager and hooks behave the same.
  9. Post-install — Shows endpoint URL, CNAME instructions, and pod status. For the cert-manager strategy with an installer-generated private CA: prints optional CA trust commands for macOS/Linux/Windows.

Multiple namespaces: cert-manager CRDs are shared cluster-wide; only one controller install should serve the cluster. Prefer --no-bundle-cert-manager plus --use-existing-issuer for additional Eve namespaces after the first (or use a platform-managed cert-manager). Uninstalling the release that bundled cert-manager removes that controller unless you run cert-manager as its own Helm release.

install-eve.sh CLI flags

Same list as ./install-eve.sh --help. Boolean flags take no value (see second table).

Flag Argument Description
--context name kubectl context to use
--namespace ns Kubernetes namespace (default: eve-guard)
--release-name name Helm release name (default: same as --namespace)
--tls-strategy self-signed | byoc | external TLS mode: self-signed = cert-manager strategy (see §2); byoc / external as named
--domain FQDN TLS / gateway hostname (e.g. gateway.example.com)
--gateway-domain host Gateway hostname for cert-manager strategy when not using --domain (default: <release-name>.eve.gateway.test)
--use-existing-issuer name cert-manager ClusterIssuer or Issuer name (skip local CA); namespaced Issuer must live in the install namespace
--use-existing-issuer-kind ClusterIssuer | Issuer issuerRef.kind for the leaf Certificate (default: ClusterIssuer if omitted)
--openai-key key OpenAI API key
--database-url URL Backend database API base URL (HTTPS)
--database-key key API key for database backend
--database-email email Auth email for database backend
--database-password password Auth password for database backend
--database-tenant-id id Tenant identifier for database backend
--ghcr-username user GitHub username for GHCR
--ghcr-token token GHCR token (read:packages) — pull secret + Helm OCI login
--chart-oci URL Umbrella chart OCI ref when not using a local chart
--chart-version version Chart version on GHCR; required for OCI + --non-interactive
--expose-mode loadbalancer | clusterip How Kong is exposed
--lb-scheme internal | external LoadBalancer visibility (when svc/kong is the public LB)
--metallb-pool name MetalLB IPAddressPool name (on-prem)
--acm-cert-arn ARN AWS ACM certificate (external TLS on EKS)
--gcp-managed-cert name GKE managed certificate resource name
--byoc-secret name Existing TLS Secret name (BYOC)
--byoc-cert path Path to TLS certificate PEM (BYOC)
--byoc-key path Path to TLS private key PEM (BYOC)

Boolean / switch flags (no argument):

Flag Description
--non-interactive No prompts; skips /etc/hosts and CA trust unless opt-in below
--update-hosts Allow sudo updates to /etc/hosts (typically with --non-interactive)
--no-update-hosts Never modify /etc/hosts
--install-ca-trust Install Eve CA into OS trust store via sudo (typically with --non-interactive)
--no-install-ca-trust Skip automatic CA trust install
--skip-preflight Skip kubectl/Helm preflight checks
--dry-run helm install / upgrade with --dry-run only
--bundle-cert-manager Always install cert-manager subchart
--no-bundle-cert-manager Never install cert-manager subchart (cluster must already run it)
-h, --help Print usage and exit

Environment (non-interactive consent without flags): EVE_UPDATE_HOSTS=yes|true|1, EVE_INSTALL_CA_TRUST=yes|true|1.
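The yes|true|1 convention can be expressed as a small shell predicate (a sketch of the convention, not the installer's code; the convention as documented is case-sensitive):

```shell
# Sketch of the yes|true|1 consent convention (illustrative, case-sensitive).
is_yes() {
  case "${1:-}" in
    yes|true|1) return 0 ;;
    *)          return 1 ;;
  esac
}

if is_yes "${EVE_UPDATE_HOSTS:-}"; then echo "consent: update /etc/hosts"; fi
if is_yes "${EVE_INSTALL_CA_TRUST:-}"; then echo "consent: install CA trust"; fi
```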

Directory env (not a flag): set CHART_PATH to the folder that contains the local eve-installer Chart.yaml (default: directory of install-eve.sh).

Environment variables (alternative to flags)

Variable Purpose
CHART_PATH Local umbrella chart directory
EVE_INSTALLER_CHART_OCI OCI URL for eve-installer (default: oci://ghcr.io/evesecurityinc/eve-installer)
EVE_INSTALLER_CHART_VERSION Chart version when pulling from OCI
NAMESPACE Target namespace
RELEASE_NAME Helm release name
CONTEXT kubectl context
TLS_STRATEGY, DOMAIN, USE_EXISTING_ISSUER, USE_EXISTING_ISSUER_KIND, GATEWAY_DOMAIN TLS / domain
OPENAI_KEY, DATABASE_URL, DATABASE_KEY, DATABASE_EMAIL, DATABASE_PASSWORD, DATABASE_TENANT_ID App / DB secrets
GHCR_USERNAME, GHCR_TOKEN GHCR auth
EXPOSE_MODE, LB_SCHEME, METALLB_POOL Service exposure
ACM_CERT_ARN, GCP_MANAGED_CERT External cloud TLS
BYOC_SECRET_NAME, BYOC_CERT_FILE, BYOC_KEY_FILE BYOC material
EVE_UPDATE_HOSTS, EVE_INSTALL_CA_TRUST Opt-in to privileged local actions under --non-interactive
GENERATED_VALUES Filename for written values (default: eve-values-generated.yaml)

For non-interactive runs, use --non-interactive plus the flags or env vars above.

Optional sudo (interactive installs)

Before changing your machine, the script asks (default is no for both):

Prompt What it does
Allow /etc/hosts updates? Adds your gateway hostname → cluster or load balancer IP (needs sudo).
Install the Eve CA into the system trust store? Trusts the installer-generated private CA so browsers accept https://… (needs sudo on macOS/Linux). Only relevant when not using --use-existing-issuer.

With --non-interactive, neither runs unless you opt in: --update-hosts, --install-ca-trust, or environment EVE_UPDATE_HOSTS=yes / EVE_INSTALL_CA_TRUST=yes. Use --no-update-hosts / --no-install-ca-trust to force skip.

Non-interactive examples (all values passed)

Pass everything with flags (including database params):

./install-eve.sh --non-interactive \
  --context arn:aws:eks:us-west-2:123456789012:cluster/prod \
  --namespace eve-guard \
  --release-name eve-guard \
  --tls-strategy self-signed \
  --gateway-domain gateway.acme.internal \
  --use-existing-issuer letsencrypt-prod \
  --openai-key "$OPENAI_KEY" \
  --database-url "https://xyzcompany.supabase.co" \
  --database-key "$DATABASE_KEY" \
  --database-email "service-account@acme.com" \
  --database-password "$DATABASE_PASSWORD" \
  --database-tenant-id "tenant-uuid" \
  --ghcr-username "$GHCR_USERNAME" \
  --ghcr-token "$GHCR_TOKEN" \
  --expose-mode loadbalancer \
  --lb-scheme external

Same thing with secrets from environment variables (less sensitive data in shell history):

export OPENAI_KEY='sk-...'
export DATABASE_URL='https://xyzcompany.supabase.co'
export DATABASE_KEY='sb_secret_...'
export DATABASE_EMAIL='service-account@acme.com'
export DATABASE_PASSWORD='...'
export DATABASE_TENANT_ID='tenant-uuid'
export GHCR_USERNAME='github-user'
export GHCR_TOKEN='ghp_...'

./install-eve.sh --non-interactive \
  --context arn:aws:eks:us-west-2:123456789012:cluster/prod \
  --namespace eve-guard \
  --release-name eve-guard \
  --tls-strategy self-signed \
  --gateway-domain gateway.acme.internal \
  --use-existing-issuer letsencrypt-prod \
  --expose-mode loadbalancer \
  --lb-scheme external

If running script-only + OCI (without local Chart.yaml), pass --chart-version <version> in non-interactive mode, and --chart-oci when not using the default oci://ghcr.io/evesecurityinc/eve-installer (see Script only + chart from GHCR).


4. TLS via cert-manager (--tls-strategy self-signed)

This is strategy 1 from §2. self-signed is only the CLI/Helm flag name; behavior is always cert-manager-driven TLS for Kong.

Default path: installer-generated private CA + leaf cert renewal. Optional --use-existing-issuer skips the private CA and uses your Issuer / ClusterIssuer instead. Works on any cluster (minikube, Docker Desktop, EKS, on-prem).

Basic usage

./install-eve.sh \
  --tls-strategy self-signed \
  --namespace eve-guard \
  --openai-key "$OPENAI_KEY" \
  --ghcr-username "$GHCR_USERNAME" \
  --ghcr-token "$GHCR_TOKEN"

The installer generates a CA, creates a K8s secret with the CA keypair, and cert-manager issues a leaf cert with auto-renewal. After install, you are asked whether to install the CA in your trust store (default no); if you skip it, the script prints commands for macOS/Linux/Windows to run manually.
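If you skipped the automatic step, the printed commands are typically of this shape (a sketch using standard macOS and Debian/Ubuntu tooling; the CA file name eve-ca.crt is an assumption — prefer the exact commands the installer prints):

```shell
# Illustrative CA-trust helper; the installer prints the exact commands for
# your OS. Assumes the CA PEM was saved as eve-ca.crt.
trust_eve_ca() {
  ca="${1:-eve-ca.crt}"
  case "$(uname -s)" in
    Darwin)   # macOS: add to the system keychain
      sudo security add-trusted-cert -d -r trustRoot \
        -k /Library/Keychains/System.keychain "$ca" ;;
    Linux)    # Debian/Ubuntu layout
      sudo cp "$ca" /usr/local/share/ca-certificates/eve-ca.crt
      sudo update-ca-certificates ;;
  esac
}
```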

If a platform engineer already created a ClusterIssuer (Let’s Encrypt, Vault, corporate CA, etc.):

./install-eve.sh \
  --tls-strategy self-signed \
  --use-existing-issuer letsencrypt-prod \
  --domain gateway.acme-corp.com \
  --namespace eve-guard \
  --openai-key "$OPENAI_KEY" \
  --ghcr-username "$GHCR_USERNAME" \
  --ghcr-token "$GHCR_TOKEN"

No CA is generated, no trust store setup needed (if the issuer uses a publicly trusted or org-wide CA). cert-manager issues and renews the leaf cert using the existing issuer.

minikube / Docker Desktop

minikube: Start the tunnel before running ./install-eve.sh (in a separate terminal, and leave it running). Without it, LoadBalancer services often stay <pending> or only show a cluster-internal address; with minikube tunnel active, minikube publishes 127.0.0.1 as the external IP. The installer waits up to ~90s for that IP so it can add it to the gateway certificate SANs; if the tunnel is not running, that step can fail and you may need to start the tunnel and re-run the installer.

# Terminal 1 — keep running (may prompt for sudo)
minikube tunnel
# Terminal 2
./install-eve.sh \
  --context minikube \
  --namespace eve-guard \
  --tls-strategy self-signed \
  --openai-key "$OPENAI_KEY" \
  --ghcr-username "$GHCR_USERNAME" \
  --ghcr-token "$GHCR_TOKEN"
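The installer's wait for the external IP is roughly this loop (a sketch; the ~90s/5s timing follows the description above and is not copied from the script):

```shell
# Poll the Kong service for an external LoadBalancer IP, up to ~90s (18 x 5s).
wait_for_lb_ip() {
  ns="${1:-eve-guard}"
  for _ in $(seq 1 18); do
    ip=$(kubectl get svc kong -n "$ns" \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}' 2>/dev/null)
    if [ -n "$ip" ]; then
      echo "$ip"
      return 0
    fi
    sleep 5
  done
  echo "no external IP after ~90s; is 'minikube tunnel' running?" >&2
  return 1
}
```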

Docker Desktop Kubernetes usually assigns a reachable LoadBalancer address without minikube tunnel; use the same ./install-eve.sh flags with --context set to your Docker Desktop context.

Open https://eve-guard.eve.gateway.test (or the gateway domain you chose). The default uses the reserved .test TLD so it never conflicts with public DNS; if you allow it when prompted, the installer updates /etc/hosts (see Optional sudo).

ClusterIP + port-forward (no LoadBalancer needed)

For local development without minikube tunnel or a cloud LB:

  1. Add --expose-mode clusterip to the install command.
  2. Port-forward: kubectl port-forward svc/kong 8443:443 -n eve-guard
  3. Use https://localhost:8443.

Manual (no installer)

# Generate CA and wildcard cert (see install-eve.sh for full openssl commands)
kubectl create secret tls eve-guard-tls --cert=eve-local.crt --key=eve-local.key -n eve-guard
helm upgrade --install eve-guard . -f examples/values-self-signed.yaml -n eve-guard --timeout 15m \
  --set eve-gateway.secrets.openaiApiKey=sk-...
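For the "generate CA and wildcard cert" step, a minimal openssl sequence looks like this (illustrative: the CN, SANs, and validity periods are assumptions, not the installer's actual values — see install-eve.sh for the real commands):

```shell
# Illustrative only: names and validity periods are assumptions.
set -eu
# 1. Private CA
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -keyout eve-ca.key -out eve-ca.crt -subj "/CN=Eve Local CA"
# 2. Leaf key + CSR for the gateway wildcard domain
openssl req -newkey rsa:2048 -nodes \
  -keyout eve-local.key -out eve-local.csr -subj "/CN=*.eve.gateway.test"
# 3. Sign the leaf with the CA, adding the gateway names as SANs
san_cfg=$(mktemp)
printf 'subjectAltName=DNS:*.eve.gateway.test,DNS:eve-guard.eve.gateway.test\n' > "$san_cfg"
openssl x509 -req -in eve-local.csr -CA eve-ca.crt -CAkey eve-ca.key \
  -CAcreateserial -days 365 -out eve-local.crt -extfile "$san_cfg"
# 4. Verify the chain before creating the Kubernetes secret
openssl verify -CAfile eve-ca.crt eve-local.crt
```

The resulting eve-local.crt / eve-local.key pair feeds the kubectl create secret tls command above.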

5. BYOC (bring your own certificate)

Use a corporate CA or purchased certificate. No cert-manager needed.

Option A: Existing TLS secret

kubectl create secret tls my-tls-secret --cert=cert.pem --key=key.pem -n eve-guard
./install-eve.sh \
  --tls-strategy byoc \
  --byoc-secret my-tls-secret \
  --domain gateway.acme-corp.com \
  --namespace eve-guard \
  --openai-key "$OPENAI_KEY" \
  --ghcr-username "$GHCR_USERNAME" \
  --ghcr-token "$GHCR_TOKEN"

Option B: Inline PEM in values

Put PEM in values (or use --set-file). The chart creates the TLS secret from tls.byoc.cert and tls.byoc.key. Prefer Option A for security.


6. External TLS

TLS is terminated outside Kong — by the cloud load balancer, an ingress controller, or a service mesh. No cert-manager is needed.

AWS (EKS) — NLB + ACM certificate

This is the fastest path to HTTPS on EKS: the NLB terminates TLS using an AWS Certificate Manager (ACM) certificate, and Kong receives decrypted HTTP traffic on port 8000.

./install-eve.sh \
  --tls-strategy external \
  --acm-cert-arn arn:aws:acm:us-west-2:ACCOUNT:certificate/CERT_ID \
  --namespace eve-guard \
  --openai-key "$OPENAI_KEY" \
  --ghcr-username "$GHCR_USERNAME" \
  --ghcr-token "$GHCR_TOKEN"

Domain is optional. You can test immediately using the NLB hostname:

# HTTPS works right away (cert mismatch warning is expected on the NLB hostname)
curl -k https://a1b2c3...us-west-2.elb.amazonaws.com/health

When ready for production, add a CNAME record pointing your domain to the NLB hostname. The ACM certificate should cover that domain so clients see no warnings.

# With a custom domain
./install-eve.sh \
  --tls-strategy external \
  --acm-cert-arn arn:aws:acm:us-west-2:ACCOUNT:certificate/CERT_ID \
  --domain gateway.acme-corp.com \
  --namespace eve-guard \
  --openai-key "$OPENAI_KEY" \
  --ghcr-username "$GHCR_USERNAME" \
  --ghcr-token "$GHCR_TOKEN"

The installer sets these annotations on the Kong Service automatically:

Annotation Value
service.beta.kubernetes.io/aws-load-balancer-type nlb
service.beta.kubernetes.io/aws-load-balancer-ssl-cert <ACM ARN>
service.beta.kubernetes.io/aws-load-balancer-ssl-ports 443
service.beta.kubernetes.io/aws-load-balancer-backend-protocol http

Tip: Create a wildcard ACM cert (e.g. *.acme-corp.com) and reuse it for all Eve installs across namespaces.

GCP (GKE) — Google-managed certificate

./install-eve.sh \
  --tls-strategy external \
  --gcp-managed-cert my-cert-name \
  --domain gateway.acme-corp.com \
  --namespace eve-guard \
  --openai-key "$OPENAI_KEY" \
  --ghcr-username "$GHCR_USERNAME" \
  --ghcr-token "$GHCR_TOKEN"

Azure (AKS) — Application Gateway / Front Door

Use Azure Application Gateway or Azure Front Door to terminate TLS with an Azure-managed certificate. Kong listens on HTTP; point the gateway to the Kong service on port 80.

./install-eve.sh \
  --tls-strategy external \
  --domain gateway.acme-corp.com \
  --namespace eve-guard \
  --openai-key "$OPENAI_KEY" \
  --ghcr-username "$GHCR_USERNAME" \
  --ghcr-token "$GHCR_TOKEN"

HTTP-only mode (Istio, F5, custom ingress)

If TLS is handled entirely by your infrastructure (Istio sidecar, F5 load balancer, NGINX ingress with cert), Kong listens on HTTP only (port 80). No ACM ARN needed:

./install-eve.sh \
  --tls-strategy external \
  --domain gateway.acme-corp.com \
  --namespace eve-guard \
  --openai-key "$OPENAI_KEY" \
  --ghcr-username "$GHCR_USERNAME" \
  --ghcr-token "$GHCR_TOKEN"

Kong is configured with tls.enabled: false and httpPort: 80. Point your TLS-terminating layer to the Kong service on port 80.


7. Cloud providers

The chart is cloud-agnostic. Kong is exposed as a LoadBalancer Service; each cloud provisions a load balancer automatically. The installer detects your cloud provider (EKS, GKE, AKS, TKG, Rancher) and configures the right annotations. Cloud-specific steps are: (a) how the LB is exposed and whether TLS is terminated at the LB (see External TLS), and (b) how the platform engineer sets up cert-manager with DNS credentials for a ClusterIssuer (see Supported issuers).

AWS (EKS)

LoadBalancer: EKS provisions a CLB or NLB. The Service returns a hostname (e.g. a1b2...elb.amazonaws.com), not an IP. Point your domain via CNAME or Route53 Alias (not an A record).

cert-manager with IRSA (platform engineer, one-time setup for Let’s Encrypt / ACME issuers):

  1. Create an IAM policy:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["route53:GetChange"],
          "Resource": "arn:aws:route53:::change/*"
        },
        {
          "Effect": "Allow",
          "Action": ["route53:ChangeResourceRecordSets", "route53:ListResourceRecordSets"],
          "Resource": "arn:aws:route53:::hostedzone/YOUR_ZONE_ID"
        },
        {
          "Effect": "Allow",
          "Action": ["route53:ListHostedZonesByName"],
          "Resource": "*"
        }
      ]
    }
  2. Create an IAM role with a trust policy for the EKS OIDC provider and attach the policy above.

  3. Annotate the cert-manager service account (after Eve install, since cert-manager is a subchart):

    kubectl annotate serviceaccount eve-guard-cert-manager -n eve-guard \
      eks.amazonaws.com/role-arn=arn:aws:iam::ACCOUNT_ID:role/cert-manager-route53 --overwrite
    kubectl rollout restart deployment eve-guard-cert-manager -n eve-guard
  4. Create a ClusterIssuer:

    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-prod
    spec:
      acme:
        server: https://acme-v02.api.letsencrypt.org/directory
        email: platform-team@acme-corp.com
        privateKeySecretRef:
          name: letsencrypt-prod-account-key
        solvers:
        - dns01:
            route53:
              region: us-east-1
              hostedZoneID: YOUR_ZONE_ID
  5. Install Eve using the tested issuer:

    ./install-eve.sh \
      --tls-strategy self-signed \
      --use-existing-issuer letsencrypt-prod \
      --domain gateway.acme-corp.com \
      --namespace eve-guard \
      --openai-key "$OPENAI_KEY" \
      --ghcr-username "$GHCR_USERNAME" \
      --ghcr-token "$GHCR_TOKEN"

NLB with ACM (recommended for most EKS deployments): Use --tls-strategy external --acm-cert-arn <ARN>. The NLB terminates TLS with an ACM certificate — HTTPS works immediately, no cert-manager needed. See External TLS — AWS for full details.

See also the official cert-manager Route53 docs and the EKS + Let’s Encrypt tutorial.

Post-deploy: kubectl get svc kong -n eve-guard -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' → create CNAME/Alias → curl -I https://gateway.acme-corp.com.

GCP (GKE)

LoadBalancer: GKE provisions an external TCP load balancer. The Service returns an IP address. Point your domain via an A record.

cert-manager with Workload Identity (platform engineer, one-time setup for Let’s Encrypt / ACME issuers):

  1. Create a GCP service account with roles/dns.admin on your Cloud DNS managed zone.

  2. Bind it to the cert-manager Kubernetes service account:

    gcloud iam service-accounts add-iam-policy-binding \
      cert-manager-dns@YOUR_PROJECT.iam.gserviceaccount.com \
      --role roles/iam.workloadIdentityUser \
      --member "serviceAccount:YOUR_PROJECT.svc.id.goog[eve-guard/eve-guard-cert-manager]"
  3. Annotate the cert-manager service account:

    kubectl annotate serviceaccount eve-guard-cert-manager -n eve-guard \
      iam.gke.io/gcp-service-account=cert-manager-dns@YOUR_PROJECT.iam.gserviceaccount.com --overwrite
    kubectl rollout restart deployment eve-guard-cert-manager -n eve-guard
  4. Create a ClusterIssuer:

    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-prod
    spec:
      acme:
        server: https://acme-v02.api.letsencrypt.org/directory
        email: platform-team@acme-corp.com
        privateKeySecretRef:
          name: letsencrypt-prod-account-key
        solvers:
        - dns01:
            cloudDNS:
              project: YOUR_PROJECT
  5. Install Eve with --use-existing-issuer letsencrypt-prod (same as AWS example above).

See also the official cert-manager Cloud DNS docs.

Google-managed certificates: Use --tls-strategy external --gcp-managed-cert <name> to let GKE managed certificates handle TLS at the load balancer level. See External TLS — GCP.

Post-deploy: kubectl get svc kong -n eve-guard -o jsonpath='{.status.loadBalancer.ingress[0].ip}' → create A record → test.

Azure (AKS)

LoadBalancer: AKS provisions an Azure Load Balancer. The Service returns an IP address. Point your domain via an A record.

cert-manager with Azure Workload Identity (platform engineer, one-time setup for Let’s Encrypt / ACME issuers):

  1. Create an Azure Managed Identity with DNS Zone Contributor role on your Azure DNS zone.

  2. Create a federated credential for the cert-manager service account:

    az identity federated-credential create \
      --name cert-manager-fedcred \
      --identity-name cert-manager-identity \
      --resource-group YOUR_RG \
      --issuer "$(az aks show -n YOUR_CLUSTER -g YOUR_RG --query oidcIssuerProfile.issuerUrl -o tsv)" \
      --subject "system:serviceaccount:eve-guard:eve-guard-cert-manager"
  3. Annotate the cert-manager service account:

    kubectl annotate serviceaccount eve-guard-cert-manager -n eve-guard \
      azure.workload.identity/client-id=YOUR_MANAGED_IDENTITY_CLIENT_ID --overwrite
    kubectl label serviceaccount eve-guard-cert-manager -n eve-guard \
      azure.workload.identity/use=true --overwrite
    kubectl rollout restart deployment eve-guard-cert-manager -n eve-guard
  4. Create a ClusterIssuer:

    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-prod
    spec:
      acme:
        server: https://acme-v02.api.letsencrypt.org/directory
        email: platform-team@acme-corp.com
        privateKeySecretRef:
          name: letsencrypt-prod-account-key
        solvers:
        - dns01:
            azureDNS:
              subscriptionID: YOUR_SUBSCRIPTION_ID
              resourceGroupName: YOUR_DNS_RG
              hostedZoneName: acme-corp.com
              environment: AzurePublicCloud
              managedIdentity:
                clientID: YOUR_MANAGED_IDENTITY_CLIENT_ID
  5. Install Eve with --use-existing-issuer letsencrypt-prod (same pattern).

See also the official cert-manager Azure DNS docs.

Application Gateway: Use --tls-strategy external if you want Azure Application Gateway or Front Door to terminate TLS. See External TLS — Azure.

Post-deploy: kubectl get svc kong -n eve-guard -o jsonpath='{.status.loadBalancer.ingress[0].ip}' → create A record → test.

On-premises / VMware / Rancher

Same chart. Use the cert-manager strategy (--tls-strategy self-signed) with a corporate CA issuer, or --use-existing-issuer with a pre-configured ClusterIssuer (Let’s Encrypt, Vault, etc.). The installer auto-detects VMware TKG/VKS and Rancher/RKE clusters.

Summary: post-deploy by cloud

Cloud LB returns DNS record type Get address
AWS hostname (...elb.amazonaws.com) CNAME or Route53 Alias kubectl get svc kong -n eve-guard -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
GCP IP address A record kubectl get svc kong -n eve-guard -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
Azure IP address A record kubectl get svc kong -n eve-guard -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
On-prem depends (MetalLB: IP, NodePort: node IP) A record kubectl get svc kong -n eve-guard
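The two jsonpath lookups in the table can be folded into one helper that returns whichever field the cloud populated (hostname on AWS, IP elsewhere) — a convenience sketch, not part of the installer:

```shell
# Returns the LB hostname (AWS) or IP (GCP/Azure/MetalLB), whichever is set.
lb_address() {
  kubectl get svc kong -n "${1:-eve-guard}" \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}{.status.loadBalancer.ingress[0].ip}'
}
```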

LoadBalancer schemes (internal / external)

The installer supports internal and external LoadBalancers on any platform. Use --lb-scheme internal or --lb-scheme external (default), or select interactively. On on-prem clusters you can also specify a MetalLB address pool with --metallb-pool <name>.

The installer sets all annotations simultaneously — each cloud’s LB controller reads only its own and ignores the rest, so the chart is fully portable.

| Platform | Scheme | Annotation | Value |
|---|---|---|---|
| AWS (EKS) | external | service.beta.kubernetes.io/aws-load-balancer-scheme | internet-facing |
| AWS (EKS) | internal | service.beta.kubernetes.io/aws-load-balancer-scheme | internal |
| AWS (EKS) | internal | service.beta.kubernetes.io/aws-load-balancer-internal | true |
| GCP (GKE) | external | (default, no annotation needed) | |
| GCP (GKE) | internal | networking.gke.io/load-balancer-type | Internal |
| Azure (AKS) | external | (default, no annotation needed) | |
| Azure (AKS) | internal | service.beta.kubernetes.io/azure-load-balancer-internal | true |
| VMware (TKG/VKS) | external | (default, no annotation needed) | |
| VMware (TKG/VKS) | internal | Use MetalLB pool or NSX-T AVI InfraSetting | see below |
| Rancher (RKE/RKE2) | external | (default, no annotation needed) | |
| Rancher (RKE/RKE2) | internal | Use MetalLB pool or kube-vip config | see below |
| MetalLB (any) | pool select | metallb.universe.tf/address-pool | <pool-name> |
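
What "all annotations simultaneously" looks like on the rendered Service, assembled as a sketch from the internal rows of the table above (each cloud's controller reads only its own key and ignores the rest):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kong
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-scheme: internal
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    networking.gke.io/load-balancer-type: Internal
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    metallb.universe.tf/address-pool: internal-pool   # only when --metallb-pool is set
spec:
  type: LoadBalancer
```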

On-prem internal/external: On VMware TKG, Rancher, and bare-metal clusters, internal vs. external typically depends on the LB controller (MetalLB address pool, kube-vip, NSX-T AVI) rather than a cloud annotation; see the table above.

CLI examples:

# External LB (default on all clouds)
./install-eve.sh --lb-scheme external ...

# Internal LB (works on AWS, GCP, Azure — annotations set automatically)
./install-eve.sh --lb-scheme internal ...

# On-prem with MetalLB pool
./install-eve.sh --lb-scheme internal --metallb-pool internal-pool ...

8. Manual install with Helm

If you prefer not to use the installer:

  1. Copy an example from examples/ and customize.

  2. Create namespace and GHCR pull secret:

    kubectl create namespace eve-guard
    kubectl create secret docker-registry ghcr-secret \
      --docker-server=ghcr.io \
      --docker-username=YOUR_GITHUB_USERNAME \
      --docker-password=YOUR_GHCR_TOKEN \
      -n eve-guard
  3. Install:

    cd traffic-orchestrator/k8s/helm/eve-installer
    helm dependency build .   # uses Chart.lock; or helm dependency update . if build fails / no lock
    helm upgrade --install eve-guard . \
      -f examples/values-self-signed.yaml \
      -n eve-guard --timeout 15m \
      --set tls.domain=gateway.acme-corp.com \
      --set eve-gateway.secrets.openaiApiKey=sk-...
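
kubectl stores those credentials inside the Secret's .dockerconfigjson as base64("username:password"). A quick sketch using the placeholder values above (real tokens obviously differ):

```shell
# Reproduce the "auth" field kubectl builds for ghcr.io.
auth=$(printf '%s:%s' "YOUR_GITHUB_USERNAME" "YOUR_GHCR_TOKEN" | base64 | tr -d '\n')
echo "$auth"

# Round-trip check: decoding must give back username:token.
printf '%s' "$auth" | base64 -d   # -> YOUR_GITHUB_USERNAME:YOUR_GHCR_TOKEN
```

Swapping username and token produces a syntactically valid but unusable secret, which is why pulls fail with authentication errors rather than an obvious misconfiguration.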

9. Uninstall

Remove the Helm release

helm uninstall eve-guard -n eve-guard

This removes all Kubernetes resources (pods, services, secrets, cert-manager subchart, hook resources). CRDs installed by cert-manager are kept (Helm never deletes CRDs).

Remove the namespace

kubectl delete namespace eve-guard

Remove the Eve CA from your trust store

If you used the cert-manager strategy with an installer-generated private CA (no --use-existing-issuer), the installer may have added the Eve CA to your system trust store. To remove it:

macOS:

security find-certificate -c "Eve Gateway CA" -a -Z /Library/Keychains/System.keychain \
  | grep "SHA-1" | awk '{print $NF}' \
  | while read hash; do sudo security delete-certificate -Z "$hash" /Library/Keychains/System.keychain; done

Linux (Debian/Ubuntu):

sudo rm /usr/local/share/ca-certificates/eve-local-ca.crt && sudo update-ca-certificates --fresh

Linux (RHEL/Fedora):

sudo rm /etc/pki/ca-trust/source/anchors/eve-local-ca.crt && sudo update-ca-trust

Windows (PowerShell as Admin):

Get-ChildItem Cert:\LocalMachine\Root | Where-Object { $_.Subject -match "Eve Gateway CA" } | Remove-Item

Clean up local files

rm -f eve-local-ca.crt eve-local-ca.key eve-local.crt eve-local.key eve-values-generated.yaml

10. Upgrade

Upgrade via curl (no repo needed)

Fetch the latest installer script from https://install.eve.security and run --upgrade directly:

curl -fsSL https://install.eve.security/install-eve.sh -o install-eve.sh
chmod +x install-eve.sh

# Check what version is available first
./install-eve.sh --check-upgrade --namespace eve-guard --ghcr-token "$GHCR_TOKEN"

# Upgrade to the latest published version
./install-eve.sh --upgrade --namespace eve-guard --ghcr-token "$GHCR_TOKEN"

Or pipe directly (non-interactive, skip the check):

curl -fsSL https://install.eve.security | bash -s -- \
  --upgrade --namespace eve-guard \
  --context "$KUBE_CONTEXT" \
  --ghcr-token "$GHCR_TOKEN"

Check for an available upgrade

Compare the deployed Helm release version against the latest published chart on GHCR (falls back to https://install.eve.security/VERSION when GHCR is unreachable):

./install-eve.sh --check-upgrade --namespace eve-guard --ghcr-token "$GHCR_TOKEN"

Sample output when an upgrade is available:

[INFO] Installed : 2.1.0  (release: eve-guard, namespace: eve-guard)
[INFO] Latest    : 2.3.0  (oci://ghcr.io/evesecurityinc/eve-installer)
[WARN] Upgrade available: 2.1.0 -> 2.3.0
[INFO] Run:  ./install-eve.sh --upgrade --namespace eve-guard

Already up to date:

[INFO] Already up to date (2.3.0).
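
The installed-vs-latest decision above is essentially a semver comparison. A minimal sketch (the installer's real logic may differ) using sort -V:

```shell
# True (exit 0) when latest is strictly newer than installed.
upgrade_available() {
  installed="$1"; latest="$2"
  [ "$installed" != "$latest" ] &&
    [ "$(printf '%s\n%s\n' "$installed" "$latest" | sort -V | tail -n1)" = "$latest" ]
}

upgrade_available 2.1.0 2.3.0 && echo "Upgrade available: 2.1.0 -> 2.3.0"
upgrade_available 2.3.0 2.3.0 || echo "Already up to date (2.3.0)."
```

sort -V orders version strings numerically per component, so 2.10.0 correctly sorts after 2.9.0 where a plain lexical sort would not.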

Upgrade to the latest version

Upgrades the existing Helm release in-place using helm upgrade --install --reuse-values. All existing values (secrets, TLS config, domain) are preserved automatically.

./install-eve.sh --upgrade --namespace eve-guard --ghcr-token "$GHCR_TOKEN"

In interactive mode (no --context flag), you will be prompted to select the Kubernetes cluster context. To skip the prompt:

./install-eve.sh --upgrade \
  --context arn:aws:eks:us-west-2:123456789012:cluster/prod \
  --namespace eve-guard \
  --ghcr-token "$GHCR_TOKEN"

Pin a specific chart version (upgrade or downgrade)

# Upgrade to a specific version
./install-eve.sh --upgrade --chart-version 2.3.0 \
  --namespace eve-guard --ghcr-token "$GHCR_TOKEN"

# Downgrade to a previous version
./install-eve.sh --upgrade --chart-version 2.1.0 \
  --namespace eve-guard --ghcr-token "$GHCR_TOKEN"

Downgrade note: --reuse-values works for downgrades but may fail if the older chart removed a value your current values.yaml references. If downgrade fails, use helm rollback instead (see below).

Non-interactive upgrade (CI / runbooks)

./install-eve.sh --upgrade --non-interactive \
  --context arn:aws:eks:us-west-2:123456789012:cluster/prod \
  --namespace eve-guard \
  --ghcr-username "$GHCR_USERNAME" \
  --ghcr-token    "$GHCR_TOKEN"

Rollback without the installer

# List all revisions
helm history eve-guard -n eve-guard

# Roll back to a specific revision
helm rollback eve-guard <REVISION> -n eve-guard --wait --timeout=10m

11. Verify and troubleshoot

Verify deployment

kubectl get pods -n eve-guard
kubectl get svc kong kong-tls -n eve-guard
kubectl get certificate -n eve-guard   # cert-manager strategy only

Kong: svc/kong vs svc/kong-tls (Kong terminates TLS)

When eve-gateway.kong.tls.enabled is true, the chart creates two Services:

| Service | Role | Typical use |
|---|---|---|
| kong-tls | LoadBalancer, port 443 -> kong.tls.port on the pod (default 8443). In the eve-gateway chart, AWS NLB annotations on this Service are fixed (internet-facing + NLB) unless you override the chart. --lb-scheme internal applies to svc/kong when it is the public LB (e.g. external TLS + ACM), not to kong-tls today. | Public HTTPS URL for cert-manager / BYOC leaf certs. |
| kong | The installer sets eve-gateway.kong.service.type: ClusterIP when you use cert-manager / BYOC + LoadBalancer expose mode, so HTTPS from the internet goes to kong-tls, not cleartext :9000 on a public kong LB. In-cluster gateway.port and tls-proxy remain on this Service. HPA targets the Kong Deployment, not Services. | In-cluster traffic and debugging. |

To expose a public LoadBalancer for kong (e.g. port 9000) as well as kong-tls, set eve-gateway.kong.service.type: LoadBalancer in your values (not the installer default for that mode).

EKS + ACM (external TLS) uses kong only (NLB terminates TLS; Kong often on HTTP :8000). There is no separate TLS path on Kong in that mode, and kong-tls is not created when kong.tls.enabled is false.

Why not bind port 443 inside the Kong container? Non-root containers cannot listen on ports < 1024 on Linux. Splitting LB port 443 (Service) from container port 8443 (Kong) fixes Permission denied on bind.
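
The 443-to-8443 split described above looks roughly like this on the kong-tls Service (a sketch; the chart's exact field layout may differ):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kong-tls
spec:
  type: LoadBalancer
  ports:
    - name: https
      port: 443         # what clients dial on the LB
      targetPort: 8443  # kong.tls.port in the container; non-root can bind > 1024
```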

failed post-install: timed out waiting for the condition

Often this is not “the cluster is slow” but the wait hook Job failing and retrying (e.g. RBAC blocked kubectl wait, or the webhook Deployment never became ready). A large backoffLimit makes that feel like a long hang until Helm times out.

The chart sets helm.sh/hook-timeout: ~660s on post-install hooks; the installer uses helm ... --timeout 15m. Check the wait Job logs first (below).

  1. Logs from the wait Job (replace eve-guard with your release if different):

    kubectl logs -n eve-guard -l job-name=eve-guard-wait-cert-manager --tail=200
  2. Webhook deployment / pulls:

    kubectl get deployments,pods -n eve-guard | grep -E 'cert-manager|webhook'
  3. Retry after fixing ImagePullBackOff or RBAC; or uninstall and reinstall with the updated chart.
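
The wait-hook behavior described above amounts to a bounded retry loop. A hypothetical sketch (the chart's actual hook script may differ):

```shell
# Retry a command up to N times, one second apart; fail once attempts run out.
wait_for() {
  tries=$1; shift
  i=0
  until "$@"; do
    i=$((i+1))
    [ "$i" -ge "$tries" ] && return 1
    sleep 1
  done
}

# Stand-in for: kubectl wait --for=condition=Available deploy/<webhook>
wait_for 3 true && echo "webhook ready"
```

If the wrapped command can never succeed (RBAC denied, webhook image unpullable), the loop burns the whole budget before failing, which is why the symptom is a long hang followed by a Helm timeout.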

cainjector logs: certificates.cert-manager.io CRD not found (pods up, CRDs missing)

With crds.enabled: false, the Jetstack chart deploys the controller / cainjector / webhook pods even when the CRDs were never installed separately. cainjector then loops on:

cainjector has been configured to watch certificates, but certificates.cert-manager.io CRD not found

and may exit with code 124 after its internal startup timeout — which breaks the post-install wait Job and leaves Kong waiting on a TLS Secret that never gets issued.

Fix: This umbrella keeps cert-manager.crds.enabled: false in Helm (CRDs are not tied to the Eve release). install-eve.sh applies the cert-manager CRDs with kubectl apply before Helm when certManager.install: true and the CRDs are missing. Confirm:

kubectl get crd certificates.cert-manager.io

Manual Helm only: apply the CRDs from the same chart version as your lockfile (see the cert-manager installation docs), then helm upgrade --install again.

Already failed release: after CRDs exist, helm upgrade --install the same release, or uninstall and reinstall.

UPGRADE FAILED: post-upgrade hooks — no matches for kind "Issuer" (cert-manager.io/v1)

Helm may report something like: unable to build kubernetes object for deleting hook … eve-ca-issuer … no matches for kind “Issuer” in version “cert-manager.io/v1” — ensure CRDs are installed first.

That happens when the cluster does not have cert-manager CRDs (or they were never applied) but a previous run already created hook-managed Issuer/Certificate objects in Helm’s release history. On upgrade, Helm tries to delete those hook resources; the Kubernetes API cannot map Issuer until the CRDs exist.

Fix (same root cause as cainjector “CRD not found” above):

  1. Ensure cert-manager CRDs exist on the cluster (installer applies them pre-Helm when certManager.install: true and kubectl get crd certificates.cert-manager.io fails).
  2. Re-run install-eve.sh or helm upgrade --install after CRDs are present.
  3. Confirm: kubectl get crd certificates.cert-manager.io
  4. Manual Helm only: follow the cert-manager docs on installing CRDs with Helm, or render/apply the CRDs from the same chart version as your lockfile, then helm upgrade --install again.

UPGRADE FAILED: CRD exists and cannot be imported into the current release (Helm ownership)

Older Eve umbrella builds installed cert-manager with cert-manager.crds.enabled: true, so Helm tried to own cluster-scoped CRDs. Only one release name can own them; a second namespace or a reinstall with a different release name then failed with invalid ownership metadata.

Current behavior: the chart sets cert-manager.crds.enabled: false. CRDs are applied with kubectl apply (not as Helm resources), so new installs do not bind CRDs to your release name or namespace. Stale meta.helm.sh/* left on CRDs from older installs is harmless for Helm, but confusing in kubectl describe.
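
What that stale ownership metadata looks like on a cert-manager CRD (illustrative; the release name matches the manual cleanup example below):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: certificates.cert-manager.io
  annotations:
    meta.helm.sh/release-name: eve-guard-victor       # release may no longer exist
    meta.helm.sh/release-namespace: eve-guard-victor
  labels:
    app.kubernetes.io/managed-by: Helm
```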

Cleanup stale CRD Helm labels: when you run install-eve.sh with certManager.install: true, the script removes meta.helm.sh/release-name, meta.helm.sh/release-namespace, and common Helm labels from cert-manager CRDs only if the annotated release no longer exists (helm status fails). If a live Helm release still owns those CRDs (dedicated cert-manager install), the script does not strip them — use --no-bundle-cert-manager and --use-existing-issuer, or uninstall the other release first.

Manual one-liner (after you confirm helm status eve-guard-victor -n eve-guard-victor fails — release really gone):

for c in challenges.acme.cert-manager.io orders.acme.cert-manager.io certificaterequests.cert-manager.io certificates.cert-manager.io clusterissuers.cert-manager.io issuers.cert-manager.io; do
  kubectl annotate crd "$c" meta.helm.sh/release-name- meta.helm.sh/release-namespace- 2>/dev/null
  kubectl label crd "$c" app.kubernetes.io/managed-by- app.kubernetes.io/instance- 2>/dev/null
done

Nuclear option (dev only): delete all cert-manager CRDs — wipes every Certificate / ClusterIssuer cluster-wide. Prefer stripping annotations or a dedicated cert-manager release instead.

cert-manager CRDs but no pods (partial uninstall)

If kubectl get crd certificates.cert-manager.io works but no cert-manager controller pods run anywhere, the old installer skipped installing cert-manager and Kong never got the leaf TLS Secret. Current install-eve.sh detects running controller pods (not just CRDs) and sets certManager.install: true when needed.

uninstall-eve.sh now removes this release’s cert-manager webhooks before and after helm uninstall and uses helm uninstall --wait for a cleaner teardown.

Kong stuck in ContainerCreating: secret "<release>-tls" not found

Kong mounts the leaf TLS Secret (e.g. eve-guard-tls) before cert-manager has created it. The umbrella chart creates the Certificate in a post-install hook (after the cert-manager webhook wait Job), so on first install the Secret may appear after the Kong Deployment is applied.
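
The stall comes from a Secret volume in the Kong pod spec, roughly like this sketch (field names illustrative): kubelet holds the pod in ContainerCreating until the Secret exists.

```yaml
volumes:
  - name: tls
    secret:
      secretName: eve-guard-tls   # created by cert-manager via the post-install hook
      # no `optional: true`, so the pod cannot start without it
```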

Quick checks:

kubectl describe pod -n eve-guard -l app.kubernetes.io/component=kong | sed -n '/Events:/,$p'
kubectl get secret -n eve-guard eve-guard-tls eve-ca-keypair 2>/dev/null || true
kubectl get certificate,certificaterequest -n eve-guard
kubectl describe issuer -n eve-guard eve-ca-issuer 2>/dev/null || true

Certificate not ready

  1. CA secret for the namespace Issuer (cert-manager strategy without --use-existing-issuer): the chart’s CA Issuer reads eve-ca-keypair. The installer creates it before Helm; if you ran helm install alone, create it (or re-run the installer’s CA step):

    kubectl get secret eve-ca-keypair -n eve-guard -o name || echo "missing eve-ca-keypair — Issuer cannot sign"
  2. Describe the Certificate and check status / events:

    kubectl describe certificate -n eve-guard
    kubectl get certificaterequest -n eve-guard
    kubectl describe certificaterequest -n eve-guard  # pick the one for your Certificate
  3. Namespace Issuer (default private-CA path for cert-manager strategy):

    kubectl describe issuer -n eve-guard eve-ca-issuer
  4. cert-manager logs (subchart runs in release namespace):

    kubectl logs -n eve-guard -l app.kubernetes.io/name=cert-manager
  5. Existing issuer: If using --use-existing-issuer, verify the issuer is healthy: kubectl describe clusterissuer <name> or, for a namespaced Issuer, kubectl describe issuer -n <namespace> <name>.

ImagePullBackOff / ErrImagePull (Kong, orchestrator)

  1. Confirm the secret exists: kubectl get secret ghcr-secret -n eve-guard

  2. Env vars: GHCR_USERNAME must be your GitHub username (or token for PAT-only flows); GHCR_TOKEN must be the PAT (ghp_...). If you swap them, the pull secret is wrong — kubectl delete secret ghcr-secret -n eve-guard and re-run the installer with the correct exports.

  3. Recreate if missing:

    kubectl create secret docker-registry ghcr-secret \
      --docker-server=ghcr.io --docker-username=YOUR_GITHUB_USERNAME --docker-password=YOUR_TOKEN -n eve-guard
  4. Upgrade: helm upgrade eve-guard . -f eve-values-generated.yaml -n eve-guard --timeout 15m

  5. Restart pods: kubectl rollout restart deployment/kong deployment/orchestrator -n eve-guard
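
A hedged sanity check for step 2 (this helper is an illustration, not part of the installer; the ghp_ prefix covers classic PATs, while fine-grained tokens start with github_pat_):

```shell
# Warn when GHCR_TOKEN does not look like a GitHub PAT (likely swapped env vars).
check_ghcr_env() {
  case "$GHCR_TOKEN" in
    ghp_*|github_pat_*) echo "GHCR_TOKEN looks like a PAT" ;;
    *) echo "GHCR_TOKEN does not look like a PAT (swapped with GHCR_USERNAME?)" >&2
       return 1 ;;
  esac
}

GHCR_TOKEN="ghp_exampletoken123"
check_ghcr_env
```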

CPU architecture (Mac, Windows, Linux, EKS)

cert-manager (private CA): CA trust store (install / remove / share)

The installer prints trust commands for all platforms after a successful install. Manual commands from the chart directory:

macOS — install:

sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain eve-local-ca.crt

macOS — remove:

security find-certificate -c "Eve Gateway CA" -a -Z /Library/Keychains/System.keychain \
  | grep "SHA-1" | awk '{print $NF}' \
  | while read hash; do sudo security delete-certificate -Z "$hash" /Library/Keychains/System.keychain; done

Linux (Debian/Ubuntu) — install:

sudo cp eve-local-ca.crt /usr/local/share/ca-certificates/eve-local-ca.crt && sudo update-ca-certificates

Linux (Debian/Ubuntu) — remove:

sudo rm /usr/local/share/ca-certificates/eve-local-ca.crt && sudo update-ca-certificates --fresh

Linux (RHEL/Fedora) — install:

sudo cp eve-local-ca.crt /etc/pki/ca-trust/source/anchors/eve-local-ca.crt && sudo update-ca-trust

Linux (RHEL/Fedora) — remove:

sudo rm /etc/pki/ca-trust/source/anchors/eve-local-ca.crt && sudo update-ca-trust

Windows (PowerShell as Admin) — install:

Import-Certificate -FilePath "eve-local-ca.crt" -CertStoreLocation Cert:\LocalMachine\Root

Windows (PowerShell as Admin) — remove:

Get-ChildItem Cert:\LocalMachine\Root | Where-Object { $_.Subject -match "Eve Gateway CA" } | Remove-Item

Share with another engineer: Send them eve-local-ca.crt and the install command for their OS.

cert-manager: browser still warns or wrong host

Kong not receiving traffic

“Namespace exists and cannot be imported into the current release”

The installer patches the namespace with Helm metadata. If it still fails:

kubectl label namespace eve-guard app.kubernetes.io/managed-by=Helm --overwrite
kubectl annotate namespace eve-guard meta.helm.sh/release-name=eve-guard meta.helm.sh/release-namespace=eve-guard --overwrite
helm upgrade --install eve-guard . -f eve-values-generated.yaml -n eve-guard --timeout 15m

Security: shared clusters and secrets