Deploy Eve Gateway with TLS using the umbrella chart. All traffic is always encrypted; you choose how TLS is provided.

Preflight: kubectl cluster-info must succeed (ensure KUBECONFIG is set if using a custom config), and your GHCR token needs read:packages (with org SSO authorized if your GitHub org requires it) so the installer can create the namespace image-pull secret.
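A quick preflight sketch (standard kubectl/Helm/docker commands; adjust context and credentials to your environment):

kubectl cluster-info                      # cluster reachable?
kubectl config current-context            # confirm the target context
helm version --short                      # Helm 3 expected
# verify the GHCR token actually has read:packages
echo "$GHCR_TOKEN" | docker login ghcr.io -u "$GHCR_USERNAME" --password-stdin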
There are three ways to provide TLS for the gateway:
| # | Strategy | Description |
|---|---|---|
| 1 | cert-manager | cert-manager requests and renews the Kong leaf certificate (the umbrella chart usually installs the cert-manager subchart; use --no-bundle-cert-manager when the cluster already runs cert-manager). Either the installer creates a private CA and a namespace Issuer, or you use --use-existing-issuer with any Issuer / ClusterIssuer (Let’s Encrypt, Vault, corporate CA, etc.). Suitable for any environment: use a publicly trusted issuer when clients rely on default OS/browser trust, or a private CA when you control trust distribution (internal fleets, lab, air-gapped, custom roots). |
| 2 | BYOC | Bring your own certificate: existing Kubernetes TLS Secret or inline PEM. You handle renewal. No cert-manager. |
| 3 | external | TLS terminates outside Kong (e.g. AWS NLB + ACM, Istio, ingress). Kong may listen on HTTP only. No cert-manager. |
install-eve.sh / Helm values: strategy 1 is selected with --tls-strategy self-signed and tls.strategy: self-signed (legacy name for the cert-manager path). The interactive installer labels this option cert-manager.
Tip — existing issuer (cert-manager strategy): A platform engineer can install cert-manager and a ClusterIssuer once, test it, and then every Eve install uses --use-existing-issuer <name>, so DNS credentials / cloud IAM stay out of each app deploy. Clients get publicly trusted certs when the issuer chains to a public CA.
Same as the cert-manager strategy with
--use-existing-issuer: no
installer-generated CA; cert-manager issues the leaf using your
pre-defined issuer.
./install-eve.sh \
--tls-strategy self-signed \
--use-existing-issuer letsencrypt-prod \
--domain gateway.acme-corp.com \
--namespace eve-guard \
--openai-key "$OPENAI_KEY" \
--ghcr-username "$GHCR_USERNAME" \
--ghcr-token "$GHCR_TOKEN"The installer lists existing ClusterIssuers and namespace-scoped Issuers during setup so you can pick one interactively.
Eve works with any cert-manager Issuer or
ClusterIssuer. Set it up following the official docs, then pass the name
to --use-existing-issuer:
| Issuer type | Official documentation |
|---|---|
| ACME / Let’s Encrypt (general) | https://cert-manager.io/docs/configuration/acme/ |
| DNS-01 challenge solvers | https://cert-manager.io/docs/configuration/acme/dns01/ |
| Route53 (AWS) | https://cert-manager.io/docs/configuration/acme/dns01/route53/ |
| Cloudflare | https://cert-manager.io/docs/configuration/acme/dns01/cloudflare/ |
| Cloud DNS (GCP) | https://cert-manager.io/docs/configuration/acme/dns01/google/ |
| Azure DNS | https://cert-manager.io/docs/configuration/acme/dns01/azuredns/ |
| HashiCorp Vault | https://cert-manager.io/docs/configuration/vault/ |
| Venafi | https://cert-manager.io/docs/configuration/venafi/ |
| AWS Private CA (external issuer) | https://cert-manager.io/docs/configuration/external/ |
| All supported issuers | https://cert-manager.io/docs/configuration/ |
| AWS EKS + Let’s Encrypt tutorial | https://cert-manager.io/docs/tutorials/getting-started-aws-letsencrypt/ |
See Section 7: Cloud providers for cloud-specific setup guides (IRSA for AWS, Workload Identity for GCP/Azure).
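Before pointing an install at an issuer, it helps to confirm it reports Ready — a sketch using the letsencrypt-prod name from the examples in this guide:

kubectl get clusterissuer letsencrypt-prod
kubectl wait --for=condition=Ready clusterissuer/letsencrypt-prod --timeout=120s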
The installer is published at
https://install.eve.security. You do not need to clone the
repo — curl the script directly and pass all flags on the command
line.
One-liner (pipe to bash):
curl -fsSL https://install.eve.security | bash -s -- \
--non-interactive \
--context "$KUBE_CONTEXT" \
--namespace eve-guard \
--tls-strategy self-signed \
--ghcr-username "$GHCR_USERNAME" \
--ghcr-token "$GHCR_TOKEN" \
--openai-key "$OPENAI_KEY" \
--database-url "$DATABASE_URL" \
--database-key "$DATABASE_KEY" \
--database-email "$DATABASE_EMAIL" \
--database-password "$DATABASE_PASSWORD" \
--database-tenant-id "$DATABASE_TENANT_ID"Download first, inspect, then run (recommended):
curl -fsSL https://install.eve.security/install-eve.sh -o install-eve.sh
chmod +x install-eve.sh
./install-eve.sh --help # inspect available flags
./install-eve.sh --context "$KUBE_CONTEXT" --namespace eve-guard ...

Pin to a specific version:
curl -fsSL https://install.eve.security/v2.3.0/install-eve.sh -o install-eve.sh
chmod +x install-eve.sh
./install-eve.sh --chart-version 2.3.0 ...

Uninstaller:
curl -fsSL https://install.eve.security/uninstall-eve.sh -o uninstall-eve.sh
chmod +x uninstall-eve.sh
./uninstall-eve.sh --namespace eve-guard

When run from a directory without a local eve-installer/Chart.yaml, the script automatically switches to OCI mode and pulls the chart from oci://ghcr.io/evesecurityinc/eve-installer. --ghcr-username and --ghcr-token are required in that mode for both the Helm pull and the image pull secret.
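For reference, a sketch of what OCI mode does under the hood (standard Helm OCI commands; the version number is an example):

echo "$GHCR_TOKEN" | helm registry login ghcr.io -u "$GHCR_USERNAME" --password-stdin
helm pull oci://ghcr.io/evesecurityinc/eve-installer --version 2.3.0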
Omit --non-interactive so the script
prompts for cluster, TLS strategy, namespace, /etc/hosts,
CA trust, etc. Fill in the exports, then run (from the
eve-installer directory or with
CHART_PATH set, or add OCI flags as in Script only + chart from
GHCR):
export OPENAI_KEY=''
export DATABASE_URL=''
export DATABASE_KEY=''
export DATABASE_EMAIL=''
export DATABASE_PASSWORD=''
export DATABASE_TENANT_ID=''
export GHCR_USERNAME=''
export GHCR_TOKEN=''
./install-eve.sh --openai-key "$OPENAI_KEY" \
--database-url "$DATABASE_URL" \
--database-key "$DATABASE_KEY" \
--database-email "$DATABASE_EMAIL" \
--database-password "$DATABASE_PASSWORD" \
--database-tenant-id "$DATABASE_TENANT_ID" \
--ghcr-username "$GHCR_USERNAME" \
--ghcr-token "$GHCR_TOKEN"The installer still reads OPENAI_KEY,
DATABASE_*,
GHCR_* if set, even without passing flags;
passing --openai-key /
--database-* /
--ghcr-* as above keeps the command
explicit and works well in copy-paste runbooks.
If you run the script from the eve-installer folder
(or set CHART_PATH to that folder) so
Chart.yaml with name: eve-installer is
present, the installer uses the local chart and syncs
subcharts with helm dependency build when
Chart.lock is present (reproducible, and
avoids refreshing unrelated helm repo entries on your
machine). If there is no lock or build fails, it falls back to
helm dependency update.
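A sketch of that dependency logic, if you want to reproduce it by hand from the chart directory:

helm dependency build . || helm dependency update .   # lockfile first, update as fallback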
cd traffic-orchestrator/k8s/helm/eve-installer
./install-eve.sh

Use this when install-eve.sh is not
next to a local eve-installer/Chart.yaml (e.g. you copied
only the script). Helm then installs from
eve-installer on GHCR. You need GHCR auth
(--ghcr-username / --ghcr-token) for the image
pull secret and for helm pull against OCI.
Custom OCI registry + chart version (fork or your org’s GHCR):
./install-eve.sh --non-interactive \
--chart-oci oci://ghcr.io/yourorg/eve-installer \
--chart-version 2.2.0 \
--ghcr-username YOUR_USER --ghcr-token ghp_... \
--namespace eve-guard --release-name eve-guard \
--tls-strategy self-signed \
--openai-key "$OPENAI_KEY" \
...other required flags...

Add whatever else your environment needs (see
./install-eve.sh --help and the flag table below), for
example: --context (kube context),
--gateway-domain or
--domain,
--use-existing-issuer, and database
backend --database-* flags or
DATABASE_* env vars.
Default OCI is
oci://ghcr.io/evesecurityinc/eve-installer — then you only
need --chart-version (and the same GHCR /
cluster / TLS / secrets flags):
./install-eve.sh --non-interactive --chart-version 2.2.0 \
--ghcr-username YOUR_USER --ghcr-token ghp_... \
...other flags...

Equivalent env vars:
EVE_INSTALLER_CHART_OCI,
EVE_INSTALLER_CHART_VERSION. Chart is
published by CI — see HELM_OCI_GHCR.md.
macOS is fully supported (and Linux/Windows): the
script runs Helm and kubectl on your machine; workload
images are pulled by nodes using the pull secret from your GHCR
token.
The installer will:

- Select the TLS strategy: cert-manager (flag self-signed) / BYOC / external.
- Collect secrets: OpenAI key, database backend (--database-* or DATABASE_* env vars), GHCR token.
- Write eve-values-generated.yaml.
- Run helm upgrade --install eve-guard . -f eve-values-generated.yaml.
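To preview the generated manifests without applying anything, --dry-run (a real flag; see the flag table below) can be combined with the same inputs — a sketch:

./install-eve.sh --dry-run --namespace eve-guard --tls-strategy self-signed ...other flags...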
From OCI: same release/values against oci://…/eve-installer. cert-manager and hooks behave the same.

Multiple namespaces: cert-manager CRDs are
shared cluster-wide; only one controller
install should serve the cluster. Prefer
--no-bundle-cert-manager plus
--use-existing-issuer for additional Eve
namespaces after the first (or use a platform-managed cert-manager).
Uninstalling the release that bundled cert-manager removes that
controller unless you run cert-manager as its own Helm release.
install-eve.sh CLI flags

Same list as ./install-eve.sh --help. Boolean flags take no value (see the second table).
| Flag | Argument | Description |
|---|---|---|
| --context | name | kubectl context to use |
| --namespace | ns | Kubernetes namespace (default: eve-guard) |
| --release-name | name | Helm release name (default: same as --namespace) |
| --tls-strategy | self-signed \| byoc \| external | TLS mode: self-signed = cert-manager strategy (see §2); byoc / external as named |
| --domain | FQDN | TLS / gateway hostname (e.g. gateway.example.com) |
| --gateway-domain | host | Gateway hostname for the cert-manager strategy when not using --domain (default: <release-name>.eve.gateway.test) |
| --use-existing-issuer | name | cert-manager ClusterIssuer or Issuer name (skip local CA); a namespaced Issuer must live in the install namespace |
| --use-existing-issuer-kind | ClusterIssuer \| Issuer | issuerRef.kind for the leaf Certificate (default: ClusterIssuer if omitted) |
| --openai-key | key | OpenAI API key |
| --database-url | URL | Backend database API base URL (HTTPS) |
| --database-key | key | API key for database backend |
| --database-email | email | Auth email for database backend |
| --database-password | password | Auth password for database backend |
| --database-tenant-id | id | Tenant identifier for database backend |
| --ghcr-username | user | GitHub username for GHCR |
| --ghcr-token | token | GHCR token (read:packages) — pull secret + Helm OCI login |
| --chart-oci | URL | Umbrella chart OCI ref when not using a local chart |
| --chart-version | version | Chart version on GHCR; required for OCI + --non-interactive |
| --expose-mode | loadbalancer \| clusterip | How Kong is exposed |
| --lb-scheme | internal \| external | LoadBalancer visibility (when svc/kong is the public LB) |
| --metallb-pool | name | MetalLB IPAddressPool name (on-prem) |
| --acm-cert-arn | ARN | AWS ACM certificate (external TLS on EKS) |
| --gcp-managed-cert | name | GKE managed certificate resource name |
| --byoc-secret | name | Existing TLS Secret name (BYOC) |
| --byoc-cert | path | Path to TLS certificate PEM (BYOC) |
| --byoc-key | path | Path to TLS private key PEM (BYOC) |
Boolean / switch flags (no argument):
| Flag | Description |
|---|---|
| --non-interactive | No prompts; skips /etc/hosts and CA trust unless opted in below |
| --update-hosts | Allow sudo updates to /etc/hosts (typically with --non-interactive) |
| --no-update-hosts | Never modify /etc/hosts |
| --install-ca-trust | Install the Eve CA into the OS trust store via sudo (typically with --non-interactive) |
| --no-install-ca-trust | Skip automatic CA trust install |
| --skip-preflight | Skip kubectl/Helm preflight checks |
| --dry-run | Run helm install / upgrade with --dry-run only |
| --bundle-cert-manager | Always install the cert-manager subchart |
| --no-bundle-cert-manager | Never install the cert-manager subchart (cluster must already run it) |
| -h, --help | Print usage and exit |
Environment (non-interactive consent without flags):
EVE_UPDATE_HOSTS=yes|true|1,
EVE_INSTALL_CA_TRUST=yes|true|1.
Directory env (not a flag): set
CHART_PATH to the folder that contains the local
eve-installer Chart.yaml (default: directory
of install-eve.sh).
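Example — pointing CHART_PATH at a checkout (the path here is hypothetical):

CHART_PATH=/opt/src/traffic-orchestrator/k8s/helm/eve-installer ./install-eve.sh --namespace eve-guard ...other flags...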
| Variable | Purpose |
|---|---|
| CHART_PATH | Local umbrella chart directory |
| EVE_INSTALLER_CHART_OCI | OCI URL for eve-installer (default: oci://ghcr.io/evesecurityinc/eve-installer) |
| EVE_INSTALLER_CHART_VERSION | Chart version when pulling from OCI |
| NAMESPACE | Target namespace |
| RELEASE_NAME | Helm release name |
| CONTEXT | kubectl context |
| TLS_STRATEGY, DOMAIN, USE_EXISTING_ISSUER, USE_EXISTING_ISSUER_KIND, GATEWAY_DOMAIN | TLS / domain |
| OPENAI_KEY, DATABASE_URL, DATABASE_KEY, DATABASE_EMAIL, DATABASE_PASSWORD, DATABASE_TENANT_ID | App / DB secrets |
| GHCR_USERNAME, GHCR_TOKEN | GHCR auth |
| EXPOSE_MODE, LB_SCHEME, METALLB_POOL | Service exposure |
| ACM_CERT_ARN, GCP_MANAGED_CERT | External cloud TLS |
| BYOC_SECRET_NAME, BYOC_CERT_FILE, BYOC_KEY_FILE | BYOC material |
| EVE_UPDATE_HOSTS, EVE_INSTALL_CA_TRUST | Opt-in to privileged local actions under --non-interactive |
| GENERATED_VALUES | Filename for written values (default: eve-values-generated.yaml) |
For non-interactive runs, use --non-interactive plus the flags or env vars above.

Optional sudo (interactive installs)

Before changing your machine, the script asks (default is no for both):
| Prompt | What it does |
|---|---|
| Allow /etc/hosts updates? | Adds your gateway hostname → cluster or load balancer IP (needs sudo). |
| Install the Eve CA into the system trust store? | Trusts the installer-generated private CA so browsers accept https://… (needs sudo on macOS/Linux). Only relevant when not using --use-existing-issuer. |
With --non-interactive, neither runs
unless you opt in: --update-hosts,
--install-ca-trust, or environment
EVE_UPDATE_HOSTS=yes /
EVE_INSTALL_CA_TRUST=yes. Use
--no-update-hosts /
--no-install-ca-trust to force skip.
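Example of opting in via environment under --non-interactive (other flags elided; env var names from the table above):

EVE_UPDATE_HOSTS=yes EVE_INSTALL_CA_TRUST=yes ./install-eve.sh --non-interactive ...other flags...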
Pass everything with flags (including database params):
./install-eve.sh --non-interactive \
--context arn:aws:eks:us-west-2:123456789012:cluster/prod \
--namespace eve-guard \
--release-name eve-guard \
--tls-strategy self-signed \
--gateway-domain gateway.acme.internal \
--use-existing-issuer letsencrypt-prod \
--openai-key "$OPENAI_KEY" \
--database-url "https://xyzcompany.supabase.co" \
--database-key "$DATABASE_KEY" \
--database-email "service-account@acme.com" \
--database-password "$DATABASE_PASSWORD" \
--database-tenant-id "tenant-uuid" \
--ghcr-username "$GHCR_USERNAME" \
--ghcr-token "$GHCR_TOKEN" \
--expose-mode loadbalancer \
--lb-scheme external

Same thing with secrets from environment variables (less sensitive data in shell history):
export OPENAI_KEY='sk-...'
export DATABASE_URL='https://xyzcompany.supabase.co'
export DATABASE_KEY='sb_secret_...'
export DATABASE_EMAIL='service-account@acme.com'
export DATABASE_PASSWORD='...'
export DATABASE_TENANT_ID='tenant-uuid'
export GHCR_USERNAME='github-user'
export GHCR_TOKEN='ghp_...'
./install-eve.sh --non-interactive \
--context arn:aws:eks:us-west-2:123456789012:cluster/prod \
--namespace eve-guard \
--release-name eve-guard \
--tls-strategy self-signed \
--gateway-domain gateway.acme.internal \
--use-existing-issuer letsencrypt-prod \
--expose-mode loadbalancer \
--lb-scheme external

If running script-only + OCI (without local Chart.yaml),
pass --chart-version <version> in
non-interactive mode, and --chart-oci when
not using the default
oci://ghcr.io/evesecurityinc/eve-installer (see Script only + chart from
GHCR).
cert-manager strategy (--tls-strategy self-signed)

This is strategy 1 from §2. self-signed is only the CLI/Helm flag name; behavior is always cert-manager-driven TLS for Kong.
Default path: installer-generated private CA + leaf
cert renewal. Optional
--use-existing-issuer skips the private CA
and uses your Issuer / ClusterIssuer
instead. Works on any cluster (minikube, Docker Desktop, EKS,
on-prem).
./install-eve.sh \
--tls-strategy self-signed \
--namespace eve-guard \
--openai-key "$OPENAI_KEY" \
--ghcr-username "$GHCR_USERNAME" \
--ghcr-token "$GHCR_TOKEN"The installer generates a CA, creates a K8s secret with the CA keypair, and cert-manager issues a leaf cert with auto-renewal. After install, you are asked whether to install the CA in your trust store (default no); if you skip it, the script prints commands for macOS/Linux/Windows to run manually.
If a platform engineer already created a ClusterIssuer (Let’s Encrypt, Vault, corporate CA, etc.):
./install-eve.sh \
--tls-strategy self-signed \
--use-existing-issuer letsencrypt-prod \
--domain gateway.acme-corp.com \
--namespace eve-guard \
--openai-key "$OPENAI_KEY" \
--ghcr-username "$GHCR_USERNAME" \
--ghcr-token "$GHCR_TOKEN"No CA is generated, no trust store setup needed (if the issuer uses a publicly trusted or org-wide CA). cert-manager issues and renews the leaf cert using the existing issuer.
minikube: Start the tunnel before
./install-eve.sh (in a separate terminal,
leave it running). Without it, LoadBalancer services often
stay <pending> or only show a cluster-internal
address; with minikube tunnel active, minikube publishes
127.0.0.1 as the external IP. The
installer waits for that IP (up to ~90s) to add it to the gateway
certificate SANs—if the tunnel is not running, that step can fail and
you may need to start the tunnel and re-run the installer.
# Terminal 1 — keep running (may prompt for sudo)
minikube tunnel

# Terminal 2
./install-eve.sh \
--context minikube \
--namespace eve-guard \
--tls-strategy self-signed \
--openai-key "$OPENAI_KEY" \
--ghcr-username "$GHCR_USERNAME" \
--ghcr-token "$GHCR_TOKEN"Docker Desktop Kubernetes usually assigns a
reachable LoadBalancer address without minikube tunnel; use
the same ./install-eve.sh flags with --context
set to your Docker Desktop context.
Open https://eve-guard.eve.gateway.test (or the
gateway domain you chose). The default uses the reserved
.test TLD so it never conflicts with public DNS; if you
allow it when prompted, the installer updates
/etc/hosts (see Optional
sudo).
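To confirm the local setup worked (assumes the default gateway domain; -k is unnecessary once the CA is trusted):

grep eve.gateway.test /etc/hosts
curl -kI https://eve-guard.eve.gateway.test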
For local development without minikube tunnel or a cloud LB:

1. Add --expose-mode clusterip to the install command.
2. Run kubectl port-forward svc/kong 8443:443 -n eve-guard.
3. Open https://localhost:8443.

Or manually, with a locally generated certificate and the chart directly:

# Generate CA and wildcard cert (see install-eve.sh for full openssl commands)
kubectl create secret tls eve-guard-tls --cert=eve-local.crt --key=eve-local.key -n eve-guard
helm upgrade --install eve-guard . -f examples/values-self-signed.yaml -n eve-guard --timeout 15m \
--set eve-gateway.secrets.openaiApiKey=sk-...

Use a corporate CA or purchased certificate. No cert-manager needed.
kubectl create secret tls my-tls-secret --cert=cert.pem --key=key.pem -n eve-guard
./install-eve.sh \
--tls-strategy byoc \
--byoc-secret my-tls-secret \
--domain gateway.acme-corp.com \
--namespace eve-guard \
--openai-key "$OPENAI_KEY" \
--ghcr-username "$GHCR_USERNAME" \
--ghcr-token "$GHCR_TOKEN"Put PEM in values (or use --set-file). The chart creates
the TLS secret from tls.byoc.cert and
tls.byoc.key. Prefer Option A for security.
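Option B sketch using the installer's file flags (paths are placeholders; other required flags elided):

./install-eve.sh \
  --tls-strategy byoc \
  --byoc-cert ./cert.pem \
  --byoc-key ./key.pem \
  --domain gateway.acme-corp.com \
  --namespace eve-guard \
  ...other required flags...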
TLS is terminated outside Kong — by the cloud load balancer, an ingress controller, or a service mesh. No cert-manager is needed.
The fastest path to HTTPS on EKS. The NLB terminates TLS using an AWS Certificate Manager (ACM) certificate. Kong receives decrypted HTTP traffic on port 8000.
./install-eve.sh \
--tls-strategy external \
--acm-cert-arn arn:aws:acm:us-west-2:ACCOUNT:certificate/CERT_ID \
--namespace eve-guard \
--openai-key "$OPENAI_KEY" \
--ghcr-username "$GHCR_USERNAME" \
--ghcr-token "$GHCR_TOKEN"Domain is optional. You can test immediately using the NLB hostname:
# HTTPS works right away (cert mismatch warning is expected on the NLB hostname)
curl -k https://a1b2c3...us-west-2.elb.amazonaws.com/health

When ready for production, add a CNAME record pointing your domain to the NLB hostname. The ACM certificate should cover that domain so clients see no warnings.
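A sketch of the cutover check (reuses the /health probe above; the domain is an example):

NLB=$(kubectl get svc kong -n eve-guard -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "Create CNAME: gateway.acme-corp.com -> $NLB"
curl -I https://gateway.acme-corp.com/health   # after DNS propagates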
# With a custom domain
./install-eve.sh \
--tls-strategy external \
--acm-cert-arn arn:aws:acm:us-west-2:ACCOUNT:certificate/CERT_ID \
--domain gateway.acme-corp.com \
--namespace eve-guard \
--openai-key "$OPENAI_KEY" \
--ghcr-username "$GHCR_USERNAME" \
--ghcr-token "$GHCR_TOKEN"The installer sets these annotations on the Kong Service automatically:
| Annotation | Value |
|---|---|
| service.beta.kubernetes.io/aws-load-balancer-type | nlb |
| service.beta.kubernetes.io/aws-load-balancer-ssl-cert | <ACM ARN> |
| service.beta.kubernetes.io/aws-load-balancer-ssl-ports | 443 |
| service.beta.kubernetes.io/aws-load-balancer-backend-protocol | http |
Tip: Create a wildcard ACM cert (e.g.
*.acme-corp.com) and reuse it for all Eve installs across namespaces.
./install-eve.sh \
--tls-strategy external \
--gcp-managed-cert my-cert-name \
--domain gateway.acme-corp.com \
--namespace eve-guard \
--openai-key "$OPENAI_KEY" \
--ghcr-username "$GHCR_USERNAME" \
--ghcr-token "$GHCR_TOKEN"Use Azure Application Gateway or Azure Front Door to terminate TLS with an Azure-managed certificate. Kong listens on HTTP; point the gateway to the Kong service on port 80.
./install-eve.sh \
--tls-strategy external \
--domain gateway.acme-corp.com \
--namespace eve-guard \
--openai-key "$OPENAI_KEY" \
--ghcr-username "$GHCR_USERNAME" \
--ghcr-token "$GHCR_TOKEN"If TLS is handled entirely by your infrastructure (Istio sidecar, F5 load balancer, NGINX ingress with cert), Kong listens on HTTP only (port 80). No ACM ARN needed:
./install-eve.sh \
--tls-strategy external \
--domain gateway.acme-corp.com \
--namespace eve-guard \
--openai-key "$OPENAI_KEY" \
--ghcr-username "$GHCR_USERNAME" \
--ghcr-token "$GHCR_TOKEN"Kong: tls.enabled: false, httpPort: 80.
Point your TLS-terminating layer to the Kong service on port 80.
The chart is cloud-agnostic. Kong is exposed as a
LoadBalancer Service; each cloud provisions a load balancer
automatically. The installer detects your cloud provider (EKS, GKE, AKS,
TKG, Rancher) and configures the right annotations. Cloud-specific steps
are: (a) how the LB is exposed and whether TLS is terminated at the LB
(see External TLS), and (b) how the
platform engineer sets up cert-manager with DNS credentials for a
ClusterIssuer (see Supported
issuers).
LoadBalancer: EKS provisions a CLB or NLB. The
Service returns a hostname
(e.g. a1b2...elb.amazonaws.com), not an IP. Point your
domain via CNAME or Route53 Alias (not
an A record).
cert-manager with IRSA (platform engineer, one-time setup for Let’s Encrypt / ACME issuers):
Create an IAM policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["route53:GetChange"],
"Resource": "arn:aws:route53:::change/*"
},
{
"Effect": "Allow",
"Action": ["route53:ChangeResourceRecordSets", "route53:ListResourceRecordSets"],
"Resource": "arn:aws:route53:::hostedzone/YOUR_ZONE_ID"
},
{
"Effect": "Allow",
"Action": ["route53:ListHostedZonesByName"],
"Resource": "*"
}
]
}

Create an IAM role with a trust policy for the EKS OIDC provider and attach the policy above.
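For example, with the AWS CLI (a sketch — the policy/trust documents and names here are hypothetical; substitute your account ID and OIDC provider):

aws iam create-policy --policy-name cert-manager-route53 --policy-document file://route53-policy.json
aws iam create-role --role-name cert-manager-route53 --assume-role-policy-document file://oidc-trust.json
aws iam attach-role-policy --role-name cert-manager-route53 \
  --policy-arn arn:aws:iam::ACCOUNT_ID:policy/cert-manager-route53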
Annotate the cert-manager service account (after Eve install, since cert-manager is a subchart):
kubectl annotate serviceaccount eve-guard-cert-manager -n eve-guard \
eks.amazonaws.com/role-arn=arn:aws:iam::ACCOUNT_ID:role/cert-manager-route53 --overwrite
kubectl rollout restart deployment eve-guard-cert-manager -n eve-guard

Create a ClusterIssuer:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-prod
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: platform-team@acme-corp.com
privateKeySecretRef:
name: letsencrypt-prod-account-key
solvers:
- dns01:
route53:
region: us-east-1
hostedZoneID: YOUR_ZONE_ID

Install Eve using the tested issuer:
./install-eve.sh \
--tls-strategy self-signed \
--use-existing-issuer letsencrypt-prod \
--domain gateway.acme-corp.com \
--namespace eve-guard \
--openai-key "$OPENAI_KEY" \
--ghcr-username "$GHCR_USERNAME" \
--ghcr-token "$GHCR_TOKEN"NLB with ACM (recommended for most EKS deployments):
Use --tls-strategy external --acm-cert-arn <ARN>. The
NLB terminates TLS with an ACM certificate — HTTPS works immediately, no
cert-manager needed. See External TLS —
AWS for full details.
See also the official cert-manager Route53 docs and the EKS + Let’s Encrypt tutorial.
Post-deploy:
kubectl get svc kong -n eve-guard -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
→ create CNAME/Alias →
curl -I https://gateway.acme-corp.com.
LoadBalancer: GKE provisions an external TCP load balancer. The Service returns an IP address. Point your domain via an A record.
cert-manager with Workload Identity (platform engineer, one-time setup for Let’s Encrypt / ACME issuers):
Create a GCP service account with roles/dns.admin on
your Cloud DNS managed zone.
Bind it to the cert-manager Kubernetes service account:
gcloud iam service-accounts add-iam-policy-binding \
cert-manager-dns@YOUR_PROJECT.iam.gserviceaccount.com \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:YOUR_PROJECT.svc.id.goog[eve-guard/eve-guard-cert-manager]"Annotate the cert-manager service account:
kubectl annotate serviceaccount eve-guard-cert-manager -n eve-guard \
iam.gke.io/gcp-service-account=cert-manager-dns@YOUR_PROJECT.iam.gserviceaccount.com --overwrite
kubectl rollout restart deployment eve-guard-cert-manager -n eve-guard

Create a ClusterIssuer:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-prod
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: platform-team@acme-corp.com
privateKeySecretRef:
name: letsencrypt-prod-account-key
solvers:
- dns01:
cloudDNS:
project: YOUR_PROJECT

Install Eve with
--use-existing-issuer letsencrypt-prod (same as AWS example
above).
See also the official cert-manager Cloud DNS docs.
Google-managed certificates: Use
--tls-strategy external --gcp-managed-cert <name> to
let GKE managed certificates handle TLS at the load balancer level. See
External TLS —
GCP.
Post-deploy:
kubectl get svc kong -n eve-guard -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
→ create A record → test.
LoadBalancer: AKS provisions an Azure Load Balancer. The Service returns an IP address. Point your domain via an A record.
cert-manager with Azure Workload Identity (platform engineer, one-time setup for Let’s Encrypt / ACME issuers):
Create an Azure Managed Identity with
DNS Zone Contributor role on your Azure DNS zone.
Create a federated credential for the cert-manager service account:
az identity federated-credential create \
--name cert-manager-fedcred \
--identity-name cert-manager-identity \
--resource-group YOUR_RG \
--issuer "$(az aks show -n YOUR_CLUSTER -g YOUR_RG --query oidcIssuerProfile.issuerUrl -o tsv)" \
--subject "system:serviceaccount:eve-guard:eve-guard-cert-manager"Annotate the cert-manager service account:
kubectl annotate serviceaccount eve-guard-cert-manager -n eve-guard \
azure.workload.identity/client-id=YOUR_MANAGED_IDENTITY_CLIENT_ID --overwrite
kubectl label serviceaccount eve-guard-cert-manager -n eve-guard \
azure.workload.identity/use=true --overwrite
kubectl rollout restart deployment eve-guard-cert-manager -n eve-guard

Create a ClusterIssuer:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-prod
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: platform-team@acme-corp.com
privateKeySecretRef:
name: letsencrypt-prod-account-key
solvers:
- dns01:
azureDNS:
subscriptionID: YOUR_SUBSCRIPTION_ID
resourceGroupName: YOUR_DNS_RG
hostedZoneName: acme-corp.com
environment: AzurePublicCloud
managedIdentity:
clientID: YOUR_MANAGED_IDENTITY_CLIENT_ID

Install Eve with
--use-existing-issuer letsencrypt-prod (same
pattern).
See also the official cert-manager Azure DNS docs.
Application Gateway: Use
--tls-strategy external if you want Azure Application
Gateway or Front Door to terminate TLS. See External TLS —
Azure.
Post-deploy:
kubectl get svc kong -n eve-guard -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
→ create A record → test.
Same chart. Use the cert-manager strategy
(--tls-strategy self-signed) with a corporate CA issuer, or
--use-existing-issuer with a
pre-configured ClusterIssuer (Let’s Encrypt, Vault,
etc.). The installer auto-detects VMware TKG/VKS and Rancher/RKE
clusters.
- eve-gateway.kong.service.type: LoadBalancer (default). Use --lb-scheme internal|external and --metallb-pool <name> if needed — see LoadBalancer schemes.
- eve-gateway.kong.service.type: NodePort. Expose via DNS or an external load balancer.

| Cloud | LB returns | DNS record type | Get address |
|---|---|---|---|
| AWS | hostname (...elb.amazonaws.com) | CNAME or Route53 Alias | kubectl get svc kong -n eve-guard -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' |
| GCP | IP address | A record | kubectl get svc kong -n eve-guard -o jsonpath='{.status.loadBalancer.ingress[0].ip}' |
| Azure | IP address | A record | kubectl get svc kong -n eve-guard -o jsonpath='{.status.loadBalancer.ingress[0].ip}' |
| On-prem | depends (MetalLB: IP, NodePort: node IP) | A record | kubectl get svc kong -n eve-guard |
The installer supports internal and external LoadBalancers on any
platform. Use --lb-scheme internal or
--lb-scheme external (default), or select interactively. On
on-prem clusters you can also specify a MetalLB address pool with
--metallb-pool <name>.
The installer sets all annotations simultaneously — each cloud’s LB controller reads only its own and ignores the rest, so the chart is fully portable.
| Platform | Scheme | Annotation | Value |
|---|---|---|---|
| AWS (EKS) | external | service.beta.kubernetes.io/aws-load-balancer-scheme | internet-facing |
| AWS (EKS) | internal | service.beta.kubernetes.io/aws-load-balancer-scheme | internal |
| AWS (EKS) | internal | service.beta.kubernetes.io/aws-load-balancer-internal | true |
| GCP (GKE) | external | (default, no annotation needed) | — |
| GCP (GKE) | internal | networking.gke.io/load-balancer-type | Internal |
| Azure (AKS) | external | (default, no annotation needed) | — |
| Azure (AKS) | internal | service.beta.kubernetes.io/azure-load-balancer-internal | true |
| VMware (TKG/VKS) | external | (default, no annotation needed) | — |
| VMware (TKG/VKS) | internal | Use MetalLB pool or NSX-T AVI InfraSetting | see below |
| Rancher (RKE/RKE2) | external | (default, no annotation needed) | — |
| Rancher (RKE/RKE2) | internal | Use MetalLB pool or kube-vip config | see below |
| MetalLB (any) | pool select | metallb.universe.tf/address-pool | <pool-name> |
On-prem internal/external: On VMware TKG, Rancher, and bare-metal clusters, internal vs external typically depends on the LB controller:
MetalLB — Create separate
IPAddressPool resources for internal and external IP
ranges. Use --metallb-pool <name> to select the pool.
Example:
./install-eve.sh --lb-scheme internal --metallb-pool internal-vip-pool ...

NSX Advanced Load Balancer (AKO) —
Internal/external is controlled via AviInfraSetting CRDs
that map to different VIP networks. Configure AKO separately, then use
--metallb-pool or Helm values to pass any needed
annotations.
kube-vip — VIP is allocated on the node network. Internal/external depends on the network topology; no annotation toggle exists.
CLI examples:
# External LB (default on all clouds)
./install-eve.sh --lb-scheme external ...
# Internal LB (works on AWS, GCP, Azure — annotations set automatically)
./install-eve.sh --lb-scheme internal ...
# On-prem with MetalLB pool
./install-eve.sh --lb-scheme internal --metallb-pool internal-pool ...

If you prefer not to use the installer:
Copy an example from examples/ and
customize.
Create namespace and GHCR pull secret:
kubectl create namespace eve-guard
kubectl create secret docker-registry ghcr-secret \
--docker-server=ghcr.io \
--docker-username=YOUR_GITHUB_USERNAME \
--docker-password=YOUR_GHCR_TOKEN \
-n eve-guard

Install:
cd traffic-orchestrator/k8s/helm/eve-installer
helm dependency build . # uses Chart.lock; or helm dependency update . if build fails / no lock
helm upgrade --install eve-guard . \
-f examples/values-self-signed.yaml \
-n eve-guard --timeout 15m \
--set tls.domain=gateway.acme-corp.com \
--set eve-gateway.secrets.openaiApiKey=sk-...

Uninstall:

helm uninstall eve-guard -n eve-guard

This removes all Kubernetes resources (pods, services, secrets, cert-manager subchart, hook resources). CRDs installed by cert-manager are kept (Helm never deletes CRDs).
kubectl delete namespace eve-guard

If you used the cert-manager strategy
with an installer-generated private CA (no
--use-existing-issuer), the installer may have added the
Eve CA to your system trust store. To remove it:
macOS:
security find-certificate -c "Eve Gateway CA" -a -Z /Library/Keychains/System.keychain \
| grep "SHA-1" | awk '{print $NF}' \
| while read hash; do sudo security delete-certificate -Z "$hash" /Library/Keychains/System.keychain; done

Linux (Debian/Ubuntu):
sudo rm /usr/local/share/ca-certificates/eve-local-ca.crt && sudo update-ca-certificates --fresh

Linux (RHEL/Fedora):
sudo rm /etc/pki/ca-trust/source/anchors/eve-local-ca.crt && sudo update-ca-trust

Windows (PowerShell as Admin):
Get-ChildItem Cert:\LocalMachine\Root | Where-Object { $_.Subject -match "Eve Gateway CA" } | Remove-Item

Remove installer-generated local files:

rm -f eve-local-ca.crt eve-local-ca.key eve-local.crt eve-local.key eve-values-generated.yaml

Fetch the latest installer script from
https://install.eve.security and run --upgrade
directly:
curl -fsSL https://install.eve.security/install-eve.sh -o install-eve.sh
chmod +x install-eve.sh
# Check what version is available first
./install-eve.sh --check-upgrade --namespace eve-guard --ghcr-token "$GHCR_TOKEN"
# Upgrade to the latest published version
./install-eve.sh --upgrade --namespace eve-guard --ghcr-token "$GHCR_TOKEN"Or pipe directly (non-interactive, skip the check):
curl -fsSL https://install.eve.security | bash -s -- \
--upgrade --namespace eve-guard \
--context "$KUBE_CONTEXT" \
--ghcr-token "$GHCR_TOKEN"Compare the deployed Helm release version against the latest
published chart on GHCR (falls back to
https://install.eve.security/VERSION when GHCR is
unreachable):
./install-eve.sh --check-upgrade --namespace eve-guard --ghcr-token "$GHCR_TOKEN"

Sample output when an upgrade is available:
[INFO] Installed : 2.1.0 (release: eve-guard, namespace: eve-guard)
[INFO] Latest : 2.3.0 (oci://ghcr.io/evesecurityinc/eve-installer)
[WARN] Upgrade available: 2.1.0 -> 2.3.0
[INFO] Run: ./install-eve.sh --upgrade --namespace eve-guard
Already up to date:
[INFO] Already up to date (2.3.0).
Upgrades the existing Helm release in-place using
helm upgrade --install --reuse-values. All existing values
(secrets, TLS config, domain) are preserved automatically.
./install-eve.sh --upgrade --namespace eve-guard --ghcr-token "$GHCR_TOKEN"In interactive mode (no --context flag), you will be
prompted to select the Kubernetes cluster context. To skip the
prompt:
./install-eve.sh --upgrade \
--context arn:aws:eks:us-west-2:123456789012:cluster/prod \
--namespace eve-guard \
--ghcr-token "$GHCR_TOKEN"# Upgrade to a specific version
./install-eve.sh --upgrade --chart-version 2.3.0 \
--namespace eve-guard --ghcr-token "$GHCR_TOKEN"
# Downgrade to a previous version
./install-eve.sh --upgrade --chart-version 2.1.0 \
--namespace eve-guard --ghcr-token "$GHCR_TOKEN"Downgrade note:
--reuse-valuesworks for downgrades but may fail if the older chart removed a value your currentvalues.yamlreferences. If downgrade fails, usehelm rollbackinstead (see below).
./install-eve.sh --upgrade --non-interactive \
--context arn:aws:eks:us-west-2:123456789012:cluster/prod \
--namespace eve-guard \
--ghcr-username "$GHCR_USERNAME" \
--ghcr-token "$GHCR_TOKEN"# List all revisions
helm history eve-guard -n eve-guard
# Roll back to a specific revision
helm rollback eve-guard <REVISION> -n eve-guard --wait --timeout=10m

Verify the deployment:

kubectl get pods -n eve-guard
kubectl get svc kong kong-tls -n eve-guard
kubectl get certificate -n eve-guard # cert-manager strategy only

svc/kong vs svc/kong-tls (Kong terminates TLS)

When eve-gateway.kong.tls.enabled is true, the chart
creates two Services:
| Service | Role | Typical use |
|---|---|---|
| kong-tls | LoadBalancer, port 443 → kong.tls.port on the pod (default 8443). In the eve-gateway chart, AWS NLB annotations on this Service are fixed (internet-facing + NLB) unless you override the chart. --lb-scheme internal applies to svc/kong when it is the public LB (e.g. external TLS + ACM), not to kong-tls today. | Public HTTPS URL for cert-manager / BYOC leaf certs. |
| kong | The installer sets eve-gateway.kong.service.type: ClusterIP when you use cert-manager / BYOC + LoadBalancer expose mode, so HTTPS from the internet goes to kong-tls, not cleartext :9000 on a public kong LB. In-cluster gateway.port and tls-proxy remain on this Service. HPA targets the Kong Deployment, not Services. | In-cluster traffic and debugging. |
To expose a public LoadBalancer for
kong (e.g. port 9000) as well as
kong-tls, set
eve-gateway.kong.service.type: LoadBalancer
in your values (not the installer default for that mode).
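A sketch of that override on an existing release (values key from the paragraph above):

helm upgrade eve-guard . -n eve-guard --reuse-values \
  --set eve-gateway.kong.service.type=LoadBalancer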
EKS + ACM (external TLS) uses
kong only (NLB terminates TLS; Kong often
on HTTP :8000). There is no separate TLS path on Kong in that mode, and
kong-tls is not created when
kong.tls.enabled is false.
Why not bind port 443 inside the Kong container?
Non-root containers cannot listen on ports < 1024 on Linux. Splitting
LB port 443 (Service) from container port
8443 (Kong) fixes Permission denied on bind.
failed post-install: timed out waiting for the condition

Often this is not “the cluster is slow” but the
wait hook Job failing and retrying (e.g. RBAC blocked
kubectl wait, or the webhook Deployment never became
ready). A large backoffLimit makes that feel like a
long hang until Helm times out.
The chart sets helm.sh/hook-timeout:
~660s on post-install hooks; the installer uses
helm ... --timeout 15m. Check the wait Job
logs first (below).
Logs from the wait Job (replace
eve-guard with your release if different):
kubectl logs -n eve-guard -l job-name=eve-guard-wait-cert-manager --tail=200

Webhook deployment / pulls:
kubectl get deployments,pods -n eve-guard | grep -E 'cert-manager|webhook'

Retry after fixing ImagePullBackOff or RBAC; or uninstall and reinstall with the updated chart.
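To inspect the hook Job itself (the Job name follows the <release>-wait-cert-manager pattern used above):

kubectl get jobs -n eve-guard
kubectl describe job eve-guard-wait-cert-manager -n eve-guard | sed -n '/Events:/,$p'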
certificates.cert-manager.io CRD not found (pods up, CRDs missing)

The Jetstack chart can deploy controller / cainjector / webhook without applying CRDs when CRDs are not installed yet. cainjector then loops on:
cainjector has been configured to watch certificates, but certificates.cert-manager.io CRD not found
and may exit with code 124 after its internal startup timeout — which breaks the post-install wait Job and leaves Kong waiting on a TLS Secret that never gets issued.
Fix: This umbrella keeps
cert-manager.crds.enabled: false in Helm
(CRDs are not tied to the Eve release).
install-eve.sh applies the cert-manager
CRDs with kubectl apply before Helm when
certManager.install: true and the CRDs are missing.
Confirm:
kubectl get crd certificates.cert-manager.io

Manual Helm only: apply CRDs from the same chart
version as your lockfile per cert-manager
installation, then helm upgrade --install again.
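A sketch of the manual CRD apply — cert-manager publishes a cert-manager.crds.yaml asset per release; replace vX.Y.Z with the subchart version from your lockfile:

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/vX.Y.Z/cert-manager.crds.yaml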
Already failed release: after CRDs exist,
helm upgrade --install the same release, or uninstall and
reinstall.
UPGRADE FAILED:
post-upgrade hooks — no matches for kind "Issuer"
(cert-manager.io/v1)

Helm may report something like: unable to build kubernetes object
for deleting hook … eve-ca-issuer … no matches for kind
“Issuer” in version “cert-manager.io/v1” — ensure CRDs are installed
first.
That happens when the cluster does not have cert-manager
CRDs (or they were never applied) but a previous run already
created hook-managed Issuer/Certificate objects in Helm’s release
history. On upgrade, Helm tries to delete those hook
resources; the Kubernetes API cannot map Issuer until the
CRDs exist.
Fix (same root cause as cainjector “CRD not found” above):

- Apply the cert-manager CRDs (the installer does this itself when certManager.install: true and kubectl get crd certificates.cert-manager.io fails).
- Re-run install-eve.sh or helm upgrade --install after CRDs are present.
- Verify with kubectl get crd certificates.cert-manager.io, then run helm upgrade --install again.

UPGRADE FAILED:
CRD exists and cannot be imported into the current release
(Helm ownership)

Older Eve umbrella builds installed cert-manager with
cert-manager.crds.enabled: true, so Helm
tried to own cluster-scoped CRDs. Only one release name
can own them; a second namespace or a reinstall with a different release
name then failed with invalid ownership metadata.
Current behavior: the chart sets
cert-manager.crds.enabled: false. CRDs are
applied with kubectl apply (not as Helm
resources), so new installs do not bind CRDs to your release
name or namespace. Stale
meta.helm.sh/* left on CRDs from older
installs is harmless for Helm, but confusing in
kubectl describe.
Cleanup stale CRD Helm labels: when you run
install-eve.sh with
certManager.install: true, the script
removes meta.helm.sh/release-name,
meta.helm.sh/release-namespace, and common Helm labels from
cert-manager CRDs only if the annotated release no
longer exists (helm status fails). If a
live Helm release still owns those CRDs (dedicated
cert-manager install), the script does not
strip them — use --no-bundle-cert-manager
and --use-existing-issuer, or uninstall
the other release first.
Manual one-liner (after you confirm
helm status eve-guard-victor -n eve-guard-victor fails —
release really gone):
for c in challenges.acme.cert-manager.io orders.acme.cert-manager.io certificaterequests.cert-manager.io certificates.cert-manager.io clusterissuers.cert-manager.io issuers.cert-manager.io; do
kubectl annotate crd "$c" meta.helm.sh/release-name- meta.helm.sh/release-namespace- 2>/dev/null
kubectl label crd "$c" app.kubernetes.io/managed-by- app.kubernetes.io/instance- 2>/dev/null
done

Nuclear option (dev only): delete all cert-manager
CRDs — wipes every Certificate /
ClusterIssuer cluster-wide. Prefer stripping annotations or
a dedicated cert-manager release instead.
If kubectl get crd certificates.cert-manager.io works
but no cert-manager controller pods run
anywhere, the old installer skipped installing cert-manager and
Kong never got the leaf TLS Secret. Current
install-eve.sh detects running controller
pods (not just CRDs) and sets
certManager.install: true when needed.
Override: ./install-eve.sh --bundle-cert-manager (always install the subchart) or --no-bundle-cert-manager when cluster-wide cert-manager already runs.

uninstall-eve.sh now removes this release’s cert-manager
webhooks before and after helm uninstall
and uses helm uninstall --wait for a
cleaner teardown.
ContainerCreating —
secret "<release>-tls" not foundKong mounts the leaf TLS Secret (e.g. eve-guard-tls)
before cert-manager has created it. The umbrella chart
creates the Certificate in a
post-install hook (after the cert-manager webhook wait
Job), so on first install the Secret may appear after
the Kong Deployment is applied.
Fix: set eve-gateway.kong.tls.waitForCertificateSecret: true (the installer does this automatically for the cert-manager strategy). Kong then waits in an init container until the Secret exists, then starts.

Quick checks:
kubectl describe pod -n eve-guard -l app.kubernetes.io/component=kong | sed -n '/Events:/,$p'
kubectl get secret -n eve-guard eve-guard-tls eve-ca-keypair 2>/dev/null || true
kubectl get certificate,certificaterequest -n eve-guard
kubectl describe issuer -n eve-guard eve-ca-issuer 2>/dev/null || true

CA secret for the namespace Issuer (cert-manager
strategy without --use-existing-issuer):
the chart’s CA Issuer reads
eve-ca-keypair. The installer creates it
before Helm; if you ran helm install
alone, create it (or re-run the installer’s CA step):
kubectl get secret eve-ca-keypair -n eve-guard -o name || echo "missing eve-ca-keypair — Issuer cannot sign"

Describe the Certificate and check status / events:
kubectl describe certificate -n eve-guard
kubectl get certificaterequest -n eve-guard
kubectl describe certificaterequest -n eve-guard # pick the one for your Certificate

Namespace Issuer (default private-CA path for cert-manager strategy):
kubectl describe issuer -n eve-guard eve-ca-issuer

cert-manager logs (subchart runs in release namespace):
kubectl logs -n eve-guard -l app.kubernetes.io/name=cert-manager

Existing issuer: If using
--use-existing-issuer, verify the issuer is healthy:
kubectl describe clusterissuer <name> or, for a
namespaced Issuer,
kubectl describe issuer -n <namespace> <name>.
Confirm the secret exists:
kubectl get secret ghcr-secret -n eve-guard
Env vars: GHCR_USERNAME must be
your GitHub username (or token for
PAT-only flows); GHCR_TOKEN must be the
PAT (ghp_...). If you swap them, the pull
secret is wrong —
kubectl delete secret ghcr-secret -n eve-guard and re-run
the installer with the correct exports.
Recreate if missing:
kubectl create secret docker-registry ghcr-secret \
--docker-server=ghcr.io --docker-username=YOUR_GITHUB_USERNAME --docker-password=YOUR_TOKEN -n eve-guard

Upgrade:
helm upgrade eve-guard . -f eve-values-generated.yaml -n eve-guard --timeout 15m
Restart pods:
kubectl rollout restart deployment/kong deployment/orchestrator -n eve-guard
The installer script
(install-eve.sh) is Bash. Use
macOS / Linux, or Windows with WSL2 or
Git Bash. It does not need to match the CPU of your Kubernetes
nodes.
Images are pulled by the cluster, not by your laptop. What matters is node architecture vs manifests on ghcr.io:
- amd64 nodes → images must include linux/amd64 (the default).
- arm64 nodes (e.g. Graviton, local ARM Kubernetes) → images must also include linux/arm64.

Version-tag CI in this repo builds linux/amd64 by default (fast releases). For linux/arm64 as well, run [Traffic Orchestrator] Push All via Actions → workflow_dispatch, set your tag, and enable Build for ARM64 (adds noticeable build time — only needed for Graviton / local ARM Kubernetes).
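To check what your nodes actually run (standard kubectl; works on any cluster):

kubectl get nodes -o custom-columns=NAME:.metadata.name,ARCH:.status.nodeInfo.architecture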
minikube on Apple Silicon: the installer pre-pulls linux/amd64 images and loads them into minikube (with binfmt) when GHCR credentials are set.
Testing pulls from a Mac (M-series): plain
docker pull ghcr.io/.../eve-kong:latest asks for
arm64; to mirror EKS amd64 use:
docker pull --platform linux/amd64 ghcr.io/evesecurityinc/eve-kong:latest

The installer prints trust commands for all platforms after a successful install. Manual commands from the chart directory:
macOS — install:
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain eve-local-ca.crt

macOS — remove:
security find-certificate -c "Eve Gateway CA" -a -Z /Library/Keychains/System.keychain \
| grep "SHA-1" | awk '{print $NF}' \
| while read hash; do sudo security delete-certificate -Z "$hash" /Library/Keychains/System.keychain; done

Linux (Debian/Ubuntu) — install:
sudo cp eve-local-ca.crt /usr/local/share/ca-certificates/eve-local-ca.crt && sudo update-ca-certificates

Linux (Debian/Ubuntu) — remove:
sudo rm /usr/local/share/ca-certificates/eve-local-ca.crt && sudo update-ca-certificates --fresh

Linux (RHEL/Fedora) — install:
sudo cp eve-local-ca.crt /etc/pki/ca-trust/source/anchors/eve-local-ca.crt && sudo update-ca-trust

Linux (RHEL/Fedora) — remove:
sudo rm /etc/pki/ca-trust/source/anchors/eve-local-ca.crt && sudo update-ca-trust

Windows (PowerShell as Admin) — install:
Import-Certificate -FilePath "eve-local-ca.crt" -CertStoreLocation Cert:\LocalMachine\Root

Windows (PowerShell as Admin) — remove:
Get-ChildItem Cert:\LocalMachine\Root | Where-Object { $_.Subject -match "Eve Gateway CA" } | Remove-Item

Share with another engineer: Send them
eve-local-ca.crt and the install command for their OS.
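They can also test against the CA file without touching the system trust store (the gateway domain here is the default example):

curl --cacert eve-local-ca.crt -I https://eve-guard.eve.gateway.test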
- If you allow /etc/hosts updates, the script rewrites the file for your gateway hostname when the Service has an IP, or when the cloud only returns an LB hostname (e.g. EKS): it resolves that hostname (dig / host / nslookup) and stores the IPv4. NLBs can have multiple A records; the installer uses the first IPv4 returned. If resolution fails, point DNS with a CNAME to the LB hostname or edit /etc/hosts manually.
- The default domain uses the reserved .test TLD, so public DNS cannot resolve it before /etc/hosts is applied. After a successful hosts update, https://<your-gateway-domain> should resolve locally.
- If the LoadBalancer IP stays pending on minikube, start minikube tunnel. On cloud, check LB provisioning and security groups.

The installer patches the namespace with Helm metadata. If it still fails:
kubectl label namespace eve-guard app.kubernetes.io/managed-by=Helm --overwrite
kubectl annotate namespace eve-guard meta.helm.sh/release-name=eve-guard meta.helm.sh/release-namespace=eve-guard --overwrite
helm upgrade --install eve-guard . -f eve-values-generated.yaml -n eve-guard --timeout 15m

Security notes:

- install-eve.sh and uninstall-eve.sh only remove cert-manager webhook configurations named {release}-cert-manager-webhook (your Helm release). They do not delete other cert-manager webhooks on the cluster.
- Secrets are passed to Helm via --set-file and short-lived 0600 temp files, not --set on the command line.
- The wait-hook kubectl image is pinned in values (values.yaml → certManagerWaitHook.kubectlImage) with scoped RBAC (namespace Role for workloads + a ClusterRole limited to GET the release’s ValidatingWebhookConfiguration). Override with an image digest in high-assurance environments.
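A sketch of that digest override (registry path and digest are placeholders):

helm upgrade --install eve-guard . -n eve-guard -f eve-values-generated.yaml \
  --set certManagerWaitHook.kubectlImage=registry.example.com/kubectl@sha256:<digest>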