# Kubernetes

T4 ships a Helm chart that deploys a StatefulSet with persistent volumes, health probes, Prometheus metrics, and an optional Envoy load-balancing proxy.
## Prerequisites

- Kubernetes 1.24+
- Helm 3.x
- An S3 bucket (required for multi-node; optional for single-node)
## Quick start — single node

```shell
helm install t4 oci://ghcr.io/t4db/charts/t4 \
  --set s3.bucket=my-bucket \
  --set s3.region=us-east-1
```

This creates a single-replica StatefulSet with a 10 Gi PVC and exposes the etcd-compatible gRPC port on a ClusterIP Service.
## Connect with etcdctl

```shell
kubectl port-forward svc/t4 3379:3379
etcdctl --endpoints=localhost:3379 put /hello world
etcdctl --endpoints=localhost:3379 get /hello
```

## Multi-node cluster

Set `replicaCount` to an odd number (3 or 5 recommended). All pods get stable DNS names via the headless Service (`t4-0.t4-headless`, etc.) and race to acquire the S3 leader lock on startup.
```shell
helm install t4 oci://ghcr.io/t4db/charts/t4 \
  --set replicaCount=3 \
  --set s3.bucket=my-bucket \
  --set s3.region=us-east-1
```

Pods are scheduled with `podAntiAffinity` to spread across nodes automatically.
## S3 credentials

### IRSA / EKS Workload Identity (recommended)

Create the IAM role with the required S3 permissions, then:

```shell
helm install t4 oci://ghcr.io/t4db/charts/t4 \
  --set s3.bucket=my-bucket \
  --set s3.region=us-east-1 \
  --set s3.useIRSA=true \
  --set s3.iamRoleArn=arn:aws:iam::123456789012:role/t4-s3-role \
  --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"=arn:aws:iam::123456789012:role/t4-s3-role
```

### Static credentials via existing Secret
```shell
kubectl create secret generic t4-s3-credentials \
  --from-literal=T4_S3_ACCESS_KEY_ID=AKIA... \
  --from-literal=T4_S3_SECRET_ACCESS_KEY=...

helm install t4 oci://ghcr.io/t4db/charts/t4 \
  --set s3.bucket=my-bucket \
  --set s3.existingSecret=t4-s3-credentials
```

### MinIO / S3-compatible stores
```shell
helm install t4 oci://ghcr.io/t4db/charts/t4 \
  --set s3.bucket=my-bucket \
  --set s3.endpoint=http://minio.minio-ns.svc.cluster.local:9000 \
  --set s3.existingSecret=minio-credentials
```

### Built-in MinIO (development / CI)

The chart can deploy a single-node MinIO instance alongside T4 and wire everything up automatically — no external object store needed:

```shell
helm install t4 oci://ghcr.io/t4db/charts/t4 \
  --set minio.enabled=true
```

This creates a MinIO Deployment, PVC, Service, and a post-install Job that creates the `t4` bucket. T4's S3 endpoint, bucket, and credentials are configured automatically.
Customise credentials, bucket name, and storage:

```yaml
minio:
  enabled: true
  rootUser: myuser
  rootPassword: mypassword  # change this!
  bucket: t4
  persistence:
    size: 20Gi
```

Access the MinIO web console:

```shell
kubectl port-forward svc/t4-minio 9001:9001
open http://localhost:9001
```

⚠ Not for production. Use a managed S3 service or a dedicated MinIO cluster for production deployments.
## Persistence

By default each pod gets a 10 Gi PVC. Adjust the size and storage class:

```yaml
persistence:
  enabled: true
  size: 50Gi
  storageClass: gp3
```

To disable persistence (ephemeral; relies entirely on S3 for recovery):

```yaml
persistence:
  enabled: false
```

## Client TLS (etcd port)
Create a TLS Secret, then reference it:

```shell
kubectl create secret tls t4-client-tls \
  --cert=server.crt --key=server.key
```

```yaml
tls:
  client:
    enabled: true
    secretName: t4-client-tls
```

To also require client certificates (mTLS), include `ca.crt` in the Secret:

```shell
kubectl create secret generic t4-client-tls \
  --from-file=tls.crt=server.crt \
  --from-file=tls.key=server.key \
  --from-file=ca.crt=ca.crt
```
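With mTLS enforced, clients must present a certificate signed by the CA. A minimal sketch using standard etcdctl TLS flags, assuming a hypothetical client certificate pair (`client.crt`/`client.key`) issued by the same CA:

```shell
# Forward the client port locally
kubectl port-forward svc/t4 3379:3379 &

# Connect over TLS, presenting the client certificate
etcdctl --endpoints=https://localhost:3379 \
  --cacert=ca.crt \
  --cert=client.crt \
  --key=client.key \
  get /hello
```

Note that the server certificate must include a SAN valid for the endpoint you dial (here `localhost`, because of the port-forward).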
## Peer mTLS (inter-node replication)

```shell
kubectl create secret generic t4-peer-tls \
  --from-file=tls.crt=node.crt \
  --from-file=tls.key=node.key \
  --from-file=ca.crt=ca.crt
```

```yaml
tls:
  peer:
    enabled: true
    secretName: t4-peer-tls
```

### With cert-manager
Section titled “With cert-manager”apiVersion: cert-manager.io/v1kind: Issuermetadata: name: t4-caspec: selfSigned: {}---apiVersion: cert-manager.io/v1kind: Certificatemetadata: name: t4-peer-tlsspec: secretName: t4-peer-tls issuerRef: name: t4-ca dnsNames: - t4-0.t4-headless - t4-1.t4-headless - t4-2.t4-headless - t4-headless usages: - server auth - client authkubectl apply -f issuer.yaml
helm install t4 oci://ghcr.io/t4db/charts/t4 \ --set tls.peer.enabled=true \ --set tls.peer.secretName=t4-peer-tlsPrometheus metrics
Enable a ServiceMonitor for the Prometheus Operator:

```yaml
serviceMonitor:
  enabled: true
  namespace: monitoring  # namespace where Prometheus Operator watches
  interval: 30s
  labels:
    release: kube-prometheus-stack  # match your Prometheus selector label
```

The ServiceMonitor scrapes the /metrics endpoint on port 9090.
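Even without the Prometheus Operator, the raw exposition output can be checked directly. A quick sketch, assuming the metrics port is exposed on the `t4` Service:

```shell
# Forward the metrics port and inspect the Prometheus exposition format
kubectl port-forward svc/t4 9090:9090 &
curl -s http://localhost:9090/metrics | head -n 20
```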
## Envoy proxy (read scale-out)

When `replicaCount > 1`, enabling the Envoy proxy routes writes to the leader and load-balances reads across all healthy replicas:

```yaml
replicaCount: 3

proxy:
  enabled: true
  replicaCount: 2
  lbPolicy: LEAST_REQUEST
```

Clients connect to the proxy Service (`t4-proxy`) instead of `t4` directly. The proxy detects the leader via the /healthz/leader endpoint on each pod.
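Since the proxy speaks the same etcd-compatible protocol, switching to it only changes the Service name; a sketch reusing the port-forward pattern from the quick start:

```shell
# Point clients at the proxy Service instead of t4
kubectl port-forward svc/t4-proxy 3379:3379 &

# Writes are routed to the leader, reads are spread across replicas
etcdctl --endpoints=localhost:3379 put /hello world
etcdctl --endpoints=localhost:3379 get /hello
```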
## Full values.yaml example (production 3-node)

```yaml
replicaCount: 3

image:
  repository: ghcr.io/t4db/t4
  tag: "0.11.0"

config:
  walSyncUpload: "false"  # PVC provides durability
  logLevel: info

s3:
  bucket: my-t4-prod
  prefix: k8s/prod
  region: us-east-1
  useIRSA: true
  iamRoleArn: arn:aws:iam::123456789012:role/t4-prod

persistence:
  size: 50Gi
  storageClass: gp3

tls:
  peer:
    enabled: true
    secretName: t4-peer-tls

serviceMonitor:
  enabled: true
  namespace: monitoring
  labels:
    release: kube-prometheus-stack

proxy:
  enabled: true
  replicaCount: 2

resources:
  requests:
    cpu: 250m
    memory: 512Mi
  limits:
    memory: 2Gi
```

```shell
helm install t4 oci://ghcr.io/t4db/charts/t4 -f values.yaml
```

## Upgrading
```shell
helm upgrade t4 oci://ghcr.io/t4db/charts/t4 -f values.yaml
```

The StatefulSet rolls pods one at a time. With `replicaCount >= 3`, quorum is maintained throughout the upgrade. If the leader pod is updated, a follower automatically wins a new election.
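To watch the rolling update progress, standard kubectl suffices (assuming the release name `t4` and the pod label shown below):

```shell
# Block until every pod has been replaced and is Ready
kubectl rollout status statefulset/t4 --timeout=10m

# Or watch pods cycle one at a time (StatefulSets roll from the
# highest ordinal down)
kubectl get pods -l app.kubernetes.io/name=t4 -w
```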
## Uninstalling

```shell
helm uninstall t4
```

PVCs are not deleted automatically. To also remove data:

```shell
kubectl delete pvc -l app.kubernetes.io/name=t4
```