
Migrating from etcd

T4 implements the core etcd v3 gRPC API — KV, Watch, Lease, and Auth. In most cases, replacing the etcd binary with t4 run and pointing your existing clients at the new endpoint is all that’s needed. Some Maintenance and Cluster RPCs are not implemented; see the tables below for the full picture.


T4 supports the following etcd v3 operations:

| etcd operation | T4 support |
| --- | --- |
| Range (Get / List / prefix scan) | Full |
| Put | Full |
| DeleteRange (single key or prefix/range) | Full |
| Txn (compare-and-set, unconditional) | Full multi-key support: arbitrary `If` conditions (MOD, CREATE, VALUE, LEASE, VERSION==0/!=0), multi-key `Then`/`Else` branches with Put and Delete ops; nested `RequestTxn` and range-delete ops within branches return `Unimplemented` |
| Watch | Full (history replay, cancel) |
| Compact | Full |
| LeaseGrant / LeaseKeepAlive / LeaseRevoke / LeaseTimeToLive / LeaseLeases | Full |
| AuthEnable / Users / Roles | Full |
| MemberList | Returns a single synthetic member |
| MemberAdd / MemberRemove / MemberUpdate / MemberPromote | Not supported |
| Status | Returns current revision, leader, and version |
| Defragment | No-op (Pebble compacts internally) |
| Alarm / Snapshot / Hash / HashKV / MoveLeader | Not supported |
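
As a sketch of the supported Txn subset, a standard etcd v3 compare-and-set written with the stock Go client should work unchanged against T4. The endpoint and key names below are illustrative; the `VERSION == 0` condition ("create if absent") and the `Put` op in the `Then` branch are both in the supported set above.

```go
package main

import (
	"context"
	"fmt"
	"log"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints: []string{"http://t4:3379"},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// "Create if absent": Version == 0 means the key does not exist yet.
	resp, err := cli.Txn(context.Background()).
		If(clientv3.Compare(clientv3.Version("/locks/leader"), "=", 0)).
		Then(clientv3.OpPut("/locks/leader", "node-a")).
		Commit()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("acquired:", resp.Succeeded)
}
```

Because single- and multi-key `If`/`Then`/`Else` with Put and Delete ops are supported, most client-side locking and CAS patterns carry over as-is; only nested transactions and range deletes inside branches need rework.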

```sh
# Snapshot the existing etcd cluster.
etcdctl --endpoints=http://etcd:2379 snapshot save etcd-snapshot.db
```

Alternatively, iterate and re-write all keys after cutover — suitable for small datasets.

```sh
# Single node with S3
t4 run \
  --data-dir /var/lib/t4 \
  --listen 0.0.0.0:3379 \
  --s3-bucket my-bucket \
  --s3-prefix t4/
```

T4 does not support etcd snapshot restore directly. Replay keys using etcdctl or a migration script:

```sh
# Export all keys from etcd as "key value" lines.
# etcdctl prints each key on one line and its value on the next; this
# pairing assumes keys contain no whitespace and values are single-line
# text — for binary or multi-line values, use the Go program below.
etcdctl --endpoints=http://etcd:2379 get / --prefix --print-value-only=false \
  | awk 'NR%2==1{key=$0} NR%2==0{print key, $0}' > keys.txt
# Write them to T4.
while read -r key value; do
  etcdctl --endpoints=http://t4:3379 put "$key" "$value"
done < keys.txt
```

For large datasets, write a short Go program using the etcd client library to stream and replay:

```go
import (
	"context"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// migrate streams every key from src to dst in batches of 1000,
// so the full keyspace never has to fit in memory at once.
func migrate(ctx context.Context, src, dst *clientv3.Client) error {
	start := "\x00" // begin at the start of the keyspace
	for {
		resp, err := src.Get(ctx, start,
			clientv3.WithFromKey(),
			clientv3.WithLimit(1000),
			clientv3.WithSort(clientv3.SortByKey, clientv3.SortAscend))
		if err != nil {
			return err
		}
		for _, kv := range resp.Kvs {
			if _, err := dst.Put(ctx, string(kv.Key), string(kv.Value)); err != nil {
				return err
			}
		}
		if !resp.More {
			return nil
		}
		// Resume just past the last key of this batch.
		start = string(resp.Kvs[len(resp.Kvs)-1].Key) + "\x00"
	}
}
```
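
A thin wrapper is enough to run it; a minimal sketch, assuming the etcd and T4 endpoints from the examples above:

```go
package main

import (
	"context"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Source: the existing etcd cluster.
	src, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://etcd:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer src.Close()

	// Destination: the new T4 endpoint.
	dst, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://t4:3379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer dst.Close()

	if err := migrate(context.Background(), src, dst); err != nil {
		log.Fatal(err)
	}
}
```

Run the copy while writes to the old cluster are paused (or repeat it after cutover) so no updates are lost in between.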

Change your application’s etcd endpoint from http://etcd:2379 to http://t4:3379. No client code changes needed — the etcd v3 Go client, Java client, Python client, and etcdctl all work against T4 unchanged.


If you’re using etcd embedded in Kubernetes (k3s, k0s) via kine or a direct etcd embed, consider switching to T4’s standalone binary as an etcd-compatible backend:

```sh
# k3s example: point the datastore at T4
k3s server --datastore-endpoint=http://t4:3379
```

Or deploy T4 on the cluster itself and point the control plane at its ClusterIP Service.


If you’re currently running the etcd server as a sidecar and connecting via the etcd Go client, you can replace both with the embedded T4 library — eliminating the sidecar process entirely.

```go
// Connecting to a separate etcd sidecar process
cli, err := clientv3.New(clientv3.Config{
	Endpoints: []string{"localhost:2379"},
})
resp, err := cli.Get(ctx, "/config/timeout")
value := string(resp.Kvs[0].Value)
_, err = cli.Put(ctx, "/config/timeout", "30s")
```

```go
import "github.com/t4db/t4"

// Embedded — no separate process
node, err := t4.Open(t4.Config{
	DataDir:     "/var/lib/myapp/t4",
	ObjectStore: s3Store, // same S3 durability you had before
})
kv, err := node.Get("/config/timeout")
value := string(kv.Value)
_, err = node.Put(ctx, "/config/timeout", []byte("30s"), 0)
```

Key API differences from the etcd v3 Go client:

| etcd client | T4 embedded |
| --- | --- |
| `cli.Get(ctx, key)` | `node.Get(key)` (no `ctx`; reads are local) |
| `cli.Get(ctx, prefix, WithPrefix())` | `node.List(prefix)` |
| `cli.Put(ctx, key, value)` | `node.Put(ctx, key, []byte(value), 0)` |
| `cli.Delete(ctx, key)` | `node.Delete(ctx, key)` |
| `cli.Watch(ctx, prefix, WithPrefix())` | `node.Watch(ctx, prefix, 0)` |
| `cli.Txn(ctx).If(...).Then(Put).Commit()` | `node.Txn(ctx, t4.TxnRequest{Conditions: [...], Success: [...], Failure: [...]})` — full multi-key If/Then/Else with Put and Delete ops; or `node.Create` / `node.Update` / `node.DeleteIfRevision` for simple single-key CAS patterns |
| `cli.Grant(ctx, ttl)` + lease ID on `Put` | `node.Put(ctx, key, value, leaseID)` — obtain a lease ID from LeaseGrant via the etcd gRPC API, or manage leases directly through the embedded server |
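
The lease row above can be sketched with the stock Go client against T4's gRPC endpoint; everything here is standard `clientv3` usage, and the key name, TTL, and endpoint are illustrative:

```go
package main

import (
	"context"
	"log"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints: []string{"http://t4:3379"},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()
	ctx := context.Background()

	// Grant a 30-second lease, exactly as against stock etcd.
	lease, err := cli.Grant(ctx, 30)
	if err != nil {
		log.Fatal(err)
	}
	// Attach the lease to a key; the key expires when the lease does.
	if _, err := cli.Put(ctx, "/config/session", "active",
		clientv3.WithLease(lease.ID)); err != nil {
		log.Fatal(err)
	}
	// Refresh the lease in the background for the life of the process.
	if _, err := cli.KeepAlive(ctx, lease.ID); err != nil {
		log.Fatal(err)
	}
}
```

The same lease ID (`lease.ID`) can then be passed to the embedded `node.Put(ctx, key, value, leaseID)` shown in the table.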

| etcd feature | T4 behaviour | Workaround |
| --- | --- | --- |
| Maintenance RPCs (Alarm, Hash, Snapshot, MoveLeader) | Not supported | Not needed for standard application clients |
| `etcdctl snapshot restore` | Not supported | Use `t4 branch fork` for point-in-time copies |
| MemberAdd / MemberRemove / MemberUpdate / MemberPromote | Not supported | Not needed for standard clients |
| etcd v2 API | Not supported | Migrate to v3 first |
| gRPC gateway (HTTP/JSON) | Not included | Use a gRPC proxy (e.g. grpc-gateway) in front |