kcp-zheng: Multi-Region Self-Signed Certificate Deployment¶
The kcp-zheng deployment pattern uses self-signed certificates issued by an internal CA and is ideal for multi-region deployments across different clouds without a shared network. In this scenario we use three separate Kubernetes clusters for the shards; all shards are reachable via external URLs, and the front-proxy is the only public endpoint.
Note: This guide uses the kcp-operator bundle feature to deploy shards from the root cluster. Ensure the corresponding feature flag is enabled in your kcp-operator deployment.
Architecture Overview¶
- Certificate approach: All certificates are self-signed using an internal CA
- Access pattern: Only the front-proxy is publicly accessible; shards expose external URLs for cross-region access
- Network: Three Kubernetes clusters in different clouds without a shared network
- DNS requirements: Public DNS records for front-proxy and each shard
Prerequisites¶
Ensure all shared components are installed before proceeding.
Additional requirements for kcp-zheng:
- Public DNS domain with the ability to create multiple A records
- LoadBalancer service capability for front-proxy and shard endpoints
- External network connectivity between clusters
Deployment Steps¶
1. Create DNS Records¶
Create public DNS records for all endpoints:
# Required DNS records
api.zheng.example.io → Front-proxy LoadBalancer IP (cluster 1)
root.zheng.example.io → Root shard LoadBalancer IP (cluster 1)
alpha.zheng.example.io → Alpha shard LoadBalancer IP (cluster 2)
beta.zheng.example.io → Beta shard LoadBalancer IP (cluster 3)
Note
DNS records must be configured before proceeding with deployment.
Cluster 1: Deploy Root Shard and Front-Proxy¶
2. Create Namespace and Certificate Issuer¶
On cluster 1, where the operator is running and the root shard will be deployed:
kubectl create namespace kcp-zheng
kubectl apply -f contrib/production/etcd-druid/certificate-etcd-issuer.yaml
kubectl apply -f contrib/production/kcp-zheng/certificate-etcd-root.yaml
Verify issuer is ready:
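With cert-manager this can be checked directly on the Issuer objects (if the manifest creates a ClusterIssuer instead, check clusterissuers):
kubectl get issuers -n kcp-zheng
kubectl wait --for=condition=Ready issuers -n kcp-zheng --all --timeout=120s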
3. Deploy etcd Cluster¶
Deploy etcd cluster with self-signed certificates:
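The exact manifest name depends on your checkout; for example (hypothetical file name, adjust as needed):
kubectl apply -f contrib/production/kcp-zheng/etcd-root.yaml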
Verify etcd cluster:
kubectl get etcd -n kcp-zheng
kubectl wait --for=condition=Ready etcd -n kcp-zheng --all --timeout=300s
4. Configure KCP System Certificates¶
Set up certificates for kcp components using the internal CA:
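For example (hypothetical manifest name, adjust to the files shipped in contrib/production/kcp-zheng):
kubectl apply -f contrib/production/kcp-zheng/certificate-kcp-root.yaml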
Verify certificate issuance:
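With cert-manager, readiness is reflected on the Certificate objects:
kubectl get certificates -n kcp-zheng
kubectl wait --for=condition=Ready certificates -n kcp-zheng --all --timeout=300s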
Because the front-proxy uses a Let's Encrypt certificate and kubectl needs an explicit CA configuration, the kcp components must be deployed with an extended CA bundle that also trusts the Let's Encrypt root:
curl -L -o isrgrootx1.pem https://letsencrypt.org/certs/isrgrootx1.pem
kubectl create secret generic letsencrypt-ca --from-file=tls.crt=isrgrootx1.pem -n kcp-zheng
5. Deploy KCP Components¶
Deploy kcp components:
# NOTE: These files need to be customized with your domain name before applying
kubectl apply -f contrib/production/kcp-zheng/kcp-root-shard.yaml
kubectl apply -f contrib/production/kcp-zheng/kcp-front-proxy.yaml
Verify deployment:
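A quick check is to list the pods in the namespace and confirm they reach the Running state:
kubectl get pods -n kcp-zheng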
6. Verify Services¶
Ensure the front-proxy LoadBalancer is provisioned:
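List the services and confirm an external IP has been assigned:
kubectl get svc -n kcp-zheng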
Expected services:
NAME TYPE EXTERNAL-IP PORT(S) AGE
frontproxy-front-proxy LoadBalancer 203.0.113.10 6443:30001/TCP 5m
root-kcp LoadBalancer 203.0.113.11 6443:30002/TCP 5m
7. Update DNS Records with LoadBalancer IPs¶
Update your DNS records with the LoadBalancer IP addresses:
kubectl get svc -n kcp-zheng frontproxy-front-proxy -o jsonpath='{.status.loadBalancer}'
kubectl get svc -n kcp-zheng root-kcp -o jsonpath='{.status.loadBalancer}'
Verify DNS propagation:
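For example, with dig (any DNS lookup tool works):
dig +short api.zheng.example.io
dig +short root.zheng.example.io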
Verify the front-proxy is accessible:
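A simple reachability check (an unauthenticated request is expected to be rejected, but the endpoint should answer over TLS):
curl -k https://api.zheng.example.io:6443/healthz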
8. Create Admin Access and Test Connectivity¶
kubectl apply -f contrib/production/kcp-zheng/kubeconfig-kcp-admin.yaml
kubectl get secret -n kcp-zheng kcp-admin-frontproxy \
-o jsonpath='{.data.kubeconfig}' | base64 -d > kcp-admin-kubeconfig-zheng.yaml
KUBECONFIG=kcp-admin-kubeconfig-zheng.yaml kubectl get shards
Expected output:
NAME REGION URL EXTERNAL URL AGE
root https://root.zheng.example.io:6443 https://api.zheng.example.io:6443 3m20s
9. Create Alpha and Beta Shard Bundles¶
Now configure the root cluster to generate alpha and beta shard bundles:
kubectl apply -f contrib/production/kcp-zheng/kcp-alpha-shard.yaml
kubectl apply -f contrib/production/kcp-zheng/kcp-beta-shard.yaml
Verify bundles are created (shards should NOT be running yet):
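The expected output below comes from listing the deployments in the namespace:
kubectl get deployments -n kcp-zheng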
Expected output:
NAME READY UP-TO-DATE AVAILABLE AGE
alpha-shard-kcp 0/0 0 0 2m
beta-shard-kcp 0/0 0 0 2m
frontproxy-front-proxy 2/2 2 2 31m
root-kcp 2/2 2 2 31m
root-proxy 2/2 2 2 31m
Verify that the bundles for the alpha and beta shards have been generated before moving on.
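Since the generated bundles are stored as secrets (their exact names depend on the operator), listing the secrets is a reasonable sanity check:
kubectl get secrets -n kcp-zheng | grep -i bundle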
Cluster 2: Deploy Alpha Shard¶
Now move to cluster 2 and deploy the alpha shard using the generated bundle.
1. Create Namespace¶
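Create the same namespace that is used on cluster 1:
kubectl create namespace kcp-zheng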
2. Install Prerequisites¶
Install etcd operator:
helm install etcd-druid oci://europe-docker.pkg.dev/gardener-project/releases/charts/gardener/etcd-druid \
--namespace etcd-druid \
--create-namespace \
--version v0.33.0
kubectl apply -f contrib/production/etcd-druid/etcdcopybackupstasks.druid.gardener.cloud.yaml
kubectl apply -f contrib/production/etcd-druid/etcds.druid.gardener.cloud.yaml
Install cert-manager if not already installed:
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm upgrade \
--install \
--namespace cert-manager \
--create-namespace \
--version v1.18.2 \
--set crds.enabled=true \
--atomic \
cert-manager jetstack/cert-manager
3. Deploy etcd Issuers and Certificates¶
kubectl apply -f contrib/production/etcd-druid/certificate-etcd-issuer.yaml
kubectl apply -f contrib/production/kcp-zheng/certificate-etcd-alpha.yaml
4. Deploy etcd Cluster¶
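Apply the etcd manifest for the alpha shard (hypothetical file name, adjust to your repository layout):
kubectl apply -f contrib/production/kcp-zheng/etcd-alpha.yaml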
Verify etcd cluster:
kubectl get etcd -n kcp-zheng
kubectl wait --for=condition=Ready etcd -n kcp-zheng --all --timeout=300s
5. Deploy Alpha Shard from Bundle¶
Once etcd is ready, deploy the alpha shard using the generated bundle from cluster 1.
On cluster 1, export the alpha bundle secret:
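Assuming the generated bundle secret is named alpha-bundle (adjust to the name actually created by the operator):
kubectl get secret -n kcp-zheng alpha-bundle -o yaml > alpha-bundle.yaml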
Copy the alpha-bundle.yaml file to cluster 2 and apply it:
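On cluster 2:
kubectl apply -n kcp-zheng -f alpha-bundle.yaml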
Deploy resources from the bundle secret:
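One possible approach, assuming the bundle secret stores rendered manifests in its data keys (the key layout is an assumption about the operator's bundle format):
# Decode every key of the bundle secret and apply the contained manifests
kubectl get secret -n kcp-zheng alpha-bundle \
  -o go-template='{{range $k, $v := .data}}{{$v | base64decode}}{{"\n---\n"}}{{end}}' | kubectl apply -f -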
Verify shard is running:
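The expected output below is from listing the pods:
kubectl get pods -n kcp-zheng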
Expected output:
NAME READY STATUS RESTARTS AGE
alpha-0 2/2 Running 0 9m
alpha-1 2/2 Running 0 9m
alpha-2 2/2 Running 0 9m
alpha-shard-kcp-69db8985bf-hllmw 1/1 Running 0 90s
alpha-shard-kcp-69db8985bf-qzftr 1/1 Running 0 90s
6. Configure DNS for Alpha Shard¶
Get the LoadBalancer IP:
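List the services and note the external IP of the shard's LoadBalancer:
kubectl get svc -n kcp-zheng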
Add DNS record alpha.zheng.example.io pointing to the shard LoadBalancer IP.
Verify DNS propagation:
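For example:
dig +short alpha.zheng.example.io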
7. Verify Alpha Shard Joined¶
From any machine with the admin kubeconfig:
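Using the admin kubeconfig created earlier:
KUBECONFIG=kcp-admin-kubeconfig-zheng.yaml kubectl get shards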
Expected output:
NAME REGION URL EXTERNAL URL AGE
alpha https://alpha.zheng.example.io:6443 https://api.zheng.example.io:6443 2m
root https://root.zheng.example.io:6443 https://api.zheng.example.io:6443 38m
Cluster 3: Deploy Beta Shard¶
Repeat the same steps as cluster 2 for the beta shard.
1. Create Namespace¶
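As on the other clusters:
kubectl create namespace kcp-zheng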
2. Install Prerequisites¶
Install etcd operator:
helm install etcd-druid oci://europe-docker.pkg.dev/gardener-project/releases/charts/gardener/etcd-druid \
--namespace etcd-druid \
--create-namespace \
--version v0.33.0
kubectl apply -f contrib/production/etcd-druid/etcdcopybackupstasks.druid.gardener.cloud.yaml
kubectl apply -f contrib/production/etcd-druid/etcds.druid.gardener.cloud.yaml
Install cert-manager if not already installed:
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm upgrade \
--install \
--namespace cert-manager \
--create-namespace \
--version v1.18.2 \
--set crds.enabled=true \
--atomic \
cert-manager jetstack/cert-manager
3. Deploy etcd Issuers and Certificates¶
kubectl apply -f contrib/production/etcd-druid/certificate-etcd-issuer.yaml
kubectl apply -f contrib/production/kcp-zheng/certificate-etcd-beta.yaml
4. Deploy etcd Cluster¶
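Apply the etcd manifest for the beta shard (hypothetical file name, adjust to your repository layout):
kubectl apply -f contrib/production/kcp-zheng/etcd-beta.yaml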
Verify etcd cluster:
kubectl get etcd -n kcp-zheng
kubectl wait --for=condition=Ready etcd -n kcp-zheng --all --timeout=300s
5. Deploy Beta Shard from Bundle¶
On cluster 1, export the beta bundle secret:
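Assuming the generated bundle secret is named beta-bundle (adjust to the name actually created by the operator):
kubectl get secret -n kcp-zheng beta-bundle -o yaml > beta-bundle.yaml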
Copy the beta-bundle.yaml file to cluster 3 and apply it:
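On cluster 3:
kubectl apply -n kcp-zheng -f beta-bundle.yaml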
Deploy resources from the bundle secret:
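As for the alpha shard, assuming the bundle secret stores rendered manifests in its data keys:
kubectl get secret -n kcp-zheng beta-bundle \
  -o go-template='{{range $k, $v := .data}}{{$v | base64decode}}{{"\n---\n"}}{{end}}' | kubectl apply -f -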
Verify shard is running:
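As before, check the pods:
kubectl get pods -n kcp-zheng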
6. Configure DNS for Beta Shard¶
Get the LoadBalancer IP:
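List the services and note the external IP of the shard's LoadBalancer:
kubectl get svc -n kcp-zheng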
Add DNS record beta.zheng.example.io pointing to the shard LoadBalancer IP.
Verify DNS propagation:
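For example:
dig +short beta.zheng.example.io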
7. Verify All Shards Joined¶
From any machine with the admin kubeconfig:
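Using the admin kubeconfig created earlier:
KUBECONFIG=kcp-admin-kubeconfig-zheng.yaml kubectl get shards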
Expected output:
NAME REGION URL EXTERNAL URL AGE
alpha https://alpha.zheng.example.io:6443 https://api.zheng.example.io:6443 15m
beta https://beta.zheng.example.io:6443 https://api.zheng.example.io:6443 2m
root https://root.zheng.example.io:6443 https://api.zheng.example.io:6443 50m
Optional: Create Partitions¶
Partitions allow you to group shards for topology-aware workload placement. This is useful for geo-distributed deployments where you want to control which shards handle specific workloads.
Option 1: Create Individual Partitions¶
Create a partition for each shard:
kubectl apply -f - <<EOF
kind: Partition
apiVersion: topology.kcp.io/v1alpha1
metadata:
name: root
spec:
selector:
matchLabels:
name: root
---
kind: Partition
apiVersion: topology.kcp.io/v1alpha1
metadata:
name: alpha
spec:
selector:
matchLabels:
name: alpha
---
kind: Partition
apiVersion: topology.kcp.io/v1alpha1
metadata:
name: beta
spec:
selector:
matchLabels:
name: beta
EOF
Option 2: Use a PartitionSet¶
Alternatively, use a PartitionSet to automatically create partitions based on shard labels:
kubectl apply -f - <<EOF
kind: PartitionSet
apiVersion: topology.kcp.io/v1alpha1
metadata:
name: cloud-regions
spec:
dimensions:
- name
shardSelector:
matchExpressions:
- key: name
operator: In
values:
- root
- alpha
- beta
EOF
Verify the partitions were created:
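Using the kcp admin kubeconfig while in the root workspace:
kubectl get partitions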
Expected output:
NAME OWNER AGE
alpha 6m
beta 6m
cloud-regions-alpha-hcfcx cloud-regions 6s
cloud-regions-beta-78xkz cloud-regions 6s
cloud-regions-root-4vrlm cloud-regions 6s
root 6m
Create Workspaces on Specific Shards¶
Create workspaces targeting specific shards using the --location-selector flag:
kubectl ws create provider --location-selector name=root
kubectl ws create consumer-alpha-1 --location-selector name=alpha
kubectl ws create consumer-alpha-2 --location-selector name=alpha
kubectl ws create consumer-beta-1 --location-selector name=beta
kubectl ws create consumer-beta-2 --location-selector name=beta
Partitions in Non-Root Workspaces¶
Partitions can also be created outside of the root workspace. For example, in the provider workspace:
kubectl ws use provider
kubectl apply -f - <<EOF
kind: PartitionSet
apiVersion: topology.kcp.io/v1alpha1
metadata:
name: cloud-regions
spec:
dimensions:
- name
shardSelector:
matchExpressions:
- key: name
operator: In
values:
- alpha
EOF
Example: Export and Bind an API¶
Create an APIExport in the provider workspace:
kubectl ws use provider
kubectl create -f config/examples/cowboys/apiresourceschema.yaml
kubectl create -f config/examples/cowboys/apiexport.yaml
Create bindings in the consumer workspaces:
kubectl ws use :root:consumer-alpha-1
kubectl kcp bind apiexport root:provider:cowboys --name cowboys
kubectl ws use :root:consumer-alpha-2
kubectl kcp bind apiexport root:provider:cowboys --name cowboys
kubectl ws use :root:consumer-beta-1
kubectl kcp bind apiexport root:provider:cowboys --name cowboys
kubectl ws use :root:consumer-beta-2
kubectl kcp bind apiexport root:provider:cowboys --name cowboys
Partitioned APIExportEndpointSlices for High Availability¶
Create dedicated child workspaces on each shard to host shard-local APIExportEndpointSlices:
kubectl ws use :root:provider
kubectl ws create alpha --location-selector name=alpha
kubectl ws create beta --location-selector name=beta
Inside each workspace, create a partition and APIExportEndpointSlice targeting that shard:
Note
Partitions must be co-located in the same workspace as the APIExportEndpointSlice.
# Setup alpha shard endpoint
kubectl ws use :root:provider:alpha
kubectl apply -f - <<EOF
apiVersion: topology.kcp.io/v1alpha1
kind: Partition
metadata:
name: alpha
spec:
selector:
matchLabels:
name: alpha
---
apiVersion: apis.kcp.io/v1alpha1
kind: APIExportEndpointSlice
metadata:
name: cowboys-alpha
spec:
export:
name: cowboys
path: root:provider
partition: alpha
EOF
# Setup beta shard endpoint
kubectl ws use :root:provider:beta
kubectl apply -f - <<EOF
apiVersion: topology.kcp.io/v1alpha1
kind: Partition
metadata:
name: beta
spec:
selector:
matchLabels:
name: beta
---
apiVersion: apis.kcp.io/v1alpha1
kind: APIExportEndpointSlice
metadata:
name: cowboys-beta
spec:
export:
name: cowboys
path: root:provider
partition: beta
EOF
The resulting workspace structure:
root
├── consumer-alpha-1
├── consumer-alpha-2
├── consumer-beta-1
├── consumer-beta-2
└── provider (contains APIExport to be used by alpha, beta)
├── alpha (contains Partition + APIExportEndpointSlice for alpha shard)
└── beta (contains Partition + APIExportEndpointSlice for beta shard)
The provider workspace is on the root shard, while provider:alpha and provider:beta are on their respective shards.
High Availability Testing¶
Create test resources in the consumer workspaces:
kubectl ws use :root:consumer-alpha-1
kubectl create -f config/examples/cowboys/cowboy.yaml
kubectl ws use :root:consumer-beta-1
kubectl create -f config/examples/cowboys/cowboy.yaml
Simulate root shard failure by scaling down the root shard deployment:
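On cluster 1 (the deployment name matches the earlier listing):
kubectl scale deployment/root-kcp -n kcp-zheng --replicas=0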
Verify behavior during root shard outage:
| Operation | Result |
|---|---|
| kubectl ws use :root | Timeout (root shard unavailable) |
| kubectl ws use :root:consumer-alpha-1 | Works (alpha shard) |
| kubectl ws use :root:consumer-alpha-2 | Works (alpha shard) |
| kubectl ws use :root:provider | Timeout (root shard unavailable) |
| kubectl ws use :root:provider:alpha | Works (alpha shard) |
Access the virtual API endpoint directly:
# Get the endpoint URL from the APIExportEndpointSlice
kubectl ws use :root:provider:alpha
kubectl get apiexportendpointslice cowboys-alpha -o jsonpath='{.status.endpoints[0].url}'
Expected output (the identity hash will differ in your deployment):
https://alpha.zheng.example.io:6443/services/apiexport/<identity>/cowboys
Query cowboys across all consumer workspaces on the alpha shard:
kubectl -s 'https://alpha.zheng.example.io:6443/services/apiexport/<identity>/cowboys/clusters/*' \
get cowboys.wildwest.dev -A
This should list the cowboy objects that live in consumer workspaces on the alpha shard (in this example, the one created in consumer-alpha-1).
This demonstrates that the alpha and beta shards can continue to serve API requests even when the root shard is unavailable, as long as they have their own APIExportEndpointSlices. Note that dedicated operators must be running against each shard to manage these resources.
For more details on sharding strategies, see the Sharding Overview.