Local Klutch Deployment
This tutorial guides you through deploying Klutch locally using two interconnected local clusters that simulate an App Cluster and a Control Plane Cluster. It covers setting up both clusters, connecting them with Klutch, and shows how developers can request and use resources in the App Cluster that are actually provisioned in the Control Plane Cluster.
Overview
In this tutorial, you'll perform the following steps in your local environment:
- Deploy a Control Plane Cluster (which will also host resources)
- Set up an App Cluster
- Bind APIs from the App Cluster to the Control Plane Cluster
- Create and use remote resources from the App Cluster (in this case, a PostgreSQL service)
We'll use the open source a9s CLI to streamline this process, making it easy to follow along and understand each step.
Prerequisites
Before beginning this tutorial, ensure you have the following:
Required Tools
If you work with Kubernetes regularly, you probably have these standard tools already installed:
- git
- Docker
- kind
- kubectl
To follow along with this tutorial, you also need to install the following specialized tools:
- the a9s CLI
- the kubectl-bind plugin
Network Access
Ensure your machine can reach the following external resources:
- Docker Image Repositories:
  - public.ecr.aws/w5n9a2g2/klutch/
  - dexidp/dex
  - curlimages/curl
  - xpkg.upbound.io/crossplane-contrib/provider-kubernetes:v0.14.1
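If you want to confirm registry access before deploying, you can optionally pull one of the listed public images; this check is not part of the Klutch setup itself:

```bash
# Optional: verify that public image registries are reachable from this machine.
docker pull curlimages/curl:latest
```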
Step 1: Run the Deployment Command
In this step, we'll set up both the Control Plane Cluster and the App Cluster for Klutch using a single command. This command will install all components needed by Klutch, including the a8s framework with the PostgreSQL operator.
This step does not automatically create bindings between the App Cluster and the resources in the Control Plane Cluster. You'll need to create these bindings using a web UI in a later step.
Run the following command to set up Klutch on two local clusters.
```bash
a9s klutch deploy --port 8080 --yes
```
- The `--port 8080` flag specifies the port on which the Control Plane Cluster's ingress will listen. You can change this if needed.
- The `--yes` flag skips all confirmation prompts, speeding up the process.
What this command does:
- Deploys the Control Plane Cluster with all required components.
- Installs the a8s framework with the PostgreSQL Kubernetes operator.
- Creates an App Cluster with Kind.
Remove the `--yes` flag if you want to review and approve each step of the process. This can be helpful for understanding each action the CLI takes.

For a hands-off deployment, keep the `--yes` flag to skip all prompts.
1.1 Control Plane Cluster Deployment
The CLI automatically:
- Checks prerequisites
- Creates a Kind cluster named "klutch-control-plane"
- Deploys core components:
  - ingress-nginx
  - Dex IdP (for authentication)
  - Klutch backend
  - Crossplane and provider-kubernetes
  - Provider configuration package
  - API Service Export Templates
  - a8s stack as a sample data service
  - Minio (for object storage)
  - cert-manager
You'll see progress updates and YAML files being applied for each component.
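Once the command finishes, you can take a look at what was installed. A cluster-wide pod listing in the Control Plane Cluster is a simple, non-invasive way to see the components listed above:

```bash
# List all pods in the Control Plane Cluster to see the deployed components.
kubectl get pods --all-namespaces --context kind-klutch-control-plane
```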
1.2 App Cluster Deployment
After setting up the Control Plane Cluster, the CLI:
- Creates a new Kind cluster named "klutch-app"
At the moment this is an empty Kind cluster. Klutch components will be added in the next step, when the App Cluster is "bound" to the Control Plane Cluster. Stay tuned!
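You can already confirm that both Kind clusters exist and that kubectl contexts were created for them:

```bash
# Both clusters should be listed: klutch-control-plane and klutch-app.
kind get clusters

# The contexts used throughout the rest of this tutorial.
kubectl config get-contexts kind-klutch-control-plane kind-klutch-app
```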
Deployment Output
Here's a trimmed example of what you might see during the deployment:
...
Checking Prerequisites...
✅ Found git at path /usr/bin/git.
✅ Found docker at path /opt/homebrew/bin/docker.
✅ Found kind at path /opt/homebrew/bin/kind.
...
Creating cluster "klutch-control-plane"...
• Ensuring node image (kindest/node:v1.31.0) 🖼 ...
✓ Ensuring node image (kindest/node:v1.31.0) 🖼
• Preparing nodes 📦 ...
✓ Preparing nodes 📦
...
Applying ingress-nginx manifests...
[YAML content will be displayed here]
✅ ingress-nginx appears to be ready.
Deploying Dex IdP...
[YAML content will be displayed here]
✅ Dex appears to be ready.
...
Applying the a8s Data Service manifests...
...
✅ The a8s System appears to be ready.
...
Deploying an App Cluster with Kind...
Creating cluster "klutch-app" ...
• Ensuring node image (kindest/node:v1.31.0) 🖼 ...
✓ Ensuring node image (kindest/node:v1.31.0) 🖼
• Preparing nodes 📦 ...
✓ Preparing nodes 📦
...
Summary
You've successfully accomplished the following steps:
✅ Deployed a Klutch Control Plane Cluster with Kind.
✅ Deployed Dex Idp and the anynines klutch-bind backend.
✅ Deployed Crossplane and the Kubernetes provider.
✅ Deployed the Klutch Crossplane configuration package.
✅ Deployed Klutch API Service Export Templates to make the Klutch Crossplane APIs available to App Clusters.
✅ Deployed the a8s Stack.
✅ Deployed an App Cluster.
🎉 You are now ready to bind APIs from the App Cluster using the `a9s klutch bind` command.
Step 2: Bind Resource APIs from the App Cluster
After setting up both clusters, the next step is to bind APIs from the App Cluster to the Control Plane Cluster. We'll bind two APIs: `postgresqlinstance` and `servicebinding`.
This operation also sets up an agent (the konnector) in the App Cluster that keeps resources in sync between the App Cluster and the Control Plane Cluster.
Execute the following command to initiate the binding process:
```bash
a9s klutch bind
```
The CLI automatically:
- Checks prerequisites
- Executes the kubectl bind command
- Directs you to the web UI for authentication and API selection
- Prompts you to accept the required permissions
- Confirms binding completion
You'll see progress updates for each step.
Deployment Output and Binding Process
Here’s a trimmed example of what you might see during the binding process, along with the steps you need to follow to bind APIs:
Checking Prerequisites...
✅ Found kubectl at path /opt/homebrew/bin/kubectl.
✅ Found kubectl-bind at path /usr/local/bin/kubectl-bind.
🎉 All necessary commands are present.
...
The following command will be executed for you:
/opt/homebrew/bin/kubectl bind http://192.168.0.91:8080/export --konnector-image public.ecr.aws/w5n9a2g2/klutch/konnector:v1.3.1 --context kind-klutch-app
Next, a browser window will open for authentication. Use these demo credentials:
- Username: admin@example.com
- Password: password
In a production environment, use secure, unique credentials.
After authentication, you'll be prompted to select the API to bind. Just click on Bind under the API you want to bind. For our tutorial, let's bind `postgresqlinstance` first.
Back in the terminal, you'll see:
Created objects will be recreated upon deletion. Accepting this Permission is optional.
Do you accept this Permission? [No,Yes]
...
✅ Created APIServiceBinding postgresqlinstances.anynines.com
...
You've successfully accomplished the following steps:
✅ Called the kubectl bind plugin to start the interactive binding process
✅ Authorized the Control Plane Cluster to manage the selected API on your App Cluster.
✅ You've bound the postgresqlinstances resource. You can now apply instances of this resource, for example with the
following yaml:
```yaml
apiVersion: anynines.com/v1
kind: PostgresqlInstance
metadata:
  name: example-a8s-postgresql
  namespace: default
spec:
  service: "a9s-postgresql13"
  plan: "postgresql-single-nano"
  expose: "Internal"
  compositionRef:
    name: a8s-postgresql
```
To bind `servicebinding`, repeat the same process, but click on Bind under the `servicebinding` API in the web UI.
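To double-check the result, you can list the APIServiceBinding objects in the App Cluster and make sure the konnector agent is running. The konnector's namespace is an assumption here (kube-bind is the default used by kubectl-bind); adjust it if your setup differs:

```bash
# Both bound APIs should show up as APIServiceBindings in the App Cluster.
kubectl get apiservicebindings --context kind-klutch-app

# The konnector agent keeps resources in sync between the clusters
# (namespace "kube-bind" is assumed).
kubectl get pods -n kube-bind --context kind-klutch-app
```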
Step 3: Create and Use a PostgreSQL Instance
After binding the `postgresqlinstance` and `servicebinding` APIs, you can create and use PostgreSQL instances in your App Cluster. This section will guide you through creating an instance and using it with a simple blogpost application.
3.1 Create a PostgreSQL Instance
Create a file named `pg-instance.yaml` with the following content:
```yaml
apiVersion: anynines.com/v1
kind: PostgresqlInstance
metadata:
  name: example-a8s-postgresql
  namespace: default
spec:
  service: "a9s-postgresql13"
  plan: "postgresql-single-nano"
  expose: "Internal"
  compositionRef:
    name: a8s-postgresql
```
Apply the file to your App Cluster:
```bash
kubectl apply -f pg-instance.yaml
```
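You can watch the instance from the App Cluster while it is provisioned in the Control Plane Cluster. The exact status columns depend on the installed CRD, so treat this as a rough readiness check:

```bash
# Inspect the PostgreSQL instance claim from the App Cluster.
kubectl get postgresqlinstances.anynines.com --context kind-klutch-app
kubectl describe postgresqlinstance example-a8s-postgresql --context kind-klutch-app
```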
3.2 Create a ServiceBinding
Next, we'll create a ServiceBinding to make the PostgreSQL credentials available to our application.
Create a file named `service-binding.yaml` with the following content:
```yaml
apiVersion: anynines.com/v1
kind: ServiceBinding
metadata:
  name: example-a8s-postgresql
  namespace: default
spec:
  instanceRef: example-a8s-postgresql
  serviceInstanceType: postgresql
  compositionRef:
    name: a8s-servicebinding
```
Apply the ServiceBinding:
```bash
kubectl apply -f service-binding.yaml
```
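When the ServiceBinding has been processed, a credentials secret appears in the App Cluster; the blogpost application below reads its database credentials from it. Checking for the secret is a quick way to confirm the binding worked:

```bash
# The credentials secret consumed by the demo application.
kubectl get secret example-a8s-postgresql-service-binding --context kind-klutch-app
```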
3.3 Configure Local Network for Testing
Before deploying our application, we need to configure the local network to make the PostgreSQL service available in the App Cluster. This step is for local testing purposes and may vary significantly in a production environment.
Create a file named `external-pg-service.yaml` with the following content:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-pg-service
spec:
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-pg-service
subsets:
  - addresses:
      - ip: $(sh -c "if command -v ifconfig >/dev/null 2>&1; then ifconfig | grep 'inet ' | grep -v 127.0.0.1 | awk '{print \$2}' | head -n 1; else ip route get 1 | awk '{print \$7;exit}'; fi")
    ports:
      - port: 5432
```
Apply the file:
```bash
kubectl apply -f <(eval "echo \"$(cat external-pg-service.yaml)\"")
```
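To confirm that the command substitution worked, inspect the resulting objects; the Endpoints entry should contain your machine's non-loopback IP address:

```bash
# The Endpoints object should point at your host's IP on port 5432.
kubectl get service external-pg-service --context kind-klutch-app
kubectl get endpoints external-pg-service -o yaml --context kind-klutch-app
```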
Set up port forwarding in the Control Plane Cluster
a. Open a new terminal window.
b. Switch the kubectl context to the Control Plane Cluster:
```bash
kubectl config use-context kind-klutch-control-plane
```
c. Set up port forwarding using one of the following methods:

- Manual method (replace placeholders with actual values):

  ```bash
  kubectl -n <pg namespace> port-forward svc/example-a8s-postgresql-master 5432:5432 --address <your-ip>
  ```

  OR

- Automatic method:

  ```bash
  bash -c 'get_ip() { if [[ "$OSTYPE" == "darwin"* ]]; then ifconfig | grep "inet " | grep -v 127.0.0.1 | awk "{print \$2}" | head -n 1; else ip -4 addr show scope global | grep inet | awk "{print \$2}" | cut -d / -f 1 | head -n 1; fi; }; NAMESPACE=$(kubectl get namespaces -o name | sed "s/^namespace\///" | grep "^kube-bind.*default$"); IP=$(get_ip); [ -z "$NAMESPACE" ] || [ -z "$IP" ] && exit 1; exec kubectl -n "$NAMESPACE" port-forward svc/example-a8s-postgresql-master 5432:5432 --address "$IP"'
  ```
d. Leave this terminal window running to maintain the port forwarding.
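If you want to verify connectivity before deploying the application, you can probe the forwarded PostgreSQL port from inside the App Cluster with a throwaway pod. The postgres image used here is only for this check and is not part of the tutorial:

```bash
# Run a one-off pod in the App Cluster and probe PostgreSQL through the external service.
kubectl --context kind-klutch-app run pg-check --rm -i --restart=Never \
  --image=postgres:13 -- pg_isready -h external-pg-service -p 5432
```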
3.4 Deploy a Blogpost Application
Now, let's deploy a simple blogpost application that uses our PostgreSQL service. Return to the terminal window where your kubectl context is set to the App Cluster.
Create a file named `blogpost-app.yaml` with the following content:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
  labels:
    app: demo-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
        tier: frontend
    spec:
      containers:
        - name: demo-app
          image: anyninesgmbh/a9s-postgresql-app:1.1.0
          imagePullPolicy: Always
          ports:
            - containerPort: 3000
          env:
            - name: "POSTGRESQL_HOST"
              value: external-pg-service
            - name: "POSTGRESQL_USERNAME"
              valueFrom:
                secretKeyRef:
                  name: example-a8s-postgresql-service-binding
                  key: username
            - name: "POSTGRESQL_PASSWORD"
              valueFrom:
                secretKeyRef:
                  name: example-a8s-postgresql-service-binding
                  key: password
            - name: "POSTGRESQL_PORT"
              value: "5432"
            - name: "POSTGRESQL_DATABASE"
              valueFrom:
                secretKeyRef:
                  name: example-a8s-postgresql-service-binding
                  key: database
            - name: "POSTGRESQL_SSLMODE"
              value: "disable"
          resources:
            limits:
              cpu: "0.5"
              memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  name: demo-app
spec:
  selector:
    app: demo-app
  ports:
    - port: 3000
```
Apply the file:
```bash
kubectl apply -f blogpost-app.yaml
```
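Before exposing the application locally, you can confirm that the Deployment rolled out and found its credentials secret:

```bash
# Wait for the demo application to become available in the App Cluster.
kubectl rollout status deployment/demo-app --context kind-klutch-app
kubectl get pods -l app=demo-app --context kind-klutch-app
```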
3.5 Access the Application
To access the application locally:
Set up port forwarding:
```bash
kubectl port-forward svc/demo-app 3000:3000
```
Open your web browser and navigate to http://localhost:3000 to access the blogpost application. You should now see the blogpost application interface. 🎉
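If you prefer the command line, a quick request against the forwarded port should return an HTTP 200 before you open the browser:

```bash
# Expect "200" from the blogpost application.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3000
```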
Step 4: Clean Up Klutch-Created Clusters
If you need to start over or remove the clusters created by Klutch, use the following command:
```bash
a9s klutch delete
```
This command will remove both the Control Plane Cluster and the App Cluster that were created during the Klutch deployment process.
Use this command with caution as it will delete all resources and data in both the Control Plane and App clusters. Make sure to back up any important data before proceeding.
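Afterwards, you can confirm that the Kind clusters were removed:

```bash
# Neither klutch-control-plane nor klutch-app should be listed anymore.
kind get clusters
```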