The agdb_server can be run in Kubernetes using the official Docker image. Optionally, you can build the image yourself and host it wherever you choose. Please refer to the server-docker guide for available images.
You can find an example Kubernetes deployment at https://github.com/agnesoft/agdb/tree/main/examples/k8s
The example breaks down as follows:
First we deploy a K8s Service of type ClusterIP that only allows communication inside the cluster. As we are running database servers, they would typically serve other backends in the same cluster and not be accessible from the outside. If such access is needed, consider using a LoadBalancer type service or an Ingress controller in front of the ClusterIP service. The port available in the cluster is 3000 under the name agdb. Furthermore, we specify the selector value app: agdb and a label of the same value.
apiVersion: v1
kind: Service
metadata:
  name: agdb
  labels:
    app: agdb
spec:
  ports:
  - port: 3000
    name: agdb
  clusterIP: None
  selector:
    app: agdb
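The ClusterIP service above is internal-only. If external access is needed (as mentioned earlier), a LoadBalancer variant could be deployed alongside it. The following is only a sketch; the name agdb-external is illustrative:

```yaml
# Hypothetical external service; cloud providers provision a load balancer for it
apiVersion: v1
kind: Service
metadata:
  name: agdb-external
  labels:
    app: agdb
spec:
  type: LoadBalancer
  ports:
  - port: 3000
    name: agdb
  selector:
    app: agdb
```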
The service is reachable inside the cluster at http://agdb.default.svc.cluster.local:3000, and because it is headless (clusterIP: None) each pod can be addressed directly, for example http://agdb-0.agdb.default.svc.cluster.local:3000. The next document is the pepper secret agdb-pepper.
apiVersion: v1
kind: Secret
metadata:
  name: agdb-pepper
  labels:
    app: agdb
stringData:
  pepper: "1234567891234567"
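In production you will want a random pepper instead of the placeholder value. One possible way to generate a 16-character value (matching the length of the example above) is:

```shell
# Generate a random 16-character alphanumeric pepper
# (the length here simply matches the example value above)
pepper=$(LC_ALL=C tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 16)
printf '%s\n' "$pepper"
```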
Next come the certificates. In production, you should create the certificate secrets with the kubectl command from secure files rather than embedding them into the manifest. However, for demonstration purposes the following example is provided. It uses a self-signed certificate with the corresponding CA. Each file needs to be provided as a base64 value. You can use e.g. the cat cert.pem | base64 shell command to get the correct value (line breaks do not matter).
The certificate must be valid for all the names used in the address field, e.g. agdb-0.agdb.default.svc.cluster.local, agdb-1.agdb.default.svc.cluster.local, agdb-2.agdb.default.svc.cluster.local.
apiVersion: v1
kind: Secret
metadata:
  name: agdb-certs
  labels:
    app: agdb
data:
  cert.pem: ...
  key.pem: ...
  root_ca.pem: ...
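The base64 values for the data fields above can be produced by encoding each file. A sketch follows, where a dummy file stands in for a real certificate:

```shell
# Stand-in for a real certificate file; use your actual cert.pem in practice
printf 'dummy-cert-content' > /tmp/cert.pem
# The output is the value to paste into the Secret's data field
cat /tmp/cert.pem | base64
```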
The configuration named agdb-config, provided via a ConfigMap, is required as we need a specific configuration for each node. Additionally, we specify a custom start.sh script that dynamically assigns the deployment (pod) index as the cluster index on startup. We are using the same certificate in all servers, and it must be valid for all the names it is used for, e.g. agdb-0.agdb.default.svc.cluster.local, agdb-1.agdb.default.svc.cluster.local, agdb-2.agdb.default.svc.cluster.local.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: agdb-config
  labels:
    app: agdb
data:
  start.sh: |
    cp /agdb/config/agdb_server.yaml /agdb/agdb_server.yaml
    sed -i "s/{id}/$AGDB_REPLICA_INDEX/g" /agdb/agdb_server.yaml
    /usr/local/bin/agdb_server
  agdb_server.yaml: |
    bind: :::3000
    address: http://agdb-{id}.agdb.default.svc.cluster.local:3000
    basepath: ""
    static_roots: []
    admin: admin
    log_level: INFO
    data_dir: /agdb/data
    pepper_path: /agdb/pepper/pepper
    tls_certificate: /agdb/certs/cert.pem
    tls_key: /agdb/certs/key.pem
    tls_root: /agdb/certs/root_ca.pem
    cluster_token: cluster
    cluster_heartbeat_timeout_ms: 1000
    cluster_term_timeout_ms: 3000
    cluster: [http://agdb-0.agdb.default.svc.cluster.local:3000, http://agdb-1.agdb.default.svc.cluster.local:3000, http://agdb-2.agdb.default.svc.cluster.local:3000]
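The effect of start.sh can be illustrated locally: the sed call replaces every {id} placeholder in the config template with the pod's index. Here the substitution is simulated with a temporary file and a hard-coded index:

```shell
# Simulate the substitution start.sh performs on the config template
AGDB_REPLICA_INDEX=0
printf 'address: http://agdb-{id}.agdb.default.svc.cluster.local:3000\n' > /tmp/agdb_server.yaml
sed -i "s/{id}/$AGDB_REPLICA_INDEX/g" /tmp/agdb_server.yaml
cat /tmp/agdb_server.yaml
# → address: http://agdb-0.agdb.default.svc.cluster.local:3000
```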
The main part of the deployment is the StatefulSet definition. It uses the selector and labels app: agdb in order to "link" the service and the underlying pods together; Kubernetes uses selectors rather than direct mapping when linking things such as services and pods. We specify 3 replicas as we want to run a 3-node cluster.
The container spec matches the port on the service by the name agdb and exposes the container port 3000 (the default). The security context specifies the user 1000 (the default uid in the container) so the server does not run as root, and disables privilege escalation as it is not needed, which enhances security.
Finally, we specify volumes and volume mounts to add the secrets, the ConfigMap and a persistent volume claim (PVC) at the expected locations. The PVC is how data survives a restart or redeployment. By default, 1 GB of storage is requested, which can be increased (but not decreased) in subsequent deployments.
The custom command running start.sh from the ConfigMap and the environment variable AGDB_REPLICA_INDEX make sure the correct config is assigned to each node on startup. Certificates are provided for TLS support.
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: agdb
  labels:
    app: agdb
spec:
  serviceName: "agdb"
  replicas: 3
  selector:
    matchLabels:
      app: agdb
  template:
    metadata:
      labels:
        app: agdb
    spec:
      containers:
      - name: agdb
        image: agnesoft/agdb:dev
        command: ["sh", "/agdb/config/start.sh"]
        ports:
        - containerPort: 3000
          name: agdb
        securityContext:
          runAsUser: 1000
          runAsGroup: 1000
          allowPrivilegeEscalation: false
        env:
        - name: AGDB_REPLICA_INDEX
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['apps.kubernetes.io/pod-index']
        volumeMounts:
        - name: agdb-data
          mountPath: /agdb/data
        - name: config
          mountPath: /agdb/config
        - name: pepper
          mountPath: /agdb/pepper
        - name: certs
          mountPath: /agdb/certs
      volumes:
      - name: config
        configMap:
          name: agdb-config
          defaultMode: 511
      - name: pepper
        secret:
          secretName: agdb-pepper
      - name: certs
        secret:
          secretName: agdb-certs
  volumeClaimTemplates:
  - metadata:
      name: agdb-data
      labels:
        app: agdb
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
The following commands must be run from within the cluster unless the server was exposed via a LoadBalancer or Ingress. The .default. part of the address is the name of the namespace where everything was deployed.
curl -v https://agdb-0.agdb.default.svc.cluster.local:3000/api/v1/status # should return 200 OK
curl -v https://agdb-1.agdb.default.svc.cluster.local:3000/api/v1/status # should return 200 OK
curl -v https://agdb-2.agdb.default.svc.cluster.local:3000/api/v1/status # should return 200 OK
You can use localhost:3000/api/v1/status as a startup/readiness/health probe.
When talking to a particular node, use its pod address (e.g. agdb-0) rather than the service address, since the latter will route the request to a random node.
If you run agdb only inside the K8s cluster with no visibility outside of it, you may want to disable TLS.
If you expose agdb outside the K8s cluster, use a real production certificate. In that case leave the tls_root configuration option empty.
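The probe suggestion above could be expressed in the container spec as follows. This is a sketch; the timing values are illustrative and the scheme must match your TLS setup:

```yaml
# Hypothetical readiness probe for the agdb container
readinessProbe:
  httpGet:
    path: /api/v1/status
    port: 3000
    scheme: HTTPS
  initialDelaySeconds: 5
  periodSeconds: 10
```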