Deployment
The baler-operator is designed to be versatile and compatible across Kubernetes environments, whether on-premise or in the cloud. This documentation provides a comprehensive guide to deploying the baler-operator to any Kubernetes cluster, with additional notes tailored to specific platforms: Amazon EKS, Azure AKS, Google GKE, and Red Hat OpenShift.
Our goal is to ensure you have all the necessary information and confidence that the baler-operator will seamlessly integrate into your Kubernetes ecosystem.
Any Kubernetes Cluster
Prerequisites
A running Kubernetes cluster
Helm 3 installed
Step 1: Add Helm Repository
First, add the repository that contains the baler-operator chart to Helm:
helm repo add baler-operator https://gatecastle.github.io/baler-operator
helm repo update
Step 2: Create a values.yaml File
Create a values.yaml file with the following content:
You can find the actual values.yaml file here
# Default values for baler-operator.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: gatecastle/baler-operator
  pullPolicy: Always
  # Overrides the image tag whose default is the chart appVersion.
  tag: "0.0.1-dev"

imagePullSecrets: []
# - name: "dockerhub"
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

podAnnotations: {}

podSecurityContext: {}
  # fsGroup: 2000

securityContext: {}
  # capabilities:
  #   drop:
  #     - ALL
  #   add: []
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000
  # allowPrivilegeEscalation: false

service:
  port: 8080

resources:
  # These defaults are a starting point. Tune them for your environment, or
  # remove the limits/requests blocks entirely (leaving `resources: {}`) to
  # let the scheduler place the pod unconstrained, e.g. on Minikube.
  limits:
    cpu: 1000m
    memory: 2Gi
  requests:
    cpu: 500m
    memory: 1Gi

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 1
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

nodeSelector: {}

tolerations: []

affinity: {}
Adjust the parameters according to your needs.
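As an example of such an adjustment, the sketch below shows a hypothetical override file (the file name and values are placeholders, not part of the chart) that raises the replica count and lowers the resource requests, leaving everything else at the chart defaults:

```yaml
# custom-values.yaml -- hypothetical overrides; Helm merges these over the defaults
replicaCount: 2

resources:
  requests:
    cpu: 250m      # placeholder values; size to your cluster
    memory: 512Mi
```

Passing this file with `-f custom-values.yaml` overrides only the keys it sets; all other values keep their chart defaults.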
Step 3: Deploy the Baler Operator
Deploy the baler-operator to your Kubernetes cluster using the following Helm command:
helm install baler-operator baler-operator/baler-operator -f values.yaml --namespace baler --create-namespace
Step 4: Verify Deployment
Verify that the baler-operator has been successfully deployed by checking the status of the pods:
kubectl get pods -n baler
The baler-operator can be deployed to various managed Kubernetes services such as Amazon EKS, Azure AKS, Google GKE, and Red Hat OpenShift. The deployment steps are similar to those for a generic Kubernetes cluster, leveraging Helm and a values.yaml file for configuration.
Amazon Elastic Kubernetes Service (EKS)
GPU Nodes: Amazon EKS supports GPU nodes, enhancing the performance of compute-intensive applications. To utilize GPU nodes, you must select the appropriate EC2 instance types and install the NVIDIA Kubernetes device plugin.
GPU Nodes Label: EKS usually labels GPU nodes with kubernetes.io/accelerator. Use this label in your deployment's node selector to ensure your workloads are scheduled on GPU nodes.
Documentation: For more information on EKS and GPU nodes, refer to the Amazon EKS User Guide and the Managing GPU nodes section in the EKS documentation.
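To target those nodes, the label can go into the nodeSelector section of your values.yaml. A minimal sketch, assuming the label above; the accelerator value (nvidia-tesla-t4) and the nvidia.com/gpu taint are placeholders you should replace with whatever your node group actually uses:

```yaml
# Hypothetical values.yaml fragment for scheduling onto EKS GPU nodes
nodeSelector:
  kubernetes.io/accelerator: nvidia-tesla-t4  # placeholder accelerator value

tolerations:
  - key: nvidia.com/gpu   # assumed taint; GPU node groups are often tainted
    operator: Exists
    effect: NoSchedule
```

Inspect your nodes with `kubectl get nodes --show-labels` to confirm the exact label key and value before relying on this.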
Azure Kubernetes Service (AKS)
GPU Nodes: AKS offers GPU-enabled node pools, suitable for tasks that require heavy computation, such as machine learning and data processing. To leverage GPUs, specify the GPU-enabled VM sizes when creating your node pool.
GPU Nodes Label: AKS uses the accelerator label to mark GPU nodes. Use this label in your node selector to target GPU nodes for your deployments.
Documentation: Learn more about AKS and its GPU capabilities by visiting the AKS documentation and the Use GPUs in AKS section of the AKS documentation.
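Applied to the chart's values.yaml, that label would look like the following sketch; the nvidia value is a placeholder, since the actual value depends on how your node pool is labeled:

```yaml
# Hypothetical values.yaml fragment for scheduling onto AKS GPU node pools
nodeSelector:
  accelerator: nvidia   # placeholder; match the label on your node pool
```

Run `kubectl get nodes --show-labels` against your AKS cluster to verify the label before using it.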
Google Kubernetes Engine (GKE)
GPU Nodes: GKE supports the deployment of clusters with GPU-accelerated nodes, providing significant compute power for your applications. To use GPUs in GKE, you must create a cluster with GPU nodes or add GPUs to an existing cluster.
GPU Nodes Label: GKE assigns the cloud.google.com/gke-accelerator label to nodes configured with GPUs. Use this label in your deployment to ensure pods are scheduled on the appropriate GPU-equipped nodes.
Documentation: Detailed information on using GKE and configuring GPU nodes can be found in the GKE documentation and the Running workloads on GPUs section of the GKE documentation.
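In the chart's values.yaml this translates to a nodeSelector like the sketch below; the GPU type shown (nvidia-tesla-t4) is only an example, so substitute the accelerator type attached to your own node pool:

```yaml
# Hypothetical values.yaml fragment for scheduling onto GKE GPU nodes
nodeSelector:
  cloud.google.com/gke-accelerator: nvidia-tesla-t4  # placeholder GPU type
```

On GKE the label's value is the accelerator type of the node pool, so the value must match what you requested when creating the pool.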
Red Hat OpenShift
GPU Nodes: OpenShift supports GPU nodes through the use of specialized Operators that manage GPU resources, making it possible to run GPU-intensive workloads. Configuring GPU nodes in OpenShift requires additional steps, including the installation of the NVIDIA GPU Operator.
Documentation: For comprehensive guidance on deploying to OpenShift and configuring GPU nodes, refer to the OpenShift documentation.