NooBaa Pods Placement Using Node Affinity and Tolerations

Learn how to use node affinity and tolerations to control where NooBaa pods are placed in OpenShift. This is ideal for MCG deployments on infra nodes and tainted MachineSets in scalable, production-grade clusters.


📖 Estimated reading time: 5 min


Introduction

This guide explains how to configure NooBaa pod placement using node affinity and tolerations for the noobaa-core-0, noobaa-db-pg-0, noobaa-default-backing-store, noobaa-endpoint, and backing-store-* pods in an OpenShift environment. By leveraging node affinity, you can ensure that NooBaa pods are scheduled on specific nodes that meet certain criteria, such as nodes carrying the node-role.kubernetes.io/infra label. This is useful when you need to isolate NooBaa workloads or control their placement for performance or resource management reasons.

Additionally, the patch adds a toleration to the NooBaa pods so they can be scheduled on nodes with specific taints, which gives you further control over where NooBaa resources are deployed.


Use Cases

Isolating NooBaa pods: This approach is useful when you want to isolate NooBaa-related workloads (such as the noobaa-* pods and backing store pods) onto specific nodes within your OpenShift cluster. For example, you might want to dedicate certain nodes to NooBaa for performance, resource optimization, or compliance reasons.

MCG deployments: This method can be applied to Multi-Cloud Gateway (MCG) deployments to control where the NooBaa-related pods (including backing store pods) are placed.

Large Clusters: For large clusters, using a dedicated MachineSet for NooBaa pods is a recommended approach. This ensures that the pods are scheduled on a dedicated group of nodes, helping to avoid interference with other workloads.


When Not to Use

Cluster-wide placement: This approach only moves NooBaa pods (like noobaa-* and backing store pods) and does not affect all pods in the openshift-storage namespace. Therefore, if you want to control the placement of all OpenShift Storage pods, this solution may not be sufficient.

Over-scheduling: If you configure the placement for too many pods on the same set of nodes, it could lead to resource contention. Make sure the nodes where NooBaa pods are scheduled have enough resources (CPU, memory, disk, etc.) to handle the load.
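Before applying the patch, it is worth checking how much headroom the target nodes have. The commands below are a rough sketch; they assume cluster metrics are available and that your infra nodes carry the node-role.kubernetes.io/infra label used later in this guide (replace <infra-node-name> with a real node name):

# Show current CPU/memory usage of the labeled infra nodes (requires metrics)
oc adm top nodes -l node-role.kubernetes.io/infra

# Inspect allocatable capacity and current requests on one of those nodes
oc describe node <infra-node-name> | grep -A 10 "Allocated resources"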


How to Use the Patch

To apply the patch, use the following oc patch command, which configures node affinity and tolerations for NooBaa pods.

 
oc patch storagecluster ocs-storagecluster -n openshift-storage --type merge -p '{
  "spec": {
    "placement": {
      "all": {},
      "noobaa-standalone": {
        "nodeAffinity": {
          "requiredDuringSchedulingIgnoredDuringExecution": {
            "nodeSelectorTerms": [
              {
                "matchExpressions": [
                  {
                    "key": "node-role.kubernetes.io/infra",
                    "operator": "Exists"
                  }
                ]
              }
            ]
          }
        },
        "tolerations": [
          {
            "key": "quay",
            "operator": "Exists",
            "effect": "NoSchedule"
          }
        ]
      }
    }
  }
}'
 

The result will be similar to this:

[Image: OpenShift NooBaa pods placement on the selected infra nodes]
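You can also verify the placement from the CLI. A command along these lines (output will vary per cluster) lists the NooBaa and backing store pods together with the nodes they landed on:

oc get pods -n openshift-storage -o wide | grep -E 'noobaa|backing-store'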

Key Fields in the Patch

About the “quay” Taint:

In the patch, the quay key in the tolerations section allows NooBaa pods to be scheduled on nodes that carry the quay=true:NoSchedule taint. In environments where Quay is part of the infrastructure, the nodes designated for running NooBaa are tainted this way to keep pods without a matching toleration off them; the toleration lets the NooBaa pods through, while the node affinity described below is what restricts them to the infra nodes.
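For reference, the label and taint that the patch expects could be applied to a node like this (the node name is a placeholder; adjust the taint value to whatever your cluster actually uses):

oc label node <infra-node-name> node-role.kubernetes.io/infra=""
oc adm taint nodes <infra-node-name> quay=true:NoSchedule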

Node Affinity (nodeAffinity):

  ◦ This ensures that NooBaa pods are only scheduled on nodes that have the label:
      "node-role.kubernetes.io/infra"

  ◦ It is useful when you want NooBaa pods to be isolated on a specific group of nodes.

Tolerations:

  ◦ This toleration allows NooBaa pods to be scheduled on nodes with the "quay" taint,
    allowing them to be placed on nodes that would normally reject pods without matching
    tolerations.
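After the operator reconciles the change, the affinity and toleration should appear in the NooBaa pod specs. A quick way to confirm both (the pod name may differ in your cluster) is:

# List the nodes that currently carry the infra label (and are therefore eligible)
oc get nodes -l node-role.kubernetes.io/infra

# Show the scheduling constraints that ended up on a NooBaa pod
oc get pod noobaa-core-0 -n openshift-storage -o jsonpath='{.spec.affinity.nodeAffinity}{"\n"}'
oc get pod noobaa-core-0 -n openshift-storage -o jsonpath='{.spec.tolerations}{"\n"}'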

Limitations

Does not move all OpenShift storage pods: The patch applies to NooBaa pods only (including the backing store pods), not other OpenShift Storage-related pods. If you want to control the placement of all pods in the openshift-storage namespace, you need a different approach.

All NooBaa Pods Move Together: The patch moves all NooBaa pods (including the noobaa-* and backing-store-* pods) to the selected nodes based on the node affinity. It does not provide granular control over individual pods; for example, you cannot easily give the core NooBaa pods and the backing store pods different placements.

Resource Management: While this approach allows you to control where NooBaa pods are placed, you must ensure that the selected nodes have sufficient resources (CPU, memory, etc.). Over-scheduling NooBaa pods to a specific set of nodes may lead to resource contention, especially in large clusters.


For larger clusters, you might want to consider setting up a dedicated MachineSet for NooBaa pods. This setup isolates NooBaa workloads from other workloads, ensuring that they run on dedicated nodes with enough resources. Here’s a basic outline of how you might implement this in a large cluster:

Create a MachineSet for NooBaa Pods: Define a MachineSet that applies specific labels and taints to the nodes where NooBaa should be scheduled (see the sketch after this list). The taint keeps other workloads off those nodes, so that effectively only the NooBaa pods, which tolerate it, are placed there.

Apply Node Affinity: Use node affinity (as shown in the patch) to ensure that NooBaa pods are scheduled on the nodes in the MachineSet. This allows you to isolate NooBaa pods from other workloads on the cluster.

Monitor Resources: Regularly monitor resource usage on the dedicated nodes to ensure NooBaa pods are not over-consuming resources, which can impact performance.
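A minimal sketch of such a MachineSet is shown below. The name, replica count, and providerSpec are placeholders, and the cluster-specific selector labels are omitted; in practice you would copy an existing worker MachineSet for your platform and add the node label and taint that the placement patch expects:

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: noobaa-infra                          # hypothetical name
  namespace: openshift-machine-api
spec:
  replicas: 3
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: noobaa-infra
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-machineset: noobaa-infra
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/infra: ""   # label matched by the node affinity in the patch
      taints:
        - key: quay
          value: "true"
          effect: NoSchedule                  # taint tolerated by the patch
      providerSpec: {}                        # cloud-specific fields omitted; copy from an existing worker MachineSet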


By the way, there’s a guide to performance tuning here.


Conclusion

Using node affinity and tolerations to control the placement of NooBaa pods is a powerful way to ensure that NooBaa-related workloads are isolated and scheduled on specific nodes. This approach is ideal for MCG deployments and large clusters, where you want to dedicate resources to NooBaa pods and avoid interference with other workloads.

However, keep in mind that this patch only affects NooBaa pods and not all OpenShift Storage-related pods. Additionally, make sure the selected nodes have sufficient resources to handle the load, and avoid over-scheduling NooBaa pods on the same nodes to prevent resource contention.



Share this post and keep following our site for more updates on Kubernetes and open-source technologies!
