Deploying MinIO on an OpenShift cluster has a few peculiarities and may not be an easy task, so this post walks through the instructions needed to get it running.
✅ Important to know: the MinIO Operator DOES NOT WORK with OpenShift 4.16, which is the version used in this implementation. In fact, I’ve had other problems with this operator in the past, even on other OCP versions, which is why I don’t use it. So we will take a different approach to the deployment.
Don’t worry, I’ll explain everything you need to know! 😎
In any case, the official documentation can be found here.
* Red Hat OpenShift 4.16 (it works on other versions too)
* Administrative access to the cluster
We’ll use the official MinIO image in a plain POD managed by a Deployment. Simple as that, without getting too fancy. The process consists of the basics:
- Create Namespace
- Create PVC
- Create Service
- Create Access Routes
- Create a Deployment
🤜🏻 Get to work! 🤛🏻
The Namespace is just the default, nothing special:
$ cat minio-ns.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: minio-ocp
  labels:
    name: minio-ocp

$ oc create -f minio-ns.yaml
namespace/minio-ocp created
Make sure you adjust the Storage Class Name according to your environment.
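If you are unsure which class to use, you can list the Storage Classes available in your cluster first (the default one is flagged with "(default)" in the output):

$ oc get storageclass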
$ cat minio-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: minio-ocp
  namespace: minio-ocp
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: thin-csi
  volumeMode: Filesystem

$ oc create -f minio-pvc.yaml
persistentvolumeclaim/minio-ocp created
I created this one from scratch, based on the behavior of the POD. I took advantage of this and added some probes.
* Adjust the nodeSelector and volumes according to your environment.
$ cat minio-ocp.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: minio
  namespace: minio-ocp
  annotations:
    deployment.kubernetes.io/revision: '1'
spec:
  replicas: 1
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      volumes:
        - name: minio-ocp
          persistentVolumeClaim:
            claimName: minio-ocp
      containers:
        - name: minio
          image: 'quay.io/minio/minio:latest'
          imagePullPolicy: IfNotPresent
          command:
            - /bin/bash
            - '-c'
          args:
            - 'minio server /data --console-address :9090'
          ports:
            - containerPort: 9090
              protocol: TCP
            - containerPort: 9000
              protocol: TCP
          readinessProbe:
            tcpSocket:
              port: 9090
            timeoutSeconds: 5
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          resources: {}
          volumeMounts:
            - name: minio-ocp
              mountPath: /data
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      nodeSelector:
        node-role.kubernetes.io/worker: ''
      securityContext: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
⚠️ Note: replicas: 1 was used because the PVC in this example only supports RWO (ReadWriteOnce). To use more than one replica, you will need a volume that supports RWX (ReadWriteMany).
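For reference only (this was not used in this deployment, and the storage class below is just a placeholder for whatever RWX-capable class your environment provides), an RWX PVC would differ only in the access mode:

$ cat minio-pvc-rwx.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: minio-ocp
  namespace: minio-ocp
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: <your-rwx-storageclass>
  volumeMode: Filesystem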
When creating the Deployment, some warnings will be displayed. They are related to OpenShift’s SCC (Security Context Constraints) restrictions. We’ll see how to adjust this later.
$ oc create -f minio-ocp.yaml
Warning: would violate PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (container "minio" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "minio" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "minio" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "minio" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
deployment.apps/minio created
Check that the active namespace is “minio-ocp”:
$ oc project
Using project "minio-ocp" on server "https://api.onpremises.example.com:6443".
Make sure the POD is running:
$ oc get pods
NAME READY STATUS RESTARTS AGE
minio-5cc789844f-kf8jr 1/1 Running 0 89m
Now analyze which SCCs the POD needs in order to run:
$ oc get deployment/minio -o yaml | oc adm policy scc-subject-review -f -
RESOURCE ALLOWED BY
Pod/minio anyuid
Check the service account used by the POD:
$ oc get pod/minio-5cc789844f-kf8jr -o yaml | grep serviceAccountName
serviceAccountName: default
Apply the necessary policies to the POD service account:
$ oc adm policy add-scc-to-user anyuid -z default
clusterrole.rbac.authorization.k8s.io/system:openshift:scc:anyuid added: "default"
⚠️ Note: This is not the ideal way to grant privileges to the POD. It would be better to create a dedicated service account for MinIO and grant the SCC to that account instead. Even better would be a custom SCC containing only the permissions the POD actually needs to function. I’ll leave that part to you. 😉 A minimal sketch of the dedicated service account approach follows below.
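As a starting point (the service account name minio-sa is just an example, not something from the original setup), the dedicated account could be created and wired to the Deployment like this:

# Create a dedicated service account and grant it the SCC
$ oc create serviceaccount minio-sa -n minio-ocp
$ oc adm policy add-scc-to-user anyuid -z minio-sa -n minio-ocp

# Point the Deployment at the new service account (triggers a new rollout)
$ oc set serviceaccount deployment/minio minio-sa -n minio-ocp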
Check that the PVC has been mounted:
$ oc get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
minio-ocp Bound pvc-807f9aa6-509b-47ed-a0c3-f2bcf7335917 100Gi RWO thin-csi <unset> 12m
Check the port that the MinIO Pod is using:
$ oc describe pod/minio-5cc789844f-kf8jr | grep server
minio server /data --console-address :9090
Use a YAML like this to create the service:
$ cat minio-svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: minio-webui
  namespace: minio-ocp
  labels:
    app: minio
spec:
  ports:
    - name: webui
      protocol: TCP
      port: 9090
      targetPort: 9090
  type: ClusterIP
  selector:
    app: minio
---
kind: Service
apiVersion: v1
metadata:
  name: minio-api
  namespace: minio-ocp
  labels:
    app: minio
spec:
  ports:
    - name: api
      protocol: TCP
      port: 9000
      targetPort: 9000
  type: ClusterIP
  selector:
    app: minio

$ oc create -f minio-svc.yaml
⚠️ Note: You may have noticed that the service declaration includes TCP ports 9090 (WebUI) and 9000 (API). TCP port 9000 will only be activated when the first S3 Bucket is created in MinIO.
Check if the services have been created:
$ oc get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
minio-api ClusterIP 172.32.188.189 <none> 9000/TCP 15m
minio-webui ClusterIP 172.32.73.106 <none> 9090/TCP 15m
To access the MinIO Web console, we need a Route (similar to an Ingress), and the same goes for the S3 endpoint.
⚠️ Adjust the host field according to your environment.
Use a YAML like this to create the route:
$ cat minio-route.yaml
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: webui
  namespace: minio-ocp
  labels:
    app: minio
spec:
  host: webui-minio-ocp.apps.onpremises.example.com
  to:
    kind: Service
    name: minio-webui
    weight: 100
  port:
    targetPort: webui
  wildcardPolicy: None
---
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: s3
  namespace: minio-ocp
  labels:
    app: minio
spec:
  host: s3-minio-ocp.apps.onpremises.example.com
  to:
    kind: Service
    name: minio-api
    weight: 100
  port:
    targetPort: api
  wildcardPolicy: None

$ oc create -f minio-route.yaml
Important:
🔹 The hostname starting with webui-minio-ocp will be used to access MinIO’s WebUI.
🔸 The hostname starting with s3-minio-ocp will be used to access MinIO’s buckets.
Check the endpoints of the routes created:
$ oc get routes
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
s3 s3-minio-ocp.apps.onpremises.example.com minio-api api None
webui webui-minio-ocp.apps.onpremises.example.com minio-webui webui None
⚠️ Note: I recommend that you create Edge routes for MinIO. Consult the MinIO and OpenShift documentation to learn how to configure TLS certificates.
Maybe I’ll do another article to complement this part. Maybe not. 😅
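As a rough sketch of the edge approach (not applied in this walkthrough, so the outputs below still show plain HTTP routes), the WebUI route could be recreated with edge termination, letting the router handle TLS with the cluster’s default certificate:

# Replace the plain route with an edge-terminated one
$ oc delete route webui -n minio-ocp
$ oc create route edge webui --service=minio-webui --port=webui --hostname=webui-minio-ocp.apps.onpremises.example.com -n minio-ocp

The same can be done for the s3 route pointing at the minio-api service.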
Take a look at all the resources created so far:
$ oc get all
NAME READY STATUS RESTARTS AGE
pod/minio-5f6f944f78-fqgjr 1/1 Running 0 56m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/minio-api ClusterIP 172.32.188.189 <none> 9000/TCP 23m
service/minio-webui ClusterIP 172.32.73.106 <none> 9090/TCP 23m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/minio 1/1 1 1 5h1m
NAME DESIRED CURRENT READY AGE
replicaset.apps/minio-5f6f944f78 1 1 1 56m
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
route.route.openshift.io/s3 s3-minio-ocp.apps.onpremises.example.com minio-api api None
route.route.openshift.io/webui webui-minio-ocp.apps.onpremises.example.com minio-webui webui None
Now just test access to the WebUI URL:
http://webui-minio-ocp.apps.onpremises.example.com/
Beautiful! But… Where’s the login?? 🧐
According to the official documentation, the default credentials are:
minioadmin | minioadmin
You can check this information in the POD logs.
🔴 Don’t forget to change the password of the minioadmin user!
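One way to do that on OpenShift (just a sketch; the Secret name minio-root-creds and the user admin are illustrative) is to store the root credentials in a Secret and inject them as the MINIO_ROOT_USER / MINIO_ROOT_PASSWORD environment variables, which MinIO reads at startup:

# Store the root credentials in a Secret
$ oc create secret generic minio-root-creds -n minio-ocp --from-literal=MINIO_ROOT_USER=admin --from-literal=MINIO_ROOT_PASSWORD='use-a-strong-password-here'

# Inject the Secret keys as environment variables (triggers a new rollout)
$ oc set env deployment/minio -n minio-ocp --from=secret/minio-root-creds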
😎 Nice. However, we still need to customize a few details. ⚙️
Finally what you wanted! But before we create the first bucket, we’ll need a user and an access key.
To assign access to a bucket, we need a user. To create one, go to Identity > Users > Create User.
In the example, we have created a user named loki with the password password.
Now assign an Access Key to the loki user:
Now add the Access Key details:
A warning will be displayed, asking you to write down or download the Access Key, as it will not be possible to retrieve the contents of the secret later:
When downloaded, the JSON content will look like this:
{"url":"http://127.0.0.1:9000","accessKey":"imzYtyEHhI8pAPWeJNUI","secretKey":"wiTKO6sFwHKPrWp1jrWkqzb4hl2plX0uPAGpZMgV","api":"s3v4","path":"auto"}
You will notice that in the Service Accounts tab of the loki user, there will be an Access Key.
Go to Administrator > Buckets and create a bucket named loki.
Go to Administrator > Buckets > loki > Access. Note that under Access Audit we will have the user loki.
Access to the bucket is standard S3. In our example, you will need the following data (see the quick test right after this list):
* S3 endpoint: https://s3-minio-ocp.apps.onpremises.example.com
* Bucket Name: loki
* Access Key: imzYtyEHhI8pAPWeJNUI
* Secret Key: wiTKO6sFwHKPrWp1jrWkqzb4hl2plX0uPAGpZMgV
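For a quick sanity check from any machine with the AWS CLI installed (any standard S3 client works; this is just one option), the credentials above can be exported and the bucket listed through the route:

$ export AWS_ACCESS_KEY_ID=imzYtyEHhI8pAPWeJNUI
$ export AWS_SECRET_ACCESS_KEY=wiTKO6sFwHKPrWp1jrWkqzb4hl2plX0uPAGpZMgV
$ aws --endpoint-url https://s3-minio-ocp.apps.onpremises.example.com s3 ls s3://loki/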
When your application mounts the bucket, you’ll be able to see some of the bucket’s usage and consumption details in the MinIO interface.
We can see the data stored in the PVC on the POD:
$ oc rsh minio-5f6f944f78-fqgjr du -h /data/loki/
8.0K /data/loki/index/delete_requests/delete_requests.gz
12K /data/loki/index/delete_requests
8.0K /data/loki/index/index_20097/1736407355-logging-loki-ingester-0-1736401784020620447.tsdb.gz
8.0K /data/loki/index/index_20097/1736407417-logging-loki-ingester-1-1736401848077981316.tsdb.gz
20K /data/loki/index/index_20097
36K /data/loki/index
64K /data/loki/audit/3ab47c3c45f75a78/19449ee35ec:19449ee45f7:2530a44c
68K /data/loki/audit/3ab47c3c45f75a78
72K /data/loki/audit
112K /data/loki/
(...)
We can also install the MinIO client (mc) to manage MinIO via CLI, as in the short example below.
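A minimal sketch (the alias name minio-ocp is arbitrary), reusing the Access Key created earlier:

# Register the endpoint under an alias, then browse it
$ mc alias set minio-ocp https://s3-minio-ocp.apps.onpremises.example.com imzYtyEHhI8pAPWeJNUI wiTKO6sFwHKPrWp1jrWkqzb4hl2plX0uPAGpZMgV
$ mc ls minio-ocp
$ mc ls minio-ocp/loki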
It is possible to set up a connection to Prometheus from OpenShift. However, the process is a bit laborious.
Maybe I’ll do another article to complement this part. Maybe not. 😅
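If you want to experiment before that, here is a rough sketch of where the pieces go (assuming user workload monitoring is enabled on the cluster and that you expose the metrics without authentication via MINIO_PROMETHEUS_AUTH_TYPE=public):

# Make MinIO's Prometheus metrics publicly scrapeable
$ oc set env deployment/minio -n minio-ocp MINIO_PROMETHEUS_AUTH_TYPE=public

$ cat minio-servicemonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: minio
  namespace: minio-ocp
spec:
  selector:
    matchLabels:
      app: minio
  endpoints:
    - port: api
      path: /minio/v2/metrics/cluster
      interval: 30s

$ oc create -f minio-servicemonitor.yaml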
In production environments, create Tenants for better isolation and scalability of MinIO. Read the official documentation to learn how.
I hope this article has been useful to you! 😃
Did you like the content? Check out these other interesting articles! 🔥
✅ Leave a message with your questions! Share this material with your network!