Dynamic provisioning using FSx for OpenZFS
Now that we understand the FSx for OpenZFS storage class for Kubernetes, let's create a PersistentVolumeClaim and modify the UI component to mount the dynamically provisioned volume.
First, let's examine the `fsxzpvcclaim.yaml` file:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fsxz-claim
  namespace: ui
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: fsxz-vol-sc
  resources:
    requests:
      storage: 1Gi
```
1. The resource being defined is a PersistentVolumeClaim.
2. It references the `fsxz-vol-sc` storage class we created earlier.
3. We are requesting 1GiB of storage.
Now we'll update the UI component to reference the FSx for OpenZFS PVC:
The change is shown in three views below: the Kustomize patch, the resulting `Deployment/ui` manifest, and a diff.

**Kustomize Patch**

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ui
spec:
  replicas: 2
  template:
    spec:
      containers:
        - name: ui
          volumeMounts:
            - name: fsxzvolume
              mountPath: /fsxz
          env:
            - name: RETAIL_UI_PRODUCT_IMAGES_PATH
              value: /fsxz
      volumes:
        - name: fsxzvolume
          persistentVolumeClaim:
            claimName: fsxz-claim
```
**Deployment/ui**

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/created-by: eks-workshop
    app.kubernetes.io/type: app
  name: ui
  namespace: ui
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/component: service
      app.kubernetes.io/instance: ui
      app.kubernetes.io/name: ui
  template:
    metadata:
      annotations:
        prometheus.io/path: /actuator/prometheus
        prometheus.io/port: "8080"
        prometheus.io/scrape: "true"
      labels:
        app.kubernetes.io/component: service
        app.kubernetes.io/created-by: eks-workshop
        app.kubernetes.io/instance: ui
        app.kubernetes.io/name: ui
    spec:
      containers:
        - env:
            - name: RETAIL_UI_PRODUCT_IMAGES_PATH
              value: /fsxz
            - name: JAVA_OPTS
              value: -XX:MaxRAMPercentage=75.0 -Djava.security.egd=file:/dev/urandom
            - name: METADATA_KUBERNETES_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: METADATA_KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: METADATA_KUBERNETES_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          envFrom:
            - configMapRef:
                name: ui
          image: public.ecr.aws/aws-containers/retail-store-sample-ui:1.2.1
          imagePullPolicy: IfNotPresent
          livenessProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 8080
            initialDelaySeconds: 45
            periodSeconds: 20
          name: ui
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
          resources:
            limits:
              memory: 1.5Gi
            requests:
              cpu: 250m
              memory: 1.5Gi
          securityContext:
            capabilities:
              add:
                - NET_BIND_SERVICE
              drop:
                - ALL
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1000
          volumeMounts:
            - mountPath: /fsxz
              name: fsxzvolume
            - mountPath: /tmp
              name: tmp-volume
      securityContext:
        fsGroup: 1000
      serviceAccountName: ui
      volumes:
        - name: fsxzvolume
          persistentVolumeClaim:
            claimName: fsxz-claim
        - emptyDir:
            medium: Memory
          name: tmp-volume
```
**Diff**

```diff
     app.kubernetes.io/type: app
   name: ui
   namespace: ui
 spec:
-  replicas: 1
+  replicas: 2
   selector:
     matchLabels:
       app.kubernetes.io/component: service
       app.kubernetes.io/instance: ui
[...]
       app.kubernetes.io/name: ui
     spec:
       containers:
         - env:
+            - name: RETAIL_UI_PRODUCT_IMAGES_PATH
+              value: /fsxz
             - name: JAVA_OPTS
               value: -XX:MaxRAMPercentage=75.0 -Djava.security.egd=file:/dev/urandom
             - name: METADATA_KUBERNETES_POD_NAME
               valueFrom:
[...]
             readOnlyRootFilesystem: true
             runAsNonRoot: true
             runAsUser: 1000
           volumeMounts:
+            - mountPath: /fsxz
+              name: fsxzvolume
             - mountPath: /tmp
               name: tmp-volume
       securityContext:
         fsGroup: 1000
       serviceAccountName: ui
       volumes:
+        - name: fsxzvolume
+          persistentVolumeClaim:
+            claimName: fsxz-claim
         - emptyDir:
             medium: Memory
           name: tmp-volume
```
Apply these changes with the following command:
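The exact module path depends on how the workshop files are laid out in your environment; a typical Kustomize apply looks like this (the path shown is an assumption):

```bash
# Path is illustrative -- substitute the module directory from your workshop environment
kubectl apply -k ~/environment/eks-workshop/modules/fundamentals/storage/fsxz/deployment
```

The output should resemble: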
```text
namespace/ui unchanged
serviceaccount/ui unchanged
configmap/ui unchanged
service/ui unchanged
persistentvolumeclaim/fsxz-claim created
deployment.apps/ui configured
```
Let's examine the `volumeMounts` in the deployment. Notice that our new volume named `fsxzvolume` is mounted at `/fsxz`:
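One way to inspect them, assuming `yq` is available in your shell:

```bash
kubectl get deployment -n ui ui -o yaml \
  | yq '.spec.template.spec.containers[].volumeMounts'
```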
```yaml
- mountPath: /fsxz
  name: fsxzvolume
- mountPath: /tmp
  name: tmp-volume
```
A PersistentVolume (PV) has been automatically created to fulfill our PersistentVolumeClaim (PVC):
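You can list it with:

```bash
kubectl get pv
```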
```text
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM           STORAGECLASS   REASON   AGE
pvc-342a674d-b426-4214-b8b6-7847975ae121   1Gi        RWX            Delete           Bound    ui/fsxz-claim   fsxz-vol-sc             2m33s
```
Let's examine the details of our PersistentVolumeClaim (PVC):
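Using `kubectl describe`:

```bash
kubectl describe pvc -n ui fsxz-claim
```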
```text
Name:          fsxz-claim
Namespace:     ui
StorageClass:  fsxz-vol-sc
Status:        Bound
Volume:        pvc-342a674d-b426-4214-b8b6-7847975ae121
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: fsx.openzfs.csi.aws.com
               volume.kubernetes.io/storage-provisioner: fsx.openzfs.csi.aws.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWX
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type    Reason                 Age   From                         Message
  ----    ------                 ----  ----                         -------
  Normal  ExternalProvisioning   34s   persistentvolume-controller  waiting for a volume to be created, either by external provisioner "fsx.openzfs.csi.aws.com" or manually created by system administrator
  Normal  Provisioning           34s   fsx.openzfs.csi.aws.com_fsx-openzfs-csi-controller-6b9cdcddf6-kwx7p_35a063fc-5d91-4ba1-9bce-4d71de597b14  External provisioner is provisioning volume for claim "ui/fsxz-claim"
  Normal  ProvisioningSucceeded  33s   fsx.openzfs.csi.aws.com_fsx-openzfs-csi-controller-6b9cdcddf6-kwx7p_35a063fc-5d91-4ba1-9bce-4d71de597b14  Successfully provisioned volume pvc-342a674d-b426-4214-b8b6-7847975ae121
```
At this point, the FSx for OpenZFS file system is successfully mounted but currently empty:
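You can verify this by listing the mount point through one of the UI Pods; the listing should come back empty:

```bash
kubectl exec -n ui deployment/ui -- ls /fsxz/
```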
Let's use a Kubernetes Job to populate the FSx for OpenZFS volume with images:
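The workshop provides a Job manifest for this step; the sketch below approximates it by copying the product images that ship inside the UI container image onto the volume. The Job name and the source directory `/app/static/assets/img/products` are assumptions:

```bash
# Sketch only: the real workshop Job may source the images differently
kubectl apply -n ui -f - <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: populate-images
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: copy-images
          image: public.ecr.aws/aws-containers/retail-store-sample-ui:1.2.1
          # Hypothetical source path inside the UI image
          command: ["bash", "-c", "cp /app/static/assets/img/products/*.jpg /fsxz/"]
          volumeMounts:
            - name: fsxz
              mountPath: /fsxz
      volumes:
        - name: fsxz
          persistentVolumeClaim:
            claimName: fsxz-claim
EOF
kubectl wait --for=condition=complete job/populate-images -n ui --timeout=120s
```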
Now let's demonstrate the shared storage functionality by listing the current files in `/fsxz` through one of the UI component Pods:
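Grab the name of the first Pod and list the directory (the label selector matches the labels in the manifest above):

```bash
POD_1=$(kubectl get pods -n ui -l app.kubernetes.io/name=ui -o jsonpath='{.items[0].metadata.name}')
kubectl exec $POD_1 -n ui -- ls /fsxz/
```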
```text
1ca35e86-4b4c-4124-b6b5-076ba4134d0d.jpg
4f18544b-70a5-4352-8e19-0d070f46745d.jpg
631a3db5-ac07-492c-a994-8cd56923c112.jpg
79bce3f3-935f-4912-8c62-0d2f3e059405.jpg
8757729a-c518-4356-8694-9e795a9b3237.jpg
87e89b11-d319-446d-b9be-50adcca5224a.jpg
a1258cd2-176c-4507-ade6-746dab5ad625.jpg
cc789f85-1476-452a-8100-9e74502198e0.jpg
d27cf49f-b689-4a75-a249-d373e0330bb5.jpg
d3104128-1d14-4465-99d3-8ab9267c687b.jpg
d4edfedb-dbe9-4dd9-aae8-009489394955.jpg
d77f9ae6-e9a8-4a3e-86bd-b72af75cbc49.jpg
```
To further demonstrate the shared storage capabilities, let's create a new image called `placeholder.jpg` and add it to the FSx for OpenZFS volume through the first Pod:
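One simple approach, though the workshop may generate the file differently, is to copy one of the existing images (the source file name comes from the listing above):

```bash
kubectl exec $POD_1 -n ui -- bash -c \
  'cp /fsxz/1ca35e86-4b4c-4124-b6b5-076ba4134d0d.jpg /fsxz/placeholder.jpg'
```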
Now we'll verify that the second UI Pod can access this newly created file, demonstrating the shared nature of our FSx for OpenZFS storage:
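Using the second Pod returned by the same label selector:

```bash
POD_2=$(kubectl get pods -n ui -l app.kubernetes.io/name=ui -o jsonpath='{.items[1].metadata.name}')
kubectl exec $POD_2 -n ui -- ls /fsxz/
```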
```text
1ca35e86-4b4c-4124-b6b5-076ba4134d0d.jpg
4f18544b-70a5-4352-8e19-0d070f46745d.jpg
631a3db5-ac07-492c-a994-8cd56923c112.jpg
79bce3f3-935f-4912-8c62-0d2f3e059405.jpg
8757729a-c518-4356-8694-9e795a9b3237.jpg
87e89b11-d319-446d-b9be-50adcca5224a.jpg
a1258cd2-176c-4507-ade6-746dab5ad625.jpg
cc789f85-1476-452a-8100-9e74502198e0.jpg
d27cf49f-b689-4a75-a249-d373e0330bb5.jpg
d3104128-1d14-4465-99d3-8ab9267c687b.jpg
d4edfedb-dbe9-4dd9-aae8-009489394955.jpg
d77f9ae6-e9a8-4a3e-86bd-b72af75cbc49.jpg
placeholder.jpg   <----------------
```
As you can see, even though we created the file through the first Pod, the second Pod has immediate access to it because they're both accessing the same shared FSx for OpenZFS file system.
Finally, let's confirm that the image is accessible through the UI service:
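The load balancer hostname can be read from the UI service; the service name `ui-nlb` is an inference from the hostname in the sample output below:

```bash
UI_ENDPOINT=$(kubectl get service -n ui ui-nlb \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "http://${UI_ENDPOINT}/assets/img/products/placeholder.jpg"
```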
```text
http://k8s-ui-uinlb-647e781087-6717c5049aa96bd9.elb.us-west-2.amazonaws.com/assets/img/products/placeholder.jpg
```
Visit the URL in your browser to confirm the placeholder image loads.
We've successfully demonstrated how Amazon FSx for OpenZFS provides persistent shared storage for workloads running on Amazon EKS. This solution allows multiple pods to read from and write to the same storage volume simultaneously, making it ideal for shared content hosting and other use cases requiring distributed file system access with high performance and enterprise features.