Persistent object storage with S3
In our previous steps, we prepared our environment by creating a staging directory for image objects, downloading image assets, and uploading them to our S3 bucket. We also installed and configured the Mountpoint for Amazon S3 CSI driver. Now we'll complete our objective of building an image host application with horizontal scaling and persistent storage backed by Amazon S3 by configuring our Pods to use a Persistent Volume (PV) provided by the Mountpoint for Amazon S3 CSI driver.
Let's start by creating a Persistent Volume and modifying the ui container in our deployment to mount this volume.
First, let's examine the s3pvclaim.yaml file to understand its parameters and configuration:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: s3-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - allow-delete
    - allow-other
    - uid=1000
    - gid=1000
    - region=$AWS_REGION
  csi:
    driver: s3.csi.aws.com
    volumeHandle: s3-csi-driver-volume
    volumeAttributes:
      bucketName: $BUCKET_NAME
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: s3-claim
  namespace: ui
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
  volumeName: s3-pv
- ReadWriteMany: Allows the same S3 bucket to be mounted to multiple Pods for read/write access
- allow-delete: Allows users to delete objects from the mounted bucket
- allow-other: Allows users other than the owner to access the mounted bucket
- uid=: Sets the User ID (UID) of files/directories in the mounted bucket
- gid=: Sets the Group ID (GID) of files/directories in the mounted bucket
- region=$AWS_REGION: Sets the region of the S3 bucket
- bucketName: Specifies the name of the S3 bucket to mount
First, here's the Kustomize patch that updates the ui Deployment to scale to two replicas and mount the new volume:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ui
spec:
  replicas: 2
  template:
    spec:
      containers:
        - name: ui
          volumeMounts:
            - name: mountpoint-s3
              mountPath: /mountpoint-s3
          env:
            - name: RETAIL_UI_PRODUCT_IMAGES_PATH
              value: /mountpoint-s3
      volumes:
        - name: mountpoint-s3
          persistentVolumeClaim:
            claimName: s3-claim
After the patch is applied, the complete Deployment manifest looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/created-by: eks-workshop
    app.kubernetes.io/type: app
  name: ui
  namespace: ui
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/component: service
      app.kubernetes.io/instance: ui
      app.kubernetes.io/name: ui
  template:
    metadata:
      annotations:
        prometheus.io/path: /actuator/prometheus
        prometheus.io/port: "8080"
        prometheus.io/scrape: "true"
      labels:
        app.kubernetes.io/component: service
        app.kubernetes.io/created-by: eks-workshop
        app.kubernetes.io/instance: ui
        app.kubernetes.io/name: ui
    spec:
      containers:
        - env:
            - name: RETAIL_UI_PRODUCT_IMAGES_PATH
              value: /mountpoint-s3
            - name: JAVA_OPTS
              value: -XX:MaxRAMPercentage=75.0 -Djava.security.egd=file:/dev/urandom
            - name: METADATA_KUBERNETES_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: METADATA_KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: METADATA_KUBERNETES_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          envFrom:
            - configMapRef:
                name: ui
          image: public.ecr.aws/aws-containers/retail-store-sample-ui:1.2.1
          imagePullPolicy: IfNotPresent
          livenessProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 8080
            initialDelaySeconds: 45
            periodSeconds: 20
          name: ui
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
          resources:
            limits:
              memory: 1.5Gi
            requests:
              cpu: 250m
              memory: 1.5Gi
          securityContext:
            capabilities:
              add:
                - NET_BIND_SERVICE
              drop:
                - ALL
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1000
          volumeMounts:
            - mountPath: /mountpoint-s3
              name: mountpoint-s3
            - mountPath: /tmp
              name: tmp-volume
      securityContext:
        fsGroup: 1000
      serviceAccountName: ui
      volumes:
        - name: mountpoint-s3
          persistentVolumeClaim:
            claimName: s3-claim
        - emptyDir:
            medium: Memory
          name: tmp-volume
And the diff below highlights the changes the patch introduces:
     app.kubernetes.io/type: app
   name: ui
   namespace: ui
 spec:
-  replicas: 1
+  replicas: 2
   selector:
     matchLabels:
       app.kubernetes.io/component: service
       app.kubernetes.io/instance: ui
[...]
         app.kubernetes.io/name: ui
     spec:
       containers:
         - env:
+            - name: RETAIL_UI_PRODUCT_IMAGES_PATH
+              value: /mountpoint-s3
             - name: JAVA_OPTS
               value: -XX:MaxRAMPercentage=75.0 -Djava.security.egd=file:/dev/urandom
             - name: METADATA_KUBERNETES_POD_NAME
               valueFrom:
[...]
             readOnlyRootFilesystem: true
             runAsNonRoot: true
             runAsUser: 1000
           volumeMounts:
+            - mountPath: /mountpoint-s3
+              name: mountpoint-s3
             - mountPath: /tmp
               name: tmp-volume
       securityContext:
         fsGroup: 1000
       serviceAccountName: ui
       volumes:
+        - name: mountpoint-s3
+          persistentVolumeClaim:
+            claimName: s3-claim
         - emptyDir:
             medium: Memory
           name: tmp-volume
Now let's apply this configuration and redeploy our application:
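The exact location of the manifests depends on your setup; assuming the PV/PVC definitions and the Kustomize patch live in a local overlay directory and that AWS_REGION and BUCKET_NAME are exported in your shell, the apply looks roughly like this (the overlay path is a placeholder):

$ kubectl kustomize <path-to-s3-overlay> | envsubst | kubectl apply -f -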
namespace/ui unchanged
serviceaccount/ui unchanged
configmap/ui unchanged
service/ui unchanged
persistentvolume/s3-pv created
persistentvolumeclaim/s3-claim created
deployment.apps/ui configured
We'll monitor the deployment progress:
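For example:

$ kubectl rollout status deployment/ui -n ui --timeout=120s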
deployment "ui" successfully rolled out
Let's verify our volume mounts, noting the new /mountpoint-s3 mount point:
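One way to inspect the container's volume mounts (this sketch assumes yq is available):

$ kubectl get deployment ui -n ui -o yaml | yq '.spec.template.spec.containers[0].volumeMounts'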
- mountPath: /mountpoint-s3
  name: mountpoint-s3
- mountPath: /tmp
  name: tmp-volume
Now let's examine our newly created PersistentVolume:
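This can be done with:

$ kubectl get pv s3-pv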
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM         STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
s3-pv   1Gi        RWX            Retain           Bound    ui/s3-claim                  <unset>                          2m31s
Let's review the PersistentVolumeClaim details:
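For example:

$ kubectl describe pvc s3-claim -n ui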
Name:          s3-claim
Namespace:     ui
StorageClass:  
Status:        Bound
Volume:        s3-pv
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWX
VolumeMode:    Filesystem
Used By:       ui-9fbbbcd6f-c74vv
               ui-9fbbbcd6f-vb9jz
Events:        <none>
Let's verify our running pods:
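This can be checked with:

$ kubectl get pods -n ui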
NAME                 READY   STATUS    RESTARTS   AGE
ui-9fbbbcd6f-c74vv   1/1     Running   0          2m36s
ui-9fbbbcd6f-vb9jz   1/1     Running   0          2m38s
Now let's examine our final deployment configuration with the Mountpoint for Amazon S3 CSI driver:
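For instance:

$ kubectl describe deployment ui -n ui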
Name:       ui
Namespace:  ui
[...]
  Containers:
   ui:
    Image:      public.ecr.aws/aws-containers/retail-store-sample-ui:1.2.1
    Port:       8080/TCP
    Host Port:  0/TCP
    Limits:
      memory:  128Mi
    Requests:
      cpu:     128m
      memory:  128Mi
[...]
    Mounts:
      /mountpoint-s3 from mountpoint-s3 (rw)
      /tmp from tmp-volume (rw)
  Volumes:
   mountpoint-s3:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  s3-claim
    ReadOnly:   false
   tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  <unset>
[...]
Now let's demonstrate the shared storage functionality. First, we'll list the current files in /mountpoint-s3 through one of the UI component Pods:
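A sketch of how to do this (POD_1 is just a shell variable we introduce here to hold the first Pod's name):

$ POD_1=$(kubectl get pods -n ui -l app.kubernetes.io/name=ui -o jsonpath='{.items[0].metadata.name}')
$ kubectl exec -n ui $POD_1 -- ls /mountpoint-s3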
1ca35e86-4b4c-4124-b6b5-076ba4134d0d.jpg
4f18544b-70a5-4352-8e19-0d070f46745d.jpg
631a3db5-ac07-492c-a994-8cd56923c112.jpg
79bce3f3-935f-4912-8c62-0d2f3e059405.jpg
8757729a-c518-4356-8694-9e795a9b3237.jpg
87e89b11-d319-446d-b9be-50adcca5224a.jpg
a1258cd2-176c-4507-ade6-746dab5ad625.jpg
cc789f85-1476-452a-8100-9e74502198e0.jpg
d27cf49f-b689-4a75-a249-d373e0330bb5.jpg
d3104128-1d14-4465-99d3-8ab9267c687b.jpg
d4edfedb-dbe9-4dd9-aae8-009489394955.jpg
d77f9ae6-e9a8-4a3e-86bd-b72af75cbc49.jpg
We can see the list of images matches what we uploaded to the S3 bucket earlier. Now let's generate a new image called placeholder.jpg and add it to our S3 bucket through the same Pod:
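How the placeholder image is produced isn't important for the demonstration; as a simple stand-in, you could copy one of the existing images to the new name through the mounted path (this assumes basic shell utilities are present in the container image and reuses the POD_1 variable from above):

$ kubectl exec -n ui $POD_1 -- cp /mountpoint-s3/1ca35e86-4b4c-4124-b6b5-076ba4134d0d.jpg /mountpoint-s3/placeholder.jpg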
To verify the persistence and sharing of our storage layer, let's check for the file we just created using the second UI Pod:
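For example, using the second Pod:

$ POD_2=$(kubectl get pods -n ui -l app.kubernetes.io/name=ui -o jsonpath='{.items[1].metadata.name}')
$ kubectl exec -n ui $POD_2 -- ls /mountpoint-s3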
1ca35e86-4b4c-4124-b6b5-076ba4134d0d.jpg
4f18544b-70a5-4352-8e19-0d070f46745d.jpg
631a3db5-ac07-492c-a994-8cd56923c112.jpg
79bce3f3-935f-4912-8c62-0d2f3e059405.jpg
8757729a-c518-4356-8694-9e795a9b3237.jpg
87e89b11-d319-446d-b9be-50adcca5224a.jpg
a1258cd2-176c-4507-ade6-746dab5ad625.jpg
cc789f85-1476-452a-8100-9e74502198e0.jpg
d27cf49f-b689-4a75-a249-d373e0330bb5.jpg
d3104128-1d14-4465-99d3-8ab9267c687b.jpg
d4edfedb-dbe9-4dd9-aae8-009489394955.jpg
d77f9ae6-e9a8-4a3e-86bd-b72af75cbc49.jpg
placeholder.jpg <----------------
Finally, let's verify its presence in the S3 bucket:
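Assuming the BUCKET_NAME environment variable still holds the bucket name from the earlier steps:

$ aws s3 ls s3://$BUCKET_NAME/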
2025-07-09 14:43:36 102950 1ca35e86-4b4c-4124-b6b5-076ba4134d0d.jpg
2025-07-09 14:43:36 118546 4f18544b-70a5-4352-8e19-0d070f46745d.jpg
2025-07-09 14:43:36 147820 631a3db5-ac07-492c-a994-8cd56923c112.jpg
2025-07-09 14:43:36 100117 79bce3f3-935f-4912-8c62-0d2f3e059405.jpg
2025-07-09 14:43:36 106911 8757729a-c518-4356-8694-9e795a9b3237.jpg
2025-07-09 14:43:36 113010 87e89b11-d319-446d-b9be-50adcca5224a.jpg
2025-07-09 14:43:36 171045 a1258cd2-176c-4507-ade6-746dab5ad625.jpg
2025-07-09 14:43:36 170438 cc789f85-1476-452a-8100-9e74502198e0.jpg
2025-07-09 14:43:36 97592 d27cf49f-b689-4a75-a249-d373e0330bb5.jpg
2025-07-09 14:43:36 169246 d3104128-1d14-4465-99d3-8ab9267c687b.jpg
2025-07-09 14:43:36 151884 d4edfedb-dbe9-4dd9-aae8-009489394955.jpg
2025-07-09 14:43:36 134344 d77f9ae6-e9a8-4a3e-86bd-b72af75cbc49.jpg
2025-07-09 15:10:27 10024 placeholder.jpg <----------------
Now we can confirm the image is available through the UI:
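The URL is built from the UI service's load balancer hostname. Assuming the UI is exposed through a LoadBalancer Service named ui-nlb (the name is inferred from the hostname below), it can be constructed like this:

$ UI_HOST=$(kubectl get service -n ui ui-nlb -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
$ echo "http://${UI_HOST}/assets/img/products/placeholder.jpg"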
http://k8s-ui-uinlb-647e781087-6717c5049aa96bd9.elb.us-west-2.amazonaws.com/assets/img/products/placeholder.jpg
Visit the URL in your browser to confirm that the placeholder image is served by the application.
With that, we've successfully demonstrated how we can use Mountpoint for Amazon S3 for persistent shared storage for workloads running on EKS.