Routing Traffic to Hybrid Nodes
Now that we have our hybrid node instance connected to the cluster, we can
deploy a sample workload using the Deployment and Ingress manifests below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: nginx-remote
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              preference:
                matchExpressions:
                  - key: eks.amazonaws.com/compute-type
                    operator: In
                    values:
                      - hybrid
      containers:
        - name: nginx
          image: public.ecr.aws/nginx/nginx:1.26
          volumeMounts:
            - name: workdir
              mountPath: /usr/share/nginx/html
          resources:
            requests:
              cpu: 200m
            limits:
              cpu: 200m
          ports:
            - containerPort: 80
      initContainers:
        - name: install
          image: busybox:1.28
          command: [ "sh", "-c"]
          args:
            - 'echo "Connected to $(POD_IP) on $(NODE_NAME)" > /work-dir/index.html'
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: workdir
              mountPath: "/work-dir"
      volumes:
        - name: workdir
          emptyDir: {}
We use a nodeAffinity rule to tell the Kubernetes scheduler to prefer cluster nodes
that carry the eks.amazonaws.com/compute-type label with the value hybrid, which EKS applies to hybrid nodes.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  namespace: nginx-remote
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 80
The Ingress resource configures an AWS Application Load Balancer (ALB) to route traffic to the workload pods.
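The Ingress forwards requests to a Service named nginx in the same namespace (it appears in the apply output below). A minimal sketch of that Service, assuming a standard ClusterIP Service selecting the app: nginx pods on port 80, looks like this:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: nginx-remote
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80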
Let's deploy the workload:
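Assuming the Namespace, Service, Deployment, and Ingress manifests are saved together in a file (hypothetically named nginx-remote.yaml here), a single kubectl apply creates all of the resources:
kubectl apply -f nginx-remote.yaml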
namespace/nginx-remote created
service/nginx created
deployment.apps/nginx created
ingress.networking.k8s.io/nginx created
Let's confirm the pods were successfully scheduled on our hybrid node:
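One way to list each pod alongside the node it landed on is kubectl's custom-columns output (a sketch; adjust the namespace if yours differs):
kubectl get pods -n nginx-remote -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName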
NAME                     NODE
nginx-787d665f9b-2bcms   mi-027504c0970455ba5
nginx-787d665f9b-hgrnp   mi-027504c0970455ba5
nginx-787d665f9b-kv4x9   mi-027504c0970455ba5
Great! The three nginx pods are running on our hybrid node as expected.
Provisioning the ALB may take a couple of minutes. Before continuing, confirm the load balancer has finished provisioning with the following command:
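One way to check is to query the load balancer state with the AWS CLI (a sketch; the load balancer name below is inferred from the ALB DNS name shown later and may differ in your account):
aws elbv2 describe-load-balancers \
  --names k8s-nginxrem-nginx-03efa1e84c \
  --query 'LoadBalancers[0].State.Code'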
"active"
Once the ALB is active, we can check the Address associated with the Ingress to retrieve the URL of the ALB:
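For example, the hostname can be read from the Ingress status with jsonpath (a sketch, assuming the namespace used above):
kubectl get ingress nginx -n nginx-remote -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'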
k8s-nginxrem-nginx-03efa1e84c-012345678.us-west-2.elb.amazonaws.com
With the ALB URL, we can access our deployment through the command line or by entering the address into a web browser. The ALB will then route the traffic to the appropriate pods based on the Ingress rules.
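From the command line, a plain curl against the ALB hostname retrieved above is enough:
curl k8s-nginxrem-nginx-03efa1e84c-012345678.us-west-2.elb.amazonaws.com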
Connected to 10.53.0.5 on mi-027504c0970455ba5
In the output from curl or the browser, we can see the 10.53.0.x IP address of the pod that received the request from the load balancer, along with the name of the hybrid node it is running on (the node name with the mi- prefix).
Rerun the curl command or refresh your browser a few times and note that the pod IP changes with each request while the node name stays the same, since all three pods are scheduled on the same remote node.
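For example, a small loop of repeated requests (a sketch, reusing the ALB hostname from above) produces output like the lines that follow:
for i in 1 2 3; do
  curl -s k8s-nginxrem-nginx-03efa1e84c-012345678.us-west-2.elb.amazonaws.com
done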
Connected to 10.53.0.5 on mi-027504c0970455ba5
Connected to 10.53.0.11 on mi-027504c0970455ba5
Connected to 10.53.0.84 on mi-027504c0970455ba5
We've successfully deployed a workload to our hybrid node, configured it to be accessed through an ALB, and verified that the traffic is being properly routed to our pods running on the remote node.
Before we move on to explore more use cases with EKS Hybrid Nodes, let's do a little cleanup.