Kubernetes Functionality

 

Autoscaling

On the CaaS-CNP, you can use the Horizontal Pod Autoscaler (HPA) on your deployment. It automatically scales the number of pods based on observed CPU utilization (or on custom metrics, where supported).

The Horizontal Pod Autoscaler is implemented as a Kubernetes API resource and a controller. The resource determines the behavior of the controller. The controller periodically adjusts the number of replicas in a replication controller or deployment so that the observed average CPU utilization matches the target specified by the user.
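As a rough illustration of the controller's scaling calculation (desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization)): with a 50% target, 2 replicas observing an average of 100% CPU would be scaled up to ceil(2 × 100 / 50) = 4 replicas.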

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: <my-hpa>
  namespace: <my-namespace>
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: <my-deployment>
  minReplicas: 2
  maxReplicas: 6
  targetCPUUtilizationPercentage: 50


$ kubectl apply -f hpa.yaml -n <my-namespace>
$ kubectl get hpa -n <my-namespace>

Now generate load on your service; once average CPU utilization rises above 50%, your deployment will scale out automatically.
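For example, a temporary load-generator pod could be used to drive traffic at your service while watching the HPA (the service name <my-service> is a placeholder, and the busybox image may need to be pulled through your registry proxy):

$ kubectl run load-generator --rm -it --image=busybox -n <my-namespace> -- /bin/sh -c "while true; do wget -q -O- http://<my-service>; done"
$ kubectl get hpa <my-hpa> -n <my-namespace> --watch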


Availability Zones

Nodes are created across 2 AZs (for high availability). An AZ can be selected using a nodeSelector (see the example below):
apiVersion: v1
kind: Pod
metadata:
  name: <my-pod>
spec:
  containers:
  - name: <my-container>
    image: <my-image>
  nodeSelector:
    failure-domain.beta.kubernetes.io/zone: eu-west-0a

Storage Classes are per AZ: there is a Storage Class in each AZ, and a pod cannot migrate from one AZ to another while it has a PVC attached.

Namespace Isolation

This example will show you how to isolate your namespace (the default namespace in this example).

Save the following manifest to deny-from-other-namespaces.yaml and apply to the cluster:


kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: default # change it
  name: deny-from-other-namespaces
spec:
  podSelector:
    matchLabels:
  ingress:
  - from:
    - podSelector: {}

$ kubectl apply -f deny-from-other-namespaces.yaml -n <my-namespace>

The .metadata.namespace field defines the namespace where the network policy is deployed: here, default.
The policy applies to ALL pods in the default namespace, as .spec.podSelector.matchLabels is empty and therefore selects all pods.
It allows traffic from ALL pods in the default namespace, as .spec.ingress.from.podSelector is empty and therefore selects all pods.
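To verify the isolation, you could run a temporary pod in another namespace and try to reach a pod in default; once the policy is applied, the request should time out (the namespace and pod IP below are placeholders):

$ kubectl run test --rm -it --image=busybox -n <other-namespace> -- wget -qO- --timeout=2 http://<pod-ip-in-default>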


Allow Ingress Namespace

This example will show you how to allow Ingress Controllers to communicate with your namespace.

Save the following manifest to allow-ingress-nginx-namespace.yaml and apply to the cluster:


kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-ingress-nginx
  namespace: default # change it
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          namespaceName: ingress-nginx

The namespace ingress-nginx was labeled with namespaceName: ingress-nginx.
The .metadata.namespace field deploys it to the default namespace.
The policy applies to ALL pods in the default namespace, as .spec.podSelector.matchLabels is empty and therefore selects all pods.
It allows traffic from ALL pods in the ingress-nginx namespace, as .spec.ingress.from.namespaceSelector selects the namespace carrying the label namespaceName: ingress-nginx.
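If your cluster does not already carry that label (on the CaaS-CNP it may be set by the platform), it could be added with:

$ kubectl label namespace ingress-nginx namespaceName=ingress-nginx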

External IP filtering can be applied on Ingress rules. Please see the Ingress section and its whitelist part.
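For reference, with an NGINX Ingress Controller such a whitelist is usually expressed as an annotation on the Ingress resource; the CIDR and names below are placeholders, and the Ingress apiVersion may differ depending on your cluster version:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: <my-ingress>
  namespace: <my-namespace>
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: "203.0.113.0/24"
spec:
  rules:
  - host: <my-host>
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: <my-service>
            port:
              number: 80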

Storage

Persistent storage can be added to containers for persistent data. If the pod crashes, a new pod will be restarted with the storage volume mounted on it.

Persistent StorageClasses are available per Availability Zone.

Create volume

Create a file mypvc.yaml with this example:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mypvc
  namespace: mynamespace
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10G
  storageClassName: sfs-storage-class
$ kubectl apply -f mypvc.yaml
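You can then check that the claim is bound:

$ kubectl get pvc mypvc -n mynamespace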

Attach volume to a pod

When your volume is created, you can use it in your pod spec:


apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  namespace: mynamespace
spec:
  containers:
  - name: nginx
    image: proxy-docker.amp-prod-repo-artifacts.equant.com/library/nginx
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
      protocol: TCP
    volumeMounts:
    - mountPath: /var/lib/www/html
      name: sfs-data
  volumes:
  - name: sfs-data
    persistentVolumeClaim:
      claimName: mypvc
      readOnly: false

SFS or Cinder

Cinder volumes are bound to an AZ, so you have to select the right AZ for your pod (see the Availability Zones section above). If your pod moves to another AZ, the Cinder PVC will no longer be able to attach to it.
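A minimal sketch of pinning a pod and its Cinder volume to the same AZ (the storage class name <cinder-storage-class> is a placeholder; check the classes available on your cluster):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cinder-pvc
  namespace: mynamespace
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10G
  storageClassName: <cinder-storage-class>
---
apiVersion: v1
kind: Pod
metadata:
  name: <my-pod>
  namespace: mynamespace
spec:
  nodeSelector:
    failure-domain.beta.kubernetes.io/zone: eu-west-0a
  containers:
  - name: <my-container>
    image: <my-image>
    volumeMounts:
    - mountPath: /data
      name: cinder-data
  volumes:
  - name: cinder-data
    persistentVolumeClaim:
      claimName: cinder-pvc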

SFS is not bound to an AZ and can be shared between different pods.
