Tanzu Kubernetes networking – Exploring VMware Cloud on AWS-Integrated Services

Kubernetes clusters can be deployed through the Tanzu Kubernetes Grid (TKG) service. The underlying networking services, such as load balancers and Network Address Translation (NAT) rules, are created automatically for applications deployed to Tanzu Kubernetes clusters.

TKG clusters are placed on a namespace network segment and are NATed to other networks in the SDDC. To ensure that NAT pool resources are available for the new network segments, egress and ingress CIDRs are required during the initial setup.

The NSX-T Container Plugin (NCP) creates load balancers for the applications installed in the clusters.

During the initial setup, four CIDR ranges are required to activate the TKG service (an illustrative set of values follows the list):

  • Service CIDR: For the Supervisor cluster service address space
  • Namespace CIDR: For network segments created with new namespaces
  • Ingress CIDR: A pool used to provide ingress access through NSX-T load balancers for workloads
  • Egress CIDR: A pool used for source NAT (SNAT) of outbound traffic from Kubernetes nodes and workloads
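
As a rough illustration, the four ranges might look like the following. Both the field names and the values are hypothetical, not VMware-recommended defaults; the practical requirement is that the ranges are sized for their pools and do not overlap with each other or with existing SDDC networks:

# Illustrative values only – not recommended defaults
service_cidr:   10.96.0.0/23       # Supervisor cluster service address space
namespace_cidr: 10.244.0.0/20      # segments created for new namespaces
ingress_cidr:   192.168.100.0/26   # VIPs allocated by NSX-T load balancers
egress_cidr:    192.168.100.64/26  # SNAT addresses for outbound node traffic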

The following diagram describes each of the preceding segments:

Figure 3.19 – TKG networking segments

The NSX-T Container Plugin configures network access automatically. Kubernetes operators interact with the VMware Cloud on AWS networking environment through the Kubernetes API; for example, they can request a service of type LoadBalancer. NCP then creates an NSX-T load balancer with a Virtual IP (VIP) address allocated from the ingress IP pool, through which the organization can access the applications running in the clusters.
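
As a minimal sketch, a service manifest of type LoadBalancer might look like the following; the application name, label, and ports are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: demo-web              # hypothetical application name
spec:
  type: LoadBalancer          # NCP provisions an NSX-T load balancer and a VIP from the ingress pool
  selector:
    app: demo-web             # matches the Pods backing the application
  ports:
    - port: 80                # port exposed on the VIP
      targetPort: 8080        # container port of the application Pods

Once the manifest is applied (for example, with kubectl apply -f), the VIP allocated from the ingress pool is reported in the service's EXTERNAL-IP field in kubectl get service.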

Tanzu Kubernetes Storage

Tanzu services use vSAN storage to present persistent volumes through Cloud Native Storage (CNS).

vSAN integrates with and enhances cloud-native storage capabilities, and these integrations are important for delivering cloud-native applications on Kubernetes in vSphere. vSAN supports policy-driven dynamic provisioning of Kubernetes persistent volume claims (PVCs).

When a storage policy is assigned to a namespace, the clusters in that namespace get a Kubernetes StorageClass of the same name. Platform operators can request PVCs from this StorageClass to create persistent volumes, and those volumes are provisioned on the datastore identified by the storage policy.
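
As a sketch, a PVC requesting storage from such a StorageClass could look like this; the claim name, StorageClass name, and size are hypothetical, and the StorageClass name should match the storage policy assigned to the namespace:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data                                 # hypothetical claim name
spec:
  storageClassName: vsan-default-storage-policy   # hypothetical; same name as the assigned storage policy
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi                               # requested capacity for the persistent volume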

vSphere administrators can view the PVCs created within the Kubernetes clusters directly in the vCenter UI.

The following diagram describes the CNS framework:

Figure 3.20 – The Tanzu Services CNS framework

In much the same way that networking resources are created from a Kubernetes manifest, platform operators can apply manifests to request PVCs directly through the Kubernetes API.

The Tanzu Kubernetes cluster (TKC) will provision the virtual disk on the vSAN datastore and attach it to the Kubernetes nodes that require that disk.
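
For illustration, a Pod referencing the claim from the previous sketch might look like the following; the Pod name, image, and mount path are placeholders. When the Pod is scheduled, the backing virtual disk is attached to the node that runs it:

apiVersion: v1
kind: Pod
metadata:
  name: demo-app                         # hypothetical Pod name
spec:
  containers:
    - name: app
      image: nginx                       # placeholder image
      volumeMounts:
        - name: data
          mountPath: /var/lib/data       # where the persistent disk is mounted in the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: demo-data             # the PVC requested earlier; its virtual disk is attached to the Pod's node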
