The VMware Aria Operations for Logs service

The VMware Aria Operations for Logs service is the logging platform included in the VMware Cloud on AWS service.

The VMware Cloud SDDC’s restricted access model does not allow cloud administrators to directly access ESXi hosts or their operational and management logs. Logs can be accessed through only two tools: vCenter Server and the VMware Aria Operations for Logs service.

Each new organization has access to a full-featured trial of the VMware Aria Operations for Logs service for a period of 30 days. After the trial ends, you can either subscribe to the full service or continue using it with a limited subset of features.

The VMware Aria Operations for Logs service offers unified visibility into VMware Cloud on AWS network packet logs. This capability allows organizations to analyze and troubleshoot their application flows by correlating logged packets with specific NSX firewall rules. Organizations can enable logging on individual firewall rules and analyze the traffic patterns of their applications.
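
As a simple illustration of the kind of traffic analysis this enables, the following sketch tallies hits per firewall rule from an exported log file. It is a minimal example, not the product’s documented export schema: the ruleId pattern and the firewall-export.log file name are assumptions made for the sake of the sketch.

```python
import re
from collections import Counter

# Minimal sketch: count how often each NSX firewall rule appears in an
# exported log file. The "ruleId" pattern and the file name are assumptions
# for illustration, not the service's documented export format.
RULE_ID = re.compile(r'ruleId["=:\s]+(\d+)')

def count_rule_hits(path: str) -> Counter:
    hits = Counter()
    with open(path, encoding="utf-8") as log_file:
        for line in log_file:
            match = RULE_ID.search(line)
            if match:
                hits[match.group(1)] += 1
    return hits

if __name__ == "__main__":
    for rule, count in count_rule_hits("firewall-export.log").most_common(10):
        print(f"rule {rule}: {count} hits")
```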

Organizations can access the VMware Aria Operations for Logs service from the VMware Cloud portal, alongside the other VMware Cloud services, and they can ingest logs from various sources, including cloud-native AWS services, VMware Cloud on AWS, on-premises vSphere, and applications directly, as shown in the following figure:

Figure 3.15 – The VMware Aria Operations for Logs service data sources

In summary, the VMware Aria Operations for Logs service allows organizations to do the following:

  • Troubleshoot basic administration tasks, including network firewall rules
  • Demonstrate compliance with auditing and regulatory requirements
  • Gain visibility into activities in the VMware Cloud on AWS deployment, including which users performed what actions and when

VMware Cloud with Tanzu services

VMware Cloud with Tanzu services is included along with the VMware Cloud on AWS subscription.

The Tanzu services portfolio includes a fully managed Kubernetes service that offers an easy path to enterprise-grade Kubernetes deployment and management, accelerating application modernization initiatives.

Crafted specifically for Tanzu services on VMware Cloud offerings such as VMware Cloud on AWS, Tanzu Mission Control Essentials provides a set of essential capabilities to organize your Kubernetes clusters and namespaces for scalable operations, and secure them with access control policies.

The enterprise-grade Kubernetes offering includes a multi-cloud management solution and a Kubernetes-based Containers as a Service (CaaS) platform running on VMware Cloud on AWS Infrastructure as a Service (IaaS).

The Tanzu CaaS offering is based on on-premises vSphere with Tanzu, also known as Tanzu Kubernetes Grid (TKG), delivered as a managed service. Tanzu Mission Control (TMC) Essentials provides a multi-cloud Kubernetes management plane. The following figure shows the Tanzu services included with the VMware Cloud on AWS service – Tanzu Kubernetes Grid and TMC Essentials:

Figure 3.16 – VMware Cloud with Tanzu services

Organizations can leverage the platform to train and enable IT admins to become Kubernetes operators, while using the same operational model and the familiar vCenter interface for all workloads, VMs and containers alike. Organizations can provision enterprise-grade Kubernetes clusters that are secure, upstream-conformant, and isolated from one another within a few minutes.

vSphere administrators can create namespaces to separate resources and grant owners access. Before granting permissions, the vSphere administrator assigns resources, such as CPU, memory, and storage, to the namespace. Any user assigned to this namespace can create Kubernetes clusters until they reach the quotas set by the vSphere administrator.
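
As a hedged illustration of how those quotas surface inside the namespace, the sketch below lists the ResourceQuota objects backing a vSphere namespace using the Kubernetes Python client. It assumes the developer has already logged in and has a valid kubeconfig; demo-namespace is a placeholder name.

```python
from kubernetes import client, config

# Sketch: inspect the resource quotas that back a vSphere namespace.
# Assumes a kubeconfig has already been obtained (for example, via
# kubectl vsphere login); "demo-namespace" is a placeholder.
config.load_kube_config()

v1 = client.CoreV1Api()
for quota in v1.list_namespaced_resource_quota("demo-namespace").items:
    print(f"ResourceQuota: {quota.metadata.name}")
    for resource, limit in (quota.status.hard or {}).items():
        used = (quota.status.used or {}).get(resource, "0")
        print(f"  {resource}: used {used} of {limit}")
```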

Organizations can manage multiple TKG clusters, with observability and troubleshooting, from vCenter Server. The following figure shows how IT admins can now manage both their VM and Kubernetes workloads using Tanzu services on a VMware-managed public cloud infrastructure:

Figure 3.17 – IT administrators can upskill to Kubernetes operations

TKG is built upon the Cluster API open source project, which provides Kubernetes-style APIs that facilitate cluster life cycle management for platform operators. Deployments work from a desired-state configuration: platform operators specify a cluster configuration and submit it to the Supervisor cluster for provisioning. The Supervisor cluster is a specialized Kubernetes cluster that contains all the Cluster API components. It is used to provision the Tanzu Kubernetes clusters where organizations’ workloads run, as well as to scale Tanzu Kubernetes Clusters (TKCs), resize nodes, upgrade clusters, and delete clusters. TKCs are fully conformant with upstream Kubernetes, which means they will run any Kubernetes application. The following figure shows conceptually how the Tanzu Kubernetes Grid Supervisor cluster, integrated into vSphere, provisions and manages Tanzu Kubernetes workload clusters:

Figure 3.18 – The Tanzu Kubernetes Grid Supervisor and workload clusters
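
To make the desired-state flow concrete, the following sketch builds a TanzuKubernetesCluster manifest and submits it to the Supervisor cluster with the Kubernetes Python client. This is a sketch only: the API version and field names follow the v1alpha2 TanzuKubernetesCluster schema and may differ in your release, and the cluster name, namespace, VM class, storage class, and Tanzu Kubernetes release are placeholders. The control plane replica count corresponds to the availability options described later in this section.

```python
from kubernetes import client, config, dynamic

# Sketch of the desired-state flow: describe a cluster and submit it to
# the Supervisor cluster. Field names follow the v1alpha2 schema and may
# differ in your release; all names below are placeholders.
config.load_kube_config()  # kubeconfig pointing at the Supervisor cluster
dyn = dynamic.DynamicClient(client.ApiClient())

tkc_api = dyn.resources.get(
    api_version="run.tanzu.vmware.com/v1alpha2",
    kind="TanzuKubernetesCluster",
)

manifest = {
    "apiVersion": "run.tanzu.vmware.com/v1alpha2",
    "kind": "TanzuKubernetesCluster",
    "metadata": {"name": "demo-tkc", "namespace": "demo-namespace"},
    "spec": {
        "topology": {
            # 3 control plane replicas = the high-availability option;
            # 1 replica = the single-node option.
            "controlPlane": {
                "replicas": 3,
                "vmClass": "best-effort-small",
                "storageClass": "vsan-default-storage-policy",
                "tkr": {"reference": {"name": "v1.23.8---vmware.2-tkg.2"}},
            },
            "nodePools": [
                {
                    "name": "workers",
                    "replicas": 3,
                    "vmClass": "best-effort-small",
                    "storageClass": "vsan-default-storage-policy",
                }
            ],
        }
    },
}

# The Supervisor cluster reconciles the submitted specification into a
# running workload cluster.
tkc_api.create(body=manifest, namespace="demo-namespace")
```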

While the source of the Kubernetes distribution is identical for both the Supervisor cluster and TKCs, it is important to note the following distinctions:

  • The Kubernetes distributions provisioned on the Supervisor cluster and on Tanzu Kubernetes clusters are separate and independent of each other
  • The Kubernetes packaged in the Supervisor cluster is an opinionated installation of Kubernetes
  • The Kubernetes packaged in Tanzu Kubernetes clusters, on the other hand, is an upstream-aligned, fully conformant distribution delivered through Tanzu Kubernetes releases (TKrs)

Tanzu Kubernetes configurations can be overridden by using custom cluster configurations at deployment time. Tanzu Kubernetes clusters can be deployed with one of two availability options:

  • Single node: One control plane node for the cluster
  • High availability: Three control plane nodes for the cluster

Organizations can activate the Tanzu Kubernetes service using a self-service flow in the SDDC console, where they submit network CIDRs. NSX-T uses these CIDR blocks to create new networks and routes for the TKCs.
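
The CIDR blocks requested during activation must not overlap with each other or with networks already reachable from the SDDC. The sketch below shows one way to pre-check that with Python’s ipaddress module; the input labels and every CIDR value are illustrative placeholders, not recommended ranges.

```python
from ipaddress import ip_network
from itertools import combinations

# Sketch: verify that the CIDR blocks planned for Tanzu activation do not
# overlap with each other or with existing networks. All values below are
# placeholders chosen for illustration.
planned = {
    "Service CIDR": ip_network("10.96.0.0/23"),
    "Namespace Network CIDR": ip_network("10.244.0.0/20"),
    "Ingress CIDR": ip_network("10.100.0.0/24"),
    "Egress CIDR": ip_network("10.101.0.0/24"),
}
existing = {
    "SDDC management network": ip_network("10.2.0.0/16"),
    "On-premises network": ip_network("192.168.0.0/16"),
}

for (name_a, net_a), (name_b, net_b) in combinations({**planned, **existing}.items(), 2):
    if net_a.overlaps(net_b):
        print(f"Conflict: {name_a} ({net_a}) overlaps {name_b} ({net_b})")
```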

After activation is complete, a Supervisor cluster will be deployed on your VMware Cloud on AWS instance.

Note

The service requires a minimum of three hosts in a vSphere cluster for activation.
