Azure VMware Solution
What is Azure VMware Solution?
Azure VMware Solution (AVS) is a software stack based on the VMware Cloud Foundation (VCF) framework that is owned, operated, and supported by Microsoft within Azure data centers. It allows seamless integration with Native Azure services and provides full connectivity to an existing On-Premises data center.
One of the primary benefits of AVS is that it gives customers the option to “stripe” their Compute, Storage, and Networking services across On-Premises, AVS, and Native Azure while maintaining operational consistency.

“What is New?” in Azure VMware Solution (VCF)
Below is a public-facing Microsoft reference that provides information on the latest supported features for Azure VMware Solution on a month-by-month basis.
https://learn.microsoft.com/en-us/azure/azure-vmware/azure-vmware-solution-platform-updates
Azure VMware Solution (AVS) with VMware Cloud Foundation
Microsoft Public Announcement:
Documentation References:
Further reference documentation can be located below.
Microsoft Documentation:
https://learn.microsoft.com/en-us/azure/azure-vmware/introduction
What are the AVS Deployment Models with Native Azure?
Since Azure VMware Solution leverages the L3/ExR backbone from Microsoft Azure, it provides flexible connectivity options for the following AVS/Native Azure Deployment Models without the need for additional L3 routers/platforms to connect from the AVS SDDC Instance to the Native Azure vNets:
- (1) AVS SDDC Instance : (1) Native Azure vNet
- (Many) AVS SDDC Instances : (1) Native Azure vNet
- (1) AVS SDDC Instance : (Many) Native Azure vNets
- (Many) AVS SDDC Instances : (Many) Native Azure vNets

How Do I Connect from On-Premises to AVS?
Azure VMware Solution (AVS) leverages Azure’s Express Route (ExR) and Global Reach services to provide a Layer-3 (L3) service from end-to-end. This allows for the flexibility to connect to AVS from On-Premises as well as connect to Native Azure Services via the same L3 connection. This connectivity option provides flexibility for different AVS/Native Azure Deployment Models:
Connect from On-Premises to AVS using Azure’s Express Route

Azure VMware Solution (AVS) Network Planning Checklist
What are the Connectivity Options for AVS Interconnect, ExpressRoute, and Global Reach for AVS?
Connecting On-Premises to an AVS SDDC via ExpressRoute and Global Reach
In order to connect a customer’s On-Premises environment to Azure VMware Solution, the following two Native Azure networking constructs are required:
- ExpressRoute (ExR)
- ExpressRoute Global Reach (ExR GR)
To understand why both constructs are required for AVS, we first need to understand how Microsoft Azure has traditionally connected two On-Premises data centers, since AVS is essentially connected to the ExpressRoute backbone in the same way as an On-Premises data center.
Independent of Azure VMware Solution, Microsoft has offered customers the option to use ExpressRoute as a private link service to connect from On-Premises Data Centers to Native Azure. The following diagram shows how the ExR service is connected via a 3rd-Party Cloud Service Provider (e.g., Megaport or Equinix) and leverages a “Service Key” to orchestrate the ExR connections:
Note: Please refer to the recordings in this section for further detail.

Within this diagram, customers would still require a separate connection of their own between the “DC1” and “DC2” data centers. Since there are two ExR connections connecting to the Microsoft backbone, a secondary Microsoft Azure service termed “ExpressRoute Global Reach” connects the two ExR connections together via an “Authorization Key” (Note: Details in the recorded demonstration) so that the two different data centers are able to communicate over the Microsoft backbone. With the combination of ExpressRoute (ExR) to connect from On-Premises to the Microsoft Azure backbone and the ExpressRoute Global Reach (ExR GR) service to connect the two ExR connections over the same backbone, customers could consider removing their local connection between data centers:
https://docs.microsoft.com/en-us/azure/expressroute/expressroute-global-reach
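For illustration, the pairing workflow described above could be scripted with the Azure CLI, driven here from Python via subprocess. This is a minimal sketch, not an official procedure: the resource group, circuit names, connection name, and the /29 address prefix are hypothetical placeholders, and the exact parameters should be verified against the current Azure CLI documentation for ExpressRoute Global Reach.

    import subprocess

    # Hypothetical names/IDs for illustration; substitute your own values.
    RG = "network-rg"
    CIRCUIT_DC1 = "dc1-exr-circuit"
    CIRCUIT_DC2 = "dc2-exr-circuit"
    CIRCUIT_DC2_ID = ("/subscriptions/<sub-id>/resourceGroups/network-rg/providers/"
                      "Microsoft.Network/expressRouteCircuits/dc2-exr-circuit")

    def az(*args):
        # Thin wrapper around the Azure CLI; raises if the command fails.
        return subprocess.run(("az",) + args, capture_output=True,
                              text=True, check=True).stdout

    # Create an "Authorization Key" on the DC2 circuit (done by its owner).
    az("network", "express-route", "auth", "create", "-g", RG,
       "--circuit-name", CIRCUIT_DC2, "-n", "gr-auth")
    key = az("network", "express-route", "auth", "show", "-g", RG,
             "--circuit-name", CIRCUIT_DC2, "-n", "gr-auth",
             "--query", "authorizationKey", "-o", "tsv").strip()

    # Connect the two circuits over the Microsoft backbone (Global Reach).
    # The /29 prefix is used internally for the circuit-to-circuit link.
    az("network", "express-route", "peering", "connection", "create", "-g", RG,
       "--circuit-name", CIRCUIT_DC1, "--peering-name", "AzurePrivatePeering",
       "-n", "dc1-to-dc2", "--peer-circuit", CIRCUIT_DC2_ID,
       "--address-prefix", "192.168.8.0/29", "--authorization-key", key)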

In relation to Azure VMware Solution, the AVS SDDC instance is directly connected to the Microsoft Azure backbone by a dedicated ExR connection, so it looks like a “Branch” location from an ExR backbone perspective. In order to connect the On-Premises Data Center to the AVS SDDC, the secondary “ExpressRoute Global Reach” service must be deployed. From an ExR perspective, the connectivity deployment model between an On-Premises data center and an AVS SDDC is the same as connecting two separate On-Premises data centers as described in the previous section.

AVS ExpressRoute / Global Reach Overview (YouTube)
AVS ExpressRoute / Global Reach Demonstration (YouTube)
Connecting Two AVS SDDC Instances via Global Reach (Different Regions)
Connecting Two AVS SDDC Instances via AVS Interconnect (Same Region)




vCenter
What are the ‘cloudadmin’ Privileges for vCenter within AVS?
https://learn.microsoft.com/en-us/azure/azure-vmware/architecture-identity
vSAN
NSX
What is the Difference between a “DNS Service” and a “DNS Server”?
NSX supports a “DNS Service” which forwards DNS Requests to their respective “DNS Servers” based on the domain; a “DNS Server” resolves the DNS Requests and sends the response. By default for Azure VMware Solution, all DNS Requests that are sent to the “DNS Service” will be forwarded to an Internet-based DNS Resolver (e.g., Cloudflare = 1.1.1.1, 1.0.0.1) unless there is a “DNS Zone” configured within NSX, which maps the requested DNS domain (acme.com = 10.10.10.10 [DNS Server]) to the respective DNS Server. The “DNS Service” within NSX for Azure VMware Solution uses an IP address from the IP address space/IP CIDR that was used for the SDDC deployment with a Host Address of [.192] (AVS SDDC’s IP CIDR = 10.30.0.0/22 -> NSX DNS Service = 10.30.0.192).
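As a quick worked example of the [.192] host address rule above, the following minimal sketch derives the NSX DNS Service address from an SDDC deployment CIDR using Python’s standard ipaddress module:

    import ipaddress

    def nsx_dns_service_ip(sddc_cidr):
        # The NSX DNS Service uses host address [.192] within the IP CIDR
        # that was used for the AVS SDDC deployment.
        return ipaddress.ip_network(sddc_cidr)[192]

    print(nsx_dns_service_ip("10.30.0.0/22"))  # -> 10.30.0.192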

What is the Difference between a “DNS Service” and a “DNS Zone”?
A “DNS Zone” provides a means to forward DNS Requests based on the specified domain. By default, all DNS Requests will be forwarded to an Internet-based DNS service. If there is an On-Premises DNS Server, NSX can be configured to use a “DNS Zone” to forward the DNS Requests to the On-Premises DNS Server based on the domain (e.g., vcf.io -> 10.39.1.11).
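To verify this behavior from a VM inside the SDDC, a hedged sketch using the third-party dnspython package is shown below; the DNS Service address and domains reuse the examples above:

    import dns.resolver  # third-party package: pip install dnspython

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["10.30.0.192"]  # NSX DNS Service (example above)

    # A domain covered by a DNS Zone (e.g., vcf.io) should resolve via the
    # On-Premises DNS Server; other names fall through to the Internet resolver.
    for name in ("vcf.io", "example.com"):
        answer = resolver.resolve(name, "A")
        print(name, [record.address for record in answer])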

Does AVS Support Multicast within NSX?
Azure VMware Solution does support [L2 Multicast] (IGMP) by default, where the multicast traffic is treated like broadcast traffic on a L2 Segment within NSX. The multicast source and the multicast destination need to be on the same L2 Segment in order to perform properly and be officially supported. The default settings for NSX Segments with the Overlay Transport Zone will automatically support [L2 Multicast].

[L3 Multicast], which is referred to as “Protocol Independent Multicast” or “PIM”, is unsupported in AVS. By default, this is disabled within the NSX Manager for AVS.
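Since [L2 Multicast] is only supported when the source and destination share a segment, a simple way to validate it is to run a receiver on one VM and a sender on another VM within the same NSX Segment. The sketch below uses only the Python standard library; the group address and port are arbitrary test values, and the sender TTL of 1 keeps the traffic on the local segment:

    import socket
    import struct
    import sys

    GROUP, PORT = "239.1.1.1", 5000  # arbitrary test group/port

    if sys.argv[1] == "recv":
        # Receiver: join the multicast group (IGMP) and print one datagram.
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("", PORT))
        mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        print(s.recvfrom(1500))
    else:
        # Sender: TTL=1 keeps the traffic on the local L2 Segment, which
        # matches the officially supported AVS use case.
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
        s.sendto(b"avs-l2-multicast-test", (GROUP, PORT))

Run the script with [recv] on the receiving VM first, then with any other argument on the sending VM; the receiver should print the test datagram only when both VMs are on the same segment.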




AVI Load Balancer (a.k.a. NSX Advanced Load Balancer) [Add-On Feature]
The AVI Load Balancer, which was formerly known as the NSX Advanced Load Balancer, is covered within the “Add-On” section of this document.
Does NSX Support [ip directed-broadcast]?
Within NSX, [ip directed-broadcast] is unsupported. If this is a requirement, a 3rd-Party NVA (e.g., Cisco Router) can be attached to an NSX Tier1 Router, and the segments that require [ip directed-broadcast] support can be directly connected to the 3rd-Party NVA, which removes the dependency on the NSX Tier1 Router.
HCX
What is HCX?
HCX is a platform solution which automates and orchestrates the migration of workloads from a VCF Source to a VCF Destination. Since this is a “Like-for-Like” migration, the movement of workloads is seamless using the HCX Interconnect (IX) Appliance. If there is a need to maintain the IP address from VCF Source to VCF Destination, the HCX Network Extension (NE) Appliance can be independently deployed to provide the network connectivity during the migration process and avoid the need to readdress the VM workloads once moved.
Installing and/or Upgrading to HCX 4.11 in “Local Mode” (vSAN Datastore)
Prior to the HCX 4.11 release, the HCX Manager required connectivity to the [https://connect.hcx.vmware.com] and [https://hybridity-depot.vmware.com] services, which provided licensing activation [https://<HCX Manager IP Address>:9443] and notifications about HCX software/system updates [https://<HCX Manager IP Address>].
HCX Manager [https://<HCX Manager IP Address>:9443]: “Licensing and Activation”

HCX Manager [https://<HCX Manager IP Address>]: “System Updates”

Before the HCX 4.11 release (HCX Version <= 4.10), the [https://connect.hcx.vmware.com] service would provide a notification of a new software release, and a list would be provided when choosing “Check for Updates” via the HCX Manager User Interface.
HCX Manager [https://<HCX Manager IP Address>]: “Check for Updates” (HCX <= v4.10)


When a new HCX software update was chosen (HCX Version <= 4.10), the choice would be provided to either “Download” or “Download and Update” the HCX Manager. This was an automated process that would update the HCX Manager and reboot it when completed with the updated version.
With the HCX 4.11 release, the HCX Manager is now deployed in “Local Mode”, disconnected from the [https://connect.hcx.vmware.com] and [https://hybridity-depot.vmware.com] services due to the evolution of the platform. The HCX Manager upgrade images for Azure VMware Solution are now located within the vSAN Datastore of the AVS SDDC, which includes the overall HCX Manager image [VMware-HCX-Connector-4.11.x.y-xxxxxx.ova] as well as the HCX Manager upgrade image [VMware-HCX-Connector-upgrade-bundle-4.11.x.y-xxxxxx-signed.tar.gz].

Once the upgrade bundle is downloaded, go to the Management Interface/Port of the HCX Manager appliance in order to upload the software bundle [https://<HCX Manager IP Address>:9443].
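Before uploading, it is worth verifying the integrity of the downloaded bundle. The minimal sketch below assumes a published SHA-256 value is available for comparison; the filename and checksum shown are placeholders:

    import hashlib

    # Placeholders; use the actual bundle filename and the published checksum.
    BUNDLE = "VMware-HCX-Connector-upgrade-bundle-4.11.x.y-xxxxxx-signed.tar.gz"
    EXPECTED_SHA256 = "<published checksum>"

    h = hashlib.sha256()
    with open(BUNDLE, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            h.update(chunk)

    assert h.hexdigest() == EXPECTED_SHA256, "checksum mismatch - re-download the bundle"
    print("bundle OK:", h.hexdigest())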

When logged into the HCX Manager on Port 9443, choose the “Administration” option on the upper menu and the “Upgrade” option on the left-hand menu in order to locate where to upload the new HCX software bundle:


Choose the upgrade bundle that was previously downloaded from the vSAN Datastore within the Azure VMware Solution SDDC, and it will automatically upload and install the image:





Once the uploaded image is installed, the HCX Manager will reboot as part of the upgrade process, which will require logging back into the HCX appliance as [admin] on Port 9443.

Once logged into the HCX appliance (Port 9443), we can verify that the upgrade was successful via the [Version:] number in the HCX appliance’s [Dashboard] section:

We are also able to verify the success of the upgrade in the actual HCX Manager (versus on Port 9443 of the HCX Appliance) within the [System Updates] section:

HCX “Local Mode” Upgrade Process Demonstration (YouTube)
Further detail on the HCX Manager upgrade process when the version is >= 4.11 with Local Mode is provided within the following recording:
For additional information related to the HCX upgrade process, please refer to the Microsoft documentation regarding Azure VMware Solution:
https://learn.microsoft.com/en-us/azure/azure-vmware/upgrade-hcx-azure-vmware-solutions
Installing and/or Upgrading to HCX 4.11 in “Local Mode” (Broadcom Download)
The HCX Connector (Greenfield deployments) as well as the HCX Connector upgrade bundles (Brownfield deployments) can alternatively be downloaded from the Broadcom Support page.



Installing and/or Upgrading to VCF Operations HCX (HCX 9.0) in “Local Mode” (Broadcom Download) [VCF9]
For Broadcom customers that are adopting VCF9 within their On-Premises Data Centers, the VCF Operations HCX (Manager) can be downloaded from Broadcom and is a software module within the VCF9 bundle.
Note: The VCF Operations HCX image is located by searching for “VMware Cloud Foundation” (versus “VCF”) within the Broadcom Support page.
https://knowledge.broadcom.com/external/article/401497




VCF Operations HCX (HCX 9.0/On-Premises) Deployment [VCF9]
The deployment of the VCF Operations HCX (Manager) follows the same process as in the past: it is deployed as an OVA and may require updates to the appliance on Port 9443 after the deployment, such as mapping the “Roles” to the correct domain [https://<HCX Manager IP Address>:9443].

Broadcom Documentation:

Where can the HCX “Management Pack” for Aria Operations be Downloaded?
https://vcf.broadcom.com/vsc/services/details/aria-operations-management-pack-for-hcx-2?slug=true
Where can the HCX “Content Pack” for VCF Operations be Downloaded [VCF9]?
VCF Operations HCX (HCX 9.0/On-Premises) Licensing [VCF9]
https://blogs.vmware.com/cloud-foundation/2025/06/30/whats-new-in-vcf-operations-hcx-9-0



VCF Operations HCX (HCX 9.0/On-Premises) Interoperability [VCF9]
https://interopmatrix.broadcom.com/Interoperability?col=660,&row=2,

HCX 4.11 Licensing Options (On-Premises) in “Local Mode”
HCX Licensing Key from the Azure Portal for AVS
If the HCX Licensing Key was acquired within the Azure Portal for Azure VMware Solution during an earlier deployment of HCX (e.g., HCX 4.10), the same HCX License Key can be used in HCX 4.11 when in “Local Mode”.


HCX Licensing Key from a VCF 5.2 Deployment for On-Premises
If HCX was deployed within a VCF 5.2 On-Premises environment, VCF 5.2 provides a “VCF Solutions Key” that provides a means for additional platforms such as HCX to detect that the underlying infrastructure has been licensed for VCF. When HCX 4.11 is deployed within a VCF 5.2 environment, the HCX instance will automatically detect the “VCF Solutions Key” and register HCX with the VCF instance, as well as hide or obfuscate the license key within the HCX Appliance [https://<hcx-manager-ip-or-name>:9443]. This would be a situation where the customer is using a “Bring-Your-Own-Subscription” (BYOS) License Key, which is different and independent of the HCX Licensing Key from the Azure Portal.

“HCX over Any” Network Underlay Requirements
HCX migrations have minimum underlay requirements in order for them to be officially supported.

Customers can connect between On-Premises and Native Azure using “Any” networking method as long as these transport types meet the mentioned minimum underlay requirements for HCX. The following are networking examples that customers have deployed for “HCX over Any” migrations to Azure VMware Solution.
“HCX over Any” Network Underlay Transport Examples:
- Express Route (Native Azure Connectivity)
- IPSec Virtual Private Network (IPSec VPN)
- IPSec VPN over Express Route (Customers with Specific Security Policies)
- SD-WAN
- MPLS
Note: When connecting from On-Premises to AVS using IPSec VPN/SD-WAN, there may be a need to change the MTU size from the default [1500] to [1350] within the [HCX Network Profile], since HCX creates its own IPSec VPN for migrations and network extensions, which results in a “Tunnel within a Tunnel” situation.
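To confirm the effective path MTU of the underlay before adjusting the [HCX Network Profile], a simple approach is to probe with Don’t-Fragment pings from a VM on the underlay. The sketch below assumes a Linux host (iputils ping with the -M do and -s flags) and a hypothetical peer address:

    import subprocess

    def ping_df(host, payload):
        # One Don't-Fragment probe: -M do sets DF, -s sets the ICMP payload size,
        # -c 1 sends a single packet, -W 2 waits up to 2 seconds for a reply.
        result = subprocess.run(["ping", "-M", "do", "-c", "1", "-W", "2",
                                 "-s", str(payload), host],
                                stdout=subprocess.DEVNULL,
                                stderr=subprocess.DEVNULL)
        return result.returncode == 0

    def path_mtu(host, lo=1200, hi=1500):
        # Binary search for the largest MTU that passes without fragmentation.
        # ICMP payload = MTU - 20 (IP header) - 8 (ICMP header).
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if ping_df(host, mid - 28):
                lo = mid
            else:
                hi = mid - 1
        return lo

    print(path_mtu("10.30.0.1"))  # hypothetical address across the IPSec VPN/SD-WAN

If the result lands below [1500], set the [HCX Network Profile] MTU at or below the probed value (e.g., [1350]).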

IPSec Encryption Disabling as a Scalability Option
When HCX is deployed within a VCF environment for workload migrations, the traffic in-transit is IPSec encrypted by default in order to protect the data. In addition, HCX Network Extensions between a source and a destination environment are also encrypted. Independent of the platform vendor, IPSec encryption typically provides a throughput of ~1.2 Gbps unless there is a hardware-assisted solution. If there is a requirement for higher throughput, the HCX IPSec encryption can be disabled.
During the deployment of HCX, an HCX Site Pairing is created which associates the source location with the destination location. Once that is completed, a pair of HCX Interconnect (IX) appliances can be deployed to support the migration of VM workload information from the source to the destination environment. In addition, if there is a need to maintain the IP address on the VMs during the migration, HCX Network Extension (NE) appliances can be optionally deployed in order to provide L2 connectivity during this phase.

The state of the IPSec encryption (encrypted/unencrypted) is indicated within the HCX Service Mesh once the (1) Tunnel on the IX and/or NE appliances is up.

If there is a need to increase the capacity of the throughput for either the HCX IX or HCX NE appliances, the IPSec encryption service can be disabled. This requires the following procedure on an existing HCX Service Mesh:
1.) Update the HCX Network Profile to indicate that the network underlay is secure.

2.) Refer to the “Advanced Configuration” settings within the HCX Service Mesh.

3.) Disable the IPSec encryption for the Migration and/or Network Extension services.
Note: This will provide a notice that Application Path Resiliency (APR) is required.
a.) Before disabling the IPSec encryption on the HCX Service Mesh.

b.) After disabling the IPSec encryption on the HCX Service Mesh.

4.) Verify that “Application Path Resiliency” was automatically enabled (just above the same location where the IPSec encryption was disabled) within the “Advanced Configuration” for the HCX Service Mesh.

5.) Prior to completing the updates to the HCX Service Mesh, choose whether to use the “In-Service” or “Standard” mode for the update and redeployment of the HCX Network Extension appliance.
- “In-Service Mode” – Sub-second failover of traffic under ideal conditions; requires additional HCX Network Profile IP addresses for the redeployment; original IP addresses are released when the redeployment process completes.
- “Standard Mode” – Sub-minute failover of traffic under ideal conditions; requires disconnecting the existing HCX appliances and re-establishing the HCX IX and/or NE service tunnels.


HCX Application Path Resiliency (APR) was initially developed to provide additional resiliency for the HCX appliance uplink paths, although it is also leveraged as a means of scaling the migration and network extension traffic when the IPSec encryption service is disabled. When APR is enabled, it will deploy (8) tunnels for the IX and/or NE appliances and distribute the flows using an Equal Cost Multi-Path (ECMP) load-balancing hash algorithm.

When the HCX IX/NE appliances have been redeployed, the state of the IPSec service (unencrypted) and the (8) established tunnels per appliance can be verified within the HCX Service Mesh via the [View Appliances] option.


One area to highlight regarding the redeployment of the HCX IX/NE appliances is the Source Port Numbers for the new HCX Tunnels. In order to differentiate the (8) HCX Tunnels, the Source Ports will be different from the original UDP-4500 Port for both the [Local Port] and the [Remote Port] compared to the original ‘te_0’ HCX Tunnel. If there is a firewall in between that specifies the Source Port, it would need to be updated so that the additional HCX Tunnels come up.

https://ports.broadcom.com/home/VMware-HCX
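As a pre-check of the firewall path before bringing the tunnels up, the sketch below performs simple TCP connect tests against the HCX Manager ports; the address is a hypothetical placeholder. Note that the UDP-4500 tunnel ports cannot be validated with a connect test, so use the Service Mesh appliance diagnostics or a packet capture for those:

    import socket

    HCX_MANAGER = "10.30.0.9"  # hypothetical HCX Manager address

    def tcp_open(host, port, timeout=3.0):
        # A successful TCP connect means the firewall path and listener are up.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for port in (443, 9443):
        state = "open" if tcp_open(HCX_MANAGER, port) else "blocked/closed"
        print(f"{HCX_MANAGER}:{port} -> {state}")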
HCX Operating System Assisted Migration (HCX OSAM)
During CY2018, VMware acquired a company named “CloudVelox” which provided a solution to migrate VMs from non-VCF source environments (e.g., KVM, Hyper-V) to VCF target environments. This solution has since been integrated into the HCX platform and is now termed “HCX Operating System Assisted Migration” or “HCX OSAM”. Below are the supported migration use cases for HCX OSAM.

Both use cases are the same with the exception that [Use Case 1.] has both a Source and Target VCF instance while [Use Case 2.] has a single Target VCF instance. The choice depends on the customer requirements: a customer would deploy [Use Case 1.] if there is a mix of both VCF and non-VCF source environments, and [Use Case 2.] if there are only non-VCF source environments.
Note: Network Extension connectivity for non-VCF environments will depend on whether the connection has been spanned across the “Top-of-Rack” infrastructure to the VMs in the Hyper-V and/or KVM environments, since the HCX Network Extension Appliance (HCX-NE) is only supported for VCF environments.
When HCX OSAM is deployed, it provides an HCX OSAM agent which can be installed in both Windows and Linux environments and which abstracts the VM from the underlying infrastructure dependencies. Below are the supported operating system types.
Note: These operating system types for both Windows and Linux may change based on the HCX software release.
During the deployment of the HCX OSAM service when using [Use Case 2.], an HCX Site Pairing needs to be created where the “Site Type” needs to be “Other”, since the source side is non-VCF (single instance of VCF on the target side), and an arbitrary name can be used for the “Site Name” (e.g., “Site Name” = “Site-Pairing-KVM”).

Once the HCX Site Pairing is created, the HCX Service Mesh can be built which will deploy a single HCX appliance on the target VCF instance for the HCX OSAM migrations.

When the HCX Service Mesh has been deployed with the single HCX OSAM appliance, a new tab is exposed within the HCX Service Mesh where the HCX OSAM Agent software for both Windows and Linux can be downloaded for the VM installation process within the non-VCF source environments (HCX Menu: Interconnect -> Service Mesh -> Sentinel Management). Once the HCX OSAM “Sentinel” Agents have been installed, they will automatically attempt to connect to the HCX OSAM appliance, where the “Connection Status” should show as “CONNECTED”.

After the VMs with the HCX OSAM Agents are connected to the HCX OSAM appliance, they can be migrated with the same operational model as a VCF-to-VCF migration within the HCX Manager user interface, with the exception that the “Remote Site Connection” for the HCX OSAM Service Mesh is indicated with a cloud icon versus the typical VCF/vSphere icon.


The HCX OSAM non-VCF workload migrations are able to leverage the same scheduling functionality as VCF-to-VCF workload migrations within the HCX Manager as well.

Aria (vRealize) Support for AVS
Aria Automation (a.k.a. vRealize Automation/vRA)
Aria Operations (a.k.a. vRealize Operations/vROps)
https://learn.microsoft.com/en-us/azure/azure-vmware/vrealize-operations-for-azure-vmware-solution
Aria Operations for Logs (a.k.a. vRealize Log Insight/vRLI)
Note: The integration of Aria Operations for Logs with AVS does require the customer to deploy Native Azure services, since Microsoft is acting as the “Provider” where the logging information is filtered before it is provided to the “Tenant” (Customer).

Aria Operations for Networks (a.k.a. vRealize Network Insight/vRNI)
AVI Load Balancer Integration [Add-On Feature]
The AVI Load Balancer (AVI LB) provides advanced features such as Global Server Load Balancing (GSLB), application rate limiting, and intelligent Web Application Firewall (WAF) options. The adoption of the AVI LB has been driven by the automation of these features between data centers and has full integration as an “Add-On Feature” with Azure VMware Solution.

Note: When attempting to locate information, there may be a need to search for both “AVI Load Balancer” as well as “NSX Advanced Load Balancer” within the documentation.
Where can the AVI Load Balancer be Downloaded?
Since the AVI Load Balancer is an “Add-On Feature” for AVS, it can be downloaded from the Broadcom Support webpage at the following location:
https://support.broadcom.com/group/ecx/downloads
Within the software download area, choose [VMware] for the Division and [AVI] for the product.

Once the search is completed, the following download options are available.
Note: There is also an [Avi Load Balancer Conversion Tool] for 3rd-Party conversions.

Within this example, we chose AVI Load Balancer version [v30.2.4] to download:

Note: Users are required to read the ‘Terms and Conditions’ at the top and click the agreement before the download can begin.

Where is AVI Load Balancer Documentation for AVS Located?



Where can the AVI Load Balancer “Management Pack” for Aria Operations be Downloaded?
Note: The general location for the Aria Operations Management Pack for the AVI LB is within the VCF Marketplace: https://vcf.broadcom.com. There may be a need to search for “AVI” or “NSX Advanced Load Balancer” in order to locate the correct version.
AVI Load Balancer “Management Pack” for Aria Operations within the VMware Cloud Foundation (VCF) Marketplace



Aria Operations Management Pack for the AVI Load Balancer Installation
Aria Operations Management Pack for the AVI Load Balancer Documentation

What are the “Best Practices” for the AVI Load Balancer Integration?
AVI Load Balancer Integration “Notes from the Field”
1.) The term “Clouds” refers to “Profiles” from a general definition perspective. (i.e. “Select Cloud” = “Select Profile”)

2.) Verify where IP Allocation will be configured for AVI Service Engines (Load Balancers)
- DHCP Server – External from the AVI Controllers (DHCP Server on NSX Segment)
- IP Pool – Configured on the AVI Controllers


3.) “VRF Context” refers to “Routing Configuration” (i.e. Configure Static Routes on AVI)


Note: This example shows how to configure a Default Route from the AVI Controllers to the Default Gateway (NSX T1 Router).
4.) Add a Static Route on the NSX T1 Router back to the Service Engines (Load Balancers) in order to reach the Virtual IPs (VIPs) of the Service Engines.


5.) Create the vCenter Credentials before creating a “Cloud” or “Profile” within AVI, since they are required when defining the details within the AVI “Cloud” profile.

6.) Allocate a Content Library within vCenter before creating a “Cloud” or “Profile” within AVI, since it is required when defining the details within the AVI “Cloud” profile in order to provide a repository for the AVI OVAs.

AVI Load Balancer Terminology “Decoder Ring”
| AVI Load Balancer Terminology | Definition Translation |
| “Clouds” (i.e. “NSX-Cloud”) | “Profiles” (i.e. “NSX-Profile”) |
| “Enable DHCP” = “Yes” | NSX Segment has the DHCP Configuration |
| “Enable DHCP” = “No” | AVI Controller handles the IP allocation (IP Pool) |
| “VRF Context” | “Static Route” / “Dynamic Routing” Setup |
AVI Load Balancer Integration Test and Demonstration
Depending on the test use case, it may be straightforward to install different types of web servers (e.g., Apache, NGINX) in order to validate the AVI Load Balancer functionality across multiple servers and determine the load-balancing characteristics; see the validation sketch after the installation guides below.
Installation Guide for NGINX Web Server (Linux):
https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-open-source
Installation Guide for NGINX Web Server (Windows):
https://nginx.org/en/docs/windows.html
Installation Guide for Apache Web Server (Linux):
https://httpd.apache.org/docs/2.4/install.html
Installation Guide for Apache Web Server (Windows):
https://httpd.apache.org/docs/current/platform/windows.html
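Once the web servers are serving behind a Virtual IP, a simple request loop can confirm that the Service Engines are distributing traffic. This sketch uses the third-party requests package and assumes each backend identifies itself in its index page (e.g., its hostname written into the page body during setup); the VIP address is a placeholder:

    from collections import Counter
    import requests  # third-party package: pip install requests

    VIP = "http://10.30.4.50/"  # hypothetical Virtual IP on the AVI Service Engines
    hits = Counter()

    for _ in range(100):
        response = requests.get(VIP, timeout=5)
        # Tally by the first line of the body, assuming each backend's index
        # page identifies the server that produced it.
        body = response.text.strip().splitlines()
        hits[body[0][:40] if body else "<empty>"] += 1

    for backend, count in hits.most_common():
        print(f"{count:>4}  {backend}")

A roughly even hit count per backend indicates the configured load-balancing algorithm is distributing requests as expected.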
AVI Load Balancer “Hands On Lab” (HOL)
For more of a “Deep Dive” on the AVI Load Balancer and to use a live environment for testing, please refer to the “Hands On Lab” (HOL) environment for further details.
AVI Load Balancer – Lab Modules:
Module 1 – Introduction to Avi architecture (30 minutes) Basic
Module 2 – Introduction to Applications (Virtual Services and Related Components) (60 minutes) Basic
Module 3 – Introduction to Service Engine Groups (30 minutes) Basic
Module 4 – Introduction to Application Scaling (30 minutes) Basic
Module 5 – Introduction to Application Services and Security (30 minutes) Intermediate
Module 6 – Introduction to Application Troubleshooting (30 minutes) Intermediate
https://labs.hol.vmware.com/HOL/catalog/lab/14018
AVI Load Balancer Authors:
vDefend Firewall / Distributed Firewall [Add-On Feature]
VMware Site Recovery (VSR) / Site Recovery Manager (SRM) [Add-On Feature]
vCloud Director on Azure VMware Solution (vCD on AVS)
Documentation References
Microsoft Documentation:
https://learn.microsoft.com/en-us/azure/azure-vmware/enable-vmware-vcd-with-azure
Broadcom Documentation:

vCloud Director Availability on Azure VMware Solution (vCDA on AVS)
vCloud Director Availability is a complementary service for vCloud Director that provides the following options:
- Migrations to/from Tenants in vCloud Director
- Disaster Recovery to/from Tenants in vCloud Director
The Disaster Recovery option for vCDA with AVS requires elevated privileges within AVS. To address this requirement, a number of “Run Commands” were developed to provide a “Self-Service” approach for customers to consume vCDA with AVS for Disaster Recovery use cases.

Documentation References
Microsoft Documentation:
Broadcom Documentation:
Nested vCenter within AVS (Unsupported/GitHub Repository)
GitHub Repository for the “zPod Factory”
https://zpodfactory.github.io/guide/admin

AVS Host Quota Request Process
https://learn.microsoft.com/en-us/azure/azure-vmware/vmware-cloud-foundations-license-portability
Service Level Agreement (SLA)
The official Service Level Agreement (SLA) information for Azure VMware Solution (AVS) is provided on Microsoft’s public website and is based on a Single AZ / Unstretched SDDC Cluster deployment.
Service Level Agreement (SLA) for Azure VMware Solution:
https://azure.microsoft.com/en-us/support/legal/sla/azure-vmware/v1_1/
If there is a need to enhance the mentioned SLA percentages, one option would be to enable Disaster Recovery to an On-Premises location or to another SDDC location.
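As a back-of-the-envelope illustration of that option, assuming the two locations fail independently and either can serve the workload, the combined availability is 1 - (1 - A_primary) x (1 - A_dr):

    def combined_availability(a_primary, a_dr):
        # Probability that at least one of two independent sites is available.
        return 1 - (1 - a_primary) * (1 - a_dr)

    # Illustrative values only; use the actual SLA figures from the link above.
    print(f"{combined_availability(0.999, 0.999):.6f}")  # -> 0.999999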
Azure VMware Solution – Regional Availability
During Customer Workshops related to Azure VMware Solution, a common question is the current status of Regional Availability for Azure VMware Solution. The current and future status of Azure VMware Solution Regional Availability can be found on the following public website from Microsoft:
https://azure.microsoft.com/en-us/explore/global-infrastructure/products-by-region/table

For additional information, the best approach would be to contact your regional Microsoft sales account team.
Additional References and Blogs
Microsoft Community Hub – 3rd-Party NVA Integration
VMware by Broadcom Product Interoperability Matrix
https://interopmatrix.broadcom.com/Interoperability

VMware by Broadcom Ports and Protocols on a Per-Platform Basis

Support of [ip directed-broadcast] Requirements
There is a caveat for Cisco Virtual Routers to support [ip directed-broadcast]: it requires the additional [ip network-broadcast] command when using IOS-XE releases 17.3.1 and later.
https://quickview.cloudapps.cisco.com/quickview/bug/CSCvy85946
