Cross-Cloud Networking

Cross-Cloud Deployment Models

Architecture deployment models are generally a balance between meeting the design requirements and the cost of implementation. The following sections review the areas to consider when deploying cross-cloud architectures and the cost implications of each.

Deployment Model 1 = “Local” On-Premises (Centralized)

As customers migrate to the cloud, a primary consideration from a “Day 1” perspective is whether they will leverage their existing centralized On-Premises environment for their design (e.g. Firewalling/DMZ) or distribute those services towards the cloud instances. The following deployment model depicts the situation where the On-Premises environment is directly connected to each of the VMware Cloud offerings through the respective provider’s dedicated connectivity option (Azure ExpressRoute, AWS Direct Connect, GCP Cloud Interconnect):

Figure 1. – “Local” On-Premises with Individual Cloud Connections (Centralized)
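
For quick reference, the per-provider circuit types shown in the figure can be captured in a small lookup. This is a minimal illustrative sketch; the helper function name is an assumption, not a VMware or provider construct:

```python
# Mapping of each cloud to its dedicated connectivity product
# (product names per the providers; the helper is illustrative only).
DEDICATED_CONNECTIONS = {
    "Azure": "ExpressRoute",
    "AWS": "Direct Connect",
    "GCP": "Cloud Interconnect",
}

def circuit_for(cloud: str) -> str:
    """Return the dedicated circuit type used to reach the given cloud."""
    return DEDICATED_CONNECTIONS[cloud]

for cloud in DEDICATED_CONNECTIONS:
    print(f"[On-Premises] <-> [{cloud}] via {circuit_for(cloud)}")
```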

With this deployment model, the current services can be kept centralized in the On-Premises instance in order to minimize changes to the end-to-end design as customers migrate to the cloud. As with any design, there are advantages and disadvantages based on the “Day 1” and “Day 2” design requirements. For example, from a “Day 1” requirement perspective, the On-Premises-to-Cloud network traffic may be the primary focal area since VMs are initially being migrated to the cloud, and the connections to the cloud instance have to be large enough to handle the VM-to-VM traffic between On-Premises and the Cloud.

Traffic Flow for “Day 1”: [On-Premises] <-> [Cloud 1]

From a “Day 2” perspective, there may be a need to send traffic between the cloud instances (VM-to-VM communication), which means traffic would need to flow from the first cloud instance to the On-Premises environment and then to the second cloud instance:

Traffic Flow for “Day 2”: [Cloud 1] <-> [On-Premises] <-> [Cloud 2]

Both “Day 1” and “Day 2” traffic flows are feasible, although careful consideration is needed as to where the VMs are placed in the cloud in order to avoid situations where the delay/latency between the VM instances becomes too large. If [VM1] is migrated to [Cloud 1] and [VM2] is migrated to [Cloud 2], a traditional 3-Tier application could run into a situation where the “App-Tier VM” in [Cloud 1] has challenges writing to a “DB-Tier VM” residing in [Cloud 2] due to latency.
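
To make the latency concern concrete, the sketch below estimates path latency for the hair-pinned “Day 2” flow. All link latencies and the round-trip count are hypothetical assumptions for illustration, not measured values for any provider or region:

```python
# Hypothetical one-way link latencies in milliseconds (assumed figures).
LINK_LATENCY_MS = {
    ("on_prem", "cloud1"): 8.0,
    ("on_prem", "cloud2"): 12.0,
}

def path_latency_ms(path):
    """Sum one-way latency across each hop of a path, in milliseconds."""
    total = 0.0
    for a, b in zip(path, path[1:]):
        # Links are bidirectional, so look the hop up in either direction.
        total += LINK_LATENCY_MS.get((a, b)) or LINK_LATENCY_MS[(b, a)]
    return total

# "Day 1" flow: On-Premises <-> Cloud 1
day1 = path_latency_ms(["on_prem", "cloud1"])
# "Day 2" flow hair-pinned through On-Premises: Cloud 1 <-> On-Prem <-> Cloud 2
day2 = path_latency_ms(["cloud1", "on_prem", "cloud2"])

print(f"Day 1 one-way latency: {day1:.1f} ms")           # 8.0 ms
print(f"Day 2 hair-pin one-way latency: {day2:.1f} ms")  # 20.0 ms

# A chatty App-to-DB exchange of N sequential round trips pays 2 * N * day2.
n_round_trips = 50  # assumed workload behavior
print(f"50 sequential App->DB round trips: {2 * n_round_trips * day2:.0f} ms")
```

Even with these modest assumed link latencies, a write-heavy App-Tier-to-DB-Tier exchange accumulates seconds of delay once the path hair-pins through On-Premises, which is why VM placement matters in this model.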

Deployment Model 2 = “Hosted” On-Premises (Centralized)

There may be customers who have already migrated their On-Premises environment to a “Private Cloud” or “Hosted Cloud” operated by a Partner within the VMware Cloud Provider Program (VCPP).

Figure 2. – “Hosted” On-Premises with Individual Cloud Connections (Centralized)

The “Day 1” and “Day 2” design considerations would be the same as in the previous section, although one should also consider whether there is an additional cost associated with a local extension of the mentioned cloud connections from the “Hosted” vSphere environment (e.g. an MPLS Ethernet Pseudowire/L2VPN). Depending on the number of cloud connections, this could add significant cost to the overall cross-cloud design (e.g. 1 x Azure ExpressRoute local extension, 1 x AWS Direct Connect extension, and 1 x GCP Cloud Interconnect extension = 3 MPLS L2VPNs). The “Day 1” and “Day 2” traffic flow considerations mentioned earlier apply here as well.
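
As a rough cost model, the arithmetic reduces to a per-extension multiplication. The monthly rate below is a hypothetical placeholder, since actual MPLS L2VPN pricing varies by partner, bandwidth, and region:

```python
# Hypothetical monthly cost per MPLS L2VPN extension (assumed placeholder;
# actual pricing varies by partner, bandwidth, and region).
L2VPN_MONTHLY_COST = 1500  # USD, assumed

extensions = [
    "Azure ExpressRoute local extension",
    "AWS Direct Connect extension",
    "GCP Cloud Interconnect extension",
]

total = len(extensions) * L2VPN_MONTHLY_COST
print(f"{len(extensions)} extensions x ${L2VPN_MONTHLY_COST}/mo = ${total}/mo")
# -> 3 extensions x $1500/mo = $4500/mo
```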

Deployment Model 3 = “Local” On-Premises / Service Provider “Cloud Router” (Distributed)

Depending on the “Day 1” and “Day 2” design options, cost requirements, and the associated traffic patterns mentioned above, there may be a desire to move the routing function closer to the cloud instances. Below is a cross-cloud deployment model where a “Cloud Router” (e.g. Megaport or Equinix) is leveraged.

Figure 3. – “Local” On-Premises with “Cloud Router” Connectivity (Distributed)

Based on the diagram, the On-Premises instance has a single connection to the Service Provider (Megaport) and has been connected to a “Cloud Router”. From the “Cloud Router”, additional connections have been deployed to each of the individual cloud instances. In this architecture, the “Cloud-to-Cloud” (VM-to-VM) traffic can be handled in a more scalable manner since it can go directly between the cloud environments through the “Cloud Router”, avoiding hair-pinning traffic back to On-Premises. Moving this routing function to the Service Provider level provides a more distributed means of handling network traffic. With this type of design, consideration must be given to what traffic can be sent to/from the Service Provider “Cloud Router” level as well as the Cloud SDDC level (e.g. BGP filters, BGP route summarization, firewall rules), as sketched below.
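
To illustrate the kind of route summarization and prefix-filtering decisions made at the “Cloud Router” level, here is a minimal sketch using Python’s ipaddress module. The prefixes and the permitted-range policy are made-up examples, not a vendor configuration:

```python
import ipaddress

# Made-up example prefixes advertised by one Cloud SDDC (illustrative only).
sddc_prefixes = [
    ipaddress.ip_network("10.10.0.0/24"),
    ipaddress.ip_network("10.10.1.0/24"),
    ipaddress.ip_network("10.10.2.0/24"),
    ipaddress.ip_network("10.10.3.0/24"),
]

# Route summarization: collapse contiguous prefixes into one aggregate
# before advertising them from the "Cloud Router" toward the other clouds.
summarized = list(ipaddress.collapse_addresses(sddc_prefixes))
print("Advertise:", summarized)  # [IPv4Network('10.10.0.0/22')]

# BGP-style prefix filter: only permit routes covered by the ranges this
# site is allowed to announce (an assumed policy, for illustration).
PERMITTED = [ipaddress.ip_network("10.10.0.0/16")]

def permitted(route):
    """Return True if the route falls inside an allowed aggregate."""
    return any(route.subnet_of(allowed) for allowed in PERMITTED)

print(permitted(ipaddress.ip_network("10.10.0.0/22")))    # True
print(permitted(ipaddress.ip_network("192.168.0.0/24")))  # False
```

The same logic applies whether the filtering is enforced at the Service Provider “Cloud Router” or at the Cloud SDDC edge; the design decision is where in the path each policy lives.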

Deployment Model 4 = “Hosted” On-Premises / Service Provider “Cloud Router” (Distributed)

If the On-Premises environment is hosted by a Partner, there may be an additional need to extend connectivity from the vSphere instance through the Partner’s backbone (e.g. an MPLS Pseudowire/L2VPN) to provide the cross-connect to the Service Provider’s “Cloud Router”.

Figure 4. – “Hosted” On-Premises with “Cloud Router” Connectivity (Distributed)

Within this deployment model, the characteristics and considerations are similar to those of the previous model, with the exception that there may be an additional cost associated with the network extension. Compared to [Deployment Model 2], that additional cost should be lower since the design requires a single extension to the Service Provider rather than three individual connections.
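
Using the same hypothetical per-extension rate as in the earlier sketch, the comparison reduces to simple multiplication. Note that this only models the Partner-backbone extensions; the “Cloud Router” service’s own per-connection charges are not included:

```python
# Same hypothetical monthly L2VPN rate as in the earlier sketch (assumed).
L2VPN_MONTHLY_COST = 1500  # USD, assumed

model_2_extensions = 3  # one extension per individual cloud connection
model_4_extensions = 1  # single extension to the Service Provider "Cloud Router"

savings = (model_2_extensions - model_4_extensions) * L2VPN_MONTHLY_COST
print(f"Model 2: ${model_2_extensions * L2VPN_MONTHLY_COST}/mo")  # $4500/mo
print(f"Model 4: ${model_4_extensions * L2VPN_MONTHLY_COST}/mo")  # $1500/mo
print(f"Estimated monthly savings: ${savings}")                   # $3000
```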