Digital Realty Trust, Inc. (DLR)

Status Update

Sep 14, 2021

Speaker 1

Awesome. Well, I'm going to go ahead and kick it off. Hello, everyone, and welcome to today's virtual event, Modernized Business Operations for Regulated Environments. I'm Laura Dunn with Megaport Marketing. For the next hour, we'll have a presentation followed by a Q&A discussing how to develop a practical and effective hybrid cloud and/or multi-cloud strategy for businesses in highly regulated industries.

So pull up a chair and listen to this engaging discussion featuring fantastic speakers from Digital Realty, Google and Megaport. Before getting started, we have a few housekeeping notes to share. If you have any questions throughout the discussion, feel free to type them into the Q and A section, and we'll do our best to get to them by the end of the hour. Feel free to add any comments in the comment box on your screen as well. All of our panelists would be happy to continue the conversation with you after the event.

So please feel free to reach out if you would like to connect with them directly. Okay, so let's get it started. It is my pleasure to introduce the first speaker for today's virtual event, Don Atwood, Senior Solution Architect for Digital Realty. The virtual stage is now yours.

Speaker 2

Thank you, Laura. And hopefully, we have no technology problems this morning. Good morning, good afternoon to everybody. Glad to be here to talk about this important topic. And we have some fantastic speakers and content for everybody today.

So there we go. What I'm gonna do is set the stage here and tee things up for the brilliant gentlemen coming after me, who will go in-depth on the architecture conversations: how these worlds connect, the platform that the three organizations and companies have together as a holistic solution, how effective that is, and what it can mean for you. So I want to set the stage, since everybody here might not be familiar with who Digital Realty is. We are the largest colo provider of data centers worldwide. We have an expansive footprint, shown on the map I'm sharing, which shows most of the locations, although there's a lot of growth happening.

I think we're over 300 locations at this point. Also on the map, you'll see the picture of the subsea cables. And I want to highlight that we have strategically purchased and built assets geographically where these subsea cables connect. So we have an internal philosophy, in addition to having great data centers that are up all the time, to have kind of a unique connection philosophy. We know the world is a hybrid world.

We know that there's really good reasons to be in cloud, and there's fantastic reasons to be on prem. Most of our customers, and we have them in the thousands, have a version of hybrid or multi-cloud, but all over a baseline back end within our facilities. So being strategically placed where the bandwidth flows is important to us, but then also partnering with other great companies like Megaport and Google and others is also very important. So this is kind of just the view of our data center footprints around the world. And later, you'll see where Google sits, and you'll see a lot of correlation and linkage between those, which is important to having success.

So I mentioned Google. Google is one of the many cloud providers we have on-ramps and access to. I would say Google is one of our fastest-growing strategic partners. I think last year they were the fastest-growing infrastructure cloud provider on the planet. So super aggressive, unbelievable company, which we'll talk through today, and we'll actually talk through multiple use cases of customers that sit on prem with us and also burst out to the cloud for various reasons.

And there's a lot of scenarios and a lot of different reasons why people do that. So we see a bright, bright future growing with Google together. And then, of course, all others are available as well, all the ones you'd expect. That's our "any" philosophy: we want to be able to connect you, whether it's to all of us or not, to all your partners, customers, providers, CSPs.

And so we have that ecosystem, and that's an important component of our growth. So just a little housekeeping before we jump into it. We are going to be referring to Service Exchange a lot today, which can be used interchangeably with the name Megaport. So Megaport is the back end of Service Exchange. Service Exchange is Digital Realty's white-label, enhanced version of Megaport.

So the backbone is still Megaport. And we, through our partnership, have taken that already fantastic platform, with our "any" philosophy, and enhanced it within our facilities. So if you have Service Exchange powered by Megaport in your solution, you'll know that in our facilities, everything is fully redundant across diverse paths. We know that uptime is of the utmost importance. And so when it's built within our facilities, it's fully redundant in every way, and you're going to see some fantastic architecture drawings here in a bit.

So whether you have data or network or control hubs or whatever you're running on the back end within our facility, as you burst to the different providers and customers, you'll know that you're fully redundant over the Megaport backbone. So let's jump into regulatory and talk a little bit about it, and we'll kind of kick it off. And I'll start with the use case right after this before I hand it off to Google. So obviously, the regulatory world compliance is a huge deal. We have this conversation very commonly with our customers.

A lot of needs for FedRAMP, health care, government, financial services. There's all kinds of logos we can put to the right of compliance requirements that you probably have. Often, we hear a lot about SOC 2 and HIPAA and FISMA; those sorts of things are the most common. But what I wanna highlight here is that the compliance that's required, depending on what industry you're in, is not just for the cloud or just for the data center side. It is an ecosystem compliance.

Right? And so when you go in and have a FISMA or a HIPAA review and there's audits, you know, they look at it end to end. It's an ecosystem. So we have a place, Google has a place, Megaport has a place, and it's that ecosystem that has to be compliant. And so we take it super seriously on our side.

We have many, many customers that have achieved these certifications, wherever they happen to be within our facilities worldwide. And we are a large chunk of that component. Depending on what country you're in and what regulatory requirements there are, you know, there's a thousand different scenarios, a big equation, if you will. You know, some things are better suited for the cloud. Some things are better suited for only on prem, or somewhere in between.

And so we can't go through every scenario, although you're gonna see some examples in architecture later. But a lot of the conversation that comes up with us and our customers on a daily basis is about data loss prevention and compliance reporting, right? A lot of times you have to have things centralized; even if a customer has an edge deployment in many locations, sometimes they have to have the compliance centralized in a certain country based on what they're working on. So there's a lot of deviations to that. Data sovereignty, data replication needs.

Encryption is a big deal, right, both in transit and at rest. Not only encryption, but where do you store the keys, right? If you have data in, for example, a controlled country, they might not let you store a key in another country. And so understanding and managing all that is a big piece, and why we spend a lot of time helping customers achieve the certifications, which is pretty routine for us and certainly a requirement under that regulatory umbrella. So we see a lot of controlled-country requirements, especially when you talk about the EU as an example.

Germany, for example, is one where the goalposts are moving constantly, and it's our job to keep up with that. So we work with our customers quite often, making sure that our side is in compliance. We know our partners are as well, and so that's a piece of the puzzle. But as those goalposts move around and regulations change, it's our responsibility to make sure that you are compliant and can meet all of those requirements, which we routinely do. Controlled country, by the way, is a big one.

Obviously, places like Russia or Mainland China, there are some challenges there. And so we work with customers throughout their journey to accomplish that, when required. So, let's jump into a use case: biotech. This is kind of on the medical side. We had to strip the customer name because we honor privacy.

But we have a joint customer between Digital Realty, Megaport, and Google. This is kind of a relevant conversation today with all the COVID stuff happening. We're all probably sitting in our living rooms. So Google is running a massive parallel computing cluster for this biotech customer that's doing antiviral research and drilling into billions of molecules each week to figure out what are the best antivirals available. This customer has a core base within our data center.

They are a Megaport customer, and they burst to the cloud for a lot of the analytics, using its massive compute resources. So they have to be compliant across all three to maintain their certifications and requirements. But this is just a great example of how those worlds connect together and how the ecosystem with compliance has to work. So, a fantastic partnership example. At Digital, we have thousands of customers that we have to meet these standards for every day in many different countries.

And so this is a super important piece to us and something that we can help you with as that topic comes up. So we'd be happy to share that. So with that, we have a great panel coming up. I'm going to hand the ball off to John. And thanks.

And by the way, at the end, I just want to mention, there'll be an offer to have some free connections to Google and try out this ecosystem for about three months. So stay tuned for that, but I'll hand it off to John. Thanks.

Speaker 3

Great. Thanks, Don. Good morning, everybody, or good afternoon, depending on where in the world you're calling in from. My name is John. I'm the AppMod lead for Google Cloud's partner engineering team.

So what that means is I work with our services partners as well as our customers, helping them with anything that falls within our AppMod portfolios: things like containers, CI/CD, and hybrid and multi cloud, which we'll talk about today. So today, I'm gonna give a brief overview of Google Cloud's solution for hybrid and multi cloud, which is called Anthos. And I think I just skipped ahead a slide. Okay. And before I get started, there's a couple things to note.

First, Anthos isn't a single product, but it's actually a suite of products. I'm not gonna go into detail of every component, but I'm gonna briefly highlight some of the core components. And then second, I'm gonna try to keep this high level, but I might dive deep every once in a while. However, the key takeaways should be the business value that you can get from Anthos. Right?

If anyone at your company wants a deeper understanding of the product, I'm happy to have a follow-up conversation or connect you with your Google Cloud team. So most organizations have either taken the leap to public cloud or at least they have a cloud strategy that they're exploring. But we understand a lot of workloads are still gonna remain on prem, and this is gonna continue for a while. So there's various reasons for this. It could be proximity to your end users, compliance or data locality rules, a lot of things that Don just mentioned.

So while organizations are building out a cloud strategy, a big component of that is to understand how do you handle hybrid cloud or multi cloud. So when you start to adopt a strategy, you are faced with some challenges. Security versus agility, this is a big one. Right? Developers wanna push code to production very quickly no matter where that infrastructure is.

But security teams wanna ensure the code is safe and the tools used by the developers are verified and trusted. And this sometimes slows down the development process. Reliability versus cost. When people think of reliability, they think of adding redundant machines, data protection tools, and other services that increase your costs. And lastly is portability versus consistency.

When I start to run a modern application across different environments, like an on prem data center or multiple different clouds, I want my application to be portable, but I also want a consistent experience. My infrastructure teams wanna deploy, let's say, on prem, in Google Cloud, or even another cloud without having to make any significant changes or adopt a lot of new tooling. So how does Anthos help with all this? First, I'm going to reiterate: Anthos is not a single product, but a suite of products that, when used together, help solve the challenges I just mentioned. I'm gonna briefly discuss the components over the next few slides.

But before that, here's a list of some of the benefits that Anthos provides. Write once, deploy anywhere. What this means is a developer doesn't have to build their application differently depending on the environment it's being deployed into. Consistency across environments. So this is for a security professional or an infrastructure admin or a developer.

It doesn't matter what role you're in. Anthos is gonna provide a consistent set of tooling that you can leverage no matter what environment you're in, on prem or in a cloud. And as I speak of some of these components, you'll see how the rest of the benefits are achieved. Now, before I jump into the components, I've mentioned a few roles already, like security engineers, infrastructure admins, and developers. I think it's important to note that the different components of Anthos provide various benefits depending on which one of these roles you're in.

So as I talk about components, I might mention that this is why it's valuable to a security engineer, or this is the value this component would have for the infrastructure team or maybe an app owner. It's also important to note that a lot of Anthos components are built on top of open source products. This helps avoid any sort of vendor lock-in, and it allows our apps to be portable. So this goes to that portability versus consistency challenge that I mentioned before. Okay.

So let's actually talk about what is Anthos and where can I deploy it? So like I mentioned before, it's a suite of components and it helps with things that you see on this slide. Things like policy management, cluster management, and so on. And it can be deployed on prem, at the edge, or across multiple different clouds. Now, when I drill down, you see some of these components, which I'm gonna discuss, things like Anthos GKE, Anthos Service Mesh, and a few more that you can see here.

Also note at the bottom, we have the deployment options. For on prem, that could be VMware or bare metal. We also support running in AWS or Azure. And then on the right, you see something called attached clusters. What this means is you can actually connect a non Google Kubernetes cluster, and I'll talk about Google Kubernetes engine on the next slide.

But we can connect a non GKE cluster into Anthos. And that helps provide that single pane of glass visibility to all my clusters as well as some additional features. Now this is important because let's say you're already running a large OpenShift or Rancher environment on prem or maybe you're using EKS or AKS in Amazon or Azure, and you start to leverage Anthos, but maybe you're not ready to migrate those clusters to GKE just yet, you can still connect them back to the Anthos hub, you can see them in the control panel, and you do have some additional management capabilities that you can leverage from Anthos. Okay. So one of the core components is Anthos GKE.

Now it's really important to understand what is GKE and a little of the story behind it. So admittedly, you know, Google was late to the cloud game. But when it comes to Kubernetes, we have the most experience and the most mature offering out there, and that's really important. Right? Because it's not a secret that Kubernetes is the de facto standard for container management.

It's also no secret that setting up Kubernetes isn't easy. It's really hard and cumbersome. And not only that, but it becomes more confusing when you start to look at things like day two operations. Right? Upgrades, monitoring, logging, security operations, and so on.

In fact, a lot of customers that try to deploy Kubernetes on their own oftentimes fail because they don't have a good plan for how to handle those day two operations. And so that's why having a managed version like GKE is really important. GKE makes cluster creation easy for our customers. It provides a lot of advanced cluster management features: things like load balancing, auto scaling, auto upgrades and repairs, as well as logging and monitoring. And all of this with very little effort from developers or infrastructure teams.

Once your code is in a container, you can create a cluster using the console, the command line interface, or the API. It's really easy to do and it doesn't require you to have an intimate knowledge of Kubernetes. So as I mentioned, Anthos is a lot of different products bundled together. But at the core, we have Anthos GKE. And what that really is is taking our managed version of Kubernetes, which is mature and enterprise ready that we already provide to our GCP customers, and we're bringing it into your own private data center or even another cloud.
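To make that concrete, here's a minimal sketch of what cluster creation looks like with the gcloud CLI; the cluster name, zone, node counts, and sample image are illustrative assumptions, not from the talk.

```shell
# Create a GKE cluster with the managed features mentioned above:
# autoscaling, auto-upgrade, and auto-repair.
gcloud container clusters create demo-cluster \
  --zone us-east4-a \
  --num-nodes 3 \
  --enable-autoscaling --min-nodes 1 --max-nodes 5 \
  --enable-autoupgrade --enable-autorepair

# Point kubectl at the new cluster and deploy a containerized sample app.
gcloud container clusters get-credentials demo-cluster --zone us-east4-a
kubectl create deployment hello --image=gcr.io/google-samples/hello-app:1.0
kubectl expose deployment hello --type=LoadBalancer --port 80 --target-port 8080
```

Running this requires an authenticated gcloud project, so treat it as a provisioning sketch rather than a copy-paste script; the same cluster can also be created from the console or the API, as John notes.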

So here's what you see in the console. It's kind of hard to see, I know the screen there is a little small, but it shows all of the clusters you have deployed. There's some that say GKE. So those are Google Kubernetes Engine clusters running in your Google Cloud environment.

You have a couple clusters that say on prem, so those could be running in your data center. And then some that just say GCP. Those are actually just open source Kubernetes. We threw them on some VMs just to show you that we can connect them back to the hub. So if you had EKS clusters in Amazon, you could actually see those here as well.

So another component is Anthos Config Management. Now imagine that you have one team deploy a Kubernetes cluster. Right? That team has to worry about enforcing policies for that cluster and putting security guardrails in place, which might not be that hard for one single cluster.

But now imagine, you know, that team starts using Kubernetes a lot. They go around, they tell all the other teams within your company that Kubernetes is amazing, and everybody wants to start using it. Right? So then another team adopts it, and then another team. Now you have clusters running across your entire company.

Some on prem, some in Google Cloud, maybe some in another cloud. Or even if they're in the same type of environment: imagine that I'm a restaurant or a retail chain or a bank with a lot of branches, and I run Kubernetes in each store or branch to handle various things, right? Inventory, point of sale, a lot of other operations that would happen in store. This is actually a common trend right now; a lot of stores are doing this.

And so in these scenarios, how do I ensure that the policies that I set are actually enforced? How do I ensure that an IT admin at a single store or bank branch or whatever it may be doesn't make a change to a cluster that's running there? The answer is Config Management. Config Management allows you to define and enforce policies across all of your Kubernetes deployments. You basically take a central git repository that manages things like access control policies.

Right? RBAC, resource quotas, namespaces, whether it's on prem or in the cloud. Config management's also declarative, so it's continually checking the cluster state and it's applying that desired state to enforce your policies. So what it's doing is it's gonna put security guardrails in place. So as an administrator, you need to create a consistent environment that's gonna offer security by default for your developers.
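The central-repo model described here can be sketched like this; the repo layout, namespace, and quota values are hypothetical, not from the talk.

```shell
# Hypothetical layout of the central Config Management git repo:
#
#   namespaces/
#     prod/
#       namespace.yaml   # the Namespace itself
#       rbac.yaml        # RoleBindings (access control)
#       quota.yaml       # ResourceQuota below
#
mkdir -p namespaces/prod
cat <<'EOF' > namespaces/prod/quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: prod-quota
  namespace: prod
spec:
  hard:
    pods: "50"
    requests.cpu: "20"
EOF
# Committing this file is all it takes: the agent on every enrolled cluster
# pulls the repo and continually reconciles, so a hand-edit on one cluster
# gets reverted back to this declared state.
```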

So you can deploy new environments very quickly, and you're gonna have that confidence that the desired cluster configuration that you've set is going to be applied. Config Management also helps contain things like cluster sprawl, which I mentioned earlier, right, if I'm starting to grow clusters in all these branches or whatever it may be. So as more and more teams start to leverage Kubernetes, or grow their environments for redundancy or to expand to new geos or for whatever reason, you start to increase the overhead in managing these separate configurations.

And that's the point of config management. It's gonna solve that problem by delivering a single centralized place for multi cluster management. Alright. So the last component I'm gonna mention is Anthos Service Mesh. If you're not familiar with the Service Mesh, don't worry.

All you really need to know is that a service mesh automates a lot of functionality into your network. So a lot of the benefits you get are things that a developer might otherwise have to code into their application. But with ASM, developers don't have to worry about any of that. It's automatically handled. So the three main things that I like to highlight when I'm talking about a service mesh are observability, operational agility, and policy-driven security.

So with observability, Anthos Service Mesh, or ASM, monitors things like error rates, latency, saturation, and traffic out of the box, which allows you to create an SLO based on those metrics. It also builds topology graphs for you in the console that show which services are communicating with which and which services are not communicating with each other. So it's a really good tool just to kind of see how your applications lay out. The second thing is operational agility. So what that means is, when I'm deploying an application, as a developer I have to account for things like circuit breaking: what happens when one service fails?

Right? Is that failure gonna cascade down and impact my other services? So we can do circuit breaking with ASM, which would essentially say if that service fails, we can cut it off from the rest of the application. Your application still runs, but that one service might need to be repaired. Or how do I handle routing traffic between different applications?

What if I have an application that's running in my on prem data center and I wanna start sending some traffic to that application in the cloud? Right? Maybe it's a canary rollout where I'm testing some stuff. Anthos Service Mesh can do all of that for you without developers really having to modify their code or make any changes. And lastly is policy-driven security.

Anthos Service Mesh handles certificate management as well as authorization and authentication between my services. It also adds mTLS to encrypt traffic. So we could talk about each one of these components, you know, service mesh, Config Management, or GKE, probably for hours each. And that's not even everything that's included. There's other components like Binary Authorization that helps you build a secure software supply chain, which I know is very important with a lot of the recent news about security breaches lately.
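To picture the canary routing and mTLS pieces, here's a sketch using the open source Istio APIs that ASM builds on; the service name, subsets, weights, and namespace are all hypothetical.

```shell
# Write an Istio-style policy file: split traffic 90/10 between an existing
# deployment and a cloud canary, and require mTLS between services in "prod".
cat <<'EOF' > mesh-policies.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: checkout
spec:
  hosts:
  - checkout
  http:
  - route:
    - destination:
        host: checkout
        subset: on-prem        # existing deployment
      weight: 90
    - destination:
        host: checkout
        subset: cloud-canary   # new deployment being canary-tested
      weight: 10
---
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: prod
spec:
  mtls:
    mode: STRICT               # reject non-mTLS traffic between services
EOF
# kubectl apply -f mesh-policies.yaml   # on a mesh-enabled cluster
```

Shifting the `weight` values is all it takes to move more traffic to the cloud copy, with no application code changes, which is the point John is making.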

There's a lot of other components out there that we could talk about. But I think the key takeaway is: when you start to build out a hybrid cloud or multi cloud environment, you're often faced with multiple different software licenses, an inconsistent experience, and added work for your infrastructure teams and application owners. Right? They have to make adjustments for each new environment. And with Anthos, all of this is handled for you.

So, you know, you might be thinking, what about legacy applications? Right? Maybe I'm not containerized, or only a portion of my workloads are containerized. So can I manage virtual machines or Windows workloads? Well, Anthos has support for virtual machines coming very soon.

It should be before the end of the year. We can already run Windows containers in GKE or GCP, and that support is coming to other environments to follow. So happy to chat about those if you have any questions. Definitely feel free to reach out. But for now, I'm gonna hand it off to Nick, who's my colleague from Google, who can talk a little bit more about the networking piece.

Speaker 5

Thanks, John. So good morning or good afternoon, everyone, depending on where you're calling in from. So as John mentioned, my name is Nick. I am a network specialist customer engineer with Google Cloud. And really what that means is I spend most of my time talking to customers about our network products and services within GCP and helping them build and architect network solutions within GCP.

I specifically focus on the hybrid cloud portion of it. So a lot of my time is spent with hybrid cloud connectivity concepts, such as the one we're gonna talk about here with Partner Interconnect. And today, I'd like to talk to you about how you can use Partner Interconnect, how the product works, and really how it can help you extend your connectivity from an on prem network and on prem workload through a highly available, low latency connection. Alright. So let's talk about the architecture.

I'm going to go a little bit more into a deep dive of how the architecture works, specifically with Google. And this is really the Partner Interconnect product. So I'm gonna start on the right hand side of the slide and then go to the left in terms of the components. So on the right, you can see you have an architecture within Digital Realty. So this would be where you would have your Digital Realty connection.

So your on prem physical equipment could be located within a Digital Realty facility. So you're either locally there, or you're cross connecting into a Digital Realty facility from your actual physical on prem environment. And here you can see, for those familiar with networks, right, there's routers that sit there.

Effectively, you need to establish a connection between your on prem router all the way to Google. So the important part that I want to point out, on the right hand side of the diagram there, is that this is actually, tying back to the earlier presentation, what they were talking about in terms of Anthos, right? This is really where you could effectively see an Anthos cluster being deployed, right? So it could sit right there within Digital Realty, could be connected there. And this is really where that extends that communication path between Anthos all the way to GCP, specifically from an Anthos on prem workload.

So as we move to the center here, this is where the Partner Interconnect product really sits. Right? So this is the Service Exchange that we talked about with Megaport earlier, that Don mentioned. This is really how the magic happens in a connection between Digital Realty and Google. Right?

And here, there's some key points that I wanna bring up in terms of how the communication really is established. So in this case, Megaport has a connection to Google, to what we call our Google peering edge, which is our peering edge equipment into our network. And this is preestablished in the various Digital Realty facilities. So the two key points here that are important are these concepts of the diverse zones. So why is this important?

Well, within our facilities, right, we have two separate zones, which are actually separate maintenance domains, really. So they're separate availability domains. But the main key part is that they're used to make sure that if there's any specific type of maintenance activity occurring, only one zone in the same metro will be taken down as part of a maintenance activity. So this is really a critical piece when you're building a topology with Google, especially when I talk to customers about building a highly available connection, depending on the types of SLAs that you're looking for. It is absolutely critical to build a redundant connection in both Zone 1 and Zone 2, but specifically within the same metropolitan region.

So meaning, if you're in a Digital Realty facility on the East Coast, in that specific facility in that specific metropolitan area, you would need to build redundant connections within the same metropolitan region. And this is really critical because maintenance activities are scheduled within a metro. So specifically when you're building this, this is a very, very critical point to keep in mind. Now, if we extend all the way to the left, you see the Google Cloud Platform components. So that's your projects, your virtual private cloud, or VPC, and then, eventually, a region that's deployed there with specific compute infrastructure.

So that could be your Anthos workload sitting in GCP. That could be APIs that you're communicating with within GCP: BigQuery, storage, etcetera. And how the communication reaches GCP is through a concept called the Cloud Router. And the Cloud Router is our managed BGP control plane speaker, which effectively exchanges routes between your on prem network and GCP. And you can see the line between the Cloud Router and the router all the way on the right hand side.

These are what we call interconnect attachments, or VLAN attachments, that you're effectively building between the router on the right and the Cloud Router on the left. So by building these attachments, you're effectively able to exchange routes and establish that private, low latency connection from your Digital Realty workloads all the way to GCP. And one of the really cool things here is that you can see I have a workload here, for example, in us-west4, which is our region in Las Vegas.

I also have an example of us-west3 in Salt Lake City and us-west2 in LA. We actually allow routing globally within our VPCs in GCP.

Our VPCs are global. So there is a configuration that you can do within the VPC to allow global routing. So what that means is, if I have a workload that sits physically somewhere on, let's say, the East Coast, I can connect to a Cloud Router in a region on the East Coast, our us-east4 region, for example, in Ashburn. And then from there, you can actually reach other workloads that sit in different regions within GCP. And you can do that using our backbone.

So effectively, you just need to create a single connection, or a pair of redundant connections, to a region within GCP. And from there, you can actually use our backbone to route traffic to other regions within GCP. And I'll talk a bit about some of the reachability that you can have from a workload physically somewhere on the continent to a GCP region as well. So we have some important key points there as well. Alright.
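For the GCP side of what Nick just described, a hedged sketch of the provisioning commands might look like this; the network, region, router, and attachment names are hypothetical, and the commands need an authenticated project, so treat them as a sketch rather than a runnable script.

```shell
# Cloud Router that will run the BGP session with the on-prem router.
# 16550 is the ASN Google requires for Partner Interconnect.
gcloud compute routers create dlr-router \
  --network my-vpc --region us-east4 --asn 16550

# Switch the VPC to global routing so workloads in other regions
# (us-west2, us-west3, ...) are reachable over Google's backbone
# through this one metro's attachments.
gcloud compute networks update my-vpc --bgp-routing-mode global

# One VLAN attachment per edge availability domain, both in the SAME
# metro, so a maintenance window in one zone never drops both paths.
gcloud compute interconnects attachments partner create dlr-attach-zone1 \
  --region us-east4 --router dlr-router \
  --edge-availability-domain availability-domain-1
gcloud compute interconnects attachments partner create dlr-attach-zone2 \
  --region us-east4 --router dlr-router \
  --edge-availability-domain availability-domain-2

# Each attachment returns a pairing key that you paste into the
# Megaport / Service Exchange portal to activate the virtual circuit
# at the bandwidth tier chosen there.
```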

So let's talk a bit about the GCP interconnect costs specifically, right? So I'm not going to talk too much about the Dedicated Interconnect product itself. But effectively, with Dedicated Interconnect, the customer owns the GCP port. Right? So instead of having a connection through a partner, it would be a direct connection to Google.

So in this case, customers build their attachments directly through the cloud console. They own the actual port to Google. So it's no longer a shared fabric that's effectively being resold; the port is bought directly by the customer. So that's a cost that's passed down directly from Google for the customer to own that entire port. With the Partner Interconnect product, which is really what we're talking about here with Service Exchange and Megaport, Megaport owns the GCP port.

So they own that connection to Google, and then they effectively resell these virtual circuits to their customers. So customers build these attachments via the service provider portal itself, and then they effectively provision and activate these within GCP. So the VLAN attachments, which are the virtual circuits that you saw on a previous slide, are provisioned by customers at different capacity levels, and they're charged on an hourly basis. So for Partner Interconnect, there's a different hourly fee based on the bandwidth capacity, and we have different attachment capacities, as you can see here on the list, that the Partner Interconnect product effectively supports.

So you're able to dynamically increase or decrease the bandwidth that you need on demand, depending on the type of traffic that you have. So it's actually really useful if you have bursty workloads. You can start with a specific attachment, say, 50 megabits, and then increase all the way up to, you know, five gigs if you need to at a different point in time. And going a little bit into the networking components themselves here, for those network savvy folks:
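To put the capacity tiers into code, here is a minimal Python sketch that picks the smallest Partner Interconnect attachment tier meeting a bandwidth need. The tier list mirrors the capacities discussed here; check Google's current documentation for the live list, as it can change.

```python
# Hypothetical helper: choose the smallest Partner Interconnect VLAN-attachment
# capacity that covers a required bandwidth. Tier values are in Mbps and are
# illustrative; verify against Google's current Partner Interconnect docs.
TIERS_MBPS = [50, 100, 200, 300, 400, 500, 1000, 2000, 5000, 10000, 20000, 50000]

def smallest_attachment_tier(required_mbps: int) -> int:
    """Return the smallest attachment capacity (Mbps) that is >= required_mbps."""
    for tier in TIERS_MBPS:
        if tier >= required_mbps:
            return tier
    raise ValueError(f"No single attachment supports {required_mbps} Mbps")

print(smallest_attachment_tier(650))   # a 650 Mbps need lands on the 1000 Mbps tier
print(smallest_attachment_tier(50))    # the minimum tier covers a 50 Mbps need
```

Because the attachments are provisioned on demand and billed hourly, resizing between tiers like this is how the "bursty workload" pattern above stays cost-efficient.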

This is an eBGP session that's established between your on-prem routers and GCP, so you need to support BGP for that. There are some specific MTU settings that you need to configure; we support both 1440 and 1500. And there's multihop eBGP configuration that needs to be done as well.
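Those constraints (a supported MTU, and an eBGP session, which by definition runs between two different autonomous systems) are easy to sanity-check before turning anything up. Here is a small hypothetical pre-flight check; the 16550 peer ASN below is the ASN Google uses for Partner Interconnect, but treat the helper itself as an illustration, not a real tool.

```python
# Hypothetical pre-flight check for the on-prem side of the peering described
# above: MTU must be one of the values supported on the attachment (1440 or
# 1500 as discussed here), and since the session is eBGP, the local and peer
# ASNs must differ.
SUPPORTED_MTUS = {1440, 1500}

def validate_peering(mtu: int, local_asn: int, peer_asn: int) -> list[str]:
    """Return a list of configuration problems; empty means the check passed."""
    problems = []
    if mtu not in SUPPORTED_MTUS:
        problems.append(f"MTU {mtu} unsupported; use one of {sorted(SUPPORTED_MTUS)}")
    if local_asn == peer_asn:
        problems.append("eBGP requires different local and peer ASNs")
    return problems

print(validate_peering(1500, 65001, 16550))   # passes: []
print(validate_peering(9000, 65001, 65001))   # fails both checks
```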

Now, on the VLAN attachments, I mentioned earlier the reachability between different regions globally. So one key point I wanna mention here is that you can build a VLAN attachment to any metro edge within the same continent. So what that means is: take the physical on-ramp to Google, the Megaport connection that you saw in the middle there. If that's located somewhere in North America, you can build a VLAN attachment to any other region in North America directly. So say you want a connection to a region on the West Coast, but you're on the East Coast: you can actually build that VLAN attachment from an East Coast on-ramp on the Megaport side all the way to a Google Cloud region on the West Coast. And this allows you to effectively put data on the West Coast and serve it directly there.

And you can build redundant circuits that way as well, so that you can actually deploy workloads in different regions. So that's one of the key points in terms of being able to build that type of redundancy and that type of availability across the continent. So now I'd like to talk about the Anthos piece in a little bit more detail. John's covered a lot of these components; I just wanna put into perspective where the Cloud Interconnect product really fits here, right?

So you can see in this diagram the actual implementation of the technology stack. You can find GKE and GKE On-Prem at the compute layer. You can see here on the right-hand side and on the left-hand side, one is sitting on prem, the other one is sitting in GCP. The key point I wanna bring up here is the Cloud Interconnect component between the two.

Right? So that effectively allows that private connection, that dedicated low latency connection, between your workloads on prem and GCP. So you're effectively able to move traffic through the data plane between these two workloads through that private connection. Right? So you no longer have to rely on the public Internet to move data between these two.

As John mentioned, you can deploy workloads seamlessly within GCP through Anthos and then extend your cluster to on prem, and you have that seamless connection between the two environments through that Cloud Interconnect connection. So that's really a critical piece there, being able to establish that, you know, end to end. And Mike from Megaport's gonna talk a bit more about this as well, and how it fits into the solution. But I just wanted to touch a little bit on this in this diagram. And then here we can add on top of that the service mesh that John mentioned, right, the ASM workload itself.

So you can see here, it creates a connectivity fabric between the clusters, again utilizing the Cloud Interconnect component. So really, all the traffic established within the service mesh between your on-prem workload and GCP gets to traverse the private interconnect connection that you have. So again, not relying on any public network, not relying on Internet circuits to do this. You're actually establishing that communication path through that private connection. And then lastly here, we have the Anthos Config Management, which John mentioned, right?

It's a single source of truth for your cluster configuration, and it's kept in a Git repository, which you can see here in this illustration; we have it on premises. And again, you have the communication path that can occur through the Cloud Interconnect component, which is, again, the key point that I want to bring up here as part of the connectivity solution: you're able to utilize that private connection, put these two components together, and connect them privately. So with that, I know there's a lot of topics. If you have any questions, please feel free to reach out. Happy to help and discuss these in more detail.

And I will hand it off to Mike from Megaport to talk about the Megaport solution.

Speaker 4

All right. Thanks, Nick. Appreciate it. Hi, everyone. My name is Mike Rockwell.

I'm the Global Head of Solutions for Megaport on the direct side of the business. So just a little bit about Megaport. We are a global network as a service provider. So we operate across 24 countries today. We're deployed in over 700 data centers.

And really, our solution was built to solve the problem of access to the public cloud from the data center environment. So we specialize in on-demand data center to cloud connectivity. We also specialize in hybrid cloud connectivity through our MCR, which I'll talk about a little bit here today. And we most recently rolled out what we term the Megaport Virtual Edge, which allows our customers to seamlessly integrate our on-demand platform into their SD-WAN deployment. So it's an honor to be here with you all today, and I'm certainly looking forward to adding to the conversation.

And where Megaport primarily sits in the conversation is, from a network connectivity perspective, on the private side of the network: connecting your on-prem environment, with Service Exchange and Digital Realty in the data center, to your cloud environment. So specifically with our partner today, we'll talk a little bit more about Google, but we'll also reference how you can set up your hybrid and multi cloud connectivity to other cloud providers as well. So first, as we jump into it, really where we specialize is consulting our customers on the hybrid and multi cloud private connectivity component. So some of the common challenges and typical conversations that we engage in are really around the challenge of geographical distance. As Nick referred to earlier in his presentation, if I'm trying to access an availability zone or an edge location with GCP, there's a couple of ways that I can do it.

I can cross connect. If I sit in a data center that has an on-ramp or edge location with GCP, I can simply run a cross connect. But if I don't sit in that data center, I have to understand physically where the edge of that cloud provider's network sits. So with Service Exchange and Digital Realty with Megaport, there are multiple options for customers to accommodate that data center to cloud need and ultimately set up the lowest latency connections that can be configured. If you look at some of the solutions that we build, if the customer is located in the US, on the West Coast or on the East Coast, we have on-ramp access to each one of the cloud providers.

So if a customer is looking to route cloud to cloud and also back into the data center, we're able to solve that for them in each one of these regions across the globe. The other thing that we typically talk about is performance and reliability. So while latency is a primary component in understanding the geographical distance between your physical presence in the data center and the cloud providers' regions, performance is also a key part of that.

So customers are typically looking for that consistent performance. And through Partner Interconnect and Service Exchange, in that private connectivity platform, performance is one of the key drivers of our solution. So between where the edge of our network sits within the data center and where the edge of our network sits within the cloud providers, you're looking for the shortest path to get the best performance between those two edge points. And I'll talk about that a little bit more here in a minute. The other thing that we typically consult on is cost.

So many times, customers may have moved directly to set up their hybrid cloud environments; they set up their resources within the public cloud. They may initially access those resources through a public Internet connection instead of VPN tunnels. One of the negatives of that is that there are additional egress charges that are applied. So if you look at the egress charges from a bandwidth perspective, with most of the cloud service providers, if you route over the Internet and you're pulling the data down (egress data you're pulling out of the cloud back into your data center), they're going to charge roughly $0.09 to $0.10 a gig. As you move into a private connectivity model, which is what's new from a Partner Interconnect perspective, or a Direct Connect with AWS, those charges move down to more like $0.02 to $0.025 a gig on egress. So typically, what we see with our customers when we start discussing costs is that they're not always aware of those egress fees that are charged. Typically, if a customer is using an Internet connection and they look at the private bandwidth or private connectivity model, they're going to see, one, better performance over a private, consistent path. They're also going to see the cost reduced from the egress perspective, and that typically is going to cover the cost of setting up that private connectivity and then some.
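To put rough numbers on that egress comparison, here's a quick sketch. The rates are the illustrative figures from this discussion, not quoted prices; actual rates vary by provider, region, and tier.

```python
# Back-of-the-envelope comparison of internet vs. private-interconnect egress.
# Rates are placeholders from the discussion (~$0.09/GB internet, ~$0.02/GB
# private); real pricing varies by provider, region, and tier.
def monthly_egress_cost(gb_per_month: float, rate_per_gb: float) -> float:
    return gb_per_month * rate_per_gb

internet = monthly_egress_cost(50_000, 0.09)   # 50 TB/month pulled over the internet
private = monthly_egress_cost(50_000, 0.02)    # same traffic over a private path
savings = internet - private
print(f"internet ${internet:,.0f}  private ${private:,.0f}  savings ${savings:,.0f}/mo")
# At this volume the egress savings alone often cover the fixed port and
# VLAN-attachment fees of the private connection, which is the point above.
```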

The next piece that we talk about is what's always on everyone's mind: security. A lot of the customers that we work with initially set up connectivity through the Internet and see issues with performance when it comes to latency and throughput. But then there are also the typical security concerns that are going to drive them to that private connectivity model. And when you look at complexity: when you start thinking about hybrid and multi cloud, you're connecting to multiple different endpoints outside of your data center. You also might want to route cloud to cloud.

So once you start thinking about adding multiple clouds into the mix, and you add in your data centers, you're going to set up those VPN tunnels between each one of these locations and have a path between each location. That can be very complex and tough to manage, and it can also add to some of the security risks involved as well. And as we talk through some of the different designs, we're going to cover each of these aspects and how they can be solved through a Megaport Service Exchange type of solution, accessing the cloud providers. The other piece then is picking the right CSP for the right workload. One of the things that can be part of that discussion, when we talk about latency and routing cloud to cloud or a multi cloud type of deployment, is that you definitely want to look at the regions where you're deploying those resources. So as the cloud providers continue to build out regions across the globe, if you do want to have a multi cloud type of deployment, and you're relying on a low latency connection to get between those two cloud providers, you do want to understand where those CSP workloads are deployed within those regions, and what the latency is going to be between those two cloud regions to access those resources, to make sure that your applications are going to perform in the optimal manner.

And we'll talk a little bit about our virtual router, the Megaport Cloud Router (with Service Exchange, the Service Exchange Cloud Router), and how that can solve for some of the multi cloud region-to-region connectivity needs. So I'm just going to touch on this real quick, because Nick talked a little bit about it. But as we're reviewing setting up private connectivity for hybrid and multi cloud, there are really three key components that we're going to evaluate: one is the data center, another is the cloud edge, and the third is the connectivity in between the two. So when we're setting up private connectivity, there are really two models of doing it.

One would typically be via dedicated connections. So Digital Realty has locations where Google has deployed their edge within the data center, and customers may choose to set up a dedicated connection. In most cases, though, when a customer is building out from a data center to one of the cloud providers, that edge location isn't going to sit right within the data center where they're deployed. So they have to have a strategy to get out of that data center to the edge of the cloud provider's network. And one of the great things about Megaport and Service Exchange, as you're building out a hybrid or multi cloud type of environment, is that Megaport's already built out the physical connectivity to edge locations with Google for Partner Interconnect.

We have Oracle, AWS Direct Connect, Azure ExpressRoute, and on down the line. So while maybe you're connecting directly to Google today, or you want to connect to Google, you can also add these additional connections. You're going to go into your Service Exchange management console, deploy a port, cross connect to that port, and now you have access to all of the cloud providers no matter where those edge locations sit. The other piece that you're also going to have to consider is from that edge into the cloud provider region. Nick touched on this in regards to Google and Partner Interconnect: you can connect at any edge location within North America and ultimately route across the Google network to get to that region.

But when you're taking all the components together and understanding the latency and the performance you're going to require from a private connectivity standpoint, all of these key components are factored into that equation. So certainly, from a Megaport perspective, when we're consulting with customers on any one of the cloud providers that we work with, we're typically going to drop them off at the edge location that's closest to their physical data center. And we have tons of on-ramp locations, which I'll go through here in a minute, where you can ultimately do that. Nick really covered this in his presentation, but just as a high level, to tie in the Service Exchange and Megaport component of it: you can easily go into either your Megaport or your Service Exchange console. You're going to deploy a one gig or 10 gig, and potentially 100 gig, physical connection into the network.

You're then going to set up an 802.1Q trunk port. You'll assign a VLAN on what we term a VXC, or cross connect, with Service Exchange. We're then going to provide the private connectivity to the Partner Interconnect zone, or availability zone, on the Google Cloud provider network. You're then going to use your Partner Interconnect attachment to connect to your Cloud Router. So in this particular situation, you have full control over your routing; your on-prem network is going to peer directly with your VPC, or your Google Cloud network.
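The steps above can be sketched as a simple handshake in code. The attachment state names (PENDING_PARTNER, PENDING_CUSTOMER, ACTIVE) follow the Partner Interconnect flow being described, where GCP issues a pairing key that you hand to the partner, but the function names and data shapes here are made up for illustration, not a real API.

```python
import uuid

# Hypothetical sketch of the Partner Interconnect provisioning handshake,
# modeled as plain data. Helper names are invented for illustration only.

def create_partner_attachment(region: str) -> dict:
    """Customer side: create the attachment in GCP and receive a pairing key."""
    return {"region": region, "pairing_key": str(uuid.uuid4()), "state": "PENDING_PARTNER"}

def partner_provisions_vxc(attachment: dict, vlan: int) -> dict:
    """Partner side: Megaport uses the pairing key to build the VXC on a VLAN."""
    return {**attachment, "vlan": vlan, "state": "PENDING_CUSTOMER"}

def customer_activates(attachment: dict) -> dict:
    """Customer side: activate the attachment so BGP can come up on the Cloud Router."""
    return {**attachment, "state": "ACTIVE"}

att = customer_activates(partner_provisions_vxc(create_partner_attachment("us-west4"), vlan=100))
print(att["state"])   # ACTIVE
```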

When you start to think about how you want to build those hybrid and multi cloud solutions, one of the key components that I mentioned earlier is that you want to connect to an on-ramp or cloud provider edge location that's close to your data center, but then also close to your region, if you have low latency concerns around connecting from a hybrid perspective. So one of the benefits of Service Exchange via Megaport is that we have more cloud on-ramps than anyone across the globe. So whether you're connecting to GCP, or you're connecting to AWS, or Azure with an ExpressRoute, if you're using that Service Exchange fabric, you're going to have multiple on-ramp locations sitting in those major markets, where you can access the cloud provider network on a short path for low latency, but then also connect to the cloud provider directly at the region that's closest to your data center. Here I highlight our locations where we have Partner Interconnect set up. So Don at the beginning showed specifically where GCP is actually deployed physically in the Digital Realty data centers.

This just really expands the footprint. So if you're not sitting in the Digital Realty data center where you want to connect to Google, or you want to have a multi cloud type of approach with a single connection in the data center, you can now connect from a Digital Realty data center through Service Exchange to any one of these endpoints where Megaport's built out the Partner Interconnect. The other advantage is that at each one of these dots, which are metro areas for GCP, we've built out physical connectivity to Zone 1 and Zone 2. So as Nick had mentioned earlier, if you want to set up a 99.9% or 99.99% SLA, you can easily facilitate that through Service Exchange and Megaport, and we've already built out the physical infrastructure to support that at the edge of the Google network in each one of these locations.
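A quick worked example shows why the Zone 1/Zone 2 redundancy matters. This is textbook parallel-availability math under an independence assumption, not Google's SLA derivation; the published 99.9% and 99.99% SLAs come with their own topology requirements.

```python
# Illustrative availability math behind dual-zone redundancy: if each circuit
# is independently available with probability p, the redundant pair fails only
# when both fail at once. Independence is an assumption, not a guarantee.
def parallel_availability(p: float, n: int = 2) -> float:
    """Availability of n independent parallel circuits, each available with probability p."""
    return 1 - (1 - p) ** n

single = 0.999
print(f"one circuit:  {single:.4%}")
print(f"two circuits: {parallel_availability(single):.6%}")
```

The jump from three nines to (in this idealized model) six nines is why the pre-built Zone 1 and Zone 2 connectivity is called out as the path to the higher SLA tier.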

So really, to bring the full conversation together today, there are a couple of different models that I'm just going to walk through of what you can deploy through Service Exchange. So we've talked about on prem, we've talked about Kubernetes and Anthos and the management platform that sits on top, which can manage all these resources on prem and in each one of the cloud providers. Megaport is really the glue that holds it all together through Service Exchange. Customers can simply connect in that Service Exchange or Digital Realty data center to the platform. They can build Partner Interconnects into Google.

They can also build Dedicated Interconnects into GCP. And they can manage all of that via Anthos over the top. Really, through all the partners that are on the call today, you can seamlessly manage these environments and also build out the private connectivity and the resources on each end to support your business. So this is, I think, one of our best use cases, and it's available at megaport.com under our use case section. Intercontinental Exchange, or ICE, is a Fortune 500 company and a provider of marketplace infrastructure, data services, and technology solutions, with a broad range of customers they support, including financial institutions, corporations, and government entities.

And ICE operates regulated marketplaces, including the New York Stock Exchange. So it's a pretty powerful use case from a finance perspective. But really, the reason they came to Megaport is that we've been a trusted adviser for them for some time, and what they've done is incorporate us into their ICE Global Network. One of the things that they started to see is that customers wanted to pull data feeds out of AWS, Azure, and GCP.

And their ICE Global Network was just sitting within their private data centers across the globe. So typically, customers would go in and pull feeds from those data centers, and ICE would then have to build out connectivity for each one of those customers into the cloud providers. So Megaport really fit that need. And where ICE expanded their solution is they built what they call the ICE Global Connect platform.

And really, the ICE Global Connect platform and its underlying infrastructure are powered by Megaport: ICE has physically connected their network to the Megaport edge within the data centers where they're located. And now they're able to, on demand, build out single private connections out of their network into each one of the cloud providers that their customers are looking to access, to pull down each one of the feeds they may be looking for, whether it's ICE tray or eSignal or ICE Unicast, any of these data feeds or data services that they're trying to access. And so really, Megaport's unique value proposition here, as I mentioned at the beginning, is that we're a global company. We work across 24 different countries today. And it's the same with ICE.

They're in North America, and they have a network that's deployed in Europe and APAC as well. And they built out their underlying cloud connectivity platform through Megaport. So when we look at building out the private connectivity, the security, and the ease of use, I think this statement from ICE was a powerful one from a financial vertical perspective. ICE obviously has very stringent requirements for security, resiliency, and performance for their customers.

And any provider they work with must have the highest standards as well, to provide for their global customers with speed, choice, and coverage. So really, on each one of the marks they were looking to achieve, the Megaport network hit the mark. And here's what the design looks like: this gives a look from a regional perspective at each one of these designs, but they've connected to the Megaport network in each one of these locations across the globe. So once they've physically connected in the data center, each time a customer wants to deploy the Global Connect service, they're able to build the private connectivity across the Megaport network to get to each one of the cloud providers. So whether they want to build connectivity into AWS, Azure, or GCP, all of that can be solved via the Megaport platform.

So the last thing I'm going to touch on is more around multi cloud connectivity, but then also hybrid back into the data center, and potentially a better way for you to support your multi cloud solutions. So this is going to go into the Megaport Cloud Router, or the Service Exchange Cloud Router. We have published documentation with Google on how customers can set up this service with Megaport and with Partner Interconnect through MCR, but also to other cloud providers. So it's really a seamless solution for setting up multi cloud connectivity between clouds. I know we're getting close to the end here, so I'll wrap this up.

But the biggest advantage of the MCR is that now customers don't have to route all that traffic back into their data center. So if they do have environments within GCP or AWS, the MCR allows them to keep that traffic local and route between their cloud environments without having to hairpin that traffic back into their data center.

So if you look at, say, Google US East 4, and you look at AWS US East 1 in Northern Virginia, if a customer wants a very low latency profile between their applications in these two environments, between US East 4 with GCP and US East 1 with AWS, they can see five milliseconds of latency between these two regions by deploying a cloud router that sits right at the edge of the cloud provider networks there in Ashburn, Virginia. But then they can also utilize a single route path, or a single BGP peer, back into their data center as well, so they're really incorporating a multi cloud and hybrid cloud solution. So real quick, to summarize, the benefit of the MCR is that you're shifting the responsibility to the Megaport network. You don't have to backhaul all that traffic into your private network.

You can offload that to Megaport. The Megaport Cloud Router is going to manage the peering relationships with each of its endpoints, so you can route cloud to cloud and also back into the data center. Geographical distance: when we're talking about latency and the best way to support applications, the MCR is prime for that.

Again, you can see five milliseconds in that US East region. And then it's fully integrated with the CSPs. So if I'm deploying with Google, for instance, on the MCR, it's automated in its functionality of setting up the layer two connectivity between Megaport and GCP, but it also sets up the peering relationship as well.

It also offers route filtering. You can do BGP filters, and you can also do more specific prefix filtering, and you don't have to be knowledgeable in CLI; all of that's in an easy to use interface on our platform. No physical presence required. So for multi cloud, if you just want to route cloud to cloud, you can certainly set up an MCR.
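To illustrate what that prefix filtering amounts to underneath, here's a minimal Python sketch: keep only the advertised routes that fall inside an allowed prefix list. The MCR exposes this through its portal UI; the function here is a hypothetical stand-in, not Megaport's implementation.

```python
import ipaddress

# Hypothetical sketch of prefix-list filtering: accept only advertised routes
# that are subnets of an allowed prefix, drop everything else.
def filter_routes(advertised: list[str], allowed: list[str]) -> list[str]:
    allowed_nets = [ipaddress.ip_network(a) for a in allowed]
    return [
        route for route in advertised
        if any(ipaddress.ip_network(route).subnet_of(net) for net in allowed_nets)
    ]

routes = ["10.1.0.0/24", "10.2.0.0/24", "192.168.5.0/24"]
print(filter_routes(routes, ["10.0.0.0/8"]))   # drops the 192.168.5.0/24 route
```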

It's also on demand, and there are no locked-in contracts. So if you just want to try out the MCR for a short period of time, you can turn it up and turn it down with no penalty. And then it also simplifies that routing experience: you route cloud to cloud and just manage a single point of connection back into the data center. As you add other peers, you can add them to the MCR and still manage a single peer relationship back into your data center.

So, that is the end of my presentation here. I'm going to pass the ball back over, I believe, to Don, and he is going to run through our promotion.

Speaker 2

Yep. Thanks, Mike. Appreciate it. Thanks, John, Nick. So just to wrap it up here.

So you've heard a lot of great things and great architecture, and hopefully you understand a little bit more about how we do the interconnect. This is the offer that we've collectively come up with. We wanna give you an opportunity to connect to Google and try it out at no cost to you. We're offering, basically, if you're an existing customer with existing ports, three free months of virtual cross connects to Google.

You can do those for different regions; there's really not a limit. Try it out. If you're a Digital customer and you're using Service Exchange, we're making that offer for free. If you need new ports, and you can even LAG ports together to get more bandwidth, and you wanna connect to Google.

That also is free. So basically, the offer is through the end of the year: you get three months of free connections to Google. And I would also just extend that: you know, if you have a port and you're trying out Google, it also gives you an opportunity to try out other B2B connections and virtual cross connects as well.

Those, of course, will be paid. But for the free promotion with Google, you know, take advantage of it. It's a great opportunity, as long as you're in a Digital Realty facility anywhere worldwide. This is available, and you can reach out to us for more information; either Megaport or Digital can help you with that. So with that, I'll hand it to Laura to close out.

Speaker 6

Perfect. And thanks, Don. Appreciate that. So thanks, everybody. We are at the top of the hour.

So thank you to everyone who attended; I hope you found this presentation beneficial. We'll be sending a recording of this to all attendees as a follow-up. If you would like to connect with any of our speakers, or if you have any questions you'd like us to follow up on, don't hesitate to reach out. Other than that, have a great rest of the day.

Speaker 2

Thank you.
