David Linthicum
Contributor

The move to dynamic distributed cloud architectures

analysis
Oct 30, 2020
3 mins
Cloud Computing

Now that we’re moving things out to the edge, an emerging approach blows the doors off the cloud-only and edge-only models.


Edge computing is getting a great deal of attention now, and for good reason. Cloud architecture requires that some processing be placed as close as possible to the point of data consumption. Think computing systems in your car, industrial robots, and now full-blown connected mini clouds such as Microsoft’s Azure Stack and AWS Outposts, all certainly examples of edge computing.

The architectural approach to edge computing—and IoT (Internet of Things), for that matter—is the creation of edge computing replicants in the public clouds. You can think of these as clones of what exists on the edge computing device or platform, allowing you to sync changes and manage configurations on “the edge” centrally.
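
To make the replicant pattern a bit more concrete, here is a minimal sketch in Python, not tied to any particular cloud provider’s API. A cloud-side replicant holds the desired configuration as the source of truth, and the edge device pulls and applies it, so “the edge” is managed centrally. All class and method names here are hypothetical.

```python
# Sketch of the replicant pattern: a cloud-side clone of an edge device's
# configuration, edited centrally and synced down to the edge.
# All names (CloudReplicant, EdgeDevice, etc.) are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class CloudReplicant:
    """Cloud-side clone of an edge device: the source of truth for its config."""
    device_id: str
    config: dict = field(default_factory=dict)
    config_version: int = 0

    def update_config(self, changes: dict) -> None:
        # Central change: edit the replicant, not the device itself.
        self.config.update(changes)
        self.config_version += 1


@dataclass
class EdgeDevice:
    """The physical edge device, which periodically pulls from its replicant."""
    device_id: str
    config: dict = field(default_factory=dict)
    config_version: int = -1

    def sync_from(self, replicant: CloudReplicant) -> bool:
        # Pull the replicant's config if it is newer than what we hold locally.
        if replicant.config_version > self.config_version:
            self.config = dict(replicant.config)
            self.config_version = replicant.config_version
            return True
        return False


if __name__ == "__main__":
    replicant = CloudReplicant(device_id="robot-arm-42")
    device = EdgeDevice(device_id="robot-arm-42")

    # An operator changes the configuration centrally, in the public cloud...
    replicant.update_config({"sample_rate_hz": 100, "model": "defect-detect-v3"})

    # ...and the edge device picks it up on its next sync cycle.
    if device.sync_from(replicant):
        print(f"{device.device_id} running config v{device.config_version}:", device.config)
```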

The trouble with this model is that it’s static. The processing and data are tightly coupled to the public cloud or to an edge platform. There is typically no movement of those processes and data stores, although data is transmitted and received. This is a classic distributed architecture.

The trouble with the classic approach is that sometimes the processing and I/O requirements expand to 10 times the normal load. Edge devices are typically underpowered, considering that their mission is fairly well defined, and edge applications are built to match the resources available on the edge device or platform. However, as edge devices become more popular, the load placed on them will grow, and they will more frequently hit an upper limit they can’t handle.

The answer is the dynamic migration of processing and data storage from an edge device to the public cloud. Considering that a replicant already exists on the public cloud provider, that should be less of a problem. You will need to start syncing the data as well as the application and configuration, so that at any moment one can take over for the other (active/active).
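
A hedged sketch of what that active/active sync might look like: each tracked item (data, application, configuration) carries a version on both sides, and the stale side pulls from the fresher one, so either side can take over at any moment. The names and the simple last-writer-wins rule are assumptions, not a prescription.

```python
# Sketch of active/active state sync between an edge device and its cloud
# replicant. Each item (data, app, config) carries a version; the stale side
# pulls from the fresher one so either can take over. Names and the
# last-writer-wins rule are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class NodeState:
    name: str                                     # "edge" or "cloud"
    items: dict = field(default_factory=dict)     # e.g. {"data": ..., "config": ...}
    versions: dict = field(default_factory=dict)  # per-item version counters

    def write(self, key: str, value) -> None:
        # Local change: record the value and bump that item's version.
        self.items[key] = value
        self.versions[key] = self.versions.get(key, 0) + 1


def sync(a: NodeState, b: NodeState) -> None:
    """Bring both nodes to the newest version of every item (last-writer-wins)."""
    for key in set(a.versions) | set(b.versions):
        va, vb = a.versions.get(key, 0), b.versions.get(key, 0)
        if va > vb:
            b.items[key], b.versions[key] = a.items[key], va
        elif vb > va:
            a.items[key], a.versions[key] = b.items[key], vb


if __name__ == "__main__":
    edge = NodeState("edge")
    cloud = NodeState("cloud")

    edge.write("data", {"sensor_readings": [20.1, 20.4]})  # produced at the edge
    cloud.write("config", {"sample_rate_hz": 50})          # changed centrally

    sync(edge, cloud)

    # After syncing, either side holds current data and config and can take over.
    assert edge.items == cloud.items
    print("edge:", edge.items)
    print("cloud:", cloud.items)
```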

The idea here is to keep things as simple as you can. When the edge device does not have the processing power needed for a specific use case, the processing shifts from the edge to the cloud, where CPU and storage resources are nearly unlimited and processing can scale as needed. Afterward, the processing returns to the edge device with up-to-date, synced data.
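
As a rough illustration of that shift, here is a minimal routing sketch. The load metric, the capacity threshold, and the handler names are all assumptions for the sake of the example: work stays on the edge while it can cope, moves to the cloud replicant under heavy load, and moves back once the load subsides.

```python
# Sketch of dynamic processing migration: keep work on the edge while it can
# cope, shift it to the cloud replicant under heavy load, and shift it back
# once load subsides. Load metric, threshold, and handlers are assumptions.

def process_on_edge(request):
    return f"edge handled {request}"


def process_on_cloud(request):
    return f"cloud replicant handled {request}"


def route(request, current_load: float, edge_capacity: float):
    """Pick a processing location based on current load versus edge capacity."""
    if current_load <= edge_capacity:
        return process_on_edge(request)
    # Edge is saturated (e.g., a 10x spike): shift processing to the cloud,
    # which the synced replicant makes possible without losing state.
    return process_on_cloud(request)


if __name__ == "__main__":
    EDGE_CAPACITY = 1.0  # normalized: 1.0 = the edge device's normal full load

    for load in (0.4, 0.9, 10.0, 0.6):  # a spike to 10x normal, then back down
        print(f"load={load:>4}:", route("inference-job", load, EDGE_CAPACITY))
```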

Some ask the logical question: Why not just keep the processing and the data in the cloud and skip the edge device altogether? Edge is an architectural pattern that is still needed, with processing and data storage placed as close as possible to the point of origin. A dynamic distributed architecture leverages centralized processing as needed. It provides scalability without the loss of needed edge functionality.

A little something to add to your bag of cloud architecture tricks.


David S. Linthicum is an internationally recognized industry expert and thought leader. Dave has authored 13 books on computing, the latest of which is An Insider’s Guide to Cloud Computing. Dave’s industry experience includes tenures as CTO and CEO of several successful software companies, and upper-level management positions in Fortune 100 companies. He keynotes leading technology conferences on cloud computing, SOA, enterprise application integration, and enterprise architecture. Dave writes the Cloud Computing blog for InfoWorld. His views are his own.
