Now that we’re moving things out to the edge, an emerging approach blows the doors off the cloud-only and edge-only models.

Edge computing is getting a great deal of attention now, and for good reason. Cloud architecture requires that some processing be placed closest to the point of data consumption. Think of computing systems in your car, industrial robots, and now full-blown connected mini clouds such as Microsoft’s Azure Stack and AWS Outposts, all examples of edge computing.

The common architectural approach to edge computing (and to IoT, the Internet of Things, for that matter) is to create edge computing replicants in the public clouds. You can think of these as clones of what exists on the edge computing device or platform, allowing you to sync changes and manage configurations on “the edge” centrally.

The trouble with this model is that it’s static. The processing and data are tightly coupled to either the public cloud or the edge platform. Those processes and data stores typically never move, although data is transmitted and received between the two. This is a classic distributed architecture.

The trouble with the classic approach is that processing and I/O load requirements sometimes expand to 10 times the normal load. Edge devices are typically underpowered, considering that their mission is fairly well defined, and edge applications are built to match the resources available on the edge device or platform. As edge devices become more popular, however, the load on them will grow, and they will more frequently hit an upper limit they can’t handle.

The answer is the dynamic migration of processing and data storage from the edge device to the public cloud. Considering that a replicant already exists on the public cloud provider, that should be less of a problem. You will need to sync the data as well as the application and configuration, so that at any moment one side can take over for the other (active/active).
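To make the pattern concrete, here is a minimal sketch of that offload decision in Python. All of the names (`Target`, `sync`, `choose_target`) are illustrative assumptions, not a real edge platform API; in practice the sync would be continuous and the capacity check would come from live telemetry.

```python
from dataclasses import dataclass, field

@dataclass
class Target:
    """A place where the workload can run: the edge device or its cloud replicant."""
    name: str
    capacity: float                              # max load units this target can handle
    state: dict = field(default_factory=dict)    # synced application state and config

def sync(src: Target, dst: Target) -> None:
    """Keep the replicant's state current so either side can take over (active/active)."""
    dst.state.update(src.state)

def choose_target(edge: Target, cloud: Target, load: float) -> Target:
    """Run on the edge while it can handle the load; otherwise shift to the cloud."""
    return edge if load <= edge.capacity else cloud

edge = Target("edge-device", capacity=10.0, state={"model_version": 3})
cloud = Target("cloud-replicant", capacity=float("inf"))
sync(edge, cloud)  # one-shot here; continuous in a real deployment

assert choose_target(edge, cloud, load=8.0) is edge     # normal load stays on the edge
assert choose_target(edge, cloud, load=100.0) is cloud  # a 10x spike shifts to the cloud
```

The point of the sketch is that the routing decision is trivial once the replicant’s state is already synced; the hard engineering work in a real system is the continuous synchronization, not the switch itself.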
The idea here is to keep things as simple as you can. If the edge device does not have the processing power needed for a specific use case, the processing shifts from the edge to the cloud, where CPU and storage resources are nearly unlimited and processing can scale as needed. Afterward, processing returns to the edge device with up-to-date, synced data.

Some ask the logical question: Why not just keep the processing and the data in the cloud and not bother with an edge device at all? Because edge is an architectural pattern that is still needed, with processing and data storage placed closest to the point of origin. A dynamic distributed architecture leverages centralized processing as needed, dynamically. It provides the architectural advantage of scalability without the loss of needed edge functionality.

A little something to add to your bag of cloud architecture tricks.