David Linthicum
Contributor

Multicloud architecture decomposition simplified

analysis
Mar 05, 2021 | 4 mins
Cloud Architecture | Cloud Computing

Underoptimized architecture is costly and inefficient. It's worth taking some extra time to decompose your proposed solution to avoid trouble down the road.


Architectures are like opinions; everyone has one that’s based on their own biases. Sometimes it’s a dedication to using only open source solutions, a specific brand of public cloud, relational databases, you name it. These biases are often the driving factors that determine which solution you employ and how good or bad those choices turn out to be.

The issue is that when you choose components or technology based on a bias, often you don’t consider technology that’s better able to meet the core requirements of the business. This leads to an architecture that may approach but never get to 100% optimization.

Optimization means that costs are kept at a minimum and efficiency is kept at a maximum. You can give 10 cloud architects the same problems to solve and get 10 very different solutions with prices that vary by many millions of dollars a year.

The problem is that all 10 solutions will work—sort of. You can mask an underoptimized architecture by tossing money at it in the form of layers of technology to remediate performance, resiliency, security, etc. All these layers add as much as 10 times the cost compared to a multicloud architecture that is already optimized.

How do you build an optimized multicloud architecture? Multicloud architecture decomposition is the best approach. It’s really an old trick for a new problem: Decompose all proposed solutions to a functional primitive and evaluate each on its own merits to see if the core component is optimal.

For example, don’t just look at a proposed database service, look at the components of that database service, such as data governance, data security, data recovery, I/O, caching, rollback, etc. Make sure that not only is the database a good choice, but the subsystems are as well. Sometimes third-party products may be better.

From there, move to each component, such as compute, storage, development, and operations, decomposing each to view the technology’s capability of solving the core problems and the use cases around the multicloud architecture. Of course, we do this to an array of technologies, breaking down each one to its smallest function and comparing it with our core requirements around building a multicloud in the first place. For the purposes of this article, I’m assuming that multicloud itself is a good architectural choice.

Next, evaluate the dependencies: the technology components that a specific technology needs in order to work. Back to our database example: If you pick a cloud-native database that can only operate on a single public cloud, guess what public cloud you need to pick? Again, decompose that public cloud into functional parts that will be used by your multicloud, focusing only on the components that are relevant to the core requirements.
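That dependency chain can be sketched programmatically. The component names and dependency mappings below are purely illustrative assumptions, not real product data; the point is that a single component choice can silently commit you to everything it depends on.

```python
# Hypothetical dependency map: each component lists what it requires.
# All names here are illustrative, not real cloud products.
DEPENDENCIES = {
    "cloud_native_db_x": ["public_cloud_a"],  # this DB only runs on Cloud A
    "public_cloud_a": [],
    "cross_cloud_security": [],               # no lock-in implied
}

def resolve(choices):
    """Return every component pulled in, transitively, by the initial choices."""
    resolved = set()
    stack = list(choices)
    while stack:
        component = stack.pop()
        if component not in resolved:
            resolved.add(component)
            stack.extend(DEPENDENCIES.get(component, []))
    return resolved

# Picking the cloud-native database also commits you to its public cloud.
print(sorted(resolve(["cloud_native_db_x"])))
```

Running the resolver over each candidate before comparing them makes the hidden commitments explicit, so you evaluate the full footprint of a choice rather than the component in isolation.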

For example, if you’re going to leverage cross-cloud security, then the native security may not need to be evaluated. Repeat this for all dependencies related to all candidate technologies that are part of your proposed multicloud architecture. Also consider costs, including price, ops resources, the provider’s business viability, and other secondary factors.

Do this for all proposed components, tossing out the less-optimal technology, all the while keeping in mind the core purpose of the architecture. What problems does this collection of technologies need to solve, using a single architecture that’s proven to be optimal?
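The selection step above amounts to scoring each candidate’s decomposed subsystems against weighted core requirements and keeping the winner. A minimal sketch, in which all requirement weights, candidate names, and subsystem scores are invented for illustration:

```python
# Hypothetical weights: how much each core requirement matters (sums to 1.0).
REQUIREMENT_WEIGHTS = {"data_security": 0.4, "io_performance": 0.35, "recovery": 0.25}

# Hypothetical candidates, each decomposed into subsystem scores (0-10).
CANDIDATES = {
    "db_service_a": {"data_security": 9, "io_performance": 6, "recovery": 8},
    "db_service_b": {"data_security": 7, "io_performance": 9, "recovery": 8},
}

def score(subsystem_scores):
    """Weighted sum of a candidate's subsystem scores against the requirements."""
    return sum(REQUIREMENT_WEIGHTS[req] * s for req, s in subsystem_scores.items())

# Keep the candidate whose decomposed parts best fit the core requirements.
best = max(CANDIDATES, key=lambda name: score(CANDIDATES[name]))
print(best, score(CANDIDATES[best]))
```

The real work, of course, is in choosing honest weights and scoring each subsystem objectively rather than by preference, which is the whole point of decomposing before deciding.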

If you’re thinking bottom-up architecture, you’re very close to what architecture decomposition is. Essentially, you’re justifying each component or technology, each dependency, and all hard and soft costs, such as service pricing and the resources you’ll need for support.

I take this approach with most of my architecture projects, multicloud or not. It’s much harder, more time-consuming, and not as fun as just going with technologies I like. But by the time I get through this process, I’m assured that all platforms, components, services, and resources have been evaluated down to their smallest components, and all have proven to be optimal. Moreover, I’ve also considered all costs, risks, and dependencies, and I understand pretty completely whether this is the optimal architecture.

I wish I could say this is less work. It’s easily triple the effort I see expended out there now. However, the prevalence of underoptimized (bad) architectures that are overly complex and costly tells me that it’s time to think more carefully about how to get to the right solution. As enterprises rush to multicloud, we need to get this right, or else we’re taking some giant steps backwards.


David S. Linthicum is an internationally recognized industry expert and thought leader. Dave has authored 13 books on computing, the latest of which is An Insider’s Guide to Cloud Computing. Dave’s industry experience includes tenures as CTO and CEO of several successful software companies, and upper-level management positions in Fortune 100 companies. He keynotes leading technology conferences on cloud computing, SOA, enterprise application integration, and enterprise architecture. Dave writes the Cloud Computing blog for InfoWorld. His views are his own.
