David Linthicum
Contributor

3 serverless downsides your cloud provider won’t mention

analysis
Jun 16, 2020 | 3 mins
Cloud Computing | Serverless Computing

Serverless is getting more popular as enterprises rush to the cloud, but some drawbacks are almost never discussed


Serverless is a game changer. As we look to accelerate the post-pandemic movement to the cloud, we would love to remove the step of sizing the cloud resources we think our workloads will need.

Serverless automatically provisions the cloud resources a workload needs, such as storage and compute, and then deprovisions them once the workload is through processing. Although some call this a lazy person’s cloud platform service, removing the need to guess at the right amount of resources to provision will keep you out of trouble these days.

However, with all the upsides, there are always a few downsides. I have three to review with you.

Cold starts happen when a function is invoked after sitting idle and the platform has to spin up a fresh execution environment, and running the function inside a virtual private cloud can make that lag even worse. If you’re remembering starting your mom’s Buick in high school, you’re not far off.

Moreover, different languages have different lags. If you benchmark them, you’ll get interesting results, such as Python being the fastest and .NET and Java being the slowest (just an example). You can use tools to analyze lag durations and determine the impact on your workloads. If you’re at all invested in serverless, I suggest you look into those tools.
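As a rough illustration, here is a minimal sketch of how you might spot cold starts yourself in a Python function handler. The handler name and the log format are my own placeholders, not tied to any particular provider’s tooling; the pattern works because module-level code runs only when a new execution environment is spun up.

```python
import time

# Module-level code runs only when the platform builds a fresh execution
# environment, so this timestamp roughly marks when a cold start began.
_ENV_CREATED_AT = time.time()
_invocations = 0


def handler(event, context=None):
    """Hypothetical serverless entry point that flags cold vs. warm invocations."""
    global _invocations
    _invocations += 1
    cold = _invocations == 1  # first call in this environment means a cold start
    init_gap = time.time() - _ENV_CREATED_AT if cold else 0.0
    print(f"cold_start={cold} seconds_since_env_init={init_gap:.3f}")
    return {"coldStart": cold}


if __name__ == "__main__":
    # Local simulation: the first call behaves like a cold start, the second warm.
    handler({})
    handler({})
```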

Distance latency depends on how far away the serverless function runs from its end users. This should be common sense, but I see companies run serverless functions in Asia when the majority of users are in the United States. The assumption is that bandwidth is not an issue, so they pick a region for convenience rather than utility, such as the admin team being located in Asia, and don’t consider the impact on users.

Another distance issue comes into play when the data is located in a different region from the core serverless function that uses the data. Again, this bad decision is typically made around process distribution on a public cloud. It looks great on PowerPoint but isn’t pragmatic.
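To make the point concrete, here is a rough sketch of how you could compare distance latency yourself: time a round trip from where your users sit to candidate regional endpoints. The two URLs are hypothetical placeholders; swap in whatever health-check endpoints your provider exposes in each region.

```python
import time
import urllib.request

# Hypothetical regional endpoints; replace with real health-check URLs
# for the regions you are comparing.
ENDPOINTS = {
    "us-east": "https://us-east.example.com/ping",
    "asia-southeast": "https://asia-southeast.example.com/ping",
}


def round_trip_ms(url: str, attempts: int = 5) -> float:
    """Average round-trip time in milliseconds over a few requests."""
    total = 0.0
    for _ in range(attempts):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()
        total += (time.perf_counter() - start) * 1000
    return total / attempts


if __name__ == "__main__":
    for region, url in ENDPOINTS.items():
        print(f"{region}: {round_trip_ms(url):.1f} ms average")
```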

Finally, underpowered runtime configurations are often overlooked. Serverless platforms offer a predefined list of memory and compute configurations, with memory running from roughly 64MB to 3008MB depending on the provider. CPU is then allocated in proportion to the amount of memory you select. A lower memory setting is typically less expensive, but there is a performance trade-off if the serverless system shortchanges you on both memory and CPU.
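If you suspect a function is being shortchanged, most platforms let you raise the memory setting, and with it the CPU share, through an API call. The sketch below assumes AWS Lambda and the boto3 SDK as one example; the function name is a placeholder, and the memory size you land on should come from benchmarking rather than guesswork.

```python
import boto3

# Placeholder function name; assumes AWS credentials and region are configured.
FUNCTION_NAME = "my-serverless-function"

lambda_client = boto3.client("lambda")

# Check the current memory allocation (the CPU share scales with this value).
current = lambda_client.get_function_configuration(FunctionName=FUNCTION_NAME)
print(f"Current memory: {current['MemorySize']} MB")

# Raise the memory setting; more memory also buys more CPU, at a higher
# per-invocation price, so benchmark before settling on a number.
lambda_client.update_function_configuration(
    FunctionName=FUNCTION_NAME,
    MemorySize=1024,
)
```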

Nothing is perfect, and while there are many upsides to leveraging serverless systems, you need to consider the downsides as well. Having a pragmatic understanding of issues allows you to work around them effectively.


David S. Linthicum is an internationally recognized industry expert and thought leader. Dave has authored 13 books on computing, the latest of which is An Insider’s Guide to Cloud Computing. Dave’s industry experience includes tenures as CTO and CEO of several successful software companies, and upper-level management positions in Fortune 100 companies. He keynotes leading technology conferences on cloud computing, SOA, enterprise application integration, and enterprise architecture. Dave writes the Cloud Computing blog for InfoWorld. His views are his own.
