Raspberry Pis are versatile, small, inexpensive computers with all sorts of uses. Now they can also be private clouds.

Credit: Marco Verch

The running joke is that a Raspberry Pi is cheaper than an actual raspberry pie. I wouldn't pay $50 to $100 for a pie to eat, but the idea is compelling: a very capable computer with a small footprint and built-in networking, running open source software, at a price that hobbyists as well as professionals can afford. I've used them for years as IoT devices, since they can gather, store, process, and transmit data, and react to that data if needed. People are using Raspberry Pis for projects such as making motorcycle riding safer and other net-new IoT/edge development.

However, things are changing for the Pis. I was happy to see the k3s project, a lightweight Kubernetes distribution built for "resource-constrained environments." It is open source and optimized for ARM processors. If you've not guessed by now, this makes a Raspberry Pi-based Kubernetes cluster feasible, since this Kubernetes distribution is practically purpose-built for the Pi, with some limitations, of course.

This enabling technology lets cloud architects place Kubernetes clusters running containers outside the centralized public cloud, on small computers that work closer to the sources of the data. The clusters are still tightly coordinated, perhaps even spreading an application between a public cloud platform and hundreds or even thousands of Raspberry Pis running k3s. Clearly it's a type of edge computing, with thousands of use cases.

What strikes me about this architecture pattern is that cheap, edge-based devices are acting like lightweight private clouds. They provision resources as needed and use a preferred platform such as containers and Kubernetes. Of course, they have an upper limit of scalability. This is what hybrid cloud was supposed to be, but never was.
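To give a sense of how approachable this is, here is a sketch of standing up a small k3s cluster on a handful of Pis, based on the project's documented quick-start install script. The hostname and token shown are placeholders, not values from any real cluster:

```shell
# Install k3s on the first Raspberry Pi; it becomes the cluster's
# server (control plane) and can also run workloads.
curl -sfL https://get.k3s.io | sh -

# On the server, read the join token that agent nodes will need.
sudo cat /var/lib/rancher/k3s/server/node-token

# On each additional Pi, run the same installer with the server's
# URL and token set, and it joins the cluster as an agent.
# Replace the hostname and token with your server's actual values.
curl -sfL https://get.k3s.io | \
    K3S_URL=https://my-pi-server:6443 \
    K3S_TOKEN=<token-from-server> sh -

# Back on the server, confirm the nodes have registered.
sudo k3s kubectl get nodes
```

From there, the bundled kubectl works like it does on any Kubernetes cluster, which is exactly what makes these little edge boxes feel like private clouds.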
Pairing a private and public cloud meant…well…you had to use a private cloud. Purpose-built private clouds fell so far behind in features and functionality that enterprises are moving away from them in 2020, whether they are already deployed or not.

If you look at the future of this architecture, it's really going to be many public cloud brands (as many as five) and many instances of edge computing systems that are functionally private clouds. This many-to-many architecture may be a bit of a challenge to operationalize, but it will provide the best path to solving many problems of local and remote processing.

I'm in. How about you?