Cloud-native apps built on Kubernetes can run anywhere. Now, with the Open Service Broker, they can also use services hosted in public clouds such as Azure.

Back in the early 2000s, while working as an architect at an IT consulting company, I became fascinated by the promise of service-oriented architectures. Taking an API-first approach to application development made a lot of sense to me, as did the idea of using a message- and event-driven approach to application integration. But that dream was lost in a maze of ever more complex standards. SOAP's relatively simple take on remote procedure calls vanished as a growing family of WS-* protocols added more and more features.

It's not surprising, then, that I find much of what's happening in the world of cloud-native platforms familiar. Today, we're using many of the same concepts to build microservice architectures on top of platforms like Kubernetes. As with SOAP, the underlying concept is an open set of tools that can connect applications and services: within one public cloud, from on-premises systems to a public cloud, and from cloud to cloud. It's that cross-cloud option that's most interesting: Each of the three big public cloud providers does different things well, so why not build your applications around the best of Azure, AWS, and Google Cloud Platform?

Introducing the Open Service Broker

One of the key technologies enabling this cross-cloud world is the open service broker. Building on the SOA concept of the service broker, the Open Service Broker API provides a way to read a platform's list of available services, automate the process of subscribing to a service, provision it, and connect it to an application. It can also handle the reverse: When you no longer want to use a service, it removes the connection from your application instance and deprovisions the service.
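Under the hood, that lifecycle maps onto a small REST API. As a rough sketch against a generic broker (the endpoint paths and version header follow the Open Service Broker API specification; the broker URL, credentials, and IDs here are all placeholders you'd take from your own broker's catalog):

```shell
# The catalog: every service and plan the broker offers
curl -u "$OSB_USER:$OSB_PASS" -H "X-Broker-API-Version: 2.13" \
  "$OSB_URL/v2/catalog"

# Provision a service instance, using IDs from the catalog response
curl -u "$OSB_USER:$OSB_PASS" -H "X-Broker-API-Version: 2.13" \
  -H "Content-Type: application/json" \
  -X PUT "$OSB_URL/v2/service_instances/my-instance?accepts_incomplete=true" \
  -d '{"service_id": "<from catalog>", "plan_id": "<from catalog>"}'

# Bind the instance to an application; the response carries credentials
curl -u "$OSB_USER:$OSB_PASS" -H "X-Broker-API-Version: 2.13" \
  -H "Content-Type: application/json" \
  -X PUT "$OSB_URL/v2/service_instances/my-instance/service_bindings/my-binding" \
  -d '{"service_id": "<from catalog>", "plan_id": "<from catalog>"}'

# The reverse: unbind, then deprovision
curl -u "$OSB_USER:$OSB_PASS" -H "X-Broker-API-Version: 2.13" \
  -X DELETE "$OSB_URL/v2/service_instances/my-instance/service_bindings/my-binding?service_id=<id>&plan_id=<id>"
curl -u "$OSB_USER:$OSB_PASS" -H "X-Broker-API-Version: 2.13" \
  -X DELETE "$OSB_URL/v2/service_instances/my-instance?service_id=<id>&plan_id=<id>&accepts_incomplete=true"
```

In practice you rarely call these endpoints by hand; the platform's service catalog does it for you, which is exactly what the implementations discussed below provide.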
Developed by a team drawn from several cloud-native platform providers, including Pivotal and Google, the Open Service Broker API has implementations for common platforms like Cloud Foundry, Kubernetes, and OpenShift. Microsoft has been developing its own implementation of the Open Service Broker (OSB), with support for a selection of key Azure services, including Cosmos DB, Azure SQL, Azure Container Instances, and the Azure Service Bus.

OSB comes to Azure

Available on GitHub, the Open Service Broker for Azure (OSBA) installs on any platform that supports the Open Service Broker API, running anywhere. That's a big advantage for developers wanting to take advantage of tools like Cosmos DB from applications running on AWS's Kubernetes implementation or from an on-premises Cloud Foundry. It replaces Azure's existing service brokers with one common tool that's developed in the open, rather than inside Microsoft.

Published under an MIT license, OSBA is an active project, with more than 340 commits and eight releases to date. The code is still under development; it's alpha code that's close to usable in production, but there could be breaking changes between releases.

Getting the Open Service Broker for Azure working is easy enough: The project has a series of quick-start documents to help bootstrap your projects. These samples include working with a local Minikube test instance, a Cloud Foundry installation, and AWS Kubernetes clusters, as well as with Microsoft's own Azure Container Instances.

Microsoft's OSBA builds on work done by the Deis team, especially the Helm package manager. So you'll need to start with Helm installed on your Kubernetes cluster, ready to install the service catalog and OSBA.

Using OSBA to manage service instances

Once you've installed OSBA, you can use the Kubernetes command-line tools to add new service instances. One important tool is the Azure CLI; this gives you access to Azure resources from your computer, with support for MacOS, Windows, and Linux.
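Before turning to the CLI tooling, the Helm bootstrap step mentioned above is worth sketching. Assuming a working Helm client on the cluster, it looks roughly like this (chart repository locations and value names as in the OSBA quick-start documents of the time; the service principal credentials are placeholders you supply):

```shell
# Install the Kubernetes service catalog from its Helm chart
helm repo add svc-cat https://svc-catalog-charts.storage.googleapis.com
helm install svc-cat/catalog --name catalog --namespace catalog

# Install the Open Service Broker for Azure, passing the Azure
# service principal credentials it uses to provision services
helm repo add azure https://kubernetescharts.blob.core.windows.net/azure
helm install azure/open-service-broker-azure --name osba --namespace osba \
  --set azure.subscriptionId=$AZURE_SUBSCRIPTION_ID \
  --set azure.tenantId=$AZURE_TENANT_ID \
  --set azure.clientId=$AZURE_CLIENT_ID \
  --set azure.clientSecret=$AZURE_CLIENT_SECRET
```

Once both charts are running, the cluster's service catalog can see everything OSBA brokers, and the Azure services appear alongside any other catalog entries.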
Once installed, you can use the CLI to collect the information you'll need to work with OSBA, starting by logging in to Azure and listing available resources. You can simplify working with your tools by creating environment variables for any required login details and keys needed to provision Azure services, making it easier to automate operations without storing Azure login details publicly. Once you've got this information, you can manage OSBA services running on Azure or check that services provisioned from elsewhere are set up and running.

With command-line access to Kubernetes, you can provision your Azure services directly from the service catalog before binding them to your application. Don't forget that the process is asynchronous and can take some time, so any automation will need to check for completion before deploying and starting applications. A Kubernetes secret stores connection data for your service, ready for use in an application. Services can be deprovisioned the same way, first unbinding and then deprovisioning.

The same processes work across public and private cloud platforms, giving you a common environment for working with Azure services no matter where your code is running. Cloud portability is an important requirement for modern applications; using OSBA to provision access to Azure services from anywhere goes a long way toward fulfilling that promise, making Microsoft's cloud platform more accessible.

Getting your service APIs right

While the Azure implementation of Open Service Broker is clearly for use with Azure services, there's nothing to stop you using an installation of the general-purpose OSB with your own services. That does mean you'll need to think about how you'll implement your own APIs, and how you'll manage them.
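As a concrete sketch of that provision-bind-unbind-deprovision cycle, using the service catalog's svcat CLI (the class and plan names reflect OSBA's MySQL offering as it appeared in its catalog; the instance names, location, and resource group are placeholders):

```shell
# Log in and capture the subscription details in environment
# variables, rather than hard-coding credentials in scripts
az login
export AZURE_SUBSCRIPTION_ID=$(az account show --query id --output tsv)

# Provision an Azure service directly from the service catalog
svcat provision example-mysql \
  --class azure-mysql --plan basic50 \
  --param location=eastus --param resourceGroup=demo

# Provisioning is asynchronous: poll until the instance is Ready
svcat get instance example-mysql

# Bind it; connection details land in a Kubernetes secret
svcat bind example-mysql --name example-mysql-binding

# Tear down in reverse order: unbind, then deprovision
svcat unbind example-mysql
svcat deprovision example-mysql
```

The binding step is what makes the service usable from an application: the resulting secret holds the hostname, credentials, and connection string, ready to mount or inject as environment variables.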
You can include OSBA calls in Kubernetes manifests or in Helm charts, so a single command line can deploy an application from the general service catalog, provision supporting services, and then launch the application. That way, an application that needs MySQL support can run on Azure's MySQL service.

Managing your own service APIs is a big issue for any modern application, because it's not only a matter of application design; it's also one of application life cycles and lifespan. You're no longer writing code for yourself; you're writing it for every developer who's going to use your service. You need to think about API design and development, choosing the appropriate approach to take (RESTful, RPC, or GraphQL) and how to handle versioning and deprecation.

While every API has its own unique use case, once you make it public your role changes: You're no longer just a developer, you're also a caretaker. Publishing services for use with Open Service Broker means you're now committed to working on someone else's timetable. As Okta's Keith Casey points out, "Developers want to do something useful and then go home," so your APIs need to be rock-solid and ready to go before you make them available through service catalogs and tools like the Open Service Broker.
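The manifest-driven approach described above can be sketched with the service catalog's declarative resources, applied alongside your application's own manifests (API version and kinds as in the service catalog's v1beta1 releases; the class, plan, and names are illustrative placeholders):

```shell
kubectl apply -f - <<'EOF'
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: example-mysql
spec:
  clusterServiceClassExternalName: azure-mysql
  clusterServicePlanExternalName: basic50
  parameters:
    location: eastus
    resourceGroup: demo
---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: example-mysql-binding
spec:
  instanceRef:
    name: example-mysql
  secretName: example-mysql-secret
EOF
```

Packaged into a Helm chart with the application's deployment, a single install command brings up the code and the Azure services it depends on together.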