By Simon Bisson, Contributor

Getting started with Azure Remote Rendering

Analysis | Apr 28, 2020 | 7 mins
Cloud Computing, Microsoft Azure, Software Development

Deliver complex 3D images to HoloLens 2 using GPUs in the public cloud

Microsoft HoloLens 2
Credit: Adam Patrick Murray / IDG

Microsoft’s mixed reality HoloLens 2 headset is now shipping, offering improved image resolution and an increased field of view. It’s an interesting device, built on ARM hardware rather than Intel for improved battery life and focused on front-line workers using augmented reality to overlay information on the real world.

What HoloLens 2 can do is amazing, but what it can’t do might be the more interesting aspect of the platform and of the capabilities we expect from the edge of the network. We’re used to the high-end graphical capabilities of modern PCs, which can render 3D images on the fly with near-photographic quality. With much of HoloLens 2’s compute capability dedicated to building a 3D map of the world around the wearer, there’s little processing left to generate complex 3D scenes on the device, especially as those scenes must track the user’s current viewpoint.

With viewpoints that can be anywhere in the 3D space of an image, we need a way to quickly render and deliver environments to the device. The device can then overlay them on the actual environment, building the expected view and displaying it through HoloLens 2’s MEMS-based (microelectromechanical systems) holographic lenses as a blended mixed reality.

Rendering in the cloud

One option is to take advantage of cloud-hosted resources to build those renders, using the GPU (graphics processing unit) capabilities available in Azure. Location and orientation data can be delivered to an Azure application, which can then use an NV-series VM to build a visualization and deliver it to the edge device for display using standard model formats.

Conceptually it’s relatively easy to build such a service, but there’s little point in reinventing the wheel and incurring infrastructure and network costs when Microsoft makes it available as an Azure API. Currently in public preview, Azure Remote Rendering will render your 3D content for you, streaming it to devices as high-fidelity 3D images. It’s even possible to blend those images with on-device content, giving you a flexible way to add your own application UI to cloud-generated content. You’re not fixed to any one UI model either, so applications that use remote-rendered content can keep their own look and feel.

Azure’s cloud GPU architecture shares GPUs between application instances, based on their needs. Apps that don’t need much GPU can use a fraction of the host hardware’s GPU, and those that need a lot of graphics capability can use multiple GPUs to get the best performance while still delivering a single 3D image to your device. The preview service charges per hour: standard rendering is $3 per hour, and premium rendering for complex scenes is $12 per hour. You get 100 free conversions a month, with additional conversions charged at $0.75 per asset. Since these are preview prices, expect the final service to be more expensive.
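To put those numbers in context, here’s a rough back-of-the-envelope cost estimate in Python. The rates are the preview prices quoted above; the hours and conversion counts are made-up example figures.

```python
# Back-of-the-envelope monthly cost estimate using the preview prices above.
# The usage figures below are purely illustrative.

STANDARD_RATE = 3.00     # USD per hour, standard rendering
PREMIUM_RATE = 12.00     # USD per hour, premium rendering for complex scenes
FREE_CONVERSIONS = 100   # free asset conversions per month
CONVERSION_RATE = 0.75   # USD per asset beyond the free allowance

def monthly_cost(standard_hours: float, premium_hours: float, conversions: int) -> float:
    rendering = standard_hours * STANDARD_RATE + premium_hours * PREMIUM_RATE
    billable_conversions = max(0, conversions - FREE_CONVERSIONS)
    return rendering + billable_conversions * CONVERSION_RATE

# Example: 40 hours of standard sessions, 10 hours of premium sessions,
# and 120 model conversions in a month.
print(monthly_cost(40, 10, 120))  # 40*3 + 10*12 + 20*0.75 = 255.0
```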

Introducing Azure Remote Rendering

The architecture and workflow for Azure Remote Rendering are relatively simple. At its heart is a cloud-hosted renderer that takes model data from a design workstation. Once a model is uploaded, a client API running on the target device uses a scene graph to predict where the user’s viewpoint is going to be and sends that data to the cloud service, which farms the rendering work out across Azure-hosted GPUs.

Spinning up a rendering server for the first time can take a while, so your code needs to be aware of the session status to prevent user confusion. As part of the session configuration, set a maximum lease time so sessions are automatically cleared if an application crashes; in normal circumstances your application will close a session cleanly when it exits. There is also an option to reuse sessions to reduce startup delays: keep a small pool of sessions running, connect to an available session when you launch an application, and load a new model once you’re connected.
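Here is a minimal Python sketch of that session lifecycle, covering the lease cap, the wait while a server spins up, and a small pool of warm sessions. The RenderingService class and its method names are hypothetical stand-ins, not the actual Azure Remote Rendering SDK.

```python
# A minimal sketch of the session lifecycle described above: create a session
# with a maximum lease time so a crashed app can't leave GPUs running, wait for
# it to become ready, and keep a small pool of warm sessions to avoid startup
# delays. RenderingService is a hypothetical stand-in, not the real Azure SDK.

import time
import uuid
from collections import deque

class RenderingService:
    """Hypothetical client interface; swap in the actual Azure Remote Rendering SDK."""
    def create_session(self, session_id: str, max_lease_minutes: int) -> None: ...
    def session_status(self, session_id: str) -> str: ...  # e.g. "Starting", "Ready", "Error"
    def stop_session(self, session_id: str) -> None: ...

class SessionPool:
    def __init__(self, service: RenderingService, max_lease_minutes: int = 30):
        self.service = service
        self.max_lease_minutes = max_lease_minutes
        self.warm = deque()  # IDs of sessions that are already running

    def acquire(self) -> str:
        """Reuse a warm session if one is available, otherwise start a new one."""
        if self.warm:
            return self.warm.popleft()
        session_id = str(uuid.uuid4())
        # The lease cap means an orphaned session is cleared automatically.
        self.service.create_session(session_id, self.max_lease_minutes)
        while self.service.session_status(session_id) == "Starting":
            time.sleep(5)  # spinning up a server takes time; surface this in the UI
        return session_id

    def release(self, session_id: str, keep_warm: bool = True) -> None:
        """Hand a session back to the pool, or stop it cleanly on app exit."""
        if keep_warm:
            self.warm.append(session_id)
        else:
            self.service.stop_session(session_id)
```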

Once the service renders the scene, the output from all the GPUs is combined into a single image, which is then streamed down to the client device. The client merges the image with its local UI and content before displaying it to the user. Once a 3D image is downloaded, you can start interacting with it, using familiar HoloLens development tools and environments to build Azure Remote Rendering client apps.

Building your first Azure Remote Rendering app

Models are first built on a development PC. You can use tools such as Unity to build client-side applications that work with the Azure Remote Rendering API, working inside familiar development tools to create scenes that will host your cloud-rendered content. Microsoft supports testing on Windows PCs, so you can try out application code before deploying to a HoloLens. Currently these are the only supported endpoints for the service.

If you’re using Unity, start by adding the service API endpoints and any code dependencies to your application manifest. This includes choosing a rendering pipeline, along with scripts to set up the connection to Azure Remote Rendering, initialize the Unity remote rendering client, and connect it to the scene camera.

Azure Remote Rendering accounts should be connected to Azure Storage, which lets you upload models and have them ready for use. Models are stored as entity graphs, converted from common 3D file formats. Upload your existing model to an Azure blob, then call the conversion API to build the remote rendering model and store it in another blob, ready for your application to load.
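A rough Python sketch of that workflow might look like the following. The blob upload uses the azure-storage-blob package; the conversion step is left as a hypothetical placeholder, since the details of the conversion API aren’t covered here, and the container and file names are just examples.

```python
# A rough sketch of the conversion workflow: upload a source model to an input
# blob container, then ask the service to convert it. The upload uses the
# azure-storage-blob package; the conversion call is a hypothetical placeholder,
# and the container and file names are illustrative.

from azure.storage.blob import BlobServiceClient

def upload_model(connection_string: str, container: str, local_path: str, blob_name: str) -> None:
    # Push the source asset (for example a glTF or FBX file) into the input container.
    service = BlobServiceClient.from_connection_string(connection_string)
    blob = service.get_blob_client(container=container, blob=blob_name)
    with open(local_path, "rb") as model_file:
        blob.upload_blob(model_file, overwrite=True)

def convert_model(input_container: str, input_blob: str, output_container: str) -> str:
    # Placeholder: point the Azure Remote Rendering conversion API at the uploaded
    # blob and an output container, then poll until the converted model is ready.
    raise NotImplementedError("call the conversion API here and return the output blob name")

# Example (illustrative values):
# upload_model("<storage-connection-string>", "arr-input-models", "engine.glb", "engine.glb")
# converted = convert_model("arr-input-models", "engine.glb", "arr-converted-models")
```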

Remote Rendering and the new edge network

Azure Remote Rendering isn’t perfect; you’re always going to be at the mercy of the network. That requires designing for latency, giving users cues to avoid dramatic head movements, and handling renders that are calculated while previous frames are delivered. Getting that right can take some time, and user actions can always disrupt even the most carefully designed experience. It’s important to explain to users what could happen and how to avoid significant rendering issues.
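One common way to hide some of that latency is to request frames for where the viewpoint is predicted to be, rather than where it is right now. The sketch below is a deliberately simplified illustration of that idea, not the service’s actual algorithm; real pipelines also reproject the received frame against the latest pose on the device.

```python
# A much-simplified illustration of designing for latency: request the remote
# frame for where the head is predicted to be one round trip from now, rather
# than where it is when the request is sent.

from dataclasses import dataclass

@dataclass
class Pose:
    yaw: float      # degrees
    pitch: float    # degrees

@dataclass
class HeadState:
    pose: Pose
    yaw_rate: float    # degrees per second
    pitch_rate: float  # degrees per second

def predicted_pose(head: HeadState, round_trip_s: float) -> Pose:
    """Extrapolate the head pose forward by the expected network round trip."""
    return Pose(
        yaw=head.pose.yaw + head.yaw_rate * round_trip_s,
        pitch=head.pose.pitch + head.pitch_rate * round_trip_s,
    )

# Example: turning at 30 deg/s with a 60 ms round trip means requesting a view
# about 1.8 degrees ahead of the current heading.
print(predicted_pose(HeadState(Pose(0.0, 0.0), 30.0, 0.0), 0.060))
```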

It’ll be interesting to see how a service like Azure Remote Rendering adapts to running on a future generation of edge content delivery servers. Putting a rendering-optimized Project Olympus server with an array of GPUs at a 5G base station for a future cellular HoloLens could allow much more dynamic rendering with very little latency and a high-bandwidth connection to the device.

Azure Remote Rendering is an early example of what we might think of as “packaged cloud compute services,” delivering targeted compute capability to edge devices. Whereas HoloLens 2 uses Wi-Fi connectivity, it’s easy to imagine a HoloLens 3 that uses 5G networks instead, a scenario that’s tailor-made for a combination of edge devices and cloud compute.

Edge devices will be optimized around power constraints, either for mobile devices that need to operate for much of a working day or for small devices in the home or office. They won’t have the compute capabilities to support the workloads we expect them to deliver. With a high-bandwidth pipe to a cloud compute facility (either a hyperscale provider like Azure or a close-to-the-edge content delivery network), we can treat edge devices as a rendering surface for cloud-delivered content, providing the cloud with a sensor platform that can define the compute requirements.
