simon_bisson
Contributor

Use DirectML to add machine learning to C code

analysis
Feb 10, 2021 | 6 mins
C Language, GPUs, Machine Learning

Microsoft provides new tools for working with machine learning on GPUs.


The modern GPU is more than a graphics device. Technologies such as the open-standard OpenCL and Nvidia’s CUDA turn the many small processors in a GPU into a parallel computing fabric, allowing desktop PCs to complete tasks that used to be the sole purview of supercomputers. Those same GPUs are also capable of supporting many modern machine learning tasks, using GPU compute to build neural networks and to support model building, data-parallel analytics, and other processing tasks.

Microsoft has been investing in simplifying GPU programming for a long time now, starting with its DirectX GPU tools, initially via the Direct3D graphics APIs and later extending them to GPU compute with DirectCompute. Recent developments have included tools to map OpenGL calls to Direct3D, related to work building a graphical layer onto the WSL 2 Linux virtual machine system bundled with Windows 10. Although they make it easier to work with hardware, these remain low-level programming tools, using C++ to access hardware features.

Introducing DirectML

They’ve recently been joined by a new member of the DirectX family of GPU APIs: DirectML. It underpins much of the work done by the higher-level WinML, providing high-performance machine learning primitives that can be used in your custom code or through Microsoft’s own libraries. Working at this level used to require Direct3D metacommands to access device-specific machine learning features through shader operators; DirectML wraps those in a set of standard abstractions that let the same code run on GPUs from different vendors.

DirectML has been popular; it’s made it a lot easier for game developers to add machine learning features to their code, and it’s supported scientific computing applications that have been using the GPU-compute DirectCompute APIs. Now it has simplified bringing GPU-powered ML frameworks to Windows. But it’s not particularly practical for everyday programming, despite removing hardware-specific requirements. That’s led Microsoft to develop a set of stand-alone DirectML APIs, wrapped in a single NuGet package that supports both Win32 and UWP, as well as code running inside the WSL Linux environment.

WSL is an interesting choice, as it’s an increasingly popular tool for building and testing applications intended for use in Linux cloud VMs or containers. Microsoft has used DirectML in WSL as part of its project to bring the popular TensorFlow machine learning framework to Windows as an open-source project running on Win32 and in WSL. By exposing DirectML to WSL, hardware vendors don’t need to provide separate drivers for WSL and for Windows; the DirectML API passes calls to DirectX drivers running in Windows, while still appearing as a Linux device to your code.

Support for DirectML ensures that it’s easy to move TensorFlow models between PCs and servers running on-premises and on Azure. Microsoft’s implementation of TensorFlow runs on a DirectML-based runtime that is exposed as standard TensorFlow interfaces and classes, so you can simply install the framework from GitHub and bring your existing code and models to it.

Using DirectML in your code

The new package, Microsoft.AI.DirectML, is designed to work with a range of different tools, mainly as an inferencing engine for your applications. Like the stand-alone API and library, it’s used by the Lobe ML labeling and model development tool, and it has its own low-level ONNX (Open Neural Network Exchange) implementation as well as support for TensorFlow. Specific implementations have their own APIs and SDKs but are built on the DirectML NuGet package.

You can use DirectML for both training and inferencing, but in practice, you’re most likely to use it with existing models. That way you can take advantage of dedicated training hardware, either in your own data center or by using Azure’s large GPU-based instances.

To get started with DirectML and Microsoft.AI.DirectML, you must be using DirectX 12-compatible hardware. Most modern GPUs support DirectX 12, so you shouldn’t have too much trouble finding a GPU that fits your budget, with supported hardware from Intel, Nvidia, and AMD. ARM developers will be able to use the Qualcomm Adreno 600 series, which is supported by Windows on ARM. Once you’ve got supported hardware, you can set up a development environment using the Windows 10 SDK, adding the DirectML libraries from NuGet into your ML projects.

The new, redistributable library is a major upgrade to the previous DirectML releases. As well as making it easier to include DirectML code in your applications, it adds 44 new operators, defined as a new feature level. It’s important to note that by separating it out from the DirectX SDKs, you’re now able to standardize on a single version of the platform, ensuring that changes to DirectX and Windows updates don’t affect your applications.

Breaking the monoliths

Microsoft is doing a lot of work to split its monolithic SDKs apart, allowing you to choose when and how to update them (and, for example, to have applications with two or more different versions of DirectML run on the same system at the same time). Microsoft is also allowing updates outside of Windows 10’s semiannual schedule. This approach isn’t limited to DirectML; it’s the basis of the work on WinUI 3.0 and Project Reunion, which do the same for UI components and, eventually, for much of the Windows SDK.

There’s plenty of sample C++ code in the DirectML GitHub repository, starting with basic Hello World code. Even with the new library, you’ll still need to know how to construct the appropriate operators for the underlying Direct3D and DirectX shaders and pipelines, even when working with DirectML data types. There’s a lot to learn if you’re planning to work in pure DirectML. In practice you’re more likely to use the TensorFlow or ONNX options so you can work at a higher level.

There will always be cases where resource limits force you to work as low-level as possible: for example, building machine learning models into cameras, or using machine learning in complex rendering applications to upscale images or clean noise from video feeds, where performance is essential. It’s important for Microsoft and for developers that fundamental building blocks like DirectML are available in ways that make it easy to underpin higher-level tools.

DirectML is an important bridge between high-level machine learning tools and building machine learning in OpenCL or CUDA. Like the rest of DirectX, it’s a powerful but complex tool that requires care. Microsoft’s decision to use it as the foundation of its Windows ML tools is sensible, helping you choose the right machine learning tools for your projects, whether you need raw speed or whether you want all your developers to build models into their code. Options are good, and Microsoft’s tiered approach to machine learning application development ticks all the right boxes.


Author of InfoWorld's Enterprise Microsoft blog, Simon Bisson prefers to think of "career" as a verb rather than a noun, having worked in academic and telecoms research, as well as having been the CTO of a startup, running the technical side of UK Online (the first national ISP with content as well as connections), before moving into consultancy and technology strategy. He’s built plenty of large-scale web applications, designed architectures for multi-terabyte online image stores, implemented B2B information hubs, and come up with next-generation mobile network architectures and knowledge management solutions. In between doing all that, he’s been a freelance journalist since the early days of the web and writes about everything from enterprise architecture down to gadgets.
