simon_bisson
Contributor

Simplify machine learning with Azure Applied AI Services

analysis
Jun 23, 2021 · 7 mins
Artificial Intelligence, Cloud Computing, Data Science

Microsoft is wrapping its Cognitive Services machine learning platform as business-focused services.


Coming to grips with machine learning needn’t require vast amounts of labeled data, a team of data scientists, and a lot of compute time. The state of the art in modern artificial intelligence has reached a point where there are now models that are sufficiently general purpose (within their own domains, of course) that they can be dropped into your applications without additional training and customization.

We’ve seen some of this with the evolution from Project Adam to Azure Cognitive Services. Now Microsoft is taking the next step, using that foundation to deliver a set of machine learning models that provide assistance with common tasks: Azure Applied AI Services. We’ve already seen some of this with the Power Platform’s new document automation tool in Power Automate. Here a prebuilt model extracts information from documents, storing it for use in other applications, going from human-readable to machine-readable with no code.

Abstracting Azure Cognitive Services

By blending the underlying Cognitive Services with prebuilt business logic, Microsoft is now adding similar features to Azure, providing turnkey APIs for specific machine learning operations. Branded as Azure Applied AI Services, these bundle Cognitive Services with new features that simplify building them into your code. Where Cognitive Services offer APIs that have broad use in many scenarios, Applied AI Services have a task focus, so you have less work to do building code around them or constructing data pipelines.

The first batch of Applied AI Services has now been released and includes Azure Bot Service, Azure Form Recognizer, Azure Cognitive Search, Azure Metrics Advisor, Azure Video Analyzer, and Azure Immersive Reader. Some are familiar, some are new, and some update existing services. All these models can also be integrated into Azure Machine Learning, so if you do have data scientists on your development team, they can add additional training to improve the model to more accurately fit your data.

Azure AI Services in detail: Metrics Advisor

One of the more interesting services is Azure Metrics Advisor. All businesses depend on data, with many using time-series data to determine various metrics about their business. Those metrics might relate to a business process or be a stream of data from a machine or another piece of equipment. Machine learning tools can process that data, looking for anomalies that can trigger a response, delivering alerts to the right person or starting a preventive maintenance program.

Applications built using a tool like this let you take advantage of techniques that have been developed over years to provide essential alerts: monitoring aircraft engines, keeping chilled medicines on the road, or detecting bugs in code. There’s a lot of value here. An appropriate warning could save millions of dollars—and lives.

You can connect Metrics Advisor to many different data stores, and it will automatically choose the most appropriate model for your data. It’s a similar approach to that used by Azure Machine Learning’s automated AI service. You have the option of tuning the model to work with your data. Finally, alerts can be delivered through several different channels, including email and web hooks, as well as support for Teams and for Azure DevOps. The data delivered by Metrics Advisor can be used for failure analysis, as it can collate multiple anomalies in the data into a diagnostic tree. This approach helps deliver explanations for alerts, using a metrics graph to show all the data for an incident.

Setting up Metrics Advisor

Microsoft provides a web-based portal to help configure the service, using an Azure subscription to deploy Metrics Advisor to a resource group. You can use a free trial to get started, and as the service is in preview, it’s currently free with final pricing yet to be announced. Setting up the service can take some time, so be prepared for a wait before you can use your new portal.

First, connect to your data sources. Microsoft provides tools to manage credentials so you can interact with sources securely and keep credentials out of your code. There are plenty of options for data sources, including structured and unstructured storage like Azure SQL, Azure Blob Storage, Cosmos DB, and MongoDB. Dedicated time-series sources include Azure Log Analytics, Azure Application Insights, and InfluxDB.
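The portal manages data source credentials for you, but any code of your own that calls the service should follow the same keep-secrets-out-of-code principle. A minimal sketch in Python, assuming two hypothetical environment variable names for the endpoint and key:

```python
import os

# Hypothetical environment variable names; the Metrics Advisor portal manages
# data source credentials itself, but code calling the service should read
# its own endpoint and key from the environment rather than hard-coding them.
def load_credentials():
    endpoint = os.environ.get("METRICS_ADVISOR_ENDPOINT")
    api_key = os.environ.get("METRICS_ADVISOR_KEY")
    if not endpoint or not api_key:
        raise RuntimeError("Metrics Advisor credentials are not configured")
    return endpoint, api_key
```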

You will need to format your data correctly: each entry must be time stamped and contain columns of numeric data. Data needs to be granular, with intervals defined as part of the connection settings. These can be as short as 60 seconds; in most cases you won’t need to sample more often than that, as you’re more likely to be working with data on the order of minutes or hours. Data can arrive in multiple columns, with different metrics and dimensions in each. For example, you can look at an engine’s temperature, RPM, and vibration: all the information that together can indicate problems.
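Those formatting rules can be sketched in Python. The rows below use illustrative column names (the engine example from above) rather than any required schema, and the helper checks that timestamps are evenly spaced at or above the 60-second minimum interval:

```python
from datetime import datetime, timedelta

# Illustrative rows: a timestamp, a dimension column (a hypothetical engine
# ID), and numeric metric columns. The names are not a required schema.
rows = [
    {"timestamp": datetime(2021, 6, 1, 12, 0) + timedelta(minutes=i),
     "engine_id": "E-42",
     "temperature": 88.0 + i,
     "rpm": 3000 + 10 * i,
     "vibration": 0.02}
    for i in range(5)
]

def check_granularity(rows, minimum=timedelta(seconds=60)):
    """True if successive timestamps are evenly spaced at intervals no
    shorter than the service's 60-second minimum."""
    gaps = [b["timestamp"] - a["timestamp"] for a, b in zip(rows, rows[1:])]
    return len(set(gaps)) == 1 and gaps[0] >= minimum
```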

With a connection in place, load your data into Metrics Advisor and select the fields it will use. This builds a schema used to test your data. The service will start to process the data, using this first ingestion to build a model. You can use the portal to visualize results and see anomalies the model found in the initial data set. These can be used to tune the configuration, setting thresholds for anomalies and adjusting the sensitivity and boundaries of the machine learning–powered anomaly detector. Anomalies can be readings that fall outside boundaries, or they can be changes in the pattern of the data: smooth data that suddenly becomes rough, or vice versa, while still staying inside the thresholds of normal operation.
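As a toy illustration of those two anomaly types, the sketch below flags readings that cross a fixed boundary and, separately, windows where the variance of otherwise in-bounds data jumps, that is, smooth data becoming rough. The window size and factor are arbitrary choices for the example, not parameters Metrics Advisor exposes:

```python
import statistics

def out_of_bounds(values, low, high):
    """Indexes of readings that fall outside fixed thresholds."""
    return [i for i, v in enumerate(values) if not (low <= v <= high)]

def pattern_shift(values, window=4, factor=9.0):
    """Indexes of windows whose variance jumps well above the first
    window's, i.e., a change in the pattern of otherwise in-bounds data."""
    baseline = statistics.pvariance(values[:window])
    flagged = []
    for start in range(window, len(values) - window + 1):
        segment = values[start:start + window]
        if statistics.pvariance(segment) > factor * max(baseline, 1e-9):
            flagged.append(start)
    return flagged
```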

Sending alerts and working with anomalies

The point of a service like this is to alert users, and you have several options for delivering those alerts. If you don’t intend to write any code, you can simply send an email to a group of users, with a distribution list or another email group managed outside the portal. If you prefer to build alerts into an application, set up an API in your code that can listen for a web hook. Metrics Advisor will then generate the appropriate API call and trigger external alerts for your application. Many Microsoft services offer support for web hooks; for example, the Power Automate no-code workflow tool and Teams both support web hook triggers.
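The receiving side of a web hook can be a small handler that parses the alert body and routes it. The payload field names below (hookId, alertType) are assumptions for illustration; the actual body depends on the hook you configure in the service:

```python
import json

# A minimal sketch of a web hook receiver. The payload fields used here
# (hookId, alertType) are illustrative assumptions, not a documented schema.
def handle_alert(raw_body):
    alert = json.loads(raw_body)
    # Route the alert: in a real application this might open an Azure DevOps
    # work item or post a message into a Teams channel.
    return {
        "hook": alert.get("hookId", "unknown"),
        "kind": alert.get("alertType", "Anomaly"),
        "received": True,
    }
```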

Once an anomaly has been detected, trusted users can work with the portal to explore the resulting diagnostic insights. The graphs can help with root-cause analysis and will allow experienced users to quickly determine what needs to be done to prevent future occurrences.

A useful feature is the ability to use Metrics Advisor with Application Insights. Errors in code can be captured and trigger anomaly reports if, say, more than a certain number of errors occur in the same part of an application. Alerts can be delivered into Azure DevOps for developers to triage and use to produce updates, well before the help desk receives a flood of complaints.
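That kind of rule, raising a report when more than a set number of errors occurs in the same part of an application, is easy to picture in code. A toy sketch with a hypothetical event format:

```python
from collections import Counter

# Toy version of the rule described above. error_events is a hypothetical
# list of (component, message) tuples captured from application telemetry.
def errors_to_report(error_events, threshold=3):
    """Components whose error count exceeds the threshold."""
    counts = Counter(component for component, _ in error_events)
    return sorted(c for c, n in counts.items() if n > threshold)
```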

You should expect to see more services like this roll out during the next few years. Machine learning isn’t easy; it requires significant expertise and large amounts of compute resources to get any value. By packaging machine learning as services, Microsoft aims to make it as simple as connecting to an API to take advantage of these technologies. It has the reach to see what its customers are doing and the resources to build and operate scenario-specific models focused on key business needs.

By turning machine learning into portal- and alert-driven experiences like Metrics Advisor, Azure should expand the reach of these tools and services, allowing more businesses to gain the benefits of machine learning without having to build and train their own custom models.

Author of InfoWorld's Enterprise Microsoft blog, Simon Bisson prefers to think of "career" as a verb rather than a noun, having worked in academic and telecoms research, as well as having been the CTO of a startup, running the technical side of UK Online (the first national ISP with content as well as connections), before moving into consultancy and technology strategy. He’s built plenty of large-scale web applications, designed architectures for multi-terabyte online image stores, implemented B2B information hubs, and come up with next-generation mobile network architectures and knowledge management solutions. In between doing all that, he’s been a freelance journalist since the early days of the web and writes about everything from enterprise architecture down to gadgets.