Model Inference on the Edge with Windows ML
Machine learning is helping people work more efficiently, and DirectML provides the performance, conformance, and low-level control developers need to enable these experiences. Frameworks like Windows ML and ONNX Runtime layer on top of DirectML, making it easy to integrate high-performance machine learning into your application. Once the domain of science fiction, scenarios like "enhancing" an image are now possible with contextually aware algorithms that fill in pixels more intelligently than traditional image processing techniques. DxO's DeepPRIME technology illustrates the use of neural networks to simultaneously denoise and demosaic digital images. DxO leverages Windows ML and DirectML to deliver the performance and quality their users expect.
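As a minimal sketch of the inference path described above, the snippet below runs an ONNX model through ONNX Runtime's DirectML execution provider. The model path and input shape are placeholders for your own model; the provider names are ONNX Runtime's real identifiers, and CPU is kept as a fallback so the code also works on machines without DirectML.

```python
# Sketch: ONNX inference via ONNX Runtime's DirectML execution provider.
# "model.onnx" and the (1, 3, 224, 224) input shape are placeholders.

def preferred_providers(available):
    """Prefer DirectML when present, and always keep CPU as a fallback."""
    order = ["DmlExecutionProvider", "CPUExecutionProvider"]
    chosen = [p for p in order if p in available]
    return chosen or ["CPUExecutionProvider"]

def run_model(model_path, input_array):
    # Imported lazily so preferred_providers() is usable even on machines
    # without the onnxruntime-directml package installed.
    import onnxruntime as ort
    session = ort.InferenceSession(
        model_path,
        providers=preferred_providers(ort.get_available_providers()),
    )
    input_name = session.get_inputs()[0].name
    return session.run(None, {input_name: input_array})
```

Passing an explicit provider list keeps behavior predictable: the session uses DirectML when the hardware and package support it, and degrades gracefully to CPU otherwise.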


Training Models with TensorFlow and Lobe
Accelerating inference is where DirectML started; supporting training workloads across the breadth of GPUs in the Windows ecosystem is the next step. In September 2020, we open sourced TensorFlow with DirectML to bring cross-vendor acceleration to the popular TensorFlow framework. This project is all about enabling rapid experimentation and training on your PC, regardless of which GPU is in your device, with a simple and painless setup process. We also know many machine learning developers depend on tools, libraries, and containerized workloads that only work with Unix-like operating systems, so DirectML runs on both Windows and the Windows Subsystem for Linux. DirectML makes it easy for you to work with the environment and GPU you already have.
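To make the training workflow concrete, here is a minimal sketch assuming the tensorflow-directml package (installed with `pip install tensorflow-directml`), which tracks TensorFlow 1.15; with it installed, DirectML device placement happens automatically with no code changes. The model architecture, data shapes, and hyperparameters below are illustrative only.

```python
# Sketch: training with tensorflow-directml (a TensorFlow 1.15 fork).
# Layer sizes and hyperparameters are illustrative, not prescriptive.

def normalize_pixels(batch):
    """Scale 8-bit pixel values into [0, 1] before training."""
    return [[value / 255.0 for value in image] for image in batch]

def train(images, labels, epochs=5):
    # Imported lazily; with tensorflow-directml installed, ops are
    # placed on the DirectML device automatically.
    import tensorflow as tf
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(images, labels, epochs=epochs)
    return model
```

Because the package is a drop-in for TensorFlow, the same script runs unchanged under Windows or the Windows Subsystem for Linux.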
Getting Started with DirectML
If you're a developer looking to benefit from hardware-accelerated machine learning through DirectML, get started today with the framework, package, or application that works best for you:
| | Windows ML | ONNX Runtime with DirectML | TensorFlow with DirectML | Lobe | DirectML |
|---|---|---|---|---|---|
| Use Case | The best developer experience for ONNX model inferencing on Windows. | Cross-platform C API for ONNX model inferencing. | Hardware-accelerated model training on any DirectX 12 GPU. | An easy-to-use app that has everything needed to train custom machine learning models. | Provides flexibility with direct access to DirectX 12 resources for high-performance frameworks and applications. |
| Documentation | MS Docs | GitHub | GitHub and MS Docs | Lobe.ai | GitHub and MS Docs |
| Distribution | Windows SDK or NuGet: Microsoft.AI.MachineLearning | NuGet: Microsoft.ML.OnnxRuntime.DirectML | PyPI package: tensorflow-directml | Application: Lobe | Windows SDK or NuGet: Microsoft.AI.DirectML |
| DirectML Support | Inference | Inference | Inference and Training | Inference and Training | Inference and Training |
The DirectML GitHub repository also includes:
· DirectMLX, a new C++ library that wraps DirectML to enable easier and simpler usage, especially for combining operators into blocks or even into complete models.
· PyDirectML, a Python binding for quickly experimenting with DirectML and the Python samples without writing a full C++ sample.
· Sample applications in both C++ and Python, including a full end-to-end implementation of real-time object detection using YOLOv4.
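To give a flavor of the detection pipeline such a sample implements, YOLO-style detectors suppress duplicate predictions by comparing boxes with intersection-over-union (IoU). Below is a framework-free sketch of IoU; it is illustrative and not the DirectML sample's actual implementation, and the (x1, y1, x2, y2) box convention is an assumption.

```python
# Sketch: intersection-over-union (IoU), the overlap measure YOLO-style
# detectors use when suppressing duplicate boxes. Boxes are assumed to
# be (x1, y1, x2, y2) tuples with x1 < x2 and y1 < y2.

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Width/height of the overlapping region, clamped at zero.
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

In non-max suppression, boxes whose IoU with a higher-confidence box exceeds a threshold (commonly around 0.5) are discarded.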
This post only scratches the surface of what's possible with machine learning and DirectML, and we're excited to see where developers take DirectML next! Stay tuned to the DirectML GitHub for new resources and future updates on the investments we're making.

Editor's note – Jan. 28, 2021: The post was updated post-publication with changes to the images.
Windows AI Platform Team, Khareem Sudlow