AMD Ryzen™ AI software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen AI powered PCs. Ryzen AI software enables applications to run on the neural processing unit (NPU) built into the AMD XDNA™ architecture, the first dedicated AI processing silicon on a Windows x86 processor, and also supports the integrated GPU (iGPU).
Developing AI applications for Ryzen AI can be summarized in three steps:
Start with a Pre-trained Model
Use a pre-trained model in PyTorch or TensorFlow as your starting point. Then convert your model to the ONNX format, which is compatible with the Ryzen AI workflow.
Quantization
Quantize your model by converting its parameters from floating point to lower-precision representations, such as 16-bit or 8-bit integers. The Vitis™ AI Quantizer for ONNX provides an easy-to-use Post Training Quantization (PTQ) flow for this purpose.
Deploy the Model
After quantization, your model is ready to be deployed on the hardware. Use ONNX Runtime with its C++ or Python APIs to run the AI model. The Vitis AI Execution Provider included in ONNX Runtime offloads supported operators to the NPU, improving performance and reducing power consumption.
1.7 Release Highlights
1.6 Release Highlights
1.5 Release Highlights
1.4 Release Highlights
1.3 Release Highlights
1.2 Release Highlights
1.1 Release Highlights
1.0 Release Highlights