---
license: mit
pipeline_tag: object-detection
tags:
- RyzenAI
---
# AMD RyzenAI Demo

## YOLO Model Inference on AMD RyzenAI
This notebook demonstrates how to perform object detection with the YOLO-V8 model on a RyzenAI device using the Optimum-AMD library.
## About RyzenAI
AMD's Ryzen™ AI family of laptop processors provides users with an integrated Neural Processing Unit (NPU). This frees up the main CPU and GPU, resulting in improved performance for AI-related tasks. The Ryzen™ AI technology, built on AMD XDNA™ architecture, is purpose-built to run AI workloads efficiently and locally, offering numerous benefits for developers creating groundbreaking AI applications.
## Prerequisites
Ensure you have the following setup:
- Hardware: A laptop with an AMD Ryzen™ AI processor (which includes the integrated NPU).
- Software Configuration:
  - The RyzenAI environment should be properly configured according to the Installation and Runtime Setup guides.
  - Install the NPU Driver.
  - Install the RyzenAI SDK.
  - Install Optimum-AMD: https://huggingface.co/docs/optimum/main/en/amd/ryzenai/overview#installation
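As a quick sanity check of the setup (a minimal sketch, assuming the RyzenAI ONNX Runtime build is installed in the active environment), you can confirm that ONNX Runtime exposes the Vitis AI execution provider that RyzenAI uses for NPU offload:

```python
# Minimal environment check (an illustrative sketch, not part of the notebook):
# with the NPU driver and RyzenAI SDK installed, the ONNX Runtime build should
# expose the Vitis AI execution provider used for NPU offload.
import onnxruntime as ort

providers = ort.get_available_providers()
print(providers)

# "VitisAIExecutionProvider" should appear in the list; if it does not,
# revisit the driver and SDK installation steps above.
assert "VitisAIExecutionProvider" in providers
```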
## Demonstrations

### Demo 1: YOLO-V8 Model Inference
This demonstration shows how to perform object detection with the YOLO-V8 model on a RyzenAI device using the Optimum-AMD library.
Steps:
- Ensure that your RyzenAI environment is correctly set up as per the prerequisites mentioned above.
- Run the notebook to perform object detection using the YOLO-V8 model (a minimal sketch of this flow is shown below).
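For orientation, here is a minimal sketch of the Demo 1 flow. It assumes the `RyzenAIModelForObjectDetection` class from `optimum.amd.ryzenai`, a local Vitis AI configuration file (`vaip_config.json`), and an illustrative model ID; the exact pre/post-processing and model call live in the notebook cells, so treat the details below as assumptions rather than the notebook's exact code.

```python
# Illustrative sketch only: the class name, vaip_config argument, model ID, and
# forward call are assumptions based on the Optimum-AMD RyzenAI documentation;
# the notebook contains the exact pipeline.
import numpy as np
import requests
from PIL import Image

from optimum.amd.ryzenai import RyzenAIModelForObjectDetection

# Fetch a sample image (a COCO validation image commonly used in examples).
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# Load the YOLO-V8 ONNX model; vaip_config points the Vitis AI execution
# provider at the NPU. The model ID below is hypothetical.
model = RyzenAIModelForObjectDetection.from_pretrained(
    "amd/yolov8m",                   # hypothetical model ID
    vaip_config="vaip_config.json",  # path to your local Vitis AI config
)

# Typical YOLO-style preprocessing: resize to 640x640, scale to [0, 1], NCHW layout.
resized = image.resize((640, 640))
pixels = np.asarray(resized, dtype=np.float32) / 255.0
pixels = np.expand_dims(pixels.transpose(2, 0, 1), axis=0)

# The input name and signature depend on the exported ONNX graph; this call is
# an assumption -- see the notebook for the exact invocation and box decoding.
outputs = model(pixel_values=pixels)
print(outputs)
```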
### Demo 2: [Preview] Local LLM Inference on AMD RyzenAI
This demo is a preview and a work in progress. We are actively working on improving its performance and expect to have a more polished, more efficient version ready within a couple of weeks.
Steps:
- Follow the setup steps mentioned in the prerequisites.
- Important: To run Demo 2, you need to restart the kernel before executing the cells related to this demonstration.
- The PR adding decoder support has not been merged yet, so you need to pull the changes from this branch: https://github.com/huggingface/optimum-amd/tree/add_decoders (see the installation note below).
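One common way to pick up those changes (an assumption about your setup, not the only option) is to install Optimum-AMD directly from that branch, for example with `pip install "git+https://github.com/huggingface/optimum-amd.git@add_decoders"`; cloning the branch and installing it in editable mode works as well. After installing, restart the kernel before running the Demo 2 cells.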