AMD RyzenAI Demo

YOLO Model Inference on AMD RyzenAI

This notebook demonstrates how to perform object detection with the YOLO-V8 model on a RyzenAI device using the Optimum-AMD library.

About RyzenAI

AMD's Ryzen™ AI family of laptop processors provides users with an integrated Neural Processing Unit (NPU) that offloads AI workloads from the main CPU and GPU, resulting in improved performance for AI-related tasks. Ryzen™ AI technology, built on the AMD XDNA™ architecture, is purpose-built to run AI workloads efficiently and locally, offering numerous benefits for developers creating groundbreaking AI applications.

Prerequisites

Ensure you have the following setup:

  1. Hardware: A laptop with an AMD Ryzen™ AI processor.
  2. Software configuration: Set up the RyzenAI environment according to the Installation and Runtime Setup guides:
    • Install the NPU driver.
    • Install the RyzenAI SDK.
    • Install optimum-amd.

Installation guide: https://huggingface.co/docs/optimum/main/en/amd/ryzenai/overview#installation
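
After installing, you can sanity-check the setup from Python. The snippet below is a minimal check, assuming a standard RyzenAI SDK install, which registers the Vitis AI execution provider with ONNX Runtime.

```python
import onnxruntime as ort

# A correctly installed RyzenAI SDK registers the Vitis AI execution provider.
# If it is missing from this list, revisit the installation steps above.
providers = ort.get_available_providers()
print(providers)
assert "VitisAIExecutionProvider" in providers, "RyzenAI runtime not found"
```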

Demonstrations

Demo 1: YOLO-V8 Model Inference

This demonstration shows how to perform object detection with the YOLO-V8 model on a RyzenAI device using the Optimum-AMD library.

Steps:

  1. Ensure that your RyzenAI environment is correctly set up as per the prerequisites mentioned above.
  2. Run the notebook to perform object detection using the YOLO-V8 model; the sketch below outlines the core API calls.
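
For orientation, the core of the inference flow looks roughly like the following. This is a minimal sketch, not the notebook's exact code: the checkpoint ID amd/yolov8m and the vaip_config.json path are assumptions to adapt to your setup, and real YOLO-V8 inference needs letterbox preprocessing and non-maximum suppression, which the notebook handles with its own utilities.

```python
import numpy as np
from optimum.amd.ryzenai import RyzenAIModelForObjectDetection

# vaip_config.json ships with the RyzenAI SDK and configures the NPU runtime;
# the checkpoint ID is a placeholder for a RyzenAI-quantized YOLO-V8 model.
model = RyzenAIModelForObjectDetection.from_pretrained(
    "amd/yolov8m",
    vaip_config="vaip_config.json",
)

# YOLO-V8 expects a letterboxed 640x640 RGB tensor; a random tensor stands in
# here just to exercise the forward pass on the NPU.
pixel_values = np.random.rand(1, 3, 640, 640).astype(np.float32)
outputs = model(pixel_values=pixel_values)  # keyword must match the model's ONNX input name

# Raw predictions still require confidence filtering and non-maximum suppression
# before they become usable boxes and labels.
print(outputs)
```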

Demo 2: [Preview] Local LLM Inference on AMD RyzenAI

This demo is a preview and a work in progress. We are actively improving its performance and expect to have a more polished, efficient version ready within a few weeks.

Steps:

  1. Follow the setup steps mentioned in the prerequisites.
  2. Important: To run Demo 2, you need to restart the kernel before executing the cells related to this demonstration.
  3. The PR adding decoder support has not been merged yet, so you will need to install optimum-amd from this branch: https://github.com/huggingface/optimum-amd/tree/add_decoders (a usage sketch follows below).
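
Once the branch is installed, a generation loop might look like the sketch below. This is a sketch under assumptions, not the notebook's code: the class name RyzenAIModelForCausalLM, the checkpoint ID, and the vaip_config.json path are all placeholders; check the add_decoders branch for the actual API.

```python
from transformers import AutoTokenizer
# RyzenAIModelForCausalLM is an assumed name for the decoder class added on the branch.
from optimum.amd.ryzenai import RyzenAIModelForCausalLM

model_id = "facebook/opt-125m"  # placeholder: use a checkpoint supported by the branch
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = RyzenAIModelForCausalLM.from_pretrained(model_id, vaip_config="vaip_config.json")

inputs = tokenizer("Ryzen AI laptops can run", return_tensors="pt")
generated = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```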
