
Yi-1.5-6B-Chat ONNX models for DirectML

This repository hosts optimized ONNX builds of 01-ai/Yi-1.5-6B-Chat to accelerate inference with the DirectML execution provider for ONNX Runtime.

Usage on Windows (Intel / AMD / NVIDIA / Qualcomm)

```powershell
# Create and activate a Python environment
conda create -n onnx python=3.10
conda activate onnx

# Install Git LFS and the Hugging Face CLI, then download the int4 DirectML model
winget install -e --id GitHub.GitLFS
pip install huggingface-hub[cli]
huggingface-cli download EmbeddedLLM/01-ai_Yi-1.5-6B-Chat-onnx --include=onnx/directml/01-ai_Yi-1.5-6B-Chat-int4 --local-dir .\01-ai_Yi-1.5-6B-Chat-int4

# Install runtime dependencies and fetch the example chat script
pip install numpy==1.26.4
Invoke-WebRequest -Uri "https://raw.githubusercontent.com/microsoft/onnxruntime-genai/main/examples/python/phi3-qa.py" -OutFile "phi3-qa.py"
pip install onnxruntime-directml
pip install --pre onnxruntime-genai-directml
conda install conda-forge::vs2015_runtime

# Run the interactive chat example against the downloaded model
python phi3-qa.py -m .\01-ai_Yi-1.5-6B-Chat-int4
```
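
The phi3-qa.py script wraps a simple interactive question-answer loop around the onnxruntime-genai API. If you would rather drive the model from your own code, the sketch below shows the same token-by-token generation pattern. It is a minimal, illustrative example: the prompt template and search options are assumptions based on the Yi-1.5 chat format, and the onnxruntime-genai Python API has changed between releases, so check the version you installed before relying on it.

```python
# Minimal sketch (not from the original card): streaming generation with onnxruntime-genai
# on DirectML, mirroring the structure of phi3-qa.py. API names follow the older
# onnxruntime-genai interface used by that script and may differ in newer releases.
import onnxruntime_genai as og

model = og.Model(r".\01-ai_Yi-1.5-6B-Chat-int4")   # folder downloaded above
tokenizer = og.Tokenizer(model)
tokenizer_stream = tokenizer.create_stream()        # incremental detokenizer for streaming output

# Assumed ChatML-style template used by Yi-1.5 chat models.
prompt = "<|im_start|>user\nWhat is DirectML?<|im_end|>\n<|im_start|>assistant\n"
input_tokens = tokenizer.encode(prompt)

params = og.GeneratorParams(model)
params.set_search_options(max_length=512, temperature=0.7)
params.input_ids = input_tokens                     # newer releases use generator.append_tokens()

generator = og.Generator(model, params)
while not generator.is_done():
    generator.compute_logits()                      # dropped in newer onnxruntime-genai versions
    generator.generate_next_token()
    new_token = generator.get_next_tokens()[0]
    print(tokenizer_stream.decode(new_token), end="", flush=True)
print()
```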

What is DirectML

DirectML is a high-performance, hardware-accelerated DirectX 12 library for machine learning. It provides GPU acceleration for common machine learning tasks across a broad range of supported hardware and drivers, including all DirectX 12-capable GPUs from vendors such as AMD, Intel, NVIDIA, and Qualcomm.
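
In this setup, DirectML is exposed to ONNX Runtime through the DmlExecutionProvider supplied by the onnxruntime-directml package. As a quick sanity check (not part of the original instructions), you can confirm that the DirectML provider is available in the environment created above:

```python
# Illustrative check: confirm the DirectML execution provider is registered.
# Requires the onnxruntime-directml wheel installed above, not the CPU-only onnxruntime package.
import onnxruntime as ort

providers = ort.get_available_providers()
print(providers)  # expected to include 'DmlExecutionProvider' alongside 'CPUExecutionProvider'
assert "DmlExecutionProvider" in providers, "DirectML build of ONNX Runtime not found"
```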
