gugarosa kvaishnavi committed on
Commit
af89cc8
1 Parent(s): 57524a8

Update Phi-3 Mini-128K-Instruct ONNX model link (#7)


- Update Phi-3 Mini-128K-Instruct ONNX model link (5660570c5bfbf7f4d7a659d318a3e95a36f4796c)


Co-authored-by: Kunal Vaishnavi <kvaishnavi@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -220,7 +220,7 @@ If you want to run the model on:
 
  ## Cross Platform Support
 
- ONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized ONNX models [here](https://aka.ms/Phi3-ONNX-HF).
+ ONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-128K-Instruct ONNX model [here](https://aka.ms/phi3-mini-128k-instruct-onnx).
 
  Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs.
  Along with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile.
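For context, the optimized models linked in this change are typically run through the onnxruntime-genai package. The snippet below is a minimal sketch, not taken from this commit: it assumes the onnxruntime-genai Python API available around the time of this update (og.Model, og.Tokenizer, og.GeneratorParams, og.Generator) and a hypothetical local path to one of the downloaded model folders; check the linked model card for the exact folder names and the current API.

```python
# Minimal sketch: streaming generation with a Phi-3 Mini-128K-Instruct ONNX model
# using onnxruntime-genai. The model path below is a placeholder for a locally
# downloaded variant (e.g. a CPU int4 folder from the linked repository).
import onnxruntime_genai as og

model_dir = "phi3-mini-128k-instruct-onnx/cpu_and_mobile/cpu-int4-rtn-block-32"  # hypothetical path
model = og.Model(model_dir)
tokenizer = og.Tokenizer(model)
tokenizer_stream = tokenizer.create_stream()

# Phi-3 chat template: user turn followed by the assistant tag.
prompt = "<|user|>\nWhat is ONNX Runtime?<|end|>\n<|assistant|>\n"
input_tokens = tokenizer.encode(prompt)

params = og.GeneratorParams(model)
params.set_search_options(max_length=256)
params.input_ids = input_tokens

# Generate and print tokens as they are produced.
generator = og.Generator(model, params)
while not generator.is_done():
    generator.compute_logits()
    generator.generate_next_token()
    new_token = generator.get_next_tokens()[0]
    print(tokenizer_stream.decode(new_token), end="", flush=True)
```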