---
language:
  - zh
  - en
tags:
  - chatglm
  - glm
  - onnx
  - onnxruntime
---

# ChatGLM-6B + ONNX

This model is exported from ChatGLM-6b with int8 (u8s8) quantization and optimized for ONNXRuntime inference. The export code is available in this repo.
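The actual export pipeline lives in the repo linked above; for orientation only, ONNX Runtime's stock dynamic quantization produces the same u8s8 pattern (uint8 activations, int8 weights). A minimal sketch, with hypothetical file names:

```python
from onnxruntime.quantization import QuantType, quantize_dynamic

# Hypothetical input/output paths, for illustration only.
quantize_dynamic(
    model_input="chatglm-6b-fp32.onnx",
    model_output="chatglm-6b-u8s8.onnx",
    weight_type=QuantType.QInt8,  # int8 weights; activations are dynamically quantized to uint8
)
```

Dynamic quantization of this kind is what introduces the `MatMulInteger` and `DynamicQuantizeLinear` nodes mentioned below.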

Inference code for ONNXRuntime is uploaded with the model. Install the requirements and run `streamlit run web-ui.py` to start chatting. Because the `MatMulInteger` (u8s8 data type) and `DynamicQuantizeLinear` operators currently only have CPU implementations in ONNXRuntime, inference is CPU-only; Arm64 machines with Neon support (Apple M1/M2) should be reasonably fast.
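If you want to load the model directly instead of going through `web-ui.py`, a minimal sketch follows; the file name `model.onnx` is a placeholder for whichever `.onnx` file ships with the repo:

```python
import onnxruntime as ort

# MatMulInteger (u8s8) and DynamicQuantizeLinear only have CPU kernels,
# so pin the session to the CPU execution provider.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Inspect the expected input names before wiring up tokenization.
print([inp.name for inp in session.get_inputs()])
```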

## Usage

Clone with git-lfs:

```sh
git lfs clone https://huggingface.co/K024/ChatGLM-6b-onnx-u8s8
cd ChatGLM-6b-onnx-u8s8
pip install -r requirements.txt
streamlit run web-ui.py
```

Or use the huggingface_hub Python client library to download the repo snapshot:

```python
from huggingface_hub import snapshot_download
snapshot_download(repo_id="K024/ChatGLM-6b-onnx-u8s8", local_dir="./ChatGLM-6b-onnx-u8s8")
```
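To stay entirely in Python, the download can be followed by launching the bundled UI from the snapshot directory. A sketch, assuming the requirements are already installed:

```python
import subprocess
import sys
from huggingface_hub import snapshot_download

# snapshot_download returns the local path of the downloaded snapshot.
local_dir = snapshot_download(
    repo_id="K024/ChatGLM-6b-onnx-u8s8",
    local_dir="./ChatGLM-6b-onnx-u8s8",
)

# Launch the Streamlit UI shipped with the model
# (assumes `pip install -r requirements.txt` has already been run).
subprocess.run(
    [sys.executable, "-m", "streamlit", "run", "web-ui.py"],
    cwd=local_dir,
    check=True,
)
```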

The code is released under the MIT license.

Model weights are released under the same license as ChatGLM-6b; see MODEL LICENSE.