|
--- |
|
license: cc-by-sa-4.0 |
|
tags: |
|
- causal-lm |
|
--- |
|
|
|
# Description |
|
|
|
This is a test model for my [test script](https://github.com/kazssym/stablelm-study-2). |
|
It was exported from [stabilityai/stablelm-3b-4e1t](https://huggingface.co/stabilityai/stablelm-3b-4e1t) to ONNX with a [modified version](https://github.com/huggingface/optimum/pull/1719) of Hugging Face Optimum. |
|
The export may still have problems; use it with caution.
|
|
|
This model does not include a tokenizer. |
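Since no tokenizer ships with this export, one way to use it is to load the tokenizer from the original model repository alongside the ONNX weights. The sketch below assumes the `optimum` (with the `onnxruntime` extra) and `transformers` packages are installed; the `onnx_repo` argument is a placeholder for wherever the exported files live.

```python
def load_model_and_tokenizer(onnx_repo, tokenizer_repo="stabilityai/stablelm-3b-4e1t"):
    """Load the ONNX model and the tokenizer from the original repository.

    `onnx_repo` is a placeholder for the repository or local directory
    containing the exported ONNX files.
    """
    # Imports are kept inside the function so the sketch can be read
    # without the packages installed.
    from optimum.onnxruntime import ORTModelForCausalLM
    from transformers import AutoTokenizer

    # The original model requires trust_remote_code, as in the export command.
    tokenizer = AutoTokenizer.from_pretrained(tokenizer_repo, trust_remote_code=True)
    model = ORTModelForCausalLM.from_pretrained(onnx_repo)
    return model, tokenizer
```

This is a sketch, not a tested recipe; given the tolerance warning below, outputs should be checked against the reference model before relying on them.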
|
|
|
# Export command |
|
|
|
This model was exported with the following command: |
|
```
optimum-cli export onnx --model stabilityai/stablelm-3b-4e1t --trust-remote-code --device cpu --optimize O1 output/onnx-fp32/
```
|
|
|
Exporting requires [Transformers](https://github.com/huggingface/transformers) 4.38 or later.
|
|
|
# Output from Optimum CLI |
|
|
|
```
The ONNX export succeeded with the warning: The maximum absolute difference between the output of the reference model and the ONNX exported model is not within the set tolerance 1e-05:
- logits: max diff = 2.6553869247436523e-05.
```
|
|