ONNX Model Produces Different Output

#2
by runski - opened

I used Optimum to convert the PyTorch version of the model to ONNX. The conversion completed without any error messages, but when I ran the model with ONNX Runtime, the output tokens were different and incorrect; the decoded text looks like gibberish. Has anyone gotten an ONNX version of this model to run properly?
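For reference, the conversion and inference path I used looks roughly like this (a minimal sketch; the model ID below is a placeholder, substitute the actual Granite checkpoint):

```python
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForCausalLM

model_id = "ibm-granite/granite-3b-code-base"  # placeholder; substitute the real checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
# export=True runs the Optimum PyTorch -> ONNX conversion on the fly
model = ORTModelForCausalLM.from_pretrained(model_id, export=True)

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # decodes to gibberish
```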

IBM Granite org

Hi @runski
For ONNX, you will have to pull this PR into your HF Optimum setup: https://github.com/huggingface/transformers/pull/30031. I don't think the llama class will have the mlp_bias parameter without it.
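As a quick sanity check (just a sketch), you can verify whether the transformers version in your environment already carries that change:

```python
from transformers import LlamaConfig

# On versions predating the PR, LlamaConfig has no mlp_bias attribute,
# so the exported llama graph is built without the MLP bias weights,
# which would explain the garbled output.
print(hasattr(LlamaConfig(), "mlp_bias"))
```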

IBM Granite org

Also, keep in mind that our model uses both attention_bias and mlp_bias.
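As a sketch, both flags should show up on the checkpoint's config once support is in place (the model ID below is a placeholder):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("ibm-granite/granite-3b-code-base")  # placeholder ID
# Both flags should be present and enabled for this model; if mlp_bias is
# missing, the ONNX export will silently drop those bias weights.
print(config.attention_bias, config.mlp_bias)
```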
