Are you open to a PR adding an ONNX artifact of the model weights?

#3
by bergum - opened

For accelerated inference it would be great to add an ONNX model file, exported with

optimum-cli export onnx --task fill-mask --model naver/splade-cocondenser-ensembledistil onnx
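For context, the exported model could then be consumed through optimum's ONNX Runtime classes. A minimal sketch, assuming the fill-mask head is what downstream SPLADE inference needs (the example query string is illustrative, not from this thread):

```python
# Sketch: run the model through ONNX Runtime via optimum.
# `export=True` converts on the fly; once an ONNX artifact is merged into the
# repo, a plain from_pretrained call should pick it up instead.
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForMaskedLM

model_id = "naver/splade-cocondenser-ensembledistil"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = ORTModelForMaskedLM.from_pretrained(model_id, export=True)

inputs = tokenizer("what causes aging fast", return_tensors="pt")
logits = model(**inputs).logits  # MLM logits, from which the SPLADE sparse vector is derived
```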
NAVER LABS Europe org

Hi @bergum
Sorry for the delay -- completely missed the notification here! Sure, we are open to PRs :)
