Can you share how you converted this?

#1
by bjoernp - opened

I really don't want to redownload the whole model, and I would like to test my inference code :) Many thanks btw

Owner

I'll show you mine if you show me yours. ;)

(Sorry, couldn't resist :P)
I'll upload the code later today.

Here, maybe this could work: https://github.com/palet-global/palma.py

Great work! I am also waiting for your conversion code so I can convert other NVIDIA models.

Owner

Sorry, I left this alone for a while. You can find my hacky script here:
https://github.com/FailSpy/export-nemo-to-safetensors/blob/main/convert-nemo.py

You will almost certainly need to change the script to support models with other architectures.

Thank you!
I deeply appreciate your efforts. I will check your script.
Then I will try to convert nvidia/Llama3-70B-SteerLM-RM.
I want to use these NeMo-trained models in GGUF format.

How do you convert tokenizer.json?

@huggingbobo That's a question for @Xenova, who AFAIK did the tokenizer conversion.
