
Model inaccessible? #10
by Chris12321 - opened

Hello,
Thank you for your work.

I wanted to test this model since it is linked in this tutorial:
https://huggingface.co/blog/stackllama

First, I tried deploying it as a Hugging Face Inference Endpoint, but the config.json was missing.
For details on that problem, please see my comment in this discussion: https://huggingface.co/trl-lib/llama-7b-se-rl-peft/discussions/9

Then I tried it with the transformers library:
from transformers import AutoModel
model = AutoModel.from_pretrained("trl-lib/llama-7b-se-rl-peft")

But I got:
OSError: trl-lib/llama-se-merged is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with huggingface-cli login or by passing token=<your_token>

although I had previously logged in successfully, like this:
!huggingface-cli login --token ""

With this output:
Token will not been saved to git credential helper. Pass add_to_git_credential=True if you want to set the git credential as well.
Token is valid (permission: read).
Your token has been saved to /root/.cache/huggingface/token
Login successful

Do you have any recommendations on how best to proceed with testing this model, or is it inaccessible to the public?
Many thanks and best regards,
Chris

Hi @Chris12321,
Thanks for your interest in using this model!
This repository contains a PEFT adapter, which also explains the error you saw: transformers tried to resolve the base model referenced in the adapter config (trl-lib/llama-se-merged) rather than the adapter repo itself. To use the adapter, first install PEFT: pip install -U peft
Then load it with the following snippet:

from peft import AutoPeftModelForCausalLM
model = AutoPeftModelForCausalLM.from_pretrained("trl-lib/llama-7b-se-rl-peft")
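For a quick smoke test after loading, here is a minimal generation sketch; the tokenizer source and the prompt are assumptions on my side, not part of the snippet above (if the adapter repo ships no tokenizer files, load the tokenizer from the base model instead):

from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained("trl-lib/llama-7b-se-rl-peft")
# assumption: the adapter repo also hosts the tokenizer files
tokenizer = AutoTokenizer.from_pretrained("trl-lib/llama-7b-se-rl-peft")

inputs = tokenizer("Question: How do I merge two dicts in Python?\n\nAnswer:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))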

To deploy it, since it is a LoRA (Low-Rank Adaptation) model, I would suggest first "merging" the adapter into the base model. You can read more about what merging LoRA weights means here: https://huggingface.co/Salesforce/codegen2-7B/discussions/1#6543f4eb2996405c23882b03

from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained("trl-lib/llama-7b-se-rl-peft")
model = model.merge_and_unload()

Then push the merged model somewhere on the Hub under your namespace:

from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained("trl-lib/llama-7b-se-rl-peft")
model = model.merge_and_unload()
model.push_to_hub("xxx/my-merged-se-peft-merged-model")
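Note that model.push_to_hub uploads only the model weights and config. For the new repo to work out of the box, you'd likely want to push a tokenizer as well; a short sketch, assuming the adapter repo ships tokenizer files and reusing the placeholder name from above:

from transformers import AutoTokenizer

# assumption: tokenizer files live in the adapter repo; otherwise load from the base model
tokenizer = AutoTokenizer.from_pretrained("trl-lib/llama-7b-se-rl-peft")
tokenizer.push_to_hub("xxx/my-merged-se-peft-merged-model")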

After that, you'll be able to use your model for deployment.
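Once the merged weights are on the Hub, the repo behaves like a plain transformers checkpoint, so PEFT is no longer needed at load time. A minimal sketch, reusing the same placeholder name:

from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("xxx/my-merged-se-peft-merged-model")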

Let me know if you need any help. I can also push the merged weights to the Hub for you so that you can use the model out of the box.

Hello, yes, +1 on what Younes mentioned. PEFT models are not yet supported with Inference Endpoints.
