output gibberish

#1
by haydenhong - opened

I downloaded these 13B weights and loaded and ran the model successfully, but the outputs are gibberish. I'm not sure whether my weights are corrupted or something else is wrong. Has anyone else seen this, or gotten it working? Thanks.

OptimalScale org

Hi,
robin-delta-v2 is a delta model, which means you need to merge it with the base model (i.e., LLaMA-13B).
The merge script can be found at https://github.com/OptimalScale/LMFlow#53-reproduce-the-result

python utils/apply_delta.py \
    --base-model-path {huggingface-model-name-or-path-to-base-model} \
    --delta-path {path-to-delta-model} \
    --target-model-path {path-to-merged-model}
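
If it helps to see what the script does conceptually, here is a minimal Python sketch of a delta merge, assuming both checkpoints are ordinary Hugging Face transformers models with identical architectures. The paths are illustrative placeholders, and this is not the official apply_delta.py; use the LMFlow script for the actual merge.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("path/to/llama-13b", torch_dtype=torch.float16)
delta = AutoModelForCausalLM.from_pretrained("path/to/robin-delta-v2", torch_dtype=torch.float16)

# A delta checkpoint typically stores (finetuned - base) weights, so adding
# the delta onto the base parameters recovers the finetuned model. The
# state_dict tensors share storage with the model, so += updates it in place.
base_state = base.state_dict()
for name, param in delta.state_dict().items():
    base_state[name] += param

base.save_pretrained("path/to/merged-model")
AutoTokenizer.from_pretrained("path/to/robin-delta-v2").save_pretrained("path/to/merged-model")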

Thanks! It works.
By the way, does generation require "###" as the eos_token to terminate? Just want to bring this up: I think Vicuna v2 fixed a similar issue by re-training with "</s>" in place of "###", and I wonder if the cause here is similar.

OptimalScale org

Yes, Robin models use "###" as the eos_token to terminate.
I do not fully understand the issue/consequence of using "###" that you mentioned. Could you elaborate?
Thanks!

Is Robin-7B-v2 a base model, or a model that has learned the eos_token through SFT? I tried setting eos_token = '###' in the tokenizer, and I also used
model.generate(inputs, eos_token_id=tokenizer.eos_token_id, max_length=512)
but it does not stop at the eos_token; instead it continues until the maximum length.

OptimalScale org

Hi, it is a model after SFT; the stop token is '###'.
For inference, you can take a look at our doc: https://github.com/OptimalScale/LMFlow
Thanks
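
A minimal sketch of one workaround, for anyone hitting the same issue: '###' is usually tokenized into more than one token by the LLaMA tokenizer, so a single eos_token_id passed to generate may never fire. A custom StoppingCriteria that watches the decoded text can stop on the string instead. The model path and prompt format below are illustrative assumptions, not the official LMFlow inference code.

from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    StoppingCriteria,
    StoppingCriteriaList,
)

class StopOnString(StoppingCriteria):
    # Stop generation once a given string appears in the newly generated text.
    def __init__(self, tokenizer, stop_string, prompt_len):
        self.tokenizer = tokenizer
        self.stop_string = stop_string
        self.prompt_len = prompt_len  # skip the prompt itself when checking

    def __call__(self, input_ids, scores, **kwargs):
        new_text = self.tokenizer.decode(input_ids[0][self.prompt_len:])
        return self.stop_string in new_text

tokenizer = AutoTokenizer.from_pretrained("path/to/merged-model")  # assumed path
model = AutoModelForCausalLM.from_pretrained("path/to/merged-model")

prompt = "###Human: Who are you?###Assistant:"  # assumed prompt format
inputs = tokenizer(prompt, return_tensors="pt")
stopping = StoppingCriteriaList(
    [StopOnString(tokenizer, "###", inputs.input_ids.shape[1])]
)
output = model.generate(**inputs, max_length=512, stopping_criteria=stopping)
print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:]))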
