BOS token prepending?

#9
opened by hjlee1371

Hello, according to the Llama 3 reference implementation on GitHub, it seems we need to prepend a BOS token to the input (as with Llama 2, and as the Llama 3 chat template does), but the current version of the tokenizer does not do this. What is the correct implementation?
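A minimal way to check this yourself (a sketch; it assumes you have access to the gated meta-llama/Meta-Llama-3-8B-Instruct repo):

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
ids = tok("hi").input_ids
# If BOS were prepended, the first id would be tok.bos_token_id (<|begin_of_text|>)
print(ids, tok.bos_token_id, ids[0] == tok.bos_token_id)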

I used this:

Start: <|start_header_id|>
End: <|eot_id|>

inputs = tokenizer(["""<|start_header_id|> System: You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
User: Talk about AI and LLMs.
Assistant:"""], return_tensors="pt").to('cuda')

streamer = TextStreamer(tokenizer)

stop_token = "<|eot_id|>"
stop_token_id = tokenizer.encode(stop_token, add_special_tokens=False)[0]  # id of <|eot_id|>

_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512, do_sample=True, temperature=0.1, repetition_penalty=1.2, top_p=0.9, eos_token_id=stop_token_id)
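Note that instead of hand-writing the header tokens, you can let the tokenizer's chat template build the prompt, which prepends <|begin_of_text|> for you. A sketch, assuming the checkpoint ships a chat template and reusing tokenizer, model, streamer, and stop_token_id from above:

messages = [
    {"role": "system", "content": "You are a helpful, respectful and honest assistant."},
    {"role": "user", "content": "Talk about AI and LLMs."},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant header so the model starts answering
    return_tensors="pt",
).to("cuda")
_ = model.generate(input_ids, streamer=streamer, max_new_tokens=512, eos_token_id=stop_token_id)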

How do you import the tokenizer for this snippet?

You only need to change the api_key:

!pip install -qU transformers accelerate bitsandbytes

from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer, BitsAndBytesConfig
import torch
TOKEN = 'your_api_key'
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)

MODEL_NAME = 'meta-llama/Meta-Llama-3-8B-Instruct'
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, token=TOKEN)  # `use_auth_token` is deprecated in favor of `token`
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map='cuda:0', quantization_config=bnb_config, token=TOKEN)


inputs = tokenizer(["""<|start_header_id|> System: You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe.  Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
User: Talk about AI and LLMs.
Assistant:"""], return_tensors="pt").to('cuda')

streamer = TextStreamer(tokenizer)
stop_token = "<|eot_id|>"
stop_token_id = tokenizer.encode(stop_token, add_special_tokens=False)[0]  # id of <|eot_id|>

_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512, do_sample=True, temperature=0.1, repetition_penalty=1.2, top_p=0.9, eos_token_id=stop_token_id)
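If generation does not stop cleanly, a common pattern (a sketch, not specific to this snippet) is to pass both terminators, since the Instruct model can emit either <|eot_id|> or <|end_of_text|>:

terminators = [
    tokenizer.eos_token_id,                         # typically <|end_of_text|>
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),  # end-of-turn token
]
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512, eos_token_id=terminators)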

Same problem here:

>>> tokenizer('hi')
{'input_ids': [6151], 'attention_mask': [1]}  # BOS not prepended
>>> messages = [{"role": "user", "content": "hi"}]
>>> tokenizer.apply_chat_template(messages, tokenize=False)
'<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\nhi<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n' # BOS prepended
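If you want BOS on raw text in the meantime, you can prepend it manually; a quick sketch, not an official recommendation:

>>> ids = tokenizer(tokenizer.bos_token + 'hi', add_special_tokens=False).input_ids
>>> ids[0] == tokenizer.bos_token_id  # first id is now <|begin_of_text|>
True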

In fine-tuning, do I have to prepend the BOS token or not?

I have the same question. Do I have to add the BOS token when performing continued pre-training?
