
Pretrained LM

Training Dataset

Prompt

  • Template: prompt = f"### English: {en_text}\n### 한국어: "
  • Issue: If you use TRL's DataCollatorForCompletionOnlyLM with instruction_template and response_template, note that the tokenizer encodes "###" differently depending on context: standalone "###" becomes "▁###" (835), while "\n###" is split into a newline token followed by "##" (2277) and "#" (29937). Make sure the templates you pass are tokenized exactly as they appear inside the full prompt (see the sketch below).
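One possible workaround, shown as a sketch under the assumption that you are using TRL's DataCollatorForCompletionOnlyLM with this Llama tokenizer (verify the printed token IDs and the [2:] slice against your own tokenizer): encode the response template together with its in-context newline and pass the resulting token IDs instead of the raw string.

from transformers import LlamaTokenizer
from trl import DataCollatorForCompletionOnlyLM

tokenizer = LlamaTokenizer.from_pretrained('traintogpb/llama-2-en2ko-translator-7b-qlora-bf16-upscaled')

# Standalone, the template starts with "▁###" (835); inside the prompt, "\n### ..." is
# tokenized as a newline token followed by "##" (2277) and "#" (29937)
print(tokenizer.encode("### 한국어: ", add_special_tokens=False))
print(tokenizer.encode("\n### 한국어: ", add_special_tokens=False))

# Drop the leading SentencePiece prefix and newline tokens so the IDs match the in-prompt form;
# confirm the slice against the IDs printed above
response_template_ids = tokenizer.encode("\n### 한국어: ", add_special_tokens=False)[2:]
collator = DataCollatorForCompletionOnlyLM(response_template=response_template_ids, tokenizer=tokenizer)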

Training

  • Trained with QLoRA
    • PLM: NormalFloat 4-bit
    • Adapter: BrainFloat 16-bit
    • Adapters attached to all linear layers (around 2.2% of the parameters trainable); see the sketch after this list
  • Merged the PLM and the adapters
    • by dequantizing the NF4 PLM to BF16 before merging
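The following is a rough sketch of that setup, assuming the Hugging Face transformers / peft / bitsandbytes stack; the LoRA hyperparameters, the target-module list, and the <base-plm> / <adapter-checkpoint> placeholders are illustrative assumptions, not values taken from this card.

import torch
from transformers import LlamaForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, PeftModel, get_peft_model

# QLoRA: load the PLM in NF4 with BF16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = LlamaForCausalLM.from_pretrained("<base-plm>", quantization_config=bnb_config)

# BF16 LoRA adapters on all linear layers (roughly 2.2% of the parameters end up trainable);
# r / alpha / dropout below are placeholder values
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# After training: reload the PLM in BF16 (i.e., dequantized) and merge the adapters into it
base = LlamaForCausalLM.from_pretrained("<base-plm>", torch_dtype=torch.bfloat16)
merged = PeftModel.from_pretrained(base, "<adapter-checkpoint>").merge_and_unload()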

Usage (IMPORTANT)

from transformers import LlamaForCausalLM, LlamaTokenizer

model = LlamaForCausalLM.from_pretrained('traintogpb/llama-2-en2ko-translator-7b-qlora-bf16-upscaled')
tokenizer = LlamaTokenizer.from_pretrained('traintogpb/llama-2-en2ko-translator-7b-qlora-bf16-upscaled')
tokenizer.pad_token = "</s>"
tokenizer.pad_token_id = 2
tokenizer.eos_token = "<|endoftext|>"  # Must be different from the PAD token
tokenizer.eos_token_id = 46332
tokenizer.add_eos_token = True
tokenizer.padding_side = 'right'
tokenizer.model_max_length = 768

text = "NMIXX is the world-best female idol group, who came back with the new song 'DASH'." 
prompt = f"### English: {text}\n### 한국어: "

max_length = 768  # matches tokenizer.model_max_length set above

inputs = tokenizer(prompt, return_tensors="pt", max_length=max_length, truncation=True)
# Drop the EOS token (<|endoftext|>) that add_eos_token=True appends to the end of the prompt
inputs['input_ids'] = inputs['input_ids'][0][:-1].unsqueeze(dim=0)
inputs['attention_mask'] = inputs['attention_mask'][0][:-1].unsqueeze(dim=0)

outputs = model.generate(**inputs, max_length=max_length, eos_token_id=46332)

# Decode only the newly generated tokens, i.e. everything after the prompt
input_len = len(inputs['input_ids'].squeeze())
translated_text = tokenizer.decode(outputs[0][input_len:], skip_special_tokens=True)
print(translated_text)
  • You must remove the EOS token (<|endoftext|>) from the end of the prompt before generation; with add_eos_token = True the tokenizer appends it, which is why the last token is sliced off above.
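A quick sanity check of that point (a small sketch, assuming the tokenizer settings above):

ids = tokenizer(prompt, return_tensors="pt").input_ids[0]
print(ids[-1].item() == tokenizer.eos_token_id)  # True: the prompt ends with <|endoftext|> (46332), so it is sliced off before generate()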