Phi-2-super (SFT + cDPO)

Base Model: microsoft/phi-2

How to run inference:

import transformers
import torch

if __name__ == "__main__":
  model_name = "abacaj/phi-2-super"
  tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
  
  # load the model onto a single GPU and put it in eval mode
  model = (
      transformers.AutoModelForCausalLM.from_pretrained(
          model_name,
      )
      .to("cuda:0")
      .eval()
  )
  
  messages = [
      {"role": "user", "content": "Hello, who are you?"}
  ]
  # apply the chat template and tokenize in one step (template shown below)
  inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
  input_ids_cutoff = inputs.size(dim=1)  # prompt length, used to strip the prompt from the output
  
  with torch.no_grad():
      generated_ids = model.generate(
          input_ids=inputs,
          use_cache=True,
          max_new_tokens=512,
          temperature=0.2,
          top_p=0.95,
          do_sample=True,
          eos_token_id=tokenizer.eos_token_id,
          pad_token_id=tokenizer.pad_token_id,
      )
  
  # decode only the newly generated tokens, skipping the prompt
  completion = tokenizer.decode(
      generated_ids[0][input_ids_cutoff:],
      skip_special_tokens=True,
  )
  
  print(completion)
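Optionally, you can print tokens as they are generated instead of waiting for the full completion by passing a TextStreamer to generate. A minimal sketch reusing model, tokenizer, and inputs from the script above (the streamer is a standard transformers utility, not part of the original card):

from transformers import TextStreamer

# prints decoded tokens to stdout as they arrive; skip_prompt avoids echoing the input
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

with torch.no_grad():
    model.generate(
        input_ids=inputs,
        streamer=streamer,
        use_cache=True,
        max_new_tokens=512,
        temperature=0.2,
        top_p=0.95,
        do_sample=True,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.pad_token_id,
    )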

Chat template

The model uses the same chat template as the Mistral instruct models, with phi-2's <|endoftext|> token in place of Mistral's <s>/</s> tokens:

text = (
    "<|endoftext|>[INST] What is your favourite condiment? [/INST]"
    "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!<|endoftext|> "
    "[INST] Do you have mayonnaise recipes? [/INST]"
)

You don't need to build this string manually if you use the HF transformers tokenizer:

  messages = [
      {"role": "user", "content": "Hello, who are you?"},
      {"role": "assistant", "content": "I am ..."},
  ]
  inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
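To inspect the exact string the template renders (rather than token ids), you can ask apply_chat_template for text. An illustrative sketch; the exact spacing of the output depends on the template shipped with the tokenizer:

  prompt = tokenizer.apply_chat_template(messages, tokenize=False)
  print(prompt)
  # roughly: "<|endoftext|>[INST] Hello, who are you? [/INST]I am ...<|endoftext|> "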

MT-bench / HumanEval

(MT-bench and HumanEval score charts)


Evaluation results

  • prompt_level_loose_acc on Instruction Following Eval (LightEval): 0.272