
# Model Card for Phi 1.5 SlimOrca

Phi 1.5 finetuned on SlimOrca-Dedup. This model was trained with the goal of giving Phi 1.5 the ability to generate the EOS token and to support beam search. It can also follow custom system prompts, as shown in the example below.

## Model Details

### How to Get Started with the Model

```python
import torch
import transformers

model = transformers.AutoModelForCausalLM.from_pretrained(
    "miguelcarv/phi-1_5-slimorca",
    trust_remote_code=True
)
tokenizer = transformers.AutoTokenizer.from_pretrained("microsoft/phi-1_5")

# Build the prompt: system prompt, a blank line, then the Instruction/Output template
SYSTEM_PROMPT = "You are an AI assistant. You will be given a task. You must generate a detailed and long answer."
input_text = f"""{SYSTEM_PROMPT}

Instruction: Give me the first 5 prime numbers and explain what prime numbers are.
Output:"""

# Beam search with EOS-terminated generation, which this finetune was trained to support
with torch.no_grad():
    outputs = model.generate(
        tokenizer(input_text, return_tensors="pt")["input_ids"],
        max_length=256,
        num_beams=3,
        eos_token_id=tokenizer.eos_token_id,
    )
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
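The model expects the prompt format shown above: the system prompt, a blank line, an `Instruction:` line, and a trailing `Output:` marker. Because the finetune learned to emit the EOS token, passing `eos_token_id` lets `generate` stop on its own rather than always running to `max_length`, and `num_beams` can be raised for beam search.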

## Training Details

- Trained for one epoch on SlimOrca-Dedup
- Learning rate: 2e-5, with cosine decay to 0
- Optimizer: AdamW
- Effective batch size: 256 (mini-batch size of 4 with 64 gradient accumulation steps)
- Trained in FP32
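
For readers who want to approximate this setup, the sketch below wires the listed hyperparameters into a standard `transformers` `Trainer` run. It is a minimal sketch, not the author's training script: the dataset id `Open-Orca/SlimOrca-Dedup`, the conversation flattening in `to_text`, and the `max_length` of 1024 are assumptions; only the optimizer, learning-rate schedule, batch sizes, epoch count, and FP32 precision come from the list above.

```python
import transformers
from datasets import load_dataset

model = transformers.AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-1_5", trust_remote_code=True
)
tokenizer = transformers.AutoTokenizer.from_pretrained("microsoft/phi-1_5")
tokenizer.pad_token = tokenizer.eos_token  # phi-1_5 has no dedicated pad token

# Assumed dataset id; SlimOrca-Dedup stores ShareGPT-style "conversations"
dataset = load_dataset("Open-Orca/SlimOrca-Dedup", split="train")

def to_text(example):
    # Flatten a conversation into the system prompt / Instruction / Output
    # template from the usage example, and append EOS so the model learns to emit it
    turns = {m["from"]: m["value"] for m in example["conversations"]}
    return {
        "text": f"{turns.get('system', '')}\n\n"
                f"Instruction: {turns['human']}\nOutput: {turns['gpt']}"
                + tokenizer.eos_token
    }

dataset = dataset.map(to_text)

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

# Hyperparameters mirror the list above: AdamW, lr 2e-5 with cosine decay to 0,
# effective batch size 256 (4 x 64), one epoch; FP32 is the Trainer default
# when fp16/bf16 are left unset.
args = transformers.TrainingArguments(
    output_dir="phi-1_5-slimorca",
    num_train_epochs=1,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    optim="adamw_torch",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=64,
)

trainer = transformers.Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```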