---
license: apache-2.0
---

# omega-coder-phi-1-3K

omega-coder-phi-1-3K is a supervised fine-tuned (SFT) version of microsoft/phi-1, trained on a custom dataset. This model was made with Phinetune.

## Process

- **Learning Rate:** 1.41e-05
- **Maximum Sequence Length:** 2048
- **Dataset:** deepmind/code_contests
- **Split:** `train[:30%]` (see the training sketch below)
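
This card does not include the training code itself, but as a rough illustration, the settings above could translate into something like the following `trl` `SFTTrainer` sketch. The `dataset_text_field` and `output_dir` values are assumptions, not values taken from Phinetune:

```python
# Hypothetical reconstruction of the SFT setup described above using trl's
# SFTTrainer; Phinetune's actual configuration may differ.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base_model = "microsoft/phi-1"
model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# First 30% of the training split, as listed above
dataset = load_dataset("deepmind/code_contests", split="train[:30%]")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="description",  # assumption: which column holds the text
    max_seq_length=2048,               # from the list above
    args=TrainingArguments(
        output_dir="omega-coder-phi-1-3K",  # assumption
        learning_rate=1.41e-5,              # from the list above
    ),
)
trainer.train()
```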

## 💻 Usage

```python
# Install the required library (the "!" prefix is for notebooks)
!pip install -qU transformers

from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_id = "samuelswandi/omega-coder-phi-1-3K"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Example prompt
prompt = "Your example prompt here"

# Generate a response
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
outputs = pipe(prompt, max_length=50, num_return_sequences=1)
print(outputs[0]["generated_text"])
```
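
Since the phi-1 base model was trained primarily on Python code, prompts formatted as code (for example, a function signature followed by a docstring) will generally produce more useful completions than natural-language instructions.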