
A LoRA fine-tune of the GPT-Neo-1.3B base model, trained on 50 synthetic workout question-answer pairs generated with GPT-3.5-turbo, on a T4 GPU in Google Colab.

Colab link: https://colab.research.google.com/drive/12uv_PocrcDmvOhjPD9SGAqcXbpZAZh2a?authuser=2#scrollTo=PNcc7C51VWHV
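
For reference, the sketch below shows how a LoRA setup of this kind can look with the peft library on top of the GPT-Neo-1.3B base model. The rank, alpha, and dropout values are illustrative assumptions and may differ from the exact hyperparameters used in the notebook.

from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Start from the EleutherAI GPT-Neo-1.3B base checkpoint.
base_model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")

# Attach low-rank adapters; the hyperparameters below are assumed,
# not the exact values from the training notebook.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable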

How to use the model

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned tokenizer and model from the Hub.
tokenizer = AutoTokenizer.from_pretrained("Prajna1999/Prajna-gpt-neo-1.3B-fitbot")
model = AutoModelForCausalLM.from_pretrained("Prajna1999/Prajna-gpt-neo-1.3B-fitbot")

# Sampling parameters (temperature, top_p, top_k) only take effect when
# do_sample=True, and temperature must be strictly positive when sampling.
input_ids = tokenizer.encode("Suggest some workouts for weight loss", return_tensors="pt")
output = model.generate(input_ids, max_length=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=2)
output_text = tokenizer.decode(output[0], skip_special_tokens=True)

print(output_text)
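
Alternatively, the same checkpoint can be wrapped in the transformers text-generation pipeline; a minimal sketch (the generation settings here are assumptions):

from transformers import pipeline

generator = pipeline("text-generation", model="Prajna1999/Prajna-gpt-neo-1.3B-fitbot")
result = generator("Suggest some workouts for weight loss", max_length=128, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])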
