
Trial1-phi2

This model is a fine-tuned version of microsoft/phi-2 on a keyword-to-sentence dataset (not published with this card).

Model description

This is a fine-tuned Phi-2 text generation model that crafts sentences from input keywords. Trained on keyword-input, sentence-output pairs, it produces contextually coherent sentences. Fine-tuning improves its ability to generate meaningful text aligned with the provided keywords.
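
A minimal inference sketch is shown below. The adapter repo id "username/Trial1-phi2" is a placeholder, and the "Keywords: ... Sentence:" prompt layout is an assumption for illustration, not the documented training format.

```python
# Hedged sketch: load the base model plus this PEFT adapter and generate a sentence
# from a keyword list. Repo id and prompt template are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "microsoft/phi-2"
adapter_id = "username/Trial1-phi2"  # placeholder adapter repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Keywords: sunrise, mountain, hikers\nSentence:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```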

Intended uses & limitations

This model excels in generating text from keywords for tasks like content creation and assistive writing but may struggle with ambiguous keywords and nuanced language beyond its training data.

Training and evaluation data

The training data consists of keyword lists paired with corresponding sentences, enabling the model to learn to generate text based on provided keywords. Evaluation involves assessing the model's performance in generating coherent sentences aligned with the given keywords, measuring its accuracy and fluency.
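
The exact column names and prompt template are not specified in this card; the sketch below shows one plausible way such keyword/sentence pairs could be formatted into training text.

```python
# Illustrative sketch only: the field names ("keywords", "sentence") and the
# "Keywords: ... Sentence: ..." layout are assumptions, not the actual dataset schema.
from datasets import Dataset

raw = [
    {"keywords": ["rain", "umbrella", "commute"],
     "sentence": "She kept an umbrella in her bag because rain often interrupted her commute."},
    {"keywords": ["garden", "bees", "summer"],
     "sentence": "In summer the garden hummed with bees moving between the flowers."},
]

def to_prompt(example):
    # One possible keyword -> sentence prompt layout.
    keywords = ", ".join(example["keywords"])
    return {"text": f"Keywords: {keywords}\nSentence: {example['sentence']}"}

dataset = Dataset.from_list(raw).map(to_prompt)
print(dataset[0]["text"])
```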

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0002
  • train_batch_size: 2
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 1
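
A minimal sketch of how these values could map onto transformers.TrainingArguments is given below; the output directory and the optimizer name are assumptions, and the Adam betas/epsilon listed above are the library defaults.

```python
# Hedged sketch mapping the listed hyperparameters to TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="trial1-phi2",        # assumed output path
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    optim="adamw_torch",             # assumed optimizer; betas=(0.9, 0.999), eps=1e-08 are defaults
)
```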

Training results

Framework versions

  • PEFT 0.10.0
  • Transformers 4.39.3
  • Pytorch 2.2.1+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.2