
ScarletPajama

ScarletPajama is a language model fine-tuned on the ShareGPT dataset, built on top of the RedPajama-INCITE-Chat-3b base model.
The original ShareGPT dataset contains 53k pairs of conversational exchanges. To streamline training, the data was converted to the required format and filtered to remove overly long texts. The resulting filtered version of ShareGPT contains 22k pairs, allowing a more focused and efficient fine-tuning run.
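The exact preprocessing script is not part of this card; the snippet below is a minimal sketch of how such a length filter could look, assuming the common ShareGPT JSON layout (a list of records with a "conversations" field of turns), an illustrative token limit, and the base model's tokenizer. File names and the threshold are placeholders, not the values actually used.

```python
# Illustrative ShareGPT length filter (a sketch, not the author's actual script).
import json
from transformers import AutoTokenizer

MAX_TOKENS = 1024  # hypothetical cutoff; the real threshold is not stated in the card

# Tokenizer of the base model (repo ID assumed).
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-3B-v1")

with open("sharegpt.json") as f:
    records = json.load(f)

def conversation_text(record):
    # Concatenate all turns of one conversation into a single string for measuring length.
    return "\n".join(turn["value"] for turn in record["conversations"])

# Keep only conversations that fit within the token budget.
filtered = [
    r for r in records
    if len(tokenizer(conversation_text(r))["input_ids"]) <= MAX_TOKENS
]

with open("sharegpt_filtered.json", "w") as f:
    json.dump(filtered, f)
```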

Model Details

  • Model Name: ScarletPajama
  • Base Model: RedPajama-INCITE-Chat-3b
  • Dataset: ShareGPT-22K
  • Fine-tuning Epochs: 2
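
Usage

Since the Inference API is disabled for this model, the sketch below shows one way to run it locally with transformers, assuming the Hugging Face repo ID Fredithefish/ScarletPajama-3B-HF and the base model's "<human>: ... <bot>:" chat prompt format. Generation settings are illustrative defaults, not recommended values.

```python
# Minimal local inference sketch (assumes transformers and torch are installed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Fredithefish/ScarletPajama-3B-HF"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(device)

# The RedPajama-INCITE chat format is assumed here: "<human>: ... <bot>:".
prompt = "<human>: What is the capital of France?\n<bot>:"
inputs = tokenizer(prompt, return_tensors="pt").to(device)

outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)

# Print only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```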