
mGPT: fine-tune on message data - 2E

  • This model is a fine-tuned version of sberbank-ai/mGPT on 80k messages. This builds on the minimum-working-example checkpoint here.
  • 2E = 2 epochs

Model description

  • Tests whether fine-tuned personality data bleeds over into other languages without the model being explicitly trained on them.

Interesting findings thus far:

  • Passing a generic word in the question's (non-English) language after the <name-identifier> helps ensure the model responds in that language (see the prompt sketch below; this has held for the examples tried so far).
  • Model generations generally remain semantically consistent, even if they switch from <language> to English in the middle of the generated text. This suggests some form of "universal concept understanding".
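
As a concrete illustration, a prompt along these lines tends to keep the response in the question's language. The exact speaker tag used during fine-tuning is not reproduced in this card, so <name-identifier> below stands in as a placeholder and the chosen words are examples only:

question = "¿Qué hiciste ayer?"       # question in Spanish
generic_word = "pues"                 # generic filler word in the same language
# <name-identifier> stands in for the speaker tag used during fine-tuning
prompt = f"{question}\n<name-identifier> {generic_word}"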

Usage in Python

Install the transformers library if you don't have it:

pip install -U transformers

Load the model into a pipeline object:

from transformers import pipeline
import torch

# use the GPU if one is available, otherwise fall back to CPU
device = 0 if torch.cuda.is_available() else -1
my_chatbot = pipeline(
    'text-generation',
    'pszemraj/mGPT-Peter-2E',
    device=device,
)
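
You can then generate a reply by calling the pipeline. The prompt format and sampling settings below are illustrative only; in particular, the speaker tag is an assumption, so check the base checkpoint's card for the exact format used during fine-tuning:

# a minimal generation sketch; the speaker tag ("peter szemraj:") is an assumption
prompt = "How was your weekend?\npeter szemraj:"
result = my_chatbot(
    prompt,
    max_new_tokens=64,   # cap the length of the reply
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.7,     # illustrative sampling settings
    top_p=0.95,
)
print(result[0]['generated_text'])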

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • distributed_type: multi-GPU
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine_with_restarts
  • lr_scheduler_warmup_ratio: 0.05
  • num_epochs: 1 (in addition to all training on prior checkpoints)
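
For reference, these settings roughly correspond to the following Hugging Face TrainingArguments. This is a minimal sketch assuming the standard Trainer was used; the multi-GPU launch, dataset wiring, and output path are placeholders, not taken from the original run:

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mGPT-Peter-2E",           # placeholder output path
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=8,        # 4 x 8 (x GPUs) -> total train batch size of 32
    lr_scheduler_type="cosine_with_restarts",
    warmup_ratio=0.05,
    num_train_epochs=1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)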

Framework versions

  • Transformers 4.18.0
  • Pytorch 1.11.0+cu113
  • Datasets 2.1.0
  • Tokenizers 0.12.1
