
ruadapt_mistral_7b_v0.1

This model is a fine-tuned version (embeddings and LM head only) of mistralai/Mistral-7B-v0.1, trained on a 33 GB Russian dataset. Training ran for 0.8 epochs before an error interrupted it; the model was then lightly fine-tuned further with LoRA.

In short:

1) Replace the tokenizer.
2) Convert the model to fp16.
3) Train only the embeddings and LM head for 0.8 epochs.
4) Convert the new layers back to bf16 and merge them with the original transformer in bf16.
5) Tune the embeddings (modules_to_save), the LM head (modules_to_save), and the first 4 and last 4 layers (linear layers via LoRA, layer norms via modules_to_save) on 1% of the data.
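The body-freezing in step 3 can be sketched as follows. The tiny model below is a hypothetical stand-in for Mistral-7B; its module names (`embed_tokens`, `lm_head`, `layers`) mirror the usual decoder-only layout but are illustrative, not the actual Mistral implementation.

```python
import torch.nn as nn

class TinyCausalLM(nn.Module):
    """Minimal stand-in for a decoder-only LM (hypothetical, for illustration)."""
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.embed_tokens = nn.Embedding(vocab, dim)
        self.layers = nn.ModuleList([nn.Linear(dim, dim) for _ in range(2)])
        self.lm_head = nn.Linear(dim, vocab, bias=False)

model = TinyCausalLM()

# Step 3: train only the embeddings and LM head for the new vocabulary;
# freeze the transformer body.
for name, p in model.named_parameters():
    p.requires_grad = name.startswith("embed_tokens") or name.startswith("lm_head")

trainable = sorted(n for n, p in model.named_parameters() if p.requires_grad)
```

Only `embed_tokens.weight` and `lm_head.weight` remain trainable; everything in `layers` stays frozen.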

Attention: metrics on various benchmark datasets are slightly worse than those of the original model.

Instruct version: https://huggingface.co/rccmsu/ruadapt_mistral_saiga_7b_v0.1

Model description

Russian adaptation of Mistral-7B by replacing the tokenizer. Paper: Tikhomirov M., Chernyshev D. Impact of Tokenization on LLaMa Russian Adaptation //arXiv preprint arXiv:2312.02598. – 2023.
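Replacing the tokenizer forces a resize of the embedding matrix (and LM head). A minimal sketch of the idea, with hypothetical vocabulary sizes and an identity mapping for shared token ids (the real adaptation maps overlapping tokens between the two vocabularies):

```python
import torch
import torch.nn as nn

old_vocab, new_vocab, dim = 32000, 40000, 64  # hypothetical sizes

old_embed = nn.Embedding(old_vocab, dim)

# Build a fresh embedding for the new (Russian) vocabulary and copy over rows
# for tokens shared with the old vocabulary (identity mapping assumed here).
new_embed = nn.Embedding(new_vocab, dim)
shared = min(old_vocab, new_vocab)
with torch.no_grad():
    new_embed.weight[:shared] = old_embed.weight[:shared]
```

Rows beyond the shared range keep their random initialization and are learned during the embedding/LM-head training stage.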

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 6
  • eval_batch_size: 6
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 16
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 192
  • total_eval_batch_size: 96
  • optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
  • lr_scheduler_type: linear
  • num_epochs: 2.0
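The total batch sizes above follow from the per-device settings; a quick check of the arithmetic:

```python
per_device_train, per_device_eval = 6, 6
num_devices = 16
grad_accum_steps = 2

# total_train_batch_size = per-device batch * devices * gradient accumulation
total_train = per_device_train * num_devices * grad_accum_steps  # 192
# Evaluation does not accumulate gradients:
total_eval = per_device_eval * num_devices  # 96
```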

Framework versions

  • Transformers 4.34.0
  • Pytorch 2.0.1+cu118
  • Datasets 2.14.5
  • Tokenizers 0.14.1