
Assistant Dolphin 2.2.1 Mistral 7B (1 epoch) AWQ

This model is a quantized version of Assistant Dolphin 2.2.1 Mistral 7B (1 epoch), produced with AWQ (Activation-aware Weight Quantization).
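A minimal loading sketch using AutoAWQ (the version listed under Framework versions below); the Hub id is this repository's id:

```python
# Minimal loading sketch, assuming AutoAWQ is installed.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_id = "wasertech/assistant-dolphin-2.2.1-mistral-7b-e1-awq"

# from_quantized loads the pre-quantized weights directly; fuse_layers
# fuses attention/MLP modules for faster inference where supported.
model = AutoAWQForCausalLM.from_quantized(model_id, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```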

Model description

Assistant Dolphin 2.2.1 Mistral 7B is a fine-tuned version of the cognitivecomputations/dolphin-2.2.1-mistral-7b model, trained on the OneOS dataset for one epoch.

Intended uses & limitations

This model is intended for use in natural language processing systems to improve text understanding and generation. Specific limitations depend on the training and evaluation data.
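For illustration, an inference sketch using the Transformers-native AWQ integration (Transformers >= 4.35 with autoawq installed); the ChatML prompt format is inherited from the dolphin-2.2.1 base model, and the question and sampling settings are illustrative choices, not values from this card:

```python
# Inference sketch via Transformers' built-in AWQ support.
# The ChatML prompt format follows the dolphin-2.2.1 base model;
# the prompt and sampling settings here are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "wasertech/assistant-dolphin-2.2.1-mistral-7b-e1-awq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhat trade-offs does AWQ quantization make?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens.
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```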

Training and evaluation data

The model was trained on the OneOS dataset.
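As a sketch, the dataset could be inspected with the datasets library; the Hub id wasertech/OneOS is a hypothetical placeholder, since the card names the dataset only as "OneOS":

```python
# Dataset inspection sketch; the Hub id "wasertech/OneOS" is a hypothetical
# placeholder -- the card names the dataset only as "OneOS".
from datasets import load_dataset

dataset = load_dataset("wasertech/OneOS")  # hypothetical id
print(dataset)  # show available splits and columns
```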

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the TrainingArguments sketch after this list):

  • learning_rate: 1.41e-05
  • train_batch_size: 1
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 2
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 1
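
For reference, a sketch mapping these values onto Hugging Face TrainingArguments; this reconstructs the reported configuration, not the original training script, and output_dir is a hypothetical placeholder:

```python
# Sketch of the reported hyperparameters as TrainingArguments;
# a reconstruction, not the original training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="assistant-dolphin-2.2.1-mistral-7b-e1",  # hypothetical path
    learning_rate=1.41e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # total train batch size: 1 * 2 = 2
    lr_scheduler_type="linear",
    num_train_epochs=1,
    # The reported Adam betas (0.9, 0.999) and epsilon 1e-08 match the
    # TrainingArguments defaults, so they need not be set explicitly.
)
```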

Training results

Framework versions

  • AutoAWQ 0.1.8
  • PEFT 0.7.2.dev0
  • Transformers 4.37.0.dev0
  • PyTorch 2.1.2+cu121
  • Datasets 2.16.2.dev0
  • Tokenizers 0.15.0