---
language:
- en
license: apache-2.0
library_name: peft
tags:
- mistral
- generated_from_trainer
- Transformers
- text-generation-inference
datasets:
- robinsmits/ChatAlpaca-20K
inference: false
base_model: mistralai/Mistral-7B-Instruct-v0.2
pipeline_tag: text-generation
model-index:
- name: Mistral-Instruct-7B-v0.2-ChatAlpaca
  results: []
---

# Mistral-Instruct-7B-v0.2-ChatAlpaca

## Model description

This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the English [robinsmits/ChatAlpaca-20K](https://www.huggingface.co/datasets/robinsmits/ChatAlpaca-20K) dataset.

It achieves the following results on the evaluation set:
- Loss: 0.8584

## Model usage

A basic example of how to use the fine-tuned model. Note that this example is adapted from the usage example of the base model.

```
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

device = "cuda"

# Load the LoRA adapter together with the 4-bit quantized base model
model = AutoPeftModelForCausalLM.from_pretrained("robinsmits/Mistral-Instruct-7B-v0.2-ChatAlpaca",
                                                 device_map = "auto",
                                                 load_in_4bit = True,
                                                 torch_dtype = torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("robinsmits/Mistral-Instruct-7B-v0.2-ChatAlpaca")

# Build a multi-turn prompt with the Mistral chat template
messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
    {"role": "user", "content": "Do you have mayonnaise recipes?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors = "pt")

# Generate a response and decode it to text
generated_ids = model.generate(input_ids = encodeds.to(device), max_new_tokens = 512, do_sample = True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 2

A sketch of a matching training configuration is given at the end of this card.

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.99          | 0.2   | 120  | 0.9355          |
| 0.8793        | 0.39  | 240  | 0.8848          |
| 0.8671        | 0.59  | 360  | 0.8737          |
| 0.8662        | 0.78  | 480  | 0.8679          |
| 0.8627        | 0.98  | 600  | 0.8639          |
| 0.8426        | 1.18  | 720  | 0.8615          |
| 0.8574        | 1.37  | 840  | 0.8598          |
| 0.8473        | 1.57  | 960  | 0.8589          |
| 0.8528        | 1.76  | 1080 | 0.8585          |
| 0.852         | 1.96  | 1200 | 0.8584          |

### Framework versions

- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_robinsmits__Mistral-Instruct-7B-v0.2-ChatAlpaca)

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 61.21 |
| AI2 Reasoning Challenge (25-Shot) | 56.74 |
| HellaSwag (10-Shot)               | 80.82 |
| MMLU (5-Shot)                     | 59.10 |
| TruthfulQA (0-shot)               | 55.86 |
| Winogrande (5-shot)               | 77.11 |
| GSM8k (5-shot)                    | 37.60 |
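For reference, below is a minimal sketch of a PEFT/Transformers training setup matching the hyperparameters documented above. The LoRA adapter settings (`r`, `lora_alpha`, `lora_dropout`, `target_modules`), `bf16` precision, and the output directory are not documented in this card and are illustrative assumptions only; dataset preprocessing and the `Trainer` invocation are omitted.

```
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments

base_model = "mistralai/Mistral-7B-Instruct-v0.2"
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype = torch.bfloat16, device_map = "auto")
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Assumed LoRA adapter settings: the card does not document these values.
peft_config = LoraConfig(task_type = "CAUSAL_LM",
                         r = 16,
                         lora_alpha = 32,
                         lora_dropout = 0.05,
                         target_modules = ["q_proj", "k_proj", "v_proj", "o_proj"])
model = get_peft_model(model, peft_config)

# Hyperparameters as documented above; the effective batch size is 1 * 32 = 32.
# Adam betas and epsilon match the Transformers defaults, so they are not set explicitly.
training_args = TrainingArguments(output_dir = "Mistral-Instruct-7B-v0.2-ChatAlpaca",
                                  learning_rate = 4e-05,
                                  per_device_train_batch_size = 1,
                                  per_device_eval_batch_size = 2,
                                  gradient_accumulation_steps = 32,
                                  lr_scheduler_type = "cosine",
                                  warmup_ratio = 0.05,
                                  num_train_epochs = 2,
                                  seed = 42,
                                  bf16 = True)
```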