---
base_model:
  - mistralai/Mistral-7B-Instruct-v0.2
  - NousResearch/Hermes-2-Pro-Mistral-7B
library_name: transformers
tags:
  - mergekit
  - merge
license: mit
language:
  - en
metrics:
  - accuracy
  - code_eval
  - bleu
  - brier_score
---

# Mixtral_Base-7B

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the linear merge method.
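
For intuition, a linear merge is simply a weighted average of the source models' parameter tensors, taken tensor by tensor across same-architecture checkpoints. A minimal sketch of the idea (not mergekit's actual implementation; `state_dicts` and `weights` are illustrative inputs):

```python
import torch

def linear_merge(state_dicts, weights, normalize=True):
    """Weighted average of parameter tensors from same-architecture models."""
    total = sum(weights)
    merged = {}
    for name in state_dicts[0]:
        # Weighted sum of this parameter across all source models
        acc = sum(sd[name].float() * w for sd, w in zip(state_dicts, weights))
        merged[name] = acc / total if normalize else acc
    return merged
```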

### Models Merged

The following models were included in the merge:

* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)

### Configuration

The following YAML configuration was used to produce this model:


```yaml
models:
  - model: mistralai/Mistral-7B-Instruct-v0.2
    parameters:
      weight: 1.0
  - model: NousResearch/Hermes-2-Pro-Mistral-7B
    parameters:
      weight: 0.3
merge_method: linear
dtype: float16
```
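
Assuming mergekit's default weight normalization for the linear method, the weights above imply the blend proportions below (illustrative arithmetic only):

```python
weights = {"Mistral-7B-Instruct-v0.2": 1.0, "Hermes-2-Pro-Mistral-7B": 0.3}
total = sum(weights.values())          # 1.3
for name, w in weights.items():
    print(f"{name}: {w / total:.1%}")  # ~76.9% / ~23.1%
```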

## Usage

```shell
pip install -qU transformers torch
```

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

device = "cuda"  # the device to load the model onto

model_id = "LeroyDyer/Mixtral_Base"  # this repository
# AutoModelForCausalLM resolves the correct (Mistral) architecture for this merge
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "My favourite condiment is"
inputs = tokenizer(prompt, return_tensors="pt").to(device)

# Generate
generate_ids = model.generate(inputs.input_ids, max_length=30)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])
```
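
Both parent models are instruction-tuned, so the merge will usually respond best to a chat-formatted prompt. A sketch, assuming the merged tokenizer inherits a chat template from its instruct parents (reusing `model`, `tokenizer`, and `device` from above):

```python
# Assumes the tokenizer ships a chat template inherited from the instruct parents
messages = [{"role": "user", "content": "Explain model merging in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(device)
output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, not the prompt
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```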