---
base_model:
- mistralai/Mistral-7B-Instruct-v0.2
- NousResearch/Hermes-2-Pro-Mistral-7B
library_name: transformers
tags:
- mergekit
- merge
license: mit
language:
- en
metrics:
- accuracy
- code_eval
- bleu
- brier_score
---
# Mixtral_BaseModel-7B-Base

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.

### Models Merged

The following models were included in the merge:

* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) — one of the best instruction-tuned 7B models.
* [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B) — seems fine so far, but still needs intensive testing (e.g. general-knowledge prompts such as "What is a cat?" and coding prompts such as "Write a neural network in VB.NET").

### Configuration

The following YAML configuration was used to produce this model (a sketch of reproducing the merge with the mergekit CLI appears at the end of this card):

```yaml
models:
  - model: mistralai/Mistral-7B-Instruct-v0.2
    parameters:
      weight: 1.0
  - model: NousResearch/Hermes-2-Pro-Mistral-7B
    parameters:
      weight: 0.3
merge_method: linear
dtype: float16
```

Example: running the Q8_0 GGUF with llama-index and llama.cpp:

```python
%pip install llama-index-embeddings-huggingface
%pip install llama-index-llms-llama-cpp
%pip install llama-index

from llama_index.llms.llama_cpp import LlamaCPP
from llama_index.llms.llama_cpp.llama_utils import (
    messages_to_prompt,
    completion_to_prompt,
)

model_url = "https://huggingface.co/LeroyDyer/Mixtral_BaseModel-gguf/resolve/main/mixtral_basemodel.q8_0.gguf"

llm = LlamaCPP(
    # You can pass in the URL to a GGUF model to download it automatically
    model_url=model_url,
    # optionally, you can set the path to a pre-downloaded model instead of model_url
    model_path=None,
    temperature=0.1,
    max_new_tokens=256,
    # Mistral-7B-Instruct-v0.2 supports a larger context window,
    # but we set it lower to allow for some wiggle room
    context_window=3900,
    # kwargs to pass to __call__()
    generate_kwargs={},
    # kwargs to pass to __init__()
    # set to at least 1 to use GPU
    model_kwargs={"n_gpu_layers": 1},
    # transform inputs into the Llama 2 / Mistral [INST] prompt format
    messages_to_prompt=messages_to_prompt,
    completion_to_prompt=completion_to_prompt,
    verbose=True,
)

prompt = input("Enter your prompt: ")
response = llm.complete(prompt)
print(response.text)
```

The model still needs quantizing to 4-bit and other sizes; the Q8_0 quant works well (untuned!).
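Until smaller GGUF quants are published, one way to fit the full-precision weights on limited VRAM is 4-bit loading with bitsandbytes. This is a minimal sketch, assuming the merged transformers weights live at `LeroyDyer/Mixtral_BaseModel` (a repo id inferred from the GGUF repo name above, not confirmed by this card):

```python
# Minimal sketch: load the merged model in 4-bit with bitsandbytes.
# Assumption: the merged weights are published at LeroyDyer/Mixtral_BaseModel
# (inferred from the GGUF repo name above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # NF4 4-bit quantization
    bnb_4bit_compute_dtype=torch.float16,  # compute in fp16
)

tokenizer = AutoTokenizer.from_pretrained("LeroyDyer/Mixtral_BaseModel")
model = AutoModelForCausalLM.from_pretrained(
    "LeroyDyer/Mixtral_BaseModel",
    quantization_config=bnb_config,
    device_map="auto",
)

inputs = tokenizer("What is a cat?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```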
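To reproduce the merge itself, save the configuration above as `config.yml` and run it through the mergekit CLI. A sketch (the output directory name is just an example, and `--cuda` only applies if a GPU is available):

```sh
pip install mergekit
mergekit-yaml config.yml ./Mixtral_BaseModel-7B --cuda
```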