---
license: apache-2.0
tags:
  - merge
  - mergekit
  - lazymergekit
metrics:
  - code_eval
  - accuracy
---

# Mnemosyne-7B

Mnemosyne-7B is an experimental large language model (LLM) created by merging several pre-trained models geared toward informative and educational tasks. The merge aims to combine the strengths of these models into a single, broadly knowledgeable LLM.

GGUF: [mradermacher/Mnemosyne-7B-GGUF](https://huggingface.co/mradermacher/Mnemosyne-7B-GGUF)

## Important Note

This is an experimental model; its performance and capabilities are not guaranteed. Further testing and evaluation are needed to assess its effectiveness.

## 🧩 Configuration

```yaml
models:
  - model: MaziyarPanahi/Mistral-7B-Instruct-KhanAcademy-v0.2
  - model: openbmb/Eurus-7b-kto
  - model: Weyaxi/Newton-7B
merge_method: model_stock
base_model: mistralai/Mistral-7B-Instruct-v0.2
dtype: bfloat16
```
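For readers who want to work with this configuration programmatically, it is a standard mergekit YAML file. The sketch below is purely illustrative and not part of this repository: it assumes PyYAML is installed, and the `CONFIG` string simply mirrors the configuration shown above so it can be sanity-checked before running a merge.

```python
import yaml  # assumes PyYAML is installed (pip install pyyaml)

# Mirror of the mergekit configuration shown above.
CONFIG = """
models:
  - model: MaziyarPanahi/Mistral-7B-Instruct-KhanAcademy-v0.2
  - model: openbmb/Eurus-7b-kto
  - model: Weyaxi/Newton-7B
merge_method: model_stock
base_model: mistralai/Mistral-7B-Instruct-v0.2
dtype: bfloat16
"""

cfg = yaml.safe_load(CONFIG)

# Basic sanity checks before handing the file to mergekit.
print(cfg["merge_method"])   # model_stock
print(cfg["base_model"])     # mistralai/Mistral-7B-Instruct-v0.2
print(len(cfg["models"]))    # 3 source models merged onto the base
```

The merge itself is typically executed with mergekit's `mergekit-yaml` command, e.g. `mergekit-yaml config.yaml ./Mnemosyne-7B` (output directory name here is illustrative).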

Mnemosyne-7B is a merge of the models listed in the configuration above, created with [mergekit](https://github.com/arcee-ai/mergekit).