
Mnemosyne-7B

Mnemosyne-7B is an experimental large language model (LLM) created by merging several pre-trained models oriented toward informative and educational use. The merge aims to combine the strengths of its constituent models into a single, more comprehensive and informative LLM.

GGUF: https://huggingface.co/mradermacher/Mnemosyne-7B-GGUF
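For running one of those quants locally, here is a minimal sketch using llama-cpp-python; the quant filename below is hypothetical, so substitute an actual file from the GGUF repository:

```python
# Minimal llama-cpp-python sketch for a GGUF quant of Mnemosyne-7B.
# The filename is hypothetical; download a real quant from the repo above.
from llama_cpp import Llama

llm = Llama(
    model_path="Mnemosyne-7B.Q4_K_M.gguf",  # hypothetical quant filename
    n_ctx=4096,                             # context window size
)

output = llm(
    "Explain the Pythagorean theorem in one paragraph.",
    max_tokens=256,
)
print(output["choices"][0]["text"])
```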

Important Note:

This is an experimental model, and its performance and capabilities are not guaranteed. Further testing and evaluation are required to assess its effectiveness.
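For quick evaluation, a minimal inference sketch with 🤗 Transformers is shown below; the generation settings are illustrative, not tuned:

```python
# Minimal Transformers sketch for Mnemosyne-7B (BF16, Mistral-Instruct lineage).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bunnycore/Mnemosyne-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the bfloat16 merge dtype
    device_map="auto",
)

# Mistral-Instruct-style chat formatting via the tokenizer's chat template.
messages = [{"role": "user", "content": "Give a short overview of photosynthesis."}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```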

Mnemosyne-7B is a merge of the following models using mergekit:

- MaziyarPanahi/Mistral-7B-Instruct-KhanAcademy-v0.2
- openbmb/Eurus-7b-kto
- Weyaxi/Newton-7B

🧩 Configuration

```yaml
models:
  - model: MaziyarPanahi/Mistral-7B-Instruct-KhanAcademy-v0.2
  - model: openbmb/Eurus-7b-kto
  - model: Weyaxi/Newton-7B
merge_method: model_stock
base_model: mistralai/Mistral-7B-Instruct-v0.2
dtype: bfloat16
```
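To reproduce the merge, the configuration above can be saved as config.yaml and run through mergekit. A minimal sketch using mergekit's Python API follows; the option values are illustrative, not the settings used for this release:

```python
# Sketch: reproduce the merge with mergekit's Python API.
# Assumes the YAML above is saved as config.yaml; options are illustrative.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Mnemosyne-7B",   # output directory for the merged weights
    options=MergeOptions(
        cuda=False,              # set True to merge on GPU
        copy_tokenizer=True,     # copy the base model's tokenizer into the output
        lazy_unpickle=True,      # reduce peak memory while loading shards
    ),
)
```

Equivalently, the mergekit-yaml command-line tool accepts the same configuration file and output path.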


