
merge

This model is focused on roleplaying; please don't expect much from it in other areas, as roleplay is the job it is built for. It is a merge of pre-trained language models created using mergekit. Be careful: it can generate NSFW content, and whatever you generate is your responsibility. Enjoy it by roleplaying. Cheers ☺️.

Merge Details

Merge Method

This model was merged using the TIES merge method, with mistralai/Mistral-7B-v0.1 as the base.

Models Merged

The following models were included in the merge:

- mistralai/Mistral-7B-Instruct-v0.2
- Endevor/InfinityRP-v1-7B
- Endevor/EndlessRP-v3-7B
- CalderaAI/Naberius-7B
- CalderaAI/Hexoteric-7B

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: mistralai/Mistral-7B-v0.1
    #no parameters necessary for base model
  - model: mistralai/Mistral-7B-Instruct-v0.2
    parameters:
      density: 0.6
      weight: 0.25
  - model: Endevor/InfinityRP-v1-7B
    parameters:
      density: 0.6
      weight: 0.25
  - model: Endevor/EndlessRP-v3-7B
    parameters:
      density: 0.6
      weight: 0.25
  - model: CalderaAI/Naberius-7B
    parameters:
      density: 0.6
      weight: 0.25
  - model: CalderaAI/Hexoteric-7B
    parameters:
      density: 0.6
      weight: 0.25
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  normalize: false
  int8_mask: true
dtype: float16
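
If you want to reproduce or tweak this merge, the sketch below runs the configuration above through mergekit's Python entry points (MergeConfiguration and run_merge). This is a hedged sketch, not the exact procedure used for this repo: the config filename and output path are assumptions, and the option names should be checked against the mergekit documentation for your installed version.

```python
# Hedged sketch: reproduce the merge from the YAML config above with mergekit.
# MergeConfiguration / run_merge / MergeOptions follow mergekit's documented
# Python usage; exact option names may differ between mergekit versions.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# "kainaticulous-rp-7b.yml" is a hypothetical filename holding the YAML shown above.
with open("kainaticulous-rp-7b.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./kainaticulous-rp-7b",      # output directory (hypothetical name)
    options=MergeOptions(
        cuda=torch.cuda.is_available(),    # use the GPU if one is available
        copy_tokenizer=True,               # copy the base model's tokenizer into the output
        lazy_unpickle=True,                # lower peak RAM while loading shards
    ),
)
```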

Download

Download only one of the files, not all of them.
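
For example, a single quantized file can be fetched with the huggingface_hub library. The sketch below assumes the Q4_K_M quant and the repo id kainatq/kainaticulous-rp-7b-gguf; adjust the filename to whichever quant you pick from the table further down, and check the repo's file listing for the exact name (the real files may carry a model-name prefix).

```python
# Hedged sketch: download a single GGUF file instead of cloning the whole repo.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="kainatq/kainaticulous-rp-7b-gguf",
    filename="Q4_K_M.gguf",   # pick exactly one quant file; verify the exact name in the repo
)
print(model_path)  # local path to the downloaded file
```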

About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

- llama.cpp. The source project for GGUF. Offers a CLI and a server option.
- text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
- KoboldCpp, a fully featured web UI, with GPU acceleration across all platforms and GPU architectures. Especially good for storytelling.
- GPT4All, a free and open source locally running GUI, supporting Windows, Linux and macOS with full GPU acceleration.
- LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
- LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection.
- Faraday.dev, an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
- llama-cpp-python, a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server (a usage sketch follows this list).
- candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use.
- ctransformers, a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
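
As a concrete starting point, the hedged sketch below loads one of the quantized files with llama-cpp-python (listed above) and runs a short roleplay-style chat. The model path, context size, and sampling settings are illustrative assumptions, not values tested for this model.

```python
# Hedged sketch: run the GGUF model locally with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="./Q4_K_M.gguf",  # path to the file you downloaded
    n_ctx=4096,                  # context window
    n_gpu_layers=0,              # 0 = CPU only; raise this to offload layers to the GPU
)

reply = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a creative roleplay partner."},
        {"role": "user", "content": "We meet at a tavern on a stormy night. Introduce your character."},
    ],
    max_tokens=256,
    temperature=0.8,
)
print(reply["choices"][0]["message"]["content"])
```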

Info

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| --- | --- | --- | --- | --- | --- |
| Q2_K.gguf | Q2_K | 2 | 2.72 GB | 5.22 GB | significant quality loss - not recommended for most purposes |
| Q3_K_S.gguf | Q3_K_S | 3 | 3.16 GB | 5.66 GB | very small, high quality loss |
| Q3_K_M.gguf | Q3_K_M | 3 | 3.52 GB | 6.02 GB | very small, high quality loss |
| Q3_K_L.gguf | Q3_K_L | 3 | 3.82 GB | 6.32 GB | small, substantial quality loss |
| Q4_0.gguf | Q4_0 | 4 | 4.11 GB | 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| Q4_K_S.gguf | Q4_K_S | 4 | 4.14 GB | 6.64 GB | small, greater quality loss |
| Q4_K_M.gguf | Q4_K_M | 4 | 4.37 GB | 6.87 GB | medium, balanced quality - recommended |
| Q5_0.gguf | Q5_0 | 5 | 5.00 GB | 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| Q5_K_S.gguf | Q5_K_S | 5 | 5.00 GB | 7.50 GB | large, low quality loss - recommended |
| Q5_K_M.gguf | Q5_K_M | 5 | 5.13 GB | 7.63 GB | large, very low quality loss - recommended |
| Q6_K.gguf | Q6_K | 6 | 5.94 GB | 8.44 GB | very large, extremely low quality loss |
| Q8_0.gguf | Q8_0 | 8 | 7.70 GB | 10.20 GB | very large, extremely low quality loss - not recommended |

Note: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. (This table format is borrowed from @TheBloke: https://huggingface.co/TheBloke.)
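
To illustrate the note above, the hedged sketch below offloads part of the model to the GPU with llama-cpp-python, shifting memory from system RAM to VRAM. The layer count is an assumption to tune for your hardware, not a recommendation specific to this model.

```python
# Hedged sketch: trade system RAM for VRAM by offloading layers to the GPU.
# Mistral-7B-style models have 32 transformer layers; offload as many as fit in VRAM.
from llama_cpp import Llama

llm = Llama(
    model_path="./Q5_K_M.gguf",  # any quant from the table above
    n_gpu_layers=20,             # layers kept in VRAM; -1 offloads everything
    n_ctx=4096,
)
```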

Citation

The mergekit repository was used to make this merge:

@article{goddard2024arcee,
  title={Arcee's MergeKit: A Toolkit for Merging Large Language Models},
  author={Goddard, Charles and Siriwardhana, Shamane and Ehghaghi, Malikeh and Meyers, Luke and Karpukhin, Vlad and Benedict, Brian and McQuade, Mark and Solawetz, Jacob},
  journal={arXiv preprint arXiv:2403.13257},
  year={2024}
}