---
license: apache-2.0
tags:
  - merge
  - mergekit
  - lazymergekit
---

# Llama-3-SPPO-turbcat-RP-v0.1-alpha

These are GGUF quants of Llama-3-SPPO-turbcat-RP-v0.1-alpha. For the Transformers version, see the separate repository.

The following GGUF quants are currently available (a loading sketch follows the list):

- Q5_K_M (no imatrix)
- Q8_0 (no imatrix)
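
A library such as llama-cpp-python can load these GGUF files directly from the Hub. The sketch below is only an illustration: the `repo_id` and `filename` pattern are assumptions, so check this repository's file listing for the exact names.

```python
# Minimal sketch: loading the Q5_K_M quant with llama-cpp-python.
# The repo_id and filename pattern are assumptions; verify them against
# the actual files in this repository.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="DIvAndrey/Llama-3-SPPO-turbcat-RP-v0.1-alpha",  # assumed repo id
    filename="*Q5_K_M.gguf",  # glob matching the Q5_K_M quant file
    n_ctx=8192,               # Llama 3 context length
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello! Introduce yourself."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```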

Llama-3-SPPO-turbcat-RP-v0.1-alpha is a merge of the following models using mergekit:

- Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
- UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3 + grimjim/Llama-3-Instruct-abliteration-LoRA-8B
- turboderp/llama3-turbcat-instruct-8b

## 🧩 Configuration

```yaml
models:
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
  - model: UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
  - model: turboderp/llama3-turbcat-instruct-8b
merge_method: model_stock
base_model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
dtype: bfloat16
```
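
To reproduce the merge locally, the configuration above can be passed to mergekit. The sketch below uses mergekit's Python API; the config filename and output path are placeholders, not names from this repository.

```python
# Minimal sketch of reproducing the merge with mergekit's Python API.
# "config.yaml" is assumed to contain the YAML configuration shown above.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Llama-3-SPPO-turbcat-RP-v0.1-alpha",  # placeholder output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # run the merge on GPU if one is present
        copy_tokenizer=True,             # copy the base model's tokenizer into the output
    ),
)
```

The equivalent mergekit CLI invocation is `mergekit-yaml config.yaml ./Llama-3-SPPO-turbcat-RP-v0.1-alpha --cuda`.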