---
base_model:
  - failspy/Llama-3-8B-Instruct-MopeyMule
  - kloodia/lora-8b-math
  - failspy/Llama-3-8B-Instruct-MopeyMule
  - Blackroot/Llama3-RP-Lora
  - failspy/Llama-3-8B-Instruct-MopeyMule
  - zementalist/llama-3-8B-chat-psychotherapist
  - failspy/Llama-3-8B-Instruct-MopeyMule
  - Blackroot/Llama-3-8B-Abomination-LORA
  - failspy/Llama-3-8B-Instruct-MopeyMule
  - ResplendentAI/Llama3_RP_ORPO_LoRA
library_name: transformers
tags:
  - mergekit
  - merge
---

QuantFactory Banner

# QuantFactory/ScaduTorrent1.1-8b-model_stock-GGUF

This is a quantized version of DreadPoor/ScaduTorrent1.1-8b-model_stock, created using llama.cpp.
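
As a quick usage sketch (not part of the original card), the GGUF quants can be loaded with llama-cpp-python. The quant filename below is an assumption; substitute one of the actual `.gguf` files from this repository:

```python
# Minimal sketch: run one of the GGUF quants with llama-cpp-python.
# The filename is an assumption - replace it with a real .gguf file from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="ScaduTorrent1.1-8b-model_stock.Q4_K_M.gguf",  # hypothetical quant filename
    n_ctx=8192,        # Llama-3 context window
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```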

# Original Model Card

# merge

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the Model Stock merge method, with failspy/Llama-3-8B-Instruct-MopeyMule + Blackroot/Llama-3-8B-Abomination-LORA as the base.
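
For intuition, Model Stock averages the fine-tuned weights and then interpolates toward the base weights, with a ratio derived from how strongly the fine-tuned deltas agree with each other. Below is a rough per-tensor NumPy sketch of that idea (based on the Model Stock paper, not mergekit's actual implementation):

```python
# Rough per-tensor sketch of the Model Stock idea - NOT mergekit's implementation.
import itertools
import numpy as np

def model_stock_tensor(base: np.ndarray, finetuned: list[np.ndarray]) -> np.ndarray:
    deltas = [w - base for w in finetuned]
    # Average pairwise cosine similarity between the fine-tuned deltas (task vectors).
    cos = np.mean([
        np.dot(a.ravel(), b.ravel()) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        for a, b in itertools.combinations(deltas, 2)
    ])
    n = len(finetuned)
    t = n * cos / ((n - 1) * cos + 1)   # interpolation ratio from the paper
    w_avg = np.mean(finetuned, axis=0)  # average of the fine-tuned weights
    return t * w_avg + (1 - t) * base   # pull the average back toward the base

# Toy usage with random tensors standing in for one layer's weights.
rng = np.random.default_rng(0)
base = rng.normal(size=(4, 4))
merged = model_stock_tensor(base, [base + 0.1 * rng.normal(size=(4, 4)) for _ in range(4)])
print(merged.shape)
```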

### Models Merged

The following models were included in the merge:

- failspy/Llama-3-8B-Instruct-MopeyMule + Blackroot/Llama3-RP-Lora
- failspy/Llama-3-8B-Instruct-MopeyMule + zementalist/llama-3-8B-chat-psychotherapist
- failspy/Llama-3-8B-Instruct-MopeyMule + ResplendentAI/Llama3_RP_ORPO_LoRA
- failspy/Llama-3-8B-Instruct-MopeyMule + kloodia/lora-8b-math

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: failspy/Llama-3-8B-Instruct-MopeyMule+Blackroot/Llama3-RP-Lora
  - model: failspy/Llama-3-8B-Instruct-MopeyMule+zementalist/llama-3-8B-chat-psychotherapist
  - model: failspy/Llama-3-8B-Instruct-MopeyMule+ResplendentAI/Llama3_RP_ORPO_LoRA
  - model: failspy/Llama-3-8B-Instruct-MopeyMule+kloodia/lora-8b-math
merge_method: model_stock
base_model: failspy/Llama-3-8B-Instruct-MopeyMule+Blackroot/Llama-3-8B-Abomination-LORA
normalize: false
int8_mask: true
dtype: bfloat16
```
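
To reproduce the merge locally, the YAML above can be passed to mergekit (for example via its `mergekit-yaml` CLI). The sketch below uses mergekit's Python API; the paths and options are illustrative assumptions, so check them against the mergekit documentation:

```python
# Sketch: run the merge config above through mergekit's Python API.
# Paths and option values are assumptions for illustration only.
import torch
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "./config.yml"   # the YAML shown above, saved to disk
OUTPUT_PATH = "./merged"      # where the merged model will be written

with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        lora_merge_cache="/tmp",            # cache for applying the +LoRA adapters
        cuda=torch.cuda.is_available(),     # use GPU if present
        copy_tokenizer=True,
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```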