
This is my previous Llama3 merge (OrpoSmaug-Slerp) with an extra LoRA applied on top for better RP.

Thanks to mradermacher, GGUF quants (Q2_K–Q8_K & IQ3_XS–IQ4_XS) of this model are also available here: https://huggingface.co/mradermacher/Llama3-RPLoRa-SmaugOrpo-GGUF
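For anyone who wants to try a quant directly, a minimal sketch of downloading one file from that repo and running it with llama-cpp-python is shown below. The quant filename is an assumption; check the repo's file list for the names that actually exist.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single GGUF quant (filename is assumed, not verified)
gguf_path = hf_hub_download(
    repo_id="mradermacher/Llama3-RPLoRa-SmaugOrpo-GGUF",
    filename="Llama3-RPLoRa-SmaugOrpo.Q4_K_M.gguf",
)

# Load the quantized model and generate a short completion
llm = Llama(model_path=gguf_path, n_ctx=8192)
out = llm("Write a short in-character greeting.", max_tokens=128)
print(out["choices"][0]["text"])
```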


base_model:
  • WesPro/Llama3-OrpoSmaug-Slerp-8B
  • ResplendentAI/RP_Format_Llama3
library_name: transformers
tags:
  • mergekit
  • merge

merge

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the linear merge method.
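As a rough illustration (not mergekit's actual code), a linear merge takes a weight-normalized average of each parameter tensor across the input models. With a single input model at weight 1.0, as in the configuration below, this reduces to copying that model, so the merge effectively just bakes the RP_Format_Llama3 LoRA into OrpoSmaug-Slerp.

```python
import torch

def linear_merge(state_dicts, weights):
    """Sketch of a linear merge: weighted average of matching tensors
    from several model state dicts."""
    total = sum(weights)
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(
            w * sd[name].float() for sd, w in zip(state_dicts, weights)
        ) / total
    return merged
```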

Models Merged

The following models were included in the merge:

  • WesPro/Llama3-OrpoSmaug-Slerp-8B + ResplendentAI/RP_Format_Llama3

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: WesPro/Llama3-OrpoSmaug-Slerp-8B+ResplendentAI/RP_Format_Llama3
    parameters:
      weight: 1.0
merge_method: linear 
dtype: float16
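
For completeness, a minimal usage sketch with transformers is shown below. The repo id is an assumption inferred from the GGUF repo name; substitute the actual id of this repository if it differs.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WesPro/Llama3-RPLoRa-SmaugOrpo"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "Write a short in-character greeting."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```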