Quantization made by Richard Erkhov.
# KoSOLAR-10.9B-v0.3 - GGUF
- Model creator: https://huggingface.co/rrw-x2/
- Original model: https://huggingface.co/rrw-x2/KoSOLAR-10.9B-v0.3/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| KoSOLAR-10.9B-v0.3.Q2_K.gguf | Q2_K | 3.8GB |
| KoSOLAR-10.9B-v0.3.IQ3_XS.gguf | IQ3_XS | 4.22GB |
| KoSOLAR-10.9B-v0.3.IQ3_S.gguf | IQ3_S | 4.45GB |
| KoSOLAR-10.9B-v0.3.Q3_K_S.gguf | Q3_K_S | 4.42GB |
| KoSOLAR-10.9B-v0.3.IQ3_M.gguf | IQ3_M | 4.59GB |
| KoSOLAR-10.9B-v0.3.Q3_K.gguf | Q3_K | 4.92GB |
| KoSOLAR-10.9B-v0.3.Q3_K_M.gguf | Q3_K_M | 4.92GB |
| KoSOLAR-10.9B-v0.3.Q3_K_L.gguf | Q3_K_L | 5.34GB |
| KoSOLAR-10.9B-v0.3.IQ4_XS.gguf | IQ4_XS | 5.51GB |
| KoSOLAR-10.9B-v0.3.Q4_0.gguf | Q4_0 | 5.74GB |
| KoSOLAR-10.9B-v0.3.IQ4_NL.gguf | IQ4_NL | 5.8GB |
| KoSOLAR-10.9B-v0.3.Q4_K_S.gguf | Q4_K_S | 5.78GB |
| KoSOLAR-10.9B-v0.3.Q4_K.gguf | Q4_K | 6.1GB |
| KoSOLAR-10.9B-v0.3.Q4_K_M.gguf | Q4_K_M | 6.1GB |
| KoSOLAR-10.9B-v0.3.Q4_1.gguf | Q4_1 | 6.36GB |
| KoSOLAR-10.9B-v0.3.Q5_0.gguf | Q5_0 | 6.98GB |
| KoSOLAR-10.9B-v0.3.Q5_K_S.gguf | Q5_K_S | 6.98GB |
| KoSOLAR-10.9B-v0.3.Q5_K.gguf | Q5_K | 7.17GB |
| KoSOLAR-10.9B-v0.3.Q5_K_M.gguf | Q5_K_M | 7.17GB |
| KoSOLAR-10.9B-v0.3.Q5_1.gguf | Q5_1 | 7.6GB |
| KoSOLAR-10.9B-v0.3.Q6_K.gguf | Q6_K | 8.3GB |
| KoSOLAR-10.9B-v0.3.Q8_0.gguf | Q8_0 | 10.75GB |
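To try one of these files locally, here is a minimal sketch using `huggingface_hub` and `llama-cpp-python`. The repo id below follows the usual naming convention for these quantization repos but is an assumption (verify it on the Hub), and Q4_K_M is just one common speed/quality trade-off; substitute any file from the table.

```python
# Minimal sketch: download one GGUF file from the table above and run it
# with llama-cpp-python. Assumes both packages are installed and that the
# repo_id below matches this quantization repo (an assumption - verify).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="RichardErkhov/rrw-x2_-_KoSOLAR-10.9B-v0.3-gguf",  # assumed repo id
    filename="KoSOLAR-10.9B-v0.3.Q4_K_M.gguf",  # any quant from the table works
)

llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("한국어로 자기소개를 해 주세요.", max_tokens=128)
print(out["choices"][0]["text"])
```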
Original model description:
```yaml
language:
- ko
base_model:
- LDCC/LDCC-SOLAR-10.7B
- hyeogi/SOLAR-10.7B-dpo-v1
tags:
- mergekit
- merge
- LDCC/LDCC-SOLAR-10.7B
- hyeogi/SOLAR-10.7B-dpo-v1
license: apache-2.0
```
# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method
This model was merged using the SLERP merge method.
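For intuition: SLERP interpolates along the great-circle arc between two weight vectors rather than along the straight line between them, so the result stays close to the spherical shell the two endpoints lie on. The sketch below shows the core operation on a pair of flattened tensors; it is an illustration of the technique under that framing, not mergekit's exact implementation.

```python
# Illustrative SLERP (spherical linear interpolation) between two weight
# tensors; t=0 returns a, t=1 returns b. Not mergekit's actual source.
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    a_flat, b_flat = a.ravel(), b.ravel()
    a_n = a_flat / (np.linalg.norm(a_flat) + eps)
    b_n = b_flat / (np.linalg.norm(b_flat) + eps)
    # Angle between the two weight vectors on the unit sphere.
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if omega < eps:  # nearly parallel: fall back to plain linear interpolation
        return (1 - t) * a + t * b
    so = np.sin(omega)
    mixed = (np.sin((1 - t) * omega) / so) * a_flat + (np.sin(t * omega) / so) * b_flat
    return mixed.reshape(a.shape)
```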
### Models Merged

The following models were included in the merge:
- LDCC/LDCC-SOLAR-10.7B
- hyeogi/SOLAR-10.7B-dpo-v1
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: LDCC/LDCC-SOLAR-10.7B
        layer_range: [0, 48]
      - model: hyeogi/SOLAR-10.7B-dpo-v1
        layer_range: [0, 48]
merge_method: slerp
tokenizer_source: base
base_model: LDCC/LDCC-SOLAR-10.7B
embed_slerp: true
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
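The `t` lists under `parameters` are layer gradients: mergekit interpolates the anchor values across the merged layer range, so `self_attn` and `mlp` weights get different per-layer mixing ratios while all remaining tensors use the flat `value: 0.5`. The hypothetical helper below illustrates how such a gradient could expand to one `t` per layer, assuming evenly spaced anchors and the 48 layers from the config; it mirrors the idea rather than mergekit's source.

```python
# Expand a mergekit-style gradient such as [0, 0.5, 0.3, 0.7, 1] into a
# per-layer interpolation weight, assuming anchors are evenly spaced over
# the merged layer range (illustrative helper, not mergekit code).
import numpy as np

def layer_ts(anchors, num_layers=48):
    anchor_pos = np.linspace(0.0, 1.0, num=len(anchors))  # anchor positions
    layer_pos = np.linspace(0.0, 1.0, num=num_layers)     # one position per layer
    return np.interp(layer_pos, anchor_pos, anchors)

print(layer_ts([0, 0.5, 0.3, 0.7, 1])[:5])  # t for the first few self_attn layers
```

In practice, a config like this is applied with mergekit's `mergekit-yaml` command, which reads the YAML and writes the merged checkpoint.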
## Datasets

Fine-tuned with LoRA on the kyujinpy/OpenOrca-KO dataset.
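For readers who want to set up a comparable LoRA fine-tune with PEFT, a hedged sketch follows. The rank, alpha, dropout, target modules, and base checkpoint path are all assumptions for illustration; the card does not publish the actual hyperparameters.

```python
# Illustrative LoRA setup on kyujinpy/OpenOrca-KO using PEFT.
# All hyperparameters and the base checkpoint path are assumptions;
# the actual training configuration is not published on this card.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

ds = load_dataset("kyujinpy/OpenOrca-KO")
model = AutoModelForCausalLM.from_pretrained("path/to/merged-model")  # the SLERP merge above (assumed path)
tokenizer = AutoTokenizer.from_pretrained("path/to/merged-model")

lora = LoraConfig(
    r=16,                                 # assumed rank
    lora_alpha=32,                        # assumed scaling
    lora_dropout=0.05,                    # assumed dropout
    target_modules=["q_proj", "v_proj"],  # assumed attention targets
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the LoRA adapters train
```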