---
base_model:
- Undi95/Miqu-70B-Alpaca-DPO
- Sao10K/Euryale-1.3-L2-70B
library_name: transformers
tags:
- mergekit
- merge
---

EXL2 quants of [bcse/Lumiere-120b](https://huggingface.co/bcse/Lumiere-120b)
# Lumiere-120b
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Quants
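The quantized weights can be fetched with `huggingface_hub` and loaded with any [exllamav2](https://github.com/turboderp/exllamav2)-based frontend (for example text-generation-webui or TabbyAPI). A minimal download sketch; the repo id and branch below are placeholders, since EXL2 repos typically keep each bits-per-weight variant on its own branch:

```python
from huggingface_hub import snapshot_download

# Placeholders: substitute this quant repo's id and the branch that
# carries the bits-per-weight variant you want (see the branch list).
path = snapshot_download(
    repo_id="<this-quant-repo>",
    revision="main",
    local_dir="Lumiere-120b-exl2",
)
print(f"Quant downloaded to {path}")
```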
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
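Conceptually, the linear method takes a normalized weighted average of each matching tensor across the source models. The sketch below is illustrative only; mergekit's real implementation adds slicing, lazy loading, and dtype handling:

```python
import torch

def linear_merge(
    state_dicts: list[dict[str, torch.Tensor]],
    weights: list[float],
) -> dict[str, torch.Tensor]:
    """Normalized weighted average of matching tensors (mergekit's linear
    method normalizes by default). Illustrative sketch, not mergekit code."""
    total = sum(weights)
    return {
        name: sum((w / total) * sd[name].float()
                  for sd, w in zip(state_dicts, weights))
        for name in state_dicts[0]
    }
```

With weights `[1.0, 0]` the average collapses to the first model's tensors; the configuration below uses exactly that to pin the first and last layers to Miqu.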
### Models Merged
The following models were included in the merge:
* [Undi95/Miqu-70B-Alpaca-DPO](https://huggingface.co/Undi95/Miqu-70B-Alpaca-DPO)
* [Sao10K/Euryale-1.3-L2-70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: linear
parameters:
  weight: 1.0
slices:
- sources:
  - model: Undi95/Miqu-70B-Alpaca-DPO
    layer_range: [0, 1]
  - model: Sao10K/Euryale-1.3-L2-70B
    layer_range: [0, 1]
    parameters:
      weight: 0
- sources:
  - model: Undi95/Miqu-70B-Alpaca-DPO
    layer_range: [1, 20]
- sources:
  - model: Sao10K/Euryale-1.3-L2-70B
    layer_range: [10, 30]
- sources:
  - model: Undi95/Miqu-70B-Alpaca-DPO
    layer_range: [20, 40]
- sources:
  - model: Sao10K/Euryale-1.3-L2-70B
    layer_range: [30, 50]
- sources:
  - model: Undi95/Miqu-70B-Alpaca-DPO
    layer_range: [40, 60]
- sources:
  - model: Sao10K/Euryale-1.3-L2-70B
    layer_range: [50, 70]
- sources:
  - model: Undi95/Miqu-70B-Alpaca-DPO
    layer_range: [60, 79]
- sources:
  - model: Undi95/Miqu-70B-Alpaca-DPO
    layer_range: [79, 80]
  - model: Sao10K/Euryale-1.3-L2-70B
    layer_range: [79, 80]
    parameters:
      weight: 0
dtype: float16
tokenizer_source: model:Undi95/Miqu-70B-Alpaca-DPO
```
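Every interior slice pulls from a single source, so the linear method there degenerates to copying: the merge is effectively a passthrough-style interleave of Miqu and Euryale layer blocks, with the first and last layers taken from Miqu outright (the weight-0 entries zero out Euryale's contribution after normalization). The slices stack to 1 + 19 + 20 + 20 + 20 + 20 + 20 + 19 + 1 = 140 layers, versus 80 per source model, which is what pushes the parameter count to roughly 120B. To reproduce the merge, save the configuration as `config.yml` and run mergekit's CLI, e.g. `mergekit-yaml config.yml ./Lumiere-120b` (add `--cuda` if a GPU is available).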