vicgalle/franken-SOLAR-18B-v1.0

This is a SOLAR-like model upscaled to ~18B parameters. It is a frankenmerge created with mergekit, interleaving layer blocks from Nous-Hermes-2-SOLAR-10.7B and SOLAR-10.7B-Instruct-v1.0.


Open LLM Leaderboard evaluation results are reported below.

This model shows stronger writing capabilities than the individual SOLAR-10.7B models, especially for role-playing.

Quantized GGUF variants are available at https://huggingface.co/vicgalle/franken-SOLAR-18B-v1.0-GGUF
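If you use the GGUF files, here is a minimal sketch with llama-cpp-python. The filename and the SOLAR-Instruct-style prompt format are assumptions; check the GGUF repository for the actual quantization variants:

from llama_cpp import Llama

# The exact filename is an assumption: the GGUF repo lists the real
# quantization variants (Q4_K_M, Q5_K_M, etc.).
llm = Llama(model_path="./franken-SOLAR-18B-v1.0.Q4_K_M.gguf", n_ctx=4096)

# Prompt format assumed to follow the SOLAR-Instruct style; adjust as needed.
out = llm("### User:\nWrite a short scene set in a haunted library.\n\n### Assistant:\n",
          max_tokens=256, temperature=0.8)
print(out["choices"][0]["text"])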

Merge Details

Merge Method

This model was merged using the passthrough merge method, which stacks the specified layer slices directly rather than interpolating weights.

Models Merged

The following models were included in the merge:

- NousResearch/Nous-Hermes-2-SOLAR-10.7B
- upstage/SOLAR-10.7B-Instruct-v1.0

Configuration

The following YAML configuration was used to produce this model:

slices:
  - sources:
    - model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
      layer_range: [0, 12]
  - sources:
    - model: upstage/SOLAR-10.7B-Instruct-v1.0
      layer_range: [6, 18]
  - sources:
    - model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
      layer_range: [13, 25]
  - sources:
    - model: upstage/SOLAR-10.7B-Instruct-v1.0
      layer_range: [19, 31]
  - sources:
    - model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
      layer_range: [26, 38]
  - sources:
    - model: upstage/SOLAR-10.7B-Instruct-v1.0
      layer_range: [32, 44]
  - sources:
    - model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
      layer_range: [39, 48]
    
merge_method: passthrough
dtype: float16
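To reproduce the merge, save the configuration above as a YAML file and run mergekit's CLI, e.g. mergekit-yaml config.yml ./output-directory. The slice ranges also explain the final parameter count; a quick sanity check in Python, assuming mergekit's layer_range is end-exclusive like a Python slice:

# Each layer_range [a, b] contributes b - a decoder layers.
slices = [(0, 12), (6, 18), (13, 25), (19, 31), (26, 38), (32, 44), (39, 48)]
total = sum(b - a for a, b in slices)
print(total)  # 81 layers, versus 48 in a single SOLAR-10.7B
# Scaling roughly: 10.7B * 81 / 48 ≈ 18B, consistent with the reported
# 17.9B params (embeddings and head are counted once, hence slightly below 18B).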

Usage

You can load the model and apply its built-in chat template:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("vicgalle/franken-SOLAR-18B-v1.0")
# load_in_4bit=True requires the bitsandbytes library.
model = AutoModelForCausalLM.from_pretrained("vicgalle/franken-SOLAR-18B-v1.0", torch_dtype=torch.float16, load_in_4bit=True)

# SYSTEM_PROMPT and USER_PROMPT are placeholders for your own strings.
conversation = [{'role': 'system', 'content': SYSTEM_PROMPT}, {'role': 'user', 'content': USER_PROMPT}]
prompt = tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, use_cache=True, max_new_tokens=1024, do_sample=True, temperature=0.8)
output_text = tokenizer.decode(outputs[0])
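Passing load_in_4bit=True directly to from_pretrained works, but newer transformers releases prefer an explicit quantization config. An equivalent sketch (still requires bitsandbytes):

from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Same 4-bit loading, spelled as an explicit quantization config.
model = AutoModelForCausalLM.from_pretrained(
    "vicgalle/franken-SOLAR-18B-v1.0",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)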

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 67.03 |
| AI2 Reasoning Challenge (25-Shot) | 65.53 |
| HellaSwag (10-Shot)               | 86.45 |
| MMLU (5-Shot)                     | 63.72 |
| TruthfulQA (0-shot)               | 62.14 |
| Winogrande (5-shot)               | 78.53 |
| GSM8k (5-shot)                    | 45.79 |