
Quantization made by Richard Erkhov.

GitHub

Discord

Request more models

zephyr-neural-chat-frankenmerge11b - bnb 4bits
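For context, a bnb 4-bit checkpoint like this one is typically produced by loading the original weights through transformers with a bitsandbytes quantization config and saving the result. A minimal sketch of that workflow (not necessarily the exact procedure used for this repo; imports are done lazily so the function can be read without the heavy dependencies installed):

```python
def quantize_to_4bit(source_id: str, output_dir: str) -> str:
    """Load a full-precision checkpoint in bnb 4-bit and save the result."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,                     # quantize linear layers to 4-bit
        bnb_4bit_compute_dtype=torch.float16,  # run matmuls in fp16
    )
    model = AutoModelForCausalLM.from_pretrained(
        source_id, quantization_config=bnb_config, device_map="auto"
    )
    model.save_pretrained(output_dir)          # serialized as Safetensors
    AutoTokenizer.from_pretrained(source_id).save_pretrained(output_dir)
    return output_dir
```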

Original model description:

license: apache-2.0

An 11B-parameter frankenmerge of HuggingFaceH4/zephyr-7b-beta and Intel/neural-chat-7b-v3-1.

Merged with the following conditions (via mergekit, available on GitHub):

```yaml
slices:
  - sources:
      - model: Intel/neural-chat-7b-v3-1
        layer_range: [0, 8]
  - sources:
      - model: HuggingFaceH4/zephyr-7b-beta
        layer_range: [4, 12]
  - sources:
      - model: Intel/neural-chat-7b-v3-1
        layer_range: [9, 16]
  - sources:
      - model: HuggingFaceH4/zephyr-7b-beta
        layer_range: [13, 20]
  - sources:
      - model: Intel/neural-chat-7b-v3-1
        layer_range: [17, 24]
  - sources:
      - model: HuggingFaceH4/zephyr-7b-beta
        layer_range: [21, 28]
  - sources:
      - model: Intel/neural-chat-7b-v3-1
        layer_range: [25, 32]
merge_method: passthrough
```
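The config above interleaves alternating slices of the two source models. Assuming mergekit's half-open `layer_range` convention (`[start, end]` selects layers `start` through `end - 1`), a quick sanity check of the merged depth:

```python
# Layer slices from the merge config above: (model, start, end), end exclusive.
slices = [
    ("Intel/neural-chat-7b-v3-1", 0, 8),
    ("HuggingFaceH4/zephyr-7b-beta", 4, 12),
    ("Intel/neural-chat-7b-v3-1", 9, 16),
    ("HuggingFaceH4/zephyr-7b-beta", 13, 20),
    ("Intel/neural-chat-7b-v3-1", 17, 24),
    ("HuggingFaceH4/zephyr-7b-beta", 21, 28),
    ("Intel/neural-chat-7b-v3-1", 25, 32),
]

total_layers = sum(end - start for _, start, end in slices)
print(total_layers)  # 51 layers stacked, vs. 32 in each 7B source model
```

Because adjacent slices overlap in layer index (e.g. indices 4-7 appear in both the first neural-chat slice and the first zephyr slice), the passthrough stack is deeper than either source model, which is what makes the merge larger than 7B.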

Model size: 6B params (Safetensors)
Tensor types: F32, FP16, U8

This model can be loaded on the Inference API (serverless).
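The 4-bit checkpoint can also be loaded locally with transformers and bitsandbytes. A minimal sketch: the repo id below is a hypothetical placeholder inferred from the title, so substitute the actual Hub path of this repo, and the import is lazy so the function can be inspected without the dependencies installed:

```python
def load_quantized(model_id: str):
    """Load the pre-quantized 4-bit checkpoint.

    No explicit BitsAndBytesConfig is needed: the quantization settings
    are stored in the checkpoint's config, and device_map="auto" places
    the 4-bit weights on available GPUs.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    return tokenizer, model


# Hypothetical repo id -- replace with the actual Hub path of this repo.
MODEL_ID = "RichardErkhov/zephyr-neural-chat-frankenmerge11b-4bits"

# Usage (requires a GPU and bitsandbytes installed):
#   tokenizer, model = load_quantized(MODEL_ID)
#   inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
#   out = model.generate(**inputs, max_new_tokens=32)
#   print(tokenizer.decode(out[0], skip_special_tokens=True))
```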