GGUF quants of an extension of my effort to create Eileithyia-20B. This model was made by following the recipe below, inverting it, and then SLERPing the two resulting models back together at 0.5, hopefully fusing them into one block for use with Harmonia.
slices:
  - sources:
      - model: microsoft/Orca-2-13b
        layer_range: [0, 16]
  - sources:
      - model: athirdpath/Eileithyia-13B
        layer_range: [8, 24]
  - sources:
      - model: microsoft/Orca-2-13b
        layer_range: [17, 32]
  - sources:
      - model: athirdpath/Eileithyia-13B
        layer_range: [25, 40]
merge_method: passthrough
dtype: float16
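
For illustration only, a minimal sketch of the follow-up steps, assuming "inverting" means swapping which parent model contributes each slice, and that the two passthrough stacks are saved locally under the hypothetical paths ./Eileithyia-20B-forward and ./Eileithyia-20B-inverted; these are not necessarily the exact configs used.

# Inverted recipe (assumption: parent models swapped per slice)
slices:
  - sources:
      - model: athirdpath/Eileithyia-13B
        layer_range: [0, 16]
  - sources:
      - model: microsoft/Orca-2-13b
        layer_range: [8, 24]
  - sources:
      - model: athirdpath/Eileithyia-13B
        layer_range: [17, 32]
  - sources:
      - model: microsoft/Orca-2-13b
        layer_range: [25, 40]
merge_method: passthrough
dtype: float16

# SLERP the two 62-layer stacks (16 + 16 + 15 + 15 layers) back together at t = 0.5
slices:
  - sources:
      - model: ./Eileithyia-20B-forward    # hypothetical local path
        layer_range: [0, 62]
      - model: ./Eileithyia-20B-inverted   # hypothetical local path
        layer_range: [0, 62]
merge_method: slerp
base_model: ./Eileithyia-20B-forward
parameters:
  t: 0.5
dtype: float16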
Thanks to Undi95 for pioneering the recipe.