---
license: apache-2.0
---

G2-MS-Nyxora-27b Data Card

G2-MS-Nyxora-27b

Model Image

Now that the cute anime girl has your attention.

Creator: SteelSkull

About G2-MS-Nyxora-27b:

Model Name Legend:
"G2" = Gemma 2
"MS" = Model_stock

This model represents an experimental foray into 27b models. Feedback is welcome for further improvements.

G2-MS-Nyxora-27b combines multiple models' strengths to provide a versatile assistant for various tasks, including general use, storytelling, and roleplay (ERP & RP).

The model leans toward the prudish side out of the box due to Google's training, but a good system prompt (provided in the repository files) largely fixes this.

The Model_stock merge method helps keep the merged model focused, coherent, and high-quality.

Quants:

Quants will be added as they are found or as I'm notified of their creation. (If you know of one, create a discussion!)

  • Mradermacher GGUF Quants:
    - Mradermacher/G2-MS-Nyxora-27b-GGUF
    - Mradermacher/G2-MS-Nyxora-27b-i1-GGUF
  • SteelQuants (my personal quants):
    - G2-MS-Nyxora-27b-Q4_K_M-GGUF
    - G2-MS-Nyxora-27b-Q6_K-GGUF
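
If you're new to GGUF quants, below is a minimal sketch of loading one with llama-cpp-python. The repo id and filename are illustrative assumptions; check the quant repositories listed above for the exact names.

    # Minimal sketch: load a GGUF quant with llama-cpp-python.
    # NOTE: repo_id and filename below are illustrative assumptions;
    # use the exact names from the quant repositories listed above.
    from huggingface_hub import hf_hub_download
    from llama_cpp import Llama

    model_path = hf_hub_download(
        repo_id="mradermacher/G2-MS-Nyxora-27b-GGUF",  # assumed repo id
        filename="G2-MS-Nyxora-27b.Q4_K_M.gguf",       # assumed filename
    )

    # Gemma 2 supports an 8192-token context window.
    llm = Llama(model_path=model_path, n_ctx=8192)

    out = llm("Write the opening of a rainy-night detective scene.", max_tokens=256)
    print(out["choices"][0]["text"])
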
Config:

    MODEL_NAME = "G2-MS-Nyxora-27b"
    yaml_config = """
    base_model: google/gemma-2-27b-it
    merge_method: model_stock
    dtype: bfloat16
    models:
      - model: TheDrummer/Big-Tiger-Gemma-27B-v1
      - model: migtissera/Tess-v2.5-Gemma-2-27B-alpha
    """

Template:

    <start_of_turn>user
    {{ if .System }}{{ .System }} {{ end }}{{ .Prompt }}<end_of_turn>
    <start_of_turn>model
    {{ .Response }}<end_of_turn>
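
The template above is the standard Gemma 2 turn format written in Ollama's template syntax. As a small illustrative sketch (not a required step; most inference frameworks apply this formatting for you), here is how the same prompt string could be built by hand:

    # Illustrative only: hand-build a prompt in the Gemma 2 turn format.
    def build_prompt(user_msg: str, system: str = "") -> str:
        # Mirrors the template: system text (if any), a space, then the prompt.
        sys_part = f"{system} " if system else ""
        return (
            "<start_of_turn>user\n"
            f"{sys_part}{user_msg}<end_of_turn>\n"
            "<start_of_turn>model\n"
        )

    print(build_prompt("Describe the merged model in one sentence.",
                       system="You are a helpful assistant."))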

Source Model Details:

TheDrummer/Big-Tiger-Gemma-27B-v1:
A decensored version of the Gemma 2 27B model.

migtissera/Tess-v2.5-Gemma-2-27B-alpha:
The latest state-of-the-art model in the Tess series, Tess-v2.5, offers significant improvements in reasoning, coding, and mathematics. It ranks #1 on the MMLU benchmark among open-weight models and outperforms several frontier closed models.

Gemma-2-27b-it:
A lightweight, state-of-the-art model from Google, well suited for various text generation tasks. Its small size allows deployment in resource-limited environments, fostering AI innovation for all.

Merge Method Details:

Model_stock uses some neat geometric properties of fine-tuned models to compute good weights for linearly interpolating them with the base model.
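
For intuition, here is a rough, simplified sketch of the idea for a single weight tensor: estimate the angle between the fine-tunes' task vectors, derive an interpolation ratio from it, and blend their average back toward the base weights. This is a hand-written illustration based on the Model Stock paper's formula, not mergekit's actual implementation.

    import torch

    def model_stock_layer(base: torch.Tensor, finetunes: list) -> torch.Tensor:
        """Simplified sketch of Model Stock for one weight tensor."""
        k = len(finetunes)
        # Task vectors: each fine-tune's offset from the base weights.
        deltas = [(w - base).flatten() for w in finetunes]

        # Average pairwise cosine similarity ~ cos(theta) between fine-tunes.
        cos_vals = []
        for i in range(k):
            for j in range(i + 1, k):
                cos_vals.append(torch.nn.functional.cosine_similarity(
                    deltas[i], deltas[j], dim=0))
        cos_theta = torch.stack(cos_vals).mean().clamp(min=0.0)

        # Interpolation ratio from the Model Stock paper:
        # t = k*cos(theta) / ((k-1)*cos(theta) + 1)
        t = (k * cos_theta) / ((k - 1) * cos_theta + 1)

        # Blend the fine-tune average back toward the base weights.
        avg = torch.stack(finetunes).mean(dim=0)
        return t * avg + (1 - t) * base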

I've had a few people ask about donations, so here's a link: