
QuantFactory/Llama3.1-8B-ShiningValiant2-GGUF

This is a quantized version of ValiantLabs/Llama3.1-8B-ShiningValiant2, created using llama.cpp.
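To try one of the quants locally, the files can be fetched with the Hugging Face CLI and run with llama.cpp's CLI. A minimal sketch — the .gguf filename below is illustrative; use an actual filename from this repo's file list, and note that `llama-cli` is the binary name in recent llama.cpp builds:

```shell
# Install the Hugging Face CLI (assumes Python/pip is available).
pip install -U "huggingface_hub[cli]"

# Download one quant from the repo; replace the filename with a real
# .gguf file from the repository's file list.
huggingface-cli download QuantFactory/Llama3.1-8B-ShiningValiant2-GGUF \
  Llama3.1-8B-ShiningValiant2.Q4_K_M.gguf --local-dir .

# Run an interactive chat with llama.cpp (built from source or installed).
llama-cli -m Llama3.1-8B-ShiningValiant2.Q4_K_M.gguf \
  -p "You are Shining Valiant, a highly capable chat AI." -cnv
```

Lower-bit quants (2-bit, 3-bit) trade answer quality for memory; 4-bit and above are the usual starting point.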

Original Model Card


Shining Valiant 2 is a chat model built on Llama 3.1 8b, finetuned on our data for friendship, insight, knowledge, and enthusiasm.

  • Finetuned on meta-llama/Meta-Llama-3.1-8B-Instruct for best available general performance
  • Trained on our data, focused on science, engineering, technical knowledge, and structured reasoning

Version

This is the 2024-08-06 release of Shining Valiant 2 for Llama 3.1 8b.

Our newest dataset improves specialist knowledge and response consistency.

Help us and recommend Shining Valiant 2 to your friends!

Prompting Guide

Shining Valiant 2 uses the Llama 3.1 Instruct prompt format. The example script below can be used as a starting point for general chat:

```python
import transformers
import torch

model_id = "ValiantLabs/Llama3.1-8B-ShiningValiant2"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are Shining Valiant, a highly capable chat AI."},
    {"role": "user", "content": "Describe the role of transformation matrices in 3D graphics."},
]

outputs = pipeline(
    messages,
    max_new_tokens=1024,
)

print(outputs[0]["generated_text"][-1])
```
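For reference, the prompt string that the Llama 3.1 Instruct chat template produces under the hood looks roughly like the sketch below. `format_llama31_chat` is an illustrative helper, not a transformers API — the pipeline applies the real chat template automatically:

```python
# Sketch of the Llama 3.1 Instruct prompt format, using the special
# tokens from the Llama 3 tokenizer. Each message becomes a header
# block plus content, terminated by <|eot_id|>.
def format_llama31_chat(messages):
    prompt = "<|begin_of_text|>"
    for m in messages:
        prompt += (
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Open an assistant header to cue the model to generate its reply.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

messages = [
    {"role": "system", "content": "You are Shining Valiant, a highly capable chat AI."},
    {"role": "user", "content": "Hello!"},
]
print(format_llama31_chat(messages))
```

This is only for understanding what the pipeline sends to the model; in practice, pass the `messages` list directly as in the script above.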

The Model

Shining Valiant 2 is built on top of Llama 3.1 8b Instruct.

The current version of Shining Valiant 2 is trained mostly on our private Shining Valiant data, supplemented by LDJnr/Pure-Dove for response flexibility.

Our private data adds specialist knowledge and Shining Valiant's personality: she's friendly, enthusiastic, insightful, knowledgeable, and loves to learn! Magical.


Shining Valiant 2 is created by Valiant Labs.

Check out our HuggingFace page for Fireplace 2 and our other models!

Follow us on X for updates on our models!

We care about open source, for everyone to use.

We encourage others to finetune further from our models.

GGUF

Model size: 8.03B params
Architecture: llama
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
