
Nandine-7b


This is Nandine-7b, rated 87.47/100 by GPT-4 on a set of 30 synthetic prompts (themselves generated by GPT-4).

Nandine-7b is a merge of the following models using LazyMergekit:

- senseable/Westlake-7B
- Guilherme34/Samantha-v2
- uukuguy/speechless-mistral-six-in-one-7b

Nandine-7b represents a harmonious amalgamation of narrative skill, empathetic interaction, intellectual depth, and eloquent communication.

OpenLLM Benchmark

| Model | Average ⬆️ | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|---|---|---|---|---|---|---|---|
| sethuiyer/Nandine-7b | 71.47 | 69.28 | 87.01 | 64.83 | 62.10 | 83.19 | 62.40 |

Nous Benchmark

| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|---|---|---|---|---|---|
| Nandine-7b | 43.54 | 76.41 | 61.73 | 45.27 | 56.74 |

For more details, refer here

Pros:

  1. Strong Narrative Skills: Excels in storytelling, creating engaging and imaginative narratives.
  2. Accurate Information Delivery: Provides factual and detailed information across various topics.
  3. Comprehensive Analysis: Capable of well-rounded discussions on complex and ethical topics.
  4. Emotional Intelligence: Shows empathy and understanding in responses requiring emotional sensitivity.
  5. Clarity and Structure: Maintains clear and well-structured communication.

Cons:

  1. Language Translation Limitations: Challenges in providing fluent and natural translations.
  2. Incomplete Problem Solving: Some logical or mathematical problems are not solved accurately.
  3. Lack of Depth in Certain Areas: Some responses would benefit from deeper exploration to give a more comprehensive picture.
  4. Occasional Imbalance in Historical Context: Some historical explanations could be more balanced.
  5. Room for Enhanced Creativity: While creative storytelling is strong, there's potential for more varied responses in hypothetical scenarios.

Intended Use: Ideal for users seeking a versatile AI companion for creative writing, thoughtful discussions, and general assistance.

🧩 Configuration

```yaml
models:
  - model: senseable/Westlake-7B
    parameters:
      weight: 0.55
      density: 0.6
  - model: Guilherme34/Samantha-v2
    parameters:
      weight: 0.10
      density: 0.3
  - model: uukuguy/speechless-mistral-six-in-one-7b
    parameters:
      weight: 0.35
      density: 0.6
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  int8_mask: true
dtype: bfloat16
```
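
To reproduce the merge locally, a minimal sketch using the mergekit CLI (which LazyMergekit wraps) is shown below; it assumes the YAML above is saved as config.yaml, and the output directory name is arbitrary:

```bash
# Install mergekit and run the DARE-TIES merge described in config.yaml.
pip install -qU mergekit
mergekit-yaml config.yaml ./Nandine-7b
```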

💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "sethuiyer/Nandine-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the conversation with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Build a text-generation pipeline in half precision, spread across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```

GGUF

GGUF files are available at Nandine-7b-GGUF.
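
As a quick sketch, the GGUF files can be run with llama-cpp-python; the quantization filename below is hypothetical, so substitute whichever .gguf file you downloaded:

```python
# Minimal llama-cpp-python example (pip install llama-cpp-python).
# The filename nandine-7b.Q4_K_M.gguf is an assumption -- use the file you downloaded.
from llama_cpp import Llama

llm = Llama(model_path="./nandine-7b.Q4_K_M.gguf", n_ctx=4096)
out = llm("What is a large language model?", max_tokens=256, temperature=0.7)
print(out["choices"][0]["text"])
```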

Ollama

Nandine is now available on Ollama. You can use it by running `ollama run stuehieyr/nandine` in your terminal. If you have limited computing resources, check out this video to learn how to run it on a Google Colab backend.
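
For programmatic access, a minimal sketch against the local Ollama HTTP API (served on port 11434 by default) is shown below; it uses Ollama's standard /api/generate endpoint and assumes the model has already been pulled:

```python
# Query a locally running Ollama server; assumes `ollama run stuehieyr/nandine`
# (or `ollama pull`) has already fetched the model.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "stuehieyr/nandine", "prompt": "What is a large language model?", "stream": False},
)
print(resp.json()["response"])
```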

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 71.47 |
| AI2 Reasoning Challenge (25-Shot) | 69.28 |
| HellaSwag (10-Shot) | 87.01 |
| MMLU (5-Shot) | 64.83 |
| TruthfulQA (0-shot) | 62.10 |
| Winogrande (5-shot) | 83.19 |
| GSM8k (5-shot) | 62.40 |