
Quantization made by Richard Erkhov.

Github

Discord

Request more models

L-MChat-7b - GGUF

| Name | Quant method | Size |
|------|--------------|------|
| L-MChat-7b.Q2_K.gguf | Q2_K | 2.53GB |
| L-MChat-7b.IQ3_XS.gguf | IQ3_XS | 2.81GB |
| L-MChat-7b.IQ3_S.gguf | IQ3_S | 2.96GB |
| L-MChat-7b.Q3_K_S.gguf | Q3_K_S | 2.95GB |
| L-MChat-7b.IQ3_M.gguf | IQ3_M | 3.06GB |
| L-MChat-7b.Q3_K.gguf | Q3_K | 3.28GB |
| L-MChat-7b.Q3_K_M.gguf | Q3_K_M | 3.28GB |
| L-MChat-7b.Q3_K_L.gguf | Q3_K_L | 3.56GB |
| L-MChat-7b.IQ4_XS.gguf | IQ4_XS | 3.67GB |
| L-MChat-7b.Q4_0.gguf | Q4_0 | 3.83GB |
| L-MChat-7b.IQ4_NL.gguf | IQ4_NL | 3.87GB |
| L-MChat-7b.Q4_K_S.gguf | Q4_K_S | 3.86GB |
| L-MChat-7b.Q4_K.gguf | Q4_K | 4.07GB |
| L-MChat-7b.Q4_K_M.gguf | Q4_K_M | 4.07GB |
| L-MChat-7b.Q4_1.gguf | Q4_1 | 4.24GB |
| L-MChat-7b.Q5_0.gguf | Q5_0 | 4.65GB |
| L-MChat-7b.Q5_K_S.gguf | Q5_K_S | 4.65GB |
| L-MChat-7b.Q5_K.gguf | Q5_K | 4.78GB |
| L-MChat-7b.Q5_K_M.gguf | Q5_K_M | 4.78GB |
| L-MChat-7b.Q5_1.gguf | Q5_1 | 5.07GB |
| L-MChat-7b.Q6_K.gguf | Q6_K | 5.53GB |
| L-MChat-7b.Q8_0.gguf | Q8_0 | 7.17GB |
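As a rough rule of thumb, you can estimate the effective bits per weight of each file from its size and the model's 7.24B parameter count (the figure reported on this card). A minimal sketch; real GGUF files also carry metadata and a few higher-precision tensors, so these numbers are approximations:

```python
# Approximate average bits stored per weight for a GGUF file,
# given its size in GB and the model's parameter count.
N_PARAMS = 7.24e9  # parameter count reported on this card

def bits_per_weight(size_gb: float, n_params: float = N_PARAMS) -> float:
    """Estimate average bits per weight: total bits / parameter count."""
    return size_gb * 1e9 * 8 / n_params

# The Q4_K_M file (4.07GB) works out to roughly 4.5 bits per weight,
# and Q8_0 (7.17GB) to roughly 7.9 bits.
print(round(bits_per_weight(4.07), 1))  # ~4.5
print(round(bits_per_weight(7.17), 1))  # ~7.9
```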

Original model description:

```yaml
license: apache-2.0
tags:
- merge
- mergekit
- Nexusflow/Starling-LM-7B-beta
- FuseAI/FuseChat-7B-VaRM
base_model:
- Nexusflow/Starling-LM-7B-beta
- FuseAI/FuseChat-7B-VaRM
model-index:
- name: L-MChat-7b
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 65.61
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Artples/L-MChat-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 84.59
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Artples/L-MChat-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 65.44
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Artples/L-MChat-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 50.94
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Artples/L-MChat-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 81.37
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Artples/L-MChat-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 69.45
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Artples/L-MChat-7b
      name: Open LLM Leaderboard
```

L-MChat-7b


L-MChat-7b is a merge of the following models:

* Nexusflow/Starling-LM-7B-beta
* FuseAI/FuseChat-7B-VaRM

Configuration

```yaml
slices:
  - sources:
      - model: Nexusflow/Starling-LM-7B-beta
        layer_range: [0, 32]
      - model: FuseAI/FuseChat-7B-VaRM
        layer_range: [0, 32]
merge_method: slerp
base_model: FuseAI/FuseChat-7B-VaRM
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
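In this configuration, each `t` list gives five anchor values that are interpolated across the 32 layers, and each pair of tensors is then blended by spherical linear interpolation (SLERP), where `t=0` keeps the first endpoint and `t=1` the second. The following is a minimal sketch of both steps, assuming a piecewise-linear reading of the anchor list; mergekit's actual implementation differs in details:

```python
import math

def layer_t(anchors, layer, n_layers):
    """Piecewise-linearly interpolate the anchor list across layers 0..n_layers-1."""
    pos = layer / (n_layers - 1) * (len(anchors) - 1)
    i = min(int(pos), len(anchors) - 2)
    frac = pos - i
    return anchors[i] * (1 - frac) + anchors[i + 1] * frac

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two flat vectors of floats."""
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    dot = sum(x * y for x, y in zip(a, b)) / (na * nb)
    dot = max(-1.0, min(1.0, dot))  # guard acos against rounding
    theta = math.acos(dot)
    if theta < eps:  # nearly parallel vectors: fall back to plain lerp
        return [x * (1 - t) + y * t for x, y in zip(a, b)]
    s = math.sin(theta)
    w0 = math.sin((1 - t) * theta) / s
    w1 = math.sin(t * theta) / s
    return [x * w0 + y * w1 for x, y in zip(a, b)]

# self_attn schedule: the first layer uses t=0, the last t=1,
# so attention weights shift between the two parents with depth.
print(layer_t([0, 0.5, 0.3, 0.7, 1], 0, 32))   # 0.0
print(layer_t([0, 0.5, 0.3, 0.7, 1], 31, 32))  # 1.0
print(slerp(0.5, [1.0, 0.0], [0.0, 1.0]))      # midpoint on the unit circle
```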

Usage

```shell
pip install -qU transformers accelerate
```

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "Artples/L-MChat-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```

License

Apache 2.0, but you cannot use this model to compete directly with OpenAI.

How?

This model was merged using LazyMergekit.

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|--------|-------|
| Avg. | 69.57 |
| AI2 Reasoning Challenge (25-Shot) | 65.61 |
| HellaSwag (10-Shot) | 84.59 |
| MMLU (5-Shot) | 65.44 |
| TruthfulQA (0-shot) | 50.94 |
| Winogrande (5-shot) | 81.37 |
| GSM8k (5-shot) | 69.45 |
Model size: 7.24B params (architecture: llama)