---
base_model: GeneZC/MiniMA-3B
datasets:
  - EleutherAI/pile
  - togethercomputer/RedPajama-Data-1T
  - p208p2002/wudao
inference: false
language:
  - en
  - zh
library_name: transformers
license: apache-2.0
model_creator: GeneZC
model_name: MiniMA-3B
pipeline_tag: text-generation
quantized_by: afrideva
tags:
  - gguf
  - ggml
  - quantized
  - q2_k
  - q3_k_m
  - q4_k_m
  - q5_k_m
  - q6_k
  - q8_0
---

# GeneZC/MiniMA-3B-GGUF

Quantized GGUF model files for MiniMA-3B from GeneZC.

| Name | Quant method | Size |
| ---- | ---- | ---- |
| minima-3b.q2_k.gguf | q2_k | 1.30 GB |
| minima-3b.q3_k_m.gguf | q3_k_m | 1.51 GB |
| minima-3b.q4_k_m.gguf | q4_k_m | 1.85 GB |
| minima-3b.q5_k_m.gguf | q5_k_m | 2.15 GB |
| minima-3b.q6_k.gguf | q6_k | 2.48 GB |
| minima-3b.q8_0.gguf | q8_0 | 3.21 GB |
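
These files can be loaded with any GGUF-compatible runtime. As a minimal sketch using the llama-cpp-python bindings (the file path and sampling settings below are illustrative assumptions, not part of this repo):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Load one of the quantized files listed above.
# The path is an assumption; point it at wherever you downloaded the file.
llm = Llama(model_path="minima-3b.q4_k_m.gguf", n_ctx=2048)

out = llm(
    "Question: 2 + 2 = ?\nAnswer:",
    max_tokens=16,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```

Smaller quants (q2_k, q3_k_m) trade answer quality for memory; q8_0 is closest to the original fp16 weights.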

## Original Model Card:

## MiniMA-3B

πŸ“‘ arXiv | πŸ€— HuggingFace-MiniMA | πŸ€— HuggingFace-MiniChat | πŸ€– ModelScope-MiniMA | πŸ€– ModelScope-MiniChat

❗ This model is derived from LLaMA2 and must therefore comply with the LLaMA2 LICENSE.

A language model distilled from an adapted version of LLaMA2-7B, following "Towards the Law of Capacity Gap in Distilling Language Models".

It establishes a new compute-performance Pareto frontier.
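
The paper describes the exact distillation recipe; purely as a rough illustration of logit-level distillation (a generic sketch, not the authors' training code), the student is typically trained to match the teacher's softened output distribution:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """Generic KL-based logit distillation. Illustrative only: the
    temperature and exact objective used for MiniMA may differ."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2
```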

*(Figure: teaser_a, the compute-performance Pareto frontier.)*

The following example snippet shows how to use MiniMA-3B:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# MiniMA
tokenizer = AutoTokenizer.from_pretrained("GeneZC/MiniMA-3B", use_fast=False)
# GPU.
model = AutoModelForCausalLM.from_pretrained("GeneZC/MiniMA-3B", use_cache=True, device_map="auto", torch_dtype=torch.float16).eval()
# CPU.
# model = AutoModelForCausalLM.from_pretrained("GeneZC/MiniMA-3B", use_cache=True, device_map="cpu", torch_dtype=torch.float16).eval()

# Few-shot prompt: three solved truth-teller puzzles, then the query.
prompt = "Question: Sherrie tells the truth. Vernell says Sherrie tells the truth. Alexis says Vernell lies. Michaela says Alexis tells the truth. Elanor says Michaela tells the truth. Does Elanor tell the truth?\nAnswer: No\n\nQuestion: Kristian lies. Sherrie says Kristian lies. Delbert says Sherrie lies. Jerry says Delbert tells the truth. Shalonda says Jerry tells the truth. Does Shalonda tell the truth?\nAnswer: No\n\nQuestion: Vina tells the truth. Helene says Vina lies. Kandi says Helene tells the truth. Jamey says Kandi lies. Ka says Jamey lies. Does Ka tell the truth?\nAnswer: No\n\nQuestion: Christie tells the truth. Ka says Christie tells the truth. Delbert says Ka lies. Leda says Delbert tells the truth. Lorine says Leda tells the truth. Does Lorine tell the truth?\nAnswer:"
input_ids = tokenizer([prompt]).input_ids
output_ids = model.generate(
    torch.as_tensor(input_ids).cuda(),
    do_sample=True,
    temperature=0.7,
    max_new_tokens=1024,
)
# Strip the prompt tokens, keeping only the newly generated answer.
output_ids = output_ids[0][len(input_ids[0]):]
output = tokenizer.decode(output_ids, skip_special_tokens=True).strip()
# output: "No"
```
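
Note that with `do_sample=True` and temperature 0.7 the answer can vary between runs; for a reproducible answer on a constrained prompt like this one, greedy decoding is a reasonable alternative (a sketch using the same `model` and `input_ids` as above):

```python
# Greedy (deterministic) decoding; a short max_new_tokens suffices
# for the single-word answer expected here.
output_ids = model.generate(
    torch.as_tensor(input_ids).cuda(),
    do_sample=False,
    max_new_tokens=16,
)
```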

## Bibtex

```bibtex
@article{zhang2023law,
    title={Towards the Law of Capacity Gap in Distilling Language Models},
    author={Zhang, Chen and Song, Dawei and Ye, Zheyu and Gao, Yan},
    year={2023},
    url={}
}
```