---
license: llama2
datasets:
  - garage-bAInd/Open-Platypus
  - ehartford/dolphin
  - psmathur/orca_mini_v1_dataset
  - psmathur/WizardLM_Orca
  - psmathur/alpaca_orca
  - psmathur/dolly-v2_orca
  - tatsu-lab/alpaca
  - databricks/databricks-dolly-15k
  - WizardLM/WizardLM_evol_instruct_V2_196k
language:
  - en
library_name: transformers
pipeline_tag: text-generation
duplicated_from: psmathur/model_007_13b_v2
---

model-007-2-13b

A modified fork of psmathur/model_007_13b_v2 prepared for training with the Hugging Face Transformers library.

Links

Original model: psmathur/model_007_13b_v2

Sharded model (~8 GB peak RAM usage during loading): polymer/model-007-2-13b-sharded
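The sharded repository splits the checkpoint into smaller files so that loading needs less peak RAM. A minimal loading sketch (both `low_cpu_mem_usage=True` and `device_map="auto"` require the `accelerate` package):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the sharded fork; the smaller shards keep peak RAM during loading around ~8 GB.
tokenizer = AutoTokenizer.from_pretrained("polymer/model-007-2-13b-sharded")
model = AutoModelForCausalLM.from_pretrained(
    "polymer/model-007-2-13b-sharded",
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,  # stream weights shard by shard instead of materializing them twice
    device_map="auto",       # place layers on the available device(s)
)
```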

Original model card

The model card from the original repository:

model_007_13b_v2

A hybrid (explain + instruct) style Llama2-13b model. Please check the examples below for both prompt styles. Here is the list of datasets used:

  • Open-Platypus
  • Alpaca
  • WizardLM
  • Dolly-V2
  • Dolphin Samples (~200K)
  • Orca_minis_v1
  • Alpaca_orca
  • WizardLM_orca
  • Dolly-V2_orca

P.S. If you're interested in collaborating, please connect with me at www.linkedin.com/in/pankajam.


quantized versions


license disclaimer:

This model is bound by the license and usage restrictions of the original Llama-2 model and comes with no warranty or guarantees of any kind.


Evaluation

We evaluated model_007_13b_v2 on a wide range of tasks using the Language Model Evaluation Harness from EleutherAI.

Here are the results on the metrics used by the HuggingFaceH4 Open LLM Leaderboard:

| Task          | Metric   | Value        | Stderr |
|---------------|----------|--------------|--------|
| arc_challenge | acc_norm | 0.6314       | 0.0141 |
| hellaswag     | acc_norm | 0.8242       | 0.0038 |
| mmlu          | acc_norm | 0.5637       | 0.0351 |
| truthfulqa_mc | mc2      | 0.5127       | 0.0157 |
| Total Average | -        | 0.6329877193 | -      |
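For context, the harness exposes a `simple_evaluate` entry point that can rerun individual tasks. The sketch below is illustrative only: the exact harness version, task names, and few-shot counts behind the table above are not stated here, so the settings shown follow the Open LLM Leaderboard's 2023 configuration (25-shot ARC, 10-shot HellaSwag, 0-shot TruthfulQA; MMLU is split across the `hendrycksTest-*` subject tasks in older harness versions and averaged separately).

```python
from lm_eval import evaluator

# Illustrative settings modeled on the 2023 Open LLM Leaderboard configuration;
# the harness version actually used for the table above is not specified here.
tasks_and_shots = {
    "arc_challenge": 25,
    "hellaswag": 10,
    "truthfulqa_mc": 0,
}

for task, shots in tasks_and_shots.items():
    results = evaluator.simple_evaluate(
        model="hf-causal",
        model_args="pretrained=psmathur/model_007_13b_v2",
        tasks=[task],
        num_fewshot=shots,
        batch_size=4,
    )
    print(task, results["results"][task])
```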

Example Usage

Here is the Orca prompt format

```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.

### User:
Tell me about Orcas.

### Assistant:
```

The code below shows how to use this model with the Orca prompt format:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("psmathur/model_007_13b_v2")
model = AutoModelForCausalLM.from_pretrained(
    "psmathur/model_007_13b_v2",
    torch_dtype=torch.float16,
    load_in_8bit=True,       # requires the bitsandbytes package
    low_cpu_mem_usage=True,
    device_map="auto",
)

# Build the Orca-style prompt (system + user turns).
system_prompt = "### System:\nYou are an AI assistant that follows instruction extremely well. Help as much as you can.\n\n"
instruction = "Tell me about Orcas."
prompt = f"{system_prompt}### User:\n{instruction}\n\n### Assistant:\n"

# Generate text.
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=4096)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```
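Note that passing `load_in_8bit=True` directly to `from_pretrained` relies on the `bitsandbytes` package, and newer Transformers releases route the same option through a `BitsAndBytesConfig` object instead. A hedged equivalent of the load above:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Same 8-bit load expressed via the quantization config object.
quant_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "psmathur/model_007_13b_v2",
    quantization_config=quant_config,
    torch_dtype=torch.float16,  # dtype used for the modules that stay unquantized
    device_map="auto",
)
```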

Here is the Alpaca prompt format


```
### User:
Tell me about Alpacas.

### Assistant:
```

The code below shows how to use this model with the Alpaca prompt format:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("psmathur/model_007_13b_v2")
model = AutoModelForCausalLM.from_pretrained(
    "psmathur/model_007_13b_v2",
    torch_dtype=torch.float16,
    load_in_8bit=True,       # requires the bitsandbytes package
    low_cpu_mem_usage=True,
    device_map="auto",
)

# Build the Alpaca-style prompt (user turn only, no system prompt).
instruction = "Tell me about Alpacas."
prompt = f"### User:\n{instruction}\n\n### Assistant:\n"

# Generate text.
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=4096)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```
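Generation can also be routed through the Transformers text-generation pipeline, which bundles tokenization, generation, and decoding into a single call. A minimal sketch reusing the `model` and `tokenizer` objects loaded above:

```python
from transformers import pipeline

# Wrap the already-loaded model and tokenizer in a text-generation pipeline.
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

prompt = "### User:\nTell me about Alpacas.\n\n### Assistant:\n"
result = generator(prompt, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=512)

# Each result dict contains the generated text (the prompt is included by default).
print(result[0]["generated_text"])
```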

Limitations & Biases:

While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.

Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content.

Exercise caution and cross-check information when necessary.


Citation:

Please kindly cite using the following BibTeX:

@misc{model_007_13b_v2,
  author = {Pankaj Mathur},
  title = {model_007_13b_v2: A hybrid (explain + instruct) style Llama2-13b model},
  year = {2023},
  publisher = {HuggingFace},
  journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/psmathur/model_007_13b_v2}},
}
@misc{mukherjee2023orca,
      title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4}, 
      author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
      year={2023},
      eprint={2306.02707},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
@software{touvron2023llama2,
  title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
  author={Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava,
 Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann,
Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov,
Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith,
Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom},
  year={2023}
}