---
language:
  - en
tags:
  - upstage
  - llama-2
  - instruct
  - instruction
pipeline_tag: text-generation
---

# LLaMa-2-70b-instruct-v2 model card

## Model Details

## Dataset Details

### Used Datasets

- Orca-style dataset
- Alpaca-style dataset

### Prompt Template

```
### System:
{System}

### User:
{User}

### Assistant:
{Assistant}
```
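
For illustration, here is a minimal sketch of filling this template in Python (the `build_prompt` helper is ours, not part of the model's API):

```python
def build_prompt(user_message: str, system_message: str = "") -> str:
    """Fill in the prompt template above; the System block is optional."""
    prompt = f"### System:\n{system_message}\n\n" if system_message else ""
    return prompt + f"### User:\n{user_message}\n\n### Assistant:\n"

# Produces the same prompt string used in the Usage section below
print(build_prompt("Thomas is very healthy, but he has to go to the hospital every day. What could be the reasons?"))
```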

## Usage

Tested on an A100 80GB GPU.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("upstage/Llama-2-70b-instruct-v2")
model = AutoModelForCausalLM.from_pretrained(
    "upstage/Llama-2-70b-instruct-v2",
    device_map="auto",
    torch_dtype=torch.float16,
    load_in_8bit=True,  # 8-bit quantization so the model fits on a single A100 80GB
    rope_scaling={"type": "dynamic", "factor": 2},  # longer inputs possible via dynamic RoPE scaling
)

prompt = "### User:\nThomas is very healthy, but he has to go to the hospital every day. What could be the reasons?\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Some tokenizer versions emit token_type_ids, which model.generate does not accept;
# pop with a default so this also works when the key is absent
inputs.pop("token_type_ids", None)

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
output = model.generate(**inputs, streamer=streamer, use_cache=True, max_new_tokens=4096)  # generate expects an integer limit
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
```

Our model can handle more than 10k input tokens thanks to the `rope_scaling` option.
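
For example, continuing from the snippet above, a long document can be passed in directly. This is a sketch only; `long_document.txt` is a hypothetical input file:

```python
# Sketch: feed a long document through the model; with dynamic RoPE scaling,
# prompts beyond the base 4k context (e.g. >10k tokens) remain usable.
with open("long_document.txt") as f:  # hypothetical file
    document = f.read()

prompt = f"### User:\nSummarize the following document.\n\n{document}\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
inputs.pop("token_type_ids", None)
print(f"prompt length: {inputs['input_ids'].shape[1]} tokens")

output = model.generate(**inputs, use_cache=True, max_new_tokens=512)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```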

## Hardware and Software

## Evaluation Results

### Overview

### Main Results

| Model | H4 (Avg) | ARC | HellaSwag | MMLU | TruthfulQA | MT-Bench |
|---|---|---|---|---|---|---|
| Llama-2-70b-instruct-v2 (Ours, Local Reproduction) | 72.7 | 71.6 | 87.7 | 69.7 | 61.6 | 7.44063 |
| Llama-2-70b-instruct (Ours, Open LLM Leaderboard) | 72.3 | 70.9 | 87.5 | 69.8 | 61.0 | 7.24375 |
| llama-65b-instruct (Ours, Open LLM Leaderboard) | 69.4 | 67.6 | 86.5 | 64.9 | 58.8 | |
| Llama-2-70b-hf | 67.3 | 67.3 | 87.3 | 69.8 | 44.9 | |
| llama-30b-instruct-2048 (Ours, Open LLM Leaderboard) | 67.0 | 64.9 | 84.9 | 61.9 | 56.3 | |
| llama-30b-instruct (Ours, Open LLM Leaderboard) | 65.2 | 62.5 | 86.2 | 59.4 | 52.8 | |
| llama-65b | 64.2 | 63.5 | 86.1 | 63.9 | 43.4 | |
| falcon-40b-instruct | 63.4 | 61.6 | 84.3 | 55.4 | 52.5 | |

### Scripts

- Prepare the evaluation environment (a usage sketch follows the block below):

```bash
# clone the evaluation harness
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
# change into the repository directory
cd lm-evaluation-harness
# check out the specific commit used for evaluation
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
```
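
As a hedged sketch of running one leaderboard task through the harness's Python API of that period: `evaluator.simple_evaluate` and the `hf-causal-experimental` adapter existed in the harness around this commit, but the 25-shot setting for ARC follows Open LLM Leaderboard convention and is our assumption, not something stated in this card.

```python
# Sketch only: assumes the pinned harness commit is installed (pip install -e .)
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal-experimental",  # Hugging Face causal-LM adapter in the harness
    model_args="pretrained=upstage/Llama-2-70b-instruct-v2",
    tasks=["arc_challenge"],  # run each benchmark separately with its own few-shot count
    num_fewshot=25,           # leaderboard convention for ARC (assumption)
    batch_size=1,
)
print(results["results"])
```

Each benchmark (ARC, HellaSwag, MMLU, TruthfulQA) would be run separately with its own few-shot setting.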

## Ethical Issues

### Ethical Considerations

- There were no ethical issues involved, as we did not include the benchmark test set or training set in the model's training process.

## Contact Us

### Why Upstage LLM?

- Upstage's LLM research has yielded remarkable results: our 30B model topped the Open LLM Leaderboard, outperforming models of far larger scale. Recognizing the immense potential of bringing private LLMs to real businesses, we invite you to adopt a private LLM and fine-tune it with your own data. For a seamless and tailored solution, please do not hesitate to contact us.