---
license: apache-2.0
language:
- en
tags:
- llama2
- 100k
- 7b
---
Anima is an LLM supporting a 100K-token input length. It is trained on top of Llama2 7B, so its license allows commercial use.
We carefully curated a long-form QA training dataset with samples ranging from 30k to 100k tokens to train this model, and made a number of memory optimizations so that it scales to 100k tokens.
## How to train/infer?
#### Install dependencies
```bash
# Please update the path of `CUDA_HOME`
export CUDA_HOME=/usr/local/cuda-11.8
pip install transformers==4.31.0
pip install sentencepiece
pip install ninja
pip install flash-attn --no-build-isolation
pip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary
pip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/xentropy
pip install evaluate
pip install git+https://github.com/huggingface/peft.git@v0.4.0
pip install wandb
```
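Before loading the model it can be worth checking that flash-attn and its rotary/xentropy CUDA extensions built correctly against your CUDA toolkit. A minimal, optional sanity check (the extension module names below are how the flash-attention repo typically exposes them; adjust if your version differs):
```python
import torch
import flash_attn

print("CUDA available:", torch.cuda.is_available())
print("flash-attn version:", flash_attn.__version__)

# The rotary and xentropy kernels install as separate extension modules.
for mod in ("rotary_emb", "xentropy_cuda_lib"):
    try:
        __import__(mod)
        print(f"{mod}: OK")
    except ImportError as e:
        print(f"{mod}: missing ({e})")
```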
#### Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
base_model = "lyogavin/Anima-7B-100K"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
base_model,
torch_dtype=torch.float16,
trust_remote_code=True,
device_map="auto",
)
model.eval()
prompt = "Where is the capital of US?"
inputs = tokenizer(prompt, return_tensors="pt")
inputs['input_ids'] = inputs['input_ids'].cuda()
inputs['attention_mask'] = inputs['attention_mask'].cuda()
# Generate
generate_ids = model.generate(**inputs, max_new_tokens=30,
only_last_logit=True, # to save memory
use_cache=False, # disabling the KV cache can save memory if you run into OOM
xentropy=True)
output = tokenizer.batch_decode(generate_ids,
skip_special_tokens=True,
clean_up_tokenization_spaces=False)[0]
```
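For a long-context query the pattern is the same; the sketch below stuffs an entire document into the prompt and truncates it to the 100K window. The file path and prompt template are placeholders, and the `only_last_logit`/`xentropy` kwargs are the custom ones used above:
```python
# Long-document QA: reuses the model/tokenizer loaded above.
# "long_report.txt" is a placeholder; substitute your own document.
with open("long_report.txt") as f:
    document = f.read()

question = "Summarize the key findings of this report."
prompt = f"{document}\n\nQuestion: {question}\nAnswer:"

inputs = tokenizer(prompt, return_tensors="pt",
                   truncation=True, max_length=100_000)
inputs = {k: v.cuda() for k, v in inputs.items()}

with torch.no_grad():
    generate_ids = model.generate(**inputs, max_new_tokens=200,
                                  only_last_logit=True,  # custom kwarg, see above
                                  use_cache=False,       # saves memory on long inputs
                                  xentropy=True)

output = tokenizer.batch_decode(generate_ids,
                                skip_special_tokens=True,
                                clean_up_tokenization_spaces=False)[0]
print(output)
```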
#### Training
```bash
./run_longer_training.sh
```
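The script bundles the full training recipe (the dependency list includes peft, so an adapter-based setup is presumably part of it). As a rough illustration of the kind of memory-saving settings that long-context fine-tuning typically relies on, and not the exact configuration inside `run_longer_training.sh`, a Trainer setup might look like:
```python
# Illustrative only: typical memory-saving knobs for long-context fine-tuning.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./anima-100k-finetune",   # placeholder
    per_device_train_batch_size=1,        # long sequences leave little headroom
    gradient_accumulation_steps=16,       # recover an effective batch size
    gradient_checkpointing=True,          # trade compute for activation memory
    bf16=True,                            # half-precision training
    learning_rate=1e-4,
    num_train_epochs=1,
    logging_steps=10,
)
```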
## Evaluations
There are almost no evaluation datasets designed for 100k-token inputs, so we designed and curated several datasets for this model. We compared this model against several other public and private models.
#### 1. longchat topic retrieval
| Model | Accuracy |
|-------------------|---------|
| Claude2 | 0.9 |
| together llama2 32k | 0.15 |
| longchat 32k 1.5 | 0.05 |
| Anima 100K | 0.5 |
#### 2. longchat number retrieval
| Model | Accuracy |
|-------------------|---------|
| Claude2 | 0.85 |
| together llama2 32k | 0.2 |
| longchat 32k 1.5 | 0.05 |
| Anima 100K | 0.45 |
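For context, the longchat number-retrieval task buries a labelled number deep inside a very long prompt and asks the model to return it. A simplified sketch of how such a probe can be constructed and scored (illustrative only, not the exact dataset used for the table above):
```python
import random

def make_case(n_lines=5000, seed=0):
    """Bury one labelled number among many distractor lines."""
    rng = random.Random(seed)
    target_idx = rng.randrange(n_lines)
    target_val = rng.randint(10_000, 99_999)
    lines = [f"Register {i} holds the value {rng.randint(10_000, 99_999)}."
             for i in range(n_lines)]
    lines[target_idx] = f"Register {target_idx} holds the value {target_val}."
    prompt = ("\n".join(lines)
              + f"\n\nWhat value does register {target_idx} hold? "
                "Answer with the number only.")
    return prompt, str(target_val)

def is_correct(prediction: str, answer: str) -> bool:
    # Accuracy is the fraction of cases where the answer appears in the output.
    return answer in prediction
```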
#### 3. NarrativeQA in ZeroSCROLLS
| Model | F1 |
|-------------------|---------|
| Claude2 | 0.6187 |
| together llama2 32k | 0.3833 |
| longchat 32k 1.5 | 0.2416 |
| Anima 100K | 0.4919 |
## GitHub
The GitHub repo is [here](https://github.com/lyogavin/Anima/tree/main/anima_100k).
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lyogavin__Anima-7B-100K)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 37.66 |
| ARC (25-shot) | 46.59 |
| HellaSwag (10-shot) | 72.28 |
| MMLU (5-shot) | 33.4 |
| TruthfulQA (0-shot) | 37.84 |
| Winogrande (5-shot) | 67.09 |
| GSM8K (5-shot) | 0.68 |
| DROP (3-shot) | 5.72 |