---
library_name: transformers
tags:
- generated_from_trainer
- code
- coding
- llama-2
model-index:
- name: aiplanet/effi-13b
  results: []
license: apache-2.0
language:
- code
datasets:
- kaist-ai/CoT-Collection
pipeline_tag: text-generation
---


# Llama 2 13b 4-bit Chain of Thought Reasoning 👩‍💻 

**Llama-2 13b** fine-tuned on the **kaist-ai/CoT-Collection** dataset using **QLoRA** in 4-bit with the [PEFT](https://github.com/huggingface/peft) library.
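QLoRA keeps the quantized base weights frozen and trains only small low-rank adapter matrices. A toy numpy sketch of the LoRA update (hypothetical shapes and scaling, not the actual training code):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r, alpha = 8, 2, 16               # hidden size, LoRA rank, scaling (toy values)
W = rng.normal(size=(d, d))          # frozen (quantized) base weight
A = rng.normal(size=(r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection, zero-initialized

def lora_forward(x):
    # Base path plus low-rank update, scaled by alpha / r
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d,))
# With B initialized to zero, the adapter is a no-op at the start of training
assert np.allclose(lora_forward(x), W @ x)
```

Because `B` starts at zero, training begins from exactly the base model's behavior; gradients flow only through `A` and `B`, which is what makes the method memory-efficient.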

## Pretrained model description

[Llama-2](https://huggingface.co/meta-llama/Llama-2-13b)

Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters.

Model Architecture: Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.

## Training data

[kaist-ai/CoT-Collection](https://huggingface.co/datasets/kaist-ai/CoT-Collection)

The CoT-Collection dataset pairs instruction-tuning examples with chain-of-thought rationales across a wide range of tasks; fine-tuning on these rationales is intended to improve the model's step-by-step reasoning.
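A chain-of-thought training example can be rendered into an alpaca-style prompt like this (hypothetical template and example values; the card does not specify the exact format used during fine-tuning):

```python
def format_cot_example(instruction: str, rationale: str, answer: str) -> str:
    # Hypothetical alpaca-style template: instruction, then a response that
    # walks through the reasoning before stating the final answer
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
        f"{rationale}\nSo the answer is: {answer}"
    )

example = format_cot_example(
    instruction="If a train travels 60 km in 30 minutes, what is its speed in km/h?",
    rationale="30 minutes is 0.5 hours, and 60 km / 0.5 h = 120 km/h.",
    answer="120 km/h",
)
print(example)
```

Training on the rationale as part of the target text is what teaches the model to produce the intermediate steps, not just the final answer.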

### Quantization Configuration

The following GPTQ quantization config was used during training:
- bits: 4
- group_size: 128
- dataset: "c4"
- desc_act: False
- tokenizer: tokenizer
- device_map: "auto"
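The `group_size` setting above can be illustrated with a toy group-wise absmax quantizer (numpy sketch with made-up weights; real GPTQ additionally applies Hessian-based error correction, which this omits):

```python
import numpy as np

def quantize_groupwise(w, bits=4, group_size=128):
    # Symmetric absmax quantization: each group of `group_size` weights
    # shares one float scale, and each weight becomes a signed 4-bit code
    qmax = 2 ** (bits - 1) - 1                      # 7 for 4-bit signed
    w = w.reshape(-1, group_size)
    scale = np.abs(w).max(axis=1, keepdims=True) / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q * scale

w = np.random.default_rng(0).normal(size=(1024,)).astype(np.float32)
q, scale = quantize_groupwise(w, bits=4, group_size=128)
w_hat = dequantize(q, scale).reshape(-1)

# Codes fit in 4 bits; rounding error is at most half a quantization step
assert q.min() >= -8 and q.max() <= 7
assert np.abs(w - w_hat).max() <= scale.max() / 2 + 1e-6
```

Smaller groups give finer-grained scales (better accuracy) at the cost of storing more scale values alongside the 4-bit codes.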


### Framework versions
- PEFT 0.4.0

### Training

```
Downloading (…)okenizer_config.json: 100%
725/725 [00:00<00:00, 118kB/s]
Downloading (…)/main/tokenizer.json: 100%
1.84M/1.84M [00:01<00:00, 1.45MB/s]
Downloading (…)cial_tokens_map.json: 100%
437/437 [00:00<00:00, 35.9kB/s]
/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py:472: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers.
  warnings.warn(
Downloading (…)lve/main/config.json: 100%
631/631 [00:00<00:00, 53.8kB/s]
Downloading (…)/adapter_config.json: 100%
452/452 [00:00<00:00, 39.1kB/s]
Downloading (…)lve/main/config.json: 100%
587/587 [00:00<00:00, 50.1kB/s]
Downloading (…)fetensors.index.json: 100%
33.4k/33.4k [00:00<00:00, 2.97MB/s]
Downloading shards: 100%
3/3 [28:42<00:00, 546.82s/it]
Downloading (…)of-00003.safetensors: 100%
9.95G/9.95G [10:35<00:00, 15.3MB/s]
Downloading (…)of-00003.safetensors: 100%
9.90G/9.90G [11:04<00:00, 15.9MB/s]
Downloading (…)of-00003.safetensors: 100%
6.18G/6.18G [06:56<00:00, 14.5MB/s]
Loading checkpoint shards: 100%
3/3 [00:03<00:00, 1.01s/it]
/usr/local/lib/python3.10/dist-packages/transformers/utils/hub.py:374: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers.
  warnings.warn(
Downloading (…)neration_config.json: 100%
188/188 [00:00<00:00, 16.5kB/s]
Downloading readme: 100%
2.38k/2.38k [00:00<00:00, 162kB/s]
Repo card metadata block was not found. Setting CardData to empty.
Downloading data files: 100%
1/1 [00:42<00:00, 42.25s/it]
Downloading data: 100%
319M/319M [00:42<00:00, 6.57MB/s]
Extracting data files: 100%
1/1 [00:04<00:00, 4.02s/it]
Generating train split:
356317/0 [00:01<00:00, 224199.18 examples/s]
Quantizing model.layers blocks : 100%
40/40 [32:31<00:00, 50.21s/it]
CUDA extension not installed.
Downloading adapter_model.bin: 100%
26.3M/26.3M [00:03<00:00, 8.64MB/s]
```

### Example of usage

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aiplanet/effi-13b-int4-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
tst = """Read the Instruction below and provide an answer the question asked.Stick to to theinstruction .Do not repeat the answers.

### INSTRUCTION:
Virgin Australia, the trading name of Virgin Australia Airlines Pty Ltd, is an Australian-based airline. It is the largest airline by fleet size to use the Virgin brand. It commenced services on 31 August 2000 as Virgin Blue, with two aircraft on a single route. It suddenly found itself as a major airline in Australia's domestic market after the collapse of Ansett Australia in September 2001. The airline has since grown to directly serve 32 cities in Australia, from hubs in Brisbane, Melbourne and Sydney.Is Virgin Australia and Virgin Blue the same airlines?

"""
#
prompt = f"[INST] <<SYS>>\n{system_message}\n<</SYS>>\n\n{tst}. [/INST]"
#

# Tokenize the input
input_ids = tokenizer(prompt, return_tensors="pt", truncation=True).input_ids.cuda()
# Run the model to infere an output
outputs = model.generate(input_ids=input_ids, max_new_tokens=100, do_sample=True, top_p=0.9,temperature=0.1)

# Print the result
print(f"Prompt:\n{prompt}\n")
print(f"Generated instruction:\n{tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0][len(prompt):].split(' [/INST]')[0]}")

```

### Citation

```
@misc{Plaban81,
	author       = { {Plaban Nayak} },
	title        = { effi-13b },
	year         = 2023,
	url          = { https://huggingface.co/aiplanet/effi-13b },
	publisher    = { Hugging Face }
}
```