PEFT
Safetensors
English
mistral
Generated from Trainer
nopperl committed
Commit cd10d7b
1 Parent(s): 5eff5c2
README.md CHANGED
@@ -1,3 +1,167 @@
  ---
- license: apache-2.0
+ library_name: peft
+ tags:
+ - generated_from_trainer
+ base_model: mistralai/Mistral-7B-Instruct-v0.2
+ model-index:
+ - name: emissions-extraction-lora
+   results: []
  ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
+ <details><summary>See axolotl config</summary>
+
+ axolotl version: `0.4.0`
+ ```yaml
+ base_model: mistralai/Mistral-7B-Instruct-v0.2
+ model_type: MistralForCausalLM
+ tokenizer_type: LlamaTokenizer
+ is_mistral_derived_model: true
+
+ load_in_8bit: false
+ load_in_4bit: false
+ strict: false
+
+ datasets:
+   - path: nopperl/sustainability-report-emissions-instruction-style
+     type:
+       system_prompt: ""
+       field_instruction: prompt
+       field_output: completion
+       format: "[INST] {instruction} [/INST] I have extracted the Scope 1, 2 and 3 emission values from the document, converted them into metric tons and put them into the following json object:\n```json\n"
+       no_input_format: "[INST] {instruction} [/INST] I have extracted the Scope 1, 2 and 3 emission values from the document, converted them into metric tons and put them into the following json object:\n```json\n"
+ dataset_prepared_path:
+ val_set_size: 0
+ output_dir: ./emissions-extraction-lora
+
+ adapter: lora
+ lora_model_dir:
+ lora_r: 32
+ lora_alpha: 16
+ lora_dropout: 0.1
+ lora_target_linear: true
+ lora_fan_in_fan_out:
+ lora_target_modules:
+   - gate_proj
+   - down_proj
+   - up_proj
+   - q_proj
+   - v_proj
+   - k_proj
+   - o_proj
+
+ sequence_len: 32768
+ sample_packing: false
+ pad_to_sequence_len: false
+ eval_sample_packing: false
+
+ wandb_project:
+ wandb_entity:
+ wandb_watch:
+ wandb_name:
+ wandb_log_model:
+
+ gradient_accumulation_steps: 8
+ micro_batch_size: 1
+ num_epochs: 2
+ optimizer: adamw_bnb_8bit
+ lr_scheduler: cosine
+ learning_rate: 0.000005
+
+ train_on_inputs: false
+ group_by_length: false
+ bf16: auto
+ fp16:
+ tf32: false
+
+ gradient_checkpointing: true
+ early_stopping_patience:
+ resume_from_checkpoint:
+ local_rank:
+ logging_steps: 1
+ xformers_attention:
+ flash_attention: true
+
+ warmup_steps: 10
+ evals_per_epoch: 0
+ eval_table_size:
+ eval_table_max_new_tokens: 128
+ saves_per_epoch: 1
+ debug:
+ deepspeed: train_config/zero3_bf16.json
+ weight_decay: 0.0
+ fsdp:
+ fsdp_config:
+ special_tokens:
+   bos_token: "<s>"
+   eos_token: "</s>"
+   unk_token: "<unk>"
+
+ save_safetensors: true
+ ```
+
+ </details><br>
+
+ # emissions-extraction-lora
+
+ This is a LoRA adapter for the [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) model, finetuned on the [nopperl/sustainability-report-emissions-instruction-style](https://huggingface.co/datasets/nopperl/sustainability-report-emissions-instruction-style) dataset.
+
+ ## Model description
+
+ Given text extracted from the pages of a sustainability report, this model extracts the Scope 1, 2 and 3 emissions in JSON format. The JSON object also lists the pages containing this information. For example, the [2022 sustainability report by the Bristol-Myers Squibb Company](https://www.bms.com/assets/bms/us/en-us/pdf/bmy-2022-esg-report.pdf) leads to the following output: `{"scope_1":202290,"scope_2":161907,"scope_3":1696100,"sources":[88,89]}`. For more information, refer to the [GitHub repo](https://github.com/nopperl/corporate_emission_reports).
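+
+ Since the prompt primes the model with an opening JSON code fence, downstream code only needs standard JSON parsing to consume this output. A minimal sketch (the `parse_emissions` helper below is illustrative, not part of the repo):
+
+ ```python
+ import json
+
+ def parse_emissions(completion: str) -> dict:
+     """Parse the model completion into an emissions dict.
+
+     Assumes the completion is the JSON object itself, possibly
+     followed by a closing code fence (see prompt format above).
+     """
+     text = completion.strip()
+     if text.endswith("```"):
+         text = text[:-3].strip()
+     data = json.loads(text)
+     # Emission values are metric tons; "sources" lists the report pages used.
+     missing = {"scope_1", "scope_2", "scope_3", "sources"} - data.keys()
+     if missing:
+         raise ValueError(f"incomplete extraction: missing {missing}")
+     return data
+
+ print(parse_emissions('{"scope_1":202290,"scope_2":161907,"scope_3":1696100,"sources":[88,89]}'))
+ ```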
+
+ ## Intended uses & limitations
+
+ The model is intended to be used together with the [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) model using the `inference.py` script from the [GitHub repo](https://github.com/nopperl/corporate_emission_reports). The script ensures that the prompt string and token ids exactly match the ones used for training.
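+
+ Alternatively, the adapter can be loaded directly with PEFT. The following is only a minimal sketch, not the repo's `inference.py`: the prompt string reproduces the training format from the axolotl config above, but the loading and generation code around it is an assumption and does not guarantee the exact token-id parity the script provides.
+
+ ```python
+ from peft import PeftModel
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ base = "mistralai/Mistral-7B-Instruct-v0.2"
+ tokenizer = AutoTokenizer.from_pretrained(base)
+ model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto")
+ model = PeftModel.from_pretrained(model, "nopperl/emissions-extraction-lora")
+
+ report_text = "..."  # text extracted from the sustainability report pages
+ # Training prompt format: [INST] tags plus the fixed completion prefix
+ # that primes the model to emit the JSON object.
+ prompt = (
+     f"[INST] {report_text} [/INST] I have extracted the Scope 1, 2 and 3 "
+     "emission values from the document, converted them into metric tons and "
+     "put them into the following json object:\n```json\n"
+ )
+ inputs = tokenizer(prompt, return_tensors="pt")
+ output = model.generate(**inputs, max_new_tokens=128)
+ print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
+ ```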
+
+ Example usage:
+
+     python inference.py --model mistral --lora emissions-extraction-lora/ggml-adapter-model.bin https://www.bms.com/assets/bms/us/en-us/pdf/bmy-2022-esg-report.pdf
+
+ Compare to the base model without the LoRA:
+
+     python inference.py --model mistral https://www.bms.com/assets/bms/us/en-us/pdf/bmy-2022-esg-report.pdf
+
+ ## Training and evaluation data
+
+ Finetuned on the [sustainability-report-emissions-instruction-style](https://huggingface.co/datasets/nopperl/sustainability-report-emissions-instruction-style) dataset.
+
+ Reaches an emission value extraction accuracy of 57% (up from 46% for the base model) and a source citation accuracy of 68% (base model: 52%) on the [corporate-emission-reports](https://huggingface.co/datasets/nopperl/corporate-emission-reports) dataset.
+
+ ## Training procedure
+
+ Trained on two A40 GPUs with ZeRO Stage 3 and FlashAttention 2. ZeRO-3 and FlashAttention 2 were necessary to just barely fit the sequence length of 32768 (without them, the maximum sequence length was 6144). The bfloat16 data type was used, with no quantization. One epoch took roughly 3 hours.
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-06
+ - train_batch_size: 1
+ - eval_batch_size: 1
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 2
+ - gradient_accumulation_steps: 8
+ - total_train_batch_size: 16
+ - total_eval_batch_size: 2
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_steps: 10
+ - num_epochs: 1
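+
+ The total train batch size follows directly from the setup above: train_batch_size × gradient_accumulation_steps × num_devices = 1 × 8 × 2 = 16.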
156
+
157
+ ### Training results
158
+
159
+
160
+
161
+ ### Framework versions
162
+
163
+ - PEFT 0.7.0
164
+ - Transformers 4.37.1
165
+ - Pytorch 2.0.1
166
+ - Datasets 2.16.1
167
+ - Tokenizers 0.15.0
adapter_config.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "alpha_pattern": {},
+   "auto_mapping": null,
+   "base_model_name_or_path": "mistralai/Mistral-7B-Instruct-v0.2",
+   "bias": "none",
+   "fan_in_fan_out": null,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 16,
+   "lora_dropout": 0.1,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "r": 32,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "o_proj",
+     "q_proj",
+     "gate_proj",
+     "v_proj",
+     "down_proj",
+     "up_proj",
+     "k_proj"
+   ],
+   "task_type": "CAUSAL_LM"
+ }
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:001b46af325d9d3b08730c26b86389303e95e3a0690ffa851548afcf21d18cc4
+ size 167832688
config.json ADDED
@@ -0,0 +1,26 @@
+ {
+   "_name_or_path": "mistralai/Mistral-7B-Instruct-v0.2",
+   "architectures": [
+     "MistralForCausalLM"
+   ],
+   "attention_dropout": 0.0,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "hidden_act": "silu",
+   "hidden_size": 4096,
+   "initializer_range": 0.02,
+   "intermediate_size": 14336,
+   "max_position_embeddings": 32768,
+   "model_type": "mistral",
+   "num_attention_heads": 32,
+   "num_hidden_layers": 32,
+   "num_key_value_heads": 8,
+   "rms_norm_eps": 1e-05,
+   "rope_theta": 1000000.0,
+   "sliding_window": null,
+   "tie_word_embeddings": false,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.37.1",
+   "use_cache": false,
+   "vocab_size": 32000
+ }
ggml-adapter-model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9e159542c3de0db7c7b35e23cd948ee97f7225609af8def1ec224c746ae7f28f
+ size 335572992
special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": "</s>",
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dadfd56d766715c61d2ef780a525ab43b8e6da4de6865bda3d95fdef5e134055
+ size 493443
tokenizer_config.json ADDED
@@ -0,0 +1,45 @@
+ {
+   "add_bos_token": true,
+   "add_eos_token": false,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "additional_special_tokens": [],
+   "bos_token": "<s>",
+   "chat_template": "{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'].strip() + '\\n\\n' %}{% else %}{% set loop_messages = messages %}{% set system_message = '' %}{% endif %}{{ bos_token }}{% for message in loop_messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if loop.index0 == 0 %}{% set content = system_message + message['content'] %}{% else %}{% set content = message['content'] %}{% endif %}{% if message['role'] == 'user' %}{{ '[INST] ' + content.strip() + ' [/INST]' }}{% elif message['role'] == 'assistant' %}{{ ' ' + content.strip() + ' ' + eos_token }}{% endif %}{% endfor %}",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "</s>",
+   "legacy": true,
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": "</s>",
+   "sp_model_kwargs": {},
+   "spaces_between_special_tokens": false,
+   "tokenizer_class": "LlamaTokenizer",
+   "trust_remote_code": false,
+   "unk_token": "<unk>",
+   "use_default_system_prompt": false,
+   "use_fast": true
+ }