---
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: emissions-extraction-lora
  results: []
license: apache-2.0
datasets:
- nopperl/sustainability-report-emissions-instruction-style
language:
- en
---


[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`
```yaml
base_model: mistralai/Mistral-7B-Instruct-v0.2
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: nopperl/sustainability-report-emissions-instruction-style
    type:
      system_prompt: ""
      field_instruction: prompt
      field_output: completion
      format: "[INST] {instruction} [/INST] I have extracted the Scope 1, 2 and 3 emission values from the document, converted them into metric tons and put them into the following json object:\n```json\n"
      no_input_format: "[INST] {instruction} [/INST] I have extracted the Scope 1, 2 and 3 emission values from the document, converted them into metric tons and put them into the following json object:\n```json\n"
dataset_prepared_path:
val_set_size: 0
output_dir: ./emissions-extraction-lora

adapter: lora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.1
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
  - gate_proj
  - down_proj
  - up_proj
  - q_proj
  - v_proj
  - k_proj
  - o_proj

sequence_len: 32768
sample_packing: false
pad_to_sequence_len: false
eval_sample_packing: false

wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 4
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.00002

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
evals_per_epoch: 0
eval_table_size:
eval_table_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed: train_config/zero3_bf16.json
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"


save_safetensors: true

```

</details><br>

# emissions-extraction-lora

This is a LoRA adapter for the [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) model, finetuned on the [nopperl/sustainability-report-emissions-instruction-style](https://huggingface.co/datasets/nopperl/sustainability-report-emissions-instruction-style) dataset.

## Model description

Given text extracted from the pages of a sustainability report, this model extracts the Scope 1, 2 and 3 emissions in JSON format. The JSON object also lists the pages this information was found on. For example, the [2022 sustainability report by the Bristol-Myers Squibb Company](https://www.bms.com/assets/bms/us/en-us/pdf/bmy-2022-esg-report.pdf) leads to the following output: `{"scope_1":202290,"scope_2":161907,"scope_3":1696100,"sources":[88,89]}`.
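
For reference, here is a minimal sketch of this output structure as a Pydantic model. This is an illustration only: the actual `Emissions` class ships with the accompanying package as `corporate_emission_reports.pydantic_types.Emissions`, and its field types may differ; the field names below are inferred from the example output above.

```python
# Hypothetical approximation of the package's Emissions schema, inferred from
# the example output above; the real class is defined in
# corporate_emission_reports.pydantic_types and may differ.
from pydantic import BaseModel


class Emissions(BaseModel):
    scope_1: float | None  # Scope 1 emissions in metric tons
    scope_2: float | None  # Scope 2 emissions in metric tons
    scope_3: float | None  # Scope 3 emissions in metric tons
    sources: list[int]     # pages the values were extracted from


example = Emissions.model_validate_json(
    '{"scope_1":202290,"scope_2":161907,"scope_3":1696100,"sources":[88,89]}'
)
print(example.scope_1, example.sources)
```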

The model reaches an emission value extraction accuracy of 65% (up from 46% for the base model) and a source citation accuracy of 77% (base model: 52%) on the [corporate-emission-reports](https://huggingface.co/datasets/nopperl/corporate-emission-reports) dataset. For more information, refer to the [GitHub repo](https://github.com/nopperl/corporate_emission_reports).

## Intended uses & limitations

The model is intended to be used together with the [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) model via the `inference.py` script from the [accompanying Python package](https://github.com/nopperl/corporate_emission_reports). The script ensures that the prompt string and token ids exactly match the ones used for training.
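
For illustration, the snippet below sketches how the training format from the axolotl config above turns an instruction (the extracted report text) into a prompt, including the completion prefix the model was trained to continue with a JSON object. This is a simplified assumption; the package's `construct_prompt` builds the exact string and token ids.

```python
# Simplified sketch of the training prompt format from the axolotl config above.
# The real construct_prompt in corporate_emission_reports.inference handles the
# exact string, special tokens and tokenization.
def build_prompt(instruction: str) -> str:
    assistant_prefix = (
        "I have extracted the Scope 1, 2 and 3 emission values from the document, "
        "converted them into metric tons and put them into the following json object:\n"
        + "`" * 3 + "json\n"  # opening fence of the JSON block, as in the config's format string
    )
    return f"[INST] {instruction} [/INST] {assistant_prefix}"
```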

### Example usage

#### CLI

Using [transformers](https://github.com/huggingface/transformers) as the inference engine:

    python -m corporate_emission_reports.inference --model_path mistralai/Mistral-7B-Instruct-v0.2 --lora nopperl/emissions-extraction-lora --model_context_size 32768 --engine hf https://www.bms.com/assets/bms/us/en-us/pdf/bmy-2022-esg-report.pdf

Compare to the base model without the LoRA:

    python -m corporate_emission_reports.inference --model_path mistralai/Mistral-7B-Instruct-v0.2 --model_context_size 32768 --engine hf https://www.bms.com/assets/bms/us/en-us/pdf/bmy-2022-esg-report.pdf

Alternatively, it is possible to use [llama.cpp](https://github.com/ggerganov/llama.cpp) as the inference engine. In this case, follow the installation instructions in the [package readme](https://github.com/nopperl/corporate_emission_reports/blob/main/README.md); in particular, the model needs to be downloaded beforehand. Then:

    python -m corporate_emission_reports.inference --model mistral --lora ./emissions-extraction-lora/ggml-adapter-model.bin https://www.bms.com/assets/bms/us/en-us/pdf/bmy-2022-esg-report.pdf

Compare to the base model without the LoRA:

    python -m corporate_emission_reports.inference --model mistral https://www.bms.com/assets/bms/us/en-us/pdf/bmy-2022-esg-report.pdf

#### Programmatically

The package also provides a function for inference from Python code:

    from corporate_emission_reports.inference import extract_emissions
    document_path = "https://www.bms.com/assets/bms/us/en-us/pdf/bmy-2022-esg-report.pdf"
    model_kwargs = {}  # Optional arguments which are passed to the HF model
    emissions = extract_emissions(document_path, "mistralai/Mistral-7B-Instruct-v0.2", lora="nopperl/emissions-extraction-lora", engine="hf", **model_kwargs)
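
Assuming the returned object is the package's `Emissions` Pydantic model (consistent with the lm-format-enforcer example further below), the extracted values can then be read off as plain attributes; the attribute names here are inferred from the JSON keys shown in the model description:

```python
# Hypothetical follow-up: field names mirror the JSON keys shown above.
print(emissions.scope_1, emissions.scope_2, emissions.scope_3)
print("source pages:", emissions.sources)
```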

It is also possible to use the LoRA directly with [transformers](https://github.com/huggingface/transformers):

```python
from corporate_emission_reports.inference import construct_prompt
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

document_path = "https://www.bms.com/assets/bms/us/en-us/pdf/bmy-2022-esg-report.pdf"
lora_path = "nopperl/emissions-extraction-lora"
tokenizer = AutoTokenizer.from_pretrained(lora_path)
# Build the prompt string in exactly the format used during training.
prompt_text = construct_prompt(document_path, tokenizer)
# Loads the base model and applies the LoRA adapter on top of it.
model = AutoPeftModelForCausalLM.from_pretrained(lora_path)
prompt_tokenized = tokenizer.encode(prompt_text, return_tensors="pt").to(model.device)
outputs = model.generate(prompt_tokenized, max_new_tokens=120)
# Keep only the newly generated tokens (strip the prompt).
output = outputs[0][prompt_tokenized.shape[1]:]
output_text = tokenizer.decode(output, skip_special_tokens=True)
```

Additionally, it is possible to enforce valid JSON output and convert it into a Pydantic object using [lm-format-enforcer](https://github.com/noamgat/lm-format-enforcer):

```python
from corporate_emission_reports.pydantic_types import Emissions
from lmformatenforcer import JsonSchemaParser
from lmformatenforcer.integrations.transformers import build_transformers_prefix_allowed_tokens_fn

...  # tokenizer, model and prompt_tokenized as in the previous example

# Constrain generation so the output conforms to the Emissions JSON schema.
parser = JsonSchemaParser(Emissions.model_json_schema())
prefix_function = build_transformers_prefix_allowed_tokens_fn(tokenizer, parser)
outputs = model.generate(prompt_tokenized, max_new_tokens=120, prefix_allowed_tokens_fn=prefix_function)
output = outputs[0][prompt_tokenized.shape[1]:]
if tokenizer.eos_token:
    # Drop the trailing EOS token before decoding.
    output = output[:-1]
output = tokenizer.decode(output)
emissions = Emissions.model_validate_json(output, strict=True)
```

## Training and evaluation data

Finetuned on the [sustainability-report-emissions-instruction-style](https://huggingface.co/datasets/nopperl/sustainability-report-emissions-instruction-style) dataset and evaluated on the [corporate-emission-reports](https://huggingface.co/datasets/nopperl/corporate-emission-reports) dataset.

## Training procedure

Trained on two A40 GPUs with ZeRO Stage 3 and FlashAttention 2. ZeRO-3 and FlashAttention 2 are necessary to just barely fit the sequence length of 32768 (without them, the maximum sequence length was 6144). The bfloat16 data type (and no quantization) was used. One epoch took roughly 3 hours.
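
For orientation, the snippet below sketches the kind of settings a ZeRO Stage 3 bf16 DeepSpeed config typically contains; the actual `train_config/zero3_bf16.json` referenced in the axolotl config may differ.

```python
# Hypothetical sketch (as a Python dict) of a typical ZeRO-3 bf16 DeepSpeed
# config; the real train_config/zero3_bf16.json in the repo may differ.
zero3_bf16 = {
    "zero_optimization": {
        "stage": 3,                    # partition optimizer states, gradients and parameters
        "overlap_comm": True,
        "contiguous_gradients": True,
    },
    "bf16": {"enabled": True},         # matches bf16: auto in the axolotl config
    "gradient_accumulation_steps": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_clipping": "auto",
}
```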

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1

### Training results



### Framework versions

- PEFT 0.7.0
- Transformers 4.37.1
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0