
Built with Axolotl

Axolotl config:

axolotl version: 0.4.0

base_model: mistralai/Mistral-7B-Instruct-v0.2
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: nopperl/sustainability-report-emissions-instruction-style
    type:
      system_prompt: ""
      field_instruction: prompt
      field_output: completion
      format: "[INST] {instruction} [/INST] I have extracted the Scope 1, 2 and 3 emission values from the document, converted them into metric tons and put them into the following json object:\n```json\n"
      no_input_format: "[INST] {instruction} [/INST] I have extracted the Scope 1, 2 and 3 emission values from the document, converted them into metric tons and put them into the following json object:\n```json\n"
dataset_prepared_path:
val_set_size: 0
output_dir: ./emissions-extraction-lora

adapter: lora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.1
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
  - gate_proj
  - down_proj
  - up_proj
  - q_proj
  - v_proj
  - k_proj
  - o_proj

sequence_len: 32768
sample_packing: false
pad_to_sequence_len: false
eval_sample_packing: false

wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 4
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.00002

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
evals_per_epoch: 0
eval_table_size:
eval_table_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed: train_config/zero3_bf16.json
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"


save_safetensors: true
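This config can be launched with Axolotl's training CLI. A minimal launch sketch, assuming the YAML above is saved as lora.yml (the filename here is only an assumption):

accelerate launch -m axolotl.cli.train lora.yml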

emissions-extraction-lora

This is a LoRA adapter for the mistralai/Mistral-7B-Instruct-v0.2 model, finetuned on the nopperl/sustainability-report-emissions-instruction-style dataset.

Model description

Given text extracted from pages of a sustainability report, this model extracts the scope 1, 2 and 3 emissions in JSON format. The JSON object also contains the pages containing this information. For example, the 2022 sustainability report by the Bristol-Myers Squibb Company leads to the following output: {"scope_1":202290,"scope_2":161907,"scope_3":1696100,"sources":[88,89]}.
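The output structure can be described with a small Pydantic model. The sketch below is only illustrative and based on the example output above; the package ships its own Emissions type in corporate_emission_reports.pydantic_types, whose exact annotations may differ:

from typing import Optional
from pydantic import BaseModel

# Illustrative sketch of the output structure; field names follow the example output above.
class Emissions(BaseModel):
    scope_1: Optional[float]  # scope 1 emissions in metric tons
    scope_2: Optional[float]  # scope 2 emissions in metric tons
    scope_3: Optional[float]  # scope 3 emissions in metric tons
    sources: list[int]        # pages the values were extracted from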

The model reaches an emission value extraction accuracy of 65% (base model: 46%) and a source citation accuracy of 77% (base model: 52%) on the corporate-emission-reports dataset. For more information, refer to the GitHub repo.

Intended uses & limitations

The model is intended to be used together with the mistralai/Mistral-7B-Instruct-v0.2 base model via the inference.py script from the accompanying Python package. The script ensures that the prompt string and token ids exactly match the ones used during training.

Example usage

CLI

Using transformers as the inference engine:

python -m corporate_emission_reports.inference --model_path mistralai/Mistral-7B-Instruct-v0.2 --lora nopperl/emissions-extraction-lora --model_context_size 32768 --engine hf https://www.bms.com/assets/bms/us/en-us/pdf/bmy-2022-esg-report.pdf

Compare to base model without LoRA:

python -m corporate_emission_reports.inference --model_path mistralai/Mistral-7B-Instruct-v0.2 --model_context_size 32768 --engine hf https://www.bms.com/assets/bms/us/en-us/pdf/bmy-2022-esg-report.pdf

Alternatively, it is possible to use llama.cpp as the inference engine. In this case, follow the installation instructions in the package readme. In particular, the model needs to be downloaded beforehand. Then:

python -m corporate_emission_reports.inference --model mistral --lora ./emissions-extraction-lora/ggml-adapter-model.bin https://www.bms.com/assets/bms/us/en-us/pdf/bmy-2022-esg-report.pdf

Compare to base model without LoRA:

python -m corporate_emission_reports.inference --model mistral https://www.bms.com/assets/bms/us/en-us/pdf/bmy-2022-esg-report.pdf
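Note that the --lora flag above expects a ggml adapter file rather than the raw PEFT weights. The package readme describes how to obtain it; in llama.cpp releases from that period this was typically done with the convert-lora-to-ggml.py conversion script (a sketch only, as the script has since been removed from newer llama.cpp versions):

python convert-lora-to-ggml.py ./emissions-extraction-lora

which produces the ggml-adapter-model.bin file referenced above.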

Programmatically

The package also provides a function for inference from python code:

from corporate_emission_reports.inference import extract_emissions
document_path = "https://www.bms.com/assets/bms/us/en-us/pdf/bmy-2022-esg-report.pdf"
model_kwargs = {}  # Optional arguments which are passed to the HF model
emissions = extract_emissions(document_path, "mistralai/Mistral-7B-Instruct-v0.2", lora="nopperl/emissions-extraction-lora", engine="hf", **model_kwargs)
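Assuming extract_emissions returns the package's Emissions Pydantic object (as the JSON-validation snippet further below suggests), the result can then be inspected directly; a hypothetical usage:

# Hypothetical usage; assumes an Emissions Pydantic object is returned.
print(emissions.scope_1, emissions.scope_2, emissions.scope_3)
print(emissions.sources)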

It's also possible to use it directly with transformers:

from corporate_emission_reports.inference import construct_prompt
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
document_path = "https://www.bms.com/assets/bms/us/en-us/pdf/bmy-2022-esg-report.pdf"
lora_path = "nopperl/emissions-extraction-lora"
# Load the tokenizer from the adapter repo so the prompt tokens match training.
tokenizer = AutoTokenizer.from_pretrained(lora_path)
prompt_text = construct_prompt(document_path, tokenizer)
# Load the base model and apply the LoRA adapter on top of it.
model = AutoPeftModelForCausalLM.from_pretrained(lora_path)
prompt_tokenized = tokenizer.encode(prompt_text, return_tensors="pt").to(model.device)
outputs = model.generate(prompt_tokenized, max_new_tokens=120)
# Keep only the newly generated tokens, i.e. strip the prompt.
output = outputs[0][prompt_tokenized.shape[1]:]
output_text = tokenizer.decode(output, skip_special_tokens=True)

Additionally, it is possible to enforce valid JSON output and convert it into a Pydantic object using lm-format-enforcer:

from corporate_emission_reports.pydantic_types import Emissions
from lmformatenforcer import JsonSchemaParser
from lmformatenforcer.integrations.transformers import build_transformers_prefix_allowed_tokens_fn
...
# Constrain generation so that the output always parses as a valid Emissions JSON object.
parser = JsonSchemaParser(Emissions.model_json_schema())
prefix_function = build_transformers_prefix_allowed_tokens_fn(tokenizer, parser)
outputs = model.generate(prompt_tokenized, max_new_tokens=120, prefix_allowed_tokens_fn=prefix_function)
output = outputs[0][prompt_tokenized.shape[1]:]
# Strip the trailing EOS token before decoding.
if tokenizer.eos_token:
    output = output[:-1]
output = tokenizer.decode(output)
return Emissions.model_validate_json(output, strict=True)

Training and evaluation data

Finetuned on the sustainability-report-emissions-instruction-style dataset and evaluated on the corporate-emission-reports dataset.

Training procedure

Trained on two A40 GPUs with ZeRO Stage 3 and FlashAttention 2. ZeRO-3 and FlashAttention 2 are necessary to just barely fit the sequence length of 32768 (without them, the maximum sequence length was 6144). The bfloat16 data type was used (no quantization). One epoch took roughly 3 hours.

Training hyperparameters

The following hyperparameters were used during training (see the note after this list for how the total train batch size is derived):

  • learning_rate: 5e-06
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 2
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 16
  • total_eval_batch_size: 2
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 10
  • num_epochs: 1
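The total train batch size above is not an independent setting; it is the product of the per-device micro batch size, the gradient accumulation steps, and the number of GPUs:

micro_batch_size = 1
gradient_accumulation_steps = 8
num_devices = 2
total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices  # 1 * 8 * 2 = 16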

Training results

Framework versions

  • PEFT 0.7.0
  • Transformers 4.37.1
  • Pytorch 2.0.1
  • Datasets 2.16.1
  • Tokenizers 0.15.0
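For a matching inference environment, the listed versions can be pinned directly with pip (a sketch only; the training setup additionally requires Axolotl 0.4.0 and DeepSpeed, which are not part of this auto-generated list):

pip install peft==0.7.0 transformers==4.37.1 torch==2.0.1 datasets==2.16.1 tokenizers==0.15.0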