---
library_name: transformers
tags:
  - trl
  - sft
  - generated_from_trainer
base_model: google/gemma-7b
license: apache-2.0
language:
  - es
---

# Model Card for Model ID

## Model Details

### Model Description

This model is a fine-tuned version of google/gemma-7b, trained on a Spanish refugee-law instruction dataset (see Training Data).

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** Hacendado and the QA-legal-refugees team
- **Language(s) (NLP):** Spanish
- **Finetuned from model:** google/gemma-7b

## Uses

### Direct Use

The primary objective of this model is to facilitate question answering (QA) tasks pertaining to Spanish refugee legislation. With its refined understanding of the nuances and intricacies of this legal domain, it is intended to answer user questions about Spanish refugee law in Spanish.
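A minimal inference sketch with 🤗 transformers is shown below. The repository id is a placeholder (this card does not state the exact Hub path), and the question and generation settings are illustrative.

```python
# Minimal inference sketch. The repo id below is a placeholder; replace it with
# the actual Hub path of this model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hacendado/gemma-7b-qa-legal-refugiados-es"  # placeholder, not confirmed by this card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Build a chat-formatted prompt with the tokenizer's chat template.
# ("What requirements must a person meet to apply for asylum in Spain?")
messages = [
    {"role": "user",
     "content": "¿Qué requisitos debe cumplir una persona para solicitar asilo en España?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```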

### Out-of-Scope Use

Misuse includes any application that promotes unethical practices, misinterprets refugee law, or uses the model for malicious purposes. The model is not designed to replace professional legal advice.

## Bias, Risks, and Limitations

The model, while powerful, has limitations inherent to AI, including biases present in the training data. It may not cover all nuances of refugee regulations or adapt to changes in law without updates.

## Training Details

### Training Data

The dataset used was instruct-legal-refugiados-es, formatted into chat turns (ChatML-style) with the Gemma tokenizer's chat template.
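As a rough illustration, the formatting step might look like the sketch below; the dataset's full Hub path and its column names are assumptions, since this card does not state them.

```python
# Formatting sketch: turn instruction/answer pairs into chat-formatted text
# using the Gemma tokenizer's chat template. The dataset path and column names
# are assumptions, not confirmed by this card.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
dataset = load_dataset("instruct-legal-refugiados-es", split="train")  # org prefix omitted; adjust to the real Hub path

def to_chat_text(example):
    messages = [
        {"role": "user", "content": example["question"]},       # assumed column name
        {"role": "assistant", "content": example["answer"]},    # assumed column name
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

dataset = dataset.map(to_chat_text)
```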

### Training Procedure

Training was performed on an RTX 4090 rented from Vast.ai, using PEFT with LoRA adapters.

#### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 66
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
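Put together, a training sketch with trl's SFTTrainer and a peft LoRA configuration might look like the following. The LoRA rank/alpha/dropout, the sequence length, and the dataset path are assumptions (they are not stated in this card); the remaining values mirror the hyperparameters listed above, and the argument names follow the older trl SFTTrainer signature, which may differ in newer trl releases.

```python
# Training sketch: SFT with a LoRA adapter via trl + peft.
# LoRA settings, max_seq_length, and the dataset path are assumptions;
# the other values mirror the hyperparameters listed above.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

model_id = "google/gemma-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Chat-formatted dataset (see the Training Data sketch); the path is an assumption.
train_dataset = load_dataset("instruct-legal-refugiados-es", split="train")

peft_config = LoraConfig(
    r=16,               # rank not stated in this card
    lora_alpha=32,      # not stated in this card
    lora_dropout=0.05,  # not stated in this card
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="gemma-7b-qa-legal-refugiados-es",
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size of 4
    num_train_epochs=3,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    seed=66,
    optim="adamw_torch",             # Adam, betas=(0.9, 0.999), epsilon=1e-8
    bf16=True,
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=training_args,
    train_dataset=train_dataset,
    peft_config=peft_config,
    dataset_text_field="text",       # field produced by the formatting sketch
    max_seq_length=1024,             # not stated in this card
)
trainer.train()
```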