---
license: cc-by-nc-nd-4.0
language:
  - it
  - en
library_name: transformers
---

# Stambecco 🦌: Italian Instruction-following LLaMA Model

Stambecco is an Italian instruction-following model based on LLaMA. It comes in two sizes: 7B and 13B parameters.

It is trained on an Italian version of the GPT-4-LLM dataset, a dataset of instruction-following data generated by GPT-4.

This repo contains a low-rank adapter for LLaMA-7b.

For more information, please visit the project's website.
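Because this repo ships only the LoRA adapter, it must be applied on top of the base LLaMA-7B weights at load time. The sketch below uses the Hugging Face `transformers` and `peft` libraries; the base-model path is a placeholder you must fill in with your own converted LLaMA-7B checkpoint, and the adapter id is assumed to match this repo's Hub name.

```python
def load_stambecco(
    base_model: str = "path/to/llama-7b-hf",       # placeholder: your LLaMA-7B weights
    adapter: str = "mchl-labs/stambecco-7b-plus",  # assumed: this repo's Hub id
):
    """Load the base LLaMA model and attach the Stambecco LoRA adapter."""
    # Imports kept inside the function so the sketch reads without the
    # heavy dependencies installed; torch, transformers, and peft are
    # required to actually run it.
    import torch
    from peft import PeftModel
    from transformers import LlamaForCausalLM, LlamaTokenizer

    tokenizer = LlamaTokenizer.from_pretrained(base_model)
    model = LlamaForCausalLM.from_pretrained(
        base_model,
        torch_dtype=torch.float16,  # fp16, consistent with the AMP training setup below
        device_map="auto",
    )
    # Apply the low-rank adapter weights on top of the frozen base model.
    model = PeftModel.from_pretrained(model, adapter)
    model.eval()
    return tokenizer, model
```

At inference time, wrap the instruction in the same prompt template used during training (see the project's website) before calling `model.generate`.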

## 💪 Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10
- mixed_precision_training: Native AMP
- LoRA r: 8
- LoRA target modules: q_proj, v_proj
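The effective batch size follows from the per-device batch size and the gradient-accumulation steps. A quick sanity check in plain Python (single training device assumed):

```python
# Values taken directly from the hyperparameter list above.
train_batch_size = 4            # per-device micro-batch
gradient_accumulation_steps = 32

# Gradients are accumulated over 32 micro-batches before each optimizer
# step, so each update effectively sees 4 * 32 examples (per device).
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 128, matching total_train_batch_size above
```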

## Intended uses & limitations

Usage and license notices: as with Stanford Alpaca, Stambecco is intended and licensed for research use only. The models must not be used outside of research purposes.

Please note that the model output may well contain biased, conspiratorial, offensive, or otherwise inappropriate and potentially harmful content. The model is intended for research purposes only and should be used with caution, at your own risk. Production use is not allowed.