---
language:
  - bg
  - cs
  - zh
  - de
  - fi
  - fr
  - ru
  - es
tags:
  - generation
  - question answering
  - instruction tuning
license: cc-by-nc-4.0
---

## Model Description

This HF repository contains a base LLM instruction-tuned (SFT) with LoRA, used in our study of whether monolingual or multilingual instruction tuning is more favourable.

## Instruction tuning details

- Base model: EleutherAI/pythia-1.4b-deduped
- Instruction tuning language: multilingual, downsampled (Bulgarian, Czech, Chinese, German, Finnish, French, Russian, and Spanish)
- Training method: LoRA
- LoRA details: rank=8, alpha=16, target modules={key, query, value} (see the configuration sketch after this list)
- Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs
- Dataset: machine-translated from yahma/alpaca-cleaned. You can download our data HERE.
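A minimal sketch of what the corresponding PEFT configuration might look like, given the hyperparameters above. The target-module name is an assumption: Pythia (GPT-NeoX) exposes the key/query/value projections as a single fused `query_key_value` layer.

```python
# Sketch of a LoRA configuration matching the listed hyperparameters
# (rank=8, alpha=16, key/query/value projections). The module name is an
# assumption: GPT-NeoX models such as Pythia fuse key/query/value into
# one "query_key_value" projection.
from peft import LoraConfig, TaskType

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                 # LoRA rank
    lora_alpha=16,                       # LoRA scaling factor
    target_modules=["query_key_value"],  # fused k/q/v projection in GPT-NeoX
)
```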

## Usage

The model checkpoint should be loaded together with the base model using the transformers and peft libraries, as sketched below.
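A minimal loading and inference sketch, not the official instructions: the adapter repository id is a placeholder, and the Alpaca-style prompt template is assumed from the Alpaca-derived training data.

```python
# Minimal sketch: load the base model, apply the LoRA adapter from this
# repository with peft, and generate. The adapter id and prompt template
# are assumptions for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/pythia-1.4b-deduped"
adapter_id = "<this-repo-id>"  # placeholder for this HF repository's id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Alpaca-style prompt, assumed from the Alpaca-derived training data.
prompt = "### Instruction:\nWhat is the capital of Finland?\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```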

Please refer to our GitHub repository HERE for inference and training instructions.

## Citation

```bibtex
@inproceedings{chen-etal-2024-monolingual,
  title = "Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
  author = "Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
  year = "2024",
  booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```