---
license: cc-by-nc-4.0
language:
  - en
tags:
  - text-generation
datasets:
  - stanford_alpaca
pipeline_tag: text-generation
---



*Finetuner logo:* Finetuner helps you create experiments to improve embeddings on search tasks, accompanying you to deliver the last mile of performance-tuning for neural search applications.

LLM generation models trained by Jina AI's Finetuner team.

This repo contains the full (8-bit) weights for Falcon-7b fine-tuned on the Code Alpaca dataset.
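
A minimal loading sketch with `transformers` and `bitsandbytes` might look like the following; the repo id below is a placeholder for this model's actual Hub id, and the 8-bit flag assumes `bitsandbytes` is installed:

```python
# Minimal loading sketch (assumption: transformers + bitsandbytes installed).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "jinaai/falcon-7b-code-alpaca"  # placeholder: use this repo's id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    load_in_8bit=True,       # the weights in this repo are stored in 8-bit
    device_map="auto",       # place layers across available devices
    trust_remote_code=True,  # Falcon-7b ships custom modeling code
)
```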

## Reproduction

This version of the weights was trained with the following hyperparameters:

- Epochs: 6
- Batch size: 128
- Micro batch size: 8
- Learning rate: 3e-4
- LoRA r: 8
- LoRA target modules: query_key_value
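
Note that a batch size of 128 with a micro batch size of 8 implies 16 gradient-accumulation steps. As a rough illustration, the LoRA settings above would map onto a `peft` `LoraConfig` like this (`lora_alpha` and `lora_dropout` are assumptions, not stated in this card):

```python
# Illustrative mapping of the listed LoRA hyperparameters to a peft config.
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,                                 # "LoRA r" above
    target_modules=["query_key_value"],  # "LoRA target modules" above
    lora_alpha=16,                       # assumption: not stated in this card
    lora_dropout=0.05,                   # assumption: not stated in this card
    task_type="CAUSAL_LM",
)
```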

You can reproduce this run using the jerboa repository:

https://github.com/jina-ai/jerboa

Make sure you install the requirements, then fine-tune with the following command:

```bash
python finetune.py \
  --base-model tiiuae/falcon-7b --lora-target-modules query_key_value \
  --data-path sahil2801/CodeAlpaca-20k --output-dir ./lora-alpaca-code \
  --batch-size 128 --micro-batch-size 8 --eval-limit 45 \
  --eval-file code_eval.jsonl --wandb-project jerboa --wandb-log-model \
  --wandb-watch gradients --num-epochs 6
```
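
Once training finishes, the adapter lands in `./lora-alpaca-code`. A minimal inference sketch with `peft`, assuming that output directory and a free-form prompt format, could look like:

```python
# Minimal inference sketch with the LoRA adapter from ./lora-alpaca-code.
# Assumptions: peft + transformers installed; the prompt format is illustrative.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, "./lora-alpaca-code")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```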