---
library_name: peft
tags:
- llama1-7b
- code
- instruct
- alpaca-instruct
- alpaca
- llama7b
datasets:
- tatsu-lab/alpaca
base_model: decapoda-research/llama-7b-hf
license: apache-2.0
---

We finetuned huggyllama/llama-7b on the tatsu-lab/alpaca dataset for 5 epochs (~25,000 steps) using [MonsterAPI](https://monsterapi.ai)'s no-code [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm).

The dataset is an unfiltered version of tatsu-lab/alpaca, with 36 instances of blatant alignment removed.

The finetuning session completed in 4 hours and cost us only `$16` for the entire run!

#### Hyperparameters & Run details:
- Model Path: huggyllama/llama-7b
- Dataset: tatsu-lab/alpaca
- Learning rate: 0.0003
- Number of epochs: 5
- Data split: Training: 90% / Validation: 10%
- Gradient accumulation steps: 1

A sketch of an equivalent local setup using these hyperparameters follows below.
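The run itself was done through MonsterAPI's no-code finetuner, so the exact recipe (LoRA rank/alpha, target modules, batch size, sequence length) is not documented here. The sketch below reproduces only the hyperparameters listed above with `peft` and `transformers`; every value marked "assumed" in the comments is an illustration, not the configuration actually used for this run.

```python
# Hedged sketch of an equivalent local LoRA finetune.
# Stated values come from the hyperparameter list above;
# everything marked "assumed" is a placeholder choice.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Stated: tatsu-lab/alpaca with a 90% / 10% train/validation split.
dataset = load_dataset("tatsu-lab/alpaca")["train"].train_test_split(test_size=0.1)

tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")
tokenizer.pad_token = tokenizer.eos_token

def tokenize(example):
    # "text" is the pre-formatted Alpaca prompt column in tatsu-lab/alpaca.
    return tokenizer(example["text"], truncation=True, max_length=512)  # max_length assumed

tokenized = dataset.map(tokenize, remove_columns=dataset["train"].column_names)

model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")
lora_config = LoraConfig(  # rank, alpha, and target modules are assumed, not documented
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

args = TrainingArguments(
    output_dir="llama7b-alpaca-lora",
    learning_rate=3e-4,               # stated: 0.0003
    num_train_epochs=5,               # stated: 5 epochs
    gradient_accumulation_steps=1,    # stated: 1
    per_device_train_batch_size=8,    # assumed
    fp16=True,                        # assumed
)

Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```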
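#### Usage

Since `library_name` is `peft`, this repo should contain a LoRA adapter rather than full model weights. A minimal inference sketch, assuming the adapter sits on top of huggyllama/llama-7b; `ADAPTER_ID` is a placeholder for this model's Hub repo id:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

ADAPTER_ID = "your-username/your-adapter-repo"  # placeholder: replace with this repo's id

# Load the base model, then attach the finetuned LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER_ID)
tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")

# Alpaca-style instruction prompt, matching the training data's format.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a binary search tree is.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```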