This repo contains a low-rank adapter for LLaMA-30b, fine-tuned on the English and Chinese GPT-4-LLM datasets.
This version of the weights was trained with the default alpaca-lora hyperparameters (expressed as a `peft.LoraConfig` in the sketch after the list):
- Epochs: 3
- Batch size: 128
- Cutoff length: 256
- Learning rate: 3e-4
- Lora r: 8
- Lora target modules: q_proj, v_proj
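For reference, here is how the LoRA-specific values above map onto a `peft.LoraConfig`. This is a minimal sketch, not the training script: epochs, batch size, cutoff length, and learning rate belong to the training loop rather than this config, and `lora_alpha`/`lora_dropout` are not stated in this card, so the alpaca-lora defaults used below are assumptions.

```python
from peft import LoraConfig

# LoRA hyperparameters from the list above.
config = LoraConfig(
    r=8,                                  # Lora r
    target_modules=["q_proj", "v_proj"],  # Lora target modules
    lora_alpha=16,                        # assumption: alpaca-lora default
    lora_dropout=0.05,                    # assumption: alpaca-lora default
    bias="none",
    task_type="CAUSAL_LM",
)
```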
Instructions for running it can be found at https://github.com/tloen/alpaca-lora.
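For a quick test, the adapter can be applied to the base model with the `peft` library. The sketch below assumes `transformers` and `peft` are installed; the `BASE` and `ADAPTER` paths are placeholders (the LLaMA-30b base weights must be obtained separately under their own license, and `ADAPTER` stands in for this repository's id).

```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

# Placeholder paths/ids, not real repositories.
BASE = "path/to/llama-30b-hf"
ADAPTER = "path/to/this-adapter-repo"

tokenizer = LlamaTokenizer.from_pretrained(BASE)
base_model = LlamaForCausalLM.from_pretrained(
    BASE, torch_dtype=torch.float16, device_map="auto"
)

# Apply the low-rank adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base_model, ADAPTER)
model.eval()

# Alpaca-style prompt, matching the instruction format alpaca-lora trains on.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nName three primary colors.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```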
Usage and License Notices
The license of these LoRA weights is inherited from LLaMA's license, which permits research use only. In addition, the weights must not be used to compete with OpenAI, per OpenAI's terms of use, because the dataset was generated with the OpenAI API.