Model Card for Llama-2-7b-alpaca-cleaned

This model checkpoint is Llama-2-7b fine-tuned on the alpaca-cleaned dataset with the original Alpaca fine-tuning hyperparameters.

Model Details

Model Description

This model checkpoint is Llama-2-7b fine-tuned on the alpaca-cleaned dataset with the original Alpaca fine-tuning hyperparameters.
The original Alpaca model was fine-tuned from LLaMA on the alpaca dataset by researchers from Stanford University.

  • Developed by: NEU Human-centered AI Lab
  • Shared by: NEU Human-centered AI Lab
  • Model type: Text generation
  • Language(s) (NLP): English
  • License: cc-by-nc-4.0 (to comply with the alpaca-cleaned dataset)
  • Finetuned from model: Llama-2-7b

Model Sources

Uses

Direct Use

The model is intended for research purposes only, in English, in compliance with the stanford_alpaca project.
The model has been fine-tuned on the alpaca-cleaned dataset for assistant-like chat and general natural language generation tasks.
Use of this model should also comply with the restrictions of Llama-2-7b.

Out-of-Scope Use

Out-of-scope use of this model should also comply with the stanford_alpaca project and Llama-2-7b.

Bias, Risks, and Limitations

[More Information Needed]

How to Get Started with the Model

Use the code below to get started with the model.

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("NEU-HAI/Llama-2-7b-alpaca-cleaned")
model = AutoModelForCausalLM.from_pretrained("NEU-HAI/Llama-2-7b-alpaca-cleaned")
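
As a minimal inference sketch with the loaded model (the Alpaca-style instruction prompt and generation settings below are illustrative assumptions, not prescribed by this card):

# Minimal inference sketch; the prompt template and generation settings are assumptions.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what instruction tuning is.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))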

Training Details

Training Data

We use the alpaca-cleaned dataset, which is a cleaned version of the original alpaca dataset released by researchers from Stanford University.
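
To inspect the training data, here is a minimal sketch using the datasets library (the Hub id below is an assumption; substitute whichever copy of alpaca-cleaned you use):

# Load alpaca-cleaned for inspection; the Hub id is an assumption.
from datasets import load_dataset

dataset = load_dataset("yahma/alpaca-cleaned", split="train")
print(dataset[0])  # each example has "instruction", "input", and "output" fields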

Training Procedure

We follow the same training procedure and mostly the same hyperparameters as the original Alpaca model fine-tuned on LLaMA. The procedure can be found in the stanford_alpaca project.

Training Hyperparameters

--bf16 True \
--num_train_epochs 3 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--gradient_accumulation_steps 8 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 2000 \
--save_total_limit 1 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--fsdp "full_shard auto_wrap" \
--fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
--tf32 True
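
As a minimal sketch, the flags above map onto transformers.TrainingArguments roughly as follows (the actual fine-tuning uses the stanford_alpaca training script; the output path is hypothetical and some argument names may differ across transformers versions):

# Approximate TrainingArguments equivalent of the flags above (sketch only).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./llama-2-7b-alpaca-cleaned",  # hypothetical output path
    bf16=True,
    tf32=True,
    num_train_epochs=3,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,
    evaluation_strategy="no",
    save_strategy="steps",
    save_steps=2000,
    save_total_limit=1,
    learning_rate=2e-5,
    weight_decay=0.0,
    warmup_ratio=0.03,
    lr_scheduler_type="cosine",
    logging_steps=1,
    fsdp="full_shard auto_wrap",
    fsdp_transformer_layer_cls_to_wrap="LlamaDecoderLayer",
)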

Evaluation

Testing Data, Factors & Metrics

Testing Data

N/A

Factors

N/A

Metrics

N/A

Results

N/A

Summary

N/A

Citation

Please cite the stanford_alpaca project:

@misc{alpaca,
  author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto},
  title = {Stanford Alpaca: An Instruction-following LLaMA model},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}

Model Card Authors

Northeastern Human-centered AI Lab

Model Card Contact
