---
license: other
license_name: apple-sample-code-license
license_link: LICENSE
---

OpenELM

Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari

We introduce OpenELM, a family of Open-source Efficient Language Models. We release both pretrained and instruction-tuned models with 270M, 450M, 1.1B, and 3B parameters.

Our pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens.

Usage

We have provided an example function in generate_openelm.py to generate output from OpenELM models loaded via the Hugging Face Hub.

You can try the model by running the following command:

python generate_openelm.py --model apple/OpenELM-450M --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2

Please refer to the Hugging Face documentation on user access tokens to obtain your access token.
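
If you prefer to call the model directly instead of going through generate_openelm.py, the sketch below shows one way to do it with the standard transformers API. This is a minimal sketch, not the canonical script: the tokenizer repo (OpenELM reuses the LLaMA tokenizer, and the gated meta-llama/Llama-2-7b-hf checkpoint is why an access token is needed) and the generation settings are illustrative assumptions.

# Minimal sketch: load OpenELM with transformers and generate text.
from transformers import AutoModelForCausalLM, AutoTokenizer

# OpenELM ships custom modeling code, so trust_remote_code=True is required.
model = AutoModelForCausalLM.from_pretrained("apple/OpenELM-450M", trust_remote_code=True)

# Assumption: OpenELM reuses the LLaMA tokenizer; the gated repo below is illustrative.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", token="[HF_ACCESS_TOKEN]")

inputs = tokenizer("Once upon a time there was", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.2)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))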

Additional arguments to the Hugging Face generate function can be passed via generate_kwargs. For example, to speed up inference, you can try prompt-lookup speculative generation by passing the prompt_lookup_num_tokens argument as follows:

python generate_openelm.py --model apple/OpenELM-450M --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 prompt_lookup_num_tokens=10

Alternatively, model-wise speculative generation with an assistant model can also be tried by passing a smaller model through the assistant_model argument, for example:

python generate_openelm.py --model apple/OpenELM-450M --hf_access_token [HF_ACCESS_TOKEN] --prompt 'Once upon a time there was' --generate_kwargs repetition_penalty=1.2 --assistant_model [SMALLER_MODEL]
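
Both speculative options correspond to standard arguments of the transformers generate function. Here is a minimal sketch, assuming the same illustrative model and tokenizer choices as above; using apple/OpenELM-270M as the assistant is just one possible choice of smaller model.

# Minimal sketch: speculative decoding via transformers generate arguments.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", token="[HF_ACCESS_TOKEN]")
model = AutoModelForCausalLM.from_pretrained("apple/OpenELM-450M", trust_remote_code=True)
inputs = tokenizer("Once upon a time there was", return_tensors="pt")

# Prompt-lookup decoding: draft tokens are copied from the prompt itself.
out = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.2,
                     prompt_lookup_num_tokens=10)

# Assisted generation: a smaller draft model proposes tokens for the main model to verify.
assistant = AutoModelForCausalLM.from_pretrained("apple/OpenELM-270M", trust_remote_code=True)
out = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.2,
                     assistant_model=assistant)
print(tokenizer.decode(out[0], skip_special_tokens=True))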

Main Results

Zero-Shot

| Model Size | ARC-c | ARC-e | BoolQ | HellaSwag | PIQA | SciQ | WinoGrande | Average |
|---|---|---|---|---|---|---|---|---|
| OpenELM-270M | 26.45 | 45.08 | 53.98 | 46.71 | 69.75 | 84.70 | 53.91 | 54.37 |
| OpenELM-270M-Instruct | 30.55 | 46.68 | 48.56 | 52.07 | 70.78 | 84.40 | 52.72 | 55.11 |
| OpenELM-450M | 27.56 | 48.06 | 55.78 | 53.97 | 72.31 | 87.20 | 58.01 | 57.56 |
| OpenELM-450M-Instruct | 30.38 | 50.00 | 60.37 | 59.34 | 72.63 | 88.00 | 58.96 | 59.95 |
| OpenELM-1_1B | 32.34 | 55.43 | 63.58 | 64.81 | 75.57 | 90.60 | 61.72 | 63.44 |
| OpenELM-1_1B-Instruct | 37.97 | 52.23 | 70.00 | 71.20 | 75.03 | 89.30 | 62.75 | 65.50 |
| OpenELM-3B | 35.58 | 59.89 | 67.40 | 72.44 | 78.24 | 92.70 | 65.51 | 67.39 |
| OpenELM-3B-Instruct | 39.42 | 61.74 | 68.17 | 76.36 | 79.00 | 92.50 | 66.85 | 69.15 |

LLM360

| Model Size | ARC-c | HellaSwag | MMLU | TruthfulQA | WinoGrande | Average |
|---|---|---|---|---|---|---|
| OpenELM-270M | 27.65 | 47.15 | 25.72 | 39.24 | 53.83 | 38.72 |
| OpenELM-270M-Instruct | 32.51 | 51.58 | 26.70 | 38.72 | 53.20 | 40.54 |
| OpenELM-450M | 30.20 | 53.86 | 26.01 | 40.18 | 57.22 | 41.50 |
| OpenELM-450M-Instruct | 33.53 | 59.31 | 25.41 | 40.48 | 58.33 | 43.41 |
| OpenELM-1_1B | 36.69 | 65.71 | 27.05 | 36.98 | 63.22 | 45.93 |
| OpenELM-1_1B-Instruct | 41.55 | 71.83 | 25.65 | 45.95 | 64.72 | 49.94 |
| OpenELM-3B | 42.24 | 73.28 | 26.76 | 34.98 | 67.25 | 48.90 |
| OpenELM-3B-Instruct | 47.70 | 76.87 | 24.80 | 38.76 | 67.96 | 51.22 |

OpenLLM Leaderboard

| Model Size | ARC-c | CrowS-Pairs | HellaSwag | MMLU | PIQA | RACE | TruthfulQA | WinoGrande | Average |
|---|---|---|---|---|---|---|---|---|---|
| OpenELM-270M | 27.65 | 66.79 | 47.15 | 25.72 | 69.75 | 30.91 | 39.24 | 53.83 | 45.13 |
| OpenELM-270M-Instruct | 32.51 | 66.01 | 51.58 | 26.70 | 70.78 | 33.78 | 38.72 | 53.20 | 46.66 |
| OpenELM-450M | 30.20 | 68.63 | 53.86 | 26.01 | 72.31 | 33.11 | 40.18 | 57.22 | 47.69 |
| OpenELM-450M-Instruct | 33.53 | 67.44 | 59.31 | 25.41 | 72.63 | 36.84 | 40.48 | 58.33 | 49.25 |
| OpenELM-1_1B | 36.69 | 71.74 | 65.71 | 27.05 | 75.57 | 36.46 | 36.98 | 63.22 | 51.68 |
| OpenELM-1_1B-Instruct | 41.55 | 71.02 | 71.83 | 25.65 | 75.03 | 39.43 | 45.95 | 64.72 | 54.40 |
| OpenELM-3B | 42.24 | 73.29 | 73.28 | 26.76 | 78.24 | 38.76 | 34.98 | 67.25 | 54.35 |
| OpenELM-3B-Instruct | 47.70 | 72.33 | 76.87 | 24.80 | 79.00 | 38.47 | 38.76 | 67.96 | 55.73 |

See the technical report for more results and comparisons.

Evaluation

Setup

Install the following dependencies:


# install public lm-eval-harness

harness_repo="public-lm-eval-harness"
git clone https://github.com/EleutherAI/lm-evaluation-harness ${harness_repo}
cd ${harness_repo}
# use main branch on 03-15-2024, SHA is dc90fec
git checkout dc90fec
pip install -e .
cd ..

# 66d6242 is the main branch on 2024-04-01 
pip install datasets@git+https://github.com/huggingface/datasets.git@66d6242
pip install 'tokenizers>=0.15.2' 'transformers>=4.38.2' 'sentencepiece>=0.2.0'

Evaluate OpenELM


# OpenELM-270M
hf_model=apple/OpenELM-270M

# this flag is needed because lm-eval-harness sets add_bos_token to False by default,
# but OpenELM uses the LLaMA tokenizer, which requires add_bos_token to be True
add_bos_token=True
batch_size=1

mkdir -p lm_eval_output

shot=0
task=arc_challenge,arc_easy,boolq,hellaswag,piqa,race,winogrande,sciq,truthfulqa_mc2
lm_eval --model hf \
        --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token} \
        --tasks ${task} \
        --device cuda:0 \
        --num_fewshot ${shot} \
        --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
        --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log

shot=5
task=mmlu,winogrande
lm_eval --model hf \
        --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token} \
        --tasks ${task} \
        --device cuda:0 \
        --num_fewshot ${shot} \
        --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
        --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log

shot=25
task=arc_challenge,crows_pairs_english
lm_eval --model hf \
        --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token} \
        --tasks ${task} \
        --device cuda:0 \
        --num_fewshot ${shot} \
        --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
        --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log

shot=10
task=hellaswag
lm_eval --model hf \
        --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token} \
        --tasks ${task} \
        --device cuda:0 \
        --num_fewshot ${shot} \
        --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
        --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
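
Each run above writes its scores under lm_eval_output/. To collect them afterwards (for example, to recompute the averages reported in the tables), a small helper along the following lines can be used. This is a hypothetical sketch: it assumes the harness writes JSON files with a top-level "results" dict keyed by task name, which may vary slightly between harness versions.

# Hypothetical helper: scan lm_eval output JSON files and print per-task metrics.
import glob
import json

for path in sorted(glob.glob("lm_eval_output/**/*.json", recursive=True)):
    with open(path) as f:
        results = json.load(f).get("results", {})
    for task, metrics in results.items():
        # Keep numeric metrics (e.g. acc, acc_norm) and skip stderr entries.
        scores = {k: v for k, v in metrics.items()
                  if isinstance(v, (int, float)) and "stderr" not in k}
        print(path, task, scores)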

Bias, Risks, and Limitations

Our OpenELM models are not trained with any safety guarantees; model outputs can be inaccurate, harmful, biased, or otherwise objectionable in response to user prompts. Therefore, users and developers should conduct extensive safety testing and implement filtering suited to their specific needs.