---
license: apache-2.0
datasets:
- databricks/databricks-dolly-15k
- lucasmccabe-lmi/CodeAlpaca-20k
model-index:
- name: Instruct_Yi-6B_Dolly_CodeAlpaca
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 53.16
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 75.3
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 63.06
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 41.42
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 75.37
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 28.35
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca
      name: Open LLM Leaderboard
---
# Instruct_Yi-6B_Dolly_CodeAlpaca
Fine-tuned from Yi-6B on the Dolly15k dataset, split 90% for training and 10% for validation. Trained for 2.0 epochs with LoRA and a 2048-token context window. Compared with https://huggingface.co/HenryJJ/Instruct_Yi-6B_Dolly15K, this model adds the CodeAlpaca_20K dataset to improve coding ability.
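For illustration, the 90/10 split described above can be reproduced with the `datasets` library (a minimal sketch; the seed is an assumption, and the real pipeline lives in the training script linked below):

```
from datasets import load_dataset

# Load the Dolly instruction dataset and hold out 10% for validation,
# matching the 90/10 split described above (the seed is assumed).
dolly = load_dataset("databricks/databricks-dolly-15k", split="train")
split = dolly.train_test_split(test_size=0.1, seed=42)
train_ds, val_ds = split["train"], split["test"]
print(len(train_ds), len(val_ds))  # roughly 13.5k train / 1.5k validation examples
```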
# Model Details
* **Trained by**: HenryJJ.
* **Model type:** **Instruct_Yi-6B_Dolly_CodeAlpaca** is an auto-regressive language model based on the Llama 2 transformer architecture.
* **Language(s)**: English
* **License for Instruct_Yi-6B_Dolly_CodeAlpaca**: apache-2.0 license
# Prompting
## Prompt Template With Context
```
<|startoftext|>[INST]{instruction} {context}[/INST]{response}<|endoftext|>
```

For example:

```
<|startoftext|>[INST]
Write a 10-line poem about a given topic
The topic is about racecars
[/INST]
```
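A small helper for filling the template (a hypothetical convenience function, not part of the model's tooling):

```
def build_prompt(instruction: str, context: str = "") -> str:
    # Fill the instruction template shown above; context is optional.
    body = f"{instruction} {context}".strip()
    return f"<|startoftext|>[INST]{body}[/INST]"
```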
## Prompt Template Without Context
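The format is the same with the `{context}` field omitted:

```
<|startoftext|>[INST]{instruction}[/INST]{response}<|endoftext|>
```

For example: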
```
<|startoftext|>[INST]
Who was the second president of the United States?
[/INST]
```
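A minimal inference sketch using the `transformers` library and the template above (the generation settings are illustrative assumptions, not the author's):

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HenryJJ/Instruct_Yi-6B_Dolly_CodeAlpaca"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a prompt in the no-context template shown above.
prompt = "<|startoftext|>[INST]\nWho was the second president of the United States?\n[/INST]"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens, dropping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```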
# Training script
Fully open-sourced at https://github.com/hengjiUSTC/learn-llm/blob/main/trl_finetune.py. Trained on an AWS g4dn.12xlarge instance for 10 hours.
```
python3 trl_finetune.py --config configs/yi_6b-large.yml
```
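The config file is not reproduced here, but the LoRA side of the setup looks roughly like the following `peft` configuration (every value below is an illustrative assumption; `configs/yi_6b-large.yml` in the repository above is authoritative):

```
from peft import LoraConfig

# Illustrative LoRA adapter settings; the real hyperparameters live in
# configs/yi_6b-large.yml, not here.
lora_config = LoraConfig(
    r=16,                 # adapter rank (assumed)
    lora_alpha=32,        # scaling factor (assumed)
    lora_dropout=0.05,    # (assumed)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```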
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_HenryJJ__Instruct_Yi-6B_Dolly_CodeAlpaca).
| Metric |Value|
|---------------------------------|----:|
|Avg. |56.11|
|AI2 Reasoning Challenge (25-Shot)|53.16|
|HellaSwag (10-Shot) |75.30|
|MMLU (5-Shot) |63.06|
|TruthfulQA (0-shot) |41.42|
|Winogrande (5-shot) |75.37|
|GSM8k (5-shot) |28.35|