license: apache-2.0
model-index:
- name: Yi-1.5-34B-Chat
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 60.67
name: strict accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=01-ai/Yi-1.5-34B-Chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 44.26
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=01-ai/Yi-1.5-34B-Chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 23.34
name: exact match
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=01-ai/Yi-1.5-34B-Chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 15.32
name: acc_norm
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=01-ai/Yi-1.5-34B-Chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 13.06
name: acc_norm
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=01-ai/Yi-1.5-34B-Chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 39.12
name: accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=01-ai/Yi-1.5-34B-Chat
name: Open LLM Leaderboard
GitHub • Discord • Twitter • WeChat
Paper • Tech Blog • FAQ • Learning Hub
Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction following, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
| Model | Context Length | Pre-trained Tokens |
|---|---|---|
| Yi-1.5 | 4K, 16K, 32K | 3.6T |
Models
Chat models
| Name | Download |
|---|---|
| Yi-1.5-34B-Chat | Hugging Face • ModelScope • wisemodel |
| Yi-1.5-34B-Chat-16K | Hugging Face • ModelScope • wisemodel |
| Yi-1.5-9B-Chat | Hugging Face • ModelScope • wisemodel |
| Yi-1.5-9B-Chat-16K | Hugging Face • ModelScope • wisemodel |
| Yi-1.5-6B-Chat | Hugging Face • ModelScope • wisemodel |

Base models

| Name | Download |
|---|---|
| Yi-1.5-34B | Hugging Face • ModelScope • wisemodel |
| Yi-1.5-34B-32K | Hugging Face • ModelScope • wisemodel |
| Yi-1.5-9B | Hugging Face • ModelScope • wisemodel |
| Yi-1.5-9B-32K | Hugging Face • ModelScope • wisemodel |
| Yi-1.5-6B | Hugging Face • ModelScope • wisemodel |
Benchmarks
Chat models
Yi-1.5-34B-Chat matches or outperforms larger models on most benchmarks.
Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.
Base models
Yi-1.5-34B matches or outperforms larger models on some benchmarks.
Yi-1.5-9B is the top performer among similarly sized open-source models.
Quick Start
To get up and running with Yi-1.5 models quickly, see the README.
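As a complement to the README, the chat flow can be sketched in Python. This is a minimal, hedged sketch: the `build_chatml_prompt` helper is hypothetical (not part of any library) and assumes Yi chat models use a ChatML-style template with `<|im_start|>`/`<|im_end|>` markers; verify against the tokenizer's own chat template before relying on it. The commented `transformers` usage at the end is the standard `AutoTokenizer`/`AutoModelForCausalLM` pattern and requires downloading the model weights.

```python
# Hypothetical helper that renders chat messages into a ChatML-style prompt
# (assumption: Yi-1.5 chat models use this format; check the tokenizer's
# chat template to confirm).

def build_chatml_prompt(messages, add_generation_prompt=True):
    """Render a list of {'role': ..., 'content': ...} dicts into a prompt string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    if add_generation_prompt:
        # Cue the model to produce the assistant turn next.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([{"role": "user", "content": "hi"}])
print(prompt)

# Sketch of the same flow with transformers (downloads ~34B of weights,
# so it is left as comments here):
#
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tok = AutoTokenizer.from_pretrained("01-ai/Yi-1.5-34B-Chat")
# model = AutoModelForCausalLM.from_pretrained(
#     "01-ai/Yi-1.5-34B-Chat", device_map="auto", torch_dtype="auto")
# inputs = tok.apply_chat_template(
#     [{"role": "user", "content": "hi"}],
#     add_generation_prompt=True, return_tensors="pt").to(model.device)
# out = model.generate(inputs, max_new_tokens=256)
# print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```

In practice, prefer `tokenizer.apply_chat_template` over hand-rolling the prompt, since the template shipped with the tokenizer is authoritative.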
Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=01-ai/Yi-1.5-34B-Chat
| Metric | Value |
|---|---|
| Avg. | 32.63 |
| IFEval (0-Shot) | 60.67 |
| BBH (3-Shot) | 44.26 |
| MATH Lvl 5 (4-Shot) | 23.34 |
| GPQA (0-shot) | 15.32 |
| MuSR (0-shot) | 13.06 |
| MMLU-PRO (5-shot) | 39.12 |