---
license: bigscience-bloom-rail-1.0
---
Training code: https://github.com/zejunwang1/bloom_tuning
You can use the following code to call the bloom-820m-chat model and generate a conversational response:
```python
from transformers import BloomTokenizerFast, BloomForCausalLM

model_name_or_path = "WangZeJun/bloom-820m-chat"

# Load the tokenizer and model, move the model to the GPU, and switch to eval mode.
tokenizer = BloomTokenizerFast.from_pretrained(model_name_or_path)
model = BloomForCausalLM.from_pretrained(model_name_or_path).cuda()
model = model.eval()

# A single user turn is terminated with the </s> token.
input_pattern = "{}</s>"
text = "你好"
input_ids = tokenizer(input_pattern.format(text), return_tensors="pt").input_ids
input_ids = input_ids.cuda()

# Sample a response; generation stops at the end-of-sequence token.
outputs = model.generate(input_ids, do_sample=True, max_new_tokens=1024, top_p=0.85,
                         temperature=0.3, repetition_penalty=1.2,
                         eos_token_id=tokenizer.eos_token_id)

# Keep only the newly generated tokens and decode them,
# skipping the trailing </s> special token.
input_ids_len = input_ids.size(1)
response_ids = outputs[0][input_ids_len:]
response = tokenizer.decode(response_ids, skip_special_tokens=True)
print(response)
```
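
The snippet above handles a single turn. For a multi-turn conversation you can carry the dialogue history forward in the prompt. The sketch below is illustrative only: the `chat` helper is hypothetical, and it assumes previous turns are simply concatenated with `</s>` separators, extrapolating from the single-turn pattern above (see the bloom_tuning repo for the exact training-time prompt format).

```python
import torch
from transformers import BloomTokenizerFast, BloomForCausalLM

model_name_or_path = "WangZeJun/bloom-820m-chat"
tokenizer = BloomTokenizerFast.from_pretrained(model_name_or_path)
model = BloomForCausalLM.from_pretrained(model_name_or_path).cuda().eval()

def chat(history):
    # Hypothetical prompt layout: every previous turn (user and model)
    # is terminated with </s>, mirroring the single-turn pattern above.
    prompt = "".join(f"{turn}</s>" for turn in history)
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
    with torch.no_grad():
        outputs = model.generate(
            input_ids, do_sample=True, max_new_tokens=1024, top_p=0.85,
            temperature=0.3, repetition_penalty=1.2,
            eos_token_id=tokenizer.eos_token_id,
        )
    # Decode only the newly generated tokens and keep them in the history.
    response = tokenizer.decode(outputs[0][input_ids.size(1):],
                                skip_special_tokens=True)
    history.append(response)
    return response

history = ["你好"]
print(chat(history))            # first reply
history.append("请介绍一下你自己")
print(chat(history))            # second reply, conditioned on the context
```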
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_WangZeJun__bloom-820m-chat).

| Metric              | Value |
|---------------------|------:|
| Avg.                | 26.55 |
| ARC (25-shot)       | 23.38 |
| HellaSwag (10-shot) | 34.16 |
| MMLU (5-shot)       | 25.98 |
| TruthfulQA (0-shot) | 40.32 |
| Winogrande (5-shot) | 53.20 |
| GSM8K (5-shot)      |  0.00 |
| DROP (3-shot)       |  8.85 |
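
These scores come from EleutherAI's lm-evaluation-harness (the leaderboard runs a pinned fork of it, so local numbers may differ slightly). As a minimal sketch, assuming lm-evaluation-harness v0.4+ and its `simple_evaluate` Python API, one task could be re-run locally like this; the few-shot count mirrors the table above:

```python
# A minimal sketch, assuming lm-evaluation-harness v0.4+ is installed
# (pip install lm-eval); the leaderboard uses a pinned fork, so scores
# may not match the table exactly.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=WangZeJun/bloom-820m-chat",
    tasks=["arc_challenge"],   # ARC, scored with 25 shots on the leaderboard
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])
```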