---
language:
- en
license: other
tags:
- HelpingAI
- vortex
datasets:
- OEvortex/uncensored-vortex
license_name: hsul
license_link: https://huggingface.co/OEvortex/vortex-3b/raw/main/LICENSE.md
pipeline_tag: text-generation
model-index:
- name: vortex-3b-v2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 39.68
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OEvortex/vortex-3b-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 65.04
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OEvortex/vortex-3b-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 25.09
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OEvortex/vortex-3b-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 33.8
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OEvortex/vortex-3b-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 59.12
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OEvortex/vortex-3b-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 2.05
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OEvortex/vortex-3b-v2
name: Open LLM Leaderboard
---
![Vortex 3b](vortex%203b.png)
**Model Overview**

Vortex-3b-v2 is an upgraded version of the Vortex-3b model: a 2.78-billion-parameter causal language model created by OEvortex, derived from EleutherAI's Pythia-2.8b and trained on 79% of the uncensored-vortex dataset.
```python
from transformers import pipeline
# Initialize the pipeline
pipe = pipeline("text-generation", model="OEvortex/vortex-3b-v2")
# Use the pipeline
text = "Once upon a time"
generated_text = pipe(text, max_length=100, do_sample=True)[0]['generated_text']
print(generated_text)
```
```python
# Use a pipeline as a high-level helper
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="OEvortex/vortex-3b-v2",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
res = pipe("Explain to me the difference between nuclear fission and fusion.")
# The text-generation pipeline returns its output under the "generated_text" key
print(res[0]["generated_text"])
```
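For finer control over tokenization and decoding, the model can also be loaded through the lower-level `AutoModelForCausalLM` API. This is a minimal sketch assuming the standard Hugging Face loading path (the original card only shows the pipeline helper); generation parameters like `max_new_tokens=80` are illustrative choices, not values from the card.

```python
# Lower-level loading sketch (assumed standard transformers usage,
# not taken from the original model card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("OEvortex/vortex-3b-v2")
model = AutoModelForCausalLM.from_pretrained(
    "OEvortex/vortex-3b-v2",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Once upon a time"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    # generate() returns the prompt tokens followed by the sampled continuation
    output_ids = model.generate(**inputs, max_new_tokens=80, do_sample=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```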
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_OEvortex__vortex-3b-v2).
| Metric |Value|
|---------------------------------|----:|
|Avg. |37.46|
|AI2 Reasoning Challenge (25-Shot)|39.68|
|HellaSwag (10-Shot) |65.04|
|MMLU (5-Shot) |25.09|
|TruthfulQA (0-shot) |33.80|
|Winogrande (5-shot) |59.12|
|GSM8k (5-shot) | 2.05|
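The reported average is the arithmetic mean of the six benchmark scores, which can be checked directly:

```python
# Verify the leaderboard average against the six per-benchmark scores above.
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 39.68,
    "HellaSwag (10-Shot)": 65.04,
    "MMLU (5-Shot)": 25.09,
    "TruthfulQA (0-shot)": 33.80,
    "Winogrande (5-shot)": 59.12,
    "GSM8k (5-shot)": 2.05,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 37.46
```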