---
license: other
library_name: transformers
tags:
- chatml
- finetune
- gpt4
- synthetic data
- custom_code
- qwen2
datasets:
- teknium/OpenHermes-2.5
license_name: tongyi-qianwen-research
license_link: https://huggingface.co/Qwen/Qwen1.5-1.8B-Chat/raw/main/LICENSE
model-index:
- name: Reyna-Mini-1.8B-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 35.24
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/Reyna-Mini-1.8B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 60.42
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/Reyna-Mini-1.8B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 45.37
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/Reyna-Mini-1.8B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 41.4
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/Reyna-Mini-1.8B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.85
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/Reyna-Mini-1.8B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 5.46
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/Reyna-Mini-1.8B-v0.1
name: Open LLM Leaderboard
---

- Finetuned from Qwen/Qwen1.5-1.8B-Chat with SFT on teknium's OpenHermes-2.5 dataset.
- This marks the inception of my Qwen1.5 LLM series; this model lays the foundation for what lies ahead.
- Prompt format: ChatML (a sketch of building it programmatically follows this list).

```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

- Next step would be a DPO train on top of this SFT checkpoint.
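The same ChatML string can also be produced with the chat template bundled with the tokenizer, rather than by hand. A minimal sketch, assuming the tokenizer ships a ChatML template (which Qwen1.5 chat models generally do; the example message is illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("aloobun/Reyna-Mini-1.8B-v0.1", trust_remote_code=True)
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain probability theory in one sentence."},
]
# add_generation_prompt=True appends "<|im_start|>assistant\n" so the model
# continues the conversation as the assistant.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```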
Benchmarks:

| Avg. | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|---|---|---|---|---|---|---|
| 41.46 | 35.24 | 60.42 | 45.37 | 41.40 | 60.85 | 5.46 |
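These scores come from the Open LLM Leaderboard, which runs EleutherAI's lm-evaluation-harness. A rough local reproduction for a single task might look like the sketch below; exact task names and flags vary across harness versions, so treat the specifics as assumptions to verify against your install:

```bash
# Sketch: evaluate ARC-Challenge (25-shot) with lm-evaluation-harness (v0.4-style CLI).
lm_eval --model hf \
  --model_args pretrained=aloobun/Reyna-Mini-1.8B-v0.1,dtype=bfloat16,trust_remote_code=True \
  --tasks arc_challenge \
  --num_fewshot 25 \
  --batch_size 8
```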
Example:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer, StoppingCriteria
import torch

class MyStoppingCriteria(StoppingCriteria):
    """Stops generation once `target_sequence` appears in the decoded output."""
    def __init__(self, target_sequence, prompt):
        self.target_sequence = target_sequence
        self.prompt = prompt

    def __call__(self, input_ids, scores, **kwargs):
        # Decode everything generated so far (relies on the module-level
        # `tokenizer` defined below) and strip the prompt, so the stop
        # string is only matched against newly generated tokens.
        generated_text = tokenizer.decode(input_ids[0])
        generated_text = generated_text.replace(self.prompt, '')
        return self.target_sequence in generated_text

    # `generate` expects a StoppingCriteriaList; implementing __len__ and
    # __iter__ lets a single criterion be passed in directly.
    def __len__(self):
        return 1

    def __iter__(self):
        yield self

modelpath = "aloobun/Reyna-Mini-1.8B-v0.1"

model = AutoModelForCausalLM.from_pretrained(
    modelpath,
    torch_dtype=torch.bfloat16,
    device_map="cuda",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    modelpath,
    trust_remote_code=True,
    use_fast=False,
)

# ChatML prompt; note the <|im_end|> closing the user turn.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nReflect on a real-world scenario where understanding "
    "probability theory could make a significant difference in decision-making.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
encoded_input = tokenizer(prompt, return_tensors='pt')
input_ids = encoded_input['input_ids'].cuda()
streamer = TextStreamer(tokenizer=tokenizer, skip_prompt=True)

op = model.generate(
    input_ids,
    streamer=streamer,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    temperature=0.6,
    top_p=0.8,
    max_new_tokens=512,
    stopping_criteria=MyStoppingCriteria("<|im_end|>", prompt),
)
```
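Since `<|im_end|>` is registered as a special token in Qwen1.5's tokenizer (an assumption worth verifying on your install), the custom criterion can usually be replaced by passing its token id as `eos_token_id`. A minimal sketch:

```python
# Sketch: stop on <|im_end|> via eos_token_id instead of a custom criterion.
# Assumes <|im_end|> maps to a single token id in this tokenizer.
im_end_id = tokenizer.convert_tokens_to_ids("<|im_end|>")
op = model.generate(
    input_ids,
    streamer=streamer,
    do_sample=True,
    temperature=0.6,
    top_p=0.8,
    max_new_tokens=512,
    eos_token_id=im_end_id,
    pad_token_id=im_end_id,
)
```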
Output:
```
One real-world scenario where understanding probability theory can make a significant difference in decision-making is in the field of finance. Financial institutions, such as banks and investment firms, must make decisions about lending money to individuals or businesses, and how much risk they should take on. In this case, understanding probability theory would help financial analysts and investors make more informed decisions by providing them with information about the likelihood of different outcomes. For example, if an investor wants to invest in a particular stock, they might want to understand the probability that it will perform well over time, based on historical data and market trends. They might also be interested in understanding the probability of defaulting on a loan, which would help them evaluate whether it's worth taking on that risk. Probability theory provides valuable insights into how events are likely to occur and what factors contribute to those probabilities. By using statistical models and simulations, financial professionals can estimate the likelihood of different scenarios and make better-informed decisions about how to allocate their resources. This can lead to increased profits for financial institutions and improved customer satisfaction for individual investors.<|im_end|>
```
Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/Reyna-Mini-1.8B-v0.1).

| Metric | Value |
|---|---|
| Avg. | 41.46 |
| AI2 Reasoning Challenge (25-Shot) | 35.24 |
| HellaSwag (10-Shot) | 60.42 |
| MMLU (5-Shot) | 45.37 |
| TruthfulQA (0-shot) | 41.40 |
| Winogrande (5-shot) | 60.85 |
| GSM8k (5-shot) | 5.46 |