|
---
license: apache-2.0
model-index:
- name: Synthia-7B-v3.0
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 62.46
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=migtissera/Synthia-7B-v3.0
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 83.79
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=migtissera/Synthia-7B-v3.0
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 63.9
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=migtissera/Synthia-7B-v3.0
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 43.85
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=migtissera/Synthia-7B-v3.0
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 77.9
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=migtissera/Synthia-7B-v3.0
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 40.03
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=migtissera/Synthia-7B-v3.0
      name: Open LLM Leaderboard
---
|
|
|
# Synthia-7B-v3.0 |
|
SynthIA-7B-v3.0 (Synthetic Intelligent Agent) is a Mistral-7B model trained following the guidance of the Orca-2 paper. It has been fine-tuned for instruction following as well as long-form conversations. The SynthIA-v3.0 dataset contains the Generalized Tree-of-Thought prompt plus 10 new long-form system contexts; however, during training the system context was removed, as suggested in the Orca-2 paper.
|
|
|
<br> |
|
|
|
![Synthia](https://huggingface.co/migtissera/Synthia-7B-v3.0/resolve/main/Synthia-v3.jpg) |
|
|
|
<br> |
|
|
|
|
|
To elicit generalized Tree of Thought + Chain of Thought reasoning, you may use the following system message:
|
```
Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
```
|
|
|
|
|
## Evaluation |
|
|
|
We evaluated Synthia-7B-v3.0 on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI. |
|
|
|
Here are the results on the metrics used by the [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard):
|
|
|
|**Task**|**Metric**|**Value**|
|:------:|:--------:|:-------:|
|*arc_challenge*|acc_norm|62.46|
|*hellaswag*|acc_norm|83.79|
|*mmlu*|acc|63.90|
|*truthfulqa_mc*|mc2|43.85|
|**Total Average**|-|63.50|
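If you want to reproduce numbers like these yourself, the harness exposes a Python entry point. Below is a minimal sketch, not the exact invocation used for this card: the backend name (`"hf"` vs. `"hf-causal"`) and some argument names differ between harness versions, so check the docs for the version you have installed.

```python
# Minimal sketch of scoring one benchmark with EleutherAI's lm-evaluation-harness.
# NOTE: the backend name and arguments vary across harness versions; this follows
# the `simple_evaluate` entry point of recent releases.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf",  # Hugging Face causal LM backend
    model_args="pretrained=migtissera/Synthia-7B-v3.0,dtype=float16",
    tasks=["arc_challenge"],
    num_fewshot=25,  # matches the 25-shot ARC setting above
    batch_size=8,
)
print(results["results"]["arc_challenge"])
```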
|
|
|
<br> |
|
|
|
## Example Usage |
|
|
|
### Prompt format:
|
|
|
```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: What is the difference between an Orca, Dolphin and a Seal?
ASSISTANT:
```
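Concretely, a single-turn prompt in this format can be assembled like this (the full example below extends the same string turn by turn):

```python
system = (
    "Elaborate on the topic using a Tree of Thoughts and backtrack when "
    "necessary to construct a clear, cohesive Chain of Thought reasoning. "
    "Always answer without hesitation."
)
user_input = "What is the difference between an Orca, Dolphin and a Seal?"

# Same layout as the format above: SYSTEM, then alternating USER/ASSISTANT turns.
prompt = f"SYSTEM: {system} \nUSER: {user_input} \nASSISTANT: "
```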
|
|
|
### Code example showing how to use this model:
|
|
|
```python
import json

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "migtissera/Synthia-7B-v3.0"
output_file_path = "./Synthia-7B-conversations.jsonl"

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_8bit=False,
    trust_remote_code=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)


def generate_text(instruction):
    # Tokenize the full conversation so far and move it to the GPU.
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to("cuda")

    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.75,
        "generate_len": 1024,
        "top_k": 50,
    }

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
        )
    # Keep only the newly generated tokens and cut the reply off at the
    # next "USER:" turn, in case the model continues the conversation itself.
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    answer = string.split("USER:")[0].strip()
    return answer


conversation = "SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."

while True:
    user_input = input("You: ")
    llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
    answer = generate_text(llm_prompt)
    print(answer)
    conversation = f"{llm_prompt}{answer}"
    json_data = {"prompt": user_input, "answer": answer}

    # Save your conversation
    with open(output_file_path, "a") as output_file:
        output_file.write(json.dumps(json_data) + "\n")
```
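If you are short on GPU memory, the same model can be loaded with quantized weights instead of float16. A sketch, assuming the `bitsandbytes` package is installed and your `transformers` version still accepts the `load_in_8bit` flag used above (newer releases route this through `BitsAndBytesConfig`):

```python
# Memory-saving variant: load the weights in 8-bit via bitsandbytes.
# Assumes `pip install bitsandbytes` and a transformers version that accepts
# `load_in_8bit` directly; newer releases use BitsAndBytesConfig instead.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    load_in_8bit=True,
    trust_remote_code=True,
)
```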
|
|
|
<br> |
|
|
|
#### Limitations & Biases: |
|
|
|
While this model aims for accuracy, it can occasionally produce inaccurate or misleading results. |
|
|
|
Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content. |
|
|
|
Exercise caution and cross-check information when necessary. This is an uncensored model. |
|
|
|
|
|
<br> |
|
|
|
|
|
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) |
|
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_migtissera__Synthia-7B-v3.0) |
|
|
|
| Metric |Value|
|---------------------------------|----:|
|Avg. |61.99|
|AI2 Reasoning Challenge (25-Shot)|62.46|
|HellaSwag (10-Shot) |83.79|
|MMLU (5-Shot) |63.90|
|TruthfulQA (0-shot) |43.85|
|Winogrande (5-shot) |77.90|
|GSM8k (5-shot) |40.03|
|
|
|
|