migtissera committed on
Commit 236b393
1 Parent(s): 403be1c

Update README.md

Files changed (1)
  1. README.md +8 -5
README.md CHANGED
@@ -2,8 +2,11 @@
  license: apache-2.0
  ---
 
- # Synthia-11B-v3.0
- SynthIA-11B-v3.0 (Synthetic Intelligent Agent) is a model trained with guidance on Orca-2 paper. It has been fine-tuned for instruction following as well as having long-form conversations. SynthIA-3.0 dataset contains the Generarized Tree-of-Thought prompt plus 10 more new long-form system contexts. However, in the training phase the system context was removed as suggested in Orca-2 paper.
+ # Synthia-v3.0-11B
+ SynthIA-v3.0-11B (Synthetic Intelligent Agent) is a general-purpose Large Language Model (LLM). It was trained on the Synthia-v3.0 dataset, which contains the Generalized Tree-of-Thought prompt plus 10 new long-form system contexts.
+
+ This model was trained following the principles of the LIMA (Less Is More for Alignment) paper, with ~10K high-quality samples generated using GPT-4-Turbo. It has been fine-tuned for instruction following as well as for long-form conversations.
+
 
  <br>
 
@@ -20,7 +23,7 @@ Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to
 
  ## Evaluation
 
- We evaluated Synthia-11B-v3.0 on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
+ We evaluated Synthia-v3.0-11B on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
 
  Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). Section to follow.
 
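For reference, an evaluation along these lines can be driven from the harness's Python entry point. The sketch below assumes a recent lm-evaluation-harness release (v0.4+, installable as `lm-eval`) where `simple_evaluate` is exposed; the task list, dtype, and batch size are illustrative choices, not the settings behind the numbers reported in the README.

```python
# Minimal sketch: evaluating the model with EleutherAI's lm-evaluation-harness.
# Assumes lm-evaluation-harness v0.4+; task names and model arguments are
# illustrative only and vary between harness releases.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",  # Hugging Face transformers backend
    model_args="pretrained=migtissera/Synthia-v3.0-11B,dtype=float16",
    tasks=["arc_challenge", "hellaswag", "winogrande"],  # example tasks
    num_fewshot=0,
    batch_size=4,
)

# Aggregated per-task metrics.
print(results["results"])
```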
 
@@ -51,8 +54,8 @@ ASSISTANT:
  import torch, json
  from transformers import AutoModelForCausalLM, AutoTokenizer
 
- model_path = "migtissera/Synthia-11B-v3.0"
- output_file_path = "./Synthia-11B-conversations.jsonl"
+ model_path = "migtissera/Synthia-v3.0-11B"
+ output_file_path = "./Synthia-v3.0-11B-conversations.jsonl"
 
  model = AutoModelForCausalLM.from_pretrained(
  model_path,
 
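The hunk above shows only a fragment of the README's usage script. A self-contained sketch of the same flow follows: it loads the renamed checkpoint with transformers, generates one reply, and appends the exchange to the JSONL file named by `output_file_path`. The SYSTEM/USER/ASSISTANT layout is inferred from the "ASSISTANT:" hunk context, and the prompt text, dtype, and sampling settings are illustrative assumptions rather than the README's exact values.

```python
# Self-contained sketch (not the README's full script): load the model,
# generate one reply, and log the exchange to a JSONL file.
import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "migtissera/Synthia-v3.0-11B"
output_file_path = "./Synthia-v3.0-11B-conversations.jsonl"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # assumption: fp16 weights on a single GPU
    device_map="auto",
)

# Placeholder turns; the SYSTEM/USER/ASSISTANT layout is inferred from the
# "ASSISTANT:" hunk context. Substitute the README's own system context here.
prompt = (
    "SYSTEM: You are Synthia, a helpful assistant.\n"
    "USER: Explain the difference between nuclear fission and fusion.\n"
    "ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
)
answer = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)

# Append the exchange to the conversations log, one JSON object per line.
with open(output_file_path, "a") as f:
    f.write(json.dumps({"prompt": prompt, "answer": answer}) + "\n")

print(answer)
```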