bleysg committed
Commit b1c516b
1 Parent(s): 5e3db72

Update README.md

Files changed (1):
  1. README.md +4 -4
README.md CHANGED
@@ -30,7 +30,7 @@ https://AlignmentLab.ai
 
 # Evaluation
 
-We have evaluated OpenOrca_Preview1-200k-GPT4_LLaMA-13B on hard reasoning tasks from BigBench-Hard and AGIEval as outlined in the Orca paper.
+We have evaluated OpenOrca-Preview1-13B on hard reasoning tasks from BigBench-Hard and AGIEval as outlined in the Orca paper.
 
 Our average performance for BigBench-Hard: 0.3753
 
@@ -42,10 +42,10 @@ We've done the same and have found our score averages to ~60% of the total impro
 So we got 60% of the improvement with 6% of the data!
 
 ## BigBench-Hard Performance
-![OpenOrca Preview1 BigBench-Hard Performance](https://huggingface.co/Open-Orca/OpenOrca_Preview1-200k-GPT4_LLaMA-13B/resolve/main/OO_Preview1_BigBenchHard.png "BigBench-Hard Performance")
+![OpenOrca Preview1 BigBench-Hard Performance](https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B/resolve/main/OO_Preview1_BigBenchHard.png "BigBench-Hard Performance")
 
 ## AGIEval Performance
-![OpenOrca Preview1 AGIEval Performance](https://huggingface.co/Open-Orca/OpenOrca_Preview1-200k-GPT4_LLaMA-13B/resolve/main/OO_Preview1_AGIEval.png "AGIEval Performance")
+![OpenOrca Preview1 AGIEval Performance](https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B/resolve/main/OO_Preview1_AGIEval.png "AGIEval Performance")
 
 We will report our results on [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Evals once we receive them.
 
@@ -77,7 +77,7 @@ Please await our full releases for further training details.
 year = {2023},
 publisher = {HuggingFace},
 journal = {HuggingFace repository},
-howpublished = {\url{https://https://huggingface.co/Open-Orca/OpenOrca_Preview1-200k-GPT4_LLaMA-13B},
+howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B}},
 }
 ```
 ```bibtex
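The "60% of the improvement with 6% of the data" claim in the diff above is a simple ratio of score gains over a baseline. A minimal sketch of that arithmetic (the scores below are placeholder values for illustration, not figures from this commit):

```python
def improvement_fraction(our_score: float, baseline_score: float, orca_score: float) -> float:
    """Fraction of the baseline->Orca score gain captured by our model."""
    return (our_score - baseline_score) / (orca_score - baseline_score)

# Placeholder example: baseline 0.35, Orca 0.45, our model 0.41
# -> we recover 0.06 of the 0.10 gain, i.e. 60% of the improvement.
fraction = improvement_fraction(0.41, 0.35, 0.45)
print(f"{fraction:.0%} of the improvement")
```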