---
license: apache-2.0
tags:
- OpenAccess AI Collective
- MPT
- axolotl
datasets:
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
- QingyiSi/Alpaca-CoT
- teknium/GPTeacher-General-Instruct
- metaeval/ScienceQA_text_only
- hellaswag
- openai/summarize_from_feedback
- riddle_sense
- gsm8k
- camel-ai/math
- camel-ai/biology
- camel-ai/physics
- camel-ai/chemistry
- winglian/evals

inference: false
---

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
**[💵 Donate to OpenAccess AI Collective](https://github.com/sponsors/OpenAccess-AI-Collective) to help us keep building great tools and models!**

# Minotaur 7B

Minotaur 7B is an instruct fine-tuned model on top of LLaMA-13B. Minotaur 7B is fine-tuned **on only completely open datasets**, making this model reproducible by anyone.

Questions, comments, feedback, looking to donate, or want to help? Reach out on our [Discord](https://discord.gg/PugNNHAF5r) or email [wing@openaccessaicollective.org](mailto:wing@openaccessaicollective.org).

# Prompts

Chat-only style prompts using `USER:` and `ASSISTANT:`.

<img src="https://huggingface.co/openaccess-ai-collective/minotaur-7b/resolve/main/minotaur.png" alt="minotaur" width="600" height="500"/>

# Training Datasets

The Minotaur 7B model is fine-tuned on the following openly available datasets:

- [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered)
- [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/QingyiSi/Alpaca-CoT)
- [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct)
- [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses
- [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization
- [camel-ai/math](https://huggingface.co/datasets/camel-ai/math)
- [camel-ai/physics](https://huggingface.co/datasets/camel-ai/physics)
- [camel-ai/chemistry](https://huggingface.co/datasets/camel-ai/chemistry)
- [camel-ai/biology](https://huggingface.co/datasets/camel-ai/biology)
- [winglian/evals](https://huggingface.co/datasets/winglian/evals) - instruct augmented datasets
- custom synthetic datasets around misconceptions, in-context QA, jokes, N-task problems, and context-insensitivity
- ARC-Easy & ARC-Challenge - instruct augmented for detailed responses, derived from the `train` split
- [hellaswag](https://huggingface.co/datasets/hellaswag) - instruct augmented for detailed explanations, 30K+ rows, derived from the `train` split
- [riddle_sense](https://huggingface.co/datasets/riddle_sense) - instruct augmented
- [gsm8k](https://huggingface.co/datasets/gsm8k) - instruct augmented
- prose generation

# Shoutouts

Special thanks to Nanobit for helping with Axolotl, and to TheBloke for quantizing these models to make them more accessible to all.

# Demo

An HF Spaces demo is available in the [Community ChatBot Arena](https://huggingface.co/spaces/openaccess-ai-collective/rlhf-arena) under the OAAIC Chatbots tab.

## Release Notes

- https://wandb.ai/wing-lian/minotaur-13b/runs/ucgd0x6n

## Build

Minotaur was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 8x RTX A6000s.
- 1 epoch, taking approximately 1.5 hours

## Bias, Risks, and Limitations

Minotaur has not been aligned to human preferences with techniques like RLHF, nor deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
Minotaur was fine-tuned from the base model LLaMA 13B; please refer to its model card's Limitations section for relevant information (included below).

## Benchmarks

TBD

## Examples

Results may vary based on temperature and other sampling settings.

```
The following is a chat between a USER and a friendly and helpful ASSISTANT.
USER: You have 5 apples. You eat 3 pears. How many apples do you have left?
ASSISTANT:You still have 5 apples because eating pears does not affect the number of apples you have.
```

```
The following is a chat between a USER and a friendly and helpful ASSISTANT.
USER: List three words that start with "ex". What is 33 + 77?
ASSISTANT:Three words starting with the letter 'e' are excited, eatable, and expensive. 33+77 equals 106.
```

```
The following is a chat between a USER and a friendly and helpful ASSISTANT.
USER: Solve for x: 5x = 3x + 10
ASSISTANT:To solve the equation 5x = 3x + 10, we can follow these steps:

Step 1: Add both sides of the equation to get a common denominator.
(1/5)x = 1 (common)

Subtract the term with "x" on both sides.
-2/5 x = 10

Step 2: To solve the resulting equation, isolate the variable "x".
First, multiply all terms by 5 to eliminate the fractions.
5x = 15

Now, divide the entire right side by 5.
x = 15/5

Finally, simplify to get the solution in its simplest form.
x = 1

So, the solution is x = 1.
```
Yup, 7B still sucks at math.