winglian committed
Commit 95fd2eb
1 Parent(s): 84d6780

Update README.md

Files changed (1): README.md (+75 −0)
README.md CHANGED
@@ -12,6 +12,19 @@ widget:
 datasets:
 - bigcode/the-stack-dedup
 - tiiuae/falcon-refinedweb
+ - ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
+ - QingyiSi/Alpaca-CoT
+ - teknium/GPTeacher-General-Instruct
+ - metaeval/ScienceQA_text_only
+ - hellaswag
+ - openai/summarize_from_feedback
+ - riddle_sense
+ - gsm8k
+ - camel-ai/math
+ - camel-ai/biology
+ - camel-ai/physics
+ - camel-ai/chemistry
+ - winglian/evals
 metrics:
 - code_eval
 - mmlu
@@ -85,7 +98,69 @@ extra_gated_fields:
  I accept the above license agreement, and will use the Model complying with the set of use restrictions and sharing requirements: checkbox
 ---
 
+ [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
+ **[💵 Donate to OpenAccess AI Collective](https://github.com/sponsors/OpenAccess-AI-Collective) to help us keep building great tools and models!**
+
+ # Minotaur 15B
+
+ Minotaur 15B is an instruction fine-tuned model built on top of StarCoder Plus. It is fine-tuned **only on completely open datasets**, making the model reproducible by anyone.
+
+ Questions, comments, feedback, looking to donate, or want to help? Reach out on our [Discord](https://discord.gg/PugNNHAF5r) or email [wing@openaccessaicollective.org](mailto:wing@openaccessaicollective.org)
+
+ # Prompts
+ Chat-only style prompts using `USER:` and `ASSISTANT:`.
+
+ <img src="https://huggingface.co/openaccess-ai-collective/minotaur-13b/resolve/main/minotaur.png" alt="minotaur" width="600" height="500"/>
+
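The chat format above can be sketched in a small helper. This is a minimal illustration only: the exact spacing and newline conventions are assumptions based on the description, not read from the model's published files.

```python
# Sketch of the USER:/ASSISTANT: chat format described above. The exact
# spacing/newline layout is an assumption, not taken from the model's files.
def build_prompt(turns):
    """Render (role, message) pairs and leave a trailing 'ASSISTANT:'
    so the model completes the assistant's reply."""
    lines = [f"{role}: {message}" for role, message in turns]
    lines.append("ASSISTANT:")
    return "\n".join(lines)

print(build_prompt([("USER", "Write a limerick about mazes.")]))
```

The resulting string is what you would pass to the model as a plain text-generation prompt; multi-turn chats just append alternating `USER:`/`ASSISTANT:` pairs before the trailing `ASSISTANT:`.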
+ # Training Datasets
+
+ Minotaur 15B is fine-tuned on the following openly available datasets:
+
+ - [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered)
+ - [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT)
+ - [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct)
+ - [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses
+ - [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization
+ - [camel-ai/math](https://huggingface.co/datasets/camel-ai/math)
+ - [camel-ai/physics](https://huggingface.co/datasets/camel-ai/physics)
+ - [camel-ai/chemistry](https://huggingface.co/datasets/camel-ai/chemistry)
+ - [camel-ai/biology](https://huggingface.co/datasets/camel-ai/biology)
+ - [winglian/evals](https://huggingface.co/datasets/winglian/evals) - instruct augmented datasets
+ - custom synthetic datasets around misconceptions, in-context QA, jokes, N-task problems, and context-insensitivity
+ - ARC-Easy & ARC-Challenge - instruct augmented for detailed responses, derived from the `train` split
+ - [hellaswag](https://huggingface.co/datasets/hellaswag) - instruct augmented for detailed explanations, 30K+ rows, derived from the `train` split
+ - [riddle_sense](https://huggingface.co/datasets/riddle_sense) - instruct augmented, derived from the `train` split
+ - [gsm8k](https://huggingface.co/datasets/gsm8k) - instruct augmented, derived from the `train` split
+ - prose generation
+
+ # Shoutouts
+
+ Special thanks to Nanobit for helping with Axolotl, and to TheBloke for quantizing these models so they are more accessible to all.
+
+ # Demo
+
+ An HF demo in Spaces is available in the [Community ChatBot Arena](https://huggingface.co/spaces/openaccess-ai-collective/rlhf-arena) under the OAAIC Chatbots tab.
+
+ ## Release Notes
+
+ - https://wandb.ai/wing-lian/minotaur-16b-8k/runs/tshgbl2k
+
+ ## Build
+
+ Minotaur was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 4x A100 80GB GPUs
+ - 1 epoch, taking approximately 30 hours
+
+ ## Bias, Risks, and Limitations
+ Minotaur has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
+ Minotaur was fine-tuned from the base model StarCoderPlus; please refer to its model card's Limitations section for relevant information (included below).
+
+ ## Benchmarks
+
+ TBD
+
+ ## Examples
 
+ TBD
 
  # StarCoderPlus