datasets:
- teknium/GPTeacher-General-Instruct
- metaeval/ScienceQA_text_only
- hellaswag
- openai/summarize_from_feedback
- riddle_sense
- gsm8k
- ewof/code-alpaca-instruct-unfiltered
language:
- en
library_name: transformers

Questions, comments, feedback, looking to donate, or want to help? Reach out on our [Discord](https://discord.gg/EqrvvehG) or email [wing@openaccessaicollective.org](mailto:wing@openaccessaicollective.org)

# Training Datasets

Manticore 13B Chat is a Llama 13B model fine-tuned on the following datasets, along with the datasets from the original Manticore 13B.

**Manticore 13B Chat was trained on 25% of the datasets below. The datasets were merged, shuffled, and then sharded into 4 parts.**

- de-duped pygmalion dataset
- [riddle_sense](https://huggingface.co/datasets/riddle_sense) - instruct augmented
- hellaswag, updated for detailed explanations with 30K+ rows
- [gsm8k](https://huggingface.co/datasets/gsm8k) - instruct augmented
- [ewof/code-alpaca-instruct-unfiltered](https://huggingface.co/datasets/ewof/code-alpaca-instruct-unfiltered)

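The merge/shuffle/shard step described above can be sketched in a few lines of plain Python. This is an illustrative sketch only, not the actual Manticore preprocessing (that lives in this repo's config folder); the function name and seed are made up for the example.

```python
import random

def merge_shuffle_shard(datasets, num_shards=4, seed=0):
    """Merge several datasets, shuffle the result, and split it into shards.

    `datasets` is a list of lists of examples. Training on one of the 4
    resulting shards corresponds to training on ~25% of the merged data.
    """
    merged = [example for dataset in datasets for example in dataset]
    random.Random(seed).shuffle(merged)
    # Round-robin sharding keeps shard sizes within one example of each other.
    return [merged[i::num_shards] for i in range(num_shards)]

shards = merge_shuffle_shard([list(range(10)), list(range(10, 22))])
print(len(shards), [len(s) for s in shards])  # 4 shards covering all 22 examples
```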
Manticore 13B

- [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) - based on a cleaned and de-duped subset
- [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/QingyiSi/Alpaca-CoT)
- [GPT4-LLM-Cleaned](https://huggingface.co/datasets/teknium/GPT4-LLM-Cleaned)
- [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct)
- ARC-Easy & ARC-Challenge - instruct augmented for detailed responses, derived from the `train` split
- [hellaswag](https://huggingface.co/datasets/hellaswag) - 5K row subset of instruct augmented for concise responses, derived from the `train` split
- [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses
- [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization

Not added from Manticore 13B:

- mmlu - mmlu datasets were not added to this model as the `test` split is used for benchmarks

# Demo

Try out the model in HF Spaces. The demo uses a quantized GGML version of the model.

## Release Notes

- https://wandb.ai/wing-lian/manticore-13b-v2/runs/hxr3aiiw

## Build

Manticore was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 8xA100 80GB
- 3 epochs taking approximately 8 hours. No further epochs will be released.
- The configuration to duplicate this build is provided in this repo's [/config folder](https://huggingface.co/openaccess-ai-collective/manticore-13b/tree/main/configs).

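An Axolotl build is driven by a YAML config. The fragment below sketches only the general shape of such a config; every key and value here is an illustrative assumption, and the real configuration used for this build is the one in the /config folder referenced in this repo.

```yaml
# Illustrative Axolotl-style config sketch (not the actual Manticore config;
# see this repo's /config folder for the real values).
base_model: huggyllama/llama-13b
datasets:
  - path: gsm8k
    type: alpaca
num_epochs: 3
sequence_len: 2048
micro_batch_size: 1
```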
## Bias, Risks, and Limitations

Manticore was fine-tuned from the base model LLaMA 13B; please refer to its model card.

## Examples

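The transcripts that follow use a plain `USER:`/`ASSISTANT:` chat format. A minimal helper for assembling such a prompt before tokenization might look like this; the helper name and turn structure are illustrative assumptions, not part of any released code.

```python
def build_chat_prompt(turns, reply_prefix="ASSISTANT:"):
    """Render (role, message) pairs into the USER:/ASSISTANT: format
    used in the examples, ending with an open ASSISTANT: turn for the
    model to complete."""
    lines = [f"{role}: {message}" for role, message in turns]
    lines.append(reply_prefix)
    return "\n".join(lines)

prompt = build_chat_prompt([("USER", "tell me a joke about alpacas")])
print(prompt)
# USER: tell me a joke about alpacas
# ASSISTANT:
```

The resulting string would then be tokenized and passed to the model's generate call as usual for a causal LM.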
```
USER: make up a sonnet in the style of shakespeare about a skier skiing in the glades on a bluebird day.
ASSISTANT: A skier doth glide 'neath sun’s glow, His edges sharp to cut the snow, Bluebird sky above, While he doth revel on the slopes below, The cold air biting his nose, But not his spirits, For he is filled with a sense of joy, As he carves through the powdery drifts. His body moves in graceful rhythm, And each turn he makes with ease, He feels like he could ski forever, With no care for the world outside these glades.
```

```
USER: tell me a joke about alpacas
ASSISTANT: Did you hear about the famous alpaca ballet dancer? She was a prima llama.
USER: okay, that is actually pretty good
ASSISTANT: What do you call an alpaca in a bowtie? A llama-cholla!
USER: why is that funny?
ASSISTANT: Because it’s a play on words. “Llama-cholla” sounds like “lame chola,” which is Spanish slang for someone who acts cool or pretentious. So, the joke is implying that alpacas are vain and act cool with their bowties.
```

```
USER: What is 44 + 77?
ASSISTANT: 121
USER: What is the sum of 33 and 77?
ASSISTANT: 110
USER: tell me 5 words that start with “ex” and the sum of 33 and 77.
ASSISTANT: exotic, expressive, explicit, experimental, and extravagant. The sum of 33 and 77 is 110.
USER: tell me 3 words that start with “egr” and end in “s”
ASSISTANT: egret, eggplant, and egrets.
```
- oof

```
USER: Sally has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have?
ASSISTANT: Sally has a total of 6 siblings, so she has a total of 6 - 2 = 4 sisters.
USER: Sally has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have? Use multi-step reasoning to explain your answer.
ASSISTANT: Sally has a total of three brothers, so she has a total of three * 2 = six siblings. Her family consists of three boys and six girls in total. This means that Sally has three sisters since there are six female relatives and only three male ones.
USER: But when brothers have sisters, aren’t they shared or the same?
ASSISTANT: No, not every sibling shares their mother. For example, I have two brothers who share one mother but my third brother has a different mother from both of them.
USER: Let’s assume they all share the same mother and father. Now explain with multi-step reasoning.
ASSISTANT: If Sally has three brothers then that means she has six siblings altogether. Each brother has two sisters, which means that there are twelve female relatives in her family - one for each of her brothers and herself. Since there are only six females in the entire family, that means that Sally’s mother has no other children than her own three sons - so she can’t be the mother of any of the sisters. So all those sisters have to belong to one of the brothers, and since they are shared by two brothers, then each brother has a single sister. Therefore, there is only one sister for Sally, making the final answer 1.
```
- pretty sure that reasoning isn't correct, and the final answer of having 1 sister is a lucky guess
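For reference, the riddle above can be checked with a couple of lines of Python; this is an illustrative sanity check of the arithmetic, not part of the model card's evaluation.

```python
# Sally has 3 brothers, and each brother has 2 sisters. All brothers share
# the same set of sisters: Sally plus any other girls. So there are 2 girls
# in total, and Sally herself has 2 - 1 = 1 sister.
brothers = 3
sisters_per_brother = 2

total_girls = sisters_per_brother  # every brother sees the same sisters
sallys_sisters = total_girls - 1   # exclude Sally herself

print(sallys_sisters)  # 1
```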