linkanjarad committed on
Commit
784d7a9
1 Parent(s): 1a384af

Update README.md

Files changed (1): README.md (+105 -7)
README.md CHANGED
@@ -5,6 +5,14 @@ tags:
  model-index:
  - name: Bloom-Alpaca-560m
    results: []
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -12,21 +20,111 @@ should probably proofread and complete it, then remove this comment. -->
  # Bloom-Alpaca-560m

- This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on an unknown dataset.

  ## Model description

- More information needed

 
  ## Intended uses & limitations

- More information needed

 
- ## Training and evaluation data

- More information needed

- ## Training procedure

  ### Training hyperparameters

@@ -40,7 +138,7 @@ The following hyperparameters were used during training:
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_ratio: 0.15
- - num_epochs: 3

  ### Framework versions

 
  model-index:
  - name: Bloom-Alpaca-560m
    results: []
+ widget:
+ - text: |-
+     ### Instruction:
+     Write a poem about pancakes.
+
+     ### Response:
+ datasets:
+ - tatsu-lab/alpaca
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You

  # Bloom-Alpaca-560m

+ This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on the [tatsu-lab/alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) dataset. It is an experiment to see whether the approach behind Databricks' [databricks/dolly-v1-6b](https://huggingface.co/databricks/dolly-v1-6b) can be applied to smaller models.

  ## Model description

+ This is an LLM trained to follow instructions, similar to ChatGPT, but much, much smaller.
+
+ ## Sample Use
+
+ ```python
+ from transformers import pipeline
+
+ generator = pipeline('text-generation', model='linkanjarad/Bloom-Alpaca-560m')
+
+ prompt = '''### Instruction:
+ Who was the first person on the moon?
+
+ ### Response:'''
+
+ output = generator(prompt, do_sample=True, min_length=10, max_length=50, temperature=0.5)
+ print(output[0]['generated_text'])
+ ```
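The pipeline returns the prompt together with the model's continuation, so the answer alone can be split out of `generated_text`. A minimal sketch (the `extract_response` helper is hypothetical, not part of the model or library; it assumes the `### Response:` marker is present in the text):

```python
def extract_response(generated_text):
    """Return only the text after the first '### Response:' marker."""
    marker = "### Response:"
    # Everything after the first marker is the model's answer; trim whitespace.
    # If the marker is absent, the full text is returned unchanged.
    return generated_text.split(marker, 1)[-1].strip()

sample = "### Instruction:\nWho was the first person on the moon?\n\n### Response:\nNeil Armstrong."
print(extract_response(sample))  # Neil Armstrong.
```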
 
  ## Intended uses & limitations

+ The model works best when given input in the format of the dataset it was trained on. Here is an example of the format used in the [tatsu-lab/alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) dataset:
+
+ ```
+ ### Instruction:
+ What are the three primary colors?
+
+ ### Response:
+ The three primary colors are red, blue, and yellow.
+ ```
+
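Prompts in this format can be built with a small helper. A sketch only: the `make_prompt` name is our own, and the optional `### Input:` section follows the upstream Alpaca dataset's format, which may or may not match how every prompt for this model should be written:

```python
def make_prompt(instruction, input_text=None):
    """Wrap an instruction (and optional input) in the Alpaca-style prompt format."""
    if input_text:
        # The upstream Alpaca dataset includes an '### Input:' section when
        # an instruction comes with accompanying context.
        return (f"### Instruction:\n{instruction}\n\n"
                f"### Input:\n{input_text}\n\n### Response:")
    return f"### Instruction:\n{instruction}\n\n### Response:"

print(make_prompt("What are the three primary colors?"))
```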
+ Here are a few sample model outputs:
+
+ ```
+ ### Instruction:
+ Write a poem about pancakes in the style of Dr. Seuss.
+
+ ### Response:
+ Gently bake the delicious cup of pancake goodness away
+ In the sweet light of a sunken window
+ Stay away from those who take it for granted
+ The magic of those who will bake and preserve
+ The sweetness of baked pancakes.
+ Once more I'm gonna bake my dreams for ever
+ No matter what my journey might bring
+ No matter how hard I come
+ ```
+
+ ```
+ ### Instruction:
+ Generate a program in Python to generate prime numbers.
+
+ ### Response:
+ """
+ Generate a program in Python to generate prime numbers.
+ """
+
+ import random
+ import math
+ import math.exp
+
+ def generate_primes():
+     m = random.randint(1, 10)
+     result = random.randint(1, 10)
+
+     for i in range(m):
+         result = generate_primes()
+         if result == result:
+             break
+     return result
+ ```
+
+ ```
+ ### Instruction:
+ How was the steam engine invented?
+
+ ### Response:
+ The steam engine was invented by William George Taylor, a French engineer and inventor who used the technology of steam turbines to create engines. Taylor first designed a turbine that used a rotating disc with a spiral gear that accelerated the air through an annular passage. By expanding the annular passage, Taylor's turbine gained power and became a powerful engine. Taylor then used the spiral gear to create a turbine that drove a rail with a
+ ```
+
+ ```
+ ### Instruction:
+ Generate a list of marketing strategies to promote a new mobile app.
+
+ ### Response:
+ 1. Create a campaign to promote the app's launch.
+ 2. Provide a promotional video and posters.
+ 3. Create a mobile app store campaign.
+ 4. Publish a newsletter to attract potential customers.
+ 5. Make a list of key features of the app.
+ 6. Establish a referral program.
+ 7. Attend marketing conferences and events.
+ 8. Participate in customer surveys.
+ 9. Offer loyalty rewards and discounts.
+ 10. Utilize social media to promote the app.
+ 11. Create a user-friendly advertising campaign.
+ 12. Utilize social media to promote the app.
+ ```
 
+ ## Training Procedure

+ The model was trained with Hugging Face's 🤗 Trainer for about 1 epoch.

  ### Training hyperparameters

 
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_ratio: 0.15
+ - num_epochs: 1

  ### Framework versions
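The listed hyperparameters map onto `transformers.TrainingArguments` roughly as follows. This is a hypothetical reconstruction, not the author's actual training script: `output_dir` is a placeholder, and values the card does not show (learning rate, batch size) are left at their defaults.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the configuration behind the listed
# hyperparameters; output_dir is a placeholder, not from the card.
args = TrainingArguments(
    output_dir="bloom-alpaca-560m",  # placeholder
    lr_scheduler_type="linear",
    warmup_ratio=0.15,
    num_train_epochs=1,
    adam_beta1=0.9,    # Adam betas=(0.9, 0.999) from the card
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```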