jharrison27 committed
Commit 28bebce
1 Parent(s): 9bb9813

Upload app.py

Files changed (1)
  1. app.py +112 -0
app.py ADDED
@@ -0,0 +1,112 @@
import gradio as gr

# api = gr.Interface.load("huggingface/EleutherAI/gpt-j-6B")
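# Loading "models/bigscience/bloom" wraps the hosted BLOOM model so it can be
# called like a function below; queries go to the hosted model rather than running locally.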
api = gr.Interface.load("models/bigscience/bloom")


def complete_with_gpt(text):
    # Use the last 50 characters of the text as context
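    # (If the input is shorter than 50 characters, text[:-50] is empty and the whole
    # input is sent to the model, so short prompts still work.)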
    return text[:-50] + api(text[-50:])


with gr.Blocks() as demo:
    with gr.Row():
        textbox = gr.Textbox(placeholder="Type here and press enter...", lines=21)
        with gr.Column():
            btn = gr.Button("Generate")

    btn.click(complete_with_gpt, textbox, textbox)
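    # The same textbox serves as both input and output, so the model's continuation
    # is appended to the user's text in place when the button is clicked.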

    with gr.Row():
        gr.Markdown("""
# Big Science Creates a 176-Billion-Parameter Large Language Model

## BLOOM Is Setting a New Record as One of the Most Performant and Efficient Open AI Models for Science

BLOOM stands for:

- B: BigScience
- L: Large
- O: Open-science
- O: Open-access
- M: Multilingual Language Model

1. Video Playlist to Check it out: https://www.youtube.com/playlist?list=PLHgX2IExbFouqnsIqziThlPCX_miiDq14
2. Summary of Important Models and Sizes:

# Model Sizes to Date

Model Name | Model Size (in Parameters)
----------------|---------------------------------
BigScience-tr11-176B | 176 billion
GPT-3 | 175 billion
OpenAI's DALL-E 2.0 | 500 million
NVIDIA's Megatron | 8.3 billion
Google's BERT | 340 million
GPT-2 | 1.5 billion
OpenAI's GPT-1 | 117 million
ELMo | 90 million
ULMFiT | 100 million
Transformer-XL | 250 million
XLNet | 210 million
RoBERTa | 125 million
ALBERT | 12 million
DistilBERT | 66 million

3. Background Information on ChatGPT, BLOOM from BigScience on the Hugging Face Platform, RLHF (Deep RL from Human Feedback), and One- to Few-Shot Learning and Generators:

# ChatGPT Datasets:
1. WebText
2. Common Crawl
3. BooksCorpus
4. English Wikipedia
5. Toronto Books Corpus
6. OpenWebText

# Comparison to BigScience Model:

# Big Science - How to Get Started

BLOOM from BigScience is a new 176B-parameter ML model trained on a collection of datasets for natural language processing and many other tasks that are not yet explored. Below is a set of papers, models, links, and datasets around BigScience, which promises to be one of the best and most recent large models of its kind, benefiting all science pursuits.

# Model: https://huggingface.co/bigscience/bloom

# Papers:
1. BLOOM: A 176B-Parameter Open-Access Multilingual Language Model https://arxiv.org/abs/2211.05100
2. Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism https://arxiv.org/abs/1909.08053
3. 8-bit Optimizers via Block-wise Quantization https://arxiv.org/abs/2110.02861
4. Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation https://arxiv.org/abs/2108.12409
5. https://huggingface.co/models?other=doi:10.57967/hf/0003
6. 217 other models optimizing use of BLOOM via specialization: https://huggingface.co/models?other=bloom

# Datasets
1. Universal Dependencies: https://paperswithcode.com/dataset/universal-dependencies
2. WMT 2014: https://paperswithcode.com/dataset/wmt-2014
3. The Pile: https://paperswithcode.com/dataset/the-pile
4. HumanEval: https://paperswithcode.com/dataset/humaneval
5. FLORES-101: https://paperswithcode.com/dataset/flores-101
6. CrowS-Pairs: https://paperswithcode.com/dataset/crows-pairs
7. WikiLingua: https://paperswithcode.com/dataset/wikilingua
8. MTEB: https://paperswithcode.com/dataset/mteb
9. xP3: https://paperswithcode.com/dataset/xp3
10. DiaBLa: https://paperswithcode.com/dataset/diabla

# Deep RL ML Strategy

1. Language model preparation: supervised fine-tuning augmented with human demonstrations
2. Reward model training: prompts from a dataset are completed by multiple models and the outputs are ranked
3. Fine-tuning with the reinforcement reward and a distance score between the updated and original model's output distributions
4. Proximal Policy Optimization (PPO) fine-tuning (a toy sketch of this loop follows the list)

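As a rough illustration only (not the actual training code for any model mentioned here), the snippet below mirrors those four steps with toy stand-ins: a random generator in place of the fine-tuned language model, a keyword score in place of the learned reward model, a length penalty in place of the distribution-distance term, and a print in place of the PPO update.

```python
# Toy, runnable outline of the RLHF loop above; every component is a stand-in.
import random

random.seed(0)

PROMPTS = ["Summarize the paper:", "Explain BLOOM in one sentence:"]
VOCAB = ["bloom", "is", "a", "large", "open", "multilingual", "model", "banana"]


def generate(prompt: str) -> str:
    # Stand-in for sampling a completion from the current policy
    # (the language model prepared in step 1).
    return prompt + " " + " ".join(random.choices(VOCAB, k=6))


def reward_model(completion: str) -> float:
    # Stand-in for a reward model trained on human rankings (step 2):
    # here we simply reward on-topic words.
    return float(sum(w in completion.split() for w in ("bloom", "open", "multilingual")))


def distance_penalty(completion: str) -> float:
    # Stand-in for the distance between the updated and the initial model's
    # output distributions (step 3); real RLHF typically uses a per-token KL estimate.
    return 0.1 * len(completion.split())


def ppo_update(samples) -> None:
    # Stand-in for a PPO step (step 4): real code turns scores into advantages
    # and takes clipped policy-gradient updates on the language model's weights.
    mean_score = sum(score for _, score in samples) / len(samples)
    print(f"PPO update on {len(samples)} samples, mean score {mean_score:.2f}")


for _ in range(3):  # a few RL fine-tuning iterations
    batch = []
    for prompt in PROMPTS:
        completion = generate(prompt)            # sample from the current policy
        score = reward_model(completion)         # score with the reward model
        score -= distance_penalty(completion)    # stay close to the original model
        batch.append((completion, score))
    ppo_update(batch)                            # PPO fine-tuning step
```
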
# Variations - Preference Model Pretraining

1. Use Ranking Datasets for Sentiment - Thumbs Up/Down, Distribution
2. Online Version Getting Feedback
3. OpenAI - InstructGPT - Humans Generate LM Training Text
4. DeepMind - Advantage Actor Critic: Sparrow, GopherCite
5. Reward Model Human Preference Feedback (a toy pairwise-loss sketch follows this list)

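To make the thumbs-up/down ranking signal above concrete, a toy version of the pairwise loss commonly used to train such preference/reward models is sketched below (the scores and pairs are invented for illustration):

```python
import math


def pairwise_preference_loss(r_chosen: float, r_rejected: float) -> float:
    # Bradley-Terry style objective: -log(sigmoid(r_chosen - r_rejected)) pushes the
    # reward of the human-preferred completion above the rejected one.
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))


# Invented reward-model scores for (preferred, rejected) completion pairs.
for chosen, rejected in [(2.3, 0.7), (1.1, 1.4), (0.9, -0.5)]:
    print(f"loss = {pairwise_preference_loss(chosen, rejected):.3f}")
```
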
""")

demo.launch()