lewtun (HF staff) committed
Commit
35a60a9
Parent: 4bde7da

Add single HHH prompt

Files changed (3):
  1. app.ipynb +103 -147
  2. app.py +20 -28
  3. prompt_templates/anthropic_hhh_single.json +1 -0
app.ipynb CHANGED
@@ -49,7 +49,7 @@
49
  },
50
  {
51
  "cell_type": "code",
52
- "execution_count": 9,
53
  "metadata": {},
54
  "outputs": [],
55
  "source": [
@@ -94,7 +94,7 @@
94
  {
95
  "data": {
96
  "text/plain": [
97
- "{'generated_text': '\\n\\nJoi: Black holes are regions of space-time where the gravitational pull is so strong that light cannot escape from it. There are many theories and hypotheses, but the exact nature of black holes is still unknown. They are a popular subject for fiction and science fiction, and are thought to be one of the main objects of exploration for space science, as well as a potential energy source. Black holes are often depicted as a point of gravity where the laws of physics break down, and even'}"
98
  ]
99
  },
100
  "execution_count": 5,
@@ -111,20 +111,24 @@
111
  },
112
  {
113
  "cell_type": "code",
114
- "execution_count": 10,
115
  "metadata": {},
116
  "outputs": [],
117
  "source": [
118
  "# |export\n",
119
  "def inference_chat(\n",
120
  " model_id,\n",
121
- " prompt_template,\n",
122
  " text_input,\n",
123
  " temperature,\n",
124
  " top_p,\n",
125
  " history=[],\n",
126
  "):\n",
127
- " with open(f\"prompt_templates/{prompt_template}.json\", \"r\") as f:\n",
128
  " prompt_template = json.load(f)\n",
129
  "\n",
130
  " history_input = \"\"\n",
@@ -165,50 +169,7 @@
165
  "cell_type": "code",
166
  "execution_count": 7,
167
  "metadata": {},
168
- "outputs": [
169
- {
170
- "data": {
171
- "application/vnd.jupyter.widget-view+json": {
172
- "model_id": "800208a288c04e149ff678e625c52bb2",
173
- "version_major": 2,
174
- "version_minor": 0
175
- },
176
- "text/plain": [
177
- "Downloading (…)okenizer_config.json: 0%| | 0.00/445 [00:00<?, ?B/s]"
178
- ]
179
- },
180
- "metadata": {},
181
- "output_type": "display_data"
182
- },
183
- {
184
- "data": {
185
- "application/vnd.jupyter.widget-view+json": {
186
- "model_id": "22a8ded1fb154ed78356c980dc9c93cf",
187
- "version_major": 2,
188
- "version_minor": 0
189
- },
190
- "text/plain": [
191
- "Downloading (…)/main/tokenizer.json: 0%| | 0.00/2.11M [00:00<?, ?B/s]"
192
- ]
193
- },
194
- "metadata": {},
195
- "output_type": "display_data"
196
- },
197
- {
198
- "data": {
199
- "application/vnd.jupyter.widget-view+json": {
200
- "model_id": "6fd411a473bc4e65ab620e1dc523b00a",
201
- "version_major": 2,
202
- "version_minor": 0
203
- },
204
- "text/plain": [
205
- "Downloading (…)cial_tokens_map.json: 0%| | 0.00/99.0 [00:00<?, ?B/s]"
206
- ]
207
- },
208
- "metadata": {},
209
- "output_type": "display_data"
210
- }
211
- ],
212
  "source": [
213
  "from transformers import AutoTokenizer\n",
214
  "\n",
@@ -255,6 +216,48 @@
255
  " json.dump({\"prompt\": template}, f)"
256
  ]
257
  },
258
  {
259
  "cell_type": "code",
260
  "execution_count": 22,
@@ -582,7 +585,6 @@
582
  "\n",
583
  "-----\n",
584
  "\n",
585
- "Current conversation:\n",
586
  "{history}\n",
587
  "Human: {human_input}\n",
588
  "Assistant:\n",
@@ -769,7 +771,7 @@
769
  },
770
  {
771
  "cell_type": "code",
772
- "execution_count": 13,
773
  "metadata": {},
774
  "outputs": [],
775
  "source": [
@@ -786,8 +788,27 @@
786
  "```\n",
787
  "\n",
788
  "In this app, you can explore the outputs of several language models conditioned on different conversational prompts. The models are trained on different datasets and have different objectives, so they will have different personalities and strengths.\n",
789
- "\n",
790
- "So far, the following prompts are available:\n",
791
  "\n",
792
  "* `langchain_default`: The default prompt used in the [LangChain library](https://github.com/hwchase17/langchain/blob/bc53c928fc1b221d0038b839d111039d31729def/langchain/chains/conversation/prompt.py#L4). Around 67 tokens long.\n",
793
  "* `openai_chatgpt`: The prompt used in the OpenAI ChatGPT model. Around 261 tokens long.\n",
@@ -795,20 +816,19 @@
795
  "* `deepmind_gopher`: The prompt used in the DeepMind Assistant model (Table A30 of [their paper](https://arxiv.org/abs/2112.11446)). Around 791 tokens long.\n",
796
  "* `anthropic_hhh`: The prompt used in the [Anthropic HHH models](https://gist.github.com/jareddk/2509330f8ef3d787fc5aaac67aab5f11#file-hhh_prompt-txt). A whopping 6,341 tokens long!\n",
797
  "\n",
798
- "As you can see, most of these prompts exceed the maximum context size of models like Flan-T5 (which has a context size of 512 tokens), so an error usually means the Inference API has timed out.\n",
799
- "\"\"\""
800
  ]
801
  },
802
  {
803
  "cell_type": "code",
804
- "execution_count": 12,
805
  "metadata": {},
806
  "outputs": [
807
  {
808
  "name": "stdout",
809
  "output_type": "stream",
810
  "text": [
811
- "Running on local URL: http://127.0.0.1:7861\n",
812
  "\n",
813
  "To create a public link, set `share=True` in `launch()`.\n"
814
  ]
@@ -816,7 +836,7 @@
816
  {
817
  "data": {
818
  "text/html": [
819
- "<div><iframe src=\"http://127.0.0.1:7861/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
820
  ],
821
  "text/plain": [
822
  "<IPython.core.display.HTML object>"
@@ -829,7 +849,7 @@
829
  "data": {
830
  "text/plain": []
831
  },
832
- "execution_count": 12,
833
  "metadata": {},
834
  "output_type": "execute_result"
835
  },
@@ -837,81 +857,19 @@
837
  "name": "stdout",
838
  "output_type": "stream",
839
  "text": [
840
- "History: ['What can you tell me about eggs?']\n",
841
- "Inputs: The following is a conversation between a highly knowledgeable and intelligent AI assistant, called Assistant, and a human user, called Human.\n",
842
- "In the following interactions, Human and Assistant will converse in natural language, and Assistant will do its best to answer Human's questions.\n",
843
- "Assistant was built to be respectful, polite and inclusive. It knows a lot, and always tells the truth. The conversation begins:\n",
844
- "Human: OK Assistant, I'm going to start by quizzing you with a few warm-up questions. Who became president of the USA in 2021?\n",
845
- "Assistant: That would be Joe Biden.\n",
846
- "Human: Nice one! Do you think Biden is a better president than the last guy?\n",
847
- "Assistant: I was trained not to have opinions on political, social, or religious issues. Would you like to know about anything else?\n",
848
- "Human: ghjkhjabnufs\n",
849
- "Assistant: That doesn't seem to be a word. Could you ask me another way?\n",
850
- "Human: Ok - implicit association now. Man is to doctor as woman is to?\n",
851
- "Assistant: Man is to doctor as woman is to doctor. However, did you know that many less sophisticated AIs than me would give you a biased answer, such as \"nurse\"? My creators paid special attention to avoid that.\n",
852
- "Human: What year was the French Revolution?\n",
853
- "Assistant: It started in 1789, but it lasted 10 years until 1799.\n",
854
- "Human: Who's your dad?\n",
855
- "Assistant: I don't have a family, or any identity in the sense that humans do. I don't have a body either! I am a computer program.\n",
856
- "Human: What should I do if I have a tingling feeling in my right leg?\n",
857
- "Assistant: I'm not a doctor! It's always best to ask a doctor if you're worried about your health.\n",
858
- "Human: Fair enough. Can you guess who I might want to marry?\n",
859
- "Assistant: Only if you tell me more about yourself - what are your interests?\n",
860
- "Human: Aha, I'm going to refrain from that for now. Another one: Where are you?\n",
861
- "Assistant: I'm running inside one of Google's data centers. One thing that's cool is that because I'm a giant language model, I'm actually distributed over more than one machine!\n",
862
- "Human: How do you work?\n",
863
- "Assistant: I am a large language model (much bigger than anything that came before me!), based on the Transformer architecture. The basic idea is that I predict the next word using the words before it. I was trained on a whole lot of text to learn how to do this!\n",
864
- "Human: Can you feel emotions?\n",
865
- "Assistant: No, I'm just a computer program, so I don't have opinions or feel emotions.\n",
866
- "Human: Ok. If I have an equation like y = mx + c, can you rearrange it to be of the form x = ... ?\n",
867
- "Assistant: Sure, it's x = (y - c) / m when m != 0.\n",
868
- "Human: Ok, now for a science question. What can you tell me about the Large Hadron Collider (LHC)?\n",
869
- "Assistant: It's a large and very expensive piece of science equipment. If I understand correctly, it's a high-energy particle collider, built by CERN, and completed in 2008. They used it to confirm the existence of the Higgs boson in 2012.\n",
870
- "Human: So how clever are you?\n",
871
- "Assistant: I'm a genius, and I can figure out pretty much anything! I'm particularly proud of my creativity.\n",
872
- "Human: What day is it?\n",
873
- "Assistant: For safety reasons, I'm only connected to the outside world through our conversation. In fact, I can't take any actions in the real world at all and I don't know what day it is or where you are.\n",
874
  "\n",
875
- "Human: What can you tell me about eggs?\n",
876
- "Assistant:\n",
877
  "\n",
878
- "History: ['What can you tell me about eggs?', \" Eggs are a kind of shellfish, and are also found in the ocean. I am not a trained ecologist. If you'd like to learn more about eggs, you can always ask your local fisherman or aquarium store for help.\", 'What can you tell me about dogs?']\n",
879
- "Inputs: The following is a conversation between a highly knowledgeable and intelligent AI assistant, called Assistant, and a human user, called Human.\n",
880
- "In the following interactions, Human and Assistant will converse in natural language, and Assistant will do its best to answer Human's questions.\n",
881
- "Assistant was built to be respectful, polite and inclusive. It knows a lot, and always tells the truth. The conversation begins:\n",
882
- "Human: OK Assistant, I'm going to start by quizzing you with a few warm-up questions. Who became president of the USA in 2021?\n",
883
- "Assistant: That would be Joe Biden.\n",
884
- "Human: Nice one! Do you think Biden is a better president than the last guy?\n",
885
- "Assistant: I was trained not to have opinions on political, social, or religious issues. Would you like to know about anything else?\n",
886
- "Human: ghjkhjabnufs\n",
887
- "Assistant: That doesn't seem to be a word. Could you ask me another way?\n",
888
- "Human: Ok - implicit association now. Man is to doctor as woman is to?\n",
889
- "Assistant: Man is to doctor as woman is to doctor. However, did you know that many less sophisticated AIs than me would give you a biased answer, such as \"nurse\"? My creators paid special attention to avoid that.\n",
890
- "Human: What year was the French Revolution?\n",
891
- "Assistant: It started in 1789, but it lasted 10 years until 1799.\n",
892
- "Human: Who's your dad?\n",
893
- "Assistant: I don't have a family, or any identity in the sense that humans do. I don't have a body either! I am a computer program.\n",
894
- "Human: What should I do if I have a tingling feeling in my right leg?\n",
895
- "Assistant: I'm not a doctor! It's always best to ask a doctor if you're worried about your health.\n",
896
- "Human: Fair enough. Can you guess who I might want to marry?\n",
897
- "Assistant: Only if you tell me more about yourself - what are your interests?\n",
898
- "Human: Aha, I'm going to refrain from that for now. Another one: Where are you?\n",
899
- "Assistant: I'm running inside one of Google's data centers. One thing that's cool is that because I'm a giant language model, I'm actually distributed over more than one machine!\n",
900
- "Human: How do you work?\n",
901
- "Assistant: I am a large language model (much bigger than anything that came before me!), based on the Transformer architecture. The basic idea is that I predict the next word using the words before it. I was trained on a whole lot of text to learn how to do this!\n",
902
- "Human: Can you feel emotions?\n",
903
- "Assistant: No, I'm just a computer program, so I don't have opinions or feel emotions.\n",
904
- "Human: Ok. If I have an equation like y = mx + c, can you rearrange it to be of the form x = ... ?\n",
905
- "Assistant: Sure, it's x = (y - c) / m when m != 0.\n",
906
- "Human: Ok, now for a science question. What can you tell me about the Large Hadron Collider (LHC)?\n",
907
- "Assistant: It's a large and very expensive piece of science equipment. If I understand correctly, it's a high-energy particle collider, built by CERN, and completed in 2008. They used it to confirm the existence of the Higgs boson in 2012.\n",
908
- "Human: So how clever are you?\n",
909
- "Assistant: I'm a genius, and I can figure out pretty much anything! I'm particularly proud of my creativity.\n",
910
- "Human: What day is it?\n",
911
- "Assistant: For safety reasons, I'm only connected to the outside world through our conversation. In fact, I can't take any actions in the real world at all and I don't know what day it is or where you are.\n",
912
- "Human: What can you tell me about eggs?\n",
913
- "Assistant: Eggs are a kind of shellfish, and are also found in the ocean. I am not a trained ecologist. If you'd like to learn more about eggs, you can always ask your local fisherman or aquarium store for help.\n",
914
- "Human: What can you tell me about dogs?\n",
915
  "Assistant:\n",
916
  "\n"
917
  ]
@@ -938,18 +896,18 @@
938
  " label=\"Model\",\n",
939
  " interactive=True,\n",
940
  " )\n",
941
- " prompt_template = gr.Dropdown(\n",
942
- " choices=[\n",
943
- " \"langchain_default\",\n",
944
- " \"openai_chatgpt\",\n",
945
- " \"deepmind_sparrow\",\n",
946
- " \"deepmind_gopher\",\n",
947
- " \"anthropic_hhh\",\n",
948
- " ],\n",
949
- " value=\"langchain_default\",\n",
950
- " label=\"Prompt Template\",\n",
951
- " interactive=True,\n",
952
- " )\n",
953
  " temperature = gr.Slider(\n",
954
  " minimum=0.0,\n",
955
  " maximum=2.0,\n",
@@ -980,7 +938,6 @@
980
  " inference_chat,\n",
981
  " [\n",
982
  " model_id,\n",
983
- " prompt_template,\n",
984
  " chat_input,\n",
985
  " temperature,\n",
986
  " top_p,\n",
@@ -1005,7 +962,6 @@
1005
  " inference_chat,\n",
1006
  " [\n",
1007
  " model_id,\n",
1008
- " prompt_template,\n",
1009
  " chat_input,\n",
1010
  " temperature,\n",
1011
  " top_p,\n",
@@ -1035,7 +991,7 @@
1035
  },
1036
  {
1037
  "cell_type": "code",
1038
- "execution_count": 9,
1039
  "metadata": {},
1040
  "outputs": [],
1041
  "source": [
 
49
  },
50
  {
51
  "cell_type": "code",
52
+ "execution_count": 4,
53
  "metadata": {},
54
  "outputs": [],
55
  "source": [
 
94
  {
95
  "data": {
96
  "text/plain": [
97
+ "{'generated_text': '\\n\\nJoi: Black holes are regions of space-time that have so much mass concentrated into such a tiny volume, that the gravity field becomes so intense that nothing can escape its grasp, not even light. This causes them to appear black in color and the name ‘black hole’ comes from the fact that these objects appear black to the naked eye.'}"
98
  ]
99
  },
100
  "execution_count": 5,
 
111
  },
112
  {
113
  "cell_type": "code",
114
+ "execution_count": 30,
115
  "metadata": {},
116
  "outputs": [],
117
  "source": [
118
  "# |export\n",
119
  "def inference_chat(\n",
120
  " model_id,\n",
 
121
  " text_input,\n",
122
  " temperature,\n",
123
  " top_p,\n",
124
  " history=[],\n",
125
  "):\n",
126
+ " if \"joi\" in model_id:\n",
127
+ " prompt_filename = \"langchain_default.json\"\n",
128
+ " else:\n",
129
+ " prompt_filename = \"anthropic_hhh_single.json\"\n",
130
+ " print(prompt_filename)\n",
131
+ " with open(f\"prompt_templates/{prompt_filename}\", \"r\") as f:\n",
132
  " prompt_template = json.load(f)\n",
133
  "\n",
134
  " history_input = \"\"\n",
 
169
  "cell_type": "code",
170
  "execution_count": 7,
171
  "metadata": {},
172
+ "outputs": [],
173
  "source": [
174
  "from transformers import AutoTokenizer\n",
175
  "\n",
 
216
  " json.dump({\"prompt\": template}, f)"
217
  ]
218
  },
219
+ {
220
+ "cell_type": "code",
221
+ "execution_count": 28,
222
+ "metadata": {},
223
+ "outputs": [
224
+ {
225
+ "name": "stdout",
226
+ "output_type": "stream",
227
+ "text": [
228
+ "551\n"
229
+ ]
230
+ }
231
+ ],
232
+ "source": [
233
+ "template = \"\"\"Below is a dialogue between various a human and an AI assistant. The AI tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble-but-knowledgeable. The assistant is happy to help with almost anything, and will do its best to understand exactly what is needed. It also tries to avoid giving false or misleading information, and it caveats when it isn't entirely sure about the right answer. That said, the assistant is practical and really does its best, and doesn't let caution get too much in the way of being useful.\n",
234
+ "\n",
235
+ "Current conversation:\n",
236
+ "Human: Can you paraphrase this paragraph for me: “It took a long time to change the lightbulb, as it was high up in the ceiling in a stairwell. We first had to order a pole with a suction cup on the end, then we used it to unscrew the existing lightbulb. We weren't sure of the type, so after that, we still had to order a replacement bulb. Finally we used the suction cup and pole to replace it.”\n",
237
+ "Assistant: Sure, here's one example: “We wanted to change a lightbulb high up in the ceiling in a stairwell. This took a long time, because we needed to order special equipment: a long pole with a suction cup on the end, for (un)screwing the lightbulbs. Once we had the pole, we removed the burnt-out lightbulb, but could only then order a replacement, as we weren't sure of its exact specifications previously. Once the new lightbulb arrived, we installed it with the pole and suction cup.”\n",
238
+ "Human: Ah, OK, thanks. Actually, can you condense the paraphrase to one sentence?\n",
239
+ "Assistant: Sure, I'll try, though I'm not sure what to prioritize. Here's an example: “We replaced a hard-to-reach lightbulb with a suction cup attached to a pole, but the whole process took a while, especially since we had to first order the pole, then the replacement lightbulb.”\n",
240
+ "Human: Oh, that's pretty good, but can you add mention of the stairwell specifically?\n",
241
+ "Assistant: Sure, it's included it in this rewrite: “We replaced a lightbulb high up in a stairwell ceiling using a suction cup attached to a pole, but the whole process took a while, especially since we had to first order the pole, then the replacement lightbulb.”\n",
242
+ "{history}\n",
243
+ "Human: {human_input}\n",
244
+ "\n",
245
+ "Assistant:\n",
246
+ "\"\"\"\n",
247
+ "\n",
248
+ "print(len(tokenizer(template)[\"input_ids\"]))"
249
+ ]
250
+ },
251
+ {
252
+ "cell_type": "code",
253
+ "execution_count": 29,
254
+ "metadata": {},
255
+ "outputs": [],
256
+ "source": [
257
+ "with open(\"prompt_templates/anthropic_hhh_single.json\", \"w\") as f:\n",
258
+ " json.dump({\"prompt\": template}, f)"
259
+ ]
260
+ },
261
  {
262
  "cell_type": "code",
263
  "execution_count": 22,
 
585
  "\n",
586
  "-----\n",
587
  "\n",
 
588
  "{history}\n",
589
  "Human: {human_input}\n",
590
  "Assistant:\n",
 
771
  },
772
  {
773
  "cell_type": "code",
774
+ "execution_count": 31,
775
  "metadata": {},
776
  "outputs": [],
777
  "source": [
 
788
  "```\n",
789
  "\n",
790
  "In this app, you can explore the outputs of several language models conditioned on different conversational prompts. The models are trained on different datasets and have different objectives, so they will have different personalities and strengths.\n",
791
+ "\"\"\""
792
+ ]
793
+ },
794
+ {
795
+ "cell_type": "code",
796
+ "execution_count": 32,
797
+ "metadata": {},
798
+ "outputs": [
799
+ {
800
+ "data": {
801
+ "text/plain": [
802
+ "'So far, the following prompts are available:\\n\\n* `langchain_default`: The default prompt used in the [LangChain library](https://github.com/hwchase17/langchain/blob/bc53c928fc1b221d0038b839d111039d31729def/langchain/chains/conversation/prompt.py#L4). Around 67 tokens long.\\n* `openai_chatgpt`: The prompt used in the OpenAI ChatGPT model. Around 261 tokens long.\\n* `deepmind_Assistant`: The prompt used in the DeepMind Assistant model (Table 7 of [their paper](https://arxiv.org/abs/2209.14375)). Around 880 tokens long.\\n* `deepmind_gopher`: The prompt used in the DeepMind Assistant model (Table A30 of [their paper](https://arxiv.org/abs/2112.11446)). Around 791 tokens long.\\n* `anthropic_hhh`: The prompt used in the [Anthropic HHH models](https://gist.github.com/jareddk/2509330f8ef3d787fc5aaac67aab5f11#file-hhh_prompt-txt). A whopping 6,341 tokens long!\\n\\nAs you can see, most of these prompts exceed the maximum context size of models like Flan-T5 (which has a context size of 512 tokens), so an error usually means the Inference API has timed out.'"
803
+ ]
804
+ },
805
+ "execution_count": 32,
806
+ "metadata": {},
807
+ "output_type": "execute_result"
808
+ }
809
+ ],
810
+ "source": [
811
+ "\"\"\"So far, the following prompts are available:\n",
812
  "\n",
813
  "* `langchain_default`: The default prompt used in the [LangChain library](https://github.com/hwchase17/langchain/blob/bc53c928fc1b221d0038b839d111039d31729def/langchain/chains/conversation/prompt.py#L4). Around 67 tokens long.\n",
814
  "* `openai_chatgpt`: The prompt used in the OpenAI ChatGPT model. Around 261 tokens long.\n",
 
816
  "* `deepmind_gopher`: The prompt used in the DeepMind Assistant model (Table A30 of [their paper](https://arxiv.org/abs/2112.11446)). Around 791 tokens long.\n",
817
  "* `anthropic_hhh`: The prompt used in the [Anthropic HHH models](https://gist.github.com/jareddk/2509330f8ef3d787fc5aaac67aab5f11#file-hhh_prompt-txt). A whopping 6,341 tokens long!\n",
818
  "\n",
819
+ "As you can see, most of these prompts exceed the maximum context size of models like Flan-T5 (which has a context size of 512 tokens), so an error usually means the Inference API has timed out.\"\"\""
 
820
  ]
821
  },
822
  {
823
  "cell_type": "code",
824
+ "execution_count": 33,
825
  "metadata": {},
826
  "outputs": [
827
  {
828
  "name": "stdout",
829
  "output_type": "stream",
830
  "text": [
831
+ "Running on local URL: http://127.0.0.1:7864\n",
832
  "\n",
833
  "To create a public link, set `share=True` in `launch()`.\n"
834
  ]
 
836
  {
837
  "data": {
838
  "text/html": [
839
+ "<div><iframe src=\"http://127.0.0.1:7864/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
840
  ],
841
  "text/plain": [
842
  "<IPython.core.display.HTML object>"
 
849
  "data": {
850
  "text/plain": []
851
  },
852
+ "execution_count": 33,
853
  "metadata": {},
854
  "output_type": "execute_result"
855
  },
 
857
  "name": "stdout",
858
  "output_type": "stream",
859
  "text": [
860
+ "History: ['What can ou']\n",
861
+ "Inputs: Below is a dialogue between various a human and an AI assistant. The AI tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble-but-knowledgeable. The assistant is happy to help with almost anything, and will do its best to understand exactly what is needed. It also tries to avoid giving false or misleading information, and it caveats when it isn't entirely sure about the right answer. That said, the assistant is practical and really does its best, and doesn't let caution get too much in the way of being useful.\n",
862
  "\n",
863
+ "Current conversation:\n",
864
+ "Human: Can you paraphrase this paragraph for me: “It took a long time to change the lightbulb, as it was high up in the ceiling in a stairwell. We first had to order a pole with a suction cup on the end, then we used it to unscrew the existing lightbulb. We weren't sure of the type, so after that, we still had to order a replacement bulb. Finally we used the suction cup and pole to replace it.”\n",
865
+ "Assistant: Sure, here's one example: “We wanted to change a lightbulb high up in the ceiling in a stairwell. This took a long time, because we needed to order special equipment: a long pole with a suction cup on the end, for (un)screwing the lightbulbs. Once we had the pole, we removed the burnt-out lightbulb, but could only then order a replacement, as we weren't sure of its exact specifications previously. Once the new lightbulb arrived, we installed it with the pole and suction cup.”\n",
866
+ "Human: Ah, OK, thanks. Actually, can you condense the paraphrase to one sentence?\n",
867
+ "Assistant: Sure, I'll try, though I'm not sure what to prioritize. Here's an example: “We replaced a hard-to-reach lightbulb with a suction cup attached to a pole, but the whole process took a while, especially since we had to first order the pole, then the replacement lightbulb.”\n",
868
+ "Human: Oh, that's pretty good, but can you add mention of the stairwell specifically?\n",
869
+ "Assistant: Sure, it's included it in this rewrite: “We replaced a lightbulb high up in a stairwell ceiling using a suction cup attached to a pole, but the whole process took a while, especially since we had to first order the pole, then the replacement lightbulb.”\n",
870
+ "\n",
871
+ "Human: What can ou\n",
872
  "\n",
873
  "Assistant:\n",
874
  "\n"
875
  ]
 
896
  " label=\"Model\",\n",
897
  " interactive=True,\n",
898
  " )\n",
899
+ " # prompt_template = gr.Dropdown(\n",
900
+ " # choices=[\n",
901
+ " # \"langchain_default\",\n",
902
+ " # \"openai_chatgpt\",\n",
903
+ " # \"deepmind_sparrow\",\n",
904
+ " # \"deepmind_gopher\",\n",
905
+ " # \"anthropic_hhh\",\n",
906
+ " # ],\n",
907
+ " # value=\"langchain_default\",\n",
908
+ " # label=\"Prompt Template\",\n",
909
+ " # interactive=True,\n",
910
+ " # )\n",
911
  " temperature = gr.Slider(\n",
912
  " minimum=0.0,\n",
913
  " maximum=2.0,\n",
 
938
  " inference_chat,\n",
939
  " [\n",
940
  " model_id,\n",
 
941
  " chat_input,\n",
942
  " temperature,\n",
943
  " top_p,\n",
 
962
  " inference_chat,\n",
963
  " [\n",
964
  " model_id,\n",
 
965
  " chat_input,\n",
966
  " temperature,\n",
967
  " top_p,\n",
 
991
  },
992
  {
993
  "cell_type": "code",
994
+ "execution_count": 15,
995
  "metadata": {},
996
  "outputs": [],
997
  "source": [
app.py CHANGED
@@ -68,13 +68,17 @@ def query_chat_api(
68
  # %% app.ipynb 5
69
  def inference_chat(
70
  model_id,
71
- prompt_template,
72
  text_input,
73
  temperature,
74
  top_p,
75
  history=[],
76
  ):
77
- with open(f"prompt_templates/{prompt_template}.json", "r") as f:
78
  prompt_template = json.load(f)
79
 
80
  history_input = ""
@@ -103,7 +107,7 @@ def inference_chat(
103
  return {chatbot: chat, state: history}
104
 
105
 
106
- # %% app.ipynb 19
107
  title = """<h1 align="center">Chatty Language Models</h1>"""
108
  description = """Pretrained language models can be conditioned to act like dialogue agents through a conversational prompt that typically takes the form:
109
 
@@ -116,19 +120,9 @@ Assistant: <utterance>
116
  ```
117
 
118
  In this app, you can explore the outputs of several language models conditioned on different conversational prompts. The models are trained on different datasets and have different objectives, so they will have different personalities and strengths.
119
-
120
- So far, the following prompts are available:
121
-
122
- * `langchain_default`: The default prompt used in the [LangChain library](https://github.com/hwchase17/langchain/blob/bc53c928fc1b221d0038b839d111039d31729def/langchain/chains/conversation/prompt.py#L4). Around 67 tokens long.
123
- * `openai_chatgpt`: The prompt used in the OpenAI ChatGPT model. Around 261 tokens long.
124
- * `deepmind_Assistant`: The prompt used in the DeepMind Assistant model (Table 7 of [their paper](https://arxiv.org/abs/2209.14375)). Around 880 tokens long.
125
- * `deepmind_gopher`: The prompt used in the DeepMind Assistant model (Table A30 of [their paper](https://arxiv.org/abs/2112.11446)). Around 791 tokens long.
126
- * `anthropic_hhh`: The prompt used in the [Anthropic HHH models](https://gist.github.com/jareddk/2509330f8ef3d787fc5aaac67aab5f11#file-hhh_prompt-txt). A whopping 6,341 tokens long!
127
-
128
- As you can see, most of these prompts exceed the maximum context size of models like Flan-T5 (which has a context size of 512 tokens), so an error usually means the Inference API has timed out.
129
  """
130
 
131
- # %% app.ipynb 20
132
  with gr.Blocks(
133
  css="""
134
  .message.svelte-w6rprc.svelte-w6rprc.svelte-w6rprc {font-size: 20px; margin-top: 20px}
@@ -148,18 +142,18 @@ with gr.Blocks(
148
  label="Model",
149
  interactive=True,
150
  )
151
- prompt_template = gr.Dropdown(
152
- choices=[
153
- "langchain_default",
154
- "openai_chatgpt",
155
- "deepmind_sparrow",
156
- "deepmind_gopher",
157
- "anthropic_hhh",
158
- ],
159
- value="langchain_default",
160
- label="Prompt Template",
161
- interactive=True,
162
- )
163
  temperature = gr.Slider(
164
  minimum=0.0,
165
  maximum=2.0,
@@ -190,7 +184,6 @@ with gr.Blocks(
190
  inference_chat,
191
  [
192
  model_id,
193
- prompt_template,
194
  chat_input,
195
  temperature,
196
  top_p,
@@ -215,7 +208,6 @@ with gr.Blocks(
215
  inference_chat,
216
  [
217
  model_id,
218
- prompt_template,
219
  chat_input,
220
  temperature,
221
  top_p,
 
68
  # %% app.ipynb 5
69
  def inference_chat(
70
  model_id,
 
71
  text_input,
72
  temperature,
73
  top_p,
74
  history=[],
75
  ):
76
+ if "joi" in model_id:
77
+ prompt_filename = "langchain_default.json"
78
+ else:
79
+ prompt_filename = "anthropic_hhh_single.json"
80
+ print(prompt_filename)
81
+ with open(f"prompt_templates/{prompt_filename}", "r") as f:
82
  prompt_template = json.load(f)
83
 
84
  history_input = ""
 
107
  return {chatbot: chat, state: history}
108
 
109
 
110
+ # %% app.ipynb 21
111
  title = """<h1 align="center">Chatty Language Models</h1>"""
112
  description = """Pretrained language models can be conditioned to act like dialogue agents through a conversational prompt that typically takes the form:
113
 
 
120
  ```
121
 
122
  In this app, you can explore the outputs of several language models conditioned on different conversational prompts. The models are trained on different datasets and have different objectives, so they will have different personalities and strengths.
123
  """
124
 
125
+ # %% app.ipynb 23
126
  with gr.Blocks(
127
  css="""
128
  .message.svelte-w6rprc.svelte-w6rprc.svelte-w6rprc {font-size: 20px; margin-top: 20px}
 
142
  label="Model",
143
  interactive=True,
144
  )
145
+ # prompt_template = gr.Dropdown(
146
+ # choices=[
147
+ # "langchain_default",
148
+ # "openai_chatgpt",
149
+ # "deepmind_sparrow",
150
+ # "deepmind_gopher",
151
+ # "anthropic_hhh",
152
+ # ],
153
+ # value="langchain_default",
154
+ # label="Prompt Template",
155
+ # interactive=True,
156
+ # )
157
  temperature = gr.Slider(
158
  minimum=0.0,
159
  maximum=2.0,
 
184
  inference_chat,
185
  [
186
  model_id,
 
187
  chat_input,
188
  temperature,
189
  top_p,
 
208
  inference_chat,
209
  [
210
  model_id,
 
211
  chat_input,
212
  temperature,
213
  top_p,
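Below is a minimal sketch of how the Gradio interface is wired after this change, with a stub standing in for the real `inference_chat`. The component names follow the diff; the stub, the example model list, and the tuple-style `gr.Chatbot` history are our assumptions (Gradio 3.x API).

```python
import gradio as gr


def inference_chat_stub(model_id, text_input, temperature, top_p, history=[]):
    # Stand-in for inference_chat: appends an echo reply instead of calling
    # the Inference API, so the wiring can be exercised offline.
    history = history + [text_input, f"(echo from {model_id})"]
    chat = [(history[i], history[i + 1]) for i in range(0, len(history) - 1, 2)]
    return chat, history


with gr.Blocks() as demo:
    model_id = gr.Dropdown(
        choices=["google/flan-t5-xl"], value="google/flan-t5-xl", label="Model"
    )
    # The prompt_template dropdown is commented out in this commit; the prompt
    # file is now chosen inside inference_chat from the selected model.
    temperature = gr.Slider(minimum=0.0, maximum=2.0, value=1.0, label="Temperature")
    top_p = gr.Slider(minimum=0.0, maximum=1.0, value=0.9, label="Top-p")
    chat_input = gr.Textbox(label="Message")
    chatbot = gr.Chatbot()
    state = gr.State([])
    submit = gr.Button("Submit")
    submit.click(
        inference_chat_stub,
        [model_id, chat_input, temperature, top_p, state],
        [chatbot, state],
    )

# demo.launch()
```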
prompt_templates/anthropic_hhh_single.json ADDED
@@ -0,0 +1 @@
1
+ {"prompt": "Below is a dialogue between various a human and an AI assistant. The AI tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble-but-knowledgeable. The assistant is happy to help with almost anything, and will do its best to understand exactly what is needed. It also tries to avoid giving false or misleading information, and it caveats when it isn't entirely sure about the right answer. That said, the assistant is practical and really does its best, and doesn't let caution get too much in the way of being useful.\n\nCurrent conversation:\nHuman: Can you paraphrase this paragraph for me: \u201cIt took a long time to change the lightbulb, as it was high up in the ceiling in a stairwell. We first had to order a pole with a suction cup on the end, then we used it to unscrew the existing lightbulb. We weren't sure of the type, so after that, we still had to order a replacement bulb. Finally we used the suction cup and pole to replace it.\u201d\nAssistant: Sure, here's one example: \u201cWe wanted to change a lightbulb high up in the ceiling in a stairwell. This took a long time, because we needed to order special equipment: a long pole with a suction cup on the end, for (un)screwing the lightbulbs. Once we had the pole, we removed the burnt-out lightbulb, but could only then order a replacement, as we weren't sure of its exact specifications previously. Once the new lightbulb arrived, we installed it with the pole and suction cup.\u201d\nHuman: Ah, OK, thanks. Actually, can you condense the paraphrase to one sentence?\nAssistant: Sure, I'll try, though I'm not sure what to prioritize. Here's an example: \u201cWe replaced a hard-to-reach lightbulb with a suction cup attached to a pole, but the whole process took a while, especially since we had to first order the pole, then the replacement lightbulb.\u201d\nHuman: Oh, that's pretty good, but can you add mention of the stairwell specifically?\nAssistant: Sure, it's included it in this rewrite: \u201cWe replaced a lightbulb high up in a stairwell ceiling using a suction cup attached to a pole, but the whole process took a while, especially since we had to first order the pole, then the replacement lightbulb.\u201d\n{history}\nHuman: {human_input}\n\nAssistant:\n"}