Created using Colaboratory
notebooks/14-Adding_Chat.ipynb  ADDED  (+1631 -0)
@@ -0,0 +1,1631 @@
1 |
+
{
|
2 |
+
"nbformat": 4,
|
3 |
+
"nbformat_minor": 0,
|
4 |
+
"metadata": {
|
5 |
+
"colab": {
|
6 |
+
"provenance": [],
|
7 |
+
"authorship_tag": "ABX9TyOIU+C03mTlevo1fu+yiDTM",
|
8 |
+
"include_colab_link": true
|
9 |
+
},
|
10 |
+
"kernelspec": {
|
11 |
+
"name": "python3",
|
12 |
+
"display_name": "Python 3"
|
13 |
+
},
|
14 |
+
"language_info": {
|
15 |
+
"name": "python"
|
16 |
+
},
|
17 |
+
"widgets": {
|
18 |
+
"application/vnd.jupyter.widget-state+json": {
|
19 |
+
"3fbabd8a8660461ba5e7bc08ef39139a": {
|
20 |
+
"model_module": "@jupyter-widgets/controls",
|
21 |
+
"model_name": "HBoxModel",
|
22 |
+
"model_module_version": "1.5.0",
|
23 |
+
"state": {
|
24 |
+
"_dom_classes": [],
|
25 |
+
"_model_module": "@jupyter-widgets/controls",
|
26 |
+
"_model_module_version": "1.5.0",
|
27 |
+
"_model_name": "HBoxModel",
|
28 |
+
"_view_count": null,
|
29 |
+
"_view_module": "@jupyter-widgets/controls",
|
30 |
+
"_view_module_version": "1.5.0",
|
31 |
+
"_view_name": "HBoxView",
|
32 |
+
"box_style": "",
|
33 |
+
"children": [
|
34 |
+
"IPY_MODEL_df2365556ae242a2ab1a119f9a31a561",
|
35 |
+
"IPY_MODEL_5f4b9d32df8f446e858e4c289dc282f9",
|
36 |
+
"IPY_MODEL_5b588f83a15d42d9aca888e06bbd95ff"
|
37 |
+
],
|
38 |
+
"layout": "IPY_MODEL_ad073bca655540809e39f26538d2ec0d"
|
39 |
+
}
|
40 |
+
},
|
41 |
+
"df2365556ae242a2ab1a119f9a31a561": {
|
42 |
+
"model_module": "@jupyter-widgets/controls",
|
43 |
+
"model_name": "HTMLModel",
|
44 |
+
"model_module_version": "1.5.0",
|
45 |
+
"state": {
|
46 |
+
"_dom_classes": [],
|
47 |
+
"_model_module": "@jupyter-widgets/controls",
|
48 |
+
"_model_module_version": "1.5.0",
|
49 |
+
"_model_name": "HTMLModel",
|
50 |
+
"_view_count": null,
|
51 |
+
"_view_module": "@jupyter-widgets/controls",
|
52 |
+
"_view_module_version": "1.5.0",
|
53 |
+
"_view_name": "HTMLView",
|
54 |
+
"description": "",
|
55 |
+
"description_tooltip": null,
|
56 |
+
"layout": "IPY_MODEL_13b9c5395bca4c3ba21265240cb936cf",
|
57 |
+
"placeholder": "β",
|
58 |
+
"style": "IPY_MODEL_47a4586384274577a726c57605e7f8d9",
|
59 |
+
"value": "Parsing nodes: 100%"
|
60 |
+
}
|
61 |
+
},
|
62 |
+
"5f4b9d32df8f446e858e4c289dc282f9": {
|
63 |
+
"model_module": "@jupyter-widgets/controls",
|
64 |
+
"model_name": "FloatProgressModel",
|
65 |
+
"model_module_version": "1.5.0",
|
66 |
+
"state": {
|
67 |
+
"_dom_classes": [],
|
68 |
+
"_model_module": "@jupyter-widgets/controls",
|
69 |
+
"_model_module_version": "1.5.0",
|
70 |
+
"_model_name": "FloatProgressModel",
|
71 |
+
"_view_count": null,
|
72 |
+
"_view_module": "@jupyter-widgets/controls",
|
73 |
+
"_view_module_version": "1.5.0",
|
74 |
+
"_view_name": "ProgressView",
|
75 |
+
"bar_style": "success",
|
76 |
+
"description": "",
|
77 |
+
"description_tooltip": null,
|
78 |
+
"layout": "IPY_MODEL_96a3bdece738481db57e811ccb74a974",
|
79 |
+
"max": 14,
|
80 |
+
"min": 0,
|
81 |
+
"orientation": "horizontal",
|
82 |
+
"style": "IPY_MODEL_5c7973afd79349ed997a69120d0629b2",
|
83 |
+
"value": 14
|
84 |
+
}
|
85 |
+
},
|
86 |
+
"5b588f83a15d42d9aca888e06bbd95ff": {
|
87 |
+
"model_module": "@jupyter-widgets/controls",
|
88 |
+
"model_name": "HTMLModel",
|
89 |
+
"model_module_version": "1.5.0",
|
90 |
+
"state": {
|
91 |
+
"_dom_classes": [],
|
92 |
+
"_model_module": "@jupyter-widgets/controls",
|
93 |
+
"_model_module_version": "1.5.0",
|
94 |
+
"_model_name": "HTMLModel",
|
95 |
+
"_view_count": null,
|
96 |
+
"_view_module": "@jupyter-widgets/controls",
|
97 |
+
"_view_module_version": "1.5.0",
|
98 |
+
"_view_name": "HTMLView",
|
99 |
+
"description": "",
|
100 |
+
"description_tooltip": null,
|
101 |
+
"layout": "IPY_MODEL_af9b6ae927dd4764b9692507791bc67e",
|
102 |
+
"placeholder": "β",
|
103 |
+
"style": "IPY_MODEL_134210510d49476e959dd7d032bbdbdc",
|
104 |
+
"value": " 14/14 [00:00<00:00, 21.41it/s]"
|
105 |
+
}
|
106 |
+
},
|
107 |
+
"ad073bca655540809e39f26538d2ec0d": {
|
108 |
+
"model_module": "@jupyter-widgets/base",
|
109 |
+
"model_name": "LayoutModel",
|
110 |
+
"model_module_version": "1.2.0",
|
111 |
+
"state": {
|
112 |
+
"_model_module": "@jupyter-widgets/base",
|
113 |
+
"_model_module_version": "1.2.0",
|
114 |
+
"_model_name": "LayoutModel",
|
115 |
+
"_view_count": null,
|
116 |
+
"_view_module": "@jupyter-widgets/base",
|
117 |
+
"_view_module_version": "1.2.0",
|
118 |
+
"_view_name": "LayoutView",
|
119 |
+
"align_content": null,
|
120 |
+
"align_items": null,
|
121 |
+
"align_self": null,
|
122 |
+
"border": null,
|
123 |
+
"bottom": null,
|
124 |
+
"display": null,
|
125 |
+
"flex": null,
|
126 |
+
"flex_flow": null,
|
127 |
+
"grid_area": null,
|
128 |
+
"grid_auto_columns": null,
|
129 |
+
"grid_auto_flow": null,
|
130 |
+
"grid_auto_rows": null,
|
131 |
+
"grid_column": null,
|
132 |
+
"grid_gap": null,
|
133 |
+
"grid_row": null,
|
134 |
+
"grid_template_areas": null,
|
135 |
+
"grid_template_columns": null,
|
136 |
+
"grid_template_rows": null,
|
137 |
+
"height": null,
|
138 |
+
"justify_content": null,
|
139 |
+
"justify_items": null,
|
140 |
+
"left": null,
|
141 |
+
"margin": null,
|
142 |
+
"max_height": null,
|
143 |
+
"max_width": null,
|
144 |
+
"min_height": null,
|
145 |
+
"min_width": null,
|
146 |
+
"object_fit": null,
|
147 |
+
"object_position": null,
|
148 |
+
"order": null,
|
149 |
+
"overflow": null,
|
150 |
+
"overflow_x": null,
|
151 |
+
"overflow_y": null,
|
152 |
+
"padding": null,
|
153 |
+
"right": null,
|
154 |
+
"top": null,
|
155 |
+
"visibility": null,
|
156 |
+
"width": null
|
157 |
+
}
|
158 |
+
},
|
159 |
+
"13b9c5395bca4c3ba21265240cb936cf": {
|
160 |
+
"model_module": "@jupyter-widgets/base",
|
161 |
+
"model_name": "LayoutModel",
|
162 |
+
"model_module_version": "1.2.0",
|
163 |
+
"state": {
|
164 |
+
"_model_module": "@jupyter-widgets/base",
|
165 |
+
"_model_module_version": "1.2.0",
|
166 |
+
"_model_name": "LayoutModel",
|
167 |
+
"_view_count": null,
|
168 |
+
"_view_module": "@jupyter-widgets/base",
|
169 |
+
"_view_module_version": "1.2.0",
|
170 |
+
"_view_name": "LayoutView",
|
171 |
+
"align_content": null,
|
172 |
+
"align_items": null,
|
173 |
+
"align_self": null,
|
174 |
+
"border": null,
|
175 |
+
"bottom": null,
|
176 |
+
"display": null,
|
177 |
+
"flex": null,
|
178 |
+
"flex_flow": null,
|
179 |
+
"grid_area": null,
|
180 |
+
"grid_auto_columns": null,
|
181 |
+
"grid_auto_flow": null,
|
182 |
+
"grid_auto_rows": null,
|
183 |
+
"grid_column": null,
|
184 |
+
"grid_gap": null,
|
185 |
+
"grid_row": null,
|
186 |
+
"grid_template_areas": null,
|
187 |
+
"grid_template_columns": null,
|
188 |
+
"grid_template_rows": null,
|
189 |
+
"height": null,
|
190 |
+
"justify_content": null,
|
191 |
+
"justify_items": null,
|
192 |
+
"left": null,
|
193 |
+
"margin": null,
|
194 |
+
"max_height": null,
|
195 |
+
"max_width": null,
|
196 |
+
"min_height": null,
|
197 |
+
"min_width": null,
|
198 |
+
"object_fit": null,
|
199 |
+
"object_position": null,
|
200 |
+
"order": null,
|
201 |
+
"overflow": null,
|
202 |
+
"overflow_x": null,
|
203 |
+
"overflow_y": null,
|
204 |
+
"padding": null,
|
205 |
+
"right": null,
|
206 |
+
"top": null,
|
207 |
+
"visibility": null,
|
208 |
+
"width": null
|
209 |
+
}
|
210 |
+
},
|
211 |
+
"47a4586384274577a726c57605e7f8d9": {
|
212 |
+
"model_module": "@jupyter-widgets/controls",
|
213 |
+
"model_name": "DescriptionStyleModel",
|
214 |
+
"model_module_version": "1.5.0",
|
215 |
+
"state": {
|
216 |
+
"_model_module": "@jupyter-widgets/controls",
|
217 |
+
"_model_module_version": "1.5.0",
|
218 |
+
"_model_name": "DescriptionStyleModel",
|
219 |
+
"_view_count": null,
|
220 |
+
"_view_module": "@jupyter-widgets/base",
|
221 |
+
"_view_module_version": "1.2.0",
|
222 |
+
"_view_name": "StyleView",
|
223 |
+
"description_width": ""
|
224 |
+
}
|
225 |
+
},
|
226 |
+
"96a3bdece738481db57e811ccb74a974": {
|
227 |
+
"model_module": "@jupyter-widgets/base",
|
228 |
+
"model_name": "LayoutModel",
|
229 |
+
"model_module_version": "1.2.0",
|
230 |
+
"state": {
|
231 |
+
"_model_module": "@jupyter-widgets/base",
|
232 |
+
"_model_module_version": "1.2.0",
|
233 |
+
"_model_name": "LayoutModel",
|
234 |
+
"_view_count": null,
|
235 |
+
"_view_module": "@jupyter-widgets/base",
|
236 |
+
"_view_module_version": "1.2.0",
|
237 |
+
"_view_name": "LayoutView",
|
238 |
+
"align_content": null,
|
239 |
+
"align_items": null,
|
240 |
+
"align_self": null,
|
241 |
+
"border": null,
|
242 |
+
"bottom": null,
|
243 |
+
"display": null,
|
244 |
+
"flex": null,
|
245 |
+
"flex_flow": null,
|
246 |
+
"grid_area": null,
|
247 |
+
"grid_auto_columns": null,
|
248 |
+
"grid_auto_flow": null,
|
249 |
+
"grid_auto_rows": null,
|
250 |
+
"grid_column": null,
|
251 |
+
"grid_gap": null,
|
252 |
+
"grid_row": null,
|
253 |
+
"grid_template_areas": null,
|
254 |
+
"grid_template_columns": null,
|
255 |
+
"grid_template_rows": null,
|
256 |
+
"height": null,
|
257 |
+
"justify_content": null,
|
258 |
+
"justify_items": null,
|
259 |
+
"left": null,
|
260 |
+
"margin": null,
|
261 |
+
"max_height": null,
|
262 |
+
"max_width": null,
|
263 |
+
"min_height": null,
|
264 |
+
"min_width": null,
|
265 |
+
"object_fit": null,
|
266 |
+
"object_position": null,
|
267 |
+
"order": null,
|
268 |
+
"overflow": null,
|
269 |
+
"overflow_x": null,
|
270 |
+
"overflow_y": null,
|
271 |
+
"padding": null,
|
272 |
+
"right": null,
|
273 |
+
"top": null,
|
274 |
+
"visibility": null,
|
275 |
+
"width": null
|
276 |
+
}
|
277 |
+
},
|
278 |
+
"5c7973afd79349ed997a69120d0629b2": {
|
279 |
+
"model_module": "@jupyter-widgets/controls",
|
280 |
+
"model_name": "ProgressStyleModel",
|
281 |
+
"model_module_version": "1.5.0",
|
282 |
+
"state": {
|
283 |
+
"_model_module": "@jupyter-widgets/controls",
|
284 |
+
"_model_module_version": "1.5.0",
|
285 |
+
"_model_name": "ProgressStyleModel",
|
286 |
+
"_view_count": null,
|
287 |
+
"_view_module": "@jupyter-widgets/base",
|
288 |
+
"_view_module_version": "1.2.0",
|
289 |
+
"_view_name": "StyleView",
|
290 |
+
"bar_color": null,
|
291 |
+
"description_width": ""
|
292 |
+
}
|
293 |
+
},
|
294 |
+
"af9b6ae927dd4764b9692507791bc67e": {
|
295 |
+
"model_module": "@jupyter-widgets/base",
|
296 |
+
"model_name": "LayoutModel",
|
297 |
+
"model_module_version": "1.2.0",
|
298 |
+
"state": {
|
299 |
+
"_model_module": "@jupyter-widgets/base",
|
300 |
+
"_model_module_version": "1.2.0",
|
301 |
+
"_model_name": "LayoutModel",
|
302 |
+
"_view_count": null,
|
303 |
+
"_view_module": "@jupyter-widgets/base",
|
304 |
+
"_view_module_version": "1.2.0",
|
305 |
+
"_view_name": "LayoutView",
|
306 |
+
"align_content": null,
|
307 |
+
"align_items": null,
|
308 |
+
"align_self": null,
|
309 |
+
"border": null,
|
310 |
+
"bottom": null,
|
311 |
+
"display": null,
|
312 |
+
"flex": null,
|
313 |
+
"flex_flow": null,
|
314 |
+
"grid_area": null,
|
315 |
+
"grid_auto_columns": null,
|
316 |
+
"grid_auto_flow": null,
|
317 |
+
"grid_auto_rows": null,
|
318 |
+
"grid_column": null,
|
319 |
+
"grid_gap": null,
|
320 |
+
"grid_row": null,
|
321 |
+
"grid_template_areas": null,
|
322 |
+
"grid_template_columns": null,
|
323 |
+
"grid_template_rows": null,
|
324 |
+
"height": null,
|
325 |
+
"justify_content": null,
|
326 |
+
"justify_items": null,
|
327 |
+
"left": null,
|
328 |
+
"margin": null,
|
329 |
+
"max_height": null,
|
330 |
+
"max_width": null,
|
331 |
+
"min_height": null,
|
332 |
+
"min_width": null,
|
333 |
+
"object_fit": null,
|
334 |
+
"object_position": null,
|
335 |
+
"order": null,
|
336 |
+
"overflow": null,
|
337 |
+
"overflow_x": null,
|
338 |
+
"overflow_y": null,
|
339 |
+
"padding": null,
|
340 |
+
"right": null,
|
341 |
+
"top": null,
|
342 |
+
"visibility": null,
|
343 |
+
"width": null
|
344 |
+
}
|
345 |
+
},
|
346 |
+
"134210510d49476e959dd7d032bbdbdc": {
|
347 |
+
"model_module": "@jupyter-widgets/controls",
|
348 |
+
"model_name": "DescriptionStyleModel",
|
349 |
+
"model_module_version": "1.5.0",
|
350 |
+
"state": {
|
351 |
+
"_model_module": "@jupyter-widgets/controls",
|
352 |
+
"_model_module_version": "1.5.0",
|
353 |
+
"_model_name": "DescriptionStyleModel",
|
354 |
+
"_view_count": null,
|
355 |
+
"_view_module": "@jupyter-widgets/base",
|
356 |
+
"_view_module_version": "1.2.0",
|
357 |
+
"_view_name": "StyleView",
|
358 |
+
"description_width": ""
|
359 |
+
}
|
360 |
+
},
|
361 |
+
"5f9bb065c2b74d2e8ded32e1306a7807": {
|
362 |
+
"model_module": "@jupyter-widgets/controls",
|
363 |
+
"model_name": "HBoxModel",
|
364 |
+
"model_module_version": "1.5.0",
|
365 |
+
"state": {
|
366 |
+
"_dom_classes": [],
|
367 |
+
"_model_module": "@jupyter-widgets/controls",
|
368 |
+
"_model_module_version": "1.5.0",
|
369 |
+
"_model_name": "HBoxModel",
|
370 |
+
"_view_count": null,
|
371 |
+
"_view_module": "@jupyter-widgets/controls",
|
372 |
+
"_view_module_version": "1.5.0",
|
373 |
+
"_view_name": "HBoxView",
|
374 |
+
"box_style": "",
|
375 |
+
"children": [
|
376 |
+
"IPY_MODEL_73a06bc546a64f7f99a9e4a135319dcd",
|
377 |
+
"IPY_MODEL_ce48deaf4d8c49cdae92bfdbb3a78df0",
|
378 |
+
"IPY_MODEL_4a172e8c6aa44e41a42fc1d9cf714fd0"
|
379 |
+
],
|
380 |
+
"layout": "IPY_MODEL_0245f2604e4d49c8bd0210302746c47b"
|
381 |
+
}
|
382 |
+
},
|
383 |
+
"73a06bc546a64f7f99a9e4a135319dcd": {
|
384 |
+
"model_module": "@jupyter-widgets/controls",
|
385 |
+
"model_name": "HTMLModel",
|
386 |
+
"model_module_version": "1.5.0",
|
387 |
+
"state": {
|
388 |
+
"_dom_classes": [],
|
389 |
+
"_model_module": "@jupyter-widgets/controls",
|
390 |
+
"_model_module_version": "1.5.0",
|
391 |
+
"_model_name": "HTMLModel",
|
392 |
+
"_view_count": null,
|
393 |
+
"_view_module": "@jupyter-widgets/controls",
|
394 |
+
"_view_module_version": "1.5.0",
|
395 |
+
"_view_name": "HTMLView",
|
396 |
+
"description": "",
|
397 |
+
"description_tooltip": null,
|
398 |
+
"layout": "IPY_MODEL_e956dfab55084a9cbe33c8e331b511e7",
|
399 |
+
"placeholder": "β",
|
400 |
+
"style": "IPY_MODEL_cb394578badd43a89850873ad2526542",
|
401 |
+
"value": "Generating embeddings: 100%"
|
402 |
+
}
|
403 |
+
},
|
404 |
+
"ce48deaf4d8c49cdae92bfdbb3a78df0": {
|
405 |
+
"model_module": "@jupyter-widgets/controls",
|
406 |
+
"model_name": "FloatProgressModel",
|
407 |
+
"model_module_version": "1.5.0",
|
408 |
+
"state": {
|
409 |
+
"_dom_classes": [],
|
410 |
+
"_model_module": "@jupyter-widgets/controls",
|
411 |
+
"_model_module_version": "1.5.0",
|
412 |
+
"_model_name": "FloatProgressModel",
|
413 |
+
"_view_count": null,
|
414 |
+
"_view_module": "@jupyter-widgets/controls",
|
415 |
+
"_view_module_version": "1.5.0",
|
416 |
+
"_view_name": "ProgressView",
|
417 |
+
"bar_style": "success",
|
418 |
+
"description": "",
|
419 |
+
"description_tooltip": null,
|
420 |
+
"layout": "IPY_MODEL_193aef33d9184055bb9223f56d456de6",
|
421 |
+
"max": 108,
|
422 |
+
"min": 0,
|
423 |
+
"orientation": "horizontal",
|
424 |
+
"style": "IPY_MODEL_abfc9aa911ce4a5ea81c7c451f08295f",
|
425 |
+
"value": 108
|
426 |
+
}
|
427 |
+
},
|
428 |
+
"4a172e8c6aa44e41a42fc1d9cf714fd0": {
|
429 |
+
"model_module": "@jupyter-widgets/controls",
|
430 |
+
"model_name": "HTMLModel",
|
431 |
+
"model_module_version": "1.5.0",
|
432 |
+
"state": {
|
433 |
+
"_dom_classes": [],
|
434 |
+
"_model_module": "@jupyter-widgets/controls",
|
435 |
+
"_model_module_version": "1.5.0",
|
436 |
+
"_model_name": "HTMLModel",
|
437 |
+
"_view_count": null,
|
438 |
+
"_view_module": "@jupyter-widgets/controls",
|
439 |
+
"_view_module_version": "1.5.0",
|
440 |
+
"_view_name": "HTMLView",
|
441 |
+
"description": "",
|
442 |
+
"description_tooltip": null,
|
443 |
+
"layout": "IPY_MODEL_e7937a1bc68441a080374911a6563376",
|
444 |
+
"placeholder": "β",
|
445 |
+
"style": "IPY_MODEL_e532ed7bfef34f67b5fcacd9534eb789",
|
446 |
+
"value": " 108/108 [00:03<00:00, 33.70it/s]"
|
447 |
+
}
|
448 |
+
},
|
449 |
+
"0245f2604e4d49c8bd0210302746c47b": {
|
450 |
+
"model_module": "@jupyter-widgets/base",
|
451 |
+
"model_name": "LayoutModel",
|
452 |
+
"model_module_version": "1.2.0",
|
453 |
+
"state": {
|
454 |
+
"_model_module": "@jupyter-widgets/base",
|
455 |
+
"_model_module_version": "1.2.0",
|
456 |
+
"_model_name": "LayoutModel",
|
457 |
+
"_view_count": null,
|
458 |
+
"_view_module": "@jupyter-widgets/base",
|
459 |
+
"_view_module_version": "1.2.0",
|
460 |
+
"_view_name": "LayoutView",
|
461 |
+
"align_content": null,
|
462 |
+
"align_items": null,
|
463 |
+
"align_self": null,
|
464 |
+
"border": null,
|
465 |
+
"bottom": null,
|
466 |
+
"display": null,
|
467 |
+
"flex": null,
|
468 |
+
"flex_flow": null,
|
469 |
+
"grid_area": null,
|
470 |
+
"grid_auto_columns": null,
|
471 |
+
"grid_auto_flow": null,
|
472 |
+
"grid_auto_rows": null,
|
473 |
+
"grid_column": null,
|
474 |
+
"grid_gap": null,
|
475 |
+
"grid_row": null,
|
476 |
+
"grid_template_areas": null,
|
477 |
+
"grid_template_columns": null,
|
478 |
+
"grid_template_rows": null,
|
479 |
+
"height": null,
|
480 |
+
"justify_content": null,
|
481 |
+
"justify_items": null,
|
482 |
+
"left": null,
|
483 |
+
"margin": null,
|
484 |
+
"max_height": null,
|
485 |
+
"max_width": null,
|
486 |
+
"min_height": null,
|
487 |
+
"min_width": null,
|
488 |
+
"object_fit": null,
|
489 |
+
"object_position": null,
|
490 |
+
"order": null,
|
491 |
+
"overflow": null,
|
492 |
+
"overflow_x": null,
|
493 |
+
"overflow_y": null,
|
494 |
+
"padding": null,
|
495 |
+
"right": null,
|
496 |
+
"top": null,
|
497 |
+
"visibility": null,
|
498 |
+
"width": null
|
499 |
+
}
|
500 |
+
},
|
501 |
+
"e956dfab55084a9cbe33c8e331b511e7": {
|
502 |
+
"model_module": "@jupyter-widgets/base",
|
503 |
+
"model_name": "LayoutModel",
|
504 |
+
"model_module_version": "1.2.0",
|
505 |
+
"state": {
|
506 |
+
"_model_module": "@jupyter-widgets/base",
|
507 |
+
"_model_module_version": "1.2.0",
|
508 |
+
"_model_name": "LayoutModel",
|
509 |
+
"_view_count": null,
|
510 |
+
"_view_module": "@jupyter-widgets/base",
|
511 |
+
"_view_module_version": "1.2.0",
|
512 |
+
"_view_name": "LayoutView",
|
513 |
+
"align_content": null,
|
514 |
+
"align_items": null,
|
515 |
+
"align_self": null,
|
516 |
+
"border": null,
|
517 |
+
"bottom": null,
|
518 |
+
"display": null,
|
519 |
+
"flex": null,
|
520 |
+
"flex_flow": null,
|
521 |
+
"grid_area": null,
|
522 |
+
"grid_auto_columns": null,
|
523 |
+
"grid_auto_flow": null,
|
524 |
+
"grid_auto_rows": null,
|
525 |
+
"grid_column": null,
|
526 |
+
"grid_gap": null,
|
527 |
+
"grid_row": null,
|
528 |
+
"grid_template_areas": null,
|
529 |
+
"grid_template_columns": null,
|
530 |
+
"grid_template_rows": null,
|
531 |
+
"height": null,
|
532 |
+
"justify_content": null,
|
533 |
+
"justify_items": null,
|
534 |
+
"left": null,
|
535 |
+
"margin": null,
|
536 |
+
"max_height": null,
|
537 |
+
"max_width": null,
|
538 |
+
"min_height": null,
|
539 |
+
"min_width": null,
|
540 |
+
"object_fit": null,
|
541 |
+
"object_position": null,
|
542 |
+
"order": null,
|
543 |
+
"overflow": null,
|
544 |
+
"overflow_x": null,
|
545 |
+
"overflow_y": null,
|
546 |
+
"padding": null,
|
547 |
+
"right": null,
|
548 |
+
"top": null,
|
549 |
+
"visibility": null,
|
550 |
+
"width": null
|
551 |
+
}
|
552 |
+
},
|
553 |
+
"cb394578badd43a89850873ad2526542": {
|
554 |
+
"model_module": "@jupyter-widgets/controls",
|
555 |
+
"model_name": "DescriptionStyleModel",
|
556 |
+
"model_module_version": "1.5.0",
|
557 |
+
"state": {
|
558 |
+
"_model_module": "@jupyter-widgets/controls",
|
559 |
+
"_model_module_version": "1.5.0",
|
560 |
+
"_model_name": "DescriptionStyleModel",
|
561 |
+
"_view_count": null,
|
562 |
+
"_view_module": "@jupyter-widgets/base",
|
563 |
+
"_view_module_version": "1.2.0",
|
564 |
+
"_view_name": "StyleView",
|
565 |
+
"description_width": ""
|
566 |
+
}
|
567 |
+
},
|
568 |
+
"193aef33d9184055bb9223f56d456de6": {
|
569 |
+
"model_module": "@jupyter-widgets/base",
|
570 |
+
"model_name": "LayoutModel",
|
571 |
+
"model_module_version": "1.2.0",
|
572 |
+
"state": {
|
573 |
+
"_model_module": "@jupyter-widgets/base",
|
574 |
+
"_model_module_version": "1.2.0",
|
575 |
+
"_model_name": "LayoutModel",
|
576 |
+
"_view_count": null,
|
577 |
+
"_view_module": "@jupyter-widgets/base",
|
578 |
+
"_view_module_version": "1.2.0",
|
579 |
+
"_view_name": "LayoutView",
|
580 |
+
"align_content": null,
|
581 |
+
"align_items": null,
|
582 |
+
"align_self": null,
|
583 |
+
"border": null,
|
584 |
+
"bottom": null,
|
585 |
+
"display": null,
|
586 |
+
"flex": null,
|
587 |
+
"flex_flow": null,
|
588 |
+
"grid_area": null,
|
589 |
+
"grid_auto_columns": null,
|
590 |
+
"grid_auto_flow": null,
|
591 |
+
"grid_auto_rows": null,
|
592 |
+
"grid_column": null,
|
593 |
+
"grid_gap": null,
|
594 |
+
"grid_row": null,
|
595 |
+
"grid_template_areas": null,
|
596 |
+
"grid_template_columns": null,
|
597 |
+
"grid_template_rows": null,
|
598 |
+
"height": null,
|
599 |
+
"justify_content": null,
|
600 |
+
"justify_items": null,
|
601 |
+
"left": null,
|
602 |
+
"margin": null,
|
603 |
+
"max_height": null,
|
604 |
+
"max_width": null,
|
605 |
+
"min_height": null,
|
606 |
+
"min_width": null,
|
607 |
+
"object_fit": null,
|
608 |
+
"object_position": null,
|
609 |
+
"order": null,
|
610 |
+
"overflow": null,
|
611 |
+
"overflow_x": null,
|
612 |
+
"overflow_y": null,
|
613 |
+
"padding": null,
|
614 |
+
"right": null,
|
615 |
+
"top": null,
|
616 |
+
"visibility": null,
|
617 |
+
"width": null
|
618 |
+
}
|
619 |
+
},
|
620 |
+
"abfc9aa911ce4a5ea81c7c451f08295f": {
|
621 |
+
"model_module": "@jupyter-widgets/controls",
|
622 |
+
"model_name": "ProgressStyleModel",
|
623 |
+
"model_module_version": "1.5.0",
|
624 |
+
"state": {
|
625 |
+
"_model_module": "@jupyter-widgets/controls",
|
626 |
+
"_model_module_version": "1.5.0",
|
627 |
+
"_model_name": "ProgressStyleModel",
|
628 |
+
"_view_count": null,
|
629 |
+
"_view_module": "@jupyter-widgets/base",
|
630 |
+
"_view_module_version": "1.2.0",
|
631 |
+
"_view_name": "StyleView",
|
632 |
+
"bar_color": null,
|
633 |
+
"description_width": ""
|
634 |
+
}
|
635 |
+
},
|
636 |
+
"e7937a1bc68441a080374911a6563376": {
|
637 |
+
"model_module": "@jupyter-widgets/base",
|
638 |
+
"model_name": "LayoutModel",
|
639 |
+
"model_module_version": "1.2.0",
|
640 |
+
"state": {
|
641 |
+
"_model_module": "@jupyter-widgets/base",
|
642 |
+
"_model_module_version": "1.2.0",
|
643 |
+
"_model_name": "LayoutModel",
|
644 |
+
"_view_count": null,
|
645 |
+
"_view_module": "@jupyter-widgets/base",
|
646 |
+
"_view_module_version": "1.2.0",
|
647 |
+
"_view_name": "LayoutView",
|
648 |
+
"align_content": null,
|
649 |
+
"align_items": null,
|
650 |
+
"align_self": null,
|
651 |
+
"border": null,
|
652 |
+
"bottom": null,
|
653 |
+
"display": null,
|
654 |
+
"flex": null,
|
655 |
+
"flex_flow": null,
|
656 |
+
"grid_area": null,
|
657 |
+
"grid_auto_columns": null,
|
658 |
+
"grid_auto_flow": null,
|
659 |
+
"grid_auto_rows": null,
|
660 |
+
"grid_column": null,
|
661 |
+
"grid_gap": null,
|
662 |
+
"grid_row": null,
|
663 |
+
"grid_template_areas": null,
|
664 |
+
"grid_template_columns": null,
|
665 |
+
"grid_template_rows": null,
|
666 |
+
"height": null,
|
667 |
+
"justify_content": null,
|
668 |
+
"justify_items": null,
|
669 |
+
"left": null,
|
670 |
+
"margin": null,
|
671 |
+
"max_height": null,
|
672 |
+
"max_width": null,
|
673 |
+
"min_height": null,
|
674 |
+
"min_width": null,
|
675 |
+
"object_fit": null,
|
676 |
+
"object_position": null,
|
677 |
+
"order": null,
|
678 |
+
"overflow": null,
|
679 |
+
"overflow_x": null,
|
680 |
+
"overflow_y": null,
|
681 |
+
"padding": null,
|
682 |
+
"right": null,
|
683 |
+
"top": null,
|
684 |
+
"visibility": null,
|
685 |
+
"width": null
|
686 |
+
}
|
687 |
+
},
|
688 |
+
"e532ed7bfef34f67b5fcacd9534eb789": {
|
689 |
+
"model_module": "@jupyter-widgets/controls",
|
690 |
+
"model_name": "DescriptionStyleModel",
|
691 |
+
"model_module_version": "1.5.0",
|
692 |
+
"state": {
|
693 |
+
"_model_module": "@jupyter-widgets/controls",
|
694 |
+
"_model_module_version": "1.5.0",
|
695 |
+
"_model_name": "DescriptionStyleModel",
|
696 |
+
"_view_count": null,
|
697 |
+
"_view_module": "@jupyter-widgets/base",
|
698 |
+
"_view_module_version": "1.2.0",
|
699 |
+
"_view_name": "StyleView",
|
700 |
+
"description_width": ""
|
701 |
+
}
|
702 |
+
}
|
703 |
+
}
|
704 |
+
}
|
705 |
+
},
|
706 |
+
"cells": [
|
707 |
+
{
|
708 |
+
"cell_type": "markdown",
|
709 |
+
"metadata": {
|
710 |
+
"id": "view-in-github",
|
711 |
+
"colab_type": "text"
|
712 |
+
},
|
713 |
+
"source": [
|
714 |
+
"<a href=\"https://colab.research.google.com/github/towardsai/ai-tutor-rag-system/blob/main/notebooks/14-Adding_Chat.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
715 |
+
]
|
716 |
+
},
|
717 |
+
{
|
718 |
+
"cell_type": "markdown",
|
719 |
+
"source": [
|
720 |
+
"# Install Packages and Setup Variables"
|
721 |
+
],
|
722 |
+
"metadata": {
|
723 |
+
"id": "-zE1h0uQV7uT"
|
724 |
+
}
|
725 |
+
},
|
726 |
+
{
|
727 |
+
"cell_type": "code",
|
728 |
+
"execution_count": 1,
|
729 |
+
"metadata": {
|
730 |
+
"id": "QPJzr-I9XQ7l",
|
731 |
+
"colab": {
|
732 |
+
"base_uri": "https://localhost:8080/"
|
733 |
+
},
|
734 |
+
"outputId": "19864102-680b-446b-fb38-7fad066cee09"
|
735 |
+
},
|
736 |
+
"outputs": [
|
737 |
+
{
|
738 |
+
"output_type": "stream",
|
739 |
+
"name": "stdout",
|
740 |
+
"text": [
|
741 |
+
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m15.7/15.7 MB\u001b[0m \u001b[31m5.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m225.4/225.4 kB\u001b[0m \u001b[31m24.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m2.0/2.0 MB\u001b[0m \u001b[31m77.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m508.6/508.6 kB\u001b[0m \u001b[31m41.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m79.9/79.9 MB\u001b[0m \u001b[31m10.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m45.7/45.7 kB\u001b[0m \u001b[31m5.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m51.7/51.7 kB\u001b[0m \u001b[31m6.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m75.9/75.9 kB\u001b[0m \u001b[31m9.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m2.4/2.4 MB\u001b[0m \u001b[31m79.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m92.1/92.1 kB\u001b[0m \u001b[31m11.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m60.8/60.8 kB\u001b[0m \u001b[31m7.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m41.1/41.1 kB\u001b[0m \u001b[31m4.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m5.4/5.4 MB\u001b[0m \u001b[31m71.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m6.8/6.8 MB\u001b[0m \u001b[31m72.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m57.9/57.9 kB\u001b[0m \u001b[31m6.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m105.6/105.6 kB\u001b[0m \u001b[31m12.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m67.3/67.3 kB\u001b[0m \u001b[31m9.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25h Installing build dependencies ... \u001b[?25l\u001b[?25hdone\n",
" Getting requirements to build wheel ... \u001b[?25l\u001b[?25hdone\n",
" Preparing metadata (pyproject.toml) ... \u001b[?25l\u001b[?25hdone\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m698.9/698.9 kB\u001b[0m \u001b[31m53.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.6/1.6 MB\u001b[0m \u001b[31m75.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m67.6/67.6 kB\u001b[0m \u001b[31m8.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m3.1/3.1 MB\u001b[0m \u001b[31m79.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m71.5/71.5 kB\u001b[0m \u001b[31m9.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m76.9/76.9 kB\u001b[0m \u001b[31m8.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m58.3/58.3 kB\u001b[0m \u001b[31m7.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m46.0/46.0 kB\u001b[0m \u001b[31m5.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m50.8/50.8 kB\u001b[0m \u001b[31m6.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m341.4/341.4 kB\u001b[0m \u001b[31m34.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m3.4/3.4 MB\u001b[0m \u001b[31m74.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.3/1.3 MB\u001b[0m \u001b[31m68.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m130.2/130.2 kB\u001b[0m \u001b[31m16.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m49.4/49.4 kB\u001b[0m \u001b[31m4.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m86.8/86.8 kB\u001b[0m \u001b[31m11.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25h Building wheel for pypika (pyproject.toml) ... \u001b[?25l\u001b[?25hdone\n"
]
}
],
"source": [
"!pip install -q llama-index==0.9.21 openai==1.6.0 tiktoken==0.5.2 chromadb==0.4.21 kaleido==0.2.1 python-multipart==0.0.6 cohere==4.39"
]
},
{
"cell_type": "code",
"source": [
"import os\n",
"\n",
"# Set the \"OPENAI_API_KEY\" in the Python environment. It will be used by the OpenAI client later.\n",
"os.environ[\"OPENAI_API_KEY\"] = \"<YOUR_OPENAI_KEY>\""
],
"metadata": {
"id": "riuXwpSPcvWC"
},
"execution_count": 2,
"outputs": []
},
{
"cell_type": "code",
"source": [
"import nest_asyncio\n",
"\n",
"# Allow nested event loops, which Colab/Jupyter environments require.\n",
"nest_asyncio.apply()"
],
"metadata": {
"id": "jIEeZzqLbz0J"
},
"execution_count": 3,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"# Load a Model"
],
"metadata": {
"id": "Bkgi2OrYzF7q"
}
},
{
"cell_type": "code",
"source": [
"from llama_index.llms import OpenAI\n",
"\n",
"llm = OpenAI(temperature=0.9, model=\"gpt-3.5-turbo\", max_tokens=512)"
],
"metadata": {
"id": "9oGT6crooSSj"
},
"execution_count": 4,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"# Create a VectorStore"
],
"metadata": {
"id": "0BwVuJXlzHVL"
}
},
{
"cell_type": "code",
"source": [
"import chromadb\n",
"\n",
"# Create the client and a new collection.\n",
"# chromadb.EphemeralClient saves data in-memory instead.\n",
"chroma_client = chromadb.PersistentClient(path=\"./mini-llama-articles\")\n",
"chroma_collection = chroma_client.create_collection(\"mini-llama-articles\")"
],
"metadata": {
"id": "SQP87lHczHKc"
},
"execution_count": 5,
"outputs": []
},
{
"cell_type": "code",
"source": [
"from llama_index.vector_stores import ChromaVectorStore\n",
"\n",
"# Wrap the Chroma collection in a LlamaIndex vector store.\n",
"vector_store = ChromaVectorStore(chroma_collection=chroma_collection)"
],
"metadata": {
"id": "zAaGcYMJzHAN"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"# Load the Dataset (CSV)"
],
"metadata": {
"id": "I9JbAzFcjkpn"
}
},
{
"cell_type": "markdown",
"source": [
"## Download"
],
"metadata": {
"id": "ceveDuYdWCYk"
}
},
{
"cell_type": "markdown",
"source": [
"The dataset includes several articles from the TowardsAI blog that provide an in-depth explanation of the LLaMA2 model. Each row of the CSV file holds an article's title, content, URL, and source name."
],
"metadata": {
"id": "eZwf6pv7WFmD"
}
},
{
"cell_type": "code",
"source": [
"!wget https://raw.githubusercontent.com/AlaFalaki/tutorial_notebooks/main/data/mini-llama-articles.csv"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "wl_pbPvMlv1h",
"outputId": "5418de57-b95b-4b90-b7d0-a801ea3c73f7"
},
"execution_count": 5,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"--2024-02-13 18:53:28-- https://raw.githubusercontent.com/AlaFalaki/tutorial_notebooks/main/data/mini-llama-articles.csv\n",
"Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.109.133, 185.199.111.133, ...\n",
"Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.\n",
"HTTP request sent, awaiting response... 200 OK\n",
"Length: 173646 (170K) [text/plain]\n",
"Saving to: ‘mini-llama-articles.csv’\n",
"\n",
"mini-llama-articles 100%[===================>] 169.58K --.-KB/s in 0.09s \n",
"\n",
"2024-02-13 18:53:29 (1.89 MB/s) - ‘mini-llama-articles.csv’ saved [173646/173646]\n",
"\n"
]
}
]
},
{
"cell_type": "markdown",
"source": [
"## Read File"
],
"metadata": {
"id": "VWBLtDbUWJfA"
}
},
{
"cell_type": "code",
"source": [
"import csv\n",
"\n",
"rows = []\n",
"\n",
"# Read the CSV file row by row.\n",
"with open(\"./mini-llama-articles.csv\", mode=\"r\", encoding=\"utf-8\") as file:\n",
"    csv_reader = csv.reader(file)\n",
"\n",
"    for idx, row in enumerate(csv_reader):\n",
"        if idx == 0: continue  # Skip the header row.\n",
"        rows.append(row)\n",
"\n",
"# The number of articles (rows) in the dataset.\n",
"len(rows)"
],
"metadata": {
"id": "0Q9sxuW0g3Gd",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "801f2ba8-b498-4923-c1cc-c17d3208850c"
},
"execution_count": 6,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"14"
]
},
"metadata": {},
"execution_count": 6
}
]
},
{
"cell_type": "markdown",
"source": [
"# Convert to Document Objects"
],
"metadata": {
"id": "S17g2RYOjmf2"
}
},
{
"cell_type": "code",
"source": [
"from llama_index import Document\n",
"\n",
"# Convert the article rows into Document objects so the LlamaIndex framework can process them.\n",
"documents = [Document(text=row[1], metadata={\"title\": row[0], \"url\": row[2], \"source_name\": row[3]}) for row in rows]"
],
"metadata": {
"id": "YizvmXPejkJE"
},
"execution_count": 7,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"# Transforming"
],
"metadata": {
"id": "qjuLbmFuWsyl"
}
},
{
"cell_type": "code",
"source": [
"from llama_index.text_splitter import TokenTextSplitter\n",
"\n",
"text_splitter = TokenTextSplitter(\n",
"    separator=\" \", chunk_size=512, chunk_overlap=128\n",
")"
],
"metadata": {
"id": "9z3t70DGWsjO"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"from llama_index.extractors import (\n",
"    SummaryExtractor,\n",
"    QuestionsAnsweredExtractor,\n",
"    KeywordExtractor,\n",
")\n",
"from llama_index.embeddings import OpenAIEmbedding\n",
"from llama_index.ingestion import IngestionPipeline\n",
"\n",
"pipeline = IngestionPipeline(\n",
"    transformations=[\n",
"        text_splitter,\n",
"        QuestionsAnsweredExtractor(questions=3, llm=llm),\n",
"        SummaryExtractor(summaries=[\"prev\", \"self\"], llm=llm),\n",
"        KeywordExtractor(keywords=10, llm=llm),\n",
"        OpenAIEmbedding(),\n",
"    ],\n",
"    vector_store=vector_store\n",
")\n",
"\n",
"nodes = pipeline.run(documents=documents, show_progress=True);"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 331,
"referenced_widgets": [
"3fbabd8a8660461ba5e7bc08ef39139a",
"df2365556ae242a2ab1a119f9a31a561",
"5f4b9d32df8f446e858e4c289dc282f9",
"5b588f83a15d42d9aca888e06bbd95ff",
"ad073bca655540809e39f26538d2ec0d",
"13b9c5395bca4c3ba21265240cb936cf",
"47a4586384274577a726c57605e7f8d9",
"96a3bdece738481db57e811ccb74a974",
"5c7973afd79349ed997a69120d0629b2",
"af9b6ae927dd4764b9692507791bc67e",
"134210510d49476e959dd7d032bbdbdc",
"5f9bb065c2b74d2e8ded32e1306a7807",
"73a06bc546a64f7f99a9e4a135319dcd",
"ce48deaf4d8c49cdae92bfdbb3a78df0",
"4a172e8c6aa44e41a42fc1d9cf714fd0",
"0245f2604e4d49c8bd0210302746c47b",
"e956dfab55084a9cbe33c8e331b511e7",
"cb394578badd43a89850873ad2526542",
"193aef33d9184055bb9223f56d456de6",
"abfc9aa911ce4a5ea81c7c451f08295f",
"e7937a1bc68441a080374911a6563376",
"e532ed7bfef34f67b5fcacd9534eb789"
]
},
"id": "P9LDJ7o-Wsc-",
"outputId": "01070c1f-dffa-4ab7-ad71-b07b76b12e03"
},
"execution_count": null,
"outputs": [
{
"output_type": "display_data",
"data": {
"text/plain": [
"Parsing nodes:   0%|          | 0/14 [00:00<?, ?it/s]"
],
"application/vnd.jupyter.widget-view+json": {
"version_major": 2,
"version_minor": 0,
"model_id": "3fbabd8a8660461ba5e7bc08ef39139a"
}
},
"metadata": {}
},
{
"output_type": "stream",
"name": "stdout",
"text": [
"464\n",
"452\n",
"457\n",
"465\n",
"448\n",
"468\n",
"434\n",
"447\n",
"455\n",
"445\n",
"449\n",
"455\n",
"431\n",
"453\n"
]
},
{
"output_type": "display_data",
"data": {
"text/plain": [
"Generating embeddings:   0%|          | 0/108 [00:00<?, ?it/s]"
],
"application/vnd.jupyter.widget-view+json": {
"version_major": 2,
"version_minor": 0,
"model_id": "5f9bb065c2b74d2e8ded32e1306a7807"
}
},
"metadata": {}
}
]
},
{
"cell_type": "code",
"source": [
"len(nodes)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "mPGa85hM2P3P",
"outputId": "c106c463-2459-4b11-bbae-5bd5e2246011"
},
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"108"
]
},
"metadata": {},
"execution_count": 109
}
]
},
{
"cell_type": "code",
"source": [
"!zip -r vectorstore.zip mini-llama-articles"
],
"metadata": {
"id": "23x20bL3_jRb"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"# Load Indexes"
],
"metadata": {
"id": "OWaT6rL7ksp8"
}
},
{
"cell_type": "code",
"source": [
"!unzip vectorstore.zip"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "SodY2Xpf_kxg",
"outputId": "a6f7ae4a-447c-4222-e400-0fe55e7e26d9"
},
"execution_count": 8,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Archive: vectorstore.zip\n",
" creating: mini-llama-articles/\n",
" creating: mini-llama-articles/a361e92f-9895-41b6-ba72-4ad38e9875bd/\n",
" inflating: mini-llama-articles/a361e92f-9895-41b6-ba72-4ad38e9875bd/data_level0.bin \n",
" inflating: mini-llama-articles/a361e92f-9895-41b6-ba72-4ad38e9875bd/header.bin \n",
" extracting: mini-llama-articles/a361e92f-9895-41b6-ba72-4ad38e9875bd/link_lists.bin \n",
" inflating: mini-llama-articles/a361e92f-9895-41b6-ba72-4ad38e9875bd/length.bin \n",
" inflating: mini-llama-articles/chroma.sqlite3 \n"
]
}
]
},
{
"cell_type": "code",
"source": [
"import chromadb\n",
"from llama_index.vector_stores import ChromaVectorStore\n",
"\n",
"# Load the persisted Chroma collection and wrap it in a vector store.\n",
"db = chromadb.PersistentClient(path=\"./mini-llama-articles\")\n",
"chroma_collection = db.get_or_create_collection(\"mini-llama-articles\")\n",
"vector_store = ChromaVectorStore(chroma_collection=chroma_collection)"
],
"metadata": {
"id": "mXi56KTXk2sp"
},
"execution_count": 9,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Create your index\n",
"from llama_index import VectorStoreIndex\n",
"\n",
"vector_index = VectorStoreIndex.from_vector_store(vector_store)"
],
"metadata": {
"id": "jKXURvLtkuTS"
},
"execution_count": 10,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"# Display Result"
],
"metadata": {
"id": "q0m5rl195bcz"
}
},
{
"cell_type": "code",
"source": [
"def display_res(response):\n",
"    print(\"Response:\\n\\t\", response.response.replace(\"\\n\", \"\"))\n",
"\n",
"    print(\"Sources:\")\n",
"    if response.source_nodes:\n",
"        for src in response.source_nodes:\n",
"            print(\"\\tNode ID\\t\", src.node_id)\n",
"            print(\"\\tText\\t\", src.text)\n",
"            print(\"\\tScore\\t\", src.score)\n",
"            print(\"\\t\" + \"-_\"*20)\n",
"    else:\n",
"        print(\"\\tNo sources used!\")"
],
"metadata": {
"id": "4JpaHEmF5dSS"
},
"execution_count": 34,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"# Chat Engine"
],
"metadata": {
"id": "hbStjvUJ1cft"
}
},
{
"cell_type": "code",
"source": [
"chat_engine = vector_index.as_chat_engine()  # chat_mode=\"best\""
],
"metadata": {
"id": "kwWlDpoR1cRI"
},
"execution_count": 47,
"outputs": []
},
{
"cell_type": "code",
"source": [
"response = chat_engine.chat(\"Use the tool to answer, How many parameters LLaMA2 model has?\")\n",
"display_res(response)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "ER3Lb-oN46lJ",
"outputId": "8b34da39-622f-43f2-cb45-01a1ff37efd7"
},
"execution_count": 48,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Response:\n",
"\t The LLaMA2 model has four different sizes, with 7 billion, 13 billion, 34 billion, and 70 billion parameters.\n",
"Sources:\n",
"\tNode ID\t d6f533e5-fef8-469c-a313-def19fd38efe\n",
"\tText\t I. Llama 2: Revolutionizing Commercial Use Unlike its predecessor Llama 1, which was limited to research use, Llama 2 represents a major advancement as an open-source commercial model. Businesses can now integrate Llama 2 into products to create AI-powered applications. Availability on Azure and AWS facilitates fine-tuning and adoption. However, restrictions apply to prevent exploitation. Companies with over 700 million active daily users cannot use Llama 2. Additionally, its output cannot be used to improve other language models. II. Llama 2 Model Flavors Llama 2 is available in four different model sizes: 7 billion, 13 billion, 34 billion, and 70 billion parameters. While 7B, 13B, and 70B have already been released, the 34B model is still awaited. The pretrained variant, trained on a whopping 2 trillion tokens, boasts a context window of 4096 tokens, twice the size of its predecessor Llama 1. Meta also released a Llama 2 fine-tuned model for chat applications that was trained on over 1 million human annotations. Such extensive training comes at a cost, with the 70B model taking a staggering 1720320 GPU hours to train. The context window's length determines the amount of content the model can process at once, making Llama 2 a powerful language model in terms of scale and efficiency. III. Safety Considerations: A Top Priority for Meta Meta's commitment to safety and alignment shines through in Llama 2's design. The model demonstrates exceptionally low AI safety violation percentages, surpassing even ChatGPT in safety benchmarks. Finding the right balance between helpfulness and safety when optimizing a model poses significant challenges. While a highly helpful model may be capable of answering any question, including sensitive ones like \"How do I build a bomb?\", it also raises concerns about potential misuse. Thus, striking the perfect equilibrium between providing useful information and ensuring safety is paramount. However, prioritizing safety to an extreme extent can lead to a model that struggles to effectively address a diverse range of questions. This limitation could hinder the model's practical applicability and user experience. Thus, achieving\n",
"\tScore\t 0.7053486224746555\n",
"\t-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_\n",
"\tNode ID\t 2f3b7c34-8fd0-4134-af38-ef1b77e32cd8\n",
"\tText\t The model demonstrates exceptionally low AI safety violation percentages, surpassing even ChatGPT in safety benchmarks. Finding the right balance between helpfulness and safety when optimizing a model poses significant challenges. While a highly helpful model may be capable of answering any question, including sensitive ones like \"How do I build a bomb?\", it also raises concerns about potential misuse. Thus, striking the perfect equilibrium between providing useful information and ensuring safety is paramount. However, prioritizing safety to an extreme extent can lead to a model that struggles to effectively address a diverse range of questions. This limitation could hinder the model's practical applicability and user experience. Thus, achieving an optimum balance that allows the model to be both helpful and safe is of utmost importance. To strike the right balance between helpfulness and safety, Meta employed two reward models - one for helpfulness and another for safety - to optimize the model's responses. The 34B parameter model has reported higher safety violations than other variants, possibly contributing to the delay in its release. IV. Helpfulness Comparison: Llama 2 Outperforms Competitors Llama 2 emerges as a strong contender in the open-source language model arena, outperforming its competitors in most categories. The 70B parameter model outperforms all other open-source models, while the 7B and 34B models outshine Falcon in all categories and MPT in all categories except coding. Despite being smaller, Llam a2's performance rivals that of Chat GPT 3.5, a significantly larger closed-source model. While GPT 4 and PalM-2-L, with their larger size, outperform Llama 2, this is expected due to their capacity for handling complex language tasks. Llama 2's impressive ability to compete with larger models highlights its efficiency and potential in the market. However, Llama 2 does face challenges in coding and math problems, where models like Chat GPT 4 excel, given their significantly larger size. Chat GPT 4 performed significantly better than Llama 2 for coding (HumanEval benchmark)and math problem tasks (GSM8k benchmark). Open-source AI technologies, like Llama 2, continue to advance, offering\n",
"\tScore\t 0.7005940813082231\n",
"\t-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_\n"
]
}
]
},
{
|
1328 |
+
"cell_type": "code",
|
1329 |
+
"source": [
|
1330 |
+
"response = chat_engine.chat(\"Tell me a joke?\")\n",
|
1331 |
+
"display_res(response)"
|
1332 |
+
],
|
1333 |
+
"metadata": {
|
1334 |
+
"colab": {
|
1335 |
+
"base_uri": "https://localhost:8080/"
|
1336 |
+
},
|
1337 |
+
"id": "3RRmiJEQ5R1Q",
|
1338 |
+
"outputId": "15efcc9b-583f-4efe-8e36-fa8b5160da16"
|
1339 |
+
},
|
1340 |
+
"execution_count": 49,
|
1341 |
+
"outputs": [
|
1342 |
+
{
|
1343 |
+
"output_type": "stream",
|
1344 |
+
"name": "stdout",
|
1345 |
+
"text": [
|
1346 |
+
"Response:\n",
|
1347 |
+
"\t I'm sorry, but I don't have the capability to generate jokes. However, I'm here to help answer any questions you may have!\n",
|
1348 |
+
"Sources:\n",
|
1349 |
+
"\tNode ID\t 021c859e-809b-49b8-8d0d-38cc326c1203\n",
|
1350 |
+
"\tText\t with their larger size, outperform Llama 2, this is expected due to their capacity for handling complex language tasks. Llama 2's impressive ability to compete with larger models highlights its efficiency and potential in the market. However, Llama 2 does face challenges in coding and math problems, where models like Chat GPT 4 excel, given their significantly larger size. Chat GPT 4 performed significantly better than Llama 2 for coding (HumanEval benchmark)and math problem tasks (GSM8k benchmark). Open-source AI technologies, like Llama 2, continue to advance, offering strong competition to closed-source models. V. Ghost Attention: Enhancing Conversational Continuity One unique feature in Llama 2 is Ghost Attention, which ensures continuity in conversations. This means that even after multiple interactions, the model remembers its initial instructions, ensuring more coherent and consistent responses throughout the conversation. This feature significantly enhances the user experience and makes Llama 2 a more reliable language model for interactive applications. In the example below, on the left, it forgets to use an emoji after a few conversations. On the right, with Ghost Attention, even after having many conversations, it will remember the context and continue to use emojis in its response. VI. Temporal Capability: A Leap in Information Organization Meta reported a groundbreaking temporal capability, where the model organizes information based on time relevance. Each question posed to the model is associated with a date, and it responds accordingly by considering the event date before which the question becomes irrelevant. For example, if you ask the question, \"How long ago did Barack Obama become president?\", its only relevant after 2008. This temporal awareness allows Llama 2 to deliver more contextually accurate responses, enriching the user experience further. VII. 
Open Questions and Future Outlook Meta's open-sourcing of Llama 2 represents a seismic shift, now offering developers and researchers commercial access to a leading language model. With Llama 2 outperforming MosaicML's current MPT models, all eyes are on how Databricks will respond. Can MosaicML's next MPT iteration beat Llama 2? Is it worthwhile to compete\n",
|
1351 |
+
"\tScore\t 0.5640742259357179\n",
|
1352 |
+
"\t-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_\n",
|
1353 |
+
"\tNode ID\t 1fd37a6f-bf45-4b03-ae54-95f4c84796cb\n",
|
1354 |
+
"\tText\t and then using that to create a QA dataset was not effective. This is where the Self-instruct concept could be used However previous to Llama2, the best-performing model was the GPT 3/4 model via ChatGPT or its API and using these models to do the same was expensive. The 7 billion model of Llama2 has sufficient NLU (Natural Language Understanding) to create output based on a particular format. Running this in 4-bit mode via Quantisation makes it feasible compute-wise to run this on a large data set and convert it to a QA dataset. This was the prompt used. The context was a sliding window from the text dataset. Some minimal parsing and finetuning were done on the output of the model, and we could generate a QA dataset of the format below. This was fed to the QLoRA-based fine-tuning (Colab Notebook). We can see that the output from a fine-tuned 4-bit quantized llama2 7 B model is pretty good. Colab Notebook Trying to reduce hallucination via fine-tuning In the generated dataset, I added a specific tag `Source:8989REF`. The idea was that via attention, this token will be somehow associated with the text that we were training on. And then to use this hash somehow to tweak the prompt to control hallucination. Something like \"[INST] <<SYS>>\\nYou are a helpful Question Answering Assistant. Please only answer from this reference Source:8989REF\" However, that turned out to be a very naive attempt. Also, note that the generated QA missed transforming training data related to Professor Thiersch's method to a proper QA dataset. These and other improvements need to be experimented with, as well as to train with some completely new data that the model has not seen to test more effectively. Update: Training with new data was done by writing an imaginary story with ChatGPT help and then creating an instruction tuning data set (colab notebook). The model was then trained and tested (colab notebook) with this generated instruct dataset. The results confirm that the model learns via Instruct tuning, not only the fed questions but other details and relations of the domain. Problems with hallucinations remain (Bordor, Lila characters who are\n",
"\tScore\t 0.5601411470382146\n",
"\t-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_\n"
]
}
]
},
{
"cell_type": "code",
"source": [
"response = chat_engine.chat(\"What was the first question I asked?\")\n",
"display_res(response)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "8eOzp5Xc5Vbj",
"outputId": "13bc6714-dd89-45b3-a86b-759806245241"
},
"execution_count": 50,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Response:\n",
"\t The first question you asked was, \"How many parameters LLaMA2 model has?\"\n",
"Sources:\n",
"\tNo sources used!\n"
]
}
]
},
{
"cell_type": "code",
"source": [
"chat_engine.reset()"
],
"metadata": {
"id": "7jfiLpru5VZT"
},
"execution_count": 51,
"outputs": []
},
{
"cell_type": "code",
"source": [
"response = chat_engine.chat(\"What was the first question I asked?\")\n",
"display_res(response)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "Jt0q8RW25VXN",
"outputId": "0e2d0d4e-c0ff-48bf-8df3-478fcdc66abd"
},
"execution_count": 52,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Response:\n",
"\t The first question you asked was \"What was the first question I asked?\"\n",
"Sources:\n",
"\tNo sources used!\n"
]
}
]
},
{
"cell_type": "markdown",
"source": [
"# Streaming"
],
"metadata": {
"id": "0Egsib7yPJGR"
}
},
{
"cell_type": "code",
"source": [
"streaming_response = chat_engine.stream_chat(\"Write a paragraph about the LLaMA2 model's capabilities.\")\n",
"for token in streaming_response.response_gen:\n",
" print(token, end=\"\")"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "zanJeMbaPJcq",
"outputId": "de7f0905-c1b1-49ac-fb66-d1578da35cad"
},
"execution_count": 68,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Querying with: What are the capabilities of the LLaMA2 model?\n",
"The capabilities of the Llama 2 model include its ability to be integrated into AI-powered applications for commercial use, its availability on Azure and AWS for fine-tuning and adoption, and its impressive performance in terms of scale and efficiency. The model is available in different sizes, ranging from 7 billion to 70 billion parameters, with a context window of 4096 tokens. Llama 2 also prioritizes safety and alignment, demonstrating low AI safety violation percentages and surpassing ChatGPT in safety benchmarks. Additionally, Llama 2 has features such as Ghost Attention, which enhances conversational continuity, and a temporal capability that organizes information based on time relevance, resulting in more contextually accurate responses."
]
}
]
},
{
"cell_type": "markdown",
"source": [
"## Condense Question"
],
"metadata": {
"id": "DuRgOJ2AHMJh"
}
},
{
"cell_type": "code",
"source": [
"gpt4 = OpenAI(temperature=0.9, model=\"gpt-4\")"
],
"metadata": {
"id": "v0gmM5LGIaRl"
},
"execution_count": 57,
"outputs": []
},
{
"cell_type": "code",
"source": [
"chat_engine = vector_index.as_chat_engine(chat_mode=\"condense_question\", llm=gpt4, verbose=True)"
],
"metadata": {
"id": "EDWsaBTBIhK7"
},
"execution_count": 66,
"outputs": []
},
{
"cell_type": "code",
"source": [
"response = chat_engine.chat(\"Use the tool to answer, which company released LLaMA2 model? What is the model useful for?\")\n",
"display_res(response)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "h4c--hJ75VU2",
"outputId": "e80fd9bf-e6d5-4532-8771-8cbf781e782e"
},
"execution_count": 69,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Querying with: Which company released the LLaMA2 model and what is the model useful for?\n",
"Response:\n",
"\t Meta AI released the Llama 2 model. The model is useful for creating AI-powered applications for commercial use.\n",
"Sources:\n",
"\tNode ID\t d6f533e5-fef8-469c-a313-def19fd38efe\n",
"\tText\t I. Llama 2: Revolutionizing Commercial Use Unlike its predecessor Llama 1, which was limited to research use, Llama 2 represents a major advancement as an open-source commercial model. Businesses can now integrate Llama 2 into products to create AI-powered applications. Availability on Azure and AWS facilitates fine-tuning and adoption. However, restrictions apply to prevent exploitation. Companies with over 700 million active daily users cannot use Llama 2. Additionally, its output cannot be used to improve other language models. II. Llama 2 Model Flavors Llama 2 is available in four different model sizes: 7 billion, 13 billion, 34 billion, and 70 billion parameters. While 7B, 13B, and 70B have already been released, the 34B model is still awaited. The pretrained variant, trained on a whopping 2 trillion tokens, boasts a context window of 4096 tokens, twice the size of its predecessor Llama 1. Meta also released a Llama 2 fine-tuned model for chat applications that was trained on over 1 million human annotations. Such extensive training comes at a cost, with the 70B model taking a staggering 1720320 GPU hours to train. The context window's length determines the amount of content the model can process at once, making Llama 2 a powerful language model in terms of scale and efficiency. III. Safety Considerations: A Top Priority for Meta Meta's commitment to safety and alignment shines through in Llama 2's design. The model demonstrates exceptionally low AI safety violation percentages, surpassing even ChatGPT in safety benchmarks. Finding the right balance between helpfulness and safety when optimizing a model poses significant challenges. While a highly helpful model may be capable of answering any question, including sensitive ones like \"How do I build a bomb?\", it also raises concerns about potential misuse. Thus, striking the perfect equilibrium between providing useful information and ensuring safety is paramount. However, prioritizing safety to an extreme extent can lead to a model that struggles to effectively address a diverse range of questions. This limitation could hinder the model's practical applicability and user experience. Thus, achieving\n",
"\tScore\t 0.700217415077618\n",
"\t-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_\n",
"\tNode ID\t 2f3b7c34-8fd0-4134-af38-ef1b77e32cd8\n",
"\tText\t The model demonstrates exceptionally low AI safety violation percentages, surpassing even ChatGPT in safety benchmarks. Finding the right balance between helpfulness and safety when optimizing a model poses significant challenges. While a highly helpful model may be capable of answering any question, including sensitive ones like \"How do I build a bomb?\", it also raises concerns about potential misuse. Thus, striking the perfect equilibrium between providing useful information and ensuring safety is paramount. However, prioritizing safety to an extreme extent can lead to a model that struggles to effectively address a diverse range of questions. This limitation could hinder the model's practical applicability and user experience. Thus, achieving an optimum balance that allows the model to be both helpful and safe is of utmost importance. To strike the right balance between helpfulness and safety, Meta employed two reward models - one for helpfulness and another for safety - to optimize the model's responses. The 34B parameter model has reported higher safety violations than other variants, possibly contributing to the delay in its release. IV. Helpfulness Comparison: Llama 2 Outperforms Competitors Llama 2 emerges as a strong contender in the open-source language model arena, outperforming its competitors in most categories. The 70B parameter model outperforms all other open-source models, while the 7B and 34B models outshine Falcon in all categories and MPT in all categories except coding. Despite being smaller, Llam a2's performance rivals that of Chat GPT 3.5, a significantly larger closed-source model. While GPT 4 and PalM-2-L, with their larger size, outperform Llama 2, this is expected due to their capacity for handling complex language tasks. Llama 2's impressive ability to compete with larger models highlights its efficiency and potential in the market. However, Llama 2 does face challenges in coding and math problems, where models like Chat GPT 4 excel, given their significantly larger size. Chat GPT 4 performed significantly better than Llama 2 for coding (HumanEval benchmark)and math problem tasks (GSM8k benchmark). Open-source AI technologies, like Llama 2, continue to advance, offering\n",
"\tScore\t 0.6920247591251928\n",
"\t-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_\n"
]
}
]
},
{
"cell_type": "markdown",
"source": [
"## ReAct"
],
"metadata": {
"id": "ysL9ONePOsGB"
}
},
{
"cell_type": "code",
"source": [
"chat_engine = vector_index.as_chat_engine(chat_mode=\"react\", verbose=True)"
],
"metadata": {
"id": "-M1jWoKXOs2t"
},
"execution_count": 70,
"outputs": []
},
{
"cell_type": "code",
"source": [
"response = chat_engine.chat(\"Which company released LLaMA2 model? What is the model useful for?\")"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "UZkEW1SSOs0H",
"outputId": "4869c5fc-e0e1-44c6-e7f0-87db92bb2eb6"
},
"execution_count": 71,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"\u001b[1;3;38;5;200mThought: I need to use a tool to help me answer the question.\n",
"Action: query_engine_tool\n",
"Action Input: {'input': 'Which company released LLaMA2 model?'}\n",
"\u001b[0m\u001b[1;3;34mObservation: Meta released the LLaMA2 model.\n",
"\u001b[0m\u001b[1;3;38;5;200mThought: I need to use a tool to help me answer the second question.\n",
"Action: query_engine_tool\n",
"Action Input: {'input': 'What is the LLaMA2 model useful for?'}\n",
"\u001b[0m\u001b[1;3;34mObservation: The LLaMA2 model is useful for creating AI-powered applications in commercial settings. It can be integrated into products to enable businesses to develop AI-powered applications.\n",
"\u001b[0m\u001b[1;3;38;5;200mThought: I can answer without using any more tools.\n",
"Response: The LLaMA2 model was released by Meta. It is useful for creating AI-powered applications in commercial settings and can be integrated into products to enable businesses to develop AI-powered applications.\n",
"\u001b[0m"
]
}
]
},
{
"cell_type": "code",
"source": [
"display_res(response)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "eW5P1lD4Osxf",
"outputId": "b128bc94-081b-49aa-c549-7d7d7be90b63"
},
"execution_count": 72,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Response:\n",
"\t The LLaMA2 model was released by Meta. It is useful for creating AI-powered applications in commercial settings and can be integrated into products to enable businesses to develop AI-powered applications.\n",
"Sources:\n",
"\tNode ID\t 8aa510a2-b741-4d55-b661-366c3c5cb681\n",
"\tText\t the question, \"How long ago did Barack Obama become president?\", its only relevant after 2008. This temporal awareness allows Llama 2 to deliver more contextually accurate responses, enriching the user experience further. VII. Open Questions and Future Outlook Meta's open-sourcing of Llama 2 represents a seismic shift, now offering developers and researchers commercial access to a leading language model. With Llama 2 outperforming MosaicML's current MPT models, all eyes are on how Databricks will respond. Can MosaicML's next MPT iteration beat Llama 2? Is it worthwhile to compete with Llama 2 or join hands with the open-source community to make the open-source models better? Meanwhile, Microsoft's move to host Llama 2 on Azure despite having significant investment in ChatGPT raises interesting questions. Will users prefer the capabilities and transparency of an open-source model like Llama 2 over closed, proprietary options? The stakes are high, as Meta's bold democratization play stands to reshape preferences and partnerships in the AI space. One thing is certain - the era of open language model competition has begun. VIII. Conclusion With the launch of Llama 2, Meta has achieved a landmark breakthrough in open-source language models, unleashing new potential through its commercial accessibility. Llama 2's formidable capabilities in natural language processing, along with robust safety protocols and temporal reasoning, set new benchmarks for the field. While select limitations around math and coding exist presently, Llama 2's strengths far outweigh its weaknesses. As Meta continues honing Llama technology, this latest innovation promises to be truly transformative. By open-sourcing such an advanced model, Meta is propelling democratization and proliferation of AI across industries. From healthcare to education and beyond, Llama 2 stands to shape the landscape by putting groundbreaking language modeling into the hands of all developers and researchers. The possibilities unlocked by this open-source approach signal a shift towards a more collaborative, creative AI future.\n",
"\tScore\t 0.6697124345945474\n",
"\t-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_\n",
"\tNode ID\t 6906e3b8-4c42-453c-9b60-9f5e4b1d3304\n",
"\tText\t LLaMA: Meta's new AI tool According to the official release, LLaMA is a foundational language model developed to assist 'researchers and academics' in their work (as opposed to the average web user) to understand and study these NLP models. Leveraging AI in such a way could give researchers an edge in terms of time spent. You may not know this, but this would be Meta's third LLM after Blender Bot 3 and Galactica. However, the two LLMs were shut down soon, and Meta stopped their further development, as it produced erroneous results. Before moving further, it is important to emphasize that LLaMA is NOT a chatbot like ChatGPT. As I mentioned before, it is a 'research tool' for researchers. We can expect the initial versions of LLaMA to be a bit more technical and indirect to use as opposed to the case with ChatGPT, which was very direct, interactive, and a lot easy to use. \"Smaller, more performant models such as LLaMA enable ... research community who don't have access to large amounts of infrastructure to study these models.. further democratizing access in this important, fast-changing field,\" said Meta in its official blog. Meta's effort of \"democratizing\" access to the public could shed light on one of the critical issues of Generative AI - toxicity and bias. ChatGPT and other LLMs (obviously, I am referring to Bing) have a track record of responding in a way that is toxic and, well... evil. The Verge and major critics have covered it in much detail. Oh and the community did get the access, but not in the way Meta anticipated. On March 3rd, a downloadable torrent of the LLaMA system was posted on 4chan. 4chan is an anonymous online forum known for its controversial content and diverse range of discussions, which has nearly 222 million unique monthly visitors. LLaMA is currently not in use on any of Meta's products. But Meta has plans to make it available to researchers before they can use them in their own products. It's worth mentioning that Meta did not release\n",
"\tScore\t 0.6657835615415298\n",
"\t-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_\n",
"\tNode ID\t 2f3b7c34-8fd0-4134-af38-ef1b77e32cd8\n",
"\tText\t The model demonstrates exceptionally low AI safety violation percentages, surpassing even ChatGPT in safety benchmarks. Finding the right balance between helpfulness and safety when optimizing a model poses significant challenges. While a highly helpful model may be capable of answering any question, including sensitive ones like \"How do I build a bomb?\", it also raises concerns about potential misuse. Thus, striking the perfect equilibrium between providing useful information and ensuring safety is paramount. However, prioritizing safety to an extreme extent can lead to a model that struggles to effectively address a diverse range of questions. This limitation could hinder the model's practical applicability and user experience. Thus, achieving an optimum balance that allows the model to be both helpful and safe is of utmost importance. To strike the right balance between helpfulness and safety, Meta employed two reward models - one for helpfulness and another for safety - to optimize the model's responses. The 34B parameter model has reported higher safety violations than other variants, possibly contributing to the delay in its release. IV. Helpfulness Comparison: Llama 2 Outperforms Competitors Llama 2 emerges as a strong contender in the open-source language model arena, outperforming its competitors in most categories. The 70B parameter model outperforms all other open-source models, while the 7B and 34B models outshine Falcon in all categories and MPT in all categories except coding. Despite being smaller, Llam a2's performance rivals that of Chat GPT 3.5, a significantly larger closed-source model. While GPT 4 and PalM-2-L, with their larger size, outperform Llama 2, this is expected due to their capacity for handling complex language tasks. Llama 2's impressive ability to compete with larger models highlights its efficiency and potential in the market. However, Llama 2 does face challenges in coding and math problems, where models like Chat GPT 4 excel, given their significantly larger size. Chat GPT 4 performed significantly better than Llama 2 for coding (HumanEval benchmark)and math problem tasks (GSM8k benchmark). Open-source AI technologies, like Llama 2, continue to advance, offering\n",
"\tScore\t 0.7146111187017354\n",
"\t-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_\n",
"\tNode ID\t d6f533e5-fef8-469c-a313-def19fd38efe\n",
"\tText\t I. Llama 2: Revolutionizing Commercial Use Unlike its predecessor Llama 1, which was limited to research use, Llama 2 represents a major advancement as an open-source commercial model. Businesses can now integrate Llama 2 into products to create AI-powered applications. Availability on Azure and AWS facilitates fine-tuning and adoption. However, restrictions apply to prevent exploitation. Companies with over 700 million active daily users cannot use Llama 2. Additionally, its output cannot be used to improve other language models. II. Llama 2 Model Flavors Llama 2 is available in four different model sizes: 7 billion, 13 billion, 34 billion, and 70 billion parameters. While 7B, 13B, and 70B have already been released, the 34B model is still awaited. The pretrained variant, trained on a whopping 2 trillion tokens, boasts a context window of 4096 tokens, twice the size of its predecessor Llama 1. Meta also released a Llama 2 fine-tuned model for chat applications that was trained on over 1 million human annotations. Such extensive training comes at a cost, with the 70B model taking a staggering 1720320 GPU hours to train. The context window's length determines the amount of content the model can process at once, making Llama 2 a powerful language model in terms of scale and efficiency. III. Safety Considerations: A Top Priority for Meta Meta's commitment to safety and alignment shines through in Llama 2's design. The model demonstrates exceptionally low AI safety violation percentages, surpassing even ChatGPT in safety benchmarks. Finding the right balance between helpfulness and safety when optimizing a model poses significant challenges. While a highly helpful model may be capable of answering any question, including sensitive ones like \"How do I build a bomb?\", it also raises concerns about potential misuse. Thus, striking the perfect equilibrium between providing useful information and ensuring safety is paramount. However, prioritizing safety to an extreme extent can lead to a model that struggles to effectively address a diverse range of questions. This limitation could hinder the model's practical applicability and user experience. Thus, achieving\n",
"\tScore\t 0.712330081122207\n",
"\t-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_\n"
]
}
]
},
{
"cell_type": "code",
"source": [],
"metadata": {
"id": "zf6r2AmFOsca"
},
"execution_count": null,
"outputs": []
}
]
}