Created using Colaboratory
notebooks/12-Improve_Query.ipynb +1834 -0
notebooks/12-Improve_Query.ipynb
ADDED
@@ -0,0 +1,1834 @@
1 |
+
{
|
2 |
+
"nbformat": 4,
|
3 |
+
"nbformat_minor": 0,
|
4 |
+
"metadata": {
|
5 |
+
"colab": {
|
6 |
+
"provenance": [],
|
7 |
+
"authorship_tag": "ABX9TyMcBonOXFUEEHJsKREchiOp",
|
8 |
+
"include_colab_link": true
|
9 |
+
},
|
10 |
+
"kernelspec": {
|
11 |
+
"name": "python3",
|
12 |
+
"display_name": "Python 3"
|
13 |
+
},
|
14 |
+
"language_info": {
|
15 |
+
"name": "python"
|
16 |
+
},
|
17 |
+
"widgets": {
|
18 |
+
"application/vnd.jupyter.widget-state+json": {
|
19 |
+
"3fbabd8a8660461ba5e7bc08ef39139a": {
|
20 |
+
"model_module": "@jupyter-widgets/controls",
|
21 |
+
"model_name": "HBoxModel",
|
22 |
+
"model_module_version": "1.5.0",
|
23 |
+
"state": {
|
24 |
+
"_dom_classes": [],
|
25 |
+
"_model_module": "@jupyter-widgets/controls",
|
26 |
+
"_model_module_version": "1.5.0",
|
27 |
+
"_model_name": "HBoxModel",
|
28 |
+
"_view_count": null,
|
29 |
+
"_view_module": "@jupyter-widgets/controls",
|
30 |
+
"_view_module_version": "1.5.0",
|
31 |
+
"_view_name": "HBoxView",
|
32 |
+
"box_style": "",
|
33 |
+
"children": [
|
34 |
+
"IPY_MODEL_df2365556ae242a2ab1a119f9a31a561",
|
35 |
+
"IPY_MODEL_5f4b9d32df8f446e858e4c289dc282f9",
|
36 |
+
"IPY_MODEL_5b588f83a15d42d9aca888e06bbd95ff"
|
37 |
+
],
|
38 |
+
"layout": "IPY_MODEL_ad073bca655540809e39f26538d2ec0d"
|
39 |
+
}
|
40 |
+
},
|
41 |
+
"df2365556ae242a2ab1a119f9a31a561": {
|
42 |
+
"model_module": "@jupyter-widgets/controls",
|
43 |
+
"model_name": "HTMLModel",
|
44 |
+
"model_module_version": "1.5.0",
|
45 |
+
"state": {
|
46 |
+
"_dom_classes": [],
|
47 |
+
"_model_module": "@jupyter-widgets/controls",
|
48 |
+
"_model_module_version": "1.5.0",
|
49 |
+
"_model_name": "HTMLModel",
|
50 |
+
"_view_count": null,
|
51 |
+
"_view_module": "@jupyter-widgets/controls",
|
52 |
+
"_view_module_version": "1.5.0",
|
53 |
+
"_view_name": "HTMLView",
|
54 |
+
"description": "",
|
55 |
+
"description_tooltip": null,
|
56 |
+
"layout": "IPY_MODEL_13b9c5395bca4c3ba21265240cb936cf",
|
57 |
+
"placeholder": "β",
|
58 |
+
"style": "IPY_MODEL_47a4586384274577a726c57605e7f8d9",
|
59 |
+
"value": "Parsing nodes: 100%"
|
60 |
+
}
|
61 |
+
},
|
62 |
+
"5f4b9d32df8f446e858e4c289dc282f9": {
|
63 |
+
"model_module": "@jupyter-widgets/controls",
|
64 |
+
"model_name": "FloatProgressModel",
|
65 |
+
"model_module_version": "1.5.0",
|
66 |
+
"state": {
|
67 |
+
"_dom_classes": [],
|
68 |
+
"_model_module": "@jupyter-widgets/controls",
|
69 |
+
"_model_module_version": "1.5.0",
|
70 |
+
"_model_name": "FloatProgressModel",
|
71 |
+
"_view_count": null,
|
72 |
+
"_view_module": "@jupyter-widgets/controls",
|
73 |
+
"_view_module_version": "1.5.0",
|
74 |
+
"_view_name": "ProgressView",
|
75 |
+
"bar_style": "success",
|
76 |
+
"description": "",
|
77 |
+
"description_tooltip": null,
|
78 |
+
"layout": "IPY_MODEL_96a3bdece738481db57e811ccb74a974",
|
79 |
+
"max": 14,
|
80 |
+
"min": 0,
|
81 |
+
"orientation": "horizontal",
|
82 |
+
"style": "IPY_MODEL_5c7973afd79349ed997a69120d0629b2",
|
83 |
+
"value": 14
|
84 |
+
}
|
85 |
+
},
|
86 |
+
"5b588f83a15d42d9aca888e06bbd95ff": {
|
87 |
+
"model_module": "@jupyter-widgets/controls",
|
88 |
+
"model_name": "HTMLModel",
|
89 |
+
"model_module_version": "1.5.0",
|
90 |
+
"state": {
|
91 |
+
"_dom_classes": [],
|
92 |
+
"_model_module": "@jupyter-widgets/controls",
|
93 |
+
"_model_module_version": "1.5.0",
|
94 |
+
"_model_name": "HTMLModel",
|
95 |
+
"_view_count": null,
|
96 |
+
"_view_module": "@jupyter-widgets/controls",
|
97 |
+
"_view_module_version": "1.5.0",
|
98 |
+
"_view_name": "HTMLView",
|
99 |
+
"description": "",
|
100 |
+
"description_tooltip": null,
|
101 |
+
"layout": "IPY_MODEL_af9b6ae927dd4764b9692507791bc67e",
|
102 |
+
"placeholder": "β",
|
103 |
+
"style": "IPY_MODEL_134210510d49476e959dd7d032bbdbdc",
|
104 |
+
"value": " 14/14 [00:00<00:00, 21.41it/s]"
|
105 |
+
}
|
106 |
+
},
|
107 |
+
"ad073bca655540809e39f26538d2ec0d": {
|
108 |
+
"model_module": "@jupyter-widgets/base",
|
109 |
+
"model_name": "LayoutModel",
|
110 |
+
"model_module_version": "1.2.0",
|
111 |
+
"state": {
|
112 |
+
"_model_module": "@jupyter-widgets/base",
|
113 |
+
"_model_module_version": "1.2.0",
|
114 |
+
"_model_name": "LayoutModel",
|
115 |
+
"_view_count": null,
|
116 |
+
"_view_module": "@jupyter-widgets/base",
|
117 |
+
"_view_module_version": "1.2.0",
|
118 |
+
"_view_name": "LayoutView",
|
119 |
+
"align_content": null,
|
120 |
+
"align_items": null,
|
121 |
+
"align_self": null,
|
122 |
+
"border": null,
|
123 |
+
"bottom": null,
|
124 |
+
"display": null,
|
125 |
+
"flex": null,
|
126 |
+
"flex_flow": null,
|
127 |
+
"grid_area": null,
|
128 |
+
"grid_auto_columns": null,
|
129 |
+
"grid_auto_flow": null,
|
130 |
+
"grid_auto_rows": null,
|
131 |
+
"grid_column": null,
|
132 |
+
"grid_gap": null,
|
133 |
+
"grid_row": null,
|
134 |
+
"grid_template_areas": null,
|
135 |
+
"grid_template_columns": null,
|
136 |
+
"grid_template_rows": null,
|
137 |
+
"height": null,
|
138 |
+
"justify_content": null,
|
139 |
+
"justify_items": null,
|
140 |
+
"left": null,
|
141 |
+
"margin": null,
|
142 |
+
"max_height": null,
|
143 |
+
"max_width": null,
|
144 |
+
"min_height": null,
|
145 |
+
"min_width": null,
|
146 |
+
"object_fit": null,
|
147 |
+
"object_position": null,
|
148 |
+
"order": null,
|
149 |
+
"overflow": null,
|
150 |
+
"overflow_x": null,
|
151 |
+
"overflow_y": null,
|
152 |
+
"padding": null,
|
153 |
+
"right": null,
|
154 |
+
"top": null,
|
155 |
+
"visibility": null,
|
156 |
+
"width": null
|
157 |
+
}
|
158 |
+
},
|
159 |
+
"13b9c5395bca4c3ba21265240cb936cf": {
|
160 |
+
"model_module": "@jupyter-widgets/base",
|
161 |
+
"model_name": "LayoutModel",
|
162 |
+
"model_module_version": "1.2.0",
|
163 |
+
"state": {
|
164 |
+
"_model_module": "@jupyter-widgets/base",
|
165 |
+
"_model_module_version": "1.2.0",
|
166 |
+
"_model_name": "LayoutModel",
|
167 |
+
"_view_count": null,
|
168 |
+
"_view_module": "@jupyter-widgets/base",
|
169 |
+
"_view_module_version": "1.2.0",
|
170 |
+
"_view_name": "LayoutView",
|
171 |
+
"align_content": null,
|
172 |
+
"align_items": null,
|
173 |
+
"align_self": null,
|
174 |
+
"border": null,
|
175 |
+
"bottom": null,
|
176 |
+
"display": null,
|
177 |
+
"flex": null,
|
178 |
+
"flex_flow": null,
|
179 |
+
"grid_area": null,
|
180 |
+
"grid_auto_columns": null,
|
181 |
+
"grid_auto_flow": null,
|
182 |
+
"grid_auto_rows": null,
|
183 |
+
"grid_column": null,
|
184 |
+
"grid_gap": null,
|
185 |
+
"grid_row": null,
|
186 |
+
"grid_template_areas": null,
|
187 |
+
"grid_template_columns": null,
|
188 |
+
"grid_template_rows": null,
|
189 |
+
"height": null,
|
190 |
+
"justify_content": null,
|
191 |
+
"justify_items": null,
|
192 |
+
"left": null,
|
193 |
+
"margin": null,
|
194 |
+
"max_height": null,
|
195 |
+
"max_width": null,
|
196 |
+
"min_height": null,
|
197 |
+
"min_width": null,
|
198 |
+
"object_fit": null,
|
199 |
+
"object_position": null,
|
200 |
+
"order": null,
|
201 |
+
"overflow": null,
|
202 |
+
"overflow_x": null,
|
203 |
+
"overflow_y": null,
|
204 |
+
"padding": null,
|
205 |
+
"right": null,
|
206 |
+
"top": null,
|
207 |
+
"visibility": null,
|
208 |
+
"width": null
|
209 |
+
}
|
210 |
+
},
|
211 |
+
"47a4586384274577a726c57605e7f8d9": {
|
212 |
+
"model_module": "@jupyter-widgets/controls",
|
213 |
+
"model_name": "DescriptionStyleModel",
|
214 |
+
"model_module_version": "1.5.0",
|
215 |
+
"state": {
|
216 |
+
"_model_module": "@jupyter-widgets/controls",
|
217 |
+
"_model_module_version": "1.5.0",
|
218 |
+
"_model_name": "DescriptionStyleModel",
|
219 |
+
"_view_count": null,
|
220 |
+
"_view_module": "@jupyter-widgets/base",
|
221 |
+
"_view_module_version": "1.2.0",
|
222 |
+
"_view_name": "StyleView",
|
223 |
+
"description_width": ""
|
224 |
+
}
|
225 |
+
},
|
226 |
+
"96a3bdece738481db57e811ccb74a974": {
|
227 |
+
"model_module": "@jupyter-widgets/base",
|
228 |
+
"model_name": "LayoutModel",
|
229 |
+
"model_module_version": "1.2.0",
|
230 |
+
"state": {
|
231 |
+
"_model_module": "@jupyter-widgets/base",
|
232 |
+
"_model_module_version": "1.2.0",
|
233 |
+
"_model_name": "LayoutModel",
|
234 |
+
"_view_count": null,
|
235 |
+
"_view_module": "@jupyter-widgets/base",
|
236 |
+
"_view_module_version": "1.2.0",
|
237 |
+
"_view_name": "LayoutView",
|
238 |
+
"align_content": null,
|
239 |
+
"align_items": null,
|
240 |
+
"align_self": null,
|
241 |
+
"border": null,
|
242 |
+
"bottom": null,
|
243 |
+
"display": null,
|
244 |
+
"flex": null,
|
245 |
+
"flex_flow": null,
|
246 |
+
"grid_area": null,
|
247 |
+
"grid_auto_columns": null,
|
248 |
+
"grid_auto_flow": null,
|
249 |
+
"grid_auto_rows": null,
|
250 |
+
"grid_column": null,
|
251 |
+
"grid_gap": null,
|
252 |
+
"grid_row": null,
|
253 |
+
"grid_template_areas": null,
|
254 |
+
"grid_template_columns": null,
|
255 |
+
"grid_template_rows": null,
|
256 |
+
"height": null,
|
257 |
+
"justify_content": null,
|
258 |
+
"justify_items": null,
|
259 |
+
"left": null,
|
260 |
+
"margin": null,
|
261 |
+
"max_height": null,
|
262 |
+
"max_width": null,
|
263 |
+
"min_height": null,
|
264 |
+
"min_width": null,
|
265 |
+
"object_fit": null,
|
266 |
+
"object_position": null,
|
267 |
+
"order": null,
|
268 |
+
"overflow": null,
|
269 |
+
"overflow_x": null,
|
270 |
+
"overflow_y": null,
|
271 |
+
"padding": null,
|
272 |
+
"right": null,
|
273 |
+
"top": null,
|
274 |
+
"visibility": null,
|
275 |
+
"width": null
|
276 |
+
}
|
277 |
+
},
|
278 |
+
"5c7973afd79349ed997a69120d0629b2": {
|
279 |
+
"model_module": "@jupyter-widgets/controls",
|
280 |
+
"model_name": "ProgressStyleModel",
|
281 |
+
"model_module_version": "1.5.0",
|
282 |
+
"state": {
|
283 |
+
"_model_module": "@jupyter-widgets/controls",
|
284 |
+
"_model_module_version": "1.5.0",
|
285 |
+
"_model_name": "ProgressStyleModel",
|
286 |
+
"_view_count": null,
|
287 |
+
"_view_module": "@jupyter-widgets/base",
|
288 |
+
"_view_module_version": "1.2.0",
|
289 |
+
"_view_name": "StyleView",
|
290 |
+
"bar_color": null,
|
291 |
+
"description_width": ""
|
292 |
+
}
|
293 |
+
},
|
294 |
+
"af9b6ae927dd4764b9692507791bc67e": {
|
295 |
+
"model_module": "@jupyter-widgets/base",
|
296 |
+
"model_name": "LayoutModel",
|
297 |
+
"model_module_version": "1.2.0",
|
298 |
+
"state": {
|
299 |
+
"_model_module": "@jupyter-widgets/base",
|
300 |
+
"_model_module_version": "1.2.0",
|
301 |
+
"_model_name": "LayoutModel",
|
302 |
+
"_view_count": null,
|
303 |
+
"_view_module": "@jupyter-widgets/base",
|
304 |
+
"_view_module_version": "1.2.0",
|
305 |
+
"_view_name": "LayoutView",
|
306 |
+
"align_content": null,
|
307 |
+
"align_items": null,
|
308 |
+
"align_self": null,
|
309 |
+
"border": null,
|
310 |
+
"bottom": null,
|
311 |
+
"display": null,
|
312 |
+
"flex": null,
|
313 |
+
"flex_flow": null,
|
314 |
+
"grid_area": null,
|
315 |
+
"grid_auto_columns": null,
|
316 |
+
"grid_auto_flow": null,
|
317 |
+
"grid_auto_rows": null,
|
318 |
+
"grid_column": null,
|
319 |
+
"grid_gap": null,
|
320 |
+
"grid_row": null,
|
321 |
+
"grid_template_areas": null,
|
322 |
+
"grid_template_columns": null,
|
323 |
+
"grid_template_rows": null,
|
324 |
+
"height": null,
|
325 |
+
"justify_content": null,
|
326 |
+
"justify_items": null,
|
327 |
+
"left": null,
|
328 |
+
"margin": null,
|
329 |
+
"max_height": null,
|
330 |
+
"max_width": null,
|
331 |
+
"min_height": null,
|
332 |
+
"min_width": null,
|
333 |
+
"object_fit": null,
|
334 |
+
"object_position": null,
|
335 |
+
"order": null,
|
336 |
+
"overflow": null,
|
337 |
+
"overflow_x": null,
|
338 |
+
"overflow_y": null,
|
339 |
+
"padding": null,
|
340 |
+
"right": null,
|
341 |
+
"top": null,
|
342 |
+
"visibility": null,
|
343 |
+
"width": null
|
344 |
+
}
|
345 |
+
},
|
346 |
+
"134210510d49476e959dd7d032bbdbdc": {
|
347 |
+
"model_module": "@jupyter-widgets/controls",
|
348 |
+
"model_name": "DescriptionStyleModel",
|
349 |
+
"model_module_version": "1.5.0",
|
350 |
+
"state": {
|
351 |
+
"_model_module": "@jupyter-widgets/controls",
|
352 |
+
"_model_module_version": "1.5.0",
|
353 |
+
"_model_name": "DescriptionStyleModel",
|
354 |
+
"_view_count": null,
|
355 |
+
"_view_module": "@jupyter-widgets/base",
|
356 |
+
"_view_module_version": "1.2.0",
|
357 |
+
"_view_name": "StyleView",
|
358 |
+
"description_width": ""
|
359 |
+
}
|
360 |
+
},
|
361 |
+
"5f9bb065c2b74d2e8ded32e1306a7807": {
|
362 |
+
"model_module": "@jupyter-widgets/controls",
|
363 |
+
"model_name": "HBoxModel",
|
364 |
+
"model_module_version": "1.5.0",
|
365 |
+
"state": {
|
366 |
+
"_dom_classes": [],
|
367 |
+
"_model_module": "@jupyter-widgets/controls",
|
368 |
+
"_model_module_version": "1.5.0",
|
369 |
+
"_model_name": "HBoxModel",
|
370 |
+
"_view_count": null,
|
371 |
+
"_view_module": "@jupyter-widgets/controls",
|
372 |
+
"_view_module_version": "1.5.0",
|
373 |
+
"_view_name": "HBoxView",
|
374 |
+
"box_style": "",
|
375 |
+
"children": [
|
376 |
+
"IPY_MODEL_73a06bc546a64f7f99a9e4a135319dcd",
|
377 |
+
"IPY_MODEL_ce48deaf4d8c49cdae92bfdbb3a78df0",
|
378 |
+
"IPY_MODEL_4a172e8c6aa44e41a42fc1d9cf714fd0"
|
379 |
+
],
|
380 |
+
"layout": "IPY_MODEL_0245f2604e4d49c8bd0210302746c47b"
|
381 |
+
}
|
382 |
+
},
|
383 |
+
"73a06bc546a64f7f99a9e4a135319dcd": {
|
384 |
+
"model_module": "@jupyter-widgets/controls",
|
385 |
+
"model_name": "HTMLModel",
|
386 |
+
"model_module_version": "1.5.0",
|
387 |
+
"state": {
|
388 |
+
"_dom_classes": [],
|
389 |
+
"_model_module": "@jupyter-widgets/controls",
|
390 |
+
"_model_module_version": "1.5.0",
|
391 |
+
"_model_name": "HTMLModel",
|
392 |
+
"_view_count": null,
|
393 |
+
"_view_module": "@jupyter-widgets/controls",
|
394 |
+
"_view_module_version": "1.5.0",
|
395 |
+
"_view_name": "HTMLView",
|
396 |
+
"description": "",
|
397 |
+
"description_tooltip": null,
|
398 |
+
"layout": "IPY_MODEL_e956dfab55084a9cbe33c8e331b511e7",
|
399 |
+
"placeholder": "β",
|
400 |
+
"style": "IPY_MODEL_cb394578badd43a89850873ad2526542",
|
401 |
+
"value": "Generating embeddings: 100%"
|
402 |
+
}
|
403 |
+
},
|
404 |
+
"ce48deaf4d8c49cdae92bfdbb3a78df0": {
|
405 |
+
"model_module": "@jupyter-widgets/controls",
|
406 |
+
"model_name": "FloatProgressModel",
|
407 |
+
"model_module_version": "1.5.0",
|
408 |
+
"state": {
|
409 |
+
"_dom_classes": [],
|
410 |
+
"_model_module": "@jupyter-widgets/controls",
|
411 |
+
"_model_module_version": "1.5.0",
|
412 |
+
"_model_name": "FloatProgressModel",
|
413 |
+
"_view_count": null,
|
414 |
+
"_view_module": "@jupyter-widgets/controls",
|
415 |
+
"_view_module_version": "1.5.0",
|
416 |
+
"_view_name": "ProgressView",
|
417 |
+
"bar_style": "success",
|
418 |
+
"description": "",
|
419 |
+
"description_tooltip": null,
|
420 |
+
"layout": "IPY_MODEL_193aef33d9184055bb9223f56d456de6",
|
421 |
+
"max": 108,
|
422 |
+
"min": 0,
|
423 |
+
"orientation": "horizontal",
|
424 |
+
"style": "IPY_MODEL_abfc9aa911ce4a5ea81c7c451f08295f",
|
425 |
+
"value": 108
|
426 |
+
}
|
427 |
+
},
|
428 |
+
"4a172e8c6aa44e41a42fc1d9cf714fd0": {
|
429 |
+
"model_module": "@jupyter-widgets/controls",
|
430 |
+
"model_name": "HTMLModel",
|
431 |
+
"model_module_version": "1.5.0",
|
432 |
+
"state": {
|
433 |
+
"_dom_classes": [],
|
434 |
+
"_model_module": "@jupyter-widgets/controls",
|
435 |
+
"_model_module_version": "1.5.0",
|
436 |
+
"_model_name": "HTMLModel",
|
437 |
+
"_view_count": null,
|
438 |
+
"_view_module": "@jupyter-widgets/controls",
|
439 |
+
"_view_module_version": "1.5.0",
|
440 |
+
"_view_name": "HTMLView",
|
441 |
+
"description": "",
|
442 |
+
"description_tooltip": null,
|
443 |
+
"layout": "IPY_MODEL_e7937a1bc68441a080374911a6563376",
|
444 |
+
"placeholder": "β",
|
445 |
+
"style": "IPY_MODEL_e532ed7bfef34f67b5fcacd9534eb789",
|
446 |
+
"value": " 108/108 [00:03<00:00, 33.70it/s]"
|
447 |
+
}
|
448 |
+
},
|
449 |
+
"0245f2604e4d49c8bd0210302746c47b": {
|
450 |
+
"model_module": "@jupyter-widgets/base",
|
451 |
+
"model_name": "LayoutModel",
|
452 |
+
"model_module_version": "1.2.0",
|
453 |
+
"state": {
|
454 |
+
"_model_module": "@jupyter-widgets/base",
|
455 |
+
"_model_module_version": "1.2.0",
|
456 |
+
"_model_name": "LayoutModel",
|
457 |
+
"_view_count": null,
|
458 |
+
"_view_module": "@jupyter-widgets/base",
|
459 |
+
"_view_module_version": "1.2.0",
|
460 |
+
"_view_name": "LayoutView",
|
461 |
+
"align_content": null,
|
462 |
+
"align_items": null,
|
463 |
+
"align_self": null,
|
464 |
+
"border": null,
|
465 |
+
"bottom": null,
|
466 |
+
"display": null,
|
467 |
+
"flex": null,
|
468 |
+
"flex_flow": null,
|
469 |
+
"grid_area": null,
|
470 |
+
"grid_auto_columns": null,
|
471 |
+
"grid_auto_flow": null,
|
472 |
+
"grid_auto_rows": null,
|
473 |
+
"grid_column": null,
|
474 |
+
"grid_gap": null,
|
475 |
+
"grid_row": null,
|
476 |
+
"grid_template_areas": null,
|
477 |
+
"grid_template_columns": null,
|
478 |
+
"grid_template_rows": null,
|
479 |
+
"height": null,
|
480 |
+
"justify_content": null,
|
481 |
+
"justify_items": null,
|
482 |
+
"left": null,
|
483 |
+
"margin": null,
|
484 |
+
"max_height": null,
|
485 |
+
"max_width": null,
|
486 |
+
"min_height": null,
|
487 |
+
"min_width": null,
|
488 |
+
"object_fit": null,
|
489 |
+
"object_position": null,
|
490 |
+
"order": null,
|
491 |
+
"overflow": null,
|
492 |
+
"overflow_x": null,
|
493 |
+
"overflow_y": null,
|
494 |
+
"padding": null,
|
495 |
+
"right": null,
|
496 |
+
"top": null,
|
497 |
+
"visibility": null,
|
498 |
+
"width": null
|
499 |
+
}
|
500 |
+
},
|
501 |
+
"e956dfab55084a9cbe33c8e331b511e7": {
|
502 |
+
"model_module": "@jupyter-widgets/base",
|
503 |
+
"model_name": "LayoutModel",
|
504 |
+
"model_module_version": "1.2.0",
|
505 |
+
"state": {
|
506 |
+
"_model_module": "@jupyter-widgets/base",
|
507 |
+
"_model_module_version": "1.2.0",
|
508 |
+
"_model_name": "LayoutModel",
|
509 |
+
"_view_count": null,
|
510 |
+
"_view_module": "@jupyter-widgets/base",
|
511 |
+
"_view_module_version": "1.2.0",
|
512 |
+
"_view_name": "LayoutView",
|
513 |
+
"align_content": null,
|
514 |
+
"align_items": null,
|
515 |
+
"align_self": null,
|
516 |
+
"border": null,
|
517 |
+
"bottom": null,
|
518 |
+
"display": null,
|
519 |
+
"flex": null,
|
520 |
+
"flex_flow": null,
|
521 |
+
"grid_area": null,
|
522 |
+
"grid_auto_columns": null,
|
523 |
+
"grid_auto_flow": null,
|
524 |
+
"grid_auto_rows": null,
|
525 |
+
"grid_column": null,
|
526 |
+
"grid_gap": null,
|
527 |
+
"grid_row": null,
|
528 |
+
"grid_template_areas": null,
|
529 |
+
"grid_template_columns": null,
|
530 |
+
"grid_template_rows": null,
|
531 |
+
"height": null,
|
532 |
+
"justify_content": null,
|
533 |
+
"justify_items": null,
|
534 |
+
"left": null,
|
535 |
+
"margin": null,
|
536 |
+
"max_height": null,
|
537 |
+
"max_width": null,
|
538 |
+
"min_height": null,
|
539 |
+
"min_width": null,
|
540 |
+
"object_fit": null,
|
541 |
+
"object_position": null,
|
542 |
+
"order": null,
|
543 |
+
"overflow": null,
|
544 |
+
"overflow_x": null,
|
545 |
+
"overflow_y": null,
|
546 |
+
"padding": null,
|
547 |
+
"right": null,
|
548 |
+
"top": null,
|
549 |
+
"visibility": null,
|
550 |
+
"width": null
|
551 |
+
}
|
552 |
+
},
|
553 |
+
"cb394578badd43a89850873ad2526542": {
|
554 |
+
"model_module": "@jupyter-widgets/controls",
|
555 |
+
"model_name": "DescriptionStyleModel",
|
556 |
+
"model_module_version": "1.5.0",
|
557 |
+
"state": {
|
558 |
+
"_model_module": "@jupyter-widgets/controls",
|
559 |
+
"_model_module_version": "1.5.0",
|
560 |
+
"_model_name": "DescriptionStyleModel",
|
561 |
+
"_view_count": null,
|
562 |
+
"_view_module": "@jupyter-widgets/base",
|
563 |
+
"_view_module_version": "1.2.0",
|
564 |
+
"_view_name": "StyleView",
|
565 |
+
"description_width": ""
|
566 |
+
}
|
567 |
+
},
|
568 |
+
"193aef33d9184055bb9223f56d456de6": {
|
569 |
+
"model_module": "@jupyter-widgets/base",
|
570 |
+
"model_name": "LayoutModel",
|
571 |
+
"model_module_version": "1.2.0",
|
572 |
+
"state": {
|
573 |
+
"_model_module": "@jupyter-widgets/base",
|
574 |
+
"_model_module_version": "1.2.0",
|
575 |
+
"_model_name": "LayoutModel",
|
576 |
+
"_view_count": null,
|
577 |
+
"_view_module": "@jupyter-widgets/base",
|
578 |
+
"_view_module_version": "1.2.0",
|
579 |
+
"_view_name": "LayoutView",
|
580 |
+
"align_content": null,
|
581 |
+
"align_items": null,
|
582 |
+
"align_self": null,
|
583 |
+
"border": null,
|
584 |
+
"bottom": null,
|
585 |
+
"display": null,
|
586 |
+
"flex": null,
|
587 |
+
"flex_flow": null,
|
588 |
+
"grid_area": null,
|
589 |
+
"grid_auto_columns": null,
|
590 |
+
"grid_auto_flow": null,
|
591 |
+
"grid_auto_rows": null,
|
592 |
+
"grid_column": null,
|
593 |
+
"grid_gap": null,
|
594 |
+
"grid_row": null,
|
595 |
+
"grid_template_areas": null,
|
596 |
+
"grid_template_columns": null,
|
597 |
+
"grid_template_rows": null,
|
598 |
+
"height": null,
|
599 |
+
"justify_content": null,
|
600 |
+
"justify_items": null,
|
601 |
+
"left": null,
|
602 |
+
"margin": null,
|
603 |
+
"max_height": null,
|
604 |
+
"max_width": null,
|
605 |
+
"min_height": null,
|
606 |
+
"min_width": null,
|
607 |
+
"object_fit": null,
|
608 |
+
"object_position": null,
|
609 |
+
"order": null,
|
610 |
+
"overflow": null,
|
611 |
+
"overflow_x": null,
|
612 |
+
"overflow_y": null,
|
613 |
+
"padding": null,
|
614 |
+
"right": null,
|
615 |
+
"top": null,
|
616 |
+
"visibility": null,
|
617 |
+
"width": null
|
618 |
+
}
|
619 |
+
},
|
620 |
+
"abfc9aa911ce4a5ea81c7c451f08295f": {
|
621 |
+
"model_module": "@jupyter-widgets/controls",
|
622 |
+
"model_name": "ProgressStyleModel",
|
623 |
+
"model_module_version": "1.5.0",
|
624 |
+
"state": {
|
625 |
+
"_model_module": "@jupyter-widgets/controls",
|
626 |
+
"_model_module_version": "1.5.0",
|
627 |
+
"_model_name": "ProgressStyleModel",
|
628 |
+
"_view_count": null,
|
629 |
+
"_view_module": "@jupyter-widgets/base",
|
630 |
+
"_view_module_version": "1.2.0",
|
631 |
+
"_view_name": "StyleView",
|
632 |
+
"bar_color": null,
|
633 |
+
"description_width": ""
|
634 |
+
}
|
635 |
+
},
|
636 |
+
"e7937a1bc68441a080374911a6563376": {
|
637 |
+
"model_module": "@jupyter-widgets/base",
|
638 |
+
"model_name": "LayoutModel",
|
639 |
+
"model_module_version": "1.2.0",
|
640 |
+
"state": {
|
641 |
+
"_model_module": "@jupyter-widgets/base",
|
642 |
+
"_model_module_version": "1.2.0",
|
643 |
+
"_model_name": "LayoutModel",
|
644 |
+
"_view_count": null,
|
645 |
+
"_view_module": "@jupyter-widgets/base",
|
646 |
+
"_view_module_version": "1.2.0",
|
647 |
+
"_view_name": "LayoutView",
|
648 |
+
"align_content": null,
|
649 |
+
"align_items": null,
|
650 |
+
"align_self": null,
|
651 |
+
"border": null,
|
652 |
+
"bottom": null,
|
653 |
+
"display": null,
|
654 |
+
"flex": null,
|
655 |
+
"flex_flow": null,
|
656 |
+
"grid_area": null,
|
657 |
+
"grid_auto_columns": null,
|
658 |
+
"grid_auto_flow": null,
|
659 |
+
"grid_auto_rows": null,
|
660 |
+
"grid_column": null,
|
661 |
+
"grid_gap": null,
|
662 |
+
"grid_row": null,
|
663 |
+
"grid_template_areas": null,
|
664 |
+
"grid_template_columns": null,
|
665 |
+
"grid_template_rows": null,
|
666 |
+
"height": null,
|
667 |
+
"justify_content": null,
|
668 |
+
"justify_items": null,
|
669 |
+
"left": null,
|
670 |
+
"margin": null,
|
671 |
+
"max_height": null,
|
672 |
+
"max_width": null,
|
673 |
+
"min_height": null,
|
674 |
+
"min_width": null,
|
675 |
+
"object_fit": null,
|
676 |
+
"object_position": null,
|
677 |
+
"order": null,
|
678 |
+
"overflow": null,
|
679 |
+
"overflow_x": null,
|
680 |
+
"overflow_y": null,
|
681 |
+
"padding": null,
|
682 |
+
"right": null,
|
683 |
+
"top": null,
|
684 |
+
"visibility": null,
|
685 |
+
"width": null
|
686 |
+
}
|
687 |
+
},
|
688 |
+
"e532ed7bfef34f67b5fcacd9534eb789": {
|
689 |
+
"model_module": "@jupyter-widgets/controls",
|
690 |
+
"model_name": "DescriptionStyleModel",
|
691 |
+
"model_module_version": "1.5.0",
|
692 |
+
"state": {
|
693 |
+
"_model_module": "@jupyter-widgets/controls",
|
694 |
+
"_model_module_version": "1.5.0",
|
695 |
+
"_model_name": "DescriptionStyleModel",
|
696 |
+
"_view_count": null,
|
697 |
+
"_view_module": "@jupyter-widgets/base",
|
698 |
+
"_view_module_version": "1.2.0",
|
699 |
+
"_view_name": "StyleView",
|
700 |
+
"description_width": ""
|
701 |
+
}
|
702 |
+
}
|
703 |
+
}
|
704 |
+
}
|
705 |
+
},
|
706 |
+
"cells": [
|
707 |
+
{
|
708 |
+
"cell_type": "markdown",
|
709 |
+
"metadata": {
|
710 |
+
"id": "view-in-github",
|
711 |
+
"colab_type": "text"
|
712 |
+
},
|
713 |
+
"source": [
|
714 |
+
"<a href=\"https://colab.research.google.com/github/towardsai/ai-tutor-rag-system/blob/main/notebooks/12-Improve_Query.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
715 |
+
]
|
716 |
+
},
|
717 |
+
{
|
718 |
+
"cell_type": "markdown",
|
719 |
+
"source": [
|
720 |
+
"# Install Packages and Set Up Variables"
|
721 |
+
],
|
722 |
+
"metadata": {
|
723 |
+
"id": "-zE1h0uQV7uT"
|
724 |
+
}
|
725 |
+
},
|
726 |
+
{
|
727 |
+
"cell_type": "code",
|
728 |
+
"execution_count": null,
|
729 |
+
"metadata": {
|
730 |
+
"id": "QPJzr-I9XQ7l",
|
731 |
+
"colab": {
|
732 |
+
"base_uri": "https://localhost:8080/"
|
733 |
+
},
|
734 |
+
"outputId": "5d48c88b-a0a9-49ff-d788-e076d1cb4ead"
|
735 |
+
},
|
736 |
+
"outputs": [
|
737 |
+
{
|
738 |
+
"output_type": "stream",
|
739 |
+
"name": "stdout",
|
740 |
+
"text": [
|
741 |
+
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m15.7/15.7 MB\u001b[0m \u001b[31m25.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
742 |
+
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m225.4/225.4 kB\u001b[0m \u001b[31m21.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
743 |
+
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m2.0/2.0 MB\u001b[0m \u001b[31m38.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
744 |
+
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m508.6/508.6 kB\u001b[0m \u001b[31m31.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
745 |
+
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m79.9/79.9 MB\u001b[0m \u001b[31m9.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
746 |
+
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m45.7/45.7 kB\u001b[0m \u001b[31m4.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
747 |
+
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m51.7/51.7 kB\u001b[0m \u001b[31m4.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
748 |
+
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m75.9/75.9 kB\u001b[0m \u001b[31m9.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
749 |
+
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m2.4/2.4 MB\u001b[0m \u001b[31m78.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
750 |
+
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m92.1/92.1 kB\u001b[0m \u001b[31m10.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
751 |
+
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m60.8/60.8 kB\u001b[0m \u001b[31m7.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
752 |
+
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m41.1/41.1 kB\u001b[0m \u001b[31m4.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
753 |
+
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m5.4/5.4 MB\u001b[0m \u001b[31m95.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
754 |
+
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m6.8/6.8 MB\u001b[0m \u001b[31m66.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
755 |
+
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m57.9/57.9 kB\u001b[0m \u001b[31m7.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
756 |
+
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m105.6/105.6 kB\u001b[0m \u001b[31m12.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
757 |
+
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m67.3/67.3 kB\u001b[0m \u001b[31m8.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
758 |
+
"\u001b[?25h Installing build dependencies ... \u001b[?25l\u001b[?25hdone\n",
|
759 |
+
" Getting requirements to build wheel ... \u001b[?25l\u001b[?25hdone\n",
|
760 |
+
" Preparing metadata (pyproject.toml) ... \u001b[?25l\u001b[?25hdone\n",
|
761 |
+
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m698.9/698.9 kB\u001b[0m \u001b[31m58.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
762 |
+
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m1.6/1.6 MB\u001b[0m \u001b[31m79.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
763 |
+
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m67.6/67.6 kB\u001b[0m \u001b[31m7.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
764 |
+
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m3.1/3.1 MB\u001b[0m \u001b[31m97.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
765 |
+
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m71.5/71.5 kB\u001b[0m \u001b[31m9.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
766 |
+
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m76.9/76.9 kB\u001b[0m \u001b[31m9.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
767 |
+
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m58.3/58.3 kB\u001b[0m \u001b[31m7.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
768 |
+
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m46.0/46.0 kB\u001b[0m \u001b[31m6.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
769 |
+
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m50.8/50.8 kB\u001b[0m \u001b[31m6.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
770 |
+
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m341.4/341.4 kB\u001b[0m \u001b[31m33.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
771 |
+
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m3.4/3.4 MB\u001b[0m \u001b[31m92.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
772 |
+
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m1.3/1.3 MB\u001b[0m \u001b[31m60.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
773 |
+
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m130.2/130.2 kB\u001b[0m \u001b[31m15.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
774 |
+
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m49.4/49.4 kB\u001b[0m \u001b[31m6.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
775 |
+
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m86.8/86.8 kB\u001b[0m \u001b[31m9.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
776 |
+
"\u001b[?25h Building wheel for pypika (pyproject.toml) ... \u001b[?25l\u001b[?25hdone\n"
|
777 |
+
]
|
778 |
+
}
|
779 |
+
],
|
780 |
+
"source": [
|
781 |
+
"!pip install -q llama-index==0.9.21 openai==1.6.0 tiktoken==0.5.2 chromadb==0.4.21 kaleido==0.2.1 python-multipart==0.0.6 cohere==4.39"
|
782 |
+
]
|
783 |
+
},
|
784 |
+
{
|
785 |
+
"cell_type": "code",
|
786 |
+
"source": [
|
787 |
+
"import os\n",
|
788 |
+
"\n",
|
789 |
+
"# Set the \"OPENAI_API_KEY\" in the Python environment. Will be used by OpenAI client later.\n",
|
790 |
+
"os.environ[\"OPENAI_API_KEY\"] = \"<YOUR_OPENAI_KEY>\""
|
791 |
+
],
|
792 |
+
"metadata": {
|
793 |
+
"id": "riuXwpSPcvWC"
|
794 |
+
},
|
795 |
+
"execution_count": null,
|
796 |
+
"outputs": []
|
797 |
+
},
|
798 |
+
{
|
799 |
+
"cell_type": "code",
|
800 |
+
"source": [
|
801 |
+
"import nest_asyncio\n",
|
802 |
+
"\n",
|
803 |
+
"nest_asyncio.apply()"
|
804 |
+
],
|
805 |
+
"metadata": {
|
806 |
+
"id": "jIEeZzqLbz0J"
|
807 |
+
},
|
808 |
+
"execution_count": null,
|
809 |
+
"outputs": []
|
810 |
+
},
|
811 |
+
{
|
812 |
+
"cell_type": "markdown",
|
813 |
+
"source": [
|
814 |
+
"# Load a Model"
|
815 |
+
],
|
816 |
+
"metadata": {
|
817 |
+
"id": "Bkgi2OrYzF7q"
|
818 |
+
}
|
819 |
+
},
|
820 |
+
{
|
821 |
+
"cell_type": "code",
|
822 |
+
"source": [
|
823 |
+
"from llama_index.llms import OpenAI\n",
|
824 |
+
"\n",
|
825 |
+
"llm = OpenAI(temperature=0.9, model=\"gpt-3.5-turbo\", max_tokens=512)"
|
826 |
+
],
|
827 |
+
"metadata": {
|
828 |
+
"id": "9oGT6crooSSj"
|
829 |
+
},
|
830 |
+
"execution_count": null,
|
831 |
+
"outputs": []
|
832 |
+
},
|
833 |
+
{
|
834 |
+
"cell_type": "markdown",
|
835 |
+
"source": [
|
836 |
+
"# Create a VectorStore"
|
837 |
+
],
|
838 |
+
"metadata": {
|
839 |
+
"id": "0BwVuJXlzHVL"
|
840 |
+
}
|
841 |
+
},
|
842 |
+
{
|
843 |
+
"cell_type": "code",
|
844 |
+
"source": [
|
845 |
+
"import chromadb\n",
|
846 |
+
"\n",
|
847 |
+
"# create client and a new collection\n",
|
848 |
+
"# chromadb.EphemeralClient would save data in-memory instead of on disk.\n",
|
849 |
+
"chroma_client = chromadb.PersistentClient(path=\"./mini-llama-articles\")\n",
|
850 |
+
"chroma_collection = chroma_client.create_collection(\"mini-llama-articles\")"
|
851 |
+
],
|
852 |
+
"metadata": {
|
853 |
+
"id": "SQP87lHczHKc"
|
854 |
+
},
|
855 |
+
"execution_count": null,
|
856 |
+
"outputs": []
|
857 |
+
},
|
858 |
+
{
|
859 |
+
"cell_type": "code",
|
860 |
+
"source": [
|
861 |
+
"from llama_index.vector_stores import ChromaVectorStore\n",
|
862 |
+
"\n",
|
863 |
+
"# Define a vector store object using the created Chroma collection.\n",
|
864 |
+
"vector_store = ChromaVectorStore(chroma_collection=chroma_collection)"
|
865 |
+
],
|
866 |
+
"metadata": {
|
867 |
+
"id": "zAaGcYMJzHAN"
|
868 |
+
},
|
869 |
+
"execution_count": null,
|
870 |
+
"outputs": []
|
871 |
+
},
|
872 |
+
{
|
873 |
+
"cell_type": "markdown",
|
874 |
+
"source": [
|
875 |
+
"# Load the Dataset (CSV)"
|
876 |
+
],
|
877 |
+
"metadata": {
|
878 |
+
"id": "I9JbAzFcjkpn"
|
879 |
+
}
|
880 |
+
},
|
881 |
+
{
|
882 |
+
"cell_type": "markdown",
|
883 |
+
"source": [
|
884 |
+
"## Download"
|
885 |
+
],
|
886 |
+
"metadata": {
|
887 |
+
"id": "ceveDuYdWCYk"
|
888 |
+
}
|
889 |
+
},
|
890 |
+
{
|
891 |
+
"cell_type": "markdown",
|
892 |
+
"source": [
|
893 |
+
"The dataset includes several articles from the TowardsAI blog, which provide an in-depth explanation of the LLaMA2 model. Read the dataset with a CSV reader; each row stores an article's full text as a long string."
|
894 |
+
],
|
895 |
+
"metadata": {
|
896 |
+
"id": "eZwf6pv7WFmD"
|
897 |
+
}
|
898 |
+
},
|
899 |
+
{
|
900 |
+
"cell_type": "code",
|
901 |
+
"source": [
|
902 |
+
"!wget https://raw.githubusercontent.com/AlaFalaki/tutorial_notebooks/main/data/mini-llama-articles.csv"
|
903 |
+
],
|
904 |
+
"metadata": {
|
905 |
+
"colab": {
|
906 |
+
"base_uri": "https://localhost:8080/"
|
907 |
+
},
|
908 |
+
"id": "wl_pbPvMlv1h",
|
909 |
+
"outputId": "a453b612-20a8-4396-d22b-b19d2bc47816"
|
910 |
+
},
|
911 |
+
"execution_count": null,
|
912 |
+
"outputs": [
|
913 |
+
{
|
914 |
+
"output_type": "stream",
|
915 |
+
"name": "stdout",
|
916 |
+
"text": [
|
917 |
+
"--2024-02-12 17:09:58-- https://raw.githubusercontent.com/AlaFalaki/tutorial_notebooks/main/data/mini-llama-articles.csv\n",
|
918 |
+
"Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...\n",
|
919 |
+
"Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.\n",
|
920 |
+
"HTTP request sent, awaiting response... 200 OK\n",
|
921 |
+
"Length: 173646 (170K) [text/plain]\n",
|
922 |
+
"Saving to: βmini-llama-articles.csvβ\n",
|
923 |
+
"\n",
|
924 |
+
"mini-llama-articles 100%[===================>] 169.58K --.-KB/s in 0.03s \n",
|
925 |
+
"\n",
|
926 |
+
"2024-02-12 17:09:58 (5.50 MB/s) - βmini-llama-articles.csvβ saved [173646/173646]\n",
|
927 |
+
"\n"
|
928 |
+
]
|
929 |
+
}
|
930 |
+
]
|
931 |
+
},
|
932 |
+
{
|
933 |
+
"cell_type": "markdown",
|
934 |
+
"source": [
|
935 |
+
"## Read File"
|
936 |
+
],
|
937 |
+
"metadata": {
|
938 |
+
"id": "VWBLtDbUWJfA"
|
939 |
+
}
|
940 |
+
},
|
941 |
+
{
|
942 |
+
"cell_type": "code",
|
943 |
+
"source": [
|
944 |
+
"import csv\n",
|
945 |
+
"\n",
|
946 |
+
"rows = []\n",
|
947 |
+
"\n",
|
948 |
+
"# Load the file as a CSV\n",
|
949 |
+
"with open(\"./mini-llama-articles.csv\", mode=\"r\", encoding=\"utf-8\") as file:\n",
|
950 |
+
" csv_reader = csv.reader(file)\n",
|
951 |
+
"\n",
|
952 |
+
" for idx, row in enumerate( csv_reader ):\n",
|
953 |
+
" if idx == 0: continue  # Skip header row\n",
|
954 |
+
" rows.append( row )\n",
|
955 |
+
"\n",
|
956 |
+
"# The number of articles in the dataset.\n",
|
957 |
+
"len( rows )"
|
958 |
+
],
|
959 |
+
"metadata": {
|
960 |
+
"id": "0Q9sxuW0g3Gd",
|
961 |
+
"colab": {
|
962 |
+
"base_uri": "https://localhost:8080/"
|
963 |
+
},
|
964 |
+
"outputId": "49b27d8a-1f96-4e8d-fa0f-27afbf2c395c"
|
965 |
+
},
|
966 |
+
"execution_count": null,
|
967 |
+
"outputs": [
|
968 |
+
{
|
969 |
+
"output_type": "execute_result",
|
970 |
+
"data": {
|
971 |
+
"text/plain": [
|
972 |
+
"14"
|
973 |
+
]
|
974 |
+
},
|
975 |
+
"metadata": {},
|
976 |
+
"execution_count": 7
|
977 |
+
}
|
978 |
+
]
|
979 |
+
},
|
980 |
+
{
|
981 |
+
"cell_type": "markdown",
|
982 |
+
"source": [
|
983 |
+
"# Convert to Document objects"
|
984 |
+
],
|
985 |
+
"metadata": {
|
986 |
+
"id": "S17g2RYOjmf2"
|
987 |
+
}
|
988 |
+
},
|
989 |
+
{
|
990 |
+
"cell_type": "code",
|
991 |
+
"source": [
|
992 |
+
"from llama_index import Document\n",
|
993 |
+
"\n",
|
994 |
+
"# Convert the rows to Document objects so the LlamaIndex framework can process them.\n",
|
995 |
+
"documents = [Document(text=row[1], metadata={\"title\": row[0], \"url\": row[2], \"source_name\": row[3]}) for row in rows]"
|
996 |
+
],
|
997 |
+
"metadata": {
|
998 |
+
"id": "YizvmXPejkJE"
|
999 |
+
},
|
1000 |
+
"execution_count": null,
|
1001 |
+
"outputs": []
|
1002 |
+
},
|
1003 |
+
{
|
1004 |
+
"cell_type": "markdown",
|
1005 |
+
"source": [
|
1006 |
+
"# Transforming"
|
1007 |
+
],
|
1008 |
+
"metadata": {
|
1009 |
+
"id": "qjuLbmFuWsyl"
|
1010 |
+
}
|
1011 |
+
},
|
1012 |
+
{
|
1013 |
+
"cell_type": "code",
|
1014 |
+
"source": [
|
1015 |
+
"from llama_index.text_splitter import TokenTextSplitter\n",
|
1016 |
+
"\n",
|
1017 |
+
"text_splitter = TokenTextSplitter(\n",
|
1018 |
+
" separator=\" \", chunk_size=512, chunk_overlap=128\n",
|
1019 |
+
")"
|
1020 |
+
],
|
1021 |
+
"metadata": {
|
1022 |
+
"id": "9z3t70DGWsjO"
|
1023 |
+
},
|
1024 |
+
"execution_count": null,
|
1025 |
+
"outputs": []
|
1026 |
+
},
|
1027 |
+
{
|
1028 |
+
"cell_type": "code",
|
1029 |
+
"source": [
|
1030 |
+
"from llama_index.extractors import (\n",
|
1031 |
+
" SummaryExtractor,\n",
|
1032 |
+
" QuestionsAnsweredExtractor,\n",
|
1033 |
+
" KeywordExtractor,\n",
|
1034 |
+
")\n",
|
1035 |
+
"from llama_index.embeddings import OpenAIEmbedding\n",
|
1036 |
+
"from llama_index.ingestion import IngestionPipeline\n",
|
1037 |
+
"\n",
|
1038 |
+
"pipeline = IngestionPipeline(\n",
|
1039 |
+
" transformations=[\n",
|
1040 |
+
" text_splitter,\n",
|
1041 |
+
" QuestionsAnsweredExtractor(questions=3, llm=llm),\n",
|
1042 |
+
" SummaryExtractor(summaries=[\"prev\", \"self\"], llm=llm),\n",
|
1043 |
+
" KeywordExtractor(keywords=10, llm=llm),\n",
|
1044 |
+
" OpenAIEmbedding(),\n",
|
1045 |
+
" ],\n",
|
1046 |
+
" vector_store=vector_store\n",
|
1047 |
+
")\n",
|
1048 |
+
"\n",
|
1049 |
+
"nodes = pipeline.run(documents=documents, show_progress=True);"
|
1050 |
+
],
|
1051 |
+
"metadata": {
|
1052 |
+
"colab": {
|
1053 |
+
"base_uri": "https://localhost:8080/",
|
1054 |
+
"height": 331,
|
1055 |
+
"referenced_widgets": [
|
1056 |
+
"3fbabd8a8660461ba5e7bc08ef39139a",
|
1057 |
+
"df2365556ae242a2ab1a119f9a31a561",
|
1058 |
+
"5f4b9d32df8f446e858e4c289dc282f9",
|
1059 |
+
"5b588f83a15d42d9aca888e06bbd95ff",
|
1060 |
+
"ad073bca655540809e39f26538d2ec0d",
|
1061 |
+
"13b9c5395bca4c3ba21265240cb936cf",
|
1062 |
+
"47a4586384274577a726c57605e7f8d9",
|
1063 |
+
"96a3bdece738481db57e811ccb74a974",
|
1064 |
+
"5c7973afd79349ed997a69120d0629b2",
|
1065 |
+
"af9b6ae927dd4764b9692507791bc67e",
|
1066 |
+
"134210510d49476e959dd7d032bbdbdc",
|
1067 |
+
"5f9bb065c2b74d2e8ded32e1306a7807",
|
1068 |
+
"73a06bc546a64f7f99a9e4a135319dcd",
|
1069 |
+
"ce48deaf4d8c49cdae92bfdbb3a78df0",
|
1070 |
+
"4a172e8c6aa44e41a42fc1d9cf714fd0",
|
1071 |
+
"0245f2604e4d49c8bd0210302746c47b",
|
1072 |
+
"e956dfab55084a9cbe33c8e331b511e7",
|
1073 |
+
"cb394578badd43a89850873ad2526542",
|
1074 |
+
"193aef33d9184055bb9223f56d456de6",
|
1075 |
+
"abfc9aa911ce4a5ea81c7c451f08295f",
|
1076 |
+
"e7937a1bc68441a080374911a6563376",
|
1077 |
+
"e532ed7bfef34f67b5fcacd9534eb789"
|
1078 |
+
]
|
1079 |
+
},
|
1080 |
+
"id": "P9LDJ7o-Wsc-",
|
1081 |
+
"outputId": "01070c1f-dffa-4ab7-ad71-b07b76b12e03"
|
1082 |
+
},
|
1083 |
+
"execution_count": null,
|
1084 |
+
"outputs": [
|
1085 |
+
{
|
1086 |
+
"output_type": "display_data",
|
1087 |
+
"data": {
|
1088 |
+
"text/plain": [
|
1089 |
+
"Parsing nodes: 0%| | 0/14 [00:00<?, ?it/s]"
|
1090 |
+
],
|
1091 |
+
"application/vnd.jupyter.widget-view+json": {
|
1092 |
+
"version_major": 2,
|
1093 |
+
"version_minor": 0,
|
1094 |
+
"model_id": "3fbabd8a8660461ba5e7bc08ef39139a"
|
1095 |
+
}
|
1096 |
+
},
|
1097 |
+
"metadata": {}
|
1098 |
+
},
|
1099 |
+
{
|
1100 |
+
"output_type": "stream",
|
1101 |
+
"name": "stdout",
|
1102 |
+
"text": [
|
1103 |
+
"464\n",
|
1104 |
+
"452\n",
|
1105 |
+
"457\n",
|
1106 |
+
"465\n",
|
1107 |
+
"448\n",
|
1108 |
+
"468\n",
|
1109 |
+
"434\n",
|
1110 |
+
"447\n",
|
1111 |
+
"455\n",
|
1112 |
+
"445\n",
|
1113 |
+
"449\n",
|
1114 |
+
"455\n",
|
1115 |
+
"431\n",
|
1116 |
+
"453\n"
|
1117 |
+
]
|
1118 |
+
},
|
1119 |
+
{
|
1120 |
+
"output_type": "display_data",
|
1121 |
+
"data": {
|
1122 |
+
"text/plain": [
|
1123 |
+
"Generating embeddings: 0%| | 0/108 [00:00<?, ?it/s]"
|
1124 |
+
],
|
1125 |
+
"application/vnd.jupyter.widget-view+json": {
|
1126 |
+
"version_major": 2,
|
1127 |
+
"version_minor": 0,
|
1128 |
+
"model_id": "5f9bb065c2b74d2e8ded32e1306a7807"
|
1129 |
+
}
|
1130 |
+
},
|
1131 |
+
"metadata": {}
|
1132 |
+
}
|
1133 |
+
]
|
1134 |
+
},
|
1135 |
+
{
|
1136 |
+
"cell_type": "code",
|
1137 |
+
"source": [
|
1138 |
+
"len( nodes )"
|
1139 |
+
],
|
1140 |
+
"metadata": {
|
1141 |
+
"colab": {
|
1142 |
+
"base_uri": "https://localhost:8080/"
|
1143 |
+
},
|
1144 |
+
"id": "mPGa85hM2P3P",
|
1145 |
+
"outputId": "c106c463-2459-4b11-bbae-5bd5e2246011"
|
1146 |
+
},
|
1147 |
+
"execution_count": null,
|
1148 |
+
"outputs": [
|
1149 |
+
{
|
1150 |
+
"output_type": "execute_result",
|
1151 |
+
"data": {
|
1152 |
+
"text/plain": [
|
1153 |
+
"108"
|
1154 |
+
]
|
1155 |
+
},
|
1156 |
+
"metadata": {},
|
1157 |
+
"execution_count": 109
|
1158 |
+
}
|
1159 |
+
]
|
1160 |
+
},
|
1161 |
+
{
|
1162 |
+
"cell_type": "code",
|
1163 |
+
"source": [
|
1164 |
+
"!zip -r vectorstore.zip mini-llama-articles"
|
1165 |
+
],
|
1166 |
+
"metadata": {
|
1167 |
+
"id": "23x20bL3_jRb"
|
1168 |
+
},
|
1169 |
+
"execution_count": null,
|
1170 |
+
"outputs": []
|
1171 |
+
},
|
1172 |
+
{
|
1173 |
+
"cell_type": "markdown",
|
1174 |
+
"source": [
|
1175 |
+
"# Load Indexes"
|
1176 |
+
],
|
1177 |
+
"metadata": {
|
1178 |
+
"id": "OWaT6rL7ksp8"
|
1179 |
+
}
|
1180 |
+
},
|
1181 |
+
{
|
1182 |
+
"cell_type": "code",
|
1183 |
+
"source": [
|
1184 |
+
"!unzip vectorstore.zip"
|
1185 |
+
],
|
1186 |
+
"metadata": {
|
1187 |
+
"colab": {
|
1188 |
+
"base_uri": "https://localhost:8080/"
|
1189 |
+
},
|
1190 |
+
"id": "SodY2Xpf_kxg",
|
1191 |
+
"outputId": "9f8b7153-ea58-4824-8363-c47e922612a8"
|
1192 |
+
},
|
1193 |
+
"execution_count": null,
|
1194 |
+
"outputs": [
|
1195 |
+
{
|
1196 |
+
"output_type": "stream",
|
1197 |
+
"name": "stdout",
|
1198 |
+
"text": [
|
1199 |
+
"Archive: vectorstore.zip\n",
|
1200 |
+
" creating: mini-llama-articles/\n",
|
1201 |
+
" creating: mini-llama-articles/a361e92f-9895-41b6-ba72-4ad38e9875bd/\n",
|
1202 |
+
" inflating: mini-llama-articles/a361e92f-9895-41b6-ba72-4ad38e9875bd/data_level0.bin \n",
|
1203 |
+
" inflating: mini-llama-articles/a361e92f-9895-41b6-ba72-4ad38e9875bd/header.bin \n",
|
1204 |
+
" extracting: mini-llama-articles/a361e92f-9895-41b6-ba72-4ad38e9875bd/link_lists.bin \n",
|
1205 |
+
" inflating: mini-llama-articles/a361e92f-9895-41b6-ba72-4ad38e9875bd/length.bin \n",
|
1206 |
+
" inflating: mini-llama-articles/chroma.sqlite3 \n"
|
1207 |
+
]
|
1208 |
+
}
|
1209 |
+
]
|
1210 |
+
},
|
1211 |
+
{
|
1212 |
+
"cell_type": "code",
|
1213 |
+
"source": [
|
1214 |
+
"import chromadb\n",
|
1215 |
+
"from llama_index.vector_stores import ChromaVectorStore\n",
|
1216 |
+
"\n",
|
1217 |
+
"# Load the stored vector database.\n",
|
1218 |
+
"db = chromadb.PersistentClient(path=\"./mini-llama-articles\")\n",
|
1219 |
+
"chroma_collection = db.get_or_create_collection(\"mini-llama-articles\")\n",
|
1220 |
+
"vector_store = ChromaVectorStore(chroma_collection=chroma_collection)"
|
1221 |
+
],
|
1222 |
+
"metadata": {
|
1223 |
+
"id": "mXi56KTXk2sp"
|
1224 |
+
},
|
1225 |
+
"execution_count": null,
|
1226 |
+
"outputs": []
|
1227 |
+
},
|
1228 |
+
{
|
1229 |
+
"cell_type": "code",
|
1230 |
+
"source": [
|
1231 |
+
"# Create your index\n",
|
1232 |
+
"from llama_index import VectorStoreIndex\n",
|
1233 |
+
"\n",
|
1234 |
+
"vector_index = VectorStoreIndex.from_vector_store(vector_store)"
|
1235 |
+
],
|
1236 |
+
"metadata": {
|
1237 |
+
"id": "jKXURvLtkuTS"
|
1238 |
+
},
|
1239 |
+
"execution_count": null,
|
1240 |
+
"outputs": []
|
1241 |
+
},
|
1242 |
+
{
|
1243 |
+
"cell_type": "markdown",
|
1244 |
+
"source": [
|
1245 |
+
"# Multi-Step Query Engine"
|
1246 |
+
],
|
1247 |
+
"metadata": {
|
1248 |
+
"id": "SLrn8A3jckmW"
|
1249 |
+
}
|
1250 |
+
},
|
1251 |
+
{
|
1252 |
+
"cell_type": "markdown",
|
1253 |
+
"source": [
|
1254 |
+
"## GPT-4"
|
1255 |
+
],
|
1256 |
+
"metadata": {
|
1257 |
+
"id": "UmpfpVCje8h3"
|
1258 |
+
}
|
1259 |
+
},
|
1260 |
+
{
|
1261 |
+
"cell_type": "code",
|
1262 |
+
"source": [
|
1263 |
+
"from llama_index import ServiceContext\n",
|
1264 |
+
"\n",
|
1265 |
+
"gpt4 = OpenAI(temperature=0, model=\"gpt-4\")\n",
|
1266 |
+
"service_context_gpt4 = ServiceContext.from_defaults(llm=gpt4)"
|
1267 |
+
],
|
1268 |
+
"metadata": {
|
1269 |
+
"id": "CaxFzDz4cRMd"
|
1270 |
+
},
|
1271 |
+
"execution_count": null,
|
1272 |
+
"outputs": []
|
1273 |
+
},
|
1274 |
+
{
|
1275 |
+
"cell_type": "code",
|
1276 |
+
"source": [
|
1277 |
+
"from llama_index.indices.query.query_transform.base import StepDecomposeQueryTransform\n",
|
1278 |
+
"\n",
|
1279 |
+
"step_decompose_transform_gpt4 = StepDecomposeQueryTransform(llm=gpt4, verbose=True)"
|
1280 |
+
],
|
1281 |
+
"metadata": {
|
1282 |
+
"id": "8y-Ya3GyfcAk"
|
1283 |
+
},
|
1284 |
+
"execution_count": null,
|
1285 |
+
"outputs": []
|
1286 |
+
},
|
1287 |
+
{
|
1288 |
+
"cell_type": "code",
|
1289 |
+
"source": [
|
1290 |
+
"from llama_index.query_engine.multistep_query_engine import MultiStepQueryEngine\n",
|
1291 |
+
"\n",
|
1292 |
+
"query_engine_gpt4 = vector_index.as_query_engine(service_context=service_context_gpt4)\n",
|
1293 |
+
"query_engine_gpt4 = MultiStepQueryEngine(\n",
|
1294 |
+
" query_engine=query_engine_gpt4,\n",
|
1295 |
+
" query_transform=step_decompose_transform_gpt4,\n",
|
1296 |
+
" index_summary=\"Used to answer questions about the LLaMA2 Model\",\n",
|
1297 |
+
")"
|
1298 |
+
],
|
1299 |
+
"metadata": {
|
1300 |
+
"id": "zntXdSbGf_qF"
|
1301 |
+
},
|
1302 |
+
"execution_count": null,
|
1303 |
+
"outputs": []
|
1304 |
+
},
|
1305 |
+
{
|
1306 |
+
"cell_type": "markdown",
|
1307 |
+
"source": [
|
1308 |
+
"# Query Dataset"
|
1309 |
+
],
|
1310 |
+
"metadata": {
|
1311 |
+
"id": "8JPD8yAinVSq"
|
1312 |
+
}
|
1313 |
+
},
|
1314 |
+
{
|
1315 |
+
"cell_type": "markdown",
|
1316 |
+
"source": [
|
1317 |
+
"## Default"
|
1318 |
+
],
|
1319 |
+
"metadata": {
|
1320 |
+
"id": "D2IByQ5-ox9U"
|
1321 |
+
}
|
1322 |
+
},
|
1323 |
+
{
|
1324 |
+
"cell_type": "code",
|
1325 |
+
"source": [
|
1326 |
+
"# Define a query engine that is responsible for retrieving related pieces of text,\n",
|
1327 |
+
"# and using an LLM to formulate the final answer.\n",
|
1328 |
+
"query_engine = vector_index.as_query_engine()\n",
|
1329 |
+
"\n",
|
1330 |
+
"res = query_engine.query(\"How many parameters LLaMA2 model has?\")"
|
1331 |
+
],
|
1332 |
+
"metadata": {
|
1333 |
+
"id": "b0gue7cyctt1"
|
1334 |
+
},
|
1335 |
+
"execution_count": null,
|
1336 |
+
"outputs": []
|
1337 |
+
},
|
1338 |
+
{
|
1339 |
+
"cell_type": "code",
|
1340 |
+
"source": [
|
1341 |
+
"res.response"
|
1342 |
+
],
|
1343 |
+
"metadata": {
|
1344 |
+
"colab": {
|
1345 |
+
"base_uri": "https://localhost:8080/",
|
1346 |
+
"height": 53
|
1347 |
+
},
|
1348 |
+
"id": "VKK3jMprctre",
|
1349 |
+
"outputId": "b6ed346c-714b-44a6-b8fa-bfaca1b38deb"
|
1350 |
+
},
|
1351 |
+
"execution_count": null,
|
1352 |
+
"outputs": [
|
1353 |
+
{
|
1354 |
+
"output_type": "execute_result",
|
1355 |
+
"data": {
|
1356 |
+
"text/plain": [
|
1357 |
+
"'The Llama 2 model is available in four different sizes: 7 billion, 13 billion, 34 billion, and 70 billion parameters.'"
|
1358 |
+
],
|
1359 |
+
"application/vnd.google.colaboratory.intrinsic+json": {
|
1360 |
+
"type": "string"
|
1361 |
+
}
|
1362 |
+
},
|
1363 |
+
"metadata": {},
|
1364 |
+
"execution_count": 24
|
1365 |
+
}
|
1366 |
+
]
|
1367 |
+
},
|
1368 |
+
{
|
1369 |
+
"cell_type": "code",
|
1370 |
+
"source": [
|
1371 |
+
"for src in res.source_nodes:\n",
|
1372 |
+
" print(\"Node ID\\t\", src.node_id)\n",
|
1373 |
+
" print(\"Title\\t\", src.metadata['title'])\n",
|
1374 |
+
" print(\"Text\\t\", src.text)\n",
|
1375 |
+
" print(\"Score\\t\", src.score)\n",
|
1376 |
+
" print(\"-_\"*20)"
|
1377 |
+
],
|
1378 |
+
"metadata": {
|
1379 |
+
"colab": {
|
1380 |
+
"base_uri": "https://localhost:8080/"
|
1381 |
+
},
|
1382 |
+
"id": "465dH4yQc7Ct",
|
1383 |
+
"outputId": "6f7eb440-cc24-4d20-ac35-fa747265d18d"
|
1384 |
+
},
|
1385 |
+
"execution_count": null,
|
1386 |
+
"outputs": [
|
1387 |
+
{
|
1388 |
+
"output_type": "stream",
|
1389 |
+
"name": "stdout",
|
1390 |
+
"text": [
|
1391 |
+
"Node ID\t d6f533e5-fef8-469c-a313-def19fd38efe\n",
|
1392 |
+
"Title\t Meta's Llama 2: Revolutionizing Open Source Language Models for Commercial Use\n",
|
1393 |
+
"Text\t I. Llama 2: Revolutionizing Commercial Use Unlike its predecessor Llama 1, which was limited to research use, Llama 2 represents a major advancement as an open-source commercial model. Businesses can now integrate Llama 2 into products to create AI-powered applications. Availability on Azure and AWS facilitates fine-tuning and adoption. However, restrictions apply to prevent exploitation. Companies with over 700 million active daily users cannot use Llama 2. Additionally, its output cannot be used to improve other language models. II. Llama 2 Model Flavors Llama 2 is available in four different model sizes: 7 billion, 13 billion, 34 billion, and 70 billion parameters. While 7B, 13B, and 70B have already been released, the 34B model is still awaited. The pretrained variant, trained on a whopping 2 trillion tokens, boasts a context window of 4096 tokens, twice the size of its predecessor Llama 1. Meta also released a Llama 2 fine-tuned model for chat applications that was trained on over 1 million human annotations. Such extensive training comes at a cost, with the 70B model taking a staggering 1720320 GPU hours to train. The context window's length determines the amount of content the model can process at once, making Llama 2 a powerful language model in terms of scale and efficiency. III. Safety Considerations: A Top Priority for Meta Meta's commitment to safety and alignment shines through in Llama 2's design. The model demonstrates exceptionally low AI safety violation percentages, surpassing even ChatGPT in safety benchmarks. Finding the right balance between helpfulness and safety when optimizing a model poses significant challenges. While a highly helpful model may be capable of answering any question, including sensitive ones like \"How do I build a bomb?\", it also raises concerns about potential misuse. Thus, striking the perfect equilibrium between providing useful information and ensuring safety is paramount. 
However, prioritizing safety to an extreme extent can lead to a model that struggles to effectively address a diverse range of questions. This limitation could hinder the model's practical applicability and user experience. Thus, achieving\n",
|
1394 |
+
"Score\t 0.7078549032318474\n",
|
1395 |
+
"-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_\n",
|
1396 |
+
"Node ID\t 2f3b7c34-8fd0-4134-af38-ef1b77e32cd8\n",
|
1397 |
+
"Title\t Meta's Llama 2: Revolutionizing Open Source Language Models for Commercial Use\n",
|
1398 |
+
"Text\t The model demonstrates exceptionally low AI safety violation percentages, surpassing even ChatGPT in safety benchmarks. Finding the right balance between helpfulness and safety when optimizing a model poses significant challenges. While a highly helpful model may be capable of answering any question, including sensitive ones like \"How do I build a bomb?\", it also raises concerns about potential misuse. Thus, striking the perfect equilibrium between providing useful information and ensuring safety is paramount. However, prioritizing safety to an extreme extent can lead to a model that struggles to effectively address a diverse range of questions. This limitation could hinder the model's practical applicability and user experience. Thus, achieving an optimum balance that allows the model to be both helpful and safe is of utmost importance. To strike the right balance between helpfulness and safety, Meta employed two reward models - one for helpfulness and another for safety - to optimize the model's responses. The 34B parameter model has reported higher safety violations than other variants, possibly contributing to the delay in its release. IV. Helpfulness Comparison: Llama 2 Outperforms Competitors Llama 2 emerges as a strong contender in the open-source language model arena, outperforming its competitors in most categories. The 70B parameter model outperforms all other open-source models, while the 7B and 34B models outshine Falcon in all categories and MPT in all categories except coding. Despite being smaller, Llam a2's performance rivals that of Chat GPT 3.5, a significantly larger closed-source model. While GPT 4 and PalM-2-L, with their larger size, outperform Llama 2, this is expected due to their capacity for handling complex language tasks. Llama 2's impressive ability to compete with larger models highlights its efficiency and potential in the market. 
However, Llama 2 does face challenges in coding and math problems, where models like Chat GPT 4 excel, given their significantly larger size. Chat GPT 4 performed significantly better than Llama 2 for coding (HumanEval benchmark)and math problem tasks (GSM8k benchmark). Open-source AI technologies, like Llama 2, continue to advance, offering\n",
"Score\t 0.7026792232112851\n",
"-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_\n"
]
}
]
},
{
"cell_type": "markdown",
"source": [
"## GPT-4 Multi-Step"
],
"metadata": {
"id": "2y2AiInmpz7g"
}
},
{
"cell_type": "code",
"source": [
"response_gpt4 = query_engine_gpt4.query(\"How many parameters LLaMA2 model has?\")"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "69kADAFilW1n",
"outputId": "8a847a58-539f-4ba7-ca07-ef80ceb8b3e2"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"\u001b[1;3;33m> Current query: How many parameters LLaMA2 model has?\n",
"\u001b[0m\u001b[1;3;38;5;200m> New query: What is the LLaMA2 Model?\n",
"\u001b[0m\u001b[1;3;33m> Current query: How many parameters LLaMA2 model has?\n",
"\u001b[0m\u001b[1;3;38;5;200m> New query: None\n",
"\u001b[0m"
]
}
]
},
{
"cell_type": "code",
"source": [
"response_gpt4.response"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 35
},
"id": "_ul5p3AMldzk",
"outputId": "8c5cadda-8e06-4398-81bc-8571d4710b2a"
},
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"'LLaMA 2 model has four different sizes: 7 billion, 13 billion, 34 billion, and 70 billion parameters.'"
],
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
}
},
"metadata": {},
"execution_count": 27
}
]
},
{
"cell_type": "code",
"source": [
"for src in response_gpt4.source_nodes:\n",
" print(\"Node ID\\t\", src.node_id)\n",
" print(\"Text\\t\", src.text)\n",
" print(\"Score\\t\", src.score)\n",
" print(\"-_\"*20)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "k5pJPBPRqjbG",
"outputId": "0bdd8382-8392-483d-bb6a-51e7a146eeb3"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Node ID\t 121c62a4-e30e-481b-9972-b37f4a64f4b5\n",
"Text\t \n",
"Question: What is the LLaMA2 Model?\n",
"Answer: LLaMA 2 is an open-source commercial model that represents a major advancement from its predecessor, LLaMA 1. Unlike LLaMA 1, which was limited to research use, LLaMA 2 can be integrated into products by businesses to create AI-powered applications. It is available on Azure and AWS, which facilitates its fine-tuning and adoption. LLaMA 2 is available in four different model sizes: 7 billion, 13 billion, 34 billion, and 70 billion parameters. The model has been trained on a large number of tokens and has a context window of 4096 tokens, twice the size of its predecessor. There is also a fine-tuned version of LLaMA 2 for chat applications. However, there are restrictions on its use to prevent exploitation, such as companies with over 700 million active daily users not being allowed to use it, and its output cannot be used to improve other language models.\n",
"Score\t None\n",
"-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_\n",
"Node ID\t 2f3b7c34-8fd0-4134-af38-ef1b77e32cd8\n",
"Text\t The model demonstrates exceptionally low AI safety violation percentages, surpassing even ChatGPT in safety benchmarks. Finding the right balance between helpfulness and safety when optimizing a model poses significant challenges. While a highly helpful model may be capable of answering any question, including sensitive ones like \"How do I build a bomb?\", it also raises concerns about potential misuse. Thus, striking the perfect equilibrium between providing useful information and ensuring safety is paramount. However, prioritizing safety to an extreme extent can lead to a model that struggles to effectively address a diverse range of questions. This limitation could hinder the model's practical applicability and user experience. Thus, achieving an optimum balance that allows the model to be both helpful and safe is of utmost importance. To strike the right balance between helpfulness and safety, Meta employed two reward models - one for helpfulness and another for safety - to optimize the model's responses. The 34B parameter model has reported higher safety violations than other variants, possibly contributing to the delay in its release. IV. Helpfulness Comparison: Llama 2 Outperforms Competitors Llama 2 emerges as a strong contender in the open-source language model arena, outperforming its competitors in most categories. The 70B parameter model outperforms all other open-source models, while the 7B and 34B models outshine Falcon in all categories and MPT in all categories except coding. Despite being smaller, Llam a2's performance rivals that of Chat GPT 3.5, a significantly larger closed-source model. While GPT 4 and PalM-2-L, with their larger size, outperform Llama 2, this is expected due to their capacity for handling complex language tasks. Llama 2's impressive ability to compete with larger models highlights its efficiency and potential in the market. However, Llama 2 does face challenges in coding and math problems, where models like Chat GPT 4 excel, given their significantly larger size. Chat GPT 4 performed significantly better than Llama 2 for coding (HumanEval benchmark)and math problem tasks (GSM8k benchmark). Open-source AI technologies, like Llama 2, continue to advance, offering\n",
"Score\t 0.7200907368429045\n",
"-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_\n",
"Node ID\t d6f533e5-fef8-469c-a313-def19fd38efe\n",
"Text\t I. Llama 2: Revolutionizing Commercial Use Unlike its predecessor Llama 1, which was limited to research use, Llama 2 represents a major advancement as an open-source commercial model. Businesses can now integrate Llama 2 into products to create AI-powered applications. Availability on Azure and AWS facilitates fine-tuning and adoption. However, restrictions apply to prevent exploitation. Companies with over 700 million active daily users cannot use Llama 2. Additionally, its output cannot be used to improve other language models. II. Llama 2 Model Flavors Llama 2 is available in four different model sizes: 7 billion, 13 billion, 34 billion, and 70 billion parameters. While 7B, 13B, and 70B have already been released, the 34B model is still awaited. The pretrained variant, trained on a whopping 2 trillion tokens, boasts a context window of 4096 tokens, twice the size of its predecessor Llama 1. Meta also released a Llama 2 fine-tuned model for chat applications that was trained on over 1 million human annotations. Such extensive training comes at a cost, with the 70B model taking a staggering 1720320 GPU hours to train. The context window's length determines the amount of content the model can process at once, making Llama 2 a powerful language model in terms of scale and efficiency. III. Safety Considerations: A Top Priority for Meta Meta's commitment to safety and alignment shines through in Llama 2's design. The model demonstrates exceptionally low AI safety violation percentages, surpassing even ChatGPT in safety benchmarks. Finding the right balance between helpfulness and safety when optimizing a model poses significant challenges. While a highly helpful model may be capable of answering any question, including sensitive ones like \"How do I build a bomb?\", it also raises concerns about potential misuse. Thus, striking the perfect equilibrium between providing useful information and ensuring safety is paramount. However, prioritizing safety to an extreme extent can lead to a model that struggles to effectively address a diverse range of questions. This limitation could hinder the model's practical applicability and user experience. Thus, achieving\n",
"Score\t 0.7176769581627592\n",
"-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_\n"
]
}
]
},
{
"cell_type": "markdown",
"source": [
"# Test GPT-3 Multi-Step"
],
"metadata": {
"id": "jwcSCiMhp4Uh"
}
},
{
"cell_type": "code",
"source": [
"from llama_index import ServiceContext\n",
"from llama_index.indices.query.query_transform.base import StepDecomposeQueryTransform\n",
"from llama_index.query_engine.multistep_query_engine import MultiStepQueryEngine\n",
"\n",
"gpt3 = OpenAI(temperature=0, model=\"gpt-3.5-turbo\")\n",
"service_context_gpt3 = ServiceContext.from_defaults(llm=gpt3)\n",
"\n",
"step_decompose_transform_gpt3 = StepDecomposeQueryTransform(llm=gpt3, verbose=True)\n",
"\n",
"query_engine_gpt3 = vector_index.as_query_engine(service_context=service_context_gpt3)\n",
"query_engine_gpt3 = MultiStepQueryEngine(\n",
" query_engine=query_engine_gpt3,\n",
" query_transform=step_decompose_transform_gpt3,\n",
" index_summary=\"Used to answer questions about the LLaMA2 Model\",\n",
")"
],
"metadata": {
"id": "uH9gNfZuslHK"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"response_gpt3 = query_engine_gpt3.query(\"How many parameters LLaMA2 model has?\")"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "9s6SkHI0p6VZ",
"outputId": "1c87dbda-e026-4e28-f7eb-b01145c62b77"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"\u001b[1;3;33m> Current query: How many parameters LLaMA2 model has?\n",
"\u001b[0m\u001b[1;3;38;5;200m> New query: None\n",
"\u001b[0m"
]
}
]
},
{
"cell_type": "code",
"source": [
"response_gpt3.response"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 35
},
"id": "FlgMkAhQsTIY",
"outputId": "0996e879-3914-44b1-cdec-e4f0b0ba7a4e"
},
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"'Empty Response'"
],
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
}
},
"metadata": {},
"execution_count": 46
}
]
},
{
"cell_type": "markdown",
"source": [
"# Test Retriever on Multistep"
],
"metadata": {
"id": "DxOF2qth1gUC"
}
},
{
"cell_type": "code",
"source": [
"import llama_index"
],
"metadata": {
"id": "In9BZbU10KAz"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"from llama_index.indices.query.schema import QueryBundle"
],
"metadata": {
"id": "_-fBK2g2zkKb"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"t = QueryBundle(\"How many parameters LLaMA2 model has?\")"
],
"metadata": {
"id": "wqT7mlhx1KGB"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"query_engine_gpt3.retrieve(t)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 304
},
"id": "OHpa3MqXyyvd",
"outputId": "d9b39a47-751d-48a1-ce68-ebf0a50b938d"
},
"execution_count": null,
"outputs": [
{
"output_type": "error",
"ename": "NotImplementedError",
"evalue": "This query engine does not support retrieve, use query directly",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mNotImplementedError\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m<ipython-input-78-9d10d9a0c761>\u001b[0m in \u001b[0;36m<cell line: 1>\u001b[0;34m()\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m \u001b[0mquery_engine_gpt3\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mretrieve\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mt\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;31m#(\"How many parameters LLaMA2 model has\")\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m",
"\u001b[0;32m/usr/local/lib/python3.10/dist-packages/llama_index/core/base_query_engine.py\u001b[0m in \u001b[0;36mretrieve\u001b[0;34m(self, query_bundle)\u001b[0m\n\u001b[1;32m 37\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 38\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0mretrieve\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mquery_bundle\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mQueryBundle\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m->\u001b[0m \u001b[0mList\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mNodeWithScore\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 39\u001b[0;31m raise NotImplementedError(\n\u001b[0m\u001b[1;32m 40\u001b[0m \u001b[0;34m\"This query engine does not support retrieve, use query directly\"\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 41\u001b[0m )\n",
"\u001b[0;31mNotImplementedError\u001b[0m: This query engine does not support retrieve, use query directly"
]
}
]
},
{
"cell_type": "markdown",
"source": [
"# HyDE Transform"
],
"metadata": {
"id": "FCdPwVAQ6ixg"
}
},
{
"cell_type": "code",
"source": [
"query_engine = vector_index.as_query_engine()"
],
"metadata": {
"id": "1x6He0T961Kg"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"from llama_index.indices.query.query_transform import HyDEQueryTransform\n",
"from llama_index.query_engine.transform_query_engine import TransformQueryEngine\n",
"\n",
"hyde = HyDEQueryTransform(include_original=True)\n",
"hyde_query_engine = TransformQueryEngine(query_engine, hyde)"
],
"metadata": {
"id": "0GgtfeBC6m0H"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"response = hyde_query_engine.query(\"How many parameters LLaMA2 model has?\")"
],
"metadata": {
"id": "mm3nYnIE6mwl"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"response.response"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 53
},
"id": "PjTJ2poc6mt5",
"outputId": "32fc89c2-474d-4791-e4b0-2a1de262b571"
},
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"'The Llama 2 model is available in four different sizes: 7 billion, 13 billion, 34 billion, and 70 billion parameters.'"
],
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
}
},
"metadata": {},
"execution_count": 86
}
]
},
{
"cell_type": "code",
"source": [
"for src in response.source_nodes:\n",
" print(\"Node ID\\t\", src.node_id)\n",
" print(\"Text\\t\", src.text)\n",
" print(\"Score\\t\", src.score)\n",
" print(\"-_\"*20)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "StgikqWZ6mrl",
"outputId": "f0552af4-524e-444b-b8cb-67a665fad474"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Node ID\t d6f533e5-fef8-469c-a313-def19fd38efe\n",
"Text\t I. Llama 2: Revolutionizing Commercial Use Unlike its predecessor Llama 1, which was limited to research use, Llama 2 represents a major advancement as an open-source commercial model. Businesses can now integrate Llama 2 into products to create AI-powered applications. Availability on Azure and AWS facilitates fine-tuning and adoption. However, restrictions apply to prevent exploitation. Companies with over 700 million active daily users cannot use Llama 2. Additionally, its output cannot be used to improve other language models. II. Llama 2 Model Flavors Llama 2 is available in four different model sizes: 7 billion, 13 billion, 34 billion, and 70 billion parameters. While 7B, 13B, and 70B have already been released, the 34B model is still awaited. The pretrained variant, trained on a whopping 2 trillion tokens, boasts a context window of 4096 tokens, twice the size of its predecessor Llama 1. Meta also released a Llama 2 fine-tuned model for chat applications that was trained on over 1 million human annotations. Such extensive training comes at a cost, with the 70B model taking a staggering 1720320 GPU hours to train. The context window's length determines the amount of content the model can process at once, making Llama 2 a powerful language model in terms of scale and efficiency. III. Safety Considerations: A Top Priority for Meta Meta's commitment to safety and alignment shines through in Llama 2's design. The model demonstrates exceptionally low AI safety violation percentages, surpassing even ChatGPT in safety benchmarks. Finding the right balance between helpfulness and safety when optimizing a model poses significant challenges. While a highly helpful model may be capable of answering any question, including sensitive ones like \"How do I build a bomb?\", it also raises concerns about potential misuse. Thus, striking the perfect equilibrium between providing useful information and ensuring safety is paramount. However, prioritizing safety to an extreme extent can lead to a model that struggles to effectively address a diverse range of questions. This limitation could hinder the model's practical applicability and user experience. Thus, achieving\n",
"Score\t 0.75642628855535\n",
"-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_\n",
"Node ID\t 2f3b7c34-8fd0-4134-af38-ef1b77e32cd8\n",
"Text\t The model demonstrates exceptionally low AI safety violation percentages, surpassing even ChatGPT in safety benchmarks. Finding the right balance between helpfulness and safety when optimizing a model poses significant challenges. While a highly helpful model may be capable of answering any question, including sensitive ones like \"How do I build a bomb?\", it also raises concerns about potential misuse. Thus, striking the perfect equilibrium between providing useful information and ensuring safety is paramount. However, prioritizing safety to an extreme extent can lead to a model that struggles to effectively address a diverse range of questions. This limitation could hinder the model's practical applicability and user experience. Thus, achieving an optimum balance that allows the model to be both helpful and safe is of utmost importance. To strike the right balance between helpfulness and safety, Meta employed two reward models - one for helpfulness and another for safety - to optimize the model's responses. The 34B parameter model has reported higher safety violations than other variants, possibly contributing to the delay in its release. IV. Helpfulness Comparison: Llama 2 Outperforms Competitors Llama 2 emerges as a strong contender in the open-source language model arena, outperforming its competitors in most categories. The 70B parameter model outperforms all other open-source models, while the 7B and 34B models outshine Falcon in all categories and MPT in all categories except coding. Despite being smaller, Llam a2's performance rivals that of Chat GPT 3.5, a significantly larger closed-source model. While GPT 4 and PalM-2-L, with their larger size, outperform Llama 2, this is expected due to their capacity for handling complex language tasks. Llama 2's impressive ability to compete with larger models highlights its efficiency and potential in the market. However, Llama 2 does face challenges in coding and math problems, where models like Chat GPT 4 excel, given their significantly larger size. Chat GPT 4 performed significantly better than Llama 2 for coding (HumanEval benchmark)and math problem tasks (GSM8k benchmark). Open-source AI technologies, like Llama 2, continue to advance, offering\n",
"Score\t 0.7534119645401607\n",
"-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_\n"
]
}
]
},
{
"cell_type": "code",
"source": [
"query_bundle = hyde(\"How many parameters LLaMA2 model has?\")"
],
"metadata": {
"id": "17Jbo1FH6mjH"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"hyde_doc = query_bundle.embedding_strs[0]"
],
"metadata": {
"id": "UZEK63K77W7X"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"hyde_doc"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 214
},
"id": "wyzwkpSn7Yi1",
"outputId": "9b03f8dc-a26e-45e4-eec1-22366bd68dd2"
},
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"\"The LLaMA2 model is a complex machine learning model that is widely used in various fields such as natural language processing and computer vision. It is known for its ability to accurately analyze and understand large amounts of data. When it comes to the number of parameters in the LLaMA2 model, it is important to note that this can vary depending on the specific implementation and configuration. However, in general, the LLaMA2 model typically has a large number of parameters, often in the millions or even billions. These parameters are essential for the model to learn and make predictions based on the input data. They represent the weights and biases that are adjusted during the training process to optimize the model's performance. The high number of parameters in the LLaMA2 model allows it to capture intricate patterns and relationships in the data, leading to more accurate predictions and analysis. However, it also means that training and fine-tuning the model can be computationally intensive and time-consuming. Overall, the LLaMA2 model's large number of parameters is a key factor in its ability to achieve high levels of accuracy and performance in various applications.\""
],
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
}
},
"metadata": {},
"execution_count": 91
}
]
}
]
}