ndn1954 committed on
Commit
5f80bb9
•
1 Parent(s): 307571b

New commit

Files changed (8)
  1. .gitignore +160 -0
  2. LICENSE +339 -0
  3. LLMdocumentchatbot_Colab.ipynb +109 -0
  4. README.md +24 -13
  5. app.py +178 -0
  6. bones.py +277 -0
  7. llamacppmodels.py +309 -0
  8. requirements.txt +14 -0
.gitignore ADDED
@@ -0,0 +1,160 @@
+ # Byte-compiled / optimized / DLL files
+ __pycache__/
+ *.py[cod]
+ *$py.class
+
+ # C extensions
+ *.so
+
+ # Distribution / packaging
+ .Python
+ build/
+ develop-eggs/
+ dist/
+ downloads/
+ eggs/
+ .eggs/
+ lib/
+ lib64/
+ parts/
+ sdist/
+ var/
+ wheels/
+ share/python-wheels/
+ *.egg-info/
+ .installed.cfg
+ *.egg
+ MANIFEST
+
+ # PyInstaller
+ # Usually these files are written by a python script from a template
+ # before PyInstaller builds the exe, so as to inject date/other infos into it.
+ *.manifest
+ *.spec
+
+ # Installer logs
+ pip-log.txt
+ pip-delete-this-directory.txt
+
+ # Unit test / coverage reports
+ htmlcov/
+ .tox/
+ .nox/
+ .coverage
+ .coverage.*
+ .cache
+ nosetests.xml
+ coverage.xml
+ *.cover
+ *.py,cover
+ .hypothesis/
+ .pytest_cache/
+ cover/
+
+ # Translations
+ *.mo
+ *.pot
+
+ # Django stuff:
+ *.log
+ local_settings.py
+ db.sqlite3
+ db.sqlite3-journal
+
+ # Flask stuff:
+ instance/
+ .webassets-cache
+
+ # Scrapy stuff:
+ .scrapy
+
+ # Sphinx documentation
+ docs/_build/
+
+ # PyBuilder
+ .pybuilder/
+ target/
+
+ # Jupyter Notebook
+ .ipynb_checkpoints
+
+ # IPython
+ profile_default/
+ ipython_config.py
+
+ # pyenv
+ # For a library or package, you might want to ignore these files since the code is
+ # intended to run in multiple environments; otherwise, check them in:
+ # .python-version
+
+ # pipenv
+ # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+ # However, in case of collaboration, if having platform-specific dependencies or dependencies
+ # having no cross-platform support, pipenv may install dependencies that don't work, or not
+ # install all needed dependencies.
+ #Pipfile.lock
+
+ # poetry
+ # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
+ # This is especially recommended for binary packages to ensure reproducibility, and is more
+ # commonly ignored for libraries.
+ # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
+ #poetry.lock
+
+ # pdm
+ # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
+ #pdm.lock
+ # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
+ # in version control.
+ # https://pdm.fming.dev/#use-with-ide
+ .pdm.toml
+
+ # PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
+ __pypackages__/
+
+ # Celery stuff
+ celerybeat-schedule
+ celerybeat.pid
+
+ # SageMath parsed files
+ *.sage.py
+
+ # Environments
+ .env
+ .venv
+ env/
+ venv/
+ ENV/
+ env.bak/
+ venv.bak/
+
+ # Spyder project settings
+ .spyderproject
+ .spyproject
+
+ # Rope project settings
+ .ropeproject
+
+ # mkdocs documentation
+ /site
+
+ # mypy
+ .mypy_cache/
+ .dmypy.json
+ dmypy.json
+
+ # Pyre type checker
+ .pyre/
+
+ # pytype static type analyzer
+ .pytype/
+
+ # Cython debug symbols
+ cython_debug/
+
+ # PyCharm
+ # JetBrains specific template is maintained in a separate JetBrains.gitignore that can
+ # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
+ # and can be added to the global gitignore or merged into this file. For a more nuclear
+ # option (not recommended) you can uncomment the following to ignore the entire idea folder.
+ #.idea/
LICENSE ADDED
@@ -0,0 +1,339 @@
+ GNU GENERAL PUBLIC LICENSE
+ Version 2, June 1991
+
+ Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
+ 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+ Preamble
+
+ The licenses for most software are designed to take away your
+ freedom to share and change it. By contrast, the GNU General Public
+ License is intended to guarantee your freedom to share and change free
+ software--to make sure the software is free for all its users. This
+ General Public License applies to most of the Free Software
+ Foundation's software and to any other program whose authors commit to
+ using it. (Some other Free Software Foundation software is covered by
+ the GNU Lesser General Public License instead.) You can apply it to
+ your programs, too.
+
+ When we speak of free software, we are referring to freedom, not
+ price. Our General Public Licenses are designed to make sure that you
+ have the freedom to distribute copies of free software (and charge for
+ this service if you wish), that you receive source code or can get it
+ if you want it, that you can change the software or use pieces of it
+ in new free programs; and that you know you can do these things.
+
+ To protect your rights, we need to make restrictions that forbid
+ anyone to deny you these rights or to ask you to surrender the rights.
+ These restrictions translate to certain responsibilities for you if you
+ distribute copies of the software, or if you modify it.
+
+ For example, if you distribute copies of such a program, whether
+ gratis or for a fee, you must give the recipients all the rights that
+ you have. You must make sure that they, too, receive or can get the
+ source code. And you must show them these terms so they know their
+ rights.
+
+ We protect your rights with two steps: (1) copyright the software, and
+ (2) offer you this license which gives you legal permission to copy,
+ distribute and/or modify the software.
+
+ Also, for each author's protection and ours, we want to make certain
+ that everyone understands that there is no warranty for this free
+ software. If the software is modified by someone else and passed on, we
+ want its recipients to know that what they have is not the original, so
+ that any problems introduced by others will not reflect on the original
+ authors' reputations.
+
+ Finally, any free program is threatened constantly by software
+ patents. We wish to avoid the danger that redistributors of a free
+ program will individually obtain patent licenses, in effect making the
+ program proprietary. To prevent this, we have made it clear that any
+ patent must be licensed for everyone's free use or not licensed at all.
+
+ The precise terms and conditions for copying, distribution and
+ modification follow.
+
+ GNU GENERAL PUBLIC LICENSE
+ TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
+
+ 0. This License applies to any program or other work which contains
+ a notice placed by the copyright holder saying it may be distributed
+ under the terms of this General Public License. The "Program", below,
+ refers to any such program or work, and a "work based on the Program"
+ means either the Program or any derivative work under copyright law:
+ that is to say, a work containing the Program or a portion of it,
+ either verbatim or with modifications and/or translated into another
+ language. (Hereinafter, translation is included without limitation in
+ the term "modification".) Each licensee is addressed as "you".
+
+ Activities other than copying, distribution and modification are not
+ covered by this License; they are outside its scope. The act of
+ running the Program is not restricted, and the output from the Program
+ is covered only if its contents constitute a work based on the
+ Program (independent of having been made by running the Program).
+ Whether that is true depends on what the Program does.
+
+ 1. You may copy and distribute verbatim copies of the Program's
+ source code as you receive it, in any medium, provided that you
+ conspicuously and appropriately publish on each copy an appropriate
+ copyright notice and disclaimer of warranty; keep intact all the
+ notices that refer to this License and to the absence of any warranty;
+ and give any other recipients of the Program a copy of this License
+ along with the Program.
+
+ You may charge a fee for the physical act of transferring a copy, and
+ you may at your option offer warranty protection in exchange for a fee.
+
+ 2. You may modify your copy or copies of the Program or any portion
+ of it, thus forming a work based on the Program, and copy and
+ distribute such modifications or work under the terms of Section 1
+ above, provided that you also meet all of these conditions:
+
+ a) You must cause the modified files to carry prominent notices
+ stating that you changed the files and the date of any change.
+
+ b) You must cause any work that you distribute or publish, that in
+ whole or in part contains or is derived from the Program or any
+ part thereof, to be licensed as a whole at no charge to all third
+ parties under the terms of this License.
+
+ c) If the modified program normally reads commands interactively
+ when run, you must cause it, when started running for such
+ interactive use in the most ordinary way, to print or display an
+ announcement including an appropriate copyright notice and a
+ notice that there is no warranty (or else, saying that you provide
+ a warranty) and that users may redistribute the program under
+ these conditions, and telling the user how to view a copy of this
+ License. (Exception: if the Program itself is interactive but
+ does not normally print such an announcement, your work based on
+ the Program is not required to print an announcement.)
+
+ These requirements apply to the modified work as a whole. If
+ identifiable sections of that work are not derived from the Program,
+ and can be reasonably considered independent and separate works in
+ themselves, then this License, and its terms, do not apply to those
+ sections when you distribute them as separate works. But when you
+ distribute the same sections as part of a whole which is a work based
+ on the Program, the distribution of the whole must be on the terms of
+ this License, whose permissions for other licensees extend to the
+ entire whole, and thus to each and every part regardless of who wrote it.
+
+ Thus, it is not the intent of this section to claim rights or contest
+ your rights to work written entirely by you; rather, the intent is to
+ exercise the right to control the distribution of derivative or
+ collective works based on the Program.
+
+ In addition, mere aggregation of another work not based on the Program
+ with the Program (or with a work based on the Program) on a volume of
+ a storage or distribution medium does not bring the other work under
+ the scope of this License.
+
+ 3. You may copy and distribute the Program (or a work based on it,
+ under Section 2) in object code or executable form under the terms of
+ Sections 1 and 2 above provided that you also do one of the following:
+
+ a) Accompany it with the complete corresponding machine-readable
+ source code, which must be distributed under the terms of Sections
+ 1 and 2 above on a medium customarily used for software interchange; or,
+
+ b) Accompany it with a written offer, valid for at least three
+ years, to give any third party, for a charge no more than your
+ cost of physically performing source distribution, a complete
+ machine-readable copy of the corresponding source code, to be
+ distributed under the terms of Sections 1 and 2 above on a medium
+ customarily used for software interchange; or,
+
+ c) Accompany it with the information you received as to the offer
+ to distribute corresponding source code. (This alternative is
+ allowed only for noncommercial distribution and only if you
+ received the program in object code or executable form with such
+ an offer, in accord with Subsection b above.)
+
+ The source code for a work means the preferred form of the work for
+ making modifications to it. For an executable work, complete source
+ code means all the source code for all modules it contains, plus any
+ associated interface definition files, plus the scripts used to
+ control compilation and installation of the executable. However, as a
+ special exception, the source code distributed need not include
+ anything that is normally distributed (in either source or binary
+ form) with the major components (compiler, kernel, and so on) of the
+ operating system on which the executable runs, unless that component
+ itself accompanies the executable.
+
+ If distribution of executable or object code is made by offering
+ access to copy from a designated place, then offering equivalent
+ access to copy the source code from the same place counts as
+ distribution of the source code, even though third parties are not
+ compelled to copy the source along with the object code.
+
+ 4. You may not copy, modify, sublicense, or distribute the Program
+ except as expressly provided under this License. Any attempt
+ otherwise to copy, modify, sublicense or distribute the Program is
+ void, and will automatically terminate your rights under this License.
+ However, parties who have received copies, or rights, from you under
+ this License will not have their licenses terminated so long as such
+ parties remain in full compliance.
+
+ 5. You are not required to accept this License, since you have not
+ signed it. However, nothing else grants you permission to modify or
+ distribute the Program or its derivative works. These actions are
+ prohibited by law if you do not accept this License. Therefore, by
+ modifying or distributing the Program (or any work based on the
+ Program), you indicate your acceptance of this License to do so, and
+ all its terms and conditions for copying, distributing or modifying
+ the Program or works based on it.
+
+ 6. Each time you redistribute the Program (or any work based on the
+ Program), the recipient automatically receives a license from the
+ original licensor to copy, distribute or modify the Program subject to
+ these terms and conditions. You may not impose any further
+ restrictions on the recipients' exercise of the rights granted herein.
+ You are not responsible for enforcing compliance by third parties to
+ this License.
+
+ 7. If, as a consequence of a court judgment or allegation of patent
+ infringement or for any other reason (not limited to patent issues),
+ conditions are imposed on you (whether by court order, agreement or
+ otherwise) that contradict the conditions of this License, they do not
+ excuse you from the conditions of this License. If you cannot
+ distribute so as to satisfy simultaneously your obligations under this
+ License and any other pertinent obligations, then as a consequence you
+ may not distribute the Program at all. For example, if a patent
+ license would not permit royalty-free redistribution of the Program by
+ all those who receive copies directly or indirectly through you, then
+ the only way you could satisfy both it and this License would be to
+ refrain entirely from distribution of the Program.
+
+ If any portion of this section is held invalid or unenforceable under
+ any particular circumstance, the balance of the section is intended to
+ apply and the section as a whole is intended to apply in other
+ circumstances.
+
+ It is not the purpose of this section to induce you to infringe any
+ patents or other property right claims or to contest validity of any
+ such claims; this section has the sole purpose of protecting the
+ integrity of the free software distribution system, which is
+ implemented by public license practices. Many people have made
+ generous contributions to the wide range of software distributed
+ through that system in reliance on consistent application of that
+ system; it is up to the author/donor to decide if he or she is willing
+ to distribute software through any other system and a licensee cannot
+ impose that choice.
+
+ This section is intended to make thoroughly clear what is believed to
+ be a consequence of the rest of this License.
+
+ 8. If the distribution and/or use of the Program is restricted in
+ certain countries either by patents or by copyrighted interfaces, the
+ original copyright holder who places the Program under this License
+ may add an explicit geographical distribution limitation excluding
+ those countries, so that distribution is permitted only in or among
+ countries not thus excluded. In such case, this License incorporates
+ the limitation as if written in the body of this License.
+
+ 9. The Free Software Foundation may publish revised and/or new versions
+ of the General Public License from time to time. Such new versions will
+ be similar in spirit to the present version, but may differ in detail to
+ address new problems or concerns.
+
+ Each version is given a distinguishing version number. If the Program
+ specifies a version number of this License which applies to it and "any
+ later version", you have the option of following the terms and conditions
+ either of that version or of any later version published by the Free
+ Software Foundation. If the Program does not specify a version number of
+ this License, you may choose any version ever published by the Free Software
+ Foundation.
+
+ 10. If you wish to incorporate parts of the Program into other free
+ programs whose distribution conditions are different, write to the author
+ to ask for permission. For software which is copyrighted by the Free
+ Software Foundation, write to the Free Software Foundation; we sometimes
+ make exceptions for this. Our decision will be guided by the two goals
+ of preserving the free status of all derivatives of our free software and
+ of promoting the sharing and reuse of software generally.
+
+ NO WARRANTY
+
+ 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
+ FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
+ OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
+ PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
+ OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
+ TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
+ PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
+ REPAIR OR CORRECTION.
+
+ 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+ WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
+ REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
+ INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
+ OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
+ TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
+ YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
+ PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
+ POSSIBILITY OF SUCH DAMAGES.
+
+ END OF TERMS AND CONDITIONS
+
+ How to Apply These Terms to Your New Programs
+
+ If you develop a new program, and you want it to be of the greatest
+ possible use to the public, the best way to achieve this is to make it
+ free software which everyone can redistribute and change under these terms.
+
+ To do so, attach the following notices to the program. It is safest
+ to attach them to the start of each source file to most effectively
+ convey the exclusion of warranty; and each file should have at least
+ the "copyright" line and a pointer to where the full notice is found.
+
+ <one line to give the program's name and a brief idea of what it does.>
+ Copyright (C) <year> <name of author>
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License along
+ with this program; if not, write to the Free Software Foundation, Inc.,
+ 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+
+ Also add information on how to contact you by electronic and paper mail.
+
+ If the program is interactive, make it output a short notice like this
+ when it starts in an interactive mode:
+
+ Gnomovision version 69, Copyright (C) year name of author
+ Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+ This is free software, and you are welcome to redistribute it
+ under certain conditions; type `show c' for details.
+
+ The hypothetical commands `show w' and `show c' should show the appropriate
+ parts of the General Public License. Of course, the commands you use may
+ be called something other than `show w' and `show c'; they could even be
+ mouse-clicks or menu items--whatever suits your program.
+
+ You should also get your employer (if you work as a programmer) or your
+ school, if any, to sign a "copyright disclaimer" for the program, if
+ necessary. Here is a sample; alter the names:
+
+ Yoyodyne, Inc., hereby disclaims all copyright interest in the program
+ `Gnomovision' (which makes passes at compilers) written by James Hacker.
+
+ <signature of Ty Coon>, 1 April 1989
+ Ty Coon, President of Vice
+
+ This General Public License does not permit incorporating your program into
+ proprietary programs. If your program is a subroutine library, you may
+ consider it more useful to permit linking proprietary applications with the
+ library. If this is what you want to do, use the GNU Lesser General
+ Public License instead of this License.
LLMdocumentchatbot_Colab.ipynb ADDED
@@ -0,0 +1,109 @@
+ {
+   "cells": [
+     {
+       "cell_type": "markdown",
+       "metadata": {
+         "colab_type": "text",
+         "id": "view-in-github"
+       },
+       "source": [
+         "<a href=\"https://colab.research.google.com/github/R3gm/ConversaDocs/blob/main/ConversaDocs_Colab.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+       ]
+     },
+     {
+       "cell_type": "markdown",
+       "metadata": {
+         "id": "EnzlcRZycXnr"
+       },
+       "source": [
+         "# LLMDocumentChatBot\n",
+         "\n",
+         "`Chat with your documents using Llama 2, Falcon or OpenAI`\n",
+         "\n",
+         "- You can upload multiple documents at once to a single database.\n",
+         "- Every time a new database is created, the previous one is deleted.\n",
+         "- For maximum privacy, you can click \"Load LLAMA GGML Model\" to use a Llama 2 model. By default, the model llama-2_7B-Chat is loaded.\n",
+         "\n",
+         "Program that enables seamless interaction with your documents through an advanced vector database and the power of Large Language Model (LLM) technology.\n",
+         "\n",
+         "| Description | Link |\n",
+         "| ----------- | ---- |\n",
+         "| 📙 Colab Notebook | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ndn1954/llmdocumentchatbot/blob/main/LLMdocumentchatbot_Colab.ipynb) |\n",
+         "| 🎉 Repository | [![GitHub Repository](https://img.shields.io/badge/GitHub-Repository-black?style=flat-square&logo=github)](https://github.com/ndn1954/llmdocumentchatbot/) |\n",
+         "| 🚀 Online Demo | [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/ndn1954/llmdocumentchatbot) |\n",
+         "\n"
+       ]
+     },
+     {
+       "cell_type": "code",
+       "execution_count": null,
+       "metadata": {
+         "id": "S5awiNy-A50W"
+       },
+       "outputs": [],
+       "source": [
+         "!git clone https://github.com/ndn1954/llmdocumentchatbot.git\n",
+         "%cd llmdocumentchatbot\n",
+         "!pip install -r requirements.txt\n",
+         "\n",
+         "import torch\n",
+         "import os\n",
+         "print(\"Wait until the cell finishes executing\")\n",
+         "if torch.cuda.is_available():\n",
+         "    print(\"CUDA is available on this system.\")\n",
+         "    os.system('CMAKE_ARGS=\"-DLLAMA_CUBLAS=on\" FORCE_CMAKE=1 pip install llama-cpp-python --force-reinstall --upgrade --no-cache-dir --verbose')\n",
+         "else:\n",
+         "    print(\"CUDA is not available on this system.\")\n",
+         "    os.system('pip install llama-cpp-python')"
+       ]
+     },
+     {
+       "cell_type": "markdown",
+       "metadata": {
+         "id": "jLfxiOyMEcGF"
+       },
+       "source": [
+         "`RESTART THE RUNTIME` before executing the next cell."
+       ]
+     },
+     {
+       "cell_type": "code",
+       "execution_count": null,
+       "metadata": {
+         "id": "2F2VGAJtEbb3"
+       },
+       "outputs": [],
+       "source": [
+         "%cd /content/llmdocumentchatbot\n",
+         "!python app.py"
+       ]
+     },
+     {
+       "cell_type": "markdown",
+       "metadata": {
+         "id": "3aEEcmchZIlf"
+       },
+       "source": [
+         "Open the `public URL` when it appears"
+       ]
+     }
+   ],
+   "metadata": {
+     "accelerator": "GPU",
+     "colab": {
+       "authorship_tag": "ABX9TyMoq/QuUmy+xrGmEAesfDhp",
+       "gpuType": "T4",
+       "include_colab_link": true,
+       "provenance": []
+     },
+     "kernelspec": {
+       "display_name": "Python 3",
+       "name": "python3"
+     },
+     "language_info": {
+       "name": "python"
+     }
+   },
+   "nbformat": 4,
+   "nbformat_minor": 0
+ }
README.md CHANGED
@@ -1,13 +1,24 @@
- ---
- title: Llmdocumentchatbot
- emoji: 🐢
- colorFrom: yellow
- colorTo: purple
- sdk: gradio
- sdk_version: 3.40.1
- app_file: app.py
- pinned: false
- license: other
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # ConversaDocs
+
+ ## 🦙 Chat with your documents using Llama 2, Falcon 🦅 or OpenAI 🤖
+
+ - 🔒 By default, the model llama2_7B-Chat is loaded for maximum privacy.
+ - 📚 You can upload multiple documents at once to a single database.
+ - ♻️ Every time a new database is created, the previous one is deleted.
+ - This project has been created from https://github.com/R3gm/ConversaDocs/
+
+ Program that enables seamless interaction with your documents through an advanced vector database and the power of Large Language Model (LLM) technology.
+
+ | Description | Link |
+ | ----------- | ---- |
+ | 📙 Colab Notebook | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ndn1954/llmdocumentchatbot/blob/main/LLMdocumentchatbot_Colab.ipynb) |
+ | 🎉 Repository | [![GitHub Repository](https://img.shields.io/badge/GitHub-Repository-black?style=flat-square&logo=github)](https://github.com/ndn1954/llmdocumentchatbot/) |
+ | 🚀 Online inference HF | [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/ndn1954/llmdocumentchatbot) |
+
+ <!--### Chat with different formats of documents.
+ ![image](https://github.com/R3gm/ConversaDocs/assets/114810545/1c6c426f-c144-442e-b71a-07867bdf68d3)
+
+ ### Summarize.
+
+ ![image](https://github.com/R3gm/ConversaDocs/assets/114810545/2ded3f4e-a9d6-44db-b73b-b81c0abeeebb)-->
+
app.py ADDED
@@ -0,0 +1,178 @@
+ import torch
+ import os
+ try:
+     from llama_cpp import Llama
+ except ImportError:
+     if torch.cuda.is_available():
+         print("CUDA is available on this system.")
+         os.system('CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python --force-reinstall --upgrade --no-cache-dir --verbose')
+     else:
+         print("CUDA is not available on this system.")
+         os.system('pip install llama-cpp-python')
+
+ import gradio as gr
+ from langchain.embeddings.openai import OpenAIEmbeddings
+ from langchain.text_splitter import CharacterTextSplitter, RecursiveCharacterTextSplitter
+ from langchain.vectorstores import DocArrayInMemorySearch
+ from langchain.chains import RetrievalQA, ConversationalRetrievalChain
+ from langchain.memory import ConversationBufferMemory
+ from langchain.chat_models import ChatOpenAI
+ from langchain.embeddings import HuggingFaceEmbeddings
+ from langchain import HuggingFaceHub
+ from langchain.llms import LlamaCpp
+ from huggingface_hub import hf_hub_download
+ from langchain.document_loaders import (
+     EverNoteLoader,
+     TextLoader,
+     UnstructuredEPubLoader,
+     UnstructuredHTMLLoader,
+     UnstructuredMarkdownLoader,
+     UnstructuredODTLoader,
+     UnstructuredPowerPointLoader,
+     UnstructuredWordDocumentLoader,
+     PyPDFLoader,
+ )
+ import param
+ from bones import DocChat
+
+ dc = DocChat()
+
+ ##### GRADIO CONFIG ####
+
+ css = """
+ #col-container {max-width: 1500px; margin-left: auto; margin-right: auto;}
+ """
+
+ title = """
+ <div style="text-align: center;max-width: 1500px;">
+ <h2>Chat with Documents 📚 - Falcon, Llama-2 and OpenAI</h2>
+ <p style="text-align: center;">Upload txt, pdf, doc, docx, enex, epub, html, md, odt, ppt and pptx.
+ Wait for the status to show the loaded documents, then start typing your questions. Official Repository <a href="https://github.com/ndn1954/llmdocumentchatbot">llmdocumentchatbot</a>.<br /></p>
+ </div>
+ """
+
+ description = """
+ # Application Information
+
+ - Notebook to run llmdocumentchatbot in Colab [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ndn1954/llmdocumentchatbot/blob/main/LLMdocumentchatbot_Colab.ipynb)
+
+ - Official Repository [![a](https://img.shields.io/badge/GitHub-Repository-black?style=flat-square&logo=github)](https://github.com/ndn1954/llmdocumentchatbot/)
+
+ - You can upload multiple documents at once to a single database.
+
+ - Every time a new database is created, the previous one is deleted.
+
+ - For maximum privacy, you can click "Load LLAMA GGML Model" to use a Llama 2 model. By default, the model llama-2_7B-Chat is loaded.
+
+ - This application works on both CPU and GPU. For fast inference with GGML models, use the GPU.
+
+ - For more information about what GGML models are, you can visit this notebook [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ndn1954/llmdocumentchatbot/blob/main/LLMdocumentchatbot_Colab.ipynb)
+
+ """
+
+ theme = 'aliabid94/new-theme'
+
+ def flag():
+     return "PROCESSING..."
+
+ def upload_file(files, max_docs):
+     file_paths = [file.name for file in files]
+     return dc.call_load_db(file_paths, max_docs)
+
+ def predict(message, chat_history, max_k, check_memory):
+     print(message)
+     bot_message = dc.convchain(message, max_k, check_memory)
+     print(bot_message)
+     return "", dc.get_chats()
+
+ def convert():
+     docs = dc.get_sources()
+     data_docs = ""
+     for i in range(0, len(docs), 2):
+         txt = docs[i][1].replace("\n", "<br>")
+         sc = "Archive: " + docs[i+1][1]["source"]
+         try:
+             pg = "Page: " + str(docs[i+1][1]["page"])
+         except Exception:
+             pg = "Document Data"
+         data_docs += f"<hr><h3 style='color:red;'>{pg}</h3><p>{txt}</p><p>{sc}</p>"
+     return data_docs
+
+ def clear_api_key(api_key):
+     return 'api_key...', dc.openai_model(api_key)
+
+ # Max values in generation
+ DOC_DB_LIMIT = 5
+ MAX_NEW_TOKENS = 2048
+
+ # Limits for the HF demo; no need to set DEMO elsewhere
+ if "SET_LIMIT" == os.getenv("DEMO"):
+     DOC_DB_LIMIT = 4
+     MAX_NEW_TOKENS = 32
+
+ with gr.Blocks(theme=theme, css=css) as demo:
+     with gr.Tab("Chat"):
+
+         with gr.Column():
+             gr.HTML(title)
+             upload_button = gr.UploadButton("Click to Upload Files", file_count="multiple")
+             file_output = gr.HTML()
+
+             chatbot = gr.Chatbot([], elem_id="chatbot") #.style(height=300)
+             msg = gr.Textbox(label="Question", placeholder="Type your question and hit Enter ")
+             with gr.Row():
+                 check_memory = gr.inputs.Checkbox(label="Remember previous messages")
+                 clear_button = gr.Button("CLEAR CHAT HISTORY")
+                 max_docs = gr.inputs.Slider(1, DOC_DB_LIMIT, default=3, label="Maximum queries to the DB.", step=1)
+
+         with gr.Column():
+             link_output = gr.HTML("")
+             sou = gr.HTML("")
+
+         clear_button.click(flag, [], [link_output]).then(dc.clr_history, [], [link_output]).then(lambda: None, None, chatbot, queue=False)
+         upload_button.upload(flag, [], [file_output]).then(upload_file, [upload_button, max_docs], file_output).then(dc.clr_history, [], [link_output])
+
+     with gr.Tab("Experimental Summarization"):
+         default_model = gr.HTML("<hr>From DB<br>It may take approximately 5 minutes to complete 15 pages on GPU. Please use files with fewer pages if you want to use summarization.<br>")
+         summarize_button = gr.Button("Start summarization")
+
+         summarize_verify = gr.HTML(" ")
+         summarize_button.click(dc.summarize, [], [summarize_verify])
+
+     with gr.Tab("Config llama-2 model"):
+         gr.HTML("<h3>Only models from the GGML library are accepted. To apply the new configurations, please reload the model.</h3>")
+         repo_ = gr.Textbox(label="Repository", value="TheBloke/Llama-2-7B-Chat-GGML")
+         file_ = gr.Textbox(label="File name", value="llama-2-7b-chat.ggmlv3.q5_1.bin")
+         max_tokens = gr.inputs.Slider(1, 2048, default=256, label="Max new tokens", step=1)
+         temperature = gr.inputs.Slider(0.1, 1., default=0.2, label="Temperature", step=0.1)
+         top_p = gr.inputs.Slider(0.01, 1., default=0.95, label="Top P", step=0.01)
+         top_k = gr.inputs.Slider(0, 100, default=50, label="Top K", step=1)
+         repeat_penalty = gr.inputs.Slider(0.1, 100., default=1.2, label="Repeat penalty", step=0.1)
+         change_model_button = gr.Button("Load Llama GGML Model")
+
+         model_verify_ggml = gr.HTML("Loaded model Llama-2")
+
+     with gr.Tab("API Models"):
+
+         default_model = gr.HTML("<hr><h2>Falcon Model</h2>")
+         hf_key = gr.Textbox(label="HF TOKEN", value="token...")
+         falcon_button = gr.Button("Load FALCON 7B-Instruct")
+
+         openai_gpt_model = gr.HTML("<hr><h2>OpenAI Model gpt-3.5-turbo</h2>")
+         api_key = gr.Textbox(label="API KEY", value="api_key...")
+         openai_button = gr.Button("Load gpt-3.5-turbo")
+
+         line_ = gr.HTML("<hr>")
+         model_verify = gr.HTML(" ")
+
+     with gr.Tab("Help"):
+         description_md = gr.Markdown(description)
+
+     msg.submit(predict, [msg, chatbot, max_docs, check_memory], [msg, chatbot]).then(convert, [], [sou])
+
+     change_model_button.click(dc.change_llm, [repo_, file_, max_tokens, temperature, top_p, top_k, repeat_penalty, max_docs], [model_verify_ggml])
+
+     falcon_button.click(dc.default_falcon_model, [hf_key], [model_verify])
+     openai_button.click(clear_api_key, [api_key], [api_key, model_verify])
+
+ demo.launch(debug=True, share=True, enable_queue=True)
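
For orientation, here is a minimal headless sketch of how app.py drives `DocChat`, using only the calls made by the handlers above. The PDF name is illustrative and the file must exist on disk, otherwise `call_load_db` returns "No file loaded":

```python
# Hedged sketch: exercising DocChat without the Gradio UI.
from bones import DocChat

dc = DocChat()  # loads demo_docs/demo.txt and the default Llama-2 7B GGML model
print(dc.call_load_db(["my_paper.pdf"], 3))              # rebuild the vector DB, retrieving k=3 chunks
print(dc.convchain("What is the main topic?", 3, True))  # ask a question; True keeps chat memory
```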
bones.py ADDED
@@ -0,0 +1,277 @@
+ import gradio as gr
+ from langchain.embeddings.openai import OpenAIEmbeddings
+ from langchain.text_splitter import CharacterTextSplitter, RecursiveCharacterTextSplitter
+ from langchain.vectorstores import DocArrayInMemorySearch
+ from langchain.chains import RetrievalQA, ConversationalRetrievalChain
+ from langchain.memory import ConversationBufferMemory
+ from langchain.chat_models import ChatOpenAI
+ from langchain.embeddings import HuggingFaceEmbeddings
+ from langchain import HuggingFaceHub
+ from llamacppmodels import LlamaCpp  # from langchain.llms import LlamaCpp
+ from huggingface_hub import hf_hub_download
+ import param
+ import os
+ import torch
+ from langchain.document_loaders import (
+     EverNoteLoader,
+     TextLoader,
+     UnstructuredEPubLoader,
+     UnstructuredHTMLLoader,
+     UnstructuredMarkdownLoader,
+     UnstructuredODTLoader,
+     UnstructuredPowerPointLoader,
+     UnstructuredWordDocumentLoader,
+     PyPDFLoader,
+ )
+ import gc
+ gc.collect()
+ torch.cuda.empty_cache()
+
+ #YOUR_HF_TOKEN = os.getenv("My_hf_token")
+
+ EXTENSIONS = {
+     ".txt": (TextLoader, {"encoding": "utf8"}),
+     ".pdf": (PyPDFLoader, {}),
+     ".doc": (UnstructuredWordDocumentLoader, {}),
+     ".docx": (UnstructuredWordDocumentLoader, {}),
+     ".enex": (EverNoteLoader, {}),
+     ".epub": (UnstructuredEPubLoader, {}),
+     ".html": (UnstructuredHTMLLoader, {}),
+     ".md": (UnstructuredMarkdownLoader, {}),
+     ".odt": (UnstructuredODTLoader, {}),
+     ".ppt": (UnstructuredPowerPointLoader, {}),
+     ".pptx": (UnstructuredPowerPointLoader, {}),
+ }
+
+ # build the vector database from a list of file paths
+ def load_db(files):
+
+     # select the loader for each extension
+     documents = []
+     for file in files:
+         ext = "." + file.rsplit(".", 1)[-1]
+         if ext in EXTENSIONS:
+             loader_class, loader_args = EXTENSIONS[ext]
+             loader = loader_class(file, **loader_args)
+             documents.extend(loader.load())
+         else:
+             pass  # silently skip unsupported extensions
+
+     # fall back to the demo document if nothing was loaded
+     if not documents:
+         loader_class, loader_args = EXTENSIONS['.txt']
+         loader = loader_class('demo_docs/demo.txt', **loader_args)
+         documents = loader.load()
+
+     # split documents
+     text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=150)
+     docs = text_splitter.split_documents(documents)
+
+     # define embedding
+     embeddings = HuggingFaceEmbeddings(model_name='all-MiniLM-L6-v2') # all-mpnet-base-v2 #embeddings = OpenAIEmbeddings()
+
+     # create vector database from data
+     db = DocArrayInMemorySearch.from_documents(docs, embeddings)
+     return db
+
+ def q_a(db, chain_type="stuff", k=3, llm=None):
+     retriever = db.as_retriever(search_type="similarity", search_kwargs={"k": k})
+     # create a chatbot chain. Memory is managed externally.
+     qa = ConversationalRetrievalChain.from_llm(
+         llm=llm,
+         chain_type=chain_type,
+         retriever=retriever,
+         return_source_documents=True,
+         return_generated_question=True,
+     )
+     return qa
+
+
+
+ class DocChat(param.Parameterized):
+     chat_history = param.List([])
+     answer = param.String("")
+     db_query = param.String("")
+     db_response = param.List([])
+     k_value = param.Integer(3)
+     llm = None
+
+     def __init__(self, **params):
+         super(DocChat, self).__init__(**params)
+         self.loaded_file = ["demo_docs/demo.txt"]
+         self.db = load_db(self.loaded_file)
+         self.change_llm("TheBloke/Llama-2-7B-Chat-GGML", "llama-2-7b-chat.ggmlv3.q5_1.bin", max_tokens=256, temperature=0.2, top_p=0.95, top_k=50, repeat_penalty=1.2, k=3)
+         self.qa = q_a(self.db, "stuff", self.k_value, self.llm)
+
+
+     def call_load_db(self, path_file, k):
+         if not os.path.exists(path_file[0]): # init or no file specified
+             return "No file loaded"
+         else:
+             try:
+                 self.db = load_db(path_file)
+                 self.loaded_file = path_file
+                 self.qa = q_a(self.db, "stuff", k, self.llm)
+                 self.k_value = k
+                 #self.clr_history()
+                 return f"New DB created and history cleared | Loaded File: {self.loaded_file}"
+             except Exception:
+                 return 'No valid file'
+
+
+     # chat
+     def convchain(self, query, k_max, recall_previous_messages):
+         if k_max != self.k_value:
+             print("Maximum queries changed")
+             self.qa = q_a(self.db, "stuff", k_max, self.llm)
+             self.k_value = k_max
+
+         if not recall_previous_messages:
+             self.clr_history()
+
+         try:
+             result = self.qa({"question": query, "chat_history": self.chat_history})
+         except Exception:
+             print("Could not get a response from the model; reloading default llama-2 7B config")
+             self.change_llm("TheBloke/Llama-2-7B-Chat-GGML", "llama-2-7b-chat.ggmlv3.q5_1.bin", max_tokens=256, temperature=0.2, top_p=0.95, top_k=50, repeat_penalty=1.2, k=3)
+             self.qa = q_a(self.db, "stuff", k_max, self.llm)
+             result = self.qa({"question": query, "chat_history": self.chat_history})
+
+         self.chat_history.extend([(query, result["answer"])])
+         self.db_query = result["generated_question"]
+         self.db_response = result["source_documents"]
+         self.answer = result['answer']
+         return self.answer
+
+     def summarize(self, chunk_size=2000, chunk_overlap=100):
+         # load docs
+         documents = []
+         for file in self.loaded_file:
+             ext = "." + file.rsplit(".", 1)[-1]
+             if ext in EXTENSIONS:
+                 loader_class, loader_args = EXTENSIONS[ext]
+                 loader = loader_class(file, **loader_args)
+                 documents.extend(loader.load_and_split())
+
+         if not documents:
+             return "Error in summarization"
+
+         # split documents
+         text_splitter = RecursiveCharacterTextSplitter(
+             chunk_size=chunk_size,
+             chunk_overlap=chunk_overlap,
+             separators=["\n\n", "\n", "(?<=\. )", " ", ""]
+         )
+         docs = text_splitter.split_documents(documents)
+         # summarize
+         from langchain.chains.summarize import load_summarize_chain
+         chain = load_summarize_chain(self.llm, chain_type='map_reduce', verbose=True)
+         return chain.run(docs)
+
+     def change_llm(self, repo_, file_, max_tokens=256, temperature=0.2, top_p=0.95, top_k=50, repeat_penalty=1.2, k=3):
+
+         if torch.cuda.is_available():
+             try:
+                 model_path = hf_hub_download(repo_id=repo_, filename=file_)
+
+                 self.qa = None
+                 self.llm = None
+                 gc.collect()
+                 torch.cuda.empty_cache()
+                 gpu_llm_layers = 35 if '70B' not in repo_.upper() else 25 # fix for 70B
+
+                 self.llm = LlamaCpp(
+                     model_path=model_path,
+                     n_ctx=4096,
+                     n_batch=512,
+                     n_gpu_layers=gpu_llm_layers,
+                     max_tokens=max_tokens,
+                     verbose=False,
+                     temperature=temperature,
+                     top_p=top_p,
+                     top_k=top_k,
+                     repeat_penalty=repeat_penalty,
+                 )
+                 self.qa = q_a(self.db, "stuff", k, self.llm)
+                 self.k_value = k
+                 return f"Loaded {file_} [GPU INFERENCE]"
+             except Exception:
+                 self.change_llm("TheBloke/Llama-2-7B-Chat-GGML", "llama-2-7b-chat.ggmlv3.q5_1.bin", max_tokens=256, temperature=0.2, top_p=0.95, top_k=50, repeat_penalty=1.2, k=3)
+                 return "No valid model | Reloaded default llama-2 7B config"
+         else:
+             try:
+                 model_path = hf_hub_download(repo_id=repo_, filename=file_)
+
+                 self.qa = None
+                 self.llm = None
+                 gc.collect()
+                 torch.cuda.empty_cache()
+
+                 self.llm = LlamaCpp(
+                     model_path=model_path,
+                     n_ctx=2048,
+                     n_batch=8,
+                     max_tokens=max_tokens,
+                     verbose=False,
+                     temperature=temperature,
+                     top_p=top_p,
+                     top_k=top_k,
+                     repeat_penalty=repeat_penalty,
+                 )
+                 self.qa = q_a(self.db, "stuff", k, self.llm)
+                 self.k_value = k
+                 return f"Loaded {file_} [CPU INFERENCE SLOW]"
+             except Exception:
+                 self.change_llm("TheBloke/Llama-2-7B-Chat-GGML", "llama-2-7b-chat.ggmlv3.q5_1.bin", max_tokens=256, temperature=0.2, top_p=0.95, top_k=50, repeat_penalty=1.2, k=3)
+                 return "No valid model | Reloaded default llama-2 7B config"
+
+     def default_falcon_model(self, HF_TOKEN):
+         self.llm = HuggingFaceHub(
+             huggingfacehub_api_token=HF_TOKEN,
+             repo_id="tiiuae/falcon-7b-instruct",
+             model_kwargs={
+                 "temperature": 0.2,
+                 "max_new_tokens": 500,
+                 "top_k": 50,
+                 "top_p": 0.95,
+                 "repetition_penalty": 1.2,
+             },
+         )
+         self.qa = q_a(self.db, "stuff", self.k_value, self.llm)
+         return "Loaded model Falcon 7B-instruct [API FAST INFERENCE]"
+
+     def openai_model(self, API_KEY):
+         self.llm = ChatOpenAI(temperature=0, openai_api_key=API_KEY, model_name='gpt-3.5-turbo')
+         self.qa = q_a(self.db, "stuff", self.k_value, self.llm)
+         API_KEY = ""
+         return "Loaded model OpenAI gpt-3.5-turbo [API FAST INFERENCE]"
+
+     @param.depends('db_query')
+     def get_lquest(self):
+         if not self.db_query:
+             return print("Last question to DB: no DB accesses so far")
+         return self.db_query
+
+     @param.depends('db_response')
+     def get_sources(self):
+         if not self.db_response:
+             return
+         #rlist=[f"Result of DB lookup:"]
+         rlist = []
+         for doc in self.db_response:
+             for element in doc:
+                 rlist.append(element)
+         return rlist
+
+     @param.depends('convchain', 'clr_history')
+     def get_chats(self):
+         if not self.chat_history:
+             return "No History Yet"
+         #rlist=[f"Current Chat History variable"]
+         rlist = []
+         for exchange in self.chat_history:
+             rlist.append(exchange)
+         return rlist
+
+     def clr_history(self, count=0):
+         self.chat_history = []
+         return "HISTORY CLEARED"
llamacppmodels.py ADDED
@@ -0,0 +1,309 @@
+ import logging
+ from typing import Any, Dict, Iterator, List, Optional
+
+ from pydantic import Field, root_validator
+
+ from langchain.callbacks.manager import CallbackManagerForLLMRun
+ from langchain.llms.base import LLM
+ from langchain.schema.output import GenerationChunk
+
+ logger = logging.getLogger(__name__)
+
+
+ class LlamaCpp(LLM):
+     """llama.cpp model.
+
+     To use, you should have the llama-cpp-python library installed, and provide the
+     path to the Llama model as a named parameter to the constructor.
+     Check out: https://github.com/abetlen/llama-cpp-python
+
+     Example:
+         .. code-block:: python
+
+             from langchain.llms import LlamaCpp
+             llm = LlamaCpp(model_path="/path/to/llama/model")
+     """
+
+     client: Any  #: :meta private:
+     model_path: str
+     """The path to the Llama model file."""
+
+     lora_base: Optional[str] = None
+     """The path to the Llama LoRA base model."""
+
+     lora_path: Optional[str] = None
+     """The path to the Llama LoRA. If None, no LoRa is loaded."""
+
+     n_ctx: int = Field(512, alias="n_ctx")
+     """Token context window."""
+
+     n_parts: int = Field(-1, alias="n_parts")
+     """Number of parts to split the model into.
+     If -1, the number of parts is automatically determined."""
+
+     seed: int = Field(-1, alias="seed")
+     """Seed. If -1, a random seed is used."""
+
+     f16_kv: bool = Field(True, alias="f16_kv")
+     """Use half-precision for key/value cache."""
+
+     logits_all: bool = Field(False, alias="logits_all")
+     """Return logits for all tokens, not just the last token."""
+
+     vocab_only: bool = Field(False, alias="vocab_only")
+     """Only load the vocabulary, no weights."""
+
+     use_mlock: bool = Field(False, alias="use_mlock")
+     """Force system to keep model in RAM."""
+
+     n_threads: Optional[int] = Field(None, alias="n_threads")
+     """Number of threads to use.
+     If None, the number of threads is automatically determined."""
+
+     n_batch: Optional[int] = Field(8, alias="n_batch")
+     """Number of tokens to process in parallel.
+     Should be a number between 1 and n_ctx."""
+
+     n_gpu_layers: Optional[int] = Field(None, alias="n_gpu_layers")
+     """Number of layers to be loaded into gpu memory. Default None."""
+
+     suffix: Optional[str] = Field(None)
+     """A suffix to append to the generated text. If None, no suffix is appended."""
+
+     max_tokens: Optional[int] = 256
+     """The maximum number of tokens to generate."""
+
+     temperature: Optional[float] = 0.8
+     """The temperature to use for sampling."""
+
+     top_p: Optional[float] = 0.95
+     """The top-p value to use for sampling."""
+
+     logprobs: Optional[int] = Field(None)
+     """The number of logprobs to return. If None, no logprobs are returned."""
+
+     echo: Optional[bool] = False
+     """Whether to echo the prompt."""
+
+     stop: Optional[List[str]] = []
+     """A list of strings to stop generation when encountered."""
+
+     repeat_penalty: Optional[float] = 1.1
+     """The penalty to apply to repeated tokens."""
+
+     top_k: Optional[int] = 40
+     """The top-k value to use for sampling."""
+
+     last_n_tokens_size: Optional[int] = 64
+     """The number of tokens to look back when applying the repeat_penalty."""
+
+     use_mmap: Optional[bool] = True
+     """Whether to keep the model loaded in RAM"""
+
+     rope_freq_scale: float = 1.0
+     """Scale factor for rope sampling."""
+
+     rope_freq_base: float = 10000.0
+     """Base frequency for rope sampling."""
+
+     streaming: bool = True
+     """Whether to stream the results, token by token."""
+
+     verbose: bool = True
+     """Print verbose output to stderr."""
+
+     n_gqa: Optional[int] = None
+
+     @root_validator()
+     def validate_environment(cls, values: Dict) -> Dict:
+         """Validate that the llama-cpp-python library is installed."""
+         model_path = values["model_path"]
+         model_param_names = [
+             "n_gqa",
+             "rope_freq_scale",
+             "rope_freq_base",
+             "lora_path",
+             "lora_base",
+             "n_ctx",
+             "n_parts",
+             "seed",
+             "f16_kv",
+             "logits_all",
+             "vocab_only",
+             "use_mlock",
+             "n_threads",
+             "n_batch",
+             "use_mmap",
+             "last_n_tokens_size",
+             "verbose",
+         ]
+         model_params = {k: values[k] for k in model_param_names}
+
+         model_params['n_gqa'] = 8 if '70B' in model_path.upper() else None  # (TEMPORARY) must be 8 for llama2 70b
+         # For backwards compatibility, only include if non-null.
+         if values["n_gpu_layers"] is not None:
+             model_params["n_gpu_layers"] = values["n_gpu_layers"]
+
+         try:
+             from llama_cpp import Llama
+
+             values["client"] = Llama(model_path, **model_params)
+         except ImportError:
+             raise ImportError(
+                 "Could not import llama-cpp-python library. "
+                 "Please install the llama-cpp-python library to "
+                 "use this model: pip install llama-cpp-python"
+             )
+         except Exception as e:
+             raise ValueError(
+                 f"Could not load Llama model from path: {model_path}. "
+                 f"Received error {e}"
+             )
+
+         return values
+
+     @property
+     def _default_params(self) -> Dict[str, Any]:
+         """Get the default parameters for calling llama_cpp."""
+         return {
+             "suffix": self.suffix,
+             "max_tokens": self.max_tokens,
+             "temperature": self.temperature,
+             "top_p": self.top_p,
+             "logprobs": self.logprobs,
+             "echo": self.echo,
+             "stop_sequences": self.stop,  # key here is convention among LLM classes
+             "repeat_penalty": self.repeat_penalty,
+             "top_k": self.top_k,
+         }
+
+     @property
+     def _identifying_params(self) -> Dict[str, Any]:
+         """Get the identifying parameters."""
+         return {**{"model_path": self.model_path}, **self._default_params}
+
+     @property
+     def _llm_type(self) -> str:
+         """Return type of llm."""
+         return "llamacpp"
+
+     def _get_parameters(self, stop: Optional[List[str]] = None) -> Dict[str, Any]:
+         """
+         Performs sanity check, preparing parameters in format needed by llama_cpp.
+
+         Args:
+             stop (Optional[List[str]]): List of stop sequences for llama_cpp.
+
+         Returns:
+             Dictionary containing the combined parameters.
+         """
+
+         # Raise error if stop sequences are in both input and default params
+         if self.stop and stop is not None:
+             raise ValueError("`stop` found in both the input and default params.")
+
+         params = self._default_params
+
+         # llama_cpp expects the "stop" key not this, so we remove it:
+         params.pop("stop_sequences")
+
+         # then sets it as configured, or default to an empty list:
+         params["stop"] = self.stop or stop or []
+
+         return params
+
+     def _call(
+         self,
+         prompt: str,
+         stop: Optional[List[str]] = None,
+         run_manager: Optional[CallbackManagerForLLMRun] = None,
+         **kwargs: Any,
+     ) -> str:
+         """Call the Llama model and return the output.
+
+         Args:
+             prompt: The prompt to use for generation.
+             stop: A list of strings to stop generation when encountered.
+
+         Returns:
+             The generated text.
+
+         Example:
+             .. code-block:: python
+
+                 from langchain.llms import LlamaCpp
+                 llm = LlamaCpp(model_path="/path/to/local/llama/model.bin")
+                 llm("This is a prompt.")
+         """
+         if self.streaming:
+             # If streaming is enabled, we use the stream
+             # method that yields as they are generated
+             # and return the combined strings from the first choice's text:
+             combined_text_output = ""
+             for chunk in self._stream(
+                 prompt=prompt, stop=stop, run_manager=run_manager, **kwargs
+             ):
+                 combined_text_output += chunk.text
+             return combined_text_output
+         else:
+             params = self._get_parameters(stop)
+             params = {**params, **kwargs}
+             result = self.client(prompt=prompt, **params)
+             return result["choices"][0]["text"]
+
+     def _stream(
+         self,
+         prompt: str,
+         stop: Optional[List[str]] = None,
+         run_manager: Optional[CallbackManagerForLLMRun] = None,
+         **kwargs: Any,
+     ) -> Iterator[GenerationChunk]:
+         """Yields result objects as they are generated in real time.
+
+         It also calls the callback manager's on_llm_new_token event with
+         similar parameters to the OpenAI LLM class method of the same name.
+
+         Args:
+             prompt: The prompts to pass into the model.
+             stop: Optional list of stop words to use when generating.
+
+         Returns:
+             A generator representing the stream of tokens being generated.
+
+         Yields:
+             Dictionary-like objects containing a string token and metadata.
+             See llama-cpp-python docs and below for more.
+
+         Example:
+             .. code-block:: python
+
+                 from langchain.llms import LlamaCpp
+                 llm = LlamaCpp(
+                     model_path="/path/to/local/model.bin",
+                     temperature = 0.5
+                 )
+                 for chunk in llm.stream("Ask 'Hi, how are you?' like a pirate:'",
+                         stop=["'","\n"]):
+                     result = chunk["choices"][0]
+                     print(result["text"], end='', flush=True)
+
+         """
+         params = {**self._get_parameters(stop), **kwargs}
+         result = self.client(prompt=prompt, stream=True, **params)
+         for part in result:
+             logprobs = part["choices"][0].get("logprobs", None)
+             chunk = GenerationChunk(
+                 text=part["choices"][0]["text"],
+                 generation_info={"logprobs": logprobs},
+             )
+             yield chunk
+             if run_manager:
+                 run_manager.on_llm_new_token(
+                     token=chunk.text, verbose=self.verbose, log_probs=logprobs
+                 )
+
+     def get_num_tokens(self, text: str) -> int:
+         tokenized_text = self.client.tokenize(text.encode("utf-8"))
+         return len(tokenized_text)
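
A hedged sketch of using this forked wrapper on its own, mirroring the class docstring. The model path is illustrative; it assumes llama-cpp-python is installed and a GGML model file is present locally:

```python
# Hedged sketch: direct use of the forked LlamaCpp class added in this commit.
from llamacppmodels import LlamaCpp

llm = LlamaCpp(
    model_path="llama-2-7b-chat.ggmlv3.q5_1.bin",  # illustrative local path
    n_ctx=2048,
    max_tokens=128,
    temperature=0.2,
    streaming=False,  # route generation through _call() instead of _stream()
)
print(llm("Q: What is a vector database?\nA:"))      # LLM.__call__ returns the completion text
print(llm.get_num_tokens("How many tokens is this?"))  # token count via the llama.cpp tokenizer
```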
requirements.txt ADDED
@@ -0,0 +1,14 @@
+ transformers
+ torch
+ pypdf
+ langchain==0.0.247
+ langchain[docarray]
+ tiktoken
+ sentence_transformers
+ chromadb
+ huggingface_hub
+ unstructured[local-inference]
+ gradio==3.35.2
+ param==1.13.0
+ openai
+ python-chess==1.999
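
Note that requirements.txt omits llama-cpp-python: app.py installs it at first run, choosing a cuBLAS build when CUDA is available. A sketch that mirrors that runtime check for a manual pre-install:

```python
# Mirrors app.py's runtime install of llama-cpp-python.
import os
import torch

if torch.cuda.is_available():
    # GPU build with cuBLAS enabled, exactly as app.py does
    os.system('CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python --force-reinstall --upgrade --no-cache-dir')
else:
    os.system('pip install llama-cpp-python')
```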