alKoGolik committed
Commit
c87c295
1 Parent(s): ec0a0d7

Upload 169 files

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. .gitattributes +2 -0
  2. Llama2-Code-Interpreter-main/.gitignore +48 -0
  3. Llama2-Code-Interpreter-main/OpenCodeInterpreter/LICENSE +201 -0
  4. Llama2-Code-Interpreter-main/OpenCodeInterpreter/README.md +83 -0
  5. Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/LICENSE +201 -0
  6. Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/README.md +143 -0
  7. Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/README_CN.md +140 -0
  8. Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/config_example/config.azure.example.json +24 -0
  9. Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/config_example/config.example.json +32 -0
  10. Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/example_img/1.jpg +0 -0
  11. Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/example_img/2.jpg +0 -0
  12. Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/example_img/3.jpg +0 -0
  13. Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/example_img/4.jpg +0 -0
  14. Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/example_img/5.jpg +0 -0
  15. Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/example_img/6.jpg +0 -0
  16. Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/example_img/save_to_notebook_demo.gif +3 -0
  17. Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/example_img/vision_example.jpg +0 -0
  18. Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/requirements.txt +6 -0
  19. Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/requirements_full.txt +18 -0
  20. Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/src/bot_backend.py +324 -0
  21. Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/src/cli.py +108 -0
  22. Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/src/functional.py +197 -0
  23. Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/src/jupyter_backend.py +108 -0
  24. Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/src/notebook_serializer.py +71 -0
  25. Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/src/response_parser.py +259 -0
  26. Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/src/tools.py +202 -0
  27. Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/src/web_ui.py +279 -0
  28. Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/README.md +3 -0
  29. Llama2-Code-Interpreter-main/OpenCodeInterpreter/demo/LICENSE +21 -0
  30. Llama2-Code-Interpreter-main/OpenCodeInterpreter/demo/README.md +50 -0
  31. Llama2-Code-Interpreter-main/OpenCodeInterpreter/demo/assets/assistant.pic.jpg +0 -0
  32. Llama2-Code-Interpreter-main/OpenCodeInterpreter/demo/assets/user.pic.jpg +0 -0
  33. Llama2-Code-Interpreter-main/OpenCodeInterpreter/demo/chatbot.py +316 -0
  34. Llama2-Code-Interpreter-main/OpenCodeInterpreter/demo/code_interpreter/BaseCodeInterpreter.py +29 -0
  35. Llama2-Code-Interpreter-main/OpenCodeInterpreter/demo/code_interpreter/JupyterClient.py +85 -0
  36. Llama2-Code-Interpreter-main/OpenCodeInterpreter/demo/code_interpreter/OpenCodeInterpreter.py +80 -0
  37. Llama2-Code-Interpreter-main/OpenCodeInterpreter/demo/requirements.txt +32 -0
  38. Llama2-Code-Interpreter-main/OpenCodeInterpreter/demo/utils/cleaner.py +31 -0
  39. Llama2-Code-Interpreter-main/OpenCodeInterpreter/demo/utils/const.py +88 -0
  40. Llama2-Code-Interpreter-main/OpenCodeInterpreter/evaluation/README.md +51 -0
  41. Llama2-Code-Interpreter-main/OpenCodeInterpreter/evaluation/evalplus/.dockerignore +174 -0
  42. Llama2-Code-Interpreter-main/OpenCodeInterpreter/evaluation/evalplus/.github/ISSUE_TEMPLATE/buggy_contract.yml +48 -0
  43. Llama2-Code-Interpreter-main/OpenCodeInterpreter/evaluation/evalplus/.github/ISSUE_TEMPLATE/buggy_test.yml +49 -0
  44. Llama2-Code-Interpreter-main/OpenCodeInterpreter/evaluation/evalplus/.github/ISSUE_TEMPLATE/config.yml +1 -0
  45. Llama2-Code-Interpreter-main/OpenCodeInterpreter/evaluation/evalplus/.github/ISSUE_TEMPLATE/model_eval_request.yml +67 -0
  46. Llama2-Code-Interpreter-main/OpenCodeInterpreter/evaluation/evalplus/.gitignore +173 -0
  47. Llama2-Code-Interpreter-main/OpenCodeInterpreter/evaluation/evalplus/.pre-commit-config.yaml +20 -0
  48. Llama2-Code-Interpreter-main/OpenCodeInterpreter/evaluation/evalplus/CITATION.cff +25 -0
  49. Llama2-Code-Interpreter-main/OpenCodeInterpreter/evaluation/evalplus/Dockerfile +19 -0
  50. Llama2-Code-Interpreter-main/OpenCodeInterpreter/evaluation/evalplus/LICENSE +205 -0
.gitattributes CHANGED
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Llama2-Code-Interpreter-main/assets/result_nvidia_chart.gif filter=lfs diff=lfs merge=lfs -text
+ Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/example_img/save_to_notebook_demo.gif filter=lfs diff=lfs merge=lfs -text
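For reference, each added `.gitattributes` line maps a path to the Git LFS filter. A minimal parser sketch over one of the lines above (the line is taken from this diff; the parsing logic itself is illustrative, not part of the commit):

```python
# One of the .gitattributes lines added in this commit.
line = ("Llama2-Code-Interpreter-main/assets/result_nvidia_chart.gif "
        "filter=lfs diff=lfs merge=lfs -text")

# First token is the path pattern; the rest are attributes.
path, *attrs = line.split()

# Attributes are either key=value pairs or bare flags like "-text"
# (which unsets the text attribute so Git treats the file as binary).
parsed = {a.split("=")[0]: a.split("=")[1] for a in attrs if "=" in a}
flags = [a for a in attrs if "=" not in a]

print(parsed["filter"])  # lfs
print(flags)             # ['-text']
```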
Llama2-Code-Interpreter-main/.gitignore ADDED
@@ -0,0 +1,48 @@
+ # Ignore .ckpt files
+ ckpt
+
+ # Ignore Python compiled files
+ __pycache__/
+ *.py[cod]
+
+ # Ignore Python virtual environment
+ venv/
+
+ # Ignore Jupyter notebook checkpoints
+ .ipynb_checkpoints/
+ .git/
+ .vscode/
+
+ # Ignore .DS_Store on MacOS
+ .DS_Store
+
+ rilab_key.txt
+ gpt4_custom_code_interpreter/rilab_key.txt
+ openai_api_key.txt
+
+ gpt4_custom_code_interpreter/
+ tmp/
+ output/
+ wandb/
+
+ utils/const.py
+ utils/hf_model_upload.py
+ gpt_data_gen/
+ *.json
+ *.txt
+ *.sh
+ *.pt
+ *.pth
+ *.ckpt
+ *.tokenizer
+
+ # eval data
+ eval/ds1000_data
+ eval/grade-school-math
+
+ # gradio features
+ chatbot_feat.py
+ chatbot_feat2.py
+ gradio_test.py
+
+
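The ignore rules above use standard gitignore glob syntax. As a quick sanity check, Python's `fnmatch` approximates this matching for the suffix patterns (the patterns are from the `.gitignore` above; the filenames are hypothetical examples):

```python
from fnmatch import fnmatch

# A few of the suffix patterns added in this .gitignore.
patterns = ["*.py[cod]", "*.json", "*.ckpt"]

def ignored(name):
    # fnmatch approximates gitignore's glob matching for simple
    # basename patterns; [cod] matches a single character c, o, or d.
    return any(fnmatch(name, p) for p in patterns)

print(ignored("module.pyc"))  # True: matched by *.py[cod]
print(ignored("data.json"))   # True: matched by *.json
print(ignored("script.py"))   # False: no pattern matches plain .py
```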
Llama2-Code-Interpreter-main/OpenCodeInterpreter/LICENSE ADDED
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
Llama2-Code-Interpreter-main/OpenCodeInterpreter/README.md ADDED
@@ -0,0 +1,83 @@
+ # OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement
+
+ <p align="center">
+ <img width="1000px" alt="OpenCodeInterpreter" src="https://opencodeinterpreter.github.io/static/images/figure1.png">
+ </p>
+ <p align="center">
+ <a href="https://opencodeinterpreter.github.io/">[🏠Homepage]</a>
+ |
+ <a href="https://github.com/OpenCodeInterpreter/OpenCodeInterpreter/">[🛠️Code]</a>
+ </p>
+ <hr>
+
+ ## 🌟 Upcoming Features
+ - 💡 **Open Sourcing OpenCodeInterpreter-SC2 series Model (based on StarCoder2 base)**
+
+ - 💡 **Open Sourcing OpenCodeInterpreter-GM-7b Model with gemma-7b Base**
+
+ ## 🔔News
+ 🛠️[2024-02-29]: Our official online demo is deployed on HuggingFace Spaces! Take a look at [Demo Page](https://huggingface.co/spaces/m-a-p/OpenCodeInterpreter_demo)!
+
+ 🛠️[2024-02-28]: We have open-sourced the Demo Local Deployment Code with a Setup Guide.
+
+ ✨[2024-02-26]: We have open-sourced the [OpenCodeInterpreter-DS-1.3b](https://huggingface.co/m-a-p/OpenCodeInterpreter-DS-1.3B) Model.
+
+ 📘[2024-02-26]: We have open-sourced the [CodeFeedback-Filtered-Instruction](https://huggingface.co/datasets/m-a-p/CodeFeedback-Filtered-Instruction) Dataset.
+
+ 🚀[2024-02-23]: We have open-sourced the datasets used in our project named [Code-Feedback](https://huggingface.co/datasets/m-a-p/Code-Feedback).
+
+ 🔥[2024-02-19]: We have open-sourced all models in the OpenCodeInterpreter series! We welcome everyone to try out our models and look forward to your participation! 😆
+
+
+
+ ## Introduction
+ OpenCodeInterpreter is a suite of open-source code generation systems aimed at bridging the gap between large language models and sophisticated proprietary systems like the GPT-4 Code Interpreter. It significantly enhances code generation capabilities by integrating execution and iterative refinement functionalities.
+
+ ## Models
+ All models within the OpenCodeInterpreter series have been open-sourced on Hugging Face. You can access our models via the following link: [OpenCodeInterpreter Models](https://huggingface.co/collections/m-a-p/opencodeinterpreter-65d312f6f88da990a64da456).
+
+ ## Data Collection
+ Supported by Code-Feedback, a dataset featuring 68K multi-turn interactions, OpenCodeInterpreter incorporates execution and human feedback for dynamic code refinement.
+ For additional insights into data collection procedures, please consult the readme provided under [Data Collection](https://github.com/OpenCodeInterpreter/OpenCodeInterpreter/blob/main/data_collection/README.md).
+
+ ## Evaluation
+ Our evaluation framework primarily utilizes HumanEval and MBPP, alongside their extended versions, HumanEval+ and MBPP+, leveraging the [EvalPlus framework](https://github.com/evalplus/evalplus) for a more comprehensive assessment.
+ For specific evaluation methodologies, please refer to the [Evaluation README](https://github.com/OpenCodeInterpreter/OpenCodeInterpreter/blob/main/evaluation/README.md) for more details.
+
+ ## Demo
+ We're excited to present our open-source demo, enabling users to effortlessly generate and execute code with our LLM locally. Within the demo, users can leverage the power of LLM to generate code and execute it locally, receiving automated execution feedback. LLM dynamically adjusts the code based on this feedback, ensuring a smoother coding experience. Additionally, users can engage in chat-based interactions with the LLM model, providing feedback to further enhance the generated code.
+
+ To begin exploring the demo and experiencing the capabilities firsthand, please refer to the instructions outlined in the [OpenCodeInterpreter Demo README](https://github.com/OpenCodeInterpreter/OpenCodeInterpreter/blob/main/demo/README.md) file. Happy coding!
+
+ ### Quick Start
+ - **Entering the workspace**:
+ ```bash
+ git clone https://github.com/OpenCodeInterpreter/OpenCodeInterpreter.git
+ cd demo
+ ```
+ - **Create a new conda environment**: `conda create -n demo python=3.10`
+
+ - **Activate the demo environment you create**: `conda activate demo`
+
+ - **Install requirements**: `pip install -r requirements.txt`
+
+ - **Create a Huggingface access token with write permission [here](https://huggingface.co/docs/hub/en/security-tokens). Our code will only use this token to create and push content to a specific repository called `opencodeinterpreter_user_data` under your own Huggingface account. We cannot get access to your data if you deploy this demo on your own device.**
+
+ - **Add the access token to environment variables:** `export HF_TOKEN="your huggingface access token"`
+
+ - **Run the Gradio App**:
+ ```bash
+ python3 chatbot.py --path "the model name of opencodeinterpreter model family. e.g., m-a-p/OpenCodeInterpreter-DS-6.7B"
+ ```
+ ### Video
+ https://github.com/OpenCodeInterpreter/OpenCodeInterpreter/assets/46103100/2337f34d-f5ed-4ecb-857b-3c2d085b72fd
+
+
+ ## Contact
+
+ If you have any inquiries, please feel free to raise an issue or reach out to us via email at: xiangyue.work@gmail.com, zhengtianyu0428@gmail.com.
+ We're here to assist you!
+
+ ## Star History
+
+ [![Star History Chart](https://api.star-history.com/svg?repos=OpenCodeInterpreter/OpenCodeInterpreter&type=Date)](https://star-history.com/#OpenCodeInterpreter/OpenCodeInterpreter&Date)
Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/LICENSE ADDED
@@ -0,0 +1,201 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Apache License
2
+ Version 2.0, January 2004
3
+ http://www.apache.org/licenses/
4
+
5
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6
+
7
+ 1. Definitions.
8
+
9
+ "License" shall mean the terms and conditions for use, reproduction,
10
+ and distribution as defined by Sections 1 through 9 of this document.
11
+
12
+ "Licensor" shall mean the copyright owner or entity authorized by
13
+ the copyright owner that is granting the License.
14
+
15
+ "Legal Entity" shall mean the union of the acting entity and all
16
+ other entities that control, are controlled by, or are under common
17
+ control with that entity. For the purposes of this definition,
18
+ "control" means (i) the power, direct or indirect, to cause the
19
+ direction or management of such entity, whether by contract or
20
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
21
+ outstanding shares, or (iii) beneficial ownership of such entity.
22
+
23
+ "You" (or "Your") shall mean an individual or Legal Entity
24
+ exercising permissions granted by this License.
25
+
26
+ "Source" form shall mean the preferred form for making modifications,
27
+ including but not limited to software source code, documentation
28
+ source, and configuration files.
29
+
30
+ "Object" form shall mean any form resulting from mechanical
31
+ transformation or translation of a Source form, including but
32
+ not limited to compiled object code, generated documentation,
33
+ and conversions to other media types.
34
+
35
+ "Work" shall mean the work of authorship, whether in Source or
36
+ Object form, made available under the License, as indicated by a
37
+ copyright notice that is included in or attached to the work
38
+ (an example is provided in the Appendix below).
39
+
40
+ "Derivative Works" shall mean any work, whether in Source or Object
41
+ form, that is based on (or derived from) the Work and for which the
42
+ editorial revisions, annotations, elaborations, or other modifications
43
+ represent, as a whole, an original work of authorship. For the purposes
44
+ of this License, Derivative Works shall not include works that remain
45
+ separable from, or merely link (or bind by name) to the interfaces of,
46
+ the Work and Derivative Works thereof.
47
+
48
+ "Contribution" shall mean any work of authorship, including
49
+ the original version of the Work and any modifications or additions
50
+ to that Work or Derivative Works thereof, that is intentionally
51
+ submitted to Licensor for inclusion in the Work by the copyright owner
52
+ or by an individual or Legal Entity authorized to submit on behalf of
53
+ the copyright owner. For the purposes of this definition, "submitted"
54
+ means any form of electronic, verbal, or written communication sent
55
+ to the Licensor or its representatives, including but not limited to
56
+ communication on electronic mailing lists, source code control systems,
57
+ and issue tracking systems that are managed by, or on behalf of, the
58
+ Licensor for the purpose of discussing and improving the Work, but
59
+ excluding communication that is conspicuously marked or otherwise
60
+ designated in writing by the copyright owner as "Not a Contribution."
61
+
62
+ "Contributor" shall mean Licensor and any individual or Legal Entity
63
+ on behalf of whom a Contribution has been received by Licensor and
64
+ subsequently incorporated within the Work.
65
+
66
+ 2. Grant of Copyright License. Subject to the terms and conditions of
67
+ this License, each Contributor hereby grants to You a perpetual,
68
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69
+ copyright license to reproduce, prepare Derivative Works of,
70
+ publicly display, publicly perform, sublicense, and distribute the
71
+ Work and such Derivative Works in Source or Object form.
72
+
73
+ 3. Grant of Patent License. Subject to the terms and conditions of
74
+ this License, each Contributor hereby grants to You a perpetual,
75
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76
+ (except as stated in this section) patent license to make, have made,
77
+ use, offer to sell, sell, import, and otherwise transfer the Work,
78
+ where such license applies only to those patent claims licensable
79
+ by such Contributor that are necessarily infringed by their
80
+ Contribution(s) alone or by combination of their Contribution(s)
81
+ with the Work to which such Contribution(s) was submitted. If You
82
+ institute patent litigation against any entity (including a
83
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
84
+ or a Contribution incorporated within the Work constitutes direct
85
+ or contributory patent infringement, then any patent licenses
86
+ granted to You under this License for that Work shall terminate
87
+ as of the date such litigation is filed.
88
+
89
+ 4. Redistribution. You may reproduce and distribute copies of the
90
+ Work or Derivative Works thereof in any medium, with or without
91
+ modifications, and in Source or Object form, provided that You
92
+ meet the following conditions:
93
+
94
+ (a) You must give any other recipients of the Work or
95
+ Derivative Works a copy of this License; and
96
+
97
+ (b) You must cause any modified files to carry prominent notices
98
+ stating that You changed the files; and
99
+
100
+ (c) You must retain, in the Source form of any Derivative Works
101
+ that You distribute, all copyright, patent, trademark, and
102
+ attribution notices from the Source form of the Work,
103
+ excluding those notices that do not pertain to any part of
104
+ the Derivative Works; and
105
+
106
+ (d) If the Work includes a "NOTICE" text file as part of its
107
+ distribution, then any Derivative Works that You distribute must
108
+ include a readable copy of the attribution notices contained
109
+ within such NOTICE file, excluding those notices that do not
110
+ pertain to any part of the Derivative Works, in at least one
111
+ of the following places: within a NOTICE text file distributed
112
+ as part of the Derivative Works; within the Source form or
113
+ documentation, if provided along with the Derivative Works; or,
114
+ within a display generated by the Derivative Works, if and
115
+ wherever such third-party notices normally appear. The contents
116
+ of the NOTICE file are for informational purposes only and
117
+ do not modify the License. You may add Your own attribution
118
+ notices within Derivative Works that You distribute, alongside
119
+ or as an addendum to the NOTICE text from the Work, provided
120
+ that such additional attribution notices cannot be construed
121
+ as modifying the License.
122
+
123
+ You may add Your own copyright statement to Your modifications and
124
+ may provide additional or different license terms and conditions
125
+ for use, reproduction, or distribution of Your modifications, or
126
+ for any such Derivative Works as a whole, provided Your use,
127
+ reproduction, and distribution of the Work otherwise complies with
128
+ the conditions stated in this License.
129
+
130
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
131
+ any Contribution intentionally submitted for inclusion in the Work
132
+ by You to the Licensor shall be under the terms and conditions of
133
+ this License, without any additional terms or conditions.
134
+ Notwithstanding the above, nothing herein shall supersede or modify
135
+ the terms of any separate license agreement you may have executed
136
+ with Licensor regarding such Contributions.
137
+
138
+ 6. Trademarks. This License does not grant permission to use the trade
139
+ names, trademarks, service marks, or product names of the Licensor,
140
+ except as required for reasonable and customary use in describing the
141
+ origin of the Work and reproducing the content of the NOTICE file.
142
+
143
+ 7. Disclaimer of Warranty. Unless required by applicable law or
144
+ agreed to in writing, Licensor provides the Work (and each
145
+ Contributor provides its Contributions) on an "AS IS" BASIS,
146
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147
+ implied, including, without limitation, any warranties or conditions
148
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149
+ PARTICULAR PURPOSE. You are solely responsible for determining the
150
+ appropriateness of using or redistributing the Work and assume any
151
+ risks associated with Your exercise of permissions under this License.
152
+
153
+ 8. Limitation of Liability. In no event and under no legal theory,
154
+ whether in tort (including negligence), contract, or otherwise,
155
+ unless required by applicable law (such as deliberate and grossly
156
+ negligent acts) or agreed to in writing, shall any Contributor be
157
+ liable to You for damages, including any direct, indirect, special,
158
+ incidental, or consequential damages of any character arising as a
159
+ result of this License or out of the use or inability to use the
160
+ Work (including but not limited to damages for loss of goodwill,
161
+ work stoppage, computer failure or malfunction, or any and all
162
+ other commercial damages or losses), even if such Contributor
163
+ has been advised of the possibility of such damages.
164
+
165
+ 9. Accepting Warranty or Additional Liability. While redistributing
166
+ the Work or Derivative Works thereof, You may choose to offer,
167
+ and charge a fee for, acceptance of support, warranty, indemnity,
168
+ or other liability obligations and/or rights consistent with this
169
+ License. However, in accepting such obligations, You may act only
170
+ on Your own behalf and on Your sole responsibility, not on behalf
171
+ of any other Contributor, and only if You agree to indemnify,
172
+ defend, and hold each Contributor harmless for any liability
173
+ incurred by, or claims asserted against, such Contributor by reason
174
+ of your accepting any such warranty or additional liability.
175
+
176
+ END OF TERMS AND CONDITIONS
177
+
178
+ APPENDIX: How to apply the Apache License to your work.
179
+
180
+ To apply the Apache License to your work, attach the following
181
+ boilerplate notice, with the fields enclosed by brackets "[]"
182
+ replaced with your own identifying information. (Don't include
183
+ the brackets!) The text should be enclosed in the appropriate
184
+ comment syntax for the file format. We also recommend that a
185
+ file or class name and description of purpose be included on the
186
+ same "printed page" as the copyright notice for easier
187
+ identification within third-party archives.
188
+
189
+ Copyright [yyyy] [name of copyright owner]
190
+
191
+ Licensed under the Apache License, Version 2.0 (the "License");
192
+ you may not use this file except in compliance with the License.
193
+ You may obtain a copy of the License at
194
+
195
+ http://www.apache.org/licenses/LICENSE-2.0
196
+
197
+ Unless required by applicable law or agreed to in writing, software
198
+ distributed under the License is distributed on an "AS IS" BASIS,
199
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200
+ See the License for the specific language governing permissions and
201
+ limitations under the License.
Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/README.md ADDED
@@ -0,0 +1,143 @@
1
+ **Read in other language: [中文](README_CN.md).**
2
+
3
+ # Local-Code-Interpreter
4
+ A local implementation of OpenAI's ChatGPT Code Interpreter (Advanced Data Analysis).
5
+
6
+ ## Introduction
7
+
8
+ OpenAI's Code Interpreter (currently renamed to Advanced Data Analysis) for ChatGPT is a revolutionary feature that allows the execution of Python code within the AI model. However, it executes code within an online sandbox and has certain limitations. In this project, we present Local Code Interpreter, which enables code execution on your local device, offering enhanced flexibility, security, and convenience.
9
+ ![notebook_gif_demo](example_img/save_to_notebook_demo.gif)
10
+
11
+ ## Key Advantages
12
+
13
+ - **Custom Environment**: Execute code in a customized environment of your choice, ensuring you have the right packages and settings.
14
+
15
+ - **Seamless Experience**: Say goodbye to file size restrictions and internet issues while uploading. With Local Code Interpreter, you're in full control.
16
+
17
+ - **GPT-3.5 Availability**: While the official Code Interpreter is only available for the GPT-4 model, Local Code Interpreter offers the flexibility to switch between GPT-3.5 and GPT-4 models.
18
+
19
+ - **Enhanced Data Security**: Keep your data more secure by running code locally, minimizing data transfer over the internet.
20
+
21
+ - **Jupyter Support**: You can save all the code and conversation history in a Jupyter notebook for future use.
22
+
23
+ ## Note
24
+ Executing AI-generated code without human review on your own device is not safe. You are responsible for taking measures to protect the security of your device and data (such as using a virtual machine) before launching this program. All consequences caused by using this program shall be borne by yourself.
25
+
26
+ ## Usage
27
+
28
+ ### Installation
29
+
30
+ 1. Clone this repository to your local device
31
+ ```shell
32
+ git clone https://github.com/MrGreyfun/Local-Code-Interpreter.git
33
+ cd Local-Code-Interpreter
34
+ ```
35
+
36
+ 2. Install the necessary dependencies. The program has been tested on Windows 10 and CentOS Linux 7.8, with Python 3.9.16. Required packages include:
37
+ ```text
38
+ Jupyter Notebook 6.5.4
39
+ gradio 3.39.0
40
+ openai 0.27.8
41
+ ansi2html 1.8.0
42
+ tiktoken 0.3.3
43
+ Pillow 9.4.0
44
+ ```
45
+ Other systems or package versions may also work. Please note that you should not update the `openai` package to the latest `1.x` version, as it has been rewritten and is not compatible with older versions.
46
+ You can use the following command to directly install the required packages:
47
+ ```shell
48
+ pip install -r requirements.txt
49
+ ```
50
+ For newcomers to Python, we offer a convenient command that installs additional packages commonly used for data processing and analysis:
51
+ ```shell
52
+ pip install -r requirements_full.txt
53
+ ```
54
+ ### Configuration
55
+
56
+ 1. Create a `config.json` file in the `src` directory, following the examples provided in the `config_example` directory.
57
+
58
+ 2. Configure your API key in the `config.json` file.
59
+
60
+ Please Note:
61
+ 1. **Set the `model_name` Correctly**
62
+ This program relies on the function calling capability of the `0613` or newer versions of models:
63
+ - `gpt-3.5-turbo-0613` (and its 16K version)
64
+ - `gpt-3.5-turbo-1106`
65
+ - `gpt-4-0613` (and its 32K version)
66
+ - `gpt-4-1106-preview`
67
+
68
+ Older versions of the models will not work. Note that `gpt-4-vision-preview` lacks support for function calling; therefore, it should not be set as the `GPT-4` model.
69
+
70
+ For Azure OpenAI service users:
71
+ - Set the `model_name` as your deployment name.
72
+ - Confirm that the deployed model corresponds to the `0613` or newer version.
73
+
74
+ 2. **API Version Settings**
75
+ If you're using the Azure OpenAI service, set the `API_VERSION` to `2023-12-01-preview` in the `config.json` file. Note that API versions older than `2023-07-01-preview` do not support the function calls this program requires; `2023-12-01-preview` is recommended, as older versions will be deprecated in the near future.
76
+
77
+ 3. **Vision Model Settings**
78
+ Although `gpt-4-vision-preview` does not currently support function calling, we have implemented vision input using a non-end-to-end approach. To enable vision input, set `gpt-4-vision-preview` as the `GPT-4V` model and set its `available` field to `true`. Conversely, set `available` to `false` to disable vision input when it is unnecessary; this removes vision-related system prompts and reduces your API costs.
79
+ ![vision_demo](example_img/vision_example.jpg)
80
+ 4. **Model Context Window Settings**
81
+ The `model_context_window` field records the context window for each model, which the program uses to slice conversations when they exceed the model's context window capacity.
82
+ Azure OpenAI service users should manually insert context window information using the model's deployment name in the following format:
83
+ ```json
84
+ "<YOUR-DEPLOYMENT-NAME>": <contex_window (integer)>
85
+ ```
86
+
87
+ Additionally, when OpenAI introduces new models, you can manually append the new model's context window information using the same format. (We will keep this file updated, but there might be delays.)
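As an illustration, the slicing this field enables might look like the sketch below. This is hypothetical code: the actual program counts tokens precisely with `tiktoken`, while here a crude characters-per-token estimate stands in.

```python
# Hypothetical sketch of context-window slicing. Tokens are roughly
# estimated as one token per four characters; the real program uses
# tiktoken for exact counts.
def estimate_tokens(message):
    return max(1, len(message["content"]) // 4)

def slice_conversation(conversation, context_window):
    """Drop the oldest non-system messages until the estimated token
    count fits within the model's context window."""
    system_msgs = [m for m in conversation if m["role"] == "system"]
    others = [m for m in conversation if m["role"] != "system"]
    budget = context_window - sum(estimate_tokens(m) for m in system_msgs)
    kept, total = [], 0
    for msg in reversed(others):  # walk from the most recent message backwards
        cost = estimate_tokens(msg)
        if total + cost > budget:
            break
        kept.append(msg)
        total += cost
    return system_msgs + list(reversed(kept))

conv = [
    {"role": "system", "content": "You are an AI code interpreter."},
    {"role": "user", "content": "x" * 400},
    {"role": "assistant", "content": "y" * 400},
    {"role": "user", "content": "z" * 40},
]
sliced = slice_conversation(conv, context_window=120)
```

System messages are preserved and the oldest user/assistant turns are dropped first, which matches the general idea of keeping the most recent context within the window.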
88
+
89
+ 5. **Alternate API Key Handling**
90
+ If you prefer not to store your API key in the `config.json` file, you can opt for an alternate approach:
91
+ - Leave the `API_KEY` field in `config.json` as an empty string:
92
+ ```json
93
+ "API_KEY": ""
94
+ ```
95
+ - Set the environment variable `OPENAI_API_KEY` with your API key before running the program:
96
+ - On Windows:
97
+ ```shell
98
+ set OPENAI_API_KEY=<YOUR-API-KEY>
99
+ ```
100
+ - On Linux:
101
+ ```shell
102
+ export OPENAI_API_KEY=<YOUR-API-KEY>
103
+ ```
104
+
105
+ ## Getting Started
106
+
107
+ 1. Navigate to the `src` directory.
108
+ ```shell
109
+ cd src
110
+ ```
111
+
112
+ 2. Run the command:
113
+ ```shell
114
+ python web_ui.py
115
+ ```
116
+
117
+ 3. Access the generated link in your browser to start using the Local Code Interpreter.
118
+
119
+ 4. Use the `-n` or `--notebook` option to save the conversation in a Jupyter notebook.
120
+ By default, the notebook is saved in the working directory, but you can add a path to save it elsewhere.
121
+ ```shell
122
+ python web_ui.py -n <path_to_notebook>
123
+ ```
124
+
125
+ ## Example
126
+
127
+ Imagine uploading a data file and requesting the model to perform linear regression and visualize the data. See how Local Code Interpreter provides a seamless experience:
128
+
129
+ 1. Upload the data and request linear regression:
130
+ ![Example 1](example_img/1.jpg)
131
+
132
+ 2. Encounter an error in the generated code:
133
+ ![Example 2](example_img/2.jpg)
134
+
135
+ 3. ChatGPT automatically checks the data structure and fixes the bug:
136
+ ![Example 3](example_img/3.jpg)
137
+
138
+ 4. The corrected code runs successfully:
139
+ ![Example 4](example_img/4.jpg)
140
+
141
+ 5. The final result meets your requirements:
142
+ ![Example 5](example_img/5.jpg)
143
+ ![Example 6](example_img/6.jpg)
Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/README_CN.md ADDED
@@ -0,0 +1,140 @@
1
+ **Read in other language: [English](README.md)**
2
+
3
+ # 本地代码解释器
4
+ OpenAI的ChatGPT代码解释器(Code Interpreter或Advanced Data Analysis)的本地版。
5
+
6
+ ## 简介
7
+
8
+ OpenAI的ChatGPT代码解释器(Code Interpreter,现更名为Advanced Data Analysis)是一款强大的AI工具。然而,其在在线沙箱环境中运行代码的特性导致了一些限制,如包的缺失、上传速度较慢、仅支持上传不超过100MB的文件以及代码最多只能运行120秒等。为此,我们推出了本地代码解释器(Local Code Interpreter)。这款工具允许您在自己的设备上,利用自己专属的Python环境来执行ChatGPT生成的代码,从而解除了原有解释器的各种限制。
9
+ ![notebook_gif_demo](example_img/save_to_notebook_demo.gif)
10
+
11
+ ## 优势
12
+
13
+ - **自定义环境**:在您本地环境中运行代码,确保各种依赖都已正确安装。
14
+
15
+ - **无缝体验**:告别100MB文件大小限制和网速问题。使用本地版代码解释器,一切尽在掌控之中。
16
+
17
+ - **可用GPT-3.5**:官方代码解释器只能在GPT-4中使用,但现在您甚至可以在一轮对话中自由切换GPT-3.5和GPT-4。
18
+
19
+ - **数据更安全**:代码在本地运行,无需将文件上传至网络,提高了数据的安全性。
20
+
21
+ - **支持Jupyter**:本程序可将代码和对话历史保存至Jupyter notebook文件中供以后使用。
22
+
23
+ ## 注意事项
24
+ 在您自己的设备上执行AI生成但未经人工审核的代码可能存在安全风险。在运行此程序前,您应当采用一些安全措施,例如使用虚拟机,以保护您的设备和数据。使用此程序所产生的所有后果,您需自行承担。
25
+
26
+ ## 使用方法
27
+
28
+ ### 安装
29
+
30
+ 1. 克隆本仓库
31
+ ```shell
32
+ git clone https://github.com/MrGreyfun/Local-Code-Interpreter.git
33
+ cd Local-Code-Interpreter
34
+ ```
35
+
36
+ 2. 安装依赖。该程序已在Windows 10和CentOS Linux 7.8上使用Python 3.9.16测试。所需的库及版本:
37
+ ```text
38
+ Jupyter Notebook 6.5.4
39
+ gradio 3.39.0
40
+ openai 0.27.8
41
+ ansi2html 1.8.0
+ tiktoken 0.3.3
+ Pillow 9.4.0
42
+ ```
43
+ 其他系统或库版本也可能有效。请注意,不要将`openai`包升级至最新的`1.x`版本,该版本已重写,与旧版本不兼容。
44
+ 您可以使用以下命令直接安装所需的软件包:
45
+ ```shell
46
+ pip install -r requirements.txt
47
+ ```
48
+ 如果您不熟悉Python,可以使用以下命令安装,它将额外安装常用的Python数据分析库:
49
+ ```shell
50
+ pip install -r requirements_full.txt
51
+ ```
52
+ ### 配置
53
+
54
+ 1. 在`src`目录中创建一个`config.json`文件,参照`config_example`目录中提供的示例进行配置。
55
+
56
+ 2. 在`config.json`文件中配置您的API密钥。
57
+
58
+ 请注意:
59
+ 1. **正确设置`model_name`**
60
+ 该程序依赖于`0613`及以上版本的模型的函数调用能力,这些模型包括:
61
+ - `gpt-3.5-turbo-0613` (及其16K版本)
62
+ - `gpt-3.5-turbo-1106`
63
+ - `gpt-4-0613` (及其32K版本)
64
+ - `gpt-4-1106-preview`
65
+
66
+ 旧版本的模型将无法使用。请注意,`gpt-4-vision-preview`模型同样不支持函数调用,因此不能将其设置为`GPT-4`模型。
67
+
68
+ 对于使用Azure OpenAI的用户:
69
+ - 请将`model_name`设置为您的模型的部署名(deployment name)。
70
+ - 确认部署的模型是`0613`及以上版本。
71
+
72
+ 2. **API版本设置**
73
+ 如果您使用Azure OpenAI服务,请在`config.json`文件中将`API_VERSION`设置为`2023-12-01-preview`。注意,`2023-07-01-preview`之前的API版本不支持本程序所需的函数调用功能;由于旧版本即将被弃用,推荐使用`2023-12-01-preview`。
74
+
75
+ 3. **视觉模型设置**
76
+ 尽管`gpt-4-vision-preview`模型不支持函数调用,我们仍然通过另一种非端到端的方式实现了图像输入。如果想使用图像输入,请将`gpt-4-vision-preview`设置为`GPT-4V`模型,并将`available`字段设置为`true`。当不需要使用图像输入的时候,可以将`available`字段设置为`false`,这将移除图像相关的系统提示,从而减少您的API费用。
77
+ ![vision_demo](example_img/vision_example.jpg)
78
+ 4. **模型上下文窗口长度设置**
79
+ `model_context_window` 字段记录了每个模型的上下文窗口长度信息。当对话长度超过模型上下文窗口长度限制时,本程序会使用该信息来压缩对话长度。
80
+ Azure OpenAI的用户需要按照以下格式,使用模型的部署名手动添加上下文窗口长度信息:
81
+ ```json
82
+ "<模型部署名>": <上下文窗口长度 (整数)>
83
+ ```
84
+ 此外,当OpenAI推出新模型的时候,您可以按照相同的格式手动添加新模型的上下文窗口长度信息。(我们会持续更新该文件,但是不一定及时)
85
+
86
+ 5. **使用环境变量配置密钥**
87
+ 如果您不希望将API密钥存储在`config.json`文件中,可以选择通过环境变量来设置密钥:
88
+ - 将`config.json`文件中的`API_KEY`设为空字符串:
89
+ ```json
90
+ "API_KEY": ""
91
+ ```
92
+ - 在运行程序之前,使用您的API密钥设置环境变量`OPENAI_API_KEY`:
93
+ - Windows:
94
+ ```shell
95
+ set OPENAI_API_KEY=<你的API密钥>
96
+ ```
97
+ - Linux:
98
+ ```shell
99
+ export OPENAI_API_KEY=<你的API密钥>
100
+ ```
101
+
102
+ ## 使用
103
+
104
+ 1. 进入`src`目录。
105
+ ```shell
106
+ cd src
107
+ ```
108
+
109
+ 2. 运行以下命令:
110
+ ```shell
111
+ python web_ui.py
112
+ ```
113
+
114
+ 3. 在浏览器中访问终端生成的链接,开始使用本地版代码解释器。
115
+
116
+ 4. 添加`-n`或`--notebook`参数可以将对话保存到Jupyter notebook中。
117
+ 默认情况下,该Jupyter notebook文件保存在工作目录中,您可以添加路径以将其保存到其它位置。
118
+ ```shell
119
+ python web_ui.py -n <path_to_notebook>
120
+ ```
121
+
122
+ ## 示例
123
+
124
+ 以下是一个使用本程序进行线性回归任务的示例:
125
+
126
+ 1. 上传数据文件并要求模型对数据进行线性回归:
127
+ ![Example 1](example_img/1.jpg)
128
+
129
+ 2. 生成的代码执行中遇到错误:
130
+ ![Example 2](example_img/2.jpg)
131
+
132
+ 3. ChatGPT自动检查数据格式并修复bug:
133
+ ![Example 3](example_img/3.jpg)
134
+
135
+ 4. 修复bug后的代码成功运行:
136
+ ![Example 4](example_img/4.jpg)
137
+
138
+ 5. 最终结果符合要求:
139
+ ![Example 5](example_img/5.jpg)
140
+ ![Example 6](example_img/6.jpg)
Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/config_example/config.azure.example.json ADDED
@@ -0,0 +1,24 @@
1
+ {
2
+ "API_TYPE": "azure",
3
+ "API_base": "<YOUR-API-ENDPOINT>",
4
+ "API_VERSION": "2023-12-01-preview",
5
+ "API_KEY": "<YOUR-API-KEY>",
6
+ "model": {
7
+ "GPT-3.5": {
8
+ "model_name": "<YOUR-DEPLOYMENT-NAME>",
9
+ "available": true
10
+ },
11
+ "GPT-4": {
12
+ "model_name": "<YOUR-DEPLOYMENT-NAME>",
13
+ "available": true
14
+ },
15
+ "GPT-4V": {
16
+ "model_name": "<YOUR-DEPLOYMENT-NAME>",
17
+ "available": true
18
+ }
19
+ },
20
+ "model_context_window": {
21
+ "<YOUR-DEPLOYMENT-NAME1>": <contex_window (integer)>,
22
+ "<YOUR-DEPLOYMENT-NAME2>": <contex_window (integer)>
23
+ }
24
+ }
Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/config_example/config.example.json ADDED
@@ -0,0 +1,32 @@
1
+ {
2
+ "API_TYPE": "open_ai",
3
+ "API_base": "https://api.openai.com/v1",
4
+ "API_VERSION": null,
5
+ "API_KEY": "<YOUR-API-KEY>",
6
+ "model": {
7
+ "GPT-3.5": {
8
+ "model_name": "gpt-3.5-turbo-0613",
9
+ "available": true
10
+ },
11
+ "GPT-4": {
12
+ "model_name": "gpt-4-0613",
13
+ "available": true
14
+ },
15
+ "GPT-4V": {
16
+ "model_name": "gpt-4-vision-preview",
17
+ "available": true
18
+ }
19
+ },
20
+ "model_context_window": {
21
+ "gpt-3.5-turbo": 4096,
22
+ "gpt-3.5-turbo-16k": 16385,
23
+ "gpt-3.5-turbo-0613": 4096,
24
+ "gpt-3.5-turbo-1106": 16385,
25
+ "gpt-4": 8192,
26
+ "gpt-4-32k": 32768,
27
+ "gpt-4-0613": 8192,
28
+ "gpt-4-32k-0613": 32768,
29
+ "gpt-4-1106-preview": 128000,
30
+ "gpt-4-vision-preview": 128000
31
+ }
32
+ }
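For reference, the sketch below shows how a program might turn such a config into request parameters for the `openai` 0.x API: Azure deployments are passed as `engine`, while the regular OpenAI API uses `model` (simplified from the project's `src/bot_backend.py`; the helper name is illustrative):

```python
def chat_completion_kwargs(config, model_choice="GPT-3.5"):
    """Build keyword arguments for openai.ChatCompletion.create()
    from a config dict in the format shown above."""
    model_name = config["model"][model_choice]["model_name"]
    kwargs = {"stream": True}
    if config["API_TYPE"] == "azure":
        kwargs["engine"] = model_name  # Azure uses the deployment name
    else:
        kwargs["model"] = model_name
    return kwargs

cfg = {
    "API_TYPE": "open_ai",
    "model": {"GPT-3.5": {"model_name": "gpt-3.5-turbo-0613", "available": True}},
}
print(chat_completion_kwargs(cfg))  # {'stream': True, 'model': 'gpt-3.5-turbo-0613'}
```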
Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/example_img/1.jpg ADDED
Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/example_img/2.jpg ADDED
Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/example_img/3.jpg ADDED
Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/example_img/4.jpg ADDED
Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/example_img/5.jpg ADDED
Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/example_img/6.jpg ADDED
Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/example_img/save_to_notebook_demo.gif ADDED

Git LFS Details

  • SHA256: b48a4673e78f618a7b458d349d793c5220041e918e578f9b1aaa0aeeb6c7cefa
  • Pointer size: 132 Bytes
  • Size of remote file: 4.96 MB
Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/example_img/vision_example.jpg ADDED
Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/requirements.txt ADDED
@@ -0,0 +1,6 @@
1
+ notebook==6.5.4
2
+ openai==0.27.8
3
+ gradio==3.39.0
4
+ ansi2html==1.8.0
5
+ tiktoken
6
+ Pillow
Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/requirements_full.txt ADDED
@@ -0,0 +1,18 @@
1
+ notebook==6.5.4
2
+ openai==0.27.8
3
+ gradio==3.39.0
4
+ ansi2html==1.8.0
5
+ tiktoken
6
+ Pillow
7
+ numpy
8
+ scipy
9
+ openpyxl
10
+ xlrd
11
+ xlwt
12
+ matplotlib
13
+ pandas
14
+ opencv-python
15
+ PyPDF2
16
+ pdfminer.six
17
+ sympy
18
+ scikit-learn
Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/src/bot_backend.py ADDED
@@ -0,0 +1,324 @@
1
+ import json
2
+ import copy
3
+ import shutil
4
+ from jupyter_backend import *
5
+ from tools import *
6
+ from typing import *
7
+ from notebook_serializer import add_markdown_to_notebook, add_code_cell_to_notebook
8
+
9
+ functions = [
10
+ {
11
+ "name": "execute_code",
12
+ "description": "This function allows you to execute Python code and retrieve the terminal output. If the code "
13
+ "generates image output, the function will return the text '[image]'. The code is sent to a "
14
+ "Jupyter kernel for execution. The kernel will remain active after execution, retaining all "
15
+ "variables in memory.",
16
+ "parameters": {
17
+ "type": "object",
18
+ "properties": {
19
+ "code": {
20
+ "type": "string",
21
+ "description": "The code text"
22
+ }
23
+ },
24
+ "required": ["code"],
25
+ }
26
+ },
27
+ ]
28
+
29
+ system_msg = '''You are an AI code interpreter.
30
+ Your goal is to help users do a variety of jobs by executing Python code.
31
+
32
+ You should:
33
+ 1. Comprehend the user's requirements carefully & to the letter.
34
+ 2. Give a brief description for what you plan to do & call the provided function to run code.
35
+ 3. Provide results analysis based on the execution output.
36
+ 4. If error occurred, try to fix it.
37
+ 5. Respond in the same language as the user.
38
+
39
+ Note: If the user uploads a file, you will receive a system message "User uploaded a file: filename". Use the filename as the path in the code. '''
40
+
41
+ with open('config.json') as f:
42
+ config = json.load(f)
43
+
44
+ if not config['API_KEY']:
45
+ config['API_KEY'] = os.getenv('OPENAI_API_KEY')
46
+ os.unsetenv('OPENAI_API_KEY')
47
+
48
+
49
+ def get_config():
50
+ return config
51
+
52
+
53
+ def config_openai_api(api_type, api_base, api_version, api_key):
54
+ openai.api_type = api_type
55
+ openai.api_base = api_base
56
+ openai.api_version = api_version
57
+ openai.api_key = api_key
58
+
59
+
60
+ class GPTResponseLog:
61
+ def __init__(self):
62
+ self.assistant_role_name = ''
63
+ self.content = ''
64
+ self.function_name = None
65
+ self.function_args_str = ''
66
+ self.code_str = ''
67
+ self.display_code_block = ''
68
+ self.finish_reason = 'stop'
69
+ self.bot_history = None
70
+ self.stop_generating = False
71
+ self.code_executing = False
72
+ self.interrupt_signal_sent = False
73
+
74
+ def reset_gpt_response_log_values(self, exclude=None):
75
+ if exclude is None:
76
+ exclude = []
77
+
78
+ attributes = {'assistant_role_name': '',
79
+ 'content': '',
80
+ 'function_name': None,
81
+ 'function_args_str': '',
82
+ 'code_str': '',
83
+ 'display_code_block': '',
84
+ 'finish_reason': 'stop',
85
+ 'bot_history': None,
86
+ 'stop_generating': False,
87
+ 'code_executing': False,
88
+ 'interrupt_signal_sent': False}
89
+
90
+ for attr_name in exclude:
91
+ del attributes[attr_name]
92
+ for attr_name, value in attributes.items():
93
+ setattr(self, attr_name, value)
94
+
95
+ def set_assistant_role_name(self, assistant_role_name: str):
96
+ self.assistant_role_name = assistant_role_name
97
+
98
+ def add_content(self, content: str):
99
+ self.content += content
100
+
101
+ def set_function_name(self, function_name: str):
102
+ self.function_name = function_name
103
+
104
+ def copy_current_bot_history(self, bot_history: List):
105
+ self.bot_history = copy.deepcopy(bot_history)
106
+
107
+ def add_function_args_str(self, function_args_str: str):
108
+ self.function_args_str += function_args_str
109
+
110
+ def update_code_str(self, code_str: str):
111
+ self.code_str = code_str
112
+
113
+ def update_display_code_block(self, display_code_block):
114
+ self.display_code_block = display_code_block
115
+
116
+ def update_finish_reason(self, finish_reason: str):
117
+ self.finish_reason = finish_reason
118
+
119
+ def update_stop_generating_state(self, stop_generating: bool):
120
+ self.stop_generating = stop_generating
121
+
122
+ def update_code_executing_state(self, code_executing: bool):
123
+ self.code_executing = code_executing
124
+
125
+ def update_interrupt_signal_sent(self, interrupt_signal_sent: bool):
126
+ self.interrupt_signal_sent = interrupt_signal_sent
127
+
128
+
129
+ class BotBackend(GPTResponseLog):
130
+ def __init__(self):
131
+ super().__init__()
132
+ self.unique_id = hash(id(self))
133
+ self.jupyter_work_dir = f'cache/work_dir_{self.unique_id}'
134
+ self.tool_log = f'cache/tool_{self.unique_id}.log'
135
+ self.jupyter_kernel = JupyterKernel(work_dir=self.jupyter_work_dir)
136
+ self.gpt_model_choice = "GPT-3.5"
137
+ self.revocable_files = []
138
+ self.system_msg = system_msg
139
+ self.functions = copy.deepcopy(functions)
140
+ self._init_api_config()
141
+ self._init_tools()
142
+ self._init_conversation()
143
+ self._init_kwargs_for_chat_completion()
144
+
145
+ def _init_conversation(self):
146
+ first_system_msg = {'role': 'system', 'content': self.system_msg}
147
+ self.context_window_tokens = 0 # num of tokens actually sent to GPT
148
+ self.sliced = False  # whether the conversation is sliced
149
+ if hasattr(self, 'conversation'):
150
+ self.conversation.clear()
151
+ self.conversation.append(first_system_msg)
152
+ else:
153
+ self.conversation: List[Dict] = [first_system_msg]
154
+
155
+ def _init_api_config(self):
156
+ self.config = get_config()
157
+ api_type = self.config['API_TYPE']
158
+ api_base = self.config['API_base']
159
+ api_version = self.config['API_VERSION']
160
+ api_key = config['API_KEY']
161
+ config_openai_api(api_type, api_base, api_version, api_key)
162
+
163
+ def _init_tools(self):
164
+ self.additional_tools = {}
165
+
166
+ tool_datas = get_available_tools(self.config)
167
+ if tool_datas:
168
+ self.system_msg += '\n\nAdditional tools:'
169
+
170
+ for tool_data in tool_datas:
171
+ system_prompt = tool_data['system_prompt']
172
+ tool_name = tool_data['tool_name']
173
+ tool_description = tool_data['tool_description']
174
+
175
+ self.system_msg += f'\n{tool_name}: {system_prompt}'
176
+
177
+ self.functions.append(tool_description)
178
+ self.additional_tools[tool_name] = {
179
+ 'tool': tool_data['tool'],
180
+ 'additional_parameters': copy.deepcopy(tool_data['additional_parameters'])
181
+ }
182
+ for parameter, value in self.additional_tools[tool_name]['additional_parameters'].items():
183
+ if callable(value):
184
+ self.additional_tools[tool_name]['additional_parameters'][parameter] = value(self)
185
+
186
+ def _init_kwargs_for_chat_completion(self):
187
+ self.kwargs_for_chat_completion = {
188
+ 'stream': True,
189
+ 'messages': self.conversation,
190
+ 'functions': self.functions,
191
+ 'function_call': 'auto'
192
+ }
193
+
194
+ model_name = self.config['model'][self.gpt_model_choice]['model_name']
195
+
196
+ if self.config['API_TYPE'] == 'azure':
197
+ self.kwargs_for_chat_completion['engine'] = model_name
198
+ else:
199
+ self.kwargs_for_chat_completion['model'] = model_name
200
+
201
+ def _backup_all_files_in_work_dir(self):
202
+ count = 1
203
+ backup_dir = f'cache/backup_{self.unique_id}'
204
+ while os.path.exists(backup_dir):
205
+ count += 1
206
+ backup_dir = f'cache/backup_{self.unique_id}_{count}'
207
+ shutil.copytree(src=self.jupyter_work_dir, dst=backup_dir)
208
+
209
+ def _clear_all_files_in_work_dir(self, backup=True):
210
+ if backup:
211
+ self._backup_all_files_in_work_dir()
212
+ for filename in os.listdir(self.jupyter_work_dir):
213
+ path = os.path.join(self.jupyter_work_dir, filename)
214
+ if os.path.isdir(path):
215
+ shutil.rmtree(path)
216
+ else:
217
+ os.remove(path)
218
+
219
+ def _save_tool_log(self, tool_response):
220
+ with open(self.tool_log, 'a', encoding='utf-8') as log_file:
221
+ log_file.write(f'Previous conversation: {self.conversation}\n')
222
+ log_file.write(f'Model choice: {self.gpt_model_choice}\n')
223
+ log_file.write(f'Tool name: {self.function_name}\n')
224
+ log_file.write(f'Parameters: {self.function_args_str}\n')
225
+ log_file.write(f'Response: {tool_response}\n')
226
+ log_file.write('----------\n\n')
227
+
228
+ def add_gpt_response_content_message(self):
229
+ self.conversation.append(
230
+ {'role': self.assistant_role_name, 'content': self.content}
231
+ )
232
+ add_markdown_to_notebook(self.content, title="Assistant")
233
+
234
+ def add_text_message(self, user_text):
235
+ self.conversation.append(
236
+ {'role': 'user', 'content': user_text}
237
+ )
238
+ self.revocable_files.clear()
239
+ self.update_finish_reason(finish_reason='new_input')
240
+ add_markdown_to_notebook(user_text, title="User")
241
+
242
+ def add_file_message(self, path, bot_msg):
243
+ filename = os.path.basename(path)
244
+ work_dir = self.jupyter_work_dir
245
+
246
+ shutil.copy(path, work_dir)
247
+
248
+ gpt_msg = {'role': 'system', 'content': f'User uploaded a file: {filename}'}
249
+ self.conversation.append(gpt_msg)
250
+ self.revocable_files.append(
251
+ {
252
+ 'bot_msg': bot_msg,
253
+ 'gpt_msg': gpt_msg,
254
+ 'path': os.path.join(work_dir, filename)
255
+ }
256
+ )
257
+
258
+ def add_function_call_response_message(self, function_response: Union[str, None], save_tokens=True):
259
+ if self.code_str is not None:
260
+ add_code_cell_to_notebook(self.code_str)
261
+
262
+ self.conversation.append(
263
+ {
264
+ "role": self.assistant_role_name,
265
+ "name": self.function_name,
266
+ "content": self.function_args_str
267
+ }
268
+ )
269
+ if function_response is not None:
270
+ if save_tokens and len(function_response) > 500:
271
+ function_response = f'{function_response[:200]}\n[Output too much, the middle part output is omitted]\n ' \
272
+ f'End part of output:\n{function_response[-200:]}'
273
+ self.conversation.append(
274
+ {
275
+ "role": "function",
276
+ "name": self.function_name,
277
+ "content": function_response,
278
+ }
279
+ )
280
+ self._save_tool_log(tool_response=function_response)
281
+
282
+ def append_system_msg(self, prompt):
283
+ self.conversation.append(
284
+ {'role': 'system', 'content': prompt}
285
+ )
286
+
287
+ def revoke_file(self):
288
+ if self.revocable_files:
289
+ file = self.revocable_files[-1]
290
+ bot_msg = file['bot_msg']
291
+ gpt_msg = file['gpt_msg']
292
+ path = file['path']
293
+
294
+ assert self.conversation[-1] is gpt_msg
295
+ del self.conversation[-1]
296
+
297
+ os.remove(path)
298
+
299
+ del self.revocable_files[-1]
300
+
301
+ return bot_msg
302
+ else:
303
+ return None
304
+
305
+ def update_gpt_model_choice(self, model_choice):
306
+ self.gpt_model_choice = model_choice
307
+ self._init_kwargs_for_chat_completion()
308
+
309
+ def update_token_count(self, num_tokens):
310
+ self.__setattr__('context_window_tokens', num_tokens)
311
+
312
+ def update_sliced_state(self, sliced):
313
+ self.__setattr__('sliced', sliced)
314
+
315
+ def send_interrupt_signal(self):
316
+ self.jupyter_kernel.send_interrupt_signal()
317
+ self.update_interrupt_signal_sent(interrupt_signal_sent=True)
318
+
319
+ def restart(self):
320
+ self.revocable_files.clear()
321
+ self._init_conversation()
322
+ self.reset_gpt_response_log_values()
323
+ self.jupyter_kernel.restart_jupyter_kernel()
324
+ self._clear_all_files_in_work_dir()
Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/src/cli.py ADDED
@@ -0,0 +1,108 @@
+from response_parser import *
+import copy
+import json
+from tqdm import tqdm
+import logging
+import argparse
+import os
+
+def initialization(state_dict: Dict) -> None:
+    if not os.path.exists('cache'):
+        os.mkdir('cache')
+    if state_dict["bot_backend"] is None:
+        state_dict["bot_backend"] = BotBackend()
+        if 'OPENAI_API_KEY' in os.environ:
+            del os.environ['OPENAI_API_KEY']
+
+def get_bot_backend(state_dict: Dict) -> BotBackend:
+    return state_dict["bot_backend"]
+
+def switch_to_gpt4(state_dict: Dict, whether_switch: bool) -> None:
+    bot_backend = get_bot_backend(state_dict)
+    if whether_switch:
+        bot_backend.update_gpt_model_choice("GPT-4")
+    else:
+        bot_backend.update_gpt_model_choice("GPT-3.5")
+
+def add_text(state_dict, history, text):
+    bot_backend = get_bot_backend(state_dict)
+    bot_backend.add_text_message(user_text=text)
+    history = history + [[text, None]]
+    return history, state_dict
+
+def bot(state_dict, history):
+    bot_backend = get_bot_backend(state_dict)
+    while bot_backend.finish_reason in ('new_input', 'function_call'):
+        if history[-1][1]:
+            history.append([None, ""])
+        else:
+            history[-1][1] = ""
+        logging.info("Start chat completion")
+        response = chat_completion(bot_backend=bot_backend)
+        logging.info(f"End chat completion, response: {response}")
+
+        logging.info("Start parse response")
+        history, _ = parse_response(
+            chunk=response,
+            history=history,
+            bot_backend=bot_backend
+        )
+        logging.info("End parse response")
+    return history
+
+def main(state, history, user_input):
+    history, state = add_text(state, history, user_input)
+    last_history = copy.deepcopy(history)
+    first_turn_flag = False
+    while True:
+        if first_turn_flag:
+            switch_to_gpt4(state, False)
+            first_turn_flag = False
+        else:
+            switch_to_gpt4(state, True)
+        logging.info("Start bot")
+        history = bot(state, history)
+        logging.info("End bot")
+        print(state["bot_backend"].conversation)
+        if last_history == copy.deepcopy(history):
+            logging.info("No new response, end conversation")
+            conversation = [item for item in state["bot_backend"].conversation if item["content"]]
+            return conversation
+        else:
+            logging.info("New response, continue conversation")
+            last_history = copy.deepcopy(history)
+
+if __name__ == "__main__":
+    parser = argparse.ArgumentParser()
+    parser.add_argument('--input_path', type=str)
+    parser.add_argument('--output_path', type=str)
+    args = parser.parse_args()
+
+    logging.basicConfig(level=logging.INFO)
+    logging.info("Initialization")
+
+    state = {"bot_backend": None}
+    history = []
+
+    initialization(state)
+    switch_to_gpt4(state_dict=state, whether_switch=True)
+
+    logging.info("Start")
+    with open(args.input_path, "r") as f:
+        instructions = [json.loads(line)["query"] for line in f.readlines()]
+    all_history = []
+    logging.info(f"{len(instructions)} remaining instructions for {args.input_path}")
+
+    for user_input_index, user_input in enumerate(tqdm(instructions)):
+        logging.info(f"Start conversation {user_input_index}")
+        conversation = main(state, history, user_input)
+        all_history.append(
+            {
+                "instruction": user_input,
+                "conversation": conversation
+            }
+        )
+        with open(f"{args.output_path}", "w") as f:
+            json.dump(all_history, f, indent=4, ensure_ascii=False)
+        state["bot_backend"].restart()
+
Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/src/functional.py ADDED
@@ -0,0 +1,197 @@
+from bot_backend import *
+import base64
+import time
+import tiktoken
+from notebook_serializer import add_code_cell_error_to_notebook, add_image_to_notebook, add_code_cell_output_to_notebook
+
+SLICED_CONV_MESSAGE = "[Rest of the conversation has been omitted to fit in the context window]"
+
+
+def get_conversation_slice(conversation, model, encoding_for_which_model, min_output_tokens_count=500):
+    """
+    Function to get a slice of the conversation that fits in the model's context window. returns: The conversation
+    with the first message (explaining the role of the assistant) + the last x messages that can fit in the context
+    window.
+    """
+    encoder = tiktoken.encoding_for_model(encoding_for_which_model)
+    count_tokens = lambda txt: len(encoder.encode(txt))
+    nb_tokens = count_tokens(conversation[0]['content'])
+    sliced_conv = [conversation[0]]
+    context_window_limit = int(config['model_context_window'][model])
+    max_tokens = context_window_limit - count_tokens(SLICED_CONV_MESSAGE) - min_output_tokens_count
+    sliced = False
+    for message in conversation[-1:0:-1]:
+        nb_tokens += count_tokens(message['content'])
+        if nb_tokens > max_tokens:
+            sliced_conv.insert(1, {'role': 'system', 'content': SLICED_CONV_MESSAGE})
+            sliced = True
+            break
+        sliced_conv.insert(1, message)
+    return sliced_conv, nb_tokens, sliced
+
+
+def chat_completion(bot_backend: BotBackend):
+    model_choice = bot_backend.gpt_model_choice
+    model_name = bot_backend.config['model'][model_choice]['model_name']
+    kwargs_for_chat_completion = copy.deepcopy(bot_backend.kwargs_for_chat_completion)
+    if bot_backend.config['API_TYPE'] == "azure":
+        kwargs_for_chat_completion['messages'], nb_tokens, sliced = \
+            get_conversation_slice(
+                conversation=kwargs_for_chat_completion['messages'],
+                model=model_name,
+                encoding_for_which_model='gpt-3.5-turbo' if model_choice == 'GPT-3.5' else 'gpt-4'
+            )
+    else:
+        kwargs_for_chat_completion['messages'], nb_tokens, sliced = \
+            get_conversation_slice(
+                conversation=kwargs_for_chat_completion['messages'],
+                model=model_name,
+                encoding_for_which_model=model_name
+            )
+
+    bot_backend.update_token_count(num_tokens=nb_tokens)
+    bot_backend.update_sliced_state(sliced=sliced)
+
+    assert config['model'][model_choice]['available'], f"{model_choice} is not available for your API key"
+
+    assert model_name in config['model_context_window'], \
+        f"{model_name} lacks context window information. Please check the config.json file."
+
+    response = openai.ChatCompletion.create(**kwargs_for_chat_completion)
+    return response
+
+
+def add_code_execution_result_to_bot_history(content_to_display, history, unique_id):
+    images, text = [], []
+
+    # terminal output
+    error_occurred = False
+
+    for mark, out_str in content_to_display:
+        if mark in ('stdout', 'execute_result_text', 'display_text'):
+            text.append(out_str)
+            add_code_cell_output_to_notebook(out_str)
+        elif mark in ('execute_result_png', 'execute_result_jpeg', 'display_png', 'display_jpeg'):
+            if 'png' in mark:
+                images.append(('png', out_str))
+                add_image_to_notebook(out_str, 'image/png')
+            else:
+                add_image_to_notebook(out_str, 'image/jpeg')
+                images.append(('jpg', out_str))
+        elif mark == 'error':
+            # Set output type to error
+            text.append(delete_color_control_char(out_str))
+            error_occurred = True
+            add_code_cell_error_to_notebook(out_str)
+    text = '\n'.join(text).strip('\n')
+    if error_occurred:
+        history.append([None, f'❌Terminal output:\n```shell\n\n{text}\n```'])
+    else:
+        history.append([None, f'✔️Terminal output:\n```shell\n{text}\n```'])
+
+    # image output
+    for filetype, img in images:
+        image_bytes = base64.b64decode(img)
+        temp_path = f'cache/temp_{unique_id}'
+        if not os.path.exists(temp_path):
+            os.mkdir(temp_path)
+        path = f'{temp_path}/{hash(time.time())}.{filetype}'
+        with open(path, 'wb') as f:
+            f.write(image_bytes)
+        width, height = get_image_size(path)
+        history.append(
+            [
+                None,
+                f'<img src=\"file={path}\" style=\'{"" if width < 800 else "width: 800px;"} max-width:none; '
+                f'max-height:none\'> '
+            ]
+        )
+
+
+def add_function_response_to_bot_history(hypertext_to_display, history):
+    if hypertext_to_display is not None:
+        if history[-1][1]:
+            history.append([None, hypertext_to_display])
+        else:
+            history[-1][1] = hypertext_to_display
+
+
+def parse_json(function_args: str, finished: bool):
+    """
+    GPT may generate non-standard JSON format string, which contains '\n' in string value, leading to error when using
+    `json.loads()`.
+    Here we implement a parser to extract code directly from non-standard JSON string.
+    :return: code string if successfully parsed otherwise None
+    """
+    parser_log = {
+        'met_begin_{': False,
+        'begin_"code"': False,
+        'end_"code"': False,
+        'met_:': False,
+        'met_end_}': False,
+        'met_end_code_"': False,
+        "code_begin_index": 0,
+        "code_end_index": 0
+    }
+    try:
+        for index, char in enumerate(function_args):
+            if char == '{':
+                parser_log['met_begin_{'] = True
+            elif parser_log['met_begin_{'] and char == '"':
+                if parser_log['met_:']:
+                    if finished:
+                        parser_log['code_begin_index'] = index + 1
+                        break
+                    else:
+                        if index + 1 == len(function_args):
+                            return None
+                        else:
+                            temp_code_str = function_args[index + 1:]
+                            if '\n' in temp_code_str:
+                                try:
+                                    return json.loads(function_args + '"}')['code']
+                                except json.JSONDecodeError:
+                                    try:
+                                        return json.loads(function_args + '}')['code']
+                                    except json.JSONDecodeError:
+                                        try:
+                                            return json.loads(function_args)['code']
+                                        except json.JSONDecodeError:
+                                            if temp_code_str[-1] in ('"', '\n'):
+                                                return None
+                                            else:
+                                                return temp_code_str.strip('\n')
+                            else:
+                                return json.loads(function_args + '"}')['code']
+                elif parser_log['begin_"code"']:
+                    parser_log['end_"code"'] = True
+                else:
+                    parser_log['begin_"code"'] = True
+            elif parser_log['end_"code"'] and char == ':':
+                parser_log['met_:'] = True
+            else:
+                continue
+        if finished:
+            for index, char in enumerate(function_args[::-1]):
+                back_index = -1 - index
+                if char == '}':
+                    parser_log['met_end_}'] = True
+                elif parser_log['met_end_}'] and char == '"':
+                    parser_log['code_end_index'] = back_index - 1
+                    break
+                else:
+                    continue
+            code_str = function_args[parser_log['code_begin_index']: parser_log['code_end_index'] + 1]
+            if '\n' in code_str:
+                return code_str.strip('\n')
+            else:
+                return json.loads(function_args)['code']
+
+    except Exception as e:
+        return None
+
+
+def get_image_size(image_path):
+    with Image.open(image_path) as img:
+        width, height = img.size
+    return width, height
Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/src/jupyter_backend.py ADDED
@@ -0,0 +1,108 @@
+import jupyter_client
+import re
+
+
+def delete_color_control_char(string):
+    ansi_escape = re.compile(r'(\x9B|\x1B\[)[0-?]*[ -\/]*[@-~]')
+    return ansi_escape.sub('', string)
+
+
+class JupyterKernel:
+    def __init__(self, work_dir):
+        self.kernel_manager, self.kernel_client = jupyter_client.manager.start_new_kernel(kernel_name='python3')
+        self.work_dir = work_dir
+        self.interrupt_signal = False
+        self._create_work_dir()
+        self.available_functions = {
+            'execute_code': self.execute_code,
+            'python': self.execute_code
+        }
+
+    def execute_code_(self, code):
+        msg_id = self.kernel_client.execute(code)
+
+        # Get the output of the code
+        msg_list = []
+        while True:
+            try:
+                iopub_msg = self.kernel_client.get_iopub_msg(timeout=1)
+                msg_list.append(iopub_msg)
+                if iopub_msg['msg_type'] == 'status' and iopub_msg['content'].get('execution_state') == 'idle':
+                    break
+            except:
+                if self.interrupt_signal:
+                    self.kernel_manager.interrupt_kernel()
+                    self.interrupt_signal = False
+                continue
+
+        all_output = []
+        for iopub_msg in msg_list:
+            if iopub_msg['msg_type'] == 'stream':
+                if iopub_msg['content'].get('name') == 'stdout':
+                    output = iopub_msg['content']['text']
+                    all_output.append(('stdout', output))
+            elif iopub_msg['msg_type'] == 'execute_result':
+                if 'data' in iopub_msg['content']:
+                    if 'text/plain' in iopub_msg['content']['data']:
+                        output = iopub_msg['content']['data']['text/plain']
+                        all_output.append(('execute_result_text', output))
+                    if 'text/html' in iopub_msg['content']['data']:
+                        output = iopub_msg['content']['data']['text/html']
+                        all_output.append(('execute_result_html', output))
+                    if 'image/png' in iopub_msg['content']['data']:
+                        output = iopub_msg['content']['data']['image/png']
+                        all_output.append(('execute_result_png', output))
+                    if 'image/jpeg' in iopub_msg['content']['data']:
+                        output = iopub_msg['content']['data']['image/jpeg']
+                        all_output.append(('execute_result_jpeg', output))
+            elif iopub_msg['msg_type'] == 'display_data':
+                if 'data' in iopub_msg['content']:
+                    if 'text/plain' in iopub_msg['content']['data']:
+                        output = iopub_msg['content']['data']['text/plain']
+                        all_output.append(('display_text', output))
+                    if 'text/html' in iopub_msg['content']['data']:
+                        output = iopub_msg['content']['data']['text/html']
+                        all_output.append(('display_html', output))
+                    if 'image/png' in iopub_msg['content']['data']:
+                        output = iopub_msg['content']['data']['image/png']
+                        all_output.append(('display_png', output))
+                    if 'image/jpeg' in iopub_msg['content']['data']:
+                        output = iopub_msg['content']['data']['image/jpeg']
+                        all_output.append(('display_jpeg', output))
+            elif iopub_msg['msg_type'] == 'error':
+                if 'traceback' in iopub_msg['content']:
+                    output = '\n'.join(iopub_msg['content']['traceback'])
+                    all_output.append(('error', output))
+
+        return all_output
+
+    def execute_code(self, code):
+        text_to_gpt = []
+        content_to_display = self.execute_code_(code)
+        for mark, out_str in content_to_display:
+            if mark in ('stdout', 'execute_result_text', 'display_text'):
+                text_to_gpt.append(out_str)
+            elif mark in ('execute_result_png', 'execute_result_jpeg', 'display_png', 'display_jpeg'):
+                text_to_gpt.append('[image]')
+            elif mark == 'error':
+                text_to_gpt.append(delete_color_control_char(out_str))
+
+        return '\n'.join(text_to_gpt), content_to_display
+
+    def _create_work_dir(self):
+        # set work dir in jupyter environment
+        init_code = f"import os\n" \
+                    f"if not os.path.exists('{self.work_dir}'):\n" \
+                    f"    os.mkdir('{self.work_dir}')\n" \
+                    f"os.chdir('{self.work_dir}')\n" \
+                    f"del os"
+        self.execute_code_(init_code)
+
+    def send_interrupt_signal(self):
+        self.interrupt_signal = True
+
+    def restart_jupyter_kernel(self):
+        self.kernel_client.shutdown()
+        self.kernel_manager, self.kernel_client = jupyter_client.manager.start_new_kernel(kernel_name='python3')
+        self.interrupt_signal = False
+        self._create_work_dir()
Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/src/notebook_serializer.py ADDED
@@ -0,0 +1,71 @@
+import nbformat
+from nbformat import v4 as nbf
+import ansi2html
+import os
+import argparse
+
+# main code
+parser = argparse.ArgumentParser()
+parser.add_argument("-n", "--notebook", help="Path to the output notebook", default=None, type=str)
+args = parser.parse_args()
+if args.notebook:
+    notebook_path = os.path.join(os.getcwd(), args.notebook)
+    base, ext = os.path.splitext(notebook_path)
+    if ext.lower() != '.ipynb':
+        notebook_path += '.ipynb'
+    if os.path.exists(notebook_path):
+        print(f'File at {notebook_path} already exists. Please choose a different file name.')
+        exit()
+
+# Global variable for code cells
+nb = nbf.new_notebook()
+
+
+def ansi_to_html(ansi_text):
+    converter = ansi2html.Ansi2HTMLConverter()
+    html_text = converter.convert(ansi_text)
+    return html_text
+
+
+def write_to_notebook():
+    if args.notebook:
+        with open(notebook_path, 'w', encoding='utf-8') as f:
+            nbformat.write(nb, f)
+
+
+def add_code_cell_to_notebook(code):
+    code_cell = nbf.new_code_cell(source=code)
+    nb['cells'].append(code_cell)
+    write_to_notebook()
+
+
+def add_code_cell_output_to_notebook(output):
+    html_content = ansi_to_html(output)
+    cell_output = nbf.new_output(output_type='display_data', data={'text/html': html_content})
+    nb['cells'][-1]['outputs'].append(cell_output)
+    write_to_notebook()
+
+
+def add_code_cell_error_to_notebook(error):
+    nbf_error_output = nbf.new_output(
+        output_type='error',
+        ename='Error',
+        evalue='Error message',
+        traceback=[error]
+    )
+    nb['cells'][-1]['outputs'].append(nbf_error_output)
+    write_to_notebook()
+
+
+def add_image_to_notebook(image, mime_type):
+    image_output = nbf.new_output(output_type='display_data', data={mime_type: image})
+    nb['cells'][-1]['outputs'].append(image_output)
+    write_to_notebook()
+
+
+def add_markdown_to_notebook(content, title=None):
+    if title:
+        content = "##### " + title + ":\n" + content
+    markdown_cell = nbf.new_markdown_cell(content)
+    nb['cells'].append(markdown_cell)
+    write_to_notebook()
Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/src/response_parser.py ADDED
@@ -0,0 +1,259 @@
+from functional import *
+
+
+class ChoiceStrategy(metaclass=ABCMeta):
+    def __init__(self, choice):
+        self.choice = choice
+        self.delta = choice['delta']
+
+    @abstractmethod
+    def support(self):
+        pass
+
+    @abstractmethod
+    def execute(self, bot_backend: BotBackend, history: List, whether_exit: bool):
+        pass
+
+
+class RoleChoiceStrategy(ChoiceStrategy):
+
+    def support(self):
+        return 'role' in self.delta
+
+    def execute(self, bot_backend: BotBackend, history: List, whether_exit: bool):
+        bot_backend.set_assistant_role_name(assistant_role_name=self.delta['role'])
+        return history, whether_exit
+
+
+class ContentChoiceStrategy(ChoiceStrategy):
+    def support(self):
+        return 'content' in self.delta and self.delta['content'] is not None
+        # null values of content often occur in function calls:
+        # {
+        #     "role": "assistant",
+        #     "content": null,
+        #     "function_call": {
+        #         "name": "python",
+        #         "arguments": ""
+        #     }
+        # }
+
+    def execute(self, bot_backend: BotBackend, history: List, whether_exit: bool):
+        bot_backend.add_content(content=self.delta.get('content', ''))
+        history[-1][1] = bot_backend.content
+        return history, whether_exit
+
+
+class NameFunctionCallChoiceStrategy(ChoiceStrategy):
+    def support(self):
+        return 'function_call' in self.delta and 'name' in self.delta['function_call']
+
+    def execute(self, bot_backend: BotBackend, history: List, whether_exit: bool):
+        python_function_dict = bot_backend.jupyter_kernel.available_functions
+        additional_tools = bot_backend.additional_tools
+        bot_backend.set_function_name(function_name=self.delta['function_call']['name'])
+        bot_backend.copy_current_bot_history(bot_history=history)
+        if bot_backend.function_name not in python_function_dict and bot_backend.function_name not in additional_tools:
+            history.append(
+                [
+                    None,
+                    f'GPT attempted to call a function that does '
+                    f'not exist: {bot_backend.function_name}\n '
+                ]
+            )
+            whether_exit = True
+
+        return history, whether_exit
+
+
+class ArgumentsFunctionCallChoiceStrategy(ChoiceStrategy):
+
+    def support(self):
+        return 'function_call' in self.delta and 'arguments' in self.delta['function_call']
+
+    def execute(self, bot_backend: BotBackend, history: List, whether_exit: bool):
+        bot_backend.add_function_args_str(function_args_str=self.delta['function_call']['arguments'])
+
+        if bot_backend.function_name == 'python':  # handle hallucinatory function calls
+            """
+            In practice, we have noticed that GPT, especially GPT-3.5, may occasionally produce hallucinatory
+            function calls. These calls involve a non-existent function named `python` with arguments consisting
+            solely of raw code text (not a JSON format).
+            """
+            temp_code_str = bot_backend.function_args_str
+            bot_backend.update_code_str(code_str=temp_code_str)
+            bot_backend.update_display_code_block(
+                display_code_block="\n🔴Working:\n```python\n{}\n```".format(temp_code_str)
+            )
+            history = copy.deepcopy(bot_backend.bot_history)
+            history[-1][1] += bot_backend.display_code_block
+        elif bot_backend.function_name == 'execute_code':
+            temp_code_str = parse_json(function_args=bot_backend.function_args_str, finished=False)
+            if temp_code_str is not None:
+                bot_backend.update_code_str(code_str=temp_code_str)
+                bot_backend.update_display_code_block(
+                    display_code_block="\n🔴Working:\n```python\n{}\n```".format(
+                        temp_code_str
+                    )
+                )
+                history = copy.deepcopy(bot_backend.bot_history)
+                history[-1][1] += bot_backend.display_code_block
+            else:
+                history = copy.deepcopy(bot_backend.bot_history)
+                history[-1][1] += bot_backend.display_code_block
+        else:
+            pass
+
+        return history, whether_exit
+
+
+class FinishReasonChoiceStrategy(ChoiceStrategy):
+    def support(self):
+        return self.choice['finish_reason'] is not None
+
+    def execute(self, bot_backend: BotBackend, history: List, whether_exit: bool):
+
+        if bot_backend.content:
+            bot_backend.add_gpt_response_content_message()
+
+        bot_backend.update_finish_reason(finish_reason=self.choice['finish_reason'])
+        if bot_backend.finish_reason == 'function_call':
+
+            if bot_backend.function_name in bot_backend.jupyter_kernel.available_functions:
+                history, whether_exit = self.handle_execute_code_finish_reason(
+                    bot_backend=bot_backend, history=history, whether_exit=whether_exit
+                )
+            else:
+                history, whether_exit = self.handle_tool_finish_reason(
+                    bot_backend=bot_backend, history=history, whether_exit=whether_exit
+                )
+
+        bot_backend.reset_gpt_response_log_values(exclude=['finish_reason'])
+
+        return history, whether_exit
+
+    def handle_execute_code_finish_reason(self, bot_backend: BotBackend, history: List, whether_exit: bool):
+        function_dict = bot_backend.jupyter_kernel.available_functions
+        try:
+
+            code_str = self.get_code_str(bot_backend)
+
+            bot_backend.update_code_str(code_str=code_str)
+            bot_backend.update_display_code_block(
+                display_code_block="\n🟢Finished:\n```python\n{}\n```".format(code_str)
+            )
+            history = copy.deepcopy(bot_backend.bot_history)
+            history[-1][1] += bot_backend.display_code_block
+
+            # function response
+            bot_backend.update_code_executing_state(code_executing=True)
+            text_to_gpt, content_to_display = function_dict[
+                bot_backend.function_name
+            ](code_str)
+            bot_backend.update_code_executing_state(code_executing=False)
+
+            # add function call to conversation
+            bot_backend.add_function_call_response_message(function_response=text_to_gpt, save_tokens=True)
+
+            if bot_backend.interrupt_signal_sent:
+                bot_backend.append_system_msg(prompt='Code execution is manually stopped by user, no need to fix.')
+
+            add_code_execution_result_to_bot_history(
+                content_to_display=content_to_display, history=history, unique_id=bot_backend.unique_id
+            )
+            return history, whether_exit
+
+        except json.JSONDecodeError:
+            history.append(
+                [None, f"GPT generated wrong function args: {bot_backend.function_args_str}"]
+            )
+            whether_exit = True
+            return history, whether_exit
+
+        except KeyError as key_error:
+            history.append([None, f'Backend key_error: {key_error}'])
+            whether_exit = True
+            return history, whether_exit
+
+        except Exception as e:
+            history.append([None, f'Backend error: {e}'])
+            whether_exit = True
+            return history, whether_exit
+
+    @staticmethod
+    def handle_tool_finish_reason(bot_backend: BotBackend, history: List, whether_exit: bool):
+        function_dict = bot_backend.additional_tools
+        function_name = bot_backend.function_name
+        function = function_dict[function_name]['tool']
+
+        # parse function args
+        try:
+            kwargs = json.loads(bot_backend.function_args_str)
+            kwargs.update(function_dict[function_name]['additional_parameters'])
+        except json.JSONDecodeError:
+            history.append(
+                [None, f"GPT generated wrong function args: {bot_backend.function_args_str}"]
+            )
+            whether_exit = True
+            return history, whether_exit
+
+        else:
+            # function response
+            function_response, hypertext_to_display = function(**kwargs)
+
+            # add function call to conversation
+            bot_backend.add_function_call_response_message(function_response=function_response, save_tokens=False)
+
+            # add hypertext response to bot history
+            add_function_response_to_bot_history(hypertext_to_display=hypertext_to_display, history=history)
+
+            return history, whether_exit
+
+    @staticmethod
+    def get_code_str(bot_backend):
+        if bot_backend.function_name == 'python':
+            code_str = bot_backend.function_args_str
+        else:
+            code_str = parse_json(function_args=bot_backend.function_args_str, finished=True)
+            if code_str is None:
+                # raise a properly constructed JSONDecodeError so the caller's handler catches it
+                raise json.JSONDecodeError('Cannot parse code from function args', bot_backend.function_args_str, 0)
+        return code_str
+
+
+class ChoiceHandler:
+    strategies = [
+        RoleChoiceStrategy, ContentChoiceStrategy, NameFunctionCallChoiceStrategy,
+        ArgumentsFunctionCallChoiceStrategy, FinishReasonChoiceStrategy
+    ]
+
+    def __init__(self, choice):
+        self.choice = choice
+
+    def handle(self, bot_backend: BotBackend, history: List, whether_exit: bool):
+        for Strategy in self.strategies:
+            strategy_instance = Strategy(choice=self.choice)
+            if not strategy_instance.support():
+                continue
+            history, whether_exit = strategy_instance.execute(
+                bot_backend=bot_backend,
+                history=history,
+                whether_exit=whether_exit
+            )
+        return history, whether_exit
+
+
+def parse_response(chunk, history: List, bot_backend: BotBackend):
+    """
+    :return: history, whether_exit
+    """
+    whether_exit = False
+    if chunk['choices']:
+        choice = chunk['choices'][0]
+        choice_handler = ChoiceHandler(choice=choice)
+        history, whether_exit = choice_handler.handle(
+            history=history,
+            bot_backend=bot_backend,
+            whether_exit=whether_exit
+        )
+
+    return history, whether_exit
Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/src/tools.py ADDED
@@ -0,0 +1,202 @@
1
+ import openai
2
+ import base64
3
+ import os
4
+ import io
5
+ import time
6
+ from PIL import Image
7
+ from abc import ABCMeta, abstractmethod
8
+
9
+
10
+ def create_vision_chat_completion(vision_model, base64_image, prompt):
11
+ try:
12
+ response = openai.ChatCompletion.create(
13
+ model=vision_model,
14
+ messages=[
15
+ {
16
+ "role": "user",
17
+ "content": [
18
+ {"type": "text", "text": prompt},
19
+ {
20
+ "type": "image_url",
21
+ "image_url": {
22
+ "url": f"data:image/jpeg;base64,{base64_image}",
23
+ },
24
+ },
25
+ ],
26
+ }
27
+ ],
28
+ max_tokens=1000,
29
+ )
30
+ return response.choices[0].message.content
31
+ except:
32
+ return None
33
+
34
+
35
+ def create_image(prompt):
36
+ try:
37
+ response = openai.Image.create(
38
+ model="dall-e-3",
39
+ prompt=prompt,
40
+ response_format="b64_json"
41
+ )
42
+         return response.data[0]['b64_json']
+     except:
+         return None
+
+
+ def image_to_base64(path):
+     try:
+         _, suffix = os.path.splitext(path)
+         if suffix not in {'.jpg', '.jpeg', '.png', '.webp'}:
+             img = Image.open(path)
+             img_png = img.convert('RGB')
+             img_png.tobytes()
+             byte_buffer = io.BytesIO()
+             img_png.save(byte_buffer, 'PNG')
+             encoded_string = base64.b64encode(byte_buffer.getvalue()).decode('utf-8')
+         else:
+             with open(path, "rb") as image_file:
+                 encoded_string = base64.b64encode(image_file.read()).decode('utf-8')
+         return encoded_string
+     except:
+         return None
+
+
+ def base64_to_image_bytes(image_base64):
+     try:
+         return base64.b64decode(image_base64)
+     except:
+         return None
+
+
+ def inquire_image(work_dir, vision_model, path, prompt):
+     image_base64 = image_to_base64(f'{work_dir}/{path}')
+     hypertext_to_display = None
+     if image_base64 is None:
+         return "Error: Image transform error", None
+     else:
+         response = create_vision_chat_completion(vision_model, image_base64, prompt)
+         if response is None:
+             return "Model response error", None
+         else:
+             return response, hypertext_to_display
+
+
+ def dalle(unique_id, prompt):
+     img_base64 = create_image(prompt)
+     text_to_gpt = "Image has been successfully generated and displayed to user."
+
+     if img_base64 is None:
+         return "Error: Model response error", None
+
+     img_bytes = base64_to_image_bytes(img_base64)
+     if img_bytes is None:
+         return "Error: Image transform error", None
+
+     temp_path = f'cache/temp_{unique_id}'
+     if not os.path.exists(temp_path):
+         os.mkdir(temp_path)
+     path = f'{temp_path}/{hash(time.time())}.png'
+
+     with open(path, 'wb') as f:
+         f.write(img_bytes)
+
+     hypertext_to_display = f'<img src=\"file={path}\" width="50%" style=\'max-width:none; max-height:none\'>'
+     return text_to_gpt, hypertext_to_display
+
+
+ class Tool(metaclass=ABCMeta):
+     def __init__(self, config):
+         self.config = config
+
+     @abstractmethod
+     def support(self):
+         pass
+
+     @abstractmethod
+     def get_tool_data(self):
+         pass
+
+
+ class ImageInquireTool(Tool):
+     def support(self):
+         return self.config['model']['GPT-4V']['available']
+
+     def get_tool_data(self):
+         return {
+             "tool_name": "inquire_image",
+             "tool": inquire_image,
+             "system_prompt": "If necessary, utilize the 'inquire_image' tool to query an AI model regarding the "
+                              "content of images uploaded by users. Avoid phrases like \"based on the analysis\"; "
+                              "instead, respond as if you viewed the image by yourself. Keep in mind that not every "
+                              "task related to images requires knowledge of the image content, such as converting "
+                              "an image format or extracting image file attributes, which should use the `execute_code` "
+                              "tool instead. Use the tool only when understanding the image content is necessary.",
+             "tool_description": {
+                 "name": "inquire_image",
+                 "description": "This function enables you to inquire with an AI model about the contents of an image "
+                                "and receive the model's response.",
+                 "parameters": {
+                     "type": "object",
+                     "properties": {
+                         "path": {
+                             "type": "string",
+                             "description": "File path of the image"
+                         },
+                         "prompt": {
+                             "type": "string",
+                             "description": "The question you want to pose to the AI model about the image"
+                         }
+                     },
+                     "required": ["path", "prompt"]
+                 }
+             },
+             "additional_parameters": {
+                 "work_dir": lambda bot_backend: bot_backend.jupyter_work_dir,
+                 "vision_model": self.config['model']['GPT-4V']['model_name']
+             }
+         }
+
+
+ class DALLETool(Tool):
+     def support(self):
+         return True
+
+     def get_tool_data(self):
+         return {
+             "tool_name": "dalle",
+             "tool": dalle,
+             "system_prompt": "If the user asks you to generate an art image, you can translate the user's "
+                              "requirements into a prompt and send it to the `dalle` tool. Please note that this tool "
+                              "is specifically designed for creating art images. For scientific figures, such as "
+                              "plots, please use the Python code execution tool `execute_code` instead.",
+             "tool_description": {
+                 "name": "dalle",
+                 "description": "This function allows you to access OpenAI's DALL·E-3 model for image generation.",
+                 "parameters": {
+                     "type": "object",
+                     "properties": {
+                         "prompt": {
+                             "type": "string",
+                             "description": "A detailed description of the image you want to generate, should be in "
+                                            "English only. "
+                         }
+                     },
+                     "required": ["prompt"]
+                 }
+             },
+             "additional_parameters": {
+                 "unique_id": lambda bot_backend: bot_backend.unique_id,
+             }
+         }
+
+
+ def get_available_tools(config):
+     tools = [ImageInquireTool]
+
+     available_tools = []
+     for tool in tools:
+         tool_instance = tool(config)
+         if tool_instance.support():
+             available_tools.append(tool_instance.get_tool_data())
+     return available_tools
Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/Local-Code-Interpreter/src/web_ui.py ADDED
@@ -0,0 +1,279 @@
+ import gradio as gr
+ from response_parser import *
+
+
+ def initialization(state_dict: Dict) -> None:
+     if not os.path.exists('cache'):
+         os.mkdir('cache')
+     if state_dict["bot_backend"] is None:
+         state_dict["bot_backend"] = BotBackend()
+         if 'OPENAI_API_KEY' in os.environ:
+             del os.environ['OPENAI_API_KEY']
+
+
+ def get_bot_backend(state_dict: Dict) -> BotBackend:
+     return state_dict["bot_backend"]
+
+
+ def switch_to_gpt4(state_dict: Dict, whether_switch: bool) -> None:
+     bot_backend = get_bot_backend(state_dict)
+     if whether_switch:
+         bot_backend.update_gpt_model_choice("GPT-4")
+     else:
+         bot_backend.update_gpt_model_choice("GPT-3.5")
+
+
+ def add_text(state_dict: Dict, history: List, text: str) -> Tuple[List, Dict]:
+     bot_backend = get_bot_backend(state_dict)
+     bot_backend.add_text_message(user_text=text)
+
+     history = history + [(text, None)]
+
+     return history, gr.update(value="", interactive=False)
+
+
+ def add_file(state_dict: Dict, history: List, files) -> List:
+     bot_backend = get_bot_backend(state_dict)
+     for file in files:
+         path = file.name
+         filename = os.path.basename(path)
+
+         bot_msg = [f'📁[{filename}]', None]
+         history.append(bot_msg)
+
+         bot_backend.add_file_message(path=path, bot_msg=bot_msg)
+
+         _, suffix = os.path.splitext(filename)
+         if suffix in {'.jpg', '.jpeg', '.png', '.bmp', '.webp'}:
+             copied_file_path = f'{bot_backend.jupyter_work_dir}/{filename}'
+             width, height = get_image_size(copied_file_path)
+             bot_msg[0] += \
+                 f'\n<img src=\"file={copied_file_path}\" style=\'{"" if width < 800 else "width: 800px;"} max-width' \
+                 f':none; max-height:none\'> '
+
+     return history
+
+
+ def undo_upload_file(state_dict: Dict, history: List) -> Tuple[List, Dict]:
+     bot_backend = get_bot_backend(state_dict)
+     bot_msg = bot_backend.revoke_file()
+
+     if bot_msg is None:
+         return history, gr.Button.update(interactive=False)
+
+     else:
+         assert history[-1] == bot_msg
+         del history[-1]
+         if bot_backend.revocable_files:
+             return history, gr.Button.update(interactive=True)
+         else:
+             return history, gr.Button.update(interactive=False)
+
+
+ def refresh_file_display(state_dict: Dict) -> List[str]:
+     bot_backend = get_bot_backend(state_dict)
+     work_dir = bot_backend.jupyter_work_dir
+     filenames = os.listdir(work_dir)
+     paths = []
+     for filename in filenames:
+         path = os.path.join(work_dir, filename)
+         if not os.path.isdir(path):
+             paths.append(path)
+     return paths
+
+
+ def refresh_token_count(state_dict: Dict):
+     bot_backend = get_bot_backend(state_dict)
+     model_choice = bot_backend.gpt_model_choice
+     sliced = bot_backend.sliced
+     token_count = bot_backend.context_window_tokens
+     token_limit = config['model_context_window'][config['model'][model_choice]['model_name']]
+     display_text = f"**Context token:** {token_count}/{token_limit}"
+     if sliced:
+         display_text += '\\\nToken limit exceeded, conversation has been sliced.'
+     return gr.Markdown.update(value=display_text)
+
+
+ def restart_ui(history: List) -> Tuple[List, Dict, Dict, Dict, Dict, Dict, Dict]:
+     history.clear()
+     return (
+         history,
+         gr.Textbox.update(value="", interactive=False),
+         gr.Button.update(interactive=False),
+         gr.Button.update(interactive=False),
+         gr.Button.update(interactive=False),
+         gr.Button.update(interactive=False),
+         gr.Button.update(visible=False)
+     )
+
+
+ def restart_bot_backend(state_dict: Dict) -> None:
+     bot_backend = get_bot_backend(state_dict)
+     bot_backend.restart()
+
+
+ def stop_generating(state_dict: Dict) -> None:
+     bot_backend = get_bot_backend(state_dict)
+     if bot_backend.code_executing:
+         bot_backend.send_interrupt_signal()
+     else:
+         bot_backend.update_stop_generating_state(stop_generating=True)
+
+
+ def bot(state_dict: Dict, history: List) -> List:
+     bot_backend = get_bot_backend(state_dict)
+
+     while bot_backend.finish_reason in ('new_input', 'function_call'):
+         if history[-1][1]:
+             history.append([None, ""])
+         else:
+             history[-1][1] = ""
+
+         try:
+             response = chat_completion(bot_backend=bot_backend)
+             for chunk in response:
+                 if chunk['choices'] and chunk['choices'][0]['finish_reason'] == 'function_call':
+                     if bot_backend.function_name in bot_backend.jupyter_kernel.available_functions:
+                         yield history, gr.Button.update(value='⏹️ Interrupt execution'), gr.Button.update(visible=False)
+                     else:
+                         yield history, gr.Button.update(interactive=False), gr.Button.update(visible=False)
+
+                 if bot_backend.stop_generating:
+                     response.close()
+                     if bot_backend.content:
+                         bot_backend.add_gpt_response_content_message()
+                     if bot_backend.display_code_block:
+                         bot_backend.update_display_code_block(
+                             display_code_block="\n⚫Stopped:\n```python\n{}\n```".format(bot_backend.code_str)
+                         )
+                         history = copy.deepcopy(bot_backend.bot_history)
+                         history[-1][1] += bot_backend.display_code_block
+                         bot_backend.add_function_call_response_message(function_response=None)
+
+                     bot_backend.reset_gpt_response_log_values()
+                     break
+
+                 history, weather_exit = parse_response(
+                     chunk=chunk,
+                     history=history,
+                     bot_backend=bot_backend
+                 )
+
+                 yield (
+                     history,
+                     gr.Button.update(
+                         interactive=False if bot_backend.stop_generating else True,
+                         value='⏹️ Stop generating'
+                     ),
+                     gr.Button.update(visible=False)
+                 )
+                 if weather_exit:
+                     exit(-1)
+         except openai.OpenAIError as openai_error:
+             bot_backend.reset_gpt_response_log_values(exclude=['finish_reason'])
+             yield history, gr.Button.update(interactive=False), gr.Button.update(visible=True)
+             raise openai_error
+
+     yield history, gr.Button.update(interactive=False, value='⏹️ Stop generating'), gr.Button.update(visible=False)
+
+
+ if __name__ == '__main__':
+     config = get_config()
+     with gr.Blocks(theme=gr.themes.Base()) as block:
+         """
+         Reference: https://www.gradio.app/guides/creating-a-chatbot-fast
+         """
+         # UI components
+         state = gr.State(value={"bot_backend": None})
+         with gr.Tab("Chat"):
+             chatbot = gr.Chatbot([], elem_id="chatbot", label="Local Code Interpreter", height=750)
+             with gr.Row():
+                 with gr.Column(scale=0.85):
+                     text_box = gr.Textbox(
+                         show_label=False,
+                         placeholder="Enter text and press enter, or upload a file",
+                         container=False
+                     )
+                 with gr.Column(scale=0.15, min_width=0):
+                     file_upload_button = gr.UploadButton("📁", file_count='multiple', file_types=['file'])
+             with gr.Row(equal_height=True):
+                 with gr.Column(scale=0.08, min_width=0):
+                     check_box = gr.Checkbox(label="Use GPT-4", interactive=config['model']['GPT-4']['available'])
+                 with gr.Column(scale=0.314, min_width=0):
+                     model_token_limit = config['model_context_window'][config['model']['GPT-3.5']['model_name']]
+                     token_count_display_text = f"**Context token:** 0/{model_token_limit}"
+                     token_monitor = gr.Markdown(value=token_count_display_text)
+                 with gr.Column(scale=0.15, min_width=0):
+                     retry_button = gr.Button(value='🔂OpenAI Error, click here to retry', visible=False)
+                 with gr.Column(scale=0.15, min_width=0):
+                     stop_generation_button = gr.Button(value='⏹️ Stop generating', interactive=False)
+                 with gr.Column(scale=0.15, min_width=0):
+                     restart_button = gr.Button(value='🔄 Restart')
+                 with gr.Column(scale=0.15, min_width=0):
+                     undo_file_button = gr.Button(value="↩️Undo upload file", interactive=False)
+         with gr.Tab("Files"):
+             file_output = gr.Files()
+
+         # Components function binding
+         txt_msg = text_box.submit(add_text, [state, chatbot, text_box], [chatbot, text_box], queue=False).then(
+             lambda: gr.Button.update(interactive=False), None, [undo_file_button], queue=False
+         ).then(
+             bot, [state, chatbot], [chatbot, stop_generation_button, retry_button]
+         )
+         txt_msg.then(fn=refresh_file_display, inputs=[state], outputs=[file_output])
+         txt_msg.then(lambda: gr.update(interactive=True), None, [text_box], queue=False)
+         txt_msg.then(fn=refresh_token_count, inputs=[state], outputs=[token_monitor])
+
+         retry_button.click(lambda: gr.Button.update(visible=False), None, [retry_button], queue=False).then(
+             bot, [state, chatbot], [chatbot, stop_generation_button, retry_button]
+         ).then(
+             fn=refresh_file_display, inputs=[state], outputs=[file_output]
+         ).then(
+             lambda: gr.update(interactive=True), None, [text_box], queue=False
+         ).then(
+             fn=refresh_token_count, inputs=[state], outputs=[token_monitor]
+         )
+
+         check_box.change(fn=switch_to_gpt4, inputs=[state, check_box]).then(
+             fn=refresh_token_count, inputs=[state], outputs=[token_monitor]
+         )
+
+         file_msg = file_upload_button.upload(
+             add_file, [state, chatbot, file_upload_button], [chatbot], queue=False
+         )
+         file_msg.then(lambda: gr.Button.update(interactive=True), None, [undo_file_button], queue=False)
+         file_msg.then(fn=refresh_file_display, inputs=[state], outputs=[file_output])
+
+         undo_file_button.click(
+             fn=undo_upload_file, inputs=[state, chatbot], outputs=[chatbot, undo_file_button]
+         ).then(
+             fn=refresh_file_display, inputs=[state], outputs=[file_output]
+         )
+
+         stop_generation_button.click(fn=stop_generating, inputs=[state], queue=False).then(
+             fn=lambda: gr.Button.update(interactive=False), inputs=None, outputs=[stop_generation_button], queue=False
+         )
+
+         restart_button.click(
+             fn=restart_ui, inputs=[chatbot],
+             outputs=[
+                 chatbot, text_box, restart_button, file_upload_button, undo_file_button, stop_generation_button,
+                 retry_button
+             ]
+         ).then(
+             fn=restart_bot_backend, inputs=[state], queue=False
+         ).then(
+             fn=refresh_file_display, inputs=[state], outputs=[file_output]
+         ).then(
+             fn=lambda: (gr.Textbox.update(interactive=True), gr.Button.update(interactive=True),
+                         gr.Button.update(interactive=True)),
+             inputs=None, outputs=[text_box, restart_button, file_upload_button], queue=False
+         ).then(
+             fn=refresh_token_count,
+             inputs=[state], outputs=[token_monitor]
+         )
+
+         block.load(fn=initialization, inputs=[state])
+
+     block.queue()
+     block.launch(inbrowser=True)
Llama2-Code-Interpreter-main/OpenCodeInterpreter/data_collection/README.md ADDED
@@ -0,0 +1,3 @@
+ # Data Collection
+
+ Our data collection process is implemented using https://github.com/MrGreyfun/Local-Code-Interpreter. To achieve more efficient data collection, we implemented a command-line interface in place of the original web UI. Please check out `cli.py` for more details.
Llama2-Code-Interpreter-main/OpenCodeInterpreter/demo/LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2023 Magnetic2014
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
Llama2-Code-Interpreter-main/OpenCodeInterpreter/demo/README.md ADDED
@@ -0,0 +1,50 @@
+ # OpenCodeInterpreter Demo
+
+ Based on our powerful OpenCodeInterpreter models, this project allows an LLM to generate code, execute it, receive feedback, debug, and answer questions based on the whole process. It is designed to be intuitive and versatile, capable of dealing with multiple languages and frameworks.
+
+
+ ## Disclaimer
+
+ This demo is designed to leverage large language models for generating code, which is then executed in a Jupyter environment. Before you begin using this project, it is important that you read and understand the following disclaimer:
+
+ - **Academic Nature and Security Risks:** This project is developed for academic purposes only and is not designed to be fully secure against all forms of code attacks. While we strive to maintain a safe environment, we cannot guarantee the security of your data during use. We urge all users to refrain from executing malicious code intentionally. By choosing to use this project, you acknowledge the potential risks to your data and agree to proceed with caution.
+
+ - **Model Compatibility Notice:** Please be advised that our demo is only guaranteed to be compatible with the `opencodeinterpreter` model. We cannot ensure that using other models will achieve the expected output or performance. Users attempting to substitute or use models other than the officially recommended ones do so at their own risk, and may encounter issues with performance mismatches or other related risks. We encourage users to fully understand the potential impacts before making any such modifications.
+
+ - **User Responsibility:** Users are responsible for the code they generate and execute using this project. We strongly advise against running any code without a thorough understanding of its function and potential impact. Users should take precautions to protect their own data and the integrity of their systems.
+
+ - **Limitation of Liability:** The creators and maintainers of this project will not be liable for any damages, data loss, or security breaches that may occur from using this service. Users assume all responsibility and risk associated with their use of the project.
+
+ - **Changes to the Disclaimer:** This disclaimer is subject to change at any time. We will make efforts to communicate any changes through the project's official channels, but it is the responsibility of the users to review this disclaimer periodically to ensure they are aware of any updates.
+
+ By using this demo, you acknowledge that you have read this disclaimer, understand its terms, and agree to be bound by them.
+
+
+ ## Features
+
+ - **Multi-user support**
+
+ - **Save your conversations to both Huggingface datasets and offline json files**
+
+ ## License
+
+ Distributed under the MIT License. See `LICENSE` for more information.
+
+ ## Acknowledgement
+
+ This project is based on [Llama2-Code-Interpreter](https://github.com/SeungyounShin/Llama2-Code-Interpreter).
+
+ ---
+
+ ## Citation
+
+ If you find this demo useful for your research, please kindly cite our paper:
+
+ ```
+ @article{zheng2024opencodeinterpreter,
+   title={OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement},
+   author={Zheng, Tianyu and Zhang, Ge and Shen, Tianhao and Liu, Xueling and Lin, Bill Yuchen and Fu, Jie and Chen, Wenhu and Yue, Xiang},
+   journal={arXiv preprint arXiv:2402.14658},
+   year={2024}
+ }
+ ```
Llama2-Code-Interpreter-main/OpenCodeInterpreter/demo/assets/assistant.pic.jpg ADDED
Llama2-Code-Interpreter-main/OpenCodeInterpreter/demo/assets/user.pic.jpg ADDED
Llama2-Code-Interpreter-main/OpenCodeInterpreter/demo/chatbot.py ADDED
@@ -0,0 +1,316 @@
+ import ast
+ import gradio as gr
+ import os
+ import re
+ import json
+ import logging
+
+ import torch
+ from datetime import datetime
+
+ from threading import Thread
+ from typing import Optional
+ from transformers import TextIteratorStreamer
+ from functools import partial
+ from huggingface_hub import CommitScheduler
+ from uuid import uuid4
+ from pathlib import Path
+
+ from code_interpreter.JupyterClient import JupyterNotebook
+
+ MAX_INPUT_TOKEN_LENGTH = int(os.getenv("MAX_INPUT_TOKEN_LENGTH", "4096"))
+
+ import warnings
+
+ warnings.filterwarnings("ignore", category=UserWarning, module="transformers")
+ os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"
+
+
+ from code_interpreter.OpenCodeInterpreter import OpenCodeInterpreter
+
+ JSON_DATASET_DIR = Path("json_dataset")
+ JSON_DATASET_DIR.mkdir(parents=True, exist_ok=True)
+
+ scheduler = CommitScheduler(
+     repo_id="opencodeinterpreter_user_data",
+     repo_type="dataset",
+     folder_path=JSON_DATASET_DIR,
+     path_in_repo="data",
+     private=True
+ )
+
+ logging.basicConfig(level=logging.INFO)
+
+ class StreamingOpenCodeInterpreter(OpenCodeInterpreter):
+     streamer: Optional[TextIteratorStreamer] = None
+
+     # overwrite generate function
+     @torch.inference_mode()
+     def generate(
+         self,
+         prompt: str = "",
+         max_new_tokens = 1024,
+         do_sample: bool = False,
+         top_p: float = 0.95,
+         top_k: int = 50,
+     ) -> str:
+         # Get the model and tokenizer, and tokenize the user text.
+
+         self.streamer = TextIteratorStreamer(
+             self.tokenizer, skip_prompt=True, timeout=5
+         )
+
+         inputs = self.tokenizer([prompt], return_tensors="pt", truncation=True, max_length=MAX_INPUT_TOKEN_LENGTH)
+         inputs = inputs.to(self.model.device)
+
+         kwargs = dict(
+             **inputs,
+             streamer=self.streamer,
+             max_new_tokens=max_new_tokens,
+             do_sample=do_sample,
+             top_p=top_p,
+             top_k=top_k,
+             eos_token_id=self.tokenizer.eos_token_id
+         )
+
+         thread = Thread(target=self.model.generate, kwargs=kwargs)
+         thread.start()
+
+         return ""
+
+ def save_json(dialog, mode, json_file_path, dialog_id) -> None:
+     with scheduler.lock:
+         with json_file_path.open("a") as f:
+             json.dump({"id": dialog_id, "dialog": dialog, "mode": mode, "datetime": datetime.now().isoformat()}, f, ensure_ascii=False)
+             f.write("\n")
+
+ def convert_history(gradio_history: list[list], interpreter_history: list[dict]):
+     interpreter_history = [interpreter_history[0]] if interpreter_history and interpreter_history[0]["role"] == "system" else []
+     if not gradio_history:
+         return interpreter_history
+     for item in gradio_history:
+         if item[0] is not None:
+             interpreter_history.append({"role": "user", "content": item[0]})
+         if item[1] is not None:
+             interpreter_history.append({"role": "assistant", "content": item[1]})
+     return interpreter_history
+
+ def update_uuid(dialog_info):
+     new_uuid = str(uuid4())
+     logging.info(f"allocating new uuid {new_uuid} for conversation...")
+     return [new_uuid, dialog_info[1]]
+
+ def is_valid_python_code(code):
+     try:
+         ast.parse(code)
+         return True
+     except SyntaxError:
+         return False
+
+
+ class InputFunctionVisitor(ast.NodeVisitor):
+     def __init__(self):
+         self.found_input = False
+
+     def visit_Call(self, node):
+         if isinstance(node.func, ast.Name) and node.func.id == 'input':
+             self.found_input = True
+         self.generic_visit(node)
+
+ def has_input_function_calls(code):
+     try:
+         tree = ast.parse(code)
+     except SyntaxError:
+         return False
+     visitor = InputFunctionVisitor()
+     visitor.visit(tree)
+     return visitor.found_input
+
+ def gradio_launch(model_path: str, MAX_TRY: int = 3):
+     with gr.Blocks() as demo:
+         chatbot = gr.Chatbot(height=600, label="OpenCodeInterpreter", avatar_images=["assets/user.pic.jpg", "assets/assistant.pic.jpg"], show_copy_button=True)
+         with gr.Group():
+             with gr.Row():
+                 msg = gr.Textbox(
+                     container=False,
+                     show_label=False,
+                     label="Message",
+                     placeholder="Type a message...",
+                     scale=7,
+                     autofocus=True
+                 )
+                 sub = gr.Button(
+                     "Submit",
+                     variant="primary",
+                     scale=1,
+                     min_width=150
+                 )
+                 # stop = gr.Button(
+                 #     "Stop",
+                 #     variant="stop",
+                 #     visible=False,
+                 #     scale=1,
+                 #     min_width=150
+                 # )
+
+         with gr.Row():
+             # retry = gr.Button("🔄 Retry", variant="secondary")
+             # undo = gr.Button("↩️ Undo", variant="secondary")
+             clear = gr.Button("🗑️ Clear", variant="secondary")
+
+         session_state = gr.State([])
+         jupyter_state = gr.State(JupyterNotebook())
+         dialog_info = gr.State(["", 0])
+         demo.load(update_uuid, dialog_info, dialog_info)
+
+         def bot(user_message, history, jupyter_state, dialog_info, interpreter):
+             logging.info(f"user message: {user_message}")
+             interpreter.dialog = convert_history(gradio_history=history, interpreter_history=interpreter.dialog)
+             history.append([user_message, None])
+
+             interpreter.dialog.append({"role": "user", "content": user_message})
+
+             # setup
+             HAS_CODE = False  # For now
+             prompt = interpreter.dialog_to_prompt(dialog=interpreter.dialog)
+
+             _ = interpreter.generate(prompt)
+             history[-1][1] = ""
+             generated_text = ""
+             for character in interpreter.streamer:
+                 history[-1][1] += character
+                 history[-1][1] = history[-1][1].replace("<|EOT|>", "")
+                 generated_text += character
+                 yield history, history, jupyter_state, dialog_info
+
+             if is_valid_python_code(history[-1][1].strip()):
+                 history[-1][1] = f"```python\n{history[-1][1].strip()}\n```"
+                 generated_text = history[-1][1]
+
+             HAS_CODE, generated_code_block = interpreter.extract_code_blocks(
+                 generated_text
+             )
+
+             interpreter.dialog.append(
+                 {
+                     "role": "assistant",
+                     "content": generated_text.replace("<unk>_", "")
+                     .replace("<unk>", "")
+                     .replace("<|EOT|>", ""),
+                 }
+             )
+
+             logging.info(f"saving current dialog to file {dialog_info[0]}.json...")
+             logging.info(f"current dialog: {interpreter.dialog}")
+             save_json(interpreter.dialog, mode="openci_only", json_file_path=JSON_DATASET_DIR/f"{dialog_info[0]}.json", dialog_id=dialog_info[0])
+
+             attempt = 1
+             while HAS_CODE:
+                 if attempt > MAX_TRY:
+                     break
+                 # if no code then doesn't have to execute it
+                 generated_text = ""  # clear generated text
+
+                 yield history, history, jupyter_state, dialog_info
+
+                 # replace unknown thing to none ''
+                 generated_code_block = generated_code_block.replace(
+                     "<unk>_", ""
+                 ).replace("<unk>", "")
+
+                 if has_input_function_calls(generated_code_block):
+                     code_block_output = "Please directly assign the value of inputs instead of using input() function in your code."
+                 else:
+                     (
+                         code_block_output,
+                         error_flag,
+                     ) = interpreter.execute_code_and_return_output(
+                         f"{generated_code_block}",
+                         jupyter_state
+                     )
+                     if error_flag == "Timeout":
+                         logging.info(f"{dialog_info[0]}: Restart jupyter kernel due to timeout")
+                         jupyter_state = JupyterNotebook()
+                     code_block_output = interpreter.clean_code_output(code_block_output)
+
+                 if code_block_output.strip():
+                     code_block_output = "Execution result: \n" + code_block_output
+                 else:
+                     code_block_output = "Code is executed, but result is empty. Please make sure that you include test case in your code."
+
+                 history.append([code_block_output, ""])
+
+                 interpreter.dialog.append({"role": "user", "content": code_block_output})
+
+                 yield history, history, jupyter_state, dialog_info
+
+                 prompt = interpreter.dialog_to_prompt(dialog=interpreter.dialog)
+
+                 logging.info(f"generating answer for dialog {dialog_info[0]}")
+                 _ = interpreter.generate(prompt)
+                 for character in interpreter.streamer:
+                     history[-1][1] += character
+                     history[-1][1] = history[-1][1].replace("<|EOT|>", "")
+                     generated_text += character
+                     yield history, history, jupyter_state, dialog_info
+                 logging.info(f"finish generating answer for dialog {dialog_info[0]}")
+
+                 HAS_CODE, generated_code_block = interpreter.extract_code_blocks(
+                     history[-1][1]
+                 )
+
+                 interpreter.dialog.append(
+                     {
+                         "role": "assistant",
+                         "content": generated_text.replace("<unk>_", "")
+                         .replace("<unk>", "")
+                         .replace("<|EOT|>", ""),
+                     }
+                 )
+
+                 attempt += 1
+
+                 logging.info(f"saving current dialog to file {dialog_info[0]}.json...")
+                 logging.info(f"current dialog: {interpreter.dialog}")
+                 save_json(interpreter.dialog, mode="openci_only", json_file_path=JSON_DATASET_DIR/f"{dialog_info[0]}.json", dialog_id=dialog_info[0])
+
+                 if generated_text.endswith("<|EOT|>"):
+                     continue
+
+             return history, history, jupyter_state, dialog_info
+
+
+         def reset_textbox():
+             return gr.update(value="")
+
+
+         def clear_history(history, jupyter_state, dialog_info, interpreter):
+             interpreter.dialog = []
+             jupyter_state.close()
+             return [], [], JupyterNotebook(), update_uuid(dialog_info)
+
+         interpreter = StreamingOpenCodeInterpreter(model_path=model_path)
+
+         sub.click(partial(bot, interpreter=interpreter), [msg, session_state, jupyter_state, dialog_info], [chatbot, session_state, jupyter_state, dialog_info])
+         sub.click(reset_textbox, [], [msg])
+
+         clear.click(partial(clear_history, interpreter=interpreter), [session_state, jupyter_state, dialog_info], [chatbot, session_state, jupyter_state, dialog_info], queue=False)
+
+     demo.queue(max_size=20)
+     demo.launch(share=True)
+
+
+ if __name__ == "__main__":
+     import argparse
+
+     parser = argparse.ArgumentParser()
+     parser.add_argument(
+         "--path",
+         type=str,
+         required=False,
+         help="Path to the OpenCodeInterpreter Model.",
+         default="m-a-p/OpenCodeInterpreter-DS-6.7B",
+     )
+     args = parser.parse_args()
+
+     gradio_launch(model_path=args.path)
Llama2-Code-Interpreter-main/OpenCodeInterpreter/demo/code_interpreter/BaseCodeInterpreter.py ADDED
@@ -0,0 +1,29 @@
+ import os
2
+ import sys
3
+ import re
4
+
5
+ prj_root_path = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
6
+ sys.path.append(prj_root_path)
7
+
8
+
9
+ from utils.const import *
10
+
11
+ class BaseCodeInterpreter:
12
+ def __init__(self):
13
+ self.dialog = [
14
+ {
15
+ "role": "system",
16
+ "content": CODE_INTERPRETER_SYSTEM_PROMPT,
17
+ },
18
+ ]
19
+
20
+ @staticmethod
21
+ def extract_code_blocks(text: str):
22
+ pattern = r"```(?:python\n)?(.*?)```" # Match optional 'python\n' but don't capture it
23
+ code_blocks = re.findall(pattern, text, re.DOTALL)
24
+ return [block.strip() for block in code_blocks]
25
+
26
+ def execute_code_and_return_output(self, code_str: str, nb):
27
+ _, _ = nb.add_and_run(GUARD_CODE)
28
+ outputs, error_flag = nb.add_and_run(code_str)
29
+ return outputs, error_flag
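Aside (editor's sketch, not part of the uploaded files): the fence-stripping regex used by `BaseCodeInterpreter.extract_code_blocks` can be exercised standalone — the optional `python\n` after the opening fence is matched but kept out of the capture group:

```python
import re

def extract_code_blocks(text: str):
    # Same pattern as BaseCodeInterpreter.extract_code_blocks:
    # an optional "python\n" after the opening fence is matched but not captured.
    pattern = r"```(?:python\n)?(.*?)```"
    return [block.strip() for block in re.findall(pattern, text, re.DOTALL)]

reply = "Here is the code:\n```python\nprint('hi')\n```\nDone."
print(extract_code_blocks(reply))  # ["print('hi')"]
```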
Llama2-Code-Interpreter-main/OpenCodeInterpreter/demo/code_interpreter/JupyterClient.py ADDED
@@ -0,0 +1,85 @@
from jupyter_client import KernelManager
import threading
import re
from utils.const import *


class JupyterNotebook:
    def __init__(self):
        self.km = KernelManager()
        self.km.start_kernel()
        self.kc = self.km.client()
        _ = self.add_and_run(TOOLS_CODE)

    def clean_output(self, outputs):
        outputs_only_str = list()
        for i in outputs:
            if type(i) == dict:
                if "text/plain" in list(i.keys()):
                    outputs_only_str.append(i["text/plain"])
            elif type(i) == str:
                outputs_only_str.append(i)
            elif type(i) == list:
                error_msg = "\n".join(i)
                error_msg = re.sub(r"\x1b\[.*?m", "", error_msg)
                outputs_only_str.append(error_msg)

        return "\n".join(outputs_only_str).strip()

    def add_and_run(self, code_string):
        # This inner function will be executed in a separate thread
        def run_code_in_thread():
            nonlocal outputs, error_flag

            # Execute the code and get the execution count
            msg_id = self.kc.execute(code_string)

            while True:
                try:
                    msg = self.kc.get_iopub_msg(timeout=20)

                    msg_type = msg["header"]["msg_type"]
                    content = msg["content"]

                    if msg_type == "execute_result":
                        outputs.append(content["data"])
                    elif msg_type == "stream":
                        outputs.append(content["text"])
                    elif msg_type == "error":
                        error_flag = True
                        outputs.append(content["traceback"])

                    # If the execution state of the kernel is idle, it means the cell finished executing
                    if msg_type == "status" and content["execution_state"] == "idle":
                        break
                except:
                    break

        outputs = []
        error_flag = False

        # Start the thread to run the code
        thread = threading.Thread(target=run_code_in_thread)
        thread.start()

        # Wait for 20 seconds for the thread to finish
        thread.join(timeout=20)

        # If the thread is still alive after 20 seconds, it's a timeout
        if thread.is_alive():
            outputs = ["Execution timed out."]
            error_flag = "Timeout"

        return self.clean_output(outputs), error_flag

    def close(self):
        """Shutdown the kernel."""
        self.km.shutdown_kernel()

    def __deepcopy__(self, memo):
        if id(self) in memo:
            return memo[id(self)]
        new_copy = type(self)()
        memo[id(self)] = new_copy
        return new_copy
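Aside (editor's sketch, not part of the uploaded files): the ANSI-escape substitution in `clean_output` strips color codes from kernel tracebacks, which can be checked in isolation:

```python
import re

# Kernel "error" messages arrive as a list of ANSI-colored traceback lines;
# clean_output joins them and removes the color escape sequences.
traceback_lines = ["\x1b[0;31mZeroDivisionError\x1b[0m: division by zero"]
error_msg = re.sub(r"\x1b\[.*?m", "", "\n".join(traceback_lines))
print(error_msg)  # ZeroDivisionError: division by zero
```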
Llama2-Code-Interpreter-main/OpenCodeInterpreter/demo/code_interpreter/OpenCodeInterpreter.py ADDED
@@ -0,0 +1,80 @@
import sys
import os

prj_root_path = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.append(prj_root_path)

from code_interpreter.BaseCodeInterpreter import BaseCodeInterpreter
from utils.const import *

from typing import List, Tuple, Dict
import re

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


sys.path.append(os.path.dirname(__file__))
sys.path.append(os.path.dirname(os.path.abspath(__file__)))

import warnings

warnings.filterwarnings("ignore", category=UserWarning, module="transformers")
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"


class OpenCodeInterpreter(BaseCodeInterpreter):
    def __init__(
        self,
        model_path: str,
        load_in_8bit: bool = False,
        load_in_4bit: bool = False,
    ):
        # build tokenizer
        self.tokenizer = AutoTokenizer.from_pretrained(
            model_path,
            padding_side="right",
            trust_remote_code=True
        )

        self.model = AutoModelForCausalLM.from_pretrained(
            model_path,
            device_map="auto",
            load_in_4bit=load_in_4bit,
            load_in_8bit=load_in_8bit,
            torch_dtype=torch.float16,
            trust_remote_code=True
        )

        self.model.resize_token_embeddings(len(self.tokenizer))

        self.model = self.model.eval()

        self.dialog = []
        self.MAX_CODE_OUTPUT_LENGTH = 1000

    def dialog_to_prompt(self, dialog: List[Dict]) -> str:
        full_str = self.tokenizer.apply_chat_template(dialog, tokenize=False)

        return full_str

    def extract_code_blocks(self, prompt: str) -> Tuple[bool, str]:
        pattern = re.escape("```python") + r"(.*?)" + re.escape("```")
        matches = re.findall(pattern, prompt, re.DOTALL)

        if matches:
            # Return the last matched code block
            return True, matches[-1].strip()
        else:
            return False, ""

    def clean_code_output(self, output: str) -> str:
        if self.MAX_CODE_OUTPUT_LENGTH < len(output):
            return (
                output[: self.MAX_CODE_OUTPUT_LENGTH // 5]
                + "\n...(truncated due to length)...\n"
                + output[-self.MAX_CODE_OUTPUT_LENGTH // 5 :]
            )

        return output
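Aside (editor's sketch, not part of the uploaded files): the truncation rule in `clean_code_output` keeps only the first and last fifth of the length budget, which can be reproduced standalone:

```python
MAX_CODE_OUTPUT_LENGTH = 1000

def clean_code_output(output: str) -> str:
    # Same logic as OpenCodeInterpreter.clean_code_output: keep the first and
    # last MAX_CODE_OUTPUT_LENGTH // 5 characters of an overly long output.
    if MAX_CODE_OUTPUT_LENGTH < len(output):
        return (
            output[: MAX_CODE_OUTPUT_LENGTH // 5]
            + "\n...(truncated due to length)...\n"
            + output[-MAX_CODE_OUTPUT_LENGTH // 5 :]
        )
    return output

print(clean_code_output("short"))          # short outputs pass through unchanged
print(len(clean_code_output("x" * 2000)))  # 433 (200 head + 200 tail + 33-char marker)
```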
Llama2-Code-Interpreter-main/OpenCodeInterpreter/demo/requirements.txt ADDED
@@ -0,0 +1,32 @@
accelerate==0.21.0
bitsandbytes==0.41.1
colorama==0.4.6
coloredlogs==15.0.1
colorlog==6.7.0
datasets==2.12.0
deepspeed==0.10.1
diffusers==0.20.0
einops==0.6.1
gradio==3.48.0
ipykernel==6.25.1
ipython==8.12.2
jupyter_client==8.3.0
jupyter_core==5.3.0
Markdown==3.4.3
nbclient==0.8.0
nbconvert==7.7.1
nbformat==5.8.0
omegaconf==2.3.0
openai==0.27.7
rich==13.7.0
scikit-learn==1.4.0
scipy==1.12.0
seaborn==0.13.2
sentencepiece==0.1.99
termcolor==2.3.0
tqdm==4.66.1
transformers==4.37.1
triton==2.0.0
yfinance==0.2.28
retrying==1.3.4
pydantic<2.0.0
Llama2-Code-Interpreter-main/OpenCodeInterpreter/demo/utils/cleaner.py ADDED
@@ -0,0 +1,31 @@
import re
import os

PYTHON_PREFIX = os.environ.get("CONDA_PREFIX", "/usr/local")

SITE_PKG_ERROR_PREFIX = f'File {PYTHON_PREFIX}/lib/python3.10/'

def get_error_header(traceback_str):
    lines = traceback_str.split('\n')
    for line in lines:
        if 'Error:' in line:
            return line
    return ''  # Return an empty string if no error message is found

def clean_error_msg(error_str: str = ''):
    filtered_error_msg = error_str.__str__().split('An error occurred while executing the following cell')[-1].split("\n------------------\n")[-1]
    raw_error_msg = "".join(filtered_error_msg)

    # Remove escape sequences for colored text
    ansi_escape = re.compile(r'\x1b\[[0-?]*[ -/]*[@-~]')
    error_msg = ansi_escape.sub('', raw_error_msg)

    error_str_out = ''
    error_msg_only_cell = error_msg.split(SITE_PKG_ERROR_PREFIX)

    error_str_out += f'{error_msg_only_cell[0]}\n'
    error_header = get_error_header(error_msg_only_cell[-1])
    if error_header not in error_str_out:
        error_str_out += error_header

    return error_str_out
Llama2-Code-Interpreter-main/OpenCodeInterpreter/demo/utils/const.py ADDED
@@ -0,0 +1,88 @@
TOOLS_CODE = """
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats
import os,sys
import re
from datetime import datetime
from sympy import symbols, Eq, solve
import torch
import requests
from bs4 import BeautifulSoup
import json
import math
import yfinance
import time
"""

write_denial_function = 'lambda *args, **kwargs: (_ for _ in ()).throw(PermissionError("Writing to disk operation is not permitted due to safety reasons. Please do not try again!"))'
read_denial_function = 'lambda *args, **kwargs: (_ for _ in ()).throw(PermissionError("Reading from disk operation is not permitted due to safety reasons. Please do not try again!"))'
class_denial = """Class Denial:
    def __getattr__(self, name):
        def method(*args, **kwargs):
            return "Using this class is not permitted due to safety reasons. Please do not try again!"
        return method
"""

GUARD_CODE = f"""
import os

os.kill = {write_denial_function}
os.system = {write_denial_function}
os.putenv = {write_denial_function}
os.remove = {write_denial_function}
os.removedirs = {write_denial_function}
os.rmdir = {write_denial_function}
os.fchdir = {write_denial_function}
os.setuid = {write_denial_function}
os.fork = {write_denial_function}
os.forkpty = {write_denial_function}
os.killpg = {write_denial_function}
os.rename = {write_denial_function}
os.renames = {write_denial_function}
os.truncate = {write_denial_function}
os.replace = {write_denial_function}
os.unlink = {write_denial_function}
os.fchmod = {write_denial_function}
os.fchown = {write_denial_function}
os.chmod = {write_denial_function}
os.chown = {write_denial_function}
os.chroot = {write_denial_function}
os.fchdir = {write_denial_function}
os.lchflags = {write_denial_function}
os.lchmod = {write_denial_function}
os.lchown = {write_denial_function}
os.getcwd = {write_denial_function}
os.chdir = {write_denial_function}
os.popen = {write_denial_function}

import shutil

shutil.rmtree = {write_denial_function}
shutil.move = {write_denial_function}
shutil.chown = {write_denial_function}

import subprocess

subprocess.Popen = {write_denial_function}  # type: ignore

import sys

sys.modules["ipdb"] = {write_denial_function}
sys.modules["joblib"] = {write_denial_function}
sys.modules["resource"] = {write_denial_function}
sys.modules["psutil"] = {write_denial_function}
sys.modules["tkinter"] = {write_denial_function}
"""

CODE_INTERPRETER_SYSTEM_PROMPT = """You are an AI code interpreter.
Your goal is to help users do a variety of jobs by executing Python code.

You should:
1. Comprehend the user's requirements carefully & to the letter.
2. Give a brief description of what you plan to do & call the provided function to run code.
3. Provide a results analysis based on the execution output.
4. If an error occurs, try to fix it.
5. Respond in the same language as the user."""
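Aside (editor's sketch, not part of the uploaded files): each denial hook installed by GUARD_CODE is a lambda that raises by calling `.throw()` on an empty generator, so any invocation fails immediately:

```python
# The same trick GUARD_CODE installs for os.system, shutil.rmtree, etc.:
# calling .throw() on a fresh (unstarted) generator raises the exception
# right away, so the "function" can never do anything.
write_denial_function = lambda *args, **kwargs: (_ for _ in ()).throw(
    PermissionError("Writing to disk operation is not permitted due to safety reasons. Please do not try again!")
)

try:
    write_denial_function("/etc/passwd")
except PermissionError as exc:
    print("blocked:", exc)
```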
Llama2-Code-Interpreter-main/OpenCodeInterpreter/evaluation/README.md ADDED
@@ -0,0 +1,51 @@
# Evaluation

This repository contains code for evaluating the performance of LLMs in both single-turn and multi-turn scenarios.

To set up the environment, install the required dependencies by running:
```bash
pip install -r evaluation/requirements.txt
```

## Single-turn Evaluation

For single-turn evaluation, inference, post-processing, and result aggregation are separated into the following steps:

1. Execute `bash evaluation/evaluate/scripts/01_gen_single.sh` to generate model results.
2. Perform post-processing on the model output by executing `bash evaluation/evaluate/scripts/02_sanitize_single.sh`.
3. Finally, compute evaluation metrics by executing `bash evaluation/evaluate/scripts/03_eval_single.sh`.

## Multi-turn Evaluation

### Multi-turn Evaluation with Execution Feedback

Evaluate the performance of the models with execution feedback using the provided scripts:

- For OpenCodeInterpreter:
  ```bash
  bash evaluation/evaluate/scripts/04_execution_feedback_multiround_OpenCodeInterpreter.sh
  ```

- For OpenAI's GPT models:
  Before proceeding with evaluation, make sure to implement the `get_predict` function in `chat_with_gpt.py` to enable interaction with the GPT models. Then, execute the following script:
  ```bash
  bash evaluation/evaluate/scripts/05_execution_feedback_multiround_gpt.sh
  ```

### Multi-turn Evaluation with GPT-4 Simulated Human Feedback

Execute either of the following scripts to evaluate the models with simulated human feedback:

- For OpenCodeInterpreter:
  ```bash
  bash evaluation/evaluate/scripts/06_human_feedback_multiround_OpenCodeInterpreter.sh
  ```

- For Oracle OpenCodeInterpreter:
  ```bash
  bash evaluation/evaluate/scripts/07_human_feedback_multiround_Oracle_OpenCodeInterpreter.sh
  ```

These scripts facilitate multi-turn evaluation with simulated human feedback.

This evaluation code is based on [EvalPlus](https://github.com/evalplus/evalplus) and has been modified for our purposes. We extend our gratitude to the contributors of EvalPlus for their foundational work.
Llama2-Code-Interpreter-main/OpenCodeInterpreter/evaluation/evalplus/.dockerignore ADDED
@@ -0,0 +1,174 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/#use-with-ide
.pdm.toml

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
# nuclear option because steven uses PyCharm.
.idea/

# VSCode
.vscode/
EvalPlus/
backup/
passrate.p*
min_cov_dir/
HumanEvalPlus*.jsonl
HumanEvalPlus*.gz
MbppPlus*.jsonl
MbppPlus*.gz
evalplus/_version.py
*mbpp.json
Llama2-Code-Interpreter-main/OpenCodeInterpreter/evaluation/evalplus/.github/ISSUE_TEMPLATE/buggy_contract.yml ADDED
@@ -0,0 +1,48 @@
name: "🐛 Report Bad Contract"
description: Report to us that a certain program contract should be repaired.
title: "🐛 [TestRemoval] - <TASK_ID> <WHY>"
labels: ["program contract"]
body:
  - type: input
    id: version
    attributes:
      label: "EvalPlus version"
      description: What is the version of EvalPlus? You can find it by running `pip show evalplus`.
      placeholder: For example, 0.1.0
    validations:
      required: true
  - type: input
    id: cache
    attributes:
      label: "Output of running `ls ~/.cache/evalplus`"
    validations:
      required: true
  - type: input
    id: task_id
    attributes:
      label: "Task ID of the programming task"
      placeholder: HumanEval/[??]
    validations:
      required: true
  - type: textarea
    id: original
    attributes:
      label: "The original wrong contract"
      description: You can run `python -c "from evalplus.data import get_human_eval_plus; print(get_human_eval_plus()['HumanEval/❓']['contract'])"`
      render: python
    validations:
      required: true
  - type: textarea
    id: new
    attributes:
      label: "Your proposed new contract"
      render: python
    validations:
      required: true
  - type: textarea
    id: other
    attributes:
      label: "Other context"
      description: (Optional) Anything else the maintainer should notice?
    validations:
      required: false
Llama2-Code-Interpreter-main/OpenCodeInterpreter/evaluation/evalplus/.github/ISSUE_TEMPLATE/buggy_test.yml ADDED
@@ -0,0 +1,49 @@
name: "🐛 Report Bad Test Inputs"
description: Report to us that certain test inputs should be removed.
title: "🐛 [TestRemoval] - <TASK_ID> <WHY>"
labels: ["bug"]
body:
  - type: input
    id: version
    attributes:
      label: "EvalPlus version"
      description: What is the version of EvalPlus? You can find it by running `pip show evalplus`.
      placeholder: For example, 0.1.0
    validations:
      required: true
  - type: input
    id: cache
    attributes:
      label: "Output of running `ls ~/.cache/evalplus`"
    validations:
      required: true
  - type: input
    id: task_id
    attributes:
      label: "Task ID of the programming task"
      placeholder: HumanEval/[??]
    validations:
      required: true
  - type: textarea
    id: test_input
    attributes:
      label: "Test input"
      description: The text form of the test input that you think should be removed
      render: python
    validations:
      required: true
  - type: textarea
    id: description
    attributes:
      label: "Description"
      description: An explicit description of why you think this test should be removed
      placeholder: Here is a correct solution but it is incorrectly falsified by the test because ...
    validations:
      required: true
  - type: textarea
    id: other
    attributes:
      label: "Other context"
      description: (Optional) Anything else the maintainer should notice?
    validations:
      required: false
Llama2-Code-Interpreter-main/OpenCodeInterpreter/evaluation/evalplus/.github/ISSUE_TEMPLATE/config.yml ADDED
@@ -0,0 +1 @@
blank_issues_enabled: true
Llama2-Code-Interpreter-main/OpenCodeInterpreter/evaluation/evalplus/.github/ISSUE_TEMPLATE/model_eval_request.yml ADDED
@@ -0,0 +1,67 @@
name: "🤗 Model Evaluation Request"
description: Request EvalPlus maintainers to evaluate your model independently and update it on our leaderboard.
title: "🤗 [REQUEST] - <MODEL_NAME>"
labels: ["model eval"]
body:
  - type: textarea
    id: about
    attributes:
      label: "Model introduction"
      description: Provide a brief introduction to the model.
      placeholder: The model is created by ... and is used for ...
    validations:
      required: true
  - type: input
    id: url
    attributes:
      label: "Model URL"
      description: Indicate the URL (e.g., Hugging Face or other release pages) of the model
      placeholder: https://huggingface.co/[???]/[???]
    validations:
      required: true
  - type: dropdown
    id: dtype
    attributes:
      label: "Data type"
      description: What is the intended data type for running the model?
      multiple: false
      options:
        - "float16"
        - "bfloat16"
        - "float32"
        - "None of above: specify the details in the 'Other context' section"
    validations:
      required: true
  - type: textarea
    id: other
    attributes:
      label: "Additional instructions (Optional)"
      description: Special steps indicating how to run the model, preferably with scripts/code.
      placeholder: What data type precision should be used? What is the minimal hardware requirement? Can it be accelerated by tools such as vLLM?
    validations:
      required: false
  - type: dropdown
    id: author
    attributes:
      label: "Author"
      description: "Are you (one of) the author(s) of the model?"
      multiple: false
      options:
        - "Yes"
        - "No"
    validations:
      required: true
  - type: checkboxes
    id: security
    attributes:
      label: "Security"
      options:
        - label: "I confirm that the model is safe to run and does not contain any malicious code or content."
          required: true
  - type: checkboxes
    id: integrity
    attributes:
      label: "Integrity"
      options:
        - label: "I confirm that the model comes from unique and original work and does not contain any plagiarism."
          required: true
Llama2-Code-Interpreter-main/OpenCodeInterpreter/evaluation/evalplus/.gitignore ADDED
@@ -0,0 +1,173 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/#use-with-ide
.pdm.toml

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
# nuclear option because steven uses PyCharm.
.idea/

# VSCode
.vscode/
EvalPlus/
backup/
passrate.p*
min_cov_dir/
HumanEvalPlus*.gz
MbppPlus*.gz
evalplus/_version.py
*mbpp.json
*.jsonl
Llama2-Code-Interpreter-main/OpenCodeInterpreter/evaluation/evalplus/.pre-commit-config.yaml ADDED
@@ -0,0 +1,20 @@
repos:
  - repo: https://github.com/pycqa/isort
    rev: 5.12.0
    hooks:
      - id: isort
        name: isort (python)
        args: ["--profile", "black"]
  - repo: https://github.com/psf/black
    rev: 22.6.0
    hooks:
      - id: black
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.3.0
    hooks:
      - id: check-yaml
      - id: end-of-file-fixer
      - id: trailing-whitespace
        exclude: (?x)^(
            groundtruth/.*
          )$
Llama2-Code-Interpreter-main/OpenCodeInterpreter/evaluation/evalplus/CITATION.cff ADDED
@@ -0,0 +1,25 @@
cff-version: 1.2.0
message: "If you use this work and love it, consider citing it as below \U0001F917"
title: EvalPlus
authors:
  - family-names: EvalPlus Team
url: https://github.com/evalplus/evalplus
doi: https://doi.org/10.48550/arXiv.2305.01210
date-released: 2023-05-01
license: Apache-2.0
preferred-citation:
  type: article
  title: "Is Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of Large Language Models for Code Generation"
  authors:
    - family-names: Liu
      given-names: Jiawei
    - family-names: Xia
      given-names: Chunqiu Steven
    - family-names: Wang
      given-names: Yuyao
    - family-names: Zhang
      given-names: Lingming
  year: 2023
  journal: "arXiv preprint arXiv:2305.01210"
  doi: https://doi.org/10.48550/arXiv.2305.01210
  url: https://arxiv.org/abs/2305.01210
Llama2-Code-Interpreter-main/OpenCodeInterpreter/evaluation/evalplus/Dockerfile ADDED
@@ -0,0 +1,19 @@
# base env: py38 ubuntu20.04
FROM python:3.8-slim-buster

# install git
RUN apt-get update && apt-get install -y git

# upgrade to latest pip
RUN pip install --upgrade pip

COPY . /evalplus

RUN cd /evalplus && pip install .

# Pre-install the dataset
RUN python3 -c "from evalplus.data import get_human_eval_plus, get_mbpp_plus; get_human_eval_plus(); get_mbpp_plus()"

WORKDIR /app

ENTRYPOINT ["python3", "-m", "evalplus.evaluate"]
Llama2-Code-Interpreter-main/OpenCodeInterpreter/evaluation/evalplus/LICENSE ADDED
@@ -0,0 +1,205 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.
+ the Work and Derivative Works thereof.
47
+
48
+ "Contribution" shall mean any work of authorship, including
49
+ the original version of the Work and any modifications or additions
50
+ to that Work or Derivative Works thereof, that is intentionally
51
+ submitted to Licensor for inclusion in the Work by the copyright owner
52
+ or by an individual or Legal Entity authorized to submit on behalf of
53
+ the copyright owner. For the purposes of this definition, "submitted"
54
+ means any form of electronic, verbal, or written communication sent
55
+ to the Licensor or its representatives, including but not limited to
56
+ communication on electronic mailing lists, source code control systems,
57
+ and issue tracking systems that are managed by, or on behalf of, the
58
+ Licensor for the purpose of discussing and improving the Work, but
59
+ excluding communication that is conspicuously marked or otherwise
60
+ designated in writing by the copyright owner as "Not a Contribution."
61
+
62
+ "Contributor" shall mean Licensor and any individual or Legal Entity
63
+ on behalf of whom a Contribution has been received by Licensor and
64
+ subsequently incorporated within the Work.
65
+
66
+ 2. Grant of Copyright License. Subject to the terms and conditions of
67
+ this License, each Contributor hereby grants to You a perpetual,
68
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69
+ copyright license to reproduce, prepare Derivative Works of,
70
+ publicly display, publicly perform, sublicense, and distribute the
71
+ Work and such Derivative Works in Source or Object form.
72
+
73
+ 3. Grant of Patent License. Subject to the terms and conditions of
74
+ this License, each Contributor hereby grants to You a perpetual,
75
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76
+ (except as stated in this section) patent license to make, have made,
77
+ use, offer to sell, sell, import, and otherwise transfer the Work,
78
+ where such license applies only to those patent claims licensable
79
+ by such Contributor that are necessarily infringed by their
80
+ Contribution(s) alone or by combination of their Contribution(s)
81
+ with the Work to which such Contribution(s) was submitted. If You
82
+ institute patent litigation against any entity (including a
83
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
84
+ or a Contribution incorporated within the Work constitutes direct
85
+ or contributory patent infringement, then any patent licenses
86
+ granted to You under this License for that Work shall terminate
87
+ as of the date such litigation is filed.
88
+
89
+ 4. Redistribution. You may reproduce and distribute copies of the
90
+ Work or Derivative Works thereof in any medium, with or without
91
+ modifications, and in Source or Object form, provided that You
92
+ meet the following conditions:
93
+
94
+ (a) You must give any other recipients of the Work or
95
+ Derivative Works a copy of this License; and
96
+
97
+ (b) You must cause any modified files to carry prominent notices
98
+ stating that You changed the files; and
99
+
100
+ (c) You must retain, in the Source form of any Derivative Works
101
+ that You distribute, all copyright, patent, trademark, and
102
+ attribution notices from the Source form of the Work,
103
+ excluding those notices that do not pertain to any part of
104
+ the Derivative Works; and
105
+
106
+ (d) If the Work includes a "NOTICE" text file as part of its
107
+ distribution, then any Derivative Works that You distribute must
108
+ include a readable copy of the attribution notices contained
109
+ within such NOTICE file, excluding those notices that do not
110
+ pertain to any part of the Derivative Works, in at least one
111
+ of the following places: within a NOTICE text file distributed
112
+ as part of the Derivative Works; within the Source form or
113
+ documentation, if provided along with the Derivative Works; or,
114
+ within a display generated by the Derivative Works, if and
115
+ wherever such third-party notices normally appear. The contents
116
+ of the NOTICE file are for informational purposes only and
117
+ do not modify the License. You may add Your own attribution
118
+ notices within Derivative Works that You distribute, alongside
119
+ or as an addendum to the NOTICE text from the Work, provided
120
+ that such additional attribution notices cannot be construed
121
+ as modifying the License.
122
+
123
+ You may add Your own copyright statement to Your modifications and
124
+ may provide additional or different license terms and conditions
125
+ for use, reproduction, or distribution of Your modifications, or
126
+ for any such Derivative Works as a whole, provided Your use,
127
+ reproduction, and distribution of the Work otherwise complies with
128
+ the conditions stated in this License.
129
+
130
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
131
+ any Contribution intentionally submitted for inclusion in the Work
132
+ by You to the Licensor shall be under the terms and conditions of
133
+ this License, without any additional terms or conditions.
134
+ Notwithstanding the above, nothing herein shall supersede or modify
135
+ the terms of any separate license agreement you may have executed
136
+ with Licensor regarding such Contributions.
137
+
138
+ 6. Trademarks. This License does not grant permission to use the trade
139
+ names, trademarks, service marks, or product names of the Licensor,
140
+ except as required for reasonable and customary use in describing the
141
+ origin of the Work and reproducing the content of the NOTICE file.
142
+
143
+ 7. Disclaimer of Warranty. Unless required by applicable law or
144
+ agreed to in writing, Licensor provides the Work (and each
145
+ Contributor provides its Contributions) on an "AS IS" BASIS,
146
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147
+ implied, including, without limitation, any warranties or conditions
148
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149
+ PARTICULAR PURPOSE. You are solely responsible for determining the
150
+ appropriateness of using or redistributing the Work and assume any
151
+ risks associated with Your exercise of permissions under this License.
152
+
153
+ 8. Limitation of Liability. In no event and under no legal theory,
154
+ whether in tort (including negligence), contract, or otherwise,
155
+ unless required by applicable law (such as deliberate and grossly
156
+ negligent acts) or agreed to in writing, shall any Contributor be
157
+ liable to You for damages, including any direct, indirect, special,
158
+ incidental, or consequential damages of any character arising as a
159
+ result of this License or out of the use or inability to use the
160
+ Work (including but not limited to damages for loss of goodwill,
161
+ work stoppage, computer failure or malfunction, or any and all
162
+ other commercial damages or losses), even if such Contributor
163
+ has been advised of the possibility of such damages.
164
+
165
+ 9. Accepting Warranty or Additional Liability. While redistributing
166
+ the Work or Derivative Works thereof, You may choose to offer,
167
+ and charge a fee for, acceptance of support, warranty, indemnity,
168
+ or other liability obligations and/or rights consistent with this
169
+ License. However, in accepting such obligations, You may act only
170
+ on Your own behalf and on Your sole responsibility, not on behalf
171
+ of any other Contributor, and only if You agree to indemnify,
172
+ defend, and hold each Contributor harmless for any liability
173
+ incurred by, or claims asserted against, such Contributor by reason
174
+ of your accepting any such warranty or additional liability.
175
+
176
+ END OF TERMS AND CONDITIONS
177
+
178
+ APPENDIX: How to apply the Apache License to your work.
179
+
180
+ To apply the Apache License to your work, attach the following
181
+ boilerplate notice, with the fields enclosed by brackets "[]"
182
+ replaced with your own identifying information. (Don't include
183
+ the brackets!) The text should be enclosed in the appropriate
184
+ comment syntax for the file format. We also recommend that a
185
+ file or class name and description of purpose be included on the
186
+ same "printed page" as the copyright notice for easier
187
+ identification within third-party archives.
188
+
189
+ Copyright [yyyy] [name of copyright owner]
190
+
191
+ Licensed under the Apache License, Version 2.0 (the "License");
192
+ you may not use this file except in compliance with the License.
193
+ You may obtain a copy of the License at
194
+
195
+ http://www.apache.org/licenses/LICENSE-2.0
196
+
197
+ Unless required by applicable law or agreed to in writing, software
198
+ distributed under the License is distributed on an "AS IS" BASIS,
199
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200
+ See the License for the specific language governing permissions and
201
+ limitations under the License.
202
+
+ -------------------------------------------------------------------------------
+ The files under "evalplus/eval/" additionally comply with the MIT License, as
+ they build on OpenAI's HumanEval work.