Commit a596058 by QagentS (parent: 3a804fa): Update README.md
Files changed (1): README.md (+506, -184)
---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- planning
- code
- policy control
- parsing
- python
- java
- cpp
- sql
- function calling
- unit tests
- causalLM
- codeLLAMA
- instruction_tuned
- basemodel
- pytorch
- docstring
- documentation
- text-generation-inference
metrics:
- accuracy
pipeline_tag: text-generation
widget:
- text: '<function_code>def _plot_bounding_polygon(polygons_coordinates, output_html_path=bounding_polygon_map.html):map_center = [sum([coord[0]for polygon_coords in polygons_coordinatesfor coord in polygon_coords])/ sum([len(polygon_coords) for polygon_coords in polygons_coordinates]),sum([coord[1]for polygon_coords in polygons_coordinatesfor coord in polygon_coords])/ sum([len(polygon_coords) for polygon_coords in polygons_coordinates]),]my_map = folium.Map(location=map_center, zoom_start=12)for polygon_coords in polygons_coordinates:folium.Polygon(locations=polygon_coords,color=blue,fill=True,fill_color=blue,fill_opacity=0.2,).add_to(my_map)marker_cluster = MarkerCluster().add_to(my_map)for polygon_coords in polygons_coordinates:for coord in polygon_coords:folium.Marker(location=[coord[0], coord[1]], popup=fCoordinates: {coord}).add_to(marker_cluster)draw = Draw(export=True)draw.add_to(my_map)my_map.save(output_html_path)return output_html_path</function_code><question>Document the python code above giving function description ,parameters and return type and example how to call the function</question><doc>'
  example_title: example
---
# pip-library-etl-1.3b

[pipableAi](https://www.linkedin.com/company/pipable.ai/about/)

[colab_notebook](https://colab.research.google.com/drive/17PyMU_3QN9LROy7x-jmaema0cuLRzBvc?usp=sharing)

[pip library_etl](https://github.com/PipableAI/pip-library-etl.git)

[linkedin_post](https://www.linkedin.com/posts/pipable%2Eai_github-pipableaipip-library-etl-this-activity-7179111129678327809-Pgxy?utm_source=share&utm_medium=member_desktop)

## What have we built?

A 1.3B-parameter code documentation model that outperforms most models at documenting code and making your in-house libraries ready for LLM and RAG pipelines.
We have also open-sourced the [pip library_etl](https://github.com/PipableAI/pip-library-etl.git) library for this purpose; together, the library and the model can turn your codebase into a functional parse tree ready to be consumed by LLMs for executing complex tasks.
The model can also generate SQL queries, with accuracy on par with [pip-sql-1.3b](https://huggingface.co/PipableAI/pip-sql-1.3b) and the additional ability to take extra examples, instructions, and column descriptions as context.
It is a further-trained version of pip-sql-1.3b, with performance comparable to GPT models.

## How we built it?

We used softmax cross-entropy and a modified form of policy gradient, together with a Q loss, optimized in an EM setup.
Performance on the tasks above is comparable to that of much larger LLMs, including GPT-3.5.
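The policy-gradient and Q-loss terms are not spelled out in detail above, but the softmax cross-entropy component is the standard token-level objective. As a minimal, self-contained illustration (not the actual training code):

```python
import math

def softmax_cross_entropy(logits, target_index):
    """Standard softmax cross-entropy for one prediction: -log(softmax(logits)[target])."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[target_index] / sum(exps))

# A uniform two-way distribution yields a loss of ln(2) for either target.
loss = softmax_cross_entropy([0.0, 0.0], 0)
```

The loss shrinks as the logit of the correct class grows relative to the others, which is what the training signal rewards.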
 
 
 
 
 
## License

The model is open source under the Apache 2.0 license.

## Usage

### NOTE:

If you wish to try this model without using your own GPU, we have hosted the model on our end. To run the library against the hosted playground model, initialize the generator as shown below:

```python
from pip_library_etl import PipEtl

generator = PipEtl(device="cloud")
```

We have hosted the model at https://playground.pipable.ai/infer, so you can also make a POST request to this endpoint with the following payload:

```json
{
  "model_name": "PipableAI/pip-library-etl-1.3b",
  "prompt": "prompt",
  "max_new_tokens": "400"
}
```

```bash
curl -X 'POST' \
  'https://playground.pipable.ai/infer' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'model_name=PipableAI%2Fpip-library-etl-1.3b&prompt="YOUR PROMPT"&max_new_tokens=400'
```
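For reference, the same form-encoded body can be built in Python using only the standard library (a sketch; it constructs the payload but does not send the request):

```python
from urllib.parse import urlencode

payload = {
    "model_name": "PipableAI/pip-library-etl-1.3b",
    "prompt": "YOUR PROMPT",
    "max_new_tokens": "400",
}

# urlencode produces the application/x-www-form-urlencoded body used by the curl call;
# note that the slash in the model name is escaped to %2F, as in the curl example.
body = urlencode(payload)
```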

Alternatively, you can access the UI for this endpoint at https://playground.pipable.ai/docs#/default/infer_infer_post.

### Library use

To use the model's capabilities directly, without putting extra effort into schemas and prompts, try the [pip library_etl](https://github.com/PipableAI/pip-library-etl.git) library.

Here's a brief overview of what can be achieved using the PipEtl library:

- `Function Call Generation`: The `generate_function_call` method generates Python function calls based on a given question and either a docstring or undocumented code. This is useful for generating example function calls or for prototyping code snippets.
- `Automated Documentation Generation`: With the `generate_docstring` method, users can automatically generate comprehensive docstrings for Python functions. This helps maintain a well-documented codebase and adhere to best practices.
- `Module Documentation`: The `generate_module_docstrings` method generates documentation for all methods and functions within a given module or package. This streamlines the documentation process, especially for large codebases with many functions.
- `SQL Query Generation`: The `generate_sql` method automatically generates SQL queries from a given schema and question. This simplifies writing SQL queries, particularly for data-related tasks.

For detailed usage, refer to the [colab_notebook](https://colab.research.google.com/drive/17PyMU_3QN9LROy7x-jmaema0cuLRzBvc?usp=sharing).

### Installation

```bash
pip install transformers
```

### Prompt

```python
# Prompt templates: fill in the {placeholders} before use (plain strings, not f-strings).
prompt = """<example_response>{--question , --query}</example_response><function_code>{code}</function_code>
<question>Give one line description of the python code above in natural language.</question>
<doc>"""

prompt = """<example_response>{example of some --question: , --query}</example_response><schema>{schema with cols described}</schema>
<question>Write a sql query to ....</question>
<sql>"""
```
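The templates above can also be assembled programmatically. The helper below is a hypothetical convenience function (not part of the library) that wraps code and a question in the tag format the model expects:

```python
def build_doc_prompt(code: str, question: str, example_response: str = "") -> str:
    """Wrap code and a question in the tag format shown above, ending with an open <doc> tag."""
    return (
        f"<example_response>{example_response}</example_response>"
        f"<function_code>{code}</function_code>\n"
        f"<question>{question}</question>\n"
        "<doc>"
    )

prompt = build_doc_prompt(
    "def f(x): return x",
    "Give one line description of the python code above in natural language.",
)
```

Leaving the final `<doc>` tag open is what cues the model to continue with the documentation text.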

### PyTorch
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"
model = AutoModelForCausalLM.from_pretrained("PipableAI/pip-library-etl-1.3b").to(device)
tokenizer = AutoTokenizer.from_pretrained("PipableAI/pip-library-etl-1.3b")
prompt = """
<example_response>
--code:def divide_by_two(x: float) -> float: return x / 2
--question:Document the python code above giving function description ,parameters and return type and example on how to call the function
--doc:
Description: This function divides a given number by 2.
Parameters:
- x (float): The input value to be divided by 2.
Returns:
- float: The result of x divided by 2.
Example:
divide_by_two(1.0)
</example_response>
<function_code>
def download_file(shared_url, destination):
    try:
        if not shared_url.startswith("https://drive.google.com"):
            raise ValueError("Please provide a valid google drive link.")
        file_id = shared_url.split("/d/")[1]
        file_id = file_id.split("/")[0]
        url = f"https://drive.google.com/uc?id={file_id}"
        gdown.download(url, destination, quiet=False)
    except Exception as e:
        print(f"Error downloading file from Google Drive as {e}")
        raise e
</function_code>
<instructions>
1. In the examples while calling function use the name mentioned after `def ` in the above function_code.
2. In the generated docs use valid python type hints as per PEP 484.
</instructions>
<question>Document the python code above giving function description ,parameters and return type and example how to call the function.</question>
<doc>
"""
inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=450)
doc = (
    tokenizer.decode(outputs[0], skip_special_tokens=True)
    .split("<doc>")[-1]
    .split("</doc>")[0]
)
doc = (
    doc.replace("<p>", "")
    .replace("</p>", "")
    .replace("<function_description>", "")
    .replace("</function_description>", "")
)
print(doc)
```
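The decode-and-split logic above can be factored into a small helper. This is just a refactoring sketch of the snippet's own string handling, not a library API:

```python
def extract_between_tags(text: str, tag: str) -> str:
    """Return the content after the last <tag> and before the following </tag>,
    mirroring the split calls in the snippet above."""
    return text.split(f"<{tag}>")[-1].split(f"</{tag}>")[0]

# Example on a mock generation (no model required):
generated = "prompt...<doc>Description: divides by two.</doc><end>"
doc = extract_between_tags(generated, "doc")
```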

## Examples

### 1. Code Documentation
### prompt
```python
prompt = '''<example_response>
--code:def divide_by_two(x: float) -> float: return x / 2
--question:Document the python code above giving function description ,parameters and return type and example on how to call the function
--doc:
Description: This function divides a given number by 2.
Parameters:
- x (float): The input value to be divided by 2.
Returns:
- float: The result of x divided by 2.
Example:
divide_by_two(1.0)
</example_response>
<function_code>def _plot_bounding_polygon(
    polygons_coordinates, output_html_path="bounding_polygon_map.html"
):
    # Create a Folium map centered at the average coordinates of all bounding boxes
    map_center = [
        sum(
            [
                coord[0]
                for polygon_coords in polygons_coordinates
                for coord in polygon_coords
            ]
        )
        / sum([len(polygon_coords) for polygon_coords in polygons_coordinates]),
        sum(
            [
                coord[1]
                for polygon_coords in polygons_coordinates
                for coord in polygon_coords
            ]
        )
        / sum([len(polygon_coords) for polygon_coords in polygons_coordinates]),
    ]

    my_map = folium.Map(location=map_center, zoom_start=12)

    # Add each bounding polygon to the map
    for polygon_coords in polygons_coordinates:
        folium.Polygon(
            locations=polygon_coords,
            color="blue",
            fill=True,
            fill_color="blue",
            fill_opacity=0.2,
        ).add_to(my_map)

    # Add bounding boxes as markers to the map
    marker_cluster = MarkerCluster().add_to(my_map)

    for polygon_coords in polygons_coordinates:
        for coord in polygon_coords:
            folium.Marker(
                location=[coord[0], coord[1]], popup=f"Coordinates: {coord}"
            ).add_to(marker_cluster)

    # Add draw control to allow users to draw additional polygons
    draw = Draw(export=True)
    draw.add_to(my_map)

    # Save the map as an HTML file
    my_map.save(output_html_path)

    return output_html_path
</function_code>
<instructions>
1. In the examples while calling function use the name mentioned after `def ` in the above function_code.
2. In the generated docs use valid python type hints as per PEP 484.
</instructions>
<question>Document the python code above giving function description ,parameters and return type and example how to call the function</question><doc>'''
```

### Response
```txt
Description:This function generates a map of the bounding polygons and saves it as an HTML file.
Parameters:
- polygons_coordinates (list of lists of tuples): A list of lists of tuples representing the coordinates of the polygons. Each polygon is a list of coordinates.
- output_html_path (str, optional): The path where the HTML file should be saved. Defaults to "bounding_polygon_map.html".
Returns:
- str: The path to the saved HTML file.
Example:
To call the function, use the following code:
plot_bounding_polygon([[(0, 0), (1, 0), (1, 1), (0, 1)], [(2, 2), (3, 2), (3, 3), (2, 3)]], "my_map.html").
```

### 2. SQL Generation
### prompt
```python
prompt = """Generate a simple SQL query from the schema mentioned for the following question.
<schema>
CREATE TABLE department (
    Department_ID number,         -- Unique identifier for the department
    Name text,                    -- Name of the department
    Creation text,                -- Date of creation or establishment
    Ranking number,               -- Ranking of the department
    Budget_in_Billions number,    -- Budget of the department in billions
    Num_Employees number          -- Number of employees in the department
);

CREATE TABLE head (
    head_ID number,               -- Unique identifier for the head
    name text,                    -- Name of the head
    born_state text,              -- State where the head was born
    age number                    -- Age of the head
);

CREATE TABLE management (
    department_ID number,         -- Foreign key referencing Department_ID in department table
    head_ID number,               -- Foreign key referencing head_ID in head table
    temporary_acting text         -- Indicates if the head is temporarily acting
);
</schema>
<question>What are the names of the heads who are born outside the California state?</question>
<sql>
"""
```

### response
```sql
SELECT head.name FROM head WHERE head.born_state <> 'California';
```
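One way to sanity-check a generated query (not part of the model's pipeline; a sketch with made-up sample rows) is to execute it against an in-memory SQLite database built from the same schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE head (head_ID number, name text, born_state text, age number)")
conn.executemany(
    "INSERT INTO head VALUES (?, ?, ?, ?)",
    [(1, "Avery", "California", 50), (2, "Blake", "Texas", 45)],
)

# Run the model-generated query and inspect the result.
rows = conn.execute(
    "SELECT head.name FROM head WHERE head.born_state <> 'California';"
).fetchall()
```

If the generated SQL is malformed, `execute` raises `sqlite3.OperationalError`, which makes this a cheap automated validity check.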

### 3. Performance Schema Monitoring
### prompt
```python
prompt = """Generate the SQL query for SkySQL performance schema for the following question.
<example>
--question: What are the top 10 most frequently used queries/statements?
--sql: SELECT DIGEST_TEXT, COUNT(*) as frequency FROM performance_schema.events_statements_summary_by_digest GROUP BY DIGEST_TEXT ORDER BY frequency DESC LIMIT 10;
</example>
<schema>
CREATE TABLE `accounts` (
    `USER` char(128) DEFAULT NULL -- 'The connection''s client user name for the connection, or NULL if an internal thread.',
    `HOST` char(255) DEFAULT NULL -- 'The connection client''s host name, or NULL if an internal thread.',
    `CURRENT_CONNECTIONS` bigint(20) NOT NULL -- 'Current connections for the account.',
    `TOTAL_CONNECTIONS` bigint(20) NOT NULL -- 'Total connections for the account.'
);
</schema>
<question>
Tell me the number of active connections each user has.
</question>
<sql>
"""
```
### response
```sql
SELECT USER, CURRENT_CONNECTIONS FROM accounts;
```

### prompt
```python
prompt = """Generate the SQL query for SkySQL performance schema for the following question.
<example>
--question: What are the top 10 most frequently used queries/statements?
--sql: SELECT DIGEST_TEXT, COUNT(*) as frequency FROM performance_schema.events_statements_summary_by_digest GROUP BY DIGEST_TEXT ORDER BY frequency DESC LIMIT 10;
</example>
<schema>
CREATE TABLE `file_summary_by_instance` (
    `FILE_NAME` varchar(512) NOT NULL -- 'File name.',
    `EVENT_NAME` varchar(128) NOT NULL -- 'Event name.',
    `OBJECT_INSTANCE_BEGIN` bigint(20) unsigned NOT NULL -- 'Address in memory. Together with FILE_NAME and EVENT_NAME uniquely identifies a row.',
    `COUNT_STAR` bigint(20) unsigned NOT NULL -- 'Number of summarized events',
    `SUM_TIMER_WAIT` bigint(20) unsigned NOT NULL -- 'Total wait time of the summarized events that are timed.',
    `MIN_TIMER_WAIT` bigint(20) unsigned NOT NULL -- 'Minimum wait time of the summarized events that are timed.',
    `AVG_TIMER_WAIT` bigint(20) unsigned NOT NULL -- 'Average wait time of the summarized events that are timed.',
    `MAX_TIMER_WAIT` bigint(20) unsigned NOT NULL -- 'Maximum wait time of the summarized events that are timed.',
    `COUNT_READ` bigint(20) unsigned NOT NULL -- 'Number of all read operations, including FGETS, FGETC, FREAD, and READ.',
    `SUM_TIMER_READ` bigint(20) unsigned NOT NULL -- 'Total wait time of all read operations that are timed.',
    `MIN_TIMER_READ` bigint(20) unsigned NOT NULL -- 'Minimum wait time of all read operations that are timed.',
    `AVG_TIMER_READ` bigint(20) unsigned NOT NULL -- 'Average wait time of all read operations that are timed.',
    `MAX_TIMER_READ` bigint(20) unsigned NOT NULL -- 'Maximum wait time of all read operations that are timed.',
    `SUM_NUMBER_OF_BYTES_READ` bigint(20) NOT NULL -- 'Bytes read by read operations.',
    `COUNT_WRITE` bigint(20) unsigned NOT NULL -- 'Number of all write operations, including FPUTS, FPUTC, FPRINTF, VFPRINTF, FWRITE, and PWRITE.',
    `SUM_TIMER_WRITE` bigint(20) unsigned NOT NULL -- 'Total wait time of all write operations that are timed.',
    `MIN_TIMER_WRITE` bigint(20) unsigned NOT NULL -- 'Minimum wait time of all write operations that are timed.',
    `AVG_TIMER_WRITE` bigint(20) unsigned NOT NULL -- 'Average wait time of all write operations that are timed.',
    `MAX_TIMER_WRITE` bigint(20) unsigned NOT NULL -- 'Maximum wait time of all write operations that are timed.',
    `SUM_NUMBER_OF_BYTES_WRITE` bigint(20) NOT NULL -- 'Bytes written by write operations.',
    `COUNT_MISC` bigint(20) unsigned NOT NULL -- 'Number of all miscellaneous operations not counted above, including CREATE, DELETE, OPEN, CLOSE, STREAM_OPEN, STREAM_CLOSE, SEEK, TELL, FLUSH, STAT, FSTAT, CHSIZE, RENAME, and SYNC.',
    `SUM_TIMER_MISC` bigint(20) unsigned NOT NULL -- 'Total wait time of all miscellaneous operations that are timed.',
    `MIN_TIMER_MISC` bigint(20) unsigned NOT NULL -- 'Minimum wait time of all miscellaneous operations that are timed.',
    `AVG_TIMER_MISC` bigint(20) unsigned NOT NULL -- 'Average wait time of all miscellaneous operations that are timed.',
    `MAX_TIMER_MISC` bigint(20) unsigned NOT NULL -- 'Maximum wait time of all miscellaneous operations that are timed.'
);
</schema>
<question>
List out 10 names of the files with the most reads and writes
</question>
<sql>
"""
```

### response
```sql
SELECT FILE_NAME FROM file_summary_by_instance ORDER BY SUM_NUMBER_OF_BYTES_READ DESC, SUM_NUMBER_OF_BYTES_WRITE DESC LIMIT 10;
```


### 4. Function Calling

### prompt
```python
prompt = """
Give a function call in python language for the following question:
<example_response>
--doc: Description: This function logs a curl command in debug mode.
Parameters:
- method (str): The HTTP method to use for the request.
- url (str): The URL to send the request to.
- data (dict, optional): The data to send in the request. Defaults to None.
- headers (dict, optional): The headers to send with the request. Defaults to None.
- level (int, optional): The log level to use for this log message. Defaults to logging.DEBUG.
Returns:
- None
Example:
log_curl_debug('GET', 'https://example.com')
--question: log a curl PUT request for url https://web.io/
--function_call: log_curl_debug(method='PUT', url = 'https://web.io')
</example_response>
<doc>
Function Name: make_get_req()
Description: This function is used to make a GET request.
Parameters:
- path (str): The path of the URL to be requested.
- data (dict): The data to be sent in the body of the request.
- flags (dict): The flags to be sent in the request.
- params (dict): The parameters to be sent in the request.
- headers (dict): The headers to be sent in the request.
- not_json_response (bool): OPTIONAL: If set to True, the function will return the raw response content instead of trying to parse it as JSON.
- trailing (str): OPTIONAL: For wrapping slash symbol in the end of string.
- absolute (bool): OPTIONAL: If set to True, the function will not prefix the URL with the base URL.
- advanced_mode (bool): OPTIONAL: If set to True, the function will return the raw response instead of trying to parse it as JSON.
Returns:
- Union[str, dict, list, None]: The response content as a string, a dictionary, a list, or None if the response was not successful.
</doc>
<instruction>
1. Strictly use named parameters mentioned in the doc to generate function calls.
2. Only return the response as python parsable string version of function call.
3. Mention the 'self' parameter if required.
</instruction>
<question>
Make a GET request for the URL parameter using variable_2. For the params parameter, use 'weight' as one of the keys with variable_3 as its value, and 'width' as another key with a value of 10. For the data parameter, use variable_1. Prefix the URL with the base URL, and ensure the response is in raw format.
</question>
<function_call>
"""
```

### response
```python
make_get_req(path='https://example.com/api/v1/users', data=variable_1, params={'weight': variable_3, 'width': 10}, headers={'Content-Type': 'application/json'}, not_json_response=True, absolute=True)
```
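Since the instruction asks the model for a Python-parsable function call, the response can be validated syntactically with the standard-library `ast` module before use (a sketch; the names like `variable_1` only need to parse, not resolve):

```python
import ast

# The model response from above, as a string:
response = "make_get_req(path='https://example.com/api/v1/users', data=variable_1, params={'weight': variable_3, 'width': 10}, headers={'Content-Type': 'application/json'}, not_json_response=True, absolute=True)"

# Parse as a single expression and check it is a call with only named parameters.
call = ast.parse(response, mode="eval").body
kwargs = {kw.arg for kw in call.keywords}
```

A response that fails to parse (or uses positional arguments) can then be rejected or regenerated.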

### prompt
```python
prompt = """
Give only function call in python language as response for the following question:
<example_response>
--doc:
Function:
Help on function head in module pandas.core.generic:

head(self, n: 'int' = 5) -> 'Self'
    Return the first `n` rows.

    This function returns the first `n` rows for the object based
    on position. It is useful for quickly testing if your object
    has the right type of data in it.

    For negative values of `n`, this function returns all rows except
    the last `|n|` rows, equivalent to ``df[:n]``.

    If n is larger than the number of rows, this function returns all rows.

    Parameters
    ----------
    n : int, default 5
        Number of rows to select.

    Returns
    -------
    same type as caller
        The first `n` rows of the caller object.

    See Also
    --------
    DataFrame.tail: Returns the last `n` rows.

    Examples
    --------
    >>> df = pd.DataFrame({'animal': ['alligator', 'bee', 'falcon', 'lion',
    ...                    'monkey', 'parrot', 'shark', 'whale', 'zebra']})
    >>> df
          animal
    0  alligator

--question: Get the top 5 rows with the highest Engagement_Score. Parameter Description: Use 5 as Number of rows to return ,Use variable_3 as Sorted DataFrame, Do not call any other function, Pass variable to self parameter for method calls
--function_call: head(self=variable_3, n=5)
</example_response>
<doc>
Function: sort_values
sort_values in module pandas.core.frame:
sort_values(self, by: 'IndexLabel', *, axis: 'Axis' = 0, ascending: 'bool | list[bool] | tuple[bool, ...]' = True, inplace: 'bool' = False, kind: 'SortKind' = 'quicksort', na_position: 'str' = 'last', ignore_index: 'bool' = False, key: 'ValueKeyFunc | None' = None) -> 'DataFrame | None'
    Sort by the values along either axis.

    Parameters
    ----------
    by : str or list of str
        Name or list of names to sort by.

        - if `axis` is 0 or `'index'` then `by` may contain index
          levels and/or column labels.
        - if `axis` is 1 or `'columns'` then `by` may contain column
          levels and/or index labels.
    axis : "{0 or 'index', 1 or 'columns'}", default 0
        Axis to be sorted.
    ascending : bool or list of bool, default True
        Sort ascending vs. descending. Specify list for multiple sort
        orders. If this is a list of bools, must match the length of
        the
</doc>
<instruction>
1. Strictly use named parameters mentioned in the doc to generate function calls.
2. Only return the response as python parsable string version of function call.
3. Use the 'self' parameter if required in the function call with its value in named keyword format.
</instruction>
<question>
Using the above function, Sort the DataFrame by the Engagement_Score in descending order. Parameter Description: Use Engagement_Score as Column name to sort by ,Use False as Sort in descending order ,Use variable_1 as DataFrame to sort, Do not call any other function, Pass variable to self parameter for method calls
</question>
<function_call>
"""
```
### response
```python
sort_values(self=variable_1, by='Engagement_Score', ascending=False)
```


### Team

Avi Kothari, Gyan Ranjan, Pratham Gupta, Ritvik Aryan Kalra, Soham Acharya