LwbXc committed
Commit b754bbe
1 Parent(s): 1dfcb78

code and datasets

Browse files
This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. README.md +167 -0
  2. code/basic_prompting.py +45 -0
  3. code/config.py +167 -0
  4. code/cot_prompting.py +62 -0
  5. code/download_llms.py +14 -0
  6. code/fine_tuning.py +56 -0
  7. code/icl_prompting.py +52 -0
  8. code/model_finetuning/__init__.py +2 -0
  9. code/model_finetuning/my_formatting_fun.py +25 -0
  10. code/model_finetuning/my_trainer.py +93 -0
  11. code/model_inference/__init__.py +14 -0
  12. code/model_inference/chatglm2.py +24 -0
  13. code/model_inference/chatglm3.py +24 -0
  14. code/model_inference/chatgpt.py +48 -0
  15. code/model_inference/deepseek7b.py +26 -0
  16. code/model_inference/falcon7b.py +24 -0
  17. code/model_inference/gemma2b.py +25 -0
  18. code/model_inference/gemma7b.py +25 -0
  19. code/model_inference/gpt4o.py +50 -0
  20. code/model_inference/llama2_7b.py +25 -0
  21. code/model_inference/mistral7b.py +24 -0
  22. code/model_inference/phi2.py +24 -0
  23. code/model_inference/qwen7b.py +25 -0
  24. code/model_inference/vicuna7b.py +28 -0
  25. code/model_inference/yi6b.py +24 -0
  26. code/result_parser.py +96 -0
  27. datasets/basic/accurate_calculation/direction_determination.jsonl +0 -0
  28. datasets/basic/accurate_calculation/trajectory_trajectory.jsonl +0 -0
  29. datasets/basic/downstream_applications/trajectory_anomaly_detection_abnormal.jsonl +0 -0
  30. datasets/basic/downstream_applications/trajectory_anomaly_detection_normal.jsonl +0 -0
  31. datasets/basic/downstream_applications/trajectory_classification.jsonl +0 -0
  32. datasets/basic/downstream_applications/trajectory_prediction.jsonl +0 -0
  33. datasets/basic/knowledge_comprehension/administrative_region_determination.jsonl +0 -0
  34. datasets/basic/knowledge_comprehension/urban_region_function_recognition.jsonl +0 -0
  35. datasets/basic/spatiotemporal_reasoning/point_region_2regions.jsonl +0 -0
  36. datasets/basic/spatiotemporal_reasoning/point_region_3regions.jsonl +0 -0
  37. datasets/basic/spatiotemporal_reasoning/point_region_4regions.jsonl +0 -0
  38. datasets/basic/spatiotemporal_reasoning/point_region_5regions.jsonl +0 -0
  39. datasets/basic/spatiotemporal_reasoning/point_trajectory.jsonl +0 -0
  40. datasets/basic/spatiotemporal_reasoning/trajectory_identification_downsampling.jsonl +0 -0
  41. datasets/basic/spatiotemporal_reasoning/trajectory_identification_spatial_offset.jsonl +0 -0
  42. datasets/basic/spatiotemporal_reasoning/trajectory_identification_staggered_sampling.jsonl +0 -0
  43. datasets/basic/spatiotemporal_reasoning/trajectory_identification_temporal_offset.jsonl +0 -0
  44. datasets/basic/spatiotemporal_reasoning/trajectory_region_length10.jsonl +0 -0
  45. datasets/basic/spatiotemporal_reasoning/trajectory_region_length2.jsonl +0 -0
  46. datasets/basic/spatiotemporal_reasoning/trajectory_region_length4.jsonl +0 -0
  47. datasets/basic/spatiotemporal_reasoning/trajectory_region_length6.jsonl +0 -0
  48. datasets/basic/spatiotemporal_reasoning/trajectory_region_length8.jsonl +0 -0
  49. datasets/cot/trajectory_classification.jsonl +2 -0
  50. datasets/cot/trajectory_region.jsonl +2 -0
README.md CHANGED
@@ -1,3 +1,170 @@
1
  ---
2
  license: mit
3
  ---
4
+
5
+ # STBench: Assessing the Ability of Large Language Models in Spatio-Temporal Analysis
6
+
7
+ <p align="center">
8
+ Institute of Computing Technology (ICT), Chinese Academy of Sciences (CAS)<br/>
9
+ </p>
10
+ <p align="center">
11
+ 📃 <a href="https://arxiv.org/abs/2406.19065" target="_blank">Paper</a>
12
+ </p>
13
+
14
+ ![STBench overview](overview.png)
15
+
16
+ STBench is a benchmark to evaluate the ability of large language models in spatio-temporal analysis. This benchmark consists of 13 distinct tasks and over 60,000 question-answer pairs, covering four dimensions: knowledge comprehension, spatio-temporal reasoning, accurate computation and downstream applications.
17
+
18
+ All data samples in STBench are in the form of text completion. An example instance is shown below:
19
+ ```text
20
+ Question: Below is the coordinate information and related comments of a point of interest: ... Please answer the category of this point of interest.
21
+ Options: (1) xxxx, (2) xxxx, (3) xxxx, ...
22
+ Please answer one option.
23
+ Answer: The answer is option (
24
+ ```
25
+ The model is expected to complete the text, *i.e.*, it should generate an option number. Therefore, to benchmark a model with STBench, it is necessary to use a text completion API rather than a chat completion API. For chat models that only expose a chat completion API, we suggest instructing them to complete the text through the system prompt:
26
+ ```json
27
+ [{"role": "system", "content": "you are a helpful text completion assistant. Please continue writing the text entered by the human."}, {"role": "human", "content": "Question: Below is the coordinate information and related comments of a point of interest: ... Please answer the category of this point of interest.\nOptions: (1) xxxx, (2) xxxx, (3) xxxx, ...\nPlease answer one option.\nAnswer: The answer is option ("}]
28
+ ```
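+
+ For illustration, here is a minimal sketch of this workaround using the official `openai` Python client (the API key and model name below are placeholders, not part of STBench; the repository's own code routes such calls through LangChain, and the raw OpenAI API names the human turn `user`):
+ ```python
+ from openai import OpenAI
+
+ client = OpenAI(api_key="YOUR_API_KEY")  # placeholder key
+
+ def complete(prompt: str, max_tokens: int = 15) -> str:
+     """Ask a chat-only model to continue the prompt as if it were a text completion."""
+     response = client.chat.completions.create(
+         model="gpt-3.5-turbo",  # placeholder chat model
+         temperature=0,
+         max_tokens=max_tokens,
+         messages=[
+             {"role": "system", "content": "you are a helpful text completion assistant. Please continue writing the text entered by the human."},
+             {"role": "user", "content": prompt},
+         ],
+     )
+     return response.choices[0].message.content
+ ```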
29
+
30
+ ## Quick Start
31
+ We have benchmarked 13 distinct large language models, and here we provide a simple guide to reproducing our experiments.
32
+
33
+ 1. Dependency Installation
34
+
35
+ Run the following command to install dependencies:
36
+ ```bash
37
+ pip install -r requirements.txt
38
+ ```
39
+ 2. Model Downloading
40
+
41
+ Our experiments on open-source models are based on [modelscope](https://github.com/modelscope/modelscope), and these models can be downloaded with the following command:
42
+ ```bash
43
+ cd code
44
+ python download_llms.py
45
+ ```
46
+
47
+ 3. Basic Prompting
48
+
49
+ Run the following command to benchmark all models through 13 tasks:
50
+ ```bash
51
+ python basic_prompting.py
52
+ ```
53
+
54
+ 4. In-Context Learning
55
+
56
+ Run the following command to evaluate the performance of all models with in-context learning:
57
+ ```bash
58
+ python icl_prompting.py
59
+ ```
60
+
61
+ 5. Chain-of-Thought Prompting
62
+
63
+ To conduct experiments with chain-of-thought prompting for all models, run the following command:
64
+ ```bash
65
+ python cot_prompting.py
66
+ ```
67
+
68
+ 6. Fine-tuning
69
+
70
+ Run the following command to fine-tune the model and evaluate the fine-tuned model:
71
+ ```bash
72
+ python fine_tuning.py
73
+ ```
74
+
75
+ ## Detailed Usage
76
+ This repository is organized as follows:
77
+ ```text
+ Project
+ |—— LICENSE
+ |—— overview.png
+ |—— README.md
+ |—— requirements.txt
+ |—— datasets                  # all datasets can be found in this directory
+     |—— basic                 # the main datasets of STBench, consisting of over 60,000 QA pairs
+     |—— icl                   # two samples for each task to perform two-shot prompting
+     |—— cot                   # two samples containing reasoning for each task to perform CoT prompting
+     |—— sft                   # training datasets and validation datasets for fine-tuning
+ |—— code
+     |—— model_inference       # calling the API of each large language model
+     |—— model_finetuning      # fine-tuning code
+     |—— download_llms.py      # downloading open-source models
+     |—— basic_prompting.py    # run experiments with basic prompting
+     |—— icl_prompting.py      # run experiments with ICL prompting
+     |—— cot_prompting.py      # run experiments with CoT prompting
+     |—— fine_tuning.py        # run experiments with fine-tuning
+     |—— result_parser.py      # code for identifying the final answer of the model
+     |—— config.py             # a declaration of configuration such as the file path for each task
+ ```
99
+ 1. To benchmark a new model, namely **NEW_MODEL**
100
+
101
+ a. Write your code for calling the API of this model in `code/model_inference/new_model.py`, and modify `code/model_inference/__init__.py` accordingly (a sketch is given after this list).
102
+
103
+ b. Add the model to the model list in `code/basic_prompting.py`
104
+
105
+ 2. To include a new dataset, namely `new_dataset.jsonl`, for a task **NEW_TASK**
106
+
107
+ a. Put your dataset here: `datasets/basic/new_dataset.jsonl`
108
+
109
+ b. Modify `code/result_parser.py` and implement your function `new_task_parser()` to parse the results from the output of the LLMs (a parser sketch is given after this list)
110
+
111
+ c. Modify `code/config.py` to specify the mapping from **NEW_TASK** to the dataset path `datasets/basic/new_dataset.jsonl` and the mapping from **NEW_TASK** to the result parser `new_task_parser()`
112
+
113
+ d. Add the task to the task list in `code/basic_prompting.py`
114
+
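+
+ The existing wrappers in `code/model_inference/` share a small informal interface: a constructor that loads the model and records `model_path`, and a `generate(input_text, max_new_tokens)` method that returns only the newly generated text. As referenced in step 1.a, a hypothetical `new_model.py` following that pattern might look like this (the checkpoint path is a placeholder):
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ import torch
+
+ class new_model(object):
+
+     def __init__(self, model_path='path/to/your/checkpoint', torch_dtype=torch.float32, device='cuda', max_new_tokens=5):
+         print("Loading model from", model_path)
+         self.model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch_dtype, device_map=device)
+         self.tokenizer = AutoTokenizer.from_pretrained(model_path)
+         self.model_path = model_path
+         self.max_new_tokens = max_new_tokens
+
+     def generate(self, input_text, max_new_tokens=None):
+         if max_new_tokens is None:
+             max_new_tokens = self.max_new_tokens
+         inputs = self.tokenizer(input_text, return_tensors="pt").input_ids.to(self.model.device)
+         outputs = self.model.generate(inputs, max_length=len(inputs[0]) + max_new_tokens)
+         # drop the echoed prompt so that only the completion is returned
+         return self.tokenizer.batch_decode(outputs)[0][len(input_text):]
+ ```
+
+ As referenced in step 2.b, a result parser receives the raw model output, the ground-truth answer and a log file handle, and returns 1 for a correct answer, 0 for a wrong one, or `None` when no answer can be extracted (this convention is inferred from how `code/basic_prompting.py` accumulates its scores). A hypothetical parser for an option-number task could look like:
+ ```python
+ import re
+
+ def new_task_parser(response, answer, error_writer):
+     # compare the first option number found in the model output with the reference answer
+     predicted = re.search(r"\d+", response)
+     expected = re.search(r"\d+", str(answer))
+     if predicted is None or expected is None:
+         error_writer.write("Unparsable response: {}\n".format(response))
+         return None
+     return 1 if predicted.group(0) == expected.group(0) else 0
+ ```
+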
115
+ ## Experimental Results
116
+
117
+ <table>
118
+ <tr>
119
+ <td align="center"></td>
120
+ <td align="center" colspan="4">Knowledge Comprehension</td>
121
+ <td align="center" colspan="4">Spatio-temporal Reasoning</td>
122
+ <td align="center" colspan="2">Accurate Computation</td>
123
+ <td align="center" colspan="3">Downstream Applications</td>
124
+ </tr>
125
+ <tr>
126
+ <td align="center"></td><td align="center">PCR</td><td align="center">PI</td><td align="center">URFR</td><td align="center">ARD</td><td align="center">PTRD</td><td align="center">PRRD</td><td align="center">TRRD</td><td align="center">TI</td><td align="center">DD</td><td align="center">TTRA</td><td align="center">TAD</td><td align="center">TC</td><td align="center">TP</td>
127
+ </tr>
128
+ <tr>
129
+ <td align="center"> ChatGPT </td><td align="center"><span style="text-decoration: underline;"> 0.7926 </span></td><td align="center"> 0.5864 </td><td align="center"><span style="text-decoration: underline;"> 0.3978 </span></td><td align="center"><span style="text-decoration: underline;"> 0.8358 </span></td><td align="center"><b> 0.7525 </b></td><td align="center"><b> 0.9240 </b></td><td align="center"> 0.0258 </td><td align="center"> 0.3342 </td><td align="center"> 0.1698 </td><td align="center"> 0.1048 </td><td align="center"><span style="text-decoration: underline;"> 0.5382 </span></td><td align="center"><b> 0.4475 </b></td><td align="center"> -
130
+ </tr>
131
+ <tr>
132
+ <td align="center">GPT-4o </td><td align="center"><b> 0.9588 </b></td><td align="center"><b> 0.7268 </b></td><td align="center"><b> 0.6026 </b></td><td align="center"><b> 0.9656 </b></td><td align="center"> - </td><td align="center"><span style="text-decoration: underline;"> 0.9188 </span></td><td align="center"> 0.1102 </td><td align="center"> 0.4416 </td><td align="center"><b> 0.5434 </b></td><td align="center"><b> 0.3404 </b></td><td align="center"><b> 0.6016 </b></td><td align="center"> - </td><td align="center"> - </td>
133
+ </tr>
134
+ <tr>
135
+ <td align="center"> ChatGLM2 </td><td align="center"> 0.2938 </td><td align="center"> 0.5004 </td><td align="center"> 0.2661 </td><td align="center"> 0.2176 </td><td align="center"> 0.2036 </td><td align="center"> 0.5216 </td><td align="center"><b> 0.2790 </b></td><td align="center"> 0.5000 </td><td align="center"> 0.1182 </td><td align="center"> 0.1992 </td><td align="center"> 0.5000 </td><td align="center"> 0.3333 </td><td align="center"> 231.2 </td>
136
+ </tr>
137
+ <tr>
138
+ <td align="center"> ChatGLM3 </td><td align="center"> 0.4342 </td><td align="center"> 0.5272 </td><td align="center"> 0.2704 </td><td align="center"> 0.2872 </td><td align="center"> 0.3058 </td><td align="center"> 0.8244 </td><td align="center"> 0.1978 </td><td align="center"><span style="text-decoration: underline;"> 0.6842 </span></td><td align="center"> 0.1156 </td><td align="center"> 0.1828 </td><td align="center"> 0.5000 </td><td align="center"> 0.3111 </td><td align="center"> 224.5 </td>
139
+ </tr>
140
+ <tr>
141
+ <td align="center"> Phi-2 </td><td align="center"> - </td><td align="center"> 0.5267 </td><td align="center"> - </td><td align="center"> 0.2988 </td><td align="center"> - </td><td align="center"> - </td><td align="center"> - </td><td align="center"> 0.5000 </td><td align="center"> 0.1182 </td><td align="center"> 0.0658 </td><td align="center"> 0.5000 </td><td align="center"> 0.3333 </td><td align="center"> 206.9 </td>
142
+ </tr>
143
+ <tr>
144
+ <td align="center"> Llama-2-7B </td><td align="center"> 0.2146 </td><td align="center"> 0.4790 </td><td align="center"> 0.2105 </td><td align="center"> 0.2198 </td><td align="center"> 0.2802 </td><td align="center"> 0.6606 </td><td align="center"> 0.2034 </td><td align="center"> 0.5486 </td><td align="center"> 0.1256 </td><td align="center"> 0.2062 </td><td align="center"> 0.5098</td><td align="center"> 0.3333 </td><td align="center"> 189.3 </td>
145
+ </tr>
146
+ <tr>
147
+ <td align="center"> Vicuna-7B </td><td align="center"> 0.3858 </td><td align="center"> 0.5836 </td><td align="center"> 0.2063 </td><td align="center"> 0.2212 </td><td align="center"> 0.3470 </td><td align="center"> 0.7080 </td><td align="center"> 0.1968 </td><td align="center"> 0.5000 </td><td align="center"> 0.1106 </td><td align="center"> 0.1728 </td><td align="center"> 0.5000 </td><td align="center"> 0.2558 </td><td align="center"> 188.1</td>
148
+ </tr>
149
+ <tr>
150
+ <td align="center"> Gemma-2B </td><td align="center"> 0.2116 </td><td align="center"> 0.5000 </td><td align="center"> 0.1989 </td><td align="center"> 0.1938 </td><td align="center"> 0.4688 </td><td align="center"> 0.5744 </td><td align="center"> 0.2014 </td><td align="center"> 0.5000 </td><td align="center"><span style="text-decoration: underline;"> 0.1972 </span></td><td align="center"> 0.2038 </td><td align="center"> 0.5000 </td><td align="center"> 0.3333 </td><td align="center"> 207.7 </td>
151
+ </tr>
152
+ <tr>
153
+ <td align="center"> Gemma-7B </td><td align="center"> 0.4462 </td><td align="center"> 0.5000 </td><td align="center"> 0.2258 </td><td align="center"> 0.2652 </td><td align="center"> 0.3782 </td><td align="center"> 0.9044 </td><td align="center"> 0.1992 </td><td align="center"> 0.5000 </td><td align="center"> 0.1182 </td><td align="center"> 0.1426 </td><td align="center"> 0.5000 </td><td align="center"> 0.3333 </td><td align="center"><b> 139.4</b></td>
154
+ </tr>
155
+ <tr>
156
+ <td align="center"> DeepSeek-7B </td><td align="center"> 0.2160 </td><td align="center"> 0.4708 </td><td align="center"> 0.2071 </td><td align="center"> 0.1938 </td><td align="center"> 0.2142 </td><td align="center"> 0.6424 </td><td align="center"> 0.1173 </td><td align="center"> 0.4964 </td><td align="center"> 0.1972 </td><td align="center"> 0.1646 </td><td align="center"> 0.5000 </td><td align="center"> 0.3333 </td><td align="center"> 220.8</td>
157
+ </tr>
158
+ <tr>
159
+ <td align="center"> Falcon-7B </td><td align="center"> 0.1888 </td><td align="center"> 0.5112 </td><td align="center"> 0.1929 </td><td align="center"> 0.1928 </td><td align="center"> 0.1918 </td><td align="center"> 0.4222 </td><td align="center"><span style="text-decoration: underline;"> 0.2061 </span></td><td align="center"><b> 0.7072 </b></td><td align="center"> 0.1365 </td><td align="center"> 0.2124 </td><td align="center"> 0.5000 </td><td align="center"> 0.3309 </td><td align="center"> 3572.8 </td>
160
+ </tr>
161
+ <tr>
162
+ <td align="center"> Mistral-7B </td><td align="center"> 0.3526 </td><td align="center"> 0.4918 </td><td align="center"> 0.2168 </td><td align="center"> 0.3014 </td><td align="center"> 0.4476 </td><td align="center"> 0.7098 </td><td align="center"> 0.0702 </td><td align="center"> 0.4376 </td><td align="center"> 0.1182 </td><td align="center"> 0.1094 </td><td align="center"> 0.5000 </td><td align="center"> 0.3333 </td><td align="center"> 156.8 </td>
163
+ </tr>
164
+ <tr>
165
+ <td align="center"> Qwen-7B </td><td align="center"> 0.2504 </td><td align="center"><span style="text-decoration: underline;"> 0.6795 </span></td><td align="center"> 0.2569 </td><td align="center"> 0.2282 </td><td align="center"> 0.2272 </td><td align="center"> 0.5762 </td><td align="center"> 0.1661 </td><td align="center"> 0.4787 </td><td align="center"> 0.1324 </td><td align="center"><span style="text-decoration: underline;"> 0.2424 </span></td><td align="center"> 0.5049 </td><td align="center"><span style="text-decoration: underline;"> 0.3477 </span></td><td align="center"> 205.2 </td>
166
+ </tr>
167
+ <tr>
168
+ <td align="center"> Yi-6B </td><td align="center"> 0.3576 </td><td align="center"> 0.5052 </td><td align="center"> 0.2149 </td><td align="center"> 0.1880 </td><td align="center"><span style="text-decoration: underline;"> 0.5536 </span></td><td align="center"> 0.8264 </td><td align="center"> 0.1979 </td><td align="center"> 0.5722 </td><td align="center"> 0.1284 </td><td align="center"> 0.2214 </td><td align="center"> 0.5000 </td><td align="center"> 0.3333 </td><td align="center"><span style="text-decoration: underline;"> 156.2 </span></td>
169
+ </tr>
170
+ </table>
code/basic_prompting.py ADDED
@@ -0,0 +1,45 @@
1
+ from model_inference import *
2
+ from config import result_parsers, dataset_files, max_tokens
3
+ from tqdm import tqdm
4
+ import json
5
+ import os
6
+
7
+ models = [chatglm2, chatglm3, deepseek7b, falcon7b, gemma2b, gemma7b, llama2_7b, mistral7b, phi2, qwen7b, vicuna7b, yi6b, chatgpt, gpt4o]
8
+ tasks = ["poi_category_recognition", "poi_identification", "urban_region_function_recognition", "administrative_region_determination", "point_trajectory", "point_region", "trajectory_region", "trajectory_identification", "trajectory_trajectory", "direction_determination", "trajectory_anomaly_detection", "trajectory_classification", "trajectory_prediction"]
9
+
10
+ if not os.path.exists("./logs"):
11
+ os.mkdir("./logs")
12
+
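+ # evaluate every model on every task: each model is instantiated once, then scored per dataset, with running accuracy written to ./logs/<task>.log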
13
+ for fun in models:
14
+ model = fun()
15
+ for task in tasks:
16
+ error_writer = open("./logs/{}.log".format(task), 'a')
17
+ error_writer.write(model.model_path+'\n')
18
+ result_parser = result_parsers[task]
19
+ for dataset_path in dataset_files[task]:
20
+ dataset = open(dataset_path, 'r')
21
+ dataset = dataset.readlines()
22
+
23
+ correct = 0
24
+ total = 0
25
+ exception = 0
26
+
27
+ for i, item in tqdm(enumerate(dataset), total=len(dataset)):
28
+ item = json.loads(item)
29
+ response = model.generate(item["Question"], max_tokens[task])
30
+ score = result_parser(response, item["Answer"], error_writer)
31
+
32
+ if task!='trajectory_prediction' or score is not None:
33
+ total +=1
34
+ if score is None:
35
+ exception += 1
36
+ else:
37
+ correct += score
38
+
39
+ if i%100==0:
40
+ print("Dataset: {}\nTotal: {}, correct:{}, exception:{}, accuracy:{}\n\n".format(dataset_path, total, correct, exception, correct/total))
41
+
42
+ error_writer.write("Dataset: {}\nTotal: {}, correct:{}, exception:{}, accuracy:{}\n\n".format(dataset_path, total, correct, exception, correct/total))
43
+ error_writer.flush()
44
+ error_writer.write("\n")
45
+ error_writer.close()
code/config.py ADDED
@@ -0,0 +1,167 @@
1
+ from dataclasses import dataclass, field
2
+ from typing import Optional
3
+ from result_parser import yes_or_no, find_option_number, anomaly_detection, trajectory_prediction, trajectory_classification
4
+
5
+ result_parsers = {
6
+ "poi_category_recognition": find_option_number,
7
+ "poi_identification": yes_or_no,
8
+ "urban_region_function_recognition": find_option_number,
9
+ "administrative_region_determination": find_option_number,
10
+ "point_trajectory": find_option_number,
11
+ "point_region": find_option_number,
12
+ "trajectory_region": find_option_number,
13
+ "trajectory_identification": yes_or_no,
14
+ "trajectory_trajectory": find_option_number,
15
+ "direction_determination": find_option_number,
16
+ "trajectory_anomaly_detection": anomaly_detection,
17
+ "trajectory_classification": trajectory_classification,
18
+ "trajectory_prediction": trajectory_prediction
19
+ }
20
+
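+ # maximum number of newly generated tokens per task; trajectory prediction needs a longer completion than the multiple-choice tasks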
21
+ max_tokens = {
22
+ "poi_category_recognition": 15,
23
+ "poi_identification": 15,
24
+ "urban_region_function_recognition": 15,
25
+ "administrative_region_determination": 15,
26
+ "point_trajectory": 15,
27
+ "point_region": 15,
28
+ "trajectory_region": 15,
29
+ "trajectory_identification": 15,
30
+ "trajectory_trajectory": 15,
31
+ "direction_determination": 15,
32
+ "trajectory_anomaly_detection": 15,
33
+ "trajectory_classification": 15,
34
+ "trajectory_prediction": 50
35
+ }
36
+
37
+ dataset_files = {
38
+ "poi_category_recognition": ["../datasets/basic/knowledge_comprehension/poi_category_recognition.jsonl"],
39
+ "poi_identification": ["../datasets/basic/knowledge_comprehension/poi_identification.jsonl"],
40
+ "urban_region_function_recognition": ["../datasets/basic/knowledge_comprehension/urban_region_function_recognition.jsonl"],
41
+ "administrative_region_determination": ["../datasets/basic/knowledge_comprehension/administrative_region_determination.jsonl"],
42
+ "point_trajectory": ["../datasets/basic/spatiotemporal_reasoning/point_trajectory.jsonl"],
43
+ "point_region": ["../datasets/basic/spatiotemporal_reasoning/point_region_2regions.jsonl",
44
+ "../datasets/basic/spatiotemporal_reasoning/point_region_3regions.jsonl",
45
+ "../datasets/basic/spatiotemporal_reasoning/point_region_4regions.jsonl",
46
+ "../datasets/basic/spatiotemporal_reasoning/point_region_5regions.jsonl"],
47
+ "trajectory_region": ["../datasets/basic/spatiotemporal_reasoning/trajectory_region_length2.jsonl",
48
+ "../datasets/basic/spatiotemporal_reasoning/trajectory_region_length4.jsonl",
49
+ "../datasets/basic/spatiotemporal_reasoning/trajectory_region_length6.jsonl",
50
+ "../datasets/basic/spatiotemporal_reasoning/trajectory_region_length8.jsonl",
51
+ "../datasets/basic/spatiotemporal_reasoning/trajectory_region_length10.jsonl"],
52
+ "trajectory_identification": ["../datasets/basic/spatiotemporal_reasoning/trajectory_identification_downsampling.jsonl",
53
+ "../datasets/basic/spatiotemporal_reasoning/trajectory_identification_staggered_sampling.jsonl",
54
+ "../datasets/basic/spatiotemporal_reasoning/trajectory_identification_spatial_offset.jsonl",
55
+ "../datasets/basic/spatiotemporal_reasoning/trajectory_identification_temporal_offset.jsonl"],
56
+ "trajectory_trajectory": ["../datasets/basic/accurate_calculation/trajectory_trajectory.jsonl"],
57
+ "direction_determination": ["../datasets/basic/accurate_calculation/direction_determination.jsonl"],
58
+ "trajectory_anomaly_detection": ["../datasets/basic/downstream_applications/trajectory_anomaly_detection_abnormal.jsonl",
59
+ "../datasets/basic/downstream_applications/trajectory_anomaly_detection_normal.jsonl"],
60
+ "trajectory_classification": ["../datasets/basic/downstream_applications/trajectory_classification.jsonl"],
61
+ "trajectory_prediction": ["../datasets/basic/downstream_applications/trajectory_prediction.jsonl"]
62
+ }
63
+
64
+ icl_files = {
65
+ "poi_identification": "../datasets/icl/poi_identification.jsonl",
66
+ "trajectory_region": "../datasets/icl/trajectory_region.jsonl",
67
+ "trajectory_trajectory": "../datasets/icl/trajectory_trajectory.jsonl",
68
+ "direction_determination": "../datasets/icl/direction_determination.jsonl",
69
+ "trajectory_anomaly_detection": "../datasets/icl/trajectory_anomaly_detection.jsonl",
70
+ "trajectory_prediction": "../datasets/icl/trajectory_prediction.jsonl"
71
+ }
72
+
73
+ cot_files = {
74
+ "urban_region_function_recognition": "../datasets/cot/urban_region_function_recognition.jsonl",
75
+ "trajectory_region": "../datasets/cot/trajectory_region.jsonl",
76
+ "trajectory_trajectory": "../datasets/cot/trajectory_trajectory.jsonl",
77
+ "trajectory_classification": "../datasets/cot/trajectory_classification.jsonl"
78
+ }
79
+
80
+ sft_files = {
81
+ "administrative_region_determination": {
82
+ "train": "../datasets/sft/administrative_region_determination_train.jsonl",
83
+ "valid": "../datasets/sft/administrative_region_determination_valid.jsonl"
84
+ },
85
+ "direction_determination": {
86
+ "train": "../datasets/sft/direction_determination_train.jsonl",
87
+ "valid": "../datasets/sft/direction_determination_valid.jsonl"
88
+ },
89
+ "trajectory_anomaly_detection":{
90
+ "train": "../datasets/sft/trajectory_anomaly_detection_train.jsonl",
91
+ "valid": "../datasets/sft/trajectory_anomaly_detection_valid.jsonl"
92
+ },
93
+ "trajectory_prediction": {
94
+ "train": "../datasets/sft/trajectory_prediction_train.jsonl",
95
+ "valid": "../datasets/sft/trajectory_prediction_valid.jsonl"
96
+ },
97
+ "trajectory_region": {
98
+ "train": "../datasets/sft/trajectory_region_train.jsonl",
99
+ "valid": "../datasets/sft/trajectory_region_valid.jsonl"
100
+ },
101
+ "trajectory_trajectory": {
102
+ "train": "../datasets/sft/trajectory_trajectory_train.jsonl",
103
+ "valid": "../datasets/sft/trajectory_trajectory_valid.jsonl"
104
+ }
105
+ }
106
+
107
+ @dataclass
108
+ class ScriptArguments:
109
+ """
110
+ These arguments vary depending on how many GPUs you have, what their capacity and features are, and what size model you want to train.
111
+ """
112
+ per_device_train_batch_size: Optional[int] = field(default=4)
113
+ per_device_eval_batch_size: Optional[int] = field(default=1)
114
+ gradient_accumulation_steps: Optional[int] = field(default=4)
115
+ learning_rate: Optional[float] = field(default=2e-4)
116
+ max_grad_norm: Optional[float] = field(default=0.3)
117
+ weight_decay: Optional[int] = field(default=0.001)
118
+ lora_alpha: Optional[int] = field(default=16)
119
+ lora_dropout: Optional[float] = field(default=0.1)
120
+ lora_r: Optional[int] = field(default=8)
121
+ max_seq_length: Optional[int] = field(default=2048)
122
+ model_name: Optional[str] = field(
123
+ default=None,
124
+ metadata={
125
+ "help": "The model that you want to train from the Hugging Face hub. E.g. gpt2, gpt2-xl, bert, etc."
126
+ }
127
+ )
128
+ dataset_name: Optional[str] = field(
129
+ default="stingning/ultrachat",
130
+ metadata={"help": "The preference dataset to use."},
131
+ )
132
+ fp16: Optional[bool] = field(
133
+ default=False,
134
+ metadata={"help": "Enables fp16 training."},
135
+ )
136
+ bf16: Optional[bool] = field(
137
+ default=False,
138
+ metadata={"help": "Enables bf16 training."},
139
+ )
140
+ packing: Optional[bool] = field(
141
+ default=True,
142
+ metadata={"help": "Use packing dataset creating."},
143
+ )
144
+ gradient_checkpointing: Optional[bool] = field(
145
+ default=True,
146
+ metadata={"help": "Enables gradient checkpointing."},
147
+ )
148
+ use_flash_attention_2: Optional[bool] = field(
149
+ default=False,
150
+ metadata={"help": "Enables Flash Attention 2."},
151
+ )
152
+ optim: Optional[str] = field(
153
+ default="paged_adamw_32bit",
154
+ metadata={"help": "The optimizer to use."},
155
+ )
156
+ lr_scheduler_type: str = field(
157
+ default="constant",
158
+ metadata={"help": "Learning rate schedule. Constant a bit better than cosine, and has advantage for analysis"},
159
+ )
160
+ max_steps: int = field(default=1000, metadata={"help": "How many optimizer update steps to take"})
161
+ warmup_ratio: float = field(default=0.03, metadata={"help": "Fraction of steps to do a warmup for"})
162
+ save_steps: int = field(default=100, metadata={"help": "Save checkpoint every X updates steps."})
163
+ logging_steps: int = field(default=10, metadata={"help": "Log every X updates steps."})
164
+ output_dir: str = field(
165
+ default="./results",
166
+ metadata={"help": "The output directory where the model predictions and checkpoints will be written."},
167
+ )
code/cot_prompting.py ADDED
@@ -0,0 +1,62 @@
1
+ from model_inference import *
2
+ from config import dataset_files, cot_files
3
+ from result_parser import find_option_number_for_cot
4
+ from tqdm import tqdm
5
+ import json
6
+ import os
7
+
8
+ models = [gemma2b]
9
+ tasks = ["urban_region_function_recognition", "trajectory_region", "trajectory_trajectory", "trajectory_classification"]
10
+
11
+ if not os.path.exists("./logs"):
12
+ os.mkdir("./logs")
13
+
14
+ for fun in models:
15
+ model = fun()
16
+ for task in tasks:
17
+ error_writer = open("./logs/cot_{}.log".format(task), 'a')
18
+ error_writer.write(model.model_path+'\n')
19
+
20
+ context_samples = open(cot_files[task])
21
+ prompt = ""
22
+ for _i, sample in enumerate(context_samples.readlines()):
23
+ sample = json.loads(sample)
24
+ prompt += "{}{}\n".format(sample['Question'], sample['Answer'])
25
+
26
+ for dataset_path in dataset_files[task]:
27
+ dataset = open(dataset_path, 'r')
28
+ dataset = dataset.readlines()
29
+
30
+ correct = 0
31
+ total = 0
32
+ exception = 0
33
+
34
+ for i, item in tqdm(enumerate(dataset), total=len(dataset)):
35
+ item = json.loads(item)
36
+
37
+ # remove the guidance from data samples for cot-prompting
38
+ if task=="urban_region_function_recognition":
39
+ question = item['Question'].replace("Please just answer the number of your option with no other texts. Answer: Option (", "")
40
+ elif task=="trajectory_trajectory":
41
+ question = item['Question'].replace(" with no other texts. Answer: Option (", ".")
42
+ elif task=="trajectory_region":
43
+ question = item['Question'].replace(" with no other texts. Answer: Option ", ".")
44
+ elif task=="trajectory_classification":
45
+ question = item['Question'].replace("Answer: The trajectory is most likely to be generated by", "")
46
+
47
+ response = model.generate(prompt+question, 100)
48
+ score = find_option_number_for_cot(response, item["Answer"], error_writer)
49
+
50
+ total +=1
51
+ if score is None:
52
+ exception += 1
53
+ else:
54
+ correct += score
55
+
56
+ if i%100==0:
57
+ print("Dataset: {}\nTotal: {}, correct:{}, exception:{}, accuracy:{}\n\n".format(dataset_path, total, correct, exception, correct/total))
58
+
59
+ error_writer.write("Dataset: {}\nTotal: {}, correct:{}, exception:{}, accuracy:{}\n\n".format(dataset_path, total, correct, exception, correct/total))
60
+ error_writer.flush()
61
+ error_writer.write("\n")
62
+ error_writer.close()
code/download_llms.py ADDED
@@ -0,0 +1,14 @@
1
+ from modelscope import snapshot_download
2
+
3
+ snapshot_download('AI-ModelScope/gemma-7b', ignore_file_pattern = [r'\w+\.gguf'])
4
+ snapshot_download('AI-ModelScope/gemma-2b', ignore_file_pattern = [r'\w+\.gguf'])
5
+ snapshot_download('deepseek-ai/deepseek-llm-7b-base')
6
+ snapshot_download('AI-ModelScope/falcon-7b')
7
+ snapshot_download('AI-ModelScope/Mistral-7B-v0.1')
8
+ snapshot_download('qwen/Qwen-7B')
9
+ snapshot_download('01ai/Yi-6B')
10
+ snapshot_download('ZhipuAI/chatglm2-6b')
11
+ snapshot_download('ZhipuAI/chatglm3-6b')
12
+ snapshot_download('AI-ModelScope/phi-2')
13
+ snapshot_download('modelscope/Llama-2-7b-ms', ignore_file_pattern = [r'\w+\.bin'])
14
+ snapshot_download('AI-ModelScope/vicuna-7b-v1.5')
code/fine_tuning.py ADDED
@@ -0,0 +1,56 @@
1
+ from model_finetuning import formatting_func_without_space, formatting_func_space, trajectory_region_formatting, sft
2
+ from model_inference import gemma2b
3
+ from config import ScriptArguments, sft_files, dataset_files, max_tokens, result_parsers
4
+ from tqdm import tqdm
5
+ import json
6
+ import os
7
+
8
+ models_path = '~/.cache/modelscope/hub/AI-ModelScope/gemma-2b'
9
+ tasks2formatting = {"administrative_region_determination": formatting_func_without_space, "direction_determination": formatting_func_without_space, "trajectory_anomaly_detection": formatting_func_space, "trajectory_prediction": formatting_func_space, "trajectory_region": trajectory_region_formatting, "trajectory_trajectory": formatting_func_without_space}
10
+
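+ # for each task: run LoRA fine-tuning on gemma-2b with the task-specific formatting function, then evaluate the resulting model on the benchmark datasets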
11
+ if not os.path.exists("./save"):
12
+ os.mkdir("./save")
13
+
14
+ if not os.path.exists("./logs"):
15
+ os.mkdir("./logs")
16
+
17
+ for task, formatting_func in tasks2formatting.items():
18
+ save_path = "./save/{}/".format(task)
19
+
20
+ if not os.path.exists(save_path):
21
+ os.mkdir(save_path)
22
+
23
+ sft(ScriptArguments, models_path, formatting_func, sft_files[task], save_path)
24
+
25
+ model = gemma2b(os.path.join(save_path, "merged_model"))  # load the merged model written by sft()
26
+
27
+ error_writer = open("./logs/{}.log".format(task), 'a')
28
+ error_writer.write(save_path+'\n')
29
+ result_parser = result_parsers[task]
30
+ for dataset_path in dataset_files[task]:
31
+ dataset = open(dataset_path, 'r')
32
+ dataset = dataset.readlines()
33
+
34
+ correct = 0
35
+ total = 0
36
+ exception = 0
37
+
38
+ for i, item in tqdm(enumerate(dataset), total=len(dataset)):
39
+ item = json.loads(item)
40
+ response = model.generate(item["Question"], max_tokens[task])
41
+ score = result_parser(response, item["Answer"], error_writer)
42
+
43
+ if task!='trajectory_prediction' or score is not None:
44
+ total +=1
45
+ if score is None:
46
+ exception += 1
47
+ else:
48
+ correct += score
49
+
50
+ if i%100==0:
51
+ print("Dataset: {}\nTotal: {}, correct:{}, exception:{}, accuracy:{}\n\n".format(dataset_path, total, correct, exception, correct/total))
52
+
53
+ error_writer.write("Dataset: {}\nTotal: {}, correct:{}, exception:{}, accuracy:{}\n\n".format(dataset_path, total, correct, exception, correct/total))
54
+ error_writer.flush()
55
+ error_writer.write("\n")
56
+ error_writer.close()
code/icl_prompting.py ADDED
@@ -0,0 +1,52 @@
1
+ from model_inference import *
2
+ from config import result_parsers, dataset_files, max_tokens, icl_files
3
+ from tqdm import tqdm
4
+ import json
5
+ import os
6
+
7
+ models = [gemma2b, llama2_7b]
8
+ tasks = ["poi_identification", "trajectory_region", "trajectory_trajectory", "direction_determination", "trajectory_anomaly_detection", "trajectory_prediction"]
9
+
10
+ if not os.path.exists("./logs"):
11
+ os.mkdir("./logs")
12
+
13
+ for fun in models:
14
+ model = fun()
15
+ for task in tasks:
16
+ error_writer = open("./logs/icl_{}.log".format(task), 'a')
17
+ error_writer.write(model.model_path+'\n')
18
+ result_parser = result_parsers[task]
19
+
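+ # build a few-shot prefix by concatenating the in-context examples provided for this task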
20
+ context_samples = open(icl_files[task])
21
+ prompt = ""
22
+ for _i, sample in enumerate(context_samples.readlines()):
23
+ sample = json.loads(sample)
24
+ prompt += "{}{}\n".format(sample['Question'], sample['Answer'])
25
+
26
+ for dataset_path in dataset_files[task]:
27
+ dataset = open(dataset_path, 'r')
28
+ dataset = dataset.readlines()
29
+
30
+ correct = 0
31
+ total = 0
32
+ exception = 0
33
+
34
+ for i, item in tqdm(enumerate(dataset), total=len(dataset)):
35
+ item = json.loads(item)
36
+ response = model.generate(prompt+item["Question"], max_tokens[task])
37
+ score = result_parser(response, item["Answer"], error_writer)
38
+
39
+ if task!='trajectory_prediction' or score is not None:
40
+ total +=1
41
+ if score is None:
42
+ exception += 1
43
+ else:
44
+ correct += score
45
+
46
+ if i%100==0:
47
+ print("Dataset: {}\nTotal: {}, correct:{}, exception:{}, accuracy:{}\n\n".format(dataset_path, total, correct, exception, correct/total))
48
+
49
+ error_writer.write("Dataset: {}\nTotal: {}, correct:{}, exception:{}, accuracy:{}\n\n".format(dataset_path, total, correct, exception, correct/total))
50
+ error_writer.flush()
51
+ error_writer.write("\n")
52
+ error_writer.close()
code/model_finetuning/__init__.py ADDED
@@ -0,0 +1,2 @@
1
+ from .my_trainer import sft
2
+ from .my_formatting_fun import formatting_func_space, formatting_func_without_space, trajectory_region_formatting
code/model_finetuning/my_formatting_fun.py ADDED
@@ -0,0 +1,25 @@
1
+ import random
2
+
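+ # formatting helpers for SFT: each joins a Question with its Answer (with a space, with no separator, or with an opening parenthesis for trajectory_region) and shuffles the resulting examples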
3
+ def formatting_func_space(example):
4
+ output_texts = []
5
+ for i in range(len(example['Question'])):
6
+ text = f"{example['Question'][i]} {example['Answer'][i]}"
7
+ output_texts.append(text)
8
+ random.shuffle(output_texts)
9
+ return output_texts
10
+
11
+ def formatting_func_without_space(example):
12
+ output_texts = []
13
+ for i in range(len(example['Question'])):
14
+ text = f"{example['Question'][i]}{example['Answer'][i]}"
15
+ output_texts.append(text)
16
+ random.shuffle(output_texts)
17
+ return output_texts
18
+
19
+ def trajectory_region_formatting(example):
20
+ output_texts = []
21
+ for i in range(len(example['Question'])):
22
+ text = f"{example['Question'][i]}({example['Answer'][i]}"
23
+ output_texts.append(text)
24
+ random.shuffle(output_texts)
25
+ return output_texts
code/model_finetuning/my_trainer.py ADDED
@@ -0,0 +1,93 @@
1
+ import torch
2
+ import pdb
3
+
4
+ from transformers import AutoTokenizer, HfArgumentParser, AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
5
+ from datasets import load_dataset
6
+ from peft import LoraConfig, PeftModel
7
+ from trl import SFTTrainer
8
+ import os
9
+ import random
10
+
11
+ def sft(ScriptArguments, model_id, formatting_func, datasets, save_path):
12
+ parser = HfArgumentParser(ScriptArguments)
13
+ script_args = parser.parse_args_into_dataclasses()[0]
14
+
15
+ quantization_config = BitsAndBytesConfig(
16
+ load_in_4bit=True,
17
+ bnb_4bit_compute_dtype=torch.float16,
18
+ bnb_4bit_quant_type="nf4"
19
+ )
20
+
21
+ # Load model
22
+ model = AutoModelForCausalLM.from_pretrained(
23
+ model_id,
24
+ quantization_config=quantization_config,
25
+ torch_dtype=torch.float32,
26
+ attn_implementation="sdpa" if not script_args.use_flash_attention_2 else "flash_attention_2"
27
+ )
28
+
29
+ # Load tokenizer
30
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
31
+
32
+ lora_config = LoraConfig(
33
+ r=script_args.lora_r,
34
+ target_modules=["q_proj", "o_proj", "k_proj", "v_proj", "gate_proj", "up_proj", "down_proj"],
35
+ bias="none",
36
+ task_type="CAUSAL_LM",
37
+ lora_alpha=script_args.lora_alpha,
38
+ lora_dropout=script_args.lora_dropout
39
+ )
40
+
41
+ train_dataset = load_dataset('json', data_files={'train': datasets['train'], 'test': datasets['valid']}, split='train')
42
+
43
+ training_arguments = TrainingArguments(
44
+ output_dir=save_path,
45
+ per_device_train_batch_size=script_args.per_device_train_batch_size,
46
+ gradient_accumulation_steps=script_args.gradient_accumulation_steps,
47
+ optim=script_args.optim,
48
+ save_steps=script_args.save_steps,
49
+ logging_steps=script_args.logging_steps,
50
+ learning_rate=script_args.learning_rate,
51
+ max_grad_norm=script_args.max_grad_norm,
52
+ max_steps=script_args.max_steps,
53
+ warmup_ratio=script_args.warmup_ratio,
54
+ lr_scheduler_type=script_args.lr_scheduler_type,
55
+ gradient_checkpointing=script_args.gradient_checkpointing,
56
+ fp16=script_args.fp16,
57
+ bf16=script_args.bf16,
58
+ )
59
+
60
+ trainer = SFTTrainer(
61
+ model=model,
62
+ args=training_arguments,
63
+ train_dataset=train_dataset,
64
+ peft_config=lora_config,
65
+ packing=False,
66
+ tokenizer=tokenizer,
67
+ max_seq_length=script_args.max_seq_length,
68
+ formatting_func=formatting_func,
69
+ )
70
+
71
+ trainer.train()
72
+
73
+ # merge
74
+ base_model = AutoModelForCausalLM.from_pretrained(
75
+ model_id,
76
+ load_in_8bit=False,
77
+ torch_dtype=torch.float32,
78
+ device_map={"": "cuda:0"},
79
+ )
80
+
81
+ lora_model = PeftModel.from_pretrained(
82
+ base_model,
83
+ os.path.join(save_path, "checkpoint-{}".format(script_args.max_steps)),
84
+ device_map={"": "cuda:0"},
85
+ torch_dtype=torch.float32,
86
+ )
87
+
88
+ model = lora_model.merge_and_unload()
89
+ lora_model.train(False)
90
+
91
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
92
+ model.save_pretrained(os.path.join(save_path, "merged_model"))
93
+ tokenizer.save_pretrained(os.path.join(save_path, "merged_model"))
code/model_inference/__init__.py ADDED
@@ -0,0 +1,14 @@
1
+ from .chatglm2 import chatglm2
2
+ from .chatglm3 import chatglm3
3
+ from .deepseek7b import deepseek7b
4
+ from .falcon7b import falcon7b
5
+ from .gemma2b import gemma2b
6
+ from .gemma7b import gemma7b
7
+ from .llama2_7b import llama2_7b
8
+ from .mistral7b import mistral7b
9
+ from .phi2 import phi2
10
+ from .qwen7b import qwen7b
11
+ from .vicuna7b import vicuna7b
12
+ from .yi6b import yi6b
13
+ from .chatgpt import chatgpt
14
+ from .gpt4o import gpt4o
code/model_inference/chatglm2.py ADDED
@@ -0,0 +1,24 @@
1
+ from modelscope import AutoTokenizer, AutoModel
2
+ import torch
3
+ import pdb
4
+
5
+ class chatglm2(object):
6
+
7
+ def __init__(self, model_path='~/.cache/modelscope/hub/ZhipuAI/chatglm2-6b', torch_dtype=torch.float32, device='cuda', max_new_tokens=5):
8
+ print("Loading model from", model_path)
9
+ self.model = AutoModel.from_pretrained(model_path, torch_dtype=torch_dtype, device_map=device, trust_remote_code=True)
10
+ self.tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
11
+ self.model_path = model_path
12
+ self.max_new_tokens = max_new_tokens
13
+
14
+ def generate(self, input_text, max_new_tokens=None):
15
+ if max_new_tokens is None:
16
+ max_new_tokens = self.max_new_tokens
17
+ inputs = self.tokenizer(input_text, return_tensors="pt").input_ids.to(self.model.device)
18
+ outputs = self.model.generate(inputs, max_length=len(inputs[0])+max_new_tokens)
19
+ return self.tokenizer.batch_decode(outputs)[0][len(input_text):]
20
+
21
+ if __name__=='__main__':
22
+ model = chatglm2()
23
+ print(model.generate("Yesterday was Thursday, today is Friday, so tomorrow is ", 10))
24
+ print(model.generate("Yesterday was 2022-01-01, today is 2022-01-02, so tomorrow is ", 10))
code/model_inference/chatglm3.py ADDED
@@ -0,0 +1,24 @@
1
+ from modelscope import AutoTokenizer, AutoModel
2
+ import torch
3
+ import pdb
4
+
5
+ class chatglm3(object):
6
+
7
+ def __init__(self, model_path='~/.cache/modelscope/hub/ZhipuAI/chatglm3-6b', torch_dtype=torch.float32, device='cuda', max_new_tokens=5):
8
+ print("Loading model from", model_path)
9
+ self.model = AutoModel.from_pretrained(model_path, torch_dtype=torch_dtype, device_map=device, trust_remote_code=True)
10
+ self.tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
11
+ self.model_path = model_path
12
+ self.max_new_tokens = max_new_tokens
13
+
14
+ def generate(self, input_text, max_new_tokens=None):
15
+ if max_new_tokens is None:
16
+ max_new_tokens = self.max_new_tokens
17
+ inputs = self.tokenizer(input_text, return_tensors="pt").input_ids.to(self.model.device)
18
+ outputs = self.model.generate(inputs, max_length=len(inputs[0])+max_new_tokens)
19
+ return self.tokenizer.batch_decode(outputs)[0][len(input_text)+11:]
20
+
21
+ if __name__=='__main__':
22
+ model = chatglm3()
23
+ print(model.generate("Yesterday was Thursday, today is Friday, so tomorrow is ", 10))
24
+ print(model.generate("Yesterday was 2022-01-01, today is 2022-01-02, so tomorrow is ", 10))
code/model_inference/chatgpt.py ADDED
@@ -0,0 +1,48 @@
1
+ from langchain.schema.runnable import RunnablePassthrough
2
+ from langchain.prompts import ChatPromptTemplate
3
+ from langchain.chat_models import ChatOpenAI
4
+ from langchain_core.output_parsers import StrOutputParser
5
+
6
+
7
+ class chatgpt(object):
8
+
9
+ def __init__(self, api_key, max_new_tokens=5):
10
+ OpenAIChatModel = ChatOpenAI(
11
+ temperature=0,
12
+ max_tokens=max_new_tokens,
13
+ openai_api_key=api_key,
14
+ model_name="gpt-3.5-turbo-1106"
15
+ )
16
+ self.api_key = api_key
17
+ self.max_new_tokens = max_new_tokens
18
+ self._init_chain(OpenAIChatModel)
19
+
20
+ def _init_chain(self, chat_model):
21
+ common_prompt = ChatPromptTemplate.from_messages(
22
+ [
23
+ "{question}"
24
+ ]
25
+ )
26
+ self.common_chain = (
27
+ {"question": RunnablePassthrough()}
28
+ | common_prompt
29
+ | chat_model
30
+ | StrOutputParser()
31
+ )
32
+
33
+ def generate(self, code: str, max_new_tokens: int):
34
+ if max_new_tokens is not None and max_new_tokens!=self.max_new_tokens:
35
+ OpenAIChatModel = ChatOpenAI(
36
+ temperature=0,
37
+ max_tokens=max_new_tokens,
38
+ openai_api_key=self.api_key,
39
+ model_name="gpt-3.5-turbo-1106"
40
+ )
41
+ self.max_new_tokens = max_new_tokens
42
+ self._init_chain(OpenAIChatModel)
43
+ return self.common_chain.invoke(code)
44
+
45
+ if __name__=='__main__':
46
+ model = chatgpt(api_key="YOUR_OPENAI_API_KEY")  # placeholder key
47
+ print(model.generate("Yesterday was Thursday, today is Friday, so tomorrow is ", 5))
48
+ print(model.generate("Yesterday was 2022-01-01, today is 2022-01-02, so tomorrow is ", 20))
code/model_inference/deepseek7b.py ADDED
@@ -0,0 +1,26 @@
1
+ from modelscope import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
2
+ import torch
3
+ import pdb
4
+
5
+ class deepseek7b(object):
6
+
7
+ def __init__(self, model_path='~/.cache/modelscope/hub/deepseek-ai/deepseek-llm-7b-base', torch_dtype=torch.float32, device='cuda', max_new_tokens=5):
8
+ print("Loading model from", model_path)
9
+ self.model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch_dtype, device_map=device)
10
+ self.tokenizer = AutoTokenizer.from_pretrained(model_path)
11
+ self.model.generation_config = GenerationConfig.from_pretrained(model_path)
12
+ self.model.generation_config.pad_token_id = self.model.generation_config.eos_token_id
13
+ self.model_path = model_path
14
+ self.max_new_tokens = max_new_tokens
15
+
16
+ def generate(self, input_text, max_new_tokens=None):
17
+ if max_new_tokens is None:
18
+ max_new_tokens = self.max_new_tokens
19
+ inputs = self.tokenizer(input_text, return_tensors="pt").input_ids.to(self.model.device)
20
+ outputs = self.model.generate(inputs, max_length=len(inputs[0])+max_new_tokens)
21
+ return self.tokenizer.batch_decode(outputs)[0][len(input_text)+21:]
22
+
23
+ if __name__=='__main__':
24
+ model = deepseek7b()
25
+ print(model.generate("Yesterday was Thursday, today is Friday, so tomorrow is ", 10))
26
+ print(model.generate("Yesterday was 2022-01-01, today is 2022-01-02, so tomorrow is ", 10))
code/model_inference/falcon7b.py ADDED
@@ -0,0 +1,24 @@
1
+ from modelscope import AutoTokenizer, Model
2
+ import torch
3
+ import pdb
4
+
5
+ class falcon7b(object):
6
+
7
+ def __init__(self, model_path='~/.cache/modelscope/hub/AI-ModelScope/falcon-7b', torch_dtype=torch.float32, device='cuda', max_new_tokens=5):
8
+ print("Loading model from", model_path)
9
+ self.model = Model.from_pretrained(model_path, torch_dtype=torch_dtype, device_map=device)
10
+ self.tokenizer = AutoTokenizer.from_pretrained(model_path)
11
+ self.model_path = model_path
12
+ self.max_new_tokens = max_new_tokens
13
+
14
+ def generate(self, input_text, max_new_tokens=None):
15
+ if max_new_tokens is None:
16
+ max_new_tokens = self.max_new_tokens
17
+ inputs = self.tokenizer(input_text, return_tensors="pt").input_ids.to(self.model.device)
18
+ outputs = self.model.generate(inputs, max_length=len(inputs[0])+max_new_tokens)
19
+ return self.tokenizer.batch_decode(outputs)[0][len(input_text):]
20
+
21
+ if __name__=='__main__':
22
+ model = falcon7b()
23
+ print(model.generate("Yesterday was Thursday, today is Friday, so tomorrow is ", 10))
24
+ print(model.generate("Yesterday was 2022-01-01, today is 2022-01-02, so tomorrow is ", 10))
code/model_inference/gemma2b.py ADDED
@@ -0,0 +1,25 @@
1
+ from transformers import AutoTokenizer, AutoModelForCausalLM
2
+ from baukit import TraceDict
3
+ import torch
4
+ import pdb
5
+
6
+ class gemma2b(object):
7
+
8
+ def __init__(self, model_path='~/.cache/modelscope/hub/AI-ModelScope/gemma-2b', torch_dtype=torch.float32, device='cuda', max_new_tokens=5):
9
+ print("Loading model from", model_path)
10
+ self.model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch_dtype, device_map=device)
11
+ self.tokenizer = AutoTokenizer.from_pretrained(model_path)
12
+ self.model_path = model_path
13
+ self.max_new_tokens = max_new_tokens
14
+
15
+ def generate(self, input_text, max_new_tokens=None):
16
+ if max_new_tokens is None:
17
+ max_new_tokens = self.max_new_tokens
18
+ inputs = self.tokenizer(input_text, return_tensors="pt").input_ids.to(self.model.device)
19
+ outputs = self.model.generate(inputs, max_length=len(inputs[0])+max_new_tokens)
20
+ return self.tokenizer.batch_decode(outputs)[0][len(input_text)+5:]
21
+
22
+ if __name__=='__main__':
23
+ model = gemma2b()
24
+ print(model.generate("Yesterday was Thursday, today is Friday, so tomorrow is ", 10))
25
+ print(model.generate("Yesterday was 2022-01-01, today is 2022-01-02, so tomorrow is ", 10))
code/model_inference/gemma7b.py ADDED
@@ -0,0 +1,25 @@
1
+ from transformers import AutoTokenizer, AutoModelForCausalLM
2
+ import torch
3
+ import pdb
4
+
5
+ class gemma7b(object):
6
+
7
+ def __init__(self, model_path='~/.cache/modelscope/hub/AI-ModelScope/gemma-7b', torch_dtype=torch.float32, device='cuda', max_new_tokens=5):
8
+ print("Loading model from", model_path)
9
+ self.model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch_dtype, device_map=device)
10
+ self.tokenizer = AutoTokenizer.from_pretrained(model_path)
11
+ self.model_path = model_path
12
+ self.max_new_tokens = max_new_tokens
13
+
14
+ def generate(self, input_text, max_new_tokens=None):
15
+ if max_new_tokens is None:
16
+ max_new_tokens = self.max_new_tokens
17
+ inputs = self.tokenizer(input_text, return_tensors="pt").input_ids.to(self.model.device)
18
+ outputs = self.model.generate(inputs, max_length=len(inputs[0])+max_new_tokens)
19
+ return self.tokenizer.batch_decode(outputs)[0][len(input_text)+5:]
20
+
21
+
22
+ if __name__=='__main__':
23
+ model = gemma7b()
24
+ print(model.generate("Yesterday was Thursday, today is Friday, so tomorrow is ", 10))
25
+ print(model.generate("Yesterday was 2022-01-01, today is 2022-01-02, so tomorrow is ", 10))
code/model_inference/gpt4o.py ADDED
@@ -0,0 +1,50 @@
1
+ from langchain.schema.runnable import RunnablePassthrough
2
+ from langchain.prompts import ChatPromptTemplate
3
+ from langchain.chat_models import ChatOpenAI
4
+ from langchain_core.output_parsers import StrOutputParser
5
+
6
+
7
+ class gpt4o(object):
8
+
9
+ def __init__(self, api_key, max_new_tokens=5):
10
+ OpenAIChatModel = ChatOpenAI(
11
+ temperature=0,
12
+ max_tokens=max_new_tokens,
13
+ openai_api_key=api_key,
14
+ model_name="gpt-4o-2024-05-13"
15
+ )
16
+ self.api_key = api_key
+ self.max_new_tokens = max_new_tokens
+ self._init_chain(OpenAIChatModel)
17
+
18
+ def _init_chain(self, chat_model):
19
+ common_prompt = ChatPromptTemplate.from_messages(
20
+ [
21
+ (
22
+ "system",
23
+ "You are a helpful text completion assistant. Please continue writing the text entered by the human."
24
+ ),
25
+ ("human", "{question}"),
26
+ ]
27
+ )
28
+ self.common_chain = (
29
+ {"question": RunnablePassthrough()}
30
+ | common_prompt
31
+ | chat_model
32
+ | StrOutputParser()
33
+ )
34
+
35
+ def generate(self, code: str, max_new_tokens: int):
36
+ if max_new_tokens is not None and max_new_tokens!=self.max_new_tokens:
37
+ OpenAIChatModel = ChatOpenAI(
38
+ temperature=0,
39
+ max_tokens=max_new_tokens,
40
+ openai_api_key=self.api_key,
41
+ model_name="gpt-4o-2024-05-13"
42
+ )
43
+ self.max_new_tokens = max_new_tokens
44
+ self._init_chain(OpenAIChatModel)
45
+ return self.common_chain.invoke(code)
46
+
47
+ if __name__=='__main__':
48
+ model = gpt4o(api_key="YOUR_OPENAI_API_KEY")  # placeholder key
49
+ print(model.generate("Yesterday was Thursday, today is Friday, so tomorrow is ", 10))
50
+ print(model.generate("Yesterday was 2022-01-01, today is 2022-01-02, so tomorrow is ", 10))
code/model_inference/llama2_7b.py ADDED
@@ -0,0 +1,25 @@
1
+ from modelscope import Model
2
+ from modelscope.models.nlp.llama2 import Llama2Tokenizer
3
+ import torch
4
+ import pdb
5
+
6
+ class llama2_7b(object):
7
+
8
+ def __init__(self, model_path='~/.cache/modelscope/hub/modelscope/Llama-2-7b-ms', torch_dtype=torch.float32, device='cuda', max_new_tokens=5):
9
+ print("Loading model from", model_path)
10
+ self.model = Model.from_pretrained(model_path, torch_dtype=torch_dtype, device_map=device)
11
+ self.tokenizer = Llama2Tokenizer.from_pretrained(model_path)
12
+ self.model_path = model_path
13
+ self.max_new_tokens = max_new_tokens
14
+
15
+ def generate(self, input_text, max_new_tokens=None):
16
+ if max_new_tokens is None:
17
+ max_new_tokens = self.max_new_tokens
18
+ inputs = self.tokenizer(input_text, return_tensors="pt").input_ids.to(self.model.device)
19
+ outputs = self.model.generate(inputs, max_length=len(inputs[0])+max_new_tokens)
20
+ return self.tokenizer.batch_decode(outputs)[0][len(input_text)+4:]
21
+
22
+ if __name__=='__main__':
23
+ model = llama2_7b()
24
+ print(model.generate("Yesterday was Thursday, today is Friday, so tomorrow is ", 10))
25
+ print(model.generate("Yesterday was 2022-01-01, today is 2022-01-02, so tomorrow is ", 10))
code/model_inference/mistral7b.py ADDED
@@ -0,0 +1,24 @@
1
+ from transformers import AutoTokenizer, AutoModelForCausalLM
2
+ import torch
3
+ import pdb
4
+
5
+ class mistral7b(object):
6
+
7
+ def __init__(self, model_path='~/.cache/modelscope/hub/AI-ModelScope/Mistral-7B', torch_dtype=torch.float32, device='cuda', max_new_tokens=5):
8
+ print("Loading model from", model_path)
9
+ self.model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch_dtype, device_map=device)
10
+ self.tokenizer = AutoTokenizer.from_pretrained(model_path)
11
+ self.model_path = model_path
12
+ self.max_new_tokens = max_new_tokens
13
+
14
+ def generate(self, input_text, max_new_tokens=None):
15
+ if max_new_tokens is None:
16
+ max_new_tokens = self.max_new_tokens
17
+ inputs = self.tokenizer(input_text, return_tensors="pt").input_ids.to(self.model.device)
18
+ outputs = self.model.generate(inputs, max_length=len(inputs[0])+max_new_tokens)
19
+ return self.tokenizer.batch_decode(outputs)[0][len(input_text)+4:]
20
+
21
+ if __name__=='__main__':
22
+ model = mistral7b()
23
+ print(model.generate("Yesterday was Thursday, today is Friday, so tomorrow is ", 10))
24
+ print(model.generate("Yesterday was 2022-01-01, today is 2022-01-02, so tomorrow is ", 10))
code/model_inference/phi2.py ADDED
@@ -0,0 +1,24 @@
1
+ from modelscope import AutoModelForCausalLM, AutoTokenizer
2
+ import torch
3
+ import pdb
4
+
5
+ class phi2(object):
6
+
7
+ def __init__(self, model_path='~/.cache/modelscope/hub/AI-ModelScope/phi-2', torch_dtype=torch.float32, device='cuda', max_new_tokens=5):
8
+ print("Loading model from", model_path)
9
+ self.model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch_dtype, device_map=device, trust_remote_code=True)
10
+ self.tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
11
+ self.model_path = model_path
12
+ self.max_new_tokens = max_new_tokens
13
+
14
+ def generate(self, input_text, max_new_tokens=None):
15
+ if max_new_tokens is None:
16
+ max_new_tokens = self.max_new_tokens
17
+ inputs = self.tokenizer(input_text, return_tensors="pt").input_ids.to(self.model.device)
18
+ outputs = self.model.generate(inputs, max_length=len(inputs[0])+max_new_tokens)
19
+ return self.tokenizer.batch_decode(outputs)[0][len(input_text):]
20
+
21
+ if __name__=='__main__':
22
+ model = phi2()
23
+ print(model.generate("Yesterday was Thursday, today is Friday, so tomorrow is ", 10))
24
+ print(model.generate("Yesterday was 2022-01-01, today is 2022-01-02, so tomorrow is ", 10))
code/model_inference/qwen7b.py ADDED
@@ -0,0 +1,25 @@
+from modelscope import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
+import torch
+import pdb
+
+class qwen7b(object):
+
+    def __init__(self, model_path='~/.cache/modelscope/hub/qwen/Qwen-7B', torch_dtype=torch.float32, device='cuda', max_new_tokens=5):
+        print("Loading model from", model_path)
+        self.model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch_dtype, device_map=device, trust_remote_code=True)
+        self.tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
+        self.model.generation_config = GenerationConfig.from_pretrained(model_path, trust_remote_code=True)
+        self.model_path = model_path
+        self.max_new_tokens = max_new_tokens
+
+    def generate(self, input_text, max_new_tokens=None):
+        if max_new_tokens is None:
+            max_new_tokens = self.max_new_tokens
+        inputs = self.tokenizer(input_text, return_tensors="pt").input_ids.to(self.model.device)
+        outputs = self.model.generate(inputs, max_length=len(inputs[0]) + max_new_tokens, max_new_tokens=None)
+        return self.tokenizer.batch_decode(outputs)[0][len(input_text):]
+
+if __name__ == '__main__':
+    model = qwen7b()
+    print(model.generate("Yesterday was Thursday, today is Friday, so tomorrow is ", 10))
+    print(model.generate("Yesterday was 2022-01-01, today is 2022-01-02, so tomorrow is ", 10))
code/model_inference/vicuna7b.py ADDED
@@ -0,0 +1,28 @@
+from modelscope import AutoModelForCausalLM, AutoTokenizer
+import torch
+import pdb
+from fastchat.model import load_model, add_model_args
+
+class vicuna7b(object):
+
+    def __init__(self, model_path='~/.cache/modelscope/hub/AI-ModelScope/vicuna-7b-v1___5', torch_dtype=torch.float32, device='cuda', max_new_tokens=5):
+        print("Loading model from", model_path)
+        self.model, self.tokenizer = load_model(model_path, device=device, load_8bit=False, dtype=torch_dtype)
+        self.model_path = model_path
+        self.max_new_tokens = max_new_tokens
+
+    def generate(self, input_text, max_new_tokens=None):
+        if max_new_tokens is None:
+            max_new_tokens = self.max_new_tokens
+        inputs = self.tokenizer(input_text, return_tensors="pt").to(self.model.device)
+        outputs = self.model.generate(**inputs, max_new_tokens=max_new_tokens)
+        if self.model.config.is_encoder_decoder:
+            outputs = outputs[0]
+        else:
+            outputs = outputs[0][len(inputs["input_ids"][0]):]
+        return self.tokenizer.decode(outputs, skip_special_tokens=True, spaces_between_special_tokens=False)
+
+if __name__ == '__main__':
+    model = vicuna7b()
+    print(model.generate("Yesterday was Thursday, today is Friday, so tomorrow is ", 10))
+    print(model.generate("Yesterday was 2022-01-01, today is 2022-01-02, so tomorrow is ", 10))
code/model_inference/yi6b.py ADDED
@@ -0,0 +1,24 @@
+from transformers import AutoTokenizer, AutoModelForCausalLM
+import torch
+import pdb
+
+class yi6b(object):
+
+    def __init__(self, model_path='~/.cache/modelscope/hub/01ai/Yi-6B', torch_dtype=torch.float32, device='cuda', max_new_tokens=5):
+        print("Loading model from", model_path)
+        self.model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch_dtype, device_map=device)
+        self.tokenizer = AutoTokenizer.from_pretrained(model_path)
+        self.model_path = model_path
+        self.max_new_tokens = max_new_tokens
+
+    def generate(self, input_text, max_new_tokens=None):
+        if max_new_tokens is None:
+            max_new_tokens = self.max_new_tokens
+        inputs = self.tokenizer(input_text, return_tensors="pt").input_ids.to(self.model.device)
+        outputs = self.model.generate(inputs, max_length=len(inputs[0]) + max_new_tokens)
+        return self.tokenizer.batch_decode(outputs)[0][len(input_text):]
+
+if __name__ == '__main__':
+    model = yi6b()
+    print(model.generate("Yesterday was Thursday, today is Friday, so tomorrow is ", 10))
+    print(model.generate("Yesterday was 2022-01-01, today is 2022-01-02, so tomorrow is ", 10))
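All of the model_inference wrappers above expose the same two entry points: a constructor taking model_path, torch_dtype, device and max_new_tokens, and a generate(input_text, max_new_tokens) method. This lets the benchmark scripts swap models without changing the calling code. Below is a minimal sketch (not part of this commit) of driving one wrapper, assuming it is run from the code/ directory; the prompt and token budget are illustrative only:

from model_inference.yi6b import yi6b  # any wrapper with the shared interface works the same way

model = yi6b(device='cuda', max_new_tokens=5)
prompt = "Yesterday was Thursday, today is Friday, so tomorrow is "
print(model.generate(prompt))        # uses the default max_new_tokens
print(model.generate(prompt, 20))    # per-call override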
code/result_parser.py ADDED
@@ -0,0 +1,96 @@
+from geopy.distance import geodesic
+import re
+
+def find_first_digit(s):
+    for char in s:
+        if char.isdigit():
+            return char
+    return None
+
+def find_option_number(response, label, error_writer):
+    predicted = find_first_digit(response)
+    if predicted is not None:
+        if predicted == str(label)[0]:
+            return 1
+        else:
+            return 0
+    else:
+        error_writer.write("### response:{}, answer:{} ###\n".format(response, label))
+        return None
+
+def trajectory_classification(response, label, error_writer):
+    pattern = r'car|bike|bicycle|pedestrian'
+    mapping = {'car': 1, 'bike': 2, 'bicycle': 2, 'pedestrian': 3}
+    match = re.search(pattern, response, flags=re.I)
+    if match:
+        predicted = match.group()
+        predicted = mapping[predicted]
+        if predicted == label:
+            return 1
+        else:
+            return 0
+    else:
+        error_writer.write("### response:{}, ### answer:{} ###\n".format(response, label))
+        return None
+
+def find_option_number_for_cot(response, label, error_writer):
+    pattern = r'\((\d+)\)'
+    match = re.search(pattern, response, flags=re.I)
+    if match:
+        predicted = match.group(1)
+        if predicted == str(label)[0]:
+            return 1
+        else:
+            return 0
+    else:
+        error_writer.write("### response:{}, ### answer:{} ###\n".format(response, label))
+        return None
+
+def yes_or_no(response, label, error_writer):
+    pattern = r'Yes|No'
+    match = re.search(pattern, response, flags=re.I)
+    if match:
+        predicted = match.group()
+        predicted = predicted.title()
+        if predicted == label:
+            return 1
+        else:
+            return 0
+    else:
+        error_writer.write("### response:{}, ### answer:{} ###\n".format(response, label))
+        return None
+
+def anomaly_detection(response, label, error_writer):
+    pattern = r'Normal|Anomalous|Anomaly|Abnormal'
+    match = re.search(pattern, response, flags=re.I)
+    if match:
+        predicted = match.group()
+        predicted = predicted.title()
+        # Normalize synonyms to the canonical label before comparison.
+        if predicted == "Abnormal" or predicted == "Anomaly":
+            predicted = "Anomalous"
+        if predicted == label:
+            return 1
+        else:
+            return 0
+    else:
+        error_writer.write("### response:{}, ### answer:{} ###\n".format(response, label))
+        return None
+
+def extract_floats(input_string):
+    floats = re.findall(r'\d+\.\d+', input_string)
+    if len(floats) >= 2:
+        return float(floats[0]), float(floats[1])
+    else:
+        return None
+
+def calculate_distance(coord1, coord2):
+    # geodesic expects (latitude, longitude); coordinates here are stored as (longitude, latitude).
+    distance = geodesic([coord2[1], coord2[0]], [coord1[1], coord1[0]]).meters
+    return distance
+
+def trajectory_prediction(response, label, error_writer):
+    # Guard against responses from which no coordinate pair can be extracted.
+    coords = extract_floats(response)
+    if coords is None:
+        error_writer.write("### response:{}, answer:{} ###\n".format(response, label))
+        return None
+    lon, lat = coords
+    distance = calculate_distance([lon, lat], label)
+    if distance >= 100000:
+        error_writer.write("### response:{}, answer:{} ###\n".format(response, label))
+        return None
+    return distance
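Each parser in result_parser.py takes a raw model response, the ground-truth label, and an open error log; it returns 1/0 for a correct/incorrect answer (or a distance in meters for trajectory_prediction) and None when nothing parseable is found, in which case the pair is written to the log. A minimal usage sketch, with an assumed log path and made-up responses:

from result_parser import yes_or_no, find_option_number

with open("parse_errors.log", "w") as error_writer:          # hypothetical log file
    hit = yes_or_no("Yes, the point lies on the trajectory.", "Yes", error_writer)   # returns 1
    opt = find_option_number("The answer is option 2.", 2, error_writer)             # returns 1
    bad = find_option_number("I cannot tell.", 2, error_writer)                       # returns None and logs the pair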
datasets/basic/accurate_calculation/direction_determination.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
datasets/basic/accurate_calculation/trajectory_trajectory.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
datasets/basic/downstream_applications/trajectory_anomaly_detection_abnormal.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
datasets/basic/downstream_applications/trajectory_anomaly_detection_normal.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
datasets/basic/downstream_applications/trajectory_classification.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
datasets/basic/downstream_applications/trajectory_prediction.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
datasets/basic/knowledge_comprehension/administrative_region_determination.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
datasets/basic/knowledge_comprehension/urban_region_function_recognition.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
datasets/basic/spatiotemporal_reasoning/point_region_2regions.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
datasets/basic/spatiotemporal_reasoning/point_region_3regions.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
datasets/basic/spatiotemporal_reasoning/point_region_4regions.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
datasets/basic/spatiotemporal_reasoning/point_region_5regions.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
datasets/basic/spatiotemporal_reasoning/point_trajectory.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
datasets/basic/spatiotemporal_reasoning/trajectory_identification_downsampling.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
datasets/basic/spatiotemporal_reasoning/trajectory_identification_spatial_offset.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
datasets/basic/spatiotemporal_reasoning/trajectory_identification_staggered_sampling.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
datasets/basic/spatiotemporal_reasoning/trajectory_identification_temporal_offset.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
datasets/basic/spatiotemporal_reasoning/trajectory_region_length10.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
datasets/basic/spatiotemporal_reasoning/trajectory_region_length2.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
datasets/basic/spatiotemporal_reasoning/trajectory_region_length4.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
datasets/basic/spatiotemporal_reasoning/trajectory_region_length6.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
datasets/basic/spatiotemporal_reasoning/trajectory_region_length8.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
datasets/cot/trajectory_classification.jsonl ADDED
@@ -0,0 +1,2 @@
+ {"Question": "Question: The following is a sequence of points sampled from a trajectory, and the meaning of each point is (longitude, latitude, timestamp): [(116.3291849,39.9543649,1214536203.0),(116.3291750,39.9542833,1214536205.0),(116.3291766,39.9542166,1214536207.0),(116.3291466,39.9541799,1214536209.0),(116.3291549,39.9541700,1214536211.0),(116.3291766,39.9541666,1214536213.0),(116.3291999,39.9541766,1214536215.0),(116.3291799,39.9542199,1214536217.0),(116.3291633,39.9542066,1214536219.0),(116.3291583,39.9541733,1214536221.0),(116.3291616,39.9541432,1214536223.0),(116.3291766,39.9541083,1214536225.0),(116.3291616,39.9540833,1214536227.0),(116.3291483,39.9540450,1214536229.0),(116.3291433,39.9540200,1214536231.0),(116.3291416,39.9540016,1214536233.0),(116.3291233,39.9539799,1214536235.0),(116.3291033,39.9539500,1214536237.0),(116.3290816,39.9539116,1214536239.0),(116.3290816,39.9538783,1214536241.0)]. The trajectory is generated by one of the following option: (1) car, (2) bike, (3) pedestrian. Please calculate the length and the average speed of the trajectory, and answer which option is most likely to generate this trajectory. Reasoning: The length of this trajectory is 72 meters and the duration is 38 seconds, thus the average speed is 1.89 m/s. Answer: The trajectory is most likely to be generated by ", "Answer": "(3) pedestrian."}
+ {"Question": "Question: The following is a sequence of points sampled from a trajectory, and the meaning of each point is (longitude, latitude, timestamp): [(116.4494420,39.9322690,1245545667.0),(116.4496340,39.9322790,1245545669.0),(116.4497530,39.9322900,1245545670.0),(116.4499910,39.9322950,1245545672.0),(116.4501360,39.9322960,1245545673.0),(116.4502800,39.9322870,1245545674.0),(116.4504180,39.9322800,1245545675.0),(116.4505630,39.9322780,1245545676.0),(116.4507100,39.9322710,1245545677.0),(116.4508500,39.9322630,1245545678.0),(116.4509820,39.9322620,1245545679.0),(116.4511610,39.9322730,1245545680.0),(116.4512990,39.9322710,1245545681.0),(116.4514370,39.9322710,1245545682.0),(116.4515870,39.9322600,1245545683.0),(116.4517670,39.9322880,1245545684.0),(116.4519020,39.9322920,1245545685.0),(116.4520820,39.9323040,1245545686.0),(116.4522550,39.9323100,1245545687.0),(116.4523950,39.9323040,1245545688.0)]. The trajectory is generated by one of the following option: (1) car, (2) bike, (3) pedestrian. Please calculate the length and the average speed of the trajectory, and answer which option is most likely to generate this trajectory. Reasoning: The length of this trajectory is 253 meters and the duration is 21 seconds, thus the average speed is 12.06 m/s. Answer: The trajectory is most likely to be generated by ", "Answer": "(1) car."}
datasets/cot/trajectory_region.jsonl ADDED
@@ -0,0 +1,2 @@
+ {"Question": "Question: There are several regions, and the boundary lines of each region are presented in the form of a list of (longitude, latitude) below: \nRegion 1: [(104.24832, 33.24466), (104.24805, 33.24404), (104.24697, 33.24377), (104.24656, 33.24404), (104.24643, 33.24431), (104.24627, 33.24457), (104.24775, 33.24563)]\nRegion 2: [(104.24465, 33.24706), (104.24526, 33.24596), (104.24563, 33.24496), (104.24509, 33.24512), (104.24482, 33.24539), (104.24428, 33.24566), (104.24374, 33.24593), (104.24320, 33.24620), (104.24307, 33.24647)]\nRegion 3: [(104.24768, 33.24576), (104.24596, 33.24509), (104.24541, 33.24602), (104.24480, 33.24712), (104.24651, 33.24781), (104.24768, 33.24576)]\nRegion 4: [(104.24458, 33.24719), (104.24276, 33.24669), (104.24231, 33.24773), (104.24244, 33.24818), (104.24335, 33.24917), (104.24335, 33.24917), (104.24364, 33.24888), (104.24423, 33.24781)]\nRegion 5: [(104.24644, 33.24793), (104.24473, 33.24724), (104.24438, 33.24787), (104.24380, 33.24892), (104.24363, 33.24947), (104.24520, 33.24999), (104.24581, 33.24903), (104.24644, 33.24793)]\nNow there is a trajectory presented in the form of a list of (longitude, latitude): [(104.24532, 33.24688), (104.24389, 33.24923)]. Note that although we only provide the coordinates of some discrete points, the trajectory is actually continuous. Please answer which regions it has passed through in chronological order: 1) [3, 5], 2) [3], 3) [3, 4, 5, 4, 5], 4) [3, 4], 5) [3, 5, 4, 3]. Answer only one option. Reasoning: The first point falls in Region 3 and the second point falls in Region 5. Answer: Option ", "Answer": "(1): [3, 5]."}
+ {"Question": "Question: There are several regions, and the boundary lines of each region are presented in the form of a list of (longitude, latitude) below: \nRegion 1: [(79.20856, 41.20335), (79.20769, 41.19898), (79.20742, 41.19925), (79.20715, 41.19979), (79.20688, 41.20114), (79.20715, 41.20222), (79.20742, 41.20249), (79.20769, 41.20271)]\nRegion 2: [(79.22657, 41.21221), (79.22598, 41.21370), (79.22934, 41.21407), (79.23072, 41.21425), (79.23216, 41.21316)]\nRegion 3: [(79.24759, 41.21481), (79.24768, 41.21483), (79.24772, 41.21490), (79.25109, 41.21534), (79.25081, 41.21435), (79.24786, 41.21381), (79.24711, 41.21600), (79.24751, 41.21485), (79.24759, 41.21481)]\nRegion 4: [(79.25162, 41.21488), (79.25139, 41.21538), (79.25151, 41.21619), (79.25243, 41.21569), (79.25216, 41.21542), (79.25189, 41.21515), (79.25162, 41.21488)]\nRegion 5: [(79.21402, 41.20769), (79.21537, 41.20938), (79.21674, 41.21208), (79.21795, 41.21265), (79.22356, 41.21348), (79.22357, 41.21170), (79.22014, 41.21087), (79.21687, 41.20944)]\nNow there is a trajectory presented in the form of a list of (longitude, latitude): [(79.22966, 41.20759), (79.22600, 41.20676), (79.22949, 41.21478), (79.23302, 41.21056)]. Note that although we only provide the coordinates of some discrete points, the trajectory is actually continuous. Please answer which regions it has passed through in chronological order: 1) [2, 3], 2) [3], 3) [2], 4) [2, 5], 5) [5]. Answer only one option. Reasoning: The line connecting the first point and the second point in the trajectory does not pass through any region. The line connecting the second point and the third point in the trajectory passes through Region 2. The line connecting the third point and the fourth point in the trajectory passes through Region 2. Answer: Option ", "Answer": "(3): [2]."}
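Each CoT record above is a single JSON object whose "Question" field already embeds the task description, the worked reasoning steps, and a trailing "Answer:" prefix, while "Answer" holds the reference completion. A minimal sketch (illustrative only; the project's actual pipeline lives in code/cot_prompting.py) of loading such a file, assuming it is run from the repository root:

import json

with open("datasets/cot/trajectory_classification.jsonl") as f:
    records = [json.loads(line) for line in f]

for rec in records:
    prompt, reference = rec["Question"], rec["Answer"]
    # prompt would be fed to a model wrapper's generate(); the returned option is compared to reference.
    print(prompt[-60:], "->", reference)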