diff --git a/DISC_2024_Assignment.pdf b/DISC_2024_Assignment.pdf new file mode 100644 index 0000000000000000000000000000000000000000..13af977703b94c1dbe203186c40501455f9f7911 Binary files /dev/null and b/DISC_2024_Assignment.pdf differ diff --git a/Dataset/Cosmosqa_train25k.jsonl b/Dataset/Cosmosqa_train25k.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..e795b32101f8e037457bd9e180a40cfe710f0e04 --- /dev/null +++ b/Dataset/Cosmosqa_train25k.jsonl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3429c45914ced38a9eceed7aac9cb4ee5227872fb92efa5106a28cb40b394f18 +size 22909738 diff --git a/Dataset/README.md b/Dataset/README.md new file mode 100644 index 0000000000000000000000000000000000000000..f99f106f9cb7a9e0fdf1762aa503ada6d6a45f41 --- /dev/null +++ b/Dataset/README.md @@ -0,0 +1,65 @@ +# Overview +This 50k-example reading comprehension instruction-tuning dataset is provided in four formats: jsonl, csv, json, and the Xtuner training format. It consists of two 25k subsets: CosmosQA25k and TriviaQA25k. + + +- 50,262 lines: [Read_Comprehension50k](Read_Comperhension50k.jsonl), released at [HuggingFace-Read_Comprehension50k](https://huggingface.co/datasets/KashiwaByte/Read_ComprehensionQA) + +- 25,262 lines: [CosmosQA25k](Cosmosqa_train25k.jsonl) + +- 25,000 lines: [TriviaQA25k](TriviaQA_train25k.jsonl) + ![alt text](image.png) + +- The [transmethod](transmethod) folder contains intermediate scripts for format conversion. +- The [cosmosqa](cosmosqa) folder contains the raw CosmosQA data and some intermediate files. +- The [triviaqa](triviaqa) folder contains the raw TriviaQA data and some intermediate files. + + + + +## CosmosQA +The CosmosQA25k data format is as follows: +``` +{ + "instruction":"As a reading comprehension expert, you will receive context, question and four answer options. 
Please understand the given Context first and then output the label of the correct option as the answer to the question based on the Context", + "input": str( + { + 'context':{context}, + 'question':{question}, + "answer0":{answer0}, + "answer1":{answer1}, + "answer2":{answer2}, + "answer3":{answer3} + }), + "output": label + } + +``` +Here is an example: + +{"instruction": "As a reading comprehension expert, you will receive context, question and four answer options. Please understand the given Context first and then output the label of the correct option as the answer to the question based on the Context", +"input": "{'context': {\"Good Old War and person L : I saw both of these bands Wednesday night , and they both blew me away . seriously . Good Old War is acoustic and makes me smile . I really can not help but be happy when I listen to them ; I think it 's the fact that they seemed so happy themselves when they played .\"}, 'question': {'In the future , will this person go to see other bands play ?'}, 'answer0': {'None of the above choices .'}, 'answer1': {'This person likes music and likes to see the show , they will see other bands play .'}, 'answer2': {'This person only likes Good Old War and Person L , no other bands .'}, 'answer3': {'Other Bands is not on tour and this person can not see them .'}}", + "output": "1"} + +## TriviaQA +The TriviaQA25k data format is as follows: + +``` +{ + "instruction":"As a reading comprehension expert, you will receive context and question. Please understand the given Context first and then output the answer of the question based on the Context", + + "input": str( + { + 'context':{context}, + 'question':{question} + }), + + "output":f'{answer}' +} + + +``` + +Here is an example: + +{"instruction": "As a reading comprehension expert, you will receive context and question. 
Please understand the given Context first and then output the answer of the question based on the Context", "input": "{'context': {'[DOC] [TLE] THEME FROM MAHOGANY - (DO YOU KNOW WHERE YOU\\'RE GOING TO ...THEME FROM MAHOGANY - (DO YOU KNOW WHERE YOU\\'RE GOING TO) - YouTube [PAR] THEME FROM MAHOGANY - (DO YOU KNOW WHERE YOU\\'RE GOING TO) [PAR] Want to watch this again later? [PAR] Sign in to add this video to a playlist. [PAR] Need to report the video? [PAR] Sign in to report inappropriate content. [PAR] Rating is available when the video has been rented. [PAR] This feature is not available right now. Please try again later. [PAR] Published on Jun 28, 2013 [PAR] The Theme from the movie \"Mahogany\" also titled \"Do You Know Where You\\'re Going To\" is a song written by Michael Masser and Gerald Giffin and was sung by Dianna Ross as the theme to the 1975 Paramount film. Her recording of the theme became a number one hit on both the U.S. Billboard Hot 100 Hits and the Easy Listening Charts. The song was nominated for an Academy Award and was performed live by Dianna Ross at the oscars show. INFRINGEMENT OF COPYRIGHT LAW IS NEVER INTENDED! 
[PAR] Category[DOC] [TLE] \"Theme From Mahogany (Do You Know Where You\\'re Going To ...DIANA ROSS LYRICS - Theme From Mahogany (Do You Know Where You\\'re Going To) [PAR] \"Theme From Mahogany (Do You Know Where You\\'re Going To)\" lyrics [PAR] DIANA ROSS LYRICS [PAR] \"Theme From Mahogany (Do You Know Where You\\'re Going To)\" [PAR] Do you know where you\\'re going to [PAR] Do you like the things that life is showing you [PAR] Where are you going to [PAR] Do you know [PAR] When you look behind you [PAR] There\\'s no open doors [PAR] What are you hoping for [PAR] Do you know [PAR] Once we were standing still in time [PAR] Chasing the fantasies [PAR] You knew how I loved you [PAR] But my spirit was free [PAR] Laughin\\' at the questions [PAR] That you once asked of me [PAR] Do you know where you\\'re going to [PAR] Do you like the things that life is showing you [PAR] Where are you going to [PAR] Do you know [PAR] Now looking back at all we\\'ve planned [PAR] We let so many dreams [PAR] Just slip through our hands [PAR] Why must we wait so long [PAR] Before we\\'ll see [PAR] To those questions can be [PAR] Do you know where you\\'re going to [PAR] Do you like the things that life is showing you [PAR] Where are you going to [PAR] Do you know[DOC] [TLE] Theme From Mahogany (Do You Know Where You\\'re Going To)Theme From Mahogany (Do You Know Where You\\'re Going To) - YouTube [PAR] Rating is available when the video has been rented. [PAR] This feature is not available right now. Please try again later. [PAR] Uploaded on Dec 29, 2008 [PAR] This movie was the first movie I ever saw after moving to the Bay Area in 1975. Having come from a small town, everything was so BIG. Intersections, malls, people rushing everywhere. The question this song asks, I asked myself over and over after such upheaval. I still don\\'t have the answer. The movie Mahogany (and this theme song) turn 40 this October. 
[PAR] Theme From Mahogany (Do You Know Where You\\'re Going To), by Diana Ross. From the movie, Mahogany in the year 1975. [PAR] Do you know where youre going to? [PAR] Do you like the things that life is showing you [PAR] Where are you going to? [PAR] Do you know...?[DOC] [TLE] Theme from Mahogany (Do You Know Where You\\'re Going To)Diana\\xa0Ross \u2013 Theme from Mahogany (Do You Know Where You\\'re Going To) Lyrics | Genius Lyrics [PAR] Theme from Mahogany (Do You Know Where You\\'re Going To) Lyrics [PAR] Do you know where you\\'re going to? [PAR] Do you like the things that life is showing you? [PAR] Where are you going to? [PAR] Do you know? [PAR] Do you get what you\\'re hoping for? [PAR] When you look behind you [PAR] There\\'s no open door [PAR] What are you hoping for? [PAR] Do you know? [PAR] Once we were'}, 'question': {\"Do You Know Where You're Going To? was the theme from which film?\"}}", "output": "['mahogany']"} + diff --git a/Dataset/Read_Comperhension50k.csv b/Dataset/Read_Comperhension50k.csv new file mode 100644 index 0000000000000000000000000000000000000000..d0423a6781568a0637ed05b78b215572cf66f83b --- /dev/null +++ b/Dataset/Read_Comperhension50k.csv @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4cb65cb7de2ba4085551d69ae7db738d0bbcc01f6b5984723cdcb6905f4d852d +size 132615197 diff --git a/Dataset/Read_Comperhension50k.json b/Dataset/Read_Comperhension50k.json new file mode 100644 index 0000000000000000000000000000000000000000..eab8eaecc8448ddc3ceaf566ea3c42ce9ad0c960 --- /dev/null +++ b/Dataset/Read_Comperhension50k.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c8b6a7d52a0be1877451894c538b36b95f23f5b412462c3269cdfb2cbb01d171 +size 135770699 diff --git a/Dataset/Read_Comperhension50k.jsonl b/Dataset/Read_Comperhension50k.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..23ce36bae6239f6dddcb688839c945c6d53de480 --- /dev/null +++ 
b/Dataset/Read_Comperhension50k.jsonl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:37aef0856d8d250cd6d1c99f39f8eac1fa71c8b845ca44d58f5cdf02bae44977 +size 135770699 diff --git a/Dataset/TriviaQA_train25k.jsonl b/Dataset/TriviaQA_train25k.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..4b3803ec25e7b16ff52e7fae6d9d954161899d4d --- /dev/null +++ b/Dataset/TriviaQA_train25k.jsonl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f50f71142db3bae44300704d28aea952b6048e79fc80710ee7eced0fb2e50e30 +size 112860766 diff --git a/Dataset/Xtuner_Read_Comperhension50k.jsonl b/Dataset/Xtuner_Read_Comperhension50k.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..f4dca40b6101f3d5c803ac97cc4138cd17e8a0b5 --- /dev/null +++ b/Dataset/Xtuner_Read_Comperhension50k.jsonl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4bb6f8104fb0e35151e48a4483fedbb5a82374ba647b220fe09c58089896428a +size 135603261 diff --git a/Dataset/cosmosqa/Cosmosqa_train.jsonl b/Dataset/cosmosqa/Cosmosqa_train.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..e795b32101f8e037457bd9e180a40cfe710f0e04 --- /dev/null +++ b/Dataset/cosmosqa/Cosmosqa_train.jsonl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3429c45914ced38a9eceed7aac9cb4ee5227872fb92efa5106a28cb40b394f18 +size 22909738 diff --git a/Dataset/cosmosqa/Prompt.md b/Dataset/cosmosqa/Prompt.md new file mode 100644 index 0000000000000000000000000000000000000000..f6b7285a92a30935dc2934c831e76d2cf59e7e71 --- /dev/null +++ b/Dataset/cosmosqa/Prompt.md @@ -0,0 +1,12 @@ +# Instruction-Tuning Prompt +As a reading comprehension expert, you will receive context, question and four options. Please understand the context given below first, and then output the label of the correct option as the answer to the question based on the context. 
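This base prompt is what the conversion scripts in transmethod/ place in the system slot of each training record. As a minimal sketch (the function name and the placeholder input string are illustrative, not part of the repo), one record can be wrapped into the Xtuner conversation format used by transmethod/xtunerformat.py like this:

```python
import json

# The instruction-tuning prompt above, copied verbatim.
PROMPT = (
    "As a reading comprehension expert, you will receive context, question and four "
    "options. Please understand the context given below first, and then output the "
    "label of the correct option as the answer to the question based on the context."
)

def to_xtuner_record(input_text: str, label: str) -> dict:
    """Wrap one (input, label) pair in the Xtuner conversation format."""
    return {"conversation": [{"system": PROMPT, "input": input_text, "output": label}]}

# One JSONL line for a (hypothetical) CosmosQA record:
record = to_xtuner_record("{'context': {'...'}, 'question': {'...'}}", "1")
line = json.dumps(record, ensure_ascii=False)
```

Each such line is then appended to the Xtuner-format JSONL file, one record per line.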
+ +# Few-shot + + +# CoT +As a reading comprehension expert, you will receive context, question and four options. Please understand the context given below first, and then output the label of the correct option as the answer to the question based on the context. You should follow the thinking steps below: +1. Read the question and understand the requirements. +2. Eliminate obviously incorrect options. +3. Skim through the passage to find information that supports the remaining options. +4. Choose the best answer based on the information in the passage. \ No newline at end of file diff --git a/Dataset/cosmosqa/README.md b/Dataset/cosmosqa/README.md new file mode 100644 index 0000000000000000000000000000000000000000..e71373aed8bf3d6438274c9153e7842b0e772ca4 --- /dev/null +++ b/Dataset/cosmosqa/README.md @@ -0,0 +1,18 @@ +## Data Format + +Each data instance consists of a paragraph (context), a question, and 4 candidate answers. The goal of each system is to determine the most plausible answer to the question by reading the paragraph. + +## Expected Output to Leaderboard +If you intend to submit to the leaderboard [here](https://leaderboard.allenai.org/cosmosqa/submissions/get-started), please follow the data format described on that page. The prediction file should contain one label per line. + +``` +1 +2 +0 +... +``` + + +## Disclaimer + +* Since the source of Cosmos QA is from personal weblogs, they may contain offensive or inappropriate content. 
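A prediction file in the leaderboard format described above (one label per line) can be produced with a few lines of Python; the label list and output filename here are placeholders, not files from this repo:

```python
# Hypothetical predicted labels, one per test question, in test-set order.
predictions = [1, 2, 0]

# Write one label per line, matching the leaderboard format shown above.
with open("predictions.lst", "w", encoding="utf-8") as f:
    for label in predictions:
        f.write(f"{label}\n")
```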
diff --git a/Dataset/cosmosqa/construct.py b/Dataset/cosmosqa/construct.py new file mode 100644 index 0000000000000000000000000000000000000000..a73c6a635e93a3218e4d772e1f377d6123d71cd6 --- /dev/null +++ b/Dataset/cosmosqa/construct.py @@ -0,0 +1,29 @@ +import csv +import json + +csv_file = 'data/train.csv' +jsonl_file = 'Construct_data/Cosmosqa_train.jsonl' + +# Build the JSONL records +messages = [] + +with open(csv_file, 'r', encoding='utf-8') as file: + reader = csv.reader(file) + next(reader) # skip the header row + + for row in reader: + if len(row) >= 8: # columns 1-7 hold context, question, the four answers and the label + context = row[1] + question = row[2] + answer0 = row[3] + answer1 = row[4] + answer2 = row[5] + answer3 = row[6] + label = row[7] + message={ "instruction":"As a reading comprehension expert, you will receive context, question and four answer options. Please understand the given Context first and then output the label of the correct option as the answer to the question based on the Context","input": str({'context':{context},'question':{question},"answer0":{answer0},"answer1":{answer1},"answer2":{answer2},"answer3":{answer3}}),"output":label} + messages.append(message) +# Save as a JSONL file +with open(jsonl_file, 'w', encoding='utf-8') as file: + for message in messages: + file.write(json.dumps(message, ensure_ascii=False) + '\n') + \ No newline at end of file diff --git a/Dataset/cosmosqa/sample_prediction.csv b/Dataset/cosmosqa/sample_prediction.csv new file mode 100644 index 0000000000000000000000000000000000000000..6109cc38f41acc4994743b0ff675e3cbfb9780be --- /dev/null +++ b/Dataset/cosmosqa/sample_prediction.csv @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b139747fbff3dba9066a20416398fb76558bd948183f51ce7ed1c61f53204aa1 +size 932788 diff --git a/Dataset/cosmosqa/test.jsonl b/Dataset/cosmosqa/test.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..eb0131d55ddbebcdf5eab2d89174c63993ba9aac --- /dev/null +++ b/Dataset/cosmosqa/test.jsonl @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:a9c66761267eb5b8e7171af27c1cd859f3dc28926df11744da52863bf4367781 +size 5617644 diff --git a/Dataset/cosmosqa/train.csv b/Dataset/cosmosqa/train.csv new file mode 100644 index 0000000000000000000000000000000000000000..d7d8c5b7b2764ff7f6c8d9f747ba4235bb5d284f --- /dev/null +++ b/Dataset/cosmosqa/train.csv @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d8d5ca1f9f6534b6530550718591af89372d976a8fc419360fab4158dee4d0b2 +size 16660449 diff --git a/Dataset/cosmosqa/valid.csv b/Dataset/cosmosqa/valid.csv new file mode 100644 index 0000000000000000000000000000000000000000..0c25ec4b56a5356d14ae6db58818d410122392a6 --- /dev/null +++ b/Dataset/cosmosqa/valid.csv @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a6a94fc1463ca82bb10f98ef68ed535405e6f5c36e044ff8e136b5c19dea63f3 +size 2128345 diff --git a/Dataset/image.png b/Dataset/image.png new file mode 100644 index 0000000000000000000000000000000000000000..65a4ec9931b2a8dbb2feccbd19bad3ee77ab9c8f --- /dev/null +++ b/Dataset/image.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4adfbff1ec7285479692e8e540f1e7d120fdb5ab61fb92197c85dabadfe2608c +size 194582 diff --git a/Dataset/transmethod/jsonl2csv.py b/Dataset/transmethod/jsonl2csv.py new file mode 100644 index 0000000000000000000000000000000000000000..03028e9e925838b63e72243a916b20886257bcf6 --- /dev/null +++ b/Dataset/transmethod/jsonl2csv.py @@ -0,0 +1,21 @@ +import json +import csv + +def convert_jsonl_to_csv(jsonl_file, csv_file): + with open(jsonl_file, 'r', encoding='utf-8') as file: + json_data = [json.loads(line) for line in file] + + if len(json_data) > 0: + fields = list(json_data[0].keys()) + + with open(csv_file, 'w', newline='', encoding='utf-8') as file: + writer = csv.DictWriter(file, fieldnames=fields) + writer.writeheader() + + for data in json_data: + writer.writerow(data) + +# Example usage +jsonl_file = 'Read_Comperhension50k.jsonl' +csv_file = 
'Read_Comperhension50k.csv' +convert_jsonl_to_csv(jsonl_file, csv_file) \ No newline at end of file diff --git a/Dataset/transmethod/jsonl2json.py b/Dataset/transmethod/jsonl2json.py new file mode 100644 index 0000000000000000000000000000000000000000..c0b75b044868dad86d8b40089cf545a2d83f12b0 --- /dev/null +++ b/Dataset/transmethod/jsonl2json.py @@ -0,0 +1,16 @@ +import json + +def convert_jsonl_to_json(jsonl_file, json_file): + json_data = [] + + with open(jsonl_file, 'r', encoding='utf-8') as file: + for line in file: + json_data.append(json.loads(line)) + + with open(json_file, 'w', encoding='utf-8') as file: + json.dump(json_data, file, ensure_ascii=False) + +# Example usage +jsonl_file = 'Xtuner_Read_Comperhension50k.jsonl' +json_file = 'Xtuner_Read_Comperhension50k.json' +convert_jsonl_to_json(jsonl_file, json_file) \ No newline at end of file diff --git a/Dataset/transmethod/merge.py b/Dataset/transmethod/merge.py new file mode 100644 index 0000000000000000000000000000000000000000..40bc76a0a58f3f2ec3160665cac62314e1ab1791 --- /dev/null +++ b/Dataset/transmethod/merge.py @@ -0,0 +1,18 @@ +import json + +input_file1 = 'Cosmosqa_train.jsonl' +input_file2 = 'TriviaQA_train25k.jsonl' +output_file = 'Read_Comperhension50k.jsonl' + +with open(input_file1, 'r', encoding='utf-8') as f_input1, \ + open(input_file2, 'r', encoding='utf-8') as f_input2, \ + open(output_file, 'w', encoding='utf-8') as f_output: + for line in f_input1: + data = json.loads(line) + json.dump(data, f_output) + f_output.write('\n') + + for line in f_input2: + data = json.loads(line) + json.dump(data, f_output) + f_output.write('\n') \ No newline at end of file diff --git a/Dataset/transmethod/xtunerformat.py b/Dataset/transmethod/xtunerformat.py new file mode 100644 index 0000000000000000000000000000000000000000..44aa158d81745ae1d9c13a12158f5e416f1e7297 --- /dev/null +++ b/Dataset/transmethod/xtunerformat.py @@ -0,0 +1,31 @@ +import csv +import json + +csv_file = 'Read_Comperhension50k.csv' +jsonl_file = 'Xtuner_Read_Comperhension50k.jsonl' + +# Build the JSONL records 
+messages = [] + +with open(csv_file, 'r', encoding='utf-8') as file: + reader = csv.reader(file) + next(reader) # skip the header row + + for row in reader: + if len(row) >= 3: + instruction = row[0] + input_text = row[1] + output = row[2] + conversation = [ + { + "system": instruction, + "input": input_text, + "output": output + } + ] + messages.append({"conversation": conversation}) +# Save as a JSONL file +with open(jsonl_file, 'w', encoding='utf-8') as file: + for message in messages: + file.write(json.dumps(message, ensure_ascii=False) + '\n') + \ No newline at end of file diff --git a/Dataset/triviaqa/README.md b/Dataset/triviaqa/README.md new file mode 100644 index 0000000000000000000000000000000000000000..0496dfab839c3f1be9ecdd50e3dfe4703232638c --- /dev/null +++ b/Dataset/triviaqa/README.md @@ -0,0 +1,34 @@ +# TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension +- This repo contains code for the paper +Mandar Joshi, Eunsol Choi, Daniel Weld, Luke Zettlemoyer. + +[TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension][triviaqa-arxiv] +In Association for Computational Linguistics (ACL) 2017, Vancouver, Canada. + +- The data can be downloaded from the [TriviaQA website][triviaqa-website]. The Apache 2.0 License applies to both the code and the data. +- Please contact [Mandar Joshi][mandar-home] (mandar90@cs.washington.edu) for suggestions and comments. + +## Requirements +#### General +- Python 3. You should be able to run the evaluation scripts using Python 2.7 if you take care of unicode in ```utils.utils.py```. +- BiDAF requires Python 3 -- check the [original repository][bidaf-orig-github] for more details. + +#### Python Packages +- tensorflow (only if you want to run BiDAF, verified on r0.11) +- nltk +- tqdm + +## Evaluation +The ```dataset file``` parameter refers to files in the ```qa``` directory of the data (e.g., ```wikipedia-dev.json```). For file format, check out the ```sample``` directory in the repo. 
+``` +python3 -m evaluation.triviaqa_evaluation --dataset_file samples/triviaqa_sample.json --prediction_file samples/sample_predictions.json +``` +## Miscellaneous +- If you have a SQuAD model and want to run on TriviaQA, please refer to ```utils.convert_to_squad_format.py``` + + + +[bidaf-orig-github]: https://github.com/allenai/bi-att-flow/ +[triviaqa-arxiv]: https://arxiv.org/abs/1705.03551 +[mandar-home]: http://homes.cs.washington.edu/~mandar90/ +[triviaqa-website]: http://nlp.cs.washington.edu/triviaqa/ diff --git a/Dataset/triviaqa/dataset/TriviaQA_train.jsonl b/Dataset/triviaqa/dataset/TriviaQA_train.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..4f2bce84a12f34dea23dd8f552bea90e30c35b78 --- /dev/null +++ b/Dataset/triviaqa/dataset/TriviaQA_train.jsonl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fa23e5b22e227963c728545352de47e96f4f1652e727a7c29d64c836729e5298 +size 276217264 diff --git a/Dataset/triviaqa/dataset/TriviaQA_train25k.jsonl b/Dataset/triviaqa/dataset/TriviaQA_train25k.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..4b3803ec25e7b16ff52e7fae6d9d954161899d4d --- /dev/null +++ b/Dataset/triviaqa/dataset/TriviaQA_train25k.jsonl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f50f71142db3bae44300704d28aea952b6048e79fc80710ee7eced0fb2e50e30 +size 112860766 diff --git a/Dataset/triviaqa/dataset/TriviaQA_val.jsonl b/Dataset/triviaqa/dataset/TriviaQA_val.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..4fb1ec933a9dd51d23c741fcd3190d6ef1c486d5 --- /dev/null +++ b/Dataset/triviaqa/dataset/TriviaQA_val.jsonl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ee69d07946f4aedc148a4985c3ddce929f852014f3184da3d21c3b755fdedfab +size 34823273 diff --git a/Dataset/triviaqa/dataset/construct.py b/Dataset/triviaqa/dataset/construct.py new file mode 100644 index 
0000000000000000000000000000000000000000..1d2136b16db0380c3e0db2c4a371210a8d07e2e0 --- /dev/null +++ b/Dataset/triviaqa/dataset/construct.py @@ -0,0 +1,34 @@ + +import re +import csv +import json +import pandas as pd +# df = pd.read_parquet('train-00000-of-00001-288b64d2a0003a2f.parquet') +# Save the dataframe as a CSV file, setting the escapechar parameter +# df.to_csv('train_triviaqa.csv',encoding="utf-8",escapechar='\\') +csv_file = 'train_triviaqa.csv' +jsonl_file = 'TriviaQA_train.jsonl' + + + + +# Build the JSONL records +messages = [] + +with open(csv_file, 'r', encoding='utf-8') as file: + reader = csv.reader((line.replace('\0', '') for line in file)) # strip NUL bytes that break csv + next(reader) # skip the header row + + for row in reader: + if len(row) >= 4: + context = row[1] + question = row[2] + answer = row[3] + message={ "instruction":"As a reading comprehension expert, you will receive context and question. Please understand the given Context first and then output the answer of the question based on the Context","input": str({'context':{context},'question':{question},}),"output":f'{answer}'} + messages.append(message) +# Save as a JSONL file +with open(jsonl_file, 'w', encoding='utf-8') as file: + for message in messages: + file.write(json.dumps(message, ensure_ascii=False) + '\n') + \ No newline at end of file diff --git a/Dataset/triviaqa/dataset/extract.py b/Dataset/triviaqa/dataset/extract.py new file mode 100644 index 0000000000000000000000000000000000000000..747af1ef96712aa5d0279a931e45a8b800483217 --- /dev/null +++ b/Dataset/triviaqa/dataset/extract.py @@ -0,0 +1,13 @@ +import json + +input_file = 'TriviaQA_train.jsonl' +output_file = 'TriviaQA_train25k.jsonl' + +with open(input_file, 'r', encoding='utf-8') as f_input: + with open(output_file, 'w', encoding='utf-8') as f_output: + for idx, line in enumerate(f_input): + if idx >= 25000: + break + data = json.loads(line) + json.dump(data, f_output) + f_output.write('\n') \ No newline at end of file diff --git 
a/Dataset/triviaqa/dataset/train-00000-of-00001-288b64d2a0003a2f.parquet b/Dataset/triviaqa/dataset/train-00000-of-00001-288b64d2a0003a2f.parquet new file mode 100644 index 0000000000000000000000000000000000000000..5404fb1b9fb184e5d80900f6040fbdb2ceede3b0 --- /dev/null +++ b/Dataset/triviaqa/dataset/train-00000-of-00001-288b64d2a0003a2f.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b65dc9198c14bac68d4a0227f29a4c0f55e5e2e8c22a21fd1df5130105476e3b +size 158776377 diff --git a/Dataset/triviaqa/dataset/train_triviaqa.csv b/Dataset/triviaqa/dataset/train_triviaqa.csv new file mode 100644 index 0000000000000000000000000000000000000000..a070e37fb26c42e87987f535674136d8adb780db --- /dev/null +++ b/Dataset/triviaqa/dataset/train_triviaqa.csv @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6e101b802ba91774ba99054788758d77a1b8dd23e89ec7a2fd4bd39fac97cc72 +size 274191880 diff --git a/Dataset/triviaqa/dataset/val_triviaqa.csv b/Dataset/triviaqa/dataset/val_triviaqa.csv new file mode 100644 index 0000000000000000000000000000000000000000..465fe491205f156835470378834f9419582ccef4 --- /dev/null +++ b/Dataset/triviaqa/dataset/val_triviaqa.csv @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dfca5bc6dfa02e183cbda07f4e076a8b9714c4c2270145c3c7824cfa8830a547 +size 34546128 diff --git a/Dataset/triviaqa/dataset/validation-00000-of-00001-e0b96d7de24c1bf5.parquet b/Dataset/triviaqa/dataset/validation-00000-of-00001-e0b96d7de24c1bf5.parquet new file mode 100644 index 0000000000000000000000000000000000000000..c51482c1178bf240a3cb6f194fcb4e162f6e5c2f --- /dev/null +++ b/Dataset/triviaqa/dataset/validation-00000-of-00001-e0b96d7de24c1bf5.parquet @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a36451011d87c681e9600a13f57c909431920fb4cbba2ad6fdd6ef5166348c4e +size 20063789 diff --git a/Dataset/triviaqa/samples/sample_predictions.json b/Dataset/triviaqa/samples/sample_predictions.json new file 
mode 100644 index 0000000000000000000000000000000000000000..2bab166e7f415f7a6e48e4fe78a9a6c576e8f636 --- /dev/null +++ b/Dataset/triviaqa/samples/sample_predictions.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a7125f623eb065f5b1efd24c6297898c691718fbdc140d76a5caf591848183db +size 56 diff --git a/Dataset/triviaqa/samples/triviaqa_sample.json b/Dataset/triviaqa/samples/triviaqa_sample.json new file mode 100644 index 0000000000000000000000000000000000000000..d6c0923178594901ea7483f98b99527f6dee243a --- /dev/null +++ b/Dataset/triviaqa/samples/triviaqa_sample.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b50ec7222f1801e33870976dbcd179a56519a7d4af0f149a15ee4fff1e869cdd +size 1650 diff --git a/Experiment/CoT.md b/Experiment/CoT.md new file mode 100644 index 0000000000000000000000000000000000000000..04f7089dcf068deea1f5f7615d012301ed588904 --- /dev/null +++ b/Experiment/CoT.md @@ -0,0 +1,16 @@ + + +``` + {"role": "system", "content": """As a reading comprehension expert, you will receive context, question and four options. Please understand the context given below first, and then output the label of the correct option as the answer to the question based on the context.You Should Follow thinking steps below: + 1.Read the question and understand the requirements. + 2.Eliminate obviously incorrect options. + 3.Skim through the passage to find information that supports the remaining options. + 4.Choose the best answer based on the information in the passage."""}, + {"role": "user", "content": "{'context': {\"Good Old War and person L : I saw both of these bands Wednesday night , and they both blew me away . seriously . Good Old War is acoustic and makes me smile . 
I really can not help but be happy when I listen to them ; I think it 's the fact that they seemed so happy themselves when they played .\"}, 'question': {'In the future , will this person go to see other bands play ?'}, 'answer0': {'None of the above choices .'}, 'answer1': {'This person likes music and likes to see the show , they will see other bands play .'}, 'answer2': {'This person only likes Good Old War and Person L , no other bands .'}, 'answer3': {'Other Bands is not on tour and this person can not see them .'}}"}, + {"role": "assistant", "content": "1"}, + {"role": "user", "content": "{'context': {""A hot girl appears who flirts with me to convince me to help her dig the deeper well ( Whoa ... even my dreams reference Buffy . Whedon is my God ) . I ' m in a narrow tunnel helping to pull a horse attached to a plow of some sort , while the brother of the hot chick is riding on the back of the plow . By the time we 're done I can tell it 's becoming night , though oddly when I get out of the tunnel it 's still light outside .""}, 'question': {'Why do even my dreams reference Buffy ?'}, 'answer0': {""I like Joss Whedon 's work a little bit .""}, 'answer1': {'I think Buffy the Vampire Slayer is an alright TV show .'}, 'answer2': {'I love the TV show Buffy the Vampire Slayer .'}, 'answer3': {'None of the above choices .'}}"}, + {"role": "assistant", "content": "2"}, + {"role": "user", "content": "{'context': {""A while later I tried the car again and lo and behold it does n't start at all , so a tow truck was called , and I chatted with Ellen ( who was n't in class after all ) while I waited . My dad came and got me from the body shop . The End . 
( Where the hell did my freaking cow go ?""}, 'question': {""What is n't working properly ?""}, 'answer0': {'None of the above choices .'}, 'answer1': {'The cow'}, 'answer2': {'The tow truck'}, 'answer3': {'The body shop'}}"}, + {"role": "assistant", "content": "0"}, + +``` \ No newline at end of file diff --git a/Experiment/ComsmosQA/CoT.md b/Experiment/ComsmosQA/CoT.md new file mode 100644 index 0000000000000000000000000000000000000000..616682c67428c7dc92933b30e05f4b353a0f57d0 --- /dev/null +++ b/Experiment/ComsmosQA/CoT.md @@ -0,0 +1,11 @@ + {"role": "system", "content": """As a reading comprehension expert, you will receive context, question and four options. Please understand the context given below first, and then output the label of the correct option as the answer to the question based on the context.You Should Follow thinking steps below: + 1.Read the question and understand the requirements. + 2.Eliminate obviously incorrect options. + 3.Skim through the passage to find information that supports the remaining options. + 4.Choose the best answer based on the information in the passage."""}, + {"role": "user", "content": "{'context': {\"Good Old War and person L : I saw both of these bands Wednesday night , and they both blew me away . seriously . Good Old War is acoustic and makes me smile . 
I really can not help but be happy when I listen to them ; I think it 's the fact that they seemed so happy themselves when they played .\"}, 'question': {'In the future , will this person go to see other bands play ?'}, 'answer0': {'None of the above choices .'}, 'answer1': {'This person likes music and likes to see the show , they will see other bands play .'}, 'answer2': {'This person only likes Good Old War and Person L , no other bands .'}, 'answer3': {'Other Bands is not on tour and this person can not see them .'}}"}, + {"role": "assistant", "content": "1"}, + {"role": "user", "content": "{'context': {""A hot girl appears who flirts with me to convince me to help her dig the deeper well ( Whoa ... even my dreams reference Buffy . Whedon is my God ) . I ' m in a narrow tunnel helping to pull a horse attached to a plow of some sort , while the brother of the hot chick is riding on the back of the plow . By the time we 're done I can tell it 's becoming night , though oddly when I get out of the tunnel it 's still light outside .""}, 'question': {'Why do even my dreams reference Buffy ?'}, 'answer0': {""I like Joss Whedon 's work a little bit .""}, 'answer1': {'I think Buffy the Vampire Slayer is an alright TV show .'}, 'answer2': {'I love the TV show Buffy the Vampire Slayer .'}, 'answer3': {'None of the above choices .'}}"}, + {"role": "assistant", "content": "2"}, + {"role": "user", "content": "{'context': {""A while later I tried the car again and lo and behold it does n't start at all , so a tow truck was called , and I chatted with Ellen ( who was n't in class after all ) while I waited . My dad came and got me from the body shop . The End . 
( Where the hell did my freaking cow go ?""}, 'question': {""What is n't working properly ?""}, 'answer0': {'None of the above choices .'}, 'answer1': {'The cow'}, 'answer2': {'The tow truck'}, 'answer3': {'The body shop'}}"}, + {"role": "assistant", "content": "0"}, \ No newline at end of file diff --git a/Experiment/ComsmosQA/GPT-fewshot.md b/Experiment/ComsmosQA/GPT-fewshot.md new file mode 100644 index 0000000000000000000000000000000000000000..5bd14a71fd4e65336ef8a0983b238a9c3ffdea36 --- /dev/null +++ b/Experiment/ComsmosQA/GPT-fewshot.md @@ -0,0 +1,8 @@ + {"role": "system", "content": "As a reading comprehension expert, you will receive context, question and four answer options. Please understand the given Context first and then output the label of the correct option as the answer to the question based on the Context"}, + {"role": "user", "content": "{'context': {\"Good Old War and person L : I saw both of these bands Wednesday night , and they both blew me away . seriously . Good Old War is acoustic and makes me smile . I really can not help but be happy when I listen to them ; I think it 's the fact that they seemed so happy themselves when they played .\"}, 'question': {'In the future , will this person go to see other bands play ?'}, 'answer0': {'None of the above choices .'}, 'answer1': {'This person likes music and likes to see the show , they will see other bands play .'}, 'answer2': {'This person only likes Good Old War and Person L , no other bands .'}, 'answer3': {'Other Bands is not on tour and this person can not see them .'}}"}, + {"role": "assistant", "content": "1"}, + {"role": "user", "content": "{'context': {""A hot girl appears who flirts with me to convince me to help her dig the deeper well ( Whoa ... even my dreams reference Buffy . Whedon is my God ) . I ' m in a narrow tunnel helping to pull a horse attached to a plow of some sort , while the brother of the hot chick is riding on the back of the plow . 
By the time we 're done I can tell it 's becoming night , though oddly when I get out of the tunnel it 's still light outside .""}, 'question': {'Why do even my dreams reference Buffy ?'}, 'answer0': {""I like Joss Whedon 's work a little bit .""}, 'answer1': {'I think Buffy the Vampire Slayer is an alright TV show .'}, 'answer2': {'I love the TV show Buffy the Vampire Slayer .'}, 'answer3': {'None of the above choices .'}}"}, + {"role": "assistant", "content": "2"}, + {"role": "user", "content": "{'context': {""A while later I tried the car again and lo and behold it does n't start at all , so a tow truck was called , and I chatted with Ellen ( who was n't in class after all ) while I waited . My dad came and got me from the body shop . The End . ( Where the hell did my freaking cow go ?""}, 'question': {""What is n't working properly ?""}, 'answer0': {'None of the above choices .'}, 'answer1': {'The cow'}, 'answer2': {'The tow truck'}, 'answer3': {'The body shop'}}"}, + {"role": "assistant", "content": "0"}, + {"role": "user", "content": str({'context':{context},'question':{question},"answer0":{answer0},"answer1":{answer1},"answer2":{answer2},"answer3":{answer3}})} \ No newline at end of file diff --git a/Experiment/ComsmosQA/result/MiniCPM2B-CoT-fewshot_answers.csv b/Experiment/ComsmosQA/result/MiniCPM2B-CoT-fewshot_answers.csv new file mode 100644 index 0000000000000000000000000000000000000000..669819ab62a826e9577bf6555dfc4e09b729dd42 --- /dev/null +++ b/Experiment/ComsmosQA/result/MiniCPM2B-CoT-fewshot_answers.csv @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a19b54457c8075d84a78220f66c3da5177b917d11664496901026e599fb60414 +size 20889 diff --git a/Experiment/ComsmosQA/result/MiniCPM2B-CoT_answers.csv b/Experiment/ComsmosQA/result/MiniCPM2B-CoT_answers.csv new file mode 100644 index 0000000000000000000000000000000000000000..33cbcfd927a9e012188805101cc0aae59641ce10 --- /dev/null +++ 
b/Experiment/ComsmosQA/result/MiniCPM2B-CoT_answers.csv @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7607ff80edb26233fc66281771aef16811060c3062ea54e8222971d210813eda +size 20889 diff --git a/Experiment/ComsmosQA/result/MiniCPM2B-Nlora_fewshot_answers.csv b/Experiment/ComsmosQA/result/MiniCPM2B-Nlora_fewshot_answers.csv new file mode 100644 index 0000000000000000000000000000000000000000..97a2117d86d84efe56685fbaea2e6300546cc476 --- /dev/null +++ b/Experiment/ComsmosQA/result/MiniCPM2B-Nlora_fewshot_answers.csv @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ddf5604f5119e771daceb8593d9b72440cc03d31f3a215f7635ab693346bc286 +size 20889 diff --git a/Experiment/ComsmosQA/result/MiniCPM2B-ZH-_answers.csv b/Experiment/ComsmosQA/result/MiniCPM2B-ZH-_answers.csv new file mode 100644 index 0000000000000000000000000000000000000000..e14c4843f08a8ae449319ae6705628ef3bcec6d2 --- /dev/null +++ b/Experiment/ComsmosQA/result/MiniCPM2B-ZH-_answers.csv @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:faf2db8380d7319eadb96d3072849636bf6a7c9aa865d0afe443ecd46bc4cbc2 +size 20889 diff --git a/Experiment/ComsmosQA/result/MiniCPM2B-fewshot_answers.csv b/Experiment/ComsmosQA/result/MiniCPM2B-fewshot_answers.csv new file mode 100644 index 0000000000000000000000000000000000000000..ef09b80fe8b1821885edcd2611f70ce878a64eda --- /dev/null +++ b/Experiment/ComsmosQA/result/MiniCPM2B-fewshot_answers.csv @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b0d47ecb984a8d1bfde5199e2faa808f6f59c94a26a9c7723235bdc9fac38567 +size 20889 diff --git a/Experiment/ComsmosQA/result/MiniCPM_answer.csv b/Experiment/ComsmosQA/result/MiniCPM_answer.csv new file mode 100644 index 0000000000000000000000000000000000000000..3099a70b3108a89fb07f14d505d0392e959da9da --- /dev/null +++ b/Experiment/ComsmosQA/result/MiniCPM_answer.csv @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:34c4983db0749554253bbafb106313f25510b5e571accc0281fbe335bdf57a8e +size 20890 diff --git a/Experiment/ComsmosQA/result/chatglm3-CoT_answers.csv b/Experiment/ComsmosQA/result/chatglm3-CoT_answers.csv new file mode 100644 index 0000000000000000000000000000000000000000..ac997f58268ea29f253164461c6f4051f5cd5385 --- /dev/null +++ b/Experiment/ComsmosQA/result/chatglm3-CoT_answers.csv @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:400b15eb3c72d682b6b247e844e91840d73a3d8ababf3c93e2d00aa201a207bc +size 20889 diff --git a/Experiment/ComsmosQA/result/chatglm3_answers.csv b/Experiment/ComsmosQA/result/chatglm3_answers.csv new file mode 100644 index 0000000000000000000000000000000000000000..06105d59bb6f241fad1d87dc1af67de2ae606756 --- /dev/null +++ b/Experiment/ComsmosQA/result/chatglm3_answers.csv @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fcbbca260f48dc76dc095faa7cb18fa4c95cfe193a8cb6fc52c01a8ef4a3168a +size 20889 diff --git a/Experiment/ComsmosQA/result/opeai_3.5_answers.csv b/Experiment/ComsmosQA/result/opeai_3.5_answers.csv new file mode 100644 index 0000000000000000000000000000000000000000..790d2b8eea9edcc35ab045bb27c3fb4ef187283c --- /dev/null +++ b/Experiment/ComsmosQA/result/opeai_3.5_answers.csv @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:90d45841eb1c5d8c5b79b7fe5c17c248ff1020a156bc01f58ef752dabc59781e +size 20889 diff --git a/Experiment/ComsmosQA/test_Chatglm3_LMdeploy.py b/Experiment/ComsmosQA/test_Chatglm3_LMdeploy.py new file mode 100644 index 0000000000000000000000000000000000000000..eaba46e0cf98271cd2292a75a97a8a8db8933c1f --- /dev/null +++ b/Experiment/ComsmosQA/test_Chatglm3_LMdeploy.py @@ -0,0 +1,59 @@ + +from typing import Any, List, Optional +from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig +from peft import PeftModel +import torch +import json +import csv +from lmdeploy import pipeline, GenerationConfig, 
TurbomindEngineConfig,ChatTemplateConfig + +backend_config = TurbomindEngineConfig(tp=2) +templateconfig = ChatTemplateConfig(model_name = "chatglm3-6b",system = "As a reading comprehension expert, you will receive context, question and four options. Please understand the context given below first, and then output the label of the correct option as the answer to the question based on the context.") +gen_config = GenerationConfig(top_p=0.8, + top_k=40, + temperature=0.8, + max_new_tokens=1024) +pipe = pipeline(model_path='/root/lanyun-tmp/ZhipuAI/chatglm3-6b', + model_name="chatglm3-6b", + backend_config=backend_config, + chat_template_config = templateconfig ) + + + + + +# Read the JSONL test set +filename = '/root/lanyun-tmp/Dataset/test.jsonl' +data = [] +with open(filename, 'r') as f: + for line in f: + item = json.loads(line) + data.append(item) + + + + files = 'chatglm3_answers.csv' + with open(files, 'w', newline='') as csvfile: + writer = csv.writer(csvfile) + # Extract the fields of each item + for item in data: + context = item['context'] + question = item['question'] + answer0 = item['answer0'] + answer1 = item['answer1'] + answer2 = item['answer2'] + answer3 = item['answer3'] + + + prompts = [[{ + 'role': 'user', + 'content': str({'context':{context},'question':{question},"answer0":{answer0},"answer1":{answer1},"answer2":{answer2},"answer3":{answer3}}) + }], ] + response = pipe(prompts, + gen_config=gen_config, + ) + print(response) + answer = response[0].text # keep only the generated text, not the whole Response object + writer.writerow([answer]) + + \ No newline at end of file diff --git a/Experiment/ComsmosQA/test_MiniCPM2.py b/Experiment/ComsmosQA/test_MiniCPM2.py new file mode 100644 index 0000000000000000000000000000000000000000..f88d14e5c4e87f54f700f150faf4177761e02979 --- /dev/null +++ b/Experiment/ComsmosQA/test_MiniCPM2.py @@ -0,0 +1,55 @@ +import torch +from transformers import AutoModelForCausalLM, AutoTokenizer +from transformers.generation.utils import GenerationConfig +from peft import PeftModel, PeftConfig +import json +import csv + 
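The evaluation scripts in this repo write the raw model response straight to the answers CSV, and the GPT scripts fall back to a random label when the output is not a clean digit. That post-processing could be factored into a small shared helper; the following is only a sketch (the function name is illustrative and not part of the original scripts):

```python
import random
import re


def extract_label(response_text, n_options=4, fallback_random=True):
    """Pull the first option label (e.g. '0'..'3') out of a model response.

    Mirrors the fallback used in the GPT scripts: when no label is found,
    optionally return a random one so the answers CSV has no empty rows.
    """
    match = re.search(r"[0-%d]" % (n_options - 1), response_text)
    if match:
        return match.group(0)
    if fallback_random:
        return str(random.choice(range(n_options)))
    return ""
```
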
+lora_path = "/root/lanyun-tmp/output/MiniCPM/checkpoint-9000/" +model_path = '/root/lanyun-tmp/OpenBMB/MiniCPM-2B-sft-fp32' +model = AutoModelForCausalLM.from_pretrained( + model_path, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True +) +model.generation_config = GenerationConfig.from_pretrained(model_path) +tokenizer = AutoTokenizer.from_pretrained( + model_path, use_fast=False, trust_remote_code=True, +) +model = PeftModel.from_pretrained(model, lora_path +) + + + + +# Read the JSONL test set +filename = '/root/lanyun-tmp/Dataset/test.jsonl' +data = [] +with open(filename, 'r') as f: + for line in f: + item = json.loads(line) + data.append(item) + + + + files = 'MiniCPM2B-ZH-_answers.csv' + with open(files, 'w', newline='') as csvfile: + writer = csv.writer(csvfile) + # Extract the fields of each item + for item in data: + context = item['context'] + question = item['question'] + answer0 = item['answer0'] + answer1 = item['answer1'] + answer2 = item['answer2'] + answer3 = item['answer3'] + + messages = str([ + {"role": "system", "content": "作为阅读理解专家,你将收到上下文,问题和四个选项,请先理解下面给出的上下文,然后根据上下文输出正确选项的标签作为问题的答案"}, + {"role": "user", "content": str({'context':{context},'question':{question},"answer0":{answer0},"answer1":{answer1},"answer2":{answer2},"answer3":{answer3}})}, + ]) + response = model.chat(tokenizer, messages) + + answer = response[0][0] + print(answer) + writer.writerow(answer) + + \ No newline at end of file diff --git a/Experiment/ComsmosQA/test_MiniCPM3.py b/Experiment/ComsmosQA/test_MiniCPM3.py new file mode 100644 index 0000000000000000000000000000000000000000..0caa42ba064e2963b28b206d26238cbf2231c994 --- /dev/null +++ b/Experiment/ComsmosQA/test_MiniCPM3.py @@ -0,0 +1,59 @@ +import torch +from transformers import AutoModelForCausalLM, AutoTokenizer +from transformers.generation.utils import GenerationConfig +from peft import PeftModel, PeftConfig +import json +import csv + + +model_path = '/root/lanyun-tmp/OpenBMB/MiniCPM-2B-sft-fp32' +model = 
AutoModelForCausalLM.from_pretrained( + model_path, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True +) +model.generation_config = GenerationConfig.from_pretrained(model_path) +tokenizer = AutoTokenizer.from_pretrained( + model_path, use_fast=False, trust_remote_code=True, +) + + + + +# Read the JSONL test set +filename = '/root/lanyun-tmp/Dataset/test.jsonl' +data = [] +with open(filename, 'r') as f: + for line in f: + item = json.loads(line) + data.append(item) + + + + files = 'MiniCPM2B-Nlora_fewshot_answers.csv' + with open(files, 'w', newline='') as csvfile: + writer = csv.writer(csvfile) + # Extract the fields of each item + for item in data: + context = item['context'] + question = item['question'] + answer0 = item['answer0'] + answer1 = item['answer1'] + answer2 = item['answer2'] + answer3 = item['answer3'] + + messages = str([ + {"role": "system", "content": "As a reading comprehension expert, you will receive context, question and four answer options. Please understand the given Context first and then output the label of the correct option as the answer to the question based on the Context"}, + {"role": "user", "content": "{'context': {\"Good Old War and person L : I saw both of these bands Wednesday night , and they both blew me away . seriously . Good Old War is acoustic and makes me smile . 
I really can not help but be happy when I listen to them ; I think it 's the fact that they seemed so happy themselves when they played .\"}, 'question': {'In the future , will this person go to see other bands play ?'}, 'answer0': {'None of the above choices .'}, 'answer1': {'This person likes music and likes to see the show , they will see other bands play .'}, 'answer2': {'This person only likes Good Old War and Person L , no other bands .'}, 'answer3': {'Other Bands is not on tour and this person can not see them .'}}"}, + {"role": "assistant", "content": "1"}, + {"role": "user", "content": "{'context': {""A hot girl appears who flirts with me to convince me to help her dig the deeper well ( Whoa ... even my dreams reference Buffy . Whedon is my God ) . I ' m in a narrow tunnel helping to pull a horse attached to a plow of some sort , while the brother of the hot chick is riding on the back of the plow . By the time we 're done I can tell it 's becoming night , though oddly when I get out of the tunnel it 's still light outside .""}, 'question': {'Why do even my dreams reference Buffy ?'}, 'answer0': {""I like Joss Whedon 's work a little bit .""}, 'answer1': {'I think Buffy the Vampire Slayer is an alright TV show .'}, 'answer2': {'I love the TV show Buffy the Vampire Slayer .'}, 'answer3': {'None of the above choices .'}}"}, + {"role": "assistant", "content": "2"}, + {"role": "user", "content": "{'context': {""A while later I tried the car again and lo and behold it does n't start at all , so a tow truck was called , and I chatted with Ellen ( who was n't in class after all ) while I waited . My dad came and got me from the body shop . The End . 
( Where the hell did my freaking cow go ?""}, 'question': {""What is n't working properly ?""}, 'answer0': {'None of the above choices .'}, 'answer1': {'The cow'}, 'answer2': {'The tow truck'}, 'answer3': {'The body shop'}}"}, + {"role": "assistant", "content": "0"}, + {"role": "user", "content": str({'context':{context},'question':{question},"answer0":{answer0},"answer1":{answer1},"answer2":{answer2},"answer3":{answer3}})} + ]) + response = model.chat(tokenizer, messages) + + answer = response[0][0] + print(answer) + writer.writerow(answer) + + \ No newline at end of file diff --git a/Experiment/ComsmosQA/test_MiniCPM_Langchain.py b/Experiment/ComsmosQA/test_MiniCPM_Langchain.py new file mode 100644 index 0000000000000000000000000000000000000000..ef3fcc801b9444ffc49743afcfa963b92a3529af --- /dev/null +++ b/Experiment/ComsmosQA/test_MiniCPM_Langchain.py @@ -0,0 +1,67 @@ +from langchain.llms.base import LLM +from typing import Any, List, Optional +from langchain.callbacks.manager import CallbackManagerForLLMRun +from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig +from peft import PeftModel +import torch +import jsonlines +import json +import csv +class MiniCPM_LLM(LLM): + # Custom LLM class wrapping the local MiniCPM model + tokenizer : AutoTokenizer = None + model: AutoModelForCausalLM = None + + def __init__(self, model_path :str): + # model_path: path to the local MiniCPM model + # Initialize the model from local files + super().__init__() + print("Loading the model from local files...") + self.tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) + self.model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True,torch_dtype=torch.bfloat16, device_map="auto") + self.model = PeftModel.from_pretrained(model = self.model, model_id="/root/lanyun-tmp/output/MiniCPM/checkpoint-9000/") + print("Finished loading the local model") + + def _call(self, prompt : str, stop: Optional[List[str]] = None, + run_manager: Optional[CallbackManagerForLLMRun] = None, + **kwargs: Any): + # Get the model's output + responds, history = 
self.model.chat(self.tokenizer, prompt, temperature=0, top_p=0.8, repetition_penalty=1.02) + return responds + + @property + def _llm_type(self) -> str: + return "MiniCPM_LLM" + + +llm = MiniCPM_LLM('/root/lanyun-tmp/OpenBMB/MiniCPM-2B-sft-fp32') + + +# Read the JSONL test set +filename = '/root/lanyun-tmp/Dataset/test.jsonl' +data = [] +with open(filename, 'r') as f: + for line in f: + item = json.loads(line) + data.append(item) + + + + files = 'MiniCPM2B_answers.csv' + with open(files, 'w', newline='') as csvfile: + writer = csv.writer(csvfile) + # Extract the fields of each item + for item in data: + context = item['context'] + question = item['question'] + answer0 = item['answer0'] + answer1 = item['answer1'] + answer2 = item['answer2'] + answer3 = item['answer3'] + message = "As a reading comprehension expert, you will receive context, question and four options. Please understand the context given below first, and then output the label of the correct option as the answer to the question based on the context"+str({'context':{context},'question':{question},"answer0":{answer0},"answer1":{answer1},"answer2":{answer2},"answer3":{answer3}}) + + + answer=llm._call(message) + writer.writerow(answer) + + \ No newline at end of file diff --git a/Experiment/ComsmosQA/test_OpenAI.py b/Experiment/ComsmosQA/test_OpenAI.py new file mode 100644 index 0000000000000000000000000000000000000000..35c5d8f72053f82616f4dfb16092dc5003c6a0f4 --- /dev/null +++ b/Experiment/ComsmosQA/test_OpenAI.py @@ -0,0 +1,55 @@ +import json +import csv +from openai import OpenAI +import random +count = 0 +client = OpenAI() + +# Read the JSONL test set +filename = 'C:/Users/94427/kashiwa/DISC-Assignment/cosmosqa/CosmosQA_data/test.jsonl' +data = [] +with open(filename, 'r') as f: + for line in f: + item = json.loads(line) + data.append(item) + + + + files = 'GPT4_answers.csv' + with open(files, 'w', newline='') as csvfile: + writer = csv.writer(csvfile) + # Extract the fields of each item + for item in data: + context = item['context'] + question = item['question'] + answer0 = 
item['answer0'] + answer1 = item['answer1'] + answer2 = item['answer2'] + answer3 = item['answer3'] + + + response = client.chat.completions.create( + # model="ft:gpt-3.5-turbo-0125:personal:arg-quality-0328:97kBFgug", + model="gpt-4", + temperature=0, + messages=[ + {"role": "system", "content": "As a reading comprehension expert, you will receive context, question and four answer options. Please understand the given Context first and then output the label of the correct option as the answer to the question based on the Context"}, + {"role": "user", "content": "{'context': {\"Good Old War and person L : I saw both of these bands Wednesday night , and they both blew me away . seriously . Good Old War is acoustic and makes me smile . I really can not help but be happy when I listen to them ; I think it 's the fact that they seemed so happy themselves when they played .\"}, 'question': {'In the future , will this person go to see other bands play ?'}, 'answer0': {'None of the above choices .'}, 'answer1': {'This person likes music and likes to see the show , they will see other bands play .'}, 'answer2': {'This person only likes Good Old War and Person L , no other bands .'}, 'answer3': {'Other Bands is not on tour and this person can not see them .'}}"}, + {"role": "assistant", "content": "1"}, + {"role": "user", "content": "{'context': {""A hot girl appears who flirts with me to convince me to help her dig the deeper well ( Whoa ... even my dreams reference Buffy . Whedon is my God ) . I ' m in a narrow tunnel helping to pull a horse attached to a plow of some sort , while the brother of the hot chick is riding on the back of the plow . 
By the time we 're done I can tell it 's becoming night , though oddly when I get out of the tunnel it 's still light outside .""}, 'question': {'Why do even my dreams reference Buffy ?'}, 'answer0': {""I like Joss Whedon 's work a little bit .""}, 'answer1': {'I think Buffy the Vampire Slayer is an alright TV show .'}, 'answer2': {'I love the TV show Buffy the Vampire Slayer .'}, 'answer3': {'None of the above choices .'}}"}, + {"role": "assistant", "content": "2"}, + {"role": "user", "content": "{'context': {""A while later I tried the car again and lo and behold it does n't start at all , so a tow truck was called , and I chatted with Ellen ( who was n't in class after all ) while I waited . My dad came and got me from the body shop . The End . ( Where the hell did my freaking cow go ?""}, 'question': {""What is n't working properly ?""}, 'answer0': {'None of the above choices .'}, 'answer1': {'The cow'}, 'answer2': {'The tow truck'}, 'answer3': {'The body shop'}}"}, + {"role": "assistant", "content": "0"}, + {"role": "user", "content": str({'context':{context},'question':{question},"answer0":{answer0},"answer1":{answer1},"answer2":{answer2},"answer3":{answer3}})} + ] + ) + answer = response.choices[0].message.content + + + + if answer not in ["0", "1", "2", "3"]: + answer = str(random.choice([0, 1, 2, 3])) + count+=1 + print(answer) + print(f"invalid answer count: {count}") + writer.writerow(answer) \ No newline at end of file diff --git a/Experiment/ComsmosQA/test_chatglm3.py b/Experiment/ComsmosQA/test_chatglm3.py new file mode 100644 index 0000000000000000000000000000000000000000..5afeb8b7c06a2eaaf4b6f0405f5dcba1f329281f --- /dev/null +++ b/Experiment/ComsmosQA/test_chatglm3.py @@ -0,0 +1,57 @@ +import torch +from transformers import AutoModelForCausalLM, AutoTokenizer +from transformers.generation.utils import GenerationConfig +from peft import PeftModel, PeftConfig +import json +import csv + +# model_path = "/root/merge_models" +model_path = 
"/root/lanyun-tmp/ZhipuAI/chatglm3-6b/" + +model = AutoModelForCausalLM.from_pretrained( + model_path, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True +) +model.generation_config = GenerationConfig.from_pretrained(model_path) +tokenizer = AutoTokenizer.from_pretrained( + model_path, use_fast=False, trust_remote_code=True, +) + + + +# Read the JSONL test set +filename = '/root/lanyun-tmp/Dataset/test.jsonl' +data = [] +with open(filename, 'r') as f: + for line in f: + item = json.loads(line) + data.append(item) + + + + files = 'chatglm3-CoT_answers.csv' + with open(files, 'w', newline='') as csvfile: + writer = csv.writer(csvfile) + # Extract the fields of each item + for item in data: + context = item['context'] + question = item['question'] + answer0 = item['answer0'] + answer1 = item['answer1'] + answer2 = item['answer2'] + answer3 = item['answer3'] + + + + messages = str([ + {"role": "system", "content": """As a reading comprehension expert, you will receive context, question and four options. Please understand the context given below first, and then output the label of the correct option as the answer to the question based on the context.You Should Follow thinking steps below: + 1.Read the question and understand the requirements. + 2.Eliminate obviously incorrect options. + 3.Skim through the passage to find information that supports the remaining options. 
+ 4.Choose the best answer based on the information in the passage."""}, + {"role": "user", "content": str({'context':{context},'question':{question},"answer0":{answer0},"answer1":{answer1},"answer2":{answer2},"answer3":{answer3}})} + ]) + response = model.chat(tokenizer, messages) + + answer = response[0][0] + print(answer) + writer.writerow(answer) \ No newline at end of file diff --git a/Experiment/GPT-fewshot.md b/Experiment/GPT-fewshot.md new file mode 100644 index 0000000000000000000000000000000000000000..9c30f984d6b330b75f2285875ecfc229ee144ae6 --- /dev/null +++ b/Experiment/GPT-fewshot.md @@ -0,0 +1,10 @@ + ``` + {"role": "system", "content": "As a reading comprehension expert, you will receive context, question and four answer options. Please understand the given Context first and then output the label of the correct option as the answer to the question based on the Context"}, + {"role": "user", "content": "{'context': {\"Good Old War and person L : I saw both of these bands Wednesday night , and they both blew me away . seriously . Good Old War is acoustic and makes me smile . I really can not help but be happy when I listen to them ; I think it 's the fact that they seemed so happy themselves when they played .\"}, 'question': {'In the future , will this person go to see other bands play ?'}, 'answer0': {'None of the above choices .'}, 'answer1': {'This person likes music and likes to see the show , they will see other bands play .'}, 'answer2': {'This person only likes Good Old War and Person L , no other bands .'}, 'answer3': {'Other Bands is not on tour and this person can not see them .'}}"}, + {"role": "assistant", "content": "1"}, + {"role": "user", "content": "{'context': {""A hot girl appears who flirts with me to convince me to help her dig the deeper well ( Whoa ... even my dreams reference Buffy . Whedon is my God ) . 
I ' m in a narrow tunnel helping to pull a horse attached to a plow of some sort , while the brother of the hot chick is riding on the back of the plow . By the time we 're done I can tell it 's becoming night , though oddly when I get out of the tunnel it 's still light outside .""}, 'question': {'Why do even my dreams reference Buffy ?'}, 'answer0': {""I like Joss Whedon 's work a little bit .""}, 'answer1': {'I think Buffy the Vampire Slayer is an alright TV show .'}, 'answer2': {'I love the TV show Buffy the Vampire Slayer .'}, 'answer3': {'None of the above choices .'}}"}, + {"role": "assistant", "content": "2"}, + {"role": "user", "content": "{'context': {""A while later I tried the car again and lo and behold it does n't start at all , so a tow truck was called , and I chatted with Ellen ( who was n't in class after all ) while I waited . My dad came and got me from the body shop . The End . ( Where the hell did my freaking cow go ?""}, 'question': {""What is n't working properly ?""}, 'answer0': {'None of the above choices .'}, 'answer1': {'The cow'}, 'answer2': {'The tow truck'}, 'answer3': {'The body shop'}}"}, + {"role": "assistant", "content": "0"}, + {"role": "user", "content": str({'context':{context},'question':{question},"answer0":{answer0},"answer1":{answer1},"answer2":{answer2},"answer3":{answer3}})} + ``` diff --git a/Experiment/README.md b/Experiment/README.md new file mode 100644 index 0000000000000000000000000000000000000000..e280182ab5e9f87c8e69fa2082eb60c5fca3e55f --- /dev/null +++ b/Experiment/README.md @@ -0,0 +1,83 @@ +# 概述 +在微调完成后,我们通过以下方法对数据进行了测试 + +数据集1:Cosmos QA(包含35.6K个问题的多项选择阅读理解数据集) + +▪ 下载地址:https://wilburone.github.io/cosmos/ + +▪ 测评⽅法:将测试结果上传到 https://leaderboard.allenai.org/cosmosqa/submission/create ◦ + + +数据集2:TrivailQA (基于维基百科和⽹络收集的阅读理解问答数据集) + +▪ 下载地址:Download TriviaQA version 1.0 for RC (2.5G) + +▪ 测评⽅法:使⽤官⽅仓库中的triviaqa_evaluation.py + +(由于官方仓库的测试集形式和本人的不太一样,因此代码上做了专门的修改,不过核心metrics保持不变,仍是Exact完全匹配数与F1参数) + +## CosmosQA 
Results + + +### Ablation Study +| Model | Score | Comment | +|-----------------------|--------|-------------------------------| +| Fewshot MiniCPM | 0.3251 | base model, few-shot | +| Fewshot LoRA MiniCPM | 0.7773 | fine-tuned model, few-shot | +| Fewshot CoT LoRA MiniCPM | 0.7790 | fine-tuned model, few-shot, chain-of-thought | +| CoT LoRA MiniCPM | 0.8211 | fine-tuned model, zero-shot, chain-of-thought | +| ZH LoRA MiniCPM | 0.8215 | fine-tuned model, zero-shot, Chinese prompt | +| LoRA MiniCPM | 0.8291 | fine-tuned model, zero-shot | + +- The base MiniCPM has essentially no multiple-choice reading-comprehension ability: it cannot complete the task at all in the zero-shot setting, and even few-shot it is only slightly better than random guessing. +- After LoRA fine-tuning, MiniCPM achieves solid results, ranking Top 36 on the evaluation leaderboard. +- With a Chinese prompt, the LoRA fine-tuned model loses only a little accuracy and still reaches Top 42. +- Perhaps because small models take up prompts and few-shot examples poorly, adding either few-shot examples or CoT degraded the results in our experiments. + + +### Comparison +| Model | Score | Comment | +|-------------------|--------|-----------------| +| ChatGPT3.5 | 0.7233 |gpt-3.5-turbo-0125, few-shot | +| LoRA MiniCPM | 0.8291 |LoRA fine-tuned MiniCPM-2b, zero-shot | +| QLoRA Chatglm3 | 0.8416 |QLoRA fine-tuned chatglm3-6b, zero-shot | +- We compared the fine-tuned MiniCPM with QLoRA fine-tuned ChatGLM3 and with ChatGPT-3.5; the results show that a small model, after instruction fine-tuning, can surpass ChatGPT-3.5 on a specific task. + +### Screenshots + +![image.png](https://kashiwa-pic.oss-cn-beijing.aliyuncs.com/20240413223243.png) +![image.png](https://kashiwa-pic.oss-cn-beijing.aliyuncs.com/20240413224855.png) +![image.png](https://kashiwa-pic.oss-cn-beijing.aliyuncs.com/20240413230555.png) +![image.png](https://kashiwa-pic.oss-cn-beijing.aliyuncs.com/20240413230724.png) +![image.png](https://kashiwa-pic.oss-cn-beijing.aliyuncs.com/20240413233703.png) +![image.png](https://kashiwa-pic.oss-cn-beijing.aliyuncs.com/20240414093126.png) +![image.png](https://kashiwa-pic.oss-cn-beijing.aliyuncs.com/20240414095310.png) +![image.png](https://kashiwa-pic.oss-cn-beijing.aliyuncs.com/20240414105439.png) +![image.png](https://kashiwa-pic.oss-cn-beijing.aliyuncs.com/20240414165336.png) + + +## TriviaQA Results +We tried few-shot and zero-shot runs of the LoRA fine-tuned MiniCPM-2B, the base MiniCPM-2B, the QLoRA fine-tuned ChatGLM3-6B, and the base ChatGLM3-6B (eight configurations in total), and none of them produced practically usable answers. Results for the four model variants are shown below. +### MiniCPM-2B LoRA 
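The TriviaQA scores reported in this section come from the official triviaqa_evaluation.py metrics: exact match and token-level F1. Their core logic can be sketched roughly as follows (simplified; the real script also handles answer aliases and a few extra normalization cases):

```python
import re
import string
from collections import Counter


def normalize(text):
    # Lowercase, drop punctuation and English articles, collapse whitespace.
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def exact_match(prediction, ground_truth):
    return normalize(prediction) == normalize(ground_truth)


def f1_score(prediction, ground_truth):
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(ground_truth).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

This also explains the "answers are not concise enough" observation below: a verbose but correct answer keeps recall high while precision, and therefore F1, drops.
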
+![image.png](https://kashiwa-pic.oss-cn-beijing.aliyuncs.com/20240414180717.png) + + +### ChatGLM3-6B QLoRA +![image.png](https://kashiwa-pic.oss-cn-beijing.aliyuncs.com/20240414180608.png) + + +### ChatGLM3-6B base model +![image.png](https://kashiwa-pic.oss-cn-beijing.aliyuncs.com/20240414180451.png) + + +### MiniCPM-2B base model +![image.png](https://kashiwa-pic.oss-cn-beijing.aliyuncs.com/20240414180529.png) + +Our guess is that TriviaQA inputs simply contain too many tokens: constrained by GPU memory, we set max_line to 512, so good results were essentially impossible. + +We did still test ChatGPT3.5-Turbo: average F1 was 0.377 and the exact-match rate 0.153, with only 73 of 1000 answers completely wrong. Its long-document reading comprehension is clearly still far ahead of the small open-source models; its answers are just not concise enough. + + +![image.png](https://kashiwa-pic.oss-cn-beijing.aliyuncs.com/20240414204605.png) + + diff --git a/Experiment/TriviaQA/TriviaQA_test.jsonl b/Experiment/TriviaQA/TriviaQA_test.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..6b60ae7571461fa67a261094aace744352b8175e --- /dev/null +++ b/Experiment/TriviaQA/TriviaQA_test.jsonl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:44a832f1b30ee3efad44d183bd8975230e5fca5c6766ce6ac095dccaf1ddfb60 +size 34823270 diff --git a/Experiment/TriviaQA/TriviaQA_test_format.jsonl b/Experiment/TriviaQA/TriviaQA_test_format.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..5cb8d15e56f544d259ded84813b857ae861ff974 --- /dev/null +++ b/Experiment/TriviaQA/TriviaQA_test_format.jsonl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9c2df2ed1cbc9a098e3cffacd7af4acb4b875863bd6db3cf1ea5788416e88db1 +size 33043223 diff --git a/Experiment/TriviaQA/TriviaQA_test_format.py b/Experiment/TriviaQA/TriviaQA_test_format.py new file mode 100644 index 0000000000000000000000000000000000000000..76bcff405ab8da32cdee400e24b03794107f9fff --- /dev/null +++ b/Experiment/TriviaQA/TriviaQA_test_format.py @@ -0,0 +1,39 @@ +import json +import csv + +csvin = 'C:/Users/94427/kashiwa/DISC-Assignment/Experiment/TriviaQA/val_triviaqa.csv' + +# Open the CSV file +with open(csvin, newline='',encoding='utf-8') 
as csvfile: + reader = csv.DictReader(csvfile) + + + + files = 'C:/Users/94427/kashiwa/DISC-Assignment/Experiment/TriviaQA/TriviaQA_test_format1k.jsonl' + with open(files, 'w', newline='',encoding='utf-8') as outfile: + messages = [] + # Read the CSV file row by row + for index,row in enumerate(reader): + if index < 1000: # keep only the first 1000 rows + context = row['context'] + question = row['question'] + answers = row['answers'] + + message = { + "Data": + { + "Answer":answers, + "Question":question, + "Context":context, + }, + } + + messages.append(message) + + for message in messages: + outfile.write(json.dumps(message, ensure_ascii=False) + '\n') \ No newline at end of file diff --git a/Experiment/TriviaQA/TriviaQA_test_format1k.jsonl b/Experiment/TriviaQA/TriviaQA_test_format1k.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..5e843ef220b86ca0aaa325302d34045b85c536f2 --- /dev/null +++ b/Experiment/TriviaQA/TriviaQA_test_format1k.jsonl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:73e006240d18ef9f34352390f65ce7db3fff3542543fc487a2b127d7f3826d68 +size 4209121 diff --git a/Experiment/TriviaQA/TriviaQA_val.jsonl b/Experiment/TriviaQA/TriviaQA_val.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..4fb1ec933a9dd51d23c741fcd3190d6ef1c486d5 --- /dev/null +++ b/Experiment/TriviaQA/TriviaQA_val.jsonl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ee69d07946f4aedc148a4985c3ddce929f852014f3184da3d21c3b755fdedfab +size 34823273 diff --git a/Experiment/TriviaQA/jsonl2json.py b/Experiment/TriviaQA/jsonl2json.py new file mode 100644 index 0000000000000000000000000000000000000000..18b85367937f7cd08aa3a273fe6b936035085d64 --- /dev/null +++ b/Experiment/TriviaQA/jsonl2json.py @@ -0,0 +1,16 @@ +import json + +def convert_jsonl_to_json(jsonl_file, json_file): + json_data = [] + + with open(jsonl_file, 'r',encoding="utf-8") as file: + for line in file: + json_data.append(json.loads(line)) + + with open(json_file, 
'w', encoding="utf-8") as file:
+        json.dump(json_data, file)
+
+# Usage example
+jsonl_file = 'TriviaQA/6test.jsonl'
+json_file = 'TriviaQA/6test.json'
+convert_jsonl_to_json(jsonl_file, json_file)
\ No newline at end of file
diff --git a/Experiment/TriviaQA/result/TriviaQA_GPT3.5Turbo_answers.csv b/Experiment/TriviaQA/result/TriviaQA_GPT3.5Turbo_answers.csv
new file mode 100644
index 0000000000000000000000000000000000000000..bb26ec2371f411535330570f229d4ef72298bb75
--- /dev/null
+++ b/Experiment/TriviaQA/result/TriviaQA_GPT3.5Turbo_answers.csv
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8c2286a3731db7d38643b7bd3320fe4e2eb50c37e79bfa6e2adafa1919408f4a
+size 608944
diff --git a/Experiment/TriviaQA/result/TriviaQA_GPT3.5_answers1k.csv b/Experiment/TriviaQA/result/TriviaQA_GPT3.5_answers1k.csv
new file mode 100644
index 0000000000000000000000000000000000000000..9d1866282c8e0e1374cf94dee22743780f799f3d
--- /dev/null
+++ b/Experiment/TriviaQA/result/TriviaQA_GPT3.5_answers1k.csv
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c1493d9326592ee7f8a50edcd447eb0b7f6b4ce4080a282da140d31233f0229f
+size 72065
diff --git a/Experiment/TriviaQA/test_MiniCPM.py b/Experiment/TriviaQA/test_MiniCPM.py
new file mode 100644
index 0000000000000000000000000000000000000000..9b6c227166948fe858c7f732921474222837cab2
--- /dev/null
+++ b/Experiment/TriviaQA/test_MiniCPM.py
@@ -0,0 +1,46 @@
+import torch
+from transformers import AutoModelForCausalLM, AutoTokenizer
+from transformers.generation.utils import GenerationConfig
+from peft import PeftModel, PeftConfig
+import json
+import csv
+
+# lora_path = "/root/lanyun-tmp/output/MiniCPM/checkpoint-9000/"
+model_path = '/root/lanyun-tmp/OpenBMB/MiniCPM-2B-sft-fp32'
+model = AutoModelForCausalLM.from_pretrained(
+    model_path, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
+)
+model.generation_config = GenerationConfig.from_pretrained(model_path)
+tokenizer =
AutoTokenizer.from_pretrained(
+    model_path, use_fast=False, trust_remote_code=True,
+)
+# model = PeftModel.from_pretrained(model, lora_path)
+
+
+# Read the validation CSV file
+filename = '/root/lanyun-tmp/Dataset/val_triviaqa.csv'
+data = []
+with open(filename, newline='', encoding="utf-8") as csvfile:
+    reader = csv.DictReader(csvfile)
+
+    files = 'TriviaQA_MiniCPM_NLoRA.csv'
+    with open(files, 'w', newline='', encoding='utf-8') as outfile:
+        writer = csv.writer(outfile)
+        # Read the CSV file row by row and query the model
+        for row in reader:
+            context = row['context']
+            question = row['question']
+
+            # Build the prompt for the current row: keep the one-shot example,
+            # then append the actual context/question (previously the example
+            # was sent verbatim and context/question were never used)
+            messages = str([
+                {'role': 'system', 'content': 'Do not output "[" !!! As a reading comprehension expert, you will receive context and question. Please understand the given Context first and then output the answer of the question based on the Context'},
+                {'role': 'user', 'content': "{'context': '[DOC] [TLE] richard marx had an 80s No 1 hit with Hold On To The Nights? ', 'question': 'Who had an 80s No 1 hit with Hold On To The Nights?'}"},
+                {'role': 'assistant', 'content': "richard marx"},
+                {'role': 'user', 'content': str({'context': context, 'question': question})},
+            ])
+            response = model.chat(tokenizer, messages)
+
+            # model.chat returns (response_text, history); write the text as one CSV cell
+            answer = response[0]
+            print(answer)
+            writer.writerow([answer])
\ No newline at end of file
diff --git a/Experiment/TriviaQA/test_OpenAI.py b/Experiment/TriviaQA/test_OpenAI.py
new file mode 100644
index 0000000000000000000000000000000000000000..be9cbcae6b65bf857b16de2c6cf8223096387713
--- /dev/null
+++ b/Experiment/TriviaQA/test_OpenAI.py
@@ -0,0 +1,40 @@
+import json
+import csv
+from openai import OpenAI
+import random
+token = 0
+client = OpenAI()
+
+# Read the validation CSV file
+filename = 'C:/Users/94427/kashiwa/DISC-Assignment/Experiment/TriviaQA/val_triviaqa.csv'
+data = []
+
+with open(filename, newline='', encoding="utf-8") as csvfile:
+    reader = csv.DictReader(csvfile)
+
+    files = 'TriviaQA_GPT3.5Turbo_answers.csv'
+    with open(files, 'w', newline='', encoding='utf-8') as outfile:
+        writer = csv.writer(outfile)
+        # Read the CSV file row by row and build the prompt content
+        for row in reader:
+            context =
row['context'] + row['question']
+
+            response = client.chat.completions.create(
+                # model="ft:gpt-3.5-turbo-0125:personal:arg-quality-0328:97kBFgug",
+                model="gpt-3.5-turbo-0125",
+                temperature=0,
+                messages=[
+                    {"role": "system", "content": "As a reading comprehension expert, you will receive context and question. Please understand the given Context first and then output the answer of the question based on the Context"},
+                    {"role": "user", "content": str(context)}
+                ]
+            )
+            answer = response.choices[0].message.content
+            token += response.usage.total_tokens
+            print(f"Tokens consumed so far: {token}")
+            print(answer)
+            writer.writerow([answer])
diff --git a/Experiment/TriviaQA/test_chatglm3.py b/Experiment/TriviaQA/test_chatglm3.py
new file mode 100644
index 0000000000000000000000000000000000000000..e9f9eb48626075c850c6e8df3870c7cf5d59ce4b
--- /dev/null
+++ b/Experiment/TriviaQA/test_chatglm3.py
@@ -0,0 +1,46 @@
+import torch
+from transformers import AutoModelForCausalLM, AutoTokenizer
+from transformers.generation.utils import GenerationConfig
+from peft import PeftModel, PeftConfig
+import json
+import csv
+
+# model_path = "/root/merge_models"
+model_path = "/root/lanyun-tmp/ZhipuAI/chatglm3-6b"
+
+model = AutoModelForCausalLM.from_pretrained(
+    model_path, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
+)
+# model.generation_config = GenerationConfig.from_pretrained(model_path)
+tokenizer = AutoTokenizer.from_pretrained(
+    model_path, use_fast=False, trust_remote_code=True,
+)
+
+
+# Read the validation CSV file
+filename = '/root/lanyun-tmp/Dataset/val_triviaqa.csv'
+data = []
+with open(filename, newline='', encoding="utf-8") as csvfile:
+    reader = csv.DictReader(csvfile)
+
+    files = 'TriviaQA_ChatGLM3_Nlora.csv'
+    with open(files, 'w', newline='', encoding='utf-8') as outfile:
+        writer = csv.writer(outfile)
+        # Read the CSV file row by row and query the model
+        for index, row in enumerate(reader):
+            if index <= 1000:
+                context = row['context']
+                question = row['question']
+
+                messages =
str([
+                    {'role': 'system', 'content': 'Do not output "[" !!! As a reading comprehension expert, you will receive context and question. Please understand the given Context first and then output the answer of the question based on the Context'},
+                    {'role': 'user', 'content': "{'context': '[DOC] [TLE] richard marx had an 80s No 1 hit with Hold On To The Nights? ', 'question': 'Who had an 80s No 1 hit with Hold On To The Nights?'}"},
+                    {'role': 'assistant', 'content': "richard marx"},
+                    # Ask about the current row (previously the one-shot example above
+                    # was sent verbatim and context/question were never used)
+                    {'role': 'user', 'content': str({'context': context, 'question': question})},
+                ])
+                response = model.chat(tokenizer, messages)
+
+                # model.chat returns (response_text, history); write the text as one CSV cell
+                answer = response[0]
+                print(answer)
+                writer.writerow([answer])
\ No newline at end of file
diff --git a/Experiment/TriviaQA/triviaqa/.gitignore b/Experiment/TriviaQA/triviaqa/.gitignore
new file mode 100644
index 0000000000000000000000000000000000000000..72364f99fe4bf8d5262df3b19b33102aeaa791e5
--- /dev/null
+++ b/Experiment/TriviaQA/triviaqa/.gitignore
@@ -0,0 +1,89 @@
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+env/
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+*.egg-info/
+.installed.cfg
+*.egg
+
+# PyInstaller
+# Usually these files are written by a python script from a template
+# before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest +*.spec + +# Installer logs +pip-log.txt +pip-delete-this-directory.txt + +# Unit test / coverage reports +htmlcov/ +.tox/ +.coverage +.coverage.* +.cache +nosetests.xml +coverage.xml +*,cover +.hypothesis/ + +# Translations +*.mo +*.pot + +# Django stuff: +*.log +local_settings.py + +# Flask stuff: +instance/ +.webassets-cache + +# Scrapy stuff: +.scrapy + +# Sphinx documentation +docs/_build/ + +# PyBuilder +target/ + +# IPython Notebook +.ipynb_checkpoints + +# pyenv +.python-version + +# celery beat schedule file +celerybeat-schedule + +# dotenv +.env + +# virtualenv +venv/ +ENV/ + +# Spyder project settings +.spyderproject + +# Rope project settings +.ropeproject diff --git a/Experiment/TriviaQA/triviaqa/LICENSE b/Experiment/TriviaQA/triviaqa/LICENSE new file mode 100644 index 0000000000000000000000000000000000000000..290cc8618d5da6d0027365c00e9374ae0c6120ff --- /dev/null +++ b/Experiment/TriviaQA/triviaqa/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. 
+ + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. 
If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. 
You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. 
Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "{}" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. 
+ + Copyright 2017 Kenton Lee + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/Experiment/TriviaQA/triviaqa/README.md b/Experiment/TriviaQA/triviaqa/README.md new file mode 100644 index 0000000000000000000000000000000000000000..0496dfab839c3f1be9ecdd50e3dfe4703232638c --- /dev/null +++ b/Experiment/TriviaQA/triviaqa/README.md @@ -0,0 +1,34 @@ +# TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension +- This repo contains code for the paper +Mandar Joshi, Eunsol Choi, Daniel Weld, Luke Zettlemoyer. + +[TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension][triviaqa-arxiv] +In Association for Computational Linguistics (ACL) 2017, Vancouver, Canada. + +- The data can be downloaded from the [TriviaQA website][triviaqa-website]. The Apache 2.0 License applies to both the code and the data. +- Please contact [Mandar Joshi][mandar-home] (\90@cs.washington.edu) for suggestions and comments. + +## Requirements +#### General +- Python 3. You should be able to run the evaluation scripts using Python 2.7 if you take care of unicode in ```utils.utils.py```. +- BiDAF requires Python 3 -- check the [original repository][bidaf-orig-github] for more details. + +#### Python Packages +- tensorflow (only if you want to run BiDAF, verified on r0.11) +- nltk +- tqdm + +## Evaluation +The ```dataset file``` parameter refers to files in the ```qa``` directory of the data (e.g., ```wikipedia-dev.json```). 
For file format, check out the ```sample``` directory in the repo. +``` +python3 -m evaluation.triviaqa_evaluation --dataset_file samples/triviaqa_sample.json --prediction_file samples/sample_predictions.json +``` +## Miscellaneous +- If you have a SQuAD model and want to run on TriviaQA, please refer to ```utils.convert_to_squad_format.py``` + + + +[bidaf-orig-github]: https://github.com/allenai/bi-att-flow/ +[triviaqa-arxiv]: https://arxiv.org/abs/1705.03551 +[mandar-home]: http://homes.cs.washington.edu/~mandar90/ +[triviaqa-website]: http://nlp.cs.washington.edu/triviaqa/ diff --git a/Experiment/TriviaQA/triviaqa/evaluation/__init__.py b/Experiment/TriviaQA/triviaqa/evaluation/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/Experiment/TriviaQA/triviaqa/evaluation/evaluate_bidaf.py b/Experiment/TriviaQA/triviaqa/evaluation/evaluate_bidaf.py new file mode 100644 index 0000000000000000000000000000000000000000..c5559022ea745df5c6d455de0652ee1fa25f15d8 --- /dev/null +++ b/Experiment/TriviaQA/triviaqa/evaluation/evaluate_bidaf.py @@ -0,0 +1,57 @@ +# -*- coding: utf-8 -*- +import argparse +import utils.utils +import evaluation.triviaqa_evaluation +import utils.dataset_utils +from collections import defaultdict + + +def create_answer_dict(answer_json, ques_level): + key_to_answer_scores = {} + key_to_pred_answer = {} + key_to_pred_score = {} + qid_to_confidence = answer_json['scores'] + for qid_filename in answer_json: + if not (qid_filename == 'scores' or qid_filename == 'all_scores'): + confidence = qid_to_confidence[qid_filename] + qid_filename_tup = tuple(qid_filename.split('--')) + + key = qid_filename_tup[0] if ques_level else qid_filename + # key = '_'.join(qid_filename.split('_')[:2]) + + answer = answer_json[qid_filename] + answer = evaluation.triviaqa_evaluation.normalize_answer(answer) + key_to_answer_scores[key] = key_to_answer_scores.get(key, defaultdict(float)) + 
key_to_answer_scores[key][answer] += confidence + for key in key_to_answer_scores: + if len(key_to_answer_scores[key]) > 0: + sorted_ans = sorted(key_to_answer_scores[key].items(), key=lambda x: float(x[1]), reverse=True) # confidence + key_to_pred_answer[key] = sorted_ans[0][0] + key_to_pred_score[key] = sorted_ans[0][1] + + return key_to_pred_answer, key_to_pred_score + + +def evaluate(bidaf_op_file, questions_file, limited=False): + bidaf_json = utils.utils.read_json(bidaf_op_file) + triviaqa_data = utils.dataset_utils.read_triviaqa_data(questions_file) + key_to_pred, key_to_pred_score = create_answer_dict(bidaf_json, triviaqa_data['Domain'] == 'Wikipedia') + key_to_ground_truth = utils.dataset_utils.get_key_to_ground_truth(triviaqa_data) + qids = key_to_pred.keys() if limited else None + print (evaluation.triviaqa_evaluation.evaluate_triviaqa(key_to_ground_truth, key_to_pred, qid_list=qids, mute=True)) + + +def get_args(): + parser = argparse.ArgumentParser() + parser.add_argument('--dataset_file', help='Triviaqa file') + parser.add_argument('--bidaf_file', help='BiDAF output file') + + parser.add_argument('--limited', default=False, type=bool, help='Evaluate only qids appearing in predictions') + args = parser.parse_args() + return args + + +if __name__ == '__main__': + args = get_args() + evaluate(args.bidaf_file, args.dataset_file, args.limited) + diff --git a/Experiment/TriviaQA/triviaqa/evaluation/triviaqa_evaluation.py b/Experiment/TriviaQA/triviaqa/evaluation/triviaqa_evaluation.py new file mode 100644 index 0000000000000000000000000000000000000000..b80d92039b76cc960e1c7f9348196663310c4a26 --- /dev/null +++ b/Experiment/TriviaQA/triviaqa/evaluation/triviaqa_evaluation.py @@ -0,0 +1,207 @@ +# -*- coding: utf-8 -*- +""" Official evaluation script for v1.0 of the TriviaQA dataset. +Extended from the evaluation script for v1.1 of the SQuAD dataset. 
""" +from __future__ import print_function +import os +import sys +# 获取当前脚本所在的目录 +current_dir = os.path.dirname(os.path.abspath(__file__)) +# 构建相对路径 +relative_path = os.path.join(current_dir, '..') +# 将相对路径添加到sys.path +sys.path.append(relative_path) + +from collections import Counter +import string +import re +import sys +import argparse +import utils.dataset_utils +import utils.utils +import json +import csv +f1 = exact_match = common = Wrong = 0 + +def normalize_answer(s): + """Lower text and remove punctuation, articles and extra whitespace.""" + # print(s) + s = json.dumps(s) + + def remove_articles(text): + return re.sub(r'\b(a|an|the)\b', ' ', text) + + def white_space_fix(text): + return ' '.join(text.split()) + + def handle_punc(text): + exclude = set(string.punctuation + "".join([u"‘", u"’", u"´", u"`"])) + return ''.join(ch if ch not in exclude else ' ' for ch in text) + + def lower(text): + return text.lower() + + def replace_underscore(text): + return text.replace('_', ' ') + + # print(white_space_fix(remove_articles(handle_punc(lower(replace_underscore(s))))).strip()) + return white_space_fix(remove_articles(handle_punc(lower(replace_underscore(s))))).strip() + + +def f1_score(prediction, ground_truth): + global Wrong + prediction_tokens = normalize_answer(prediction).split() + print(f"规范化预测:{normalize_answer(prediction)}") + # print(f"预测token数:{len(prediction_tokens)}") + + ground_truth_tokens = normalize_answer(ground_truth).split() + print(f"规范化答案:{normalize_answer(ground_truth)}") + # print(f"答案token数:{len(ground_truth_tokens) }") + common = Counter(prediction_tokens) & Counter(ground_truth_tokens) + print(common) + num_same = sum(common.values()) + print(num_same) + if num_same == 0: + Wrong+=1 + return 0 + precision = 1.0 * num_same / len(prediction_tokens) + # print(f"预测率:{precision}") + recall = 1.0 * num_same / len(ground_truth_tokens) + # print(f"召回率:{recall}") + f1 = (2 * precision * recall) / (precision + recall) + return f1 + + +def 
exact_match_score(prediction, ground_truth):
+    return normalize_answer(prediction) == normalize_answer(ground_truth)
+
+
+def metric_max_over_ground_truths(metric_fn, prediction, ground_truths):
+    scores_for_ground_truths = []
+    score = metric_fn(prediction, ground_truths)
+    scores_for_ground_truths.append(score)
+    return max(scores_for_ground_truths)
+
+
+def is_exact_match(answer_object, prediction):
+    ground_truths = get_ground_truths(answer_object)
+    for ground_truth in ground_truths:
+        if exact_match_score(prediction, ground_truth):
+            return True
+    return False
+
+
+def has_exact_match(ground_truths, candidates):
+    for ground_truth in ground_truths:
+        if ground_truth in candidates:
+            return True
+    return False
+
+
+def get_ground_truths(answer):
+    return answer['NormalizedAliases'] + [normalize_answer(ans) for ans in answer.get('HumanAnswers', [])]
+
+
+def get_oracle_score(ground_truth, predicted_answers, i=None, mute=False, maxline=1000):
+    exact_match = common = 0
+
+    common += 1
+    prediction = normalize_answer(predicted_answers[i])
+    ground_truths = ground_truth[i]
+    print(f"Prediction: {prediction}")
+    print(f"Ground truth: {ground_truths}")
+    em_for_this_question = has_exact_match(ground_truths, prediction)
+    exact_match += int(em_for_this_question)
+
+    exact_match = 100.0 * exact_match / maxline
+
+    return {'oracle_exact_match': exact_match, 'common': common, 'denominator': maxline, "Wrong": Wrong,
+            'pred_len': len(predicted_answers), 'gold_len': len(ground_truth)}
+
+
+def evaluate_triviaqa(ground_truth, predicted_answers, i=None, mute=False, maxline=None):
+    global f1, exact_match, common
+    common += i
+    prediction = predicted_answers[i]
+    ground_truths = ground_truth[i]["Data"]["Answer"]
+    em_for_this_question = metric_max_over_ground_truths(
+        exact_match_score, prediction, ground_truths)
+    if em_for_this_question == 0 and not mute:
+        print("em=0:", prediction, ground_truths)
+    exact_match += em_for_this_question
+    f1_for_this_question = metric_max_over_ground_truths(
+        f1_score, prediction, ground_truths)
+    f1 += f1_for_this_question
+    print(f"Current round: {i+1}")
+    print(f"Round F1: {f1_for_this_question}")
+    print(f"Cumulative F1: {f1}")
+    print(f"Round exact match: {em_for_this_question}")
+    print(f"Cumulative exact match: {exact_match}")
+
+    exact_match_mean = exact_match / (i+1)
+    f1_mean = f1 / (i+1)
+
+    print(f"Mean F1: {f1_mean}")
+    print(f"Mean exact match: {exact_match_mean}")
+
+    return {'exact_match': exact_match_mean, 'f1': f1_mean, 'common': common, 'denominator': i+1, "Wrong": Wrong,
+            'pred_len': len(predicted_answers), 'gold_len': len(ground_truth)}
+
+
+def get_args():
+    parser = argparse.ArgumentParser(
+        description='Evaluation for TriviaQA {}'.format(expected_version))
+    parser.add_argument('--dataset_file', default="C:/Users/94427/kashiwa/DISC-Assignment/Experiment/TriviaQA/TriviaQA_test_format1k.jsonl", help='Dataset file')
+    parser.add_argument('--prediction_file', default="C:/Users/94427/kashiwa/DISC-Assignment/Experiment/TriviaQA/result/TriviaQA_GPT3.5_answers1k.csv", help='Prediction File')
+    args = parser.parse_args()
+    return args
+
+
+if __name__ == '__main__':
+    expected_version = 1.0
+    args = get_args()
+
+    # dataset_json = utils.dataset_utils.read_triviaqa_data(args.dataset_file)
+    dataset_json = args.dataset_file
+    prediction_json = args.prediction_file
+    dataset_dict = []
+
+    prediction_dict = []
+
+
+# Read the ground-truth data from the JSONL file
+with open(args.dataset_file, 'r', encoding="utf-8") as file:
+    for line in file:
+        json_data = json.loads(line)
+        dataset_dict.append(json_data)
+
+
+# Read the predictions from the CSV file
+with open(args.prediction_file, newline='', encoding="utf-8") as csvfile:
+    reader = csv.reader(csvfile)
+
+    # Collect every prediction row
+    for row in reader:
+        prediction_dict.append(row)
+    # if dataset_json['Version'] != expected_version:
+    #     print('Evaluation expects v-{} , but got dataset with v-{}'.format(expected_version, dataset_json['Version']),
+    #           file=sys.stderr)
+for i in range(0, 1000):
+    print(f"Current row: {i}")
+    key_to_ground_truth = dataset_dict
+    predictions = prediction_dict
+    eval_dict = evaluate_triviaqa(key_to_ground_truth, predictions, i=i, maxline=1000)
+    print(eval_dict)
diff --git a/Experiment/TriviaQA/triviaqa/requirements.txt b/Experiment/TriviaQA/triviaqa/requirements.txt
new file mode 100644
index 0000000000000000000000000000000000000000..af5add666cc06e0001a0f39148f2572794891abe
--- /dev/null
+++ b/Experiment/TriviaQA/triviaqa/requirements.txt
@@ -0,0 +1,4 @@
+tensorflow>=0.11
+nltk
+tqdm
+jinja2
\ No newline at end of file
diff --git a/Experiment/TriviaQA/triviaqa/samples/sample_predictions.json b/Experiment/TriviaQA/triviaqa/samples/sample_predictions.json
new file mode 100644
index 0000000000000000000000000000000000000000..2bab166e7f415f7a6e48e4fe78a9a6c576e8f636
--- /dev/null
+++ b/Experiment/TriviaQA/triviaqa/samples/sample_predictions.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a7125f623eb065f5b1efd24c6297898c691718fbdc140d76a5caf591848183db
+size 56
diff --git a/Experiment/TriviaQA/triviaqa/samples/triviaqa_sample.json b/Experiment/TriviaQA/triviaqa/samples/triviaqa_sample.json
new file mode 100644
index 0000000000000000000000000000000000000000..d6c0923178594901ea7483f98b99527f6dee243a
--- /dev/null
+++ b/Experiment/TriviaQA/triviaqa/samples/triviaqa_sample.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b50ec7222f1801e33870976dbcd179a56519a7d4af0f149a15ee4fff1e869cdd
+size 1650
diff --git a/Experiment/TriviaQA/triviaqa/utils/__init__.py b/Experiment/TriviaQA/triviaqa/utils/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/Experiment/TriviaQA/triviaqa/utils/convert_to_squad_format.py b/Experiment/TriviaQA/triviaqa/utils/convert_to_squad_format.py
new file
mode 100644 index 0000000000000000000000000000000000000000..b58374200d66ac5c5744f7cbc2fe3293650fa908 --- /dev/null +++ b/Experiment/TriviaQA/triviaqa/utils/convert_to_squad_format.py @@ -0,0 +1,110 @@ +import utils.utils +import utils.dataset_utils +import os +from tqdm import tqdm +import random +import nltk +import argparse + + +def get_text(qad, domain): + local_file = os.path.join(args.web_dir, qad['Filename']) if domain == 'SearchResults' else os.path.join(args.wikipedia_dir, qad['Filename']) + return utils.utils.get_file_contents(local_file, encoding='utf-8') + + +def select_relevant_portion(text): + paras = text.split('\n') + selected = [] + done = False + for para in paras: + sents = sent_tokenize.tokenize(para) + for sent in sents: + words = nltk.word_tokenize(sent) + for word in words: + selected.append(word) + if len(selected) >= args.max_num_tokens: + done = True + break + if done: + break + if done: + break + selected.append('\n') + st = ' '.join(selected).strip() + return st + + +def add_triple_data(datum, page, domain): + qad = {'Source': domain} + for key in ['QuestionId', 'Question', 'Answer']: + qad[key] = datum[key] + for key in page: + qad[key] = page[key] + return qad + + +def get_qad_triples(data): + qad_triples = [] + for datum in data['Data']: + for key in ['EntityPages', 'SearchResults']: + for page in datum.get(key, []): + qad = add_triple_data(datum, page, key) + qad_triples.append(qad) + return qad_triples + + +def convert_to_squad_format(qa_json_file, squad_file): + qa_json = utils.dataset_utils.read_triviaqa_data(qa_json_file) + qad_triples = get_qad_triples(qa_json) + + random.seed(args.seed) + random.shuffle(qad_triples) + + data = [] + for qad in tqdm(qad_triples): + qid = qad['QuestionId'] + + text = get_text(qad, qad['Source']) + selected_text = select_relevant_portion(text) + + question = qad['Question'] + para = {'context': selected_text, 'qas': [{'question': question, 'answers': []}]} + data.append({'paragraphs': [para]}) + qa 
= para['qas'][0]
+        qa['id'] = utils.dataset_utils.get_question_doc_string(qid, qad['Filename'])
+        qa['qid'] = qid
+
+        ans_string, index = utils.dataset_utils.answer_index_in_document(qad['Answer'], selected_text)
+        if index == -1:
+            if qa_json['Split'] == 'train':
+                continue
+        else:
+            qa['answers'].append({'text': ans_string, 'answer_start': index})
+
+        if qa_json['Split'] == 'train' and len(data) >= args.sample_size and qa_json['Domain'] == 'Web':
+            break
+
+    squad = {'data': data, 'version': qa_json['Version']}
+    utils.utils.write_json_to_file(squad, squad_file)
+    print('Added', len(data))
+
+
+def get_args():
+    parser = argparse.ArgumentParser()
+    parser.add_argument('--triviaqa_file', help='Triviaqa file')
+    parser.add_argument('--squad_file', help='Squad file')
+    parser.add_argument('--wikipedia_dir', help='Wikipedia doc dir')
+    parser.add_argument('--web_dir', help='Web doc dir')
+
+    parser.add_argument('--seed', default=10, type=int, help='Random seed')
+    parser.add_argument('--max_num_tokens', default=800, type=int, help='Maximum number of tokens from a document')
+    parser.add_argument('--sample_size', default=80000, type=int, help='Sample size')
+    parser.add_argument('--tokenizer', default='tokenizers/punkt/english.pickle', help='Sentence tokenizer')
+    args = parser.parse_args()
+    return args
+
+
+if __name__ == '__main__':
+    args = get_args()
+    sent_tokenize = nltk.data.load(args.tokenizer)
+    convert_to_squad_format(args.triviaqa_file, args.squad_file)
diff --git a/Experiment/TriviaQA/triviaqa/utils/dataset_utils.py b/Experiment/TriviaQA/triviaqa/utils/dataset_utils.py
new file mode 100644
index 0000000000000000000000000000000000000000..b5de201faba5b8149a4cba5e9a6698bc72b89958
--- /dev/null
+++ b/Experiment/TriviaQA/triviaqa/utils/dataset_utils.py
@@ -0,0 +1,57 @@
+# -*- coding: utf-8 -*-
+import utils.utils
+import utils
+
+
+# Key for wikipedia eval is question-id.
Key for web eval is the (question_id, filename) tuple +def get_key_to_ground_truth(data): + if data['Domain'] == 'Wikipedia': + return {datum['QuestionId']: datum['Answer'] for datum in data['Data']} + else: + return get_qd_to_answer(data) + + +def get_question_doc_string(qid, doc_name): + return '{}--{}'.format(qid, doc_name) + +def get_qd_to_answer(data): + key_to_answer = {} + for datum in data['Data']: + for page in datum.get('EntityPages', []) + datum.get('SearchResults', []): + qd_tuple = get_question_doc_string(datum['QuestionId'], page['Filename']) + key_to_answer[qd_tuple] = datum['Answer'] + return key_to_answer + + +def read_clean_part(datum): + for key in ['EntityPages', 'SearchResults']: + new_page_list = [] + for page in datum.get(key, []): + if page['DocPartOfVerifiedEval']: + new_page_list.append(page) + datum[key] = new_page_list + assert len(datum['EntityPages']) + len(datum['SearchResults']) > 0 + return datum + + +def read_triviaqa_data(qajson): + data = utils.utils.read_json(qajson) + # read only documents and questions that are a part of clean data set + if data['VerifiedEval']: + clean_data = [] + for datum in data['Data']: + if datum['QuestionPartOfVerifiedEval']: + if data['Domain'] == 'Web': + datum = read_clean_part(datum) + clean_data.append(datum) + data['Data'] = clean_data + return data + + +def answer_index_in_document(answer, document): + answer_list = answer['NormalizedAliases'] + for answer_string_in_doc in answer_list: + index = document.lower().find(answer_string_in_doc) + if index != -1: + return answer_string_in_doc, index + return answer['NormalizedValue'], -1 diff --git a/Experiment/TriviaQA/triviaqa/utils/utils.py b/Experiment/TriviaQA/triviaqa/utils/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..4a8e5158672cd2a5d88f5b7e88e750e734850530 --- /dev/null +++ b/Experiment/TriviaQA/triviaqa/utils/utils.py @@ -0,0 +1,24 @@ +import json + + +def write_json_to_file(json_object, json_file, mode='w', 
encoding='utf-8'):
+    with open(json_file, mode, encoding=encoding) as outfile:
+        json.dump(json_object, outfile, indent=4, sort_keys=True, ensure_ascii=False)
+
+
+def get_file_contents(filename, encoding='utf-8'):
+    with open(filename, encoding=encoding) as f:
+        content = f.read()
+    return content
+
+
+def read_json(filename, encoding='utf-8'):
+    contents = get_file_contents(filename, encoding=encoding)
+    return json.loads(contents)
+
+
+def get_file_contents_as_list(file_path, encoding='utf-8', ignore_blanks=True):
+    contents = get_file_contents(file_path, encoding=encoding)
+    lines = contents.split('\n')
+    lines = [line for line in lines if line != ''] if ignore_blanks else lines
+    return lines
\ No newline at end of file
diff --git a/Experiment/TriviaQA/val_triviaqa.csv b/Experiment/TriviaQA/val_triviaqa.csv
new file mode 100644
index 0000000000000000000000000000000000000000..465fe491205f156835470378834f9419582ccef4
--- /dev/null
+++ b/Experiment/TriviaQA/val_triviaqa.csv
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dfca5bc6dfa02e183cbda07f4e076a8b9714c4c2270145c3c7824cfa8830a547
+size 34546128
diff --git a/FineTune/README.md b/FineTune/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..d5881e46522310b74fd13d3ef4926c845b34a94b
--- /dev/null
+++ b/FineTune/README.md
@@ -0,0 +1,129 @@
+# Overview
+We adopted two different fine-tuning strategies: MiniCPM-2B was fine-tuned with LoRA, and ChatGLM3-6B with QLoRA.
+
+# ChatGLM3-6B QLoRA Fine-Tuning
+
+## Dependencies
+Install Xtuner with integrated DeepSpeed:
+```
+pip install -U 'xtuner[deepspeed]'
+```
+
+## Training
+After setting the model path and dataset, run the Xtuner command to start training:
+```
+xtuner train ${CONFIG_NAME_OR_PATH}
+```
+Training consumed about 13.5 GB of GPU memory and took roughly 4 hours.
+
+![image.png](https://kashiwa-pic.oss-cn-beijing.aliyuncs.com/20240412091253.png)
+![image.png](https://kashiwa-pic.oss-cn-beijing.aliyuncs.com/20240412085937.png)
+
+## Model Files
+The model files are uploaded to HuggingFace:
+
+[Read_Comprehension_Chatglm3-6b_qlora](https://huggingface.co/KashiwaByte/Read_Comprehension_Chatglm3-6b_qlora/tree/main)
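Both fine-tuning runs in this README rely on the same LoRA idea: the frozen weight matrix W is augmented with a trainable low-rank product B·A scaled by alpha/r, so the effective forward pass is y = (W + (alpha/r)·B·A)·x. A dependency-free toy sketch of that effective forward pass (all matrices and numbers below are made-up illustrations, not actual model weights):

```python
def mat_mul(B, A):
    # (d_out x r) @ (r x d_in) -> (d_out x d_in); naive multiply for tiny matrices
    return [[sum(B[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))]
            for i in range(len(B))]


def lora_forward(x, W, A, B, r, alpha):
    """Toy effective forward pass of a LoRA-adapted linear layer."""
    scale = alpha / r                      # LoRA scaling factor
    delta = mat_mul(B, A)                  # low-rank update B @ A
    W_eff = [[W[i][j] + scale * delta[i][j]
              for j in range(len(W[0]))]
             for i in range(len(W))]       # W + (alpha/r) * B @ A
    # y = W_eff @ x
    return [sum(W_eff[i][j] * x[j] for j in range(len(x)))
            for i in range(len(W_eff))]
```

With `r=8` and `lora_alpha=32` as in the configs in this repo, the scale is 4, and only the small A and B matrices are trained while W stays frozen.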
+![image.png](https://kashiwa-pic.oss-cn-beijing.aliyuncs.com/20240414191810.png)
+# MiniCPM-2B LoRA Fine-Tuning
+
+After setting the model path and dataset, run the train.sh script to start training. Training consumed about 21.5 GB of GPU memory and took roughly 6 hours.
+![image.png](https://kashiwa-pic.oss-cn-beijing.aliyuncs.com/20240411224008.png)
+
+
+The following are the argparse hyperparameters of the train script, the DeepSpeed configuration, and the LoRA configuration:
+
+    # train argparse
+    --deepspeed ./ds_config.json \
+    --output_dir="./output/MiniCPM" \
+    --per_device_train_batch_size=4 \
+    --gradient_accumulation_steps=4 \
+    --logging_steps=10 \
+    --num_train_epochs=3 \
+    --save_steps=500 \
+    --learning_rate=1e-4 \
+    --save_on_each_node=True \
+
+
+    # deepspeed config
+    {
+        "fp16": {
+            "enabled": "auto",
+            "loss_scale": 0,
+            "loss_scale_window": 1000,
+            "initial_scale_power": 16,
+            "hysteresis": 2,
+            "min_loss_scale": 1
+        },
+        "optimizer": {
+            "type": "AdamW",
+            "params": {
+                "lr": "auto",
+                "betas": "auto",
+                "eps": "auto",
+                "weight_decay": "auto"
+            }
+        },
+
+        "scheduler": {
+            "type": "WarmupDecayLR",
+            "params": {
+                "last_batch_iteration": -1,
+                "total_num_steps": "auto",
+                "warmup_min_lr": "auto",
+                "warmup_max_lr": "auto",
+                "warmup_num_steps": "auto"
+            }
+        },
+
+        "zero_optimization": {
+            "stage": 2,
+            "offload_optimizer": {
+                "device": "cpu",
+                "pin_memory": true
+            },
+            "offload_param": {
+                "device": "cpu",
+                "pin_memory": true
+            },
+            "allgather_partitions": true,
+            "allgather_bucket_size": 5e8,
+            "overlap_comm": true,
+            "reduce_scatter": true,
+            "reduce_bucket_size": 5e8,
+            "contiguous_gradients": true
+        },
+        "activation_checkpointing": {
+            "partition_activations": false,
+            "cpu_checkpointing": false,
+            "contiguous_memory_optimization": false,
+            "number_checkpoints": null,
+            "synchronize_checkpoint_boundary": false,
+            "profile": false
+        },
+        "gradient_accumulation_steps": "auto",
+        "gradient_clipping": "auto",
+        "steps_per_print": 2000,
+        "train_batch_size": "auto",
+        "min_lr": 5e-7,
+        "train_micro_batch_size_per_gpu": "auto",
+        "wall_clock_breakdown": false
+    }
+
+    # loraConfig
+    config = LoraConfig(
+        task_type=TaskType.CAUSAL_LM,
+        target_modules=["q_proj", "v_proj"],  # different models need different target modules; check the model's attention layers
+        inference_mode=False,  # training mode
+        r=8,  # LoRA rank
+        lora_alpha=32,  # LoRA alpha; see the LoRA paper for its effect
+        lora_dropout=0.1  # dropout ratio
+    )
+
+
+
+## Model Files
+The model files are uploaded to HuggingFace:
+
+[Read_Comprehension_MiniCPM2B](https://huggingface.co/KashiwaByte/Read_Comprehension_MiniCPM2B/tree/main)
+![image.png](https://kashiwa-pic.oss-cn-beijing.aliyuncs.com/20240414191757.png)
\ No newline at end of file
diff --git a/FineTune/chatglm3_6b_qlora_read_comprehension.py b/FineTune/chatglm3_6b_qlora_read_comprehension.py
new file mode 100644
index 0000000000000000000000000000000000000000..8b618c2c4ea6d0ef9fb9a55ddcccbf473023bcc8
--- /dev/null
+++ b/FineTune/chatglm3_6b_qlora_read_comprehension.py
@@ -0,0 +1,214 @@
+# Copyright (c) OpenMMLab. All rights reserved.
+import torch
+from datasets import load_dataset
+from mmengine.dataset import DefaultSampler
+from mmengine.hooks import (CheckpointHook, DistSamplerSeedHook, IterTimerHook,
+                            LoggerHook, ParamSchedulerHook)
+from mmengine.optim import AmpOptimWrapper, CosineAnnealingLR, LinearLR
+from peft import LoraConfig
+from torch.optim import AdamW
+from transformers import (AutoModelForCausalLM, AutoTokenizer,
+                          BitsAndBytesConfig)
+
+from xtuner.dataset import process_hf_dataset
+from xtuner.dataset.collate_fns import default_collate_fn
+from xtuner.dataset.map_fns import alpaca_map_fn, template_map_fn_factory
+from xtuner.engine.hooks import (DatasetInfoHook, EvaluateChatHook,
+                                 VarlenAttnArgsToMessageHubHook)
+from xtuner.engine.runner import TrainLoop
+from xtuner.model import SupervisedFinetune
+from xtuner.utils import PROMPT_TEMPLATE, SYSTEM_TEMPLATE
+
+#######################################################################
+#                          PART 1  Settings                           #
+#######################################################################
+# Model
+pretrained_model_name_or_path = '/root/lanyun-tmp/ZhipuAI/chatglm3-6b'
+use_varlen_attn = False
+
+# Data
+data_path = 
'/root/lanyun-tmp/Dataset/Xtuner_Read_Comperhension50k.jsonl' +prompt_template = PROMPT_TEMPLATE.chatglm3 +max_length = 512 +pack_to_max_length = True + +# Scheduler & Optimizer +batch_size = 1 # per_device +accumulative_counts = 16 +dataloader_num_workers = 0 +max_epochs = 3 +optim_type = AdamW +lr = 2e-4 +betas = (0.9, 0.999) +weight_decay = 0 +max_norm = 1 # grad clip +warmup_ratio = 0.03 + +# Save +save_steps = 500 +save_total_limit = 2 # Maximum checkpoints to keep (-1 means unlimited) + +# Evaluate the generation performance during the training +evaluation_freq = 500 +SYSTEM = SYSTEM_TEMPLATE.alpaca +evaluation_inputs = [ + "{'context': {'[DOC] [TLE] Belltown Pub - Seattle BoozeBelltown Pub - Seattle Booze [PAR] Trivia [PAR] Booze [PAR] (Q) \\xa0What popular drink did a Dutch medical professor produce in his laboratory while trying to come up with a blood cleanser that could be sold in drugstores? [PAR] (Q) Gin.'}, 'question': {'What popular drink did a Dutch medical professor produce in his laboratory while trying to come up with a blood cleanser that could be sold in drugstores?'}}", 'Please tell me five scenic spots in Shanghai', "{'context': {\"( See ! I ' m working on not being so damn shy ! ) I got in and got excellent seats . 
I think I was on the 3rd row , almost directly in front of Michael Rosenbaum !\"}, 'question': {'What sort of behavior type do they tend to possess ?'}, 'answer0': {'They tend to sit near the back of lectures'}, 'answer1': {'They tend to avoid sitting near the front'}, 'answer2': {'None of the above choices .'}, 'answer3': {'They tend to be a very shy person'}}" + +] + +####################################################################### +# PART 2 Model & Tokenizer # +####################################################################### +tokenizer = dict( + type=AutoTokenizer.from_pretrained, + pretrained_model_name_or_path=pretrained_model_name_or_path, + trust_remote_code=True, + encode_special_tokens=True, + padding_side='left') + +model = dict( + type=SupervisedFinetune, + use_varlen_attn=use_varlen_attn, + llm=dict( + type=AutoModelForCausalLM.from_pretrained, + pretrained_model_name_or_path=pretrained_model_name_or_path, + trust_remote_code=True, + torch_dtype=torch.float16, + quantization_config=dict( + type=BitsAndBytesConfig, + load_in_4bit=True, + load_in_8bit=False, + llm_int8_threshold=6.0, + llm_int8_has_fp16_weight=False, + bnb_4bit_compute_dtype=torch.float16, + bnb_4bit_use_double_quant=True, + bnb_4bit_quant_type='nf4')), + lora=dict( + type=LoraConfig, + r=64, + lora_alpha=16, + lora_dropout=0.1, + bias='none', + task_type='CAUSAL_LM')) + +####################################################################### +# PART 3 Dataset & Dataloader # +####################################################################### +alpaca_en = dict( + type=process_hf_dataset, + dataset=dict(type=load_dataset,path='json',data_files=dict(train=data_path)), + tokenizer=tokenizer, + max_length=max_length, + dataset_map_fn=None, + template_map_fn=dict( + type=template_map_fn_factory, template=prompt_template), + remove_unused_columns=True, + shuffle_before_pack=True, + pack_to_max_length=pack_to_max_length, + use_varlen_attn=use_varlen_attn) + +train_dataloader = 
dict( + batch_size=batch_size, + num_workers=dataloader_num_workers, + dataset=alpaca_en, + sampler=dict(type=DefaultSampler, shuffle=True), + collate_fn=dict(type=default_collate_fn, use_varlen_attn=use_varlen_attn)) + +####################################################################### +# PART 4 Scheduler & Optimizer # +####################################################################### +# optimizer +optim_wrapper = dict( + type=AmpOptimWrapper, + optimizer=dict( + type=optim_type, lr=lr, betas=betas, weight_decay=weight_decay), + clip_grad=dict(max_norm=max_norm, error_if_nonfinite=False), + accumulative_counts=accumulative_counts, + loss_scale='dynamic', + dtype='float16') + +# learning policy +# More information: https://github.com/open-mmlab/mmengine/blob/main/docs/en/tutorials/param_scheduler.md # noqa: E501 +param_scheduler = [ + dict( + type=LinearLR, + start_factor=1e-5, + by_epoch=True, + begin=0, + end=warmup_ratio * max_epochs, + convert_to_iter_based=True), + dict( + type=CosineAnnealingLR, + eta_min=0.0, + by_epoch=True, + begin=warmup_ratio * max_epochs, + end=max_epochs, + convert_to_iter_based=True) +] + +# train, val, test setting +train_cfg = dict(type=TrainLoop, max_epochs=max_epochs) + +####################################################################### +# PART 5 Runtime # +####################################################################### +# Log the dialogue periodically during the training process, optional +custom_hooks = [ + dict(type=DatasetInfoHook, tokenizer=tokenizer), + dict( + type=EvaluateChatHook, + tokenizer=tokenizer, + every_n_iters=evaluation_freq, + evaluation_inputs=evaluation_inputs, + system=SYSTEM, + prompt_template=prompt_template) +] + +if use_varlen_attn: + custom_hooks += [dict(type=VarlenAttnArgsToMessageHubHook)] + +# configure default hooks +default_hooks = dict( + # record the time of every iteration. + timer=dict(type=IterTimerHook), + # print log every 10 iterations. 
+    logger=dict(type=LoggerHook, log_metric_by_epoch=False, interval=10),
+    # enable the parameter scheduler.
+    param_scheduler=dict(type=ParamSchedulerHook),
+    # save checkpoint per `save_steps`.
+    checkpoint=dict(
+        type=CheckpointHook,
+        by_epoch=False,
+        interval=save_steps,
+        max_keep_ckpts=save_total_limit),
+    # set sampler seed in distributed environment.
+    sampler_seed=dict(type=DistSamplerSeedHook),
+)
+
+# configure environment
+env_cfg = dict(
+    # whether to enable cudnn benchmark
+    cudnn_benchmark=False,
+    # set multi process parameters
+    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
+    # set distributed parameters
+    dist_cfg=dict(backend='nccl'),
+)
+
+# set visualizer
+visualizer = None
+
+# set log level
+log_level = 'INFO'
+
+# load from which checkpoint
+load_from = None
+
+# whether to resume training from the loaded checkpoint
+resume = False
+
+# Defaults to use random seed and disable `deterministic`
+randomness = dict(seed=None, deterministic=False)
+
+# set log processor
+log_processor = dict(by_epoch=False)
diff --git a/FineTune/ds_config.json b/FineTune/ds_config.json
new file mode 100644
index 0000000000000000000000000000000000000000..d7a15779fd354d1e71e7b946a65b46c12da31a2c
--- /dev/null
+++ b/FineTune/ds_config.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2aae03a7a080f0e996ee14394147f0fec98a67322308f2b2526ce72f38b6b882
+size 1720
diff --git a/FineTune/train.py b/FineTune/train.py
new file mode 100644
index 0000000000000000000000000000000000000000..d9d9aed1c7f46c1e4a462d1e3a96277c72b125cb
--- /dev/null
+++ b/FineTune/train.py
@@ -0,0 +1,76 @@
+from datasets import Dataset
+import pandas as pd
+from transformers import AutoTokenizer, AutoModelForCausalLM, DataCollatorForSeq2Seq, TrainingArguments, HfArgumentParser, Trainer
+import os
+import torch
+from peft import LoraConfig, TaskType, get_peft_model
+from dataclasses import dataclass, field
+import deepspeed
+deepspeed.ops.op_builder.CPUAdamBuilder().load()
+
+@dataclass
+class FinetuneArguments:
+    # Fine-tuning arguments
+    # field() specifies the default value of a dataclass attribute
+    model_path: str = field(default="./OpenBMB/MiniCPM-2B-sft-fp32")
+
+# Preprocess one dataset example into model inputs
+def process_func(example):
+    MAX_LENGTH = 512  # the tokenizer may split one Chinese character into several tokens, so leave headroom in the maximum length to keep examples intact
+    input_ids, attention_mask, labels = [], [], []
+    instruction = tokenizer(f"{example['instruction']+example['input']}", add_special_tokens=False)  # add_special_tokens=False: do not prepend special tokens
+    response = tokenizer(f"{example['output']}", add_special_tokens=False)
+    input_ids = instruction["input_ids"] + response["input_ids"] + [tokenizer.pad_token_id]
+    attention_mask = instruction["attention_mask"] + response["attention_mask"] + [1]  # the eos token should also be attended to, so append a 1
+    labels = [-100] * len(instruction["input_ids"]) + response["input_ids"] + [tokenizer.pad_token_id]
+    if len(input_ids) > MAX_LENGTH:  # truncate
+        input_ids = input_ids[:MAX_LENGTH]
+        attention_mask = attention_mask[:MAX_LENGTH]
+        labels = labels[:MAX_LENGTH]
+    return {
+        "input_ids": input_ids,
+        "attention_mask": attention_mask,
+        "labels": labels
+    }
+
+# loraConfig
+config = LoraConfig(
+    task_type=TaskType.CAUSAL_LM,
+    target_modules=["q_proj", "v_proj"],  # different models need different target modules; check the model's attention layers
+    inference_mode=False,  # training mode
+    r=8,  # LoRA rank
+    lora_alpha=32,  # LoRA alpha; see the LoRA paper for its effect
+    lora_dropout=0.1  # dropout ratio
+)
+
+
+if "__main__" == __name__:
+    # Parse command-line arguments
+    finetune_args, training_args = HfArgumentParser(
+        (FinetuneArguments, TrainingArguments)
+    ).parse_args_into_dataclasses()
+
+    # Load the JSONL dataset into a HuggingFace Dataset
+    df = pd.read_json('./Dataset/Read_Comperhension50k.jsonl', lines=True)
+    ds = Dataset.from_pandas(df)
+    # Load the tokenizer
+    tokenizer = AutoTokenizer.from_pretrained(finetune_args.model_path, use_fast=False, trust_remote_code=True)
+    tokenizer.padding_side = 'right'
+    tokenizer.pad_token_id = tokenizer.eos_token_id
+    # Tokenize the dataset
+    tokenized_id = ds.map(process_func, 
remove_columns=ds.column_names)
+
+    # Create the model and load it in half precision
+    model = AutoModelForCausalLM.from_pretrained(finetune_args.model_path, trust_remote_code=True, torch_dtype=torch.half, device_map={"": int(os.environ.get("LOCAL_RANK") or 0)})
+    model = get_peft_model(model, config)
+    # Train with the HF Trainer
+    trainer = Trainer(
+        model=model,
+        args=training_args,
+        train_dataset=tokenized_id,
+        data_collator=DataCollatorForSeq2Seq(tokenizer=tokenizer, padding=True),
+    )
+    trainer.train()  # start training
+    trainer.save_model()  # save the model
\ No newline at end of file
diff --git a/FineTune/train.sh b/FineTune/train.sh
new file mode 100644
index 0000000000000000000000000000000000000000..0234b97edc81078f8fa4b9536805f06121eae7b5
--- /dev/null
+++ b/FineTune/train.sh
@@ -0,0 +1,12 @@
+num_gpus=1
+
+deepspeed --num_gpus $num_gpus train.py \
+    --deepspeed ./ds_config.json \
+    --output_dir="./output/MiniCPM" \
+    --per_device_train_batch_size=4 \
+    --gradient_accumulation_steps=4 \
+    --logging_steps=10 \
+    --num_train_epochs=3 \
+    --save_steps=500 \
+    --learning_rate=1e-4 \
+    --save_on_each_node=True \
\ No newline at end of file
diff --git a/README.md b/README.md
index 154df8298fab5ecf322016157858e08cd1bccbe1..25f3782f95f40ebcb2a7050f2024676e0a8e5a0c 100644
--- a/README.md
+++ b/README.md
@@ -1,3 +1,9 @@
----
-license: apache-2.0
----
+This repository contains the following:
+
+1. Report  2. Training code  3. Test results  4. Instruction dataset  5. Assignment brief
+
+- The report is written with the ACL template.
+- The training code is in the [FineTune](FineTune) folder; see its README for details.
+- The instruction-dataset construction code and the final dataset are in the [Dataset](Dataset) folder; see its README for details.
+- The experiment code and results are in the [Experiment](Experiment) folder; see its README for details.
+- The assignment brief is [DISC-Assignment](DISC_2024_Assignment.pdf), which contains the guidance and concrete requirements for the experiments.
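The TriviaQA evaluation script earlier in this diff scores predictions with SQuAD-style exact-match and F1 via `metric_max_over_ground_truths`. A minimal self-contained version of those metrics, following the standard SQuAD definitions (a sketch, not the repo's exact implementation):

```python
import re
import string
from collections import Counter


def normalize_answer(s):
    """Lowercase, drop punctuation and articles, collapse whitespace (SQuAD convention)."""
    s = s.lower()
    s = ''.join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r'\b(a|an|the)\b', ' ', s)
    return ' '.join(s.split())


def exact_match_score(prediction, ground_truth):
    # Exact match after normalization
    return normalize_answer(prediction) == normalize_answer(ground_truth)


def f1_score(prediction, ground_truth):
    # Token-level F1 between normalized prediction and gold answer
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(ground_truth).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)


def metric_max_over_ground_truths(metric_fn, prediction, ground_truths):
    # A gold answer may have several aliases; score against the best-matching one
    return max(metric_fn(prediction, gt) for gt in ground_truths)
```

For the gin example in the Dataset README, a prediction of "The Gin." scores an exact match of 1 against the alias "gin", since normalization strips punctuation, articles, and case.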