travis0103 committed on
Commit 338e3e7
1 Parent(s): 0d25962

Update README.md

Files changed (1): README.md (+40, -12)
README.md CHANGED
@@ -9,6 +9,8 @@ base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: mistral_7b_paper_review_lora
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -16,21 +18,51 @@ should probably proofread and complete it, then remove this comment. -->

# mistral_7b_paper_review_lora

- This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure

### Training hyperparameters

@@ -50,10 +82,6 @@ The following hyperparameters were used during training:
- training_steps: 1
- mixed_precision_training: Native AMP

- ### Training results
-
-
-
### Framework versions

- PEFT 0.10.0
 
@@ -9,6 +9,8 @@ base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: mistral_7b_paper_review_lora
  results: []
+ language:
+ - en
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -16,21 +18,51 @@ should probably proofread and complete it, then remove this comment. -->

# mistral_7b_paper_review_lora

+ This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the [abstract_paper_review](https://huggingface.co/datasets/travis0103/abstract_paper_review) dataset.

+ ## Model Description

+ This model is fine-tuned to review machine learning papers based on their abstracts. It serves as an aid for researchers preparing to submit papers and for reviewers tasked with evaluating submissions. For a detailed description of the model's functionality and features, please visit our project's GitHub page, [MLPapersReviewGPT](https://github.com/yinuotxie/MLPapersReviewGPT).

+ ## Example Usage

+ ```python
+ import torch
+ from peft import AutoPeftModelForCausalLM
+ from transformers import AutoTokenizer
+
+ SYSTEM_PROMPT = """
+ You are a professional machine learning conference reviewer who reviews a given paper and considers 4 criteria: [Significance and novelty], [Potential reasons for acceptance], [Potential reasons for rejection], and [Suggestions for improvement]. Please ensure that for each criterion, you summarize and provide a random number of detailed supporting points from the content of the paper. And for each supporting point within each of the criteria, use the format: '<title of supporting point>' followed by a detailed explanation. The criteria you need to focus on are:
+
+ 1. [Significance and novelty]: Assess the importance of the paper in its research field and the innovation of its methods or findings.
+ 2. [Potential reasons for acceptance]: Summarize reasons that may support the acceptance of the paper, based on its quality, research results, experimental design, etc.
+ 3. [Potential reasons for rejection]: Identify and explain flaws or shortcomings that could lead to the paper's rejection.
+ 4. [Suggestions for improvement]: Provide specific suggestions to help the authors improve the paper and increase its chances of acceptance.
+
+ After reading the content of the paper provided below, your response should only include your reviews, which means always start with [Significance and novelty], don't repeat the given paper, and output nothing other than your reviews in the required format; just extract and summarize information related to these criteria from the provided paper. The paper is given as follows:
+ """
+
+ # Paper to review: fill in the title and abstract.
+ abstract_input = """
+ [TITLE]
+ <Title of the paper you want to review>
+
+ [ABSTRACT]
+ <Abstract of the paper you want to review>
+ """
+
+ # Load the LoRA adapter (together with its base model) and the tokenizer.
+ model_id = "travis0103/mistral_7b_paper_review_lora"
+ model = AutoPeftModelForCausalLM.from_pretrained(model_id)
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+
+ # Build the chat messages, move the model to the available device, and generate the review.
+ messages = [
+     {"role": "user", "content": SYSTEM_PROMPT},
+     {"role": "assistant", "content": abstract_input}
+ ]
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+ model = model.to(device)
+ encoded_input = tokenizer.apply_chat_template(messages, return_tensors="pt").to(device)
+ generated_ids = model.generate(encoded_input, max_new_tokens=1024, do_sample=True, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id)
+ decoded_output = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
+ ```

+ ## Training and evaluation data
+
+ Abstract Paper Review: [travis0103/abstract_paper_review](https://huggingface.co/datasets/travis0103/abstract_paper_review)

### Training hyperparameters

@@ -50,10 +82,6 @@ The following hyperparameters were used during training:
- training_steps: 1
- mixed_precision_training: Native AMP

### Framework versions

- PEFT 0.10.0
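
The Example Usage snippet in the updated card stops at `decoded_output`, which still contains the prompt, and the Training and evaluation data section only links the dataset. The sketch below is a minimal, non-authoritative illustration of how the two could be combined: it pulls one abstract from the travis0103/abstract_paper_review dataset, builds the prompt in the same message format as above, and prints only the newly generated review. The `train` split and the `title`/`abstract` column names are assumptions (check the dataset card for the actual schema), and `SYSTEM_PROMPT` is a placeholder for the reviewer prompt defined in the snippet above.

```python
import torch
from datasets import load_dataset
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Placeholder: paste the full reviewer prompt from the Example Usage section here.
SYSTEM_PROMPT = "..."

model_id = "travis0103/mistral_7b_paper_review_lora"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the LoRA adapter (with its base model) and the tokenizer, then move the model to the device.
model = AutoPeftModelForCausalLM.from_pretrained(model_id).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Assumption: a "train" split with "title" and "abstract" columns; adjust to the real schema.
dataset = load_dataset("travis0103/abstract_paper_review", split="train")
example = dataset[0]
abstract_input = f"[TITLE]\n{example['title']}\n\n[ABSTRACT]\n{example['abstract']}"

# Same message layout as in the Example Usage snippet.
messages = [
    {"role": "user", "content": SYSTEM_PROMPT},
    {"role": "assistant", "content": abstract_input},
]
encoded_input = tokenizer.apply_chat_template(messages, return_tensors="pt").to(device)
generated_ids = model.generate(
    encoded_input,
    max_new_tokens=1024,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,
    eos_token_id=tokenizer.eos_token_id,
)

# The decoded sequence contains the prompt as well; keep only the newly generated tokens.
review = tokenizer.decode(generated_ids[0, encoded_input.shape[-1]:], skip_special_tokens=True)
print(review)
```

Slicing the generated sequence at `encoded_input.shape[-1]` strips the prompt tokens, so only the review in the four-criteria format is printed.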