qwenzoo committed on
Commit c453ae0 · 1 Parent(s): e4faf03

Update README.md

Files changed (1):
  1. README.md +46 -28
README.md CHANGED
@@ -8,10 +8,43 @@ tags:
  - fake papers
  - science
  - scientific text
  ---
- # Model Card for Model ID

- A fine-tuned Galactica model to detect machine-generated scientific papers based on their abstract, introduction, and conclusion.

  # this model card is WIP, please check the repository, the dataset card and the paper for more details.
@@ -27,13 +60,12 @@ A fine-tuned Galactica model to detect machine-generated scientific papers based
  - **License:** [More Information Needed]
  - **Finetuned from model [optional]:** Galactica

- ### Model Sources [optional]

  <!-- Provide the basic links for the model. -->

  - **Repository:** https://github.com/qwenzo/-IDMGSP
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

  ## Uses
@@ -41,15 +73,12 @@ A fine-tuned Galactica model to detect machine-generated scientific papers based

  ### Direct Use

- ```{python}
  from transformers import AutoTokenizer, OPTForSequenceClassification, pipeline

  model = OPTForSequenceClassification.from_pretrained("tum-nlp/IDMGSP-Galactica-TRAIN-CG")
-
  tokenizer = AutoTokenizer.from_pretrained("tum-nlp/IDMGSP-Galactica-TRAIN-CG")
-
  reader = pipeline("text-classification", model=model, tokenizer = tokenizer)
-
  reader(
  '''
  Abstract:
@@ -63,8 +92,6 @@ Conclusion:
  )
  ```

- [More Information Needed]
-
  ### Downstream Use [optional]

  <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
@@ -83,25 +110,18 @@ Conclusion:

  [More Information Needed]

- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
-
  ## Training Details

  ### Training Data

- <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

- [More Information Needed]

  ### Training Procedure
@@ -114,7 +134,7 @@ Use the code below to get started with the model.

  #### Training Hyperparameters

- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

  #### Speeds, Sizes, Times [optional]
@@ -164,8 +184,6 @@ Use the code below to get started with the model.

  <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
  - **Hardware Type:** [More Information Needed]
  - **Hours used:** [More Information Needed]
  - **Cloud Provider:** [More Information Needed]
 
  - fake papers
  - science
  - scientific text
+ widget:
+ - text: |
+ Abstract:
+
+ The Hartree-Fock (HF) method is a widely used method for approximating the electronic structure of many-electron systems. In this work, we study the properties of HF solutions of the three-dimensional electron gas (3DEG), a model system consisting of a uniform, non-interacting electron gas in three dimensions. We find that the HF solutions accurately reproduce the known analytic results for the ground state energy and the static structure factor of the 3DEG. However, we also find that the HF solutions fail to accurately describe the excitation spectrum of the 3DEG, particularly at high energies.
+
+ Introduction:
+
+ The HF method is a self-consistent method for approximating the electronic structure of many-electron systems. It is based on the assumption that the electrons in a system can be described as non-interacting quasiparticles, each with its own effective potential. The HF method is commonly used to study the ground state properties of systems, such as the energy and the density distribution, but it can also be used to study excited states.
+
+ The 3DEG is a model system that has been widely studied as a test case for electronic structure methods. It consists of a uniform, non-interacting electron gas in three dimensions, with a finite density and a periodic boundary condition. The 3DEG has a number of known analytic results for its ground state properties, such as the ground state energy and the static structure factor, which can be used to test the accuracy of approximate methods.
+
+ Conclusion:
+
+ In this work, we have studied the properties of HF solutions of the 3DEG. We find that the HF solutions accurately reproduce the known analytic results for the ground state energy and the static structure factor of the 3DEG. However, we also find that the HF solutions fail to accurately describe the excitation spectrum of the 3DEG, particularly at high energies. This suggests that the HF method may not be suitable for accurately describing the excited states of the 3DEG. Further work is needed to understand the limitations of the HF method and to develop improved methods for studying the electronic structure of many-electron systems.
+
+ example_title: "Example ChatGPT fake"
+ - text: |
+ Abstract:
+
+ Recent calculations have pointed to a 2.8 $\sigma$ tension between data on $\epsilon^{\prime}_K / \epsilon_K$ and the standard-model (SM) prediction. Several new physics (NP) models can explain this discrepancy, and such NP models are likely to predict deviations of $\mathcal{B}(K\to \pi \nu \overline{\nu})$ from the SM predictions, which can be probed precisely in the near future by NA62 and KOTO experiments. We present correlations between $\epsilon^{\prime}_K / \epsilon_K$ and $\mathcal{B}(K\to \pi \nu \overline{\nu})$ in two types of NP scenarios: a box dominated scenario and a $Z$-penguin dominated one. It is shown that different correlations are predicted and the future precision measurements of $K \to \pi \nu \overline{\nu}$ can distinguish both scenarios.
+
+ Introduction:
+
+ CP violating flavor-changing neutral current decays of K mesons are extremely sensitive to new physics (NP) and can probe virtual effects of particles with masses far above the reach of the Large Hadron Collider. Prime examples of such observables are ϵ′ K measuring direct CP violation in K → ππ decays and B(KL → π0νν). Until recently, large theoretical uncertainties precluded reliable predictions for ϵ′ K. Although standard-model (SM) predictions of ϵ′ K using chiral perturbation theory are consistent with the experimental value, their theoretical uncertainties are large. In contrast, calculation by the dual QCD approach 1 finds the SM value much below the experimental one. A major breakthrough has been the recent lattice-QCD calculation of the hadronic matrix elements by RBC-UKQCD collaboration 2, which gives support to the latter result. The SM value at the next-to-leading order divided by the indirect CP violating measure ϵK is 3 which is consistent with (ϵ′ K/ϵK)SM = (1.9±4.5)×10−4 given by Buras et al 4.a Both results are based on the lattice numbers, and further use CP-conserving K → ππ data to constrain some of the hadronic matrix elements involved. Compared to the world average of the experimental results 6, Re (ϵ′ K/ϵK)exp = (16.6 ± 2.3) × 10−4, (2) the SM prediction lies below the experimental value by 2.8 σ. Several NP models including supersymmetry (SUSY) can explain this discrepancy. It is known that such NP models are likely to predict deviations of the kaon rare decay branching ratios from the SM predictions, especially B(K → πνν) which can be probed precisely in the near future by NA62 and KOTO experiments.b In this contribution, we present correlations between ϵ′ K/ϵK and B(K → πνν) in two types of NP scenarios: a box dominated scenario and a Z-penguin dominated one. Presented at the 52th Rencontres de Moriond electroweak interactions and unified theories, La Thuile, Italy, 18-25 March, 2017. aOther estimations of the SM value are listed in Kitahara et al 5. b The correlations between ϵ′ K/ϵK, B(K → πνν) and ϵK through the CKM components in the SM are discussed in Ref. 7.
+
+ Conclusion:
+
+ We have presented the correlations between ϵ′ K/ϵK, B(KL → π0νν), and B(K+ → π+νν) in the box dominated scenario and the Z-penguin dominated one. It is shown that the constraint from ϵK produces different correlations between two NP scenarios. In the future, measurements of B(K → πνν) will be significantly improved. The NA62 experiment at CERN measuring B(K+ → π+νν) is aiming to reach a precision of 10 % compared to the SM value already in 2018. In order to achieve 5% accuracy more time is needed. Concerning KL → π0νν, the KOTO experiment at J-PARC aims in a first step at measuring B(KL → π0νν) around the SM sensitivity. Furthermore, the KOTO-step2 experiment will aim at 100 events for the SM branching ratio, implying a precision of 10 % of this measurement. Therefore, we conclude that when the ϵ′ K/ϵK discrepancy is explained by the NP contribution, NA62 experiment could probe whether a modified Z-coupling scenario is realized or not, and KOTO-step2 experiment can distinguish the box dominated scenario and the simplified modified Z-coupling scenario.
+
+ example_title: "Example real"
  ---
+ # Model Card for IDMGSP-Galactica-TRAIN-CG
+
+ A fine-tuned Galactica model to detect machine-generated scientific papers based on their abstract, introduction, and conclusion.

+ This model is trained on the `train-cg` dataset found in https://huggingface.co/datasets/tum-nlp/IDMGSP.

  # this model card is WIP, please check the repository, the dataset card and the paper for more details.
  - **License:** [More Information Needed]
  - **Finetuned from model [optional]:** Galactica

+ ### Model Sources

  <!-- Provide the basic links for the model. -->

  - **Repository:** https://github.com/qwenzo/-IDMGSP
+ - **Paper:** [More Information Needed]

  ## Uses
 
  ### Direct Use

+ ```python
  from transformers import AutoTokenizer, OPTForSequenceClassification, pipeline

  model = OPTForSequenceClassification.from_pretrained("tum-nlp/IDMGSP-Galactica-TRAIN-CG")
  tokenizer = AutoTokenizer.from_pretrained("tum-nlp/IDMGSP-Galactica-TRAIN-CG")
  reader = pipeline("text-classification", model=model, tokenizer = tokenizer)
  reader(
  '''
  Abstract:
  )
  ```
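The pipeline above takes the three paper sections concatenated into a single string with `Abstract:`, `Introduction:`, and `Conclusion:` headers, as in the truncated call shown. A minimal helper to assemble that input — hypothetical, not part of the repository; the exact header format is an assumption based on the example above:

```python
def format_paper(abstract: str, introduction: str, conclusion: str) -> str:
    # Join the three sections with the section headers the example input uses.
    # This header layout is inferred from the Direct Use snippet, not a documented API.
    return (
        f"Abstract:\n\n{abstract}\n\n"
        f"Introduction:\n\n{introduction}\n\n"
        f"Conclusion:\n\n{conclusion}"
    )

text = format_paper("Sample abstract.", "Sample introduction.", "Sample conclusion.")
print(text.splitlines()[0])  # Abstract:
```

The resulting string can then be passed to `reader(text)` in place of the inline triple-quoted literal.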
 
 
  ### Downstream Use [optional]

  <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
 
  [More Information Needed]

  ## Training Details

  ### Training Data

+ The training dataset comprises scientific papers generated by the Galactica, GPT-2, and SCIgen models, as well as real papers extracted from the arXiv database.

+ The table below lists the number of samples from each source used to construct the training dataset.
+ The dataset can be found at https://huggingface.co/datasets/tum-nlp/IDMGSP.
+
+ | Dataset | arXiv (real) | ChatGPT (fake) | GPT-2 (fake) | SCIgen (fake) | Galactica (fake) | GPT-3 (fake) |
+ |------------------------------|--------------|----------------|--------------|----------------|------------------|--------------|
+ | TRAIN without ChatGPT (TRAIN-CG) | 8k | - | 2k | 2k | 2k | - |
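Under the counts in the table, the TRAIN-CG split is mildly imbalanced toward real papers (8k real vs. 6k fake). A quick sanity check of the class balance — the dictionary keys are just labels for this sketch, copied from the table's column headers:

```python
# Per-source sample counts for TRAIN-CG, taken from the table above.
counts = {
    "arXiv (real)": 8000,
    "GPT-2 (fake)": 2000,
    "SCIgen (fake)": 2000,
    "Galactica (fake)": 2000,
}

real = counts["arXiv (real)"]
fake = sum(v for k, v in counts.items() if "(fake)" in k)

print(real, fake)  # 8000 6000
print(round(real / (real + fake), 3))  # 0.571
```

So roughly 57% of the training examples are real papers, which is worth keeping in mind when interpreting the classifier's scores.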
  ### Training Procedure
 
 
  #### Training Hyperparameters

+ [More Information Needed]

  #### Speeds, Sizes, Times [optional]
 
 
  <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

  - **Hardware Type:** [More Information Needed]
  - **Hours used:** [More Information Needed]
  - **Cloud Provider:** [More Information Needed]