Cbelem committed
Commit d705d07 · verified · 1 Parent(s): 190f585

Update README.md

Files changed (1): README.md (+68 −106)
README.md CHANGED
@@ -1,12 +1,19 @@
  ---
  library_name: transformers
- tags: []
  ---

  # Model Card for Model ID

- <!-- Provide a quick summary of what the model is/does. -->


  ## Model Details
@@ -17,92 +24,62 @@ tags: []

  This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
-
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

  ## Uses

- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]

- ## Bias, Risks, and Limitations

- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations

- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

- ## How to Get Started with the Model

- Use the code below to get started with the model.

- [More Information Needed]

  ## Training Details

  ### Training Data

- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]

  ### Training Procedure

- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

  #### Preprocessing [optional]

- [More Information Needed]


  #### Training Hyperparameters

- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]

  ## Evaluation

- <!-- This section describes the evaluation protocols and provides the results. -->

  ### Testing Data, Factors & Metrics
@@ -120,41 +97,42 @@ Use the code below to get started with the model.

  #### Metrics

- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]

  ### Results

- [More Information Needed]
-
- #### Summary
-

- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]

  ## Technical Specifications [optional]

  ### Model Architecture and Objective

- [More Information Needed]

  ### Compute Infrastructure

@@ -166,34 +144,18 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]

  #### Software

- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**

- [More Information Needed]

- ## Glossary [optional]

- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

- [More Information Needed]

- ## More Information [optional]

- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]

  ## Model Card Contact

- [More Information Needed]

  ---
  library_name: transformers
+ license: apache-2.0
+ language:
+ - en
+ base_model:
+ - allenai/scibert_scivocab_uncased
+ pipeline_tag: text-classification
  ---

  # Model Card for Model ID

+ This is a text classification model.
+ It was fine-tuned to predict certainty ratings of scientific findings using a classification loss and a ranking loss (an illustrative sketch of this objective follows below).
+ We fine-tuned allenai/scibert_scivocab_uncased on the dataset made available by [Wührl et al. (2024): Understanding Fine-Grained Distortions in Reports of Scientific Findings](https://aclanthology.org/2024.findings-acl.369/).

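+ The exact implementation of this objective is not documented in this card. As one illustration only, the sketch below combines a cross-entropy classification term with a pairwise margin ranking term over a scalar certainty score; the function name, `alpha` weighting, and margin value are hypothetical, not the recipe actually used.
+ 
+ ```python
+ import torch
+ import torch.nn.functional as F
+ 
+ def combined_loss(logits, labels, alpha=0.5, margin=0.1):
+     # Classification term: standard cross-entropy over the certainty classes.
+     ce = F.cross_entropy(logits, labels)
+ 
+     # Ranking term: treat the expected class index as a scalar certainty score
+     # and require batch items to be ordered consistently with their gold labels.
+     classes = torch.arange(logits.size(-1), device=logits.device)
+     scores = (logits.softmax(dim=-1) * classes).sum(dim=-1)
+     i, j = torch.triu_indices(len(labels), len(labels), offset=1)
+     sign = torch.sign(labels[i] - labels[j]).float()
+     keep = sign != 0  # only rank pairs whose gold labels differ
+     if keep.any():
+         rank = F.margin_ranking_loss(scores[i][keep], scores[j][keep], sign[keep], margin=margin)
+     else:
+         rank = torch.zeros((), device=logits.device)
+     return alpha * ce + (1 - alpha) * rank
+ ```
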
  ## Model Details
 

  This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

+ - **Developed by:** Researchers at UCI with the goal of obtaining a reliable certainty scoring function.
+ - **Model type:** BERT
+ - **Language(s) (NLP):** English
+ - **Finetuned from model:** allenai/scibert_scivocab_uncased

  ## Uses

+ The model is meant to be used for estimating certainty scores of scientific findings. Because it is trained on sentence-level academic findings, we expect its reliability to be restricted to this domain.
+ The original dataset had only moderate inter-annotator agreement (a Spearman correlation coefficient of 0.44), which suggests that predicting certainty scores is difficult even for humans.
+ We recommend that users validate that the model behaves as intended on a small portion of the data of interest before scaling up evaluations; a sketch of such a spot check follows below.
+ We also note that the per-class F1 scores ranged from 0.48 to 0.70, which again reflects the difficulty of learning clear class boundaries.

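+ As a minimal sketch of such a spot check (the texts and labels below are hypothetical placeholders for your own annotated sample):
+ 
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification
+ 
+ tokenizer = AutoTokenizer.from_pretrained("Cbelem/scibert-certainty-classif")
+ model = AutoModelForSequenceClassification.from_pretrained("Cbelem/scibert-certainty-classif")
+ model.eval()
+ 
+ # A few findings from your own domain, with your own certainty annotations.
+ sample_texts = ["Our results suggest a possible link between sleep quality and recall."]
+ sample_labels = torch.tensor([1])
+ 
+ inputs = tokenizer(sample_texts, padding=True, truncation=True, return_tensors="pt")
+ with torch.no_grad():
+     preds = model(**inputs).logits.argmax(dim=-1)
+ print("agreement with your annotations:", (preds == sample_labels).float().mean().item())
+ ```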
 
 

+ ## How to Get Started with the Model

+ Use the code below to get started with the model.

+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification
+ 
+ tokenizer = AutoTokenizer.from_pretrained("Cbelem/scibert-certainty-classif")
+ model = AutoModelForSequenceClassification.from_pretrained("Cbelem/scibert-certainty-classif")
+ model.eval()
+ 
+ texts = [
+     "Compared with controls, taxi drivers had greater grey matter volume in the posterior hippocampi (Maguire et al.",
+     "The study described in this paper focuses on gaze, but similar approaches can be used to understand the effects of other interactions that contribute to patient outcomes such as emotion.",
+     '"The initial findings could have been explained by a correlation, that people with big hippocampi become taxi drivers," he says.',
+     "We are less sure about a possible explanation for lower acceptance for mobile phone behaviors among professionals in the West.",
+ ]
+ 
+ # Pad and truncate so sentences of different lengths can be batched together.
+ inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
+ with torch.no_grad():
+     logits = model(**inputs).logits
+ # Predicted certainty class per sentence.
+ predictions = logits.argmax(dim=-1)
+ ```
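+ 
+ Note that the mapping from predicted class indices to certainty levels is stored in `model.config.id2label` (the evaluation below reports three classes); inspect it before interpreting the predictions.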

  ## Training Details

  ### Training Data

+ TBD

  ### Training Procedure

+ TBD

  #### Preprocessing [optional]

+ TBD


  #### Training Hyperparameters

+ - **Training regime:** fp32

  ## Evaluation

  ### Testing Data, Factors & Metrics

  #### Metrics

+ TBD

  ### Results

+ ```
+ "train/learning_rate": 6.869747470432602e-7,
+ "train/loss": 0.562,
+ "train/global_step": 3000,
+ "eval/qwk": 0.5507,
+ "eval/loss": 0.9391,
+ "eval/accuracy": 0.6078,
+ "eval/balanced_accuracy": 0.3980,
+ "eval/f1_macro": 0.6006,
+ "eval/f1_class_0": 0.6211,
+ "eval/f1_class_1": 0.4932,
+ "eval/f1_class_2": 0.6875,
+ "eval/precision_macro": 0.6033,
+ "eval/precision_class_0": 0.6410,
+ "eval/precision_class_1": 0.5,
+ "eval/precision_class_2": 0.6689,
+ "eval/recall_macro": 0.5987,
+ "eval/recall_class_0": 0.6024,
+ "eval/recall_class_1": 0.4865,
+ "eval/recall_class_2": 0.7071,
+ "train_steps_per_second": 6.532,
+ ```
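+ 
+ For reference, these headline metrics can be computed from predictions with scikit-learn as sketched below, assuming `eval/qwk` denotes quadratically weighted Cohen's kappa (`y_true` and `y_pred` are placeholders for the evaluation labels and model predictions):
+ 
+ ```python
+ from sklearn.metrics import balanced_accuracy_score, cohen_kappa_score, f1_score
+ 
+ y_true = [0, 1, 2, 2]  # placeholder gold certainty labels
+ y_pred = [0, 1, 2, 1]  # placeholder model predictions
+ 
+ qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")  # eval/qwk
+ f1_macro = f1_score(y_true, y_pred, average="macro")          # eval/f1_macro
+ bal_acc = balanced_accuracy_score(y_true, y_pred)             # eval/balanced_accuracy
+ per_class_f1 = f1_score(y_true, y_pred, average=None)         # eval/f1_class_*
+ ```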

+ #### Summary

+ On the evaluation split, the model reaches a quadratic weighted kappa (QWK) of 0.55, an accuracy of 0.61, and a macro F1 of 0.60, with per-class F1 scores between 0.49 and 0.69.

  ## Technical Specifications [optional]

  ### Model Architecture and Objective

+ TBD

  ### Compute Infrastructure

  #### Software

+ Transformers, PyTorch, and Weights & Biases (wandb) for running the hyperparameter sweep.
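+ 
+ The sweep configuration itself is not documented; the snippet below only illustrates the kind of wandb sweep this refers to (the search method, search space, metric choice, and `train_fn` body are all hypothetical):
+ 
+ ```python
+ import wandb
+ 
+ def train_fn():
+     # Hypothetical entry point: read hyperparameters from wandb.config,
+     # fine-tune the model, and log metrics such as eval/qwk.
+     wandb.init()
+ 
+ sweep_config = {
+     "method": "random",
+     "metric": {"name": "eval/qwk", "goal": "maximize"},
+     "parameters": {
+         "learning_rate": {"distribution": "log_uniform_values", "min": 1e-6, "max": 5e-5},
+         "num_train_epochs": {"values": [3, 5, 10]},
+     },
+ }
+ 
+ sweep_id = wandb.sweep(sweep_config, project="scibert-certainty-classif")
+ wandb.agent(sweep_id, function=train_fn)
+ ```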
 

+ ## Citation

+ TBD


+ ## Model Card Authors

+ Catarina Belem (Cbelem)

  ## Model Card Contact

+ For more information, contact cbelem@uci.edu.