---
license: apache-2.0
datasets:
- race
language:
- en
tags:
- text-classification
- multiple-choice
---
# Model Card for DistilBERT Fine-Tuned on RACE

<!-- Provide a quick summary of what the model is/does. -->

This model was fine-tuned on RACE for multiple-choice question answering (cast as text classification). The initial model was [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased).

The model was trained with the code from [zphang/lrqa](https://github.com/zphang/lrqa); please refer to that repository and cite its authors.

# Model Details

- **Initial model:** [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased)
- **Learning rate:** 1e-5
- **Epochs:** 3
- **Warmup ratio:** 0.1 (10%)
- **Batch size:** 16
- **Max sequence length:** 512
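
As an illustration only, a roughly equivalent `transformers` `TrainingArguments` configuration for the hyperparameters above would look like the sketch below. Note this is not the actual lrqa entry point, and `output_dir` is a hypothetical path:

```python
# Illustration only: the run used zphang/lrqa, not this script.
# These TrainingArguments mirror the hyperparameters listed above;
# the max sequence length (512) is applied at tokenization time.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-race-mc",  # hypothetical output path
    learning_rate=1e-5,
    num_train_epochs=3,
    warmup_ratio=0.1,
    per_device_train_batch_size=16,
)
```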

## Model Description

<!-- Provide a longer summary of what this model is. -->
- **Model type:** DistilBERT (fine-tuned for multiple choice)
- **Language(s) (NLP):** English
- **License:** Apache-2.0
- **Finetuned from model:** [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased)

## Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/zphang/lrqa
- **Dataset:** https://huggingface.co/datasets/race

# Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

# How to Get Started with the Model

Use the code below to get started with the model.
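
The snippet below is a minimal sketch, assuming the checkpoint loads as a standard `transformers` multiple-choice head; `"path/to/this-checkpoint"` is a placeholder for this model's actual repo id or local path:

```python
# Minimal sketch, assuming the checkpoint is compatible with
# AutoModelForMultipleChoice; "path/to/this-checkpoint" is a
# placeholder for this model's actual repo id or local path.
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "path/to/this-checkpoint"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)
model.eval()

article = "The library opens at nine and closes at five."
question = "When does the library close?"
options = ["At nine", "At noon", "At five", "At midnight"]

# Pair the same (article + question) with each candidate answer.
contexts = [f"{article} {question}"] * len(options)
enc = tokenizer(contexts, options, truncation=True, max_length=512,
                padding=True, return_tensors="pt")
# The multiple-choice head expects (batch, num_choices, seq_len).
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)
print("Predicted answer:", options[logits.argmax(dim=-1).item()])
```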

# Training Details

## Training Data

<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

The model was fine-tuned on [RACE](https://huggingface.co/datasets/race) (ReAding Comprehension from Examinations), a large-scale multiple-choice reading-comprehension dataset of roughly 100,000 questions drawn from English exams for Chinese middle- and high-school students. Each example pairs an article and a question with four candidate answers.
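
A quick way to inspect the data (the `race` dataset on the Hub exposes `middle`, `high`, and `all` configurations):

```python
# Loads RACE from the Hugging Face Hub; the "all" configuration
# combines the middle- and high-school subsets.
from datasets import load_dataset

race = load_dataset("race", "all")
example = race["train"][0]
# Each example has: example_id, article, question, options (4 strings),
# and answer ("A"-"D").
print(example["question"], example["options"], example["answer"])
```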


# Model Examination

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

# Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

- **Hardware Type:** A100 - 40GB
- **Hours used:** 4
- **Cloud Provider:** Private
- **Compute Region:** Portugal
- **Carbon Emitted:** 0.18 kg CO2eq

Experiments were conducted on private infrastructure with a carbon efficiency of 0.178 kgCO2eq/kWh. A cumulative 4 hours of computation was performed on hardware of type A100 PCIe 40/80GB (TDP of 250 W).
Total emissions are estimated at 0.18 kgCO2eq (4 h × 250 W = 1 kWh; 1 kWh × 0.178 kgCO2eq/kWh ≈ 0.18 kgCO2eq), of which 0 percent was directly offset.
Estimates were produced with the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).