Commit 28ff85c by D1V1DE (parent: e817366): Update README.md
Files changed (1): README.md (+39 −32)
 
---
license: apache-2.0
datasets:
- mediabiasgroup/BABE
language:
- en
pipeline_tag: text-classification
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

This model is designed to detect bias in text data. It analyzes text inputs to identify and classify types of bias, aiding in the development of more inclusive and fair AI systems. The model is fine-tuned from the valurank/distilroberta-bias model for research purposes. Because the training corpus consists of news titles, the model is best suited to detecting bias in formal language.

## Model Details

### Model Description

The model is capable of classifying any text as Biased or Non_biased.

- **Model type:** DistilRoBERTa transformer
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** valurank/distilroberta-bias
- **Repository:** ***To be uploaded***

### The following sections are under construction...

<!-- ### Recommendations

This section is meant to convey recommendations with respect to the bias, risk, and technical limitations.

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations. -->

## How to Get Started with the Model

Use the code below to get started with the model.

***Link to the github demo page to be included***

[More Information Needed]
## Training Details

- **Size of the dataset:** 1700 entries
- **Preprocessing steps:** Tokenization with a pre-specified tokenizer, plus padding and truncation, to convert text into numerical features; class labels are encoded numerically.
- **Data splitting strategy:** 80% training / 20% validation, with a fixed random state for reproducibility.
- **Optimization algorithm:** AdamW
- **Loss function:** CrossEntropyLoss, weighted by class frequencies to address class imbalance.
- **Learning rate:** 1e-5
- **Number of epochs:** 3
- **Batch size:** 16
- **Regularization:** Gradient clipping with a max norm of 1.0.
- **Learning-rate schedule:** Step scheduler with step size 3 and gamma 0.1 for learning-rate decay.
- **Training time:** Around 150 iterations/s under CUDA PyTorch; less than 10 minutes in total.
- **Monitoring:** Training and validation losses and validation accuracy.
- **Validation dataset:** Generated from the same DataFrame `df` via the train-test split.
- **Techniques used for fine-tuning:** Learning-rate scheduler for adjusting the learning rate.
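As a sketch only, the hyperparameters above wired together in PyTorch; the linear layer and random tensors stand in for the DistilRoBERTa model and tokenized news titles, which are not shown in this card:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset, random_split

torch.manual_seed(0)  # fixed random state for reproducibility

# Stand-ins for tokenized text features and Biased/Non_biased labels.
X = torch.randn(100, 8)
y = torch.randint(0, 2, (100,))

# 80% training / 20% validation split.
n_train = int(0.8 * len(X))
train_set, val_set = random_split(TensorDataset(X, y), [n_train, len(X) - n_train])
train_loader = DataLoader(train_set, batch_size=16, shuffle=True)

# Class weights from training-label frequencies, to counter class imbalance.
train_labels = torch.stack([label for _, label in train_set])
counts = torch.bincount(train_labels, minlength=2).float()
weights = counts.sum() / (2.0 * counts)

model = nn.Linear(8, 2)  # placeholder for the DistilRoBERTa classifier
criterion = nn.CrossEntropyLoss(weight=weights)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.1)

for epoch in range(3):
    for xb, yb in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        # Gradient clipping with a max norm of 1.0.
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
        optimizer.step()
    scheduler.step()

# Monitor validation accuracy on the held-out 20% split.
with torch.no_grad():
    xv = torch.stack([xi for xi, _ in val_set])
    yv = torch.stack([yi for _, yi in val_set])
    val_acc = (model(xv).argmax(dim=1) == yv).float().mean().item()
```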

## Challenges and Solutions

**Challenges faced during training:** Class imbalance in the dataset.

**Solutions and techniques applied:** Class weights computed from the training data are used in a weighted CrossEntropyLoss, and gradient clipping is applied.
 

#### Metrics

#### Summary

### Model Update Log