amir7d0 committed
Commit 6fc51b4
1 Parent(s): f5369bb

Update README.md

Files changed (1): README.md (+105 -22)
README.md CHANGED
@@ -1,48 +1,131 @@
  ---
  license: apache-2.0
  tags:
  - generated_from_keras_callback
- model-index:
- - name: distilbert-base-uncased-finetuned-amazon-reviews
-   results: []
  ---

- <!-- This model card has been generated automatically according to the information Keras had access to. You should
- probably proofread and complete it, then remove this comment. -->

- # distilbert-base-uncased-finetuned-amazon-reviews

- This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
- It achieves the following results on the evaluation set:

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure

- ### Training hyperparameters

- The following hyperparameters were used during training:
- - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.0001, 'decay_steps': 18750, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- - training_precision: float32

- ### Training results

- ### Framework versions

  - Transformers 4.26.1
  - TensorFlow 2.11.0
  - Datasets 2.1.0
- - Tokenizers 0.13.2
  ---
+ language: en
  license: apache-2.0
+ datasets:
+ - amazon_reviews_multi
+ model-index:
+ - name: distilbert-base-uncased-finetuned-amazon-reviews
+   results:
+   - task:
+       type: text-classification
+       name: Text Classification
+     dataset:
+       name: amazon_reviews_multi
+       type: amazon_reviews_multi
+       split: test
+     metrics:
+     - type: accuracy
+       value: 0.85
+       name: Accuracy
+     - type: loss
+       value: 0.1
+       name: Loss
  tags:
  - generated_from_keras_callback
+ pipeline_tag: text-classification
  ---

+ # Model Card for distilbert-base-uncased-finetuned-amazon-reviews
+
+ # Table of Contents
+
+ - [Model Card for distilbert-base-uncased-finetuned-amazon-reviews](#model-card-for-distilbert-base-uncased-finetuned-amazon-reviews)
+ - [Table of Contents](#table-of-contents)
+ - [Model Details](#model-details)
+ - [Uses](#uses)
+ - [Training Details](#training-details)
+ - [Evaluation](#evaluation)
+ - [Framework versions](#framework-versions)
+
+ # Model Details
+
+ ## Model Description
+
+ <!-- Provide a longer summary of what this model is/does. -->
+ This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [amazon_reviews_multi](https://huggingface.co/datasets/amazon_reviews_multi) dataset.
+ It reaches an accuracy of xxx on the dev set.
+
+ - **Model type:** Language model
+ - **Language(s) (NLP):** en
+ - **License:** apache-2.0
+ - **Parent Model:** For more details about DistilBERT, check out [this model card](https://huggingface.co/distilbert-base-uncased).
+ - **Resources for more information:**
+   - [Model Documentation](https://huggingface.co/docs/transformers/main/en/model_doc/distilbert#transformers.DistilBertForSequenceClassification)
+
+ # Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ## Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+ ```python
+ from transformers import DistilBertTokenizer, TFDistilBertForSequenceClassification
+ import tensorflow as tf
+
+ checkpoint = "amir7d0/distilbert-base-uncased-finetuned-amazon-reviews"
+ tokenizer = DistilBertTokenizer.from_pretrained(checkpoint)
+ # Load the fine-tuned classification head rather than the bare encoder
+ model = TFDistilBertForSequenceClassification.from_pretrained(checkpoint)
+
+ text = "xxxxxxxxxxxxxxxxxxxxxxxxxx"  # replace with the review text to classify
+ encoded_input = tokenizer(text, return_tensors="tf")
+ output = model(encoded_input)
+ probs = tf.nn.softmax(output.logits, axis=-1)
+ ```
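+
+ For quick experiments, the same checkpoint can also be used through the `pipeline` API. This is a minimal sketch, assuming the checkpoint's config carries its label names:
+
+ ```python
+ from transformers import pipeline
+
+ # framework="tf" selects the TensorFlow weights this model was trained with
+ classifier = pipeline(
+     "text-classification",
+     model="amir7d0/distilbert-base-uncased-finetuned-amazon-reviews",
+     framework="tf",
+ )
+ print(classifier("Great battery life, but the screen scratches easily."))
+ # e.g. [{'label': ..., 'score': ...}] -- the label set depends on the fine-tuning setup
+ ```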
+
+ # Training Details
+
+ ## Training Data
+
+ <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ The model was trained on the [amazon_reviews_multi](https://huggingface.co/datasets/amazon_reviews_multi) dataset.
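+
+ The card does not document the preprocessing, so the following is only a sketch: it loads the English configuration with the `datasets` library, and mapping the 1-5 `stars` field to zero-based class ids is an assumption, not the author's confirmed recipe.
+
+ ```python
+ from datasets import load_dataset
+
+ # English configuration of the multilingual Amazon reviews corpus
+ dataset = load_dataset("amazon_reviews_multi", "en")
+
+ def to_features(example):
+     # Hypothetical labeling: star rating 1-5 -> class ids 0-4
+     return {"text": example["review_body"], "label": example["stars"] - 1}
+
+ dataset = dataset.map(to_features)
+ print(dataset["train"][0]["text"][:80], dataset["train"][0]["label"])
+ ```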
+
+ # Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ## Testing Data, Factors & Metrics
+
+ ### Testing Data
+
+ <!-- This should link to a Data Card if possible. -->
+
+ [amazon_reviews_multi](https://huggingface.co/datasets/amazon_reviews_multi)
+
+ ### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ More information needed.
+
+ ### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ Accuracy, F1, and precision.
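+
+ A minimal sketch of computing these with the `evaluate` library; the `predictions` and `references` arrays are placeholders, and `average="macro"` for the multi-class F1/precision scores is an assumption:
+
+ ```python
+ import evaluate
+
+ accuracy = evaluate.load("accuracy")
+ f1 = evaluate.load("f1")
+ precision = evaluate.load("precision")
+
+ predictions = [0, 2, 1, 4]  # placeholder model predictions
+ references = [0, 2, 0, 4]   # placeholder gold labels
+
+ print(accuracy.compute(predictions=predictions, references=references))
+ print(f1.compute(predictions=predictions, references=references, average="macro"))
+ print(precision.compute(predictions=predictions, references=references, average="macro"))
+ ```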
 
 
+ ## Results
+
+ Per the model-index above: accuracy 0.85 and loss 0.1 on the amazon_reviews_multi test split.
+
+ # Framework versions
+
  - Transformers 4.26.1
  - TensorFlow 2.11.0
  - Datasets 2.1.0
+ - Tokenizers 0.13.2