mariav committed on
Commit
1725e9a
1 Parent(s): fba7740

update model card README.md

Files changed (1)
  1. README.md +6 -18
README.md CHANGED
@@ -5,12 +5,6 @@ tags:
 model-index:
 - name: distilbert-base-german-cased-finetuned-amazon-reviews
   results: []
-datasets:
-- amazon_reviews_multi
-language:
-- de
-metrics:
-- perplexity
 ---

 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -18,27 +12,21 @@ should probably proofread and complete it, then remove this comment. -->

 # distilbert-base-german-cased-finetuned-amazon-reviews

-This model is a fine-tuned version of [distilbert-base-german-cased](https://huggingface.co/distilbert-base-german-cased) on the Amazon Reviews multilingual dataset.
+This model is a fine-tuned version of [distilbert-base-german-cased](https://huggingface.co/distilbert-base-german-cased) on an unknown dataset.
 It achieves the following results on the evaluation set:
 - Loss: 3.8874

-
 ## Model description

-The model is a fine-tuned version of distilbert-base-german-cased using the dataset from amazon_reviews_multi (available in Huggin Face). The purpose is to extend the distilbert-base-german-cased domain, which, once fine-tuned, will be modified for the fill-in-the-gaps task.
+More information needed

 ## Intended uses & limitations

-The use is limited to school use and the limitations have to do with the size of the dataset, since it does not allow for a large contribution, a larger dataset would have to be used to get a larger contribution.
+More information needed

 ## Training and evaluation data

-The training parameters are shown above.
-Evaluation: I used perplexity to evaluate the performance of my model:
-
-- Perplexity: 64.91
-
-The result is quite high, but the performance is quite good.
+More information needed

 ## Training procedure

@@ -71,6 +59,6 @@ The following hyperparameters were used during training:

 ### Framework versions

-- Transformers 4.26.1
+- Transformers 4.27.0
 - Pytorch 1.13.1+cu116
-- Tokenizers 0.13.2
+- Tokenizers 0.13.2
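
The removed "Model description" text says the base model was fine-tuned on amazon_reviews_multi for the fill-in-the-gaps (fill-mask) task. As a rough illustration of that setup, here is a minimal masked-language-model fine-tuning sketch with the Hugging Face Trainer; the tokenized column, hyperparameters, and output directory are assumptions rather than the card's actual values, and the amazon_reviews_multi dataset may no longer be downloadable from the Hub.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

checkpoint = "distilbert-base-german-cased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

# German subset of the dataset named in the removed card text.
raw = load_dataset("amazon_reviews_multi", "de")

def tokenize(batch):
    # "review_body" holds the free-text review in amazon_reviews_multi.
    return tokenizer(batch["review_body"], truncation=True, max_length=128)

tokenized = raw.map(tokenize, batched=True, remove_columns=raw["train"].column_names)

# Random 15% token masking gives the fill-mask training objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="distilbert-base-german-cased-finetuned-amazon-reviews",
    evaluation_strategy="epoch",  # placeholder settings, not the card's values
    learning_rate=2e-5,
    num_train_epochs=3,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=collator,
)
trainer.train()
```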
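The removed evaluation note reports a perplexity of 64.91. For a masked language model, perplexity is conventionally the exponential of the mean evaluation cross-entropy loss; continuing the sketch above (the diff does not record which evaluation run produced the 64.91 figure):

```python
import math

# exp(mean cross-entropy) is the usual masked-LM perplexity estimate.
metrics = trainer.evaluate()
print(f"Perplexity: {math.exp(metrics['eval_loss']):.2f}")
```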
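Once the fine-tuned weights are on the Hub, the model can be queried with the fill-mask pipeline. The repository id below is assumed from the commit author and model name; adjust it if the model lives under a different namespace.

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="mariav/distilbert-base-german-cased-finetuned-amazon-reviews",  # assumed Hub id
)

# German DistilBERT uses [MASK] as its mask token.
for pred in fill_mask("Die Lieferung war sehr [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```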