---
tags:
- generated_from_trainer
datasets:
- cmotions/NL_restaurant_reviews
metrics:
- accuracy
- recall
- precision
- f1
widget:
- text: Wat een geweldige ervaring. Wij gebruikte de lunch bij de Librije. 10 gangen
    met in overleg hierbij gekozen wijnen. Alles klopt. De aandacht, de timing, prachtige
    gerechtjes. En wat een smaaksensaties! Bediening met humor. Altijd daar wanneer
    je ze nodig hebt, maar nooit overdreven aanwezig.
  example_title: Michelin restaurant
- text: Mooie locatie, aardige medewerkers. Maaltijdsalade helaas teleurstellend,
    zeer kleine portie voor 13,80. Jammer.
  example_title: Mooie locatie, matig eten
model-index:
- name: NL_BERT_michelin_finetuned
  results: []
---

# NL_BERT_michelin_finetuned

This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on a [Dutch restaurant reviews dataset](https://huggingface.co/datasets/cmotions/NL_restaurant_reviews). Provide a Dutch review to the inference widget and receive a score indicating whether the restaurant might be eligible for a Michelin star ;)
It achieves the following results on the evaluation set:
- Loss: 0.0637
- Accuracy: 0.9836
- Recall: 0.5486
- Precision: 0.7914
- F1: 0.6480
- Mse: 0.0164
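
The reported metrics are mutually consistent: F1 is the harmonic mean of precision and recall, and for hard 0/1 predictions the squared error is 1 on a miss and 0 on a hit, so MSE equals 1 − accuracy. A quick check:

```python
# Sanity-check the reported evaluation metrics against each other.
accuracy, recall, precision, f1, mse = 0.9836, 0.5486, 0.7914, 0.6480, 0.0164

# F1 is the harmonic mean of precision and recall.
f1_check = 2 * precision * recall / (precision + recall)
assert abs(f1_check - f1) < 1e-4

# For binary 0/1 predictions, MSE reduces to the error rate (1 - accuracy).
assert abs((1 - accuracy) - mse) < 1e-9

print(round(f1_check, 4))  # 0.648
```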

## Model description

This model adds a sequence-classification head to [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) (BERTje) and fine-tunes it to predict whether a Dutch restaurant review describes a Michelin-worthy restaurant.

## Intended uses & limitations

The model is intended for classifying Dutch-language restaurant reviews; it has not been evaluated on other text types or languages. Note the gap between accuracy (0.9836) and recall (0.5486) on the evaluation set: this pattern suggests the positive (Michelin) class is rare and that the model misses a substantial share of positive reviews despite its high overall accuracy.
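
A minimal inference sketch, assuming the model is published under the same Hub namespace as the dataset (the id `cmotions/NL_BERT_michelin_finetuned` is an assumption; adjust it to the actual repository path):

```python
from transformers import pipeline

# Assumed Hub id; replace with the actual repository path if different.
classifier = pipeline(
    "text-classification",
    model="cmotions/NL_BERT_michelin_finetuned",
)

review = (
    "Wat een geweldige ervaring. Alles klopt: de aandacht, de timing, "
    "prachtige gerechtjes. En wat een smaaksensaties!"
)
print(classifier(review))  # a list with a label and confidence score
```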

## Training and evaluation data

The model was fine-tuned and evaluated on the [cmotions/NL_restaurant_reviews](https://huggingface.co/datasets/cmotions/NL_restaurant_reviews) dataset of Dutch restaurant reviews. Exact split sizes are not documented here, though the step counts in the results table (3,647 steps per epoch at batch size 32) imply roughly 117k training examples.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
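
A hedged sketch of how the values above might map onto `transformers.TrainingArguments` (the output directory and per-epoch evaluation are assumptions; the card only lists the hyperparameters themselves):

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the training configuration from the
# hyperparameters listed above (Transformers 4.18 API).
training_args = TrainingArguments(
    output_dir="NL_BERT_michelin_finetuned",  # assumption
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    evaluation_strategy="epoch",  # assumption: the table reports metrics per epoch
)
```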

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | Recall | Precision | F1     | Mse    |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|
| 0.1043        | 1.0   | 3647  | 0.0961          | 0.9792   | 0.3566 | 0.7606    | 0.4856 | 0.0208 |
| 0.0799        | 2.0   | 7294  | 0.0797          | 0.9803   | 0.4364 | 0.7415    | 0.5495 | 0.0197 |
| 0.0589        | 3.0   | 10941 | 0.0637          | 0.9836   | 0.5486 | 0.7914    | 0.6480 | 0.0164 |


### Framework versions

- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1