---
license: mit
tags:
- generated_from_trainer
- nlu
- text-classification
- intent-classification
metrics:
- accuracy
- f1
model-index:
- name: multilingual_minilm-amazon_massive-intent_eu_noen
  results:
  - task:
      name: intent-classification
      type: intent-classification
    dataset:
      name: MASSIVE
      type: AmazonScience/massive
      split: test
    metrics:
    - name: F1
      type: f1
      value: 0.8551
datasets:
- AmazonScience/massive
language:
- de
- fr
- it
- pt
- es
- pl
---

# multilingual_minilm-amazon_massive-intent_eu_noen

This model is a fine-tuned version of [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) on the [MASSIVE 1.1](https://huggingface.co/datasets/AmazonScience/massive) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7794
- Accuracy: 0.8551
- F1: 0.8551

## Model description

This is a sentence-level intent classifier for German, French, Italian, Portuguese, Spanish, and Polish. It adds a classification head on top of the multilingual MiniLM encoder (12 layers, 384-dimensional hidden states) and fine-tunes the whole model on the MASSIVE intent-classification task: given a single utterance, it predicts one of the dataset's intent labels.

## Intended uses & limitations

The model is intended for classifying the intent of short, virtual-assistant-style utterances in the six languages listed above (German, French, Italian, Portuguese, Spanish, Polish). Judging from the model name (`eu_noen`) and the language list, English was not part of the fine-tuning data, so English inputs, other languages, and text far from the MASSIVE domain should be expected to work less reliably.
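
A minimal usage sketch with the `transformers` pipeline API; the model id below is this repository's name and would normally be prefixed with the Hub namespace hosting it (an assumption, since the namespace is not stated in this card):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub; prepend the repository namespace
# (e.g. "<user>/") to the model id if needed.
classifier = pipeline(
    "text-classification",
    model="multilingual_minilm-amazon_massive-intent_eu_noen",
)

# Example German utterance; the output is the top predicted intent label with its score.
print(classifier("wecke mich morgen um sieben uhr"))
```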

## Training and evaluation data

The model was fine-tuned and evaluated on the [MASSIVE](https://huggingface.co/datasets/AmazonScience/massive) dataset, using its German, French, Italian, Portuguese, Spanish, and Polish locales. A sketch of how these splits can be loaded is shown below.
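
A sketch of loading the six training locales with the `datasets` library (the locale codes are the dataset's standard configuration names; the exact preprocessing used for this model is not documented here):

```python
from datasets import load_dataset, concatenate_datasets

# MASSIVE locale codes for the six languages listed in this card.
locales = ["de-DE", "fr-FR", "it-IT", "pt-PT", "es-ES", "pl-PL"]

# Concatenate the train splits of all six locales into a single training set.
train = concatenate_datasets(
    [load_dataset("AmazonScience/massive", loc, split="train") for loc in locales]
)
print(train)  # fields include "utt" (the utterance) and "intent" (the class label)
```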

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
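
Expressed as a `transformers` `TrainingArguments` sketch (the output directory and the per-epoch evaluation strategy are assumptions, not documented settings):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="multilingual_minilm-amazon_massive-intent_eu_noen",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # assumption: matches the per-epoch validation table below
)
```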

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 1.7624        | 1.0   | 4318  | 1.5462          | 0.6331   | 0.6331 |
| 0.9535        | 2.0   | 8636  | 0.9628          | 0.7698   | 0.7698 |
| 0.6849        | 3.0   | 12954 | 0.8034          | 0.8097   | 0.8097 |
| 0.5163        | 4.0   | 17272 | 0.7444          | 0.8290   | 0.8290 |
| 0.3973        | 5.0   | 21590 | 0.7346          | 0.8383   | 0.8383 |
| 0.331         | 6.0   | 25908 | 0.7369          | 0.8453   | 0.8453 |
| 0.2876        | 7.0   | 30226 | 0.7325          | 0.8510   | 0.8510 |
| 0.2319        | 8.0   | 34544 | 0.7726          | 0.8496   | 0.8496 |
| 0.2098        | 9.0   | 38862 | 0.7803          | 0.8543   | 0.8543 |
| 0.1863        | 10.0  | 43180 | 0.7794          | 0.8551   | 0.8551 |


### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2