---
tags:
- generated_from_trainer
datasets:
- kanishka/counterfactual_babylm_measure_nps_as_singular_new
metrics:
- accuracy
model-index:
- name: smolm-autoreg-bpe-counterfactual_babylm_measure_nps_as_singular_new-1e-3
  results:
  - task:
      name: Causal Language Modeling
      type: text-generation
    dataset:
      name: kanishka/counterfactual_babylm_measure_nps_as_singular_new
      type: kanishka/counterfactual_babylm_measure_nps_as_singular_new
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.4110677518157195
---

# smolm-autoreg-bpe-counterfactual_babylm_measure_nps_as_singular_new-1e-3

This model was trained from scratch on the kanishka/counterfactual_babylm_measure_nps_as_singular_new dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4116
- Accuracy: 0.4111
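
The card does not include a usage snippet; the following is a minimal generation sketch, assuming the checkpoint is published under the same `kanishka/` namespace as the dataset (the repo id below is therefore an assumption) and exposes the standard Transformers causal-LM interface:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id: the card does not state where the checkpoint is hosted,
# so this guesses the same `kanishka/` namespace as the training dataset.
model_id = "kanishka/smolm-autoreg-bpe-counterfactual_babylm_measure_nps_as_singular_new-1e-3"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Two cups of flour", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```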

## Model description

The card provides no architecture details. From the model name, this appears to be a small autoregressive ("smolm") language model with a BPE tokenizer, trained from scratch at a peak learning rate of 1e-3 (the `-1e-3` suffix).

## Intended uses & limitations

More information needed

## Training and evaluation data

The model was trained and evaluated on the kanishka/counterfactual_babylm_measure_nps_as_singular_new dataset. Per the dataset name, this is a counterfactual variant of the BabyLM corpus in which measure noun phrases are rendered as singular; the card gives no further details.
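
A minimal sketch of inspecting the training data with 🤗 Datasets (the available splits are whatever the dataset repo defines):

```python
from datasets import load_dataset

# Loads all available splits of the counterfactual BabyLM variant.
dataset = load_dataset("kanishka/counterfactual_babylm_measure_nps_as_singular_new")
print(dataset)
```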

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
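
For reference, these settings map roughly onto `transformers.TrainingArguments` as shown below; this is a reconstruction from the list above, not the original training script (model config and data pipeline are omitted):

```python
from transformers import TrainingArguments

# Rough mapping of the listed hyperparameters onto TrainingArguments.
training_args = TrainingArguments(
    output_dir="smolm-autoreg-bpe-counterfactual_babylm_measure_nps_as_singular_new-1e-3",
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=32000,
    num_train_epochs=20.0,
    fp16=True,  # "Native AMP" mixed-precision training
)
```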

### Training results

| Training Loss | Epoch | Step   | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 3.6071        | 1.0   | 18602  | 3.7886          | 0.3573   |
| 3.379         | 2.0   | 37204  | 3.5653          | 0.3798   |
| 3.2515        | 3.0   | 55806  | 3.4692          | 0.3918   |
| 3.1729        | 4.0   | 74408  | 3.4193          | 0.3983   |
| 3.1139        | 5.0   | 93010  | 3.3907          | 0.4026   |
| 3.0709        | 6.0   | 111612 | 3.3642          | 0.4043   |
| 3.0297        | 7.0   | 130214 | 3.3545          | 0.4067   |
| 2.9988        | 8.0   | 148816 | 3.3596          | 0.4080   |
| 2.9717        | 9.0   | 167418 | 3.3723          | 0.4087   |
| 2.9432        | 10.0  | 186020 | 3.3579          | 0.4093   |
| 2.9217        | 11.0  | 204622 | 3.3701          | 0.4098   |
| 2.8986        | 12.0  | 223224 | 3.3646          | 0.4103   |
| 2.8745        | 13.0  | 241826 | 3.3676          | 0.4105   |
| 2.8518        | 14.0  | 260428 | 3.3750          | 0.4110   |
| 2.8328        | 15.0  | 279030 | 3.3722          | 0.4111   |
| 2.8089        | 16.0  | 297632 | 3.3797          | 0.4115   |
| 2.7911        | 17.0  | 316234 | 3.3882          | 0.4109   |
| 2.773         | 18.0  | 334836 | 3.3951          | 0.4115   |
| 2.7517        | 19.0  | 353438 | 3.4023          | 0.4112   |
| 2.7342        | 20.0  | 372040 | 3.4116          | 0.4111   |


### Framework versions

- Transformers 4.38.0
- Pytorch 2.3.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2