---
tags:
- generated_from_trainer
widget:
  - text: "Sthewillswes emy hedrpi cepl ritie"
  - text: "orel nol hammug antees sopa raus"
  - text: "Gan nstho lanuat tharestlint erks"
  - text: "Jel chatr thefl harewh wh's"
---

# fake-gpt-2-17m

This model is a GPT-J (17,637,632 parameters) trained from scratch for 1 epoch on a synthetic dataset: 1 GB of documents written in four fake languages, each with a formal and an informal writing style.

It achieves the following results on the evaluation set:
- Loss: 3.5592
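
As a rough illustration, a model of this size can be instantiated from scratch with a small `GPTJConfig`. The dimensions below are illustrative guesses only; the card does not state the actual configuration, so the printed parameter count will not match exactly.

```python
# A minimal sketch of instantiating a small GPT-J from scratch.
# All dimensions are assumptions, not the model's actual configuration.
from transformers import GPTJConfig, GPTJForCausalLM

config = GPTJConfig(
    vocab_size=16384,   # assumed tokenizer size
    n_positions=512,    # assumed context length
    n_embd=256,
    n_layer=8,
    n_head=8,
    rotary_dim=32,      # must not exceed the per-head dimension (256 / 8 = 32)
)
model = GPTJForCausalLM(config)  # randomly initialized, ready for pre-training
print(f"{model.num_parameters():,} parameters")
```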

## Intended uses & limitations

This model is intended as a base model for fine-tuning on any language or task, to probe the effectiveness both of pre-training on an algorithmically generated corpus and of extremely small language models (SLMs). It can only generate text resembling its training data (which will be uploaded as a Hugging Face dataset soon).
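
A minimal generation sketch is shown below; the repository id is a placeholder rather than the actual Hub path, and the prompt is taken from the widget examples above.

```python
# Minimal generation sketch; "your-username/fake-gpt-2-17m" is a placeholder
# repository id, not the model's actual Hub path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/fake-gpt-2-17m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("orel nol hammug antees sopa raus", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```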

## Training and evaluation data

See the model description above: a 1 GB synthetic corpus in four fake languages, each with a formal and an informal writing style (to be released as a Hugging Face dataset).

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- batch_size: 64
- seed: 42
- optimizer: Adam
- lr_scheduler_type: linear
- num_epochs: 1
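
A hedged sketch of how these hyperparameters might map onto `TrainingArguments`; anything not listed above (warmup, weight decay, etc.) is left at the library defaults.

```python
# Mapping the listed hyperparameters onto TrainingArguments.
# The card lists Adam as the optimizer; Trainer's default (AdamW) is the
# closest built-in option and is kept here.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="fake-gpt-2-17m",
    learning_rate=1e-3,
    per_device_train_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```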

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.5175        | 1.0   | 46857 | 3.5592          |


### Framework versions

- Transformers 4.22.1
- Pytorch 1.12.0
- Datasets 2.3.2
- Tokenizers 0.12.1