mfajcik committed on
Commit 95785af
1 Parent(s): b9588ec

Update README.md

Files changed (1):
README.md +4 -4

README.md CHANGED
@@ -35,9 +35,9 @@ The model was trained on 3 corpora, which were hot-swapped during the training.
 
 
 <img src="figures/tloss_full.png" width="900"/>
-_Figure 1: Training loss._
+Figure 1: Training loss.
 <img src="figures/tloss_closeup.png" width="900"/>
-_Figure 2: Training loss closeup. We mark the two hot-swap points, where training corpus #1 was switched for internal-corpus #2 and internal-corpus #2.1, respectively._
+Figure 2: Training loss closeup. We mark the two hot-swap points, where training corpus #1 was switched for internal-corpus #2 and internal-corpus #2.1, respectively.
 
 Additionally, we perform two ablations:
 
@@ -45,7 +45,7 @@ Additionally, we perform two ablations:
 - (b) At step 94,000, the training loss stopped decreasing, increased, and around step 120,000 (near hot swap #2) started decreasing again. To ablate whether this was an effect of the hot swap,
   we resume training from step 93,000 using corpus #3. The optimizer states were reinitialized.
 <img src="figures/vloss_closeup.png" width="900"/>
-_Figure 3: Test loss closeup, testing performed on a split of internal-corpus #1. See the Figure 2 description for an explanation of the ablations._
+Figure 3: Test loss closeup, testing performed on a split of internal-corpus #1. See the Figure 2 description for an explanation of the ablations.
 
 
 ## Training Method
@@ -53,7 +53,7 @@ _Figure 3: Test loss closeup, testing performed on a split of internal-corpus #1.
 To transfer knowledge from the English model to Czech, we developed a simple method that (i) aligns several tokens between the two vocabularies and (ii) copies the embeddings from the original language to the new language.
 <img src="figures/tllama_test.png" width="900"/>
 
-_Figure 4: Ablation: test perplexity over the course of training for the vocabulary swap method on TinyLLAMA. Our method (green curve) vs. TinyLLAMA trained from scratch (blue curve)._
+Figure 4: Ablation: test perplexity over the course of training for the vocabulary swap method on TinyLLAMA. Our method (green curve) vs. TinyLLAMA trained from scratch (blue curve).
 
 The vocabulary swap was done the same way as in our [Czech-GPT-2](https://huggingface.co/BUT-FIT/Czech-GPT-2-XL-133k) model (check it out for a comprehensive description).
 We managed to align 4,177 English tokens with corresponding Czech tokens.
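
A minimal sketch of the vocabulary-swap initialization described in the README above, assuming a Hugging Face `transformers` workflow. The model and tokenizer identifiers are placeholders, and the alignment criterion (identical token strings in both vocabularies) is an assumption for illustration, not the authors' released code.

```python
# Sketch only: identifiers are placeholders; the alignment rule (exact string
# match between the two vocabularies) is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

src_tok = AutoTokenizer.from_pretrained("mosaicml/mpt-7b")          # English source tokenizer (assumed base)
tgt_tok = AutoTokenizer.from_pretrained("path/to/czech-tokenizer")  # Czech target tokenizer (placeholder)
model = AutoModelForCausalLM.from_pretrained("mosaicml/mpt-7b", trust_remote_code=True)

src_vocab = src_tok.get_vocab()  # token string -> id (English)
tgt_vocab = tgt_tok.get_vocab()  # token string -> id (Czech)

# (i) align tokens that occur verbatim in both vocabularies
shared = set(src_vocab) & set(tgt_vocab)
print(f"aligned {len(shared)} tokens")  # the README reports 4,177 aligned tokens

# (ii) copy the embeddings of aligned tokens from the English model into the
# resized embedding matrix for the Czech vocabulary; all other rows keep the
# fresh initialization produced by resize_token_embeddings
src_emb = model.get_input_embeddings().weight.data.clone()
model.resize_token_embeddings(len(tgt_tok))
new_emb = model.get_input_embeddings().weight.data
for token in shared:
    new_emb[tgt_vocab[token]] = src_emb[src_vocab[token]]

# If the LM head is not tied to the input embeddings, repeat the copy for
# model.get_output_embeddings() in the same way.
```

With the aligned rows copied, training then continues on the Czech corpus as described above; unaligned tokens start from the fresh initialization.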