Update README.md

README.md CHANGED

@@ -5,12 +5,10 @@ tags:
 - axolotl
 - generated_from_trainer
 model-index:
-- name:
+- name: deepseek-coder-1.3b-typescript
   results: []
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
 
 [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
 <details><summary>See axolotl config</summary>
@@ -29,9 +27,7 @@ datasets:
   - path: CodeGPTPlus/typescript-0-500000-seq1024
     type: completion
     field: text
-#dataset_prepared_path:
 
-#pretraining_dataset: CodeGPTPlus/typescript-0-500000-seq1024
 
 val_set_size: 0.001
 output_dir: ./fft-out
@@ -57,7 +53,6 @@ wandb_log_model: end
 gradient_accumulation_steps: 2
 micro_batch_size: 20
 num_epochs: 1
-#max_steps: 1 # REMOVE IT
 optimizer: adamw_bnb_8bit
 adam_beta1: 0.9
 adam_beta2: 0.999
@@ -96,17 +91,13 @@ special_tokens:
   bos_token: "<|begin▁of▁sentence|>"
   eos_token: "<|end▁of▁sentence|>"
   pad_token: "<|end▁of▁sentence|>"
-# fim_prefix: "<|fim▁begin|>"
-# fim_middle: "<|fim▁hole|>"
-# fim_suffix: "<|fim▁end|>"
-
 ```
 
 </details><br>
 
-#
+# deepseek-coder-1.3b-typescript
 
-This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) on the
+This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) on the the-stack dataset, using 0.5B tokens of TypeScript only.
 It achieves the following results on the evaluation set:
 - Loss: 0.7681
 
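Since the config trains with `type: completion` on raw TypeScript source (`field: text`), the card the commit produces describes a plain left-to-right code-completion model. A minimal inference sketch follows; the repo id is an assumption inferred from the dataset namespace and the `model-index` name, since the diff itself never states where the checkpoint is published.

```python
# Minimal completion sketch. The repo id below is an assumption (the diff
# only gives the model name and the CodeGPTPlus dataset namespace).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CodeGPTPlus/deepseek-coder-1.3b-typescript"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Left-to-right completion of a TypeScript snippet, matching the
# `type: completion` training format in the axolotl config above.
prompt = "function binarySearch(arr: number[], target: number): number {\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=96, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```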
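The commit also deletes the commented-out `# fim_prefix` / `# fim_middle` / `# fim_suffix` lines under `special_tokens`. Those markers (`<|fim▁begin|>`, `<|fim▁hole|>`, `<|fim▁end|>`) are the standard DeepSeek Coder fill-in-the-middle tokens inherited from the base model, so infilling prompts of the following shape should still be possible. This is a hedged sketch reusing `tokenizer` and `model` from above; whether the fine-tune preserves infill quality is not stated in the card.

```python
# Fill-in-the-middle sketch using the DeepSeek Coder FIM markers that appear
# (commented out) in the config diff; reuses `tokenizer`/`model` from above.
prompt = (
    "<|fim▁begin|>function isEven(n: number): boolean {\n"
    "<|fim▁hole|>\n"
    "}<|fim▁end|>"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=48, do_sample=False)
# Strip the prompt tokens so only the infilled body is printed.
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```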