fasterinnerlooper committed
Commit e340409
Parent: 0f31b7e

Training in progress, step 100
README.md CHANGED
@@ -1,37 +1,34 @@
 ---
 license: other
 base_model: stabilityai/stable-code-3b
+tags:
+- generated_from_trainer
 model-index:
 - name: stable-code-3b
   results: []
-datasets:
-- fasterinnerlooper/lcc_csharp
-library_name: peft
-language:
-- code
 ---
 
-
+<!-- This model card has been generated automatically according to the information the Trainer had access to. You
+should probably proofread and complete it, then remove this comment. -->
 
 # stable-code-3b
 
-This model is a fine-tuned (LoRA) version of [stabilityai/stable-code-3b](https://huggingface.co/stabilityai/stable-code-3b) trained on the Microsoft/lcc_csharp dataset.
+This model is a fine-tuned version of [stabilityai/stable-code-3b](https://huggingface.co/stabilityai/stable-code-3b) on an unknown dataset.
 
 ## Model description
 
-Stable Code 3B fine-tuned on microsoft/lcc_csharp modified for in-filling
+More information needed
 
 ## Intended uses & limitations
 
-Meant to be used to in-fill C# code
+More information needed
 
 ## Training and evaluation data
 
-[microsoft/lcc_csharp](https://huggingface.co/microsoft/lcc_csharp) modified for in-filling. The dataset is sliced randomly across the entire dataset. One entry in the original dataset maps to one entry in the modified dataset.
+More information needed
 
 ## Training procedure
 
-Trained in a single GPU environment (either A4000 or P5000) with 16GB of RAM.
-
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
@@ -51,4 +48,4 @@ The following hyperparameters were used during training:
 - Transformers 4.36.2
 - Pytorch 2.1.2+cu121
 - Datasets 2.16.1
-- Tokenizers 0.15.1
+- Tokenizers 0.15.1
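The removed README text describes the dataset as "modified for in-filling," with one original entry mapping to one modified entry. A minimal sketch of what such a fill-in-the-middle (FIM) transform could look like; the function name and the `<fim_prefix>`/`<fim_suffix>`/`<fim_middle>` token strings are assumptions for illustration, not taken from this repo:

```python
import random

# Hypothetical sketch of a fill-in-the-middle (FIM) transform like the one the
# old README describes: each entry is sliced at random positions, and the span
# between the two cuts becomes the part the model learns to in-fill.
# The FIM token strings below are assumptions, not confirmed from this repo.
def to_fim_example(code: str, rng: random.Random) -> str:
    i, j = sorted(rng.sample(range(len(code) + 1), 2))
    prefix, middle, suffix = code[:i], code[i:j], code[j:]
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>{middle}"

source = "public int Add(int a, int b) { return a + b; }"
example = to_fim_example(source, random.Random(0))
```

Because the cut points can fall at either end of the string, prefix or suffix may be empty, but concatenating prefix + middle + suffix always reconstructs the original entry, preserving the one-to-one mapping the README mentions.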
adapter_config.json CHANGED
@@ -19,13 +19,13 @@
   "rank_pattern": {},
   "revision": null,
   "target_modules": [
+    "gate_proj",
     "o_proj",
-    "v_proj",
+    "k_proj",
     "up_proj",
-    "q_proj",
+    "v_proj",
     "down_proj",
-    "gate_proj",
-    "k_proj"
+    "q_proj"
   ],
   "task_type": "CAUSAL_LM"
 }
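The `target_modules` change above only reorders the list: both revisions adapt the same seven projection layers, which suggests a serialization-order difference rather than a substantive config change. A quick check:

```python
# target_modules from the two revisions of adapter_config.json shown above.
old_modules = ["o_proj", "v_proj", "up_proj", "q_proj", "down_proj", "gate_proj", "k_proj"]
new_modules = ["gate_proj", "o_proj", "k_proj", "up_proj", "v_proj", "down_proj", "q_proj"]

# Same set of adapted layers; only the listing order differs.
assert set(old_modules) == set(new_modules)
assert len(old_modules) == len(new_modules) == 7
```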
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f21612564c2cfbe49eed1ad256b95567d4cc292abbc2f5399166991e8f203292
+oid sha256:821af586ba7a2272447a7fb2ad6d31e26f58c317cb638c3d65d95e3b7e56fe4f
 size 50128536
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:49aec3c0a7659316c088ad7250bb3b9741cd44fba92bdc0275a4e23fca8369ad
-size 4792
+oid sha256:fcc85f732e4a7af3d7a3192082ee4455c3d8306c23f0afac854f43833577b8bb
+size 4728
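The `adapter_model.safetensors` and `training_args.bin` entries above are Git LFS pointer files (version, oid, size lines) rather than the binaries themselves. A minimal sketch of reading one; the `parse_lfs_pointer` helper is illustrative, not part of any tooling in this repo:

```python
# Parse a Git LFS pointer file of the form shown above: one "key value"
# pair per line (version, oid, size).
def parse_lfs_pointer(text: str) -> dict[str, str]:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:fcc85f732e4a7af3d7a3192082ee4455c3d8306c23f0afac854f43833577b8bb\n"
    "size 4728\n"
)
info = parse_lfs_pointer(pointer)
```

The `size` field is the byte length of the real object, so this diff records the training_args.bin blob changing from 4792 to 4728 bytes between revisions, while the adapter weights stayed the same size (only the oid changed).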