Update README.md
README.md CHANGED
@@ -3,14 +3,14 @@ license: bigscience-openrail-m
 datasets:
 - apcl/so13m
 ---
-#
-
+# Jam_so
+Jam_so is a GPT2-like model for research in fine-grained Java analysis. It is intended for analysis of Java source code at the level of methods, statements, and variables, as a foundation for downstream tasks such as code completion, comment generation, and automated bug repair.
 
 ---
 
-##
+## Jam_so Training Details
 
-- We trained the
+- We trained the jam_so model using the training procedures from Daniel Grittner's [NanoGPT-LoRA](https://github.com/danielgrittner/nanoGPT-LoRA).
 
 - The dataset used to train our model is our own [so13m dataset](https://huggingface.co/datasets/apcl/so13m), processed from 13 million StackOverflow posts picked from a [Stack Exchange data dump](https://archive.org/details/stackexchange) for posts between January 2014 and December 2022.
 
@@ -35,6 +35,6 @@ We train our models using a single NVidia A5000 GPU.
 ---
 ## Jam Projects
 
-Current projects using the
+Current projects using the jam_so pre-trained model can be found in our GitHub repository:
 
 https://github.com/apcl-research/jam
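The updated card describes Jam_so as a GPT2-like model trained with nanoGPT-derived tooling. As a rough illustration only, not the project's documented API, a checkpoint in the upstream nanoGPT layout could be loaded along these lines; the `GPT`/`GPTConfig` import and the `ckpt.pt` file name are assumptions taken from nanoGPT, not from the jam_so card:

```python
# Hypothetical sketch: loading a nanoGPT-style checkpoint such as jam_so.
# `model.py` (providing GPT and GPTConfig) and the ckpt.pt file name are
# assumptions from the upstream nanoGPT codebase; check the jam repo for
# the project's actual loading code.
import torch
from model import GPT, GPTConfig  # nanoGPT's model.py (assumption)

ckpt = torch.load("ckpt.pt", map_location="cpu")
model = GPT(GPTConfig(**ckpt["model_args"]))

state_dict = ckpt["model"]
# nanoGPT checkpoints saved after torch.compile carry an "_orig_mod." prefix
for k in list(state_dict):
    if k.startswith("_orig_mod."):
        state_dict[k[len("_orig_mod."):]] = state_dict.pop(k)

model.load_state_dict(state_dict)
model.eval()
```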
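The training-details bullet credits Daniel Grittner's NanoGPT-LoRA for the training procedure. For readers unfamiliar with the technique, here is a minimal sketch of the core LoRA idea, a frozen base weight plus a trainable low-rank update; this is generic LoRA, not NanoGPT-LoRA's exact code:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative LoRA layer: y = W x + (alpha / r) * B A x,
    where W is frozen and only A and B (rank r) are trained."""

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad = False  # freeze the pre-trained weight
        # A is small random, B is zero, so the update starts as a no-op
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)
```

Because only the two low-rank matrices receive gradients, fine-tuning touches a small fraction of the parameters of the full GPT2-like model.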
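The so13m dataset referenced in the card is hosted on the Hugging Face Hub. Assuming it can be pulled with the standard `datasets` library (an assumption; the dataset card may instead ship raw files requiring manual download):

```python
# Assumption: apcl/so13m loads via the standard `datasets` API.
from datasets import load_dataset

so13m = load_dataset("apcl/so13m")
print(so13m)  # inspect the available splits and fields
```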