burakkececi committed
Commit 9dcff01
1 Parent(s): 56d5060

Create README.md

Files changed (1)
README.md +53 -0
README.md ADDED

---
license: mit
language:
- en
library_name: transformers
---
# BERT Model for Software Engineering

This repository was created within the scope of a computer engineering undergraduate graduation project.
This research aims to perform an exploratory case study to determine the functional dimensions of user requirements or use cases for software projects.
To perform this task, we created two models: SE-BERT and [SE-BERTurk](https://huggingface.co/burakkececi/bert-turkish-software-engineering).

# SE-BERT

SE-BERT is a BERT model trained for domain adaptation in a software engineering context.

We applied Masked Language Modeling (MLM), an unsupervised learning technique, for domain adaptation. MLM enhances the model's understanding of domain-specific language by masking portions of the input text and training the model to predict the masked words based on the surrounding context; a minimal masking sketch is shown below.

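The snippet below illustrates this masking step with ``DataCollatorForLanguageModeling`` from Transformers. It is a minimal sketch, not the original training code; the checkpoint name and example sentence are placeholders.

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

# Placeholder checkpoint; any BERT-style tokenizer behaves the same way here.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Randomly masks ~15% of the tokens (the standard MLM setting).
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

encoding = tokenizer("The system shall export the report as a PDF file.")
batch = collator([{"input_ids": encoding["input_ids"]}])

print(tokenizer.decode(batch["input_ids"][0]))  # some tokens replaced by [MASK]
print(batch["labels"][0])                       # original ids at masked positions, -100 elsewhere
```
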
## Stats
We created a bilingual [SE corpus](https://drive.google.com/file/d/1IgnJTaR2-pe889TdQZtYF8SKOH92mi1l/view?usp=drive_link) (166 MB) ➡️ [Descriptive stats of the corpus](https://docs.google.com/spreadsheets/d/1Xnn_xfu4tdCtWg-nQ8ce_LHe9F-g0BSmUxzTdi5g1r4/edit?usp=sharing)
* 166K entries = 886K sentences = 10M words
* 156K training entries + 10K test entries
* Each entry has a maximum length of 512 tokens (one possible chunking approach is sketched below)

The final training corpus has a size of 166 MB and 10,554,750 words.

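The sketch below shows one way such fixed-length entries can be built by packing sentences until the 512-token budget is reached. It is an illustrative assumption about the preprocessing, not the exact script used to create the corpus.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder tokenizer
MAX_LEN = 512  # maximum entry length used for the corpus

def pack_into_entries(sentences, max_len=MAX_LEN):
    """Greedily concatenate sentences into entries of at most max_len tokens."""
    entries, current, current_len = [], [], 0
    for sentence in sentences:
        n_tokens = len(tokenizer.tokenize(sentence))
        if current and current_len + n_tokens > max_len:
            entries.append(" ".join(current))
            current, current_len = [], 0
        current.append(sentence)
        current_len += n_tokens
    if current:
        entries.append(" ".join(current))
    return entries

entries = pack_into_entries([
    "The user shall be able to reset the password.",
    "The system must log every failed login attempt.",
])
```
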
## MLM Training (Domain Adaptation)
We used the ``AdamW`` optimizer and set ``num_epochs = 1``, ``lr = 2e-5``, ``eps = 1e-8``; a training-loop sketch with these settings follows the list below.
* For a T4 GPU ➡️ set ``batch_size = 6`` (13.5 GB memory)
* For an A100 GPU ➡️ set ``batch_size = 50`` (37 GB memory) and ``fp16 = True``

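A minimal sketch of this domain-adaptation step with a plain PyTorch loop. The starting checkpoint and the tiny in-line corpus are placeholders for the actual SE corpus, and ``fp16`` handling is omitted for brevity.

```python
import torch
from torch.utils.data import DataLoader
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling)

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder starting checkpoint; domain adaptation starts from a general-purpose BERT.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").to(device)

# Stand-in for the SE corpus entries (each at most 512 tokens).
corpus_texts = ["The system shall export the report as a PDF file."]
encodings = tokenizer(corpus_texts, truncation=True, max_length=512)
dataset = [{"input_ids": ids} for ids in encodings["input_ids"]]

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
loader = DataLoader(dataset, batch_size=6, shuffle=True, collate_fn=collator)  # batch_size = 6 (T4)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5, eps=1e-8)

model.train()
for epoch in range(1):  # num_epochs = 1
    for batch in loader:
        batch = {k: v.to(device) for k, v in batch.items()}
        loss = model(**batch).loss  # cross-entropy over the masked positions only
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```
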
**Perplexity**
* ``6.673`` PPL for SE-BERT

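For reference, a sketch of how PPL can be computed, assuming it is taken as the exponential of the mean MLM loss over held-out batches (``model`` and an ``eval_loader`` built like the training loader above are assumed):

```python
import math
import torch

@torch.no_grad()
def perplexity(model, eval_loader, device="cpu"):
    """Perplexity = exp(mean masked-LM cross-entropy loss) over the evaluation set."""
    model.eval()
    losses = []
    for batch in eval_loader:
        batch = {k: v.to(device) for k, v in batch.items()}
        losses.append(model(**batch).loss.item())
    return math.exp(sum(losses) / len(losses))
```
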
### Evaluation Steps
1) Calculate ``PPL`` (perplexity) on the test corpus (10K entries with a maximum length of 512 tokens)
2) Calculate ``PPL`` (perplexity) on the requirement datasets
3) Evaluate performance on downstream tasks:
   * For size measurement ➡️ ``MAE``, ``MSE``, ``MMRE``, ``PRED(30)``, ``ACC`` (see the metric sketch after this list)

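The size-measurement metrics ``MMRE`` and ``PRED(30)`` are sketched below using their conventional definitions from software estimation (mean magnitude of relative error, and the fraction of estimates within 30% of the actual value). This is a generic illustration, not the project's evaluation code.

```python
import numpy as np

def mmre(actual, predicted):
    """Mean Magnitude of Relative Error: mean(|actual - predicted| / actual)."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.mean(np.abs(actual - predicted) / actual))

def pred(actual, predicted, threshold=0.30):
    """PRED(30): fraction of predictions with relative error <= 30%."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    mre = np.abs(actual - predicted) / actual
    return float(np.mean(mre <= threshold))

# Hypothetical functional-size values:
print(mmre([10, 20, 30], [12, 18, 33]))  # ~0.13
print(pred([10, 20, 30], [12, 18, 33]))  # 1.0
```
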
## Usage

With Transformers >= 2.11, our SE-BERT uncased model can be loaded as follows:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("burakkececi/bert-software-engineering")
model = AutoModel.from_pretrained("burakkececi/bert-software-engineering")
```
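
Because the model is adapted with MLM, it can also be tried directly in a fill-mask pipeline; the example sentence below is only illustrative.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="burakkececi/bert-software-engineering")
print(fill_mask("The system shall [MASK] the report as a PDF file."))
```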

# Hugging Face model hub

All models are available on the [Hugging Face model hub](https://huggingface.co/burakkececi).