---
license: mit
language:
- tr
library_name: transformers
---
# 🇹🇷 Turkish BERT Model for Software Engineering

This repository was created within the scope of a computer engineering undergraduate graduation project.

This research aims to perform an exploratory case study to determine the functional dimensions of user requirements or use cases for software projects.
To perform this task, we created two models, [SE-BERT](https://huggingface.co/burakkececi/bert-software-engineering) and SE-BERTurk.

You can find a detailed description of the project at this [link](https://github.com/burakkececi/software-size-estimation-nlp).

# SE-BERTurk

SE-BERTurk is a BERT model trained for domain adaptation in a software engineering context.

We applied Masked Language Modeling (MLM), an unsupervised learning technique, for domain adaptation. MLM enhances the model's understanding of domain-specific language by masking portions of the input text and training the model to predict the masked words from the surrounding context.

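As a rough illustration of this masking step, the sketch below uses ``DataCollatorForLanguageModeling`` from 🤗 Transformers. The base checkpoint, the 15% masking probability, and the example sentence are assumptions for illustration, not values taken verbatim from our training scripts.

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

# Illustrative tokenizer checkpoint (assumption); any BERT tokenizer behaves the same way here.
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-turkish-cased")

# The collator randomly replaces tokens with [MASK] and builds labels so the
# model is trained to recover the original tokens at the masked positions.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

encoding = tokenizer("Sistem kullanıcı taleplerini kaydetmelidir.", return_tensors="pt")
batch = collator([{k: v[0] for k, v in encoding.items()}])

print(tokenizer.decode(batch["input_ids"][0]))  # input with some tokens randomly replaced by [MASK]
print(batch["labels"][0])                       # -100 everywhere except the masked positions
```
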
## Stats
We created a bilingual [SE corpus](https://drive.google.com/file/d/1IgnJTaR2-pe889TdQZtYF8SKOH92mi1l/view?usp=drive_link) (166 MB) ➡️ [Descriptive stats of the corpus](https://docs.google.com/spreadsheets/d/1Xnn_xfu4tdCtWg-nQ8ce_LHe9F-g0BSmUxzTdi5g1r4/edit?usp=sharing)
* 166K entries = 886K sentences = 10M words
* 156K training entries + 10K test entries
* Each entry has a maximum length of 512 tokens (see the tokenization sketch below)

The final training corpus has a size of 166 MB and 10,554,750 words.

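The snippet below is a minimal sketch of how corpus entries can be capped at 512 tokens; the checkpoint name, the example sentence, and the ``truncation``/``max_length`` settings are illustrative assumptions rather than our exact preprocessing code.

```python
from transformers import AutoTokenizer

# Illustrative tokenizer checkpoint (assumption).
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-turkish-cased")

entry = "Kullanıcı, proje raporlarını PDF olarak dışa aktarabilmelidir."  # hypothetical corpus entry
encoded = tokenizer(entry, truncation=True, max_length=512)

print(len(encoded["input_ids"]))  # never exceeds 512 tokens
```
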
## MLM Training (Domain Adaptation)
We used the ``AdamW`` optimizer and set ``num_epochs = 1``, ``lr = 2e-5``, ``eps = 1e-8`` (a training-loop sketch follows below).
* For a T4 GPU ➡️ set ``batch_size = 6`` (13.5 GB memory)
* For an A100 GPU ➡️ set ``batch_size = 50`` (37 GB memory) and ``fp16 = True``

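As a non-authoritative sketch of this setup, the code below wires the hyperparameters above into the 🤗 ``Trainer``. The base checkpoint, the toy dataset, and the output directory are placeholders; the original training script may differ.

```python
import torch
from datasets import Dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# Placeholder base checkpoint; the actual starting model is an assumption here.
checkpoint = "dbmdz/bert-base-turkish-cased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

# Tiny toy corpus standing in for the 156K-entry SE training split.
texts = ["Sistem kullanıcı taleplerini kaydetmelidir.",
         "The system shall export project reports as PDF."]
dataset = Dataset.from_dict({"text": texts}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True)

args = TrainingArguments(
    output_dir="se-berturk-mlm",      # placeholder output path
    num_train_epochs=1,               # num_epochs = 1
    learning_rate=2e-5,               # lr = 2e-5
    adam_epsilon=1e-8,                # eps = 1e-8 (AdamW is the Trainer default)
    per_device_train_batch_size=6,    # 6 on a T4, 50 on an A100
    fp16=torch.cuda.is_available(),   # fp16 = True was used on the A100 runs
)

Trainer(model=model, args=args, data_collator=collator, train_dataset=dataset).train()
```
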
**Perplexity**
* ``3.665`` PPL for SE-BERTurk

### Evaluation Steps
1) Calculate ``PPL`` (perplexity) on the test corpus (10K contexts with a maximum length of 512 tokens)
2) Calculate ``PPL`` (perplexity) on the requirement datasets
3) Evaluate performance on downstream tasks:
   * For size measurement ➡️ ``MAE``, ``MSE``, ``MMRE``, ``PRED(30)``, ``ACC`` (a metric sketch follows below)

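The following is a minimal sketch of how these quantities can be computed: perplexity as the exponential of the mean masked-LM cross-entropy loss, plus simple ``MMRE`` and ``PRED(30)`` helpers. Function names, the example numbers, and the loss value are illustrative assumptions, not the project's exact evaluation scripts.

```python
import math

def perplexity_from_loss(mean_mlm_loss: float) -> float:
    """PPL is the exponential of the mean masked-LM cross-entropy loss."""
    return math.exp(mean_mlm_loss)

def mmre(actual, predicted):
    """Mean Magnitude of Relative Error over paired size values."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

def pred(actual, predicted, threshold=0.30):
    """PRED(30): fraction of estimates whose relative error is within 30%."""
    hits = sum(abs(a - p) / a <= threshold for a, p in zip(actual, predicted))
    return hits / len(actual)

print(perplexity_from_loss(1.30))                           # loss 1.30 -> PPL ≈ 3.67
print(mmre([10, 20], [12, 18]), pred([10, 20], [12, 18]))   # 0.15, 1.0
```
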
## Usage

With Transformers >= 2.11, our uncased SE-BERTurk model can be loaded as follows:

```python
from transformers import AutoModel, AutoTokenizer

# Load the tokenizer and the model weights from the model hub repository.
tokenizer = AutoTokenizer.from_pretrained("burakkececi/bert-turkish-software-engineering")
model = AutoModel.from_pretrained("burakkececi/bert-turkish-software-engineering")
```

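For a quick sanity check of the masked-language-modeling head, a ``fill-mask`` pipeline can also be used; the Turkish example sentence below is a hypothetical requirement, not taken from the corpus:

```python
from transformers import pipeline

# The fill-mask pipeline loads the MLM head on top of the same checkpoint.
fill_mask = pipeline("fill-mask", model="burakkececi/bert-turkish-software-engineering")

# Hypothetical Turkish requirement with one masked token.
for prediction in fill_mask("Kullanıcı sisteme şifre ile [MASK] yapabilmelidir."):
    print(prediction["token_str"], prediction["score"])
```
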
# Hugging Face model hub

All models are available on the [Hugging Face model hub](https://huggingface.co/burakkececi).