Shushant committed on
Commit 4f62b6c
1 Parent(s): 57ccd9f

description added

Files changed (1)
  1. README.md +14 -7
README.md CHANGED
@@ -4,6 +4,11 @@ tags:
 model-index:
 - name: training_bert
   results: []
+license: mit
+language:
+- en
+metrics:
+- perplexity
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -11,24 +16,26 @@ should probably proofread and complete it, then remove this comment. -->
 
 # training_bert
 
-This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
+This model is a fine-tuned version of [Bert Base Uncased](https://huggingface.co/) on a dataset composed of job postings collected from several job platforms and thousands of resumes.
 It achieves the following results on the evaluation set:
 - Loss: 4.0495
 
 ## Model description
 
-More information needed
+Pretraining was done on the BERT base architecture.
 
 ## Intended uses & limitations
 
-More information needed
+This model can be used to generate contextual embeddings for the textual data handled by Applicant Tracking Systems, such as resumes, job postings, and cover letters.
+The embeddings can then be used for downstream NLP tasks such as classification, Named Entity Recognition, and so on.
 
-## Training and evaluation data
 
-More information needed
+## Training and evaluation data
+The training corpus was developed from about 40,000 resumes and 2,000 job postings scraped from different job portals. This is a preliminary dataset
+for experimentation. The corpus size is about 2.35 GB of textual data. Similarly, the evaluation data contains a few resumes and job postings, amounting to about 12 MB of textual data.
 
 ## Training procedure
-
+For pretraining the masked language model, the Trainer API from Hugging Face was used. The pretraining took about 6 hours 40 minutes.
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
@@ -96,4 +103,4 @@ The following hyperparameters were used during training:
 - Transformers 4.25.1
 - Pytorch 1.8.0+cu111
 - Datasets 2.7.1
-- Tokenizers 0.13.2
+- Tokenizers 0.13.2
 
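A note on the metrics: the new metadata lists perplexity, but the card reports only the evaluation loss. Assuming that loss is the mean per-token cross-entropy (which is what the Trainer reports for masked language modeling), perplexity follows directly:

$$\mathrm{PPL} = e^{\mathcal{L}_{\text{eval}}} = e^{4.0495} \approx 57.4$$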
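The intended-use section describes generating contextual embeddings for resumes, job postings, and cover letters. Below is a minimal sketch of how that could look with the `transformers` library; the repo id `Shushant/training_bert` is inferred from the committer and model name (the card does not state it), and mean pooling is one common choice rather than anything the card prescribes.

```python
# Sketch: contextual embeddings for ATS-style documents.
# Assumption: the model is published as "Shushant/training_bert" (inferred, verify).
import torch
from transformers import AutoModel, AutoTokenizer

repo_id = "Shushant/training_bert"  # hypothetical repo id, not confirmed by the card
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)  # encoder only; the MLM head is dropped
model.eval()

texts = [
    "Senior data engineer with five years of Spark experience.",
    "Job posting: NLP engineer, transformer models, remote.",
]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    out = model(**batch)  # last_hidden_state: (batch, seq_len, hidden_size)

# Mask-aware mean pooling: average token vectors while ignoring padding.
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (out.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)  # (2, 768) for a BERT-base-sized encoder
```

Cosine similarity between such vectors is a typical way to match resumes against job postings before any task-specific fine-tuning.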
31
 
 
32
 
33
+ ## Training and evaluation data
34
+ THe training corpus is developed using about 40000 resumes and 2000 jobs posted scrapped from different job portals. This is a preliminary dataset
35
+ for the experimentation. THe corpus size is about 2.35 GB of textual data. Similary evaluation data contains few resumes and jobs making about 12 mb of textual data.
36
 
37
  ## Training procedure
38
+ For the pretraining of masked language model, Trainer API from Huggingface is used. The pretraining took about 6 hrs 40 mins.
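The procedure note says only that the Trainer API was used for masked-language-model pretraining. A minimal sketch of that kind of setup follows; the file names, sequence length, and hyperparameters are illustrative assumptions (the card's actual hyperparameter list is elided from this diff), not the author's values.

```python
# Sketch: Trainer-based MLM pretraining, matching the card's description in outline only.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Hypothetical corpus layout: plain-text files, one passage per line.
raw = load_dataset("text", data_files={"train": "corpus.txt", "validation": "eval.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

# Dynamic masking: 15% of tokens are masked on the fly for the MLM objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="training_bert",
    per_device_train_batch_size=16,  # illustrative, not the card's value
    num_train_epochs=1,              # illustrative, not the card's value
)

trainer = Trainer(
    model=model,
    args=args,
    data_collator=collator,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
)
trainer.train()
metrics = trainer.evaluate()  # eval_loss here is what the card reports as Loss
print(metrics["eval_loss"])
```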