Mike Zhang committed
Commit: 543b970
Parent: 77465de

Update README.md

Files changed (1): README.md (+15 -10)
README.md CHANGED
@@ -12,21 +12,26 @@ tags:
 
 This is the JobSpanBERT model from:
 
-Mike Zhang, Kristian Nørgaard Jensen, Sif Dam Sonniks, and Barbara Plank. __SkillSpan: Hard and Soft Skill Extraction from Job Postings__. To appear at the 2022 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). 2022.
+Mike Zhang, Kristian Nørgaard Jensen, Sif Dam Sonniks, and Barbara Plank. __SkillSpan: Hard and Soft Skill Extraction from Job Postings__. Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
 
 This model is continuously pre-trained from a spanbert-base-cased checkpoint (which can also be found in our repository) on ~3.2M sentences from job postings. More information can be found in the paper (which should be released when the NAACL proceedings are online).
 
 If you use this model, please cite the following paper:
 
 ```
-@misc{https://doi.org/10.48550/arxiv.2204.12811,
-  doi = {10.48550/ARXIV.2204.12811},
-  url = {https://arxiv.org/abs/2204.12811},
-  author = {Zhang, Mike and Jensen, Kristian Nørgaard and Sonniks, Sif Dam and Plank, Barbara},
-  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
-  title = {SkillSpan: Hard and Soft Skill Extraction from English Job Postings},
-  publisher = {arXiv},
-  year = {2022},
-  copyright = {arXiv.org perpetual, non-exclusive license}
+@inproceedings{zhang-etal-2022-skillspan,
+  title = "{S}kill{S}pan: Hard and Soft Skill Extraction from {E}nglish Job Postings",
+  author = "Zhang, Mike and
+    Jensen, Kristian and
+    Sonniks, Sif and
+    Plank, Barbara",
+  booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
+  month = jul,
+  year = "2022",
+  address = "Seattle, United States",
+  publisher = "Association for Computational Linguistics",
+  url = "https://aclanthology.org/2022.naacl-main.366",
+  pages = "4962--4984",
+  abstract = "Skill Extraction (SE) is an important and widely-studied task useful to gain insights into labor market dynamics. However, there is a lacuna of datasets and annotation guidelines; available datasets are few and contain crowd-sourced labels on the span-level or labels from a predefined skill inventory. To address this gap, we introduce SKILLSPAN, a novel SE dataset consisting of 14.5K sentences and over 12.5K annotated spans. We release its respective guidelines created over three different sources annotated for hard and soft skills by domain experts. We introduce a BERT baseline (Devlin et al., 2019). To improve upon this baseline, we experiment with language models that are optimized for long spans (Joshi et al., 2020; Beltagy et al., 2020), continuous pre-training on the job posting domain (Han and Eisenstein, 2019; Gururangan et al., 2020), and multi-task learning (Caruana, 1997). Our results show that the domain-adapted models significantly outperform their non-adapted counterparts, and single-task outperforms multi-task learning.",
 }
 ```
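
For reference, here is a minimal usage sketch for the model this README describes, using the Hugging Face `transformers` API. The repository id `jjzha/jobspanbert-base-cased` is an assumption, not stated in the README; substitute this repository's actual Hub id.

```python
# Minimal sketch: load JobSpanBERT as a feature extractor.
# NOTE: the repo id below is an assumption; replace it with this
# repository's actual id on the Hugging Face Hub.
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "jjzha/jobspanbert-base-cased"  # assumed id, not confirmed by the README

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

# Encode one job-posting sentence and inspect its contextual embeddings,
# e.g. as input features for a downstream skill-span tagger.
sentence = "We are looking for a backend developer with strong Python skills."
inputs = tokenizer(sentence, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```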
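The continued pre-training the README mentions (a spanbert-base-cased checkpoint adapted on ~3.2M job-posting sentences) could look roughly like the sketch below. This is not the authors' script: it uses plain token-level masked-language modeling as a stand-in for SpanBERT's span-masking objective, and the three toy sentences stand in for the real corpus.

```python
# Hedged sketch of domain-adaptive (continued) pre-training with token-level
# MLM. SpanBERT itself is trained with a span-masking objective, which this
# sketch does not reproduce.
from datasets import Dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Toy stand-in for the ~3.2M job-posting sentences mentioned in the README.
corpus = Dataset.from_dict({"text": [
    "Experience with Java and cloud platforms is required.",
    "Strong communication skills and team spirit are a plus.",
    "You will maintain our CI/CD pipelines and Kubernetes clusters.",
]})

# SpanBERT shares BERT's cased vocabulary; if the checkpoint ships without
# tokenizer files, loading the tokenizer from "bert-base-cased" is the
# usual substitute.
tokenizer = AutoTokenizer.from_pretrained("SpanBERT/spanbert-base-cased")
model = AutoModelForMaskedLM.from_pretrained("SpanBERT/spanbert-base-cased")

tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

# Dynamic masking: 15% of tokens are masked per batch.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="jobspanbert",
                           num_train_epochs=1,
                           per_device_train_batch_size=8),
    data_collator=collator,
    train_dataset=tokenized,
)
trainer.train()
trainer.save_model("jobspanbert")  # checkpoint usable via AutoModel as above
```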