---
base_model: distilbert-base-uncased
model-index:
- name: ojobert
results: []
license: mit
language:
- en
widget:
- text: Would you like to join a major [MASK] company?
tags:
- jobs
---
_Nesta, the UK's innovation agency, has been scraping online job adverts since 2021 and building algorithms to extract and structure information as part of the [Open Jobs Observatory](https://www.nesta.org.uk/project/open-jobs-observatory/) project._
_Although we are unable to share the raw data openly, we aim to open source **our models, algorithms and tools** so that anyone can use them for their own research and analysis._
## 📟 About
This model was pre-trained from a `distilbert-base-uncased` checkpoint, using the masked language modelling objective, on 100k sentences from online job adverts scraped as part of the Open Jobs Observatory.
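As a rough illustration, continued pre-training of this kind can be done with the `transformers` `Trainer` and the standard masked language modelling collator. The sketch below is not the Observatory's actual training script: the data file, sequence length and masking probability are assumptions, with only the 3 epochs taken from the metrics reported further down.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Hypothetical file of job-advert sentences, one per line;
# the Observatory's raw data is not openly available.
dataset = load_dataset("text", data_files={"train": "job_ad_sentences.txt"})

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Randomly masks tokens for the standard MLM objective
# (15% is the usual default, assumed here).
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ojobert", num_train_epochs=3),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
```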
## 🖨️ Use
To use the model:
```python
from transformers import pipeline

model = pipeline('fill-mask', model='ihk/ojobert', tokenizer='ihk/ojobert')
```
An example use is as follows:
```python
text = "Would you like to join a major [MASK] company?"
results = model(text, top_k=3)
results
>> [{'score': 0.1886572688817978,
  'token': 13859,
  'token_str': 'pharmaceutical',
  'sequence': 'would you like to join a major pharmaceutical company?'},
 {'score': 0.07436735928058624,
  'token': 5427,
  'token_str': 'insurance',
  'sequence': 'would you like to join a major insurance company?'},
 {'score': 0.06400047987699509,
  'token': 2810,
  'token_str': 'construction',
  'sequence': 'would you like to join a major construction company?'}]
```
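The fill-mask pipeline can also score specific candidate words via its `targets` argument, which is useful when you only care about a fixed set of fills (the candidate words here are arbitrary examples):

```python
# Restrict scoring to chosen candidates instead of the whole vocabulary.
results = model(text, targets=['software', 'retail'], top_k=2)
```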
## ⚖️ Training results
The fine-tuning metrics are as follows:
- `eval_loss`: 2.5871026515960693
- `eval_runtime`: 134.4452 seconds
- `eval_samples_per_second`: 14.281
- `eval_steps_per_second`: 0.223
- `epoch`: 3.0
- `perplexity`: 13.29
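For a masked language model, perplexity is the exponential of the evaluation loss, which is how the figure above follows from `eval_loss`:

```python
import math

eval_loss = 2.5871026515960693
perplexity = math.exp(eval_loss)  # ≈ 13.29, matching the reported value
```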