---
license: mit
language:
- en
metrics:
- accuracy
- matthews_correlation
---
# sadickam/sdg-classification-bert
<!-- Provide a quick summary of what the model is/does. -->
This model classifies text with respect to the United Nations Sustainable Development Goals (SDGs).
![image](https://user-images.githubusercontent.com/73560591/216751462-ced482ba-5d8e-48aa-9a48-5557979a35f1.png)
Source: https://www.un.org/development/desa/disabilities/about-us/sustainable-development-goals-sdgs-and-disability.html
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This text classification model was developed by fine-tuning the bert-base-uncased pre-trained model. The training data for this fine-tuned model was sourced from the publicly available OSDG Community Dataset (OSDG-CD) at https://zenodo.org/record/5550238#.ZBulfcJByF4.
This model was made as part of academic research at Deakin University. The goal was to make a transformer-based SDG text classification model that anyone could use. Only the first 16 UN SDGs are supported. The primary model details are highlighted below:
- **Model type:** Text classification
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** bert-base-uncased
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://huggingface.co/sadickam/sdg-classification-bert
- **Demo:** option 1: https://sadickam-sdg-text-classifier.hf.space/; option 2: https://sadickam-sdg-classification-bert-main-qxg1gv.streamlit.app/
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
This is a fine-tuned model, so it can be used directly for SDG text classification without any further training.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("sadickam/sdg-classification-bert")
model = AutoModelForSequenceClassification.from_pretrained("sadickam/sdg-classification-bert")
```
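Below is a minimal inference sketch that builds on the snippet above. It assumes the model's 16 output indices map to SDG 1–16 in order; check `model.config.id2label` for the actual mapping before relying on it.

```python
import torch

text = "Improving access to clean water and sanitation in rural communities."

# Tokenize and run a forward pass (no gradients needed for inference)
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to probabilities and pick the top class
probs = torch.softmax(logits, dim=-1)
predicted_index = int(probs.argmax(dim=-1))

# Assumption: output index i corresponds to SDG i+1; verify with model.config.id2label
print(f"Predicted SDG: {predicted_index + 1} "
      f"(confidence {probs[0, predicted_index].item():.2f})")
```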
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The training data includes text from a wide range of industries and academic research fields. Hence, this fine-tuned model is not tied to a specific industry.
See the training data here: https://zenodo.org/record/5550238#.ZBulfcJByF4
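For reference, here is a rough sketch of loading the OSDG-CD export with pandas. The file name and tab-delimited layout are assumptions based on the Zenodo record and should be checked against the downloaded file.

```python
import pandas as pd

# Assumed file name and tab-delimited format; adjust to match the Zenodo download
df = pd.read_csv("osdg-community-data-v2021-09-30.csv", sep="\t")

# Inspect the available columns and the per-SDG label distribution
print(df.columns.tolist())
print(df["sdg"].value_counts().sort_index())
```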
## Training Hyperparameters
- Number of epochs = 3
- Learning rate = 5e-5
- Batch size = 16
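As an illustration only, the hyperparameters above would map onto a Hugging Face `Trainer` setup roughly as follows. This is a hedged sketch, not the original training script; `train_dataset` and `eval_dataset` are placeholders for tokenized OSDG-CD splits.

```python
from transformers import TrainingArguments, Trainer

# Hypothetical setup mirroring the listed hyperparameters; not the original training code
training_args = TrainingArguments(
    output_dir="sdg-bert-finetune",
    num_train_epochs=3,
    learning_rate=5e-5,
    per_device_train_batch_size=16,
)

trainer = Trainer(
    model=model,                  # AutoModelForSequenceClassification with num_labels=16
    args=training_args,
    train_dataset=train_dataset,  # placeholder: tokenized OSDG-CD training split
    eval_dataset=eval_dataset,    # placeholder: tokenized validation split
)
trainer.train()
```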
## Evaluation
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
- Accuracy = 0.9
- Matthews correlation = 0.89
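A short sketch of how these metrics can be reproduced with scikit-learn, assuming `y_true` and `y_pred` hold the gold and predicted SDG class indices for a held-out split (the values below are placeholders):

```python
from sklearn.metrics import accuracy_score, matthews_corrcoef

# Placeholder lists of gold / predicted SDG class indices for a held-out split
y_true = [0, 3, 6, 6, 12]
y_pred = [0, 3, 6, 5, 12]

print("Accuracy:", accuracy_score(y_true, y_pred))
print("Matthews correlation:", matthews_corrcoef(y_true, y_pred))
```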