---
language:
- en
- de
tags:
- nsp
- next-sentence-prediction
- gpt
datasets:
- wikipedia
metrics:
- accuracy
---
# mGPT-nsp
mGPT-nsp is fine-tuned for the Next Sentence Prediction task on the [wikipedia dataset](https://huggingface.co/datasets/wikipedia) using the [multilingual GPT](https://huggingface.co/THUMT/mGPT) model. It was introduced in this [paper](https://arxiv.org/abs/2307.07331) and first released on this page.
## Model description
mGPT-nsp is a Transformer-based model that was fine-tuned for the Next Sentence Prediction task on 11,000 English and 11,000 German Wikipedia articles. It uses the same tokenization and vocabulary as the [mT5 model](https://huggingface.co/google/mt5-base).
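As a rough illustration (not part of the original card), a sentence pair can be encoded with the shared mT5 tokenizer as sketched below; the example sentences are taken from the inference section further down:

```python
from transformers import MT5Tokenizer

# Illustration only: encode one sentence pair with the shared mT5 tokenizer/vocabulary
# (the same tokenizer that is loaded in the usage example below).
tokenizer = MT5Tokenizer.from_pretrained("tolga-ozturk/mGPT-nsp")
encoded = tokenizer("In Italy, pizza is presented unsliced.",
                    "However, it is served sliced in Turkey.",
                    truncation="longest_first", max_length=256, return_tensors="pt")
print(tokenizer.convert_ids_to_tokens(encoded.input_ids[0]))  # inspect the SentencePiece tokens
```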
## Intended uses
- Apply the model to Next Sentence Prediction tasks and compare the results with BERT models, since BERT natively supports this task (see the sketch after this list).
- See how to fine-tune an mGPT model using our [code](https://github.com/slds-lmu/stereotypes-multi/tree/main).
- Check our [paper](https://arxiv.org/abs/2307.07331) for its results.
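For reference, here is a minimal sketch of what a BERT baseline for the same pair could look like; the checkpoint choice below is an illustrative assumption, not the one used in the paper:

```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

# Hypothetical BERT NSP baseline; "bert-base-cased" is only an illustrative choice.
bert_tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
bert_model = BertForNextSentencePrediction.from_pretrained("bert-base-cased").eval()
inputs = bert_tokenizer("In Italy, pizza is presented unsliced.",
                        "However, it is served sliced in Turkey.", return_tensors="pt")
with torch.no_grad():
    logits = bert_model(**inputs).logits
print(torch.argmax(logits, dim=-1))  # for BERT's NSP head, index 0 means "B follows A"
```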
## How to use
You can use this model directly for next sentence prediction. Here is how to use it in PyTorch:
### Necessary Initialization
```python
import torch
from huggingface_hub import hf_hub_download
from transformers import MT5Tokenizer, GPT2Model

class ModelNSP(torch.nn.Module):
    """mGPT backbone with a small classification head for Next Sentence Prediction."""

    def __init__(self, pretrained_model="THUMT/mGPT"):
        super().__init__()
        self.core_model = GPT2Model.from_pretrained(pretrained_model)
        # Three-layer NSP head mapping the mean-pooled hidden states to 2 classes.
        self.nsp_head = torch.nn.Sequential(
            torch.nn.Linear(self.core_model.config.hidden_size, 300),
            torch.nn.Linear(300, 300),
            torch.nn.Linear(300, 2),
        )

    def forward(self, input_ids, attention_mask=None):
        hidden_states = self.core_model(input_ids, attention_mask=attention_mask)[0]
        return self.nsp_head(hidden_states.mean(dim=1)).softmax(dim=-1)

# Wrap in DataParallel before loading so the state-dict keys (with their "module." prefix) match.
model = torch.nn.DataParallel(ModelNSP().eval())
model.load_state_dict(torch.load(hf_hub_download(repo_id="tolga-ozturk/mGPT-nsp", filename="model_weights.bin")))
tokenizer = MT5Tokenizer.from_pretrained("tolga-ozturk/mGPT-nsp")
```
### Inference
```python
# Each tuple is (sentence A, candidate next sentence B).
batch_texts = [("In Italy, pizza is presented unsliced.", "The sky is blue."),
               ("In Italy, pizza is presented unsliced.", "However, it is served sliced in Turkey.")]
encoded_dict = tokenizer.batch_encode_plus(batch_text_or_text_pairs=batch_texts, truncation="longest_first",
                                           padding=True, max_length=256, return_tensors="pt", return_attention_mask=True)
# Predicted class index for each sentence pair.
print(torch.argmax(model(encoded_dict.input_ids, attention_mask=encoded_dict.attention_mask), dim=-1))
```
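The printed values are class indices, one per sentence pair. As a small convenience (not part of the original card), the call can be wrapped in a helper that returns the raw softmax scores instead; note that the card does not state which of the two indices means "is the next sentence", so verify the mapping against the fine-tuning code before relying on it:

```python
# Hypothetical helper reusing the objects defined above; returns softmax scores
# over the two NSP classes for each (sentence A, sentence B) pair.
def nsp_scores(pairs, max_length=256):
    enc = tokenizer.batch_encode_plus(batch_text_or_text_pairs=pairs, truncation="longest_first",
                                      padding=True, max_length=max_length,
                                      return_tensors="pt", return_attention_mask=True)
    with torch.no_grad():
        return model(enc.input_ids, attention_mask=enc.attention_mask)

print(nsp_scores(batch_texts))  # two probabilities per pair
```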
### Training Metrics
<img src="https://huggingface.co/tolga-ozturk/mgpt-nsp/resolve/main/metrics.png">
## BibTeX entry and citation info
```bibtex
@misc{ozturk2023different,
title={How Different Is Stereotypical Bias Across Languages?},
author={Ibrahim Tolga Öztürk and Rostislav Nedelchev and Christian Heumann and Esteban Garces Arias and Marius Roger and Bernd Bischl and Matthias Aßenmacher},
year={2023},
eprint={2307.07331},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
This work was done with the Ludwig-Maximilians-Universität Statistics group. Don't forget to check out [their Hugging Face page](https://huggingface.co/misoda) for other interesting works!