---
license: mit
datasets:
- tsac
language:
- ar
---
# Disclaimer
I do not own, distribute, or take credit for this model. All copyrights belong to [InstaDeep](https://huggingface.co/InstaDeepAI) under the [MIT license](https://github.com/instadeepai/tunbert/).
# How to load the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("not-lain/TunBERT")
model = AutoModelForSequenceClassification.from_pretrained("not-lain/TunBERT", trust_remote_code=True)
```
**IMPORTANT**: make sure to pass `trust_remote_code=True`, since this model relies on custom code hosted on the Hub.
# How to use the model
```python
text = "[insert text here]"
inputs = tokenizer(text, return_tensors="pt")  # the model expects PyTorch tensors
output = model(**inputs)
```
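The forward pass above returns raw logits rather than a label. A minimal sketch of turning logits into a prediction, assuming the usual two-class sentiment setup from the TSAC dataset (the logits below are made-up example values, not the output of a real forward pass):

```python
import torch

# Hypothetical logits for one input, as if taken from output.logits
logits = torch.tensor([[-1.2, 2.3]])

# Softmax converts logits into class probabilities
probs = torch.softmax(logits, dim=-1)

# The predicted class is the index with the highest probability
pred = probs.argmax(dim=-1).item()
```

Check the model's config (`model.config.id2label`) to see how class indices map to sentiment labels, since that mapping is defined by the checkpoint.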