---
license: mit
datasets:
- tsac
language:
- ar
---

This is a converted version of [InstaDeep's](https://huggingface.co/InstaDeepAI) [TunBERT](https://github.com/instadeepai/tunbert/), ported from NeMo to the safetensors format.

Make sure to read the original model [license](https://github.com/instadeepai/tunbert/blob/main/LICENSE).
<details>
<summary>Architectural changes</summary>
  
## Original model head

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6527e89a8808d80ccff88b7a/b-uXLwsi4n1Tc7-OtHe9b.png)


## This model head

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6527e89a8808d80ccff88b7a/xG-tOQscrvxb4wQm_2n-r.png)

</details>



# How to load the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("not-lain/TunBERT")
# trust_remote_code=True is required because the model ships custom modeling code
model = AutoModelForSequenceClassification.from_pretrained("not-lain/TunBERT", trust_remote_code=True)
```


# How to use the model
```python
text = "[insert text here]"
inputs = tokenizer(text, return_tensors="pt")
output = model(**inputs)
```
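
The model is loaded through `AutoModelForSequenceClassification`, so `output` should be a standard sequence-classification output with a `logits` tensor. As a rough sketch (assuming the usual `SequenceClassifierOutput` and that the config may or may not define an `id2label` mapping), you can read off the predicted label like this:

```python
import torch

# output.logits has shape (batch_size, num_labels); softmax turns them into class probabilities
probs = torch.softmax(output.logits, dim=-1)
predicted_class_id = int(probs.argmax(dim=-1))

# id2label may not be populated in the config; fall back to the raw class id if it is missing
label = model.config.id2label.get(predicted_class_id, predicted_class_id)
print(label, probs.tolist())
```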

**IMPORTANT**:
* Make sure to enable `trust_remote_code=True` when loading the model.
* Avoid using the `pipeline` API with this model.