---
license: mit
datasets:
- tsac
language:
- ar
---
This is a converted version of [Instadeep's](https://huggingface.co/InstaDeepAI) [TunBERT](https://github.com/instadeepai/tunbert/) from NeMo to safetensors. Make sure to read the original model [license](https://github.com/instadeepai/tunbert/blob/main/LICENSE).
# Architectural changes
## Original model head
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6527e89a8808d80ccff88b7a/b-uXLwsi4n1Tc7-OtHe9b.png)
## This model head
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6527e89a8808d80ccff88b7a/xG-tOQscrvxb4wQm_2n-r.png)
## Note
This is a WIP and any contributions are welcome.
# How to load the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("not-lain/TunBERT")
model = AutoModelForSequenceClassification.from_pretrained("not-lain/TunBERT", trust_remote_code=True)
```
# How to use the model
```python
text = "[insert text here]"
inputs = tokenizer(text, return_tensors="pt")
output = model(**inputs)
```
or you can use the pipeline:
```python
from transformers import pipeline

pipe = pipeline(model="not-lain/TunBERT", tokenizer="not-lain/TunBERT", trust_remote_code=True)
pipe("[insert text here]")
```
**IMPORTANT**:
* Make sure to enable `trust_remote_code=True`
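The forward pass above returns raw logits. Below is a minimal sketch of turning them into a predicted label, assuming the usual two-class sentiment setup from the TSAC dataset and that the `id2label` mapping in the model config is populated; adjust the label handling to whatever your checkpoint's config actually contains.
```python
import torch

# run the tokenizer and model as shown above
text = "[insert text here]"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    output = model(**inputs)

# output.logits has shape (batch_size, num_labels);
# softmax gives class probabilities and argmax picks the top class
probs = torch.softmax(output.logits, dim=-1)
predicted_id = probs.argmax(dim=-1).item()

# id2label comes from the model config; fall back to the raw index if it is not set
label = model.config.id2label.get(predicted_id, predicted_id)
print(label, probs[0, predicted_id].item())
```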