Instructions for using MayBashendy/Arabic_CrossPrompt_FineTuningAraBERT_noAug_TestTask7_holistic with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use MayBashendy/Arabic_CrossPrompt_FineTuningAraBERT_noAug_TestTask7_holistic with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="MayBashendy/Arabic_CrossPrompt_FineTuningAraBERT_noAug_TestTask7_holistic")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("MayBashendy/Arabic_CrossPrompt_FineTuningAraBERT_noAug_TestTask7_holistic")
model = AutoModelForSequenceClassification.from_pretrained("MayBashendy/Arabic_CrossPrompt_FineTuningAraBERT_noAug_TestTask7_holistic")
```
- Notebooks
- Google Colab
- Kaggle
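When the model is loaded directly (rather than through the `pipeline` helper), `AutoModelForSequenceClassification` returns raw logits, not probabilities. A minimal sketch of the softmax step that converts logits to label probabilities; the logit values below are hypothetical illustrations, not actual outputs of this model:

```python
import math

def softmax(logits):
    """Convert raw classifier logits into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for a two-label classification head.
logits = [2.0, -1.0]
probs = softmax(logits)
predicted = max(range(len(probs)), key=lambda i: probs[i])
print(predicted, round(probs[predicted], 3))
```

The `pipeline` helper performs this step internally and returns the label with its score directly.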