Instructions for using YakovElm/MariaDB_5_BERT_Under_Sampling with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use YakovElm/MariaDB_5_BERT_Under_Sampling with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="YakovElm/MariaDB_5_BERT_Under_Sampling")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("YakovElm/MariaDB_5_BERT_Under_Sampling")
model = AutoModelForSequenceClassification.from_pretrained("YakovElm/MariaDB_5_BERT_Under_Sampling")
```

- Notebooks
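Once the pipeline is loaded, each input text yields a dict with a `label` and a `score` (with `top_k=None`, a list of such dicts per input). A minimal sketch of picking the top prediction; the `results` value here is a hypothetical placeholder standing in for what `pipe(...)` would return, not real model output:

```python
# Hypothetical output shape for illustration; a real run would be:
#   results = pipe(["MDEV-1234: server crashes on startup"], top_k=None)
results = [[{"label": "LABEL_1", "score": 0.87},
            {"label": "LABEL_0", "score": 0.13}]]

# Pick the highest-scoring label for each input text
top = [max(preds, key=lambda p: p["score"]) for preds in results]
print(top[0]["label"], top[0]["score"])
```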
- Google Colab
- Kaggle
- Xet hash: e3f8633febdb5910a2abc110adeeb30452df6d3decd84e9083d71bde72e7ab1d
- Size of remote file: 438 MB
- SHA256: 0e8e15eddaa2f9a34fb4cf3cf876bb5e471ebc80a0660e04c51a6235d2f39d5c
Xet efficiently stores large files inside Git by intelligently splitting them into unique chunks, accelerating uploads and downloads.
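After downloading the weights, you can check them against the SHA256 listed above. A minimal sketch using the standard library; the local filename is a hypothetical placeholder for wherever your download landed:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file in chunks and return its hex SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical local path; compare against the SHA256 from the file listing:
# expected = "0e8e15eddaa2f9a34fb4cf3cf876bb5e471ebc80a0660e04c51a6235d2f39d5c"
# assert sha256_of("pytorch_model.bin") == expected
```

Streaming in chunks keeps memory use flat even for a 438 MB file.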