Instructions for using moshew/bert-tiny-sst2-distilled with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use moshew/bert-tiny-sst2-distilled with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="moshew/bert-tiny-sst2-distilled")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("moshew/bert-tiny-sst2-distilled")
model = AutoModelForSequenceClassification.from_pretrained("moshew/bert-tiny-sst2-distilled")
```

- Notebooks
- Google Colab
- Kaggle
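Once either loading path above has run, inference is a single call. Below is a minimal sketch; the example sentences are illustrative placeholders, and the returned label names depend on the `id2label` mapping in this model's config (they may be `LABEL_0`/`LABEL_1` rather than `negative`/`positive`), so check `model.config.id2label` instead of assuming the SST-2 convention:

```python
# Minimal inference sketch for moshew/bert-tiny-sst2-distilled.
# Example sentences are illustrative; label names come from the model's
# config (id2label) and may differ from "negative"/"positive".
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification

model_id = "moshew/bert-tiny-sst2-distilled"

# 1) Pipeline: returns a list of {"label": ..., "score": ...} dicts.
pipe = pipeline("text-classification", model=model_id)
print(pipe(["a gorgeous, witty, seductive movie.", "the plot is paper-thin."]))

# 2) Direct model call: tokenize, forward pass, softmax over the logits.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("a gorgeous, witty, seductive movie.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)[0]
pred = probs.argmax().item()
print(model.config.id2label[pred], probs[pred].item())
```

The pipeline handles tokenization and the softmax internally; the direct call is useful when you need raw logits or custom batching.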