Instructions to use rose-e-wang/priorKnowledge_lr0.00002 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use rose-e-wang/priorKnowledge_lr0.00002 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="rose-e-wang/priorKnowledge_lr0.00002")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("rose-e-wang/priorKnowledge_lr0.00002")
model = AutoModelForSequenceClassification.from_pretrained("rose-e-wang/priorKnowledge_lr0.00002")
```
- Notebooks
- Google Colab
- Kaggle
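When loading the model directly rather than through the pipeline, the sequence-classification head returns raw logits; a softmax converts them into the per-label scores the pipeline reports. A minimal sketch of that post-processing step (the logit values below are hypothetical, not output from this model):

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits from a two-class sequence-classification head.
logits = [2.0, 0.5]
probs = softmax(logits)
print(probs)  # probabilities sum to 1; the higher logit yields the higher score
```

In practice the same conversion is available as `torch.softmax(outputs.logits, dim=-1)` on the model's output object.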
- Xet hash: aaf709df08f3c3b440089cbfae53d0d5849a283e65724880d67eeb7512702049
- Size of remote file: 1.42 GB
- SHA256: 772b0300ff62ff8d7f9aaac880d02b10e87c6a27c9bbb73fc9531f0b50942d94
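The SHA-256 above can be checked against a downloaded copy of the weights with Python's standard `hashlib`; a short sketch (the local file path is hypothetical):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    # Stream the file in 1 MiB chunks so a 1.42 GB file never needs to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical local path to the downloaded weights file:
# expected = "772b0300ff62ff8d7f9aaac880d02b10e87c6a27c9bbb73fc9531f0b50942d94"
# assert sha256_of("model.safetensors") == expected
```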
Xet efficiently stores large files inside Git by splitting them into unique chunks, deduplicating storage and accelerating uploads and downloads.