Instructions for using cuadron11/modelBsc with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use cuadron11/modelBsc with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="cuadron11/modelBsc")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("cuadron11/modelBsc")
model = AutoModelForTokenClassification.from_pretrained("cuadron11/modelBsc")
```

- Notebooks
- Google Colab
- Kaggle
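A token-classification pipeline returns one dictionary per predicted token. As a minimal, self-contained sketch of how to post-process that output, the snippet below groups tokens tagged with the common B-/I- scheme into entity spans. The sample predictions are illustrative only, not actual output of cuadron11/modelBsc, whose label set depends on its training data.

```python
# Illustrative token-classification output in the common B-/I- tagging scheme.
# Real predictions from cuadron11/modelBsc may use different labels.
predictions = [
    {"word": "Barack", "entity": "B-PER", "score": 0.99},
    {"word": "Obama", "entity": "I-PER", "score": 0.98},
    {"word": "visited", "entity": "O", "score": 0.99},
    {"word": "Paris", "entity": "B-LOC", "score": 0.97},
]

def group_entities(preds):
    """Merge consecutive B-/I- tagged tokens into (text, label) spans."""
    spans, current = [], None
    for p in preds:
        tag = p["entity"]
        if tag.startswith("B-"):
            # A B- tag starts a new entity span.
            if current:
                spans.append(current)
            current = [p["word"], tag[2:]]
        elif tag.startswith("I-") and current and current[1] == tag[2:]:
            # An I- tag with a matching label continues the current span.
            current[0] += " " + p["word"]
        else:
            # An O tag (or mismatched I-) closes any open span.
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [tuple(s) for s in spans]

print(group_entities(predictions))  # [('Barack Obama', 'PER'), ('Paris', 'LOC')]
```

Note that recent Transformers versions can do this grouping for you by passing `aggregation_strategy="simple"` to the pipeline.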