Instructions for using Qingspring/dummy-model with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use Qingspring/dummy-model with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="Qingspring/dummy-model")

# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("Qingspring/dummy-model")
model = AutoModelForMaskedLM.from_pretrained("Qingspring/dummy-model")
```

- Notebooks
- Google Colab
- Kaggle
- Xet hash: f1cf6515c963744bcd84061c63fe5ebc67812fef57061a50d53a5a58652aed9f
- Size of remote file: 443 MB
- SHA256: ec976718b6a803fc1e2e2ca8691fc941562774fbf518a5676931f1af8712ed53
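To confirm a downloaded file matches the SHA256 listed above, its digest can be computed locally with Python's standard library. A minimal sketch; the file path below is a placeholder, not a filename stated on this page:

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 hex digest of a file, reading in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the SHA256 listed above (path is a placeholder):
expected = "ec976718b6a803fc1e2e2ca8691fc941562774fbf518a5676931f1af8712ed53"
# assert sha256_of_file("downloaded-file") == expected
```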
Xet efficiently stores large files inside Git by intelligently splitting them into unique chunks, which accelerates uploads and downloads.
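Xet's actual chunking algorithm is not specified on this page. As an illustrative sketch only, content-defined chunking schemes typically pick chunk boundaries from a rolling hash over the bytes themselves, so identical content produces identical chunks regardless of where it appears in a file, and repeated chunks need to be stored only once:

```python
def chunk_boundaries(data: bytes, min_len: int = 16, mask: int = 0x3FF) -> list:
    """Split data into content-defined chunks (illustrative, not Xet's algorithm).

    A boundary is declared when a simple rolling hash of the bytes seen so
    far matches a bit mask, subject to a minimum chunk length.
    """
    chunks = []
    start = 0
    h = 0
    for i, b in enumerate(data):
        h = ((h << 1) ^ b) & 0xFFFFFFFF  # toy rolling hash
        if i - start >= min_len and (h & mask) == 0:
            chunks.append(data[start:i + 1])
            start = i + 1
            h = 0
    if start < len(data):
        chunks.append(data[start:])
    return chunks

# Repeated content yields repeated chunks, so storing only the unique
# chunks deduplicates the file.
```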