Instructions for using catpawdev/QA_model with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use catpawdev/QA_model with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("question-answering", model="catpawdev/QA_model")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("catpawdev/QA_model")
model = AutoModelForQuestionAnswering.from_pretrained("catpawdev/QA_model")
```

- Notebooks
- Google Colab
- Kaggle
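Once loaded, the question-answering pipeline takes a question and a context string and returns the extracted answer span. A minimal usage sketch (the question and context text below are illustrative, not from the model card):

```python
from transformers import pipeline

# Load the question-answering pipeline for this model
pipe = pipeline("question-answering", model="catpawdev/QA_model")

# Illustrative inputs: any question/context pair works
result = pipe(
    question="Where do penguins live?",
    context=(
        "Penguins are flightless birds that live almost exclusively "
        "in the Southern Hemisphere."
    ),
)

# The pipeline returns a dict with the answer text, its character
# offsets in the context, and a confidence score
print(result["answer"], result["score"])
```

The `score` field can be used to reject low-confidence answers, and `start`/`end` give the answer's character offsets within the context.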
- Xet hash: 3260980035806ca0de7de36e8c3201c385a7c9f52241a54d6a248f06ce668c63
- Size of remote file: 436 MB
- SHA256: 840ea93b958a67654243e459e29388c8a9bbf58c8ad8559d736c256397b82259
Xet efficiently stores large files inside Git by intelligently splitting files into unique chunks, accelerating both uploads and downloads.