Instructions to use Chetna19/bert_large_subjqa_model_v4 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Chetna19/bert_large_subjqa_model_v4 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("question-answering", model="Chetna19/bert_large_subjqa_model_v4")

# Load model directly
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("Chetna19/bert_large_subjqa_model_v4")
model = AutoModelForQuestionAnswering.from_pretrained("Chetna19/bert_large_subjqa_model_v4")
```
- Notebooks
- Google Colab
- Kaggle
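Once loaded, the pipeline can be called with a question and a context passage. The strings below are illustrative examples, not taken from the model's training data, and the first call downloads roughly 1.34 GB of weights:

```python
from transformers import pipeline

# Question-answering pipeline for the model (downloads weights on first use)
pipe = pipeline("question-answering", model="Chetna19/bert_large_subjqa_model_v4")

# Illustrative SubjQA-style subjective question over a review snippet
result = pipe(
    question="How is the battery life?",
    context="I have used this laptop for a month. The battery life is excellent, "
            "easily lasting a full workday, though the keyboard feels a bit mushy.",
)

# The pipeline returns a dict with the extracted answer span, a confidence
# score, and the character offsets of the span within the context
print(result["answer"], result["score"], result["start"], result["end"])
```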
- Xet hash: 1bd2533525ac9a8e3c023100c32ba7d6e2e3a034ae2bec199734509297f80729
- Size of remote file: 1.34 GB
- SHA256: 2b00b0e73a2fc049a01a14cc7e98d900119889a364ea8b67c8fbcf701ed8ca46
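The SHA256 listed above can be verified after download using Python's standard `hashlib`; the local file path in the commented lines is a placeholder for wherever the weights were saved:

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Compute the SHA256 hex digest of a file, reading it in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Digest published on the model page
expected = "2b00b0e73a2fc049a01a14cc7e98d900119889a364ea8b67c8fbcf701ed8ca46"

# Placeholder path to the downloaded weights file:
# digest = sha256_of_file("model.safetensors")
# assert digest == expected, "checksum mismatch: file may be corrupt"
```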
Xet efficiently stores large files inside Git by splitting them into unique chunks, accelerating uploads and downloads.