Instructions for using xezezeze/use_data_finetuning with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use xezezeze/use_data_finetuning with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("object-detection", model="xezezeze/use_data_finetuning")

# Load model directly
from transformers import AutoImageProcessor, AutoModelForObjectDetection

processor = AutoImageProcessor.from_pretrained("xezezeze/use_data_finetuning")
model = AutoModelForObjectDetection.from_pretrained("xezezeze/use_data_finetuning")
```
- Notebooks
- Google Colab
- Kaggle
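As a sketch of what post-processing the pipeline output might look like: the Transformers object-detection pipeline returns a list of dicts with `score`, `label`, and `box` keys, so a common first step is filtering detections by confidence. The threshold and the sample detections below are illustrative assumptions, not output from this model.

```python
# Filter object-detection results by confidence score.
# The dict shape (score/label/box) matches the Transformers
# object-detection pipeline output; the sample values are made up.

def filter_detections(detections, threshold=0.9):
    """Keep only detections whose confidence meets the threshold."""
    return [d for d in detections if d["score"] >= threshold]

# Illustrative stand-in for `pipe(image)` output
sample = [
    {"score": 0.98, "label": "cat",
     "box": {"xmin": 10, "ymin": 20, "xmax": 110, "ymax": 200}},
    {"score": 0.42, "label": "dog",
     "box": {"xmin": 5, "ymin": 5, "xmax": 50, "ymax": 60}},
]

kept = filter_detections(sample, threshold=0.9)
print([d["label"] for d in kept])  # → ['cat']
```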
- Xet hash: 038ed36183bfbf7fea79ad3aaa38b6381c0946d25c726689ff3ab69d37ec84ff
- Size of remote file: 167 MB
- SHA256: 3bb634eb2501112bd1e10f7ec62d1d2354d35d63a0426364d9832492f03023df
Xet efficiently stores large files inside Git, intelligently splitting files into unique chunks to accelerate uploads and downloads.
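As a toy illustration of the idea behind content-defined chunking (this is not Xet's actual algorithm or parameters): a hash over a sliding window of recent bytes decides where chunk boundaries fall, so boundaries depend on content rather than position, and identical regions of two files tend to yield identical chunks that only need to be stored once.

```python
# Toy content-defined chunking: place a boundary wherever a hash of the
# last `window` bytes satisfies a mask condition. Because the hash depends
# only on local content, boundaries re-synchronize on shared content.
# Illustration only -- not Xet's real chunking scheme.

def chunk(data: bytes, window: int = 16, mask: int = 0xFF):
    chunks, start = [], 0
    for i in range(len(data)):
        if i + 1 >= window and i - start + 1 >= window:
            # Hash of the last `window` bytes (a deliberately simple toy hash)
            h = sum(data[i + 1 - window:i + 1])
            if (h & mask) == mask:
                chunks.append(data[start:i + 1])
                start = i + 1
    if start < len(data):
        chunks.append(data[start:])  # final partial chunk
    return chunks
```

Concatenating the chunks always reproduces the original bytes; deduplication comes from storing each distinct chunk a single time.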