---
language: en
license: apache-2.0
datasets:
- squad
metrics:
- squad
---
# Model Card for ONNX Conversion of distilbert-base-cased-distilled-squad
# Model Details
## Model Description
This model is an ONNX export of a DistilBERT-base-cased checkpoint that was fine-tuned using (a second step of) knowledge distillation on SQuAD v1.1.
- **Developed by:** Philipp Schmid
- **Shared by [Optional]:** Hugging Face
- **Model type:** Question Answering
- **Language(s) (NLP):** en
- **License:** Apache-2.0
- **Related Models:** [distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert-base-cased-distilled-squad)
- **Parent Model:** distilbert
- **Resources for more information:**
  - [Space](https://huggingface.co/spaces/krrishD/philschmid_distilbert-onnx)
  - [Blog Post](https://www.philschmid.de/convert-transformers-to-onnx)
# Uses
## Direct Use
This model can be used for question answering.
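For example, the checkpoint can be loaded through the generic `question-answering` pipeline. This is a minimal sketch; the question and context are illustrative, and it assumes the repository contains weights the pipeline can load directly:

```python
from transformers import pipeline

# Build a question-answering pipeline from the hosted checkpoint.
qa = pipeline("question-answering", model="philschmid/distilbert-onnx")

# Illustrative inputs: the pipeline returns the most likely answer span.
result = qa(
    question="What was the model fine-tuned on?",
    context="The model was fine-tuned on SQuAD v1.1 using knowledge distillation.",
)
print(result["answer"], result["score"])
```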
## Downstream Use [Optional]
More information needed.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
See the [SQuAD v1.1 dataset card](https://huggingface.co/datasets/squad) for further details.
## Training Procedure
### Preprocessing
See the [distilbert-base-cased model card](https://huggingface.co/distilbert-base-cased) for further details.
### Speeds, Sizes, Times
See the [distilbert-base-cased model card](https://huggingface.co/distilbert-base-cased) for further details.
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
More information needed
### Factors
More information needed
### Metrics
More information needed
## Results
This model reaches an F1 score of 87.1 on the SQuAD v1.1 dev set (for comparison, the bert-base-cased version of BERT reaches an F1 score of 88.7).
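For reference, SQuAD's F1 is a token-overlap F1 between the predicted and gold answer spans. A simplified sketch of the computation (the official evaluation script additionally strips punctuation and articles before comparing):

```python
from collections import Counter

def squad_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1, simplified from the SQuAD evaluation script."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    # Count tokens present in both the prediction and the reference.
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```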
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed
# Citation
**BibTeX:**
More information needed
**APA:**
More information needed
# Glossary [optional]
1. **What is ONNX?**
ONNX (Open Neural Network Exchange) is an open standard and file format for representing machine learning models. It defines a common set of operators and a common serialization format so that deep learning models can be exchanged across a wide variety of frameworks, including PyTorch and TensorFlow.
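Because this repository ships an exported ONNX graph, it can also be run directly with ONNX Runtime. The sketch below is hypothetical: it assumes the exported file is named `model.onnx` and exposes the standard DistilBERT question-answering inputs (`input_ids`, `attention_mask`) and start/end-logit outputs.

```python
import numpy as np
import onnxruntime as ort
from huggingface_hub import hf_hub_download
from transformers import AutoTokenizer

# Download the exported graph; "model.onnx" is an assumed filename,
# so check the repository's file list if it differs.
onnx_path = hf_hub_download(repo_id="philschmid/distilbert-onnx", filename="model.onnx")
session = ort.InferenceSession(onnx_path)

tokenizer = AutoTokenizer.from_pretrained("philschmid/distilbert-onnx")
question = "What format was the model converted to?"
context = "The model was converted to ONNX for faster inference."
inputs = tokenizer(question, context, return_tensors="np")

# ONNX Runtime expects int64 tensors for the token inputs.
feed = {
    "input_ids": inputs["input_ids"].astype(np.int64),
    "attention_mask": inputs["attention_mask"].astype(np.int64),
}
start_logits, end_logits = session.run(None, feed)

# Decode the highest-scoring answer span.
start = int(np.argmax(start_logits))
end = int(np.argmax(end_logits))
print(tokenizer.decode(inputs["input_ids"][0, start : end + 1]))
```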
# More Information [optional]
More information needed
# Model Card Authors [optional]
Philipp Schmid in collaboration with Ezi Ozoani and the Hugging Face team.
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

# Load the tokenizer and question-answering model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("philschmid/distilbert-onnx")
model = AutoModelForQuestionAnswering.from_pretrained("philschmid/distilbert-onnx")
```
</details>