---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Training ran for 5 epochs (18,040 steps). Training loss fell from 3.09 at step 500 to 0.05 at step 18,000, with the learning rate decaying linearly from ≈5e-5 to ≈0. Per-epoch evaluation results from the training log:

| Epoch | Step   | Eval loss | Eval CER |
|------:|-------:|----------:|---------:|
| 1     | 3,608  | 0.9152    | 0.1013   |
| 2     | 7,216  | 0.4365    | 0.0371   |
| 3     | 10,824 | 0.2631    | 0.0213   |
| 4     | 14,432 | 0.1780    | 0.0120   |
| 5     | 18,040 | 0.1095    | 0.0074   |

Final training statistics: runtime 58,940 s (≈16.4 h), 1.224 samples/s, 0.306 steps/s, total FLOPs ≈ 9.69e19, average training loss 0.5750.
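The metrics above are in the list-of-dicts format that 🤗 Trainer accumulates in `state.log_history`. A minimal sketch of pulling the per-epoch evaluation entries out of such a list (the values below are copied from this model's log; the helper name is ours):

```python
# Extract per-epoch evaluation metrics from a Trainer-style log history.
# Entries copied (abridged) from this model's training log.
log_history = [
    {"loss": 3.0878, "learning_rate": 4.86e-05, "epoch": 0.14, "step": 500},
    {"eval_loss": 0.9152, "eval_cer": 0.1013, "epoch": 1.0, "step": 3608},
    {"eval_loss": 0.4365, "eval_cer": 0.0371, "epoch": 2.0, "step": 7216},
    {"eval_loss": 0.2631, "eval_cer": 0.0213, "epoch": 3.0, "step": 10824},
    {"eval_loss": 0.1780, "eval_cer": 0.0120, "epoch": 4.0, "step": 14432},
    {"eval_loss": 0.1095, "eval_cer": 0.0074, "epoch": 5.0, "step": 18040},
]

def eval_rows(history):
    """Keep only evaluation entries (those carrying an eval_cer metric)."""
    return [e for e in history if "eval_cer" in e]

for row in eval_rows(log_history):
    print(f"epoch {row['epoch']:.0f}: loss={row['eval_loss']:.4f} cer={row['eval_cer']:.4f}")
```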
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
- **Epochs:** 5 (18,040 steps, per the training log)
- **Learning rate:** ≈5e-5 initially, decayed linearly to ≈0 (per the training log)
- **Effective batch size:** 4 (inferred from samples/s ÷ steps/s in the training log)
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
From the training log: total training runtime 58,940 s (≈16.4 h) at 1.224 samples/s (0.306 steps/s); total FLOPs ≈ 9.69e19. Each evaluation pass took ≈2,800 s at ≈1.28 samples/s.
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
- **Character Error Rate (CER):** reported as `eval_cer` in the training log. CER is the character-level edit distance between prediction and reference, normalized by reference length, and is standard for transcription-style outputs.
- **Evaluation loss:** reported as `eval_loss` in the training log.
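A minimal sketch of how a CER computation works (Levenshtein edit distance over reference length); the actual evaluation pipeline for this model is not documented, and any library it used (e.g. `evaluate` or `jiwer`) is an assumption:

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: edit distance / reference length."""
    m, n = len(reference), len(hypothesis)
    # prev[j] holds the edit distance between reference[:i-1] and hypothesis[:j]
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n] / max(m, 1)

print(cer("kitten", "sitting"))  # 3 edits over 6 reference characters = 0.5
```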
### Results
From the training log: after 5 epochs (18,040 steps) the model reached an evaluation loss of 0.1095 and an evaluation CER of 0.0074, down from 0.9152 and 0.1013 after the first epoch.
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]