---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: AnaniyaX/decision-distilbert-uncased
  results: []
datasets:
- textvqa
- squad
widget:
- text: 'What does the sign say?'
  example_title: 'Visual Question Example 1'
- text: 'What does string theory talk about?'
  example_title: 'Textual Question Example 1'
---

# AnaniyaX/decision-distilbert-uncased

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the TextVQA and SQuAD datasets.
It achieves the following results after the final training epoch:
- Train Loss: 0.0097
- Train Accuracy: 0.9976
- Epoch: 9

## Model description

This model is a text-visual question classifier: given a question, it predicts whether the question is text-based or visual-based. It applies natural language processing techniques to the wording of the question to determine its type. The model was trained on a large set of questions labeled as either text-based (drawn from SQuAD) or visual-based (drawn from TextVQA), and reaches high accuracy on this binary task (99.76% training accuracy after 10 epochs).

## Intended uses & limitations

#### Applications

This model can be used in applications such as chatbots, virtual assistants, search engines, and recommendation systems. For example, it can help a chatbot route a question to the appropriate backend (text retrieval vs. visual question answering) and thereby respond more accurately, and it can help a search engine filter out irrelevant content based on the type of question being asked.

#### Limitations

The model may not perform well on questions that are ambiguous or have multiple interpretations. It may also be biased toward certain types of questions that are over-represented in the training data.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 2e-06, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Accuracy | Epoch |
|:----------:|:--------------:|:-----:|
| 0.1914     | 0.9444         | 0     |
| 0.0711     | 0.9768         | 1     |
| 0.0531     | 0.9826         | 2     |
| 0.0427     | 0.9868         | 3     |
| 0.0330     | 0.9904         | 4     |
| 0.0264     | 0.9923         | 5     |
| 0.0195     | 0.9947         | 6     |
| 0.0149     | 0.9960         | 7     |
| 0.0123     | 0.9965         | 8     |
| 0.0097     | 0.9976         | 9     |

### Framework versions

- Transformers 4.27.2
- TensorFlow 2.11.0
- Datasets 2.10.1
- Tokenizers 0.13.2
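
## How to use

The snippet below is a minimal usage sketch for querying the classifier with the `transformers` library in TensorFlow, matching the framework versions listed above. It assumes the checkpoint carries a standard DistilBERT sequence-classification head and that `config.id2label` names the text-based and visual-based classes; check the repository's `config.json` for the exact label mapping. The example questions are taken from the widget examples above.

```python
# Minimal usage sketch (assumption: standard sequence-classification head,
# with config.id2label mapping the two class ids to the text-based /
# visual-based labels).
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "AnaniyaX/decision-distilbert-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

questions = [
    "What does the sign say?",             # expected: visual-based
    "What does string theory talk about?"  # expected: text-based
]

# Tokenize, run the model, and take the arg-max class per question.
inputs = tokenizer(questions, padding=True, truncation=True, return_tensors="tf")
logits = model(**inputs).logits
pred_ids = tf.argmax(logits, axis=-1).numpy()

for question, pred_id in zip(questions, pred_ids):
    print(f"{question} -> {model.config.id2label[int(pred_id)]}")
```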