---
license: apache-2.0
base_model: bedus-creation/mBart-small-dataset-i-eng-lim
tags:
- generated_from_keras_callback
model-index:
- name: bedus-creation/mBart-small-dataset-ii-eng-lim-004
  results: []
---

# bedus-creation/mBart-small-dataset-ii-eng-lim-004

This model is a fine-tuned version of [bedus-creation/mBart-small-dataset-i-eng-lim](https://huggingface.co/bedus-creation/mBart-small-dataset-i-eng-lim) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2334
- Validation Loss: 0.3155
- Epoch: 69

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-04, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.9940     | 0.4653          | 0     |
| 0.4659     | 0.3647          | 1     |
| 0.4011     | 0.3331          | 2     |
| 0.3798     | 0.3284          | 3     |
| 0.3640     | 0.3210          | 4     |
| 0.3539     | 0.3087          | 5     |
| 0.3456     | 0.3106          | 6     |
| 0.3377     | 0.3049          | 7     |
| 0.3340     | 0.2998          | 8     |
| 0.3285     | 0.2974          | 9     |
| 0.3246     | 0.2980          | 10    |
| 0.3202     | 0.2950          | 11    |
| 0.3174     | 0.2910          | 12    |
| 0.3154     | 0.2932          | 13    |
| 0.3124     | 0.2882          | 14    |
| 0.3094     | 0.2895          | 15    |
| 0.3092     | 0.2880          | 16    |
| 0.3073     | 0.2861          | 17    |
| 0.3043     | 0.2842          | 18    |
| 0.3037     | 0.2856          | 19    |
| 0.3009     | 0.2834          | 20    |
| 0.2999     | 0.2859          | 21    |
| 0.2983     | 0.2836          | 22    |
| 0.2973     | 0.2809          | 23    |
| 0.2952     | 0.2825          | 24    |
| 0.2942     | 0.2809          | 25    |
| 0.2933     | 0.2792          | 26    |
| 0.2914     | 0.2813          | 27    |
| 0.2898     | 0.2817          | 28    |
| 0.2884     | 0.2794          | 29    |
| 0.2866     | 0.2797          | 30    |
| 0.2853     | 0.2797          | 31    |
| 0.2849     | 0.2844          | 32    |
| 0.2835     | 0.2798          | 33    |
| 0.2821     | 0.2803          | 34    |
| 0.2823     | 0.2828          | 35    |
| 0.2798     | 0.2796          | 36    |
| 0.2797     | 0.2788          | 37    |
| 0.2766     | 0.2811          | 38    |
| 0.2765     | 0.2800          | 39    |
| 0.2747     | 0.2852          | 40    |
| 0.2731     | 0.2825          | 41    |
| 0.2720     | 0.2841          | 42    |
| 0.2709     | 0.2855          | 43    |
| 0.2693     | 0.2843          | 44    |
| 0.2678     | 0.2863          | 45    |
| 0.2667     | 0.2912          | 46    |
| 0.2645     | 0.2863          | 47    |
| 0.2633     | 0.2862          | 48    |
| 0.2618     | 0.2881          | 49    |
| 0.2607     | 0.2890          | 50    |
| 0.2585     | 0.2928          | 51    |
| 0.2585     | 0.2903          | 52    |
| 0.2562     | 0.2904          | 53    |
| 0.2545     | 0.2902          | 54    |
| 0.2541     | 0.2937          | 55    |
| 0.2528     | 0.2930          | 56    |
| 0.2512     | 0.3014          | 57    |
| 0.2484     | 0.2979          | 58    |
| 0.2478     | 0.3002          | 59    |
| 0.2460     | 0.3034          | 60    |
| 0.2449     | 0.3000          | 61    |
| 0.2442     | 0.3010          | 62    |
| 0.2418     | 0.3054          | 63    |
| 0.2399     | 0.3046          | 64    |
| 0.2395     | 0.3072          | 65    |
| 0.2374     | 0.3117          | 66    |
| 0.2368     | 0.3081          | 67    |
| 0.2351     | 0.3149          | 68    |
| 0.2334     | 0.3155          | 69    |

### Framework versions

- Transformers 4.33.3
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
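The card does not include a usage example. Below is a minimal sketch of loading this checkpoint for inference, assuming the standard `transformers` sequence-to-sequence API applies to this mBART variant (the checkpoint was trained with TensorFlow, per the framework versions above); the `max_length` value is an assumption, not taken from the card.

```python
# Hedged usage sketch for this model card's checkpoint.
# MODEL_ID comes from the card; generation settings are assumptions.
MODEL_ID = "bedus-creation/mBart-small-dataset-ii-eng-lim-004"


def translate(text: str, max_length: int = 128) -> str:
    """Translate one input sentence with the TF checkpoint.

    Downloads the model from the Hub on first call, so the imports
    are kept lazy to avoid requiring TensorFlow at module load time.
    """
    from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = TFAutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(text, return_tensors="tf")
    output_ids = model.generate(**inputs, max_length=max_length)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Call `translate("...")` to run a single sentence through the model; for batch use, tokenize a list of strings with `padding=True` instead.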