---
license: cc-by-4.0
base_model: Helsinki-NLP/opus-mt-en-de
tags:
- translation
- generated_from_trainer
model-index:
- name: pokemon-finetuned-opus-mt-en-de
  results: []
---

# pokemon-finetuned-opus-mt-en-de

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-de](https://huggingface.co/Helsinki-NLP/opus-mt-en-de) on a dataset of translated Pokemon names.
It achieves the following results on the evaluation set:
- Loss: 0.0554
- Exact Match: 0.9893

## Model description

This model is the same as the base [Helsinki-NLP/opus-mt-en-de](https://huggingface.co/Helsinki-NLP/opus-mt-en-de) model, but fine-tuned so that it properly translates Pokemon names.

## Intended uses & limitations

This model is part of this [tutorial repository](https://github.com/ajgrant6/Pokemon_LLM_Finetuner). It is intended only as a proof of concept, not for production use or deployment.

The model has not been tested to determine whether fine-tuning changed its behavior beyond a handful of Pokemon-related phrases.

## Training and evaluation data

The model was deliberately overfit to the training data: a list of translated Pokemon names taken from this [forum post](https://www.pokecommunity.com/threads/international-list-of-names-in-csv.460446/).

## Training procedure

The training and evaluation sets were identical, both consisting of the same list of translated Pokemon names. A hedged sketch of the setup appears at the end of this card.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
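
## Fine-tuning sketch

The hyperparameters above map onto `Seq2SeqTrainingArguments` roughly as follows. This is a reconstruction under stated assumptions, not the tutorial's exact script: the three name pairs are illustrative stand-ins for the full list from the forum post, and the preprocessing details are assumptions.

```python
from datasets import Dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

checkpoint = "Helsinki-NLP/opus-mt-en-de"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Tiny illustrative sample of English/German Pokemon name pairs; the real run
# used the full list from the linked forum post.
pairs = Dataset.from_dict({
    "en": ["Squirtle", "Charmander", "Bulbasaur"],
    "de": ["Schiggy", "Glumanda", "Bisasam"],
})

def tokenize(batch):
    # Tokenize the English source and attach the German target as labels.
    features = tokenizer(batch["en"], truncation=True)
    features["labels"] = tokenizer(text_target=batch["de"], truncation=True)["input_ids"]
    return features

tokenized = pairs.map(tokenize, batched=True, remove_columns=["en", "de"])

args = Seq2SeqTrainingArguments(
    output_dir="pokemon-finetuned-opus-mt-en-de",
    learning_rate=5e-5,                 # Adam with betas=(0.9, 0.999), epsilon=1e-8 (defaults)
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,                          # Native AMP (requires a CUDA GPU)
)

# Training and evaluation deliberately share the same data: the card states
# the model was purposely overfit to the name list.
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    eval_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)
trainer.train()
trainer.evaluate()  # reports eval loss; the card's Exact Match metric needs a custom compute_metrics
```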
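
## Example usage

A minimal inference sketch with Transformers. The model id below is a placeholder, since this card does not state the exact Hub path; point it at wherever the fine-tuned checkpoint actually lives.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder: replace with the actual Hub id or a local checkpoint directory.
model_id = "pokemon-finetuned-opus-mt-en-de"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Translate an English sentence containing a Pokemon name into German.
inputs = tokenizer("Squirtle is a Water-type Pokemon.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Marian-based checkpoints such as this one load through the generic Auto classes; `MarianTokenizer` and `MarianMTModel` would work equally well.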