Introduction
This project is a fully functional, user-friendly Gradio language translation app. It translates English sentences into French using a pre-trained model from Hugging Face that was fine-tuned for the task. To try it, clone the repository to a local directory, open a terminal in the repository root, and run "python gradio_LT.py" to launch an intuitive interface for seamless translation. Designed to be accessible and engaging, the project welcomes developers and language enthusiasts alike. 🌐📜🚀
eng-to-fra-model
This model is a fine-tuned version of Helsinki-NLP/opus-mt-en-fr on the kde4 dataset.
Model description
A language translation model built on the Transformers library. Loaded through AutoModelForSeq2SeqLM, it translates English sentences into French. Trained on parallel English-French data and fine-tuned for accuracy, it captures many of the nuances of both languages. Subword tokenization breaks sentences into smaller units the model can process.
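For illustration, here is a minimal sketch of loading the checkpoint for inference with AutoModelForSeq2SeqLM; the example sentence and the max_new_tokens limit are assumptions, not values from this repository:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "rajbhirud/eng-to-fra-model"  # this fine-tuned checkpoint on the Hub
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Tokenize an English sentence, generate the French translation, and decode it.
inputs = tokenizer("I love machine translation.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)  # assumed generation limit
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```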
Training and evaluation data
The KDE4 dataset from Hugging Face, a corpus of parallel English-French sentence pairs drawn from KDE software localization files, provided the training data for this English-to-French translation project. I fine-tuned a Seq2Seq model loaded with AutoModelForSeq2SeqLM and evaluated translation quality with the sacreBLEU metric, which scores n-gram overlap between the model's output and reference translations. Training used dynamic padding (each batch is padded only to its longest sequence) and the decoder's start-of-sequence token. The end result is a model ready to handle English-to-French translation tasks.
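A sketch of how this data and evaluation pipeline can be assembled is below. The train/test split size and max_length are assumptions rather than the exact values used, and the sacreBLEU metric is loaded through the companion evaluate library:

```python
import numpy as np
import evaluate
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorForSeq2Seq

base = "Helsinki-NLP/opus-mt-en-fr"
tokenizer = AutoTokenizer.from_pretrained(base)

# KDE4 ships aligned English-French sentence pairs in a single "train" split,
# so a held-out test set is carved off manually (10% is an assumption).
raw = load_dataset("kde4", lang1="en", lang2="fr")
split = raw["train"].train_test_split(test_size=0.1, seed=42)

def preprocess(examples):
    inputs = [pair["en"] for pair in examples["translation"]]
    targets = [pair["fr"] for pair in examples["translation"]]
    # text_target tokenizes the French side as labels for the decoder.
    return tokenizer(inputs, text_target=targets, max_length=128, truncation=True)

tokenized = split.map(preprocess, batched=True,
                      remove_columns=split["train"].column_names)

# Dynamic padding: each batch is padded only to its own longest sequence;
# padded label positions become -100 so the loss ignores them.
data_collator = DataCollatorForSeq2Seq(tokenizer)

metric = evaluate.load("sacrebleu")

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    score = metric.compute(predictions=decoded_preds,
                           references=[[label] for label in decoded_labels])
    return {"bleu": score["score"]}
```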
Training procedure
I used the Seq2SeqTrainer API, customizing its training arguments for this dataset to optimize the fine-tuned model's performance and accuracy.
Training hyperparameters
The following hyperparameters were used during training (a code sketch mapping them onto training arguments follows the list):
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
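Expressed as Seq2SeqTrainingArguments, those hyperparameters might look like this sketch; output_dir and predict_with_generate are assumptions, and tokenized, tokenizer, data_collator, and compute_metrics come from the data-preparation sketch above:

```python
from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainer, Seq2SeqTrainingArguments

model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-fr")

args = Seq2SeqTrainingArguments(
    output_dir="eng-to-fra-model",   # assumed output directory
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,                       # Native AMP mixed precision
    predict_with_generate=True,      # generate full sequences during eval for BLEU
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default optimizer.

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],  # from the data-preparation sketch above
    eval_dataset=tokenized["test"],
    data_collator=data_collator,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
```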
Gradio Interface
I've created a dedicated file, gradio_LT.py. Running it launches a Gradio user interface for sentence translation. Make sure transformers, gradio, and sentencepiece are installed in your environment beforehand.
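The exact contents of gradio_LT.py live in the repository; a minimal sketch of such a script (the labels and title here are illustrative) looks like this:

```python
import gradio as gr
from transformers import pipeline

# sentencepiece must be installed: the Marian tokenizer depends on it.
translator = pipeline("translation", model="rajbhirud/eng-to-fra-model")

def translate(sentence: str) -> str:
    return translator(sentence)[0]["translation_text"]

demo = gr.Interface(
    fn=translate,
    inputs=gr.Textbox(label="English sentence"),
    outputs=gr.Textbox(label="French translation"),
    title="English-to-French Translation",
)

if __name__ == "__main__":
    demo.launch()  # opens the UI in the browser
```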
Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0