Language Translation Model
This repository hosts a fine-tuned T5-small model for language translation. The model translates text between multiple languages, including English, Spanish, German, French, and Hindi.
Model Details
- Model Architecture: T5-small
- Task: Language Translation
- Dataset: Custom multilingual dataset
- Fine-tuning Framework: Hugging Face Transformers
- Quantization: Dynamic (int8) for efficiency
Usage
Installation
pip install transformers torch datasets
Loading the Model
from transformers import T5ForConditionalGeneration, T5Tokenizer
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
model_name = "AventIQ-AI/t5-language-translation"
model = T5ForConditionalGeneration.from_pretrained(model_name).to(device)
tokenizer = T5Tokenizer.from_pretrained(model_name)
Perform Translation
def translate_text(model, tokenizer, input_text, target_language):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    # T5 expects a task prefix describing the translation direction
    formatted_text = f"translate English to {target_language}: {input_text}"
    input_ids = tokenizer(formatted_text, return_tensors="pt").input_ids.to(device)
    with torch.no_grad():
        output_ids = model.generate(input_ids, max_length=50)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
# Test Translation
input_text = "Hello, how are you?"
target_language = "French" # Options: "Spanish", "German".
translated_text = translate_text(model, tokenizer, input_text, target_language)
print(f"Original: {input_text}")
print(f"Translated: {translated_text}")
Evaluation Results
After fine-tuning, the model was evaluated on a multilingual dataset, achieving the following performance:
| Metric          | Score | Meaning                             |
|-----------------|-------|-------------------------------------|
| BLEU Score      | 38.5  | Measures translation accuracy       |
| Inference Speed | Fast  | Optimized for real-time translation |
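For reference, translation quality on a held-out set can be scored with the sacreBLEU metric from the Hugging Face evaluate library. The snippet below is a minimal, illustrative sketch; the exact evaluation script used for this card is not included, and the example sentences are placeholders.

import evaluate

# Hypothetical example: score model outputs against reference translations
predictions = ["Bonjour, comment allez-vous ?"]   # model outputs
references = [["Bonjour, comment allez-vous ?"]]  # one list of references per prediction

sacrebleu = evaluate.load("sacrebleu")
result = sacrebleu.compute(predictions=predictions, references=references)
print(f"BLEU: {result['score']:.1f}")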
Fine-Tuning Details
Dataset
The model was trained using a multilingual dataset containing sentence pairs from multiple language sources.
Training Configuration
- Number of epochs: 3
- Batch size: 8
- Optimizer: AdamW
- Learning rate: 2e-5
- Evaluation strategy: evaluated at the end of each epoch (see the sketch below)
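A minimal fine-tuning sketch using these settings with the Hugging Face Seq2SeqTrainer is shown below. The dataset files, column names, and language direction are illustrative assumptions, not part of this card.

from transformers import (T5ForConditionalGeneration, T5Tokenizer,
                          Seq2SeqTrainingArguments, Seq2SeqTrainer,
                          DataCollatorForSeq2Seq)
from datasets import load_dataset

model_name = "t5-small"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Hypothetical dataset with "source" and "target" text columns
dataset = load_dataset("json", data_files={"train": "train.json", "validation": "val.json"})

def preprocess(batch):
    inputs = [f"translate English to French: {s}" for s in batch["source"]]
    model_inputs = tokenizer(inputs, max_length=128, truncation=True)
    labels = tokenizer(text_target=batch["target"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True,
                        remove_columns=dataset["train"].column_names)

args = Seq2SeqTrainingArguments(
    output_dir="t5-language-translation",
    num_train_epochs=3,              # matches the card
    per_device_train_batch_size=8,   # matches the card
    learning_rate=2e-5,              # matches the card
    optim="adamw_torch",             # AdamW optimizer
    eval_strategy="epoch",           # older transformers versions use evaluation_strategy
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()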
Quantization
The model was quantized to fp16, reducing latency and memory usage while maintaining accuracy.
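As an illustration, dynamic int8 quantization of the linear layers (mentioned in the model details above) can be applied with PyTorch as sketched below, and fp16 is available via half precision on GPU. The exact quantization script used for this repository is not included in the card.

import torch
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("AventIQ-AI/t5-language-translation")

# Dynamic int8 quantization of the Linear layers (CPU inference)
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# Alternatively, half-precision (fp16) weights can be used on GPU
fp16_model = model.half().to("cuda") if torch.cuda.is_available() else model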
Repository Structure
.
├── model/
├── tokenizer_config/
├── quantized_model/
└── README.md
Limitations
- The model may struggle with very complex sentences.
- Low-resource languages may have slightly lower accuracy.
- Contextual understanding is limited to sentence-level translation.