
🌍 Language Translation Model

This repository hosts a fine-tuned T5-small model optimized for language translation. The model translates text between multiple languages, including English, Spanish, German, French, and Hindi.

📌 Model Details

  • Model Architecture: T5-small
  • Task: Language Translation
  • Dataset: Custom multilingual dataset
  • Fine-tuning Framework: Hugging Face Transformers
  • Quantization: FP16 for efficiency (see the Quantization section below)

🚀 Usage

Installation

pip install transformers torch datasets

Loading the Model

from transformers import T5ForConditionalGeneration, T5Tokenizer
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

model_name = "AventIQ-AI/t5-language-translation"
model = T5ForConditionalGeneration.from_pretrained(model_name).to(device)
tokenizer = T5Tokenizer.from_pretrained(model_name)

Perform Translation


def translate_text(model, tokenizer, input_text, target_language):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    # T5 expects a task prefix that states the translation direction
    formatted_text = f"translate English to {target_language}: {input_text}"
    input_ids = tokenizer(formatted_text, return_tensors="pt").input_ids.to(device)

    # Generate the translation without tracking gradients
    with torch.no_grad():
        output_ids = model.generate(input_ids, max_length=50)

    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# 🔹 Test Translation
input_text = "Hello, how are you?"
target_language = "French"  # Other options: "Spanish", "German", "Hindi"
translated_text = translate_text(model, tokenizer, input_text, target_language)

print(f"Original: {input_text}")
print(f"Translated: {translated_text}")

📊 Evaluation Results

After fine-tuning, the model was evaluated on a multilingual dataset, achieving the following performance:

Metric            Score   Meaning
BLEU Score        38.5    Measures translation accuracy
Inference Speed   Fast    Optimized for real-time translation
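
As a rough guide, a BLEU score like the one above can be reproduced with the Hugging Face evaluate library and a held-out set of reference translations. This is a minimal sketch; evaluate/sacrebleu are extra dependencies and the reference sentence shown is a hypothetical example, not part of this repository:

import evaluate

bleu = evaluate.load("sacrebleu")
predictions = [translate_text(model, tokenizer, "Hello, how are you?", "French")]
references = [["Bonjour, comment allez-vous ?"]]  # hypothetical reference translation
print(bleu.compute(predictions=predictions, references=references)["score"])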

🔧 Fine-Tuning Details

Dataset

The model was trained using a multilingual dataset containing sentence pairs from multiple language sources.
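
The dataset itself is not published here, but a preprocessing sketch for such sentence pairs might look like the following. The column names source_text and target_text are illustrative assumptions, not the actual schema:

def preprocess(example, tokenizer, target_language="French"):
    # Prepend the T5 task prefix to the source sentence and tokenize both sides
    model_input = tokenizer(
        f"translate English to {target_language}: {example['source_text']}",
        max_length=128, truncation=True,
    )
    labels = tokenizer(example["target_text"], max_length=128, truncation=True)
    model_input["labels"] = labels["input_ids"]
    return model_input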

Training Configuration

  • Number of epochs: 3
  • Batch size: 8
  • Optimizer: AdamW
  • Learning rate: 2e-5
  • Evaluation strategy: Epoch-based
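
A hedged sketch of how this configuration might map onto Hugging Face Seq2SeqTrainingArguments; the output directory and the train_dataset/eval_dataset objects are placeholders, not artifacts of this repository:

from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./t5-translation",       # placeholder output path
    num_train_epochs=3,
    per_device_train_batch_size=8,
    learning_rate=2e-5,
    evaluation_strategy="epoch",         # named eval_strategy in newer transformers versions
)

# AdamW is the Trainer's default optimizer, matching the configuration above.
# train_dataset / eval_dataset stand in for the tokenized sentence-pair splits.
trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
)
trainer.train()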

Quantization

The released weights are stored in FP16 (half precision), reducing latency and memory usage while maintaining accuracy.
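
A minimal sketch of how such an FP16 checkpoint could be produced from the fine-tuned full-precision model (the save path is a placeholder chosen to match the repository layout):

# Convert weights to half precision and save the reduced-size checkpoint
model = model.half()
model.save_pretrained("./quantized_model")
tokenizer.save_pretrained("./quantized_model")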

📂 Repository Structure

.
├── model/
├── tokenizer_config/
├── quantized_model/
└── README.md

⚠️ Limitations

  • The model may struggle with very complex sentences.
  • Low-resource languages may have slightly lower accuracy.
  • Contextual understanding is limited to sentence-level translation.