Access aimped/nlp-health-translation-base-pt-en on Hugging Face
This is a form to enable access to this model on Hugging Face after you have been granted access by Aimped. Please visit the Aimped website to Sign Up and accept our Terms of Use and Privacy Policy before submitting this form. Requests will be processed within 1-2 days.
Your Hugging Face account email address MUST match the email you provide on the Aimped website or your request will not be approved.
Description of the Model
Paper: LLMs-in-the-loop Part-1: Expert Small AI Models for Bio-Medical Text Translation
The Medical Translation AI model is a specialized language model trained for the accurate translation of medical documents from Portuguese to English. Its primary objective is to provide healthcare professionals, researchers, and individuals within the medical field with a reliable tool for the precise translation of a wide spectrum of medical documents.
This model was built on the Helsinki-NLP/MarianMT neural translation architecture and required more than two days of intensive training on an A100 (24 GB) GPU. To create an exceptionally high-quality training corpus, we combined publicly available and proprietary datasets, further enriched with meticulously curated text collected from online sources. In addition, the inclusion of clinical and discharge reports from diverse healthcare institutions added depth and diversity to the dataset. This careful curation is pivotal to the model's ability to generate accurate translations tailored specifically to the medical domain, meeting the stringent standards expected by our users.
The versatility of the Medical Translation AI model extends to the translation of a wide array of healthcare-related documents, encompassing medical reports, patient records, medication instructions, research manuscripts, clinical trial documents, and more. By harnessing the capabilities of this model, users can efficiently and dependably obtain translations, thereby streamlining and expediting the often complex task of language translation within the medical field.
The model we have developed outperforms leading translation systems such as Google Translate, Helsinki-NLP Opus/MarianMT, and DeepL on our meticulously curated proprietary test set.
| Model         | ROUGE | BLEU | METEOR | BERT |
|---------------|-------|------|--------|------|
| Aimped        | 0.88  | 0.67 | 0.87   | 0.98 |
| Google        | 0.85  | 0.61 | 0.85   | 0.95 |
| DeepL         | 0.83  | 0.58 | 0.83   | 0.95 |
| Opus/MarianMT | -     | -    | -      | -    |
Why should you use the Aimped API?
To get started, you can easily use our open-source version of the models for research purposes. However, the models provided through the Aimped API are trained on new data every three months. This ensures that the models understand ongoing healthcare developments in the world and can identify the most relevant medical terminology without a knowledge cutoff. In addition, we implement pre- and post-processing steps to improve translation quality. Naturally, our quality control ensures that each model's performance always remains at least on par with previous versions.
Text Format Requirements: The text to be translated must adhere to a structured and grammatically correct format, including proper paragraph and sentence structures. Spelling errors or formatting issues, such as line breaks occurring before the completion of a sentence, will not be automatically corrected.
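If your source text contains hard line breaks in the middle of sentences (as is common in text extracted from PDFs), it is worth normalizing them before translation. The helper below is only a minimal sketch under that assumption; `unwrap_lines` is a hypothetical name and not part of the aimped package:

```python
import re

# Minimal sketch (an assumption, not an official aimped preprocessing step):
# join hard-wrapped lines inside each paragraph so no sentence is split
# across line breaks, while keeping blank-line paragraph boundaries intact.
def unwrap_lines(text: str) -> str:
    paragraphs = re.split(r"\n\s*\n", text)
    return "\n\n".join(" ".join(p.split()) for p in paragraphs)
```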
Character and Word Limits: Each translation request is limited to a maximum of 5K characters, for both the user interface (UI) and API requests. Requests exceeding this limit are not supported.
Segmentation of Translation Text: In cases where the text to be translated exceeds the specified character limits, it is advisable to divide the text into appropriate segments for translation. This approach allows for the translation of larger texts without exceeding the defined limits. When segmenting the text, it is preferable to divide it into paragraphs or topic headings.
API Requests: When utilizing the API, exercise caution to ensure that your translation requests conform to the data size limitations. Large data sets should be divided or processed sequentially to effectively complete translation tasks within these constraints.
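As a rough illustration of this segmentation advice, the hypothetical helper below groups paragraphs into segments that stay under the 5K-character limit. It is a sketch, not part of the aimped package, and a production pipeline would also need to handle single paragraphs longer than the limit:

```python
MAX_CHARS = 5000  # per-request character limit mentioned above

def split_into_segments(text: str, max_chars: int = MAX_CHARS) -> list[str]:
    """Group paragraphs into segments no longer than max_chars each."""
    segments, current = [], ""
    for paragraph in text.split("\n\n"):  # prefer paragraph boundaries
        candidate = f"{current}\n\n{paragraph}" if current else paragraph
        if len(candidate) <= max_chars:
            current = candidate
        else:
            if current:
                segments.append(current)
            current = paragraph  # assumes a single paragraph fits the limit
    if current:
        segments.append(current)
    return segments

# Each segment can then be sent as its own translation request.
```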
Limitations: Our translation model has been meticulously designed and extensively trained to cater specifically to the demanding needs of the Healthcare and Biomedical domain. While it excels within this highly specialized realm, it's important to note that if you opt to employ the model in domains outside of healthcare, its performance may not meet the exceptional standards characteristic of the medical field. We advise a thoughtful consideration of this limitation when contemplating the model's application.
How to Use:
To get the right results, use the `text_translate` function from the `aimped` package as shown in the steps below.
- Install requirements
!pip install transformers
!pip install sentencepiece
!pip install aimped
import nltk
nltk.download('punkt')
- Import libraries
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
from aimped.nlp.translation import text_translate
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
- Load model
model_path = "aimped/nlp-health-translation-base-pt-en"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSeq2SeqLM.from_pretrained(model_path)
translater = pipeline(
task="translation_pt_to_en",
model=model,
tokenizer=tokenizer,
device=device,
max_length=512,
num_beams=7,
early_stopping=False,
num_return_sequences=1,
do_sample=False,
)
- Use Model:
sentence = "As doenças pulmonares imunomediadas são um grupo complexo de doenças caracterizadas por infiltração celular inflamatória dos pulmões que pode resultar em progressiva remodelação das vias aéreas e lesão do parênquima." translated_text = text_translate([sentence],source_lang="pt", pipeline=translater)
Test Set
Training data: Public and in-house datasets.
Test data: Public and in-house datasets, which are available here.