---
language: es
tags:
- sagemaker
- roberta
- ruperta
- TextClassification
license: apache-2.0
datasets:
- IMDbreviews_es
model-index:
- name: RuPERTa_base_sentiment_analysis_es
  results:
  - task:
      name: Sentiment Analysis
      type: sentiment-analysis
    dataset:
      name: "IMDb Reviews in Spanish"
      type: IMDbreviews_es
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.881866
    - name: F1 Score
      type: f1
      value: 0.008272
    - name: Precision
      type: precision
      value: 0.858605
    - name: Recall
      type: recall
      value: 0.920062
---

## `RuPERTa_base_sentiment_analysis_es`

This model was trained using Amazon SageMaker and the Hugging Face Deep Learning Container. The base model is RuPERTa-base (uncased), a RoBERTa model trained on an uncased version of a big Spanish corpus by Manuel Romero (mrm8488).

## Hyperparameters

```json
{
  "epochs": "4",
  "eval_batch_size": "8",
  "fp16": "true",
  "learning_rate": "3e-05",
  "model_name": "\"mrm8488/RuPERTa-base\"",
  "sagemaker_container_log_level": "20",
  "sagemaker_job_name": "\"ruperta-sentiment-analysis-full-p2-2021-12-06-20-32-27\"",
  "sagemaker_program": "\"train.py\"",
  "sagemaker_region": "\"us-east-1\"",
  "sagemaker_submit_directory": "\"s3://edumunozsala-ml-sagemaker/ruperta-sentiment/ruperta-sentiment-analysis-full-p2-2021-12-06-20-32-27/source/sourcedir.tar.gz\"",
  "train_batch_size": "32",
  "train_filename": "\"train_data.pt\"",
  "val_filename": "\"val_data.pt\""
}
```

## Usage

## Results

```
epoch                       = 1.0
eval_accuracy               = 0.8629333333333333
eval_f1                     = 0.8648790746582545
eval_loss                   = 0.3160930573940277
eval_mem_cpu_alloc_delta    = 0
eval_mem_cpu_peaked_delta   = 0
eval_mem_gpu_alloc_delta    = 0
eval_mem_gpu_peaked_delta   = 94507520
eval_precision              = 0.8479381443298969
eval_recall                 = 0.8825107296137339
eval_runtime                = 114.4994
eval_samples_per_second     = 32.751
```
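The hyperparameters above can be passed to a SageMaker Hugging Face estimator. Below is a minimal sketch: the dict mirrors the values reported in this card (converted from strings to native Python types), while the estimator arguments such as the instance type, role, and framework versions are assumptions and are left commented out.

```python
# Hyperparameters reported in this card, as native Python types.
hyperparameters = {
    "epochs": 4,
    "train_batch_size": 32,
    "eval_batch_size": 8,
    "learning_rate": 3e-05,
    "fp16": True,
    "model_name": "mrm8488/RuPERTa-base",
    "train_filename": "train_data.pt",
    "val_filename": "val_data.pt",
}

# Launching the job would look roughly like this (values below are
# placeholders/assumptions, not taken from the card):
# from sagemaker.huggingface import HuggingFace
# estimator = HuggingFace(
#     entry_point="train.py",
#     instance_type="ml.p2.xlarge",   # assumed from the "p2" in the job name
#     instance_count=1,
#     role="<your-sagemaker-role>",   # placeholder
#     transformers_version="4.12",    # assumption
#     pytorch_version="1.9",          # assumption
#     py_version="py38",              # assumption
#     hyperparameters=hyperparameters,
# )
# estimator.fit()

print(hyperparameters["model_name"])
```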
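For inference, the model can be loaded with the `transformers` pipeline API. This is a minimal sketch: the hub id `edumunozsala/RuPERTa_base_sentiment_analysis_es` is an assumption based on this card's name and the S3 bucket owner, so adjust it if the model is published under a different id.

```python
from transformers import pipeline

# Hub id assumed from the model name in this card.
classifier = pipeline(
    "sentiment-analysis",
    model="edumunozsala/RuPERTa_base_sentiment_analysis_es",
)

result = classifier("Una película estupenda, con un gran reparto.")
print(result)  # a list with one dict containing "label" and "score"
```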