
GEM_PubMedQA Model Card

This model card provides an overview of the GEM_PubMedQA model, an implementation of the GEM architecture fine-tuned on the PubMedQA dataset.

Purpose

The GEM_PubMedQA model was developed to assess the performance of the GEM architecture on domain-specific datasets, with a focus on healthcare. The PubMedQA dataset, a key benchmark in this field, was selected to evaluate the architecture's effectiveness.

Key Details

  • License: Apache-2.0
  • Dataset: qiaojin/PubMedQA
  • Language: English
  • Metric: Accuracy (92.5%)
  • Base Model: google-bert/bert-base-uncased
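
The card does not include a usage snippet, so here is a minimal sketch of how a PubMedQA example might be prepared for a BERT-style classifier like this one. The `[SEP]`-joined question/context format and the yes/no/maybe label list are assumptions based on the PubMedQA task, not details taken from this model's training code.

```python
# Hypothetical input preparation for a BERT-style PubMedQA classifier.
# The label set and input format below are assumptions, not documented
# details of GEM_PubMedQA.

LABELS = ["yes", "no", "maybe"]  # assumed PubMedQA decision labels


def build_input(question: str, contexts: list[str], max_chars: int = 2000) -> str:
    """Join a question with its abstract contexts into one string,
    mirroring a common sentence-pair setup for BERT classifiers."""
    context = " ".join(contexts)[:max_chars]
    return f"{question} [SEP] {context}"


example = build_input(
    "Do mitochondria play a role in apoptosis?",
    ["Mitochondria release cytochrome c during programmed cell death."],
)
print(example)
```

The resulting string would then be tokenized (e.g. with the base model's uncased BERT tokenizer) and truncated to the 128-token maximum sequence length listed below.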

Model Details

The GEM_PubMedQA model is built on the GEM architecture and finetuned from the google-bert/bert-base-uncased model using the PubMedQA dataset. The training was performed with the following parameters:

  • Number of epochs: 5
  • Batch size: 128
  • Learning rate: 2e-5
  • Maximum sequence length: 128
  • Gradient accumulation steps: 2
  • Cluster size: 256
  • Threshold: 0.65
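
The hyperparameters above can be collected into a small config sketch. The key names are illustrative (they are not taken from the actual training script), and note that with gradient accumulation the effective batch size per optimizer step is the per-device batch size times the accumulation steps.

```python
# Training hyperparameters from the card, gathered into a plain dict.
# Key names are illustrative; "cluster_size" and "threshold" are
# GEM-specific parameters whose exact roles the card does not document.
config = {
    "num_epochs": 5,
    "batch_size": 128,
    "learning_rate": 2e-5,
    "max_seq_length": 128,
    "grad_accum_steps": 2,
    "cluster_size": 256,
    "threshold": 0.65,
}

# Effective batch size per optimizer step under gradient accumulation.
effective_batch = config["batch_size"] * config["grad_accum_steps"]
print(effective_batch)  # → 256
```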
Model tree for GEM025/GEM_PubMedQA

Finetuned from google-bert/bert-base-uncased.

Dataset used to train GEM025/GEM_PubMedQA: qiaojin/PubMedQA