---
library_name: transformers
tags: [transformers, T5, question-answering]
---

# Model Card for starman76/t5_500

## Model Details

This model is a fine-tuned version of T5-small tailored for question answering in the biomedical domain. It was trained to generate answers grounded in biomedical literature, making it useful for researchers and practitioners in the field.

## Getting started with the model


```python
# Install dependencies first (from the shell): pip install transformers sentencepiece
from transformers import T5ForConditionalGeneration, T5Tokenizer
import torch

# Load the fine-tuned tokenizer and model from the Hub
tokenizer = T5Tokenizer.from_pretrained("starman76/t5_500")
model = T5ForConditionalGeneration.from_pretrained("starman76/t5_500")

# Example biomedical context and question
context = "Aspirin is a medication used to reduce pain, fever, or inflammation."
question = "What is Aspirin used for?"

# Tokenize the question/context pair, truncating to the model's maximum input length
inputs = tokenizer(question, context, add_special_tokens=True, return_tensors="pt", max_length=512, truncation=True)

# Generate the answer without tracking gradients
with torch.no_grad():
    outputs = model.generate(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], max_length=50)
answer = tokenizer.decode(outputs[0], skip_special_tokens=True)

print("Answer:", answer)
```