---
library_name: transformers
language:
  - ko
---

# Model Card for 23MATHQ_TrOCR-large

## Model Details

This model recognizes line-level text that contains mathematical expressions.
It was fine-tuned on a Korean + LaTeX dataset.

## Uses

### Direct Use

```python
from PIL import Image
import glob
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
import torch
import IPython.display as ipd

## Prepare the images
img_path_list = sorted(glob.glob('images/mathematical_expression_2-*.png'))
img_list = [Image.open(img_path).convert("RGB") for img_path in img_path_list]

## Prepare the model and processor
model_path = 'models/math_ocr'
processor = TrOCRProcessor.from_pretrained(model_path)
model = VisionEncoderDecoderModel.from_pretrained(model_path)
model.eval()

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)

# Match the processor's input size to the encoder's expected image size
processor.feature_extractor.size = model.config.encoder.image_size

# Beam-search generation settings
gc = model.generation_config
gc.max_length = 128
gc.early_stopping = True
gc.no_repeat_ngram_size = 3
gc.length_penalty = 2.0
gc.num_beams = 4
gc.eos_token_id = processor.tokenizer.sep_token_id

## Run TrOCR inference
pixel_values = processor(img_list, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values.to(model.device), pad_token_id=processor.tokenizer.eos_token_id)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)

for img, text in zip(img_list, generated_text):
    ipd.display(img)
    print(text)
```
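
The snippet above loads the checkpoint from a local directory (`models/math_ocr`). As a minimal sketch, assuming this repository's Hub id is `SGpirr/23MATHQ_TrOCR-large`, the processor and model can also be loaded directly from the Hugging Face Hub and reused with the same inference code:

```python
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

# Assumed repository id; replace it with the actual id if it differs.
repo_id = "SGpirr/23MATHQ_TrOCR-large"
processor = TrOCRProcessor.from_pretrained(repo_id)
model = VisionEncoderDecoderModel.from_pretrained(repo_id)
```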