
How to use this model directly from the 🤗/transformers library:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("mrm8488/spanbert-base-finetuned-tacred")
model = AutoModelWithLMHead.from_pretrained("mrm8488/spanbert-base-finetuned-tacred")
```

SpanBERT base fine-tuned on TACRED

SpanBERT, created by Facebook Research and fine-tuned on the TACRED dataset by the same team.

Details of SpanBERT

SpanBERT: Improving Pre-training by Representing and Predicting Spans
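The paper's core idea is to mask contiguous spans of tokens, rather than individual tokens, and train the model to predict them. A simplified illustration of the span-masking step (my own sketch, not the authors' code; span lengths are geometric with p = 0.2, clipped at 10, with a ~15% masking budget, as described in the paper):

```python
import random

def mask_contiguous_spans(tokens, budget=0.15, p=0.2, max_span=10, seed=0):
    """SpanBERT-style span masking sketch: sample span lengths from a
    geometric distribution (clipped at max_span) and mask whole
    contiguous spans until roughly `budget` of the tokens are masked."""
    rng = random.Random(seed)
    masked = list(tokens)
    target = max(1, int(len(tokens) * budget))
    n_masked = 0
    while n_masked < target:
        # Geometric(p): length 1 with prob p, 2 with prob p*(1-p), ...
        length = 1
        while rng.random() > p and length < max_span:
            length += 1
        start = rng.randrange(0, len(tokens) - length + 1)
        for i in range(start, start + length):
            if masked[i] != "[MASK]":
                masked[i] = "[MASK]"
                n_masked += 1
    return masked

tokens = [f"tok{i}" for i in range(100)]
out = mask_contiguous_spans(tokens)
print(out.count("[MASK]"))  # roughly 15 of 100 tokens masked
```

Masking whole spans forces the model to predict multi-token entities and phrases from span boundaries, which is what makes the pre-training a good fit for span-centric tasks such as TACRED relation extraction.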

Dataset 📚

TACRED: a large-scale relation extraction dataset with 106k+ examples over 42 TAC KBP relation types.
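For illustration, a minimal sketch of what a TACRED-style example looks like and how entity spans are typically marked before being fed to the model. The field names follow the public TACRED JSON release; the SUBJ-/OBJ- substitution mirrors the preprocessing used for TACRED fine-tuning, but the helper below is a simplified assumption, not the exact fine-tuning script:

```python
# A single TACRED-style example: a token list plus subject/object spans
# (field names follow the public TACRED JSON release).
example = {
    "tokens": ["Bill", "Gates", "founded", "Microsoft", "in", "1975", "."],
    "subj_start": 3, "subj_end": 3, "subj_type": "ORGANIZATION",
    "obj_start": 0, "obj_end": 1, "obj_type": "PERSON",
    "relation": "org:founded_by",
}

def mark_entities(ex):
    """Replace the subject/object spans with SUBJ-<TYPE> / OBJ-<TYPE>
    placeholder tokens, a common input format for TACRED fine-tuning."""
    tokens = []
    i = 0
    while i < len(ex["tokens"]):
        if i == ex["subj_start"]:
            tokens.append("SUBJ-" + ex["subj_type"])
            i = ex["subj_end"] + 1
        elif i == ex["obj_start"]:
            tokens.append("OBJ-" + ex["obj_type"])
            i = ex["obj_end"] + 1
        else:
            tokens.append(ex["tokens"][i])
            i += 1
    return tokens

print(" ".join(mark_entities(example)))
# OBJ-PERSON founded SUBJ-ORGANIZATION in 1975 .
```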

Model fine-tuning ๐Ÿ‹๏ธโ€

You can get the fine-tuning script here

```bash
python code/run_tacred.py \
  --do_train \
  --do_eval \
  --data_dir <TACRED_DATA_DIR> \
  --model spanbert-base-cased \
  --train_batch_size 32 \
  --eval_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 10 \
  --max_seq_length 128 \
  --output_dir tacred_dir \
  --fp16
```

Results Comparison 📝

| Model | SQuAD 1.1 (F1) | SQuAD 2.0 (F1) | Coref (avg. F1) | TACRED (F1) |
|---|---|---|---|---|
| BERT (base) | 88.5* | 76.5* | 73.1 | 67.7 |
| SpanBERT (base) | 92.4* | 83.6* | 77.4 | 68.2 (this one) |
| BERT (large) | 91.3 | 83.3 | 77.1 | 66.4 |
| SpanBERT (large) | 94.6 | 88.7 | 79.6 | 70.8 |

Note: The numbers marked with * are evaluated on the development sets because those models were not submitted to the official SQuAD leaderboard. All other numbers are test-set numbers.
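The TACRED column reports the standard micro-averaged F1 over the positive relation types, with the no_relation class excluded from precision and recall. A minimal sketch of that scoring scheme (my own illustration, not the official TACRED scorer):

```python
def tacred_micro_f1(gold, pred, negative="no_relation"):
    """Micro-averaged F1: only predictions and gold labels that are not
    the negative `no_relation` class count toward precision/recall."""
    correct = sum(1 for g, p in zip(gold, pred) if g == p and g != negative)
    pred_pos = sum(1 for p in pred if p != negative)
    gold_pos = sum(1 for g in gold if g != negative)
    precision = correct / pred_pos if pred_pos else 0.0
    recall = correct / gold_pos if gold_pos else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = ["org:founded_by", "no_relation", "per:title", "no_relation"]
pred = ["org:founded_by", "per:title", "per:title", "no_relation"]
print(round(tacred_micro_f1(gold, pred), 3))  # 0.8
```

Because no_relation dominates the dataset, excluding it keeps the metric from being inflated by trivially predicting the negative class.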

Created by Manuel Romero/@mrm8488

Made with ❤️ in Spain