Italian Bert Base Uncased on Squad-it

Model description

This model is the uncased base version of Italian BERT (available as dbmdz/bert-base-italian-uncased), fine-tuned on the question-answering task.

How to use

from transformers import pipeline

nlp = pipeline('question-answering', model='antoniocappiello/bert-base-italian-uncased-squad-it')

# nlp(context="D'Annunzio nacque nel 1863", question="Quando nacque D'Annunzio?")
# {'score': 0.9990354180335999, 'start': 22, 'end': 25, 'answer': '1863'}
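Instead of the pipeline wrapper, the tokenizer and model can also be loaded directly with the Auto classes (this downloads the weights from the Hub on first use):

```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained(
    "antoniocappiello/bert-base-italian-uncased-squad-it")
model = AutoModelForQuestionAnswering.from_pretrained(
    "antoniocappiello/bert-base-italian-uncased-squad-it")
```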

Training data

It has been trained on SQuAD-it, a question-answering dataset obtained through semi-automatic translation of the original SQuAD dataset into Italian.
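For reference, SQuAD-it keeps the SQuAD v1.1 JSON layout, where answers are stored as character offsets into the paragraph context. A minimal illustrative record (the title and id below are made up for the example):

```python
# Minimal record in the SQuAD v1.1 layout that SQuAD-it follows
# (the real files are train-v1.1.json / dev-v1.1.json with Italian text).
record = {
    "data": [{
        "title": "D'Annunzio",
        "paragraphs": [{
            "context": "D'Annunzio nacque nel 1863",
            "qas": [{
                "id": "q1",
                "question": "Quando nacque D'Annunzio?",
                "answers": [{"text": "1863", "answer_start": 22}],
            }],
        }],
    }]
}

# Answers are recovered by slicing the context at the stored offset.
para = record["data"][0]["paragraphs"][0]
ans = para["qas"][0]["answers"][0]
span = para["context"][ans["answer_start"]:ans["answer_start"] + len(ans["text"])]
print(span)  # → 1863
```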

Training procedure

python ./examples/run_squad.py \
    --model_type bert \
    --model_name_or_path dbmdz/bert-base-italian-uncased \
    --do_train \
    --do_eval \
    --train_file ./squad_it_uncased/train-v1.1.json \
    --predict_file ./squad_it_uncased/dev-v1.1.json \
    --learning_rate 3e-5 \
    --num_train_epochs 2 \
    --max_seq_length 384 \
    --doc_stride 128 \
    --output_dir ./models/bert-base-italian-uncased-squad-it/ \
    --per_gpu_eval_batch_size=3 \
    --per_gpu_train_batch_size=3 \
    --do_lower_case
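The --max_seq_length / --doc_stride pair controls how contexts longer than one window are split into overlapping chunks. A schematic illustration with plain token lists (the `windows` helper is made up for this sketch and ignores the question and special tokens that share each window in practice):

```python
def windows(tokens, max_len, stride):
    # Split a token sequence into overlapping chunks, advancing by
    # `stride` tokens each time, the way run_squad.py windows long
    # contexts with --max_seq_length and --doc_stride.
    out = []
    start = 0
    while True:
        out.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break
        start += stride
    return out

# A 1000-token context with the card's settings (384 / 128):
chunks = windows(list(range(1000)), 384, 128)
print(len(chunks))  # → 6
```

Consecutive chunks overlap by 384 - 128 = 256 tokens, so an answer cut off at one window boundary still appears whole in a neighbouring window.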

Eval Results

| Metric | Value |
|--------|-------|
| EM     | 63.8  |
| F1     | 75.30 |
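EM and F1 here are the standard SQuAD v1.1 metrics. A self-contained sketch of how they are computed, following the normalisation in the official SQuAD evaluation script (which strips English articles even when, as here, it is run on Italian text):

```python
import re
import string
from collections import Counter

def normalize(text):
    # SQuAD normalisation: lowercase, drop punctuation and English
    # articles, collapse whitespace.
    text = "".join(ch for ch in text.lower() if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    # 1.0 if the normalised strings are identical, else 0.0.
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction, gold):
    # Token-level F1 between prediction and gold answer.
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("1863", "1863"))   # → 1.0
print(f1_score("nel 1863", "1863"))  # ≈ 0.667
```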

Comparison

| Model                       | EM   | F1    |
|-----------------------------|------|-------|
| DrQA-it trained on SQuAD-it | 56.1 | 65.9  |
| This model                  | 63.8 | 75.30 |