{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "l5Ds1ZM41KC9" }, "source": [ "## Introduction: TAPAS\n", "\n", "* Original TAPAS paper (ACL 2020): https://www.aclweb.org/anthology/2020.acl-main.398/\n", "* Follow-up paper on intermediate pre-training (EMMNLP Findings 2020): https://www.aclweb.org/anthology/2020.findings-emnlp.27/\n", "* Original Github repository: https://github.com/google-research/tapas\n", "* Blog post: https://ai.googleblog.com/2020/04/using-neural-networks-to-find-answers.html\n", "\n", "TAPAS is an algorithm that (among other tasks) can answer questions about tabular data. It is essentially a BERT model with relative position embeddings and additional token type ids that encode tabular structure, and 2 classification heads on top: one for **cell selection** and one for (optionally) performing an **aggregation** among selected cells (such as summing or counting).\n", "\n", "Similar to BERT, the base `TapasModel` is pre-trained using the masked language modeling (MLM) objective on a large collection of tables from Wikipedia and associated texts. In addition, the authors further pre-trained the model on an second task (table entailment) to increase the numerical reasoning capabilities of TAPAS (as explained in the follow-up paper), which further improves performance on downstream tasks.\n", "\n", "In this notebook, we are going to fine-tune `TapasForQuestionAnswering` on [Sequential Question Answering (SQA)](https://www.microsoft.com/en-us/research/publication/search-based-neural-structured-learning-sequential-question-answering/), a dataset built by Microsoft Research which deals with asking questions related to a table in a **conversational set-up**. We are going to do so as in the original paper, by adding a randomly initialized cell selection head on top of the pre-trained base model (note that SQA does not have questions that involve aggregation and hence no aggregation head), and then fine-tuning them altogether.\n", "\n", "First, we install both the Transformers library as well as the dependency on [`torch-scatter`](https://github.com/rusty1s/pytorch_scatter), which the model requires." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "MUMrt5Ow_PEA", "outputId": "eda7d53e-9846-4941-ed72-ce84f495469f" }, "outputs": [], "source": [ "#! rm -r transformers\n", "#! git clone https://github.com/huggingface/transformers.git\n", "#! cd transformers\n", "#! pip install ./transformers" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "gx4u09iTyRjY", "outputId": "e4cd9f4b-7d8d-4b47-e8b2-304b921dba98" }, "outputs": [], "source": [ "#! pip install torch-scatter==latest+cu101 -f https://pytorch-geometric.com/whl/torch-1.7.0.html" ] }, { "cell_type": "markdown", "metadata": { "id": "BSZfmBt0meYm" }, "source": [ "We also install a small portion from the SQA training dataset, for demonstration purposes. This is a TSV file containing table-question pairs. Besides this, we also download the `table_csv` directory, which contains the actual tabular data.\n", "\n", "Note that you can download the entire SQA dataset on the [official website](https://www.microsoft.com/en-us/download/details.aspx?id=54253)." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "wsuwgDEU4J_f" }, "outputs": [], "source": [ "import requests, zipfile, io\n", "import os\n", "\n", "def download_files(dir_name):\n", " if not os.path.exists(dir_name):\n", " # 28 training examples from the SQA training set + table csv data\n", " urls = [\"https://www.dropbox.com/s/2p6ez9xro357i63/sqa_train_set_28_examples.zip?dl=1\",\n", " \"https://www.dropbox.com/s/abhum8ssuow87h6/table_csv.zip?dl=1\"\n", " ]\n", " for url in urls:\n", " r = requests.get(url)\n", " z = zipfile.ZipFile(io.BytesIO(r.content))\n", " z.extractall()\n", "\n", "dir_name = \"sqa_data\"\n", "download_files(dir_name)" ] }, { "cell_type": "markdown", "metadata": { "id": "EPrYJOn81f0D" }, "source": [ "## Prepare the data\n", "\n", "Let's look at the first few rows of the dataset:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 279 }, "id": "2X27wyd805D8", "outputId": "7ccfd32c-e8dd-4fec-c044-d7d8de8dd578" }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
idannotatorpositionquestiontable_fileanswer_coordinatesanswer_text
0nt-63900where are the players from?table_csv/203_149.csv['(0, 4)', '(1, 4)', '(2, 4)', '(3, 4)', '(4, ...['Louisiana State University', 'Valley HS (Las...
1nt-63901which player went to louisiana state university?table_csv/203_149.csv['(0, 1)']['Ben McDonald']
2nt-63910who are the players?table_csv/203_149.csv['(0, 1)', '(1, 1)', '(2, 1)', '(3, 1)', '(4, ...['Ben McDonald', 'Tyler Houston', 'Roger Salke...
3nt-63911which ones are in the top 26 picks?table_csv/203_149.csv['(0, 1)', '(1, 1)', '(2, 1)', '(3, 1)', '(4, ...['Ben McDonald', 'Tyler Houston', 'Roger Salke...
4nt-63912and of those, who is from louisiana state univ...table_csv/203_149.csv['(0, 1)']['Ben McDonald']
\n", "
" ], "text/plain": [ " id annotator position \\\n", "0 nt-639 0 0 \n", "1 nt-639 0 1 \n", "2 nt-639 1 0 \n", "3 nt-639 1 1 \n", "4 nt-639 1 2 \n", "\n", " question table_file \\\n", "0 where are the players from? table_csv/203_149.csv \n", "1 which player went to louisiana state university? table_csv/203_149.csv \n", "2 who are the players? table_csv/203_149.csv \n", "3 which ones are in the top 26 picks? table_csv/203_149.csv \n", "4 and of those, who is from louisiana state univ... table_csv/203_149.csv \n", "\n", " answer_coordinates \\\n", "0 ['(0, 4)', '(1, 4)', '(2, 4)', '(3, 4)', '(4, ... \n", "1 ['(0, 1)'] \n", "2 ['(0, 1)', '(1, 1)', '(2, 1)', '(3, 1)', '(4, ... \n", "3 ['(0, 1)', '(1, 1)', '(2, 1)', '(3, 1)', '(4, ... \n", "4 ['(0, 1)'] \n", "\n", " answer_text \n", "0 ['Louisiana State University', 'Valley HS (Las... \n", "1 ['Ben McDonald'] \n", "2 ['Ben McDonald', 'Tyler Houston', 'Roger Salke... \n", "3 ['Ben McDonald', 'Tyler Houston', 'Roger Salke... \n", "4 ['Ben McDonald'] " ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import pandas as pd\n", "\n", "data = pd.read_excel(\"sqa_train_set_28_examples.xlsx\")\n", "data.head()" ] }, { "cell_type": "markdown", "metadata": { "id": "OMJ4dNBV1oj6" }, "source": [ "As you can see, each row corresponds to a question related to a table.\n", "* The `position` column identifies whether the question is the first, second, ... in a sequence of questions related to a table.\n", "* The `table_file` column identifies the name of the table file, which refers to a CSV file in the `table_csv` directory.\n", "* The `answer_coordinates` and `answer_text` columns indicate the answer to the question. The `answer_coordinates` is a list of tuples, each tuple being a (row_index, column_index) pair. The `answer_text` column is a list of strings, indicating the cell values.\n", "\n", "However, the `answer_coordinates` and `answer_text` columns are currently not recognized as real Python lists of Python tuples and strings respectively. Let's do that first using the `.literal_eval()`function of the `ast` module:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 511 }, "id": "BAovAs5s1k10", "outputId": "0849a829-4ae8-43e9-e138-177fa14e3e36" }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
idannotatorpositionquestiontable_fileanswer_coordinatesanswer_text
0nt-63900where are the players from?table_csv/203_149.csv[(0, 4), (1, 4), (2, 4), (3, 4), (4, 4), (5, 4...[Louisiana State University, Valley HS (Las Ve...
1nt-63901which player went to louisiana state university?table_csv/203_149.csv[(0, 1)][Ben McDonald]
2nt-63910who are the players?table_csv/203_149.csv[(0, 1), (1, 1), (2, 1), (3, 1), (4, 1), (5, 1...[Ben McDonald, Tyler Houston, Roger Salkeld, J...
3nt-63911which ones are in the top 26 picks?table_csv/203_149.csv[(0, 1), (1, 1), (2, 1), (3, 1), (4, 1), (5, 1...[Ben McDonald, Tyler Houston, Roger Salkeld, J...
4nt-63912and of those, who is from louisiana state univ...table_csv/203_149.csv[(0, 1)][Ben McDonald]
5nt-63920who are the players in the top 26?table_csv/203_149.csv[(0, 1), (1, 1), (2, 1), (3, 1), (4, 1), (5, 1...[Ben McDonald, Tyler Houston, Roger Salkeld, J...
6nt-63921of those, which one was from louisiana state u...table_csv/203_149.csv[(0, 1)][Ben McDonald]
7nt-1164900what are all the names of the teams?table_csv/204_135.csv[(0, 1), (1, 1), (2, 1), (3, 1), (4, 1), (5, 1...[Cordoba CF, CD Malaga, Granada CF, UD Las Pal...
8nt-1164901of these, which teams had any losses?table_csv/204_135.csv[(0, 1), (1, 1), (2, 1), (3, 1), (4, 1), (5, 1...[Cordoba CF, CD Malaga, Granada CF, UD Las Pal...
9nt-1164902of these teams, which had more than 21 losses?table_csv/204_135.csv[(15, 1)][CD Villarrobledo]
\n", "
" ], "text/plain": [ " id annotator position \\\n", "0 nt-639 0 0 \n", "1 nt-639 0 1 \n", "2 nt-639 1 0 \n", "3 nt-639 1 1 \n", "4 nt-639 1 2 \n", "5 nt-639 2 0 \n", "6 nt-639 2 1 \n", "7 nt-11649 0 0 \n", "8 nt-11649 0 1 \n", "9 nt-11649 0 2 \n", "\n", " question table_file \\\n", "0 where are the players from? table_csv/203_149.csv \n", "1 which player went to louisiana state university? table_csv/203_149.csv \n", "2 who are the players? table_csv/203_149.csv \n", "3 which ones are in the top 26 picks? table_csv/203_149.csv \n", "4 and of those, who is from louisiana state univ... table_csv/203_149.csv \n", "5 who are the players in the top 26? table_csv/203_149.csv \n", "6 of those, which one was from louisiana state u... table_csv/203_149.csv \n", "7 what are all the names of the teams? table_csv/204_135.csv \n", "8 of these, which teams had any losses? table_csv/204_135.csv \n", "9 of these teams, which had more than 21 losses? table_csv/204_135.csv \n", "\n", " answer_coordinates \\\n", "0 [(0, 4), (1, 4), (2, 4), (3, 4), (4, 4), (5, 4... \n", "1 [(0, 1)] \n", "2 [(0, 1), (1, 1), (2, 1), (3, 1), (4, 1), (5, 1... \n", "3 [(0, 1), (1, 1), (2, 1), (3, 1), (4, 1), (5, 1... \n", "4 [(0, 1)] \n", "5 [(0, 1), (1, 1), (2, 1), (3, 1), (4, 1), (5, 1... \n", "6 [(0, 1)] \n", "7 [(0, 1), (1, 1), (2, 1), (3, 1), (4, 1), (5, 1... \n", "8 [(0, 1), (1, 1), (2, 1), (3, 1), (4, 1), (5, 1... \n", "9 [(15, 1)] \n", "\n", " answer_text \n", "0 [Louisiana State University, Valley HS (Las Ve... \n", "1 [Ben McDonald] \n", "2 [Ben McDonald, Tyler Houston, Roger Salkeld, J... \n", "3 [Ben McDonald, Tyler Houston, Roger Salkeld, J... \n", "4 [Ben McDonald] \n", "5 [Ben McDonald, Tyler Houston, Roger Salkeld, J... \n", "6 [Ben McDonald] \n", "7 [Cordoba CF, CD Malaga, Granada CF, UD Las Pal... \n", "8 [Cordoba CF, CD Malaga, Granada CF, UD Las Pal... 
\n", "9 [CD Villarrobledo] " ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import ast\n", "\n", "def _parse_answer_coordinates(answer_coordinate_str):\n", " \"\"\"Parses the answer_coordinates of a question.\n", " Args:\n", " answer_coordinate_str: A string representation of a Python list of tuple\n", " strings.\n", " For example: \"['(1, 4)','(1, 3)', ...]\"\n", " \"\"\"\n", "\n", " try:\n", " answer_coordinates = []\n", " # make a list of strings\n", " coords = ast.literal_eval(answer_coordinate_str)\n", " # parse each string as a tuple\n", " for row_index, column_index in sorted(\n", " ast.literal_eval(coord) for coord in coords):\n", " answer_coordinates.append((row_index, column_index))\n", " except SyntaxError:\n", " raise ValueError('Unable to evaluate %s' % answer_coordinate_str)\n", "\n", " return answer_coordinates\n", "\n", "\n", "def _parse_answer_text(answer_text):\n", " \"\"\"Populates the answer_texts field of `answer` by parsing `answer_text`.\n", " Args:\n", " answer_text: A string representation of a Python list of strings.\n", " For example: \"[u'test', u'hello', ...]\"\n", " answer: an Answer object.\n", " \"\"\"\n", " try:\n", " answer = []\n", " for value in ast.literal_eval(answer_text):\n", " answer.append(value)\n", " except SyntaxError:\n", " raise ValueError('Unable to evaluate %s' % answer_text)\n", "\n", " return answer\n", "\n", "data['answer_coordinates'] = data['answer_coordinates'].apply(lambda coords_str: _parse_answer_coordinates(coords_str))\n", "data['answer_text'] = data['answer_text'].apply(lambda txt: _parse_answer_text(txt))\n", "\n", "data.head(10)" ] }, { "cell_type": "markdown", "metadata": { "id": "X7FYPpdW5dY4" }, "source": [ "Let's create a new dataframe that groups questions which are asked in a sequence related to the table. We can do this by adding a `sequence_id` column, which is a combination of the `id` and `annotator` columns:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 279 }, "id": "O1Quo0FL7h9-", "outputId": "5223d575-b86d-41e6-b23a-6071b3048211" }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
idannotatorpositionquestiontable_fileanswer_coordinatesanswer_textsequence_id
0nt-63900where are the players from?table_csv/203_149.csv[(0, 4), (1, 4), (2, 4), (3, 4), (4, 4), (5, 4...[Louisiana State University, Valley HS (Las Ve...nt-639-0
1nt-63901which player went to louisiana state university?table_csv/203_149.csv[(0, 1)][Ben McDonald]nt-639-0
2nt-63910who are the players?table_csv/203_149.csv[(0, 1), (1, 1), (2, 1), (3, 1), (4, 1), (5, 1...[Ben McDonald, Tyler Houston, Roger Salkeld, J...nt-639-1
3nt-63911which ones are in the top 26 picks?table_csv/203_149.csv[(0, 1), (1, 1), (2, 1), (3, 1), (4, 1), (5, 1...[Ben McDonald, Tyler Houston, Roger Salkeld, J...nt-639-1
4nt-63912and of those, who is from louisiana state univ...table_csv/203_149.csv[(0, 1)][Ben McDonald]nt-639-1
\n", "
" ], "text/plain": [ " id annotator position \\\n", "0 nt-639 0 0 \n", "1 nt-639 0 1 \n", "2 nt-639 1 0 \n", "3 nt-639 1 1 \n", "4 nt-639 1 2 \n", "\n", " question table_file \\\n", "0 where are the players from? table_csv/203_149.csv \n", "1 which player went to louisiana state university? table_csv/203_149.csv \n", "2 who are the players? table_csv/203_149.csv \n", "3 which ones are in the top 26 picks? table_csv/203_149.csv \n", "4 and of those, who is from louisiana state univ... table_csv/203_149.csv \n", "\n", " answer_coordinates \\\n", "0 [(0, 4), (1, 4), (2, 4), (3, 4), (4, 4), (5, 4... \n", "1 [(0, 1)] \n", "2 [(0, 1), (1, 1), (2, 1), (3, 1), (4, 1), (5, 1... \n", "3 [(0, 1), (1, 1), (2, 1), (3, 1), (4, 1), (5, 1... \n", "4 [(0, 1)] \n", "\n", " answer_text sequence_id \n", "0 [Louisiana State University, Valley HS (Las Ve... nt-639-0 \n", "1 [Ben McDonald] nt-639-0 \n", "2 [Ben McDonald, Tyler Houston, Roger Salkeld, J... nt-639-1 \n", "3 [Ben McDonald, Tyler Houston, Roger Salkeld, J... nt-639-1 \n", "4 [Ben McDonald] nt-639-1 " ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "def get_sequence_id(example_id, annotator):\n", " if \"-\" in str(annotator):\n", " raise ValueError('\"-\" not allowed in annotator.')\n", " return f\"{example_id}-{annotator}\"\n", "\n", "data['sequence_id'] = data.apply(lambda x: get_sequence_id(x.id, x.annotator), axis=1)\n", "data.head()" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 541 }, "id": "-uPpds5D762B", "outputId": "38aa6f13-2cc7-4d96-b8b3-a510288bfca2" }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
questiontable_fileanswer_coordinatesanswer_text
sequence_id
ns-1292-0[who are all the athletes?, where are they fro...table_csv/204_521.csv[[(0, 1), (1, 1), (2, 1), (3, 1), (4, 1), (5, ...[[Tommy Green, Janis Dalins, Ugo Frigerio, Kar...
nt-10730-0[what was the production numbers of each revol...table_csv/203_253.csv[[(0, 4), (1, 4), (2, 4), (3, 4), (4, 4), (5, ...[[1,900 (estimated), 14,500 (estimated), 6,000...
nt-10730-1[what three revolver models had the least amou...table_csv/203_253.csv[[(0, 0), (6, 0), (7, 0)], [(0, 0)]][[Remington-Beals Army Model Revolver, New Mod...
nt-10730-2[what are all of the remington models?, how ma...table_csv/203_253.csv[[(0, 0), (1, 0), (2, 0), (3, 0), (4, 0), (5, ...[[Remington-Beals Army Model Revolver, Remingt...
nt-11649-0[what are all the names of the teams?, of thes...table_csv/204_135.csv[[(0, 1), (1, 1), (2, 1), (3, 1), (4, 1), (5, ...[[Cordoba CF, CD Malaga, Granada CF, UD Las Pa...
nt-11649-1[what are the losses?, what team had more than...table_csv/204_135.csv[[(0, 6), (1, 6), (2, 6), (3, 6), (4, 6), (5, ...[[6, 6, 9, 10, 10, 12, 12, 11, 13, 14, 15, 14,...
nt-11649-2[what were all the teams?, what were the loss ...table_csv/204_135.csv[[(0, 1), (1, 1), (2, 1), (3, 1), (4, 1), (5, ...[[Cordoba CF, CD Malaga, Granada CF, UD Las Pa...
nt-639-0[where are the players from?, which player wen...table_csv/203_149.csv[[(0, 4), (1, 4), (2, 4), (3, 4), (4, 4), (5, ...[[Louisiana State University, Valley HS (Las V...
nt-639-1[who are the players?, which ones are in the t...table_csv/203_149.csv[[(0, 1), (1, 1), (2, 1), (3, 1), (4, 1), (5, ...[[Ben McDonald, Tyler Houston, Roger Salkeld, ...
nt-639-2[who are the players in the top 26?, of those,...table_csv/203_149.csv[[(0, 1), (1, 1), (2, 1), (3, 1), (4, 1), (5, ...[[Ben McDonald, Tyler Houston, Roger Salkeld, ...
\n", "
" ], "text/plain": [ " question \\\n", "sequence_id \n", "ns-1292-0 [who are all the athletes?, where are they fro... \n", "nt-10730-0 [what was the production numbers of each revol... \n", "nt-10730-1 [what three revolver models had the least amou... \n", "nt-10730-2 [what are all of the remington models?, how ma... \n", "nt-11649-0 [what are all the names of the teams?, of thes... \n", "nt-11649-1 [what are the losses?, what team had more than... \n", "nt-11649-2 [what were all the teams?, what were the loss ... \n", "nt-639-0 [where are the players from?, which player wen... \n", "nt-639-1 [who are the players?, which ones are in the t... \n", "nt-639-2 [who are the players in the top 26?, of those,... \n", "\n", " table_file \\\n", "sequence_id \n", "ns-1292-0 table_csv/204_521.csv \n", "nt-10730-0 table_csv/203_253.csv \n", "nt-10730-1 table_csv/203_253.csv \n", "nt-10730-2 table_csv/203_253.csv \n", "nt-11649-0 table_csv/204_135.csv \n", "nt-11649-1 table_csv/204_135.csv \n", "nt-11649-2 table_csv/204_135.csv \n", "nt-639-0 table_csv/203_149.csv \n", "nt-639-1 table_csv/203_149.csv \n", "nt-639-2 table_csv/203_149.csv \n", "\n", " answer_coordinates \\\n", "sequence_id \n", "ns-1292-0 [[(0, 1), (1, 1), (2, 1), (3, 1), (4, 1), (5, ... \n", "nt-10730-0 [[(0, 4), (1, 4), (2, 4), (3, 4), (4, 4), (5, ... \n", "nt-10730-1 [[(0, 0), (6, 0), (7, 0)], [(0, 0)]] \n", "nt-10730-2 [[(0, 0), (1, 0), (2, 0), (3, 0), (4, 0), (5, ... \n", "nt-11649-0 [[(0, 1), (1, 1), (2, 1), (3, 1), (4, 1), (5, ... \n", "nt-11649-1 [[(0, 6), (1, 6), (2, 6), (3, 6), (4, 6), (5, ... \n", "nt-11649-2 [[(0, 1), (1, 1), (2, 1), (3, 1), (4, 1), (5, ... \n", "nt-639-0 [[(0, 4), (1, 4), (2, 4), (3, 4), (4, 4), (5, ... \n", "nt-639-1 [[(0, 1), (1, 1), (2, 1), (3, 1), (4, 1), (5, ... \n", "nt-639-2 [[(0, 1), (1, 1), (2, 1), (3, 1), (4, 1), (5, ... \n", "\n", " answer_text \n", "sequence_id \n", "ns-1292-0 [[Tommy Green, Janis Dalins, Ugo Frigerio, Kar... \n", "nt-10730-0 [[1,900 (estimated), 14,500 (estimated), 6,000... \n", "nt-10730-1 [[Remington-Beals Army Model Revolver, New Mod... \n", "nt-10730-2 [[Remington-Beals Army Model Revolver, Remingt... \n", "nt-11649-0 [[Cordoba CF, CD Malaga, Granada CF, UD Las Pa... \n", "nt-11649-1 [[6, 6, 9, 10, 10, 12, 12, 11, 13, 14, 15, 14,... \n", "nt-11649-2 [[Cordoba CF, CD Malaga, Granada CF, UD Las Pa... \n", "nt-639-0 [[Louisiana State University, Valley HS (Las V... \n", "nt-639-1 [[Ben McDonald, Tyler Houston, Roger Salkeld, ... \n", "nt-639-2 [[Ben McDonald, Tyler Houston, Roger Salkeld, ... " ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# let's group table-question pairs by sequence id, and remove some columns we don't need\n", "grouped = data.groupby(by='sequence_id').agg(lambda x: x.tolist())\n", "grouped = grouped.drop(columns=['id', 'annotator', 'position'])\n", "grouped['table_file'] = grouped['table_file'].apply(lambda x: x[0])\n", "grouped.head(10)" ] }, { "cell_type": "markdown", "metadata": { "id": "r6RKTkSeLLyJ" }, "source": [ "Each row in the dataframe above now consists of a **table and one or more questions** which are asked in a **sequence**. Let's visualize the first row, i.e. a table, together with its queries:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 525 }, "id": "J-dTi5omLdN_", "outputId": "b8e1d893-8d8b-4540-dc35-57586312c992" }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
RankNameNationalityTime (hand)Notes
0nanTommy GreenGreat Britain4:50:10OR
1nanJanis DalinsLatvia4:57:20nan
2nanUgo FrigerioItaly4:59:06nan
34.0Karl HahnelGermany5:06:06nan
45.0Ettore RivoltaItaly5:07:39nan
56.0Paul SievertGermany5:16:41nan
67.0Henri QuintricFrance5:27:25nan
78.0Ernie CrosbieUnited States5:28:02nan
89.0Bill ChisholmUnited States5:51:00nan
910.0Alfred MaasikEstonia6:19:00nan
10nanHenry CiemanCanadananDNF
11nanJohn MoralisGreecenanDNF
12nanFrancesco PrettiItalynanDNF
13nanArthur Tell SchwabSwitzerlandnanDNF
14nanHarry HinkelUnited StatesnanDNF
\n", "
" ], "text/plain": [ " Rank Name Nationality Time (hand) Notes\n", "0 nan Tommy Green Great Britain 4:50:10 OR\n", "1 nan Janis Dalins Latvia 4:57:20 nan\n", "2 nan Ugo Frigerio Italy 4:59:06 nan\n", "3 4.0 Karl Hahnel Germany 5:06:06 nan\n", "4 5.0 Ettore Rivolta Italy 5:07:39 nan\n", "5 6.0 Paul Sievert Germany 5:16:41 nan\n", "6 7.0 Henri Quintric France 5:27:25 nan\n", "7 8.0 Ernie Crosbie United States 5:28:02 nan\n", "8 9.0 Bill Chisholm United States 5:51:00 nan\n", "9 10.0 Alfred Maasik Estonia 6:19:00 nan\n", "10 nan Henry Cieman Canada nan DNF\n", "11 nan John Moralis Greece nan DNF\n", "12 nan Francesco Pretti Italy nan DNF\n", "13 nan Arthur Tell Schwab Switzerland nan DNF\n", "14 nan Harry Hinkel United States nan DNF" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "\n", "['who are all the athletes?', 'where are they from?', 'along with paul sievert, which athlete is from germany?']\n" ] } ], "source": [ "# path to the directory containing all csv files\n", "table_csv_path = \"table_csv\"\n", "\n", "item = grouped.iloc[0]\n", "table = pd.read_csv(table_csv_path + item.table_file[9:]).astype(str)\n", "\n", "display(table)\n", "print(\"\")\n", "print(item.question)" ] }, { "cell_type": "markdown", "metadata": { "id": "yw8MqIExLnnq" }, "source": [ "We can see that there are 3 sequential questions asked related to the contents of the table.\n", "\n", "We can now use `TapasTokenizer` to batch encode this, as follows:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "id": "t5iU5byAICWb" }, "outputs": [], "source": [ "import torch\n", "from transformers import TapasTokenizer\n", "\n", "# initialize the tokenizer\n", "tokenizer = TapasTokenizer.from_pretrained(\"google/tapas-base\")" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "5qOBiUPEGgK8", "outputId": "7bc36d39-21be-433e-ecde-3f0d81c340ea" }, "outputs": [ { "data": { "text/plain": [ "dict_keys(['input_ids', 'labels', 'numeric_values', 'numeric_values_scale', 'token_type_ids', 'attention_mask'])" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "encoding = tokenizer(table=table, queries=item.question, answer_coordinates=item.answer_coordinates, answer_text=item.answer_text,\n", " truncation=True, padding=\"max_length\", return_tensors=\"pt\")\n", "encoding.keys()" ] }, { "cell_type": "markdown", "metadata": { "id": "y2JRiKjPRHAF" }, "source": [ "TAPAS basically flattens every table-question pair before feeding it into a BERT like model:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 137 }, "id": "lhipz2_GRNKQ", "outputId": "a3ad3993-5173-45c7-a43b-993ab42f77e3" }, "outputs": [ { "data": { "text/plain": [ "'[CLS] who are all the athletes? [SEP] rank name nationality time ( hand ) notes [EMPTY] tommy green great britain 4 : 50 : 10 or [EMPTY] janis dalins latvia 4 : 57 : 20 [EMPTY] [EMPTY] ugo frigerio italy 4 : 59 : 06 [EMPTY] 4. 0 karl hahnel germany 5 : 06 : 06 [EMPTY] 5. 0 ettore rivolta italy 5 : 07 : 39 [EMPTY] 6. 0 paul sievert germany 5 : 16 : 41 [EMPTY] 7. 0 henri quintric france 5 : 27 : 25 [EMPTY] 8. 0 ernie crosbie united states 5 : 28 : 02 [EMPTY] 9. 0 bill chisholm united states 5 : 51 : 00 [EMPTY] 10. 
0 alfred maasik estonia 6 : 19 : 00 [EMPTY] [EMPTY] henry cieman canada [EMPTY] dnf [EMPTY] john moralis greece [EMPTY] dnf [EMPTY] francesco pretti italy [EMPTY] dnf [EMPTY] arthur tell schwab switzerland [EMPTY] dnf [EMPTY] harry hinkel united states [EMPTY] dnf [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD]'" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tokenizer.decode(encoding[\"input_ids\"][0])" ] }, { "cell_type": "markdown", "metadata": { "id": "nVeB5IPaN5oN" }, "source": [ "The `token_type_ids` created here will be of shape (batch_size, sequence_length, 7), as TAPAS uses 7 different token types to encode tabular structure. Let's verify this:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "id": "zM0v-pwbN6gR" }, "outputs": [], "source": [ "assert encoding[\"token_type_ids\"].shape == (3, 512, 7)" ] }, { "cell_type": "markdown", "metadata": { "id": "TMt7cWJMLvue" }, "source": [ "\n", "\n", "One thing we can verify is whether the `prev_label` token type ids are created correctly. These indicate which tokens were (part of) an answer to the previous table-question pair.\n", "\n", "The prev_label token type ids of the first example in a batch must always be zero (since there's no previous table-question pair). 
Let's verify this:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "id": "ytUk-H1yL9cc" }, "outputs": [], "source": [ "assert encoding[\"token_type_ids\"][0][:,3].sum() == 0" ] }, { "cell_type": "markdown", "metadata": { "id": "rJ_o-82nMfK5" }, "source": [ "However, the `prev_label` token type ids of the second table-question pair in the batch must be set to 1 for the tokens which were an answer to the previous (i.e. the first) table question pair in the batch. The answers to the first table-question pair are the following:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "yxT9h2LIMNt3", "outputId": "69b29df5-8103-4b55-e8f4-598bd637a546" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['Tommy Green', 'Janis Dalins', 'Ugo Frigerio', 'Karl Hahnel', 'Ettore Rivolta', 'Paul Sievert', 'Henri Quintric', 'Ernie Crosbie', 'Bill Chisholm', 'Alfred Maasik', 'Henry Cieman', 'John Moralis', 'Francesco Pretti', 'Arthur Tell Schwab', 'Harry Hinkel']\n" ] } ], "source": [ "print(item.answer_text[0])" ] }, { "cell_type": "markdown", "metadata": { "id": "CSUkMGAcMpfE" }, "source": [ "So let's now verify whether the `prev_label` ids of the second table-question pair are set correctly:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "Uv6P7OpJGxuu", "outputId": "69b92a6a-408f-48f8-9842-dd3842f7188c" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[CLS] 0\n", "where 0\n", "are 0\n", "they 0\n", "from 0\n", "? 0\n", "[SEP] 0\n", "rank 0\n", "name 0\n", "nationality 0\n", "time 0\n", "( 0\n", "hand 0\n", ") 0\n", "notes 0\n", "[EMPTY] 0\n", "tommy 1\n", "green 1\n", "great 0\n", "britain 0\n", "4 0\n", ": 0\n", "50 0\n", ": 0\n", "10 0\n", "or 0\n", "[EMPTY] 0\n", "jan 1\n", "##is 1\n", "dali 1\n", "##ns 1\n", "latvia 0\n", "4 0\n", ": 0\n", "57 0\n", ": 0\n", "20 0\n", "[EMPTY] 0\n", "[EMPTY] 0\n", "u 1\n", "##go 1\n", "fr 1\n", "##iger 1\n", "##io 1\n", "italy 0\n", "4 0\n", ": 0\n", "59 0\n", ": 0\n", "06 0\n", "[EMPTY] 0\n", "4 0\n", ". 0\n", "0 0\n", "karl 1\n", "hahn 1\n", "##el 1\n", "germany 0\n", "5 0\n", ": 0\n", "06 0\n", ": 0\n", "06 0\n", "[EMPTY] 0\n", "5 0\n", ". 0\n", "0 0\n", "et 1\n", "##tore 1\n", "ri 1\n", "##vo 1\n", "##lta 1\n", "italy 0\n", "5 0\n", ": 0\n", "07 0\n", ": 0\n", "39 0\n", "[EMPTY] 0\n", "6 0\n", ". 0\n", "0 0\n", "paul 1\n", "si 1\n", "##ever 1\n", "##t 1\n", "germany 0\n", "5 0\n", ": 0\n", "16 0\n", ": 0\n", "41 0\n", "[EMPTY] 0\n", "7 0\n", ". 0\n", "0 0\n", "henri 1\n", "qui 1\n", "##nt 1\n", "##ric 1\n", "france 0\n", "5 0\n", ": 0\n", "27 0\n", ": 0\n", "25 0\n", "[EMPTY] 0\n", "8 0\n", ". 0\n", "0 0\n", "ernie 1\n", "cr 1\n", "##os 1\n", "##bie 1\n", "united 0\n", "states 0\n", "5 0\n", ": 0\n", "28 0\n", ": 0\n", "02 0\n", "[EMPTY] 0\n", "9 0\n", ". 0\n", "0 0\n", "bill 1\n", "chi 1\n", "##sho 1\n", "##lm 1\n", "united 0\n", "states 0\n", "5 0\n", ": 0\n", "51 0\n", ": 0\n", "00 0\n", "[EMPTY] 0\n", "10 0\n", ". 
0\n", "0 0\n", "alfred 1\n", "ma 1\n", "##asi 1\n", "##k 1\n", "estonia 0\n", "6 0\n", ": 0\n", "19 0\n", ": 0\n", "00 0\n", "[EMPTY] 0\n", "[EMPTY] 0\n", "henry 1\n", "ci 1\n", "##eman 1\n", "canada 0\n", "[EMPTY] 0\n", "d 0\n", "##n 0\n", "##f 0\n", "[EMPTY] 0\n", "john 1\n", "moral 1\n", "##is 1\n", "greece 0\n", "[EMPTY] 0\n", "d 0\n", "##n 0\n", "##f 0\n", "[EMPTY] 0\n", "francesco 1\n", "pre 1\n", "##tti 1\n", "italy 0\n", "[EMPTY] 0\n", "d 0\n", "##n 0\n", "##f 0\n", "[EMPTY] 0\n", "arthur 1\n", "tell 1\n", "sc 1\n", "##hwa 1\n", "##b 1\n", "switzerland 0\n", "[EMPTY] 0\n", "d 0\n", "##n 0\n", "##f 0\n", "[EMPTY] 0\n", "harry 1\n", "hi 1\n", "##nk 1\n", "##el 1\n", "united 0\n", "states 0\n", "[EMPTY] 0\n", "d 0\n", "##n 0\n", "##f 0\n" ] } ], "source": [ "for id, prev_label in zip (encoding[\"input_ids\"][1], encoding[\"token_type_ids\"][1][:,3]):\n", " if id != 0: # we skip padding tokens\n", " print(tokenizer.decode([id]), prev_label.item())" ] }, { "cell_type": "markdown", "metadata": { "id": "wjVk49fO6u8H" }, "source": [ "This looks OK! Be sure to check this, because the token type ids are critical for the performance of TAPAS.\n", "\n", "Let's create a PyTorch dataset and corresponding dataloader. Note the __getitem__ method here: in order to properly set the prev_labels token types, we must check whether a table-question pair is the first in a sequence or not. In case it is, we can just encode it. In case it isn't, we need to encode it together with the previous table-question pair.\n", "\n", "Note that this is not the most efficient approach, because we're effectively tokenizing each table-question pair twice when applied on the entire dataset (feel free to ping me a more efficient solution)." ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "id": "C-n9vDTD1-k9" }, "outputs": [], "source": [ "class TableDataset(torch.utils.data.Dataset):\n", " def __init__(self, df, tokenizer):\n", " self.df = df\n", " self.tokenizer = tokenizer\n", "\n", " def __getitem__(self, idx):\n", " item = self.df.iloc[idx]\n", " table = pd.read_csv(table_csv_path + item.table_file[9:]).astype(str) # TapasTokenizer expects the table data to be text only\n", " if item.position != 0:\n", " # use the previous table-question pair to correctly set the prev_labels token type ids\n", " previous_item = self.df.iloc[idx-1]\n", " encoding = self.tokenizer(table=table,\n", " queries=[previous_item.question, item.question],\n", " answer_coordinates=[previous_item.answer_coordinates, item.answer_coordinates],\n", " answer_text=[previous_item.answer_text, item.answer_text],\n", " padding=\"max_length\",\n", " truncation=True,\n", " return_tensors=\"pt\"\n", " )\n", " # use encodings of second table-question pair in the batch\n", " encoding = {key: val[-1] for key, val in encoding.items()}\n", " else:\n", " # this means it's the first table-question pair in a sequence\n", " encoding = self.tokenizer(table=table,\n", " queries=item.question,\n", " answer_coordinates=item.answer_coordinates,\n", " answer_text=item.answer_text,\n", " padding=\"max_length\",\n", " truncation=True,\n", " return_tensors=\"pt\"\n", " )\n", " # remove the batch dimension which the tokenizer adds\n", " encoding = {key: val.squeeze(0) for key, val in encoding.items()}\n", " return encoding\n", "\n", " def __len__(self):\n", " return len(self.df)\n", "\n", "train_dataset = TableDataset(df=data, tokenizer=tokenizer)\n", "train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=2)" ] }, { "cell_type": "code", 
"execution_count": 16, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "X4CHgnTzwfNp", "outputId": "a0980f27-a317-4375-9bc4-0085acad0e5f" }, "outputs": [ { "data": { "text/plain": [ "torch.Size([512, 7])" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "train_dataset[0][\"token_type_ids\"].shape" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "bZN1psdBy5_s", "outputId": "e085d737-3a6f-45e5-c200-7c21916b284a" }, "outputs": [ { "data": { "text/plain": [ "torch.Size([512])" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "train_dataset[1][\"input_ids\"].shape" ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "id": "pHAyf85k_xQt" }, "outputs": [], "source": [ "batch = next(iter(train_dataloader))" ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "FoqySHh-_0JV", "outputId": "9c0ab5d9-0a06-4331-80e2-ba3b739cfa92" }, "outputs": [ { "data": { "text/plain": [ "torch.Size([2, 512])" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "batch[\"input_ids\"].shape" ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "g5pjJCCT_53N", "outputId": "d2ebbf1e-0701-47c1-8533-a087892bd715" }, "outputs": [ { "data": { "text/plain": [ "torch.Size([2, 512, 7])" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "batch[\"token_type_ids\"].shape" ] }, { "cell_type": "markdown", "metadata": { "id": "xVb1-H-jAEoS" }, "source": [ "Let's decode the first table-question pair:" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 137 }, "id": "1vfjT1JC_7zI", "outputId": "f1a85d76-96ab-4a4d-f8ae-c7ee913c6d7f" }, "outputs": [ { "data": { "text/plain": [ "'[CLS] where are the players from? [SEP] pick player team position school 1 ben mcdonald baltimore orioles rhp louisiana state university 2 tyler houston atlanta braves c valley hs ( las vegas, nv ) 3 roger salkeld seattle mariners rhp saugus ( ca ) hs 4 jeff jackson philadelphia phillies of simeon hs ( chicago, il ) 5 donald harris texas rangers of texas tech university 6 paul coleman saint louis cardinals of frankston ( tx ) hs 7 frank thomas chicago white sox 1b auburn university 8 earl cunningham chicago cubs of lancaster ( sc ) hs 9 kyle abbott california angels lhp long beach state university 10 charles johnson montreal expos c westwood hs ( fort pierce, fl ) 11 calvin murray cleveland indians 3b w. t. 
white high school ( dallas, tx ) 12 jeff juden houston astros rhp salem ( ma ) hs 13 brent mayne kansas city royals c cal state fullerton 14 steve hosey san francisco giants of fresno state university 15 kiki jones los angeles dodgers rhp hillsborough hs ( tampa, fl ) 16 greg blosser boston red sox of sarasota ( fl ) hs 17 cal eldred milwaukee brewers rhp university of iowa 18 willie greene pittsburgh pirates ss jones county hs ( gray, ga ) 19 eddie zosky toronto blue jays ss fresno state university 20 scott bryant cincinnati reds of university of texas 21 greg gohr detroit tigers rhp santa clara university 22 tom goodwin los angeles dodgers of fresno state university 23 mo vaughn boston red sox 1b seton hall university 24 alan zinter new york mets c university of arizona 25 chuck knoblauch minnesota twins 2b texas a & m university 26 scott burrell seattle mariners rhp hamden ( ct ) hs [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD]'" ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tokenizer.decode(batch[\"input_ids\"][0])" ] }, { "cell_type": "code", "execution_count": 22, "metadata": { "id": "sujsp8o9DtsY" }, "outputs": [], "source": [ "#first example should not have any prev_labels set\n", "assert batch[\"token_type_ids\"][0][:,3].sum() == 0" ] }, { "cell_type": "markdown", "metadata": { "id": "EIeql5vfFI6s" }, "source": [ "Let's decode the second table-question pair and verify some more:" ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 137 }, "id": "WrNo_qMqFOzi", "outputId": "b2051f0b-72d8-42e2-a6b6-c5a40eda666f" }, "outputs": [ { "data": { "text/plain": [ "'[CLS] which player went to louisiana state university? [SEP] pick player team position school 1 ben mcdonald baltimore orioles rhp louisiana state university 2 tyler houston atlanta braves c valley hs ( las vegas, nv ) 3 roger salkeld seattle mariners rhp saugus ( ca ) hs 4 jeff jackson philadelphia phillies of simeon hs ( chicago, il ) 5 donald harris texas rangers of texas tech university 6 paul coleman saint louis cardinals of frankston ( tx ) hs 7 frank thomas chicago white sox 1b auburn university 8 earl cunningham chicago cubs of lancaster ( sc ) hs 9 kyle abbott california angels lhp long beach state university 10 charles johnson montreal expos c westwood hs ( fort pierce, fl ) 11 calvin murray cleveland indians 3b w. t. 
white high school ( dallas, tx ) 12 jeff juden houston astros rhp salem ( ma ) hs 13 brent mayne kansas city royals c cal state fullerton 14 steve hosey san francisco giants of fresno state university 15 kiki jones los angeles dodgers rhp hillsborough hs ( tampa, fl ) 16 greg blosser boston red sox of sarasota ( fl ) hs 17 cal eldred milwaukee brewers rhp university of iowa 18 willie greene pittsburgh pirates ss jones county hs ( gray, ga ) 19 eddie zosky toronto blue jays ss fresno state university 20 scott bryant cincinnati reds of university of texas 21 greg gohr detroit tigers rhp santa clara university 22 tom goodwin los angeles dodgers of fresno state university 23 mo vaughn boston red sox 1b seton hall university 24 alan zinter new york mets c university of arizona 25 chuck knoblauch minnesota twins 2b texas a & m university 26 scott burrell seattle mariners rhp hamden ( ct ) hs [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD]'" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "tokenizer.decode(batch[\"input_ids\"][1])" ] }, { "cell_type": "code", "execution_count": 24, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "9a1OToVqxNap", "outputId": "2040d63a-024f-4e65-e17c-2a51630a9226" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tensor(132)\n" ] } ], "source": [ "assert batch[\"labels\"][0].sum() == batch[\"token_type_ids\"][1][:,3].sum()\n", "print(batch[\"token_type_ids\"][1][:,3].sum())" ] }, { "cell_type": "code", "execution_count": 25, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "x4PRdYvBE1k3", "outputId": "31bc6092-57e8-4040-f410-83e314ee4a0f" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[CLS] 0\n", "which 0\n", "player 0\n", "went 0\n", "to 0\n", "louisiana 0\n", "state 0\n", "university 0\n", "? 
0\n", "[SEP] 0\n", "pick 0\n", "player 0\n", "team 0\n", "position 0\n", "school 0\n", "1 0\n", "ben 0\n", "mcdonald 0\n", "baltimore 0\n", "orioles 0\n", "r 0\n", "##hp 0\n", "louisiana 1\n", "state 1\n", "university 1\n", "2 0\n", "tyler 0\n", "houston 0\n", "atlanta 0\n", "braves 0\n", "c 0\n", "valley 1\n", "hs 1\n", "( 1\n", "las 1\n", "vegas 1\n", ", 1\n", "n 1\n", "##v 1\n", ") 1\n", "3 0\n", "roger 0\n", "sal 0\n", "##kel 0\n", "##d 0\n", "seattle 0\n", "mariners 0\n", "r 0\n", "##hp 0\n", "sa 1\n", "##ug 1\n", "##us 1\n", "( 1\n", "ca 1\n", ") 1\n", "hs 1\n", "4 0\n", "jeff 0\n", "jackson 0\n", "philadelphia 0\n", "phillies 0\n", "of 0\n", "simeon 1\n", "hs 1\n", "( 1\n", "chicago 1\n", ", 1\n", "il 1\n", ") 1\n", "5 0\n", "donald 0\n", "harris 0\n", "texas 0\n", "rangers 0\n", "of 0\n", "texas 1\n", "tech 1\n", "university 1\n", "6 0\n", "paul 0\n", "coleman 0\n", "saint 0\n", "louis 0\n", "cardinals 0\n", "of 0\n", "franks 1\n", "##ton 1\n", "( 1\n", "tx 1\n", ") 1\n", "hs 1\n", "7 0\n", "frank 0\n", "thomas 0\n", "chicago 0\n", "white 0\n", "sox 0\n", "1b 0\n", "auburn 1\n", "university 1\n", "8 0\n", "earl 0\n", "cunningham 0\n", "chicago 0\n", "cubs 0\n", "of 0\n", "lancaster 1\n", "( 1\n", "sc 1\n", ") 1\n", "hs 1\n", "9 0\n", "kyle 0\n", "abbott 0\n", "california 0\n", "angels 0\n", "l 0\n", "##hp 0\n", "long 1\n", "beach 1\n", "state 1\n", "university 1\n", "10 0\n", "charles 0\n", "johnson 0\n", "montreal 0\n", "expo 0\n", "##s 0\n", "c 0\n", "westwood 1\n", "hs 1\n", "( 1\n", "fort 1\n", "pierce 1\n", ", 1\n", "fl 1\n", ") 1\n", "11 0\n", "calvin 0\n", "murray 0\n", "cleveland 0\n", "indians 0\n", "3 0\n", "##b 0\n", "w 1\n", ". 1\n", "t 1\n", ". 1\n", "white 1\n", "high 1\n", "school 1\n", "( 1\n", "dallas 1\n", ", 1\n", "tx 1\n", ") 1\n", "12 0\n", "jeff 0\n", "jude 0\n", "##n 0\n", "houston 0\n", "astros 0\n", "r 0\n", "##hp 0\n", "salem 1\n", "( 1\n", "ma 1\n", ") 1\n", "hs 1\n", "13 0\n", "brent 0\n", "may 0\n", "##ne 0\n", "kansas 0\n", "city 0\n", "royals 0\n", "c 0\n", "cal 1\n", "state 1\n", "fuller 1\n", "##ton 1\n", "14 0\n", "steve 0\n", "hose 0\n", "##y 0\n", "san 0\n", "francisco 0\n", "giants 0\n", "of 0\n", "fresno 1\n", "state 1\n", "university 1\n", "15 0\n", "ki 0\n", "##ki 0\n", "jones 0\n", "los 0\n", "angeles 0\n", "dodgers 0\n", "r 0\n", "##hp 0\n", "hillsborough 1\n", "hs 1\n", "( 1\n", "tampa 1\n", ", 1\n", "fl 1\n", ") 1\n", "16 0\n", "greg 0\n", "b 0\n", "##los 0\n", "##ser 0\n", "boston 0\n", "red 0\n", "sox 0\n", "of 0\n", "sara 1\n", "##so 1\n", "##ta 1\n", "( 1\n", "fl 1\n", ") 1\n", "hs 1\n", "17 0\n", "cal 0\n", "el 0\n", "##dre 0\n", "##d 0\n", "milwaukee 0\n", "brewers 0\n", "r 0\n", "##hp 0\n", "university 1\n", "of 1\n", "iowa 1\n", "18 0\n", "willie 0\n", "greene 0\n", "pittsburgh 0\n", "pirates 0\n", "ss 0\n", "jones 1\n", "county 1\n", "hs 1\n", "( 1\n", "gray 1\n", ", 1\n", "ga 1\n", ") 1\n", "19 0\n", "eddie 0\n", "z 0\n", "##os 0\n", "##ky 0\n", "toronto 0\n", "blue 0\n", "jays 0\n", "ss 0\n", "fresno 1\n", "state 1\n", "university 1\n", "20 0\n", "scott 0\n", "bryant 0\n", "cincinnati 0\n", "reds 0\n", "of 0\n", "university 1\n", "of 1\n", "texas 1\n", "21 0\n", "greg 0\n", "go 0\n", "##hr 0\n", "detroit 0\n", "tigers 0\n", "r 0\n", "##hp 0\n", "santa 1\n", "clara 1\n", "university 1\n", "22 0\n", "tom 0\n", "goodwin 0\n", "los 0\n", "angeles 0\n", "dodgers 0\n", "of 0\n", "fresno 1\n", "state 1\n", "university 1\n", "23 0\n", "mo 0\n", "vaughn 0\n", "boston 0\n", "red 0\n", "sox 0\n", "1b 0\n", "seton 1\n", "hall 1\n", 
"university 1\n", "24 0\n", "alan 0\n", "z 0\n", "##int 0\n", "##er 0\n", "new 0\n", "york 0\n", "mets 0\n", "c 0\n", "university 1\n", "of 1\n", "arizona 1\n", "25 0\n", "chuck 0\n", "knob 0\n", "##lau 0\n", "##ch 0\n", "minnesota 0\n", "twins 0\n", "2 0\n", "##b 0\n", "texas 1\n", "a 1\n", "& 1\n", "m 1\n", "university 1\n", "26 0\n", "scott 0\n", "burr 0\n", "##ell 0\n", "seattle 0\n", "mariners 0\n", "r 0\n", "##hp 0\n", "ham 1\n", "##den 1\n", "( 1\n", "ct 1\n", ") 1\n", "hs 1\n" ] } ], "source": [ "for id, prev_label in zip(batch[\"input_ids\"][1], batch[\"token_type_ids\"][1][:,3]):\n", " if id != 0:\n", " print(tokenizer.decode([id]), prev_label.item())" ] }, { "cell_type": "markdown", "metadata": { "id": "cAem9QnIxoKb" }, "source": [ "## Define the model\n", "\n", "Here we initialize the model with a pre-trained base and randomly initialized cell selection head, and move it to the GPU (if available).\n", "\n", "Note that the `google/tapas-base` checkpoint has (by default) an SQA configuration, so we don't need to specify any additional hyperparameters." ] }, { "cell_type": "code", "execution_count": 26, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 1000, "referenced_widgets": [ "768723f1af4c4bf497064a9796567382", "82bfe5d88c5546358de33952e42221a3", "c6aadbda246d47fcad9660c2f57926a0", "cd910003940b498e94db9746091e4c3c", "a11bc2e0d34543d1bb0ab708e0cc1a67", "d59737fa0a424a90ba02110e4cf1b639", "d534c2ebdbb144efa59500fcadf6e118", "f6fef10b29f74c458d05c2cbda46bf4a", "695d0c44ebbe4a55a17c735645c26c82", "7e3cd6c49143436bad3d84be7ca2cf79", "be6159a5be0945629a517297df1170b8", "b889ffa28a1949e9ac46e205ee582688", "3c95dd6249784fe6a2a466737e5f5866", "cf9382ee988f4787a88340d3b2af95f3", "45fdca8a04f94b43b021c590181e8a48", "3bad5ab136084c2e92b55153828a403f" ] }, "id": "_OsPodbiDliR", "outputId": "e2094861-fc6c-42b9-b12b-0f6824ad3048" }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "5448e41096dc49b1a25147fdf8720f83", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading pytorch_model.bin: 0%| | 0.00/443M [00:00= 0 and col_id >= 0 and segment_id == 1:\n", " model_label_ids[i] = int(coords_to_answer[(col_id, row_id)])\n", "\n", " # set the prev label ids of the example (shape (1, seq_len) )\n", " token_type_ids_example[:,3] = torch.from_numpy(model_label_ids).type(torch.long).to(device)\n", "\n", " prev_answers = {}\n", " # get the example\n", " input_ids_example = input_ids[idx] # shape (seq_len,)\n", " attention_mask_example = attention_mask[idx] # shape (seq_len,)\n", " token_type_ids_example = token_type_ids[idx] # shape (seq_len, 7)\n", " # forward pass to obtain the logits\n", " outputs = model(input_ids=input_ids_example.unsqueeze(0),\n", " attention_mask=attention_mask_example.unsqueeze(0),\n", " token_type_ids=token_type_ids_example.unsqueeze(0))\n", " logits = outputs.logits\n", " all_logits.append(logits)\n", "\n", " # convert logits to probabilities (which are of shape (1, seq_len))\n", " dist_per_token = torch.distributions.Bernoulli(logits=logits)\n", " probabilities = dist_per_token.probs * attention_mask_example.type(torch.float32).to(dist_per_token.probs.device)\n", "\n", " # Compute average probability per cell, aggregating over tokens.\n", " # Dictionary maps coordinates to a list of one or more probabilities\n", " coords_to_probs = collections.defaultdict(list)\n", " prev_answers = {}\n", " for i, p in enumerate(probabilities.squeeze().tolist()):\n", " segment_id = 
token_type_ids_example[:,0].tolist()[i]\n", "            col = token_type_ids_example[:,1].tolist()[i] - 1\n", "            row = token_type_ids_example[:,2].tolist()[i] - 1\n", "            if col >= 0 and row >= 0 and segment_id == 1:\n", "                coords_to_probs[(col, row)].append(p)\n", "\n", "        # Next, map cell coordinates to 1 or 0 (depending on whether the mean prob of all cell tokens is > 0.5)\n", "        coords_to_answer = {}\n", "        for key in coords_to_probs:\n", "            coords_to_answer[key] = np.array(coords_to_probs[key]).mean() > 0.5\n", "        prev_answers[idx+1] = coords_to_answer\n", "\n", "    logits_batch = torch.cat(tuple(all_logits), 0)\n", "\n", "    return logits_batch" ] }, { "cell_type": "code", "execution_count": 29, "metadata": { "id": "jflxDE_BfVg9" }, "outputs": [], "source": [ "data = {'Actors': [\"Brad Pitt\", \"Leonardo Di Caprio\", \"George Clooney\"],\n", "        'Age': [\"56\", \"45\", \"59\"],\n", "        'Number of movies': [\"87\", \"53\", \"69\"],\n", "        'Date of birth': [\"7 february 1967\", \"10 june 1996\", \"28 november 1967\"]}\n", "queries = [\"How many movies has George Clooney played in?\", \"How old is he?\", \"What's his date of birth?\"]\n", "\n", "table = pd.DataFrame.from_dict(data)\n", "\n", "inputs = tokenizer(table=table, queries=queries, padding='max_length', return_tensors=\"pt\")\n", "logits = compute_prediction_sequence(model, inputs, device)" ] }, { "cell_type": "markdown", "metadata": { "id": "k_a_Y-rDq__o" }, "source": [ "Finally, we can use the handy `convert_logits_to_predictions` function of `TapasTokenizer` to convert the logits into predicted coordinates, and print out the result:" ] }, { "cell_type": "code", "execution_count": 30, "metadata": { "id": "5fAcNOVsqoVD" }, "outputs": [], "source": [ "predicted_answer_coordinates, = tokenizer.convert_logits_to_predictions(inputs, logits.cpu().detach())" ] }, { "cell_type": "code", "execution_count": 31, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 254 }, "id": "QP4AHMxFujhV", "outputId": "aed2fc99-957b-4b9f-e804-b426a80de8df" }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
ActorsAgeNumber of moviesDate of birth
0Brad Pitt56877 february 1967
1Leonardo Di Caprio455310 june 1996
2George Clooney596928 november 1967
\n", "
" ], "text/plain": [ " Actors Age Number of movies Date of birth\n", "0 Brad Pitt 56 87 7 february 1967\n", "1 Leonardo Di Caprio 45 53 10 june 1996\n", "2 George Clooney 59 69 28 november 1967" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "\n", "How many movies has George Clooney played in?\n", "Predicted answer: \n", "How old is he?\n", "Predicted answer: Brad Pitt, Leonardo Di Caprio, George Clooney\n", "What's his date of birth?\n", "Predicted answer: 7 february 1967, 10 june 1996, 28 november 1967\n" ] } ], "source": [ "# handy helper function in case inference on Pandas dataframe\n", "answers = []\n", "for coordinates in predicted_answer_coordinates:\n", " if len(coordinates) == 1:\n", " # only a single cell:\n", " answers.append(table.iat[coordinates[0]])\n", " else:\n", " # multiple cells\n", " cell_values = []\n", " for coordinate in coordinates:\n", " cell_values.append(table.iat[coordinate])\n", " answers.append(\", \".join(cell_values))\n", "\n", "display(table)\n", "print(\"\")\n", "for query, answer in zip(queries, answers):\n", " print(query)\n", " print(\"Predicted answer: \" + answer)" ] }, { "cell_type": "markdown", "metadata": { "id": "6L0KBaPjG7uj" }, "source": [ "Note that the results here are not correct, that's obvious since we only trained on 28 examples and tested it on an entire different example. In reality, you should train on the entire dataset. The result of this is the `google/tapas-base-finetuned-sqa` checkpoint." ] }, { "cell_type": "markdown", "metadata": { "id": "Y4S-TIGSvqhZ" }, "source": [ "## Legacy\n", "\n", "The code below was considered during the creation of this tutorial, but eventually not used." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "ox1ZECiJ5vSD" }, "outputs": [], "source": [ "# grouped = data.groupby(data.position)\n", "# test = grouped.get_group(0)\n", "# test.index" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "L0IuO6vivrw_" }, "outputs": [], "source": [ "def custom_collate_fn(data):\n", " \"\"\"\n", " A custom collate function to batch input_ids, attention_mask, token_type_ids and so on of different batch sizes.\n", "\n", " Args:\n", " data:\n", " a list of dictionaries (each dictionary is what the __getitem__ method of TableDataset returns)\n", " \"\"\"\n", " result = {}\n", " for k in data[0].keys():\n", " result[k] = torch.cat([x[k] for x in data], dim=0)\n", "\n", " return result\n", "\n", "class TableDataset(torch.utils.data.Dataset):\n", " def __init__(self, df, tokenizer):\n", " self.df = df\n", " self.tokenizer = tokenizer\n", "\n", " def __getitem__(self, idx):\n", " item = self.df.iloc[idx]\n", " table = pd.read_csv(table_csv_path + item.table_file[9:]).astype(str) # TapasTokenizer expects the table data to be text only\n", " if item.position != 0:\n", " # use the previous table-question pair\n", " previous_item = self.df.iloc[idx-1]\n", " encoding = self.tokenizer(table=table,\n", " queries=[previous_item.question, item.question],\n", " answer_coordinates=[previous_item.answer_coordinates, item.answer_coordinates],\n", " answer_text=[previous_item.answer_text, item.answer_text],\n", " padding=\"max_length\",\n", " truncation=True,\n", " return_tensors=\"pt\"\n", " )\n", " # remove the batch dimension which the tokenizer adds\n", " encoding = {key: val[-1] for key, val in encoding.items()}\n", " #encoding = {key: val.squeeze(0) for key, val in encoding.items()}\n", " else:\n", " # this means it's the first 
{ "cell_type": "markdown", "metadata": { "id": "Y4S-TIGSvqhZ" }, "source": [ "## Legacy\n", "\n", "The code below was considered during the creation of this tutorial, but eventually not used." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "ox1ZECiJ5vSD" }, "outputs": [], "source": [ "# grouped = data.groupby(data.position)\n", "# test = grouped.get_group(0)\n", "# test.index" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "L0IuO6vivrw_" }, "outputs": [], "source": [ "def custom_collate_fn(data):\n", "    \"\"\"\n", "    A custom collate function that concatenates the encodings of individual\n", "    table-question pairs (input_ids, attention_mask, token_type_ids and so on)\n", "    along the first dimension, since each item already comes with a batch\n", "    dimension of size 1.\n", "\n", "    Args:\n", "        data:\n", "            a list of dictionaries (each dictionary is what the __getitem__ method of TableDataset returns)\n", "    \"\"\"\n", "    result = {}\n", "    for k in data[0].keys():\n", "        result[k] = torch.cat([x[k] for x in data], dim=0)\n", "\n", "    return result\n", "\n", "class TableDataset(torch.utils.data.Dataset):\n", "    def __init__(self, df, tokenizer):\n", "        self.df = df\n", "        self.tokenizer = tokenizer\n", "\n", "    def __getitem__(self, idx):\n", "        item = self.df.iloc[idx]\n", "        table = pd.read_csv(table_csv_path + item.table_file[9:]).astype(str)  # TapasTokenizer expects the table data to be text only\n", "        if item.position != 0:\n", "            # not the first question in a sequence: encode the previous table-question\n", "            # pair as well, so the tokenizer can set the prev_labels token types\n", "            previous_item = self.df.iloc[idx-1]\n", "            encoding = self.tokenizer(table=table,\n", "                                      queries=[previous_item.question, item.question],\n", "                                      answer_coordinates=[previous_item.answer_coordinates, item.answer_coordinates],\n", "                                      answer_text=[previous_item.answer_text, item.answer_text],\n", "                                      padding=\"max_length\",\n", "                                      truncation=True,\n", "                                      return_tensors=\"pt\"\n", "            )\n", "            # keep only the encoding of the current pair (the last one), preserving\n", "            # the batch dimension so custom_collate_fn can concatenate along dim 0\n", "            encoding = {key: val[-1:] for key, val in encoding.items()}\n", "        else:\n", "            # this means it's the first table-question pair in a sequence\n", "            encoding = self.tokenizer(table=table,\n", "                                      queries=item.question,\n", "                                      answer_coordinates=item.answer_coordinates,\n", "                                      answer_text=item.answer_text,\n", "                                      padding=\"max_length\",\n", "                                      truncation=True,\n", "                                      return_tensors=\"pt\"\n", "            )\n", "        return encoding\n", "\n", "    def __len__(self):\n", "        return len(self.df)\n", "\n", "train_dataset = TableDataset(df=data, tokenizer=tokenizer)  # pass the full dataframe (a groupby object has no .iloc)\n", "train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=2, collate_fn=custom_collate_fn)" ] }
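, { "cell_type": "markdown", "metadata": {}, "source": [ "To make the batching behaviour of this legacy dataloader concrete, the sketch below (assuming the cell above was run) pulls out a single batch and prints the tensor shapes:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# sketch: inspect one batch produced by the legacy dataloader\n", "batch = next(iter(train_dataloader))\n", "for key, value in batch.items():\n", "    # e.g. input_ids: [2, 512], token_type_ids: [2, 512, 7]\n", "    print(key, value.shape)" ] }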
"IPY_MODEL_be6159a5be0945629a517297df1170b8", "IPY_MODEL_b889ffa28a1949e9ac46e205ee582688" ], "layout": "IPY_MODEL_7e3cd6c49143436bad3d84be7ca2cf79" } }, "768723f1af4c4bf497064a9796567382": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HBoxModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HBoxModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HBoxView", "box_style": "", "children": [ "IPY_MODEL_c6aadbda246d47fcad9660c2f57926a0", "IPY_MODEL_cd910003940b498e94db9746091e4c3c" ], "layout": "IPY_MODEL_82bfe5d88c5546358de33952e42221a3" } }, "7e3cd6c49143436bad3d84be7ca2cf79": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "82bfe5d88c5546358de33952e42221a3": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "a11bc2e0d34543d1bb0ab708e0cc1a67": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "ProgressStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "ProgressStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "bar_color": null, "description_width": "initial" } }, "b889ffa28a1949e9ac46e205ee582688": { "model_module": 
"@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HTMLModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HTMLModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HTMLView", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_3bad5ab136084c2e92b55153828a403f", "placeholder": "​", "style": "IPY_MODEL_45fdca8a04f94b43b021c590181e8a48", "value": " 443M/443M [00:06<00:00, 66.0MB/s]" } }, "be6159a5be0945629a517297df1170b8": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "FloatProgressModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "FloatProgressModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "ProgressView", "bar_style": "success", "description": "Downloading: 100%", "description_tooltip": null, "layout": "IPY_MODEL_cf9382ee988f4787a88340d3b2af95f3", "max": 442768791, "min": 0, "orientation": "horizontal", "style": "IPY_MODEL_3c95dd6249784fe6a2a466737e5f5866", "value": 442768791 } }, "c6aadbda246d47fcad9660c2f57926a0": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "FloatProgressModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "FloatProgressModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "ProgressView", "bar_style": "success", "description": "Downloading: 100%", "description_tooltip": null, "layout": "IPY_MODEL_d59737fa0a424a90ba02110e4cf1b639", "max": 1432, "min": 0, "orientation": "horizontal", "style": "IPY_MODEL_a11bc2e0d34543d1bb0ab708e0cc1a67", "value": 1432 } }, "cd910003940b498e94db9746091e4c3c": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HTMLModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HTMLModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HTMLView", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_f6fef10b29f74c458d05c2cbda46bf4a", "placeholder": "​", "style": "IPY_MODEL_d534c2ebdbb144efa59500fcadf6e118", "value": " 1.43k/1.43k [00:00<00:00, 4.46kB/s]" } }, "cf9382ee988f4787a88340d3b2af95f3": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, 
"min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "d534c2ebdbb144efa59500fcadf6e118": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "DescriptionStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "DescriptionStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "description_width": "" } }, "d59737fa0a424a90ba02110e4cf1b639": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "f6fef10b29f74c458d05c2cbda46bf4a": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } } } } }, "nbformat": 4, "nbformat_minor": 0 }