id
int64
0
6.41k
repo_name
stringlengths
2
91
repo_owner
stringlengths
2
39
file_link
stringlengths
84
311
line_link
stringlengths
91
317
path
stringlengths
8
227
content_sha
stringlengths
64
64
content
stringlengths
1.11k
29.2M
0
finance-complaint
Machine-Learning-01
https://github.com/Machine-Learning-01/finance-complaint/blob/9b207785ca1d12ce2ba2a8acf8141c5f00055d1d/notebook/Untitled1.ipynb
https://github.com/Machine-Learning-01/finance-complaint/blob/9b207785ca1d12ce2ba2a8acf8141c5f00055d1d/notebook/Untitled1.ipynb#L608
notebook/Untitled1.ipynb
d12c58483c42f93f58d6943065e34ed0a636d6a5ae1732b81b68dd82ddce4c2c
{ "cells": [ { "cell_type": "code", "execution_count": 4, "id": "f5fe9aa4-23f3-4a32-a4c8-7c25106e8736", "metadata": { "canvas": { "comments": [], "componentType": "CodeCell", "copiedOriginId": null, "diskcache": false, "headerColor": "none", "id": "c76bcc2e-bc22-4c56-b6e8-ff6fda7137d1", "isComponent": false, "name": "", "parents": [] } }, "outputs": [], "source": [ "df=df.select(['scaled_input_features','consumer_disputed'])" ] }, { "cell_type": "code", "execution_count": 5, "id": "c041a7cd-69fa-44d5-bf18-1be7b6ce07fc", "metadata": { "canvas": { "comments": [], "componentType": "CodeCell", "copiedOriginId": null, "diskcache": false, "headerColor": "none", "id": "e37a5f13-304e-4a80-8947-77a09abe50ad", "isComponent": false, "name": "", "parents": [] } }, "outputs": [], "source": [ "df=df.replace({\"Yes\":\"1\",\"No\":\"0\"})" ] }, { "cell_type": "code", "execution_count": 6, "id": "086c62ab-6dbe-4a17-890c-1ba1675497dd", "metadata": { "canvas": { "comments": [], "componentType": "CodeCell", "copiedOriginId": null, "diskcache": false, "headerColor": "none", "id": "517ae244-6b1e-4136-b863-ac9b2252affc", "isComponent": false, "name": "", "parents": [] } }, "outputs": [], "source": [ "from pyspark.sql.functions import col\n", "from pyspark.sql.types import IntegerType" ] }, { "cell_type": "code", "execution_count": 7, "id": "c3771f60-abce-4eb7-98d0-37444babf185", "metadata": { "canvas": { "comments": [], "componentType": "CodeCell", "copiedOriginId": null, "diskcache": false, "headerColor": "none", "id": "85c826a3-be0a-4230-b780-60ba97761f1f", "isComponent": false, "name": "", "parents": [] } }, "outputs": [], "source": [ "df=df.withColumn(\"consumer_disputed\",col(\"consumer_disputed\").cast(IntegerType()))" ] }, { "cell_type": "code", "execution_count": 8, "id": "c88e115a-2f2e-4672-b755-d2423cc28bc7", "metadata": { "canvas": { "comments": [], "componentType": "CodeCell", "copiedOriginId": null, "diskcache": false, "headerColor": "none", "id": 
"383cc564-0e4d-4807-80e6-4bbbc21d24b3", "isComponent": false, "name": "", "parents": [] } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "root\n", " |-- scaled_input_features: vector (nullable = true)\n", " |-- consumer_disputed: integer (nullable = true)\n", "\n" ] } ], "source": [ "df.printSchema()" ] }, { "cell_type": "code", "execution_count": 9, "id": "869fe559-df6b-47e1-93a7-09904d0a0e04", "metadata": { "canvas": { "comments": [], "componentType": "CodeCell", "copiedOriginId": null, "diskcache": false, "headerColor": "none", "id": "0ec110ec-61e6-4a29-8c2f-701209a3c52b", "isComponent": false, "name": "", "parents": [] } }, "outputs": [], "source": [ "train_df,test_df = df.randomSplit([0.8,0.2])" ] }, { "cell_type": "code", "execution_count": 10, "id": "27bddd6c-2ce4-4b1e-8e8c-f535d1e71b91", "metadata": { "canvas": { "comments": [], "componentType": "CodeCell", "copiedOriginId": null, "diskcache": false, "headerColor": "none", "id": "10a9c68d-7b94-435c-ac5c-b7d93e141120", "isComponent": false, "name": "", "parents": [] } }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/home/avnish/iNeuron_Private_Intelligence_Limited/industry_ready_project/env/venv/lib/python3.8/site-packages/petastorm/spark/spark_dataset_converter.py:28: FutureWarning: pyarrow.LocalFileSystem is deprecated as of 2.0.0, please use pyarrow.fs.LocalFileSystem instead.\n", " from pyarrow import LocalFileSystem\n" ] } ], "source": [ "from petastorm.spark import SparkDatasetConverter, make_spark_converter\n" ] }, { "cell_type": "code", "execution_count": null, "id": "1e7223ec-da79-45ec-bbca-5ce5f619a8ab", "metadata": { "canvas": { "comments": [], "componentType": "CodeCell", "copiedOriginId": null, "diskcache": false, "headerColor": "none", "id": "a87946d2-7f36-402b-9fc2-0e96e4b15f8d", "isComponent": false, "name": "", "parents": [] } }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": 11, "id": 
"81a196ab-cbf6-40a5-818f-ebcfe7dc584f", "metadata": { "canvas": { "comments": [], "componentType": "CodeCell", "copiedOriginId": null, "diskcache": false, "headerColor": "none", "id": "d53e6eb8-833d-4f8f-81b2-59b15df2b0b7", "isComponent": false, "name": "", "parents": [] } }, "outputs": [], "source": [ "spark=spark_session.sparkContext" ] }, { "cell_type": "code", "execution_count": 12, "id": "9d3c803d-7a09-40f3-aeb7-9207720f4ac9", "metadata": { "canvas": { "comments": [], "componentType": "CodeCell", "copiedOriginId": null, "diskcache": false, "headerColor": "none", "id": "a26bafcf-f050-4e90-8f94-198e1d50bb14", "isComponent": false, "name": "", "parents": [] } }, "outputs": [], "source": [ "import os\n", "os.makedirs(\"avnish/testing\",exist_ok=True)" ] }, { "cell_type": "code", "execution_count": 13, "id": "8ada6bf1-6dae-4716-8f82-000f6881c868", "metadata": { "canvas": { "comments": [], "componentType": "CodeCell", "copiedOriginId": null, "diskcache": false, "headerColor": "none", "id": "a6ecc839-9904-452e-a25b-2f7938510b9a", "isComponent": false, "name": "", "parents": [] } }, "outputs": [], "source": [ "from pyspark.sql.types import IntegerType\n", "from pyspark.sql.functions import col\n", "df=df.withColumn(\"consumer_disputed\",col(\"consumer_disputed\").cast(IntegerType()))" ] }, { "cell_type": "code", "execution_count": 14, "id": "adb5793f-8d9b-4e0e-ac82-29cac1200931", "metadata": { "canvas": { "comments": [], "componentType": "CodeCell", "copiedOriginId": null, "diskcache": false, "headerColor": "none", "id": "5d7a7a52-9237-44e5-a30b-b593edb913fa", "isComponent": false, "name": "", "parents": [] } }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/home/avnish/iNeuron_Private_Intelligence_Limited/industry_ready_project/env/venv/lib/python3.8/site-packages/petastorm/fs_utils.py:88: FutureWarning: pyarrow.localfs is deprecated as of 2.0.0, please use pyarrow.fs.LocalFileSystem instead.\n", " self._filesystem = pyarrow.localfs\n", 
"Converting floating-point columns to float32\n", "The median size 2254445 B (< 50 MB) of the parquet files is too small. Total size: 20010215 B. Increase the median file size by calling df.repartition(n) or df.coalesce(n), which might help improve the performance. Parquet files: file:///home/avnish/iNeuron_Private_Intelligence_Limited/industry_ready_project/finance_complaint/avnish/testing/20220912131122-appid-local-1662968477222-d613b1eb-4828-4da7-8428-7b79c78ce72a/part-00004-a782a7c4-bcdd-4018-94e9-e852c795dee9-c000.parquet, ...\n", "Converting floating-point columns to float32\n", "The median size 569681 B (< 50 MB) of the parquet files is too small. Total size: 4963139 B. Increase the median file size by calling df.repartition(n) or df.coalesce(n), which might help improve the performance. Parquet files: file:///home/avnish/iNeuron_Private_Intelligence_Limited/industry_ready_project/finance_complaint/avnish/testing/20220912131126-appid-local-1662968477222-21f0fb30-d8e9-4071-b9dc-0713df54e1ad/part-00006-a0e719ba-ea91-4d04-a0e0-a4d3f9b1e02e-c000.parquet, ...\n" ] } ], "source": [ "spark_session.conf.set(SparkDatasetConverter.PARENT_CACHE_DIR_URL_CONF, \"file:///home/avnish/iNeuron_Private_Intelligence_Limited/industry_ready_project/finance_complaint/avnish/testing\")\n", "\n", "converter_train = make_spark_converter(train_df)\n", "converter_val = make_spark_converter(test_df)" ] }, { "cell_type": "code", "execution_count": 15, "id": "118c821e-6b69-4ae6-945f-a0400503ff3a", "metadata": { "canvas": { "comments": [], "componentType": "CodeCell", "copiedOriginId": null, "diskcache": false, "headerColor": "none", "id": "fedfe020-b639-4f91-9fce-c139e9ae7bf6", "isComponent": false, "name": "", "parents": [] } }, "outputs": [], "source": [ "BATCH_SIZE=32" ] }, { "cell_type": "code", "execution_count": null, "id": "4c2c9c97-8be4-487c-af5c-2e252a551dd7", "metadata": { "canvas": { "comments": [], "componentType": "CodeCell", "copiedOriginId": null, "diskcache": false, 
"headerColor": "none", "id": "bd2c90af-2d75-4b42-b127-b0eabe02cacd", "isComponent": false, "name": "", "parents": [] } }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "6c43ed70-1293-4414-9d3e-6c69cf6ec420", "metadata": { "canvas": { "comments": [], "componentType": "CodeCell", "copiedOriginId": null, "diskcache": false, "headerColor": "none", "id": "90d97ad8-93ae-4b63-ada3-e981dfc167e8", "isComponent": false, "name": "", "parents": [] } }, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": 16, "id": "6d854ba3-92d3-4e92-940e-105366165550", "metadata": { "canvas": { "comments": [], "componentType": "CodeCell", "copiedOriginId": null, "diskcache": false, "headerColor": "none", "id": "fbccdbcb-ad7b-4886-925e-65a21a6fcf23", "isComponent": false, "name": "", "parents": [] } }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/home/avnish/iNeuron_Private_Intelligence_Limited/industry_ready_project/env/venv/lib/python3.8/site-packages/petastorm/fs_utils.py:88: FutureWarning: pyarrow.localfs is deprecated as of 2.0.0, please use pyarrow.fs.LocalFileSystem instead.\n", " self._filesystem = pyarrow.localfs\n", "/home/avnish/iNeuron_Private_Intelligence_Limited/industry_ready_project/env/venv/lib/python3.8/site-packages/petastorm/etl/dataset_metadata.py:402: FutureWarning: Specifying the 'metadata_nthreads' argument is deprecated as of pyarrow 8.0.0, and the argument will be removed in a future version\n", " dataset = pq.ParquetDataset(path_or_paths, filesystem=fs, validate_schema=False, metadata_nthreads=10)\n", "/home/avnish/iNeuron_Private_Intelligence_Limited/industry_ready_project/env/venv/lib/python3.8/site-packages/petastorm/etl/dataset_metadata.py:362: FutureWarning: 'ParquetDataset.common_metadata' attribute is deprecated as of pyarrow 5.0.0 and will be removed in a future version.\n", " if not dataset.common_metadata:\n", 
"/home/avnish/iNeuron_Private_Intelligence_Limited/industry_ready_project/env/venv/lib/python3.8/site-packages/petastorm/reader.py:418: FutureWarning: Specifying the 'metadata_nthreads' argument is deprecated as of pyarrow 8.0.0, and the argument will be removed in a future version\n", " self.dataset = pq.ParquetDataset(dataset_path, filesystem=pyarrow_filesystem,\n", "/home/avnish/iNeuron_Private_Intelligence_Limited/industry_ready_project/env/venv/lib/python3.8/site-packages/petastorm/unischema.py:317: FutureWarning: 'ParquetDataset.pieces' attribute is deprecated as of pyarrow 5.0.0 and will be removed in a future version. Specify 'use_legacy_dataset=False' while constructing the ParquetDataset, and then use the '.fragments' attribute instead.\n", " meta = parquet_dataset.pieces[0].get_metadata()\n", "/home/avnish/iNeuron_Private_Intelligence_Limited/industry_ready_project/env/venv/lib/python3.8/site-packages/petastorm/unischema.py:321: FutureWarning: 'ParquetDataset.partitions' attribute is deprecated as of pyarrow 5.0.0 and will be removed in a future version. 
Specify 'use_legacy_dataset=False' while constructing the ParquetDataset, and then use the '.partitioning' attribute instead.\n", " for partition in (parquet_dataset.partitions or []):\n", "/home/avnish/iNeuron_Private_Intelligence_Limited/industry_ready_project/env/venv/lib/python3.8/site-packages/petastorm/etl/dataset_metadata.py:253: FutureWarning: 'ParquetDataset.metadata' attribute is deprecated as of pyarrow 5.0.0 and will be removed in a future version.\n", " metadata = dataset.metadata\n", "/home/avnish/iNeuron_Private_Intelligence_Limited/industry_ready_project/env/venv/lib/python3.8/site-packages/petastorm/etl/dataset_metadata.py:254: FutureWarning: 'ParquetDataset.common_metadata' attribute is deprecated as of pyarrow 5.0.0 and will be removed in a future version.\n", " common_metadata = dataset.common_metadata\n", "/home/avnish/iNeuron_Private_Intelligence_Limited/industry_ready_project/env/venv/lib/python3.8/site-packages/petastorm/etl/dataset_metadata.py:350: FutureWarning: 'ParquetDataset.pieces' attribute is deprecated as of pyarrow 5.0.0 and will be removed in a future version. Specify 'use_legacy_dataset=False' while constructing the ParquetDataset, and then use the '.fragments' attribute instead.\n", " futures_list = [thread_pool.submit(_split_piece, piece, dataset.fs.open) for piece in dataset.pieces]\n", "/home/avnish/iNeuron_Private_Intelligence_Limited/industry_ready_project/env/venv/lib/python3.8/site-packages/petastorm/etl/dataset_metadata.py:350: FutureWarning: 'ParquetDataset.fs' attribute is deprecated as of pyarrow 5.0.0 and will be removed in a future version. 
Specify 'use_legacy_dataset=False' while constructing the ParquetDataset, and then use the '.filesystem' attribute instead.\n", " futures_list = [thread_pool.submit(_split_piece, piece, dataset.fs.open) for piece in dataset.pieces]\n", "/home/avnish/iNeuron_Private_Intelligence_Limited/industry_ready_project/env/venv/lib/python3.8/site-packages/petastorm/etl/dataset_metadata.py:334: FutureWarning: ParquetDatasetPiece is deprecated as of pyarrow 5.0.0 and will be removed in a future version.\n", " return [pq.ParquetDatasetPiece(piece.path, open_file_func=fs_open,\n", "/home/avnish/iNeuron_Private_Intelligence_Limited/industry_ready_project/env/venv/lib/python3.8/site-packages/petastorm/arrow_reader_worker.py:140: FutureWarning: 'ParquetDataset.fs' attribute is deprecated as of pyarrow 5.0.0 and will be removed in a future version. Specify 'use_legacy_dataset=False' while constructing the ParquetDataset, and then use the '.filesystem' attribute instead.\n", " parquet_file = ParquetFile(self._dataset.fs.open(piece.path))\n", "/home/avnish/iNeuron_Private_Intelligence_Limited/industry_ready_project/env/venv/lib/python3.8/site-packages/petastorm/arrow_reader_worker.py:288: FutureWarning: 'ParquetDataset.partitions' attribute is deprecated as of pyarrow 5.0.0 and will be removed in a future version. Specify 'use_legacy_dataset=False' while constructing the ParquetDataset, and then use the '.partitioning' attribute instead.\n", " partition_names = self._dataset.partitions.partition_names if self._dataset.partitions else set()\n", "/home/avnish/iNeuron_Private_Intelligence_Limited/industry_ready_project/env/venv/lib/python3.8/site-packages/petastorm/arrow_reader_worker.py:291: FutureWarning: 'ParquetDataset.partitions' attribute is deprecated as of pyarrow 5.0.0 and will be removed in a future version. 
Specify 'use_legacy_dataset=False' while constructing the ParquetDataset, and then use the '.partitioning' attribute instead.\n", " table = piece.read(columns=column_names - partition_names, partitions=self._dataset.partitions)\n" ] } ], "source": [ "train_data=None\n", "with converter_train.make_torch_dataloader(batch_size=BATCH_SIZE) as train_data_loader,\\\n", "converter_val.make_torch_dataloader() as test_data_loader:\n", " train_dataloader_iter = iter(train_data_loader)\n", " steps_per_epoch = len(converter_train) // BATCH_SIZE\n", " train_data = next(train_dataloader_iter)\n", " \n", " " ] }, { "cell_type": "code", "execution_count": 17, "id": "0e9c0fc5-cc37-4c59-b350-1bba92edef41", "metadata": { "canvas": { "comments": [], "componentType": "CodeCell", "copiedOriginId": null, "diskcache": false, "headerColor": "none", "id": "2293d09b-3199-4cc8-828c-4d3109146dcd", "isComponent": false, "name": "", "parents": [] } }, "outputs": [], "source": [ "import torch" ] }, { "cell_type": "code", "execution_count": 18, "id": "1a59509f-8eec-43fb-ba8c-b989fb02101c", "metadata": { "canvas": { "comments": [], "componentType": "CodeCell", "copiedOriginId": null, "diskcache": false, "headerColor": "none", "id": "ae868348-9639-4580-b85c-8e71444de5a5", "isComponent": false, "name": "", "parents": [] } }, "outputs": [], "source": [ "device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")" ] }, { "cell_type": "code", "execution_count": 19, "id": "aba96560-692f-484f-918b-dab97cce6cf4", "metadata": { "canvas": { "comments": [], "componentType": "CodeCell", "copiedOriginId": null, "diskcache": false, "headerColor": "none", "id": "e2076ef0-3d66-439a-b448-7de52af08e1d", "isComponent": false, "name": "", "parents": [] } }, "outputs": [ { "data": { "text/plain": [ "device(type='cuda', index=0)" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "device" ] }, { "cell_type": "code", "execution_count": 21, "id": 
"9c3d2572-9033-4a5f-94b7-d9e14e28e5fb", "metadata": { "canvas": { "comments": [], "componentType": "CodeCell", "copiedOriginId": null, "diskcache": false, "headerColor": "none", "id": "1a59153a-46c5-4326-bcaf-8a8571147781", "isComponent": false, "name": "", "parents": [] } }, "outputs": [], "source": [ "x=train_data['scaled_input_features'].to(device)" ] }, { "cell_type": "code", "execution_count": 23, "id": "7e7d78dc-8108-4709-b983-55a16f457dd8", "metadata": { "canvas": { "comments": [], "componentType": "CodeCell", "copiedOriginId": null, "diskcache": false, "headerColor": "none", "id": "e8bfbc80-4826-4203-98ea-3109a06a72dc", "isComponent": false, "name": "", "parents": [] } }, "outputs": [ { "data": { "text/plain": [ "22" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "x.shape[1]" ] }, { "cell_type": "code", "execution_count": 30, "id": "6da34ab2-c1b4-40c5-8759-b4877c2a7523", "metadata": { "canvas": { "comments": [], "componentType": "CodeCell", "copiedOriginId": null, "diskcache": false, "headerColor": "none", "id": "b58af835-e389-45cb-a629-1ba4a85d0e98", "isComponent": false, "name": "", "parents": [] } }, "outputs": [], "source": [ "a=range(5)" ] }, { "cell_type": "code", "execution_count": 31, "id": "1e20dc9e-00cd-490d-a510-1ede54953c44", "metadata": { "canvas": { "comments": [], "componentType": "CodeCell", "copiedOriginId": null, "diskcache": false, "headerColor": "none", "id": "8adaa9f9-48bf-4c3e-a20b-d0ee0ced7964", "isComponent": false, "name": "", "parents": [] } }, "outputs": [ { "ename": "TypeError", "evalue": "'range' object cannot be interpreted as an integer", "output_type": "error", "traceback": [ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mTypeError\u001b[0m Traceback (most recent call last)", "Input \u001b[0;32mIn [31]\u001b[0m, in \u001b[0;36m<cell line: 1>\u001b[0;34m()\u001b[0m\n\u001b[0;32m----> 1\u001b[0m 
\u001b[38;5;28;01mfor\u001b[39;00m i,j,k \u001b[38;5;129;01min\u001b[39;00m \u001b[38;5;28;43menumerate\u001b[39;49m\u001b[43m(\u001b[49m\u001b[43ma\u001b[49m\u001b[43m,\u001b[49m\u001b[43ma\u001b[49m\u001b[43m)\u001b[49m:\n\u001b[1;32m 2\u001b[0m \u001b[38;5;28mprint\u001b[39m(i,j,k)\n", "\u001b[0;31mTypeError\u001b[0m: 'range' object cannot be interpreted as an integer" ] } ], "source": [ "for i,j,k in enumerate(a,a):\n", " print(i,j,k)" ] }, { "cell_type": "code", "execution_count": null, "id": "439f915c-1999-4e2e-8343-989031080f96", "metadata": { "canvas": { "comments": [], "componentType": "CodeCell", "copiedOriginId": null, "diskcache": false, "headerColor": "none", "id": "3a969f8b-0cc9-4159-8949-f2eadd6e7658", "isComponent": false, "name": "", "parents": [] } }, "outputs": [], "source": [] } ], "metadata": { "canvas": { "colorPalette": [ "inherit", "inherit", "inherit", "inherit", "inherit", "inherit", "inherit", "inherit", "inherit", "inherit" ], "parameters": [], "version": "1.0" }, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.0" } }, "nbformat": 4, "nbformat_minor": 5 }
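The notebook above ends with a cell that raises `TypeError: 'range' object cannot be interpreted as an integer`, because `enumerate`'s second argument is the integer start index, not a second iterable. A minimal sketch of the presumably intended pattern (an index alongside two parallel iterables) combines `enumerate` with `zip`:

```python
a = range(5)

# enumerate(a, a) fails: the second argument must be an int start index.
# To walk an index together with two parallel iterables, zip them first:
pairs = []
for i, (j, k) in enumerate(zip(a, a)):
    pairs.append((i, j, k))

print(pairs)  # [(0, 0, 0), (1, 1, 1), (2, 2, 2), (3, 3, 3), (4, 4, 4)]
```

Here both iterables are the same range, as in the failing cell; with distinct iterables of unequal length, `zip` stops at the shorter one.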
1
langchain
langchain-ai
https://github.com/langchain-ai/langchain/blob/9b24f0b067d9f4a5f3e1f53fe3f7342f79a1f010/docs/extras/modules/model_io/output_parsers/enum.ipynb
https://github.com/langchain-ai/langchain/blob/9b24f0b067d9f4a5f3e1f53fe3f7342f79a1f010/docs/extras/modules/model_io/output_parsers/enum.ipynb#L125
docs/extras/modules/model_io/output_parsers/enum.ipynb
e515f22c581952d6cb0b36104d398722c5186e06e301b448cd42cd5f1c7e987d
{ "cells": [ { "cell_type": "markdown", "id": "0360be02", "metadata": {}, "source": [ "# Enum parser\n", "\n", "This notebook shows how to use an Enum output parser" ] }, { "cell_type": "code", "execution_count": 1, "id": "2f039b4b", "metadata": {}, "outputs": [], "source": [ "from langchain.output_parsers.enum import EnumOutputParser" ] }, { "cell_type": "code", "execution_count": 3, "id": "9a35d1a7", "metadata": {}, "outputs": [], "source": [ "from enum import Enum\n", "\n", "\n", "class Colors(Enum):\n", " RED = \"red\"\n", " GREEN = \"green\"\n", " BLUE = \"blue\"" ] }, { "cell_type": "code", "execution_count": 4, "id": "a90a66f5", "metadata": {}, "outputs": [], "source": [ "parser = EnumOutputParser(enum=Colors)" ] }, { "cell_type": "code", "execution_count": 5, "id": "c48b88cb", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "<Colors.RED: 'red'>" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "parser.parse(\"red\")" ] }, { "cell_type": "code", "execution_count": 6, "id": "7d313e41", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "<Colors.GREEN: 'green'>" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Can handle spaces\n", "parser.parse(\" green\")" ] }, { "cell_type": "code", "execution_count": 7, "id": "976ae42d", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "<Colors.BLUE: 'blue'>" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# And new lines\n", "parser.parse(\"blue\\n\")" ] }, { "cell_type": "code", "execution_count": 8, "id": "636a48ab", "metadata": {}, "outputs": [ { "ename": "OutputParserException", "evalue": "Response 'yellow' is not one of the expected values: ['red', 'green', 'blue']", "output_type": "error", "traceback": [ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mValueError\u001b[0m Traceback (most recent call 
last)", "File \u001b[0;32m~/workplace/langchain/langchain/output_parsers/enum.py:25\u001b[0m, in \u001b[0;36mEnumOutputParser.parse\u001b[0;34m(self, response)\u001b[0m\n\u001b[1;32m 24\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m---> 25\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43menum\u001b[49m\u001b[43m(\u001b[49m\u001b[43mresponse\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mstrip\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 26\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mValueError\u001b[39;00m:\n", "File \u001b[0;32m~/.pyenv/versions/3.9.1/lib/python3.9/enum.py:315\u001b[0m, in \u001b[0;36mEnumMeta.__call__\u001b[0;34m(cls, value, names, module, qualname, type, start)\u001b[0m\n\u001b[1;32m 314\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m names \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m: \u001b[38;5;66;03m# simple value lookup\u001b[39;00m\n\u001b[0;32m--> 315\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mcls\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[38;5;21;43m__new__\u001b[39;49m\u001b[43m(\u001b[49m\u001b[38;5;28;43mcls\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mvalue\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 316\u001b[0m \u001b[38;5;66;03m# otherwise, functional API: we're creating a new Enum type\u001b[39;00m\n", "File \u001b[0;32m~/.pyenv/versions/3.9.1/lib/python3.9/enum.py:611\u001b[0m, in \u001b[0;36mEnum.__new__\u001b[0;34m(cls, value)\u001b[0m\n\u001b[1;32m 610\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m result \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m \u001b[38;5;129;01mand\u001b[39;00m exc \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m:\n\u001b[0;32m--> 611\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m ve_exc\n\u001b[1;32m 612\u001b[0m 
\u001b[38;5;28;01melif\u001b[39;00m exc \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m:\n", "\u001b[0;31mValueError\u001b[0m: 'yellow' is not a valid Colors", "\nDuring handling of the above exception, another exception occurred:\n", "\u001b[0;31mOutputParserException\u001b[0m Traceback (most recent call last)", "Cell \u001b[0;32mIn[8], line 2\u001b[0m\n\u001b[1;32m 1\u001b[0m \u001b[38;5;66;03m# And raises errors when appropriate\u001b[39;00m\n\u001b[0;32m----> 2\u001b[0m \u001b[43mparser\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mparse\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43myellow\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\n", "File \u001b[0;32m~/workplace/langchain/langchain/output_parsers/enum.py:27\u001b[0m, in \u001b[0;36mEnumOutputParser.parse\u001b[0;34m(self, response)\u001b[0m\n\u001b[1;32m 25\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39menum(response\u001b[38;5;241m.\u001b[39mstrip())\n\u001b[1;32m 26\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mValueError\u001b[39;00m:\n\u001b[0;32m---> 27\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m OutputParserException(\n\u001b[1;32m 28\u001b[0m \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mResponse \u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mresponse\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124m is not one of the \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 29\u001b[0m \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mexpected values: \u001b[39m\u001b[38;5;132;01m{\u001b[39;00m\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_valid_values\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 30\u001b[0m )\n", "\u001b[0;31mOutputParserException\u001b[0m: Response 'yellow' is not one of the expected 
values: ['red', 'green', 'blue']" ] } ], "source": [ "# And raises errors when appropriate\n", "parser.parse(\"yellow\")" ] }, { "cell_type": "code", "execution_count": null, "id": "c517f447", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.3" } }, "nbformat": 4, "nbformat_minor": 5 }
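The traceback in the notebook above shows the core of `EnumOutputParser.parse`: strip the response, attempt an `Enum` value lookup, and re-raise a parser-level error listing the valid values. A self-contained sketch of that logic, with `ParseError` and `parse_enum` as hypothetical stand-in names (the real library uses `OutputParserException` and a method on the parser):

```python
from enum import Enum


class Colors(Enum):
    RED = "red"
    GREEN = "green"
    BLUE = "blue"


class ParseError(ValueError):
    """Stand-in for the library's OutputParserException (hypothetical name)."""


def parse_enum(enum_cls, response: str):
    # Mirror of the logic visible in the traceback: strip whitespace,
    # then look the value up in the Enum; on failure, raise a
    # parser-level error that lists the accepted values.
    try:
        return enum_cls(response.strip())
    except ValueError:
        valid = [e.value for e in enum_cls]
        raise ParseError(
            f"Response {response!r} is not one of the expected values: {valid}"
        )


print(parse_enum(Colors, " green"))  # Colors.GREEN
```

Stripping first is what lets the notebook's `" green"` and `"blue\n"` inputs succeed while `"yellow"` still fails loudly.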
2
deep_prediction
sapan-ostic
https://github.com/sapan-ostic/deep_prediction/blob/e4709e4a66477755e6afe39849597ae1e3e969b5/scripts/.ipynb_checkpoints/test_argo-checkpoint.ipynb
https://github.com/sapan-ostic/deep_prediction/blob/e4709e4a66477755e6afe39849597ae1e3e969b5/scripts/.ipynb_checkpoints/test_argo-checkpoint.ipynb#L468
scripts/.ipynb_checkpoints/test_argo-checkpoint.ipynb
7736c22796f980a4998a16ec0eb26d703d829be1d0c2abd660e19494c8dd05aa
{ "cells": [ { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import argparse\n", "import gc\n", "import logging\n", "import os\n", "import sys\n", "import time\n", "\n", "from collections import defaultdict\n", "\n", "import torch\n", "import torch.nn as nn\n", "import torch.optim as optim\n", "\n", "from sgan.data.loader import data_loader\n", "from sgan.losses_argo import gan_g_loss, gan_d_loss, l2_loss\n", "from sgan.losses_argo import displacement_error, final_displacement_error\n", "\n", "from sgan.models_argo import TrajectoryGenerator, TrajectoryDiscriminator\n", "from sgan.utils import int_tuple, bool_flag, get_total_norm\n", "from sgan.utils import relative_to_abs, get_dset_path\n", "\n", "from sgan.data.data import Argoverse_Social_Data, collate_traj_social\n", "from torch.utils.data import Dataset, DataLoader\n", "from torch.utils.tensorboard import SummaryWriter\n", "\n", "import numpy as np\n", "torch.backends.cudnn.benchmark = True\n", "\n", "from matplotlib import pyplot as plt\n", "\n", "from matplotlib import pyplot as plt\n", "from argoverse.map_representation.map_api import ArgoverseMap\n", "from argoverse.data_loading.argoverse_forecasting_loader import ArgoverseForecastingLoader\n", "from argoverse.visualization.visualize_sequences import viz_sequence\n", "avm = ArgoverseMap()" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "def init_weights(m):\n", " classname = m.__class__.__name__\n", " if classname.find('Linear') != -1:\n", " nn.init.kaiming_normal_(m.weight)\n", " \n", "def get_dtypes(use_gpu=True):\n", " long_dtype = torch.LongTensor\n", " float_dtype = torch.FloatTensor\n", " if use_gpu == 1:\n", " long_dtype = torch.cuda.LongTensor\n", " float_dtype = torch.cuda.FloatTensor\n", " return long_dtype, float_dtype " ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "def relativeToAbsolute(traj, R, t): \n", " 
Data Fields

Each row of the dataset contains the following fields:

- id: index of the row
- repo_name: name of the GitHub repository containing the notebook
- repo_owner: owner of that repository
- file_link: link to the notebook file on GitHub
- line_link: link to a specific line of the notebook file on GitHub
- path: path to the notebook within the repository
- content_sha: hash of the notebook content
- content: full content of the notebook as a JSON string
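The content_sha field is a 64-character hex digest of the content field; the sketch below recomputes it for a row under the assumption (suggested by the field's length and name, but not documented) that it is the SHA-256 of the UTF-8 encoded notebook JSON:

```python
import hashlib

def content_sha256(content: str) -> str:
    # Hex digest of the UTF-8 encoded notebook JSON string.
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

# Hypothetical row: in practice, compare the recomputed digest
# against the stored content_sha value.
row = {"content": '{"cells": [], "nbformat": 4, "nbformat_minor": 4}'}
digest = content_sha256(row["content"])
print(len(digest))  # 64 hex characters
```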

Dataset Summary

The dataset contains 10,000 Jupyter notebooks, each of which contains at least one error. In addition to the notebook content, the dataset provides information about the repository where each notebook is stored; this information can help restore the execution environment if needed.

Getting Started

This dataset is organized so that it can be loaded directly via the Hugging Face datasets library. We recommend streaming due to the large size of the files.

import nbformat
from datasets import load_dataset

dataset = load_dataset(
    "JetBrains-Research/jupyter-errors-dataset", split="test", streaming=True
)
row = next(iter(dataset))
notebook = nbformat.reads(row["content"], as_version=nbformat.NO_CONVERT)
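Since every notebook in the dataset contains at least one error, a common first step is to locate the failing cells. The helper below is a minimal sketch (find_error_cells is our own name, not part of the dataset): it scans the parsed notebook JSON for outputs whose output_type is "error", which is how the notebook format stores tracebacks. A tiny hand-made notebook stands in for one row's content field:

```python
import json

def find_error_cells(nb_json: str):
    """Return (cell_index, ename, evalue) for every error output in a notebook."""
    nb = json.loads(nb_json)
    errors = []
    for idx, cell in enumerate(nb.get("cells", [])):
        for out in cell.get("outputs", []):
            if out.get("output_type") == "error":
                errors.append((idx, out.get("ename"), out.get("evalue")))
    return errors

# Tiny hand-made notebook standing in for one row's "content" field.
content = json.dumps({
    "nbformat": 4, "nbformat_minor": 4,
    "cells": [
        {"cell_type": "code", "source": "x = 1", "outputs": []},
        {"cell_type": "code", "source": "1 / 0", "outputs": [
            {"output_type": "error", "ename": "ZeroDivisionError",
             "evalue": "division by zero", "traceback": []},
        ]},
    ],
})

print(find_error_cells(content))  # [(1, 'ZeroDivisionError', 'division by zero')]
```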

Citation

@misc{JupyterErrorsDataset,
  title = {Dataset of Errors in Jupyter Notebooks},
  author = {Konstantin Grotov and Sergey Titov and Yaroslav Zharov and Timofey Bryksin},
  year = {2023},
  publisher = {HuggingFace},
  journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/datasets/JetBrains-Research/jupyter-errors-dataset}},
}