{ "cells": [ { "cell_type": "markdown", "id": "be58b994-bc68-4166-91d5-282418b78864", "metadata": { "id": "be58b994-bc68-4166-91d5-282418b78864" }, "source": [ "# Project: Portfolio - Final Project" ] }, { "cell_type": "markdown", "id": "c5120b06-0abf-49e3-b54d-db8afa9eda01", "metadata": { "id": "c5120b06-0abf-49e3-b54d-db8afa9eda01" }, "source": [ "**Instructions for Students:**\n", "\n", "Please carefully follow these steps to complete and submit your assignment:\n", "\n", "1. **Completing the Assignment**: You are required to work on and complete all tasks in the provided assignment. Be disciplined and ensure that you thoroughly engage with each task.\n", " \n", "2. **Creating a Google Drive Folder**: If you don't previously have a folder for collecting assignments, you must create a new folder in your Google Drive. This will be a repository for all your completed assignment files, helping you keep your work organized and easy to access.\n", " \n", "3. **Uploading Completed Assignment**: Upon completion of your assignment, make sure to upload all necessary files, involving codes, reports, and related documents into the created Google Drive folder. Save this link in the 'Student Identity' section and also provide it as the last parameter in the `submit` function that has been provided.\n", " \n", "4. **Sharing Folder Link**: You're required to share the link to your assignment Google Drive folder. This is crucial for the submission and evaluation of your assignment.\n", " \n", "5. **Setting Permission toPublic**: Please make sure your **Google Drive folder is set to public**. This allows your instructor to access your solutions and assess your work correctly.\n", "\n", "Adhering to these procedures will facilitate a smooth assignment process for you and the reviewers." ] }, { "cell_type": "markdown", "id": "eca56111-19cb-46c0-a77b-11bd18c55673", "metadata": { "id": "eca56111-19cb-46c0-a77b-11bd18c55673" }, "source": [ "**Description:**\n", "\n", "Welcome to your final portfolio project assignment for AI Bootcamp. This is your chance to put all the skills and knowledge you've learned throughout the bootcamp into action by creating real-world AI application.\n", "\n", "You have the freedom to create any application or model, be it text-based or image-based or even voice-based or multimodal.\n", "\n", "To get you started, here are some ideas:\n", "\n", "1. **Sentiment Analysis Application:** Develop an application that can determine sentiment (positive, negative, neutral) from text data like reviews or social media posts. You can use Natural Language Processing (NLP) libraries like NLTK or TextBlob, or more advanced pre-trained models from transformers library by Hugging Face, for your sentiment analysis model.\n", "\n", "2. **Chatbot:** Design a chatbot serving a specific purpose such as customer service for a certain industry, a personal fitness coach, or a study helper. Libraries like ChatterBot or Dialogflow can assist in designing conversational agents.\n", "\n", "3. **Predictive Text Application:** Develop a model that suggests the next word or sentence similar to predictive text on smartphone keyboards. You could use the transformers library by Hugging Face, which includes pre-trained models like GPT-2.\n", "\n", "4. **Image Classification Application:** Create a model to distinguish between different types of flowers or fruits. For this type of image classification task, pre-trained models like ResNet or VGG from PyTorch or TensorFlow can be utilized.\n", "\n", "5. 
**News Article Classifier:** Develop a text classification model that categorizes news articles into predefined categories. NLTK, spaCy, and scikit-learn are valuable libraries for text pre-processing, feature extraction, and building classification models.\n", "\n", "6. **Recommendation System:** Create a simplified recommendation system. For instance, a book or movie recommender based on user preferences. Python's Surprise library can assist in building effective recommendation systems.\n", "\n", "7. **Plant Disease Detection:** Develop a model to identify diseases in plants using leaf images. This project requires a good understanding of convolutional neural networks (CNNs) and image processing. PyTorch, TensorFlow, and OpenCV are all great tools to use.\n", "\n", "8. **Facial Expression Recognition:** Develop a model to classify human facial expressions. This involves complex feature extraction and classification algorithms. You might want to leverage deep learning libraries like TensorFlow or PyTorch, along with OpenCV for processing facial images.\n", "\n", "9. **Chest X-Ray Interpretation:** Develop a model to detect abnormalities in chest X-ray images. This task may require an understanding of specific features in such images. Again, TensorFlow and PyTorch for deep learning, and libraries like scikit-image or PIL for image processing, could be of use.\n", "\n", "10. **Food Classification:** Develop a model to classify a variety of foods, such as local Indonesian food. Pre-trained models like ResNet or VGG from PyTorch or TensorFlow can be a good starting point.\n", "\n", "11. **Traffic Sign Recognition:** Design a model to recognize different traffic signs. This project has real-world applicability in self-driving car technology. Once more, you might utilize PyTorch or TensorFlow for the deep learning aspect, and OpenCV for image processing tasks.\n", "\n", "**Submission:**\n", "\n", "Please upload both your model and application to Hugging Face or your own GitHub account for submission.\n", "\n", "**Presentation:**\n", "\n", "You are required to create a presentation to showcase your project, including the following details:\n", "\n", "- The objective of your model.\n", "- A comprehensive description of your model.\n", "- The specific metrics used to measure your model's effectiveness.\n", "- A brief overview of the dataset used, including its source, pre-processing steps, and any insights.\n", "- An explanation of the methodology used in developing the model.\n", "- A discussion on challenges faced, how they were handled, and what you learned from them.\n", "- Suggestions for potential future improvements to the model.\n", "- A functioning link to a demo of your model in action.\n", "\n", "**Grading:**\n", "\n", "Submissions will be manually graded, with a select few given the opportunity to present their projects in front of a panel of judges. This will provide valuable feedback, further enhancing your project and expanding your knowledge base.\n", "\n", "Remember, consistent practice is the key to mastering these concepts. Apply your knowledge, ask questions when in doubt, and above all, enjoy the process. 
Best of luck to you all!\n" ] }, { "cell_type": "code", "execution_count": 1, "id": "213a611a-c434-4894-ba35-689963ee5274", "metadata": { "id": "213a611a-c434-4894-ba35-689963ee5274" }, "outputs": [], "source": [ "# @title #### Student Identity\n", "student_id = \"REAS0XP1\" # @param {type:\"string\"}\n", "name = \"Mikael Kristiadi\" # @param {type:\"string\"}\n", "drive_link = \"https://drive.google.com/drive/folders/1lNPe5vm0Tntbs6leXDh7LULhTOh2esOI?usp=drive_link\" # @param {type:\"string\"}\n", "assignment_id = \"00_portfolio_project\"" ] }, { "cell_type": "markdown", "id": "2c97aef3-b747-49f7-99e0-4086c03e4200", "metadata": { "id": "2c97aef3-b747-49f7-99e0-4086c03e4200" }, "source": [ "## Installation and Import `rggrader` Package" ] }, { "cell_type": "code", "execution_count": null, "id": "36c07e23-0280-467f-b0d2-44d966253bb4", "metadata": { "id": "36c07e23-0280-467f-b0d2-44d966253bb4" }, "outputs": [], "source": [ "%pip install rggrader\n", "from rggrader import submit_image\n", "from rggrader import submit" ] }, { "cell_type": "markdown", "id": "a4af3420-ff0e-472b-8b44-7a495ddf76c3", "metadata": { "id": "a4af3420-ff0e-472b-8b44-7a495ddf76c3" }, "source": [ "## Working Space" ] }, { "cell_type": "code", "execution_count": 3, "id": "c1fb239a-1c81-4476-9009-d87abadf9506", "metadata": { "id": "c1fb239a-1c81-4476-9009-d87abadf9506" }, "outputs": [], "source": [ "# Write your code here\n", "# Feel free to add new code block as needed\n", "import os\n", "import numpy as np\n", "import torch\n", "import glob\n", "import torch.nn as nn\n", "from torchvision.transforms import transforms\n", "from torch.utils.data import DataLoader\n", "from torch.optim import Adam\n", "from torch.autograd import Variable\n", "import torchvision\n", "import pathlib" ] }, { "cell_type": "code", "source": [ "!pip install split-folders\n", "!pip install datasets torchvision\n", "!pip install matplotlib" ], "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "AxAKZNYhawZT", "outputId": "426267dc-339e-4584-a891-46603863a861" }, "id": "AxAKZNYhawZT", "execution_count": 4, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Collecting split-folders\n", " Downloading split_folders-0.5.1-py3-none-any.whl (8.4 kB)\n", "Installing collected packages: split-folders\n", "Successfully installed split-folders-0.5.1\n", "Collecting datasets\n", " Downloading datasets-2.18.0-py3-none-any.whl (510 kB)\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m510.5/510.5 kB\u001b[0m \u001b[31m6.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[?25hRequirement already satisfied: torchvision in /usr/local/lib/python3.10/dist-packages (0.17.1+cu121)\n", "Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from datasets) (3.13.3)\n", "Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.10/dist-packages (from datasets) (1.25.2)\n", "Requirement already satisfied: pyarrow>=12.0.0 in /usr/local/lib/python3.10/dist-packages (from datasets) (14.0.2)\n", "Requirement already satisfied: pyarrow-hotfix in /usr/local/lib/python3.10/dist-packages (from datasets) (0.6)\n", "Collecting dill<0.3.9,>=0.3.0 (from datasets)\n", " Downloading dill-0.3.8-py3-none-any.whl (116 kB)\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m116.3/116.3 kB\u001b[0m \u001b[31m18.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[?25hRequirement already satisfied: pandas in 
/usr/local/lib/python3.10/dist-packages (from datasets) (1.5.3)\n", "Requirement already satisfied: requests>=2.19.0 in /usr/local/lib/python3.10/dist-packages (from datasets) (2.31.0)\n", "Requirement already satisfied: tqdm>=4.62.1 in /usr/local/lib/python3.10/dist-packages (from datasets) (4.66.2)\n", "Collecting xxhash (from datasets)\n", " Downloading xxhash-3.4.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (194 kB)\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m194.1/194.1 kB\u001b[0m \u001b[31m26.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[?25hCollecting multiprocess (from datasets)\n", " Downloading multiprocess-0.70.16-py310-none-any.whl (134 kB)\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m134.8/134.8 kB\u001b[0m \u001b[31m20.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[?25hRequirement already satisfied: fsspec[http]<=2024.2.0,>=2023.1.0 in /usr/local/lib/python3.10/dist-packages (from datasets) (2023.6.0)\n", "Requirement already satisfied: aiohttp in /usr/local/lib/python3.10/dist-packages (from datasets) (3.9.3)\n", "Requirement already satisfied: huggingface-hub>=0.19.4 in /usr/local/lib/python3.10/dist-packages (from datasets) (0.20.3)\n", "Requirement already satisfied: packaging in /usr/local/lib/python3.10/dist-packages (from datasets) (24.0)\n", "Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.10/dist-packages (from datasets) (6.0.1)\n", "Requirement already satisfied: torch==2.2.1 in /usr/local/lib/python3.10/dist-packages (from torchvision) (2.2.1+cu121)\n", "Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in /usr/local/lib/python3.10/dist-packages (from torchvision) (9.4.0)\n", "Requirement already satisfied: typing-extensions>=4.8.0 in /usr/local/lib/python3.10/dist-packages (from torch==2.2.1->torchvision) (4.10.0)\n", "Requirement already satisfied: sympy in /usr/local/lib/python3.10/dist-packages (from torch==2.2.1->torchvision) (1.12)\n", "Requirement already satisfied: networkx in /usr/local/lib/python3.10/dist-packages (from torch==2.2.1->torchvision) (3.2.1)\n", "Requirement already satisfied: jinja2 in /usr/local/lib/python3.10/dist-packages (from torch==2.2.1->torchvision) (3.1.3)\n", "Collecting nvidia-cuda-nvrtc-cu12==12.1.105 (from torch==2.2.1->torchvision)\n", " Downloading nvidia_cuda_nvrtc_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (23.7 MB)\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m23.7/23.7 MB\u001b[0m \u001b[31m66.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[?25hCollecting nvidia-cuda-runtime-cu12==12.1.105 (from torch==2.2.1->torchvision)\n", " Downloading nvidia_cuda_runtime_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (823 kB)\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m823.6/823.6 kB\u001b[0m \u001b[31m64.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[?25hCollecting nvidia-cuda-cupti-cu12==12.1.105 (from torch==2.2.1->torchvision)\n", " Downloading nvidia_cuda_cupti_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (14.1 MB)\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m14.1/14.1 MB\u001b[0m \u001b[31m88.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[?25hCollecting nvidia-cudnn-cu12==8.9.2.26 (from torch==2.2.1->torchvision)\n", " Downloading nvidia_cudnn_cu12-8.9.2.26-py3-none-manylinux1_x86_64.whl (731.7 MB)\n", "\u001b[2K 
\u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m731.7/731.7 MB\u001b[0m \u001b[31m2.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[?25hCollecting nvidia-cublas-cu12==12.1.3.1 (from torch==2.2.1->torchvision)\n", " Downloading nvidia_cublas_cu12-12.1.3.1-py3-none-manylinux1_x86_64.whl (410.6 MB)\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m410.6/410.6 MB\u001b[0m \u001b[31m1.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[?25hCollecting nvidia-cufft-cu12==11.0.2.54 (from torch==2.2.1->torchvision)\n", " Downloading nvidia_cufft_cu12-11.0.2.54-py3-none-manylinux1_x86_64.whl (121.6 MB)\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m121.6/121.6 MB\u001b[0m \u001b[31m8.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[?25hCollecting nvidia-curand-cu12==10.3.2.106 (from torch==2.2.1->torchvision)\n", " Downloading nvidia_curand_cu12-10.3.2.106-py3-none-manylinux1_x86_64.whl (56.5 MB)\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m56.5/56.5 MB\u001b[0m \u001b[31m10.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[?25hCollecting nvidia-cusolver-cu12==11.4.5.107 (from torch==2.2.1->torchvision)\n", " Downloading nvidia_cusolver_cu12-11.4.5.107-py3-none-manylinux1_x86_64.whl (124.2 MB)\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m124.2/124.2 MB\u001b[0m \u001b[31m8.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[?25hCollecting nvidia-cusparse-cu12==12.1.0.106 (from torch==2.2.1->torchvision)\n", " Downloading nvidia_cusparse_cu12-12.1.0.106-py3-none-manylinux1_x86_64.whl (196.0 MB)\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m196.0/196.0 MB\u001b[0m \u001b[31m3.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[?25hCollecting nvidia-nccl-cu12==2.19.3 (from torch==2.2.1->torchvision)\n", " Downloading nvidia_nccl_cu12-2.19.3-py3-none-manylinux1_x86_64.whl (166.0 MB)\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m166.0/166.0 MB\u001b[0m \u001b[31m7.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[?25hCollecting nvidia-nvtx-cu12==12.1.105 (from torch==2.2.1->torchvision)\n", " Downloading nvidia_nvtx_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (99 kB)\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m99.1/99.1 kB\u001b[0m \u001b[31m14.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[?25hRequirement already satisfied: triton==2.2.0 in /usr/local/lib/python3.10/dist-packages (from torch==2.2.1->torchvision) (2.2.0)\n", "Collecting nvidia-nvjitlink-cu12 (from nvidia-cusolver-cu12==11.4.5.107->torch==2.2.1->torchvision)\n", " Downloading nvidia_nvjitlink_cu12-12.4.99-py3-none-manylinux2014_x86_64.whl (21.1 MB)\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m21.1/21.1 MB\u001b[0m \u001b[31m56.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[?25hRequirement already satisfied: aiosignal>=1.1.2 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets) (1.3.1)\n", "Requirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets) (23.2.0)\n", "Requirement already satisfied: frozenlist>=1.1.1 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets) (1.4.1)\n", "Requirement already satisfied: multidict<7.0,>=4.5 in 
/usr/local/lib/python3.10/dist-packages (from aiohttp->datasets) (6.0.5)\n", "Requirement already satisfied: yarl<2.0,>=1.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets) (1.9.4)\n", "Requirement already satisfied: async-timeout<5.0,>=4.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets) (4.0.3)\n", "Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests>=2.19.0->datasets) (3.3.2)\n", "Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests>=2.19.0->datasets) (3.6)\n", "Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests>=2.19.0->datasets) (2.0.7)\n", "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests>=2.19.0->datasets) (2024.2.2)\n", "Requirement already satisfied: python-dateutil>=2.8.1 in /usr/local/lib/python3.10/dist-packages (from pandas->datasets) (2.8.2)\n", "Requirement already satisfied: pytz>=2020.1 in /usr/local/lib/python3.10/dist-packages (from pandas->datasets) (2023.4)\n", "Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.10/dist-packages (from python-dateutil>=2.8.1->pandas->datasets) (1.16.0)\n", "Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.10/dist-packages (from jinja2->torch==2.2.1->torchvision) (2.1.5)\n", "Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.10/dist-packages (from sympy->torch==2.2.1->torchvision) (1.3.0)\n", "Installing collected packages: xxhash, nvidia-nvtx-cu12, nvidia-nvjitlink-cu12, nvidia-nccl-cu12, nvidia-curand-cu12, nvidia-cufft-cu12, nvidia-cuda-runtime-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-cupti-cu12, nvidia-cublas-cu12, dill, nvidia-cusparse-cu12, nvidia-cudnn-cu12, multiprocess, nvidia-cusolver-cu12, datasets\n", "Successfully installed datasets-2.18.0 dill-0.3.8 multiprocess-0.70.16 nvidia-cublas-cu12-12.1.3.1 nvidia-cuda-cupti-cu12-12.1.105 nvidia-cuda-nvrtc-cu12-12.1.105 nvidia-cuda-runtime-cu12-12.1.105 nvidia-cudnn-cu12-8.9.2.26 nvidia-cufft-cu12-11.0.2.54 nvidia-curand-cu12-10.3.2.106 nvidia-cusolver-cu12-11.4.5.107 nvidia-cusparse-cu12-12.1.0.106 nvidia-nccl-cu12-2.19.3 nvidia-nvjitlink-cu12-12.4.99 nvidia-nvtx-cu12-12.1.105 xxhash-3.4.1\n", "Requirement already satisfied: matplotlib in /usr/local/lib/python3.10/dist-packages (3.7.1)\n", "Requirement already satisfied: contourpy>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib) (1.2.0)\n", "Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.10/dist-packages (from matplotlib) (0.12.1)\n", "Requirement already satisfied: fonttools>=4.22.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib) (4.50.0)\n", "Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib) (1.4.5)\n", "Requirement already satisfied: numpy>=1.20 in /usr/local/lib/python3.10/dist-packages (from matplotlib) (1.25.2)\n", "Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib) (24.0)\n", "Requirement already satisfied: pillow>=6.2.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib) (9.4.0)\n", "Requirement already satisfied: pyparsing>=2.3.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib) (3.1.2)\n", "Requirement already satisfied: python-dateutil>=2.7 in /usr/local/lib/python3.10/dist-packages (from matplotlib) (2.8.2)\n", 
"Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.10/dist-packages (from python-dateutil>=2.7->matplotlib) (1.16.0)\n" ] } ] }, { "cell_type": "code", "source": [ "!mkdir -p ~/.kaggle\n", "!cp kaggle.json ~/.kaggle" ], "metadata": { "id": "wdXZLZCtaxDm" }, "id": "wdXZLZCtaxDm", "execution_count": 8, "outputs": [] }, { "cell_type": "code", "source": [ "!kaggle datasets download -d faldoae/padangfood" ], "metadata": { "id": "EzbWB5F6axIV", "colab": { "base_uri": "https://localhost:8080/" }, "outputId": "3293cebf-59fb-4aae-c9e5-e499ed89052c" }, "id": "EzbWB5F6axIV", "execution_count": 9, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Warning: Your Kaggle API key is readable by other users on this system! To fix this, you can run 'chmod 600 /root/.kaggle/kaggle.json'\n", "Downloading padangfood.zip to /content\n", " 99% 113M/114M [00:07<00:00, 20.4MB/s]\n", "100% 114M/114M [00:07<00:00, 15.8MB/s]\n" ] } ] }, { "cell_type": "code", "source": [ "import zipfile\n", "zip_ref = zipfile.ZipFile('/content/padangfood.zip', 'r')\n", "zip_ref.extractall('/content')\n", "zip_ref.close()" ], "metadata": { "id": "B8VxUeq1azid" }, "id": "B8VxUeq1azid", "execution_count": 10, "outputs": [] }, { "cell_type": "code", "source": [ "import splitfolders\n", "splitfolders.ratio('/content/dataset_padang_food', output=\"output\", seed=1337, ratio=(0.8, 0.2))" ], "metadata": { "id": "FH4ZFVfMazpr", "colab": { "base_uri": "https://localhost:8080/" }, "outputId": "928717a8-4637-4bb0-c1ad-6b6763fe842a" }, "id": "FH4ZFVfMazpr", "execution_count": 11, "outputs": [ { "output_type": "stream", "name": "stderr", "text": [ "Copying files: 993 files [00:00, 2919.10 files/s]\n" ] } ] }, { "cell_type": "markdown", "source": [ "**Transforms Data**" ], "metadata": { "id": "d6A07Oh4aTc4" }, "id": "d6A07Oh4aTc4" }, { "cell_type": "code", "source": [ "train_dataset_path = '/content/output/train'\n", "test_dataset_path = '/content/output/val'" ], "metadata": { "id": "lDPMA1M1a_GL" }, "id": "lDPMA1M1a_GL", "execution_count": 12, "outputs": [] }, { "cell_type": "code", "source": [ "training_transforms = transforms.Compose([transforms.Resize([224,224]), transforms.ToTensor()])" ], "metadata": { "id": "1lEhoF-abxHI" }, "id": "1lEhoF-abxHI", "execution_count": 13, "outputs": [] }, { "cell_type": "code", "source": [ "training_dataset = torchvision.datasets.ImageFolder(root = train_dataset_path, transform = training_transforms)" ], "metadata": { "id": "xWCftoMwbxRI" }, "id": "xWCftoMwbxRI", "execution_count": 14, "outputs": [] }, { "cell_type": "code", "source": [ "training_loader = torch.utils.data.DataLoader(dataset = training_dataset, batch_size = 32, shuffle = False)" ], "metadata": { "id": "Zy4BTyl2bxVb" }, "id": "Zy4BTyl2bxVb", "execution_count": 15, "outputs": [] }, { "cell_type": "code", "source": [ "def get_mean_stdev(loader):\n", " mean = 0\n", " std = 0\n", " total_images_count = 0\n", " for images, _ in loader:\n", " image_count_in_a_batch = images.size(0)\n", " images = images.view(image_count_in_a_batch, images.size(1), (-1))\n", " mean += images.mean(2).sum(0)\n", " std += images.std(2).sum(0)\n", " total_images_count += image_count_in_a_batch\n", "\n", " mean /= total_images_count\n", " std /= total_images_count\n", "\n", " return mean, std" ], "metadata": { "id": "9XagRUxfb3os" }, "id": "9XagRUxfb3os", "execution_count": 16, "outputs": [] }, { "cell_type": "code", "source": [ "get_mean_stdev(training_loader)" ], "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": 
"XPOcXMWBb5mW", "outputId": "fb6f9b6f-4aa2-4a28-9220-0574d8036ce8" }, "id": "XPOcXMWBb5mW", "execution_count": 17, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "(tensor([0.6207, 0.4703, 0.3137]), tensor([0.2067, 0.2334, 0.2579]))" ] }, "metadata": {}, "execution_count": 17 } ] }, { "cell_type": "code", "source": [ "mean = [0.6207, 0.4703, 0.3137]\n", "std = [0.2067, 0.2334, 0.2579]\n", "\n", "train_transforms = transforms.Compose([\n", " transforms.Resize((150,150)),\n", " transforms.RandomHorizontalFlip(),\n", " transforms.RandomRotation(10),\n", " transforms.ToTensor(),\n", " transforms.Normalize(torch.Tensor(mean), torch.Tensor(std))\n", "])\n", "\n", "test_transforms = transforms.Compose([\n", " transforms.Resize((150,150)),\n", " transforms.ToTensor(),\n", " transforms.Normalize(torch.Tensor(mean), torch.Tensor(std))\n", "])" ], "metadata": { "id": "RroaerlvaSu1" }, "id": "RroaerlvaSu1", "execution_count": 18, "outputs": [] }, { "cell_type": "markdown", "source": [ "**Load Dataset**" ], "metadata": { "id": "c4qDU8iHbAXw" }, "id": "c4qDU8iHbAXw" }, { "cell_type": "code", "source": [ "train_loader = DataLoader(\n", " torchvision.datasets.ImageFolder(train_dataset_path, transform=train_transforms),\n", " batch_size=256, shuffle=True\n", ")\n", "\n", "test_loader = DataLoader(\n", " torchvision.datasets.ImageFolder(test_dataset_path, transform=test_transforms),\n", " batch_size=256, shuffle=False\n", ")" ], "metadata": { "id": "pOKLO5qlbNVH" }, "id": "pOKLO5qlbNVH", "execution_count": 19, "outputs": [] }, { "cell_type": "markdown", "source": [ "**Categories**" ], "metadata": { "id": "3Zx-zmAqcCyY" }, "id": "3Zx-zmAqcCyY" }, { "cell_type": "code", "source": [ "root = pathlib.Path(train_dataset_path)\n", "classes = sorted([j.name.split('/')[-1] for j in root.iterdir()])" ], "metadata": { "id": "Himox9shcHHx" }, "id": "Himox9shcHHx", "execution_count": 20, "outputs": [] }, { "cell_type": "code", "source": [ "print(classes)" ], "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "v-B0ajyJcbOg", "outputId": "71c012e0-3a63-434b-861e-f0639ea32139" }, "id": "v-B0ajyJcbOg", "execution_count": 21, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "['ayam_goreng', 'ayam_pop', 'daging_rendang', 'dendeng_batokok', 'gulai_ikan', 'gulai_tambusu', 'gulai_tunjang', 'telur_balado', 'telur_dadar']\n" ] } ] }, { "cell_type": "markdown", "source": [ "**Model Define**" ], "metadata": { "id": "VRj7ZHVTdTnZ" }, "id": "VRj7ZHVTdTnZ" }, { "cell_type": "code", "source": [ "class ConvNet(nn.Module):\n", " def __init__(self, num_classes=9):\n", " super(ConvNet, self).__init__()\n", "\n", " self.conv1 = nn.Conv2d(in_channels=3, out_channels=12, kernel_size=3, stride=1, padding=1)\n", " self.bn1 = nn.BatchNorm2d(num_features=12)\n", " self.relu1 = nn.ReLU()\n", " self.pool = nn.MaxPool2d(kernel_size=2)\n", "\n", " self.conv2 =nn.Conv2d(in_channels=12, out_channels=20, kernel_size=3, stride=1, padding=1)\n", " self.relu2 = nn.ReLU()\n", "\n", " self.conv3 =nn.Conv2d(in_channels=20, out_channels=32, kernel_size=3, stride=1, padding=1)\n", " self.bn3 = nn.BatchNorm2d(num_features=32)\n", " self.relu3 = nn.ReLU()\n", "\n", " self.fc = nn.Linear(in_features=32*75*75, out_features=num_classes)\n", "\n", " #Feed forward\n", " def forward(self, input):\n", " output = self.conv1(input)\n", " output = self.bn1(output)\n", " output = self.relu1(output)\n", "\n", " output = self.pool(output)\n", "\n", " output = self.conv2(output)\n", " output = 
self.relu2(output)\n", "\n", " output = self.conv3(output)\n", " output = self.bn3(output)\n", " output = self.relu3(output)\n", "\n", " output = output.view(-1, 32*75*75)\n", "\n", " output = self.fc(output)\n", "\n", " return(output)" ], "metadata": { "id": "boa_zMGedWqG" }, "id": "boa_zMGedWqG", "execution_count": 22, "outputs": [] }, { "cell_type": "code", "source": [ "model = ConvNet(num_classes=9)" ], "metadata": { "id": "PkzRT0YUgJfC" }, "id": "PkzRT0YUgJfC", "execution_count": 23, "outputs": [] }, { "cell_type": "markdown", "source": [ "**Optimizer**" ], "metadata": { "id": "B7C09jmQhCJ2" }, "id": "B7C09jmQhCJ2" }, { "cell_type": "code", "source": [ "optimizer=Adam(model.parameters(), lr=0.001, weight_decay=0.0001)\n", "loss_function = nn.CrossEntropyLoss()" ], "metadata": { "id": "6lhDnZRmhGku" }, "id": "6lhDnZRmhGku", "execution_count": 24, "outputs": [] }, { "cell_type": "markdown", "source": [ "**Define Train and Test Size**" ], "metadata": { "id": "g4u6SfwXkKQo" }, "id": "g4u6SfwXkKQo" }, { "cell_type": "code", "source": [ "train_count = len(glob.glob(train_dataset_path+'/**/*.jpg'))\n", "test_count = len(glob.glob(test_dataset_path+'/**/*.jpg'))" ], "metadata": { "id": "HZ0kLh7LkI6Q" }, "id": "HZ0kLh7LkI6Q", "execution_count": 25, "outputs": [] }, { "cell_type": "code", "source": [ "print(train_count, test_count)" ], "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "fWwfT6HtkJDu", "outputId": "d4df8dc4-f361-4ee4-af71-e98bae2e596f" }, "id": "fWwfT6HtkJDu", "execution_count": 26, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "767 199\n" ] } ] }, { "cell_type": "markdown", "source": [ "**Model Training**" ], "metadata": { "id": "ln5pa6sShklq" }, "id": "ln5pa6sShklq" }, { "cell_type": "code", "source": [ "num_epoch = 20\n", "def train_nn(model, train_loader, test_loader, loss_function, optimizer, num_epoch):\n", " best_acc = 0\n", "\n", " for epoch in range(num_epoch):\n", " print('Epoch number %d ' % (epoch + 1))\n", " model.train()\n", " running_loss = 0.0\n", " running_correct = 0.0\n", " total = 0\n", "\n", " for i, (images, labels) in enumerate(train_loader):\n", " total += labels.size(0)\n", "\n", " optimizer.zero_grad()\n", "\n", " outputs = model(images)\n", "\n", " _, predicted = torch.max(outputs.data, 1)\n", "\n", " loss = loss_function(outputs, labels)\n", " loss.backward()\n", "\n", " optimizer.step()\n", "\n", " running_loss += loss.item()\n", " running_correct += (labels==predicted).sum().item()\n", "\n", " epoch_loss = running_loss/len(train_loader)\n", " epoch_acc = 100.00 * running_correct / total\n", "\n", " print(' -Training dataset. Got %d out of %d images correctly (%.3f%%). 
Epoch loss: %.3f'\n", " % (running_correct, total, epoch_acc, epoch_loss))\n", "\n", " test_dataset_acc = evaluate_model_on_test_set(model, test_loader)\n", "\n", " if(test_dataset_acc > best_acc):\n", " best_acc = test_dataset_acc\n", " save_checkpoint(model, epoch, optimizer, best_acc)\n", "\n", " print('Finished')\n", " return" ], "metadata": { "id": "PewDbv35u63v" }, "id": "PewDbv35u63v", "execution_count": 32, "outputs": [] }, { "cell_type": "code", "source": [ "def evaluate_model_on_test_set(model, test_loader):\n", " model.eval()\n", " predicted_correctly_on_epoch = 0\n", " total = 0\n", "\n", " with torch.no_grad():\n", " for i, (images, labels) in enumerate(test_loader):\n", " total += labels.size(0)\n", "\n", " outputs = model(images)\n", "\n", " _, predicted = torch.max(outputs.data, 1)\n", "\n", " predicted_correctly_on_epoch += (predicted == labels).sum().item()\n", "\n", " epoch_acc = 100.00 * predicted_correctly_on_epoch / total\n", " print(' -Testing dataset. Got %d out of %d images correctly (%.3f%%)'\n", " % (predicted_correctly_on_epoch, total, epoch_acc))\n", "\n", " return epoch_acc" ], "metadata": { "id": "GajjbS2gvCqV" }, "id": "GajjbS2gvCqV", "execution_count": 33, "outputs": [] }, { "cell_type": "code", "source": [ "def save_checkpoint(model, epoch, optimizer, best_acc):\n", " state = {\n", " 'epoch': epoch + 1,\n", " 'model': model.state_dict(),\n", " 'best_accuracy': best_acc,\n", " 'optimizer': optimizer.state_dict()\n", " }\n", " torch.save(state, 'best_model_checkpoint.pth.tar')" ], "metadata": { "id": "s7ndFZcSvH5_" }, "id": "s7ndFZcSvH5_", "execution_count": 34, "outputs": [] }, { "cell_type": "code", "source": [ "import torchvision.models as models\n", "import torch.nn as nn\n", "import torch.optim as optim\n", "\n", "model = ConvNet(num_classes=9)\n", "loss_function = nn.CrossEntropyLoss()\n", "\n", "optimizer=Adam(model.parameters(), lr=0.001, weight_decay=0.0001)" ], "metadata": { "id": "myhOMJlDvJlM" }, "id": "myhOMJlDvJlM", "execution_count": 35, "outputs": [] }, { "cell_type": "code", "source": [ "train_nn(model, train_loader, test_loader, loss_function, optimizer, 30)" ], "metadata": { "id": "ZQ74B3RKvMKA", "colab": { "base_uri": "https://localhost:8080/" }, "outputId": "86c68c32-cf85-4e5b-b08e-d9e46ecfba72" }, "id": "ZQ74B3RKvMKA", "execution_count": 52, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Epoch number 1 \n", " -Training dataset. Got 600 out of 790 images correctly (75.949%). Epoch loss: 1.600\n", " -Testing dataset. Got 133 out of 203 images correctly (65.517%)\n", "Epoch number 2 \n", " -Training dataset. Got 601 out of 790 images correctly (76.076%). Epoch loss: 1.671\n", " -Testing dataset. Got 131 out of 203 images correctly (64.532%)\n", "Epoch number 3 \n", " -Training dataset. Got 601 out of 790 images correctly (76.076%). Epoch loss: 1.908\n", " -Testing dataset. Got 130 out of 203 images correctly (64.039%)\n", "Epoch number 4 \n", " -Training dataset. Got 600 out of 790 images correctly (75.949%). Epoch loss: 2.209\n", " -Testing dataset. Got 130 out of 203 images correctly (64.039%)\n", "Epoch number 5 \n", " -Training dataset. Got 590 out of 790 images correctly (74.684%). Epoch loss: 1.751\n", " -Testing dataset. Got 131 out of 203 images correctly (64.532%)\n", "Epoch number 6 \n", " -Training dataset. Got 600 out of 790 images correctly (75.949%). Epoch loss: 1.693\n", " -Testing dataset. Got 133 out of 203 images correctly (65.517%)\n", "Epoch number 7 \n", " -Training dataset. 
Got 611 out of 790 images correctly (77.342%). Epoch loss: 1.389\n", " -Testing dataset. Got 133 out of 203 images correctly (65.517%)\n", "Epoch number 8 \n", " -Training dataset. Got 598 out of 790 images correctly (75.696%). Epoch loss: 1.974\n", " -Testing dataset. Got 133 out of 203 images correctly (65.517%)\n", "Epoch number 9 \n", " -Training dataset. Got 611 out of 790 images correctly (77.342%). Epoch loss: 1.571\n", " -Testing dataset. Got 134 out of 203 images correctly (66.010%)\n", "Epoch number 10 \n", " -Training dataset. Got 597 out of 790 images correctly (75.570%). Epoch loss: 2.105\n", " -Testing dataset. Got 133 out of 203 images correctly (65.517%)\n", "Epoch number 11 \n", " -Training dataset. Got 610 out of 790 images correctly (77.215%). Epoch loss: 1.445\n", " -Testing dataset. Got 132 out of 203 images correctly (65.025%)\n", "Epoch number 12 \n", " -Training dataset. Got 603 out of 790 images correctly (76.329%). Epoch loss: 1.674\n", " -Testing dataset. Got 133 out of 203 images correctly (65.517%)\n", "Epoch number 13 \n", " -Training dataset. Got 601 out of 790 images correctly (76.076%). Epoch loss: 2.220\n", " -Testing dataset. Got 133 out of 203 images correctly (65.517%)\n", "Epoch number 14 \n", " -Training dataset. Got 616 out of 790 images correctly (77.975%). Epoch loss: 2.020\n", " -Testing dataset. Got 133 out of 203 images correctly (65.517%)\n", "Epoch number 15 \n", " -Training dataset. Got 598 out of 790 images correctly (75.696%). Epoch loss: 1.645\n", " -Testing dataset. Got 132 out of 203 images correctly (65.025%)\n", "Epoch number 16 \n", " -Training dataset. Got 607 out of 790 images correctly (76.835%). Epoch loss: 1.771\n", " -Testing dataset. Got 132 out of 203 images correctly (65.025%)\n", "Epoch number 17 \n", " -Training dataset. Got 584 out of 790 images correctly (73.924%). Epoch loss: 1.752\n", " -Testing dataset. Got 132 out of 203 images correctly (65.025%)\n", "Epoch number 18 \n", " -Training dataset. Got 604 out of 790 images correctly (76.456%). Epoch loss: 1.949\n", " -Testing dataset. Got 133 out of 203 images correctly (65.517%)\n", "Epoch number 19 \n", " -Training dataset. Got 592 out of 790 images correctly (74.937%). Epoch loss: 1.743\n", " -Testing dataset. Got 133 out of 203 images correctly (65.517%)\n", "Epoch number 20 \n", " -Training dataset. Got 614 out of 790 images correctly (77.722%). Epoch loss: 1.363\n", " -Testing dataset. Got 132 out of 203 images correctly (65.025%)\n", "Epoch number 21 \n", " -Training dataset. Got 610 out of 790 images correctly (77.215%). Epoch loss: 1.469\n", " -Testing dataset. Got 132 out of 203 images correctly (65.025%)\n", "Epoch number 22 \n", " -Training dataset. Got 620 out of 790 images correctly (78.481%). Epoch loss: 2.037\n", " -Testing dataset. Got 132 out of 203 images correctly (65.025%)\n", "Epoch number 23 \n", " -Training dataset. Got 585 out of 790 images correctly (74.051%). Epoch loss: 1.970\n", " -Testing dataset. Got 132 out of 203 images correctly (65.025%)\n", "Epoch number 24 \n", " -Training dataset. Got 602 out of 790 images correctly (76.203%). Epoch loss: 2.068\n", " -Testing dataset. Got 133 out of 203 images correctly (65.517%)\n", "Epoch number 25 \n", " -Training dataset. Got 602 out of 790 images correctly (76.203%). Epoch loss: 2.027\n", " -Testing dataset. Got 132 out of 203 images correctly (65.025%)\n", "Epoch number 26 \n", " -Training dataset. Got 604 out of 790 images correctly (76.456%). Epoch loss: 1.635\n", " -Testing dataset. 
Got 132 out of 203 images correctly (65.025%)\n", "Epoch number 27 \n", " -Training dataset. Got 579 out of 790 images correctly (73.291%). Epoch loss: 1.606\n", " -Testing dataset. Got 133 out of 203 images correctly (65.517%)\n", "Epoch number 28 \n", " -Training dataset. Got 612 out of 790 images correctly (77.468%). Epoch loss: 1.624\n", " -Testing dataset. Got 133 out of 203 images correctly (65.517%)\n", "Epoch number 29 \n", " -Training dataset. Got 615 out of 790 images correctly (77.848%). Epoch loss: 1.672\n", " -Testing dataset. Got 133 out of 203 images correctly (65.517%)\n", "Epoch number 30 \n", " -Training dataset. Got 611 out of 790 images correctly (77.342%). Epoch loss: 1.480\n", " -Testing dataset. Got 132 out of 203 images correctly (65.025%)\n", "Finished\n" ] } ] }, { "cell_type": "markdown", "source": [ "**Saving Model and Checkpoint**" ], "metadata": { "id": "_fQxYHJm2HfS" }, "id": "_fQxYHJm2HfS" }, { "cell_type": "code", "source": [ "checkpoint = torch.load('/content/best_model_checkpoint.pth.tar')" ], "metadata": { "id": "ZvJj5dMutjoq" }, "id": "ZvJj5dMutjoq", "execution_count": 53, "outputs": [] }, { "cell_type": "code", "source": [ "model = ConvNet(num_classes=9)\n", "model.load_state_dict(checkpoint['model'])\n", "\n", "torch.save(model, 'best_model.pth')" ], "metadata": { "id": "ODnmzziGtj5H" }, "id": "ODnmzziGtj5H", "execution_count": 54, "outputs": [] }, { "cell_type": "markdown", "source": [ "**Testing**" ], "metadata": { "id": "JI1wwJOG21EJ" }, "id": "JI1wwJOG21EJ" }, { "cell_type": "code", "source": [ "root = pathlib.Path(train_dataset_path)\n", "classes = sorted([j.name.split('/')[-1] for j in root.iterdir()])\n", "print(classes)" ], "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "3BuaF0tv23Ap", "outputId": "99e6ca8c-5639-4455-dd45-ffc2cb4f83c5" }, "id": "3BuaF0tv23Ap", "execution_count": 55, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "['ayam_goreng', 'ayam_pop', 'daging_rendang', 'dendeng_batokok', 'gulai_ikan', 'gulai_tambusu', 'gulai_tunjang', 'telur_balado', 'telur_dadar']\n" ] } ] }, { "cell_type": "code", "source": [ "Image_transforms = transforms.Compose([\n", " transforms.Resize((150,150)),\n", " transforms.ToTensor(),\n", " transforms.Normalize(torch.Tensor(mean), torch.Tensor(std))\n", "])" ], "metadata": { "id": "wOzNyghD2_se" }, "id": "wOzNyghD2_se", "execution_count": 56, "outputs": [] }, { "cell_type": "code", "source": [ "import PIL.Image as Image\n", "def classify(model, Image_transforms, Image_path, classes):\n", " # Load the image and apply the image transforms\n", " image = Image.open(Image_path)\n", " image = Image_transforms(image)\n", "\n", " # Add a batch dimension to the image tensor\n", " image = image.unsqueeze(0)\n", "\n", " # Make a prediction using the model\n", " output = model(image)\n", "\n", " # Get the predicted class index\n", " _, predicted = torch.max(output.data, 1)\n", "\n", " # Print the predicted class index and the corresponding class label\n", " for index, class_label in enumerate(classes):\n", " if index == predicted.item():\n", " print(f'Predicted class index: {index}, Class: {class_label}')\n", " break\n", " else:\n", " print(f\"Predicted class index {predicted.item()} is out of range for the classes list.\")" ], "metadata": { "id": "ddLKZ1A12_0q" }, "id": "ddLKZ1A12_0q", "execution_count": 57, "outputs": [] }, { "cell_type": "code", "source": [ "classify(model, Image_transforms, '/content/ayam pop.png', classes)" ], "metadata": { "colab": { "base_uri": 
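{ "cell_type": "markdown", "source": [ "**Per-Class Accuracy (added sketch)**\n", "\n", "A minimal sketch, not part of the original run: assuming the `model`, `test_loader`, and `classes` objects defined above, it reports accuracy for each food category on the validation split, which could feed the metrics section of the presentation." ], "metadata": {} }, { "cell_type": "code", "source": [ "# Minimal sketch (assumes `model`, `test_loader`, and `classes` from the cells above).\n", "# Reports per-class accuracy on the held-out split.\n", "model.eval()\n", "class_correct = [0] * len(classes)\n", "class_total = [0] * len(classes)\n", "\n", "with torch.no_grad():\n", "    for images, labels in test_loader:\n", "        outputs = model(images)\n", "        _, predicted = torch.max(outputs, 1)\n", "        for label, pred in zip(labels, predicted):\n", "            class_total[label.item()] += 1\n", "            class_correct[label.item()] += int(label.item() == pred.item())\n", "\n", "for name, correct, total in zip(classes, class_correct, class_total):\n", "    if total > 0:\n", "        print(f'{name}: {correct}/{total} ({100.0 * correct / total:.1f}%)')" ], "metadata": {}, "execution_count": null, "outputs": [] },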
"https://localhost:8080/" }, "id": "EVHt1ZoX3JKj", "outputId": "b3f63a52-875d-432a-c231-f0e7d5f27917" }, "id": "EVHt1ZoX3JKj", "execution_count": 58, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Predicted class index: 1, Class: ayam_pop\n" ] } ] }, { "cell_type": "markdown", "id": "2b151c52-20a3-432f-ab16-4721c16581c4", "metadata": { "id": "2b151c52-20a3-432f-ab16-4721c16581c4" }, "source": [ "## Submit Notebook" ] }, { "cell_type": "code", "execution_count": null, "id": "ced6b581-708f-4758-86ff-3cd51bf14f99", "metadata": { "id": "ced6b581-708f-4758-86ff-3cd51bf14f99" }, "outputs": [], "source": [ "portfolio_link = \"\"\n", "presentation_link = \"https://www.canva.com/design/DAGAEAMyzxU/Nley3agoNuaG1DNuZNvGHQ/edit?utm_content=DAGAEAMyzxU&utm_campaign=designshare&utm_medium=link2&utm_source=sharebutton\"\n", "\n", "question_id = \"01_portfolio_link\"\n", "submit(student_id, name, assignment_id, str(portfolio_link), question_id, drive_link)\n", "\n", "question_id = \"02_presentation_link\"\n", "submit(student_id, name, assignment_id, str(presentation_link), question_id, drive_link)" ] }, { "cell_type": "markdown", "id": "792aa177-c74e-42e5-9881-40376cd746a8", "metadata": { "id": "792aa177-c74e-42e5-9881-40376cd746a8" }, "source": [ "# FIN" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.3" }, "colab": { "provenance": [], "gpuType": "T4" }, "accelerator": "GPU" }, "nbformat": 4, "nbformat_minor": 5 }