{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Title: #Web Crawler"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Difficulty: #Medium"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Category Title: #Algorithms"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Tag Slug: #depth-first-search #breadth-first-search #string #interactive"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Name Translated: #Depth-First Search #Breadth-First Search #String #Interactive"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Solution Name: crawl"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Translated Title: #Web Crawler"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Translated Content:\n",
    "<p>Given a URL <code>startUrl</code> and an interface <code>HtmlParser</code>, implement a web crawler that crawls all links under the same <strong>hostname</strong> as <code>startUrl</code>. The crawler may return the links it finds in <strong>any order</strong>.</p>\n",
    "\n",
    "<p>Your crawler should work as follows:</p>\n",
    "\n",
    "<ul>\n",
    "\t<li>Start from the page <code>startUrl</code></li>\n",
    "\t<li>Call <code>HtmlParser.getUrls(url)</code> to get all the URLs on the page pointed to by <code>url</code></li>\n",
    "\t<li>Do not crawl the same link twice</li>\n",
    "\t<li>Output only the links whose <strong>hostname</strong> is the <strong>same</strong> as that of <code>startUrl</code></li>\n",
    "</ul>\n",
    "\n",
    "<p><img alt=\"\" src=\"https://assets.leetcode.com/uploads/2019/08/13/urlhostname.png\" style=\"height: 164px; width: 600px;\" /></p>\n",
    "\n",
    "<p>For the URL shown above, the hostname is <code>example.org</code>. For simplicity, you may assume all URLs use the <strong>http protocol</strong> without any specified <strong>port</strong>. For example, the URLs <code>http://leetcode.com/problems</code> and <code>http://leetcode.com/contest</code> are under the same hostname, while <code>http://example.org/test</code> and <code>http://example.com/abc</code> are not.</p>\n",
    "\n",
    "<p>The <code>HtmlParser</code> interface is defined as follows:</p>\n",
    "\n",
    "<pre>\n",
    "interface HtmlParser {\n",
    "  // Return all URLs on the page for the given url.\n",
    "  public List&lt;String&gt; getUrls(String url);\n",
    "}</pre>\n",
    "\n",
    "<p>Below are two examples explaining how the problem is designed. For custom testing there are three variables, <code>urls</code>, <code>edges</code> and <code>startUrl</code>. Note that in your code you may only access <code>startUrl</code>; <code>urls</code> and <code>edges</code> are not directly accessible.</p>\n",
    "\n",
    "<p>&nbsp;</p>\n",
    "\n",
    "<p><strong>Example 1:</strong></p>\n",
    "\n",
    "<p><img alt=\"\" src=\"https://assets.leetcode.com/uploads/2019/10/23/sample_2_1497.png\" style=\"height: 300px; width: 610px;\" /></p>\n",
    "\n",
    "<pre>\n",
    "<strong>Input:</strong>\n",
    "urls = [\n",
    "&nbsp; \"http://news.yahoo.com\",\n",
    "&nbsp; \"http://news.yahoo.com/news\",\n",
    "&nbsp; \"http://news.yahoo.com/news/topics/\",\n",
    "&nbsp; \"http://news.google.com\",\n",
    "&nbsp; \"http://news.yahoo.com/us\"\n",
    "]\n",
    "edges = [[2,0],[2,1],[3,2],[3,1],[0,4]]\n",
    "startUrl = \"http://news.yahoo.com/news/topics/\"\n",
    "<strong>Output:</strong> [\n",
    "&nbsp; \"http://news.yahoo.com\",\n",
    "&nbsp; \"http://news.yahoo.com/news\",\n",
    "&nbsp; \"http://news.yahoo.com/news/topics/\",\n",
    "&nbsp; \"http://news.yahoo.com/us\"\n",
    "]\n",
    "</pre>\n",
    "\n",
    "<p><strong>Example 2:</strong></p>\n",
    "\n",
    "<p><strong><img alt=\"\" src=\"https://assets.leetcode.com/uploads/2019/10/23/sample_3_1497.png\" style=\"height: 270px; width: 540px;\" /></strong></p>\n",
    "\n",
    "<pre>\n",
    "<strong>Input:</strong>\n",
    "urls = [\n",
    "&nbsp; \"http://news.yahoo.com\",\n",
    "&nbsp; \"http://news.yahoo.com/news\",\n",
    "&nbsp; \"http://news.yahoo.com/news/topics/\",\n",
    "&nbsp; \"http://news.google.com\"\n",
    "]\n",
    "edges = [[0,2],[2,1],[3,2],[3,1],[3,0]]\n",
    "startUrl = \"http://news.google.com\"\n",
    "<strong>Output:</strong> [\"http://news.google.com\"]\n",
    "<strong>Explanation:</strong> startUrl links to all the other pages, none of which share its hostname.</pre>\n",
    "\n",
    "<p>&nbsp;</p>\n",
    "\n",
    "<p><strong>Constraints:</strong></p>\n",
    "\n",
    "<ul>\n",
    "\t<li><code>1 &lt;= urls.length &lt;= 1000</code></li>\n",
    "\t<li><code>1 &lt;= urls[i].length &lt;= 300</code></li>\n",
    "\t<li><code>startUrl</code> is one of the URLs in <code>urls</code>.</li>\n",
    "\t<li>A hostname label is 1 to 63 characters long (including the dots) and may contain only the ASCII letters 'a' through 'z', the digits '0' through '9', and the hyphen-minus character ('-').</li>\n",
    "\t<li>A hostname label does not start or end with the hyphen-minus character ('-').</li>\n",
    "\t<li>For the restrictions on valid hostnames, see: <a href=\"https://en.wikipedia.org/wiki/Hostname#Restrictions_on_valid_hostnames\">https://en.wikipedia.org/wiki/Hostname#Restrictions_on_valid_hostnames</a></li>\n",
    "\t<li>You may assume the URL library contains no duplicates.</li>\n",
    "</ul>\n"
   ]
  },
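  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch (an addition, not part of the original solutions) of the hostname check the description relies on. It assumes, per the constraints, that every URL starts with `http://` and carries no port; the stdlib `urllib.parse` is shown alongside the plain string split that the solutions below use."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from urllib.parse import urlparse\n",
    "\n",
    "def hostname_by_split(url: str) -> str:\n",
    "    # assumes the 'http://' scheme guaranteed by the constraints\n",
    "    return url.split('//', 1)[1].split('/', 1)[0]\n",
    "\n",
    "def hostname_by_urlparse(url: str) -> str:\n",
    "    # the stdlib parser handles the same case without manual splitting\n",
    "    return urlparse(url).netloc\n",
    "\n",
    "assert hostname_by_split('http://leetcode.com/problems') == 'leetcode.com'\n",
    "assert hostname_by_urlparse('http://example.org/test') == 'example.org'\n",
    "assert hostname_by_split('http://news.yahoo.com') == 'news.yahoo.com'\n"
   ]
  },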
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Description: [web-crawler](https://leetcode.cn/problems/web-crawler/description/)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Solutions: [web-crawler](https://leetcode.cn/problems/web-crawler/solutions/)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "test_cases = ['[\"http://news.yahoo.com\",\"http://news.yahoo.com/news\",\"http://news.yahoo.com/news/topics/\",\"http://news.google.com\",\"http://news.yahoo.com/us\"]\\n[[2,0],[2,1],[3,2],[3,1],[0,4]]\\n\"http://news.yahoo.com/news/topics/\"', '[\"http://news.yahoo.com\",\"http://news.yahoo.com/news\",\"http://news.yahoo.com/news/topics/\",\"http://news.google.com\"]\\n[[0,2],[2,1],[3,2],[3,1],[3,0]]\\n\"http://news.google.com\"']"
   ]
  },
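  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before the multithreaded variants, a minimal single-threaded BFS sketch (an addition, not one of the collected solutions). The `MockHtmlParser` class and its construction from `urls`/`edges` are assumptions for local testing; on the judge, the real `HtmlParser` is supplied."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from collections import deque\n",
    "from typing import List\n",
    "\n",
    "class MockHtmlParser:\n",
    "    # hypothetical stand-in for the judge's HtmlParser, built from urls/edges\n",
    "    def __init__(self, urls: List[str], edges: List[List[int]]):\n",
    "        self.graph = {u: [] for u in urls}\n",
    "        for src, dst in edges:\n",
    "            self.graph[urls[src]].append(urls[dst])\n",
    "\n",
    "    def getUrls(self, url: str) -> List[str]:\n",
    "        return self.graph.get(url, [])\n",
    "\n",
    "def crawl_bfs(startUrl: str, htmlParser) -> List[str]:\n",
    "    # plain BFS: visit each same-hostname URL exactly once\n",
    "    hostname = startUrl.split('//', 1)[1].split('/', 1)[0]\n",
    "    visited = {startUrl}\n",
    "    queue = deque([startUrl])\n",
    "    while queue:\n",
    "        for nxt in htmlParser.getUrls(queue.popleft()):\n",
    "            if nxt.split('//', 1)[1].split('/', 1)[0] == hostname and nxt not in visited:\n",
    "                visited.add(nxt)\n",
    "                queue.append(nxt)\n",
    "    return list(visited)\n",
    "\n",
    "# Example 1 from the description\n",
    "urls = [\"http://news.yahoo.com\", \"http://news.yahoo.com/news\",\n",
    "        \"http://news.yahoo.com/news/topics/\", \"http://news.google.com\",\n",
    "        \"http://news.yahoo.com/us\"]\n",
    "edges = [[2, 0], [2, 1], [3, 2], [3, 1], [0, 4]]\n",
    "parser = MockHtmlParser(urls, edges)\n",
    "result = crawl_bfs(\"http://news.yahoo.com/news/topics/\", parser)\n",
    "assert sorted(result) == sorted(u for u in urls if u != \"http://news.google.com\")\n"
   ]
  },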
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import threading\n",
    "\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        def hostname(url):\n",
    "            return url.split(\"//\")[1].split(\"/\")[0]\n",
    "\n",
    "        startHostname = hostname(startUrl)\n",
    "        visited = {startUrl}\n",
    "        queue = [startUrl]\n",
    "\n",
    "        lock = threading.Lock()\n",
    "        condition = threading.Condition(lock)\n",
    "        workingThreads = 0\n",
    "\n",
    "        def worker():\n",
    "            nonlocal workingThreads\n",
    "            while True:\n",
    "                with lock:\n",
    "                    while not queue and workingThreads:\n",
    "                        condition.wait()\n",
    "                    if not queue and not workingThreads:\n",
    "                        return\n",
    "                    url = queue.pop(0)\n",
    "                    workingThreads += 1\n",
    "\n",
    "                nextUrls = htmlParser.getUrls(url)\n",
    "                nextUrls = [u for u in nextUrls if hostname(u) == startHostname]\n",
    "\n",
    "                with lock:\n",
    "                    for u in nextUrls:\n",
    "                        if u not in visited:\n",
    "                            visited.add(u)\n",
    "                            queue.append(u)\n",
    "                    workingThreads -= 1\n",
    "                    condition.notify_all()\n",
    "\n",
    "        threads = []\n",
    "        for _ in range(4):  # Starting 4 threads, you can adjust the number based on performance needs\n",
    "            t = threading.Thread(target=worker)\n",
    "            t.start()\n",
    "            threads.append(t)\n",
    "\n",
    "        for t in threads:\n",
    "            t.join()\n",
    "\n",
    "        return list(visited)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        from concurrent.futures import ThreadPoolExecutor\n",
    "        from threading import Lock\n",
    "\n",
    "        def parse_hostname(url):\n",
    "            return url.split(\"//\", 1)[1].split(\"/\",1)[0]\n",
    "\n",
    "        hostname = parse_hostname(startUrl)\n",
    "        lock = Lock()\n",
    "        visited = set()\n",
    "        thread_pool = ThreadPoolExecutor()\n",
    "        q = []\n",
    "\n",
    "        def fetch(src):\n",
    "            urls = htmlParser.getUrls(src)\n",
    "            for url in urls:\n",
    "                if parse_hostname(url) == hostname:\n",
    "                    with lock:\n",
    "                        if url in visited:\n",
    "                            continue\n",
    "                        visited.add(url)\n",
    "                        q.append(thread_pool.submit(fetch,url))\n",
    "\n",
    "        visited.add(startUrl)\n",
    "        q.append(thread_pool.submit(fetch, startUrl))\n",
    "        # iterating by index picks up futures appended during the loop,\n",
    "        # so this waits for every submitted fetch to finish\n",
    "        for f in q:\n",
    "            f.result()\n",
    "        \n",
    "        return list(visited)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        from concurrent.futures import ThreadPoolExecutor\n",
    "        from threading import Lock\n",
    "        from collections import deque\n",
    "\n",
    "        def parse_hostname(url):\n",
    "            return url.split(\"//\", 1)[1].split(\"/\",1)[0]\n",
    "\n",
    "        hostname = parse_hostname(startUrl)\n",
    "        lock = Lock()\n",
    "        visited = set()\n",
    "        thread_pool = ThreadPoolExecutor()\n",
    "        q = deque()\n",
    "\n",
    "        def fetch(src):\n",
    "            urls = htmlParser.getUrls(src)\n",
    "            for url in urls:\n",
    "                if parse_hostname(url) == hostname:\n",
    "                    with lock:\n",
    "                        if url in visited:\n",
    "                            continue\n",
    "                        visited.add(url)\n",
    "                    q.append(thread_pool.submit(fetch,url))\n",
    "\n",
    "        visited.add(startUrl)\n",
    "        q.append(thread_pool.submit(fetch, startUrl))\n",
    "        while q:\n",
    "            q.popleft().result()\n",
    "        \n",
    "        return list(visited)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "from concurrent.futures import ThreadPoolExecutor, wait\n",
    "import threading\n",
    "\n",
    "class Solution:\n",
    "    def __init__(self):\n",
    "        self.pool = ThreadPoolExecutor(32)\n",
    "        self.h = \"\"\n",
    "        self.s = set()\n",
    "        self.l = threading.Lock()\n",
    "\n",
    "    def get_hostname(self, url: str) -> str:\n",
    "        return url[7:].split('/', 1)[0]\n",
    "\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        self.h = self.get_hostname(startUrl)\n",
    "        tasks = []\n",
    "\n",
    "        with self.l:\n",
    "            self.s.add(startUrl)\n",
    "        # call getUrls outside the lock so pages can be fetched concurrently\n",
    "        for i in htmlParser.getUrls(startUrl):\n",
    "            if self.get_hostname(i) != self.h:\n",
    "                continue\n",
    "            with self.l:\n",
    "                if i in self.s:\n",
    "                    continue\n",
    "                self.s.add(i)\n",
    "            tasks.append(self.pool.submit(self.crawl, i, htmlParser))\n",
    "        wait(tasks)\n",
    "\n",
    "        return list(self.s)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "\n",
    "from threading import Lock, Thread\n",
    "from collections import deque\n",
    "\n",
    "class Solution:    \n",
    "    def fetch(self, url: str) -> None:\n",
    "        for url in self.htmlParser.getUrls(url):\n",
    "            hostname = url.split('//', 1)[1].split('/', 1)[0]\n",
    "            if hostname == self.hostname:\n",
    "                with self.lock: # acquire lock to protect against race conditions (multiple threads modifying the same shared resource)\n",
    "                    if url in self.visited:\n",
    "                        continue\n",
    "                    self.visited.add(url)\n",
    "                thread = Thread(target=self.fetch, args=(url,))\n",
    "                thread.start()\n",
    "                self.queue.append(thread) # list.append is atomic under the GIL, so this is safe across threads\n",
    "\n",
    "    '''\n",
    "    Multi-threading.\n",
    "    Create a thread for fetching each url.\n",
    "    The main thread waits for all threads to finish.\n",
    "    '''\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        # initialize shared resources\n",
    "        self.htmlParser = htmlParser\n",
    "        self.hostname = startUrl.split('//', 1)[1].split('/', 1)[0]\n",
    "        self.lock = Lock()\n",
    "        self.visited = set([startUrl])\n",
    "        \n",
    "        # create the shared queue before starting the first thread,\n",
    "        # so fetch never touches self.queue before it exists\n",
    "        self.queue = []\n",
    "        thread = Thread(target=self.fetch, args=(startUrl,))\n",
    "        thread.start()\n",
    "        self.queue.append(thread)\n",
    "\n",
    "        # wait for all threads to finish\n",
    "        while self.queue:\n",
    "            self.queue.pop().join()\n",
    "        \n",
    "        return list(self.visited)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "\n",
    "from threading import Lock, Thread\n",
    "from collections import deque\n",
    "\n",
    "class Solution:    \n",
    "    def fetch(self, url: str) -> None:\n",
    "        for url in self.htmlParser.getUrls(url):\n",
    "            hostname = url.split('//', 1)[1].split('/', 1)[0]\n",
    "            if hostname == self.hostname:\n",
    "                with self.lock: # acquire lock to protect against race conditions (multiple threads modifying the same shared resource)\n",
    "                    if url in self.visited:\n",
    "                        continue\n",
    "                    self.visited.add(url)\n",
    "                thread = Thread(target=self.fetch, args=(url,))\n",
    "                thread.start()\n",
    "                self.queue.append(thread) # collections.deque has atomic append() and popleft() operations (thread-safe)\n",
    "\n",
    "    '''\n",
    "    Multi-threading.\n",
    "    Create a thread for fetching each url.\n",
    "    The main thread waits for all threads to finish.\n",
    "    '''\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        # initialize shared resources\n",
    "        self.htmlParser = htmlParser\n",
    "        self.hostname = startUrl.split('//', 1)[1].split('/', 1)[0]\n",
    "        self.lock = Lock()\n",
    "        self.visited = set([startUrl])\n",
    "        \n",
    "        # create the shared queue before starting the first thread,\n",
    "        # so fetch never touches self.queue before it exists\n",
    "        self.queue = deque()\n",
    "        thread = Thread(target=self.fetch, args=(startUrl,))\n",
    "        thread.start()\n",
    "        self.queue.append(thread)\n",
    "\n",
    "        # wait for all threads to finish\n",
    "        while self.queue:\n",
    "            self.queue.popleft().join()\n",
    "        \n",
    "        return list(self.visited)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        from threading import Lock, Thread\n",
    "        from collections import deque\n",
    "\n",
    "        def get_hostname(url: str) -> str:\n",
    "            return url.split('//', 1)[1].split('/', 1)[0]\n",
    "\n",
    "        def fetch(url: str) -> None:\n",
    "            for url in htmlParser.getUrls(url):\n",
    "                if get_hostname(url) == hostname:\n",
    "                    with lock:\n",
    "                        if url in visited:\n",
    "                            continue\n",
    "                        visited.add(url)\n",
    "                    thread = Thread(target=fetch, args=(url,))\n",
    "                    thread.start()\n",
    "                    queue.append(thread)\n",
    "\n",
    "        hostname = get_hostname(startUrl)\n",
    "        lock = Lock()\n",
    "        visited = {startUrl}\n",
    "        # create the queue before starting the first thread, so fetch can always see it\n",
    "        queue = deque()\n",
    "        main_thread = Thread(target=fetch, args=(startUrl,))\n",
    "        main_thread.start()\n",
    "        queue.append(main_thread)\n",
    "        while queue:\n",
    "            queue.popleft().join()\n",
    "        return list(visited)\n",
    "\n",
    "# Author: yinheze\n",
    "# Source: https://leetcode.cn/problems/web-crawler-multithreaded/solutions/530128/pythonduo-xian-cheng-bfs-by-yinheze-tsk5/\n",
    "# From: LeetCode (力扣)\n",
    "# Copyright belongs to the author. For commercial reuse contact the author; for non-commercial reuse cite the source."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        from threading import Lock, Thread\n",
    "        from collections import deque\n",
    "\n",
    "        def get_hostname(url: str) -> str:\n",
    "            return url.split('//', 1)[1].split('/', 1)[0]\n",
    "\n",
    "        visited_url = {startUrl}\n",
    "        hostname = get_hostname(startUrl)\n",
    "        set_lock = Lock()\n",
    "\n",
    "        def crawl(startUrl):\n",
    "            for url in htmlParser.getUrls(startUrl):\n",
    "                if get_hostname(url) == hostname:\n",
    "                    with set_lock:\n",
    "                        if url in visited_url:\n",
    "                            continue\n",
    "                        visited_url.add(url)\n",
    "                    crawl_url = Thread(target=crawl, args=(url, ))\n",
    "                    crawl_url.start()\n",
    "                    processes.append(crawl_url)\n",
    "\n",
    "        # create the deque before starting the first thread, so crawl can append safely\n",
    "        processes = deque()\n",
    "        main_process = Thread(target=crawl, args=(startUrl, ))\n",
    "        main_process.start()\n",
    "        processes.append(main_process)\n",
    "        while processes:\n",
    "            processes.popleft().join()\n",
    "        return list(visited_url)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "import threading\n",
    "from collections import deque\n",
    "\n",
    "class Solution:\n",
    "    def get_host(self, url):\n",
    "        url += '/'\n",
    "        return url.split('/')[2]\n",
    "    \n",
    "    def fetch(self, url, htmlParser):\n",
    "        for url in htmlParser.getUrls(url):\n",
    "            if self.get_host(url) == self.host:\n",
    "                with self.lock:\n",
    "                    if url in self.visited:\n",
    "                        continue\n",
    "                    self.visited.add(url)\n",
    "                thread = threading.Thread(target=self.fetch, args=(url, htmlParser))\n",
    "                thread.start()\n",
    "                self.queue.append(thread)\n",
    "\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        self.host = self.get_host(startUrl)\n",
    "        self.lock = threading.Lock()\n",
    "        self.visited = {startUrl}\n",
    "        # create the queue before starting the first thread,\n",
    "        # so fetch never sees self.queue unset\n",
    "        self.queue = collections.deque()\n",
    "        thread = threading.Thread(target=self.fetch, args=(startUrl, htmlParser))\n",
    "        thread.start()\n",
    "        self.queue.append(thread)\n",
    "        while self.queue:\n",
    "            self.queue.popleft().join()\n",
    "        return list(self.visited)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "from threading import Lock, Thread\n",
    "from collections import deque\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        \n",
    "        def get_hostname(url: str) -> str:\n",
    "            return url.split('//', 1)[1].split('/', 1)[0]\n",
    "        \n",
    "        \n",
    "        def fetch(url):\n",
    "            for eachurl in htmlParser.getUrls(url):\n",
    "                if get_hostname(eachurl) == hostname:\n",
    "                    with lock:\n",
    "                        if eachurl in visited:\n",
    "                            continue\n",
    "                        else:\n",
    "                            visited.add(eachurl)\n",
    "                    thread = Thread(target=fetch, args=(eachurl,))\n",
    "                    thread.start()\n",
    "                    q.append(thread)\n",
    "        hostname = get_hostname(startUrl)\n",
    "        visited = {startUrl}\n",
    "        lock = Lock()\n",
    "        # create q before starting the thread so fetch never sees it undefined\n",
    "        q = deque()\n",
    "        mainthread = Thread(target=fetch, args=(startUrl,))\n",
    "        mainthread.start()\n",
    "        q.append(mainthread)\n",
    "        while q:\n",
    "            q.popleft().join()\n",
    "        return list(visited)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        import queue\n",
    "        import threading\n",
    "\n",
    "        hostname = self.get_hostname(startUrl)\n",
    "\n",
    "        request_queue = queue.Queue()\n",
    "        result_queue = queue.Queue()\n",
    "\n",
    "        for _ in range(5):\n",
    "            t = threading.Thread(target=self.crawl_thread, args=(hostname, htmlParser, request_queue, result_queue))\n",
    "            t.daemon = True\n",
    "            t.start()\n",
    "        \n",
    "        visited = set()\n",
    "        visited.add(startUrl)\n",
    "        request_queue.put(startUrl)\n",
    "        running = 1\n",
    "        # main loop: consume worker results and dispatch newly discovered urls\n",
    "        while running > 0:\n",
    "            urls = result_queue.get()\n",
    "            running -= 1\n",
    "            for url in urls:\n",
    "                if url in visited:\n",
    "                    continue\n",
    "                request_queue.put(url)\n",
    "                visited.add(url)\n",
    "                running += 1\n",
    "        \n",
    "        return list(visited)\n",
    "    \n",
    "    def crawl_thread(self, hostname, parser, request_queue, result_queue):\n",
    "        while True:\n",
    "            url = request_queue.get()\n",
    "            urls = parser.getUrls(url)\n",
    "            result = []\n",
    "            for url in urls:\n",
    "                if self.get_hostname(url) == hostname:\n",
    "                    result.append(url)\n",
    "            result_queue.put(result)\n",
    "\n",
    "    def get_hostname(self, url):\n",
    "        url = url[7:]\n",
    "        index = url.find('/')\n",
    "        if index != -1:\n",
    "            url = url[:index]\n",
    "        return url"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "\n",
    "from threading import Lock, Thread\n",
    "from collections import deque\n",
    "from queue import Queue\n",
    "\n",
    "'''\n",
    "Method 1:\n",
    "\n",
    "Create a thread for fetching each url.\n",
    "Use a lock to protect against race conditions (multiple threads modifying the same shared resource).\n",
    "The main thread waits for all threads to finish.\n",
    "'''\n",
    "# class Solution:    \n",
    "#     def fetch(self, url: str) -> None:\n",
    "#         for url in self.htmlParser.getUrls(url):\n",
    "#             hostname = url.split('//', 1)[1].split('/', 1)[0]\n",
    "#             if hostname == self.hostname:\n",
    "#                 with self.lock: # acquire lock to protect against race conditions\n",
    "#                     if url in self.visited:\n",
    "#                         continue\n",
    "#                     self.visited.add(url)\n",
    "#                 thread = Thread(target=self.fetch, args=(url,))\n",
    "#                 thread.start()\n",
    "#                 self.queue.append(thread) # collections.deque has atomic append() and popleft() operations (thread-safe)\n",
    "\n",
    "#     def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "#         # initialize shared resources\n",
    "#         self.htmlParser = htmlParser\n",
    "#         self.hostname = startUrl.split('//', 1)[1].split('/', 1)[0]\n",
    "#         self.lock = Lock()\n",
    "#         self.visited = set([startUrl])\n",
    "        \n",
    "#         # create the queue before starting the first thread,\n",
    "#         # so fetch never sees self.queue unset\n",
    "#         self.queue = deque()\n",
    "#         thread = Thread(target=self.fetch, args=(startUrl,))\n",
    "#         thread.start()\n",
    "#         self.queue.append(thread)\n",
    "\n",
    "#         # wait for all threads to finish\n",
    "#         while self.queue:\n",
    "#             self.queue.popleft().join()\n",
    "        \n",
    "#         return list(self.visited)\n",
    "\n",
    "'''\n",
    "Method 2:\n",
    "\n",
    "Use two queues, one for to-be-processed urls and one for discovered urls.\n",
    "Create a main thread and a few worker threads.\n",
    "The main thread first adds the startUrl to the pending queue, which the worker threads read from, find new urls, and add to the results queue.\n",
    "The main thread then reads from the results queue, records the new urls, and adds them to the pending queue.\n",
    "\n",
    "Since queue.Queue is thread-safe and only the main thread modifies the visited set, no lock is needed.\n",
    "This method also creates a fixed number of threads and ensures each worker thread has roughly the same amount of work.\n",
    "'''\n",
    "class Solution:    \n",
    "    def worker(self) -> None:\n",
    "        while True:\n",
    "            url = self.pendingQueue.get()\n",
    "            urls = self.htmlParser.getUrls(url)\n",
    "            newUrls = []\n",
    "            for url in urls:\n",
    "                hostname = url.split('//', 1)[1].split('/', 1)[0]\n",
    "                if hostname == self.hostname:\n",
    "                    newUrls.append(url)\n",
    "            self.resultsQueue.put(newUrls)\n",
    "\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        # initialize shared resources\n",
    "        self.htmlParser = htmlParser\n",
    "        self.hostname = startUrl.split('//', 1)[1].split('/', 1)[0]\n",
    "        self.visited = set([startUrl])\n",
    "        self.pendingQueue = Queue() # queue provides a thread-safe interface for exchanging data between threads\n",
    "        self.resultsQueue = Queue()\n",
    "\n",
    "        # add startUrl to the pending queue\n",
    "        self.pendingQueue.put(startUrl)\n",
    "        \n",
    "        # start worker threads (5 threads in this case)\n",
    "        for _ in range(5):\n",
    "            thread = Thread(target=self.worker)\n",
    "            thread.daemon = True # daemon threads are killed when the main thread exits\n",
    "            thread.start()\n",
    "        \n",
    "        # read from the results queue and add them to the pending queue\n",
    "        pendingUrls = 1\n",
    "        visited = set([startUrl])\n",
    "        while pendingUrls > 0:\n",
    "            urls = self.resultsQueue.get() # Queue.get() blocks until an item is available\n",
    "            pendingUrls -= 1\n",
    "            for url in urls:\n",
    "                if url in visited:\n",
    "                    continue\n",
    "                visited.add(url)\n",
    "                self.pendingQueue.put(url)\n",
    "                pendingUrls += 1\n",
    "        \n",
    "        return list(visited)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "from threading import Thread, Lock\n",
    "import concurrent.futures\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "\n",
    "class Solution:\n",
    "\n",
    "    def __init__(self):\n",
    "        self.ll = Lock()\n",
    "\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        \n",
    "        self.htmlParser = htmlParser\n",
    "        self.hname = self.get_hostname(startUrl)\n",
    "        self.result = {startUrl}\n",
    "        # create the deque before starting the thread so work never sees it unset\n",
    "        self.dq = collections.deque()\n",
    "        t = Thread(target=self.work, args=(startUrl,))\n",
    "        t.start()\n",
    "        self.dq.append(t)\n",
    "        while self.dq:\n",
    "            self.dq.popleft().join()\n",
    "        return list(self.result)\n",
    "\n",
    "    def work(self, url):\n",
    "        for uu in self.htmlParser.getUrls(url):\n",
    "            if self.get_hostname(uu) == self.hname:\n",
    "                with self.ll:\n",
    "                    if uu in self.result:\n",
    "                        continue\n",
    "                    self.result.add(uu)\n",
    "                t = Thread(target=self.work, args=(uu,))\n",
    "                t.start()\n",
    "                self.dq.append(t)\n",
    "\n",
    "    def get_hostname(self, url):\n",
    "        return url.split(\"://\")[1].split(\"/\")[0]"
   ]
  },
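  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "\n",
    "# Sketch only: the cell above imports concurrent.futures but never uses it.\n",
    "# Below is one way (an assumption, not the author's submitted method) to run\n",
    "# the same crawl on a ThreadPoolExecutor: submit one getUrls call per newly\n",
    "# discovered url and drain the pending futures until no work remains.\n",
    "# Only the main thread touches visited and pending, so no lock is needed.\n",
    "from concurrent.futures import ThreadPoolExecutor\n",
    "\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        hostname = startUrl.split('/')[2]\n",
    "        visited = {startUrl}\n",
    "        with ThreadPoolExecutor(max_workers=8) as pool:\n",
    "            pending = [pool.submit(htmlParser.getUrls, startUrl)]\n",
    "            while pending:\n",
    "                # Future.result() blocks until that fetch finishes\n",
    "                for url in pending.pop().result():\n",
    "                    if url.split('/')[2] == hostname and url not in visited:\n",
    "                        visited.add(url)\n",
    "                        pending.append(pool.submit(htmlParser.getUrls, url))\n",
    "        return list(visited)\n"
   ]
  },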
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "from queue import Queue\n",
    "from urllib.parse import urlparse\n",
    "import threading\n",
    "\n",
    "\n",
    "class Solution:\n",
    "    def crawl(self, start_url: str, parser: 'HtmlParser') -> List[str]:\n",
    "        request_queue, result_queue, visited = Queue(), Queue(), set([start_url])\n",
    "        result = [start_url]\n",
    "        request_queue.put(start_url)\n",
    "\n",
    "        def worker():\n",
    "            while True:\n",
    "                cur_url = request_queue.get()\n",
    "                new_urls = parser.getUrls(cur_url)\n",
    "                result_queue.put(new_urls)\n",
    "        \n",
    "        for _ in range(5):\n",
    "            thread = threading.Thread(target=worker)\n",
    "            thread.daemon = True\n",
    "            thread.start()\n",
    "\n",
    "        start_host_name = urlparse(start_url).netloc\n",
    "        remaining = 1\n",
    "        while remaining:\n",
    "            new_urls = result_queue.get()\n",
    "            for new_url in new_urls:\n",
    "                new_host_name = urlparse(new_url).netloc\n",
    "                if new_url not in visited and new_host_name == start_host_name:\n",
    "                    request_queue.put(new_url)\n",
    "                    remaining += 1\n",
    "                    visited.add(new_url)\n",
    "                    result.append(new_url)\n",
    "            remaining -= 1\n",
    "        return result\n",
    "\n",
    "\n",
    "\n",
    "# # Single-threaded solution\n",
    "# class Solution:\n",
    "#     def crawl(self, start_url: str, parser: 'HtmlParser') -> List[str]:\n",
    "#         result, visited, q = [], set([start_url]), deque()\n",
    "#         host_name = urlparse(start_url).netloc\n",
    "#         q.append(start_url)\n",
    "#         while q:\n",
    "#             cur_url = q.popleft()\n",
    "#             result.append(cur_url)\n",
    "#             new_urls = parser.getUrls(cur_url)\n",
    "#             for new_url in new_urls:\n",
    "#                 new_host_name = urlparse(new_url).netloc\n",
    "#                 if new_url not in visited and new_host_name == host_name:\n",
    "#                     q.append(new_url)\n",
    "#                     visited.add(new_url)\n",
    "#         return result\n",
    "        "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "import threading\n",
    "import queue\n",
    "from urllib.parse import urlsplit\n",
    "\n",
    "\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        domain = urlsplit(startUrl).netloc\n",
    "        requestQueue = queue.Queue()\n",
    "        resultQueue = queue.Queue()\n",
    "        requestQueue.put(startUrl)\n",
    "        for _ in range(5):\n",
    "            t = threading.Thread(target=self._crawl, \n",
    "                args=(domain, htmlParser, requestQueue, resultQueue))\n",
    "            t.daemon = True\n",
    "            t.start()\n",
    "        running = 1\n",
    "        visited = set([startUrl])\n",
    "        while running > 0:\n",
    "            urls = resultQueue.get()\n",
    "            for url in urls:\n",
    "                if url in visited:\n",
    "                    continue\n",
    "                visited.add(url)\n",
    "                requestQueue.put(url)\n",
    "                running += 1\n",
    "            running -= 1\n",
    "        return list(visited)\n",
    "\n",
    "    def _crawl(self, domain, htmlParser, requestQueue, resultQueue):\n",
    "        while True:\n",
    "            url = requestQueue.get()\n",
    "            urls = htmlParser.getUrls(url)\n",
    "            newUrls = []\n",
    "            for url in urls:\n",
    "                u = urlsplit(url)\n",
    "                if u.netloc == domain:\n",
    "                    newUrls.append(url)\n",
    "            resultQueue.put(newUrls)\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        def gettag(url):                     \n",
    "            return 'http://'+url.split('/')[2]\n",
    "        \n",
    "        process = collections.deque([startUrl])\n",
    "        res = {startUrl}\n",
    "        tag = gettag(startUrl)\n",
    "\n",
    "        while process:\n",
    "            t = process.popleft()\n",
    "            for url in htmlParser.getUrls(t):\n",
    "                # mark urls when enqueued so the same page is fetched at most once\n",
    "                if url.startswith(tag) and url not in res:\n",
    "                    res.add(url)\n",
    "                    process.append(url)\n",
    "\n",
    "        return list(res)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        dm = startUrl.split(\"/\")[2]\n",
    "        def issame(u):\n",
    "            return u.split(\"/\")[2] == dm\n",
    "        \n",
    "        res = []\n",
    "        vs = {}\n",
    "        def dfs(u):\n",
    "            if u in vs: return\n",
    "            if issame(u):\n",
    "                res.append(u)\n",
    "            else:\n",
    "                return\n",
    "            vs[u] = 1\n",
    "            urls = htmlParser.getUrls(u)\n",
    "            for uu in urls:\n",
    "                dfs(uu)\n",
    "\n",
    "        dfs(startUrl)\n",
    "        return res\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        vis=set()\n",
    "        def getMain(url):\n",
    "            u=url.split('/')\n",
    "            return u[2]\n",
    "        tar=getMain(startUrl)\n",
    "        ans=[]\n",
    "        def dfs(url):\n",
    "            vis.add(url)\n",
    "            if tar!=getMain(url):\n",
    "                return\n",
    "            ans.append(url)\n",
    "            nt=htmlParser.getUrls(url)\n",
    "            for u in nt:\n",
    "                if u not in vis:\n",
    "                    dfs(u)\n",
    "        dfs(startUrl)\n",
    "        return ans\n",
    "        "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        host_name = \"http://\" + startUrl.split('/')[2]\n",
    "        res = [startUrl]\n",
    "        seen = {startUrl}  # set membership check instead of O(n) list scan\n",
    "        for url in res:\n",
    "            for i in htmlParser.getUrls(url):\n",
    "                if i.startswith(host_name) and i not in seen:\n",
    "                    seen.add(i)\n",
    "                    res.append(i)\n",
    "        return res\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "\n",
    "class Solution:\n",
    "    def judStart(self, srcStr: str, tarStr: str) -> bool:\n",
    "        return srcStr.startswith(tarStr)\n",
    "    \n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        tarStr1 = startUrl.partition(\"//\")\n",
    "        tarStr2 = tarStr1[2].partition(\"/\")[0]\n",
    "        tarStr = tarStr1[0] + tarStr1[1] + tarStr2\n",
    "\n",
    "        queue, res = collections.deque([startUrl]), {startUrl}\n",
    "        while queue:\n",
    "            newUrl = queue.popleft()\n",
    "            for i in htmlParser.getUrls(newUrl):\n",
    "                # mark urls when enqueued so each page is fetched at most once\n",
    "                if self.judStart(i, tarStr) and i not in res:\n",
    "                    res.add(i)\n",
    "                    queue.append(i)\n",
    "\n",
    "        return list(res)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        def gettag(url):                     \n",
    "            return 'http://'+url.split('/')[2]\n",
    "        \n",
    "        process = collections.deque([startUrl])\n",
    "        res = {startUrl}\n",
    "        tag = gettag(startUrl)\n",
    "\n",
    "        while process:\n",
    "            t = process.popleft()\n",
    "            for url in htmlParser.getUrls(t):\n",
    "                # mark urls when enqueued so the same page is fetched at most once\n",
    "                if url.startswith(tag) and url not in res:\n",
    "                    res.add(url)\n",
    "                    process.append(url)\n",
    "\n",
    "        return list(res)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        def dfs(url):\n",
    "            urls = htmlParser.getUrls(url)\n",
    "\n",
    "            for u in urls:\n",
    "                if u not in visited and u.startswith(prefix):\n",
    "                    visited.add(u)\n",
    "                    res.append(u)\n",
    "                    dfs(u)\n",
    "\n",
    "        res = [startUrl]\n",
    "        hostname = \"\"\n",
    "        protocol = 'http://'\n",
    "        visited = set([startUrl])\n",
    "\n",
    "        for i in range(len(protocol), len(startUrl)):\n",
    "            if startUrl[i] == \"/\":\n",
    "                break\n",
    "\n",
    "            hostname += startUrl[i]\n",
    "        prefix = protocol + hostname\n",
    "        dfs(startUrl)\n",
    "        \n",
    "        return res"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "import re\n",
    "from collections import deque\n",
    "\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        def parseHostName(url):\n",
    "            return re.search(r'http:\\/\\/([a-z0-9\\-.]*)', url).group(1)\n",
    "\n",
    "        q = deque([startUrl])\n",
    "        visited = set([startUrl])\n",
    "        original_host_name = parseHostName(startUrl)\n",
    "        while q:\n",
    "            cur_url = q.popleft()\n",
    "            succ_urls = htmlParser.getUrls(url=cur_url)\n",
    "            for succ_url in succ_urls:\n",
    "                if succ_url not in visited and parseHostName(succ_url) == original_host_name:\n",
    "                    q.append(succ_url)\n",
    "                    visited.add(succ_url)\n",
    "        \n",
    "        return list(visited)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        main_url = startUrl.split('/')[2]\n",
    "        visited = set()\n",
    "\n",
    "        def dfs(url):\n",
    "            visited.add(url)\n",
    "            for next_url in htmlParser.getUrls(url):\n",
    "                if next_url.split('/')[2] == main_url and next_url not in visited:\n",
    "                    dfs(next_url)\n",
    "        \n",
    "        dfs(startUrl)\n",
    "        return list(visited)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        def f(x):\n",
    "            i = x.find('/', 7)\n",
    "            return x[7:i] if i >= 0 else x[7:]\n",
    "        s, v, domain = {startUrl}, {startUrl}, f(startUrl)\n",
    "        while s:\n",
    "            s = {url for x in s for url in htmlParser.getUrls(x) if f(url) == domain and url not in v}\n",
    "            v |= s\n",
    "        return list(v)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        host_name = \"http://\" + startUrl.split(\"/\")[2]\n",
    "\n",
    "        result = []\n",
    "        flag = set()\n",
    "\n",
    "        def _dfs(url):\n",
    "            if not url.startswith(host_name):\n",
    "                return\n",
    "            if url in flag:\n",
    "                return\n",
    "            flag.add(url)\n",
    "            result.append(url)\n",
    "            for relation_url in htmlParser.getUrls(url):\n",
    "                _dfs(relation_url)\n",
    "\n",
    "        _dfs(startUrl)\n",
    "        return result"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        def getTag(url):\n",
    "            return 'http://' + url.split('/')[2]\n",
    "        \n",
    "        q = collections.deque([startUrl])\n",
    "        res = {startUrl}\n",
    "        tag = getTag(startUrl)\n",
    "\n",
    "        while q:\n",
    "            t = q.popleft()\n",
    "            for url in htmlParser.getUrls(t):\n",
    "                # mark at enqueue time so a url is never queued twice\n",
    "                if url.startswith(tag) and url not in res:\n",
    "                    res.add(url)\n",
    "                    q.append(url)\n",
    "        return list(res)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        from collections import deque\n",
    "        url_set = set()\n",
    "        url_prefix = \"http://\" + startUrl.split('/')[2]\n",
    "\n",
    "        url_queue = deque([startUrl])\n",
    "        url_set.add(startUrl)\n",
    "        \n",
    "        while url_queue:\n",
    "            curr_url = url_queue.popleft()\n",
    "            for url in htmlParser.getUrls(curr_url):\n",
    "                if url.startswith(url_prefix) and url not in url_set:\n",
    "                    url_queue.append(url)\n",
    "                    url_set.add(url)\n",
    "\n",
    "        return list(url_set)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        isVisited = set()\n",
    "\n",
    "        def dfs(url):\n",
    "            isVisited.add(url)\n",
    "            urls = htmlParser.getUrls(url)\n",
    "            for nextUrl in urls:\n",
    "                if nextUrl not in isVisited and url.split('/')[2] == nextUrl.split('/')[2]:\n",
    "                    dfs(nextUrl)\n",
    "\n",
    "        dfs(startUrl)\n",
    "        return list(isVisited)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        base = startUrl.split('/')[2]\n",
    "        q = [startUrl]\n",
    "        ans = set()\n",
    "        ans.add(startUrl)\n",
    "        while len(q) > 0:\n",
    "            n = len(q)\n",
    "            for i in range(n):\n",
    "                for item in htmlParser.getUrls(q[i]):\n",
    "                    if item.split('/')[2] == base and item not in ans:\n",
    "                        q.append(item)\n",
    "                        ans.add(item)\n",
    "            q = q[n:]\n",
    "        return list(ans)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        res = set()\n",
    "        itr = [startUrl]\n",
    "        domain = \"/\".join(startUrl.split(\"/\")[:3])\n",
    "        while itr:\n",
    "            t = itr.pop()\n",
    "            res.add(t)\n",
    "            urls = htmlParser.getUrls(t)\n",
    "            for url in urls:\n",
    "                if url.startswith(domain) and url not in res:\n",
    "                    itr.append(url)\n",
    "        return list(res)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "\n",
    "class Solution:\n",
    "   \n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        visited = set()\n",
    "        res = []\n",
    "\n",
    "        self._crawl(startUrl, visited, htmlParser, res, startUrl.split(\"/\")[2])\n",
    "\n",
    "        return res\n",
    "\n",
    "    \n",
    "    def _crawl(self, start_url, visited, html_parser, res, domain_name):\n",
    "        if start_url is None or start_url in visited:\n",
    "            return \n",
    "\n",
    "        if domain_name not in start_url:\n",
    "            return\n",
    "\n",
    "        visited.add(start_url)\n",
    "        res.append(start_url)\n",
    "\n",
    "        children = html_parser.getUrls(start_url)\n",
    "\n",
    "        if not children:\n",
    "            return\n",
    "\n",
    "        for url in children:\n",
    "            self._crawl(url, visited, html_parser, res, domain_name)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "from collections import deque\n",
    "\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        def get_hostname(url):\n",
    "            return url[7:].split('/')[0]\n",
    "        \n",
    "        dq = deque([startUrl])\n",
    "        visited = {startUrl}\n",
    "        start_host = get_hostname(startUrl)\n",
    "        while dq:\n",
    "            cur_url = dq.popleft()\n",
    "            for url in htmlParser.getUrls(cur_url):\n",
    "                if url not in visited and get_hostname(url) == start_host:\n",
    "                    dq.append(url)\n",
    "                    visited.add(url)\n",
    "\n",
    "        return list(visited)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "\n",
    "class Solution:\n",
    "\n",
    "    def get_namespace(self, url:str):\n",
    "        return url.replace(\"http://\", \"\").split(\"/\")[0]\n",
    "\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        namespace = self.get_namespace(startUrl)\n",
    "\n",
    "        queue_cur = set([startUrl])\n",
    "        queue_next = set()\n",
    "        visited = set()\n",
    "\n",
    "        ans = []\n",
    "\n",
    "        while queue_cur or queue_next:\n",
    "            for url in queue_cur:\n",
    "                if url in visited:\n",
    "                    continue\n",
    "                visited.add(url)\n",
    "                ns = self.get_namespace(url)\n",
    "                if ns == namespace:\n",
    "                    ans.append(url)\n",
    "                else:\n",
    "                    continue\n",
    "                \n",
    "                for url_next in htmlParser.getUrls(url):\n",
    "                    queue_next.add(url_next)\n",
    "            \n",
    "            queue_cur = queue_next\n",
    "            queue_next = set()\n",
    "        \n",
    "        return ans"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        def gettag(url):\n",
    "            return 'http://' + url.split('/')[2]\n",
    "        process = collections.deque([startUrl])\n",
    "        res = set()\n",
    "        tag = gettag(startUrl)\n",
    "\n",
    "        while process:\n",
    "            t = process.popleft()\n",
    "            res.add(t)\n",
    "            for url in htmlParser.getUrls(t):\n",
    "                if url.startswith(tag) and url not in res:\n",
    "                    process.append(url)\n",
    "        return list(res)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        sign = startUrl[7:].split(\"/\")[0]\n",
    "        vis = set()\n",
    "        res = [startUrl]\n",
    "        def dfs(net):\n",
    "            vis.add(net)\n",
    "            for y in htmlParser.getUrls(net):\n",
    "                if y not in vis and y[7:].split(\"/\")[0] == sign:\n",
    "                    res.append(y)\n",
    "                    dfs(y)\n",
    "        dfs(startUrl)\n",
    "        return res"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        def gettag(url):\n",
    "            return 'http://' + url.split('/')[2]\n",
    "        \n",
    "        q = collections.deque([startUrl])\n",
    "        res = set()\n",
    "        tag = gettag(startUrl)\n",
    "        \n",
    "        while q:\n",
    "            cur = q.popleft()\n",
    "            res.add(cur)\n",
    "            for nxt in htmlParser.getUrls(cur):\n",
    "                if nxt.startswith(tag) and nxt not in res:\n",
    "                    q.append(nxt)\n",
    "        \n",
    "        return list(res)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        res = []\n",
    "        target = startUrl.split('/')[2]\n",
    "        q = collections.deque([startUrl])\n",
    "        s = set([startUrl])\n",
    "        while q:\n",
    "            url = q.popleft()\n",
    "            res.append(url)\n",
    "            nextUrl = htmlParser.getUrls(url)\n",
    "            for nxt in nextUrl:\n",
    "                if nxt not in s and nxt.split('/')[2] == target:\n",
    "                    q.append(nxt)\n",
    "                    s.add(nxt)\n",
    "        return res\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "from collections import deque\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:        \n",
    "        hostname = \"\"\n",
    "        protocol = 'http://'\n",
    "        res = [startUrl]\n",
    "        visited = set([startUrl])\n",
    "        q = deque([startUrl])\n",
    "\n",
    "        for i in range(len(protocol), len(startUrl)):\n",
    "            if startUrl[i] == \"/\":\n",
    "                break\n",
    "\n",
    "            hostname += startUrl[i]\n",
    "        prefix = protocol + hostname\n",
    "        \n",
    "        while q:\n",
    "            url = q.popleft()\n",
    "            urls = htmlParser.getUrls(url)\n",
    "\n",
    "            for u in urls:\n",
    "                if u.startswith(prefix) and u not in visited:\n",
    "                    res.append(u)\n",
    "                    visited.add(u)\n",
    "                    q.append(u)        \n",
    "        \n",
    "        return res"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        def gettag(url):                     \n",
    "            return 'http://'+url.split('/')[2]\n",
    "        \n",
    "        process = collections.deque([startUrl])\n",
    "        res = set()\n",
    "        tag = gettag(startUrl)\n",
    "\n",
    "        while process:\n",
    "            t = process.popleft()\n",
    "            res.add(t)\n",
    "            for url in htmlParser.getUrls(t):\n",
    "                if url.startswith(tag) and url not in res:\n",
    "                    process.append(url)\n",
    "\n",
    "        return list(res)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        def get_hostname(url):\n",
    "            return url.split('//')[1].split('/')[0]\n",
    "\n",
    "        hostname = get_hostname(startUrl)\n",
    "        visited = set()\n",
    "        res = []\n",
    "\n",
    "        queue = collections.deque([startUrl])\n",
    "        visited.add(startUrl)\n",
    "\n",
    "        while queue:\n",
    "            cur = queue.popleft()\n",
    "            res.append(cur)\n",
    "\n",
    "            cur_urls = htmlParser.getUrls(cur)\n",
    "            for url in cur_urls:\n",
    "                if get_hostname(url) == hostname and url not in visited:\n",
    "                    queue.append(url)\n",
    "                    visited.add(url)\n",
    "\n",
    "        return res"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "\n",
    "class Solution:\n",
    "    def sameHost(self, url):\n",
    "        return self.host == url.split(\"/\")[2]\n",
    "\n",
    "    def f(self, url):\n",
    "        if url in self.visited:\n",
    "            return\n",
    "        if self.sameHost(url):\n",
    "            self.visited.add(url)\n",
    "            self.r.append(url)\n",
    "            for nb in self.parser.getUrls(url):\n",
    "                self.f(nb)\n",
    "\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        self.host = startUrl.split(\"/\")[2]\n",
    "        self.parser = htmlParser\n",
    "        self.visited = set()  # set membership is O(1); scanning a list was O(n)\n",
    "        self.r = []\n",
    "        self.f(startUrl)\n",
    "        return self.r"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "from collections import deque\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        def getDomain(url):\n",
    "            return url.split(\"//\")[1].split(\"/\")[0]\n",
    "        domain = getDomain(startUrl)\n",
    "        s = set()\n",
    "        queue = deque()\n",
    "        queue.appendleft(startUrl)\n",
    "        while queue:\n",
    "            cur = queue.pop()\n",
    "            if cur in s: continue\n",
    "            s.add(cur)\n",
    "            for url in htmlParser.getUrls(cur):\n",
    "                if getDomain(url) == domain:\n",
    "                    queue.appendleft(url)\n",
    "        return list(s)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        def getYM(url):\n",
    "            # the hostname is always the third '/'-separated component\n",
    "            return url.split(\"/\")[2]\n",
    "\n",
    "        startYM = getYM(startUrl)\n",
    "        visited = set()\n",
    "        visited.add(startUrl)\n",
    "        que = [startUrl]\n",
    "        while que:\n",
    "            p = que.pop(0)\n",
    "            nexURL = htmlParser.getUrls(p)\n",
    "            for u in nexURL:\n",
    "                if u not in visited and getYM(u) == startYM:\n",
    "                    visited.add(u)\n",
    "                    que.append(u)\n",
    "\n",
    "        return list(visited)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        ans = set()\n",
    "        \n",
    "        root, process = \"http://\" + startUrl.split('/')[2], [startUrl]\n",
    "        while process:\n",
    "            url = process.pop(0)\n",
    "            ans.add(url)\n",
    "            for u in htmlParser.getUrls(url):\n",
    "                if u.startswith(root) and u not in ans:\n",
    "                    process.append(u)\n",
    "        \n",
    "        return list(ans)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        rec_set = {startUrl}\n",
    "        rec_list = [startUrl]\n",
    "        host_name = 'http://' + startUrl.split('/')[2]\n",
    "        for url in rec_list:\n",
    "            a = htmlParser.getUrls(url)\n",
    "            for i in a:\n",
    "                if i.startswith(host_name) and i not in rec_set:\n",
    "                    rec_set.add(i)\n",
    "                    rec_list.append(i)\n",
    "        return rec_list"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "import collections\n",
    "\n",
    "# \"\"\"\n",
    "# This is HtmlParser's API interface.\n",
    "# You should not implement it, or speculate about its implementation\n",
    "# \"\"\"\n",
    "#class HtmlParser(object):\n",
    "#    def getUrls(self, url):\n",
    "#        \"\"\"\n",
    "#        :type url: str\n",
    "#        :rtype List[str]\n",
    "#        \"\"\"\n",
    "\n",
    "class Solution:\n",
    "    def crawl(self, startUrl: str, htmlParser: 'HtmlParser') -> List[str]:\n",
    "        used = set()\n",
    "        result = []\n",
    "\n",
    "        def get_host(url):\n",
    "            # hostname: strip 'http://' and take everything before the first '/'\n",
    "            return url[7:].split(\"/\")[0]\n",
    "\n",
    "        def method(url):\n",
    "            if url in used:\n",
    "                return\n",
    "            if get_host(startUrl) == get_host(url):\n",
    "                result.append(url)\n",
    "                used.add(url)\n",
    "                for u in htmlParser.getUrls(url):\n",
    "                    method(u)\n",
    "\n",
    "        method(startUrl)\n",
    "        return result"
   ]
  }
 ],
 "metadata": {},
 "nbformat": 4,
 "nbformat_minor": 2
}
