# Next step: build a GUI that keeps a small always-on-top window during checks for easy interaction, runs Selenium in headless mode, and shows the check results in the app
from os import mkdir

import requests
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import NoSuchElementException, TimeoutException, WebDriverException
from threading import Thread, active_count
from rich import print as rprint

def req(url, dire=False):
    """Fetch url; return the HTTP status code if dire is True, otherwise a BeautifulSoup of the page."""
    headersvalue = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.81 Safari/537.36 Edg/94.0.992.50',
        }
    r = requests.get(url, timeout=10, headers=headersvalue)
    if dire:
        return r.status_code
    soup = BeautifulSoup(r.text, 'lxml')
    return soup

def test(item, results):
    """Check one (name, url) pair; append it to results if it does not answer with HTTP 200."""
    try:
        code = req(item[1], dire=True)
        if code == 200:
            rprint(f'[green]{item[0]} | reachable')
        else:
            rprint(f'[yellow]{item[0]} | {code} | {item[1]}')
            results.append(item)
    except requests.RequestException:
        rprint(f'[red]{item[0]} | request failed before any status code | {item[1]}')
        results.append(item)

def confirm(results):
    """Re-check each failed site via an online checker, then optionally open it in a visible browser."""
    options = Options()
    options.page_load_strategy = 'eager'
    options.add_argument('--start-maximized')
    options.add_argument('--headless')
    browser = webdriver.Chrome(options=options)
    browser.set_page_load_timeout(20)
    for result in list(results):  # iterate over a copy so items can be removed safely below
        name, url = result
        # submit the URL to the third-party availability checker
        browser.get('https://gualemang.com/')
        browser.find_element(By.XPATH, '//div[@class="input-group"]/input').send_keys(url)
        browser.find_element(By.XPATH, '//div[@class="input-group-append"]/button').click()
        rprint('Now checking: {}, URL: {}'.format(name, url))
        try:
            browser.find_element(By.XPATH, '//div[@class="input-group"]/input').send_keys('——' + name)
        except NoSuchElementException:
            rprint('[red]This site entry has a broken URL')
            continue
        except Exception:
            rprint('[red]Unknown error')
            continue
        # wait until the checker's status text no longer reads '检测中' ("checking")
        WebDriverWait(browser, 20).until_not(EC.text_to_be_present_in_element((By.ID, 'status-text'), '检测中'))
        rprint('Check results:\n' + browser.find_element(By.ID, 'status-list').text.replace(' ', '   '))
        if browser.find_element(By.CLASS_NAME, 'code').text == '200':
            rprint('[yellow]The site looks fine; trying to open it in a visible webdriver...')
            browser_1 = webdriver.Chrome()
            browser_1.set_page_load_timeout(20)  # was mistakenly set on `browser` instead of `browser_1`
            try:
                browser_1.get(url)
            except TimeoutException:
                rprint('[red]TimeoutException!!!')
            except WebDriverException:
                rprint('[yellow]Either the webdriver failed or the site is unreachable')
            except Exception:
                rprint('[red]Unknown error')
            q = input('\nDelete this site from the failure list? (y/n) ')
            if q.lower() == 'y':  # note: the old check `q in 'Yy'` also matched an empty answer
                results.remove(result)
            q = input('Press Enter to close: ')
            if q == '':
                browser_1.quit()
                rprint('[green]Site window closed')
        input('Next: ')  # avoid shadowing the builtin `next`

def get_input(s):
    """Prompt repeatedly until input() succeeds (e.g. survives a decode error)."""
    while True:
        try:
            return input(s)
        except Exception:
            continue

def web_info():
    node_li = req('https://shouku123.com/tiantian').find_all('li', attrs={'class': 'app-item'})
    page_names = ['默认'] + [li.find('a').string for li in node_li]  # '默认' ("default") labels the landing page itself
    page_urls = ['https://shouku123.com/tiantian'] + [li.find('a').attrs['href'] for li in node_li]
    pages = tuple(zip(page_names, page_urls))
    t = [f'{i}. {name}' for i, name in enumerate(page_names, 1)]
    rprint('{}\n{}\n{}'.format('-'*140, '   '.join(t), '-'*140))
    n = int(get_input('Which page do you want to check?\n'))

    if not pages[n-1][1].startswith('http'):
        url = 'https://shouku123.com' + pages[n-1][1]
    else:
        url = pages[n-1][1]
    node_ul = req(url).find_all('ul', attrs={'class': 'list-group collapse in urls'})
    groups_dict = {}
    for ul in node_ul:
        # spans titled '点击查看二维码' ("click to view QR code") carry the (name, url) pair in their onclick JS
        node_span = ul.find_all('span', attrs={'title': '点击查看二维码'})
        items = []
        for span in node_span:
            # strip the fixed JS wrapper around the onclick value, then split on quotes
            ls = span.attrs['onclick'][18:-2].split("'")
            items.append((ls[1], ls[3]))
        groups_dict[ul.attrs['title']] = items
    choice = int(get_input('\nTopic "{}" has {} categories:\n {} \n {} \n {} \n 0. Test everything at once          1. Test a single category\nChoose: '.\
                       format(pages[n-1][0], len(groups_dict), '-'*140, '      '.join(groups_dict.keys()), '-'*140)))
    if choice:
        while True:
            try:
                c = input('Enter the name of the category to test: ')
                if c in groups_dict:
                    break
            except Exception:
                continue
        groups_dict = {c: groups_dict[c]}  # keep only the chosen category
    results, n = [], 1
    for key, value in groups_dict.items():
        rprint('\nNow checking category "{}" ({}/{}):\n'.format(key, n, len(groups_dict)))
        n += 1
        threads = [Thread(target=test, args=(item, results)) for item in value]
        for th in threads:
            th.start()
        for th in threads:  # wait for this batch instead of busy-polling active_count()
            th.join()
    
    if results:
        rprint('\nFirst pass finished! Summary of failures:\n' + '\n'.join(str(i) for i in results))
    else:
        rprint('\nTest finished; every site is reachable!')
    while True:
        try:
            with open('d:/data/results.txt', 'w', encoding='utf-8') as f:
                f.write('\n'.join([str(i) for i in results]))
            break
        except FileNotFoundError:
            mkdir('d:/data')
            rprint('Folder D:/data was missing; created it')
    confirm(results)
    with open('d:/data/results.txt', 'w', encoding='utf-8') as f:
        f.write('\n'.join([str(i) for i in results]))           

def from_local():
    from ast import literal_eval  # parse the saved tuples safely instead of using eval()
    with open('d:/data/results.txt', 'r', encoding='utf-8') as f:
        results = f.readlines()
    results = [literal_eval(i) for i in results]
    confirm(results)
    with open('d:/data/results.txt', 'w', encoding='utf-8') as f:
        f.write('\n'.join([str(i) for i in results]))
    rprint(results)

if __name__ == '__main__':
    choice = input('Choose (Enter => fetch from the web, anything else => load local results): ')
    if choice == '':
        web_info()
    else:
        from_local()

# Sample `pages` value:
# (('默认', 'https://shouku123.com/tiantian'), ('海外剧', 'https://shouku123.com/tiantian/海外剧'), ('素材_资料', 'https://shouku123.com/tiantian/素材_资料'), ('动漫_漫画', 'https://shouku123.com/tiantian/动漫_漫画'), ('软件', 'https://shouku123.com/tiantian/软件'), ('视频解析', 'https://shouku123.com/tiantian/视频解析'), ('影视下载', 'https://shouku123.com/tiantian/影视下载'), ('其他', 'https://shouku123.com/tiantian/其他'))

