'''
Multithreaded lookup of keyword rankings within the first 10 result pages.
Covers Baidu PC, Baidu mobile, 360 (so.com), Sogou, Sogou mobile, and Shenma.
QQ group: 170555357
'''
from multiprocessing import Pool
import time, threading
import requests, re, redis
from bs4 import BeautifulSoup
from pskpackage.db import *

pool = redis.ConnectionPool(host='127.0.0.1', port=6379,  db=1)
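Tasks arrive on the `rank` redis list as a single '^'-delimited string. A minimal producer sketch; the field order (`id^keyword^mark^engine^table^field`) is inferred from how the consumer loop at the bottom of this script splits the value, so treat it as an assumption rather than a spec:

```python
# Hypothetical producer sketch -- the field order is inferred from the
# consumer's redis_val.split('^'), not from any documented format.
def build_task(task_id, keyword, mark, engine, table, field):
    """Join task fields with '^', the delimiter the consumer splits on."""
    return "^".join([str(task_id), keyword, mark, engine, table, field])

task = build_task(42, "python tutorial", "example.com", "baidu", "task", "pc_rank")
# r.lpush('rank', task)  # enqueue (commented out: requires a live redis)
```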

# best ranking found so far, keyed by task id (local to each worker process)
return_list = {}

# paginated search, executed in worker threads
def Search_thread(url_val,id,mark,lyc,table,field,redis_val):
    global return_list,pool
    r = redis.Redis(connection_pool=pool)
    time.sleep(1)
    i = 0
    ranking=0
    try:
        data = requests.get(url_val, timeout=5)
    except requests.RequestException:
        r.lpush('rank', redis_val)  # push the failed task back onto redis for retry
        return False
    soup = BeautifulSoup(data.text, "lxml")

    if lyc == "baidu":
        for item in soup.find_all("div", class_='f13'):
            i += 1
            try:
                if item.find("span").text:
                    item_mark = item.find("span").text
                else:
                    item_mark = item.find("a").text
                item_mark = re.search(mark, item_mark)
                if item_mark:
                    if str(item_mark.group().strip()) == str(mark.strip()):
                        page = re.findall('&pn=(.+)&oq', url_val)
                        ranking = (int(page[0]) + int(i))
                        if return_list.get(id):
                            if int(return_list[id]) >= int(ranking):
                                return_list[id] = ranking
                                update_task_rank(id, ranking, table, field)
                                print("update complete")
                        else:
                            return_list[id] = ranking
                            update_task_rank(id, ranking, table, field)
                            print("update complete")
                        break
            except (AttributeError, TypeError, IndexError, ValueError):
                pass

    elif lyc == "so":
        for item in soup.find_all("p", attrs={'class': 'res-linkinfo'}):
            i += 1
            item_mark = item.find("cite").text
            item_mark = re.search(mark, item_mark)
            if item_mark:
                if str(item_mark.group().strip()) == str(mark.strip()):
                    page = re.findall('&pn=(.+)&psid', url_val)
                    ranking = (int(page[0]) * 10) - 10 + int(i)
                    if return_list.get(id):
                        if int(return_list[id]) >= int(ranking):
                            return_list[id] = ranking
                            update_task_rank(id, ranking, table, field)
                            print("update complete")
                    else:
                        return_list[id] = ranking
                        update_task_rank(id, ranking, table, field)
                        print("update complete")
                    break  # stop at the first match, as the other engine branches do

    elif lyc == "sogou":
        for item in soup.find_all("div", attrs={'class': 'fb'}):
            i += 1
            item_mark = item.find("cite").text
            item_mark = re.search(mark, item_mark)
            if item_mark:
                if str(item_mark.group().strip()) == str(mark.strip()):
                    page = re.findall('&page=(.+)&ie', url_val)
                    ranking = (int(page[0]) * 10) - 10 + int(i)
                    if return_list.get(id):
                        if int(return_list[id]) >= int(ranking):
                            return_list[id] = ranking
                            update_task_rank(id, ranking, table, field)
                            print("update complete")
                    else:
                        return_list[id] = ranking
                        update_task_rank(id, ranking, table, field)
                        print("update complete")
                    break

    elif lyc == "shenma":
        for item in soup.find_all("div", attrs={'class': 'article ali_row'}):
            if item.find("div", attrs={'class': 'other'}).find("span"):
                continue
            i += 1

            item_mark = item.find("div", attrs={'class': 'other'}).text
            item_mark = re.search(mark, item_mark)
            if item_mark:
                if str(item_mark.group().strip()) == str(mark.strip()):
                    page = re.findall('&page=(.+)', url_val)
                    ranking = (int(page[0]) * 10) - 10 + int(i)
                    if return_list.get(id):
                        if int(return_list[id]) >= int(ranking):
                            return_list[id] = ranking
                            update_task_rank(id, ranking, table, field)
                            print("update complete")
                    else:
                        return_list[id] = ranking
                        update_task_rank(id, ranking, table, field)
                        print("update complete")
                    break

    elif lyc == "sogou_wap":
        for item in soup.find_all("div", attrs={'class': 'result'}):
            i += 1
            item_mark = item.find("div", attrs={'class': 'citeurl'}).text
            item_mark = re.search(mark, item_mark)
            if item_mark:
                if str(item_mark.group().strip()) == str(mark.strip()):
                    page = re.findall('&p=(.+)', url_val)
                    ranking = (int(page[0]) * 10) - 10 + int(i)
                    if return_list.get(id):
                        if int(return_list[id]) >= int(ranking):
                            return_list[id] = ranking
                            update_task_rank(id, ranking, table, field)
                            print("update complete")
                    else:
                        return_list[id] = ranking
                        update_task_rank(id, ranking, table, field)
                        print("update complete")
                    break

    if int(ranking) == 0 and return_list.get(id) is None:
        update_task_rank(id, ranking, table, field)
        print("update complete")



# first-page search; falls back to paginated thread search when no rank is found
def Search(id,keywords,mark,lyc,table,field,redis_val):
    global return_list, pool
    r = redis.Redis(connection_pool=pool)
    time.sleep(1)
    i = 0
    ranking = 0
    page_url = []  # pages 2+ to scan if the keyword is not found on page 1
    if lyc =="baidu":
        # Baidu search
        url = "http://www.baidu.com/s?wd=%s" % (str(keywords))
        try:
            data = requests.get(url, timeout=5)
        except requests.RequestException:
            r.lpush('rank', redis_val)  # push the failed task back onto redis for retry
            return False
        soup = BeautifulSoup(data.text, "lxml")
        page_url = []
        for item in soup.find_all("div", {"id": "page"}):
            for v in item.find_all("a", limit=9):
                page_url.append("http://www.baidu.com"+v.get("href"))
        try:
            for item in soup.find_all("div", attrs={'class': 'f13'}):
                i += 1
                if item.find("span").text:
                    item_mark = item.find("span").text
                else:
                    item_mark = item.find("a").text
                item_mark = re.search(mark, item_mark)
                if item_mark:
                    if str(item_mark.group().strip()) == str(mark.strip()):
                        ranking = i
                        update_task_rank(id,i,table,field)
                        break
        except (AttributeError, TypeError, IndexError, ValueError):
            pass

    elif lyc == "so":
        # 360 search
        url = "https://www.so.com/s?q=%s" % (str(keywords))
        try:
            data = requests.get(url, timeout=5)
        except requests.RequestException:
            r.lpush('rank', redis_val)  # push the failed task back onto redis for retry
            return False

        soup = BeautifulSoup(data.text, "lxml")
        page_url = []
        for item in soup.find_all("div", {"id": "page"}):
            for v in item.find_all("a", limit=9):
                page_url.append("https://www.so.com"+v.get("href"))
        try:
            for item in soup.find_all("p", attrs={'class': 'res-linkinfo'}):
                i += 1
                item_mark = item.find("cite").text
                item_mark = re.search(mark, item_mark)
                if item_mark:
                    if str(item_mark.group().strip()) == str(mark.strip()):
                        ranking = i
                        update_task_rank(id,i,table, field)
                        break
        except (AttributeError, TypeError, IndexError, ValueError):
            pass

    elif lyc == "sogou":
        # Sogou search
        url = "https://www.sogou.com/web?query=%s" % (str(keywords))
        try:
            data = requests.get(url, timeout=5)
        except requests.RequestException:
            r.lpush('rank', redis_val)  # push the failed task back onto redis for retry
            return False
        soup = BeautifulSoup(data.text, "lxml")
        page_url = []
        for item in soup.find_all("div", {"id": "pagebar_container"}):
            for v in item.find_all("a", limit=9):
                page_url.append("https://www.sogou.com/web"+v.get("href"))
        try:
            for item in soup.find_all("div", attrs={'class': 'fb'}):
                i += 1
                item_mark = item.find("cite").text
                item_mark = re.search(mark, item_mark)
                if item_mark:
                    if str(item_mark.group().strip()) == str(mark.strip()):
                        ranking = i
                        update_task_rank(id,i,table,field)
                        break
        except (AttributeError, TypeError, IndexError, ValueError):
            pass

    elif lyc == "shenma":
        # Shenma search
        url = "https://m.sm.cn/s?q=%s&from=smor" % (str(keywords))

        try:
            data = requests.get(url, timeout=5)
        except requests.RequestException:
            r.lpush('rank', redis_val)  # push the failed task back onto redis for retry
            return False

        soup = BeautifulSoup(data.text, "lxml")
        page_url = []
        for item_url in range(2, 10):
            page_url.append(url + "&page=%s" % (str(item_url)))
        try:
            for item in soup.find_all("div", attrs={'class': 'article ali_row'}):
                if item.find("div", attrs={'class': 'other'}).find("span"):
                    continue
                i += 1
                item_mark = item.find("div", attrs={'class': 'other'}).text
                item_mark = re.search(mark, item_mark)
                if item_mark:
                    if str(item_mark.group().strip()) == str(mark.strip()):
                        ranking = i
                        update_task_rank(id,i,table,field)
                        break
        except (AttributeError, TypeError, IndexError, ValueError):
            pass

    elif lyc == "baidu_wap":
        # Baidu mobile (queried via a third-party rank-lookup site)
        url = "http://www.78901.net/wap/"
        data = {"m": mark, "yd": keywords, "submit": "查询"}
        try:
            return_data = requests.post(url, data=data, timeout=60)
            return_data.encoding = 'utf-8'
        except requests.RequestException:
            r.lpush('rank', redis_val)  # push the failed task back onto redis for retry
            return False
        soup = BeautifulSoup(return_data.text, "lxml")
        p = soup.find_all("p")
        try:
            html_p = str(p[2])
            page = re.findall("第(.+)页", html_p)
            ranking = re.findall("页的第(.+)个", html_p)
            ranking = (int(page[0]) * 10) - 10 + int(ranking[0])
            update_task_rank(id, ranking, table, field)
        except (AttributeError, TypeError, IndexError, ValueError):
            update_task_rank(id, 0, table, field)
        print("update complete")

    elif lyc == "sogou_wap":
        url = "https://m.sogou.com/web/searchList.jsp?keyword=%s" % (str(keywords))
        try:
            data = requests.get(url, timeout=5)
        except requests.RequestException:
            r.lpush('rank', redis_val)  # push the failed task back onto redis for retry
            return False
        soup = BeautifulSoup(data.text, "lxml")
        page_url = []
        for item_url in range(2, 10):
            page_url.append(url + "&p=%s" % (str(item_url)))
        try:
            for item in soup.find_all("div", attrs={'class': 'result'}):
                i += 1
                item_mark = item.find("div", attrs={'class': 'citeurl'}).text
                item_mark = re.search(mark, item_mark)
                if item_mark:
                    if str(item_mark.group().strip()) == str(mark.strip()):
                        ranking = i
                        update_task_rank(id,i,table,field)
                        break
        except (AttributeError, TypeError, IndexError, ValueError):
            pass


    if int(ranking) == 0 and page_url:
        # run the paginated search in parallel threads, one per page URL
        threads = []
        for url_val in page_url:
            t = threading.Thread(target=Search_thread, args=(url_val, id, mark, lyc, table, field, redis_val))
            t.start()
            threads.append(t)
        for t in threads:  # wait for every page thread, not just the last one started
            t.join()
    if return_list.get(id):
        del return_list[id]

    print("done")
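The per-engine branches above all compute the absolute rank with the same arithmetic: `(page * 10) - 10 + position_on_page`, with 1-based pages and positions (the Baidu PC branch instead adds the position to the 0-based `pn` offset from the URL). An illustrative helper, not part of the original script:

```python
def absolute_rank(page, pos, per_page=10):
    """Absolute rank of the pos-th result (1-based) on a 1-based results page."""
    return (page * per_page) - per_page + pos

# e.g. the 4th result on page 3 of a 10-per-page listing
rank = absolute_rank(3, 4)
```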

if __name__ == '__main__':
    r = redis.Redis(connection_pool=pool)
    p = Pool(2)  # process pool with 2 workers
    while True:
        try:
            redis_val = r.rpop("rank")
            if redis_val:
                redis_val = redis_val.decode()
                task = redis_val.split('^')
                # a task without a search mark gets rank 0 immediately
                if not task[2]:
                    update_task_rank(int(task[0]), 0, task[4], task[5])
                else:
                    p.apply_async(Search, args=(int(task[0]), task[1], task[2], task[3], task[4], task[5], redis_val))
            else:
                time.sleep(1)  # avoid busy-polling an empty queue
        except Exception:
            pass
    p.close()  # stop accepting new tasks
    p.join()   # wait for all worker processes to finish


