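"""Asynchronous single-page scraper built on Playwright.

Fetches a page in headless Chromium, scrolls to trigger lazy-loaded content,
and returns the page text, image URLs (JPEG/PNG/WebP only), and same-domain
links for further crawling.
"""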
from playwright.async_api import async_playwright
from urllib.parse import urljoin, urlparse
import logging

# Set up logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')

async def scrape_page(url: str, visited: set, base_domain: str) -> tuple[dict, set]:
    """Scrape a single page for text, images, and links using Playwright."""
    try:
        async with async_playwright() as p:
            browser = await p.chromium.launch(headless=True)
            context = await browser.new_context(
                user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36",
                viewport={"width": 1280, "height": 720}
            )
            page = await context.new_page()
            await page.goto(url, wait_until="networkidle", timeout=30000)
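            # Scroll to the bottom and wait briefly so lazy-loaded content has a chance to render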
            await page.evaluate("window.scrollTo(0, document.body.scrollHeight)")
            await page.wait_for_timeout(2000)
            
            # Extract text content
            text_content = await page.evaluate("document.body.innerText")
            text_content = ' '.join(text_content.split()) if text_content else ""
            
            # Extract images (only JPEG, PNG, WebP, exclude data URLs and SVGs)
            images = await page.evaluate(
                """() => {
                    const validExtensions = ['.jpg', '.jpeg', '.png', '.webp'];
                    const imgElements = document.querySelectorAll('img');
                    const imgUrls = new Set();
                    imgElements.forEach(img => {
                        const src = img.src || '';
                        const dataSrc = img.dataset.src || '';
                        const srcset = img.srcset || '';
                        // Check src
                        if (src && !src.startsWith('data:') && validExtensions.some(ext => src.toLowerCase().endsWith(ext))) {
                            imgUrls.add(src);
                        }
                        // Check data-src
                        if (dataSrc && !dataSrc.startsWith('data:') && validExtensions.some(ext => dataSrc.toLowerCase().endsWith(ext))) {
                            imgUrls.add(dataSrc);
                        }
                        // Check srcset
                        if (srcset) {
                            srcset.split(',').forEach(entry => {
                                const url = entry.trim().split(' ')[0];
                                if (url && !url.startsWith('data:') && validExtensions.some(ext => url.toLowerCase().endsWith(ext))) {
                                    imgUrls.add(url);
                                }
                            });
                        }
                    });
                    return Array.from(imgUrls);
                }"""
            )
            images = [urljoin(url, img) for img in images if img]
            
            # Extract links
            links = await page.evaluate("Array.from(document.querySelectorAll('a')).map(a => a.href)")
            links = set(urljoin(url, link) for link in links 
                        if urlparse(urljoin(url, link)).netloc == base_domain 
                        and urljoin(url, link) not in visited)
            
            await browser.close()
        
        page_data = {"url": url, "text": text_content, "images": images}
        logging.info(f"Scraped data: url={url}, text_length={len(text_content)}, images={images}")
        return page_data, links
    
    except Exception as e:
        logging.error(f"Error scraping {url}: {e}")
        return {}, set()
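

# Minimal usage sketch (an assumption, not part of the module's confirmed API):
# a breadth-first crawl driver built on scrape_page. The names `crawl`,
# `start_url`, and `max_pages` are illustrative only.
import asyncio


async def crawl(start_url: str, max_pages: int = 10) -> list[dict]:
    """Breadth-first crawl of same-domain pages, collecting scrape_page results."""
    base_domain = urlparse(start_url).netloc
    visited: set = set()
    queue = [start_url]
    results = []
    while queue and len(visited) < max_pages:
        url = queue.pop(0)
        if url in visited:
            continue
        visited.add(url)
        page_data, links = await scrape_page(url, visited, base_domain)
        if page_data:
            results.append(page_data)
        queue.extend(link for link in links if link not in visited)
    return results


if __name__ == "__main__":
    # Replace the seed URL with a real target before running.
    pages = asyncio.run(crawl("https://example.com"))
    logging.info(f"Crawled {len(pages)} pages")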