Dataset Viewer
Columns: hash (string, 40 chars) · date (2016-08-27 00:05:04 – 2025-03-25 06:07:18) · author (65 distinct values) · commit_message (string, 4–13.5k chars) · is_merge (bool) · git_diff (string, 97–12M chars) · type (16 distinct values) · masked_commit_message (string, 4–13.5k chars)

hash | date | author | commit_message | is_merge | git_diff | type | masked_commit_message |
---|---|---|---|---|---|---|---|
c39dccbb67e130beb3efcd7b30fcf83352cc692b | 2019-06-25 15:21:26 | Ondra Urban | Fix image release git repo | false | diff --git a/.travis/release-images-beta.js b/.travis/release-images-beta.js
index 91574b345a6f..d9a68a44045d 100644
--- a/.travis/release-images-beta.js
+++ b/.travis/release-images-beta.js
@@ -78,7 +78,7 @@ if (dryRun) {
}
log('Adding new origin with token.');
-execGitCommand(`remote add origin-token https://${process.env.GH_TOKEN}@github.com/apifytech/apify-js > /dev/null 2>&1`);
+execGitCommand(`remote add origin-token https://${process.env.GH_TOKEN}@github.com/apifytech/apify-actor-docker > /dev/null 2>&1`);
log('Pushing changes to remote.');
execGitCommand(`push --set-upstream origin-token master`); | unknown | Fix image release git repo |
4518cdf3c78d2048a807d2cfae8f2e0dd5d71c9c | 2019-03-26 00:17:11 | Ondra Urban | Update tutorial | false | diff --git a/docs/guides/getting_started.md b/docs/guides/getting_started.md
index e967d8f06558..cb564abb1db5 100644
--- a/docs/guides/getting_started.md
+++ b/docs/guides/getting_started.md
@@ -54,10 +54,12 @@ Let's try that!
cd my-new-project
```
```bash
-apify run
+apify run -p
```
-You should start seeing log messages in the terminal as the system boots up and after a second, a Chromium browser window should pop up. In the window, you'll see quickly changing pages and back in the terminal, you should see the titles (contents of the <title> HTML tags) of the pages printed.
+> The `-p` flag is great to remember, because it stands for `--purge` and it clears out your persistent storages before starting the actor. `INPUT.json` and named storages are kept. Whenever you're just restarting your actor and you're not interested in the data of the previous run, you should use `apify run -p` to prevent the old state from messing with your current run. If this is confusing, don't worry. You'll learn about storages and `INPUT.json` soon.
+
+You should start seeing log messages in the terminal as the system boots up and after a second, a Chromium browser window should pop up. In the window, you'll see quickly changing pages and back in the terminal, you should see the titles (contents of the `<title>` HTML tags) of the pages printed.
You can always terminate the crawl with a keypress in the terminal:
@@ -75,7 +77,7 @@ It comes with a free account, so let's go to our
<a href="https://my.apify.com/sign-up" target="_blank">sign-up page</a>
and create one, if you haven't already. Don't forget to verify your email. Without it, you won't be able to run any projects.
-Once you're in, you might be prompted by our in-app help to walk through a step-by-step guide into some of our new features. Feel free to finish that, if you'd like, but once you're done, click on the **Actors** tab in the left menu. You might be tempted to go directly to Crawlers, because what the heck are **Actors**, right? Bear with me, **Actors** are the tool that you really want! To read more about them, see: [What is an Actor](./whatisanactor).
+Once you're in, you might be prompted by our in-app help to walk through a step-by-step guide into some of our new features. Feel free to finish that, if you'd like, but once you're done, click on the **Actors** tab in the left menu. To read more about **Actors**, see: [What is an Actor](./whatisanactor).
### Creating a new project
In the page that shows after clicking on Actors in the left menu, choose **Create new**. Give it a name in the form that opens, let's say, `my-new-actor`. Disregard all the available options for now and save your changes.
@@ -741,16 +743,14 @@ Apify.main(async () => {
'https://apify.com/library?type=acts&category=ENTERTAINMENT'
];
- const requestList = await Apify.openRequestList('x', sources);
- const requestQueue = await Apify.openRequestQueue();
+ const requestList = await Apify.openRequestList('categories', sources);
const crawler = new Apify.CheerioCrawler({
- maxRequestsPerCrawl: 10,
requestList,
- requestQueue,
handlePageFunction: async ({ $, request }) => {
$('.item').each((i, el) => { // <---- Select all the actor cards.
- console.log($(el).text());
+ const text = $(el).text();
+ console.log(`ITEM: ${text}\n`);
})
}
});
@@ -830,9 +830,6 @@ const handlePageFunction = async ({ $, request }) => {
The code should look pretty familiar to you. It's a very simple `handlePageFunction` where we log the currently processed URL to the console and enqueue more links. But there are also a few new, interesting additions. Let's break it down.
-##### The `response` parameter of the `handlePageFunction()`
-You might have noticed that we've added a third parameter to the function's definition - `response`. The response is actually the HTTP response object that our crawler received when making the HTTP request to the target website. It contains useful information like the HTTP `statusCode` and the original HTTP `request` we've made to the website. Note that this is NOT the same object as our own `Request` instance, that's available as the `request` parameter of the `handlePageFunction`. The `response.request` object is a low level representation provided by the underlying HTTP client library and is not provided by Apify SDK.
-
##### The `selector` parameter of `enqueueLinks()`
When we previously used `enqueueLinks()`, we were not providing any `selector` parameter and it was fine, because we wanted to use the default setting, which is `a` - finds all `<a>` elements. But now, we need to be more specific. There are multiple `<a>` links on the given category page, but we're only interested in those that will take us to item (actor) details. Using the DevTools, we found out that we can select the links we wanted using the `a.item` selector, which selects all the `<a class="item ...">` elements. And those are exactly the ones we're interested in.
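As a minimal sketch of that call, the snippet below narrows `enqueueLinks()` to the actor-card links only. It assumes the same Apify SDK version and `CheerioCrawler` setup as the tutorial code in this diff; the category start URL is purely illustrative.

```js
const Apify = require('apify');

Apify.main(async () => {
    const requestQueue = await Apify.openRequestQueue();
    // Illustrative start URL - one of the category pages used in the tutorial.
    await requestQueue.addRequest({ url: 'https://apify.com/library?type=acts&category=TRAVEL' });

    const crawler = new Apify.CheerioCrawler({
        maxRequestsPerCrawl: 50,
        requestQueue,
        handlePageFunction: async ({ $, request }) => {
            // Only the <a class="item ..."> elements lead to actor detail pages,
            // so the selector limits what gets enqueued.
            await Apify.utils.enqueueLinks({
                $,
                requestQueue,
                selector: 'a.item',
                baseUrl: request.loadedUrl,
            });
        },
    });

    await crawler.run();
});
```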
@@ -840,12 +837,12 @@ When we previously used `enqueueLinks()`, we were not providing any `selector` p
Earlier we've learned that `pseudoUrls` are not required and if omitted, all links matching the given `selector` will be enqueued. This is exactly what we need, so we're skipping `pseudoUrls` this time. That does not mean that you can't use `pseudoUrls` together with a custom `selector` though, because you absolutely can!
##### Finally, the `userData` of `enqueueLinks()`
-You will see `userData` used often throughout Apify SDK and it's nothing more than a place to store the user's data on a `Request` instance. You can access it by `request.userData` and it's a plain `Object` that can be used to store anything that needs to survive a single `handlePageFunction()` invocation.
+You will see `userData` used often throughout Apify SDK and it's nothing more than a place to store your own data on a `Request` instance. You can access it by `request.userData` and it's a plain `Object` that can be used to store anything that needs to survive the full life-cycle of the `Request`.
Using the `userData` parameter of `enqueueLinks()` will populate all the `Request` instances it creates and enqueues with the provided data. In our case, we use it to mark the enqueued `Requests` as a `detailPage` so that we can easily differentiate between the category pages and the detail pages.
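Building on the sketch above, the fragment below shows the other direction: reading the flag back from `request.userData` inside the `handlePageFunction`. `Apify` and `requestQueue` are assumed to come from the surrounding crawler setup.

```js
// Assumes `Apify` and `requestQueue` are defined as in the previous sketch.
const handlePageFunction = async ({ $, request }) => {
    if (request.userData.detailPage) {
        // The flag was attached by enqueueLinks() on a category page and
        // survived the full life-cycle of this Request.
        console.log(`Scraping detail page: ${request.url}`);
        return;
    }
    // Category page: enqueue the detail links and tag them via userData.
    await Apify.utils.enqueueLinks({
        $,
        requestQueue,
        selector: 'a.item',
        baseUrl: request.loadedUrl,
        userData: { detailPage: true },
    });
};
```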
#### Another sanity check
-It's always good to work step by step. We have this new enqueueing logic in place, so let's test it out:
+It's always good to work step by step. We have this new enqueueing logic in place and since the previous [Sanity check](#sanity-check) worked only with a `RequestList`, because we were not enqueueing anything so don't forget to add back the `RequestQueue` and `maxRequestsPerCrawl` limit. Let's test it out!
```js
const Apify = require('apify');
@@ -858,14 +855,14 @@ Apify.main(async () => {
'https://apify.com/library?type=acts&category=ENTERTAINMENT'
];
- const requestList = await Apify.openRequestList('x', sources);
- const requestQueue = await Apify.openRequestQueue();
+ const requestList = await Apify.openRequestList('categories', sources);
+ const requestQueue = await Apify.openRequestQueue(); // <----------------
const crawler = new Apify.CheerioCrawler({
- maxRequestsPerCrawl: 50,
+ maxRequestsPerCrawl: 50, // <----------------------------------------
requestList,
- requestQueue,
- handlePageFunction: async ({ $, request, response }) => {
+ requestQueue, // <---------------------------------------------------
+ handlePageFunction: async ({ $, request }) => {
console.log(`Processing ${request.url}`);
// Only enqueue new links from the category pages.
@@ -874,7 +871,7 @@ Apify.main(async () => {
$,
requestQueue,
selector: 'a.item',
- baseUrl: response.request.uri.href,
+ baseUrl: request.loadedUrl,
userData: {
detailPage: true
}
@@ -888,7 +885,7 @@ Apify.main(async () => {
});
```
-There's actually nothing new here. We've only added the `handlePageFunction()` with the `enqueueLinks()` logic from the previous section to the code we've written earlier. As always, try running it in an environment of your choice. You should see the crawler output a number of links to the console, as it crawls the category pages first and then all the links to the actor detail pages it found.
+We've added the `handlePageFunction()` with the `enqueueLinks()` logic from the previous section to the code we've written earlier. As always, try running it in an environment of your choice. You should see the crawler output a number of links to the console, as it crawls the category pages first and then all the links to the actor detail pages it found.
This concludes our Crawling strategy section, because we have taught the crawler to visit all the pages we need. Let's continue with scraping the tasty data.
diff --git a/examples/puppeteer_crawler.js b/examples/puppeteer_crawler.js
index 2cab403ad77f..4693ec98b66c 100644
--- a/examples/puppeteer_crawler.js
+++ b/examples/puppeteer_crawler.js
@@ -1,7 +1,6 @@
/**
* This example demonstrates how to use [`PuppeteerCrawler`](../api/puppeteercrawler)
- * in combination with [`RequestList`](../api/requestlist)
- * and [`RequestQueue`](../api/requestqueue) to recursively scrape the
+ * in combination with [`RequestQueue`](../api/requestqueue) to recursively scrape the
* <a href="https://news.ycombinator.com" target="_blank">Hacker News website</a> using headless Chrome / Puppeteer.
* The crawler starts with a single URL, finds links to next pages,
* enqueues them and continues until no more desired links are available.
@@ -14,23 +13,14 @@
const Apify = require('apify');
Apify.main(async () => {
- // Create and initialize an instance of the RequestList class that contains the start URL.
- const requestList = new Apify.RequestList({
- sources: [
- { url: 'https://news.ycombinator.com/' },
- ],
- });
- await requestList.initialize();
-
// Apify.openRequestQueue() is a factory to get a preconfigured RequestQueue instance.
+ // We add our first request to it - the initial page the crawler will visit.
const requestQueue = await Apify.openRequestQueue();
+ await requestQueue.addRequest({ url: 'https://news.ycombinator.com/' });
// Create an instance of the PuppeteerCrawler class - a crawler
// that automatically loads the URLs in headless Chrome / Puppeteer.
const crawler = new Apify.PuppeteerCrawler({
- // The crawler will first fetch start URLs from the RequestList
- // and then the newly discovered URLs from the RequestQueue
- requestList,
requestQueue,
// Here you can set options that are passed to the Apify.launchPuppeteer() function.
@@ -82,6 +72,9 @@ Apify.main(async () => {
// This function is called if the page processing failed more than maxRequestRetries+1 times.
handleFailedRequestFunction: async ({ request }) => {
console.log(`Request ${request.url} failed too many times`);
+ await Apify.pushData({
+ '#debug': Apify.utils.createRequestDebugInfo(request),
+ });
},
});
diff --git a/website/pages/en/index.js b/website/pages/en/index.js
index 96490698917b..aa7c2f0feefd 100755
--- a/website/pages/en/index.js
+++ b/website/pages/en/index.js
@@ -216,7 +216,7 @@ const TryOut = () => (
' handlePageFunction: async ({ request, page }) => {\n' +
' const title = await page.title();\n' +
' console.log(`Title of ${request.url}: ${title}`);\n' +
- ' await Apify.utils.puppeteer.enqueueLinks({ page, selector: \'a\', pseudoUrls, requestQueue });\n' +
+ ' await Apify.utils.enqueueLinks({ page, selector: \'a\', pseudoUrls, requestQueue });\n' +
' },\n' +
' maxRequestsPerCrawl: 100,\n' +
' maxConcurrency: 10,\n' + | unknown | Update tutorial |
2f5ac46445369d38e0e0db4754bce848c1883992 | 2019-10-15 12:58:39 | Ondra Urban | Improve docs | false | diff --git a/src/session_pool/session.js b/src/session_pool/session.js
index a078f172f291..70bed7d6cc89 100644
--- a/src/session_pool/session.js
+++ b/src/session_pool/session.js
@@ -9,7 +9,7 @@ import EVENTS from './events';
/**
* Class aggregating data for session.
* Sessions are used to store information such as cookies and can be used for generating fingerprints and proxy sessions.
- * You can think of a session as about one specific user.
+ * You can imagine each session as a specific user, with its own cookies, IP (via proxy) and potentially a unique browser fingerprint.
* Session internal state can be enriched with custom user data for example some authorization tokens and specific headers in general.
*/
export class Session { | unknown | Improve docs |
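To illustrate the "session as a specific user" idea from the documentation change above, here is a minimal sketch of a crawler that retires a blocked session. It assumes an SDK version where `CheerioCrawler` already integrates the session pool (`useSessionPool`, `persistCookiesPerSession`) and exposes a `session` object with `markGood()`/`retire()`; the URL and the status-code check are purely illustrative.

```js
const Apify = require('apify');

Apify.main(async () => {
    const requestQueue = await Apify.openRequestQueue();
    await requestQueue.addRequest({ url: 'https://example.com' });

    const crawler = new Apify.CheerioCrawler({
        requestQueue,
        useSessionPool: true, // each request is processed "as" some session/user
        persistCookiesPerSession: true, // cookies are kept per session
        handlePageFunction: async ({ request, session, response }) => {
            if (response.statusCode === 403) {
                // This "user" appears blocked - retire the session so retries
                // pick a fresh one (new cookies, new proxy session).
                session.retire();
                throw new Error(`Blocked at ${request.url}`);
            }
            session.markGood();
            console.log(`Fetched ${request.url} with session ${session.id}`);
        },
    });

    await crawler.run();
});
```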
bca9eabfcdbddcb7a304728da73791c0eb4e0e54 | 2023-11-16 00:26:17 | Apify Release Bot | chore(release): update internal dependencies [skip ci] | false | diff --git a/packages/basic-crawler/package.json b/packages/basic-crawler/package.json
index ad0eeebcdfd5..5f8490a49249 100644
--- a/packages/basic-crawler/package.json
+++ b/packages/basic-crawler/package.json
@@ -48,9 +48,9 @@
"@apify/log": "^2.4.0",
"@apify/timeout": "^0.3.0",
"@apify/utilities": "^2.7.10",
- "@crawlee/core": "^3.6.1",
- "@crawlee/types": "^3.6.1",
- "@crawlee/utils": "^3.6.1",
+ "@crawlee/core": "3.6.1",
+ "@crawlee/types": "3.6.1",
+ "@crawlee/utils": "3.6.1",
"got-scraping": "^4.0.0",
"ow": "^0.28.1",
"tldts": "^6.0.0",
diff --git a/packages/browser-crawler/package.json b/packages/browser-crawler/package.json
index 4506ef4cc8ac..a6246b94bb9e 100644
--- a/packages/browser-crawler/package.json
+++ b/packages/browser-crawler/package.json
@@ -54,10 +54,10 @@
},
"dependencies": {
"@apify/timeout": "^0.3.0",
- "@crawlee/basic": "^3.6.1",
- "@crawlee/browser-pool": "^3.6.1",
- "@crawlee/types": "^3.6.1",
- "@crawlee/utils": "^3.6.1",
+ "@crawlee/basic": "3.6.1",
+ "@crawlee/browser-pool": "3.6.1",
+ "@crawlee/types": "3.6.1",
+ "@crawlee/utils": "3.6.1",
"ow": "^0.28.1",
"tslib": "^2.4.0"
}
diff --git a/packages/browser-pool/package.json b/packages/browser-pool/package.json
index a62acbda8089..1e3a3df37ea9 100644
--- a/packages/browser-pool/package.json
+++ b/packages/browser-pool/package.json
@@ -38,8 +38,8 @@
"dependencies": {
"@apify/log": "^2.4.0",
"@apify/timeout": "^0.3.0",
- "@crawlee/core": "^3.6.1",
- "@crawlee/types": "^3.6.1",
+ "@crawlee/core": "3.6.1",
+ "@crawlee/types": "3.6.1",
"fingerprint-generator": "^2.0.6",
"fingerprint-injector": "^2.0.5",
"lodash.merge": "^4.6.2",
diff --git a/packages/cheerio-crawler/package.json b/packages/cheerio-crawler/package.json
index 9f3395069371..86e56174799a 100644
--- a/packages/cheerio-crawler/package.json
+++ b/packages/cheerio-crawler/package.json
@@ -53,8 +53,8 @@
"access": "public"
},
"dependencies": {
- "@crawlee/http": "^3.6.1",
- "@crawlee/types": "^3.6.1",
+ "@crawlee/http": "3.6.1",
+ "@crawlee/types": "3.6.1",
"cheerio": "^1.0.0-rc.12",
"htmlparser2": "^9.0.0",
"tslib": "^2.4.0"
diff --git a/packages/cli/package.json b/packages/cli/package.json
index a7a2e1b391aa..6c3ddc685dcf 100644
--- a/packages/cli/package.json
+++ b/packages/cli/package.json
@@ -51,7 +51,7 @@
"access": "public"
},
"dependencies": {
- "@crawlee/templates": "^3.6.1",
+ "@crawlee/templates": "3.6.1",
"ansi-colors": "^4.1.3",
"fs-extra": "^11.0.0",
"inquirer": "^8.2.4",
diff --git a/packages/core/package.json b/packages/core/package.json
index b11ea73b8612..f170bab10765 100644
--- a/packages/core/package.json
+++ b/packages/core/package.json
@@ -59,9 +59,9 @@
"@apify/pseudo_url": "^2.0.30",
"@apify/timeout": "^0.3.0",
"@apify/utilities": "^2.7.10",
- "@crawlee/memory-storage": "^3.6.1",
- "@crawlee/types": "^3.6.1",
- "@crawlee/utils": "^3.6.1",
+ "@crawlee/memory-storage": "3.6.1",
+ "@crawlee/types": "3.6.1",
+ "@crawlee/utils": "3.6.1",
"@sapphire/async-queue": "^1.5.0",
"@types/tough-cookie": "^4.0.2",
"@vladfrangu/async_event_emitter": "^2.2.2",
diff --git a/packages/crawlee/package.json b/packages/crawlee/package.json
index 1b640d3fd6f8..19026349d84a 100644
--- a/packages/crawlee/package.json
+++ b/packages/crawlee/package.json
@@ -54,18 +54,18 @@
"access": "public"
},
"dependencies": {
- "@crawlee/basic": "^3.6.1",
- "@crawlee/browser": "^3.6.1",
- "@crawlee/browser-pool": "^3.6.1",
- "@crawlee/cheerio": "^3.6.1",
- "@crawlee/cli": "^3.6.1",
- "@crawlee/core": "^3.6.1",
- "@crawlee/http": "^3.6.1",
- "@crawlee/jsdom": "^3.6.1",
- "@crawlee/linkedom": "^3.6.1",
- "@crawlee/playwright": "^3.6.1",
- "@crawlee/puppeteer": "^3.6.1",
- "@crawlee/utils": "^3.6.1",
+ "@crawlee/basic": "3.6.1",
+ "@crawlee/browser": "3.6.1",
+ "@crawlee/browser-pool": "3.6.1",
+ "@crawlee/cheerio": "3.6.1",
+ "@crawlee/cli": "3.6.1",
+ "@crawlee/core": "3.6.1",
+ "@crawlee/http": "3.6.1",
+ "@crawlee/jsdom": "3.6.1",
+ "@crawlee/linkedom": "3.6.1",
+ "@crawlee/playwright": "3.6.1",
+ "@crawlee/puppeteer": "3.6.1",
+ "@crawlee/utils": "3.6.1",
"import-local": "^3.1.0",
"tslib": "^2.4.0"
},
diff --git a/packages/http-crawler/package.json b/packages/http-crawler/package.json
index df173205ca74..8af1784e7694 100644
--- a/packages/http-crawler/package.json
+++ b/packages/http-crawler/package.json
@@ -55,9 +55,9 @@
"dependencies": {
"@apify/timeout": "^0.3.0",
"@apify/utilities": "^2.7.10",
- "@crawlee/basic": "^3.6.1",
- "@crawlee/types": "^3.6.1",
- "@crawlee/utils": "^3.6.1",
+ "@crawlee/basic": "3.6.1",
+ "@crawlee/types": "3.6.1",
+ "@crawlee/utils": "3.6.1",
"@types/content-type": "^1.1.5",
"cheerio": "^1.0.0-rc.12",
"content-type": "^1.0.4",
diff --git a/packages/jsdom-crawler/package.json b/packages/jsdom-crawler/package.json
index 44ce050c9557..875987741db6 100644
--- a/packages/jsdom-crawler/package.json
+++ b/packages/jsdom-crawler/package.json
@@ -55,8 +55,8 @@
"dependencies": {
"@apify/timeout": "^0.3.0",
"@apify/utilities": "^2.7.10",
- "@crawlee/http": "^3.6.1",
- "@crawlee/types": "^3.6.1",
+ "@crawlee/http": "3.6.1",
+ "@crawlee/types": "3.6.1",
"@types/jsdom": "^21.0.0",
"cheerio": "^1.0.0-rc.12",
"jsdom": "^22.0.0",
diff --git a/packages/linkedom-crawler/package.json b/packages/linkedom-crawler/package.json
index 3484d5b303d0..698bca180626 100644
--- a/packages/linkedom-crawler/package.json
+++ b/packages/linkedom-crawler/package.json
@@ -55,8 +55,8 @@
"dependencies": {
"@apify/timeout": "^0.3.0",
"@apify/utilities": "^2.7.10",
- "@crawlee/http": "^3.6.1",
- "@crawlee/types": "^3.6.1",
+ "@crawlee/http": "3.6.1",
+ "@crawlee/types": "3.6.1",
"linkedom": "^0.16.0",
"ow": "^0.28.2",
"tslib": "^2.4.0"
diff --git a/packages/memory-storage/package.json b/packages/memory-storage/package.json
index cdcf053b18b2..c298f6d872be 100644
--- a/packages/memory-storage/package.json
+++ b/packages/memory-storage/package.json
@@ -49,7 +49,7 @@
},
"dependencies": {
"@apify/log": "^2.4.0",
- "@crawlee/types": "^3.6.1",
+ "@crawlee/types": "3.6.1",
"@sapphire/async-queue": "^1.5.0",
"@sapphire/shapeshift": "^3.0.0",
"content-type": "^1.0.4",
diff --git a/packages/playwright-crawler/package.json b/packages/playwright-crawler/package.json
index 8028bd87a097..f18c20793958 100644
--- a/packages/playwright-crawler/package.json
+++ b/packages/playwright-crawler/package.json
@@ -55,10 +55,10 @@
"dependencies": {
"@apify/datastructures": "^2.0.0",
"@apify/log": "^2.4.0",
- "@crawlee/browser": "^3.6.1",
- "@crawlee/browser-pool": "^3.6.1",
- "@crawlee/types": "^3.6.1",
- "@crawlee/utils": "^3.6.1",
+ "@crawlee/browser": "3.6.1",
+ "@crawlee/browser-pool": "3.6.1",
+ "@crawlee/types": "3.6.1",
+ "@crawlee/utils": "3.6.1",
"cheerio": "^1.0.0-rc.12",
"idcac-playwright": "^0.1.2",
"jquery": "^3.6.0",
diff --git a/packages/puppeteer-crawler/package.json b/packages/puppeteer-crawler/package.json
index d2635e55d9c9..108e257e3e39 100644
--- a/packages/puppeteer-crawler/package.json
+++ b/packages/puppeteer-crawler/package.json
@@ -55,10 +55,10 @@
"dependencies": {
"@apify/datastructures": "^2.0.0",
"@apify/log": "^2.4.0",
- "@crawlee/browser": "^3.6.1",
- "@crawlee/browser-pool": "^3.6.1",
- "@crawlee/types": "^3.6.1",
- "@crawlee/utils": "^3.6.1",
+ "@crawlee/browser": "3.6.1",
+ "@crawlee/browser-pool": "3.6.1",
+ "@crawlee/types": "3.6.1",
+ "@crawlee/utils": "3.6.1",
"cheerio": "^1.0.0-rc.12",
"devtools-protocol": "*",
"idcac-playwright": "^0.1.2",
diff --git a/packages/utils/package.json b/packages/utils/package.json
index 2326791976f1..a82d9a339b1f 100644
--- a/packages/utils/package.json
+++ b/packages/utils/package.json
@@ -49,7 +49,7 @@
"dependencies": {
"@apify/log": "^2.4.0",
"@apify/ps-tree": "^1.2.0",
- "@crawlee/types": "^3.6.1",
+ "@crawlee/types": "3.6.1",
"cheerio": "^1.0.0-rc.12",
"got-scraping": "^4.0.0",
"ow": "^0.28.1",
diff --git a/yarn.lock b/yarn.lock
index 895587ce04fb..c57b81d776c0 100644
--- a/yarn.lock
+++ b/yarn.lock
@@ -377,16 +377,16 @@ __metadata:
languageName: node
linkType: hard
-"@crawlee/basic@npm:^3.6.1, @crawlee/basic@workspace:packages/basic-crawler":
+"@crawlee/basic@npm:3.6.1, @crawlee/basic@workspace:packages/basic-crawler":
version: 0.0.0-use.local
resolution: "@crawlee/basic@workspace:packages/basic-crawler"
dependencies:
"@apify/log": "npm:^2.4.0"
"@apify/timeout": "npm:^0.3.0"
"@apify/utilities": "npm:^2.7.10"
- "@crawlee/core": "npm:^3.6.1"
- "@crawlee/types": "npm:^3.6.1"
- "@crawlee/utils": "npm:^3.6.1"
+ "@crawlee/core": "npm:3.6.1"
+ "@crawlee/types": "npm:3.6.1"
+ "@crawlee/utils": "npm:3.6.1"
got-scraping: "npm:^4.0.0"
ow: "npm:^0.28.1"
tldts: "npm:^6.0.0"
@@ -395,14 +395,14 @@ __metadata:
languageName: unknown
linkType: soft
-"@crawlee/browser-pool@npm:^3.6.1, @crawlee/browser-pool@workspace:packages/browser-pool":
+"@crawlee/browser-pool@npm:3.6.1, @crawlee/browser-pool@workspace:packages/browser-pool":
version: 0.0.0-use.local
resolution: "@crawlee/browser-pool@workspace:packages/browser-pool"
dependencies:
"@apify/log": "npm:^2.4.0"
"@apify/timeout": "npm:^0.3.0"
- "@crawlee/core": "npm:^3.6.1"
- "@crawlee/types": "npm:^3.6.1"
+ "@crawlee/core": "npm:3.6.1"
+ "@crawlee/types": "npm:3.6.1"
fingerprint-generator: "npm:^2.0.6"
fingerprint-injector: "npm:^2.0.5"
lodash.merge: "npm:^4.6.2"
@@ -424,37 +424,37 @@ __metadata:
languageName: unknown
linkType: soft
-"@crawlee/browser@npm:^3.6.1, @crawlee/browser@workspace:packages/browser-crawler":
+"@crawlee/browser@npm:3.6.1, @crawlee/browser@workspace:packages/browser-crawler":
version: 0.0.0-use.local
resolution: "@crawlee/browser@workspace:packages/browser-crawler"
dependencies:
"@apify/timeout": "npm:^0.3.0"
- "@crawlee/basic": "npm:^3.6.1"
- "@crawlee/browser-pool": "npm:^3.6.1"
- "@crawlee/types": "npm:^3.6.1"
- "@crawlee/utils": "npm:^3.6.1"
+ "@crawlee/basic": "npm:3.6.1"
+ "@crawlee/browser-pool": "npm:3.6.1"
+ "@crawlee/types": "npm:3.6.1"
+ "@crawlee/utils": "npm:3.6.1"
ow: "npm:^0.28.1"
tslib: "npm:^2.4.0"
languageName: unknown
linkType: soft
-"@crawlee/cheerio@npm:^3.6.1, @crawlee/cheerio@workspace:packages/cheerio-crawler":
+"@crawlee/cheerio@npm:3.6.1, @crawlee/cheerio@workspace:packages/cheerio-crawler":
version: 0.0.0-use.local
resolution: "@crawlee/cheerio@workspace:packages/cheerio-crawler"
dependencies:
- "@crawlee/http": "npm:^3.6.1"
- "@crawlee/types": "npm:^3.6.1"
+ "@crawlee/http": "npm:3.6.1"
+ "@crawlee/types": "npm:3.6.1"
cheerio: "npm:^1.0.0-rc.12"
htmlparser2: "npm:^9.0.0"
tslib: "npm:^2.4.0"
languageName: unknown
linkType: soft
-"@crawlee/cli@npm:^3.6.1, @crawlee/cli@workspace:packages/cli":
+"@crawlee/cli@npm:3.6.1, @crawlee/cli@workspace:packages/cli":
version: 0.0.0-use.local
resolution: "@crawlee/cli@workspace:packages/cli"
dependencies:
- "@crawlee/templates": "npm:^3.6.1"
+ "@crawlee/templates": "npm:3.6.1"
ansi-colors: "npm:^4.1.3"
fs-extra: "npm:^11.0.0"
inquirer: "npm:^8.2.4"
@@ -466,7 +466,7 @@ __metadata:
languageName: unknown
linkType: soft
-"@crawlee/core@npm:^3.5.6, @crawlee/core@npm:^3.6.1, @crawlee/core@workspace:packages/core":
+"@crawlee/core@npm:3.6.1, @crawlee/core@npm:^3.5.6, @crawlee/core@workspace:packages/core":
version: 0.0.0-use.local
resolution: "@crawlee/core@workspace:packages/core"
dependencies:
@@ -476,9 +476,9 @@ __metadata:
"@apify/pseudo_url": "npm:^2.0.30"
"@apify/timeout": "npm:^0.3.0"
"@apify/utilities": "npm:^2.7.10"
- "@crawlee/memory-storage": "npm:^3.6.1"
- "@crawlee/types": "npm:^3.6.1"
- "@crawlee/utils": "npm:^3.6.1"
+ "@crawlee/memory-storage": "npm:3.6.1"
+ "@crawlee/types": "npm:3.6.1"
+ "@crawlee/utils": "npm:3.6.1"
"@sapphire/async-queue": "npm:^1.5.0"
"@types/tough-cookie": "npm:^4.0.2"
"@vladfrangu/async_event_emitter": "npm:^2.2.2"
@@ -497,15 +497,15 @@ __metadata:
languageName: unknown
linkType: soft
-"@crawlee/http@npm:^3.6.1, @crawlee/http@workspace:packages/http-crawler":
+"@crawlee/http@npm:3.6.1, @crawlee/http@workspace:packages/http-crawler":
version: 0.0.0-use.local
resolution: "@crawlee/http@workspace:packages/http-crawler"
dependencies:
"@apify/timeout": "npm:^0.3.0"
"@apify/utilities": "npm:^2.7.10"
- "@crawlee/basic": "npm:^3.6.1"
- "@crawlee/types": "npm:^3.6.1"
- "@crawlee/utils": "npm:^3.6.1"
+ "@crawlee/basic": "npm:3.6.1"
+ "@crawlee/types": "npm:3.6.1"
+ "@crawlee/utils": "npm:3.6.1"
"@types/content-type": "npm:^1.1.5"
cheerio: "npm:^1.0.0-rc.12"
content-type: "npm:^1.0.4"
@@ -518,14 +518,14 @@ __metadata:
languageName: unknown
linkType: soft
-"@crawlee/jsdom@npm:^3.6.1, @crawlee/jsdom@workspace:packages/jsdom-crawler":
+"@crawlee/jsdom@npm:3.6.1, @crawlee/jsdom@workspace:packages/jsdom-crawler":
version: 0.0.0-use.local
resolution: "@crawlee/jsdom@workspace:packages/jsdom-crawler"
dependencies:
"@apify/timeout": "npm:^0.3.0"
"@apify/utilities": "npm:^2.7.10"
- "@crawlee/http": "npm:^3.6.1"
- "@crawlee/types": "npm:^3.6.1"
+ "@crawlee/http": "npm:3.6.1"
+ "@crawlee/types": "npm:3.6.1"
"@types/jsdom": "npm:^21.0.0"
cheerio: "npm:^1.0.0-rc.12"
jsdom: "npm:^22.0.0"
@@ -534,26 +534,26 @@ __metadata:
languageName: unknown
linkType: soft
-"@crawlee/linkedom@npm:^3.6.1, @crawlee/linkedom@workspace:packages/linkedom-crawler":
+"@crawlee/linkedom@npm:3.6.1, @crawlee/linkedom@workspace:packages/linkedom-crawler":
version: 0.0.0-use.local
resolution: "@crawlee/linkedom@workspace:packages/linkedom-crawler"
dependencies:
"@apify/timeout": "npm:^0.3.0"
"@apify/utilities": "npm:^2.7.10"
- "@crawlee/http": "npm:^3.6.1"
- "@crawlee/types": "npm:^3.6.1"
+ "@crawlee/http": "npm:3.6.1"
+ "@crawlee/types": "npm:3.6.1"
linkedom: "npm:^0.16.0"
ow: "npm:^0.28.2"
tslib: "npm:^2.4.0"
languageName: unknown
linkType: soft
-"@crawlee/memory-storage@npm:^3.6.1, @crawlee/memory-storage@workspace:packages/memory-storage":
+"@crawlee/memory-storage@npm:3.6.1, @crawlee/memory-storage@workspace:packages/memory-storage":
version: 0.0.0-use.local
resolution: "@crawlee/memory-storage@workspace:packages/memory-storage"
dependencies:
"@apify/log": "npm:^2.4.0"
- "@crawlee/types": "npm:^3.6.1"
+ "@crawlee/types": "npm:3.6.1"
"@sapphire/async-queue": "npm:^1.5.0"
"@sapphire/shapeshift": "npm:^3.0.0"
content-type: "npm:^1.0.4"
@@ -565,16 +565,16 @@ __metadata:
languageName: unknown
linkType: soft
-"@crawlee/playwright@npm:^3.6.1, @crawlee/playwright@workspace:packages/playwright-crawler":
+"@crawlee/playwright@npm:3.6.1, @crawlee/playwright@workspace:packages/playwright-crawler":
version: 0.0.0-use.local
resolution: "@crawlee/playwright@workspace:packages/playwright-crawler"
dependencies:
"@apify/datastructures": "npm:^2.0.0"
"@apify/log": "npm:^2.4.0"
- "@crawlee/browser": "npm:^3.6.1"
- "@crawlee/browser-pool": "npm:^3.6.1"
- "@crawlee/types": "npm:^3.6.1"
- "@crawlee/utils": "npm:^3.6.1"
+ "@crawlee/browser": "npm:3.6.1"
+ "@crawlee/browser-pool": "npm:3.6.1"
+ "@crawlee/types": "npm:3.6.1"
+ "@crawlee/utils": "npm:3.6.1"
cheerio: "npm:^1.0.0-rc.12"
idcac-playwright: "npm:^0.1.2"
jquery: "npm:^3.6.0"
@@ -588,16 +588,16 @@ __metadata:
languageName: unknown
linkType: soft
-"@crawlee/puppeteer@npm:^3.6.1, @crawlee/puppeteer@workspace:packages/puppeteer-crawler":
+"@crawlee/puppeteer@npm:3.6.1, @crawlee/puppeteer@workspace:packages/puppeteer-crawler":
version: 0.0.0-use.local
resolution: "@crawlee/puppeteer@workspace:packages/puppeteer-crawler"
dependencies:
"@apify/datastructures": "npm:^2.0.0"
"@apify/log": "npm:^2.4.0"
- "@crawlee/browser": "npm:^3.6.1"
- "@crawlee/browser-pool": "npm:^3.6.1"
- "@crawlee/types": "npm:^3.6.1"
- "@crawlee/utils": "npm:^3.6.1"
+ "@crawlee/browser": "npm:3.6.1"
+ "@crawlee/browser-pool": "npm:3.6.1"
+ "@crawlee/types": "npm:3.6.1"
+ "@crawlee/utils": "npm:3.6.1"
cheerio: "npm:^1.0.0-rc.12"
devtools-protocol: "npm:*"
idcac-playwright: "npm:^0.1.2"
@@ -673,7 +673,7 @@ __metadata:
languageName: unknown
linkType: soft
-"@crawlee/templates@npm:^3.6.1, @crawlee/templates@workspace:packages/templates":
+"@crawlee/templates@npm:3.6.1, @crawlee/templates@workspace:packages/templates":
version: 0.0.0-use.local
resolution: "@crawlee/templates@workspace:packages/templates"
dependencies:
@@ -685,7 +685,7 @@ __metadata:
languageName: unknown
linkType: soft
-"@crawlee/types@npm:^3.3.0, @crawlee/types@npm:^3.5.6, @crawlee/types@npm:^3.6.1, @crawlee/types@workspace:packages/types":
+"@crawlee/types@npm:3.6.1, @crawlee/types@npm:^3.3.0, @crawlee/types@npm:^3.5.6, @crawlee/types@workspace:packages/types":
version: 0.0.0-use.local
resolution: "@crawlee/types@workspace:packages/types"
dependencies:
@@ -693,13 +693,13 @@ __metadata:
languageName: unknown
linkType: soft
-"@crawlee/utils@npm:^3.5.6, @crawlee/utils@npm:^3.6.1, @crawlee/utils@workspace:packages/utils":
+"@crawlee/utils@npm:3.6.1, @crawlee/utils@npm:^3.5.6, @crawlee/utils@workspace:packages/utils":
version: 0.0.0-use.local
resolution: "@crawlee/utils@workspace:packages/utils"
dependencies:
"@apify/log": "npm:^2.4.0"
"@apify/ps-tree": "npm:^1.2.0"
- "@crawlee/types": "npm:^3.6.1"
+ "@crawlee/types": "npm:3.6.1"
cheerio: "npm:^1.0.0-rc.12"
got-scraping: "npm:^4.0.0"
ow: "npm:^0.28.1"
@@ -3966,18 +3966,18 @@ __metadata:
version: 0.0.0-use.local
resolution: "crawlee@workspace:packages/crawlee"
dependencies:
- "@crawlee/basic": "npm:^3.6.1"
- "@crawlee/browser": "npm:^3.6.1"
- "@crawlee/browser-pool": "npm:^3.6.1"
- "@crawlee/cheerio": "npm:^3.6.1"
- "@crawlee/cli": "npm:^3.6.1"
- "@crawlee/core": "npm:^3.6.1"
- "@crawlee/http": "npm:^3.6.1"
- "@crawlee/jsdom": "npm:^3.6.1"
- "@crawlee/linkedom": "npm:^3.6.1"
- "@crawlee/playwright": "npm:^3.6.1"
- "@crawlee/puppeteer": "npm:^3.6.1"
- "@crawlee/utils": "npm:^3.6.1"
+ "@crawlee/basic": "npm:3.6.1"
+ "@crawlee/browser": "npm:3.6.1"
+ "@crawlee/browser-pool": "npm:3.6.1"
+ "@crawlee/cheerio": "npm:3.6.1"
+ "@crawlee/cli": "npm:3.6.1"
+ "@crawlee/core": "npm:3.6.1"
+ "@crawlee/http": "npm:3.6.1"
+ "@crawlee/jsdom": "npm:3.6.1"
+ "@crawlee/linkedom": "npm:3.6.1"
+ "@crawlee/playwright": "npm:3.6.1"
+ "@crawlee/puppeteer": "npm:3.6.1"
+ "@crawlee/utils": "npm:3.6.1"
import-local: "npm:^3.1.0"
tslib: "npm:^2.4.0"
peerDependencies: | chore | update internal dependencies [skip ci] |
eb84ce9ce5540b72d5799b1f66c80938d57bc1cc | 2023-12-05 01:24:47 | Vlad Frangu | fix(MemoryStorage): lock request JSON file when reading to support multiple process crawling (#2215)
Needed for https://github.com/apify/crawlee-parallel-scraping-example | false | diff --git a/packages/memory-storage/src/background-handler/fs-utils.ts b/packages/memory-storage/src/background-handler/fs-utils.ts
index d446189ded52..91368b6365e1 100644
--- a/packages/memory-storage/src/background-handler/fs-utils.ts
+++ b/packages/memory-storage/src/background-handler/fs-utils.ts
@@ -39,8 +39,7 @@ async function updateMetadata(message: BackgroundHandlerUpdateMetadataMessage) {
}
export async function lockAndWrite(filePath: string, data: unknown, stringify = true, retry = 10, timeout = 10): Promise<void> {
- try {
- const release = await lock(filePath, { realpath: false });
+ await lockAndCallback(filePath, async () => {
await new Promise<void>((pResolve, reject) => {
writeFile(filePath, stringify ? JSON.stringify(data, null, '\t') : data as Buffer, (err) => {
if (err) {
@@ -50,11 +49,30 @@ export async function lockAndWrite(filePath: string, data: unknown, stringify =
}
});
});
- await release();
+ }, retry, timeout);
+}
+
+export async function lockAndCallback<Callback extends () => Promise<any>>(
+ filePath: string,
+ callback: Callback,
+ retry = 10,
+ timeout = 10,
+): Promise<Awaited<ReturnType<Callback>>> {
+ let release: (() => Promise<void>) | null = null;
+ try {
+ release = await lock(filePath, { realpath: false });
+
+ return await callback();
} catch (e: any) {
if (e.code === 'ELOCKED' && retry > 0) {
await setTimeout(timeout);
- return lockAndWrite(filePath, data, stringify, retry - 1, timeout * 2);
+ return lockAndCallback(filePath, callback, retry - 1, timeout * 2);
+ }
+
+ throw e;
+ } finally {
+ if (release) {
+ await release();
}
}
}
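As a usage sketch of the `lockAndCallback()` helper introduced above: a reader can run its file access inside the same cross-process lock that writers take, so a request JSON file is never read half-written. The relative import path and the wrapper function are illustrative only; `lockAndCallback()` is an internal helper of `@crawlee/memory-storage`, not a public export.

```js
const { readFile } = require('node:fs/promises');
// Illustrative relative import into the package's internals.
const { lockAndCallback } = require('./background-handler/fs-utils');

// Hypothetical helper: read and parse a request JSON file under the lock.
// While the lock is held, no other crawling process can write the file, and
// ELOCKED errors are retried with exponential back-off (10 retries starting
// at 10 ms by default, per the diff above).
async function readRequestFile(filePath) {
    return lockAndCallback(filePath, async () => {
        return JSON.parse(await readFile(filePath, 'utf-8'));
    });
}
```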
diff --git a/packages/memory-storage/src/fs/common.ts b/packages/memory-storage/src/fs/common.ts
index 13894caf0d02..1a26019fe571 100644
--- a/packages/memory-storage/src/fs/common.ts
+++ b/packages/memory-storage/src/fs/common.ts
@@ -1,5 +1,5 @@
export interface StorageImplementation<T> {
- get(): Promise<T>;
+ get(force?: boolean): Promise<T>;
update(data: T): void | Promise<void>;
delete(): void | Promise<void>;
}
diff --git a/packages/memory-storage/src/fs/request-queue/fs.ts b/packages/memory-storage/src/fs/request-queue/fs.ts
index cfc94eb5039c..a4937e10e1f7 100644
--- a/packages/memory-storage/src/fs/request-queue/fs.ts
+++ b/packages/memory-storage/src/fs/request-queue/fs.ts
@@ -4,7 +4,7 @@ import { dirname, resolve } from 'node:path';
import { AsyncQueue } from '@sapphire/async-queue';
import { ensureDir } from 'fs-extra';
-import { lockAndWrite } from '../../background-handler/fs-utils';
+import { lockAndCallback, lockAndWrite } from '../../background-handler/fs-utils';
import type { InternalRequest } from '../../resource-clients/request-queue';
import type { StorageImplementation } from '../common';
@@ -14,6 +14,7 @@ export class RequestQueueFileSystemEntry implements StorageImplementation<Intern
private filePath: string;
private fsQueue = new AsyncQueue();
private data?: InternalRequest;
+ private directoryExists = false;
/**
* A "sweep" timeout that is created/refreshed whenever this entry is accessed/updated.
@@ -25,20 +26,22 @@ export class RequestQueueFileSystemEntry implements StorageImplementation<Intern
this.filePath = resolve(options.storeDirectory, `${options.requestId}.json`);
}
- async get() {
+ async get(force = false) {
await this.fsQueue.wait();
this.setOrRefreshSweepTimeout();
- if (this.data) {
+ if (this.data && !force) {
this.fsQueue.shift();
return this.data;
}
try {
- const req = JSON.parse(await readFile(this.filePath, 'utf-8'));
- this.data = req;
+ return await lockAndCallback(this.filePath, async () => {
+ const req = JSON.parse(await readFile(this.filePath, 'utf-8'));
+ this.data = req;
- return req;
+ return req;
+ });
} finally {
this.fsQueue.shift();
}
@@ -49,7 +52,11 @@ export class RequestQueueFileSystemEntry implements StorageImplementation<Intern
this.data = data;
try {
- await ensureDir(dirname(this.filePath));
+ if (!this.directoryExists) {
+ await ensureDir(dirname(this.filePath));
+ this.directoryExists = true;
+ }
+
await lockAndWrite(this.filePath, data);
} finally {
this.setOrRefreshSweepTimeout();
diff --git a/packages/memory-storage/src/resource-clients/request-queue.ts b/packages/memory-storage/src/resource-clients/request-queue.ts
index 2e2d0589a0b3..412ec4441290 100644
--- a/packages/memory-storage/src/resource-clients/request-queue.ts
+++ b/packages/memory-storage/src/resource-clients/request-queue.ts
@@ -200,7 +200,8 @@ export class RequestQueueClient extends BaseClient implements storage.RequestQue
break;
}
- const request = await storageEntry.get();
+ // Always fetch from fs, as this also locks and we do not want to end up in a state where another process locked the request but we have cached it as unlocked
+ const request = await storageEntry.get(true);
if (isLocked(request)) {
continue; | fix | lock request JSON file when reading to support multiple process crawling (#2215)
Needed for https://github.com/apify/crawlee-parallel-scraping-example |
29e25890462cffe368ae8cdadec3440e0eb796ee | 2024-12-04 14:57:02 | Apify Release Bot | chore(release): update internal dependencies [skip ci] | false | diff --git a/packages/basic-crawler/package.json b/packages/basic-crawler/package.json
index 26415d85917d..f4c13dffaf74 100644
--- a/packages/basic-crawler/package.json
+++ b/packages/basic-crawler/package.json
@@ -48,9 +48,9 @@
"@apify/log": "^2.4.0",
"@apify/timeout": "^0.3.0",
"@apify/utilities": "^2.7.10",
- "@crawlee/core": "^3.12.1",
- "@crawlee/types": "^3.12.1",
- "@crawlee/utils": "^3.12.1",
+ "@crawlee/core": "3.12.1",
+ "@crawlee/types": "3.12.1",
+ "@crawlee/utils": "3.12.1",
"csv-stringify": "^6.2.0",
"fs-extra": "^11.0.0",
"got-scraping": "^4.0.0",
diff --git a/packages/browser-crawler/package.json b/packages/browser-crawler/package.json
index dc0bd310ba65..89892bc47d2b 100644
--- a/packages/browser-crawler/package.json
+++ b/packages/browser-crawler/package.json
@@ -54,10 +54,10 @@
},
"dependencies": {
"@apify/timeout": "^0.3.0",
- "@crawlee/basic": "^3.12.1",
- "@crawlee/browser-pool": "^3.12.1",
- "@crawlee/types": "^3.12.1",
- "@crawlee/utils": "^3.12.1",
+ "@crawlee/basic": "3.12.1",
+ "@crawlee/browser-pool": "3.12.1",
+ "@crawlee/types": "3.12.1",
+ "@crawlee/utils": "3.12.1",
"ow": "^0.28.1",
"tslib": "^2.4.0",
"type-fest": "^4.0.0"
diff --git a/packages/browser-pool/package.json b/packages/browser-pool/package.json
index eff956819e23..2c1f2f86b9eb 100644
--- a/packages/browser-pool/package.json
+++ b/packages/browser-pool/package.json
@@ -38,8 +38,8 @@
"dependencies": {
"@apify/log": "^2.4.0",
"@apify/timeout": "^0.3.0",
- "@crawlee/core": "^3.12.1",
- "@crawlee/types": "^3.12.1",
+ "@crawlee/core": "3.12.1",
+ "@crawlee/types": "3.12.1",
"fingerprint-generator": "^2.0.6",
"fingerprint-injector": "^2.0.5",
"lodash.merge": "^4.6.2",
diff --git a/packages/cheerio-crawler/package.json b/packages/cheerio-crawler/package.json
index 4b03f758d9b3..24ce41706756 100644
--- a/packages/cheerio-crawler/package.json
+++ b/packages/cheerio-crawler/package.json
@@ -53,9 +53,9 @@
"access": "public"
},
"dependencies": {
- "@crawlee/http": "^3.12.1",
- "@crawlee/types": "^3.12.1",
- "@crawlee/utils": "^3.12.1",
+ "@crawlee/http": "3.12.1",
+ "@crawlee/types": "3.12.1",
+ "@crawlee/utils": "3.12.1",
"cheerio": "1.0.0-rc.12",
"htmlparser2": "^9.0.0",
"tslib": "^2.4.0"
diff --git a/packages/cli/package.json b/packages/cli/package.json
index 379ba69cad2e..2a7cfbf20ccb 100644
--- a/packages/cli/package.json
+++ b/packages/cli/package.json
@@ -51,7 +51,7 @@
"access": "public"
},
"dependencies": {
- "@crawlee/templates": "^3.12.1",
+ "@crawlee/templates": "3.12.1",
"ansi-colors": "^4.1.3",
"fs-extra": "^11.0.0",
"inquirer": "^8.2.4",
diff --git a/packages/core/package.json b/packages/core/package.json
index fcd87bd4fdc5..0f4065d48879 100644
--- a/packages/core/package.json
+++ b/packages/core/package.json
@@ -59,9 +59,9 @@
"@apify/pseudo_url": "^2.0.30",
"@apify/timeout": "^0.3.0",
"@apify/utilities": "^2.7.10",
- "@crawlee/memory-storage": "^3.12.1",
- "@crawlee/types": "^3.12.1",
- "@crawlee/utils": "^3.12.1",
+ "@crawlee/memory-storage": "3.12.1",
+ "@crawlee/types": "3.12.1",
+ "@crawlee/utils": "3.12.1",
"@sapphire/async-queue": "^1.5.1",
"@vladfrangu/async_event_emitter": "^2.2.2",
"csv-stringify": "^6.2.0",
diff --git a/packages/crawlee/package.json b/packages/crawlee/package.json
index 4c70fb896a11..4bca7ed22a8b 100644
--- a/packages/crawlee/package.json
+++ b/packages/crawlee/package.json
@@ -54,18 +54,18 @@
"access": "public"
},
"dependencies": {
- "@crawlee/basic": "^3.12.1",
- "@crawlee/browser": "^3.12.1",
- "@crawlee/browser-pool": "^3.12.1",
- "@crawlee/cheerio": "^3.12.1",
- "@crawlee/cli": "^3.12.1",
- "@crawlee/core": "^3.12.1",
- "@crawlee/http": "^3.12.1",
- "@crawlee/jsdom": "^3.12.1",
- "@crawlee/linkedom": "^3.12.1",
- "@crawlee/playwright": "^3.12.1",
- "@crawlee/puppeteer": "^3.12.1",
- "@crawlee/utils": "^3.12.1",
+ "@crawlee/basic": "3.12.1",
+ "@crawlee/browser": "3.12.1",
+ "@crawlee/browser-pool": "3.12.1",
+ "@crawlee/cheerio": "3.12.1",
+ "@crawlee/cli": "3.12.1",
+ "@crawlee/core": "3.12.1",
+ "@crawlee/http": "3.12.1",
+ "@crawlee/jsdom": "3.12.1",
+ "@crawlee/linkedom": "3.12.1",
+ "@crawlee/playwright": "3.12.1",
+ "@crawlee/puppeteer": "3.12.1",
+ "@crawlee/utils": "3.12.1",
"import-local": "^3.1.0",
"tslib": "^2.4.0"
},
diff --git a/packages/http-crawler/package.json b/packages/http-crawler/package.json
index 363f05edff69..f3c469ebecca 100644
--- a/packages/http-crawler/package.json
+++ b/packages/http-crawler/package.json
@@ -55,9 +55,9 @@
"dependencies": {
"@apify/timeout": "^0.3.0",
"@apify/utilities": "^2.7.10",
- "@crawlee/basic": "^3.12.1",
- "@crawlee/types": "^3.12.1",
- "@crawlee/utils": "^3.12.1",
+ "@crawlee/basic": "3.12.1",
+ "@crawlee/types": "3.12.1",
+ "@crawlee/utils": "3.12.1",
"@types/content-type": "^1.1.5",
"cheerio": "1.0.0-rc.12",
"content-type": "^1.0.4",
diff --git a/packages/jsdom-crawler/package.json b/packages/jsdom-crawler/package.json
index 20bb0cb70cca..63393d49fa75 100644
--- a/packages/jsdom-crawler/package.json
+++ b/packages/jsdom-crawler/package.json
@@ -55,9 +55,9 @@
"dependencies": {
"@apify/timeout": "^0.3.0",
"@apify/utilities": "^2.7.10",
- "@crawlee/http": "^3.12.1",
- "@crawlee/types": "^3.12.1",
- "@crawlee/utils": "^3.12.1",
+ "@crawlee/http": "3.12.1",
+ "@crawlee/types": "3.12.1",
+ "@crawlee/utils": "3.12.1",
"@types/jsdom": "^21.0.0",
"cheerio": "1.0.0-rc.12",
"jsdom": "^25.0.0",
diff --git a/packages/linkedom-crawler/package.json b/packages/linkedom-crawler/package.json
index 1f7af9306563..6f6f274df018 100644
--- a/packages/linkedom-crawler/package.json
+++ b/packages/linkedom-crawler/package.json
@@ -55,8 +55,8 @@
"dependencies": {
"@apify/timeout": "^0.3.0",
"@apify/utilities": "^2.7.10",
- "@crawlee/http": "^3.12.1",
- "@crawlee/types": "^3.12.1",
+ "@crawlee/http": "3.12.1",
+ "@crawlee/types": "3.12.1",
"linkedom": "^0.18.0",
"ow": "^0.28.2",
"tslib": "^2.4.0"
diff --git a/packages/memory-storage/package.json b/packages/memory-storage/package.json
index 38eacc7d2a2e..d9f374a913e6 100644
--- a/packages/memory-storage/package.json
+++ b/packages/memory-storage/package.json
@@ -49,7 +49,7 @@
},
"dependencies": {
"@apify/log": "^2.4.0",
- "@crawlee/types": "^3.12.1",
+ "@crawlee/types": "3.12.1",
"@sapphire/async-queue": "^1.5.0",
"@sapphire/shapeshift": "^3.0.0",
"content-type": "^1.0.4",
diff --git a/packages/playwright-crawler/package.json b/packages/playwright-crawler/package.json
index 9382f344e142..be0d8ca627da 100644
--- a/packages/playwright-crawler/package.json
+++ b/packages/playwright-crawler/package.json
@@ -56,11 +56,11 @@
"@apify/datastructures": "^2.0.0",
"@apify/log": "^2.4.0",
"@apify/timeout": "^0.3.1",
- "@crawlee/browser": "^3.12.1",
- "@crawlee/browser-pool": "^3.12.1",
- "@crawlee/core": "^3.12.1",
- "@crawlee/types": "^3.12.1",
- "@crawlee/utils": "^3.12.1",
+ "@crawlee/browser": "3.12.1",
+ "@crawlee/browser-pool": "3.12.1",
+ "@crawlee/core": "3.12.1",
+ "@crawlee/types": "3.12.1",
+ "@crawlee/utils": "3.12.1",
"cheerio": "1.0.0-rc.12",
"idcac-playwright": "^0.1.2",
"jquery": "^3.6.0",
diff --git a/packages/puppeteer-crawler/package.json b/packages/puppeteer-crawler/package.json
index 1f4fb473d461..00dbc42fee33 100644
--- a/packages/puppeteer-crawler/package.json
+++ b/packages/puppeteer-crawler/package.json
@@ -55,10 +55,10 @@
"dependencies": {
"@apify/datastructures": "^2.0.0",
"@apify/log": "^2.4.0",
- "@crawlee/browser": "^3.12.1",
- "@crawlee/browser-pool": "^3.12.1",
- "@crawlee/types": "^3.12.1",
- "@crawlee/utils": "^3.12.1",
+ "@crawlee/browser": "3.12.1",
+ "@crawlee/browser-pool": "3.12.1",
+ "@crawlee/types": "3.12.1",
+ "@crawlee/utils": "3.12.1",
"cheerio": "1.0.0-rc.12",
"devtools-protocol": "*",
"idcac-playwright": "^0.1.2",
diff --git a/packages/utils/package.json b/packages/utils/package.json
index e42f599deb86..7c4eb52deb9a 100644
--- a/packages/utils/package.json
+++ b/packages/utils/package.json
@@ -49,7 +49,7 @@
"dependencies": {
"@apify/log": "^2.4.0",
"@apify/ps-tree": "^1.2.0",
- "@crawlee/types": "^3.12.1",
+ "@crawlee/types": "3.12.1",
"@types/sax": "^1.2.7",
"cheerio": "1.0.0-rc.12",
"file-type": "^19.0.0",
diff --git a/yarn.lock b/yarn.lock
index f9125e9b068e..ca64e3a2c273 100644
--- a/yarn.lock
+++ b/yarn.lock
@@ -465,16 +465,16 @@ __metadata:
languageName: node
linkType: hard
-"@crawlee/basic@npm:^3.12.1, @crawlee/basic@workspace:packages/basic-crawler":
+"@crawlee/basic@npm:3.12.1, @crawlee/basic@workspace:packages/basic-crawler":
version: 0.0.0-use.local
resolution: "@crawlee/basic@workspace:packages/basic-crawler"
dependencies:
"@apify/log": "npm:^2.4.0"
"@apify/timeout": "npm:^0.3.0"
"@apify/utilities": "npm:^2.7.10"
- "@crawlee/core": "npm:^3.12.1"
- "@crawlee/types": "npm:^3.12.1"
- "@crawlee/utils": "npm:^3.12.1"
+ "@crawlee/core": "npm:3.12.1"
+ "@crawlee/types": "npm:3.12.1"
+ "@crawlee/utils": "npm:3.12.1"
csv-stringify: "npm:^6.2.0"
fs-extra: "npm:^11.0.0"
got-scraping: "npm:^4.0.0"
@@ -485,14 +485,14 @@ __metadata:
languageName: unknown
linkType: soft
-"@crawlee/browser-pool@npm:^3.12.1, @crawlee/browser-pool@workspace:packages/browser-pool":
+"@crawlee/browser-pool@npm:3.12.1, @crawlee/browser-pool@workspace:packages/browser-pool":
version: 0.0.0-use.local
resolution: "@crawlee/browser-pool@workspace:packages/browser-pool"
dependencies:
"@apify/log": "npm:^2.4.0"
"@apify/timeout": "npm:^0.3.0"
- "@crawlee/core": "npm:^3.12.1"
- "@crawlee/types": "npm:^3.12.1"
+ "@crawlee/core": "npm:3.12.1"
+ "@crawlee/types": "npm:3.12.1"
fingerprint-generator: "npm:^2.0.6"
fingerprint-injector: "npm:^2.0.5"
lodash.merge: "npm:^4.6.2"
@@ -514,15 +514,15 @@ __metadata:
languageName: unknown
linkType: soft
-"@crawlee/browser@npm:^3.12.1, @crawlee/browser@workspace:packages/browser-crawler":
+"@crawlee/browser@npm:3.12.1, @crawlee/browser@workspace:packages/browser-crawler":
version: 0.0.0-use.local
resolution: "@crawlee/browser@workspace:packages/browser-crawler"
dependencies:
"@apify/timeout": "npm:^0.3.0"
- "@crawlee/basic": "npm:^3.12.1"
- "@crawlee/browser-pool": "npm:^3.12.1"
- "@crawlee/types": "npm:^3.12.1"
- "@crawlee/utils": "npm:^3.12.1"
+ "@crawlee/basic": "npm:3.12.1"
+ "@crawlee/browser-pool": "npm:3.12.1"
+ "@crawlee/types": "npm:3.12.1"
+ "@crawlee/utils": "npm:3.12.1"
ow: "npm:^0.28.1"
tslib: "npm:^2.4.0"
type-fest: "npm:^4.0.0"
@@ -537,24 +537,24 @@ __metadata:
languageName: unknown
linkType: soft
-"@crawlee/cheerio@npm:^3.12.1, @crawlee/cheerio@workspace:packages/cheerio-crawler":
+"@crawlee/cheerio@npm:3.12.1, @crawlee/cheerio@workspace:packages/cheerio-crawler":
version: 0.0.0-use.local
resolution: "@crawlee/cheerio@workspace:packages/cheerio-crawler"
dependencies:
- "@crawlee/http": "npm:^3.12.1"
- "@crawlee/types": "npm:^3.12.1"
- "@crawlee/utils": "npm:^3.12.1"
+ "@crawlee/http": "npm:3.12.1"
+ "@crawlee/types": "npm:3.12.1"
+ "@crawlee/utils": "npm:3.12.1"
cheerio: "npm:1.0.0-rc.12"
htmlparser2: "npm:^9.0.0"
tslib: "npm:^2.4.0"
languageName: unknown
linkType: soft
-"@crawlee/cli@npm:^3.12.1, @crawlee/cli@workspace:packages/cli":
+"@crawlee/cli@npm:3.12.1, @crawlee/cli@workspace:packages/cli":
version: 0.0.0-use.local
resolution: "@crawlee/cli@workspace:packages/cli"
dependencies:
- "@crawlee/templates": "npm:^3.12.1"
+ "@crawlee/templates": "npm:3.12.1"
ansi-colors: "npm:^4.1.3"
fs-extra: "npm:^11.0.0"
inquirer: "npm:^8.2.4"
@@ -566,7 +566,7 @@ __metadata:
languageName: unknown
linkType: soft
-"@crawlee/core@npm:^3.12.1, @crawlee/core@npm:^3.9.0, @crawlee/core@workspace:packages/core":
+"@crawlee/core@npm:3.12.1, @crawlee/core@npm:^3.9.0, @crawlee/core@workspace:packages/core":
version: 0.0.0-use.local
resolution: "@crawlee/core@workspace:packages/core"
dependencies:
@@ -576,9 +576,9 @@ __metadata:
"@apify/pseudo_url": "npm:^2.0.30"
"@apify/timeout": "npm:^0.3.0"
"@apify/utilities": "npm:^2.7.10"
- "@crawlee/memory-storage": "npm:^3.12.1"
- "@crawlee/types": "npm:^3.12.1"
- "@crawlee/utils": "npm:^3.12.1"
+ "@crawlee/memory-storage": "npm:3.12.1"
+ "@crawlee/types": "npm:3.12.1"
+ "@crawlee/utils": "npm:3.12.1"
"@sapphire/async-queue": "npm:^1.5.1"
"@vladfrangu/async_event_emitter": "npm:^2.2.2"
csv-stringify: "npm:^6.2.0"
@@ -595,15 +595,15 @@ __metadata:
languageName: unknown
linkType: soft
-"@crawlee/http@npm:^3.12.1, @crawlee/http@workspace:packages/http-crawler":
+"@crawlee/http@npm:3.12.1, @crawlee/http@workspace:packages/http-crawler":
version: 0.0.0-use.local
resolution: "@crawlee/http@workspace:packages/http-crawler"
dependencies:
"@apify/timeout": "npm:^0.3.0"
"@apify/utilities": "npm:^2.7.10"
- "@crawlee/basic": "npm:^3.12.1"
- "@crawlee/types": "npm:^3.12.1"
- "@crawlee/utils": "npm:^3.12.1"
+ "@crawlee/basic": "npm:3.12.1"
+ "@crawlee/types": "npm:3.12.1"
+ "@crawlee/utils": "npm:3.12.1"
"@types/content-type": "npm:^1.1.5"
cheerio: "npm:1.0.0-rc.12"
content-type: "npm:^1.0.4"
@@ -616,15 +616,15 @@ __metadata:
languageName: unknown
linkType: soft
-"@crawlee/jsdom@npm:^3.12.1, @crawlee/jsdom@workspace:packages/jsdom-crawler":
+"@crawlee/jsdom@npm:3.12.1, @crawlee/jsdom@workspace:packages/jsdom-crawler":
version: 0.0.0-use.local
resolution: "@crawlee/jsdom@workspace:packages/jsdom-crawler"
dependencies:
"@apify/timeout": "npm:^0.3.0"
"@apify/utilities": "npm:^2.7.10"
- "@crawlee/http": "npm:^3.12.1"
- "@crawlee/types": "npm:^3.12.1"
- "@crawlee/utils": "npm:^3.12.1"
+ "@crawlee/http": "npm:3.12.1"
+ "@crawlee/types": "npm:3.12.1"
+ "@crawlee/utils": "npm:3.12.1"
"@types/jsdom": "npm:^21.0.0"
cheerio: "npm:1.0.0-rc.12"
jsdom: "npm:^25.0.0"
@@ -633,26 +633,26 @@ __metadata:
languageName: unknown
linkType: soft
-"@crawlee/linkedom@npm:^3.12.1, @crawlee/linkedom@workspace:packages/linkedom-crawler":
+"@crawlee/linkedom@npm:3.12.1, @crawlee/linkedom@workspace:packages/linkedom-crawler":
version: 0.0.0-use.local
resolution: "@crawlee/linkedom@workspace:packages/linkedom-crawler"
dependencies:
"@apify/timeout": "npm:^0.3.0"
"@apify/utilities": "npm:^2.7.10"
- "@crawlee/http": "npm:^3.12.1"
- "@crawlee/types": "npm:^3.12.1"
+ "@crawlee/http": "npm:3.12.1"
+ "@crawlee/types": "npm:3.12.1"
linkedom: "npm:^0.18.0"
ow: "npm:^0.28.2"
tslib: "npm:^2.4.0"
languageName: unknown
linkType: soft
-"@crawlee/memory-storage@npm:^3.12.1, @crawlee/memory-storage@workspace:packages/memory-storage":
+"@crawlee/memory-storage@npm:3.12.1, @crawlee/memory-storage@workspace:packages/memory-storage":
version: 0.0.0-use.local
resolution: "@crawlee/memory-storage@workspace:packages/memory-storage"
dependencies:
"@apify/log": "npm:^2.4.0"
- "@crawlee/types": "npm:^3.12.1"
+ "@crawlee/types": "npm:3.12.1"
"@sapphire/async-queue": "npm:^1.5.0"
"@sapphire/shapeshift": "npm:^3.0.0"
content-type: "npm:^1.0.4"
@@ -664,18 +664,18 @@ __metadata:
languageName: unknown
linkType: soft
-"@crawlee/playwright@npm:^3.12.1, @crawlee/playwright@workspace:packages/playwright-crawler":
+"@crawlee/playwright@npm:3.12.1, @crawlee/playwright@workspace:packages/playwright-crawler":
version: 0.0.0-use.local
resolution: "@crawlee/playwright@workspace:packages/playwright-crawler"
dependencies:
"@apify/datastructures": "npm:^2.0.0"
"@apify/log": "npm:^2.4.0"
"@apify/timeout": "npm:^0.3.1"
- "@crawlee/browser": "npm:^3.12.1"
- "@crawlee/browser-pool": "npm:^3.12.1"
- "@crawlee/core": "npm:^3.12.1"
- "@crawlee/types": "npm:^3.12.1"
- "@crawlee/utils": "npm:^3.12.1"
+ "@crawlee/browser": "npm:3.12.1"
+ "@crawlee/browser-pool": "npm:3.12.1"
+ "@crawlee/core": "npm:3.12.1"
+ "@crawlee/types": "npm:3.12.1"
+ "@crawlee/utils": "npm:3.12.1"
cheerio: "npm:1.0.0-rc.12"
idcac-playwright: "npm:^0.1.2"
jquery: "npm:^3.6.0"
@@ -693,16 +693,16 @@ __metadata:
languageName: unknown
linkType: soft
-"@crawlee/puppeteer@npm:^3.12.1, @crawlee/puppeteer@workspace:packages/puppeteer-crawler":
+"@crawlee/puppeteer@npm:3.12.1, @crawlee/puppeteer@workspace:packages/puppeteer-crawler":
version: 0.0.0-use.local
resolution: "@crawlee/puppeteer@workspace:packages/puppeteer-crawler"
dependencies:
"@apify/datastructures": "npm:^2.0.0"
"@apify/log": "npm:^2.4.0"
- "@crawlee/browser": "npm:^3.12.1"
- "@crawlee/browser-pool": "npm:^3.12.1"
- "@crawlee/types": "npm:^3.12.1"
- "@crawlee/utils": "npm:^3.12.1"
+ "@crawlee/browser": "npm:3.12.1"
+ "@crawlee/browser-pool": "npm:3.12.1"
+ "@crawlee/types": "npm:3.12.1"
+ "@crawlee/utils": "npm:3.12.1"
cheerio: "npm:1.0.0-rc.12"
devtools-protocol: "npm:*"
idcac-playwright: "npm:^0.1.2"
@@ -782,7 +782,7 @@ __metadata:
languageName: unknown
linkType: soft
-"@crawlee/templates@npm:^3.12.1, @crawlee/templates@workspace:packages/templates":
+"@crawlee/templates@npm:3.12.1, @crawlee/templates@workspace:packages/templates":
version: 0.0.0-use.local
resolution: "@crawlee/templates@workspace:packages/templates"
dependencies:
@@ -794,7 +794,7 @@ __metadata:
languageName: unknown
linkType: soft
-"@crawlee/types@npm:^3.12.1, @crawlee/types@npm:^3.3.0, @crawlee/types@npm:^3.9.0, @crawlee/types@workspace:packages/types":
+"@crawlee/types@npm:3.12.1, @crawlee/types@npm:^3.3.0, @crawlee/types@npm:^3.9.0, @crawlee/types@workspace:packages/types":
version: 0.0.0-use.local
resolution: "@crawlee/types@workspace:packages/types"
dependencies:
@@ -802,13 +802,13 @@ __metadata:
languageName: unknown
linkType: soft
-"@crawlee/utils@npm:^3.12.1, @crawlee/utils@npm:^3.9.0, @crawlee/utils@workspace:packages/utils":
+"@crawlee/utils@npm:3.12.1, @crawlee/utils@npm:^3.9.0, @crawlee/utils@workspace:packages/utils":
version: 0.0.0-use.local
resolution: "@crawlee/utils@workspace:packages/utils"
dependencies:
"@apify/log": "npm:^2.4.0"
"@apify/ps-tree": "npm:^1.2.0"
- "@crawlee/types": "npm:^3.12.1"
+ "@crawlee/types": "npm:3.12.1"
"@types/sax": "npm:^1.2.7"
"@types/whatwg-mimetype": "npm:^3.0.2"
cheerio: "npm:1.0.0-rc.12"
@@ -4344,18 +4344,18 @@ __metadata:
version: 0.0.0-use.local
resolution: "crawlee@workspace:packages/crawlee"
dependencies:
- "@crawlee/basic": "npm:^3.12.1"
- "@crawlee/browser": "npm:^3.12.1"
- "@crawlee/browser-pool": "npm:^3.12.1"
- "@crawlee/cheerio": "npm:^3.12.1"
- "@crawlee/cli": "npm:^3.12.1"
- "@crawlee/core": "npm:^3.12.1"
- "@crawlee/http": "npm:^3.12.1"
- "@crawlee/jsdom": "npm:^3.12.1"
- "@crawlee/linkedom": "npm:^3.12.1"
- "@crawlee/playwright": "npm:^3.12.1"
- "@crawlee/puppeteer": "npm:^3.12.1"
- "@crawlee/utils": "npm:^3.12.1"
+ "@crawlee/basic": "npm:3.12.1"
+ "@crawlee/browser": "npm:3.12.1"
+ "@crawlee/browser-pool": "npm:3.12.1"
+ "@crawlee/cheerio": "npm:3.12.1"
+ "@crawlee/cli": "npm:3.12.1"
+ "@crawlee/core": "npm:3.12.1"
+ "@crawlee/http": "npm:3.12.1"
+ "@crawlee/jsdom": "npm:3.12.1"
+ "@crawlee/linkedom": "npm:3.12.1"
+ "@crawlee/playwright": "npm:3.12.1"
+ "@crawlee/puppeteer": "npm:3.12.1"
+ "@crawlee/utils": "npm:3.12.1"
import-local: "npm:^3.1.0"
tslib: "npm:^2.4.0"
peerDependencies: | chore | update internal dependencies [skip ci] |
f736fefc0b75b85221ea25d1893ff22ded48e7d8 | 2019-01-22 23:39:20 | Ondra Urban | Update and build docs | false | diff --git a/docs/api/Apify.md b/docs/api/Apify.md
index 6adc46dbdf6b..1c533be0da86 100644
--- a/docs/api/Apify.md
+++ b/docs/api/Apify.md
@@ -725,7 +725,7 @@ a small state object is regularly persisted to keep track of the crawling status
For more details and code examples, see the [`RequestList`](requestlist) class.
-**Example Usage:**
+**Example usage:**
```javascript
const sources = [
diff --git a/docs/api/puppeteer.md b/docs/api/puppeteer.md
index a709b93b6348..435dcec3456f 100644
--- a/docs/api/puppeteer.md
+++ b/docs/api/puppeteer.md
@@ -161,12 +161,34 @@ const escapedHtml = await page.evaluate(() => {
<a name="puppeteer.enqueueLinks"></a>
## `puppeteer.enqueueLinks(options)` ⇒ <code>Promise<Array<QueueOperationInfo>></code>
-Finds HTML elements matching a CSS selector, clicks the elements and if a redirect is triggered and destination URL matches
-one of the provided [`PseudoUrl`](pseudourl)s, then the function enqueues that URL to a given request queue.
-To create a Request object function uses `requestTemplate` from a matching [`PseudoUrl`](pseudourl).
+The function finds HTML anchor (`<a>`) elements matching a specific CSS selector on a Puppeteer page,
+and enqueues the corresponding links to the provided [`RequestQueue`](requestqueue).
+Optionally, the function allows you to filter the target links URLs using an array of [`PseudoUrl`](pseudourl) objects
+and override settings of the enqueued [`Request`](request) objects.
-*WARNING*: This is work in progress. Currently the function doesn't click elements and only takes their `href` attribute and so
- is working only for link (`a`) elements, but not for buttons or JavaScript links.
+*IMPORTANT*: This is a work in progress. Currently the function only supports `<a>` elements with
+`href` attribute point to some URL. However, in the future the function will also support
+JavaScript links, buttons and form submissions.
+
+**Example usage**
+
+```javascript
+const Apify = require('apify');
+
+const browser = await Apify.launchPuppeteer();
+const page = await browser.goto('https://www.example.com');
+const requestQueue = await Apify.openRequestQueue();
+
+await Apify.utils.puppeteer.enqueueLinks({
+ page,
+ requestQueue,
+ selector: 'a.product-detail',
+ pseudoUrls: [
+ 'https://www.example.com/handbags/[.*]'
+ 'https://www.example.com/purses/[.*]'
+ ],
+});
+```
**Returns**: <code>Promise<Array<QueueOperationInfo>></code> - Promise that resolves to an array of [`QueueOperationInfo`](../typedefs/queueoperationinfo) objects.
<table>
@@ -189,29 +211,30 @@ To create a Request object function uses `requestTemplate` from a matching [`Pse
<td><code>options.requestQueue</code></td><td><code><a href="requestqueue">RequestQueue</a></code></td><td></td>
</tr>
<tr>
-<td colspan="3"><p><a href="requestqueue"><code>RequestQueue</code></a> instance where URLs will be enqueued.</p>
+<td colspan="3"><p>A request queue to which the URLs will be enqueued.</p>
</td></tr><tr>
<td><code>[options.selector]</code></td><td><code>String</code></td><td><code>'a'</code></td>
</tr>
<tr>
-<td colspan="3"><p>CSS selector matching elements to be clicked.</p>
+<td colspan="3"><p>A CSS selector matching links to be enqueued.</p>
</td></tr><tr>
<td><code>[options.pseudoUrls]</code></td><td><code><a href="pseudourl">Array<PseudoUrl></a></code> | <code>Array<Object></code> | <code>Array<String></code></td><td></td>
</tr>
<tr>
<td colspan="3"><p>An array of <a href="pseudourl"><code>PseudoUrl</code></a>s matching the URLs to be enqueued,
- or an array of Strings or Objects from which the <a href="pseudourl"><code>PseudoUrl</code></a>s should be constructed
- The Objects must include at least a <code>purl</code> property, which holds a pseudoUrl string.
+ or an array of strings or objects from which the <a href="pseudourl"><code>PseudoUrl</code></a>s can be constructed.
+ The objects must include at least the <code>purl</code> property, which holds the pseudo-URL string.
All remaining keys will be used as the <code>requestTemplate</code> argument of the <a href="pseudourl"><code>PseudoUrl</code></a> constructor.
- If <code>pseudoUrls</code> is an empty array, null or undefined, then the function
+ which lets you specify special properties for the enqueued <a href="request"><code>Request</code></a> objects.</p>
+<p> If <code>pseudoUrls</code> is an empty array, <code>null</code> or <code>undefined</code>, then the function
enqueues all links found on the page.</p>
</td></tr><tr>
<td><code>[options.userData]</code></td><td><code>Object</code></td><td></td>
</tr>
<tr>
-<td colspan="3"><p>Object that will be merged with the new Request's userData, overriding any values that
- were set via templating from pseudoUrls. This is useful when you need to override generic
- userData set by PseudoURL template in specific use cases.
+<td colspan="3"><p>An object that will be merged with the new <a href="request"><code>Request</code></a>'s <code>userData</code>, overriding any values that
+ were set via templating from <code>pseudoUrls</code>. This is useful when you need to override generic
+ <code>userData</code> set by the <a href="pseudourl"><code>PseudoUrl</code></a> template in specific use cases.
<strong>Example:</strong></p>
<pre><code> // pseudoUrl.userData
{
diff --git a/docs/api/social.md b/docs/api/social.md
index 2bd0ca1fec08..830b231aaf60 100644
--- a/docs/api/social.md
+++ b/docs/api/social.md
@@ -365,8 +365,11 @@ extracted from the plain text, which might be very inaccurate.
**Example usage:**
```javascript
-const puppeteer = await Apify.launchPuppeteer();
-await puppeteer.goto('http://www.example.com');
+const Apify = require('apify');
+
+const browser = await Apify.launchPuppeteer();
+const page = await browser.newPage();
+await page.goto('http://www.example.com');
const html = await puppeteer.content();
const result = Apify.utils.social.parseHandlesFromHtml(html);
diff --git a/docs/examples/PuppeteerCrawler.md b/docs/examples/PuppeteerCrawler.md
index e9ffdc7f6b0b..1e62bf287423 100644
--- a/docs/examples/PuppeteerCrawler.md
+++ b/docs/examples/PuppeteerCrawler.md
@@ -72,17 +72,14 @@ Apify.main(async () => {
// Store the results to the default dataset.
await Apify.pushData(data);
- // Find the link to the next page using Puppeteer functions.
- let nextHref;
- try {
- nextHref = await page.$eval('.morelink', el => el.href);
- } catch (err) {
- console.log(`${request.url} is the last page!`);
- return;
- }
-
- // Enqueue the link to the RequestQueue
- await requestQueue.addRequest(new Apify.Request({ url: nextHref }));
+ // Find a link to the next page and enqueue it if it exists.
+ const infos = await Apify.utils.puppeteer.enqueueLinks({
+ page,
+ requestQueue,
+ selector: '.morelink',
+ });
+
+ if (infos.length === 0) console.log(`${request.url} is the last page!`);
},
// This function is called if the page processing failed more than maxRequestRetries+1 times.
diff --git a/docs/typedefs/LaunchPuppeteerOptions.md b/docs/typedefs/LaunchPuppeteerOptions.md
index 747481cd1db0..79d478c74fef 100644
--- a/docs/typedefs/LaunchPuppeteerOptions.md
+++ b/docs/typedefs/LaunchPuppeteerOptions.md
@@ -4,11 +4,11 @@ title: LaunchPuppeteerOptions
---
<a name="LaunchPuppeteerOptions"></a>
-An object representing options passed to the
-[`Apify.launchPuppeteer()`](../api/apify#module_Apify.launchPuppeteer)
-function. In this object, you can pass any options supported by the
-[`puppeteer.launch()`](https://pptr.dev/#?product=Puppeteer&show=api-puppeteerlaunchoptions),
-as well as the additional options provided by Apify SDK listed below.
+Apify extends the launch options of Puppeteer.
+You can use any of the
+<a href="https://pptr.dev/#?product=Puppeteer&show=api-puppeteerlaunchoptions" target="_blank"><code>puppeteer.launch()</code></a>
+options in the [`Apify.launchPuppeteer()`](../api/apify#module_Apify.launchPuppeteer)
+function and in addition, all the options available below.
**Properties**
<table>
@@ -19,6 +19,13 @@ as well as the additional options provided by Apify SDK listed below.
</thead>
<tbody>
<tr>
+<td><code>...</code></td><td></td><td></td>
+</tr>
+<tr>
+<td colspan="3"><p>You can use any of the
+ <a href="https://pptr.dev/#?product=Puppeteer&show=api-puppeteerlaunchoptions" target="_blank"><code>puppeteer.launch()</code></a>
+ options too.</p>
+</td></tr><tr>
<td><code>[proxyUrl]</code></td><td><code>String</code></td><td></td>
</tr>
<tr>
diff --git a/src/puppeteer.js b/src/puppeteer.js
index 173625df8519..34fcd6a59608 100644
--- a/src/puppeteer.js
+++ b/src/puppeteer.js
@@ -21,13 +21,17 @@ const LAUNCH_PUPPETEER_DEFAULT_VIEWPORT = {
};
/**
- * An object representing options passed to the
- * [`Apify.launchPuppeteer()`](../api/apify#module_Apify.launchPuppeteer)
- * function. In this object, you can pass any options supported by the
- * [`puppeteer.launch()`](https://pptr.dev/#?product=Puppeteer&show=api-puppeteerlaunchoptions),
- * as well as the additional options provided by Apify SDK listed below.
+ * Apify extends the launch options of Puppeteer.
+ * You can use any of the
+ * <a href="https://pptr.dev/#?product=Puppeteer&show=api-puppeteerlaunchoptions" target="_blank"><code>puppeteer.launch()</code></a>
+ * options in the [`Apify.launchPuppeteer()`](../api/apify#module_Apify.launchPuppeteer)
+ * function and in addition, all the options available below.
*
* @typedef {Object} LaunchPuppeteerOptions
+ * @property ...
+ * You can use any of the
+ * <a href="https://pptr.dev/#?product=Puppeteer&show=api-puppeteerlaunchoptions" target="_blank"><code>puppeteer.launch()</code></a>
+ * options.
* @property {String} [proxyUrl]
* URL to a HTTP proxy server. It must define the port number,
* and it may also contain proxy username and password. | unknown | Update and build docs |
b57222de25d32fc8368f713e4b5cd540ee2084e8 | 2017-06-16 22:03:46 | Jan Curn | Updated README.md | false | diff --git a/README.md b/README.md
index 9bc4eeb957c9..4b769cb61754 100644
--- a/README.md
+++ b/README.md
@@ -95,9 +95,6 @@ argument called `context` which is an object such as:
```javascript
{
- // Internal port on which the web server is listening
- internalPort: Number,
-
// ID of the act
actId: String,
@@ -124,7 +121,11 @@ argument called `context` which is an object such as:
input: {
body: Object,
contentType: String,
- }
+ },
+
+ // Port on which the act's internal web server is listening.
+ // This is still work in progress, stay tuned.
+ internalPort: Number,
}
``` | unknown | Updated README.md |
6656466fd5ac97a81e9b43c8d4e939da450e4de3 | 2019-07-23 14:02:38 | Ondra Urban | Update CHANGELOG.md | false | diff --git a/CHANGELOG.md b/CHANGELOG.md
index f8fe199c38be..338faee3b524 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,10 +1,7 @@
NEXT
====================
-- Bugfix in BasicCrawler: async calls in `isFinishedFunction` were not awaited
-- Better logging of memory overload errors
- Added `desiredConcurrency` option to `AutoscaledPool` constructor, removed
- unnecessary bound check from the setter property
-- Upgraded `apify-client` to 0.5.22
+ unnecessary bound check from the setter property
0.15.2 / 2019-07-11
====================
@@ -52,7 +49,7 @@ NEXT
0.14.12 / 2019-05-29
====================
- `Snapshotter` will now log critical memory overload warnings at most once per 10 seconds.
-_ Live view snapshots are now made right after navigation finishes, instead of right before page close.
+- Live view snapshots are now made right after navigation finishes, instead of right before page close.
0.14.11 / 2019-05-28
==================== | unknown | Update CHANGELOG.md |
25bc9b26389b2566a2c94d5788b632a122f6e3a1 | 2018-09-19 21:35:09 | davidjohnbarton | Update cheerio_crawler.js | false | diff --git a/examples/cheerio_crawler.js b/examples/cheerio_crawler.js
index 9b7cfdd285e3..a5fed480a9d1 100644
--- a/examples/cheerio_crawler.js
+++ b/examples/cheerio_crawler.js
@@ -8,7 +8,7 @@
const Apify = require('apify');
// Apify.utils contains various utilities, e.g. for logging.
-// Here we turn off logging of unimportant messages.
+// Here we turn off the logging of unimportant messages.
const { log } = Apify.utils;
log.setLevel(log.LEVELS.WARNING);
@@ -25,7 +25,7 @@ Apify.main(async () => {
await requestList.initialize();
// Create an instance of the CheerioCrawler class - a crawler
- // that automatically loads the URLs and parses their HTML using cheerio library.
+ // that automatically loads the URLs and parses their HTML using the cheerio library.
const crawler = new Apify.CheerioCrawler({
// Let the crawler fetch URLs from our list.
requestList,
@@ -75,7 +75,7 @@ Apify.main(async () => {
},
});
- // Run the crawler and wait for its finish.
+ // Run the crawler and wait for it to finish.
await crawler.run();
console.log('Crawler finished.'); | unknown | Update cheerio_crawler.js |
f09e3ed6039d5c5f889b98afcfd00ff4d24fe497 | 2024-12-30 07:16:45 | renovate[bot] | chore(deps): update yarn to v4.6.0 | false | diff --git a/package.json b/package.json
index b425d5406fe1..fdcba34f9a37 100644
--- a/package.json
+++ b/package.json
@@ -114,9 +114,9 @@
"vite-tsconfig-paths": "^4.3.2",
"vitest": "^2.0.0"
},
- "packageManager": "yarn@4.5.3",
+ "packageManager": "yarn@4.6.0",
"volta": {
"node": "22.12.0",
- "yarn": "4.5.3"
+ "yarn": "4.6.0"
}
}
diff --git a/website/package.json b/website/package.json
index 6ff8c8a2aea0..406c09e4b9a4 100644
--- a/website/package.json
+++ b/website/package.json
@@ -57,5 +57,5 @@
"stream-browserify": "^3.0.0",
"unist-util-visit": "^5.0.0"
},
- "packageManager": "yarn@4.5.3"
+ "packageManager": "yarn@4.6.0"
} | chore | update yarn to v4.6.0 |
3deb5e5082a74225e9ed64fbda06b76b9eefdb30 | 2018-03-23 15:39:12 | Jan Curn | Updated README and package.json description | false | diff --git a/README.md b/README.md
index 3ae42239b921..8d8c4137970b 100644
--- a/README.md
+++ b/README.md
@@ -6,8 +6,9 @@
The `apify` NPM package enables development of web scrapers and crawlers,
-either locally or running on <a href="https://www.apify.com/docs/actor" target="_blank">Apify Actor</a>
-- a serverless computing platform that enables execution of arbitrary pieces of code in the cloud. The package provides helper functions to launch web browsers with proxies, access the storage etc. Note that the usage of the package is optional, you can create acts without it.
+either locally or running on <a href="https://www.apify.com/docs/actor" target="_blank">Apify Actor</a> -
+a serverless computing platform that enables execution of arbitrary code in the cloud.
+The package provides helper functions to launch web browsers with proxies, access the storage etc. Note that the usage of the package is optional, you can create acts without it.
Complete documentation of this package is available at https://www.apify.com/docs/sdk/apify-runtime-js/latest
diff --git a/package.json b/package.json
index d7b35e9f514d..8bc83a99082c 100644
--- a/package.json
+++ b/package.json
@@ -1,7 +1,7 @@
{
"name": "apify",
"version": "0.5.16",
- "description": "Apify SDK for JavaScript / Node.js",
+ "description": "Web scraping and automation SDK",
"main": "build/index.js",
"keywords": [
"apify", | unknown | Updated README and package.json description |
b027dc95ec0869797887773d9a93d99bdb42e123 | 2019-07-09 13:42:44 | Ondra Urban | Update version in changelog [skip ci] | false | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 04cdf78dd050..9f2fb698ee4c 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,4 +1,4 @@
-NEXT
+0.15.1 / 2019-07-09
====================
- **BREAKING CHANGE** Removed support for Web Driver (Selenium) since no further updates are planned.
If you wish to continue using Web Driver, please stay on Apify SDK version ^0.14.15 | unknown | Update version in changelog [skip ci] |
d603700e96a63de5c5838e507c1a05f69d401176 | 2021-08-24 22:15:45 | Martin AdΓ‘mek | chore: pin client to 1.3 | false | diff --git a/package.json b/package.json
index 120880f8311d..2db250f7899e 100644
--- a/package.json
+++ b/package.json
@@ -61,7 +61,7 @@
"@types/node": "^15.14.2",
"@types/socket.io": "^2.1.13",
"@types/tough-cookie": "^4.0.1",
- "apify-client": "^1.3.0",
+ "apify-client": "1.3.0",
"browser-pool": "^2.0.0",
"cheerio": "1.0.0-rc.10",
"content-type": "^1.0.4", | chore | pin client to 1.3 |
ba80c87d3e88f1a006f41c30dde7eb646d29d24b | 2024-12-04 18:02:33 | JindΕich BΓ€r | docs: bump `typedoc-api` plugin to mitigate stray React errors (#2762)
Context: https://apify.slack.com/archives/C0L33UM7Z/p1732811764451919 | false | diff --git a/website/package.json b/website/package.json
index c7e5e4b00596..6ff8c8a2aea0 100644
--- a/website/package.json
+++ b/website/package.json
@@ -33,7 +33,7 @@
"typescript": "^5.7.2"
},
"dependencies": {
- "@apify/docusaurus-plugin-typedoc-api": "^4.2.10",
+ "@apify/docusaurus-plugin-typedoc-api": "^4.3.2",
"@apify/utilities": "^2.8.0",
"@docusaurus/core": "3.6.3",
"@docusaurus/faster": "3.6.3",
diff --git a/website/yarn.lock b/website/yarn.lock
index 8007b1eeffd7..2806f71537ff 100644
--- a/website/yarn.lock
+++ b/website/yarn.lock
@@ -365,7 +365,7 @@ __metadata:
languageName: node
linkType: hard
-"@apify/docusaurus-plugin-typedoc-api@npm:^4.2.10":
+"@apify/docusaurus-plugin-typedoc-api@npm:^4.3.2":
version: 4.3.2
resolution: "@apify/docusaurus-plugin-typedoc-api@npm:4.3.2"
dependencies:
@@ -14092,7 +14092,7 @@ __metadata:
version: 0.0.0-use.local
resolution: "root-workspace-0b6124@workspace:."
dependencies:
- "@apify/docusaurus-plugin-typedoc-api": "npm:^4.2.10"
+ "@apify/docusaurus-plugin-typedoc-api": "npm:^4.3.2"
"@apify/eslint-config-ts": "npm:^0.4.0"
"@apify/tsconfig": "npm:^0.1.0"
"@apify/utilities": "npm:^2.8.0" | docs | bump `typedoc-api` plugin to mitigate stray React errors (#2762)
Context: https://apify.slack.com/archives/C0L33UM7Z/p1732811764451919 |
3aa03f0660f3991ba4aad978f45b9ce20e1a7171 | 2018-08-02 13:24:36 | Ondra Urban | Support XML extension in local KeyValueStore | false | diff --git a/src/key_value_store.js b/src/key_value_store.js
index 15f17623dde4..6334765da12f 100644
--- a/src/key_value_store.js
+++ b/src/key_value_store.js
@@ -17,6 +17,7 @@ const LOCAL_FILE_TYPES = [
{ contentType: 'text/plain', extension: 'txt' },
{ contentType: 'image/jpeg', extension: 'jpg' },
{ contentType: 'image/png', extension: 'png' },
+ { contentType: 'application/xml', extension: 'xml' },
];
const DEFAULT_LOCAL_FILE_TYPE = LOCAL_FILE_TYPES[0]; | unknown | Support XML extension in local KeyValueStore |
bc3b98dbcf1a52eac645016a74473f727348d345 | 2024-12-19 16:50:09 | renovate[bot] | chore(deps): update dependency puppeteer to v23.11.1 | false | diff --git a/package.json b/package.json
index 46d26f737e2f..b425d5406fe1 100644
--- a/package.json
+++ b/package.json
@@ -106,7 +106,7 @@
"playwright": "1.49.1",
"portastic": "^1.0.1",
"proxy": "^1.0.2",
- "puppeteer": "23.11.0",
+ "puppeteer": "23.11.1",
"rimraf": "^6.0.0",
"tsx": "^4.4.0",
"turbo": "^2.1.0",
diff --git a/yarn.lock b/yarn.lock
index 0f3aa44cd4f6..5e0d3b1c71c5 100644
--- a/yarn.lock
+++ b/yarn.lock
@@ -772,7 +772,7 @@ __metadata:
playwright: "npm:1.49.1"
portastic: "npm:^1.0.1"
proxy: "npm:^1.0.2"
- puppeteer: "npm:23.11.0"
+ puppeteer: "npm:23.11.1"
rimraf: "npm:^6.0.0"
tsx: "npm:^4.4.0"
turbo: "npm:^2.1.0"
@@ -10589,9 +10589,9 @@ __metadata:
languageName: node
linkType: hard
-"puppeteer-core@npm:23.11.0":
- version: 23.11.0
- resolution: "puppeteer-core@npm:23.11.0"
+"puppeteer-core@npm:23.11.1":
+ version: 23.11.1
+ resolution: "puppeteer-core@npm:23.11.1"
dependencies:
"@puppeteer/browsers": "npm:2.6.1"
chromium-bidi: "npm:0.11.0"
@@ -10599,23 +10599,23 @@ __metadata:
devtools-protocol: "npm:0.0.1367902"
typed-query-selector: "npm:^2.12.0"
ws: "npm:^8.18.0"
- checksum: 10c0/d7b2195f3d2ebd9610bf27c4f56e1f9c74781f0d6443217bd3e6bcb94d98901312e7eaeffcabe2c569e1dc40e558845df058349e9293e16feb05b10d2ac76847
+ checksum: 10c0/6512a3dca8c7bea620219332b84c4442754fead6c5021c26ea395ddc2f84610a54accf185ba1450e02885cb063c2d12f96eb5f18e7e1b6795f3e32a4b8a2102e
languageName: node
linkType: hard
-"puppeteer@npm:23.11.0":
- version: 23.11.0
- resolution: "puppeteer@npm:23.11.0"
+"puppeteer@npm:23.11.1":
+ version: 23.11.1
+ resolution: "puppeteer@npm:23.11.1"
dependencies:
"@puppeteer/browsers": "npm:2.6.1"
chromium-bidi: "npm:0.11.0"
cosmiconfig: "npm:^9.0.0"
devtools-protocol: "npm:0.0.1367902"
- puppeteer-core: "npm:23.11.0"
+ puppeteer-core: "npm:23.11.1"
typed-query-selector: "npm:^2.12.0"
bin:
puppeteer: lib/cjs/puppeteer/node/cli.js
- checksum: 10c0/449e0d7c042f86e5e48814b3122fac0000eda837086bb138622cb1add84dd7573e4a296e548839c6c3e05b970e77734a6ac8a59635ec3d5a86873036cf465535
+ checksum: 10c0/e967f5ce02ab9e0343eb4403f32ab7de8a6dbeffe6b23be8725e112015ae4a60264a554742cf10302434795a8e9ea27ec9b048126fee23750ce24c3b238d2ebc
languageName: node
linkType: hard | chore | update dependency puppeteer to v23.11.1 |
485d83d6fd2cfbeda7d8629ab5b5dc83948c5afb | 2022-08-19 04:17:21 | Martin AdΓ‘mek | docs: remove debugging code to fix build | false | diff --git a/CHANGELOG.md b/CHANGELOG.md
index bc36a368ff75..c012b4460f14 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -420,12 +420,6 @@ const response = await promise;
Previously, you were able to have a browser pool that would mix Puppeteer and Playwright plugins (or even your own custom plugins if you've built any). As of this version, that is no longer allowed, and creating such a browser pool will cause an error to be thrown (it's expected that all plugins that will be used are of the same type).
-:::info Confused?
-
-As an example, this change disallows a pool to mix Puppeteer with Playwright. You can still create pools that use multiple Playwright plugins, each with a different launcher if you want!
-
-:::
-
### Handling requests outside of browser
One small feature worth mentioning is the ability to handle requests with browser crawlers outside the browser. To do that, we can use a combination of `Request.skipNavigation` and `context.sendRequest()`.
@@ -490,14 +484,14 @@ await Actor.main(async () => {
#### Events
-Apify SDK (v2) exports `Apify.events`, which is an `EventEmitter` instance. With Crawlee, the events are managed by <ApiLink to="core/class/EventManager">`EventManager`</ApiLink> class instead. We can either access it via `Actor.eventManager` getter, or use `Actor.on` and `Actor.off` shortcuts instead.
+Apify SDK (v2) exports `Apify.events`, which is an `EventEmitter` instance. With Crawlee, the events are managed by [`EventManager`](https://crawlee.dev/api/core/class/EventManager) class instead. We can either access it via `Actor.eventManager` getter, or use `Actor.on` and `Actor.off` shortcuts instead.
```diff
-Apify.events.on(...);
+Actor.on(...);
```
-> We can also get the <ApiLink to="core/class/EventManager">`EventManager`</ApiLink> instance via `Configuration.getEventManager()`.
+> We can also get the [`EventManager`](https://crawlee.dev/api/core/class/EventManager) instance via `Configuration.getEventManager()`.
In addition to the existing events, we now have an `exit` event fired when calling `Actor.exit()` (which is called at the end of `Actor.main()`). This event allows you to gracefully shut down any resources when `Actor.exit` is called.
diff --git a/website/docusaurus.config.js b/website/docusaurus.config.js
index bd07676d135b..1ee48c0d6ccf 100644
--- a/website/docusaurus.config.js
+++ b/website/docusaurus.config.js
@@ -144,24 +144,22 @@ module.exports = {
docId: 'quick-start/quick-start',
label: 'Docs',
position: 'left',
- activeBaseRegex: 'docs/quick-start',
},
{
type: 'doc',
docId: 'examples/examples',
label: 'Examples',
position: 'left',
- activeBaseRegex: 'docs/examples',
},
{
type: 'custom-api',
to: 'core',
label: 'API',
position: 'left',
- activeBaseRegex: 'api/(?!core/changelog)',
},
{
- to: 'api/core/changelog',
+ type: 'custom-api',
+ to: 'core/changelog',
label: 'Changelog',
position: 'left',
className: 'changelog',
diff --git a/website/src/theme/NavbarItem/ComponentTypes.js b/website/src/theme/NavbarItem/ComponentTypes.js
index ff0e5058a9a2..d871a4b229ba 100644
--- a/website/src/theme/NavbarItem/ComponentTypes.js
+++ b/website/src/theme/NavbarItem/ComponentTypes.js
@@ -6,15 +6,14 @@ import HtmlNavbarItem from '@theme/NavbarItem/HtmlNavbarItem';
import DocSidebarNavbarItem from '@theme/NavbarItem/DocSidebarNavbarItem';
import DocsVersionNavbarItem from '@theme/NavbarItem/DocsVersionNavbarItem';
import DocsVersionDropdownNavbarItem from '@theme/NavbarItem/DocsVersionDropdownNavbarItem';
-import React from 'react';
import { useActiveDocContext } from '@docusaurus/plugin-content-docs/client';
import { useDocsVersion, useLayoutDoc } from '@docusaurus/theme-common/internal';
import useDocusaurusContext from '@docusaurus/useDocusaurusContext';
+import React from 'react';
-const pkg = require('../../../../packages/crawlee/package.json');
+const versions = require('../../../versions.json');
-const [v1, v2] = pkg.version.split('.');
-const stable = [v1, v2].join('.');
+const stable = versions[0];
function DocNavbarItem({
docId,
@@ -41,17 +40,16 @@ function DocNavbarItem({
function ApiNavbarItem(ctx) {
const { activeDoc, activeVersion } = useActiveDocContext();
- const version = useDocsVersion();
+ let version = {};
+
+ try {
+ // eslint-disable-next-line react-hooks/rules-of-hooks
+ version = useDocsVersion();
+ } catch {
+ version.version = stable;
+ }
+
const { siteConfig } = useDocusaurusContext();
- console.log(ctx, activeDoc, activeVersion, activeVersion?.path.replace(/^\/docs/, '/api'));
- console.log(window.location.href, version);
- // const { activeDoc } = useActiveDocContext(docsPluginId);
- // const doc = useLayoutDoc(docId, docsPluginId);
- // console.log(activeDoc, doc, (!!activeDoc?.sidebar && activeDoc.sidebar === doc.sidebar));
- // Draft items are not displayed in the navbar.
- // if (doc === null) {
- // return null;
- // }
if (siteConfig.presets[0][1].docs.disableVersioning || version.version === stable) {
return (
diff --git a/website/static/js/custom.js b/website/static/js/custom.js
index 10f43f191337..bdb2a7fc4346 100644
--- a/website/static/js/custom.js
+++ b/website/static/js/custom.js
@@ -25,24 +25,6 @@ function load() {
});
el.classList.add('api-version-bound');
}
-
- // const navbarLinks = document.querySelectorAll('.navbar a.navbar__link');
- //
- // for (const el of [...navbarLinks].slice(1, 3)) {
- // if (el.classList.contains('api-version-bound')) {
- // continue;
- // }
- //
- // const url = new URL(el.href);
- // const parts = url.pathname.split('/');
- // parts.splice(2, 0, 'next');
- // url.pathname = parts.join('/');
- // el.href = url.href;
- // el.classList.add('api-version-bound');
- // el.addEventListener('click', (e) => {
- // e.stopPropagation();
- // });
- // }
}
setInterval(() => { | docs | remove debugging code to fix build |
3c10dce8f3d03a1eae4bd729093c5abd4c47f8fd | 2018-09-20 19:20:23 | Marek Trunkat | Tweaking autoscaled pool values | false | diff --git a/src/autoscaling/autoscaled_pool.js b/src/autoscaling/autoscaled_pool.js
index 13851568a005..13267f77a134 100644
--- a/src/autoscaling/autoscaled_pool.js
+++ b/src/autoscaling/autoscaled_pool.js
@@ -8,7 +8,7 @@ import SystemStatus from './system_status';
const DEFAULT_OPTIONS = {
maxConcurrency: 1000,
minConcurrency: 1,
- desiredConcurrencyRatio: 0.95,
+ desiredConcurrencyRatio: 0.90,
scaleUpStepRatio: 0.05,
scaleDownStepRatio: 0.05,
maybeRunIntervalSecs: 0.5,
diff --git a/src/autoscaling/snapshotter.js b/src/autoscaling/snapshotter.js
index 8aab6fc187cc..ab8e801dad72 100644
--- a/src/autoscaling/snapshotter.js
+++ b/src/autoscaling/snapshotter.js
@@ -8,10 +8,10 @@ import events from '../events';
const DEFAULT_OPTIONS = {
eventLoopSnapshotIntervalSecs: 0.5,
- maxBlockedMillis: 50,
+ maxBlockedMillis: 50, // 0.05
memorySnapshotIntervalSecs: 1,
maxUsedMemoryRatio: 0.7,
- snapshotHistorySecs: 60,
+ snapshotHistorySecs: 30,
};
/**
diff --git a/src/autoscaling/system_status.js b/src/autoscaling/system_status.js
index 1af5c09d5f63..0eae8f2eb782 100644
--- a/src/autoscaling/system_status.js
+++ b/src/autoscaling/system_status.js
@@ -7,7 +7,7 @@ const DEFAULT_OPTIONS = {
currentHistorySecs: 5, // TODO this should be something like "nowDurationSecs" but it's weird, ideas?
maxMemoryOverloadedRatio: 0.2,
maxEventLoopOverloadedRatio: 0.2,
- maxCpuOverloadedRatio: 0.1,
+ maxCpuOverloadedRatio: 0.2,
};
/** | unknown | Tweaking autoscaled pool values |
52649861245e1f0968768790ace2e48c87e2683a | 2018-11-28 11:51:35 | Aaron Jackson | added getPublicUrl function to key_value_store | false | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 93c36979dca2..0cc1c57d4403 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,6 +1,6 @@
-0.9.14 / 2018-11-26
+xxx
===================
-- Added `_getPublicUrl` function to `KeyValueStoreLocal`
+- Added `getPublicUrl` function to `KeyValueStoreLocal`
0.9.13 / 2018-11-26
=================== | unknown | added getPublicUrl function to key_value_store |
06a0bcd24b9836fee22871535884ce8e99ada517 | 2019-05-21 13:17:22 | Ondra Urban | Unrelated: Rename Library to Store | false | diff --git a/docs/examples/SynchronousRun.md b/docs/examples/SynchronousRun.md
index e26813292f6b..217592d9c778 100644
--- a/docs/examples/SynchronousRun.md
+++ b/docs/examples/SynchronousRun.md
@@ -12,7 +12,7 @@ can be invoked synchronously using a single HTTP request to directly obtain its
<a href="https://apify.com/docs/api/v2#/reference/actors/run-actor-synchronously/without-input" target="_blank">Run actor synchronously</a>
Apify API endpoint. The example is also shared as the
<a href="https://apify.com/apify/example-golden-gate-webcam" target="_blank">apify/example-golden-gate-webcam</a>
-actor in the Apify library, so you can test it directly there simply by sending a POST request to
+actor in the Apify Store, so you can test it directly there simply by sending a POST request to
```http
https://api.apify.com/v2/acts/apify~example-golden-gate-webcam/run-sync?token=[YOUR_API_TOKEN]
```
@@ -48,4 +48,4 @@ Apify.main(async () => {
await Apify.setValue('OUTPUT', imageBuffer, { contentType: 'image/jpeg' });
console.log('Actor finished.');
});
-```
\ No newline at end of file
+```
diff --git a/docs/guides/getting_started.md b/docs/guides/getting_started.md
index abde2dde0a87..ae3a6d74b832 100644
--- a/docs/guides/getting_started.md
+++ b/docs/guides/getting_started.md
@@ -269,7 +269,7 @@ Earlier we said that we would let the crawler:
So let's get to it!
#### Finding new links
-There are numerous approaches to finding links to follow when crawling the web. For our purposes, we will be looking for `<a>` elements that contain the `href` attribute. For example `<a href="https://apify.com/library>This is a link to Apify Library</a>`. To do this, we need to update our Cheerio function.
+There are numerous approaches to finding links to follow when crawling the web. For our purposes, we will be looking for `<a>` elements that contain the `href` attribute. For example `<a href="https://apify.com/store>This is a link to Apify Store</a>`. To do this, we need to update our Cheerio function.
```js
const links = $('a[href]').map((i, el) => $(el).attr('href')).get();
@@ -517,7 +517,7 @@ const options = {
await enqueueLinks(options);
```
-> To break the pseudo-URL string down, we're looking for both `http` and `https` protocols and the links may only lead to `apify.com` domain. The final brackets `[.*]` allow everything, so `apify.com/contact` as well as `apify.com/library` will match. If this is complex to you, we suggest <a href="https://www.regular-expressions.info/tutorial.html" target="_blank">reading a tutorial</a> or two on regular expression syntax.
+> To break the pseudo-URL string down, we're looking for both `http` and `https` protocols and the links may only lead to `apify.com` domain. The final brackets `[.*]` allow everything, so `apify.com/contact` as well as `apify.com/store` will match. If this is complex to you, we suggest <a href="https://www.regular-expressions.info/tutorial.html" target="_blank">reading a tutorial</a> or two on regular expression syntax.
#### Resolving relative URLs with `enqueueLinks()`
@@ -632,8 +632,8 @@ And that's it! No more parsing the links from HTML using Cheerio, filtering them
We hear you young padawan! First, learn how to crawl, you must. Only then, save data, you can!
-###Β Making a library crawler
-Fortunately, we don't have to travel to a galaxy far far away to find a good candidate for learning how to scrape structured data. The <a href="https://apify.com/library" target="_blank">Apify Library</a> is a library of public actors that anyone can grab and use. You can find ready-made solutions for crawling Google Places, Amazon, Google SERPs, Booking, Kickstarter and many other websites. Feel free to check them out! It also poses a great place to practice our jedi scraping skills since it has categories, lists and details. That's almost like our imaginary `online-store.com` from the previous chapter.
+###Β Making a store crawler
+Fortunately, we don't have to travel to a galaxy far far away to find a good candidate for learning how to scrape structured data. The <a href="https://apify.com/store" target="_blank">Apify Store</a> is a store of public actors that anyone can grab and use. You can find ready-made solutions for crawling Google Places, Amazon, Google SERPs, Booking, Kickstarter and many other websites. Feel free to check them out! It also poses a great place to practice our jedi scraping skills since it has categories, lists and details. That's almost like our imaginary `online-store.com` from the previous chapter.
### The importance of having a plan
Sometimes scraping is really straightforward, but most of the times, it really pays out to do a little bit of research first. How is the website structured? Can I scrape it only with HTTP requests (read "with `CheerioCrawler`") or would I need a full browser solution? Are there any anti-scraping protections in place? Do I need to parse the HTML or can I get the data otherwise, such as directly from the website's API. Jakub, one of Apify's founders wrote a <a href="https://blog.apify.com/web-scraping-in-2018-forget-html-use-xhrs-metadata-or-javascript-variables-8167f252439c" target="_blank">great article about all the different techniques</a> and tips and tricks so make sure to check that out!
@@ -659,17 +659,17 @@ We can see that some of the information is available directly on the list page,
Knowing that we will use plain HTTP requests, we immediately know that we won't be able to manipulate the website in any way. We will only be able to go through the HTML it gives us and parse our data from there. This might sound like a huge limitation, but you might be surprised in how effective it might be. Let's get on it!
#### The start URL(s)
-This is where we start our crawl. It's convenient to start as close to our data as possible. For example, it wouldn't make much sense to start at `apify.com` and look for a `library` link there, when we already know that everything we want to extract can be found at the `apify.com/library` page.
+This is where we start our crawl. It's convenient to start as close to our data as possible. For example, it wouldn't make much sense to start at `apify.com` and look for a `store` link there, when we already know that everything we want to extract can be found at the `apify.com/store` page.
-Once we look at the `apify.com/library` page more carefully though, we see that the categories themselves produce URLs that we can use to access those individual categories.
+Once we look at the `apify.com/store` page more carefully though, we see that the categories themselves produce URLs that we can use to access those individual categories.
```
-https://apify.com/library?category=ENTERTAINMENT
+https://apify.com/store?category=ENTERTAINMENT
```
-Should we write down all the category URLs down and use all of them as start URLs? It's definitely possible, but what if a new category appears on the page later? We would not learn about it unless we manually visit the page and inspect it again. So scraping the category links off the library page definitely makes sense. This way we always get an up to date list of categories.
+Should we write down all the category URLs down and use all of them as start URLs? It's definitely possible, but what if a new category appears on the page later? We would not learn about it unless we manually visit the page and inspect it again. So scraping the category links off the store page definitely makes sense. This way we always get an up to date list of categories.
-But is it really that straightforward? By digging further into the library page's HTML we find that it does not actually contain the category links. The menu on the left uses JavaScript to display the items from a given category and, as we've learned earlier, `CheerioCrawler` cannot execute JavaScript.
+But is it really that straightforward? By digging further into the store page's HTML we find that it does not actually contain the category links. The menu on the left uses JavaScript to display the items from a given category and, as we've learned earlier, `CheerioCrawler` cannot execute JavaScript.
> We've deliberately chosen this scenario to show an example of the number one weakness of `CheerioCrawler`. We will overcome this difficulty in our `PuppeteerCrawler` tutorial, but at the cost of compute resources and speed. Always remember that no tool is best for everything!
@@ -678,11 +678,11 @@ So we're back to the pre-selected list of URLs. Since we cannot scrape the list
Therefore, after careful consideration, we've determined that we should use multiple start URLs and that they should look as follows:
```
-https://apify.com/library?type=acts&category=TRAVEL
-https://apify.com/library?type=acts&category=ECOMMERCE
-https://apify.com/library?type=acts&category=ENTERTAINMENT
+https://apify.com/store?type=acts&category=TRAVEL
+https://apify.com/store?type=acts&category=ECOMMERCE
+https://apify.com/store?type=acts&category=ENTERTAINMENT
```
-> The `type=acts` query parameter comes from selecting `Actors only` in the `Show` dropdown. This is in line with us only wanting to scrape actors' data. If you're wondering how we've created these URLs, simply visit the `https://apify.com/library` page, select `Actors only` in the `Show` dropdown and click on one of the categories in the left hand menu. The correct URL will show up in your browser's address bar.
+> The `type=acts` query parameter comes from selecting `Actors only` in the `Show` dropdown. This is in line with us only wanting to scrape actors' data. If you're wondering how we've created these URLs, simply visit the `https://apify.com/store` page, select `Actors only` in the `Show` dropdown and click on one of the categories in the left hand menu. The correct URL will show up in your browser's address bar.
### The crawling strategy
Now that we know where to start, we need to figure out where to go next. Since we've eliminated one level of crawling by selecting the categories manually, we only need to crawl the actor detail pages now. The algorithm therefore follows:
@@ -697,13 +697,13 @@ Now that we know where to start, we need to figure out where to go next. Since w
`CheerioCrawler` will make sure to visit the pages for us, if we provide the correct `Requests` and we already know how to enqueue pages, so this should be fairly easy. Nevertheless, there are two more tricks that we'd like to show you.
#### Using a `RequestList`
-`RequestList` is a perfect tool for scraping a pre-existing list of URLs and if you think about our start URLs, this is exactly what we have! A list of links to the different categories of the library. Let's see how we'd get them into a `RequestList`.
+`RequestList` is a perfect tool for scraping a pre-existing list of URLs and if you think about our start URLs, this is exactly what we have! A list of links to the different categories of the store. Let's see how we'd get them into a `RequestList`.
```js
const sources = [
- 'https://apify.com/library?type=acts&category=TRAVEL',
- 'https://apify.com/library?type=acts&category=ECOMMERCE',
- 'https://apify.com/library?type=acts&category=ENTERTAINMENT'
+ 'https://apify.com/store?type=acts&category=TRAVEL',
+ 'https://apify.com/store?type=acts&category=ECOMMERCE',
+ 'https://apify.com/store?type=acts&category=ENTERTAINMENT'
];
const requestList = await Apify.openRequestList('categories', sources);
@@ -740,9 +740,9 @@ const Apify = require('apify');
Apify.main(async () => {
const sources = [
- 'https://apify.com/library?type=acts&category=TRAVEL',
- 'https://apify.com/library?type=acts&category=ECOMMERCE',
- 'https://apify.com/library?type=acts&category=ENTERTAINMENT'
+ 'https://apify.com/store?type=acts&category=TRAVEL',
+ 'https://apify.com/store?type=acts&category=ECOMMERCE',
+ 'https://apify.com/store?type=acts&category=ENTERTAINMENT'
];
const requestList = await Apify.openRequestList('categories', sources);
@@ -770,7 +770,7 @@ You might be wondering how we got that `.item` selector. After analyzing the cat
At time of this writing, there are only 2 actors in the Travel category, so we'll use this one for our examples, since it will make everything much less cluttered. Now, go to
```
-https://apify.com/library?type=acts&category=TRAVEL
+https://apify.com/store?type=acts&category=TRAVEL
```
and open DevTools either by right clicking anywhere in the page and selecting `Inspect`, or by pressing `F12` or by any other means relevant to your system. Once you're there, you'll see a bunch of DevToolsy stuff and a view of the category page with the individual actor cards.
@@ -846,9 +846,9 @@ const Apify = require('apify');
Apify.main(async () => {
const sources = [
- 'https://apify.com/library?type=acts&category=TRAVEL',
- 'https://apify.com/library?type=acts&category=ECOMMERCE',
- 'https://apify.com/library?type=acts&category=ENTERTAINMENT'
+ 'https://apify.com/store?type=acts&category=TRAVEL',
+ 'https://apify.com/store?type=acts&category=ECOMMERCE',
+ 'https://apify.com/store?type=acts&category=ENTERTAINMENT'
];
const requestList = await Apify.openRequestList('categories', sources);
@@ -886,7 +886,7 @@ We've added the `handlePageFunction()` with the `enqueueLinks()` logic from the
This concludes our Crawling strategy section, because we have taught the crawler to visit all the pages we need. Let's continue with scraping the tasty data.
### Scraping data
-At the beginning of this chapter, we've created a list of information we wanted to collect about the actors in the library. Let's review that and figure out ways to access it.
+At the beginning of this chapter, we've created a list of information we wanted to collect about the actors in the store. Let's review that and figure out ways to access it.
1. URL
2. Owner
@@ -912,7 +912,7 @@ const owner = urlArr[0]; // 'apify'
> It's always a matter of preference, whether to store this information separately in the resulting dataset, or not. Whoever uses the dataset can easily parse the `owner` from the `URL`, so should we duplicate the data unnecessarily? Our opinion is that unless the increased data consumption would be too large to bear, it's always better to make the dataset as readable as possible. Someone might want to filter by `owner` for example and keeping only the `URL` in the dataset would make this complicated without using additional tools.
#### Scraping Title, Description, Last run date and Number of runs
-Now it's time to add more data to the results. Let's open one of the actor detail pages in the Library, for example the [`apify/web-scraper`](https://apify.com/apify/web-scraper) page and use our DevTools-Fu to figure out how to get the title of the actor.
+Now it's time to add more data to the results. Let's open one of the actor detail pages in the Store, for example the [`apify/web-scraper`](https://apify.com/apify/web-scraper) page and use our DevTools-Fu to figure out how to get the title of the actor.
##### Title

@@ -1016,9 +1016,9 @@ const Apify = require('apify');
Apify.main(async () => {
const sources = [
- 'https://apify.com/library?type=acts&category=TRAVEL',
- 'https://apify.com/library?type=acts&category=ECOMMERCE',
- 'https://apify.com/library?type=acts&category=ENTERTAINMENT'
+ 'https://apify.com/store?type=acts&category=TRAVEL',
+ 'https://apify.com/store?type=acts&category=ECOMMERCE',
+ 'https://apify.com/store?type=acts&category=ENTERTAINMENT'
];
const requestList = await Apify.openRequestList('categories', sources);
@@ -1089,9 +1089,9 @@ const Apify = require('apify');
Apify.main(async () => {
const sources = [
- 'https://apify.com/library?type=acts&category=TRAVEL',
- 'https://apify.com/library?type=acts&category=ECOMMERCE',
- 'https://apify.com/library?type=acts&category=ENTERTAINMENT'
+ 'https://apify.com/store?type=acts&category=TRAVEL',
+ 'https://apify.com/store?type=acts&category=ECOMMERCE',
+ 'https://apify.com/store?type=acts&category=ENTERTAINMENT'
];
const requestList = await Apify.openRequestList('categories', sources);
@@ -1206,7 +1206,7 @@ Once we have that, we can load it in the actor and populate the crawler's source
const input = await Apify.getInput();
const sources = input.map(category => ({
- url: `https://apify.com/library?type=acts&category=${category}`,
+ url: `https://apify.com/store?type=acts&category=${category}`,
userData: {
label: 'CATEGORY'
}
@@ -1276,7 +1276,7 @@ exports.getSources = async () => {
log.debug('Getting sources.');
const input = await Apify.getInput();
return input.map(category => ({
- url: `https://apify.com/library?type=acts&category=${category}`,
+ url: `https://apify.com/store?type=acts&category=${category}`,
userData: {
label: 'CATEGORY'
}
diff --git a/docs/guides/what_is_an_actor.md b/docs/guides/what_is_an_actor.md
index 1ef273eb5cf3..979c6f9abb64 100644
--- a/docs/guides/what_is_an_actor.md
+++ b/docs/guides/what_is_an_actor.md
@@ -10,14 +10,14 @@ An actor can perform anything from a simple action such as filling out a web for
to complex operations such as crawling an entire website and removing duplicates from a large dataset.
To run an actor, you need to have an <a href="https://my.apify.com/)" target="_blank">Apify Account</a>.
-Actors can be shared in the <a href="https://apify.com/library?&type=acts)" target="_blank">Apify Library</a>
+Actors can be shared in the <a href="https://apify.com/store?&type=acts)" target="_blank">Apify Store</a>
so that other people can use them.
-But don't worry, if you share your actor in the library
+But don't worry, if you share your actor in the store
and somebody uses it, it runs under their account, not yours.
**Related links**
-* <a href="https://apify.com/library?&type=acts" target="_blank">Library of existing actors</a>
+* <a href="https://apify.com/store?&type=acts" target="_blank">Store of existing actors</a>
* <a href="https://apify.com/docs/actor" target="_blank">Documentation</a>
* <a href="https://my.apify.com/actors" target="_blank">View actors in Apify app</a>
* <a href="https://apify.com/docs/api/v2#/reference/actors" target="_blank">API reference</a> | Unrelated | Rename Library to Store |
fc833c83a43e8d5ce61300ff1f3064b4ee234906 | 2018-09-07 20:49:42 | Ondra Urban | Update README examples | false | diff --git a/README-new.md b/README-new.md
index c78c637373a5..6370fb709435 100644
--- a/README-new.md
+++ b/README-new.md
@@ -70,7 +70,7 @@ But eventually things will get complicated, for example when you try to:
* Perform a deep crawl of an entire website using a persistent queue of URLs.
* Run your scraping code on a list of 100k URLs in a CSV file,
- without loosing any data when your code crashes.
+ without losing any data when your code crashes.
* Rotate proxies to hide your browser origin.
* Schedule the code to run periodically and send notification on errors.
* Disable browser fingerprinting protections used by websites.
@@ -116,7 +116,7 @@ The Apify SDK package provides the following tools:
- Represents a queue of URLs to crawl, which is stored either on local filesystem or in cloud.
The queue is used for deep crawling of websites, where you start with
several URLs and then recursively follow links to other pages.
- The data structure supports both breath-first and depth-first crawling orders.
+ The data structure supports both breadth-first and depth-first crawling orders.
</li>
<li>
<a href="https://www.apify.com/docs/sdk/apify-runtime-js/latest#Dataset">Dataset</a>
@@ -164,7 +164,8 @@ You can add Apify SDK to any Node.js project by running:
npm install apify
```
-However, to make the package work at its full potential,
+It works right off the bat locally. No configuration needed.
+To make the package work with Apify cloud services
you'll need to set one or more of the following environment variables
for your Node.js process, depending on your circumstances:
@@ -176,50 +177,19 @@ for your Node.js process, depending on your circumstances:
</tr>
</thead>
<tbody>
- <tr>
- <td><code>APIFY_LOCAL_EMULATION_DIR</code></td>
- <td>Defines the path to a local directory where key-value stores, request lists and request queues store their data.
- If omitted, the package will try to use cloud storage instead and will expect that the
- <code>APIFY_TOKEN</code> environment variable is defined.
- </td>
- </tr>
<tr>
<td><code>APIFY_TOKEN</code></td>
<td>
The API token for your Apify account. It is used to access Apify APIs, e.g. to access cloud storage.
You can find your API token on the <a href="https://my.apify.com/account#intergrations" target="_blank">Apify - Account - Integrations</a> page.
- If omitted, you should define <code>APIFY_LOCAL_EMULATION_DIR</code> environment variable instead.
</td>
</tr>
<tr>
<td><code>APIFY_PROXY_PASSWORD</code></td>
<td>Password to <a href="https://www.apify.com/docs/proxy" target="_blank">Apify Proxy</a> for IP address rotation.
If you have have an Apify account, you can find the password on the
- <a href="https://my.apify.com/proxy" target="_blank">Proxy page</a> in the Apify app.</td>
- </tr>
- <tr>
- <td><code>APIFY_DEFAULT_KEY_VALUE_STORE_ID</code></td>
- <td>ID of the default key-value store, where the
- <code>Apify.getValue()</code> or <code>Apify.setValue()</code> functions store the values.
- If you defined <code>APIFY_LOCAL_EMULATION_DIR</code>, then each value is stored as a file at
- <code>[APIFY_LOCAL_EMULATION_DIR]/key-value-stores/[APIFY_DEFAULT_KEY_VALUE_STORE_ID]/[KEY].[EXT]</code>,
- where <code>[KEY]</code> is the key nad <code>[EXT]</code> corresponds to the MIME content type of the value.
- </td>
- </tr>
- <tr>
- <td><code>APIFY_DEFAULT_DATASET_ID</code></td>
- <td>ID of the default dataset, where the <code>Apify.pushData()</code> function store the data.
- If you defined <code>APIFY_LOCAL_EMULATION_DIR</code>, then dataset items are stored as files at
- <code>[APIFY_LOCAL_EMULATION_DIR]/datasets/[APIFY_DEFAULT_DATASET_ID]/[INDEX].json</code>,
- where <code>[INDEX]</code> is a zero-based index of the item.
- </td>
- </tr>
- <tr>
- <td><code>APIFY_DEFAULT_REQUEST_QUEUE_ID</code></td>
- <td>ID of the default request queue, where functions like <code>RequestQueue.addRequest()</code> store the data.
- If you defined <code>APIFY_LOCAL_EMULATION_DIR</code>, then request queue records are stored as files at
- <code>[APIFY_LOCAL_EMULATION_DIR]/request-queues/[APIFY_DEFAULT_REQUEST_QUEUE_ID]/[NUM].json</code>,
- where <code>[NUM]</code> indicates the order of the item in the queue.
+ <a href="https://my.apify.com/proxy" target="_blank">Proxy page</a> in the Apify app.
+ You may freely use your own proxies, instead of the Apify pool.
</td>
</tr>
</tbody>
@@ -233,7 +203,7 @@ section of the Apify actor documentation.
### Local usage with Apify command-line interface (CLI)
-To avoid the need to set all the necessary environment variables manually,
+To avoid the need to set the necessary environment variables manually,
to create a boilerplate of your project,
and to enable pushing and running your code on the Apify cloud,
you can take advantage of the
@@ -260,11 +230,9 @@ cd my-hello-world
apify run
```
-Note that the CLI automatically sets all necessary environment variables.
-The `APIFY_LOCAL_EMULATION_DIR` variable is set to `./apify_local` directory,
-where all the data will be stored.
+The default local storage folder is set to `./apify_local` directory, where all the data will be stored.
For example, the input JSON file for the actor is expected to be in the default key-value store
-in `./apify_local/key-value-stores/default/INPUT.json`.
+in `./apify_local/key_value_stores/default/INPUT.json`.
With the CLI you can also easily deploy your code to Apify cloud by running:
@@ -290,27 +258,226 @@ For more information, view the [Apify actors quick start guide](https://www.apif
## Examples
-TODO: This sections need to be finished
+Because examples are often the best way to explain anything, let's look at some of the above
+described features put to good use in solving various scraping challenges.
All the following examples can be found in the [./examples] directory in the repository.
-### Load few pages in raw HTML
+Or just run the following command to see the Cheerio Crawler in action.
+```
+node ./examples/crawler_cheerio
+```
-TODO: maybe use example from https://www.apify.com/docs/sdk/apify-runtime-js/latest#BasicCrawler, but make sure it's working
+### Load few pages in raw HTML
+This is the most basic example of using the Apify SDK. Start with it. It explains some
+essential concepts that are used throughout the SDK.
+```javascript
+// We require the Apify SDK and a popular client to make HTTP requests.
+const Apify = require('apify');
+const requestPromise = require('request-promise');
+
+// The Apify.main() function wraps the crawler logic and is a mandatory
+// part of every crawler run using Apify SDK.
+Apify.main(async () => {
+ // Prepare a list of URLs to crawl. For that we use an instance of the RequestList class.
+ // Here we just throw some URLs into an array of sources, but the RequestList can do much more.
+ const requestList = new Apify.RequestList({
+ sources: [
+ { url: 'http://www.google.com/' },
+ { url: 'http://www.example.com/' },
+ { url: 'http://www.bing.com/' },
+ { url: 'http://www.wikipedia.com/' },
+ ],
+ });
+
+ // Since initialization of the RequestList is asynchronous, you must always
+ // call .initialize() before using it.
+ await requestList.initialize();
+
+ // To crawl the URLs, we use an instance of the BasicCrawler class which is our simplest,
+ // but still powerful crawler. Its constructor takes an options object where you can
+ // configure it to your liking. Here, we're keeping things simple.
+ const crawler = new Apify.BasicCrawler({
+
+ // We use the request list created earlier to feed URLs to the crawler.
+ requestList,
+
+ // We define a handleRequestFunction that describes the actions
+ // we wish to perform for each URL.
+ handleRequestFunction: async ({ request }) => {
+ // 'request' contains an instance of the Request class which is a container
+ // for request related data such as URL or Method (GET, POST ...) and is supplied by the requestList we defined.
+ console.log(`Processing ${request.url}...`);
+
+ // Here we simply fetch the HTML of the page and store it to the default Dataset.
+ await Apify.pushData({
+ url: request.url,
+ html: await requestPromise(request.url),
+ });
+ },
+ });
+
+ // Once started the crawler, will automatically work through all the pages in the requestList
+ // and the created promise will resolve once the crawl is completed. The collected HTML will be
+ // saved in the ./apify_storage/datasets/default folder, unless configured differently.
+ await crawler.run();
+ console.log('Crawler finished.');
+});
+```
### Crawl a large list of URLs with Cheerio
+This example shows how to extract data (the content of title and all h1 tags) from an external
+list of URLs (parsed from a CSV file) using CheerioCrawler.
-Demonstrates how to create a crawler that will take
-a list of URLs from a CSV file and crawls the pages using
-<a href="https://www.npmjs.com/package/cheerio" target="_blank">cheerio</a>
-HTML parser. The results are stored into a dataset.
-
-TODO
+It builds upon the previous BasicCrawler example, so if you missed that one, you should check it out.
+```javascript
+const Apify = require('apify');
+
+// Utils is a namespace with nice to have things, such as logging control.
+const { log } = Apify.utils;
+// This is how you can turn internal logging off.
+log.setLevel(log.LEVELS.OFF);
+
+// This is just a list of Fortune 500 companies' websites available on GitHub.
+const CSV_LINK = 'https://gist.githubusercontent.com/hrbrmstr/ae574201af3de035c684/raw/2d21bb4132b77b38f2992dfaab99649397f238e9/f1000.csv';
+
+Apify.main(async () => {
+ // Using the 'requestsFromUrl' parameter instead of 'url' tells the RequestList to download
+ // the document available at the given URL and parse URLs out of it.
+ const requestList = new Apify.RequestList({
+ sources: [{ requestsFromUrl: CSV_LINK }],
+ });
+ await requestList.initialize();
+
+ // We're using the CheerioCrawler here. Its core difference from the BasicCrawler is the fact
+ // that the HTTP request is already handled for you and you get a parsed HTML of the
+ // page in the form of the cheerio object - $.
+ const crawler = new Apify.CheerioCrawler({
+ requestList,
+
+ // We define some boundaries for concurrency. It will be automatically managed.
+ // Here we say that no less than 5 and no more than 50 parallel requests should
+ // be run. The actual concurrency amount is based on memory and CPU load and is
+ // managed by the AutoscaledPool class.
+ minConcurrency: 10,
+ maxConcurrency: 50,
+
+ // We can also set the amount of retries.
+ maxRequestRetries: 1,
+
+ // Or the timeout for each page in seconds.
+ handlePageTimeoutSecs: 3,
+
+ // In addition to the BasicCrawler, which only provides access to the request parameter,
+ // CheerioCrawler further exposes the '$' parameter, which is the cheerio object containing
+ // the parsed page, and the 'html' parameter, which is just the raw HTML.
+ // Also, since we're not making the request ourselves, the function is named differently.
+ handlePageFunction: async ({ $, html, request }) => {
+ console.log(`Processing ${request.url}...`);
+
+ // Extract data with cheerio.
+ const title = $('title').text();
+ const h1texts = [];
+ $('h1').each((index, el) => {
+ h1texts.push({
+ text: $(el).text(),
+ });
+ });
+
+ // Save data to default Dataset.
+ await Apify.pushData({
+ url: request.url,
+ title,
+ h1texts,
+ html,
+ });
+ },
+
+ // If request failed 1 + maxRequestRetries then this function is executed.
+ handleFailedRequestFunction: async ({ request }) => {
+ console.log(`Request ${request.url} failed twice.`);
+ },
+ });
+
+ await crawler.run();
+ console.log('Crawler finished.');
+});
+```
### Recursively crawl a website using headless Chrome / Puppeteer
-
-Demonstrates how to recursively TODO
-
+This example demonstrates how to use PuppeteerCrawler in connection with the RequestQueue to recursively scrape
+the Hacker News site (https://news.ycombinator.com). It starts with a single URL where it finds more links,
+enqueues them to the RequestQueue and continues until no more desired links are available.
+```javascript
+const Apify = require('apify');
+
+Apify.main(async () => {
+ // Apify.openRequestQueue() is a factory to get preconfigured RequestQueue instance.
+ const requestQueue = await Apify.openRequestQueue();
+
+ // Enqueue only the first URL.
+ await requestQueue.addRequest(new Apify.Request({ url: 'https://news.ycombinator.com/' }));
+
+ // Create a PuppeteerCrawler. It's configuration is similar to the CheerioCrawler,
+ // only instead of the parsed HTML, handlePageFunction gets an instance of the
+ // Puppeteer.Page class. See Puppeteer docs for more information.
+ const crawler = new Apify.PuppeteerCrawler({
+ // Use of requestQueue is similar to RequestList.
+ requestQueue,
+
+ // Run Puppeteer headless. If you turn this off, you'll see the scraping
+ // browsers showing up on screen. Non-headless mode is great for debugging.
+ launchPuppeteerOptions: { headless: true },
+
+ // For each Request in the queue, a new Page is opened in a browser.
+ // This is the place to write the Puppeteer scripts you are familiar with,
+ // with the exception that browsers and pages are managed for you by Apify SDK automatically.
+ handlePageFunction: async ({ page, request }) => {
+ console.log(`Processing ${request.url}...`);
+
+ // A function to be evaluated by Puppeteer within
+ // the browser context.
+ const pageFunction = ($posts) => {
+ const data = [];
+
+ // We're getting the title, rank and url of each post on Hacker News.
+ $posts.forEach(($post) => {
+ data.push({
+ title: $post.querySelector('.title a').innerText,
+ rank: $post.querySelector('.rank').innerText,
+ href: $post.querySelector('.title a').href,
+ });
+ });
+
+ return data;
+ };
+ const data = await page.$$eval('.athing', pageFunction);
+
+ // Save data to default Dataset.
+ await Apify.pushData(data);
+
+ // To continue crawling, we need to enqueue some more pages into
+ // the requestQueue. First we find the correct URLs using Puppeteer
+ // and then we add the request to the queue.
+ try {
+ const nextHref = await page.$eval('.morelink', el => el.href);
+ // You may omit the Request constructor and just use a plain object.
+ await requestQueue.addRequest(new Apify.Request({ url: nextHref }));
+ } catch (err) {
+ console.log(`Url ${request.url} is the last page!`);
+ }
+ },
+
+ handleFailedRequestFunction: async ({ request }) => {
+ console.log(`Request ${request.url} failed 4 times`); // Because 3 retries is the default value.
+ },
+ });
+
+ // Run crawler.
+ await crawler.run();
+ console.log('Crawler finished.');
+});
+```
### Save page screenshots into key-value store
diff --git a/examples/basic_crawler.js b/examples/basic_crawler.js
new file mode 100644
index 000000000000..0ec9ac23f2df
--- /dev/null
+++ b/examples/basic_crawler.js
@@ -0,0 +1,60 @@
+/**
+ * This is the most basic example of using the Apify SDK. Start with it. It explains some
+ * essential concepts that are used throughout the SDK.
+ *
+ * Example uses:
+ * - Apify BasicCrawler to manage requests and autoscale the scraping job.
+ * - Apify Dataset to store data.
+ * - Apify RequestList to save a list of target URLs.
+ */
+// We require the Apify SDK and a popular client to make HTTP requests.
+const Apify = require('apify');
+const requestPromise = require('request-promise');
+
+// The Apify.main() function wraps the crawler logic and is a mandatory
+// part of every crawler run using Apify SDK.
+Apify.main(async () => {
+ // Prepare a list of URLs to crawl. For that we use an instance of the RequestList class.
+ // Here we just throw some URLs into an array of sources, but the RequestList can do much more.
+ const requestList = new Apify.RequestList({
+ sources: [
+ { url: 'http://www.google.com/' },
+ { url: 'http://www.example.com/' },
+ { url: 'http://www.bing.com/' },
+ { url: 'http://www.wikipedia.com/' },
+ ],
+ });
+
+ // Since initialization of the RequestList is asynchronous, you must always
+ // call .initialize() before using it.
+ await requestList.initialize();
+
+ // To crawl the URLs, we use an instance of the BasicCrawler class which is our simplest,
+ // but still powerful crawler. Its constructor takes an options object where you can
+ // configure it to your liking. Here, we're keeping things simple.
+ const crawler = new Apify.BasicCrawler({
+
+ // We use the request list created earlier to feed URLs to the crawler.
+ requestList,
+
+ // We define a handleRequestFunction that describes the actions
+ // we wish to perform for each URL.
+ handleRequestFunction: async ({ request }) => {
+ // 'request' contains an instance of the Request class which is a container
+ // for request related data such as URL or Method (GET, POST ...) and is supplied by the requestList we defined.
+ console.log(`Processing ${request.url}...`);
+
+ // Here we simply fetch the HTML of the page and store it to the default Dataset.
+ await Apify.pushData({
+ url: request.url,
+ html: await requestPromise(request.url),
+ });
+ },
+ });
+
+ // Once started the crawler, will automatically work through all the pages in the requestList
+ // and the created promise will resolve once the crawl is completed. The collected HTML will be
+ // saved in the ./apify_storage/datasets/default folder, unless configured differently.
+ await crawler.run();
+ console.log('Crawler finished.');
+});
diff --git a/examples/cheerio_crawler.js b/examples/cheerio_crawler.js
new file mode 100644
index 000000000000..a639efd0e1a8
--- /dev/null
+++ b/examples/cheerio_crawler.js
@@ -0,0 +1,82 @@
+/**
+ * This example shows how to extract data (the content of title and all h1 tags) from an external
+ * list of URLs (parsed from a CSV file) using CheerioCrawler.
+ *
+ * It builds upon the previous BasicCrawler example, so if you missed that one, you should check it out.
+ *
+ * Example uses:
+ * - Apify CheerioCrawler to scrape pages using the cheerio NPM package.
+ * - Apify Dataset to store data.
+ * - Apify RequestList to download a list of URLs from a remote resource.
+ */
+const Apify = require('apify');
+
+// Utils is a namespace with nice to have things, such as logging control.
+const { log } = Apify.utils;
+// This is how you can turn internal logging off.
+log.setLevel(log.LEVELS.OFF);
+
+// This is just a list of Fortune 500 companies' websites available on GitHub.
+const CSV_LINK = 'https://gist.githubusercontent.com/hrbrmstr/ae574201af3de035c684/raw/2d21bb4132b77b38f2992dfaab99649397f238e9/f1000.csv';
+
+Apify.main(async () => {
+ // Using the 'requestsFromUrl' parameter instead of 'url' tells the RequestList to download
+ // the document available at the given URL and parse URLs out of it.
+ const requestList = new Apify.RequestList({
+ sources: [{ requestsFromUrl: CSV_LINK }],
+ });
+ await requestList.initialize();
+
+ // We're using the CheerioCrawler here. Its core difference from the BasicCrawler is the fact
+ // that the HTTP request is already handled for you and you get a parsed HTML of the
+ // page in the form of the cheerio object - $.
+ const crawler = new Apify.CheerioCrawler({
+ requestList,
+
+ // We define some boundaries for concurrency. It will be automatically managed.
+ // Here we say that no less than 5 and no more than 50 parallel requests should
+ // be run. The actual concurrency amount is based on memory and CPU load and is
+ // managed by the AutoscaledPool class.
+ minConcurrency: 10,
+ maxConcurrency: 50,
+
+ // We can also set the amount of retries.
+ maxRequestRetries: 1,
+
+ // Or the timeout for each page in seconds.
+ handlePageTimeoutSecs: 3,
+
+ // In addition to the BasicCrawler, which only provides access to the request parameter,
+ // CheerioCrawler further exposes the '$' parameter, which is the cheerio object containing
+ // the parsed page, and the 'html' parameter, which is just the raw HTML.
+ // Also, since we're not making the request ourselves, the function is named differently.
+ handlePageFunction: async ({ $, html, request }) => {
+ console.log(`Processing ${request.url}...`);
+
+ // Extract data with cheerio.
+ const title = $('title').text();
+ const h1texts = [];
+ $('h1').each((index, el) => {
+ h1texts.push({
+ text: $(el).text(),
+ });
+ });
+
+ // Save data to default Dataset.
+ await Apify.pushData({
+ url: request.url,
+ title,
+ h1texts,
+ html,
+ });
+ },
+
+ // If request failed 1 + maxRequestRetries then this function is executed.
+ handleFailedRequestFunction: async ({ request }) => {
+ console.log(`Request ${request.url} failed twice.`);
+ },
+ });
+
+ await crawler.run();
+ console.log('Crawler finished.');
+});
diff --git a/examples/crawler_cheerio.js b/examples/crawler_cheerio.js
deleted file mode 100644
index 7afbbfeaf987..000000000000
--- a/examples/crawler_cheerio.js
+++ /dev/null
@@ -1,70 +0,0 @@
-/**
- * This is example how to scrape Hacker News site (https://news.ycombinator.com) using Apify SDK
- * with Cheerio and Request NPM packages.
- *
- * Example uses:
- * - Apify BasicCrawler to scrape pages in parallel
- * - Apify Dataset to store data
- * - Apify RequestQueue to manage dynamic queue of pending and handled requests
- * - Request NPM package to request html content of website
- * - Cherio NPM package to parse html and extract data
- */
-
-const Apify = require('apify');
-const rp = require('request-promise');
-const cheerio = require('cheerio');
-
-Apify.main(async () => {
- // Get queue and enqueue first url.
- const requestQueue = await Apify.openRequestQueue();
-
- // Enqueue Start url.
- await requestQueue.addRequest(new Apify.Request({ url: 'https://news.ycombinator.com/' }));
-
- // Create crawler.
- const crawler = new Apify.BasicCrawler({
- requestQueue,
-
- // This page is executed for each request.
- // If request failes then it's retried 3 times.
- handleRequestFunction: async ({ request }) => {
- console.log(`Processing ${request.url}...`);
-
- // Request html of page.
- const html = await rp(request.url);
-
- // Extract data with cheerio.
- const data = [];
- const $ = cheerio.load(html);
- $('.athing').each((index, el) => {
- data.push({
- title: $(el).find('.title a').text(),
- rank: $(el).find('.rank').text(),
- href: $(el).find('.title a').attr('href'),
- });
- });
-
- // Save data.
- await Apify.pushData(data);
-
- // Enqueue next page.
- const $moreLink = $('.morelink');
- if ($moreLink.length) {
- const path = $moreLink.attr('href')
- const url = `https://news.ycombinator.com/${path}`;
-
- await requestQueue.addRequest(new Apify.Request({ url }));
- } else {
- console.log(`Url ${request.url} is the last page!`);
- }
- },
-
- // If request failed 4 times then this function is executed.
- handleFailedRequestFunction: async ({ request }) => {
- console.log(`Request ${request.url} failed 4 times`);
- },
- });
-
- // Run crawler.
- await crawler.run();
-});
diff --git a/examples/crawler_puppeteer.js b/examples/crawler_puppeteer.js
deleted file mode 100644
index 260a6ff6395d..000000000000
--- a/examples/crawler_puppeteer.js
+++ /dev/null
@@ -1,66 +0,0 @@
-/**
- * This is example how to scrape Hacker News site (https://news.ycombinator.com) using Apify SDK and Puppeteer.
- *
- * Example uses:
- * - Apify PuppeteerCrawler to scrape pages using Puppeteer in parallel
- * - Apify Dataset to store data
- * - Apify RequestQueue to manage dynamic queue of pending and handled requests
- * - Puppeter to controll headless Chrome browser
- */
-
-const Apify = require('apify');
-
-Apify.main(async () => {
- // Get queue and enqueue first url.
- const requestQueue = await Apify.openRequestQueue();
-
- // Enqueue Start url.
- await requestQueue.addRequest(new Apify.Request({ url: 'https://news.ycombinator.com/' }));
-
- // Create crawler.
- const crawler = new Apify.PuppeteerCrawler({
- requestQueue,
-
- // This page is executed for each request.
- // If request failes then it's retried 3 times.
- // Parameter page is Puppeteers page object with loaded page.
- handlePageFunction: async ({ page, request }) => {
- console.log(`Processing ${request.url}...`);
-
- // Extract all posts.
- const pageFunction = ($posts) => {
- const data = [];
-
- $posts.forEach(($post) => {
- data.push({
- title: $post.querySelector('.title a').innerText,
- rank: $post.querySelector('.rank').innerText,
- href: $post.querySelector('.title a').href,
- });
- });
-
- return data;
- };
- const data = await page.$$eval('.athing', pageFunction);
-
- // Save data.
- await Apify.pushData(data);
-
- // Enqueue next page.
- try {
- const nextHref = await page.$eval('.morelink', el => el.href);
- await requestQueue.addRequest(new Apify.Request({ url: nextHref }));
- } catch (err) {
- console.log(`Url ${request.url} is the last page!`);
- }
- },
-
- // If request failed 4 times then this function is executed.
- handleFailedRequestFunction: async ({ request }) => {
- console.log(`Request ${request.url} failed 4 times`);
- },
- });
-
- // Run crawler.
- await crawler.run();
-});
diff --git a/examples/puppeteer_crawler.js b/examples/puppeteer_crawler.js
new file mode 100644
index 000000000000..617f121815b7
--- /dev/null
+++ b/examples/puppeteer_crawler.js
@@ -0,0 +1,80 @@
+/**
+ * This example demonstrates how to use PuppeteerCrawler in connection with the RequestQueue to recursively scrape
+ * the Hacker News site (https://news.ycombinator.com). It starts with a single URL where it finds more links,
+ * enqueues them to the RequestQueue and continues until no more desired links are available.
+ *
+ * Example uses:
+ * - Apify PuppeteerCrawler to scrape pages using Puppeteer in parallel.
+ * - Apify Dataset to store data.
+ * - Apify RequestQueue to manage dynamic queue of pending and handled requests.
+ * - Puppeter to control headless Chrome browser.
+ */
+
+const Apify = require('apify');
+
+Apify.main(async () => {
+ // Apify.openRequestQueue() is a factory to get preconfigured RequestQueue instance.
+ const requestQueue = await Apify.openRequestQueue();
+
+ // Enqueue only the first URL.
+ await requestQueue.addRequest(new Apify.Request({ url: 'https://news.ycombinator.com/' }));
+
+ // Create a PuppeteerCrawler. It's configuration is similar to the CheerioCrawler,
+ // only instead of the parsed HTML, handlePageFunction gets an instance of the
+ // Puppeteer.Page class. See Puppeteer docs for more information.
+ const crawler = new Apify.PuppeteerCrawler({
+ // Use of requestQueue is similar to RequestList.
+ requestQueue,
+
+ // Run Puppeteer headless. If you turn this off, you'll see the scraping
+ // browsers showing up on screen. Non-headless mode is great for debugging.
+ launchPuppeteerOptions: { headless: true },
+
+ // For each Request in the queue, a new Page is opened in a browser.
+ // This is the place to write the Puppeteer scripts you are familiar with,
+ // with the exception that browsers and pages are managed for you by Apify SDK automatically.
+ handlePageFunction: async ({ page, request }) => {
+ console.log(`Processing ${request.url}...`);
+
+ // A function to be evaluated by Puppeteer within
+ // the browser context.
+ const pageFunction = ($posts) => {
+ const data = [];
+
+ // We're getting the title, rank and url of each post on Hacker News.
+ $posts.forEach(($post) => {
+ data.push({
+ title: $post.querySelector('.title a').innerText,
+ rank: $post.querySelector('.rank').innerText,
+ href: $post.querySelector('.title a').href,
+ });
+ });
+
+ return data;
+ };
+ const data = await page.$$eval('.athing', pageFunction);
+
+ // Save data to default Dataset.
+ await Apify.pushData(data);
+
+ // To continue crawling, we need to enqueue some more pages into
+ // the requestQueue. First we find the correct URLs using Puppeteer
+ // and then we add the request to the queue.
+ try {
+ const nextHref = await page.$eval('.morelink', el => el.href);
+ // You may omit the Request constructor and just use a plain object.
+ await requestQueue.addRequest(new Apify.Request({ url: nextHref }));
+ } catch (err) {
+ console.log(`Url ${request.url} is the last page!`);
+ }
+ },
+
+ handleFailedRequestFunction: async ({ request }) => {
+ console.log(`Request ${request.url} failed 4 times`); // Because 3 retries is the default value.
+ },
+ });
+
+ // Run crawler.
+ await crawler.run();
+ console.log('Crawler finished.');
+});
diff --git a/examples/url_list_cheerio.js b/examples/url_list_cheerio.js
deleted file mode 100644
index cb5ca107ab15..000000000000
--- a/examples/url_list_cheerio.js
+++ /dev/null
@@ -1,67 +0,0 @@
-/**
- * This example shows how to extract data (title and "see also" links) form a list of Wikipedia articles
- * using Cheerio and Request NPM packages.
- *
- * Example uses:
- * - Apify BasicCrawler to scrape pages using Puppeteer in parallel
- * - Apify Dataset to store data
- * - Apify RequestList to manage a list of urls to be processed
- * - Request NPM package to request html content of website
- * - Cherio NPM package to parse html and extract data
- */
-
-const Apify = require('apify');
-const rp = require('request-promise');
-const cheerio = require('cheerio');
-
-Apify.main(async () => {
- const sources = [
- { url: 'https://en.wikipedia.org/wiki/Amazon_Web_Services' },
- { url: 'https://en.wikipedia.org/wiki/Google_Cloud_Platform' },
- { url: 'https://en.wikipedia.org/wiki/Microsoft_Azure' },
- { url: 'https://en.wikipedia.org/wiki/Rackspace_Cloud' },
- ];
-
- // Create a request list.
- const requestList = new Apify.RequestList({ sources });
- await requestList.initialize();
-
- const crawler = new Apify.BasicCrawler({
- requestList,
-
- // This page is executed for each request.
- // If request failes then it's retried 3 times.
- handleRequestFunction: async ({ request }) => {
- console.log(`Processing ${request.url}...`);
-
- // Request html of page.
- const html = await rp(request.url);
-
- // Extract data with cheerio.
- const $ = cheerio.load(html);
- const $seeAlsoElement = $('#See_also').parent().next();
- const seeAlsoLinks = [];
- $seeAlsoElement.find('a').each((index, el) => {
- seeAlsoLinks.push({
- url: $(el).attr('href'),
- text: $(el).text(),
- });
- });
-
- // Save data.
- await Apify.pushData({
- url: request.url,
- title: $('h1').text(),
- seeAlsoLinks,
- });
- },
-
- // If request failed 4 times then this function is executed.
- handleFailedRequestFunction: async ({ request }) => {
- console.log(`Request ${request.url} failed 4 times`);
- },
- });
-
- // Run crawler for request list.
- await crawler.run();
-}); | unknown | Update README examples |
cb390b5771d2046610946cb8d5e5b65a84707dc2 | 2020-01-15 18:26:09 | Petr Patek | Improved session docs and session_management.md guides. | false | diff --git a/docs/api/Session.md b/docs/api/Session.md
index 9c68fa5564b7..da6117550fd5 100644
--- a/docs/api/Session.md
+++ b/docs/api/Session.md
@@ -19,7 +19,7 @@ internal state can be enriched with custom user data for example some authorizat
- [`.getState()`](#Session+getState) ⇒ `Object`
- [`.retire()`](#Session+retire)
- [`.markBad()`](#Session+markBad)
- - [`.checkStatus(statusCode)`](#Session+checkStatus) ⇒ `boolean`
+ - [`.retireOnBlockedStatusCodes(statusCode, blockedStatusCodes)`](#Session+retireOnBlockedStatusCodes) ⇒ `boolean`
- [`.putResponse(response)`](#Session+putResponse)
- [`.putPuppeteerCookies(puppeteerCookies, url)`](#Session+putPuppeteerCookies)
- [`.setCookies(cookies, url)`](#Session+setCookies)
@@ -156,9 +156,9 @@ due to some external factors as server error such as 5XX you probably want to us
Increases usage and error count. Should be used when the session has been used unsuccessfully. For example because of timeouts.
-<a name="Session+checkStatus"></a>
+<a name="Session+retireOnBlockedStatusCodes"></a>
-## `session.checkStatus(statusCode)` ⇒ `boolean`
+## `session.retireOnBlockedStatusCodes(statusCode, blockedStatusCodes)` ⇒ `boolean`
Retires session based on status code.
@@ -176,6 +176,11 @@ Retires session based on status code.
</tr>
<tr>
<td colspan="3"><p>HTTP status code</p>
+</td></tr><tr>
+<td><code>blockedStatusCodes</code></td><td><code>Array<Number></code></td>
+</tr>
+<tr>
+<td colspan="3"><p>Custom HTTP status codes that means blocking on particular website.</p>
</td></tr></tbody>
</table>
<a name="Session+putResponse"></a>
diff --git a/docs/guides/session_management.md b/docs/guides/session_management.md
index 7c379c12d8d1..2dff92406267 100644
--- a/docs/guides/session_management.md
+++ b/docs/guides/session_management.md
@@ -55,12 +55,12 @@ const crawler = new Apify.PuppeteerCrawler({
useApifyProxy: true,
// Activates the Session pool.
useSessionPool: true,
- // Overrides default Session pool configuration
+ // Overrides default Session pool configuration.
sessionPoolOptions: {
maxPoolSize: 100
},
// Set to true if you want the crawler to save cookies per session,
- // and set the cookie header to request automatically..
+ // and set the cookie header to request automatically...
persistCookiesPerSession: true,
handlePageFunction: async ({request, $, session}) => {
const title = $("title");
@@ -80,15 +80,82 @@ const crawler = new Apify.PuppeteerCrawler({
**Example usage in [`BasicCrawler`](../api/basiccrawler)**
```javascript
-Finish API first
+ const crawler = new Apify.BasicCrawler({
+ requestQueue,
+ useSessionPool: true,
+ sessionPoolOptions: {
+ maxPoolSize: 100
+ },
+ handleRequestFunction: async ({request, session}) => {
+ // To use the proxy IP session rotation logic you must turn the proxy usage on.
+ const proxyUrl = Apify.getApifyProxyUrl({session});
+ const requestOptions = {
+ url: request.url,
+ proxyUrl,
+ throwHttpErrors: false,
+ headers: {
+ // If you want to use the cookieJar.
+ // This way you get the Cookie headers string from session.
+ Cookie: session.getCookieString(),
+ }
+ };
+ let response;
+
+ try {
+ response = await Apify.utils.requestAsBrowser(requestOptions);
+ } catch (e) {
+ if (e === "SomeNetworkError") {
+ // If network error happens like timeout, socket hangup etc...
+ // There is usually a chance that it was just bad luck and the proxy works.
+ // No need to throw it away.
+ session.markBad();
+ }
+ throw e;
+ }
+
+ // Automatically retires the session based on response HTTP status code.
+ session.retireOnBlockedStatusCodes(response.statusCode);
+
+ if (response.body.blocked) {
+ // You are sure it is blocking.
+ // This will trow away the session.
+ session.retire();
+
+ }
+
+ // Everything is ok you can get the data.
+ // No need to call session.markGood -> BasicCrawler calls it for you.
+
+ // If you want to use the CookieJar in session you need.
+ session.setCookiesFromResponse(response);
+
+ }
+ });
```
**Example solo usage**
```javascript
-Finish API first
-```
+Apify.main(async () => {
+
+ const sessionPoolOptions = {
+ maxPoolSize: 100
+ };
+ const sessionPool = await Apify.openSessionPool(sessionPoolOptions);
+
+ // Get session
+ const session = sessionPool.getSession();
+
+ // Increase the errorScore.
+ session.markBad();
+ // Throw away the session
+ session.retire();
+
+ // Lower the errorScore and marks the session good.
+ session.markGood();
+});
+```
These are the basics of configuring the SessionPool.
Please, bear in mind that the Session pool needs some time to find the working IPs and build up the pool,
so you will be probably seeing a lot of errors until it gets stabilized. | unknown | Improved session docs and session_management.md guides. |
d71e1bfc8e384765b09bdbc2e035d8b9e0f08938 | 2021-11-15 16:34:30 | Martin Adámek | chore(deps): upgrade @apify/eslint | false | diff --git a/package.json b/package.json
index ee470d4138ab..41d915a09d34 100644
--- a/package.json
+++ b/package.json
@@ -85,7 +85,7 @@
}
},
"devDependencies": {
- "@apify/eslint-config": "^0.2.1",
+ "@apify/eslint-config": "^0.2.2",
"@apify/tsconfig": "^0.1.0",
"@babel/cli": "^7.14.8",
"@babel/core": "^7.14.8",
diff --git a/src/browser_launchers/browser_launcher.js b/src/browser_launchers/browser_launcher.js
index 3a0a1998bbc1..25fd7e5e5a9a 100644
--- a/src/browser_launchers/browser_launcher.js
+++ b/src/browser_launchers/browser_launcher.js
@@ -35,7 +35,7 @@ export default class BrowserLauncher {
useIncognitoPages: ow.optional.boolean,
userDataDir: ow.optional.string,
launchOptions: ow.optional.object,
- }
+ };
/**
*
diff --git a/src/browser_launchers/playwright_launcher.js b/src/browser_launchers/playwright_launcher.js
index 683f4c7e486c..a9232df97213 100644
--- a/src/browser_launchers/playwright_launcher.js
+++ b/src/browser_launchers/playwright_launcher.js
@@ -52,7 +52,7 @@ export class PlaywrightLauncher extends BrowserLauncher {
static optionsShape = {
...BrowserLauncher.optionsShape,
launcher: ow.optional.object,
- }
+ };
/**
* @param {PlaywrightLaunchContext} launchContext
diff --git a/src/browser_launchers/puppeteer_launcher.js b/src/browser_launchers/puppeteer_launcher.js
index f0a731619663..6b3cf07570ec 100644
--- a/src/browser_launchers/puppeteer_launcher.js
+++ b/src/browser_launchers/puppeteer_launcher.js
@@ -76,7 +76,7 @@ export class PuppeteerLauncher extends BrowserLauncher {
userAgent: ow.optional.string,
stealth: ow.optional.boolean,
stealthOptions: ow.optional.object,
- }
+ };
/**
* @param {PuppeteerLaunchContext} launchContext
diff --git a/src/crawlers/cheerio_crawler.js b/src/crawlers/cheerio_crawler.js
index a3986c36207c..3a16eca15f03 100644
--- a/src/crawlers/cheerio_crawler.js
+++ b/src/crawlers/cheerio_crawler.js
@@ -410,7 +410,7 @@ class CheerioCrawler extends BasicCrawler {
preNavigationHooks: ow.optional.array,
postNavigationHooks: ow.optional.array,
- }
+ };
/**
* @param {CheerioCrawlerOptions} options
diff --git a/src/crawlers/playwright_crawler.js b/src/crawlers/playwright_crawler.js
index 2b36e9e9d57f..8540128a4fec 100644
--- a/src/crawlers/playwright_crawler.js
+++ b/src/crawlers/playwright_crawler.js
@@ -266,7 +266,7 @@ class PlaywrightCrawler extends BrowserCrawler {
browserPoolOptions: ow.optional.object,
launcher: ow.optional.object,
launchContext: ow.optional.object,
- }
+ };
/**
* @param {PlaywrightCrawlerOptions} options
diff --git a/src/crawlers/puppeteer_crawler.js b/src/crawlers/puppeteer_crawler.js
index 4b96a080212b..85928fc05b29 100644
--- a/src/crawlers/puppeteer_crawler.js
+++ b/src/crawlers/puppeteer_crawler.js
@@ -252,7 +252,7 @@ class PuppeteerCrawler extends BrowserCrawler {
...BrowserCrawler.optionsShape,
browserPoolOptions: ow.optional.object,
launchContext: ow.optional.object,
- }
+ };
/**
* @param {PuppeteerCrawlerOptions} options | chore | upgrade @apify/eslint |
b087031d23309f1c5a09ca39213202112cc7d50f | 2023-02-03 05:06:43 | renovate[bot] | chore(deps): update node.js to v18.14.0 | false | diff --git a/package.json b/package.json
index 7e7711f08df4..be3e0b0fcc39 100644
--- a/package.json
+++ b/package.json
@@ -102,7 +102,7 @@
},
"packageManager": "yarn@3.4.1",
"volta": {
- "node": "18.13.0",
+ "node": "18.14.0",
"yarn": "3.4.1"
}
} | chore | update node.js to v18.14.0 |
b2644c6fe799545484cb4ab416b7a9989d54f9d4 | 2018-12-30 19:18:51 | Jan Curn | Updated FACEBOOK_RESERVED_PATHS | false | diff --git a/src/utils_social.js b/src/utils_social.js
index cde39e21c860..8245c43f6826 100644
--- a/src/utils_social.js
+++ b/src/utils_social.js
@@ -196,7 +196,7 @@ const TWITTER_RESERVED_PATHS = 'oauth|account|tos|privacy|signup|home|hashtag|se
const TWITTER_REGEX_STRING = `(?<!\\w)(?:http(?:s)?:\\/\\/)?(?:www.)?(?:twitter.com)\\/(?!(?:${TWITTER_RESERVED_PATHS})(?:[\\'\\"\\?\\.\\/]|$))([a-z0-9_]{1,15})(?![a-z0-9_])(?:/)?`;
// eslint-disable-next-line max-len, quotes
-const FACEBOOK_RESERVED_PATHS = 'rsrc\\.php|apps|groups|events|l\\.php|friends|images|photo.php|chat|ajax|dyi|common|policies|login|recover|reg|help|security|messages|marketplace|pages|live|bookmarks|games|fundraisers|saved|gaming|salesgroups|jobs|people|ads|ad_campaign|weather|offers|recommendations|crisisresponse|onthisday|developers|settings';
+const FACEBOOK_RESERVED_PATHS = 'rsrc\\.php|apps|groups|events|l\\.php|friends|images|photo.php|chat|ajax|dyi|common|policies|login|recover|reg|help|security|messages|marketplace|pages|live|bookmarks|games|fundraisers|saved|gaming|salesgroups|jobs|people|ads|ad_campaign|weather|offers|recommendations|crisisresponse|onthisday|developers|settings|connect|business';
// eslint-disable-next-line max-len, quotes
const FACEBOOK_REGEX_STRING = `(?<!\\w)(?:http(?:s)?:\\/\\/)?(?:www.)?(?:facebook.com|fb.com)\\/(?!(?:${FACEBOOK_RESERVED_PATHS})(?:[\\'\\"\\?\\.\\/]|$))(profile\\.php\\?id\\=[0-9]{3,20}|(?!profile\\.php)[a-z0-9\\.]{5,51})(?![a-z0-9\\.])(?:/)?`; | unknown | Updated FACEBOOK_RESERVED_PATHS |
209316db233606286631d4a57eb7368b5aa2769e | 2018-11-23 20:03:39 | Jan Curn | Development of new Apify.utils.social functions | false | diff --git a/src/utils_social.js b/src/utils_social.js
index 7caa14117d2f..a2b601ccede6 100644
--- a/src/utils_social.js
+++ b/src/utils_social.js
@@ -1,5 +1,7 @@
/* eslint-disable no-continue */
import _ from 'underscore';
+import htmlToText from 'html-to-text';
+import cheerio from 'cheerio';
// Regex inspired by https://zapier.com/blog/extract-links-email-phone-regex/
// eslint-disable-next-line max-len
@@ -47,30 +49,6 @@ const emailsFromUrls = (urls) => {
};
-/**
- * The function extracts email addresses from a HTML text.
- * It looks for `mailto:` links as well as email addresses in the text content.
- * Note that the function preserves the order of emails and keep duplicates.
- * @param {String} html HTML document
- * @return {String[]} Array of emails addresses found.
- * If no emails are found, the function returns an empty array.
- * @memberOf utils.social
- */
-const emailsFromHtml = (html) => {
- if (!Array.isArray(urls)) throw new Error('The "urls" parameter must be an array');
-
- const emails = [];
- for (const url of urls) {
- if (!url) continue;
- if (!EMAIL_URL_PREFIX_REGEX.test(url)) continue;
-
- const email = url.replace(EMAIL_URL_PREFIX_REGEX, '').trim();
- if (EMAIL_REGEX.test(email)) emails.push(email);
- }
- return emails;
-};
-
-
const LINKEDIN_URL_REGEX = /http(s)?:\/\/[a-zA-Z]+\.linkedin\.com\/in\/[a-zA-Z0-9\-_%]+/g;
const INSTAGRAM_URL_REGEX = /(?:(^|[^0-9a-z]))(((http|https):\/\/)?((www\.)?(?:instagram.com|instagr.am)\/([A-Za-z0-9_.]{2,30})))/ig;
// eslint-disable-next-line max-len, no-useless-escape
@@ -188,6 +166,53 @@ const phonesFromUrls = (urls) => {
};
+/**
+ * The functions attempts to extract the following social handles from a HTML document:
+ * emails, phones. Note that the function removes duplicates.
+ * @param {String} html HTML document
+ * @return {*} An object with social handles. It has the following strucute:
+ * ```
+ * {
+ * emails: String[],
+ * phones: String[],
+ * }
+ * ```
+ */
+const handlesFromHtml = (html) => {
+ const result = {
+ emails: [],
+ phones: [],
+ };
+
+ if (!_.isString(html)) return result;
+
+ // We use ignoreHref and ignoreImage options so that the text doesn't contain links,
+ // since their parts can be interpreted as e.g. phone numbers.
+ const text = htmlToText.fromString(html, { ignoreHref: true, ignoreImage: true });
+
+ // TODO: Both html-to-text and cheerio use htmlparser2, the parsing could be done only once to improve performance
+ const $ = cheerio.load(html, { decodeEntities: true });
+
+ // Find all <a> links with href tag
+ const linkUrls = [];
+ $('a[href]').each((index, elem) => {
+ if (elem) linkUrls.push($(elem).attr('href'));
+ });
+
+ // TODO: We should probably normalize all the handles to lower-case
+
+ result.emails = emailsFromUrls(linkUrls).concat(emailsFromText(text));
+ result.emails.sort();
+ result.emails = _.uniq(result.emails, true);
+
+ result.phones = phonesFromUrls(linkUrls).concat(phonesFromText(text));
+ result.phones.sort();
+ result.phones = _.uniq(result.phones, true);
+
+ return result;
+};
+
+
/**
* A namespace that contains various Puppeteer utilities.
*
@@ -205,4 +230,5 @@ export const socialUtils = {
emailsFromUrls,
phonesFromText,
phonesFromUrls,
+ handlesFromHtml,
};
diff --git a/test/utils_social.js b/test/utils_social.js
index 5f6e2b9279f0..bf45c13f69af 100644
--- a/test/utils_social.js
+++ b/test/utils_social.js
@@ -301,3 +301,53 @@ describe('utils.social.phonesFromUrls()', () => {
]);
});
});
+
+
+
+
+describe('utils.social.handlesFromHtml()', () => {
+ const EMPTY_RESULT = {
+ emails: [],
+ phones: [],
+ };
+
+ it('handles invalid arg', () => {
+ expect(social.handlesFromHtml()).to.eql(EMPTY_RESULT);
+ expect(social.handlesFromHtml(undefined)).to.eql(EMPTY_RESULT);
+ expect(social.handlesFromHtml(null)).to.eql(EMPTY_RESULT);
+ expect(social.handlesFromHtml({})).to.eql(EMPTY_RESULT);
+ expect(social.handlesFromHtml(1234)).to.eql(EMPTY_RESULT);
+ });
+
+ it('works', () => {
+ expect(social.handlesFromHtml('')).to.eql(EMPTY_RESULT);
+ expect(social.handlesFromHtml(' ')).to.eql(EMPTY_RESULT);
+
+ expect(social.handlesFromHtml(`
+ <html>
+ <head>
+ <title>Bla</title>
+ </head>
+ <a>
+ <p>bob@example.com</p>
+ <p>carl@example.com</p>
+ <a href="mailto:alice@example.com"></a>
+ <a href="mailto:david@example.com"></a>
+
+ <a href="skip.this.one@gmail.com"></a>
+ <img src="http://somewhere.com/ skip.this.one.too@gmail.com " />
+ <a href="http://somewhere.com/ skip.this.one.too@gmail.com "></a>
+
+ +420775222222
+ +4207751111111
+ <a href="skip.this.one: +42099999999"></a>
+ <a href="tel://+42077533333"></a>
+
+ </body>
+ </html>
+ `)).to.eql({
+ emails: ['alice@example.com', 'bob@example.com', 'carl@example.com', 'david@example.com'],
+ phones: ['+4207751111111', '+420775222222', '+42077533333'],
+ });
+ });
+}); | unknown | Development of new Apify.utils.social functions |
6e4ab1c132edf0b98407be7c667463f20e47eb89 | 2022-07-29 12:48:54 | Andrey Bykov | fix: add missing configuration to CheerioCrawler constructor (#1432) | false | diff --git a/packages/cheerio-crawler/src/internals/cheerio-crawler.ts b/packages/cheerio-crawler/src/internals/cheerio-crawler.ts
index 0f46f969d951..71257d4f7d85 100644
--- a/packages/cheerio-crawler/src/internals/cheerio-crawler.ts
+++ b/packages/cheerio-crawler/src/internals/cheerio-crawler.ts
@@ -3,7 +3,7 @@ import { concatStreamToBuffer, readStreamToString } from '@apify/utilities';
import type { AutoscaledPoolOptions, BasicCrawlerOptions, ErrorHandler, RequestHandler } from '@crawlee/basic';
import { BasicCrawler, BASIC_CRAWLER_TIMEOUT_BUFFER_SECS } from '@crawlee/basic';
import type { CrawlingContext, EnqueueLinksOptions, ProxyConfiguration, Request, RequestQueue, Session } from '@crawlee/core';
-import { CrawlerExtension, enqueueLinks, mergeCookies, Router, resolveBaseUrlForEnqueueLinksFiltering, validators } from '@crawlee/core';
+import { CrawlerExtension, enqueueLinks, mergeCookies, Router, resolveBaseUrlForEnqueueLinksFiltering, validators, Configuration } from '@crawlee/core';
import type { BatchAddRequestsResult, Awaitable, Dictionary } from '@crawlee/types';
import type { CheerioRoot } from '@crawlee/utils';
import { entries, parseContentTypeFromResponse } from '@crawlee/utils';
@@ -404,7 +404,7 @@ export class CheerioCrawler extends BasicCrawler<CheerioCrawlingContext> {
/**
* All `CheerioCrawler` parameters are passed via an options object.
*/
- constructor(options: CheerioCrawlerOptions = {}) {
+ constructor(options: CheerioCrawlerOptions = {}, override readonly config = Configuration.getGlobalConfig()) {
ow(options, 'CheerioCrawlerOptions', ow.object.exactShape(CheerioCrawler.optionsShape));
const {
@@ -437,7 +437,7 @@ export class CheerioCrawler extends BasicCrawler<CheerioCrawlingContext> {
// We need to add some time for internal functions to finish,
// but not too much so that we would stall the crawler.
requestHandlerTimeoutSecs: navigationTimeoutSecs + requestHandlerTimeoutSecs + BASIC_CRAWLER_TIMEOUT_BUFFER_SECS,
- });
+ }, config);
this._handlePropertyNameChange({
newName: 'requestHandler',
@@ -716,7 +716,7 @@ export class CheerioCrawler extends BasicCrawler<CheerioCrawlingContext> {
throw new Error(`${statusCode} - ${message}`);
}
- // It's not a JSON so it's probably some text. Get the first 100 chars of it.
+ // It's not a JSON, so it's probably some text. Get the first 100 chars of it.
throw new Error(`${statusCode} - Internal Server Error: ${body.substr(0, 100)}`);
} else if (HTML_AND_XML_MIME_TYPES.includes(type)) {
const dom = await this._parseHtmlToDom(response); | fix | add missing configuration to CheerioCrawler constructor (#1432) |
b25c8692bb71c28e2d514989537c8849d863701e | 2019-11-01 18:00:27 | metalwarrior665 | enhanced options for utils.puppeteer.saveSnapshot | false | diff --git a/docs/api/puppeteer.md b/docs/api/puppeteer.md
index 668b15278215..28506b9808aa 100644
--- a/docs/api/puppeteer.md
+++ b/docs/api/puppeteer.md
@@ -28,6 +28,7 @@ await puppeteer.injectJQuery(page);
- [`.removeInterceptRequestHandler`](#puppeteer.removeInterceptRequestHandler) ⇒ `Promise`
- [`.gotoExtended`](#puppeteer.gotoExtended) ⇒ `Promise<Response>`
- [`.infiniteScroll`](#puppeteer.infiniteScroll) ⇒ `Promise`
+ - [`.saveSnapshot`](#puppeteer.saveSnapshot) ⇒ `Promise`
- [`.injectFile(page, filePath, [options])`](#puppeteer.injectFile) ⇒ `Promise`
- [`.injectJQuery(page)`](#puppeteer.injectJQuery) ⇒ `Promise`
- [`.injectUnderscore(page)`](#puppeteer.injectUnderscore) ⇒ `Promise`
@@ -320,6 +321,51 @@ Scrolls to the bottom of a page, or until it times out. Loads dynamic content wh
<td colspan="3"><p>How many seconds to wait for no new content to load before exit.</p>
</td></tr></tbody>
</table>
+<a name="puppeteer.saveSnapshot"></a>
+
+## `puppeteer.saveSnapshot` ⇒ `Promise`
+
+Saves a full screenshot and HTML of the current page into a Key-Value store.
+
+<table>
+<thead>
+<tr>
+<th>Param</th><th>Type</th><th>Default</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td><code>page</code></td><td><code>Object</code></td><td></td>
+</tr>
+<tr>
+<td colspan="3"><p>Puppeteer <a href="https://pptr.dev/#?product=Puppeteer&show=api-class-page" target="_blank"><code>Page</code></a> object.</p>
+</td></tr><tr>
+<td><code>[options]</code></td><td><code>Object</code></td><td></td>
+</tr>
+<tr>
+<td colspan="3"></td></tr><tr>
+<td><code>[options.key]</code></td><td><code>String</code></td><td><code>SNAPSHOT</code></td>
+</tr>
+<tr>
+<td colspan="3"><p>Key under which the screenshot and HTML will be saved. <code>.png</code> will be appended for screenshot and <code>.html</code> for HTML.
+ Must contain only letters, numbers, dashes, dots and underscores.</p>
+</td></tr><tr>
+<td><code>[options.saveScreenshot]</code></td><td><code>Boolean</code></td><td><code>true</code></td>
+</tr>
+<tr>
+<td colspan="3"><p>If true, it will save a full screenshot of the current page with as a record with key appended by <code>.png</code>.</p>
+</td></tr><tr>
+<td><code>[options.saveHtml]</code></td><td><code>Boolean</code></td><td><code>true</code></td>
+</tr>
+<tr>
+<td colspan="3"><p>If true, it will save a full HTML of the current page with as a record with key appended by <code>.html</code>.</p>
+</td></tr><tr>
+<td><code>[options.storeName]</code></td><td><code>String</code></td><td><code></code></td>
+</tr>
+<tr>
+<td colspan="3"><p>Name or id of the Key-Value store where snapshot is saved. By default it is saved to default Key-Value store.</p>
+</td></tr></tbody>
+</table>
<a name="puppeteer.injectFile"></a>
## `puppeteer.injectFile(page, filePath, [options])` β `Promise`
diff --git a/src/puppeteer_utils.js b/src/puppeteer_utils.js
index 1164b85c1811..e62bb8421300 100644
--- a/src/puppeteer_utils.js
+++ b/src/puppeteer_utils.js
@@ -11,8 +11,7 @@ import Request from './request';
import { enqueueLinks } from './enqueue_links/enqueue_links';
import { enqueueLinksByClickingElements } from './enqueue_links/click_elements';
import { addInterceptRequestHandler, removeInterceptRequestHandler } from './puppeteer_request_interception';
-import { apifyClient } from './utils';
-import { setValue } from './key_value_store';
+import { openKeyValueStore } from './key_value_store';
const jqueryPath = require.resolve('jquery/dist/jquery.min');
const underscorePath = require.resolve('underscore/underscore-min');
@@ -513,27 +512,52 @@ export const infiniteScroll = async (page, options = {}) => {
}
};
-const saveSnapshot = async (page, key, options = {}) => {
+/**
+ * Saves a full screenshot and HTML of the current page into a Key-Value store.
+ * @param {Object} page
+ * Puppeteer <a href="https://pptr.dev/#?product=Puppeteer&show=api-class-page" target="_blank"><code>Page</code></a> object.
+ * @param {Object} [options]
+ * @param {String} [options.key=SNAPSHOT]
+ * Key under which the screenshot and HTML will be saved. `.png` will be appended for screenshot and `.html` for HTML.
+ * Must contain only letters, numbers, dashes, dots and underscores.
+ * @param {Boolean} [options.saveScreenshot=true]
+ * If true, it will save a full screenshot of the current page as a record with `key` appended by `.png`.
+ * @param {Boolean} [options.saveHtml=true]
+ * If true, it will save a full HTML of the current page as a record with `key` appended by `.html`.
+ * @param {String} [options.storeName=null]
+ * Name or id of the Key-Value store where snapshot is saved. By default it is saved to default Key-Value store.
+ * @returns {Promise}
+ * @memberOf puppeteer
+ * @name saveSnapshot
+ */
+const saveSnapshot = async (page, options = {}) => {
+ const DEFAULT_KEY = 'SNAPSHOT';
+ let key;
try {
checkParamOrThrow(page, 'page', 'Object');
checkParamOrThrow(options, 'options', 'Object');
- const { saveScreenshot = true, saveHtml = true } = options;
+ const { saveScreenshot = true, saveHtml = true, storeName = null } = options;
+ key = options.key || DEFAULT_KEY;
+
+ checkParamOrThrow(saveScreenshot, 'saveScreenshot', 'Boolean');
+ checkParamOrThrow(saveHtml, 'saveHtml', 'Boolean');
+ checkParamOrThrow(key, 'key', 'String');
+ checkParamOrThrow(storeName, 'storeName', 'Maybe String');
- checkParamOrThrow(saveScreenshot, 'timeoutSecs', 'Boolean');
- checkParamOrThrow(saveHtml, 'waitForSecs', 'Boolean');
+ const store = await openKeyValueStore(storeName);
if (saveScreenshot) {
const screenshotBuffer = await page.screenshot({ fullPage: true });
- await setValue(`${key}.png`, screenshotBuffer, { contentType: 'image/png' });
+ await store.setValue(`${key}.png`, screenshotBuffer, { contentType: 'image/png' });
}
if (saveHtml) {
const html = await page.content();
- await setValue(`${key}.html`, html, { contentType: 'text/html' });
+ await store.setValue(`${key}.html`, html, { contentType: 'text/html' });
}
} catch (e) {
// I like this more than having to investigate stack trace
- log.error(`saveSnapshot with key ${key} failed with error:`);
+ log.error(`saveSnapshot with key ${key || ''} failed with error:`);
throw e;
}
};
diff --git a/test/puppeteer_utils.js b/test/puppeteer_utils.js
index f9af2260e477..35d1feca4653 100644
--- a/test/puppeteer_utils.js
+++ b/test/puppeteer_utils.js
@@ -377,39 +377,40 @@ describe('Apify.utils.puppeteer', () => {
}
});
-
it('saveSnapshot() works', async () => {
const mock = sinon.mock(keyValueStore);
const browser = await Apify.launchPuppeteer({ headless: true });
try {
const page = await browser.newPage();
- let count = 0;
- const content = Array(10).fill(null).map(() => {
- return `<div style="border: 1px solid black">Div number: ${count++}</div>`;
- });
- const contentHTML = `<html><head></head><body>${content}</body></html>`;
+ const contentHTML = '<html><head></head><body><div style="border: 1px solid black">Div number: 1</div></body></html>';
await page.setContent(contentHTML);
const screenshot = await page.screenshot({ fullPage: true });
- mock.expects('setValue')
- .once()
- .withArgs('TEST.png', screenshot, { contentType: 'image/png' })
- .returns(Promise.resolve());
+ // Test saving both image and html
+ const object = { setValue: async () => {} };
+ const stub = sinon.stub(object, 'setValue');
- mock.expects('setValue')
+ mock.expects('openKeyValueStore')
.once()
- .withArgs('TEST.html', contentHTML, { contentType: 'text/html' })
- .returns(Promise.resolve());
+ .withExactArgs('TEST-STORE')
+ .resolves(object);
- await Apify.utils.puppeteer.saveSnapshot(page, 'TEST');
+ await Apify.utils.puppeteer.saveSnapshot(page, { key: 'TEST', storeName: 'TEST-STORE' });
- mock.expects('setValue')
- .once()
- .withArgs('TEST.png', screenshot, { contentType: 'image/png' })
- .returns(Promise.resolve());
+ expect(stub.calledWithExactly('TEST.png', screenshot, { contentType: 'image/png' })).to.be.eql(true);
+ expect(stub.calledWithExactly('TEST.html', contentHTML, { contentType: 'text/html' })).to.be.eql(true);
+
+ // Test saving only image
+ const object2 = { setValue: async () => {} };
+ const stub2 = sinon.stub(object2, 'setValue');
+ mock.expects('openKeyValueStore')
+ .withExactArgs(null)
+ .resolves(object2);
+
+ await Apify.utils.puppeteer.saveSnapshot(page, { saveHtml: false });
- await Apify.utils.puppeteer.saveSnapshot(page, 'TEST', { saveHtml: false });
+ expect(stub2.calledOnceWithExactly('SNAPSHOT.png', screenshot, { contentType: 'image/png' })).to.be.eql(true);
mock.verify();
} finally { | unknown | enhanced options for utils.puppeteer.saveSnapshot |
End of preview.