vicious-gold
vicious-gold•2y ago

Unexpected results when currying a value into the request handler

I need to curry a value into my request handler. When I do this, my first invocation of the crawler runs as expected, but the second invocation reaches maxRequestsPerCrawl immediately (even when passing in a new url).
I see this on the first run:
...
INFO PlaywrightCrawler: Earlier, the crawler reached the maxRequestsPerCrawl limit of 2 requests and all requests that were in progress at that time have now finished. In total, the crawler processed 5 requests and will shut down.
INFO PlaywrightCrawler: Final request statistics: {"requestsFinished":5,"requestsFailed":0,"retryHistogram":[5],"requestAvgFailedDurationMillis":null,"requestAvgFinishedDurationMillis":13368,"requestsFinishedPerMinute":9,"requestsFailedPerMinute":0,"requestTotalDurationMillis":66838,"requestsTotal":5,"crawlerRuntimeMillis":32684}
INFO PlaywrightCrawler: Finished! Total 5 requests: 5 succeeded, 0 failed. {"terminal":true}
Then this on the second:
INFO PlaywrightCrawler: Starting the crawler.
INFO PlaywrightCrawler: Crawler reached the maxRequestsPerCrawl limit of 2 requests and will shut down soon. Requests that are in progress will be allowed to finish.
INFO PlaywrightCrawler: Final request statistics: {"requestsFinished":0,"requestsFailed":0,...
I have a function that takes a single string value (step_id) and passes that value into a function that returns the request handler.
export function crawler<T>(step_id: string) {
  const playwright_crawler = new PlaywrightCrawler({
    maxRequestsPerCrawl: 2,
    requestHandler: create_crawlee_rec_handler(step_id),
  });
  return playwright_crawler;
}
This function is imported and called as such:
const crawlerInstance = crawler(`${step.id}`);
await crawlerInstance.run([href]);
If I don't curry the step_id into the request handler, the second run works just fine. Is there another way to get my step_id value into the scope of the request handler? Thanks in advance for any advice.
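One alternative to currying is to attach step_id to each request's userData so a single shared handler can read it off the request instead of closing over it. The sketch below models the idea with a simplified stand-in Req type rather than Crawlee's real Request class; in real Crawlee the equivalent would be calling crawler.run([{ url: href, userData: { step_id } }]) and reading request.userData inside the handler.

```typescript
// Simplified stand-in for Crawlee's Request shape (illustrative only).
type Req = { url: string; userData: { step_id: string } };

// One shared handler: step_id travels with the request, so no currying
// (and no per-step crawler instance) is needed.
function recHandler(request: Req): string {
  return `handled ${request.url} for step ${request.userData.step_id}`;
}

// Each run supplies step_id per request; modeled here as a plain call:
const result = recHandler({
  url: "https://example.com",
  userData: { step_id: "step-1" },
});
console.log(result); // handled https://example.com for step step-1
```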
3 Replies
vicious-gold
vicious-goldOP•2y ago
Update. Knowing that the step_id is globally unique, I added configuration options to override the default requestQueueId and keyValueStoreId. I also opted not to have these written to the storage directory.
export function crawler<T>(step_id: string) {
  const config = Configuration.getGlobalConfig();
  config.set("persistStorage", false);
  config.set("defaultRequestQueueId", step_id);
  config.set("defaultKeyValueStoreId", step_id);
  const playwright_crawler = new PlaywrightCrawler(
    {
      launchContext: {
        // Options passed to Playwright's .launch() function.
        launchOptions: {
          headless: true,
        },
      },
      preNavigationHooks: [
        async (crawlingContext, gotoOptions) => {
          const { page } = crawlingContext;
          await page.context().addInitScript(pageInitScript);
        },
      ],
      keepAlive: false,
      maxConcurrency: 15,
      maxRequestsPerCrawl: 2,
      requestHandler: create_crawlee_rec_handler(step_id),
    },
    config
  );
  return playwright_crawler;
}
Seems to be working now, but IDK if this is the appropriate approach...
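One caveat with the approach above: mutating the object returned by Configuration.getGlobalConfig() changes shared process-wide state, so two crawlers created concurrently could overwrite each other's queue ids. A fresh per-crawler Configuration avoids that. This is a sketch assuming Crawlee's Configuration constructor accepts these options (persistStorage, defaultRequestQueueId, defaultKeyValueStoreId):

```typescript
// Per-crawler config instead of mutating the global one; pass it as the
// second argument to new PlaywrightCrawler(options, config) as before.
const config = new Configuration({
  persistStorage: false,
  defaultRequestQueueId: step_id,
  defaultKeyValueStoreId: step_id,
});
```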
Saurav Jain
Saurav Jain•2y ago
Hi, our team will reply soon; it's the weekend, so it might take a day or two. 🙂
Lukas Celnar
Lukas Celnar•2y ago
Hi, it seems the issue you encountered was related to how the PlaywrightCrawler's state is managed between invocations. When currying step_id into your request handler without explicitly isolating each run's state, the crawler's internal mechanisms for tracking requests and their statuses could have been influenced by residual state from previous runs. This would explain why the second run immediately hit the maxRequestsPerCrawl limit: from the crawler's perspective, it was continuing from the previous state rather than starting fresh.

One other way of separating the runs would be to create a named request queue with the step_id as its name: https://crawlee.dev/api/core/class/RequestQueue

And if you need to pass data between different instances, you could do it through the key-value store: https://docs.apify.com/platform/storage/key-value-store
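The diagnosis above can be sketched with a toy model. The names and mechanics here are illustrative, not Crawlee's actual internals: the point is that a default request queue is process-global, so handled requests survive across runs, while a queue named per step_id starts fresh.

```typescript
// Toy model: why run 2 against a shared default queue finishes 0 requests.
class Queue {
  handled = new Set<string>();
}

const queues = new Map<string, Queue>(); // process-global storage

function openQueue(id: string): Queue {
  let q = queues.get(id);
  if (!q) {
    q = new Queue();
    queues.set(id, q);
  }
  return q;
}

// Returns how many requests this run actually finished.
function run(queueId: string, urls: string[], maxRequestsPerCrawl: number): number {
  const q = openQueue(queueId);
  let finished = 0;
  for (const url of urls) {
    // The limit counts requests handled in earlier runs of the same queue too.
    if (q.handled.size >= maxRequestsPerCrawl) break;
    if (!q.handled.has(url)) {
      q.handled.add(url);
      finished++;
    }
  }
  return finished;
}

console.log(run("default", ["a", "b"], 2)); // 2 — first run works
console.log(run("default", ["c", "d"], 2)); // 0 — limit already reached
console.log(run("step-42", ["c", "d"], 2)); // 2 — named queue starts fresh
```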
