dead-brown
Apify & Crawlee•12mo ago•
2 replies
Puppeteer runs often stop with `ProtocolError` after `Memory is critically overloaded.`

During my Apify scraping runs with Crawlee / Puppeteer (32 GB RAM per run), my jobs stop with:
There was an uncaught exception during the run of the Actor and it was not handled.

The logs are in the screenshot at the end.
This usually happens on runs longer than 30 minutes; shorter runs are less likely to hit the error.
I've tried increasing the `protocolTimeout` setting, but the error still occurs, just after a longer wait.
I've also tried different concurrency settings, including leaving them at the defaults, but I consistently see this error.
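For reference, `protocolTimeout` is set on Puppeteer's launch options (it raises the Chrome DevTools Protocol timeout, default 180 s). A minimal sketch of how I set it; the 300-second value is my own choice, not a recommendation:

```javascript
import { PuppeteerCrawler } from "crawlee";

const crawler = new PuppeteerCrawler({
    launchContext: {
        launchOptions: {
            headless: true,
            // Raise the CDP timeout from the 180s default.
            // This delays the ProtocolError but does not fix
            // the underlying memory overload.
            protocolTimeout: 300_000, // ms, assumed value
        },
    },
    requestHandler: router,
});
```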

const crawler = new PuppeteerCrawler({
    launchContext: {
        launchOptions: {
            headless: true,
            args: [
                "--no-sandbox", // Mitigates the "sandboxed" process issue in Docker containers,
                "--ignore-certificate-errors",
                "--disable-dev-shm-usage",
                "--disable-infobars",
                "--disable-extensions",
                "--disable-setuid-sandbox",
                "--ignore-certificate-errors",
                "--disable-gpu", // Mitigates the "crashing GPU process" issue in Docker containers
            ],
        },
    },
    maxRequestRetries: 1,
    navigationTimeoutSecs: 60,
    autoscaledPoolOptions: { minConcurrency: 30 },
    maxSessionRotations: 5,
    preNavigationHooks: [
        async ({ blockRequests }, goToOptions) => {
            if (goToOptions) goToOptions.waitUntil = "domcontentloaded"; // Set waitUntil here
            await blockRequests({
                urlPatterns: [
...
                ],
            });
        },
    ],
    proxyConfiguration,
    requestHandler: router,
});
await crawler.run(startUrls);
await Actor.exit();
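One suspect in the config above: Crawlee's autoscaled pool normally scales concurrency down when memory is overloaded, but `minConcurrency: 30` forces it to keep 30 browser pages open regardless of memory pressure. A hedged sketch that caps concurrency instead of pinning a floor (the numbers are assumptions, not measured values):

```javascript
import { PuppeteerCrawler } from "crawlee";

const crawler = new PuppeteerCrawler({
    autoscaledPoolOptions: {
        // Upper bound only; no minConcurrency floor, so the pool
        // can scale down when memory is critically overloaded.
        maxConcurrency: 30,
    },
    requestHandler: router,
});
```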
Screenshot_2025-03-20_at_20.54.33.png