recent-teal
Apify & Crawlee · 11mo ago · 4 replies

Memory is critically overloaded

I have an AWS EC2 instance with 64 GB of memory. My crawler runs in a Docker container, and CRAWLEE_MEMORY_MBYTES is set to 61440.

My Docker config:
docker run --rm --init -t $docker_args \
    -v /mnt/storage:/app/storage \
    --user appuser \
    --security-opt seccomp=/var/lib/jenkins/helpers/docker/seccomp_profile.json \
    -e MONGO_HOST=${MONGO_HOST} \
    -e AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} \
    -e AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY} \
    -e SPAWN=${SPAWN} \
    -e MONGO_CACHE=${MONGO_CACHE} \
    -e CRAWLEE_MEMORY_MBYTES=${CRAWLEE_MEMORY_MBYTES} \
    ${IMAGE_NAME} $prog_args

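For reference, here is a rough debugging sketch I can run inside the container to compare the memory the host reports with any cgroup limit Docker has applied. The cgroup paths below are the standard v2/v1 locations, not anything Crawlee-specific:

```python
# Debugging probe: what memory does this container actually see?
# Paths are the standard cgroup v2 / v1 locations; both may be absent
# outside a container, in which case only the host total is printed.
from pathlib import Path

# Total memory as reported by the kernel (same source as `free`).
mem_total_kb = 0
for line in Path("/proc/meminfo").read_text().splitlines():
    if line.startswith("MemTotal:"):
        mem_total_kb = int(line.split()[1])
        break
print(f"reported total memory: {mem_total_kb // 1024} MB")

# Container memory limit, if any cgroup file is present.
for limit_path in ("/sys/fs/cgroup/memory.max",               # cgroup v2
                   "/sys/fs/cgroup/memory/memory.limit_in_bytes"):  # cgroup v1
    p = Path(limit_path)
    if p.exists():
        print(f"cgroup limit ({limit_path}): {p.read_text().strip()}")
        break
```

If the cgroup limit is far below 61440 MB, the container (not the host) is the constraint.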

After crawling for around 15-20 minutes I receive the warning "Memory is critically overloaded". But when I run free -h, I still have a lot of free memory, as you can see in the screenshots.

The Playwright crawler is configured like this:

crawler = PlaywrightCrawler(
    concurrency_settings=ConcurrencySettings(
        min_concurrency=25,
        max_concurrency=125,
    ),
    request_manager=request_queue,
    request_handler=router,
    retry_on_blocked=True,
    headless=True,
)

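As a back-of-envelope sanity check on these settings (the per-page memory figure below is an assumption for illustration, not a measured value), the configured concurrency alone can account for a large share of the memory budget:

```python
# Rough estimate of memory held by concurrent Playwright pages.
# est_mb_per_page is an assumed average; real usage depends on the
# sites being crawled and can be much higher for heavy pages.
max_concurrency = 125   # from the crawler config above
est_mb_per_page = 200   # ASSUMPTION, for illustration only
budget_mb = 61440       # CRAWLEE_MEMORY_MBYTES

est_total_mb = max_concurrency * est_mb_per_page
print(f"estimated page memory: {est_total_mb} MB "
      f"({est_total_mb / budget_mb:.0%} of the configured budget)")
# → estimated page memory: 25000 MB (41% of the configured budget)
```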

I'm a bit clueless as to why I have this issue. In the past I had issues with zombie processes, but that was solved by adding --init to my docker run command.
Do you have any idea how I can fix this or debug it further?
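One quick probe I can add to the crawler itself is dumping the process's peak resident set size with the standard library (just a debugging aid, not how Crawlee measures memory):

```python
# Peak resident set size of the current process.
# On Linux, ru_maxrss is reported in kibibytes.
import resource

peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(f"peak RSS: {peak_kb / 1024:.0f} MiB")
```

Logging this periodically alongside the Crawlee warning would show whether the crawler's own process tree, rather than the system as a whole, is approaching the limit.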
Screenshot_2025-04-23_at_15.06.09.png
Screenshot_2025-04-23_at_15.05.40.png