ratty-blush•10mo ago
crawlee not respecting cgroup resource limits
Crawlee doesn't seem to respect resource limits imposed by cgroups. This poses problems in containerised environments, where Crawlee either gets OOM-killed or silently slows to a crawl because it thinks it has much more resource available than it actually does. Reading and setting the maximum RAM is pretty easy.
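A minimal sketch of reading the memory limit from the cgroup filesystem, assuming the standard `/sys/fs/cgroup` mount point (helper names here are mine, not Crawlee APIs):

```typescript
import { existsSync, readFileSync } from 'node:fs';

// Parse the raw contents of a cgroup memory-limit file into bytes,
// or null when no effective limit is set.
function parseMemoryLimit(raw: string): number | null {
  const trimmed = raw.trim();
  if (trimmed === 'max') return null; // cgroup v2: unlimited
  const bytes = Number(trimmed);
  // cgroup v1 reports a huge sentinel value (~2^63) when unlimited
  if (!Number.isFinite(bytes) || bytes >= Number.MAX_SAFE_INTEGER) return null;
  return bytes;
}

// Read the container's memory limit, trying cgroup v2 first, then v1.
function cgroupMemoryLimitBytes(): number | null {
  for (const path of [
    '/sys/fs/cgroup/memory.max',                   // cgroup v2
    '/sys/fs/cgroup/memory/memory.limit_in_bytes', // cgroup v1
  ]) {
    if (existsSync(path)) return parseMemoryLimit(readFileSync(path, 'utf8'));
  }
  return null; // not in a cgroup-limited environment (or non-Linux)
}
```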
That can then be used to set a reasonable RAM limit for Crawlee. The CPU limits, however, are proving more difficult. Has anyone found a fix yet?
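For what it's worth, a sketch of feeding a detected limit to Crawlee: `CRAWLEE_MEMORY_MBYTES` is an environment variable Crawlee reads for its memory ceiling; the conversion helper and the 0.8 headroom ratio are my own assumptions:

```typescript
// Convert a byte limit into the megabyte figure Crawlee expects, leaving
// some headroom for the Node.js runtime itself. The 0.8 ratio is an
// arbitrary assumption, not a Crawlee default.
function toCrawleeMemoryMbytes(limitBytes: number, headroom = 0.8): number {
  return Math.floor((limitBytes * headroom) / (1024 * 1024));
}

// e.g. in the container entrypoint, before starting the crawler:
// process.env.CRAWLEE_MEMORY_MBYTES = String(toCrawleeMemoryMbytes(limitBytes));
```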
3 Replies
ratty-blushOP•10mo ago
Reading the code for the getMemoryInfo utility function,
https://github.com/apify/crawlee/blob/master/packages/utils/src/internals/memory-info.ts#L53
it relies on the isDocker utility function to decide whether to read the limits from cgroups:
https://github.com/apify/crawlee/blob/master/packages/utils/src/internals/general.ts#L39
I think my problem may be that since I'm running in k8s, this check fails, meaning Crawlee defaults to working against the host's resource limits.
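For context, my reading of that check (an approximation, not the library's exact code): it treats the process as dockerised only if `/.dockerenv` exists or `/proc/self/cgroup` mentions docker, and under k8s with a non-Docker runtime like containerd neither may hold:

```typescript
import { statSync, readFileSync } from 'node:fs';

// Pure helper so the detection logic is testable without a real /proc.
function cgroupMentionsDocker(cgroupContents: string): boolean {
  return cgroupContents.includes('docker');
}

// Approximation of the isDocker check in general.ts (my reading of the
// linked code, not a verbatim copy). Under k8s with containerd, the
// /.dockerenv file is absent and the cgroup paths don't mention docker,
// so both tests fail and cgroup limits are never consulted.
function isDockerLike(): boolean {
  let hasDockerEnv = false;
  try {
    statSync('/.dockerenv');
    hasDockerEnv = true;
  } catch { /* file absent */ }

  let hasDockerCgroup = false;
  try {
    hasDockerCgroup = cgroupMentionsDocker(readFileSync('/proc/self/cgroup', 'utf8'));
  } catch { /* /proc unavailable, e.g. non-Linux */ }

  return hasDockerEnv || hasDockerCgroup;
}
```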
ratty-blushOP•10mo ago
I'm going to try tricking this function by manually creating a /.dockerenv file to make isDocker return true.
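If it works, the trick could be baked into the image; a hypothetical Dockerfile line:

```dockerfile
# Hypothetical workaround: create the marker file the isDocker check looks
# for, so cgroup limits are read even under non-Docker runtimes (containerd).
RUN touch /.dockerenv
```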
This appears to have worked: fudging in a /.dockerenv file makes Crawlee respect the cgroup memory and CPU limits.
The same issue still persists for CPU though: it's reading the host's total CPU usage rather than its cgroup's.
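In case it helps anyone, a sketch of reading the CPU quota directly from the cgroup filesystem (v2 `cpu.max`, falling back to the v1 cfs files; helper names are mine, not Crawlee APIs):

```typescript
import { existsSync, readFileSync } from 'node:fs';

// Parse a cgroup v2 `cpu.max` file ("<quota> <period>" or "max <period>")
// into an effective CPU count, or null when no quota is set.
function parseCpuMax(raw: string): number | null {
  const [quota, period] = raw.trim().split(/\s+/);
  if (quota === 'max') return null; // no quota set
  return Number(quota) / Number(period);
}

// Read the container's effective CPU limit (e.g. 0.5 for half a core),
// assuming the standard /sys/fs/cgroup mount point.
function cgroupCpuLimit(): number | null {
  // cgroup v2: quota and period share one file
  if (existsSync('/sys/fs/cgroup/cpu.max')) {
    return parseCpuMax(readFileSync('/sys/fs/cgroup/cpu.max', 'utf8'));
  }
  // cgroup v1: quota and period live in separate files
  const quotaPath = '/sys/fs/cgroup/cpu/cpu.cfs_quota_us';
  const periodPath = '/sys/fs/cgroup/cpu/cpu.cfs_period_us';
  if (existsSync(quotaPath) && existsSync(periodPath)) {
    const quota = Number(readFileSync(quotaPath, 'utf8'));
    if (quota < 0) return null; // -1 means unlimited
    return quota / Number(readFileSync(periodPath, 'utf8'));
  }
  return null;
}
```

Comparing that value against `os.cpus().length` shows how far off the host-based figure is inside a limited container.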