sensitive-blue•2y ago
Need serious help scaling crawlee
I have an ECS instance with 4vCPU & 16gb RAM. My scaling options are the following:
I am starting 4 of these crawlers at a time.
Here is a snapshot log:
Can anyone help me identify the correct settings so it is not maxed out?
17 Replies
Hi @bmax
Did you set the right env variable for memory allocation? https://crawlee.dev/docs/guides/configuration#crawlee_memory_mbytes
sensitive-blueOP•2y ago
@Pepa J do you think that is what's missing here? I do set
CRAWLEE_AVAILABLE_MEMORY_RATIO=.80
but how come concurrency is 1 when I have it set to a max of 200?
@bmax It could be many things, like requests being enqueued one by one; I can't really tell without seeing the code. Does the code work as expected when running locally?
@bmax by starting 4 of these crawlers, you mean like 4 separate processes? And it can take
0.8 * 16GB = 12.8GB => 4 * 12.8GB = 51.2GB
total memory?
sensitive-blueOP•2y ago
No, this is one instance of node on one ec2 instance. I start 4 of them asynchronously, i.e. .run() with separate queues.
Thanks for diving deep with me. Happy to answer any other questions. It feels kind of like trial and error to me but I guess I really don’t understand why concurrency isn’t hitting at least double digits
So what happens if you try to run it with env variable set to
CRAWLEE_MEMORY_MBYTES=13107
which is about 80% of 16GB?
sensitive-blueOP•2y ago
@Pepa J Memory doesn't seem to be the biggest issue here, CPU is the one maxing out, but I guess I don't understand the
PuppeteerCrawler:AutoscaledPool: state
log well enough to know what the bottleneck is.
But happy to try that if you think that is the problem.
I figured it was CPU because: "cpuInfo":{"isOverloaded":true,"limitRatio":0.4,"actualRatio":1}
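As a side note on the memory setting discussed above, Pepa's suggested CRAWLEE_MEMORY_MBYTES=13107 follows directly from the instance size (assuming 16GB here means 16 GiB):

```javascript
// 80% of 16 GiB, expressed in MiB, for CRAWLEE_MEMORY_MBYTES.
const totalMb = 16 * 1024;                   // 16384 MiB total RAM
const suggested = Math.floor(totalMb * 0.8);
console.log(suggested);                      // 13107
```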
@bmax When you run it locally, does everything work properly?
sensitive-blueOP•2y ago
will test in a few.
I also have huge cpu/mem usage
I would like to distinguish if it is resources, website or implementation related issue. 🙂
sensitive-blueOP•2y ago
Running on local, which basically has unlimited cpu/mem.
It seems pretty fast,
but the log that it returns still shows single digits for concurrency.
here's an interesting thing from the ec2 instance:
501 total requests but only 18 per minute?
@Pepa J can I pay you a consulting/hourly fee to check this out with me?
If there are no warnings or retries in the log, then it seems like either a pretty heavy website or a deoptimization in the code; we have had some bad experiences with 3rd party libraries that did synchronous, blocking sorting/transforming of data.
Unfortunately we do not provide such a service here on Discord. I mostly advise filling some "dangerous code" areas with timestamped logs, so it can be determined what is taking so much time. Or you may run the Actor in headful mode with
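One way to catch that kind of synchronous blocking is to wrap suspect sections with timestamped logs; a minimal helper sketch (the `timed` function and its usage are illustrative, not from the OP's code):

```javascript
// Wrap a synchronous section with timestamped start/end logs to see
// which step blocks the event loop for a long time.
function timed(label, fn) {
    const start = Date.now();
    console.log(`[${new Date(start).toISOString()}] ${label}: start`);
    const result = fn();
    const ms = Date.now() - start;
    console.log(`[${new Date().toISOString()}] ${label}: done in ${ms} ms`);
    return result;
}

// Hypothetical usage inside a request handler:
// const rows = timed('parse-table', () => parseHugeTable(html));
```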
maxConcurrency: 1
locally and see if you spot anything.
sensitive-blueOP•2y ago
Are you thinking event loop lag?
Something like that, probably not a full one, since you are getting the logs, but I don't have deep knowledge there. I know the browsers should run in a separate thread, but the CDP instructions for the browser run in a single thread 🤔
So first I would discover what is causing the blocking; it should be easy if you can reproduce it with
maxConcurrency: 1
sensitive-blueOP•2y ago
@Pepa J

sensitive-blueOP•2y ago
any puppeteer settings you know of that would make chrome use fewer resources?
@bmax I don't think you can save many resources that way... Can you just keep the website open without processing anything? Is it still using that much CPU?
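For completeness, some commonly used Chromium flags that can trim per-browser resource usage; a hedged config sketch (flag impact varies by workload, so measure before and after; note that disabling images changes what the page loads):

```javascript
// Sketch: Crawlee PuppeteerCrawler with launch flags that often reduce
// Chrome's CPU/memory footprint. Effectiveness varies by workload.
import { PuppeteerCrawler } from 'crawlee';

const crawler = new PuppeteerCrawler({
    launchContext: {
        launchOptions: {
            args: [
                '--disable-gpu',
                '--disable-dev-shm-usage',              // helps in containers with small /dev/shm
                '--disable-extensions',
                '--blink-settings=imagesEnabled=false', // skip image decoding entirely
            ],
        },
    },
    async requestHandler({ page }) {
        // ... scraping logic ...
    },
});
```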