System design for concurrent crawlers
I have multiple crawlers (primarily Playwright), one per site, and each works completely fine when I run only that one crawler on its own.
I have tried running them concurrently: the server emits a scrape event, which in turn emits an individual scrape event per site, and each of those kicks off that site's crawler.
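To make the setup concrete, here is a minimal sketch of the fan-out I'm describing. The event name ("scrape"), the SITES list, and the crawler launch are stand-ins for my real server code, not the actual implementation:

```typescript
import { EventEmitter } from "node:events";

// Hypothetical stand-ins for the real server-side setup.
const SITES = ["site-a", "site-b", "site-c"];
const bus = new EventEmitter();

const started: string[] = [];

// One listener per scrape event launches that site's crawler.
bus.on("scrape", (site: string) => {
  started.push(site); // placeholder for launching the Playwright crawler
});

// The server emits one scrape event per site, so every crawler
// starts at once -- nothing here bounds how many run concurrently.
for (const site of SITES) bus.emit("scrape", site);

console.log(started.join(","));
```

The point of the sketch is that all crawlers are launched simultaneously with no cap on concurrent browser instances.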
When I do this, I hit frequent memory overloads, navigation timeouts, lots of skipped products, and crawlers that end early.
Each crawler essentially takes a set of base URLs (or scrapes those base URLs to collect product URLs), then visits each product URL individually to extract the product page info.
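The two-stage flow per crawler (base URL → product URLs → product info) can be sketched as below, with a small worker pool capping how many product pages are in flight at once. `fetchProductUrls` and `fetchProduct` are hypothetical stand-ins for the real Playwright page logic, and the cap of 2 is arbitrary:

```typescript
// Stage 1 stand-in: scrape a base URL for its product URLs.
async function fetchProductUrls(baseUrl: string): Promise<string[]> {
  return [`${baseUrl}/p1`, `${baseUrl}/p2`, `${baseUrl}/p3`];
}

// Stage 2 stand-in: scrape one product page for its info.
async function fetchProduct(url: string): Promise<{ url: string }> {
  return { url };
}

// Minimal worker pool: at most `limit` product pages in flight at once.
async function crawl(baseUrl: string, limit: number) {
  const queue = await fetchProductUrls(baseUrl);
  const results: { url: string }[] = [];
  const workers = Array.from({ length: limit }, async () => {
    let url: string | undefined;
    // Each worker pulls from the shared queue until it is empty.
    while ((url = queue.shift()) !== undefined) {
      results.push(await fetchProduct(url));
    }
  });
  await Promise.all(workers);
  return results;
}

crawl("https://example.com", 2).then((r) => console.log(r.length));
```

Bounding the in-flight pages like this (rather than opening one page per product URL immediately) is the kind of limit my current event-driven setup lacks.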