Hey everyone, I would love it if we had a way to stop the crawler itself from inside a request handler, either via crawler.teardown() or crawler.requestQueue.drop() (not sure about this one). The main use case is saving on proxy costs and preventing crawlers from redundantly scraping data, or stopping on some other arbitrary condition.

I have found a workaround: setting a shutdown flag in the state (or even a plain variable) and checking it inside the handlers; if it's true, I just do a return; so the queue empties out. While this works, it adds a lot of noise to the logs (and to the code), because for debugging purposes we need to log that requests are being skipped due to the flag. I wish this were handled more gracefully by the crawler itself instead of every request handler having to check for it.
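For context, here is a minimal, dependency-free sketch of the workaround described above. It simulates a request queue being drained while a shared shutdown flag short-circuits every handler call; the names (handler, processQueue) are illustrative only and not Crawlee APIs.

```javascript
// Shared shutdown flag, flipped once a stop condition is met.
let shuttingDown = false;

const processed = [];
const skipped = [];

// Stand-in for a Crawlee requestHandler. Every handler must start
// with the same flag check -- this is the boilerplate the post is about.
function handler(request) {
  if (shuttingDown) {
    // The log noise: each skipped request has to be recorded
    // so debugging stays possible.
    skipped.push(request.url);
    return;
  }
  processed.push(request.url);
  // Arbitrary stop condition, e.g. the data we need is already scraped.
  if (request.url.endsWith('/2')) {
    shuttingDown = true;
  }
}

// Stand-in for the crawler's run loop draining the queue.
function processQueue(queue) {
  for (const request of queue) {
    handler(request);
  }
}

const queue = [
  { url: 'https://example.com/1' },
  { url: 'https://example.com/2' },
  { url: 'https://example.com/3' },
];
processQueue(queue);
```

The point is that the queue still gets fully drained as no-ops; a built-in stop mechanism would let the crawler abandon the remaining requests in one place instead of in every handler.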