Abort Crawler on Exception
What I'm trying to achieve: when any exception is thrown for url_1 (e.g. an HTTP error status, or any other exception inside the request handler), an exception should also be thrown after the first line, so that url_2 won't be scraped.

However, it looks like when there's an exception for url_1, Crawlee handles it gracefully and continues to execute the second line. I searched the docs, GitHub issues, and this channel for a while but had no luck. Is there any configuration I can use to achieve this?
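For context, here is a minimal, library-agnostic sketch of the fail-fast pattern I'm after. The `SwallowingCrawler`, `handler`, and the "url_1"/"url_2" values are all hypothetical stand-ins (not Crawlee APIs): the wrapper records the handler's exception into a shared list before the crawler swallows it, and the caller checks that list between the two run calls so the second URL is never scraped.

```python
import asyncio

class SwallowingCrawler:
    """Hypothetical stand-in for a crawler that, like Crawlee, catches
    request-handler exceptions itself and keeps going instead of
    propagating them to the caller."""

    def __init__(self, handler):
        self.handler = handler

    async def run(self, urls):
        for url in urls:
            try:
                await self.handler(url)
            except Exception as exc:
                # Crawlee-style behaviour: log the failure and continue.
                print(f"handler failed for {url}: {exc}")

async def main():
    errors = []  # shared slot the wrapped handler writes into

    async def handler(url):
        if url == "url_1":
            raise RuntimeError("simulated HTTP 500")  # pretend request failed
        print(f"scraped {url}")

    async def failfast_handler(url):
        # Record the exception before the crawler swallows it.
        try:
            await handler(url)
        except Exception as exc:
            errors.append(exc)
            raise

    crawler = SwallowingCrawler(failfast_handler)
    await crawler.run(["url_1"])  # "first line" of the crawl
    if errors:
        # Abort before the "second line", so url_2 is never scraped.
        print(f"aborting crawl: {errors[0]}")
        return
    await crawler.run(["url_2"])  # skipped when url_1 failed

asyncio.run(main())
```

With a real Crawlee crawler the idea would be the same: flag the failure from inside the request handler (or a failed-request callback), then check the flag before enqueueing or running the second URL.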
