How to handle TimeoutErrors
We've just started using Crawlee 0.1.2 for some basic web-scraping tasks, and it's not clear to me from the docs how to handle cases where an HTTP request times out.
Below is a simple script that scrapes a webpage and prints the number of anchor tags on the page. One particular site blocks the request by simply never responding, so the script hangs indefinitely.
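The original script isn't reproduced here, so as a stand-in, here is a minimal sketch of what it does using only the standard library instead of Crawlee (the URL is a placeholder): it fetches a page, counts the `<a>` tags, and sets no timeout on the request, so a server that accepts the connection but never responds blocks it forever.

```python
from html.parser import HTMLParser
from urllib.request import urlopen


class AnchorCounter(HTMLParser):
    """Counts opening <a> tags as the parser feeds through the document."""

    def __init__(self) -> None:
        super().__init__()
        self.count = 0

    def handle_starttag(self, tag: str, attrs) -> None:
        if tag == "a":
            self.count += 1


def count_anchors(html: str) -> int:
    parser = AnchorCounter()
    parser.feed(html)
    return parser.count


def fetch(url: str) -> str:
    # NOTE: no timeout is passed here, so a server that accepts the
    # connection but never sends a response will block this call forever.
    with urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace")


# Hypothetical usage (placeholder URL):
# print(count_anchors(fetch("https://example.com")))
```

With the stdlib client, passing `timeout=...` to `urlopen` would make the hung request raise an exception instead of blocking; presumably the Crawlee question is about where the equivalent setting lives in its API.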
How can I handle this case so that the script fails gracefully instead of hanging?
Thanks in advance :perfecto:
