Eric · Apify & Crawlee · 5mo ago · 6 replies
Whole crawler dies because "failed to lookup address information: Name or service not known"

I can't reproduce it in a simple example (it may be a transient error), but I hit it regularly and it kills the crawler completely.

Traceback:
  File "crawlee/crawlers/_basic/_basic_crawler.py", line 1366, in __run_task_function
    if not (await self._is_allowed_based_on_robots_txt_file(request.url)):

  File "crawlee/crawlers/_basic/_basic_crawler.py", line 1566, in _is_allowed_based_on_robots_txt_file
    robots_txt_file = await self._get_robots_txt_file_for_url(url)

  File "crawlee/crawlers/_basic/_basic_crawler.py", line 1589, in _get_robots_txt_file_for_url
    robots_txt_file = await self._find_txt_file_for_url(url)

  File "crawlee/crawlers/_basic/_basic_crawler.py", line 1599, in _find_txt_file_for_url
    return await RobotsTxtFile.find(url, self._http_client)

  File "crawlee/_utils/robots.py", line 48, in find
    return await cls.load(str(robots_url), http_client, proxy_info)

  File "crawlee/_utils/robots.py", line 59, in load
    response = await http_client.send_request(url, proxy_info=proxy_info)

  File "crawlee/http_clients/_impit.py", line 167, in send_request
    response = await client.request(

impit.ConnectError: Failed to connect to the server.
Reason: hyper_util::client::legacy::Error(
    Connect,
    ConnectError(
        "dns error",
        Custom {
            kind: Uncategorized,
            error: "failed to lookup address information: Name or service not known",
        },
    ),
)
exited with code 1


This is my crawler:
crawler = AdaptivePlaywrightCrawler.with_beautifulsoup_static_parser(
  playwright_crawler_specific_kwargs={
      "browser_type": "firefox",
      "headless": True,
  },
  max_session_rotations=10,
  retry_on_blocked=True,
  max_request_retries=5,
  keep_alive=True,
  respect_robots_txt_file=True,
)


I'm on version 1.0.4 and was crawling crawlee.dev (though it doesn't fail on any specific page).
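As a stopgap I'm considering retrying transient DNS failures myself before giving up. This is only a sketch of the idea, not crawlee internals: `ConnectError` below is a local stand-in for `impit.ConnectError`, and `with_dns_retries` / `flaky_fetch` are hypothetical names I made up for illustration.

```python
import asyncio

# Stand-in for impit.ConnectError; swap in the real exception class.
class ConnectError(Exception):
    pass

async def with_dns_retries(fetch, attempts=3, backoff=0.0):
    """Call `fetch()` up to `attempts` times, backing off between tries."""
    for attempt in range(1, attempts + 1):
        try:
            return await fetch()
        except ConnectError:
            if attempt == attempts:
                raise  # out of retries, re-raise the original error
            await asyncio.sleep(backoff * attempt)

# Demo: fail twice with a "dns error", succeed on the third try.
calls = {"n": 0}

async def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectError("failed to lookup address information")
    return "robots.txt contents"

result = asyncio.run(with_dns_retries(flaky_fetch, attempts=3))
print(result)  # robots.txt contents
```

That said, I'd rather the crawler treated this as a per-request failure (subject to `max_request_retries`) instead of dying, since the error comes from the internal robots.txt lookup rather than from my own handler code.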