Apify & Crawlee · 11mo ago · 4 replies
clean-aquamarine

Routers not working as expected

Hello everyone

First of all, thanks for this project — it looks really good and promising!

I'm considering using Crawlee as an alternative to Scrapy.

I'm trying to use a router to run different handlers based on the URL.

But the request is never picked up by the `pdf_handler` handler.

I’d appreciate any insights — am I missing something here?



Here’s my crawl.py:

import asyncio
from crawlee.crawlers import AdaptivePlaywrightCrawler
from crawlee import service_locator
from routes import router

async def main() -> None:
    configuration = service_locator.get_configuration()
    configuration.persist_storage = False
    configuration.write_metadata = False

    crawler = AdaptivePlaywrightCrawler.with_beautifulsoup_static_parser(
        request_handler=router,
        max_requests_per_crawl=5,
    )

    await crawler.run(['https://investor.agenusbio.com/news/default.aspx'])

if __name__ == '__main__':
    asyncio.run(main())


and here my routes:

from __future__ import annotations

from crawlee.crawlers import AdaptivePlaywrightCrawlingContext
from crawlee.router import Router
from crawlee import RequestOptions, RequestTransformAction

router = Router[AdaptivePlaywrightCrawlingContext]()

def transform_request(request_options: RequestOptions) -> RequestOptions | RequestTransformAction:
    url = request_options.get('url', '')

    if url.endswith('.pdf'):
        print(f"Request options: {request_options} before")
        request_options['label'] = 'pdf_handler'
        print(f"Request options: {request_options} after")
        return request_options

    return request_options

@router.default_handler
async def default_handler(context: AdaptivePlaywrightCrawlingContext) -> None:
    await context.enqueue_links(
        transform_request_function=transform_request,
    )

@router.handler(label='pdf_handler')
async def pdf_handler(context: AdaptivePlaywrightCrawlingContext) -> None:
    context.log.info('Processing PDF: %s', context.request.url)
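
For what it's worth, the transform logic itself seems fine when I exercise it on plain dicts outside of Crawlee (a minimal sketch, assuming `RequestOptions` supports plain dict access, which is how I'm using it above):

```python
# Sanity-check the label assignment on plain dicts, independent of Crawlee.

def transform_request(request_options: dict) -> dict:
    url = request_options.get('url', '')
    if url.endswith('.pdf'):
        # Route PDF links to the dedicated handler.
        request_options['label'] = 'pdf_handler'
    return request_options

pdf = transform_request({'url': 'https://example.com/report.pdf'})
html = transform_request({'url': 'https://example.com/news.aspx'})
print(pdf.get('label'))   # pdf_handler
print(html.get('label'))  # None
```

So the label does get set for `.pdf` URLs; the problem seems to be somewhere between `enqueue_links` and the router dispatch.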