Apify Discord Mirror

Updated last month

Splitting the handlers into multiple files and testing

At a glance
The community member is looking for the best pattern to organize their website-scraping code, with one Python file per site (so some files contain more than one handler). They are considering two approaches: 1) keeping a global router variable and passing it around as a singleton, or 2) importing the handler files into a central routes.py and registering the handlers there, though the wrapper functions this requires look ugly to them. In the comments, another community member suggests registering handlers by calling the decorator directly, e.g. router.handler('label')(handler_one).
Hello, so ideally I would like to have a file per website I'm scraping (so some will contain more than one handler per .py file). I'm thinking about what the best pattern for that is. I was just going from the docs and have router = Router[BeautifulSoupCrawlingContext]() as a global var in my routes.py, but I would need to either pass that router around as a singleton into the different handler files, or import the files into the one routes.py and register the handlers there, which sounds better. But then I have something like webpage_handler.py with my handler_one(context) and handler_two(context), and I register them in routes with the code below. Which is fine but doesn't look too pretty.
Python
@router.handler("my_label")
async def handler(context: BeautifulSoupCrawlingContext) -> None:
    await handler_one(context)  # handler_one is async here, so it must be awaited

@router.handler("another_label")
async def handler_another_name(context: BeautifulSoupCrawlingContext) -> None:
    await handler_two(context)



To be honest I'm not super sure, so I'm wondering if someone already has a nice pattern that works.
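One pattern that avoids both the wrapper functions above and passing the router around by hand is to put the shared Router instance in its own small module and have each per-site handler file import it and register with the decorator directly. A minimal sketch, assuming Crawlee for Python; the module name router.py, the labels, and the log lines are illustrative, and the crawlee import paths may differ between versions:

Python
# router.py - hypothetical module holding the single shared Router instance
from crawlee.router import Router
from crawlee.crawlers import BeautifulSoupCrawlingContext  # path may differ across Crawlee versions

router = Router[BeautifulSoupCrawlingContext]()

Python
# webpage_handler.py - one file per scraped site; handlers register themselves on import
from crawlee.crawlers import BeautifulSoupCrawlingContext

from .router import router

@router.handler("my_label")
async def handler_one(context: BeautifulSoupCrawlingContext) -> None:
    context.log.info(f"handler_one: {context.request.url}")

@router.handler("another_label")
async def handler_two(context: BeautifulSoupCrawlingContext) -> None:
    context.log.info(f"handler_two: {context.request.url}")

The only extra step is importing each handler module once (e.g. from . import webpage_handler in main.py) before passing router to the crawler as its request handler, so that the decorators actually run.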
1 comment
Hi. You could use decorators in this way: router.handler('label')(handler_one)
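In other words, router.handler('label') is an ordinary decorator factory, so it can be called as a plain function to register an already-written handler, and routes.py shrinks to a list of registrations. A minimal sketch, assuming webpage_handler.py defines handler_one and handler_two as plain async functions (no decorators) and reusing the hypothetical shared-router module from the sketch above:

Python
# routes.py - register existing handler functions without wrapper defs
from .router import router  # hypothetical module holding the shared Router
from .webpage_handler import handler_one, handler_two

router.handler("my_label")(handler_one)
router.handler("another_label")(handler_two)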