Can’t find info on the URL base of a crawler

How can I handle links in Crawlee (Playwright) that the crawler has already visited? That is, how do I stop it from re-processing links it has already handled, based on the state persisted on disk? Is setting `purgeRequestQueue: false` sufficient? If I simply don't purge the already-processed data, is deduplication handled automatically?

For example, I'd like to crawl in chunks: on the first run, the crawler processes the first 50 URLs it collects dynamically from the pages; on a second run, those 50 URLs would be skipped and the next 50 processed, and so on.
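For context, here is roughly what I have in mind. This is only a sketch of my intent, assuming `Configuration`'s `purgeOnStart: false` and `maxRequestsPerCrawl` are the right knobs for keeping the request queue between runs and limiting each chunk (whether this actually works this way is part of my question):

```typescript
import { PlaywrightCrawler, Configuration } from 'crawlee';

const crawler = new PlaywrightCrawler(
    {
        // Process at most ~50 requests per run (one "chunk").
        maxRequestsPerCrawl: 50,
        async requestHandler({ request, enqueueLinks }) {
            console.log(`Processing ${request.url}`);
            // Dynamically discovered links are enqueued; the queue
            // deduplicates them by uniqueKey, so already-handled URLs
            // should not be processed again on the next run.
            await enqueueLinks();
        },
    },
    // Don't purge storage on start, so the queue's handled/pending
    // state survives between runs (is this the right setting?).
    new Configuration({ purgeOnStart: false }),
);

await crawler.run(['https://example.com']);
```

Run the script repeatedly: my hope is that each run picks up the next ~50 pending URLs from the persisted queue.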