fierDeToiMonGrand · 2mo ago · 2 replies

Optimization: replacing addRequests with sendRequest to save Queue Writes cost—blocking concerns?

🚀 RequestQueue
Hey everyone, I’m trying to optimize my actor's costs. I noticed "Request Queue Writes" are eating up a huge chunk of my budget because I currently enqueue a separate request for a "Details/Enrichment" page for every single item I scrape.

I want to switch to fetching these details inline using `sendRequest` to skip the database write costs. However, the target site is aggressive about blocking.

My question is: if I use `sendRequest` inside a handler, is manually passing `session` and `proxyInfo.url` enough to maintain the exact same fingerprint/IP as the main crawler?


I want to avoid the "Details" request looking like a fresh bot hit. Here is the pattern I'm trying to implement:

```js
// CURRENT EXPENSIVE APPROACH:
// Enqueues a whole new request, costing 1 Write + 1 Handled per item
/*
await crawler.addRequests([{
    url: detailUrl,
    label: 'EXTRA_DATA',
    userData: { ... },
}]);
*/

// PROPOSED COST-SAVING APPROACH:
// Fetches inline to avoid touching the Request Queue DB
router.addHandler('ITEM_PAGE', async ({ request, session, proxyInfo, sendRequest, pushData }) => {
    // 1. Scrape the main item data
    const mainData = extractMainData();
    const detailUrl = extractDetailUrl();

    // 2. Fetch extra details inline without queueing
    let extraData = {};
    if (detailUrl) {
        // My concern: Does this perfectly mimic the parent request?
        const { body } = await sendRequest({
            url: detailUrl,
            session: session,          // Passing current session
            proxyUrl: proxyInfo?.url,  // Passing current proxy IP
        });
        extraData = extractExtraData(body);
    }

    // 3. Push the merged record
    await pushData({ ...mainData, ...extraData });
});
```
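One thing worth noting about the blocking concern: an inline `sendRequest` call bypasses the crawler's built-in retry and session-rotation machinery, so a blocked detail response will not be retried automatically the way a queued request would be. A minimal sketch of a manual safeguard is below; `isLikelyBlocked` is a hypothetical helper (not part of Crawlee), and the usage comments assume the handler context names from the snippet above.

```javascript
// Hypothetical helper (NOT part of Crawlee): heuristic check for whether
// an inline detail response looks blocked, so that only those items fall
// back to the paid Request Queue path.
const BLOCKED_STATUS_CODES = [401, 403, 429];

function isLikelyBlocked(statusCode, body = '') {
    return (
        BLOCKED_STATUS_CODES.includes(statusCode) ||
        /captcha|access denied/i.test(body)
    );
}

// Sketch of usage inside the handler (assumed context names):
//
// const { statusCode, body } = await sendRequest({ url: detailUrl });
// if (isLikelyBlocked(statusCode, body)) {
//     session.retire(); // rotate away from the (probably) flagged session
//     await crawler.addRequests([{ url: detailUrl, label: 'EXTRA_DATA' }]);
// } else {
//     const extraData = extractExtraData(body);
// }
```

With this shape, only the small fraction of blocked items pays the 1 Write + 1 Handled cost, while the happy path stays entirely off the queue.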


Thanks!

Similar Threads

How to add headers to `addRequests`
living-lavenderLliving-lavender / crawlee-js
4y ago
PlaywrightCrawler not competing the jobs I added in the queue using addRequests()
Hamid JuttHHamid Jutt / crawlee-js
4w ago
Add label to pages via `crawler.addRequests()`?
exclusive-coralEexclusive-coral / crawlee-js
3y ago
got-scraping vs cheerioCrawler or sendRequest
HonzaSHHonzaS / crawlee-js
2y ago