Not skipping over URLs when an element isn't found

When I'm scraping product data from product URLs, I sometimes want to check whether a tag is available and fall back to a different tag if it isn't. If a tag simply can't be found, I don't want the crawler to throw a full error and skip scraping and saving the rest of the data for that URL.

How do I avoid this "skipping" by overriding or changing the crawler's default behavior?

I've even tried try/catch statements and if/else statements, and nothing works.
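For what it's worth, a common way to get this behavior is to extract each field independently with its own fallback, instead of wrapping the whole item in one try/except (where a single failure aborts everything). A minimal sketch of that pattern, assuming BeautifulSoup for parsing (the selectors and class names here are made up for illustration):

```python
# Sketch: per-field extraction with fallbacks, so one missing tag never
# crashes the scrape or drops the rest of the record.
# Assumes BeautifulSoup (bs4); the same idea applies to Scrapy selectors,
# where response.css("...").get(default="") does this natively.
from bs4 import BeautifulSoup

def pick_text(soup, selectors, default=None):
    """Return text from the first selector that matches, else `default`."""
    for sel in selectors:
        tag = soup.select_one(sel)  # returns None instead of raising
        if tag is not None:
            return tag.get_text(strip=True)
    return default

def parse_product(html):
    soup = BeautifulSoup(html, "html.parser")
    # Each field falls back on its own; the selectors below are hypothetical.
    return {
        "title": pick_text(soup, ["h1.product-title", "h1"], default=""),
        "price": pick_text(soup, ["span.sale-price", "span.price"], default="N/A"),
        "sku":   pick_text(soup, ["span.sku"]),  # None if absent
    }

html = '<h1>Widget</h1><span class="price">$9.99</span>'
print(parse_product(html))
# → {'title': 'Widget', 'price': '$9.99', 'sku': None}
```

The key difference from a single try/except around the whole page is that a missing element only affects its own field: every other field is still scraped and the item is still saved.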