A community member encountered a website that uses shadow DOM, where their web scraping tool, crawlee, was unable to find elements, and asked what to look at to make it work. The comments note that accessing elements inside a shadow DOM can be challenging; for instance, Chrome's "Copy JS path" option is disabled for those elements. Community members discussed the Puppeteer library and the Chrome DevTools Protocol as potential ways to reach into the shadow DOM. However, no comment is explicitly marked as the answer.
Since the page only opens a consent dialog in the shadow DOM, I can just ignore it for now. But if the whole page were encased in it, that might be a problem.
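In case it helps anyone hitting the same wall, here is a rough sketch of how one might reach elements inside an open shadow root using plain Puppeteer (which crawlee can drive via its PuppeteerCrawler). The URL, selectors, and element names are made up for illustration; the two approaches shown are Puppeteer's built-in `pierce/` query handler and walking `element.shadowRoot` manually in the page context.

```ts
// Sketch only: the URL and selectors below are hypothetical placeholders.
// Works for *open* shadow roots; closed shadow roots are not reachable this way.
import puppeteer from 'puppeteer';

async function main(): Promise<void> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com'); // placeholder URL

  // Option 1: the `pierce/` prefix tells Puppeteer to search through shadow roots.
  const acceptButton = await page.$('pierce/button.accept-cookies'); // hypothetical selector
  if (acceptButton) {
    await acceptButton.click();
  }

  // Option 2: walk the open shadow root explicitly with standard DOM APIs.
  const dialogText = await page.evaluate(() => {
    const host = document.querySelector('#consent-host'); // hypothetical host element
    return host?.shadowRoot?.querySelector('.dialog-text')?.textContent ?? null;
  });
  console.log(dialogText);

  await browser.close();
}

main().catch(console.error);
```

The same idea carries over to a crawlee request handler, since it hands you the Puppeteer `page` object directly.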