
At least with Google you had to think about how to form a query that would get you the results you wanted, sift through sources, and think critically about where the info was coming from. LLMs do a lot of that work for you and just spoonfeed you the answer. Plus, if you saw something on Twitter like "Joe Biden shot dead in Gaza" and had to Google whether or not it was true, I'd still think you're dumb.
Frankly, another big part of this issue is people falsely believing that AI has all the answers and is correct all the time, when it demonstrably is not. It produces false responses all the time. One of the most obvious examples was Grok refusing to believe Kirk was dead. Or when Grok's output was intentionally modified to say Elon was the best person ever at any physical or mental activity.