I’m not so much concerned about the environmental impacts of AI as I am that society seems to be losing its ability to think critically altogether. Like, why are you outsourcing a text message to an LLM? Can you seriously not form a sentence anymore
upvote 17 downvote

Anonymous 2d

It’s not the LLMs. Just a symptom of being a lib

upvote -1 downvote
Anonymous replying to -> #1 2d

Have you been on X recently? Boatloads of conservatives “@grok”ing the simplest things

upvote 8 downvote
Anonymous replying to -> OP 2d

Yeah because it holds the world’s knowledge. That’s like saying someone is stupid for using google

upvote 0 downvote
Anonymous replying to -> OP 2d

Lmao you have to pay for it now but I’ve seen some disappointing stuff

upvote 1 downvote
Anonymous replying to -> #1 2d

At least with Google, though, you had to think about how to form a query that would get you the results you wanted, sift through sources, and think critically about where the info was coming from. LLMs do a lot of that work for you and just spoonfeed you the answer. Plus, if you saw something on Twitter like “Joe Biden shot dead in Gaza” and had to Google whether or not it was true, I’d still think you’re dumb

upvote 4 downvote
Anonymous replying to -> OP 2d

Why would you think the person is dumb? That’s something that could actually happen.

upvote 1 downvote
Anonymous replying to -> #1 2d

Because it’s highly implausible, and even on the wild chance it happened, I doubt you’d hear it first from, like, MAGA AMERICA X NEWS on social media

upvote 6 downvote
Anonymous replying to -> #1 2d

Frankly, another big issue with this is people falsely believing that AI has all the answers and is correct all the time, when it demonstrably is not. It provides false responses all the time. One of the most obvious examples was Grok not believing Kirk was dead. Or when Grok’s output was intentionally modified to say Elon was the best person ever at any physical or mental activity.

upvote 1 downvote
Anonymous replying to -> #1 2d

And frankly I also do think that people’s over-reliance on Google was already a problem before AI made it so much worse. People used the first search result rather than knowing how to do actual research, and disinformation was spread that way all the time.

upvote 1 downvote
Anonymous replying to -> #3 2d

The Grok not knowing about Kirk dying was definitely because the model was trained before that happened. But the Elon dickriding was genuinely insane 😭 just shows that it’s all about the training data, the parameters, and the system prompt

upvote 1 downvote
Anonymous replying to -> #2 2d

And it shows how insidious this reliance on AI is. People assume it’s objectively correct and holds all the answers. Then the company that owns the AI can just modify the output to push whatever agenda they want. Or the output is unintentionally biased just from the training data.

upvote 5 downvote