You’re welcome for this prompt
Sidechat image post by Anonymous in US Politics. -6 upvotes, 45 comments.

Anonymous 3w

did you just reinvent the fucking index

upvote 11 downvote
Anonymous 3w

I seriously think OP needs to watch Eddy Burback’s video on AI psychosis…

upvote 7 downvote
🍺
Anonymous 3w

Idiot spotted

upvote 5 downvote
Anonymous 3w

as a teacher in the field of biology/ecology working primarily as an outdoor educator, going for my doctorate in Education Technology, please for the love of god LEARN how to find this shit yourself. this isn’t how any person learns… this is just silly… find textbooks yourself that you think will be an umbrella coverage of the problem and then determine (by ctrl+F maybe) where or what chapters might help.

upvote 4 downvote
Anonymous replying to -> #1 3w

did nobody teach you how to use the index in your textbooks

upvote 11 downvote
Anonymous replying to -> #1 3w

Do you have an index for all textbooks available online?

upvote -1 downvote
Anonymous replying to -> OP 3w

… available online. … available. online????? you are asking ChatGPT to tell you how to read your ONLINE. TEXTBOOKS???? INSTEAD. OF. CONTROL-F-ING KEY WORDS????

upvote 27 downvote
Anonymous replying to -> OP 3w

Like FUCK dude. Control-F is the Everything Index. you type a word in and it literally lets you JUMP to every instance of it. no manual scrolling required. Why on EARTH are you asking chatgpt when Control-F exists?

upvote 12 downvote
Anonymous replying to -> #1 3w

Control F only searches through the book that you’re viewing. ChatGPT searches through millions of textbooks, and does it more intelligently than control F.

upvote -3 downvote
Anonymous replying to -> OP 3w

… it doesn’t. You’re using GPT-5, right? Allow me to tell you a little story about a time I tried to use GPT-5 to do exactly that. Search through a gigantic body of work, pull out specific information, and aggregate it to make it easier to read. I’m a teacher going for my masters. As a teacher, you need to make sure you’re teaching to the state standards, or your kids are going to fail when they get assessed ON those standards, right? My state’s standards for my subject are over 300 pages.

upvote 17 downvote
Anonymous replying to -> OP 3w

And the most frustrating thing is, when working with the standards? You don’t NEED all the information in those 300 pages. On the day-to-day, the “Evidence Outcomes” are the most important. That’s what tells you your kid Learned what they’re supposed to - if they can Do This, they learned something. So I asked GPT-5 to pull the evidence outcomes for each substandard and organize them into copy-pasteable text.

upvote 15 downvote
Anonymous replying to -> OP 3w

And to GPT-5’s credit - it tried. Asked me what format I wanted, asked me if I wanted original language or a summary (I specified original language - WITH the page number it was from in the PDF cited), and then began spitting out Evidence Outcomes in the EXACT format I asked it to. Right? Seems good. When I went to double CHECK the page numbers it was citing - to make sure it was correctly pulling the right EO’s and not changing language - my jaw hit the fucking floor.

upvote 11 downvote
Anonymous replying to -> OP 3w

It. Hallucinated. Everything. EVERYTHING. The page numbers it cited - in the PDF I had UPLOADED to it, so it could properly analyze - were bullshit. It was giving me page numbers that WEREN’T EVEN STANDARDS - STUFF LIKE THE STATE’S EDUCATION MISSION STATEMENT GOT CITED. I control-F’d a couple of GPT’s “Evidence Outcomes”. Not a single one showed up ANYWHERE in the PDF.

upvote 9 downvote
Anonymous replying to -> OP 3w

And if I had TRUSTED that - if I had seen that it was spitting out correctly Formatted things that SEEMED to answer my question - and then taught. my. real. life. human. students. lessons. based. on. GPT-5’s. hallucinated. standards? I would have lost my job and been blacklisted from teaching. It would have been career-ruiningly bad.

upvote 10 downvote
Anonymous replying to -> #1 3w

Seems like a completely different use case from simply searching for titles and chapter names. The textbooks themselves don’t hallucinate anything, chatgpt just points me towards them.

upvote -3 downvote
Anonymous replying to -> OP 3w

And that’s when GPT is working with an Uploaded file. the source material. The EASIEST stuff to parse. You’re describing aggregation across MILLIONS of textbooks. If. it. can’t. even. do. three. hundred. pages. how’s. it. gonna. handle. millions?

upvote 9 downvote
Anonymous replying to -> #1 3w

I’m not describing aggregation at all

upvote -3 downvote
Anonymous replying to -> OP 3w

So let me be completely blunt with you, OP. I have seen how GPT is supposedly “intelligently” working with data. I put “intelligently” in quotes, because from MY experience, that simply does not apply to LLMs. If you trust that GPT is giving you correct information because the answer came out LOOKING right and SOUNDING plausible - you are accepting hallucinations of a non-conscious, non-intelligent computer program as fact. And that does not reflect well on YOUR intelligence.

upvote 14 downvote
Anonymous replying to -> #1 3w

I’m not sure you understand the purpose of the prompt.

upvote -2 downvote
Anonymous replying to -> OP 3w

Oh, you’re not. You’re just asking CHATGPT to parse the textbook for you and tell you what chapter to read! … did you like. comprehend ANY of what I just said about it hallucinating page numbers and quotes?

upvote 6 downvote
Anonymous replying to -> OP 3w

Tell me the purpose of the prompt then! You had SO many more characters to work with before, you didn’t need to vaguepost, you could have just said it

upvote 8 downvote
Anonymous replying to -> #1 3w

It’s pretty easy to tell whether or not chatgpt got it right after a minute or two of reading. I’ve used this prompt before and it usually works very well.

upvote -3 downvote
Anonymous replying to -> OP 3w

No, it’s not. It’s NOT EASY TO TELL, OP, if it takes MULTIPLE MINUTES to identify. If it takes you - again - MULTIPLE MINUTES to detect that it’s hallucinating - what about all the prompts that DON’T produce a response that requires minutes to read?

upvote 2 downvote
Anonymous replying to -> OP 3w

This is genuinely fucking insane because I’m describing how it hallucinates from a personal experience with VERY high stakes and you’re going “yeah but i trust it. I can tell. I am that special.” You aren’t. I can tell by the fact you realized it hallucinates - and CONTINUE TO USE IT.

upvote 5 downvote
Anonymous replying to -> #2 3w

someone earlier this week asked me for resources on how climate zones are stratified like a lake, and that the layers are mixing as a sign that the climate zones are becoming too similar in temp. i gave them two papers and simply the title & authors of a textbook. they should have the skills to find the correct chapter (lakes) and section (stratification) then read about the basics, then transfer that learning to the articles’ discussion sections… AI is causing learned helplessness in learning

upvote 2 downvote
Anonymous replying to -> #1 3w

Here’s an example

[screenshot attachment]
upvote 0 downvote
Anonymous replying to -> #1 3w
[screenshot attachment]
upvote 0 downvote
Anonymous replying to -> OP 3w

Aaaaand I’m done. You’re too far gone. The AI has convinced you it’s more trustworthy than your human peers. I can’t logically pull someone away from that mentality - they’re too far gone. Your two screenshots do not disprove the hallucinations I experienced in my real, actual life - nor the consequences I would have experienced if I’d trusted them without checking. I sincerely hope AI never tries to ruin your career like it did mine. You will almost certainly fall for it. Goodbye.

upvote 6 downvote
Anonymous replying to -> #1 3w

How did you come to that conclusion? Because I trust AI to find textbooks sometimes means I trust AI more than humans? Once again, you were using AI for a completely different purpose. One that it’s not nearly as good at.

upvote -2 downvote
Anonymous replying to -> #2 3w

Do you have the same reaction when people use google instead of visiting a library? It’s not that I *can’t* find what I need at a library, it’s just not practical when I want to spend an hour learning about a random subject I’m curious about.

upvote 0 downvote
🍺
Anonymous replying to -> OP 3w

ChatGPT is not a search engine

upvote 5 downvote
Anonymous replying to -> og_beer 3w

And? It’s still a tool that is useful for this purpose

upvote -1 downvote
Anonymous replying to -> OP 3w

But you’re not using Google, which is leagues better for learning since it’s essentially a digital library of resources! It’s absolutely practical to search your problem on Google (preferably Google Scholar), or at least the subject, and then look into the resources even further. Do you want to learn everything the first time around, or just go down the wrong path for like a mile before realizing you didn’t learn the fundamentals correctly?

upvote 1 downvote
Anonymous replying to -> #2 3w

I saw the Army vs. Navy football game and had so many damn questions about why it exists, who funds that shit, who is even on the team or staff, etc. … so I actually Googled it, read the wiki pages, looked into articles on their branch funding (or discovered lack thereof), and answered all my questions in an hour… with just a search and opening a few articles directly from the sources, without an AI hallucinating or incorrectly reporting the information.

upvote 1 downvote
Anonymous replying to -> #2 3w

Chat bots are not programmed to give you honest answers on lack of information, lack of data, conflicting data, nuance in statistics or statements, and aren’t able to properly distinguish between trusted resources vs. biased ones, or sometimes they just feckin use Reddit posts as a resource bc enough people have repeated a claim online that the AI reads it as fact or substantially relevant.

upvote 1 downvote
🍺
Anonymous replying to -> OP 3w

It is a language model, not a tool. It replicates how it believes a person would respond to your query. It does not actually respond, it pretends to

upvote 4 downvote
Anonymous replying to -> #2 3w

I’m searching through the same digital library with a different method that is significantly faster and reliably leads me to textbooks that answer the questions I have. Do you think online textbooks can’t teach you the fundamentals? I’m not getting AI to report the information, just to find it for me, similar to the function of google.

upvote 1 downvote
Anonymous replying to -> og_beer 3w

What do you think a tool is?

upvote 0 downvote
🍺
Anonymous replying to -> OP 3w

For context you’re arguing a hammer’s usefulness as a wrench here

upvote 1 downvote
Anonymous replying to -> #2 3w

You can’t trust online textbooks, but you can trust a YouTube comedian apparently?

upvote 1 downvote
Anonymous replying to -> og_beer 3w

Not at all.

upvote 0 downvote
Anonymous replying to -> OP 3w

“How did you come to that conclusion? Because I trust AI to-“ There. Stop there. Right there. You’ve got it. I came to that conclusion because you trust the Confirmation Bias Algorithm that Hallucinates Consistently at ALL. Goodbye.

upvote 1 downvote
Anonymous replying to -> #1 3w

Are you saying that I don’t trust my human peers at all? You realize that you’re not making any sense, right? The goofy goodbyes and WRITING IN ALL CAPS don’t actually make your logic convincing.

upvote -1 downvote
Anonymous replying to -> OP 3w

I actually feel like I was quite clear about my point? You trust AI at ALL. That’s how I got to this conclusion. If you couldn’t get that just because I called AI the “Confirmation Bias Algorithm”… well. are you REALLY sure that asking AI to do research and reading for you is doing GOOD things to your reading comprehension skills?

upvote 0 downvote
Anonymous replying to -> #1 3w

You: you trust AI more than people! Me: how did you come to that conclusion? You: You trust AI at all! The logic doesn’t check out unless I don’t trust people at all, which clearly isn’t the case.

upvote 0 downvote