
We need to be aware of who exactly is reporting what, though. I’ve read that the water and energy consumption figures OpenAI reports conflict with independent estimates. Before we all go out and start screaming at people, we need to understand the effects of these systems from people who actually KNOW about them, not some rando CNN journalist who doesn’t know how to read a weather report
OP is NTA. While it’s true that AI is terrible for the environment and has detrimental effects on learning and the art community, it’s inevitable that it will have even greater influence in the future. Large tech companies have the funding to build and ship AI products and will keep pushing them onto consumers. Social media is already becoming somewhat reliant on AI. It isn’t ideal, but people should start becoming familiar with using AI, especially these days.
what do you mean by that? the entire point of AI is for it to be incredibly easy to use; there’s not much to familiarize yourself with. They’re shoving AI features into everything they can, so it’s not like people don’t know what it is. If anything, the people against it often seem to understand how it works better than the people who want it.
most people use AI for the simplest of tasks (e.g. writing an email) out of convenience, when it can be used for a lot more than that. using it only for such tasks squanders its real potential. it has proven genuinely useful in corporate and computing settings, and there’s a whole emerging skillset called prompt engineering for getting it to complete harder tasks. not everyone knows how to properly utilize AI beyond the basics, which is why it’s so often misused
“prompt engineering” is LinkedIn speak for “phrasing things so the AI best understands what you mean.” You cannot be serious about that being a whole new skillset. would you care to give some examples of these super useful corporate or computing uses? because in my experience, it sucks at coding anything it can’t copy-paste from somewhere online.
I mean, don’t get me wrong, we’re on the same page here; I agree AI is generally bad. And you can be snarky if you’d like, but in the right hands this technology can genuinely save people’s lives. A friend of mine died because her doctor missed cancer on a scan. Since you love reasoning so much, here are some sources for you: https://pmc.ncbi.nlm.nih.gov/articles/PMC12455834/ https://www.unilad.com/news/health/man-diagnosed-blood-cancer-ai-saved-life-317726-20250910
I’m a geophysics graduate student, and it streamlines our ability to process seismic data sets. Instead of spending literal hours a day picking horizons and working out how to stack our results, I can use my time much more efficiently. And it isn’t just me; it’s the entire field of geophysics. I’m not an expert in other fields, but I’d bet AI programs have a similarly large positive effect elsewhere in science. Arguably, ML has had the biggest impact on geophysics since the invention of the computer.
sounds pretty interesting, but are ML algorithms for picking through seismic data really significantly better than hand-written iterative ones? you want some way to pick things out ofc, but I’m curious how much those programs have actually changed in the last two years or so. I’m not a geophysicist, and I’m sure it depends on what you’re looking for, but detecting significant signals can be done in tons of different ways, and it surprises me that ML methods would be much faster
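For context on what a “hand-written iterative” baseline looks like: a classic non-ML event detector in seismology is the STA/LTA ratio (short-term average over long-term average of signal energy), which triggers when a burst of energy stands out from background noise. Here’s a rough sketch; the window lengths, thresholds, and toy data are my own choices for illustration, not from any specific package:

```python
import numpy as np

def sta_lta(trace, sta_len, lta_len):
    """Short-term / long-term average energy ratio for event detection.

    trace: 1-D array of seismic amplitudes.
    sta_len, lta_len: window lengths in samples (sta_len < lta_len).
    Returns an array the same length as trace; large values flag
    samples where recent energy spikes above the background level.
    """
    energy = trace ** 2
    # prefix sums let us compute any window mean in O(1)
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    n = len(trace)
    ratio = np.zeros(n)
    for i in range(lta_len, n):
        sta = (csum[i + 1] - csum[i + 1 - sta_len]) / sta_len
        lta = (csum[i + 1] - csum[i + 1 - lta_len]) / lta_len
        ratio[i] = sta / lta if lta > 0 else 0.0
    return ratio

# toy example: Gaussian noise with a louder burst in the middle
rng = np.random.default_rng(0)
trace = rng.normal(0, 1, 2000)
trace[1200:1300] += rng.normal(0, 8, 100)  # simulated "event"

r = sta_lta(trace, sta_len=20, lta_len=200)
# the ratio peaks near the onset of the burst; a real pipeline would
# compare r against a trigger threshold instead of taking the argmax
```

The point of the comparison in the thread: something like this is cheap and transparent, while ML pickers earn their keep when the target (a horizon geometry, a faint phase arrival) is too complicated to capture with a fixed energy-ratio rule.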
See, I’d say this is one of the few good uses of AI, personally. I think it’s vital to separate generative AI from other kinds, like the models used to process information quickly. Non-generative AI has existed for a long time and doesn’t do nearly as much harm as generative. Looked at that way, about the only thing generative AI is used for is trying to replace people to save the ultra-rich money and screw people out of work, which I think we can agree is bad