
Yeah, true. I had this thought because I was looking up something from the early 2000s and was able to find an article about it. I just figure it would be much more prevalent from around 2005 onward. But I do love that some news outlets (I know the NYT has done this, for example) publish tons of their really old articles online too.
I mean, sure, you can do that for certain major events, but in hundreds of years people will still be able to find articles, with comments, about what people said when the queen or the pope died, or any other historical event. Kinda eliminates the “history is written by the winners” effect, assuming the Internet stays relatively free.
Aside from the environmental concerns, AI is wrong an enormous amount of the time when it comes to historical research. It also actively harms historical publications, institutions, and academic practice because it fundamentally runs on plagiarism. Please just learn to use JSTOR and other databases. They’re more reliable and not that much harder to use, and you won’t be doing the harm to the planet and to the historical institutions you enjoy that generative AI does.
I might be rationalizing my own usage, but it’s not like they won’t train new models if I don’t use it, and the statistics for average prompts point to water consumption, energy usage, and carbon emissions that are almost nothing compared to my overall daily impact in those three metrics.
Gemini 3 Pro is pretty good when you turn on grounding with Google Search. It will cite things and give you the link, but there’s a limit to how much you can use it, which kind of forces you to check the sources it provides. Unfortunately, I love asking it specifically for perspectives I want to hear about in history, or asking it to answer as a certain philosopher, or using it to understand the parts of a theory I specifically don’t have a good grasp on.
You’re definitely rationalizing. It’s so easy not to use AI. Asking it to answer as a philosopher, for example, is pretty directly asking it to make something up. You’re going to be fed misinformation. There are many other sources that are accessible, easy, and more reliable and accurate, and any benefit you get is negligible compared to the harm caused, which is why, to be honest, I have very little sympathy for your continuing to rationalize your use.
They literally can’t drink the water from their own taps near my hometown because of one of the data centers. I also find it really concerning that your primary reason for continuing to use AI, despite its lack of accuracy and negative ethical implications, is “I like using it,” and that you’ve used that as your basis for claiming you “can’t” stop using it.
In other words, you should really stop using generative AI, period. But if you continue, you need to be intellectually honest with yourself: you’re receiving misinformation, participating in plagiarism and the erosion of historical institutions, asking an LLM to fabricate things by having it roleplay as a dead individual, causing environmental harm, and rejecting better alternatives that are easily available.
I would have to read all of the major works of important philosophers to see what they would think about a modern issue, or I could just ask, “What would the most important philosophers in history think about dopamine addiction and its impact on agency?” and get a basic starting point I can do further reading on.
My usage, or lack of it, would realistically have no effect on the amount of water in that town. Let’s be real. I know the perspective I’m taking is exactly what causes the issue, and I’m not going to say it’s correct, but if I adopted the perspective you suggest, I probably wouldn’t get the life I want.
If this is a great filter, if there’s going to be global conflict over Taiwan, or if the economy can’t handle the strain of AI, I want to travel soon, and AI feels like the only realistic path to doing that with something that actually makes me happy and causes positive societal change.
I can’t think of a solution to what’s happening with AI, but there are a couple of things I can profit from that actually have net positives for society, and I will do them as soon as I can. That means relying on n8n, open-source AI, and no-code tools. I’m currently learning as much as I can so I can do them without AI, but once it’s possible I’m not going to just sit on my hands.