
Personally I think LLMs just amplify all the bad things about capitalism. I don’t think LLMs are themselves bad, but I don’t think they’re profitable on a large scale without theft. I did get a chance to use a local LLM for a project, which was cool, but honestly LLMs don’t really have any major upsides for the common person relative to something like the internet.
Like local model usage avoids many of the concerns people have, and someone could train their own model from the ground up. There are some open-weight models too where you can see the training data, to ensure no infringement (there are ethical ways of training that don’t involve theft). The “ai evil” talk is reductive imo, when it’s the owners of each respective AI company who are causing the major impacts and making those decisions; but that doesn’t make the technology worthless
I don’t disagree, but isn’t that an issue with the companies making and running the major models in the field, rather than an inherent issue with artificial intelligence frameworks? Aka an issue with capitalism and its control over government and regulation, vs the product itself? That’s why I wanted to discuss locally installed and operated models, ones completely isolated from the internet if you’d like; ones that could be made entirely at home with your own choice of data!
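For anyone curious what “completely isolated from the internet” actually looks like in practice, here’s a rough sketch using the Hugging Face transformers library. The model directory is a placeholder; any small open-weight model you’ve already downloaded would work:

```python
# Minimal sketch of running an open-weight model fully offline,
# assuming the weights were already downloaded to a local folder.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "./models/my-local-model"  # hypothetical path to downloaded weights

# local_files_only=True means no network request is ever made,
# so the whole pipeline runs completely isolated from the internet.
tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, local_files_only=True)

inputs = tokenizer("Why run a model locally?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```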
It’s both. The methods involved require such enormous amounts of data that the only way to train them is data theft. They’re so uninterpretable and hard to control that making them reliably and scalably safe is almost impossible. Yes, it is also the fault of people like David Sacks stopping the government from doing anything about it, but some of the problems are inherent to the technology itself.
I completely agree about how people interfere with gov regulation, and honestly that imo is the biggest impact. If the gov (or state govts in the US at least) can’t regulate at all, then these companies are going to fuck us all over. I’m actually a local AI dev myself; I know what you mean about data theft, it’s extremely common. Many datasets on sites like huggingface are littered with work under a variety of licenses, so unless you’re crafting your own custom datasets from verifiably public-domain material, there will likely be some theft (even if inadvertent).
So I def agree there, but I also think that’s an issue with common practices rather than the tech itself. It’s entirely possible to train a reliable AI using open-source/public-domain materials! Regarding the other topic though, about alignment and such: yeah. It’s really intriguing to think about because it invokes other questions as well: what’s the line between a sufficiently advanced intelligence (even if artificial) and being “alive”?
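To make the “craft your own clean dataset” point concrete, here’s a rough sketch of the kind of license filtering I mean. It assumes a corpus that actually exposes a per-record license field (plenty on huggingface don’t, which is exactly the problem), and the dataset name and license list are placeholders:

```python
# Sketch of filtering a corpus down to verifiably permissive records,
# assuming each record carries license metadata. Dataset name is hypothetical.
from datasets import load_dataset

# Licenses treated as safe to train on; tighten or extend to taste.
PERMISSIVE = {"cc0-1.0", "public-domain", "mit", "apache-2.0"}

ds = load_dataset("some-org/some-corpus", split="train")  # placeholder name

# Drop anything whose license isn't verifiably permissive,
# including records with missing or empty license metadata.
clean = ds.filter(lambda row: (row.get("license") or "").lower() in PERMISSIVE)

print(f"kept {len(clean)} of {len(ds)} records")
```

Even then you’d want human spot-checks, since license fields themselves are often wrong or missing; the metadata is only as trustworthy as whoever uploaded it.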
It invokes a plethora of not only technical questions but inherently ethical ones as well, especially as the field advances and newer, more complex neural network architectures are created and explored. I do see what you mean though, but personally I think the technology itself isn’t inherently nefarious. However, it absolutely could learn to become nefarious based on our own collective actions as humans, since our knowledge and history are what we use as training data.
I feel like a lot of the feeling here is anti-capitalist sentiment diluted by oppressive capitalist culture; even when recognizing the issues of capitalism, one *has* to find a group at fault other than the actual capital owners, even if that group is a set of digital frameworks with little to no collective self-determination. I agree though, I believe we’ll be left behind as our biases hold us back. I remember reading that one American billionaire (I think
Peter Thiel? That fkn ghoul) wanted to use AI to create the “second coming of Christ” or some shit. Like this is technology that could revolutionize education, manufacturing, scientific research and development, and so much more, without even considering the potential insights into the nature and origin of life itself (studying emergent behaviors in different systems/frameworks); but instead we’re using it to amplify wealth inequality, the imbalance of power between classes, and religion…
So I don’t think you’re wrong, but this country is so wrapped up in capitalism and selfishness that I doubt LLMs will bring more good than bad. In theory LLMs and machine learning aren’t bad on their own, but with the broken world we live in, their existence makes things worse. So I feel being mad at AI is easier than being solely mad at American capitalism, because capitalism is more abstract and harder to oppose than LLMs are.
LLMs also have so many negative impacts that, while mitigable or fixable, our society just isn’t ready for them, and our government isn’t willing to protect us from them, due to inefficiency, impunity, greed, immorality, or some combination of the four. I feel like our world isn’t ready for AGI or LLMs because we aren’t ethically & morally ready for them.
yeah I agree. it highlights particularly well the problems the free market has created, but where a lot of people lose the plot is when they turn their anger on the existence of AI rather than on the people who want to use it to do bad things and the systems that allow it