I don’t understand anti-AI sentiment. Like, on most fronts, the issues people are upset about are purely caused by capitalism and the way capital owners are deploying AI models (environmental, privacy, etc.); but what about local models?
upvote -6 downvote

Anonymous 2w

The makers of AI are fucking evil. They use your data without permission to train something that pollutes the internet, steals your job, concentrates power in their hands, kills and addicts children, and which they don’t even know how to make safe.

upvote 12 downvote
😎
Anonymous 2w

Personally I think LLMs just amplify all the bad things about capitalism. I don’t think LLMs are inherently bad, but I don’t think they are profitable at scale without theft. I’ve had the chance to use a local LLM for a project, which was cool, but LLMs don’t really have any major upside for the common person relative to something like the internet.

upvote 1 downvote
Anonymous 2w

Like, local model usage avoids many of the concerns people have, and someone could train their own model from the ground up. There are some open-weight models too where you can see the training data, to ensure no infringement (there are ethical ways of training that don’t involve theft). The “AI is evil” talk is reductive imo, when it’s the owners of each respective AI company making those decisions and causing the major impacts; but that doesn’t make the technology worthless.

upvote 0 downvote
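As a hedged sketch of the “train your own model from the ground up” point above: a minimal character-level language model in PyTorch, trained on a hypothetical text file of your own writing (the filename, architecture, and hyperparameters are illustrative, not from this thread).

```python
# Minimal sketch: train a tiny char-level LM on text you own.
# Assumes PyTorch is installed and "my_own_writing.txt" is a
# hypothetical file of your own (sufficiently long) text.
import torch
import torch.nn as nn

text = open("my_own_writing.txt").read()
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text])

class TinyLM(nn.Module):
    def __init__(self, vocab, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab)

    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.head(h)  # logits over next character

model = TinyLM(len(chars))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    # Sample a random 128-char window; predict each next character.
    i = torch.randint(0, len(data) - 129, (1,)).item()
    x = data[i:i + 128].unsqueeze(0)
    y = data[i + 1:i + 129].unsqueeze(0)
    logits = model(x)
    loss = loss_fn(logits.reshape(-1, len(chars)), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

It won’t produce anything like a modern LLM, but every byte of training data is yours, which is the point being argued.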
Anonymous 2w

What do y’all think though? Do the concerns still apply when discussing local model use? Are there concerns I’m overlooking that have always applied to local model use (like the one regarding theft in training data, especially for models that don’t provide access to their training data)?

upvote 0 downvote
Anonymous replying to -> #1 2w

I don’t disagree, but isn’t that an issue with the companies making and running the major models in the field, rather than an inherent issue with artificial intelligence frameworks? I.e., an issue with capitalism and its control over government and regulation, versus the product itself? That’s why I wanted to discuss locally installed and operated models: ones completely isolated from the internet if you’d like, ones that could be entirely made at home with your own choice of data!

upvote 1 downvote
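To make the “completely isolated from the internet” idea concrete, here is a minimal sketch of loading and running a model fully offline with the transformers library, assuming the weights were already downloaded to a local folder (the path is a placeholder):

```python
# Minimal sketch: run a model with no network access at all.
# Assumes transformers + torch are installed and the weights
# already live in a local directory (hypothetical path below).
import os
os.environ["HF_HUB_OFFLINE"] = "1"  # set before importing transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

path = "/home/me/models/my-local-model"  # placeholder local directory
tok = AutoTokenizer.from_pretrained(path, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(path, local_files_only=True)

inputs = tok("Why run models locally?", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=50)
print(tok.decode(out[0], skip_special_tokens=True))
```

With `HF_HUB_OFFLINE` set and `local_files_only=True`, nothing is fetched at runtime; if a file is missing locally, loading fails instead of phoning home.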
Anonymous replying to -> OP 2w

It’s both. The methods involved require such enormous amounts of data that the only way to train them is data theft. They’re so uninterpretable and hard to control that making them reliably safe at scale is almost impossible. Yes, it’s also the fault of people like David Sacks stopping the government from doing anything about it, but some of the problems are inherent to the technology itself.

upvote 6 downvote
Anonymous replying to -> #1 2w

I completely agree about how people interfere with gov regulation, and honestly that imo is the biggest impact. If the gov (or state govts in the US, at least) can’t regulate at all, then these companies are going to fuck us all over. I’m actually a local AI dev myself; I know what you mean about data theft, it’s extremely common. Many datasets on sites like Hugging Face are littered with work under a variety of licenses; so unless you’re crafting your own custom datasets with verifiably free-domain

upvote 0 downvote
Anonymous replying to -> OP 2w

stuff, then there likely will be some theft (even if inadvertent); so I def agree there, but I think that’s an issue with common practice, not the technology. It’s entirely possible to train a reliable AI using open-source/free-domain materials! Regarding the other topic, about alignment and such: yeah. It’s really intriguing to think about because it invokes other questions as well: what’s the line between a sufficiently advanced intelligence (even if artificial) and being “alive”?

upvote 1 downvote
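As a rough sketch of the “verifiably free-domain datasets” idea from the two comments above: check each candidate Hugging Face dataset’s declared license against an allowlist before using it. The dataset IDs and allowlist here are illustrative, and a declared license is self-reported metadata, so this is a filter, not a guarantee of clean provenance.

```python
# Hedged sketch: screen candidate datasets by declared license.
# Assumes huggingface_hub is installed; IDs below are hypothetical.
from huggingface_hub import dataset_info

PERMISSIVE = {"cc0-1.0", "mit", "apache-2.0", "pddl"}  # example allowlist
candidates = ["some-org/some-dataset"]                  # placeholder IDs

for repo_id in candidates:
    info = dataset_info(repo_id)
    # card_data may be None; license may be missing or a list of tags
    lic = getattr(info.card_data, "license", None) if info.card_data else None
    if lic in PERMISSIVE:
        print(f"ok to consider: {repo_id} ({lic})")
    else:
        print(f"skip (license={lic}): {repo_id}")
```

You’d still want to spot-check the actual contents, since a permissive tag on the repo doesn’t prove every record inside was licensed that way.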
Anonymous replying to -> OP 2w

It raises a plethora of not only technical questions but inherently ethical ones as well, especially as the field advances and newer, more complex neural network architectures are created and explored. I do see what you mean though; personally I think the technology itself isn’t inherently nefarious. However, it absolutely could learn to become nefarious based on our own collective actions as humans, since our knowledge and history are what we use as training data.

upvote 1 downvote
🚮
Anonymous replying to -> OP 2w

you’re completely right on all of it

upvote -1 downvote
🚮
Anonymous replying to -> basedpaperbasket 2w

China definitely laughs at our immature attitude towards AI

upvote 4 downvote
Anonymous replying to -> basedpaperbasket 2w

I feel like a lot of the sentiment here comes from anti-capitalist sentiment diluted by oppressive capitalist culture: even when recognizing the issues of capitalism, one *has* to find a group at fault other than the actual capital owners, even if that group is a set of digital frameworks with little to no collective self-determination. I agree though; I believe we’ll be left behind as our biases hold us back. I remember reading that one American billionaire (I think

upvote 6 downvote
Anonymous replying to -> basedpaperbasket 2w

Peter Thiel? That fkn ghoul) wanted to use AI to create the “second coming of Christ” or some shit. Like, this is technology that could revolutionize education, manufacturing, scientific research and development, and so much more, without even considering the potential insights into the nature and origin of life itself (studying emergent behaviors in different systems/frameworks); but instead we’re using it to amplify wealth inequality, the imbalance of power between classes, and religion…

upvote 6 downvote
😎
Anonymous replying to -> brattybottom51 2w

So I don’t think you’re wrong, but this country is so wrapped up in capitalism and selfishness that I doubt LLMs will bring more good than bad. Theoretically LLMs and machine learning aren’t bad on their own, but in the broken world we live in, their existence makes the world a worse place. So I feel being mad at AI is easier than being mad solely at American capitalism, because capitalism is more abstract and harder to oppose than LLMs.

upvote 1 downvote
🌊
Anonymous replying to -> #1 2w

Wdym data theft? Don’t people willingly give their data to another party, and then that’s where they get their datasets?

upvote 1 downvote
😎
Anonymous replying to -> brattybottom51 2w

LLMs also have so many negative impacts that, while mitigable or fixable, our society just isn’t ready for them, and our government isn’t willing to protect us from them due to inefficiency, impunity, greed, immorality, or a combination of the four. I feel like our world isn’t ready for AGI or LLMs because we aren’t ethically and morally ready for it.

upvote 1 downvote
🚮
Anonymous replying to -> OP 2w

yeah I agree. it just highlights particularly well the problems the free market has created, but where a lot of people lose the plot is when they turn their anger against the existence of AI rather than the people who want to use it to do bad things and the systems that allow it

upvote 1 downvote