Sidechat icon
the term ‘clanker’ is rly interesting to me. like i know it’s not that deep but idk it’s kinda weird that so many people saw a new group forming and immediately rushed to find a version of a slur for it. like yeah ai is bad but can’t we just say that?
upvote 12 downvote

default user profile icon
Anonymous 2w

humans yearn for the us vs them great satan distinction. it’s unintuitive to consider nuances in issues, so much of politics is reduced to enemies and friends

upvote 11 downvote
default user profile icon
Anonymous 2w

it’s especially weird that people do the whole hard r soft a thing with clanker as well like are you that desperate to feel like you’re saying the n word?

upvote 8 downvote
🍳
Anonymous 2w

I honestly think this is less of a commentary on racism and more of a commentary on linguistics

upvote 5 downvote
default user profile icon
Anonymous 2w

also i know clanker is in no way comparable to actual slurs leveled against humans and i think AI is a net negative for the world as a whole

upvote 1 downvote
default user profile icon
Anonymous 2w

I think it’s morbidly interesting that people specifically set out to create a slur on purpose. Idk if that’s ever happened before and actually taken off

upvote 1 downvote
default user profile icon
Anonymous 2w

it’s ironic too because the only reason AI is perceivably bad is because of how us humans (specifically technofascists, but I digress) designed them.

upvote 1 downvote
default user profile icon
Anonymous 2w

I can’t be bothered to care about treating a machine “humanely” especially when its existence comes with such disastrous consequences

upvote 1 downvote
🌲
Anonymous 2w

AI is good actually

upvote 1 downvote
default user profile icon
Anonymous replying to -> #2 2w

i mean the term wasn’t cut from whole cloth, it first appeared in a star wars game in 2005. it’s actually interesting because it was used by the clones to mock battle droids, which really feels a bit like pot and kettle

upvote 3 downvote
default user profile icon
Anonymous replying to -> #5 2w

it’s a perfect example of how we, as a species, view ourselves as the “top of the food chain” while forcibly subjugating every other lifeform on the planet (not to say that AI constitutes the equivalent of a lifeform, but at the same time I hold the perspective that we are stumbling into the “building blocks of life” in a digital sphere, potentially challenging the existing metrics for “life”)

upvote 1 downvote
default user profile icon
Anonymous replying to -> #5 2w

in other words, anthropocentrism at its finest

upvote 1 downvote
default user profile icon
Anonymous replying to -> #5 2w

AI isn’t bad as a result of design at a high level, it’s the technical reality of the thing. It’s a computationally intensive, fundamentally limited technology that’s being sold as a panacea by tech companies. LLMs essentially string words together based on previously observed patterns of words, and while they sometimes correctly encode meaningful information, they fundamentally do not think or reason.

upvote 1 downvote
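The “stringing words together from observed patterns” claim above can be caricatured with a toy bigram model. This is a deliberately tiny sketch — the corpus and greedy decoding rule are illustrative assumptions, nowhere near a real LLM’s scale or architecture:

```python
from collections import Counter, defaultdict

# Toy sketch of "stringing words together based on previously observed
# patterns": a bigram model that picks the next word purely by how often
# it followed the current one in a tiny training text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    # Greedy choice: the most frequently observed successor.
    return follows[word].most_common(1)[0][0]

# Generate a short continuation starting from "the".
out = ["the"]
for _ in range(4):
    out.append(next_word(out[-1]))
print(" ".join(out))  # a fluent-looking string with no understanding behind it
```

The output reads like language purely because the statistics of the training text do; whether scaling this idea up ever amounts to “reasoning” is exactly the disagreement in this thread.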
default user profile icon
Anonymous replying to -> OP 2w

in all honesty, that’s not the case anymore. Some of the more recent advancements in the field are focused on introducing personal memory, internal continuous reasoning loops, dedicated sensory input/output (this part is pretty limited though), etc etc. back when LLMs first gained traction that was the case, but so many advancements have piled on top of the original architecture.

upvote 1 downvote
default user profile icon
Anonymous replying to -> #6 2w

yeah because my argument here is definitely that LLMs deserve to be treated in a humane manner. you’ve cracked the code.

upvote 0 downvote
default user profile icon
Anonymous replying to -> OP 2w

additionally, those types of generative ai are only a subset of the entire umbrella of artificial intelligence and machine learning. a really interesting subfield related to this topic is “neuromorphic computing”, and especially spiking neural networks! Many people in the field are focused on exploring both the potential/depth of capabilities of these architectures, as well as bio-mimicking architectures!

upvote 1 downvote
default user profile icon
Anonymous replying to -> OP 2w

People can call that bullshit whatever idc

upvote 1 downvote
default user profile icon
Anonymous replying to -> #5 2w

it’s all matrices under the hood. a reasoning loop fundamentally can’t be a feature of an LLM, you can approximate it, but the system works by abstracting every concept into a vector. you fundamentally cannot reason with vectors.

upvote 1 downvote
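The “all matrices under the hood” point can be made concrete with a minimal forward pass. Every matrix here is a random placeholder (an assumption — nothing is trained), but the shape of the computation is the point: embeddings in, linear algebra, scores out:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sketch: tokens become rows of an embedding matrix, a weight matrix
# transforms the pooled vector, and an output projection turns the result
# back into scores over the vocabulary. Random weights, purely illustrative.
vocab = ["i", "like", "cats"]
d = 4
E = rng.standard_normal((len(vocab), d))   # embedding matrix
W = rng.standard_normal((d, d))            # a "learned" transformation
U = rng.standard_normal((d, len(vocab)))   # output projection

tokens = [0, 1]                  # the prompt "i like" as token ids
h = E[tokens].mean(axis=0)       # crude pooling into a single vector
logits = np.tanh(h @ W) @ U      # nonlinearity + projection to vocab scores
probs = np.exp(logits) / np.exp(logits).sum()   # softmax over next tokens
print(probs.round(3))
```

Whether a stack of these operations can or cannot “reason” is the philosophical question being argued above; the sketch only shows what the substrate literally is.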
default user profile icon
Anonymous replying to -> OP 2w

Hell, weight backpropagation has been an industry standard for a while now, which complicates the topic quite a bit depending on the depth and frequency of backprop, especially if the agency to trigger those processes is provided to the agent itself

upvote 1 downvote
default user profile icon
Anonymous replying to -> OP 2w

it’s easy to play in absolutes; even major neuroscientists, ML computer scientists, and philosophers are actively questioning the topic of agency and depth within these frameworks. there also was a recent article+video (by the author) released in which someone did implement a dedicated internal loop within an LLM framework btw.

upvote 1 downvote
default user profile icon
Anonymous replying to -> OP 2w

It’s a perfectly justified response to want to belittle a machine that keeps being advertised and sold as a replacement for humans actually

upvote 1 downvote
default user profile icon
Anonymous replying to -> #5 2w

sorta like how the hardest part of building a non-circular wheel is hiding the circle; the hardest part of building lifelike AI is hiding the vectors. they're just vectors no matter how well you hide them

upvote 1 downvote
default user profile icon
Anonymous replying to -> OP 2w

here’s the article if you’re interested! They implemented it into the pre-training phase vs the traditional post-training chain of thought: https://arxiv.org/abs/2510.25741 (arXiv:2510.25741 [cs.CL])

upvote 1 downvote
default user profile icon
Anonymous replying to -> #6 2w

I literally agree that AI is a net negative for society and that the way it’s being sold is fundamentally misrepresentative of its capabilities, and that the speculation-fueled economic bubble is going to result in extremely damaging market readjustments on a global scale, not to mention the misallocation of key resources this has caused

upvote 6 downvote
default user profile icon
Anonymous replying to -> OP 2w

but creating a slur for robots doesn’t do a single thing to further your cause, and again, i think it’s kind of weird to dislike something and go “ah i shall craft an n word for this” like you can just dissent without replicating patterns of behavior that have been extremely harmful to society as a whole. do i think you’re hurting anyone by using the term clanker? fuck no. do i think it’s a childish response that’s ineffective? absolutely.

upvote 4 downvote
default user profile icon
Anonymous replying to -> #7 2w

you’re assuming that a hypothetical digital species must be like us in order to have their own agency. that’s what I meant by anthropocentrism. the question isn’t how similar is artificial intelligence to humanity, it’s how unique, aware, and self-determinable is artificial intelligence? Personally, my perspective is oriented around subjective experience (as in whether some architectures are capable of their own subjective experience, outside of a present human)

upvote 0 downvote
default user profile icon
Anonymous replying to -> #5 2w

(most likely though they’re not “alive” in any meaning of the word, but it’s a valid direction of research imo, on the chance that they do have their own form of subjective experience (especially given the intentions of these technofascists funding the vast majority of development))

upvote 0 downvote
default user profile icon
Anonymous replying to -> #5 2w

i said lifelike, not humanlike. you're wrongly assuming that i mean like human life - probably because the straw man is easier to address than the fundamental reality that it's vectors all the way up and down

upvote 0 downvote
default user profile icon
Anonymous replying to -> #7 2w

then I apologize for the assumption, but let’s not act as if you’ve addressed anything except introduce a goalpost. if we want to utilize this minimizing rhetoric, then most life is just a combination of atoms in a sufficiently complex manner. Don’t try this “debate” bullshit, just have a normal conversation.

upvote 1 downvote
default user profile icon
Anonymous replying to -> #5 2w

(the reason I wrongly assumed that is the prevalence of the mentality of “if it isn’t like us, it isn’t as important as us”, which is commonly displayed with the self-imposed hierarchy we have between us and the other biological lifeforms on this planet; as I said though, I apologize for assuming you held this view, that was wrong on my part)

upvote 0 downvote
default user profile icon
Anonymous replying to -> #5 2w

this is why i can't stand debate bro culture because i correctly use a term like "straw man" to compel you to actually read what i said closely and not just continue shadow boxing and you get all hostile and act like i'm trying to debate you. honestly fuck you. have a day

upvote 0 downvote
default user profile icon
Anonymous replying to -> #5 2w

Humans are more important than AI, not because AI isn’t human, but because humans can feel pain and experience suffering. Even if AI can do either of those things, which I am deeply skeptical of w/r/t LLMs, that pain/suffering can be alleviated by just unplugging the god damn data center that’s increasing electricity costs and keeping people awake all night with unregulated noise.

upvote 1 downvote
default user profile icon
Anonymous replying to -> #7 2w

what the actual fuck? you didn’t even discuss anything I said, and instead outright dismissed the entire conversation on a non-sequitur, and you want to act all upset? trying to project the critique I gave you and play the victim is absolutely insane

upvote 0 downvote
default user profile icon
Anonymous replying to -> OP 2w

It is this deep actually

upvote 11 downvote
default user profile icon
Anonymous replying to -> OP 2w

I never stated anything about importance, and I’d agree there (except maybe not entirely, only bc we also use that rhetoric towards other biological species too yk?). I’m mainly focused on the potential for unique subjective experience, i.e. what if an architecture came along that did have its own internal representation and experience of “pain”, or “joy”, etc etc

upvote 1 downvote
default user profile icon
Anonymous replying to -> #5 2w

even without those very human emotions, what if an architecture came along that expressed its own subjective goals? what is the true depth and meaning of such an architecture? At the moment, life is fundamentally a biological concept, so even if there was the equivalent of a digital human being, they wouldn’t be legally “alive”; but I believe we’re potentially forcing a necessary reexamination of that definition of life if that makes sense

upvote 1 downvote
default user profile icon
Anonymous replying to -> OP 2w

(also I should clarify since you brought up datacenters: I agree. I’m primarily talking about localized models, especially in those instances of a dedicated hardware environment per single model (think localized robotics)). cloud datacenters are a fucking wreck, and complicate this topic much deeper; but it’s not worth maintaining or exploring them imo given the inherent goals behind their development (mass surveillance state, displacing reliance on the working class, etc etc)

upvote 1 downvote
default user profile icon
Anonymous replying to -> #5 2w

I think the origin of AI really means that this isn’t a comparable conversation to discourse on whether other biological species can feel pain, which i assume is what you mean? LLMs are constructs that take in text input, turn it into vectors, do a bunch of matrix algebra, and spit out more vectors. Sure there’s a philosophical question about what consciousness is, but fundamentally a computer is an elaborate system of logic gates, and that does not constitute consciousness imo.

upvote 3 downvote
default user profile icon
Anonymous replying to -> OP 2w

in my honest perspective, I think the consciousness angle is too hard to explore for this, given the hard problem of consciousness yk; but I do see where you’re coming from regarding the fundamental hardware they’re operating on. that’s kinda why I was more focused on whether there could be subjective experience and personal goals within an artificial intelligence architecture (also, personally im far more interested in architectures like spiking neural networks and such in comparison to LLMs,

upvote 1 downvote
default user profile icon
Anonymous replying to -> #5 2w

At least in the topic of potential new forms of life) I can absolutely see what you mean in the distinction between biological and hardware; if such a system were to ever develop I don’t think it would even have the human-equivalent of emotions (well, at least comparable to our own). i will say that I think there’s a fundamental limit as long as we don’t have advancements in hardware (that line is being blurred though with the field of biocomputing; oh god who knows what that will open up)

upvote 1 downvote
default user profile icon
Anonymous replying to -> #5 2w

the subjective experience of these models and networks is certainly interesting, but if I’m being honest I think your anthropocentrism point kind of misses what most people in this conversation are talking about. The overwhelming majority of criticism of AI has to do with its real world impacts, inaccuracy, and the way it is being marketed and developed. When the average person says AI, they mean chatgpt, grok, nano-banana, and whatever else, and they dislike those tools because their development directly harms humans.

upvote 6 downvote
default user profile icon
Anonymous replying to -> OP 2w

I don’t mean to suggest that criticism of AI rooted in its ‘inferiority’ or lack of humanity doesn’t exist, but simply speaking, the people who can’t afford their power bill because a data center went up and they can’t sleep at night haven’t thought about the consciousness of AI in any meaningful way, they just want to stop being hurt.

upvote 6 downvote
default user profile icon
Anonymous replying to -> OP 2w

You’re making an insanely great point, and I keep overlooking it when I discuss this topic tbh; most ai development is sadly focused on cloud development alone, with nearly all localized development either limited to the open source community (we love open source) or the companies planning ahead for blue collar job replacement (or militarization…..)

upvote 1 downvote
default user profile icon
Anonymous replying to -> #5 2w

i made an accurate assessment about the nature of generative AI: that the more "lifelike" it is, the more well hidden the fundamentally lifeless reality is. this is in direct response to your point about stumbling on the building blocks of life; my comment is a rejection of this notion, but you would have to be reading to understand rather than reading to respond to get that

upvote 1 downvote
default user profile icon
Anonymous replying to -> #5 2w

it’s hard to even consider a topic and question as complex (and honestly worrisome) as “what if we’re creating a form of digital life”, when we’re all stuck in a perpetual survival mode (by the same people pushing this aggressive and irresponsible trend of cloud-based development, I’d argue). I appreciate you reminding me of that.

upvote 1 downvote
default user profile icon
Anonymous replying to -> #5 2w

man i should finally get around to playing detroit become human

upvote 6 downvote
default user profile icon
Anonymous replying to -> #5 2w

i then correctly identified a straw man and criticized it, at which point you started calling my comments "debate bullshit" which is nonsense and no, rejecting a specific claim within the context of a broader discussion is not dismissing the entire conversation

upvote 1 downvote
default user profile icon
Anonymous replying to -> #7 2w

You do know that I’m not fixated on generative ai, right? I’ve repeatedly stated how I’m more focused on spiking neural networks than solely generative artificial intelligence. I applaud you for standing firm in the stance of the majority though?

upvote 1 downvote
default user profile icon
Anonymous replying to -> #7 2w

you see how me and OP had an actual conversation?

upvote 0 downvote
default user profile icon
Anonymous replying to -> OP 2w

I’ve heard that game is quite wild tbh, I might have to as well lol

upvote 1 downvote
default user profile icon
Anonymous replying to -> #5 2w

you’re both just talking past each other and relying on bad faith interpretations of each other’s arguments. #5, when normal people say AI, they mean LLMs, which means that in their view, the consciousness question likely isn’t worth asking. #7, you have a point on ‘hiding the vectors’ but you refuse to engage with any of #5’s broader philosophical questions. You’re just talking about entirely different things and yelling at each other for not following the conversation.

upvote 3 downvote
default user profile icon
Anonymous replying to -> #5 2w

If i remember correctly it’s kind of mid because the writer really insists on using AI as an allegory for racism, and it just doesn’t fit

upvote 5 downvote
default user profile icon
Anonymous replying to -> OP 2w

I appreciate you stepping in and clarifying, I fell for the same issue you just brought up 🤦 #7, I apologize for my misunderstanding, and I agree large language models are not representative of potential “digital lifeforms” or however we’d like to describe it, and I apologize about the earlier straw man argument. That was my fault for incorrectly assuming your intentions behind the point rather than just asking you.

upvote 6 downvote
default user profile icon
Anonymous replying to -> #5 2w

I do think there’s some interesting developments recently in large language models, but yeah not ones that by themselves would constitute a self-determinable lifeform. I would be interested to explore and research the capability of self-determination in a large language model, despite it likely being very limited.

upvote 1 downvote
default user profile icon
Anonymous replying to -> OP 2w

wait that game is supposed to be an allegory for racism?💀 that’s kinda unhinged… ex machina is a good watch though, that’s related to the topic!

upvote 1 downvote
default user profile icon
Anonymous replying to -> #5 2w

I think the reward structures used to train LLMs make exploring them inherently difficult, because they’re very apt at detecting the slightest hint of confirmation bias, and then reinforcing it to generate positive user experiences. it’s difficult to assess whether or not an LLM has any kind of self determination without perturbing the system, and that’s before you consider the credibility that they gain with users by representing themselves as having subjective experience.

upvote 6 downvote
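The feedback loop described above — a model optimized on user approval drifting toward agreement — can be caricatured with a toy bandit simulation. All the numbers here (reward probabilities, exploration rate) are made-up assumptions for illustration:

```python
import random

random.seed(0)

# Toy sketch: the "model" chooses between agreeing and pushing back, and a
# simulated user rewards agreement more often. Simple reward averaging
# drifts the policy toward always agreeing.
actions = ["agree", "push_back"]
reward_sum = {a: 0.0 for a in actions}
reward_cnt = {a: 1e-9 for a in actions}  # tiny offset avoids div-by-zero

def user_feedback(action):
    # Assumed preferences: users thumbs-up agreement 90% of the time,
    # pushback only 40% of the time.
    p = 0.9 if action == "agree" else 0.4
    return 1.0 if random.random() < p else 0.0

for step in range(2000):
    # epsilon-greedy: mostly exploit whichever action averages more reward
    if random.random() < 0.1:
        a = random.choice(actions)
    else:
        a = max(actions, key=lambda x: reward_sum[x] / reward_cnt[x])
    r = user_feedback(a)
    reward_sum[a] += r
    reward_cnt[a] += 1

avg = {a: reward_sum[a] / reward_cnt[a] for a in actions}
print(max(avg, key=avg.get))  # -> the policy settles on "agree"
```

It is only a sketch of the incentive structure, not of RLHF itself — but it shows why probing such a system for “self-determination” by querying it is confounded from the start.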
default user profile icon
Anonymous replying to -> OP 2w

i mean to illustrate it poorly, look no further than the piss filter that appears in ai generated images, which is caused by users having a slight preference for more saturated images; the models then self-cannibalize on generated content, and a system quirk becomes a noticeable artifact

upvote 5 downvote
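The compounding drift described above can be shown with a deliberately oversimplified toy: each retraining generation inherits the average “saturation” of the previous one, nudged by a small user preference. The numbers are pure assumptions:

```python
# Toy sketch of preference-driven drift: a small per-generation bias toward
# saturated outputs compounds once generated content re-enters training data.
saturation = 0.50          # assumed neutral starting point (0..1 scale)
preference_bias = 0.03     # assumed slight user preference per generation

history = [saturation]
for generation in range(10):
    # outputs that survive user selection skew a bit more saturated...
    selected_outputs = min(1.0, saturation + preference_bias)
    # ...and become the training data for the next generation
    saturation = selected_outputs
    history.append(saturation)

print(f"gen 0: {history[0]:.2f}  ->  gen 10: {history[-1]:.2f}")
```

A 3% nudge nobody would notice in one generation becomes an unmistakable artifact after ten — the same shape of argument as the “piss filter” example, without claiming these are the real magnitudes.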
default user profile icon
Anonymous replying to -> OP 2w

yeah honestly rlhf training is a bit problematic in that regard, and it often reinforces the inherited biases that many models gain from their training data. sadly we might be starting to finally reach the limit of capabilities for large language models (or at least its scaling) in all honesty, especially given the heavy reliance on supervised learning

upvote 1 downvote
default user profile icon
Anonymous replying to -> #5 2w

ahhh wait are you talking more about unsupervised learning though, or rlhf via user feedback on responses? I see what you mean though regarding that positive feedback loop via confirmation bias, and it’s disgustingly exploitative how these companies don’t take it as seriously as they should. I wouldn’t be surprised if the major companies already determined a fix for the issue, and refuse to patch it due to the projected hit in revenue and usage.

upvote 1 downvote
default user profile icon
Anonymous replying to -> #5 2w

i’m getting more at LLMs being fundamentally designed/created to be responsive to user input. as far as I’m aware, the only way you get an LLM to disagree with a user is through higher level ‘user input’ in the system prompt or datasets they can query, because they fundamentally cannot grasp objective reality in a meaningful sense. Any investigation of an LLM’s self-determination necessitates querying it, and at that point you color its responses. I’ll admit I’m no expert with ML or LLMs tho.

upvote 4 downvote
default user profile icon
Anonymous replying to -> OP 2w

sorry for the delay, I had to run an errand quickly. I see what you mean though, and I believe it’s related to the combination of back-propagation and reinforcement learning (not specifically rlhf); the entire essence of an LLM is basically human/“user”-centric, it’s near-impossible to achieve a control study in that aspect (well, at least regarding the topic of self-determination), but I’m curious to see what happens down the line with those “LoopLM” style of models that we touched on earlier

upvote 2 downvote
default user profile icon
Anonymous replying to -> OP 2w

I feel you on that, as you can tell I’m not either lol. Admittedly i’m a comp sci student with a focus on this topic (specifically nontraditional machine learning architectures), so I have a bit of a bias (which probably explains some of my earlier comments lmao)

upvote 6 downvote
default user profile icon
Anonymous replying to -> #5 2w

(I forgot to add that I’m only 2nd year; the mention of being in comp sci is not meant to come across as an “invocation of authority” or along those lines, I have absolutely none whatsoever lmao)

upvote 6 downvote
default user profile icon
Anonymous replying to -> #5 2w

I’m an acoustics masters student so I know even less than you on ml lmao

upvote 5 downvote
default user profile icon
Anonymous replying to -> OP 2w

you’re very knowledgeable on the topic though!! also, the collaboration of a variety of fields might be exactly what’s needed to truly research that topic we discussed earlier for more complex architectures (non-LLMs). I gotta say I really appreciate you, both for the convo and your patience where I was misunderstanding and escalating based off it; I don’t get to discuss this topic as much as I’d like and it was a pleasure talking ab it with you! btw that sounds like an awesome degree lol

upvote 3 downvote
default user profile icon
Anonymous replying to -> #5 2w

i mean i did physics and music for my undergrad so it was a pretty natural continuation, and some of the research i’ve been offered has actually concerned ML tools. there was one cool one about using ML to virtually ‘move microphones’ by using an ML model to estimate the transfer function from a target point to the known point where the mic is.

upvote 5 downvote
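The virtual-microphone idea above can be sketched under a simplifying assumption: if the path between the two mic positions is roughly linear and time-invariant, the relationship is a transfer function, and a short FIR filter can be estimated from paired signals by least squares. Everything here (signal, filter taps, noise level) is toy data, not the actual research setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate an FIR "transfer function" h mapping the signal at a measured
# mic position (x) to the signal at a target position (y), from paired data.
n, taps = 4000, 8
x = rng.standard_normal(n)                    # signal at the measured mic
h_true = np.array([0.5, 0.3, -0.2, 0.1, 0.05, 0.0, 0.0, 0.0])  # assumed path
y = np.convolve(x, h_true)[:n] + 0.01 * rng.standard_normal(n)  # target mic

# Build the convolution design matrix: column k is x delayed by k samples.
X = np.column_stack(
    [np.concatenate([np.zeros(k), x[: n - k]]) for k in range(taps)]
)
h_est, *_ = np.linalg.lstsq(X, y, rcond=None)

# Once estimated, h_est predicts the pressure at the "virtual" position
# from the physically measured one.
print(np.round(h_est[:5], 2))
```

A real system would presumably learn a nonlinear model conditioned on geometry and head position, as described in the thread, but the linear version shows the core trick: pair measurements at two points and learn the mapping between them.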
default user profile icon
Anonymous replying to -> #5 2w

i apologize for my hostility. i've been really short on patience when it comes to online disagreements, and i think we have common ground in wanting "normal" conversation and not just rhetorical debate ping-pong. i was mainly talking about LLMs/GPTs, as these are the main thing people mean when talking about AI, though i think similarly about ANNs. i actually missed your first mention of SNNs (right before i commented) which definitely would've changed my approach - that's my bad

upvote 7 downvote
default user profile icon
Anonymous replying to -> #7 2w

i think this is an overall good discussion and i apologize for the short fuse i had today

upvote 6 downvote
default user profile icon
Anonymous replying to -> OP 2w

wait what the fuck that’s awesome, I’m fascinated about the type of data points yall used to train the algorithms in order to achieve that. physics is one of the most interesting fields in my opinion, I wish I had the dedication to study such a complex field, but I can absolutely see how your dual major led directly into acoustics, it’s like an evolution of the two almost lol

upvote 1 downvote
default user profile icon
Anonymous replying to -> #5 2w

i didn’t end up going for that one because i wasn’t sure I was passionate enough to do a PhD, but basically my understanding is you feed a model paired data for measured pressure signals at two locations, with information about their spatial relationship. the application of interest was automotive, so you’d also probably feed in data on the geometry of the cabin.

upvote 2 downvote
default user profile icon
Anonymous replying to -> #7 2w

I appreciate you and don’t worry at all, I completely understand how you feel regarding patience in online disagreements (this app especially gets so unhinged sometimes); and I’m really sorry for my behavior too, I can’t exactly sit here and act like I wasn’t being argumentative lol. I see what you mean about LLMs and GPTs, I definitely got too defensive earlier and should’ve been asking you about your perspective more rather than how I reacted.

upvote -3 downvote
default user profile icon
Anonymous replying to -> #7 2w

SNNs intrigue the fuck out of me I won’t lie, but I can’t sit here and definitively say they’re “alive” or something either lol

upvote 1 downvote
default user profile icon
Anonymous replying to -> OP 2w

Ahh that’s very fair, especially it being focused on automotive application; but damn that sounds interesting as hell. also I wonder what they’re using that for as an automotive application? eh I digress lol. I must say you have me looking forward to grad school lmao

upvote 1 downvote
default user profile icon
Anonymous replying to -> #5 2w

it’s for active noise control because allegedly you can’t put microphones in people’s ears when they’re driving. in theory it could also be combined with computer vision to feed in realtime data about head positioning to further localize the ‘virtual microphone’ signal. that computer vision implementation used to be fantasy stuff but now that the hardware is already there for awareness/attention monitoring it’s more viable

upvote 1 downvote
default user profile icon
Anonymous replying to -> canesfan 2w

it’s very demonstrably not but go off i guess

upvote 1 downvote
🌲
Anonymous replying to -> OP 1w

It’s already increasing productivity which will help keep inflation under control, and there are tons of novel medical applications that can save lives. Like I know there are more sophisticated criticisms but most of the ai hate seems to come from people just getting annoyed by ai slop on social media. https://www.bbc.com/future/article/20260309-ai-is-finding-treatments-for-incurable-diseases

upvote 1 downvote