
it’s a perfect example of how we, as a species, view ourselves as the “top of the food chain” while forcibly subjugating every other lifeform on the planet (not to say that AI constitutes the equivalent of a lifeform, but at the same time I hold the perspective that we are stumbling into the “building blocks of life” in a digital sphere, potentially challenging the existing metrics for “life”)
AI isn’t bad as a result of design at a high level; it’s the technical reality of the thing. It’s a computationally intensive, fundamentally limited technology that’s being sold as a panacea by tech companies. LLMs essentially string words together based on previously observed patterns of words, and while they sometimes correctly encode meaningful information, they fundamentally do not think or reason.
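(a toy illustration of the “stringing words together from observed patterns” point: a bigram model over a made-up corpus. real LLMs learn a far richer conditional distribution with a transformer, but the generation loop has the same shape. everything here is invented purely for illustration.)

```python
import random
from collections import defaultdict

# toy corpus standing in for "previously observed patterns of words"
corpus = "the model predicts the next word the model saw most often".split()

# bigram table: how often each word followed each other word. real LLMs
# learn a vastly richer conditional distribution with a neural network,
# but the generation loop below has the same shape.
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def next_word(word):
    """Sample a continuation in proportion to how often it followed `word`."""
    followers = counts.get(word)
    if not followers:                        # dead end: restart from anywhere
        return random.choice(corpus)
    words, weights = zip(*followers.items())
    return random.choices(words, weights=weights)[0]

out = ["the"]
for _ in range(8):                           # repeatedly append a sampled word
    out.append(next_word(out[-1]))
print(" ".join(out))
```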
in all honesty, that’s not the case anymore. Some of the more recent advancements in the field are focused on introducing personal memory, internal continuous reasoning loops, dedicated sensory input/output (this part is pretty limited though), etc. back when LLMs first gained traction that was the case, but so many advancements have piled on top of the original architecture.
additionally, those types of generative ai are only a subset of the entire umbrella of artificial intelligence and machine learning. a really interesting subfield related to this topic is “neuromorphic computing”, and especially spiking neural networks! Many people in the field are focused on exploring both the potential and depth of these architectures’ capabilities, as well as bio-mimicking designs!
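(since SNNs came up, here’s a minimal sketch of a leaky integrate-and-fire neuron, the basic unit most spiking networks are built from; every constant below is an arbitrary illustrative value, not taken from any real model.)

```python
import numpy as np

# leaky integrate-and-fire (LIF) neuron: membrane voltage leaks toward rest,
# integrates input current, and emits a discrete spike on crossing threshold.
dt, tau = 1e-3, 20e-3        # timestep (s), membrane time constant (s)
v_rest, v_thresh, v_reset = -65.0, -50.0, -70.0   # membrane voltages (mV)
r_m = 10.0                   # membrane resistance (arbitrary units)

v = v_rest
spikes = []
current = np.where(np.arange(1000) > 200, 2.0, 0.0)  # step input current

for t, i_in in enumerate(current):
    # discretized membrane equation: dV/dt = (-(V - V_rest) + R*I) / tau
    v += dt * (-(v - v_rest) + r_m * i_in) / tau
    if v >= v_thresh:        # threshold crossing -> spike, then reset
        spikes.append(t * dt)
        v = v_reset

print(f"{len(spikes)} spikes; first at t = {spikes[0]:.3f} s" if spikes else "no spikes")
```

information in an SNN is carried by the timing of those spikes rather than by continuous activations, which is exactly the bio-mimicking angle.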
it’s easy to deal in absolutes; even major neuroscientists, ML computer scientists, and philosophers are actively questioning the topic of agency and depth within these frameworks. there was also a recent article + video (by the same author) in which someone did implement a dedicated internal loop within an LLM framework, btw.
I literally agree that AI is a net negative for society and that the way it’s being sold is fundamentally misrepresentative of its capabilities, and that the speculation-fueled economic bubble is going to result in extremely damaging market readjustments on a global scale, not to mention the misallocation of key resources this has caused
but creating a slur for robots doesn’t do a single thing to further your cause, and again, i think it’s kind of weird to dislike something and go “ah i shall craft an n word for this”. like, you can just dissent without replicating patterns of behavior that have been extremely harmful to society as a whole. do i think you’re hurting anyone by using the term clanker? fuck no. do i think it’s a childish response that’s ineffective? absolutely.
you’re assuming that a hypothetical digital species must be like us in order to have their own agency. that’s what I meant by anthropocentrism. the question isn’t “how similar is artificial intelligence to humanity?”, it’s “how unique, aware, and self-determinable is artificial intelligence?” Personally, my perspective is oriented around subjective experience (as in whether some architectures are capable of their own subjective experience, outside of a present human)
(most likely though they’re not “alive” in any meaning of the word, but it’s a valid direction of research imo, on the chance that they do have their own form of subjective experience (especially given the intentions of these technofascists funding the vast majority of development))
then I apologize for the assumption, but let’s not act as if you’ve addressed anything except to move the goalposts. if we want to use this minimizing rhetoric, then most life is just a combination of atoms arranged in a sufficiently complex manner. Don’t try this “debate” bullshit, just have a normal conversation.
(the reason I wrongly assumed that is the prevalence of the mentality of “if it isn’t like us, it isn’t as important as us”, which is commonly displayed in the self-imposed hierarchy we maintain between ourselves and the other biological lifeforms on this planet; as I said though, I apologize for assuming you held this view, that was wrong on my part)
Humans are more important than AI, not because AI isn’t human, but because humans can feel pain and experience suffering. Even if AI can do either of those things, which I am deeply skeptical of w/r/t LLMs, that pain/suffering can be alleviated by just unplugging the god damn data center that’s increasing electricity costs and keeping people awake all night with unregulated noise.
I never stated anything about importance, and I’d agree there (except maybe not entirely, only bc we also use that rhetoric towards other biological species too, yk?). I’m mainly focused on the potential for unique subjective experience, i.e. what if an architecture came along that did have its own internal representation and experience of “pain”, or “joy”, etc.
even without those very human emotions, what if an architecture came along that expressed its own subjective goals? what is the true depth and meaning of such an architecture? At the moment, life is fundamentally a biological concept, so even if there was the equivalent of a digital human being, they wouldn’t be legally “alive”; but I believe we’re potentially forcing a necessary reexamination of that definition of life if that makes sense
(also I should clarify since you brought up datacenters: I agree. I’m primarily talking about localized models, especially in those instances of a dedicated hardware environment per single model (think localized robotics).) cloud datacenters are a fucking wreck, and they complicate this topic much further; but imo it’s not worth maintaining or further exploring them given the inherent goals behind their development (mass surveillance state, displacing reliance on the working class, etc.)
I think the origin of AI really means that this isn’t a comparable conversation to discourse on whether other biological species can feel pain, which i assume is what you mean? LLMs are constructs that take in text input, turn it into vectors, do a bunch of matrix algebra, and spit out more vectors. Sure there’s a philosophical question about what consciousness is, but fundamentally a computer is an elaborate system of logic gates, and that does not constitute consciousness imo.
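(for what it’s worth, that “text in, vectors, matrix algebra, vectors out” pipeline can be cartooned in a few lines. the weights here are random and the tokens are whole words, purely as a stand-in for a trained transformer; only the shape of the computation is the point.)

```python
import numpy as np

rng = np.random.default_rng(0)

# toy vocabulary and random weights, standing in for a trained transformer;
# the point is only the pipeline: tokens -> vectors -> matrix algebra ->
# a vector of next-token scores.
vocab = ["the", "cat", "sat", "on", "mat"]
d = 8                                        # embedding dimension
E = rng.normal(size=(len(vocab), d))         # token embedding matrix
W = rng.normal(size=(d, len(vocab)))         # output projection ("unembedding")

def next_token_scores(text):
    ids = [vocab.index(w) for w in text.split()]   # text -> token ids
    x = E[ids].mean(axis=0)                        # ids -> one pooled vector
    logits = x @ W                                 # matrix algebra -> scores
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()                     # softmax -> distribution

p = next_token_scores("the cat sat")
print(dict(zip(vocab, p.round(3))))                # scores over the vocab
```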
in my honest perspective, I think the consciousness angle is too hard to explore for this, given the hard problem of consciousness, yk; but I do see where you’re coming from regarding the fundamental hardware they’re operating on. that’s kinda why I was more focused on whether there could be subjective experience and personal goals within an artificial intelligence architecture (also, personally im far more interested in architectures like spiking neural networks and such in comparison to LLMs, at least on the topic of potential new forms of life).
I can absolutely see what you mean about the distinction between biological and hardware; if such a system were to ever develop, I don’t think it would even have the human equivalent of emotions (well, at least comparable to our own). i will say that I think there’s a fundamental limit as long as we don’t have advancements in hardware (that line is being blurred though with the field of biocomputing; oh god who knows what that will open up)
the subjective experience of these models and networks is certainly interesting, but if I’m being honest I think your anthropocentrism point kind of misses what most people in this conversation are getting at. The overwhelming majority of criticism of AI has to do with its real-world impacts, inaccuracy, and the way it is being marketed and developed. When the average person says AI, they mean chatgpt, grok, nano-banana, and whatever else, and they dislike those tools because their development directly harms humans.
I don’t mean to suggest that criticism of AI rooted in its ‘inferiority’ or lack of humanity doesn’t exist, but simply speaking, the people who can’t afford their power bill because a data center went up and they can’t sleep at night haven’t thought about the consciousness of AI in any meaningful way, they just want to stop being hurt.
You’re making an insanely great point, and I keep overlooking it when I discuss this topic tbh; most ai development is sadly focused on cloud development alone, with nearly all localized development either limited to the open source community (we love open source) or the companies planning ahead for blue collar job replacement (or militarization…..)
i made an accurate assessment about the nature of generative AI: that the more "lifelike" it is, the more well hidden the fundamentally lifeless reality is. this is in direct response to your point about stumbling on the building blocks of life; my comment is a rejection of this notion, but you would have to be reading to understand rather than reading to respond to get that
it’s hard to even consider a topic and question as complex (and honestly worrisome) as “what if we’re creating a form of digital life”, when we’re all stuck in a perpetual survival mode (by the same people pushing this aggressive and irresponsible trend of cloud-based development I’d argue) I appreciate you reminding me of that.
you’re both just talking past each other and relying on bad-faith interpretations of each other’s arguments. #5, when normal people say AI, they mean LLMs, which means that in their view, the consciousness question likely isn’t worth asking. #7, you have a point on ‘hiding the vectors’ but you refuse to engage with any of #5’s broader philosophical questions. You’re just talking about entirely different things and yelling at each other for not following the conversation.
I appreciate you stepping in and clarifying, I fell for the same issue you just brought up 🤦 #7, I apologize for my misunderstanding, and I agree large language models are not representative of potential “digital lifeforms” or however we’d like to describe it, and I apologize for the earlier straw man argument. That was my fault for incorrectly assuming your intentions behind the point rather than just asking you.
I do think there’s been some interesting developments recently in large language models, but yeah, not ones that by themselves would constitute a self-determinable lifeform. I would be interested to explore and research the capability of self-determination in a large language model, despite it likely being very limited.
I think the reward structures used to train LLMs make exploring them inherently difficult, because they’re very apt at detecting the slightest hint of confirmation bias, and then reinforcing it to generate positive user experiences. it’s difficult to assess whether or not an LLM has any kind of self determination without perturbing the system, and that’s before you consider the credibility that they gain with users by representing themselves as having subjective experience.
i mean, to illustrate it poorly: look no further than the piss filter that appears in ai-generated images, which is caused by users having a slight preference for more saturated images; the models then self-cannibalize on generated content, and a system quirk becomes a noticeable artifact
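(a crude toy of that loop: if each “generation” of a model is fit to the previous generation’s outputs, filtered through a slight user preference, here for higher saturation, the small bias compounds into a visible drift. all numbers are made up for illustration.)

```python
import numpy as np

rng = np.random.default_rng(1)

# toy preference-driven drift: each model "generation" trains on samples of
# the previous generation's outputs, but users keep higher-saturation images
# slightly more often, so the training pool is biased and the bias compounds.
mean_sat = 0.50                      # starting average "saturation" (0..1)
for gen in range(10):
    outputs = np.clip(rng.normal(mean_sat, 0.1, size=10_000), 0.0, 1.0)
    keep_prob = 0.5 + 0.5 * (outputs - mean_sat)   # mild preference for saturation
    kept = outputs[rng.random(10_000) < keep_prob]
    mean_sat = kept.mean()           # the next generation fits the biased pool
    print(f"gen {gen}: mean saturation = {mean_sat:.3f}")
```

no single step looks dramatic, which is exactly why the artifact sneaks up on everyone.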
yeah honestly RLHF training is a bit problematic in that regard, and it often reinforces the inherited biases that many models gain from their training data. sadly we might finally be starting to reach the limit of large language models’ capabilities (or at least their scaling) in all honesty, especially given the heavy reliance on supervised learning
ahhh wait, are you talking more about unsupervised learning though, or RLHF via user feedback on responses? I see what you mean regarding that positive feedback loop via confirmation bias, and it’s disgustingly exploitative how these companies don’t take it as seriously as they should. I wouldn’t be surprised if the major companies have already determined a fix for the issue and refuse to patch it due to the projected hit in revenue and usage.
i’m getting more at LLMs being fundamentally designed/created to be responsive to user input. as far as I’m aware, the only way you get an LLM to disagree with a user is through higher-level ‘user input’ in the system prompt or datasets it can query, because they fundamentally cannot grasp objective reality in a meaningful sense. Any investigation of an LLM’s self-determination necessitates querying it, and at that point you color its responses. I’ll admit I’m no expert with ML or LLMs tho.
sorry for the delay, I had to run an errand quickly. I see what you mean though, and I believe it’s related to the combination of back-propagation and reinforcement learning (not specifically RLHF); the entire essence of an LLM is basically human/“user”-centric, so it’s near-impossible to achieve a control study in that aspect (well, at least regarding the topic of self-determination). but I’m curious to see what happens down the line with those “LoopLM”-style models that we touched on earlier
you’re very knowledgeable on the topic though!! also, the collaboration of a variety of fields might be exactly what’s needed to truly research that topic we discussed earlier for more complex architectures (non-LLMs). I gotta say I really appreciate you, both for the convo and your patience where I was misunderstanding and escalating based off it; I don’t get to discuss this topic as much as I’d like, and it was a pleasure talking ab it with you! btw that sounds like an awesome degree lol
i mean i did physics and music for my undergrad so it was a pretty natural continuation, and some of the research i’ve been offered has actually concerned ML tools. there was one cool one about virtually ‘moving microphones’ by using an ML model to estimate the transfer function from a target point to the known point where the mic is.
i apologize for my hostility. i've been really short on patience when it comes to online disagreements, and i think we have common ground in wanting "normal" conversation and not just rhetorical debate ping-pong. i was mainly talking about LLMs/GPTs, as these are the main thing people mean when talking about AI, though i think similarly about ANNs. i actually missed your first mention of SNNs (right before i commented) which definitely would've changed my approach - that's my bad
wait what the fuck, that’s awesome. I’m fascinated by the type of data points yall used to train the algorithms in order to achieve that. physics is one of the most interesting fields in my opinion; I wish I had the dedication to study such a complex field, but I can absolutely see how your dual major led directly into acoustics. it’s like an evolution of the two almost lol
i didn’t end up going for that one because i wasn’t sure i was passionate enough to do a PhD, but basically my understanding is you feed a model paired data of measured pressure signals at two locations, along with information about their spatial relationship. the application of interest was automotive, so you’d also probably feed in data on the geometry of the cabin.
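(i don’t know what architecture that project actually used, but the underlying idea can be sketched with the classical baseline: estimate an FIR transfer function between the known mic and the target point from paired recordings via least squares, then apply it to “move” the mic virtually. the signals and room response below are entirely synthetic.)

```python
import numpy as np

rng = np.random.default_rng(2)

# classical baseline for "virtually moving a microphone": from paired
# recordings at the known mic and the target point, estimate an FIR
# transfer function by least squares, then apply it to fresh signals
# from the known mic to predict the pressure at the target point.
L = 32                                            # FIR filter length (taps)
h_true = rng.normal(size=L) * np.exp(-np.arange(L) / 8.0)  # fake room response

def record_target(x):
    """Simulate the target-point recording: filtered known-mic signal + noise."""
    return np.convolve(x, h_true)[: len(x)] + 0.01 * rng.normal(size=len(x))

# paired training recordings at the two positions
x_train = rng.normal(size=5000)
y_train = record_target(x_train)

# design matrix of delayed copies of x: y[n] ~ sum_k h[k] * x[n-k]
X = np.column_stack([np.roll(x_train, k) for k in range(L)])
h_est, *_ = np.linalg.lstsq(X[L:], y_train[L:], rcond=None)  # skip wrap-around rows

# "move" the mic on unseen data: predict the target from the known mic alone
x_test = rng.normal(size=2000)
y_pred = np.convolve(x_test, h_est)[: len(x_test)]
y_true = record_target(x_test)
rel_err = np.mean((y_pred[L:] - y_true[L:]) ** 2) / np.mean(y_true[L:] ** 2)
print(f"relative prediction error: {rel_err:.4f}")
```

an ML model earns its keep over this baseline when the mapping has to generalize across positions and cabin geometries instead of being re-measured per point.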
I appreciate you and don’t worry at all, I completely understand how you feel regarding patience in online disagreements (this app especially gets so unhinged sometimes); and I’m really sorry for my behavior too, I can’t exactly sit here and act like I wasn’t being argumentative lol. I see what you mean about LLMs and GPTs, I definitely got too defensive earlier and should’ve been asking you about your perspective more rather than reacting how I did.
it’s for active noise control, because allegedly you can’t put microphones in people’s ears when they’re driving. in theory it could also be combined with computer vision to feed in realtime data about head positioning to further localize the ‘virtual microphone’ signal. that computer vision implementation used to be fantasy stuff, but now that the hardware is already there for awareness/attention monitoring, it’s more viable
It’s already increasing productivity, which will help keep inflation under control, and there are tons of novel medical applications that can save lives. Like I know there are more sophisticated criticisms, but most of the ai hate seems to come from people just getting annoyed by ai slop on social media. https://www.bbc.com/future/article/20260309-ai-is-finding-treatments-for-incurable-diseases