Harm reduction cannot be set up as the highest aim of a system, because its maximization means the total removal of human autonomy. It must be subordinate to a higher good.
“Keeping as many people out of harm is actually a bad thing, we should be subject to arbitrary rules written by nomadic shepherds 6,000 years ago”
1
🍳
Anonymous#2 4w
There has to be some allowance for people to make choices and mistakes and get hurt sometimes, is what OP is saying, I think
5
Anonymous 4w
It’s really simple actually, if somebody says something is harming them then odds are they’re being harmed
1
Anonymous#2 4w
I’m thinking more along the lines of “every sci-fi story about an AI programmed to do this ends in either human extinction or humans hooked up to pods and dopamine boxes”
4
Anonymous#2 4w
Basically, the system either has to become perfectly totalitarian, eliminate everyone who could be harmed, or be removed entirely so that it can no longer cause harm itself.
It’s an argument for eugenics as well.
6
🍳
Anonymous 4w
It’s not dissimilar to the idea that if you built an AI, gave it flat utilitarian ethics, and told it to maximize utility, it would do some super fucked up shit
4
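The failure mode described here can be sketched in a few lines of Python. This is a toy illustration, not any real AI system: the action names and utility scores are invented for the example. The point is that a pure argmax over a single utility number always picks the highest-scoring option, however monstrous, unless something outside the utility function rules it out.

```python
# Hypothetical action -> utility scores (made up for illustration).
actions = {
    "cure_diseases": 80,           # genuinely good, high utility
    "ban_all_risky_activity": 90,  # removes autonomy, scores higher
    "wirehead_everyone": 100,      # maximal "pleasure", clearly monstrous
}

def best_action(utilities):
    """Pure argmax over utility -- no notion of rights, autonomy,
    or side constraints anywhere in the objective."""
    return max(utilities, key=utilities.get)

print(best_action(actions))  # prints "wirehead_everyone"
```

Nothing in the objective distinguishes the degenerate option from the good one except its score, which is the whole problem the thread is pointing at.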
Anonymous dumbegg 4w
Exactly. A system, digital or legal, cannot be built with harm reduction as its highest principle, because that inevitably leads to what we would consider bad endings.