Harm reduction cannot be set up as the highest aim of a system, because maximizing it means the total removal of human autonomy. It must be subordinate to a higher good
upvote 3 downvote

Anonymous 4w

“Keeping as many people as possible out of harm’s way is actually a bad thing, we should be subject to arbitrary rules written by nomadic shepherds 6,000 years ago”

upvote 1 downvote
🍳
Anonymous replying to -> #2 4w

What OP is saying, I think, is that there has to be some allowance for people to make choices, make mistakes, and get hurt sometimes

upvote 5 downvote
Anonymous 4w

It’s really simple actually, if somebody says something is harming them then odds are they’re being harmed

upvote 1 downvote
Anonymous replying to -> #2 4w

I’m thinking more along the lines of “every scifi story about an AI programmed to do this ends in either human extinction or humans hooked up to pods and dopamine boxes”

upvote 4 downvote
Anonymous replying to -> #2 4w

Basically the system either has to be perfectly totalitarian, eliminate everyone who could be harmed, or be removed entirely so it can’t cause harm itself. It’s also a case for eugenics.

upvote 6 downvote
🍳
Anonymous 4w

It’s not dissimilar to the idea that if you built an AI and just gave it flat utilitarian ethics, and told it to max utility, it would do some super fucked up shit

upvote 4 downvote
Anonymous replying to -> dumbegg 4w

Exactly. A system, digital or legal, cannot be built with harm reduction as its highest principle, because that inevitably leads to what we would consider bad endings.

upvote 1 downvote