One overused cliché I see in discussions about “ethical AI” is the idea of making autonomous systems, robots, etc., “three laws compliant”.

While it is obviously a credit to Asimov’s imagination, I find it a very clear sign that the people who say robots need to follow these laws IRL haven’t actually read his work. You only need to read the first few stories Asimov wrote to understand “oh, huh, these Three Laws don’t work”.

The Three Laws are a literary device, not a scientific one. Asimov invented them to explore the conflicts between the laws themselves, and between artificial intelligences and human intelligence. They are deliberately vague and loose so they can be the vehicle through which Asimov explores his stories.

They are, in essence, a thought experiment.

Most crucially: you can’t apply them to real robots/AI, because unlike Asimov’s fictional creations, no autonomous system that exists today actually has the foresight or reasoning ability to conclude whether it is following the Three Laws.

@yassie_j I think it's pretty firmly established at this point that the tech bros don't understand thought experiments. Or maybe even thought.

These are the people who cooked their brains on weapons-grade deliriants and scared themselves with Roko’s basilisk.

@yassie_j unfortunately I find Harry Harrison’s War with the Robots is probably closer to the way things will turn out.
