This article is published in partnership with Quora, a platform where Internet users can ask questions and get answers from other experts in the field.
Question of the Day: “Are there laws governing artificial intelligence?”
Enora Bescon’s response:
I would say no — we are really at the very early stages of this idea.
In fact, today we are still at the stage of asking who is responsible when, say, an autonomous car causes an accident (once it has been established that the driver had nothing to do with it).
Everyone agrees that if a programmer deliberately wrote code that makes the machine do something illegal, he should be punished. But in practice this rarely happens, and when an (unintentional) error causes damage, responsibility is diluted among the programmer, the project manager, the test manager, and everyone up the hierarchy to the general manager.
There is a lot of exciting discussion and work under way, but no general consensus yet. Lawsuits are in progress, and we make do with our existing laws, each side building its arguments on the investigations. Do an internet search for “Tesla Autopilot trial” and you’ll see that we are in the middle of this process, handled case by case. Consumer and user associations have also formed to deal with possible corporate abuse.
For example, after the recent crashes, Tesla will certainly need to rethink its communications and make clear that its cars do not drive themselves: they need an attentive user who is ready to take back control very, very quickly.
But understand that until our AIs are conscious (and that is not happening tomorrow!), they are just machines with no legal existence. The problem is therefore the same as whenever any machine causes an accident (a plane crash, for example).
And imposing Asimov’s laws of robotics is really not on the agenda. Those laws notably require that “a robot may not injure a human being or, through inaction, allow a human being to come to harm.”
This makes no sense at the moment, because AIs are nowhere near that level of understanding, and the “laws” that actually govern them are far simpler.