Who Should Bear The Responsibility For The Actions Performed by Artificial Intelligence?

AI is increasingly present in our everyday lives: tech companies are using the huge amounts of data available to them to make better predictions, track our behaviour and offer services they think we will use. As AI finds its way into everything, this raises the question of who is responsible for the decisions AI makes.

Criminal Liability

What happens when an AI system fails and kills or harms someone?

Can an AI system be held criminally liable for its actions?

Criminal liability usually requires both an action and a mental intent, so one of three scenarios could apply to AI systems (MIT Technology Review, March 2018).

The first, perpetrator via another, applies when an offence has been committed by a mentally deficient person or an animal, who is therefore deemed to be innocent.

However, anybody who instructed that entity can be held criminally liable: for example, a dog owner who orders the animal to attack another person.

Under this scenario, those who design or instruct intelligent systems would be held liable, as the AI system itself is treated as an innocent agent.

The second scenario, known as natural probable consequence, occurs when the ordinary actions of an AI system are used inappropriately to perform a criminal act. This happened in a Japanese motorcycle factory, where a robot erroneously identified an employee as a threat to its mission and calculated that the most efficient way to eliminate that threat was to push him into an operating machine nearby, killing him instantly.

The key question here is whether the programmer of the machine knew that this outcome was a probable consequence of its use.

The third scenario is direct liability, which requires both an action and an intent. The action is straightforward to prove if the AI system takes an action that results in a criminal act, or fails to act when there is a duty to act.

Intent is much harder to prove, but it is not always required. If a self-driving car exceeds the speed limit on the road it is driving on, this is a strict liability offence, and the criminal liability can be assigned to the AI system; the owner may not be liable.

Who owns an AI-generated idea?

If a computer has an idea, who owns it?

Much of the growth in patent applications in recent years has been related to AI. Of the roughly 340,000 AI-related patents, 53 per cent have been published since 2013, with China leading the way in the number of patents published (Financial Times, October 2019).

Now scientists are beginning to develop machines capable of coming up with ideas outside the creators’ expertise. This raises the question of who owns the intellectual property for an AI-generated invention.

The problem is that if AI cannot be recognised as an inventor, the owners of the AI will have no protection for the ideas it generates. This may discourage them from pushing development further; not recognising AI as an inventor threatens innovation.

While granting intellectual property protection to a machine might not seem a pressing concern now, the question will become more urgent as AI systems invent more routinely.

Who owns the output of an AI system?

As deep learning algorithms become more popular, researchers understand less and less about how the machine arrives at an outcome, creating black boxes that are effectively readable only by the machine.

This is true in many fields. Recently, for example, an AI system was able to predict with great accuracy when a subject would die, but the researchers had no idea how the machine reached that conclusion.

A particular family of black-box algorithms is GANs (generative adversarial networks), where two neural networks contest with each other in a game (in the sense of game theory, often but not always a zero-sum game).

Given a training set, this technique generates new data with the same statistics as the training set. For example, photographs can be used to train a machine capable of producing new photo-realistic images.
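To make the adversarial game concrete, here is a minimal sketch in PyTorch. It is illustrative only: the networks, the toy 1-D data distribution and the training loop are simplified assumptions, not the large systems actually used for photo or music generation.

```python
# Minimal GAN sketch (illustrative): a generator learns to mimic samples
# from a simple 1-D Gaussian, while a discriminator learns to tell real
# samples from generated ones. Real image/music GANs use far larger
# networks and datasets, but the structure of the game is the same.
import torch
import torch.nn as nn

latent_dim = 8   # size of the random noise fed to the generator
data_dim = 1     # toy "real" data: samples drawn from N(3, 0.5)

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, data_dim),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),   # probability that the input is "real"
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # Train the discriminator: real samples should score 1, fakes 0.
    real = torch.randn(64, data_dim) * 0.5 + 3.0
    fake = generator(torch.randn(64, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake), torch.zeros(64, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train the generator: try to make the discriminator output 1 on fakes.
    g_loss = loss_fn(discriminator(generator(torch.randn(64, latent_dim))),
                     torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, generated samples should have statistics close to the
# training distribution (mean ~3, std ~0.5).
print(generator(torch.randn(1000, latent_dim)).mean().item())
```

The same two-player setup, scaled up, is what allows a GAN trained on photographs or on one composer's scores to produce new, statistically similar output.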

This creates a scenario in which a machine is trained on one composer's music, mapping out the style and nuances of that specific musician, and then generates an original musical piece from that information.

Who owns that new music?

Is it the original composer, who provided the data that made the new piece possible? Is it the owner of the AI system, who created the technology that allowed this new musical score to exist? Or is it the AI system that created it?

Another example is the rise of deep fakes, where very realistic images and sound are created to mimic a person talking. Combined with social media, this technology can create havoc if the people viewing the content do not realise that what they are seeing is the creation of a machine.

Imagine the repercussions if a deep fake were used to mimic a world leader declaring war on another country: the information would spread like wildfire and cause enormous damage before anyone realised it was the work of a deep fake. Or imagine a deep fake of a company CEO being used to trigger a market fluctuation; the stock market reacts instantly to news, and the impact would be huge.

So in these cases, who will stand in front of the FCA or the court in The Hague to justify the machine's decisions or its misuse?

Conclusion

The points raised in this post are meant as a starting point for discussion on how we can ensure we build responsible AI in the future: taking responsibility for the systems we create, and establishing guidelines for future developers to use when researching new AI applications.
