AI already permeates our lives on many levels, in a multitude of practical, often unseen ways. While the machine revolution is fascinating, and will inevitably cause some harm along the way, embodied artificial intelligence systems perhaps pose the most significant challenges for lawmakers.
Emergent robot behavior and the blame game
Emergent behavior promises to make robots far more effective and useful than they've ever been before. Its potential danger is that it's unpredictable. In the past, robots were programmed for set tasks, and that was that: staying behind the safety barrier and following established protocols kept operators safe.
Blame the user, blame the maker, or blame the robot?
Let's look at a hypothetical example: a robot in some future setting tasked with washing cars. The manufacturer designs it to learn ever more efficient and effective ways to clean vehicles; the machine can even rewrite its own algorithms to that end. Within a year, the robot is working faster and better, exceeding all expectations, when an accident occurs and a motorist is seriously injured. Who's at fault?
In the US, such an incident might lead to a civil case based on negligence or defective design. The plaintiff could argue that any user of the product had a right to be safe and unharmed by the experience, and the manufacturer would likely appear careless to someone who only wanted their car washed and trusted the technology to do the job safely. To avoid any share of responsibility, a user generally needs only to have used the product for its intended purpose.
Product liability laws are all about attributing blame, but who’s really at fault in such a scenario? After all, this is a machine that learns as it goes along, and is unpredictable as a result. The manufacturer, far from trying to hide that, probably featured the fact prominently in any marketing. Is it fair to blame the maker for something that’s a result of the robot’s environment and learning ability — and could not have been foreseen? It’s probably also true to say that by the time of the accident, the robot was operating according to algorithms it wrote itself — so do we blame the robot instead?
It's a gray area, to say the very least. Ultimately, the manufacturer chose to sell a self-learning machine and is profiting from the technology; if the nature of the product caused harm, maybe the maker should bear some of the brunt. However, there's a very real danger that progress could be slowed significantly, or even halted, if companies can't balance the risk of legal action against profits. The fact is that while artificial intelligence is relatively new, the problems it poses for lawmakers are not.
Embodied AI won't be the first time product liability law has had to confront an entirely new type of product.
New technologies and lawmaking through history
When the automobile arrived, lawmakers had to adapt; without new traffic rules, US highways would be killing fields where pedestrians were permitted to roam freely. Only time will tell how twenty-first-century lawmakers deal with AI and emergent robot behavior. The only thing that seems clear is that it won't be the last time society is required to accommodate new tech.