The European Commission has proposed new rules to assist people who have been harmed by artificial intelligence (AI) and digital devices such as drones.
People suing over incidents involving such technology would face a lower burden of proof under the proposed AI Liability Directive. Justice Commissioner Didier Reynders said it would create a legal framework fit for the digital age. The directive’s scope could include self-driving cars, voice assistants, and search engines.
If passed, the Commission’s rules could coexist with the EU’s proposed Artificial Intelligence Act, which would be the first of its kind to limit how and when AI systems can be used.
Artificial intelligence systems are trained on massive amounts of data, enabling machines to perform tasks that would normally require human intelligence.
The European Commission published the AI Liability Directive on Wednesday; it would introduce a “presumption of causality” for those claiming injuries from AI-enabled products.
This means that victims would not have to untangle complicated AI systems in order to prove their case, provided a causal link between the product’s AI performance and the resulting harm can plausibly be shown.
For a long time, social media companies have hidden behind the caveat that they are merely platforms for other people’s content and are therefore not responsible for it.
The EU does not want to repeat that scenario, in which firms that manufacture drones, for example, escape liability for the harm they cause simply because they were not the ones at the controls.
If your product is capable of causing distress or damage, you must accept responsibility if it does. It is a clear message, and perhaps one that is long overdue.
Is this overly harsh on a relatively new industry? If a car crashes because of a mechanical fault, that is the manufacturer’s responsibility; if it crashes because of the driver’s behavior, it is not.
All eyes will be on the first test case if this draft passes. Europe is still chasing big tech with big regulation, but is that realistic?
According to the European Commission, high-risk AI applications can include infrastructure and products that directly affect someone’s life and livelihood, such as transport, exam scoring, and border control.
Disclosure of information about such products will give victims more insight into where liability lies, but will be subject to safeguards to “protect sensitive information.”
While such provisions in the directive may make businesses “unhappy,” Sarah Cameron, technology legal director at law firm Pinsent Masons, said the rules would help consumers and businesses alike by clarifying liability for AI-enabled products.
“The complexity, autonomy, and opacity (the so-called black box effect) of AI has been a major barrier to businesses adopting AI, creating uncertainty around establishing liability and with whom it sits,” she said.
“The proposal will make it possible to seek compensation from the AI-system provider or any manufacturer that integrates an AI system into another product when AI systems are defective and cause physical damage or data loss.”