Who is responsible for the effects of renegade computer programs is going to become a serious legal topic as an increasing number of things become ‘intelligent’ and connected to the internet.
Britain’s Financial Conduct Authority (FCA) is one of the first regulators to start looking at how companies manage their algorithms. In its recently released rules for wholesale traders, the FCA sets out the responsibilities of companies and their managers.
“We are determined to embed a culture of personal responsibility within the banking sector,” says the FCA’s Acting Chief Executive Tracey McDermott. “Clear individual accountability should focus minds, drive up standards, and make firms easier to run and to supervise. And if things go wrong, it will allow senior managers to be held to account for misconduct that falls within their area of responsibility.”
The definition of ‘misconduct’ when an algorithm goes awry will undoubtedly prove contentious, as will the idea of ‘personal responsibility’ in the banking sector.
While it’s tempting to dismiss such a move as specific to the financial services industry, the FCA’s regulations are a pointer to what most industries will face over the next ten years as more devices make decisions for themselves or communicate with other equipment over the Internet of Things.
In many areas, the question of who is responsible for a rogue computer program will be left to the uncertainties of the legal system, with no doubt many surprises, injustices, inconsistencies and unintended consequences along the way. The earlier regulators develop a framework for dealing with mishaps, the better.
Should the IoT start delivering on its promise of a connected world, a poorly designed algorithm in even a relatively trivial device or service could cause massive disruption and damage. It’s hard not to imagine that regulators in many other industries are already looking at how to attribute responsibility, if not minimise risk, in a smart, connected world.