The insurance industry must develop a code of ethics around the use of artificial intelligence systems if it does not want a set of rules arbitrarily imposed on it.

For decades, when discussing the basis on which futuristic robots would make decisions, people turned to science fiction for answers – specifically, visionary writer Isaac Asimov, whose laws of robotics seemed to hold the promise of a workable solution.

Back in 1942, Asimov postulated three laws that would govern a robot's behaviour, namely: a robot must not injure a human being; a robot must obey orders given by human beings, unless this would result in harm to a human; a robot must protect its own existence as long as this does not conflict with the first or second laws.

As a first attempt at creating a code of ethics for artificial intelligence (AI), the three laws were a commendable piece of work, but as real AI proliferates, we can now see their inherent weaknesses.

Consider the well-trodden scenario in which the AI controlling an autonomous vehicle must choose between hitting a pedestrian or, potentially, injuring its occupant. Asimov's first law – a robot may not injure a human being – is immediately revealed as unfit for purpose. The vehicle's AI has no ethical basis, or value system, on which to make the difficult choice. The result: either the vehicle's AI freezes, or it flips a philosophical coin. Clearly, neither option is satisfactory.
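To make the failure concrete, here is a minimal, purely illustrative sketch (the scenario, names and data are invented for this article): with the first law encoded as a hard constraint, a dilemma in which every available option harms someone leaves the system with no basis to choose, so it can only freeze or pick at random.

```python
# Illustrative sketch only: Asimov's first law as a hard constraint.
# When every option violates the rule, the rule provides no ordering
# between them - the system "flips a coin".
import random

def first_law_permits(action):
    """Reject any action that harms a human being."""
    return not action["harms_human"]

def choose_action(actions):
    permitted = [a for a in actions if first_law_permits(a)]
    if not permitted:
        # Every option harms someone: no ethical basis to prefer one,
        # so the choice is arbitrary.
        return random.choice(actions)
    return permitted[0]

dilemma = [
    {"name": "swerve", "harms_human": True},  # injures the occupant
    {"name": "brake",  "harms_human": True},  # hits the pedestrian
]
print(choose_action(dilemma)["name"])  # an arbitrary choice, not an ethical one
```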

Beyond the relative simplicity of the kill/do not kill decision, AI poses a host of less instantly dramatic conundrums whose impact could be just as profound. We have already seen the harbingers of these: trolls taking advantage of the programming of Microsoft's chatbot Tay to make it use racist and abusive language; or biased data leading to biased decisions, such as those reached by algorithms used by US courts to guide judges' sentencing.
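A toy sketch, with wholly invented figures, of the second point: a system that simply learns historical decision rates will faithfully reproduce whatever skew those past decisions contain, even though the algorithm itself is neutral.

```python
# Purely illustrative, invented data: "training" on historical risk flags.
historical_decisions = [
    # (group, was_flagged_high_risk) - labels reflect past human judgements
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

def learned_risk_rate(group):
    flags = [flag for g, flag in historical_decisions if g == group]
    return sum(flags) / len(flags)

# The learned "risk scores" simply echo the skew in the training labels.
for group in ("A", "B"):
    print(group, learned_risk_rate(group))  # A -> 0.75, B -> 0.25
```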

Delivering on AI's potential

If the full potential of AI is to be reached, we must accept that the neural networks underpinning AI systems' decision-making will be too complex to map. Essentially, the "intelligence" manifested by black-box algorithms grows exponentially, so that it rapidly surpasses the cognitive abilities of its maker.

While this could create issues under the EU's General Data Protection Regulation – specifically rights regarding automated individual decision-making – it does not justify stepping back from the technology.

AI and robotics hold out the possibility of tremendous benefits, including better pricing, improved compliance, fairer outcomes and huge cost savings – in an investor presentation last week, Chubb suggested the figure of $350m to $500m a year through improving and speeding up foundational processes.

We do not fully understand the workings of our brains, but this has not prevented us from developing a system of ethics to guide and manage their operation. The same philosophical scaffolding is now urgently needed to support the growth of AI. This, in turn, raises the question of whether the development of ethics is solely within the purview of humans or if machines can also play a role – but that may be a step too far at this time.

For centuries, the insurance industry, with its pools of shared risk, has played a valuable social role. But machine learning applied to data on a never-before-seen scale will allow insurers to understand policyholders and risks in new and all-encompassing ways. In some areas, this could inform underwriting decisions so completely the concept of shared risk may become virtually obsolete.

Hyper-personalised underwriting that sidesteps "difficult" risks might improve insurers' financial performance, at least in the short term. But giving up on risk pooling would create a class of risks that are, effectively, uninsurable and, ultimately, undermine the important societal role played by insurance. That, in turn, would be likely to affect the way in which the industry is regulated.
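A rough illustration of why pooling matters, using entirely invented numbers: under a shared pool every policyholder pays something close to the portfolio's average expected loss, while fully individualised pricing charges each risk its own expected loss, leaving the worst risks facing premiums they may simply be unable to pay.

```python
# Toy example with invented figures: pooled versus fully individualised pricing.
expected_losses = {          # hypothetical annual expected loss per policyholder
    "low_risk_1": 100,
    "low_risk_2": 120,
    "medium_risk": 400,
    "high_risk": 4_000,      # e.g. a flood-prone property
}

pooled_premium = sum(expected_losses.values()) / len(expected_losses)
print(f"Pooled premium for everyone: {pooled_premium:.0f}")   # 1155

for name, loss in expected_losses.items():
    print(f"Individual premium for {name}: {loss}")
# The high-risk policyholder moves from ~1155 to 4000 a year -
# technically priced, but effectively uninsurable.
```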

Additionally, the use of AI is a two-way street. We must remember customers and regulators will have access to the same powerful machine learning tools. These might lead a commercial business to self-insure its property for flood and fire, say, or an individual with favourable genetics to forgo life cover. Regulators might one day demand access to insurers' live data so they can use AI to monitor their activities 24/7. Already an AI system exists that can "listen" to financial traders' phone conversations with clients and apply sentiment analysis to flag unauthorised advice by spotting phrases such as "I suggest..." or "If I were you...".
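The article does not describe how such a monitoring system is built; the sketch below assumes a simple phrase-matching approach, with an invented phrase list and transcript, to show what flagging advice-like language in a call might look like in practice.

```python
# Minimal sketch of phrase-based compliance flagging on a call transcript.
# The phrase list and transcript are assumptions for illustration only.
import re

ADVICE_PHRASES = [r"\bI suggest\b", r"\bif I were you\b", r"\bI recommend\b"]

def flag_unauthorised_advice(transcript_lines):
    """Return (line number, text) for lines containing advice-like phrases."""
    flagged = []
    for i, line in enumerate(transcript_lines, start=1):
        if any(re.search(p, line, flags=re.IGNORECASE) for p in ADVICE_PHRASES):
            flagged.append((i, line))
    return flagged

call = [
    "Thanks for calling, how can I help?",
    "If I were you, I'd move everything into this fund today.",
    "I can only provide factual information about the product.",
]
for line_no, text in flag_unauthorised_advice(call):
    print(f"Flagged line {line_no}: {text}")
```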

How will we develop the robust ethics necessary to chart a course through this brave new world? After all, today's ethical issue is tomorrow's legal dispute. It would be complacent to assume ethics, law and regulation will develop as they have with previous technological advances. First, because of the rapid rate at which AI is developing and being deployed; second, because it offers capabilities (such as image and voice recognition, predictive analytics, fraud detection) that were previously the exclusive domain of humans; and third, because those capabilities open up rapid and widespread automation of functions, with all of the advantages and dangers that entails.

Ethical framework

Unless they wish to have a code of ethics imposed on them by government and the courts – a prospect now facing the tech giants – insurers must make a concerted effort to get on the front foot. This means putting in place a clear process and infrastructure for developing and testing ethical principles. Other sectors may, in part, provide a template.

Bioengineering is a highly contentious sector. Only recently, the media reported claims of gene-edited babies. The Nuffield Council on Bioethics is an independent body that examines and reports on ethical issues in biology and medicine. Its objective is to inform policy and debate about these ethical questions. Its council comprises academics, clinicians and researchers with strong credentials in the field. Lawyers and professional bodies such as the General Medical Council also have an important role to play.

For the insurance sector, a similar bringing together of experts from the fields of technology, law, medicine, risk management, social policy, economics and ethics would be a solid starting point for the industry to review and rebuild its ethical foundations. What the industry needs is a safe space in which thought experiments can be conducted, in much the same way as Thatcham allows motor insurers to address the big questions they face.

It is clear we need to arm ourselves with the legal, intellectual and corporate frameworks necessary to address the conundrums that lie ahead with confidence. How we do that is up to us – but the pace and magnitude of this change will be far greater than anything we have experienced before.

It would have been convenient if Asimov's laws had held the answers, but they represent a too-neat solution from a more technologically primitive age. One can only imagine what law Asimov might have formulated had he found himself in the path of an ethically challenged autonomous car.
