The European Union Aviation Safety Agency (EASA) established a task force on artificial intelligence (AI) in October 2018 and has now published its Artificial Intelligence Roadmap: A human-centric approach to AI in aviation. The Roadmap sets out a timeline to autonomous flights and surveys the extensive regulatory changes needed to ensure the responsible and safe use of AI, particularly machine learning (ML). The report is a useful summary of the challenges facing any company using ML, especially those working on applications that could cause physical harm or even death, such as self-driving cars and medical devices. Key challenges for all companies include achieving robust, predictable and explainable AI performance and recruiting AI specialists in a highly competitive market. Matt Hervey explores these in this blog post.

Last year, the EU High Level Expert Group on AI published its Ethics Guidelines for Trustworthy AI, setting out general requirements such as human oversight, robustness and safety, privacy, transparency, fairness, societal and environmental wellbeing, and accountability. EASA's report builds on the Guidelines and, in section G, examines in greater detail the complexities of regulating safety-critical AI. These include the difficulty of defining intended function, avoiding unpredictable behaviour, the lack of standard methods to evaluate the operational performance of ML, the complexity of the architectures and algorithms, and the possibility of adaptive, ever-changing software.

The regulatory challenges are immense. EASA's report expressly states the need "for a shift in paradigm to develop specific assurance methodologies to deal with learning processes" and notes that explainability of AI (a key challenge) is the subject of several research initiatives - or, in other words, an unresolved problem.

The report also acknowledges the "utmost importance" of ensuring EASA's personnel have the right level of AI expertise. It notes: "Contrary to industry personnel, the Agency staff is not directly exposed or involved in the development of AI. This poses the risk of having a knowledge gap between EASA and industry experts, which could be detrimental to the fulfilment of the EASA core safety functions." A shortage of AI specialists is also a problem for industry and has been identified as a key challenge to the adoption of AI. In December, LinkedIn identified "AI Specialist" among the top three fastest-growing jobs in Australia, Canada, France, Germany, India, Ireland, Singapore, Sweden, Switzerland, the UK and the USA. Reuters has reported that demand for AI specialists is outstripping supply by an increasing margin. Companies need to compete for skilled graduates, and some may struggle to match the salaries offered by "Big Tech" and the finance sector. It is therefore important to build ties with centres of academic excellence, for example through sponsoring research projects and offering internships.

EASA plans to release its guidance for aviation from 2021 onwards:

Level 1 (Crew assistance/augmentation): first guidance 2021; final guidance 2026; certification 2022-2025
Level 2 (Human/machine collaboration): first guidance 2022; final guidance 2026; certification 2025-2030
Level 3 (Autonomous commercial air transport): first guidance 2024; final guidance 2028; certification 2035 onwards

Its guidance will be critical reading for the aviation industry and, given the regulatory precision traditionally required in aviation, may help inform best practice for any company using AI where safety is a factor.

