Forrester defines explainable AI (XAI) as a set of techniques and methods that make AI systems understandable to humans. XAI technologies let people examine, understand, and ultimately control AI at various stages of deployment. Explainability takes several forms. It can come from model transparency itself, or from techniques that build an interpretable surrogate model to approximate the inner workings of an opaque one. Explanations can be global, clarifying the overall mechanics of a model, or local, shedding light on an individual prediction or decision. Explainability is also contextual: it can provide deep technical detail for a data scientist and a simpler, more accessible interpretation for a marketer, regulator, or consumer.
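To make the surrogate idea concrete, the sketch below shows a global surrogate in its simplest form: an interpretable decision tree is fit to the predictions of an opaque model so that its logic can be read directly. The dataset, models, and tree depth are illustrative assumptions, not a prescribed approach.

```python
# Minimal sketch of a global surrogate model (illustrative assumptions:
# the dataset, the choice of black-box model, and the tree depth).
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

X, y = load_diabetes(return_X_y=True)

# The opaque model whose behavior we want to explain.
black_box = GradientBoostingRegressor(random_state=0).fit(X, y)

# The surrogate learns to mimic the black box's outputs, not the raw labels.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how closely the surrogate reproduces the black box's predictions.
print("fidelity (R^2 vs. black-box predictions):",
      surrogate.score(X, black_box.predict(X)))

# The surrogate's rules serve as a global, human-readable explanation.
print(export_text(surrogate, feature_names=list(load_diabetes().feature_names)))
```

A shallow tree trades some fidelity for readability; in practice, the depth is tuned until the surrogate is both faithful enough and simple enough to explain to a nontechnical stakeholder.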

XAI earns a spot on our top 10 emerging technologies list because it bridges the AI trust gap. A lack of trust in AI is currently impeding its adoption. Many consumers remain skeptical of AI, often prompted by media reports of bias and discrimination. Within organizations, many AI initiatives stall because stakeholders don't trust the models enough to build them into their workflows. By explaining the reasoning behind a model's predictions or recommendations, XAI builds trust in AI systems. That trust improves business performance, reduces the risk of regulatory or reputational harm, and creates transparency for affected stakeholders. For AI to realize its transformative potential, human trust is essential, and XAI plays a vital role in cultivating it.

XAI technology is on the verge of going mainstream. It began as a niche topic among machine learning researchers, but interest is growing rapidly. Organizations increasingly want to use advanced techniques such as neural networks while avoiding the risks that come with their opacity. The supply of solutions is growing just as quickly. Popular methods such as LIME (Local Interpretable Model-Agnostic Explanations) and SHAP (SHapley Additive exPlanations) are already widely used for model interpretation in the open-source ecosystem. Some vendors, such as Truera and Fairly AI, have developed proprietary explainability techniques and claim better performance and accuracy. And prominent AI/ML vendors such as IBM and Microsoft have built explainability into their platforms.
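As an illustration of how these open-source methods are typically applied, here is a minimal sketch of a local explanation with SHAP for a tree-based model. The dataset, model, and hyperparameters are assumptions chosen for demonstration, not a reference deployment.

```python
# Illustrative sketch of a local SHAP explanation (assumed dataset, model,
# and hyperparameters; any tree-based scikit-learn model would work).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train an opaque ensemble model on a bundled example dataset.
data = load_diabetes()
X, y = data.data, data.target
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # local explanation for one row

# Each value is a feature's signed contribution to this single prediction,
# relative to the model's average output (the explainer's expected value).
for name, contribution in zip(data.feature_names, shap_values[0]):
    print(f"{name}: {contribution:+.2f}")
print("base value:", explainer.expected_value)
```

The same per-feature contributions can be aggregated across many rows to produce a global view, which is how data scientists typically move from explaining one decision to explaining the model as a whole.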

Companies operating in Europe and in regulated industries need XAI urgently. The EU's Artificial Intelligence Act, set to be enforced in 2023, mandates explainability requirements commensurate with an AI system's risk. Any organization doing business in Europe will have to comply or face penalties of up to €30 million or 6% of global annual turnover, whichever is higher. Around the world, highly regulated industries such as banking and healthcare, which want to use AI for credit decisions and diagnoses, respectively, will need to adopt explainability soon. Eventually, every sector will adopt XAI. Even lightly regulated industries such as retail and manufacturing will want the greater stakeholder trust and deeper understanding that model explanations provide.