
The question of trust in AI is an ongoing and widely discussed topic for corporations across the world. As they spend significant money and time to regulate AI, a trend toward Explainable AI has emerged.

According to Reuters, Amazon.com Inc's (AMZN.O) machine-learning specialists uncovered a big problem: their new recruiting engine did not like women. The team had been building computer programs since 2014 to review job applicants' resumes, with the aim of mechanizing the search for top talent. But by 2015, the company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way. That is because Amazon's computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period, most of which came from men.

In order to trust the decisions of AI systems, humans must be able to fully understand how those decisions are being made. A lack of explainability is a hindrance to fully trusting AI systems. The discipline of making AI decisions understandable is known as Explainable AI (XAI).

Explainable AI: Decoding the Black Box

An emerging field, Explainable AI (XAI) addresses concerns about how the black-box decisions of AI systems are made. It inspects and tries to understand the steps and models involved in making decisions.

XAI answers some pressing questions, such as:

How can we gain explainability in AI systems?

One way to gain explainability is to use machine learning algorithms that are inherently explainable. Simpler forms of ML, such as decision trees, Bayesian classifiers, and other such algorithms, offer a degree of traceability and transparency in their decision making. These forms of ML sacrifice little in performance or accuracy while providing the visibility needed for critical AI systems.
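To make the idea of an inherently explainable model concrete, here is a minimal sketch of a hand-rolled decision tree that returns both its prediction and the exact rules it fired, so every decision can be traced. The loan-approval features and thresholds are hypothetical, chosen only for illustration.

```python
def classify_loan(applicant):
    """Approve or deny a loan and record the decision path taken."""
    path = []
    if applicant["income"] >= 50000:
        path.append("income >= 50000")
        if applicant["debt_ratio"] < 0.4:
            path.append("debt_ratio < 0.4")
            return "approve", path
        path.append("debt_ratio >= 0.4")
        return "deny", path
    path.append("income < 50000")
    if applicant["years_employed"] >= 5:
        path.append("years_employed >= 5")
        return "approve", path
    path.append("years_employed < 5")
    return "deny", path

decision, trace = classify_loan(
    {"income": 62000, "debt_ratio": 0.3, "years_employed": 2}
)
print(decision)            # approve
print(" -> ".join(trace))  # income >= 50000 -> debt_ratio < 0.4
```

Because the model is just a set of readable rules, an auditor can point to the exact conditions behind any individual decision, which is the traceability property discussed above.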

However, more complicated and powerful algorithms, such as neural networks and ensemble methods including random forests, sacrifice transparency and explainability for power, performance, and accuracy.

In 2019, the US Defense Advanced Research Projects Agency (DARPA) recognized the need to provide explainability for deep learning and other more complex algorithmic approaches, and pursued it through various funded research initiatives.

DARPA describes AI explainability in three parts: prediction accuracy, meaning models can explain how conclusions are reached; decision understanding and trust from human users and operators; and inspection and traceability of the actions undertaken by the AI system.

What can we gain from interpretable and traceable ML models?

Fairness: When we ensure our predictions are unbiased, we prevent discrimination against under-represented groups.

Trust: If people understand how our model reaches its decisions, it is easier for them to trust it.

Traceability: It helps us get into AI decision loops and empowers us to stop or control the system's tasks whenever the need arises.
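The fairness point above can be made concrete with a simple audit in the style of demographic parity: compare the model's positive-prediction rate across groups, and flag a large gap as potential bias. The predictions and group labels below are hypothetical.

```python
from collections import defaultdict

def positive_rates(predictions, groups):
    """Rate of positive predictions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(predictions, groups):
    """Largest difference in positive rates between any two groups."""
    rates = positive_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(positive_rates(preds, groups))  # {'A': 0.75, 'B': 0.25}
print(parity_gap(preds, groups))      # 0.5
```

A gap of 0.5 in this toy example would be a strong signal to investigate the model and its training data before deployment; real audits would also consider other fairness criteria, since no single metric captures them all.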

The way forward for XAI

Transparency is not needed for all systems, and it might not be possible to standardize algorithms or even XAI approaches, but it should be possible to standardize levels of transparency and explainability according to requirements.

Emlylabs has sought a method for making models more interpretable, allowing users to ask questions, understand how well a model has been trained, and evaluate its performance. This approach helps build trust in AI models by increasing transparency in interpreting their results.

Organizations also need governance over the operation of their AI systems. Oversight can be achieved by creating committees or bodies to regulate the use of AI. These bodies will oversee AI explanation models to prevent the rollout of incorrect systems. "As AI becomes more profound in our lives, explainable AI becomes even more important," says Ron Schmelzer, Managing Partner & Principal Analyst at an AI-focused research and advisory firm.

You can reach out to us simply by signing up at emlylabs.com.
