TRUSTWORTHY AI
According to Cevoli: “Biases can creep into technologies in numerous ways, which is why monitoring and addressing bias is generally a hard problem to solve. Yet this cannot be an excuse to use ‘tunnel vision’ when it comes to training and testing. AI and machine learning technologies are only as good as the datasets and the algorithms used to train them. Labelling data is extraordinarily time-consuming and therefore limiting, with the datasets created far too narrow to be entirely representative. Therefore, we cannot expect surveillance technology, for example, to accurately identify cohorts of individuals never seen before.”
For trustworthy AI, Cevoli believes we need it to be “globally representative and to understand where bias will be compounded rather than reduced. Using unlabelled data has the potential to greatly reduce bias, as the volume of data that a model can be trained on is increased by orders of magnitude. Testing beyond our own communities is paramount to designing technology that does not intentionally or accidentally favour certain groups in society over others. A wider testing culture is essential to addressing and mitigating bias.”
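The wider testing culture Cevoli describes can be made concrete by measuring model performance per cohort rather than only in aggregate, so that groups an average metric hides become visible. The sketch below is illustrative only, not drawn from the article; the group labels and data are invented for the example.

```python
# Minimal sketch: compare a model's accuracy across cohorts to surface
# groups that an aggregate accuracy figure hides. Labels, predictions,
# and group names here are hypothetical.

def per_group_accuracy(y_true, y_pred, groups):
    """Return {group: accuracy} over parallel lists of labels, predictions,
    and the cohort each example belongs to."""
    totals, correct = {}, {}
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (truth == pred)
    return {g: correct[g] / totals[g] for g in totals}

def accuracy_gap(scores):
    """Largest disparity between the best- and worst-served cohorts."""
    return max(scores.values()) - min(scores.values())
```

For example, a model that scores 0.75 on cohort A but 0.5 on cohort B reports a respectable aggregate accuracy while serving one group markedly worse; the gap, not the average, is what a wider testing culture would track.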
Ramprakash Ramamoorthy, Director of AI Research at ManageEngine, part of Zoho Corporation, adds that AI has been enabling enterprise software to move from just process automation to decision automation. “Given how AI is automating decisions in mission-critical use cases, it's important to add some accountability to the whole system. This can be achieved by using explainable AI,” he said.
“Most modern-day AI is just a black box, where the AI engine doesn't explain why it arrived at a particular decision,” argues Ramamoorthy. “But in an enterprise, there are usually processes built around decisions that involve a hierarchy of people or teams. When a decision is automated, it needs to be documented for future reference and due process has to be followed. An AI model that can explain its decision can help human beings understand and execute processes related to the decision – or even veto it, given that AI models are only 80% accurate on average,” he adds.

The Aletheia Framework
Used by Rolls-Royce, it is a simple, practical framework for adopting AI responsibly. Kainos has adopted it as a standard for guiding how it works with its customers, deploying it with each customer who is developing AI.
ManageEngine has deployed explainable AI wherever possible, and the use of AI features has gone up by 72% since the company started adding explanations to its predictions for IT automation.
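One common way to attach an explanation to a prediction, as Ramamoorthy describes, is to report each input feature's contribution to the score, which for a linear model is simply weight times feature value. The sketch below is a minimal illustration of that idea, not ManageEngine's implementation; the feature names and weights are assumptions invented for the example.

```python
import math

def explain_prediction(weights, bias, features):
    """Score a simple logistic model and return per-feature contributions,
    so a human reviewer can see why the decision was made -- or veto it.
    weights/features are {name: value} dicts sharing the same keys."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    probability = 1 / (1 + math.exp(-score))  # logistic link
    # Rank features so the biggest drivers of the decision come first.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return probability, ranked
```

With hypothetical IT-automation features such as `cpu_load`, `error_rate`, and `uptime_days`, the ranked contributions tell an operator which signal drove an alert, giving the documentation trail and veto point the quote calls for.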
EY – only 35% of companies have a process in place to evaluate AI risks such as bias and errors. Greg Cudahy is EY Global Technology, Media & Entertainment and
technologymagazine.com