How we think about trust will determine the success of AI

Tuesday, 04 August 2020

Clare Payne
EY Fellow for Trust and Ethics, Honorary Fellow, The University of Melbourne

    The ethical issues surrounding AI continue to be debated and understood. However, products are being developed and launched regardless, responding to our changing needs. EY Fellow for Trust and Ethics, Clare Payne, talks about the importance - and the challenges - of business and regulators working together to ensure AI and AI-related policy operates in the public interest, for the good of us all.


    Various surveys and tools attempt to analyse trust, seeking to determine whether it is rising or falling and by how much. However, the indicators used to gauge trust are not always reliable. For example, does the mass take-up of a new product indicate trust, or is it just convenience?

    In the case of tracing apps for COVID-19, are millions of citizens downloading the app because they trust the government, or is it fear, or pressure? It’s the same with the shift to digital financial services: for many, this has been forced by limits on the use of cash rather than representing new-found trust in digital financial products or those who develop them. To take these shifts as evidence of trust could be considered naïve, claiming ‘trust’ before it has been earned.

    We know that the ethical issues surrounding AI, the intelligence that powers many of these new technologies, are still being debated and understood. However, products are being developed and launched regardless, responding to our changing needs. Given this context, it’s clear that business and regulators must work together to ensure AI and AI-related policy operates in the public interest, for the good of us all.

    But do business and regulators trust each other enough to work together effectively? A recent survey conducted by EY, titled Bridging AI’s Trust Gaps, highlights the ethical and trust implications facing business and policy makers involved in AI.

    A trust gap

    The EY survey identified a trust gap between companies and policymakers, undoubtedly making the task ahead – working together for the greater good – more difficult. The report concluded that, ‘policymakers don’t trust the intentions of companies.’ Some would hold that this position is justified and maybe even prudent - there are certainly enough cases of unethical behaviour to claim a patchy record.

    Whilst companies must eventually comply with regulations to stay in business, it is in the self-regulatory component that they have often fallen short of regulator and community expectations. Self-regulation is a critical feature of the professions, such as law and accounting, where practitioners seek to monitor their own adherence not just to legal but also to ethical standards, rather than have an external agency enforce standards upon them. Self-regulation must be constant in order to maintain social licence and earn trust.

    Deserved or justified trust is what we should be aiming for

    Whilst trust might be the goal, we need to remain aware that not all trust is good: sometimes trust can be misplaced, and we can trust the wrong things. Therefore, the idea of ‘deserved’ or ‘justified’ trust is important. This is the type of trust for which businesses and governments should be aiming. Distinctively, this type of trust has a long-term horizon.

    Charlie Munger, the Vice Chairman of Berkshire Hathaway, in his 2007 commencement address to USC law students referred to deserved trust when he said, “To get what you want, deserve what you want. Trust, success, and admiration are earned.” He concluded, “in your own life what you want is a seamless web of deserved trust.”

    ‘Justified trust’ is a concept used by the OECD in relation to tax. In this context, justified trust builds and maintains community confidence that taxpayers are paying the right amount of tax. The important aspect of justified trust is the focus on both ‘building’ and ‘maintaining’, which suggests the process is ongoing, over the long term.

    In contrast to this concept, many businesses and leaders talk about ‘winning’ trust, with a short-term focus on their own gain (increased sales or a stock price rise, for example), rather than looking across the longer term and considering value to others.

    Bridging the gaps through a shared understanding

    The task ahead, to regulate AI in the public interest, is not easy but is essential to ensure a safe and well-functioning society. The process we are already seeing, where the risks and consequences of AI are discussed and debated, is important: it is ethics in motion, a concept I have written about previously. However, during this critical phase of policy development and implementation, we must ensure that trust is not just won by some but earned by all.
