
    Boards will need to pick their way carefully through a complex ethical minefield


    Picture this: a large retailer introduces artificial intelligence (AI) to reduce store theft. The algorithm analyses facial expressions and other gestures to spot potential shoplifters. But it overwhelmingly identifies black people, leading to allegations of discrimination.

    A charity engages an e-marketing company to analyse millions of social-media conversations and identify potential supporters of its cause. With pinpoint accuracy, the AI targets consumers for donations and creates tailored marketing offers based on their social-media profile. The move sparks allegations of unethical marketing and damages the charity’s brand.

    Meanwhile, a bank replaces more of its mortgage brokers with AI that assesses loan applications. Over time, the technology is shown to have discriminated against applicants from racial groups with a statistically higher probability of loan default.

    At the same time, a manufacturing company suffers from divisions between its human and robotic workforce. Tired of losing jobs to machines, a human worker strikes a robot that is programmed to think and feel. Critics argue the company allowed discrimination against the robot.

    These are just a few of many ethical issues that could confront organisations – and their boards – as AI takes off. In fact, governance of AI, and the “big data” on which it is based, will become one of the great board issues in the next 10 years.

    Ethical issues on AI are here now. The World Economic Forum last year highlighted the risk of AI bias and its effect on corporate ethics. Some machines make mistakes as they learn and cannot always be trusted to be fair or neutral. Others are programmed by humans who might unconsciously inject their biases and judgements into the technology.

    1. Governance conversation on AI and ethics needed

    As boards grapple with the impact of AI on their organisations, few directors are likely to have considered the ethical issues associated with this trend. These issues extend well beyond customer privacy and the ethical use of data.

    A lack of governance debate on AI ethics is not surprising. Business, generally, is grappling with the implications of AI for customer behaviour, business models and industry structures. Ensuring ethical safeguards for AI seems a low governance priority.

    Moreover, much AI change is still years away. Talk about the ethical treatment of robotics, for example, creates catchy headlines. However, the vast majority of robotics in the workplace will do routine tasks that have fewer ethical considerations for now.

    Boards might view AI ethics as mostly an issue for giant technology companies. Google, Facebook, Apple, Amazon and Microsoft are part of the Partnership on AI, a not-for-profit formed last year to advance public understanding and best practices in AI. The partnership is considering issues such as ethics, fairness and inclusiveness in AI.

    Nevertheless, as more Australian companies incorporate AI into strategy, boards will need to ensure the technology is supported by appropriate ethical behaviour. Changes in AI – and its impact on governance – could happen faster than many boards realise.

    AI will take organisations in new directions. Banks, for example, will partly become data companies. What are the boundaries for banks to sell aggregated, real-time data based on customer purchases? The electronic footprint from one’s daily purchases – where, when and how much was spent – is powerful, potentially intrusive data.

    Telecommunications companies could become geolocation data providers. Thanks to smartphones, it’s easier for telco providers to know where customers are, 24/7. Could telcos one day sell this data to retailers that offer real-time deals to nearby customers?

    Then there are energy companies that use smart-monitoring systems to identify which home appliances are used. As the Internet of Things grows, and billions of devices are connected online, the data could recreate how people spend their time at home: when they cook, watch TV, and wash their clothes. Such data could potentially invade home privacy.

    That is not to suggest companies in these or other sectors will use AI for such purposes. But the above examples highlight the possibilities and the need for boards to ensure well-considered ethics policies that cover AI are in place.

    2. Strategies to govern AI and ethics

    The starting point is board composition. Does the board have sufficient understanding of AI and its impact on strategy and organisational culture? Nobody expects boards to recruit AI specialists, but all directors should be able to consider AI in the context of their organisation, its strategy, values and ethics policy.

    The next step is environmental scanning on AI. Is the board receiving sufficient information on the impact of AI on the organisation and its industry? Is the board exposed to the latest thinking on how AI will affect ethical considerations? Would directors benefit from specialist training on ethical frameworks as they apply to AI-related decisions?

    Boards should also ensure there are appropriate structures to consider ethical issues around AI in the workplace. Specialist committees are overkill, but a working group of a few directors and executives who discuss potential ethical considerations from the introduction of AI in the organisation – and report back to the main board – could add value to organisations grappling with this issue.

    Ensuring ethics policies are up to date and relevant for the new world of AI is vital. Directors need to know there is a clear policy to guide the organisation's thinking on ethical principles around AI. How far will the organisation go with AI? When are the organisation's ethics compromised?

    Boards must also receive regular information on how the introduction of AI affects the organisation’s human workforce and customer base. How are staff and clients responding to the introduction of AI and are any risks and ethical considerations emerging?

    These are just a few steps boards can take to spark discussions on the ethics of AI in the workplace and take a more structured approach to this topic.

    Not all organisations, of course, are affected by AI and smaller ones might have little need to consider the ethics relating to it. Also, it is unfair to paint AI as only a risk: used well, it can enhance corporate ethics by improving fairness and inclusiveness.

    But the development of AI has profound ethical implications across industries. It’s a conversation boards would do well to kickstart, given the potential for AI developments to overtake the governance community, such is the speed of change.
