
    As AI becomes more advanced and its use more integral to business, directors are discussing the ethics of machines making decisions on behalf of humans.


    Sales assistant Rosie, inquiry assistant Maggie, and knowledge engine Libby all have two things in common. The first is their familiar, feminine names; the second is that they are artificial intelligence — software designed to help customers with queries and purchases.

    “Research shows when a robot is personalised and has the name of a young woman, people are less fearful of that robot,” says organisational behaviour specialist Dr Catriona Wallace MAICD, founder and CEO of ASX-listed Flamingo AI, which developed Rosie, Maggie and Libby.

    Wallace says it is crucial that governments, business and the public gain a greater understanding of what exactly AI is, how it works, and the transformation it will bring to the world, so that it can be used ethically. She believes it is fine to personalise robots to make them more acceptable to the public, but doesn’t believe they should — or could — be given real human emotions.

    “We should remember robots are just software,” she says. “We should not be trying to give them a heart nor should we expect them to gain consciousness.”

    That said, she concedes AI robots can be coded to make decisions for the good of humanity and not its detriment, which raises its own ethical issues.

    The phrase “artificial intelligence” was coined in the 1950s to describe the idea of machines or robots mimicking human behaviour. Now, robots are being built for many purposes — from helping online customers to standing in for humans in dangerous situations. They also have more contentious uses that demand careful ethical consideration.

    When a robot is personalised and has the name of a young woman, people are less fearful of that robot.

    Dr Catriona Wallace

    AI is just software and technically has nothing to do with ethics, but Wallace believes there should be a strong ethical framework to guide the way it is developed and used. She backs the recent work of the CSIRO in helping to frame a code of ethics for AI developers. “For example, in the future, autonomous cars, which we already have, will be programmed to make decisions about whether to hit object A or object B,” she says. “The same with automatic weapons programmed to strike person A or B, or the emergency service bot, which may be programmed to determine whether it should firstly rescue a child or an adult, a man or a woman.”

    Wallace says AI already has inherent bias because the coders training the machines are not representative of the world’s diversity. “Amazon had an example recently with a recruiting tool that was biased towards recruiting men over women. We need to realise the bias is already there and do something about it quickly.”

    She believes AI is moving so quickly that autonomous cars will be commonplace in 10 years. “We will wonder why we ever let 17-year-olds drive around in killing machines.”

    However, Wallace says it is crucial that ethical frameworks be established for automatic weapons used in conflict situations such as war.

    “There needs to be a much greater effort put into ethics in the domain itself. AI will mean it is easier and faster than ever before to kill people. The Russians already have a highly trained robot with a weapon [called] Fedor. We need to have coding of AI that is done in a fair and balanced way — one that is reflective of humanity.”

    This article first appeared in Matrix, the magazine of the Ethics Alliance.
