Data & privacy governance
Questions for directors
- Is the board regularly briefed on the risks to the organisation associated with data handling?
- Is data and privacy a regular item on the agenda and addressed in a structured manner?
- Does the board have a clear view on major investments in data projects from a risk and return perspective?
- Does the board obtain regular progress reports on major data projects?
- Does the board obtain independent assurance on the achievement of data objectives and the containment of data risks?
More information is available in the Director Tool from the AICD and the Australian Information Security Association.
Podcast: Episode Four
In this episode, Alan Kohler AM explores the world of data with Yasmin Allen, Tim Reed and Lee Hickin. The panel looks at security and privacy concerns, AI algorithm bias, and how boards should use data responsibly to unlock value.
For organisations leveraging AI — essentially when machines perform tasks that typically require human intelligence — the value of data can be immense. It provides incredible opportunities to better meet customer needs and identify where business value lies. The term “data liquidity” has emerged to describe both the volume of data and the way it is being used by companies.
“You don’t actually sell your data, you sell copies of it,” says Yasmin Allen, chair of Digital Skills Organisation and a director of ASX, Santos and Cochlear. “Data is infinite... If you get it right, it’s potentially a huge revenue driver.”
The case for data governance
However, many organisations are overwhelmed by the sheer volume of data they possess and are failing to unlock its potential and meet their ethical responsibilities in its use, collection and storage.
Directors should consider developing data governance rules, says Lee Hickin, chief technology officer for Microsoft Australia. This is sometimes a legal imperative, but always a moral one, as Australia currently lacks legislation around accountability in AI-informed decision-making. This is despite recommendations by the Human Rights Commission in the wake of the failed “Robodebt” scheme — an automated welfare debt assessment and recovery process (formerly Online Compliance Intervention) in place between July 2015 and November 2019, which used data-matching in an attempt to identify overpayments of social security benefits. In June 2021, the Federal Court ruled that the raising of $1.73b in Centrelink debts from 433,000 people, in the absence of human oversight, was unlawful. The process was labelled an “AI ethics disaster” (The Age, 29 June) and compensation was awarded to more than 400,000 members of a class action. In approving a settlement worth at least $1.8b, Justice Bernard Murphy called it a “massive failure in public administration”.
“I’d probably recommend that directors don’t wait for regulation to come,” says Hickin. “It’s incumbent on directors not to define the boundaries, but to at least understand what the risks might look like around data and inclusion, transparency, accountability and privacy.”
Tim Reed, president of the Business Council of Australia, agrees. He tells Directors on Digital that directors have a responsibility “to get their head around the way in which data is informing, enabling and making vulnerable the businesses they govern”.
A good starting point is to create a business plan focused on a particular set of data that is valuable to customers, along with the revenue model it could support. A targeted focus is important — it is all too easy to waste time attempting to analyse a miscellaneous collection of data that is inaccessible or low-value.
“If you start with a data lake, it can end up being a data swamp,” says Allen. “You can spend three years collecting data and making sure it’s right, but if you don’t have a plan to use it and monetise it, what is the point?”
Reed, also a former CEO of accounting software company MYOB, tells Directors on Digital how he led the company through a fundamental reappraisal of the way it approached data.
“At MYOB, we looked at three things we thought we could do with our data — lower our cost, improve our service, and develop new products,” he says. “It was a very simple framework. Once we got some data and started to pull it together, we realised we weren’t managing it proactively — and that was millions of dollars a year as a cost line for us.”
Directors and responsible use
There is undoubtedly a certain amount of hype, uncertainty, and fear around the use and misuse of AI, acknowledges Hickin, who leads Microsoft’s Responsible AI program. However, when its powers are properly harnessed, it can serve as a force for good in tackling global challenges. For example, in June 2020, Microsoft partnered with the CSIRO to stem illegal fishing and combat plastic waste in Australia’s Great Barrier Reef and Indonesia. The partnership follows on from initiatives such as the Healthy Country Project, which is transforming the management of Kakadu National Park using AI and machine learning to analyse information gathered from high-resolution cameras and underwater microphones.
“As we’re building technology that’s going to impact people’s lives, we have to take some responsibility for how that technology is going to be used,” says Hickin. “It’s not about policing the system, but about providing mechanisms to ensure we build shared responsibility approaches with our customers.”