This bonus episode of Directors on Digital shares the extended insights of Microsoft Australia’s national chief technology officer, Lee Hickin, with introduction and narration by Narelle Hooper, editor-in-chief, Company Director magazine.

This episode will deepen directors’ understanding of why good digital hygiene and data management are the engine of successful digital transformation. Lee Hickin outlines the importance of getting the basics of good data governance right ‒ what's vital to protect and how to protect it well. To wrap up, he also shares his favourite examples of where artificial intelligence is being used successfully and what responsible use of AI really requires.

Host: Narelle Hooper MAICD, Editor-in-Chief, Company Director magazine.

Guest: Lee Hickin, Chief Technology Officer, Microsoft Australia.

 

Listen and subscribe: Apple Podcasts | Spotify


Transcript

Narelle Hooper: Hello and welcome back to Directors on Digital, the podcast from Microsoft and Company Director magazine, where we sit down with Australia’s leading company directors who share their experiences and insights in driving digital transformation on their boards. 

I’m Narelle Hooper, the editor-in-chief of Company Director. This episode was recorded on the lands of the Wiradjuri people and the Gadigal people of the Eora Nation. We'd like to pay our respects to Elders past, present and emerging.

Now that you’ve listened to the six episodes of Directors on Digital, you’ll have heard the words “data swamp” and “data lake” mentioned more than once, or should that be data?

So, how do you stop being sucked into a data swamp? It needs to start with good data governance.

Lee Hickin, chief technology officer at Microsoft Australia, gave us an insider’s guide to getting the basics of good data governance right. He spoke about how to unlock its value ‒ and why keeping your data clean, and getting really clear about how you manage it, is at the core of successful digital transformation.

It’s not too different from some of the sanitising principles we’ve had to get very familiar with over the past 18 months or so. 

Now, you heard from Lee Hickin in Episode 4, but we only had time to share a small part of his conversation with you ‒ in this episode you can hear his insights in full.

And turtles might seem out of place ‒ but they’re not. We’ll also discover some of Lee’s favourite examples of where artificial intelligence [AI] is being used. There’s some seriously clever stuff around computers turning speech into code, and we’ll learn about the responsible use of AI and what that really means.

So, get your boots on and let’s wade in as Lee begins by exploring the connection between data and AI in the world today.

Lee Hickin:

Right now we are sort of in this accelerating phase of what we call the second era of AI. We've moved away from AI on old computers and we're moving into AI on computers designed for AI. And so they're accelerating at a rapid scale and data accessibility is growing at a huge rate. We're taking these evolutionary steps every four to six months now. The leaps and bounds are huge. Of course with all of that ‒ bigger data sets, more accurate models and smarter tools ‒ we are kind of getting to feel the sense of AI's impact on our real world. You know, we're seeing it impacting our lives, our normal activities and the things we do.

So, there's no question we're building these bigger data sets and more data sets and models and we're creating these great outcomes, but it's still very much what we talk about as being narrow AI. So, AI that's really, really big, but really, really good at doing one thing particularly well. So, for example, you know we have the GPT-3 model [Generative Pre-trained Transformer 3] in human language ‒ 175 billion parameters go into it and it's excellent at identifying different speech patterns and conversations. But it's no good at identifying two different cars, for example. So it's built for purpose. We're building better models, but we're a long way from that human-comparable experience. And I don't think we're going to see a robot uprising anytime soon.

I'd argue that it probably hasn't crept up on us. I mean it's really been just accelerating over the last 10 years. And so it's something that's been there, but I guess because of that speed of acceleration it feels like we're suddenly waking up every morning and there's a new capability or a new technology that's impacting our lives. So I don't think directors should be surprised or caught off guard here. What I would suggest is that directors need to be alert and start to learn more about the building blocks and maybe some of the bad practices that can creep into an organisation as a result of AI adoption.

By building blocks I mean good data governance. The basis of AI and these digital services is data. So the more data governance you have ‒ and by that I mean management of your data, curation of it, labelling of it, managing access control, building training and culture around a data-centric organisation ‒ the better. That's where directors should really be focusing their attention: on how to grasp that and build mechanisms around it.
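As a rough illustration of those building blocks ‒ cataloguing, labelling and access control ‒ here is a minimal, hypothetical Python sketch. The field names and roles are invented for illustration, not taken from any particular product.

```python
# Hypothetical sketch: every data set is catalogued, labelled and gated by an access check.
from dataclasses import dataclass, field

@dataclass
class DataSetRecord:
    name: str
    owner: str
    classification: str                         # e.g. "public", "internal", "sensitive"
    labels: list = field(default_factory=list)  # what it is, when and why it was captured
    allowed_roles: set = field(default_factory=set)

def can_access(record: DataSetRecord, role: str) -> bool:
    """Only roles explicitly granted access may read a catalogued data set."""
    return role in record.allowed_roles

customers = DataSetRecord(
    name="customer_contacts",
    owner="marketing",
    classification="sensitive",
    labels=["PII", "collected via web sign-up form, 2021"],
    allowed_roles={"marketing_analyst", "privacy_officer"},
)

print(can_access(customers, "marketing_analyst"))  # True
print(can_access(customers, "data_science"))       # False - access needs an explicit grant
```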

And then looking for those bad practices that can creep into any organisation. Data being moved between parts of the business without due process or consideration, which can create these uber data sets that we don't understand the power of. Building tools that your business can’t explain. If you've got something out there that nobody in your business really understands how it's operating, you probably want to take a look at that and think about it. And so it's that kind of broad oversight that directors really need to start taking some ownership of.

Lee Hickin:

The risks are hard to define at this stage, because it's such a broad spectrum of potential risks ‒ risks could be bad data, could be bad implementation, could be just not really understanding how your customers are going to work with it. So, as a guiding principle for me, when I go and talk to organisations about AI and data usage, I push them to look at anything they build through three lenses. The first is your customers. Do they get value from what you're building? Can you explain that value to a customer in plain and simple terms? Then, from your employees' point of view, does it add to their contribution to your business or does it replace it? How does it work with your business so you kind of grow together?

And then lastly, the thing that you deliver ‒ the service or the value that your business delivers ‒ does this add to it? Does it replace it, does it open up new markets, or does it kind of direct you in a different way? And if you look at it through those three lenses, then you start to get a sense of not just the risks, but the bigger picture.

Narelle Hooper:

At the foundation of all this is good cyber security. Microsoft itself was subject to an attack when IT provider SolarWinds was hacked in 2020. As Lee explains, we’re all vulnerable, not just Microsoft.

Lee Hickin:

It's about being alert, being aware and really understanding the risks. If you talk in any security circles, you'll kind of get these conversations around well, can you truly be safe from anything and everything all the time? No, of course you can't and anyone that's telling you the idea that this is a completely safe service is probably not telling you the truth.

Lee Hickin:

It's about kind of having all of the mechanisms in place to be watching and alert, and then mitigating when these things happen. And that's the real challenge with cybersecurity. It's not about: are you fully protected, are all your barriers up, have you put everything in place to ensure that you're secure? That's not going to work. But are you doing everything you can to be mindful, to be monitoring, and to be able to react quickly to any kind of cyber incident or any kind of data breach or data privacy incident?

Lee Hickin:

I mean simple little things ‒ are you not reusing passwords? Are you using two-factor authentication as much as possible? And so from a director's point of view you're not relinquishing responsibility, you're taking a shared responsibility. And I don't think it all lands on the director or directors of any organisation to kind of own this problem, but it's incumbent on them to build the mechanisms inside their organisations to align to the ways in which new technology is built and delivered. So having parts of their organisation that are capable of focusing on cyber, focusing on data protection and focusing on responsible or ethical usage. Those are probably the better markers for being prepared and being ready in any organisation.

Narelle Hooper:

What about the line between what's legal and what's ethical? How do directors know what's the right thing to do when it comes to using AI?

Lee Hickin:

There really aren't any laws or regulations or policies or any governance around these things yet. I mean we see pockets of it popping up around the world, and at Microsoft we've ourselves sort of enforced decisions around what we will and won't do with certain parts of AI, such as facial recognition. When you look at it from a director's point of view, when you're in that position, it's almost like a pub-test validation. You've got to listen to what the business is planning, how it's going to deliver it and what the outcome is, and kind of get that sense of "does this feel like we're doing it for us, are we doing it for the right reasons, have we got all the right governance in place?" and kind of pick your way through that.

Now, on a longer-term and more practical level, I'd probably recommend that directors don't wait for regulation to come. It's coming, absolutely no doubt about it. And I think that's a good thing. But the best way for most organisations to do this is to think about the principles of what they want their brand or business to stand for. You know, the things that matter to their audiences, to their consumers and their customers ‒ and that matter to the employees of the organisation. Think a bit about that, then do some work to understand what the risks are.

So this is the job of directors, and particularly boards, today: to really understand the risks of bad data or bad AI around inclusion, transparency, accountability ‒ owning the problem, so to speak. And then of course privacy.

And then share these broadly and widely, be very vocal, be very visible, be very clearly seen to be taking action on these points, because there is no black and white line between what's right and wrong. It's a bit of a grey area; it's going to vary by the industry you're in, the thing you're trying to do and the trust you already have with the market as to how you do it. But it's incumbent on directors, I think, to at least understand where those risks lie. Not to define the boundaries, but to know what the risks might look like and which parts of society might see this as a good or bad approach.

Narelle Hooper:

So how do directors best navigate the risk around privacy and using customer data, collecting it, and storing it?

Lee Hickin:

With privacy, there are some things that we have to do depending on the size of the organisation you're talking about. The Australian Privacy Act 1988 sets the rules you have to play by. And recent laws around mandatory disclosure obviously have some impact as well that many organisations need to be aware of. So the first thing, of course, is to understand the legal constraints and the obligations on your organisation, depending on its size. But then if we put it more in the context of, on the ground, what do you do as an organisation as you're thinking about data? There are two things that I think directors and boards need to think about: how you collect data and then how you store that data.

And so to kind of break that apart into more detail, if you're collecting data there are some core principles you need to think about. Collect only what you need. You need to be very clear about what you're building as a result of this, but collect only the data you need. Be clear to your users about what data is going to be collected and how. This is the one that gets missed so often by so many organisations. They collect the data, but they don't provide the disclosure and the value proposition to the customer ‒ why are they collecting it? What's it going to mean to them? And then when you've got that data, you need to classify it. And classifying data means labelling data, so you know what the data is, when it was captured and what it was captured for.

But of course, if you don't need personal information, don't collect it. It just puts an obligation on you to have a lot more rigour and control around it. And you need to understand that responsibility, particularly for PII ‒ personally identifiable information ‒ and financial data. And then there's storing the data. Once you've collected it and you're storing it, it's about processes to ensure that only the right people have access to the data. Think about the risks of exposing data to broader parts of your organisation if you don't have the controls in place. Think about the mechanisms you have at hand to ensure that data can't be taken from one part of your business, placed in another part of the business and merged to create something that was never foreseen in the first place.
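To make "collect only what you need" concrete, here is a small, hypothetical Python sketch. The allow-list, field names and stated purpose are invented for illustration only.

```python
# Hypothetical sketch: incoming form data is filtered against an explicit allow-list,
# and anything personally identifying is tagged so extra handling obligations follow it.
ALLOWED_FIELDS = {"email", "postcode", "plan_type"}   # assumed business need
PII_FIELDS = {"email"}                                # personally identifiable

def collect(raw_submission: dict) -> dict:
    record = {k: v for k, v in raw_submission.items() if k in ALLOWED_FIELDS}
    record["_pii_fields"] = sorted(PII_FIELDS & record.keys())
    record["_purpose"] = "plan renewal reminders"     # the purpose disclosed to the user
    return record

submission = {"email": "jo@example.com", "postcode": "2000",
              "plan_type": "annual", "date_of_birth": "1980-01-01"}
print(collect(submission))   # date_of_birth is never stored - it was not needed
```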

And we think a lot about this idea of APIs [application programming interfaces] being there. Business owners and directors need to start understanding what those things mean and how they apply to their business. APIs are the future economy of digital services. And as long as you clearly define how long you're keeping data for and why you're keeping it, and give people the tools to be removed from that data should they wish to under GDPR-type rules ‒ if you've got those two areas covered, collecting and storing, I think that's the primary area where directors need to think about this.
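A minimal sketch of those two promises ‒ a stated retention period and a way for people to be removed on request ‒ might look like this; the store and the retention period are hypothetical.

```python
# Hypothetical sketch: purge records past the stated retention period, and honour
# GDPR-style "right to erasure" requests.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=365)      # the retention period disclosed to users
_store = {}                          # user_id -> {"collected_at": datetime, ...}

def purge_expired(now=None):
    """Delete anything held longer than the stated retention period."""
    now = now or datetime.now(timezone.utc)
    for uid in [u for u, rec in _store.items() if now - rec["collected_at"] > RETENTION]:
        del _store[uid]

def handle_erasure_request(user_id):
    """Remove the person's record if we hold one; returns True if something was erased."""
    return _store.pop(user_id, None) is not None
```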

Narelle Hooper:

Jargon alert. GDPR stands for the General Data Protection Regulation ‒ laws in Europe which require businesses to protect the personal data and privacy of EU citizens. The point is it catches organisations more broadly ‒ organisations that do business with or in EU member states.

It’s likely you’ve heard the term “dark data” used recently ‒ but what is it really? What if you can’t see the data that you’ve got? Lee explains how to search for this hidden treasure and the tools you’ll need to uncover it.

Lee Hicken:

It's this idea of data that's in your business. It's intrinsically there, but you can't see it, or more to the point you can't unlock the value of it. We've got it all filed away but we don't know what it is. We don't know how to use it, we can't tap into the value of it. And this is the intrinsic problem we have today: so much data, so little value being derived from it. And the opportunity is huge. The potential is massive for organisations that can get their heads around dark data.

And dark data doesn't stay dark data. It can become whatever the opposite of dark data is, you know ‒ lit-up, value-creating data. It can be used, but you've got to sweep almost every corner of your business to understand it. You've got to classify it, understand it, label it and then put the controls around it ‒ APIs ‒ to make sure it can be accessed by your business. It's not something that is locked away.

We can get value back from dark data, but it just takes effort and work.

It isn't a massive task. It's a detailed task. You've got to do it right; you can't shortcut a lot of these journeys. You've really got to take the time. The beauty of this today is that there are many capabilities and tools out there on the market that help you sift through that data.

And I'll give an example because I'm here from Microsoft: we have a tool called Purview, which is about looking for where data is in your organisation, labelling it, classifying it, and understanding enough about the data set to expose it to the business in a controlled manner. There are tools to help you do it. It is a process and it's something you can't ignore because of the value that it offers, but there are tools to help you on that journey. And that's probably where you should start looking ‒ at how this technology can help you unlock that dark data.
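Purview is the product Lee names; purely as a much-simplified illustration of what a discovery sweep does (not Purview's actual interface), a sketch might walk a shared drive and label files that look like they contain personal data.

```python
# Hypothetical sketch of a "dark data" sweep: walk a folder, flag likely personal data
# with a couple of regexes, and build a small catalogue of what was found where.
import re
from pathlib import Path

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone_au_mobile": re.compile(r"\b(?:\+61|0)4\d{8}\b"),
}

def sweep(root):
    catalogue = {}
    for path in Path(root).rglob("*.csv"):          # extend to other file types as needed
        text = path.read_text(errors="ignore")
        labels = [name for name, pattern in PATTERNS.items() if pattern.search(text)]
        catalogue[str(path)] = labels or ["no known identifiers"]
    return catalogue

# Example (assumes a ./shared_drive folder exists):
# for path, labels in sweep("./shared_drive").items():
#     print(path, labels)
```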

Narelle Hooper:

The thing we’re all trying to get to grips with is that digital technology is moving really fast ‒ so what’s it going to be like in five to ten years? Lee says the opportunities are plentiful if you don’t get put off by the risks ‒ but you’ve also got to be aware of them.

Lee Hickin:

Well, if I had a crystal ball and could look into the future I'd probably be a much richer man than I am today. What I'm seeing is quite significant growth in investment in both the technology ‒ so we're building better models, building more technology and making it more available ‒ and in responsible processes and ethics committees in organisations and peers of ours. But of course because I'm in that world, it's probably something of a bubble. Over the next 10 years, what is absolutely going to happen is that AI and the digital technologies that underpin it are going to become even more of the core of every business in every sector. I have the privilege of occasionally having a conversation with Professor Genevieve Bell.

And we've had this conversation about how is AI going to grow? Is it just going to grow on its own? And the answer is no, it's not just going to grow. It needs a purpose. It needs to grow in the context of its surroundings ‒ the people, the processes, the industry and the outcomes. So we need Australian businesses, Australian industry, to recognise that opportunity, embrace the potential for what AI and data and digital can do, and adopt it and take some ‒ I won't say take chances, but look at the opportunity. Don't get encumbered by the risks or the unknown risks. Look at the potential of what you could do. We're seeing it already in retail, mining, resources and finance industries. I would love to see in the next ten years a real advancement in government digital platforms to enable us as citizens to move through our lives in a more seamless way.

I hope that we continue to build digital services at the pace we have done over the last 18 months during the pandemic. And if we do that and we continue to ask some core questions ‒ what is this AI going to do for people? Where did the data come from? Does this tool help us achieve more? Are we going to explore the risks of harm? Are we building a future where this AI and this data is going to enhance someone's life in some way? ‒ if we think about it from that lens, then, you know, the hope is that we can create opportunities for Australian businesses to grow and compete on the global stage.

Narelle Hooper:

Microsoft's had a lot to say about responsible AI and the building blocks of its responsible AI program. So what is it and how does it work? Lee describes what's required.

Lee Hickin:

Microsoft began with this idea of thinking about the importance of responsibility in the technology, and that was born out of a challenge we knew: as we're building technology that's going to impact people's lives, we have to take some responsibility for how that technology is going to be used. And that's not about policing the system, but it's about guiding it and providing mechanisms to ensure we do the right thing and we give our customers the best guidance on how they should do it. So it started with this idea of what we call the Aether Committee ‒ an AI, ethics and effects in engineering and research committee.

This body of people ‒ a diverse group of people ‒ would just talk about these AI challenges and walk and talk ourselves through what's at risk and what the challenges are. We quickly realised that we needed to build a framework around that ‒ a standard process to apply each time we see these things pop up. Because, in a lot of organisations today, many of the directors listening to this would have ethics or principles for their business ‒ but ethics and principles are personal. You apply them in the way that you see the world. So, you need a standard that makes sure that every time you or a customer is going to be impacted by something you build, you get the same experience. Because standardisation brings consistency and consistency brings trust. That was our principle, so we created this standard process and then we distributed it out across the globe.

So people like myself, who are the responsible AI leads for Australia, take responsibility for that process of assessment, that governance process. Do we ask the right questions? Do we ensure that we are attuned to and aware of the sensitive uses that AI may have? You know, is it going to cause someone harm? If we applied it in a certain way, could it deny someone access to a service? And we'd walk and talk through these challenges, but it's a governance process, it's not a technology process. We talk and we think and we work through the issues, and what we do is provide guidance ‒ not rules, not "thou shalt" or "thou shalt not". But guidance: if you're going to build this, think about this issue, think about inclusivity, think about people's experiences with that technology.

And so we built that process, and today those are building blocks. What we aim to do, and what we continue to do, is turn those building blocks into capabilities we can put to the market and give to our customers so they can build responsible AI processes into their businesses as well.

The first thing is, you need leadership from the top. This is not a process that can just be popped into a little corner of the business like special projects work. It needs to be led by leadership. It needs to have the head of the business, the top of the business, on board, invested and understanding the real implications. So this comes back to that issue of directors and boards taking some time to learn and understand it. And one of the things I really like is this idea that boards now need a technical consultative arm ‒ somebody on the board who understands technology and can meld it with the business, its goals and the overall outcomes of the business. So certainly leadership from the top, and then create a clear set of standards or principles.

Be very explicit about what you're going to do, why you're going to do it and what's important to you as a business. Make sure the entire business knows them, is empowered to act on them and live that way, and be vocal about it. And the most important thing is don't just tell the market you're going to do this thing ‒ do it. Demonstrate some things to market that show that you have these principles, these standards, and you're applying them. Tell stories, tell human stories, tell stories that people can actually understand ‒ not some corporate story about how we've changed our business, but what it meant to people, organisations, customers, communities, that kind of thing.

Narelle Hooper:

So where’s the best place to start? 

Lee Hickin:

The best place to start is perhaps to think about your customers. Think about what it is that your customers need from you that you can't deliver today. What are the things that your customers are asking for that you're struggling to deliver on? Solve a problem. Don't just create a mechanism to say, "Hey, look, we're building some mechanisms in the business to be more responsible" ‒ look at what your market needs, look at where the competitive pressures are if you're in any economically competitive market.

But if you've got customers that are asking you for better capabilities, better insights, better personalisation, then take that problem, solve it and work your way back from that. I always like the idea that if you've got something that people need, that's probably the best place to start. That'll lead to conversations around how do we do data? How do we do governance? Do we have the capability to be agile and deliver AI services? And this will create more work streams, but you're doing it with purpose ‒ not just because it's the thing to do today.

Narelle Hooper:

Which all brings us to turtles, and turtle surveillance. It's one of Lee Hickin’s favourite examples of AI at work in Australia.

Lee Hickin:

I'll share a couple of good examples with you. They're both ones that are close to my heart. The first one is just the speed of what we're seeing in the language recognition models in AI. Ten years ago we had AI tools that were just capable of understanding a very basic conversation ‒ you might have a chatbot-type thing. And we've seen that evolve. Already here in Australia we've seen a lot of our own organisations ‒ government organisations like Services Australia, and companies ‒ using chatbots to accelerate that process. But just recently Microsoft released a set of tools that help organisations code through speech.

So think about that for a second. One of the most complex things that we do in the IT sector, or in technology, is write code. It's a very detailed and complex thing to do. You can now talk to a computer and say, "I would like to write code that categorises all of these users by their age, gender, and location." That's a bad example because it's very irresponsible, but you get my point. You could do that and the computer will write the code for you. It will translate what you said and turn it into a piece of code you can drop into your program. So that's just a huge acceleration of the idea that language can go from a simple conversation to actually creating complex logic that a lot of people struggle with today. So that's the kind of big-picture stuff.
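That tool's output isn't reproduced here, but as a hand-written sketch of the sort of code such an assistant might emit for a plainer version of that request (grouping by location only, given the caveat Lee raises), it could look like this:

```python
# Hand-written illustration, not generated output: group users by location.
from collections import defaultdict

def categorise_users_by_location(users):
    groups = defaultdict(list)
    for user in users:
        groups[user.get("location", "unknown")].append(user)
    return dict(groups)

users = [{"name": "A", "location": "NSW"},
         {"name": "B", "location": "VIC"},
         {"name": "C", "location": "NSW"}]
print(categorise_users_by_location(users))   # {'NSW': [...], 'VIC': [...]}
```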

But the one that I'm really proud of, and something I've been deeply engaged in over the last 12 months, is a project called “Healthy Country”, which we spearheaded with CSIRO but which has grown beyond that. It's this idea of how we use AI as a tool to identify things in the wild. Whether we're identifying turtles on a beach in Cape York, magpie geese in Kakadu or fish in Darling Harbour, we've created a set of models that let us identify and classify the biodiversity of Australia ‒ the unique creatures and animals we have ‒ with the intention of better understanding them, better understanding how they live, connecting the Indigenous knowledge of how these things have lived for thousands of years, and then applying the science to make sure they continue to live and we understand what their ecologies are like right now.

And that Healthy Country model ‒ it's just vision AI. It's the same tools we use when you take a picture with the camera on your phone and it tells you what it sees in the picture. The same logic, but we've built a model that identifies unique animals, flora and fauna in Australia. And it's now being used not just here in Australia ‒ we're seeing it used around the world. And it's just that acceleration, and it's happening because of the acceleration in the amount of data: we can do things like crowdsourcing. We can have people take pictures of animals in the wild, feed them into that model and learn a bit more about a particular species. So those are, for me, some really passionately important examples of how AI is improving the world, not just replacing some of the things that we think of as human endeavours.
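The Healthy Country model itself isn't public in a form that can be reproduced here, but as a rough illustration of the underlying idea ‒ "it's just vision AI" ‒ a stock ImageNet classifier can be pointed at a field photo in a few lines. The file name is hypothetical and this uses a generic pre-trained model, not the CSIRO one.

```python
# Hypothetical sketch: classify a wildlife photo with a stock pre-trained model.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()

image = Image.open("turtle_on_beach.jpg")              # hypothetical field photo
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))

top = logits.softmax(dim=1).topk(3)
for score, idx in zip(top.values[0], top.indices[0]):
    print(f"{weights.meta['categories'][int(idx)]}: {score.item():.2f}")
```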

Narelle Hooper:

I hope you enjoyed the full conversation with Lee Hickin, chief technology officer of Microsoft Australia. I’m Narelle Hooper, editor-in-chief of Company Director, the AICD’s member magazine. 

Check out all six episodes of Directors on Digital on your favourite podcast platform, and for updates and director guides go to: aicd.com.au/directorsondigital