John Green | Culture

What do motherhood and Enron have to do with the Royal Commission into Misconduct in the Banking, Superannuation and Financial Services Industry? Everything.

The mother of all employee questions is simple: would I want to treat my mother the way my boss wants me to treat this customer?

If the answer is “no” — and assuming you like your mother (that’s a whole other avenue we could go down) — the treatment is likely to be suspect and should be questioned, challenged or stopped. The “mother” question can be asked in many other ways. The Australian Prudential Regulation Authority (APRA) report into the Commonwealth Bank of Australia (CBA) put it as: “we can do this, but should we?”

Unsurprisingly, every director I’ve spoken to about this agrees. But what do we do about it? We don’t work on the shop floor, we’re all part-time, so how do we know if asking the “mother” question is a routine part of a company’s culture?

How do we know that even if people do ask the question, the right steps are taken if they get the wrong answer? Satisfaction surveys and complaints data are two methods. One problem is that they are lagging indicators, so the bad stuff has already happened. Worse, they can snow us with an avalanche of data that buries what’s important.

APRA’s CBA report pointed out that these kinds of data can encourage us to focus on a company’s aggregate success rather than its outliers. If, say, 85 per cent of your customers are satisfied and it’s better than last year, human nature will see you patting yourself on the back. But why, asks APRA, aren’t we digging deeper into finding out why the other 15 per cent are unhappy?

This is where mixing the collapse of energy, commodities and services company Enron in 2001 with a dose of artificial intelligence (AI) can give us a useful answer. Super-smart people are working on AI all over the world, but I’m going to tell you a true story about some geniuses in a Rotterdam startup called KeenCorp.

KeenCorp developed an AI system that uses psycholinguistics to examine employee engagement in real time. The system analyses internal communications — not what employees are saying, but how they are saying it. It’s the scanning equivalent of analysing body language. First, the software is run against historic email traffic, allowing it to create a baseline. Objectivity is achieved by determining what the company’s normal “tone of voice” is, then tracking the pattern changes in the language to determine the collective mindset. The employee data is anonymised and complies with the strict privacy regulations in both the US and the European Union.
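The underlying idea — score the tone of each message, establish a baseline from historic traffic, then flag periods that drift sharply from it — can be sketched in a few lines. This is a deliberately crude illustration, not KeenCorp’s actual method: the marker lexicon, the scoring function and the two-standard-deviation threshold below are all hypothetical stand-ins for a far richer psycholinguistic model.

```python
import math

# Hypothetical "tension" markers; a real psycholinguistic lexicon is far richer.
TENSION_MARKERS = {"unfortunately", "concern", "must", "cannot", "risk", "however"}

def tension_score(text: str) -> float:
    """Fraction of words that are tension markers: a crude proxy for *how* something is said."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in TENSION_MARKERS)
    return hits / len(words)

def baseline(messages):
    """Mean and standard deviation of tension scores over historic email traffic."""
    scores = [tension_score(m) for m in messages]
    mean = sum(scores) / len(scores)
    var = sum((s - mean) ** 2 for s in scores) / len(scores)
    return mean, math.sqrt(var)

def red_flag(new_messages, mean, std, threshold=2.0):
    """Flag a period whose average tone drifts more than `threshold` std devs from baseline."""
    score = sum(tension_score(m) for m in new_messages) / len(new_messages)
    if std == 0:
        return score != mean
    return abs(score - mean) / std > threshold
```

Run against a calm historic baseline, a sudden cluster of tense messages would trip the flag while ordinary traffic would not — the point being that the signal comes from deviation against the company’s own norm, not from any absolute measure of tone.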

To test the AI, KeenCorp researchers applied it against the years of publicly available emails from Enron’s top 150 employees. What they wanted to know was whether the AI — if it had been available back in the 1990s — could have provided Enron’s board with an early warning, a red flag, ahead of the devastating bankruptcy.

What they found was very material: a dramatic plunge in employee engagement. But it was problematic for three reasons. First, the plunge occurred in June 1999, more than two years before the collapse. Second, they didn’t know what caused it. Lastly, it seemed counterintuitive, because it happened at the same time as Enron’s share price and market capitalisation were skyrocketing, when the company was being feted as among the best in the world — the now infamous “smartest guys in the room”.

Worried their AI was flawed, they contacted the very architect of Enron’s collapse, former CFO Andrew Fastow. Once reviled as the most hated man in America, Fastow had just been released from prison after serving six years.

Still paying penance, Fastow was giving talks — for free — to business schools and director roundtables, explaining precisely why he deserved to go to prison, and warning that unless they changed some of the ways business was still being done today, they might follow him there.

Two years ago, I was invited to one of Fastow’s talks, and it was one of the best sessions on corporate governance I’ve ever attended. Not only did Fastow make no excuses for himself, he plainly took responsibility for what he’d done being wrong. Now out of jail, he was on a public service mission, like a canary in the coalmine, pointing out how many directors and companies are unwittingly making the same mistakes he did.

While Fastow doesn’t use the “mother” question — or APRA’s “can we/should we” model — his test is similar. He focuses on rules versus principles: as Enron’s CFO, the only question he thought mattered was “Is it legal?”, rather than “Is it right?”

Fastow told us that at Enron, if the auditors, lawyers, bankers and board members all signed off on a deal as “legal”, then that was as fine by CFO Fastow as it was by all of them.

Enron’s big problem was that neither Fastow nor any of those gatekeepers asked: “is it right?” Their only real concern was to make sure that it complied with the rules. It didn’t matter that the rules were dumb. Loopholing was “smart”. But then the guys from Rotterdam contacted Fastow. And it took him a nanosecond to realise that they had something.

What their AI had pinpointed was the precise day — 28 June 1999 — when Fastow had actually persuaded Enron’s board to accept the dodgy off-balance-sheet partnerships that led to the death of the company.

Their AI was red-flagging how appalled the top 150 people in the company were at the board’s decision, many silently screaming, “How the hell could the board have fallen for Fastow’s flim-flam accounting?”

Yet, while these employees knew the board was making the wrong decision, none of them raised their hands. Not one of them spoke up. The magic of the AI was that it was picking up — effectively in real time — what had been left unsaid; what was felt.

Imagine if this kind of AI had actually been available back in 1999. Imagine that it’s you sitting at the Enron board table, presented with the chart that shows the plunge in employee engagement caused by your last board decision. You ask, “What’s going on?” You’re told, “The employees are really unhappy the board agreed to those dodgy partnership deals”. There’d be a pretty good chance you’d have wanted to unwind those deals. If you did, you would’ve saved Enron.

What if it’s today and our Australian companies started seeing this kind of internal early warning data? It just might give us the mother of all answers.