AI has the power to supercharge Australia’s productivity, but its widespread adoption is not without risks, the ASIC Annual Forum today heard.
Advances in AI are increasingly driving predictions, decision-making and recommendations across many organisations, including in financial services and markets.
Nick Abrahams, Futurist and Global Leader Digital Transformation, said while many organisations were reluctant to greenlight the use of generative AI systems such as ChatGPT, the technology was seeping in through third-party vendors.
“For most of us using Microsoft applications, you’ll have a ChatGPT-style capability within them within a year. Imagine waking up and seeing your inbox in the morning and it’s already read and drafted responses in your voice, because it’s read every email you’ve ever written,” Mr Abrahams said.
“From a productivity point of view, two credible university studies have shown the impact of generative AI on knowledge workers is between 35 and 50%. It’s massive. When steam was introduced during the Industrial Revolution, the increase was only 25%, so we are at an inflection point.”
Josh Shipman, Co-Founder and Co-Chief Executive Officer of Australian software company Elula, said this widespread implementation could have benefits for consumers and businesses alike.
His company works with more than 20 banks to provide a machine learning product that predicts when customers are likely to refinance and what conversation could make them most likely to stay.
“This is kind of like nirvana for the frontline of the bank,” Mr Shipman said.
“The banks are getting tremendous value out of what we do as are their customers.”
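To illustrate the general idea behind this kind of product, the sketch below trains a simple propensity model that scores which customers look most likely to refinance. It is not Elula’s product or methodology; the features, data and model choice are all hypothetical and purely illustrative.

```python
# Illustrative sketch only: a minimal refinance-propensity model of the general kind
# described above. NOT Elula's product; all feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical customer features a bank might hold.
features = np.column_stack([
    rng.uniform(2.0, 7.0, n),   # current interest rate (%)
    rng.uniform(0, 25, n),      # years remaining on the loan
    rng.integers(0, 2, n),      # recently contacted by a broker (0/1)
    rng.uniform(0, 1, n),       # loan-to-value ratio
])

# Synthetic label: refinancing made more likely by a high rate and broker contact.
logits = 0.8 * (features[:, 0] - 4.5) + 1.2 * features[:, 2] - 1.0
labels = rng.random(n) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(features, labels, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Score held-out customers: a higher probability means a higher predicted refinance
# risk, which could be used to prioritise retention conversations.
probs = model.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, probs), 3))
print("Top 5 at-risk scores:", np.sort(probs)[-5:].round(3))
```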
However, as ASIC’s recent action against IAG has alleged, algorithms can cause consumer harm.
Professor Nicholas Davis, Industry Professor of Emerging Technology at the University of Technology Sydney and Co-director, Human Technology Institute, said the variable quality of AI decisions presented a risk for organisations, particularly when its use was not documented.
“Where we’re going in five years is that it will be taken for granted that most things are powered by AI, but right now the performance threshold for generative AI is around 90%. That sounds good in marketing, but it’s not great if you’re running a business with high consequences, where small mistakes can have big impacts,” Professor Davis said.
“One of the key questions I keep coming back to is inventory – where AI might be in use across an organisation’s systems – and most organisations cannot answer that right now.”
The future of AI
ASIC is closely monitoring how the development and application of AI is affecting the safety and integrity of our financial ecosystem.
The risks new technologies pose to consumers, investors and overall market integrity are a key strategic priority for ASIC. All participants in the financial system – including regulators – have a duty to balance innovation with the responsible and ethical use of emerging technologies.
ASIC is exploring the potential uses of artificial intelligence and other technologies to remain a digitally enabled, data-informed regulator.
ASIC will also continue to promote the ethical use of consumer data and artificial intelligence.
Mr Abrahams added there needed to be minimum standards in relation to the use of AI, particularly in cases where there could be bias or discrimination.
“This technology is capable of great good but also great harm,” Mr Abrahams said.
“Almost all big technology applications in your tech stack will have some sort of generative AI engine within them in 12 months. The question is how are you going to use that, how are you going to get it into your workflow, and is it explainable – that is, if you were taken to task to explain why an insurance claim had been denied, could you actually explain why?”
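One simple way a decision can be made explainable is to use a model whose individual predictions decompose into per-feature contributions. The sketch below shows this for a linear model: each feature’s contribution to a decision is just its coefficient multiplied by its value, so the reasons behind an individual outcome can be listed. The insurance-claim scenario and all feature names here are hypothetical, not drawn from any real insurer or from the systems discussed above.

```python
# Illustrative sketch only: explaining a single decision of a linear model by listing
# per-feature contributions. The claim-triage scenario and features are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
feature_names = ["claim_amount_scaled", "policy_age_years", "prior_claims", "missing_documents"]

# Synthetic training data and labels (1 = claim approved, 0 = denied).
X = rng.normal(size=(1000, 4))
y = (X @ np.array([-0.5, 0.3, -0.8, -1.2]) + rng.normal(scale=0.5, size=1000)) > 0

model = LogisticRegression().fit(X, y)

# Explain one decision: for a linear model, each feature's contribution to the
# log-odds is coefficient * feature value, so the drivers of the outcome are auditable.
claim = X[0]
contributions = model.coef_[0] * claim
print(f"Predicted approval probability: {model.predict_proba([claim])[0, 1]:.2f}")
for name, c in sorted(zip(feature_names, contributions), key=lambda t: t[1]):
    print(f"{name:>22}: {c:+.2f} toward approval")
```

Explainability techniques for more complex models exist as well, but the underlying question raised on the panel is the same: can the organisation account for why a given decision was made?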