In this article, NOW Finance Group Founder and CEO, Richard Blumberg, puts forward his analysis of the potential risks associated with how we utilise ever-advancing AI technology, and how we are best placed to manage these going forward.
By now, it’s clear to most of us that artificial intelligence (AI) is going to have a significant impact on our lives.
In truth, it already has. All of us regularly use and benefit from narrow AI, which is designed to perform a specific task or set of tasks – shopping online, for example, or searching the internet, which we all do all the time.
There’s no doubting the technology is revolutionary. It has improved – and will continue to improve – outcomes and efficiencies in many areas. However, recent advances, like the emergence of later versions of large language models such as ChatGPT, have precipitated a growing fear that AI could make humans redundant, amongst other evils.
So, are we right to be scared or are the current fears simply misplaced hyperbole?
Well, the news is both good and bad. Right now, although large language models like ChatGPT are capable of providing a human-like response, they’re still a long way off morphing into general intelligence. That will inescapably happen at some time in the future, at which point AI clearly has the potential to become scarier and more dangerous – especially in the wrong hands. In the meantime, while AI may affect employment in certain areas or industries, the corollary argument is that overall improvements in efficiency have the capacity to free us up for other, more interesting and important things, not simply render us ‘unemployed’ or redundant.
In fact, there’s plenty of good reason to believe that AI, like all general-purpose technologies before it, will improve human welfare. In the finance sector, we’ve already seen AI emerge as an extremely useful and adaptable tool, augmenting and enhancing how lenders and borrowers interact in a myriad of ways.
For example, in our own business, we maintain a large enterprise data warehouse with endless data points and tailored data marts. We apply models and machine learning to make better risk decisions, tailor individual customer pricing and improve our customer acquisition – and we’re only just getting started!
While machine learning (ML) uses algorithms that learn from data to improve, AI extends beyond ML, drawing on rule-based systems, expert systems and other techniques to create intelligent machines that perform tasks normally requiring human intelligence. While we have AI and ML operating in our technology stack, we don’t use them in customer-facing operations because we value speaking to our customers. However, we’re already starting to see AI used in call centres and in the provision of customer service support, where a computer can be quite hard to distinguish from an actual person in many instances.
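To make that distinction concrete, here’s a minimal, purely hypothetical sketch in Python: a hand-written rule-based check alongside a model that learns a comparable decision from data. The feature names, figures and thresholds are illustrative only, not drawn from any real lending system.

```python
# Hypothetical illustration: a hand-coded rule-based system versus a
# model that learns its decision rule from labelled examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

def rule_based_decision(income_k: float, debt_k: float) -> bool:
    """Rule-based 'AI': the decision logic is written by hand."""
    return income_k > 50 and debt_k / income_k < 0.4

# Machine learning: the logic is learned from data.
# Made-up training set: [income $'000, debt $'000] -> repaid (1) or not (0).
X = np.array([[80, 10], [30, 25], [60, 5], [25, 20]], dtype=float)
y = np.array([1, 0, 1, 0])
model = LogisticRegression().fit(X, y)

applicant = [55, 12]
print(rule_based_decision(*applicant))       # hand-written rule
print(bool(model.predict([applicant])[0]))   # learned rule
```

Both approaches sit under the AI umbrella; the difference is whether the decision logic is authored by a person or inferred from data.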
So, what’s the problem?
Like any rapidly evolving technology, the ride won’t always be a smooth one, and a key challenge is ensuring the use of AI remains both ethical and explainable.
One of the key risks associated with any AI model is that it becomes a ‘black box’. To guard against this, models ideally need to be developed alongside corresponding policy frameworks, controls and documentation that clearly articulate what each model is seeking to do, so that they remain readily transparent and explainable.
In short, you need to ensure there’s accountability and that users understand how algorithms reach their conclusions. Other concerns that need to be addressed include fairness and the avoidance of bias, as well as safeguarding privacy and data.
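As a minimal sketch of what ‘explainable’ can look like in practice (assuming a simple linear model and the same illustrative feature names as above, all hypothetical), per-feature contributions to a decision can be surfaced alongside the outcome:

```python
# Hypothetical sketch: surfacing per-feature contributions so a decision
# can be explained rather than treated as a black box.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income_k", "debt_k", "years_employed"]  # illustrative names
X = np.array([[80, 10, 5], [30, 25, 1], [60, 5, 8], [25, 20, 2]], dtype=float)
y = np.array([1, 0, 1, 0])
model = LogisticRegression().fit(X, y)

applicant = np.array([55.0, 12.0, 3.0])
# For a linear model, each feature's contribution to the log-odds is
# simply coefficient * value, which is directly inspectable.
for name, contrib in zip(features, model.coef_[0] * applicant):
    print(f"{name}: {contrib:+.3f}")
print(f"intercept: {model.intercept_[0]:+.3f}")
```

More complex models need more work – attribution techniques such as SHAP, for instance – but the principle is the same: a user should be able to see which inputs drove the conclusion.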
More broadly, the illegal or inappropriate exploitation of AI remains the number one concern taxing most commentators, experts and major players in the tech space, especially at that future point when AI morphs into general intelligence.
All of this suggests that AI, like any powerful technology, will ultimately require the development of a regulatory framework. It’s worth noting that striking the right balance between regulation and innovation is essential. Overregulation may stifle AI advancements, while inadequate regulation may leave risks and harms unaddressed. In a nutshell, I think we should be focused more on regulating the application of the technology, and less on the technology itself, which would be very difficult to manage.
I’m personally excited about AI and remain confident that, like all general-purpose technologies before it – electricity, electronics, modern transportation, the internet – generative AI will change our world for the better.