Alexa, are you acting in my best interest?

Oct 21 2019 18:13
Susan Spinner

Recently, I attended a conference where one of the points of discussion was the disruption of finance by new breakthroughs in artificial intelligence, or AI. One of the panellists said she worries about the influence that Amazon’s (in)famous virtual assistant, Alexa, might have on her children.

She questioned whether we can actually know what motives and morals ‘she’ has been programmed with. Can we trust the voice-controlled device sitting on our kitchen counter to keep our best interests at heart? 

Alexa is an example of natural language processing, which is one form of AI. Although the exact definition of AI is still a point of contention, it usually involves the use of self-learning systems to mine big data, recognise patterns and process natural language. The aim is to mimic how the brain processes information, and to make better decisions more quickly.

What does all this have to do with finance and investment, you might ask. While most financial services companies have not yet introduced voice-assisted technology, many do offer online financial advice and make computer-based investment recommendations. In fact, it was not long ago that so-called ‘robo-advisers’ were touted as a major threat to established investment firms. 

But instead they have proven complementary rather than disruptive, with large firms simply buying up the most promising fintech start-ups and adding the new technology to their own services.

A robo-adviser is an automated online platform that responds to your queries much as a human adviser would, but it is in fact a software program driven by AI, and it can be offered to clients at a lower cost.

The advantages of this type of service are clear for individuals who do not want to pay the higher fees of a financial adviser, who do not have large sums to invest, or who like to invest their funds independently, but with the support of an automated solution. 

As a client, you are therefore making a conscious choice to opt for the computer rather than the human adviser. Naturally, a human can make a mistake, give misleading advice or simply be bad at their job. In the same way, a robo-adviser is entirely dependent on the soundness of the algorithms and data that power it.

However, even if you have shied away from such a model thus far, there is every chance that you are already affected by the deployment of AI and machine learning technology somewhere in your financial affairs – whether you realise it or not. 

In a global survey of investment professionals conducted by the CFA Institute in early 2018, 51% of respondents stated that their firm’s top technology priority is client engagement, while a further 21% named machine learning in portfolio construction (i.e. using machines to make automated, instantaneous investment decisions based on market movements and incoming data).

A more recent study from early 2019 showed financial industry leaders identifying the growth in AI and machine learning as the greatest source of disruption for investment professionals in the next five to ten years. 

This development has the potential to bring significant advances and improvements to the world of finance. Imagine submitting a mortgage request (perhaps including a unique digital ID, so you don’t even need to fill out lengthy forms) and immediately knowing whether, and at what rate, a bank will grant you a loan.

Imagine supervisory authorities employing AI systems to detect fraudulent transactions, money laundering or tax evasion. Or a program tracking, comparing and evaluating company reports in real time, automatically highlighting the most relevant indicators to help investors make smarter choices. Already today, a virtual assistant called Jasmine helps people in Singapore navigate the tax system and file returns.

Computer-based technology is also far better and more efficient at repetitive tasks, while remaining accurate. So, what’s not to like? The pace, for a start. AI is evolving much faster than our legal frameworks, regulatory oversight and popular understanding of these technologies.

We know that both conscious and unconscious biases affect all of us – whether we are plumbers, bankers, portfolio managers or computer programmers. We have not yet established guidelines and frameworks that will reliably prevent such biases from infiltrating the datasets and code that shape an algorithm. 

Some unfortunate examples have become infamous: Microsoft’s chatbot “Tay” began to spew antisemitic vitriol; a computer program used by US courts to assess the likelihood that defendants will reoffend flags black defendants at twice the rate of their white counterparts; and three different AI systems, from IBM, Microsoft and Megvii, have been found to identify a person’s gender from a photo correctly 99% of the time – but only if that person is a white male. For other groups, such as women of colour, the accuracy of facial recognition drops significantly.

If these kinds of issues persist, how can we be certain that race, religion, gender and other factors will not adversely affect how software ranks an individual’s credit quality, sets insurance premiums or makes similar determinations?

Even leaving these unwanted and, one may hope, unintended results to one side, how can consumers know whether a financial product that is marketed to them by an algorithm really is in their best interest and not the company’s? 

One could argue that the same is true of a human adviser, and that is undeniable. However, AI brings with it the matter of scale: it can potentially misinform hundreds of thousands of savers and investors, rather than the few hundred a bad adviser might. There is also a common notion that programs are more neutral and fairer than humans – a misconception that may easily lull consumers into a false sense of security.

What we need, then, is a fruitful interaction of artificial and human intelligence: AI + HI, ensuring that algorithms and datasets are adequately tested, screened for quality and regularly reviewed. Companies specialising in such “algorithm audits” are already popping up.

With technology taking over the more repetitive, basic tasks, the value of uniquely human characteristics such as empathy, ethical orientation, tacit knowledge and face-to-face communication rises. This is why we increasingly emphasise the importance of soft skills for investment professionals.

In addition, we need regulatory bodies to move quickly and establish standard rules of play. The European Union earlier this year took a notable first step in this direction by publishing its Ethics Guidelines for Trustworthy AI, which include the need to maintain human agency and oversight as well as accountability, transparency and privacy. However, these guidelines are not yet legally binding. 

And as long as the big players in technological innovation, China and the US, do not take similar steps, we will continue to play catch-up with the pace of development.

Because, in the end, we all want to be certain that all the “Alexas”, “Siris” and “Jasmines” we are dealing with truly have our best interests at heart.

This article forms part of finweek’s Collective Insight series titled “How technology is impacting on financial decision-making” published in the 24 October 2019 edition of finweek. To read the entire series, get a copy of this magazine here.
