The inbox of every financial journalist who dabbles in tech is absolutely hoaching with outlandish claims for the ways in which AI will revolutionise whole business models and put hitherto undreamed-of numbers of salaried workers out of a job.
As with blockchain, the claims made for the powers of AI are vague enough and distant enough that no one is entirely sure what it might one day be able to do. The one thing we are constantly assured of is that it will be extremely #disruptive.
That’s not to say there aren’t uses in the capital markets, but it all depends on what you mean by AI.
The threshold for what constitutes genuine AI is constantly being adjusted. Engineers in the field are known to complain that all sorts of functions used to be considered benchmarks for AI, right up until the point where they got them to work.
The classic example is the Turing test, which states that a computer will have achieved artificial intelligence if it can fool a human into believing they are conversing with another human.
Chatbots passed that threshold years ago, but they did so in such a specific and unsatisfying way (unforeseen by Turing when he posited the test) that it seems a very shallow type of artificial intelligence.
Nevertheless, chatbots are finding their role in capital markets. Credit Suisse has a chatbot that can offer quotes (and automatically execute orders up to a certain size) on 7,500 corporate bonds.
The system is much more than a chatbot, of course. The algorithm looks at the same sorts of data points a trader would look at and comes up with an executable price. It makes the market more transparent and makes it more likely that orders will be executed, as well as freeing up traders to work on bigger deals.
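For a flavour of what such an engine involves, here is a minimal sketch of an auto-quoting function, assuming a made-up pricing rule: anchor on recent trade levels, widen the spread for size, skew for inventory. Every name, number and threshold below is invented; nothing here describes Credit Suisse’s actual system, whose internals are not public.

```python
from statistics import median

# Hypothetical sketch of an auto-quoting engine. Every name, weight and
# threshold is invented for illustration only.
MAX_AUTO_SIZE = 1_000_000  # orders above this go to a human trader

def quote_bond(recent_trades, dealer_inventory, order_size, side):
    """Return an executable price, or None if a trader should handle it."""
    if order_size > MAX_AUTO_SIZE or not recent_trades:
        return None  # too big, or too illiquid to price automatically

    mid = median(recent_trades)  # anchor on recent market levels
    half_spread = 0.10 + 0.05 * (order_size / MAX_AUTO_SIZE)  # size premium
    skew = -0.02 if dealer_inventory > 0 else 0.02  # lean towards flattening
    if side == "buy":   # client buys, dealer sells: offer above mid
        return round(mid + half_spread + skew, 3)
    return round(mid - half_spread + skew, 3)  # client sells: bid below mid

print(quote_bond([99.8, 100.1, 99.95], dealer_inventory=2_000_000,
                 order_size=500_000, side="buy"))
```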
It’s a very nice system and provides genuine value for its owners and their clients. But it’s not exactly disruptive. And it is quite a leap from there to a sort of finance Skynet, automating the higher functions of credit and capital.
There are those who would have you believe that such a world is not far away. One of this week’s aforementioned AI press releases trumpeted research that said capital markets professionals believe that “artificial intelligence will produce more accurate, reliable and transparent credit decisions than human-based systems within five years”.
Drilling into the data, it emerges that “almost half” (i.e. less than half) of the surveyed professionals agreed that AI can already outperform human credit decisions for simple vanilla loans; 15% believe the same is already true of more complex non-conforming loans, while 58% believe AI will get there within an average of five years.
The mechanism by which AI would achieve this is the assessment of lifestyle factors as presented in the public domain. Simply automating the current system of credit scoring would not produce better results, although it might yield efficiency savings. The key is that AI will assess new data unavailable to human underwriters.
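To make that mechanism concrete, here is a minimal sketch of an alternative-data credit score, assuming a simple logistic model. Every feature and weight is invented for illustration and drawn from no real underwriter. A score of this sort is also the raw material for the finer loan tiering discussed below.

```python
import math

# Hypothetical alternative-data credit score: every feature and weight
# here is illustrative, drawn from no real underwriting model.
WEIGHTS = {
    "years_at_address": 0.30,
    "utility_bills_paid_on_time": 0.80,   # from open banking / public data
    "gig_income_stability": 0.50,         # steadiness of platform earnings
    "defaulters_among_contacts": -1.20,   # known defaulters in the network
}
BIAS = -1.0

def default_probability(features):
    """Logistic model: higher creditworthiness score -> lower p(default)."""
    score = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(score))  # note the sign: big score, small p

applicant = {
    "years_at_address": 3,
    "utility_bills_paid_on_time": 1,
    "gig_income_stability": 0.6,
    "defaulters_among_contacts": 0,
}
print(f"p(default) = {default_probability(applicant):.1%}")
```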
Computer says 'yes'
If banks were to adopt such a system, then loans portfolios produced by AI underwriters might form a new “super prime” tier in ABS. Cliff Pearce, global head of capital markets at Intertrust, said: “In ABS portfolios, there is an opportunity for AI to offer real value. AI assessment of loans could lead to more fine-grained distinctions of tiers. We could use it to segregate prime into several different categories priced differently.”
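As a sketch of what that finer segmentation might look like, here are model-estimated default probabilities bucketed into hypothetical sub-tiers of prime, each priced at its own spread; the boundaries and spreads are invented.

```python
# Hypothetical sub-tiering of prime: the boundaries and spreads are
# invented to illustrate the idea, not any real ABS pricing grid.
TIERS = [  # (max estimated default probability, tier name, spread in bp)
    (0.01, "super prime", 60),
    (0.03, "prime A", 90),
    (0.05, "prime B", 130),
]

def assign_tier(p_default):
    """Map a model-estimated default probability to a tier and a spread."""
    for ceiling, name, spread_bp in TIERS:
        if p_default <= ceiling:
            return name, spread_bp
    return "non-prime", None  # falls off the prime grid entirely

for p in (0.005, 0.02, 0.08):
    print(f"{p:.1%} -> {assign_tier(p)}")
```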
There are a couple of objections to the concept of relying on artificial intelligence for credit assessments, though. The first might be regarded as an argument about quality: will machine-produced loans portfolios expose new correlations and vulnerabilities that cause a huge section of a portfolio to default at once?
Loans portfolios are already penalised for being too heavily correlated, but it may not be enough to keep investors safe. After all, the point of this system is to spot connections humans can’t. However, we can’t truly know whether this is the case or not until a full credit cycle has elapsed.
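A toy simulation makes the worry concrete. In a standard single-factor picture of correlated defaults, loans that look independent on conventional fields can still share a hidden factor, and the size of that shared exposure decides how much of the pool can go down at once. All the parameters below are invented.

```python
import random

# Toy single-factor default model, all parameters invented. Each loan
# defaults when a mix of a shared hidden factor (say, everyone earning
# through the same gig platform) and its own noise crosses a bar.
def worst_default_rate(n_loans=200, rho=0.6, bar=-1.0,
                       n_scenarios=2_000, seed=42):
    rng = random.Random(seed)
    idio = (1 - rho ** 2) ** 0.5  # weight on loan-specific noise
    worst = 0.0
    for _ in range(n_scenarios):
        common = rng.gauss(0, 1)  # the shared factor's draw this scenario
        defaults = sum(1 for _ in range(n_loans)
                       if rho * common + idio * rng.gauss(0, 1) < bar)
        worst = max(worst, defaults / n_loans)
    return worst

# The average default rate is the same either way, but hidden correlation
# concentrates losses: one bad draw of the shared factor defaults a huge
# section of the pool at once.
print("worst scenario, rho=0.1:", worst_default_rate(rho=0.1))
print("worst scenario, rho=0.6:", worst_default_rate(rho=0.6))
```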
But there is another objection, and it is one which it behoves us to air before such a system becomes widespread, not after.
In Asia, WeChat, a messenger app and payments network, collects data on users’ chats and purchases and uses it to make inferences about their lifestyles. The information is already used to grant access to things such as waivers for bank account minimums.
Is this a level of access we want to grant loans companies? The upside is simple: appropriately priced financial products and lower default rates. Such an outcome might be particularly valuable in an economy with large proportions of unbanked people with no credit history or FICO score.
The downside? Well, quite apart from granting a new degree of technological invasion into our lives, how accurate a picture of you does your online presence paint? Does ByteMe’s inability to stop sharing memes about cryptocurrency on our unpopular Twitter account really reflect our inability to pay back a loan? Or will the machine simply dismiss our chances once it sees we’re in a field as financially inutile as journalism?
More to the point, though, can we trust those to whom we grant this access? Given the profusion of data leaks, simple competence is a concern, but there are ethical worries too. Human rights activists believe that data from WeChat was used to jail an activist for sedition in China, while criminals are believed to have used the data it gathers to facilitate 20 crimes in the city of Hangzhou alone.
True, these problems have more to do with the proliferation of the data we generate and how it is used, and they are not unique to the credit underwriting business, but making use of our data for credit decisions is an important step from which we will find it difficult to retreat.