One theory tried to explain the 2008 financial crisis by referring back to the end of the Cold War. The new geopolitical landscape meant lower defence spending and, consequently, fewer contracts between the defence industry and the US government. As a result, aeronautics experts, such as mathematicians, physicists and highly qualified engineers, had to leave their jobs for new careers.
One industry that welcomed them with open arms was finance, where they were encouraged to tap into their mathematical modelling skills. Their creative abilities and theoretical training reportedly gave rise to a flurry of derivatives on the financial markets. As later analyses have shown, some of these products were so sophisticated that not even experienced investors could get their heads around them, and even their issuers struggled to value them.
If that theory is correct, then the genie is again about to get out of the bottle… with a vengeance, as an FT article recently noted.
Barclays’ credit card business has struck a deal with Amazon that will allow it to provide its customers with services of unprecedented quality. Essentially, Barclays and Amazon are going to use big data and AI technologies to approve credit lines rapidly and to predict what kind of services customers are likely to want next.
The AI platforms to be used in these financial operations are extremely powerful and, because they rely on what has been dubbed “deep learning”, will have the kind of effect an MIT analysis calls disruptive.
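For readers who want a concrete picture, the sketch below shows what a tiny credit-approval model of this general kind might look like. It is a minimal illustration using scikit-learn and entirely made-up features (income, credit utilisation, years of credit history); it is not the actual Barclays/Amazon system, whose design is not public.

```python
# Toy credit-approval model: a small neural network trained on
# hypothetical applicant features. Purely illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic applicants: [annual income, credit utilisation, years of history]
X = rng.normal(loc=[45_000, 0.4, 6], scale=[15_000, 0.2, 4], size=(5_000, 3))
# Hypothetical "ground truth": approve when income is high and utilisation low
y = ((X[:, 0] > 40_000) & (X[:, 1] < 0.6)).astype(int)

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=0),
)
model.fit(X, y)

# An instant decision for a new applicant
applicant = np.array([[52_000, 0.35, 3]])
print("approve probability:", model.predict_proba(applicant)[0, 1])
```

The point of the sketch is simply that, once trained, such a model returns a decision in milliseconds, which is what makes “rapid approval” possible at scale.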
The glass-half-full view is that such an approach may make financial products more accessible, as more care will go into designing them to better reflect the needs of various categories of clients. As a former governor of the Bank of England put it, we are seeing a “democratisation of finance”. Moreover, more efficient and faster business loan approval should lead to lower financing costs.
Transferring decision-making powers to AI systems also comes with risks that should not be underestimated. Firstly, there is the distinct possibility that AI systems that learn from past records will pick up unfortunate tendencies from the industries and companies whose data they are trained on. For instance, only 2 to 3% of the employees of companies such as Facebook or Google are black. What kind of inferences could an AI system draw from that…?
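A stylised example of how this happens: if the historical records used for training already encode a skewed outcome for one group, a model fitted to them simply reproduces the skew. The sketch below uses fabricated data and a plain logistic regression; it is an illustration under those assumptions, not a description of any real lender’s system.

```python
# Stylised illustration of bias inheritance: a model trained on skewed
# historical decisions reproduces the skew. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

group = rng.integers(0, 2, size=n)      # 0 = majority, 1 = minority
ability = rng.normal(size=n)            # true creditworthiness, same distribution for both

# Historical decisions: equally able minority applicants were approved less often
past_approval = (ability + 0.8 * (group == 0) + rng.normal(scale=0.5, size=n)) > 0.5

X = np.column_stack([ability, group])
model = LogisticRegression().fit(X, past_approval)

# Score two otherwise identical applicants who differ only by group
test_ability = np.zeros(1_000)
for g in (0, 1):
    Xg = np.column_stack([test_ability, np.full(1_000, g)])
    print(f"approval rate for group {g}: {model.predict(Xg).mean():.2f}")
# The model approves group 0 far more often despite identical ability,
# because it has learned the historical pattern.
```

In practice the protected attribute would rarely be an explicit input, but correlated proxies (postcode, employer, spending patterns) can transmit the same historical pattern to the model.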
Add to that data privacy concerns, as well as the prospect of already dominant and financially powerful companies becoming even more so. Nor should herding be underestimated, a risk that may be driven by the fact that AI systems are designed around common principles.
Finally, the fifth and probably most serious risk is losing control over AI-led automated processes, which could create considerable systemic risks across the financial sector. In recent years, less sophisticated systems underpinning ultra-fast algorithmic trading have been suspected on more than one occasion of triggering sudden and unjustified market dips.
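The mechanism behind such dips can be sketched with a very simple feedback loop: when many automated traders share near-identical sell rules, a small shock pushes the price through some of their thresholds, their selling pushes it through the rest, and the dip amplifies itself. The toy simulation below makes this concrete; the numbers and rules are invented for illustration and do not model any real trading venue.

```python
# Toy flash-crash dynamic: many algorithms with similar stop-loss rules
# sell into a falling price, amplifying a small initial shock.
# All parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
n_algos = 200
price = 100.0
stop_levels = price * rng.uniform(0.97, 0.995, size=n_algos)  # similar thresholds
triggered = np.zeros(n_algos, dtype=bool)
impact_per_sale = 0.05      # price impact of each forced sale

price -= 1.0                # small initial shock
for _ in range(50):
    newly_triggered = (~triggered) & (price < stop_levels)
    if not newly_triggered.any():
        break
    triggered |= newly_triggered
    price -= impact_per_sale * newly_triggered.sum()  # forced sales push price down

print(f"initial shock: 1.0, total drop: {100.0 - price:.2f}, "
      f"algorithms triggered: {triggered.sum()} of {n_algos}")
```

With these made-up numbers a shock of 1 ends up as a drop of roughly 10, because nearly every algorithm is eventually forced to sell; the similarity of their rules, not the size of the shock, drives the outcome.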
Preventing such situations from recurring requires new levels of risk management, both on the supervisory side, where central banks among others are involved, and on the side of the general public. The extreme decentralisation and liberalisation so admired by digitisation enthusiasts, combined with a diminished role for central authorities, could create monsters like those of 2008, which also emerged against a backdrop of weakened supervisory authorities.
The threat here is ending up with an inherently unstable system, the result of the instability of the digital applications that govern it. As in kinematics, the faster you drive, the longer the distance the ‘vehicle’ needs in order to stop and avoid catastrophe.
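The braking analogy can be made precise with elementary kinematics, added here only to spell it out: under a constant deceleration, the distance needed to stop grows with the square of the speed, so doubling the speed quadruples the stopping distance.

```latex
% Stopping distance d under constant deceleration a, starting from speed v:
%   v^2 = 2 a d   =>   d = v^2 / (2a)
% so doubling the speed quadruples the distance needed to stop:
\[
  d(v) = \frac{v^{2}}{2a}, \qquad \frac{d(2v)}{d(v)} = 4 .
\]
```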
Have a nice weekend!