The use of powerful artificial intelligence (AI) systems in the financial world is a step closer thanks to research at 蓝莓视频 Engineering that explains how those systems make their decisions.

Deep-learning AI software has the potential to generate stock market predictions, assess applicants for mortgages, set insurance premiums and perform other key financial functions.

So far, however, widespread adoption of the technology has been thwarted by a fundamental problem: understanding how and why complex AI algorithms make their decisions.

That information is crucial in financial fields to both satisfy regulatory authorities and give users confidence in those AI systems.

"If you're investing millions of dollars, you can't just blindly trust a machine when it says a stock will go up or down," says Kumar, the lead researcher and a PhD candidate in systems design engineering at 蓝莓视频.

The explainability problem, as it is called, stems from the fact that deep-learning AI algorithms essentially teach themselves by detecting patterns in vast amounts of data. As a result, even their creators don't know exactly how they arrive at their decisions.

Kumar and his collaborators, including 蓝莓视频 engineering professor Alexander Wong, set out to solve that problem by first developing an algorithm to predict next-day movements of the S&P 500 stock index.

That system was trained with three years of historical data and programmed to make predictions based on market information from the previous 30 days, including high, low, open and close levels for the index, plus trading volume.
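The input setup described above can be sketched in code. This is a minimal, assumed illustration of how 30-day windows of the five market features might be paired with next-day direction labels; the function name, shapes, and synthetic data are hypothetical, not the researchers' actual pipeline.

```python
import numpy as np

WINDOW = 30  # days of history per prediction, as described in the article

def make_windows(features: np.ndarray, close: np.ndarray):
    """features: (num_days, 5) array of open/high/low/close/volume;
    close: (num_days,) closing levels.
    Returns (X, y) where X[i] is a 30-day window of features and
    y[i] is 1 if the index closed higher the following day, else 0."""
    X, y = [], []
    for t in range(WINDOW, len(close) - 1):
        X.append(features[t - WINDOW:t])               # previous 30 days
        y.append(1 if close[t + 1] > close[t] else 0)  # next-day move
    return np.stack(X), np.array(y)

# Tiny synthetic example: 40 days of fake market data.
rng = np.random.default_rng(0)
feats = rng.random((40, 5))
closes = feats[:, 3]
X, y = make_windows(feats, closes)
print(X.shape)  # (9, 30, 5): 9 samples, each 30 days x 5 features
```

A predictive model would then be trained to map each window `X[i]` to its label `y[i]`.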

The researchers then developed software called CLEAR-Trade to highlight, in colour-coded graphs and charts, the days and daily factors most relied on by the predictive AI system for each of its decisions.
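To make the idea of highlighting influential days concrete, here is a deliberately simplified saliency sketch in the spirit of that output: for a toy linear predictor, each input's contribution to the prediction is its value times its weight, and summing contributions per day ranks the days the model leaned on most. This stand-in is an assumption for illustration, not the actual CLEAR-Trade algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
window = rng.random((30, 5))             # one 30-day x 5-feature input
weights = rng.standard_normal((30, 5))   # toy linear model parameters

contribution = weights * window          # per-day, per-feature contribution
per_day = contribution.sum(axis=1)       # relevance score for each of the 30 days

# The three days with the largest absolute influence on this prediction.
top_days = np.argsort(-np.abs(per_day))[:3]
print("days the model leaned on most:", top_days)
```

A chart of `per_day` (or the full `contribution` grid) is the kind of colour-coded view an analyst could check against their own knowledge of market events.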

A first for deep-learning AI systems in finance, those insights would allow analysts to use their experience and knowledge of world events, for example, to determine if the decisions make sense or not.

And although the stock market was used for research purposes, the explanatory software developed at 蓝莓视频 is potentially applicable to predictive deep-learning AI systems in all areas of finance.

"Our motivation was to create an explainability system rather than a very good predictive system," says Kumar, a member of the Vision and Image Processing (VIP) Lab at 蓝莓视频. "Whatever system you have, we can explain its decision-making processes and insights."

The ability to explain deep-learning AI decisions is expected to become increasingly important as the technology improves and authorities require financial institutions to give reasons for those decisions to the people affected by them, such as rejected mortgage applicants.

"Banks need an explainable model," Kumar says. "They can't just use a black box in that kind of situation. Regulators want them to be able to tell their clients why they're being denied service."

Field trials of the software are expected to start within a year and hopes are running high for its commercial potential.

"This will allow institutions to use state-of-the-art AI systems for financial decisions," Kumar says. "The potential impact, especially in regulatory settings, is massive."
