Members of the management team had raised concerns that the system was not ready to be fully launched after seeing initial results.
The AI company assured the firm it had put the correct processes in place to mitigate any risks; the software could write client letters, offer advice and recommend changes.
However, a short-term coding pattern meant the metrics and advice started steering towards portfolios that discarded the fundamentals of diversified investing, with the system failing to account for market volatility and the risks of over-concentration.
Because the firm had invested heavily in the technology, and it was generating helpful advice for some clients, the firm was reluctant to make any immediate changes.
Advisers were asked what the firm should do next, with 47 per cent voting that the senior management team did not understand the implications of implementing the third-party AI technology, and that the firm should introduce a secondary process to manually check the software was working correctly.
Some 32 per cent said the firm should stop using the AI software and review the process, while 10 per cent voted that the use of the AI should be stopped immediately as the error was “unacceptable”.
Bhogal said: “When it comes to implementing technology within organisations that deals with highly sensitive clients, risks do need to be mitigated.
“What happens sometimes within organisations is we are very reliant on consultants and specialists to come in and put in these processes, which can lead to scenarios like the one above, where something has happened and the firm is being reassured by these specialists that everything is fine.
“Communication needs to be better and there needs to be a better understanding from organisations about how the technology they have implemented works.”
alina.khan@ft.com