Posted: 10th January 2018

First published by PIMFA in December 2017

With the number of firms willing to invest in the development of robo-advice propositions increasing, new guidance papers published by the FCA during 2017 have been welcomed by many.

Much of the guidance is informed by the FCA's work through its Advice Unit and, as a result, the regulatory expectations for robo-advice are much clearer, which should give firms increased confidence in either entering this market or expanding their offering. However, there are still operational and technological challenges to ensuring their automated processes are aligned with regulatory requirements.

As a result of the Financial Advice Market Review (FAMR), launched in 2015, the FCA has largely taken a supportive role in the development of new technologies that can deliver cost-effective advice in order to fill the advice gap. More recently, however, there are signs that the regulator wants to be a little more intrusive and take a closer look under the bonnet of firms’ robo-advice processes.

What areas is the FCA likely to focus on and, importantly, how can firms ensure that their proposition is fit for purpose?

Increasing regulatory focus

The FCA is concerned that poorly designed robo-advice processes could lead to systemic mis-selling. Bob Ferguson, Department Head at the FCA’s Strategy and Competition Division, recently expressed this point at the Westminster and City annual conference on robo-advice.

The FCA’s Business Plan for 2017/2018, which was published earlier this year, stated that the FCA will monitor the development of robo-advice and it will be the focus of thematic work starting from quarter two of 2018.

Although the FCA is willing to support the development of robo-advice, firms need to make sure that the automated advice process is well designed and fit for purpose.

Where the regulator’s scrutiny will be focused

Firms need to be prepared for a more intrusive investigation into their advice processes, but what should they expect the regulator to scrutinise? Areas of focus are likely to include:

  • The target market and distribution strategy, and whether it has been defined, robustly researched and is supported by evidence to ensure it is being offered to the right clients
  • The onboarding process, and whether it captures the right amount and level of information
  • Whether the customer journey gives clear information that defines the service and who it is suitable for
  • Whether the system is capable of filtering out those potential clients for whom the service is not suitable
  • Whether advice meets the definition of a personal recommendation
  • For a non-advised service, whether the presentation of information meets the needs of the target market and avoids straying into regulated advice
  • Whether the process is aligned with new regulation, such as the second Markets in Financial Instruments Directive (MiFID II) and the Insurance Distribution Directive (IDD)
  • How the service ensures any product arranged remains suitable for the client on an ongoing basis
  • How the firm manages and discloses any conflicts of interest
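One of the scrutiny areas above, filtering out potential clients for whom the service is not suitable, lends itself to a simple illustration. The sketch below shows one way a triage step might work; the client fields, rules and thresholds are entirely hypothetical and are not drawn from FCA guidance.

```python
# Illustrative sketch only: a hypothetical triage step that screens out
# clients for whom an automated service is not suitable. The rules and
# thresholds are invented for illustration, not regulatory guidance.
from dataclasses import dataclass


@dataclass
class ClientProfile:
    age: int
    has_unsecured_debt: bool
    emergency_fund_months: float
    investment_horizon_years: int


def is_in_target_market(client: ClientProfile) -> tuple[bool, str]:
    """Return (eligible, reason). Ineligible clients would be signposted
    elsewhere rather than taken through the automated advice process."""
    if client.has_unsecured_debt:
        return False, "outstanding unsecured debt: refer elsewhere"
    if client.emergency_fund_months < 3:
        return False, "insufficient emergency fund for investing"
    if client.investment_horizon_years < 5:
        return False, "horizon too short for the modelled portfolios"
    return True, "within target market"
```

The value of making such rules explicit is that each one can be traced back to the firm's documented target-market definition and evidenced to the regulator.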

Covering the above areas should be relatively comfortable for firms that have prepared well. However, the more challenging questions are those that involve ‘looking under the bonnet’ and focus on the internal operations of the advice model, such as:

  • What controls are there over the accuracy and reliability of client information?
  • What controls are in place to identify and handle inconsistencies in client information?
  • What level of outcome testing has been carried out pre-launch and how is this monitored on an ongoing basis?
  • How do the algorithms map directly to good client outcomes?
  • Is risk profiling fit for purpose?
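The second question in the list above, controls for inconsistencies in client information, can also be sketched in code. The cross-checks and field names below are hypothetical examples of the kind of rules a firm might implement; a real fact-find would carry many more checks and route any conflicts to human review.

```python
# Minimal sketch of automated consistency checks over client answers.
# Field names and rules are hypothetical; conflicting answers would
# typically be routed to human review rather than resolved silently.
def find_inconsistencies(answers: dict) -> list[str]:
    issues = []
    # Stated risk appetite should broadly match stated loss reaction.
    if (answers.get("attitude_to_risk") == "high"
            and answers.get("reaction_to_20pct_loss") == "sell everything"):
        issues.append("stated risk appetite conflicts with loss reaction")
    # A proposed contribution should not exceed disposable income.
    if answers.get("monthly_investment", 0) > answers.get(
            "monthly_disposable_income", 0):
        issues.append("proposed contribution exceeds disposable income")
    return issues
```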

What is the potential for systemic mis-selling?

A robo-advice process contains broadly the same elements of the client journey as face-to-face advice, but there is a key difference that can increase the risk of systemic mis-selling.

In face-to-face advice, the nature of human decision-making means there can be a variance in outcomes arising from a client relationship with an adviser. However, this should still result in suitable outcomes in most cases due to the skill and experience of the adviser.

For robo-advice, the model is likely to be less variable, and there is a greater 'fixed' nature to the outcomes. Here, the complexity of the algorithm plays a part.  

The key challenge is this: if there is a flaw in the design process which produces a poor outcome, the 'fixed' nature of the system can magnify the number of poor outcomes.

Some robo-advice models include part-human, part-automated advice, which is likely to reduce the risk of this 'fixed' nature. For advice models with no human involvement, the risk of systemic mis-selling due to a single flaw in the process is much greater.
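The magnification effect can be seen in a toy example. Below, a single mis-specified threshold (wholly invented for illustration) mis-bands every client it touches in exactly the same way, whereas human advisers would at least vary case by case.

```python
# Toy illustration of how a single design flaw propagates in a fully
# automated model. The risk-banding rule is hypothetical.
def flawed_risk_band(age: int) -> str:
    # Intended rule: only clients aged over 70 are banded "cautious".
    # The flaw: the threshold was coded as 60, so every client aged
    # 61-70 is mis-banded in exactly the same way.
    return "cautious" if age > 60 else "balanced"


clients = list(range(61, 71))  # ten clients aged 61 to 70
mis_banded = [age for age in clients if flawed_risk_band(age) == "cautious"]
# The flaw hits all ten clients identically: no natural variance dampens
# it, which is the systemic mis-selling risk described above.
```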

Gaining sufficient comfort that your system is fit for purpose

Having covered various elements of robo-advice during his speech, Bob Ferguson finished by asserting that the real focus is ultimately, and invariably, customer outcomes:

"Thinking about the risks prompts a question about how the FCA will supervise robo-advice models and their algorithms. The answer is that we are focused on outcomes. That is to say, it is above all about what the model generates."

Whether the robo-advice model is well designed can be established through effective outcomes testing. Firms should ensure that they are testing sufficient volumes of customer outcomes, and that enough diversity (in terms of the variety of customer circumstances) exists within samples, both prior to launch and on an ongoing basis.

Outcome testing is not just about running scenarios to see if the algorithms produce suitable advice; it should also include other aspects, for example, testing customers' understanding and testing the technology’s tolerances based on the widest possible variations in client answers.

Firms should also consider the factors which influence how much testing they should carry out. Models which are fully automated may need more testing than those with some human involvement. The greater the number of inputs for the algorithm, the more possible scenarios and the greater the complexity.
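The testing approach described above can be sketched as a small harness that exercises a wide grid of client circumstances against an outcome rule. Here `recommend` is a hypothetical stand-in advice algorithm, and the input grid and suitability assertion are illustrative only.

```python
# Sketch of volume-and-diversity outcome testing. `recommend` is a
# hypothetical stand-in advice algorithm returning an equity allocation
# as a fraction of 1.0; the input grid and outcome rule are illustrative.
import itertools


def recommend(age: int, risk: str, horizon: int) -> float:
    base = {"low": 0.2, "medium": 0.5, "high": 0.8}[risk]
    if horizon < 5:
        base = min(base, 0.3)  # dampen equity for short horizons
    if age >= 70:
        base = min(base, 0.4)  # dampen equity for older clients
    return base


# Exercise the widest plausible variation in inputs, not just typical cases.
ages = [18, 40, 69, 70, 95]
risks = ["low", "medium", "high"]
horizons = [1, 5, 30]

failures = []
for age, risk, horizon in itertools.product(ages, risks, horizons):
    equity = recommend(age, risk, horizon)
    # Outcome rule under test: short horizons should never exceed 30% equity.
    if horizon < 5 and equity > 0.3:
        failures.append((age, risk, horizon, equity))
```

In practice the grid would be far larger, the outcome rules would reflect the firm's own suitability policy, and any failures would be investigated before launch and as part of ongoing monitoring.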

Act early to capture issues

If outcomes are the sole marker for success, and the risk of driving detriment into the customer base within robo-advice propositions is increased, then firms have an imperative to ‘check the oil’ more regularly for their own, and their customers’, benefit.

The key commercial advantage of robo-advice is the efficiency with which it can provide advice. However, in order to gain in the long term, firms must ensure they are investing sufficient time and money into the prevention of poor outcomes and balancing commercial advantages with customers’ needs in order to futureproof their approach.
