Looking for Best Execution?

Here is what might influence your FX trading strategy

By Guy Hopkins, Founding Director, and Jian Chen, Head of Data Science, FairXchange

First Published: e-Forex Magazine 98 / Trading Operations / June 2020

Rules around best execution have been familiar to EU investment firms since 2007, with the implementation of the first MiFID directive, and have come into much sharper focus with the advent of MiFID II. Simply defined, it means achieving the best possible result for customers when executing their orders, either via venues or OTC. The formal reporting requirements are in fact quite modest; however, the onus is very much on firms to demonstrate that they have a process in place to ensure the best possible result. Costs are clearly a key component of this, but best execution extends a long way beyond trading on the best price at any given time - it informs the entirety of the execution process. If we have a large order to execute, how can we execute it without incurring unnecessary market impact, which may negatively affect us during the life of the order? Conversely, how do we avoid taking unnecessary market risk by taking too long to execute?

Choices

A useful starting point is to consider the various strategies that could potentially be used to execute a given trade; the final choice of strategy is the culmination of several decisions at varying levels within the organisation, all of which should be based on rigorous analysis and understanding of data.

Decisions first need to be made by the firm about which execution strategies are approved for use by the trading desk. Naturally, there are certain instruments which can only be traded in certain ways (onshore/restricted pairs, frontier markets and so on), so for the purposes of this discussion we’ll focus on the more liquid currency pairs.

One approach adopted by some firms is to automate the execution of orders below a certain size, so that they never require any human intervention in the first place. These are commonly executed directly over an API with a panel of liquidity providers, usually on a best price basis. Several decisions need to be made here: what should the size cut-off be for each pair? Can or should orders be netted together before execution, and how long should the netting period be? Which liquidity providers should be chosen for the panel? Should that panel be rotated and, if so, what are the criteria for deciding who should leave and who should join? Clearly these are decisions that must be justifiable and therefore require extensive, ongoing data analysis to answer. Traders must also have these metrics at their fingertips, as they invariably represent the firm when meeting counterparties to discuss performance.
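By way of illustration, the sketch below shows how such a netting and routing rule might look in code. The currency pairs, cut-off sizes and order fields are purely hypothetical assumptions; a real implementation would reflect the firm’s own policies and data model.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical per-pair size cut-offs (base currency notional) below which
# a netted residual is routed straight to automated API execution.
AUTO_CUTOFF = {"EURUSD": 5_000_000, "USDJPY": 5_000_000, "GBPUSD": 3_000_000}

@dataclass
class Order:
    pair: str
    side: str      # "buy" or "sell"
    amount: float  # base currency notional

def net_and_route(orders):
    """Net all orders received within one netting window, per currency pair,
    then decide whether the residual is auto-executed or passed to the desk."""
    net = defaultdict(float)
    for o in orders:
        net[o.pair] += o.amount if o.side == "buy" else -o.amount

    decisions = {}
    for pair, residual in net.items():
        if residual == 0:
            decisions[pair] = "fully netted - nothing to execute"
        elif abs(residual) <= AUTO_CUTOFF.get(pair, 0):
            decisions[pair] = f"auto-execute {residual:+,.0f} via the LP panel"
        else:
            decisions[pair] = f"pass {residual:+,.0f} to the trading desk"
    return decisions

# Example: three orders arriving within the same netting window.
window = [Order("EURUSD", "buy", 4_000_000),
          Order("EURUSD", "sell", 1_500_000),
          Order("GBPUSD", "buy", 8_000_000)]
print(net_and_route(window))
```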

[Image: Some firms automate the execution of orders below a certain size, so they never require any human intervention in the first place]

Inherent within this model is a tacit acceptance that all qualifying trades will incur a cost, specifically from crossing the spread to trade on prices from liquidity providers (i.e. worse than the market mid). It should be possible to quantify this estimated cost to the business over a year, assuming one has a reasonable assessment of available spreads and the likely volumes to be executed. Set against this crystallised cost are the operational efficiencies to be gained by releasing the human traders from handling large numbers of small tickets and letting them focus on higher-value trades. These efficiencies are inherently more qualitative, but it should still be possible to make a reasonable estimate of the time savings, both for the desk and for the middle and back office functions, that a more streamlined approach delivers.
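A back-of-the-envelope version of that calculation is sketched below. The spreads, volumes and rates are illustrative assumptions only, and the cost of trading on a liquidity provider’s bid or offer is approximated as half the quoted spread measured against mid.

```python
# Back-of-the-envelope estimate of the annual cost of crossing the spread on
# auto-executed flow. All spreads, volumes and rates are illustrative
# assumptions, not market data.
PIP = {"EURUSD": 0.0001, "USDJPY": 0.01, "GBPUSD": 0.0001}

assumed_spread_pips = {"EURUSD": 0.3, "USDJPY": 0.4, "GBPUSD": 0.6}
assumed_annual_volume_usd = {"EURUSD": 2.0e9, "USDJPY": 1.2e9, "GBPUSD": 0.8e9}
assumed_rate = {"EURUSD": 1.10, "USDJPY": 108.0, "GBPUSD": 1.25}

total_cost_usd = 0.0
for pair, spread_pips in assumed_spread_pips.items():
    # Trading on the far side of the quote costs roughly half the spread,
    # expressed here as a fraction of the prevailing rate.
    half_spread_fraction = 0.5 * spread_pips * PIP[pair] / assumed_rate[pair]
    cost = half_spread_fraction * assumed_annual_volume_usd[pair]
    total_cost_usd += cost
    print(f"{pair}: estimated spread cost ~${cost:,.0f} per year")

print(f"Total estimated spread cost: ~${total_cost_usd:,.0f} per year")
```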

With the firm having automated the small tickets, our hypothetical desk trader is now left with a smaller number of larger orders for which she takes responsibility; for any given order she must now choose an execution strategy from the options available to her, either trading in full at once (“risk transfer”) via electronic or voice channels, or working the order over time.

Many firms have adopted execution algos to transact their larger orders, and there is a huge number of strategies available from a wide variety of firms. How does one go about selecting the right ones to participate in a panel? Again, there needs to be an ongoing, data-driven analytical process that assesses the available offerings, both those in the existing panel and “challengers” competing to be included in the next rotation. This is not a simple task, not least because firms are to an extent reliant on data provided to them by the algo providers themselves – if they have never used a bank’s algo, how can they decide whether it is worth using? This of course is a key factor in the emergence of independent analytics firms, which aim to bring clarity to that decision-making process, standardising data and removing any perception of conflicts of interest. The aim here is to introduce an analytical framework to inform the selection of candidate providers and strategies, both initially and on an ongoing basis. Newer entrants need to be reviewed, while one must be open to the possibility that old favourites may fail to keep up with the pace of innovation, and performance may therefore suffer. Keeping on top of this shifting landscape should be a key component of any best execution process.
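One simple way to picture such a framework is a composite scorecard that puts incumbents and challengers on a common footing. The sketch below uses invented providers, metrics and weights purely for illustration; in practice the choice of metrics, and how they are standardised, would be central to the analysis.

```python
# Minimal sketch of a data-driven panel review: each provider (incumbent or
# challenger) is scored on a few standardised execution metrics. All names,
# figures and weights below are illustrative assumptions, not real data.
providers = {
    # slippage vs arrival mid (pips), reject rate (%), 1-minute post-trade
    # impact (pips) -- lower is better for all three.
    "Bank A (incumbent)":  {"slippage": 0.8, "reject_rate": 1.5, "impact_1m": 0.4},
    "Bank B (incumbent)":  {"slippage": 1.1, "reject_rate": 0.5, "impact_1m": 0.7},
    "Bank C (challenger)": {"slippage": 0.9, "reject_rate": 0.8, "impact_1m": 0.3},
}
weights = {"slippage": 0.5, "reject_rate": 0.2, "impact_1m": 0.3}

def normalised(metric):
    """Min-max normalise one metric across all providers so that different
    units can be combined (0 = best in panel, 1 = worst)."""
    vals = [p[metric] for p in providers.values()]
    lo, hi = min(vals), max(vals)
    return {name: (p[metric] - lo) / (hi - lo) if hi > lo else 0.0
            for name, p in providers.items()}

norm = {m: normalised(m) for m in weights}
scores = {name: sum(weights[m] * norm[m][name] for m in weights)
          for name in providers}

for name, s in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{name}: composite score {s:.2f} (lower is better)")
```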


Liquidity management

Regardless of the specifics of any given strategy, all of them - from immediate risk transfer to algos - must ultimately interact with sources of liquidity, and “liquidity management” is now a service in its own right. The largest providers spend a huge amount of time curating their liquidity pools, and this has become a key part of their sales pitch to their customers. This goes much further than simply saying “we are a large firm so we have a large client base for you to match with” – the most sophisticated firms will conduct rigorous testing on each source to ensure that the liquidity is both sustainable and reliable.

One area that is currently getting a lot of focus is “skew leakage”. When market makers take on a position from a client and attempt to get out of that position through the rest of their franchise, they will show an improved price (or “skew”) on the side that reduces their risk. The inherent danger, however, is that the skew they are showing to their clients gets propagated out into the wider market, where it gets incorporated into other participants’ pricing models and steadily pushes the market away, causing the originating market maker to lose money. As a result, market makers must be very selective about whom they show their skews to. Done well, this should ultimately result in lower rates of information leakage (or signalling risk) and market impact, and clients can verify this to a degree by conducting their own rigorous assessments of their providers using execution analysis tools. Similarly, clients may seek to identify which of their providers are able to demonstrably internalise their risk and which may be immediately covering their positions in the wider market (and thus causing additional market impact).
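A common building block for this kind of assessment is the post-trade markout: for each fill, measure how far the mid price moves in the direction of the trade over the seconds or minutes that follow, averaged per provider. Persistently large adverse markouts can point to information leakage or to risk being covered immediately in the wider market. The sketch below uses invented fills and mid prices purely to illustrate the calculation.

```python
# Sketch of a post-trade markout per provider. A positive markout means the
# market moved in the direction of the trade after execution (adverse for
# anyone still working the rest of the order). All data below is invented.
fills = [
    # (provider, side, mid at execution, mid 30 seconds later)
    ("LP1", "buy",  1.10000, 1.10004),
    ("LP1", "sell", 1.10010, 1.10003),
    ("LP2", "buy",  1.10005, 1.10001),
    ("LP2", "sell", 1.10008, 1.10009),
]

markouts = {}
for provider, side, mid_exec, mid_later in fills:
    move = mid_later - mid_exec
    # Sign the move so that "market went up after a buy" and "market went
    # down after a sell" both count as positive (adverse) markout, in pips.
    markout_pips = (move if side == "buy" else -move) * 10_000
    markouts.setdefault(provider, []).append(markout_pips)

for provider, values in markouts.items():
    avg = sum(values) / len(values)
    print(f"{provider}: average 30s markout {avg:+.2f} pips")
```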

Trade characteristics

Deciding which strategy to employ is also heavily influenced by the specific characteristics of the trade in question. Orders may be generated during the Asia time zone when liquidity is thinner; our trader must then weigh up the potential spread savings of waiting until the European trading day begins and liquidity starts to increase, against the risk that the market may drift away in the intervening period. In the most active pairs it may be possible to source sufficient liquidity at most if not all times of day, but others show very localised liquidity effects – Scandinavian pairs being a prime example.
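That trade-off can be framed, at least roughly, in numbers: the expected half-spread saving from waiting can be set against the size of the market move that could plausibly occur during the delay. The notional, spreads and volatility in the sketch below are assumptions chosen purely for illustration.

```python
import math

# Illustrative comparison of executing now in thin Asia liquidity versus
# waiting for the London open: expected half-spread saving against the market
# risk taken on during the delay. All figures are assumptions for the sketch.
notional_usd = 50_000_000
spread_now_pips, spread_london_pips = 2.0, 0.6   # assumed quoted spreads
hours_to_wait = 4
annualised_vol = 0.08                            # assumed 8% annualised volatility
rate = 1.10
pip = 0.0001

# Saving from paying half of a tighter spread later rather than now.
expected_saving = 0.5 * (spread_now_pips - spread_london_pips) * pip / rate * notional_usd

# One-standard-deviation market move over the waiting period (sqrt-of-time scaling).
hours_per_year = 252 * 24
one_sd_move = annualised_vol * math.sqrt(hours_to_wait / hours_per_year) * notional_usd

print(f"Expected spread saving from waiting: ~${expected_saving:,.0f}")
print(f"1-standard-deviation P&L swing while waiting: ~${one_sd_move:,.0f}")
```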

This presents issues particularly for those clients that prefer spread-earning “passive” strategies, using algorithms that post interest on venues. The potential savings compared with crossing the spread and trading on someone else’s bid or offer can be attractive, but of course, if there is no activity in the pair in question at that time of day, you could be waiting (and exposed to market risk) for a long time. Liquidity also changes over time – the recent Covid-19 pandemic has understandably caused profound changes in the liquidity landscape, and strategies that may have been effective beforehand may have to be rethought. Execution analytics help give traders a sense of the seasonality of liquidity, on both an intraday and a historical basis, allowing them to better understand changes in activity (i.e. volume), pricing competitiveness and depth. This provides an all-important context within which to evaluate the effectiveness of each of the potential execution strategies they may employ.
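In practice this often takes the form of an intraday liquidity profile – average spread and traded volume bucketed by hour of day – built from the firm’s own execution and market data. The sketch below illustrates the idea with a handful of invented observations; the field names and figures are assumptions for illustration only.

```python
from collections import defaultdict

# Sketch of an intraday liquidity profile: average quoted spread and total
# volume per hour of day, built from a hypothetical record of observations.
observations = [
    # (hour UTC, quoted spread in pips, traded volume in millions)
    (1, 2.1, 15), (1, 1.9, 10),
    (8, 0.7, 120), (8, 0.6, 140),
    (15, 0.8, 95), (21, 1.6, 25),
]

spread_sum = defaultdict(float)
count = defaultdict(int)
volume = defaultdict(float)
for hour, spread, vol in observations:
    spread_sum[hour] += spread
    count[hour] += 1
    volume[hour] += vol

for hour in sorted(count):
    avg_spread = spread_sum[hour] / count[hour]
    print(f"{hour:02d}:00 UTC  avg spread {avg_spread:.2f} pips, volume {volume[hour]:.0f}m")
```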

[Image: Execution analytics now form an integral part of the trading process]

Time to act

As we can see, there are many separate decisions – long term and short term, organisational and individual – that ultimately contribute to the specific choice of strategy for a given order, and all of these should be data-driven. Now we have arrived at the point where it is time for the trader to act; the appropriate strategy has been selected from the list of approved options for the trade in question. Let’s assume that our trader has chosen an algo; she is now faced with a new set of challenges, specifically what settings to use. How fast does she expect the algo to trade the order, either by setting a duration or choosing an urgency level? Are there market levels at which she wants to suspend participation, or aggressively consume any available liquidity?

This is where we start to see the merging of human and automated decision making, which makes measurement significantly more challenging. The algo behaves according to its underlying programming, and may even respond to market dynamics in-flight, but of course it is only able to operate within the constraints that the human executing the order imposes upon it. To have any chance of implementing a repeatable process for measuring the effectiveness of both the algos chosen to participate in the panel and the traders using them, it is essential to distinguish between the actions of each. User-discretionary decisions – changing execution speed, imposing limit prices, pausing execution and so on – are completely outside the control of the algo, and human traders should be able to point to the value they have added to the execution process by using these controls.
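One practical way to make that distinction is to record every execution event together with the actor responsible for it, so that algo behaviour and trader discretion can each be evaluated on their own terms. The event schema and entries in the sketch below are hypothetical.

```python
# Sketch of separating algo behaviour from trader discretion in an execution
# event log, so each can be measured on its own. All entries are invented.
events = [
    {"time": "09:00:01", "actor": "algo",   "action": "child_order", "detail": "passive bid 2m"},
    {"time": "09:03:12", "actor": "trader", "action": "set_limit",   "detail": "1.1050"},
    {"time": "09:05:40", "actor": "algo",   "action": "child_order", "detail": "cross spread 1m"},
    {"time": "09:07:02", "actor": "trader", "action": "pause",       "detail": "ahead of data release"},
    {"time": "09:15:30", "actor": "trader", "action": "resume",      "detail": ""},
]

by_actor = {"algo": [], "trader": []}
for e in events:
    by_actor[e["actor"]].append(e)

print(f"Algo-driven events: {len(by_actor['algo'])}")
print("Trader-discretionary events:")
for e in by_actor["trader"]:
    print(f"  {e['time']}  {e['action']}  {e['detail']}")
```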

Our trader also needs extensive insight into current liquidity conditions, both to set the initial parameters of the algo and then to adjust them in-flight as she observes how the algo is behaving and how the market is potentially responding to its activity. She also needs to be aware of imminent news events which may cause a spike in volatility. This of course has led to the recent emergence of pre- and in-trade analysis tools that assist the decision-making process in real time.
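A very simple pre-trade check of this kind might flag any scheduled releases that fall inside the intended execution window, as sketched below with an invented calendar and timings.

```python
from datetime import datetime, timedelta

# Pre-trade sketch: flag scheduled news events that fall inside the intended
# execution window, so the trader can adjust duration or pause settings.
# The calendar entries and times are invented for illustration.
calendar = [
    (datetime(2020, 6, 5, 12, 30), "US non-farm payrolls"),
    (datetime(2020, 6, 5, 14, 0),  "ISM non-manufacturing"),
]

start = datetime(2020, 6, 5, 12, 0)
duration = timedelta(hours=2)          # intended algo duration
end = start + duration

conflicts = [(t, name) for t, name in calendar if start <= t <= end]
if conflicts:
    print("Warning: execution window overlaps scheduled events:")
    for t, name in conflicts:
        print(f"  {t:%H:%M} UTC  {name}")
else:
    print("No scheduled events inside the execution window.")
```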

So, as we have seen, execution analytics now form an integral part of the trading process, informing the long-term strategic perspective of the organisation, counterparty and strategy selection, liquidity analysis and timing, right through to the individual decisions made by a trader at the point of trade. As technology becomes ever more sophisticated, this interaction between the human and machine worlds seems only set to increase.