FX is a dealer market where common information is not readily (if at all) available. Liquidity providers (LPs) have private information about their flow and hedging strategies, and liquidity consumers (LCs, or Clients) know their own flow and tend to keep its full size secret from each individual LP. At the same time, most trading (off the main ECNs) happens on a “disclosed” basis, where the Liquidity Provider knows the Client’s identity before trading.
Therefore, LPs are forced to “act” on imperfect information as they observe client requests, and this is where “bad equilibria”, in which both LP and LC are worse off, can arise. LPs observing only partial flow may draw the wrong conclusions about customer (LC) flow quality. An LC may be utilizing its stack of LPs in a sub-optimal way. The introduction of commonly accepted and credible flow quality metrics, which can be exchanged, shared and collaborated on, can eliminate those “bad equilibria”, or sub-optimal trading arrangements. Figure 1 presents two possible extremes for market information structure (edges are information/data/analytics, not trading flows).
At the moment, a typical approach to liquidity management involves purely independent decision making:
- If an LP does not like certain characteristics of an LC’s flow, it simply demands that the LC “fix” it. Having only an imperfect view of the client’s flow, the LP normally considers “fixing” the flow to be the client’s job.
- An LC normally adds LPs to increase price competition without looking into the “side effects”. Moreover, an LC considers it good market practice to tell LPs as little as possible about the other LPs it trades with. The assumption is that keeping LPs unaware of others in the stack decreases the information asymmetry (the gap in market knowledge) between LP and LC.
While this is a convenient setup, it is sometimes inefficient, and the outcome of this decentralized approach can be far from perfect. This article addresses typical situations where collaboration (information exchange and jointly designed liquidity experiments) can improve, sometimes significantly, the execution outcomes.
Market Structure & Terminology
We consider FX traders connecting to several Liquidity Providers (LPs, together referred to as the LP stack) using dedicated software (an FX aggregator) and routing to the “best” price (normally the lowest offer/highest bid) by taking (rather than making) liquidity. In this scenario the information sets are as follows:
- A client does not know how his LP handles his flow. He can produce a few simple metrics to guess how his LP values his flow, but the majority of clients do not have the technical ability (database infrastructure) to do this.
- LPs in the stack do not know about each other or how they are compared. The most common comparison is lowest offer/highest bid routing, but sophisticated players use other methods that adjust for reject probability and market impact.
This market structure subjects the participating LPs to the “winner’s curse”: that is, showing a wrong or overly aggressive price and regretting, ex post, having won the price competition.
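The difference between naive best-price routing and a reject-adjusted comparison can be sketched as follows. This is a minimal illustration, not a production router: the LP names, quotes, reject probabilities and retry costs are all made-up assumptions.

```python
# Sketch: ranking LP quotes for a buy order. The plain "best price" rule
# picks the lowest offer; the reject-adjusted rule penalizes each quote by
# the expected cost of a rejection. All numbers below are illustrative.
from dataclasses import dataclass

@dataclass
class Quote:
    lp: str
    offer: float        # offered price
    reject_prob: float  # historical reject probability (assumed known)
    retry_cost: float   # expected extra cost of re-trying after a reject

def best_price(quotes):
    """Naive routing: lowest offer wins."""
    return min(quotes, key=lambda q: q.offer)

def reject_adjusted(quotes):
    """Penalize each offer by the expected cost of rejection."""
    return min(quotes, key=lambda q: q.offer + q.reject_prob * q.retry_cost)

quotes = [
    Quote("LP1", 1.1000, 0.30, 0.0005),
    Quote("LP2", 1.1001, 0.02, 0.0005),
]
# LP1 shows the best raw price, but adjusting for its 30% reject
# probability makes LP2 the better expected choice.
```

The same idea extends to penalizing quotes for estimated market impact; only the penalty term in the key function changes.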
We also refer to an LP as “internalizing” the flow if it does not immediately hedge it into the market; otherwise we refer to the LP as “externalizing”. Naturally, we would expect externalizers to have higher market impact than internalizers. We define trade or order market impact as the mid-price move following the trade or order placement event. Note that the externalizer/internalizer split is not “black and white” – there is a whole range of degrees of internalization. For example, an LP employing an aggressive risk management skew with low risk limits will be “more externalizing” (or equivalently “less internalizing”) than one with high risk limits. Finally, the same LP can internalize clients with low short-term alpha and externalize clients with high short-term alpha (sometimes referred to as toxic) as part of its risk management problem.
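The market impact definition above (post-event mid-price move) can be written down in a few lines. This is a minimal sketch under assumed conventions: the sign and basis-point scaling are illustrative choices, not a standard mandated anywhere in the text.

```python
# Minimal sketch of the market-impact metric used here: the signed
# mid-price move over some horizon after the trade event.

def market_impact(mid_at_trade, mid_after, side):
    """Signed mid move post trade, in basis points of the trade-time mid.

    side: +1 for a buy, -1 for a sell.
    Positive values mean the market moved against the liquidity taker
    after the trade (adverse impact).
    """
    return side * (mid_after - mid_at_trade) / mid_at_trade * 1e4

# A buy followed by a 0.5 bps mid rise has +0.5 bps impact
# (roughly 50 $/m on a USD-quoted pair, since 1 bp = 100 $/m).
impact = market_impact(1.100000, 1.100055, +1)
```

In practice one would average this quantity over many trades and several horizons (e.g. 100ms, 1s, 1min) to separate mechanical price pressure from short-term alpha.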
The first execution workflow where collaboration is important is “sweep” execution, which is quite common in FX. A sweep is a series of near-simultaneous orders to several LPs. For example, a client who wants to buy 50mil EUR sends 5 orders of 10mil each to 5 different LPs. The client’s motivation is simple: saving spread costs.
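Mechanically, a sweep is just an even split of a parent order across the stack, with the child orders fired near-simultaneously. A minimal sketch, where the LP names and the even-split rule are illustrative assumptions:

```python
# Sketch of sweep construction: split a parent order evenly across LPs.
# Real aggregators may weight children by quoted size or historical fill
# ratios; the even split here matches the 5 x 10mil example in the text.

def build_sweep(total_size, lps):
    """Return (lp, child_size) pairs for an even split of total_size."""
    child = total_size / len(lps)
    return [(lp, child) for lp in lps]

orders = build_sweep(50_000_000, ["LP1", "LP2", "LP3", "LP4", "LP5"])
# Five child orders of 10 mil each, sent near-simultaneously.
```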
Consider the example in Figure 3, top panel. An LC is trading with five LPs. Some LPs are faster than others for a variety of reasons (old technology, different connections – including the client connecting via different pipes to different LPs!).
The LC sends 5 requests to buy to LP1–LP5. LP1, LP2 and LP3 return fills. The price action following LP1’s fill is due to LP1 “externalizing” the flow, that is, immediately executing into the market. LP2 and LP3 are unhappy with the client flow, as they find it toxic. They have to honour the initial quotes they gave the LC, so the P&L of LP2 and LP3 is immediately reduced by approximately 50 and 70 $/m (0.5 and 0.7 bps) respectively. LP4 and LP5 reject the LC’s request to buy because they observe a strong adverse price action.
When the process above (the sweep) repeats several times, LP2 and LP3 notice that the client flow is extremely toxic and start to “externalize” it just like LP1. The client flow then becomes even more toxic. Next, LPs increase their spreads for this client as a defensive reaction. This “vicious circle” dynamic is sometimes referred to as a “prisoner’s dilemma” in liquidity optimization articles. It is a (small) abuse of game theory terminology, but it is a colourful name. What it means is that rational actors (like LPs), in the absence of coordination (say, common analytics in this case), deviate from a strategy which is good for all of them and pursue a strategy which is sub-optimal for all of them.
Assume that externalizing LPs are considerably worse off than in the situation where they all internalize. This is easy to believe, as the externalizing scenario is a race to zero (they fight in the same direction for the same liquidity). Hence LP2–LP5 are worse off. However, our analogy with the classic prisoner’s dilemma is not complete: the bad actor (LP1) cannot internalize and can be eliminated (see below). Hence this prisoner’s dilemma can be solved with some relatively simple analytics.
LC Liquidity Experiment
Normally LCs do not have the tools to do flow optimization. And individual LP reports would lead to precisely the wrong outcome! LP1 will look the best if measured purely on fill ratio and response time. LP4 and LP5 will look bad on the execution quality report: they have a high rejection ratio, and they will not be happy with the client flow either. Even on short-term market impact LP1 does not look much worse than LP2 and LP3. The most likely outcome is that the LC drops LP4 and LP5 from the stack. This will not help. If anything, it would make the situation worse for everyone including the LC itself, as a smaller LP stack means each remaining LP takes on more size and hence spreads are likely to increase.
How can the problem be identified and fixed? Experimentation! If FX traders always trade against the same stack of LPs (the same pool), it would be very difficult, if not impossible, to identify LP1 as the problem. The FX trader may be under the impression that he is simply observing his own market impact. Subsequent increases in spread (a defensive reaction of the LPs) would then just be accepted by the client as a necessary evil.
A simple experiment design, however, is to create 5 pools, each with one LP excluded at a time (see Figure 4), and then compare market impact across pools. If the market impact is substantially lower on the removal of a particular LP, then this LP is the culprit. The observed price dynamics may look somewhat like the bottom panel of Figure 3.
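The leave-one-out analysis can be sketched in a few lines. The impact numbers below are fabricated purely for illustration; in reality they would come from the client's own trade records while routing to each pool.

```python
# Sketch of the leave-one-out experiment: one pool per excluded LP,
# route flow to each pool for a while, then compare average post-trade
# market impact. Sample values are made up for illustration.
from statistics import mean

lps = ["LP1", "LP2", "LP3", "LP4", "LP5"]

# One pool per excluded LP.
pools = {excl: [lp for lp in lps if lp != excl] for excl in lps}

# Hypothetical per-trade impact samples (bps) observed while trading
# with each pool, keyed by the LP that was excluded.
impact_samples = {
    "LP1": [0.1, 0.2, 0.1],   # pool without LP1: impact collapses
    "LP2": [0.8, 0.7, 0.9],
    "LP3": [0.7, 0.8, 0.8],
    "LP4": [0.8, 0.9, 0.7],
    "LP5": [0.9, 0.7, 0.8],
}

def likely_culprit(samples):
    """The LP whose removal gives the lowest average impact."""
    return min(samples, key=lambda excl: mean(samples[excl]))

# Removing LP1 drops impact sharply, flagging LP1 as the culprit.
culprit = likely_culprit(impact_samples)
```

A real study would also need enough trades per pool for the impact differences to be statistically significant, which is exactly why the record-keeping system discussed next matters.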
However, there is no point in doing any experimentation without access to a system which records all your actions and produces measurable outcomes: experimentation is normally expensive and only makes sense if it uncovers actionable facts.
What does this have to do with collaborative analytics? If the stack of LPs has already gone down the “vicious circle” path, the chances are that the experimentation will not yield the desired result. This is because all the LPs already treat the LC’s flow as toxic and hence try to get rid of it as soon as possible (externalize). No clever experimental design would help in that state.
In this case the challenge for the LC is to get into a dialogue with its LPs (or a subset of them), present them with the statistics and ask for a “hard reset” of their internal stats. Then the LC can experiment along the lines described above.
Your flow is not good, fix it!
It is not uncommon for a client (especially one of the high-volume but non-HFT variety) to be confronted by an LP with the message “do something about your flow”. The client consults other LPs, and they are fine with the flow. The client is then faced with a choice: stop trading with the LP in question, or indeed do something. But what?
Despite the focus being squarely on the client, it is not always the client’s fault. There can be multiple reasons why a client’s flow can (slowly or suddenly) deteriorate:
- (LP fault) Other LPs in the stack may indeed out-compete this LP in this client’s stack (see Figure 5 for an example). For example, the LP shows a low offer when every other LP in the stack “knows” the price is going up. The client buys, and the price does go up. Interestingly, with time the client may notice this and use this LP as a signal (which will further exacerbate the problem but, ironically, can result in this LP winning even more toxic flow).
- (Client fault) The client aggregates this LP with another LP who uses last look aggressively. This allows the other LP to show aggressive spreads, leaving the original LP mostly with the trade re-tries that follow rejects. The picture will look similar to Figure 5, but the origin is very different. The client can fix this on its own by changing the composition of its stack.
- (Client, LP or ECN fault) This LP’s connection is slow (for example, via an ECN) while other LPs’ connections are fast. This can be seen by the client observing the quote time of this LP: it is the old quotes which are normally rejected.
If the client’s motivation is to offload risk in an efficient manner, he would want to try to keep the LP. However, in many cases only collaborative analytics will allow the two sides to solve the problem. The first challenge is normally to convince the LP to listen; to this end the existence of a mutually accepted calculating agent would help enormously. Next, if the client can demonstrate his version of Figure 3 to this LP, it can prompt the LP to perform an internal investigation.
Broker Algos: Information Leakage and User Collaboration
1. Performance Measurement
FX algos have become an extremely popular way to execute trades. But what is the right algo for the job? A natural first step is to measure historical performance and project it forward. However, a closer look suggests we need quite a lot of observations to achieve a reasonable degree of precision.
Consider a 30-minute algo, 10% (annualized) currency volatility and a simple TWAP-style execution (“TWAP-style” refers to the execution process, not the benchmark). A simple calculation suggests that the TWAP price volatility over 30 minutes would be around 5 basis points. We define the precision of the Implementation Shortfall (IS) performance measure as the interval such that we have 95% confidence the true value lies inside it.
The dependence is shown in the figure below. For example, for 50 algo runs the precision is about 150 $/m, which means the algo IS performance is X +/- 150 $/m. If we are trying to save 100 $/m, this precision is not very good.
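The back-of-envelope numbers above can be reproduced under a few stated assumptions: a 24-hour FX trading year of roughly 252 days, the variance of a Brownian average over [0, T] being T/3 (which is what makes TWAP volatility lower than arrival-price volatility), and 1 bp = 100 $/m on a USD-quoted pair. All of these are modelling assumptions, not facts from the text.

```python
# Sketch: TWAP price volatility over 30 minutes and the 95% CI
# half-width ("precision") of the mean IS over n algo runs.
import math

ann_vol = 0.10
minutes_per_year = 252 * 24 * 60          # assumed 24h FX trading year
T = 30 / minutes_per_year                 # 30 minutes as a year fraction

arrival_vol_bps = ann_vol * math.sqrt(T) * 1e4   # endpoint vol, ~9 bps
twap_vol_bps = arrival_vol_bps / math.sqrt(3)    # Brownian average, ~5 bps

def is_precision_usd_per_m(n_runs, vol_bps=twap_vol_bps):
    """95% CI half-width of mean IS over n_runs, in $/m (1 bp = 100 $/m)."""
    return 1.96 * vol_bps * 100 / math.sqrt(n_runs)

# For 50 runs this gives roughly 150 $/m, matching the figure.
precision_50 = is_precision_usd_per_m(50)
```

The same function shows why pooling runs helps: quadrupling the number of runs (e.g. by aggregating across users) halves the half-width.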
How can we improve things? Algo users are unlikely to share their individual runs with other users, as this can reveal too much about their business. At the same time, aggregate performance statistics (such as the performance of Algo1 over the last month in currency pair X over N runs) are acceptable to share. Clearly, credibility is the main problem: if self-reporting is allowed, a bias may be introduced (everyone is above average). Hence a central performance repository is the most logical way to solve the aggregation problem. Aggregating algo runs can move individual users from the red to the blue areas in Figure 6 below:
2. Measuring information leakage
There is a trend in FX algo execution for the algo user to analyze the underlying liquidity pools themselves. Effectively, the end user (LC) uses the bank’s order placement infrastructure (for a fee) but tries to make sure that the underlying liquidity pool is not “leaking”. That is, there is no LP in the underlying liquidity pool (see Figure 7, all in blue) that either uses the algo flow as a signal or inadvertently signals (via skewing, for example) the intentions of the algo flow to other market participants.
The algo user and the algo vendor (the Bank/algo infrastructure provider in Figure 7) would have to communicate around common metrics to reach an agreement on underlying liquidity management and how it can be operated alongside the bank’s order placement strategies. Hence, collaborative analytics become the key for the algo user to achieve his objective.
Conclusion
This article demonstrates a number of execution workflows in FX (most of them applicable to any dealer market) where joint (collaborative) analytics are the best way to establish the optimal engagement between LP and LC. Dealers (Liquidity Providers) tend to produce their own reports of rejection costs, grading their own performance. However, the interplay between dealers is a crucial part of the trading dynamics, and ignoring it can lead to wrong conclusions.
The article also argues that experimentation is crucial. Unfortunately, it is down to FX traders to develop the technology and expertise to do the experimentation. It seems natural, though, that while decision making should remain with LPs and LCs, a common trading infrastructure would be beneficial. This would be consistent with the right-hand market structure diagram in Figure 1.
Who could host such an infrastructure? We have already seen that individual LPs cannot. An ECN observes the entire flow of its LCs and hence is a potential candidate. In theory this might work, especially if an LC trades exclusively with one ECN. In practice it is less obvious, for the following reasons:
- ECNs have their own workflows which need to be analyzed. So-called “missed quotes” can be “no man’s land”, in the sense that the LP does not see them and so does not accept the blame, while the client still incurs a real cost.
- The analysis involves a considerable amount of historical data manipulation, which is normally not a core area of expertise for an ECN.
- An ECN is normally in competition with other ECNs and hence has a definite interest in getting a “good grade”. Analysis which is too “forensic” is likely to backfire, as other ECNs may respond by reporting that their numbers are better.
This task should therefore be left to an independent market utility. The important questions are whether the cost of running this utility falls as the market benefits it brings increase, and who should bear those costs.