Regulation: drawing opportunity from necessity
The past few years have seen a raft of legislation, such as PSD2 and GDPR, that directly affects the banking industry. A significant proportion of this regulation imposes requirements specific to data; one example is the Basel Committee's BCBS 239 standard, which requires banks to meet defined standards for risk data aggregation. Compliance with this (and other) data-related regulation is clearly not optional. However, if undertaken in the right way, it is also possible to derive major business benefits at the same time. This applies across the banking enterprise in general, but is particularly relevant in FX businesses, where extreme cost pressure is now the norm and profitability is depressed – both of which are driving a need for automation.
The critical point is how data is managed and stored. An ideal implementation is one where diverse data classes and formats become completely clean, consistent, normalised and enriched. In addition, this capability has to be channel-agnostic and apply across electronic, voice, direct, prime broking, retail and corporate activities, among others. Apart from achieving regulatory compliance, this opens the door to converting clean big data into smart data: in this case, smaller information subsets that are both valuable and actionable. This data will then be accessible from multiple perspectives, including driving machine learning, which can enable advanced algorithmic strategies.
The scale challenge
The challenge is to intuitively combine and standardise many different data sets to provide more information with which to make decisions, but this is very difficult to achieve on a large scale and automate in a global bank or major buy-side firm. When you extrapolate techniques used on a smaller scale to data sets that are terabytes or larger in size, many applications begin to fail. Even trivial operations such as opening, closing and searching files work perfectly well at spreadsheet scale, but fail outright when applied to a large data set. The average size of data sets has grown by several orders of magnitude over the last decade, but expertise in handling large data sets is still scarce in financial markets. Platforms like Mosaic harness the power of data science techniques developed in the lab and hone them so they are able to operate effectively on an industrial scale.
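To make the scaling point concrete, the difference between spreadsheet-style and industrial-scale processing often comes down to streaming: reading records one at a time rather than loading an entire file into memory. The sketch below is a generic illustration of this pattern, not a description of any particular platform, and the field name is hypothetical:

```python
import csv
import io

def stream_totals(lines, amount_field="notional"):
    """Aggregate a large CSV one row at a time instead of loading it whole.

    `lines` can be any iterable of text lines, so the same code works on a
    small in-memory sample or a lazily opened multi-terabyte file.
    """
    total, count = 0.0, 0
    for row in csv.DictReader(lines):
        total += float(row[amount_field])
        count += 1
    return total, count

# Tiny illustrative sample; a real trade file would have far more columns.
sample = io.StringIO("notional\n100.0\n250.5\n75.25\n")
total, count = stream_totals(sample)
```

Because memory use stays constant regardless of file size, this style of processing does not hit the wall that in-memory tools do when data sets grow from megabytes to terabytes.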
More than human intelligence
A smart data solution that makes available a large repository of clean consistent transaction data represents a major opportunity for artificial as well as human intelligence. While in many instances AI and ML have been seriously over-hyped, in the case of an FX business the immediate opportunities are genuine.
An area seeing significant and meaningful progress is that of anomaly detection. By using cutting-edge data science techniques originally developed in academia, firms can feed algorithms with massive data sets and have them spot regularities and patterns in real time. This means they can also instantly spot any anomalies in the data, which provide very useful information in a financial markets context.
This form of analytics – combined with real-time data processing – provides a new, nuanced and highly sensitive feed of information to power algorithmic trading strategies. Anomaly detection is already widely used in retail finance, for example by credit card companies to detect fraud or as part of the credit scoring process when a consumer applies for a loan. The challenge in capital markets is that, unlike credit card fraud prevention where the data is all in the same format, financial institutions have to analyse a vast amount of completely unstandardised data.
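As a concrete illustration of the underlying idea, one of the simplest anomaly-detection baselines is a rolling z-score: flag any observation that sits too many standard deviations from a recent moving average. This is a generic textbook technique rather than the method of any specific vendor, and the window and threshold values below are arbitrary:

```python
import math
from collections import deque

def zscore_anomalies(prices, window=20, threshold=3.0):
    """Return indices of points deviating more than `threshold` standard
    deviations from a rolling mean of the previous `window` observations."""
    buf = deque(maxlen=window)
    anomalies = []
    for i, p in enumerate(prices):
        if len(buf) == window:
            mean = sum(buf) / window
            var = sum((x - mean) ** 2 for x in buf) / window
            std = math.sqrt(var)
            if std > 0 and abs(p - mean) / std > threshold:
                anomalies.append(i)
        buf.append(p)
    return anomalies

# A steady, gently alternating series with one sudden spike at the end.
ticks = [1.10, 1.1005] * 15 + [1.25]
flagged = zscore_anomalies(ticks)
```

Production systems replace the simple moving statistics with far richer models, but the principle – learn the regular pattern, then surface deviations from it – is the same.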
In a large global bank, for example, sales and trading desks typically store their own transaction data independently of each other and within separate databases.
In some cases, data capture can be as rudimentary as a simple Excel spreadsheet and thus locating the data within the bank can be one of the most substantial hurdles. Once it has been located, it is typically in wildly different formats, making it extremely difficult to derive any analytical value.
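One common way to tackle this fragmentation is to map each desk's raw records through a per-desk field mapping into a single canonical schema, unifying value conventions along the way. The sketch below is a minimal illustration; the desk layouts, field names and side codes are all hypothetical:

```python
# Unify the different encodings of trade direction used across desks.
SIDE_CODES = {"B": "buy", "S": "sell", "buy": "buy", "sell": "sell"}

def normalise_trade(record, mapping):
    """Translate one desk's raw record into the canonical schema."""
    out = {canonical: record.get(source) for canonical, source in mapping.items()}
    out["pair"] = str(out["pair"]).replace("/", "")   # EUR/USD -> EURUSD
    out["notional"] = float(out["notional"])          # strings -> numbers
    out["side"] = SIDE_CODES[out["side"]]             # unify side encodings
    return out

# Two desks recording the same trade in incompatible formats.
desk_a = {"ccy_pair": "EURUSD", "amt": "1000000", "side": "B"}
desk_b = {"symbol": "EUR/USD", "notional": 1000000.0, "direction": "buy"}

trade_a = normalise_trade(desk_a, {"pair": "ccy_pair", "notional": "amt", "side": "side"})
trade_b = normalise_trade(desk_b, {"pair": "symbol", "notional": "notional", "side": "direction"})
```

Once every desk's output lands in the same canonical form, the two records above become identical, and downstream analytics such as anomaly detection can treat the whole bank's transaction flow as one consistent data set.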
The FX market will only become more and more data oriented over the coming years. A major driver will continue to be the huge push to automate the full trade lifecycle, which requires a growing amount of data that must be aggregated, standardised, cleansed and analysed effectively. To this end, banks and buy-side firms have been hiring data scientists at an unprecedented rate, having realised that data-driven decision making is the future of financial markets.
Decision systems used in the trading process will become fully automated and data-driven: instead of a human manually estimating parameters, running model simulations and then passing the results to a trader, the entire workflow will run end to end without intervention.
We are on the cusp of an exciting period in capital markets. The explosion in big data has placed more information than ever before at the fingertips of financial institutions, but until recently they have lacked the ability to harness its power effectively. Thanks to the crossover of expertise and technology between the academic world and the business world, the tide is now beginning to turn.