Jamie Rose

FX risk has not been automated away. It has been re-encoded

April 2026 in Risk Management

Jamie Rose, founder of Isomiq, outlines why the FX industry has not reduced risk through automation. It has relocated it.

Risk no longer sits primarily in positions, limits or trader judgement. It sits inside system behaviour. Pricing logic, liquidity aggregation rules, skewing models, internalisation thresholds, execution routing: these are not neutral mechanisms. They are expressions of risk appetite, encoded into deterministic infrastructure.

A deterministic system executes whatever assumptions it is built on with consistency and speed. That is often mistaken for robustness. It is not. A flawed assumption, once embedded, is no longer an occasional error. It becomes a repeatable outcome. Efficiency and correctness are not the same thing. A system can be precise, stable and wrong at scale.

This is where modern FX risk actually lives.

Historically, risk was easier to locate. A trader made a judgement. A dealer adjusted a price. A risk manager intervened. The process was inconsistent, but it was visible and it was owned. Today, those decisions are made upstream, in systems treated as infrastructure rather than as active expressions of behaviour. Pricing engines, execution stacks and aggregation models are assumed to be technical implementations. They are not. They are risk-bearing constructs.

Every system contains embedded decisions about how the firm behaves when conditions change, when liquidity fragments, when flow becomes one-sided, when inventory accumulates, when execution quality deteriorates. Those decisions are not made in the moment. They are pre-determined in code. When they are not explicitly understood and governed, they become a hidden layer of exposure that no limit framework captures.
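
To make that concrete, here is a deliberately minimal sketch. Every name and threshold below is hypothetical, but the shape is familiar: the firm's appetite for inventory risk is not a judgement made in the moment. It is a pair of constants committed months earlier.

```python
# Illustrative sketch only. Names and thresholds are invented; the point
# is that risk appetite lives in these constants, not in a limit report.

MAX_INVENTORY_USD = 25_000_000   # encoded appetite for inventory risk
SKEW_PER_MILLION = 0.02          # pips of quote skew per $1m of inventory

def quote_adjustment(inventory_usd: float) -> tuple[float, bool]:
    """Return (skew_in_pips, internalise?) for the current inventory.

    The decision is made here, in advance, for every future market
    state. If conditions change, this logic does not.
    """
    skew = (inventory_usd / 1_000_000) * SKEW_PER_MILLION
    internalise = abs(inventory_usd) < MAX_INVENTORY_USD
    return skew, internalise
```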

This is compounded by a structural problem inside most institutions. Pricing logic sits with e-FX quants. Execution behaviour sits with technology. Risk oversight sits with a separate control function. No single group owns the behavioural outcome of the system as a whole. Risk is assessed in components. It is expressed through interaction. That gap is where exposure accumulates undetected.

Visibility makes this harder to see, not easier.

Modern platforms provide extensive monitoring. Fills, rejections, spread movements and inventory positions are all observable in real time. That creates a false sense of control. Firms measure outcomes without interrogating the mechanisms that produce them. The system appears controlled because it is observable, not because it is coherent.

The gap only becomes visible under stress.

When market conditions deviate from embedded assumptions, behaviour changes abruptly. Liquidity aggregation logic amplifies rather than dampens volatility. Internalisation frameworks recycle risk that should have been externalised. Pricing models stream with parameters calibrated to conditions that no longer exist. Execution routing prioritises speed over certainty at precisely the wrong moment. None of these registers in standard monitoring as a failure. Each appears as the system functioning as designed.

That is the point. The system is not failing. It is revealing its assumptions.
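
A hedged illustration of that distinction, with invented figures: the checks most desks run look at outputs, while the assumptions the behaviour was built on sit unexamined.

```python
# Hypothetical illustration: outcome monitoring versus assumption monitoring.
# All numbers are invented for the sketch.

calibration_vol = 0.06   # annualised volatility the pricing model was calibrated on
realised_vol = 0.14      # volatility in the current regime

def dashboard_ok(fill_rate: float, avg_spread_pips: float) -> bool:
    """What most monitoring watches: the outputs."""
    return fill_rate > 0.95 and avg_spread_pips < 1.0

def assumptions_ok() -> bool:
    """What is rarely watched: whether the inputs the behaviour
    was built on still describe the market."""
    return abs(realised_vol - calibration_vol) / calibration_vol < 0.5

print(dashboard_ok(0.97, 0.6))  # True: the system appears to work as designed
print(assumptions_ok())         # False: the exposure no limit framework captures
```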

The AI narrative does not resolve this. AI improves pattern recognition, classification and processing speed. It enhances the ability to observe and react. It does not determine whether the underlying behavioural framework is coherent. It can optimise within a flawed structure without identifying the flaw. Deploying AI into a poorly specified execution environment does not reduce risk. It accelerates it.

The problem is not the absence of intelligence. It is the absence of explicit design.

Most FX environments rely on implied behaviour rather than specified behaviour. Terms like “stable pricing” or “resilient execution” are used without defining what they mean when conditions deteriorate. The sequencing of decisions is rarely formalised: which signals override others, when internalisation gives way to external execution, how behaviour should shift across market regimes. These questions are distributed across systems rather than resolved as a coherent layer. This is not a technology gap. It is a governance failure.
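
What specified behaviour might look like, reduced to a sketch. The regimes, signals and actions are invented; the point is that precedence and transitions are written down in one reviewable place rather than scattered across the pricing engine, the execution stack and the control function.

```python
# Minimal sketch of specified, rather than implied, behaviour.
# All regime names, signals and actions are hypothetical.

from enum import Enum

class Regime(Enum):
    NORMAL = "normal"
    STRESSED = "stressed"
    DISLOCATED = "dislocated"

# Explicit signal precedence: earlier entries override later ones.
SIGNAL_PRECEDENCE = ["venue_health", "inventory_limit", "spread_model"]

# Explicit behaviour per regime: when internalisation gives way to
# external execution, and how routing priorities shift.
BEHAVIOUR = {
    Regime.NORMAL:     {"internalise": True,  "routing": "best_price"},
    Regime.STRESSED:   {"internalise": True,  "routing": "certainty"},
    Regime.DISLOCATED: {"internalise": False, "routing": "certainty"},
}

def execution_policy(regime: Regime) -> dict:
    """Resolve behaviour from the specification, not from code paths
    distributed across separate systems."""
    return BEHAVIOUR[regime]

print(execution_policy(Regime.STRESSED))
```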

Testing does not compensate for that failure. Backtesting and historical replay validate performance against known conditions. They do not expose how systems behave when multiple assumptions break simultaneously or when feedback effects emerge across components. A model that appears robust in isolation behaves differently once connected to live liquidity, real client flow and operational constraints.
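
Historical replay walks through conditions one at a time. A stress grid, however crude, at least exercises the combinations. A toy example, with an invented loss interaction, shows why joint breaks matter:

```python
# Toy sketch only: the loss function is invented to show that assumption
# breaks interact. Historical replay tests the (1.0, 1.0, 1.0) row; the
# damaging rows are the ones where several assumptions fail together.

import itertools

assumption_shocks = {
    "liquidity_depth": [1.0, 0.3],   # 1.0 = as calibrated, 0.3 = fragmented
    "fill_ratio":      [1.0, 0.5],   # proportion of expected fills achieved
    "latency_factor":  [1.0, 5.0],   # multiple of calibrated response time
}

def toy_loss(depth: float, fill: float, latency: float) -> float:
    """Invented interaction: breaks compound multiplicatively."""
    return (1.0 / depth) * (1.0 / fill) * latency

for depth, fill, latency in itertools.product(*assumption_shocks.values()):
    print(f"depth={depth} fill={fill} latency={latency} "
          f"-> loss x{toy_loss(depth, fill, latency):.1f}")
```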

The practical distinction is simple: firms that define system behaviour explicitly are defining their risk in advance. Firms that do not are discovering it in real time, under conditions that leave little room to respond.

The industry has made FX risk consistent, scalable and fast. That has created a more subtle problem. Consistency is now mistaken for safety. It is not. A deterministic system can be efficient, elegant and wrong all at once, and when it is wrong, it is wrong repeatedly, at speed, before anyone has located the source.

The firms that manage risk effectively will not be distinguished by their tooling. They will be distinguished by whether they understand how their systems behave, why they behave that way, and what that implies when markets stop cooperating.

That is where FX risk now resides. For most firms, it remains poorly mapped.