The Fundamental Review of the Trading Book (FRTB) represents a pivotal regulatory effort to strengthen market risk management and the capital framework governing the trading activities of banks. While some jurisdictions are already in advanced stages of implementing FRTB, others have deferred or recalibrated their deadlines.
Under FRTB, banks must choose between the standardized approach (SA) and the internal models approach (IMA). The choice of approach, together with variations in local rules, means banks will encounter markedly different capital and operational consequences across regions. This diversity underscores the importance of robust market risk management practices in navigating the evolving regulatory environment.
Key data challenges and solutions
Data lineage and auditability
Supervisors are increasingly demanding transparent traceability. Under the SA, institutions must be able to trace each capital impact back to the originating trade.
Yet many organizations rely on spreadsheet adjustments and manual mappings, leaving lineage documentation fragmented. Banks struggle to evidence transformation logic and override approvals.
Solutions
Lineage metadata store: An enterprise lineage repository catalogues every data asset, transformation and downstream consumption with technical and business metadata.
Immutable audit log: All market data overrides, model parameter changes and sensitivity adjustments are written to an append-only audit ledger (see the sketch after this list).
Point-in-time replay: Historical market data snapshots are retained in immutable storage, so capital figures can be fully reproduced for any business date using the versioned data state and model version of that date.
Regulatory report explainer: A self-service tool enabling risk officers and regulators to drill down from a capital line item through to the constituent sensitivities, market data observations and the respective instrument attributes.
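To make the append-only audit ledger concrete, here is a minimal Python sketch (class and field names are illustrative, not tied to any specific product): each entry embeds the hash of its predecessor, so any retroactive edit or deletion breaks the chain and is detectable on verification.

```python
import hashlib
import json
import time

class AuditLedger:
    """Illustrative append-only ledger: each entry embeds the hash of the
    previous entry, so tampering with history is detectable."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis hash

    def append(self, event_type: str, payload: dict, approver: str) -> dict:
        entry = {
            "timestamp": time.time(),
            "event_type": event_type,    # e.g., "MARKET_DATA_OVERRIDE"
            "payload": payload,          # what changed: old/new values, scope
            "approver": approver,
            "prev_hash": self._last_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        self._last_hash = entry["hash"]
        return entry

    def verify(self) -> bool:
        """Recompute the full chain; an edited or deleted entry breaks it."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev_hash"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

ledger = AuditLedger()
ledger.append("MARKET_DATA_OVERRIDE",
              {"risk_factor": "EUR_SWAP_10Y", "old": 2.31, "new": 2.29},
              approver="md.supervisor")
assert ledger.verify()
```

In production this role is typically played by write-once storage or a ledger database; the point is that override history is provable, not merely logged.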
Sensitivity data generation and consistency
Institutions must generate delta, vega and curvature sensitivities consistently across asset classes. Large banking organizations typically operate multiple pricing and risk engines spanning asset classes and business lines. Vendor platforms such as Murex and Calypso often coexist alongside proprietary pricing libraries developed by quantitative teams.
Each platform may apply a different yield curve construction logic, use varying volatility surface interpolation methods, and implement unique shock sizes or calibration conventions.
Banks struggle with a lack of standardized shock definitions: Sensitivities must be calculated using prescribed risk factor shocks (e.g., a 1 basis point (bp) or 1% shift in rates, parallel versus bucketed shocks). In practice, one engine may compute interest rate deltas using a 1 bp parallel shift per curve while another uses bucketed key-rate shifts of 5 bps, yielding results that cannot be compared directly.
To avoid timing mismatches, sensitivities must be calculated using a consistent market data snapshot (or at least a tightly controlled window) across all desks and asset classes.
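As an illustration of why this matters, the sketch below (with hypothetical engine outputs and tenors) rescales sensitivities computed under different shock conventions onto a common per-1 bp basis, which is only valid under approximate linearity of the shock response:

```python
def normalize_to_1bp(raw_sensitivity: float, shock_size_bp: float) -> float:
    """Rescale a finite-difference delta to a per-1 bp convention.
    Valid only while the P&L response is approximately linear in the shock."""
    return raw_sensitivity / shock_size_bp

# Engine A reports a 1 bp parallel-shift delta: already per-1 bp.
# Engine B reports 5 bp bucketed key-rate deltas: rescale each bucket;
# a parallel-equivalent delta is then approximately the sum of the buckets.
bucketed_5bp = {"2Y": 6_000.0, "5Y": 14_500.0, "10Y": 24_000.0}  # illustrative
per_1bp = {tenor: normalize_to_1bp(v, 5.0) for tenor, v in bucketed_5bp.items()}
parallel_equivalent = sum(per_1bp.values())  # compare against Engine A's figure
```

For convex books or large shocks the linearity assumption fails, which is precisely why standardizing shock definitions at the source beats after-the-fact rescaling.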
Solutions
Deploy a sensitivity calculation engine (SCE) as a shared source consumed by front-office profit and loss, risk management and regulatory capital pipelines.
Create a golden source of market data from which all departments consume market time-series data.
Implement a sensitivity reconciliation framework with automated daily break detection: Any delta, vega or curvature break greater than 0.5% versus the prior run, or across systems, triggers an automated investigation workflow (see the sketch after this list).
Establish an intelligent control framework: A composite metric of completeness, accuracy and timeliness with respect to each day’s sensitivity calculation process.
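A minimal sketch of the automated break detection described above, assuming sensitivities are keyed by (risk factor, measure) pairs; the 0.5% default tolerance mirrors the threshold mentioned earlier, and the data layout is illustrative:

```python
def detect_breaks(current: dict, reference: dict, tolerance: float = 0.005):
    """Compare a sensitivity run against a reference (the prior day's run or
    another engine) and flag relative breaks above the tolerance.

    current / reference: {(risk_factor, measure): value}, e.g.,
    {("EUR_SWAP_10Y", "delta"): 1_000_000.0}
    """
    breaks = []
    for key, ref_value in reference.items():
        cur_value = current.get(key)
        if cur_value is None:
            breaks.append((key, "MISSING_IN_CURRENT", None))
            continue
        if ref_value == 0:
            continue  # avoid division by zero; zero baselines need a separate rule
        rel_diff = abs(cur_value - ref_value) / abs(ref_value)
        if rel_diff > tolerance:
            breaks.append((key, "THRESHOLD_BREACH", rel_diff))
    return breaks

# A 0.8% delta move versus the prior run exceeds the 0.5% tolerance.
prior = {("EUR_SWAP_10Y", "delta"): 1_000_000.0}
today = {("EUR_SWAP_10Y", "delta"): 1_008_000.0}
for item in detect_breaks(today, prior):
    print("Investigate:", item)  # feeds the investigation workflow
```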
Data intensity, RFET and NMRF
To implement the FRTB IMA, the risk factor eligibility test (RFET) requires banks to prove that every risk factor used in capital models is supported by “real price observations” that satisfy minimum-count and maximum-gap criteria over a 12-month period.
Non-modellable risk factors (NMRFs) attract a separate capital charge computed via stress scenarios on those factors, so poor data coverage directly increases capital and forces governance decisions on when to add proxies, pool data with other banks and/or rely on vendor prices.
These data quality issues emerge during RFET assessment:
Fragmented data sourcing: Bloomberg and Refinitiv data feeds are ingested separately, with no deduplication
No real price observation (RPO) eligibility filter: Indicative broker quotes are mixed with executable prices in the historical database
90-day gap breaches for illiquid tenors
Solutions
RPO eligibility engine: Automated classification of price observations using trade type flags, counterparty affiliation checks, and executable vs indicative tagging.
Cross-source deduplication: Deterministic matching of trades across Bloomberg, Refinitiv and internal data using trade time, notional and CCY pair to eliminate double-counting.
RFET dashboard and alerting: Real-time monitoring of rolling 12-month observation counts per risk factor. Automated NMRF escalation raises an alert when a factor approaches the 24-observation threshold or a 90-day gap (see the sketch after this list).
Optimize data pooling: Evaluate all current vendors’ market data pools at the risk factor level to identify the most appropriate source before connecting to a consortium observation pool.
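The counting logic behind the dashboard is straightforward to sketch. The illustrative function below applies the two criteria referenced above, a minimum observation count and a maximum gap between consecutive observations, to one risk factor's real price observation dates (the Basel framework also permits an alternative 100-observation criterion, omitted here; treatment of the window boundaries is an implementation choice):

```python
from datetime import date, timedelta

def rfet_status(observation_dates: list[date], as_of: date) -> dict:
    """Check one risk factor against the RFET criteria referenced above:
    at least 24 real price observations in the trailing 12 months, with
    no gap between consecutive observations exceeding 90 days."""
    window_start = as_of - timedelta(days=365)
    obs = sorted(d for d in set(observation_dates) if window_start <= d <= as_of)
    gaps = [(b - a).days for a, b in zip(obs, obs[1:])]
    max_gap = max(gaps) if gaps else 365  # zero or one observation fails anyway
    return {
        "observation_count": len(obs),
        "max_gap_days": max_gap,
        "modellable": len(obs) >= 24 and max_gap <= 90,
    }

sparse = [date(2024, 1, 5), date(2024, 6, 20), date(2024, 11, 3)]
print(rfet_status(sparse, as_of=date(2024, 12, 31)))
# {'observation_count': 3, 'max_gap_days': 167, 'modellable': False}
```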
The FRTB also mandates valuation under multiple stress scenarios: For the curvature charge, each trade requires a base valuation, an upward-shocked valuation and a downward-shocked valuation per risk factor.
Institutions encounter data warehouse scalability constraints, batch processing bottlenecks and long overnight risk runs. High-volume derivatives portfolios generate millions of shock results every day.
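For intuition, here is a per-risk-factor sketch of the base/up/down revaluation pattern and the resulting curvature measure, loosely following the SA curvature formula; the pricing function, shock convention and risk weight are placeholders:

```python
from typing import Callable

def curvature_risk(
    price: Callable[[float], float],  # valuation as a function of the risk factor level
    x0: float,                        # current risk factor level
    risk_weight: float,               # prescribed curvature risk weight (placeholder value)
    delta: float,                     # first-order sensitivity, used to strip the delta effect
) -> float:
    """Revalue under an upward and a downward shock, remove the delta
    component, and keep the worse outcome (sign convention per the SA)."""
    v0 = price(x0)
    shock = risk_weight * x0          # relative shock here; rates use absolute shifts
    cvr_up = price(x0 + shock) - v0 - delta * shock
    cvr_down = price(x0 - shock) - v0 + delta * shock
    return -min(cvr_up, cvr_down)

# Toy concave position: zero delta at x0, so the loss is pure curvature.
price = lambda x: 100.0 - 0.5 * (x - 2.0) ** 2
print(curvature_risk(price, x0=2.0, risk_weight=0.1, delta=0.0))  # 0.02
```

Multiply three valuations per risk factor by millions of trades and hundreds of risk factors, and the batch-processing bottleneck above follows directly.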
Solutions
Implement GPU-accelerated batch repricing using clusters: Cut option repricing times through parallelized Monte Carlo and analytical approximation engines.
Introduce a tiered storage architecture: A hot tier for the current quarter, a warm tier for years 1-2 and a cold archive for years 3-5, with automated lifecycle management.
Establish a curvature API: Downstream systems access results via a standardized API rather than direct database queries, decoupling consumption from storage.
Reference data governance
Reference data is fragmented across multiple systems (counterparty masters, market data vendors and static repositories), leading to inconsistencies. Common issues include missing issuer hierarchies, outdated securities, and unapproved manual overrides, which worsen after mergers.
Solutions
Policy: Establish data standards and exception policies with regulatory attestation.
Data quality checks and data quality scores: Conduct quarterly data quality checks, enrich ISIN/LEI information and introduce a data quality score to monitor completeness, accuracy and timeliness (a sketch follows this list).
Deploy a master data management platform: Use it as the golden source for instrument reference data, with upstream feeds from Bloomberg, Refinitiv and SWIFT/DTCC.
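A minimal sketch of such a composite score, with illustrative dimension weights and placeholder validation and staleness rules:

```python
def data_quality_score(records: list[dict], weights: dict = None) -> float:
    """Composite data quality score over reference data records.
    Dimensions, weights and thresholds are illustrative."""
    weights = weights or {"completeness": 0.4, "accuracy": 0.4, "timeliness": 0.2}
    required = ("isin", "lei", "sector", "rating")  # hypothetical attribute set

    def completeness(r):  # share of required attributes populated
        return sum(1 for f in required if r.get(f)) / len(required)

    def accuracy(r):      # placeholder: did the record pass validation rules?
        return 0.0 if r.get("validation_errors") else 1.0

    def timeliness(r):    # placeholder: refreshed within a 30-day SLA?
        return 1.0 if r.get("days_since_refresh", 999) <= 30 else 0.0

    dims = {"completeness": completeness, "accuracy": accuracy, "timeliness": timeliness}
    scores = [sum(w * dims[d](r) for d, w in weights.items()) for r in records]
    return sum(scores) / len(scores) if scores else 0.0
```

Tracked per feed and per attribute over time, a score like this turns "the reference data is bad" into a measurable, ownable metric.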
Risk factor mapping and regulatory bucketing
The FRTB’s SA requires sensitivities to be aggregated by regulatory buckets (e.g., credit quality, sector, geography and tenor), but banks’ internal classifications often differ. Mapping internal data to regulatory buckets is complex due to conflicting vendor ratings, unclear issuer hierarchies and multi-industry issuers.
Solutions
Risk factor taxonomy engine: Map instruments to FRTB SA buckets using ISIN/CUSIP, sector codes and issuer data (see the sketch after this list).
Establish a mapping override workflow: Any bucket assignment can be challenged by the desk with a documented rationale, and all overrides are captured for audit with dual approval.
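A simplified sketch of the resulting mapping waterfall, with a placeholder rule table rather than the actual regulatory bucket definitions: approved overrides take precedence, deterministic rules come next, and anything unmapped is escalated for manual review.

```python
# Placeholder rules keyed on (asset class, sector, credit quality);
# the real FRTB SA bucket tables are defined in the regulation.
BUCKET_RULES = {
    ("CSR", "sovereign", "IG"): 1,
    ("CSR", "financials", "IG"): 2,
    ("CSR", "financials", "HY"): 10,
}

def assign_bucket(instrument: dict, overrides: dict) -> tuple:
    """Return (bucket, provenance) for one instrument.
    overrides: {isin: bucket}, populated only via the dual-approved workflow."""
    isin = instrument["isin"]
    if isin in overrides:
        return overrides[isin], "OVERRIDE"   # desk-challenged, dual-approved
    key = (instrument["asset_class"], instrument["sector"], instrument["credit_quality"])
    if key in BUCKET_RULES:
        return BUCKET_RULES[key], "RULE"
    return None, "UNMAPPED"                  # escalate for manual mapping

bond = {"isin": "XS0000000001", "asset_class": "CSR",
        "sector": "financials", "credit_quality": "IG"}
print(assign_bucket(bond, overrides={}))     # (2, 'RULE')
```

Recording the provenance alongside the bucket is what makes each assignment defensible in an audit.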
Conclusion
FRTB is not simply a capital calculation upgrade, but also a fundamental transformation of how banks govern, store and consume data. The challenges explored in this blog are deeply interconnected: Poor reference data contaminates sensitivity calculations, weak RFET observation processes inflate NMRF counts and the absence of lineage makes curvature results indefensible to regulators.
The banks that succeed are those that architect a unified FRTB data foundation at the outset: A single source of truth for market data, sensitivities and reference data, underpinned by immutable audit trails and purpose-built computing infrastructure.
With CRR III rules now in force and regulatory tolerance for data shortcomings at an all-time low, the cost of inaction now measurably exceeds the cost of transformation.
We bring to the table what most institutions cannot assemble internally: A team that combines deep regulatory knowledge, hands-on data engineering capability and front-to-back FRTB delivery experience across Tier-I banks globally. We do not come with a pre-packaged product and a slide deck. We embed ourselves with your risk, technology and finance teams to diagnose data gaps and build and industrialize the solutions that can create a more resilient system. We quantify the capital benefit of every remediation recommendation to ensure investment decisions are grounded in hard numbers, not regulatory anxiety.
Whether you are at the start of your FRTB data journey, mid-program and off-track, or preparing for a supervisory model review, we can help you move forward decisively and on the right path.
Compliance is the floor. Competitive capital efficiency is the ceiling. We will help you reach both.