The Current Expected Credit Loss (CECL) model, the Financial Accounting Standards Board’s new standard for estimating credit losses on financial instruments, takes effect next year for publicly traded companies and from 2023 for private companies.
The new model governs the recognition and measurement of credit losses for loans and debt securities. Because CECL (and compliance regimes such as BCBS239) requires organisations to measure credit exposures and expected credit losses across the life of a loan, there are concerns that it will demand more data and more careful data modelling.
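To make the life-of-loan requirement concrete, here is a minimal sketch of a lifetime expected credit loss calculated as a discounted sum of per-period expected losses. The probability-of-default (PD), loss-given-default (LGD) and exposure-at-default (EAD) inputs, and the single-rate discounting, are illustrative assumptions rather than anything the standard prescribes.

```python
def lifetime_ecl(pds, lgds, eads, discount_rate):
    """Discounted sum of per-period expected losses over the loan's remaining life."""
    ecl = 0.0
    for period, (pd, lgd, ead) in enumerate(zip(pds, lgds, eads), start=1):
        ecl += (pd * lgd * ead) / ((1 + discount_rate) ** period)
    return ecl

# A hypothetical loan with three annual periods remaining.
print(lifetime_ecl(
    pds=[0.02, 0.03, 0.04],          # marginal default probability per period
    lgds=[0.45, 0.45, 0.45],         # loss severity if default occurs
    eads=[100_000, 80_000, 60_000],  # amortising exposure at default
    discount_rate=0.05,
))
```

The point is not the arithmetic but the data it presumes: per-period, loan-level inputs across the whole remaining life, which is exactly where data quality concerns bite.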
As organisations around the world prepare to implement CECL, many say they may not have enough data – or the right quality data – to effectively comply.
Three years ago, Moody’s research in the US found that banks foresaw challenges with the data available to them for compliance, and an Abrigo Lender Survey earlier this year found that almost one in six respondents were unsure whether they have the quantity and quality of data needed to estimate losses under the CECL standard.
But they are likely wrong.
Most major financial institutions have all the historical and current data they need to forecast losses accurately and with confidence within the CECL model.
The CECL model in effect underpins best practice in reporting, loss forecasting and the production of balance sheets and income statements, and therefore banks – and all businesses – should already have the foundations in place to align with it.
The challenge they may face, however, is unravelling their own data ‘spaghetti junctions’ to consolidate and report on the appropriate data.
To estimate potential losses on the credit they extend to customers, credit providers such as banks rely entirely on their data collation processes and on stores of relevant data that has been qualified (verified, validated, cleansed, integrated and reconciled) – ie, quality, trusted data.
The reality is that such processes are often disjointed, overlapping or duplicated with differing logic, which results in “many truths”. Some data is also traditionally hidden within departments for internal competitive reasons, which further complicates forecasting.
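As a rough sketch of the alternative – a single qualification pipeline whose output every department consumes – the example below strings the steps listed above (verify, validate, cleanse, integrate, reconcile) into one fixed sequence. The stage logic and the field names (account_id, amount) are assumptions for illustration.

```python
def verify(records):
    """Keep only records whose mandatory fields are present."""
    return [r for r in records if r.get("account_id") and r.get("amount") is not None]

def validate(records):
    """Keep only records whose values are of a plausible type."""
    return [r for r in records if isinstance(r["amount"], (int, float))]

def cleanse(records):
    """Normalise formats, eg strip stray whitespace from identifiers."""
    return [{**r, "account_id": str(r["account_id"]).strip()} for r in records]

def integrate(sources):
    """Merge departmental extracts into one set, dropping exact duplicates."""
    seen, merged = set(), []
    for records in sources:
        for r in records:
            key = (r["account_id"], r["amount"])
            if key not in seen:
                seen.add(key)
                merged.append(r)
    return merged

def reconcile(records, control_total):
    """Check the merged figure against an independently sourced control total."""
    total = sum(r["amount"] for r in records)
    assert abs(total - control_total) < 0.01, "reconciliation break"
    return records

def qualify(sources, control_total):
    """One pipeline, one output: a single version of the truth."""
    staged = [cleanse(validate(verify(s))) for s in sources]
    return reconcile(integrate(staged), control_total)

# Two departmental extracts that overlap; the pipeline yields one record set.
dept_a = [{"account_id": " A1 ", "amount": 100.0}]
dept_b = [{"account_id": "A1", "amount": 100.0}, {"account_id": "B2", "amount": 50.0}]
print(qualify([dept_a, dept_b], control_total=150.0))
```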
Many organisations also rely on manual interventions in the collation and preparation of data, which introduces further data-quality risk. Furthermore, periodic reports may exclude transactions in transit (in the process of being committed by the payees/lenders) or those in suspense, which skews loss totals (ie, overstates the losses). In many instances, loss deviations are used in reports to compensate and reconcile the numbers.
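A small, made-up illustration of that skew: a report built only from committed postings misses repayments still in transit or sitting in suspense, so the exposure it shows is overstated until those items land. The figures are arbitrary.

```python
outstanding_exposure = 10_000.00   # committed ledger balance
in_transit = 750.00                # repayments initiated but not yet committed
suspense = 120.00                  # amounts received but not yet allocated

committed_only_view = outstanding_exposure                      # what the naive report shows
reconciled_view = outstanding_exposure - in_transit - suspense  # the truer picture

print(f"Committed-only exposure:   {committed_only_view:,.2f}")  # 10,000.00 (overstated)
print(f"Including items in flight: {reconciled_view:,.2f}")      # 9,130.00
print(f"Deviation to be explained: {committed_only_view - reconciled_view:,.2f}")
```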
In line with CECL and BCBS239 compliance, organisations have to show how losses were calculated and which governance processes approved those figures – ultimately, to reveal their data lineage and prove governance.
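One way to make that lineage tangible is to carry it with the figure itself, as in the sketch below. The record structure and field names are assumptions for illustration, not a reference to any particular lineage tool.

```python
from dataclasses import dataclass

@dataclass
class ReportedFigure:
    value: float
    sources: tuple        # upstream datasets the figure was derived from
    transformation: str   # the logic applied, in plain terms
    approved_by: str      # the governance step that signed the figure off

quarterly_loss = ReportedFigure(
    value=2_100.00,
    sources=("loan_ledger_extract", "writeoff_register"),
    transformation="sum of written-off principal, net of recoveries",
    approved_by="credit risk committee sign-off",
)
print(quarterly_loss)
```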
In most organisations, the necessary data exists.
Many also have the necessary risk models and forecasting expertise in place. The problem in preparing for CECL lies in disparities in how their data is understood and in how it is managed overall.
In many cases, data management evolved without proper controls and was never formalised or organised, which undermines the trustworthiness of the data. Where data management practices have not kept pace with change, organisations have tended to bypass established frameworks and standards, and that is the root of the data chaos.
To unravel the spaghetti junction and prepare the quality data needed for accurate loss and risk forecasting, organisations benefit from centralised stores of quality data, full data tracking (or lineage) and reporting capability. Enterprise-wide sharing of data not only supports best-practice governance, compliance and risk management, but also allows organisations to better understand their market and identify growth opportunities through cross-selling and up-selling.
Enterprises also have to organise their data in a formalised fashion – and this is the objective of data governance. The advent of data governance in the past few years provides the guidelines necessary for data best practice. Uniform data management – the rules, policies and standards applied to data throughout its life-cycle – is the execution of those guidelines, and it lays the foundations an organisation needs to align with CECL or with any best-practice reporting or forecasting model that emerges in future.