
Tool 1: Analysis of Regulatory Reports


The objective of regulatory reporting is to periodically provide market conduct supervisors (MCSs) with crucial standardized data that allow them to fulfill their supervisory mandate and carry out institution- and market-focused supervisory activities. Regulatory reporting is the most important source of information about a regulated market for an MCS because it is the most comprehensive: it rests on regulatory requirements that mandate financial services providers (FSPs) to submit such information or face penalties for failure to do so. It is also one of the few available sources of periodic and standardized information that allows the MCS to consistently track a range of indicators over time.

The most common regulatory reports require regulated FSPs to fill out and submit quantitative reporting templates that follow the frequency, formats, and content prescribed by regulation and determined by the MCS’s needs. Subsequent analysis by the MCS may involve interrogating aggregated higher-level data or granular detailed data.

In addition to standardized quantitative reporting templates, an MCS may require FSPs to provide qualitative information on governance, internal controls and systems, operations manuals, complaints handling procedures, etc. As part of the regulatory reporting framework, qualitative reporting is usually required less frequently than quantitative reporting. While this tool covers both types of reporting, the sections below focus on standardized quantitative regulatory reporting.


Benefits and opportunities

Regulatory report analysis helps an MCS identify, understand, monitor, measure, and address consumer risks and outcomes; competition issues; and compliance with consumer protection regulations in a timely manner. It also supports effective risk-based supervision by helping to prioritize supervisory programs and resources based on assessed risks.

Regulatory reporting requirements can also prompt some level of self-analysis by FSPs with respect to their compliance performance and treatment of customers. This especially applies to qualitative reporting. In fact, some countries require regulatory reporting that involves self-assessment (e.g., South Africa’s Financial Sector Conduct Authority requires insurance companies to submit Conduct of Business Returns).


The benefits and opportunities that stem from aggregated higher-level data analysis include:

  • Simplicity. Developing a list of aggregated indicators to monitor and inputting them into a reporting template’s data fields is relatively easy. Analyzing aggregated indicators is a straightforward process that rarely requires complex data crunching or advanced analytics. Standardizing indicators at the aggregated level is relatively simple as well, although it does require consultation with FSPs to harmonize terms and ensure that calculations are based on common formulas.
  • Low resource intensiveness. Storing and retrieving aggregated data is easy, quick, and does not require high storage capacity, excessive or expensive bandwidth, or special computing power. Further, FSPs are responsible for aggregating the data while MCSs are tasked with data validation.
  • Ease of implementation. Since the amount of data collected is limited, aggregated reporting can be (and, for prudential supervision, already is) implemented in any country context, regardless of the technology and expertise available for data aggregation, validation, storage, retrieval, analysis, visualization, and reporting.
  • Dissemination. An MCS can periodically disseminate key aggregated indicators. Making these figures available to consumers, consumer groups, and the media creates pressure for better FSP practices.

The benefits and opportunities of analyzing granular data include:

  • Depth. The greatest benefit granular data analysis can give an MCS is deeper insights into consumer issues and business conduct, particularly when the data include demographic information such as gender, age, or customer location. The MCS can “play” with large data sets that offer greater customer detail; perform various data crunches; identify correlations between data sets; and use suptech tools to combine a range of data sources, types, or formats (e.g., structured and unstructured data). All these activities open up many possibilities for analysis.
  • Flexibility. Granular data allow an MCS to see and understand the data underpinning traditional aggregated data indicators and calculate additional indicators as needed. For example, due to emerging concerns that cannot simply be addressed by looking at traditional aggregated data, an MCS may temporarily monitor indicators that target specific product types, customer segments, or consumer issues.
  • Comprehensiveness. Granular data that combine demographic, transactional, financial, and operational elements can provide an MCS with valuable insights on consumer behavior, gaps in inclusion, discriminatory or anticompetitive practices, and algorithmic bias. Such analysis may reveal gender- or minority-based bias that leads to higher interest rates, higher insurance premiums, higher rates of rejection for loan applications, longer complaints resolution times, and higher nonperforming loan ratios for women or minority groups.
  • Segmentation. Granular data facilitate analyses by different consumer segments and identify consumer clusters based on behavior, which may produce insights on specific issues that affect, for example, low-income women.
BOX 1. Mexico’s pensions regulator, CONSAR, on the importance of gender-disaggregated data

In Mexico, a 2015 demand-side financial inclusion survey revealed a gender gap in account ownership and in the use of a range of financial services. As a result, the Mexican pensions regulator, CONSAR, studied the disaggregated data it collected from FSPs through regulatory reporting. The data confirmed the survey results and showed that women were not only saving lower amounts for retirement but also saving less frequently than men, likely due to lower wages and higher job informality. This increased the risk that women would not meet minimum pension rights requirements. On average, for every 100 pesos a man receives in retirement, a woman receives only 70. These insights came to light only through disaggregated data. Based on these findings, CONSAR engaged in a public awareness campaign with media outlets and developed a series of programs to promote retirement savings. One program specifically focused on formalizing domestic workers (a majority of whom are women) and providing them with access to pension benefits and insurance. It also established a women’s microcredit program that includes funds earmarked for retirement.
Source: Enabling Women’s Financial Inclusion Through Data: The Case of Mexico (IDB 2019).


Characteristics of this tool

Regulatory reporting requirements differ by frequency, format, and type of data collected:

  • Financial (e.g., fee revenue, nonperforming loans by credit type, total e-money issued)
  • Operational (e.g., number of loans, depositors, agents, consumer complaints, fraud reports)
  • Demographic (e.g., number of borrowers or depositors by gender, age, location)

In terms of level of detail of data collected, regulatory reporting falls into two categories:

  • Aggregated (e.g., total e-money issued, total number of loans)
  • Granular (e.g., list of all e-money transactions, all loan transactions, all payment transactions, all complaints)

The level of data granularity varies across countries and across regulatory reporting templates. Most templates require aggregated rather than granular data because implementation of aggregated data reporting is usually less complex for both MCSs and FSPs. However, MCSs are increasingly shifting at least part of their data collection efforts to granular data. The two levels can coexist for different data, and an MCS may use either one for a standalone project such as a thematic review.

Aggregated data

Aggregated data can be defined as data reported after they are compiled from an FSP’s various systems and aggregated through calculations determined by an MCS. For instance, if the MCS requests a quarterly report on total number of complaints divided by number of deposit accounts, the FSP makes three simple calculations:

1. Sum the total number of complaints received in a quarter and registered in the complaints management system.  

2. Sum the total number of deposit accounts at the end of a quarter recorded in the core banking system.  

3. Divide the result of step 1 by the result of step 2.

The resulting figure is reported to the MCS as a single number that becomes one of many data fields in a reporting template. It is an aggregated indicator. If the MCS had asked for reporting of the data points of steps 1 and 2 rather than the final division from step 3, this would still be considered aggregated reporting but at a lower level of aggregation (and a higher level of granularity). The MCS itself would be responsible for calculating the final division (step 3) after receiving the data points from steps 1 and 2.
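
To make the arithmetic concrete, the sketch below works through the same three calculations in Python. The figures and field names are invented for illustration only.

```python
# Minimal sketch of the FSP-side aggregation described above (invented data).
complaints = [
    {"id": 1, "channel": "branch"},
    {"id": 2, "channel": "app"},
    {"id": 3, "channel": "call_center"},
]  # records pulled from the complaints management system for the quarter

deposit_accounts_end_of_quarter = 150_000  # figure from the core banking system

# Step 1: total number of complaints received in the quarter
total_complaints = len(complaints)

# Step 2: total number of deposit accounts at the end of the quarter
total_accounts = deposit_accounts_end_of_quarter

# Step 3: the aggregated indicator reported to the MCS as a single data field
complaints_per_account = total_complaints / total_accounts
print(f"Complaints per deposit account: {complaints_per_account:.6f}")
```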

Granular data

Granular data are the data from which aggregated data are produced. They are the closest in level of detail to the data an FSP generates on an ongoing basis as part of its business operations. For example, an MCS that wants to monitor a consumer risk indicator every quarter may choose to collect and monitor the number of complaints received divided by the number of loans. (This indicator normalizes the number of complaints to account for large FSPs that facilitate more transactions, have more customers, and inevitably amass more complaints.) The MCS has the following options for gathering the indicator:

1. Collect the ready-made indicator.

2. Collect the ready-made numerator (total number of consumer complaints received by each FSP in a quarter) and the denominator (total number of loan contracts outstanding by the end of a quarter), then calculate the indicator using those two data points.

3. Collect granular data:

  • The complete list of complaints received in a quarter (to calculate the numerator) 
  • The complete list of loans outstanding by the end of a quarter (to calculate the denominator) 

An MCS may be more inclined to collect ready-made aggregated data (options 1 and 2 above) as these data require the least amount of resources. However, they are the most rigid and offer the MCS no room to perform other types of analysis. Granular data (option 3) allow the MCS not only to recreate the ready-made data but also to run other data queries, conduct additional and deeper analyses, spot broader sets of patterns and issues, and construct additional or alternative indicators based on supervisory needs.
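
As a minimal sketch of the difference, the following Python snippet, using invented records and field names, recreates the ready-made indicator from granular submissions and then runs two additional queries that aggregated reporting alone would not allow.

```python
from collections import Counter

# Invented granular submissions: every complaint and every outstanding loan.
complaints = [
    {"product": "digital_loan", "gender": "F"},
    {"product": "digital_loan", "gender": "M"},
    {"product": "deposit", "gender": "F"},
]
loans_outstanding = [
    {"product": "digital_loan", "gender": "F"},
    {"product": "digital_loan", "gender": "M"},
    {"product": "consumer_loan", "gender": "F"},
    {"product": "consumer_loan", "gender": "M"},
]

# Recreate the ready-made indicator (options 1 and 2): complaints per outstanding loan.
indicator = len(complaints) / len(loans_outstanding)

# Additional queries that only granular data allow, e.g., a product-level breakdown
# of complaints or a gender split of complainants.
complaints_by_product = Counter(c["product"] for c in complaints)
complaints_by_gender = Counter(c["gender"] for c in complaints)

print(indicator, complaints_by_product, complaints_by_gender)
```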

BOX 2. Range of technologies used for regulatory reporting

FSPs usually send regulatory reports by manually or automatically entering data, then transferring data files into a centralized database at an MCS or other third party (e.g., through a web portal or a downloadable software solution). Some small FSPs, such as community-based providers, may be permitted to manually send reports to the MCS (e.g., via email attachments, faxes, or hard copies by mail). Technology-enabled methods such as pulling or viewing data from FSP operational systems using application programming interfaces (APIs) may also be used; however, they are less common in developing economies because they require investment. Nonetheless, supervisory technology (suptech) creates opportunities for MCSs to increase the scope, granularity, accuracy, and timeliness of regulatory reporting once fundamental frameworks and strategy are in place.
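
As a purely hypothetical sketch of an API-based pull, the snippet below uses the Python requests library against an invented endpoint, parameters, and token; a real implementation would follow the supervisor’s own reporting interface and authentication scheme.

```python
import requests  # third-party HTTP client

# Hypothetical illustration only: the endpoint, parameters, and token are invented.
API_URL = "https://reporting.example-supervisor.gov/api/v1/reports"
params = {"fsp_id": "FSP001", "template": "complaints_quarterly", "period": "2023Q4"}
headers = {"Authorization": "Bearer <access-token>"}

response = requests.get(API_URL, params=params, headers=headers, timeout=30)
response.raise_for_status()
report = response.json()  # structured report data, ready for validation and storage
```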

How to use this tool

There is no single recipe for implementing a regulatory reporting regime or for choosing a data collection mechanism, as the many options fall somewhere between collecting hard copies and using automated methods powered by suptech. However, there are six common steps MCSs can take:


STEP 1:

IDENTIFY GOALS AND OBJECTIVES

Before implementing a regulatory reporting regime, an MCS should identify its goals and objectives to determine which data are needed. CGAP’s paper Data Collection by Supervisors of Digital Financial Services (2017) provides guidance on how to design new reporting requirements, especially for digital financial services (DFS). In the context of market monitoring, the MCS maps data needs to its policy goals and supervisory objectives.

The main question the MCS needs to address in this step:

  • Which risks and developments do we need to monitor and why?

STEP 2:

IDENTIFY INDICATORS AND DATA POINTS TO BE COLLECTED

Only after goals and objectives have been identified does the MCS identify the indicators required to fulfill them and, importantly, the underlying data points necessary to build each indicator. It is critical to continue past the identification of indicators since some data points will be used in multiple indicators and may be required by more than one department within the MCS.

The following example illustrates this point. While data points B and C are both “total number of e-money accounts,” they are used to build different indicators:

Indicator 1. E-money penetration (e-money accounts per 10,000 adults)
Data points needed for Indicator 1:
Data point A: Total adult population
Data point B: Total number of e-money accounts

Indicator 2. E-money account usage (percentage of active e-money accounts)
Data points needed for Indicator 2:
Data point C: Total number of e-money accounts
Data point D: Number of active e-money accounts
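
A minimal, hypothetical calculation of the two indicators from data points A–D (all figures invented) might look as follows:

```python
# Invented figures for data points A-D, illustrating how the same underlying
# data point ("total number of e-money accounts") feeds two different indicators.
total_adult_population = 2_500_000   # data point A
total_emoney_accounts = 800_000      # data points B and C (the same figure)
active_emoney_accounts = 350_000     # data point D

# Indicator 1: e-money penetration (accounts per 10,000 adults)
emoney_penetration = total_emoney_accounts / total_adult_population * 10_000

# Indicator 2: e-money account usage (percentage of active accounts)
emoney_usage = active_emoney_accounts / total_emoney_accounts * 100

print(f"Penetration: {emoney_penetration:.0f} accounts per 10,000 adults")
print(f"Usage: {emoney_usage:.1f}% of accounts are active")
```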

The main questions the MCS needs to address in this step:

  • Which indicators give us information about the risks and developments identified in the previous step? 
  • Which data points are needed to build each indicator?

STEP 3:

DESIGN REPORTING REQUIREMENTS

Designing reporting requirements depends on the type of data collection mechanism in place. In most countries, the mechanism uses traditional reporting templates—often in a table format that indicates which data fields an FSP needs to fill out. The templates include instructions for filling out each field and a clear definition of each term mentioned. 

  • Check what has already been collected. Reporting templates are often designed for supervision of individual FSPs. The MCS should consider how existing templates may provide inputs for market-wide monitoring. It is usually not necessary to create an entirely new reporting template for market monitoring alone. Existing templates used by other supervisors often suffice or only require small adjustments. See examples of reporting templates for DFS supervision.
  • Coordinate with other departments to minimize reporting costs. It is important to avoid duplication of reporting requirements to minimize compliance costs for FSPs and to reduce the risk of inconsistency across reports. Identifying common data points and deciding whether to collect them (instead of indicators) may require interdepartmental coordination and guidance that prepares the MCS to calculate indicators itself using previously collected data points. Depending on how extensive templates are, collecting data points rather than indicators may increase the amount of data the supervisory agency needs to store. The MCS needs to coordinate with the department that oversees data management—usually IT—to ensure its current infrastructure can accommodate a higher volume of data.
  • Consult with FSPs. New reporting requirements are subject to consultations with FSPs to ensure they are able to provide the data and to assess how much time they need to fully implement new requirements. Consultations also clarify defined terms to ensure that all FSPs have the same understanding of each data point, which, in turn, allows the MCS to compare data across FSPs.
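
One way to make field definitions and filling instructions explicit is to encode the reporting template in machine-readable form. The sketch below is illustrative only; the template name, fields, and validation rules are invented and would need to reflect the MCS’s actual reporting framework.

```python
# Illustrative (invented) machine-readable definition of a reporting template.
# Each field carries the definition and validation rule an FSP would follow.
TEMPLATE = {
    "name": "quarterly_complaints_return",
    "frequency": "quarterly",
    "fields": {
        "total_complaints": {
            "definition": "Complaints received and registered during the quarter",
            "type": int,
            "rule": lambda v: v >= 0,
        },
        "total_deposit_accounts": {
            "definition": "Deposit accounts open at the end of the quarter",
            "type": int,
            "rule": lambda v: v >= 0,
        },
    },
}

def validate(submission: dict) -> list:
    """Return a list of validation errors for one FSP submission."""
    errors = []
    for name, spec in TEMPLATE["fields"].items():
        value = submission.get(name)
        if value is None:
            errors.append(f"missing field: {name}")
        elif not isinstance(value, spec["type"]) or not spec["rule"](value):
            errors.append(f"invalid value for {name}: {value!r}")
    return errors

print(validate({"total_complaints": 42, "total_deposit_accounts": 150_000}))  # []
```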

The main question the MCS needs to address in this step:

  • Which additional collectable data points do we need to complement those the agency already collects?

STEP 4:

CHECK OTHER SOURCES AND COORDINATE OUTSIDE THE AGENCY

Along with data and information already collected by other departments, the MCS can benefit from data generated by other agencies and third parties. It should take into consideration these other sources and arrange to access them, which may require interagency coordination agreements.

Data sharing is often a two-way street. The MCS may both collect and share data with other agencies as part of its interagency coordination agreements. In Mexico, for example, the National Banking and Securities Commission (CNBV) shares consumer-relevant data with the National Commission for Financial Consumer Protection (Condusef). Policy makers promoting financial inclusion may be interested in data collected by the MCS, and the MCS may allow other parties, such as researchers, to use supervisory data under its terms of confidentiality.

The main question the MCS needs to address in this step:

  • Which additional data points do we need that other agencies are not collecting?

STEP 5:

DESIGN A GUIDE FOR ANALYSIS, THEN CONDUCT ANALYSES

It is good practice to provide supervisors with guidance on how to conduct analyses. The main types of analyses that use regulatory reports for market monitoring purposes include:

  • Growth trends for the market as a whole
  • Growth trends for specific types of products, FSPs, or customer segments
  • Peer group trends and outliers
  • Sharp drops, increases, or changes in indicators
  • Emerging patterns that may indicate changes in business models and market conduct

One aspect to consider is the fundamental difference between prudential analyses and market conduct analyses, even if the source data or information are the same. Prudential supervision strongly focuses on quantitative analyses to check compliance with a range of quantitative ratios and to understand financial soundness and systemic risks. On the other hand, market conduct supervision—including market monitoring—makes judgements on the fairness of conduct and the treatment of financial consumers.

It follows that an MCS should combine various reporting data types, sources, and formats. No single source or type of data should be analyzed in isolation; in addition to quantitative data, it is essential to leverage qualitative information and supervisory judgement.

For example, the MCS may want to explore whether declining profitability across FSPs, as reflected in financial statements, is leading to misconduct and unfair treatment. Since financial data alone are not enough, they can be combined with nonfinancial data such as complaints data: an increase in complaints about unauthorized debits, for instance, may indicate that FSPs are intentionally duplicating fee charges or adopting poor practices to make up for declining profits elsewhere. Combining aggregated data with granular data, financial data with nonfinancial data, and structured data with unstructured data results in richer supervisory findings.
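
A minimal sketch of this kind of combined analysis, using invented figures, could flag FSPs whose profitability is declining while complaints about unauthorized debits are rising:

```python
# Illustrative only: invented figures combining financial and nonfinancial data.
# Flag FSPs whose return on equity is falling while unauthorized-debit complaints rise.
fsp_data = {
    "FSP001": {"roe_change_pct": -4.0, "unauthorized_debit_complaints_growth_pct": 35.0},
    "FSP002": {"roe_change_pct": 1.5, "unauthorized_debit_complaints_growth_pct": 5.0},
    "FSP003": {"roe_change_pct": -2.5, "unauthorized_debit_complaints_growth_pct": 60.0},
}

flagged = [
    fsp
    for fsp, d in fsp_data.items()
    if d["roe_change_pct"] < 0 and d["unauthorized_debit_complaints_growth_pct"] > 20
]
print("FSPs for closer review:", flagged)  # ['FSP001', 'FSP003']
```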

The following examples discuss analyses that use key operational and financial aggregated data.

TABLE 1. The three main types of aggregated data used for market monitoring of consumer protection

How do complaints data help market monitoring?

Please refer to this toolkit’s section on analysis of complaints data.

How can operational data help market monitoring?

These data help an MCS prioritize which products and FSPs should receive greater attention. The MCS can use the data to identify which FSPs are outliers or important players; to keep an eye on salient market features (e.g., frequently used products and institutions, services that most frequently reach low-income customers and women); or to track new offerings, especially those the supervisor has classified as complex.

Examples of these types of analyses include:

  1. FSPs with the largest number of customers
  2. FSPs with the largest number of female customers 
  3. FSPs with the largest number of contracts (e.g., accounts, policies, loans) 
  4. FSPs with the largest number of service points
  5. FSPs with the largest number of DFS transactions
  6. FSPs with the largest number of low-value transactions
  7. FSPs with the largest number of low-value deposit accounts
  8. FSPs with the largest number of new product launches
  9. FSPs with the largest number of products classified as complex
  10. FSPs with the largest number of complaints relative to number of customers
  11. FSPs with the largest number of complaints classified as serious
  12. FSPs with the largest number of transactions performed through digital channels
  13. FSPs with the strongest growth in number of customers or contracts, and with the strongest growth in number of female customers
  14. Most used types of products (with the largest number of customers/contracts)
  15. Most used types of delivery channels
  16. Most frequent types of reported fraud
  17. Most targeted types of customers (e.g., corporate, microenterprise, retail, women, low-income)
  18. New types of products introduced in the market
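
As a minimal illustration of analyses such as item 10 above (complaints relative to number of customers), the following sketch, with invented figures, ranks FSPs and flags outliers against the peer median:

```python
from statistics import median

# Invented figures: complaints per 1,000 customers, used to rank FSPs
# and to spot outliers relative to the peer median.
fsps = {
    "FSP001": {"customers": 500_000, "complaints": 2_400},
    "FSP002": {"customers": 120_000, "complaints": 1_900},
    "FSP003": {"customers": 900_000, "complaints": 3_100},
}

rates = {name: d["complaints"] / d["customers"] * 1_000 for name, d in fsps.items()}
ranking = sorted(rates.items(), key=lambda kv: kv[1], reverse=True)
peer_median = median(rates.values())
outliers = [name for name, rate in rates.items() if rate > 2 * peer_median]

print("Complaints per 1,000 customers:", ranking)
print("Outliers (more than twice the peer median):", outliers)
```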

How can financial data help market monitoring?

MCSs need financial data to gain a complete picture of the risk profiles, business models, and conduct of FSPs and an entire market or sector. If data analysis is performed manually, an MCS may choose to monitor financial data only for certain types or sizes of FSPs and focus on certain types of products or customer segments. If access to automated analytical tools is not a constraint, financial data analysis should cover the same universe as market monitoring analysis.

  • Example: Monitoring profitability levels. Monitoring trends in profitability indicators (e.g., net profit margin; earnings before interest, taxes, depreciation, and amortization [EBITDA] margin; return on equity [ROE]) can give hints about upcoming or recent changes to business models and FSP conduct. The MCS would need to complete deeper trend analyses of underlying variables, operational data, or qualitative information to better understand trends in profitability. For instance:
    • Declining profitability could trigger cost-saving measures that are deleterious to consumers in an attempt to preserve shareholder value.
    • Declining profitability could indicate an increased focus on positive outcomes for consumers and employees—and less of a focus on maximizing profits.
    • Rapid growth in profitability could suggest weak or abusive sales practices (e.g., increases in nonperforming loans, cases of aggressive marketing).
  • Example: Monitoring revenue sources. Revenue sources reflect the business models FSPs have adopted and can lead to changes in business conduct. For instance:
    • A combination of increasing fee revenue and decreasing interest income has revealed consumer protection issues in many countries, including business models that rely on nontransparent, misleading, and excessive fees. Data on revenue sources, such as fee income and interest income, can also help the MCS identify peer groups.
    • A sudden growth in late fees for loan payments may indicate lax suitability assessment before lending, ineffective disclosure of fees, or misleading communication with customers.
    • Analysis of outlier FSPs in terms of levels of interest income could focus on a given product type. For example, an FSP whose interest income on consumer loans is much higher than its peers could trigger the MCS to compare charges for that loan category with relevant consumer complaints.
    • Unusual revenue patterns could be the result of abusing a dominant market position.
  • Example: Monitoring operational costs. Operational cost structures also reflect the business models FSPs have adopted and can create incentives for harmful practices by FSPs, agents, or other third parties.
    • Agent fees include those paid to mobile money agents, banking agents, and insurance agents. Certain variable fees can lead to harmful behavior. In Brazil, monitoring of the high fees paid to loan agents led to an inquiry into fraudulent and abusive practices that targeted vulnerable populations, such as retirees. Monitoring activities culminated in a regulatory change that, among other reforms, capped upfront fees paid to agents. Similarly, Mexico’s pensions regulator took legal action and changed its regulation to deal with fraudulent agent behavior that was spurred by high upfront agent fees. Both regulators relied on a range of analyses, combining financial and nonfinancial data to identify the issues.
    • Low insurance claims ratios (claims paid relative to premium revenue) could be a sign of mis-selling, unsuitability, excessive coverage exclusions, ineffective disclosure, burdensome claims procedures or, more frequently, a combination of these practices. Comparing claims ratios, expenses, and net underwriting results across insurers may be a valuable exercise for the MCS (a minimal illustrative calculation follows this list). For example, the European Insurance and Occupational Pensions Authority (EIOPA) issued a warning to the travel insurance industry following a thematic review of key financial ratios of 201 insurers in 29 countries. The review identified multiple instances of poor practices and problematic business models.
  • Example: Monitoring loan portfolio performance. When analyzed along with several of the indicators discussed above, data on loan delinquency in relevant product types such as consumer loans, microcredit, and digital loans may reveal potential overlending. Overlending may be due to weaknesses in the loan approval process, exploitation of customer bias, or problems in the credit scoring model that need to be further investigated. Deeper analyses may be completed with the data reported for different customer segments (e.g., women, microenterprises, rural individuals). It may also be useful to compare FSPs within the same market segment and to compare performance across product types.
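
A minimal sketch of the claims ratio comparison mentioned above, with invented figures and an illustrative (not regulatory) threshold, could look as follows:

```python
# Invented figures: claims ratio = claims paid / premium revenue, per insurer.
# Very low ratios relative to peers may warrant a closer conduct review.
insurers = {
    "InsurerA": {"claims_paid": 4_000_000, "premium_revenue": 10_000_000},
    "InsurerB": {"claims_paid": 600_000, "premium_revenue": 9_000_000},
    "InsurerC": {"claims_paid": 3_500_000, "premium_revenue": 8_000_000},
}

LOW_RATIO_THRESHOLD = 0.20  # illustrative threshold, not a regulatory benchmark

for name, d in insurers.items():
    ratio = d["claims_paid"] / d["premium_revenue"]
    flag = " <-- review" if ratio < LOW_RATIO_THRESHOLD else ""
    print(f"{name}: claims ratio {ratio:.0%}{flag}")
```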

STEP 6:

DEFINE AND PRODUCE KEY OUTPUTS

Market monitoring can produce a range of outputs, from standardized periodic reports for internal or interdepartmental consumption to externally disseminated statistics and analytical reports that can be used by stakeholders with an interest in monitoring financial consumer issues. The range and frequency of outputs are usually constrained by capacity. In any case, it is useful to define in advance the key outputs that will be produced periodically and to provide staff with guidance on their format, tone, and content. Special or one-off outputs may also be produced.


Limitations of this tool

An important limitation of both types of regulatory reporting analysis is the difficulty of ensuring data quality. In particular, it may take considerable time after a new reporting requirement has been implemented for collected data to reach acceptable levels of data quality. An MCS may need to coordinate with other supervisors and IT specialists to check regulatory reporting quality. These actions may include reviewing how FSPs fill out and validate data reporting templates prior to submission and reviewing validation performed by the supervisory agency itself upon data receipt.

Key limitations of aggregated data analyses include:

  • Rigidity. The scope of supervisory analysis is limited to ready-made calculations performed by FSPs. 
  • Limited depth. To expand and strengthen analyses, the MCS may need to increase the list of aggregated indicators and change the reporting template.
  • Compliance costs. Data aggregation for extensive regulatory reporting translates into high compliance costs for FSPs. For example, aggregated indicators with breakdowns such as “total number of transactions by gender and by location” may increase compliance costs, as follows: An FSP would need to combine data, sometimes from multiple systems or “tables,” and transform the data into the standard indicator format the MCS requires. The same set of underlying data may be retrieved multiple times to fill out a single reporting template with multiple indicators.
  • Inefficiency. Updating aggregated data requirements is a relatively inefficient and inflexible approach. Each time a template is changed, FSPs need to reconfigure their systems and redefine their reporting IT rules to account for changes in data retrieval and formula calculation procedures. Reconfiguration may lead to delays in reporting and mistakes that affect the MCS’s work.
  • Data quality issues. Aggregated data may mask data quality issues because they permit only a superficial level of validation by the MCS. For example, when the MCS collects the total number of complaints from FSPs, it cannot verify that the reported total is correct because it does not see the underlying records (see the illustrative plausibility checks after this list).
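
The plausibility checks below illustrate the kind of surface-level validation that remains possible with aggregated submissions; the fields, figures, and threshold are invented.

```python
# Illustrative plausibility checks an MCS might run on aggregated submissions,
# given that the underlying records cannot be re-verified (invented figures).
previous = {"total_complaints": 1_050, "total_deposit_accounts": 148_000}
current = {"total_complaints": 4_900, "total_deposit_accounts": 150_000}

MAX_CHANGE = 0.5  # flag any field that moves by more than 50% quarter over quarter

for field, new_value in current.items():
    old_value = previous[field]
    change = abs(new_value - old_value) / old_value
    if change > MAX_CHANGE:
        print(f"Plausibility flag: {field} changed by {change:.0%}; query the FSP")
```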

The main limitations of granular data analyses include:

  • Compliance costs. Many MCSs and FSPs depend on legacy systems, and collecting granular data with these older mechanisms is neither practical nor effective. If the formats and systems the MCS uses require FSPs to manually fill out Excel spreadsheets, granular data reporting will be costly and data quality will worsen compared with aggregated data.

    Note: The need for adequate systems by both FSPs and MCSs is driving a slow global shift toward highly automated data reporting and collection that more reliably deals with granular data. However, FSPs may resist the shift to this type of reporting model, alleging high short-term investment costs.
  • Resource intensiveness. Managing large amounts of data can be resource intensive, requiring an MCS to increase data storage capacity, invest in data analytics tools and data analysis expertise/skills, and improve data transfer speed and security. Standardizing granular data and validation rules also requires significant time and profound understanding of regulated businesses.
  • Data protection issues. Some granular data need to be anonymized before they are reported (a minimal de-identification sketch follows this list). Complying with data protection requirements such as this may increase costs for FSPs.
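
The sketch below illustrates one possible de-identification step an FSP might apply before reporting granular records; the salt, field names, and record structure are invented, and real requirements would depend on the applicable data protection rules.

```python
import hashlib

# Illustrative de-identification of a granular record before submission:
# direct identifiers are dropped and the customer ID is replaced by a salted hash.
SALT = "fsp-specific-secret-salt"  # invented; in practice managed securely

def pseudonymize(record: dict) -> dict:
    token = hashlib.sha256((SALT + record["customer_id"]).encode()).hexdigest()
    return {
        "customer_token": token,      # stable pseudonym, not reversible by the MCS
        "product": record["product"],
        "amount": record["amount"],
        # name, phone number, and other direct identifiers are deliberately omitted
    }

raw = {"customer_id": "C-123", "name": "Jane Doe", "product": "digital_loan", "amount": 250}
print(pseudonymize(raw))
```
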
While analyzing regulatory reports is arguably the most important market monitoring tool, combining it with other tools (such as social media monitoring or phone surveys) leads to more effective market monitoring. Different tools complement and reinforce each other, and positive consumer outcomes depend on how an MCS uses these tools, combines them with other evidence, and takes timely action to change market practices, reform regulations, clarify supervisory expectations, and penalize poor conduct.
