How should one rate the credit rating agencies (CRAs)? It’s
a question often asked by market participants and commentators. Some apparently
think that you can evaluate ratings by looking at their impact on market
prices. If a rating change is not accompanied by a change in bond spreads – or
if bond prices diverge markedly from levels implied by ratings – many regard it
as a failure on the part of the ratings providers.
However, that is
the wrong test. Ratings and market indicators (bond spreads and credit default
swap [CDS] prices) cannot be directly compared because they are fundamentally
different things. They are generated by very different processes and are often
driven by different factors. Ratings provide a long-term subjective view of
creditworthiness based on fundamental credit analysis, while market-based
indicators reflect the ebb and flow of market sentiment, the liquidity of a
security and many other short-term technical factors. If the
market reacts differently or indifferently to a rating action, that does not
say anything about the ‘quality’ of the rating action. Take eurozone sovereign
ratings, for instance. For many years before the recent debt crisis, the market
valued debt issued by countries such as Greece and Italy roughly on a par with
AAA-rated German government bonds, at a time when their credit ratings were
significantly lower – and even after they were further downgraded from 2004-05.
It is a good example of how markets are volatile and prone to over- or
undershooting, while ratings remain relatively stable.
Credit ratings are forward-looking
opinions about the relative creditworthiness of borrowers and the securities
they issue. The higher the rating of an issuer or debt issue is, the lower the
relative probability of default in the rating agency’s opinion. To
judge the performance of ratings, therefore, one has to look at their
correlation over time with defaults, not with short-run movements in market
prices. They should be assessed by comparing the default experience of a
particular asset class and rating category against the long-term average for
that asset class and rating category. That is a straightforward – and
verifiable – empirical test.
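The test described above can be sketched in code. The snippet below is purely illustrative: it uses a hypothetical toy cohort (the issuer records and default flags are invented for demonstration, not real S&P statistics) and simply computes the observed default rate per rating category, which could then be compared against long-term averages for the same category.

```python
# Illustrative sketch of the empirical test: for a cohort of issuers,
# record each issuer's rating at the start of the period and whether it
# defaulted within the horizon, then compute default rates per category.
from collections import defaultdict

# Toy cohort data -- hypothetical values for demonstration only.
cohort = [
    ("AAA", False), ("AA", False), ("A", False), ("BBB", False),
    ("BBB", True), ("BB", False), ("BB", True), ("B", True),
    ("B", True), ("CCC", True),
]

def default_rates(observations):
    """Return the observed default rate for each rating category."""
    totals = defaultdict(int)
    defaults = defaultdict(int)
    for rating, defaulted in observations:
        totals[rating] += 1
        if defaulted:
            defaults[rating] += 1
    return {r: defaults[r] / totals[r] for r in totals}

rates = default_rates(cohort)
for rating in ("AAA", "AA", "A", "BBB", "BB", "B", "CCC"):
    if rating in rates:
        print(f"{rating:>4}: {rates[rating]:.0%}")
```

If ratings rank-order risk well, the computed rates should rise monotonically as the rating falls — exactly the pattern the default studies cited below report.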
Ratings agencies, regulators and many
others publish extensive data looking at just this. Standard & Poor’s
(S&P’s) global fixed income research group, for example, publishes in-depth
annual default studies covering a range of asset classes and regions.
The studies show that corporate and government ratings have continued to
perform well as indicators of default risk during the financial crisis. They
consistently demonstrate a close match between ratings and defaults across all
regions and all periods. The higher the rating is, the lower the incidence of
default, and vice versa. And higher ratings have proven progressively more
stable than lower ratings.
Globally, none of the 66 rated companies
and financial institutions that defaulted in 2012 had S&P investment grade
ratings (BBB- and above) at the start of the year, and around 80% of them were
rated B- or lower in January 2012. Ninety per cent of corporates globally that
defaulted last year, including all nine European defaulters, had initial
(first) ratings that were sub-investment grade (BB+ and below). Of the 10% that
were originally rated investment grade, the average time to default – the time
between first rating and date of default – was 17.6 years. Since
1981, only 1.1% of companies globally that were rated investment grade have
defaulted within five years, compared with 16.4% of companies that were rated
sub-investment grade. Ratings also remain relatively stable. Seventy-two
per cent of corporate ratings globally were unchanged in 2012, similar to the
annual average of the last 10 years.
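The stability figure quoted above is straightforward to compute: it is the share of issuers whose rating was unchanged over the period. A minimal sketch, using hypothetical start-of-year and end-of-year rating pairs (illustrative only, not real transition data):

```python
# A minimal sketch of the stability measure: the fraction of issuers
# whose rating at the end of the period equals their rating at the start.
def stability_rate(transitions):
    """Fraction of (start, end) rating pairs that are unchanged."""
    unchanged = sum(1 for start, end in transitions if start == end)
    return unchanged / len(transitions)

# Hypothetical one-year transitions for eight issuers.
transitions = [
    ("AA", "AA"), ("A", "A"), ("BBB", "BBB"), ("BBB", "BB"),
    ("BB", "BB"), ("B", "CCC"), ("A", "A"), ("AAA", "AAA"),
]
print(f"Unchanged: {stability_rate(transitions):.0%}")  # prints "Unchanged: 75%"
```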
Sovereign ratings, likewise, have an excellent long
term track record. In an October 2010 study, the International Monetary Fund
(IMF) found that CRAs provide a robust ranking of sovereign default risk –
meaning defaults tend to cluster in the lowest rating grades. Since 1975, an
average of 1% of investment-grade sovereigns rated by S&P have defaulted on
their foreign currency debt within 15 years, compared with around 30% of those
in the non-investment grade category. All sovereigns that have defaulted during
the past 40 years – including Greece, Grenada and Belize which defaulted in
2012 – had sub-investment grade ratings at least a year before default.
S&P data shows that the relative rank ordering of sovereign ratings
has been consistent with historical default experience. Sovereign ratings have
been no more volatile than ratings on companies and financial institutions
(FIs). In 2012, the group downgraded 17% of rated sovereigns and upgraded 8%,
while 75% were unchanged. Sovereign ratings have also exhibited greater
stability at higher rating levels than at lower levels.
As S&P has said many times, we were very disappointed by the performance of our
ratings on certain US mortgage-backed securities and regret that, like many
others, we did not foresee the speed and severity of the US housing
downturn. In other areas of structured finance, including mortgage
markets outside the US, our ratings have generally held up well in the crisis
and have continued to perform strongly. This includes European structured
finance where default rates have been low, despite the severity of the
recession and property market stresses in Europe. Only 1.4% (by original
issuance value) of rated European securitisations outstanding at mid-2007 had
defaulted by the end of 2012, and S&P ratings on about two-thirds of these
instruments have either been stable or have risen over this period.
There is no mystery about ratings performance. It can be found in the data
published by CRAs and others, and which S&P, for instance, makes freely
available to market participants and the wider public. It also appears on
the ‘ratings comparison’ website maintained by the European Securities &
Markets Authority (ESMA), showing the comparative performance of 18 registered
or certified ratings agencies in the European Union (EU).
Of course, with hindsight, ratings don’t always correlate with how events unfolded. They
are subjective opinions about the future, which can be affected by
unpredictable events and factors; they are not guarantees about default risk.
And investors can sometimes disagree, as reflected in market prices. Yet the
evidence shows that ratings in general continue to be closely correlated with
defaults. That is the real test for rating the raters.