
S&P Q&A (Or: Everybody Loves Ampersands)


Dear Readers: Thanks for all the feedback, questions, and requests for a follow-up post to my latest entry detailing a few of the ways in which rating agencies suck.  I’ve taken many of your questions and prepared a special Part II, which will look a lot like I’m interviewing myself.  Rest assured, though, that most of the questions being asked came from actual conversations with, or messages from, readers like you (though probably not as good-looking or brilliant).

I saved the most popular question for last: “How can we fix it?”  (Feel free to skip ahead.)

Q: Aside from the obvious impending macroeconomic disaster, what makes this downgrade so noteworthy and stupid?
A: From my perspective, two things make this downgrade particularly noteworthy and stupid:

  1. This may be the only downgrade in history in which the downgraded bonds became MORE VALUABLE.  I’m going to repeat that: Treasury bonds became more valuable AFTER the downgrade.  Why’s that?  Well, the downgrade happened; people started panicking; and investors seeking protection sold off stocks (in a massive way) to move their money into the SAFEST INVESTMENT THEY COULD FIND.  What were those miraculously safe investments, you might ask?  That’s right, freshly downgraded securities issued by the Treasury of the United States of America!  Treasury markets had a strong rally while everything else fell off a cliff.  Clearly the markets don’t agree with S&P’s assessment that the United States has a significantly higher risk of default, which brings me to…
  2. We got a debt ceiling agreement, and the credit rating went DOWN.  Forget absolute risk for a moment.  Forget other countries, forget history, forget the letters and calculations and analysis and politics.  Just consider the following: What was the default risk a few weeks ago?  This was when Congress was threatening to force a government default if they didn’t get political concessions (with at least 90 Tea Partiers in the House who actually thought a default might be GOOD in the long run).  How likely were we to default then?  During that whole negotiation the United States was AAA rated.  Now, AFTER THE DEAL (which ensures we’ll be solvent for quite a few years), we get downgraded?  Find me anyone on the planet, aside from a rating agency employee, who honestly believes the United States is riskier now than it was before the debt ceiling increase.  Seriously, I’d like to talk to that person.  (If S&P had lowered the rating during the negotiations and then raised it back to AAA once we had a deal, then I might be OK with the move.  Instead, they did the exact opposite.)

Q: If S&P sucks so much, why do we care what they say?
A: Unfortunately, a lot of people still listen to them.  And even beyond that, a lot of entities (ranging from governments to union pension funds to hospitals) had so much faith in them at one point that references to ratings exist in formalized policies.  There are laws, bylaws and policies on the books that say things like: “This pension fund must have 75% of its fixed income investments in securities rated AA or better by all 3 major rating agencies” or “The portfolio managers will only invest in AAA rated securities.”

Like it or not, ratings matter.  These agencies were given a hugely influential place in our financial system and tasked with being the credit watchdogs.  The predictable human forces of greed and laziness have kept these entities from doing their jobs well.

Q: How is it that other places in the US [like Montgomery County in Maryland] can have a AAA rating while the United States of America does not?
A: It’s worth noting that the rating agencies have separate scales, and Sovereign Nations are their own self-contained category, different from local governments, which in turn are different from corporations.  I agree it’s kind of confusing to use the exact same ranking method for every scale they have, especially when the ranking system in place isn’t anything special.  These letter grades confer no special advantage in clarity or precision; if anything, they actively detract from those laudable goals.

Q: No, seriously, wtf?  All those subprime mortgage bonds had AAA too?
A: Look, I’m not saying S&P is corrupt, but the banks paid the rating agencies for each and every bond issue that got a rating.  Countries, on the other hand, don’t pay the rating agencies anything.

Q: How good are the ratings in general?
A: In general, ratings loosely correspond with creditworthiness.  But what an investor cares about is default risk (meaning: “How likely am I to get my money back with interest?”).  Noted statistician and mathematical busybody Nate Silver found (using data available to everybody) that a country’s Debt-to-GDP ratio alone is a better predictor of default risk than an S&P rating.  This means that S&P spends weeks and weeks’ worth of man-hours to destroy (or at least hide and obscure) information about creditworthiness.  It also means they contribute literally no value to the process (at BEST).  Oops.  He also found that S&P ratings are biased in favor of European countries (how’s that working out for you, Greece/Spain/Italy/Suckers?).  And he made the same point I did about how unbelievably slow S&P is at taking into account new information.
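
If you want to see what “better predictor” means concretely, here’s a minimal sketch of one way to run that kind of comparison (this is Python with completely made-up toy numbers, not Silver’s actual method or dataset): score each signal by how well it ranks the countries that defaulted above the ones that didn’t.

```python
# Toy sketch: compare two default predictors by how well each one RANKS
# risk, using AUC (area under the ROC curve).  All data below is
# invented for illustration; a real test would use historical defaults.

def auc(risk_scores, defaulted):
    """Probability that a randomly chosen defaulter was scored riskier
    than a randomly chosen non-defaulter (ties count half).
    0.5 means no better than coin-flipping; 1.0 means perfect ranking."""
    bad = [r for r, d in zip(risk_scores, defaulted) if d]
    good = [r for r, d in zip(risk_scores, defaulted) if not d]
    pairs = [(b > g) + 0.5 * (b == g) for b in bad for g in good]
    return sum(pairs) / len(pairs)

# Six hypothetical countries: debt-to-GDP ratio, the S&P rating mapped
# to a numeric risk score (higher = riskier), and whether each defaulted.
debt_to_gdp = [0.3, 0.6, 0.9, 1.2, 1.5, 0.4]
rating_risk = [1, 2, 2, 3, 2, 4]   # e.g. AAA=1 ... junk=4
defaulted = [False, False, True, True, True, False]

print("Debt-to-GDP AUC:", auc(debt_to_gdp, defaulted))  # 1.0 in this toy data
print("Rating AUC:     ", auc(rating_risk, defaulted))  # ~0.56, barely a coin flip
```

In this (rigged, illustrative) example, the single free statistic ranks risk perfectly while the expensive letter grades barely beat a coin flip, which is the shape of Silver’s finding.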

Q: How do the rating agencies actually do the analysis?
A: Quite poorly.  They send a team (it can be as small as a couple of guys) to a country to dig around for about a week.  They have meetings with officials, request documents, evaluate financial information, and make their own assessment of the political climate (fun fact: all S&P analysts are extremely well-trained political scientists or historians…nah, j/k).  Then they have some meetings; there’s a committee (to which the entity being rated can usually make a presentation or appeal); and then they slap a rating on it.  For the United States, they put a little more work into the analysis than they do for an average country, but their standards for “sufficient” are surprisingly low.  Also, as I mentioned in my last post, their quality standards are at least equally low (see: the $2 trillion error they made and then ignored).

Q: OK, Mr. Smartypants, if you’re so smart, how would YOU do it?  It’s easy to criticize a complex system that will invariably contain some errors, but do you have any better ideas?
A: Glad you asked.  Here’s a very short list of ideas that would make everything better:

Government Sponsored Enterprises (GSEs), or something similar
Fannie and Freddie are a special kind of entity known as a Government Sponsored Enterprise (GSE): quasi-private organizations (with shareholders) that are also, in part, an extension of the government, with a mission to operate in financial markets in pursuit of a policy objective.  Ratings should work in a similar manner: rating agencies should be at least partly answerable to the government above shareholders.  Pure profit-seeking doesn’t work in this field.  A microeconomist could break down the many ways in which this leads to market failures, but suffice it to say you wind up with informational asymmetries, natural monopolies (or at least oligopolies), and a massive incentive to worsen or create new informational asymmetries for personal profit.  The goal of rating agencies should be to make better information available to everyone.  But given the massive conflicts of interest faced by these organizations (who get paid by the people they’re rating), it would be almost impossible to make substantial lasting improvements while S&P, Moody’s, and Fitch operate as profit-maximizing corporate entities (there are just too many ways to make money by screwing over other people).

The details can change, but the basic argument here is that something needs to be done to better align the incentives of the rating agencies with the incentives of the people who use the ratings (and there’s a problem when the money comes from firms who profit off the ignorance of the people who use the ratings).

Stop The Letter Grades
The letter grades are obscure and confusing.  There are multiple scales, all with an identical grading scheme, and it’s very hard to interpret what they mean.  It’s also incredibly difficult to tell exactly how risky S&P thinks an investment is.  And under the current multi-scale system, it’s sometimes IMPOSSIBLE to determine which bond is a higher risk in the eyes of the rating agencies (for example, what’s riskier: AA-rated countries or AAA-rated states like Delaware?  Good luck getting an answer).

Let’s Use NUMBERS!
What if, instead of obscure letter ratings, we had the rating agencies give us 3 numbers?  Here’s my proposal for a new ratings system (a rough code sketch of how the pieces fit together follows the list):

  1. Time Horizon (measured in years): This is kind of important.  How far in the future are you looking?  If a city does a 30-year bond issue and gets a rating on that issue, does that mean the 1-year bonds are that risky?  The 30-year bonds?  The middle bonds at 15 years or so?  The median bond at 20 years or so?  Are you expecting me to believe that a 30-year bond carries the same default risk as a 1-year bond (that is, if they don’t default in the next year, they won’t default in the next 29 years either)?  Tell me how far out you’re looking, because that’s important information for investors to know.
  2. Probability of Default (measured using a number between 0 and 1): Instead of giving me a letter grade, just tell me: how likely is this bond to default?  You’ve got models, tell me what they spit out.  Combined with the time horizon, this would be very useful.  Instead of saying “These 30 different bonds issued together by the City of Philadelphia all get a rating of A-” you could say “We think the City of Philadelphia has a 2.2% probability of defaulting over the next 10 years.”  If you had a million dollars to invest, which would you prefer?  (NOTE: This has the added benefit of making the rating agencies very easy to grade.  We could, in effect, rate the rating agencies very easily, perhaps rewarding good track records and punishing bad ones.)
  3. Standard Deviation (or some other measure of uncertainty): This one is a little controversial, but in addition to the probability and the timeline, I’d like to give the rating agencies a little wiggle room.  How SURE are you?  Give me a 95% confidence interval.  Is the probability of default 2% ± 1%?  Or is it more like “We think it’s 5%, but it could be anywhere between 1% and 28% because we really don’t understand how a mortgage backed security operates”?  If you’re not very confident about something, let me know and I’ll do some more homework on my own.
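
Here’s the promised sketch of what a three-number rating could look like as a data structure.  (This is Python; all the names, numbers, and the evenly-spread-risk assumption are my own illustration, not any actual agency format.)

```python
from dataclasses import dataclass

@dataclass
class Rating:
    """The proposed three-number rating: horizon, probability, uncertainty."""
    horizon_years: float   # 1. how far out the agency is looking
    p_default: float       # 2. probability of default over that horizon
    ci_low: float          # 3. lower bound of a 95% confidence interval
    ci_high: float         #    upper bound of that interval

    def implied_annual_p(self) -> float:
        """Constant per-year default probability implied by p_default,
        assuming the risk is spread evenly across the horizon (a big
        simplifying assumption, but it shows why 1-year and 30-year
        ratings are different animals)."""
        return 1 - (1 - self.p_default) ** (1 / self.horizon_years)

# A bond with a 2% chance of defaulting in any given year has roughly a
# 45% chance of defaulting at some point over 30 years (1 - 0.98**30),
# which is exactly why the time horizon has to be part of the rating.
example = Rating(horizon_years=30, p_default=0.45, ci_low=0.35, ci_high=0.55)
print(f"Implied annual default probability: {example.implied_annual_p():.1%}")
# -> Implied annual default probability: 2.0%
```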

Let’s look at a recent example of how my proposed system would stack up against the current system:

I have two bonds: a mortgage-backed security and a security issued by the United States government.

Old System: S&P rates both at “AAA” and collects a hefty check from an investment bank to keep giving AAA ratings to mortgage bonds.

New System: S&P, now reporting to the taxpayers, can freely admit, without impacting their paychecks at all, that they think the probability of default on the mortgage bonds is about 3%, but they’re fairly uncertain.  It’s probably between 1% and 3% for a total default and between 1.5% and 20% for a technical default that doesn’t wipe out everything.  There’s a footnote saying that their projections are based on a 10-year time horizon.  Meanwhile, the probability of a US default should be about 0.01%, but given the political climate they’re going to say it’s 1% ± 1%, over the same 10-year horizon.
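
Using the Rating sketch from above, the same example becomes two directly comparable objects, and grading S&P’s track record afterward reduces to comparing their predicted probabilities against what actually happened.  The Brier score below is one standard way to do that; the probabilities come from the example, the code is mine:

```python
# Assumes the Rating class sketched earlier.  Probabilities and
# intervals are the total-default numbers from the example above.
mbs = Rating(horizon_years=10, p_default=0.03, ci_low=0.01, ci_high=0.03)
treasury = Rating(horizon_years=10, p_default=0.01, ci_low=0.00, ci_high=0.02)

def brier_score(predictions, outcomes):
    """Mean squared error between predicted default probabilities and
    what actually happened (1 = defaulted, 0 = paid in full).  Lower is
    better, so agencies with good track records earn low scores."""
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

# Ten years later, suppose neither bond defaulted:
print(brier_score([mbs.p_default, treasury.p_default], [0, 0]))  # 0.0005
```

With letter grades there’s nothing to score; with numbers, every rating becomes a testable prediction.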

As a citizen and investor, which would you prefer?

