George Orwell famously stated that 'There are some things so stupid that only an intellectual can believe them', and this creates biases peculiar to our intellectual elites. John Jost researches how many oppressed people rationalize their oppression, becoming 'Uncle Toms', aiding and abetting the unjust status quo via their agreeableness. For example, he finds that when threatened, people tend to grasp comforting stereotypes, such as that rich people are more intelligent than poor people, and that fat people are lazier than skinny people. Crazy talk.
Less intelligent people tend to confuse 'tend' with 'always', so if you say men are stronger than women, they are more prone to think this implies it is true for every possible pair of men and women. That is clearly a logical error in inference. The twentieth century's singular intellectual insight is that discrimination—the idea that certain groups like women or blacks are all different from, say, white men—is morally wrong and inefficient. Thus, every New York Times article noting some difference between groups of people has a perfunctory paragraph stating, for example, that "this does not imply all women are not as strong as all men", just in case anyone made such an inference.
Modern intellectuals are so afraid of stereotyping they can no longer generalize about people as intelligently as the great unwashed. The thought that obese people are lazier than average seems not merely mean, but illogical, to many intellectuals. Thinking that there are many poor persons much smarter than many rich persons, yet on average rich people are smarter than poor people, does not imply we should sterilize the poor, or never listen to, hire, or marry a less intelligent person. Yet to assume the wealthy are just as intelligent as the poor would imply a conspiracy so vast and efficient at offsetting the natural advantages of intelligence that it defies credulity. Almost all stereotypes are true as generalizations, a useful fact plebeians believe more than intellectuals. Jost seems to think that believing pathetic people are not as virtuous or admirable as rich people merely rationalizes our current, arbitrary hierarchy.
Tuesday, June 30, 2009
A Risk Management Serenity Prayer
God grant me the serenity
to accept the things I cannot change;
courage to change the things I can;
and wisdom to know the difference.
There is profound wisdom in this statement, because it highlights the importance of prioritization. A mathematician can tell you what is true, false, or undecidable using irrefutable logic, but that's generally not very helpful, which is why mathematicians are generally considered smart in that idiot-savant way.
I thought of this when I read Paul Wilmott's blog post about why quants should address more of the outside-the-box questions. He gives the example of, basically, what is the probability a magician will pull a given card out of a deck, where you get to 'randomly' name the card (eg, ace of clubs). With a fair deck the odds are 1/52, but since a magician is doing the drawing, this is probably not a 'fair' deck. So an obvious potential answer is 100%, especially if you are being asked in front of a large audience.
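For concreteness, here is a toy version of the magician calculation in Python; the priors on the deck being rigged are made-up assumptions, not numbers from Wilmott:

```python
# Toy Bayesian take on Wilmott's magician example (priors are assumptions).
# P(named card drawn) = P(fair) * 1/52 + P(rigged) * ~1.0

def prob_named_card(p_rigged):
    """Probability the magician pulls the exact card you named."""
    p_fair = 1.0 - p_rigged
    return p_fair * (1 / 52) + p_rigged * 1.0

for p in (0.0, 0.5, 0.99):   # prior probability the trick is rigged
    print(f"P(rigged)={p:.2f} -> P(named card)={prob_named_card(p):.3f}")
# With P(rigged)=0 you get the textbook 0.019; with 0.99 you get ~0.990.
```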
All well and fine, but Wilmott draws from this story that this is what quants should focus upon: the outside-the-box things that bedevil real life. Look for the magicians, not the simple odds in a fair deck.
Consider high-profile fiascoes such as Metallgesellschaft, Orange County, Enron, and AIG. These were not properly calculated risks that went awry, nor were they outright fraud where an unauthorized intraday position blew up. They were the result of investors or management not fully understanding the risks being taken (the CEO of AIG was telling employees they had no, zero, exposure to mortgages throughout most of 2008). These risks—breakdowns in incentives, communication, assumptions, etc.—are called operating risks, and represent a residual of all things not cleanly within credit or market risk. If operating risks are the primary reason financial firms fail, an emphasis on refining models whose assumptions are presumed true seemingly misses the point.
Operating risk is neglected by risk management for good reason. It is impossible to quantify existing operating risks, which in turn makes it nearly impossible to evaluate methods of monitoring and reducing these risks. One can endlessly debate assumptions, but invariably there comes a time to make assumptions and then work from them. To merely assume anything can happen in a particular instrument invariably implies you should not be investing in that instrument, because if it makes money under the 'anything can happen' assumption it is obvious arbitrage.
If the primary risks facing financial companies are from things 'outside the box', shouldn't one focus outside the box? That is, if what brings down most companies are flawed assumptions or poor controls rather than poor luck, then most of the true risk for a trading operation is not in stress tests or Value-at-Risk, but in the risks that exist outside a firm's precisely calculated risk metrics.
Consider an analogy from American football. The biggest single metric determining wins and losses is turnovers: gain a turnover and you gain a huge amount of field position, and vice versa if you lose one. While you should tell your players to hold onto the ball and not throw interceptions, this can't be the focus of your game preparation. There is a lot of luck involved in turnovers, and generally, a team fighting to catch up, or afraid of getting the snot smacked out of them, fumbles more. Focus on what you can improve.
Most high-profile risks appear in retrospect to be the result of avoidable vices such as overconfidence, laziness, fraud, and gross incompetence. Yet complicating this picture is the fact that traders are notorious for continually expanding the scope of products they offer, especially because these cutting-edge products tend to have higher profit margins. This is a risk a profitable trading floor cannot avoid; by the time a product is fully understood by independent risk managers, the large margins will be gone. As opposed to academia where one can spend a long time on a single issue that one defines, in the private sector quants have to come up with solutions for many problems they do not fully understand, and do not have the luxury of saying 'it may lose 100% of its value' as if that's helpful.
One sign of good judgment is the ability to make wise decisions when information is incomplete. Knowing how to prioritize one's focus is a big part of that. There's nothing more pointless than a bunch of high IQ quants—whose comparative advantage is not the 'bigger picture'—focused on that bigger picture. Have them calculate the implications of standard assumptions. This is yeoman's work, essential but insufficient.
Monday, June 29, 2009
Everything is an Empirical Matter
In Boldrin and Levine's book Against Intellectual Monopoly, they note that there are no a priori grounds for their argument. That is, there are offsetting costs and benefits to intellectual property, so it is a matter of empirically estimating these costs and benefits. In the book's case, the focus is mainly on patents, as opposed to confidentiality agreements, non-compete agreements, or trade secrets.
I think this is true for almost any economic debate. Theoretically, there's a case for quotas: assume sufficiently high increasing returns to scale and some spill-over effects, and you have the case for quotas (this kind of pretty reasoning led to Krugman's fall to the Dark Side). The issue is empirical: if you give a legislature the ability to grant quotas, to what degree will they be used for these reasons, as opposed to pure rent generation via government fiat?
Thus, theory is nice merely because it tells us what variables to look at when doing an empirical analysis. In practice, with enough data, the variables speak for themselves, and it will be obvious what they are saying. The problem is that there are an infinite number of potential effects, and potentially interesting variables to control for. For example, looking at stocks, we may be interested in their annualized returns and volatility. If stock returns are lognormally distributed, this completely defines their distribution. If they have fat tails, we need further data: higher moments, or extremum statistics. If markets are not efficient, perhaps it helps to look at autocorrelation in returns over various horizons (daily, weekly), or various technical patterns (head-and-shoulders). The state space is infinite; you need a theory to constrain it.
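As a sketch of what that first pass looks like, here is the sort of summary one might compute; the returns below are simulated stand-ins for real price data:

```python
# Sketch of the statistics mentioned above, computed on simulated
# daily returns (a real analysis would use actual price data).
import numpy as np

rng = np.random.default_rng(0)
r = rng.normal(0.0003, 0.01, 2520)          # ten years of fake daily returns

ann_ret = (1 + r).prod() ** (252 / len(r)) - 1
ann_vol = r.std(ddof=1) * np.sqrt(252)
skew = ((r - r.mean()) ** 3).mean() / r.std(ddof=0) ** 3
kurt = ((r - r.mean()) ** 4).mean() / r.std(ddof=0) ** 4   # ~3 if normal
auto1 = np.corrcoef(r[:-1], r[1:])[0, 1]                   # lag-1 autocorrelation

print(f"ann. return {ann_ret:.2%}, ann. vol {ann_vol:.2%}")
print(f"skew {skew:.2f}, kurtosis {kurt:.2f}, lag-1 autocorr {auto1:.3f}")
```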
Similarly, when looking at what drives these effects, you need to control for other things. You might look first at how the 'market' affects returns contemporaneously, or at industry effects. You might look at size, or value factors. Again, the state space is infinite, and you need a theory, a story.
So, theory is very useful, but usually theory merely suggests something to look at. The data then say how the functional form fits. If theory says variance, but it turns out that the result is really a function of the square root of variance (volatility), you can be sure that in 10 years no one will remember the theories that proposed variance, and it will all appear an unbroken advance in science.
So, I don't get too excited by proofs, or the precise nature of the functional forms. Just identify what is important as an input and output, and then roll up one's sleeves and see what the data say.
For people who hate models, the key thing to remember is that they provide a useful scaffold for fitting real data. If you fit data without theory, you need a lot of it so the fit is not really bumpy, and without a theory focusing you on a small set of variables, the combinations of potentially interesting data are simply too big.
Barry Ritholtz's Offer
I don't read Barry Ritholtz's blog, so I don't know much about how he thinks. I did skim his book, which I found very confused about what it means to explain something. He proudly notes on his blog 30 people or groups to blame for the financial crisis (one being Congress, so perhaps all in all there are 100 groups at fault). That's not an explanation. A useful way of looking at the data compresses it, so you can understand more with less. Merely cataloging everyone important who did not anticipate an event that by definition was not anticipated (a decline in asset values) is pointless.
He takes his epistemological confusion to the next level by offering to take one side of a $10-100k wager on the proposition 'Is the CRA significantly to blame for the credit crisis?', to be decided by 'a fair jury.' He is arguing for the negative. Anyone who thinks 'a debate' has ever settled a contentious issue is rather naive. I suppose most of this debate would fall under the 'what do you mean by significant' umbrella, where he would retreat to the meaning 'necessary and sufficient', which no one asserts.
You can read about how the CRA morphed into a monster in this piece by Stanley Kurtz:
Banks' merger or expansion plans were rarely held up under the CRA until the late 1980s, when ACORN perfected its technique of filing CRA complaints in tandem with the sort of intimidation tactics perfected by that original "community organizer" (and Obama idol), Saul Alinsky.
So the CRA was a tactic by nonprofits looking to redistribute wealth to their constituents. Classic politics. But then the CRA was used to gather pledges for lending targets and aid to nonprofits, which were then used to pay the measly down payments needed, and then the bank lenders (like Golden West) and homebuilders would profit. They, in turn, would give grants to the nonprofits and to the legislators encouraging these activities. The naked self-interest in this game, all under the pretext of helping the little guy, is pretty obvious.
Now, CRA commitments were in the trillions of dollars. You don't spend a trillion dollars without creating a lot of vested interests, and the same logic that underlay the CRA underlay the arguments by the Congressional Black Caucus for keeping Fannie and Freddie from greater oversight. The same Boston Fed study used to rationalize greater CRA lending was used to rationalize lower, even absent, down payments. The one exogenous mover in this whole mess was the thought that increasing home ownership was good business, in that it led to happier, wealthier communities. This derived from the idea that when people don't lend purely out of discrimination, you can rectify a moral wrong costlessly, the way breaking the color barrier in baseball made the game better, made the first movers better off, and was the right thing to do.
Unfortunately, they made an error in assuming bankers leave money on the table because of their irrational fear/hatred of minorities. This insidious assumption, and how it played into the zeitgeist, are discussed admirably in Stan Liebowitz's piece, Anatomy of a Train Wreck.
The CRA was insignificant in the following sense: I'm sure that without the CRA, another vehicle for promoting a political patronage system, focused on race and masked as justice, would have been found. That home buyers, investors, rating agencies, academics, legislators (Republican and Democrat), and regulators all agreed that this initiative was a good idea highlights how the real common factor was the idea that pure, inefficient discrimination was prevalent. That's the only way you can change an equilibrium level of homeownership without causing a mess.
Thursday, June 25, 2009
Youth Not Liking Catcher in the Rye
The classic (1951) book of teenage angst, Catcher in the Rye, is about a young man, Holden Caulfield, who finds the world filled with phonies. Adults are shallow, hypocritical, insignificant. He seems to have Tourette's syndrome, as every other word is 'goddam'. The New York Times reports that current teens find the protagonist whiny, as opposed to 'deep'. Perhaps reality television and more complex TV shows are paying off.
Steven Johnson's Everything Bad is Good for You argues that TV and video games are getting more complex, more engaging, and just better. Shows in the 60s and 70s were linear, with a minor comic subplot (think Starsky and Hutch, Dragnet). Today, shows like The Sopranos and Desperate Housewives take a multithreaded approach, where characters are much less black and white. The net effect is that your average TV watcher is more sophisticated than a generation ago, in the same way that New Yorkers were more sophisticated than country bumpkins in the 1920s (the term 'corny' relates to the observation in the 1920s that rural--corn fed--audiences tended to like trite or overly sentimental jokes or scenes, presumably because of their ignorance).
The whole navel-gazing genre, as in Bartleby, the Scrivener, where one is supposed to feel bad for someone who can't handle reality, I always found annoying. This is the beatnik idea: that self-discovery is the number one priority of people, and that people who are part of an organization (eg, the military, a corporation) with its external values are either deluded or empty and pathetic. This idea has been very damaging, as it invites a pointless narcissism, elevating a lack of focus and instant gratification. I believe self-discovery is important, as I describe in Finding Alpha, mainly for finding your competitive advantage, what you are best at. This is related both to the self and to the market, because if you are good at what others do not value, it is not good for you. The fact is, happiness and prosperity come from focusing on others, not oneself. Loving a child, a god, serving a customer, are all other-directed, and generate a lot of happiness.
I was reminded of Holden Caulfield when I read Michael Lewis's book Liar's Poker, a book about Wall Street written by a young man who worked exactly 3 years in the business. Lewis was appalled by the hypocrisy and shallowness of his rich superiors, whom he thought were all phonies. It was a bunch of funny anecdotes about the rich and famous that purported to give one an understanding of finance. It didn't. If you're over 30, think about how clueless those 25-year-old Ivy League kids in your company are.
Hopefully, our youth's rejection of adolescent whining is a permanent evolution in the zeitgeist, like when we learned that zero is a number. In the future, perhaps people will become sufficiently sophisticated to learn that knowing about the major personalities in debates or big organizations--their sexual proclivities, drug usage, their family history--is not the same thing as knowing about the ideas or organizations.
Wednesday, June 24, 2009
Conspicuous Missing Legislation
A lot of top-down financial legislation is in the works, but there is nothing relating to actual home buyers. Whatever happened to the quaint 20% down on a home purchase? To income-to-debt limits? In the crazy times those things weren't verified, and there were nonprofit organizations receiving money from the Federal Government that paid for down payments, so everyone was happy. To blame this on 'derivatives' is absurd.
That's the front end of the mortgage meltdown, the prime mover. David Reilly of Bloomberg asks why not require 'borrowers have some of their own money on the line?' Or, as a Saturday Night Live parody notes, profound but counterintuitive bankruptcy advice is merely: don't buy things you can't afford. Indeed, the government still (ie, today!) has a prominent 3% down homebuyer initiative. Given home price volatility, a 3% down no-recourse mortgage is a giveaway because of its option value (keep the upside, lose 3% if wrong). But that's related to hard-working people as opposed to abstract caricatures like 'bankers' and such.
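To see the option value, here is a back-of-the-envelope Monte Carlo; the 15% home-price volatility, one-year horizon, and zero expected appreciation are all illustrative assumptions:

```python
# Back-of-the-envelope option value of a 3%-down, no-recourse mortgage.
# All parameters are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(1)
price, down, vol, horizon = 100.0, 3.0, 0.15, 1.0   # house=100, 3 down, 15% vol, 1 yr
loan = price - down

# Lognormal price paths with zero expected appreciation.
paths = price * np.exp(rng.normal(-0.5 * vol**2 * horizon,
                                  vol * np.sqrt(horizon), 100_000))
# No recourse: keep the upside, walk away (losing the 3 points) on the downside.
payoff = np.maximum(paths - loan, 0.0)
print(f"down payment at risk: {down:.1f}, expected equity value: {payoff.mean():.1f}")
# Expected equity (~7.5) comfortably exceeds the 3 points at risk.
```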
The obvious answer is that this would have a disproportionate impact on the poor, which does not play well when grandstanding new legislative initiatives. By logical extension, it would disproportionately affect Non-Asian Minorities (NAMs). Better to blame the greed of rich people, mathematical mistakes by quants, anyone but the actual people instigating the necessary instruments for this boondoggle, or their 'well intentioned' legislators. This urge to blame the wealthy for anything that goes wrong clearly plays well to the median voter. Perhaps I should pitch a book for a new 'grass-roots' movement against the plutocracy: 'My Struggle: How Smart, but Greedy and Rich, Bankers Screwed Us.' I just have to find a comp to convince my publisher this kind of work has a natural audience...
This is why Plato hated democracy. Popular means crap, as in public golf course, school, bathroom, recreation center. It means placating the mob, unrelated to the weight of their opinions. Our founding fathers fought to establish a republican democracy, but now all we see are democracy fetishes; indeed, legislators go out of their way to emphasize their low origins and how their legislation helps the lowest levels of society (ignoring illegal immigrants, of course). So current rectifications conveniently exclude the actual home buyers who got us into this mess, because they are too sympathetic to the median voter.
Tuesday, June 23, 2009
Bloggers Address Gary Gorton's Paper
One reason I think Gary Gorton's work is so useful is that it reflects his appreciation of how subtle this financial crisis was. If you were totally outside it, or just calling for some unspecified disaster, it is tempting to see the crisis as something really obvious (eg, the product of greed, hubris, and poorly targeted regulation).
Recently Fed Chief Ben Bernanke recommended people read Gary Gorton's latest paper, probably after reading my blog post praising it (heh).
So you can watch Gorton's trenchant analysis get transformed into tripe in real time. James Kwak, Ezra Klein, Mike Rorty, Mark Thoma, Felix Salmon, and Dr. Manhattan jumped in. I'm not saying they are all generating tripe, merely that it basically demands a reader have the correct prejudices to draw the correct inferences from the union of all these writings.
The debate is taking on the character of any decentralized collective, where everyone distills a different key point. Part of this group dynamic is simply specialization, where people focus on their unique insights because they want to say something new (interesting ideas are important, true, and new). But it is very confusing when everyone agrees with the basics of what everyone else says (eg, there were regulatory failures), excepting the priority of the concerns.
First, I agree with Dr. Manhattan that regulation has a poor batting average in the financial sector, and merely saying our new regulators should be smart, disinterested, and hardworking is like saying our political leaders should be so too. They aren't, and never have been. Highlight a specific regulatory idea, because merely calling for 'more' only ensures that it will be more of the same.
Secondly, Kwak and Klein get it backward when they first describe a repo secured by an 'informationally insensitive' derivative, then talk about a bank run caused by investors afraid that the bank will default. In the repo problem described by Gorton, the repo collateral--mortgage securities rated AAA--is what became risky or 'informationally sensitive'. This was the basis of the contraction, the prime mover, which led to Bear's and AIG's collapse, not Bear and AIG leading to mortgage problems. The causation arrow's direction is very important for both diagnosis and potential cures.
Felix Salmon notes that we shouldn't guarantee informationally insensitive assets, as it encourages the belief that investing is riskless. I agree, but I don't think it is possible to prevent investors from creating some new 'riskless' time bomb in the future. The situation is, like Minsky's financial instability hypothesis, endogenous. Currently, investment grade securities are viewed like junk bonds circa 2006. For example, at the 5-year point, A-rated bonds trade at 170 basis points over Treasuries, the same spread BB-rated bonds traded at in June 2006. Historically, A-rated bonds have about a 0.1% annualized default rate, BB-rated about 1.15%, so currently people don't think anything is riskless.
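The arithmetic behind that comparison is simple; the 40% recovery rate below is a conventional assumption, not a number from the post:

```python
# Spread vs. expected loss for the A-rated example (recovery is an assumption).
default_rate = 0.001      # ~0.1% annualized historical default rate, A-rated
recovery = 0.40           # assumed recovery on default (common convention)
spread = 0.0170           # 170 bp over Treasuries at the 5-year point

expected_loss = default_rate * (1 - recovery)   # ~6 bp per year
premium = spread - expected_loss                # what's left after default losses
print(f"expected loss {expected_loss * 1e4:.0f} bp vs spread {spread * 1e4:.0f} bp "
      f"-> {premium * 1e4:.0f} bp of risk/liquidity premium")
```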
After another 50 years of no defaults, the new AAA asset securities will be of two types: an asset class with an even longer period of no default, like US Treasury debt, or the next generation of mortgage-backed securities, which will have explicitly addressed the issues relevant to this current crisis. In the latter case, a reasonable argument can be made that such securities should not include the 2007-8 scenario in their future default rate expectations, because there has been a structural change specifically rectifying the mistakes, so that the 2007-8 scenario is no longer relevant (eg, all mortgage ABS now prices a 50% collateral value collapse as having a 3% probability, unlike previously).
As a practical matter, worrying about securities with 0.01% annualized default rates is a waste of time, like worrying about World Wars. Sure, they happen, but they are always different, you probably will have no control, and most people won't experience such a crisis in their entire working life. So the asset becomes, through no mandate or official diktat, money, and through the money multiplier, a keystone in any future financial collapse. If people generally believe an asset is AAA, it will be treated like money endogenously through the process Gorton outlines. Such assets do not acquire this characteristic merely by a rating agency's say-so; people actually believe it is true using all of their insight. They are usually right, but not always.
I think it is much better to focus on the regulations we know work, such as stable property rights, and accept that growing economies will have recessions. Robert Lucas has written that "the potential for welfare gains from better long-run, supply side policies exceeds by far the potential from further improvements in short-run demand management". The current crisis is about short-run demand management. The mistakes, with hindsight, are obvious and should be addressed. But the general take-away should not be how to prevent anything like this from happening again, because it is going to happen again in spite or because of anything we do now. Remember, crashes are endogenous, but so are recoveries.
Monday, June 22, 2009
Samuelson Exemplifies my Hypothesis
In an interview with Conor Clarke, Paul Samuelson states:
You know that happiness is: 'Having a little more money than your colleagues.'
This preference is for relative status, and it is a zero-sum game. The standard utility function is ignorant of what others have; utility is solely a function of one's own wealth. An increasing, concave utility function is a necessary and sufficient condition for risk aversion, and is why economists believe that risk must generate a return premium over the risk-free asset. Yet in practice no one is indifferent to others' performance, and empirically, risk premiums are the exception, not the rule.
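For readers who want the textbook logic, here is a minimal sketch of why concavity means risk aversion; the log utility function and the fair gamble are assumed examples:

```python
# Why concavity implies risk aversion: with u(w) = log(w) (concave),
# a fair 50/50 gamble has lower expected utility than its expected value.
import math

w = 100.0
gamble = [(0.5, w - 20), (0.5, w + 20)]            # fair gamble, E[W] = 100

eu = sum(p * math.log(x) for p, x in gamble)       # E[u(W)]
ce = math.exp(eu)                                  # certainty equivalent
print(f"E[u(W)]={eu:.4f}, u(E[W])={math.log(w):.4f}, CE={ce:.2f}")
# CE ~ 97.98 < 100: the agent would pay ~2 to shed the risk, i.e. a risk premium.
```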
Consider the following ideas from the CAPM gurus, whose model is predicated on ignoring what others do:
“I want a product to be defined relative to a benchmark."
~Bill Sharpe
"Most investors are probably sensitive to the risk of being different from the market, even if overall variability is no higher. Value stocks do not outperform market portfolios regularly or predictably—if they did, they would not be riskier ."
~ Eugene Fama
Given this reality, people define risk relatively.
Consider a choice between two hypothetical worlds: one in which you earn $100,000 a year in perpetuity while others earn $90,000, and another in which you would earn $110,000 while others earn $200,000. In surveys, almost everyone prefers the world which is in aggregate poorer, because they would be relatively richer. Eugene Fama and many others noted that 'small stocks were in a depression' in the 1980s, even though they rose about the same as in the 1970s; it's just that in the 1970s they were relative outperformers, in the 1980s relative underperformers. For someone with sufficient food and shelter, relative wealth is the priority.
The implication of this kind of thinking is that risk becomes a deviation from a consensus, or market portfolio. If you are totally out of the market with your savings, you are taking a risk, because if the market goes on a bull run and you allocated none of your wealth to the market portfolio, you are relatively impoverished. People are not indifferent to this, which is why benchmark asset allocations are so perennially popular: people want to know what their neighbors are doing so they know how to define their risk.
| | Total Return X | Total Return Y | Avg | Relative Return X | Relative Return Y |
|---|---|---|---|---|---|
| state 1 | 0 | -20 | -10 | +10 | -10 |
| state 2 | 20 | 40 | 30 | -10 | +10 |
As shown in the table above, Y is usually considered riskier, with a 60-point range in payoffs versus a 20-point range for X. Yet on a relative basis, each asset generates identical risk. In State 1, X is a +10 outperformer; in State 2, X is a -10 underperformer, and vice versa for asset Y. In relative return space, the higher absolute volatility asset is not riskier; the reader can check this for any example in which the two assets have the same mean absolute payout over the states (i.e., the average for asset X and asset Y is the same). The risk in low volatility assets is their losing ground during good times. If X and Y are the only two assets in the economy, equivalent relative risk can be achieved by taking an undiversified bet on X or on Y, which is identical to taking a position in not-Y or not-X. The positions, from a relative standpoint, are mirror images. Buying the market, in this case allocating half to each, meanwhile, generates zero risk.
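A quick check of the table's claim, using the payoffs above:

```python
# Verify: relative to the 50/50 'market', X and Y carry identical risk,
# and holding the market itself has zero relative variance.
import numpy as np

X = np.array([0.0, 20.0])        # payoffs in state 1, state 2
Y = np.array([-20.0, 40.0])
market = 0.5 * X + 0.5 * Y       # the Avg column: [-10, 30]

for name, asset in (("X", X), ("Y", Y), ("market", market)):
    rel = asset - market         # relative return in each state
    print(f"{name}: relative payoffs {rel}, std {rel.std():.1f}")
# X: [+10, -10] and Y: [-10, +10] have the same dispersion; market: [0, 0].
```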
The proof of this is rather straightforward, and I outline several models in this SSRN paper here.
Levitt's Abortion Result
Steve Levitt's most famous empirical finding, in his best-selling Freakonomics, is that abortion cuts crime. The basic idea is that abortions cut unwanted births, and because unwanted kids would get less nurturing, they would be more prone to crime. Hence, abortion lowers crime. Looking at the data between states with interaction terms, using 1973 as the cut-off date for legal abortion in the US, Levitt and Donohue argue their analysis supports this hypothesis. Many have found data contrary to this, and note that those who abort appear to have socio-economic characteristics suggesting they would be better than average parents.
In a new NBER paper, Theodore Joyce notes:
Economists, on the other hand, have corrected mistakes in the original analyses, added new data, offered alternative tests and tried to replicate the association in other countries. Donohue and Levitt have responded to each challenge with more data and additional regressions. Making sense of the dueling econometrics has proven difficult for even the most seasoned empiricists.
Empirical debates can be very tricky. But the bottom line is that a positive, true result becomes clearer the more data and people look at it. If the issue becomes less clear, this favors the null that there is no effect. With GMM, instrumental variables, and interaction terms, you can torture the data into confessing to anything, but when others try to replicate your results without the same bias, such confessions are rarely observed.
True? No. But Strongly Felt!
In a review of Michael Harrington's The Other America, the wonkish book that motivated, or rationalized, so many 1960s anti-poverty programs, the reviewer writes:
In 1999, Time magazine named "The Other America" one of the 10 most influential nonfiction books of the 20th century. But how relevant does it remain today? As social theory, it is deeply flawed. Harrington's culture-of-poverty thesis was at best ambiguous, at worst an impediment to making the case for what he regarded as the real solution. (In later books, he made no use of the term.) But what remains fresh and vital in "The Other America" is its moral clarity.
'Moral clarity' is a euphemism for a strongly held belief in right and wrong, usually used by those who agree with the dichotomy. 'Manichean', 'platonic', 'naive', 'simplistic', 'absolutist' are common criticisms of these views, usually by the left towards the right.
'Moral clarity' is one of those virtues that is ambiguous by itself, because allied with a misguided notion it is the basis for a great amount of evil. Strong moral feelings without a good discrimination mechanism are like a powerful gun with no aim (Mr. Evil himself, Adolf Hitler, had moral clarity in abundance). As Oscar Wilde noted, many people die for sincere beliefs that are rather absurd. When the most singular compliment one can pay a former intellectual pertains to their 'moral clarity', you can be sure they are irrelevant to current debates, relevant only to biographers and historians.
Sunday, June 21, 2009
Risk Factors and Sex Appeal
Both relate to human preferences. Our perception of risk is primordial the way sex appeal is. People react to risky assets the way they respond to sexy women: the response is pervasive and predictable (though there are many individual exceptions). You can mathematize it, showing how white sclera and smooth skin can be measured, and how these attributes relate to evolutionarily successful mating strategies, which is why we find them attractive, but this all backs up intuition. The key is, human preferences that exist can be modeled and analyzed at deeper levels. If you mathematize a human preference in a way that generates absurd implications (e.g., that you should find fat, old women attractive), then you can be sure that thinking is on the wrong track.
See more here.
Saturday, June 20, 2009
Larry Page's Commencement Address
Just a beautiful tribute to his father. I think I have something in my eye...
Thursday, June 18, 2009
Modern Risk Factors
In his book The Myth of the Rational Market, Justin Fox has a little mention of Eugene Fama and Ken French's three-factor model:
The Fama-French 'three-factor model' as it came to be known, and the subsequent four-factor model that included momentum, weren't really economic theories. They were exercises in data mining, with dubious explanations tacked on after the fact. What's more, they were exercises in data mining that revealed several time-honored Wall Street strategies--dismissed by finance scholars since the 1960s as folklore or worse--to be consistent money makers.
I delve deeper into the whence and why of the factors. It isn't obvious these are data mined, because they are prominently used by academics and practitioners for benchmarking, so either everyone's an idiot, or we have some explaining to do. I go over the anomaly soup that gave rise to size and value. The size effect was initially >15%, now <1%. What did the factors initially represent? Financial distress. That didn't work: measure distress independently and you get a very strong, and anomalous, negative relationship. Hmmm. Fama and French's factors--the market, size, and value--are the most dominant risk factors applied outside the Capital Asset Pricing Model's singular 'beta'. They are ubiquitous in the academic literature and also in fund style guides.
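For the mechanics, a three-factor model is just a regression of excess returns on the factor returns; this sketch uses simulated data, but the real monthly factors are available from Ken French's data library:

```python
# Mechanics of a Fama-French three-factor regression, on simulated data.
# Real factor returns (Mkt-RF, SMB, HML) are on Ken French's data library.
import numpy as np

rng = np.random.default_rng(2)
n = 240                                           # 20 years of monthly data
mkt, smb, hml = rng.normal(0.005, 0.04, (3, n))   # fake factor returns
excess = 0.3 * mkt + 0.5 * smb + 0.4 * hml + rng.normal(0, 0.02, n)

X = np.column_stack([np.ones(n), mkt, smb, hml])  # intercept + three factors
alpha, b, s, h = np.linalg.lstsq(X, excess, rcond=None)[0]
print(f"alpha={alpha:.4f}, beta={b:.2f}, size={s:.2f}, value={h:.2f}")
```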
See this brief (2:37) discussion of how size was essential in uncovering that CAPM betas do not work.
See my longer videos for more, or buy the book.
More New Regulation!
The problem with Washington oversight is they haven't a clue what is important, only what is popular. The pitchfork-and-torches crowd is against 'rich guys in suits' and derivatives, and wants to appear pro-active. So now we have:
Firms may have to put "warning labels" on their alternative products or require applicants to fill out financial-experience questionnaires, according to the administration.
"This is a game changer," said Ed Mierzwinski of U.S. Public Interest Research Group,
Warning labels? Has anyone seen a mortgage in the past 5 years? There are tens of pages, and you have to initial it in 17 places, so many none are read by your average borrower. So now we will have to initial in 34 places. If the warning light is always flashing, people ignore it. But to prioritize implies understanding the relative magnitudes of risk, and that is outside their scope. This doesn't change anything.
Getting rid of the OTS, and moving them into the Fed? Yawn. Everyone gets new business cards. The new Consumer Financial Protection Agency can now "enforce rules across a slew of financial products", and chances are these will involve a lot of busy work, mercilessly slamming anyone trying to sell a derivative that no longer has a market. They will probably also prevent financial institutions from diversifying their product base, making friend with industry lobbyists eager to prevent competition, all in the name of preventing another systemic collapse. But they will also now micromanage Community Reinvestment Act requirements, which surely had absolutely nothing to do with this crisis.
But what about the prime mover in all this, mortgage underwriting. You know, the loans without which derivatives and banks would not have collapsed? The FHA actually encouraged a program that allowed home buyers to pay up to 6% of the down payment, which is nice because that means no money down in many cases! The positive feedback loop of nonprofits getting Federal grants, giving money to poor people to buy homes they could not afford, supported by homebuilders and lenders who would then donate to legislators, could not be more corrupt. Last year, they quietly shuttered that one, but the FHA still proudly notes "The most popular FHA home loan is the 203(b). This fixed-rate loan often works well for first time home buyers because it allows individuals to finance up to 97 percent of their home loan". And the CRA lending targets that could only be acheived via lowering lending standards? The Consumer Finance Protection Agency supposedly will only double down: "A critical part of the CFPA’s mission should be to promote access to financial services, especially for households and communities that traditionally have had limited access." With the government on the hook, this merely means higher taxes, though, not systemic financial panic. That's a relief.
Regulator shopping, agency rating shopping, are all a sideshow. Sure they are bad, but it didn't matter because no regulators or investors thought those mortgage innovations had any risk. It wasn't like evil guys in suits were manipulating the system, the system was lending too much because it was thought to be profitable, safe, and morally good. It's like blaming binge drinking on beer commercials. Sure some knew it was a train wreck, but they were lonely voices, and giving Washington more power would have done zero back in 2006 because there was no large support for rolling back the insane mortgage underwriting standards that were implicitly predicated on home prices not falling.
Regulators are making sure the market won't make the same mistakes again. Perhaps the UN should inform Germany not to start any more two-front wars in Europe this century (didn't work out so well in the 20th)?
Wednesday, June 17, 2009
Strategy Momentum?
A big hedge fund is known for hiring a lot of traders, giving them each a small amount of capital, and then firing them when they lose, say, 5% of their capital. With this rule, they are basically running a strategy momentum portfolio. I know the fund is very successful, with about 140 traders at any time and getting rid of 25% or more each year. They have several billion dollars under management.
I'm not sure, however, this is all a smokescreen. They could make all their money doing some basic strategies, and the 100+ traders coming and going acts as camouflage. But any firm I have known implements a capital rule along similar lines: if you make money, you get more capital, if you lose, you get less. For most traders just starting out, this basically means you will be at 10x next year, or out. This make perfect sense if you think their profits are a function of their alpha plus the specific strategy return, which is often difficult for outsiders to see. Nevertheless, the result is pretty straightforward. It is conceivable that momentum in strategies is strong, and only a hedge fund, with its unlimited, discretionary, and secretive scope, can avail itself to it.
Actual strategies usually are not nearly as homogeneous as those reflected by standard hedge fund indices (converts, distressed, long/short), but much more heterogeneous. Think of them as factor bets within asset classes. Further the factors are not simply macro factors like the market, or size, but also correlations, dispersion, credit spreads, mean reversion. The asset class/factor combinations are many. Thus, this 'strategy momentum' strategy would be difficult to test academically. But if it does work, it gives large, multistrat hedge funds a big advantage. Their only disadvantage is that there is a large amount of operational risk with such comings and goings, and from a PR perspective, it sounds horrible. Rarely does a multistrat ever admit to actually doing this, they merely say they are doing individual analysis all the time. That many (most) asset managers basically allocate capital using the rule, 'did you make money last year?', is perhaps why hedge funds seem to generate positive alpha in contrast to long-only mutual funds (such funds don't have access to near the same scope of strategies).
I'm not sure, however, this is all a smokescreen. They could make all their money doing some basic strategies, and the 100+ traders coming and going acts as camouflage. But any firm I have known implements a capital rule along similar lines: if you make money, you get more capital, if you lose, you get less. For most traders just starting out, this basically means you will be at 10x next year, or out. This make perfect sense if you think their profits are a function of their alpha plus the specific strategy return, which is often difficult for outsiders to see. Nevertheless, the result is pretty straightforward. It is conceivable that momentum in strategies is strong, and only a hedge fund, with its unlimited, discretionary, and secretive scope, can avail itself to it.
Actual strategies usually are not nearly as homogeneous as those reflected by standard hedge fund indices (converts, distressed, long/short), but much more heterogeneous. Think of them as factor bets within asset classes. Further the factors are not simply macro factors like the market, or size, but also correlations, dispersion, credit spreads, mean reversion. The asset class/factor combinations are many. Thus, this 'strategy momentum' strategy would be difficult to test academically. But if it does work, it gives large, multistrat hedge funds a big advantage. Their only disadvantage is that there is a large amount of operational risk with such comings and goings, and from a PR perspective, it sounds horrible. Rarely does a multistrat ever admit to actually doing this, they merely say they are doing individual analysis all the time. That many (most) asset managers basically allocate capital using the rule, 'did you make money last year?', is perhaps why hedge funds seem to generate positive alpha in contrast to long-only mutual funds (such funds don't have access to near the same scope of strategies).
Moses Hadas Quotes
Classical scholar and prominent book reviewer Moses Hadas had some great lines:
“Thank you for sending me a copy of your book; I'll waste no time reading it”
“This book fills a much-needed gap.”
“I have read your book and much like it.”
[I have read your book and many similar books]
“Thank you for sending me a copy of your book; I'll waste no time reading it”
“This book fills a much-needed gap.”
“I have read your book and much like it.”
[I have read your book and many similar books]
Tuesday, June 16, 2009
My Book Finding Alpha
I got my complementary copies last week, and noted that after 3 months of not seeing the text, the typos seem to just jump off the page (eg, the index notes mention of "Einstein, Alfred"). My son said "you mean if you write a book, you get a copy for free?" "Yup". His eyes brighten, "that's awesome!"
I think I'm pretty harsh with the economic status quo, yet compared to most popular criticisms I'm pretty soft. On most issues I find myself having more in common with the works of the standard bearers of conventional theory than their critics. That is, I am less sympathetic to the behavioralist literature, or the 'economists are autistic idiots' crowd. The old phrase, 'the enemy of my enemy is my friend', may apply in war and politics, but not so much for ideas. This is only logical because if someone says x=3, and you say they are wrong, there are an infinite number of incorrect alternatives, and clearly as the status quo, the argument for x=3 is not bat-guano crazy.
You can see the 10-minute YouTube video below where I go over my argument (not the anecdotes). I have a whole collection of technical videos that complement my book over at /www.efalken.com/video (these are greater than 10 minutes, so can't be at YouTube), where you can download the powerpoints, and also with links to references. There are just to clarify those parts that are really technical. There's a technical version of my argument at the SSRN website here. The book is available today here at Amazon.
My book is the culmination of a 15-year journey that centers on a few key propositions, and the last third is on practical issues in finding alpha (e.g., the deception, the short-lived nature of strategies, the benefits of naive optimism). As much as my latter chapters really go over tangible strategies and practical stories, it is a niche book. My wife and sister both said it was over their heads, which implies that a smart person without a real financial interest may find this a bit too parochial.
Reading the Myth of the Rational Market, I noted that I highlighted many issues Fox mentions, I just focus more on the data and theory, as opposed to the personalities and anecdotes. That is, while popular books on Risk and Finance make you feel a lot closer to the personalities, who they disagree with, and the names of things they disagree upon, few of these books really get into what the key data are, and why you should believe a particular view, and why this is important. Indeed, in Myth of the Rational Markets you learn Eugene Fama and Dick Thaler strongly disagree as to whether markets should be called 'efficient'. But as their firms, Dimensional Fund Advisors and Fuller & Thaler, seem to offer the exact same strategies, it isn't clear why this matters.
Not that this is an extended academic journal article. There's a lot of personal stories and such in there, and I tried to make clear how the insights are not just true but useful. My companion article here at the SSRN website, goes over some of the data with much greater emphasis on my theory, with a lot more math and development of utility functions. I think someone interested in quantitative finance will see this as self-contained, but as my wife and sister noted, that may be optimistic on my part. Realistically, you probably need to have taken a corporate finance course to find this book intelligible.
To make an immodest plug for my book, I think it is a good read because it offers a consistent, new, important, and true argument. Very few books have all of these attributes.
Consistent: my basic argument is that risk, however measured, is not positively related to expected return. Further, that people tend to pay to take risk at the extreme, causing some assets to have a negative risk premium. Lastly, there is a slight risk premium on the low end of the risk spectrum, primarily as a cash premium.
New: I have yet to see elsewhere the basic idea, that risk in general does not and should not have an expected return premium in general (I note exceptions to this rule). This basically takes us back to pre-Markowitz, when no one presumed that risk explains the persistence of the equity return premium (and thus economic profits) in equilibrium.
Important: Risk, in terms of 'the risk that is priced', is everywhere in finance as an explanation. I'm asserting that, like in the Black-Scholes options model, you can ignore the risk premium and focus on the payoff space: magnitudes and probabilities. As Steve Ross has noted, if the standard model does not work it is actually more interesting, and useful, than if it did.
True: As I mention in the book, you can't test the general theory that risk is unrelated to return in general, because any test is merely a specific metric of risk, and it could be the metric that is flawed. In that way it is untestable. But I present a large survey of data in currencies, bonds, futures, equities, etc--over 20 asset classes--to suggest if risk is priced, it is very counterintuitive, which is implausible because presumably risk is priced because people generally agree on what it is.
I think I'm pretty harsh with the economic status quo, yet compared to most popular criticisms I'm pretty soft. On most issues I find myself having more in common with the works of the standard bearers of conventional theory than their critics. That is, I am less sympathetic to the behavioralist literature, or the 'economists are autistic idiots' crowd. The old phrase, 'the enemy of my enemy is my friend', may apply in war and politics, but not so much for ideas. This is only logical because if someone says x=3, and you say they are wrong, there are an infinite number of incorrect alternatives, and clearly as the status quo, the argument for x=3 is not bat-guano crazy.
You can see the 10-minute YouTube video below where I go over my argument (not the anecdotes). I have a whole collection of technical videos that complement my book over at /www.efalken.com/video (these are greater than 10 minutes, so can't be at YouTube), where you can download the powerpoints, and also with links to references. There are just to clarify those parts that are really technical. There's a technical version of my argument at the SSRN website here. The book is available today here at Amazon.
My book is the culmination of a 15-year journey that centers on a few key propositions, and the last third is on practical issues in finding alpha (e.g., the deception, the short-lived nature of strategies, the benefits of naive optimism). As much as my latter chapters really go over tangible strategies and practical stories, it is a niche book. My wife and sister both said it was over their heads, which implies that a smart person without a real financial interest may find this a bit too parochial.
Reading the Myth of the Rational Market, I noted that I highlighted many issues Fox mentions, I just focus more on the data and theory, as opposed to the personalities and anecdotes. That is, while popular books on Risk and Finance make you feel a lot closer to the personalities, who they disagree with, and the names of things they disagree upon, few of these books really get into what the key data are, and why you should believe a particular view, and why this is important. Indeed, in Myth of the Rational Markets you learn Eugene Fama and Dick Thaler strongly disagree as to whether markets should be called 'efficient'. But as their firms, Dimensional Fund Advisors and Fuller & Thaler, seem to offer the exact same strategies, it isn't clear why this matters.
Not that this is an extended academic journal article. There's a lot of personal stories and such in there, and I tried to make clear how the insights are not just true but useful. My companion article here at the SSRN website, goes over some of the data with much greater emphasis on my theory, with a lot more math and development of utility functions. I think someone interested in quantitative finance will see this as self-contained, but as my wife and sister noted, that may be optimistic on my part. Realistically, you probably need to have taken a corporate finance course to find this book intelligible.
To make an immodest plug for my book, I think it is a good read because it offers a consistent, new, important, and true argument. Very few books have all of these attributes.
Consistent: my basic argument is that risk, however measured, is not positively related to expected return. Further, that people tend to pay to take risk at the extreme, causing some assets to have a negative risk premium. Lastly, there is a slight risk premium on the low end of the risk spectrum, primarily as a cash premium.
New: I have yet to see elsewhere the basic idea, that risk in general does not and should not have an expected return premium in general (I note exceptions to this rule). This basically takes us back to pre-Markowitz, when no one presumed that risk explains the persistence of the equity return premium (and thus economic profits) in equilibrium.
Important: Risk, in terms of 'the risk that is priced', is everywhere in finance as an explanation. I'm asserting that, like in the Black-Scholes options model, you can ignore the risk premium and focus on the payoff space: magnitudes and probabilities. As Steve Ross has noted, if the standard model does not work it is actually more interesting, and useful, than if it did.
True: As I mention in the book, you can't test the general theory that risk is unrelated to return in general, because any test is merely a specific metric of risk, and it could be the metric that is flawed. In that way it is untestable. But I present a large survey of data in currencies, bonds, futures, equities, etc--over 20 asset classes--to suggest if risk is priced, it is very counterintuitive, which is implausible because presumably risk is priced because people generally agree on what it is.
Monday, June 15, 2009
How Not to Cheat
Don't do it too well. I once interviewed with a hedge fund, and they said they only take strategies with Sharpe ratios above 2. I've worked at a couple hedge funds, I never saw one with a Sharpe above 2 that didn't have access to retail flow. As the average fund had a Sharpe below 1, and these generally were diversified pools of several strategies, the Sharpe>2 criteria merely ensured you were either getting frauds or fools. They had over $5B under management at one time, now a couple hundred million. If investors merely talked to these guys, they should have known better.
Results that are too good are simply implausible. But while these guys were stupid, they weren't evil.
In the recent Iranian elections, former jailed reporter Amir Taheri noted:
The statistical probability the sample variance is this small is effectively nil.
I don't want my own fatwa, but I'm skeptical. Perhaps this is a good marketing pitch for statistics professors. Even if you are an evil dictatorship, it pays to know some statistics!
Results that are too good are simply implausible. But while these guys were stupid, they weren't evil.
In the recent Iranian elections, former jailed reporter Amir Taheri noted:
Mr. Ahmadinejad was credited with more votes than anyone in Iran's history. If the results are to be believed, he won in all 30 provinces, and among all social and age categories. His three rivals, all dignitaries of the regime, were humiliated by losing even in their own hometowns.
The statistical probability the sample variance is this small is effectively nil.
I don't want my own fatwa, but I'm skeptical. Perhaps this is a good marketing pitch for statistics professors. Even if you are an evil dictatorship, it pays to know some statistics!
Sunday, June 14, 2009
Do Crashes Support or Disprove 'Rational' Markets?
Noneconomists tend to think 'rational markets' is patently absurd, pointing to various asset bubbles such as the internet bubble, the recent housing bubble, or the 1987 stock market crash. That is, most people think extreme events are evidence against rational markets.
Malkiel and Shliefer once took opposite sides of the market efficiency debate in the Wall Street Journal. Dutch and Royal Shell trade on two separate exchanges. These different listings by law apportion a 60:40 split of cash flows and thus should trade as such, but indeed they vary by as much as 30% from this fundamental equivalence. Shleifer calls this a ‘fantastic embarrassment’ to the efficient markets hypothesis, yet Malkiel also notes it as being within his bands of reasonableness. To me, this highlights that much of this debate gets into semantics, and such debates are rarely fruitful (Wall Street Journal, 12/28/00).
To assert markets are irrational or inefficient, however, one needs to propose a measure of 'true value', and then show that actual market prices diverge from this. As classically worded by economists, any test of market efficiency is a joint test of a market model and the concept of efficiency. Thus, your test may merely be rejecting your market model, not efficiency. You may think this is unfair, but it's simple logic, and you have to deal with it. It is essential to have a specific alternative, because how do you know they are wrong unless you know the right answer? With hindsight, prices that were once really high, now not, were 'wrong', but one has to be able to go back in time and show the then-consensus was obviously wrong. Thus, if you propose, say, some metric of P/E, or dividend payout ratios, that is fine, but then presumably there will be some range of P/Es that, when breached, generate inevitable mean-reversion thus demonstrating the correctness of the P/E ratio. Actual arbitrage, in the form of strategies that generate attractive Sharpe ratios, are necessary, and this is very hard to do.
One big issue in tests of whether prices are 'right' or not is the Peso Problem. The term 'peso problem' has a long history, and I have seen the term attributed to several people, in any case it was first applied to the fact that the higher interest rate one received in the Mexican Peso for decades, was erased in a single day in 1977 when the Peso was devalued by about 45%. In 1982 Mexico did it again. Thus, decades of seemingly higher returns could have merely been the expected probability and size of these devaluations. As these probabilities are small, they are often not seen 'in sample', and the standard errors on these probabilities are sufficiently high that it is very difficult to see if they are sufficient to explain, or even over-explain, a certain return premium. That is, when you have a 1% or 5% chance, annually, of a 75% depreciation, the appropriate offset is 0.75% or 3.75%, a big difference.
Tom Rietz used the 'peso problem' 1988 to explain the anomalously high equity premium puzzle, then estimated around 6%. Big events aren't anomalies, but rather explanations in the rational markets paradigm. Recently, Robert Barro noted that historically, there has been about a 2% chance of a 15% to 45% GDP decline, which would probably cause equity markets to fall 90%. The implication is that many return premiums are really a mirage. Further, volatility is totally rational, not too high, because reasonable, rational people will disagree as to the specific probability, and as they move from a consensus of a 2% to a 5% probability of disaster, the price fluctuates wildly.
Such events are not proof against efficient or rational markets, but rather, supports it, because estimating the probabilities of these important events is clearly very difficult. A rational market should move a lot as people change their estimation that, say, the next Microsoft is extant in a set of internet stocks (with potential future market cap of $200B), or that a worldwide Depression is likely. The Peso Problem literature goes back to the 1980's at least, and fits within the rational market approach as one of the main reasons things that appear anomalous actually are rational. If you think extreme events invalidate rational markets, this implies one has a lot of certainty for the magnitude and probability of highly improbable events, which is not very compelling (eg, what is the probability of a second leg in the current financial crisis? 1%? 10%? 50%?).
Andre Shleifer and Larry Summers once wrote that “[i]f the efficient markets hypothesis was a publicly traded security, its price would be enormously volatile" —-too volatile, supposedly. Presumably Shleifer and Summers think economists are rational and understand that the rational consensus around a proposition can and does vary wildly around the truth. So why can’t market participants also be considered rational and yet have their collective opinions vary wildly over time and space? Truth is a very slippery concept, and whatever it may be for various propositions, it is something reasonable people can often agree to disagree, in aggregate and at different times.
In 2001 the New York Times had two articles by different authors on behavioral economics. The story was a cliche: a stolid conventional wisdom experiences a Kuhnian shift, lead by a small band of outsiders willing to flout traditional ways. The behavioralists reject "the narrow, mechanical homo economicus" and instead argue that " that most people actually behave like . . . people!" One articled noted "Some Economists [the behavioralists] Call Behavior a Key", implying that previously economists never were concerned with 'behavior'. This straw-man smack down has continued in the financial press to this day, and meanwhile, there are no canonical models of asset pricing based on behavioralist insights, merely explanations for well-known anomalies like momentum, size, and value, that were documented outside this literature.
Danny Kahneman, co-author of the Behavioralist Bible Judgement Under Uncertainty: Heuristics and Biases (a book published in 1982 about work mainly from the 1970s) went on to win the Nobel Prize in 2002. Herbert Simon, won the Nobel Prize in 1978 for his insight that humans have limited computing power, and so often satisfice in their optimization. In my dissertation back in 1994, I had to put 'behavioral economics' is scare quotes, but it has been part of conventional wisdom for at least a decade now. It's all grown up now, and shouldn't be judged on its potential anymore. One should apply behavioral biases to market 'data' (not anecdotes, or highly parochial experiments).
Crashes are interesting, but people's obsession with them highlights the hindsight bias more than a real-time, generalizable bias. Prices fluctuate more than we would like. But is it too much? The future is very uncertain, and in the US where so many prominent financial researchers work, we tend to forget we had a very fortunate 20th century (2-0 in World Wars!). Looking at history, where many countries have seen their equity indices get zeroed out (Hungary, Russia, China, Chechoslovakia, Poland), and some centuries are peaceful(13th in Europe) others horrific (14th in Europe), who's to say whether the stock market should be twice, or half, its current level with that sort of state space.
Malkiel and Shliefer once took opposite sides of the market efficiency debate in the Wall Street Journal. Dutch and Royal Shell trade on two separate exchanges. These different listings by law apportion a 60:40 split of cash flows and thus should trade as such, but indeed they vary by as much as 30% from this fundamental equivalence. Shleifer calls this a ‘fantastic embarrassment’ to the efficient markets hypothesis, yet Malkiel also notes it as being within his bands of reasonableness. To me, this highlights that much of this debate gets into semantics, and such debates are rarely fruitful (Wall Street Journal, 12/28/00).
To assert markets are irrational or inefficient, however, one needs to propose a measure of 'true value', and then show that actual market prices diverge from this. As classically worded by economists, any test of market efficiency is a joint test of a market model and the concept of efficiency. Thus, your test may merely be rejecting your market model, not efficiency. You may think this is unfair, but it's simple logic, and you have to deal with it. It is essential to have a specific alternative, because how do you know they are wrong unless you know the right answer? With hindsight, prices that were once really high, now not, were 'wrong', but one has to be able to go back in time and show the then-consensus was obviously wrong. Thus, if you propose, say, some metric of P/E, or dividend payout ratios, that is fine, but then presumably there will be some range of P/Es that, when breached, generate inevitable mean-reversion thus demonstrating the correctness of the P/E ratio. Actual arbitrage, in the form of strategies that generate attractive Sharpe ratios, are necessary, and this is very hard to do.
One big issue in tests of whether prices are 'right' or not is the Peso Problem. The term 'peso problem' has a long history, and I have seen the term attributed to several people, in any case it was first applied to the fact that the higher interest rate one received in the Mexican Peso for decades, was erased in a single day in 1977 when the Peso was devalued by about 45%. In 1982 Mexico did it again. Thus, decades of seemingly higher returns could have merely been the expected probability and size of these devaluations. As these probabilities are small, they are often not seen 'in sample', and the standard errors on these probabilities are sufficiently high that it is very difficult to see if they are sufficient to explain, or even over-explain, a certain return premium. That is, when you have a 1% or 5% chance, annually, of a 75% depreciation, the appropriate offset is 0.75% or 3.75%, a big difference.
Tom Rietz used the 'peso problem' 1988 to explain the anomalously high equity premium puzzle, then estimated around 6%. Big events aren't anomalies, but rather explanations in the rational markets paradigm. Recently, Robert Barro noted that historically, there has been about a 2% chance of a 15% to 45% GDP decline, which would probably cause equity markets to fall 90%. The implication is that many return premiums are really a mirage. Further, volatility is totally rational, not too high, because reasonable, rational people will disagree as to the specific probability, and as they move from a consensus of a 2% to a 5% probability of disaster, the price fluctuates wildly.
Such events are not proof against efficient or rational markets, but rather, supports it, because estimating the probabilities of these important events is clearly very difficult. A rational market should move a lot as people change their estimation that, say, the next Microsoft is extant in a set of internet stocks (with potential future market cap of $200B), or that a worldwide Depression is likely. The Peso Problem literature goes back to the 1980's at least, and fits within the rational market approach as one of the main reasons things that appear anomalous actually are rational. If you think extreme events invalidate rational markets, this implies one has a lot of certainty for the magnitude and probability of highly improbable events, which is not very compelling (eg, what is the probability of a second leg in the current financial crisis? 1%? 10%? 50%?).
Andre Shleifer and Larry Summers once wrote that “[i]f the efficient markets hypothesis was a publicly traded security, its price would be enormously volatile" —-too volatile, supposedly. Presumably Shleifer and Summers think economists are rational and understand that the rational consensus around a proposition can and does vary wildly around the truth. So why can’t market participants also be considered rational and yet have their collective opinions vary wildly over time and space? Truth is a very slippery concept, and whatever it may be for various propositions, it is something reasonable people can often agree to disagree, in aggregate and at different times.
In 2001 the New York Times had two articles by different authors on behavioral economics. The story was a cliche: a stolid conventional wisdom experiences a Kuhnian shift, lead by a small band of outsiders willing to flout traditional ways. The behavioralists reject "the narrow, mechanical homo economicus" and instead argue that " that most people actually behave like . . . people!" One articled noted "Some Economists [the behavioralists] Call Behavior a Key", implying that previously economists never were concerned with 'behavior'. This straw-man smack down has continued in the financial press to this day, and meanwhile, there are no canonical models of asset pricing based on behavioralist insights, merely explanations for well-known anomalies like momentum, size, and value, that were documented outside this literature.
Danny Kahneman, co-author of the Behavioralist Bible Judgement Under Uncertainty: Heuristics and Biases (a book published in 1982 about work mainly from the 1970s) went on to win the Nobel Prize in 2002. Herbert Simon, won the Nobel Prize in 1978 for his insight that humans have limited computing power, and so often satisfice in their optimization. In my dissertation back in 1994, I had to put 'behavioral economics' is scare quotes, but it has been part of conventional wisdom for at least a decade now. It's all grown up now, and shouldn't be judged on its potential anymore. One should apply behavioral biases to market 'data' (not anecdotes, or highly parochial experiments).
Crashes are interesting, but people's obsession with them highlights the hindsight bias more than a real-time, generalizable bias. Prices fluctuate more than we would like. But is it too much? The future is very uncertain, and in the US where so many prominent financial researchers work, we tend to forget we had a very fortunate 20th century (2-0 in World Wars!). Looking at history, where many countries have seen their equity indices get zeroed out (Hungary, Russia, China, Chechoslovakia, Poland), and some centuries are peaceful(13th in Europe) others horrific (14th in Europe), who's to say whether the stock market should be twice, or half, its current level with that sort of state space.
Friday, June 12, 2009
P/Es and the Stock Market
Hindsight is a very powerful bias. For large, infrequent events like the recent financial crisis, it is difficult for many, if not most people, to see this as anything but terribly inefficient, and most of all, predictable. Maybe not right away, but over the long run.
I disagree. As I've mentioned, even Robert Shiller's 2005 updated edition to Irrational Exuberance contained a very cautious look about housing. He noted it rose a lot recently, and such a rise was improbable going forward. But he did not say an aggregate price decline was likely or even significantly probable.
In that vein, the price-earnings ratio is a good example of a metric that seems very useful, but in practice not. Robert Shiller keeps historical data for the US back to 1871 on his website. You can see that P/E ratios vary over the cycle in what appears to be very predictable fashion.
The above graph shows the P/E ratio as well as the future 5 year equity premium, which I defined as the annualized future 5 year stock return minus the average Long Term Treasury bond yield. There's a nice negative correlation between this future return and current P/E ratios (Shiller uses the prior 10 years earnings for his denominator). One thinks, hey, this is a pretty obvious investing signal.
Yet, on a monthly level, the signal is horrible. If you look at the current P/E, and compare this to the then historical data, there's a very small relation with month-ahead returns. Given the upward drift in the stock market over time, and rule based on historal P/Es, where you choose to invest/not-invest based on, say, being in the 90th percentile of P/Es, lowers your annual return.
But, the correlation on a longer basis seems much better, and so, if you chose a rule to say, invest/not-invest every 5 years come hell-or-high-water, this surely generates a much better return? No. One way to see this is the graph below. Here I calculated the ratio of the earnings yield to the Treasury rate(earnings/price normalized by the interest rate as represented by the long term Treasury rate). One sees a very tight relation from 1920 through 1960, perhaps even to 1980, but you would have missed the bull run of the 1890's and 1980-2000. Any rule here, over the entire century, generates no real gain.
The bottom line is that many trends that seem really good, work over long periods of time, but for only half the time. In an up-trending market, anything that tempts you to sit on the sidelines is swimming against the stream. For assets that historically have trended upward, like housing or stocks, it is never obvious when to move totally out of the position.
I disagree. As I've mentioned, even Robert Shiller's 2005 updated edition to Irrational Exuberance contained a very cautious look about housing. He noted it rose a lot recently, and such a rise was improbable going forward. But he did not say an aggregate price decline was likely or even significantly probable.
In that vein, the price-earnings ratio is a good example of a metric that seems very useful, but in practice not. Robert Shiller keeps historical data for the US back to 1871 on his website. You can see that P/E ratios vary over the cycle in what appears to be very predictable fashion.
The above graph shows the P/E ratio as well as the future 5 year equity premium, which I defined as the annualized future 5 year stock return minus the average Long Term Treasury bond yield. There's a nice negative correlation between this future return and current P/E ratios (Shiller uses the prior 10 years earnings for his denominator). One thinks, hey, this is a pretty obvious investing signal.
Yet, on a monthly level, the signal is horrible. If you look at the current P/E, and compare this to the then historical data, there's a very small relation with month-ahead returns. Given the upward drift in the stock market over time, and rule based on historal P/Es, where you choose to invest/not-invest based on, say, being in the 90th percentile of P/Es, lowers your annual return.
But, the correlation on a longer basis seems much better, and so, if you chose a rule to say, invest/not-invest every 5 years come hell-or-high-water, this surely generates a much better return? No. One way to see this is the graph below. Here I calculated the ratio of the earnings yield to the Treasury rate(earnings/price normalized by the interest rate as represented by the long term Treasury rate). One sees a very tight relation from 1920 through 1960, perhaps even to 1980, but you would have missed the bull run of the 1890's and 1980-2000. Any rule here, over the entire century, generates no real gain.
The bottom line is that many trends that seem really good, work over long periods of time, but for only half the time. In an up-trending market, anything that tempts you to sit on the sidelines is swimming against the stream. For assets that historically have trended upward, like housing or stocks, it is never obvious when to move totally out of the position.
Thursday, June 11, 2009
Justin Fox Book Like The Market
The Myth of the Rational Markets, is highly imperfect. It exaggerates, omits, and occasionally inconsistent. Yet, looking at the Barnes and Noble Business & Investments section, it is the best book there I haven't fully read, and much better than any book about the current financial mess.
Fox took exception to my earlier post, in that I criticized his criticism of 'rational' or 'efficient' markets, and Fox noted he presents both sides. I was responding to the reviews, his title, and book cover, and it wasn't clear I hadn't read his book, which is actually a very even handed portrayal of efficient markets, and portfolio theory. Yet I find it amusing he is shocked, shocked, to think one would infer 'Myth of the Rational Market: History of Risk, Reward, and Delusion on Wall Street' suggests markets are generally not rational, or efficient. The cover jacket that states "the efficient market hypothesis has evolved into a powerful myth", and that now "scholars who no longer teach ... the markets are always right." Where would one infer he is criticizing a caricature?
I remember Johah Goldberg complaining that Liberals tended to dismiss his book 'Liberal Fascism' as labeling liberals as fascists, even though within the he states several times he is not saying that in the text. OK. Then why the title? Could it be because 90% of your potential audience has this predilection, and so does he? While we expect book jackets to be over-the-top (lots of brilliant, revolutionary books supposedly), you should expect people to draw inferences from your title. [Truth be told, I didn't title my upcoming book, but I probably had less pull, and it's not misleading].
I've only read the first and last chapter. Unlike Tyler Cowen, I only read about a book a fortnight. I skimmed the middle and will finish over the next week. But it is classically good nonfiction science in that he draws you in with personal vignettes that make you feel like you were there, and outlines many important people in the history of financial theory and issues in modern finance. As a finance geek that's near my interest in the way that wanna-be celebrities read People magazine. He doesn't have any equations, which is like a physics book without equations: while you can feel like your really understand the state of physics reading A Brief History of Time, you can't really understand the key issues, though it is a good read and it may pique your interest to major in finance.
Anyone with a strong opinion on the concept of rationality in finance will find his treatment a bit too easy on the other side. Yet, while I have an opinion I know it is a minority one, and technical, surely not one for a broad audience. The bottom line is that there are several important issues in finance that are contentious, and the reader will leave aware of this, which for most readers leaves them much the richer. Is the Dick Thaler view more productive than Eugene Fama's? Well, they both suggest index funds, so what's the essence of their difference? It's not obvious, other than semantics. Is the issue merely, whether one wants to call the same thing 'inefficient' or 'efficient', because of the broader implications for the invisible hand and thus regulation? Thaler runs diversified funds with tilts towards size and value, Fama is affiliated with Dimensional Fund Advisors, which has size and value tilts. One thinks 'size' is a risk factor, the other thinks it's an inefficiency. They both think most investors prefer to buy this slant, and are mindful of momentum. Thaler notes 'it's better to be vaguely right than precisely wrong', yet one could say that's why the market is better than fundamental analysis as opposed to vice versa. Figuring out the real essence of their difference is outside this book.
For example, I think Thaler is a real lightweight, in that his fame DeBondt and Thaler pieces were so contaminated by the low-prices, and have not been updated the way that momentum, the opposite of his findings, has. But that gets to my point. Getting into detail on what and why one disagrees gets technical very quickly, anathema to a book aimed at a broader audience.
I think Fox tends to side more against Fama than for him, but found the book's strong point was not creating a case for his client ('irrational/inefficient' markets) the way, say, Burton Malkiel advocated for his client ('rational/efficient' markets). The most successful 'irrational markets' books are either politically motivated, which have a stable but limited appeal usually advocating some liberal agenda, or really rich guys who can credibly present their wacky alternative to efficient markets based on how frickin' rich they are (Soros, Schiff). I sense Fox wants this book to be read in 10 years, and so he merely tries to let both sides speak their peace, in the process drawing on his strengths as a writer to make you feel like you are right there as the great ideas are being created.
As modern finance is a muddle, I should not blame his book for being a muddle itself. It gives you information, insight, not wisdom. Peter Bernstein's Capital Ideas was a ridiculously slavish hagiography of Modern Portfolio Theory, which is much less successful than the Efficient Markets Hypothesis. It gives a misleading view of the state of finance. If we are to present these ideas, either advocate one side well, like Malkiel's Random Walk, or shows there's a vibrant other side and present them both, like letting Republicans describe Democrats and vice versa. There are many worse ways to present the current debate, and this shelf needed something new.
Fox took exception to my earlier post, in that I criticized his criticism of 'rational' or 'efficient' markets, and Fox noted he presents both sides. I was responding to the reviews, his title, and book cover, and it wasn't clear I hadn't read his book, which is actually a very even handed portrayal of efficient markets, and portfolio theory. Yet I find it amusing he is shocked, shocked, to think one would infer 'Myth of the Rational Market: History of Risk, Reward, and Delusion on Wall Street' suggests markets are generally not rational, or efficient. The cover jacket that states "the efficient market hypothesis has evolved into a powerful myth", and that now "scholars who no longer teach ... the markets are always right." Where would one infer he is criticizing a caricature?
I remember Johah Goldberg complaining that Liberals tended to dismiss his book 'Liberal Fascism' as labeling liberals as fascists, even though within the he states several times he is not saying that in the text. OK. Then why the title? Could it be because 90% of your potential audience has this predilection, and so does he? While we expect book jackets to be over-the-top (lots of brilliant, revolutionary books supposedly), you should expect people to draw inferences from your title. [Truth be told, I didn't title my upcoming book, but I probably had less pull, and it's not misleading].
I've only read the first and last chapter. Unlike Tyler Cowen, I only read about a book a fortnight. I skimmed the middle and will finish over the next week. But it is classically good nonfiction science in that he draws you in with personal vignettes that make you feel like you were there, and outlines many important people in the history of financial theory and issues in modern finance. As a finance geek that's near my interest in the way that wanna-be celebrities read People magazine. He doesn't have any equations, which is like a physics book without equations: while you can feel like your really understand the state of physics reading A Brief History of Time, you can't really understand the key issues, though it is a good read and it may pique your interest to major in finance.
Anyone with a strong opinion on the concept of rationality in finance will find his treatment a bit too easy on the other side. Yet, while I have an opinion I know it is a minority one, and technical, surely not one for a broad audience. The bottom line is that there are several important issues in finance that are contentious, and the reader will leave aware of this, which for most readers leaves them much the richer. Is the Dick Thaler view more productive than Eugene Fama's? Well, they both suggest index funds, so what's the essence of their difference? It's not obvious, other than semantics. Is the issue merely, whether one wants to call the same thing 'inefficient' or 'efficient', because of the broader implications for the invisible hand and thus regulation? Thaler runs diversified funds with tilts towards size and value, Fama is affiliated with Dimensional Fund Advisors, which has size and value tilts. One thinks 'size' is a risk factor, the other thinks it's an inefficiency. They both think most investors prefer to buy this slant, and are mindful of momentum. Thaler notes 'it's better to be vaguely right than precisely wrong', yet one could say that's why the market is better than fundamental analysis as opposed to vice versa. Figuring out the real essence of their difference is outside this book.
For example, I think Thaler is a real lightweight, in that his fame DeBondt and Thaler pieces were so contaminated by the low-prices, and have not been updated the way that momentum, the opposite of his findings, has. But that gets to my point. Getting into detail on what and why one disagrees gets technical very quickly, anathema to a book aimed at a broader audience.
I think Fox tends to side more against Fama than for him, but found the book's strong point was not creating a case for his client ('irrational/inefficient' markets) the way, say, Burton Malkiel advocated for his client ('rational/efficient' markets). The most successful 'irrational markets' books are either politically motivated, which have a stable but limited appeal usually advocating some liberal agenda, or really rich guys who can credibly present their wacky alternative to efficient markets based on how frickin' rich they are (Soros, Schiff). I sense Fox wants this book to be read in 10 years, and so he merely tries to let both sides speak their peace, in the process drawing on his strengths as a writer to make you feel like you are right there as the great ideas are being created.
As modern finance is a muddle, I should not blame his book for being a muddle itself. It gives you information, insight, not wisdom. Peter Bernstein's Capital Ideas was a ridiculously slavish hagiography of Modern Portfolio Theory, which is much less successful than the Efficient Markets Hypothesis. It gives a misleading view of the state of finance. If we are to present these ideas, either advocate one side well, like Malkiel's Random Walk, or shows there's a vibrant other side and present them both, like letting Republicans describe Democrats and vice versa. There are many worse ways to present the current debate, and this shelf needed something new.
Wednesday, June 10, 2009
Washington Can Do Everything
I, for one, welcome our new Top-Down Economic Overlords. I’d like to remind them that as a trusted blogging personality, I can be helpful in rounding up others to toil in their diversity workshops. From the AP:
Under the House bill, car owners could get a voucher worth $3,500 if they traded in a vehicle getting 18 miles per gallon or less for one getting at least 22 miles per gallon. The value of the voucher would grow to $4,500 if the mileage of the new car is 10 mpg higher than the old vehicle. The miles per gallon figures are listed on the window sticker.
Owners of sport utility vehicles, pickup trucks or minivans that get 18 mpg or less could receive a voucher for $3,500 if their new truck or SUV is at least 2 mpg higher than their old vehicle. The voucher would increase to $4,500 if the mileage of the new truck or SUV is at least 5 mpg higher than the older vehicle. Consumers could also receive vouchers for leased vehicles.
Rep. Betty Sutton, D-Ohio, the bill's chief sponsor, said the bill showed that "the multiple goals of helping consumers purchase more fuel efficient vehicles, improving our environment and boosting auto sales can be achieved." Sen. Debbie Stabenow, D-Mich., has backed a similar version in the Senate, which has the support of automakers and their unions.
The bill would direct dealers to ensure that the older vehicles are crushed or shredded to get the clunkers off the road. It was intended to help replace older vehicles — built in model year 1984 or later — and would not make financial sense for consumers owning an older car with a trade-in value greater than $3,500 or $4,500
This is what happens when you don't believe in the invisible hand. You feel obliged to do everything: help consumers, companies, save the environment, all in one swoop. But one must remember that money from the government does not come from a cookie jar, it comes from us, so we are really deciding, top down, to subsidize the demand curve for autos, the supply curve of low-end cars, sure that creating a new car is better for the environment than using an existing one.
Next item, how many new refrigerators need to be made? Which ones should be replaced?
Tuesday, June 09, 2009
Men are Pigs
From a Harvard study:
Reading between the lines, if you are a woman with something to say, it helps to be attractive and post pics.
No one said life is fair.
For example, it found that men had 15% more followers than women despite there being slightly more females users of Twitter than males.
It also showed that an average man was almost twice as likely to follow another man than a woman, despite the reverse being true on other social networks.
"The sort of content that drives men to look at women on other social networks does not exist on Twitter," said Mr Heil.
"By that I mean pictures, extended articles and biographical information."
Reading between the lines, if you are a woman with something to say, it helps to be attractive and post pics.
No one said life is fair.
Myth of the Rational Market
Most non-economists find the 'efficient markets hypothesis' the most absurd belief that most economists believe. The latest broadside is Justin Fox's book The Myth of the Rational Market. A lot of this gets into semantics. If you think efficient markets mean they are always correct, then clearly this is a stupid theory. But it only means that the market price is an unbiased predictor of future prices, conditional on all the current information (there is a risk premium that complicates this, but I write about this in my upcoming book and will talk about that when released in a few weeks).
I haven't read the book, but there appear to be a lot of straw men suggested by the book jacket. It states "The efficient market hypothesis has evolved into a powerful myth". So, markets are predictable? Do tell. "A new wave of economists and scholars who no longer teach that investors are rational or that the markets are always right." Who were these guys saying markets are always right? "Investors overreact, underreact, and make irrational decisions based on imperfect data." Collectively? In what cases? Half?
Here's my defense of Rational Markets:
Think of the problem this way. Say one can buy a contract that Global Warming implies temperatures will be 3 degrees higher in 2100. Assume it were a traded contract, such that there was a way to generate some validation, say, that average temperatures in the troposphere done by NASA, paid to legal beneficiaries of current bettors. Now, people strongly disagree on this, and most people think those who disagree aren't merely making an honest mistake, but have biased or stupid beliefs, though unintentionally (tools of bigger forces, a malevolent Borg), yet the key is you cannot prove which side is wrong via indisputable logic today. The set of information is large, and it is not clear what is relevant to this forecast (climate models are very complicated). In 90 years, with hindsight, the losers will look like stupid ideologues, and that will pertain to a significant number of otherwise smart people. Is this market then 'inefficient', because those taking the other side of a losing bet will be not merely unlucky, but 'wrong'?
I think not. Truth is not obvious. People do their best, and usually the phalanx of assumptions and theories that underly a belief are so comlex you can not fully articulate why you believe something. That does not mean your belief is irrational, just that in the real world, many things are very complicated, and you can't work backward to isolate essential differences. Even if you could, there would be many assumptions that are also really unverifiable opinions, not that they don't have data, just like saying the minimum wage causes unemployment, you don't have enough data to prove it one way or the other to a suffficiently skeptical person. So it's an infinite regress. Are these disagreements, manifested in markets where prices change all the time, sometimes violently, irrational? It would be nice if we all could agree on the facts and theories, and that they be correct, but that's rather naive.
The bottom line is that most investing experts underperform passive benchmarks, and counter-examples like Warren Buffett are merely reused again and again. Sure, some may have alpha, but they are relatively few, and almost all of them follow the Peter Principle in finance and accept money until their alpha is gone (the dominant strategy, it appears). That fact has held up pretty well since first documented by the Cowles Commission back in the 1930s. I would like the final paragraph of every book touting the stupidity of rational markets to give specific advice, like 'buy gold and GM, sell the VIX and the dollar', or that investors generally overreact. That is, take an actual position, not a can't-be-wrong position. To say the market is biased, but you don't know which way, is equivalent to saying it is unbiased and that it will fluctuate. The fact that people don't know the sign of the market's incorrectness is the reason we call the market efficient.
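On those reused counter-examples, a quick simulation sketch (the parameters are assumptions, not estimates) shows why a few Buffett-like track records prove little: with enough zero-alpha managers, luck alone manufactures stars.

import numpy as np

rng = np.random.default_rng(1)

# 10,000 managers with zero true alpha: annual excess returns are pure noise
# with 15% volatility, over a 30-year career.
returns = rng.normal(0.0, 0.15, (10_000, 30))
beat_years = (returns > 0).sum(axis=1)
print((beat_years >= 20).sum())   # several hundred 'beat the market' 20+ of 30 years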
A good example of what seeming market inefficiency looks like in practice is the appearance and disappearance of the ‘convexity bias’ between swap forwards and Eurodollar futures. The arbitrage worked like this. If futures and forward rates were equal, one could go long swaps (forwards) against short Eurodollars (futures), and the daily mark-to-market of the Eurodollars versus the at-maturity settlement of the swap would allow one to lock in a sure thing. In equilibrium, precluding arbitrage, futures rates should be slightly higher than forward rates. The mechanism underlying this opportunity is subtle, but the effect added up to 15 basis points in present value if done with 5-year swaps, and it was truly risk free. Several banks made tens of millions of dollars on it in the early 1990s. It was written up in RISK magazine in 1990 (Rombach, 1990). It disappeared around 1994, after which academic economists wrote about it in the Journal of Finance (Grinblatt and Jegadeesh, 1996), and today you see the convexity adjustment right there in Bloomberg so traders don't confuse forwards with futures.
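The size of that bias can be sketched with the standard textbook approximation, futures rate ~ forward rate + 0.5*sigma^2*t1*t2 (a Ho-Lee-style simplification; the volatility below is an assumption, not the early-1990s market's):

# Convexity adjustment per Eurodollar contract, in basis points.
sigma = 0.012                        # annualized short-rate volatility (assumed)
for t1 in (0.25, 1.0, 2.0, 4.75):    # time to futures expiry, years
    t2 = t1 + 0.25                   # maturity of the 3-month deposit
    adj_bp = 0.5 * sigma**2 * t1 * t2 * 1e4
    print(f"expiry {t1:4.2f}y: {adj_bp:4.1f} bp")

The far contracts in a 5-year strip carry the bulk of the bias, which is how the present-value effect added up to double-digit basis points.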
That 'inefficiency' is history. True arbitrage opportunities such as these do exist, and they don't disappear immediately, but they do go away eventually, usually well before academics have proven they are arbitrage. Market efficiency skeptics would see this as a fantastic embarrassment; I see it as a disequilibrium phenomenon, a temporary aberration of interest only to those lucky few who identified it while it still conferred a profitable opportunity. Disequilibrium behavior has always been difficult for economists to explain, as all such activities are idiosyncratic or ephemeral.
The problem for those who think the market is irrational is to generate a model that is better. To merely state, with hindsight, that people overreacted in one case and underreacted in another amounts to an unbiased market in real time, and it is unbiasedness, not zero price variance, that is the essence of the efficient markets hypothesis.
The behavioralists like to portray themselves as rebels, Davids versus the theoretical Goliath, but in reality the efficient markets folks deserve the true rebel status. Almost everyone outside this literature is sympathetic to behavioral theories over the rational markets assumption, regardless of one’s political bent. I used to work for a bank where our swap salesmen could always sell swaps to companies by telling their Treasurers how these instruments could make money given their personal view of the market.
To these Treasurers' detriment, very few of them had an efficient-markets prejudice ("perhaps the forward prices and their implied volatilities are as good a forecast as mine, and so including commission this is a negative-sum speculation!"). It has been well documented by people like Odean and Barber that people trade too much, based on the mistaken idea that markets are not efficient. It would be harmless fun, but it costs a lot of money. Sure, some are right, but most are burning money via transaction costs in zero-sum bets.
One doesn’t have to love Dilbert to know that there are lots of irrational business people, and that many business decisions are made by former B-minus students who are both boundedly rational and internally inconsistent. The real question is whether these irrational actions generate useful hypotheses about economic behavior, and thus far most of these predictable actions relate to volume and volatility. The over- and underreaction hypotheses seem about as promising as the adaptive expectations assumption that underlies them, and Keynesians worked with that for years without bearing fruit.
The efficient markets paradigm is a triumph of economics because it is so counterintuitive to the layman, so restrictive in what it allows, and so pervasive in its application. A healthy respect for the rationality of markets is a hugely advantageous mindset for the researcher and practitioner. This is the most useful base from which one identifies anomalies, and then explains them with specific frictions or cognitive biases. If you start out thinking all prices are wrong, odds are you are gambling, and the house wins.
Monday, June 08, 2009
Value at Risk: Essential but Largely Irrelevant
I find Value at Risk (VaR) very useful, mainly so that you know people aren't taking unauthorized bets. If a rogue trader decides to punt on the dollar-yen exchange rate, that will show up, so it is helpful in keeping your traders in line. But it very rarely drives strategic decisions in banks. It's more like internet security for a bank's website: essential, but not a first-order issue for managing a bank's relevant risk. It is a mistake of the first order to think banks are picking a VaR in mean-volatility space to find their optimal place on the efficient frontier. VaR risk is mainly incidental, and useful for minimizing operational risk (eg, many fatal risks would never have been tolerated had they been on the radar ex ante).
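As a concrete illustration of how VaR flags an unauthorized position, here is a minimal historical-simulation sketch (hypothetical desk P&L, not real data):

import numpy as np

rng = np.random.default_rng(2)

def hist_var(pnl, level=0.95):
    # One-day historical-simulation VaR: the loss exceeded on (1 - level) of days.
    return -np.percentile(pnl, 100 * (1 - level))

base = rng.normal(50_000, 100_000, 500)       # normal market-making P&L
punt = base + rng.normal(0, 400_000, 500)     # plus an unhedged dollar-yen bet
print(f"VaR before: {hist_var(base):,.0f}")   # ~115,000
print(f"VaR after:  {hist_var(punt):,.0f}")   # several times larger: flag it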
Riccardo Rebonato has written some very good books and articles on interest rate models, and I have re-read several of them. He is Head of Market Risk and Global Head of the Quantitative Research Team at Royal Bank of Scotland. His book, Plight of the Fortune Tellers, argues against the naive application of quantitative models, a bold stand surely to be dismissed by the very vociferous advocates of naive model usage in finance (I can't find their homepage...). We need more common sense, another courageous jab at the highly popular 'no common sense' mantra.
Russ Roberts interviewed him on EconTalk, and I just thought: this guy typifies the stereotypical 'risk manager'. He has credentials that demonstrate he knows a lot of math (a doctorate in Nuclear Engineering and a PhD in Condensed Matter Physics/Science of Materials). He speaks often at risk conferences and is on the Board of Trustees of GARP. But what clearly comes across is that he does not speak much with decision makers at RBS about actual strategy.
Why do I say this? Because to think that risk measures to 5 decimal places are 'dangerous' presumes that major decisions are being made based on this information. From his preface:
"Financial risk management is in a state of confusion. It has become obsessively focused on measuring risk. At the same time, it is forgetting that managing risk is about making decisions under uncertainty. It also seems to hold on to two dangerous beliefs: first, that our risk metrics can be estimated to five decimal places; second, that once we have done so the results will self-evidently guide our risk management choices."

To the extent you think this is going on, you really have no idea how this information is related to strategic and tactical decisions. VaR is pretty irrelevant for banks. Sure, it's an essential way to aggregate market maker risk, but that risk has been pretty insignificant in the current crisis, because to the extent assets or businesses subject to VaR had significant losses, the VaR was calculated predicated on the very assumptions that drove the business decision (ie, that the collateral value of asset-backed securities would not decline). This assumption, which is not very technical, made the resulting VaR innocuous, and is something anyone understands. Why did people assume this? That's an interesting question, but I think it has very little to do with Value-at-Risk. That is, an arbitrary but large stress test would have embodied the same assumption circa 2006.
If you look at Rebonato's RBS, their annual report shows a mere 40 million pounds of daily 95% VaR, which annualizes to about 1.3B pounds at a 99.9% VaR. Say we multiply that by 3 to get capital needed; we are up to about 4B pounds. RBS has about 73B in equity capital. Clearly, even in his own company, VaR is not the primary driver for understanding their 'risk'.
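The arithmetic behind that annualization, under the usual (heroic) assumption of i.i.d. normal daily P&L, runs roughly as follows; the exact result depends on the quantile and day-count conventions, so it lands near, not exactly on, the round numbers above:

import math

var_daily_95 = 40e6                # reported daily 95% VaR, in pounds
z95, z999 = 1.645, 3.090           # normal quantiles for 95% and 99.9%
var_ann_999 = var_daily_95 * math.sqrt(250) * (z999 / z95)
print(f"annual 99.9% VaR: {var_ann_999/1e9:.2f}B")           # ~1.2B
print(f"x3 capital proxy: {3*var_ann_999/1e9:.2f}B vs 73B")  # ~3.6B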
The idea that Value-at-Risk, or risk numbers at the 5th decimal place, are dangerous presupposes people are using this information in a significant way. That is, supposedly, when the VaR of the currency market moves from 43.32491 to 43.32492, traders adjust their spreads. Let us give Riccardo the benefit of the doubt, that the "5th decimal" statement was rhetorical flourish. In any case he thinks VaR drives real decisions. In his circle of contacts (GARP, risk management conferences, presentations of VaR to regulators), this argument may be tenable, but it highlights that he does not understand how strategic decisions are made within a bank. He is being patronized by senior management, who absolutely love risk managers who think the fifth decimal of their VaR is important, because such managers don't see the bigger picture, leaving management free to do whatever they want.
VaR is primarily applied to market making activities, and generally these have very high Sharpe ratios (>3) because you make your money off the bid-ask spread and front-running customers, so profit is primarily a function of volume. Any residual holdings generate 'market risk' via your VaR, but most market makers don't care too much, because on a VaR basis they are all hitting massive home runs: their Sharpes are always greater than anyone's hurdle rate. Market making profits are a function of volume, which means sales and contacts, not better VaR measures. If they could choose to double their market making activity, and thereby double their VaR, all would do so. VaR is a good way to make sure traders aren't changing their business model, but it does not capture anything essential to that business model, because all market makers want more volume; VaR is not a choice variable in risk-return space.
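A stylized back-of-the-envelope (all parameters assumed for illustration) shows why the market maker's Sharpe dwarfs any hurdle rate, making volume, not VaR, the binding constraint:

import math

trades_per_day = 2_000
capture = 5.0                           # average dollars earned per trade
daily_mean = trades_per_day * capture   # 10,000/day from the spread
daily_vol = 40_000                      # residual inventory P&L volatility
print(f"annualized Sharpe: {daily_mean / daily_vol * math.sqrt(252):.1f}")  # ~4

Doubling volume doubles the mean P&L directly while the VaR just rides along, which is why every market maker takes that trade.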
He argues we should basically use probabilities inferred from revealed preferences, noting that 'behavioral finance' shows all sorts of biases. This means: anecdotes. All you need is the representative anecdote, the correct prior! That's the problem with common sense: what some people think is common sense, other people think is nonsense. Not that this is untrue, merely unhelpful. It should go without saying that the real world does not have numbers drawn from urns where we know the proportion of blue and red balls contained therein. How this plays out, given the parochial nature of most business decisions, is really no more helpful than saying 'you should estimate risks taking into account all the information' when making decisions about the future.
Kate Kelly's book on the failure of Bear Stearns highlights the irrelevance of these kinds of risk managers. As she says, "managers in places like risk management and operations were considered less important to the firm's core franchise and therefore largely excluded from important decisions", and Rebonato's focus highlights why. He is searching under a very small streetlight in the bank risk landscape: the domain of full-time risk managers.
It is funny that Rebonato bemoans the lack of common sense of PhDs from the hard sciences, their autistic focus on formulas, their ignorance of the bigger picture. His criticism of others on a point so relevant to his own views suggests a very interesting bias.
Sunday, June 07, 2009
Freerisk.org No Threat to Moody's
Wired Magazine had an article about some internet geeks (and I mean that as a compliment) who are trying to set up 'a better way of measuring corporate credit risk'. All well and fine. Their site, freerisk.org, aims to let people gather data and access models derived from that data. But strategy must have good tactics, and I don't see this working out.
To really create a 'better corporate credit risk' model, one needs a historical, survivorship-bias-free set of financial and market data with information on 'bads': that is, noting which companies were delisted for performance reasons, defaulted, or went bankrupt. I don't think that data is available anywhere for free, and without it you cannot develop, validate, or calibrate a model. Current financial data is, by definition, biased towards firms that have not defaulted in the past.
Further, to get relevant data one needs not merely recent financial data, but stock price information. The current project seems focused on getting SEC filings into some common format, but one also needs to address the issue of how to define robust firm IDs. Tickers, even cusips, change. Stock price information, so important in calculating the Merton model and its derivatives, comes from a different set of data. Thus, there needs to be some work on creating a unique identifier across these two sources. This is a job that must be tackled top-down, and won't come from users.
Also, the financial data one needs is often of a 'relative to now' nature. You want the latest rolling 4-quarter net income (minus extraordinary items), and its change over the previous 4 quarters. This means the data must be arranged by someone into lags. This is non-trivial when SEC statements are presented as single-quarter snapshots with specific dates (one does not file 'current').
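A sketch of that lag-construction step in Python/pandas (hypothetical column names and numbers; real filings are much messier):

import pandas as pd

# One row per (firm, quarter-end), as if parsed from filings.
df = pd.DataFrame({
    "firm_id": ["A"] * 10,
    "qtr_end": pd.period_range("2006Q1", periods=10, freq="Q"),
    "net_income": [5, 6, 4, 7, 8, 6, 5, 9, 10, 7],
}).sort_values(["firm_id", "qtr_end"])

# Trailing 4-quarter net income, and its change versus the year-ago trailing sum.
df["ni_ttm"] = df.groupby("firm_id")["net_income"].transform(lambda s: s.rolling(4).sum())
df["ni_ttm_chg"] = df.groupby("firm_id")["ni_ttm"].diff(4)
print(df.tail())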
Their video goes over some credit problems, such as Moody's missing Lehman, or Enron, which is true enough. A Merton model that used stock price information would have been much better, but there is a downside to the Merton model, mainly that it generates a large amount of ratings volatility that investors do not like. The agency ratings aren't optimal for predicting default, but they create a common metric that works pretty well (I know, not optimal). The Altman Z-Score and the Piotroski method mentioned by Freerisk are really bad alternatives, hardly worth calculating (the ratio (2*net income - liabilities)/assets works as well as either).
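That quick-and-dirty benchmark is a one-liner; a sketch with made-up inputs, under the convention that lower is riskier:

def simple_credit_score(net_income, liabilities, assets):
    # The benchmark ratio from the post: (2*net income - liabilities) / assets.
    return (2.0 * net_income - liabilities) / assets

print(simple_credit_score(100, 400, 1000))   # profitable, lightly levered: -0.2
print(simple_credit_score(-50, 900, 1000))   # money-losing, highly levered: -1.0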
Before Moody's RiskCalc(TM), several people had the credibility and means to create an algorithm, yet failed to create something people were willing to pay for. S&P, Loan Pricing Corp, Ed Altman, all should have been able to create models, yet they failed. S&P failed because they used a non-transparent, non-intuitive neural net that appears ridiculously overfit. Loan Pricing Corp had access to banks and their proprietary data, but created a model that was too dumb. Ed Altman created the first risk model in 1968, and while he earns a perfunctory mention in every risk model paper because he was first, his model is an anachronism, and he never extended it to something that would be useful (by, say, mapping its output to default probabilities or incorporating stock price volatility).
The freerisk.org video mentions several red herrings: copulas used in CDOs, macro data on the Fed's FRED database, issues in CDOs, correlations between defaults and various sector risks, Nouriel Roubini's prescient macro forecasts (permabear is 'vindicated'), financial companies. These are all pretty independent of the nonfinancial corporate credit risk problem of trying to improve on the 'rating' for, say, IBM. Emphasizing these issues suggests they really have no understanding of what is important, relevant, or feasible in the nonfinancial corporate credit risk objective (a model relevant to AIG is not relevant to IBM). Macro forecasts, conditional correlations, and asset-backed securities are all very parochial problems, almost independent of each other. A good solution in one is not very similar to a good solution in another.
They also mention they will allow real-time correlations between scores and default rates. By this, I presume they will look at the small number of current defaults. This would induce a horrific backtest bias, because if you know what defaulted recently, you can adjust an algorithm to do very well over the past 12 months. You really need the longer dataset going back 10 years on defaults, and should be very wary of anything that merely shows it did well over the past quarter.
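A small simulation sketch of that bias (everything here is synthetic noise by construction): tune to one year of defaults, pick the best-looking signal, and watch its edge vanish out of sample.

import numpy as np

rng = np.random.default_rng(3)

# 50 candidate 'signals' for 1,000 firms, all pure noise, plus a 2% base-rate
# default flag for the trailing 12 months.
signals = rng.normal(0, 1, (50, 1_000))
recent = rng.random(1_000) < 0.02

corrs = np.array([np.corrcoef(s, recent)[0, 1] for s in signals])
best = signals[np.argmax(np.abs(corrs))]       # the 'backtest winner'
print(f"in-sample |corr|:     {np.abs(corrs).max():.3f}")   # inflated by selection

future = rng.random(1_000) < 0.02              # next year's defaults
print(f"out-of-sample |corr|: {abs(np.corrcoef(best, future)[0, 1]):.3f}")  # ~0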
Credit risk calculation is an eminently feasible problem with a 'flat maximum', which is why the Altman model 'works' (so, too, does net income/assets). A near-optimal measure of risk is not too difficult (I present data over at defprob for free!). Nevertheless, most people screw it up, because they don't collect a good dataset for construction and validation, they try to be too fancy, they are too rule-based (expert systems with many if-then clauses that create a knife-edged kluge), or they don't calibrate the output into default probabilities.