In Boldrin and Levine's book Against Intellectual Monopoly, they note that there are no a priori grounds for their argument. That is, there are offsetting costs and benefits to intellectual property, so it is a matter of empirically estimating them. The book focuses mainly on patents, as opposed to confidentiality agreements, non-compete agreements, or trade secrets.
I think this is true for almost any economic debate. Theoretically, there's a case for quotas: assume sufficiently high increasing returns to scale and some spillover effects, and you have it (this kind of pretty reasoning led to Krugman's fall to the Dark Side). The empirical issue is, if you give a legislature the ability to grant quotas, to what degree will they be used where the theory recommends them, as opposed to for pure rent generation via government fiat?
Thus, theory is useful mainly because it tells us what variables to look at when doing an empirical analysis. In practice, with enough data, the variables speak for themselves, and it will be obvious what they are saying. The problem is that there is an infinite number of potential effects, and of potentially interesting variables to control for. For example, looking at stocks, we may be interested in their annualized returns and volatility. If stock returns are lognormally distributed, these two numbers completely define their distribution. If they have fat tails, we need further data: higher moments, or extreme-value statistics. If markets are not efficient, perhaps it helps to look at autocorrelation in returns over various horizons (daily, weekly), or at various technical patterns (head-and-shoulders). The state space is infinite; you need a theory to constrain it.
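To make that concrete, here is a minimal sketch in Python (all numbers simulated, not from any real series): each extra statistic corresponds to relaxing one assumption of the lognormal, efficient-markets baseline.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Placeholder daily log returns; a Student-t stands in for a fat-tailed series.
    log_ret = 0.01 * rng.standard_t(df=4, size=2500)

    ann_ret = log_ret.mean() * 252                 # annualized mean return
    ann_vol = log_ret.std(ddof=1) * np.sqrt(252)   # annualized volatility

    # Under lognormality these two numbers fully describe the distribution;
    # excess kurtosis well above zero signals fat tails and the need for
    # higher moments or extreme-value statistics.
    excess_kurt = stats.kurtosis(log_ret)          # Fisher definition: normal -> 0

    # Lag-1 autocorrelation of daily returns, a first check on efficiency.
    ac1 = np.corrcoef(log_ret[:-1], log_ret[1:])[0, 1]

    print(f"annualized return {ann_ret:.2%}, vol {ann_vol:.2%}")
    print(f"excess kurtosis {excess_kurt:.2f}, lag-1 autocorrelation {ac1:.3f}")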
Similarly, when looking at what drives returns, you need to control for other things. You might look first at how the 'market' affects returns contemporaneously, or at industry effects. You might look at size or value factors. Again, the state space is infinite, and you need a theory, a story.
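As a sketch of what 'controlling for other things' looks like in practice, here is a toy factor regression, assuming made-up market, size (SMB), and value (HML) factor series; the loadings are recovered by ordinary least squares.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 1000
    # Hypothetical daily factor returns: market, size (SMB), value (HML).
    mkt = rng.normal(0.0004, 0.01, n)
    smb = rng.normal(0.0, 0.005, n)
    hml = rng.normal(0.0, 0.005, n)
    # A stock whose returns load on all three factors, plus idiosyncratic noise.
    stock = 0.0001 + 1.1 * mkt + 0.3 * smb - 0.2 * hml + rng.normal(0, 0.008, n)

    # OLS: regress the stock's return on a constant and the three factors.
    X = np.column_stack([np.ones(n), mkt, smb, hml])
    beta, *_ = np.linalg.lstsq(X, stock, rcond=None)
    print("alpha, mkt, smb, hml loadings:", np.round(beta, 4))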
So, theory is very useful, but usually it merely suggests something to look at. The data then tell you the functional form. If theory says variance, but the result turns out to really be a function of the square root of variance (volatility), you can be sure that in 10 years no one will remember the theories that proposed variance, and it will all appear an unbroken advance in science.
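A toy illustration of that point, with simulated data where the 'true' relation runs through volatility: a theory proposing variance fits worse than one proposing its square root, and the data make the choice.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 500
    vol = rng.uniform(0.05, 0.40, n)                   # hypothetical asset volatilities
    ret = 0.02 + 0.30 * vol + rng.normal(0, 0.02, n)   # 'true' relation uses vol, not variance

    def r_squared(x, y):
        # R^2 from a univariate OLS fit of y on a constant and x.
        X = np.column_stack([np.ones_like(x), x])
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ b
        return 1 - resid.var() / y.var()

    # Theory proposed variance; the data prefer its square root.
    print("R^2 on variance:   %.3f" % r_squared(vol**2, ret))
    print("R^2 on volatility: %.3f" % r_squared(vol, ret))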
So, I don't get too excited by proofs, or by the precise nature of the functional forms. Just identify what is important as an input and an output, then roll up your sleeves and see what the data say.
For people who hate models, I think the key thing to remember is that they provide a useful scaffold for fitting real data. If you fit data without theory, you need a lot of it for the fit not to be bumpy, and without a theory focusing you on a small set of variables, the combinations of potentially interesting data are simply too many.
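Here is a simple sketch of that trade-off, with made-up data generated from a linear relation: an unconstrained high-degree polynomial fit is bumpy and generalizes poorly on a small sample, while the theory-constrained linear fit does fine; only with far more data does the flexible fit catch up.

    import numpy as np

    rng = np.random.default_rng(3)

    def out_of_sample_mse(degree, n_train):
        # Fit a polynomial of the given degree on n_train noisy points,
        # then measure mean squared error on a fresh test sample.
        x_tr = rng.uniform(-1, 1, n_train)
        y_tr = 1.0 + 2.0 * x_tr + rng.normal(0, 0.5, n_train)  # true relation is linear
        coef = np.polyfit(x_tr, y_tr, degree)
        x_te = rng.uniform(-1, 1, 1000)
        y_te = 1.0 + 2.0 * x_te + rng.normal(0, 0.5, 1000)
        return np.mean((np.polyval(coef, x_te) - y_te) ** 2)

    # With only 20 observations, the unconstrained (degree-9) fit is bumpy and
    # generalizes badly; the theory-constrained (linear) fit does fine.
    print("linear,   n=20:   %.3f" % out_of_sample_mse(1, 20))
    print("degree 9, n=20:   %.3f" % out_of_sample_mse(9, 20))
    print("degree 9, n=5000: %.3f" % out_of_sample_mse(9, 5000))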