The most common way to 'model' expectations when building a DSGE model is to assume that agents' expectations are rational -- i.e. 'model consistent.' To the extent that the modeler either believes (wrongly) that the model is structural and wants to develop a quantitative model, or doesn't care whether the model is structural and only wants to write a logically consistent model, rational expectations are perfectly reasonable.
For instance, if I have just derived the consumption Euler equation given log utility and a budget constraint in which real government bonds can be traded intertemporally, I will find the following equation governing consumption behavior:
$$(1)\ \frac{1}{c_t} = \beta E_t \frac{1}{c_{t+1}}\left(1+r_t\right)$$
Ignoring the fact that this is far from a complete model, it immediately becomes clear that, in order to determine the value of consumption this period, it is also necessary to determine the expected value of consumption in the next period. Since, in this model, the representative consumer has chosen this consumption function itself, it can simply extrapolate forward to determine what it 'expects' to consume in the future.
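To see what 'model consistent' buys you here, note that the agent can roll equation (1) forward on itself. A quick sketch, assuming the real return $r_{t+j}$ on bonds held from $t+j$ to $t+j+1$ is known at $t+j$ and using the law of iterated expectations:

$$\frac{1}{c_t} = \beta E_t\!\left[\frac{1+r_t}{c_{t+1}}\right] = \beta^2 E_t\!\left[\frac{(1+r_t)(1+r_{t+1})}{c_{t+2}}\right] = \cdots = \beta^k E_t\!\left[\frac{1}{c_{t+k}}\prod_{j=0}^{k-1}\left(1+r_{t+j}\right)\right]$$

Expected future consumption is not a free parameter: it is whatever the same decision rule implies further down the path, which is exactly the sense in which expectations are 'model consistent.'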
In my mind, there is nothing fundamentally wrong with this and, so long as you ban explosive solutions in real variables (something that is an extremely important part of Blanchard-Kahn), you can easily find equilibrium solutions to DSGE models. Different heterodox economists (and occasionally Noah Smith, who is so happy trashing mainstream econ that a casual observer might not notice that he has a PhD in economics and is a card-carrying member of the economic orthodoxy) have made their own critiques of rational expectations, but within the realm of qualitative DSGE models, I personally find the typical criticism of "but expectations aren't actually formed that way" somewhat excessive -- the model has other aspects that are more inaccurate, like the fact that there is no income distribution and all workers are paid the same wage, yet no one seems to care about that.
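For anyone who hasn't seen the Blanchard-Kahn condition in the wild, here is a quick back-of-the-envelope sketch of the counting exercise for a linearized system $E_t z_{t+1} = A z_t$: a unique bounded (non-explosive) solution exists when the number of eigenvalues of $A$ outside the unit circle equals the number of non-predetermined ('jump') variables. The matrix below is purely illustrative, not taken from any particular model.

```python
import numpy as np

def blanchard_kahn(A, n_jump):
    """Count eigenvalues of A outside the unit circle and compare with the
    number of jump (non-predetermined) variables in E_t z_{t+1} = A z_t."""
    n_unstable = int(np.sum(np.abs(np.linalg.eigvals(A)) > 1.0))
    if n_unstable == n_jump:
        return "unique bounded solution"
    if n_unstable < n_jump:
        return "indeterminacy: multiple bounded solutions"
    return "no bounded solution"

# Illustrative 2x2 system: one predetermined variable, one jump variable
A = np.array([[0.9, 0.1],
              [0.2, 1.5]])
print(blanchard_kahn(A, n_jump=1))  # -> "unique bounded solution"
```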
The real issue with rational expectations comes from (guess who) the market monetarists. Nick Rowe famously called monetary policy 99% expectations (and then corrected himself in the comments on one of my posts, arguing that monetary policy is 100% expectations), and Scott Sumner will happily evade any theoretical argument that monetary policy is ineffective at the zero lower bound by invoking the central bank's ability to create expected inflation simply by saying that it will occur.
See, when your only requirement of expectations is that they are model consistent, you can essentially argue that the only thing a central bank needs to do in order to change expectations of a nominal variable -- which it controls in equilibrium, i.e. in the long run; by 'equilibrium' here I mean the steady state, not 'solution to the model' -- is to start targeting that variable at the desired level. The real-world equivalent would be a hypothetical scenario in which the Federal Reserve announces tomorrow that it will target 4% inflation from now on and inflation instantly jumps to a 4% annualized rate for the quarter, causing a large boom and ending the liquidity trap.
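The 'controls it in the steady state' part is uncontroversial. In the deterministic steady state of essentially any model built around equation (1) plus a Fisher relation, the real rate is pinned down by preferences and the nominal rate moves one-for-one with inflation:

$$1 + r = \frac{1}{\beta}, \qquad (1+i) = (1+r)(1+\pi) \;\Rightarrow\; i \approx r + \pi$$

So whatever long-run inflation rate the policy rule is consistent with is the one the model delivers at the steady state. The fight is entirely over the transition path.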
Why can't that instant jump happen? Because, in order to achieve the target -- in order to be 'credible' -- the Fed has to have an effective tool to create that inflation. Open market operations don't count because they are empirically ineffective, and the nominal interest rate doesn't count because it can't be cut significantly (plus the nominal interest rate would go up in this scenario anyway). The problem could be phrased similarly as 'there are multiple equilibria consistent with a rate increase; how do we know whether we will get the one with deflation or the one with immediately higher inflation?' If this sounds like John Cochrane's brand of Neo-Fisherism, don't worry; it is.
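The multiple-equilibria phrasing is easiest to see in the textbook frictionless case, which I'll sketch as an illustration rather than as anyone's preferred model. With flexible prices the model boils down to the Fisher equation,

$$i_t = r + E_t \pi_{t+1}$$

A peg $i_t = \bar{i}$ therefore pins down expected inflation, $E_t\pi_{t+1} = \bar{i} - r$, but any realized path $\pi_{t+1} = \bar{i} - r + \varepsilon_{t+1}$ with $E_t\varepsilon_{t+1} = 0$ is consistent with it -- every sunspot is an equilibrium. That is the equilibrium-selection problem: announcing the peg, or the target, does not by itself tell you which path you get.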
Sumner, by suggesting that the Fed can change expected inflation at will, is arguing right along with Cochrane that the second equilibrium is possible -- all the Fed need do is announce a higher inflation target to get there. Meanwhile, Cochrane is left puzzling about equilibrium selection given a set of concrete steppes. The answer is clear: central banks can obviously choose the equilibrium themselves, thus stimulating the economy and escaping the liquidity trap.
No. The fundamental problem here is that Cochrane understands the model while Sumner doesn't (OK, this may not necessarily be true, but Sumner has repeatedly admitted that he doesn't use DSGE, so I don't think I'm that far off). In Cochrane's paper, 'The New Keynesian Liquidity Trap,' the central bank does ultimately control the final inflation rate using a Taylor rule (the coefficient may be zero in his baseline simulation, but the model is still stable, so my point still stands), yet there are still multiple equilibria at the end of the liquidity trap. So, as long as the inflation target remains constant at $t=\infty$, the central bank remains unable to simply announce an end to the liquidity trap.
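For reference, the experiment discussed below (and the logic of Cochrane's paper) lives in some variant of the standard three-equation New Keynesian model with a zero lower bound; the exact specification in the paper differs, so treat this as the generic textbook version:

$$x_t = E_t x_{t+1} - \sigma\left(i_t - E_t\pi_{t+1} - r^n_t\right), \qquad \pi_t = \beta E_t \pi_{t+1} + \kappa x_t,$$
$$i_t = \max\left\{0,\; r^n + \pi^*_t + \phi_\pi\left(\pi_t - \pi^*_t\right)\right\},$$

where $x_t$ is the output gap, $r^n_t$ the natural real rate, and $\pi^*_t$ the (possibly time-varying) inflation target. The liquidity trap is generated by a temporarily negative $r^n_t$ that forces $i_t$ to the bound.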
Naturally, this made me curious about the effect of a mid-liquidity-trap increase in the inflation target in a perfect foresight model (unfortunately, my modeling tools preclude me from doing stochastic models with the zero lower bound). Here are the results:
As the figures below show, in the basic New Keynesian setup, increasing the inflation target halfway through a liquidity trap induced by a decline in the natural rate, and allowing that increase to persist for another 30 periods after the liquidity trap has ended, is partially effective -- at least in a perfect foresight model, in which the expectations problem disappears because agents simply know the future path (the liquidity trap lasts from period 10 until period 20).
[Figure: Temporary increase in the inflation target from period 15 to period 50]
[Figure: No change to inflation target]
So, evidently, changes in the inflation target (as long as they are accompanied by corresponding changes in the policy rule) can be effective, even if they are temporary and start after the liquidity trap has begun. The difference here, though, is that I have ensured that the inflation rate will converge to the target, since the model is perfect foresight, so my point above remains valid: absent the ability to guarantee that, once the inflation target is revised upwards, inflation will converge to the new equilibrium (assuming there are multiple equilibria, one of them being a persistent liquidity trap), the ability of central banks to dodge liquidity traps by announcing changes in target inflation rates (or some other change to inflation expectations) is both unclear and entirely at the mercy of one's priors.
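For anyone who wants to poke at a toy version of this experiment, here is a minimal perfect-foresight sketch of the same flavor of exercise. The parameters (β = 0.99, σ = 1, κ = 0.1, φ_π = 1.5), shock timing, and target paths below are placeholders rather than the exact calibration behind the figures, and the code is a bare-bones stand-in rather than the model I actually simulated.

```python
import numpy as np

# Perfect-foresight sketch of the three-equation NK model with a ZLB.
# Placeholder calibration and shock/target timing -- not the figures' setup.
beta, sigma, kappa, phi_pi = 0.99, 1.0, 0.1, 1.5
T = 80                      # horizon (periods)
r_ss = 1.0 / beta - 1.0     # steady-state natural real rate

# Natural-rate path: negative from period 10 to 20 (the liquidity trap)
r_nat = np.full(T + 1, r_ss)
r_nat[10:21] = -0.01

def solve(pi_target):
    """Backward induction on the IS and Phillips curves from a terminal
    steady state, given a path for the inflation target in the Taylor rule."""
    x, pi, i = np.zeros(T + 1), np.zeros(T + 1), np.zeros(T + 1)
    pi[T] = pi_target[T]            # terminal condition: back at steady state
    for t in range(T - 1, -1, -1):
        # First try the unconstrained Taylor rule branch...
        denom = 1.0 + sigma * kappa * phi_pi
        x_try = (x[t + 1] - sigma * ((r_ss - r_nat[t])
                 + (1.0 - phi_pi) * pi_target[t]
                 + (phi_pi * beta - 1.0) * pi[t + 1])) / denom
        pi_try = beta * pi[t + 1] + kappa * x_try
        i_try = r_ss + pi_target[t] + phi_pi * (pi_try - pi_target[t])
        if i_try >= 0.0:
            x[t], pi[t], i[t] = x_try, pi_try, i_try
        else:
            # ...otherwise the zero lower bound binds
            i[t] = 0.0
            x[t] = x[t + 1] + sigma * (pi[t + 1] + r_nat[t])
            pi[t] = beta * pi[t + 1] + kappa * x[t]
    return x, pi, i

# Scenario 1: inflation target held at zero throughout
target_base = np.zeros(T + 1)
# Scenario 2: target raised from period 15 to period 50, then back to zero
target_hike = np.zeros(T + 1)
target_hike[15:51] = 0.005
for name, tgt in [("no change", target_base), ("temporary hike", target_hike)]:
    x, pi, _ = solve(tgt)
    print(f"{name:>15}: trough output gap {x.min():.3f}, "
          f"trough inflation {pi.min():.3f}")
```

The backward induction from a terminal steady state is what guarantees convergence to the target in a perfect foresight setting -- and it is precisely the step that has no obvious real-world counterpart, which is the point of the post.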