Robust Monetary Policy in a Currency Union

The study was implemented in the framework of the Programme of Fundamental Studies of the Higher School of Economics, Moscow, in 2011. This paper was presented at the 15th International Conference on Macroeconomic Analysis and International Finance, Rethymnon, Greece, and at the IEA 16th World Congress, Beijing, China. For comments and suggestions I thank Andrzej Torój, two anonymous referees, participants at the 15th International Conference on Macroeconomic Analysis and International Finance, Rethymnon, Greece, and members of the Laboratory for Macroeconomic Analysis at the National Research University Higher School of Economics, Moscow, Russia. I am grateful to the Centre of Fundamental Studies at the National Research University Higher School of Economics, Moscow, for financial support.

A great deal of research is devoted to optimal policy in the European Monetary Union. For example, Avinash Dixit and Luisa Lambertini (2001) initiated the analysis of the optimal design of fiscal and monetary policy interactions, whereas Jordi Gali and Tommaso Monacelli (2008) and Andrea Ferrero (2009) deal with optimal macroeconomic policy in a currency union with country-specific shocks. Each of these papers is based on a precise model that is assumed to capture the main economic relationships correctly. However, nobody knows the true, and extremely complex, structure of the economy, and nobody can be absolutely confident about the predictive power of any particular model employed for policy analysis. Thus, the problem of model uncertainty, or uncertainty about the true structure of the economy, arises.
There are a number of approaches to modeling this uncertainty. Most research deals with more or less "parametric" uncertainty. In this case the overall structure of the economy is supposed to be known, but the values of specific parameters are uncertain. The character of this parametric uncertainty can differ. Under Bayesian uncertainty it is assumed that the distributions of the model parameters are known. In the case of structured Knightian uncertainty only the minimal and maximal possible values of some parameters are known. Finally, under unstructured Knightian uncertainty, the uncertainty is specified neither in its location nor in its nature. Whatever the precise character of the uncertainty, the policymaker believes that the true economy lies in a "specified neighborhood" of a baseline model (William Brainard 1967). This neighborhood includes all possible deviations from the reference framework, and this approach can be interpreted as an analysis of a set of similar, but not identical, models (Marc P. Giannoni 2002).

PANOECONOMICUS, 2012, 2, Special Issue, pp. 185-199
One possible approach to the problem of model uncertainty is the search for a robust monetary policy that works reasonably well across a given set of model specifications. The main question in this approach concerns the comparison of robust policies with simple optimal ones designed for a particular model. The initial result, called Brainard conservatism, states that a robust policy under Bayesian uncertainty is less aggressive in its reaction to economic shocks than a policy constructed for a single model without taking model uncertainty into account (Brainard 1967). This "attenuation effect" is usually not observed when Knightian uncertainty is analyzed. Yet there are studies that dispute this conclusion. For example, Roger Craine (1979) and Ulf Söderström (2002) find that an increase in uncertainty concerning the transition dynamics of a backward-looking model makes optimal policy more aggressive, although Bayesian uncertainty is assumed. This result holds for forward-looking models, as shown by Takeshi Kimura and Takushi Kurozumi (2007) and Kurozumi (2010), who analyzed Bayesian uncertainty concerning "deep" model parameters that influence not only the structural dynamic equations but also the social loss function. On the contrary, Alexei Onatski (2000) shows that the Brainard principle holds for a backward-looking model even though a minimax choice criterion is applied.
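Brainard's attenuation result can be illustrated with a one-period example (the numbers here are purely illustrative): a policymaker minimizes the expected squared deviation E[(b·u + e)²] after a shock e, when the policy multiplier b is random with mean b̄ and variance σ²; the optimal response u* = −b̄·e/(b̄² + σ²) shrinks as uncertainty about b grows.

```python
# Brainard (1967) attenuation in one line: minimize E[(b*u + e)^2] over u
# when the policy multiplier b is random with mean b_bar and variance sig2.
# E[(b*u + e)^2] = (b_bar*u + e)^2 + sig2*u**2  =>  u* = -b_bar*e/(b_bar**2 + sig2)

def brainard_policy(e, b_bar=1.0, sig2=0.0):
    """Optimal response to shock e under multiplicative (Bayesian) uncertainty."""
    return -b_bar * e / (b_bar**2 + sig2)

shock = 1.0
print(brainard_policy(shock, sig2=0.0))   # -1.0: certainty-equivalent response
print(brainard_policy(shock, sig2=0.5))   # -0.666...: attenuated under uncertainty
```

The more uncertain the multiplier, the more cautious the response: exactly the conservatism that the Knightian results discussed above overturn.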
The creation of the European Monetary Union, and the entrance of new member countries, has considerably changed the economic relations between European countries. That is why the extent of uncertainty concerning EMU models is extremely high. As a result, it is no surprise that many authors turn to robust policy design for the euro area. For example, Hervé Le Bihan and Jean-Guillaume Sahuc (2002), Stan Žaković, Volker Wieland, and Berc Rustem (2005) and Keith Kuester and Volker Wieland (2008) find that the Brainard principle holds true for models of the euro zone. However, all of these papers deal with area-wide aggregated models, which does not seem to be the best approach, as the euro area represents a set of interacting heterogeneous countries.
For example, Paul De Grauwe (2000) shows that national data should be considered in optimal policy construction because of asymmetries in the transmission of monetary policy in the EMU. More precisely, Pierpaolo Benigno and David J. Lopez-Salido (2006) find a large degree of heterogeneity in inflation persistence across European countries. For example, inflation in Germany is almost completely forward-looking, while inflation dynamics in France, Italy, Spain and the Netherlands demonstrate substantially more inertial (backward-looking) behavior. Different inflation persistence can provoke considerable distortions in relative prices in the case of terms of trade shocks, since the speed of adjustment differs across countries. Benigno and Lopez-Salido (2006) demonstrate that optimal monetary policy should mitigate these distortions. The importance of asymmetry between countries becomes even more considerable if there is some uncertainty about the transmission of monetary policy, as in De Grauwe and Marc-Alexandre Sénégas (2006).
Therefore, there is a great deal of research showing that country-specific shocks matter for optimal policy, but there are no examples of taking these shocks into account when optimal policy under uncertainty is constructed for a currency union. The only exception is De Grauwe and Sénégas (2006), who analyzed the influence of uncertainty on optimal monetary policy in a multi-country model of a currency union. However, the research of De Grauwe and Sénégas (2006) is based on a very stylized model without micro-foundations. The main goal of our work is to fill this remaining gap between the literature on optimal policy under uncertainty and the studies of the EMU accounting for its large heterogeneity. For this purpose we analyze the micro-founded model of a two-country currency union of Benigno (2004) and elaborate the robust policy of the monetary authority. The analysis is based on the robust control methodology initiated by Lars P. Hansen and Thomas J. Sargent (2001). We find that for this model the aggressiveness of the optimal monetary policy reaction to terms of trade shocks rises with an increase in the extent of unstructured Knightian uncertainty.
The remainder of the paper is organized as follows: the two-country model is presented in the first section. Then we apply robust control techniques to this model and derive the characteristics of the robust policy under commitment. The last section concludes and outlines possible directions for future research.

Reference Model of Monetary Union
In this paper we assume a unique central bank that decides on monetary policy in a two-country currency union. This bank has at its disposal a single micro-founded model with sticky prices that is taken as the reference, but there are some doubts concerning its quality. Thus, the monetary authority faces a model uncertainty problem.
The reference model of the central bank is the one described in Benigno (2004). The currency union consists of two countries or regions (H and F). The population of this union forms a unit continuum where agents from the interval [0, n) belong to country H and the rest, [n, 1], are inhabitants of country F. Each country has an independent local government, which determines fiscal policy (income taxes, transfers and purchases of products produced in its own country). Here we leave the problem of fiscal policy determination aside, taking fiscal variables as exogenous.
Each inhabitant is simultaneously the producer of a single differentiated good and a consumer of all goods manufactured in the union, meaning there is interregional trade while migration of the labor force is absent. The number of goods produced in region H equals n, so this parameter also represents the economic size of this region, i.e. the share of total union GDP produced in region H.
The producers in the model are monopolists in the markets for their products. They set prices according to the Calvo scheme (Guillermo A. Calvo 1983). Each seller faces a probability $1-\alpha_i$ of adjusting his price in a given period. The parameter of price inertia $\alpha_i$ differs between the two regions.

Key Equations
The law of motion of the economy is given by the following equations, where $\tilde X_t$ denotes the deviation of the logarithm of variable $X$ from the steady state when prices are flexible, $\hat X_t$ is the same deviation under sticky prices, and $X^W$ represents the weighted average of the country-specific values, $X^W = n X_H + (1-n) X_F$. Equation (1) follows directly from the definition of the terms of trade and describes the dynamics of this variable as determined by its past value and the current inflation rates in both countries:

$$\hat T_t = \hat T_{t-1} + \hat\pi_{F,t} - \hat\pi_{H,t} + e_t. \qquad (1)$$
Equations (2)-(4) determine the relations between consumption, government spending, the output gap, expected future inflation and the nominal interest rate. To simplify the subsequent calculations we assume that fiscal policy is known with certainty. In this case we can rearrange equations (2)-(4) into a usual IS curve for the whole currency area:

$$\hat x_t^W = E_t \hat x_{t+1}^W - \sigma^{-1}\left(\hat R_t - E_t \hat\pi_{t+1}^W\right). \qquad (7)$$

According to equation (7), the output gap depends positively on its expected future value and expected future inflation, and negatively on the nominal interest rate.
Equations (5)-(6) describe the supply side of the union economy and stand for the New Keynesian Phillips curves. According to these equations, inflation rates in the regions are determined by the union-wide output gap, expectations of future inflation and the union terms of trade. Usually internal terms of trade are omitted from analyses based on union-wide models, so the optimal policy is constructed for the aggregate levels of inflation and output. However, from equations (5)-(6) it is clear that taking trade flows between the regions into account is important for policy construction.
The central bank's task is to set the nominal interest rate that optimizes its objective function subject to equations (1) and (5)-(7). Thus, the model contains four forward-looking variables ($\hat T_t$, the two regional inflation rates and the output gap) and one policy control variable ($R_t$). For convenience, the problem of the central bank can be rewritten in the usual brief state-space form (8), in which only the predetermined variable $e_t$ is allowed to be affected by shocks, and $L_t$ stands for the welfare losses in period $t$.
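Although the display equation was lost here, the brief state-space form described above can be sketched as follows; the partition, matrix names and ordering of variables are assumptions in the spirit of Giordani and Söderlind (2004), not taken from the original:

```latex
\begin{bmatrix} e_{t+1} \\ E_t x_{t+1} \end{bmatrix}
  = A \begin{bmatrix} e_t \\ x_t \end{bmatrix} + B\, R_t + C\, \varepsilon_{t+1},
\qquad
x_t = \begin{bmatrix} \hat{T}_t & \hat{\pi}_{H,t} & \hat{\pi}_{F,t} & \hat{x}^W_t \end{bmatrix}' ,
```

where only the first (predetermined) block is hit by the shock $\varepsilon_{t+1}$, consistent with the statement that only $e_t$ is affected by shocks.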

Welfare Criterion
We assume that the central bank is benevolent and tries to maximize social welfare given by

$$W = E_0 \sum_{t=0}^{\infty} \beta^t U_t^W,$$

an expected discounted sum of all future values of average utility in the union. The second-order approximation of the welfare function is based on Roel M. W. J. Beetsma and Henrik Jensen (2005) and gives the following form of the welfare criterion:

$$W = - E_0 \sum_{t=0}^{\infty} \beta^t L_t + \text{t.i.p.} + \mathcal{O}^3,$$

where the one-period losses $L_t$ are a weighted sum of the squared union output gap and the squared regional inflation rates, t.i.p. stands for the terms independent of policy, and the last part of this relation, $\mathcal{O}^3$, includes all terms of higher than second order of approximation. The weight on inflation of region $i \in \{H; F\}$ rises with an increase in the size of the region and in the extent of price stickiness. The brief form of the objective function (9) of the monetary authority is $L_t = x_t' Q\, x_t$, where $x_t$ is the vector of variables that influence the social losses (9) and $Q$ is the matrix of coefficients of the loss function (9).

Calibration
In our calibration we partly follow Benigno (2004). Thus, we choose a value of the elasticity of producing differentiated goods equal to 0.67. The intertemporal substitution parameter equals 0.99. The degree of monopolistic competition is taken to be 7.66. The risk-aversion coefficient is assumed to be 1/6. Moreover, the terms of trade shock is assumed to follow the auto-regressive process $\bar T_t = 0.95\,\bar T_{t-1} + \xi_t$, where $\xi_t$ is a white-noise process with variance 0.0086.
The main difficulty concerns the choice of the price inertia parameter $\alpha_i$. In this section we do not follow Benigno (2004), who allows these parameters to vary across a wide range of possible values. Instead, our choice of these values is based on the estimations of Philip Vermeulen et al. (2007). We take the frequency of price changes as a proxy for the probability of changing a price, $1-\alpha_i$, and divide countries into two groups according to the following scheme: if the frequency of price changes is lower than or equal to 0.22 (the average frequency for the union), the country belongs to region H; if this frequency is higher than 0.22, the country is part of region F. Therefore, for the countries with available data, region H consists of Germany, Spain and Italy, while region F consists of France, Belgium and Portugal.
According to Table 1, region H produces around 70% of union output, so we calibrate the region size to $n = 0.7$. According to the corresponding weights, we assume that the average frequency of price change in region H equals 0.17, while this ratio for region F equals 0.23. These values correspond to the model parameters $\alpha_H = 0.83$ and $\alpha_F = 0.77$. According to this calibration, both price stickiness and the economic size of region H are considerably higher than those of region F. This means that inflation in region H obtains much more weight in the objective function (9) of the central bank than the inflation rate in the second region.
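The mapping from observed price-change frequencies to the Calvo stickiness parameters used above can be sketched as follows (the period interpretation and the expected-duration formula are standard Calvo-pricing facts, not taken from the original):

```python
# Map observed frequencies of price change to Calvo stickiness parameters:
# alpha_i = 1 - freq_i is the probability of NOT adjusting the price in a period.
freq = {"H": 0.17, "F": 0.23}   # average frequency of price change per region

alpha = {region: round(1.0 - f, 2) for region, f in freq.items()}
# Expected duration of a price spell under Calvo pricing is 1 / freq_i periods.
duration = {region: 1.0 / f for region, f in freq.items()}

print(alpha)      # {'H': 0.83, 'F': 0.77}
print(duration)   # prices in H last ~5.9 periods on average, in F ~4.3
```

The stickier region H (higher alpha, longer price spells) is exactly the region whose inflation receives the larger weight in the loss function.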

Model Uncertainty Specification
We now assume that the central bank has model (8) as a reference description of the economy. At the same time, the monetary authority fears that its reference construction does not correspond properly to the real state of nature: there is a risk of misspecification. In other words, some perturbations of the modeled economy from the real one are allowed. The possible sources of these perturbations are unknown variables or processes.
To account for this possible misspecification, the monetary authority analyzes only a class of alternative models which cannot be distinguished from the reference one with the help of statistical methods. In other words, the set of possible perturbations is limited: it includes only perturbations that will not be detected with some fixed probability. The reason to impose this restriction on the possible misspecification is quite clear: for large perturbations, when the real economy differs considerably from the reference one, there is no reason to take any decision on the basis of this concrete model, and adaptation of the model to reality is needed.
So the task for the central bank is to construct a policy that performs reasonably well even if there is some perturbation. In searching for such a robust policy we implement Hansen and Sargent's approach, also called robust control. This method assumes a minimax criterion for robust policy construction: a robust policy is one that produces the smallest losses in the case of the worst model perturbation. These perturbations of the reference model take the form of additional shocks $v_t$ which are added to the standard shocks in model (8) and are induced by the so-called "malevolent nature" or "evil agent", who tries to maximize the losses of the central bank. Clearly, there is no such agent in reality, but this assumption helps us to formulate the problem of the monetary authority, which minimizes the welfare losses in the worst case and in this way insures itself against model uncertainty. Thus, the robust program can be represented as a simultaneous two-agent game, where the evil agent chooses a perturbation $v_t$ of the reference model and the central bank sets the value of the nominal interest rate. The set of possible perturbations is modeled by the restriction on the evil agent's instruments $v_t$ discussed in the next section.

Robust Control Problem
We assume the following intertemporal constraint on the malevolent agent:

$$E_0 \sum_{t=0}^{\infty} \beta^t v_t' v_t \le \eta, \qquad (10)$$

where $v_t$ is the vector of disturbances initiated by the malevolent agent in the economy. In other words, (10) represents the allowed set of perturbations, where $\eta$ stands for the total possible extent of model misspecification. Moreover, the size of the possible perturbations, $\eta$, corresponds to the central bank's fear of misspecification. It is worth noting that the evil agent does not exist in reality but represents a convenient way to model the choice problem of the policymaker under uncertainty. If the possible misspecification does not worry the monetary authority, the latter supposes that the possible deviations of the reference model from the real world are inessential. This is modeled by assuming that the evil agent has little scope to perturb the model, so the value of $\eta$ is low. On the contrary, if there is a serious fear of misspecification, we assume that the evil agent can interfere in the model more strongly, so we allow the value of $\eta$ to be high.
Taking into account (10), we can formulate the central bank's choice problem under commitment (11) as the minimization of the expected discounted losses subject to the model (8) and to the worst perturbation admitted by (10). Using a Lagrange multiplier theorem, the constraint set (10) in problem (11) is converted into a penalty:

$$\min_{\{R_t\}} \; \max_{\{v_t\}} \; E_0 \sum_{t=0}^{\infty} \beta^t \left( L_t - \theta\, v_t' v_t \right), \qquad (12)$$

where θ is the Lagrange multiplier of constraint (10). A negative relation between θ and η is derived, for the continuous-time version of the problem, in Hansen et al. (2006), and for discrete time in Paolo Giordani and Paul Söderlind (2004) and in Hansen and Sargent (2008). This negative relation means that when the value of η is low the corresponding Lagrange multiplier is high, and vice versa. So the parameter θ can be used as an implicit characteristic of the allowed model perturbations instead of η: when uncertainty rises and the "budget" of malevolent nature increases, θ declines. Conversely, if $\theta \to \infty$, the size of the possible perturbations is nil: $\eta = 0$. In this case the central bank does not account for any model misspecification and its choice problem corresponds to the usual optimization problem. As shown in Hansen and Sargent (2008), the solutions of the robust problems (11) and (12) are equivalent, but the latter is easier to solve, while the former is easier to interpret. So in this study, as in most of the literature discussed earlier, we solve problem (12) for different values of θ, keeping in mind the connection between both problems. The choice of the concrete value of θ, which is crucial for our analysis, is based on the detection error probability approach, also initiated by Hansen and Sargent (2001). According to this method, the monetary authority tries to understand whether the available data are generated by the approximating model (8) or by the worst-case model (11) with perturbations created by the evil agent. We exclude from our analysis all situations in which the central bank can identify the data-generating model for certain, so that the probability of a wrong choice between the two models equals zero. In this case the size of the perturbations, and therefore the doubts about the quality of the reference model, are so large that the monetary authority is hardly able to use this model for optimal policy construction. We consider only the cases with a positive probability of making a wrong choice between the two models: concluding that the data are generated by the reference model while there are some perturbations, or choosing the worst-case model while the data are generated by the base model (8). When the extent of misspecification is high (and θ is low), the evil agent can generate considerable distortions and the probability of the error described above is low, because the worst-case model and the reference one differ significantly. On the contrary, when the extent of misspecification is low (high θ), there can be only slight perturbations and the probability of choosing the wrong model is high. So high uncertainty corresponds to a low probability of the error in the sense described above and to a low value of θ.
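The qualitative effect of the penalty parameter θ can be illustrated with a deliberately simplified static analogue (the quadratic loss, the control cost λ and all numbers are illustrative assumptions, not this paper's model): the policymaker sets u to stabilize b·u + e + v at control cost λu², while the evil agent chooses v subject to the penalty −θv².

```python
# Static robust-control toy: objective = (b*u + e + v)**2 + lam*u**2 - theta*v**2.
# Inner maximization over v (for theta > 1): v* = (b*u + e)/(theta - 1), so the
# policymaker minimizes theta/(theta-1)*(b*u + e)**2 + lam*u**2, which gives
# u* = -b*e / (b**2 + lam*(theta - 1)/theta).

def robust_policy(e, b=1.0, lam=0.5, theta=5.0):
    """Robust response to shock e; theta -> infinity recovers the non-robust policy."""
    return -b * e / (b**2 + lam * (theta - 1.0) / theta)

shock = 1.0
for theta in (1000.0, 5.0, 2.0):   # decreasing theta = greater feared misspecification
    print(theta, robust_policy(shock, theta=theta))
# As theta falls, |u*| rises: the robust policy reacts MORE aggressively,
# the same qualitative pattern this paper reports for terms of trade shocks.
```

The key mechanism is that the worst-case disturbance amplifies the effective cost of any residual gap, so insuring against it calls for a stronger, not weaker, response.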
The probability of the error can be computed in the following way:

$$p = \frac{1}{2}\Big[\Pr\big(L_A > L_W \mid \text{worst-case model}\big) + \Pr\big(L_W > L_A \mid \text{approximating model}\big)\Big],$$

where $L_A$ stands for the value of the likelihood of the approximating model and $L_W$ is the likelihood value of the worst-case model. The first part of the right-hand expression is the probability of treating the model as the approximating case while in reality the malevolent nature perturbs the data-generating process. The second part is the probability of taking the model as the worst case while there are no actions of the evil agent. Hansen and Sargent (2001) argue that a reasonable extent of misspecification corresponds to a detection error probability of around 20%: in this case the extent of model uncertainty is neither trivial nor too high. In our analysis we suppose that the detection error probability can vary from 20% to 50%, allowing the extent of model uncertainty to change considerably. It is important to mention that a probability of 50% corresponds to the case when the central bank does not take account of model uncertainty at all. This means that the monetary authority always decides that the data are generated by the reference model and does not suppose that there can be any perturbations. In this case the problem of the central bank is standard, so we allow the extent of uncertainty to vary from the lowest level (where the detection error probability equals 50% and θ is at its highest level) to some middle magnitude (corresponding to an error probability of 20%).
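A Monte Carlo sketch of this calculation is shown below for a deliberately simple pair of models, i.i.d. Gaussian data with and without a drift distortion w; the two-model setup, sample length and all numbers are illustrative assumptions standing in for the paper's state-space likelihoods:

```python
import numpy as np

def detection_error_probability(w, T=100, n_sim=2000, seed=0):
    """Detection error probability for approximating model x_t ~ N(0, 1)
    versus worst-case model x_t ~ N(w, 1), averaged over both error types.
    The log-likelihood ratio log(L_W/L_A) equals w*sum(x) - T*w**2/2."""
    rng = np.random.default_rng(seed)
    errors_a = errors_w = 0
    for _ in range(n_sim):
        x = rng.standard_normal(T)              # data from the approximating model
        if w * x.sum() - T * w**2 / 2 > 0:      # worst-case model wrongly preferred
            errors_w += 1
        x = rng.standard_normal(T) + w          # data from the worst-case model
        if w * x.sum() - T * w**2 / 2 < 0:      # approximating model wrongly preferred
            errors_a += 1
    return 0.5 * (errors_a / n_sim + errors_w / n_sim)

# Larger distortions (lower theta) are easier to detect, so p falls:
print(detection_error_probability(0.05))   # close to 0.5: hard to tell apart
print(detection_error_probability(0.30))   # much lower: distortion is detectable
```

This reproduces the monotone link used in the text: picking a target error probability (e.g. 20%) pins down how large a distortion, and hence which θ, the central bank entertains.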
Following the solution techniques developed by Giordani and Söderlind (2004), we find the optimal robust policy, which can be represented as a reaction of the nominal interest rate to the shock of the terms of trade and to the current Lagrange multipliers for the constraints in problem (8):

$$R_t = F_R \begin{bmatrix} e_t \\ z_t \end{bmatrix},$$

where $e_t$ is the random component of the terms of trade dynamics (see (1)), $z_t$ is a (4x1) vector of Lagrange multipliers corresponding to the constraints on the forward-looking variables in model (8), and $F_R$ is a (1x5) vector of coefficients that describes the optimal policy. The presence of the Lagrange multipliers in the optimal policy ensures that today's policy measures confirm the private-sector expectations formed in the past (Richard Dennis 2007).

Some Computational Results
We have constructed the robust policy for several variants of model uncertainty, represented by the parameter θ and by the detection error probability. The monetary policy coefficients are summarized in Table 2. The most important for us is the first coefficient, $r_1$, which represents the central bank's reaction to terms of trade shocks. The value of this parameter decreases with a rise in θ, which moves in the opposite direction to the extent of possible model misspecification, η. If we treat the absolute value of this coefficient as the degree of policy aggressiveness in reacting to shocks, this means that the aggressiveness of the optimal monetary policy decreases when the extent of possible misspecification falls, and rises with an increase in the extent of model uncertainty. Here we see a violation of the Brainard principle for the monetary union.
Another interesting point is the sign of the policy reaction to the shock: if there is a positive shock, the central bank raises the interest rate. There is a clear intuition behind this fact. According to (8), this shock increases inflation in region H and decreases inflation in the second part of the union. But according to our calibration, the highest concern of the central bank is inflation in the region with the higher inflation persistence, region H. So the reasonable response of the central bank is an increase in the interest rate, which decreases total output in the union and simultaneously stabilizes inflation in the region of highest interest. On the contrary, when there is a negative shock, the central bank decreases the nominal interest rate in order to bring inflation in the first region back to its optimal level.
We simulate the behavior of the economy under a negative shock to the union's terms of trade and compare the robust policy with the policy that does not take uncertainty into account. The robust policy entails substantial welfare benefits in comparison with the alternative. This result is presented in Figure 1.
Source: Author's calculations. Impulse-response functions are summarized in Figure 2, where the following notation is used: TT - terms of trade; pih - inflation in region H; pif - inflation in region F; V - the value of the strategic evil shock; and L - the welfare losses. The first column represents the dynamics of the economy when the central bank fears misspecification and assumes that the evil agent can produce some perturbations to the economy. We suppose that the detection error probability in this case equals 20%, as in Hansen and Sargent (2001), and mark this situation as the worst case. The second column represents the situation when there are no additional shocks: as we can see, in this case there is no action of the evil agent and no additional losses of social welfare. The last column shows the dynamics of the economy under the smallest extent of uncertainty, when the detection error probability equals 50%.
We can see that the malevolent agent reacts to the terms of trade shock by adding a positive "strategic" shock. In response, the central bank decreases the nominal interest rate. Comparison of the first and the last columns shows that when the monetary authority worries about uncertainty (the first column), the change in the nominal interest rate is much more considerable than when it does not account for the possible misspecification (the last column). In both cases the result of the game between the central bank and the evil agent is a rise in output and in inflation in region H, while region F's inflation decreases. Meanwhile, the losses associated with the "active" policy under great concerns about misspecification are smaller than in the case when the monetary authority does not take the uncertainty into account. When there is no malevolent action, the losses are practically nil (see column 2), even if the robust policy is applied. So, in all cases the robust policy can properly counteract any external shock.

Conclusion
For the micro-founded two-country model of a currency union we have constructed a robust monetary policy under commitment. We have found that the central bank reacts to terms of trade shocks more aggressively when a higher extent of possible misspecification is admitted; thus, the Brainard principle is violated. However, we have analyzed only one type of shock. This shock is very important, but it seems questionable to assume that all considerable distortions can be described in this way. For example, productivity shocks, tax shocks or shocks to households' preferences can also influence the economy and can considerably change the optimal policy of the central bank. That is why adding additional shock types to the analysis is one of the most important extensions of the current research.
Further, the case of unstructured Knightian uncertainty, when the central bank has no information concerning the nature and location of the uncertainty, seems somewhat removed from reality. More likely, the central bank has doubts about the precise parameters of its model. This means that we should analyze structured Knightian uncertainty. The parameter of particular interest to the central bank is price stickiness in the different regions. As we have seen, this parameter crucially influences the social welfare function, which is also the objective function of the central bank, so this case is one of the most intriguing and promising.
Finally, robust control itself has received a lot of criticism for using the minimax choice criterion. For example, according to Christopher A. Sims (2001), this criterion supposes that the policymaker makes decisions on the basis of the least-known worst cases, which seems a paradoxical pattern of behavior. This shortcoming can be avoided if one of the alternative approaches is used, for example the info-gap robustness approach of Yakov Ben-Haim (2006).

Figure 1 Welfare Benefits from Robust Policy Application

Figure 2 Terms of Trade Shock in the Worst-Case Model, the Approximating Model and under Rational Expectations without Robustness

Table 2 Parameters of Robust Monetary Policy