Markov Perfect Equilibrium by Example

Overview

This lecture, drawn from Advanced Quantitative Economics with Python, describes the concept of Markov perfect equilibrium. Markov perfect equilibrium is a key notion for analyzing economic problems involving dynamic strategic interaction, and a cornerstone of applied game theory. It is a refinement of the concept of subgame perfect equilibrium, applied to extensive form games for which a payoff-relevant state space can be readily identified. The term appeared in publications starting about 1988 in the work of the economists Jean Tirole and Eric Maskin, and it has been used in analyses of industrial organization, macroeconomics, and political economy, for example in accounts of the browser war between Netscape and Microsoft.

In extensive form games, and specifically in stochastic games, a Markov perfect equilibrium (MPE) is a set of mixed strategies, one for each player, that satisfies the following criteria:

- each strategy is Markov, i.e., it depends only on the payoff-relevant current state, not on the rest of the history;
- the strategy profile is a Nash equilibrium regardless of the starting state.

A strategy that depends only on the current state is called Markovian, and a subgame perfect equilibrium in Markov strategies is called a Markov perfect equilibrium. One motivation for the refinement is multiplicity: subgame perfection does not suffer from this problem in the context of a bargaining game, but many other games, especially repeated games, contain a large number of subgame perfect equilibria. Generally, Markov perfect equilibria in games with alternating moves differ from those in games with simultaneous moves; for example, Bhaskar and Vega-Redondo (2002) show that any subgame perfect equilibrium of the alternating move game in which players' memory is bounded and their payoffs reflect the costs of strategic complexity must coincide with an MPE.

Several further results from the literature are worth recording. For multiperiod games in which the action spaces are finite in any period, an MPE exists if the number of periods is finite or (with suitable continuity at infinity) infinite. Unfortunately, existence cannot be guaranteed under the conditions in Ericson and Pakes (1995), and nonexistence of stationary Markov perfect equilibria is a genuine concern in stochastic games, although positive results are available for games with endogenous shocks and (decomposable) coarser transition kernels; when at most two heterogeneous firms serve the industry, there is a unique "natural" equilibrium. If the players' cost functions are quadratic, then under certain conditions a unique common information based Markov perfect equilibrium exists; such equilibria can be viewed as a refinement of Nash equilibrium in games with asymmetric information. On the computational side, one strand of work develops algorithms for computing a symmetric Markov perfect equilibrium quickly by finding the fixed points of a finite sequence of low-dimensional contraction mappings, and for non-linear differential games, Markov perfect Nash equilibria that are Pareto efficient can be characterized by means of a system of quasilinear partial differential equations. On the empirical side, Bajari et al. (2007) apply the Hotz and Miller (1993) inversion, with a fixed point procedure that extends Rust (1987), to estimate dynamic oligopoly models; their second step estimator is a simple simulated minimum distance estimator. The same equilibrium concept also applies to coalition formation games, where the set of states comprises all possible coalition structures.

In this lecture, we teach Markov perfect equilibrium by example, using a duopoly model in which the decisions of two agents affect the motion of a state vector that appears as an argument of the payoff functions of both agents. We then add robustness concerns to the Markov perfect equilibrium model. The agents share a common baseline model for the transition dynamics of the state vector; this is a counterpart of a "rational expectations" assumption of shared beliefs. But now one or more agents doubt that the baseline model is correctly specified. A Markov perfect equilibrium of a dynamic stochastic game must satisfy the equilibrium conditions of a certain reduced one-shot game, akin to a normal form game, and this is the approach we adopt in the next section.

Before turning to the model, a remark on terminology. The adjective "Markov" honors the Russian mathematician Andrei A. Markov, who developed the theory of such state-dependent chains early in the twentieth century. For a Markov chain with transition matrix $ P $: if $ \pi^T P = \pi^T $, we say that the distribution $ \pi^T $ is an equilibrium distribution. Equilibrium means a level position: there is no more change in the distribution of $ X_t $ as we wander through the Markov chain, so once a chain has reached a distribution $ \pi^T $ such that $ \pi^T P = \pi^T $, it will stay there. In an MPE the analogous restriction falls on strategies rather than distributions: they may depend only on the current state. The short sketch below illustrates the equilibrium-distribution computation.
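As a concrete illustration of the aside above, here is a minimal sketch in NumPy that computes an equilibrium distribution for a made-up two-state transition matrix; the numbers are purely illustrative.

```python
import numpy as np

P = np.array([[0.9, 0.1],     # illustrative two-state transition matrix
              [0.5, 0.5]])

# π^T P = π^T says π is a left eigenvector of P with eigenvalue 1,
# i.e. an ordinary eigenvector of P transposed.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.isclose(vals, 1.0)][:, 0])
pi /= pi.sum()                # normalize to a probability distribution

print(pi)                     # ≈ [0.833, 0.167]
print(pi @ P - pi)            # ≈ [0, 0]: the distribution stays put
```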
The duopoly model

Two firms are the only producers of a good, the demand for which is governed by a linear inverse demand function

$$ p = a_0 - a_1 (q_1 + q_2) \tag{10} $$

where $ p $ is the price and $ q_i $ is the output of firm $ i $. The one-period payoff function of firm $ i $ is price times quantity minus adjustment costs:

$$ \pi_{it} = p_t q_{it} - \gamma (\hat q_{it} - q_{it})^2 \tag{11} $$

where $ \hat q_{it} $ is next period's output and $ \gamma > 0 $ penalizes rapid adjustment. The objective of the firm is to maximize $ \sum_{t=0}^\infty \beta^t \pi_{it} $. Firm $ i $ chooses a decision rule that sets next period quantity $ \hat q_i $ as a function $ f_i $ of the current state $ (q_i, q_{-i}) $, where $ q_{-i} $ denotes the output of the firm other than $ i $. Each firm recognizes that its output affects total output and therefore the market price, and an MPE requires $ f_1 $ and $ f_2 $ to be mutual best responses. Substituting the inverse demand curve (10) into (11) lets us express the one-period payoff as a quadratic form in the state and the controls, so the duopoly fits into a linear quadratic dynamic game.

Adding robustness concerns

For convenience, we'll start with a finite horizon formulation, where $ t_0 $ is the initial date and $ t_1 $ is the common terminal date. We often want to compute the solutions of such games for infinite horizons, in the hope that the decision rules $ F_{it} $ settle down to be time-invariant as $ t_1 \rightarrow +\infty $.

If $ \theta_i < +\infty $, player $ i $ suspects that some other unspecified model actually governs the transition dynamics; $ \theta_i = +\infty $ means that player $ i $ completely trusts the baseline model. The parameter $ \theta_i $ is the price that agent $ i $'s mind charges for distorting the law of motion in a way that harms agent $ i $. Agent $ i $ represents possible misspecification with the perturbed transition law

$$ x_{t+1} = A x_t + B_1 u_{1t} + B_2 u_{2t} + C v_{it} \tag{2} $$

where $ v_{it} $ is a possibly history-dependent vector of distortions to the dynamics of the state that agent $ i $ uses to represent misspecification of the original model. Player $ i $ takes a sequence $ \{u_{-it}\} $ as given and chooses a sequence $ \{u_{it}\} $ to minimize and $ \{v_{it}\} $ to maximize

$$ \sum_{t=t_0}^{t_1 - 1} \beta^{t - t_0} \left\{ x_t' R_i x_t + u_{it}' Q_i u_{it} + u_{-it}' S_i u_{-it} + 2 x_t' W_i u_{it} + 2 u_{-it}' M_i u_{it} - \theta_i v_{it}' v_{it} \right\} $$

With appropriate settings of the matrices $ R_i, Q_i, S_i, W_i, M_i $, we recover the one-period payoffs (11) for the two firms in the duopoly model. Maximization with respect to the distortion $ v_{1t} $ leads to the following version of the $ \mathcal D $ operator from the Robustness lecture, namely

$$ \mathcal D_1(P) := P + PC (\theta_1 I - C' P C)^{-1} C' P \tag{5} $$

with $ \mathcal D_2 $ defined analogously using $ \theta_2 $. The robustness of a decision maker is implemented by substituting $ \mathcal D(P) $ into the backward induction. After extremization of each firm's intertemporal objective, each firm faces an LQ robust dynamic programming problem of the type studied in the Robustness lecture. A robust decision rule of firm $ i $ will take the form $ u_{it} = - F_i x_t $, inducing the following closed-loop system for the evolution of $ x $ in the Markov perfect equilibrium:

$$ x_{t+1} = (A - B_1 F_1 - B_2 F_2) x_t + C v_{it} $$

Alternatively, using the earlier terminology of the differential (or difference) game literature, the equilibrium is a closed-loop equilibrium, because the rules feed back on the current state.
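Since (5) is plain linear algebra, it is easy to code directly. Here is a minimal sketch, assuming only NumPy; the function name and the breakdown-point check are our own additions, not from any library.

```python
import numpy as np

def D(P, C, theta):
    """Robustness operator of eq. (5):
        D(P) = P + P C (θ I - C' P C)^{-1} C' P
    P     : (n, n) symmetric matrix from the Riccati recursion
    C     : (n, h) loading matrix on the distortion v_t
    theta : robustness penalty; θ = +inf means full trust, D(P) = P
    """
    if np.isinf(theta):
        return P
    h = C.shape[1]
    inner = theta * np.eye(h) - C.T @ P @ C
    # θ must be large enough that `inner` is positive definite
    if np.any(np.linalg.eigvalsh(inner) <= 0):
        raise ValueError("theta is below the breakdown point for this P")
    return P + P @ C @ np.linalg.solve(inner, C.T @ P)
```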
Computing a robust Markov perfect equilibrium

A Markov perfect equilibrium with robust agents is characterized by a pair of Bellman equations, one for each agent, together with a pair of robust decision rules. It is computed by backward recursion on these two sets of equations. In the linear quadratic game, the "stacked Bellman equations" become "stacked Riccati equations" with $ \mathcal D_i $ inserted into the backward induction. For firm 2, the recursion is

$$ F_{2t} = \left(Q_2 + \beta B_2' {\mathcal D}_2 (P_{2t+1}) B_2\right)^{-1} \left(\beta B_2' {\mathcal D}_2 (P_{2t+1}) \Lambda_{2t} + \Gamma_{2t}\right) \tag{8} $$

$$ P_{2t} = \Pi_{2t} - \left(\beta B_2' {\mathcal D}_2 (P_{2t+1}) \Lambda_{2t} + \Gamma_{2t}\right)' \left(Q_2 + \beta B_2' {\mathcal D}_2 (P_{2t+1}) B_2\right)^{-1} \left(\beta B_2' {\mathcal D}_2 (P_{2t+1}) \Lambda_{2t} + \Gamma_{2t}\right) + \beta \Lambda_{2t}' {\mathcal D}_2 (P_{2t+1}) \Lambda_{2t} \tag{9} $$

where $ \Lambda_{it} := A - B_{-i} F_{-it} $, $ \Pi_{it} := R_i + F_{-it}' S_i F_{-it} $, and $ \Gamma_{it} := W_i' - M_i' F_{-it} $; equations (6) and (7) for firm 1 are symmetric. Because $ F_{1t} $ appears in firm 2's equations and $ F_{2t} $ in firm 1's, we need to solve these $ k_1 + k_2 $ equations simultaneously. The solution procedure is to use equations (6), (7), (8), and (9), and "work backwards" from time $ t_1 - 1 $. After these equations have been solved, we can take $ F_{it} $ and solve for $ P_{it} $ in (7) and (9).

In the Markov perfect equilibrium lecture, we computed the infinite horizon MPE without robustness using the routine qe.nnash from QuantEcon.py. Here we compute the robust equilibrium by extending that function to a routine nnash_robust that inserts $ \mathcal D_i(P) $ into each Riccati step. Its interface mirrors qe.nnash:

- inputs: the matrices $ A, C, B_1, B_2, R_1, R_2, Q_1, Q_2, S_1, S_2, W_1, W_2, M_1, M_2 $ and the robustness parameters $ \theta_1, \theta_2 $, together with options beta (scalar float, default 1.0), tol (tolerance level for convergence, default 1e-8), and max_iter (maximum number of iterations allowed, default 1000);
- outputs: F1 (shape $ (k_1, n) $) and F2 (shape $ (k_2, n) $), the robust decision rules, and P1 and P2 (each $ (n, n) $), the steady-state solutions of the associated discrete Riccati equations.

Internally, as in qe.nnash, discounting is enforced by multiplying $ A $, $ B_1 $, and $ B_2 $ by $ \sqrt{\beta} $, and the linear solves in each iteration can fail if the matrices involved are singular. The solution computed is the $ F_i $ and $ P_i $ of the associated double optimal linear regulator problem.
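The following sketch wires the duopoly into these routines. The parameter values ($ a_0 = 10 $, $ a_1 = 2 $, $ \beta = 0.96 $, $ \gamma = 12 $) and the cost-matrix mapping follow the standard QuantEcon duopoly example and should be treated as illustrative; nnash_robust is the lecture's own extension of qe.nnash, so its exact signature here is an assumption based on the docstring summarized above.

```python
import numpy as np
import quantecon as qe

# Illustrative duopoly parameters (the standard QuantEcon example)
a0, a1, β, γ = 10.0, 2.0, 0.96, 12.0

# State x_t = (1, q_{1t}, q_{2t})'; control u_{it} = q_{i,t+1} - q_{it}
A  = np.eye(3)
B1 = np.array([[0.0], [1.0], [0.0]])
B2 = np.array([[0.0], [0.0], [1.0]])
C  = np.array([[0.0], [0.01], [0.01]])   # volatility loading on v_t

# One-period cost matrices recovering payoff (11) after substituting (10)
R1 = np.array([[ 0.0,  -a0/2, 0.0 ],
               [-a0/2,  a1,   a1/2],
               [ 0.0,   a1/2, 0.0 ]])
R2 = np.array([[ 0.0,   0.0,  -a0/2],
               [ 0.0,   0.0,   a1/2],
               [-a0/2,  a1/2,  a1  ]])
Q1 = Q2 = γ
S1 = S2 = W1 = W2 = M1 = M2 = 0.0

# Ordinary (non-robust) MPE for comparison
F1, F2, P1, P2 = qe.nnash(A, B1, B2, R1, R2, Q1, Q2,
                          S1, S2, W1, W2, M1, M2, beta=β)

# Robust MPE; nnash_robust is defined in the lecture (signature assumed)
θ1, θ2 = 0.02, 0.04                      # firm 1 fears misspecification more
F1r, F2r, P1r, P2r = nnash_robust(A, C, B1, B2, R1, R2, Q1, Q2,
                                  S1, S2, W1, W2, M1, M2, θ1, θ2, beta=β)

print("Computed policies for firm 1 and firm 2:")
print(F1r, F2r, sep="\n")
```

As a sanity check, calling nnash_robust with very large values of $ \theta_1 $ and $ \theta_2 $ should reproduce qe.nnash's F1 and F2; this is the sense in which the results are consistent across the two functions.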
To recap, we set the robustness and volatility matrix parameters as follows:

$$ \theta_1 = 0.02, \qquad \theta_2 = 0.04, \qquad C = \begin{pmatrix} 0 \\ 0.01 \\ 0.01 \end{pmatrix} $$

Because we have set $ \theta_1 < \theta_2 < +\infty $, we know that

- both firms fear that the baseline specification of the state transition dynamics is incorrect, and
- firm $ 1 $ fears misspecification more than firm $ 2 $.

Now we activate robustness concerns of both firms and compare the computed policies for firm 1 and firm 2 with their counterparts from the equilibrium without robustness. Both industry output and price are then simulated under the transition dynamics associated with the baseline model; only the decision rules $ F_i $ differ across the two equilibria. The sketch below generates the paths and pulls off the associated price and total output sequences.
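A minimal simulation sketch, continuing the previous block and assuming the state ordering $ (1, q_1, q_2)' $; the horizon and initial condition are illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

def simulate(A_cl, x0, T=80):
    """Iterate the closed-loop law x_{t+1} = A_cl x_t for T periods."""
    x = np.empty((len(x0), T))
    x[:, 0] = x0
    for t in range(T - 1):
        x[:, t + 1] = A_cl @ x[:, t]
    return x

x0 = np.array([1.0, 1.0, 1.0])          # illustrative initial state

# Baseline closed-loop transition matrices under the two sets of rules
Ao_mpe = A - B1 @ F1  - B2 @ F2         # ordinary MPE rules
Ao_rob = A - B1 @ F1r - B2 @ F2r        # robust MPE rules

for A_cl, label in [(Ao_mpe, "MPE"), (Ao_rob, "robust MPE")]:
    x = simulate(A_cl, x0)
    q = x[1] + x[2]                     # market-wide output q1 + q2
    p = a0 - a1 * q                     # price from inverse demand (10)
    plt.plot(q, label=f"total output, {label}")
    plt.plot(p, label=f"price, {label}")
plt.legend()
plt.show()
```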
Heterogeneous worst-case beliefs

To explore how ex post the two firms' beliefs about state dynamics differ in the Markov perfect equilibrium with robust firms, we look at the worst-case shock processes implied by their rules. After extremization, the maximizing or worst-case shock of firm $ i $ is a linear rule $ v_{it} = K_i x_t $, where $ K_i $ is an $ h \times n $ matrix chosen by firm $ i $'s evil alter ego to distort the law of motion in the way most damaging to firm $ i $. To find these worst-case beliefs, we compute the following three "closed-loop" transition matrices:

- $ A^o = A - B_1 F_1^r - B_2 F_2^r $, where in a robust MPE $ F_i^r $ is a robust decision rule for firm $ i $; we call this the baseline transition under firms' robust decision rules;
- $ A^o + C K_1 $, the worst-case transition as perceived by firm 1;
- $ A^o + C K_2 $, the worst-case transition as perceived by firm 2.

We call the second and third of these the worst-case transitions under the robust decision rules. From $ \{x_t\} $ paths generated by each of these transition laws, we pull off the associated price and total output sequences; the code sketched below prepares graphs that compare market-wide output $ q_{1t} + q_{2t} $ and the price of the good under the baseline and the indicated worst-case transition dynamics.
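Continuing the earlier sketches, the worst-case pieces can be reconstructed as follows. The formula for $ K_i $ is the standard worst-case-shock formula from the robust-control literature applied to the closed-loop law $ A^o $; it is our reconstruction under that assumption, not necessarily the lecture's verbatim code.

```python
import numpy as np

def worst_case_K(P, C, theta, Ao):
    """Worst-case distortion rule v_t = K x_t against the closed loop Ao:
       K = (θ I - C' P C)^{-1} C' P Ao   (robust-control formula)."""
    h = C.shape[1]
    return np.linalg.solve(theta * np.eye(h) - C.T @ P @ C, C.T @ P @ Ao)

Ao = A - B1 @ F1r - B2 @ F2r            # baseline transition, robust rules
K1 = worst_case_K(P1r, C, θ1, Ao)
K2 = worst_case_K(P2r, C, θ2, Ao)

A1 = Ao + C @ K1                        # firm 1's worst-case transition
A2 = Ao + C @ K2                        # firm 2's worst-case transition

# Reuse simulate() from the previous sketch to compare the three laws
for A_cl, label in [(Ao, "baseline"), (A1, "firm 1 worst case"),
                    (A2, "firm 2 worst case")]:
    x = simulate(A_cl, np.array([1.0, 1.0, 1.0]))
    q = x[1] + x[2]                     # total output under this law
    print(f"{label}: total output after {x.shape[1]} periods ≈ {q[-1]:.3f}")
```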
Worst-case forecasts of $ x_t $ starting from $ t = 0 $ differ between the two firms, so ex post the two firms have heterogeneous beliefs about total output and the goods price. Recall that we have set $ \theta_1 = 0.02 $ and $ \theta_2 = 0.04 $, so that firm 1 fears misspecification of the baseline model substantially more than does firm 2. Because firm 1 thinks that total output will be higher and the price lower than does firm 2, firm 1 produces less than firm 2. Interestingly, firm 2's output path is virtually the same as it would be in an ordinary Markov perfect equilibrium with no robust firms; its decision rule does differ, so it is something of a coincidence that its output is almost the same in the two equilibria.

Each firm's robust rule is the unique optimal rule, or best response, against the other firm's robust rule, which is what justifies, or rationalizes, the Markov perfect equilibrium with robust agents. By simulating under the baseline model's transition dynamics while using the robust MPE rules, we are in effect assuming that, at the end of the day, the baseline model is correct, so that the firms' misspecification fears are all "just in the minds" of the firms; the baseline transition $ A^o $ is then the unique law of motion actually in force. Simulating under the baseline model is a common practice in the literature.
Concluding remarks

This lecture has shown how the equilibrium concept and the computational procedures of the Markov perfect equilibrium lecture extend when we impute concerns about robustness to both decision-makers. Decisions of two agents affect the motion of a state vector that appears as an argument of the payoff functions of both agents; each agent maximizes $ \sum_{t=0}^\infty \beta^t \pi_{it} $, and each distrusts the shared baseline law of motion to a degree indexed by $ \theta_i $. Relative to the duopoly model without concerns for robustness, robustness places heterogeneous worst-case beliefs in the firms' minds and thereby alters equilibrium quantities and prices.

The treatment here is based on ideas described in a chapter of [HS08a], in the Markov perfect equilibrium lecture, and in the Robustness lecture.

© Copyright 2020, Thomas J. Sargent and John Stachurski. Licensed under a Creative Commons Attribution-ShareAlike 4.0 International license.

