2011-11-07

Lucas' View on Free Trade

Continued from the previous post, I would like to quote a couple of paragraphs from Lucas's book on economic growth. In the introductory chapter, the author discusses the connection between international trade and economic growth, which illustrates his (and perhaps most economists') view on free trade. Although this passage is written as an introduction to Chapter 3 ("Making a Miracle"), his evocative illustration offers economic understanding of, and insight into, free trade in general. It could also contribute to the debate on free trade (especially the TPP debate in Japan).

The most spectacular growth successes of the postwar world have been associated with growth in international trade. This is the single empirical generalization that strikes everyone who is trying to understand economic growth in the last 50 years. Countries like Japan, South Korea, Taiwan, Hong Kong, and Singapore began producing goods they had never made before and exporting them to the United States, successfully competing with American and European producers who had the advantages of decades of experience. At the other extreme, the Communist countries that cut themselves off from trade with the West stagnated, as did India and many Latin American economies that used tariff walls to protect inefficient domestic producers from outside competition. These observations seem to provide further confirmation of the usual economic arguments in favor of free trade, arguments that seem to me as true and as relevant now as they were when Hume and Smith first articulated them.
But classical trade theory does not really help in understanding the connections between trade and growth that we see in the postwar period. One problem is that while some of the Asian successes - in Taiwan and Hong Kong - were associated with liberal trade policy, others - Japan, Korea, and Singapore - occurred in heavily managed environments, under policies that Smith would certainly have criticized as mercantilist. (I agree with Smith that the mercantilist economies would have fared even better without managed trade, but this view is obviously not a straightforward statement of the facts.) A second, more important, barrier to the application of the theory of gains-from-trade to postwar growth is that quantitative versions of the theory do not yield estimated benefits of tariff reduction that are of the right order of magnitude to account for the growth miracles. (...) These models support a compelling case for the importance of free trade. What they do not provide, though, is a theoretical link between free trade and economic growth that is both rapid and sustained.

2011-11-03

Romer vs. Uzawa-Lucas

As I illustrated in the previous post, Robert Lucas's most cited paper is about (endogenous) economic growth. Surprisingly, its citation count is even greater than those of Paul Romer (1986a, 1986b), the pioneering papers in this field (according to Google Scholar).
To understand the essence of these models and their differences, I checked Lucas's book on economic growth (actually a volume of collected papers) and found an insightful exposition.



In the Introduction of the book, the author first provides a nice summary of Paul Romer's pioneering works.
Paul Romer (1986a, 1986b) worked out an explicit model of a growing economy that reconciled the opposing forces of increasing and diminishing returns, and did so in a way that generated sustained production growth and was at the same time consistent with market equilibrium of many, competing producers. The economics of Romer's model are closely related to the ideas of Allyn Young (1928), but his development of these ideas is entirely new. In the theory, goods are produced with a single kind of capital - Romer called it "knowledge capital" - and each producer's output depends both on his own stock of this capital and on the stock held by other firms. Aggregating over producers, production in the economy as a whole is subject to increasing returns: Every 10 percent increase in the total stock of knowledge capital leads to an output increase of more than 10 percent. But an individual producer, who has no control over the economy's total stock of capital, faces diminishing returns to increases in his own capital. Thus the fact of increasing inequality among the economies of the world is reconciled with the absence of a tendency to monopolization within each economy.
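A compact way to write this structure down (a standard textbook formalization; the functional form and symbols are mine, not the book's) is

$$ y_i = A\,k_i^{\alpha}K^{\beta}, \qquad K=\sum_j k_j, \qquad 0<\alpha<1,\quad \alpha+\beta>1. $$

Each producer $i$ faces diminishing returns in its own knowledge capital $k_i$ (since $\alpha<1$), but when all producers scale up together, so that $K$ rises in the same proportion, output rises by the factor $1.1^{\alpha+\beta}>1.1$ for every 10 percent increase in the total stock, which is exactly the increasing-returns property described in the quote.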

Then, the author relates Romer's idea to his own (Lucas, 1988).
Section 4 of my "On the Mechanics of Economic Development" constructs a model designed to deal with the problem posed by diminishing returns along the lines proposed by Romer. In doing this, I found it more convenient to make use of a model of Uzawa (1965) in which there is both physical and human capital but returns, private and social, depend only on the ratio of these two stocks. The theory replaces the increasing returns assumed by Romer with a kind of constant returns, yielding a system which is easier to analyze than Romer's but which circumvents the problems of diminishing returns in a similar way.
The human capital model I used involves an external effect of human capital, patterned on the external effect of knowledge capital that Romer introduced. But in my analysis, this external effect is not needed to ensure the existence of a competitive equilibrium the way it is in Romer's model. If this effect is removed, the model continues to be internally consistent and is in fact even easier to analyze. [footnote] 
[footnote]: Rebelo (1991) stripped the model down to its simplest one-capital-good "Ak" form. Caballe and Santos (1993) provide an elegant analysis of the off-balanced-path dynamics of an Uzawa model without a production externality.
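For reference, the technology of Section 4 of Lucas (1988) can be sketched as follows (my paraphrase in simplified per-capita notation, not a quote):

$$ y = A\,k^{\alpha}(u\,h)^{1-\alpha}h_a^{\gamma}, \qquad \dot h = \delta\,(1-u)\,h, $$

where $k$ and $h$ are physical and human capital, $u$ is the fraction of time devoted to production, and $h_a$ is the economy-wide average level of human capital, whose exponent $\gamma$ captures the external effect patterned on Romer's knowledge capital. Setting $\gamma=0$ removes the externality, as noted above, and collapsing the two stocks into a single capital good yields Rebelo's "Ak" form, $y=Ak$.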


References
Caballe, Jordi, and Manuel S. Santos (1993) "On Endogenous Growth with Physical and Human Capital." Journal of Political Economy, 101: 1042-1067. [313]
Lucas, Robert E., Jr (1988) "On the Mechanics of Economic Development." Journal of Monetary Economics, 22: 3-42. [13791]
Rebelo, Sergio (1991) "Long Run Policy Analysis and Long Run Growth." Journal of Political Economy, 99: 500-521. [2748]
Romer, Paul M. (1986a) "Increasing Returns and Long-Run Growth." Journal of Political Economy, 94: 1002-1037. [12099]
Romer, Paul M. (1986b) "Cake Eating, Chattering, and Jumps: Existence Results for Variational Problems." Econometrica, 54: 897-908. [58]
Uzawa, Hirofumi (1965) "Optimum Technical Change in an Aggregative Model of Economic Growth." International Economic Review, 6: 18-31. [1142]
Young, Allyn A. (1928) "Increasing Returns and Economic Progress." Economic Journal, 38: 527-542. [2014]

(Numbers in [ ] show citation counts in Google Scholar.)

2011-10-27

Growth "higher" than Rational Expectations

Professor Robert E. Lucas Jr. of the University of Chicago is perhaps the most famous macroeconomist in the (at least academic) world. He is especially well known for his series of works on rational expectations. In fact, he received the Nobel Prize in 1995 for this contribution:
The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 1995 was awarded to Robert E. Lucas Jr. "for having developed and applied the hypothesis of rational expectations, and thereby having transformed macroeconomic analysis and deepened our understanding of economic policy" (from the official website of the Nobel Prize).
However, somewhat surprisingly, I just noticed that Professor Lucas's most cited paper is NOT about rational expectations. According to Google Scholar (search result is here), his most cited paper is "On the mechanics of economic development" (Journal of Monetary Economics, 1988), which is cited more than three times as often as the second, "Econometric policy evaluation: A critique." The former's citation count exceeds 13,000, which is amazingly high in economics (and maybe in other fields, too).

The "mechanics" paper is a seminal pioneering work in endogenous growth theory and is built on the idea of Uzawa ("Optimum Technical Change in An Aggregative Model of Economic Growth", International Economic Review, 1965); because of this, the model is often called Uzawa-Lucas model. Professor Paolo Mattana, the author of "The Uzawa-Lucas Endogenous Growth Model" explain the model as follows:
R. Lucas, in the late 1980s, writes a path-breaking paper: by taking some initial intuitions of Uzawa (1965) a step further, he proposes a two-sector capital accumulation growth model where human capital plays the role of the key variable through which ongoing growth can be generated. Human capital is understood to refer, in Becker's tradition, to the skills and knowledge intensity of the labor force and is accumulated in the learning (or educational) sector via a linear constant-returns to scale technology, only requiring older vintages of human capital. The Uzawa-Lucas economy differs in a fundamental way from the standard neoclassical model; since a lower bound to the return of accumulation is implicitly imposed, the long-run growth rate basically reflects an endogenous equilibrium where only the "primitives" of a specific economy (endowments, technology and preferences) are relevant. Other factors, such as increasing population or exogenous technical progress, crucial in the traditional theory, have, conversely, no critical influence.



2011-07-18

Dream Comes True: Japan edge USA!

Congratulations! Many many many thanks to Nadeshiko JAPAN!! We all are very proud of you :)
Japan are FIFA Women’s World Cup™ champions for the first time after a penalty shootout victory over USA, following a drama-charged 2-2 draw in Frankfurt. (Link to FIFA official site)

What a wonderful moment! They really became world champions!!

2011-06-27

Two Papers on Repeated Games

Original article (link) posted: 31/10/2005

Sorin (1986) “On Repeated Games with Complete Information” Math. of Operations Research, 11-1

Several properties of the sets of feasible payoffs for repeated games are shown. In particular, the paper gives the condition under which the set of feasible payoffs equals the convex hull of the pure-strategy payoffs: it is necessary and sufficient that the discount factor be at least 1 - 1/N, where N is the number of players.
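For intuition, take $N=2$, so the bound is $\delta \ge 1-1/N = 1/2$. At $\delta = 1/2$, a deterministic sequence already generates the midpoint of any two pure-strategy payoff vectors $u(a)$ and $u(b)$: playing $a$ in period 0 and $b$ in every later period yields the discounted average

$$ (1-\delta)\,u(a) + \sum_{t\ge1}(1-\delta)\,\delta^{t}u(b) = (1-\delta)\,u(a) + \delta\,u(b) = \tfrac{1}{2}u(a) + \tfrac{1}{2}u(b). $$

Below the bound, the leading weight $1-\delta$ exceeds $1/2$, and some convex combinations cannot be generated exactly by any sequence of pure action profiles.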

Dal-Bo (2001) “Tacit Collusion under Interest Rate Fluctuations” Job Market Paper, UCLA

The paper examines optimal tacit collusion equilibria when the discount factor changes over time. It shows that collusive prices and profits depend not only on the level of the discount factor but also on its volatility: they increase with the level of the discount factor and decrease with its volatility. The model is a variant of the Rotemberg-Saloner model in which the discount factor, instead of demand, is assumed to fluctuate.

2011-06-14

Independence and Implementation

Continuing from the previous post on bargaining theory, I am going to quote further passages from the Introduction. This time, my focus is on the editor's (Prof. Thomson's) view on a controversial axiom behind the Nash solution, contraction independence(*), as well as his brief survey on the implementation of bargaining solutions.

(*) Contraction independence is often called the axiom of independence of irrelevant alternatives (IIA), which is defined as follows:

Contraction independence If, keeping the disagreement point constant, the feasible set contracts but the alternative chosen as solution outcome remains feasible, then it should remain the solution outcome.
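In symbols, writing $F$ for a bargaining solution defined on problems $(S,d)$ (the notation of the previous post):

$$ S' \subseteq S, \quad F(S,d) \in S' \;\Longrightarrow\; F(S',d) = F(S,d). $$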

On contraction independence
Contraction independence has been the object of the sharpest criticisms. Nash himself expressed some misgivings about it and Luce and Raiffa's (1957) objections are well known. In evaluating a bargaining situation, it is unavoidable and probably desirable that it be simplified and summarized, that it be reduced to its essential features. The issue is how much and what information should be discarded in the process, and one can make a convincing case that contraction independence ignores too much. Indeed, it covers contractions that affect the shape of the feasible set in ways that seem relevant, for instance, the elimination of the only alternatives at which a particular agent's payoff is higher than at the initial solution outcome and the other agent's payoff lower than at the initial solution outcome; contraction independence prevents solutions from responding to such eliminations.
The main counterargument to this criticism was made by Nash himself: in practice, information is usually lacking about which alternatives are truly available, and a compromise under evaluation only competes against others that are not too different from itself. Modeling this lack of information explicitly is what an investigator should perhaps do, but there are also advantages to keeping the model simple; contraction independence is a formal way to express the idea.

On implementation
Whether a solution is implementable depends on the type of game forms that are used, and on the behavioral assumptions made about how agents confronted with such games behave. For implementation by normal form games and when agents calculate best responses taking as given the choices made by the other agents, a critical property for what is then called Nash-implementability is Maskin-monotonicity (actually an invariance property with respect to enlargements of the lower contour set at the chosen alternative). Most solutions are not Maskin-monotonic and therefore not Nash-implementable by normal form games. However, an implementation of the Kalai-Smorodinsky solution by a sequential game is possible (Moulin, 1984). Later contributions delivered the Nash solution (Howard, 1992) and the egalitarian solution (Bossert and Tan, 1995). (...) For subgame perfect implementation, a general theorem is offered by Miyagawa (2002). It covers all solutions obtained, after normalizing problems so that the ideal point has equal coordinates, by maximizing a monotone and quasi-concave function of the agents' payoffs. Implementation is by means of a stage game and the equilibrium notion is subgame perfection.
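For the record, the parenthetical characterization of Maskin-monotonicity can be unpacked as follows (a standard statement; the notation is mine). Write $L_i(a,R_i) = \{b : a \mathrel{R_i} b\}$ for agent $i$'s lower contour set at alternative $a$ under preference $R_i$. A solution $F$ is Maskin-monotonic if

$$ a = F(R) \quad\text{and}\quad L_i(a,R_i) \subseteq L_i(a,R_i') \ \text{for all } i \;\Longrightarrow\; a = F(R'). $$

That is, if $a$ is chosen at $R$ and every agent's lower contour set at $a$ only enlarges in moving to $R'$, then $a$ must still be chosen at $R'$.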


References
Bossert and Tan (1995) "An arbitration game and the egalitarian bargaining solution", Social Choice and Welfare, Volume 12, Number 1, 29-41.
Luce and Raiffa (1957), Games and Decisions: Introduction and Critical Survey, Wiley.
Miyagawa (2002) "Subgame-perfect implementation of bargaining solutions", Games and Economic Behavior, Volume 41, Issue 2, 292-308.
Moulin (1984) "Implementing the Kalai-Smorodinsky bargaining solution", Journal of Economic Theory, Volume 33, Issue 1, 32-45.

2011-06-09

Bargaining Solutions: Nash, Egalitarian and Kalai-Smorodinsky

A leading researcher in bargaining theory, Professor William Thomson, recently edited a notable collection of papers on axiomatic bargaining:

Bargaining and the Theory of Cooperative Games: John Nash and Beyond


The following is quoted from the Introduction, written by the editor. The parts I quote below focus on three representative solutions to the bargaining problem(*), while the Introduction covers much more material. I strongly recommend this insightful survey article to anyone interested in bargaining theory.

Almost sixty years ago, Nash (1950) published his seminal paper on what is now known as the "axiomatic theory of bargaining". (...) He formulated a list of properties, or "axioms", that he thought a solution should satisfy, and he established the existence and the uniqueness of a solution satisfying all of the axioms; this solution is now called the "Nash solution".

Nash's model has been one of the most successful paradigms of game theory. His paper is the founding stone of a literature that now comprises several hundred theoretical papers. The Nash solution is presented in all game theory textbooks. (...) Together with the Shapley value (Shapley, 1953) and the core (Gillies, 1959), it constitutes the obligatory background on cooperative games in most economics graduate programs.

In spite of the large number of reasonable solutions that one can easily define, only three solutions and their variants have consistently emerged: in addition to the Nash solution, they are the egalitarian(**) and Kalai-Smorodinsky(***) solutions. (...) The dominance of these three solutions and their variants is a central conclusion to be drawn from the literature.

The Nash solution has come out somewhat more often than the other two, and the claim can perhaps be made that it is indeed special. (...) But the argument is a little dangerous. Earlier, we talked about the theorist's need to simplify and summarize in order to analyze, and in axiomatic analysis simplification often takes the form of independence and invariance axioms. The Nash solution satisfies many invariance conditions, thus it is not much of a surprise that it should have come out often. On the other hand, the monotonicity axioms that have generally led to the Kalai-Smorodinsky and egalitarian solutions are readily understood and endorsed by the man on the street.

It is mainly on the basis of monotonicity properties that the Kalai-Smorodinsky solution should be seen as an important challenger to the Nash solution, the egalitarian solution presenting another appealing choice. This latter solution enjoys even stronger monotonicity requirements and, like the Nash solution, it satisfies strong independence conditions. Unlike both the Nash and Kalai-Smorodinsky solutions, however, it requires interpersonal comparisons of utility, which, depending upon the context, may be seen as a desirable feature or a limitation.

(*) A bargaining problem consists of a pair (S, d) where S, the feasible set, is a subset of alternatives, and d, the disagreement point, is a point of S. (...) A bargaining solution defined on a class of problems is a function that associates with each problem (S, d) in the class a unique point of S, the solution outcome of (S, d).
(**) The egalitarian solution (Kalai, 1977a) selects the maximal point of S at which utility gains from d are equal. More generally, by making utility gains proportional to a fixed vector of weights, we obtain the weighted egalitarian solution relative to these weights (Kalai, 1977b).

(***) The Kalai-Smorodinsky solution (Kalai and Smorodinsky, 1975) selects the maximal point of S that is proportional to the profile of maximal payoffs that agents can separately reach among the points of S that dominate d.
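To see how the three solutions can differ, here is a small worked example of my own (not from the book). Take $d=(0,0)$ and $S=\{(x,y)\ge 0 : y \le 2(1-x^2)\}$, so the ideal point is $a(S)=(1,2)$.
  • Nash: maximize the product $xy$ on the boundary $y=2(1-x^2)$; setting the derivative of $2x(1-x^2)$ to zero gives $x=1/\sqrt{3}$, so the outcome is $(1/\sqrt{3},\,4/3)\approx(0.58,\,1.33)$.
  • Egalitarian: equal gains from $d$ require $x=y$; solving $x=2(1-x^2)$ gives $x=(\sqrt{17}-1)/4$, so the outcome is approximately $(0.78,\,0.78)$.
  • Kalai-Smorodinsky: the maximal point of $S$ on the ray toward $a(S)$ satisfies $y=2x$; solving $2x=2(1-x^2)$ gives $x=(\sqrt{5}-1)/2$, so the outcome is approximately $(0.62,\,1.24)$.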


References
Gillies (1959) "Solutions to general non-zero-sum games" in Contributions to the Theory of Games IV, Princeton University Press, 47-85.
Kalai (1977a) "Nonsymmetric Nash solutions and replications of 2-person bargaining", International Journal of Game Theory, Volume 6, Number 3, 129-133.
Kalai (1977b) "Proportional Solutions to Bargaining Situations: Interpersonal Utility Comparisons", Econometrica, Vol. 45, No. 7 (Oct., 1977), 1623-1630.
Kalai and Smorodinsky (1975) "Other Solutions to Nash's Bargaining Problem", Econometrica, Vol. 43, No. 3 (May, 1975), 513-518.
Nash (1950) "The Bargaining Problem", Econometrica, Vol. 18, No. 2 (Apr., 1950), 155-162.
Shapley (1953) "A Value for n-Person Games" in Contributions to the Theory of Games II, Princeton University Press, 307-317.

2011-05-10

An IO Paper by Maskin

Original article (link) posted: 30/10/2005

Maskin (1999) “Uncertainty and entry deterrence” Economic Theory, 14

The paper considers a model in which capacity installation by an incumbent firm serves to deter others from entering the industry. It shows that uncertainty about demand or costs forces the incumbent to choose a higher capacity level than it would under certainty. The intuition is explained in the Introduction as follows:

To deter entry, the incumbent must install enough capacity so that, if entry occurred, the entrant’s profit would be zero (or negative). Under certainty, the incumbent will install no more capacity than it would use were entry to occur. With uncertainty, when demand is high, an incumbent that has installed the certainty level of capacity still continues to produce at capacity; price simply rises to reflect the higher demand. But, when demand is low, the incumbent will wish to produce at less than full capacity. This means that the fall in price when demand is low is not so large as the rise in price when demand is high, and so if the entrant’s expected profit is zero under certainty, it is positive with uncertainty. To deter entry, therefore, the incumbent must increase capacity above the certainty level to ensure that when demand is high it produces enough to drive the entrant’s expected profit back down to zero.

2011-05-04

Matching and Market

I found insightful comments on the relationship between matching theory and market economy (more specifically, general equilibrium) in the following paper:
Vincent Crawford (1991), "Comparative Statics in Matching Markets" Journal of Economic Theory, 54: 389-400.
Perhaps the most important advantage of the matching approach is its robustness to heterogeneity. A traditional competitive equilibrium cannot exist in general unless the goods traded in each market are homogeneous, because all goods in the same market must sell at the same price. A traditional model of a labor market with the degree of heterogeneity normally encountered therefore has the structure of a multi-market general equilibrium model. But because the markets in such a model are very thin, the usual arguments in support of price-taking are strained. The theory of matching markets replaces this collection of thin markets with a single market game, in which the terms of partnerships are determined endogenously, along with the matching, via negotiations between prospective partners. Gale and Shapley's notion of stability(*), suitably generalized, formalizes the idea of competition, and thereby makes it possible to evaluate the robustness of traditional competitive analysis to heterogeneity. (Stable outcomes in matching markets can in fact be viewed as traditional competitive equilibria when prices are allowed to reflect the differences between matches; see, for example, Shapley and Shubik, 1972(**).)
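As a concrete illustration of the Gale-Shapley stability notion mentioned in the quote, here is a minimal sketch of their deferred-acceptance algorithm (the man-proposing version); the preference lists at the bottom are hypothetical, and this is an illustrative implementation rather than anything from Crawford's paper.

```python
# A minimal sketch of Gale and Shapley's (1962) deferred-acceptance
# algorithm, man-proposing version. Preference lists are hypothetical.

def deferred_acceptance(men_prefs, women_prefs):
    """Return a stable matching as a dict: woman -> man."""
    # rank[w][m] = position of m in w's list (lower index = more preferred)
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    free = list(men_prefs)                   # men currently unmatched
    next_choice = {m: 0 for m in men_prefs}  # next woman each man proposes to
    match = {}                               # woman -> man
    while free:
        m = free.pop(0)
        w = men_prefs[m][next_choice[m]]     # m's favorite not yet proposed to
        next_choice[m] += 1
        if w not in match:
            match[w] = m                     # w unmatched: tentatively accept
        elif rank[w][m] < rank[w][match[w]]:
            free.append(match[w])            # w trades up; old partner is free
            match[w] = m
        else:
            free.append(m)                   # w rejects m; he stays free
    return match

men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
print(deferred_acceptance(men, women))       # -> {'w1': 'm2', 'w2': 'm1'}
```

The resulting matching is stable in Gale and Shapley's sense: no man and woman prefer each other to their assigned partners.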

The author, Vince Crawford, who is known as a leading researcher in game theory, has written several influential papers on matching theory. The following two are especially important, since they initiated the literature on (many-to-one) matching with monetary transfers.
"Job Matching with Heterogeneous Firms and Workers"
with Elsie Marie Knoer, Econometrica, Vol. 49(2): 437-450, 1981.
"Job Matching, Coalition Formation, and Gross Substitutes"
with Alexander S. Kelso, Jr., Econometrica, Vol. 50(6): 1483-1504, 1982.


* Gale and Shapley (1962) "College Admissions and the Stability of Marriage" American Mathematical Monthly, 69: 9-15.
** Shapley and Shubik (1972) "The Assignment Game. 1. The Core" International Journal of Game Theory, 1: 111-130.

2011-04-06

Theory Seminar (Tatur)

Original article (link) posted: 29/10/2005

Tatur "On an Evolutionary Model and an Equilibrium Concept"

The paper proposes a new evolutionary equilibrium concept that differs drastically from classical equilibrium concepts such as ESS, Nash equilibrium, or correlated equilibrium. The evolutionary model is characterized by the following three crucial features.
1. Imitation (not best response)
There is no sophisticated learning. Instead, players change their strategies by imitating successful strategies taken by other players ("natural selection").
2. Local Interaction
A single, large, geographically dispersed population plays a finite two-player game, and only nearby players interact.
3. Correlation
There is a correlation device on which players can condition.

In his model, each player is matched only with nearby partners, without knowing whether he or she will be the column player or the row player. An equilibrium set is a set of correlated strategies that survives the imitation dynamics with random mutations in this geographical setting.
The author applies this equilibrium concept to many games and derives cooperative outcomes, most of which cannot be predicted by standard equilibrium concepts such as Nash equilibrium or ESS, yet are frequently observed in actual economic situations and in experiments. One outstanding result is cooperation in finitely repeated games: his equilibrium concept yields strategies that involve cooperation if the repeated game is sufficiently long. Moreover, he shows that as the length of the game goes to infinity, the equilibrium payoffs of the repeated game converge to a point that maximizes the utility of the population.
Just cool!! (Although the Princeton faculty didn't seem to like his evolutionary idea...)

Related papers
Ellison (1993) "Learning, Interaction, and Coordination" Econometrica, 61
Morris (2000) "Contagion" RES, 67

2011-03-28

Gilboa's View on Theory

What is theory, and what is the role of theory, especially in the social sciences? In his book, "Theory of Decision under Uncertainty," Professor Gilboa offers a thought-provoking argument.




The following quotes are all from Chapter 7: "The Role of Theories."

Theories are never correct, and in the case of the social sciences they tend to be almost always wrong. The question is, therefore, not whether they are right or wrong, but whether they are wrong in a way that invalidates the conclusions drawn from them. In other words, theories are tools for reasoning and rhetorical devices.
This last standpoint, I think, is more or less shared by great Israeli theorists such as Bob Aumann and Ariel Rubinstein. In a later part, Gilboa also says the following:
Recall that we are not hoping to obtain theories that are very accurate. We use the theories more often as reasoning aids.
Then, the author offers a further philosophical argument on science, referring to key thinkers such as Friedman, Popper, and Kuhn.
It follows that the degree to which we are willing to accept an assumption does not need to be a monotone function of its degree of accuracy. The assumption is tested based not only on its direct implications, but also on its indirect implications, which may involve nontrivial theorems.

The preceding discussion brings to mind Friedman's celebrated argument that theories should not be judged based on the validity of their assumptions, but on that of their conclusions. This argument is a bit extreme, and I would be careful to accept it (because of the following two reasons).
1) It is generally hard to draw a sharp distinction between assumptions and conclusions, completely ignoring the veracity of the former and testing only the latter.
2) Justifiably or not, it (= Friedman's argument) has become a bit of an excuse not to question the theory.

The logical positivist heritage (coupled with Popper's contribution) suggests that our theories should be falsifiable. The axiomatization we saw earlier is formulated in terms of conditions that can be violated. However, a theory such as utility maximization is not always easy to falsify. [...] Only in carefully designed controlled experiments can one hope to unambiguously refute a theory, but then one faces questions of external validity: the fact that a theory fails in artificial experimental environment may not be an indication that it will also fail in natural environment, to which it was presumably intended in the first place.

It started with Kuhn (1962), who asked questions about scientific paradigms and the way they changed. Kuhn described scientific evolution as a social phenomenon that need not converge to any objective truth. Rather, it was a process involving many factors, including accumulating evidence on the one hand, but also personal interests and tastes on the other.

The postmodern critique sometimes appears to confound descriptive and normative claims. It may well be true that science will never be able to be completely objective. But this does not mean that it shouldn't try. [...] There are instances of postmodern critique that sound to me similar to the argument, "We know that wars are inevitable. Hence, let's start shooting people."

2011-03-09

R&D investments and the persistence of monopoly

Original article (link) posted: 26/10/2005

R&D-intensive industries are a natural context in which to address the following questions: “Do dominant firms tend to maintain, or even increase, their market dominance?” and “Is market power temporary or permanent?”
Gilbert and Newbery (1982) develop a strong argument for the persistence of incumbency. They claim that a monopolist has more to lose from letting competition in than a potential entrant has to gain from challenging the monopolist. As a result, the tendency should be towards persistence, not alternation, of market dominance. Interestingly enough, they relate their model to a model of an auction market and state that preemption is a Nash equilibrium of the corresponding auction game. (A monopolist has a higher willingness to pay than an entrant and hence bids a higher amount. I think this result can be quoted in my license auctions paper.)
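Their argument (often called the efficiency effect) can be put in one line; the notation here is mine, not the paper's. Let $\pi^m$ denote monopoly profit and $\pi^I$, $\pi^E$ the incumbent's and the entrant's duopoly profits. Since joint duopoly profit cannot exceed monopoly profit,

$$ \pi^m \ge \pi^I + \pi^E \quad\Longrightarrow\quad \pi^m - \pi^I \ge \pi^E, $$

so the incumbent's willingness to pay to preempt (preserving its monopoly rather than becoming a duopolist) weakly exceeds the entrant's, and the incumbent outbids the entrant in the corresponding auction.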
Reinganum (1983) makes the point that Gilbert and Newbery's result depends on their assumption that the R&D process is deterministic. She shows that with uncertainty, there are cases where the probability that the monopolist is replaced by an entrant is greater than the probability of persistence, so R&D would decrease market dominance. Her logic is as follows. Under uncertainty, with positive probability, the potential entrant does not succeed in inventing a new product, even though it invests a positive amount. When this happens, a successful incumbent would only be replacing its monopoly product with another monopoly product. Because of this “replacement” effect, a monopolist has less incentive to engage in R&D than a competitive firm has.

References
Gilbert and Newbery (1982) “Preemptive Patenting and the Persistence of Monopoly” AER, 72
Reinganum (1983) “Uncertain Innovation and the Persistence of Monopoly” AER, 73

2011-02-28

Experimental Game Theory in GEB

I found an interesting webpage at Games and Economic Behavior (GEB), one of the leading academic journals in game theory. Titled "Two decades of experimental game theory in Games and Economic Behavior," this special online issue collects 17 articles on experimental game theory that have been published in GEB. It says:
Assembling this Virtual Special Issue on Experimental Game Theory has been an eye-opener. The first step was to go back through all the issues to get a bigger picture of the range of papers that we have published in this area. Games and Economic Behavior (GEB) was founded in 1989 at a time when there really wasn’t a subfield of experimental game theory as such. It wasn’t until a year later that this journal published its first article based on laboratory experiments, in the March 1990 issue – exactly twenty years ago.
See here for detailed information.

2011-02-22

Liquidity and Financial Crisis

Here comes a long-awaited economics book on liquidity, a topic of great importance, especially after the financial crisis.




In Inside and Outside Liquidity, leading economists Bengt Holmstrom and Jean Tirole offer an original, unified perspective on the following questions, which are at the center of all financial crises:

  • Why do financial institutions, industrial companies, and households hold low-yielding money balances, Treasury bills, and other liquid assets?
  • When and to what extent can the state and international financial markets make up for a shortage of liquid assets, allowing agents to save and share risk more effectively?

The publisher's description says:
In a slight, but important departure from the standard theory of finance, the authors show how imperfect pledgeability of corporate income leads to a demand for as well as a shortage of liquidity with interesting implications for the pricing of assets, investment decisions, and liquidity management.
The book will surely attract those who are interested in liquidity and financial crises.

2011-02-19

Lecture 7 (Dutta): Repeated Games 2

Original article (link) posted: 25/10/2005

We continued to examine the Abreu-Pearce-Stacchetti (APS) operator, particularly focusing on the following two theorems.

Theorem 1 (Necessity)
V* = LV*

Theorem 2 (Sufficiency)
If V = LV (and V is bounded), then V is a subset of V*

where L is the APS operator and V* is the set of SPE payoffs of the repeated game.

The proof of Theorem 1 is not difficult. We used "unimprovability" to prove Theorem 2. The APS operator also establishes the following results.

1. V* is compact
2. V* is increasing in the discount factor
3. APS operator is monotone

Using the third result together with the two theorems mentioned above, we can derive an algorithm to compute the SPE payoffs. That is, starting from a large set of candidate equilibrium payoffs (say, the convex hull of the set of feasible payoffs), we just need to apply the APS operator iteratively until the sequence of sets converges. The limit must then coincide with V*.
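Here is a rough numerical sketch of that iterative algorithm for a repeated Prisoner's Dilemma, with the candidate payoff set discretized to a grid and public randomization ignored. All choices here (stage payoffs, discount factor, grid size) are my own illustrative assumptions, and the discretization only approximates the true operator.

```python
# A rough numerical sketch of the APS set iteration W -> L(W) for a
# repeated Prisoner's Dilemma (pure actions, grid-discretized payoff
# sets, no public randomization). All parameters are illustrative.
import itertools

import numpy as np

# Stage game: action 0 = cooperate, 1 = defect; G[a] = (payoff1, payoff2).
G = {(0, 0): (2.0, 2.0), (0, 1): (-1.0, 3.0),
     (1, 0): (3.0, -1.0), (1, 1): (0.0, 0.0)}
DELTA = 0.7  # discount factor

def best_deviation(a, i):
    """Best stage payoff player i can get by deviating from profile a."""
    if i == 0:
        return max(G[(d, a[1])][0] for d in (0, 1))
    return max(G[(a[0], d)][1] for d in (0, 1))

# Grid over the square of possible payoff vectors.
lo, hi, n = -1.0, 3.0, 41
axis = np.linspace(lo, hi, n)
step = (hi - lo) / (n - 1)

def snap(v):
    """Round a payoff vector to the nearest grid point."""
    return tuple(axis[min(n - 1, max(0, round((x - lo) / step)))] for x in v)

# Start from the whole grid (a superset of the feasible set) and iterate.
W = {(x, y) for x in axis for y in axis}
for _ in range(200):
    worst = (min(w[0] for w in W), min(w[1] for w in W))  # punishment values
    new_W = set()
    for a, w in itertools.product(G, W):
        v = tuple((1 - DELTA) * G[a][i] + DELTA * w[i] for i in (0, 1))
        # APS incentive constraint: the on-path value must beat the best
        # one-shot deviation followed by the worst continuation in W.
        if all(v[i] >= (1 - DELTA) * best_deviation(a, i) + DELTA * worst[i]
               for i in (0, 1)):
            new_W.add(snap(v))
    if new_W == W:
        break
    W = new_W

print(len(W), "grid points survive the iteration")
print("player 1 payoffs from", min(w[0] for w in W), "to", max(w[0] for w in W))
```

Starting from a superset and applying the operator repeatedly, the surviving set shrinks toward (a grid approximation of) V*, which is exactly the algorithm described above.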

2011-02-12

Evolutionary Game Theory

The following recent textbook on evolutionary game theory seems to be a must-read for those who are interested in this field:




The description on its cover says:
Evolutionary game theory studies the behavior of large populations of strategically interacting agents, and is used by economists to make predictions in settings where traditional assumptions about agents' rationality and knowledge may not be justified. Population Games and Evolutionary Dynamics offers a systematic, rigorous, and unified presentation of evolutionary game theory, covering the core developments of the theory from its inception in biology in the 1970s through recent advances.
In a recommendation, Daniel Friedman, Professor of Economics at the University of California, Santa Cruz, says:
"(this text) is designed to become the standard reference and textbook in its field for many years."
My amazon booklist on "evolution and learning in game theory" (link in Japanese) might also be helpful.

2011-02-05

Lecture 6 (Dutta): Repeated Games 1

Original article (link) posted: 21/10/2005

Repeated Games: Set-Up
We first checked the definitions of the following: a stage game, a repeated game, a subgame, strategies, histories, a Nash equilibrium, a subgame perfect NE, feasible payoffs, and individually rational payoffs.
Note) Any point in the convex hull of the pure-strategy payoffs is feasible when the discount factor is sufficiently large. (The proof uses time-averaging strategies; see Sorin (1986).)

Abreu-Pearce-Stacchetti Characterization
Then, we investigated the APS operator, which captures an idea similar to that of the Bellman operator in a single-agent dynamic optimization problem.
Since this blog is not designed for writing messy equations, I will not cover the mathematical argument about the APS operator here. You can check Chapter 5 of Fudenberg and Tirole (1991) ("Dynamic Programming and Self-Generation" in 5.5.4) or the original paper by APS (1990).

References
Abreu, Pearce and Stacchetti (1990) "Toward a Theory of Discounted Repeated Games with Imperfect Monitoring" Econometrica, 58
Sorin (1986) "On Repeated Games with Complete Information" Math. of Operations Research, 11-1

2011-01-30

Japan Crowned Kings of Asia

Congratulations!!
Japan became the first team to win the AFC Asian Cup FOUR times as Tadanari Lee's goal in the second period of extra time clinched a 1-0 victory over Australia at Khalifa Stadium. See more in this article.
Many many thanks to our national team for having shown us such great games in this Asian Cup tournament :) You guys are really awesome!! (the following video is also great!)

2011-01-25

Aumann's survey

Original article (link) posted: 20/10/2005

Aumann (1985) “What is Game Theory Trying to Accomplish?” in Frontiers of Economics

This is a survey article by Professor Aumann, a Nobel laureate of this year. The paper consists of 18 sections. The first 8 sections are concerned with generalities of game theory, with particular attention to the concept of science; sections 9-17 illustrate four equilibrium concepts (Nash equilibrium, core, von Neumann-Morgenstern stable set, and Shapley value) with applications; and section 18 concludes. In the first half of the paper, he puts forward his point of view on game theory (or economics) as a science, which is very deep and insightful. The first 8 sections are highly recommended for those who are interested in such questions as “What is (social) science?” and “What is the definition of science or truth?”.
His main claim is stated in the Introduction as follows:

A solution concept (could be replaced with “a scientific theory”) should be judged more by what it does than by what it is; more by its success in establishing relationships and providing insights into the workings of the social processes to which it is applied than by considerations of a priori plausibility based on its definition alone.

2011-01-20

What is the New Keynesian Approach?

I received a comment on the previous post (link) asking about the new Keynesian approach. So let me try to provide some information based on the author's explanation of this concept. According to Walsh, the new Keynesian approach employs equilibrium models with some frictions or rigidities. He says the new Keynesian approach uses
models based on dynamic optimization and nominal rigidities in consistent general equilibrium framework.
The monetary frictions or rigidities that Walsh refers to include
money-in-utility function, cash-in-advance, search model of money, informational, portfolio, and nominal rigidities, and credit frictions,
which are all discussed in the book.
Another essential book on the new Keynesian monetary economics, written by one of the leading researchers in the field, is the following. This might also be useful for students and central bank economists alike.

2011-01-18

Walsh 3rd

A leading advanced textbook in monetary economics was revised and released last year.

Monetary Theory and Policy, Third Edition


The author describes the new edition, which seems to be an essential reference in the field, as follows:
This third edition reflects the latest advances in the field, incorporating new or expanded material on such topics as monetary search equilibria, sticky information, adaptive learning, state-contingent pricing models, and channel systems for implementing monetary policy. Much of the material on policy analysis has been reorganized to reflect the dominance of the new Keynesian approach. Monetary Theory and Policy continues to be the only comprehensive and up-to-date treatment of monetary economics, not only the leading text in the field but also the standard reference for academics and central bank researchers.

2011-01-14

Summary of IO Papers (by O. Williamson)

Original article (link) posted:18/10/2005

Industrial Organization (edited by O. Williamson) contains 23 important IO papers from the literature. Below is a list of short summaries of selected papers, written by the editor, O. Williamson, and quoted from the ‘Introduction’. Since most of the papers are still relevant and must-reads for IO researchers, the list may be a helpful guide for you. Please enjoy it!

Alchian (1950) “Uncertainty, Evolution, and Economic Theory”
The paper is significant in several directions. For one thing, selection arguments play a large role in virtually all forms of long-run competitive analysis. Second, the use of simplifying assumptions of an ‘as if’ hyperrationality kind can sometimes be justified by invoking selection arguments. And third, Alchian’s treatment of evolutionary issues is insightful and is carefully nuanced.

Holmstrom (1982) “Moral Hazard in Teams”
The author extends earlier work of a principal/single-agent kind to include relations among multiple agents in teams. The paper develops a sufficient-statistic condition on relative performance evaluation, according to which competition among agents is valued not because it induces added effort but because it is a device to extract information optimally.

Grossman and Hart (1986) “The Costs and Benefits of Ownership: A Theory of Vertical and Lateral Integration”
Although the formal modeling of incomplete contracting is formidably difficult, the paper develops a model in which both ex ante alignment and ex post adaptation differences between market and hierarchical modes of organization are recognized.

Dixit (1980) “The Role of Investment in Entry-Deterrence”
The paper sets out the basic logic and demonstrates the critical importance of investments in durable, nonredeployable assets to effect entry deterrence. Given credible pre-entry commitments, the logic of entry barriers was made secure. But inasmuch as a duopoly setup is highly specialized, the empirical significance and antitrust enforcement ramifications of the argument can be questioned.

Aghion and Bolton (1987) “Contracts as a Barrier to Entry”
What the paper examines is whether an incumbent supplier can fashion a penalty clause whose effects make the incumbent better off. It is shown that penalties can be devised such that lower-cost entrants can be deterred – although not necessarily precluded – from entering.

Milgrom and Roberts (1982) “Limit Pricing and Entry Under Incomplete Information: An Equilibrium Analysis”
The use of limit pricing here turns on an information asymmetry between the sitting monopolist (or incumbent) and the potential entrant. Whereas the incumbent knows its costs, the potential entrant can only infer them. The incumbent would like to signal to the potential entrant that it has low costs, thereby deterring entry. Although it can do this by setting a low price, the entrant is alert to the strategic nature of the game and recognizes that signaling can be used for strategic purposes.

Kreps and Wilson (1982) “Reputation and Imperfect Information”
The paper shows that the introduction of a small amount of imperfect or incomplete information can transform such a game into one whereby monopolists strategically contest entry. The logic of unraveling gives way to a logic of reputation in an intertemporal framework into which imperfect information has been introduced.

Baumol, Panzar and Willig (1986) “On the Theory of Perfectly-Contestable Markets”
The paper summarizes the central arguments of their influential book and advances the argument that the perfectly contestable market – that in which asset-specificity is negligible, whence assets are easily redeployable to alternative uses and by alternative users – is usefully regarded as an analytical and public policy benchmark.

Salop (1979) “Monopolistic Competition with Outside Goods”
The author uses a spatial competition model to investigate monopolistic competition. Salop’s treatment nicely displays the key features of a monopolistically competitive context in a spatial equilibrium setting.

2011-01-08

AER (September 2010)

The hard copy of the American Economic Review (September 2010, link) arrived in late December and seems to contain lots of interesting papers (as usual). The following is a list of the papers whose titles especially attracted me.
  • "Morally Motivated Self-Regulation" by David Baron
  • "Are Health Insurance Markets Competitive?" by Leemore Dafny
  • "The Law of the Few" by Andrea Galeotti and Sanjeev Goyal
  • "Monopoly Price Discrimination and Demand Curvature" by Inaki Aguirre, Simon Cowan and John Vickers
  • "Strategic Redistricting" by Faruk Gul and Wolfgang Pesendorfer
  • "A Price Theory of Multi-Sided Platforms" by Glen Weyl
  • "When Does Communication Improve Coordination?" by Tore Ellingsen and Robert Ostling
I should definitely check them out. They all look really interesting :)

2011-01-06

World's Most Expensive Tuna

A 511-pound tuna was sold for a record-breaking $175,000 at Tsukiji, the world's biggest fish market, in Tokyo! Can you believe that? The news (link) reports as follows.
Tokyo’s Tsukiji fish market is known worldwide as the top place to buy fish. It is the biggest wholesale fish and seafood market on the planet!
Over 400 different types of seafood are sold there, with an average yearly value of sales at $5.5 billion.
It came as no surprise that a 511-pound tuna finally broke the sale record, at $175,000! Almost four times the weight of an average Japanese man, the tuna was caught off the northern tip of the main island Honshu, where many prize fish are found.
It was bought jointly by two Tokyo restaurants and an entrepreneur from Hong Kong, to be divided up after the sale. It is the biggest sale since the previous 2001 record, when a 440-pound tuna sold for $220,000.