2010-12-28

Lecture 5 (Dutta): Dynamic Programming 4

Original article (link) posted: 13/10/2005

First we looked at two more properties of the value function: supermodularity and differentiability.
Then we examined the model under slightly different assumptions: a deterministic transition function, an unbounded reward function, a unit discount factor, and finite-horizon DP.

Result (Supermodularity)
Suppose the action space is a subset of R. If the reward function is supermodular and the transition function is action-dependent, then the optimal action correspondence is monotone (the largest action in the optimal set is monotone in the state).
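For concreteness, the standard way to formalize supermodularity here is via increasing differences (this notation is mine, not from the lecture notes): for all actions a' > a and states s' > s,

```latex
r(s', a') - r(s', a) \;\ge\; r(s, a') - r(s, a).
```

By Topkis's monotonicity theorem, if the objective on the right-hand side of the Bellman equation has increasing differences in (s, a), then the largest maximizer a*(s) is nondecreasing in s, which is exactly the monotone selection claimed above.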

Result (Differentiability)
Suppose the reward function is differentiable on S and either
a) the transition function is action-dependent, or
b) the transition function has a density that is differentiable on S.
Then V is differentiable on int S.

Deterministic model
Just a special case of the stochastic model

Unbounded reward
Some kind of boundedness condition is needed

No discounting
Continuity between the discounting case and the no-discounting case (see Dutta (1995))
Note)
It is hard to derive the value function directly without discounting (the long-run average payoff criterion). So we can first solve for the value function in the discounting case and then let the discount factor go to unity.
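A minimal numerical sketch of this vanishing-discount idea (the two-state chain and its payoffs are my own toy example, not from the lecture): compute V_delta by value iteration for delta close to one; (1 - delta) V_delta then approximates the long-run average payoff.

```python
import numpy as np

# Toy deterministic MDP (hypothetical numbers, for illustration only).
reward = np.array([[1.0, 0.0],    # state 0: action 0 pays 1, action 1 pays 0
                   [0.0, 2.0]])   # state 1: action 0 pays 0, action 1 pays 2
next_state = np.array([[0, 1],    # state 0: action 0 stays, action 1 moves to 1
                       [0, 1]])   # state 1: action 0 moves to 0, action 1 stays

def value_iteration(delta, tol=1e-10):
    """Iterate the Bellman operator until the sup-norm change is below tol."""
    V = np.zeros(2)
    while True:
        Q = reward + delta * V[next_state]   # Q[s, a] = r(s, a) + delta * V(s')
        V_new = Q.max(axis=1)
        if np.abs(V_new - V).max() < tol:
            return V_new
        V = V_new

# As delta -> 1, (1 - delta) * V_delta approaches the long-run average payoff.
for delta in (0.9, 0.99, 0.999):
    print(delta, (1 - delta) * value_iteration(delta))
# Both entries approach 2: from either state it is optimal to reach state 1
# and collect the per-period reward of 2 forever.
```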

Finite horizon DP (continuity of V)
The value function of a finite DP problem with T remaining periods, V(T), converges (in sup norm) to that of the infinite-horizon model as T goes to infinity.
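The logic behind this convergence is a standard contraction argument (not spelled out in the original post): the T-period value function is just the T-fold iterate of the Bellman operator applied to the terminal value V^(0) (say, zero), so the contraction property gives

```latex
\| V^{(T)} - V \|_{\infty} \;\le\; \delta^{T}\, \| V^{(0)} - V \|_{\infty} \;\longrightarrow\; 0
\qquad (T \to \infty),
```

since the Bellman operator is a contraction of modulus delta < 1 with the infinite-horizon value function V as its unique fixed point.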

Finite horizon DP (continuity of h)
Under continuity and compactness assumptions, there exists a sequence of optimal action policies h(t), h(t-1), ... If h(T) converges to some policy, say h, as T goes to infinity, then h is a stationary optimal policy for the corresponding infinite-horizon DP problem.

2010-12-23

3G Auctions in UK: Q&A

The last chapter of Paul Klemperer's famous auction book provides non-technical answers to the FAQs on spectrum auctions, especially on the 3G auction that he (and Ken Binmore) designed and implemented in the UK.



The chapter, titled "Were Auctions a Good Idea?," is based on his newspaper article "The Wrong Culprit for Telecom Trouble," which appeared in the Financial Times (Nov. 26, 2002) and in which he listed three typical criticisms of the 3G auctions.
The critics assume three things:
1: the telecoms companies paid more for the licenses than they thought the licenses were worth
2: this expenditure has reduced investment in 3G
3: it destroyed the telecoms companies' market value
Paul answers each point as follows:
1. It is now hard to imagine any company voluntarily paid billions for a 3G license. But volunteer they all did - and they celebrated their victories. (...) Like any other market, an auction simply matches willing buyers and willing sellers - it cannot protect them against their own mistakes.
2. The auction fees are history; they have (almost) all been paid in full, cannot be recouped by cutting investment, and make no difference to its profitability. (...) Investment in 3G, as in anything else, is primarily motivated by attractive returns in the future - not by money spent in the past.
3. All in all, it seems a stretch to blame $100bn of shareholder misery on $100bn of 3G license fees.
Then, he concludes the article by saying:
There is nothing special about an auction: it is just another market. Buying houses or shares at the peak of a housing or stock-market boom does not entitle anyone to compensation. Why should we make an exception for the phone companies?
(...)
The main effect of the license fees was simply to transfer $100bn from shareholders around the world to certain European governments. This was both equitable, since the companies were buying a public asset that they valued this highly at the time, and efficient, since such a lump sum transfer is much more efficient than most forms of taxation. Efficient, equitable and voluntary government funding is not easy to find: perhaps we should be more enthusiastic about it.

2010-12-21

Theory Seminar "Fudenberg and Levine"

Original article (link) posted: 08/10/2005

Fudenberg and Levine (2005) "A Dual-Self Model of Impulse Control"

The paper proposes a "dual-self" model of single-agent decision making. In the model, a patient long-run self and a sequence of myopic short-run selves, who share the same preferences over the stage-game outcome, play a game that determines dynamic decisions. In each period, the long-run self moves first and chooses the utility function of the myopic self, possibly with some reduction in utility ("self-control"). After seeing this "self-control" level, the short-run self makes the final decision.
The model gives a unified explanation for a number of empirical regularities related to self-control problems and a value for commitment in decision problems, including the apparent time inconsistency that has motivated models of hyperbolic discounting, and Rabin's paradox of risk aversion in the large and in the small. The base version of the model is consistent with the Gul-Pesendorfer axioms.
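To fix ideas, here is a toy two-period numerical illustration in the spirit of the model (entirely my own construction, not Fudenberg and Levine's actual specification): the long-run self chooses a self-control level g at a convex utility cost; the myopic self then picks today's consumption under the distorted utility.

```python
import numpy as np

# Toy dual-self consumption-savings problem (illustrative numbers only).
w = 10.0        # wealth to split between today and tomorrow
beta = 0.9      # long-run self's discount factor
kappa = 0.05    # cost coefficient of self-control (assumed quadratic)

c_grid = np.linspace(0.0, w, 1001)
g_grid = np.linspace(0.0, 2.0, 201)

def short_run_choice(g):
    """The myopic self maximizes today's utility, distorted by self-control g."""
    return c_grid[np.argmax(np.log(1.0 + c_grid) - g * c_grid)]

best = None
for g in g_grid:
    c = short_run_choice(g)
    u_long = (np.log(1.0 + c) + beta * np.log(1.0 + w - c)  # values both periods
              - kappa * g ** 2)                             # cost of self-control
    if best is None or u_long > best[0]:
        best = (u_long, g, c)

print("self-control g = %.2f, consumption today = %.2f" % (best[1], best[2]))
# With g = 0 the myopic self consumes all of w; a positive g trades the cost
# of self-control against smoother consumption across the two periods.
```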

Comments
The paper is quite interesting, and the presentation by Professor Levine was really nice. He illustrated many experimental results and explained how their theory can account for them. However, it was a bit unclear to me what their marginal contribution is to the literature on behavioral economics. It seems that there are many alternative theories that can also explain those experimental results. I must check the papers in this field at least to some extent...

Interesting Papers on Reference

Gul and Pesendorfer (2001) "Temptation and Self-Control" Econometrica, 69
: An axiomatic approach to temptation preferences
Gul and Pesendorfer (2004) "Self-Control and the Theory of Consumption" Econometrica, 72
: Generalization of the 2001 paper to multi-period decisions, aiming at applications to macroeconomics
Krusell and Smith (2003) "Consumption-Savings Decisions with Quasi-Geometric Discounting" Econometrica, 71
: Multiple equilibria in hyperbolic models
Laibson (1997) "Golden Eggs and Hyperbolic Discounting" QJE, 112
: Revival of non-exponential discounting preferences
Laibson (2001) "A Cue-Theory of Consumption" QJE, 116
: Cue-theory
O'Donoghue and Rabin (1999) "Doing It Now or Later" AER, 89
: Early or later decisions
Rabin (2000) "Risk Aversion and Expected-Utility Theory: A Calibration Theorem" Econometrica
: Rabin's paradox
Thaler and Shefrin (1981) "An Economic Theory of Self-Control" JPE, 89
: A pioneering work of self-control

2010-12-16

Contract Theory Conference

I organized the 4th Japan-Taiwan Contract Theory Conference on December 4, 2010 at my institute, the National Graduate Institute for Policy Studies (GRIPS), and it turned out to be extremely successful. We had 49 participants, including 18 guests from Taiwan, and enjoyed listening to six intriguing talks and the discussions. I am grateful to all who kindly attended this event and helped me set it up.
The following is the conference program with my brief comments.

First session: 10:05--11:55

Chair: Chia-Hui Chen (Academia Sinica)
Speaker 1: Chien-Lung Chen (National Tsing-Hua University) 
  • How Can Agents Be Better-Off with Pay Caps?  
Speaker 2: Mami Kobayashi (Kinki University)  
  • Market Liquidity and Capital Structure of Financial Institution

Comments: The first talk was about optimal wage schemes under pay caps (in addition to minimum-wage constraints), and the second was about incentive issues in corporate finance and governance. Theoretical robustness and the connection to actual labor and financial markets were discussed.

Second session: 13:10--15:00
Chair: Jong-Rong Chen (National Central University)
Speaker 1: Chi-Hsiang Liu (National Central University)
  • Promotion, Relative and Individual Performance Pay
Speaker 2: Kohei Kawamura (University of Edinburgh)
  • Eliciting Information from a Large Population
Comments: Both talks presented frontier work in contract theory. The first is a considerable extension of the promotion model, and the second combines the ideas of cheap talk and (sequential) Bayesian learning. The interpretation and potential applications of their results were discussed.

Third session: 15:20--17:10
Chair: Pao-Chih Cheng (National Central University)
Speaker 1: Hsiang-Ling Shen (National Taiwan University)
  • Sponsored Search Auctions under Two Search Engines
Speaker 2: Takanori Adachi (Nagoya University)
  • Political Accountability, Electoral Control and Media Bias (with Yoichi Hizen)
Comments: The two speakers talked about their preliminary work, both very new and strongly connected to real-life interests: sponsored search auctions and media bias. Since some parts of their models were not yet well shaped, ideas for further improvement were discussed and proposed.

2010-12-12

Lecture 4 (Dutta): Dynamic Programming 3

Original article (link) posted: 06/10/2005

In the last class, we checked the strong relationship between the dynamic optimization problem and the functional equation. In this class, we examined continuity, monotonicity and concavity of the value function.

Assumptions (for continuity)
(A1) State Space: Borel subset of a metric space
(A2) Action Space: Metric space
(A3) Reward Function: Bounded, measurable and continuous
(A4) Transition Function: Weakly continuous
(A5) Feasibility Correspondence: continuous

Theorem (Maitra 68)
Under (A1)-(A5), the value function V is a bounded and continuous function. Furthermore, there is a stationary Markovian policy, h, that is optimal.
Proof
Step 1: The Bellman operator T maps the space of bounded and continuous functions (denoted C(S)) back into itself.
Step 2: C(S) is a complete metric space.
Step 3: T is a contraction.
Step 4: There is a measurable selection h from the Bellman equation.
Note) To prove Step 1, we use the Maximum Theorem. For the other parts of the proof, we rely on established results.

Assumptions (for monotonicity)
(B1) State Space: A subset of R^n
(B2)=(A2)
(B3)=(A3) + increasing on S (State Space)
(B4)=(A4) + first-order stochastically increasing in S
(B5)=(A5) + A higher state has a larger feasible set

Theorem (Monotonicity)
Under (B1)-(B5), the value function is bounded, continuous and increasing on S.

Note) Steps 2-4 are almost the same as before, so we essentially only need to check Step 1.

Theorem (Strict Monotonicity)
If we have a strictly increasing reward function, then V is also strictly increasing.
Proof
It is easy to see from the definition of the functional equation.
Note that you should not try to restrict the space to the set of strictly increasing functions, because that set is not complete: the limit of strictly increasing functions may fail to be strictly increasing (for example, f_n(x) = x/n is strictly increasing for every n but converges uniformly to the constant function 0).

Assumptions (Concavity)
(C1)=(B1) + S is convex
(C2) Action Space: A subset of R^m and convex
(C3)=(B3) + reward function is concave
(C4)=(A4) + transition function is "concave"
(concavity here is defined in terms of Second Order Stochastic Dominance)
(C5)=(B5) + graph is convex

Theorem (Concavity)
Under (C1)-(C5), the value function is bounded, continuous, and concave.
With strict concavity of the reward function, V is bounded, continuous, and strictly concave. In this case, h becomes a continuous function.

2010-12-06

Theory Seminar (Sigurdsson)

Original article (link) posted: 05/10/2005

Sigurdsson (2005) "Auctions as Mechanism: An Application to Bankruptcy Reorganization" Job Market Paper

In the paper, he proposes a new mechanism for reorganizing firms in bankruptcy. As a matter of fact, a firm that files for bankruptcy must either liquidate under Chapter 7 of the Bankruptcy Code or reorganize under Chapter 11. The distribution follows the absolute priority rule (APR), which states that no creditor shall receive any value until all claims senior to his have been paid in full. Legal scholars have proposed several mechanisms as alternatives to the current system of judicially supervised bargaining, which is widely believed to be costly and lengthy, and to result in inefficient capital structures and violations of the APR. A major drawback of those proposals is their reliance on cash payments.
In the mechanism he proposes, the entire reorganized firm is sold in a cash auction and the proceeds are distributed to creditors according to the APR. He mentions four advantages of his mechanism. That is, the mechanism
1) implements the APR for far more general capital structures than the simple debt-and-equity structures assumed in previous mechanisms,
2) all but eliminates the need for cash payments and therefore works under tight financial constraints,
3) being an auction, offers the advantage of familiarity over its more novel competitors, and
4) allocates ownership efficiently when creditors do not agree on the firm's value.

It seemed that the participants of the seminar liked his paper. I hope he will get a good job!!
By the way, his main advisor is Eric Maskin. I heard a rumor that he does not take students, but it must be wrong. I would like to talk to him about my research too.

2010-12-01

Auction theory: Myerson and mechanism design approach

Continuing from the previous two articles, we still focus on the Introduction of the auction book:

After referring to the early literature on auction theory, the authors introduce mechanism design theory and explain Myerson's path-breaking contribution.

The early work of Vickrey, Wilson, and Milgrom was largely focused on an equilibrium analysis and comparison of standard auction formats. Myerson led the development of mechanism design theory, which enables the researcher to characterize equilibrium outcomes of all auction mechanisms, and identify optimal mechanisms - those mechanisms that maximize some objective, such as revenues. His first application was to auctions.
Myerson (1981) determined the revenue-maximizing auction with risk-neutral bidders and independent private information. He also proved a general revenue equivalence theorem that says that revenues depend fundamentally on how the items are assigned - any two auction formats that lead to the same assignment of the items yield the same revenues to the seller.
The trick in Myerson's analysis was recognizing that any auction can be represented as a direct mechanism in which bidders simultaneously report their private information and then the mechanism determines assignments and payments based on the vector of reports. For any equilibrium of any auction game, there is an equivalent direct mechanism in which bidders truthfully report types and agree to participate. Hence, without loss of generality we can look at incentive compatible and individually rational mechanisms to understand properties of all auction games. Incentive compatibility respects the fact that the bidders have private information about their values; individual rationality respects the bidders' voluntary participation decision. This key idea is known as the revelation principle (Myerson, 1979).
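The formula behind the optimal-auction result is worth recording (standard in the literature; notation mine): with independent private values, the expected revenue of any incentive-compatible, individually rational mechanism can be written as the expected "virtual value" of the winner,

```latex
\mathbb{E}\Big[\textstyle\sum_i q_i(v)\, \psi_i(v_i)\Big],
\qquad
\psi_i(v_i) \;=\; v_i - \frac{1 - F_i(v_i)}{f_i(v_i)},
```

where q_i(v) is bidder i's probability of receiving the item and F_i (with density f_i) is his value distribution. Since revenue depends only on the assignment rule q (and the payoff given to the lowest type), any two auctions inducing the same assignment yield the same revenue, which is the revenue equivalence theorem mentioned above.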
The following is a brief summary of other related papers.

Myerson and Satterthwaite (1983)
  • applied revelation principle
  • proved the general impossibility of efficient bargaining
Related Papers
  • Cramton, Gibbons, and Klemperer (1987) showed that efficiency becomes possible when the traders jointly own the items
  • Wilson (1993) also showed efficient mechanisms when the roles of buyers and sellers are not fixed ex ante

References
Cramton, Gibbons, and Klemperer, "Dissolving a Partnership Efficiently," Econometrica, 1987.
Myerson, "Incentive Compatibility and the Bargaining Problem," Econometrica, 1979.
Myerson, "Optimal Auction Design," Mathmatics of Operations Research, 1981.
Myerson and Satterthwaite, "Efficient Mechanisms for Bilateral Trading," Journal of Economic Theory, 1983.
Wilson, "Design of Efficient Trading Procedures," in Friedman and Rust ed., The Double Auction Market: Institutions, Theories, and Evidence, Addison-Wesley Publishing Company, 1993.

2010-11-27

Auction theory: Vickrey and early literature

Continuing from the previous post, let me quote interesting parts from the editors' introductory summary. The following nicely illustrates the contribution of the pioneer of auction theory, William Vickrey.
Vickrey's seminal paper (Vickrey, 1961), mentioned in his 1996 Nobel Prize in economics, introduced the independent private value model, demonstrated equilibrium bidding behavior in a first-price auction, and then showed that truthful bidding could be induced as a dominant strategy by modifying the pricing rule: let each bidder pay the social opportunity cost of his winnings, rather than his bid. Finally, he showed in an example what would later be proven generally as the revenue equivalence theorem: different auction mechanisms that result in the same allocation of goods yield the same revenue to the seller.
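The revenue-equivalence example is easy to check numerically (my own sketch; with n i.i.d. uniform[0,1] values, the symmetric first-price equilibrium bid b(v) = (n-1)v/n is a textbook result):

```python
import numpy as np

rng = np.random.default_rng(0)
n, draws = 4, 200_000

# n i.i.d. private values, uniform on [0, 1], for each simulated auction.
v = rng.uniform(size=(draws, n))

# Second-price (Vickrey) auction: truthful bidding is dominant, so
# revenue equals the second-highest value.
rev_second = np.sort(v, axis=1)[:, -2].mean()

# First-price auction: symmetric equilibrium bid b(v) = (n-1)/n * v
# in the uniform IPV case, so revenue equals the highest bid.
rev_first = ((n - 1) / n * v.max(axis=1)).mean()

print(rev_second, rev_first)   # both close to (n-1)/(n+1) = 0.6
```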
Then, the authors explain a few important papers in the early literature on auction theory since Vickrey. The following is my summary.

Wilson (1969)
  • (pure) common value
  • first analysis of equilibrium bidding with common values
  • demonstrated the importance of avoiding (what would later be called) the winner's curse

Milgrom (1981)
  • common + private values
  • discovered the importance of monotone likelihood ratio property (MLRP)
  • showed that MLRP + conditional independence implies that
  1. bidders use monotonic bidding strategies
  2. a monotonic strategy satisfying the first-order condition constitutes an equilibrium

Milgrom and Weber (1982)
  • affiliated values: if one bidder has a high signal of value, it is more likely that the signals of the other bidders are high
  • showed that under affiliated values
  1. Vickrey's revenue equivalence result no longer holds when we introduce a common value element
  2. ascending auctions yield higher revenues than sealed-bid auctions

References
Milgrom, "Rational Expectations, Information Acquisition, and Competitive Bidding," Econometrica, 1981.
Milgrom and Weber, "A Theory of Auctions and Competitive Bidding," Econometrica, 1982.
Vickrey, "Counterspeculation, Auctions, and Competitive Sealed Tenders," Journal of Finance, 1961.
Wilson, "Competitive Bidding with Disparate Information," Management Science, 1969.

2010-11-22

Combinatorial Auctions: Introduction

This book is a great collection of papers on a rapidly growing field, "combinatorial auctions."


Let me quote a couple of useful sentences below taken from Introduction written by the editors, Peter Cramton, Yoav Shoham, and Richard Steinberg.
  • The study of combinatorial auctions thus lies at the intersection of economics, operations research, and computer science.
  • There are numerous examples of combinatorial auctions in practice. As is typical of many fields, practice precedes theory. Simple combinatorial auctions have been used for many decades in, for example, estate auctions.
  • Recently, a variety of industries have employed combinatorial auctions. For example, they have been used for truckload transportation, bus routes, and industrial procurement, and have been proposed for airport arrival and departure slots, as well as for allocating radio spectrum for wireless communications services.
  • Auction theory is among the most influential and widely studied topics in economics over the last forty years. Auctions ask and answer the most fundamental questions in economics: who should get the goods and at what prices? In answering these questions, auctions provide the micro-foundation of markets. Indeed, many modern markets are organized as auctions.

2010-11-17

Frontiers of Science

I visited Potsdam, Germany, on Nov. 11-14 to attend the 7th Japanese-German Frontiers of Science Symposium 2010 (link). It is a truly interdisciplinary conference jointly organized by the Alexander von Humboldt Foundation and the Japan Society for the Promotion of Science.

I was an invited speaker in the social science session titled "New Methods in Decision Making" (session list), and talked about "Recent Developments in Market Design and its Applications to School Choice" (slide). It was quite exciting to give a presentation to researchers from completely different fields, mainly from the natural sciences. Although I didn't have enough time to cover the details of my own studies, many of them seemed surprised to see how powerful and useful game-theoretic tools are.

I also very much enjoyed the talks and discussions in the other sessions. Most of the topics were of course unfamiliar to me, but the frontier work looked truly exciting. This was a wonderful opportunity indeed! Many thanks to the organizers and participants :)

2010-11-10

Kandori (1991)

Original article (link) posted: 01/10/2005

Kandori (1991) "Correlated Demand Shocks and Price Wars During Booms" RES, 58

The paper extends the analysis of Rotemberg and Saloner (1986) to the case of serially correlated demand shocks and derives the same counter-cyclical movement as in their (i.i.d.) case, provided the discount factor and the number of firms satisfy a certain relationship.
The key observation in Rotemberg and Saloner (1986) was that, if the sum of future profits is unaffected by today's demand, firms must set the price relatively low when demand is high. The premise is clearly satisfied when the demand shocks are i.i.d. This paper shows that introducing Markov demand shocks also creates the same situation in the following two cases. The first case is when the discount factor delta exceeds, but is close to, (N-1)/N, where N is the number of firms. It is shown that firms maintain a constant profit (which equals the monopoly profit in the worst state) under all demand conditions. Therefore, the extent of the correlation in demand is irrelevant.
The second case arises when delta tends to unity while (1-delta)N is held constant. In this case, firms are enormously forward-looking and total future profit is mostly determined by the stationary distribution, which is independent of today's demand position.
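The threshold (N-1)/N comes from the standard Bertrand collusion calculation, which is worth recalling here (my addition, not in the original post): a colluding firm earns a share pi_m/N of the monopoly profit each period, while undercutting yields (nearly) the whole pi_m once, followed by zero profit forever. Collusion is therefore sustainable iff

```latex
\frac{\pi^m / N}{1 - \delta} \;\ge\; \pi^m
\quad\Longleftrightarrow\quad
\delta \;\ge\; \frac{N-1}{N}.
```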

The result itself is not that surprising (compared to Kandori's other papers, at least). However, he is amazingly good at selling his work, especially in the following two respects.
First, he stresses the importance of Rotemberg and Saloner (1986) as well as its drawbacks. The motivation for extending their paper becomes very clear, and the reader naturally gets interested in HIS work.
Second, his way of presenting results is quite lucid and rigorous. Although the results can be more or less expected to hold, it is always difficult to prove them rigorously.
Those techniques should be useful for us. Let's learn them from Kandori's papers!

Interesting Papers that cite Kandori (1991)

Bagwell (2004) "Countercyclical Pricing in Customer Markets" Economica, 71
Dal Bo (2001) "Tacit Collusion under Interest Rate Fluctuations" Job Market Paper
Harrington (2004) "Cartel Pricing Dynamics in the Presence of an Antitrust Authority" Rand

2010-11-05

Decision Theory 301

This is complementary to the previous post, "Decision Theory 101 (link)." In Appendix A: Optimal Choice, the author (Professor Gilboa) concisely explains the flexibility of the rational choice framework. I think his argument is really important, especially when we evaluate the recent developments in behavioral economics and consider their relationship with the traditional (or rational) approach.
For our purposes, it is worthwhile highlighting what this model (the consumers' problem: by yyasuda) does not include. Choices are given as quantities of products. Various descriptions of the products, which may be part of their frames, are not part of the discussion. The utility function measures desirability on a scale. We did not mention any special point on this scale, such as a reference point. Further, choices are bundles of products to be consumed by the consumer in question at the time of the problem. They do not allow us to treat a certain bundle differently based on the consumer's history of consumption, or on the consumption of others around them. Hence, the very language of the model assumes that the consumer does not care what others have, they feel no envy, nor any disappointment in the case when their income drops as compared with last period, and so on.
It is important to emphasize that the general paradigm of rational choice does not necessitate these constraints. For instance, instead of the n products the consumer can consume today, we may have a model with 2n products, reflecting their consumption today and their consumption yesterday. This would allow us to specify a utility function u that takes into account considerations such as aspiration levels, disappointment, and so forth. Or, we can use more variables to indicate the average consumption in the consumer's social group, and then the utility function can capture social considerations such as the consumer's ranking in society and so forth. Indeed, such models have been suggested in the past and have become more popular with the rise of behavioral economics. These models show that the paradigm of rational choice is rather flexible. Yet, the specific theory restricts the relevant variables to be independent of history, others' experiences, emotions, and other factors which might be among the determinants of well-being.

2010-11-01

Decision Theory 101

Let me continue to quote some basics of decision theory (or economics) from Gilboa's recent book, "Making Better Decisions."
In Appendix A: Optimal Choice, the author nicely illustrates the framework of decision theory and its key concepts, such as axioms and the utility function. The following might be especially helpful for those who are against, or suspicious of, the fundamental tool of economics: utility maximization.
A fundamental of optimal choice theory is the distinction between feasibility and desirability. A choice is feasible if it is possible for the decision maker, that is, one of the things that she can do. An outcome is desirable if the decision maker wishes to bring it about. Typically, feasibility is considered to be a dichotomous concept, while desirability is continuous: a choice is either feasible or not, with no shades in between; by contrast, an outcome is desirable to a certain degree, and different outcomes can be ranked according to their desirability.
We typically assume that desirability is measured by a utility function u, such that the higher the utility of a choice, the better will the decision maker like it. This might appear odd, as many people do not know what functions are and almost no one can be observed walking around with a calculator and finding the alternative with the highest utility. But it turns out that very mild assumptions on choice are sufficient to determine that the decision maker behaves as if she had a utility function that she was attempting to maximize. If the number of choices is finite, the assumptions (often called axioms) are the following:
1. Completeness: for every two choices, the decision maker can say that she prefers the first to the second, the second to the first, or that she is indifferent between them.
2. Transitivity: for every three choices a, b, c, if a is at least as good as b, and b is at least as good as c, then a is at least as good as c.
It turns out that these assumptions are equivalent to the claim that there exists a function u such that, for every two alternatives a and b, a is at least as good as b if and only if u(a) ≥ u(b). (...) Any other algorithm that guarantees adherence to these axioms has to be equivalent to maximization of a certain function, and therefore the decision maker might well specify the function explicitly.
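A minimal sketch of why the two axioms suffice in the finite case (my own toy code, with a hypothetical preference relation standing in for the decision maker's ranking): set u(a) equal to the number of alternatives that a is at least as good as; completeness and transitivity guarantee that this u represents the preference.

```python
# Build a utility function from a complete, transitive preference on a finite
# set. `rank` is a made-up underlying ranking used only to define the relation.
choices = ["apple", "banana", "cherry"]
rank = {"apple": 2, "banana": 1, "cherry": 3}

def weakly_prefers(a, b):
    """Complete and transitive by construction."""
    return rank[a] >= rank[b]

# u(a) = number of alternatives that a is at least as good as.
u = {a: sum(weakly_prefers(a, b) for b in choices) for a in choices}

# Verify the representation: a is weakly preferred to b iff u(a) >= u(b).
assert all((u[a] >= u[b]) == weakly_prefers(a, b)
           for a in choices for b in choices)
print(u)   # {'apple': 2, 'banana': 1, 'cherry': 3}
```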

2010-10-29

Descriptive and Normative Theories

A great introductory book on decision theory, written by a leading authority, Professor Itzhak Gilboa, came out recently:



Professor Ehud Kalai at Northwestern University describes this book as follows:
"This book is extremely effective for anyone who wants to acquire quick, basic understanding of old and new concepts of decision theory, with a minimum level of technical details."
In the first chapter, the author indeed effectively explains two essential notions in economics, "descriptive" (sometimes called "positive") and "normative" theories:
A descriptive theory is meant to describe reality. For instance, the claim that demand curves slope down attempts to tell us something about the world. Importantly, it does not make a value judgement and takes no stand on whether this feature of the world is good or bad.
A normative theory is a recommendation to decision makers, a suggestion regarding how they should make decisions. For instance, the claim that we should reduce income inequality is a normative claim. Note that the word "normative" does not mean here "the norm in a given society" as it does in other social sciences. The term only says something about the type of interaction between the theorist and the decision maker, namely, that this is an instance in which the former is trying to convince the latter to behave in a certain way.
The author continues to document the role of each theory, which is also very intuitive:
In decision theory it is often the case that a principle can be interpreted either descriptively or normatively. Consider the theory that each economic agent maximizes a utility function. I may propose it as descriptive, namely, arguing that this is a good description of real economic agents. And I may promote it as normative, in which case my claim will be that you would be wise to become such an agent. As a descriptive theory, the principle is tested for its correspondence to reality. The better it fits the data, the more successful it is. As a normative one, the principle should not fit reality. In fact, there is no point in giving decision makers recommendations that they anyway follow. Rather, the test is whether the decision makers would like to follow the principle.

2010-10-19

Shapiro (1983)

Original article (link) posted: 29/09/2005

Shapiro (1983) "Premiums for High Quality Products as Returns to Reputations" QJE 98

Think about a market where producers can change product quality over time and consumers cannot observe quality prior to purchase. Then, what will happen? To answer this question, Shapiro (1983) develops a model that explores the implications of firm-specific reputations in a perfectly competitive environment. One of the most interesting results is that, in equilibrium, firms producing higher-quality products earn larger premiums. The premiums are needed for the following two reasons:
First, there is a cost to establishing a reputation, and to offset this cost a positive return (= premium) is needed. Without a premium, no firm would choose high quality.
Second, in this market, sellers can always increase profits in the short run by reducing the quality of their products (= the "fly-by-night" strategy). To prevent this deviation, a positive return on the faithful path (one that dominates the short-run return from the fly-by-night strategy) is needed.
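In a stripped-down version of the second point (my own simplification, not Shapiro's exact formulation): let m = p - c_H be the per-period premium from faithfully producing high quality at cost c_H, and c_H - c_L the one-shot saving from secretly switching to low quality. With discount factor delta, the faithful path dominates the fly-by-night strategy iff

```latex
\frac{m}{1 - \delta} \;\ge\; m + (c_H - c_L)
\quad\Longleftrightarrow\quad
m \;\ge\; \frac{1 - \delta}{\delta}\,(c_H - c_L),
```

so the premium must be strictly positive whenever cutting quality saves costs.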
Although the above point had already been recognized (Klein and Leffler (1981) explored the idea informally), this is the first paper that models reputation in competitive markets.

Tirole (1988) (Section 2.6.2) provides a simplified version of the Shapiro model: one firm and two quality levels. He also points out two problems with Shapiro's model: the reliance on an infinite horizon and the bootstrap aspect of the equilibria. As is easily seen, in a finite-horizon model only the lowest quality is provided in each period (by backward induction). The bootstrap aspect means that reputation matters only because consumers believe it matters. Indeed, if, for example, consumers believe the firms will produce low quality no matter what the past history, then their expectation would again be fulfilled. In other words, the analysis suggests only that repetition may offer incentives to supply quality, not that it necessarily will.

Note) In the paper, this possibility is excluded because the author imposes a strong assumption about reputation formation: the expected quality of the firm's product at t is simply the quality it chose at t-1, i.e., R(t)=q(t-1). This simple adaptive expectation turns out to be a rational expectation. However, as Tirole mentions, there are other rational-expectations equilibria, and Shapiro (1983) does not mention them.

Finally, notice that Kreps and Wilson (1982) and Milgrom and Roberts (1982) showed that reputation effects can be obtained even with a finite horizon by introducing asymmetric information about the firm's type. Their models also pin down the equilibrium strategy, and high quality is necessarily observed.

References

Klein and Leffler (1981) "The Role of Market Forces in Assuring Contractual Performance" JPE, 89
Kreps and Wilson (1982) "Reputation and Imperfect Information" JET, 27
Milgrom and Roberts (1982) "Predation, Reputation, and Entry Deterrence" JET, 27
Tirole (1988) "The Theory of Industrial Organization" MIT Press

2010-10-14

Game Theory in Finance

What is going on at the frontier of academic research in finance? I found a concise description of the field in this great survey article:
"Finance Applications of Game Theory" by Franklin Allen and Stephen Morris (1998, link)

In Introduction, they say the following:

1. Introduction
Finance is concerned with how the savings of investors are allocated through financial markets and intermediaries to firms, which use them to fund their activities. Finance can be broadly divided into two fields. The first is asset pricing, which is concerned with the decisions of investors. The second is corporate finance, which is concerned with the decisions of firms. Traditional neoclassical economics did not attach much importance to either kind of finance. It was more concerned with the production, pricing and allocation of inputs and outputs and the operation of the markets for these. Models assumed certainty and in this context financial decisions are relatively straightforward. However, even with this simple methodology important concepts such as the time value of money and discounting were developed.
Finance developed as a field in its own right with the introduction of uncertainty into asset pricing and the recognition that classical analysis failed to explain many aspects of corporate finance.

Although the paper was written more than 10 years ago, game-theoretic perspectives in finance have still not become widespread. If you are interested in these materials, you should definitely check it out. Here is the abstract of the paper:
Abstract
Traditional finance theory based on the assumptions of symmetric information and perfect and competitive markets has provided many important insights. These include the Modigliani and Miller Theorems, the CAPM, the Efficient Markets Hypothesis and continuous time finance.
However, many empirical phenomena are difficult to reconcile with this traditional framework. Game theoretic techniques have allowed insights into a number of these. Many puzzles remain. This paper argues that recent advances in game theory concerned with higher order beliefs, informational cascades and heterogeneous prior beliefs have the potential to provide insights into some of these remaining puzzles.

2010-10-08

IO Seminar (Kadyrzhanova)

Original article (link) posted: 28/09/2005

Kadyrzhanova "The Leader-Bias Hypothesis: Monopolization and Industry Structure under Imperfect Corporate Control" Job Market Paper

The paper examines the effect of imperfect corporate control in a dynamic oligopoly with cost-reducing R&D investments. Managers are assumed to have an over-producing incentive (the "empire-building" hypothesis), and hence they do not maximize the firm's short-run profit without shareholder intervention. Corporate control serves to shift managers' preferences from "empire building" to "profit maximization."
The key observation is that shareholders may want to choose imperfect control because of the commitment benefit of over-production derived from the manager's "empire-building" preference. Indeed, she shows that even if corporate control is costless, shareholders do not choose full control. Moreover, it is shown that shareholders are more willing to leave discretionary authority to managers when ahead of rivals, which results in lower turnover, higher concentration, persistently monopolized markets, and significantly lower consumer surplus.

Comments
I found the paper quite interesting. However, she put so many things into her presentation, and the relationships among them were not clear enough for me. I am afraid the audience also got little out of it because the focus of her talk was vague. I think her presentation could have been much better if she had tried the following:
1) Stress and make clear the contribution to the literature
2) Put more intuitive explanation of the main results
3) Be more confident on mathematical parts
4) Mention some actual story in markets or empirical facts as a motivation
5) Explain which element of the model is key to deriving the corresponding result

Interesting Papers in Reference

Athey and Schmutzler (2001) "Investment and Market Dominance" RAND 32 (1): 1-26
Bagwell, Ramey and Spulber (1997) "Dynamic Retail Price and Investment Competition" Rand, 28(2), 207-227
Bolton, Brodley, and Riordan (2000) "Predatory Pricing: Strategic Theory and Legal Policy" Georgetown Law Journal, 88, pp. 2239-2330
Bolton and Scharfstein (1990) "A Theory of Predation Based on Agency Problems in Financial Contracting" American Economic Review 80(1): 93-106
Cabral and Riordan (1994) "The Learning Curve, Market Dominance and Predatory Pricing" Econometrica, 62, pp. 1115-1140
Fershtman and Judd (1987) "Equilibrium Incentives in Oligopoly" American Economic Review, 77(5), 927-940

2010-10-04

ES Monographs Online

If you are a member of the Econometric Society, you can download most (perhaps all) of the ES monograph series published by Cambridge University Press FOR FREE. The series covers representative books in advanced economic theory and econometrics.




The collected papers of the world congresses are also included.


Please access the ES website here and log in. Then you can jump to the publisher's website with an online access code for the book chapters.

2010-09-30

Lecture 3 (Dutta): Dynamic Programming 2

Original article (link) posted: 28/09/2005

In the last class, we saw the following results:
Necessity: the value function V of the dynamic problem is a fixed point of the corresponding Bellman problem, i.e., TV=V.
Sufficiency: if a bounded function U satisfies TU=U in a Bellman problem and there exists a maximizing selection for TU, then U is the value function of the original dynamic problem.

Our focus in Lecture 3 is to prove these results formally and to derive some other properties of the Bellman operator T.

Topics in the class

1) The proof of necessity
2) The proof of sufficiency
Reading Chapter 4 of SLP in advance helped me a lot in understanding the proof. Although Chapter 4 deals with DP under certainty, it has a direct link to the stochastic case, which was covered in the class.

3) Finding a value function
(a) The Bellman operator is a contraction (by Blackwell's sufficient conditions).
(b) The set of bounded and continuous functions with the sup norm is a complete metric space. Completeness is quite important for deriving properties of value functions, because it ensures that the properties hold in the limit (= the value function).
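For reference, Blackwell's sufficient conditions state that an operator T on the space of bounded functions is a contraction of modulus beta in the sup norm if it satisfies monotonicity and discounting:

```latex
\text{(i)}\;\; f \le g \;\Rightarrow\; Tf \le Tg,
\qquad
\text{(ii)}\;\; T(f + a) \;\le\; Tf + \beta a
\quad \text{for all constants } a \ge 0 \text{ and some } \beta \in (0,1).
```

The Bellman operator satisfies both, with beta equal to the discount factor.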

Comments
Blackwell's sufficient conditions for a mapping to be a contraction seem very useful. Professor Dutta mentioned that, despite his seminal work in applied mathematics, the first half of Blackwell's career was not bright because of racial discrimination. (Blackwell was African American.)

2010-09-26

Agenda for Development Economics

The latest volume of the Journal of Economic Perspectives (Vol. 24, No. 3, Summer 2010, link) puts together a symposium on development economics. As you can see below, the collection of articles looks really amazing!

Symposium: The Agenda for Development Economics

"Understanding the Mechanisms of Economic Development" (pp. 3-16)
Angus Deaton

"Theory, General Equilibrium, and Political Economy in Development Economics" (pp. 17-32)
Daron Acemoglu

"Diagnostics before Prescription" (pp. 33-44)
Dani Rodrik

"Uneven Growth: A Framework for Research in Development Economics" (pp. 45-60)
Debraj Ray

"Giving Credit Where It Is Due" (pp. 61-80)
Abhijit V. Banerjee and Esther Duflo

"Microeconomic Approaches to Development: Schooling, Learning, and Growth" (pp. 81-96)
Mark R. Rosenzweig

For those interested in education, the other symposium, No Child Left Behind, might also be helpful.

2010-09-23

Japanese style wins the day

Congratulations, Nadeshiko! The Japanese women's team has advanced to the final of the FIFA U-17 World Cup:
Japanese style wins the day: Japan dragged themselves back from a goal down to beat defending champions Korea DPR in the second semi-final from the Ato Boldon Stadium in Couva on Tuesday. Winning out 2-1 and putting on a show of assorted flicks and tricks for the fans in attendance, the stylish East Asians – who scored both of their goals in the space of two second-half minutes – will now meet Korea Republic in the final of the FIFA U-17 Women’s World Cup Trinidad and Tobago 2010. (see more from FIFA's website)

Japan's second goal by Kumi YOKOYAMA is really amazing! She was like Maradona or Messi.

2010-09-20

Central Themes on Microeconomics

This is a relatively new textbook on intermediate microeconomics written by two leading scholars in microeconomics/game theory, Douglas Bernheim and Michael Whinston (see the publisher's link for further information):


In the introduction, the authors mention the following eight themes of microeconomics. Four relate to individual decision making and the other four to competitive markets; these might be helpful for instructors who teach microeconomics.
Central Themes on Decision Making: 
1. Trade-offs are unavoidable
2. Good choices are usually made at the margin
3. People respond to incentives
4. Prices provide incentives

Central Themes on Markets:
5. Trade can benefit everyone
6. The competitive market price reflects both value to consumers and cost to producers
7. Compared to other methods of resource allocation, markets have advantages
8. Sometimes, government policy can improve on free-market resource allocations
These themes may look a bit too classical, but the book itself contains new topics such as behavioral economics, strategic interactions (with many examples), and competition policy in oligopoly markets, which reflect the authors' expertise and give the book an advantage over rival textbooks.

2010-09-16

Risk and Liquidity

Hyun Song Shin (Hughes-Rogers Professor of Economics at Princeton University) recently published his awaited book on financial crises, titled "Risk and Liquidity":


The book is based on his 2008 Clarendon Lectures (in Finance). The table of contents and a few sample sections are available from his website (link). Let me quote the comments by Roger Myerson and Franklin Allen, which seem to illuminate the features and importance of this outstanding book very well.
In the Great Recession, the world has looked for leading economists to offer a new and better understanding of macroeconomic instability, as Keynes did in the Great Depression. In this book, Hyun Song Shin delivers what was needed. Step by step, he develops a new comprehensive understanding of how macroeconomic booms and busts can be derived from microeconomic forces in the banking system. This book should be recognized as a major contribution to macroeconomic theory.
Roger Myerson, Nobel Laureate in Economics 2007, University of Chicago

Hyun Song Shin is one of the leading scholars on financial stability in the world. His experience in this field is not confined to his academic work. He has also advised the President of Korea, the Bank of England and many other institutions on these issues. The recent crisis has underlined how important it is to understand the boom-bust cycle. During the boom, asset prices rise; this allows financial institutions to borrow and expand their balance sheets and drive prices up more. Similarly, in the bust part of the cycle they reduce their debt, their balance sheets shrink, they sell assets and prices fall more. Hyun Song Shin has done the foundational theoretical and empirical research on this leveraging and deleveraging amplification mechanism. This book provides a very accessible summary of this work. It is essential reading for all academics and practitioners interested in financial crises.
Franklin Allen, The Wharton School of the University of Pennsylvania

2010-09-13

Kandori (RES 1992)

Original article (link) posted: 26/09/2005

Kandori (1992) "Social Norms and Community Enforcement" RES, 59

The paper considers self-enforcing mechanisms in the situation where agents change their partners over time. Technically, the model in the paper is an extension of the theory of repeated games to the case of matching games. As main results, the following two are shown.

1) Community can sustain cooperation even when each agent knows nothing more than his personal experience.
2) Community can realize any mutually beneficial outcome when each agent carries a label that is revised in a systematic way. That is, the Folk Theorem holds.

As a motivation for the research, he mentions the benefit of specialization. After introducing the theoretical achievements on personal enforcement mechanisms, i.e., the Folk Theorem in the repeated-game literature, he says the following.

However, many important transactions are infrequent in nature. As economic historians argue, the division of labor and specialization are important driving forces of economic progress. Potential gains are larger in diverse transactions with different specialists than with fixed partners. Therefore, the control of incentives in such an infrequent trade is of vital importance to understand the organization of economic transactions.

He refers to two papers which initiated this line of research.

The attempt to generalize the Folk Theorem of repeated games to the case of matching games was initiated by Milgrom, North and Weingast (1990) and Okuno-Fujiwara and Postlewaite (1989). The former analyzed concrete examples of information transmission mechanisms and the latter introduced the notion of local information processing. Both of them, however, mainly deal with the infinite population case to avoid potentially complicated problems of incentives on off-equilibrium paths. Our paper shows that such problems can be resolved in a simple way if the stage game satisfies a weak condition. Equilibria constructed in our paper work for any population size and any matching rule, and are robust to changes in information structures.

What a strong result he derived!! Although he does not stress the results given in Section 3, "Folk Theorem under Public Observability," I think Proposition 2 is very interesting. It is easy to understand that the Folk Theorem holds if all the other players move into a punishment phase after some player deviates, which is stated as Proposition 1. However, if we restrict attention to situations where only the deviator is to be punished and innocent pairs are to play the originally prescribed actions, showing the Folk Theorem is not straightforward. To be more precise, what is involved is checking the incentives of innocent players on off-equilibrium paths where the community is highly populated with "guilty" agents.
By introducing some "forgiveness" into the social norm, the author elegantly shows that this problem can be avoided, which leads to Proposition 2.

Interesting Papers in References

Harrington (1989) "Cooperation in Social Settings" mimeo
Section 7 of the above paper was revised and is available as the following.
Harrington (1995) "Cooperation in a One-Shot Prisoners' Dilemma" GEB, 8
Milgrom, North and Weingast (1990) "The Role of Institutions in the Revival of Trade: The Law Merchant, Private Judges, and the Champagne Fairs" Economics and Politics, 2
Okuno-Fujiwara and Postlewaite (1995) "Social Norms and Random Matching Games" GEB, 9
Rubinstein and Wolinsky (1990) "Decentralized Trading, Strategic Behavior and the Walrasian Outcome" RES, 57

2010-09-10

Complementarity and supermodularity

I found a nice summary of key concepts in game theory, complementarity and supermodularity, which are especially important for auction and matching theory.


"Supermodularity and supermodular games" byXavier Vives
in the new palgrave dictionary of economics:


The passage below is quoted from Vives's survey.
The basic idea of complementarity is that the marginal value of an action increases with the level of other actions available. The mathematical concept of supermodularity formalizes the idea of complementarity. The theory of monotone comparative statics and supermodular games provides the toolbox to deal with complementarities.

This theory, in contrast to classical convex analysis, is based on order and monotonicity properties on lattices. Monotone comparative statics analysis provides conditions under which optimal solutions to optimization problems change monotonically with a parameter.

The theory of supermodular games exploits order properties to ensure that the best response of a player to the actions of rivals increases with their level. The power of the approach is that it clarifies the drivers of comparative statics results and the need of regularity conditions; it allows very general strategy spaces, including indivisibilities and functional spaces such as those arising in dynamic or Bayesian games; it establishes the existence of equilibrium in pure strategies; it allows a global analysis of the equilibrium set when there are multiple equilibria, which has an order structure with largest and smallest elements; and finally, it finds that those extremal equilibria have strong stability properties and there is an algorithm to compute them.
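The last point of the quote, that extremal equilibria can be computed by an algorithm, is easy to illustrate (a toy symmetric game of my own construction, not from the survey): iterate best responses from the largest strategy profile downward; in a supermodular game this monotone sequence converges to the largest pure-strategy equilibrium, and the mirror-image iteration from the smallest profile finds the smallest one.

```python
import numpy as np

K = 5                              # strategies are the integers 0..K
grid = np.arange(K + 1)

def payoff(x, y):
    """Toy payoff with increasing differences: cross-partial of x*y is 1 > 0."""
    return x * y - x ** 2

def best_response(y, largest=True):
    vals = payoff(grid, y)
    ties = np.flatnonzero(vals == vals.max())
    return ties[-1] if largest else ties[0]    # extremal selection

def iterate(start, largest):
    x = y = start
    while True:
        x_new, y_new = best_response(y, largest), best_response(x, largest)
        if (x_new, y_new) == (x, y):
            return x, y
        x, y = x_new, y_new

print("largest equilibrium :", iterate(K, largest=True))    # (1, 1)
print("smallest equilibrium:", iterate(0, largest=False))   # (0, 0)
```

Starting from (5, 5), the best-response sequence decreases monotonically, (5,5) to (3,3) to (2,2) to (1,1), and stops at the largest equilibrium.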

2010-09-04

Theory Seminar (Gilboa, Lieberman and Schmeidler)

Original article (link) posted: 24/09/2005

Gilboa, Lieberman and Schmeidler "Empirical Similarity"

Although the above paper was the scheduled topic, Professor Gilboa mainly explained the following paper.

Billot, Gilboa, Samet and Schmeidler (2005) "Probabilities as similarity-weighted frequencies" Econometrica, 73

These two papers consider a decision rule for a decision maker who has data on past outcomes and is asked to express her beliefs by assigning probabilities to certain possible states. Even when the database is large, the empirical frequency alone may not help her make a decision at all. Instead, she may assign higher weights to more similar cases in evaluating the probability of a state.
Billot et al. show that if beliefs given the union of two databases are a convex combination of beliefs given each of the databases, the belief formation process follows a simple formula: beliefs are a similarity-weighted average of the beliefs induced by each past case. However, their axiomatization does not suggest a particular similarity function, or even a particular form for the function. Gilboa et al. develop tools of statistical inference for parametric estimation of the similarity function, assuming that such a function governs the data-generating process.
Notice that the axioms in these papers cannot be consistent with situations where the range of beliefs becomes smaller as the number of observations increases, or where the decision maker cares about a trend in outcomes.
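The formula they axiomatize takes roughly the following form (reproduced from memory, so treat the notation as approximate): given past cases with characteristics x_i and outcomes y_i, the probability assigned to a state omega is the similarity-weighted frequency

```latex
p(\omega) \;=\; \frac{\sum_i s(x_i)\, \mathbf{1}\{ y_i = \omega \}}{\sum_i s(x_i)},
```

where s is the similarity function; the ordinary empirical frequency is the special case in which s is constant.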

The presentation by Professor Gilboa was very clear, and he was quite good at using PowerPoint (!!). But the material was difficult to understand. I might need to study decision theory at least a little bit... (It might be good to read his book "A Theory of Case-Based Decisions". The Japanese translation is also available.)

2010-08-31

Aumann's View on Science and Game Theory

I was impressed by Robert Aumann when I met him at the game theory conference in Brazil (link). And after coming back to Japan, I was impressed again to see what he has written in the preface to the volume of his "Collected Papers." His view on science and game theory is very close to mine (perhaps I have been unconsciously affected by his view, or by similar ideas spread among theorists).

[A]ll the papers in the collection concern game theory, its applications and its tools. Beyond the subject matter, they also share a common methodological theme: they deal with relationships. Science is often characterized as a quest for truth, where truth is something absolute, which exists outside of the observer. But I view science more as a quest for understanding, where the understanding is that of the observer, the scientist. Such understanding is best gained by studying relations - relations between different ideas, relations between different phenomena, relations between ideas and phenomena.
(...)
Indeed, the idea of relationship is fundamental to game theory. Disciplines like economics or political science use disparate models to analyze monopoly, oligopoly, perfect competition, public goods, elections, coalition formation, and so on. In contrast, game theory uses the same tools in all these applications. (...) Perhaps the most exciting advance in game theory in recent years has been the connection with evolution: The realization that when properly interpreted, the fundamental notion of Nash equilibrium, which a priori reflects the behavior of consciously maximizing agents, is the same as an equilibrium of populations that reproduce blindly without regard to maximizing anything.
Aumann's foreword to Two-Sided Matching: A Study in Game-Theoretic Modeling and Analysis by Roth and Sotomayor (1990), a book he describes as chronicling one of the outstanding success stories of the theory of games, is also insightful. I again share his view on evaluating the good "matching" of theory and practice.

The theoretical part of the story begins in 1962, with the publication of the famous Gale-Shapley paper, "College Admissions and the Stability of Marriage." Since then, a large theoretical literature has grown from this paper, which is thoroughly covered in this book. But the most dramatic development came in 1984, when Roth published his discovery that the Gale-Shapley algorithm had in fact been in practical use already since 1951 for the assignment of interns to hospitals in the United States; it had evolved by a trial-and-error process that spanned more than half a century.

2010-08-26

Lecture 2 (Dutta): Dynamic Programming 1

Original article (link) posted: 22/09/2005

Topics in the class

1) Conditional Probability and Feller Property
(a) The definitions of a transition function: q and the conditional expectation operator: T.
(b) The properties of Tg (for g a measurable function): measurable, non-negative (if g is non-negative), and bounded (if g is bounded).
(c) Feller Property.

2) Dynamic Programming Set Up
(d) The def. of a reward function and a feasibility correspondence.
(e) Examples: neo-classical growth model, portfolio choice, capital accumulation, search, and price with inertia (menu cost).
(f) The def. of history and policy (action).
(g) Setting up optimization problem and value function.

3) Bellman (Optimality) Equation
(h) Necessity: If the value function V is measurable, then TV=V.
(i) Sufficiency: If the bounded and measurable function U solves U=TU, then U is larger than or equal to V. Additionally, if there is a selection from the optimality equation, then U=V. Note) A selection from TU=U is a stationary Markovian policy that solves TU=U.
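In this notation, the Bellman operator of item (h) takes the standard form (matching the setup above):

```latex
(TU)(s) \;=\; \sup_{a \in \Phi(s)} \Big[\, r(s, a) + \delta \int U(s')\, q(ds' \mid s, a) \Big],
```

where r is the reward function, \Phi the feasibility correspondence, \delta the discount factor, and q the transition function; the optimality equation is the fixed-point condition TV = V.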

Comments

Basic concepts in measure theory, such as sigma-algebras and measurability, are presupposed. I should check what the Feller property exactly means. (I'm not sure if it is just a definition or comes with necessary and sufficient conditions.)

2010-08-22

Historical drama on the creation of game theory

I just came back to Tokyo after attending the SAET conference (link) in Singapore and the World Congress of the Econometric Society (link) in Shanghai, China. It was a great experience to visit two Asian countries I had never been to before. I really enjoyed the conferences, meeting lots of people: friends, co-authors, teachers, and big names.

Now it's time to go back to my work! Many thanks to everyone I met there. I hope all of you have a productive academic year starting from September :)

The short article below is what I had prepared before I left Japan:

A new book on the early history of game theory came out recently. It focuses on the two founding fathers, Von Neumann and Morgenstern.

"Von Neumann, Morgenstern, and the Creation of Game Theory: From Chess to Social Science, 1900–1960" by Robert Leonard, Cambridge University Press (link)

In a reviewer's comment, Harold W. Kuhn, a professor emeritus of mathematics at Princeton University, describes the book as follows:
The publication of The Theory of Games and Economic Behavior by John von Neumann and Oskar Morgenstern in 1944 was hailed by one reviewer as 'one of the major scientific achievements of the first half of the twentieth century.' Another reviewer signaled that 'the techniques applied by the authors in tackling economic problems are of sufficient generality to be valid in political science, sociology, or even military strategy.' In this exemplary study in the history of economics, Robert Leonard has given us a masterful account of the gestation of this work, starting with the importance of chess in European intellectual life at the beginning of the twentieth century and ending with the military applications of game theory at the RAND Corporation during the middle of the century.
The predecessor of the work is the author's 1995 article in the Journal of Economic Literature, which won the Best Article Award of the History of Economics Society.

Robert Leonard (1995) "From Parlor Games to Social Science: von Neumann, Morgenstern, and the Creation of Game Theory 1928-1944" Journal of Economic Literature (link)

2010-08-11

Bayesian Games

Original article (link) posted: 22/09/2005

The following is a memo about Bayesian games. All the sentences are quoted from Myerson (1991) "Game Theory" (Sections 2.8 and 2.9).

Background
A game with incomplete information is a game in which, at the first point in time when the players can begin to plan their moves in the game, some players already have private information about the game that other players do not know.
The initial private information that a player has at this point in time is called the type of the player.
Harsanyi (1967-68) argued that a generalization of the strategic form, called the Bayesian form, is needed to represent games with incomplete information.

Consistent model
Most of the Bayesian games that have been studied in applied game theory have beliefs that are consistent with a common prior. One reason for this tendency to use consistent models is that consistency simplifies the definition of the model. Furthermore, inconsistency often seems like a strikingly unnatural feature of a model. In a consistent model, differences in beliefs among players can be explained by differences in information, whereas inconsistent beliefs involve differences of opinion that cannot be derived from any differences in observations and must be simply assumed a priori.

Agreeing to disagree
Suppose, in a sports match, that it is common knowledge among the coaches of two teams that each believes his own team has a 2/3 probability of winning its next game against the other; then the coaches' beliefs cannot be consistent with a common prior. In a consistent model, it can happen that each coach believes that his team has a 2/3 probability of winning, but this difference of beliefs cannot be common knowledge among the coaches. (see Aumann, 1976)
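Just for fun, here is the coaches example in Python (my own toy numbers, not Myerson's). Under a common prior, both coaches can believe 2/3 at the true state; the code then checks that the event "both believe 2/3" does not contain the relevant cell of the meet of the two partitions, so it is not common knowledge.

from fractions import Fraction

# States 1-4 with a common prior; W_A = "team A wins" = {1}, W_B otherwise.
prior = {1: Fraction(2, 8), 2: Fraction(1, 8), 3: Fraction(4, 8), 4: Fraction(1, 8)}
W_A, W_B = {1}, {2, 3, 4}
part_A = [{1, 2}, {3, 4}]   # coach A's information partition
part_B = [{1, 3}, {2, 4}]   # coach B's information partition

def cell(partition, w):
    return next(c for c in partition if w in c)

def posterior(event, c):
    """P(event | cell c) under the common prior."""
    return sum(prior[w] for w in c & event) / sum(prior[w] for w in c)

# At the true state 1, each coach assigns 2/3 to his own team winning:
print(posterior(W_A, cell(part_A, 1)), posterior(W_B, cell(part_B, 1)))  # 2/3 2/3

# The event "both coaches currently believe 2/3"...
E = {w for w in prior
     if posterior(W_A, cell(part_A, w)) == Fraction(2, 3)
     and posterior(W_B, cell(part_B, w)) == Fraction(2, 3)}

def meet_cell(partitions, w):
    """Cell of w in the meet (finest common coarsening): close {w} under
    unions with every overlapping partition cell."""
    m, changed = {w}, True
    while changed:
        changed = False
        for p in partitions:
            for c in p:
                if m & c and not c <= m:
                    m |= c
                    changed = True
    return m

# ...is common knowledge at state 1 only if it contains the whole meet cell:
print(E, meet_cell([part_A, part_B], 1), E >= meet_cell([part_A, part_B], 1))
# E = {1}, meet cell = {1, 2, 3, 4}: the beliefs hold but are NOT common knowledge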

Are Bayesian games general enough?
To describe a situation in which many individuals have substantial uncertainty about one another's information and beliefs, we may have to develop a very complicated Bayesian-game model with large type sets and assume that this model is common knowledge among the players. This raises the question: is it possible to construct a situation for which there are no sets of types large enough to contain all the private information that players are supposed to have, so that no Bayesian game could represent this situation?
Mertens and Zamir (1985) showed, under some technical assumptions, that no such counterexample to the generality of the Bayesian game model can be constructed, because a universal belief space can be constructed that is always big enough to serve as the set of types for each player.
Although constructing an accurate model for any given situation may be extremely difficult, we can at least be confident that no one will ever be able to prove that some specific conflict situation cannot be described by any sufficiently complicated Bayesian game.

References

Aumann (1976) "Agreeing to Disagree" Annals of Statistics, 4
Harsanyi (1967-68) "Games with Incomplete Information Played by 'Bayesian' Players" Management Science, 14
Mertens and Zamir (1985) "Formulation of Bayesian Analysis for Games with Incomplete Information" IJGT, 14

2010-08-07

Quantum game theory

My friends working at the University of Tokyo recently published an article about quantum game theory in the Journal of Physics.
Yohei Sekiguchi, Kiri Sakahara, and Takashi Sato (2010), "Uniqueness of Nash equilibria in a quantum Cournot duopoly game," Journal of Physics A: Mathematical and Theoretical, Volume 43, Number 14
Here is a link to the article. Congratulations!

Unfortunately, I don't know anything about quantum game theory. Wikipedia (link) explains it as follows:
Quantum game theory is an extension of classical game theory to the quantum domain. It differs from classical game theory in three primary ways:
  1. Superposed initial states,
  2. Quantum entanglement of initial states,
  3. Superposition of strategies to be used on the initial states.
Anyways, it is surprising that game theorists publish their papers in physics journals. Hmm, quantum game theory might be worth trying to study...


During the conference in Brazil (link), I had a chance to attend a presentation by a physicist, though its topic was not quantum game theory:

"Distinguishing the Opponents: Mutual Cooperation is Never Destroyed"
by Lucas Lages Wardil (Universidade Federal de Minas Gerais)
The paper investigates evolutionary dynamics on networks. Unlike previous works, he assumes that each agent can take a contingent action, i.e., a strategy, rather than the unconditional action that has been commonly assumed in the literature. That is, depending on whom she plays with, an agent can take different actions; through a certain updating process, each agent changes her (contingent) action against a specific opponent. In this framework with extended agent types, he shows that cooperation (in the prisoner's dilemma) becomes easier to sustain under certain networks and imitation dynamics.
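Out of curiosity, I coded up a crude toy version of this logic in Python (my own update rule and parameters, which may well differ from his): actions are edge-specific, and on each edge an agent imitates the opponent's action whenever the opponent earned strictly more on that edge. Mutually cooperating edges pay (3, 3) to both sides, so no imitation ever happens there: mutual cooperation is never destroyed.

import random

# Prisoner's dilemma payoffs (hypothetical): payoff[(mine, opponent's)]
payoff = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

random.seed(1)
n = 10
neigh = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}   # agents on a ring
# edge-specific ("contingent") actions: act[i][j] is what i plays against j
act = {i: {j: random.choice('CD') for j in neigh[i]} for i in range(n)}

for step in range(20):
    for i in range(n):
        for j in neigh[i]:
            # pairwise imitation (asynchronous): copy the opponent's action on
            # this edge if the opponent earned strictly more on this edge
            if payoff[(act[j][i], act[i][j])] > payoff[(act[i][j], act[j][i])]:
                act[i][j] = act[j][i]

# C-C edges are locked in (both earn 3, no imitation); mixed edges drift to D-D
for i in range(n):
    j = (i + 1) % n
    print(i, j, act[i][j] + act[j][i])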

In evolutionary biology, where this kind of research is widely pursued, it is unrealistic to regard contingent actions as an agent's type, because a type is considered to be genetic. In economics, we usually examine contingent action plans in rational frameworks, but they are uncommon in boundedly rational frameworks such as evolutionary games. The reason (I guess) is that it is difficult to argue why and how an agent with such a complicated action plan follows an irrational/heuristic adjustment process to update her behavior.

Anyways, I found this paper by a physicist very interesting. In some sense, his research connects biology and economics (although further justification/interpretation seems necessary to apply his models to these fields). There might be many things that we economists can learn from physicists.

2010-08-02

Reny's new existence theorem

In the plenary session on the third day of BWGT conference (link), Professor Philip Reny talked about his new research on monotone pure strategy equilibria:
Title: On the Existence of Monotone Pure Strategy Equilibria in Bayesian Games (link to pdf)
Abstract: We generalize Athey's (2001) and McAdams' (2003) results on the existence of monotone pure strategy equilibria in Bayesian games. We allow action spaces to be compact locally-complete metrizable semilattices and type spaces to be partially ordered probability spaces. Our proof is based upon contractibility rather than convexity of best reply sets. Several examples illustrate the scope of the result, including new applications to multi-unit auctions with risk-averse bidders.
According to Prof. Reny, while the topic of the paper is related to many fields such as mathematical economics, mechanism design, and auctions, there are two seminal papers that strongly motivated his research. Athey (2001) first established sufficient conditions that guarantee the existence of monotone pure strategy equilibria in Bayesian games with one-dimensional, totally ordered type and action spaces. The key condition is a Spence-Mirrlees single-crossing property. McAdams (2003) extends Athey's analysis to multi-dimensional, partially ordered spaces.

Prof. Reny succeeded in deriving weaker conditions than McAdams's for Bayesian games with multi-dimensional strategy spaces, and also extended the analysis to infinite type and action spaces. The key insight is to use a fixed point theorem by Eilenberg and Montgomery (1946) instead of Kakutani's (used by Athey) or Glicksberg's (used by McAdams). The latter two theorems require best reply sets to be convex, while the former requires only contractibility, which turns out to be (almost) automatically satisfied in Bayesian games.
His main result says the following:
Theorem: (Under some conditions) If, whenever the other players employ monotone pure strategies, each player's set of monotone pure-strategy best replies is nonempty and join-closed, then a monotone pure strategy equilibrium exists.
Note that a subset of strategies is join-closed if the pointwise supremum of any pair of strategies in the set is also in the set.
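A tiny sanity check of the join operation in Python (my own toy, with a one-dimensional type grid): the pointwise supremum of two monotone strategies is again monotone.

import numpy as np

types = np.linspace(0, 1, 11)            # an ordered grid of types
s1 = np.where(types < 0.5, 0.0, 1.0)     # a monotone step strategy
s2 = np.minimum(2 * types, 1.0)          # another monotone strategy
join = np.maximum(s1, s2)                # their join: the pointwise supremum

monotone = lambda s: bool(np.all(np.diff(s) >= 0))
print(monotone(s1), monotone(s2), monotone(join))   # True True True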

The idea of join-closedness recently showed up (in a different context, though) when I discussed my joint work on the structure of stable matchings with co-authors. It may have some connection...

References
Susan Athey (2001), "Single Crossing Properties and the Existence of Pure Strategy Equilibria in Games of Incomplete Information," Econometrica, Vol. 69: 861-889.
Samuel Eilenberg and Deane Montgomery (1946), "Fixed Point Theorems for Multi-Valued Transformations," American Journal of Mathematics, Vol. 68: 214-222.
David McAdams (2003), "Isotone Equilibrium in Games of Incomplete Information," Econometrica, Vol. 71:1191-1214.

2010-07-28

AER (June 2010)

The latest volume of the American Economic Review (link to the table of contents) just arrived by mail. One may get interested in the lead-off articles written by the 2009 Nobel laureates in Economic Sciences:
"Beyond Markets and States: Polycentric Governance of Complex Economic Systems" by Elinor Ostrom
"Transaction Cost Economics: The Natural Progression" by Oliver E. Williamson
Other articles that attracted my interest are the following:
  • "A Study of the Internal Organization of a Bidding Cartel" by John Asker
  • "Bargaining with Arrival of New Traders" by William Fuchs and Andrzej Skrzypacz
  • "The Burden of the Nondiversifiable Risk of Entrepreneurship" by Robert E. Hall and Susan E. Woodward
  • "What Causes Industry Agglomeration? Evidence from Coagglomeration Patterns" by Glenn Ellison, Edward L. Glaeser and William R. Kerr
I downloaded the pdf files of the above articles, planning to read them on the way to Brazil tomorrow, where I will attend a great game theory conference: the Second Brazilian Workshop of the Game Theory Society (BWGT2010, link).

2010-07-21

Great statistician passed away

I heard sad news: Eminent statistician David Blackwell has died at 91

Professor Blackwell, though a mathematician and statistician, is quite famous among us economists for his significant contributions to mathematical statistics and dynamic programming.
Here is a link to the wikipedia article about him.

He has written many influential papers such as:
  • "Conditional Expectation and Unbiased Sequential Estimation," Annals of Mathematical Statistics, Vol. 18, No. 1 (Mar., 1947), pp. 105-110
  • "Equivalent Comparisons of Experiments," Annals of Mathematical Statistics, Vol. 24, No. 2 (Jun., 1953), pp. 265-272  
  • "Discounted Dynamic Programming," Annals of Mathematical Statistics, Vol. 36, No. 1 (Feb., 1965), pp. 226-235

Interestingly, despite his deep thinking and ingenious work on each research topic, he described himself as follows:
I've worked in so many areas—I'm sort of a dilettante. Basically, I'm not interested in doing research and I never have been.

May he rest in peace.

2010-07-18

IO Seminar (Daughety and Reinganum)

Original article (link) posted: 21/09/2005

Daughety and Reinganum "Imperfect Competition and Quality Signaling"

The paper investigates a one-shot oligopoly model in which firms produce substitute products with an associated vertical quality measure. Each firm has private information about its quality (its "type"), and the model captures signaling effects through pricing. Their main focus is the comparison between separating equilibria under incomplete information (about vertical quality) and the complete-information benchmark.
As main results, they show the following:

1) incomplete information (signaled via prices) softens price competition, and imperfect competition can reduce the degree to which firms distort their prices to signal their types
2) low-quality firms always prefer playing the incomplete information game to the full-information analog
3) if the proportion of high-quality firms is great enough, they also prefer incomplete information to full-information

It is very difficult for me to judge their contribution to this field because there are so many papers in the literature, and some of them look quite similar to this paper, at least to those who are not familiar with this line of research. The model in the paper tries to capture both the effects of incomplete information and imperfect competition, which makes it very complicated. Although the main results mentioned above sound interesting, I suspect similar qualitative results can be derived in a simpler model. For example, the comparison between a separating equilibrium and a pooling one in a simple Spence-type signaling model has implications quite similar to (2) and (3). Of course, beyond results (1)-(3), they show a bunch of comparative statics, some of which are interesting and not obvious. Still, they should explain why such a complicated model is needed when most of the results are already known and hence not surprising.
The literature review (Section 2) is comprehensive. So, if you are interested in the paper, it might be better to read some key references before tackling this paper in depth.
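Indeed, the upward price distortion is easy to compute in the simplest Spence-style price-signaling toy (my own numbers, nothing like their oligopoly model): the high-quality firm must raise its price above the full-information monopoly level until mimicry becomes unprofitable for the low-quality firm.

import numpy as np

# Hypothetical linear-demand toy: demand q = theta - p, where theta reflects
# believed quality; the high-quality firm has unit cost c_H, the low c_L < c_H.
theta_H, theta_L, c_H, c_L = 10.0, 6.0, 2.0, 0.0

def profit(p, theta, c):
    return (p - c) * max(theta - p, 0.0)

p_full = (theta_H + c_H) / 2                             # full-info monopoly price
pi_L_full = profit((theta_L + c_L) / 2, theta_L, c_L)    # low type's honest profit

# Least-cost separating price: smallest p >= p_full at which mimicking the
# high type is no longer profitable for the low type.
grid = np.linspace(p_full, theta_H, 100001)
p_sep = min(p for p in grid if profit(p, theta_H, c_L) <= pi_L_full)

print(p_full, p_sep)   # 6.0 vs ~9.0: the price is distorted upward to signal
print(profit(p_full, theta_H, c_H), profit(p_sep, theta_H, c_H))  # 16.0 vs ~7.0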

Interesting papers in References

Bagwell and Riordan (1991) "High and Declining Prices Signal Product Quality" AER, 81
Mailath (1989) "Simultaneous Signaling in an Oligopoly Model" QJE, 104
Martin (1995) "Oligopoly Limit Pricing: Strategic Substitutes, Strategic Complements" IJIO, 13

2010-07-15

Grossman (1989)

Original article (link) posted: 20/09/2005

I was taking a look at Sanford Grossman's book "The Informational Role of Prices". This is a collection of his articles focusing on the informational role of prices and of contracts. All the articles except the first were reprinted from journals. Here is a memo on chapter one, the introduction of the book.

The main theme of the book is to propose models that incorporate two aspects of the role of prices at the same time. He says:

I have elaborated a model of economic equilibrium that is based upon the idea that prices have a dual role: They constrain behavior by affecting the costs or benefits of acts, but they also convey information about what will be the costs and benefits of the acts.

In the framework of Marshall or Walras, people are merely constrained by prices, and so no one learns anything from prices. That is, their models cannot capture the second role of prices. To be more precise, in the Walrasian model, the demand function specifies a desired level of holdings of the security at each particular price, irrespective of whether that price is a market clearing price. However, a trader might be induced to adjust his "demand" function to reflect the fact that the price at which the market clears conveys information. To take this informational role into account, the author assumes that the consumer faces a price that is a real offer of another person, or the outcome of a market process.
In short, his idea is to redefine the "demand" as an expression of desired holdings at prices that are "market clearing", i.e., each consumer chooses his demand at "p" to maximize his expected utility conditioned on both his private information and on the information contained in the event that "p" is a market clearing price. As a consequence, there is no desire to recontract after observing that a particular price is the market clearing price because each person has already incorporated it.
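A stylized numerical toy (my own, far simpler than the models in the book) may make the point concrete: with naive Walrasian demands the market-clearing price differs across states and thus reveals the asset value, which is exactly the information the uninformed were assumed to ignore; redefining their demand to condition on the market-clearing event restores consistency.

from fractions import Fraction

# Two equally likely asset values and a fixed supply (all numbers hypothetical).
values = [Fraction(2), Fraction(4)]
supply = Fraction(2)
Ev = sum(values) / 2

# Walrasian (naive) uninformed demand ignores the information in p: E[v] - p.
for v in values:
    # informed demand v - p; clearing: (v - p) + (Ev - p) = supply
    p = (v + Ev - supply) / 2
    print(f"naive price when v={v}: {p}")   # 3/2 and 5/2: p reveals v!

# Since naive prices differ across states, p fully reveals v, so a sophisticated
# uninformed trader conditions on the market-clearing event and demands
# v(p) - p, inverting the conjectured price function. Clearing then gives
# 2*(v - p) = supply, and the conjecture is self-fulfilling: no one wants to
# recontract at these prices.
for v in values:
    p = v - supply / 2
    print(f"REE price when v={v}: {p}")     # 1 and 3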
In the rest of chapter 1, the author introduces the concept of "uninformed traders" or "noise traders" who demand a security for noninformational reasons. Incorporating them into his models, he tries to explain the stock market crash of 1987. Interestingly, he claims this event supports his models with rational agents rather than irrational behavior of traders, as described in the following passage.

Some have suggested that this is strong evidence against investor rationality, and point to the October 1987 episode as another example of irrational behavior. In contrast, I think that these events and the excessive volatility of stock prices relative to the volatility of expected payoffs are evidence in favor of the type of "rationality" embodied in the R.E. approach outlined above, rather than evidence for irrationality. As I argue below, once the Walrasian notion of demand is eliminated, the volatility phenomena can be seen as an expression of a sophisticated trading strategy rather than irrationality.

Interesting papers in References

Grossman and Stiglitz (1976) "Information and Competitive Price Systems" AER, 66-2
Kreps (1988) "In Honor of Sandy Grossman, Winner of the John Bates Clark Medal" J. of Economic Perspectives, 2-2
Kyle (1985) "Continuous Auctions and Insider Trading" Econometrica, 53
Roll (1984) "Orange Juice and Weather" AER, 74

The following book might be useful for those who are interested in the line of research mentioned above.

Brunnermeier (2001) "Asset Pricing under Asymmetric Information" Oxford University Press

2010-07-11

IO Seminar (Rob and Fishman)

Original article (link) posted: 14/09/2005

Rob and Fishman "Is Bigger Better? Customer base expansion through word of mouth reputation" forthcoming in JPE

The paper develops a modeling framework in which a firm regards its reputation as a capital asset whose value is maintained through a process of active and continuous investment. Firms must invest in each period to produce high-quality products. The quality of the product is only known to a consumer who buys it from the firm (an experience-good assumption), and she will pass this information on to a new consumer with some probability. This information spread captures "word-of-mouth reputation".

Their main finding is that investment in quality is positively related to the size of the customer base, defined as the number of consumers who are aware of the firm's reputation. This is because reputation is costly to acquire and takes a long time to regain once it has been lost; hence, a good reputation is more valuable to a firm the larger its customer base is. The model predicts that the larger a firm is, the more it invests in quality and the higher the average quality it delivers.
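A drastically simplified calculation (my own toy, not their model: there the customer base evolves gradually via referrals) captures the flavor of this mechanism. A firm with base n earns margin m per customer per period while its reputation is good, pays k per period to maintain quality, discounts at delta, and, for simplicity, loses its base forever after one bad-quality period.

delta, m, k = 0.9, 1.0, 20.0   # hypothetical parameters

def invests(n):
    keep = (n * m - k) / (1 - delta)   # value of maintaining the reputation
    milk = n * m                        # shirk once, then the base is gone
    return keep >= milk                 # reduces to: delta * n * m >= k

for n in [10, 20, 25, 50]:
    print(n, invests(n))   # False False True True

Only firms whose base exceeds k/(delta*m) maintain quality, so in this toy the bigger firms are exactly the ones that invest.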

Interesting papers in references

Horner (2002) "Reputation and Competition" AER, 92
Mailath and Samuelson (2001) "Who wants a Good Reputation" RES, 68
Shapiro (1983) "Premiums for High Quality Products as Returns to Reputation" QJE, 98
Tadelis (2002) "The Market for Reputation as an Incentive Mechanism" JPE, 92

Their contribution to the literature is stated as follows.

What differentiates our approach from all these papers is that reputation in our model spreads in the market through word of mouth, or referrals - consumers tell other consumers about their experience, causing some firms to grow and other firms to decline. As a consequence, a firm starts out small, grows gradually, and changes its investment as its reputation is established. These interrelated processes of firm growth, reputation formation, and the links between age, size, and investment in quality represent our main contribution to the literature.

2010-07-07

Lecture 1 (Dutta): Math Preliminaries

Original article (link) posted: 13/09/2005

The class by Professor Dutta started with mathematical preliminaries. It might be good to study these concepts and theorems again because some of them remained unclear to me when I took a math class in my first year.
Well, I am thinking of writing a brief summary each week (the class meets every Monday). Here is the first one.

Topics in the class

1) Correspondences and the Maximum Theorem
2) Contraction Mapping Theorem

What we covered in the class

In (1):
(a) The definitions of correspondences and several versions of continuity: upper semi-continuity (USC), lower semi-continuity (LSC), and continuity.
(b) The Maximum Theorem and its proof. Note) The Maximum Theorem says that, under some conditions, the maximum value is continuous in parameters and the maximizer correspondence is USC.

In (2):
(c) The def. of contraction, Cauchy sequences and complete metric space.
(d) The Contraction Mapping Theorem and its proof. Note) The theorem says that a contraction mapping on a complete metric space has a unique fixed point. (A small numerical illustration follows below.)
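As a quick numerical illustration of (d) in Python (my own example, not from the class): f(x) = cos(x) maps [-1, 1] into itself and |f'(x)| = |sin(x)| <= sin(1) < 1 there, so f is a contraction on a complete metric space, and iterating it from any starting point converges to the unique fixed point.

import math

f = math.cos          # a contraction on [-1, 1]
x, gaps = 1.0, []
for _ in range(40):
    x_new = f(x)
    gaps.append(abs(x_new - x))   # successive gaps shrink geometrically (Cauchy)
    x = x_new

print(x)                    # unique fixed point: x* = cos(x*) ~ 0.739085
print(gaps[20] / gaps[19])  # ratio ~ sin(x*) ~ 0.674, the local contraction factor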

Comments

(a) I've often mixed up USC and LSC, but the difference finally seems clear to me.
(b) I need to reconsider the proof. It's not so complicated but not that easy either.
(c) I realized that I had forgotten the def. of complete metric space...
(d) The proof is much easier than (b). Uniqueness is almost straightforward.

Recommended readings

SLP Chapter 3
Sundaram (1995) "A Course in Optimization Theory"

2010-07-04

Microeconomic Analysis - G6218

Original article (link) posted: 13/09/2005

Today, I attended the first class of G6218, an advanced microeconomics course for upper-year students taught by Professor Dutta. The course covers basic issues in dynamic economics and consists of the following three parts.

1) Dynamic Programming: The theory of single-agent models
2) Repeated/Dynamic Games: The theory of multi-agent models
3) Application: A strategic analysis of climate change

Although there is no required textbook, the two books listed below seem to be useful.

Stokey, Lucas and Prescott (1989) "Recursive Methods in Dynamic Economics" for (1)
Fudenberg and Tirole (1991) "Game Theory" for (2)

Since Professor Dutta is very good at teaching, and the topics are interesting to me and mathematically deep, I am thinking of taking this course. I had better bring SLP from Princeton... (I asked my friends in Princeton to keep my books.)

2010-07-01

JEP for FREE

Here comes great news!
Online issues of the Journal of Economic Perspectives published since 1999 are now publicly accessible at no charge, compliments of the American Economic Association.
You can access the Journal of Economic Perspectives from here. The symposium in the latest volume (Spring 2010), Con out of Economics, contains an interesting debate among leading econometricians that may attract much attention.

2010-06-28

Kamien (Handbook of GT 1992)

Original article (link) posted: 12/09/2005

Kamien (1992) "Patent Licensing" Handbook of Game Theory, vol.1, chapter 11

This survey focuses on game-theoretic analyses of patent licensing. In the paper, the interaction between a patentee and licensees is described as a three-stage game. In the first stage, the patentee sells licenses using some mechanism. Firms in the industry simultaneously decide their buying strategies in the next stage. Finally, licenses are allocated according to the rules of the mechanism, and market competition (Cournot competition) is realized.
The main focus of the paper is the comparison of different mechanisms: (1) auction, (2) fixed-fee licensing, (3) royalty, and (4) a hybrid of (2) and (3). The interesting result is that the licensing auction yields higher revenue than the fixed fee or the royalty does. In some situations, however, the hybrid mechanism called the "chutzpah" mechanism yields even higher revenue than the auction does. (A toy fee-versus-royalty computation is sketched below.)
The analyses introduced in this survey have several restrictions: no private information, identical firms, and attention to profit maximization only.
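As a back-of-the-envelope check of the fee-versus-royalty ranking, here is a toy computation in Python for a linear Cournot duopoly with an outside patentee and a non-drastic cost-reducing innovation (my own hypothetical numbers, not Kamien's general analysis).

a, c, eps = 10.0, 4.0, 1.0   # demand intercept, old marginal cost, cost reduction

def cournot_profits(costs):
    """Equilibrium profits under P = a - Q; with slope-one demand each firm's
    margin equals its quantity, so profit = q_i^2 (interior solution assumed)."""
    n = len(costs)
    q = [(a - n * ci + (sum(costs) - ci)) / (n + 1) for ci in costs]
    return [qi ** 2 for qi in q]

# Royalty r <= eps: both firms license and produce at effective cost c - eps + r.
def royalty_revenue(r):
    q_total = 2 * (a - (c - eps + r)) / 3
    return r * q_total
best_royalty = max(royalty_revenue(i * eps / 100) for i in range(101))

# Fixed fee: each firm pays up to its profit with the license minus its profit
# from unilaterally refusing while the rival licenses.
pi_license = cournot_profits([c - eps, c - eps])[0]
pi_refuse = cournot_profits([c, c - eps])[0]
fee_revenue = 2 * (pi_license - pi_refuse)

print(best_royalty, fee_revenue)   # 4.0 < 5.33...

With these numbers the optimal royalty earns 4.0 while fixed-fee licensing earns about 5.33, in line with the ranking reported in the survey (the auction and the "chutzpah" mechanism are more involved and not sketched here).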
In the end, the paper concludes as follows.

It is often the case that a survey of a line of research is a signal of it having peaked. This is certainly not true for game-theoretic analysis of patent licensing.

I hope this comment is still true although it was written 15 years ago...

Interesting Papers in References

Jensen (1989) "Reputational spillovers, innovation, licensing and entry" IJIO, 10-2
Katz and Shapiro (1985) "On the licensing of innovation" Rand, 16
Katz and Shapiro (1986) "How to license intangible property" QJE, 101
Muto (1987) "Possibility of relicensing and patent protection" EER, 31
Reinganum (1989) "The timing of innovation: Research, development and diffusion" in Handbook of IO

2010-06-25

Japan eye bigger prize

Congratulations on the great victory!
Blue Samurai, you can go further!!

"I know this is a really big win for Japan," Honda said after helping his side line up a Round of 16 meeting with Paraguay in Pretoria on Tuesday. "I expected to be really jubilant but I'm not as jubilant as I thought I would be because I don't think we are finished here. I believe we can go further in this competition."
(quoted from the article at FIFA's official website: Japan eye bigger prize)

Who is Honda? He has twice been named man of the match (MOM) in this World Cup (see the link).

2010-06-23

Fudenberg, Levine and Maskin (Econometrica 1994)

Original article (link) posted: 08/09/2005

In their paper, "The Folk Theorem with Imperfect Public Information", several versions of Folk Theorem are shown. This is a memo for them. Before mentioning actual theorem, let's briefly check their contribution in the literature.

An important hypothesis of the standard Folk Theorem is that the players can observe one another's actions in each repetition, so that deviations from equilibrium strategies are detectable. In contrast, this paper considers games in which players observe only a public outcome that is a stochastic function of the actions played. Thus these are games of moral hazard. The major task of the paper is to provide conditions sufficient for the Folk Theorem to extend to such games. The most important hypotheses concern the way the probability distribution over public outcomes depends on the player's actions.

To see the Folk Theorem with perfect information, you should check Fudenberg and Maskin (Econometrica 1986) "The Folk Theorem in Repeated Games with Discounting or with Incomplete Information". (Notice that "Incomplete Information" in the title refers to cases without a public randomization device or to some reputation models; it does not mean imperfect monitoring, which is what FLM cover.)

The three versions of the Folk Theorem they showed are as follows:

Nash-threat 1
If all pure-action profiles are pairwise identifiable, a "Nash-threat" version of the Folk Theorem obtains: any payoff vector Pareto-dominating a Nash equilibrium of the stage game can be sustained in an equilibrium of the repeated game for discount factors near enough to 1.

Nash-threat 2
If a game has at least one (mixed-action) profile satisfying the conjunction of pairwise identifiability and individual full rank (= "pairwise full rank"), then again the Nash-threat Folk Theorem applies. Generic games possess such a profile provided that the number of possible public outcomes is no less than the total number of elements in the action sets of any two players.

Minimax-threat
To obtain the conventional "minimax-threat" Folk Theorem requires more stringent conditions. Specifically, besides the hypotheses for the Nash-threat theorem, it suffices to assume that all pure-action profiles satisfy individual full rank.

Finally, I quote their explanation of two examples of inefficiency results, due to Radner-Myerson-Maskin and Green-Porter (or Abreu-Pearce-Stacchetti):

Our work makes clear that the R-M-M counterexample relies on there being relatively few possible public outcomes compared to the number of possible actions, so that the genericity result mentioned before does not apply, and that equilibria of the G-P and A-P-S variety are necessarily inefficient only because they are symmetric: if (even slightly) asymmetric equilibria are admitted, the Folk Theorem is restored.
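To make the rank conditions concrete, here is a toy check in Python (my own hypothetical numbers). Pairwise full rank at a pure profile in a 2x2 game requires the stacked deviation distributions to have rank |A1| + |A2| - 1 = 3: with three public outcomes a generic choice of distributions passes, while with only two outcomes three rows of length two can never reach rank 3, which is the scarcity of outcomes driving the R-M-M example.

import numpy as np

# pi[a1][a2] is the distribution over public outcomes at action profile (a1, a2).
def pairwise_rank(pi):
    # rows: pi(.|a1', 0) for each a1', plus pi(.|0, 1), holding profile (0, 0)
    # fixed and dropping the duplicate row pi(.|0, 0)
    rows = [pi[0][0], pi[1][0], pi[0][1]]
    return np.linalg.matrix_rank(np.array(rows))

# Three public outcomes: generic distributions give rank 3, so the condition holds.
pi3 = [[[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]],
       [[0.1, 0.3, 0.6], [0.3, 0.3, 0.4]]]
print(pairwise_rank(pi3))   # 3

# Only two public outcomes: rank is at most 2 < 3, so the condition must fail.
pi2 = [[[0.9, 0.1], [0.5, 0.5]],
       [[0.4, 0.6], [0.7, 0.3]]]
print(pairwise_rank(pi2))   # 2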