Great professor Matsushima

Original article (link) posted: 03/08/2005

At the end of the renegotiation section in Pearce (1992), I found an intriguing description of Matsushima (1990) "Structure of Renegotiation in Infinitely Repeated Games" mimeo, Stanford University.

I cannot end this catalogue without mentioning an intriguing paper by Matsushima (1990). His idea is that just as an equilibrium specifies what will happen if its "instructions" are not obeyed, societies have metacodes indicating what happens when a social convention (equilibrium) is breached. The ensuing analysis is highly ingenious; to my astonishment, Matsushima emerges from a jungle of infinite sequences of social conventions and breaching rules, with an existence result. I will not try to explain the motivation for the solution concept; on that score, despite some enjoyable discussions with the author, I remain mystified.
(Pearce (1992), p.166)

By the way, I could not find Matsushima's paper; he does not seem to have published it, and Stanford does not have the working paper either. If you know anything about this paper, please let me know. (Maybe I should ask him directly.)


Regional Caps on JRMP

I attended a lunch seminar on microeconomics at the University of Tokyo today (May 27th); Fuhito Kojima, one of the most productive game theorists of my generation, talked about his recent matching paper with Yuichiro Kamada (a rising star at Harvard):
"Improving Efficiency in Matching Markets with Regional Caps: The Case of the Japan Residency Matching Program" (joint with Yuichiro Kamada)

Their work is strongly motivated by the actual centralized matching system used in the Japan Residency Matching Program (JRMP). The JRMP, following the NRMP in the U.S., employs the Gale-Shapley algorithm to assign doctors to hospitals. However, due to the vacancy problem at (mainly) rural hospitals, the JRMP recently introduced a "regional cap" for each prefecture in order to control the number of doctors in popular areas.

The current paper theoretically investigates the effects of imposing such exogenous caps on the GS algorithm. Based on a new stability concept they define, which incorporates the regional-cap constraints in a natural way and coincides with the usual stability when there are no caps, they show the following:
  1. The current Japanese mechanism, i.e., exogenous regional caps + GS, is not a stable matching mechanism.
  2. There exists a new algorithm (a natural extension of GS) that always produces a stable and constrained-efficient matching.
  3. This new mechanism is group strategy-proof for doctors; that is, no group of doctors has an incentive to misreport their preferences.

Here is a technical remark. To define the new algorithm, one has to specify two things that are absent from the usual GS algorithm: (1) target caps and (2) an order over hospitals. If I remember correctly, the resulting outcome is independent of the choice of (1) (whenever the target caps satisfy weak feasibility conditions), but the speaker did not mention whether the outcome is also invariant to (2). I had better ask him later...
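For readers unfamiliar with the mechanism, here is a minimal sketch of the plain doctor-proposing Gale-Shapley (deferred acceptance) algorithm that both the NRMP and the JRMP build on, without the regional caps discussed above. All names and preference data are hypothetical.

```python
# Doctor-proposing deferred acceptance (plain Gale-Shapley, without
# regional caps).  All names and preference data here are hypothetical.

def deferred_acceptance(doctor_prefs, hospital_prefs, capacities):
    """doctor_prefs: doctor -> list of hospitals, best first.
    hospital_prefs: hospital -> list of doctors, best first (complete).
    capacities: hospital -> number of positions.
    Returns hospital -> list of assigned doctors."""
    rank = {h: {d: i for i, d in enumerate(ds)}
            for h, ds in hospital_prefs.items()}
    next_choice = {d: 0 for d in doctor_prefs}   # next hospital to propose to
    held = {h: [] for h in hospital_prefs}       # tentatively held doctors
    unmatched = list(doctor_prefs)
    while unmatched:
        d = unmatched.pop()
        if next_choice[d] >= len(doctor_prefs[d]):
            continue                             # d has exhausted the list
        h = doctor_prefs[d][next_choice[d]]
        next_choice[d] += 1
        held[h].append(d)                        # d proposes to h
        held[h].sort(key=lambda x: rank[h][x])   # h keeps its favourites
        if len(held[h]) > capacities[h]:
            unmatched.append(held[h].pop())      # reject the worst held
    return held
```

As I understand the paper, the current JRMP mechanism essentially shrinks each hospital's capacity (to "target capacities" summing to at most the prefectural cap) and then runs exactly this algorithm, which is where stability breaks down.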

I found the paper really interesting: the motivation is clear, the results look nice, it carries important policy implications, it can be applied to other matching problems, and so on. It would be desirable if they could also provide some policy recommendation on the choice of the regional caps themselves. Although actual caps are typically determined by politics rather than by economic theory, I suppose some benchmark analysis would be helpful. Anyway, I like this paper very much :) (We may well find its title on the front page of some top journal in the near future!)


Folk theorems by FLM (1994)

Original article (link) posted: 03/08/2005

According to Pearce (1992), Fudenberg, Levine and Maskin (1994) "The Folk Theorem in Repeated Games with Imperfect Public Information" Econometrica, 62 represents the state of the art in discounted folk theorems for a broad range of information structures, and "anyone interested in repeated games should read it closely".

It is difficult to state their folk theorems without relying on mathematical symbols. Here, I try to summarize the key insight, quoting the relevant sentences (in italics) from Pearce (1992).
To derive folk theorems, they consider two conditions: the individual full rank condition and the pairwise full rank condition. The former is needed to ensure that a player's different possible actions can be distinguished, and hence encouraged or discouraged. The latter is needed to obtain information that discriminates statistically between deviations by one player and deviations by another.
Without the individual full rank condition, it may not be possible to induce players to play some strategy, no matter what rewards are attached to signal realizations. In contrast,
this (="the individual full rank condition") guarantees that any behavior can be enforced if arbitrary continuation payoffs can be used.
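For reference, the two conditions can be stated compactly in standard notation (my paraphrase of the usual formulation, not a quote from FLM). Let $\pi(y \mid a)$ denote the probability of public signal $y$ under action profile $a$, and fix a (possibly mixed) profile $\alpha$:

```latex
\textbf{Individual full rank for player } i \textbf{ at } \alpha:
\quad \{\pi(\cdot \mid a_i, \alpha_{-i})\}_{a_i \in A_i}
\ \text{are linearly independent.}

\textbf{Pairwise full rank for } i, j \textbf{ at } \alpha:
\quad \operatorname{rank}\Bigl(
  \{\pi(\cdot \mid a_i, \alpha_{-i})\}_{a_i \in A_i}
  \,\cup\,
  \{\pi(\cdot \mid a_j, \alpha_{-j})\}_{a_j \in A_j}
\Bigr) = |A_i| + |A_j| - 1 .
```

The rank can never exceed $|A_i| + |A_j| - 1$, because $\pi(\cdot \mid \alpha)$ lies in the span of each family; pairwise full rank asks for this maximum, which is exactly what lets the signal statistically separate i's deviations from j's.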

The failure of the pairwise full rank condition explains the inefficiency results of Radner, Myerson and Maskin (1986).
The problem there was that the only way to enforce good behavior was to punish both players in the event that output is low. Efficient (or nearly efficient) cooperation in a model where no player's actions are observed generally requires that, when one player's continuation payoff is reduced, another's must be increased; surplus should be passed back and forth amongst players, not thrown away.

With the additional restrictions on the information structure guaranteed by the full rank conditions, FLM prove a folk theorem of virtually the same degree of generality as for perfect monitoring.

As a final remark, it should be noted that the pairwise full rank condition need not be satisfied at the equilibrium strategy profile itself.

It would have been reasonable to guess that, to prove that a desired profile "gamma" can be enforced (almost) efficiently, it would be necessary to impose pairwise full rank relative to deviations from "gamma". By contrast, all that is actually assumed is that, for each "i" and "j", there is some distinguishing "alpha" that allows i's and j's deviations to be distinguished, and not necessarily the same "alpha" for each pair of players! FLM demonstrate that a profile as close as desired to "gamma" can be found that puts a little weight on the strategies used in the "distinguishing profiles," and discriminates as required between deviations of different players.

I should read FLM again...


Information and timing

Original article (link) posted: 03/08/2005

With imperfect monitoring, shrinking the period length implies less discounting from one period to the next, but also leaves less time for players to observe signals relevant to behavior.
So there are two effects of reducing the period length: an effective increase in patience, which we know from the monotonicity result tends to increase the average value set, and a worsening of information, which Kandori (1992) elegantly showed to decrease the set of equilibrium values. Either of these two effects can dominate in a particular case.
If the period of fixed action is a year, under plausible parameter values cooperation could be sustained very profitably. But suppose that instead actions can be changed daily. The only way to encourage cooperation is to punish in the event that there is no good news, which has probability near 1 whether anyone shirks or not.
Ironically, in this case the player's ability to respond quickly to information destroys all possibility of cooperation. This suggests that delaying the release of information might actually be valuable in partnerships; Abreu, Milgrom and Pearce (1991) show that for high "delta", information delays can virtually eliminate the inefficiency that Radner, Myerson and Maskin (1986) identified.

(Pearce (1992), p.156)

Abreu, Milgrom and Pearce (1991) "Information and Timing in Repeated Partnerships" Econometrica, 59
Kandori (1992) "The Use of Information in Repeated Games with Imperfect Monitoring" RES, 59-3
Radner, Myerson and Maskin (1986) "An example of a Repeated Partnership Games with Discounting and with Uniformly Inefficient Equilibria" RES, 53



Updated (July 24, 2010)
Their paper is now uploaded:
KANDORI, M. and Obara, I. (2010) "Towards a Belief-Based Theory of Repeated Games with Private Monitoring: An Application of POMDP"
(paper can be downloaded from the author's website)

When I had a chat with Prof. Kandori on Friday, he talked about his recent joint work with Ichiro Obara (UCLA) on repeated games with private monitoring. The key idea is to use a well-known method in dynamic programming called POMDP (partially observable Markov decision process).
They then derive the (right notion of) one-shot deviation principle for private monitoring, and construct a method to check whether an arbitrary strategy, expressed as a finite automaton, constitutes an equilibrium. I couldn't follow the arguments in detail, but what they have found sounds really path-breaking.
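The POMDP machinery rests on tracking a belief over hidden states and updating it by Bayes' rule after each signal. Here is a minimal generic sketch of that belief update; the two-state numbers in it are a toy of my own, not anything from the Kandori-Obara paper.

```python
# Bayes belief update for a POMDP -- the basic bookkeeping that any
# belief-based approach relies on.  The model structure is generic;
# concrete states/signals are hypothetical.

def belief_update(belief, trans, obs_prob, signal):
    """belief: state -> current probability.
    trans: state s -> {state s2 -> Pr(s2 | s)}.
    obs_prob: state s2 -> {signal -> Pr(signal | s2)}.
    Returns the posterior belief after observing `signal`."""
    states = trans[next(iter(trans))].keys()
    posterior = {}
    for s2 in states:
        # predict: probability of moving to s2 under the current belief
        predicted = sum(belief[s] * trans[s][s2] for s in belief)
        # correct: weight by the likelihood of the observed signal
        posterior[s2] = obs_prob[s2][signal] * predicted
    total = sum(posterior.values())              # normalize
    return {s: p / total for s, p in posterior.items()}
```

In the repeated-game application, the "hidden state" would be the opponent's automaton state, and the signal one's own private observation; the one-shot deviation check then becomes a dynamic program over these beliefs.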
Looking forward to seeing their paper soon!

Information about POMDP websites
A useful site with tutorials and code. link
Lecture movie at Stanford (Machine learning: Lecture 20) link


Discounting and dynamic programming

Original article (link) posted: 03/08/2005

Radner (1985) demonstrated the existence of fully efficient perfect equilibria in a class of partnership games with the limit of means criterion.
In my opinion the repeated partnership games (and most repeated games) are better modeled with discounting than with the limit of means criterion.
The per-period nature of the problem is nicely reflected in the discounting case, where the loss gets capitalized in the value set: players must be punished (sooner or later) if current output is low. Without discounting, it is not necessary to deter shirking period by period: if a player cheats for k periods, it has no effect on his long-run average payoff. Only infinite strings of deviations are a problem, and these Radner detects using a "review strategy" that, according to the law of the iterated logarithm, will yield a first-best equilibrium average payoff.
Statistical methods are ideally suited to guarding against long-run deviations, whereas dynamic programming methods are largely inapplicable at "delta"=1. With discounting, the problem of deterring current deviations leads naturally to the decomposition of a supergame profile into behavior today and continuation values for the future. The dynamic programming perspective has the benefit of unifying the treatment of patient and impatient players, infinite and finite horizon games, and implicit and explicit contracts.
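The decomposition Pearce describes is usually written as follows (my notation, a sketch of the standard APS recursion rather than a quote): today's payoff plus a signal-contingent continuation value,

```latex
v_i \;=\; (1-\delta)\, g_i(a) \;+\; \delta \sum_{y} \pi(y \mid a)\, w_i(y),
```

where $g_i$ is the stage-game payoff, $\pi(y \mid a)$ the distribution of the public signal, and $w_i(y)$ the continuation value attached to signal $y$; self-generation requires each $w(y)$ to be drawn from the candidate value set itself. At $\delta = 1$ the first term vanishes, which is one way to see why this machinery has no bite without discounting.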

(Pearce (1992), p.154-5)


Radner (1985) "Repeated Partnership Games with Imperfect Monitoring and No Discounting" RES, 53


Bang-bang rewards

Original article (link) posted: 03/08/2005

Self-generation and related techniques were first developed in the context of unrestricted symmetric equilibria of the Green-Porter model in APS (1986), and then presented in greater generality in APS (1990).
(Pearce (1992), p.152)

In APS (1990), they establish the following bang-bang property.

The equilibrium value set V is compact, and for all elements v in V there exists an equilibrium whose implicit reward functions after each history take only values in the set of extreme points of V.
Under certain conditions the "bang-bang sufficiency" result given above can be strengthened to a necessity result: an equilibrium that maximizes a linear combination of player payoffs (including negative combinations) must have implicit reward functions that use only extreme points of V. The rough intuition is the same as the one given earlier for the Green-Porter model: if you are creating incentives by moving rewards in a direction that reduces the objective function of the problem, do so aggressively (move until you can't go any further in V) but in as small and informative a region of signal space as possible. This advice cannot be applied literally in a model with a discrete signal space, so the bang-bang necessity result does not hold. The sufficiency result can be restored trivially in an essentially discrete model if the signal space is taken to include the outcome space of a public randomization device.

(Pearce (1992), p.153)


The Green-Porter model and APS(1986)

Original article (link) posted: 03/08/2005

In many economic examples of practical interest, the assumption that players observe one another's past action is inappropriate.
Green and Porter (1984) and Porter (1983) were the first papers to study discounted repeated games in which players receive information related only stochastically to others' actions.
For Economists, one of the most attractive features of the model is that it escapes the prediction of dynamically uniform behavior on the most collusive equilibrium path, thereby offering a possible interpretation of observed phenomena such as price wars.
Porter investigated symmetric equilibria that are optimally collusive among a restricted set of "trigger strategy" profiles. (A price realization of less than "p" triggers a T-period phase of Cournot-Nash behavior, after which cooperation resumes until the Cournot phase gets triggered again.)
Porter found that it is often optimal to set T="infinity", that is, revert permanently to the stage game Cournot-Nash equilibrium.

Abreu, Pearce and Stacchetti (1986) dropped the restriction to trigger strategy profiles, and characterized optimal pure-strategy symmetric equilibria of a class of games that generalize the Green-Porter model. They found that a constrained efficient solution is described by two "acceptance regions" in the signal space and two actions. (In the efficient equilibrium, players choose strategy 1 as long as the value of the signal falls in region 1. Otherwise, they switch to strategy 2, and keep playing that strategy as long as the signal falls in region 2.)
Using a larger region and a less severe punishment will generally result in a loss of efficiency because of the region's poorer ability to discriminate between good and bad behavior. Thus, after one period of the best equilibrium, players will be instructed either to begin the worst equilibrium or to restart the best equilibrium.
Notice that, in every contingency, players are duplicating the behavior of the first period of one of two equilibria (the best or the worst), so only two quantities are ever produced.

(Pearce (1992), p.150-2)


Abreu, Pearce and Stacchetti (1986) "Optimal Cartel Equilibria with Imperfect Monitoring" JET, 39
Green and Porter (1984) "Noncooperative Collusion under Imperfect Price Information" Econometrica, 52
Porter (1983) "Optimal Cartel Trigger Price Strategies" JET, 29


Cooperation amongst mortals

Original article (link) posted: 03/08/2005

Although the abstraction of a world that continues indefinitely is a useful device, modeling individuals as infinitely lived is less attractive. It seems important to inquire, then, into the possibilities of self-enforcing agreements amongst finitely lived agents in an infinite horizon world.
In an overlapping generations model, suppose that society acknowledges that in the final three periods of his life, say, no individual will act cooperatively. Hence, selfish behavior by the aged is part of the implicit agreement. But, if any young person fails to cooperate, the accord is broken and everyone subsequently optimizes myopically. Young persons will choose not to defect, because they would lose the benefits of social cooperation for the rest of their lives. Folk theorems similar to those discussed earlier hold here, and Kandori (1992) shows that, if successive individuals are born far enough apart in time, there is no need to invoke any full dimension restriction. Recall that this assumption is usually made to ensure that punishers can be rewarded for minimaxing a defector, without incidentally also rewarding the defector himself. In Kandori's construction, the punishers wait until the defector dies, and then celebrate their earlier self-discipline.

(Pearce (1992), p.148-9)


Kandori (1992) "Repeated Games Played by Overlapping Generations of Players" RES, 59-1
Pearce (1992) "Repeated games: cooperation and rationality" in Advances in economic theory


Remarks on Rotemberg and Saloner (1986)

Original article (link) posted: 02/08/2005

Rotemberg and Saloner (1986) show that price wars occur during booms, in the sense that firms charge the monopoly price in the low state of demand whereas they charge below the monopoly price in the high state of demand when the discount factor falls in some intermediate region. However, we should notice that this price war does not necessarily imply a countercyclical movement of prices.

This is not a price war in the usual sense, because the price may actually be higher during booms than during busts; thus, that oligopoly prices move countercyclically is not an implication of (but is consistent with) the Rotemberg-Saloner model.
(Tirole, p.250)

Contrary to the Rotemberg-Saloner model, Green and Porter (1984) show that price wars are triggered by recessions. They develop a model that formalizes the issue of secret price cutting, in the setting nowadays called "imperfect public monitoring". In their model, price wars during recessions are involuntary; their threat gives firms the incentive not to deviate when demand is not very low. (Remember, there is no private information and price wars are voluntary in the Rotemberg-Saloner model.)

Tirole also mentions the following important point.

When price choices are perfectly observable, it makes sense to resort to extreme punishments because such punishments are never observed on the equilibrium path and therefore are costless to the firms (they are just threats). Under uncertainty, mistakes are unavoidable and maximal punishments (eternal reversion to Bertrand behavior) need not be optimal.
(Tirole, p.252)


Green and Porter (1984) "Non-cooperative Collusion Under Imperfect Price Information" Econometrica, 52
Rotemberg and Saloner (1986) "A Supergame-Theoretic Model of Price Wars during Booms" AER, 76


Open-bid auctions facilitate collusion

Original article (link) posted: 02/08/2005

Milgrom (1987) makes the point that collusion may be easier to sustain in infinitely repeated "descending" auctions (note that we consider procurement auctions here) than in infinitely repeated sealed-bid auctions.
In a repeated descending auction, a deviation is discovered immediately (instead of one period later) and thus can be made unprofitable, even from a static point of view (the deviator's rival can refuse to give up before the price equals "c" in the current auction). Collusion is feasible for any discount factor. The market organization thus affects detection lags and the amount of feasible collusion.

(Tirole p.262)

Second-price auctions also sustain collusion even in a static setting, because they can make deviations unprofitable in the same way as open-bid auctions. Graham and Marshall (1987) is the pioneering paper on collusion in English (second-price) auctions.


Graham and Marshall (1987) "Collusive Behavior at Single-Object Second-Price and English Auctions" JPE, 95
Milgrom (1987) "Auction Theory" in Advances in Economic Theory


Foundations of Law and Economics

I just borrowed "Foundations of Law and Economics," a collection of papers on law and economics, from the library. This is a volume of The International Library of Critical Writings in Economics (No. 239), which impressed me a lot. According to the publisher's website, it is described as:
This landmark collection of essays provides an overview of the essential theories and methods used in the study of law and economics. The editors’ careful selection includes substantial contributions from other disciplines that shed new light on the assumptions, theories and methods that may enhance the understanding of human behavior. The first part presents papers discussing theories central to the foundations of law and economics. The second part offers papers describing a variety of methodologies designed to improve traditional economic models.

This insightful volume is an essential reference source for law and economic scholars, whether they are delving into the field or determining the future direction of their research.
(bold by yyasuda).
The papers are carefully selected by leading scholars in the field (see the contents here). This collection is recommended especially for those who are interested in behavioral law and economics.


Hayashi or Wooldridge?

Original article (link) posted: 27/07/2005

I was asked by someone who is thinking of going to grad school in economics which book is better for a first-year econometrics class.
The following was my answer:

I think "Hayashi" is better for preparing for first-year coursework. In particular, reading the first three chapters helps a lot.
But if you are going to major in some field of applied microeconomics, you may end up buying the "Wooldridge" book too. (Hayashi is strong on macro and time series; Wooldridge covers microeconometrics in depth.)

Usually, mathematical statistics is covered in the first few months of a metrics class.
The following huge textbook covers the math quite well. It is also cited often, so it may be worth buying as a reference book.
"Econometric Analysis" by William H. Greene

If you would like to learn econometrics without the technical details, the following (undergraduate) textbook is highly recommended.
"Introduction to Econometrics" by James H. Stock, Mark W. Watson


Refinements and Selection

Original article (link) posted: 25/07/2005

Myerson (1991) mentions the distinction between equilibrium refinements and criteria for selection among the set of equilibria.

A refinement of Nash equilibria is a solution concept that is intended to offer a more accurate characterization of rational intelligent behavior in games. However, the ultimate refinement that exactly characterizes rational behavior can still include multiple equilibria for many games. (e.g., the Battle of the Sexes)
A selection criterion is then any objective standard, defined in terms of the given structure of the mathematical game, that can be used to determine the focal equilibrium that everyone expects. Equity and efficiency criteria of cooperative game theory can be understood as such selection criteria.

(p. 241)

According to him, the concept of "cheap talk" should be considered an equilibrium selection criterion (NOT a refinement), because it is based on the assumption that players share a rich natural language for communication; hence rational and intelligent players may not reach that equilibrium without some such cultural environment.


My Tweets so far

The following are selected tweets from my English Twitter account (follow me on Twitter):

Just started a blog about economic theory (in English). "yyasuda's blog" link
(May 9)

Prof. Lazear says "Leaders are generalists rather than specialists," in the insightful article about leadership: link
(cont.) He also derives an interesting testable implication: "the most able leaders should be found in the highest variance industries."
(Apr 27)

This is cool. A first economic paper to investigate "monopoly" pricing: link
(May 23)

Amazing lecture on intermediate microeconomics by Prof. Jeff Ely: Mechanism design approach. link
(May 16)

“When one wants to become a real economist,..., one has no choice but the United States,” says Luigi Zingales link
(Jan 28)

Nationwide kidney (donor) exchange program in Australia is getting started in Jan 2010! link
(Jan 28)

Nice documentary on Game Theory: Part 1 link ; 2 link ; 3 link ; EP link
(Jan 21)

May not look serious, but indeed a serious problem in Africa. "The Economics of Female Genital Cutting" link
(Jan 19)

Amazing blog by world leading developmental economist, William Easterly: "Aid Watch" link
(Jan 19)

"Uzawa" -Cass's supervisor in graduate study- mentioned 33 times! "Interview with David Cass" link
(Jan 18)

Excellent collection of short survey articles on Behavioral and Experimental Economics: link
(Jan 18)

Finished reading "repeated game" by Kandori. Just amazingly well written (as usual)! link
(Jan 15)

Better check it later: Chen and Gazzale (2004AER) "When Does Learning in Games Generate Convergence to Nash Equilibria?" link
(Jan 15)

AGT! RT @davidcoallier: An overview of the decade in algorithmic game theory, by Noam Nisan: link
(Jan 15)

"Top 10 Deadliest Earthquakes" link Nearly 143,000 people died in the Kanto earthquake (1923).
(Jan 14)

Mixed strategy played in the airport. "GUARDS" (Game-theoretic Unpredictable and Randomly Deployed Security) link
(Jan 14)

I found my co-authors (Atila and Yeon-Koo) are in the same session (Matching)! link
(Jan 14)



Purification

Original article (link) posted: 24/07/2005

The last topic of chapter 6 of FT is purification.
Section 6.7 presents the theorem of Harsanyi (1973), which states:

Mixed-strategy equilibrium of complete-information games can usually be interpreted as the limits of pure-strategy equilibria of slightly perturbed games of incomplete information.

We should note that a full-measure condition with respect to the set of payoffs is sufficient. Moreover, given this condition, the following statement holds:

A single sequence of perturbed games can be used to "purify" all the mixed equilibria of the limit game.

Ex 6.7 gives the intuition for how the purification argument works. Looking at this example, you can easily understand the idea.
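A toy version of the argument (my own example in the spirit of Ex 6.7, not FT's): perturb matching pennies by a small private payoff shock,

```latex
u_i^{\varepsilon}(H, a_{-i}) \;=\; u_i(H, a_{-i}) + \varepsilon\,\theta_i,
\qquad \theta_i \sim U[-1,1] \ \text{observed only by player } i .
```

For $\varepsilon > 0$ almost every type $\theta_i$ has a strict best reply, so each player follows a pure cutoff strategy (play H iff $\theta_i$ exceeds a cutoff); as $\varepsilon \to 0$ the induced distribution over actions converges to the mixed equilibrium $(1/2, 1/2)$ of the unperturbed game.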

In 6.8, another kind of purification result, due to Milgrom and Weber (1986), is shown.

Under certain regularity conditions, pure-strategy equilibria do exist in games with an atomless distribution over types. The idea is that the effects of mixing can be duplicated by having each type play a pure strategy.

In short, quite a wide range of incomplete-information games has pure-strategy equilibria. This is interesting, because pure-strategy equilibria need not exist in games of complete information.

The proof uses "distributional strategies," introduced by Milgrom and Weber and motivated by Aumann (1964).


Aumann (1964) "Mixed vs. behavior strategies in infinite extensive games" Annals of Mathematics Studies, 52
Harsanyi (1973) "Games with randomly disturbed payoffs: A new rationale for mixed-strategy equilibrium points" IJGT, 2
Milgrom and Weber (1986) "Distributional strategies for games with incomplete information" Mathematics of Operations Research, 10

Two Auction Examples in FT

Original article (link) posted: 23/07/2005

FT illustrates two examples of First-price Auction, Ex 6.5 (continuous) and Ex 6.6 (discrete).
In Ex 6.5, they establish the uniqueness of the equilibrium when the reservation price is higher than the lowest valuation, and the equilibrium is symmetric. Their basic argument is that when the above condition holds, Lipschitz continuity is also satisfied, which implies a unique solution to the differential equation derived from the FOC.
In a footnote, they mention the multiplicity of solutions in the war of attrition.

Standard results on the uniqueness of solution to differential equations require Lipschitz continuity. The war of attrition of example 6.3 is not Lipschitz continuous at s=0, which is why it is possible for the system represented in equation 6.3 to have multiple solutions.
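To make the differential equation concrete, consider the textbook symmetric case with n bidders and i.i.d. valuations drawn from F (the standard special case, not FT's general formulation). The FOC yields

```latex
b'(v) \;=\; \bigl(v - b(v)\bigr)\,\frac{(n-1)\,f(v)}{F(v)},
\qquad
b(v) \;=\; \frac{n-1}{n}\,v \ \ \text{when } F(v)=v \text{ on } [0,1].
```

The right-hand side blows up as $F(v) \to 0$, which is exactly where Lipschitz continuity, and hence uniqueness, can fail; a reservation price above the lowest valuation keeps the relevant domain away from that boundary.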

War of Attrition

Original article (link) posted: 23/07/2005

In our study of the stationary complete war of attrition in chapter 4, we saw that all the Nash equilibria are subgame perfect. Similarly, the multiple equilibria just described satisfy the concept of perfect Bayesian equilibrium we introduce in chapter 8.
(Fudenberg&Tirole (1991) p.219)

"Stationary" is important and cannot be dropped. Obviously, the profile in which one player chooses "never stop" and the other "always stop" can be a Nash equilibrium, but it is not subgame perfect.
In chapter 4, they write as follows:

All stationary Nash equilibria are subgame perfect. To see this, note that the stationarity of the payoffs implies that all subgames where both players are still active are isomorphic.
(FT p.120-121)

Auction Literature (papers)

Original article (link) posted: 23/07/2005

Some important papers in the literature on single-unit auctions.

The following three are surveys and lecture notes.
1. Milgrom (1989) "Auctions and Bidding: A Primer" J. of Econ. Perspectives, 3
2. Bulow and Roberts (1989) "The Simple Economics of Optimal Auctions" JPE, 97
3. Matthews (1995) "A Technical Primer on Auction Theory 1" Mimeo, Northwestern Univ.

1 is a bit old but an excellent survey of single-unit auction theory. It is still worth reading.
2 is the first paper to point out that auction models can be analyzed in the same way as IO models; it initiated graphical analysis in auction theory.
3 is an unpublished lecture note, well suited for those with no background.

The next is the most important paper in the literature. The author received the Nobel prize in 1996 for this work.
4. Vickrey (1961) "Counterspeculation, Auctions and Competitive Sealed Tenders" Journal of Finance, 16

The following paper initiated a revolution in auction theory. Everyone interested in auction theory MUST read this!
5. Myerson (1981) "Optimal Auction Design" Mathematics of Operations Research, 6

One of the most important contributions to auction theory after Myerson. It has become a classic that you should know.
6. Milgrom and Weber (1982) "A Theory of Auctions and Competitive Bidding" Econometrica, 50

Collusion in auctions is an exciting topic. You should first read the following.
7. McAfee and McMillan (1992) "Bidding Rings" AER, 82

Recent research on auctions has been focusing more and more on experiments and on multi-unit auctions. Multi-unit auctions are strongly related to matching problems. To study matching theory, the following textbook is a must-read.
8. Roth and Sotomayor (1990) "Two-Sided Matching: A Study in Game-Theoretic Modeling and Analysis"

Auction Literature (textbook)

Original article (link) posted: 23/07/2005

The following are references for auction theory.
There are at least four textbooks now.

1. Krishna (2002) "Auction Theory"
2. Klemperer (2003) "Auctions: Theory and Practice"
3. Milgrom (2004) "Putting Auction Theory to Work"
4. Menezes&Monteiro (2005) "An Introduction to Auction Theory"

1 is the most standard and concise, yet very rigorous, textbook. Those who want to focus on auction theory should read this book first. Krishna also devotes substantial space to multi-unit auctions, using 6 of the 17 chapters.
2 is a collection of articles by Klemperer. It is easy to read and recommended for beginners, but it does not cover auction theory comprehensively.
3 contains both auction theory and a detailed survey of actual auction design. You will be motivated by this book, written by one of the legends of the field.
4 is a recent book. Its style is similar to Krishna's, but it mainly focuses on single-unit auctions.

The auction chapter of the following book is well written and highly recommended for readers who do not have enough background in game theory.

5. Wolfstetter (1999) "Topics in Microeconomics : Industrial Organization, Auctions, and Incentives"

The following two books might be useful for those interested in auction design in the real world.

6. Illing and Kluh (2003) "Spectrum Auctions and Competition in Telecommunications"
7. Janssen (2004) "Auctioning Public Assets"