Showing posts with label decision theory. Show all posts

2015-11-10

SlideShare

I've uploaded lecture slides to SlideShare. They are written in English or Japanese. Below are the titles (with links) of the slides written in English. It would be great if you could take a look!

2011-03-28

Gilboa's View on Theory

What is theory, and what is its role, especially in the social sciences? In his book, "Theory of Decision under Uncertainty," Professor Gilboa presents a thought-provoking argument.

The following quotes are all from Chapter 7: "The Role of Theories."

Theories are never correct, and in the case of the social sciences they tend to be almost always wrong. The question is, therefore, not whether they are right or wrong, but whether they are wrong in a way that invalidates the conclusions drawn from them. In other words, theories are tools for reasoning and rhetorical devices.
This last standpoint is, I think, more or less shared by great Israeli theorists such as Bob Aumann and Ariel Rubinstein. Later in the book, Gilboa also says the following:
Recall that we are not hoping to obtain theories that are very accurate. We use the theories more often as reasoning aids.
The author then offers a philosophical argument about science, referring to key thinkers such as Friedman, Popper, and Kuhn.
It follows that the degree to which we are willing to accept an assumption does not need to be a monotone function of its degree of accuracy. The assumption is tested based not only on its direct implications, but also on its indirect implications, which may involve nontrivial theorems.

The preceding discussion brings to mind Friedman's celebrated argument that theories should not be judged based on the validity of their assumptions, but on that of their conclusions. This argument is a bit extreme, and I would be careful to accept it (because of the following two reasons).
1) It is generally hard to draw a sharp distinction between assumptions and conclusions, completely ignoring the veracity of the former and testing only the latter.
2) Justifiably or not, it (= Friedman's argument) has become a bit of an excuse not to question the theory.

The logical positivist heritage (coupled with Popper's contribution) suggests that our theories should be falsifiable. The axiomatization we saw earlier is formulated in terms of conditions that can be violated. However, a theory such as utility maximization is not always easy to falsify. [...] Only in carefully designed controlled experiments can one hope to unambiguously refute a theory, but then one faces questions of external validity: the fact that a theory fails in artificial experimental environment may not be an indication that it will also fail in natural environment, to which it was presumably intended in the first place.

It started with Kuhn (1962), who asked questions about scientific paradigms and the way they changed. Kuhn described scientific evolution as a social phenomenon that need not converge to any objective truth. Rather, it was a process involving many factors, including accumulating evidence on the one hand, but also personal interests and tastes on the other.

The postmodern critique sometimes appears to confound descriptive and normative claims. It may well be true that science will never be able to be completely objective. But this does not mean that it shouldn't try. [...] There are instances of postmodern critique that sound to me similar to the argument, "We know that wars are inevitable. Hence, let's start shooting people."

2010-12-21

Theory Seminar "Fudenberg and Levine"

Original article (link) posted: 08/10/2005

Fudenberg and Levine (2005) "A Dual Self Model of Impulse Control"

The paper proposes a "dual-self" model of single-agent decision making. In their model, a patient long-run self and a sequence of myopic short-run selves, who share the same preferences over the stage-game outcome, play a game that determines dynamic decisions. In each period, the long-run self moves first and chooses the utility function of the myopic self, possibly at some cost in utility ("self-control"). After observing this self-control level, the short-run self makes the final decision.
The model gives a unified explanation for a number of empirical regularities related to self-control problems and for the value of commitment in decision problems, including the apparent time inconsistency that has motivated models of hyperbolic discounting, and Rabin's paradox of risk aversion in the large and in the small. The base version of the model is consistent with the Gul-Pesendorfer axioms.
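As a rough illustration of the two-stage interaction described above, here is a toy sketch in Python. The payoff numbers, the two actions, and the linear self-control cost are my own illustrative assumptions, not the parametrization in Fudenberg and Levine (2005):

```python
# Toy dual-self sketch: a long-run self pays a self-control cost to dampen
# the short-run self's temptation, then the short-run self chooses.
# All numbers are illustrative assumptions, not taken from the paper.

short_run_u = {"save": 2.0, "spend": 8.0}   # short-run (temptation) utilities
long_run_v = {"save": 10.0, "spend": 3.0}   # long-run continuation values

def outcome(control):
    """Apply self-control effort against 'spend'; the short-run self then
    picks its best action (ties broken toward 'save'); the long-run self
    bears the effort cost."""
    spend_u = short_run_u["spend"] - control
    action = "save" if short_run_u["save"] >= spend_u else "spend"
    return action, long_run_v[action] - control

# The long-run self searches over self-control levels on a coarse grid.
best = max((outcome(c / 2) for c in range(17)),   # control in 0.0 .. 8.0
           key=lambda pair: pair[1])
print(best)  # → ('save', 4.0): pay just enough (6.0) to induce saving
```

With these numbers, exerting no control yields "spend" and a long-run payoff of 3.0, while paying a control cost of 6.0 flips the short-run self to "save" for a net payoff of 4.0, so costly self-control is optimal.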

Comments
The paper is quite interesting, and the presentation by Professor Levine was really nice. He illustrated many experimental results and explained how their theory accounts for them. However, it was a bit unclear to me what their marginal contribution is to the literature on behavioral economics. There seem to be many alternative theories that can also explain those experimental results. I must check the papers in this field at least to some extent...

Interesting Papers on Reference

Gul and Pesendorfer (2001) "Temptation and Self-Control" Econometrica, 69
: An axiomatic approach of temptation preference
Gul and Pesendorfer (2004) "Self-Control and the Theory of Consumption" Econometrica, 72
: Generalization of the 2001 paper to multi-period decisions, aimed at applications to macroeconomics
Krusell and Smith (2003) "Consumption-Savings Decisions with Quasi-Geometric Discounting" Econometrica, 71
: Multiple equilibria in hyperbolic models
Laibson (1997) "Golden eggs and hyperbolic discounting" QJE, 112
: Revival of non-exponential discounting preference
Laibson (2001) "A Cue-Theory of Consumption" QJE, 116
: Cue-theory
O'Donoghue and Rabin (1999) "Doing It Now or Later" AER, 89
: Deciding whether to act now or later
Rabin (2000) "Risk Aversion and Expected-Utility Theory: A Calibration Theorem" Econometrica
: Rabin's paradox
Thaler and Shefrin (1981) "An Economic Theory of Self-Control" JPE, 89
: A pioneering work of self-control

2010-11-05

Decision Theory 301

This is complementary to the previous post, "Decision Theory 101 (link)." In Appendix A: Optimal Choice, the author (Professor Gilboa) concisely explains the flexibility of the rational-choice framework. I think his argument is especially important when we evaluate the recent developments in behavioral economics and consider their relationship with the traditional (or rational) approach.
For our purposes, it is worthwhile highlighting what this model (the consumers' problem: by yyasuda) does not include. Choices are given as quantities of products. Various descriptions of the products, which may be part of their frames, are not part of the discussion. The utility function measures desirability on a scale. We did not mention any special point on this scale, such as a reference point. Further, choices are bundles of products to be consumed by the consumer in question at the time of the problem. They do not allow us to treat a certain bundle differently based on the consumer's history of consumption, or on the consumption of others around them. Hence, the very language of the model assumes that the consumer does not care what others have, they feel no envy, nor any disappointment in the case when their income drops as compared with last period, and so on.
It is important to emphasize that the general paradigm of rational choice does not necessitate these constraints. For instance, instead of the n products the consumer can consume today, we may have a model with 2n products, reflecting their consumption today and their consumption yesterday. This would allow us to specify a utility function u that takes into account considerations such as aspiration levels, disappointment, and so forth. Or, we can use more variables to indicate the average consumption in the consumer's social group, and then the utility function can capture social considerations such as the consumer's ranking in society and so forth. Indeed, such models have been suggested in the past and have become more popular with the rise of behavioral economics. These models show that the paradigm of rational choice is rather flexible. Yet, the specific theory restricts the relevant variables to be independent of history, others' experiences, emotions, and other factors which might be among the determinants of well-being.
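Gilboa's point that the rational-choice paradigm can absorb history dependence by enlarging the choice space is easy to make concrete. The functional form below, with a simple habit/disappointment term, is my own illustrative assumption:

```python
# Sketch of a history-dependent utility within the rational-choice paradigm:
# utility is defined over (today's consumption, yesterday's consumption),
# i.e. over an enlarged space of 2n variables rather than n.
# The linear penalty term is a purely illustrative assumption.

def u(c_today, c_yesterday, habit_weight=0.5):
    """Utility with a disappointment term: consuming less than yesterday
    is penalized in proportion to the shortfall."""
    return c_today - habit_weight * max(0.0, c_yesterday - c_today)

# The same bundle (5 units today) is ranked differently depending on history:
print(u(5, 3))   # → 5.0  (no shortfall relative to yesterday)
print(u(5, 9))   # → 3.0  (income dropped, so the same bundle is worth less)
```

Nothing here leaves the standard framework: preferences are still complete and transitive over the enlarged space; only the specific theory that restricts utility to today's bundle is relaxed.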

2010-11-01

Decision Theory 101

Let me continue to quote some basics of decision theory (or economics) from Gilboa's recent book, "Making Better Decisions."
In Appendix A: Optimal Choice, the author nicely illustrates the framework of decision theory and its key concepts, such as axioms and the utility function. The following might be especially helpful for those who are skeptical of the fundamental tool of economics, utility maximization.
A fundamental of optimal choice theory is the distinction between feasibility and desirability. A choice is feasible if it is possible for the decision maker, that is, one of the things that she can do. An outcome is desirable if the decision maker wishes to bring it about. Typically, feasibility is considered to be a dichotomous concept, while desirability is continuous: a choice is either feasible or not, with no shades in between; by contrast, an outcome is desirable to a certain degree, and different outcomes can be ranked according to their desirability.
We typically assume that desirability is measured by a utility function u, such that the higher the utility of a choice, the better will the decision maker like it. This might appear odd, as many people do not know what functions are and almost no one can be observed walking around with a calculator and finding the alternative with the highest utility. But it turns out that very mild assumptions on choice are sufficient to determine that the decision maker behaves as if she had a utility function that she was attempting to maximize. If the number of choices is finite, the assumptions (often called axioms) are the following:
1. Completeness: for every two choices, the decision maker can say that she prefers the first to the second, the second to the first, or that she is indifferent between them.
2. Transitivity: for every three choices a, b, c, if a is at least as good as b, and b is at least as good as c, then a is at least as good as c.
It turns out that these assumptions are equivalent to the claim that there exists a function u such that, for every two alternatives a and b, a is at least as good as b if and only if u(a) ≥ u(b). (...) Any other algorithm that guarantees adherence to these axioms has to be equivalent to maximization of a certain function, and therefore the decision maker might well specify the function explicitly.
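For a finite set of alternatives, the representation result quoted above can be demonstrated with a standard construction: let u(a) be the number of alternatives that a is at least as good as. The particular four-alternative ranking below is my own illustrative example:

```python
# Sketch of the finite representation theorem: any complete, transitive
# preference over finitely many alternatives is represented by the utility
# u(x) = number of alternatives that x is at least as good as.
from itertools import product

alternatives = ["a", "b", "c", "d"]
# An illustrative complete, transitive preference, given as a ranking
# (lower number = better); "b" and "c" are indifferent.
rank = {"a": 0, "b": 1, "c": 1, "d": 2}

def weakly_prefers(x, y):
    """x is at least as good as y."""
    return rank[x] <= rank[y]

def utility(x):
    return sum(weakly_prefers(x, y) for y in alternatives)

# Verify the representation: x ≿ y  iff  u(x) >= u(y), for every pair.
assert all(weakly_prefers(x, y) == (utility(x) >= utility(y))
           for x, y in product(alternatives, repeat=2))
print({x: utility(x) for x in alternatives})  # → {'a': 4, 'b': 3, 'c': 3, 'd': 1}
```

Completeness and transitivity are exactly what make this counting construction work: they guarantee that whenever x is at least as good as y, everything y beats is also beaten by x, so the counts order the alternatives correctly.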

2010-10-29

Descriptive and Normative Theories

A great introductory book on decision theory, written by a leading authority, Professor Itzhak Gilboa, came out recently:

Professor Ehud Kalai at Northwestern University describes this book as follows:
"This book is extremely effective for anyone who wants to acquire quick, basic understanding of old and new concepts of decision theory, with a minimum level of technical details."
In the first chapter, the author indeed effectively explains two essential notions in economics, "descriptive" (sometimes called "positive") and "normative" theories:
A descriptive theory is meant to describe reality. For instance, the claim that demand curves slope down attempts to tell us something about the world. Importantly, it does not make a value judgement and takes no stand on whether this feature of the world is good or bad.
A normative theory is a recommendation to decision makers, a suggestion regarding how they should make decisions. For instance, the claim that we should reduce income inequality is a normative claim. Note that the word "normative" does not mean here "the norm in a given society" as it does in other social sciences. The term only says something about the type of interaction between the theorist and the decision maker, namely, that this is an instance in which the former is trying to convince the latter to behave in a certain way.
The author continues to document the role of each theory, which is also very intuitive:
In decision theory it is often the case that a principle can be interpreted either descriptively or normatively. Consider the theory that each economic agent maximizes a utility function. I may propose it as descriptive, namely, arguing that this is a good description of real economic agents. And I may promote it as normative, in which case my claim will be that you would be wise to become such an agent. As a descriptive theory, the principle is tested for its correspondence to reality. The better it fits the data, the more successful it is. As a normative one, the principle should not fit reality. In fact, there is no point in giving decision makers recommendations that they anyway follow. Rather, the test is whether the decision makers would like to follow the principle.

2010-09-04

Theory Seminar (Gilboa, Lieberman and Schmeidler)

Original article (link) posted: 24/09/2005

Gilboa, Lieberman and Schmeidler "Empirical Similarity"

Although the paper above was supposed to be the topic, Professor Gilboa mainly explained the following paper.

Billot, Gilboa, Samet and Schmeidler (2005) "Probabilities as similarity-weighted frequencies" Econometrica, 73

The two papers above consider the decision rule of a decision maker who has data on past outcomes and is asked to express her beliefs by assigning probabilities to certain possible states. When past cases differ from the current problem, raw empirical frequency may not help her make a decision at all; instead, she may assign a higher weight to more similar cases in evaluating the probability of a state.
Billot et al. show that if beliefs given a union of two databases are a convex combination of beliefs given each of the databases, the belief formation process follows a simple formula: beliefs are a similarity-weighted average of the beliefs induced by each past case. However, their axiomatization does not suggest a particular similarity function, or even a particular form of the function. Gilboa et al. develop tools of statistical inference for parametric estimation of the similarity function, assuming that such a function governs the data-generating process.
Notice that the axioms in the papers are not consistent with situations where the range of beliefs shrinks as the number of observations increases, or where the decision maker cares about a trend in outcomes.
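The similarity-weighted frequency formula has a simple mechanical reading, sketched below. The Gaussian similarity kernel, the bandwidth, and the weather data are my own illustrative assumptions; as noted above, the axiomatization itself does not pin down a functional form:

```python
# Hedged sketch of similarity-weighted frequencies (Billot, Gilboa, Samet
# and Schmeidler 2005): the probability of a state is the relative frequency
# of past cases with that state, weighted by similarity to the new problem.
# The Gaussian kernel and the data below are illustrative assumptions.
import math

# Past cases: (characteristic x_i, realized state y_i).
database = [(1.0, "rain"), (1.2, "rain"), (3.0, "sun"), (3.1, "sun"), (2.9, "sun")]

def similarity(x, x_new, bandwidth=1.0):
    return math.exp(-((x - x_new) ** 2) / bandwidth)

def belief(x_new, state):
    """Probability of `state` as a similarity-weighted relative frequency."""
    weights = [similarity(x, x_new) for x, _ in database]
    hits = sum(w for (x, y), w in zip(database, weights) if y == state)
    return hits / sum(weights)

# Near x = 1, the two "rain" cases dominate the weighted frequency,
# even though "sun" is more frequent in the database overall:
print(round(belief(1.1, "rain"), 3))
```

With a very wide bandwidth all cases get similar weight and the formula collapses toward the raw empirical frequency; the interest of the result is precisely that beliefs need not be frequency-based when cases differ in relevance.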

The presentation by Professor Gilboa was very clear, and he was quite good at using PowerPoint (!!). But the material was difficult for me to understand. I might need to study decision theory at least a little bit... (It might be good to read his book "A Theory of Case-Based Decisions"; a Japanese translation is also available.)