2010-08-26

Lecture 2 (Dutta): Dynamic Programming 1

Original article (link) posted: 22/09/2005

Topics in the class

1) Conditional Probability and Feller Property
(a) The definitions of a transition function q and of the conditional expectation operator T.
(b) The properties of Tg (for a measurable function g): Tg is measurable, non-negative if g is non-negative, and bounded if g is bounded.
(c) The Feller property (see the sketch after this list).
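
As a quick reference for (a)-(c), here is a minimal sketch of the standard definitions in notation of my own choosing (S denotes the state space with its Borel sigma-algebra; these symbols are not necessarily the ones used in the lecture):

\[
  (Tg)(x) \;=\; \int_S g(y)\, q(x, dy), \qquad x \in S,
\]

for any measurable g : S \to \mathbb{R} for which the integral is defined. The Feller property requires that T map bounded continuous functions into bounded continuous functions, i.e. g \in C_b(S) implies Tg \in C_b(S); equivalently, x \mapsto \int_S g(y)\, q(x, dy) is continuous for every bounded continuous g.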

2) Dynamic Programming Set Up
(d) The definitions of a reward function and a feasibility correspondence.
(e) Examples: the neoclassical growth model, portfolio choice, capital accumulation, search, and pricing with inertia (menu costs).
(f) The definitions of a history and of a policy (action).
(g) Setting up the optimization problem and the value function (see the sketch below).
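
As a reference for (f)-(g), a sketch of how the optimization problem is usually written, again in notation of my own choosing (r is the reward function, \Gamma the feasibility correspondence, \beta \in (0,1) the discount factor, and \pi a policy mapping histories into feasible actions; the transition function q now depends on the current action as well):

\[
  V(x_0) \;=\; \sup_{\pi}\; \mathbb{E}^{\pi}\!\left[\, \sum_{t=0}^{\infty} \beta^{t}\, r(x_t, a_t) \right],
  \qquad a_t \in \Gamma(x_t) \ \text{for all } t,
\]

where the expectation is taken over the histories generated by the policy \pi and the transition function q(x_t, a_t; \cdot).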

3) Bellman (Optimality) Equation
(h) Necessity: If the value function V is measurable, then TV=V.
(i) Sufficiency: If a bounded and measurable function U solves U = TU, then U is greater than or equal to V; if, in addition, there is a selection from the optimality equation, then U = V. (Note: a selection from TU = U is a stationary Markovian policy that attains the maximum in the optimality equation. A numerical sketch follows below.)
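
To make (h)-(i) concrete, here is a minimal numerical sketch, not taken from the lecture, of solving U = TU by iterating the Bellman operator on a finite state/action space. The states, rewards, transition probabilities, and discount factor below are all invented for illustration.

import numpy as np

n_states, n_actions = 5, 3
beta = 0.95                                              # discount factor (invented)

rng = np.random.default_rng(0)
r = rng.uniform(0.0, 1.0, size=(n_states, n_actions))    # reward r(x, a), invented
q = rng.uniform(size=(n_states, n_actions, n_states))    # transition kernel q(y | x, a), invented
q /= q.sum(axis=2, keepdims=True)                        # normalize so each q(. | x, a) is a distribution

def bellman_operator(U):
    """(TU)(x) = max_a { r(x, a) + beta * sum_y q(y | x, a) * U(y) }."""
    Q = r + beta * q @ U                                 # shape (n_states, n_actions)
    return Q.max(axis=1), Q.argmax(axis=1)               # new value function and greedy (Markovian) policy

U = np.zeros(n_states)                                   # any bounded initial guess works
for _ in range(10_000):
    TU, policy = bellman_operator(U)
    converged = np.max(np.abs(TU - U)) < 1e-10           # contraction in the sup norm => convergence
    U = TU
    if converged:
        break

print("fixed point U of U = TU:", np.round(U, 4))
print("stationary Markovian selection (argmax policy):", policy)

Because the Bellman operator is a beta-contraction in the sup norm on bounded functions, the iteration converges to the unique bounded fixed point, and the argmax gives a stationary Markovian selection of the kind referred to in (i).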

Comments

Basic concepts in measure theory, such as sigma-algebras and measurability, are presupposed. I should check what the Feller property means exactly. (I am not sure whether it is just a definition or whether it comes with necessary and sufficient conditions.)
