Fitted Q Iteration

Background on Q Learning

The Bellman optimality equation for the action-value function Q*(s, a) is given as:

Q*(s, a) = Σ_{s'} P(s'|s, a) [ R(s, a, s') + γ max_{a'} Q*(s', a') ]

where P(s'|s, a) is the transition probability of landing in state s' on taking action a in state s, and R(s, a, s') is the reward received at state s' reached on taking action a in state s.

In the dynamic programming setting, the Q function for the optimal policy is computed iteratively (Q value iteration):

Q_{k+1}(s, a) = Σ_{s'} P(s'|s, a) [ R(s, a, s') + γ max_{a'} Q_k(s', a') ]

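As a concrete illustration, this Q value iteration update can be run on a small hypothetical MDP; all the numbers below are made up for the example.

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP with known dynamics.
# P[s, a, s'] = transition probability, R[s, a, s'] = reward.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.0, 1.0], [0.5, 0.5]]])
R = np.array([[[1.0, 0.0], [0.0, 2.0]],
              [[0.0, 0.0], [1.0, 1.0]]])
gamma = 0.9

Q = np.zeros((2, 2))  # Q_0(s, a) = 0
for _ in range(500):
    # Q_{k+1}(s,a) = sum_{s'} P(s'|s,a) [R(s,a,s') + gamma * max_{a'} Q_k(s',a')]
    Q = np.einsum('sax,sax->sa', P, R + gamma * Q.max(axis=1))

print(np.round(Q, 2))  # converged optimal Q values
```

Because the dynamics P and R are used explicitly, this is the model-based computation that Q-Learning replaces with sampled transitions.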
Q-Learning is a model-free approach to learning the Q function by exploring the environment, i.e. performing actions based on some policy. A table of Q values for each state-action pair is maintained, and the table is updated after every observed transition <s, a, r, s'> using the running-average formula:

Q(s, a) ← (1 - α) Q(s, a) + α [ r + γ max_{a'} Q(s', a') ]

With enough episodes the Q values eventually converge, and the optimal policy can be retrieved from the table.
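A minimal sketch of that tabular update; the state and action names here are purely illustrative.

```python
from collections import defaultdict

# Minimal sketch of the tabular Q-Learning running-average update.
alpha, gamma = 0.5, 0.9
Q = defaultdict(float)                     # Q(s, a) = 0 by default
actions = ['north', 'south', 'east', 'west']

def q_update(s, a, r, s_next):
    # Q(s,a) <- (1 - alpha) Q(s,a) + alpha [r + gamma max_{a'} Q(s',a')]
    sample = r + gamma * max(Q[(s_next, ap)] for ap in actions)
    Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * sample

q_update('A', 'north', 1.0, 'B')           # one observed transition
print(Q[('A', 'north')])                   # 0.5: halfway between old value 0 and sample 1.0
```

Note that only the single entry Q('A', 'north') changed; this locality is exactly the asynchronous-update behaviour discussed below.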

Drawbacks of Q Learning

Q-Learning has several drawbacks. They may be minor in the typical reinforcement learning setting, where a simulator is available, but they become serious limitations in the Batch RL setting. In Batch RL there is no simulation environment: a fixed set of transition samples (<s, a, r, s'>) is given, and the challenge is to learn without exploring.

Exploration overhead

As the figure below shows, at some point in the top-left cell the agent explored the action of going north, and because it landed in the same cell, it updated Q(s, north) using the value of that cell during that episode. After that episode, even though the value of that cell changes, Q(s, north) does not get updated until the agent explores going north again.

[Figure: Q values in a gridworld during Q-Learning]

source: UC Berkeley CS188: Lecture of Pieter Abbeel

Stability issue

Q-Learning performs an 'asynchronous update': after each observation the Q value is updated locally, only for the state-action pair at hand, and all other entries are left untouched. In the figure above, we know the value at the red tile, but the Q value for the tile below it is not updated until we explore the action of going to the red tile from that tile.

The same asynchronous update appears with function approximation, where the Q function is estimated by a parameterized function Q̂_θ and at every time step the parameters are updated using:

θ ← θ + α [ r + γ max_{a'} Q̂_θ(s', a') - Q̂_θ(s, a) ] ∇_θ Q̂_θ(s, a)

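A sketch of this per-step update with a linear approximator Q̂_θ(s, a) = θ·φ(s, a), where the feature map φ is a made-up one-hot encoding over 2 states and 2 actions.

```python
import numpy as np

# Per-step (semi-gradient) update with a linear approximator.
alpha, gamma = 0.1, 0.9
theta = np.zeros(4)

def phi(s, a):
    # Hypothetical one-hot features over (state, action) pairs.
    f = np.zeros(4)
    f[2 * s + a] = 1.0
    return f

def step_update(s, a, r, s_next):
    global theta
    target = r + gamma * max(theta @ phi(s_next, ap) for ap in (0, 1))
    td_error = target - theta @ phi(s, a)
    # With one-hot features only the weight for (s, a) moves; with a global
    # approximator such as a neural network, every weight would move,
    # which is exactly the instability discussed below.
    theta = theta + alpha * td_error * phi(s, a)

step_update(0, 1, 1.0, 1)
print(theta)  # only the weight for (s=0, a=1) changed, to 0.1
```

For one-hot features (an "averager" in Gordon's terminology) this reduces to the tabular update; the trouble starts when ∇_θ Q̂_θ(s, a) is dense.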
Inefficient approximation

The 'asynchronous update' in function approximation is particularly harmful with global approximation functions. An attempt to improve the value of a single state at every time step may impair the approximation for all other states, especially with approximators such as neural networks, where a single example can change all the weights. Gordon 1995 shows that when the impaired approximation is used in the next iteration, the Q function may diverge from the optimal function.

This is where fitted methods come in.


Fitted Approaches

Gordon 1995 provided a stable function approximation approach by separating the dynamic programming step from the function approximation step. Effectively, the update equation above is split into two steps: first compute the one-step targets for all samples, then fit the approximator to those targets.

Observation: splitting the function update from one step into two is equivalent to changing classical Gram-Schmidt orthonormalization to modified Gram-Schmidt orthonormalization.

Ernst 2005 proposed fitted Q iteration by borrowing this split approach from Gordon. The approach iteratively approximates the Q function by reformulating Q-Learning as a supervised regression problem. The proposed algorithm is given below.

Given: tuples F = {<s, a, r, s'>}, stopping condition

1. Q_0(s, a) = 0; k = 0
2. while (!stopping condition):
3.    Build a training set over all tuples in F:
         {feature ; regression value} = {<s, a> ; r + γ max_{a'} Q_k(s', a')}
4.    Learn a function Q_{k+1}(s, a) approximating the regression values; k = k + 1

This is in principle the same as the equations above, with r + γ max_{a'} Q_k(s', a') as the regression value and <s, a> as the feature.
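The algorithm can be sketched as follows. Ernst 2005 uses tree ensembles as the regressor; a plain least-squares fit on one-hot (s, a) features stands in here to keep the sketch self-contained, and the batch of tuples is synthetic.

```python
import numpy as np

# Sketch of fitted Q iteration over a fixed batch of <s, a, r, s'> tuples.
rng = np.random.default_rng(0)
n_states, n_actions, gamma = 3, 2, 0.9

# Synthetic batch: random transitions, reward 1 for landing in state 2.
batch = [(int(rng.integers(n_states)), int(rng.integers(n_actions)),
          0.0, int(rng.integers(n_states))) for _ in range(200)]
batch = [(s, a, 1.0 if sn == 2 else 0.0, sn) for s, a, _, sn in batch]

def features(s, a):
    # One-hot encoding of the (s, a) pair.
    f = np.zeros(n_states * n_actions)
    f[s * n_actions + a] = 1.0
    return f

X = np.array([features(s, a) for s, a, _, _ in batch])
w = np.zeros(n_states * n_actions)          # Q_0(s, a) = 0
for _ in range(50):                         # fixed iteration count as the stopping rule
    # Dynamic programming step: build regression targets from the current fit.
    y = np.array([r + gamma * max(features(sn, ap) @ w for ap in range(n_actions))
                  for _, _, r, sn in batch])
    # Fitting step: refit the approximator on the whole batch at once.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.round(w.reshape(n_states, n_actions), 2))
```

The key contrast with the per-step update above is that every tuple in the batch contributes to each fit, so no state-action pair is left behind while others are updated.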

Further extensions of the fitted Q approaches learn the new function as some linear combination of the previous function and the regression values.

References

  1. Lange, S., Gabel, T., & Riedmiller, M. Batch Reinforcement Learning. 2012.
  2. Gordon, G. J. Stable Function Approximation in Dynamic Programming. ICML, 1995.
  3. Ernst, D., Geurts, P., & Wehenkel, L. Tree-Based Batch Mode Reinforcement Learning. JMLR, 2005.
  4. http://www.cs.toronto.edu/~zemel/documents/411/rltutorial.pdf