
Jerzy A. Filar (2007), "Controlled Markov Chains, Graphs, and Hamiltonicity", Foundations and Trends® in Stochastic Systems: Vol. 1: No. 2, pp 77-162. http://dx.doi.org/10.1561/0900000003

© 2007 J. A. Filar

**In this article:**

1 Embedding of a Graph in a Markov Decision Process

2 Analysis in the Policy Space

3 Analysis in the Frequency Space

4 Spectral Properties, Spin-offs, and Speculation

Acknowledgments

References

This manuscript summarizes a line of research that maps certain classical problems of discrete mathematics, such as the Hamiltonian Cycle and the Traveling Salesman Problems, into convex domains where continuum analysis can be carried out. Arguably, the inherent difficulty of these now-classical problems stems precisely from the discrete nature of the domains in which they are posed. The convexification of domains underpinning the reported results is achieved by assigning a probabilistic interpretation to key elements of the original deterministic problems.

In particular, approaches summarized here build on a technique that embeds Hamiltonian Cycle and Traveling Salesman Problems in a structured singularly perturbed Markov Decision Process. The unifying idea is to interpret subgraphs traced out by deterministic policies (including Hamiltonian Cycles, if any) as extreme points of a convex polyhedron in a space filled with randomized policies.

The topic has now evolved to the point where there are many results, both theoretical and algorithmic, that exploit the nexus between graph-theoretic structures and the probabilistic and algebraic entities of related Markov chains. The latter include moments of first return times, limiting frequencies of visits to nodes, and the spectra of certain matrices traditionally associated with the analysis of Markov chains. Numerous open questions and problems are described in the presentation.


The inherent difficulty of many problems of combinatorial optimization and graph theory stems from the discrete nature of the domains in which these problems are posed. *Controlled Markov Chains, Graphs & Hamiltonicity* summarizes a line of research that maps such problems into convex domains where continuum, dynamic, and perturbation analyses can be carried out more easily. The convexification of domains is achieved by assigning a probabilistic interpretation to key elements of the original problems, even though these problems are deterministic. The dynamics are introduced via a controller whose choices select points, or trajectories, in these domains. Singular perturbations are introduced as tools to simplify the structure of certain Markov processes.

This approach is illustrated by its application to one famous problem of discrete mathematics: the Hamiltonian Cycle Problem (HCP). The essence of the HCP is contained in the following, deceptively simple, single-sentence statement: given a graph on N nodes, find a simple cycle that contains all vertices of the graph (a Hamiltonian Cycle), or prove that no such cycle exists.

The HCP is known to be NP-hard and has become a challenge that attracts mathematical minds both in its own right and because of its close relationship to the equally famous Traveling Salesman Problem (TSP). An efficient solution of the latter would have an enormous impact in operations research, optimization, and computer science. From a mathematical perspective, however, the underlying difficulty of the TSP is perhaps hidden in the Hamiltonian Cycle Problem, which is the main focus here.
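To make the problem statement above concrete, the following is a minimal sketch of a brute-force Hamiltonian cycle search, written for this summary rather than taken from the monograph. It illustrates why the discrete formulation is hard: it enumerates permutations of the vertices, so its running time grows factorially with N, which is precisely the behavior the convex-embedding approach seeks to circumvent. The function name and graph representation (a dict of adjacency sets) are illustrative choices, not notation from the source.

```python
from itertools import permutations

def hamiltonian_cycle(adj):
    """Brute-force search for a Hamiltonian cycle.

    `adj` maps each node to the set of its neighbors.
    Returns a cycle as a list of nodes with the start node repeated
    at the end, or None if no Hamiltonian cycle exists.
    Exponential time: checks (N-1)! vertex orderings in the worst case.
    """
    nodes = list(adj)
    if not nodes:
        return None
    start = nodes[0]  # fix one node; cycles are rotation-invariant
    for perm in permutations(nodes[1:]):
        cycle = [start, *perm, start]
        # verify every consecutive pair is an edge of the graph
        if all(b in adj[a] for a, b in zip(cycle, cycle[1:])):
            return cycle
    return None

# A 4-cycle (square) is Hamiltonian; a star K_{1,3} is not.
square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(hamiltonian_cycle(square))  # → [0, 1, 2, 3, 0]
print(hamiltonian_cycle(star))    # → None
```

Even modest graphs defeat this enumeration (N = 20 already means 19! ≈ 1.2 × 10^17 orderings), which is the motivation for reposing the problem in the continuous policy space of a Markov Decision Process.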