By Jerzy A. Filar, School of Mathematics and Statistics, University of South Australia, Australia, j.filar@unisa.edu.au
This manuscript summarizes a line of research that maps certain classical problems of discrete mathematics, such as the Hamiltonian Cycle and Traveling Salesman Problems, into convex domains where continuum analysis can be carried out. Arguably, the inherent difficulty of these now classical problems stems precisely from the discrete nature of the domains in which they are posed. The convexification of domains underpinning the reported results is achieved by assigning a probabilistic interpretation to key elements of the original deterministic problems.
In particular, the approaches summarized here build on a technique that embeds the Hamiltonian Cycle and Traveling Salesman Problems in a structured, singularly perturbed Markov Decision Process. The unifying idea is to interpret the subgraphs traced out by deterministic policies (including Hamiltonian Cycles, if any) as extreme points of a convex polyhedron in a space filled with randomized policies.
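To make the embedding idea concrete, the following minimal Python sketch (an illustration only, not the singularly perturbed MDP construction itself) enumerates the deterministic policies of a small hypothetical graph, where each policy selects one outgoing arc at every node, and identifies those policies whose traced subgraph is a single cycle through all nodes, that is, a Hamiltonian Cycle. The graph and function names are assumptions made for the example.

```python
from itertools import product

# Toy illustration: each deterministic policy picks one outgoing arc per node,
# tracing a subgraph with exactly one arc leaving every node.  A Hamiltonian
# cycle corresponds to a deterministic policy whose traced subgraph is a single
# N-cycle.  Randomized policies mix these arc choices at each node, and the
# deterministic policies sit at the extreme points of that policy space.

def deterministic_policies(graph):
    """Enumerate all deterministic policies of the graph (one arc per node)."""
    nodes = sorted(graph)
    for choice in product(*(graph[v] for v in nodes)):
        yield dict(zip(nodes, choice))

def traces_hamiltonian_cycle(policy, start=0):
    """Check whether following the policy from `start` visits every node
    exactly once before returning to `start`."""
    seen, v = set(), start
    while v not in seen:
        seen.add(v)
        v = policy[v]
    return v == start and len(seen) == len(policy)

# Hypothetical 4-node graph containing the Hamiltonian cycle 0 -> 1 -> 2 -> 3 -> 0.
graph = {0: [1, 2], 1: [2, 3], 2: [0, 3], 3: [0]}
hc_policies = [p for p in deterministic_policies(graph) if traces_hamiltonian_cycle(p)]
print(hc_policies)   # [{0: 1, 1: 2, 2: 3, 3: 0}]
```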
The topic has now evolved to the point where there are many results, both theoretical and algorithmic, that exploit the nexus between graph-theoretic structures and both probabilistic and algebraic entities of the related Markov chains. The latter include moments of first return times, limiting frequencies of visits to nodes, and the spectra of certain matrices traditionally associated with the analysis of Markov chains. Numerous open questions and problems are described in this presentation.
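As a hedged illustration of the kind of Markov-chain quantities involved, the sketch below (continuing the toy policy from the previous example) builds the 0-1 transition matrix induced by a deterministic policy and computes its limiting frequencies of visits to nodes and the mean first return time to a home node; for a Hamiltonian-cycle policy on N nodes these equal 1/N at every node and N, respectively.

```python
import numpy as np

def transition_matrix(policy):
    """0-1 transition matrix of the Markov chain induced by a deterministic policy."""
    n = len(policy)
    P = np.zeros((n, n))
    for v, w in policy.items():
        P[v, w] = 1.0
    return P

def stationary_distribution(P):
    """Solve pi P = pi, sum(pi) = 1 (assumes the induced chain is irreducible)."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

policy = {0: 1, 1: 2, 2: 3, 3: 0}     # the Hamiltonian-cycle policy from above
pi = stationary_distribution(transition_matrix(policy))
print(pi)              # ~[0.25 0.25 0.25 0.25]: limiting visit frequencies are 1/N
print(1.0 / pi[0])     # ~4.0: mean first return time to node 0 equals N (Kac's formula)
```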
The inherent difficulty of many problems of combinatorial optimization and graph theory stems from the discrete nature of the domains in which these problems are posed. Controlled Markov Chains, Graphs & Hamiltonicity summarizes a line of research that maps such problems into convex domains where continuum, dynamic and perturbation analyses can be more easily carried out. The convexification of domains is achieved by assigning a probabilistic interpretation to key elements of the original problems, even though these problems are deterministic. The dynamics are introduced via a controller whose choices select points, or trajectories, in these domains. Singular perturbations are introduced as tools to simplify the structure of certain Markov processes.

The above approach is illustrated by its application to one famous problem of discrete mathematics: the Hamiltonian Cycle Problem (HCP). The essence of the HCP is contained in the following, deceptively simple, single-sentence statement: given a graph on N nodes, find a simple cycle that contains all vertices of the graph (a Hamiltonian Cycle) or prove that such a cycle does not exist. The HCP is known to be NP-hard and has become a challenge that attracts mathematical minds both in its own right and because of its close relationship to the equally famous Traveling Salesman Problem (TSP). An efficient solution of the latter would have an enormous impact in operations research, optimization and computer science. However, from a mathematical perspective, the underlying difficulty of the TSP is, perhaps, hidden in the Hamiltonian Cycle Problem, which is the main focus here.
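For concreteness, the single-sentence statement of the HCP translates directly into the brute-force search sketched below (with the same hypothetical example graph as above); the search is exponential in N, which is precisely why reformulations such as the one summarized here are of interest.

```python
from itertools import permutations

# Direct, brute-force reading of the HCP statement: try every ordering of the
# remaining nodes and test whether consecutive nodes (and the closing arc back
# to the start) are connected.  Illustration only; the cost grows factorially.

def hamiltonian_cycle(graph):
    """Return a Hamiltonian cycle as a list of nodes, or None if none exists."""
    nodes = sorted(graph)
    first, rest = nodes[0], nodes[1:]
    for order in permutations(rest):
        cycle = [first, *order]
        if all(cycle[(i + 1) % len(cycle)] in graph[cycle[i]] for i in range(len(cycle))):
            return cycle
    return None

print(hamiltonian_cycle({0: [1, 2], 1: [2, 3], 2: [0, 3], 3: [0]}))  # [0, 1, 2, 3]
```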