<h1>Articles for STO</h1>
<p>Feed: https://nowpublishers.com/feed/STO</p>

<h2>Solving Free-boundary Problems with Applications in Finance</h2>
<p>http://nowpublishers.com/article/Details/STO-006</p>
<h3>Abstract</h3>
<p>Stochastic control problems in which there are no bounds on the rate of control reduce to so-called free-boundary problems in partial differential equations (PDEs). In a free-boundary problem, the solution of the PDE and the domain over which the PDE must be solved have to be determined simultaneously. Examples of such stochastic control problems are singular control, optimal stopping, and impulse control problems. Application areas of these problems are diverse and include finance, economics, queuing, healthcare, and public policy. In most cases, the free-boundary problem must be solved numerically.</p>
<p>In this survey, we present a recent computational method that solves these free-boundary problems. The method finds the free boundary by solving a sequence of fixed-boundary problems, which are relatively easy to solve numerically. We summarize and unify recent results on this <em>moving boundary</em> method, illustrating its application on a set of classical problems, of increasing difficulty, in finance. This survey is intended for readers who are primarily interested in computing numerical solutions to these problems. To this end, we include actual Matlab code for one of the problems studied, namely, American option pricing.</p>
<h3>Suggested Citation</h3>
<p>Kumar Muthuraman and Sunil Kumar (2008), "Solving Free-boundary Problems with Applications in Finance", Foundations and Trends® in Stochastic Systems: Vol. 1: No. 4, pp 259-341. http://dx.doi.org/10.1561/0900000006</p>
<p>Published: Mon, 25 Aug 2008</p>

<h2>Long Range Dependence</h2>
<p>http://nowpublishers.com/article/Details/STO-004</p>
<h3>Abstract</h3>The notion of long range dependence is discussed from a variety of points of view, and a new approach is suggested.
A number of related topics are also discussed, including connections with non-stationary processes and with ergodic theory, self-similar processes, fractionally differenced processes, heavy tails and light tails, and limit theorems and large deviations.
<h3>Suggested Citation</h3>
<p>Gennady Samorodnitsky (2007), "Long Range Dependence", Foundations and Trends® in Stochastic Systems: Vol. 1: No. 3, pp 163-257. http://dx.doi.org/10.1561/0900000004</p>
<p>Published: Fri, 28 Dec 2007</p>

<h2>Controlled Markov Chains, Graphs, and Hamiltonicity</h2>
<p>http://nowpublishers.com/article/Details/STO-003</p>
<h3>Abstract</h3>This manuscript summarizes a line of research that maps certain classical problems of discrete mathematics, such as the Hamiltonian Cycle and Traveling Salesman Problems, into convex domains where continuum analysis can be carried out. Arguably, the inherent difficulty of these now-classical problems stems precisely from the discrete nature of the domains in which they are posed. The convexification of domains underpinning the reported results is achieved by assigning a probabilistic interpretation to key elements of the original deterministic problems.<p>In particular, the approaches summarized here build on a technique that embeds the Hamiltonian Cycle and Traveling Salesman Problems in a structured, singularly perturbed Markov Decision Process. The unifying idea is to interpret subgraphs traced out by deterministic policies (including Hamiltonian Cycles, if any) as extreme points of a convex polyhedron in a space filled with randomized policies.</p><p>The topic has now evolved to the point where there are many results, both theoretical and algorithmic, that exploit the nexus between graph-theoretic structures and both probabilistic and algebraic entities of the related Markov chains. The latter include moments of first return times, limiting frequencies of visits to nodes, and the spectra of certain matrices traditionally associated with the analysis of Markov chains.
Numerous open questions and problems are described in the presentation.</p>
<h3>Suggested Citation</h3>
<p>Jerzy A. Filar (2007), "Controlled Markov Chains, Graphs, and Hamiltonicity", Foundations and Trends® in Stochastic Systems: Vol. 1: No. 2, pp 77-162. http://dx.doi.org/10.1561/0900000003</p>
<p>Published: Thu, 20 Dec 2007</p>

<h2>Monotonicity in Markov Reward and Decision Chains: Theory and Applications</h2>
<p>http://nowpublishers.com/article/Details/STO-002</p>
<h3>Abstract</h3>
<p>This paper focuses on monotonicity results for dynamic systems that take values in the natural numbers or in multidimensional lattices. The results are mostly formulated in terms of controlled queueing systems, but there are also applications to maintenance systems, revenue management, and other areas. We concentrate on results that are obtained by inductively proving properties of the dynamic programming value function. We give a framework for using this method that unifies results obtained for different models. We also give a comprehensive overview of the results that can be obtained through it, discussing not only (partial) characterizations of optimal policies but also applications of monotonicity to optimization problems and to the comparison of systems.</p>
<h3>Suggested Citation</h3>
<p>Ger Koole (2007), "Monotonicity in Markov Reward and Decision Chains: Theory and Applications", Foundations and Trends® in Stochastic Systems: Vol. 1: No. 1, pp 1-76. http://dx.doi.org/10.1561/0900000002</p>
<p>Published: Thu, 07 Jun 2007</p>