New, up-to-date edition of this influential classic on Markov chains in general state spaces. Proofs are rigorous and concise, the range of applications is broad and knowledgeable, and key ideas are accessible to practitioners with limited mathematical background. New commentary by Sean Meyn, including updated references, reflects developments since 1996.
This book gives an account of recent developments in the field of probability and statistics for dependent data. It covers a wide range of topics, from Markov chain theory and weak dependence, with an emphasis on recent developments concerning dynamical systems, to strong dependence in time series and random fields. There is a section on statistical estimation problems and specific applications. The book is written as a succession of papers by field specialists, alternating general surveys, mostly at a level accessible to graduate students in probability and statistics, with more specialized research papers suited mainly to researchers in the field.
Financial econometrics has developed into a very fruitful and vibrant research area over the last two decades. The availability of good data promotes research in this area, aided especially by online data and high-frequency data. These two characteristics of financial data also create challenges for researchers that differ from those of classical macro-econometric and micro-econometric problems. This Special Issue is dedicated to research topics that are relevant for analyzing financial data. We have gathered six articles under this theme.
The state of the art in fluid-based methods for stability analysis, giving researchers and graduate students command of the tools.
This volume presents some of the most influential papers published by Rabi N. Bhattacharya, along with commentaries from international experts, demonstrating his knowledge, insight, and influence in the field of probability and its applications. For more than three decades, Bhattacharya has made significant contributions in areas ranging from theoretical statistics via analytical probability theory, Markov processes, and random dynamics to applied topics in statistics, economics, and geophysics. Selected reprints of Bhattacharya’s papers are divided into three sections: Modes of Approximation, Large Times for Markov Processes, and Stochastic Foundations in Applied Sciences. The accompanyin...
Fourier analysis is a subject that was born in physics but grew up in mathematics. Now it is part of the standard repertoire for mathematicians, physicists and engineers. This diversity of interest is often overlooked, but in this much-loved book, Tom Körner provides a shop window for some of the ideas, techniques and elegant results of Fourier analysis, and for their applications. These range from number theory, numerical analysis, control theory and statistics, to earth science, astronomy and electrical engineering. The prerequisites are few (a reader with knowledge of second- or third-year undergraduate mathematics should have no difficulty following the text), and the style is lively and entertaining. This edition of Körner's 1989 text includes a foreword written by Professor Terence Tao introducing it to a new generation of fans.
Markov Chain Monte Carlo (MCMC) methods are sampling-based techniques, which use random numbers to approximate deterministic but unknown values. They can be used to obtain expected values, estimate parameters, or simply inspect the properties of a non-standard, high-dimensional probability distribution. Bayesian analysis of model parameters provides the mathematical foundation for parameter estimation using such probabilistic sampling. The strengths of these stochastic methods are their robustness and relative simplicity, even for nonlinear problems with dozens of parameters, as well as a built-in uncertainty analysis. Because Bayesian model analysis necessarily involves the notion of prior ...
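To make the idea concrete, here is a minimal sketch of the kind of sampling described above: a random-walk Metropolis sampler estimating a single mean parameter under a Gaussian likelihood and a flat prior. The toy data, the noise level, and the step size are assumptions made purely for illustration; the sketch is not taken from the book.

```python
# Minimal random-walk Metropolis sketch (illustrative only, not from the book).
# Target: posterior of a mean parameter mu given Gaussian data with known sigma,
# under a flat prior -- these modelling choices are assumptions for the sketch.
import math
import random

data = [1.2, 0.8, 1.1, 0.9, 1.4]   # hypothetical observations
sigma = 0.5                         # assumed known noise level

def log_posterior(mu):
    # Flat prior, so the log posterior is the Gaussian log likelihood up to a constant.
    return -sum((x - mu) ** 2 for x in data) / (2 * sigma ** 2)

def metropolis(n_samples=10_000, step=0.3, mu=0.0):
    samples = []
    for _ in range(n_samples):
        proposal = mu + random.gauss(0.0, step)                    # random-walk proposal
        log_alpha = log_posterior(proposal) - log_posterior(mu)
        if log_alpha >= 0 or random.random() < math.exp(log_alpha):  # accept/reject
            mu = proposal
        samples.append(mu)
    return samples

samples = metropolis()
kept = samples[len(samples) // 5:]          # discard a burn-in period
print("posterior mean estimate:", sum(kept) / len(kept))
```

The chain's samples approximate the posterior distribution, so the same output also yields credible intervals, which is the built-in uncertainty analysis mentioned above.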
Alan Baker's systematic account of transcendental number theory, with a new introduction and afterword explaining recent developments.
In recent decades, Reinforcement Learning (RL) has emerged as an effective approach to address complex control tasks. In a Markov Decision Process (MDP), the framework typically used, the environment is assumed to be a fixed entity that cannot be altered externally. There are, however, several real-world scenarios in which the environment can be modified to a limited extent. This book, Exploiting Environment Configurability in Reinforcement Learning, aims to formalize and study diverse aspects of environment configuration. In a traditional MDP, the agent perceives the state of the environment and performs actions. As a consequence, the environment transitions to a new state and generates a r...
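As a point of reference for the standard setting the book starts from, the following sketch spells out the usual MDP interaction loop: the agent observes the state, chooses an action, and the environment transitions to a new state and emits a reward. The two-state toy environment, its transition probabilities, and the random policy are invented here for illustration and are not taken from the book.

```python
# Schematic sketch of the standard MDP interaction loop (illustrative only).
import random

class ToyEnv:
    """Two states (0 and 1); action 1 tries to move to the other state."""
    def __init__(self):
        self.state = 0

    def step(self, action):
        if action == 1 and random.random() < 0.8:   # stochastic transition
            self.state = 1 - self.state
        reward = 1.0 if self.state == 1 else 0.0    # reward for being in state 1
        return self.state, reward

def random_policy(state):
    return random.choice([0, 1])

env = ToyEnv()
state, total_reward = env.state, 0.0
for t in range(100):
    action = random_policy(state)       # agent perceives the state and acts
    state, reward = env.step(action)    # environment transitions and emits a reward
    total_reward += reward
print("return over 100 steps:", total_reward)
```

In the configurable setting the book studies, quantities that are fixed here, such as the 0.8 transition probability, could themselves be modified to a limited extent.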
The first comprehensive guide to distributional reinforcement learning, providing a new mathematical formalism for thinking about decisions from a probabilistic perspective. Distributional reinforcement learning is a new mathematical formalism for thinking about decisions. Going beyond the common approach to reinforcement learning and expected values, it focuses on the total reward or return obtained as a consequence of an agent's choices—specifically, how this return behaves from a probabilistic perspective. In this first comprehensive guide to distributional reinforcement learning, Marc G. Bellemare, Will Dabney, and Mark Rowland, who spearheaded development of the field, present its key...
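To illustrate the distinction drawn above, the sketch below samples many episodes of a toy process under a fixed policy: the standard expected-value view summarizes the outcomes by their mean, while the distributional view keeps the whole histogram of returns. The toy reward process, discount factor, and termination probability are assumptions made for this example only and are not taken from the book.

```python
# Illustrative sketch (not from the book): the return of a fixed policy is a
# random variable, and distributional RL reasons about its full distribution
# rather than only its expectation.
import random
from collections import Counter

def rollout(gamma=0.9):
    """Sample one discounted return from a toy episodic reward process."""
    g, discount, done = 0.0, 1.0, False
    while not done:
        reward = 1.0 if random.random() < 0.5 else 0.0   # coin-flip reward
        g += discount * reward
        discount *= gamma
        done = random.random() < 0.2                      # 20% chance the episode ends
    return g

returns = [rollout() for _ in range(10_000)]
mean_return = sum(returns) / len(returns)
print("expected return (what standard RL tracks):", round(mean_return, 3))

# A coarse histogram of the return distribution (what distributional RL tracks).
hist = Counter(round(g) for g in returns)
for value in sorted(hist):
    print(f"return ~ {value}: {hist[value] / len(returns):.2%}")
```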