This journal, covering topics in mathematical statistics, split into the Annals of Probability and the Annals of Statistics in 1973.
Two of the most exciting topics of current research in stochastic networks are the complementary subjects of stability and rare events. Roughly, the former deals with the typical behavior of networks, and the latter with significant atypical behavior. Both are classical topics, of interest since the early days of queueing theory, that have experienced renewed interest motivated by new applications to emerging technologies. For example, new stability issues arise in the scheduling of multiple job classes in semiconductor manufacturing, the so-called "re-entrant lines"; and a prominent need for studying rare events is associated with the design of telecommunication systems using the new ATM...
The pioneering research of Hirotugu Akaike has profoundly affected how data and time series are analyzed and modelled, and it is highly regarded by the statistical and technological communities of Japan and the world. His 1974 paper "A new look at the statistical model identification" (IEEE Trans Automatic Control, AC-19, 716-723) is one of the most frequently cited papers in engineering, technology, and the applied sciences (according to a 1981 Citation Classic of the Institute for Scientific Information). It introduced the broad scientific community to model identification using Akaike's information criterion, AIC. The AIC method is cited and applied i...
Taken literally, the title "All of Statistics" is an exaggeration. But in spirit, the title is apt, as the book does cover a much broader range of topics than a typical introductory book on mathematical statistics. This book is for people who want to learn probability and statistics quickly. It is suitable for graduate or advanced undergraduate students in computer science, mathematics, statistics, and related disciplines. The book includes modern topics like non-parametric curve estimation, bootstrapping, and classification, topics that are usually relegated to follow-up courses. The reader is presumed to know calculus and a little linear algebra. No previous knowledge of probability and statistics is required. Statistics, data mining, and machine learning are all concerned with collecting and analysing data.
The purpose of this book is to give a systematic account of the mathematical theory in this field as applied to problems of hypothesis testing. The resulting methods are predominantly sequential probability ratio tests. A brief account of classical tests under the Neyman-Pearson formulation is included. The notion of sequential analysis has been used in some form or other in various branches of statistical inference. We may classify these branches into three broad categories: statistical tests of hypotheses, statistical estimation, and statistical decision theory.
This textbook provides a coherent introduction to the main concepts and methods of one-parameter statistical inference. Intended for students of Mathematics taking their first course in Statistics, the focus is on Statistics for Mathematicians rather than on Mathematical Statistics. The goal is not to focus on the mathematical/theoretical aspects of the subject, but rather to provide an introduction to the subject tailored to the mindset and tastes of Mathematics students, who are sometimes turned off by the informal nature of Statistics courses. This book can be used as the basis for an elementary semester-long first course on Statistics with a firm sense of direction that does not sacrifice rigor. The deeper goal of the text is to attract the attention of promising Mathematics students.
Classical statistical theory—hypothesis testing, estimation, and the design of experiments and sample surveys—is mainly the creation of two men: Ronald A. Fisher (1890-1962) and Jerzy Neyman (1894-1981). Their contributions sometimes complemented each other, sometimes occurred in parallel, and, particularly at later stages, were often in strong opposition. The two men would not be pleased to see their names linked in this way, since throughout most of their working lives they detested each other. Nevertheless, they worked on the same problems, and through their combined efforts created a new discipline. This book by E.L. Lehmann, himself a student of Neyman's, explores the relationship between Neyman and Fisher, their interactions with other influential statisticians, and the statistical history they helped create together. Drawing on direct correspondence and original papers, Lehmann recreates a historical account of the creation of the Neyman-Pearson theory, Fisher's dissent from it, and other important statistical developments.