This book offers international perspectives that unify the themes of strategic management, decision theory, and data science. It contains thought-provoking case studies backed by adequate analysis, adding significance to the discussions. Most decision-making models in use take due advantage of the collection and processing of relevant data, using appropriate analytics to provide inputs for effective decision-making. The book showcases applications in diverse fields, including banking and insurance, portfolio management, inventory analysis, performance assessment of comparable economic agents, managing utilities in a health-care facility, reducing traffic snarls on highways, and monitoring achievement of sustainable development goals in a country or state, among other areas with clear policy implications. It holds immense value for researchers as well as professionals responsible for organizational decisions.
Sequential analysis refers to the body of statistical theory and methods in which the sample size may depend in a random manner on the accumulating data. A formal theory, in which optimal tests are derived for simple statistical hypotheses in such a framework, was developed by Abraham Wald in the early 1940s.
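As a small illustration of the idea (a sketch added here, not material from the book), the snippet below implements the standard form of Wald's sequential probability ratio test for Bernoulli observations; the function name sprt_bernoulli and the error-rate settings are illustrative choices.

```python
import math
import random

def sprt_bernoulli(stream, p0, p1, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test of H0: p = p0 vs H1: p = p1
    on a stream of Bernoulli observations. Returns the decision and the
    number of observations actually used (the sample size is data dependent)."""
    lower = math.log(beta / (1 - alpha))   # accept-H0 boundary
    upper = math.log((1 - beta) / alpha)   # accept-H1 boundary
    llr, n = 0.0, 0
    for x in stream:
        n += 1
        # add the log-likelihood ratio contribution of one observation
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= upper:
            return "reject H0", n
        if llr <= lower:
            return "accept H0", n
    return "no decision", n

# illustrative run on simulated data whose true success probability is 0.7
random.seed(0)
data = (random.random() < 0.7 for _ in range(10_000))
print(sprt_bernoulli(data, p0=0.5, p1=0.7))
```

The stopping time is itself random, which is precisely the feature that distinguishes sequential tests from fixed-sample-size tests.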
There has been enormous growth in recent years in the literature on discrete optimal designs. Optimality problems have been formulated for various models arising in experimental design, and substantial progress has been made towards solving some of them. The subject has now reached a stage of completeness that calls for a self-contained monograph on the topic. The aim of this monograph is to present the state of the art and to focus on more recent advances in this rapidly developing area. We start with a discussion of statistical optimality criteria in Chapter One. Chapters Two and Three deal with optimal block designs. Row-column designs are dealt with in Chapter Four. In Chapt...
Filling the gap for an up-to-date textbook in this relatively new interdisciplinary research field, this volume provides readers with a thorough and comprehensive introduction. Based on extensive teaching experience, it includes numerous worked examples and highlights in special biographical boxes some of the most outstanding personalities and their contributions to both physics and economics. The whole is rounded off by several appendices containing important background material.
The book presents, in one place, contributions on statistical models and methods applied to both data science and the SDGs. Measuring and monitoring progress towards the SDGs requires data-driven measurements to be distributed to stakeholders, and in this setting the techniques of data science, especially big data analytics, play a more important role than traditional data gathering and manipulation techniques. This book fills this space through its twenty contributions, selected from those presented at the 7th International Conference on Data Science and Sustainable Development Goals organized by the Department of Statistics, University of Rajshahi, Bangladesh, and covering topics mainly on SDGs, bioinformatics, public health, medical informatics, environmental statistics, data science, and machine learning. The contents of the volume will be useful to policymakers, researchers, government entities, civil society, and nonprofit organizations for monitoring and accelerating progress towards the SDGs.
This book presents a unified approach for obtaining the limiting distributions of minimum distance estimators in regression and autoregressive models. It discusses classes of goodness-of-fit tests for fitting an error distribution in some of these models and/or fitting a regression or autoregressive function without assuming knowledge of the error distribution. The main tool is the asymptotic equicontinuity of certain basic weighted residual empirical processes in the uniform and L2 metrics.
This book provides a comprehensive look at statistical inference from record-breaking data in both parametric and nonparametric settings, treating nonparametric function estimation from such data in detail. Its main purpose is to fill the void in the literature on general inference from record values. Statisticians, mathematicians, and engineers will find the book useful as a research reference; it can also serve as part of a graduate-level statistics or mathematics course.
Government policy questions and media planning tasks may be answered with matched data sets. This book covers a wide range of aspects of statistical matching, which in Europe is typically called data fusion. It will be of interest to researchers and practitioners, starting with data collection and the production of public use micro files, data banks, and databases. People working in database marketing, public health analysis, socioeconomic modeling, and official statistics will find it useful.
Copulas are mathematical objects that fully capture the dependence structure among random variables and hence offer great flexibility in building multivariate stochastic models. Since their introduction in the early 1950s, copulas have gained considerable popularity in several fields of applied mathematics, such as finance, insurance, and reliability theory. Today, they represent a well-recognized tool for market and credit models, aggregation of risks, portfolio selection, etc. This book is divided into two main parts: Part I, "Surveys", contains 11 chapters that provide an up-to-date account of essential aspects of copula models; Part II, "Contributions", collects the extended versions of 6 talks selected from papers presented at the workshop in Warsaw.
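As a brief illustration of how copulas separate dependence from marginal behavior (a sketch added here, not drawn from the volume), the snippet below follows the standard recipe of sampling from a Gaussian copula and pushing the uniform marginals through arbitrary inverse CDFs; the correlation value and the particular marginal distributions are hypothetical choices.

```python
import numpy as np
from scipy import stats

def gaussian_copula_sample(rho, n, seed=None):
    """Draw n pairs (u, v) on [0, 1]^2 whose dependence is a Gaussian copula
    with correlation rho; the uniform marginals can then be pushed through
    any inverse CDFs to build a bivariate model with that dependence."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)
    return stats.norm.cdf(z)  # probability-integral transform to uniforms

# couple an exponential marginal with a lognormal marginal (illustrative)
u = gaussian_copula_sample(rho=0.6, n=5, seed=1)
x = stats.expon.ppf(u[:, 0], scale=2.0)   # first marginal: exponential, mean 2
y = stats.lognorm.ppf(u[:, 1], s=0.5)     # second marginal: lognormal
print(np.column_stack([x, y]))
```

Changing the marginals leaves the dependence structure untouched, which is what makes copulas convenient for risk aggregation and portfolio modeling.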
This account of recent works on weakly dependent, long memory and multifractal processes introduces new dependence measures for studying complex stochastic systems and includes other topics such as the dependence structure of max-stable processes.