A Guide To Econometrics


Peter E. Kennedy - 1979
    This overview has helped students more easily make sense of what instructors are doing when they present proofs, theorems, and formulas.

Probability, Random Variables and Stochastic Processes with Errata Sheet


Athanasios Papoulis - 2001
    This edition adds co-author S. Unnikrishna Pillai of Polytechnic University. The book is intended for a senior/graduate-level course in probability and is aimed at students in electrical engineering, math, and physics departments. The authors' approach is to develop the subject of probability theory and stochastic processes as a deductive discipline and to illustrate the theory with basic applications of engineering interest. Approximately one third of the text is new material, which maintains the style and spirit of previous editions. To bridge the gap between concepts and applications, a number of additional examples and several new topics have been added for further clarity.

Information: The New Language of Science


Hans Christian Von Baeyer - 2003
    In this indispensable volume, a primer for the information age, Hans Christian von Baeyer presents a clear description of what information is, how concepts of its measurement, meaning, and transmission evolved, and what its ever-expanding presence portends for the future. Information is poised to replace matter as the primary stuff of the universe, von Baeyer suggests; it will provide a new basic framework for describing and predicting reality in the twenty-first century. Despite its revolutionary premise, von Baeyer's book is written in a simple, straightforward fashion, offering a wonderfully accessible introduction to classical and quantum information. Enlivened with anecdotes from the lives of philosophers, mathematicians, and scientists who have contributed significantly to the field, Information conducts readers from questions of subjectivity inherent in classical information to the blurring of distinctions between computers and what they measure or store in our quantum age. A great advance in our efforts to define and describe the nature of information, the book also marks an important step forward in our ability to exploit information--and, ultimately, to transform the nature of our relationship with the physical universe.

I Think, Therefore I Laugh: The Flip Side of Philosophy


John Allen Paulos - 1985
    Paulos uses jokes, stories, parables, and anecdotes to elucidate difficult concepts, in this case, some of the fundamental problems in modern philosophy.

Models.Behaving.Badly.: Why Confusing Illusion with Reality Can Lead to Disaster, on Wall Street and in Life


Emanuel Derman - 2011
    The reliance traders put on such quantitative analysis was catastrophic for the economy, setting off the series of financial crises that began to erupt in 2007 with the mortgage crisis and from which we're still recovering. Here Derman looks at why people--bankers in particular--still put so much faith in these models, and why it's a terrible mistake to do so. Though financial models imitate the style of physics by using the language of mathematics, ultimately they deal with human beings. This similarity obscures the fundamental difference between the aims and possible achievements of the physics world and those of the financial world. When we make a model involving human beings, we are trying to force the ugly stepsister's foot into Cinderella's pretty glass slipper. It doesn't fit without cutting off some of the essential parts. Physicists and economists alike have been too quick to overlook the limits of their equations in the sphere of human behavior--which, of course, is what economics is all about. Models.Behaving.Badly. includes a personal account of Derman's childhood encounter with failed models--the utopia of the kibbutz--his experience as a physicist on Wall Street, and a look at the models quants generated: the benefits they brought and the problems they caused. Derman takes a close look at what a model is, and then he highlights the differences between the success of modeling in physics and its relative failure in economics. Describing the collapse of the subprime mortgage CDO market in 2007, Derman urges us to stop relying on these models where possible, and offers suggestions for mending them where they might still do some good. This is a fascinating, lyrical, and very human look behind the curtain at the intersection of mathematics and human nature.

The Elements of Statistical Learning: Data Mining, Inference, and Prediction


Trevor Hastie - 2001
    Vast amounts of data have become available in a variety of fields such as medicine, biology, finance, and marketing. The challenge of understanding these data has led to the development of new tools in the field of statistics and spawned new areas such as data mining, machine learning, and bioinformatics. Many of these tools have common underpinnings but are often expressed with different terminology. This book describes the important ideas in these areas in a common conceptual framework. While the approach is statistical, the emphasis is on concepts rather than mathematics. Many examples are given, with a liberal use of color graphics. It should be a valuable resource for statisticians and anyone interested in data mining in science or industry. The book's coverage is broad, from supervised learning (prediction) to unsupervised learning. The many topics include neural networks, support vector machines, classification trees, and boosting--the first comprehensive treatment of this topic in any book. Trevor Hastie, Robert Tibshirani, and Jerome Friedman are professors of statistics at Stanford University. They are prominent researchers in this area: Hastie and Tibshirani developed generalized additive models and wrote a popular book of that title. Hastie wrote much of the statistical modeling software in S-PLUS and invented principal curves and surfaces. Tibshirani proposed the Lasso and is co-author of the very successful An Introduction to the Bootstrap. Friedman is the co-inventor of many data-mining tools, including CART, MARS, and projection pursuit.

Statistical Rethinking: A Bayesian Course with Examples in R and Stan


Richard McElreath - 2015
    Reflecting the need for even minor programming in today's model-based statistics, the book pushes readers to perform step-by-step calculations that are usually automated. This unique computational approach ensures that readers understand enough of the details to make reasonable choices and interpretations in their own modeling work. The text presents generalized linear multilevel models from a Bayesian perspective, relying on a simple logical interpretation of Bayesian probability and maximum entropy. It covers everything from the basics of regression to multilevel models. The author also discusses measurement error, missing data, and Gaussian process models for spatial and network autocorrelation. By using complete R code examples throughout, this book provides a practical foundation for performing statistical inference. Designed for both PhD students and seasoned professionals in the natural and social sciences, it prepares them for more advanced or specialized statistical modeling. The book is accompanied by a web resource: an R package (rethinking), available on the author's website and GitHub, whose two core functions (map and map2stan) allow a variety of statistical models to be constructed from standard model formulas.
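    To make the book's step-by-step spirit concrete: the book itself works in R with the rethinking package's map and map2stan functions, but the same idea can be sketched in Python as a minimal Bayesian grid approximation for a binomial proportion. The data values, flat prior, and grid size below are arbitrary choices for illustration, not examples taken from the book.

```python
import numpy as np

# Grid approximation of the posterior for a binomial proportion p,
# carried out step by step rather than with an automated sampler.
k, n = 6, 9                      # illustrative data: k successes in n trials
grid = np.linspace(0, 1, 1000)   # candidate values of p
prior = np.ones_like(grid)       # flat prior over p

# Unnormalized posterior = likelihood * prior, evaluated on the grid
likelihood = grid**k * (1 - grid)**(n - k)
unnorm = likelihood * prior
posterior = unnorm / unnorm.sum()

# Summaries computed directly from the grid
mean_p = np.sum(grid * posterior)
mode_p = grid[np.argmax(posterior)]
print(f"posterior mean ~= {mean_p:.3f}, posterior mode ~= {mode_p:.3f}")
```

    A sampler would automate all of this; laying it out on a grid shows exactly where the prior, the likelihood, and the normalization enter.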

Naked Statistics: Stripping the Dread from the Data


Charles Wheelan - 2012
    How can we catch schools that cheat on standardized tests? How does Netflix know which movies you’ll like? What is causing the rising incidence of autism? As best-selling author Charles Wheelan shows us in Naked Statistics, the right data and a few well-chosen statistical tools can help us answer these questions and more. For those who slept through Stats 101, this book is a lifesaver. Wheelan strips away the arcane and technical details and focuses on the underlying intuition that drives statistical analysis. He clarifies key concepts such as inference, correlation, and regression analysis, reveals how biased or careless parties can manipulate or misrepresent data, and shows us how brilliant and creative researchers are exploiting the valuable data from natural experiments to tackle thorny questions. And in Wheelan’s trademark style, there’s not a dull page in sight. You’ll encounter clever Schlitz Beer marketers leveraging basic probability, an International Sausage Festival illuminating the tenets of the central limit theorem, and a head-scratching choice from the famous game show Let’s Make a Deal—and you’ll come away with insights each time. With the wit, accessibility, and sheer fun that turned Naked Economics into a bestseller, Wheelan defies the odds yet again by bringing another essential, formerly unglamorous discipline to life.

How to Lie with Statistics


Darrell Huff - 1954
    Darrell Huff runs the gamut of every popularly used type of statistic, probes such things as the sample study, the tabulation method, the interview technique, and the way results are derived from the figures, and points out the countless dodges used to fool rather than to inform.

Statistics in Plain English


Timothy C. Urdan - 2001
    Each self-contained chapter consists of three sections. The first describes the statistic, including how it is used and what information it provides. The second section reviews how it works, how to calculate the formula, the strengths and weaknesses of the technique, and the conditions needed for its use. The final section provides examples that use and interpret the statistic. A glossary of terms and symbols is also included. New features in the second edition include: an interactive CD with PowerPoint presentations and problems for each chapter, including an overview of each problem's solution; new chapters on basic research concepts, covering sampling, definitions of different types of variables, and basic research designs, plus a chapter on nonparametric statistics; more graphs and more precise descriptions of each statistic; and a discussion of confidence intervals. This brief paperback is an ideal supplement for statistics and research methods courses, for courses that use statistics, or as a reference tool to refresh one's memory about key concepts. The actual research examples are drawn from psychology, education, and other social and behavioral sciences. Materials formerly available with this book on CD-ROM are now available for download from www.psypress.com: go to the book's page and look for the 'Download' link in the right-hand column.

An Introduction to Probability Theory and Its Applications, Volume 1


William Feller - 1968
    Beginning with the background and very nature of probability theory, the book then proceeds through sample spaces, combinatorial analysis, fluctuations in coin tossing and random walks, the combination of events, types of distributions, Markov chains, stochastic processes, and more. The book's comprehensive approach provides a complete view of the theory, with enlightening examples along the way.

Why Stock Markets Crash: Critical Events in Complex Financial Systems


Didier Sornette - 2002
    In this book, Didier Sornette boldly applies his varied experience in these areas to propose a simple, powerful, and general theory of how, why, and when stock markets crash. Most attempts to explain market failures seek to pinpoint triggering mechanisms that occur hours, days, or weeks before the collapse. Sornette proposes a radically different view: the underlying cause can be sought months and even years before the abrupt, catastrophic event, in the build-up of cooperative speculation, which often translates into an accelerating rise of the market price, otherwise known as a "bubble." Anchoring his sophisticated, step-by-step analysis in leading-edge physical and statistical modeling techniques, he unearths remarkable insights and some predictions--among them, that the "end of the growth era" will occur around 2050. Sornette probes major historical precedents, from the famous "tulip mania" in the Netherlands that wilted suddenly in 1637, to the South Sea Bubble that ended with the first huge market crash in England in 1720, to the Great Crash of October 1929 and Black Monday in 1987, to cite just a few. He concludes that most explanations other than cooperative self-organization fail to account for the subtle bubbles by which the markets lay the groundwork for catastrophe. Any investor or investment professional who seeks a genuine understanding of looming financial disasters should read this book. Physicists, geologists, biologists, economists, and others will welcome "Why Stock Markets Crash" as a highly original "scientific tale," as Sornette aptly puts it, of the exciting and sometimes fearsome--but no longer quite so unfathomable--world of stock markets.

Statistical Consequences of Fat Tails: Real World Preasymptotics, Epistemology, and Applications


Nassim Nicholas Taleb - 2020
    Switching from thin-tailed to fat-tailed distributions requires more than "changing the color of the dress." Traditional asymptotics deal mainly with either n=1 or n=∞, and the real world is in between, under the "laws of the medium numbers"--which vary widely across specific distributions. Both the law of large numbers and the generalized central limit mechanisms operate in highly idiosyncratic ways outside the standard Gaussian or Lévy-stable basins of convergence. A few examples:
    - The sample mean is rarely in line with the population mean, with an effect on "naïve empiricism," but it can sometimes be estimated via parametric methods.
    - The "empirical distribution" is rarely empirical.
    - Parameter uncertainty has compounding effects on statistical metrics.
    - Dimension reduction (principal components) fails.
    - Inequality estimators (Gini or quantile contributions) are not additive and produce wrong results.
    - Many "biases" found in psychology become entirely rational under more sophisticated probability distributions.
    - Most of the failures of financial economics, econometrics, and behavioral economics can be attributed to using the wrong distributions.
    This book, the first volume of the Technical Incerto, weaves a narrative around published journal articles.
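    As a rough illustration of the first point above (a simulation of my own, not code or data from the book), the sketch below compares how far the sample mean strays from the population mean for a Gaussian versus a fat-tailed Pareto distribution; the tail exponent of 1.2 is an arbitrary choice that gives a finite mean but infinite variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 1000, 200
alpha = 1.2  # Pareto tail exponent: finite mean, infinite variance (illustrative choice)

# Population means of the two distributions
gauss_mean = 0.0
pareto_mean = alpha / (alpha - 1)  # classical Pareto with minimum value 1

# Sample means over many repeated samples of size n
gauss_means = rng.normal(gauss_mean, 1.0, size=(trials, n)).mean(axis=1)
pareto_means = (rng.pareto(alpha, size=(trials, n)) + 1.0).mean(axis=1)

# How far the sample mean typically sits from the population mean
print("Gaussian: mean abs error of sample mean:", np.mean(np.abs(gauss_means - gauss_mean)))
print("Pareto:   mean abs error of sample mean:", np.mean(np.abs(pareto_means - pareto_mean)))
```

    With these settings the Pareto sample means scatter far more widely around their true value than the Gaussian ones, even though both distributions have a well-defined population mean.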

Bayesian Data Analysis


Andrew Gelman - 1995
    Its world-class authors provide guidance on all aspects of Bayesian data analysis and include examples of real statistical analyses, based on their own research, that demonstrate how to solve complicated problems. Changes in the new edition include: a stronger focus on MCMC; revision of the computational advice in Part III; new chapters on nonlinear models and decision analysis; several additional applied examples from the authors' recent research; additional chapters on current models for Bayesian data analysis, such as nonlinear models, generalized linear mixed models, and more; and a reorganization of chapters 6 and 7 on model checking and data collection. Bayesian computation is currently at a stage where there are many reasonable ways to compute any given posterior distribution. However, the best approach is not always clear ahead of time. Reflecting this, the new edition offers a more pluralistic presentation, giving advice on performing computations from many perspectives while making clear the importance of being aware that there are different ways to implement any given iterative simulation computation. The new approach, additional examples, and updated information make Bayesian Data Analysis an excellent introductory text and a reference that working scientists will use throughout their professional lives.
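    To make the phrase "iterative simulation computation" concrete, here is a minimal, hedged sketch in Python of one such method, a random-walk Metropolis sampler for the mean of a normal model with known standard deviation; the synthetic data, flat prior, and proposal scale are illustrative assumptions, not material from the book.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.0, size=50)   # synthetic data: true mean 2.0, known sd 1.0

def log_post(mu):
    # Flat prior on mu, Gaussian likelihood with known sd = 1
    return -0.5 * np.sum((data - mu) ** 2)

# Random-walk Metropolis: one of many reasonable ways to simulate a posterior
mu, samples = 0.0, []
for _ in range(5000):
    prop = mu + rng.normal(0.0, 0.3)            # propose a nearby value
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop                               # accept with Metropolis probability
    samples.append(mu)

samples = np.array(samples[1000:])              # drop burn-in draws
print(f"posterior mean ~= {samples.mean():.3f}, sd ~= {samples.std():.3f}")
```

    Other schemes, such as Gibbs sampling or Hamiltonian Monte Carlo, would target the same posterior, which is the kind of pluralism the new edition emphasizes.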

Standard Deviations: Flawed Assumptions, Tortured Data, and Other Ways to Lie with Statistics


Gary Smith - 2014
    In Standard Deviations, economics professor Gary Smith walks us through the various tricks and traps that people use to back up their own crackpot theories. Sometimes, the unscrupulous deliberately try to mislead us. Other times, the well-intentioned are blissfully unaware of the mischief they are committing. Today, data is so plentiful that researchers spend precious little time distinguishing between good, meaningful indicators and total rubbish. Not only do others use data to fool us, we fool ourselves. With the breakout success of Nate Silver’s The Signal and the Noise, the once humdrum subject of statistics has never been hotter. Drawing on breakthrough research in behavioral economics by luminaries like Daniel Kahneman and Dan Ariely, and taking to task some of the conclusions of Freakonomics author Steven D. Levitt, Standard Deviations demystifies the science behind statistics and makes it easy to spot the fraud all around.