Book picks similar to Uncertainty: The Soul of Modeling, Probability & Statistics by William Briggs
Tags: philosophy, math, statistics, science
The Information: A History, a Theory, a Flood
James Gleick - 2011
The story of information begins in a time profoundly unlike our own, when every thought and utterance vanishes as soon as it is born. From the invention of scripts and alphabets to the long-misunderstood talking drums of Africa, Gleick tells the story of information technologies that changed the very nature of human consciousness. He provides portraits of the key figures contributing to the inexorable development of our modern understanding of information: Charles Babbage, the idiosyncratic inventor of the first great mechanical computer; Ada Byron, the brilliant and doomed daughter of the poet, who became the first true programmer; pivotal figures like Samuel Morse and Alan Turing; and Claude Shannon, the creator of information theory itself. And then the information age arrives. Citizens of this world become experts willy-nilly: aficionados of bits and bytes. And we sometimes feel we are drowning, swept by a deluge of signs and signals, news and images, blogs and tweets. The Information is the story of how we got here and where we are heading.
The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives
Stephen Thomas Ziliak - 2008
“If it takes a book to get it across, I hope this book will do it. It ought to.”—Thomas Schelling, Distinguished University Professor, School of Public Policy, University of Maryland, and 2005 Nobel Prize Laureate in Economics

“With humor, insight, piercing logic and a nod to history, Ziliak and McCloskey show how economists—and other scientists—suffer from a mass delusion about statistical analysis. The quest for statistical significance that pervades science today is a deeply flawed substitute for thoughtful analysis. . . . Yet few participants in the scientific bureaucracy have been willing to admit what Ziliak and McCloskey make clear: the emperor has no clothes.”—Kenneth Rothman, Professor of Epidemiology, Boston University School of Public Health

The Cult of Statistical Significance shows, field by field, how “statistical significance,” a technique that dominates many sciences, has been a huge mistake. The authors find that researchers in a broad spectrum of fields, from agronomy to zoology, employ “testing” that doesn’t test and “estimating” that doesn’t estimate. The facts will startle the outside reader: how could a group of brilliant scientists wander so far from scientific magnitudes? This study will encourage scientists who want to know how to get the statistical sciences back on track and fulfill their quantitative promise. The book shows for the first time how wide the disaster is, and how bad for science, and it traces the problem to its historical, sociological, and philosophical roots.

Stephen T. Ziliak is the author or editor of many articles and two books. He currently lives in Chicago, where he is Professor of Economics at Roosevelt University. Deirdre N. McCloskey, Distinguished Professor of Economics, History, English, and Communication at the University of Illinois at Chicago, is the author of twenty books and three hundred scholarly articles. She has held Guggenheim and National Humanities Fellowships. She is best known for How to Be Human* Though an Economist (University of Michigan Press, 2000) and her most recent book, The Bourgeois Virtues: Ethics for an Age of Commerce (2006).
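The book's core distinction, statistical significance versus practical importance, is easy to see in a simulation: with a large enough sample, an effect far too small to matter still clears the conventional p < 0.05 bar. A minimal Python sketch with made-up numbers (not an example from the book), assuming NumPy and SciPy are available:

```python
# With n = 1,000,000 per group, a trivial +0.1 shift on a scale with
# standard deviation 15 is "statistically significant" yet practically nil.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 1_000_000
control = rng.normal(loc=100.0, scale=15.0, size=n)
treated = rng.normal(loc=100.1, scale=15.0, size=n)  # hypothetical tiny effect

t_stat, p_value = stats.ttest_ind(treated, control)
cohens_d = (treated.mean() - control.mean()) / 15.0  # effect size in SD units

print(f"p-value:   {p_value:.2e}")   # tiny: "significant" by the 0.05 cutoff
print(f"Cohen's d: {cohens_d:.4f}")  # ~0.007: far below any practical threshold
```

The test "tests" and duly finds significance, but the magnitude, which is what Ziliak and McCloskey insist we look at, is negligible.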
Tell Me The Odds: A 15 Page Introduction To Bayes Theorem
Scott Hartshorn - 2017
Essentially, you make an initial guess, and then get more data to improve it. Bayes Theorem, or Bayes Rule, has a ton of real world applications, from estimating your risk of a heart attack to making recommendations on Netflix. But it isn't that complicated: this book is a short introduction to Bayes Theorem. It is only 15 pages long and is intended to show you how Bayes Theorem works as quickly as possible. The examples are intentionally kept simple to focus solely on Bayes Theorem without requiring that the reader know complicated probability distributions. If you want to learn the basics of Bayes Theorem as quickly as possible, with some easy-to-duplicate examples, this is a good book for you.
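To make the "initial guess, then update" idea concrete, here is a minimal Python sketch of Bayes' theorem. The base rate, sensitivity, and false-positive rate are illustrative numbers, not figures from the book:

```python
# Bayes' theorem: update an initial guess (the prior) with new evidence.
# All numbers below are hypothetical, for illustration only.
def bayes_update(prior, sensitivity, false_positive_rate):
    """P(condition | positive test) via Bayes' theorem."""
    evidence = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / evidence

prior = 0.01  # initial guess: a 1% base rate
posterior = bayes_update(prior, sensitivity=0.90, false_positive_rate=0.08)
print(f"After one positive test:  {posterior:.1%}")   # ~10.2%

# "Get more data to improve it": a second, independent positive test
posterior2 = bayes_update(posterior, sensitivity=0.90, false_positive_rate=0.08)
print(f"After two positive tests: {posterior2:.1%}")  # ~56.1%
```

Each new positive result moves the estimate further from the initial guess, which is exactly the updating loop the blurb describes.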
Principles of Statistics
M.G. Bulmer - 1979
There are many elementary textbooks which describe current statistical methods while neglecting the related theory, and there are equally many advanced textbooks which delve into the far reaches of statistical theory while bypassing practical applications. But between these two approaches is an unfilled gap, in which theory and practice merge at an intermediate level. Professor M. G. Bulmer's Principles of Statistics, originally published in 1965, was created to fill that need. The new, corrected Dover edition of Principles of Statistics makes this invaluable mid-level text available once again for the classroom or for self-study.

Principles of Statistics was created primarily for the student of natural sciences, the social scientist, the undergraduate mathematics student, or anyone familiar with the basics of mathematical language. It assumes no previous knowledge of statistics or probability; nor is extensive mathematical knowledge necessary beyond a familiarity with the fundamentals of differential and integral calculus. (The calculus is used primarily for ease of notation; skill in the techniques of integration is not necessary in order to understand the text.)

Professor Bulmer devotes the first chapters to a concise, admirably clear description of basic terminology and fundamental statistical theory: abstract concepts of probability and their applications in dice games, Mendelian heredity, etc.; definitions and examples of discrete and continuous random variables; multivariate distributions and the descriptive tools used to delineate them; expected values; etc. The book then moves quickly to more advanced levels, as Professor Bulmer describes important distributions (binomial, Poisson, exponential, normal, etc.), tests of significance, statistical inference, point estimation, regression, and correlation. Dozens of exercises and problems appear at the end of various chapters, with answers provided at the back of the book. Also included are a number of statistical tables and selected references.
The Improbability Principle: Why Coincidences, Miracles, and Rare Events Happen Every Day
David J. Hand - 2014
Hand argues that extraordinarily rare events are anything but. In fact, they’re commonplace. Not only that, we should all expect to experience a miracle roughly once every month. But Hand is no believer in superstitions, prophecies, or the paranormal. His definition of “miracle” is thoroughly rational. No mystical or supernatural explanation is necessary to understand why someone is lucky enough to win the lottery twice, or is destined to be hit by lightning three times and still survive. All we need, Hand argues, is a firm grounding in a powerful set of laws: the laws of inevitability, of truly large numbers, of selection, of the probability lever, and of near enough. Together, these constitute Hand’s groundbreaking Improbability Principle. And together, they explain why we should not be so surprised to bump into a friend in a foreign country, or to come across the same unfamiliar word four times in one day. Hand wrestles with seemingly less explicable questions as well: what the Bible and Shakespeare have in common, why financial crashes are par for the course, and why lightning does strike the same place (and the same person) twice. Along the way, he teaches us how to use the Improbability Principle in our own lives—including how to cash in at a casino and how to recognize when a medicine is truly effective. An irresistible adventure into the laws behind “chance” moments and a trusty guide for understanding the world and universe we live in, The Improbability Principle will transform how you think about serendipity and luck, whether it’s in the world of business and finance or you’re merely sitting in your backyard, tossing a ball into the air and wondering where it will land.
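The "law of truly large numbers" rests on simple arithmetic: if a single trial succeeds with probability p, the chance of at least one success in n independent trials is 1 - (1 - p)^n. A quick Python sketch with illustrative numbers (not Hand's own):

```python
# A one-in-a-million event per trial becomes near-certain given enough trials.
p = 1e-6  # hypothetical chance of the "miracle" on any single occasion
for n in (1_000_000, 10_000_000, 100_000_000):
    at_least_once = 1 - (1 - p) ** n
    print(f"{n:>11,} trials -> P(at least one) = {at_least_once:.3f}")
# 1,000,000 trials give ~0.632; by 10,000,000 the "miracle" is all but certain.
```

This is why, with millions of lottery players and billions of daily encounters, somebody's double win or chance meeting is practically guaranteed.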
Introductory Statistics
Neil A. Weiss - 1987
This book develops statistical thinking over rote drill and practice. Chapters cover: The Nature of Statistics; Organizing Data; Descriptive Measures; Probability Concepts; Discrete Random Variables; The Normal Distribution; The Sampling Distribution of the Sample Mean; Confidence Intervals for One Population Mean; Hypothesis Tests for One Population Mean; Inferences for Two Population Means; Inferences for Population Standard Deviations; Inferences for Population Proportions; Chi-Square Procedures; Descriptive Methods in Regression and Correlation; Inferential Methods in Regression and Correlation; Analysis of Variance (ANOVA)
For all readers interested in Introductory Statistics.
Combinatorial Optimization: Algorithms and Complexity
Christos H. Papadimitriou - 1998
This clearly written, mathematically rigorous text covers the algorithmic theory of linear and combinatorial optimization, from the simplex method and efficient algorithms for network flow and matching to NP-completeness, approximation algorithms, and local search heuristics. All chapters are supplemented by thought-provoking problems. A useful work for graduate-level students with backgrounds in computer science, operations research, and electrical engineering. "Mathematicians wishing a self-contained introduction need look no further." — American Mathematical Monthly.
The Monty Hall Problem: The Remarkable Story of Math's Most Contentious Brain Teaser
Jason Rosenhouse - 2009
Imagine that you face three doors, behind one of which is a prize. You choose one but do not open it. The host--call him Monty Hall--opens a different door, always choosing one he knows to be empty. Left with two doors, will you do better by sticking with your first choice, or by switching to the other remaining door? In this light-hearted yet ultimately serious book, Jason Rosenhouse explores the history of this fascinating puzzle. Using a minimum of mathematics (and none at all for much of the book), he shows how the problem has fascinated philosophers, psychologists, and many others, and examines the many variations that have appeared over the years. As Rosenhouse demonstrates, the Monty Hall Problem illuminates fundamental mathematical issues and has abiding philosophical implications. Perhaps most important, he writes, the problem opens a window on our cognitive difficulties in reasoning about uncertainty.
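Readers who want to check the answer before opening the book can settle it empirically. A short Monte Carlo sketch in Python (the book itself uses no code): switching wins whenever the first pick was wrong, which happens two times in three.

```python
# Simulate the Monty Hall game: Monty always opens an empty, unchosen door,
# so switching wins exactly when the first pick missed the prize.
import random

def win_rate(switch: bool, trials: int = 100_000) -> float:
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)
        pick = random.randrange(3)
        wins += (pick != prize) if switch else (pick == prize)
    return wins / trials

print(f"stick:  {win_rate(switch=False):.3f}")  # ~0.333
print(f"switch: {win_rate(switch=True):.3f}")   # ~0.667
```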
Linear Algebra and Its Applications [with CD-ROM]
David C. Lay - 1993
Elementary Statistics: A Step by Step Approach
Allan G. Bluman - 1992
The book is non-theoretical, explaining concepts intuitively and teaching problem solving through worked examples and step-by-step instructions. This edition places more emphasis on conceptual understanding and interpreting results. It also features increased emphasis on Excel, MINITAB, and the TI-83 Plus and TI-84 Plus graphing calculators, computing technologies commonly used in such courses.
R for Everyone: Advanced Analytics and Graphics
Jared P. Lander - 2013
R has traditionally been difficult for non-statisticians to learn, and most R books assume far too much knowledge to be of help. R for Everyone is the solution. Drawing on his unsurpassed experience teaching new users, professional data scientist Jared P. Lander has written the perfect tutorial for anyone new to statistical programming and modeling. Organized to make learning easy and intuitive, this guide focuses on the 20 percent of R functionality you'll need to accomplish 80 percent of modern data tasks. Lander's self-contained chapters start with the absolute basics, offering extensive hands-on practice and sample code. You'll download and install R; navigate and use the R environment; master basic program control, data import, and manipulation; and walk through several essential tests. Then, building on this foundation, you'll construct several complete models, both linear and nonlinear, and use some data mining techniques. By the time you're done, you won't just know how to write R programs, you'll be ready to tackle the statistical problems you care about most.

COVERAGE INCLUDES
- Exploring R, RStudio, and R packages
- Using R for math: variable types, vectors, calling functions, and more
- Exploiting data structures, including data.frames, matrices, and lists
- Creating attractive, intuitive statistical graphics
- Writing user-defined functions
- Controlling program flow with if, ifelse, and complex checks
- Improving program efficiency with group manipulations
- Combining and reshaping multiple datasets
- Manipulating strings using R's facilities and regular expressions
- Creating normal, binomial, and Poisson probability distributions
- Programming basic statistics: mean, standard deviation, and t-tests
- Building linear, generalized linear, and nonlinear models
- Assessing the quality of models and variable selection
- Preventing overfitting, using the Elastic Net and Bayesian methods
- Analyzing univariate and multivariate time series data
- Grouping data via K-means and hierarchical clustering
- Preparing reports, slideshows, and web pages with knitr
- Building reusable R packages with devtools and Rcpp
- Getting involved with the R global community
The Art of R Programming: A Tour of Statistical Software Design
Norman Matloff - 2011
No statistical knowledge is required, and your programming skills can range from hobbyist to pro. Along the way, you'll learn about functional and object-oriented programming, running mathematical simulations, and rearranging complex data into simpler, more useful formats. You'll also learn to:
- Create artful graphs to visualize complex data sets and functions
- Write more efficient code using parallel R and vectorization
- Interface R with C/C++ and Python for increased speed or functionality
- Find new R packages for text analysis, image manipulation, and more
- Squash annoying bugs with advanced debugging techniques

Whether you're designing aircraft, forecasting the weather, or just need to tame your data, The Art of R Programming is your guide to harnessing the power of statistical computing.
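The vectorization point is language-agnostic; here is the same idea sketched in NumPy (a Python stand-in, since Matloff's own examples are in R):

```python
# Replacing an element-by-element loop with one vectorized call:
# the same sum of squares, computed two ways.
import time
import numpy as np

x = np.random.default_rng(0).normal(size=1_000_000)

t0 = time.perf_counter()
loop_sum = 0.0
for v in x:            # interpreted loop: one Python-level step per element
    loop_sum += v * v
t1 = time.perf_counter()

vec_sum = float(np.dot(x, x))  # one call into optimized compiled code
t2 = time.perf_counter()

print(f"loop:       {t1 - t0:.3f}s")
print(f"vectorized: {t2 - t1:.5f}s  (results agree: {np.isclose(loop_sum, vec_sum)})")
```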
Even You Can Learn Statistics: A Guide for Everyone Who Has Ever Been Afraid of Statistics
David M. Levine - 2004
Each technique is introduced with a simple, jargon-free explanation, practical examples, and hands-on guidance for solving real problems with Excel or a TI-83/84 series calculator, including Plus models. Hate math? No sweat. You'll be amazed how little you need! For those who do have an interest in mathematics, optional "Equation Blackboard" sections review the equations that provide the foundations for important concepts.

David M. Levine is a much-honored innovator in statistics education. He is Professor Emeritus of Statistics and Computer Information Systems at Bernard M. Baruch College (CUNY) and co-author of several best-selling books, including Statistics for Managers Using Microsoft Excel, Basic Business Statistics, Quality Management, and Six Sigma for Green Belts and Champions. Instructional designer David F. Stephan pioneered the classroom use of personal computers and is a leader in making Excel more accessible to statistics students. He has co-authored several textbooks with David M. Levine.

Here's just some of what you'll learn how to do:
- Use statistics in your everyday work or study
- Perform common statistical tasks using a Texas Instruments statistical calculator or Microsoft Excel
- Build and interpret statistical charts and tables
- "Test Yourself" at the end of each chapter to review the concepts and methods you learned
- Work with mean, median, mode, standard deviation, Z scores, skewness, and other descriptive statistics
- Use probability and probability distributions
- Work with sampling distributions and confidence intervals
- Test hypotheses and assess decision-making risks with Z, t, chi-square, ANOVA, and other techniques
- Perform regression analysis and modeling

The easy, practical introduction to statistics for everyone! Thought you couldn't learn statistics? Think again. You can, and you will!
Against the Gods: The Remarkable Story of Risk
Peter L. Bernstein - 1996
Peter Bernstein has written a comprehensive history of man's efforts to understand risk and probability, beginning with early gamblers in ancient Greece, continuing through the 17th-century French mathematicians Pascal and Fermat and up to modern chaos theory. Along the way he demonstrates that understanding risk underlies everything from game theory to bridge-building to winemaking.
Data Mining: Concepts and Techniques (The Morgan Kaufmann Series in Data Management Systems)
Jiawei Han - 2000
Not only are all of our business, scientific, and government transactions now computerized, but the widespread use of digital cameras, publication tools, and bar codes also generates data. On the collection side, scanned text and image platforms, satellite remote sensing systems, and the World Wide Web have flooded us with a tremendous amount of data. This explosive growth has generated an even more urgent need for new techniques and automated tools that can help us transform this data into useful information and knowledge.

Like the first edition, voted the most popular data mining book by KDnuggets readers, this book explores concepts and techniques for the discovery of patterns hidden in large data sets, focusing on issues relating to their feasibility, usefulness, effectiveness, and scalability. However, since the publication of the first edition, great progress has been made in the development of new data mining methods, systems, and applications. This new edition substantially enhances the first edition, and new chapters have been added to address recent developments in mining complex types of data, including stream data, sequence data, graph structured data, social network data, and multi-relational data.

- A comprehensive, practical look at the concepts and techniques you need to know to get the most out of real business data
- Updates that incorporate input from readers, changes in the field, and more material on statistics and machine learning
- Dozens of algorithms and implementation examples, all in easily understood pseudo-code and suitable for use in real-world, large-scale data mining projects
- Complete classroom support for instructors at the www.mkp.com/datamining2e companion site