Book picks similar to Modeling Count Data by Joseph M. Hilbe


Statistics in Plain English


Timothy C. Urdan - 2001
    Each self-contained chapter consists of three sections. The first describes the statistic, including how it is used and what information it provides. The second section reviews how it works, how to calculate the formula, the strengths and weaknesses of the technique, and the conditions needed for its use. The final section provides examples that use and interpret the statistic. A glossary of terms and symbols is also included. New features in the second edition include: an interactive CD with PowerPoint presentations and problems for each chapter, including an overview of each problem's solution; new chapters on basic research concepts (sampling, definitions of different types of variables, and basic research designs) and on nonparametric statistics; more graphs and more precise descriptions of each statistic; and a discussion of confidence intervals. This brief paperback is an ideal supplement for statistics courses, research methods courses, and other courses that use statistics, or a reference tool to refresh one's memory about key concepts. The actual research examples are from psychology, education, and other social and behavioral sciences. Materials formerly available with this book on CD-ROM are now available for download from our website www.psypress.com. Go to the book's page and look for the 'Download' link in the right-hand column.

Hands-On Machine Learning with Scikit-Learn and TensorFlow


Aurélien Géron - 2017
    Now that machine learning is thriving, even programmers who know close to nothing about this technology can use simple, efficient tools to implement programs capable of learning from data. This practical book shows you how. By using concrete examples, minimal theory, and two production-ready Python frameworks—Scikit-Learn and TensorFlow—author Aurélien Géron helps you gain an intuitive understanding of the concepts and tools for building intelligent systems. You’ll learn how to use a range of techniques, starting with simple Linear Regression and progressing to Deep Neural Networks. If you have some programming experience and you’re ready to code a machine learning project, this guide is for you. This hands-on book shows you how to use: Scikit-Learn, an accessible framework that implements many algorithms efficiently and serves as a great machine learning entry point; TensorFlow, a more complex library for distributed numerical computation, ideal for training and running very large neural networks; and practical code examples that you can apply without learning excessive machine learning theory or algorithm details.
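
As a taste of the Scikit-Learn workflow described above, here is a minimal sketch that fits a simple Linear Regression on synthetic data; the data and numbers are made up for illustration and are not taken from the book.

```python
# Minimal, hypothetical Scikit-Learn sketch: fit a Linear Regression on
# synthetic data (not code from the book).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(200, 1))               # one synthetic feature
y = 3.0 * X[:, 0] + 5.0 + rng.normal(0, 1, 200)     # linear signal plus noise

model = LinearRegression()
model.fit(X, y)                                     # estimate slope and intercept
print(model.coef_, model.intercept_)                # roughly [3.0] and 5.0
print(model.predict([[4.0]]))                       # prediction for a new input
```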

Computer Age Statistical Inference: Algorithms, Evidence, and Data Science


Bradley Efron - 2016
    'Big data', 'data science', and 'machine learning' have become familiar terms in the news, as statistical methods are brought to bear upon the enormous data sets of modern science and commerce. How did we get here? And where are we going? This book takes us on an exhilarating journey through the revolution in data analysis following the introduction of electronic computation in the 1950s. Beginning with classical inferential theories - Bayesian, frequentist, Fisherian - individual chapters take up a series of influential topics: survival analysis, logistic regression, empirical Bayes, the jackknife and bootstrap, random forests, neural networks, Markov chain Monte Carlo, inference after model selection, and dozens more. The distinctly modern approach integrates methodology and algorithms with statistical inference. The book ends with speculation on the future direction of statistics and data science.

Bayesian Methods for Hackers: Probabilistic Programming and Bayesian Inference


Cameron Davidson-Pilon - 2014
    Most discussions of Bayesian inference rely on intensely complex mathematical analyses and artificial examples, making it inaccessible to anyone without a strong mathematical background. Now, though, Cameron Davidson-Pilon introduces Bayesian inference from a computational perspective, bridging theory to practice and freeing you to get results using computing power. Bayesian Methods for Hackers illuminates Bayesian inference through probabilistic programming with the powerful PyMC language and the closely related Python tools NumPy, SciPy, and Matplotlib. Using this approach, you can reach effective solutions in small increments, without extensive mathematical intervention. Davidson-Pilon begins by introducing the concepts underlying Bayesian inference, comparing it with other techniques and guiding you through building and training your first Bayesian model. Next, he introduces PyMC through a series of detailed examples and intuitive explanations that have been refined after extensive user feedback. You'll learn how to use the Markov Chain Monte Carlo algorithm, choose appropriate sample sizes and priors, work with loss functions, and apply Bayesian inference in domains ranging from finance to marketing. Once you've mastered these techniques, you'll constantly turn to this guide for the working PyMC code you need to jumpstart future projects. Coverage includes:
- Learning the Bayesian "state of mind" and its practical implications
- Understanding how computers perform Bayesian inference
- Using the PyMC Python library to program Bayesian analyses
- Building and debugging models with PyMC
- Testing your model's "goodness of fit"
- Opening the "black box" of the Markov Chain Monte Carlo algorithm to see how and why it works
- Leveraging the power of the "Law of Large Numbers"
- Mastering key concepts, such as clustering, convergence, autocorrelation, and thinning
- Using loss functions to measure an estimate's weaknesses based on your goals and desired outcomes
- Selecting appropriate priors and understanding how their influence changes with dataset size
- Overcoming the "exploration versus exploitation" dilemma: deciding when "pretty good" is good enough
- Using Bayesian inference to improve A/B testing
- Solving data science problems when only small amounts of data are available
Cameron Davidson-Pilon has worked in many areas of applied mathematics, from the evolutionary dynamics of genes and diseases to stochastic modeling of financial prices. His contributions to the open source community include lifelines, an implementation of survival analysis in Python. Educated at the University of Waterloo and at the Independent University of Moscow, he currently works with the online commerce leader Shopify.
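
The coverage list mentions using Bayesian inference to improve A/B testing. Below is a library-free sketch of that idea using a conjugate Beta-Binomial model and simple Monte Carlo rather than the book's PyMC code; the conversion counts are invented for illustration.

```python
# Library-free Bayesian A/B test sketch (conjugate Beta-Binomial model).
# Hypothetical conversion counts; this is not the book's PyMC code.
import numpy as np

rng = np.random.default_rng(0)

# Made-up data: conversions and visitors for two site variants.
conv_a, n_a = 45, 1000
conv_b, n_b = 60, 1000

# With a Beta(1, 1) prior, each conversion rate has posterior
# Beta(conversions + 1, non-conversions + 1); draw Monte Carlo samples.
samples_a = rng.beta(conv_a + 1, n_a - conv_a + 1, size=100_000)
samples_b = rng.beta(conv_b + 1, n_b - conv_b + 1, size=100_000)

# Posterior probability that variant B truly converts better than A.
print((samples_b > samples_a).mean())
```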

Probability Theory: The Logic of Science


E.T. Jaynes - 1999
    This book discusses new results, along with applications of probability theory to a variety of problems. It contains many exercises and is suitable for use as a textbook on graduate-level courses involving data analysis. Aimed at readers already familiar with applied mathematics at an advanced undergraduate level or higher, it is of interest to scientists concerned with inference from incomplete information.

R for Dummies


Joris Meys - 2012
    R is packed with powerful programming capabilities, but learning to use R in the real world can be overwhelming for even the most seasoned statisticians. This easy-to-follow guide explains how to use R for data processing and statistical analysis, and then shows you how to present your data using compelling and informative graphics. You'll gain practical experience using R in a variety of settings and delve deeper into R's feature-rich toolset. This guide:
- Includes tips for the initial installation of R
- Demonstrates how to easily perform calculations on vectors, arrays, and lists of data
- Shows how to effectively visualize data using R's powerful graphics packages
- Gives pointers on how to find, install, and use add-on packages created by the R community
- Provides tips on getting additional help from R mailing lists and websites
Whether you're just starting out with statistical analysis or are a procedural programming pro, "R For Dummies" is the book you need to get the most out of R.

Superforecasting: The Art and Science of Prediction


Philip E. Tetlock - 2015
    Unfortunately, people tend to be terrible forecasters. As Wharton professor Philip Tetlock showed in a landmark 2005 study, even experts’ predictions are only slightly better than chance. However, an important and underreported conclusion of that study was that some experts do have real foresight, and Tetlock has spent the past decade trying to figure out why. What makes some people so good? And can this talent be taught?   In Superforecasting, Tetlock and coauthor Dan Gardner offer a masterwork on prediction, drawing on decades of research and the results of a massive, government-funded forecasting tournament. The Good Judgment Project involves tens of thousands of ordinary people—including a Brooklyn filmmaker, a retired pipe installer, and a former ballroom dancer—who set out to forecast global events. Some of the volunteers have turned out to be astonishingly good. They’ve beaten other benchmarks, competitors, and prediction markets. They’ve even beaten the collective judgment of intelligence analysts with access to classified information. They are "superforecasters."   In this groundbreaking and accessible book, Tetlock and Gardner show us how we can learn from this elite group. Weaving together stories of forecasting successes (the raid on Osama bin Laden’s compound) and failures (the Bay of Pigs) and interviews with a range of high-level decision makers, from David Petraeus to Robert Rubin, they show that good forecasting doesn’t require powerful computers or arcane methods. It involves gathering evidence from a variety of sources, thinking probabilistically, working in teams, keeping score, and being willing to admit error and change course. Superforecasting offers the first demonstrably effective way to improve our ability to predict the future—whether in business, finance, politics, international affairs, or daily life—and is destined to become a modern classic.

Statistical Consequences of Fat Tails: Real World Preasymptotics, Epistemology, and Applications


Nassim Nicholas Taleb - 2020
    Switching from thin-tailed to fat-tailed distributions requires more than "changing the color of the dress." Traditional asymptotics deal mainly with either n=1 or n=∞, and the real world is in between, under the "laws of the medium numbers," which vary widely across specific distributions. Both the law of large numbers and the generalized central limit mechanisms operate in highly idiosyncratic ways outside the standard Gaussian or Lévy-stable basins of convergence. A few examples:
- The sample mean is rarely in line with the population mean, with effect on "naïve empiricism," but can sometimes be estimated via parametric methods.
- The "empirical distribution" is rarely empirical.
- Parameter uncertainty has compounding effects on statistical metrics.
- Dimension reduction (principal components) fails.
- Inequality estimators (Gini or quantile contributions) are not additive and produce wrong results.
- Many "biases" found in psychology become entirely rational under more sophisticated probability distributions.
- Most of the failures of financial economics, econometrics, and behavioral economics can be attributed to using the wrong distributions.
This book, the first volume of the Technical Incerto, weaves a narrative around published journal articles.
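
To illustrate the first bullet above (the sample mean misbehaving under fat tails), here is a small hypothetical simulation contrasting a Gaussian with a Pareto distribution whose tail exponent implies infinite variance; it is a sketch of the phenomenon, not material from the book.

```python
# Hypothetical simulation: sample means are stable for a Gaussian but erratic
# for a fat-tailed Pareto with tail exponent alpha = 1.5 (infinite variance).
import numpy as np

rng = np.random.default_rng(1)
alpha = 1.5
pareto_population_mean = alpha / (alpha - 1)     # = 3 for a Pareto with x_min = 1

for trial in range(5):
    gaussian = rng.normal(loc=3.0, scale=1.0, size=10_000)
    pareto = 1 + rng.pareto(alpha, size=10_000)  # shift so x_min = 1
    print(f"trial {trial}: gaussian mean {gaussian.mean():.3f}, "
          f"pareto mean {pareto.mean():.3f} (population mean {pareto_population_mean:.1f})")

# The Gaussian sample means cluster tightly around 3; the Pareto sample means
# wander, typically falling short of the population mean and occasionally far above it.
```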

Bayesian Data Analysis


Andrew Gelman - 1995
    Its world-class authors provide guidance on all aspects of Bayesian data analysis and include examples of real statistical analyses, based on their own research, that demonstrate how to solve complicated problems. Changes in the new edition include:
- Stronger focus on MCMC
- Revision of the computational advice in Part III
- New chapters on nonlinear models and decision analysis
- Several additional applied examples from the authors' recent research
- Additional chapters on current models for Bayesian data analysis such as nonlinear models, generalized linear mixed models, and more
- Reorganization of chapters 6 and 7 on model checking and data collection
Bayesian computation is currently at a stage where there are many reasonable ways to compute any given posterior distribution. However, the best approach is not always clear ahead of time. Reflecting this, the new edition offers a more pluralistic presentation, giving advice on performing computations from many perspectives while making clear the importance of being aware that there are different ways to implement any given iterative simulation computation. The new approach, additional examples, and updated information make Bayesian Data Analysis an excellent introductory text and a reference that working scientists will use throughout their professional life.
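
Since the paragraph above notes that there are many reasonable ways to compute a posterior, here is one of the simplest: a bare-bones random-walk Metropolis sampler for a hypothetical Beta-Bernoulli example (Uniform prior, 7 successes in 10 trials). It is a teaching sketch, not code from the book.

```python
# Bare-bones random-walk Metropolis sampler for a hypothetical example:
# posterior of a success probability theta given 7 heads in 10 flips,
# with a Uniform(0, 1) prior.
import numpy as np

heads, flips = 7, 10

def log_posterior(theta):
    if not 0 < theta < 1:
        return -np.inf                        # outside the prior's support
    return heads * np.log(theta) + (flips - heads) * np.log(1 - theta)

rng = np.random.default_rng(0)
theta, samples = 0.5, []
for _ in range(20_000):
    proposal = theta + rng.normal(scale=0.1)  # symmetric random-walk proposal
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal                      # Metropolis accept step
    samples.append(theta)

posterior_draws = np.array(samples[5_000:])   # discard burn-in
print(posterior_draws.mean())                 # near the exact posterior mean 8/12 ≈ 0.667
```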

Statistics for Dummies


Deborah J. Rumsey - 2003
    . ." and "The data bear this out. . . ." But the field of statistics is not just about data. Statistics is the entire process involved in gathering evidence to answer questions about the world, in cases where that evidence happens to be numerical data. Statistics For Dummies is for everyone who wants to sort through and evaluate the incredible amount of statistical information that comes to them on a daily basis. (You know the stuff: charts, graphs, tables, as well as headlines that talk about the results of the latest poll, survey, experiment, or other scientific study.) This book arms you with the ability to decipher and make important decisions about statistical results, being ever aware of the ways in which people can mislead you with statistics. Get the inside scoop on number-crunching nuances, plus insight into how you canDetermine the odds Calculate a standard score Find the margin of error Recognize the impact of polls Establish criteria for a good survey Make informed decisions about experiments This down-to-earth reference is chock-full of real examples from real sources that are relevant to your everyday life: from the latest medical breakthroughs, crime studies, and population trends to surveys on Internet dating, cell phone use, and the worst cars of the millennium. Statistics For Dummies departs from traditional statistics texts, references, supplement books, and study guides in the following ways:Practical and intuitive explanations of statistical concepts, ideas, techniques, formulas, and calculations. Clear and concise step-by-step procedures that intuitively explain how to work through statistics problems. Upfront and honest answers to your questions like, "What does this really mean?" and "When and how I will ever use this?" Chances are, Statistics For Dummies will be your No. 1 resource for discovering how numerical data figures into your corner of the universe.

Introductory Statistics with R


Peter Dalgaard - 2002
    R can be freely downloaded and it works on multiple computer platforms. This book provides an elementary introduction to R. In each chapter, brief introductory sections are followed by code examples and comments from the computational and statistical viewpoint. A supplementary R package containing the datasets can be downloaded from the web.

Presenting Data Effectively: Communicating Your Findings for Maximum Impact


Stephanie D.H. Evergreen - 2013
    With this accessible step-by-step guide, anyone--from students developing a research poster for a school project to faculty and researchers presenting data at a conference--can learn how to present and communicate their research findings in more interesting and effective ways. Author Stephanie Evergreen draws on her extensive experience in the study of research reporting, interdisciplinary evaluation, and data visualization, as well as from diverse interdisciplinary fields, including cognitive psychology, communications, and graphic design, to extract tangible and practical data-reporting communication lessons and insights. She then demonstrates how to apply those principles to the design of data presentations to make it easier for the audience to understand, remember, and use the data.

Math on Trial: How Numbers Get Used and Abused in the Courtroom


Leila Schneps - 2013
    Even the simplest numbers can become powerful forces when manipulated by politicians or the media, but in the case of the law, your liberty -- and your life -- can depend on the right calculation. In Math on Trial, mathematicians Leila Schneps and Coralie Colmez describe ten trials spanning from the nineteenth century to today, in which mathematical arguments were used -- and disastrously misused -- as evidence. They tell the stories of Sally Clark, who was accused of murdering her children by a doctor with a faulty sense of calculation; of nineteenth-century tycoon Hetty Green, whose dispute over her aunt's will became a signal case in the forensic use of mathematics; and of the case of Amanda Knox, in which a judge's misunderstanding of probability led him to discount critical evidence -- which might have kept her in jail. Offering a fresh angle on cases from the nineteenth-century Dreyfus affair to the murder trial of Dutch nurse Lucia de Berk, Schneps and Colmez show how the improper application of mathematical concepts can mean the difference between walking free and life in prison. A colorful narrative of mathematical abuse, Math on Trial blends courtroom drama, history, and math to show that legal expertise isn't always enough to prove a person innocent.

Thinking Statistically


Uri Bram - 2011
    Along the way we’ll learn how selection bias can explain why your boss doesn’t know he sucks (even when everyone else does); how to use Bayes’ Theorem to decide if your partner is cheating on you; and why Mark Zuckerberg should never be used as an example for anything. See the world in a whole new light, and make better decisions and judgements without ever going near a t-test. Think. Think Statistically.
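
As a hint of the Bayes' Theorem reasoning the blurb alludes to, here is a tiny sketch with made-up numbers (none of them from the book): updating a prior after seeing one piece of evidence.

```python
# Minimal Bayes' rule sketch with hypothetical numbers (not from the book).
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """P(hypothesis | evidence) via Bayes' Theorem."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Hypothetical: evidence you'd expect 50% of the time if the hypothesis were
# true, but 10% of the time anyway if it were false, starting from a 4% prior.
print(posterior(prior=0.04, p_evidence_if_true=0.5, p_evidence_if_false=0.1))
# ≈ 0.17: the evidence matters, but far less than intuition suggests
```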

The Visual Display of Quantitative Information


Edward R. Tufte - 1983
    Theory and practice in the design of data graphics, 250 illustrations of the best (and a few of the worst) statistical graphics, with detailed analysis of how to display data for precise, effective, quick analysis. Design of the high-resolution displays, small multiples. Editing and improving graphics. The data-ink ratio. Time-series, relational graphics, data maps, multivariate designs. Detection of graphical deception: design variation vs. data variation. Sources of deception. Aesthetics and data graphical displays. This is the second edition of The Visual Display of Quantitative Information. Recently published, this new edition provides excellent color reproductions of the many graphics of William Playfair, adds color to other images, and includes all the changes and corrections accumulated during 17 printings of the first edition.