Statistics in Plain English


Timothy C. Urdan - 2001
    Each self-contained chapter consists of three sections. The first describes the statistic, including how it is used and what information it provides. The second reviews how it works, how to calculate the formula, the strengths and weaknesses of the technique, and the conditions needed for its use. The final section provides examples that use and interpret the statistic. A glossary of terms and symbols is also included. New features in the second edition include: an interactive CD with PowerPoint presentations and problems for each chapter, including an overview of each problem's solution; new chapters on basic research concepts (sampling, types of variables, and basic research designs) and on nonparametric statistics; more graphs and more precise descriptions of each statistic; and a discussion of confidence intervals. This brief paperback is an ideal supplement for statistics courses, research methods courses, and courses that use statistics, or a reference tool to refresh one's memory about key concepts. The actual research examples are from psychology, education, and other social and behavioral sciences. Materials formerly available with this book on CD-ROM are now available for download from our website www.psypress.com. Go to the book's page and look for the 'Download' link in the right-hand column.

R Programming for Data Science


Roger D. Peng - 2015
    

Introduction to Algorithms


Thomas H. Cormen - 1989
    Each chapter is relatively self-contained and can be used as a unit of study. The algorithms are described in English and in a pseudocode designed to be readable by anyone who has done a little programming. The explanations have been kept elementary without sacrificing depth of coverage or mathematical rigor.

The Deep Learning Revolution


Terrence J. Sejnowski - 2018
    Deep learning networks can play poker better than professional poker players and defeat a world champion at Go. In this book, Terry Sejnowski explains how deep learning went from being an arcane academic field to a disruptive technology in the information economy. Sejnowski played an important role in the founding of deep learning, as one of a small group of researchers in the 1980s who challenged the prevailing logic-and-symbol based version of AI. The new version of AI Sejnowski and others developed, which became deep learning, is fueled instead by data. Deep networks learn from data in the same way that babies experience the world, starting with fresh eyes and gradually acquiring the skills needed to navigate novel environments. Learning algorithms extract information from raw data; information can be used to create knowledge; knowledge underlies understanding; understanding leads to wisdom. Someday a driverless car will know the road better than you do and drive with more skill; a deep learning network will diagnose your illness; a personal cognitive assistant will augment your puny human brain. It took nature many millions of years to evolve human intelligence; AI is on a trajectory measured in decades. Sejnowski prepares us for a deep learning future.

Superforecasting: The Art and Science of Prediction


Philip E. Tetlock - 2015
    Unfortunately, people tend to be terrible forecasters. As Wharton professor Philip Tetlock showed in a landmark 2005 study, even experts' predictions are only slightly better than chance. However, an important and underreported conclusion of that study was that some experts do have real foresight, and Tetlock has spent the past decade trying to figure out why. What makes some people so good? And can this talent be taught? In Superforecasting, Tetlock and coauthor Dan Gardner offer a masterwork on prediction, drawing on decades of research and the results of a massive, government-funded forecasting tournament. The Good Judgment Project involves tens of thousands of ordinary people, including a Brooklyn filmmaker, a retired pipe installer, and a former ballroom dancer, who set out to forecast global events. Some of the volunteers have turned out to be astonishingly good. They've beaten other benchmarks, competitors, and prediction markets. They've even beaten the collective judgment of intelligence analysts with access to classified information. They are "superforecasters." In this groundbreaking and accessible book, Tetlock and Gardner show us how we can learn from this elite group. Weaving together stories of forecasting successes (the raid on Osama bin Laden's compound) and failures (the Bay of Pigs) and interviews with a range of high-level decision makers, from David Petraeus to Robert Rubin, they show that good forecasting doesn't require powerful computers or arcane methods. It involves gathering evidence from a variety of sources, thinking probabilistically, working in teams, keeping score, and being willing to admit error and change course. Superforecasting offers the first demonstrably effective way to improve our ability to predict the future, whether in business, finance, politics, international affairs, or daily life, and is destined to become a modern classic.

Statistical Methods for Psychology


David C. Howell - 2001
    This book has two underlying themes that are more or less independent of the statistical hypothesis tests that are the main content of the book. The first theme is the importance of looking at the data before formulating a hypothesis. With this in mind, the author discusses plotting data, looking for outliers, and checking assumptions in detail; graphical displays are used extensively. The second theme is the importance of the relationship between the statistical test to be employed and the theoretical questions being posed by the experiment. To emphasize this relationship, the author uses real examples to help the student understand the purpose behind the experiment and the predictions made by the theory. Although this book is designed for students at the intermediate level or above, it does not assume that students have had either a previous course in statistics or a course in math beyond high-school algebra.

Math on Trial: How Numbers Get Used and Abused in the Courtroom


Leila Schneps - 2013
    Even the simplest numbers can become powerful forces when manipulated by politicians or the media, but in the case of the law, your liberty -- and your life -- can depend on the right calculation. In Math on Trial, mathematicians Leila Schneps and Coralie Colmez describe ten trials spanning from the nineteenth century to today, in which mathematical arguments were used -- and disastrously misused -- as evidence. They tell the stories of Sally Clark, who was accused of murdering her children on the basis of a doctor's faulty calculation; of nineteenth-century tycoon Hetty Green, whose dispute over her aunt's will became a signal case in the forensic use of mathematics; and of Amanda Knox, in whose case a judge's misunderstanding of probability led him to discount critical evidence -- which might have kept her in jail. Offering a fresh angle on cases from the nineteenth-century Dreyfus affair to the murder trial of Dutch nurse Lucia de Berk, Schneps and Colmez show how the improper application of mathematical concepts can mean the difference between walking free and life in prison. A colorful narrative of mathematical abuse, Math on Trial blends courtroom drama, history, and math to show that legal expertise isn't always enough to prove a person innocent.

Discrete Mathematics and Its Applications


Kenneth H. Rosen - 2000
    These themes include mathematical reasoning, combinatorial analysis, discrete structures, algorithmic thinking, and enhanced problem-solving skills through modeling. Its intent is to demonstrate the relevance and practicality of discrete mathematics to all students. The Fifth Edition includes a more thorough and linear presentation of logic, proof types and proof writing, and mathematical reasoning. This enhanced coverage will provide students with a solid understanding of the material as it relates to their immediate field of study and other relevant subjects. Applications and examples have been added to key topics throughout to add clarity to every subject. True to the Fourth Edition, the text-specific web site supplements the subject matter in meaningful ways, offering additional material for students and instructors. Discrete math is an active subject with new discoveries made every year. The continual growth and updates to the web site reflect the active nature of the topics being discussed. The book is appropriate for a one- or two-term introductory discrete mathematics course to be taken by students in a wide variety of majors, including computer science, mathematics, and engineering. College Algebra is the only explicit prerequisite.

Introduction to Probability


Joseph K. Blitzstein - 2014
    The book explores a wide variety of applications and examples, ranging from coincidences and paradoxes to Google PageRank and Markov chain Monte Carlo (MCMC). Additional application areas explored include genetics, medicine, computer science, and information theory. The print book version includes a code that provides free access to an eBook version. The authors present the material in an accessible style and motivate concepts using real-world examples. Throughout, they use stories to uncover connections between the fundamental distributions in statistics and conditioning to reduce complicated problems to manageable pieces. The book includes many intuitive explanations, diagrams, and practice problems. Each chapter ends with a section showing how to perform relevant simulations and calculations in R, a free statistical software environment.

Data Visualisation: A Handbook for Data Driven Design


Andy Kirk - 2016
    Scholars and students need to be able to analyze, design and curate information into useful tools of communication, insight and understanding. This book is the starting point in learning the process and skills of data visualization, teaching the concepts and skills of how to present data and inspiring effective visual design. Benefits of this book: a flexible step-by-step journey that equips you to achieve great data visualization; a curated collection of classic and contemporary examples, giving illustrations of good and bad practice; examples on every page to give creative inspiration; illustrations of good and bad practice that show you how to critically evaluate and improve your own work; advice and experience from the best designers in the field; and loads of online practical help, checklists, case studies and exercises that make this the most comprehensive text available.

Statistical Consequences of Fat Tails: Real World Preasymptotics, Epistemology, and Applications


Nassim Nicholas Taleb - 2020
    Switching from thin tailed to fat tailed distributions requires more than "changing the color of the dress." Traditional asymptotics deal mainly with either n=1 or n=∞, and the real world is in between, under the "laws of the medium numbers," which vary widely across specific distributions. Both the law of large numbers and the generalized central limit mechanisms operate in highly idiosyncratic ways outside the standard Gaussian or Levy-Stable basins of convergence. A few examples: the sample mean is rarely in line with the population mean, with effect on "naïve empiricism," but can sometimes be estimated via parametric methods; the "empirical distribution" is rarely empirical; parameter uncertainty has compounding effects on statistical metrics; dimension reduction (principal components) fails; inequality estimators (Gini or quantile contributions) are not additive and produce wrong results; many "biases" found in psychology become entirely rational under more sophisticated probability distributions; and most of the failures of financial economics, econometrics, and behavioral economics can be attributed to using the wrong distributions. This book, the first volume of the Technical Incerto, weaves a narrative around published journal articles.
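    The first example above, the instability of the sample mean under fat tails, is easy to see in a quick simulation. The following is a minimal sketch, not taken from the book, using NumPy with an arbitrarily chosen Pareto tail index alpha = 1.2 and sample size n = 10,000:

        # Minimal sketch (assumptions: Pareto tail index alpha = 1.2, n = 10,000 per sample).
        # Compares sample means under a thin-tailed (Gaussian) and a fat-tailed (Pareto) distribution.
        import numpy as np

        rng = np.random.default_rng(0)
        n, trials, alpha = 10_000, 20, 1.2
        true_pareto_mean = alpha / (alpha - 1)  # classical Pareto with x_min = 1, finite mean since alpha > 1

        gauss_means = [rng.normal(0.0, 1.0, size=n).mean() for _ in range(trials)]
        pareto_means = [(rng.pareto(alpha, size=n) + 1).mean() for _ in range(trials)]

        print("Gaussian sample means (true mean 0.00):", np.round(gauss_means[:5], 3))
        print(f"Pareto sample means (true mean {true_pareto_mean:.2f}):", np.round(pareto_means[:5], 3))
        # The Gaussian sample means cluster tightly around 0; the Pareto sample means
        # scatter widely, typically sitting below the true mean with occasional huge
        # overshoots driven by rare extreme observations.

    Under these assumptions, the typical Pareto sample mean underestimates the population mean while a few samples overshoot wildly, which illustrates the blurb's point about "naïve empiricism."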

Data Analysis with Open Source Tools: A Hands-On Guide for Programmers and Data Scientists


Philipp K. Janert - 2010
    With this insightful book, intermediate to experienced programmers interested in data analysis will learn techniques for working with data in a business environment. You'll learn how to look at data to discover what it contains, how to capture those ideas in conceptual models, and then feed your understanding back into the organization through business plans, metrics dashboards, and other applications. Along the way, you'll experiment with concepts through hands-on workshops at the end of each chapter. Above all, you'll learn how to think about the results you want to achieve -- rather than rely on tools to think for you. You will learn to: use graphics to describe data with one, two, or dozens of variables; develop conceptual models using back-of-the-envelope calculations, as well as scaling and probability arguments; mine data with computationally intensive methods such as simulation and clustering; make your conclusions understandable through reports, dashboards, and other metrics programs; understand financial calculations, including the time-value of money; use dimensionality reduction techniques or predictive analytics to conquer challenging data analysis situations; and become familiar with different open source programming environments for data analysis. "Finally, a concise reference for understanding how to conquer piles of data." --Austin King, Senior Web Developer, Mozilla. "An indispensable text for aspiring data scientists." --Michael E. Driscoll, CEO/Founder, Dataspora

The Power of Experiments: Decision Making in a Data-Driven World


Michael Luca - 2020
    Once an esoteric tool for academic research, the randomized controlled trial has gone mainstream. No tech company worth its salt (or its share price) would dare make major changes to its platform without first running experiments to understand how they would influence user behavior. In this book, Michael Luca and Max Bazerman explain the importance of experiments for decision making in a data-driven world. Luca and Bazerman describe the central role experiments play in the tech sector, drawing lessons and best practices from the experiences of such companies as StubHub, Alibaba, and Uber. Successful experiments can save companies money--eBay, for example, discovered how to cut $50 million from its yearly advertising budget--or bring to light something previously ignored, as when Airbnb was forced to confront rampant discrimination by its hosts. Moving beyond tech, Luca and Bazerman consider experimenting for the social good--different ways that governments are using experiments to influence or "nudge" behavior ranging from voter apathy to school absenteeism. Experiments, they argue, are part of any leader's toolkit. With this book, readers can become part of "the experimental revolution."

Python Crash Course: A Hands-On, Project-Based Introduction to Programming


Eric Matthes - 2015
    You'll also learn how to make your programs interactive and how to test your code safely before adding it to a project. In the second half of the book, you'll put your new knowledge into practice with three substantial projects: a Space Invaders-inspired arcade game, data visualizations with Python's super-handy libraries, and a simple web app you can deploy online. As you work through Python Crash Course, you'll learn how to: use powerful Python libraries and tools, including matplotlib, NumPy, and Pygal; make 2D games that respond to keypresses and mouse clicks, and that grow more difficult as the game progresses; work with data to generate interactive visualizations; create and customize simple web apps and deploy them safely online; and deal with mistakes and errors so you can solve your own programming problems. If you've been thinking seriously about digging into programming, Python Crash Course will get you up to speed and have you writing real programs fast. Why wait any longer? Start your engines and code!

The Model Thinker: What You Need to Know to Make Data Work for You


Scott E. Page - 2018
    But as anyone who has ever opened up a spreadsheet packed with seemingly infinite lines of data knows, numbers aren't enough: we need to know how to make those numbers talk. In The Model Thinker, social scientist Scott E. Page shows us the mathematical, statistical, and computational models—from linear regression to random walks and far beyond—that can turn anyone into a genius. At the core of the book is Page's "many-model paradigm," which shows the reader how to apply multiple models to organize the data, leading to wiser choices, more accurate predictions, and more robust designs. The Model Thinker provides a toolkit for business people, students, scientists, pollsters, and bloggers to make them better, clearer thinkers, able to leverage data and information to their advantage.