Book picks similar to
Bit by Bit: Social Research in the Digital Age by Matthew J. Salganik
nonfiction
science
data-science
How to Lie with Statistics
Darrell Huff - 1954
Darrell Huff runs the gamut of every popularly used type of statistic, probes such things as the sample study, the tabulation method, the interview technique, and the way results are derived from the figures, and points up the countless dodges used to fool rather than to inform.
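Huff's dodges are easy to reproduce. As a minimal sketch (my own invented figures, not an example from the book), here is the classic average-that-misleads: a mean dragged upward by one outlier while the median stays put.

# Nine modest salaries plus one outlier: the mean flatters, the median doesn't.
salaries = [30_000] * 9 + [1_000_000]

mean = sum(salaries) / len(salaries)
median = sorted(salaries)[len(salaries) // 2]  # both middle values are 30,000 here

print(f"mean:   ${mean:,.0f}")    # $127,000 -- the headline number
print(f"median: ${median:,.0f}")  # $30,000  -- what a typical earner actually makes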
Living in Data: A Citizen's Guide to a Better Information Future
Jer Thorp - 2021
Data--our data--is mined and processed for profit, power, and political gain. In Living in Data, Thorp asks a crucial question of our time: How do we stop passively inhabiting data, and instead become active citizens of it? Threading a data story through hippo attacks, glaciers, and school gymnasiums, around colossal rice piles, and over active minefields, Living in Data reminds us that the future of data is still wide open, that there are ways to transcend facts and figures and to find more visceral ways to engage with data, that there are always new stories to be told about how data can be used. Punctuated with Thorp's original and informative illustrations, Living in Data not only redefines what data is, but reimagines who gets to speak its language and how to use its power to create a more just and democratic future. Timely and inspiring, Living in Data gives us a much-needed path forward.
Statistics for the Behavioral Sciences
Frederick J. Gravetter - 1996
You will have numerous opportunities to practice statistical techniques through learning checks, examples, demonstrations, and problems. Exam preparation is made easy with a student companion website that provides tutorials, crossword puzzles, flashcards, learning objectives, and more!
Science Fictions: The Epidemic of Fraud, Bias, Negligence and Hype in Science
Stuart Ritchie - 2020
But what if science itself can’t be relied on? Medicine, education, psychology, health, parenting – wherever it really matters, we look to science for advice. Science Fictions reveals the disturbing flaws that undermine our understanding of all of these fields and more. While the scientific method will always be our best and only way of knowing about the world, in reality the current system of funding and publishing science not only fails to safeguard against scientists’ inescapable biases and foibles, it actively encourages them. From widely accepted theories about ‘priming’ and ‘growth mindset’ to claims about genetics, sleep, microbiotics, as well as a host of drugs, allergies and therapies, we can trace the effects of unreliable, overhyped and even fraudulent papers in austerity economics, the anti-vaccination movement and dozens of bestselling books – and occasionally count the cost in human lives. Stuart Ritchie was among the first people to help expose these problems. In this vital investigation, he gathers together the evidence of their full and shocking extent – and how a new reform movement within science is fighting back. Often witty yet deadly serious, Science Fictions is at the vanguard of the insurgency, proposing a host of remedies to save and protect this most valuable of human endeavours from itself.
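One mechanism Ritchie documents, running many tests and reporting only the hits, is easy to simulate. A sketch (my illustration, not the book's): how often pure noise clears p < 0.05 somewhere among twenty independent tests.

import random

# 20 independent "studies" of a true null effect, each with a 5% false-positive
# rate. How often does at least one come up "significant" by chance alone?
trials, tests, alpha = 100_000, 20, 0.05
hits = sum(
    any(random.random() < alpha for _ in range(tests))
    for _ in range(trials)
)
print(f"P(at least one false positive) ~ {hits / trials:.2f}")  # ~0.64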
Diffusion of Innovations
Everett M. Rogers - 1982
It has sold 30,000 copies in each edition and will continue to reach a huge academic audience. In this renowned book, Everett M. Rogers, professor and chair of the Department of Communication & Journalism at the University of New Mexico, explains how new ideas spread via communication channels over time. Such innovations are initially perceived as uncertain and even risky. To overcome this uncertainty, most people seek out others like themselves who have already adopted the new idea. Thus the diffusion process consists of a few individuals who first adopt an innovation, then spread the word among their circle of acquaintances--a process which typically takes months or years. But there are exceptions: use of the Internet in the 1990s, for example, may have spread more rapidly than any other innovation in the history of humankind. Furthermore, the Internet is changing the very nature of diffusion by decreasing the importance of physical distance between people. The fifth edition addresses the spread of the Internet, and how it has transformed the way human beings communicate and adopt new ideas.
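The adoption dynamic Rogers describes, a few early adopters spreading the word through their acquaintances, produces the familiar S-curve. A toy simulation (mine, not a model from the book), with invented parameters:

import random

# Toy diffusion: each period, every adopter mentions the innovation to a few
# random contacts, who adopt with some probability. Growth is slow at first
# (few adopters), then explosive, then saturating -- the classic S-curve.
random.seed(0)
population, contacts, p_adopt = 1_000, 3, 0.2
adopted = {0}  # a single innovator

for period in range(1, 21):
    attempts = len(adopted) * contacts  # every adopter makes `contacts` attempts
    for _ in range(attempts):
        target = random.randrange(population)
        if target not in adopted and random.random() < p_adopt:
            adopted.add(target)
    print(f"period {period:2d}: {len(adopted):4d} adopters")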
Scarcity: Why Having Too Little Means So Much
Sendhil Mullainathan - 2013
Busy people fail to manage their time efficiently for the same reasons the poor and those maxed out on credit cards fail to manage their money. The dynamics of scarcity reveal why dieters find it hard to resist temptation, why students and busy executives mismanage their time, and why sugarcane farmers are smarter after harvest than before. Once we start thinking in terms of scarcity and the strategies it imposes, the problems of modern life come into sharper focus. Mullainathan and Shafir discuss how scarcity affects our daily lives, recounting anecdotes of their own foibles and making surprising connections that bring this research alive. Their book provides a new way of understanding why the poor stay poor and the busy stay busy, and it reveals not only how scarcity leads us astray but also how individuals and organizations can better manage scarcity for greater satisfaction and success.
The Great Transformation: The Political and Economic Origins of Our Time
Karl Polanyi - 1944
His analysis explains not only the deficiencies of the self-regulating market, but the potentially dire social consequences of untempered market capitalism. New introductory material reveals the renewed importance of Polanyi's seminal analysis in an era of globalization and free trade.
Probability Theory: The Logic of Science
E.T. Jaynes - 1999
It discusses new results, along with applications of probability theory to a variety of problems. The book contains many exercises and is suitable for use as a textbook on graduate-level courses involving data analysis. Aimed at readers already familiar with applied mathematics at an advanced undergraduate level or higher, it is of interest to scientists concerned with inference from incomplete information.
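Jaynes's "logic of science" is, at bottom, Bayes' rule applied relentlessly. A minimal worked update (my example, not one from the book): the bias of a coin after 7 heads in 10 flips, starting from a uniform prior.

# Uniform Beta(1, 1) prior on the coin's bias; conjugacy gives the posterior
# Beta(1 + heads, 1 + tails), whose mean (1 + heads) / (2 + flips) is
# Laplace's rule of succession.
heads, flips = 7, 10
posterior_mean = (1 + heads) / (2 + flips)
print(f"posterior mean: {posterior_mean:.3f}")  # 0.667, vs. the raw frequency 0.700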
Information Theory, Inference and Learning Algorithms
David J.C. MacKay - 2002
These topics lie at the heart of many exciting areas of contemporary science and engineering - communication, signal processing, data mining, machine learning, pattern recognition, computational neuroscience, bioinformatics, and cryptography. This textbook introduces theory in tandem with applications. Information theory is taught alongside practical communication systems, such as arithmetic coding for data compression and sparse-graph codes for error-correction. A toolbox of inference techniques, including message-passing algorithms, Monte Carlo methods, and variational approximations, is developed alongside applications of these tools to clustering, convolutional codes, independent component analysis, and neural networks. The final part of the book describes the state of the art in error-correcting codes, including low-density parity-check codes, turbo codes, and digital fountain codes -- the twenty-first century standards for satellite communications, disk drives, and data broadcast. Richly illustrated, filled with worked examples and over 400 exercises, some with detailed solutions, David MacKay's groundbreaking book is ideal for self-learning and for undergraduate or graduate courses. Interludes on crosswords, evolution, and sex provide entertainment along the way. In sum, this is a textbook on information, communication, and coding for a new generation of students, and an unparalleled entry point into these subjects for professionals in areas as diverse as computational biology, financial engineering, and machine learning.
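The quantity at the book's core fits in a few lines. A small sketch (mine, not MacKay's code) computing Shannon entropy, the average information per symbol and the limit that compressors such as arithmetic coders approach:

from math import log2

def entropy(probs):
    # Shannon entropy in bits: the lower bound on lossless compression.
    return -sum(p * log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # 1.0 bit  -- a fair coin
print(entropy([0.9, 0.1]))   # ~0.47    -- a biased coin is more compressible
print(entropy([0.25] * 4))   # 2.0 bits -- four equally likely symbols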
Storytelling with Data: A Data Visualization Guide for Business Professionals
Cole Nussbaumer Knaflic - 2015
You'll discover the power of storytelling and the way to make data a pivotal point in your story. The lessons in this illuminative text are grounded in theory, but made accessible through numerous real-world examples--ready for immediate application to your next graph or presentation. Storytelling is not an inherent skill, especially when it comes to data visualization, and the tools at our disposal don't make it any easier. This book demonstrates how to go beyond conventional tools to reach the root of your data, and how to use your data to create an engaging, informative, compelling story. Specifically, you'll learn how to:
Understand the importance of context and audience
Determine the appropriate type of graph for your situation
Recognize and eliminate the clutter clouding your information
Direct your audience's attention to the most important parts of your data
Think like a designer and utilize concepts of design in data visualization
Leverage the power of storytelling to help your message resonate with your audience
Together, the lessons in this book will help you turn your data into high impact visual stories that stick with your audience. Rid your world of ineffective graphs, one exploding 3D pie chart at a time. There is a story in your data--Storytelling with Data will give you the skills and power to tell it!
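Several of those lessons (declutter, direct attention, lead with the takeaway) translate straight into plotting code. A minimal matplotlib sketch, with invented data and my own styling choices rather than an example from the book:

import matplotlib.pyplot as plt

# Invented data: mute the context series, highlight the one that matters,
# drop non-data ink, and put the takeaway in the title.
quarters = ["Q1", "Q2", "Q3", "Q4"]
ours, rival = [12, 15, 21, 30], [14, 15, 16, 17]

fig, ax = plt.subplots()
ax.plot(quarters, rival, color="lightgray")
ax.plot(quarters, ours, color="steelblue", linewidth=2.5)
for side in ("top", "right"):
    ax.spines[side].set_visible(False)            # remove chart junk
ax.text(3.05, 30, "our product", color="steelblue", va="center")  # direct label
ax.set_title("Sales grew 2.5x over the year")     # headline, not just "Sales"
plt.show()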
Bayesian Methods for Hackers: Probabilistic Programming and Bayesian Inference
Cameron Davidson-Pilon - 2014
However, most discussions of Bayesian inference rely on intensely complex mathematical analyses and artificial examples, making the subject inaccessible to anyone without a strong mathematical background. Now, though, Cameron Davidson-Pilon introduces Bayesian inference from a computational perspective, bridging theory to practice and freeing you to get results using computing power.
Bayesian Methods for Hackers illuminates Bayesian inference through probabilistic programming with the powerful PyMC library and the closely related Python tools NumPy, SciPy, and Matplotlib. Using this approach, you can reach effective solutions in small increments, without extensive mathematical intervention. Davidson-Pilon begins by introducing the concepts underlying Bayesian inference, comparing it with other techniques and guiding you through building and training your first Bayesian model. Next, he introduces PyMC through a series of detailed examples and intuitive explanations that have been refined after extensive user feedback. You'll learn how to use the Markov chain Monte Carlo algorithm, choose appropriate sample sizes and priors, work with loss functions, and apply Bayesian inference in domains ranging from finance to marketing. Once you've mastered these techniques, you'll constantly turn to this guide for the working PyMC code you need to jumpstart future projects. Coverage includes:
Learning the Bayesian "state of mind" and its practical implications
Understanding how computers perform Bayesian inference
Using the PyMC Python library to program Bayesian analyses
Building and debugging models with PyMC
Testing your model's "goodness of fit"
Opening the "black box" of the Markov chain Monte Carlo algorithm to see how and why it works
Leveraging the power of the "Law of Large Numbers"
Mastering key concepts, such as clustering, convergence, autocorrelation, and thinning
Using loss functions to measure an estimate's weaknesses based on your goals and desired outcomes
Selecting appropriate priors and understanding how their influence changes with dataset size
Overcoming the "exploration versus exploitation" dilemma: deciding when "pretty good" is good enough
Using Bayesian inference to improve A/B testing
Solving data science problems when only small amounts of data are available
Cameron Davidson-Pilon has worked in many areas of applied mathematics, from the evolutionary dynamics of genes and diseases to stochastic modeling of financial prices. His contributions to the open source community include lifelines, an implementation of survival analysis in Python. Educated at the University of Waterloo and at the Independent University of Moscow, he currently works with the online commerce leader Shopify.
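In the book's spirit, here is a minimal model written against the current PyMC API (the book targets the earlier PyMC2/PyMC3 interfaces, so treat this as an approximate sketch with invented data, not the author's code): inferring a conversion rate from binary outcomes, the backbone of its A/B-testing material.

import numpy as np
import pymc as pm

# Invented data: 45 conversions out of 500 visitors.
data = np.r_[np.ones(45), np.zeros(455)]

with pm.Model():
    p = pm.Uniform("p", 0, 1)                # flat prior on the conversion rate
    pm.Bernoulli("obs", p=p, observed=data)  # likelihood of the observed outcomes
    idata = pm.sample(2000, tune=1000)       # MCMC: draw samples from the posterior

print(float(idata.posterior["p"].mean()))    # posterior mean, ~0.09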
The Elements of Statistical Learning: Data Mining, Inference, and Prediction
Trevor Hastie - 2001
With it have come vast amounts of data in a variety of fields such as medicine, biology, finance, and marketing. The challenge of understanding these data has led to the development of new tools in the field of statistics, and spawned new areas such as data mining, machine learning, and bioinformatics. Many of these tools have common underpinnings but are often expressed with different terminology. This book describes the important ideas in these areas in a common conceptual framework. While the approach is statistical, the emphasis is on concepts rather than mathematics. Many examples are given, with a liberal use of color graphics. It should be a valuable resource for statisticians and anyone interested in data mining in science or industry. The book's coverage is broad, from supervised learning (prediction) to unsupervised learning. The many topics include neural networks, support vector machines, classification trees and boosting--the first comprehensive treatment of this topic in any book. Trevor Hastie, Robert Tibshirani, and Jerome Friedman are professors of statistics at Stanford University. They are prominent researchers in this area: Hastie and Tibshirani developed generalized additive models and wrote a popular book of that title. Hastie wrote much of the statistical modeling software in S-PLUS and invented principal curves and surfaces. Tibshirani proposed the Lasso and is co-author of the very successful An Introduction to the Bootstrap. Friedman is the co-inventor of many data-mining tools including CART, MARS, and projection pursuit.
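Boosting, which the blurb singles out, takes only a few lines to try. A quick sketch using scikit-learn on synthetic data (my choice of library and parameters, not the book's own code or notation):

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Boosting fits a sequence of shallow trees, each concentrating on the
# examples its predecessors got wrong.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
model.fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.3f}")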
Deep Learning
Ian Goodfellow - 2016
Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.
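Mechanically, that "hierarchy of concepts" is layers of simple transformations trained by backpropagation. A minimal two-layer network in NumPy (my sketch, far smaller than anything in the book), doing one forward pass and one gradient step:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))           # 8 toy examples, 4 features
y = rng.normal(size=(8, 1))           # invented regression targets
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 1))

# Forward pass: each layer builds on the previous layer's representation.
h = np.maximum(0, X @ W1)             # hidden layer with ReLU
pred = h @ W2
loss = ((pred - y) ** 2).mean()

# Backward pass: the chain rule applied layer by layer (backpropagation).
g_pred = 2 * (pred - y) / len(y)
g_W2 = h.T @ g_pred
g_h = (g_pred @ W2.T) * (h > 0)       # ReLU gradient masks inactive units
g_W1 = X.T @ g_h

W1 -= 0.01 * g_W1                     # one SGD step on each weight matrix
W2 -= 0.01 * g_W2
print(f"loss before the step: {loss:.3f}")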
Data Science from Scratch: First Principles with Python
Joel Grus - 2015
In this book, you’ll learn how many of the most fundamental data science tools and algorithms work by implementing them from scratch.
If you have an aptitude for mathematics and some programming skills, author Joel Grus will help you get comfortable with the math and statistics at the core of data science, and with the hacking skills you need to get started as a data scientist. Today’s messy glut of data holds answers to questions no one’s even thought to ask. This book provides you with the know-how to dig those answers out.
Get a crash course in Python
Learn the basics of linear algebra, statistics, and probability—and understand how and when they're used in data science
Collect, explore, clean, munge, and manipulate data
Dive into the fundamentals of machine learning
Implement models such as k-nearest neighbors, Naive Bayes, linear and logistic regression, decision trees, neural networks, and clustering (a from-scratch sketch follows this list)
Explore recommender systems, natural language processing, network analysis, MapReduce, and databases
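In that from-scratch spirit, a k-nearest neighbors classifier fits in a dozen lines of plain Python. This sketch is mine, not code from the book:

from collections import Counter
from math import dist

def knn_classify(k, labeled_points, new_point):
    # Majority vote among the k labeled points nearest to new_point.
    by_distance = sorted(labeled_points, key=lambda lp: dist(lp[0], new_point))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

points = [((0, 0), "red"), ((1, 0), "red"), ((5, 5), "blue"), ((6, 5), "blue")]
print(knn_classify(3, points, (0.5, 0.2)))  # 'red'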
Hello World: Being Human in the Age of Algorithms
Hannah Fry - 2018
It’s time we stand face-to-digital-face with the true powers and limitations of the algorithms that already automate important decisions in healthcare, transportation, crime, and commerce. Hello World is indispensable preparation for the moral quandaries of a world run by code, and with the unfailingly entertaining Hannah Fry as our guide, we’ll be discussing these issues long after the last page is turned.