Book picks similar to
Information Geometry: Near Randomness and Near Independence by Khadiga Arwini
Tags: mathematics, mechanical, info-compsci-networks, engineering
Machine Learning: An Algorithmic Perspective
Stephen Marsland - 2009
The field is ready for a text that not only demonstrates how to use the algorithms that make up machine learning methods, but also provides the background needed to understand how and why these algorithms work. Machine Learning: An Algorithmic Perspective is that text.

Theory backed up by practical examples: The book covers neural networks, graphical models, reinforcement learning, evolutionary algorithms, dimensionality reduction methods, and the important area of optimization. It treads the fine line between adequate academic rigor and overwhelming students with equations and mathematical concepts. The author addresses the topics in a practical way while providing complete information and references where other expositions can be found. He includes examples based on widely available datasets, as well as practical and theoretical problems to test understanding and application of the material. The book describes algorithms with code examples backed up by a website that provides working implementations in Python. The author uses data from a variety of applications to demonstrate the methods and includes practical problems for students to solve.

Highlights a range of disciplines and applications: Drawing from computer science, statistics, mathematics, and engineering, the multidisciplinary nature of machine learning is underscored by its applicability to areas ranging from finance to biology and medicine to physics and chemistry. Written in an easily accessible style, this book bridges the gaps between disciplines, providing the ideal blend of theory and practical, applicable knowledge.
The Visual Display of Quantitative Information
Edward R. Tufte - 1983
Theory and practice in the design of data graphics, 250 illustrations of the best (and a few of the worst) statistical graphics, with detailed analysis of how to display data for precise, effective, quick analysis. Design of the high-resolution displays, small multiples. Editing and improving graphics. The data-ink ratio. Time-series, relational graphics, data maps, multivariate designs. Detection of graphical deception: design variation vs. data variation. Sources of deception. Aesthetics and data graphical displays. This is the second edition of The Visual Display of Quantitative Information. Recently published, this new edition provides excellent color reproductions of the many graphics of William Playfair, adds color to other images, and includes all the changes and corrections accumulated during 17 printings of the first edition.
Calculated Risks: How to Know When Numbers Deceive You
Gerd Gigerenzer - 2002
At the beginning of the twentieth century, H. G. Wells predicted that statistical thinking would be as necessary for citizenship in a technological world as the ability to read and write. But in the twenty-first century, we are often overwhelmed by a baffling array of percentages and probabilities as we try to navigate a world dominated by statistics. Cognitive scientist Gerd Gigerenzer says that because we haven't learned statistical thinking, we don't understand risk and uncertainty. In order to assess risk -- everything from the risk of an automobile accident to the certainty or uncertainty of some common medical screening tests -- we need a basic understanding of statistics.

Astonishingly, doctors and lawyers don't understand risk any better than anyone else. Gigerenzer reports a study in which doctors were told the results of breast cancer screenings and then were asked to explain the risks of contracting breast cancer to a woman who received a positive result from a screening. The actual risk was small because the test gives many false positives. But nearly every physician in the study overstated the risk. Yet many people will have to make important health decisions based on such information and the interpretation of that information by their doctors.

Gigerenzer explains that a major obstacle to our understanding of numbers is that we live with an illusion of certainty. Many of us believe that HIV tests, DNA fingerprinting, and the growing number of genetic tests are absolutely certain. But even DNA evidence can produce spurious matches. We cling to our illusion of certainty because the medical industry, insurance companies, investment advisers, and election campaigns have become purveyors of certainty, marketing it like a commodity.

To avoid confusion, says Gigerenzer, we should rely on more understandable representations of risk, such as absolute risks. For example, it is said that a mammography screening reduces the risk of dying from breast cancer by 25 percent. But in absolute risks, that means that out of every 1,000 women who do not participate in screening, 4 will die, while out of every 1,000 women who do, 3 will die. A 25 percent risk reduction sounds much more significant than a benefit that 1 out of 1,000 women will reap.

This eye-opening book explains how we can overcome our ignorance of numbers and better understand the risks we may be taking with our money, our health, and our lives.
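Gigerenzer's two numerical points are easy to check for yourself. The short Python sketch below works through them: the screening parameters in the first part (prevalence, sensitivity, false-positive rate) are invented for illustration and are not figures from the book, while the second part uses the mammography numbers quoted above.

# Part 1: Bayes' rule in "natural frequencies" -- count people, not percentages.
# The screening parameters below are illustrative assumptions, not from the book.
women = 10_000              # an imagined screening cohort
prevalence = 0.01           # assumed: 1% of screened women have breast cancer
sensitivity = 0.90          # assumed: 90% of cancers test positive
false_positive_rate = 0.09  # assumed: 9% of healthy women test positive

with_cancer = women * prevalence
without_cancer = women - with_cancer
true_positives = with_cancer * sensitivity
false_positives = without_cancer * false_positive_rate

# Probability of cancer given a positive result -- far lower than most guess.
ppv = true_positives / (true_positives + false_positives)
print(f"Chance of cancer given a positive result: {ppv:.0%}")  # about 9%

# Part 2: relative vs. absolute risk, using the mammography figures quoted above.
deaths_unscreened = 4 / 1000
deaths_screened = 3 / 1000
print(f"Relative risk reduction: {1 - deaths_screened / deaths_unscreened:.0%}")   # 25%
print(f"Absolute risk reduction: {(deaths_unscreened - deaths_screened) * 1000:.0f} in 1,000")  # 1 in 1,000

Both outputs describe the same screening program; only the framing differs, which is exactly Gigerenzer's point.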
The Elements of Statistical Learning: Data Mining, Inference, and Prediction
Trevor Hastie - 2001
The past decade has seen an explosion in computation and information technology, and with it have come vast amounts of data in a variety of fields such as medicine, biology, finance, and marketing. The challenge of understanding these data has led to the development of new tools in the field of statistics, and spawned new areas such as data mining, machine learning, and bioinformatics. Many of these tools have common underpinnings but are often expressed with different terminology. This book describes the important ideas in these areas in a common conceptual framework. While the approach is statistical, the emphasis is on concepts rather than mathematics. Many examples are given, with a liberal use of color graphics. It should be a valuable resource for statisticians and anyone interested in data mining in science or industry. The book's coverage is broad, from supervised learning (prediction) to unsupervised learning. The many topics include neural networks, support vector machines, classification trees, and boosting -- the first comprehensive treatment of this topic in any book.

Trevor Hastie, Robert Tibshirani, and Jerome Friedman are professors of statistics at Stanford University. They are prominent researchers in this area: Hastie and Tibshirani developed generalized additive models and wrote a popular book of that title. Hastie wrote much of the statistical modeling software in S-PLUS and invented principal curves and surfaces. Tibshirani proposed the lasso and is co-author of the very successful An Introduction to the Bootstrap. Friedman is the co-inventor of many data-mining tools, including CART, MARS, and projection pursuit.
The Human Face of Big Data
Rick Smolan - 2012
Big data enables us to sense, measure, and understand aspects of our existence in ways never before possible. The Human Face of Big Data captures, in glorious photographs and moving essays, an extraordinary revolution sweeping, almost invisibly, through business, academia, government, healthcare, and everyday life. It's already enabling us to provide a healthier life for our children. To provide our seniors with independence while keeping them safe. To help us conserve precious resources like water and energy. To alert us to tiny changes in our health, weeks or years before we develop a life-threatening illness. To peer into our own individual genetic makeup. To create new forms of life. And soon, as many predict, to re-engineer our own species. And we've barely scratched the surface . . .

Over the past decade, Rick Smolan and Jennifer Erwitt, co-founders of Against All Odds Productions, have produced a series of ambitious global projects in collaboration with hundreds of the world's leading photographers, writers, and graphic designers. Their Day in the Life projects were credited with creating a mass market for large-format illustrated books (rare was the coffee table book without one). Today their projects aim at sparking global conversations about emerging topics ranging from the Internet (24 Hours in Cyberspace), to microprocessors (One Digital Day), to how the human race is learning to heal itself (The Power to Heal), to the global water crisis (Blue Planet Run). This year Smolan and Erwitt dispatched photographers and writers to every corner of the globe to explore the world of "Big Data" and to determine whether it truly does, as many in the field claim, represent a brand new toolset for humanity, helping address the biggest challenges facing our species.

The book features 10 essays by noted writers:
Introduction: OCEANS OF DATA by Dan Gardner
Chapter 1: REFLECTIONS IN A DIGITAL MIRROR by Juan Enriquez, CEO, Biotechonomy
Chapter 2: OUR DATA OURSELVES by Kate Greene, the Economist
Chapter 3: QUANTIFYING MYSELF by AJ Jacobs, Esquire
Chapter 4: DARK DATA by Marc Goodman, Future Crimes Institute
Chapter 5: THE SENTIENT SENSOR MESH by Susan Karlin, Fast Company
Chapter 6: TAKING THE PULSE OF THE PLANET by Esther Dyson, EDventure
Chapter 7: CITIZEN SCIENCE by Gareth Cook, the Boston Globe
Chapter 8: A DEMOGRAPH OF ONE by Michael Malone, Forbes magazine
Chapter 9: THE ART OF DATA by Aaron Koblin, Google Artist in Residence
Chapter 10: DATA DRIVEN by Jonathan Harris, Cowbird

The book also features stunning infographics from Nigel Holmes:
1) GOOGLING GOOGLE: all the ways Google uses data to help humanity
2) DATA IS THE NEW OIL
3) THE WORLD ACCORDING TO TWITTER
4) AUCTIONING EYEBALLS: the world of Internet advertising
5) FACEBOOK: A Billion Friends
Information Theory, Inference and Learning Algorithms
David J.C. MacKay - 2002
Information theory and inference lie at the heart of many exciting areas of contemporary science and engineering -- communication, signal processing, data mining, machine learning, pattern recognition, computational neuroscience, bioinformatics, and cryptography. This textbook introduces theory in tandem with applications. Information theory is taught alongside practical communication systems, such as arithmetic coding for data compression and sparse-graph codes for error-correction. A toolbox of inference techniques, including message-passing algorithms, Monte Carlo methods, and variational approximations, is developed alongside applications of these tools to clustering, convolutional codes, independent component analysis, and neural networks. The final part of the book describes the state of the art in error-correcting codes, including low-density parity-check codes, turbo codes, and digital fountain codes -- the twenty-first-century standards for satellite communications, disk drives, and data broadcast. Richly illustrated, filled with worked examples and over 400 exercises, some with detailed solutions, David MacKay's groundbreaking book is ideal for self-study and for undergraduate or graduate courses. Interludes on crosswords, evolution, and sex provide entertainment along the way. In sum, this is a textbook on information, communication, and coding for a new generation of students, and an unparalleled entry point into these subjects for professionals in areas as diverse as computational biology, financial engineering, and machine learning.
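For a minimal taste of the subject (this sketch is not from the book), the Python snippet below computes Shannon entropy, the quantity that sets the average bits-per-symbol limit that any lossless code, including the arithmetic coding MacKay covers, can achieve; the two source distributions are invented for illustration.

import math

def entropy_bits(probabilities):
    # Shannon entropy H = -sum(p * log2(p)): the lossless-compression
    # limit in bits per symbol.
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Hypothetical four-symbol sources: skewed distributions compress better.
skewed = [0.7, 0.15, 0.1, 0.05]
uniform = [0.25, 0.25, 0.25, 0.25]

print(f"Skewed source:  {entropy_bits(skewed):.2f} bits/symbol")   # about 1.32
print(f"Uniform source: {entropy_bits(uniform):.2f} bits/symbol")  # exactly 2.00

An ideal arithmetic coder approaches these rates; no lossless code can beat them on average.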
Bayesian Methods for Hackers: Probabilistic Programming and Bayesian Inference
Cameron Davidson-Pilon - 2014
Bayesian methods of inference are deeply natural and extremely powerful. However, most discussions of Bayesian inference rely on intensely complex mathematical analyses and artificial examples, making it inaccessible to anyone without a strong mathematical background. Now, though, Cameron Davidson-Pilon introduces Bayesian inference from a computational perspective, bridging theory to practice and freeing you to get results using computing power.
Bayesian Methods for Hackers illuminates Bayesian inference through probabilistic programming with the powerful PyMC language and the closely related Python tools NumPy, SciPy, and Matplotlib. Using this approach, you can reach effective solutions in small increments, without extensive mathematical intervention.

Davidson-Pilon begins by introducing the concepts underlying Bayesian inference, comparing it with other techniques and guiding you through building and training your first Bayesian model. Next, he introduces PyMC through a series of detailed examples and intuitive explanations that have been refined after extensive user feedback. You'll learn how to use the Markov Chain Monte Carlo algorithm, choose appropriate sample sizes and priors, work with loss functions, and apply Bayesian inference in domains ranging from finance to marketing. Once you've mastered these techniques, you'll constantly turn to this guide for the working PyMC code you need to jumpstart future projects.

Coverage includes:
- Learning the Bayesian "state of mind" and its practical implications
- Understanding how computers perform Bayesian inference
- Using the PyMC Python library to program Bayesian analyses
- Building and debugging models with PyMC
- Testing your model's "goodness of fit"
- Opening the "black box" of the Markov Chain Monte Carlo algorithm to see how and why it works
- Leveraging the power of the "Law of Large Numbers"
- Mastering key concepts, such as clustering, convergence, autocorrelation, and thinning
- Using loss functions to measure an estimate's weaknesses based on your goals and desired outcomes
- Selecting appropriate priors and understanding how their influence changes with dataset size
- Overcoming the "exploration versus exploitation" dilemma: deciding when "pretty good" is good enough
- Using Bayesian inference to improve A/B testing
- Solving data science problems when only small amounts of data are available

Cameron Davidson-Pilon has worked in many areas of applied mathematics, from the evolutionary dynamics of genes and diseases to stochastic modeling of financial prices. His contributions to the open source community include lifelines, an implementation of survival analysis in Python. Educated at the University of Waterloo and at the Independent University of Moscow, he currently works with the online commerce leader Shopify.
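For a sense of what that workflow looks like in practice, here is a minimal sketch, not code from the book: a Bernoulli model of hypothetical conversion data, written against the modern PyMC (v4+) API with "import pymc as pm" (the book's examples target earlier PyMC releases).

import numpy as np
import pymc as pm

# Hypothetical observations: 1 = conversion, 0 = no conversion.
data = np.array([0, 1, 0, 0, 1, 0, 1, 1, 0, 0])

with pm.Model():
    p = pm.Uniform("p", lower=0.0, upper=1.0)   # prior belief about the rate
    pm.Bernoulli("obs", p=p, observed=data)     # likelihood of the observed data
    trace = pm.sample(2000, tune=1000)          # MCMC sampling

# Posterior mean of the conversion rate, pulled from the sampled trace.
print(float(trace.posterior["p"].mean()))

The pattern, define priors, condition on data, sample, then inspect the posterior, is the "small increments" workflow the blurb describes.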
Machine Learning
Ethem Alpaydin - 2016
Machine learning is the basis for a new approach to artificial intelligence that aims to program computers to use example data or past experience to solve a given problem. In this volume in the MIT Press Essential Knowledge series, Ethem Alpaydın offers a concise and accessible overview of the new AI. This expanded edition offers new material on such challenges facing machine learning as privacy, security, accountability, and bias. Alpaydın, author of a popular textbook on machine learning, explains that as Big Data has gotten bigger, the theory of machine learning -- the foundation of efforts to process that data into knowledge -- has also advanced. He describes the evolution of the field, explains important learning algorithms, and presents example applications. He discusses the use of machine learning algorithms for pattern recognition; artificial neural networks inspired by the human brain; algorithms that learn associations between instances; and reinforcement learning, in which an autonomous agent learns to take actions to maximize reward. In a new chapter, he considers transparency, explainability, and fairness, and the ethical and legal implications of making decisions based on data.
The Little Book on CoffeeScript
Alex MacCaw - 2012
Through example code, this guide demonstrates how CoffeeScript abstracts JavaScript, providing syntactical sugar and preventing many common errors. You'll learn CoffeeScript's syntax and idioms step by step, from basic variables and functions to complex comprehensions and classes.

Written by Alex MacCaw, author of JavaScript Web Applications (O'Reilly), with contributions from CoffeeScript creator Jeremy Ashkenas, this book quickly teaches you best practices for using this language -- not just on the client side, but for server-side applications as well. It's time to take a ride with the little language that could.

- Discover how CoffeeScript's syntax differs from JavaScript
- Learn about features such as array comprehensions, destructuring assignments, and classes
- Explore CoffeeScript idioms and compare them to their JavaScript counterparts
- Compile CoffeeScript files in static sites with the Cake build system
- Use CommonJS modules to structure and deploy CoffeeScript client-side applications
- Examine JavaScript's bad parts -- including features CoffeeScript was able to fix
Head First Data Analysis: A Learner's Guide to Big Numbers, Statistics, and Good Decisions
Michael G. Milton - 2009
If your job requires you to manage and analyze all kinds of data, turn to Head First Data Analysis, where you'll quickly learn how to collect and organize data, sort the distractions from the truth, find meaningful patterns, draw conclusions, predict the future, and present your findings to others. Whether you're a product developer researching the market viability of a new product or service, a marketing manager gauging or predicting the effectiveness of a campaign, a salesperson who needs data to support product presentations, or a lone entrepreneur responsible for all of these data-intensive functions and more, the unique approach in Head First Data Analysis is by far the most efficient way to learn what you need to know to convert raw data into a vital business tool.

You'll learn how to:
- Determine which data sources to use for collecting information
- Assess data quality and distinguish signal from noise
- Build basic data models to illuminate patterns, and assimilate new information into the models
- Cope with ambiguous information
- Design experiments to test hypotheses and draw conclusions
- Use segmentation to organize your data within discrete market groups
- Visualize data distributions to reveal new relationships and persuade others
- Predict the future with sampling and probability models
- Clean your data to make it useful
- Communicate the results of your analysis to your audience

Using the latest research in cognitive science and learning theory to craft a multi-sensory learning experience, Head First Data Analysis uses a visually rich format designed for the way your brain works, not a text-heavy approach that puts you to sleep.
Introduction to Graph Theory
Douglas B. West - 1995
Verification that algorithms work is emphasized over analysis of their complexity. An effective use of examples and a huge number of interesting exercises demonstrate the topics of trees and distance, matchings and factors, connectivity and paths, graph coloring, edges and cycles, and planar graphs. For those who need to learn to make coherent arguments in the fields of mathematics and computer science.
Data Science for Business: What you need to know about data mining and data-analytic thinking
Foster Provost - 2013
This guide introduces the fundamental principles of data science and also helps you understand the many data-mining techniques in use today. Based on an MBA course Provost has taught at New York University over the past ten years, Data Science for Business provides examples of real-world business problems to illustrate these principles. You'll learn not only how to improve communication between business stakeholders and data scientists, but also how to participate intelligently in your company's data science projects. You'll also discover how to think data-analytically, and fully appreciate how data science methods can support business decision-making.

- Understand how data science fits in your organization -- and how you can use it for competitive advantage
- Treat data as a business asset that requires careful investment if you're to gain real value
- Approach business problems data-analytically, using the data-mining process to gather good data in the most appropriate way
- Learn general concepts for actually extracting knowledge from data
- Apply data science principles when interviewing data science job candidates
Harmonograph: A Visual Guide to the Mathematics of Music
Anthony Ashton - 1999
Harmonograph is an introduction to the evolution of simple harmonic theory, from the discoveries of Pythagoras to diatonic tuning and equal temperament. Beautiful drawings show the octave as triangle, the fifth as pentagram; diagrams show the principles of harmonics, overtones, and the monochord. Anthony Ashton examines the phenomenon of resonance in Chladni patterns, describes how to build a harmonograph of your own, and provides tables of world tuning systems. This inspiring book will appeal to musicians, mathematicians, designers, and artists alike.
Deep Learning
Ian Goodfellow - 2016
Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning.

The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models.

Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.
How to Measure Anything Workbook: Finding the Value of "Intangibles" in Business
Douglas W. Hubbard - 2014
The invaluable companion to the new edition of the bestselling How to Measure Anything. This companion workbook walks readers through sample problems and exercises in which they can master and apply the methods discussed in the book. The book explains practical methods for measuring a variety of intangibles, including approaches to measuring customer satisfaction, organizational flexibility, technology risk, technology ROI, and other problems in business, government, and not-for-profits.

- Companion to the revision of the bestselling How to Measure Anything
- Provides chapter-by-chapter exercises
- Written by industry leader Douglas Hubbard

Written by recognized expert Douglas Hubbard -- creator of Applied Information Economics -- How to Measure Anything Workbook illustrates how the author has used his approach across various industries and how any problem, no matter how difficult, ill-defined, or uncertain, can lend itself to measurement using proven methods.