Book picks similar to
Algebraic Codes for Data Transmission by Richard E. Blahut
Tags: data, engineering, 1_communication, algebraic-coding
The Visual Display of Quantitative Information
Edward R. Tufte - 1983
Theory and practice in the design of data graphics: 250 illustrations of the best (and a few of the worst) statistical graphics, with detailed analysis of how to display data for precise, effective, quick analysis. Design of high-resolution displays and small multiples. Editing and improving graphics. The data-ink ratio. Time-series, relational graphics, data maps, multivariate designs. Detection of graphical deception: design variation vs. data variation. Sources of deception. Aesthetics of data graphical displays. This is the second edition of The Visual Display of Quantitative Information. It provides excellent color reproductions of the many graphics of William Playfair, adds color to other images, and includes all the changes and corrections accumulated during 17 printings of the first edition.
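Tufte's data-ink ratio, named in the blurb above, has a one-line definition worth stating (paraphrasing the book's usage):

$$\text{data-ink ratio} = \frac{\text{data-ink}}{\text{total ink used to print the graphic}}$$

That is, the proportion of a graphic's ink devoted to the non-redundant display of data; Tufte's advice is to maximize it, within reason.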
The Model Thinker: What You Need to Know to Make Data Work for You
Scott E. Page - 2018
But as anyone who has ever opened up a spreadsheet packed with seemingly infinite lines of data knows, numbers aren't enough: we need to know how to make those numbers talk. In The Model Thinker, social scientist Scott E. Page shows us the mathematical, statistical, and computational models—from linear regression to random walks and far beyond—that can turn anyone into a genius. At the core of the book is Page's "many-model paradigm," which shows the reader how to apply multiple models to organize the data, leading to wiser choices, more accurate predictions, and more robust designs. The Model Thinker provides a toolkit for business people, students, scientists, pollsters, and bloggers to make them better, clearer thinkers, able to leverage data and information to their advantage.
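Page's "many-model paradigm" is easy to taste in code: fit two models with different assumptions to the same series and compare their forecasts. A minimal sketch; the data, the two model choices, and all numbers below are illustrative assumptions, not taken from the book.

```python
# A hedged sketch of the "many-model" habit: apply more than one model
# to the same data and compare what each predicts.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(50)
series = 2.0 * t + rng.normal(0, 10, size=t.size)  # noisy upward trend

# Model 1: linear regression -- assumes a stable trend.
slope, intercept = np.polyfit(t, series, deg=1)
linear_forecast = slope * 50 + intercept

# Model 2: random walk -- assumes the best guess is the last observation.
random_walk_forecast = series[-1]

print(f"linear regression forecast for t=50: {linear_forecast:.1f}")
print(f"random walk forecast for t=50:       {random_walk_forecast:.1f}")
# Disagreement between the models is itself information: it tells you which
# assumption (trend vs. drift) your decision actually hinges on.
```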
Multiple View Geometry in Computer Vision
Richard Hartley - 2000
This book covers relevant geometric principles and how to represent objects algebraically so they can be computed and applied. Recent major developments in the theory and practice of scene reconstruction are described in detail in a unified framework. Richard Hartley and Andrew Zisserman provide comprehensive background material and explain how to apply the methods and implement the algorithms. First edition hardback (2000): ISBN 0-521-62304-9.
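The algebraic representation at the book's core is the projective pinhole camera, x = P X with P = K[R | t], mapping homogeneous 3D points to homogeneous image points. A minimal numeric sketch; the calibration values are made-up stand-ins:

```python
import numpy as np

# Pinhole camera model from projective geometry: x = P X, with P = K [R | t].
# The calibration numbers below are arbitrary, purely for illustration.
K = np.array([[800.0, 0.0, 320.0],   # focal lengths and principal point
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
R = np.eye(3)                        # camera aligned with world axes
t = np.array([[0.0], [0.0], [5.0]])  # world origin 5 units in front

P = K @ np.hstack([R, t])            # 3x4 projection matrix

X = np.array([1.0, 0.5, 0.0, 1.0])   # homogeneous 3D point
x = P @ X
u, v = x[0] / x[2], x[1] / x[2]      # dehomogenize to pixel coordinates
print(f"projects to pixel ({u:.1f}, {v:.1f})")
```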
Practical Statistics for Data Scientists: 50 Essential Concepts
Peter Bruce - 2017
Courses and books on basic statistics rarely cover the topic from a data science perspective. This practical guide explains how to apply various statistical methods to data science, tells you how to avoid their misuse, and gives you advice on what's important and what's not. Many data science resources incorporate statistical methods but lack a deeper statistical perspective. If you're familiar with the R programming language and have some exposure to statistics, this quick reference bridges the gap in an accessible, readable format. With this book, you'll learn:
- Why exploratory data analysis is a key preliminary step in data science
- How random sampling can reduce bias and yield a higher-quality dataset, even with big data
- How the principles of experimental design yield definitive answers to questions
- How to use regression to estimate outcomes and detect anomalies (see the sketch after this entry)
- Key classification techniques for predicting which categories a record belongs to
- Statistical machine learning methods that "learn" from data
- Unsupervised learning methods for extracting meaning from unlabeled data
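The regression-for-anomalies item above is easy to sketch: fit a line, then flag points whose residual is unusually large. The data and the 3-sigma cutoff below are illustrative assumptions, not the book's worked example.

```python
# Regression-based anomaly detection: points far from the fitted line
# (large residuals) are flagged. Data and threshold are illustrative.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 100)
y = 3 * x + rng.normal(0, 1, size=x.size)
y[17] += 12.0                          # plant an anomaly

slope, intercept = np.polyfit(x, y, deg=1)
residuals = y - (slope * x + intercept)

threshold = 3 * residuals.std()        # common, if crude, cutoff
anomalies = np.flatnonzero(np.abs(residuals) > threshold)
print("anomalous indices:", anomalies)  # expect index 17
```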
Head First Data Analysis: A Learner's Guide to Big Numbers, Statistics, and Good Decisions
Michael G. Milton - 2009
If your job requires you to manage and analyze all kinds of data, turn to Head First Data Analysis, where you'll quickly learn how to collect and organize data, sort the distractions from the truth, find meaningful patterns, draw conclusions, predict the future, and present your findings to others. Whether you're a product developer researching the market viability of a new product or service, a marketing manager gauging or predicting the effectiveness of a campaign, a salesperson who needs data to support product presentations, or a lone entrepreneur responsible for all of these data-intensive functions and more, the unique approach in Head First Data Analysis is by far the most efficient way to learn what you need to know to convert raw data into a vital business tool. You'll learn how to:
- Determine which data sources to use for collecting information
- Assess data quality and distinguish signal from noise
- Build basic data models to illuminate patterns, and assimilate new information into the models
- Cope with ambiguous information
- Design experiments to test hypotheses and draw conclusions
- Use segmentation to organize your data within discrete market groups (a minimal sketch follows this entry)
- Visualize data distributions to reveal new relationships and persuade others
- Predict the future with sampling and probability models
- Clean your data to make it useful
- Communicate the results of your analysis to your audience
Using the latest research in cognitive science and learning theory to craft a multi-sensory learning experience, Head First Data Analysis uses a visually rich format designed for the way your brain works, not a text-heavy approach that puts you to sleep.
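Segmentation, as listed above, just means grouping records by a category and comparing summaries per group. A tiny sketch; the field names and numbers are hypothetical.

```python
# Segmentation: group records by a segment key, then compare per-group
# summaries. All data below is made up for illustration.
from collections import defaultdict

sales = [
    {"segment": "students", "spend": 18.0},
    {"segment": "students", "spend": 22.0},
    {"segment": "retirees", "spend": 41.0},
    {"segment": "retirees", "spend": 37.0},
]

by_segment = defaultdict(list)
for record in sales:
    by_segment[record["segment"]].append(record["spend"])

for segment, spends in by_segment.items():
    print(f"{segment}: mean spend {sum(spends) / len(spends):.2f}")
```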
The Human Face of Big Data
Rick Smolan - 2012
Big data enables us to sense, measure, and understand aspects of our existence in ways never before possible. The Human Face of Big Data captures, in glorious photographs and moving essays, an extraordinary revolution sweeping, almost invisibly, through business, academia, government, healthcare, and everyday life. It's already enabling us to provide a healthier life for our children. To provide our seniors with independence while keeping them safe. To help us conserve precious resources like water and energy. To alert us to tiny changes in our health, weeks or years before we develop a life-threatening illness. To peer into our own individual genetic makeup. To create new forms of life. And soon, as many predict, to re-engineer our own species. And we've barely scratched the surface . . .

Over the past decade, Rick Smolan and Jennifer Erwitt, co-founders of Against All Odds Productions, have produced a series of ambitious global projects in collaboration with hundreds of the world's leading photographers, writers, and graphic designers. Their Day in the Life projects were credited with creating a mass market for large-format illustrated books (rare was the coffee table book without one). Today their projects aim to spark global conversations about emerging topics ranging from the Internet (24 Hours in Cyberspace), to microprocessors (One Digital Day), to how the human race is learning to heal itself (The Power to Heal), to the global water crisis (Blue Planet Run). This year Smolan and Erwitt dispatched photographers and writers to every corner of the globe to explore the world of “Big Data” and to determine whether it truly does, as many in the field claim, represent a brand-new toolset for humanity, helping address the biggest challenges facing our species.

The book features 10 essays by noted writers:
Introduction: OCEANS OF DATA by Dan Gardner
Chapter 1: REFLECTIONS IN A DIGITAL MIRROR by Juan Enriquez, CEO, Biotechonomy
Chapter 2: OUR DATA OURSELVES by Kate Green, the Economist
Chapter 3: QUANTIFYING MYSELF by AJ Jacobs, Esquire
Chapter 4: DARK DATA by Marc Goodman, Future Crime Institute
Chapter 5: THE SENTIENT SENSOR MESH by Susan Karlin, Fast Company
Chapter 6: TAKING THE PULSE OF THE PLANET by Esther Dyson, EDventure
Chapter 7: CITIZEN SCIENCE by Gareth Cook, the Boston Globe
Chapter 8: A DEMOGRAPH OF ONE by Michael Malone, Forbes magazine
Chapter 9: THE ART OF DATA by Aaron Koblin, Google Artist in Residence
Chapter 10: DATA DRIVEN by Jonathan Harris, Cowbird

The book also features stunning infographics from Nigel Holmes:
1) GOOGLING GOOGLE: all the ways Google uses data to help humanity
2) DATA IS THE NEW OIL
3) THE WORLD ACCORDING TO TWITTER
4) AUCTIONING EYEBALLS: the world of Internet advertising
5) FACEBOOK: A Billion Friends
Linear Algebra Done Right
Sheldon Axler - 1995
The novel approach taken here banishes determinants to the end of the book and focuses on the central goal of linear algebra: understanding the structure of linear operators on vector spaces. The author has taken unusual care to motivate concepts and to simplify proofs. For example, the book presents - without having defined determinants - a clean proof that every linear operator on a finite-dimensional complex vector space (or an odd-dimensional real vector space) has an eigenvalue. A variety of interesting exercises in each chapter helps students understand and manipulate the objects of linear algebra. This second edition includes a new section on orthogonal projections and minimization problems. The sections on self-adjoint operators, normal operators, and the spectral theorem have been rewritten. New examples and new exercises have been added, several proofs have been simplified, and hundreds of minor improvements have been made throughout the text.
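The eigenvalue result quoted above is easy to poke at numerically; a small sketch (the matrices below are arbitrary illustrations, not from the book):

```python
import numpy as np

# Axler's central example: every operator on a finite-dimensional *complex*
# vector space has an eigenvalue. Numerically, any square complex matrix
# has at least one.
rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

eigenvalues = np.linalg.eigvals(A)
print(eigenvalues)          # four complex eigenvalues, with multiplicity

# Contrast: a real rotation by 90 degrees has no *real* eigenvalue, which
# is why the complex (or odd-dimensional real) hypothesis matters.
rotation = np.array([[0.0, -1.0], [1.0, 0.0]])
print(np.linalg.eigvals(rotation))  # +1j and -1j -- complex, not real
```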
Introduction to Algorithms
Thomas H. Cormen - 1989
Each chapter is relatively self-contained and can be used as a unit of study. The algorithms are described in English and in a pseudocode designed to be readable by anyone who has done a little programming. The explanations have been kept elementary without sacrificing depth of coverage or mathematical rigor.
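For the flavor of that pseudocode, here is the book's opening algorithm, insertion sort, transcribed into runnable Python; the transcription is mine, since the book presents it in language-neutral pseudocode.

```python
# Insertion sort, the book's first worked algorithm, rendered in Python.
def insertion_sort(a: list) -> None:
    """Sort list `a` in place, growing a sorted prefix one element at a time."""
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        # Shift elements of the sorted prefix a[0..j-1] that exceed key.
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key

values = [5, 2, 4, 6, 1, 3]   # the book's example input
insertion_sort(values)
print(values)                  # [1, 2, 3, 4, 5, 6]
```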
Data Structure Through C
Yashavant P. Kanetkar - 2003
It adopts a novel approach, using the programming language C to teach data structures. The book discusses concepts like arrays, algorithm analysis, strings, queues, trees, and graphs. Well-designed animations related to these concepts are provided on the CD-ROM that accompanies the book, giving the reader a better understanding of the complex procedures described in the text through visual demonstration. Data Structure Through C is a comprehensive book that can be used as a reference by students as well as computer professionals. It is written in a clear, easy-to-understand manner and includes several programs and examples to explain the complicated concepts related to data structures. The book was published by BPB Publications in 2003 and is available in paperback. Key features: the book contains example programs that elucidate the concepts, and it comes with a CD that visually demonstrates the theory presented in the book.
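The book's examples are in C; purely to illustrate one structure it covers, here is a fixed-capacity circular (ring-buffer) queue sketched in Python, with details of my own choosing.

```python
# A minimal circular queue: a fixed array plus head index and element count,
# the same layout typically used for array-based queues in C.
class CircularQueue:
    def __init__(self, capacity: int):
        self.items = [None] * capacity
        self.head = 0      # index of the front element
        self.count = 0     # number of stored elements

    def enqueue(self, value):
        if self.count == len(self.items):
            raise OverflowError("queue is full")
        tail = (self.head + self.count) % len(self.items)
        self.items[tail] = value
        self.count += 1

    def dequeue(self):
        if self.count == 0:
            raise IndexError("queue is empty")
        value = self.items[self.head]
        self.head = (self.head + 1) % len(self.items)
        self.count -= 1
        return value

q = CircularQueue(3)
q.enqueue("a")
q.enqueue("b")
print(q.dequeue(), q.dequeue())  # a b
```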
Information Technology for Management: Transforming Organizations in the Digital Economy
Efraim Turban - 1995
Throughout, the emphasis is on how IT provides organizations with strategic advantage by facilitating problem solving, increasing productivity and quality, improving customer service, and enabling business process reengineering. It also covers the latest real-world developments, including the introduction of applied grid computing and utility computing.
The Difference Engine : Charles Babbage And The Quest To Build The First Computer
Doron Swade - 2000
Doron Swade, technology historian and assistant director of London's Science Museum, investigates the troubles that plagued 19th-century knowledge engineers in The Difference Engine: Charles Babbage and the Quest to Build the First Computer. The author is in a unique position to appreciate the technical difficulties of the time, as he led a team that built a working model of a Difference Engine, using contemporary materials, in time for Babbage's 1991 bicentenary. The meat of the book is the story of the first computing machine's design, as gathered from the technical notes and drawings curated by Swade. Though Babbage certainly had problems translating his ideas into brass, the reader also comes to understand his fruitless, drawn-out arguments with his funders. Swade had it comparatively easy, though his depictions of the frustrating search for money, and then of working out how best to build the enormous machine in the late 1980s, are delightful. It is difficult--maybe impossible--to draw a clear, unbroken line of influence from Babbage to any modern computer researchers, but his importance both as the first pioneer and as a symbol of the joys and sorrows of computing is unquestioned. Swade clearly respects his subject deeply, all the more so for having tried to bring the great old man's ideas to life. The Difference Engine is lovingly comprehensive and will thrill readers looking for a more technical examination of Babbage's career. --Rob Lightner
Think Complexity: Complexity Science and Computational Modeling
Allen B. Downey - 2009
Whether you’re an intermediate-level Python programmer or a student of computational modeling, you’ll delve into examples of complex systems through a series of exercises, case studies, and easy-to-understand explanations. You’ll work with graphs, algorithm analysis, scale-free networks, and cellular automata (sketched after this entry), using advanced features that make Python such a powerful language. Ideal as a text for courses on Python programming and algorithms, Think Complexity will also help self-learners gain valuable experience with topics and ideas they might not encounter otherwise.
- Work with NumPy arrays and SciPy methods, basic signal processing and Fast Fourier Transform, and hash tables
- Study abstract models of complex physical systems, including power laws, fractals and pink noise, and Turing machines
- Get starter code and solutions to help you re-implement and extend original experiments in complexity
- Explore the philosophy of science, including the nature of scientific laws, theory choice, realism and instrumentalism, and other topics
- Examine case studies of complex systems submitted by students and readers
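As a taste of the cellular automata mentioned above, here is an elementary CA in NumPy. Rule 30, the grid size, and the periodic boundary are my choices for the sketch, not the book's specific exercise.

```python
# Elementary 1D cellular automaton with NumPy. Each cell's next state is
# looked up from its 3-cell neighborhood via the rule's 8-entry table.
import numpy as np

RULE = 30
rule_bits = np.array([(RULE >> i) & 1 for i in range(8)], dtype=np.uint8)

width, steps = 31, 15
row = np.zeros(width, dtype=np.uint8)
row[width // 2] = 1                      # single live cell in the middle

for _ in range(steps):
    print("".join("#" if c else "." for c in row))
    left, right = np.roll(row, 1), np.roll(row, -1)   # periodic boundary
    neighborhood = (left << 2) | (row << 1) | right   # 3-bit code, 0..7
    row = rule_bits[neighborhood]
```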
Calculus Made Easy
Silvanus Phillips Thompson - 1910
With a new introduction, three new chapters, modernized language and methods throughout, and an appendix of challenging and enjoyable practice problems, Calculus Made Easy has been thoroughly updated for the modern reader.
Foundations of Statistical Natural Language Processing
Christopher D. Manning - 1999
This foundational text is the first comprehensive introduction to statistical natural language processing (NLP) to appear. The book contains all the theory and algorithms needed for building NLP tools. It provides broad but rigorous coverage of mathematical and linguistic foundations, as well as detailed discussion of statistical methods, allowing students and researchers to construct their own implementations. The book covers collocation finding, word sense disambiguation, probabilistic parsing, information retrieval, and other applications.
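Collocation finding, the first application listed, can be sketched with pointwise mutual information (PMI), one of the association measures the book discusses. The toy corpus below is my own; real collocation work needs far more data and care with rare counts.

```python
# Collocation finding via PMI: how much more often do two words co-occur
# than independence would predict? PMI = log2(P(w1,w2) / (P(w1) * P(w2))).
import math
from collections import Counter

corpus = ("new york is big . new york never sleeps . "
          "the day is long . a fine day").split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
n = len(corpus)

def pmi(w1: str, w2: str) -> float:
    """log2 of how much more often `w1 w2` occurs than chance predicts."""
    p_joint = bigrams[(w1, w2)] / (n - 1)
    p_independent = (unigrams[w1] / n) * (unigrams[w2] / n)
    return math.log2(p_joint / p_independent)

print(f"PMI(new, york) = {pmi('new', 'york'):.2f}")  # repeated pair: higher
print(f"PMI(york, is)  = {pmi('york', 'is'):.2f}")   # one-off pair: lower
```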