The Art of R Programming: A Tour of Statistical Software Design


Norman Matloff - 2011
    No statistical knowledge is required, and your programming skills can range from hobbyist to pro. Along the way, you'll learn about functional and object-oriented programming, running mathematical simulations, and rearranging complex data into simpler, more useful formats. You'll also learn to: create artful graphs to visualize complex data sets and functions; write more efficient code using parallel R and vectorization; interface R with C/C++ and Python for increased speed or functionality; find new R packages for text analysis, image manipulation, and more; and squash annoying bugs with advanced debugging techniques. Whether you're designing aircraft, forecasting the weather, or you just need to tame your data, The Art of R Programming is your guide to harnessing the power of statistical computing.
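
    A minimal sketch (not taken from the book) of the vectorization idea mentioned above, replacing an explicit loop with a single vectorized expression:

```r
# Vectorization vs. an explicit loop: both compute the squares of x,
# but the vectorized form is shorter and typically much faster in R.
x <- 1:100000

# Loop version
squares_loop <- numeric(length(x))
for (i in seq_along(x)) {
  squares_loop[i] <- x[i]^2
}

# Vectorized version
squares_vec <- x^2

identical(squares_loop, squares_vec)  # TRUE
```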

Artificial Intelligence for Humans, Volume 1: Fundamental Algorithms


Jeff Heaton - 2013
    This book teaches basic Artificial Intelligence algorithms such as dimensionality, distance metrics, clustering, error calculation, hill climbing, Nelder-Mead, and linear regression. These are not just foundational algorithms for the rest of the series, but are very useful in their own right. The book explains all algorithms using actual numeric calculations that you can perform yourself. Artificial Intelligence for Humans is a book series meant to teach AI to those without an extensive mathematical background. The reader needs only a knowledge of basic college algebra or computer programming—anything more complicated than that is thoroughly explained. Every chapter also includes a programming example. Examples are currently provided in Java, C#, R, Python, and C; other languages are planned.
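
    As a taste of the kind of calculation the series walks through, here is a minimal R sketch (not taken from the book) of one listed topic, the Euclidean distance between two feature vectors:

```r
# Euclidean distance between two numeric feature vectors:
# d(a, b) = sqrt(sum((a - b)^2))
euclidean <- function(a, b) sqrt(sum((a - b)^2))

a <- c(1, 2, 3)
b <- c(4, 6, 3)
euclidean(a, b)  # 5
```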

Probability And Statistics For Engineers And Scientists


Ronald E. Walpole - 1978
     Offers extensively updated coverage, new problem sets, and chapter-ending material to enhance the book’s relevance to today’s engineers and scientists. Includes new problem sets demonstrating updated applications to engineering as well as biological, physical, and computer science. Emphasizes key ideas as well as the risks and hazards associated with practical application of the material. Includes new material on topics including: difference between discrete and continuous measurements; binary data; quartiles; importance of experimental design; “dummy” variables; rules for expectations and variances of linear functions; Poisson distribution; Weibull and lognormal distributions; central limit theorem, and data plotting. Introduces Bayesian statistics, including its applications to many fields. For those interested in learning more about probability and statistics.
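
    For illustration only (not an example from the book), a short R simulation touching two of the listed topics, the Poisson distribution and the rule for variances of linear functions:

```r
# Simulate Poisson(lambda = 4) draws: mean and variance should both be near 4.
set.seed(1)
x <- rpois(1e5, lambda = 4)
mean(x); var(x)

# Rule for variances of linear functions: Var(a*X + b) = a^2 * Var(X)
a <- 3; b <- 7
var(a * x + b)   # approximately equal to...
a^2 * var(x)     # ...a^2 times Var(X)
```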

Data Mining: Concepts and Techniques (The Morgan Kaufmann Series in Data Management Systems)


Jiawei Han - 2000
    Not only are all of our business, scientific, and government transactions now computerized, but the widespread use of digital cameras, publication tools, and bar codes also generates data. On the collection side, scanned text and image platforms, satellite remote sensing systems, and the World Wide Web have flooded us with a tremendous amount of data. This explosive growth has generated an even more urgent need for new techniques and automated tools that can help us transform this data into useful information and knowledge. Like the first edition, voted the most popular data mining book by KDnuggets readers, this book explores concepts and techniques for the discovery of patterns hidden in large data sets, focusing on issues relating to their feasibility, usefulness, effectiveness, and scalability. However, since the publication of the first edition, great progress has been made in the development of new data mining methods, systems, and applications. This new edition substantially enhances the first edition, and new chapters have been added to address recent developments in mining complex types of data, including stream data, sequence data, graph-structured data, social network data, and multi-relational data. It offers a comprehensive, practical look at the concepts and techniques you need to know to get the most out of real business data; updates that incorporate input from readers, changes in the field, and more material on statistics and machine learning; dozens of algorithms and implementation examples, all in easily understood pseudo-code and suitable for use in real-world, large-scale data mining projects; and complete classroom support for instructors at the www.mkp.com/datamining2e companion site.

AIQ: How People and Machines Are Smarter Together


Nick Polson - 2018
    “AIQ explores the fascinating history of the ideas that drive this technology of the future and demystifies the core concepts behind it; the result is a positive and entertaining look at the great potential unlocked by marrying human creativity with powerful machines.” —Steven D. Levitt, bestselling co-author of Freakonomics. From leading data scientists Nick Polson and James Scott comes what everyone needs to know to understand how artificial intelligence is changing the world and how we can use this knowledge to make better decisions in our own lives. Dozens of times per day, we all interact with intelligent machines that are constantly learning from the wealth of data now available to them. These machines, from smart phones to talking robots to self-driving cars, are remaking the world in the 21st century in the same way that the Industrial Revolution remade the world in the 19th century. AIQ is based on a simple premise: if you want to understand the modern world, then you have to know a little bit of the mathematical language spoken by intelligent machines. AIQ will teach you that language—but in an unconventional way, anchored in stories rather than equations. You will meet a fascinating cast of historical characters who have a lot to teach you about data, probability, and better thinking. Along the way, you'll see how these same ideas are playing out in the modern age of big data and intelligent machines—and how these technologies will soon help you to overcome some of your built-in cognitive weaknesses, giving you a chance to lead a happier, healthier, more fulfilled life.

Statistical Inference


George Casella - 2001
    Starting from the basics of probability, the authors develop the theory of statistical inference using techniques, definitions, and concepts that are statistical and are natural extensions and consequences of previous concepts. This book can be used by readers who have a solid mathematics background. It can also be used in a way that stresses the more practical uses of statistical theory, being more concerned with understanding basic statistical concepts and deriving reasonable statistical procedures for a variety of situations, and less concerned with formal optimality investigations.

Problem Solving with Algorithms and Data Structures Using Python


Bradley N. Miller - 2005
    It is also about Python. However, there is much more. The study of algorithms and data structures is central to understanding what computer science is all about. Learning computer science is not unlike learning any other type of difficult subject matter. The only way to be successful is through deliberate and incremental exposure to the fundamental ideas. A beginning computer scientist needs practice so that there is a thorough understanding before continuing on to the more complex parts of the curriculum. In addition, a beginner needs to be given the opportunity to be successful and gain confidence. This textbook is designed to serve as a text for a first course on data structures and algorithms, typically taught as the second course in the computer science curriculum. Even though the second course is considered more advanced than the first course, this book assumes you are beginners at this level. You may still be struggling with some of the basic ideas and skills from a first computer science course and yet be ready to further explore the discipline and continue to practice problem solving. We cover abstract data types and data structures, writing algorithms, and solving problems. We look at a number of data structures and solve classic problems that arise. The tools and techniques that you learn here will be applied over and over as you continue your study of computer science.

Naked Statistics: Stripping the Dread from the Data


Charles Wheelan - 2012
    How can we catch schools that cheat on standardized tests? How does Netflix know which movies you’ll like? What is causing the rising incidence of autism? As best-selling author Charles Wheelan shows us in Naked Statistics, the right data and a few well-chosen statistical tools can help us answer these questions and more. For those who slept through Stats 101, this book is a lifesaver. Wheelan strips away the arcane and technical details and focuses on the underlying intuition that drives statistical analysis. He clarifies key concepts such as inference, correlation, and regression analysis, reveals how biased or careless parties can manipulate or misrepresent data, and shows us how brilliant and creative researchers are exploiting the valuable data from natural experiments to tackle thorny questions. And in Wheelan’s trademark style, there’s not a dull page in sight. You’ll encounter clever Schlitz Beer marketers leveraging basic probability, an International Sausage Festival illuminating the tenets of the central limit theorem, and a head-scratching choice from the famous game show Let’s Make a Deal—and you’ll come away with insights each time. With the wit, accessibility, and sheer fun that turned Naked Economics into a bestseller, Wheelan defies the odds yet again by bringing another essential, formerly unglamorous discipline to life.

Statistical Rethinking: A Bayesian Course with Examples in R and Stan


Richard McElreath - 2015
    Reflecting the need for even minor programming in today's model-based statistics, the book pushes readers to perform step-by-step calculations that are usually automated. This unique computational approach ensures that readers understand enough of the details to make reasonable choices and interpretations in their own modeling work. The text presents generalized linear multilevel models from a Bayesian perspective, relying on a simple logical interpretation of Bayesian probability and maximum entropy. It covers everything from the basics of regression to multilevel models. The author also discusses measurement error, missing data, and Gaussian process models for spatial and network autocorrelation. By using complete R code examples throughout, this book provides a practical foundation for performing statistical inference. Designed for both PhD students and seasoned professionals in the natural and social sciences, it prepares them for more advanced or specialized statistical modeling. The book is accompanied by an R package (rethinking) that is available on the author's website and GitHub. The two core functions (map and map2stan) of this package allow a variety of statistical models to be constructed from standard model formulas.
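
    A sketch in the spirit of the book's early examples, assuming the rethinking package (and its rstan dependency) is installed; the specific data subset and priors here are illustrative:

```r
library(rethinking)

# Adult heights from the package's built-in Howell1 data set
data(Howell1)
d <- Howell1[Howell1$age >= 18, ]

# A simple Gaussian model of height, fit with the package's map() function
m <- map(
  alist(
    height ~ dnorm(mu, sigma),   # likelihood
    mu ~ dnorm(178, 20),         # prior on the mean
    sigma ~ dunif(0, 50)         # prior on the standard deviation
  ),
  data = d
)

precis(m)   # posterior summary of mu and sigma
```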

The R Book


Michael J. Crawley - 2007
    The R language is recognised as one of the most powerful and flexible statistical software packages, and it enables the user to apply many statistical techniques that would be impossible to carry out without such software, particularly with large data sets.

Learn R in a Day


Steven Murray - 2013
    The book assumes no prior knowledge of computer programming and progressively covers all the essential steps needed to become confident and proficient in using R within a day. Topics include how to input, manipulate, format, iterate (loop), query, perform basic statistics on, and plot data, via a step-by-step technique and demonstrations using in-built datasets which the reader is encouraged to replicate on their computer. Each chapter also includes exercises (with solutions) to practice key skills and empower the reader to build on the essentials gained during this introductory course.
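
    For example, the kind of step-by-step session the book describes might look like the following (a sketch using the built-in mtcars dataset; the book's own choice of datasets may differ):

```r
# Using the built-in mtcars dataset: inspect, summarize, query, and plot.
data(mtcars)
head(mtcars)                       # inspect the first rows
mean(mtcars$mpg)                   # a basic statistic
heavy <- mtcars[mtcars$wt > 3, ]   # query / subset rows by a condition
plot(mtcars$wt, mtcars$mpg,
     xlab = "Weight (1000 lbs)", ylab = "Miles per gallon",
     main = "Fuel economy vs. weight")
```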

Elements of Information Theory


Thomas M. Cover - 1991
    Readers are provided once again with an instructive mix of mathematics, physics, statistics, and information theory. All the essential topics in information theory are covered in detail, including entropy, data compression, channel capacity, rate distortion, network information theory, and hypothesis testing. The authors provide readers with a solid understanding of the underlying theory and applications. Problem sets and a telegraphic summary at the end of each chapter further assist readers. The historical notes that follow each chapter recap the main points. The Second Edition features chapters reorganized to improve teaching, 200 new problems, new material on source coding, portfolio theory, and feedback capacity, and updated references. Now current and enhanced, the Second Edition of Elements of Information Theory remains the ideal textbook for upper-level undergraduate and graduate courses in electrical engineering, statistics, and telecommunications.
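
    As a small illustration (not from the book) of the entropy concept listed above:

```r
# Shannon entropy H(p) = -sum(p * log2(p)) of a discrete distribution, in bits.
entropy <- function(p) {
  p <- p[p > 0]           # drop zero-probability outcomes (0 * log 0 = 0)
  -sum(p * log2(p))
}

entropy(c(0.5, 0.5))      # 1 bit (fair coin)
entropy(c(0.9, 0.1))      # about 0.469 bits (biased coin)
entropy(rep(1/8, 8))      # 3 bits (uniform over 8 outcomes)
```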

Dear Data


Giorgia Lupi - 2016
    The result is described as “a thought-provoking visual feast”.

Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins


Garry Kasparov - 2017
    In 1997, world chess champion Garry Kasparov lost a match to IBM's Deep Blue. It was the dawn of a new era in artificial intelligence: a machine capable of beating the reigning human champion at this most cerebral game. That moment was more than a century in the making, and in this breakthrough book, Kasparov reveals his astonishing side of the story for the first time. He describes how it felt to strategize against an implacable, untiring opponent with the whole world watching, and recounts the history of machine intelligence through the microcosm of chess, considered by generations of scientific pioneers to be a key to unlocking the secrets of human and machine cognition. Kasparov uses his unrivaled experience to look into the future of intelligent machines and sees it bright with possibility. As many critics decry artificial intelligence as a menace, particularly to human jobs, Kasparov shows how humanity can rise to new heights with the help of our most extraordinary creations, rather than fear them. Deep Thinking is a tightly argued case for technological progress, from the man who stood at its precipice with his own career at stake.

Introduction to Data Mining


Vipin Kumar - 2005
    Each major topic is organized into two chapters, beginning with basic concepts that provide necessary background for understanding each data mining technique, followed by more advanced concepts and algorithms.