You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It's Making the World a Weirder Place


Janelle Shane - 2019
    "You look like a thing and I love you" is one of the best pickup lines ever, according to an artificial intelligence trained by scientist Janelle Shane, creator of the popular blog "AI Weirdness." She creates silly AIs that learn how to name paint colors, create the best recipes, and even flirt (badly) with humans--all to understand the technology that governs so much of our daily lives. We rely on AI every day for recommendations, for translations, and to put cat ears on our selfie videos. We also trust AI with matters of life and death, on the road and in our hospitals. But how smart is AI really, and how does it solve problems, understand humans, and even drive self-driving cars? Shane delivers the answers to every AI question you've ever asked, and some you definitely haven't--like, how can a computer design the perfect sandwich? What does robot-generated Harry Potter fan-fiction look like? And is the world's best Halloween costume really "Vampire Hog Bride"? In this smart, often hilarious introduction to the most interesting science of our time, Shane shows how these programs learn, fail, and adapt--and how they reflect the best and worst of humanity. You Look Like a Thing and I Love You is the perfect book for anyone curious about what the robots in our lives are thinking.

The Creativity Code: How AI Is Learning to Write, Paint and Think


Marcus du Sautoy - 2019
    They can navigate more data than a doctor or lawyer and act with greater precision. For many years we’ve taken solace in the notion that they can’t create. But now that algorithms can learn and adapt, does the future of creativity belong to machines, too? It is hard to imagine a better guide to the bewildering world of artificial intelligence than Marcus du Sautoy, a celebrated Oxford mathematician whose work on symmetry in the ninth dimension has taken him to the vertiginous edge of mathematical understanding. In The Creativity Code he considers what machine learning means for the future of creativity. The Pollockizer can produce drip paintings in the style of Jackson Pollock, Botnik spins off fanciful (if improbable) scenes inspired by J. K. Rowling, and the music-composing algorithm Emmy managed to fool a panel of Bach experts. But do these programs just mimic, or do they have what it takes to create? Du Sautoy argues that to answer this question, we need to understand how the algorithms that drive them work―and this brings him back to his own subject of mathematics, with its puzzles, constraints, and enticing possibilities. While most recent books on AI focus on the future of work, The Creativity Code moves us to the forefront of creative new technologies and offers a more positive and unexpected vision of our future cohabitation with machines. It challenges us to reconsider what it means to be human―and to crack the creativity code.

Deep Learning


Ian Goodfellow - 2016
    Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.
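    The layered "hierarchy of concepts" described above can be made concrete with a tiny feedforward network. Below is a minimal NumPy sketch, not taken from the book: the layer widths, ReLU activation, and random inputs are illustrative assumptions, and training (gradient-based optimization of the weights) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple nonlinearity; each layer applies an affine map followed by ReLU.
    return np.maximum(0.0, x)

# Illustrative layer widths: 4 raw input features -> two hidden layers -> 1 output.
layer_sizes = [4, 8, 8, 1]

# Randomly initialized weights and biases (in practice these would be learned,
# e.g. by stochastic gradient descent).
weights = [rng.normal(scale=0.5, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Pass an input through the network, layer by layer.

    Each hidden layer builds slightly more abstract features out of the
    previous layer's outputs (the 'hierarchy of concepts' idea).
    """
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)
    # Linear output layer (no activation) for a regression-style prediction.
    return h @ weights[-1] + biases[-1]

x = rng.normal(size=4)   # one fake input example
print(forward(x))        # a single scalar prediction
```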

Data Science from Scratch: First Principles with Python


Joel Grus - 2015
    In this book, you’ll learn how many of the most fundamental data science tools and algorithms work by implementing them from scratch. If you have an aptitude for mathematics and some programming skills, author Joel Grus will help you get comfortable with the math and statistics at the core of data science, and with the hacking skills you need to get started as a data scientist. Today’s messy glut of data holds answers to questions no one’s even thought to ask. This book provides you with the know-how to dig those answers out. With this book, you will:
    - Get a crash course in Python
    - Learn the basics of linear algebra, statistics, and probability—and understand how and when they're used in data science
    - Collect, explore, clean, munge, and manipulate data
    - Dive into the fundamentals of machine learning
    - Implement models such as k-nearest neighbors, Naive Bayes, linear and logistic regression, decision trees, neural networks, and clustering (a minimal from-scratch sketch follows below)
    - Explore recommender systems, natural language processing, network analysis, MapReduce, and databases
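    To illustrate the from-scratch approach, here is a minimal k-nearest-neighbors classifier in plain Python. It is a sketch rather than the book's own code; the toy 2-D points, labels, and choice of k are made up for the example.

```python
from collections import Counter
import math

def euclidean(a, b):
    # Straight-line distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(train_points, train_labels, query, k=3):
    """Label a query point by majority vote among its k nearest neighbors."""
    by_distance = sorted(zip(train_points, train_labels),
                         key=lambda pair: euclidean(pair[0], query))
    k_nearest_labels = [label for _, label in by_distance[:k]]
    return Counter(k_nearest_labels).most_common(1)[0][0]

# Tiny made-up dataset: two clusters in 2-D.
points = [(1.0, 1.1), (1.2, 0.9), (0.8, 1.0), (5.0, 5.2), (5.1, 4.9), (4.8, 5.0)]
labels = ["a", "a", "a", "b", "b", "b"]

print(knn_predict(points, labels, query=(1.0, 1.0)))  # expected: "a"
print(knn_predict(points, labels, query=(5.0, 5.0)))  # expected: "b"
```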

Introduction to Machine Learning with Python: A Guide for Data Scientists


Andreas C. Müller - 2015
    If you use Python, even as a beginner, this book will teach you practical ways to build your own machine learning solutions. With all the data available today, machine learning applications are limited only by your imagination. You'll learn the steps necessary to create a successful machine-learning application with Python and the scikit-learn library. Authors Andreas Müller and Sarah Guido focus on the practical aspects of using machine learning algorithms, rather than the math behind them. Familiarity with the NumPy and matplotlib libraries will help you get even more from this book. With this book, you'll learn:
    - Fundamental concepts and applications of machine learning
    - Advantages and shortcomings of widely used machine learning algorithms
    - How to represent data processed by machine learning, including which data aspects to focus on
    - Advanced methods for model evaluation and parameter tuning
    - The concept of pipelines for chaining models and encapsulating your workflow (see the short example below)
    - Methods for working with text data, including text-specific processing techniques
    - Suggestions for improving your machine learning and data science skills
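    The pipeline item in the list above can be illustrated with a short scikit-learn sketch. This is not an example from the book; the iris dataset, StandardScaler, and LogisticRegression are just convenient stand-ins for whatever preprocessing step and model you might chain.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Built-in toy dataset, used here only to have something to fit.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A pipeline chains preprocessing and a model into one estimator, so the
# scaler is fit only on the training data and reapplied consistently at
# prediction time.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])

pipe.fit(X_train, y_train)
print("test accuracy:", pipe.score(X_test, y_test))
```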

Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins


Garry Kasparov - 2017
    In May 1997, world chess champion Garry Kasparov was defeated by IBM's supercomputer Deep Blue. It was the dawn of a new era in artificial intelligence: a machine capable of beating the reigning human champion at this most cerebral game. That moment was more than a century in the making, and in this breakthrough book, Kasparov reveals his astonishing side of the story for the first time. He describes how it felt to strategize against an implacable, untiring opponent with the whole world watching, and recounts the history of machine intelligence through the microcosm of chess, considered by generations of scientific pioneers to be a key to unlocking the secrets of human and machine cognition. Kasparov uses his unrivaled experience to look into the future of intelligent machines and sees it bright with possibility. As many critics decry artificial intelligence as a menace, particularly to human jobs, Kasparov shows how humanity can rise to new heights with the help of our most extraordinary creations, rather than fear them. Deep Thinking is a tightly argued case for technological progress, from the man who stood at its precipice with his own career at stake.

Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots


John Markoff - 2015
    Pulitzer Prize-winning New York Times science writer John Markoff argues that we must decide to design ourselves into our future, or risk being excluded from it altogether. In the past decade, Google introduced us to driverless cars; Apple debuted Siri, a personal assistant that we keep in our pockets; and an Internet of Things connected the smaller tasks of everyday life to the farthest reaches of the Web. Robots have become an integral part of society on the battlefield and the road; in business, education, and health care. Cheap sensors and powerful computers will ensure that in the coming years, these robots will act on their own. This new era offers the promise of immensely powerful machines, but it also reframes a question first raised more than half a century ago, when the intelligent machine was born. Will we control these systems, or will they control us? In Machines of Loving Grace, John Markoff offers a sweeping history of the complicated and evolving relationship between humans and computers. In recent years, the pace of technological change has accelerated dramatically, posing an ethical quandary. If humans delegate decisions to machines, who will be responsible for the consequences? As Markoff chronicles the history of automation, from the birth of the artificial intelligence and intelligence augmentation communities in the 1950s and 1960s, to the modern-day brain trusts at Google and Apple in Silicon Valley, and on to the expanding robotics economy around Boston, he traces the different ways developers have addressed this fundamental problem and urges them to carefully consider the consequences of their work. We are on the brink of the next stage of the computer revolution, Markoff argues, and robots will profoundly transform modern life. Yet it remains for us to determine whether this new world will be a utopia. Moreover, it is now incumbent upon the designers of these robots to draw a bright line between what is human and what is machine. After nearly forty years covering the tech industry, Markoff offers an unmatched perspective on the most drastic technology-driven societal shifts since the introduction of the Internet. Machines of Loving Grace draws on an extensive array of research and interviews to present an eye-opening history of one of the most pressing questions of our time, and urges us to remember that we still have the opportunity to design ourselves into the future—before it's too late.

Machine Learning


Tom M. Mitchell - 1997
    Mitchell covers the field of machine learning, the study of algorithms that allow computer programs to automatically improve through experience and that automatically infer general laws from specific data.

Turing's Cathedral: The Origins of the Digital Universe


George Dyson - 2012
    In Turing’s Cathedral, George Dyson focuses on a small group of men and women, led by John von Neumann at the Institute for Advanced Study in Princeton, New Jersey, who built one of the first computers to realize Alan Turing’s vision of a Universal Machine. Their work would break the distinction between numbers that mean things and numbers that do things—and our universe would never be the same. Using five kilobytes of memory (the amount allocated to displaying the cursor on a computer desktop of today), they achieved unprecedented success in both weather prediction and nuclear weapons design, while tackling, in their spare time, problems ranging from the evolution of viruses to the evolution of stars. Dyson’s account, both historic and prophetic, sheds important new light on how the digital universe exploded in the aftermath of World War II. The proliferation of both codes and machines was paralleled by two historic developments: the decoding of self-replicating sequences in biology and the invention of the hydrogen bomb. It’s no coincidence that the most destructive and the most constructive of human inventions appeared at exactly the same time.  How did code take over the world? In retracing how Alan Turing’s one-dimensional model became John von Neumann’s two-dimensional implementation, Turing’s Cathedral offers a series of provocative suggestions as to where the digital universe, now fully three-dimensional, may be heading next.

The Sentient Machine: The Coming Age of Artificial Intelligence


Amir Husain - 2017
    Acclaimed technologist and inventor Amir Husain explains how we can live amidst the coming age of sentient machines and artificial intelligence—and not only survive, but thrive. Artificial “machine” intelligence is playing an ever-greater role in our society. We are already using cruise control in our cars, automatic checkout at the drugstore, and are unable to live without our smartphones. The discussion around AI is polarized; people think either machines will solve all problems for everyone, or they will lead us down a dark, dystopian path into total human irrelevance. Regardless of what you believe, the idea that we might bring forth intelligent creation can be intrinsically frightening. But what if our greatest role as humans so far is that of creators? Amir Husain, a brilliant inventor and computer scientist, argues that we are on the cusp of writing our next, and greatest, creation myth. It is the dawn of a new form of intellectual diversity, one that we need to embrace in order to advance the state of the art in many critical fields, including security, resource management, finance, and energy. “In The Sentient Machine, Husain prepares us for a brighter future; not with hyperbole about right and wrong, but with serious arguments about risk and potential” (Dr. Greg Hyslop, Chief Technology Officer, The Boeing Company). He addresses broad existential questions surrounding the coming of AI: Why are we valuable? What can we create in this world? How are we intelligent? What constitutes progress for us? And how might we fail to progress? Husain boils down complex computer science and AI concepts into clear, plainspoken language and draws from a wide variety of cultural and historical references to illustrate his points. Ultimately, Husain challenges many of our societal norms and upends assumptions we hold about “the good life.”

On Intelligence


Jeff Hawkins - 2004
    Jeff Hawkins, the creator of the PalmPilot and other handheld devices, has already reshaped our relationship to computers. Now he stands ready to revolutionize both neuroscience and computing in one stroke, with a new understanding of intelligence itself. Hawkins develops a powerful theory of how the human brain works, explaining why computers are not intelligent and how, based on this new theory, we can finally build intelligent machines. The brain is not a computer, but a memory system that stores experiences in a way that reflects the true structure of the world, remembering sequences of events and their nested relationships and making predictions based on those memories. It is this memory-prediction system that forms the basis of intelligence, perception, creativity, and even consciousness. In an engaging style that will captivate audiences from the merely curious to the professional scientist, Hawkins shows how a clear understanding of how the brain works will make it possible for us to build intelligent machines, in silicon, that will exceed our human ability in surprising ways. Written with acclaimed science writer Sandra Blakeslee, On Intelligence promises to completely transfigure the possibilities of the technology age. It is a landmark book in its scope and clarity.

An Introduction to Statistical Learning: With Applications in R


Gareth James - 2013
    This book presents some of the most important modeling and prediction techniques, along with relevant applications. Topics include linear regression, classification, resampling methods, shrinkage approaches, tree-based methods, support vector machines, clustering, and more. Color graphics and real-world examples are used to illustrate the methods presented. Since the goal of this textbook is to facilitate the use of these statistical learning techniques by practitioners in science, industry, and other fields, each chapter contains a tutorial on implementing the analyses and methods presented in R, an extremely popular open source statistical software platform. Two of the authors co-wrote The Elements of Statistical Learning (Hastie, Tibshirani and Friedman, 2nd edition 2009), a popular reference book for statistics and machine learning researchers. An Introduction to Statistical Learning covers many of the same topics, but at a level accessible to a much broader audience. This book is targeted at statisticians and non-statisticians alike who wish to use cutting-edge statistical learning techniques to analyze their data. The text assumes only a previous course in linear regression and no knowledge of matrix algebra.
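    For the first technique listed, linear regression, the standard model and its ordinary-least-squares fit can be stated briefly (textbook notation, not quoted from this book):

```latex
y = \beta_0 + \beta_1 x_1 + \cdots + \beta_p x_p + \varepsilon,
\qquad
\hat{\beta} = \operatorname*{arg\,min}_{\beta} \lVert y - X\beta \rVert_2^2
            = (X^\top X)^{-1} X^\top y
```

    Here X is the design matrix, with one row per observation and a leading intercept column, and the closed-form solution applies when X^T X is invertible.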

The Information: A History, a Theory, a Flood


James Gleick - 2011
    The story of information begins in a time profoundly unlike our own, when every thought and utterance vanishes as soon as it is born. From the invention of scripts and alphabets to the long-misunderstood talking drums of Africa, Gleick tells the story of information technologies that changed the very nature of human consciousness. He provides portraits of the key figures contributing to the inexorable development of our modern understanding of information: Charles Babbage, the idiosyncratic inventor of the first great mechanical computer; Ada Byron, the brilliant and doomed daughter of the poet, who became the first true programmer; pivotal figures like Samuel Morse and Alan Turing; and Claude Shannon, the creator of information theory itself. And then the information age arrives. Citizens of this world become experts willy-nilly: aficionados of bits and bytes. And we sometimes feel we are drowning, swept by a deluge of signs and signals, news and images, blogs and tweets. The Information is the story of how we got here and where we are heading.

Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again


Eric J. Topol - 2019
    The doctor-patient relationship--the heart of medicine--is broken: doctors are too distracted and overwhelmed to truly connect with their patients, and medical errors and misdiagnoses abound. In Deep Medicine, leading physician Eric Topol reveals how artificial intelligence can help. AI has the potential to transform everything doctors do, from notetaking and medical scans to diagnosis and treatment, greatly cutting down the cost of medicine and reducing human mortality. By freeing physicians from the tasks that interfere with human connection, AI will create space for the real healing that takes place between a doctor who can listen and a patient who needs to be heard. Innovative, provocative, and hopeful, Deep Medicine shows us how the awesome power of AI can make medicine better, for all the humans involved.

Probabilistic Graphical Models: Principles and Techniques


Daphne Koller - 2009
    The framework of probabilistic graphical models, presented in this book, provides a general approach to reasoning: reaching conclusions based on available information. The approach is model-based, allowing interpretable models to be constructed and then manipulated by reasoning algorithms. These models can also be learned automatically from data, allowing the approach to be used in cases where manually constructing a model is difficult or even impossible. Because uncertainty is an inescapable aspect of most real-world applications, the book focuses on probabilistic models, which make the uncertainty explicit and provide models that are more faithful to reality. Probabilistic Graphical Models discusses a variety of models, spanning Bayesian networks, undirected Markov networks, discrete and continuous models, and extensions to deal with dynamical systems and relational data. For each class of models, the text describes the three fundamental cornerstones: representation, inference, and learning, presenting both basic concepts and advanced techniques. Finally, the book considers the use of the proposed framework for causal reasoning and decision making under uncertainty. The main text in each chapter provides the detailed technical development of the key ideas. Most chapters also include boxes with additional material: skill boxes, which describe techniques; case study boxes, which discuss empirical cases related to the approach described in the text, including applications in computer vision, robotics, natural language understanding, and computational biology; and concept boxes, which present significant concepts drawn from the material in the chapter. Instructors (and readers) can group chapters in various combinations, from core topics to more technically advanced material, to suit their particular needs.
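    To make "representation, inference, and learning" slightly more concrete, here is a minimal sketch of the first two cornerstones: a three-variable Bayesian network (the classic rain/sprinkler/wet-grass example, not one of this book's case studies) with exact inference by enumerating the joint distribution. All probability numbers are invented for illustration.

```python
from itertools import product

# Representation: conditional probability tables for a tiny Bayesian network
#   Rain -> Sprinkler, and (Rain, Sprinkler) -> WetGrass.
P_rain = {True: 0.2, False: 0.8}
P_sprinkler_given_rain = {True: {True: 0.01, False: 0.99},
                          False: {True: 0.4, False: 0.6}}
P_wet_given = {  # keyed by (rain, sprinkler)
    (True, True): {True: 0.99, False: 0.01},
    (True, False): {True: 0.8, False: 0.2},
    (False, True): {True: 0.9, False: 0.1},
    (False, False): {True: 0.0, False: 1.0},
}

def joint(rain, sprinkler, wet):
    # The graph structure lets the joint distribution factorize into local tables.
    return (P_rain[rain]
            * P_sprinkler_given_rain[rain][sprinkler]
            * P_wet_given[(rain, sprinkler)][wet])

def prob_rain_given_wet():
    """Inference by brute-force enumeration: P(Rain=True | WetGrass=True)."""
    numerator = sum(joint(True, s, True) for s in (True, False))
    evidence = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
    return numerator / evidence

print(round(prob_rain_given_wet(), 3))
```

    Real applications replace this brute-force enumeration with the structured inference algorithms the book develops, such as variable elimination and message passing.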