Networks of the Brain


Olaf Sporns - 2010
    Increasingly, science is concerned with the structure, behavior, and evolution of complex systems ranging from cells to ecosystems. In Networks of the Brain, Olaf Sporns describes how the integrative nature of brain function can be illuminated from a complex network perspective. Highlighting the many emerging points of contact between neuroscience and network science, the book serves to introduce network theory to neuroscientists and neuroscience to those working on theoretical network models. Sporns emphasizes how networks connect levels of organization in the brain and how they link structure to function, offering an informal and nonmathematical treatment of the subject. Networks of the Brain provides a synthesis of the sciences of complex networks and the brain that will be an essential foundation for future research.

Neural Networks: A Comprehensive Foundation


Simon Haykin - 1994
    Introducing students to the many facets of neural networks, this text provides many case studies to illustrate their real-life, practical applications.

The Computer and the Brain


John von Neumann - 1958
    This work represents the views of a mathematician on the analogies between computing machines and the living human brain.

An Introduction to Statistical Learning: With Applications in R


Gareth James - 2013
    This book presents some of the most important modeling and prediction techniques, along with relevant applications. Topics include linear regression, classification, resampling methods, shrinkage approaches, tree-based methods, support vector machines, clustering, and more. Color graphics and real-world examples are used to illustrate the methods presented. Since the goal of this textbook is to facilitate the use of these statistical learning techniques by practitioners in science, industry, and other fields, each chapter contains a tutorial on implementing the analyses and methods presented in R, an extremely popular open source statistical software platform. Two of the authors co-wrote The Elements of Statistical Learning (Hastie, Tibshirani and Friedman, 2nd edition 2009), a popular reference book for statistics and machine learning researchers. An Introduction to Statistical Learning covers many of the same topics, but at a level accessible to a much broader audience. This book is targeted at statisticians and non-statisticians alike who wish to use cutting-edge statistical learning techniques to analyze their data. The text assumes only a previous course in linear regression and no knowledge of matrix algebra.

Practical Statistics for Data Scientists: 50 Essential Concepts


Peter Bruce - 2017
    Courses and books on basic statistics rarely cover the topic from a data science perspective. This practical guide explains how to apply various statistical methods to data science, tells you how to avoid their misuse, and gives you advice on what's important and what's not. Many data science resources incorporate statistical methods but lack a deeper statistical perspective. If you're familiar with the R programming language, and have some exposure to statistics, this quick reference bridges the gap in an accessible, readable format. With this book, you'll learn: why exploratory data analysis is a key preliminary step in data science; how random sampling can reduce bias and yield a higher quality dataset, even with big data; how the principles of experimental design yield definitive answers to questions; how to use regression to estimate outcomes and detect anomalies; key classification techniques for predicting which categories a record belongs to; statistical machine learning methods that "learn" from data; and unsupervised learning methods for extracting meaning from unlabeled data.

Machine Learning Yearning


Andrew Ng
    Building a machine learning system requires that you make practical decisions: Should you collect more training data? Should you use end-to-end deep learning? How do you deal with your training set not matching your test set? And many more. Historically, the only way to learn how to make these "strategy" decisions has been a multi-year apprenticeship in a graduate program or company. This book helps you quickly gain this skill, so that you can become better at building AI systems.

Bayesian Data Analysis


Andrew Gelman - 1995
    Its world-class authors provide guidance on all aspects of Bayesian data analysis and include examples of real statistical analyses, based on their own research, that demonstrate how to solve complicated problems. Changes in the new edition include: a stronger focus on MCMC; revision of the computational advice in Part III; new chapters on nonlinear models and decision analysis; several additional applied examples from the authors' recent research; additional chapters on current models for Bayesian data analysis such as nonlinear models, generalized linear mixed models, and more; and a reorganization of chapters 6 and 7 on model checking and data collection. Bayesian computation is currently at a stage where there are many reasonable ways to compute any given posterior distribution. However, the best approach is not always clear ahead of time. Reflecting this, the new edition offers a more pluralistic presentation, giving advice on performing computations from many perspectives while making clear the importance of being aware that there are different ways to implement any given iterative simulation computation. The new approach, additional examples, and updated information make Bayesian Data Analysis an excellent introductory text and a reference that working scientists will use throughout their professional life.
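    For a taste of the iterative simulation the blurb refers to, the following is a minimal random-walk Metropolis sketch in Python; the one-dimensional target density, step size, and sample count are illustrative assumptions, not anything taken from the book.

```python
import numpy as np

def log_posterior(theta):
    # Hypothetical target: standard normal log-density (up to a constant).
    return -0.5 * theta ** 2

def metropolis(n_samples=5000, step=1.0, theta0=0.0, seed=0):
    """Random-walk Metropolis sampler for a 1-D log-density."""
    rng = np.random.default_rng(seed)
    samples = np.empty(n_samples)
    theta = theta0
    logp = log_posterior(theta)
    for i in range(n_samples):
        proposal = theta + step * rng.normal()        # symmetric proposal
        logp_prop = log_posterior(proposal)
        if np.log(rng.uniform()) < logp_prop - logp:  # Metropolis accept/reject
            theta, logp = proposal, logp_prop
        samples[i] = theta
    return samples

draws = metropolis()
print(draws.mean(), draws.std())  # roughly 0 and 1 for this toy target
```

    Any other sampler (Gibbs, Hamiltonian Monte Carlo, and so on) could be swapped in here, which is the point the blurb makes about there being many reasonable ways to compute a posterior.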

Tools for Thought: The History and Future of Mind-Expanding Technology


Howard Rheingold - 1985
    The digital revolution did not begin with the teenage millionaires of Silicon Valley, claims Howard Rheingold, but with such early intellectual giants as Charles Babbage, George Boole, and John von Neumann. In a highly engaging style, Rheingold tells the story of what he calls the patriarchs, pioneers, and infonauts of the computer, focusing in particular on such pioneers as J. C. R. Licklider, Doug Engelbart, Bob Taylor, and Alan Kay. Taking the reader step by step from nineteenth-century mathematics to contemporary computing, he introduces a fascinating collection of eccentrics, mavericks, geniuses, and visionaries. The book was originally published in 1985, and Rheingold's attempt to envision computing in the 1990s turns out to have been remarkably prescient. This edition contains an afterword, in which Rheingold interviews some of the pioneers discussed in the book. As an exercise in what he calls retrospective futurism, Rheingold also looks back at how he looked forward.

Reinforcement Learning: An Introduction


Richard S. Sutton - 1998
    Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning.
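    As a concrete illustration of one of the Part II solution methods, here is a minimal tabular TD(0) sketch in Python on a made-up five-state random walk; the environment, step size, and discount factor are illustrative assumptions rather than anything specified by the book.

```python
import random

# Hypothetical 5-state random walk: states 0..4, reward 1 for exiting on the right.
N_STATES, ALPHA, GAMMA = 5, 0.1, 1.0
V = [0.0] * N_STATES  # tabular state-value estimates

def episode():
    s = N_STATES // 2
    while True:
        s_next = s + random.choice([-1, 1])
        if s_next < 0:                   # terminate on the left: reward 0
            target = 0.0
        elif s_next >= N_STATES:         # terminate on the right: reward 1
            target = 1.0
        else:                            # bootstrap from the next state's value
            target = GAMMA * V[s_next]
        V[s] += ALPHA * (target - V[s])  # TD(0) update
        if s_next < 0 or s_next >= N_STATES:
            break
        s = s_next

for _ in range(5000):
    episode()
print([round(v, 2) for v in V])  # values should rise roughly linearly from left to right
```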

Machine Learning for Absolute Beginners


Oliver Theobald - 2017
    The manner in which computers are now able to mimic human thinking is rapidly exceeding human capabilities in everything from chess to picking the winner of a song contest. In the age of machine learning, computers do not strictly need to receive an ‘input command’ to perform a task, but rather ‘input data’. From the input of data they are able to form their own decisions and take actions virtually as a human would. But as machines, they can consider many more scenarios and execute calculations to solve complex problems. This is the element that excites companies and budding machine learning engineers the most: the ability to solve complex problems never before attempted. This is also perhaps one reason why you are looking at purchasing this book, to gain a beginner's introduction to machine learning. This book provides a plain English introduction to the following topics: - Artificial Intelligence - Big Data - Downloading Free Datasets - Regression - Support Vector Machine Algorithms - Deep Learning/Neural Networks - Data Reduction - Clustering - Association Analysis - Decision Trees - Recommenders - Machine Learning Careers This book has recently been updated following feedback from readers. Version II now includes: - New Chapter: Decision Trees - Cleanup of minor errors

Applied Predictive Modeling


Max Kuhn - 2013
    Non-mathematical readers will appreciate the intuitive explanations of the techniques, while an emphasis on problem-solving with real data across a wide variety of applications will aid practitioners who wish to extend their expertise. Readers should have knowledge of basic statistical ideas, such as correlation and linear regression analysis. While the text is biased against complex equations, a mathematical background is needed for advanced topics. Dr. Kuhn is a Director of Non-Clinical Statistics at Pfizer Global R&D in Groton, Connecticut. He has been applying predictive models in the pharmaceutical and diagnostic industries for over 15 years and is the author of a number of R packages. Dr. Johnson has more than a decade of statistical consulting and predictive modeling experience in pharmaceutical research and development. He is a co-founder of Arbor Analytics, a firm specializing in predictive modeling, and is a former Director of Statistics at Pfizer Global R&D. His scholarly work centers on the application and development of statistical methodology and learning algorithms. Applied Predictive Modeling covers the overall predictive modeling process, beginning with the crucial steps of data preprocessing, data splitting, and the foundations of model tuning. The text then provides intuitive explanations of numerous common and modern regression and classification techniques, always with an emphasis on illustrating and solving real data problems. Addressing practical concerns extends beyond model fitting to topics such as handling class imbalance, selecting predictors, and pinpointing causes of poor model performance, all of which are problems that occur frequently in practice. The text illustrates all parts of the modeling process through many hands-on, real-life examples, and every chapter contains extensive R code for each step of the process.
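    As a rough sketch of the data splitting and model tuning steps the blurb describes, here is a minimal Python/scikit-learn workflow on a synthetic dataset; the book itself works in R, so this pipeline, parameter grid, and dataset are stand-ins rather than the authors' code.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a real dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Data splitting: hold out a test set before any tuning happens.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Preprocessing + model in one pipeline, so scaling is fit only on training folds.
pipe = Pipeline([("scale", StandardScaler()),
                 ("clf", LogisticRegression(max_iter=1000))])

# Model tuning via cross-validation over a small hypothetical grid.
grid = GridSearchCV(pipe, {"clf__C": [0.01, 0.1, 1.0, 10.0]}, cv=5)
grid.fit(X_train, y_train)

print("best C:", grid.best_params_["clf__C"])
print("held-out accuracy:", grid.score(X_test, y_test))
```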

Elements of Programming


Alexander Stepanov - 2009
    "And then we wonder why software is notorious for being delivered late and full of bugs, while other engineers routinely deliver finished bridges, automobiles, electrical appliances, etc., on time and with only minor defects. This book sets out to redress this imbalance. Members of my advanced development team at Adobe who took the course based on the same material all benefited greatly from the time invested. It may appear as a highly technical text intended only for computer scientists, but it should be required reading for all practicing software engineers." --Martin Newell, Adobe Fellow "The book contains some of the most beautiful code I have ever seen." --Bjarne Stroustrup, Designer of C++ "I am happy to see the content of Alex's course, the development and teaching of which I strongly supported as the CTO of Silicon Graphics, now available to all programmers in this elegant little book." --Forest Baskett, General Partner, New Enterprise Associates "Paul's patience and architectural experience helped to organize Alex's mathematical approach into a tightly-structured edifice--an impressive feat!" --Robert W. Taylor, Founder of Xerox PARC CSL and DEC Systems Research Center. Elements of Programming provides a different understanding of programming than is presented elsewhere. Its major premise is that practical programming, like other areas of science and engineering, must be based on a solid mathematical foundation. The book shows that algorithms implemented in a real programming language, such as C++, can operate in the most general mathematical setting. For example, the fast exponentiation algorithm is defined to work with any associative operation. Using abstract algorithms leads to efficient, reliable, secure, and economical software. This is not an easy book. Nor is it a compilation of tips and tricks for incremental improvements in your programming skills. The book's value is more fundamental and, ultimately, more critical for insight into programming. To benefit fully, you will need to work through it from beginning to end, reading the code, proving the lemmas, and doing the exercises. When finished, you will see how the application of the deductive method to your programs assures that your system's software components will work together and behave as they must. The book presents a number of algorithms and requirements for types on which they are defined. The code for these descriptions--also available on the Web--is written in a small subset of C++ meant to be accessible to any experienced programmer. This subset is defined in a special language appendix coauthored by Sean Parent and Bjarne Stroustrup. Whether you are a software developer, or any other professional for whom programming is an important activity, or a committed student, you will come to understand what the book's experienced authors have been teaching and demonstrating for years--that mathematics is good for programming, and that theory is good for practice.
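    To illustrate the kind of generic algorithm the blurb mentions, here is a minimal sketch of fast exponentiation over an arbitrary associative operation, written in Python rather than the book's C++ subset; the function name and example operations are illustrative, not the book's.

```python
def power(a, n, op):
    """Combine a with itself n times under an associative binary operation op (n >= 1).

    Uses O(log n) applications of op; associativity is exactly what makes
    the repeated-squaring regrouping valid.
    """
    result = None
    while n > 0:
        if n & 1:  # include the current power of a when the bit is set
            result = a if result is None else op(result, a)
        a = op(a, a)  # square a
        n >>= 1
    return result

print(power(3, 13, lambda x, y: x * y))    # multiplication: 3**13 = 1594323
print(power("ab", 4, lambda x, y: x + y))  # string concatenation: 'abababab'
print(power(2, 10, lambda x, y: x + y))    # repeated addition: 2 * 10 = 20
```

    The same loop works for matrix multiplication, modular arithmetic, or any other monoid, which is the generality the book builds its algorithms around.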

The Essential Turing: Seminal Writings in Computing, Logic, Philosophy, Artificial Intelligence, and Artificial Life Plus the Secrets of Enigma


Alan Turing - 2004
    In 1935, aged 22, Turing developed the mathematical theory upon which all subsequent stored-program digital computers are modeled. At the outbreak of hostilities with Germany in September 1939, he joined the Government Codebreaking team at Bletchley Park, Buckinghamshire and played a crucial role in deciphering Enigma, the code used by the German armed forces to protect their radio communications. Turing's work on the version of Enigma used by the German navy was vital to the battle for supremacy in the North Atlantic. He also contributed to the attack on the cyphers known as Fish, which were used by the German High Command for the encryption of signals during the latter part of the war. His contribution helped to shorten the war in Europe by an estimated two years. After the war, his theoretical work led to the development of Britain's first computers at the National Physical Laboratory and the Royal Society Computing Machine Laboratory at Manchester University. Turing was also a founding father of modern cognitive science, theorizing that the cortex at birth is an unorganized machine which through training becomes organized into a universal machine or something like it. He went on to develop the use of computers to model biological growth, launching the discipline now referred to as Artificial Life. The papers in this book are the key works for understanding Turing's phenomenal contribution across all these fields. The collection includes Turing's declassified wartime Treatise on the Enigma; letters from Turing to Churchill and to codebreakers; lectures, papers, and broadcasts which opened up the concept of AI and its implications; and the paper which formed the genesis of the investigation of Artificial Life.

What Computers Still Can't Do: A Critique of Artificial Reason


Hubert L. Dreyfus - 1972
    The world has changed since this book first appeared in 1972. Today it is clear that "good old-fashioned AI," based on the idea of using symbolic representations to produce general intelligence, is in decline (although several believers still pursue its pot of gold), and the focus of the AI community has shifted to more complex models of the mind. It has also become more common for AI researchers to seek out and study philosophy. For this edition of his now classic book, Dreyfus has added a lengthy new introduction outlining these changes and assessing the paradigms of connectionism and neural networks that have transformed the field. At a time when researchers were proposing grand plans for general problem solvers and automatic translation machines, Dreyfus predicted that they would fail because their conception of mental functioning was naive, and he suggested that they would do well to acquaint themselves with modern philosophical approaches to human being. "What Computers Still Can't Do" was widely attacked but quietly studied. Dreyfus's arguments are still provocative and focus our attention once again on what it is that makes human beings unique.

Vehicles: Experiments in Synthetic Psychology


Valentino Braitenberg - 1984
    Braitenberg's vehicles are a series of hypothetical, self-operating machines that exhibit increasingly intricate, if not always successful or civilized, behavior. Each of the vehicles in the series incorporates the essential features of all the earlier models, and along the way they come to embody aggression, love, logic, manifestations of foresight, concept formation, creative thinking, personality, and free will. In a section of extensive biological notes, Braitenberg locates many elements of his fantasy in current brain research.