Book picks similar to
Probabilistic Analysis of Algorithms: On Computing Methodologies for Computer Algorithms Performance Evaluation by Micha Hofri
Concepts of Programming Languages
Robert W. Sebesta - 1988
This book presents the principles, paradigms, design, and implementation of modern programming languages, with increased coverage of the object-oriented programming paradigm. It also covers semantics and Java.
Artificial Intelligence: Structures and Strategies for Complex Problem Solving
George F. Luger - 1997
This book is suitable for a one- or two-semester university course on AI, as well as for researchers in the field.
Probabilistic Graphical Models: Principles and Techniques
Daphne Koller - 2009
The framework of probabilistic graphical models, presented in this book, provides a general approach to reasoning under uncertainty. The approach is model-based, allowing interpretable models to be constructed and then manipulated by reasoning algorithms. These models can also be learned automatically from data, allowing the approach to be used in cases where manually constructing a model is difficult or even impossible. Because uncertainty is an inescapable aspect of most real-world applications, the book focuses on probabilistic models, which make the uncertainty explicit and provide models that are more faithful to reality.

Probabilistic Graphical Models discusses a variety of models, spanning Bayesian networks, undirected Markov networks, discrete and continuous models, and extensions to deal with dynamical systems and relational data. For each class of models, the text describes the three fundamental cornerstones: representation, inference, and learning, presenting both basic concepts and advanced techniques. Finally, the book considers the use of the proposed framework for causal reasoning and decision making under uncertainty.

The main text in each chapter provides the detailed technical development of the key ideas. Most chapters also include boxes with additional material: skill boxes, which describe techniques; case study boxes, which discuss empirical cases related to the approach described in the text, including applications in computer vision, robotics, natural language understanding, and computational biology; and concept boxes, which present significant concepts drawn from the material in the chapter. Instructors (and readers) can group chapters in various combinations, from core topics to more technically advanced material, to suit their particular needs.
Object-Oriented Software Construction (Book/CD-ROM)
Bertrand Meyer - 1988
A whole generation was introduced to object technology through the first edition of this book. This long-awaited new edition retains the qualities of clarity, practicality and scholarship that made the first an instant bestseller, but has been thoroughly revised and expanded.

Among the new topics covered in depth are: concurrency, distribution, client/server and the Internet, object-oriented databases, design by contract, fundamental design patterns, finding classes, the use and misuse of inheritance, abstract data types, and typing issues. The book also includes completely updated discussions of reusability, modularity, software quality, object-oriented languages, memory management, and many other essential topics.
What Algorithms Want: Imagination in the Age of Computing
Ed Finn - 2017
It's as if we think of code as a magic spell, an incantation to reveal what we need to know and even what we want. Humans have always believed that certain invocations—the marriage vow, the shaman's curse—do not merely describe the world but make it. Computation casts a cultural shadow that is shaped by this long tradition of magical thinking. In this book, Ed Finn considers how the algorithm—in practical terms, “a method for solving a problem”—has its roots not only in mathematical logic but also in cybernetics, philosophy, and magical thinking.

Finn argues that the algorithm deploys concepts from the idealized space of computation in a messy reality, with unpredictable and sometimes fascinating results. Drawing on sources that range from Neal Stephenson's Snow Crash to Diderot's Encyclopédie, from Adam Smith to the Star Trek computer, Finn explores the gap between theoretical ideas and pragmatic instructions. He examines the development of intelligent assistants like Siri, the rise of algorithmic aesthetics at Netflix, Ian Bogost's satiric Facebook game Cow Clicker, and the revolutionary economics of Bitcoin. He describes Google's goal of anticipating our questions, Uber's cartoon maps and black box accounting, and what Facebook tells us about programmable value, among other things.

If we want to understand the gap between abstraction and messy reality, Finn argues, we need to build a model of “algorithmic reading” and scholarship that attends to process, spearheading a new experimental humanities.
Machine Learning: A Probabilistic Perspective
Kevin P. Murphy - 2012
Machine learning develops methods that can automatically detect patterns in data and then use the uncovered patterns to predict future data. This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach.

The coverage combines breadth and depth, offering necessary background material on such topics as probability, optimization, and linear algebra as well as discussion of recent developments in the field, including conditional random fields, L1 regularization, and deep learning. The book is written in an informal, accessible style, complete with pseudo-code for the most important algorithms. All topics are copiously illustrated with color images and worked examples drawn from such application domains as biology, text processing, computer vision, and robotics. Rather than providing a cookbook of different heuristic methods, the book stresses a principled model-based approach, often using the language of graphical models to specify models in a concise and intuitive way. Almost all the models described have been implemented in a MATLAB software package—PMTK (probabilistic modeling toolkit)—that is freely available online. The book is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.
Core Java 2, Volume I--Fundamentals (Core Series)
Cay S. Horstmann - 1999
A no-nonsense tutorial and reliable reference, this book features thoroughly tested real-world examples. The most important language and library features are demonstrated with deliberately simple sample programs, but they aren't fake and they don't cut corners. More importantly, all of the programs have been updated for J2SE 5.0 and should make good starting points for your own code. You won't find any toy examples here. This is a book for programmers who want to write real code to solve real problems.

Cay S. Horstmann is a professor of computer science at San Jose State University. Previously he was vice president and chief technology officer of Preview Systems Inc. and a consultant on C++, Java, and Internet programming for major corporations, universities, and organizations. Gary Cornell has written or cowritten more than twenty popular computer books. He has a Ph.D. from Brown University and has been a visiting scientist at IBM Watson Laboratories, as well as a professor at the University of Connecticut.
The Golden Ticket: P, NP, and the Search for the Impossible
Lance Fortnow - 2013
Simply stated, the P-NP problem asks whether every problem whose solution can be quickly checked by computer can also be quickly solved by computer. The Golden Ticket provides a nontechnical introduction to P-NP, its rich history, and its algorithmic implications for everything we do with computers and beyond. Lance Fortnow traces the history and development of P-NP, giving examples from a variety of disciplines, including economics, physics, and biology. He explores problems that capture the full difficulty of the P-NP dilemma, from discovering the shortest route through all the rides at Disney World to finding large groups of friends on Facebook. The Golden Ticket explores what we truly can and cannot achieve computationally, describing the benefits and unexpected challenges of this compelling problem.
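As a rough illustration of the check-versus-solve distinction described above (not an example from Fortnow's book), here is a minimal Python sketch built around the "groups of friends" idea: verifying that a proposed group is mutually connected takes only a handful of pair checks, while finding such a group by brute force means trying every possible group. The friendship graph and names are hypothetical.

from itertools import combinations

def is_clique(graph, group):
    # Checking a proposed answer is fast: test every pair inside the group.
    return all(v in graph[u] for u, v in combinations(group, 2))

def find_clique(graph, k):
    # Finding an answer by brute force: try every size-k group,
    # a search that blows up as the graph grows.
    for group in combinations(graph, k):
        if is_clique(graph, group):
            return group
    return None

# Hypothetical friendship graph as adjacency sets.
friends = {
    "ann": {"bob", "cat", "dan"},
    "bob": {"ann", "cat"},
    "cat": {"ann", "bob"},
    "dan": {"ann"},
}

print(is_clique(friends, ("ann", "bob", "cat")))  # True: verified with three pair checks
print(find_clique(friends, 3))                    # ('ann', 'bob', 'cat'): found only by searching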
ENIAC: The Triumphs and Tragedies of the World's First Computer
Scott McCartney - 1999
Ctrl+Shift+Enter Mastering Excel Array Formulas: Do the Impossible with Excel Formulas Thanks to Array Formula Magic
Mike Girvin - 2013
Beginning with an introduction to array formulas, this manual examines topics such as how they differ from ordinary formulas, the benefits and drawbacks of their use, functions that can and cannot handle array calculations, and array constants and functions. The practical applications surveyed include how to extract data from tables and unique lists, how to get results that match any criteria, and how to use various methods for unique counts. This book contains 529 screenshots.
Data Structures and Algorithms in Python
Michael T. Goodrich - 2012
Data Structures and Algorithms in Python is the first mainstream object-oriented book available for the Python data structures course. Designed to provide a comprehensive introduction to data structures and algorithms, including their design, analysis, and implementation, the text maintains the same general structure as Data Structures and Algorithms in Java and Data Structures and Algorithms in C++.
Make Your Own Neural Network
Tariq Rashid - 2016
Neural networks are a key element of deep learning and artificial intelligence, which today is capable of some truly impressive feats. Yet too few really understand how neural networks actually work. This guide will take you on a fun and unhurried journey, starting from very simple ideas and gradually building up an understanding of how neural networks work. You won't need any mathematics beyond secondary school, and an accessible introduction to calculus is also included. The ambition of this guide is to make neural networks as accessible as possible to as many readers as possible - there are enough texts for advanced readers already! You'll learn to code in Python and make your own neural network, teaching it to recognise human handwritten numbers and performing as well as professionally developed networks.

Part 1 is about ideas. We introduce the mathematical ideas underlying neural networks, gently and with lots of illustrations and examples.

Part 2 is practical. We introduce the popular and easy-to-learn Python programming language, and gradually build up a neural network which can learn to recognise human handwritten numbers, easily getting it to perform as well as networks made by professionals.

Part 3 extends these ideas further. We push the performance of our neural network to an industry-leading 98% using only simple ideas and code, test the network on your own handwriting, take a privileged peek inside the mysterious mind of a neural network, and even get it all working on a Raspberry Pi. All the code in this guide has been tested to work on a Raspberry Pi Zero.
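For readers curious what this kind of network looks like in miniature, here is a compressed, hypothetical Python sketch of a three-layer network with sigmoid activations trained by gradient descent, in the spirit of what the book builds. The layer sizes, learning rate, and the made-up toy example below stand in for the book's handwritten-digit data; none of this is the book's own code.

import numpy as np

class TinyNetwork:
    """A three-layer (input, hidden, output) fully connected network with sigmoid activations."""

    def __init__(self, n_in, n_hidden, n_out, lr=0.2, seed=1):
        rng = np.random.default_rng(seed)
        # Small random starting weights for the two weight matrices.
        self.w_ih = rng.normal(0.0, n_in ** -0.5, (n_hidden, n_in))
        self.w_ho = rng.normal(0.0, n_hidden ** -0.5, (n_out, n_hidden))
        self.lr = lr

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def query(self, inputs):
        # Feed a signal forward through both layers.
        x = np.array(inputs, ndmin=2).T
        hidden = self._sigmoid(self.w_ih @ x)
        return self._sigmoid(self.w_ho @ hidden)

    def train(self, inputs, targets):
        x = np.array(inputs, ndmin=2).T
        t = np.array(targets, ndmin=2).T
        hidden = self._sigmoid(self.w_ih @ x)
        outputs = self._sigmoid(self.w_ho @ hidden)
        # Backpropagate the output error and nudge both weight matrices (gradient descent).
        delta_o = (t - outputs) * outputs * (1 - outputs)
        delta_h = (self.w_ho.T @ delta_o) * hidden * (1 - hidden)
        self.w_ho += self.lr * delta_o @ hidden.T
        self.w_ih += self.lr * delta_h @ x.T

# Toy usage: a made-up 3-"pixel" input repeatedly trained toward a 2-class target pattern.
net = TinyNetwork(n_in=3, n_hidden=5, n_out=2, lr=0.2)
for _ in range(500):
    net.train([0.9, 0.1, 0.8], [0.99, 0.01])
print(net.query([0.9, 0.1, 0.8]).ravel())  # the outputs drift toward the [0.99, 0.01] target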
The Computational Beauty of Nature: Computer Explorations of Fractals, Chaos, Complex Systems, and Adaptation
Gary William Flake - 1998
Distinguishing agents (e.g., molecules, cells, animals, and species) from their interactions (e.g., chemical reactions, immune system responses, sexual reproduction, and evolution), Flake argues that it is the computational properties of interactions that account for much of what we think of as beautiful and interesting. From this basic thesis, Flake explores what he considers to be today's four most interesting computational topics: fractals, chaos, complex systems, and adaptation.

Each of the book's parts can be read independently, enabling even the casual reader to understand and work with the basic equations and programs. Yet the parts are bound together by the theme of the computer as a laboratory and a metaphor for understanding the universe. The inspired reader will experiment further with the ideas presented to create fractal landscapes, chaotic systems, artificial life forms, genetic algorithms, and artificial neural networks.
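As a tiny, self-contained taste of the "chaos" topic mentioned above (not code from the book), the logistic map below shows how two trajectories that start almost identically end up disagreeing completely; the starting values and iteration count are arbitrary choices.

def logistic(x, r=4.0):
    # One step of the logistic map, a standard one-line model of chaotic dynamics.
    return r * x * (1.0 - x)

a, b = 0.3, 0.3000001  # two starting points that differ by one part in ten million
for _ in range(40):
    a, b = logistic(a), logistic(b)
print(round(a, 4), round(b, 4))  # after 40 steps the two trajectories no longer agree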
Reinforcement Learning: An Introduction
Richard S. Sutton - 1998
Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability.

The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning.
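As a concrete glimpse of one of the Part II methods named above (a hypothetical sketch, not the book's own code), here is tabular TD(0) value estimation on a small random-walk chain; the chain size, step size, and episode count are illustrative choices.

import random

n_states = 5                     # non-terminal states 0..4; stepping off either end terminates
values = [0.5] * n_states        # value estimates, initialised arbitrarily
alpha = 0.1                      # step size

def reward_and_value(state):
    # Return (reward, estimated value) for arriving in `state`.
    if state < 0:
        return 0.0, 0.0          # left terminal: no reward
    if state >= n_states:
        return 1.0, 0.0          # right terminal: reward of 1
    return 0.0, values[state]

random.seed(0)
for _ in range(2000):            # episodes
    s = n_states // 2            # start in the middle
    while 0 <= s < n_states:
        s_next = s + random.choice((-1, 1))
        r, v_next = reward_and_value(s_next)
        # TD(0) update: move V(s) toward the one-step bootstrapped target r + V(s').
        values[s] += alpha * (r + v_next - values[s])
        s = s_next

print([round(v, 2) for v in values])  # estimates settle near 1/6, 2/6, ..., 5/6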