The Art of Doing Science and Engineering: Learning to Learn


Richard Hamming - 1996
    By presenting actual experiences and analyzing them as they are described, the author conveys the developmental thought processes he employed and shows that a style of thinking that leads to successful results can be learned. Along with spectacular successes, he also conveys how failures helped shape those thought processes. The book gives the reader a style of thinking that will enhance a person's ability to function as a problem-solver of complex technical issues. It consists of a collection of stories about the author's participation in significant discoveries, relating how those discoveries came about and, most importantly, analyzing the thought processes and reasoning that took place as the author and his associates progressed through engineering problems.

Data Science for Business: What you need to know about data mining and data-analytic thinking


Foster Provost - 2013
    This guide also helps you understand the many data-mining techniques in use today. Based on an MBA course Provost has taught at New York University over the past ten years, Data Science for Business provides examples of real-world business problems to illustrate these principles. You’ll not only learn how to improve communication between business stakeholders and data scientists, but also how to participate intelligently in your company’s data science projects. You’ll also discover how to think data-analytically, and fully appreciate how data science methods can support business decision-making. The book shows you how to:
    - Understand how data science fits in your organization, and how you can use it for competitive advantage
    - Treat data as a business asset that requires careful investment if you’re to gain real value
    - Approach business problems data-analytically, using the data-mining process to gather good data in the most appropriate way
    - Learn general concepts for actually extracting knowledge from data (one such concept is sketched below)
    - Apply data science principles when interviewing data science job candidates
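    As a rough illustration of one of those general concepts, here is a minimal sketch in Python of entropy and information gain, a standard way of measuring how informative an attribute is about a target; the toy churn records are invented for this example and are not taken from the book.

        # Toy illustration of entropy and information gain, a standard
        # data-mining concept; the records below are invented, not the book's.
        from collections import Counter
        from math import log2

        def entropy(labels):
            """Shannon entropy (in bits) of a list of class labels."""
            total = len(labels)
            return -sum((c / total) * log2(c / total) for c in Counter(labels).values())

        def information_gain(records, attribute, target):
            """How much splitting on `attribute` reduces entropy of `target`."""
            parent = entropy([r[target] for r in records])
            remainder = 0.0
            for value in {r[attribute] for r in records}:
                subset = [r[target] for r in records if r[attribute] == value]
                remainder += len(subset) / len(records) * entropy(subset)
            return parent - remainder

        # Hypothetical customer records: does the customer churn?
        data = [
            {"plan": "basic",   "churn": "yes"},
            {"plan": "basic",   "churn": "yes"},
            {"plan": "basic",   "churn": "no"},
            {"plan": "premium", "churn": "no"},
            {"plan": "premium", "churn": "no"},
        ]
        print(f"information gain of 'plan': {information_gain(data, 'plan', 'churn'):.3f} bits")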

The Imposter's Handbook


Rob Conery - 2016
    New languages, new frameworks, new ways of doing things - staying current in the industry is a constant struggle, and that struggle leaves no time to learn the foundational concepts and skills that come with a degree in Computer Science.

The Eudaemonic Pie


Thomas A. Bass - 1985
    “The result is a veritable piñata …”

The Productive Programmer


Neal Ford - 2008
    The Productive Programmer offers critical timesaving and productivity tools that you can adopt right away, no matter what platform you use. Master developer Neal Ford not only offers advice on the mechanics of productivity (how to work smarter, spurn interruptions, get the most out of your computer, and avoid repetition), he also details valuable practices that will help you elude common traps, improve your code, and become more valuable to your team. You'll learn to:
    - Write the test before you write the code (sketched below)
    - Manage the lifecycle of your objects fastidiously
    - Build only what you need now, not what you might need later
    - Apply ancient philosophies to software development
    - Question authority, rather than blindly adhere to standards
    - Make hard things easier and impossible things possible through meta-programming
    - Be sure all code within a method is at the same level of abstraction
    - Pick the right editor and assemble the best tools for the job
    This isn't theory, but the fruits of Ford's real-world experience as an Application Architect at the global IT consultancy ThoughtWorks. Whether you're a beginner or a pro with years of experience, you'll improve your work and your career with the simple and straightforward principles in The Productive Programmer.
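    As a small, hedged illustration of the first practice above (write the test before you write the code), here is a test-first sketch in Python rather than any particular language from the book; the slug helper and its expected behavior are invented for the example.

        # Test-first sketch: the test below is written against behavior that does
        # not exist yet; then the smallest implementation that passes is added.
        import unittest

        def slug(title):
            """Hypothetical helper: turn a page title into a URL slug."""
            return "-".join(title.lower().split())

        class TestSlug(unittest.TestCase):
            def test_lowercases_and_hyphenates(self):
                self.assertEqual(slug("The Productive Programmer"),
                                 "the-productive-programmer")

        if __name__ == "__main__":
            unittest.main()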

Big Data Now: 2012 Edition


O'Reilly Media Inc. - 2012
    It's not just a technical book or just a business guide. Data is ubiquitous and it doesn't pay much attention to borders, so we've calibrated our coverage to follow it wherever it goes. In the first edition of Big Data Now, the O'Reilly team tracked the birth and early development of data tools and data science. Now, with this second edition, we're seeing what happens when big data grows up: how it's being applied, where it's playing a role, and the consequences -- good and bad alike -- of data's ascendance. We've organized the second edition of Big Data Now into five areas:
    - Getting Up to Speed With Big Data -- Essential information on the structures and definitions of big data.
    - Big Data Tools, Techniques, and Strategies -- Expert guidance for turning big data theories into big data products.
    - The Application of Big Data -- Examples of big data in action, including a look at the downside of data.
    - What to Watch for in Big Data -- Thoughts on how big data will evolve and the role it will play across industries and domains.
    - Big Data and Health Care -- A special section exploring the possibilities that arise when data and health care come together.

sed and awk Pocket Reference: Text Processing with Regular Expressions


Arnold Robbins - 2000
    sed, awk, and regular expressions allow programmers and system administrators to automate editing tasks that need to be performed on one or more files, to simplify the task of performing the same edits on multiple files, and to write conversion programs. The sed & awk Pocket Reference is a companion volume to sed & awk, Second Edition; Unix in a Nutshell, Third Edition; and Effective awk Programming, Third Edition. This new edition has expanded coverage of gawk (GNU awk), and includes sections on:
    - An overview of sed and awk's command-line syntax
    - Alphabetical summaries of commands, including nawk and gawk
    - Profiling with pgawk
    - Coprocesses and sockets with gawk
    - Internationalization with gawk
    - A listing of resources for sed and awk users
    This small book is a handy reference guide to the information presented in the larger volumes. It presents a concise summary of regular expressions and pattern matching, and summaries of sed and awk. Arnold Robbins, an Atlanta native now happily living in Israel, is a professional programmer and technical author and coauthor of various O'Reilly Unix titles. He has been working with Unix systems since 1980, and currently maintains gawk and its documentation.
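    The one-liners the book summarizes are written in sed and awk themselves; purely as a rough analogue in Python (not one of the book's tools), the same kind of regular-expression substitution looks like this, with the log lines invented for illustration.

        # Rough Python analogue of a sed substitution such as  s/ERROR/WARN/g
        # applied line by line; the input lines are made up for illustration.
        import re

        lines = [
            "2024-01-01 ERROR disk full",
            "2024-01-02 INFO  backup ok",
            "2024-01-03 ERROR disk full",
        ]
        for line in lines:
            print(re.sub(r"ERROR", "WARN", line))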

Pro C# 3.0 and the .NET 3.5 Framework (Pro)


Andrew Troelsen - 2007
    Since that time, this text has been revised, tweaked, and enhanced to account for the changes found within each release of the .NET platform (1.1, 2.0, 3.0, and now 3.5). .NET 3.0 was more of an augmentative release, essentially providing three new APIs: Windows Presentation Foundation (WPF), Windows Communication Foundation (WCF), and Windows Workflow Foundation (WF). As you would expect, coverage of the "W's" has been expanded a great deal in this version of the book from the previous Special Edition text. Unlike .NET 3.0, .NET 3.5 provides dozens of C# language features and .NET APIs. This edition of the book will walk you through all of this material using the same readable approach as was found in previous editions. Rest assured, you'll find detailed coverage of Language Integrated Query (LINQ), the C# 2008 language changes (automatic properties, extension methods, anonymous types, etc.), and the numerous bells and whistles of Visual Studio 2008.
    What you'll learn:
    - Everything you need to know: get up to speed with C# 2008 quickly and efficiently.
    - Discover all the new .NET 3.5 features: Language Integrated Query, anonymous types, extension methods, automatic properties, and more.
    - Get a professional foothold: targeted to appeal to experienced software professionals, this book gives you the facts you need the way you need to see them.
    - A rock-solid foundation: focuses on everything you need to be a successful .NET 3.5 programmer, not just the new features. Get comfortable with all the core aspects of the platform, including assemblies, remoting, Windows Forms, Web Forms, ADO.NET, XML web services, and much more.
    Who this book is for: If you're checking out this book for the first time, understand that it targets experienced software professionals and/or students of computer science (so please don't expect three chapters devoted to "for" loops). The mission of this text is to provide you with a rock-solid foundation in the C# 2008 programming language and the core aspects of the .NET platform (object-oriented programming, assemblies, file IO, Windows Forms/WPF, ASP.NET, ADO.NET, WCF, WF, etc.). Once you digest the information presented in these 33 chapters, you'll be in a perfect position to apply this knowledge to your specific programming assignments, and you'll be well equipped to explore the .NET universe on your own terms.

Concrete Mathematics: A Foundation for Computer Science


Ronald L. Graham - 1988
    "More concretely," the authors explain, "it is the controlled manipulation of mathematical formulas, using a collection of techniques for solving problems."

Thinking in Systems: A Primer


Donella H. Meadows - 2008
    Edited by the Sustainability Institute’s Diana Wright, this essential primer brings systems thinking out of the realm of computers and equations and into the tangible world, showing readers how to develop the systems-thinking skills that thought leaders across the globe consider critical for 21st-century life. Some of the biggest problems facing the world—war, hunger, poverty, and environmental degradation—are essentially system failures. They cannot be solved by fixing one piece in isolation from the others, because even seemingly minor details have enormous power to undermine the best efforts of too-narrow thinking. While readers will learn the conceptual tools and methods of systems thinking, the heart of the book is grander than methodology. Donella Meadows was known as much for nurturing positive outcomes as she was for delving into the science behind global dilemmas. She reminds readers to pay attention to what is important, not just what is quantifiable, to stay humble, and to stay a learner. In a world growing ever more complicated, crowded, and interdependent, Thinking in Systems helps readers avoid confusion and helplessness, the first step toward finding proactive and effective solutions.
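    A minimal sketch, not from the book, of the kind of stock-and-flow model with a balancing feedback loop that systems thinking works with; the Python code and parameter values below are invented for illustration.

        # Bathtub-style stock drained toward a goal by a balancing feedback loop.
        # All numbers are invented for illustration.
        stock = 100.0          # e.g. items held in inventory
        goal = 40.0            # the level the system "wants"
        adjustment_rate = 0.2  # fraction of the gap closed each time step

        for step in range(10):
            gap = goal - stock              # the feedback signal
            stock += adjustment_rate * gap  # flow is proportional to the gap
            print(f"step {step}: stock = {stock:.1f}")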

Mining the Social Web: Analyzing Data from Facebook, Twitter, LinkedIn, and Other Social Media Sites


Matthew A. Russell - 2011
    You’ll learn how to combine social web data, analysis techniques, and visualization to find what you’ve been looking for in the social haystack—as well as useful information you didn’t know existed. Each standalone chapter introduces techniques for mining data in different areas of the social Web, including blogs and email. All you need to get started is a programming background and a willingness to learn basic Python tools.
    - Get a straightforward synopsis of the social web landscape
    - Use adaptable scripts on GitHub to harvest data from social network APIs such as Twitter, Facebook, LinkedIn, and Google+
    - Learn how to employ easy-to-use Python tools to slice and dice the data you collect
    - Explore social connections in microformats with the XHTML Friends Network
    - Apply advanced mining techniques such as TF-IDF, cosine similarity, collocation analysis, document summarization, and clique detection (two of these are sketched below)
    - Build interactive visualizations with web technologies based upon HTML5 and JavaScript toolkits
    "A rich, compact, useful, practical introduction to a galaxy of tools, techniques, and theories for exploring structured and unstructured data." --Alex Martelli, Senior Staff Engineer, Google
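    Two of the techniques named above, TF-IDF weighting and cosine similarity, can be sketched in a few lines of plain Python; the three toy "posts" below are invented, and this code is not taken from the book's GitHub scripts.

        # TF-IDF vectors and cosine similarity over a toy corpus of three "posts".
        from collections import Counter
        from math import log, sqrt

        docs = [
            "python tools for mining the social web",
            "mining twitter data with python",
            "cooking recipes for a summer picnic",
        ]
        tokenized = [d.split() for d in docs]
        vocab = sorted({w for doc in tokenized for w in doc})

        def tf_idf(doc):
            counts = Counter(doc)
            vec = []
            for w in vocab:
                tf = counts[w] / len(doc)
                df = sum(1 for d in tokenized if w in d)
                idf = log(len(tokenized) / df) if df else 0.0
                vec.append(tf * idf)
            return vec

        def cosine(u, v):
            dot = sum(a * b for a, b in zip(u, v))
            norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
            return dot / norm if norm else 0.0

        vectors = [tf_idf(d) for d in tokenized]
        print(cosine(vectors[0], vectors[1]))  # related posts -> higher similarity
        print(cosine(vectors[0], vectors[2]))  # unrelated post -> lower similarity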

Computer Age Statistical Inference: Algorithms, Evidence, and Data Science


Bradley Efron - 2016
    'Big data', 'data science', and 'machine learning' have become familiar terms in the news, as statistical methods are brought to bear upon the enormous data sets of modern science and commerce. How did we get here? And where are we going? This book takes us on an exhilarating journey through the revolution in data analysis following the introduction of electronic computation in the 1950s. Beginning with classical inferential theories - Bayesian, frequentist, Fisherian - individual chapters take up a series of influential topics: survival analysis, logistic regression, empirical Bayes, the jackknife and bootstrap, random forests, neural networks, Markov chain Monte Carlo, inference after model selection, and dozens more. The distinctly modern approach integrates methodology and algorithms with statistical inference. The book ends with speculation on the future direction of statistics and data science.
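    One of the topics listed above, the bootstrap, fits in a few lines; the sketch below uses simulated data in Python and illustrates the idea rather than reproducing anything from the book.

        # Bootstrap estimate of the standard error of a sample mean.
        # The data are simulated; this illustrates the idea, not the book's code.
        import random

        random.seed(0)
        sample = [random.gauss(10, 2) for _ in range(50)]  # hypothetical measurements

        def mean(xs):
            return sum(xs) / len(xs)

        boot_means = []
        for _ in range(2000):
            resample = [random.choice(sample) for _ in sample]  # draw with replacement
            boot_means.append(mean(resample))

        m = mean(boot_means)
        se = (sum((b - m) ** 2 for b in boot_means) / (len(boot_means) - 1)) ** 0.5
        print(f"sample mean {mean(sample):.2f}, bootstrap standard error {se:.2f}")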

Understanding Computation: From Simple Machines to Impossible Programs


Tom Stuart - 2013
    Understanding Computation explains theoretical computer science in a context you’ll recognize, helping you appreciate why these ideas matter and how they can inform your day-to-day programming. Rather than use mathematical notation or an unfamiliar academic programming language like Haskell or Lisp, this book uses Ruby in a reductionist manner to present formal semantics, automata theory, and functional programming with the lambda calculus. It’s ideal for programmers versed in modern languages, with little or no formal training in computer science.
    * Understand fundamental computing concepts, such as Turing completeness in languages
    * Discover how programs use dynamic semantics to communicate ideas to machines
    * Explore what a computer can do when reduced to its bare essentials
    * Learn how universal Turing machines led to today’s general-purpose computers
    * Perform complex calculations, using simple languages and cellular automata
    * Determine which programming language features are essential for computation
    * Examine how halting and self-referencing make some computing problems unsolvable
    * Analyze programs by using abstract interpretation and type systems
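    The book's own examples are in Ruby; as a rough sketch of the automata-theory side in a different language, here is a standard textbook deterministic finite automaton in Python (it accepts binary strings containing an even number of 1s and is not an example from the book).

        # A deterministic finite automaton accepting binary strings with an even
        # number of 1s. A standard textbook machine, not an example from the book.
        TRANSITIONS = {
            ("even", "0"): "even", ("even", "1"): "odd",
            ("odd", "0"): "odd",   ("odd", "1"): "even",
        }
        START, ACCEPTING = "even", {"even"}

        def accepts(string):
            state = START
            for symbol in string:
                state = TRANSITIONS[(state, symbol)]
            return state in ACCEPTING

        print(accepts("1010"))  # True: two 1s
        print(accepts("111"))   # False: three 1s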

Understanding the Linux Kernel


Daniel P. Bovet - 2000
    The kernel handles all interactions between the CPU and the external world, and determines which programs will share processor time, in what order. It manages limited memory so well that hundreds of processes can share the system efficiently, and expertly organizes data transfers so that the CPU isn't kept waiting any longer than necessary for the relatively slow disks. The third edition of Understanding the Linux Kernel takes you on a guided tour of the most significant data structures, algorithms, and programming tricks used in the kernel. Probing beyond superficial features, the authors offer valuable insights to people who want to know how things really work inside their machine. Important Intel-specific features are discussed. Relevant segments of code are dissected line by line. But the book covers more than just the functioning of the code; it explains the theoretical underpinnings of why Linux does things the way it does. This edition of the book covers Version 2.6, which has seen significant changes to nearly every kernel subsystem, particularly in the areas of memory management and block devices. The book focuses on the following topics:
    - Memory management, including file buffering, process swapping, and Direct Memory Access (DMA)
    - The Virtual Filesystem layer and the Second and Third Extended Filesystems
    - Process creation and scheduling
    - Signals, interrupts, and the essential interfaces to device drivers
    - Timing
    - Synchronization within the kernel
    - Interprocess Communication (IPC)
    - Program execution
    Understanding the Linux Kernel will acquaint you with all the inner workings of Linux, but it's more than just an academic exercise. You'll learn what conditions bring out Linux's best performance, and you'll see how it meets the challenge of providing good system response during process scheduling, file access, and memory management in a wide variety of environments. This book will help you make the most of your Linux system.

Information Theory, Inference and Learning Algorithms


David J.C. MacKay - 2002
    These topics lie at the heart of many exciting areas of contemporary science and engineering - communication, signal processing, data mining, machine learning, pattern recognition, computational neuroscience, bioinformatics, and cryptography. This textbook introduces theory in tandem with applications. Information theory is taught alongside practical communication systems, such as arithmetic coding for data compression and sparse-graph codes for error-correction. A toolbox of inference techniques, including message-passing algorithms, Monte Carlo methods, and variational approximations, is developed alongside applications of these tools to clustering, convolutional codes, independent component analysis, and neural networks. The final part of the book describes the state of the art in error-correcting codes, including low-density parity-check codes, turbo codes, and digital fountain codes -- the twenty-first century standards for satellite communications, disk drives, and data broadcast. Richly illustrated, filled with worked examples and over 400 exercises, some with detailed solutions, David MacKay's groundbreaking book is ideal for self-learning and for undergraduate or graduate courses. Interludes on crosswords, evolution, and sex provide entertainment along the way. In sum, this is a textbook on information, communication, and coding for a new generation of students, and an unparalleled entry point into these subjects for professionals in areas as diverse as computational biology, financial engineering, and machine learning.
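    A minimal sketch of the book's starting point, the Shannon entropy of a source, in Python; the four-symbol distribution below is invented for illustration.

        # Shannon entropy in bits: the average information content of a source,
        # and the limit that lossless compressors such as arithmetic coders approach.
        from math import log2

        def entropy(probabilities):
            return -sum(p * log2(p) for p in probabilities if p > 0)

        p = [0.5, 0.25, 0.125, 0.125]   # a hypothetical four-symbol source
        print(f"{entropy(p):.3f} bits per symbol")   # 1.750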