Smart and Gets Things Done: Joel Spolsky's Concise Guide to Finding the Best Technical Talent


Joel Spolsky - 2007
    The best software developers are ten times more productive than average developers. Ten times. You can't afford not to hire them. But if you haven't been reading Joel Spolsky's books or blog, you probably don't know how to find them and make them want to work for you. In this brief book, Joel reveals all his secrets--from his years at Microsoft, and as the co-founder of Fog Creek Software--for recruiting the best developers in the world. If you've ever wondered what you should be looking for in a resume, if you've ever struggled to decide whether to hire someone at the end of an interview, or if you're wondering why you can't find great programmers, stop everything and read this book.

AIQ: How People and Machines Are Smarter Together


Nick Polson - 2018
    “AIQ explores the fascinating history of the ideas that drive this technology of the future and demystifies the core concepts behind it; the result is a positive and entertaining look at the great potential unlocked by marrying human creativity with powerful machines.” —Steven D. Levitt, bestselling co-author of Freakonomics. From leading data scientists Nick Polson and James Scott, what everyone needs to know to understand how artificial intelligence is changing the world and how we can use this knowledge to make better decisions in our own lives. Dozens of times per day, we all interact with intelligent machines that are constantly learning from the wealth of data now available to them. These machines, from smart phones to talking robots to self-driving cars, are remaking the world in the 21st century in the same way that the Industrial Revolution remade the world in the 19th century. AIQ is based on a simple premise: if you want to understand the modern world, then you have to know a little bit of the mathematical language spoken by intelligent machines. AIQ will teach you that language—but in an unconventional way, anchored in stories rather than equations. You will meet a fascinating cast of historical characters who have a lot to teach you about data, probability, and better thinking. Along the way, you'll see how these same ideas are playing out in the modern age of big data and intelligent machines—and how these technologies will soon help you to overcome some of your built-in cognitive weaknesses, giving you a chance to lead a happier, healthier, more fulfilled life.

Dreaming in Code: Two Dozen Programmers, Three Years, 4,732 Bugs, and One Quest for Transcendent Software


Scott Rosenberg - 2007
    Along the way, we encounter black holes, turtles, snakes, dragons, axe-sharpening, and yak-shaving—and take a guided tour through the theories and methods, both brilliant and misguided, that litter the history of software development, from the famous ‘mythical man-month’ to Extreme Programming. Not just for technophiles but for anyone captivated by the drama of invention, Dreaming in Code offers a window into both the information age and the workings of the human mind.

Bayesian Data Analysis


Andrew Gelman - 1995
    Its world-class authors provide guidance on all aspects of Bayesian data analysis and include examples of real statistical analyses, based on their own research, that demonstrate how to solve complicated problems. Changes in the new edition include:
    - Stronger focus on MCMC
    - Revision of the computational advice in Part III
    - New chapters on nonlinear models and decision analysis
    - Several additional applied examples from the authors' recent research
    - Additional chapters on current models for Bayesian data analysis such as nonlinear models, generalized linear mixed models, and more
    - Reorganization of chapters 6 and 7 on model checking and data collection
    Bayesian computation is currently at a stage where there are many reasonable ways to compute any given posterior distribution. However, the best approach is not always clear ahead of time. Reflecting this, the new edition offers a more pluralistic presentation, giving advice on performing computations from many perspectives while making clear the importance of being aware that there are different ways to implement any given iterative simulation computation. The new approach, additional examples, and updated information make Bayesian Data Analysis an excellent introductory text and a reference that working scientists will use throughout their professional life.
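    The "iterative simulation computation" the blurb refers to means samplers such as MCMC. As a flavor of what that looks like in practice, here is a minimal sketch, not taken from the book, of a random-walk Metropolis sampler for a toy posterior (a normal mean with known unit variance and a wide normal prior); the data values and function names are made up for illustration.

```python
# Minimal random-walk Metropolis sketch for a toy posterior (illustrative only).
# Model: x_i ~ Normal(mu, 1), prior mu ~ Normal(0, 10^2).
import math
import random

data = [4.1, 3.8, 5.2, 4.7, 4.4]  # made-up observations

def log_posterior(mu):
    log_prior = -0.5 * (mu / 10.0) ** 2
    log_lik = sum(-0.5 * (x - mu) ** 2 for x in data)
    return log_prior + log_lik

def metropolis(n_draws=5000, step=0.5, mu=0.0):
    draws = []
    for _ in range(n_draws):
        proposal = mu + random.gauss(0.0, step)
        # Accept with probability min(1, posterior ratio); symmetric proposal.
        if math.log(random.random()) < log_posterior(proposal) - log_posterior(mu):
            mu = proposal
        draws.append(mu)
    return draws

draws = metropolis()
kept = draws[1000:]  # discard burn-in
print("posterior mean of mu ~", sum(kept) / len(kept))
```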

Who Owns the Future?


Jaron Lanier - 2013
    Who Owns the Future? is his visionary reckoning with the most urgent economic and social trend of our age: the poisonous concentration of money and power in our digital networks. Lanier has predicted how technology will transform our humanity for decades, and his insight has never been more urgently needed. He shows how Siren Servers, which exploit big data and the free sharing of information, led our economy into recession, imperiled personal privacy, and hollowed out the middle class. The networks that define our world—including social media, financial institutions, and intelligence agencies—now threaten to destroy it. But there is an alternative. In this provocative, poetic, and deeply humane book, Lanier charts a path toward a brighter future: an information economy that rewards ordinary people for what they do and share on the web.

Implementing Domain-Driven Design


Vaughn Vernon - 2013
    Vaughn Vernon couples guided approaches to implementation with modern architectures, highlighting the importance and value of focusing on the business domain while balancing technical considerations. Building on Eric Evans’ seminal book, Domain-Driven Design, the author presents practical DDD techniques through examples from familiar domains. Each principle is backed up by realistic Java examples–all applicable to C# developers–and all content is tied together by a single case study: the delivery of a large-scale Scrum-based SaaS system for a multitenant environment. The author takes you far beyond “DDD-lite” approaches that embrace DDD solely as a technical toolset, and shows you how to fully leverage DDD’s “strategic design patterns” using Bounded Context, Context Maps, and the Ubiquitous Language. Using these techniques and examples, you can reduce time to market and improve quality, as you build software that is more flexible, more scalable, and more tightly aligned to business goals.
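    The book's running examples are in Java; purely as an illustration of the kind of model its Ubiquitous Language and tactical building blocks point at, here is a hypothetical sketch in Python of an immutable Value Object and a mutable Entity. All names and behaviors below are invented for the example, not drawn from the book's case study.

```python
# Hypothetical DDD-style sketch: a Value Object (immutable, compared by value)
# and an Entity (identified by id, state changes over time). Illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Money:  # Value Object
    amount_cents: int
    currency: str

    def subtract(self, other: "Money") -> "Money":
        if self.currency != other.currency:
            raise ValueError("cannot mix currencies")
        return Money(self.amount_cents - other.amount_cents, self.currency)

@dataclass
class Invoice:  # Entity: identity matters, state evolves
    invoice_id: str
    balance: Money

    def apply_payment(self, payment: Money) -> None:
        self.balance = self.balance.subtract(payment)

invoice = Invoice("INV-42", Money(10_000, "USD"))
invoice.apply_payment(Money(2_500, "USD"))
print(invoice.balance)  # Money(amount_cents=7500, currency='USD')
```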

The Filter Bubble: What the Internet is Hiding From You


Eli Pariser - 2011
    Instead of giving you the most broadly popular result, Google now tries to predict what you are most likely to click on. According to MoveOn.org board president Eli Pariser, Google's change in policy is symptomatic of the most significant shift to take place on the Web in recent years - the rise of personalization. In this groundbreaking investigation of the new hidden Web, Pariser uncovers how this growing trend threatens to control how we consume and share information as a society - and reveals what we can do about it. Though the phenomenon has gone largely undetected until now, personalized filters are sweeping the Web, creating individual universes of information for each of us. Facebook - the primary news source for an increasing number of Americans - prioritizes the links it believes will appeal to you so that if you are a liberal, you can expect to see only progressive links. Even an old-media bastion like "The Washington Post" devotes the top of its home page to a news feed with the links your Facebook friends are sharing. Behind the scenes a burgeoning industry of data companies is tracking your personal information to sell to advertisers, from your political leanings to the color you painted your living room to the hiking boots you just browsed on Zappos. In a personalized world, we will increasingly be typed and fed only news that is pleasant, familiar, and confirms our beliefs - and because these filters are invisible, we won't know what is being hidden from us. Our past interests will determine what we are exposed to in the future, leaving less room for the unexpected encounters that spark creativity, innovation, and the democratic exchange of ideas. While we all worry that the Internet is eroding privacy or shrinking our attention spans, Pariser uncovers a more pernicious and far-reaching trend on the Internet and shows how we can - and must - change course. With vivid detail and remarkable scope, The Filter Bubble reveals how personalization undermines the Internet's original purpose as an open platform for the spread of ideas and could leave us all in an isolated, echoing world.

Information Theory, Inference and Learning Algorithms


David J.C. MacKay - 2002
    These topics lie at the heart of many exciting areas of contemporary science and engineering - communication, signal processing, data mining, machine learning, pattern recognition, computational neuroscience, bioinformatics, and cryptography. This textbook introduces theory in tandem with applications. Information theory is taught alongside practical communication systems, such as arithmetic coding for data compression and sparse-graph codes for error-correction. A toolbox of inference techniques, including message-passing algorithms, Monte Carlo methods, and variational approximations, is developed alongside applications of these tools to clustering, convolutional codes, independent component analysis, and neural networks. The final part of the book describes the state of the art in error-correcting codes, including low-density parity-check codes, turbo codes, and digital fountain codes -- the twenty-first century standards for satellite communications, disk drives, and data broadcast. Richly illustrated, filled with worked examples and over 400 exercises, some with detailed solutions, David MacKay's groundbreaking book is ideal for self-learning and for undergraduate or graduate courses. Interludes on crosswords, evolution, and sex provide entertainment along the way. In sum, this is a textbook on information, communication, and coding for a new generation of students, and an unparalleled entry point into these subjects for professionals in areas as diverse as computational biology, financial engineering, and machine learning.
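    As a taste of the book's central quantity, the sketch below computes the Shannon entropy of a small discrete source - the bits-per-symbol floor that compressors such as arithmetic coding approach. The example distribution is made up; only the entropy formula itself comes from information theory.

```python
# Shannon entropy of a discrete source (illustrative probabilities).
import math

def entropy(probs):
    """H = -sum p * log2 p, skipping zero-probability symbols."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A four-symbol source with skewed probabilities.
source = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}
H = entropy(source.values())
print(f"entropy = {H:.3f} bits/symbol")   # 1.750
print("vs. 2 bits/symbol for a fixed-length code over 4 symbols")
```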

Ambient Findability: What We Find Changes Who We Become


Peter Morville - 2005
    Written by Peter Morville, author of the groundbreaking Information Architecture for the World Wide Web, the book defines our current age as a state of unlimited findability. In other words, anyone can find anything at any time. Complete navigability. Morville discusses the Internet, GIS, and other network technologies that are coming together to make unlimited findability possible. He explores how the melding of these innovations impacts society, since Web access is now a standard requirement for successful people and businesses. But before he does that, Morville looks back at the history of wayfinding and human evolution, suggesting that our fear of being lost has driven us to create maps, charts, and now, the mobile Internet. The book's central thesis is that information literacy, information architecture, and usability are all critical components of this new world order. Hand in hand with that is the contention that only by planning and designing the best possible software, devices, and Internet, will we be able to maintain this connectivity in the future. Morville's book is highlighted with full color illustrations and rich examples that bring his prose to life. Ambient Findability doesn't preach or pretend to know all the answers. Instead, it presents research, stories, and examples in support of its novel ideas. Are we truly at a critical point in our evolution where the quality of our digital networks will dictate how we behave as a species? Is findability indeed the primary key to a successful global marketplace in the 21st century and beyond? Peter Morville takes you on a thought-provoking tour of these memes and more -- ideas that will not only fascinate but will stir your creativity in practical ways that you can apply to your work immediately.

Numsense! Data Science for the Layman: No Math Added


Annalyn Ng - 2017
    Sold in over 85 countries and translated into more than 5 languages.
    Want to get started on data science? Our promise: no math added. This book has been written in layman's terms as a gentle introduction to data science and its algorithms. Each algorithm has its own dedicated chapter that explains how it works, and shows an example of a real-world application. To help you grasp key concepts, we stick to intuitive explanations and visuals. Popular concepts covered include:
    - A/B Testing
    - Anomaly Detection
    - Association Rules
    - Clustering
    - Decision Trees and Random Forests
    - Regression Analysis
    - Social Network Analysis
    - Neural Networks
    Features:
    - Intuitive explanations and visuals
    - Real-world applications to illustrate each algorithm
    - Point summaries at the end of each chapter
    - Reference sheets comparing the pros and cons of algorithms
    - Glossary list of commonly-used terms
    With this book, we hope to give you a practical understanding of data science, so that you, too, can leverage its strengths in making better decisions.
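    As a flavor of the algorithms listed above, here is a minimal sketch, not taken from the book, of one of them: simple linear regression fit by ordinary least squares. The data points and variable names are invented for the example.

```python
# Simple linear regression y ~ slope * x + intercept by ordinary least squares.
# The data points below are made up for the example.
xs = [1, 2, 3, 4, 5]            # e.g. ad spend
ys = [2.1, 4.2, 5.9, 8.1, 9.8]  # e.g. sales

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

print(f"fit: y ~ {slope:.2f} * x + {intercept:.2f}")
print("predicted y at x=6:", round(slope * 6 + intercept, 2))
```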

Learn Python The Hard Way


Zed A. Shaw - 2010
    The title says it is the hard way to learn to write code, but it's actually not. It's the “hard” way only in that it's the way people used to teach things. In this book you will do something incredibly simple that all programmers actually do to learn a language: 1. Go through each exercise. 2. Type in each sample exactly. 3. Make it run. That's it. This will be very difficult at first, but stick with it. If you go through this book, and do each exercise for 1-2 hours a night, then you'll have a good foundation for moving on to another book. You might not really learn “programming” from this book, but you will learn the foundation skills you need to start learning the language. This book's job is to teach you the three most basic essential skills that a beginning programmer needs to know: Reading And Writing, Attention To Detail, Spotting Differences.
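    In the spirit of the book's early exercises (though not copied from it), a script of the kind you would type in exactly, run, and then modify is sketched below; the values are arbitrary.

```python
# A tiny script to type in exactly, run, then deliberately break and fix.
name = "Zed"
age = 35  # illustrative values; change them and re-run

print("Hello,", name)
print("In ten years you will be", age + 10)

# A typical follow-on exercise: read a value from the user instead.
favorite = input("What is your favorite language? ")
print("You said:", favorite)
```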

Foundations of Statistical Natural Language Processing


Christopher D. Manning - 1999
    This foundational text is the first comprehensive introduction to statistical natural language processing (NLP) to appear. The book contains all the theory and algorithms needed for building NLP tools. It provides broad but rigorous coverage of mathematical and linguistic foundations, as well as detailed discussion of statistical methods, allowing students and researchers to construct their own implementations. The book covers collocation finding, word sense disambiguation, probabilistic parsing, information retrieval, and other applications.
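    One of the applications mentioned above, collocation finding, can be illustrated in a few lines: the sketch below counts bigrams in a toy corpus and scores them with pointwise mutual information, a standard association measure. The corpus and all names are made up; this is an illustration, not code from the book.

```python
# Collocation finding via pointwise mutual information (PMI) on a toy corpus.
import math
from collections import Counter

corpus = ("new york is a big city . new york has a busy port . "
          "the city has a new mayor").split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
N = len(corpus)

def pmi(w1, w2):
    """log2 of how much more often the pair occurs than chance predicts."""
    p_xy = bigrams[(w1, w2)] / (N - 1)
    p_x, p_y = unigrams[w1] / N, unigrams[w2] / N
    return math.log2(p_xy / (p_x * p_y))

for pair, count in bigrams.most_common():
    if count > 1:  # only pairs seen more than once
        print(pair, f"count={count}", f"PMI={pmi(*pair):.2f}")
```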

Possible Minds: 25 Ways of Looking at AI


John Brockman - 2019
    "It is the Second Coming and the Apocalypse at the same time: Good AI versus evil AI." --John Brockman. More than sixty years ago, mathematician-philosopher Norbert Wiener published a book on the place of machines in society that ended with a warning: "we shall never receive the right answers to our questions unless we ask the right questions.... The hour is very late, and the choice of good and evil knocks at our door." In the wake of advances in unsupervised, self-improving machine learning, a small but influential community of thinkers is considering Wiener's words again. In Possible Minds, John Brockman gathers their disparate visions of where AI might be taking us. The fruit of the long history of Brockman's profound engagement with the most important scientific minds who have been thinking about AI--from Alison Gopnik and David Deutsch to Frank Wilczek and Stephen Wolfram--Possible Minds is an ideal introduction to the landscape of crucial issues AI presents. The collision between opposing perspectives is salutary and exhilarating; some of these figures, such as computer scientist Stuart Russell, Skype co-founder Jaan Tallinn, and physicist Max Tegmark, are deeply concerned with the threat of AI, including the existential one, while others, notably robotics entrepreneur Rodney Brooks, philosopher Daniel Dennett, and bestselling author Steven Pinker, have a very different view. Serious, searching and authoritative, Possible Minds lays out the intellectual landscape of one of the most important topics of our time.

All of Statistics: A Concise Course in Statistical Inference


Larry Wasserman - 2003
    Taken literally, the title is an exaggeration. But in spirit, the title is apt, as the book does cover a much broader range of topics than a typical introductory book on mathematical statistics. This book is for people who want to learn probability and statistics quickly. It is suitable for graduate or advanced undergraduate students in computer science, mathematics, statistics, and related disciplines. The book includes modern topics like nonparametric curve estimation, bootstrapping, and classification, topics that are usually relegated to follow-up courses. The reader is presumed to know calculus and a little linear algebra. No previous knowledge of probability and statistics is required. Statistics, data mining, and machine learning are all concerned with collecting and analyzing data. For some time, statistics research was conducted in statistics departments while data mining and machine learning research was conducted in computer science departments. Statisticians thought that computer scientists were reinventing the wheel. Computer scientists thought that statistical theory didn't apply to their problems. Things are changing. Statisticians now recognize that computer scientists are making novel contributions while computer scientists now recognize the generality of statistical theory and methodology. Clever data mining algorithms are more scalable than statisticians ever thought possible. Formal statistical theory is more pervasive than computer scientists had realized.
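    Bootstrapping, one of the modern topics the book covers, is easy to illustrate: the sketch below computes a percentile bootstrap confidence interval for a sample mean. The data values are invented; this is a generic bootstrap sketch, not an excerpt from the text.

```python
# Percentile bootstrap confidence interval for a sample mean (illustrative data).
import random

data = [12.1, 9.8, 14.3, 11.0, 10.5, 13.2, 9.1, 12.7]

def bootstrap_means(sample, n_resamples=10_000):
    """Resample with replacement and record the mean of each resample."""
    means = []
    for _ in range(n_resamples):
        resample = [random.choice(sample) for _ in sample]
        means.append(sum(resample) / len(resample))
    return sorted(means)

means = bootstrap_means(data)
lo, hi = means[int(0.025 * len(means))], means[int(0.975 * len(means))]
print(f"95% bootstrap CI for the mean: ({lo:.2f}, {hi:.2f})")
```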

The Visible Ops Handbook: Starting ITIL in 4 Practical Steps


Kevin Behr - 2004
    Visible Ops is comprised of four prescriptive and self-fueling steps that take an organization from any starting point to a continually improving process.
    MAKING ITIL ACTIONABLE
    Although the Information Technology Infrastructure Library (ITIL) provides a wealth of best practices, it lacks prescriptive guidance: What do you implement first, and how do you do it? Moreover, the ITIL books remain relatively expensive to distribute. Other information, publicly available from a variety of sources, is too general and vague to effectively aid organizations that need to start or enhance process improvement efforts. The Visible Ops booklet provides a prescriptive roadmap for organizations beginning or continuing their IT process improvement journey.
    WHY DO WE NEED VISIBLE OPS?
    The Visible Ops methodology was developed because there was not a satisfactory answer to the question: "I believe in the need for IT process improvement, but where do I start?" Since 2000, Gene Kim and Kevin Behr have met with hundreds of IT organizations and identified eight high-performing IT organizations with the highest service levels, best security, and best efficiencies. For years, they studied these high-performing organizations to figure out the secrets to their success. Visible Ops codifies how these organizations achieved their transformation from good to great, showing how interested organizations can replicate the key processes of these high-performing organizations in just four steps:
    1. Stabilize Patient, Modify First Response - Almost 80% of outages are self-inflicted. The first step is to control risky changes and reduce MTTR by addressing how changes are managed and how problems are resolved.
    2. Catch and Release, Find Fragile Artifacts - Often, infrastructure exists that cannot be repeatedly replicated. In this step, we inventory assets, configurations and services, to identify those with the lowest change success rates, highest MTTR and highest business downtime costs.
    3. Establish Repeatable Build Library - The highest return on investment is implementing effective release management processes. This step creates repeatable builds for the most critical assets and services, to make it "cheaper to rebuild than to repair."
    4. Enable Continuous Improvement - The previous steps have progressively built a closed-loop between the Release, Control and Resolution processes. This step implements metrics to allow continuous improvement of all of these process areas, to best ensure that business objectives are met.