Deep Learning


Ian Goodfellow - 2016
    Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.
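    As a rough illustration of the layered composition the blurb describes, here is a minimal sketch (in NumPy, with invented shapes and random weights; nothing here is from the book itself) of a two-layer feedforward pass, where each layer builds a representation out of the simpler one below it:

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.normal(size=4)                         # raw input features
        W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)  # first-layer parameters
        W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)  # second-layer parameters

        h = np.maximum(0, W1 @ x + b1)  # hidden layer: ReLU of a linear map
        y = W2 @ h + b2                 # output layer, built on the hidden features
        print(y.shape)                  # (3,)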

Operational Excellence Pillar: AWS Well-Architected Framework (AWS Whitepaper)


AWS Whitepapers - 2017
    It provides guidance to help you apply best practices in the design, delivery, and maintenance of AWS environments. This documentation is offered for free as a Kindle book, or you can read it in PDF format at https://aws.amazon.com/whitepapers/.

Python Data Science Handbook: Tools and Techniques for Developers


Jake Vanderplas - 2016
    Several resources exist for individual pieces of this data science stack, but only with the Python Data Science Handbook do you get them all: IPython, NumPy, Pandas, Matplotlib, Scikit-Learn, and other related tools. Working scientists and data crunchers familiar with reading and writing Python code will find this comprehensive desk reference ideal for tackling day-to-day issues: manipulating, transforming, and cleaning data; visualizing different types of data; and using data to build statistical or machine learning models. Quite simply, this is the must-have reference for scientific computing in Python. With this handbook, you’ll learn how to use:
    * IPython and Jupyter: provide computational environments for data scientists using Python
    * NumPy: includes the ndarray for efficient storage and manipulation of dense data arrays in Python
    * Pandas: features the DataFrame for efficient storage and manipulation of labeled/columnar data in Python
    * Matplotlib: includes capabilities for a flexible range of data visualizations in Python
    * Scikit-Learn: for efficient and clean Python implementations of the most important and established machine learning algorithms
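    As a hedged illustration of how those pieces fit together, here is a minimal sketch (the data and column names are invented for the example) that passes a NumPy array through a Pandas DataFrame into a Scikit-Learn model:

        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(42)
        x = rng.uniform(0, 10, size=50)    # NumPy: dense numeric array
        df = pd.DataFrame({"x": x,         # Pandas: labeled columns
                           "y": 2.0 * x + rng.normal(0, 1, size=50)})

        model = LinearRegression().fit(df[["x"]], df["y"])  # Scikit-Learn estimator API
        print(model.coef_[0])  # close to the true slope of 2.0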

Data Visualisation: A Handbook for Data Driven Design


Andy Kirk - 2016
    Scholars and students need to be able to analyze, design and curate information into useful tools of communication, insight and understanding. This book is the starting point in learning the process and skills of data visualization, teaching the concepts and skills of how to present data and inspiring effective visual design. Benefits of this book:
    * A flexible step-by-step journey that equips you to achieve great data visualization
    * A curated collection of classic and contemporary examples, giving illustrations of good and bad practice
    * Examples on every page to give creative inspiration
    * Illustrations of good and bad practice show you how to critically evaluate and improve your own work
    * Advice and experience from the best designers in the field
    * Loads of online practical help, checklists, case studies and exercises make this the most comprehensive text available

Investigating the Social World: The Process and Practice of Research


Russell K. Schutt - 1995
    In this new Seventh Edition of his perennially successful social research text, author Russell K. Schutt continues to make research come alive through stories that illustrate the methods presented in each chapter, and hands-on exercises that help students learn by doing. Investigating the Social World helps readers understand research methods as an integrated whole, appreciate the value of both qualitative and quantitative methodologies, and understand the need to make ethical research decisions. New to this edition:
    * upgraded coverage of research methods to include the spread of cell phones and the use of the Internet, including expanded coverage of Web surveys
    * larger page size in full color allows for better display of pedagogical features
    * new 'Research in the News' boxes included within chapters
    * more international examples
    * expanded statistics coverage now includes more coverage of inferential statistics and regression analysis

Data Science for Business: What you need to know about data mining and data-analytic thinking


Foster Provost - 2013
    This guide also helps you understand the many data-mining techniques in use today. Based on an MBA course Provost has taught at New York University over the past ten years, Data Science for Business provides examples of real-world business problems to illustrate these principles. You’ll not only learn how to improve communication between business stakeholders and data scientists, but also how to participate intelligently in your company’s data science projects. You’ll also discover how to think data-analytically, and fully appreciate how data science methods can support business decision-making.
    * Understand how data science fits in your organization, and how you can use it for competitive advantage
    * Treat data as a business asset that requires careful investment if you’re to gain real value
    * Approach business problems data-analytically, using the data-mining process to gather good data in the most appropriate way
    * Learn general concepts for actually extracting knowledge from data
    * Apply data science principles when interviewing data science job candidates

Introduction to Graph Theory


Douglas B. West - 1995
    Verification that algorithms work is emphasized more than their complexity. An effective use of examples and a huge number of interesting exercises demonstrate the topics of trees and distance, matchings and factors, connectivity and paths, graph coloring, edges and cycles, and planar graphs. For those who need to learn to make coherent arguments in the fields of mathematics and computer science.
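    To give a flavor of the graph coloring topic mentioned above, here is a minimal greedy-coloring sketch in Python (the function and the five-cycle example are invented for illustration; the book itself emphasizes proofs rather than code):

        def greedy_coloring(adj):
            """Give each vertex the smallest color unused by its colored neighbors."""
            color = {}
            for v in adj:  # vertex order affects how many colors get used
                used = {color[u] for u in adj[v] if u in color}
                c = 0
                while c in used:
                    c += 1
                color[v] = c
            return color

        # An odd cycle needs 3 colors; greedy finds such a coloring here.
        cycle5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
        print(greedy_coloring(cycle5))  # {0: 0, 1: 1, 2: 0, 3: 1, 4: 2}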

Programming in Python 3: A Complete Introduction to the Python Language


Mark Summerfield - 2008
    It brings together all the knowledge needed to write any program, use any standard or third-party Python 3 library, and create new library modules of your own.
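    As a small, hedged illustration of the "library modules of your own" the blurb mentions, here is an invented example of a reusable module file (the file and function names are made up, not taken from the book):

        # stats_utils.py: a tiny, self-contained library module
        """Small statistics helpers, importable from other programs."""

        def mean(values):
            """Arithmetic mean of a non-empty iterable of numbers."""
            values = list(values)
            return sum(values) / len(values)

        if __name__ == "__main__":
            # Runs only when the file is executed directly, not when imported.
            print(mean([1, 2, 3, 4]))  # 2.5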

Doing Bayesian Data Analysis: A Tutorial Introduction with R and BUGS


John K. Kruschke - 2010
    Included are step-by-step instructions on how to carry out Bayesian data analyses.
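    For flavor, here is a minimal sketch of the simplest kind of Bayesian updating the book teaches, a conjugate beta-binomial model with invented coin-flip counts (the book itself works in R and BUGS, not Python):

        from scipy.stats import beta

        # Uniform Beta(1, 1) prior on a coin's bias; observe 7 heads, 3 tails.
        a_post, b_post = 1 + 7, 1 + 3  # conjugacy gives the posterior in closed form

        print(a_post / (a_post + b_post))           # posterior mean, about 0.667
        print(beta.interval(0.95, a_post, b_post))  # central 95% credible interval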

How to Measure Anything Workbook: Finding the Value of "Intangibles" in Business


Douglas W. Hubbard - 2014
    The invaluable companion to the new edition of the bestselling How to Measure Anything. This workbook walks readers through sample problems and exercises in which they can master and apply the methods discussed in the insightful and eloquent original. The book explains practical methods for measuring a variety of intangibles, including approaches to measuring customer satisfaction, organizational flexibility, technology risk, technology ROI, and other problems in business, government, and not-for-profits.
    * Companion to the revision of the bestselling How to Measure Anything
    * Provides chapter-by-chapter exercises
    * Written by industry leader Douglas Hubbard
    Written by recognized expert Douglas Hubbard, creator of Applied Information Economics, How to Measure Anything Workbook illustrates how the author has used his approach across various industries and how any problem, no matter how difficult, ill-defined, or uncertain, can lend itself to measurement using proven methods.

Information Theory, Inference and Learning Algorithms


David J.C. MacKay - 2002
    These topics lie at the heart of many exciting areas of contemporary science and engineering: communication, signal processing, data mining, machine learning, pattern recognition, computational neuroscience, bioinformatics, and cryptography. This textbook introduces theory in tandem with applications. Information theory is taught alongside practical communication systems, such as arithmetic coding for data compression and sparse-graph codes for error-correction. A toolbox of inference techniques, including message-passing algorithms, Monte Carlo methods, and variational approximations, is developed alongside applications of these tools to clustering, convolutional codes, independent component analysis, and neural networks. The final part of the book describes the state of the art in error-correcting codes, including low-density parity-check codes, turbo codes, and digital fountain codes, the twenty-first-century standards for satellite communications, disk drives, and data broadcast. Richly illustrated, filled with worked examples and over 400 exercises, some with detailed solutions, David MacKay's groundbreaking book is ideal for self-learning and for undergraduate or graduate courses. Interludes on crosswords, evolution, and sex provide entertainment along the way. In sum, this is a textbook on information, communication, and coding for a new generation of students, and an unparalleled entry point into these subjects for professionals in areas as diverse as computational biology, financial engineering, and machine learning.
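    As a tiny, hedged illustration of the quantity at the book's core (this snippet is not from the book itself), here is Shannon entropy in bits for a discrete distribution:

        import math

        def entropy_bits(p):
            """H(X) = -sum_i p_i log2 p_i for a discrete distribution p."""
            return -sum(pi * math.log2(pi) for pi in p if pi > 0)

        print(entropy_bits([0.5, 0.5]))  # 1.0 bit: a fair coin is maximally uncertain
        print(entropy_bits([0.9, 0.1]))  # about 0.469 bits: a biased coin is more predictable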

Mining of Massive Datasets


Anand Rajaraman - 2011
    This book focuses on practical algorithms that have been used to solve key problems in data mining and which can be used on even the largest datasets. It begins with a discussion of the map-reduce framework, an important tool for parallelizing algorithms automatically. The authors explain the tricks of locality-sensitive hashing and stream processing algorithms for mining data that arrives too fast for exhaustive processing. The PageRank idea and related tricks for organizing the Web are covered next. Other chapters cover the problems of finding frequent itemsets and clustering. The final chapters cover two applications: recommendation systems and Web advertising, each vital in e-commerce. Written by two authorities in database and Web technologies, this book is essential reading for students and practitioners alike.
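    To make the PageRank idea concrete, here is a toy power-iteration sketch (the three-page link graph and the damping factor of 0.85 are standard illustrative choices, not taken from the book):

        links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
        pages = list(links)
        d = 0.85
        rank = {p: 1 / len(pages) for p in pages}

        for _ in range(50):  # iterate until the ranks stabilize
            new = {p: (1 - d) / len(pages) for p in pages}
            for p, outs in links.items():
                share = rank[p] / len(outs)  # each page splits its rank among out-links
                for q in outs:
                    new[q] += d * share
            rank = new

        print(rank)  # "C" ranks highest: both other pages link to it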

Learning SAS by Example: A Programmer's Guide


Ron Cody - 2007
    In an instructive and conversational tone, Cody clearly explains how to program in SAS, illustrating with one or more real-life examples and giving a detailed description of how the program works.

R Packages


Hadley Wickham - 2015
    This practical book shows you how to bundle reusable R functions, sample data, and documentation together by applying author Hadley Wickham’s package development philosophy. In the process, you’ll work with devtools, roxygen, and testthat, a set of R packages that automate common development tasks. Devtools encapsulates best practices that Hadley has learned from years of working with this programming language. Ideal for developers, data scientists, and programmers with various backgrounds, this book starts you with the basics and shows you how to improve your package writing over time. You’ll learn to focus on what you want your package to do, rather than think about package structure.
    * Learn about the most useful components of an R package, including vignettes and unit tests
    * Automate anything you can, taking advantage of the years of development experience embodied in devtools
    * Get tips on good style, such as organizing functions into files
    * Streamline your development process with devtools
    * Learn the best way to submit your package to the Comprehensive R Archive Network (CRAN)
    * Learn from a well-respected member of the R community who created 30 R packages, including ggplot2, dplyr, and tidyr

Statistical Rethinking: A Bayesian Course with Examples in R and Stan


Richard McElreath - 2015
    Reflecting the need for even minor programming in today's model-based statistics, the book pushes readers to perform step-by-step calculations that are usually automated. This unique computational approach ensures that readers understand enough of the details to make reasonable choices and interpretations in their own modeling work. The text presents generalized linear multilevel models from a Bayesian perspective, relying on a simple logical interpretation of Bayesian probability and maximum entropy. It covers everything from the basics of regression to multilevel models. The author also discusses measurement error, missing data, and Gaussian process models for spatial and network autocorrelation. By using complete R code examples throughout, this book provides a practical foundation for performing statistical inference. Designed for both PhD students and seasoned professionals in the natural and social sciences, it prepares them for more advanced or specialized statistical modeling. As a web resource, the book is accompanied by an R package (rethinking) that is available on the author's website and GitHub. The two core functions (map and map2stan) of this package allow a variety of statistical models to be constructed from standard model formulas.
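    As a hedged taste of the step-by-step calculations the book favors, here is a Python analogue of a grid-approximated posterior (the book itself works in R; the binomial data of 6 successes in 9 trials is invented purely for illustration):

        import numpy as np
        from scipy.stats import binom

        grid = np.linspace(0, 1, 1000)      # candidate parameter values
        prior = np.ones_like(grid)          # flat prior over the grid
        likelihood = binom.pmf(6, 9, grid)  # P(data | p) at each grid point
        posterior = likelihood * prior
        posterior /= posterior.sum()        # normalize into a distribution

        print(grid[posterior.argmax()])     # posterior mode, about 0.667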