Introduction to Superstrings and M-Theory


Michio Kaku - 1989
    Called by some "the theory of everything," superstrings may solve a problem that has eluded physicists for the past 50 years: the final unification of the two great theories of the twentieth century, general relativity and quantum field theory. Now, here is a thoroughly revised, second edition of a course-tested comprehensive introductory graduate text on superstrings which stresses the most current areas of interest, not covered in other presentations, including four-dimensional superstrings, Kac-Moody algebras, Teichmüller spaces and Calabi-Yau manifolds, M-theory, membranes and D-branes, duality and BPS relations, and matrix models. The book begins with a simple discussion of point particle theory and uses Feynman path integrals to unify the presentation of superstrings. It has been updated throughout, and three new chapters on M-theory have been added. Prerequisites are an acquaintance with quantum mechanics and relativity.

Information Theory, Inference and Learning Algorithms


David J.C. MacKay - 2002
    These topics lie at the heart of many exciting areas of contemporary science and engineering - communication, signal processing, data mining, machine learning, pattern recognition, computational neuroscience, bioinformatics, and cryptography. This textbook introduces theory in tandem with applications. Information theory is taught alongside practical communication systems, such as arithmetic coding for data compression and sparse-graph codes for error correction. A toolbox of inference techniques, including message-passing algorithms, Monte Carlo methods, and variational approximations, is developed alongside applications of these tools to clustering, convolutional codes, independent component analysis, and neural networks. The final part of the book describes the state of the art in error-correcting codes, including low-density parity-check codes, turbo codes, and digital fountain codes -- the twenty-first-century standards for satellite communications, disk drives, and data broadcast. Richly illustrated, filled with worked examples and over 400 exercises, some with detailed solutions, David MacKay's groundbreaking book is ideal for self-learning and for undergraduate or graduate courses. Interludes on crosswords, evolution, and sex provide entertainment along the way. In sum, this is a textbook on information, communication, and coding for a new generation of students, and an unparalleled entry point into these subjects for professionals in areas as diverse as computational biology, financial engineering, and machine learning.

The Quark and the Jaguar: Adventures in the Simple and the Complex


Murray Gell-Mann - 1994
    Nobel laureate Murray Gell-Mann offers a uniquely personal and unifying vision of the relationship between the fundamental laws of physics and the complexity and diversity of the natural world.

Mathematical Analysis


Tom M. Apostol - 1957
    This text provides a transition from elementary calculus to advanced courses in real and complex function theory and introduces the reader to some of the abstract thinking that pervades modern analysis.

The Physics of Immortality: Modern Cosmology, God and the Resurrection of the Dead


Frank J. Tipler - 1994
    Tipler claims to scientifically prove the existence of God and the physical resurrection of the dead.

Data Smart: Using Data Science to Transform Information into Insight


John W. Foreman - 2013
    Major retailers are predicting everything from when their customers are pregnant to when they want a new pair of Chuck Taylors. It's a brave new world where seemingly meaningless data can be transformed into valuable insight to drive smart business decisions. But how does one exactly do data science? Do you have to hire one of these priests of the dark arts, the "data scientist," to extract this gold from your data? Nope. Data science is little more than using straightforward steps to process raw data into actionable insight. And in Data Smart, author and data scientist John Foreman will show you how that's done within the familiar environment of a spreadsheet. Why a spreadsheet? It's comfortable! You get to look at the data every step of the way, building confidence as you learn the tricks of the trade. Plus, spreadsheets are a vendor-neutral place to learn data science without the hype. But don't let the Excel sheets fool you. This is a book for those serious about learning the analytic techniques, the math and the magic, behind big data. Each chapter covers a different technique in a spreadsheet so you can follow along: mathematical optimization, including non-linear programming and genetic algorithms; clustering via k-means, spherical k-means, and graph modularity; data mining in graphs, such as outlier detection; supervised AI through logistic regression, ensemble models, and bag-of-words models; forecasting, seasonal adjustments, and prediction intervals through Monte Carlo simulation; and moving from spreadsheets into the R programming language. You get your hands dirty as you work alongside John through each technique. But never fear, the topics are readily applicable and the author laces humor throughout. You'll even learn what a dead squirrel has to do with optimization modeling, which you no doubt are dying to know.
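    The chapter list above names k-means clustering among the techniques the book walks through in a spreadsheet (and later in R). Purely as an illustrative sketch, and not the book's own walkthrough, here is what that clustering step looks like in Python with NumPy; the toy data and the choice of k=2 are assumptions made for this example.

```python
# A minimal k-means sketch (illustrative only; the book itself works the
# technique in a spreadsheet and later in R).
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    # Pick k of the points at random as the initial centroids.
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign every point to its nearest centroid.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of the points assigned to it.
        centroids = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
    return labels, centroids

# Two obvious clusters of 2-D points.
data = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],
                 [8.0, 8.0], [8.1, 7.9], [7.8, 8.2]])
labels, centers = kmeans(data, k=2)
print(labels, centers)
```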

Statistical Methods for the Social Sciences


Alan Agresti - 1986
    No previous knowledge of statistics is assumed, and the mathematical background required is minimal (lowest-level high school algebra). This text may be used in a one- or two-course sequence. Such sequences are commonly required of social science graduate students in sociology, political science, and psychology. Students in geography, anthropology, journalism, and speech also are sometimes required to take at least one statistics course.

The Elements of Statistical Learning: Data Mining, Inference, and Prediction


Trevor Hastie - 2001
    Recent years have brought vast amounts of data in a variety of fields such as medicine, biology, finance, and marketing. The challenge of understanding these data has led to the development of new tools in the field of statistics, and spawned new areas such as data mining, machine learning, and bioinformatics. Many of these tools have common underpinnings but are often expressed with different terminology. This book describes the important ideas in these areas in a common conceptual framework. While the approach is statistical, the emphasis is on concepts rather than mathematics. Many examples are given, with a liberal use of color graphics. It should be a valuable resource for statisticians and anyone interested in data mining in science or industry. The book's coverage is broad, from supervised learning (prediction) to unsupervised learning. The many topics include neural networks, support vector machines, classification trees and boosting—the first comprehensive treatment of this topic in any book. Trevor Hastie, Robert Tibshirani, and Jerome Friedman are professors of statistics at Stanford University. They are prominent researchers in this area: Hastie and Tibshirani developed generalized additive models and wrote a popular book of that title. Hastie wrote much of the statistical modeling software in S-PLUS and invented principal curves and surfaces. Tibshirani proposed the Lasso and is co-author of the very successful An Introduction to the Bootstrap. Friedman is the co-inventor of many data-mining tools including CART, MARS, and projection pursuit.
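    The Lasso credited to Tibshirani above fits a linear model under an L1 penalty on the coefficients, which shrinks some of them exactly to zero and so selects variables as part of the fit. A minimal sketch of that behavior, using scikit-learn (a library chosen for this example, not drawn from the book), might look like this:

```python
# Minimal Lasso sketch: under an L1 penalty, the coefficients of the
# irrelevant predictors are driven to (near) zero.  The library and toy
# data are illustrative assumptions, not taken from the book.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
# Only the first two predictors influence the response.
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = Lasso(alpha=0.1).fit(X, y)
print(model.coef_)  # expect the three irrelevant coefficients to be ~0
```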

Essays on the Theory of Numbers


Richard Dedekind - 1901
    This volume contains two essays on the foundations of the number system by the German mathematician J. W. R. Dedekind. The first presents Dedekind's theory of the irrational number (the Dedekind cut idea), perhaps the most famous of several such theories created in the 19th century to give a precise meaning to irrational numbers, which had been used on an intuitive basis since Greek times. This paper provided a purely arithmetic and perfectly rigorous foundation for the irrational numbers and thereby a rigorous meaning of continuity in analysis. The second essay is an attempt to give a logical basis for transfinite numbers and properties of the natural numbers. It examines the notion of natural numbers, the distinction between finite and transfinite (infinite) whole numbers, and the logical validity of the type of proof called mathematical or complete induction. The contents of these essays belong to the foundations of mathematics and will be welcomed by those who are prepared to look into the somewhat subtle meanings of the elements of our number system. As a major work of an important mathematician, the book deserves a place in the personal library of every practicing mathematician and every teacher and historian of mathematics. Authorized translations by Wooster Woodruff Beman.
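    As a small worked illustration of the cut idea (the example is mine, not quoted from the essay), the irrational number sqrt(2) can be defined by splitting the rationals into two classes:

```latex
% The Dedekind cut defining \sqrt{2}: every rational falls into exactly one
% class, every element of A is smaller than every element of B, and A has
% no greatest element, so the cut (A, B) marks a "gap" in the rationals.
\[
  A = \{\, q \in \mathbb{Q} : q \le 0 \ \text{or}\ q^2 < 2 \,\}, \qquad
  B = \{\, q \in \mathbb{Q} : q > 0 \ \text{and}\ q^2 \ge 2 \,\}.
\]
% Dedekind identifies the irrational number \sqrt{2} with this cut itself,
% giving the real line its continuity without any appeal to geometry.
```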

Abstract Algebra


I.N. Herstein - 1986
    Providing a concise introduction to abstract algebra, this work unfolds some of the fundamental systems with the aim of reaching applicable, significant results.

Python Machine Learning


Sebastian Raschka - 2015
    We are living in an age where data comes in abundance, and thanks to the self-learning algorithms from the field of machine learning, we can turn this data into knowledge. Automated speech recognition on our smartphones, web search engines, e-mail spam filters, the recommendation systems of our favorite movie streaming services – machine learning makes it all possible. Thanks to the many powerful open-source libraries that have been developed in recent years, machine learning is now right at our fingertips. Python provides the perfect environment to build machine learning systems productively. This book will teach you the fundamentals of machine learning and how to utilize these in real-world applications using Python. Step by step, you will expand your skill set with the best practices for transforming raw data into useful information, developing learning algorithms efficiently, and evaluating results. You will discover the different problem categories that machine learning can solve and explore how to classify objects, predict continuous outcomes with regression analysis, and find hidden structures in data via clustering. You will build your own machine learning system for sentiment analysis and, finally, learn how to embed your model into a web app to share with the world.
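    As a taste of the workflow the book builds up (the dataset and estimator below are illustrative choices for this sketch, not necessarily the book's own examples), a minimal classification exercise in Python with scikit-learn looks like this:

```python
# A minimal classification sketch: split a labeled dataset, fit a model,
# and check its accuracy on held-out data.  Dataset and estimator are
# illustrative assumptions, not necessarily the book's examples.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```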

Why Information Grows: The Evolution of Order, from Atoms to Economies


Cesar A. Hidalgo - 2015
    Hidalgo believes that we should investigate what makes some countries more capable than others. Complex products—from films to robots, apps to automobiles—are a physical distillation of an economy’s knowledge, a measurable embodiment of its education, infrastructure, and capability. Economic wealth accrues when applications of this knowledge turn ideas into tangible products; the more complex its products, the more economic growth a country will experience. A radical new interpretation of global economics, Why Information Grows overturns traditional assumptions about the development of economies and the origins of wealth and takes a crucial step toward making economics less the dismal science and more the insightful one.

Discovering Statistics Using SPSS (Introducing Statistical Methods)


Andy Field - 2000
    What's new in the Second Edition? 1. Fully compliant with SPSS version 12. 2. More coverage of advanced statistics, including completely new coverage of non-parametric statistics; the book is 50 per cent longer than the First Edition. 3. Each section of each chapter now carries a notation (1, 2, or 3) indicating the intended level of study, which helps students navigate their way through the book and makes it user-friendly for students of all levels. 4. A 'how to use this book' section at the start of the text. 5. Characters in each chapter have defined roles, such as summarizing key points or posing questions. 6. Each chapter now has several examples for students to work through, with answers provided on the enclosed CD-ROM.

Principles of Statistics


M.G. Bulmer - 1979
    There are many textbooks that describe current methods of statistical analysis while neglecting related theory, and there are equally many advanced textbooks which delve into the far reaches of statistical theory while bypassing practical applications. But between these two approaches is an unfilled gap, in which theory and practice merge at an intermediate level. Professor M. G. Bulmer's Principles of Statistics, originally published in 1965, was created to fill that need. The new, corrected Dover edition of Principles of Statistics makes this invaluable mid-level text available once again for the classroom or for self-study. Principles of Statistics was created primarily for the student of natural sciences, the social scientist, the undergraduate mathematics student, or anyone familiar with the basics of mathematical language. It assumes no previous knowledge of statistics or probability; nor is extensive mathematical knowledge necessary beyond a familiarity with the fundamentals of differential and integral calculus. (The calculus is used primarily for ease of notation; skill in the techniques of integration is not necessary in order to understand the text.) Professor Bulmer devotes the first chapters to a concise, admirably clear description of basic terminology and fundamental statistical theory: abstract concepts of probability and their applications in dice games, Mendelian heredity, etc.; definitions and examples of discrete and continuous random variables; multivariate distributions and the descriptive tools used to delineate them; expected values; etc. The book then moves quickly to more advanced levels, as Professor Bulmer describes important distributions (binomial, Poisson, exponential, normal, etc.), tests of significance, statistical inference, point estimation, regression, and correlation. Dozens of exercises and problems appear at the end of various chapters, with answers provided at the back of the book. Also included are a number of statistical tables and selected references.

Complexity: The Emerging Science at the Edge of Order and Chaos


M. Mitchell Waldrop - 1992
    The science of complexity studies how single elements, such as a species or a stock, spontaneously organize into complicated structures like ecosystems and economies; stars become galaxies, and snowflakes avalanches almost as if these systems were obeying a hidden yearning for order. Drawing from diverse fields, scientific luminaries such as Nobel Laureates Murray Gell-Mann and Kenneth Arrow are studying complexity at a think tank called The Santa Fe Institute. The revolutionary new discoveries researchers have made there could change the face of every science from biology to cosmology to economics. M. Mitchell Waldrop's groundbreaking bestseller takes readers into the hearts and minds of these scientists to tell the story behind this scientific revolution as it unfolds.