Book picks similar to
Econometric Analysis by William H. Greene
economics
econometrics
statistics
data-science
Bayesian Data Analysis
Andrew Gelman - 1995
Its world-class authors provide guidance on all aspects of Bayesian data analysis and include examples of real statistical analyses, based on their own research, that demonstrate how to solve complicated problems. Changes in the new edition include:
- Stronger focus on MCMC
- Revision of the computational advice in Part III
- New chapters on nonlinear models and decision analysis
- Several additional applied examples from the authors' recent research
- Additional chapters on current models for Bayesian data analysis such as nonlinear models, generalized linear mixed models, and more
- Reorganization of chapters 6 and 7 on model checking and data collection
Bayesian computation is currently at a stage where there are many reasonable ways to compute any given posterior distribution. However, the best approach is not always clear ahead of time. Reflecting this, the new edition offers a more pluralistic presentation, giving advice on performing computations from many perspectives while making clear the importance of being aware that there are different ways to implement any given iterative simulation computation. The new approach, additional examples, and updated information make Bayesian Data Analysis an excellent introductory text and a reference that working scientists will use throughout their professional life.
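The blurb's point that there are many reasonable ways to compute any given posterior can be made concrete with the simplest iterative simulation method of the MCMC family, random-walk Metropolis. This is a generic sketch, not code from the book; the standard-normal target, step size, and function names are my own illustrative choices.

```python
import math
import random

def metropolis(log_post, x0, n_steps=10_000, step=0.5, seed=0):
    """Minimal random-walk Metropolis sampler (illustrative sketch)."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        # Accept with probability min(1, post(prop) / post(x)), on the log scale
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Toy target: standard normal, log density up to an additive constant
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0)
mean = sum(draws) / len(draws)  # should drift near 0 for this target
```

Any other sampler (Gibbs, Hamiltonian Monte Carlo) targeting the same posterior would be an equally valid implementation, which is exactly the pluralism the edition emphasizes.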
The Quants: How a New Breed of Math Whizzes Conquered Wall Street and Nearly Destroyed It
Scott Patterson - 2010
They were preparing to compete in a poker tournament with million-dollar stakes, but those numbers meant nothing to them. They were accustomed to risking billions. At the card table that night was Peter Muller, an eccentric, whip-smart whiz kid who’d studied theoretical mathematics at Princeton and now managed a fabulously successful hedge fund called PDT…when he wasn’t playing his keyboard for morning commuters on the New York subway. With him was Ken Griffin, who as an undergraduate trading convertible bonds out of his Harvard dorm room had outsmarted the Wall Street pros and made money in one of the worst bear markets of all time. Now he was the tough-as-nails head of Citadel Investment Group, one of the most powerful money machines on earth. There too were Cliff Asness, the sharp-tongued, mercurial founder of the hedge fund AQR, a man as famous for his computer-smashing rages as for his brilliance, and Boaz Weinstein, chess life-master and king of the credit default swap, who while juggling $30 billion worth of positions for Deutsche Bank found time for frequent visits to Las Vegas with the famed MIT card-counting team. On that night in 2006, these four men and their cohorts were the new kings of Wall Street. Muller, Griffin, Asness, and Weinstein were among the best and brightest of a new breed, the quants. Over the prior twenty years, this species of math whiz--technocrats who make billions not with gut calls or fundamental analysis but with formulas and high-speed computers--had usurped the testosterone-fueled, kill-or-be-killed risk-takers who’d long been the alpha males of the world’s largest casino. The quants believed that a dizzying, indecipherable-to-mere-mortals cocktail of differential calculus, quantum physics, and advanced geometry held the key to reaping riches from the financial markets. And they helped create a digitized money-trading machine that could shift billions around the globe with the click of a mouse.
Few realized that night, though, that in creating this unprecedented machine, men like Muller, Griffin, Asness, and Weinstein had sowed the seeds for history’s greatest financial disaster. Drawing on unprecedented access to these four number-crunching titans, The Quants tells the inside story of what they thought and felt in the days and weeks when they helplessly watched much of their net worth vaporize – and wondered just how their mind-bending formulas and genius-level IQs had led them so wrong, so fast. Had their years of success been dumb luck, fool’s gold, a good run that could come to an end on any given day? What if The Truth they sought -- the secret of the markets -- wasn’t knowable? Worse, what if there wasn’t any Truth? In The Quants, Scott Patterson tells the story not just of these men, but of Jim Simons, the reclusive founder of the most successful hedge fund in history; Aaron Brown, the quant who used his math skills to humiliate Wall Street’s old guard at their trademark game of Liar’s Poker, and years later found himself with a front-row seat to the rapid emergence of mortgage-backed securities; and gadflies and dissenters such as Paul Wilmott, Nassim Taleb, and Benoit Mandelbrot. With the immediacy of today’s NASDAQ close and the timeless power of a Greek tragedy, The Quants is at once a masterpiece of explanatory journalism, a gripping tale of ambition and hubris…and an ominous warning about Wall Street’s future.
Statistical Rethinking: A Bayesian Course with Examples in R and Stan
Richard McElreath - 2015
Reflecting the need for even minor programming in today's model-based statistics, the book pushes readers to perform step-by-step calculations that are usually automated. This unique computational approach ensures that readers understand enough of the details to make reasonable choices and interpretations in their own modeling work. The text presents generalized linear multilevel models from a Bayesian perspective, relying on a simple logical interpretation of Bayesian probability and maximum entropy. It covers everything from the basics of regression to multilevel models. The author also discusses measurement error, missing data, and Gaussian process models for spatial and network autocorrelation. By using complete R code examples throughout, this book provides a practical foundation for performing statistical inference. Designed for both PhD students and seasoned professionals in the natural and social sciences, it prepares them for more advanced or specialized statistical modeling.
Web Resource: The book is accompanied by an R package (rethinking) that is available on the author's website and GitHub. The two core functions (map and map2stan) of this package allow a variety of statistical models to be constructed from standard model formulas.
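Of the two core functions just mentioned, map fits models by quadratic (Laplace) approximation and map2stan by MCMC via Stan. A rough Python sketch of the quadratic-approximation idea behind map is shown below: find the posterior mode, then use the curvature there to fit a normal approximation. The grid search, the toy binomial model, and all names are my own illustrative choices, not the package's implementation.

```python
import math

def laplace_approx(log_post, lo, hi, n=10_001):
    """1-D quadratic (Laplace) approximation of a posterior:
    locate the mode on a grid, then turn the log-density curvature
    at the mode into the standard deviation of a normal approximation."""
    h = (hi - lo) / (n - 1)
    xs = [lo + h * i for i in range(n)]
    mode = max(xs, key=log_post)
    # Second derivative by central finite difference
    d2 = (log_post(mode + h) - 2 * log_post(mode) + log_post(mode - h)) / h**2
    return mode, math.sqrt(-1.0 / d2)

# Toy model: 6 successes in 9 Bernoulli trials, flat prior on p.
# Log posterior up to a constant: 6*log(p) + 3*log(1 - p).
def log_post(p):
    return 6 * math.log(p) + 3 * math.log(1 - p)

mu, sd = laplace_approx(log_post, 1e-6, 1 - 1e-6)
# analytic mode is 6/9 ≈ 0.667; curvature gives sd ≈ 0.157
```

The same model formula handed to map2stan would instead be sampled with Hamiltonian Monte Carlo, which is why the two functions share an interface.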
Principles and Practice of Structural Equation Modeling
Rex B. Kline - 1998
Reviewed are fundamental statistical concepts--such as correlation, regression, data preparation and screening, path analysis, and confirmatory factor analysis--as well as more advanced methods, including the evaluation of nonlinear effects, measurement models and structural regression models, latent growth models, and multilevel SEM. The companion Web page offers data and program syntax files for many of the research examples, electronic overheads that can be downloaded and printed by instructors or students, and links to SEM-related resources.
Theory of Games and Economic Behavior
John von Neumann - 1944
What began more than sixty years ago as a modest proposal that a mathematician and an economist write a short paper together blossomed, in 1944, when Princeton University Press published Theory of Games and Economic Behavior. In it, John von Neumann and Oskar Morgenstern conceived a groundbreaking mathematical theory of economic and social organization, based on a theory of games of strategy. Not only would this revolutionize economics, but the entirely new field of scientific inquiry it yielded--game theory--has since been widely used to analyze a host of real-world phenomena from arms races to optimal policy choices of presidential candidates, from vaccination policy to major league baseball salary negotiations. And it is today established throughout both the social sciences and a wide range of other sciences. This sixtieth anniversary edition includes not only the original text but also an introduction by Harold Kuhn, an afterword by Ariel Rubinstein, and reviews and articles on the book that appeared at the time of its original publication in the New York Times, the American Economic Review, and a variety of other publications. Together, these writings provide readers with a matchless opportunity to more fully appreciate a work whose influence will yet resound for generations to come.
The R Book
Michael J. Crawley - 2007
The R language is recognised as one of the most powerful and flexible statistical software environments, enabling the user to apply many statistical techniques that would be impossible to implement without such software, particularly on large data sets.
Options, Futures and Other Derivatives
John C. Hull
Changes in the fifth edition include: A new chapter on credit derivatives (Chapter 21). New! Business Snapshots highlight real-world situations and relevant issues. The first six chapters have been reorganized to better meet the needs of students and instructors. A new release of the Excel-based software, DerivaGem, is included with each text. A useful Solutions Manual/Study Guide, which includes the worked-out answers to the "Questions and Problems" sections of each chapter, can be purchased separately (ISBN: 0-13-144570-7).
The Cartoon Guide to Statistics
Larry Gonick - 1993
Never again will you order the Poisson Distribution in a French restaurant! This updated version features all new material.
Stochastic Calculus Models for Finance II: Continuous Time Models (Springer Finance)
Steven E. Shreve - 2004
The content of this book has been used successfully with students whose mathematics background consists of calculus and calculus-based probability. The text gives precise statements of results, plausibility arguments, and even some proofs, but more importantly, intuitive explanations developed and refined through classroom experience with this material are provided. The book includes a self-contained treatment of the probability theory needed for stochastic calculus, including Brownian motion and its properties. Advanced topics include foreign exchange models, forward measures, and jump-diffusion processes. This book is being published in two volumes. This second volume develops stochastic calculus, martingales, risk-neutral pricing, exotic options, and term structure models, all in continuous time. Masters-level students and researchers in mathematical finance and financial engineering will find this book useful. Steven E. Shreve is Co-Founder of the Carnegie Mellon MS Program in Computational Finance and winner of the Carnegie Mellon Doherty Prize for sustained contributions to education.
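As a small illustration of the risk-neutral pricing developed in this volume: under the risk-neutral measure the stock follows geometric Brownian motion, so a European call can be priced either by Monte Carlo simulation of the terminal stock price or by the Black-Scholes closed form. The sketch below checks the two against each other; the parameter values are made-up examples and none of the code is from the book.

```python
import math
import random
from statistics import NormalDist

def mc_call(S0, K, r, sigma, T, n_paths=200_000, seed=1):
    """Monte Carlo price of a European call under risk-neutral GBM:
    S_T = S0 * exp((r - sigma^2/2) * T + sigma * sqrt(T) * Z)."""
    rng = random.Random(seed)
    disc = math.exp(-r * T)
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        s_t = S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
        total += max(s_t - K, 0.0)
    return disc * total / n_paths

def bs_call(S0, K, r, sigma, T):
    """Black-Scholes closed form for the same call."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = NormalDist().cdf
    return S0 * N(d1) - K * math.exp(-r * T) * N(d2)

mc = mc_call(100, 100, 0.05, 0.2, 1.0)
exact = bs_call(100, 100, 0.05, 0.2, 1.0)  # ≈ 10.45
```

The agreement of the two numbers (up to Monte Carlo error) is the discrete-time shadow of the martingale pricing argument the book makes rigorous.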
The Economics of Money, Banking, and Financial Markets (Addison-Wesley Series in Economics)
Frederic S. Mishkin - 1986
Having just served as a Governor of the Federal Reserve, only Mishkin has the unique insider's perspective needed to present the current state of money and banking and explain the latest debates and issues for today's students. By applying a unified analytical framework to the models, "The Economics of Money, Banking, and Financial Markets" makes theory intuitive for students, and the rich array of current, real-world events keeps students motivated. Authoritative, comprehensive, and flexible, the text is easy to integrate into a wide variety of syllabi, and its ancillaries provide complete support when teaching the course.
Microeconomics
Jeffrey M. Perloff - 1998
Beginning at the intermediate level and ending at a level appropriate for the graduate student, this is a core text for upper-level undergraduate and taught graduate microeconomics courses.
Pattern Recognition and Machine Learning
Christopher M. Bishop - 2006
Pattern recognition has its origins in engineering, whereas machine learning grew out of computer science. However, these activities can be viewed as two facets of the same field, and together they have undergone substantial development over the past ten years. In particular, Bayesian methods have grown from a specialist niche to become mainstream, while graphical models have emerged as a general framework for describing and applying probabilistic models. Also, the practical applicability of Bayesian methods has been greatly enhanced through the development of a range of approximate inference algorithms such as variational Bayes and expectation propagation. Similarly, new models based on kernels have had a significant impact on both algorithms and applications. This new textbook reflects these recent developments while providing a comprehensive introduction to the fields of pattern recognition and machine learning. It is aimed at advanced undergraduates or first-year PhD students, as well as researchers and practitioners, and assumes no previous knowledge of pattern recognition or machine learning concepts. Knowledge of multivariate calculus and basic linear algebra is required, and some familiarity with probabilities would be helpful though not essential as the book includes a self-contained introduction to basic probability theory.
Mastering Bitcoin: Unlocking Digital Cryptocurrencies
Andreas M. Antonopoulos - 2014
Whether you're building the next killer app, investing in a startup, or simply curious about the technology, this practical book is essential reading. Bitcoin, the first successful decentralized digital currency, is still in its infancy and it's already spawned a multi-billion dollar global economy. This economy is open to anyone with the knowledge and passion to participate. Mastering Bitcoin provides you with the knowledge you need (passion not included). This book includes:
- A broad introduction to bitcoin--ideal for non-technical users, investors, and business executives
- An explanation of the technical foundations of bitcoin and cryptographic currencies for developers, engineers, and software and systems architects
- Details of the bitcoin decentralized network, peer-to-peer architecture, transaction lifecycle, and security principles
- Offshoots of the bitcoin and blockchain inventions, including alternative chains, currencies, and applications
- User stories, analogies, examples, and code snippets illustrating key technical concepts
The (Mis)Behavior of Markets
Benoît B. Mandelbrot - 1997
Mandelbrot, one of the century's most influential mathematicians, is world-famous for making mathematical sense of a fact everybody knows but that geometers from Euclid on down had never assimilated: Clouds are not round, mountains are not cones, coastlines are not smooth. To these classic lines we can now add another example: Markets are not the safe bet your broker may claim. In his first book for a general audience, Mandelbrot, with co-author Richard L. Hudson, shows how the dominant way of thinking about the behavior of markets--a set of mathematical assumptions a century old and still learned by every MBA and financier in the world--simply does not work. As he did for the physical world in his classic The Fractal Geometry of Nature, Mandelbrot here uses fractal geometry to propose a new, more accurate way of describing market behavior. The complex gyrations of IBM's stock price and the dollar-euro exchange rate can now be reduced to straightforward formulae that yield a far better model of how risky they are. With his fractal tools, Mandelbrot has gotten to the bottom of how financial markets really work, and in doing so, he describes the volatile, dangerous (and strangely beautiful) properties that financial experts have never before accounted for. The result is no less than the foundation for a new science of finance.
Statistics Done Wrong: The Woefully Complete Guide
Alex Reinhart - 2013
Politicians and marketers present shoddy evidence for dubious claims all the time. But smart people make mistakes too, and when it comes to statistics, plenty of otherwise great scientists--yes, even those published in peer-reviewed journals--are doing statistics wrong. "Statistics Done Wrong" comes to the rescue with cautionary tales of all-too-common statistical fallacies. It'll help you see where and why researchers often go wrong and teach you the best practices for avoiding their mistakes. In this book, you'll learn:
- Why "statistically significant" doesn't necessarily imply practical significance
- Ideas behind hypothesis testing and regression analysis, and common misinterpretations of those ideas
- How and how not to ask questions, design experiments, and work with data
- Why many studies have too little data to detect what they're looking for--and, surprisingly, why this means published results are often overestimates
- Why false positives are much more common than "significant at the 5% level" would suggest
By walking through colorful examples of statistics gone awry, the book offers approachable lessons on proper methodology, and each chapter ends with pro tips for practicing scientists and statisticians. No matter what your level of experience, "Statistics Done Wrong" will teach you how to be a better analyst, data scientist, or researcher.
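The blurb's claim that underpowered studies produce published overestimates is easy to demonstrate by simulation: among many small studies of a modest true effect, only the lucky overestimates clear the significance bar. The sketch below is my own toy demo, not from the book; the true effect, sample size, and choice of test are made-up assumptions.

```python
import math
import random
from statistics import NormalDist, mean

def simulate_studies(true_effect=0.3, n=20, n_studies=4000, alpha=0.05, seed=0):
    """Simulate many small two-sided z-tests (known sd = 1) and return
    (mean estimate over all studies, mean estimate among 'significant' ones).
    With n = 20 per study, power is low, so significant results overshoot."""
    rng = random.Random(seed)
    crit = NormalDist().inv_cdf(1 - alpha / 2)  # ≈ 1.96
    se = 1.0 / math.sqrt(n)
    all_effects, sig_effects = [], []
    for _ in range(n_studies):
        est = mean(rng.gauss(true_effect, 1.0) for _ in range(n))
        all_effects.append(est)
        if abs(est / se) > crit:  # "publishable" at the 5% level
            sig_effects.append(est)
    return mean(all_effects), mean(sig_effects)

overall, significant_only = simulate_studies()
# overall is close to the true 0.3; significant_only is noticeably larger
```

Filtering on significance is what biases the published record; the estimator itself is unbiased, as the `overall` average shows.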