Experimental Economics: Theory and Practice

Author:   John A. List
Publisher:   The University of Chicago Press
ISBN:   9780226820675


Pages:   784
Publication Date:   28 April 2026
Format:   Paperback
Availability:   Awaiting stock
The supplier is currently out of stock of this item. It will be ordered for you and placed on backorder, then shipped to you as soon as it is back in stock.

Our Price:   $65.95



Overview

A landmark practical guide from the twenty-first-century pioneer in economics. Experimental economics—generating and interpreting data to understand human decisions, motivations, and outcomes—is today all but synonymous with economics as a discipline. The advantages of the experimental method for understanding causal effects make it the gold standard for an increasingly empirical field. Until now, however, the discipline has lacked comprehensive and definitive guidance on how to optimally design and conduct economic experiments.

For more than thirty years, John A. List has been at the forefront of using experiments to advance economic knowledge, expanding the domain of the economic experiment from the lab to the real world. Experimental Economics is his A-to-Z compendium for students and researchers on designing and conducting experiments and on analyzing and interpreting the data they generate. List seeks not only to guide readers in developing and implementing their experimental projects—everything from design to administrative and ethical considerations—but also to help them avoid the mistakes he has made in his own career. Experimental Economics codifies its author's refined approach to the design, execution, and analysis of laboratory and field experiments. It is a milestone work poised to become the definitive reference for the next century of economics (and economists).

Full Product Details

Author:   John A. List
Publisher:   The University of Chicago Press
Imprint:   University of Chicago Press
Dimensions:   Width: 17.80cm, Height: 5.60cm, Length: 25.40cm
Weight:   1.501kg
ISBN:   9780226820675


ISBN 10:   022682067X
Pages:   784
Publication Date:   28 April 2026
Audience:   Professional and scholarly, College/higher education, Professional & Vocational, Postgraduate, Research & Scholarly
Format:   Paperback
Publisher's Status:   Active
Availability:   Awaiting stock

Table of Contents

Preface

Part I. Experimental Methods in Economics

1. Introduction
Key Ideas; 1.1 Causal Inference; Experimental Problem 1: Quantifying Economic Fundamentals, Measuring Treatment Effects, and Identifying Key Mediators and Moderators in an Ethically Responsible Manner; Experimental Problem 2: Predicting If the Causal Impacts of Treatments Implemented in One Environment Transfer to Other Environments, Whether Spatially, Temporally, or Scale Differentiated; 1.2 The Book’s Game Plan; Notes; References

2. A Primer on Economic Experiments
Key Ideas; 2.1 Four Running Examples; 2.2 The Empirical Approach in Economics; 2.3 Experiments in Economics; 2.3.1 Laboratory Experiments; 2.3.2 Field Experiments; 2.3.2.1 Seven Criteria That Define Field Experiments; Experimental Subjects: Population and Selection; Experimental Environment; 2.3.3 What Parameters Do the Various Experimental Types Recover?; 2.4 What Experimental Type to Choose; 2.4.1 Control across the Experimental Spectrum for Identification Purposes; 2.4.2 Control across the Experimental Spectrum for Measurement Purposes; 2.4.3 The Ability to Replicate across the Experimental Spectrum; 2.4.4 Control across the Experimental Spectrum for Inferential Purposes; 2.4.5 Control across the Experimental Spectrum to Ensure External Validity; 2.5 Conclusions: Key Complementarities Exist across the Lab and Field; Appendix 2.1 Introducing General Potential Outcomes Notation; Notes; References

3. Internal Validity: Identification in Economic Experiments
Key Ideas; 3.1 Four Running Examples; 3.2 The Assignment Mechanism; 3.3 Potential Outcomes Framework; 3.4 From Individual Treatment Effects to Average Treatment Effects; 3.5 How Selection Leads to Bias; 3.5.1 Using Randomization to Solve the Selection Problem; 3.6 Introducing EPATE: The Case When τ_{Pi=1} ≠ τ; 3.7 Recovering and Interpreting Heterogeneity of Treatment Effects; 3.8 Violations of the Exclusion Restrictions; 3.8.1 SUTVA; 3.8.2 Observability; 3.8.3 Compliance; 3.8.4 Statistical Independence; 3.9 Conclusions; Appendix 3.1 Recovering the Wedge between τ and τ̃ (Derivation of Equation 3.5); Appendix 3.2 The Brass Tacks of Estimating the Effects of Training Programs; Notes; References

4. Statistical Conclusion Validity: Measurement in Economic Experiments
Key Ideas; 4.1 Two Running Examples; 4.2 Perspectives on Sampling Frameworks; 4.2.1 Subpopulations in the Superpopulation Framework; 4.3 Estimating Treatment Effects and Making Inference; 4.3.1 Motivating the Difference-in-Means Estimator for ATE Parameters; 4.3.2 Single Hypothesis Testing and Statistical Power; 4.3.3 Multiple Hypothesis Testing; 4.3.3.1 Family-Wise Error Rate; 4.3.3.2 Approaches to Controlling the FWER; Bonferroni Correction; Holm Stepdown Correction; List et al. FWER Correction; 4.3.4 Introducing the Difference-in-Differences Estimator for ATE Parameters; 4.3.5 Introducing an Alternative to ATE Parameters: Fisher’s Randomization Inference; 4.4 Conclusions; Appendix 4.1 Code for List et al. (2019) and List et al. (2023); Installation (2019); Command Procedure (2019); Installation (2023); Command Procedure (2023); Notes; References

Part II. Designing Economic Experiments

5. Optimal Experimental Design
Key Ideas; 5.1 Three Running Examples; 5.2 Basic Principles of Statistical Power; 5.3 The Case of a Binary Treatment with Continuous Outcomes; 5.3.1 Putting It All Together to Create an Optimal Design; 5.4 The Case of a Binary Treatment with Binary Outcomes; 5.5 Varying Treatment Levels with Continuous Outcomes; 5.6 Expanding the Tool Kit; 5.6.1 Heterogeneity in Participant Costs; 5.6.2 Clustered Experimental Designs; 5.6.3 Optimal Design with Multiple Hypothesis Adjustment; 5.7 Less Considered Design Choices to Enhance Statistical Power; 5.7.1 Including Covariates in the Estimation Model; 5.7.2 Designs to Maximize Compliance; 5.7.3 The Nature of the Sample; 5.7.4 Measurement Choices; 5.7.5 Factorial Designs; 5.8 Conclusions; Appendix 5.1 An Example of the Power of Simulation Methods: The Case of Varying Treatment Levels with Binary Outcomes; Appendix 5.2 Step-by-Step Flexible Regression Adjustment; Appendix 5.3 Introducing Full and Fractional Factorial Designs; Three Factors; Appendix 5.4 A Walk-Through Example; Notes; References

6. Randomization Techniques
Key Ideas; 6.1 Three Running Examples; 6.2 Classical Assignment Mechanisms; 6.3 Classical Randomization Approaches; 6.3.1 Bernoulli Trials; 6.3.2 Completely Randomized Experiments (CRE); 6.3.3 Randomized Block (Stratified) Experiments; 6.3.4 Rerandomization Approaches; 6.3.5 Optimal Stratification with Matched-Pairs Designs; 6.3.5.1 Efficient Matching Minimizing Mean-Squared Error; 6.4 Design-Conscious Inference; 6.4.1 Statistical Inference in CREs; 6.4.2 Adjusting Inference under Alternative Randomization Schemes; 6.5 What to Do with Unanticipated Covariates; 6.6 Conclusions; Appendix 6.1 A Review of Rerandomization Approaches; Notes; References

7. Heterogeneity and Causal Moderation
Key Ideas; 7.1 Four Running Examples; 7.2 Estimating Heterogeneities in Simple Cases; 7.2.1 Using Causal Forests to Estimate Heterogeneities; Eight-Step Causal Forest Procedure; 7.3 Basic Mechanics of Causal Moderation; 7.3.1 Causal Moderation in Economic Experiments; 7.4 Two Crucial Margins of Heterogeneity: Intensive and Extensive; 7.4.1 Bounding the Intensive and Extensive Margin Effects; 7.4.2 Using Baseline Outcome Data to Identify Intensive Margin Effects; 7.4.3 A Tobit Approach to Estimating Margins; 7.5 Conclusions; Notes; References

8. Mediation: Exploring Relevant Mechanisms
Key Ideas; 8.1 Three Running Examples; 8.2 Mediation: The Basics of Causal Pathways; 8.2.1 Decomposing Total Effects in the Presence of Mediators; 8.2.2 Moving the Goalposts: Controlled and Principal-Strata Effects; 8.3 Applied Mediation Analysis for Economic Experiments; 8.3.1 A Parametric Workhorse and Its Pitfalls; 8.3.2 Basic Case: Binary Randomized Treatment; 8.3.3 Separate Randomization of Treatment and Mediator; 8.3.4 Paired Design; 8.3.5 Crossover Design; 8.4 Conclusions; Appendix 8.1 Putting It All Together: Traditional Mediation Analysis and Alternative Approaches Using an In-Home Parent Visitation Program; Notes; References

9. Experiments with Longitudinal Elements
Key Ideas; 9.1 Three Running Examples; 9.2 Potential Outcomes in Repeated Exposure Designs; 9.2.1 Treatment Effects in the Presence of Repeated Exposures; 9.3 Staggered Experimental Design; 9.4 Leveraging Pre- and Post-treatment Outcomes to Increase Power; 9.4.1 Including Covariates and Pre-treatment Outcomes in the Estimation Model; 9.4.2 Leveraging Pre-treatment Outcomes in a Panel Data Estimation Model; 9.4.2.1 Gains from Pre-treatment Outcome Measures; 9.4.2.2 Autocorrelations That Vary with Treatment; 9.4.3 Choosing the Optimal Number of Pre-treatment and Post-treatment Periods; 9.4.4 Threats to Internal Validity; 9.5 Experimental Designs with Outcomes Measured Long after Treatment; 9.5.1 Identification Assumptions When Outcomes Are Far Removed from Treatment; 9.5.2 Statistical Surrogates; 9.5.2.1 Internal Validity of Statistical Surrogates; 9.5.2.2 Putting the Comparability and Surrogacy Assumptions into Perspective; 9.5.2.3 Interpreting Surrogates; 9.5.2.4 Multiple Surrogates; 9.6 Conclusions; Appendix 9.1 Optimal Staggered Designs; Appendix 9.2 Clustered Design in Panel Data Settings; Appendix 9.3 Cluster-Randomized Experiments in Settings That Generate Short Panel Data; Notes; References

10. Within-Subject Experimental Designs
Key Ideas; 10.1 Three Running Examples; 10.2 Potential Outcomes in a Within-Subject Design; 10.3 Identification Assumptions in a Within-Subject Design; 10.4 Threats to the Internal Validity of Within-Subject Designs; 10.4.1 Threats to Balanced Panel; 10.4.2 Threats to Temporal Stability; 10.4.2.1 Crossover Designs and Latin Squares; 10.4.3 Threats to Causal Transience; 10.4.3.1 Washout Periods; 10.5 Key Advantages of Within-Subject Designs; 10.5.1 Heterogeneity and the Full Distribution of Treatment Effects; 10.5.2 Experimental Power; 10.5.2.1 Minimum Detectable Effects for Within-Subject Designs; 10.6 Conclusions; Notes; References

Part III. Violations of Exclusion Restrictions

11. SUTVA: Interference and Hidden Treatments
Key Ideas; 11.1 Three Running Examples; 11.2 SUTVA Violation: Interference; 11.2.1 Treatment Effect Parameters; 11.2.2 Difference-in-Means; 11.3 Approaches to Dealing with Interference Violations; 11.3.1 Linear-in-Means Model; 11.3.2 Clustered Randomized Trials to Attenuate Spillovers; 11.3.3 Randomization Inference under Interference; 11.4 Embracing Spillovers: Randomized Saturation Designs; 11.4.1 Designs to Explore Spillovers; 11.5 Hidden Versions of Treatment; 11.5.1 Potential Outcomes with Hidden Versions of Treatment; 11.5.2 Implications of Hidden Versions of Treatment; 11.6 Conclusions; Appendix 11.1 Optimal Saturation Designs; Notes; References

12. Observability: Nonrandom Attrition
Key Ideas; 12.1 Two Running Examples; 12.2 Attrition in the Potential Outcomes Framework; 12.2.1 Internal Validity for Respondents; 12.2.2 Internal Validity for Study Participants; 12.3 Tests for Internal Validity; 12.3.1 Tests Using Baseline Outcome Data; 12.3.2 Selective Attrition Test; 12.3.3 Determinants of Attrition Test; 12.3.4 Attrition Rates That Vary by Treatment; 12.4 Analyzing Data with Attrition; 12.4.1 Available Case Analysis; 12.4.2 Horowitz and Manski Bounds; 12.4.3 Inverse Probability Weighting; 12.4.4 Selection Models; 12.4.5 Lee Bounds; 12.5 Missing Covariates; 12.5.1 Complete and Available Case Analysis; 12.5.2 Dummy Variable Adjustment; 12.5.3 Imputation; 12.6 Six Design Tips to Attenuate Attrition; 12.7 Conclusions; Appendix 12.1 Putting It All Together with CHECC; Notes; References

13. Complete Compliance: One-Sided and Two-Sided Violations
Key Ideas; 13.1 Two Running Examples; 13.2 A Framework for Imperfect Compliance; 13.2.1 As-Treated Analysis Reintroduces the Selection Problem; 13.2.2 Intention-to-Treat (ITT) Analysis; 13.3 Randomization as an Instrumental Variable and New Assumptions; 13.4 Calculating ATEs for Compliers; 13.4.1 Characterizing Compliers; 13.4.2 Widening the Goalposts: Bounding the ATE; 13.5 Six Design Tips to Attenuate Noncompliance; 13.6 Conclusions; Appendix 13.1 Encouragement Designs; Notes; References

14. Statistical Independence and Compromised Randomization
Key Ideas; 14.1 Three Running Examples; 14.2 Statistical Independence: The Basics; 14.3 Tests for Compromised Randomization; 14.3.1 Comparing Planned versus Actual Assignment; 14.3.2 Computing P-Values to Test for Compromised Randomization; 14.3.3 Informal Checks of Compromised Randomization; 14.4 Case 1: A Rerandomization Approach; 14.5 Case 2a: Inference with Compromised Randomization and Full Documentation; 14.5.1 Inference When the Randomization Procedure Is Correlated with Potential Outcomes; 14.6 Case 2b: Inference with Compromised Randomization and Only Partial Documentation; 14.6.1 An Example of Compromised Randomization Being Partly Understood at the Aggregate Level; 14.6.2 Breaking Down the Randomization Procedure; 14.6.3 A Basic Model; 14.6.4 Testing a Single Joint Null Hypothesis; 14.7 A Decision-Theoretic Framework with Incomplete Documentation; 14.7.1 Modeling the Randomization Protocol; 14.7.2 Partially Identifying Model Parameters; 14.7.3 Worst-Case Randomization Test; 14.8 Seven Design Tips to Prevent Compromised Randomization; 14.8.1 Three Tips When the Researcher Is Responsible for Randomization; 14.8.2 Four Tips When the Experimenter Relies on Partners for Randomization; 14.9 Conclusions; Appendix 14.1 Using Fisher’s Sharp Inference with Compromised Randomization; Appendix 14.2 Putting the Ideas of Section 14.6 in Motion; Appendix 14.3 Extending Section 14.6 to Test Multiple Hypotheses; Notes; References

Part IV. Building Scientific Knowledge

15. Building Confidence in (and Knowledge from) Experimental Results
Key Ideas; 15.1 Three Running Examples; 15.2 The Philosophy of Building Knowledge from Experimental Results; 15.3 A Framework for Building Confidence in Experimental Results; 15.3.1 Effects of α and β on the PSP; 15.3.2 Null Results Are Informative Too; 15.4 From the Researcher to the Research Community; 15.4.1 Replication Types; 15.4.1.1 Interpreting Replication Results; 15.4.1.2 Building Knowledge and Confidence with Replications; 15.4.1.3 Why Are Replications an Endangered Species in Economics?; 15.5 The Beauty of Selective Data Generation: From the Lab to the Field; 15.6 Conclusions; Appendix 15.1 Gaining Insights into Equation 15.5 and Beyond; Unbiased, Sympathetic, and Adversarial Replications; Heterogeneity across Replicating Teams; Should We Have Confidence in Our Updating from Experimental Results?; Notes; References

16. Generalizability and Scaling
Key Ideas; 16.1 Two Running Examples; 16.2 External Validity Primers; 16.2.1 From Treatment Effects to the Parameter of Interest; 16.2.2 Three Types of Horizontal Generalizability; 16.2.3 Assumptions Yielding τ = τ*; 16.3 Digging Deeper into Assumptions 16.1–16.4; 16.3.1 Assumption 16.1: Selection into the Experiment; 16.3.1.1 A Model of Selection into Experiments; 16.3.1.2 How Experimental Design Affects Selection; 16.3.2 Assumption 16.2: Representativeness of the Population; 16.3.3 Assumptions 16.3 and 16.4: Investigation Neutrality and Parallelism; 16.3.3.1 Experimenter Scrutiny: Effects of A; 16.3.3.2 Experimental Environment: Effects of E; 16.3.3.3 Stakes: Effects of Ii; 16.4 Scaling; 16.4.1 A Behavioral Model of Scaling; 16.4.2 Constructive Steps Forward: The SANS Conditions; Author Onus Probandi; 16.4.3 Three Waves of Scientific Research; 16.5 Conclusions; Appendix 16.1 Mechanics of Scaling Up; Notes; References

Part V. The Ethical and Practical Sides of Economic Experiments

17. The Ethics of Economic Experiments
Key Ideas; 17.1 Four Running Examples; 17.2 Ethics Primer; 17.2.1 A Simple Economic Model; 17.2.2 A Simple Philosophical Framework; 17.3 Three Theories of (Research) Ethics; 17.3.1 Consequentialism; 17.3.2 Deontological Ethics; 17.3.3 Rule Consequentialism; 17.4 Putting It All Together; 17.4.1 Truthful, Unbiased, and Transparent Reporting of Results and Conflicts of Interest; 17.4.2 Appropriate Data Governance and Management; 17.4.3 Conflicts between Individual Protections and Scientific Discovery; 17.4.3.1 Should You Even Do an Experiment?; 17.4.3.2 With Whom Should You Experiment?; 17.4.3.3 How Should You Experiment?; 17.4.3.3.1 Informed Consent: Respecting Autonomy; 17.4.3.3.2 Defining Benefits and Harm: From the Subject to Innocent Bystanders; 17.4.3.3.3 Outright Deception and Incomplete Disclosure; 17.5 Benchmarking Research Ethics: Gold to Plutonium-239; 17.6 Conclusions; Appendix 17.1 Data Governance and Management Playbook; Being Trustworthy for Knowledge Creation; Being Trustworthy regarding Subjects; Differential Privacy; Being Trustworthy for Third Parties; Accessibility and Accountability; Security; Notes; References

18. Pre-treatment Administrative Responsibilities
Key Ideas; 18.1 One Running Example; 18.2 Overarching Goals of Pre-treatment Tasks; 18.3 Institutional Review; 18.3.1 IRBs and Research Ethics; 18.3.2 IRB Application Materials; 18.3.2.1 IRB Requirements: Who, What, How, and to Whom?; 18.3.3 IRB Review Process and Determinations; 18.3.3.1 IRBs and Informed Consent; 18.3.3.2 IRBs and Outright Deception; 18.3.3.3 IRBs and Pilots; 18.3.3.4 IRBs and Multi-institutional Research; 18.3.3.5 Communication with IRBs; 18.4 Registries and Pre-analysis Plans; 18.4.1 Trial Registries; 18.4.1.1 Existing Registries; 18.4.1.2 The AEA Registry; 18.4.1.3 Registry Limitations; 18.4.2 Pre-analysis Plans; 18.5 Data Use Agreements and Outside Partners; 18.5.1 Components of a DUA; 18.6 Due Diligence Administrative Checklist; 18.7 Conclusions; Appendix 18.1 A Plea to the IRB; What Should IRBs Do?; A. Gather Information Typically Contained in Pre-registrations and PAPs; B. Focus on the Relevant; C. Be Honest with Themselves; D. Be Clear and Consistent; E. Guide How Researchers Should Work with Third Parties; Notes; References

19. Optimal Use of Incentives in Economic Experiments
Key Ideas; 19.1 Four Running Examples; 19.2 A Simple Economic Model; 19.2.1 Extending the Model to Explore Knowledge Creation: Internal Validity; 19.2.1.1 Within-Subject versus Between-Subject Design; 19.2.1.2 Statistical Surrogates; 19.2.2 Extending the Model to Explore Knowledge Creation: Improving Inference; 19.2.2.1 Nuts and Bolts of Design; 19.2.2.2 Pilot Experiments; 19.2.2.3 Mediators and Moderators; 19.2.2.4 EP2: From τ_{Pi=1} to τ and Beyond; 19.2.2.5 EP2: From One Environment to Another; 19.2.2.6 EP2: Fostering Scaling by Adding Option C Thinking to Designs; 19.3 Creating the Microeconomic Environment; 19.3.1 Using Induced Values for Control; 19.3.2 Potentially Losing Control; 19.3.2.1 An Inferential Challenge: Flat Payoffs; 19.3.2.2 An Inferential Challenge: Construct Validity; 19.3.2.3 Experimental Instructions across the Empirical Spectrum; 19.4 Conclusions; Appendix 19.1 Inducing Risk Posture; Appendix 19.2 Tips for Writing Experimental Instructions across the Empirical Spectrum; 10 Tips for Writing Laboratory Experimental Instructions; From the Lab to the Field; 8 Tips for Artefactual Field Experiments (AFEs); 6 Tips for Framed Field Experiments (FFEs); 5 Tips for Natural Field Experiments (NFEs); Practical Implementation; Conclusion; Notes; References

20. Epilogue: The (Written) Road to Scientific Knowledge Diffusion
Key Ideas; 20.1 Give the People What They Want! But . . . What Do They Want?; 20.2 Creating a Logical Framework; 20.2.1 Applying BEC Holistically; PREP; 20.3 Your Writing Style; 20.3.1 Getting Started: An Eight-Step “Inside-Out Approach” to Writing Scientific Studies; 20.4 Introducing Your Pen to the World; 20.5 Epilogue; Appendix 20.1 PREP Checklist: Proper Reporting in an Experimental Paper; Notes; References

Part VI. “How To” Supplements
S1: How to Conduct Experiments in Markets: From the Lab to the Field
S2: How to Conduct Experiments with Organizational Partnerships
S3: How to Conduct Experiments with Children
S4: How to Conduct Experiments to Measure Preferences, Beliefs, and Constraints
S5: How to Conduct Experiments to Generate Unconventional Data

Glossary
Notation Crib Sheet
Further Readings
Index

Reviews

“A wonderful and accessible guide to field and lab experiments from one of the true leaders in the field.” -- Stefanie Stantcheva | Harvard University

“John A. List has given us an ambitious work that provides a more comprehensive framework for lab and field experiments as well as their discovery methodology than any other text that currently exists. Experimental Economics is poised to displace its predecessors and become the new standard reference for the field.” -- Vernon Smith | Chapman University and winner of the Nobel Prize for Economics

“This advanced textbook offers a rigorous exploration of the methodological design of laboratory and field experiments. It provides a sophisticated roadmap for designing experiments that are both internally valid and externally meaningful—making it essential reading for graduate students and applied researchers alike.” -- Ernst Fehr | University of Zurich

“Experimental Economics is an amazing resource that helps not only by providing instructional materials but also by guiding the instructor on how to structure the course itself. It is common to be torn between shaping an experimental economics class into either a methods or topics course, but List’s work allows for both, as the ‘running examples’ are perfect for exposing students to different topics of application while presenting the methodological framework common to those topics.” -- Seda Ertac | Koc University



Author Information

John A. List is the Kenneth C. Griffin Distinguished Service Professor in Economics at the University of Chicago. He is a member of the American Academy of Arts and Sciences and a research associate of the NBER. He is the author, most recently, of the best-selling The Voltage Effect: How to Make Good Ideas Great and Great Ideas Scale.


Countries Available

All regions