Part II: Quantum Field Theory
The eight chapters that constitute Part II of Projects in Scientific Computing survey the literature of quantum field theory from a perspective that (in certain respects) inverts the approach taken in Part I. Where Part I began with mathematical and computational infrastructure and built outward toward financial applications, treating financial problems as instances of a broader class of computational problems in applied mathematics and theoretical physics, Part II takes quantum field theory itself as its subject. The aim is to provide a structured guide to the literature—not as a self-contained textbook, but as a collection of bibliographic essays that trace the field's intellectual development, identify the mathematical structures that unify its disparate branches, and map the primary sources that a working student or practitioner should know.
The organizing principle throughout is the same as in Part I: Follow the mathematical structures rather than the conventional disciplinary boundaries. Quantum field theory is not a single theory. It is a collection of methods, intuitions, and formal structures that emerged from the attempt to reconcile quantum mechanics with special relativity, and which subsequently proved indispensable across domains that have little obvious connection to that original problem—condensed matter physics, statistical mechanics, cosmology, and, as Part I demonstrates, even mathematical finance and machine learning. The history of QFT is therefore not a simple narrative of progressive refinement. It is a story of divergence and reconvergence: a framework born in quantum electrodynamics that fractured into distinct programs—high-energy gauge theory, many-body condensed matter physics, mathematical field theory—and then partially reunified through the renormalization group and the effective field theory programme. That pattern of divergence and reconvergence is reflected in the structure of these notes.
The organization of Part II follows two complementary logics: historical development and mathematical escalation. The first two chapters establish context. Chapters 3 through 5 develop the core mathematical formalisms. Chapters 6 through 8 extend those formalisms into progressively more demanding physical and mathematical settings.
Chapter 1 (History of Quantum Field Theory) reconstructs the development of QFT through its literature. The bibliography of fifty sources, spanning from Dirac’s 1927 quantization of the radiation field to contemporary treatments, is organized into four sections—Development, Quantum Electrodynamics, Gauge Theory and High-Energy Physics, and Many-Body Theory and Condensed Matter—that reflect the genuine structure of the field’s evolution. These divisions correspond to distinct communities, distinct technical problems, and distinct styles of reasoning that nonetheless share a common mathematical language. The chapter introduces a three-layer bibliographic strategy that recurs throughout Part II: primary sources (the original papers in which key ideas first appeared), pedagogical texts (the textbooks and lecture notes that codified and systematized the field at successive stages), and historical and biographical accounts (the works that provide institutional, intellectual, and personal context). The distinction between these layers is essential for reading well. A primary source tells you what happened; a pedagogical text tells you how to do it; a historical account tells you why it mattered and how it emerged from a particular environment.
Chapter 2 (Quantum Foundations) surveys the foundations of quantum mechanics through a curated collection of thirty-seven primary and contextual sources, organized by interpretive tradition: wave mechanics and pilot-wave theory, the Copenhagen formalism, statistical and information-theoretic reconstructions, Bell’s theorem and nonlocality, and the Everettian relative-state interpretation. The inclusion of this material in a volume on scientific computation is deliberate. The interpretive questions surveyed here are not merely philosophical; they have direct computational consequences. The pilot-wave interpretation implies a particular class of stochastic differential equations. Phase-space methods underwrite semiclassical simulation techniques. The information-theoretic reconstruction program connects to quantum error correction through the structure of generalized probabilistic theories. Bell inequality violations are now routinely simulated as benchmarks for quantum hardware. And the measurement problem, far from being a philosophical curiosity, determines the computational cost of simulating quantum systems on classical hardware—the exponential overhead of exact simulation is a direct consequence of the superposition principle and its entanglement-generating dynamics.
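The exponential overhead of exact classical simulation is easy to make concrete. The sketch below is our own back-of-the-envelope illustration (the function name is not drawn from the chapter): a dense state vector for $n$ qubits stores $2^n$ complex amplitudes, each 16 bytes at double precision.

```python
def statevector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    """Memory for a dense n-qubit state vector: 2**n complex
    amplitudes at 16 bytes each (complex128)."""
    return bytes_per_amplitude * 2 ** n_qubits

# 30 qubits already require 16 GiB; 50 qubits require 16 PiB.
for n in (10, 30, 50):
    print(f"{n} qubits: {statevector_bytes(n):,} bytes")
```

Each additional qubit doubles the storage (and the cost of every gate application), which is the quantitative content of the claim that superposition and entanglement drive the classical simulation cost.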
Chapter 3 (Path Integrals) develops the theory of Feynman path integrals around a specific organizing thesis: that path integrals are not uniquely defined without a prescription, and that action principles and geometric structures serve as selection rules to resolve the measure and ordering ambiguities that arise in curved, constrained, and singular settings. The annotated bibliography of thirty-seven references, spanning from Feynman’s 1942 dissertation through treatments of path integrals on curved manifolds, is organized into six thematic sections that trace a deliberate arc from pedagogical foundations through Schwinger’s quantum action principle to the culminating problem of defining the functional integral on Riemannian manifolds. The emphasis throughout is on the reciprocal relationship between Schwinger’s variational formulation and Feynman’s sum-over-histories, and on the mathematical structures—Van Vleck–Morette determinants, symplectic two-forms, Laplace–Beltrami operators—that constrain the correct definition of the functional integral in nontrivial settings. The path integral is the single most pervasive formalism in these notes: it reappears in every subsequent chapter of Part II and provides the principal bridge to the stochastic calculus, heat-kernel methods, and derivative pricing formalism of Part I.
Chapter 4 (Koopman–von Neumann Mechanics) addresses the classical–quantum interface through three interconnected bodies of literature: the Koopman–von Neumann operator formalism on phase space, Feynman’s commutator-based approach to gauge theory foundations, and the classical path integral (CPI) formalism with its hidden supersymmetric structures. The chapter is organized around a structural fact that has been understood since the early 1930s but remains underappreciated: classical Hamiltonian mechanics admits a natural Hilbert space formulation in which observables are represented by linear operators, states by wavefunctions, and time evolution by a unitary group. The operators representing classical position and momentum commute—$[\hat{x}, \hat{p}] = 0$ rather than $[\hat{x}, \hat{p}] = i\hbar$—and the generator of time evolution is the Liouville operator rather than the Hamiltonian, but the mathematical infrastructure is otherwise parallel. The classical path integral program, developed primarily by Gozzi and collaborators, reveals that this parallel extends deeper than the operator dictionary: proper treatment of the CPI exposes hidden BRS symmetry, $N = 2$ supersymmetry, and a realization of Cartan’s differential calculus on symplectic manifolds through Grassmann variables. A recurring motif is the appearance of gauge-like symmetry in multiple guises—phase freedom in KvN wavefunctions, emergent electromagnetic structure from commutator algebras, and BRS invariance in the classical path integral—reflecting deep geometric properties of classical phase space.
Chapter 5 (Statistical Field Theory and Nonequilibrium Quantum Field Theory) provides a guided tour through the literature on nonequilibrium quantum field theory, organized as an escalating progression through four blocks: equilibrium statistical mechanics, statistical field theory, classical nonequilibrium methods, and the full real-time quantum field theory machinery. The annotated bibliography of forty-two sources encodes a specific pedagogical philosophy that might be called “Euclidean-first”: the deep correspondence between thermal and quantum fluctuations, mediated by the formal device of Wick rotation to imaginary time, provides the conceptual backbone, and nonequilibrium methods generalize rather than replace this structure. The transition from equilibrium to nonequilibrium is then, in part, the transition from the Euclidean time axis (with periodic or antiperiodic boundary conditions encoding the temperature) to the Schwinger–Keldysh contour (a closed time path that encodes initial conditions, causal time evolution, and the doubled degrees of freedom needed to describe density matrices rather than pure states). Three thematic threads unify the material across the four blocks: the taxonomy of two-point functions (retarded, advanced, statistical, spectral—and the relationships between them), effective action methods (from the standard one-particle-irreducible effective action through the two-particle-irreducible formalism that provides self-consistent approximation schemes), and universality (the renormalization group explanation for why disparate microscopic systems can exhibit identical macroscopic behavior, extended from equilibrium critical phenomena to far-from-equilibrium dynamics).
Chapter 6 (Renormalization and Triviality) develops the one-loop renormalization program for interacting scalar field theories, with particular attention to the triviality problem in four-dimensional $\varphi^4$ theory. The chapter works through the construction of the effective potential for a two-scalar system with portal coupling, carries out the renormalization program in several coupling regimes, and uses the calculation to examine the relationship between triviality and the effective potential. The triviality problem—the result, supported by rigorous mathematics, perturbation theory, and lattice simulation, that the renormalized coupling in $\varphi^4$ theory vanishes as the ultraviolet cutoff is removed—has immediate physical significance: the Higgs sector of the Standard Model is, at its core, a scalar $\varphi^4$ theory coupled to gauge and fermion fields, and triviality constrains the range of validity of the Standard Model as an effective field theory. The chapter surveys the literature from the foundational rigorous results of Aizenman and Fröhlich through the Consoli–Stevenson program on mode-dependent renormalization, modern “loophole” constructions, and connections to the Standard Model Higgs sector. The treatment provides a concrete setting in which to compare autonomous and perturbative renormalization conventions within a single calculation.
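The one-loop heuristic behind triviality fits in one formula. With the coupling normalized as $\lambda\varphi^4/4!$ in four dimensions, the one-loop beta function is positive,

$$ \beta(\lambda) = \mu\frac{d\lambda}{d\mu} = \frac{3\lambda^2}{16\pi^2} + O(\lambda^3), \qquad \lambda(\mu) = \frac{\lambda(\Lambda)}{1 + \dfrac{3\lambda(\Lambda)}{16\pi^2}\ln(\Lambda/\mu)}, $$

so holding the bare coupling $\lambda(\Lambda)$ fixed (or even sending it to infinity) while removing the cutoff forces $\lambda(\mu) \le 16\pi^2/\big(3\ln(\Lambda/\mu)\big) \to 0$. This is only the perturbative shadow of the rigorous results surveyed in the chapter, but it conveys why a nonzero renormalized coupling and an infinite cutoff are in tension.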
Chapter 7 (Quantum Field Theory in Curved Spacetime) surveys the development of QFT on classical but dynamical gravitational backgrounds, from DeWitt’s foundational work on covariant quantization in the 1950s through the modern locally covariant algebraic framework of Hollands and Wald. The subject occupies a distinctive position in theoretical physics: it is the regime where quantum fields propagate on a curved spacetime background, giving rise to phenomena—Hawking radiation, the Unruh effect, cosmological particle creation—that have no counterpart in flat-space field theory. The treatment emphasizes the conceptual shifts that distinguish QFT in curved spacetime from Minkowski-space QFT: the absence of a preferred vacuum state, the centrality of the Hadamard condition as a replacement for the vacuum spectral condition, microlocal analysis as the natural mathematical language for singularity structure, and the locally covariant framework in which the theory is defined functorially on the category of globally hyperbolic spacetimes. The Schwinger–DeWitt expansion and heat-kernel techniques that are central to this chapter are the same mathematical tools that appear in the short-maturity implied-volatility expansions of Part I, Chapter 8—a connection that is not decorative but reflects the universality of heat-kernel methods for diffusion processes, whether the diffusion is of probability on a volatility surface or of a quantum field on a curved manifold.
Chapter 8 (Stochastic Quantization) collects the essential ideas behind the program initiated by Parisi and Wu in 1981: the generation of quantum field theory correlators from a classical stochastic process in a fictitious time variable. The Langevin equation that defines stochastic quantization is, at bottom, a stochastic partial differential equation, and stochastic quantization provides an alternative to the Metropolis–Hastings algorithm for sampling field configurations. The notes develop the basic formalism in flat Euclidean space, establish the proof of equivalence via the Fokker–Planck equation, extend to gauge theories (where the Parisi–Wu observation that stochastic quantization avoids explicit gauge fixing is particularly attractive), and then trace the connections to curved backgrounds and numerical relativity. The extension to complex actions via complex Langevin dynamics is treated in detail, motivated by the sign problem in finite-density QCD and the broader enterprise of numerical approaches to quantum gravity. This chapter sits at a natural crossroads: it connects the stochastic calculus of Part I, Chapter 4 to the field-theoretic formalism of the present volume, and it provides computational methods that link directly to the numerical relativity program of Part III.
Several structural features of Part II as a whole are worth making explicit. First, the collective bibliography is substantial: the eight chapters together survey roughly 280 primary sources spanning nearly a century of literature, from Dirac’s 1927 quantization of the radiation field to 2025 preprints. As in Part I, the bibliography is selective and personal—it does not aim to be exhaustive—but it is coherent in its arc. Each chapter’s source list is itself an argument about how the ideas it covers fit together.
Second, the cross-referencing between chapters is pervasive and intentional. The path integral formalism of Chapter 3 reappears in every subsequent chapter: in the classical path integral of Chapter 4, the Euclidean functional integral of Chapter 5, the effective potential calculations of Chapter 6, the Schwinger–DeWitt expansion of Chapter 7, and the stochastic quantization program of Chapter 8. The renormalization group, introduced historically in Chapter 1, provides the organizing principle for Chapter 5’s treatment of universality, Chapter 6’s analysis of triviality, and Chapter 7’s discussion of the Hollands–Wald program. Gauge structure appears in the Faddeev–Popov framework of Chapters 1 and 3, in the hidden gauge freedom of KvN wavefunctions in Chapter 4, in the Parisi–Wu observation about gauge fixing in Chapter 8, and implicitly throughout. These are not thematic echoes; they are instances of the same mathematics appearing in different physical contexts.
Third, Part II maintains explicit connections to both Part I and Part III. The heat-kernel methods that appear in Chapter 7 (QFT in curved spacetime) and Chapter 3 (path integrals) are the same heat-kernel methods used for short-maturity implied-volatility expansions in Part I, Chapter 8 (Option Trading). The stochastic calculus that underlies the Parisi–Wu program of Chapter 8 draws on the same Langevin, Itô/Stratonovich, and Fokker–Planck descriptions developed in Part I, Chapter 4 (Stochastic Calculus). The gauge-theoretic reformulation of arbitrage pricing in Part I employs the fiber-bundle mathematics that appears throughout the present volume. In the other direction, the curved-spacetime methods of Chapter 7 and the stochastic gravity connection of Chapter 8 point directly toward the numerical relativity program of Part III. The Martin–Siggia–Rose–Janssen–de Dominicis (MSRJD) formalism, which provides a field-theoretic description of classical stochastic dynamics, appears in Part I’s treatment of stochastic calculus, in Chapter 5’s nonequilibrium methods, and in Chapter 8’s stochastic quantization—the same mathematical structure serving three different physical contexts.
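For reference, the MSRJD construction assigns to a Langevin equation $\partial_t\varphi = -\delta\mathcal{H}/\delta\varphi + \xi$, with noise correlator $\langle\xi\xi'\rangle = 2D\,\delta$, the generating functional (in one common convention)

$$ Z = \int \mathcal{D}\varphi\,\mathcal{D}\tilde{\varphi}\; \exp\!\left\{ -\int dt\,dx \left[ \tilde{\varphi}\Big(\partial_t\varphi + \frac{\delta \mathcal{H}}{\delta\varphi}\Big) - D\,\tilde{\varphi}^2 \right] \right\}, $$

where $\tilde{\varphi}$ is the auxiliary response field: correlation functions come from insertions of $\varphi$, response functions from mixed $\varphi\tilde{\varphi}$ insertions. It is this doubled-field structure that recurs in the Schwinger–Keldysh methods of Chapter 5 and the fictitious-time dynamics of Chapter 8.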
Fourth, the collection exhibits a methodological stance that might be characterized as physics as the organizing principle for computation. Where Part I used physics to illuminate financial problems, Part II takes the internal structure of physics itself as the subject, but always with attention to the computational implications. The measurement problem of Chapter 2 determines the computational cost of quantum simulation. The path integral of Chapter 3 is simultaneously a conceptual framework and a computational tool (Monte Carlo methods are, in essence, stochastic evaluations of path integrals). The KvN formalism of Chapter 4 raises the question of what quantization adds to classical mechanics as a computational resource. The Schwinger–Keldysh formalism of Chapter 5 provides the framework for real-time numerical simulation of quantum fields. The Schwinger–DeWitt expansion of Chapter 7 yields analytically tractable approximation schemes for QFT in curved backgrounds. Stochastic quantization in Chapter 8 offers an alternative numerical method for lattice field theory. And the triviality analysis of Chapter 6 constrains the regime of validity within which numerical simulations of scalar field theories are meaningful.
The notes are intended for readers with graduate-level training in physics or mathematics who wish to understand quantum field theory not merely as a collection of calculational techniques but as a subject with a rich intellectual history, deep mathematical structure, and pervasive connections to the rest of scientific computing. Each chapter is designed to be readable independently, but the cumulative effect of reading them in sequence charts a map of a literature that reveals quantum field theory as a language—one whose grammar is the path integral, whose vocabulary is the renormalization group, and whose dialects extend from the subatomic to the cosmological, from the physical to the financial and computational.