Interdisciplinary Numerical Methods: "Hub" 18.S190/16.S090

This new MIT course (Spring 2025) introduces numerical methods and numerical analysis to a broad audience (assuming 18.03, 18.06, or equivalents, and some programming experience).  It is divided into two 6-unit halves:

  • 18.S190/16.S090 (first half-term “hub”): basic numerical methods, including curve fitting, root finding, numerical differentiation and integration, numerical differential equations, and floating-point arithmetic. Emphasizes the complementary concerns of accuracy and computational cost. Prof. Steven G. Johnson and Prof. Qiqi Wang.

  • Second half-term: three options for 6-unit “spokes”

    • 18.S191/16.S091 — numerical methods for partial differential equations: finite-difference and finite-volume methods, boundary conditions, accuracy, and stability. Prof. Qiqi Wang.

    • 18.S097/16.S097 — large-scale linear algebra: sparse matrices, iterative methods, randomized methods. Prof. Steven G. Johnson.

    • 18.S192/16.S098 — parallel numerical computing: multi-threading, distributed-memory computing, and trading off computation for parallelism — may be taken simultaneously with other spokes! Prof. Alan Edelman.

Taking both the hub and any spoke will count as an 18.3xx class for math majors, similar to 18.330.  Taking both the hub and the PDE spoke will substitute for 16.90. Weekly homework, no exams, but spokes will include a final project.

This repository is for the "hub" course (currently assigned the temporary numbers 18.S190/16.S090).

18.S190/16.S090 Syllabus, Spring 2025

Instructors: Prof. Steven G. Johnson and Prof. Qiqi Wang.

Lectures: MWF10 in 2-142 (Feb 3 – Mar 31), slides and notes posted below. Lecture videos posted in Panopto Video on Canvas.

Homework and grading: 6 weekly psets, posted Fridays and due Friday midnight; psets are accepted up to 24 hours late with a 20% penalty; for any other accommodations, speak with S3 and have them contact the instructors. No exams.

  • Homework assignments will require some programming — you can use either Julia or Python (your choice; instruction and examples will use a mix of languages).

  • Submit your homework electronically via Gradescope on Canvas as a PDF containing code and results (e.g. from a Jupyter notebook) and a scan of any handwritten solutions.

  • Collaboration policy: Talk to anyone you want to and read anything you want to, with two caveats: First, make a solid effort to solve a problem on your own before discussing it with classmates or googling. Second, no matter whom you talk to or what you read, write up the solution on your own, without having their answer in front of you (this includes ChatGPT and similar). (You can use psetpartners.mit.edu to find problem-set partners.)

Teaching Assistants: Mo Chen and Shania Mitra (shania at mit.edu)

Office Hours: Wednesday 4pm in 2-345 (Prof. Johnson) and Thursday 5pm via Zoom (Prof. Wang).

Resources: Piazza discussion forum, math learning center, TSR^2 study/resource room, pset partners.

Textbook: No required textbook, but suggestions for further reading will be posted after each lecture. The book Fundamentals of Numerical Computation (FNC) by Driscoll and Braun is freely available online, has examples in Julia, Python, and Matlab, and is a valuable resource. Fundamentals of Engineering Numerical Analysis (FENA) by Moin is another useful resource (readable online with MIT certificates).

This document is a brief summary of what was covered in each lecture, along with links and suggestions for further reading. It is not a good substitute for attending lecture, but may provide a useful study guide.

Lecture 1 (Feb 3)

Brief overview of the huge field of numerical methods, and an outline of the small portion that this course will cover. The key new concerns in numerical analysis are (i) performance (traditionally, counting arithmetic operations, though nowadays memory access often dominates) and (ii) accuracy (both floating-point roundoff errors and the convergence of the intrinsic approximations in the algorithms). In contrast, the purer, more abstract mathematics of continuity is called "analysis" and is mainly concerned with (ii) but not (i): analysts are happy to prove that limits converge, but don't care too much how quickly they converge. Traditional discrete computer science, meanwhile, is mainly concerned with (i) but not (ii): it cares about performance and resource usage, but traditional algorithms like sorting are either right or wrong, never approximate.

As a starting example, we considered the convergence of finite-difference approximations to derivatives df/dx of given functions f(x), which appear in many areas of numerical analysis (such as solving differential equations) and are also closely tied to polynomial approximation and interpolation. By examining the errors in the finite-difference approximation, we immediately see two competing sources of error: truncation error from the non-infinitesimal Δx, and roundoff error from the finite precision of the arithmetic. Understanding these two errors will be the gateway to many other subjects in numerical methods.
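For concreteness, here is a minimal sketch (in Julia; not the in-class notebook) of this tradeoff for the forward difference (f(x+Δx) − f(x))/Δx applied to f(x) = sin(x) at x = 1:

```julia
# Forward-difference approximation of d/dx sin(x) at x = 1: the error
# shrinks roughly like O(Δx) at first (truncation error), then grows
# again for very small Δx as roundoff/cancellation takes over.
f(x) = sin(x)
exact = cos(1.0)
for dx in (1e-1, 1e-4, 1e-8, 1e-12)
    approx = (f(1.0 + dx) - f(1.0)) / dx
    println("Δx = $dx   |error| = $(abs(approx - exact))")
end
```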

Further reading: FNC book: Finite differences; FENA book: chapter 2. There is a lot of information online on finite-difference approximations; see e.g. these 18.303 notes or Section 5.7 of Numerical Recipes. The Julia FiniteDifferences.jl package provides many algorithms to compute finite-difference approximations; a particularly robust and powerful way to obtain high accuracy is to employ Richardson extrapolation to smaller and smaller Δx. If you make Δx too small, however, the finite precision (number of digits) of floating-point arithmetic leads to catastrophic cancellation errors.
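To illustrate the Richardson idea mentioned above, here is a hand-rolled single extrapolation step (a sketch of the concept, not the FiniteDifferences.jl API), using the central difference, whose error scales as Δx²:

```julia
# Central difference D(Δx) = (f(x+Δx) - f(x-Δx)) / (2Δx) has error
# c₂Δx² + c₄Δx⁴ + ⋯ ; combining two step sizes cancels the Δx² term.
D(f, x, dx) = (f(x + dx) - f(x - dx)) / (2dx)

g(x) = exp(x)
x, dx = 1.0, 0.1
exact = exp(1.0)
plain  = D(g, x, dx/2)                          # error ≈ O(Δx²)
extrap = (4*D(g, x, dx/2) - D(g, x, dx)) / 3    # error ≈ O(Δx⁴)
println("central difference: ", abs(plain - exact))
println("Richardson:         ", abs(extrap - exact))
```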

Lecture 2 (Feb 5)

One of the most basic sources of computational error is that computer arithmetic is generally inexact, leading to roundoff errors. The reason for this is simple: computers can only work with numbers having a finite number of digits, so they cannot even store arbitrary real numbers. Only a finite subset of the real numbers can be represented using a particular number of "bits", and the question becomes which subset to store, how arithmetic on this subset is defined, and how to analyze the errors compared to theoretical exact arithmetic on real numbers.

In floating-point arithmetic, we store both an integer coefficient and an exponent in some base: essentially, scientific notation. This allows a large dynamic range and fixed relative accuracy: if fl(x) is the closest floating-point number to any real x, then |fl(x)-x| < ε|x| where ε is the machine precision. This makes error analysis much easier and makes algorithms mostly insensitive to overall scaling or units, but it has the disadvantage of requiring specialized floating-point hardware to be fast. Nowadays, all general-purpose computers, and even many small devices like cell phones, have floating-point units.
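A few quick REPL-style illustrations of this fixed relative accuracy, using standard Julia built-ins (values in the comments are for Float64):

```julia
eps(Float64)           # ε ≈ 2.22e-16, the relative machine precision
eps(1.0)               # absolute spacing of floats near 1.0 (= ε)
eps(1e10)              # spacing near 1e10 ≈ 2e-6: the absolute gap grows,
                       #   but the relative gap stays ≈ ε
1.0 + eps()/4 == 1.0   # true: increments below ε/2 round away entirely
```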

Went through some simple definitions and examples in Julia (see notebook above), illustrating the basic ideas and a few interesting tidbits. In particular, we looked at error accumulation during long calculations (e.g. summation), as well as examples of catastrophic cancellation and how it can sometimes be avoided by rearranging a calculation.
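As an example of the kind of rearrangement discussed in class (a sketch, not the notebook itself): for small x, computing 1 − cos(x) directly subtracts two nearly equal numbers, while the algebraically identical form 2 sin²(x/2) does not:

```julia
# Catastrophic cancellation and its cure by algebraic rearrangement:
x = 1e-8
naive = 1 - cos(x)       # cos(1e-8) rounds to exactly 1.0, so this is 0.0
safe  = 2 * sin(x/2)^2   # ≈ 5.0e-17, accurate to full precision
```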

Further reading: FNC book: Floating-point numbers. Trefethen & Bau's Numerical Linear Algebra, lecture 13. What Every Computer Scientist Should Know About Floating-Point Arithmetic (David Goldberg, ACM 1991). William Kahan, How Java's floating-point hurts everyone everywhere (2004): contains a nice discussion of floating-point myths and misconceptions. A brief but useful summary can be found in this Julia-focused floating-point overview by Prof. John Gibson. Because many programmers never learn how floating-point arithmetic actually works, there are many common myths about its behavior. (An infamous example is 0.1 + 0.2 giving 0.30000000000000004, which puzzles people so frequently that it has inspired a web site, https://0.30000000000000004.com/!)

Lecture 3 (Feb 7)

  • Interpolation
  • pset 1: to be posted

Optional Julia Tutorial (Feb 7 @ 4pm in 2-190)

A basic overview of the Julia programming environment for numerical computations. This tutorial will cover what Julia is and the basics of interaction, scalar/vector/matrix arithmetic, and plotting — just as a "fancy calculator" for now (without the "real programming" features).
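A minimal sketch of that "fancy calculator" flavor (the plotting line assumes the Plots.jl package is installed):

```julia
A = [1 2; 3 4]        # a 2×2 matrix
b = [5, 6]            # a vector
x = A \ b             # solve the linear system A*x = b

using Plots
xs = 0:0.1:2π
plot(xs, sin.(xs))    # a quick plot of sin(x)
```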

If possible, try to install Julia on your laptop beforehand using the instructions at the above link. Failing that, you can run Julia in the cloud (see instructions above).

This won't be recorded, but you can find a video of a similar tutorial by Prof. Johnson last year (MIT only), as well as many other tutorial videos at julialang.org/learning.
