Lectures

(Topics are tentative. Code is updated on t-square.)

  • 8/21, 8/23. Gradient descent. Notes (draft)
    • to minimize f(x), take a step along the negative gradient of f (sketch below).
    • convex functions
  • 8/28, 8/30. Faster gradient descent, Newton's method. Notes
    • take a (near-)optimal step in the gradient direction (draft)
    • Newton-Raphson (draft); sketch below
      • for root-finding: x^{t+1} = x^t - \frac{f(x^t)}{f'(x^t)}
      • for optimization: x^{t+1} = x^t - (\nabla^2 f(x^t))^{-1}\nabla f(x^t)
  • 9/6, 9/11. Projected (proximal) gradient descent. Notes (draft)
    • take a step along the negative gradient; project to the nearest feasible point (sketch below).
  • 9/13, 9/18, 9/25, 9/27. Solving linear systems
  • 10/2, 10/4, 10/11. Singular Value Decomposition: A = UDV^T (sketch below).
  • 10/16, 10/18, 10/25, 10/30. Linear programming (duality, simplex)
  • Linear and Convex programming (ellipsoid, cutting plane, interior point)
    • maintain the minimum-volume ellipsoid containing the feasible half of the previous ellipsoid. Notes (draft); sketch below
    • separation oracles; maintain a feasible polyhedron by adding a cutting plane through each violated point. Notes (draft)
    • add a smooth convex function to the objective; minimize using Newton steps. Notes (draft); sketch below
  • Learning halfspaces: perceptron
    • add each violated (misclassified) example to the weight vector (sketch below)
  • Matchings (maximum, weighted)
    • find an augmenting path; augment (sketch below)
    • use linear program
    • use cutting plane method: add violated blossom inequality; repeat
  • Random walks, Brownian motion
    • take a random step from the current point (sketch below)
    • take infinitesimal random step
  • Online optimization
    • add random perturbation to objective, use offline optimization
    • use gradient descent (sketch below)
  • Stochastic gradient descent
    • add a random perturbation to the objective (various options); sketch below
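
Code sketches

The sketches below are minimal illustrations of the update rules in the schedule, not the course code posted on t-square; all function names, step sizes, and iteration counts are illustrative choices.

Gradient descent: to minimize f, repeatedly step against its gradient, assuming the gradient is available as a Python function.

```python
import numpy as np

def gradient_descent(grad_f, x0, eta=0.1, steps=100):
    """Minimize f by repeatedly stepping along the negative gradient of f."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - eta * grad_f(x)   # move against the gradient
    return x

# Example: f(x) = ||x - 1||^2 has gradient 2(x - 1); the minimizer is the all-ones vector.
x_min = gradient_descent(lambda x: 2 * (x - 1.0), np.zeros(3))
```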
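Newton-Raphson, in both forms from the schedule; the one-dimensional root-finding example is illustrative.

```python
import numpy as np

def newton_root(f, fprime, x, steps=20):
    """Root-finding: x <- x - f(x)/f'(x)."""
    for _ in range(steps):
        x = x - f(x) / fprime(x)
    return x

def newton_minimize(grad, hess, x, steps=20):
    """Optimization: x <- x - (Hessian of f)^{-1} (gradient of f)."""
    for _ in range(steps):
        x = x - np.linalg.solve(hess(x), grad(x))
    return x

# Example: the positive root of x^2 - 2 is sqrt(2).
r = newton_root(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```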
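Projected gradient descent, assuming the projection onto the feasible set is supplied as a function (here, rescaling onto the unit ball).

```python
import numpy as np

def projected_gradient_descent(grad_f, project, x0, eta=0.1, steps=100):
    """Step along the negative gradient, then project to the nearest feasible point."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = project(x - eta * grad_f(x))
    return x

# Example: minimize ||x - c||^2 over the unit ball; the projection rescales long vectors.
c = np.array([3.0, 0.0])
project_ball = lambda x: x / max(1.0, np.linalg.norm(x))
x_min = projected_gradient_descent(lambda x: 2 * (x - c), project_ball, np.zeros(2))
```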
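The factorization A = UDV^T, checked numerically on an arbitrary small matrix.

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 3.0], [0.0, 2.0]])
U, d, Vt = np.linalg.svd(A, full_matrices=False)   # d holds the singular values
assert np.allclose(A, U @ np.diag(d) @ Vt)         # A = U D V^T
```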
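The ellipsoid method for feasibility of Ax <= b, which also illustrates the separation-oracle view: a violated row of A serves as the cutting plane. The update formulas are the standard central-cut ones; the starting radius and iteration cap are arbitrary, and the sketch assumes dimension n >= 2.

```python
import numpy as np

def ellipsoid_feasibility(A, b, R=10.0, steps=1000):
    """Find x with Ax <= b, assuming the feasible set lies in a ball of radius R.
    Maintain an ellipsoid {x : (x - c)^T P^{-1} (x - c) <= 1} containing the
    feasible region; each violated constraint cuts it in half, and we replace it
    by the minimum-volume ellipsoid containing the feasible half."""
    n = A.shape[1]
    c = np.zeros(n)             # ellipsoid center
    P = (R ** 2) * np.eye(n)    # ellipsoid shape matrix (starts as a ball)
    for _ in range(steps):
        violated = np.nonzero(A @ c > b)[0]
        if violated.size == 0:
            return c            # the center satisfies every constraint
        g = A[violated[0]]      # separation oracle: a violated constraint's normal
        gt = (P @ g) / np.sqrt(g @ P @ g)
        c = c - gt / (n + 1)                                            # shift center
        P = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(gt, gt))
    return None                 # no feasible center found within the step budget

# Example: x1 >= 1, x2 >= 1, x1 + x2 <= 3.
x = ellipsoid_feasibility(np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]]),
                          np.array([-1.0, -1.0, 3.0]))
```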
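An interior-point sketch for min c.x subject to Ax <= b: add the smooth convex log-barrier -sum_i log(b_i - a_i.x) to the objective and minimize with Newton steps, increasing the weight t on the original objective each round. The starting point must be strictly feasible; t, mu, and the iteration counts are illustrative, and the only safeguard here is backtracking to stay inside the polytope.

```python
import numpy as np

def barrier_method(c, A, b, x, t=1.0, mu=2.0, outer=30, inner=10):
    """Minimize t*c.x - sum_i log(b_i - a_i.x) by Newton steps, then increase t.
    x must start strictly feasible (Ax < b)."""
    for _ in range(outer):
        for _ in range(inner):
            s = b - A @ x                            # slacks, kept strictly positive
            grad = t * c + A.T @ (1.0 / s)           # gradient of the barrier objective
            H = A.T @ ((1.0 / s ** 2)[:, None] * A)  # its Hessian
            step = np.linalg.solve(H, grad)
            alpha = 1.0
            while np.any(b - A @ (x - alpha * step) <= 0):
                alpha *= 0.5                         # backtrack to stay strictly feasible
            x = x - alpha * step
        t *= mu                                      # weight the true objective more
    return x

# Example: minimize x1 + x2 over the box 0 <= x <= 1, from the interior point (0.5, 0.5).
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.array([1.0, 1.0, 0.0, 0.0])
x_opt = barrier_method(np.array([1.0, 1.0]), A, b, np.array([0.5, 0.5]))
```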
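The perceptron for learning a halfspace: whenever an example is violated (classified on the wrong side), add it, signed by its label, to the weight vector. Labels are assumed to be +-1.

```python
import numpy as np

def perceptron(X, y, max_passes=100):
    """Find w with sign(w . x_i) = y_i for all i, if the data is separable."""
    w = np.zeros(X.shape[1])
    for _ in range(max_passes):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:   # violated example
                w = w + yi * xi      # add it to the weight vector
                mistakes += 1
        if mistakes == 0:
            return w                 # every example is now classified correctly
    return w

# Example: points separable by the sign of their first coordinate.
X = np.array([[2.0, 1.0], [1.0, -1.0], [-1.0, 0.5], [-2.0, -1.0]])
w = perceptron(X, np.array([1, 1, -1, -1]))
```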
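Augmenting-path matching in its simplest case, maximum matching in a bipartite graph; the weighted, LP, and blossom variants from the lecture are not sketched. Here adj[u] lists the right-side neighbors of left vertex u.

```python
def max_bipartite_matching(adj, n_right):
    """Maximum bipartite matching: repeatedly find an augmenting path and flip it."""
    match = [-1] * n_right                    # match[v] = left partner of v, or -1

    def augment(u, seen):
        """DFS for an augmenting path starting at left vertex u."""
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                # use v if it is free, or if its current partner can move elsewhere
                if match[v] == -1 or augment(match[v], seen):
                    match[v] = u
                    return True
        return False

    size = sum(1 for u in range(len(adj)) if augment(u, set()))
    return size, match

# Example: 3 left vertices, 3 right vertices; a perfect matching exists.
size, match = max_bipartite_matching([[0, 1], [0], [1, 2]], 3)   # size == 3
```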
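Random walks: a discrete walk takes a random step from the current point; Brownian motion is its limit under infinitesimal steps, simulated here by Gaussian increments of variance dt. Both step distributions are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_walk(x0, steps=1000):
    """Take a uniformly random +-1 step in each coordinate from the current point."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + rng.choice([-1.0, 1.0], size=x.shape)
    return x

def brownian_path(x0, T=1.0, n=1000):
    """Approximate Brownian motion on [0, T]: Gaussian steps of variance dt = T/n."""
    dt = T / n
    increments = rng.normal(scale=np.sqrt(dt), size=(n, len(x0)))
    return np.asarray(x0) + np.cumsum(increments, axis=0)
```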
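Online (projected) gradient descent: each round, play the current point, observe that round's loss, and take a gradient step. The stream of quadratic losses is illustrative; the perturbed-leader variant from the first sub-bullet is not sketched.

```python
import numpy as np

def online_gradient_descent(loss_grads, x0, eta=0.1, project=lambda x: x):
    """Each round t: play x_t, observe the loss gradient, take a projected step."""
    x = np.asarray(x0, dtype=float)
    plays = []
    for grad_t in loss_grads:        # grad_t(x) is the gradient of round t's loss at x
        plays.append(x.copy())
        x = project(x - eta * grad_t(x))
    return plays

# Example: alternating quadratic losses f_t(x) = ||x - c_t||^2.
centers = [np.array([1.0, 0.0]), np.array([0.0, 1.0])] * 50
plays = online_gradient_descent([lambda x, c=c: 2 * (x - c) for c in centers], np.zeros(2))
```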
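Stochastic gradient descent for an objective that is a sum of terms: each step follows the gradient of one randomly sampled term, a cheap randomized estimate of the full gradient. The least-squares example and the step size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd(grad_term, n_terms, x0, eta=0.01, steps=5000):
    """Each step, sample one term i at random and step along its negative gradient."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        i = rng.integers(n_terms)          # random term = noisy gradient estimate
        x = x - eta * grad_term(i, x)
    return x

# Example: least squares sum_i (a_i . x - b_i)^2, with grad of term i = 2 (a_i . x - b_i) a_i.
A = rng.normal(size=(100, 3))
b = A @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=100)
x_hat = sgd(lambda i, x: 2 * (A[i] @ x - b[i]) * A[i], 100, np.zeros(3))
```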