This tutorial on gradient-based methods begins with the Newton and steepest descent methods and culminates in the Levenberg-Marquardt method. Along the way we will discuss variants such as Gauss-Newton, inexact Newton, quasi-Newton, and damped Gauss-Newton, as well as supporting techniques including line search strategies and, time permitting, trust regions. Some theory on convergence rates will be presented; however, our primary concern will be comparing and contrasting the various methods and their particular applicability to nonlinear least squares objective functions arising in parameter identification problems. Simple numerical demonstrations will be presented.
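Since the tutorial culminates in Levenberg-Marquardt applied to nonlinear least squares, a minimal sketch of that iteration may help fix ideas. This is only an illustrative implementation, not the tutorial's own demonstration; the exponential model, starting point, and damping schedule below are assumptions chosen for simplicity.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, p0, max_iter=100, tol=1e-10, lam=1e-3):
    """Minimize 0.5*||r(p)||^2 by solving (J^T J + lam*I) dp = -J^T r each step."""
    p = np.asarray(p0, dtype=float)
    r = residual(p)
    cost = 0.5 * r @ r
    for _ in range(max_iter):
        J = jacobian(p)
        g = J.T @ r
        if np.linalg.norm(g, np.inf) < tol:
            break
        A = J.T @ J + lam * np.eye(p.size)   # damped Gauss-Newton normal equations
        dp = np.linalg.solve(A, -g)
        r_new = residual(p + dp)
        cost_new = 0.5 * r_new @ r_new
        if cost_new < cost:
            # Accept the step and relax damping (behave more like Gauss-Newton).
            p, r, cost = p + dp, r_new, cost_new
            lam *= 0.5
        else:
            # Reject the step and increase damping (behave more like steepest descent).
            lam *= 2.0
    return p

# Illustrative parameter identification: fit y = a*exp(b*x) to noiseless
# synthetic data generated with (a, b) = (2.0, -1.0).
x = np.linspace(0.0, 2.0, 20)
y = 2.0 * np.exp(-1.0 * x)
res = lambda p: p[0] * np.exp(p[1] * x) - y
jac = lambda p: np.column_stack([np.exp(p[1] * x),
                                 p[0] * x * np.exp(p[1] * x)])
p_hat = levenberg_marquardt(res, jac, [1.0, 0.0])
```

The damping parameter `lam` interpolates between a Gauss-Newton step (small `lam`) and a short steepest descent step (large `lam`), which is the comparison the tutorial builds toward.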