This two-part talk will be a tutorial on gradient-based optimization methods, beginning with Newton's method and steepest descent and culminating in Levenberg-Marquardt. Along the way we will discuss variants such as Gauss-Newton, inexact Newton, and damped Gauss-Newton, as well as techniques including line search strategies and trust regions. Some theory on convergence rates will be presented; however, we will primarily be concerned with comparing and contrasting the various methods and their particular applicability to least-squares objective functions in parameter identification problems. Simple numerical demonstrations will be presented.
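As a flavor of the kind of demonstration described, a basic Levenberg-Marquardt iteration for a least-squares fit might be sketched as below. This is a minimal illustration, not material from the talk itself; the function names, damping schedule, and test problem are all invented for this sketch.

```python
import numpy as np

def levenberg_marquardt(r, J, x0, tol=1e-10, max_iter=100):
    """Minimize 0.5*||r(x)||^2 with a simple Levenberg-Marquardt loop.

    r: residual function, J: its Jacobian, x0: initial guess.
    (Illustrative sketch; a production code would use a more careful
    damping update and stopping test.)
    """
    x = np.asarray(x0, dtype=float)
    lam = 1e-3  # damping parameter interpolating GN and steepest descent
    for _ in range(max_iter):
        res = r(x)
        Jx = J(x)
        g = Jx.T @ res                      # gradient of 0.5*||r||^2
        if np.linalg.norm(g) < tol:
            break
        # Damped Gauss-Newton step: (J^T J + lam*I) dx = -J^T r
        A = Jx.T @ Jx
        dx = np.linalg.solve(A + lam * np.eye(len(x)), -g)
        if 0.5 * np.sum(r(x + dx) ** 2) < 0.5 * np.sum(res ** 2):
            x = x + dx
            lam = max(lam / 10.0, 1e-12)    # step accepted: reduce damping
        else:
            lam *= 10.0                     # step rejected: increase damping
    return x

# Parameter identification example: fit y = a*exp(b*t) to noiseless
# synthetic data generated with a = 2, b = -1.
t = np.linspace(0.0, 2.0, 20)
y = 2.0 * np.exp(-1.0 * t)
r = lambda p: p[0] * np.exp(p[1] * t) - y
J = lambda p: np.column_stack([np.exp(p[1] * t),
                               p[0] * t * np.exp(p[1] * t)])
p = levenberg_marquardt(r, J, np.array([1.0, 0.0]))
```

The damping parameter `lam` is what distinguishes Levenberg-Marquardt from plain Gauss-Newton: large `lam` biases the step toward steepest descent, small `lam` recovers the Gauss-Newton step.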