Local optimizers are a class of optimization methods suited to situations where the initial values are a good estimate of the optimum or the parameter range is small. CST Studio Suite® contains several local optimizers, each suited to a different problem type:
- Trust Region Framework: This is a powerful local optimizer that builds a linear model of the primary data in a "trust" region around the starting point. The optimum of that model is used as the new starting point, and the process repeats until it converges to an accurate model of the data. The Trust Region Framework can take advantage of S-parameter sensitivity information to reduce the number of simulations needed and speed up the optimization, and it is the most robust of the optimization algorithms.
- Suitable for: General optimization, especially on models with sensitivity information.
- Nelder Mead Simplex Algorithm: This is also a local optimization technique; it uses multiple points distributed across the parameter space to find the optimum. The Nelder Mead Simplex Algorithm is less dependent on the starting point than most local optimizers.
- Suitable for: Complex problem domains with relatively few parameters, systems without a good initial model.
- Interpolated Quasi Newton: This local optimizer uses interpolation to approximate the gradient of the goal function over the parameter space, giving it fast convergence.
- Suitable for: Computationally demanding models.
- Classic Powell: A simple, robust local optimizer for single-parameter problems. Although slower than the Interpolated Quasi Newton, it can sometimes be more accurate.
- Suitable for: Single-variable optimization.
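To make the simplex idea above concrete, here is a minimal pure-Python sketch of the Nelder Mead algorithm. This is an illustration of the general technique, not CST Studio Suite's implementation; the reflection/expansion/contraction/shrink coefficients (1, 2, 0.5, 0.5) are the standard textbook values, and the quadratic goal function is a made-up example.

```python
def nelder_mead(f, x0, step=0.5, iters=200):
    """Minimal Nelder-Mead simplex minimizer (illustrative sketch only)."""
    n = len(x0)
    # Initial simplex: the starting point plus one offset vertex per parameter.
    simplex = [list(x0)] + [
        [x0[j] + (step if j == i else 0.0) for j in range(n)] for i in range(n)
    ]
    for _ in range(iters):
        simplex.sort(key=f)                      # best vertex first
        best, worst = simplex[0], simplex[-1]
        # Centroid of all vertices except the worst.
        c = [sum(v[i] for v in simplex[:-1]) / n for i in range(n)]
        xr = [c[i] + (c[i] - worst[i]) for i in range(n)]            # reflection
        if f(xr) < f(best):
            xe = [c[i] + 2.0 * (c[i] - worst[i]) for i in range(n)]  # expansion
            simplex[-1] = xe if f(xe) < f(xr) else xr
        elif f(xr) < f(simplex[-2]):
            simplex[-1] = xr
        else:
            xc = [c[i] + 0.5 * (worst[i] - c[i]) for i in range(n)]  # contraction
            if f(xc) < f(worst):
                simplex[-1] = xc
            else:
                # Shrink the whole simplex toward the best vertex.
                simplex = [best] + [
                    [(best[i] + v[i]) / 2.0 for i in range(n)] for v in simplex[1:]
                ]
    return min(simplex, key=f)

# Hypothetical quadratic goal with known minimum at (1, -2).
goal = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
best = nelder_mead(goal, [4.0, 4.0])
```

Note that the method needs only goal-function values, never derivatives, which is why it tolerates noisy or non-smooth problem domains better than gradient-based local optimizers.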
For more complex parameter spaces, or when there is no clear starting value, global optimizers can be a better choice than local techniques, as they are more likely to find the overall optimum rather than a local minimum.
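To contrast with the local methods above, the following sketch shows one common global technique, differential evolution, on a deliberately multimodal test function. Again this is a generic pure-Python illustration, not CST's implementation; the population size and the mutation/crossover constants `F` and `CR` are typical textbook choices.

```python
import math
import random

def differential_evolution(f, bounds, pop_size=25, F=0.8, CR=0.9, gens=200, seed=1):
    """Minimal differential evolution sketch: mutate, crossover, select."""
    rng = random.Random(seed)
    n = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # Pick three distinct other members to build the mutant vector.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(n)            # guarantee one mutated parameter
            trial = []
            for j in range(n):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)      # clip to the search range
                else:
                    v = pop[i][j]
                trial.append(v)
            ts = f(trial)
            if ts <= scores[i]:                  # greedy one-to-one selection
                pop[i], scores[i] = trial, ts
    i_best = min(range(pop_size), key=scores.__getitem__)
    return pop[i_best], scores[i_best]

# Rastrigin function in 1-D: many local minima, global minimum of 0 at x = 0.
rastrigin = lambda x: 10.0 + x[0] ** 2 - 10.0 * math.cos(2.0 * math.pi * x[0])
x_best, f_best = differential_evolution(rastrigin, [(-5.12, 5.12)])
```

A local optimizer started near one of the Rastrigin function's many local minima would stop there; the population-based search samples the whole bounded range, which is why global methods are preferred when no good starting point is known.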