To perform an optimization, you can use one of the following algorithms:
- Simulated Annealing Algorithm (global algorithm)
- Local Algorithm for Constraints and Priorities
- Algorithm for Constraints and Derivatives Providers
- Gradient Algorithm Without Constraint
- Gradient Algorithm With Constraint(s)
You can define your own algorithms. For an example, see the CAAOptimization interfaces.
The Local Algorithm for Constraints and Priorities takes constraint priorities into account.

A good way to quickly reach a solution when using Simulated Annealing is to specify a low number of consecutive updates without improvement (15 or 20). Conversely, to broaden the search, increase the number of consecutive updates without improvement, together with the time limit and the total number of updates.
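The tuning advice above can be illustrated with a minimal, self-contained simulated annealing sketch (not the PEO implementation; all names and parameter values are illustrative) in which the run stops after a fixed number of consecutive updates without improvement:

```python
import math
import random

def simulated_annealing(objective, x0, step=0.5, temp=1.0, cooling=0.95,
                        max_no_improve=20, max_updates=1000):
    """Minimize `objective`, stopping after `max_no_improve` consecutive
    updates without improvement.  A low value (15 or 20) converges quickly;
    raise it, together with `max_updates`, to search more widely."""
    random.seed(0)  # deterministic for the example
    x = best_x = x0
    f = best_f = objective(x0)
    no_improve = 0
    for _ in range(max_updates):
        candidate = x + random.uniform(-step, step)
        fc = objective(candidate)
        # Metropolis rule: always accept improvements; accept worse
        # solutions with a temperature-dependent probability.
        if fc < f or random.random() < math.exp((f - fc) / temp):
            x, f = candidate, fc
        if f < best_f:
            best_x, best_f = x, f
            no_improve = 0
        else:
            no_improve += 1
            if no_improve >= max_no_improve:
                break  # termination criterion reached
        temp *= cooling
    return best_x, best_f

# Multimodal test function; the random walk can escape local optima.
best_x, best_f = simulated_annealing(lambda x: x * x + 2 * math.sin(5 * x) + 2, 3.0)
```

With a small `max_no_improve`, the run terminates soon after progress stalls; enlarging it (together with `max_updates`) trades speed for a wider search, as described above.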
The Algorithm for Constraints and Derivatives Providers can handle objectives providing derivatives (such as analysis global sensors) in conjunction with constraints providing multiple values and derivatives (such as analysis local sensors). With this algorithm, additional termination criteria can be specified:

- Absolute Objective: the absolute objective variation under which the algorithm must stop: Obj.Max - Obj.Min < user termination criterion.
- Relative Objective: the relative objective variation under which the algorithm must stop: the variation Obj.Max - Obj.Min, normalized by the objective magnitude, compared with the user termination criterion.
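The two objective-variation criteria can be sketched as a simple check over the recent objective values (a hypothetical helper; the normalization used for the relative criterion is an assumption, as the exact formula is not reproduced here):

```python
def objective_variation_stop(history, abs_criterion, rel_criterion):
    """Return True when the objective variation over the recent updates in
    `history` falls under either user termination criterion."""
    obj_max, obj_min = max(history), min(history)
    variation = obj_max - obj_min          # Obj.Max - Obj.Min
    absolute_stop = variation < abs_criterion
    # Assumed normalization for the relative criterion: |Obj.Max|.
    relative_stop = obj_max != 0 and variation < rel_criterion * abs(obj_max)
    return absolute_stop or relative_stop

# Objective values almost flat relative to the criterion -> stop.
stop = objective_variation_stop([10.000, 10.002, 10.001],
                                abs_criterion=0.01, rel_criterion=0.0)
# stop is True
```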
The Gradient algorithm is intended to perform a local search. Based on the calculation of a local slope of the objective function, this algorithm either uses a parabolic approximation and jumps to its minimum, or uses an iterated exponential step descent in the direction of the minimum.
If the properties of the objective function are known (continuous, differentiable at all points), the gradient can be used directly. It is usually faster than the Simulated Annealing algorithm.
You can choose to run the Gradient algorithm with or without constraints.
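The strategy described above, a parabolic jump when the local curvature allows it and a shrinking exponential step descent otherwise, can be sketched in one dimension as follows (illustrative only, not the PEO implementation):

```python
def parabolic_descent(objective, x, h=0.5, steps=50, tol=1e-8):
    """Estimate the local slope by central differences, fit a parabola
    through three nearby samples and jump to its minimum; fall back to a
    shrinking (exponential) downhill step when the parabola has no minimum."""
    for _ in range(steps):
        f0 = objective(x)
        f_plus, f_minus = objective(x + h), objective(x - h)
        slope = (f_plus - f_minus) / (2 * h)            # central-difference slope
        curvature = (f_plus - 2 * f0 + f_minus) / h**2  # parabolic curvature
        if abs(slope) < tol:
            break                                       # local optimum reached
        if curvature > 0:
            x -= slope / curvature                      # jump to the parabola minimum
        else:
            x -= h * (1 if slope > 0 else -1)           # step downhill...
            h *= 0.5                                    # ...with a shrinking step
    return x
```

On a quadratic objective, the parabolic jump lands on the minimum in a single step, which is why the gradient family is usually faster than Simulated Annealing on smooth functions.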
| Simulated Annealing (Global Algorithm) | Gradient (Local Algorithm): Local Algorithm for Constraints & Priorities, Algorithm for Constraints and Derivatives Providers, Gradient Algorithm Without Constraint, Gradient Algorithm With Constraint(s) |
|---|---|
| Equality constraints allowed. Precision is managed as a termination criterion: x == A is considered fulfilled when the difference between x and A falls within the precision. | Equality constraints not allowed, except for the Algorithm for Constraints and Derivatives Providers. |
| Can be run with constraints and no objective function. | Must be run with an objective function. |
The Gradient algorithms, the Simulated Annealing algorithm, and the Algorithm for Constraints & Derivatives Providers can be run in three or four different configurations, depending on the algorithm.
| Options | Gradient Algorithm | Simulated Annealing Algorithm | Algorithm for Constraints & Derivatives Providers |
|---|---|---|---|
| Slow | Slow evolution based on the steepest descent algorithm and on steps. Good precision (to be used to find convergence). | These four configurations define the level of acceptance of bad solutions. If the problem has many local optima, use Slow. If the problem has no local optima, use Infinite (Hill climbing). | |
| Medium | A randomly restarted conjugate gradient. | | |
| Fast | Search jumps from minimum to minimum using quadratic interpolation. Fast evolution, less precision. | | |
| Infinite (Hill climbing) | - | | |
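The "level of acceptance of bad solutions" that these configurations control can be illustrated with a Metropolis-style acceptance rule (the mapping from speed settings to temperatures is an assumption for illustration, not PEO's internal values):

```python
import math
import random

# Illustrative temperatures: Slow accepts many bad solutions (good for
# escaping local optima); Infinite (hill climbing) accepts none.
TEMPERATURES = {"Slow": 1.0, "Medium": 0.3, "Fast": 0.1, "Infinite": 0.0}

def accept(current_f, candidate_f, speed, rng=random):
    """Decide whether to accept a candidate solution under a given speed setting."""
    if candidate_f <= current_f:
        return True                 # improvements are always accepted
    temp = TEMPERATURES[speed]
    if temp == 0.0:
        return False                # hill climbing: reject any worsening
    # Worse solutions are accepted with a temperature-dependent probability.
    return rng.random() < math.exp((current_f - candidate_f) / temp)
```

For example, a slightly worse candidate is usually accepted under `"Slow"` but almost never under `"Fast"`, which is why Slow is the safer choice on problems with many local optima.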
Termination criteria are used to stop the optimizer. When an optimization is running, a message box displays the data of each update. You can stop the computation by clicking Stop. If you do not intervene, the computation stops as soon as one of the termination criteria is reached.

The termination criteria to be specified are:

- the total number of updates
- the number of consecutive updates without improvement
- the maximum computation time
Simulated Annealing always runs until one of the termination criteria is reached. The Gradient algorithm can stop earlier if a local optimum is reached.
The table below lists the available algorithms along with the functions they support.
| | Local Algorithm for Constraints and Priorities | Simulated Annealing Algorithm | Algorithm for Constraints and Derivatives Providers | Gradient without Constraints | Gradient with Constraints |
|---|---|---|---|---|---|
| Inequality constraints | Yes | Yes | Yes | No | Yes |
| Equality constraints | No | Yes | Yes | No | No |
| Priorities on constraints | Yes | No | No | No | No |
| Derivatives providers | No | No | Yes | No | No |
| Non-differentiable objectives or constraints | No | Yes | No | No | No |
From time to time, the PEO algorithms are updated to enhance their performance. A side effect of these modifications is a change in the behavior of the optimization process. It is however possible to revert to previous versions of the algorithms by using environment variables (see the table below).
| Variable Name | Variable Values |
|---|---|
| GradientVersion | r8Sp3, r9sp4, r10sp1, r11ga |
| SAVersion | r8sp3, r13ga, r14ga, r14sp1 |
| GradientWithCstVersion | r8sp4, r11ga, r13ga, r14ga, r14sp1, r14sp2, r15ga, r16ga* |

\* A pop-up is displayed at the end of the optimization run when the bound of at least one free parameter is reached.