CADETProcess.optimization.TrustConstr

class CADETProcess.optimization.TrustConstr(gtol, xtol, barrier_tol, initial_tr_radius, initial_constr_penalty, initial_barrier_parameter, initial_barrier_tolerance, factorization_method, maxiter, verbose, disp, x_tol, cv_tol, n_max_evals, n_max_iter, finite_diff_rel_step, tol, jac, progress_frequency, f_tol, similarity_tol, parallelization_backend)

Wrapper for the trust-constr optimization method from the scipy optimization suite.

The solver options are collected in the options attribute as a dictionary. A minimal usage sketch follows the list of supported constraint types below.

Supports:
  • Linear constraints.

  • Linear equality constraints.

  • Nonlinear constraints.

  • Bounds.
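A minimal sketch of how these constraint types are attached to an OptimizationProblem before handing it to TrustConstr. The variable names, bounds, objective, and constraint values are illustrative assumptions, and the add_linear_constraint / add_nonlinear_constraint keyword names are assumed rather than confirmed by this page:

    from CADETProcess.optimization import OptimizationProblem, TrustConstr

    # Hypothetical two-variable problem; names, bounds, and functions are illustrative.
    optimization_problem = OptimizationProblem('trust_constr_demo')
    optimization_problem.add_variable('x0', lb=0, ub=10)  # bounds on each variable
    optimization_problem.add_variable('x1', lb=0, ub=10)

    def objective(x):
        # Simple quadratic objective, assumed here for illustration.
        return (x[0] - 2) ** 2 + (x[1] - 1) ** 2

    optimization_problem.add_objective(objective)

    # Linear constraint: 1*x0 + 1*x1 <= 5 (assuming the lhs/b keywords and lhs·x <= b convention).
    optimization_problem.add_linear_constraint(['x0', 'x1'], lhs=[1, 1], b=5)

    def nonlincon(x):
        # Nonlinear constraint function g(x); feasible when g(x) <= bounds.
        return x[0] ** 2 - x[1]

    optimization_problem.add_nonlinear_constraint(nonlincon, bounds=0)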

Parameters:
gtol : UnsignedFloat, optional

Tolerance for termination by the norm of the Lagrangian gradient. The algorithm will terminate when both the infinity norm (i.e., max abs value) of the Lagrangian gradient and the constraint violation are smaller than gtol. Default is 1e-8.

xtol : UnsignedFloat, optional

Tolerance for termination by the change of the independent variable. The algorithm will terminate when tr_radius < xtol, where tr_radius is the radius of the trust region used in the algorithm. Default is 1e-8.

barrier_tol : UnsignedFloat, optional

Threshold on the barrier parameter for the algorithm termination. When inequality constraints are present, the algorithm will terminate only when the barrier parameter is less than barrier_tol. Default is 1e-8.

initial_tr_radius : float, optional

Initial trust radius. The trust radius gives the maximum distance between solution points in consecutive iterations. It reflects the trust the algorithm puts in the local approximation of the optimization problem. For an accurate local approximation, the trust-region should be large, and for an approximation valid only close to the current point, it should be a small one. The trust radius is automatically updated throughout the optimization process, with initial_tr_radius being its initial value. Default is 1.

initial_constr_penalty : float, optional

Initial constraints penalty parameter. The penalty parameter is used for balancing the requirements of decreasing the objective function and satisfying the constraints. It is used for defining the merit function: merit_function(x) = fun(x) + constr_penalty * constr_norm_l2(x), where constr_norm_l2(x) is the l2 norm of a vector containing all the constraints. The merit function is used for accepting or rejecting trial points, and constr_penalty weights the two conflicting goals of reducing the objective function and constraints. The penalty is automatically updated throughout the optimization process, with initial_constr_penalty being its initial value. Default is 1.
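Written out as a formula, with rho denoting constr_penalty and c(x) the vector of stacked constraint values, the merit function described above is:

    \phi(x) = f(x) + \rho \, \lVert c(x) \rVert_2 , \qquad \rho = \text{constr\_penalty}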

initial_barrier_parameter : float, optional

Initial barrier parameter. Used only when inequality constraints are present. For dealing with optimization problems min_x f(x) subject to inequality constraints c(x) <= 0, the algorithm introduces slack variables, solving the problem min_(x, s) f(x) - barrier_parameter * sum(ln(s)) subject to the equality constraints c(x) + s = 0 instead of the original problem. This subproblem is solved for decreasing values of barrier_parameter and with decreasing tolerances for the termination, starting with initial_barrier_parameter for the barrier parameter. Default is 0.1.
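In symbols, with mu denoting barrier_parameter and s the slack variables, the barrier subproblem described above is:

    \min_{x,\, s} \; f(x) - \mu \sum_i \ln(s_i)
    \quad \text{subject to} \quad c(x) + s = 0, \; s > 0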

initial_barrier_tolerance : float, optional

Initial tolerance for the barrier subproblem. Used only when inequality constraints are present. For dealing with optimization problems min_x f(x) subject to inequality constraints c(x) <= 0, the algorithm introduces slack variables, solving the problem min_(x, s) f(x) - barrier_parameter * sum(ln(s)) subject to the equality constraints c(x) + s = 0 instead of the original problem. This subproblem is solved for decreasing values of barrier_parameter and with decreasing tolerances for the termination, starting with initial_barrier_tolerance for the barrier tolerance. Default is 0.1.

factorization_method : str or None, optional

Method to factorize the Jacobian of the constraints. Use None (default) for auto selection, or one of ‘NormalEquation’, ‘AugmentedSystem’, ‘QRFactorization’, or ‘SVDFactorization’. The methods ‘NormalEquation’ and ‘AugmentedSystem’ can be used only with sparse constraints, while ‘QRFactorization’ and ‘SVDFactorization’ can be used only with dense constraints. Default is None.

maxiter : UnsignedInteger, optional

Maximum number of algorithm iterations. Default is 1000.

verbose : UnsignedInteger, optional

Level of the algorithm’s verbosity: 0 (default) for silent operation, 1 for a termination report, 2 for progress during iterations, and 3 for a more complete progress report.

disp : Bool, optional

If True, then verbose will be set to 1 if it was 0. Default is False.
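The parameters above are also exposed as attributes of the optimizer instance (see the Attributes list below), so they can be set after construction. A hedged configuration sketch; the chosen values are arbitrary examples:

    from CADETProcess.optimization import TrustConstr

    optimizer = TrustConstr()
    optimizer.gtol = 1e-6     # relax the Lagrangian-gradient tolerance
    optimizer.xtol = 1e-10    # tighten the trust-radius tolerance
    optimizer.maxiter = 500   # cap the number of iterations
    optimizer.verbose = 2     # report progress during iterations
    # 'AugmentedSystem' is only valid for sparse constraint Jacobians (see above).
    optimizer.factorization_method = 'AugmentedSystem'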

Attributes:
aggregated_parameters

dict: Aggregated parameters of the instance.

barrier_tol
cv_tol
disp
f_tol
factorization_method
finite_diff_rel_step
gtol
initial_barrier_parameter
initial_barrier_tolerance
initial_constr_penalty
initial_tr_radius
jac
maxiter
missing_parameters

list: Parameters that are required but not set.

n_cores

int: Proxy to the number of cores used by the parallelization backend.

n_max_evals
n_max_iter
options

dict: Optimizer options.

parallelization_backend
parameters

dict: Parameters of the instance.

polynomial_parameters

dict: Polynomial parameters of the instance.

progress_frequency
required_parameters

list: Parameters that have no default value.

similarity_tol
sized_parameters

dict: Sized parameters of the instance.

specific_options

dict: Optimizer specific options.

tol
verbose
x_tol
xtol

Methods

check_optimization_problem(optimization_problem)

Check if problem is configured correctly and supported by the optimizer.

check_required_parameters()

Verify if all required parameters are set.

check_x0(optimization_problem, x0)

Check the initial guess x0 for an optimization problem.

get_bounds(optimization_problem)

Return the optimized bounds of a given optimization_problem as a Bounds object.

get_constraint_objects(optimization_problem)

Return constraints as objects.

get_lincon_obj(optimization_problem)

Return the linear constraints as an object.

get_lineqcon_obj(optimization_problem)

Return the linear equality constraints as an object.

get_nonlincon_obj(optimization_problem)

Return the optimized nonlinear constraints as an object.

optimize(optimization_problem[, x0, ...])

Solve OptimizationProblem.

run(optimization_problem[, x0])

Solve the optimization problem using any of the scipy methods.

run_post_processing(X_transformed, ...[, ...])

Run post-processing of generation.

run_final_processing
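Putting the pieces together, a minimal end-to-end sketch, continuing the illustrative problem and optimizer configured in the sketches above; the x0 value and the attributes of the returned results object (x, f) are assumptions, not confirmed by this page:

    # Solve the problem; x0 is an (assumed) feasible initial guess.
    optimization_results = optimizer.optimize(optimization_problem, x0=[1.0, 1.0])
    print(optimization_results.x)  # best decision variables found (assumed attribute)
    print(optimization_results.f)  # corresponding objective value(s) (assumed attribute)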