Evaluation Toolchains

In the context of CADET-Process, “Evaluation Toolchains” refer to a sequence of preprocessing steps that are necessary to calculate performance indicators for a process, followed by the calculation of the objective function in an optimization problem. The toolchains involve two types of objects: evaluation objects and evaluators.

Evaluation Objects

In the context of CADET-Process, optimization variables usually represent attributes of a Process, such as model parameter values or event times. However, fractionation times of the Fractionator, as well as attributes of custom evaluation objects, can also be used as optimization variables.

An evaluation object is an object that manages the value of an optimization variable in an optimization problem. It acts as an interface between the optimization problem and the object whose attribute(s) need to be optimized. The evaluation object provides the optimization problem with the current value of the optimization variable, and when the optimization problem changes the value of the optimization variable, the evaluation object updates the attribute(s) of the associated object accordingly.

[Figure: a single evaluation object linking an optimization variable to a process parameter]

For this purpose, the evaluation object must implement a parameter property that returns a (potentially nested) dictionary with the current values of all model parameters, as well as a setter for that property.

Note

Currently, custom evaluation objects also need to provide a property for polynomial_parameters. This will be improved in a future release.
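A minimal sketch of what such a custom evaluation object could look like, assuming the property is named `parameters` as described above. The class name and the `rate` attribute are hypothetical and chosen only for illustration:

```python
class MyEvaluationObject:
    """Toy evaluation object exposing one tunable attribute via `parameters`."""

    def __init__(self):
        self.rate = 1.0  # hypothetical model parameter

    @property
    def parameters(self):
        # Return a (potentially nested) dict with current parameter values.
        return {'rate': self.rate}

    @parameters.setter
    def parameters(self, parameters):
        # Update attributes from the dict provided by the optimization problem.
        if 'rate' in parameters:
            self.rate = parameters['rate']

    @property
    def polynomial_parameters(self):
        # Currently also required; this toy object has no polynomial parameters.
        return {}
```

With this interface in place, the optimization problem can read and write the object's parameters without knowing anything else about its internals.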

To associate an optimization variable with an evaluation object, the evaluation object must first be added to the optimization problem using add_evaluation_object(). For demonstration purposes, consider a simple batch-elution example.

from CADETProcess.optimization import OptimizationProblem
optimization_problem = OptimizationProblem('evaluation_object_demo')

from examples.batch_elution.process import process
optimization_problem.add_evaluation_object(process)

Note that multiple evaluation objects can be added, which for example allows for simultaneous optimization of multiple operating conditions.

When adding variables, it is possible to specify which evaluation object(s) the variable is associated with. Moreover, the path to the parameter within the evaluation object needs to be specified.

optimization_problem.add_variable('var_0', evaluation_objects=[process], parameter_path='flow_sheet.column.total_porosity', lb=0, ub=1)
OptimizationVariable(name=var_0, evaluation_objects=['batch elution'], parameter_path=flow_sheet.column.total_porosity, lb=0, ub=1)

By default, the variable is associated with all evaluation objects. If no path is provided, the variable name is also used as the path. Hence, the variable definition can be simplified to:

optimization_problem = OptimizationProblem('evaluation_object_demo')
optimization_problem.add_evaluation_object(process)
optimization_problem.add_variable('flow_sheet.column.total_porosity', lb=0, ub=1)
OptimizationVariable(name=flow_sheet.column.total_porosity, evaluation_objects=['batch elution'], parameter_path=flow_sheet.column.total_porosity, lb=0, ub=1)

To demonstrate the flexibility of this approach, consider two evaluation objects and two optimization variables where one variable is associated with a single evaluation object, and the other with both.

[Figure: multiple evaluation objects sharing one optimization variable]
optimization_problem = OptimizationProblem('evaluation_object_demo_multi')

import copy

process_a = copy.deepcopy(process)
process_a.name = 'process_a'
process_b = copy.deepcopy(process)
process_b.name = 'process_b'
optimization_problem.add_evaluation_object(process_a)
optimization_problem.add_evaluation_object(process_b)
optimization_problem.add_variable('flow_sheet.column.total_porosity')
optimization_problem.add_variable('flow_sheet.column.length', evaluation_objects=[process_a])
OptimizationVariable(name=flow_sheet.column.length, evaluation_objects=['process_a'], parameter_path=flow_sheet.column.length, lb=-inf, ub=inf)
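The fan-out behavior described above can be sketched in plain Python. This is a hypothetical mini-implementation, not the CADET-Process API: a shared variable updates the attribute on every associated object, while a variable bound to a single object touches only that one.

```python
class Column:
    """Toy stand-in for a column model with two parameters."""
    def __init__(self):
        self.total_porosity = 0.4
        self.length = 0.1

class Proc:
    """Toy stand-in for a process holding a column."""
    def __init__(self):
        self.column = Column()

def set_parameter(evaluation_objects, path, value):
    """Set a dotted-path attribute on every associated object."""
    for obj in evaluation_objects:
        target = obj
        *parents, attr = path.split('.')
        for name in parents:
            target = getattr(target, name)
        setattr(target, attr, value)

process_a, process_b = Proc(), Proc()

# 'total_porosity' is associated with both evaluation objects:
set_parameter([process_a, process_b], 'column.total_porosity', 0.35)
# 'length' is associated only with process_a:
set_parameter([process_a], 'column.length', 0.2)
```

After these calls, both processes share the new porosity, but only `process_a` has the new column length; `process_b.column.length` keeps its original value.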

Evaluators

In many cases, it is necessary to perform preprocessing steps before evaluating the objective function in an optimization problem. For example, to calculate performance indicators of a process, several steps may be required, such as simulating the process until stationarity is reached, determining fractionation times under purity constraints, and calculating objective function values based on productivity and yield recovery.

To implement these evaluation toolchains, CADET-Process provides a mechanism to add Evaluators to an OptimizationProblem which can be referenced by objective and constraint functions. Any callable function can be added as Evaluator, assuming the first argument is the result of the previous step and it returns a single result object which is then processed by the next step. Additional arguments and keyword arguments can be passed using args and kwargs when adding the Evaluator. The intermediate results are also automatically cached when different objective and constraint functions require the same preprocessing steps.
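The chaining idea can be illustrated with a short plain-Python sketch, where each step receives the result of the previous one and returns a single result that the objective finally consumes. The function names are illustrative stand-ins, not CADET-Process calls:

```python
def run_toolchain(x, evaluators, objective):
    """Feed x through a chain of evaluators, then apply the objective."""
    result = x
    for evaluator in evaluators:
        result = evaluator(result)  # each step consumes the previous result
    return objective(result)

simulate = lambda x: x + 1      # stand-in for simulating the process
fractionate = lambda r: r * 2   # stand-in for determining fractionation times
performance = lambda r: r ** 2  # objective based on the final result

value = run_toolchain(3, [simulate, fractionate], performance)  # ((3 + 1) * 2)**2
```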

[Figure: single-objective evaluation toolchain with chained evaluators]

Consider the following example:

optimization_problem = OptimizationProblem('evaluator_demo')
optimization_problem.add_variable('x')
OptimizationVariable(name=x, evaluation_objects=[], parameter_path=None, lb=-inf, ub=inf)

To add the evaluator, use add_evaluator().

def evaluator(x):
    print(f'Running evaluator with {x}')
    intermed_result = x**2
    return intermed_result

optimization_problem.add_evaluator(evaluator)

This evaluator can now be referenced when adding objectives, nonlinear constraints, or callbacks. For this purpose, add the required evaluators (in order) to the corresponding method (here, add_objective()).

def objective(intermed_result):
    print(f'Running objective with {intermed_result}')
    return intermed_result**2

optimization_problem.add_objective(objective, requires=evaluator)

When evaluating objectives, the evaluator is also called.

optimization_problem.evaluate_objectives(1)
Running evaluator with [1]
Running objective with [1]
array([1.])

Intermediate results are automatically cached such that other objectives or constraints that require the same evaluation steps do not need to recompute them.
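The effect of this caching can be sketched with simple memoization. CADET-Process handles caching internally; the snippet below only illustrates the idea that a shared preprocessing step runs once per input even when two objectives depend on it:

```python
import functools

call_count = 0  # tracks how often the expensive step actually runs

@functools.lru_cache(maxsize=None)
def evaluator(x):
    """Stand-in for an expensive preprocessing step (e.g. a simulation)."""
    global call_count
    call_count += 1
    return x ** 2

def objective_1(x):
    return evaluator(x) + 1

def objective_2(x):
    return evaluator(x) - 1

# Both objectives need evaluator(3), but it is only computed once:
results = (objective_1(3), objective_2(3))
```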