Optimization

Optimizers

Optimizer

class pymanopt.optimizers.optimizer.OptimizerResult(point: Any, cost: float, iterations: int, stopping_criterion: str, time: float, cost_evaluations: Optional[int] = None, step_size: Optional[float] = None, gradient_norm: Optional[float] = None, log: Optional[Dict] = None)[source]

Bases: object

Parameters
• point (Any) –

• cost (float) –

• iterations (int) –

• stopping_criterion (str) –

• time (float) –

• cost_evaluations (Optional[int]) –

• step_size (Optional[float]) –

• gradient_norm (Optional[float]) –

• log (Optional[Dict]) –

Return type

None

point: Any
cost: float
iterations: int
stopping_criterion: str
time: float
cost_evaluations: Optional[int] = None
step_size: Optional[float] = None
gradient_norm: Optional[float] = None
log: Optional[Dict] = None
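The result type behaves like a plain dataclass. As an illustration, a minimal stand-in mirroring the fields documented above (the field values here are made up, not the output of a real optimization):

```python
from dataclasses import dataclass
from typing import Any, Dict, Optional

# Stand-in mirroring the documented fields; the real class lives in
# pymanopt.optimizers.optimizer.
@dataclass
class OptimizerResult:
    point: Any
    cost: float
    iterations: int
    stopping_criterion: str
    time: float
    cost_evaluations: Optional[int] = None
    step_size: Optional[float] = None
    gradient_norm: Optional[float] = None
    log: Optional[Dict] = None

result = OptimizerResult(
    point=[1.0, 0.0],
    cost=0.5,
    iterations=42,
    stopping_criterion="min_gradient_norm reached",
    time=0.12,
)
print(result.cost, result.iterations)  # optional fields default to None
```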
class pymanopt.optimizers.optimizer.Optimizer(max_time=1000, max_iterations=1000, min_gradient_norm=1e-06, min_step_size=1e-10, max_cost_evaluations=5000, verbosity=2, log_verbosity=0)[source]

Bases: object

Abstract base class for Pymanopt optimizers.

Parameters
• max_time (float) – Upper bound on the run time of an optimizer in seconds.

• max_iterations (int) – The maximum number of iterations to perform.

• min_gradient_norm (float) – Termination threshold for the norm of the (Riemannian) gradient.

• min_step_size (float) – Termination threshold for the line search step size.

• max_cost_evaluations (int) – Maximum number of allowed cost function evaluations.

• verbosity (int) – Level of information printed by the optimizer while it operates: 0 is silent, 2 is most verbose.

• log_verbosity (int) – Level of information logged by the optimizer while it operates: 0 logs nothing, 1 logs information for each iteration.

abstract run(problem, *, initial_point=None, **kwargs)[source]

Run an optimizer on a given optimization problem.

Parameters
• problem – Pymanopt problem class instance exposing the cost function and the manifold to optimize over.

• initial_point – Initial point on the manifold. If no value is provided then a starting point will be randomly generated.

• **kwargs – Potential optimizer-specific keyword arguments.

Returns

The optimization result.

Return type

pymanopt.optimizers.optimizer.OptimizerResult

Conjugate Gradient

Perform optimization using the nonlinear conjugate gradient method with line search. This method first computes the gradient of the cost function, then optimizes by moving in a direction that is conjugate to all previous search directions.

Parameters
• beta_rule (str) – Conjugate gradient beta rule used to construct the new search direction. Valid choices are {"FletcherReeves", "PolakRibiere", "HestenesStiefel", "HagerZhang"}.

• orth_value – Parameter for Powell’s restart strategy. An infinite value disables this strategy.

• line_searcher – The line search method.

Notes

See [HZ2006] for details about Powell’s restart strategy.
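As an illustration of one beta rule, a Euclidean sketch of the Fletcher-Reeves update together with Powell's restart test (the Riemannian algorithm additionally transports the previous search direction between tangent spaces; function and variable names here are illustrative, not Pymanopt API):

```python
import numpy as np

def fletcher_reeves_beta(grad_new, grad_old):
    """Fletcher-Reeves rule: beta = ||g_{k+1}||^2 / ||g_k||^2."""
    return (grad_new @ grad_new) / (grad_old @ grad_old)

def next_direction(grad_new, grad_old, direction_old, orth_value=np.inf):
    # Powell restart: if successive gradients are far from orthogonal,
    # discard conjugacy and fall back to steepest descent. An infinite
    # orth_value disables the restart, matching the note above.
    if abs(grad_new @ grad_old) / (grad_new @ grad_new) >= orth_value:
        return -grad_new
    beta = fletcher_reeves_beta(grad_new, grad_old)
    return -grad_new + beta * direction_old

g_old = np.array([3.0, 4.0])
g_new = np.array([1.0, -2.0])
d = next_direction(g_new, g_old, direction_old=-g_old)
```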

run(problem, *, initial_point=None, reuse_line_searcher=False)[source]

Run the CG method.

Parameters
• problem – Pymanopt problem class instance exposing the cost function and the manifold to optimize over.

• initial_point – Initial point on the manifold. If no value is provided then a starting point will be randomly generated.

• reuse_line_searcher – Whether to reuse the previous line searcher, which allows information from a previous call to run() to be carried over.

Returns

Local minimum of the cost function, or the most recent iterate if the algorithm terminated before convergence.

Return type

pymanopt.optimizers.optimizer.OptimizerResult

Nelder-Mead

Compute the centroid of points on the manifold as the Karcher mean.

Perform optimization using the derivative-free Nelder-Mead minimization algorithm.

Parameters
• max_cost_evaluations – Maximum number of allowed cost function evaluations.

• max_iterations – Maximum number of allowed iterations.

• reflection – Determines how far to reflect away from the worst vertex: stretched (reflection > 1), compressed (0 < reflection < 1), or exact (reflection = 1).

• expansion – Factor by which to expand the reflected simplex.

• contraction – Factor by which to contract the reflected simplex.
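A schematic of how the reflection, expansion, and contraction factors act on the worst vertex of a Euclidean simplex (the Riemannian version replaces these straight-line moves with geodesic steps on the manifold; names are illustrative):

```python
import numpy as np

def nelder_mead_moves(simplex, reflection=1.0, expansion=2.0, contraction=0.5):
    """Candidate replacements for the worst vertex of a simplex in R^n.

    simplex: array of shape (n + 1, n), rows sorted from best to worst.
    """
    centroid = simplex[:-1].mean(axis=0)   # centroid of all but the worst vertex
    worst = simplex[-1]
    step = centroid - worst
    reflected = centroid + reflection * step    # reflect away from the worst
    expanded = centroid + expansion * step      # push further if reflection helped
    contracted = centroid + contraction * step  # pull back if it did not
    return reflected, expanded, contracted

simplex = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
r, e, c = nelder_mead_moves(simplex)
```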

run(problem, *, initial_point=None)[source]

Parameters
• problem – Pymanopt problem class instance exposing the cost function and the manifold to optimize over.

• initial_point – Initial point on the manifold. If no value is provided then a starting point will be randomly generated.

Returns

Local minimum of the cost function, or the most recent iterate if the algorithm terminated before convergence.

Return type

pymanopt.optimizers.optimizer.OptimizerResult

Particle Swarms

class pymanopt.optimizers.particle_swarm.ParticleSwarm(max_cost_evaluations=None, max_iterations=None, population_size=None, nostalgia=1.4, social=1.4, *args, **kwargs)[source]

Particle swarm optimization (PSO) method.

Perform optimization using the derivative-free particle swarm optimization algorithm.

Parameters
• max_cost_evaluations – Maximum number of allowed cost evaluations.

• max_iterations – Maximum number of allowed iterations.

• population_size – Size of the considered swarm population.

• nostalgia – Weight of the attraction toward the particle's own best previous position.

• social – Weight of the attraction toward the best position found by the particle's neighbors.
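The nostalgia and social weights enter the classic PSO velocity update; a Euclidean sketch (on a manifold the vector differences become logarithm maps and the position update a retraction; names and the inertia parameter are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_velocity(velocity, position, personal_best, neighborhood_best,
                 inertia=0.9, nostalgia=1.4, social=1.4):
    # nostalgia pulls toward the particle's own best point so far;
    # social pulls toward the best point seen by its neighbors.
    r1, r2 = rng.random(2)
    return (inertia * velocity
            + nostalgia * r1 * (personal_best - position)
            + social * r2 * (neighborhood_best - position))

v = pso_velocity(
    velocity=np.zeros(2),
    position=np.array([1.0, 1.0]),
    personal_best=np.array([0.5, 1.0]),
    neighborhood_best=np.array([0.0, 0.0]),
)
```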

run(problem, *, initial_point=None)[source]

Run PSO algorithm.

Parameters
• problem – Pymanopt problem class instance exposing the cost function and the manifold to optimize over.

• initial_point – Initial point on the manifold. If no value is provided then a starting point will be randomly generated.

Returns

Local minimum of the cost function, or the most recent iterate if the algorithm terminated before convergence.

Return type

pymanopt.optimizers.optimizer.OptimizerResult

Steepest Descent

class pymanopt.optimizers.steepest_descent.SteepestDescent(line_searcher=None, *args, **kwargs)[source]

Riemannian steepest descent algorithm.

Perform optimization using gradient descent with line search. This method first computes the gradient of the objective, then optimizes by moving in the direction of steepest descent (the negative of the gradient).

Parameters

line_searcher – The line search method.

run(problem, *, initial_point=None, reuse_line_searcher=False)[source]

Run steepest descent algorithm.

Parameters
• problem – Pymanopt problem class instance exposing the cost function and the manifold to optimize over.

• initial_point – Initial point on the manifold. If no value is provided then a starting point will be randomly generated.

• reuse_line_searcher – Whether to reuse the previous line searcher, which allows information from a previous call to run() to be carried over.

Returns

Local minimum of the cost function, or the most recent iterate if the algorithm terminated before convergence.

Return type

pymanopt.optimizers.optimizer.OptimizerResult
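To make the geometry concrete, a minimal hand-rolled Riemannian steepest descent on the unit sphere, minimizing f(x) = -xᵀAx (whose minimizers are dominant eigenvectors of A). This sketches what the class above automates, with a fixed step size instead of the line search and stopping criteria Pymanopt provides; all names are illustrative:

```python
import numpy as np

def sphere_steepest_descent(A, x0, step_size=0.05, max_iterations=2000,
                            min_gradient_norm=1e-6):
    """Minimize f(x) = -x^T A x over the unit sphere in R^n."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(max_iterations):
        egrad = -2 * A @ x                 # Euclidean gradient of f
        rgrad = egrad - (x @ egrad) * x    # project onto the tangent space at x
        if np.linalg.norm(rgrad) < min_gradient_norm:
            break
        x = x - step_size * rgrad          # step along the negative gradient
        x = x / np.linalg.norm(x)          # retract back onto the sphere
    return x

rng = np.random.default_rng(42)
M = rng.standard_normal((5, 5))
A = M + M.T                                # symmetric test matrix
x = sphere_steepest_descent(A, rng.standard_normal(5))
# x should align (up to sign) with an eigenvector of A's largest eigenvalue
```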

Riemannian Trust Regions Algorithm

class pymanopt.optimizers.trust_regions.TrustRegions(miniter=3, kappa=0.1, theta=1.0, rho_prime=0.1, use_rand=False, rho_regularization=1000.0, *args, **kwargs)[source]
NEGATIVE_CURVATURE = 0
EXCEEDED_TR = 1
REACHED_TARGET_LINEAR = 2
REACHED_TARGET_SUPERLINEAR = 3
MAX_INNER_ITER = 4
MODEL_INCREASED = 5
TCG_STOP_REASONS = {0: 'negative curvature', 1: 'exceeded trust region', 2: 'reached target residual-kappa (linear)', 3: 'reached target residual-theta (superlinear)', 4: 'maximum inner iterations', 5: 'model increased'}
run(problem, *, initial_point=None, mininner=1, maxinner=None, Delta_bar=None, Delta0=None)[source]

Run an optimizer on a given optimization problem.

Parameters
• problem – Pymanopt problem class instance exposing the cost function and the manifold to optimize over.

• initial_point – Initial point on the manifold. If no value is provided then a starting point will be randomly generated.

• mininner – Minimum number of inner truncated conjugate gradient iterations.

• maxinner – Maximum number of inner truncated conjugate gradient iterations.

• Delta_bar – Maximum trust-region radius.

• Delta0 – Initial trust-region radius.

Returns

The optimization result.

Return type

pymanopt.optimizers.optimizer.OptimizerResult
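The TCG_STOP_REASONS above come from the truncated conjugate gradient inner solver. A Euclidean Steihaug-Toint sketch showing how the negative-curvature and trust-region-boundary exits arise (illustrative only, not the Riemannian implementation; the kappa/theta residual targets are conflated into a single code here):

```python
import numpy as np

# Stop codes mirroring the class attributes above.
NEGATIVE_CURVATURE, EXCEEDED_TR, REACHED_TARGET, MAX_INNER_ITER = 0, 1, 2, 4

def _to_boundary(eta, d, Delta):
    # Positive tau such that ||eta + tau * d|| == Delta.
    a, b, c = d @ d, 2 * (eta @ d), eta @ eta - Delta**2
    return ((-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)) * d

def truncated_cg(H, grad, Delta, maxinner=100, tol=1e-10):
    """Approximately minimize grad @ eta + 0.5 * eta @ H @ eta over ||eta|| <= Delta."""
    eta = np.zeros_like(grad)
    r, d = grad.copy(), -grad
    for _ in range(maxinner):
        Hd = H @ d
        curvature = d @ Hd
        if curvature <= 0:
            # Model is non-convex along d: follow it to the boundary.
            return eta + _to_boundary(eta, d, Delta), NEGATIVE_CURVATURE
        alpha = (r @ r) / curvature
        if np.linalg.norm(eta + alpha * d) >= Delta:
            return eta + _to_boundary(eta, d, Delta), EXCEEDED_TR
        eta = eta + alpha * d
        r_new = r + alpha * Hd
        if np.linalg.norm(r_new) < tol:
            return eta, REACHED_TARGET
        d = -r_new + ((r_new @ r_new) / (r @ r)) * d
        r = r_new
    return eta, MAX_INNER_ITER

eta, reason = truncated_cg(2.0 * np.eye(2), np.array([1.0, 0.0]), Delta=10.0)
```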