---
author: Luca Bernstiel
title: Particle Swarm Optimization (PSO) Sampler
description: Particle Swarm Optimization is a population-based stochastic optimization algorithm inspired by flocking behavior, where particles iteratively adjust their positions using personal and global bests to search for optima.
tags: [sampler]
optuna_versions: [4.5.0]
license: MIT License
---

## Abstract

Particle Swarm Optimization (PSO) is a population-based stochastic optimizer inspired by flocking behavior, where particles iteratively adjust their positions using personal and global bests to search for optima. This sampler supports single-objective, continuous optimization only.

> Note: Parameters with categorical distributions are suggested by the underlying RandomSampler.

> Note: Multi-objective optimization is not supported.

For details on the algorithm, see Kennedy and Eberhart (1995): [Particle Swarm Optimization](https://doi.org/10.1109/ICNN.1995.488968).
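
In the canonical formulation from that paper, each particle $i$ carries a position $x_i$ and a velocity $v_i$ and is pulled toward both its personal best $p_i$ and the swarm's global best $g$. With $w$, $c_1$, and $c_2$ corresponding to this sampler's `inertia`, `cognitive`, and `social` parameters, and $r_1, r_2$ drawn uniformly from $[0, 1]$ at each update, the standard update rule is:

$$
\begin{aligned}
v_i &\leftarrow w \, v_i + c_1 r_1 \, (p_i - x_i) + c_2 r_2 \, (g - x_i), \\
x_i &\leftarrow x_i + v_i.
\end{aligned}
$$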

## APIs

- `PSOSampler(search_space: dict[str, BaseDistribution] | None = None, n_particles: int = 10, inertia: float = 0.5, cognitive: float = 1.5, social: float = 1.5, seed: int | None = None)`
  - `search_space`: A dictionary that defines the parameter space. The keys are the parameter names and the values are [the parameters' distributions](https://optuna.readthedocs.io/en/stable/reference/distributions.html). If the search space is not provided, the sampler infers it dynamically.
    Example:
    ```python
    search_space = {
        "x": optuna.distributions.FloatDistribution(-10, 10),
        "y": optuna.distributions.FloatDistribution(-10, 10),
    }
    PSOSampler(search_space=search_space)
    ```
  - `n_particles`: Number of particles (population size). Prefer a total `n_trials` that is a multiple of `n_particles` so the swarm completes full PSO iterations. Larger swarms can improve exploration when the budget allows.
  - `inertia`: Inertia weight (w) controlling momentum, i.e. the influence of a particle's previous velocity. Higher values favor exploration, lower values favor exploitation; see the sketch after this list.
  - `cognitive`: Personal-best acceleration coefficient (c1). Controls attraction toward each particle's own best.
  - `social`: Global-best acceleration coefficient (c2). Controls attraction toward the swarm's best.
  - `seed`: Seed for the random number generator.

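To make the roles of these coefficients concrete, here is a minimal, self-contained toy PSO loop on the same 2-D sphere function used in the example below. It only illustrates the standard update rule and is not the package's internal implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Coefficients matching the sampler's defaults.
inertia, cognitive, social = 0.5, 1.5, 1.5

# Toy swarm: 10 particles in 2-D, minimizing x**2 + y**2 over [-10, 10]^2.
positions = rng.uniform(-10, 10, size=(10, 2))
velocities = np.zeros_like(positions)
personal_best = positions.copy()
personal_best_val = (personal_best**2).sum(axis=1)
global_best = personal_best[personal_best_val.argmin()]

for _ in range(20):  # 20 PSO iterations
    r1 = rng.random(positions.shape)
    r2 = rng.random(positions.shape)
    velocities = (
        inertia * velocities
        + cognitive * r1 * (personal_best - positions)  # pull toward each particle's best
        + social * r2 * (global_best - positions)  # pull toward the swarm's best
    )
    positions = np.clip(positions + velocities, -10, 10)
    values = (positions**2).sum(axis=1)
    improved = values < personal_best_val
    personal_best[improved] = positions[improved]
    personal_best_val[improved] = values[improved]
    global_best = personal_best[personal_best_val.argmin()]

print(global_best)  # should be close to (0, 0)
```
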
## Example

```python
import optuna
import optunahub


def objective(trial: optuna.Trial) -> float:
    # Sphere function: the optimum is 0 at (x, y) = (0, 0).
    x = trial.suggest_float("x", -10, 10)
    y = trial.suggest_float("y", -10, 10)
    return x**2 + y**2


n_trials = 100
n_generations = 5

sampler = optunahub.load_module(package="samplers/pso").PSOSampler(
    {
        "x": optuna.distributions.FloatDistribution(-10, 10),
        "y": optuna.distributions.FloatDistribution(-10, 10),
    },
    # 100 trials / 5 generations = 20 particles, so every generation is complete.
    n_particles=n_trials // n_generations,
    inertia=0.5,
    cognitive=1.5,
    social=1.5,
)

study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=n_trials)
print(study.best_trial)
```
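
As noted in the APIs section, `search_space` may be omitted, in which case the sampler infers the search space dynamically. A minimal sketch of that usage:

```python
import optuna
import optunahub


def objective(trial: optuna.Trial) -> float:
    x = trial.suggest_float("x", -10, 10)
    return x**2


# No search_space argument: the space is inferred from the suggested parameters.
sampler = optunahub.load_module(package="samplers/pso").PSOSampler(n_particles=20, seed=42)

study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=100)
print(study.best_trial)
```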