Model files

This document explains the model file used by each optimization technique. Let's get started!

Since all optimization techniques require the number of agents and decision variables, as well as the number of iterations for convergence, the first line of the model file is the very same one for all techniques implemented in LibOPT.
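
As a quick illustration, the sketch below shows how that common first line could be read in C. This is not the parser shipped with LibOPT, just a hypothetical stand-alone snippet (the file name, variable names and buffer size are made up for the example); it only illustrates that the three integers come first and that anything after the # character is discarded as a comment.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative sketch only: reads the common header line of a model file.
 * It is NOT the reader used by LibOPT itself. */
int main(void)
{
    char line[256];
    int n_agents, dimension, max_iterations;

    FILE *fp = fopen("pso_model.txt", "r");   /* hypothetical file name */
    if (!fp) {
        fprintf(stderr, "Could not open the model file.\n");
        return 1;
    }
    if (!fgets(line, sizeof line, fp)) {      /* read the header line */
        fclose(fp);
        return 1;
    }

    char *comment = strchr(line, '#');        /* drop the inline comment, if any */
    if (comment)
        *comment = '\0';

    if (sscanf(line, "%d %d %d", &n_agents, &dimension, &max_iterations) == 3)
        printf("agents = %d, dimension = %d, iterations = %d\n",
               n_agents, dimension, max_iterations);

    fclose(fp);
    return 0;
}
```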

  1. Particle Swarm Optimization (PSO). Suppose we have the following model file:
    10 2 10 #<n_particles> <dimension> <max_iterations>
    1.7 1.7 #<c1> <c2>
    0.7 0.0 0.0 #<w> <w_min> <w_max>
    -5.12 5.12 #<LB> <UB> x[0]
    -5.12 5.12 #<LB> <UB> x[1]
    As aforementioned, the first line contains three integers: the number of agents (particles), the number of decision variables (dimension) and the number of iterations. Notice that everything after the character # is considered a comment and is therefore ignored by the parser.

    The next two lines configure PSO parameters c1 and c2, and the inertia weight w. Since LibOPT implements the naïve PSO, it does not employ adaptive inertia weight. Therefore, there is no need to set w_min and w_max.

    The last two lines set up the range of each decision variable. Since the example has two dimensions, each line stands for one variable: first x[0] and then x[1]. A minimal C sketch of how a model file like this might be loaded and used is given at the end of this page.

  2. Particle Swarm Optimization with Adaptive Inertia Weight (AIWPSO). We can use the above example with a minor modification:
    10 2 10 #<n_particles> <dimension> <max_iterations>
    1.7 1.7 #<c1> <c2>
    0.7 0.1 0.9 #<w> <w_min> <w_max>
    -5.12 5.12 #<LB> <UB> x[0]
    -5.12 5.12 #<LB> <UB> x[1]
    Now, we have set the w_min and w_max parameters, which is the main difference between the PSO and AIWPSO model files.

  3. Bat Algorithm (BA). Suppose we have a similar model file:
    10 2 10 #<n_particles> <dimension> <max_iterations>
    0.0 2.0 #<f_min> <f_max>
    0.5 0.5 #<A> <r>
    -5.12 5.12 #<LB> <UB> x[0]
    -5.12 5.12 #<LB> <UB> x[1]
    Now, we need to set up the minimum and maximum frequencies, namely f_min and f_max, as well as the loudness A and the pulse rate r.

  4. Flower Pollination Algorithm (FPA). Suppose we have a similar model file:
    10 2 10 #<n_particles> <dimension> <max_iterations>
    1.5 0.3 #<beta> <p>
    -5.12 5.12 #<LB> <UB> x[0]
    -5.12 5.12 #<LB> <UB> x[1]
    Now, we need to set up the beta value, which is used to compute the Levy distribution, as well as the switch probability p, which is basically the probability of local pollination.

  5. Firefly Algorithm (FA). Suppose we have a similar model file:
    10 2 10 #<n_particles> <dimension> <max_iterations>
    0.2 1 1 #<alpha> <beta_0> <gamma>
    -5.12 5.12 #<LB> <UB> x[0]
    -5.12 5.12 #<LB> <UB> x[1]
    Now, we need to set up the alpha value, which controls the randomization step, as well as the attractiveness beta_0 and the light absorption coefficient gamma.

  6. Cuckoo Search (CS). Suppose we have a similar model file:
    10 2 10 #<n_particles> <dimension> <max_iterations>
    1.5 0.3 0.2 #<beta> <p> <alpha>
    -5.12 5.12 #<LB> <UB> x[0]
    -5.12 5.12 #<LB> <UB> x[1]
    Now, we need to set up the beta value, which is used to compute the Levy distribution, the switch probability p, which is basically the probability of replacing the worst nests with new ones, and alpha, which is the step size.

  7. Genetic Programming (GP). Suppose we have a similar model file:
    10 2 7 #<n_trees> <dimension> <max_iterations>
    0.3 0.3 0.4 #<probability_of_reproduction> <probability_of_mutation> <probability_of_crossover>
    2 5 #<minimum_depth_tree> <maximum_depth_tree>
    SUM MUL DIV SUB #function nodes
    PARAM CONST #terminal nodes
    1 1 #<integer(binary)-optimization problem> <the variables have different ranges>
    -5.0 5.0 #<LB> <UB> x[0]
    -7.0 7.0 #<LB> <UB> x[1]
    Unlike the previous model files, the first element of the first line now stands for the number of trees instead of the number of particles. The second line sets the probabilities of reproduction, mutation and crossover of the GP trees. The third line sets the minimum and maximum depth of the trees.

    Regarding the function nodes, there is a need to set what functions are going to be used to build up the trees. For now, we have the following possible functions: SUM, SUB, MUL, DIV, ABS, SQRT, LOG, EXP, AND, OR, XOR, NOT. However, it is possible to add more functions.

    For the terminals, it is possible to use a parameter terminal (PARAM) or a constant (CONST) terminal. If you choose to use CONST in this line, the library automatically generates 1,000 constant values to enrich the diversity of the trees. If you want to change the number of constants, please refer to the variable N_CONSTANTS in include/opt.h. If you have an optimization problem with 2 decision variables, for instance, the CONST flag will generate two-dimensional constants, each one within the range specified below for its decision variable. Therefore, do not worry about the ranges of the constants generated by the library. We take care of that for you!

    Considering the next line, if you have an integer-valued optimization problem, you must set the first value to 1; for a real-valued problem, set it to 0. If your decision variables fall into different ranges, you must set the second value to 1. Conversely, if they all share the same range, set it to 0 and the library considers that single range for all decision variables. This is useful when you have hundreds of decision variables, and we know you do not want to set all those ranges, for sure!

    Finally, just set the lower and upper bound of your decision variables, and you are ready to go!

  8. Black Hole Algorithm (BHA). Suppose we have a similar model file:
    10 2 10 #<n_particles> <dimension> <max_iterations>
    -5.12 5.12 #<LB> <UB> x[0]
    -5.12 5.12 #<LB> <UB> x[1]
    We don't need to set up any additional parameters for BHA.

  9. Migrating Birds Optimization (MBO). Suppose we have a similar model file:
    10 2 10 #<n_particles> <dimension> <max_iterations>
    10 8 10 #<k> <X> <M>
    -4.5 4.5 #<LB> <UB> x[0]
    -4.5 4.5 #<LB> <UB> x[1]
    Now, we need to set up the number of neighbour solutions to be considered (k) and the number of neighbours shared with the next solution (X), as well as the number of tours M.

  10. Geometric Semantic Genetic Programming (GSGP). Suppose we have a similar model file:
    10 2 7 #<n_trees> <dimension> <max_iterations>
    0.3 0.3 0.4 #<probability_of_reproduction> <probability_of_mutation> <probability_of_crossover>
    2 5 #<minimum_depth_tree> <maximum_depth_tree>
    AND OR XOR NOT #function nodes
    PARAM CONST #terminal nodes
    1 1 #<integer(binary)-optimization problem> <the variables have different ranges>
    -5.0 5.0 #<LB> <UB> x[0]
    -7.0 7.0 #<LB> <UB> x[1]
    GSGP's model file follows the same layout as GP's. The function nodes can be either binary (logical) functions, i.e., AND, OR, XOR, NOT, or real-valued ones, i.e., SUM, SUB, MUL, DIV, ABS, SQRT, LOG, EXP. Just remember to set "integer(binary)-optimization problem" to 1 when using binary operators and to 0 when using real-valued ones.

  11. Artificial Bee Colony (ABC). Suppose we have a similar model file:
    10 2 10 #<n_particles> <dimension> <max_iterations>
    10 #<number of trial limits>
    -5.12 5.12 #<LB> <UB> x[0]
    -5.12 5.12 #<LB> <UB> x[1]
    Note that the number of particles is equivalent to the number of food sources. Therefore, the number of bees in the colony will always be double this number, i.e., 20 in this example. Next, we need to define the trial limit for each food source; any number greater than zero can be used.

  12. Water Cycle Algorithm (WCA). Suppose we have a similar model file:
    10 2 10 # <n_particles> <dimension> <max_iterations>
    1.5 0.8 # <nsr> <dmax>
    -30 30 # <LB> <UB> x[0]
    -30 30 # <LB> <UB> x[1]
    Now, we need to set up the number of rivers + sea (nsr) and the maximum value of the evaporation condition, represented by dmax.

  13. Harmony Search (HS), Improved Harmony Search (IHS) and Parameter-setting-free Harmony Search (PSF-HS). Suppose we have a similar model file:
    10 2 10 # <n_particles> <dimension> <max_iterations>
    0.7 # <HMCR>
    0.7 0 1 # <PAR> <PAR_min> <PAR_max>
    10 0 20 # <bw> <bw_min> <bw_max>
    -30 30 # <LB> <UB> x[0]
    -30 30 # <LB> <UB> x[1]
    Now, we need to set up the harmony memory considering rate (HMCR), the pitch adjusting rate (PAR) and the bandwidth parameter (bw). Note that you only need to set the minimum and maximum values of PAR and bw when using IHS, and that you do not need to set up bw for PSF-HS.

  14. Brain Storm Optimization (BSO). Suppose we have the following model file:
    10 2 10 #<n_particles> <dimension> <max_iterations>
    3 # <k>
    0.3 0.4 0.3 # <p_one_cluster> <p_one_center> <p_two_centers>
    -5.12 5.12 # <LB> <UB> x[0]
    -5.12 5.12 # <LB> <UB> x[1]
    As aforementioned, the first line contains three integers: the number of agents (particles), the number of decision variables (dimension) and the number of iterations. Remember that everything after the character # is considered a comment and is therefore ignored by the parser.

    The next two lines configure the BSO parameters: k (i.e., the number of clusters), p_one_cluster (i.e., the probability of selecting a cluster center), p_one_center (i.e., the probability of randomly selecting an idea from a probabilistically selected cluster), and p_two_centers (i.e., the probability of creating a random combination of two probabilistically selected clusters).

  15. Lion Optimization Algorithm (LOA). Suppose we have the following model file:
    10 2 10 # <n_particles> <dimension> <max_iterations>
    0.8 0.2 0.2 0.3 0.2 0.4 4 # <sex rate> <percent of nomad lions> <roaming percent> <mating probability> <mutate probability> <immigrate rate> <number of prides> 
    -5.12 5.12 # <LB> <UB> x[0]
    -5.12 5.12 # <LB> <UB> x[1]
    One can notice that LOA demands the following parameters: sex rate (probability of males), percent of nomad lions, percent of roaming lions, mating probability, mutation probability, immigration rate and the number of prides.

    The last two lines set up the range of each decision variable. Since the example has two dimensions, each line stands for one variable: first x[0] and then x[1].

  16. Backtracking Search Optimization Algorithm (BSA). Suppose the following model file:
    10 2 10 # <n_particles> <dimension> <max_iterations>
    1.0 3 # <mix_rate> <F>
    -30 30 # <LB> <UB> x[0]
    -30 30 # <LB> <UB> x[1]
    In this example, the parameter mix rate controls the number of elements of individuals that will mutate in a trial population. Similarly, F controls the amplitude of the search-direction matrix.

  17. Adaptive Differential Evolution with Optional External Archive (JADE). Suppose the following model file:
    10 2 10 # <n_particles> <dimension> <max_iterations>
    0.1 0.05 # <c> <p>
    -30 30 # <LB> <UB> x[0]
    -30 30 # <LB> <UB> x[1]
    In this example, the parameter c stands for the rate of parameter adaptation, while p determines the greediness of the mutation strategy.

  18. Artificial Butterfly Optimization (ABO). Suppose the following model file:
    10 2 10 # <n_butterflies> <dimension> <max_iterations>
    0.2 0.05 # <ratio_e> <step_e>
    -30 30 # <LB> <UB> x[0]
    -30 30 # <LB> <UB> x[1]
    In this example, the parameter ratio_e controls the proportion of sunspot butterflies, while step_e controls the flight distance.

  19. Cartesian Genetic Programming (CGP). Suppose we have the following model file:
    10 2 10 # <n_particles> <dimension> <max_iterations>
    5 100 1 500 # <n_rows> <n_columns> <levels_back> <n_input_values>
    0.3 # <probability_of_mutation>
    -5.12 5.12 # <LB> <UB> x[0]
    -5.12 5.12 # <LB> <UB> x[1]
    CGP demands the following parameters: the number of rows and the number of columns of the graph. levels_back stands for how far back (column-wise) a node may be from another node in order to serve as that node's input, and n_input_values indicates the number of random values used to feed the graph. On the next line, we define the mutation probability.

    The last two lines set up the range of each decision variable. Since the example has two dimensions, each line stands for one variable: first x[0] and then x[1].

  20. Differential Evolution (DE). Suppose we have the following model file:
    10 2 10 # <n_particles> <dimension> <max_iterations>
    0.8 0.7 # <mutation_factor> <cross_probability>
    -5.12 5.12 # <LB> <UB> x[0]
    -5.12 5.12 # <LB> <UB> x[1]
    As aforementioned, the first line contains three integers: the number of agents (particles), the number of decision variables (dimension) and the number of iterations. Remember that everything after the character # is considered a comment and is therefore ignored by the parser.

    The next line configures the DE parameters mutation_factor and cross_probability.

    The last two lines set up the range of each decision variable. Since the example has two dimensions, each line stands for one variable: first x[0] and then x[1].
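
Finally, as promised, here is a rough idea of how a model file is consumed once it has been written. The snippet below is only a sketch: the function names (ReadSearchSpaceFromFile, InitializeSearchSpace, CheckSearchSpace, runPSO, DestroySearchSpace), the _PSO_ identifier, the Sphere benchmark and the file path are assumptions based on the headers and examples shipped with LibOPT, so please double-check them against the examples/ folder of the repository before relying on them.

```c
#include "common.h"    /* SearchSpace reading/initialization (assumed header) */
#include "function.h"  /* benchmark functions such as Sphere (assumed header) */
#include "pso.h"       /* runPSO (assumed header) */

int main(void)
{
    /* Hypothetical path to a model file laid out like the PSO example above. */
    SearchSpace *s = ReadSearchSpaceFromFile("pso_model.txt", _PSO_);

    InitializeSearchSpace(s, _PSO_);     /* random positions within each [LB, UB] */

    if (CheckSearchSpace(s, _PSO_))      /* sanity-check the loaded parameters */
        runPSO(s, Sphere);               /* minimize the Sphere benchmark */

    DestroySearchSpace(&s, _PSO_);
    return 0;
}
```

The same pattern should apply to the other techniques by swapping the model file, the opt_id and the run function accordingly.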
