Removed duplicated items.

Fabian Becker 2012-06-05 12:06:53 +00:00
parent 37589f8d4f
commit 25198b8249
12 changed files with 0 additions and 369 deletions

@@ -1,15 +0,0 @@
<html>
<head>
<title>Default page</title>
</head>
<body>
 
<h1 align="center">HTML description file is missing</h1>
<center>
</center><br>
Unfortunately, there is no additional HTML description
file for this class. Please refer to the JOptDocumentation
file or the JavaDoc for more information on this class.
</body>
</html>

@@ -1,25 +0,0 @@
<html>
<head>
<title>Evolution Strategy - ES</title>
</head>
<body>
<h1 align="center">Evolution Strategy - ES</h1>
<center>
</center><br>
An ES works on a population of real-valued solutions
by repeated use of evolutionary operators like reproduction,
recombination and mutation.
&lambda; offspring individuals are generated from &mu; parents
by recombination and mutation (with &mu; &lt; &lambda;).
<br>
After evaluating the fitness of the &lambda;
offspring individuals, the comma-strategy selects the &mu; individuals
with the best fitness as parent population for the next generation.
On the other hand, a plus-strategy selects the best &mu; individuals
from the aggregation of parents and offspring individuals, so in this
case the best individual is guaranteed to survive.
In general, however, the comma-strategy is more robust and can escape local optima
more easily, which is why it is usually the default selection scheme.
<br>
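<p>
As an illustration of the two selection schemes described above, here is a minimal Java sketch. All names are hypothetical and minimization is assumed; this is not the EvA2 implementation.
</p>
<pre>
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class EsSelectionSketch {
    record Individual(double[] x, double fitness) {}

    // (mu,lambda): the next parents are the mu best offspring only.
    static List<Individual> comma(List<Individual> offspring, int mu) {
        return best(offspring, mu);
    }

    // (mu+lambda): the next parents are the mu best of parents and offspring,
    // so the best individual found so far always survives.
    static List<Individual> plus(List<Individual> parents,
                                 List<Individual> offspring, int mu) {
        List<Individual> pool = new ArrayList<>(parents);
        pool.addAll(offspring);
        return best(pool, mu);
    }

    private static List<Individual> best(List<Individual> pool, int mu) {
        return pool.stream()
                   .sorted(Comparator.comparingDouble(Individual::fitness))
                   .limit(mu)
                   .toList();
    }
}
</pre>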
</body>
</html>

@@ -1,30 +0,0 @@
<html>
<head>
<title>Increasing Population Size ES - IPOP-ES</title>
</head>
<body>
<h1 align="center">Increasing Population Size ES - IPOP-ES</h1>
<center>
</center><br>
<p>
This class implements the IPOP (increased population size) restart strategy ES, which increases
the ES population size (i.e., lambda) after phases of stagnation and then restarts the optimization
by reinitializing the individuals and operators.<br>
For this implementation, stagnation is defined by a FitnessConvergenceTerminator instance,
which terminates if the absolute change in fitness stays below a threshold (default 10e-12) for a
certain number of generations (default: 10+floor(30*n/lambda) for problem dimension n).
</p>
<p>
If the MutateESRankMuCMA mutation operator is employed, additional criteria are used for restarts,
such as the numeric condition of the covariance matrix.
Lambda is increased multiplicatively on every restart; typical initial values are
mu=5, lambda=10, incFact=2.
The IPOP-CMA-ES won the CEC 2005 benchmark challenge.
Refer to Auger & Hansen 05 for more details.
</p>
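<p>
A minimal Java sketch of the restart loop described above (the ES run itself is stubbed out; all names are hypothetical, and this is not the EvA2 code):
</p>
<pre>
import java.util.Random;

class IpopSketch {
    static final Random RND = new Random(42);

    static double[] ipopEs(int mu, int lambda, double incFact, int maxRestarts) {
        double[] best = null;
        for (int r = 0; r <= maxRestarts; r++) {
            // One reinitialized ES run until stagnation is detected, e.g. an
            // absolute fitness change below the threshold for
            // 10 + floor(30 * n / lambda) generations.
            double[] result = runEsUntilStagnation(mu, lambda);
            if (best == null || fitness(result) < fitness(best)) {
                best = result;
            }
            lambda = (int) (lambda * incFact); // grow lambda multiplicatively
        }
        return best;
    }

    // Placeholder for a full (mu,lambda)-ES run; returns a random point here.
    static double[] runEsUntilStagnation(int mu, int lambda) {
        double[] x = new double[10];
        for (int i = 0; i < x.length; i++) x[i] = RND.nextGaussian();
        return x;
    }

    // Stand-in objective (sphere), used only to compare restart results.
    static double fitness(double[] x) {
        double s = 0;
        for (double xi : x) s += xi * xi;
        return s;
    }
}
</pre>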
<br>
A. Auger & N. Hansen. <i>A Restart CMA Evolution Strategy With Increasing Population Size</i>. CEC 2005.
</body>
</html>

@@ -1,30 +0,0 @@
<html>
<head>
<title>Schwefel's (sine root) function</title>
</head>
<body>
<h1 align="center">Schwefel's (sine root) function</h1>
<center>
<img src="images/f13-tex-500.jpg" width="650" height="64" aling="center">
</center>
<p>
Schwefel's (sine root) function is highly multimodal and has no global basin of attraction. The optimum, at a fitness of f(x*)=0, lies at x*=420.9687. Schwefel's sine root is a tough challenge for any global optimizer due to its multiple distinct optima. In particular, there is a deceptive, nearly optimal solution close to x=(-420.9687)<SUP>n</SUP>.
<p>
<p>
<img src="images/f13-schwefels-sine-root.jpg" width="667" height="493" border="2" align="center">
<br>
Schwefel's sine root function in 2D within the domain -500 &lt;= <i>x</i> &lt;= 500.
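<p>
A common formulation can be evaluated as in the following Java sketch (this may differ from the exact EvA2 variant):
</p>
<pre>
class SchwefelSineRoot {
    // f(x) = 418.9829 * n - sum_i x_i * sin(sqrt(|x_i|)),
    // approximately 0 at x_i = 420.9687 for all i.
    static double f(double[] x) {
        double sum = 0;
        for (double xi : x) {
            sum += xi * Math.sin(Math.sqrt(Math.abs(xi)));
        }
        return 418.9829 * x.length - sum;
    }
}
</pre>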
<p>
<hr>
More information about Schwefel's (sine root) function can be found at:
<p>
David H. Ackley. <i>A connection machine for genetic hillclimbing.</i> Kluwer Academic Publishers, Boston, 1987.
<p>
Thomas Baeck. <i>Evolutionary Algorithms in Theory and Practice.</i> Oxford University Press, 1996.
</body>
</html>

@@ -1,24 +0,0 @@
<html>
<head>
<title>f_1 : Sphere function</title>
</head>
<body>
<h1 align="center">The F1 hyper-parabola function</h1>
<center>
<img src="images/f1tex.jpg" width="85" height="95" border="0" align="center">
</center><br>
The hyper-parabola function is an <i>n</i>-dimensional, axis-symmetric, continuously differentiable, convex function.
<p>
Because of its simplicity, every optimization algorithm should be able to find its global minimum at <i>x</i>=[0, 0, ... , 0].
<p>
<img src="images/f1.jpg" width="480" height="360" border="2" align="middle">
<hr>
More information about the F1 function can be found at:
<p>
Kenneth De Jong. <i>An analysis of the behaviour of a class of genetic adaptive systems.</i> Dissertation, University of Michigan, 1975. Diss. Abstr. Int. 36(10), 5140B, University Microfilms No. 76-9381.
</body>
</html>

@@ -1,36 +0,0 @@
<html>
<head>
<title>Generalized Rosenbrock's function</title>
</head>
<body>
<h1 align="center">Generalized Rosenbrock's function</h1>
<center>
<img src="images/rosenbrocktex.jpg" width="500" height="78">
</center>
<p>
This function is unimodal and continuous, but the global optimum is hard to find because the term (<i>x</i>_(<i>i</i>+1) - <i>x_i</i>*<i>x_i</i>)^2 introduces dependencies between contiguous parameters.
<p>
<img src="images/f85.jpg" border="2">
<br>
Rosenbrock's function within the domain -5 &lt;= <i>x</i> &lt;= 5.
<p>
The global optimum is located in a parabola-shaped valley (along the curve <i>x</i>_2 = <i>x</i>_1^2), which has a flat floor.
<br>
<img src="images/f81.jpg" border="2">
<br>
The function close to its global optimum, which is: f(<i>x</i>) = f(1, 1, ... , 1) = 0.
<p>
Rosenbrock's function is neither symmetric, nor convex, nor linear.
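<p>
The generalized form can be evaluated as in this Java sketch (the weighting factor 100 is the usual choice; this may differ from the exact EvA2 variant):
</p>
<pre>
class Rosenbrock {
    // f(x) = sum_{i=1}^{n-1} [ 100 * (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 ],
    // minimal (0) at x = (1, 1, ..., 1).
    static double f(double[] x) {
        double sum = 0;
        for (int i = 0; i < x.length - 1; i++) {
            double a = x[i + 1] - x[i] * x[i];
            double b = x[i] - 1;
            sum += 100 * a * a + b * b;
        }
        return sum;
    }
}
</pre>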
<hr>
More information about Rosenbrock's function can be found at:
<p>
Kenneth De Jong. <i>An analysis of the behaviour of a class of genetic adaptive systems.</i> Dissertation, University of Michigan, 1975. Diss. Abstr. Int. 36(10), 5140B, University Microfilms No. 76-9381.
<p>
Hans Paul Schwefel. <i>Evolution and optimum seeking.</i> Sixth-Generation Computer Technology Series. John Wiley & Sons, Inc., 1995.
<p>
Darrell Whitley, Soraya Rana, John Dzubera, Keith E. Mathias. <i>Evaluating Evolutionary Algorithms.</i> Artificial Intelligence, 85(1-2):245-276, 1996.
<p>
Eberhard Schoeneburg, Frank Heinzmann, Sven Feddersen. <i>Genetische Algorithmen und Evolutionstrategien - Eine Einfuehrung in Theorie und Praxis der simulierten Evolution.</i> Addison-Wesley, 1994.
</body>
</html>

@@ -1,29 +0,0 @@
<html>
<head>
<title>The step function</title>
</head>
<body>
<h1 align="center">The Step Function</h1>
<center>
<img src="images/steptex.jpg" width="350" height="120" aling="center">
</center>
<p>
The idea of this function is to implement flat plateaus (slope 0) into an underlying continuous function. It is harder for optimization algorithms to find optima because minor changes of the object variables do not affect the fitness; therefore, no conclusions about the search direction can be drawn.
<p>
<img src="images/step5.jpg" width="480" height="360" border="2" align="center">
<p>
The step function is symmetric with respect to the underlying function (here: f(x,y) = f(y,x)), but it is not continuously differentiable between the constant plateau areas.
<p>
Its minimum area is located in the intervals: <i>f(x)</i>=<i>f</i>([-5.12,-5), ... , [-5.12,-5))=0.
<p>
<img src="images/stepopt.jpg" width="480" height="360" border="2" align="center">
<hr>
More information about the step function can be found at:
<p>
Thomas Baeck. <i>Evolutionary Algorithms in Theory and Practice.</i> Oxford University Press, 1996.
<p>
Darrell Whitley, Soraya Rana, John Dzubera, Keith E. Mathias. <i>Evaluating Evolutionary Algorithms.</i> Artificial Intelligence, 85(1-2):245-276, 1996.
<p>
Eberhard Schoeneburg, Frank Heinzmann, Sven Feddersen. <i>Genetische Algorithmen und Evolutionstrategien - Eine Einfuehrung in Theorie und Praxis der simulierten Evolution.</i> Addison-Wesley, 1994.
</body>
</html>

@@ -1,27 +0,0 @@
<html>
<head>
<title>Schwefel's double sum</title>
</head>
<body>
<h1 align="center">Schwefels double sum</h1>
<center>
<img src="images/f2tex.jpg" width="220" height="102" border="0" align="center">
</center>
<p>
Schwefel's double sum is a quadratic minimization problem. Its difficulty increases with the problem dimension <i>n</i> as <i>O(n^2)</i>. It is used for the analysis of correlated mutations.
<p>
It possesses specific symmetrical properties:<br>
<img src="images/schwefelsymmetrie.jpg" width="500" height="104" border="0" align="middle">
<p>
Its minimum is located at: <i>f(x)</i>=<i>f</i>([0, 0, ... , 0])=0
<p>
<img src="images/f2.jpg" width="480" height="360" border="2" align="middle">
<hr>
More information about Schwefel's double sum can be found at:
<p>
Hans Paul Schwefel. <i>Evolution and optimum seeking.</i> Sixth-Generation Computer Technology Series. John Wiley & Sons, Inc., 1995.
</body>
</html>

@@ -1,42 +0,0 @@
<html>
<head>
<title>Generalized Rastrigin's function</title>
</head>
<body>
<h1 align="center">Generalized Rastrigin's function</h1>
<center>
<img src="images/rastrigintex.jpg" width="500" height="101">
</center>
<p>
Rastrigin's function is symmetric. It is based on the simple <i>parabola function</i> (called f1 in the EvA context), but it is multimodal because a modulation term on the basis of the cosine function is added. This evokes hills and valleys which act as misleading local optima.
<p>
Values used for the following illustrations: <i>A</i>=10, <i>&#969;</i>=2*&#960;, <i>n</i>=2.
<br>
<img src="images/rastrigin20.jpg" border="2">
<br>
Rastrigin's function within the domain -20 &lt;= <i>x</i> &lt;= 20.
<p>
<img src="images/rastrigin5.jpg" border="2">
<br>
Rastrigin's function within the domain -5 &lt;= <i>x</i> &lt;= 5.
<p>
As with Ackley's function, a simple evolutionary algorithm would get stuck in a local optimum, while a more broadly searching algorithm can escape the local optimum and approach the global optimum, which in this case is f(<i>x</i>) = f(0, 0, ... , 0) = 0.
<p>
<img src="images/rastrigin1.jpg" border="2"><br>
Rastrigin's function close to its optimum.
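<p>
With the parameters given above, the function can be evaluated as in this Java sketch (not the EvA2 class itself):
</p>
<pre>
class Rastrigin {
    // f(x) = A*n + sum_i ( x_i^2 - A * cos(omega * x_i) ),
    // with A = 10 and omega = 2*pi; minimal (0) at the origin.
    static double f(double[] x) {
        double a = 10, omega = 2 * Math.PI;
        double sum = a * x.length;
        for (double xi : x) sum += xi * xi - a * Math.cos(omega * xi);
        return sum;
    }
}
</pre>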
<hr>
More information about Rastrigin's function can be found at:
<p>
Darrell Whitley, Soraya Rana, John Dzubera, Keith E. Mathias. <i>Evaluating Evolutionary Algorithms.</i> Artificial Intelligence, 85(1-2):245-276, 1996.
<p>
Eberhard Schoeneburg, Frank Heinzmann, Sven Feddersen. <i>Genetische Algorithmen und Evolutionstrategien - Eine Einfuehrung in Theorie und Praxis der simulierten Evolution.</i> Addison-Wesley, 1994.
</body>
</html>

@@ -1,36 +0,0 @@
<html>
<head>
<title>Ackley's function</title>
</head>
<body>
<h1 align="center">Ackley's function</h1>
<center>
<img src="images/ackleytex.jpg" width="500" height="58" aling="center">
</center>
<p>
Ackley's function is multimodal and symmetrical. It is based on an exponential function and modulated by a cosine function.
The outer region is almost planar due to the growing influence of the exponential function.
In the center there is a steep hole due to the influence of the cosine function.<br>
Its minimum is at: <i>f(x)</i>=<i>f</i>([0, 0, ... , 0])=0.
<p>
The difficulty for an optimization algorithm is moderate, because a simple optimization algorithm like <i>hill-climbing</i> would get stuck in a local minimum. The optimization algorithm has to search a broader region to overcome the local minimum and get closer to the global optimum.
<p>
<img src="images/ackley.jpg" width="480" height="360" border="2" align="center">
<br>
Ackley's function within the domain -20 &lt;= <i>x</i> &lt;= 20, <i>a</i>=20, <i>b</i>=0.2, <i>c</i>=2*&#960;, <i>n</i>=2.
<p>
<img src="images/ackleyopt.jpg" width="480" height="360" border="2" align="center">
<br>
Ackley's function close to the optimum.
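<p>
With the parameters given above, Ackley's function can be evaluated as in this Java sketch (not the EvA2 class itself):
</p>
<pre>
class Ackley {
    // f(x) = -a*exp(-b*sqrt(sum_i x_i^2 / n)) - exp(sum_i cos(c*x_i) / n) + a + e,
    // with a = 20, b = 0.2, c = 2*pi; minimal (0) at the origin.
    static double f(double[] x) {
        double a = 20, b = 0.2, c = 2 * Math.PI;
        int n = x.length;
        double sq = 0, cs = 0;
        for (double xi : x) {
            sq += xi * xi;
            cs += Math.cos(c * xi);
        }
        return -a * Math.exp(-b * Math.sqrt(sq / n)) - Math.exp(cs / n) + a + Math.E;
    }
}
</pre>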
<hr>
More information about Ackley's function can be found at:
<p>
David H. Ackley. <i>A connection machine for genetic hillclimbing.</i> Kluwer Academic Publishers, Boston, 1987.
<p>
Thomas Baeck. <i>Evolutionary Algorithms in Theory and Practice.</i> Oxford University Press, 1996.
</body>
</html>

@@ -1,19 +0,0 @@
<html>
<head>
<title>Fitness Convergence Terminator</title>
</head>
<body>
<h1 align="center">Fitness Convergence Terminator</h1>
<center>
</center><br>
The fitness convergence terminator stops the optimization when there has been hardly
any change in the best fitness in the population (within a relative or absolute distance) for a certain
time, given in generations or fitness calls. In the case of multi-objective optimization, the 2-norm of
the fitness vector is currently used.<br>
Be aware that, if the optimization is allowed to be non-monotonic, such as for (&mu;,&lambda;)-ES strategies,
and if the optimum is close to zero, the fitness may fluctuate due to numeric
issues and not converge easily in a relative sense.<br>
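<p>
The core test can be sketched in Java as follows; all names are hypothetical, and this is not the EvA2 FitnessConvergenceTerminator itself:
</p>
<pre>
class ConvergenceCheck {
    private final double threshold;  // absolute or relative tolerance
    private final boolean relative;
    private final int patience;      // generations the change must stay small
    private double lastBest = Double.NaN;
    private int stagnantFor = 0;

    ConvergenceCheck(double threshold, boolean relative, int patience) {
        this.threshold = threshold;
        this.relative = relative;
        this.patience = patience;
    }

    // Feed the best fitness of each generation; returns true to terminate.
    boolean update(double best) {
        double change = Math.abs(best - lastBest);
        if (relative && lastBest != 0) change /= Math.abs(lastBest);
        stagnantFor = (!Double.isNaN(lastBest) && change < threshold)
                ? stagnantFor + 1 : 0;
        lastBest = best;
        return stagnantFor >= patience;
    }
}
</pre>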
Check the help for the <a href="PopulationMeasureTerminator.html">PopulationMeasureTerminator</a> for additional information.
</body>
</html>

@@ -1,56 +0,0 @@
<html>
<head>
<title>Generic Constraints</title>
</head>
<body>
<h1 align="center">Generic Constraints</h1>
<br>
<p>To represent generic constraints on real-valued functions, this class can parse
String expressions in prefix notation of the form:
<blockquote>
&lt;expr&gt; ::= &lt;constant-operator&gt; | &lt;functional-operator&gt; "(" &lt;arguments&gt; ")"<br>
&lt;arguments&gt; ::= &lt;expr&gt; | &lt;expr&gt; "," &lt;arguments&gt;
</blockquote>
</p>
Setting the <b>constraint string</b>:
Constant operators have an arity of zero. Examples are:<br>
(pi,0) (X,0) (1.0,0)<br>
Functional operators have an arity greater than zero. Examples are:<br>
(sum,1) (prod,1) (abs,1) (sin,1) (pow2,1) (pow3,1) (sqrt,1) (neg,1) (cos,1) (exp,1)<br>
(+,2) (-,2) (/,2) (*,2)<br>
<p>
Additionally, any numeric string can be used; it is parsed to a numeric constant. The literal <i>n</i>
is parsed to the current number of problem dimensions.<br>
Notice that only the <i>sum</i> and <i>prod</i> operators may receive the literal X as input, standing
for the full solution vector. Access to single solution components is possible by writing <i>x0...x9</i>
for a problem with 10 dimensions.
</p>
<p>
Thus you may write <font face="Courier">+(-(5,sum(X)),sin(/(x0,pi)))</font>
and select 'lessEqZero' as the relation to require valid solutions to fulfill 5-sum(X)+sin(x0/pi)&lt;=0.<br>
</p>
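<p>
A heavily simplified Java sketch of evaluating such prefix expressions on a solution vector (hypothetical code, far less general than the actual parser; the literal X is only supported inside sum here):
</p>
<pre>
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Deque;
import java.util.List;

class PrefixEval {
    static double eval(String expr, double[] x) {
        String spaced = expr.replace("(", " ( ").replace(")", " ) ").replace(",", " ");
        Deque<String> tokens = new ArrayDeque<>(Arrays.asList(spaced.trim().split("\\s+")));
        return parse(tokens, x);
    }

    private static double parse(Deque<String> t, double[] x) {
        String tok = t.poll();
        if (tok.equals("sum")) {          // simplified: only sum(X) is supported
            t.poll(); t.poll(); t.poll(); // consume "(", "X", ")"
            double s = 0;
            for (double xi : x) s += xi;
            return s;
        }
        List<Double> args = new ArrayList<>();
        if ("(".equals(t.peek())) {       // functional operator: parse arguments
            t.poll();
            while (!")".equals(t.peek())) args.add(parse(t, x));
            t.poll();
        }
        switch (tok) {
            case "+":   return args.get(0) + args.get(1);
            case "-":   return args.get(0) - args.get(1);
            case "*":   return args.get(0) * args.get(1);
            case "/":   return args.get(0) / args.get(1);
            case "sin": return Math.sin(args.get(0));
            case "pi":  return Math.PI;
            default:    // single components x0..x9 or a numeric constant
                if (tok.matches("x\\d")) return x[tok.charAt(1) - '0'];
                return Double.parseDouble(tok);
        }
    }
}
</pre>
<p>
For instance, eval("+(-(5,sum(X)),sin(/(x0,pi)))", x) computes 5-sum(X)+sin(x0/pi).
</p>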
<p>
Typical <b>relations</b> concerning constraints allow for g(x)&lt;=0, g(x)==0, or g(x)&gt;=0 for
a constraint g. Notice that equal-to-zero constraints are converted via g(x)==0 &lt;=&gt; |g(x)|-epsilon&lt;=0 for
customizable small values of epsilon.
</p>
<p>
The <b>handling method</b> defines how EvA 2 copes with the constraint. The simplest variant is an
additive penalty, which is scaled by the penalty factor and then added directly to the fitness
of an individual. This will work with any optimization strategy, but the results will depend on
the choice of penalty factors. The multiplicative penalty works analogously, with the difference
that it is multiplied with the raw fitness.<br>
In the variant called specific tag, the constraint violation is stored in an extra field of each
individual and may be taken into account by the optimization strategy. However, not all strategies
provide simple mechanisms for incorporating this specific tag.
</p>
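<p>
A minimal Java sketch of the two penalty variants (hypothetical names; the constraint violation is assumed to be 0 for feasible solutions and positive otherwise, and the multiplicative form shown is one plausible interpretation):
</p>
<pre>
class PenaltySketch {
    // Additive: the scaled violation is added to the raw fitness (minimization).
    static double additive(double rawFitness, double violation, double factor) {
        return rawFitness + factor * violation;
    }

    // Multiplicative: the raw fitness is scaled up by the violation.
    static double multiplicative(double rawFitness, double violation, double factor) {
        return rawFitness * (1.0 + factor * violation);
    }
}
</pre>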
</body>
</html>