Generalized Formulation of Multicriteria Optimization Problems
Let us consider an object whose operation is described by a system of equations (differential, algebraic, etc.) or whose performance criteria may be calculated directly. We assume that the system depends on r design variables a1, ..., ar representing a point a = (a1, ..., ar) of an r-dimensional design variable space.
In the general case, when designing a machine, one has to take into account the design variable constraints, the functional constraints, and the criteria constraints [1].
The design variable constraints (constraints on the design variables) have the form
aj* ≤ aj ≤ aj**, j = 1, ..., r. (1)
Obviously, the constraints (1) single out a parallelepiped P in the r-dimensional design variable space.
The functional constraints may be written as follows:
Cl* ≤ fl(a) ≤ Cl**, l = 1, ..., t, (2)
where fl(a) are the functional dependences. The functional constraints can specify the range of allowable stresses in structural elements, the track gauge, etc.
There also exist particular performance criteria, such as productivity, materials consumption, and efficiency. It is desired that, other things being equal, these criteria, denoted by Φn(a), n = 1, ..., k, attain their extreme values. For simplicity, we assume that the Φn(a) are to be minimized.
In order to avoid situations in which the expert regards the values of some criteria as unacceptable, we introduce the criteria constraints
Φn(a) ≤ Φn**, n = 1, ..., k, (3)
where Φn** is the worst value of criterion Φn(a) to which the expert may agree.
The criteria constraints differ from the functional constraints in that the former are determined during the solution of the problem and, as a rule, are repeatedly revised. Hence, unlike aj*, aj** and Cl*, Cl**, reasonable values of Φn** cannot be chosen before solving the problem.
Constraints (1)–(3) define the feasible solution set D.
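The membership test implied by constraints (1)–(3) can be sketched directly. The following minimal Python sketch is illustrative only; the function names, the toy constraint functions, and all numeric bounds are assumptions, not part of the original formulation.

```python
# Sketch of testing membership in the feasible set D defined by (1)-(3).
# All names and sample constraints are illustrative assumptions.

def is_feasible(a, box, functionals, criteria, crit_limits):
    """Check constraints (1)-(3) for a design-variable vector a.

    box         -- list of (aj*, aj**) bounds, one pair per variable
    functionals -- list of (f_l, Cl*, Cl**) triples
    criteria    -- list of criterion functions Phi_n
    crit_limits -- list of worst acceptable values Phi_n**
    """
    # (1) design variable constraints: a lies in the parallelepiped P
    if not all(lo <= aj <= hi for aj, (lo, hi) in zip(a, box)):
        return False
    # (2) functional constraints
    if not all(lo <= f(a) <= hi for f, lo, hi in functionals):
        return False
    # (3) criteria constraints
    return all(phi(a) <= lim for phi, lim in zip(criteria, crit_limits))

# Toy usage: two variables, one functional constraint, two criteria.
box = [(0.0, 1.0), (0.0, 1.0)]
functionals = [(lambda a: a[0] + a[1], 0.2, 1.5)]
criteria = [lambda a: a[0] ** 2, lambda a: (1.0 - a[1]) ** 2]
limits = [0.5, 0.5]
print(is_feasible([0.3, 0.6], box, functionals, criteria, limits))
```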
Let us formulate one of the basic problems of multicriteria optimization: it is necessary to find a set P ⊂ D for which
Φ(P) = min_{a ∈ D} Φ(a), (4)
where Φ(a) = (Φ1(a), ..., Φk(a)) is the criterion vector and P is the Pareto optimal set.
Definition 1. A point a0 ∈ D is called a Pareto optimal point if there exists no point a ∈ D such that Φn(a) ≤ Φn(a0) for all n = 1, ..., k and Φn0(a) < Φn0(a0) for at least one n0 ∈ {1, ..., k}.
A set P ⊂ D is called the Pareto optimal set if it consists of Pareto optimal points. When solving the problem, one has to determine a design variable vector a0 ∈ P that is most preferable among the vectors belonging to the set P.
The Pareto optimal set plays an important role in vector optimization problems because it can be analyzed more easily than the feasible solution set and because the optimal vector always belongs to the Pareto optimal set, irrespective of the system of preferences used by the expert for comparing vectors belonging to the feasible solution set. Thus, when solving a multicriteria optimization problem, one always has to find the set of Pareto optimal solutions.
The features of the problems under consideration make it necessary to represent vectors by points of uniformly distributed sequences, or Pτ nets, in the space of design variables [2].
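The role of such sequences is to fill the parallelepiped P far more evenly than pseudo-random sampling does. As a simple stand-in for the Pτ nets referenced above, the following sketch generates trial points from a Halton sequence; the function names and base choices are illustrative assumptions.

```python
# Sketch: trial points from a uniformly distributed (low-discrepancy)
# sequence. The text uses Sobol P_tau nets; a Halton sequence serves here
# as a simple stand-in with the same role.

def halton(i, base):
    """i-th element (i >= 1) of the van der Corput sequence in the given base."""
    x, f = 0.0, 1.0
    while i > 0:
        f /= base
        x += f * (i % base)
        i //= base
    return x

def trial_points(n, box, bases=(2, 3, 5, 7, 11, 13)):
    """n points quasi-uniformly filling the parallelepiped given by box."""
    r = len(box)
    pts = []
    for i in range(1, n + 1):
        u = [halton(i, bases[j]) for j in range(r)]
        pts.append([lo + (hi - lo) * uj for uj, (lo, hi) in zip(u, box)])
    return pts

pts = trial_points(8, [(0.0, 1.0), (-1.0, 1.0)])
```

In practice Sobol sequences (e.g. scipy.stats.qmc.Sobol) are preferable in higher dimensions; the Halton construction is used here only because it fits in a few lines.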
The Correct Definition of the Criteria Constraints
The feasible set, in its turn, depends on the correct definition of the criteria constraints Φn**. The values of these constraints must be the maximum of all admissible values of the respective criteria; otherwise, since the criteria may be contradictory, many feasible solutions may be lost. Thus, the solution of a multicriteria optimization problem is to a considerable extent reduced to the correct definition of Φn**. The Parameter Space Investigation (PSI) method allows the correct determination of Φn** and, hence, of the feasible solution set. The PSI method involves the following three stages [1]: compilation of test tables (trial points are generated in the parallelepiped P and the criteria are calculated at each point); selection of the criteria constraints Φn** by the expert on the basis of the test tables; and verification of the solvability of the problem (checking that the feasible solution set D is nonempty).
The Pareto optimal set P is then constructed in accordance with the definition presented above: a feasible point is removed if some other feasible point is at least as good in all the criteria and strictly better in at least one of them.
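This removal step is a direct application of Definition 1 and can be sketched as follows (all names are illustrative; criteria are minimized):

```python
# Sketch of Definition 1: keep a point only if no other point is at least
# as good in every criterion and strictly better in at least one.

def dominates(u, v):
    """True if criterion vector u dominates v (minimization)."""
    return (all(ui <= vi for ui, vi in zip(u, v))
            and any(ui < vi for ui, vi in zip(u, v)))

def pareto_filter(crit_vectors):
    """Return the Pareto optimal subset of a list of criterion vectors."""
    return [u for u in crit_vectors
            if not any(dominates(v, u) for v in crit_vectors)]

pts = [(1.0, 4.0), (2.0, 3.0), (3.0, 3.0), (4.0, 1.0)]
print(pareto_filter(pts))  # (3.0, 3.0) is dominated by (2.0, 3.0)
```

The quadratic pairwise scan suffices for the modest point counts typical of test tables; faster non-dominated sorting exists for large sets.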
Let us describe the procedure for constructing the maximum feasible solution set. As a rule, the expert may set Φn** equal to a criterion value whose feasibility is beyond doubt.
If the selected values of Φn** are not the maximum ones, then one is not sure whether values of Φn(a) exceeding Φn** are feasible. In this case one has to construct the feasible solution set D for the constraints Φn** and the corresponding Pareto optimal set P. Further, the set D̃ is constructed for relaxed constraints Φ̃n** > Φn**, n = 1, ..., k, as well as the corresponding Pareto optimal set P̃. Let us compare Φ(P) and Φ(P̃). If the vectors belonging to Φ(P̃) do not substantially improve on the vectors from Φ(P), then one may retain the constraints Φn**. Otherwise, if the improvement is significant, the criteria constraints may be set equal to Φ̃n**. In this case one has to make sure that the optimal solutions thus obtained are feasible. If the expert is unable to do this, then the criteria constraints are restored to their previous values Φn**. This scheme can be applied for all possible values of Φn** and Φ̃n** [3].
After analyzing P, the expert finds the most preferred solution Φ(a0). As already noted, for the problems under consideration, experts do not encounter serious difficulties in analyzing the Pareto optimal set and choosing the most preferred solution. Thus, the PSI method has proved to be a very convenient and effective tool for the expert.
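The three PSI stages described above can be condensed into a toy driver. Everything below is an illustrative assumption (the criteria, bounds, trial count, and limits are invented for the example), and pseudo-random sampling stands in for a Pτ net purely for brevity:

```python
# Compact sketch of the three PSI stages on a toy two-criterion problem.
# Pseudo-random sampling is used for brevity; PSI proper uses P_tau nets.
import random

random.seed(0)
box = [(0.0, 1.0), (0.0, 1.0)]
phi = [lambda a: a[0], lambda a: 1.0 - a[1]]   # two criteria to minimize

# Stage 1: generate trial points and compile the test table of criterion values.
trials = [[random.uniform(lo, hi) for lo, hi in box] for _ in range(64)]
table = [(a, [p(a) for p in phi]) for a in trials]

# Stage 2: the expert inspects the table and fixes criteria constraints Phi**.
limits = [0.6, 0.6]

# Stage 3: verify solvability -- the feasible set D must be nonempty;
# otherwise the expert revises the limits and repeats.
D = [(a, f) for a, f in table if all(fn <= ln for fn, ln in zip(f, limits))]
print(len(D) > 0)
```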
Approximation of the Feasible Set
The algorithm discussed in [3] allows simple and efficient identification and selection of feasible points from the design variable space. However, the following question arises: how can one use the algorithm to construct a feasible solution set D with a given accuracy? The latter is constructed by singling out a subset of D that approaches any value of each criterion in the region Φ(D) with a predetermined accuracy.
Let en be an admissible (in the expert's opinion) error in criterion Φn. By e we denote the error set {en}, n = 1, ..., k. We will say that the region Φ(D) is approximated by a finite set Φ(De) with an accuracy up to the set e if, for any vector a ∈ D, there can be found a vector b ∈ De such that
|Φn(a) − Φn(b)| ≤ en, n = 1, ..., k.
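This approximation test is easy to state as code. The sketch below checks whether a finite set B = Φ(De) approximates a set A = Φ(D) to within the error set e; the names are illustrative assumptions:

```python
# Sketch of the e-approximation test: every criterion vector of A must have
# a representative in B that is within e_n in each criterion.

def approximates(A, B, e):
    """True if every vector in A is matched by some vector in B within e."""
    return all(
        any(all(abs(an - bn) <= en for an, bn, en in zip(a, b, e)) for b in B)
        for a in A
    )

A = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
B = [(0.1, 0.9), (0.9, 0.1)]
print(approximates(A, B, [0.2, 0.2]))  # False: (0.5, 0.5) has no close match
print(approximates(A, B, [0.5, 0.5]))  # True
```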
We assume that the functions we shall be operating with are continuous and satisfy the Lipschitz condition (L), formulated as follows: for all vectors a and b belonging to the domain of definition of the criterion Φn, there exists a number Ln such that
|Φn(a) − Φn(b)| ≤ Ln ‖a − b‖.
In other words, there exists L = max{L1, ..., Lk} such that |Φn(a) − Φn(b)| ≤ L ‖a − b‖ for all n = 1, ..., k.
We will say that a function Φn(a) satisfies the special Lipschitz condition (SL) if for all vectors a and b there exist numbers Lnj, j = 1, ..., r, such that
|Φn(a) − Φn(b)| ≤ Σj Lnj |aj − bj|,
where at least some of the Lnj are different.
Let [Ln] (or [Lnj]) be a dyadic rational number exceeding Ln (or Lnj) and sufficiently close to the latter, and let [en] be the maximum dyadic rational number that is less than or equal to en and whose denominator is the same as that of [Ln] (or [Lnj]). A dyadic number is a number of the form p/2^m, where p and m are natural numbers.
Theorem 1. If the criteria Φn(a) are continuous and satisfy either the Lipschitz condition or the special Lipschitz condition, then to approximate Φ(D) to within an accuracy of e it is sufficient to have a finite number N of points of the Pτ net, where N is determined by the dyadic numbers [Ln] (or [Lnj]) and [en]. For the explicit expressions for N, see [2].
The number of points needed to calculate the performance criteria in this estimate may be so large that the speed of present-day computers may prove to be inadequate. This difficulty may be overcome by developing “fast” algorithms dealing not with an entire class of functions but instead taking into account the features of the functions of each concrete problem.
To approximate a feasible region Φ(D), such an algorithm may be constructed in the following way. (Although all subsequent considerations presume that the Lipschitz condition is satisfied, they are valid as well for the special Lipschitz condition if the constant Ln is replaced by the constants Lnj.)
Let the Lipschitz constants Ln, n = 1, ..., k, be specified, and let N1 be the subset of the points of D that are either Pareto optimal points or lie within the e-neighborhood of a Pareto optimal point with respect to at least one criterion; in other words, Φn(a0) ≤ Φn(a) ≤ Φn(a0) + en, where a0 ∈ P and P is the Pareto optimal set. Also, let N2 = D \ N1, and let ẽ = {ẽn} denote a coarser set of admissible errors, ẽn ≥ en.
Definition 2. A feasible solution set Φ(D) is said to be normally approximated if any point of the set N1 is approximated to within an accuracy of e, and any point of the set N2 to within an accuracy of ẽ.
Theorem 2. There exists a normal approximation Φ(De) of a feasible solution set Φ(D) [3].
Approximation of the Pareto Optimal Set
Since the Pareto optimal set is unstable, even slight errors in calculating the criteria Φn(a) may lead to a drastic change in the set. This implies that by approximating a feasible solution set with a given accuracy we cannot guarantee an appropriate approximation of the Pareto optimal set. Although the problem has been tackled since the 1950s, a complete solution acceptable for the majority of practical problems is still to be obtained. Nevertheless, promising methods have been proposed for some classes of functions.
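The instability is easy to exhibit numerically: a perturbation far smaller than any practical tolerance can change which points are Pareto optimal. The data below are invented purely for illustration:

```python
# Tiny illustration of Pareto-set instability: a perturbation of one
# criterion value by 1e-9 changes the Pareto optimal set.

def pareto(points):
    def dom(u, v):
        return (all(x <= y for x, y in zip(u, v))
                and any(x < y for x, y in zip(u, v)))
    return [u for u in points if not any(dom(v, u) for v in points)]

exact = [(1.0, 2.0), (1.0, 1.0)]            # (1, 2) is dominated by (1, 1)
eps = 1e-9
perturbed = [(1.0 - eps, 2.0), (1.0, 1.0)]  # now neither point dominates

print(pareto(exact))      # only (1.0, 1.0)
print(pareto(perturbed))  # both points survive
```

This is precisely why approximating Φ(D) to accuracy e does not by itself yield an e-approximation of Φ(P).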
Let P be the Pareto optimal set in the design variable space, let Φ(P) be its image, and let e be a set of admissible errors. It is desired to construct a finite Pareto optimal set Φ(Pe) approximating Φ(P) to within an accuracy of e.
Let Φ(De) be the e-approximation of Φ(D), and let Pe be the Pareto optimal subset of De. As has already been mentioned, the complexity of constructing a finite approximation of the Pareto optimal set results from the fact that, in general, in approximating the feasible solution set Φ(D) by a finite set Φ(De) to within an accuracy of e, one cannot achieve the approximation of Φ(P) with the same accuracy. Such problems are said to be ill-posed in the sense of Tikhonov [4]. Although this notion is routinely used in computational mathematics, let us recall it here.
Let P be a functional on the space X, P: X → Y. We suppose that there exists y* = inf P(x), and let Ve(y*) be the e-neighborhood of the desired solution y*. Let us single out an element x* (or a set of elements) in the space X and its d-neighborhood Vd(x*), and call an element x a solution to the problem of finding the extremum of P if it simultaneously satisfies the conditions x ∈ Vd(x*) and P(x) ∈ Ve(y*). If at least one of these conditions cannot be satisfied for arbitrary values of e and d, then the problem is called ill-posed (in the sense of Tikhonov).
An analogous definition may be formulated for the case when P is an operator mapping space X into space Y. Let us set
X = {Φ(De), Φ(D)}; Y = {Φ(Pe), Φ(P)},
where e > 0, and let P: X → Y be an operator relating any element of X to its Pareto optimal subset. Then, in accordance with what was said before, the problem of constructing sets Φ(De) and Φ(Pe) belonging simultaneously to the e-neighborhoods of Φ(D) and Φ(P), respectively, is ill-posed. Of course, in the spaces X and Y, the metric or topology that corresponds to the system of preferences on Φ(D) must be specified [5].
Let us define the Ve-neighborhood of a point Φ(a0) ∈ Φ(P) as the set of criterion vectors Φ(b) such that |Φn(b) − Φn(a0)| < en, n = 1, ..., k.
In Theorem 3, we have to construct a Pareto optimal set Φ(Pe) such that for any point Φ(a0) ∈ Φ(P) and any of its e-neighborhoods Ve, there may be found a point Φ(b) ∈ Φ(Pe) belonging to Ve. Conversely, in the e-neighborhood of any point Φ(b) ∈ Φ(Pe), there must exist a point Φ(a0) ∈ Φ(P). A set Φ(Pe) with these properties is called an approximation possessing property M. Suppose that Φ(De), an approximation of Φ(D), has been constructed.
Theorem 3. If the conditions of Theorem 1 are satisfied, then there exists an approximation Φ(Pe) of the Pareto set Φ(P) possessing the M-property.
The theorem is proved by analyzing the neighborhoods of the so-called "suspicious" points of Φ(De), that is, the points in whose neighborhoods true Pareto optimal vectors may lie. If new Pareto optimal vectors are found in the neighborhoods of the "suspicious" points, these vectors are added to Φ(Pe). Taken together with Φ(Pe), they form the e-approximation of the Pareto optimal set [5].
In [5] it is shown that this approach solves the problem of the ill-posedness (in the sense of Tikhonov) of the Pareto optimal set approximation.