Constrained optimization

In mathematical optimization, constrained optimization (in some contexts called constraint optimization) is the process of optimizing an objective function with respect to some variables in the presence of constraints on those variables. The objective function is either a cost function or energy function, which is to be minimized, or a reward function or utility function, which is to be maximized. Constraints can be either hard constraints, which set conditions for the variables that are required to be satisfied, or soft constraints, which have some variable values that are penalized in the objective function if, and based on the extent that, the conditions on the variables are not satisfied.

Relation to constraint-satisfaction problems

The constrained-optimization problem (COP) is a significant generalization of the classic constraint-satisfaction problem (CSP) model.[1] COP is a CSP that includes an objective function to be optimized. Many algorithms are used to handle the optimization part.

General form

A general constrained minimization problem may be written as follows:[2]

\[
\begin{array}{rll}
\min & f(\mathbf{x}) & \\
\text{subject to} & g_i(\mathbf{x}) = c_i & \text{for } i = 1, \ldots, n \quad \text{(equality constraints)} \\
 & h_j(\mathbf{x}) \geq d_j & \text{for } j = 1, \ldots, m \quad \text{(inequality constraints)}
\end{array}
\]

where $g_i(\mathbf{x}) = c_i$ for $i = 1, \ldots, n$ and $h_j(\mathbf{x}) \geq d_j$ for $j = 1, \ldots, m$ are constraints that are required to be satisfied (these are called hard constraints), and $f(\mathbf{x})$ is the objective function that needs to be optimized subject to the constraints.

In some problems, often called constraint optimization problems, the objective function is actually the sum of cost functions, each of which penalizes the extent (if any) to which a soft constraint (a constraint which is preferred but not required to be satisfied) is violated.

Solution methods

Many constrained optimization algorithms can be adapted to the unconstrained case, often via the use of a penalty method. However, search steps taken by the unconstrained method may be unacceptable for the constrained problem, leading to a lack of convergence. This is referred to as the Maratos effect.[3]
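As an illustration, here is a minimal sketch of a quadratic penalty method in Python; the toy problem, the penalty schedule, and the use of SciPy's general-purpose minimizer are assumptions for the example, not part of the article.

```python
# Minimal quadratic-penalty sketch: minimize x1^2 + x2^2 subject to x1 + x2 = 1
# by solving a sequence of unconstrained problems with a growing penalty weight.
import numpy as np
from scipy.optimize import minimize

def f(x):
    return x[0]**2 + x[1]**2            # objective to minimize

def g(x):
    return x[0] + x[1] - 1.0            # equality constraint, g(x) = 0 when satisfied

x0 = np.zeros(2)
for mu in (1.0, 10.0, 100.0, 1000.0):    # stiffen the penalty gradually
    penalized = lambda x, mu=mu: f(x) + mu * g(x)**2
    x0 = minimize(penalized, x0).x       # warm-start each unconstrained solve
print(x0)                                # approaches the constrained optimum (0.5, 0.5)
```

Each unconstrained solve only satisfies the constraint approximately; increasing the penalty weight drives the iterates toward the constrained optimum.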

Equality constraints

Substitution method

For very simple problems, say a function of two variables subject to a single equality constraint, it is most practical to apply the method of substitution.[4] The idea is to substitute the constraint into the objective function to create a composite function that incorporates the effect of the constraint. For example, assume the objective is to maximize $f(x, y) = x \cdot y$ subject to $x + y = 10$. The constraint implies $y = 10 - x$, which can be substituted into the objective function to create $p(x) = x(10 - x) = 10x - x^2$. The first-order necessary condition gives $\frac{\partial p}{\partial x} = 10 - 2x = 0$, which can be solved for $x = 5$ and, consequently, $y = 10 - 5 = 5$.
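The same computation can be sketched symbolically; a minimal example using SymPy (the library choice is an assumption for illustration, not part of the article):

```python
# Substitution method for: maximize x*y subject to x + y = 10
import sympy as sp

x = sp.symbols('x')
y = 10 - x                               # substitute the constraint y = 10 - x
p = sp.expand(x * y)                     # composite objective p(x) = 10x - x^2
critical = sp.solve(sp.diff(p, x), x)    # first-order condition dp/dx = 0
print(critical, [10 - c for c in critical])   # -> [5] [5]
```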

Lagrange multiplier

If the constrained problem has only equality constraints, the method of Lagrange multipliers can be used to convert it into an unconstrained problem whose number of variables is the original number of variables plus the original number of equality constraints. Alternatively, if the constraints are all equality constraints and are all linear, they can be solved for some of the variables in terms of the others, and the former can be substituted out of the objective function, leaving an unconstrained problem in a smaller number of variables.
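A minimal symbolic sketch of the Lagrangian conversion; the concrete problem and the use of SymPy are assumptions for illustration:

```python
# Lagrange multipliers for: minimize x^2 + y^2 subject to x + y = 1
import sympy as sp

x, y, lam = sp.symbols('x y lambda')
L = x**2 + y**2 + lam * (x + y - 1)      # unconstrained Lagrangian in 2 + 1 variables
stationary = sp.solve([sp.diff(L, v) for v in (x, y, lam)], (x, y, lam), dict=True)
print(stationary)                         # -> [{x: 1/2, y: 1/2, lambda: -1}]
```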

Inequality constraints

With inequality constraints, the problem can be characterized in terms of the geometric optimality conditions, Fritz John conditions and Karush–Kuhn–Tucker conditions, under which simple problems may be solvable.

Linear programming

If the objective function and all of the hard constraints are linear and some hard constraints are inequalities, then the problem is a linear programming problem. This can be solved by the simplex method, which usually works in polynomial time in the problem size but is not guaranteed to, or by interior point methods which are guaranteed to work in polynomial time.
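As a minimal illustration (the toy problem is an assumption for the example), a small linear program can be handed to a standard LP routine:

```python
# Small LP: maximize x + 2y subject to x + y <= 4, x <= 3, x >= 0, y >= 0
# (linprog minimizes, so the objective is negated).
from scipy.optimize import linprog

c = [-1, -2]
A_ub = [[1, 1], [1, 0]]                   # inequality constraints A_ub @ x <= b_ub
b_ub = [4, 3]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)                     # optimum x = 0, y = 4, objective value -8
```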

Nonlinear programming

If the objective function or some of the constraints are nonlinear, and some constraints are inequalities, then the problem is a nonlinear programming problem.
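A minimal sketch of a small nonlinear program; the toy problem and the choice of SciPy's SLSQP solver are assumptions for illustration:

```python
# Small NLP: minimize (x - 1)^2 + (y - 2)^2 subject to x^2 + y^2 <= 1.
from scipy.optimize import minimize

obj = lambda v: (v[0] - 1)**2 + (v[1] - 2)**2
cons = [{'type': 'ineq', 'fun': lambda v: 1 - v[0]**2 - v[1]**2}]   # must be >= 0
res = minimize(obj, [0.0, 0.0], method='SLSQP', constraints=cons)
print(res.x)                              # roughly (1, 2)/sqrt(5), on the unit circle
```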

Quadratic programming

If all the hard constraints are linear and some are inequalities, but the objective function is quadratic, the problem is a quadratic programming problem. It is one type of nonlinear programming. It can still be solved in polynomial time by the ellipsoid method if the objective function is convex; otherwise the problem may be NP-hard.
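In the special case with only equality constraints and a convex quadratic objective, the optimum can be read off a single linear system; a minimal sketch (the concrete numbers are assumptions for illustration):

```python
# Equality-constrained convex QP: minimize 1/2 x^T Q x + c^T x subject to A x = b,
# solved directly via its KKT linear system [[Q, A^T], [A, 0]] [x; lam] = [-c; b].
import numpy as np

Q = np.array([[2.0, 0.0], [0.0, 2.0]])    # positive-definite quadratic term
c = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]])                # constraint x1 + x2 = 1
b = np.array([1.0])

K = np.block([[Q, A.T], [A, np.zeros((1, 1))]])
rhs = np.concatenate([-c, b])
sol = np.linalg.solve(K, rhs)
print(sol[:2], sol[2:])                   # optimal x = (0, 1), multiplier = 2
```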

KKT conditions

Allowing inequality constraints, the KKT approach to nonlinear programming generalizes the method of Lagrange multipliers. It can be applied under differentiability and convexity.
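For the general form above, the KKT conditions at a candidate minimizer $\mathbf{x}^*$ can be sketched as follows, with multipliers $\lambda_i$ attached to the equality constraints and $\mu_j$ to the inequality constraints (the sign conventions assume the constraints are written exactly as in the general form):

$\nabla f(\mathbf{x}^*) = \sum_{i=1}^{n} \lambda_i \nabla g_i(\mathbf{x}^*) + \sum_{j=1}^{m} \mu_j \nabla h_j(\mathbf{x}^*)$ (stationarity),
$g_i(\mathbf{x}^*) = c_i$ and $h_j(\mathbf{x}^*) \geq d_j$ (primal feasibility),
$\mu_j \geq 0$ (dual feasibility), and
$\mu_j \left( h_j(\mathbf{x}^*) - d_j \right) = 0$ (complementary slackness).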

Branch and bound

Constraint optimization can be solved by branch-and-bound algorithms. These are backtracking algorithms storing the cost of the best solution found during execution and using it to avoid part of the search. More precisely, whenever the algorithm encounters a partial solution that cannot be extended to form a solution of better cost than the stored best cost, the algorithm backtracks, instead of trying to extend this solution.

Assuming that cost is to be maximized, the efficiency of these algorithms depends on how the cost that can be obtained from extending a partial solution is evaluated. Indeed, if the algorithm can backtrack from a partial solution, part of the search is skipped. The lower the estimated cost, the better the algorithm, as a lower estimated cost is more likely to be lower than the best cost of solution found so far.

On the other hand, this estimated cost cannot be lower than the effective cost that can be obtained by extending the solution, as otherwise the algorithm could backtrack while a solution better than the best found so far exists. As a result, the algorithm requires an upper bound on the cost that can be obtained from extending a partial solution, and this upper bound should be as small as possible.
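As an illustration, here is a minimal branch-and-bound sketch in Python. It assumes cost is to be maximized, the variables are binary, and the cost is simply a sum of per-variable utilities, so that an upper bound is obtained by taking the better value of every unassigned variable; the encoding is an assumption for the example only.

```python
# Branch and bound for a toy maximization problem: each binary variable x_i
# contributes utilities[i][x_i]; the bound takes the best value of every
# still-unassigned variable, so it never underestimates the achievable cost.
def branch_and_bound(utilities):
    n = len(utilities)
    best = {'cost': float('-inf'), 'assignment': None}

    def upper_bound(i, cost):
        return cost + sum(max(u) for u in utilities[i:])

    def search(i, assignment, cost):
        if upper_bound(i, cost) <= best['cost']:
            return                                  # prune: cannot beat the stored best
        if i == n:
            best['cost'], best['assignment'] = cost, assignment[:]
            return
        for value in (0, 1):                        # branch on variable i
            assignment.append(value)
            search(i + 1, assignment, cost + utilities[i][value])
            assignment.pop()

    search(0, [], 0)
    return best

print(branch_and_bound([(1, 3), (2, 0), (5, 4)]))    # {'cost': 10, 'assignment': [1, 0, 0]}
```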

A variation of this approach called Hansen's method uses interval methods.[5] It inherently implements rectangular constraints.

First-choice bounding functions

One way for evaluating this upper bound for a partial solution is to consider each soft constraint separately. For each soft constraint, the maximal possible value for any assignment to the unassigned variables is assumed. The sum of these values is an upper bound because the soft constraints cannot assume a higher value. It is not exact because the maximal values of soft constraints may derive from different evaluations: a soft constraint may be maximal for $x = a$ while another constraint is maximal for $x = b$.
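A minimal sketch of this bound; representing soft constraints as Python functions over full assignments, as well as the toy example at the end, are assumptions for illustration:

```python
# First-choice bound: maximize each soft constraint independently over all
# completions of the unassigned variables, then sum the per-constraint maxima.
from itertools import product

def first_choice_bound(partial, unassigned, domains, soft_constraints):
    bound = 0
    for c in soft_constraints:
        completions = product(*(domains[v] for v in unassigned))
        bound += max(c({**partial, **dict(zip(unassigned, values))})
                     for values in completions)
    return bound

# Toy example: one constraint prefers x = 0, the other prefers x = 1.
domains = {'x': (0, 1), 'y': (0, 1)}
c1 = lambda a: 1 if a['x'] == 0 else 0
c2 = lambda a: 1 if a['x'] == 1 else 0
print(first_choice_bound({}, ['x', 'y'], domains, [c1, c2]))
# -> 2, although no single assignment reaches 2: the bound is valid but not exact
```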

Russian doll search

This method[6] runs a branch-and-bound algorithm on $n$ problems, where $n$ is the number of variables. Each such problem is the subproblem obtained by dropping a sequence of variables $x_1, \ldots, x_i$ from the original problem, along with the constraints containing them. After the problem on variables $x_{i+1}, \ldots, x_n$ is solved, its optimal cost can be used as an upper bound while solving the other problems.

In particular, the cost estimate of a solution having $x_{i+1}, \ldots, x_n$ as unassigned variables is added to the cost that derives from the evaluated variables. Virtually, this corresponds to ignoring the evaluated variables and solving the problem on the unassigned ones, except that the latter problem has already been solved. More precisely, the cost of soft constraints containing both assigned and unassigned variables is estimated as above (or using an arbitrary other method); the cost of soft constraints containing only unassigned variables is instead estimated using the optimal solution of the corresponding problem, which is already known at this point.
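A minimal sketch of the outer loop only; the helpers `subproblem` and `branch_and_bound` are hypothetical placeholders, with the latter assumed to use the previously computed optima as bounds in the way described above:

```python
# Russian doll search outer loop: solve the nested subproblems on variables
# x_{i+1}, ..., x_n from the smallest to the largest, reusing earlier optima
# as bounds. `subproblem` and `branch_and_bound` are hypothetical helpers.
def russian_doll_search(n, subproblem, branch_and_bound):
    optima = {}                          # optimal cost of each solved subproblem
    for i in range(n - 1, -1, -1):       # i = n-1 gives the problem on x_n alone
        optima[i] = branch_and_bound(subproblem(i), known_optima=optima)
    return optima[0]                     # optimum of the original problem (i = 0)
```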

There is similarity between the Russian Doll Search method and dynamic programming. Like dynamic programming, Russian Doll Search solves sub-problems in order to solve the whole problem. But, whereas dynamic programming directly combines the results obtained on sub-problems to get the result of the whole problem, Russian Doll Search only uses them as bounds during its search.

Bucket elimination

The bucket elimination algorithm can be adapted for constraint optimization. A given variable can indeed be removed from the problem by replacing all soft constraints containing it with a new soft constraint. The cost of this new constraint is computed assuming a maximal value for every value of the removed variable. Formally, if $x$ is the variable to be removed, $C_1, \ldots, C_n$ are the soft constraints containing it, and $y_1, \ldots, y_m$ are their variables except $x$, the new soft constraint is defined by:

\[
C(y_1 = a_1, \ldots, y_m = a_m) = \max_{a} \sum_{i} C_i(x = a, y_1 = a_1, \ldots, y_m = a_m).
\]
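A minimal sketch of this elimination step; the table-based representation of soft constraints is an assumption for illustration:

```python
# Eliminate variable x by merging all soft constraints containing it into one
# new constraint over the remaining variables ys, maximizing over x's values.
from itertools import product

def eliminate(x, constraints, domains):
    # constraints: list of (scope, table) pairs, where scope is a tuple of
    # variable names and table maps a tuple of their values to a cost
    ys = sorted({v for scope, _ in constraints for v in scope if v != x})
    new_table = {}
    for assignment in product(*(domains[v] for v in ys)):
        env = dict(zip(ys, assignment))
        new_table[assignment] = max(
            sum(table[tuple({**env, x: a}[v] for v in scope)]
                for scope, table in constraints)
            for a in domains[x]
        )
    return tuple(ys), new_table
```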

Bucket elimination works with an (arbitrary) ordering of the variables. Every variable is associated with a bucket of constraints; the bucket of a variable contains all constraints having the variable as the highest in the order. Bucket elimination proceeds from the last variable to the first. For each variable, all constraints of its bucket are replaced as above to remove the variable. The resulting constraint is then placed in the appropriate bucket.

See also

  • Constrained least squares
  • Distributed constraint optimization
  • Constraint satisfaction problem (CSP)
  • Constraint programming
  • Integer programming
  • Penalty method
  • Superiorization

References

  1. ^ Rossi, Francesca; van Beek, Peter; Walsh, Toby (2006-01-01), Rossi, Francesca; van Beek, Peter; Walsh, Toby (eds.), "Chapter 1 – Introduction", Foundations of Artificial Intelligence, Handbook of Constraint Programming, Elsevier, vol. 2, pp. 3–12, doi:10.1016/s1574-6526(06)80005-2, retrieved 2019-10-04
  2. ^ Martins, J. R. R. A.; Ning, A. (2021). Engineering Design Optimization. Cambridge University Press. ISBN 978-1108833417.
  3. ^ Wenyu Sun; Ya-Xiang Yuan (2010). Optimization Theory and Methods: Nonlinear Programming. Springer. ISBN 978-1441937650. p. 541.
  4. ^ Prosser, Mike (1993). "Constrained Optimization by Substitution". Basic Mathematics for Economists. New York: Routledge. pp. 338–346. ISBN 0-415-08424-5.
  5. ^ Leader, Jeffery J. (2004). Numerical Analysis and Scientific Computation. Addison Wesley. ISBN 0-201-73499-0.
  6. ^ Verfaillie, Gérard; Lemaître, Michel; Schiex, Thomas. "Russian doll search for solving constraint optimization problems." AAAI/IAAI, Vol. 1. 1996.

Further reading

  • Bertsekas, Dimitri P. (1982). Constrained Optimization and Lagrange Multiplier Methods. New York: Academic Press. ISBN 0-12-093480-9.
  • Dechter, Rina (2003). Constraint Processing. Morgan Kaufmann. ISBN 1-55860-890-7.

Source: https://en.wikipedia.org/wiki/Constrained_optimization