Optimization (mathematics)

If the optimization is performed by selecting parameters for a given object structure, it is called parametric optimization; the problem of choosing the optimal structure itself is structural optimization.

The standard mathematical optimization problem is formulated as follows: among the elements $\vec{x}$ forming a set $\mathbb{X}$, find an element $\vec{x}^*$ that delivers the minimum value $f(\vec{x}^*)$ of a given function $f(\vec{x})$. To pose the optimization problem correctly, one must specify:

  1. The feasible set $\mathbb{X} = \{\vec{x} \mid g_i(\vec{x}) \leq 0,\ i = 1, \ldots, m\} \subset \mathbb{R}^n$;
  2. The objective function, a mapping $f\colon \mathbb{X} \to \mathbb{R}$;
  3. The search criterion (max or min).
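
These three ingredients can be written down directly in code. A minimal sketch with an invented objective and constraints: the feasible set is given by constraint functions $g_i(x) \leq 0$, the objective is $f$, and the criterion is minimization, applied here by a naive grid scan rather than a real solver.

```python
# Minimal sketch of the three ingredients of an optimization problem.
# The objective and constraints below are invented for illustration.

def g1(x):
    return 1.0 - x          # g1(x) <= 0  <=>  x >= 1

def g2(x):
    return x - 3.0          # g2(x) <= 0  <=>  x <= 3

constraints = [g1, g2]      # feasible set X = {x | g_i(x) <= 0}

def f(x):                   # objective function f: X -> R
    return (x - 2.0) ** 2

def is_feasible(x, tol=1e-9):
    return all(g(x) <= tol for g in constraints)

# Criterion: minimize f over X. A naive grid scan stands in for a real solver.
candidates = [i / 100.0 for i in range(501)]      # 0.00 .. 5.00
x_star = min((x for x in candidates if is_feasible(x)), key=f)
print(x_star, f(x_star))    # minimizer and minimum value
```

Any real solver replaces the grid scan, but the problem data — the set $\mathbb{X}$, the objective $f$, and the criterion — stay the same.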

Then to solve the problem $f(\vec{x}) \to \min_{\vec{x} \in \mathbb{X}}$ means one of the following:

  1. Show that $\mathbb{X} = \varnothing$.
  2. Show that the objective function $f(\vec{x})$ is not bounded below.
  3. Find a minimizer $\vec{x}^* \in \mathbb{X}$ such that $f(\vec{x}^*) = \min_{\vec{x} \in \mathbb{X}} f(\vec{x})$.
  4. If no such $\vec{x}^*$ exists, find $\inf_{\vec{x} \in \mathbb{X}} f(\vec{x})$ (for example, $f(x) = e^x$ on $\mathbb{R}$ has infimum $0$ but no minimizer).

If the function being minimized is not convex, one is often content with searching for local minima and maxima: points $x_0$ such that everywhere in some neighborhood of $x_0$ one has $f(x) \geq f(x_0)$ for a minimum and $f(x) \leq f(x_0)$ for a maximum.
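
A short sketch of this situation on an invented non-convex function: plain gradient descent converges only to a local minimum, and the starting point decides which basin it lands in.

```python
# Gradient descent on an invented non-convex function: the method converges
# only to a local minimum; the starting point decides which basin it finds.

def f(x):
    return x**4 - 2.0 * x**2 + 0.2 * x

def df(x):                        # derivative of f
    return 4.0 * x**3 - 4.0 * x + 0.2

def gradient_descent(x, step=0.02, iters=2000):
    for _ in range(iters):
        x -= step * df(x)
    return x

left = gradient_descent(-1.0)     # global minimum, near x ~ -1.02
right = gradient_descent(+1.0)    # local minimum only, near x ~ 0.97
print(left, right)                # f(left) < f(right): the left basin is deeper
```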

If the feasible set is $\mathbb{X} = \mathbb{R}^n$, such a problem is called an unconstrained optimization problem; otherwise it is a constrained optimization problem.

Classification of optimization methods

The general statement of the optimization problem encompasses a wide variety of problem classes. The choice of method (and the efficiency of the solution) depends on the problem class. Problems are classified by the objective function and by the feasible region (defined by a system of inequalities and equalities, or by a more complex algorithm).

Optimization methods are classified according to the optimization problems they solve:

  • Local methods converge to some local extremum of the objective function. If the objective function is unimodal, this extremum is unique and is also the global minimum/maximum.
  • Global methods deal with multiextremal objective functions. In a global search, the main task is to identify trends in the overall behavior of the objective function.

Existing search methods can be divided into three large groups:

  1. deterministic;
  2. random (stochastic);
  3. combined.
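
The first two families can be contrasted on a toy problem (the function and evaluation budget are invented for illustration); a combined method would, for example, refine the best random point with a local descent.

```python
import random

# Deterministic vs. stochastic search for the minimum of f on [0, 1].
# The function and budget below are invented for illustration.

def f(x):
    return (x - 0.7) ** 2

def grid_search(n=1001):                 # deterministic: fixed evaluation points
    return min((i / (n - 1) for i in range(n)), key=f)

def random_search(n=1001, seed=0):       # stochastic: random evaluation points
    rng = random.Random(seed)
    return min((rng.random() for _ in range(n)), key=f)

x_det = grid_search()
x_rnd = random_search()
print(x_det, x_rnd)                      # both close to the minimizer 0.7
```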

According to the dimension of the feasible set, optimization methods are divided into methods of one-dimensional optimization and methods of multidimensional optimization.

According to the objective function and the feasible set, optimization problems and the methods for their solution can be divided into the following classes:

  • Optimization problems in which the objective function $f(\vec{x})$ and the constraints $g_i(\vec{x}),\ i = 1, \ldots, m$ are linear functions are solved by so-called linear programming methods.
  • Otherwise one deals with a nonlinear programming problem and applies the corresponding methods. Two special cases are distinguished in turn:
    • if $f(\vec{x})$ and $g_i(\vec{x}),\ i = 1, \ldots, m$ are convex functions, the problem is called a convex programming problem;
    • if $\mathbb{X} \subset \mathbb{Z}^n$, one deals with an integer (discrete) programming problem.
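
In the integer (discrete) case, when the feasible set is small it can even be enumerated outright. A sketch with invented data (for a continuous linear program one would use, e.g., the simplex method instead):

```python
import itertools

# Integer programming by brute-force enumeration (data invented for illustration):
# maximize 3x + 2y  subject to  x + y <= 4,  x <= 3,  x, y >= 0 integer.

def objective(p):
    x, y = p
    return 3 * x + 2 * y

feasible = [(x, y)
            for x, y in itertools.product(range(5), repeat=2)
            if x + y <= 4 and x <= 3]

best = max(feasible, key=objective)
print(best, objective(best))    # (3, 1) with value 11
```

Enumeration scales exponentially, which is exactly why dedicated integer-programming methods (branch and bound, cutting planes) exist; the sketch only shows what the problem asks for.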

According to the smoothness of the objective function and the availability of its partial derivatives, optimization methods can also be divided into:

  • direct methods, requiring only evaluations of the objective function at the trial points;
  • first-order methods, requiring the calculation of the first partial derivatives of the function;
  • second-order methods, requiring the calculation of the second partial derivatives, i.e. the Hessian of the objective function.
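
The three orders can be contrasted on a simple one-dimensional function (all parameters invented for illustration): a direct ternary search uses only $f$, gradient descent uses $f'$, and Newton's method uses $f''$ as well.

```python
# Zeroth-, first- and second-order methods on f(x) = (x - 2)^2 + 1.
# All parameters below are invented for illustration.

def f(x):
    return (x - 2.0) ** 2 + 1.0

def df(x):                  # first derivative
    return 2.0 * (x - 2.0)

def d2f(x):                 # second derivative (the 1-D "Hessian")
    return 2.0

def direct(lo=-10.0, hi=10.0, iters=60):     # direct: only f-evaluations
    for _ in range(iters):                   # ternary search on a unimodal f
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0

def grad_descent(x=10.0, step=0.4, iters=100):   # first order: uses df
    for _ in range(iters):
        x -= step * df(x)
    return x

def newton(x=10.0, iters=10):                    # second order: uses df and d2f
    for _ in range(iters):
        x -= df(x) / d2f(x)
    return x

print(direct(), grad_descent(), newton())   # all near the minimizer 2.0
```

On this quadratic, Newton's method lands on the exact minimizer in a single step, while the cheaper methods need many iterations — the usual trade-off between the cost and the power of derivative information.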

In addition, optimization methods are divided into the following groups:

  • analytical methods (for example, the Lagrange multiplier method and the Karush–Kuhn–Tucker conditions);
  • graphical methods.
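
As a worked illustration of the analytical group, the Lagrange multiplier method applied to a standard textbook example (not taken from this article): minimize $f(x, y) = x^2 + y^2$ subject to $x + y = 1$.

```latex
\begin{align*}
L(x, y, \lambda) &= x^2 + y^2 + \lambda (x + y - 1), \\
\frac{\partial L}{\partial x} &= 2x + \lambda = 0, \qquad
\frac{\partial L}{\partial y} = 2y + \lambda = 0
  \;\Rightarrow\; x = y, \\
x + y &= 1 \;\Rightarrow\; x^* = y^* = \tfrac{1}{2}, \qquad
f(x^*, y^*) = \tfrac{1}{2}.
\end{align*}
```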

Depending on the nature of the set $X$, mathematical programming problems are classified as:

  • discrete programming (or combinatorial optimization) problems, if $X$ is finite or countable;
  • integer programming problems, if $X$ is a subset of the set of integers;
  • nonlinear programming problems, if the constraints or the objective function contain nonlinear functions and $X$ is a subset of a finite-dimensional vector space;
  • linear programming problems, if all the constraints and the objective function contain only linear functions.

In addition, parametric programming, dynamic programming and stochastic programming are branches of mathematical programming.

Mathematical programming is used to solve optimization problems in operations research.

The method for finding an extremum is completely determined by the problem class. But before a mathematical model is obtained, four modeling stages must be carried out:

  • Defining the boundaries of the system being optimized
    • Discard those links between the object of optimization and the outside world that cannot strongly affect the optimization result or, more precisely, those without which the solution is simplified
  • Selecting the controlled variables
    • "Freeze" the values of some variables (the uncontrolled variables) and let the others (the controlled variables) take any values from the set of feasible solutions
  • Defining the constraints on the controlled variables
    • ... (equalities and/or inequalities)
  • Choosing a numerical optimization criterion (for example, a performance indicator)
    • Forming the objective function
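
The four stages can be made concrete on a toy production model (all names and numbers are invented for illustration): the system boundary keeps only one machine and two products, the controlled variables are the production amounts, the constraint is machine time, and the criterion is profit.

```python
# The four modelling stages on an invented toy production problem:
# 1) boundary: only one machine and two products are modelled;
# 2) controlled variables: production amounts x1, x2 (prices are "frozen");
# 3) constraints: machine time x1 + 2*x2 <= 8 and non-negativity;
# 4) criterion: maximize profit 3*x1 + 5*x2 (the objective function).

def profit(x1, x2):
    return 3.0 * x1 + 5.0 * x2

def feasible(x1, x2):
    return x1 >= 0.0 and x2 >= 0.0 and x1 + 2.0 * x2 <= 8.0

# A coarse grid scan stands in for a real LP solver in this sketch.
points = ((a / 10.0, b / 10.0) for a in range(81) for b in range(81))
best = max((p for p in points if feasible(*p)), key=lambda p: profit(*p))
print(best, profit(*best))
```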

History

Kantorovich, together with M. K. Gavurin, developed the method of potentials in 1949, which is applied to solving transportation problems. In subsequent works of Kantorovich, Nemchinov, V. V. Novozhilov, A. L. Lurie, A. Brudno, A. G. Aganbegyan, D. B. Yudin, E. G. Golshtein and other mathematicians and economists, both the mathematical theory of linear and nonlinear programming and the application of its methods to the study of various economic problems were developed further.

Many works of foreign scientists are devoted to linear programming methods. In 1941 F. L. Hitchcock posed the transportation problem. The main method for solving linear programming problems, the simplex method, was published by Dantzig in 1949. Methods of linear and nonlinear programming were developed further in the works of H. W. Kuhn, A. W. Tucker, Saul I. Gass, A. Charnes, E. M. Beale and others.

Simultaneously with the development of linear programming, much attention was paid to nonlinear programming problems, in which the objective function, the constraints, or both are nonlinear. In 1951 Kuhn and Tucker published a work giving necessary and sufficient optimality conditions for nonlinear programming problems. This work served as the basis for subsequent research in this area.

Beginning in 1955, many works devoted to quadratic programming were published (works of Beale, Barankin and R. Dorfman, M. Frank and P. Wolfe, H. Markowitz and others). The works of J. B. Dennis, J. B. Rosen and G. Zoutendijk developed gradient methods for solving nonlinear programming problems.

At present, algebraic modeling languages, representatives of which are AMPL and LINGO, are used for the effective application of mathematical programming methods and for solving problems on computers.


Literature

  • Abakarov A. Sh., Sushkov Yu. A. Statistical study of one global optimization algorithm. — Proceedings of FORA, 2004.
  • Akulich I. L. Mathematical Programming in Examples and Problems: textbook for students of economics specialties of universities. — M.: Vysshaya Shkola, 1986.
  • Gill P. E., Murray W., Wright M. H. Practical Optimization. Translated from English. — M.: Mir, 1985.
  • Girsanov I. V. Lectures on the Mathematical Theory of Extremal Problems. — M.; Izhevsk: NITs "Regular and Chaotic Dynamics", 2003. — 118 p. — ISBN 5-93972-272-5.
  • Zhiglyavsky A. A., Zhilinskas A. G. Methods of Seeking the Global Extremum. — M.: Nauka, Fizmatlit, 1991.
  • Karmanov V. G. Mathematical Programming. — Publishing house of physical and mathematical literature, 2004.
  • Korn G., Korn T. Mathematical Handbook for Scientists and Engineers. — M.: Nauka, 1970. — P. 575-576.
  • Korshunov Yu. M. Mathematical Foundations of Cybernetics. — M.: Energoatomizdat, 1972.
  • Maksimov Yu. A., Filippovskaya E. A. Algorithms for Solving Nonlinear Programming Problems. — M.: MEPhI, 1982.
  • Maksimov Yu. A. Linear and Discrete Programming Algorithms. — M.: MEPhI, 1980.
  • Plotnikov A. D. Mathematical Programming: an express course. — 2006. — P. 171. — ISBN 985-475-186-4.
  • Rastrigin L. A. Statistical Search Methods. — M., 1968.
  • Hamdy A. Taha. Operations Research: An Introduction. — 8th ed. — M.: Williams, 2007. — P. 912. — ISBN 0-13-032374-8.
  • Keeney R. L., Raiffa H. Decisions with Multiple Objectives: Preferences and Value Tradeoffs. — M.: Radio i Svyaz, 1981. — 560 p.
  • Zukhovitsky S. I., Avdeeva L. I. Linear and Convex Programming. — 2nd ed., revised and expanded. — M.: Nauka, 1967.
  • Bolonkin A. A. New Optimization Methods and Their Application. Brief lecture notes for the course "Theory of Optimal Systems". — M.: MVTU im. Baumana, 1972. — 220 p. — vixra.org/abs/1503.0081.

Links

  • B. T. Polyak. History of mathematical programming in the USSR: analysis of the phenomenon // Proceedings of the 14th Baikal School-Seminar "Optimization Methods and Their Applications". — 2008. — Vol. 1. — P. 2-20.


The most acceptable decision taken at the management level on any question is customarily called optimal, and the process of searching for it is called optimization.

The interdependence and complexity of the organizational, socio-economic, technical, and other aspects of production management currently reduce the making of a management decision to weighing a large number of diverse factors closely intertwined with one another, which makes it impossible to analyze each of them separately by traditional analytical methods.

Many of the factors that determine the decision-making process are by their nature not amenable to quantitative characterization; others are practically unchanging. Hence the need to develop special methods capable of ensuring the choice of important management decisions within complex organizational, economic, and technical problems (expert assessments, operations research, and optimization methods, among others).

Operations research methods are applied to find optimal solutions in such areas of management as the organization of production and transportation processes, large-scale production planning, and material and technical supply.

Methods for optimizing decisions consist in comparing numerical estimates of a number of factors whose analysis cannot be carried out by traditional methods. The optimal solution is the best of the possible variants for the economic system as a whole; a solution that is merely the most acceptable for individual elements of the system is called suboptimal.

Essence of operations research methods

As mentioned earlier, operations research methods form the basis of methods for optimizing management decisions. They rest on mathematical models, deterministic and probabilistic, that represent the process, activity, or system under study. A model of this kind provides a quantitative characterization of the corresponding problem and serves as the basis for making an important management decision in the search for the optimal option.

The questions that play a significant role for production managers and that are resolved in the course of applying the methods under consideration include:

  • the degree of validity of the selected solutions;
  • how much better they are than the alternatives;
  • the degree to which the determining factors are taken into account;
  • what the optimality criterion for the selected solutions is.

These methods for optimizing management decisions are aimed at finding optimal solutions for firms, companies, or their divisions as a whole. They are based on the existing achievements of statistical, mathematical, and economic disciplines (game theory, queueing theory, graph theory, optimal programming, mathematical statistics).

Methods of expert assessments

These methods of optimizing management decisions are used when the task cannot be formalized, in part or in full, and its solution cannot be found by mathematical methods.

Examination is the study of complex special questions at the stage of developing a particular management decision by persons who possess a special body of knowledge and impressive experience, undertaken in order to obtain conclusions, recommendations, opinions, and evaluations. In the process of expert research, the latest achievements of science and technology within the expert's specialization are applied.

Methods of optimizing management decisions by expert estimates are effective in solving the following management tasks in the field of production:

  1. The study of complex processes, phenomena, situations, and systems that are characterized by unformalized, qualitative characteristics.
  2. The ranking and determination, according to a given criterion, of the essential factors that determine the functioning and development of the production system.
  3. The optimization methods under consideration are particularly effective in predicting the development trends of the production system and its interaction with the external environment.
  4. Improving the reliability of expert assessments of target functions of both quantitative and qualitative nature by averaging the opinions of qualified specialists.

And these are only some of the methods for optimizing management decisions by expert assessment.

Classification of the methods under consideration

Methods for solving optimization problems can be divided, by the number of parameters, into:

  • one-dimensional optimization methods;
  • multidimensional optimization methods.

They are also called "numerical optimization methods"; to be precise, they are algorithms for searching for the extremum.

By their use of derivatives, the methods are:

  • direct optimization methods (zero order);
  • gradient methods (1st order);
  • methods of 2nd order, etc.

Most multidimensional optimization methods reduce the problem to solving a sequence of problems of the second group (one-dimensional optimization).

Methods of one-dimensional optimization

All numerical optimization methods are based on the exact or approximate calculation of characteristics such as the values of the target function, the functions defining the permissible set, and their derivatives. For each individual problem, the question of which characteristics it is expedient to calculate is resolved depending on the properties of the function in question and on the available possibilities and restrictions for storing and processing information.

There are the following methods for solving one-dimensional optimization problems:

  • the Fibonacci method;
  • dichotomy;
  • the golden section;
  • step doubling.

Fibonacci method

To begin, the coordinate of a point x on the interval [a; b] is defined as the number equal to the ratio of the difference (x - a) to the difference (b - a). Consequently, relative to the interval, a has coordinate 0, b has coordinate 1, and the midpoint has coordinate ½.

If we assume that F0 and F1 are both equal to 1, then F2 = 2, F3 = 3, ..., and Fn = Fn-1 + Fn-2. The Fn are the Fibonacci numbers, and the Fibonacci search is the optimal strategy of so-called sequential search for an extremum precisely because it is so closely tied to them.

In the optimal strategy, the trial points are chosen as xn-1 = Fn-2 : Fn and xn = Fn-1 : Fn. Whichever of the two intervals ([0; xn] or [xn-1; 1]) becomes the narrowed uncertainty interval, the point inherited from the previous step has, relative to the new interval, the coordinate Fn-2 : Fn-1 or Fn-3 : Fn-1. As xn-2, the point with the other of these coordinates relative to the new interval is taken. Using f(xn-2) together with the function value inherited from the previous interval makes it possible to narrow the uncertainty interval at the cost of only one new function value.

At the final step, the midpoint of the interval is inherited from the previous step. As x1, a point with relative coordinate ½ + ε is taken, and the final uncertainty interval will be [0; ½ + ε] or [½; 1] relative to the penultimate interval.

At the 1st step, the length of the interval decreased from 1 to Fn-1 : Fn. At the subsequent steps, the reduction of the lengths of the corresponding intervals is given by the numbers Fn-2 : Fn-1, Fn-3 : Fn-2, ..., F2 : F3, F1 : F2 (1 + 2ε). So the length of the final interval will be (1 + 2ε) : Fn.

If ε is neglected, then asymptotically 1 : Fn behaves like R^n as n → ∞, where R = (√5 - 1) : 2 ≈ 0.6180.

It is worth noting that asymptotically, for large n, each subsequent step of the Fibonacci search narrows the interval in question by the above coefficient. This result should be compared with 0.5, the narrowing coefficient of the uncertainty interval in the bisection method for finding a zero of a function.
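The scheme described above can be sketched as follows (a minimal, illustrative Python implementation; the function name and the convention that n counts the total number of function evaluations are our own):

```python
def fib_search(f, a, b, n):
    """Fibonacci search for the minimum of a unimodal f on [a, b],
    using n function evaluations in total."""
    F = [1, 1]
    while len(F) <= n:
        F.append(F[-1] + F[-2])
    # Two starting points at relative coordinates F[n-2]/F[n] and F[n-1]/F[n].
    x1 = a + (b - a) * F[n - 2] / F[n]
    x2 = a + (b - a) * F[n - 1] / F[n]
    f1, f2 = f(x1), f(x2)
    for k in range(n - 2, 0, -1):
        if f1 > f2:                      # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + (b - a) * F[k] / F[k + 1]
            f2 = f(x2)
        else:                            # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = a + (b - a) * F[k - 1] / F[k + 1]
            f1 = f(x1)
    return (a + b) / 2

x_min = fib_search(lambda x: (x - 2) ** 2, 0.0, 5.0, 20)   # minimum near x = 2
```

Each pass of the loop inherits one interior point and its function value, so only one new evaluation is needed per narrowing, exactly as in the text above.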

Dichotomy method

Given a certain target function, it is required to find its extremum on the interval (a; b). To do this, the interval is divided into four equal parts and the value of the function in question is determined at the 5 resulting points. The smallest of these values is then chosen. The extremum of the function must lie within the interval (a'; b') adjacent to the minimum point, so the search boundaries narrow by a factor of 2; if the minimum lies at the point a or b, they narrow by a factor of 4. The new interval is again divided into four equal segments. Since the values of the function at three of the points were determined at the previous stage, the target function needs to be calculated at only two new points.
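The quartering scheme just described can be sketched in Python (an illustrative implementation; the function name and reuse bookkeeping are our own):

```python
def quarter_search(f, a, b, tol=1e-6):
    """The dichotomy scheme described above: split [a, b] into 4 equal parts,
    evaluate f at the 5 grid points, and keep the part around the smallest value."""
    xs = [a + i * (b - a) / 4 for i in range(5)]
    fs = [f(x) for x in xs]
    while xs[4] - xs[0] > tol:
        i = min(range(5), key=lambda j: fs[j])
        if 0 < i < 4:
            # Interior minimum: new interval [xs[i-1], xs[i+1]], three values reused.
            known = {0: (xs[i - 1], fs[i - 1]), 2: (xs[i], fs[i]),
                     4: (xs[i + 1], fs[i + 1])}
        elif i == 0:
            known = {0: (xs[0], fs[0]), 4: (xs[1], fs[1])}   # narrowed four-fold
        else:
            known = {0: (xs[3], fs[3]), 4: (xs[4], fs[4])}   # narrowed four-fold
        a, b = known[0][0], known[4][0]
        xs = [a + k * (b - a) / 4 for k in range(5)]
        fs = [known[k][1] if k in known else f(xs[k]) for k in range(5)]
    return (xs[0] + xs[4]) / 2

x_min = quarter_search(lambda x: (x - 1) ** 2, 0.0, 4.0)   # minimum near x = 1
```

In the interior case only two new evaluations per halving are needed, as noted in the text.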

Golden section method

For large values of n, the coordinates of the points xn and xn-1 approach 1 - R ≈ 0.3820 and R ≈ 0.6180 respectively. A search with these fixed values is very close to the optimal strategy.

Suppose that f(0.3820) > f(0.6180), so that the interval [0.3820; 1] is retained. Since 0.6180 · 0.6180 ≈ 0.3820, the inherited point has relative coordinate ≈ 0.3820 = xn-1 in the new interval, and f at this point is already known. Consequently, at every stage from the 2nd on, only one calculation of the target function is needed, and each step reduces the length of the interval under consideration by the coefficient 0.6180.

Unlike the Fibonacci search, this method does not require fixing the number n before the search starts.

The "golden section" of the segment (a; b) is a division at a point c such that the ratio of the length of the whole segment to its larger part (a; c) is identical to the ratio of the larger part (a; c) to the smaller part (c; b). It is easy to see that this ratio is determined by the constant R introduced above. Consequently, for large n the Fibonacci method turns into the golden section method.
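The golden-section rule just described can be sketched in Python (an illustrative implementation; the names are our own):

```python
import math

def golden_section(f, a, b, tol=1e-8):
    """Golden-section search for the minimum of a unimodal f on [a, b]:
    one new evaluation per step, interval shrinks by R = 0.6180... each time."""
    r = (math.sqrt(5) - 1) / 2
    x1 = b - r * (b - a)              # relative coordinate 1 - R = 0.3820...
    x2 = a + r * (b - a)              # relative coordinate R = 0.6180...
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 > f2:                   # minimum lies in [x1, b]
            a = x1
            x1, f1 = x2, f2
            x2 = a + r * (b - a)
            f2 = f(x2)
        else:                         # minimum lies in [a, x2]
            b = x2
            x2, f2 = x1, f1
            x1 = b - r * (b - a)
            f1 = f(x1)
    return (a + b) / 2

x_min = golden_section(lambda x: (x - 3) ** 2, 0.0, 10.0)   # minimum near x = 3
```

The identity R² = 1 − R is what lets each step inherit one interior point, so a single new evaluation suffices per narrowing.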

Step doubling method

The essence is a search for the direction of decrease of the target function and movement in that direction, in the case of a successful search, with a gradually increasing step.

First, we set the initial point M0 of the function F(M), the minimum step size h0, and the search direction. Then we determine the value of the function at the point M0. Next, we take a step and find the value of the function at the new point.

If the value of the function is less than the value at the previous step, the next step should be taken in the same direction, after first doubling it. When the value turns out to be greater than the previous one, the search direction must be reversed, and movement then resumes in the chosen direction with the step h0. The presented algorithm can be modified.
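The step-doubling scheme can be sketched as follows for a function of one variable (an illustrative Python implementation; the function name, the iteration cap, and the test function are our own):

```python
def step_doubling_min(f, x0, h0=0.001, max_iter=1000):
    """Step doubling as described above: while f keeps decreasing, step in the
    current direction and double the step; on failure, reverse direction and
    fall back to the minimal step h0."""
    x, fx = x0, f(x0)
    h, direction = h0, 1.0
    for _ in range(max_iter):
        x_new = x + direction * h
        f_new = f(x_new)
        if f_new < fx:
            x, fx = x_new, f_new
            h *= 2.0                  # success: larger step, same direction
        else:
            direction = -direction    # failure: turn around...
            h = h0                    # ...and restart from the minimal step
    return x

x_min = step_doubling_min(lambda x: (x - 5) ** 2, 0.0)   # minimum near x = 5
```

Near the minimum the method settles into small oscillations of size about h0, so h0 controls the final accuracy.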

Methods of multidimensional optimization

The aforementioned zero-order methods do not use the derivatives of the minimized function, which makes them effective when there are difficulties in calculating derivatives.

The group of 1st-order methods is also called gradient methods, since the search direction is established using the gradient of the function, the vector whose components are the partial derivatives of the minimized function with respect to the corresponding optimized parameters.

The group of 2nd-order methods uses second derivatives (their use is rather limited because of the difficulties of calculating them).

List of unconditional optimization methods

When using multidimensional search without the use of derivatives, unconditional optimization methods are as follows:

  • Hooke and Jeeves (carries out 2 types of search: exploratory search and pattern search);
  • minimization by the regular simplex (searching for the minimum point of the corresponding function by comparing, at each individual iteration, its values at the vertices of the simplex);
  • cyclic coordinate descent (uses the coordinate vectors as search directions);
  • Rosenbrock (based on the use of one-dimensional minimization);
  • minimization on the deformable simplex (a modification of the regular-simplex minimization method with added compression and stretching procedures).

When derivatives are used in the multidimensional search process, the method of steepest descent is distinguished (the most fundamental procedure for minimizing a differentiable function of several variables).

Methods that use conjugate directions are also identified (the Davidon-Fletcher-Powell method). Its essence is the construction of the search directions as dj = -Dj · grad(F(y)), where Dj is a matrix recalculated at each iteration.
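The steepest descent procedure mentioned above can be sketched as follows (a minimal Python illustration in which a fixed step stands in for the exact line search of the classical method; the names and the quadratic test function are our own):

```python
def steepest_descent(grad, x0, lr=0.1, tol=1e-10, max_iter=100000):
    """Steepest (gradient) descent: repeatedly step against the gradient.
    A fixed step `lr` replaces the exact one-dimensional line search."""
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        if sum(gi * gi for gi in g) ** 0.5 < tol:   # gradient ~ 0: stop
            break
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# Quadratic test function f(x, y) = (x - 1)^2 + 2(y + 2)^2 with minimum at (1, -2).
x_min = steepest_descent(lambda v: [2 * (v[0] - 1), 4 * (v[1] + 2)], [0.0, 0.0])
```

Quasi-Newton schemes such as Davidon-Fletcher-Powell differ from this sketch only in premultiplying the gradient by an evolving matrix instead of a scalar step.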

Classification of mathematical optimization methods

Conditionally, based on the dimensionality of the target function, they are:

  • with 1 variable;
  • multidimensional.

Depending on whether the function (or the constraints) is linear or nonlinear, there are a large number of mathematical methods for finding the extremum that solve the problem.

According to the criterion for the use of derivatives, mathematical optimization methods are divided into:

  • one-dimensional methods using the 1st derivative of the target function;
  • multidimensional methods (the 1st derivative is the gradient vector).

Based on computational efficiency, there exist:

  • methods of rapid extremum calculation;
  • methods of simplified calculation.

This is the conditional classification of the methods under consideration.

Optimization of business processes

Different methods can be used here, depending on the problems being solved. It is customary to single out the following methods for optimizing business processes:

  • elimination (reducing the levels of the existing process, eliminating the causes of interference, eliminating input control, reducing transport routes);
  • simplification (lightweight passage of the order, decrease in the complexity of the product structure, work distribution);
  • standardization (use of special programs, methods, technologies, etc.);
  • acceleration (parallel engineering, simulation, rapid prototyping, automation);
  • change (changes in the field of raw materials, technologies, work methods, personnel location, working systems, order volume, processing procedure);
  • ensuring interaction (in relation to organizational units, personnel, working system);
  • allocations and inclusions (relative to the necessary processes, components).

Tax Optimization: Methods

Russian legislation provides the taxpayer with very rich opportunities for reducing tax payments, in view of which it is customary to divide the methods aimed at minimizing them into general (classical) and special.

General methods of tax optimization are as follows:

  • study of the company's accounting policy with the fullest possible use of the opportunities provided by Russian legislation (the procedure for writing off ICP, the choice of the method for calculating revenue from the sale of goods, and others);
  • optimization through the contract (conclusion of prospective transactions, clear and competent use of wording, etc.);
  • application of various kinds of benefits, tax exemptions.

The second group of methods can also be used by all firms, but it has a rather narrow scope of application. The special tax optimization methods are as follows:

  • replacement of relations (an operation entailing burdensome taxation is replaced by another that achieves a similar goal while allowing a preferential taxation procedure);
  • separation of relationships (replacing only part of the economic operation);
  • deferred tax payment (transferring the moment of the appearance of the taxation object to another calendar period);
  • direct reduction in the object of taxation (getting rid of many taxable operations or property without negative impact on the company's main economic activity).

Federal Agency for Education
GOU VPO "Ural State Technical University - UPI"

Parametric Optimization of Radio-Electronic Circuits

Methodical instructions for laboratory work in the course "Computer Analysis of Electronic Circuits" for students of all forms of training in specialty 200700 - Radio Engineering. Ekaterinburg, 2005. UDC 681.3.06:621.396.6. Compiled by V. V. Kiykov, V. F. Kochkin, K. A. Vdovkin; scientific editor: Assoc. Prof., Cand. Tech. Sci. V. I. Gadzikovsky. Ekaterinburg: GOU VPO USTU-UPI, 2005. 21 p.

The methodical instructions contain information on the formulation of optimization problems and optimality criteria and on the theory of searching for the minimum of the target function. A review of parametric optimization methods is given, the Hooke-Jeeves method is described in detail, and questions for self-testing are provided. Bibliography: 7 titles. Figures: 6. Prepared by the Department of Radio-Electronic Information Systems. © GOU VPO "Ural State Technical University - UPI", 2005.

Table of contents: Purpose of the work. 1. Methodical instructions. 2. Optimization theory. 2.1. Formal (mathematical) statement of the optimization problem. 2.2. Statement of the parametric RES optimization problem. 2.3. Optimality criteria. 2.4. Strategy for solving problems of optimal RES design. 2.5. Global search algorithms. 2.5.1. Random search algorithm. 2.5.2. Monotone global search algorithm. 2.5.3. Scanning algorithm on a Gray-code grid. 2.6. Methods and algorithms of local search. 2.6.1. Direct methods. 2.6.2. First-order gradient optimization methods. 2.6.3. Second-order gradient optimization methods. 3. Description of the computer analysis program. 3.1. Running the program. 3.2. Drawing up a task for optimization. 3.3. Optimization results. 4. Content of the laboratory work. 4.1. Order of execution. 4.2. Task for the laboratory work. 5. Methodical instructions for preparing the source data. 6. Content of the report. 7. Questions for self-testing. List of references.
Purpose of the work

To gain an understanding of, and practical skills in, the parametric optimization of RES in the automated circuit design of radio-electronic equipment (REA).

1. Methodical instructions

This work is the third in a complex of laboratory works on methods of calculating, analyzing, and optimizing radio-electronic circuits. The complex includes the following works:

1. Calculation of radio-electronic circuits by the nodal potential method.
2. Analysis of electronic circuits by the modified nodal potential method.
3. Parametric optimization of radio-electronic circuits.
4. Analysis of radio-electronic circuits using circuit functions.

In the first and second laboratory works, frequency analysis was performed, the sensitivity of the voltage gain to parameter variations was determined, and the transient and impulse characteristics were calculated at the nominal values of the parameters of the RES elements, which were initially chosen (specified or calculated) in the best way. In the present work, parametric optimization of the designed RES is carried out to bring its output parameters into compliance with the requirements of the technical specification.

2. Optimization theory

2.1. Formal (mathematical) statement of the optimization problem

Parametric optimization is the customary name for the problem of calculating the optimal nominal values of the internal parameters of the design object. In CAD systems for radio-electronic equipment, parameter optimization problems reduce to the mathematical programming problem

extr F(X), X ∈ XD, (1)

where XD = {X ∈ X0 | φk(X) ≥ 0, k = 1, ..., N; ψr(X) = 0, r = 1, ..., M}. The vector X = (x1, x2, ..., xn) is called the vector of controlled (varied) parameters; F(X) is the target function (quality function); XD is the permissible region; X0 is the space on which the target function is defined; the functions φk(X) and ψr(X) are the constraints.
The verbal formulation of problem (1): find the extremum of the target function F(X) within the region XD, bounded in the space X0 by the N inequalities φk(X) ≥ 0 and the M equalities ψr(X) = 0. The target function should be formulated on the basis of a notion of the quality of the designed object: either its value should decrease as quality improves, in which case (1) requires minimization (extr means min), or increase, in which case (1) requires maximization (extr means max). Constraint inequalities of the form xi > xi_min or xi < xi_max, where xi_min and xi_max are given constants, are called direct constraints; the remaining constraints are called functional. The maximum-search problem is, as a rule, reduced to a minimum-search problem by replacing F(X) with -F(X). The function F(X) has a local minimum at a point X0 if F(X) ≥ F(X0) in a small neighborhood of this point, and a global minimum at a point X* if the inequality F(X) ≥ F(X*) holds for all X. Classical optimization theory is set out in detail in the corresponding literature. Below, the main attention is paid to applying optimization theory to the search for optimal solutions in the design of radio-electronic equipment.

2.2. Statement of the parametric RES optimization problem

Solving a design problem is usually connected with choosing, from some permissible set of solutions, the optimal variant of the device, the one that best satisfies the requirements of the technical specification. Effective problem solving rests both on formal search methods of optimization and on informal ways of making optimal design decisions. Optimal design problems must therefore be considered not only in the computational aspect but, rather, in the creative one, taking into account the experience and knowledge of the circuit engineer at all stages of automated design.
One of the most difficult operations in solving optimal design problems is the stage of mathematical formulation of the problem, which includes choosing the optimality criterion, determining the varied parameters, and specifying the constraints imposed on the varied parameters. Among circuit design problems that are expedient to solve with optimization methods, the following problems of parametric synthesis and optimization are distinguished:

- determining the parameters of circuit components that provide extremal characteristics under given constraints;
- determining the parameters of functional units of circuits based on the requirements of the technical specification for the characteristics of the device as a whole;
- adapting existing circuit solutions in order to select parameters that satisfy new requirements for the circuit;
- refining the values of circuit component parameters obtained by manual engineering calculation.

For receiving-amplifying circuits, optimization is carried out with respect to such output parameters as:

- gain and bandwidth;
- shape of the frequency characteristic;
- stability of the amplifier or active filter;
- delay time, pulse rise time.

Note. The class of problems connected with determining the values of component parameters for which the designed circuit satisfies the set of conditions of the technical specification is customarily called parametric synthesis (with respect to the parameters being determined) or parametric optimization (with respect to the characteristics being realized).

In any of the listed problems, the realized characteristics of the designed device are functions of the vector of varied (tuned) parameters, which constitute some subset of the full set of circuit component parameters. The goal of parametric synthesis or optimization is to determine the parameter vector X that provides the best agreement of the device characteristics Y = Y(X) with the requirements of the technical specification.
To solve this problem it is necessary, first of all, to choose a formal criterion for evaluating the quality of each variant of the designed device, one that would make it possible to distinguish the variants from one another and to establish preference relations between them. Such an evaluation can be represented by a functional dependence of the form F(X) = F(Y(X)), usually called the optimality criterion, quality function, or target function. The problem of finding the circuit component parameters reduces to the classical optimization problem: finding the extremum of some quality function F(X) in the presence of constraints (equalities, inequalities, or two-sided bounds) imposed on the varied parameters and the characteristics of the designed circuit.

The diverse optimization problems of analog radio-electronic circuits have common features, the main ones being:

- the multicriteria nature of the optimization problems;
- the absence of explicit analytical dependences of the output parameters on the internal parameters; the connection between internal and external parameters is expressed by systems of equations and can be evaluated quantitatively only through the numerical solution of these systems.

These features cause the difficulties of formulating and solving optimization problems for analog radio-electronic circuits.

2.3. Optimality criteria

In the process of searching for the optimal solution, a particular form of optimality criterion may prove preferable for each specific problem. A basic set of optimality criteria, allowing the circuit engineer's diverse requirements for the optimized characteristics of the designed devices to be satisfied, is set out in the literature.
Thus, to find the extremum (minimum or maximum) of a quality index, such as the power consumed by the circuit or the cutoff frequency, the value of the optimality criterion itself is used without transformation:

F1(X) = Y(X). (2)

In problems that require the closest possible agreement between the optimized characteristic and some desired one, for example when optimizing frequency characteristics, it is most expedient to use the mean-square deviation criterion

F2(X) = <(Y(X) - Y*)^2>, (3)

where Y* is the desired value of the characteristic, or the value required by the technical specification, and < > is the averaging sign. For a characteristic specified by a discrete set of points, the target function is

F2(X) = (1/N) Σ_i w_i (Y(X, p_i) - Y_i*)^2, i = 1, ..., N, (4)

where N is the number of discretization points of the independent variable p; Y(X, p_i) is the value of the optimized characteristic at the i-th point of the discretization interval; w_i is the weight coefficient of the i-th value of the optimized characteristic, reflecting the importance of the i-th point in comparison with the others (as a rule, 0 < w_i ≤ 1). Minimizing functions (3) and (4) ensures closeness of the characteristics in the sense of the mean-square deviation. Function (4) is used in numerical methods of calculating Y(X). In some optimization problems it is necessary to ensure that the optimized characteristic exceeds, or does not exceed, some specified level. These optimality criteria are implemented by the following functions:

- to ensure exceeding a specified level,

F3(X) = 0 if Y(X) ≥ YH*; F3(X) = (YH* - Y(X))^2 if Y(X) < YH*; (5)

- to ensure not exceeding a specified level,

F4(X) = 0 if Y(X) ≤ YB*; F4(X) = (Y(X) - YB*)^2 if Y(X) > YB*, (6)

where YH* and YB* are the lower and upper boundaries of the permissible region for the characteristic Y(X). If the optimized characteristic must pass inside a certain permissible zone (corridor), a combination of the two previous optimality criteria is used:

F5(X) = 0 if YH* ≤ Y(X) ≤ YB*; F5(X) = (Y(X) - YB*)^2 if Y(X) > YB*; F5(X) = (YH* - Y(X))^2 if Y(X) < YH*. (7)

In cases where only the shape of the curve must be reproduced, ignoring a constant vertical offset, the shift criterion is used:

F6(X) = Σ_i w_i (Y_i* - Y(X, p_i) - Ycp)^2, i = 1, ..., N, (8)

where Ycp = (1/N) Σ_i (Y_i* - Y(X, p_i)).

The important characteristics of the computational process, above all the convergence of the optimization, depend on the form of the target function. The signs of the derivatives of the target function with respect to the controlled parameters do not remain constant over the entire permissible region; for target functions of the form (4) and (8), this circumstance gives them a ravine-like character. Thus, a feature of the target functions in circuit design problems is their ravine-like nature, which entails greater computational costs and requires special attention to the choice of the optimization method. Another feature of the target functions is that they are usually multi-extremal: along with the global minimum there are local minima. A further feature of optimization problems for electronic circuits is that the internal parameters cannot take arbitrary values: the values of resistors and capacitors are bounded by certain maximum and minimum values. In addition, out of several external parameters one must usually be chosen as the basic one, with respect to which optimization is performed, while permissible boundaries of change are specified for the others.

An optimization problem with constraints is reduced to an optimization problem without constraints by introducing penalty functions. The target function takes the form

Φ(X) = Fi(X) + Σ_r α_r (ψ_r(X))^2 + Σ_k β_k (φ_k(X))^2, r = 1, ..., M; k = 1, ..., N, (9)

where α_r and β_k are numerical coefficients that weigh the importance of one constraint relative to the others; they are zero when the corresponding condition from (1) is satisfied and take certain positive values otherwise, and Fi(X) is one of the quality functions described by relations (2)-(8).
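A minimal sketch of the penalty-function idea of (9) for inequality constraints (the inner coordinate-search minimizer, the function names, and the one-dimensional test problem are our own simplifications, not the CAD subsystem's algorithm):

```python
def penalized(f, x, ineqs, alpha):
    """Penalty function in the spirit of (9): add alpha * violation^2 for every
    constraint g(x) >= 0 that is violated."""
    return f(x) + alpha * sum(min(0.0, g(x)) ** 2 for g in ineqs)

def minimize_with_penalty(f, ineqs, x0, alphas=(1.0, 10.0, 100.0, 1000.0)):
    """Outer loop raises the penalty weight gradually; the inner minimizer is a
    crude coordinate search with a shrinking step."""
    x = list(x0)
    for alpha in alphas:
        step = 0.1
        while step > 1e-6:
            improved = False
            for i in range(len(x)):
                for d in (step, -step):
                    trial = list(x)
                    trial[i] += d
                    if penalized(f, trial, ineqs, alpha) < penalized(f, x, ineqs, alpha):
                        x, improved = trial, True
            if not improved:
                step /= 2
    return x

# Minimize (x - 2)^2 subject to x <= 1, i.e. g(x) = 1 - x >= 0; expect x near 1.
x_min = minimize_with_penalty(lambda v: (v[0] - 2) ** 2, [lambda v: 1 - v[0]], [0.0])
```

Starting with a small penalty weight and then raising it mirrors the advice in the text: a huge weight from the outset would create a steep "ravine" at the boundary and hamper convergence.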
Thus, going beyond the permissible region XD increases the minimized target function, and the intermediate solutions xj are held by the barrier at the boundary of XD. The height of the "barrier" is determined by the values of λ and μ, which in practice vary within wide limits (1–10^10). The larger λ and μ, the smaller the probability of leaving the permissible region; at the same time, the steepness of the ravine slope at the boundary increases, which slows down or completely destroys the convergence of the minimization process. Since the optimal values of λ and μ cannot be specified in advance, it is advisable to start optimization with small values and then increase them whenever a solution outside the permissible region is obtained.

2.4. The strategy for solving optimal design problems

Optimal design problems for radio-electronic systems (RES) have specific features, which include the multi-extremality and ravine character of the quality function, the constraints on the internal and output parameters of the designed device, and the large number of variable parameters. The strategy for solving optimal design problems is to apply global optimization procedures at the initial stages of the search and then refine the obtained global solution in the vicinity of the optimal point by local algorithms. Such a strategy allows one, first, to determine the position of the global extremum with sufficient reliability and accuracy and, second, to significantly reduce the computational cost of the search. The global search stages can be performed with low accuracy, while the local refinement stages are carried out in the region of attraction of the global extremum, which requires a considerably smaller number of calculations.

2.5.
Global search algorithms

Global search algorithms, as a rule, give a rather rough estimate of the global extremum at a low cost in computational resources, and require a significant increase in the number of calculations to obtain a more accurate estimate of the extremum position.

2.5.1. The random search algorithm

The simplest global extremum search algorithm, from the point of view of implementing the computational process, is based on testing the permissible region XD with a sequence of points uniformly distributed in it and selecting the best of the variants obtained. The quality of the algorithm is largely determined by the properties of the uniformly distributed random number generator used to produce the vectors x ∈ XD.

2.5.2. The monotone global search algorithm

Multidimensional optimization by this algorithm is based on the construction of a space-filling scan (Peano curve) that maps a segment of the real axis onto the hypercube of the permissible region XD. The sweep provides a single-valued and continuous mapping x(ξ) that yields a point x ∈ XD for any ξ ∈ [0, 1]. The problem of minimizing f(x) over the region XD is then equivalent to finding the minimum ξ* of the one-dimensional function f(x(ξ)). For global one-dimensional minimization of f(ξ) on the interval [0, 1], the optimization subsystem uses a monotone modification of the global search algorithm, implementing the monotone transformation of F(ξ)

Φ(ξ) = (1 + [1 + F(ξ)]²)^0.5, (10)

which preserves the location of the global extremum but makes the function smoother. The algorithm gives a fairly good estimate of the global extremum within the first 50–100 iterations. The best results are obtained when the number of variables does not exceed 5–7.
For the considered algorithm, better results can sometimes be obtained by transforming the search space according to a logarithmic law. Such a transformation is particularly effective when the search boundaries differ by several orders of magnitude, which is typical of optimization problems in radio-electronic equipment, and when the extremum lies near the boundaries of the region.

2.5.3. The scanning algorithm on a Gray-code grid

The basic idea of the method is a successive modification of a specific search sphere with characteristic rays containing the test points, accumulating and processing the information obtained. Scanning is carried out on a special grid defined by the binary Gray code. The search sphere on the Gray-code grid differs from the traditional one (a circle when the number of variables equals 2) in that it is supplemented by characteristic rays. The rays are directed from the center of the sphere to the boundaries of the region XD and thus, as it were, "probe" the entire region up to its borders. The algorithm has a single tunable parameter, the sensitivity of the quality function to parameter variations, which is used to determine the discretization step for each of the variables.

2.6. Methods and algorithms for local search

Local search methods and algorithms most often find the nearest local extremum, and their search trajectory depends strongly on the choice of the starting point and on the character of the target function.

2.6.1. Direct methods

Zero-order (direct) methods, as a rule, have no strict mathematical justification and are built on reasonable assumptions and empirical data. The simplest zero-order method is the method of coordinate descent (Gauss-Seidel). At each step all variables but one are fixed, and that one is chosen so as to minimize the target function.
Optimization is achieved by varying the variables in turn. This algorithm turns out to be ineffective if the target function contains cross terms of the type x1x2. It is usually not applicable to circuit design problems, which are characterized by target functions that have no analytical expression and depend on the circuit components in a complex way.

Among the zero-order methods, good results for ravine-like target functions are given by the Rosenbrock method, which combines the idea of coordinate descent with the idea of coordinate rotation. The best direction in the search for the extremum is movement along the ravine; therefore, after the first cycle of coordinate descent, the coordinate axes are rotated so that one of them coincides with the direction xk − xk−n, k = n, 2n, 3n, …. The Rosenbrock method gives no information about whether a minimum has been reached at a point, so the computation is terminated either when the decrease of F(x) becomes less than a small number ε or after a certain number of cycles.

The Hooke-Jeeves method was developed in 1961 but is still very effective and original. The search for the minimum of the target function consists of a sequence of exploring-search steps around a base point followed, in case of success, by a pattern move. The procedure consists of the following steps:

1. Choose an initial base point b1 and a step hj for each variable xj, j = 1, 2, …, n, of the scalar target function F(x).

2. Compute F(x) at the base point b1 in order to obtain information about the local behavior of the function F(x). This information will be used to find the direction of the pattern search, along which one can hope to achieve a greater decrease of the function F(x).
The value of the function F(x) at the base point b1 is found as follows:

a) the value F(b1) is computed at the base point b1;

b) each variable is varied in turn by the chosen step. Thus the value F(b1 + h·e1) is computed, where e1 is the unit vector in the direction of the x1 axis. If this leads to a decrease of the function value, b1 is replaced by b1 + h·e1. Otherwise the value F(b1 − h·e1) is computed, and if it has decreased, b1 is replaced by b1 − h·e1. If neither of the steps taken decreases the function value, the point b1 remains unchanged and changes in the direction of the x2 axis are considered, i.e. the value F(b1 + h2·e2) is computed, and so on. When all n variables have been considered, a new base point b2 is determined;

c) if b2 = b1, i.e. no decrease of the function F(x) was achieved, the exploration continues around the same base point b1 but with a reduced step length. As a rule, in practice the step is reduced by a factor of 10 from the initial length;

d) if b2 ≠ b1, a pattern search is performed.

3. The pattern search uses the information obtained during exploration and minimizes the target function by searching in the direction given by the pattern. This proceeds as follows:

a) the move is made from the base point b2 in the direction b2 − b1, since a search in this direction has already led to a decrease of the function F(x). Therefore the function value is computed at the pattern point p1 = b2 + (b2 − b1); in the general case pi = 2bi+1 − bi;

b) an exploration is performed around the point p1 (pi);

c) if the smallest value found at step 3b is less than the value at the base point b2 (in the general case bi+1), a new base point b3 (bi+2) is obtained, after which step 3a is repeated;
d) otherwise, the pattern move from point b2 (bi+1) is cancelled.

4. The minimum search process terminates when the step length (the lengths of the steps) has been reduced to a given small value.

2.6.2. First-order gradient methods

Methods for finding the extremum using derivatives have a strict mathematical justification. It is known that in searching for an extremum there is no better local direction than movement along the gradient. Among gradient methods, one of the most effective is the conjugate-gradient (Fletcher-Reeves) method, a modification of the method of steepest descent.

The method of steepest descent consists of the following steps:

1) an initial point is set (vector xk, k = 0);

2) F(xk) and ∇F(xk) are computed;

3) x is changed in the direction sk = −∇F(xk) until F(x) stops decreasing;

4) set k = k + 1, compute the new value of F(xk), and repeat the process from step 2.

The disadvantage of the method is that for ravine-like functions the approach to the minimum has a zigzag character and requires a large number of iterations. The essence of the conjugate-gradient method is that at every iteration, starting with the second (at the first iteration the method coincides with steepest descent), the previous values of F(x) and ∇F(x) are used to determine the new direction vector

sk = −∇f(xk) + dk·sk−1, where dk = [∇f(xk)]ᵀ∇f(xk) / [∇f(xk−1)]ᵀ∇f(xk−1). (11)

Thereby the zigzag character of the descent is eliminated and convergence is accelerated. The algorithm is simple to program and requires only a moderate amount of machine memory (only the previous search direction and the previous gradient need to be stored).

2.6.3. Second-order gradient methods

The iterative method based on knowledge of the second derivatives is generally known as Newton's method.
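The Hooke-Jeeves exploration and pattern-move loop described above can be sketched as follows. This is a simplified illustration, not the laboratory program itself: the step-reduction factor of 10 follows the text, while the function names and the termination threshold `eps` are assumptions.

```python
def explore(f, base, h):
    """Exploring search: vary each coordinate in turn by +h, then -h,
    keeping a change only if it decreases f (step 2b of the text)."""
    x = list(base)
    for j in range(len(x)):
        for step in (h, -h):
            trial = list(x)
            trial[j] += step
            if f(trial) < f(x):
                x = trial
                break
    return x

def hooke_jeeves(f, b, h=1.0, eps=1e-6):
    """Pattern search: explore around the base point; on success make a
    pattern move p = 2*b_new - b_old and explore around it; on failure
    reduce the step by a factor of 10, stopping when it falls below eps."""
    while h > eps:
        b2 = explore(f, b, h)
        while f(b2) < f(b):
            p = [2 * u - v for u, v in zip(b2, b)]  # pattern point p_i = 2b_{i+1} - b_i
            b, b2 = b2, explore(f, p, h)
        h /= 10.0  # no improvement: reduce the step length
    return b
```

Run on a smooth two-variable bowl such as (x1 − 3)² + (x2 + 1)², the loop converges to the minimum without using any derivatives, which is why the text classifies it as a zero-order method.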
Let the function f(x) be expanded in a Taylor series with three terms retained:

F(xk + Δx) − F(xk) ≈ (Δx)ᵀ∇fk + ½ (Δx)ᵀGk Δx. (12)

It is required to maximize the difference on the left-hand side. This can be done by differentiating (12) with respect to Δx and equating the result to zero:

∂/∂Δx [F(xk + Δx) − F(xk)] = ∇fk + Gk Δx = 0, i.e. Gk Δx = −∇fk.

This equation can be solved for Δx, for example by LU decomposition. Formally one can write

Δx = −(Gk)⁻¹ ∇fk = −Hk ∇fk, where H = G⁻¹.

The search direction now coincides with the vector

sk = Δxk = −Hk ∇fk. (13)

Near the minimum the Hessian matrix¹ is positive definite and the full step dk = 1 can be used (i.e. no line search in the direction sk is needed). However, far from the minimum the Hessian matrix may fail to be positive definite; moreover, computing this matrix is expensive. For this reason a whole class of other methods has been developed, called variable-metric or quasi-Newton methods, which are free of these shortcomings. These methods were developed quite long ago but were generalized only recently. They are based on estimating the gradient and on approximating the Hessian matrix or its inverse. The approximation is obtained by modifying an initial positive definite matrix in a special way so as to maintain positive definiteness; only when the minimum is reached does the resulting matrix approximate the Hessian (or its inverse). In all methods of this class the search direction is defined as in Newton's method (13). At each iteration the matrix Hk+1 is obtained from the matrix Hk by a special formula. As an example we give the formula derived by Davidon, Fletcher and Powell, sometimes called the DFP formula:
¹ The Hessian matrix is the matrix of second derivatives, G(x) = [∂²F/∂xi∂xj], i, j = 1, …, n.

Hk+1 = Hk + Δx(Δx)ᵀ / ((Δx)ᵀγk) − Hk γk γkᵀ Hk / (γkᵀ Hk γk). (14)

This formula is suitable only if (Δx)ᵀγk ≠ 0 and γkᵀHkγk ≠ 0; here γk = ∇fk+1 − ∇fk.

3. Description of the computer analysis program

The program has a convenient graphical user interface for work in the Windows operating system. The initial description of the optimized electronic circuit is the information in the file created when performing the second laboratory work. After this file is loaded and the elements to be optimized are selected, the program computes new values of the elements. The criterion of correctness of the calculation is the minimum value of the target function, which is computed as the weighted mean-square deviation between the required and the actual characteristics of the RES: the amplitude-frequency, transient, or impulse characteristic. The program has a standard set of controls: a menu, a toolbar, and so on. A report on the laboratory work is created automatically in HTML.

Note. After filling in each dialog box, the <Далее> (Next) button is pressed. If the result displayed in the next window is not satisfactory, pressing the <Назад> (Back) button returns to the previous steps so that the search conditions can be changed.

3.1. Starting the program

When the program starts, a window opens in which the file saved after the second laboratory work must be opened from the File menu (Fig. 1).

3.2. Setting the optimization task

The file describing the circuit contains the parameters of the elements, including the transistor substitution circuit. In the left window, the variable parameters for parametric optimization are selected. The desired characteristic, for example the frequency response, is specified by frequency values (in Hz) and the corresponding gain values (in dB).
At the next step, the initial step for varying the parameters during optimization is set (Fig. 2).

Fig. 1. Window for opening the input file
Fig. 2. Window for selecting the optimization values

3.3. Optimization results

At the next stage the program presents the results of the calculations:

- the minimum of the target function;
- the parameters of the varied elements before and after optimization;
- the number of evaluations of the target function;
- the number of step-length reductions and of pattern searches.

The criterion of correctness of the results obtained is the minimum value of the target function. For a bipolar transistor it should be approximately 10⁻⁷ to 10⁻⁸, and for a field-effect transistor 10⁻⁴ to 10⁻⁵ (Fig. 3). If the optimization results are suitable, the next step is the construction of the amplitude-frequency or time characteristics (Fig. 4, 6). For precise determination of the RES bandwidth, i.e. the upper and lower boundary frequencies, and of the duration of transient processes, tables of calculated values are provided (Fig. 5).

Fig. 3. Calculation window after optimization
Fig. 4. Window for plotting the frequency response
Fig. 5. Frequency-response values in a table
Fig. 6. Time-characteristic window

4. Content of the laboratory work

4.1. Procedure

1. The preparatory stage includes familiarization with the methodological instructions for the laboratory work and study of optimization theory in the lecture notes, the literature, and section 2 of these guidelines.

2. The second stage includes the theoretical work: forming the requirements for the optimized RES characteristic; selecting the element or elements of the circuit whose parameters are to be varied during optimization.

3. Loading the optimization program with the description of the optimized circuit and the task for parametric optimization.

4. Optimization.

5.
Calculation of the characteristics of the circuit with the optimized parameters.

6. The final stage. At this stage the RES characteristics before and after optimization are compared. On the basis of the materials obtained, a report is drawn up on A4 (297×210 mm) sheets with mandatory inclusion of printouts.

4.2. Task for the laboratory work

1. Based on the results of the analysis of the amplifier frequency response obtained in the second laboratory work, form the requirements for the ideal response. Choose a way of specifying the ideal frequency response and the coordinates of the points on its plot.

2. Determine the group of elements whose parameters are to be varied during optimization.

5. Methodological guidelines for preparing the initial data

5.1. From the frequency-response plot calculated in the second laboratory work, the upper and lower boundary frequencies are determined and the effect of the high-frequency inductive correction is found.

5.2. Using knowledge of amplifier circuit design, the components whose parameters determine the upper and lower boundary frequencies are identified.

5.3. The ideal (technical-assignment) characteristic is drawn on the frequency-response plot and the optimization points are selected. In order to preserve the shape of the frequency response in the passband, points in that part of the characteristic must also be selected.

6. Content of the report

1. The purpose of the work.
2. The initial data in the form of the schematic diagram of the amplifier stage and the parameters of its elements before optimization.
3. A listing of the machine analysis results.
4. Analysis of the results. Conclusions.

7. Questions for self-check

1. Name the necessary and sufficient conditions for the existence of a minimum of a function.
2. What matrix is called positive definite?
3. Why is the target function called a quality function?
4. Name the main property of the target function.
5.
What problems are called parametric synthesis, and which parametric optimization?
6. In what cases does the numerical search for a minimum of the target function belong to nonlinear programming problems?
7. How do gradient methods for finding an extremum of a function differ from direct methods?
8. Explain the concepts of global and local minimum.
9. What constraints arise in the parametric optimization of radio-electronic devices?
10. Explain the method of coordinate descent.
11. How does the method of conjugate gradients differ from the method of steepest descent?
12. What does "pattern search" mean in the Hooke-Jeeves method?
13. What are the criteria for terminating the iterative optimization process?

In practice, situations constantly arise in which some result can be achieved not in one but in many different ways. Such choices face an individual, for example when deciding how to allocate personal expenses; a whole enterprise or even an industry, when it must determine how to use the resources at its disposal to achieve maximum output; and, finally, the national economy as a whole. Naturally, when there are many possible solutions, the best one should be selected.

The success of solving the overwhelming majority of economic problems depends on finding the best way of using resources. The end result depends on how these, as a rule, limited resources are allocated.

The essence of optimization methods (optimal programming) is to select, given certain available resources, the way of using (distributing) them that ensures the maximum or minimum of the indicator of interest.

A prerequisite for applying the optimal approach to planning (the optimality principle) is flexibility, the alternativeness of the production and economic situations in which planning and management decisions have to be made. Such situations, as a rule, constitute the daily practice of an economic entity (choice of a production program, attachment to suppliers, routing, cutting of materials, preparation of mixtures).

Optimal programming thus ensures the successful solution of a number of extremal planning problems. In macroeconomic analysis, forecasting, and planning, optimal programming allows one to choose a national economic plan (development program) characterized by an optimal ratio of consumption to savings (accumulation), an optimal share of production investment in national income, an optimal ratio of the growth rate to the profitability coefficient of the national economy, and so on.

Optimal programming gives practically valuable results because it fully corresponds to the nature of technical and economic processes and phenomena. From the mathematical and statistical points of view, the method is applicable only to phenomena that are expressed by positive quantities and that together form a combination of interdependent but qualitatively different quantities. These conditions, as a rule, hold for the quantities that characterize economic phenomena. The economic researcher always has before him a set of different kinds of positive quantities. In solving optimization problems, the economist always deals not with one but with several interdependent quantities or factors.

Optimal programming can be applied only to problems in which the optimal result is achieved in the form of precisely formulated goals and under well-defined constraints, usually arising from the available means (production capacities, raw materials, labor resources, etc.). The statement of the problem usually includes a mathematically formulated system of interdependent factors, resources, and conditions limiting the character of their use.

The problem becomes solvable once definite estimates are introduced into it, both for the interdependent factors and for the expected results. Consequently, the optimality of the result of a programming problem is relative: the result is optimal only from the point of view of the criteria by which it is evaluated and of the constraints imposed in the problem.

Proceeding from the above, any optimal programming problem is characterized by the following three features:

1) the presence of a system of interdependent factors;

2) a strictly defined criterion for estimating optimality;

3) an accurate formulation of the conditions limiting the use of the available resources or factors.

Of the many possible variants, the alternative combination is selected that satisfies all the conditions introduced into the problem and ensures the minimum or maximum value of the chosen optimality criterion. The solution of the problem is obtained by applying a certain mathematical procedure, which consists in the successive approximation of rational variants, corresponding to the selected combination of factors, to the unique optimal plan.

Mathematically, this reduces to finding the extreme value of some function, that is, to a problem of the type:

Find max (min) f(x), provided that the variable x (the point x) ranges over some predetermined set X:

f(x) → max (min), x ∈ X. (4.1)

The problem so defined is called an optimization problem. The set X is called the permissible set of the problem, and the function f(x) its target function.

Thus, optimization is the problem of choosing, from some set of permissible decisions x (i.e., those allowed by the circumstances of the case), the decisions x* that in one sense or another can be qualified as optimal. Here the permissibility of each decision is understood in the sense of the possibility of its actual realization, and its optimality in the sense of its expediency.

Much depends on the form in which the permissible set X is given. In many cases this is done by means of a system of inequalities (equalities):

q1(x1, x2, ..., xn) (≤, =, ≥) 0,

q2(x1, x2, ..., xn) (≤, =, ≥) 0, (4.2)

........................

qm(x1, x2, ..., xn) (≤, =, ≥) 0,

where q1, q2, ..., qm are some functions, and (x1, x2, ..., xn) = x expresses the fact that the point x is specified by a set of several numbers (coordinates), being a point of the n-dimensional arithmetic space Rⁿ. Accordingly, the set X is a subset of Rⁿ and consists of the points (x1, x2, ..., xn) ∈ Rⁿ satisfying the system of inequalities (4.2).

The function f(x) then becomes a function of n variables, f(x1, x2, ..., xn), whose optimum (max or min) is to be found.

Clearly, one must find not only the max (min) value of f(x1, x2, ..., xn), but also the point or points (if there is more than one) at which this value is achieved. Such points are called optimal solutions. The set of all optimal solutions is called the optimal set.

The problem described above is the general problem of optimal (mathematical) programming, which rests on the principles of optimality and consistency. The function f is called the target function, and the inequalities (equalities) qi(x1, x2, ..., xn) (≤, =, ≥) 0, i = 1, 2, ..., m, the constraints. In most cases the constraints include conditions of non-negativity of the variables:

x1 ≥ 0, x2 ≥ 0, ..., xn ≥ 0,

or of part of the variables; however, this is not obligatory.
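As a toy illustration of problem (4.1) with a permissible set of the form (4.2), the following sketch enumerates a grid over a box, discards points that violate the constraints qi(x) ≤ 0, and keeps the best permissible point. The function name and the example target function and constraint are invented for the illustration; a grid search is crude and is only meant to make the definitions concrete:

```python
from itertools import product

def grid_optimize(f, constraints, bounds, steps=50, mode="max"):
    """Enumerate grid points of the box `bounds` (list of (lo, hi) pairs),
    keep those satisfying every constraint q(x) <= 0, and return the best
    permissible point together with its target-function value."""
    axes = [[lo + (hi - lo) * i / steps for i in range(steps + 1)]
            for lo, hi in bounds]
    sign = 1.0 if mode == "max" else -1.0
    best, best_val = None, None
    for x in product(*axes):
        if all(q(x) <= 0 for q in constraints):  # x lies in the permissible set
            v = sign * f(x)
            if best_val is None or v > best_val:
                best, best_val = x, v
    return best, sign * best_val
```

For example, maximizing f(x) = x1 + 2·x2 subject to x1 + x2 ≤ 1 and the non-negativity conditions (enforced here by the box bounds) yields the optimal point (0, 1) with value 2.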

Depending on the nature of the constraint functions and the target function, different types of mathematical programming are distinguished:

1. Linear programming - the functions are linear;

2. Nonlinear programming - at least one of these functions is nonlinear;

3. Quadratic programming - f(x) is a quadratic function and the constraints are linear;

4. Separable programming - f(x) is a sum of functions, one for each variable; the constraint conditions may be linear or nonlinear;

5. Integer (linear or nonlinear) programming - the coordinates of the desired point x may only be integers;

6. Convex programming - the target function is convex and the constraint functions are convex, i.e., optimization of convex functions on convex sets; and so on.

The simplest and most frequently occurring case is when all these functions are linear, each of the form

a1x1 + a2x2 + ... + anxn + b,

that is, we have a linear programming problem. It is estimated that at present approximately 80-85% of all optimization problems solved in practice are linear programming problems.
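Because the extremum of a linear target function over a polyhedral permissible set is attained at a vertex, a tiny two-variable linear program can be solved by enumerating the intersections of pairs of constraint lines. The sketch below, with an invented example, illustrates this idea only; practical solvers use the simplex method:

```python
from itertools import combinations

def solve_lp_2d(c, constraints):
    """Maximize c[0]*x + c[1]*y subject to a*x + b*y <= d for each
    (a, b, d) in `constraints`, by checking every intersection (vertex
    candidate) of two constraint lines for feasibility."""
    eps = 1e-9
    best = None
    for (a1, b1, d1), (a2, b2, d2) in combinations(constraints, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < eps:
            continue  # parallel lines: no intersection point
        x = (d1 * b2 - d2 * b1) / det  # Cramer's rule
        y = (a1 * d2 - a2 * d1) / det
        if all(a * x + b * y <= d + eps for a, b, d in constraints):
            val = c[0] * x + c[1] * y
            if best is None or val > best[0]:
                best = (val, x, y)
    return best  # (optimal value, x, y), or None if no feasible vertex
```

For instance, maximizing 3x + 2y subject to x + y ≤ 4, x ≤ 2, y ≤ 3, x ≥ 0, y ≥ 0 (the sign constraints written as −x ≤ 0 and −y ≤ 0) gives the optimal vertex (2, 2) with value 10.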

Combining simplicity and realism of its initial premises, this method at the same time has enormous potential for determining the plans that are best from the point of view of the chosen criterion.

The first research in the field of linear programming, aimed at choosing an optimal work plan within a production complex, dates back to the late 1930s and is associated with the name of L.V. Kantorovich. In the domestic scientific tradition he is considered the first developer of this method.

In the 1930s, during the period of intensive economic and industrial development of the Soviet Union, Kantorovich was at the forefront of mathematical research and sought to apply his theoretical work to the practice of the growing Soviet economy. Such an opportunity appeared in 1938, when he was appointed consultant to the laboratory of a plywood factory. He was given the task of developing a method of resource allocation that could maximize the productivity of the equipment, and Kantorovich, formulating the problem in mathematical terms, arrived at the maximization of a linear function subject to a large number of constraints. Without formal economic training, he nevertheless knew that maximization under numerous constraints is one of the basic economic problems and that a method facilitating planning at plywood factories could be used in many other industries, whether in determining the optimal use of sown areas or the most efficient distribution of transport flows.

Speaking of the development of this method in the West, one should mention Tjalling Koopmans, an American economist-mathematician of Dutch origin.

Working with the merchant fleet mission, Koopmans tried to develop routes for the Allied fleets that would reduce total shipping costs to a minimum. The task was extremely difficult: thousands of merchant ships carried millions of tons of cargo along sea routes between hundreds of ports scattered around the world. This work gave Koopmans the opportunity to apply his mathematical knowledge to a fundamental economic problem - the optimal distribution of scarce resources among competing consumers.

Koopmans developed an analytical technique, called activity analysis, that decisively changed the way economists and managers approach the allocation of routes. He first described this technique in 1942 in a memorandum entitled "Exchange Ratios Between Cargoes on Various Routes", which showed that the allocation problem can be approached as a mathematical problem of maximization under constraints. The value to be maximized is the cost of the delivered cargo, equal to the sum of the values of the goods delivered to each of the ports. The constraints were represented by equations expressing the relation between the amounts of expended production factors (for example, ships, time, labor) and the quantity of cargo delivered to various destinations, where the expenditure of any factor may not exceed the amount available.

Working on the maximization problem, Koopmans developed mathematical equations that have found wide application both in economic theory and in management practice. These equations determine, for each production factor, a coefficient equal to the price of that factor under ideal competitive markets. A fundamental link was thus established between the theory of production efficiency and the theory of allocation through competitive markets. In addition, Koopmans's equations were of great value to central planning agencies, which could use them to determine appropriate prices for various inputs while leaving the choice of optimal routes to the discretion of local directors, whose duty was to maximize profit. The method of activity analysis could be widely used by any managers in planning production processes.

In 1975 L.V. Kantorovich and Tjalling C. Koopmans were awarded the Nobel Prize "for their contributions to the theory of optimum allocation of resources".

Speaking of the first studies in the field of linear programming, one cannot fail to mention another American scientist, George B. Dantzig. The specific formulation of the linear programming method goes back to work he performed for the US Air Force during World War II, when the problem arose of coordinating the actions of one large organization in such matters as the accumulation of reserves, the production and maintenance of equipment and materiel, in the presence of alternatives and constraints. In addition, J. Dantzig at one time worked together with W. Leontief, and the simplex method for solving linear optimization problems (the one most frequently used to solve them) appeared in connection with one of the first practical applications of the input-output method.

