Optimization is central to economic theory. The theoretical and practical issues of optimization are studied by optimization theory.

If optimization is carried out over the parameters of an object whose structure is fixed, it is called parametric optimization. The problem of choosing the optimal structure itself is structural optimization.

A standard mathematical optimization problem is formulated as follows: among the elements x forming the set X, find an element x* that delivers the minimum value f(x*) of a given function f(x). To formulate the optimization problem correctly, it is necessary to specify:

  1. The admissible set $\mathbb{X} = \{\vec{x} \mid g_i(\vec{x}) \leq 0,\ i = 1, \ldots, m\} \subset \mathbb{R}^n$;
  2. The objective function, a mapping $f: \mathbb{X} \to \mathbb{R}$;
  3. The search criterion (max or min).

Then solving the problem $f(\vec{x}) \to \min_{\vec{x} \in \mathbb{X}}$ means one of the following:

  1. Show that $\mathbb{X} = \varnothing$.
  2. Show that the objective function $f(\vec{x})$ is not bounded below.
  3. Find $\vec{x}^* \in \mathbb{X}$ such that $f(\vec{x}^*) = \min_{\vec{x} \in \mathbb{X}} f(\vec{x})$.
  4. If no such $\vec{x}^*$ exists, find $\inf_{\vec{x} \in \mathbb{X}} f(\vec{x})$.

If the function to be minimized is not convex, the search is often limited to local minima and maxima: points $x_0$ such that $f(x) \geq f(x_0)$ everywhere in some neighborhood of $x_0$ for a minimum, and $f(x) \leq f(x_0)$ for a maximum.

If the admissible set is $\mathbb{X} = \mathbb{R}^n$, the problem is called an unconstrained optimization problem; otherwise it is a constrained (conditional) optimization problem.
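As a concrete illustration, here is a minimal sketch of both problem types in Python, assuming SciPy is available; the objective and the constraint are invented examples. Note that SciPy's "ineq" convention expects constraint functions to be nonnegative, so a constraint written as $g(\vec{x}) \leq 0$ is passed as $-g(\vec{x})$.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical objective: f(x) = (x1 - 1)^2 + (x2 + 2)^2, minimized at (1, -2).
def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

# Unconstrained problem: the admissible set is all of R^n.
res_free = minimize(f, x0=np.zeros(2))
print(res_free.x)                  # ~[1, -2]

# Constrained problem: g(x) = x1 + x2 + 2 <= 0.
# SciPy's "ineq" constraints mean fun(x) >= 0, so we pass -g(x).
cons = [{"type": "ineq", "fun": lambda x: -(x[0] + x[1] + 2.0)}]
res_con = minimize(f, x0=np.zeros(2), constraints=cons)
print(res_con.x)                   # ~[0.5, -2.5], on the boundary x1 + x2 = -2
```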

Optimization methods classification

The general statement of the optimization problem covers a wide variety of problem classes, and the choice of method (and hence the efficiency of the solution) depends on the class of the problem. Problems are classified by the form of the objective function and of the admissible region (defined by a system of equalities and inequalities or by a more complex algorithm).

Optimization methods are classified according to the problems they address:

  • Local methods: converge to some local extremum of the objective function. If the objective function is unimodal, this extremum is unique and coincides with the global minimum or maximum.
  • Global methods: deal with multiextremal objective functions. In a global search, the main task is to identify the trends in the global behavior of the objective function.

The currently existing search methods can be divided into three broad groups:

  1. deterministic;
  2. random (stochastic);
  3. combined.

By the dimension of the admissible set, optimization methods are divided into one-dimensional and multidimensional optimization methods.

By the type of the objective function and the feasible set, optimization problems and methods for their solution can be divided into the following classes:

  • Optimization problems in which the objective function $f(\vec{x})$ and the constraints $g_i(\vec{x}),\ i = 1, \ldots, m$ are linear functions are solved by so-called linear programming methods (see the sketch after this list).
  • Otherwise one deals with a nonlinear programming problem and applies the corresponding methods. Two particular cases are distinguished:
    • if $f(\vec{x})$ and $g_i(\vec{x}),\ i = 1, \ldots, m$ are convex functions, the problem is called a convex programming problem;
    • if $\mathbb{X} \subset \mathbb{Z}^n$, one deals with an integer (discrete) programming problem.
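A minimal sketch of the linear class, assuming SciPy; the cost vector and constraint data below are invented for illustration.

```python
from scipy.optimize import linprog

# Hypothetical LP: maximize 3*x1 + 2*x2 subject to
#   x1 + x2 <= 4,  2*x1 + x2 <= 6,  x1, x2 >= 0.
# linprog minimizes, so we pass the negated cost vector.
c = [-3.0, -2.0]
A_ub = [[1.0, 1.0],
        [2.0, 1.0]]
b_ub = [4.0, 6.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # optimal vertex (2, 2) with objective value 10
```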

By the smoothness requirements on the objective function and the partial derivatives used, optimization methods can also be divided into:

  • direct (zero-order) methods, which require only evaluations of the objective function at the approximation points;
  • first-order methods, which require the first partial derivatives of the function;
  • second-order methods, which require the second partial derivatives, that is, the Hessian of the objective function (a single-step comparison of the first two orders is sketched after this list).
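The difference between the orders can be seen on a single step; a sketch under assumed data (an illustrative quadratic — a zero-order method would use only the values of `f` themselves):

```python
import numpy as np

# Illustrative quadratic f(x) = x1^2 + 10*x2^2 with minimum at the origin.
f    = lambda x: x[0] ** 2 + 10.0 * x[1] ** 2
grad = lambda x: np.array([2.0 * x[0], 20.0 * x[1]])   # first partial derivatives
hess = lambda x: np.array([[2.0, 0.0], [0.0, 20.0]])   # Hessian (second derivatives)

x = np.array([1.0, 1.0])

# First-order step: gradient descent with a fixed step size.
x_gd = x - 0.05 * grad(x)

# Second-order step: Newton's method solves H d = -grad f; on a quadratic
# it reaches the exact minimizer in a single step.
x_newton = x + np.linalg.solve(hess(x), -grad(x))
print(x_gd, x_newton)    # x_newton == [0, 0]
```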

In addition, optimization methods are divided into the following groups:

  • analytical methods (for example, the Lagrange multiplier method and the Karush-Kuhn-Tucker conditions);
  • graphical methods.

Depending on the nature of the set X, mathematical programming problems are classified as:

  • discrete programming (or combinatorial optimization) problems, if X is finite or countable;
  • integer programming problems, if X is a subset of the set of integers;
  • nonlinear programming problems, if the constraints or the objective function contain nonlinear functions and X is a subset of a finite-dimensional vector space;
  • linear programming problems, if all the constraints and the objective function contain only linear functions.

In addition, sections of mathematical programming are parametric programming, dynamic programming and stochastic programming.

Mathematical programming is used to solve optimization problems in operations research.

The way the extremum is sought is entirely determined by the class of the problem. But before a mathematical model is obtained, four modeling stages must be performed (a toy formulation illustrating them is given after the list):

  • Determining the boundaries of the optimization system
    • Discard those connections of the object being optimized with the outside world that cannot substantially affect the optimization result or, more precisely, those without which the solution is simpler
  • Selecting the controlled variables
    • "Freeze" the values of some variables (uncontrolled variables); the others may take any values from the range of feasible decisions (controlled variables)
  • Defining constraints on the controlled variables
    • equality and/or inequality constraints
  • Choosing a numerical optimization criterion (for example, a performance indicator)
    • Form the objective function
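As a toy illustration of the four stages (all numbers hypothetical), consider a workshop producing two products. Stage 1: the system boundary is the workshop itself; prices and resource stocks are treated as external. Stage 2: the controlled variables are the output quantities $x_1, x_2$; everything else is frozen. Stage 3: the resource constraints are $x_1 + x_2 \leq 4$ and $2x_1 + x_2 \leq 6$ with $x_1, x_2 \geq 0$. Stage 4: the criterion is profit, giving the objective function $f(\vec{x}) = 3x_1 + 2x_2 \to \max$. This is exactly the linear program solved numerically in the sketch above.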

History

L.V. Kantorovich, together with M.K. Gavurin, developed the method of potentials in 1949, which is used to solve transportation problems. In subsequent works of Kantorovich, Nemchinov, V.V. Novozhilov, A.L. Lur'e, A. Brudno, Aganbegyan, D.B. Yudin, E.G. Golshtein and other mathematicians and economists, these ideas were developed further into a mathematical theory of linear and nonlinear programming and into applications of its methods to the study of various economic problems.

Many works of foreign scientists are devoted to linear programming methods. In 1941 F.L. Hitchcock formulated the transportation problem. The main method for solving linear programming problems, the simplex method, was published by G. Dantzig in 1949. Linear and nonlinear programming methods were developed further in the works of H. Kuhn, A. Tucker, Saul I. Gass, A. Charnes, E.M. Beale and others.

Simultaneously with the development of linear programming, much attention was paid to nonlinear programming problems, in which the objective function, the constraints, or both are nonlinear. In 1951 Kuhn and Tucker published a paper giving necessary optimality conditions (sufficient under convexity assumptions) for nonlinear programming problems. This work served as the basis for further research in the area.

Since 1955, many papers on quadratic programming have been published (works by Beale, Barankin and R. Dorfman, M. Frank and P. Wolfe, Markowitz, and others). J.B. Dennis, J.B. Rosen and G. Zoutendijk developed gradient methods for solving nonlinear programming problems.

At present, algebraic modeling languages such as AMPL and LINGO have been developed for the effective application of mathematical programming methods and the solution of problems on computers.





Methods for optimizing management decisions

The most acceptable version of a decision made at the management level on any issue is considered optimal, and the process of searching for it is called optimization.

The interdependence and complexity of the organizational, socio-economic, technical and other aspects of production management mean that a management decision now affects a large number of closely intertwined factors, which makes it impossible to analyze each of them separately with traditional analytical methods.

Most of the factors are decisive in the decision-making process, yet by their nature they do not lend themselves to quantitative characterization; there are also factors that are practically unchanging. Hence the need to develop special methods capable of supporting the choice of important management decisions within complex organizational, economic and technical tasks (expert assessments, operations research and optimization methods, etc.).

Operations research methods are used to find optimal solutions in such areas of management as the organization of production and transportation processes, planning of large-scale production, and material and technical supply.

Decision optimization methods consist in comparing numerical estimates of a number of factors whose analysis cannot be carried out by traditional methods. The optimal solution is the best among the possible options for the economic system as a whole; a solution that is most acceptable for individual elements of the system is called suboptimal.

The essence of operations research methods

As noted above, operations research methods underlie the optimization of management decisions. They are based on mathematical (deterministic) and probabilistic models representing the process, activity or system under study. Such models provide a quantitative characterization of the corresponding problem and serve as the basis for making important management decisions while searching for the optimal acceptable option.

The questions that matter most to production managers and that are resolved by these methods include:

  • how well grounded the selected decision options are;
  • how much better they are than the alternatives;
  • how fully the determining factors are taken into account;
  • what criterion of optimality the selected solutions satisfy.

These methods of (managerial) decision optimization aim to find optimal solutions for as many firms, companies or their divisions as possible. They are based on the achievements of statistical, mathematical and economic disciplines (game theory, queueing theory, graph theory, optimal programming, mathematical statistics).

Expert assessment methods

These methods for optimizing managerial decisions are used when the problem does not lend itself, in whole or in part, to formalization and cannot be solved by mathematical methods.

Expertise is the study of complex special questions at the stage of developing a managerial decision by persons with the appropriate knowledge and extensive experience, in order to obtain conclusions, recommendations, opinions and assessments. In expert research, the latest achievements of science and technology within the expert's specialization are applied.

The considered methods for optimizing managerial decisions (expert assessments) are effective in solving the following managerial tasks in production:

  1. Studying complex processes, phenomena, situations and systems that are characterized by informal, qualitative features.
  2. Ranking and determining, by a given criterion, the essential factors that govern the functioning and development of the production system.
  3. Forecasting the development trends of the production system and its interaction with the external environment, where these optimization methods are especially effective.
  4. Increasing the reliability of expert assessment of target functions, both quantitative and qualitative, by averaging the opinions of qualified specialists.

And these are just some of the methods for optimizing management decisions by expert assessment.

Classification of the considered methods

By the number of parameters, methods for solving optimization problems can be divided into:

  • One-dimensional optimization methods.
  • Multidimensional optimization techniques.

They are also called numerical optimization methods; strictly speaking, they are algorithms for finding an extremum.

Within the framework of the application of derivatives, methods are:

  • direct optimization methods (zero order);
  • gradient methods (1st order);
  • 2nd order methods, etc.

Most multidimensional optimization methods reduce the problem to a sequence of problems of the second group (one-dimensional optimization).

One-dimensional optimization methods

All numerical optimization methods rest on the approximate or exact computation of characteristics such as the values of the objective function, the functions defining the admissible set, and their derivatives. For each individual problem, the question of which characteristics to compute is decided according to the properties of the function under consideration and the available capabilities and limitations for storing and processing information.

The following (one-dimensional) methods for solving optimization problems are considered below:

  • the Fibonacci method;
  • the dichotomy method;
  • the golden-section method;
  • the step-doubling method.

Fibonacci method

First, introduce the relative coordinate of a point x on the interval as the ratio of the difference (x − a) to the difference (b − a). Then a has relative coordinate 0, b has coordinate 1, and the midpoint has coordinate ½.

If we set F0 = F1 = 1, then F2 = 2, F3 = 3, …, and Fn = Fn−1 + Fn−2, so the Fn are the Fibonacci numbers. Fibonacci search is the optimal strategy of so-called sequential extremum search precisely because it is closely tied to these numbers.

Within the optimal strategy the trial points are chosen as xn−1 = Fn−2/Fn and xn = Fn−1/Fn. Whichever of the two intervals [0, xn] or [xn−1, 1] becomes the narrowed uncertainty interval, the inherited point has relative coordinate Fn−3/Fn−1 or Fn−2/Fn−1 within it. As xn−2, a point with one of these coordinates relative to the new interval is then taken. By reusing F(xn−2), the function value inherited from the previous interval, the uncertainty interval can be narrowed while one function value is always inherited.

At the final step the search arrives at an uncertainty interval such as [0, ½], with the midpoint inherited from the previous step. As x1, a point with relative coordinate ½ + ε is placed, and the final uncertainty interval is [0, ½ + ε] or [½, 1] relative to the preceding one.

At the first step the length of the interval is reduced to Fn−1/Fn (from one). At the subsequent steps the lengths of the corresponding intervals contract by the factors Fn−2/Fn−1, Fn−3/Fn−2, …, F2/F3, F1/F2·(1 + 2ε). Thus the length of the final interval is (1 + 2ε)/Fn.

Neglecting ε, 1/Fn asymptotically behaves like r^n as n → ∞, where r = (√5 − 1)/2 ≈ 0.6180.

Thus, asymptotically for large n, each successive step of the Fibonacci search narrows the interval under consideration by this factor. This result should be compared with 0.5, the factor by which the bisection method narrows the uncertainty interval when locating a zero of a function. A runnable sketch of the method follows.
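A minimal Python sketch of Fibonacci search, assuming f is unimodal on [a, b]; the final ε-perturbation of the classic scheme is omitted for brevity, so the last two trial points coincide.

```python
def fibonacci_search(f, a, b, n=20):
    """Minimize a unimodal f on [a, b] using about n evaluations."""
    fib = [1, 1]
    for _ in range(n):
        fib.append(fib[-1] + fib[-2])        # fib[i] = F_i with F_0 = F_1 = 1
    # Initial trial points at relative coordinates F_{n-2}/F_n and F_{n-1}/F_n.
    x1 = a + (b - a) * fib[n - 2] / fib[n]
    x2 = a + (b - a) * fib[n - 1] / fib[n]
    f1, f2 = f(x1), f(x2)
    for k in range(1, n - 1):
        if f1 > f2:                          # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + (b - a) * fib[n - k - 1] / fib[n - k]
            f2 = f(x2)
        else:                                # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = a + (b - a) * fib[n - k - 2] / fib[n - k]
            f1 = f(x1)
    return (a + b) / 2

print(fibonacci_search(lambda x: (x - 1.3) ** 2, 0.0, 3.0))   # ~1.3
```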

Dichotomy method

Suppose the extremum of some objective function must be found on the interval (a; b). The abscissa axis is divided into four equal parts and the function is evaluated at the 5 grid points; the minimum among them is then selected. The extremum must lie within the interval (a′; b′) adjacent to the minimum point, so the search boundaries narrow by a factor of 2 (by a factor of 4 if the minimum falls at a or b). The new interval is again split into four equal segments; since the function values at three of the points were determined at the previous stage, the objective function has to be computed at only two new points. A sketch follows.
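A sketch of this scheme under the same unimodality assumption; for clarity all five values are recomputed each round, although three of them could be inherited as described above.

```python
def dichotomy_search(f, a, b, tol=1e-6):
    """Locate the minimum of a unimodal f on [a, b] by repeatedly splitting
    the interval into four equal parts and keeping the neighborhood of the
    best of the five grid points."""
    while b - a > tol:
        xs = [a + i * (b - a) / 4 for i in range(5)]
        fs = [f(x) for x in xs]
        i = fs.index(min(fs))                  # index of the current best point
        lo, hi = max(i - 1, 0), min(i + 1, 4)  # clamp at the interval ends
        a, b = xs[lo], xs[hi]
    return (a + b) / 2

print(dichotomy_search(lambda x: (x - 1.3) ** 2, 0.0, 3.0))   # ~1.3
```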

Golden section method

For large values of n, the relative coordinates of the points xn and xn−1 approach 1 − r ≈ 0.3820 and r ≈ 0.6180. A search strategy that starts directly from these values is very close to the optimal one.

If F(0.3820) > F(0.6180), the interval [0.3820; 1] is selected. But since 0.6180 · 0.6180 ≈ 0.3820 ≈ xn−1, the inherited point already has this relative coordinate in the new interval, so F is already known there. Consequently, from the second stage onward only one evaluation of the objective function is needed per step, and each step shortens the considered interval by the factor 0.6180.

Unlike the Fibonacci search, this method does not require fixing the number n before starting the search.

The "golden section" of a section (a; b) is a section in which the ratio of its length r to the larger part (a; c) is identical to the ratio of the larger part r to the smaller one, that is, (a; c) to (c; b). It is easy to guess that r is determined by the above formula. Consequently, for significant n, the Fibonacci method goes over to the given one.

Step doubling method

The essence is to search for a direction in which the objective function decreases and to move in that direction, with a gradually increasing step, as long as the search is successful.

First, we choose the starting point M0 of the function F(M), the minimum step h0, and the search direction, and evaluate the function at M0. Then we take a step and compute the value of the function at the new point.

If the new value is smaller than the one at the previous step, the next step is taken in the same direction with the step doubled. If it is larger, the search direction is reversed and movement restarts in the new direction with step h0. The algorithm can be modified further; one possible variant is sketched below.
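A sketch of step doubling; the stopping rule (halving the base step after failures in both directions, until it falls below tol) is an assumed modification, since the description above leaves termination open.

```python
def step_doubling_search(f, x0, h0=1e-3, tol=1e-8):
    """Descend along one coordinate: keep moving while f decreases, doubling
    the step; on failure reverse direction and restart from the base step."""
    x, fx = x0, f(x0)
    direction, h, fails = 1.0, h0, 0
    while h0 > tol:
        x_new = x + direction * h
        f_new = f(x_new)
        if f_new < fx:                      # success: accept point, double step
            x, fx, h = x_new, f_new, 2.0 * h
            fails = 0
        else:                               # failure: reverse, reset the step
            direction, h = -direction, h0
            fails += 1
            if fails == 2:                  # failed both ways: refine base step
                h0 /= 2.0
                h, fails = h0, 0
    return x

print(step_doubling_search(lambda x: (x - 1.3) ** 2, 0.0))   # ~1.3
```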

Multidimensional optimization methods

The zero-order methods mentioned above do not use the derivatives of the minimized function, so they can be effective whenever the derivatives are difficult to compute.

First-order methods are also called gradient methods, because they use the gradient of the given function, the vector whose components are the partial derivatives of the minimized function with respect to the optimized parameters, to set the search direction.

Second-order methods use second derivatives (their use is rather limited because of the difficulty of computing them).

List of unconstrained optimization methods

For multidimensional search without the use of derivatives, the unconstrained optimization methods include:

  • Hooke-Jeeves (combining two kinds of moves: exploratory and pattern search);
  • minimization over a regular simplex (searching for the minimum point of the function by comparing its values at the vertices of the simplex at each iteration);
  • cyclic coordinate descent (using the coordinate vectors as search directions);
  • Rosenbrock's method (based on repeated one-dimensional minimization);
  • minimization over a deformable simplex (a modification of the regular-simplex method that adds contraction and expansion operations).

When derivatives are used in multidimensional search, the steepest descent method stands out as the most fundamental procedure for minimizing a differentiable function of several variables; a minimal sketch follows.
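A sketch of steepest descent with a backtracking (Armijo-type) line search; the step-halving rule is an assumed implementation detail.

```python
import numpy as np

def steepest_descent(f, grad, x0, tol=1e-6, max_iter=500):
    """Minimize a differentiable f by moving against its gradient."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:            # (near-)stationary point reached
            break
        t = 1.0
        while f(x - t * g) > f(x) - 0.5 * t * (g @ g):   # backtracking search
            t *= 0.5
        x = x - t * g
    return x

f    = lambda x: x[0] ** 2 + 10.0 * x[1] ** 2
grad = lambda x: np.array([2.0 * x[0], 20.0 * x[1]])
print(steepest_descent(f, grad, [1.0, 1.0]))   # ~[0, 0]
```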

There are also methods that use conjugate directions, such as the Davidon-Fletcher-Powell (DFP) method. Its essence is to form the search directions as −Dj·grad f(y), where the matrix Dj, an approximation to the inverse Hessian, is recalculated at each step; a sketch is given below.
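A compact sketch of the DFP update combined with the same backtracking line search; the curvature safeguard (skipping the update when sᵀy is tiny) is an assumed practical detail.

```python
import numpy as np

def dfp_minimize(f, grad, x0, tol=1e-8, max_iter=200):
    """Quasi-Newton minimization with the Davidon-Fletcher-Powell update."""
    x = np.asarray(x0, dtype=float)
    H = np.eye(x.size)                      # initial inverse-Hessian estimate
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g                          # search direction -D_j grad f
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):  # backtracking
            t *= 0.5
        s = t * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sy = s @ y
        if sy > 1e-12:                      # skip update on degenerate curvature
            Hy = H @ y
            # DFP: H <- H + s s^T / (s^T y) - (H y)(H y)^T / (y^T H y)
            H = H + np.outer(s, s) / sy - np.outer(Hy, Hy) / (y @ Hy)
        x, g = x_new, g_new
    return x

f    = lambda x: x[0] ** 2 + 10.0 * x[1] ** 2
grad = lambda x: np.array([2.0 * x[0], 20.0 * x[1]])
print(dfp_minimize(f, grad, [1.0, 1.0]))    # ~[0, 0]
```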

Classification of mathematical optimization methods

Conventionally, by the dimension of the objective function, methods are divided into those:

  • with 1 variable;
  • multidimensional.

Depending on whether the function is linear or nonlinear, there is a large number of mathematical methods for finding the extremum that solves the given problem.

By the use of derivatives, mathematical optimization methods are subdivided into:

  • methods that compute the first derivative of the objective function;
  • multidimensional methods, in which the first derivative is a vector quantity, the gradient.

Based on the efficiency of the calculation, there are:

  • methods of fast extremum computation;
  • methods of simplified computation.

This is a conditional classification of the considered methods.

Optimization of business processes

Different methods can be used here, depending on the problems being solved. It is customary to single out the following methods for optimizing business processes:

  • elimination (reducing the levels of the existing process, eliminating the causes of interference and of incoming control, shortening transport routes);
  • simplification (easier order processing, reduced complexity of the product structure, distribution of work);
  • standardization (use of common programs, methods, technologies, etc.);
  • acceleration (parallel engineering, stimulation, rapid prototyping, automation);
  • change (changes in raw materials, technologies, working methods, staffing, work systems, order volume, processing procedure);
  • ensuring interaction (among organizational units, personnel and the work system);
  • selection and inclusion (of the necessary processes and components).

Tax optimization: methods

Russian legislation gives the taxpayer ample opportunities to reduce taxes, so it is customary to distinguish between general (classical) and special methods of minimizing them.

The general tax optimization methods are as follows:

  • elaborating the company's accounting policy with the fullest possible use of the opportunities provided by Russian legislation (the procedure for writing off low-value and short-lived items (IBE), the choice of method for recognizing revenue from the sale of goods, etc.);
  • optimization through contracts (concluding preferential transactions, clear and competent use of wording, etc.);
  • application of various kinds of benefits and tax exemptions.

The second group of methods can also be used by all firms, but they still have a fairly narrow scope. Special tax optimization methods are as follows:

  • replacement of relations (an operation subject to onerous taxation is replaced by another that achieves a similar goal while enjoying a preferential tax treatment);
  • separation of relations (replacement of only part of a business transaction);
  • deferral of tax payment (postponing the moment the object of taxation arises to another calendar period);
  • direct reduction of the object of taxation (eliminating many taxable transactions or items of property without adversely affecting the company's main economic activity).

Federal Agency for Education GOU VPO "Ural State Technical University - UPI "PARAMETRIC OPTIMIZATION OF RADIO ELECTRONIC CIRCUITS Methodical instructions for laboratory work on the course" Computer analysis of electronic circuits "for students of all forms of training in the specialty 200700 - Radio engineering Yekaterinburg 2005 UDC 681,3,06: 621.396.6 Compiled by V.V. Kiikov, V.F. Kochkin, K.A. Vdovkin Scientific editor Assoc., Cand. tech. Sciences V.I. Gadzikovsky PARAMETRIC OPTIMIZATION OF RADIO ELECTRONIC CIRCUITS: guidelines for laboratory work on the course "Computer analysis of electronic circuits" / comp. V.V. Kiiko, V.F. Kochkin, K.A. Vdovkin. Ekaterinbug: GOU VPO USTU-UPI, 2005.21p. The methodological instructions contain information about the formulation of optimization problems, optimality criteria, and the theory of finding the minimum of the objective function. An overview of parametric optimization methods is given, the Hook-Jeeves method is described in detail, and questions for self-control are given. Bibliography: 7 titles. Rice. 6. Prepared by the Department of Radioelectronics of Information Systems.  GOU VPO "Ural State Technical University-UPI", 2005 2 CONTENTS PURPOSE OF WORK ................................. .................................................. ........................ 4 1. GENERAL INSTRUCTIONS .................... ...................................... 4 2. THEORY OF OPTIMIZATION ....... .................................................. ......................... 4 2.1. Formal (mathematical) formulation of the optimization problem ............. 4 2.2. Statement of the problem of parametric optimization of radio electronic devices ............................ 5 2.3. Optimality criteria ................................................ ................................... 7 2.4. Strategy for solving the problems of optimal design of radio electronic devices ................ 9 2.5. Global search algorithms ............................................... ................... 9 2.5.1. Random search algorithm ............................................... ........................ 10 2.5.2. Monotone Global Search Algorithm ............................................. 10 2.5.3. Gray code grid scanning algorithm ............................................ . 10 2.6. Local search methods and algorithms ............................................. ........ 11 2.6.1. Direct methods ................................................ ............................................... 11 2.6. 2. Gradient optimization methods of the first order ............................. 13 2.6.3. Gradient second-order optimization methods ....... ...................... 13 3. DESCRIPTION OF THE ANALYSIS COMPUTER PROGRAM .................. 15 3.1. Starting the program ................................................ ............................................. 15 3.2. Drawing up an optimization task .............................................. ............ 15 3.3. Optimization results ................................................ ................................. 17 4. CONTENT OF THE LABORATORY WORK ........... .................................... 19 4.1. Procedure ................................................ ........................................ 19 4.2. Assignment for laboratory work .............................................. ......................... 19 5. GUIDELINES FOR PREPARATION OF INITIAL DATA ................ .................................................. 
.................................................. 20 6. CONTENT OF THE REPORT ............................................. ................................... 20 7. QUESTIONS FOR SELF-CONTROL ......... .................................................. . 20 REFERENCES .............................................. ............................................. 21 3 PURPOSE OF WORK Receive presentation and practical skills of parametric optimization of electronic equipment in automated circuit design of electronic equipment (REA). 1. GENERAL METHODOLOGICAL INSTRUCTIONS This work is the third in a set of laboratory works on methods of calculation, analysis and optimization of electronic circuits. The complex includes the following works: 1. Calculation of electronic circuits by the method of nodal potentials. 2. Analysis of electronic circuits by the modified method of nodal potentials. 3. Parametric optimization of electronic circuits. 4. Analysis of electronic circuits using circuit functions. In the first and second laboratory works, frequency analysis was carried out, the sensitivity of the voltage gain to variations in internal parameters was determined, the transient and impulse characteristics were calculated at the nominal values ​​of the parameters of the REM elements, which were initially selected (set or calculated) not in the best way. In this work, the parametric optimization of the designed REM is carried out to ensure that the output parameters comply with the requirements of the technical specification. 2. THEORY OF OPTIMIZATION 2.1. Formal (mathematical) formulation of the optimization problem Optimization of parameters (parametric optimization) is usually called the problem of calculating the optimal nominal values ​​of the internal parameters of the design object. The problems of optimization of parameters in CAD of electronic equipment are reduced to problems of mathematical programming extr F (X), XXD, (1) where XD = (XX0 | k (X) ≥ 0, r (X) = 0, k , r ). The vector X = (x1, x2,.... Xn) is called the vector of controlled (varied) parameters; F (X) - whole function (quality function); XD - admissible area; X0 is the space in which the objective function is defined; k (X) and r (X) are constraint functions. 4 Verbal formulation of problem (1): find the extremum of the objective function F (X) within the domain XD bounded in the space X0 N by the inequalities k (X) ≥ 0 and M by the equalities r (X) = 0. The objective function must be formulated based on the available ideas about the quality of the designed object: its value should decrease with quality improvement, then in (1) minimization is required (extr is min), or increase, then maximization is required in (1) (extr is max). Constraints are inequalities of the form xi> xi min or xi< xi max , называют прямыми ограничениями, где xi min и xi max - заданные константы, остальные ограничения называют функциональными. Задача поиска максимума, как правило, сводится к задаче поиска минимума путем замены F(Х) на -F(Х). Функция F(Х) имеет локальный минимум в точке Х0, если в малой окрестности этой точки F(Х) ≥ F(Х0). И функция F(Х) имеет глобальный минимум в точке Х*, если для всех Х справедливо неравенство F(Х) ≥ F(Х*). Классическая теория оптимизации подробно изложена в соответствующей литературе, например . Ниже основное внимание уделено применению теории оптимизации для поиска оптимальных решений при проектировании радиоэлектронной аппаратуры. 2.2. 
2.2. Statement of the problem of parametric optimization of radio electronic devices
Solving a design problem is usually connected with choosing, from some admissible set of solutions, the optimal variant of the device, the one that best satisfies the requirements of the technical specification. Effective solution of such problems rests both on formal search methods of optimization and on informal ways of making optimal design decisions. Therefore, the solution of optimal design problems must be considered not only in the computational aspect but, rather, in the creative one, taking into account the experience and knowledge of the circuit engineer at all stages of computer-aided design.

One of the most difficult operations in solving optimal design problems is the stage of the mathematical formulation of the problem, which includes the choice of the optimality criterion, the determination of the varied parameters, and the specification of the constraints imposed on the varied parameters.

Among the problems of circuit design that are expedient to solve with optimization methods, the following problems of parametric synthesis and optimization are distinguished:
- determination of the parameters of circuit components that provide extremal characteristics under given constraints;
- determination of the parameters of functional units of circuits proceeding from the requirements of the technical specification on the characteristics of the device as a whole;
- adaptation of existing circuit solutions in order to select parameters that satisfy new requirements on the circuit;
- refinement of the values of the parameters of circuit components obtained as a result of manual engineering calculation.

For circuits of receiving and amplifying equipment, optimization is carried out with respect to such output parameters as:
- gain and bandwidth;
- shape of the frequency response;
- stability of the amplifier or active filter;
- delay time, pulse rise time.

Note. The class of problems connected with determining the values of component parameters for which the designed circuit satisfies the set of conditions of the technical specification is usually called parametric synthesis (with respect to the parameters being determined) or parametric optimization (with respect to the characteristics being realized).

In any of the listed problems, the realized characteristics of the designed device are functions of the vector of varied (tuned) parameters, which constitute some subset of the complete set of parameters of the circuit components. The goal of parametric synthesis or optimization is to determine the vector of parameters X that provides the best correspondence of the device characteristics Y = Y(X) to the requirements of the technical specification. To solve this problem it is necessary, first of all, to choose a formal criterion for assessing the quality of each variant of the designed device, one that would make it possible to distinguish the variants from one another and to establish preference relations between them. Such an assessment can be represented by a functional dependence of the form F(X) = F(Y(X)), usually called the optimality criterion, the quality function, or the objective function. The problem of searching for the parameters of the circuit components thus reduces to the classical optimization problem: finding the extremum of some quality function F(X) in the presence of constraints (equalities, inequalities, or two-sided bounds) imposed on the varied parameters and the characteristics of the designed circuit.
The diverse optimization problems for analog radio electronic circuits have common features, the main ones being:
- the multicriterial nature of the optimization problems;
- the absence of explicit analytical dependences of the output parameters on the internal parameters: the connection between the internal and external parameters is expressed by systems of equations and can be evaluated quantitatively only through the numerical solution of these systems.
These features cause the difficulties of formulating and solving optimization problems for analog radio electronic circuits.

2.3. Optimality criteria
In the process of searching for the optimal solution, a particular form of the optimality criterion may turn out to be preferable for each specific problem. A basic set of optimality criteria, which makes it possible to satisfy the diverse requirements of the circuit engineer on the optimized characteristics of designed devices, is described in the literature.

Thus, to find the extremum (minimum or maximum) of a quality index such as the power consumed by the circuit or the cutoff frequency, the value of the optimality criterion is used without transformation:

F1(X) = Y(X). (2)

In problems requiring maximal correspondence between the optimized characteristic and some desired one, for example when optimizing frequency responses, it is most expedient to use the mean-square-deviation criterion

F2(X) = ⟨(Y(X) - Y*)²⟩, (3)

where Y* is the desired characteristic value, or the value required by the technical specification, and ⟨·⟩ denotes averaging. For a characteristic given by a discrete set of points, the objective function is

F2(X) = (1/N) Σ_{i=1..N} αi (Y(X, pi) - Yi*)², (4)

where N is the number of sampling points of the independent variable p; Y(X, pi) is the value of the optimized characteristic at the i-th point of the sampling interval; αi is the weighting factor of the i-th value of the optimized characteristic, reflecting the importance of the i-th point in comparison with the others (as a rule, 0 < αi ≤ 1). Minimization of functions (3) and (4) ensures closeness of the characteristics in the sense of the standard deviation. Function (4) is used when Y(X) is computed by numerical methods.

In some optimization problems it is necessary to ensure that the optimized characteristic exceeds, or does not exceed, a certain predetermined level. These optimality criteria are implemented by the following functions. To ensure that a specified level is exceeded:

F3(X) = 0 for Y(X) ≥ YH*; F3(X) = (YH* - Y(X))² for Y(X) < YH*. (5)

To ensure that a specified level is not exceeded:

F4(X) = 0 for Y(X) ≤ YB*; F4(X) = (Y(X) - YB*)² for Y(X) > YB*, (6)

where YH* and YB* are the lower and upper boundaries of the admissible region for the characteristic Y(X). If it is necessary that the optimized characteristic pass within a certain admissible zone (corridor), a combination of the two previous optimality criteria is used:

F5(X) = 0 for YH* ≤ Y(X) ≤ YB*; F5(X) = (Y(X) - YB*)² for Y(X) > YB*; F5(X) = (YH* - Y(X))² for Y(X) < YH*. (7)

In those cases when only the shape of the curve is to be realized, while a constant vertical displacement is ignored, the shift criterion is used:

F6(X) = Σ_{i=1..N} αi (Yi* - Y(X, pi) - Yav)², (8)

where Yav = (1/N) Σ_{i=1..N} (Yi* - Y(X, pi)).
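Criterion (4) is easy to evaluate numerically. The following minimal Python sketch assumes a user-supplied model Y_model(x, p) that returns the characteristic at a single sample point; all names here are illustrative assumptions, not part of the original methodology.

import numpy as np

def f2(x, Y_model, p, Y_star, alpha=None):
    """Weighted mean-square criterion (4): distance between the optimized
    characteristic Y(X, p_i) and the desired values Y_i* at N points."""
    p = np.asarray(p, dtype=float)
    Y_star = np.asarray(Y_star, dtype=float)
    alpha = np.ones_like(Y_star) if alpha is None else np.asarray(alpha, dtype=float)
    Y = np.array([Y_model(x, pi) for pi in p])  # characteristic at each sample point
    return np.mean(alpha * (Y - Y_star) ** 2)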
Important characteristics of the computational process, first of all the convergence of the optimization, depend on the form of the objective function. The signs of the derivatives of the objective function with respect to the controlled parameters do not remain constant over the entire admissible region, and for objective functions of the form (4) and (8) this circumstance gives them a ravine-like character. Thus, a feature of the objective functions arising in circuit design problems is their ravine character, which leads to high computational costs and requires special attention to the choice of the optimization method. Another feature of these objective functions is that they are usually multi-extremal: along with the global minimum there are local minima.

A peculiarity of the problems of optimization of electronic circuits is that the internal parameters cannot take arbitrary values. Thus, the values of resistors and capacitors are limited by certain maximum and minimum values. In addition, among several external parameters one can usually single out one main parameter, with respect to which optimization is carried out, while for the others admissible limits of variation are indicated.

An optimization problem with constraints is reduced to an optimization problem without constraints by introducing penalty functions. In this case, the objective function takes the form

Φ(X) = Fi(X) + Σ_{r=1..M} αr (ψr(X))² + Σ_{k=1..N} βk (φk(X))², (9)

where αr and βk are numerical coefficients that take into account the importance of a particular constraint relative to the others. They are equal to zero if the corresponding constraint from (1) is satisfied and take certain positive values otherwise; Fi(X) is one of the quality functions described by relations (2)-(8). Thus, leaving the admissible region XD increases the minimized function, and the intermediate solutions Xj are held by a "barrier" on the boundary of the region XD. The height of the "barrier" is determined by the values of α and β, which in practice vary within wide limits (from 1 to 10^10). The larger α and β, the less likely the search is to leave the admissible region; at the same time, the steepness of the ravine slope at the boundary also increases, which slows down or completely destroys the convergence of the minimization process. Since optimal values of α and β cannot be specified in advance, it is advisable to start the optimization with small values and then increase them if a solution outside the admissible region is obtained.
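A minimal Python sketch of construction (9), assuming the inequality constraints are supplied as callables with the convention phi(X) >= 0 and the equalities as psi(X) = 0; the names, coefficient values, and test problem are illustrative assumptions.

import numpy as np
from scipy.optimize import minimize

def make_penalized(F, ineq=(), eq=(), beta=1.0, alpha=1.0):
    """Unconstrained objective Phi(X) of the form (9): quadratic penalties
    are added only where a constraint is violated."""
    def phi(x):
        val = F(x)
        for g in ineq:                       # convention: g(x) >= 0 required
            val += beta * min(0.0, g(x)) ** 2
        for h in eq:                         # convention: h(x) = 0 required
            val += alpha * h(x) ** 2
        return val
    return phi

# Example: minimize (x1 - 1)^2 + (x2 - 2)^2 subject to x1 + x2 <= 2.
F = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
g = lambda x: 2.0 - x[0] - x[1]              # rewritten in the g(x) >= 0 form
res = minimize(make_penalized(F, ineq=[g], beta=1e3), x0=np.zeros(2),
               method='Nelder-Mead')

Consistent with the advice above, one would start with small coefficients and re-run with larger ones if the solution lands outside the admissible region.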
2.4. Strategy for solving the problems of optimal design of radio electronic devices
The problems of optimal design of radio electronic devices have specific features, which include the multi-extremality and ravine character of the quality function, the presence of constraints on the internal and output parameters of the designed device, and the large dimension of the vector of varied parameters. The strategy for solving optimal design problems provides for the use of global optimization procedures at the initial stages of the search and the refinement of the obtained global solution by local algorithms that converge rapidly in the vicinity of the optimal point. This strategy makes it possible, first, to determine the value of the global extremum with sufficient reliability and accuracy and, second, to significantly reduce the computational cost of the search. In this case, the stages of the global search can be performed with low accuracy, while the stages of local refinement are carried out in the region of attraction of the global extremum, which requires a much smaller number of calculations.

2.5. Global search algorithms
Global search algorithms, as a rule, give a rather rough estimate of the global extremum at a low computational cost and require a significant increase in the number of computations to obtain a more accurate estimate of the extremum position.

2.5.1. Random search algorithm
The simplest global extremum search algorithm, from the point of view of implementing the computational process, is based on probing the admissible region XD with a sequence of points uniformly distributed in it and selecting the best variant from those obtained. The quality of the algorithm is largely determined by the properties of the generator of uniformly distributed random numbers used to generate the vectors X ∈ XD. A minimal sketch of this probing scheme is given below.
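The following Python sketch probes a box-shaped admissible region; the box bounds, the number of trial points, and all names are assumptions of the sketch.

import numpy as np

def random_search(F, lo, hi, n_points=10_000, seed=0):
    """Probe the box [lo, hi] with uniformly distributed points
    and keep the best one found."""
    rng = np.random.default_rng(seed)
    lo = np.asarray(lo, dtype=float)
    hi = np.asarray(hi, dtype=float)
    best_x, best_f = None, np.inf
    for _ in range(n_points):
        x = lo + (hi - lo) * rng.random(lo.size)  # uniform point in the box
        fx = F(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

In line with the strategy of Section 2.4, the rough estimate obtained this way would then be refined by one of the local methods of Section 2.6.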
2.5.2. Monotone global search algorithm
Multidimensional optimization by this algorithm is based on the construction of a sweep (a Peano curve) that maps a segment of the real axis onto the hypercube of the admissible region XD. The sweep provides a single-valued and continuous mapping X(λ) which, for any point λ ∈ [0, 1], yields a point X ∈ XD. The problem of minimizing F(X) in the region XD is then equivalent to finding the minimum λ* of the one-dimensional function Φ(λ) = F(X(λ)). To carry out the global one-dimensional minimization of Φ(λ) on the interval [0, 1], the optimization subsystem of the VISP circuit design system uses a monotone modification of the global search algorithm, which implements a monotone transformation of the form

Ψ(λ) = (1 + [1 + Φ(λ)]²)^0.5, (10)

which preserves the location of the global extremum point but makes the function smoother. The algorithm gives a fairly good estimate of the global extremum within the first 50-100 iterations. The best results are obtained when the number of variables does not exceed 5-7. In a number of cases, better results can be obtained for this algorithm by transforming the search space according to a logarithmic law. Such a transformation is especially effective when the search boundaries differ by several orders of magnitude, which is important in problems of optimization of electronic equipment, and when the extremum is located near the boundaries of the region.

2.5.3. Gray code grid scanning algorithm
The main idea of the method is to sequentially narrow a specific search region using characteristic rays containing test points, as the received information is accumulated and processed. The scanning is carried out on a special grid specified by the binary Gray code. The search sphere on the Gray code grid in this algorithm differs from the traditional one (a circle, when the number of variables equals 2) in that it has characteristic rays in addition to the circle. The rays are directed from the center of the sphere to the boundaries of the region XD and thus, as it were, "shine through" the entire region up to its boundaries. The algorithm has a single adjustable parameter, the sensitivity of the quality function to parameter variations, which is used to determine the discreteness step for each of the variables.

2.6. Local search methods and algorithms
Local search methods and algorithms most often find the nearest local extremum, and the trajectory of their movement strongly depends on the choice of the starting point and on the character of the objective function.

2.6.1. Direct methods
Zero-order methods (direct methods) do not have a strict mathematical foundation and are based on reasonable suggestions and empirical data. The simplest zero-order method is the coordinate descent (Gauss-Seidel) method. At each step, all variables are fixed except one, with respect to which the minimum of the objective function is sought. Optimization is achieved by sequential enumeration of the variables. This algorithm turns out to be ineffective if the objective function contains expressions of the type x1x2. For circuit design problems, in which an analytical expression for the objective function cannot be obtained and a complex dependence on the circuit components is characteristic, this method is usually inapplicable.

Among the zero-order methods, in the case of ravine objective functions good results are given by the Rosenbrock method, which combines the ideas of coordinate descent with the ideas of coordinate transformation. The best direction of the extremum search is movement along the ravine; therefore, after the first cycle of coordinate descent, the coordinate axes are rotated so that one of them coincides with the direction of the ravine Xk - Xk-n, k = n, 2n, 3n, .... Rosenbrock's method does not provide information about reaching the minimum point; the computation therefore stops either after the decrease of F(X) becomes less than a certain small number ε, or after a certain number of cycles.

The Hooke-Jeeves method was developed in 1961 but is still very effective and original. The search for the minimum of the objective function consists of a sequence of exploratory search steps around a base point followed, if successful, by a pattern search. The procedure consists of the following steps:
1. Choose an initial base point b1 and a step of length hj for each variable xj, j = 1, 2, ..., n, of the scalar objective function F(X).
2. Compute F(X) around the base point b1 in order to obtain information about the local behavior of the function. This information will be used to find the direction of the pattern search, with whose help one can hope to achieve a larger decrease in the value of the function. The exploration around the base point b1 proceeds as follows:
a) the value F(b1) is calculated at the base point b1;
b) each variable is changed in turn by the corresponding step. Thus, the value F(b1 + h1e1) is calculated, where e1 is the unit vector in the direction of the x1 axis. If this leads to a decrease in the function value, b1 is replaced by b1 + h1e1. Otherwise, the value F(b1 - h1e1) is calculated, and if it has decreased, b1 is replaced by b1 - h1e1. If neither of the steps taken leads to a decrease in the function value, the point b1 remains unchanged and changes in the direction of the x2 axis are considered: the value F(b1 + h2e2) is found, and so on. When all n variables have been considered, a new base point b2 is determined;
c) if b2 = b1, that is, no decrease of F(X) has been achieved, the exploration continues around the same base point b1 but with a reduced step length. In practice the step is, as a rule, reduced by a factor of 10 from the initial length;
d) if b2 ≠ b1, a pattern search is performed.
3. During the pattern search, the information obtained in the course of the exploration is used: minimization of the objective function is continued by searching in the direction specified by the pattern. This procedure is performed as follows:
a) the movement is carried out from the base point b2 in the direction b2 - b1, since the search in this direction has already led to a decrease in the value of F(X). Therefore, the function value is calculated at the pattern point P1 = b2 + (b2 - b1), or, in the general case, Pi = 2bi+1 - bi;
b) an exploration is performed around the point P1 (Pi);
c) if the smallest value found at step 3b is less than the value at the base point b2 (in the general case bi+1), then a new base point b3 (bi+2) is obtained, after which step 3a is repeated; otherwise, the pattern search from the point b2 (bi+1) is abandoned and the exploration around it continues.
4. The process of finding the minimum is completed when the step length is reduced to the specified small value.
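Steps 1-4 translate almost directly into code. The following Python sketch implements the exploratory and pattern moves as described, with the tenfold step reduction recommended above; the stopping tolerance, the iteration cap, and the test function are assumptions of the sketch.

import numpy as np

def explore(F, x, h):
    """Exploratory search (step 2): vary each coordinate in turn by +/- h[j],
    keeping any change that decreases F."""
    x, fx = x.copy(), F(x)
    for j in range(x.size):
        for step in (h[j], -h[j]):
            trial = x.copy()
            trial[j] += step
            ft = F(trial)
            if ft < fx:
                x, fx = trial, ft
                break
    return x, fx

def hooke_jeeves(F, b1, h, eps=1e-6, max_iter=10_000):
    base = np.asarray(b1, dtype=float)
    h = np.asarray(h, dtype=float)
    f_base = F(base)
    for _ in range(max_iter):
        new, f_new = explore(F, base, h)               # explore around the base
        if f_new < f_base:                             # step 2d: b2 != b1
            while True:                                # step 3: pattern search
                pattern = new + (new - base)           # P_i = 2*b_{i+1} - b_i
                cand, f_cand = explore(F, pattern, h)  # step 3b
                if f_cand < f_new:                     # step 3c: new base point
                    base, f_base = new, f_new
                    new, f_new = cand, f_cand
                else:
                    break                              # abandon the pattern
            base, f_base = new, f_new
        elif np.all(h < eps):                          # step 4: stop
            break
        else:
            h = h / 10.0                               # step 2c: reduce the step
    return base, f_base

# Example: minimize a ravine-like function from a rough starting point.
rosen = lambda x: 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2
x_min, f_min = hooke_jeeves(rosen, b1=[-1.2, 1.0], h=[0.5, 0.5])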
2.6.2. First-order gradient optimization methods
Methods for finding an extremum using derivatives have a rigorous mathematical foundation. It is known that, locally, there is no better search direction than movement along the gradient. Among the gradient methods, one of the most effective is the Fletcher-Powell (conjugate gradient) method, which is a variation of the steepest descent method.

The steepest descent method consists of the following stages:
1) a starting point is specified (vector Xk, k = 0);
2) F(Xk) and ∇F(Xk) are calculated;
3) X is changed in the direction Sk = -∇F(Xk) until F(X) stops decreasing;
4) k = k + 1 is set, a new value of ∇F(Xk) is calculated, and the process is repeated from stage 3.
The disadvantage of the method is that, for ravine functions, the approach to the minimum has a zigzag character and requires a large number of iterations.

The essence of the Fletcher-Powell method is that at all iterations, beginning with the second (at the first iteration the method coincides with steepest descent), the previous values of ∇F(X) are used to determine the new direction vector

Sk = -∇F(Xk) + dk Sk-1, where dk = [∇F(Xk)]ᵀ ∇F(Xk) / ([∇F(Xk-1)]ᵀ ∇F(Xk-1)). (11)

This eliminates the zigzag character of the descent and accelerates convergence. The algorithm is easy to program and requires a moderate amount of machine memory (only the previous search direction and the previous gradient need to be stored).
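A minimal Python sketch of scheme (11); the exact one-dimensional minimization of stage 3 is replaced here by a simple backtracking rule, which is an assumption of the sketch rather than part of the method's description.

import numpy as np

def conjugate_gradient(F, grad, x0, tol=1e-8, max_iter=500):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    s = -g                                   # first iteration: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        t, fx = 1.0, F(x)                    # crude backtracking search along s
        while F(x + t * s) >= fx and t > 1e-14:
            t *= 0.5
        x_new = x + t * s
        g_new = grad(x_new)
        d = (g_new @ g_new) / (g @ g)        # coefficient d_k from (11)
        s = -g_new + d * s                   # new direction S_k
        x, g = x_new, g_new
    return x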
2.6.3. Second-order gradient optimization methods
An iterative method based on knowledge of the second derivatives is generally known as Newton's method. Let the function F(X) be expanded in a Taylor series with three terms retained:

F(Xk + ΔX) ≈ F(Xk) + (ΔX)ᵀ ∇Fk + (1/2)(ΔX)ᵀ Gk ΔX, (12)

where Gk is the Hessian matrix, the matrix of second derivatives with elements ∂²F/∂xi∂xj. The step ΔX is chosen so as to minimize the right-hand side of (12). This can be done by differentiating (12) with respect to ΔX and equating the result to zero:

∇Fk + Gk ΔX = 0, whence Gk ΔX = -∇Fk.

This equation can be solved, for example, by the LU-decomposition method with respect to ΔX. Formally one can write ΔX = -(Gk)⁻¹ ∇Fk = -Hk ∇Fk, where H = G⁻¹. The search direction is now taken to coincide with the vector

Sk = ΔXk = -Hk ∇Fk. (13)

Near the minimum the Hessian matrix is positive definite, and the full step dk = 1 can be used (that is, no search in the direction Sk is needed). However, far from the minimum the Hessian matrix may fail to be positive definite; moreover, computing this matrix is expensive. Therefore a whole class of other methods, called variable-metric or quasi-Newton methods, has been developed that is free of these drawbacks. These methods were developed quite a long time ago, but have been generalized only relatively recently. They are based on estimates of the gradients and on an approximation of the Hessian matrix or of its inverse. The approximation is achieved by modifying an initial positive definite matrix in a special way that preserves positive definiteness. Only when the minimum is reached does the resulting matrix approximate the Hessian matrix (or its inverse). In all methods of this class the search direction is determined as in Newton's method (13), and at each iteration the matrix Hk+1 is obtained from the matrix Hk by a special formula. As an example we give the formula obtained by Davidon, Fletcher and Powell, sometimes called the DFP formula:

Hk+1 = Hk + ΔX (ΔX)ᵀ / ((ΔX)ᵀ γ) - Hk γ γᵀ Hk / (γᵀ Hk γ), (14)

where γk = ∇Fk+1 - ∇Fk. This formula is applicable only if (ΔX)ᵀ γ ≠ 0 and γᵀ Hk γ ≠ 0.
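Update (14) is a short matrix computation. A Python sketch, assuming dx = Xk+1 - Xk and gamma = grad F(Xk+1) - grad F(Xk) are already available; the degeneracy guard mirrors the applicability conditions stated above.

import numpy as np

def dfp_update(H, dx, gamma):
    """One DFP update (14) of the inverse-Hessian approximation H."""
    dg = dx @ gamma
    Hg = H @ gamma
    gHg = gamma @ Hg
    if abs(dg) < 1e-12 or abs(gHg) < 1e-12:   # formula (14) inapplicable
        return H
    return H + np.outer(dx, dx) / dg - np.outer(Hg, Hg) / gHg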
3. DESCRIPTION OF THE COMPUTER ANALYSIS PROGRAM
The program has a convenient graphical user interface for working in the Windows operating system environment. The initial description of the optimized electronic circuit is the information in the file created during the second laboratory work. After this file is loaded and the elements for optimization are selected, the program calculates the new values of the elements. The criterion of the correctness of the calculations is the value of the minimum of the objective function, which is calculated as the weighted standard deviation of the required and actual characteristics of the device: the amplitude-frequency, transient, or impulse characteristic. The program has a standard set of controls (menus, toolbars, and so on). A report on the laboratory work, in HTML format, is generated automatically.

Note. After the dialog boxes are filled in, the <Next> button is pressed. If the result displayed in the next window is unsatisfactory, pressing the <Back> button allows you to return to the previous steps and change the search conditions.

3.1. Starting the program
When the program starts, a window opens in which, in the File menu, you must open the file saved after completing the second laboratory work (Fig. 1).

3.2. Drawing up an optimization task
The file with the circuit description contains the parameters of the elements, including the equivalent circuit of the transistor. In the left window it is necessary to select the variable parameters for parametric optimization. The required characteristic, for example the frequency response, is specified by frequency values (in Hz) and the corresponding gain values (in dB). At the next stage, the initial step of variation of the parameters during optimization is set (Fig. 2).

Fig. 1. Window for opening the input file. Fig. 2. Window for selecting the optimization values.

3.3. Optimization results
At the next stage, the program presents the calculation results:
- the minimum of the objective function;
- the parameters of the varied elements before and after optimization;
- the number of evaluations of the objective function;
- the number of step-length reductions and pattern searches.
The criterion of the correctness of the obtained results is the value of the minimum of the objective function. For a bipolar transistor it should be approximately 10^-7 to 10^-8, and for a field-effect transistor 10^-4 to 10^-5 (Fig. 3). If the optimization results are satisfactory, the next stage is the construction of the amplitude-frequency or time characteristics (Fig. 4, 6). To accurately determine the transmission bandwidth of the device, i.e. the upper and lower cutoff frequencies, as well as the duration of the transient processes, tables of calculated values are provided (Fig. 5).

Fig. 3. Calculations window after optimization. Fig. 4. Window for constructing the frequency response. Fig. 5. Frequency response values in a table. Fig. 6. Window of the time characteristics.

4. CONTENT OF THE LABORATORY WORK
4.1. Procedure
1. The preparatory stage includes familiarization with the methodological instructions for the laboratory work and the study of optimization theory from the lecture notes, the literature, and Section 2 of these instructions.
2. The second stage includes the theoretical work: forming the requirements for the optimized characteristics of the device, and selecting the element or elements of the circuit whose parameters are to be varied in the optimization.
3. Loading the optimization program with the description of the optimized circuit and the task for parametric optimization.
4. Performing the optimization.
5. Calculating the characteristics of the circuit with the optimized parameters.
6. The final stage, at which the characteristics of the device before and after optimization are compared. Based on the materials obtained, a report is drawn up on A4 (297x210 mm) sheets with the obligatory attachment of printouts of the results.

4.2. Assignment for laboratory work
1. Based on the results of the analysis of the amplifier's frequency response obtained in the second laboratory work, form the requirements for the ideal frequency response. Select the method for setting the ideal frequency response and the coordinates of the points on the frequency response graph.
2. Determine the group of elements whose parameters are to be used for the optimization.

5. GUIDELINES FOR PREPARATION OF INITIAL DATA
5.1. From the frequency response graph calculated in the second laboratory work, the upper and lower cutoff frequencies are determined, as well as the effect of the high-frequency inductive correction.
5.2. Using knowledge of the circuitry of amplifying devices, the components are determined whose parameters set the upper and lower cutoff frequencies.
5.3. The ideal characteristic (the one required by the technical specification) is plotted on the frequency response graph, and the optimization points are selected. To preserve the shape of the frequency response in the passband, points must also be selected in that part of the characteristic.

6. CONTENTS OF THE REPORT
1. Purpose of the work.
2. Initial data in the form of the circuit diagram of the amplifying stage and the parameters of its elements before optimization.
3. Listing of the results of the machine analysis.
4. Analysis of the results. Conclusions.

7. QUESTIONS FOR SELF-CONTROL
1. Name a necessary and sufficient condition for the existence of a minimum of a function.
2. What matrix is called positive definite?
3. Why is the objective function called a quality function?
4. Name the main property of the objective function.
5. What problems are called parametric synthesis, and what problems are called parametric optimization?
6. In what cases does the problem of the numerical search for the minimum of the objective function belong to nonlinear programming problems?
7. How do gradient methods for finding the extremum of a function differ from direct methods?
8. Explain the concepts of global and local minimum.
9. What are the reasons for the constraints in the parametric optimization of radio electronic devices?
10. Explain the coordinate descent method.
11. How does the conjugate gradient method differ from the steepest descent method?
12. What does "pattern search" mean in the Hooke-Jeeves method?
13. What are the criteria for terminating the iterative optimization process?

REFERENCES
1. Computer-aided design systems in radio electronics: Handbook / E.V. Avdeev, A.T. Eremin, I.P. Norenkov, M.I. Peskov; ed. I.P. Norenkov. Moscow: Radio i Svyaz, 1986. 368 p.
2. Bundy B. Optimization methods: An introductory course. Translated from English. Moscow: Radio i Svyaz, 1988. 128 p.
3. Vlah I., Singhal K. Machine methods of analysis and design of electronic circuits. Moscow: Radio i Svyaz, 1988. 560 p.
4. Collection of problems on microcircuitry: Computer-aided design: Textbook for universities / V.I. Anisimov, P.P. Azbelev, A.B. Isakov et al.; ed. V.I. Anisimov. Leningrad: Energoatomizdat, Leningrad Branch, 1991. 224 p.
5. Dialogue systems of circuit design / V.N. Anisimov, G.D. Dmitrievich, K.B. Skobeltsyn et al.; ed. V.N. Anisimov. Moscow: Radio i Svyaz, 1988. 288 p.
6. Razevich V.D., Rakov V.K., Kapustyan V.I. Machine analysis and optimization of electronic circuits: A study guide for the courses "Amplifier Devices" and "Radio Receivers". Moscow: MEI, 1981. 88 p.
7. Tabueva V.A. Mathematics, mathematical analysis: A textbook on mathematical analysis. Yekaterinburg: USTU-UPI, 2001. 494 p.
8. Kiiko V.V., Kochkin V.F., Vdovkin K.A. Analysis of electronic circuits by the modified method of nodal potentials. Yekaterinburg: USTU-UPI, 2004. 31 p.

In practice, we constantly encounter situations in which some result can be achieved not in one but in many different ways. An individual may find himself in such a situation when, for example, he decides how to distribute his expenses; so may an entire enterprise, or even an industry, when it must determine how to use the resources at its disposal in order to achieve maximum output; and so, finally, may the national economy as a whole. Naturally, when there are many possible solutions, the best one must be chosen.

The success of solving the overwhelming majority of economic problems depends on finding the best, most profitable way of using resources, and the final result of an activity depends on how these, as a rule, limited resources are distributed.

The essence of optimization methods (optimal programming) is to choose, given the availability of certain resources, the method of their use (distribution) that ensures the maximum or minimum of the indicator of interest.

A prerequisite for using the optimal approach to planning (the principle of optimality) is the flexibility and multiplicity of alternatives in the production and economic situations in which planning and management decisions have to be made. It is precisely such situations that, as a rule, make up the daily practice of an economic entity (choosing a production program, assigning suppliers, routing, cutting materials, preparing mixtures).

Optimal programming thus provides a successful solution to a number of extremal problems of production planning. In the field of macroeconomic analysis, forecasting, and planning, optimal programming makes it possible to choose the variant of a national economic plan (development program) characterized by the optimal ratio of consumption and savings, the optimal share of industrial investment in national income, the optimal ratio of the growth rate and the profitability of the national economy, and so on.

Optimal programming yields practically valuable results because, by its nature, it fully corresponds to the character of the technical and economic processes and phenomena under study. From a mathematical and statistical point of view, the method is applicable only to phenomena that are expressed in positive quantities and that, taken together, form a union of interdependent but qualitatively different quantities. These conditions are, as a rule, met by the quantities that characterize economic phenomena. An economic researcher always has before him a set of positive quantities of various kinds; in solving optimization problems, the economist always deals not with one but with several interdependent quantities or factors.

Optimal programming can be applied only to problems in which the optimal result is sought for precisely formulated goals and under well-defined constraints, usually arising from the available means (production capacity, raw materials, labor resources, etc.). The statement of the problem usually includes a mathematically formulated system of interdependent factors, resources, and conditions limiting the character of their use.

The problem becomes solvable when definite estimates are introduced into it both for the interdependent factors and for the expected results. Consequently, the optimality of the result of a programming problem is relative: the result is optimal only from the point of view of the criteria by which it is evaluated and the constraints introduced into the problem.

Based on the above, any optimal programming problem is characterized by the following three features:

1) the presence of a system of interdependent factors;

2) a strictly defined criterion for evaluating optimality;

3) the exact formulation of conditions limiting the use of available resources or factors.

From the many possible variants, one is selected that meets all the conditions introduced into the problem and provides the minimum or maximum value of the chosen optimality criterion. The solution is reached by a certain mathematical procedure consisting in the successive approximation of rational variants, corresponding to the chosen combination of factors, to the single optimal plan.

Mathematically, this can be reduced to finding the extreme value of some function, that is, to a problem of the form:

Find max (min) f (x) provided that the variable x (point x) runs over some given set X:

f (x) → max (min), x ∈ X (4.1)

The problem defined in this way is called an optimization problem. The set X is called the feasible set of the given problem, and the function f (x) is called the objective function.

Thus, an optimization problem is the choice, from a set X of admissible solutions (those admitted by the circumstances of the problem), of the solutions x that can in one sense or another be qualified as optimal. Here the admissibility of each solution is understood in the sense of the possibility of its actual existence, and optimality in the sense of its expediency.

Much depends on the form in which the admissible set X is specified. In many cases this is done using a system of inequalities (equalities):

q1 (x1, x2, ..., xn) (≤, =, ≥) 0,

q2 (x1, x2, ..., xn) (≤, =, ≥) 0, (4.2)

..............................

qm (x1, x2, ..., xn) (≤, =, ≥) 0,

where q1, q2, ..., qm are some functions, and (x1, x2, ..., xn) = x is the way point x is specified by a set of several numbers (coordinates) as a point of the n-dimensional arithmetic space Rn. Accordingly, the set X is a subset of Rn, consisting of the points (x1, x2, ..., xn) ∈ Rn that satisfy the system of inequalities (4.2).

The function f (x) becomes a function of n variables f (x1, x2, ..., xn), whose optimum (max or min) is to be found.

Clearly, one should find not only the value max (min) f (x1, x2, ..., xn) itself, but also the point or points, if there is more than one, at which this value is attained. Such points are called optimal solutions, and the set of all optimal solutions is called the optimal set.

The problem described above is the general problem of optimal (mathematical) programming, the construction of which is based on the principles of optimality and consistency. The function f is called the objective function, and the inequalities (equalities) qi (x1, x2, ..., xn) (≤, =, ≥) 0, i = 1, 2, ..., m, are the constraints. In most cases, the constraints include the conditions of non-negativity of the variables:

x1 ≥ 0, x2 ≥ 0, ..., xn ≥ 0,

or of a part of the variables; however, these conditions are not obligatory.
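For illustration, here is a minimal Python sketch of a problem of the form (4.1)-(4.2), with one inequality constraint and non-negativity conditions; the objective, the constraint, and all numbers are invented for the example.

import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 3.0) ** 2 + (x[1] - 2.0) ** 2   # objective f(x) -> min
q = lambda x: x[0] + 2.0 * x[1] - 4.0                 # constraint q(x) <= 0

res = minimize(
    f, x0=np.zeros(2), method='SLSQP',
    constraints=[{'type': 'ineq', 'fun': lambda x: -q(x)}],  # SciPy expects >= 0
    bounds=[(0.0, None), (0.0, None)],                # x1 >= 0, x2 >= 0
)
print(res.x, res.fun)   # optimal solution and optimal value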

Depending on the nature of the constraint functions and the objective function, different types of mathematical programming are distinguished:

1. linear programming - the functions are linear;

2. nonlinear programming - at least one of these functions is nonlinear;

3. quadratic programming - f (x) is a quadratic function, the constraints are linear;

4. separable programming - f (x) is a sum of functions, one for each variable; the constraint conditions can be both linear and nonlinear;

5. integer (linear or nonlinear) programming - the coordinates of the desired point x may take only integer values;

6. convex programming - the objective function and the constraint functions are convex, that is, convex functions are considered on convex sets, etc.

The simplest and most common case is when these functions are linear and each of them has the form:

a1x1 + a2x2 + ... + anxn + b,

that is, we have a linear programming problem. It is estimated that at present about 80-85% of all optimization problems solved in practice are linear programming problems.
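A hypothetical linear programming problem of exactly this form can be solved in a few lines. In the following Python sketch the coefficients are invented for illustration; since linprog minimizes, the maximum is obtained by negating the objective.

from scipy.optimize import linprog

# Maximize 3*x1 + 2*x2 subject to x1 + x2 <= 4, x1 + 3*x2 <= 6, x1, x2 >= 0.
res = linprog(c=[-3.0, -2.0],                 # negated objective for maximization
              A_ub=[[1.0, 1.0], [1.0, 3.0]],
              b_ub=[4.0, 6.0],
              bounds=[(0.0, None), (0.0, None)])
print(res.x, -res.fun)                        # optimal plan and maximal value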

Combining simplicity and realism in its initial assumptions, this method at the same time has enormous potential for determining the best plans from the point of view of a chosen criterion.

The first studies in the field of linear programming, aimed at choosing an optimal work plan within a production complex, date back to the late 1930s and are associated with the name of L.V. Kantorovich. In the domestic scientific tradition, it is he who is considered the first developer of this method.

In the 1930s, during the period of intensive economic and industrial development of the Soviet Union, Kantorovich was at the forefront of mathematical research and sought to apply his theoretical work in the practice of the growing Soviet economy. The opportunity presented itself in 1938, when he was appointed a consultant to the laboratory of a plywood mill. He was tasked with developing a method of resource allocation that could maximize the productivity of the equipment, and Kantorovich, formulating the problem in mathematical terms, maximized a linear function subject to a large number of constraints. Lacking formal economic training, he nevertheless knew that maximization under numerous constraints is one of the basic economic problems and that a method which facilitated planning at plywood mills could be used in many other industries, whether in determining the optimal use of cultivated areas or the most efficient distribution of traffic flows.

Speaking about the development of this method in the West, one should mention Tjalling Koopmans, an American mathematical economist of Dutch origin.

While working for a merchant shipping mission, Koopmans tried to design the routes of the Allied fleets so as to minimize the cost of shipping cargo. The task was extremely difficult: thousands of merchant ships carried millions of tons of cargo along sea routes between hundreds of ports scattered around the world. This work gave Koopmans the opportunity to apply his mathematical knowledge to a fundamental economic problem: the optimal allocation of scarce resources among competing consumers.

Koopmans developed an analytical technique, called activity analysis, that drastically changed the way economists and policymakers approached route allocation. He first described the technique in 1942, in a memorandum titled "Exchange Ratios Between Cargoes on Various Routes", where he showed that the distribution problem can be approached as a mathematical problem of maximization under constraints. The quantity to be maximized is the value of the delivered cargo, equal to the sum of the values of the cargoes delivered to each of the ports. The constraints were represented by equations expressing the ratio of the quantities of input factors of production (for example, ships, time, labor) to the quantity of cargo delivered to various destinations, where the expenditure of any input must not exceed the amount available.

While working on the maximization problem, Koopmans developed mathematical equations that have found wide application both in economic theory and in management practice. These equations determined, for each production input, a coefficient equal to the price of that input under conditions of ideal competitive markets. A fundamental connection was thus established between theories of production efficiency and theories of distribution through competitive markets. In addition, Koopmans's equations were of great value to central planners, who could use them to determine appropriate prices for various inputs while leaving the choice of optimal routes to the discretion of local directors, whose responsibility was to maximize profits. The activity analysis method could be widely used by any managers in planning production processes.

In 1975 L.V. Kantorovich and Tjalling C. Koopmans were awarded the Nobel Prize "for their contribution to the theory of optimal resource allocation."

Speaking of the first studies in the field of linear programming, one cannot fail to mention another American scientist, George B. Dantzig. The specific formulation of the linear programming method goes back to his work commissioned by the US Air Force during World War II, when the problem arose of coordinating the actions of one large organization in such matters as stockpiling, production, and the maintenance of equipment and supplies, in the presence of alternatives and limitations. At one time Dantzig also worked together with W. Leontief, and the simplex method for solving linear optimization problems (the method most often used to solve them) appeared in connection with one of the first practical applications of the input-output method.

