Markov random processes and chains. The simplest (Poisson) streams of events

Consider a physical system S with states S1, S2, …, Sn, which passes from state to state under the influence of random events (calls, failures, shots). We may picture this as if the events that move the system from state to state form certain streams of events.

Let the system S at the moment t be in the state Si and be able to pass from it to the state Sj under the influence of a Poisson stream of events with intensity λij: as soon as the first event of this stream occurs, the system instantly jumps from Si to Sj. As we know, the probability of this transition over an elementary time interval Δt (the element of the transition probability) equals λij·Δt; hence the transition probability density λij in a continuous Markov chain is nothing other than the intensity of the stream of events that moves the system along the corresponding arrow. If all the streams of events that transfer the system S from state to state are Poisson, the process in the system is Markov.

Let us mark the intensities λij of the Poisson streams (the transition probability densities) on the state graph of the system next to the corresponding arrows; the result is a labeled state graph. On its basis one can write the Kolmogorov equations and compute the state probabilities.

Example. A technical system S consists of two nodes, I and II, each of which can fail independently of the other. The failure flow of the first node is Poisson with intensity λ1; that of the second is also Poisson, with intensity λ2. Immediately after a failure each node begins to be repaired (restored); the flow of restorations (completions of repair) for both nodes is Poisson with intensity μ. Draw the state graph of the system and write the Kolmogorov equations. States of the system: S11 - both nodes operational; S21 - the first node under repair, the second operational; S12 - the second node under repair, the first operational; S22 - both nodes under repair.


The Kolmogorov equations for this graph are

dp11/dt = −(λ1 + λ2)·p11 + μ·p21 + μ·p12,
dp21/dt = λ1·p11 − (λ2 + μ)·p21 + μ·p22,
dp12/dt = λ2·p11 − (λ1 + μ)·p12 + μ·p22,
dp22/dt = λ2·p21 + λ1·p12 − 2μ·p22,

with the initial conditions at t = 0: p11 = 1, p21 = p12 = p22 = 0, and with the normalization condition

p11 + p12 + p21 + p22 = 1.
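As a numerical illustration, here is a minimal sketch that integrates the Kolmogorov equations of this two-node example by Euler's method. The intensities λ1 = 1, λ2 = 2, μ = 3 are assumed values chosen only for the demonstration (the example leaves them symbolic).

```python
# Assumed numeric intensities (the example leaves them symbolic):
lam1, lam2, mu = 1.0, 2.0, 3.0    # failure rates of nodes I and II, common repair rate

def step(p, dt):
    """One Euler step of the Kolmogorov equations of the two-node example.
    p = (p11, p21, p12, p22): both ok / I in repair / II in repair / both in repair."""
    p11, p21, p12, p22 = p
    d11 = -(lam1 + lam2) * p11 + mu * p21 + mu * p12
    d21 = lam1 * p11 - (lam2 + mu) * p21 + mu * p22
    d12 = lam2 * p11 - (lam1 + mu) * p12 + mu * p22
    d22 = lam2 * p21 + lam1 * p12 - 2 * mu * p22
    return (p11 + d11 * dt, p21 + d21 * dt, p12 + d12 * dt, p22 + d22 * dt)

p = (1.0, 0.0, 0.0, 0.0)          # at t = 0 both nodes are operational (state S11)
dt, t_end = 0.001, 10.0
for _ in range(int(t_end / dt)):
    p = step(p, dt)
print([round(x, 4) for x in p])   # the probabilities settle near their limits
```

Since the nodes fail and are repaired independently, the limiting value of p11 should equal (μ/(λ1+μ))·(μ/(λ2+μ)) = 0.45 for the assumed numbers, which the run confirms.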

Limit Probabilities of States for a Continuous Markov Chain

Let there be a physical system S with states S1, S2, …, Sn, in which a Markov random process with continuous time (a continuous Markov chain) takes place. Suppose all λij = const, i.e. all event streams are simplest (stationary Poisson). Writing the Kolmogorov system of differential equations for the state probabilities and integrating it under given initial conditions, we obtain p1(t), p2(t), …, pn(t) for any t. Let us now ask what happens to the system S as t → ∞: will the functions pi(t) tend to some limits? These limits, if they exist, are called the limiting probabilities of the states. The following theorem can be proved: if the number of states of S is finite and from each state it is possible to pass (in one or several steps) to every other state, then the limiting probabilities of the states exist and do not depend on the initial state of the system. Suppose this condition is fulfilled, so the limiting probabilities pi (i = 1, 2, …, n) exist.

Thus, as t → ∞, a certain limiting stationary regime is established in the system S. The meaning of the limiting probability pi: it is nothing other than the average relative time the system spends in the given state. To calculate the pi, set all the left-hand sides (the derivatives) in the Kolmogorov system of equations equal to 0; the resulting system of linear algebraic equations must be solved together with the normalization condition p1 + p2 + … + pn = 1.
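The recipe above (set the derivatives to zero, then solve the resulting linear system together with the normalization condition) can be sketched in a few lines. The generator-matrix convention and the two-state intensities below are assumptions made for the illustration:

```python
def stationary(Q):
    """Limiting probabilities from a generator matrix Q (Q[i][j] = intensity i -> j,
    diagonal = minus the row sum): solve p.Q = 0 together with sum(p) = 1
    by Gaussian elimination, replacing the last equation with normalization."""
    n = len(Q)
    # The columns of Q give the equations sum_i p_i * Q[i][j] = 0; transpose them.
    A = [[Q[i][j] for i in range(n)] for j in range(n)]
    b = [0.0] * n
    A[-1] = [1.0] * n      # normalization: p_1 + ... + p_n = 1
    b[-1] = 1.0
    # Plain Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    p = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * p[c] for c in range(r + 1, n))
        p[r] = (b[r] - s) / A[r][r]
    return p

# Two-state example: failure intensity lam, repair intensity mu (values assumed).
lam, mu = 2.0, 3.0
Q = [[-lam, lam],
     [mu, -mu]]
print(stationary(Q))   # expected [mu/(lam+mu), lam/(lam+mu)], i.e. about [0.6, 0.4]
```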

Death and reproduction scheme

We know that, having a labeled state graph at our disposal, we can easily write the Kolmogorov equations for the state probabilities, as well as write and solve the algebraic equations for the final probabilities. In some cases the latter equations can be solved once and for all, in literal (symbolic) form. In particular, this can be done when the state graph of the system is a so-called "scheme of death and reproduction".


The state graph for the death and reproduction scheme has the form shown in Fig. 19.1. The peculiarity of this graph is that all the states of the system can be arranged in a single chain, in which each of the intermediate states (S1, S2, …, Sn−1) is connected by a forward and a backward arrow with each of its neighboring states (right and left), while the end states (S0, Sn) are connected with only one neighboring state. The term "scheme of death and reproduction" originates from biological problems, where changes in population size are described by such a scheme.

The scheme of death and reproduction is met very often in various practical problems, in particular in queuing theory, so it is useful to find, once and for all, the final probabilities of its states.

Suppose that all streams of events that move the system along the arrows of the graph are the simplest (for brevity, we will call both the system S and the process in it the simplest).

Using the graph in Fig. 19.1, we compose and solve the algebraic equations for the final state probabilities (their existence follows from the fact that from each state one can pass to every other, and the number of states is finite). For the first state S0 we have

λ01·p0 = λ10·p1. (8.1)

For the second state S1:

(λ12 + λ10)·p1 = λ01·p0 + λ21·p2.

By virtue of (8.1), the last equality reduces to the form

λ12·p1 = λ21·p2,

and similarly for each pair of neighboring states. So the final probabilities p0, p1, …, pn satisfy the equations

λk,k+1·pk = λk+1,k·pk+1, where k takes all values from 0 to n − 1; (8.2)

in addition, one must take into account the normalization condition

p 0 + p 1 + p 2 + ... + p n = 1 (8.3)

Let us solve this system of equations. From the first equation in (8.2) we express p1 through p0:

p1 = (λ01/λ10)·p0. (8.4)

From the second, taking into account (8.4), we get

p2 = (λ12·λ01)/(λ21·λ10)·p0; (8.5)

from the third, taking into account (8.5),

p3 = (λ23·λ12·λ01)/(λ32·λ21·λ10)·p0; (8.6)

and in general, for any k (from 1 to n):

pk = (λk−1,k ··· λ12·λ01)/(λk,k−1 ··· λ21·λ10)·p0. (8.7)

Note the structure of formula (8.7): the numerator contains the product of all the intensities on the arrows leading from left to right (from the beginning of the chain up to the given state Sk), while the denominator contains the product of all the intensities on the arrows leading from right to left (from Sk back to the beginning).

Thus, all the state probabilities p1, p2, …, pn are expressed through one of them (p0). Substituting these expressions into the normalization condition (8.3) and taking p0 out of the brackets, we get

p0·(1 + λ01/λ10 + (λ12·λ01)/(λ21·λ10) + … + (λn−1,n ··· λ01)/(λn,n−1 ··· λ10)) = 1,

from which we obtain the expression for p0:

p0 = (1 + λ01/λ10 + (λ12·λ01)/(λ21·λ10) + … + (λn−1,n ··· λ01)/(λn,n−1 ··· λ10))^(−1). (8.8)

(We raised the bracket to the power −1 to avoid writing a two-story fraction.) All the other probabilities are expressed through p0 (see formulas (8.4)–(8.7)). Note that the coefficients of p0 in each of them are exactly the successive terms of the series after the 1 in formula (8.8); hence, in computing p0, we have already found all these coefficients.
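Formulas (8.7) and (8.8) translate directly into code. A sketch, where the intensities of the three-state example chain S0 ↔ S1 ↔ S2 are assumed values:

```python
def birth_death_probabilities(lam, mu):
    """Final probabilities p_0..p_n of the death-and-reproduction scheme.
    lam[k] is the intensity on the arrow S_k -> S_{k+1} (k = 0..n-1),
    mu[k]  is the intensity on the arrow S_{k+1} -> S_k  (k = 0..n-1)."""
    n = len(lam)
    coeff = [1.0]                        # coefficient of p0 in p_k, formula (8.7)
    for k in range(n):
        coeff.append(coeff[-1] * lam[k] / mu[k])
    p0 = 1.0 / sum(coeff)                # formula (8.8)
    return [c * p0 for c in coeff]

# Assumed intensities for a three-state chain S0 <-> S1 <-> S2.
p = birth_death_probabilities([1.0, 2.0], [2.0, 4.0])
print(p)   # p0 = 4/7, p1 = 2/7, p2 = 1/7
```

Each neighboring pair satisfies the balance equation (8.2), which is easy to verify on the result.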

The formulas obtained are very useful in solving the simplest problems of queuing theory.

Queuing Theory Problems. Classification of queuing systems and their main characteristics

When researching operations, one often has to deal with the operation of peculiar systems called queuing systems (QS). Examples of such systems: telephone exchanges, repair shops, ticket offices, information bureaus, shops, hairdressing salons, etc. Each QS consists of a number of service units (or "devices"), which we shall call service channels. The channels may be communication lines, work stations, cashiers, sellers, elevators, cars, etc. QSs can be single-channel or multi-channel.

Any QS is designed to serve a flow of requests (or "claims") arriving at random moments of time. Service of a request continues for some, generally speaking, random time Tserv, after which the channel is freed and is ready to receive the next request. The random nature of the flow of requests and of the service times leads to the situation where in some periods an excessively large number of requests accumulates at the input of the QS (they either join the queue or leave the QS unserved), while in other periods the QS works with underload or even stands idle.

In a QS there takes place a random process with discrete states and continuous time: the state of the QS changes abruptly at the moments certain events occur (the arrival of a new request, the end of a service, or the moment when a request tired of waiting leaves the queue). To give recommendations on the rational organization of this process and to formulate sensible requirements for the QS, one must study this random process and describe it mathematically. This is what queuing theory does.

The subject of queuing theory is the construction of mathematical models linking the given operating conditions of the QS (the number of channels, their performance, the rules of operation, the nature of the flow of requests) with the characteristics of interest to us: the performance indicators of the QS, describing, from one point of view or another, its ability to cope with the flow of requests. Depending on the situation and the aims of the study, different quantities can serve as such indicators, for example: the average number of requests served by the QS per unit time; the average number of busy channels; the average number of requests in the queue and the average waiting time for service; the probability that the number of requests in the queue exceeds a certain value, etc. The scope of the mathematical methods of queuing theory is constantly expanding and increasingly goes beyond tasks associated with "service organizations" in the literal sense of the word. Computers, systems for collecting and processing information, automated production shops, production lines, transport systems, air-defense systems, etc., can all be treated as peculiar QSs.

The mathematical analysis of the work of a QS is greatly facilitated if the process of this work is Markov. For this it is sufficient that all the streams of events transferring the system from state to state (the streams of requests, the streams of "services") be simplest. If this property is violated, the mathematical description of the process becomes much more difficult, and it can be brought to explicit analytical formulas only in rare cases. Nevertheless, the apparatus of the simplest, Markovian queuing theory can be useful for an approximate description of the operation of a QS even when the streams of events are not simplest. In many cases a reasonable decision on organizing the work of a QS does not require exact knowledge of all its characteristics: an approximate one is often sufficient. Moreover, the more complex the QS and the more service channels it has, the more accurate these approximate formulas turn out to be.

When researching operations, one often encounters systems designed for repeated use in solving problems of the same type. The processes that arise are called service processes, and the systems, queuing systems (QS). Examples of such systems are telephone systems, repair shops, computer systems, ticket offices, shops, hairdressers, and the like.
Each QS consists of a certain number of service units (devices, points, stations), which we shall call service channels. Channels may be communication lines, operating points, computers, sellers, etc. By the number of channels, QSs are subdivided into single-channel and multichannel.
Requests usually arrive at the QS not regularly but randomly, forming a so-called random stream of requests (claims). Service of a claim, generally speaking, also continues for a random time. The random nature of the flow of requests and of the service time leads to uneven loading of the QS: in some periods a very large number of requests accumulates (they either join the queue or leave the QS unserved), while in other periods the QS works with underload or stands idle.
The subject of queuing theory is the construction of mathematical models linking the given operating conditions of the QS (the number of channels, their performance, the nature of the flow of applications, etc.) with the efficiency indicators of the QS, which describe its ability to cope with the flow of applications.

As performance indicators of a QS one uses: the average number of requests served per unit time (hereinafter, averages are understood as the mathematical expectations of the corresponding random variables); the average number of requests in the queue; the average waiting time for service; the probability of denial of service without waiting; the probability that the number of requests in the queue exceeds a certain value, etc.

QSs are divided into two main types (classes): QSs with refusals and QSs with waiting (a queue). In a QS with refusals, a request arriving at a moment when all channels are busy receives a refusal, leaves the QS, and takes no further part in the service process (for example, a request for a telephone conversation arriving when all lines are busy is refused and leaves the QS unserved). In a QS with waiting, a request arriving when all channels are busy does not leave but joins the queue for service.
QSs with waiting are subdivided into different types depending on how the queue is organized: with a limited or unlimited queue length, with a limited waiting time, and so on.
The operation of a QS is a random process.
By a random (probabilistic, or stochastic) process is meant a process of change in time of the state of a system in accordance with probabilistic laws.
A process is called a process with discrete states if its possible states S1, S2, S3, … can be enumerated in advance and the transitions of the system from state to state occur instantly (in a jump). A process is called a process with continuous time if the moments of possible transitions from state to state are not fixed in advance but are random.
The operation of a QS is a random process with discrete states and continuous time. This means that the state of the QS changes abruptly at the random moments when certain events occur (for example, the arrival of a new request, the end of a service, etc.).
The mathematical analysis of the work of a QS is greatly simplified if this process is Markov. A random process is called Markov, or a random process without aftereffect, if for any moment of time t0 the probabilistic characteristics of the process in the future depend only on its state at the given moment t0 and do not depend on when and how the system arrived at that state.

An example of a Markov process: system S is a taxi meter. The state of the system at time t is characterized by the number of kilometers (tenths of kilometers) traveled by the car up to this moment. Let the counter show S0 at the moment t0. The probability that at a moment t > t0 the counter will show this or that number of kilometers (more precisely, the corresponding number of rubles) S1 depends on S0 but does not depend on the moments at which the counter readings changed before t0.
Many processes can be considered approximately Markovian. For example, the process of playing chess; system S - group of chess pieces. The state of the system is characterized by the number of opponent's pieces remaining on the board at time t 0. The probability that at the moment t> t 0 the material advantage will be on the side of one of the opponents depends primarily on the state of the system at the moment t 0, and not on when and in what sequence the pieces disappeared from the board until t 0 .
In a number of cases, the history of the processes under consideration can be simply neglected and Markov models can be used to study them.
When analyzing random processes with discrete states, it is convenient to use a geometric scheme, the so-called state graph. Usually the states of the system are depicted by rectangles (or circles), and the possible transitions from state to state by arrows (oriented arcs) connecting the states.
Objective 1. Construct the state graph of the following random process: a device S consists of two nodes, each of which can fail at a random moment, after which repair of the node begins instantly and continues for a random, previously unknown time.

Solution. Possible states of the system: S 0 - both nodes are operational; S 1 - the first unit is being repaired, the second is operational; S 2 - the second unit is being repaired, the first is operational; S 3 - both units are being repaired. The system graph is shown in Fig. 1.
Fig. 1
An arrow directed, for example, from S0 to S1 means the transition of the system at the moment the first node fails; from S1 to S0, the transition at the moment the repair of this node is completed.
There are no arrows on the graph from S0 to S3 or from S1 to S2. This is because the node failures are assumed to be independent of each other, so that, for example, the probability of simultaneous failure of both nodes (transition from S0 to S3) or of simultaneous completion of both repairs (transition from S3 to S0) can be neglected.
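The state graph of this problem can be recorded as a small adjacency structure; the arc labels name the events that trigger each transition, and the neglected simultaneous events simply have no arcs (a sketch, not tied to any library):

```python
# States: S0 - both operational; S1 - first in repair; S2 - second in repair;
# S3 - both in repair. graph[i][j] labels the event causing the transition i -> j.
graph = {
    "S0": {"S1": "failure of node 1", "S2": "failure of node 2"},
    "S1": {"S0": "repair of node 1 completed", "S3": "failure of node 2"},
    "S2": {"S0": "repair of node 2 completed", "S3": "failure of node 1"},
    "S3": {"S1": "repair of node 2 completed", "S2": "repair of node 1 completed"},
}
for state, arcs in graph.items():
    for target, event in arcs.items():
        print(f"{state} -> {target}: {event}")
```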

Stream of events

For the mathematical description of a Markov random process with discrete states and continuous time proceeding in a QS, let us become acquainted with one of the important concepts of probability theory: the concept of a stream of events.
By a stream of events is meant a sequence of homogeneous events following one after another at random moments of time (for example, the stream of calls at a telephone exchange, the stream of computer failures, the stream of customers, etc.).
A stream is characterized by its intensity λ: the frequency of occurrence of events, i.e. the average number of events arriving at the QS per unit time.
A stream of events is called regular if the events follow one another at strictly equal intervals. For example, the flow of products on an assembly-line conveyor (moving at constant speed) is regular.
A stream of events is called stationary if its probabilistic characteristics do not depend on time. In particular, the intensity of a stationary stream is a constant: λ(t) = λ. For example, the flow of cars on a city avenue is not stationary over a whole day, but it may be considered stationary over a particular hour of the day, say during the rush hour. Note that in the latter case the actual numbers of cars passing per unit time (say, per minute) may differ noticeably from one another, but their average number is constant and does not depend on time.
A stream of events is called a stream without aftereffect if for any two disjoint time intervals τ1 and τ2 the number of events falling on one of them does not depend on the number of events falling on the other. For example, the flow of passengers entering the metro has practically no aftereffect; but, say, the flow of customers leaving a counter with their purchases does have an aftereffect (if only because the interval between successive customers cannot be less than the minimum service time of each).
A stream of events is called ordinary if the probability of two or more events falling on a small (elementary) time interval Δt is negligible compared with the probability of one event falling on it. In other words, a stream of events is ordinary if the events in it appear one by one rather than in groups. For example, the flow of trains arriving at a station is ordinary, while the flow of railway cars is not.
A stream of events is called simplest (or stationary Poisson) if it is simultaneously stationary, ordinary, and without aftereffect. The name "simplest" is explained by the fact that a QS with simplest flows has the simplest mathematical description. Note that a regular stream is not "simplest": it has an aftereffect, since the moments of occurrence of its events are rigidly fixed.
The simplest flow arises as a limit in the theory of random processes just as naturally as the normal distribution arises in probability theory as the limit for a sum of random variables: when a sufficiently large number n of independent, stationary, and ordinary streams of comparable intensities λi (i = 1, 2, …, n) are superposed, the result is a stream close to the simplest, with intensity λ equal to the sum of the intensities of the component streams: λ = λ1 + λ2 + … + λn.
Consider on the time axis Ot (Fig. 2) the simplest flow of events as an unlimited sequence of random points.
Fig. 2
It can be shown that for the simplest flow the number of events (points) falling on an arbitrary time interval of length t is distributed according to Poisson's law

P_m = ((λt)^m / m!)·e^(−λt) (m = 0, 1, 2, …), (1)

for which the mathematical expectation of the random variable equals its variance: a = σ² = λt.

In particular, the probability that no event occurs during the time t (m = 0) is

P_0 = e^(−λt). (2)
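Formulas (1) and (2) are easy to verify numerically; the values of λ and t below are assumed for the check:

```python
import math

def poisson_pm(m, lam, t):
    """Formula (1): probability of exactly m events on an interval of length t."""
    a = lam * t
    return a ** m / math.factorial(m) * math.exp(-a)

lam, t = 2.0, 1.5                      # assumed values for the check
p0 = poisson_pm(0, lam, t)             # formula (2): e**(-lam*t)
total = sum(poisson_pm(m, lam, t) for m in range(60))
mean = sum(m * poisson_pm(m, lam, t) for m in range(60))
print(p0, total, mean)                 # the mean equals a = lam*t, here 3.0
```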
Let us find the distribution of the time interval T between two arbitrary adjacent events of the simplest flow.

In accordance with (2), the probability that no event appears during a time interval of length t is

P(T ≥ t) = e^(−λt), (3)

and the probability of the opposite event, i.e. the distribution function of the random variable T, is

F(t) = P(T < t) = 1 − e^(−λt) (t ≥ 0). (4)

The probability density of the random variable is the derivative of its distribution function (Fig. 3), i.e.

f(t) = λ·e^(−λt) (t ≥ 0). (5)
Fig. 3
The distribution defined by the probability density (5) or by the distribution function (4) is called exponential. Thus, the time interval between two adjacent events of the simplest flow has an exponential distribution, for which the mathematical expectation equals the standard deviation of the random variable:

m_T = σ_T = 1/λ, (6)
and, conversely, the intensity of the flow is expressed through the mean as λ = 1/m_T.
The most important property of the exponential distribution (inherent in the exponential distribution alone) is the following: if a time interval distributed according to the exponential law has already lasted for some time t, this does not in any way affect the distribution law of the remaining part of the interval (T − t): it is the same as the distribution law of the entire interval T.
In other words, for the time interval T between two successive adjacent events of a stream with exponential distribution, any information about how long the interval has already lasted does not affect the distribution law of its remaining part. This property of the exponential law is, in essence, another formulation of "no aftereffect", the basic property of the simplest flow.
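The absence of memory can be checked by simulation: among the intervals that have already lasted longer than s, the fraction lasting another t beyond that should match the unconditional fraction lasting longer than t. A sketch with assumed numeric values:

```python
import random

random.seed(1)
lam = 1.5                                  # assumed flow intensity
samples = [random.expovariate(lam) for _ in range(200_000)]

s, t = 0.4, 0.7
survived = [x for x in samples if x > s]
# Memorylessness: P(T > s + t | T > s) should match P(T > t).
cond = sum(x > s + t for x in survived) / len(survived)
uncond = sum(x > t for x in samples) / len(samples)
print(round(cond, 3), round(uncond, 3))    # both near exp(-lam*t), about 0.35
```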
For the simplest flow with intensity λ, the probability that at least one event of the flow falls on an elementary (small) time interval Δt is, according to (4),

P(Δt) = 1 − e^(−λΔt) ≈ λ·Δt. (7)

(This approximate formula, obtained by replacing e^(−λΔt) with the first two terms of its expansion in powers of Δt, is the more accurate the smaller Δt.)
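A quick numerical check of formula (7): the error of the approximation λΔt is about (λΔt)²/2, so it falls off quadratically as Δt shrinks (the intensity is an assumed value):

```python
import math

lam = 2.0                                  # assumed intensity
errs = []
for dt in (0.1, 0.01, 0.001):
    exact = 1 - math.exp(-lam * dt)        # formula (4) on a short interval
    approx = lam * dt                      # formula (7)
    errs.append(abs(exact - approx))
    print(dt, exact, approx)
# each error is roughly (lam*dt)**2 / 2, so it shrinks about 100-fold per line
```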

4. Modeling according to the scheme of Markov stochastic processes

To calculate the numerical parameters characterizing stochastic objects, it is necessary to construct a probabilistic model of the phenomenon that takes into account the random factors accompanying it. For the mathematical description of many phenomena developing in the form of a random process, the apparatus developed in probability theory for the so-called Markov random processes can be applied successfully. Let us explain this concept. Suppose there is a physical system S whose state changes over time (S may be anything: a technical device, a repair shop, a computer, etc.). If the state of S changes in time in a random way, we say that a random process takes place in the system S. Examples: the operation of a computer (arrival of jobs, their type, random failures), the guidance of a missile (random disturbances and interference in the control system), customer service in a hairdresser's or a repair shop (a random flow of requests (claims) arriving from clients).

A random process is called a Markov process (or "a process without aftereffect") if, for each moment of time t0, the probability of any state of the system in the future (for t > t0) depends only on its state in the present (at t = t0) and does not depend on when and how the system entered that state (i.e. on how the process developed in the past). Example: let S be a technical device characterized by a certain degree of wear. We are interested in how it will work further. To a first approximation, the performance of the system in the future (failure rate, need for repair) depends on the state of the device at the present moment and does not depend on when and how the device reached its present state.

The theory of Markov random processes is an extensive section of probability theory with a wide range of applications (physical phenomena such as diffusion or mixing of a charge during melting in a blast furnace, queuing processes).

4.1. Classification of Markov processes

Markov stochastic processes are divided into classes. The first classification feature is the nature of the set of possible states. A random process (SP) is called a process with discrete states if the possible states of the system S1, S2, S3, … can be enumerated, and the process itself consists in the system jumping (instantly) from one state to another from time to time.

Example. A technical device consists of two nodes, I and II, each of which can fail. States: S1 - both nodes working; S2 - the first node failed, the second working; S3 - the second node failed, the first working; S4 - both nodes failed.

There are also processes with continuous states (a smooth transition from state to state), for example the change of voltage in a lighting network. We shall consider only SPs with discrete states. In this case it is convenient to use a state graph, in which the possible states of the system are denoted by nodes and the possible transitions by arcs.

The second classification feature is the nature of functioning in time. An SP is called a process with discrete time if transitions of the system from state to state are possible only at strictly defined, pre-fixed moments t1, t2, …. If a transition of the system from state to state is possible at any random moment not known in advance, one speaks of an SP with continuous time.

4.2. Calculation of a discrete-time Markov chain

Let the system S have discrete states S1, S2, …, Sn and discrete time t1, t2, …, tk, … (the steps of the process, so the SP can be viewed as a function of the step number). In the general case the process consists of a sequence of transitions, for example S1 → S1 → S2 → S3 → S4 → S1 → …, occurring at the moments t1, t2, t3, ….

Denote by Ai(k) the event that after k steps the system is in the state Si. For any k the events A1(k), A2(k), …, An(k) form a complete group and are pairwise incompatible.

This random sequence of events is called a Markov chain. We shall describe a Markov chain (MC) by means of the state probabilities. Let pi(k) be the probability that after k steps the system is in the state Si. It is easy to see that for every k

p1(k) + p2(k) + … + pn(k) = 1.

Using the events introduced above, define the transition probabilities Pij = P(Aj(k+1) | Ai(k)); they form the matrix of transition probabilities, in which the sum of the terms of each row must equal 1. Instead of the transition probability matrix one often uses a labeled state graph (the nonzero transition probabilities are marked on the arcs; the delay probabilities Pii are not required, since they are easily computed, e.g. P11 = 1 − (P12 + P13)). Having at our disposal a labeled state graph (or the matrix of transition probabilities) and knowing the initial state of the system, we can find the state probabilities p1(k), p2(k), …, pn(k) for every k.

Let the initial state of the system be Sm; then

p1(0) = 0, p2(0) = 0, …, pm(0) = 1, …, pn(0) = 0.

After the first step:

p1(1) = Pm1, p2(1) = Pm2, …, pm(1) = Pmm, …, pn(1) = Pmn.

After the second step, using the total probability formula, we get:

p1(2) = p1(1)·P11 + p2(1)·P21 + … + pn(1)·Pn1,

and in general

pi(2) = p1(1)·P1i + p2(1)·P2i + … + pn(1)·Pni, or, for an arbitrary step k,

pi(k) = Σj pj(k−1)·Pji (i = 1, 2, …, n).
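The recurrence pi(k) = Σj pj(k−1)·Pji is easy to sketch in code; the transition matrix below is an assumed example:

```python
def next_step(p, P):
    """Total probability formula: p_i(k) = sum_j p_j(k-1) * P[j][i]."""
    n = len(P)
    return [sum(p[j] * P[j][i] for j in range(n)) for i in range(n)]

# Assumed two-state transition matrix; each row sums to 1.
P = [[0.9, 0.1],
     [0.4, 0.6]]
p = [1.0, 0.0]                 # initial state S1: p1(0) = 1
for k in range(50):
    p = next_step(p, P)
print(p)                       # approaches the limiting distribution [0.8, 0.2]
```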

For a non-homogeneous MC the transition probabilities depend on the step number; denote the transition probabilities at step k by Pij(k). Then the formula for computing the state probabilities takes the form

pi(k) = Σj pj(k−1)·Pji(k).

4.3. Markov chains with continuous time

4.3.1. Kolmogorov equations

In practice one much more often meets situations in which the transitions of the system from state to state occur at random moments that cannot be specified in advance: for example, the failure of some element of the equipment, or the completion of its repair (restoration). To describe such processes, the scheme of a Markov random process with discrete states and continuous time, a continuous Markov chain, can in many cases be applied successfully. Let us show how the state probabilities of such a process are expressed. Let the states be S1, S2, …, Sn, and denote by pi(t) the probability that at the moment t the system S is in the state Si. Obviously Σi pi(t) = 1. Let us set the task of determining pi(t) for any t. Instead of the transition probabilities Pij we introduce the transition probability density

λij = lim(Δt → 0) Pij(Δt)/Δt,

where Pij(Δt) is the probability that the system, being in the state Si, passes to Sj during the time Δt. If λij does not depend on t, one speaks of a homogeneous chain; otherwise, of a non-homogeneous one. Suppose λij is known for all pairs of states (i.e. a labeled state graph is given). It turns out that, knowing the labeled state graph, one can determine p1(t), p2(t), …, pn(t) as functions of time. These probabilities satisfy differential equations of a certain kind, the Kolmogorov equations.


Integration of these equations with a known initial state of the system gives the desired state probabilities as functions of time. Note that p1 + p2 + p3 + p4 = 1, so one of the four equations can be dropped and replaced by this normalization condition.

Rules for composing the Kolmogorov equations. On the left side of each equation stands the derivative of a state probability; the right side contains as many terms as there are arrows connected with the given state. If an arrow is directed out of the state, the corresponding term has a minus sign; if into the state, a plus sign. Each term equals the product of the transition probability density corresponding to the given arrow and the probability of the state from which the arrow starts.
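These rules are mechanical enough to automate: each arrow contributes a minus term to its source state and a plus term to its target state. A sketch (the graph and its intensities are assumptions):

```python
def kolmogorov_rhs(graph, p):
    """Right-hand sides of the Kolmogorov equations from a labeled state graph.
    graph[i][j] is the transition intensity on the arrow i -> j.
    Outgoing arrows contribute minus terms, incoming arrows plus terms."""
    deriv = {s: 0.0 for s in graph}
    for i, arcs in graph.items():
        for j, lam in arcs.items():
            deriv[i] -= lam * p[i]   # arrow leaves state i
            deriv[j] += lam * p[i]   # arrow enters state j
    return deriv

# Two-state sketch with assumed intensities.
graph = {"S0": {"S1": 2.0}, "S1": {"S0": 5.0}}
p = {"S0": 1.0, "S1": 0.0}
d = kolmogorov_rhs(graph, p)
print(d)                             # {'S0': -2.0, 'S1': 2.0}
```

The derivatives always sum to zero, which reflects the conservation of total probability.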

4.3.2. Stream of events. The simplest stream and its properties

When considering processes in a system with discrete states and continuous time, it is often convenient to imagine the transitions of the system from state to state as occurring under the influence of certain streams of events. A stream of events is a sequence of homogeneous events following one after another at some, generally speaking, random moments of time (the stream of calls at a telephone exchange; the stream of malfunctions (failures) of a computer; the stream of freight trains arriving at a station; the stream of visitors; the stream of shots aimed at a target). We shall represent a stream of events as a sequence of points on the time axis Ot; the position of each point on the axis is random. A stream of events is called regular if the events follow one another at strictly defined intervals (this rarely occurs in practice). Let us consider a special class of streams; for this we introduce several definitions. 1. A stream of events is called stationary if the probability of a given number of events falling on a time interval of length τ depends only on the length τ and not on where exactly the interval lies on the Ot axis (homogeneity in time); the probabilistic characteristics of such a stream do not change with time. In particular, the so-called intensity (or density) of the stream of events, the average number of events per unit time, is constant.

2. A stream of events is called a stream without aftereffect if, for any non-overlapping time intervals, the number of events falling on one of them does not depend on how many events fell on the other (or others, if more than two intervals are considered). Absence of aftereffect means that the events forming the stream appear at successive moments independently of one another.

3. A stream of events is called ordinary if the probability of two or more events falling on an elementary time interval is negligible compared to the probability of exactly one event falling on it (events in the stream arrive one at a time, not in pairs, triplets, etc.).

A stream of events possessing all three properties is called the simplest (or stationary Poisson) stream. A non-stationary Poisson stream has only properties 2 and 3. A Poisson stream of events (both stationary and non-stationary) is closely related to the well-known Poisson distribution: the number of stream events falling on any interval is distributed according to Poisson's law. Let us explain this in more detail.

Consider, on the axis Ot where a stream of events is observed, an interval of length t starting at the moment t0 and ending at the moment t0 + t. It is not difficult to prove (the proof is given in all courses on probability theory) that the probability of exactly m events falling on this interval is expressed by the formula

P_m = (a^m / m!) · e^(−a)    (m = 0, 1, …),

where a is the average number of events falling on the interval t.

For a stationary (simplest) Poisson stream, a = λt, i.e., a does not depend on where on the axis Ot the interval t is taken. For a non-stationary Poisson stream the quantity a is expressed by the formula

a = ∫ from t0 to t0 + t of λ(s) ds,

and therefore depends on the point t0 at which the interval t begins.

Consider on the axis Ot the simplest stream of events with constant intensity λ (the average number of events per unit time). We will be interested in the time interval T between adjacent events of this stream. The distribution density f(t) of the random variable T (the time interval between adjacent events in the stream) is

f(t) = λe^(−λt)    (t > 0).

The distribution law with such a density is called exponential. Let us find the numerical characteristics of the random variable T, the mathematical expectation (mean value) and the variance:

M[T] = 1/λ,    D[T] = 1/λ².

The time interval T between neighboring events of the simplest stream is distributed according to the exponential law; its mean value and standard deviation are both equal to 1/λ, where λ is the stream intensity. For such a stream, the probability of occurrence of exactly one stream event on an elementary time interval ∆t is approximately λ∆t. We will call this probability the "element of the probability of occurrence of an event".
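A quick numerical check of this property (a sketch with an assumed intensity; `random.expovariate` samples exactly the density f(t) = λe^(−λt)):

```python
import random

random.seed(1)
lam = 2.5                      # stream intensity, events per unit time (assumed)
n = 200_000

# Gaps T between events of the simplest stream are exponential with rate lam.
gaps = [random.expovariate(lam) for _ in range(n)]

mean_T = sum(gaps) / n
var_T = sum((t - mean_T) ** 2 for t in gaps) / n
std_T = var_T ** 0.5

# Both the mean and the standard deviation should be close to 1/lam = 0.4.
print(mean_T, std_T)
```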

For a non-stationary Poisson stream, the distribution law of the interval T is no longer exponential. Its form depends, first, on where on the axis Ot the first of the two events is located and, second, on the form of the dependence λ(t). However, if λ(t) changes relatively slowly and its change over the time between two events is small, then the distribution law of the time interval between events can approximately be considered exponential, taking in this formula the value λ equal to the average intensity on the interval of interest.

4.3.3. Poisson event streams and continuous Markov chains

Consider some physical system S = (S1, S2, ..., Sn) which passes from state to state under the influence of some random events (calls, failures, shots). Let us imagine this as if the events transferring the system from state to state are streams of events of some kind.

Let the system S at the moment t be in the state Si and be able to pass from it to the state Sj under the influence of some Poisson stream of events with intensity λij: as soon as the first event of this stream appears, the system instantly passes from Si to Sj. The probability of this transition over an elementary time interval (the element of the transition probability) is λij·∆t; hence the transition probability density λij in a continuous Markov chain is nothing other than the intensity of the stream of events that moves the system along the corresponding arrow. If all the streams of events transferring the system S from state to state are Poisson, the process in the system is Markov.

Marking the intensities of the Poisson streams (transition probability densities) at the corresponding arrows of the state graph, we obtain a labeled state graph, on the basis of which one can write the Kolmogorov equations and calculate the state probabilities.

Example. A technical system S consists of two nodes, I and II, each of which can fail independently of the other. The failure flow of the first node is Poisson with intensity λ1, that of the second is Poisson with intensity λ2. Immediately after a failure each node begins to be repaired (restored); the restoration flow (completions of repair) is Poisson, with intensity μ1 for the first node and μ2 for the second. The states of the system: S11, both nodes operational; S21, the first node under repair, the second operational; S12, the first operational, the second under repair; S22, both under repair. For this system one draws the labeled state graph and writes the Kolmogorov equations by the rules above.

4.3.4. Limiting probabilities of states

Let there be a physical system S = (S1, S2, ..., Sn) in which a Markov random process with continuous time (a continuous Markov chain) takes place. Suppose that λij = const, i.e., all streams of events are the simplest (stationary Poisson). Writing down the system of Kolmogorov differential equations for the state probabilities and integrating these equations under given initial conditions, we obtain p1(t), p2(t), ..., pn(t) for any t. Let us pose the following question: what happens to the system S as t → ∞? Will the functions pi(t) tend to certain limits? These limits, if they exist, are called the limiting probabilities of the states. The following theorem can be proved: if the number of states of S is finite and from each state it is possible to pass (in one or several steps) to every other state, then the limiting probabilities of the states exist and do not depend on the initial state of the system. Suppose this condition is satisfied, so the limiting probabilities pi = lim of pi(t) as t → ∞ exist (i = 1, 2, ..., n).


Thus, as t → ∞, a certain limiting stationary regime is established in the system S. The meaning of the limiting probability pi: it is nothing other than the average relative time the system spends in the given state. To calculate the probabilities pi, all the left-hand sides (derivatives) in the system of Kolmogorov equations are set equal to 0. The resulting system of linear algebraic equations is solved together with the normalization condition p1 + p2 + ... + pn = 1.
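The recipe just stated (zero the derivatives, add the normalization condition) is a linear solve. A minimal sketch with an invented three-state labeled graph:

```python
import numpy as np

# Hypothetical labeled graph S0 <-> S1 <-> S2 with invented intensities.
n = 3
Q = np.zeros((n, n))
for (i, j), lam in {(0, 1): 1.0, (1, 0): 2.0,
                    (1, 2): 1.0, (2, 1): 2.0}.items():
    Q[i, j] += lam
    Q[i, i] -= lam

# Limiting probabilities: set all derivatives to zero (p @ Q = 0) and
# replace one of the dependent equations with the normalization sum(p) = 1.
A = Q.T.copy()
A[-1, :] = 1.0
b = np.zeros(n)
b[-1] = 1.0
p_inf = np.linalg.solve(A, b)
print(p_inf)
```

Replacing one row is necessary because the equations p @ Q = 0 are linearly dependent; without the normalization condition the system has infinitely many solutions.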

4.3.5. Death and reproduction scheme

We know that, having a labeled state graph at our disposal, we can easily write the Kolmogorov equations for the state probabilities, as well as write and solve the algebraic equations for the limiting probabilities. In some cases the latter equations can be solved in advance, in literal (symbolic) form. In particular, this can be done when the state graph of the system is a so-called "death and reproduction scheme".

In this scheme the states S0, S1, ..., SN form a chain in which each middle state is connected by forward and backward arrows with its two neighbours; let λ_{k,k+1} denote the intensity of the stream carrying the system from Sk to Sk+1, and λ_{k+1,k} the intensity of the reverse stream. From the first balance equation we obtain

p1 = (λ01 / λ10) p0. (4.4)

From the second, taking into account (4.4), we get:

p2 = (λ01 λ12) / (λ10 λ21) p0, (4.6)

and in general, for any k (from 1 to N):

pk = (λ01 λ12 ... λ_{k−1,k}) / (λ10 λ21 ... λ_{k,k−1}) p0. (4.7)

Substituting these expressions into the normalization condition p0 + p1 + ... + pN = 1, from this we obtain an expression for p0:

p0 = (1 + λ01/λ10 + (λ01 λ12)/(λ10 λ21) + ... + (λ01 λ12 ... λ_{N−1,N})/(λ10 λ21 ... λ_{N,N−1}))^(−1) (4.8)

(we raised the parenthesis to the power of −1 so as not to write two-story fractions). All other probabilities are expressed in terms of p0 (see formulas (4.4)-(4.7)). Note that the coefficients of p0 in each of them are nothing other than the successive terms of the series after the unit in formula (4.8). Hence, by calculating p0, we have already found all these coefficients.
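The scheme computes in a few lines. A sketch with invented intensities (lam[k] for the "reproduction" arrows Sk to Sk+1, mu[k] for the "death" arrows Sk+1 to Sk):

```python
# Hypothetical intensities for a death-and-reproduction chain S0..S3.
lam = [2.0, 1.5, 1.0]   # lambda_{k,k+1}
mu = [1.0, 2.0, 3.0]    # lambda_{k+1,k}

# Successive terms of the series after the unit in formula (4.8).
coef = [1.0]
for k in range(len(lam)):
    coef.append(coef[-1] * lam[k] / mu[k])

p0 = 1.0 / sum(coef)            # formula (4.8)
p = [c * p0 for c in coef]      # formulas (4.4)-(4.7)
print(p)
```

The running product reuses the coefficients exactly as the text notes: each pk is p0 times the k-th term of the series.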

The formulas obtained are very useful in solving the simplest problems of queuing theory.

The operation of a QS (queuing system) is a random process. A random (probabilistic, or stochastic) process is the process of change of the state of a system in time in accordance with probabilistic laws.

A process is called a process with discrete states if its possible states S1, S2, S3, ... can be enumerated in advance, and the transition of the system from state to state occurs instantly (in a jump). A process is called a process with continuous time if the moments of possible transitions of the system from state to state are not fixed in advance but are random.

The QS operation process is a random process with discrete states and continuous time.

A random process is called a Markov process, or a random process without aftereffect, if for any moment of time t0 the probabilistic characteristics of the process in the future depend only on its state at the given moment t0 and do not depend on when and how the system came to this state.

An example of a Markov process: System S is a taxi meter. The state of the system at time t is characterized by the number of kilometers traveled by the car up to this moment. Let the counter show S0 at the moment t0. The probability that at the moment t> t0 the counter will show this or that number of kilometers (more precisely, the corresponding number of rubles) S1 depends on S0, but does not depend on at what moments of time the counter readings changed until the moment t0.

In a number of cases, the history of the processes under consideration can be simply neglected and Markov models can be used to study them.

When analyzing random processes with discrete states, it is convenient to use a geometric scheme - the so-called state graph. Usually, the states of the system are depicted by rectangles (circles), and possible transitions from state to state - by arrows (oriented arcs) connecting the states (Fig. 1).

Figure 1 - State graph

For a mathematical description of a Markov random process with discrete states and continuous time, proceeding in the QS, let us get acquainted with one of the important concepts of the theory of probability - the concept of the flow of events.

A stream of events is understood as a sequence of homogeneous events following one after another at some random moments in time.

Examples would be:

  • the flow of calls at the telephone exchange;
  • the flow of switching on of devices in a household electrical network;
  • the flow of freight trains arriving at a railway station;
  • the flow of malfunctions (failures) of a computer;
  • the stream of shots directed at a target.

The flow is characterized by its intensity λ: the frequency of occurrence of events, or the average number of events entering the QS per unit of time.

A stream of events is called regular if events follow one another at regular intervals. Such a flow is relatively rare in practice, but it is of certain interest as a limiting case.

A stream of events is called stationary if its probabilistic characteristics do not depend on time. In particular, the intensity of a stationary stream is constant: λ(t) = λ = const.

A stream of events is called a stream without aftereffect if, for any two disjoint time intervals, the number of events falling on one of them does not depend on the number of events falling on the other. For example, the stream of passengers entering the metro has practically no aftereffect. The stream of customers leaving a counter with their purchases, however, does have an aftereffect (if only because the time interval between successive customers cannot be less than the minimum service time for each of them).

The flow of events is called ordinary if the probability of two or more events hitting a small (elementary) time interval Δt is negligible compared to the probability of hitting one event. In other words, the stream of events is ordinary if events appear in it one by one, and not in groups.

A stream of events is called the simplest (or stationary Poisson) if it is simultaneously stationary, ordinary, and has no consequences.

The simplest flow arises as a limiting flow in the theory of random processes just as naturally as the normal distribution arises in probability theory: by superposition of a sufficiently large number n of independent, stationary, and ordinary flows (of comparable intensity), a flow close to the simplest is obtained, with intensity λ equal to the sum of the intensities of the component flows: λ = λ1 + λ2 + ... + λn.
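The superposition property can be checked by simulation. The sketch below (intensities are invented) merges several independent simplest streams and verifies that the merged stream has the summed intensity, with mean gap equal to its reciprocal:

```python
import random

random.seed(7)
lams = [0.5, 0.8, 1.2, 0.9, 0.6]   # intensities of the component streams (assumed)
T = 10_000.0                       # observation horizon

# Generate each simplest stream as a sequence of exponential gaps, then merge.
merged = []
for lam in lams:
    t = random.expovariate(lam)
    while t < T:
        merged.append(t)
        t += random.expovariate(lam)
merged.sort()

rate = len(merged) / T             # should be close to sum(lams) = 4.0
gaps = [b - a for a, b in zip(merged, merged[1:])]
mean_gap = sum(gaps) / len(gaps)   # should be close to 1 / sum(lams) = 0.25
print(rate, mean_gap)
```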

Consider the simplest flow of events on the time axis as an unlimited sequence of random points. (Fig. 2)

Figure 2 - The flow of events

It can be shown that for the simplest flow the number m of events (points) falling on an arbitrary time interval τ is distributed according to the Poisson law

P_m(τ) = ((λτ)^m / m!) · e^(−λτ),

for which the mathematical expectation of the random variable equals its variance:

M = D = λτ.

In particular, the probability that no event occurs (m = 0) during the time τ equals

P_0(τ) = e^(−λτ).
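These three facts (Poisson counts, mean equal to variance, P_0 = e^(−λτ)) can be verified empirically. A sketch with assumed values of λ and τ:

```python
import math
import random

random.seed(3)
lam, tau = 2.0, 1.5            # intensity and window length (assumed values)
a = lam * tau                  # average number of events per window

counts = []
for _ in range(100_000):
    # Count the events falling in one window of length tau by accumulating
    # exponential inter-event gaps.
    t = random.expovariate(lam)
    m = 0
    while t < tau:
        m += 1
        t += random.expovariate(lam)
    counts.append(m)

mean_m = sum(counts) / len(counts)
var_m = sum((m - mean_m) ** 2 for m in counts) / len(counts)
p0_hat = counts.count(0) / len(counts)

# Poisson law: mean and variance both equal a; P(m = 0) = exp(-a).
print(mean_m, var_m, p0_hat, math.exp(-a))
```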

Let us find the distribution of the time interval T between two arbitrary neighboring events of the simplest flow.

In accordance with the previous formula, the probability that none of the subsequent events appears on a time interval of length t equals

P(T ≥ t) = e^(−λt),

and the probability of the opposite event, i.e., the distribution function of the random variable T, is

F(t) = P(T < t) = 1 − e^(−λt).

The probability density of the random variable is the derivative of its distribution function:

f(t) = F′(t) = λe^(−λt).

A distribution defined by such a probability density or distribution function is called exponential. Thus, the time interval between two adjacent arbitrary events has an exponential distribution, for which the mathematical expectation equals the standard deviation of the random variable:

M[T] = σ[T] = 1/λ,

and, conversely, the intensity of the flow is λ = 1/M[T].

The most important property of the exponential distribution (inherent to the exponential distribution alone) is the following: if a time interval distributed according to the exponential law has already lasted for some time τ, this in no way affects the distribution law of the remaining part of the interval T − τ: it is the same as the distribution law of the entire interval T.

In other words, for the time interval T between two successive adjacent events of a flow having an exponential distribution, any information about how long this interval has already lasted does not affect the distribution law of the remaining part.
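The memorylessness property is easy to see numerically: conditioning on the interval having already lasted some time leaves the distribution of the remainder unchanged. A sketch with assumed values:

```python
import random

random.seed(5)
lam, elapsed = 1.0, 2.0     # intensity and already-elapsed time (assumed)
samples = [random.expovariate(lam) for _ in range(500_000)]

# Condition on the interval having already lasted `elapsed` time units and
# look at the remaining part of the interval.
remaining = [t - elapsed for t in samples if t > elapsed]

mean_full = sum(samples) / len(samples)
mean_rest = sum(remaining) / len(remaining)

# Both means should be close to 1/lam: the remainder "forgets" the elapsed time.
print(mean_full, mean_rest)
```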

For the simplest flow with intensity λ, the probability of at least one flow event falling on an elementary (small) time interval Δt equals 1 − e^(−λΔt) ≈ λΔt.

Computational Technologies. Volume 13, Special Issue 5, 2008

Investigation of the semi-Markov stream of events

A. A. Nazarov, S. V. Lopukhova Tomsk State University, Russia e-mail: [email protected] pmk. tsu. ru, [email protected] ru

I.R. Garaishin

Branch of Kemerovo state university in Anzhero-Sudzhensk, Russia e-mail: [email protected]

In this paper a semi-Markov process is considered and its limiting model is studied. The results of the analytical treatment of the limiting model are compared with the results obtained by the asymptotic method.

Introduction

There is a problem of extending the class of mathematical models of flows of homogeneous events. Classical models of random streams of events are often inadequate for real information and telecommunication streams: Poisson and simplest flow models are frequently insufficient for a realistic description of the flows arriving at queuing systems. Although phase-type flows and modulated Poisson flows are more adequate to real situations, of great interest are semi-Markov flow models, of which Markov renewal flows and all of the above flows are particular cases. Research methods for such models are quite complex and lead to significant mathematical problems; therefore, along with the problem of expanding the classes of flows, there is the problem of developing methods for their study.

1. Mathematical model

A random stream of homogeneous events (a stream) is an ordered sequence

t1 < t2 < ···    (1)

of random variables tm, the moments of occurrence of events in the stream.

Let a semi-Markov matrix A(x) with elements A_{k1k2}(x) be given. The matrix P = lim of A(x) as x → ∞ is stochastic; therefore, together with a given initial distribution, it defines a certain Markov chain k(tm) with discrete time, which will be called the Markov chain embedded in the semi-Markov flow.

© Institute of Computational Technologies, Siberian Branch of the Russian Academy of Sciences, 2008.


A random flow of homogeneous events will be called semi-Markov if the probabilistic law of formation of sequence (1) is determined by the initial distribution and the equalities

A_{k1k2}(x) = P{k(t_{m+1}) = k2, t_{m+1} − t_m < x | k(t_m) = k1}

for all m ≥ 1.

Let us denote by n(t) the number of events of the semi-Markov flow occurring during time t, i.e., in the interval [0, t).

The objective of this work is to find the probability distribution P(n, t) = P{n(t) = n} under stationary operation of the ergodic Markov chain k(tm). Obviously, the process n(t) is non-Markov, so we define two more random processes: z(t), the length of the interval from the moment t to the moment of the next event in the flow under consideration, and k(t), a left-continuous process with continuous time whose values on the interval (tm, t_{m+1}] are constant and defined by the equalities k(t) = k(t_{m+1}). With these definitions, the random process (k(t), n(t), z(t)) is a three-dimensional Markov process with continuous time.
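As a rough illustration (not from the paper), a semi-Markov flow with a two-state embedded chain can be simulated directly, assuming the common factorization A_{k1k2}(x) = P_{k1k2} · F_{k1k2}(x) with exponential sojourn laws F; all numerical values are invented:

```python
import random

random.seed(11)
# Hypothetical semi-Markov matrix: A[k1][k2](x) = P[k1][k2] * F[k1][k2](x),
# with exponential sojourn distributions F (rates below).
P = [[0.3, 0.7],
     [0.6, 0.4]]            # embedded Markov chain transition probabilities
rate = [[1.0, 2.0],
        [1.5, 0.5]]         # rate of the exponential law F_{k1k2}

def count_events(horizon):
    """Number of semi-Markov flow events n(t) on [0, horizon)."""
    k, t, n = 0, 0.0, 0
    while True:
        # Draw the next embedded-chain state k2 with probability P[k][k2] ...
        k2 = 0 if random.random() < P[k][0] else 1
        # ... and the inter-event time from the matching distribution F_{k,k2}.
        t += random.expovariate(rate[k][k2])
        if t >= horizon:
            return n
        n += 1
        k = k2

counts = [count_events(50.0) for _ in range(2000)]
print(sum(counts) / len(counts))   # empirical mean of n(50)
```

Repeating such runs for many realizations gives an empirical estimate of P(n, t) against which analytical or asymptotic results could be compared.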

Note that the random process k(t) is not semi-Markov in the classical definition, since a semi-Markov process S(t) is right-continuous and, as indicated in the literature, no Kolmogorov differential evolution equations exist for its transition probabilities, whereas the process (k(t), n(t), z(t)) proposed above is Markov, so for its probability distribution

P(k, n, z, t) = P{k(t) = k, n(t) = n, z(t) < z}    (2)

it is not difficult to compose a system of Kolmogorov differential equations

∂P(k, n, z, t)/∂t = ∂P(k, n, z, t)/∂z − ∂P(k, n, 0, t)/∂z + Σ_ν ∂P(ν, n − 1, 0, t)/∂z · A_{νk}(z).

We denote

H(k, u, z, t) = Σ over n of e^(jun) · P(k, n, z, t),

where j = √(−1) is the imaginary unit. For these functions, from the Kolmogorov system of differential equations we can write

∂H(k, u, z, t)/∂t = ∂H(k, u, z, t)/∂z − ∂H(k, u, 0, t)/∂z + e^(ju) Σ_ν ∂H(ν, u, 0, t)/∂z · A_{νk}(z).    (3)

Denoting by H(u, z, t) = (H(1, u, z, t), H(2, u, z, t), ...) the row vector function, we rewrite the system of equations (3) in matrix form:

∂H(u, z, t)/∂t = ∂H(u, z, t)/∂z + ∂H(u, 0, t)/∂z · (e^(ju) A(z) − I),    (4)

whose solution satisfies the initial condition H(u, z, 0) = R(z), where I is the identity matrix and the stationary distribution R(z) of the two-dimensional Markov process (k(t), z(t)) is determined by the equality

R(z) = σ r ∫ from 0 to z of (P − A(x)) dx,    where σ = (r A E)^(−1).

Here r is the row vector of the stationary probability distribution of the values of the embedded Markov chain k(tm), E is a unit column vector, and A = ∫ from 0 to ∞ of (P − A(x)) dx.

2. Pre-limiting model

Consider differential equation (4), whose solution H(u, z, t) satisfies the initial condition H(u, z, 0) = R(z). Then the Fourier-Stieltjes transform

φ(u, α, t) = ∫ e^(jαz) d_z H(u, z, t)

of the vector function H(u, z, t) satisfies the equation

∂φ(u, α, t)/∂t = −jα φ(u, α, t) + ∂H(u, 0, t)/∂z · (e^(ju) A*(α) − I)    (5)

and the initial condition

φ(u, α, 0) = R*(α) = ∫ e^(jαz) dR(z),

where A*(α) = ∫ e^(jαz) dA(z). The solution of equation (5) has the form

φ(u, α, t) = e^(−jαt) [R*(α) + ∫ from 0 to t of e^(jατ) ∂H(u, 0, τ)/∂z (e^(ju) A*(α) − I) dτ].    (6)

Letting t tend to infinity in expression (6), we obtain the Fourier transform in τ of the vector function ∂H(u, 0, τ)/∂z; performing the inverse Fourier transform, we determine this vector function explicitly and rewrite equality (6) in the corresponding explicit form (7).

Knowing that H(u, t) = φ(u, 0, t), we obtain an expression for the vector function H(u, t) and hence for the probability distribution P(n, t) of the number of events occurring during the time t:

P(n, t) = (1/2π) ∫ from −π to π of e^(−jun) H(u, t) E du.    (8)

Conclusion

Carrying out asymptotic studies of the semi-Markov flow of events, similar to the study of Markov renewal flows, we find that the third-order asymptotics of the characteristic function can be written in the form

M e^(ju·n(t)) ≈ exp(ju·a1·t + (ju)²·a2·t + (ju)³·a3·t),

where the coefficients a1, a2, a3 for a semi-Markov flow are determined in the same way as in the cited works. The obtained equalities (8) determine the probability distribution P(n, t) of the number of events occurring in a stationary semi-Markov flow given by the semi-Markov matrix A(x) and its Fourier-Stieltjes transform A*(α). The numerical implementation of formulas (8) allows one to find numerical values of the probabilities P(n, t) for a sufficiently wide class of matrices A*(α) and values of t, but the possibilities of numerical implementation are limited by computational resources. For sufficiently large values of t it is natural to apply the method of asymptotic analysis of the semi-Markov flow, in the same way as was done for the Markov renewal flow and the sifted Markov renewal flow in earlier work. The availability of the numerical algorithm (8) makes it possible to determine the area of applicability of the asymptotic results. For the considered flows with three states of the embedded Markov chain, the Kolmogorov-Smirnov distance between the distributions obtained asymptotically and by formulas (8) does not exceed 2-3% for certain values t = T; this allows us to assert that for t > T it is effective to use the asymptotic results, while for t < T it is expedient to use formulas (8) obtained in this work.

Bibliography

Korolyuk V. S. Stochastic Models of Systems. Kiev: Naukova Dumka, 1989. 208 p.

Nazarov A. A., Lopukhova S. V. Investigation of the Markov renewal flow by the second-order asymptotic method // Proc. Int. Sci. Conf. "Mathematical Methods for Increasing the Efficiency of Telecommunication Networks Functioning". Grodno, 2007. P. 170-174.

Lopukhova S. V. Investigation of a semi-Markov flow by the third-order asymptotic method // Proc. VI Int. Sci.-Pract. Conf. "Information Technologies and Mathematical Modelling". Tomsk: Tomsk University Publishing House, 2007. Part 2. P. 30-34.
