What is the last mile problem? The last mile problem in the energy sector: state of the art and prospects for a solution

Trying to receive data from the Internet
over a low-speed connection
is like trying to suck jelly through a straw.

The traditional public switched telephone network (PSTN) transmits voice and data within a narrow frequency band of 300-3400 Hz. The rapid growth of the Internet, and the fact that most users access it with standard analog modems, causes congestion in the PSTN, which was never designed for Internet load: sessions are much longer on average and far less evenly distributed than telephone calls. The second problem is that the speeds analog modems can provide are no longer sufficient for comfortable access to the services of the existing network (and above all to the Internet). This applies not only to private (residential) users, but also to the growing category of business users who work from home offices and need to connect to corporate networks at data rates significantly higher than traditional analog modems can deliver.

The difficulty of achieving the required connection speed lies in the fundamental principles of telephone network construction: by their nature, telephone networks were not designed for high-speed data transmission. When Alexander Bell invented the telephone, his imagination went no further than allowing people in different places to talk to each other. Besides the fact that traditional telephone (that is, voice) communication occupies a very narrow frequency band, it also tolerates much greater signal attenuation than data transmission can. The biggest problem, in the most literal sense, lies between the telephone exchange and the subscriber's house. Telephony has come a long way from manual switchboards to modern digital exchanges offering subscribers a wide variety of services, yet the same twisted pair is laid between the exchange and the subscriber as at the dawn of telephony. And there are already almost a billion such twisted pairs around the world.

As the cost of the user equipment needed for Internet access gradually falls, the bottleneck shifts to connection throughput and its cost. Everyone who uses the Internet is forced to wait (and wait, and wait again) until the desired site is found and the required page loads. The situation gets even worse when large files (such as photos or video) have to be transferred. Moreover, the more users work on the Internet simultaneously, the slower each individual connection becomes, because a sharp increase in traffic significantly raises the load on telephone networks. To fully realize the potential of the Internet in distance learning, commerce and entertainment, the hurdle of insufficient (and overpriced) connections must be overcome. The user wants one thing: high-speed, always-on access. Yet even though a high-speed data network covers the whole country to one degree or another, access to it by end users (that very "last mile") can be fraught with technical and economic difficulties. Backbone lines carry gigabits of information, but only a very small number of end users can transfer data at even a few hundred kilobits per second. Pulling a fiber-optic line to every user is very expensive. Coaxial cables (cable television) allow high-speed transmission, but mostly in one direction. Telephone lines, as they are currently used for voice telephony, offer a low data rate. Only broadband technologies, which are the future of the telecommunications industry, can provide access at the necessary speed.

The telecommunications of the future are based on giving every user the possibility of high-speed data transmission. But how do you transfer data at high speed over the critical "last mile"? There are several technological directions for overcoming this obstacle. (The mere existence of several alternative technologies aimed at the same problem does not mean the user has a wide choice of equivalent options from which to select the best. In most cases, only one option will actually be available.)

The main candidates for solving the last mile problem are the following technologies: the xDSL digital subscriber line family, cable modems, and wireless and satellite technologies.

None of these technologies can be considered an ideal solution to the "last mile" problem. Many argue that only two technologies can really solve it: cable modems and xDSL. Both are based on already existing cable plant, which, importantly, reaches almost all potential users. Another technology, fixed wireless access (sometimes called the wireless local loop), lags behind the two mentioned above because it requires some infrastructure to be built before full service can start.

Other data transfer technologies either simply do not solve the "last mile" problem (they do not provide sufficient speed) or are too expensive for most potential users. The first group includes connections over familiar analog modems, which have already reached the maximum data rate achievable over traditional twisted-pair telephone wires. The second group is fiber-optic cable. Some advocate completely replacing the entire telephone cable network with new fiber-optic cables capable of very high transmission speeds. However, neither now nor in the foreseeable future will such a wholesale replacement be carried out, because of its cost. Even for the United States, quite prosperous in telecommunications terms, the most optimistic forecasts put the widespread introduction of fiber technologies at well over a decade away. At the same time, there are certain access network configurations (for example, when a sufficiently large group of users is located far from the local exchange) in which the use of optical cable is already economically viable. It should be emphasized that in the latter case we are talking about group use of the optical cable, that is, its multiplexing.

It would be a mistake to treat solving the last mile problem as a matter of choosing any single technology. In practice, the technologies start from unequal positions: not all providers occupy the same place in the structure of the networks they intend to use. Operators that own telephone cable networks are unlikely to deploy cable modems, and operators specializing in wireless infrastructure are unlikely to invest in xDSL. On the other hand, the ability to use different technologies on the "last mile" lets operators that own large, branched networks offer their customers several options for organizing high-speed access: for example, xDSL plus a wireless access system, or xDSL plus cable modems.

In regions where broadband coaxial cable networks, and later hybrid fiber/coaxial (HFC) networks built to connect subscribers to cable television, have been widely deployed, a powerful platform already exists for providing high-speed access to home users.

Carrying terrestrial television broadcasts over coaxial cable networks was proposed by the American E. Parsons in 1948. The first such system was created in Seattle and distributed 5 television (TV) channels. Cable television made it possible to overcome many of the shortcomings of terrestrial TV and, above all, to bring high-quality TV to areas with poor over-the-air reception. The first CATV systems were collective-reception systems operating first in the metre-wave band (47-240 MHz) and then in the decimetre band (550-862 MHz in Europe and 600-750 MHz in the USA). These systems were relatively simple: a shared antenna, a headend, and a coaxial transmission path with the required number of couplers and amplifiers (trunk and building). Strictly speaking, these were not yet CATV networks but rather systems for the collective reception of television programmes. Naturally, both the modulation method (AM) and the position on the frequency scale were identical to those of the over-the-air television signal, since the signals were meant to be received by standard television sets. As CATV systems grew larger, their reliability decreased, and the question of their operational maintenance became acute. CATV systems therefore began to be supplemented by remote monitoring systems that tracked the state of the network and, above all, the parameters of the trunk amplifiers. To carry status information back to the headend, part of the spectrum below the operating range (usually 5-30 MHz or 5-50 MHz) was used. An alternative way of delivering this service information to the headend is to use a standard public switched telephone network (PSTN) modem. Thus, in cable TV systems it became possible, in principle, to offer the user interactive network services.

The revolution in telecommunications networks brought about by the emergence and widespread deployment of optical cables also affected cable television. At this point in the development of CATV networks, the purely coaxial transmission medium was replaced by the hybrid fiber/coaxial (HFC) medium. In the HFC-based CATV architecture, broadcast TV and switched video signals are transported over optical fibre from the CATV headend to an optical network unit (ONU). The ONU connects the optical backbone to the coaxial distribution network; in it, the channels carrying video, voice and data are translated into the frequency ranges allocated to them. Note that the coaxial segment of an HFC network requires duplex amplifiers that support two-way transmission. The ONU also performs some additional functions, including separating the "upstream" (from subscribers to the network) and "downstream" (from the network to subscribers) signals. The problem with using the HFC architecture for voice telephony is insufficient voice quality, caused mainly by external interference (ingress noise). For data transmission the main problem is likewise the external interference created in the upstream channel by household appliances such as microwave ovens, refrigerators and the like. According to available statistics, fewer than 5% of cable TV networks can use this band for its intended purpose, since it is strongly affected by interference from household electrical appliances. It is therefore often advisable to use a telephone subscriber line as the uplink of the cable TV network.

In the mid-1990s, cable TV operators studied the possibility of using the cable TV infrastructure for broadband access to network services by the residential sector. As a result, devices appeared that were, not entirely aptly, called cable modems. Cable modems are devices that provide high-speed access to data networks over a hybrid fiber/coaxial (HFC) network.

Unlike traditional PSTN modems, cable modems are part of a point-to-multipoint topology system in which multiple cable modems of different users are connected via a hybrid optical-coaxial medium to the headend controller of the cable TV operator. Like xDSL modems, cable modems operate in the "always on" mode, i.e., they are constantly connected to the headend.

Cable modem technology very elegantly sidesteps the problems of the analog subscriber line, the trunk lines and the switching resources of the public switched telephone network (PSTN): cable modems deliver Internet traffic directly to an Internet router located at the headend of the cable TV system. Another advantage is that it can (though not always) reuse the existing cable infrastructure of cable TV systems. In addition, the cable-modem component base is available and relatively inexpensive, and - perhaps most importantly - it allows interoperation of cable modems from different manufacturers. Most cable modems are external devices connected to a personal computer via a standard 10Base-T Ethernet card or a USB port; they can also be made as a plug-and-play board inserted into a free ISA slot. On the network side, access is provided by a Cable Modem Termination System (CMTS) built around an access concentrator.

The "downlink" bandwidth (from the network to the subscribers) is shared by the entire set of user cable modems. Each standard television channel, occupying 6 MHz of RF spectrum, provides 27 Mbps downstream data using 64 QAM; when using 256 QAM modulation, the data rate can be increased up to 36 Mbps. Data transmission channels in the "upstream" direction theoretically allow data transmission at speeds from 500 Kbps to 10 Mbps using 16 QAM or QPSK technologies (depending on the bandwidth of the frequency spectrum allocated for servicing users). The frequency bands allocated for the transmission of upstream and downstream data are shared between all active users connected to this cable network segment. An individual user can count on a data transfer rate ranging from 500 Kbps to 1.5 Mbps - depending on the network architecture and load (the figure is significant, especially when compared with analog modems).

Cable TV systems with cable modems are built on a multiple-access platform. Because users share the available frequency band for the duration of their transfers, the data rate per user falls as the number of simultaneously active users grows. A simple calculation seems to show that if a 27 Mbps channel is used by two hundred users at once, each gets at best 135 Kbps. Is such a system then any better than an ISDN connection offering 128 Kbps? It is not that simple. Unlike traditional telephony, where the subscriber gets a dedicated connection for the duration of the call, a cable modem does not occupy a fixed slice of the band for the whole session. As already mentioned, the bandwidth is divided only among the users who are actually receiving or transmitting data at that instant. So instead of rigidly assigning 135 Kbps to each of the 200 "active" users, in every fraction of a second the whole band is split only among those who are transferring data, and the speed can be tens of times higher (after all, someone who has just downloaded a web page and is reading it is not an "active user" at that moment). If some group of users is constantly and heavily active, the cable operator can always widen the band by allocating another 6 MHz channel to data. Another way to raise the average per-user rate is to push the fiber-optic cable closer to groups of potential users; this reduces the number of users served by each network segment and naturally increases the bandwidth available to each of them.
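
The statistical multiplexing effect described above can be illustrated with a toy simulation. The 10% activity factor below is an arbitrary assumption chosen for illustration, not a figure from the text:

```python
# Toy illustration of statistical multiplexing on a shared 27 Mbps channel.
# Assumption (illustrative only): each of 200 subscribers is actively
# transferring data at a given instant with probability 0.1.

import random

CHANNEL_MBPS = 27.0
SUBSCRIBERS = 200
P_ACTIVE = 0.1          # arbitrary activity factor for the sketch
TRIALS = 10_000

shares = []
for _ in range(TRIALS):
    active = sum(random.random() < P_ACTIVE for _ in range(SUBSCRIBERS))
    if active:
        shares.append(CHANNEL_MBPS / active)

print(f"naive static share : {CHANNEL_MBPS / SUBSCRIBERS * 1000:.0f} Kbps")
print(f"mean dynamic share : {sum(shares) / len(shares) * 1000:.0f} Kbps")
# The dynamic share comes out roughly ten times the static 135 Kbps figure.
```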

Turning to the facts: worldwide, cable modems still have more residential users than, for example, ADSL. By mid-1999 about 1.3 million cable modems were in use for high-speed data transmission, 1 million of them in the United States.

By the end of 2002, In-Stat/MDR counted about 10.2 million cable modem users in the USA against about 7.6 million DSL lines (US subscribers have traditionally used cable modems more actively than subscribers in other countries).

But alongside its obvious advantages, the technology also has significant drawbacks. As mentioned above, one disadvantage of cable modems (unlike, say, xDSL) is that the data lines are shared: the bandwidth available to each user connected to a particular node may shrink as more users connect to the same node. Another disadvantage is that the system is "open" - no user gets a dedicated, isolated connection. This reduces the attractiveness of cable modems for business use. The cable plant can be thought of as one large LAN, so in theory there is some possibility of users connecting to each other and accessing one another's data; obviously, nobody wants to share a transmission system with a competitor. In addition, cable modems provide high-speed access over cable TV lines mainly to private users, because office buildings and businesses are in most cases not connected to the cable TV network.

Just as the spread of cellular and cordless telephones freed subscribers from the cord connecting the handset to the telephone network, wireless local loop (WLL) technology opened up access to the public telephone network for all those who had already lost hope of ever being connected to the global telephone network.

This technology can most accurately be defined as the use of radio access to provide broadband network services to individual users. Moreover, this technology can be used not only in those regions where the telephone cable network is not sufficiently developed, but also where the level of development of cable networks is quite high. In this case, operators using broadband wireless access technologies are already in direct competition with local operators.

Broadband wireless lines can be used for high quality data, video and telephone communication. Historically, a telephone line was used for the uplink, but operators are now moving to a full duplex wireless system. The data rate is determined by the width of the frequency spectrum available to the operator and the modulation scheme. For example, the efficiency of digital modulation schemes ranges from 0.7 bps per Hz using BPSK modulation to 3.5 bps per Hz using 16QAM.
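
Given these spectral efficiencies, the achievable rate is simply bandwidth multiplied by efficiency. A minimal sketch, in which the 6 MHz channel width is an illustrative assumption:

```python
# Data rate = channel bandwidth * spectral efficiency.
# The 6 MHz channel width below is an illustrative assumption.

def rate_mbps(bandwidth_mhz: float, efficiency_bps_per_hz: float) -> float:
    return bandwidth_mhz * efficiency_bps_per_hz

for name, eff in (("BPSK", 0.7), ("16QAM", 3.5)):
    print(f"{name}: {rate_mbps(6.0, eff):.1f} Mbps in a 6 MHz channel")
# BPSK -> 4.2 Mbps, 16QAM -> 21.0 Mbps
```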

As with over-the-air television broadcasting, wireless data links are organized on the line-of-sight principle. The signal is transmitted from an antenna, usually placed on a hill or a tall building, to special receiving antennas installed on users' buildings. Obtaining a sufficiently clean slice of spectrum can be quite a challenge; another problem is the line-of-sight requirement for most links. Setting up a link is fairly simple - it does not require, for example, the volume of construction (earth) work needed to lay cable - but there is no guarantee that a link built on the line-of-sight assumption will keep working as long as needed: a house built across the path can simply "cut off" the data link. As with over-the-air television, any obstruction (dense tree canopy, hills, tall buildings, even heavy precipitation) can make reception difficult. Multipath distortion (caused by signal reflections off buildings and other objects) can also seriously complicate reception. Distance must be considered as well, since the signal can only be received within a certain range of the transmitter. The solution can be a network of repeaters across the service area, built on the cellular principle.

The organization of a network based on wireless links is similar to the structure of a cable network. The main difference is that the digital data signal (for example, information requested from the Internet) is modulated onto a radio-frequency channel and transmitted to an antenna installed on the user's building. From the antenna, a coaxial cable runs to a converter, which shifts the signal from the microwave range into the cable-television frequency range. The signal then reaches the modem in the user's premises, which demodulates the incoming data and passes it to a personal computer or the LAN.

Wireless local loop technology has several advantages over alternative access technologies. First, wireless links can be deployed in places where, because of the density or age of the buildings, or the sheer impracticality of construction work, a cable simply cannot be laid. Second, for certain distances and localities, wireless access may simply be much more cost-effective than the alternatives; here both labour costs and the length of the subscriber line have to be taken into account.

The cost of cable systems depends heavily on the distance between buildings and on how concentrated the groups of subscribers are; the cost of wireless systems is free of this dependence. The cost of building cable plant also depends strongly on the cost of labour, which usually keeps rising, whereas the cost of wireless systems depends mainly on the cost of subscriber equipment, which tends to fall as technology improves. A third positive factor of wireless technology is a significantly shorter commissioning time compared with cable infrastructure.

Because radio systems provide area coverage, network planning is much simpler than for cable systems. Wireless systems can respond much more quickly to changes in the needs and number of users, whereas the planning of cable systems rests largely on preliminary estimates (it is fortunate if the estimates happen to match reality).

There are also more prosaic considerations. If a user gives up your services in favour of another operator, then with cable technology all the investment in that cable run is lost, whereas with wireless technology the subscriber equipment can simply be removed and installed at a new subscriber's site. In addition, it is much easier to keep a properly built wireless link operating and secure than a cable one. In many countries - in Africa, for example - copper cables buried in the ground are simply stolen (unfortunately, Russia can also be counted among such countries). Even fiber-optic cables have some scrap value.

In practice, the use of satellites for Internet access and high-speed data transmission splits into two big problems: organizing backbone data links (which is part of big business) and organizing high-speed access for individual end users. End users include not only private individuals but also large corporations, medium and small enterprises, and various offices (including home offices).

In short, satellite systems have several attractive features in terms of providing high-speed data transmission services and Internet access.

Satellite systems allow you to bypass the "congestion" of terrestrial transmission systems. They can be configured to reflect the asymmetric nature of the Internet, both in individual transactions and geographically (for example, most of the content on the Internet is still located in the United States). Several distinctive features make satellite systems an attractive access technology. First of all, there is the economic efficiency for the provider: the satellite footprint is so large that it can serve a very large number of subscribers, and the cost of providing the service does not depend at all on the user's geographic location within the coverage area. The satellite channel can be received at any point in the footprint, regardless of the terrain.

Although satellite systems have many advantages, allowing them to be considered as one of the technologies for organizing high-speed data transmission on the “last mile”, there are also negative aspects.

Satellite access systems do not offer the highest data rate (about 400 Kbps towards the user), and they are not very fast in practice. Imagine you want to bring some material up on your screen. With a mouse click you send a request that travels over your telephone line, through your ISP and along the usual path across the Internet; the answer then comes back via satellite, and the signal covers a total of about 70 thousand kilometres. Even at the speed of light, such a means of accessing the Internet remains rather slow, which is especially noticeable in two-way communication in real time.
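
The round-trip figure of about 70 thousand kilometres translates into an unavoidable propagation delay. A minimal back-of-the-envelope sketch, using only the path length quoted above:

```python
# Propagation delay of a geostationary satellite hop.
# Assumption: total signal path of ~70,000 km (the figure quoted above).

SPEED_OF_LIGHT_KM_S = 299_792.458
PATH_KM = 70_000

delay_ms = PATH_KM / SPEED_OF_LIGHT_KM_S * 1000
print(f"satellite path delay: ~{delay_ms:.0f} ms")
# Roughly 230 ms before any terrestrial routing, queuing or server time is
# added, which is why interactive, real-time use feels sluggish over satellite.
```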

Investments in satellite communications systems amount to many billions of dollars, and success and profits are by no means guaranteed. Mention should also be made of traffic security, too long planning cycles for a rapidly changing industry such as telecommunications, and a lack of frequencies that could be easily used.

In addition, the disadvantages of satellite systems include the need to purchase and configure rather expensive equipment. However, there are a number of extreme situations when it is impossible to organize access to the Internet in any other way than via satellite (for example, for a ship located in the middle of the ocean).

Now let's focus on some specific technologies of wireless broadband access. Let's start with a brief review of two fairly well-known ones.

Among the many wireless access technologies, LMDS (Local Multipoint Distribution System), a cellular point-to-multipoint signal distribution system, is one of the few that offers the user broadband multimedia services. LMDS operates in the 28-32 GHz range allocated by the US Federal Communications Commission (FCC) for broadband subscriber access systems. It is sometimes described as cellular cable TV. The cellular principle avoids many of the problems associated with the line-of-sight condition that is mandatory in the MMDS broadband access system discussed below. Carriers of neighbouring cells use the same frequencies but different polarizations. LMDS can provide the user with up-to-date interactive multimedia services, including telephony and high-speed data. The technology lets some providers (for example, long-distance and international carriers) that have no subscriber access infrastructure of their own offer communication services to business and individual users relatively cheaply and very quickly. In the LMDS architecture the "last mile" of the access network is wireless; the user's antenna must have line of sight (LOS) to a cell site connected to a network that provides all the required communication services.

LMDS is most likely to be used in the business environment for interconnecting LANs in urban areas. It is also likely that LMDS has arrived too late for delivering television programmes. In LMDS, as in the MMDS technology discussed below, there is no easy way to increase throughput. This is not significant in simplex television broadcasting, where any user can receive any channel; but for traffic originating from users there is no easy way to increase the licensed bandwidth. A similar problem exists in cellular telephone networks.

LMDS is particularly well suited to urban areas with a high density of population, and therefore of potential users, where the small transmitter size and small cell area are quite acceptable and make the price of the service attractive to the user. Such small cells may, however, be unacceptable in suburban and rural areas, where a large number of transmitters would be required to achieve line of sight.

Another fairly well-known broadband wireless access system is MMDS (Multichannel (Microwave) Multipoint Distribution System/Service). It is very similar to LMDS but operates around the 2.4 GHz band, and the MMDS frequency range is narrower than that of LMDS. At present the MMDS band is used by cable television (CATV) providers to deliver broadcast analog television to users through CATV headends. As a result of the liberalization of telecommunication services, this band has also been opened to other services, including telephony and many interactive services.

Compared with LMDS, MMDS is less sensitive to external influences such as rain and thunderstorms, so the requirements on the permissible distance from the cell site are less stringent: MMDS covers an area with a radius of about 80 kilometres, whereas LMDS reaches no more than 10 kilometres.

The frequency band 2.2-2.7 GHz in the MMDS system is used to transmit video signals of 33 television channels from transmitting antennas to receiving user antennas. Subscribers within a zone with a radius of about 50 kilometers can receive these signals. With digital processing and compression of video signals, the number of channels can be increased to 100-150.

MMDS can be used to carry both analog and digital video signals. Reception of an analog television signal requires a relatively simple antenna mounted on the roof of the user's house and a set top box that contains a line-to-video converter and a descrambler. In the case of the digital version of MMDS, a more complex and expensive converter is needed. The currently produced MMDS equipment provides not only the possibility of transmitting television signals, but also the provision of voice and high-speed data transmission services.

As another example of wireless broadband access technologies, consider DBS (Direct Broadcast Satellite), a new generation of satellite television broadcasting. With digital methods of converting and transmitting television signals and a small receiving antenna, the technology becomes very attractive to users. The digitally received signal is decoded in the user's set-top box (STB), which has built-in intelligent functions enabling many new services, such as interactive television and information on demand.

Direct satellite broadcasting (BSS, broadcast satellite services) operates in the Ku band, occupying the 12.2-12.7 GHz spectrum. DBS users can receive 150 to 200 video channels using MPEG-2 compression. Besides video, some network service providers are planning broadband data transmission in the Ku band. Modern DBS systems support data transmission from the Internet to the subscriber at up to 400 Kbps, while a standard voice-frequency telephone channel is used to carry control signals from the subscriber back to the network.

Let us now turn to a brief review of the most popular wired broadband access technologies such as xDSL.

xDSL is a family of technologies for high-speed access to network services over an existing copper subscriber telephone line. In the acronym xDSL, the "x" denotes the specific flavour of DSL (Digital Subscriber Line) technology. Any subscriber who currently has telephone service can significantly increase the speed of his connection, above all to the Internet, using xDSL. Thanks to the variety of DSL technologies, the user can choose a data rate that suits him, from 32 Kbps to more than 50 Mbps; the achievable rate depends only on the parameters and length of the line.

It is often believed that a subscriber telephone line has a bandwidth of 4 kHz. This is quite wrong. The subscriber line's bandwidth is limited because the telephone network equipment is designed that way, not because twisted pair is incapable of carrying high-frequency signals. With appropriate line coding, xDSL technologies achieve megabit data rates over the same pair.

The oldest and slowest xDSL technology is IDSL (ISDN Digital Subscriber Line), while the fastest and youngest is VDSL (Very High Speed Digital Subscriber Line). In between lie other technologies such as HDSL (High Speed Digital Subscriber Line) and ADSL (Asymmetric Digital Subscriber Line); the latter has the greatest potential in the mass consumer market.

DSL technologies make high data rates possible. For example, ADSL provides 1.5-8 Mbps downstream and 640 Kbps to 1.5 Mbps upstream. VDSL provides 13-52 Mbps downstream and 1.5-2.3 Mbps upstream in the asymmetric scheme (for symmetric VDSL the rate is 13-26 Mbps in each direction). The achievable rate depends on distance and falls as the line gets longer. For ADSL, a rate of more than 8 Mbps can be reached on a 3 km line, and about 1.5 Mbps on a 6 km line. The picture for VDSL is similar: 52 Mbps corresponds to a line of about 300 metres, and 13 Mbps to about 1.5 km. At the same time, these technologies support simultaneous telephony, high-speed Internet access, video on demand and one (ADSL) or three (VDSL) TV channels of DVD quality. Other DSL variants can carry voice and high-speed Internet access but are not suitable for high-quality real-time video.
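
The rate-versus-distance dependence can be illustrated with a simple interpolation over the reference points quoted above. The linear interpolation is an illustrative simplification; real modems negotiate the rate from the measured line characteristics:

```python
# Illustrative rate-vs-distance estimate built only from the reference
# points quoted in the text; linear interpolation is a simplification.

REFERENCE_POINTS = {
    "ADSL": [(3.0, 8.0), (6.0, 1.5)],     # (line length km, Mbps downstream)
    "VDSL": [(0.3, 52.0), (1.5, 13.0)],
}

def estimate_rate(tech: str, distance_km: float) -> float:
    (d1, r1), (d2, r2) = REFERENCE_POINTS[tech]
    if distance_km <= d1:
        return r1
    if distance_km >= d2:
        return r2
    t = (distance_km - d1) / (d2 - d1)
    return r1 + t * (r2 - r1)

print(f"ADSL at 4.5 km: ~{estimate_rate('ADSL', 4.5):.1f} Mbps")
print(f"VDSL at 1.0 km: ~{estimate_rate('VDSL', 1.0):.1f} Mbps")
```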

DSL technologies have certain advantages. Any subscriber connected to the public telephone network has a copper telephone line that can be used to deploy a data line. That is, it is not required to create a new infrastructure. The system requires only two ADSL devices (at the station and at the user's premises) and a twisted pair of wires (unfortunately, the performance of a DSL line degrades as the distance from the station increases or the quality of the line deteriorates). A DSL line provides a reliable and permanent (unlike analog modems) connection. Compared to other access technologies, DSL requires significantly less investment in terms of the data transfer speed that can be achieved.

xDSL technologies provide the most economical way to meet the needs of users for high-speed data transmission. Different variants of DSL technologies provide different data transfer rates, but in any case, this speed is much higher than the speed of the fastest analog modem.

The diversity of DSL technologies makes it possible to match a specific technology to a specific category of users. In particular, asymmetric ADSL suits private users, who are mostly consumers of information, while symmetric technologies suit business users, whose transmitted and received flows are close in volume. In addition, ADSL retains the analog telephone and/or the ISDN basic rate access (ISDN BRI): the first feature keeps ordinary telephone service working if the ADSL equipment fails, and the second protects the telecom operator's investment. xDSL can be regarded as a serious competitor to cable modems. In theory cable modems offer a higher data rate than, say, ADSL, but in reality most cable networks cannot provide cable-modem access using the full bandwidth of the coaxial cable, and where an upstream channel exists it is shared among all users. The development of hybrid fiber/coax systems has mitigated this problem, but such systems are still quite expensive and will take a long time to mature. For the moment, therefore, xDSL remains the most viable solution to the last mile problem.

It should be noted that, for the time being, the possibilities of obtaining high-speed ADSL-based access in Russia are limited. The territorial (one might say geographical) position of the user plays a very important role, but it is far from the only obstacle. Even if a potential user is covered by a cable TV network or has a telephone line, this does not at all mean that those lines are technically usable for high-speed data. Much also depends on who provides the service: some cable and telephone companies are successfully developing high-speed data services, while others prefer not to bother. Such neglect of high-speed data transmission by some telecom operators is explained by the fact that roughly 90% of their income comes from telephone services.

Choice is a hallmark of today's digital telecommunications world. Moreover, all new technologies compete with each other to a certain extent, which allows us to expect an increase in the quality of services provided and a decrease in their cost.

Despite the competition between providers promoting different technologies, there is no reason to assume that one of them will eventually win outright. Because of their fundamental differences, all of them have a chance to hold on to their own share of users. The choice is up to the users.

The optimal access technology should be cheap enough, requiring additional expense only when new users are added; it should give the user not only high bandwidth but also the quality of service (QoS) needed for the ordered service (for example, a signal delay no greater than the maximum allowable, a guaranteed bound on delay variation, the required reliability, and so on). All access methods - copper or fiber-optic cable, cable modems, wireless systems - meet these requirements to some extent. Unfortunately, none of them meets all the requirements at once.

In conclusion, we note one more significant trend in the evolution of broadband access networks, which follows from the general trend of increasing access network throughput: the emergence of combined solutions that mix several access methods within one network, and even within one access line. Such technologies include, for example, the mixed optical-radio-coaxial access technology HFRC, as well as VDSL, which essentially assumes a mixed copper-optical transmission medium in the subscriber access network.

On July 1, 2013, new energy tariffs come into force in the Sverdlovsk region. Recall that generation is no longer regulated by tariffs: electricity prices are set by the market. What remains regulated are electricity transmission, the sales mark-ups of suppliers of last resort, and electricity supplied to the population. Tariff policy is set at the federal level in the form of special legal acts, and the regions implement them. For several years now, tariff changes - as a rule, increases - have taken place in mid-summer rather than in January, in order to slow inflation and ease the financial burden on payers. Beforehand, the federal service sets the benchmarks for tariffs, and the regions then act within those limits. The network component in the final electricity bill for industry in Russia already stands at 46% (according to E-U). From July 1 this year it will rise by up to 10% compared with July 2012 (the increase over the same period a year earlier was 11%), and from July 1, 2014, by another 10%.

According to Alexander Sobolev, Deputy Chairman of the Regional Energy Commission, the power grid complex should be stable in terms of tariff policy, since RAB-based regulation was introduced several years ago, setting long-term tariffs both for electricity transmission services and for mutual settlements between grid organizations.

Despite efforts to suppress the growth of tariffs for electricity transmission services through networks, they will increase, Alexander Sobolev believes.

The issue is still raw

- Alexander Leonidovich, why are network tariffs growing?

First of all, the growth is due to inflationary processes, which increase grid companies' expenses under a number of items (purchasing electricity on the market to compensate for network losses, wages, repairs). These processes are provided for by current legislation: when calculating the tariff, we must take industry inflation indices into account.

The second reason is the emerging trend towards a reduction (or rather, a lack of growth) in the volume of electricity and capacity transmitted through regional networks. For example, according to the forecast for 2014, the volume of transmitted energy is roughly at the level of the 2013 plan, while the declared capacity falls by about 4%.

This is due to falling demand from some large consumers in the region. For example, the Bogoslovsky aluminum smelter is significantly reducing its energy consumption - the situation on the aluminum market forces it to. The same is happening with a number of other industrial enterprises.

There is a third reason: the long-unsolved problem of the so-called last mile has become more acute. The last mile is a cross-subsidization scheme in which large consumers connected to the backbone networks of the Federal Grid Company (FGC) pay not only its tariff but also the tariffs of distribution networks whose services they do not use. To make this work, the IDGCs lease the last mile from FGC - the section of network to which the consumer is directly connected. The mechanism raises the price of electricity for large enterprises but allows tariffs to be lowered for everyone else, as large consumers pay extra on behalf of small and medium-sized ones.

- In fact, this is a hidden tax on the industry, absorbing up to 30% of its electricity costs.

The last mile mechanism appeared in the Russian power industry in 2006, at the time of the industry reform, as a temporary measure (supposedly to avoid abrupt tariff changes in the regions and an increased load on end users) pending approval of a new tariff-setting policy. But, as usual, the temporary measure became a permanent headache.

Naturally, all these years industrialists have been trying with all their might to escape this system and move to direct contracts with the Federal Grid Company, starting years of litigation with the networks. As a rule, they leave on the strength of court decisions that have entered into legal force or regulations adopted by the Ministry of Energy of the Russian Federation. This step reduces their own electricity costs, but increases the tariff burden on the other consumers who remain with the networks. Recently the exodus has become widespread (although formally the distribution and backbone networks are now united in Rosseti. - Ed.).

We, the regulator, have to take this into account when making tariff decisions. If the last mile is eliminated from 2014, distribution network revenue will fall. To compensate for the shortfall (about 58 billion rubles a year nationwide. - Ed.), the tariff for small and medium-sized consumers would have to be raised sharply.

- What will the region do with the last mile?

The negative consequences of avoiding the last mile have been actively discussed at the federal level for a number of years. There are no decisions “at the top” yet: the bill has been submitted to the State Duma, but has been postponed until the autumn session. The reason is that the issue has not been worked out because of the opposing positions of consumers, authorities, and network organizations.

The essence of the dispute is this: consumers connected to the FGC networks do not want to pay regional grid companies more for transporting electricity, while the authorities and the networks see that the entire burden of large industrial enterprises leaving the last mile cannot simply be shifted at once onto other consumers - small and medium-sized businesses or the population. A compromise has to be found here. In our opinion, the Sverdlovsk region will need five years to solve this problem completely and eliminate its negative consequences.

- What exactly will happen in these five years?

It all depends on whether different rules are adopted at the federal level. If they do not appear, industrial consumers will keep leaving the last mile. In the Sverdlovsk region, the Kachkanarsky GOK was the pioneer: it left IDGC of Urals for direct service by FGC in 2011. This year its example has been followed by the Sverdlovsk Railway, the Ural Electromechanical Plant and others. Of the previous total volume of consumers connected via the last mile, IDGC of Urals retains a third.

- What does this mean for network companies?

If consumers connected to the last mile and included in the balance of regional networks do not pay, then the power grid complex will have shortfalls in income, which must be compensated in the next periods of regulation. (For example, Sverdlovenergo, a branch of IDGC of Urals, lost 1 billion rubles in revenue over the past year. - Ed.)

- That is, the economy of interregional distribution networks is collapsing?

I would not say it is collapsing, but there are problems. The point is that the networks' costs of transmitting electricity cannot be cut in proportion to the drop in their income. The main lever is cutting the investment programme, but over the past two years in the Sverdlovsk region the share of the investment programme financed from the profits of the electric grid companies has been close to zero - there is nothing left to cut. The departure of consumers can be compensated only from the companies' internal reserves, which, unfortunately, are not large. Moreover, such cuts may adversely affect the quality and reliability of the electricity supply.

Wallets are different

- Will the recent merger of the distribution and backbone networks into the single company Rosseti change anything?

We hope that Rosseti, together with consumers, will find a way out of the last mile situation, and a compromise will be reached. From the point of view of the Regional Energy Commission of the Sverdlovsk Region, the situation on our territory is not as critical as in some other regions.

One of the reasonable solutions we see is to extend the deadline for resolving the issue of the last mile by two or three years, then, within the framework of the existing conditions for the growth of tariffs, we will be able to remove this problem on the territory of the Sverdlovsk region. Strictly speaking, the problem is ultimately solved anyway at the expense of all other consumers. But given that this will be done smoothly, gradually, they will not feel the burden of payments as much as they could.

- Where does this tariff growth figure of 10% come from?

This figure is determined by the Ministry of Economy of the Russian Federation and fixed in the forecast of the socio-economic development of the Russian Federation. In our opinion, it reflects the objective situation.

- Is this amount insufficient for investment programs?

Yes, it is impossible to solve two problems at once: eliminating the negative consequences of consumers leaving the last mile, and finding investment for the modernization and development of worn-out distribution networks. We have to acknowledge that the investment programme of the entire power grid complex has been sequestered and, apparently, will continue to be in the future.

- And FGC's investment program is significantly increasing this year.

In my opinion, this is an element of state policy in the energy sector: at the present stage it is more expedient to develop the higher-voltage networks (220 kV and above). Let us hope that once they are properly developed, attention will also turn to the regional networks.

Dial-up

Historically, the first way to organize the last mile was dial-up remote access. As with most last mile solutions, the technology rests on the idea of using existing infrastructure - analog telephone wires - for data transmission. However, it had plenty of drawbacks: first, an established dial-up connection made it impossible to use the ordinary analog phone. The second major drawback was the low speed. Although there were various tricks involving aggressive traffic compression, they did not always help (especially on our telephone lines), so for simplicity we can take the upper speed limit for dial-up to be 56 kbps.

xDSL

A further development of the same basic idea (using the telephone lines already laid from the provider to the subscriber to organize the last mile) was the xDSL family of technologies. In practice, ADSL is the most common; it allows communication at distances of up to 5.5 km with data rates of up to 24 Mbps downstream and 3.5 Mbps upstream. A distinguishing feature of this last mile technology is asymmetry: the rate from the provider to the subscriber is much higher than in the reverse direction. The asymmetry makes it possible to raise download speed at the expense of upload speed. Since this pattern of use is the most common, ADSL has found the widest application, all the more so because an established ADSL connection does not interfere with the use of an analog telephone.

Moreover, it was this technology that revolutionized the Internet access services in our country, actually replacing the dial-up that prevailed before.

Alas, this method is not without drawbacks. First, to connect to ADSL networks, you need a separate device - an ADSL modem. The second problem is poor compatibility with the operation of burglar alarms that use telephone lines.


Ethernet

The second most popular last mile technology is Ethernet. It is worth clarifying that the name Ethernet itself does not specify a particular connection method or physical medium - the technology has variants that use coaxial cable, twisted pair or an optical channel. In the last mile context, however, it most often means twisted pair.

From the subscriber's point of view, Ethernet is a simpler technology. To connect to the Internet via an Ethernet provider, there is no need for additional equipment (a network card built into the computer is enough), and such a connection will be symmetrical by default (however, this already depends on the provider).

However, there is a price to be paid for any simplicity - in this case by the provider. To organize access with this technology, an Ethernet infrastructure has to be built inside the area (block of buildings) and an optical channel connected to it. The resulting infrastructure contains a fairly large amount of equipment (above all, routers), which needs regular maintenance.

Thus, the provision of services based on this technology is advisable when the area already has the necessary infrastructure - for example, a local area network. Therefore, most Ethernet providers have evolved from the management structures of area networks.

One can argue for a long time about which last mile technology is better - ADSL or Ethernet - but ultimately the subscriber decides, and at the moment both technologies are in demand, equally widely available, and offered at roughly comparable tariffs.

WiFi

Just like Ethernet, Wi-Fi was not originally intended for the last mile - it is a wireless local area networking technology. However, the spread of mobile devices and laptops equipped with Wi-Fi has made such a solution to the problem attractive. Strictly speaking, using Wi-Fi for the last mile is not quite what the technology was designed for and requires some adaptation.

Providers usually proceed as follows: to cover long distances, directional antennas are used to link remote parts of the network, and since directional antennas have a radiation pattern stretched along one direction, several ordinary Wi-Fi access points are deployed for client access, forming a mesh network topology.

However, a feature of a Wi-Fi connection is that the entire channel width (and in the case of Wi-Fi this channel is quite limited) is divided between all devices connected to one access point. Therefore, as the number of subscribers increases, the connection speed in such a network begins to fall, and in order to maintain it at the same level, the provider will have to install additional access points.
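
The same shared-medium arithmetic as for cable modems applies here. A minimal sketch; the 54 Mbps nominal rate and the ~50% usable share after protocol overhead are illustrative assumptions for an 802.11g-class access point, not figures from the text:

```python
# Per-user throughput on one shared Wi-Fi access point.
# Assumptions (illustrative): 54 Mbps nominal rate, ~50% usable after
# MAC/protocol overhead, all attached users active at once (worst case).

def per_user_mbps(nominal_mbps: float, users: int,
                  mac_efficiency: float = 0.5) -> float:
    return nominal_mbps * mac_efficiency / users

for n in (1, 5, 10, 20):
    print(f"{n:>2} active users: ~{per_user_mbps(54.0, n):.1f} Mbps each")
# The provider restores per-user speed only by adding more access points.
```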

In general, building the last mile for stationary use on Wi-Fi alone does not look very promising - scaling it is too expensive. On the other hand, given how widespread client devices are, it is currently the most common access method for mobile users.

WiMAX

Despite the similarity of the names, at the technology level WiMAX has nothing in common with Wi-Fi. The cardinal difference is that WiMAX was designed from the start as a city-wide wireless access technology, so its coverage range is much greater and its transmission speed much higher than those of Wi-Fi networks. Deploying such a network across a city or region is therefore much cheaper than doing so with Wi-Fi.

The only drawback is the limited choice of client devices. However, a compromise is possible - there are devices that allow organizing WiMAX-WiFI gateways.

PLC

A relatively new way of building the last mile is PLC (power line communication). This so-called "Internet from the socket" is based on using in-building and in-apartment electrical wiring for high-speed data exchange. Incidentally, two similar technologies should not be confused: PLC and HomePlug. The latter is intended for organizing local networks and is free of most of PLC's shortcomings.

The technology is based on frequency division of the signal: the high-speed data stream is split into several low-speed ones, each transmitted on a separate frequency, and then recombined into a single signal. PLC devices can "see" and decode this information, while ordinary electrical devices - incandescent lamps, motors and so on - do not even notice the presence of network traffic and work as usual.
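
A toy sketch of the splitting/recombining idea described above (round-robin demultiplexing of one fast stream into several slow substreams). This illustrates the principle only; real PLC equipment uses OFDM-style modulation, not this simple scheme:

```python
# Toy illustration of splitting one high-speed stream into several
# low-speed substreams (one per carrier frequency) and recombining them.

def split(bits: str, carriers: int) -> list[str]:
    """Round-robin demultiplex of a bit stream onto N carriers."""
    return [bits[i::carriers] for i in range(carriers)]

def combine(substreams: list[str]) -> str:
    """Interleave the substreams back into the original order."""
    carriers = len(substreams)
    length = sum(len(s) for s in substreams)
    out = [""] * length
    for i, stream in enumerate(substreams):
        out[i::carriers] = list(stream)
    return "".join(out)

stream = "1011001110001011"
subs = split(stream, 4)
assert combine(subs) == stream
print(subs)   # four slow substreams, each carried on its own frequency
```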

It would seem that this technology should revolutionize the telecommunications market and completely replace xDSL technologies. However, it has significant drawbacks. The main drawback is the horrendous amount of interference, especially at medium and short wavelengths, that is generated by this use of the mains.

There are also less serious drawbacks: the bandwidth of a power-line network is shared among all its participants, the stability and speed of PLC depend on the quality of the wiring (which in our buildings often leaves much to be desired), and such a network does not work through surge protectors and UPS units.

These shortcomings mean that the last mile is only very rarely built on this technology.

"Last Mile"- a channel connecting the end (client) equipment with the access node of the provider (operator). For example, when providing an Internet connection service, the last kilometer is the section from the provider's switch port at its communication center to the client's router port at its office. For dial-up (dial-up) connection services, the last kilometer is the section between the user's modem and the provider's modem (modem pool). The last mile usually does not include wiring inside the building.

The term is used mainly by specialists from the communications industry.

Last mile technologies usually include xDSL, FTTx, Wi-Fi, WiMax, DOCSIS, power line communication. Last mile equipment includes xDSL modems, access multiplexers, optical modems and converters, radio multiplexers.

Feasibility study of last mile technologies

The last mile problem has always been a pressing task for communications specialists. By now many last mile technologies exist, and every telecom operator faces the task of choosing the one that best solves the problem of connecting its subscribers. There is no universal solution: each technology has its own field of application, its own advantages and disadvantages. The choice of a particular technological solution is influenced by a number of factors, including:

  • operator strategy,
  • the target audience,
  • currently offered and planned services,
  • the amount of investments in network development and their payback period,
  • the state of the existing network infrastructure, the resources to maintain it in working condition,
  • the time required to launch the network and start providing services,
  • reliability of service provision (service provider response time to technical problems),
  • other factors.

Each of these factors can be assigned a weight according to its importance, and a particular technology is chosen taking all of them into account.
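
A minimal sketch of such a weighted comparison. The weights, scores and the set of technologies below are entirely illustrative assumptions; a real operator would fill them in from its own business case:

```python
# Illustrative weighted scoring of last-mile technologies.
# All weights and scores are made-up numbers for demonstration only.

FACTORS = {            # factor: weight (importance)
    "capex": 0.3,
    "time_to_launch": 0.2,
    "existing_infrastructure": 0.3,
    "service_reliability": 0.2,
}

SCORES = {             # technology: score per factor, 1 (poor) .. 5 (good)
    "xDSL":        {"capex": 4, "time_to_launch": 4, "existing_infrastructure": 5, "service_reliability": 4},
    "FTTx":        {"capex": 2, "time_to_launch": 2, "existing_infrastructure": 1, "service_reliability": 5},
    "Fixed radio": {"capex": 3, "time_to_launch": 5, "existing_infrastructure": 2, "service_reliability": 3},
}

def weighted_total(tech: str) -> float:
    return sum(FACTORS[f] * SCORES[tech][f] for f in FACTORS)

for tech in sorted(SCORES, key=weighted_total, reverse=True):
    print(f"{tech:<12} {weighted_total(tech):.2f}")
```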

There are specialized companies, and divisions of large communications companies, that deal exclusively with building the last mile.

For a provider, the last mile is the section of the communication line from the provider's switching equipment to the client's switching equipment. Simply put, the last mile connects the Internet service provider's communications centre to your apartment or office. And this mile is currently being built in a great variety of ways, both wired and wireless.

The organization of the "last mile" always implies the presence of the following components: switching equipment for receiving and sending signals and information transmission medium.

General principles of organizing the “last mile”

1. The provider's switching point should be located close enough to where the customers are; the allowable distance is determined by the degree of signal attenuation in the transmission medium.
2. The client must have the appropriate equipment capable of connecting to the provider's switching point. The type of equipment depends on how the “last mile” is organized.

Last mile technologies are divided into wireless and wired, depending on the nature of the transmission medium. It is easy to guess that wireless networks are those in which information is transmitted directly over the air (by various methods: WiFi, WiMAX, radio links, optical wireless communication).

Cable networks, accordingly, use cable trunks, fiber-optic or metallic (telephone cable, power line (PLC), coaxial cable).

Let's take a look at three of today's most common "last mile" laying technologies.

1. Wireless WiFi connection. The advantages of a wireless connection are obvious: it is convenient, does not require cable runs, and allows several client computers to connect to the channel at once without additional equipment. Disadvantages of this solution: the WiFi coverage area is unstable, heterogeneous and subject to a wide variety of interference.
2. Connection over copper twisted pair. The most common connection method. Cheap and cheerful: a twisted pair cable (UTP category 5e) is laid from a switch located in the building to the user's computers. Despite the ease of installation and the low cost of materials, this way of organizing a network has certain limitations: twisted pair can be run outdoors, but it is undesirable. For outdoor installation a shielded FTP cable with an additional protective sheath is used, but even it is not reliable enough in the long run. Copper cable is susceptible to electromagnetic interference, so it cannot be placed near sources of electromagnetic radiation, for example along power wiring. The length of the run between the provider's switch and the user must not exceed 100 metres.
3. Fiber-optic connection. The advantages of fiber-optic technology: a completely dielectric transmission medium (unaffected by electromagnetic fields), far fewer restrictions on route length (a multi-storey, extended building can be covered from a single switching node without extra repeaters, and several buildings can be combined), durability (a fiber-optic cable will reliably do its job for 25 years or more) and significantly higher throughput (10, 40 or more gigabits per second). However, building the "last mile" on optical fiber is expensive: the duplex fiber cable itself is cheap, but installation can cost a pretty penny, and a fiber network also needs special equipment to convert the optical signal into an electrical one. Even so, when connecting offices in a modern metropolis, it is more rational to use the most modern and promising fiber-optic technologies.

Besides these methods, signal transmission over telephone cable is still in demand (dial-up, now almost extinct, and ADSL, still quite common). However, given the convenience of modern technologies, these ways of building the "last mile" are gradually becoming a thing of the past, following Internet over coaxial cable. Abroad, PLC technology - data transmission over electrical wiring - is gaining momentum, but in our country it has not yet found its customer.

 
