Life cycle management systems. ALM: Application Lifecycle Management.


"Control life cycle» comes down to the need to master the practices familiar to systems engineering:

  • information management ("the right information must be available to the right stakeholders on time, and in a form suitable for its use"),
  • configuration management ("the design information must match the requirements, the 'as built' information must match the design, including design justifications, the physical system must match the 'as built' information, and the different parts of the design must match each other"; part of this practice is sometimes called "change management").

LCMS vs PLM

The LCMS as formulated here does not take PLM as the required software class around which such a system is built. In large engineering projects, several (most often significantly "underdeveloped") PLMs from different vendors are used at once, and creating an LCMS usually means their inter-organizational integration. At the same time, of course, one must also decide how to integrate into the LCMS the information of those systems that are not yet connected to any of the PLM systems of the extended enterprise. The term "extended enterprise" usually refers to an organization created through a system of contracts from the resources (people, tools, materials) of the various legal entities participating in a specific engineering project. In an extended enterprise, the question of which PLM should absorb the data of which particular CAD/CAM/ERP/EAM/CRM/etc. systems becomes non-trivial: you cannot prescribe that the owners of different enterprises all use software from the same supplier.

And since a PLM system is still a software tool, while the "management system" in LCMS is clearly understood to include an organizational aspect and not just an information technology one, the phrase "using PLM to support a life cycle management system" is quite meaningful, although it can be confusing when "PLM" is translated literally (in Russian, the expansion of "PLM" reads the same as "life cycle management system").

However, when IT people handle the notion of a "lifecycle management system", their understanding immediately dwindles back to "software only", which looks suspiciously like PLM software. After this oversimplification the difficulties begin: a "boxed" PLM system from some design automation software supplier is usually presented constructively, as a set of software modules from that supplier's catalog, without regard to the engineering and management functions they support, and is treated as a trio of the following components:

  • data-centric life cycle data repository,
  • "workflow engine" to support "management",
  • "portal" to view the contents of the repository and the status of the workflow.

Purpose of the LCMS

Main purpose: the LCMS detects and prevents the collisions that are inevitable in collaborative development. All other LCMS functions are derivatives that support this main one.

The main idea of any modern LCMS is the use of an accurate and consistent representation of the system and the world around it in the inevitably heterogeneous and initially incompatible computer systems of an extended organization. The use of virtual mock-ups, information models, and data-centric repositories of design information ensures that collisions are detected during "construction in the computer", during "virtual assembly", and not when drawings and other design models are translated into material reality during actual construction "in metal and concrete" and commissioning.

The idea of the LCMS is thus not about "design automation", and in particular not about generative design or generative manufacturing. The LCMS is concerned not with synthesis but with analysis: it detects and/or prevents collisions in the design results of individual subsystems when they are assembled together, using a variety of technologies (the second of which is sketched in code after this list):

  • merging the design data into one repository,
  • running integrity-check algorithms over engineering data distributed across several repositories,
  • conducting an actual "virtual assembly" and simulation for a specially selected subset of the design data.
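As an illustration of the second of these technologies, here is a minimal Python sketch of an integrity check over engineering data held in two repositories; the data shapes and the pressure rule are hypothetical:

```python
# Sketch: cross-repository integrity check (hypothetical data shapes).
# Two "repositories" export fragments of engineering data; one rule
# detects a collision: a pipe whose design pressure exceeds the rating
# of the pump that feeds it.

cad_repo = {"P-101": {"type": "pipe", "design_pressure_bar": 16}}
pdm_repo = {"PU-07": {"type": "pump", "feeds": "P-101", "max_pressure_bar": 10}}

def check_pressure_collisions(cad, pdm):
    """Yield (pump_id, pipe_id) pairs whose ratings are inconsistent."""
    for pump_id, pump in pdm.items():
        pipe = cad.get(pump.get("feeds", ""))
        if pipe and pipe["design_pressure_bar"] > pump["max_pressure_bar"]:
            yield pump_id, pump["feeds"]

for pump_id, pipe_id in check_pressure_collisions(cad_repo, pdm_repo):
    print(f"Collision: pump {pump_id} is rated below pipe {pipe_id} design pressure")
```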

Model-Based Approach

The use of an LCMS implies the rejection not only of paper in design, but also of "electronic paper" (.tiff or other raster formats), and a transition to a data-centric representation of information. Comparing two models that exist on paper in some notations and finding inconsistencies between them is far more difficult and slower than preventing collisions in structured electronic documents that use engineering data models rather than raster graphics.

The data model can be designed following some description language, for example:

  • a method description (in terms of the ISO 24744 development method description standard),
  • metamodel (in terms of the OMG standardization consortium),
  • data model/reference data (in terms of the ISO 15926 lifecycle data integration standard).

It is this transition to structurally represented models, existing already at the early design stages, that is called model-based systems engineering (MBSE). It becomes possible to remove collisions by computer processing of data at the earliest stages of the life cycle, even before full-fledged 3D models of the structure appear.

The LCMS should:

  • provide a mechanism for transferring data from one CAD/CAM/ERP/PM/EAM/etc. application to another, in an electronic structured form rather than as a "pack of electronic paper". The transfer of data from one engineering information system to another (with a clear understanding of what is transferred from where to where, when, why, and how) is part of the functionality provided by the LCMS. Thus, the LCMS must support workflow (a flow of work performed partly by people and partly by computer systems).
  • provide version control, that is, a configuration management function, for both the models and the physical parts of the system. The LCMS maintains a taxonomy of tiered requirements and provides a means of checking tiered design decisions and their justifications for conflicts with those requirements. In the course of engineering development, every description of the system and every model is changed and supplemented many times, and therefore exists in many alternative versions of varying degrees of correctness, corresponding to each other to varying degrees. The LCMS must ensure that only a correct combination of these versions is used for current work (see the sketch after this list).
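Here is a minimal Python sketch of that configuration management idea: a baseline pins model versions, and consistency constraints flag invalid combinations. The artifacts and constraints are hypothetical:

```python
# Sketch: baseline consistency check (hypothetical artifacts/constraints).
# A baseline pins one version of each model; each constraint lists the
# version combinations that are known to be mutually consistent.

baseline = {"requirements": "v3", "architecture": "v2", "3d_model": "v5"}

constraints = [
    ("requirements", "architecture", {("v3", "v2"), ("v2", "v1")}),
    ("architecture", "3d_model", {("v2", "v4")}),  # v5 not yet re-verified
]

def check_baseline(baseline, constraints):
    """Report version pairs in the baseline that are not known-consistent."""
    for a, b, allowed in constraints:
        pair = (baseline[a], baseline[b])
        if pair not in allowed:
            print(f"Version collision: {a} {pair[0]} vs {b} {pair[1]}")

check_baseline(baseline, constraints)  # flags architecture v2 vs 3d_model v5
```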

LCMS Architecture

There can be many architectural solutions for an LCMS; the same function can be supported by different structures and mechanisms of work. Three types of architecture can be distinguished:

  1. Traditional attempts to create an LCMS amount to providing critical data transfers point-to-point between individual applications. In this case a specialized workflow support system (a BPM engine, "business process management engine") or an event processing system (a complex event processing engine) can be used. Unfortunately, the amount of work needed to support point-to-point exchanges turns out to be simply enormous: each exchange requires specialists who understand both of the systems being linked as well as the method of information transfer.
  2. Using the ISO 15926 lifecycle data integration standard in the "ISO 15926 outside" manner, whereby an adapter to a neutral, standard-compliant representation is developed for each engineering application. All data thus gets the opportunity to meet in some application, where collisions between them can be detected, while each application needs only one data transfer adapter rather than several (one per application it must communicate with); a sketch of this adapter idea follows the list.
  3. PLM (Teamcenter, ENOVIA, SPF, NET Platform, etc.): the same standardized architecture is used, except that the data model in each of these PLMs is less universal in its coverage of engineering subject areas, and is also neither neutral nor available in all formats. The use of ISO 15926 as the basis for transferring data into the LCMS can therefore be seen as a further development of the ideas already implemented in modern PLMs.
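A minimal sketch of the adapter idea from option 2, with hypothetical CAD and ERP schemas: with n applications, n adapters to a neutral model replace up to n*(n-1) directed point-to-point translators:

```python
# Sketch: adapters to a neutral data model ("ISO 15926 outside" style,
# greatly simplified). Each tool ships one adapter to/from the neutral
# representation, so any two tools can exchange data without a dedicated
# point-to-point translator. All schemas below are hypothetical.

def cad_to_neutral(item):
    """Adapter of a hypothetical CAD schema into the neutral model."""
    return {"id": item["tag"], "kind": "Pipe", "dn_mm": item["dn"]}

def erp_from_neutral(entity):
    """Adapter of the neutral model into a hypothetical ERP schema."""
    return {"material_no": entity["id"], "size": f"DN{entity['dn_mm']}"}

cad_item = {"tag": "P-101", "dn": 150}
neutral = cad_to_neutral(cad_item)      # CAD -> neutral model
erp_record = erp_from_neutral(neutral)  # neutral model -> ERP
print(erp_record)                       # {'material_no': 'P-101', 'size': 'DN150'}
```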

By their configuration management architecture, LCMSs can be divided into three types:

  • "repository"(up-to-date storage of all project data in one LCMS repository, where data is copied from where it was developed),
  • "register"(LCMS maintains a list of lifecycle data addresses in numerous repositories of other CAD systems, engineering simulation systems, PLM, ERP, etc.),
  • "hybrid architecture"-- when part of the data is copied to the LCMS central repository and part of the data is available from other places via links.

The LCMS architect should also describe:

  • "portal"(including "web portal"), its functions and method of implementation. The very presence of the portal allows you to reassure top managers by demonstrating the absence of conflicts. Specific requirements are imposed on architectural solutions for the LCMS portal.
  • the algorithms for checking the integrity/consistency of life cycle data, as well as a description of how these algorithms are run:
    • a standard module of a separate application, working on the data in that application's repository, be it CAD or PLM;
    • collision-checking software specially developed for the LCMS, with access to data from different applications held in the central LCMS repository;
    • a specially developed software tool that reaches, over the Internet via a secure channel, into different data repositories located in different organizations;
    • specially programmed checks with collision control performed while loading different engineering data sets into the central LCMS repository;
    • a combination of all the above methods, different for different types of collisions; etc.
  • the way LCMS users interact (design engineers, buyers, installers, facility project managers, etc.) and how the LCMS software supports this interaction in a collision-free manner. Systems engineering standards (in particular ISO 15288, the systems engineering practice standard) require choosing a life cycle type for engineering a complex object and indicating which variants of systems engineering practices will be used. The life cycle model is one of the main artifacts serving as the organizational arrangement for coordinating the work of the extended organization of an engineering project. Coordinated work in the course of collaborative engineering is the key to a small number of design collisions. How exactly will the LCMS support the life cycle model? PLM systems usually find no place for life cycle models, let alone organizational models, so for the LCMS other solutions for software support of these models must be sought.
  • the organizational aspect of the transition to using an LCMS. That transition can significantly change the structure and even the staffing of an engineering company: not every digger becomes an excavator operator, and not every coachman becomes a taxi driver.

The main thing for the LCMS is how the proposed solution contributes to early detection and, better yet, prevention of collisions. If the talk turns to something else (a meaningful choice of life cycle type according to the project risk profile, aging management, cost management and budget reform, mastering axiomatic design, construction with just-in-time deliveries, generative design and construction, and much, much more that is also extremely useful, modern and interesting), then that is a matter for other systems, other projects, other methods, other approaches. The LCMS should do its own job well, rather than poorly solving a huge set of arbitrarily chosen foreign tasks.

The LCMS architect thus has two main tasks:

  • generate a number of top candidate architectures and their hybrids,
  • make a multi-criteria choice among these architectures, which requires:
    • substantive consideration (meaningful selection criteria),
    • presentation of the result (justification).

Criteria for choosing an architectural solution for the LCMS

  1. The quality with which the LCMS performs its main purpose: detecting and preventing collisions. The main criterion is: how much can engineering be accelerated by speeding up the detection or prevention of collisions using the proposed LCMS architecture? And if the duration of the work cannot be reduced, by how much can the amount of work be increased in the same time with the same resources? The following methodologies are recommended:
    • Goldratt's Theory of Constraints (TOC): the architecture should indicate which system constraints it removes on the critical resource path of an engineering project (not to be confused with the critical path).
    • ROI (return on investment) for the investment in the LCMS, at the stage of formalizing the result of a substantive review of the candidate architectures.
    It is important to choose the boundaries of consideration: the overall speed of an engineering project can only be measured at the boundary of the organizational system under consideration. The boundaries of a single legal entity may not coincide with the boundaries of the extended enterprise carrying out a large-scale engineering project, and an enterprise participating in only one stage of the life cycle may misjudge its usefulness and criticality for the full life cycle of the system and choose the wrong way to integrate itself into the LCMS. It may then turn out that creating the LCMS does not affect the overall timing and budget of the project, because the most unpleasant collisions continue to be missed by the new LCMS.
  2. The ability to adopt an incremental LCMS development life cycle. Incremental, in ISO 15288, is a life cycle in which functionality is given to the user not all at once but in stages, with the investment in development likewise staged. The law of diminishing returns must of course be taken into account: each increment of the LCMS (each new type of collision detected in advance) costs more and brings less benefit, until LCMS development, after running for many years, dies out of its own accord. If it turns out that some proposed architecture requires investing a great deal of money in the LCMS at once, while the benefits arrive only as 100% "turnkey" functionality after five years, that is a bad architecture. If it turns out that a compact LCMS core can be developed and put into operation, followed by many modules of the same type for different types of collisions with a clear mechanism for their development (for example, based on ISO 15926), that is very good. The point is not so much to use agile methodologies as to envision a modular LCMS architecture and propose a plan for implementing a prioritized list of modules: first the most pressing, then the less pressing, and so on. Not to be confused with ICM (the incremental commitment model), although the idea is the same: the better architecture is the one that offers something like payment by installments for the system, delivering the necessary functionality as early as possible, so that the benefit (even a small one) comes early and the late benefit is paid for later.
  3. The fundamental financial and intellectual ability to master and maintain the technology. If we count the costs not only of the LCMS itself but of all the personnel and other infrastructure required for the project, we need to understand how much of this investment in education, computers, and organizational effort will remain with the payer and owner of the LCMS, and how much will settle outside, with numerous contractors, who will of course be grateful first for the "scholarship" to develop the new technology, and then for supporting the system they have created. The new is usually extremely expensive, not because it is expensive in itself, but because of the avalanche of changes it causes. It is this criterion that accounts for the total cost of ownership of the LCMS, and it also covers the full life cycle, no longer of the engineering system with its avoidable collisions, but of the LCMS itself.
  4. Scalability of the LCMS architecture. This criterion is relevant for large engineering projects. Since you want the system to be used by all the thousands of people in the extended organization, it will need to grow rapidly to that scale. To what extent can an LCMS "pilot" or "testbed" grow quickly without fundamental architectural changes? Most likely it cannot. Architecturally, therefore, what is needed is not a "pilot" or a "testbed" but immediately a "first stage". The scalability criterion closely intersects with the incrementality criterion, but concerns a slightly different aspect: not stretching the creation of the LCMS over time, but stretching it over the volume covered. Experience shows that all systems cope with test volumes of design data, but fail to cope with industrial ones. How non-linearly will the cost of hardware and software grow as volume and speed grow? How will the regulations hold up when it turns out that more data passes through some workstation than one person can meaningfully review? Poor scalability can lurk not only in the technical side of the software and hardware architecture but also in its financial architecture. A small license price per LCMS seat, or even a small price per new connection to the repository server, can turn a solution that is more or less attractive for ten seats into one that is absolutely financially unsustainable for the target thousand seats.
  5. The ability to cope with the inevitable organizational challenges, including attitudes towards beloved legacy systems in the extended organization. How much would the proposed centralized or distributed architecture require "giving functions away to other departments", "giving our data away", and generally "giving away" something compared with the current situation without an LCMS? Mainframes massively lost the competition to mini-computers, and those to personal computers. There is almost no way back to centralized systems (and the LCMS inevitably appears to be one), because all the data sits in separate applications, and pulling that data into new systems is a very difficult organizational task. How is the LCMS architecture structured: does it replace current legacy engineering applications, is it built on top of the current IT infrastructure, does it sit alongside existing services? How much organizational, managerial, and consulting effort will it take to push the new technology through? How many people must be fired, and how many new specialists found and hired? This criterion of organizational acceptability relates not only to centralization/decentralization but also to the motivation system in the extended enterprise; that is, evaluating an LCMS architecture against this criterion goes far beyond the LCMS alone and requires a thorough analysis of the principles on which the extended organization is built, up to and including revisiting the principles underlying the contracts that created it. But that is the essence of the systems approach: any target system (here, the LCMS) is considered first of all not "inward, what parts it is made of" but "outward, what it is part of"; what is primarily interesting is not its construction and mechanism of operation but the function it supports in the external supersystem, collision prevention, and the price the external supersystem is willing to pay for this new function. Possible LCMS architectures are therefore judged primarily not by "the decent technologies used, for example from software vendor XYZ" (that is the default: all proposed architectures are usually technologically decent, otherwise they would not be options!) but by the five criteria above.

LCMS functions

  1. Collision avoidance
    1. Configuration management
      1. Identification (classifications, encodings)
      2. Configuration accounting (all possible baselines: ConOp, architecture, design, as built), including data transfer to the LCMS repository, support for the change workflow, and support for concurrent engineering (working with incomplete baselines)
      3. Versioning (including forks)
    2. Elimination of manual data transfer (transfer of input and output data between the existing islands of automation, including data transfer from the "digitization" islands holding legacy design work)
    3. Master data (reference data) configuration
    4. Collaborative engineering support system (video conferences, remote design sessions, etc.; possibly not the same one used to create the LCMS itself)
  2. Collision detection
    1. Support for a register of checked collision types, and of the checking technologies corresponding to that register (a sketch follows this list)
    2. Data transfer for collision checking between islands of automation (without assembly in the LCMS repository, but via the LCMS integration technology)
    3. Running validation workflows for different types of collisions
      1. in the LCMS repository
      2. outside the repository, via the LCMS integration technology
    4. Starting the workflow run for resolving a found collision (sending notifications about collisions; the resolution workflow itself is not the concern of the LCMS)
    5. Maintaining the current list of unresolved collisions
  3. Development (here the LCMS is treated as an autopoietic system, since "incremental implementation" is one of the most important properties of the LCMS itself; development is thus a function of the LCMS proper, not of some supporting system for the LCMS)
    1. Ensuring communication about the development of the LCMS
      1. Work planning for the development of the LCMS (roadmap, development of an action plan)
      2. Functioning of the LCMS project office,
      3. Maintaining a register of types of collision checks (the "Wishlist" register itself and the roadmap for the implementation of checks)
      4. Organizational and technical modeling (Enterprise Architecture) for LCMS
      5. Communication infrastructure for LCMS developers (internet conferencing, videoconferencing, knowledge management, etc.; possibly not the same one used for collaborative engineering with the LCMS)
    2. Uniformity of data integration technology (for example, ISO 15926 technology)
      1. Using a Neutral Data Model
        1. Reference data library support
        2. Development of reference data
      2. Technology for supporting adapters to a neutral data model
    3. Uniform workflow/BPM integration technology (enterprise-wide)
  4. Data security (at the scale of the information systems operating within the LCMS)
    1. Unified access (single sign-on: one login and password for all information systems participating in the workflow)
    2. Managing access rights to data elements
    3. Backup
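A minimal sketch of the collision detection functions above (the register of collision types, check runs, notifications, and the list of unresolved collisions); the collision types and checks are hypothetical:

```python
# Sketch: register of checked collision types (hypothetical checks).
# Each entry pairs a collision type with the checking technology behind
# it; the LCMS runs the checks, notifies about findings, and maintains
# the current list of unresolved collisions.

def check_geometry_clash(data):
    return []                    # e.g. a "virtual assembly" run

def check_pressure_rating(data):
    return ["PU-07/P-101"]       # e.g. a rule over integrated data

collision_register = {
    "geometry_clash": check_geometry_clash,
    "pressure_rating": check_pressure_rating,
}

project_data = {}  # would be loaded from the repository or via integration
unresolved = {}

for collision_type, check in collision_register.items():
    found = check(project_data)
    if found:
        unresolved[collision_type] = found
        # resolution itself is a workflow outside the LCMS; we only notify
        print(f"Notify owners: {collision_type} -> {found}")

print("Unresolved collisions:", unresolved)
```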

Application Lifecycle Management (ALM) is evolving rapidly. It is a promising approach to improving the software creation process. However, the "traditional" ALM process cannot reach its full profit-generating potential for the organization. Why? Because vendors are aggressively pushing limited end-to-end ALM solutions to market that aim to tie customers to closed technology platforms. Customers soon discover that these solutions do not integrate with their existing development processes, tools, and platforms. This leaves development teams alone with siloed processes and an ALM data hodgepodge, which in turn prevents them from realizing the full potential of ALM.

To solve this problem, a new approach is needed: an approach that allows customers to build software using a mixed development environment. With Borland's Open ALM solutions, organizations can leverage their existing development resources and tools, helping them achieve transparency, control, and discipline across the software development cycle. Customers can now benefit from an optimized ALM platform and a single, manageable, and measurable software development process.

Predictable Software Development: Mission Impossible?

Software development is, in fact, a fairly complex undertaking. Creating a software product with well-defined characteristics, acceptable quality, within the allotted budget and on time requires the constant coordination of a large number of activities among numerous specialists.

The complexity of managing and tracking software projects increases when organizations adopt distributed development models (for example, offshore programming or the use of temporary employees and subcontractors). As a result, projects failing to complete, or being terminated, has become a common occurrence. Cost overruns, missed schedules, poor quality, and poor reliability have become the norm in the software industry. Accordingly, software development organizations are increasingly being asked to take smarter approaches: to adopt well-managed, systematized, process-oriented approaches that follow in the footsteps of the more traditional engineering disciplines.1

With increasing standardization and the use of enterprise development platforms, the challenges faced by the industry have become less technical in nature. The ability to generate stable and predictable profits from software development has largely become a priority for many information technology (IT) professionals, who need confidence that their teams will develop software effectively. With these considerations in mind, Borland has developed its ALM platforms, designed to address the stability and predictability of the software development process.

1 Major industry trends, such as the accelerated adoption of the CMM/CMMI process improvement framework and the increased use of outsourced development models, are closely related to this apparent transformation of the software development industry.

The advent of ALM

As the application development tools industry responded to the need for predictable software development, it focused on more than just tools for the individual developer. Manufacturers expanded their offerings and integrated both existing and new features into their products. Their solutions now address tasks associated with the other roles in the software development process. Often marketed and sold as collaborative development platforms, these product suites heralded the advent of Application Lifecycle Management (ALM) technology, which has become a new market category and a distinct discipline in software development. ALM platforms are specifically designed to meet the challenge of increasing the predictability and integrity of the software development process. They do so by providing integration and automation for each major role involved in the process, and by automating a number of functions:

Measurability

Ability to define systems of measures to assess quality, productivity, progress and risk.

Analyze these metrics and generate reports as the project progresses.

Coordination

Alignment of business and IT priorities.

Align project results with end user expectations.

Discipline

Definition, deployment, and tracking of software development processes.

Increased rigor in change management and in predicting the consequences of changes.

These capabilities allow IT leaders to balance and prioritize their software project portfolios, achieve a higher level of management of their teams, and gain much greater transparency in project execution. With ALM, managers also get much more control over the software development process, which improves corporate governance and helps the organization demonstrate compliance with various laws and regulations.

ALM Industry

Borland and IBM Rational were initially among the few innovators who understood the importance of the ALM trend and changed their product strategies to explicitly support it. Reacting to the obvious opportunity, other companies joined the winning ALM concept: Microsoft, IBM Rational/Telelogic, Mercury, and Serena. Today ALM is an established trend and a growing industry recognized by analysts. ALM vendors provide a variety of tools and technologies to support the software development process; these go far beyond the traditional productivity tools of the individual developer, aiming instead at methodologies and tools for collective work on software development. To create a viable ALM solution, vendors must consider the needs of the "extended" software development team and cover in their products the roles that participate in the larger process.

For the needs of managers, there are portfolio-level dashboards that cover the important project metrics: risk, progress, and quality.

For the needs of project managers, tools are provided for project planning and control, analysis of possible alternatives and resource allocation.

For the needs of analysts, tools are provided for defining requirements, interacting with end users and other stakeholders of the project. Also at this level, there are tools for managing requirements throughout the life cycle of the project, including subsequent changes.

For the needs of architects, tools are provided for visual modeling of various aspects of the application (components, data, process), as well as tools for describing design patterns and corporate architecture.

Various programming environments are provided for the needs of developers, as well as code-level quality assurance tools (for example, execution profilers, as well as tools for unit testing and automated code auditing).

For the needs of quality engineers, tools are provided for creating and managing tests, for regression and functional testing, as well as tools for automated performance testing.

A collective infrastructure serves to solve the common problems of the entire group. It provides tools for collaboration, process management, change management, and version control.

For the needs of managers of the software development process, tools for modeling and applying a set of corporate technological standards are provided.

For the needs of end users and other stakeholders within the organization, tools are provided to automate requirements management. They are also given opportunities to share information about requirements, report defects, and track the status of issues raised.

ALM technology is widely recognized as a major step forward for the application development tools industry and its customers. Interestingly, the latest "Chaos Report" from the Standish Group shows that the rate of failed software projects has halved over the past decade, an improvement that can be partly attributed to ALM. However, a closer look at customer needs shows that despite the obvious benefits of ALM, realizing the technology's full potential remains difficult. Doing so requires changing the fundamental approach used to integrate the processes and tools involved in the software life cycle.

The potential of ALM for business is largely untapped

To understand better why current solutions make it difficult to fully leverage ALM for business, let us take a closer look at typical software development and operations environments. We will examine how software is produced and deployed in terms of processes, development tools, and production platforms. Ultimately, this discussion explains why software production remains one of the last business processes that is not performed, let alone automated, in a stable and predictable manner.

Corporate IT environment: the problem of heterogeneity

The advent of the Internet and its expansion as the main platform for commerce has brought significant changes to conventional IT organizations, reinforced by constant work under resource shortages and high demands for flexibility. These changes drive an architectural evolution intended to increase IT responsiveness, service levels, and efficiency by moving from legacy technologies to new, modern application platforms. The key areas of this evolution are the following.

Migration from monolithic specialized applications running on mainframes to new development tools for enterprise distributed platforms, namely J2EE and .NET.

Migration from packaged corporate applications built on legacy architectures to process and composite application runtimes such as SAP NetWeaver and Oracle Fusion.

The use of specialized platforms for specific needs: for example, scripting languages for database-backed web applications (PHP, Ruby, etc.), or application development platforms with rich web and multimedia features (e.g., Adobe® Flash®/Flex™).

Each of these technologies is associated with specific application development tools (often offered by different vendors). These tools cover analysis, design, coding, quality control, version control, and configuration management.

It is reasonable to assume, especially for medium and large corporations, that for the foreseeable future every corporate IT environment will include at least three of the following deployment platforms: a mainframe, a distributed environment (J2EE or .NET), and a business-process automation system (SAP or Oracle). It also seems, and is becoming increasingly clear, that some organizations deploy software to both the J2EE and .NET platforms.2

Conflicting Programs

It is interesting to note that, for obvious reasons, some IT solution vendors are trying as hard as they can to push back against the heterogeneous nature of the corporate IT environment. These vendors seek to completely "take over" the organization's IT environment by pushing complete "for life" solutions to the market, containing software development tools, an application runtime environment, and tools for managing networks and systems. The largest manufacturers also include an operating system or even hardware in their solutions. It goes without saying that such solutions involve a significant professional-services component as well.

Despite this massive push for comprehensive single-vendor solutions, the reality is that for many customers this approach simply does not work. Such organizations grow more heterogeneous at every level, and their priorities center on goals that matter to the client rather than to the supplier.

Maximizing competitiveness. Organizations that strive to deliver the best product or service typically choose the platforms and development tools that are best from an engineering standpoint. This approach helps them achieve the benefits each platform offers for specific end users. It usually happens across separate projects, but it can also happen within a single project, eventually leading to "hybrid" applications that span multiple technology domains. Here are some relevant examples.

  • Composite applications or services that span mainframe applications, packaged applications, and in-house developed distributed applications.

  • J2EE/.NET hybrids that use .NET features and user interface on the client side and, on the server side, take advantage of the scalability, manageability, and security of J2EE technology. This architectural pattern is especially common in the financial vertical, where it is used for high-performance trading platforms, since on Wall Street Windows is the de facto desktop standard.

  • Flash/J2EE hybrids, which combine the capabilities of Adobe Flash as a platform for streaming video and rich Internet applications with the benefits of J2EE technology on the server, allowing a high degree of scalability together with a rich media interface.

Reducing development costs. Organizations try to reduce the cost of software development and deployment by combining open source and proprietary tools and programs. Worth mentioning here is the growing popularity of the LAMP stack (Linux, Apache, MySQL, PHP) and its increasing use in organizations.

Reduced time to market. Organizations may prefer certain development tools because of the specific productivity benefits they offer, which can significantly reduce the time to market for products.

Effective use of investments already made. Any "destroy and replace" approach runs into significant obstacles, because most organizations are unwilling to abandon substantial investments in older programs and tools.

Risk reduction. Some IT vendors provide non-standard, proprietary support for their platforms, which customers see as a risk. Being locked into a particular IT vendor's platform can create significant business risk, especially if that vendor is, or may become, a competitor.

2 The IDC Insight Report on J2EE and .NET usage by Steve McClure states the following: 10.4% of current .NET users expect to also use J2EE/J2ME within the next 12 months, and 11.9% of J2EE/J2ME users expect to be involved in .NET development within the next 12 months.

IT Heterogeneity: ALM's Biggest Challenge

In short, many organizations in the IT industry see heterogeneity as the only alternative, given the many business benefits associated with it. Very often development teams use different tools that were not designed to work together. No single manufacturer supplies tools for all the activities required in one software project. Moreover, no single vendor fully covers the three main domains: support and modernization of legacy systems, extension and customization of packaged applications, and development of new distributed applications. It is therefore very likely that organizations will continue to use disparate development tools within the same project and across different technology domains. Because of this, the biggest obstacle to implementing ALM is the heterogeneity of development tools. Recall that ALM seeks to achieve predictability and integrity in the software production process through automated measurability, alignment, and discipline. In an environment with a high degree of heterogeneity, these qualities are much harder to achieve.

Measurability requires collecting metrics from various application development tools and repositories, yet there is no generally accepted standard for such data collection. And since there is no information schema common to all the tools involved in the process, the collected metrics must also be "normalized" so that they can somehow be compared in the context of particular projects.

Alignment requires tracking activities and deliverables throughout the process, from IT strategy right through to the deployed modules. This degree of traceability is very difficult to achieve when resources and process activities are scattered across disparate tools and repositories, and no standard tools exist for automatically defining, collecting, managing, and using the tracking information.

Discipline requires the deployment, adoption, and control of various common processes for managing software production. This becomes much harder when sub-processes run as "process islands" inside a variety of process tools. There is no standard mechanism for choreographing such sub-processes (in accordance with the higher-level process) or for deploying process components to those tools. There is also no single terminology for describing processes across disparate tools: each uses its own language for "elements", "artifacts", "projects", and so on. Another aspect of discipline concerns change management and impact analysis; these capabilities, however, require properly implemented end-to-end traceability, which, as noted earlier, is much more difficult to achieve in a heterogeneous development environment.

To work around these issues, organizations that practice ALM often settle on developing many specialized point-to-point integrations to fill the technology gaps between the various development tools in use. Such integrations are unreliable: they break when tools are updated or changed, and they are expensive to create and maintain. In addition, they give rise to software processes that cannot easily be measured or controlled and that are inconvenient to manage. Clearly such an approach is unacceptable and unprofitable.

Most IT organizations therefore present a big challenge for ALM solution providers. These organizations would like to get more value from ALM, namely a significant improvement in the software production process that gives them the stability and predictability they need. Beyond that, however, ALM customers also want the following.

The ability to use a mix of deployment platforms in the way most optimal for their business goals.

Freedom to use a variety of commercial and open source application development tools optimized for their deployment needs.

Freedom to use a variety of commercial or custom software development processes optimized for the culture, project types, and underlying technology adopted by the organization.


Meeting this complex set of requirements takes a new approach to ALM, one that lets customers take full advantage of ALM in a heterogeneous IT environment. Borland recently announced its approach and product strategy, called Open ALM, designed directly for this problem. It is the only ALM solution designed from the ground up to enable IT organizations to build software predictably, on their own terms.

Overcoming Heterogeneity: ALM's Last Frontier

The Open ALM approach implements Borland's established vision and product strategy. It represents a significant architectural shift that is unique in the commercial ALM market. In fact, once fully implemented, the Borland Open ALM platform and its associated applications could deliver significant benefits even to customers who use none of Borland's application development tools at all. Borland, of course, views its tools business as vital: the company will continue to develop the tools and deliver best-in-class products for the extended software development team, and the tools will gradually change to support the Open ALM strategy and participate in Open ALM-based orchestration of software production. However, Borland's tools could be replaced, should customers see the point, by any product that supports their development requirements, whether third-party or open source. This level of modularity and flexibility sets Borland's product strategy apart from other ALM vendors, many of whom are trying to "own" the entire software supply chain.

Benefits of Open ALM

Open ALM provides the functional value of ALM with an unrivaled degree of flexibility at the process, tool, and platform levels. In particular, Open ALM users get the following.

The freedom to choose any combination of platforms and workspaces within the context of a single software project, or for several different projects at once. In this case, the choice is made on the basis of business priorities or suitability for the project.

Freedom to choose the best development tools for your chosen platforms based on economics, productivity and technical suitability.

The freedom to choose or design development processes that best suit their projects and chosen platforms, as well as the organizational culture and time-to-market requirements.

The Open ALM platform and its supporting tools will, for the first time, provide IT organizations that deploy heterogeneous application development environments with the following capabilities.

An excellent multi-dimensional and customizable view of project and portfolio progress, quality and risk metrics to support project management and process improvement initiatives.

"Grail": complete operational control and life cycle tracking. This will enable real alignment of business goals and activities throughout the development process, provide a better link between end user expectations and project outcomes, and provide better project management capabilities through accurate and comprehensive impact analysis.

A new level of software development process management, through automated, process-based coordination of the actions of the specialists and tools involved in the life cycle.


These capabilities yield excellent team performance, support quality improvement initiatives, and ease the burden of meeting internal and external regulations. They will be provided as a set of infrastructure-level components and enterprise ALM management tools. In addition, customers can also use Borland's best-in-class integrated application development and project management tools, gaining value in four main process areas.

Project portfolio management (Project Portfolio Management, PPM). Tools and automated processes to manage the development of the entire software development strategy, as well as to manage the execution of a portfolio of software development projects.

Definition of requirements and their management (Requirements Definition and Management, RDM). A set of tools and best practices ensuring that project requirements are accurate and complete, can be efficiently traced back to business goals, and are optimally covered by software tests.

Quality management in the life cycle (Lifecycle Quality Management, LQM). The procedures and means for managing the definition and measurement of quality at all stages of software development. These tools are designed to detect and prevent quality issues early in a project, when the cost of fixing them is relatively low. QA teams also gain confidence that their tests are complete and based on end-user requirements.

Change Management (CM). Infrastructure and tools that help predict the impact of change. They also help manage resources and lifecycle change activities in both multi-node and single-node models.

Borland Open ALM Solution

As already mentioned, the main goal of ALM is to achieve a predictable and manageable software development process through automated measurability, alignment, and discipline. We have seen that each of these three dimensions of ALM becomes much harder in a heterogeneous application development environment, and therefore presents specific challenges for ALM users. The Borland Open ALM platform architecture is a set of three solution areas, each specifically designed to address a problem in one of the core ALM domains. Each Open ALM solution area rests on a highly modular and extensible infrastructure layer and comprises a collection of specialized applications. The purpose of the infrastructure layer is to let the Open ALM platform work with any combination of commercial or open source development tools and processes, regardless of manufacturer or target runtime technology. The diagram below shows a conceptual view of the Borland ALM solution.


[Diagram: Borland Open ALM Solution Architecture]


Open Business Intelligence for ALM

Open Business Intelligence for ALM (OBI4ALM) is a standards-based infrastructure and set of applications for measuring progress, performance, or any other specific metric of software projects in a heterogeneous application development environment. OBI4ALM provides a framework for unobtrusive, distributed collection, correlation, and analysis of metrics from any application development tool registered with it. By extracting predefined metrics from data sources, the OBI4ALM framework brings together disparate information scattered across the software development cycle. This consolidation opens up rich possibilities: for example, aggregate views of project metrics, or new project metrics that combine several lower-level ones. The OBI4ALM infrastructure uses a data warehouse that holds current and historical information collected from the tools involved in the various stages of the software development process, in a structure optimized for querying and analysis. OBI4ALM applications turn the collected metrics into information suitable for decision making, providing decision support and early notification of problems.

Real-time data dashboards - customizable views of key performance indicators (KPIs) that show trends over time.

Metric-based alerts - customizable alerts that are triggered when certain conditions occur (for example, when a trend crosses a certain boundary). Alerts help management react quickly to project problems: slow progress, poor quality, lack of efficiency, or any other problem that can be represented numerically with metrics.

Decision tools - analytical tools that use historical information about a project (or several projects) to help make project management decisions.
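A minimal sketch of the OBI4ALM idea as described above: metrics from disparate tools land in a common store, are aggregated, and trigger a metric-based alert. Tool names, metric names, and the threshold are hypothetical:

```python
# Sketch: OBI4ALM-style metric collection and a metric-based alert.
# Tool names, metric names and the alert threshold are hypothetical.

warehouse = []  # current and historical metrics from registered tools

def collect(tool, metric, value):
    """A registered tool pushes one metric observation."""
    warehouse.append({"tool": tool, "metric": metric, "value": value})

collect("requirements_tool", "open_defects", 12)
collect("test_tool", "open_defects", 30)
collect("test_tool", "tests_passed_pct", 78)

def aggregate(metric):
    """Combine one 'normalized' metric across disparate tools."""
    return sum(m["value"] for m in warehouse if m["metric"] == metric)

total_defects = aggregate("open_defects")
if total_defects > 25:  # alert fires when the value crosses a boundary
    print(f"ALERT: open defects across all tools = {total_defects}")
```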

Open Process Management for ALM

In the final analysis, the process is the most important concept permeating the entire software life cycle. A process is more than shared information structures between the tools used by different roles, or integration at the user interface level. A process has the real potential to coordinate the activities of the people and systems involved in software development, while ensuring compliance with established policies and controlling the quality of execution.

Open Process Management for ALM (OPM4ALM) provides infrastructure components and a set of applications used to model, deploy, and execute various software processes in a heterogeneous application development environment. OPM4ALM goes much further than offering guidance and distributing tasks among process participants. It also provides a process automation layer that serves as the main "glue", integrating the client side, the server side, and the methodology according to the rules fixed in the process models. From this point of view, integration between application development tools is actually provided by the lower-level processes, and this becomes the foundation for effective teamwork.

The OPM4ALM infrastructure is built around a distributed process engine that provides modeling, customization, deployment, orchestration, and choreography of multiple software development processes in a heterogeneous tool environment. An important part of the OPM4ALM framework is the definition and management of process events: the Open ALM Workbench can subscribe to these events, "listen" for them, and be notified when they occur. The process engine also provides flexible rule definition and evaluation, which helps describe and enforce application development policies (a minimal sketch follows).
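A minimal sketch of such event subscription and policy enforcement; the event name and the check-in rule are hypothetical, not Borland's actual engine API:

```python
# Sketch: process events with subscribers and one policy rule.
# Event names and the rule are hypothetical, not the actual engine API.

subscribers = {}  # event name -> list of callbacks

def subscribe(event, callback):
    subscribers.setdefault(event, []).append(callback)

def publish(event, payload):
    for callback in subscribers.get(event, []):
        callback(payload)

def enforce_checkin_policy(payload):
    """Policy: code may only be checked in against an approved requirement."""
    if not payload.get("requirement_approved"):
        print(f"Policy violation: check-in {payload['id']} lacks approval")

subscribe("code.checked_in", enforce_checkin_policy)
publish("code.checked_in", {"id": "rev-42", "requirement_approved": False})
```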


OPM4ALM applications deliver value from the process infrastructure layer. They provide the following features.

Tools for modeling, customizing, fitting and reusing processes. They enable the efficient design of commercial or custom software processes using a rich software development model.

An enterprise software process console that shows a consolidated bird's eye view. This view contains all the software processes that are deployed in various projects involving disparate development tools.

A process compliance dashboard, showing process deviations and their potential consequences, with reporting capabilities useful for compliance initiatives.

Measurement and reporting based on specific metrics for each process.

Open Traceability for ALM

End-to-end traceability underpins many of the important benefits of ALM: it is an essential tool for requirements-driven development and testing, and for accurate change impact analysis. Open Traceability for ALM (OT4ALM) provides an infrastructure for creating and classifying links between the resources created during software development, and builds a flexible graph of the linked resources, no matter which tools those resources live in. The technology also provides tools for navigating this resource link graph, for building optimized queries, and for extracting the data the graph contains.

OT4ALM also provides applications that turn the collected traceability data into information for decision making.

Automated planning, impact analysis, accurate cost and budget predictions.

Boundary control - early warning of deviations from given boundaries (e.g., resources that do not meet requirements) and of unrealized requirements.

Reuse analyzer - allows entire resource trees (from requirements and models to code and tests) to be reused, instead of merely reusing code modules.

TraceView - interactive trace viewers for various projects, helping find all process resources and compare them with other resources.
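A minimal sketch of a trace graph and impact analysis over it; the resource names and links are hypothetical:

```python
# Sketch: a resource trace graph and impact analysis over it.
# Resource names and link types are hypothetical.

links = {  # resource -> resources derived from it, whichever tool holds them
    "REQ-1": ["MODEL-1"],
    "MODEL-1": ["CODE-1", "CODE-2"],
    "CODE-2": ["TEST-7"],
}

def impacted_by(resource):
    """Everything downstream of a changed resource (depth-first walk)."""
    seen, stack = set(), [resource]
    while stack:
        for child in links.get(stack.pop(), []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

print(impacted_by("REQ-1"))  # {'MODEL-1', 'CODE-1', 'CODE-2', 'TEST-7'}
```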

Common Platform Infrastructure

The Open ALM framework contains two components that are used in all areas of the solution.

ALM metamodel. A common language for describing software processes, the links between process resources (traceability), and the units of measurement (metrics). The ALM metamodel provides a rich conceptual model of the software development domain, needed to define a standard vocabulary that all Open ALM-compatible tools must understand. This shared understanding ensures effective communication within the Open ALM platform.

ALM integration level. An extensible and embeddable integration engine and SDK. It defines a standard way for ALM tools to interoperate, collect ALM metrics, and build resource traceability graphs. To participate in the ALM platform, a tool must provide a platform plug-in that satisfies the standard Open ALM API, or use a special adapter that connects the tool to other application development environments through processes orchestrated by the Open ALM platform. A sketch of what such a plug-in might look like follows.
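A minimal sketch of the shape such a plug-in contract might take; the interface and method names are hypothetical, since the actual Open ALM SDK is not specified in this document:

```python
# Sketch: the shape an Open ALM tool plug-in might take. The interface
# below is hypothetical; the actual SDK is not specified in this text.

from abc import ABC, abstractmethod

class AlmToolPlugin(ABC):
    """Contract a tool implements to participate in the ALM platform."""

    @abstractmethod
    def collect_metrics(self) -> dict:
        """Return metrics in the vocabulary of the ALM metamodel."""

    @abstractmethod
    def trace_links(self) -> list:
        """Return (source, target) resource links for the trace graph."""

class DefectTrackerPlugin(AlmToolPlugin):
    def collect_metrics(self):
        return {"open_defects": 12}

    def trace_links(self):
        return [("DEF-3", "REQ-1")]

plugin = DefectTrackerPlugin()
print(plugin.collect_metrics(), plugin.trace_links())
```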


Road to Open ALM

Over the next 24 months, Borland will progressively roll out the infrastructure, applications, and tools that make up its Open ALM platform. Borland also intends to complement the products with a wide range of professional services programs designed to accelerate the deployment and success of enterprise Open ALM implementations. Some of the benefits of Open ALM are available to customers today: organizations looking to improve quality and to improve their change and project management processes will find Borland's current solution very attractive. It provides highly automated and integrated support for four important areas of the application development process:

Project Portfolio Management (PPM);

Requirements Definition and Management (RDM);

Lifecycle Quality Management (LQM);

Change Management (CM).

These solutions are provided through tight integration between Borland products and third-party tools. This gives customers the flexibility they need and greatly increases their ability to manage software projects today.

Why Borland?

Throughout its long history, Borland has consistently partnered with its customers to let them create software in the way most convenient to them. Borland's commitment to standards-based development and multi-platform support has offered IT organizations the flexibility and freedom of choice they need. With the advent of Open ALM, Borland takes these traditional values to a whole new level, clearly setting the company apart from other ALM vendors and from non-commercial ALM initiatives.

As for the biggest makers of ALM solutions, IBM Rational and Microsoft, customers' interests are hardly their top priority: both companies constantly try to use their development tools to tie customers to their middleware and systems-management platforms.

In contrast to this approach, Borland has always insisted on supporting the Java and J2EE standards while also offering strong, integrated support for Microsoft's platform, languages, and development tools. Borland continues to explicitly extend its ALM solution for Microsoft environments and has invested heavily in support for the latest Microsoft technologies. For example, CaliberRM, the first fully integrated requirements management solution for Team System, is recommended by Microsoft as an extension of the basic requirements management functionality of VSTS. Borland will continue to expand interoperability between the Java and .NET platforms; the plans include features such as code generation from UML to C# and support for Microsoft Domain-Specific Languages (Microsoft's proposed alternative to UML).


The move toward open source is also connected with the challenges that heterogeneity poses for ALM. Several Eclipse initiatives (the Application Lifecycle Framework (ALF), Corona, and the Eclipse Process Framework (EPF)) pursue goals similar to those of Borland Open ALM. While Borland understands the motivation behind these projects, the company considers their approach insufficient: ALF and Corona attempt to provide only the infrastructure components of an open ALM stack, whereas Open ALM is a more holistic approach that also lets customers extract business value from the ready-made infrastructure through a suite of add-on applications. In its move toward Open ALM, Borland goes further than other ALM vendors: the company has recently broadened its scope to additional application development domains and is looking for the best way to support packaged-application development projects on the SAP NetWeaver and Oracle Fusion platforms.

Conclusion

Borland's unique position is that the company helps ALM users create software on their own terms. The Open ALM approach and product strategy clearly set Borland apart from other ALM vendors and from open-source initiatives. Borland is the only major ALM vendor that has recognized the reality of IT heterogeneity from the outset and that tries to help ALM users make effective use of the tools, processes, and work environments they already have. Borland's approach to process-based integration further separates the company from its competitors, allowing it to provide transparency, control, and order across the entire ALM strategy.

Borland is beginning to build the infrastructure, applications, and associated development tools for Open ALM. As a result, customers will for the first time be able to use the full capabilities of ALM: a completely seamless, manageable, and measurable software development process.

ALM systems make the application development process transparent and understandable, presenting it as one of the business processes. However, analysts warn that ALM should not be viewed solely as a means of monitoring and enforcing compliance: these systems are designed not so much for control as for automating the development process and integrating the tools involved.

The biggest problem in implementing ALM tools is people's lack of understanding of the development process. Management often assumes that ALM will make work proceed in a strictly predefined way, but it is impossible to plan everything in advance: application development frequently requires several iterations at each step, the release of intermediate versions, and a gradual build-up of the application's functionality. An ALM system should not constrain what developers do; it should facilitate the process.

The IT industry loves to talk about the barriers between IT and the business, but within the IT organization itself there are many less visible barriers that can trip up a careless system integrator.

Consider, for example, one of the most controversial and hotly debated topics in IT today: the DevOps methodology and everything related to it. As shorthand for all the activities involved in handing a developed application over to the IT service for live operation, the term sounds harmless enough. But by and large there is a wall of misunderstanding between enterprise application developers and the specialists who manage enterprise IT infrastructure. Programmers often blame IT operations for a lack of flexibility, while the people running day-to-day IT operations blame developers for ignoring the constraints and requirements of the production infrastructure on which the applications they create must run.

This tension is fueling the growing interest in Application Lifecycle Management (ALM), a set of management tools designed to give programmers and IT staff a clearer shared picture of the application being developed and of the infrastructure it must run on. The main idea is that easier collaboration between developers and IT professionals makes the entire corporate information environment function more efficiently. The problem is that an ALM implementation has little chance of success when the two parties whose cooperation the project depends on begin shifting responsibility for the difficulties that arise onto each other.

To implement the ALM methodology successfully, the system integrator must rise above the level of mutual recrimination in the IT department. According to Gina Poole, vice president of marketing for the IBM Rational Software division, this means finding an IT director or a financial director able to grasp how much money the customer is losing through the uncoordinated work of the IT department's services. Fixing bugs in an application late in a development project is extremely expensive, and if the fix is needed because the developer's earlier assumptions about the application's operating environment turned out to be wrong, the cost of the whole project multiplies, or else the customer is forced to upgrade the infrastructure accordingly.

Of course, resolving such contradictions in an organization's IT infrastructure can cost a significant amount of money. However, the ultimate goal of this work should be a set of management technologies that lets programmers and IT operations specialists stop getting in each other's way. The more time programmers spend negotiating with IT operations, the less time they have for actual development; and the more applications get built, the more advanced the infrastructure they will require, which is, of course, good news for resellers.

On the whole, the DevOps debate is clearly beneficial for resellers and integrators. The trick is not to get drawn into a customer's internal conflicts by trying to run too many IT projects in parallel. And if a customer rejects the very concept of ALM, that is in fact a good indicator of its immaturity and weak competence in IT management; by itself this suggests that a reseller is better off staying away, since such a customer will quite likely bring far more problems than profit.

It is well known that many users (and, to be honest, some IT specialists too), when talking about software development, mean first of all the writing and debugging of application code. There was a time when such a notion was close enough to the truth. But modern application development consists not only, and not so much, of writing code as of other processes that both precede programming proper and follow it. These are discussed below.

Application development life cycle: dreams and reality

It is no secret that many commercially successful products, both in Russia and abroad, were built using application development tools alone, with the data often designed on paper. It would not be an exaggeration to say that of all the possible categories of software-creation tools, mainly application development tools and, to a lesser extent, data design tools are popular in Russia (and in many European countries as well); this applies above all to projects with small budgets and compressed timelines. All project documentation, from the technical specification to the user manual, is created in text editors, and the fact that some of it serves as input for the programmer merely means that he reads it. And this despite the fact that requirements management tools, business process modeling tools, application testing tools, project management tools, and even project documentation generators have existed for a long time, and that any project manager naturally wants to make life easier for himself and for the other participants.

What explains the distrust many project managers feel toward tools that could automate many stages of their teams' work? In my opinion, there are several reasons. The first is that the tools a company uses very often integrate poorly with one another. Consider a typical example: Rational Rose is used for modeling, Delphi Professional for writing code, and CA AllFusion Modeling Suite for data design; the integration facilities for these products either do not exist for that particular combination of versions, or do not handle Russian correctly, or simply have not been purchased. As a result, the use case diagrams and other models created in Rose become nothing more than pictures in the design documentation, and the data model serves mainly as a source of answers to questions like "Why is this field in that table at all?" Even such simple pieces of the application as the Russian equivalents of database field names get written by project participants at least three times: first when documenting the data model or the application, a second time when writing the user interface code, and a third time when creating the help file and the user manual.

The second, no less serious reason for distrusting lifecycle support tools is that, again because integration facilities are absent or weak, in many cases it is impossible to keep all the parts of a project continuously synchronized: process models, data models, application code, and database structure. A project following the classic waterfall scheme (Fig. 1), in which requirements are formulated first, then modeling and design are carried out, then development, and finally deployment (this scheme and other project methodologies are described in a series of reviews by Lilia Hough published in our magazine), is more a dream than a reality: while the code is being written, the customer will have changed some of the processes or asked for additional functionality. A project therefore often ends with an application that is far removed from what the terms of reference described and a database that has little in common with the original model, and synchronizing all of this for documentation and handover to the customer becomes quite a laborious task.

The third reason why lifecycle support tools are not used everywhere they could be useful is the limited choice. Two product lines are being actively promoted on the Russian market: the IBM/Rational tools and the Computer Associates tools (mainly the AllFusion Modeling Suite line), and both are at present focused largely on particular kinds of modeling rather than on the continuous synchronization of code, database, and models.

There is one more reason, which belongs to the category of psychological factors: some developers do not strive for complete formalization and documentation of their applications at all, because it makes them indispensable and valuable employees, while the person forced to untangle such a developer's code after his departure, or simply to maintain his product, will feel hopelessly lost for a very long time. Such developers are by no means the majority; nevertheless, I know at least five company executives to whom such ex-employees caused no end of trouble.

Of course, many project managers, especially of projects with small budgets and tight deadlines, would like a tool with which they could formulate the requirements for the software product... and receive, as the result, a ready-made distribution of a working application. That, of course, is only an ideal that for now one can merely dream of. But coming down from heaven to earth, we can formulate more specific wishes, namely:

1. Requirements management tools should simplify the creation of the application model and the data model.

2. A significant part of the code should be generated from these models (preferably not only the client code but the server code as well).

3. A significant part of the documentation should be generated automatically, in the language of the country for which the application is intended.

4. Changes made to the application code should automatically be reflected in the models, and changes to the models should trigger automatic code generation.

5. Hand-written code should not disappear when the model changes.

6. The arrival of a new customer requirement should not cause serious problems with changing the models, code, database, and documentation; all changes must be made synchronously.

7. Version control over all of the above should make changes easy to find and track.

8. Finally, all these artifacts (requirements, code, models, documentation) should be available to project participants exactly to the extent they need them to perform their duties: no more and no less.

In other words, the application development cycle should allow for iterative collaborative development without the extra costs caused by changes in customer requirements or in the way they are implemented.

I will not claim that these wishes are impossible to satisfy with the IBM/Rational or CA tools: technologies develop, new products appear, and what is impossible today becomes available tomorrow. But as practice shows, the integration of those tools with the most popular development environments is, unfortunately, far from the ideal it might seem at first glance.

Borland products from a project manager's perspective

Borland is one of the most popular makers of development tools: for twenty years its products have enjoyed the well-deserved love of developers. Until recently the company mainly offered a wide range of tools aimed directly at application coders: Delphi, JBuilder, C++Builder, Kylix (we have written about all of these products repeatedly in our magazine). However, a company's success in the market is largely determined by how well it follows the market's trends and how deeply it understands the needs of those who consume its products (in this case, the companies and departments specializing in application development).

That is why Borland's current strategy is to support the full application lifecycle (Application Lifecycle Management, ALM): requirements definition, design, development, testing, deployment, and maintenance. This is evidenced by Borland's acquisition of several companies last year: BoldSoft MDE Aktiebolag (a leading provider of Model Driven Architecture application development technology), Starbase (a provider of configuration management tools for software projects), and TogetherSoft Corporation (a provider of software design solutions). In the time since those acquisitions, much work has been done to integrate the acquired products with one another, and as a result they already meet project managers' needs for iterative collaborative development. Below we discuss what exactly Borland offers to managers and other participants in software development projects (many of the products and integration technologies described were presented by the company at its developer conferences in San Jose, Amsterdam, and Moscow in November).

Requirements Management

Requirements management is one of the most important parts of the development process. Without formulated requirements it is, as a rule, almost impossible to organize the work on a project properly or to understand whether the customer really wanted what was actually implemented.

According to analysts, at least 30% of a project's budget is spent on what is called application rework (and I personally think this figure is greatly understated). Moreover, more than 80% of that rework stems from incorrectly or imprecisely formulated requirements, and fixing such defects is usually quite expensive. And how much customers like to change requirements when the application is almost ready is probably known to every project manager... This is exactly why requirements management deserves the closest attention.

For requirements management, Borland offers Borland CaliberRM, which is essentially a platform for automating the requirements management process, complete with change tracking tools (Fig. 2).

CaliberRM integrates with many development tools from Borland and from other vendors (for example, Microsoft), up to and including embedding the requirements list in the development environment and generating code stubs by dragging a requirement's icon into the code editor with the mouse. You can also build your own solutions on top of it: a special tool set, the CaliberRM SDK, exists for this purpose.
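
As an illustration only (the annotation below is hypothetical and is not claimed to be CaliberRM's actual output format), a requirement-linked stub of this kind might look as follows:

// Hypothetical example of a generated code stub traceable back to a requirement.
@interface Requirement { String id(); String title(); }

class OrderService {
    @Requirement(id = "REQ-42", title = "Orders above $1000 require manager approval")
    void approveOrder(String orderId) {
        // TODO: generated stub; to be implemented against REQ-42
        throw new UnsupportedOperationException("Not yet implemented");
    }
}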

Note that this product is used to manage requirements not only for software but for other products as well: for example, it has been successfully applied in the automotive industry to manage the requirements for various vehicle components (including those of Jaguar cars). In addition, according to Jon Harrison, the manager in charge of the JBuilder product line, using CaliberRM in the creation of Borland JBuilderX greatly simplified that product's development.

And finally, having a convenient requirements management tool greatly simplifies the creation of project documentation, not only in the early stages of a project but in all the subsequent ones.

Application and data design

Design is an equally important part of creating an application and should be based on the formulated requirements. The results of design are the models that programmers use at the code-writing stage.

For application and data design, Borland offers Borland Together (Fig. 3), a platform for application analysis and design that integrates with various development tools from Borland and from other vendors (in particular, Microsoft). The product supports the modeling and design of applications and data, and its integration with development tools has now reached the point where changes to the data model automatically change the application code, and changes to the code are reflected back in the models (Borland calls this technology for integrating modeling and development tools LiveSource).
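
The round-trip idea behind this kind of integration is easy to show with a plain class (a simplified illustration, not Together's actual mechanics): the source code and the UML class are two views of one artifact, so renaming an attribute in the diagram renames the field in the code, and vice versa.

import java.util.*;

// Two views of the same artifact: this class and a UML class "Customer".
class Customer {
    private String name;                             // UML attribute: name : String
    private List<Order> orders = new ArrayList<>();  // UML association: Customer 1 --> * Order

    void placeOrder(Order o) {                       // UML operation: +placeOrder(o : Order)
        orders.add(o);
    }
}

class Order {}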

Borland Together can thus serve as the link between requirements management and modeling on the one hand and development and testing on the other, helping the team understand what the implementation of the product requirements should look like.

Creating the application code

Creating application code is the area in which Borland has specialized throughout its twenty years of existence. Today Borland produces development tools for Windows, Linux, Solaris, and Microsoft .NET, as well as for a number of mobile platforms. We have written about this company's development tools many times and will not repeat ourselves in this article. We will only note that the latest versions of its tools (Borland C#Builder, Borland C++BuilderX, Borland JBuilderX), as well as the soon-expected new version of one of the most popular development tools in our country, Borland Delphi 8 for the Microsoft .NET Framework, integrate the Together modeling tools and the CaliberRM requirements management tools closely into their development environments. We will certainly cover Delphi 8 in a separate article in the next issue of our magazine.

Testing and optimization

Testing is an absolutely essential part of creating quality software. It is at this stage that the team checks whether the application satisfies the formulated requirements and makes the corresponding changes to the application code (and often to the models and the database as well). The testing stage usually calls for tools to analyze and optimize application performance, and Borland Optimizeit Profiler serves this purpose. Today this product is also integrated into the development environments of the latest versions of Borland's tools, as well as into Microsoft Visual Studio .NET (Fig. 4).
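
This is not a demonstration of Optimizeit itself, but the classic kind of hotspot such profilers surface is worth showing: quadratic string concatenation in a loop versus a StringBuilder. In a profiler's output, a method like buildSlow below would dominate both CPU time and memory allocation.

class ConcatHotspot {
    static String buildSlow(int n) {
        String s = "";
        for (int i = 0; i < n; i++) s += i + ";";   // copies the whole string on every pass
        return s;
    }

    static String buildFast(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) sb.append(i).append(';');
        return sb.toString();
    }

    public static void main(String[] args) {
        long t0 = System.nanoTime();
        buildSlow(20_000);
        long t1 = System.nanoTime();
        buildFast(20_000);
        long t2 = System.nanoTime();
        System.out.printf("slow: %d ms, fast: %d ms%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
    }
}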

Implementation

Software deployment is one of the most important components of a project's success. It should be carried out in such a way that during trial operation the product can be changed without serious costs and losses, and the number of users can be increased easily without any loss of reliability. Since applications today are deployed in companies that use a variety of technologies and platforms and already operate a number of existing applications, the ability to integrate the new application with legacy systems can be important. For this purpose Borland offers a number of cross-platform integration technologies, such as Borland Janeva, which enables the integration of .NET applications with applications based on the CORBA and J2EE technologies.

Change management

Change management is performed at all stages of application creation. From Borland's point of view it is the most important component of a project: changes can occur in the requirements, in the code, and in the models. Without change tracking, managing a project is difficult; the project manager must know exactly what is happening at the current stage and what has already been implemented, or he risks failing to complete the project on time.

To solve this problem you can use Borland StarTeam (Fig. 5), a scalable software configuration management tool that stores all the necessary data in a centralized repository and streamlines the interaction of the employees responsible for different tasks. It gives the project team a variety of tools for publishing requirements, managing tasks, planning work, discussing changes, version control, and document management.

The product's distinguishing features are its tight integration with other Borland products; support for distributed development teams interacting over the Internet; several types of client interface (including a Web interface and a Windows interface); support for many platforms and operating systems; the StarTeam Software Development Kit (SDK), an application programming interface for building solutions on top of StarTeam; client- and server-side data protection; access to Merant PVCS Version Manager and Microsoft Visual SourceSafe repositories; integration with Microsoft Project; and tools for data visualization, reporting, and decision support.

Instead of a conclusion

What does the appearance on the Russian market of this set of products, from a well-known manufacturer whose development tools are used in a wide variety of projects, actually mean? At a minimum, that today we can get not just a set of tools for the various project participants but an integrated platform for the entire development lifecycle, from requirements definition to deployment and maintenance (Fig. 6). Unlike the competing product sets, this platform guarantees support for all the most popular development tools and embeds its components in them with full synchronization of the code with the models, the requirements, and the changes. Let us hope that project managers will breathe a sigh of relief, sparing themselves and their staff many tedious and routine tasks...

Looking at how the development tools market has evolved over the past 10-15 years, one can see a general shift of emphasis from the technology of actually writing programs (marked, since the early 1990s, by the emergence of RAD, rapid application development, tools) toward the need for integrated management of the entire application lifecycle: ALM (Application Lifecycle Management).

As software projects grow more complex, the requirements for the efficiency of their execution rise sharply. This matters all the more today, when software developers are involved in almost every aspect of an enterprise's work and the number of such specialists keeps growing. At the same time, research in this area suggests that at least half of all in-house software development projects fail to live up to the hopes placed on them. Under these conditions, the task of optimizing the entire software creation process, covering all its participants (designers, developers, testers, support services, and managers), becomes especially urgent. Application Lifecycle Management (ALM) views the software release process as a continuously repeating cycle of interrelated stages:

requirements definition (Requirements);

design and analysis (Design & Analysis);

development (Development);

testing (Testing);

deployment and operations (Deployment & Operations).

Each of these stages must be carefully monitored and controlled. A properly organized ALM system allows you to:

reduce the time needed to bring products to market (developers need only ensure that their programs comply with the formulated requirements);

improve quality by ensuring that the application meets users' needs and expectations;

boost productivity (developers can share best practices in development and implementation);

accelerate development through tool integration;

reduce maintenance costs by constantly keeping the application and its project documentation consistent;

get the most out of investments in skills, processes, and technologies.

Strictly speaking, the concept of ALM is, of course, not fundamentally new: this understanding of software development problems arose some forty years ago, at the dawn of industrial development methods. Until relatively recently, however, the main effort in automating software development went into tools for programming itself, as the most labor-intensive stage. Only in the 1980s, as software projects grew more complex, did the situation begin to change significantly. The need to extend the functionality of development tools (in the broad sense of the term) then became sharply more relevant in two main directions: 1) automating all the other stages of the software lifecycle and 2) integrating the tools with one another.

Many companies tackled these tasks, but the undisputed leader was Rational, which for more than twenty years from its founding specialized in automating software development processes. It was Rational that pioneered the widespread use of visual methods of program design (and was practically the author of UML, now accepted as the de facto standard in this area), created a common ALM methodology, and built the corresponding set of tools. It is fair to say that by the beginning of this century Rational was the only company with a full range of ALM products in its arsenal (from business design to maintenance), with the exception of one class of tools: ordinary coding tools. In February 2003, however, it ceased to exist as an independent organization and became a division of IBM Corporation under the name IBM Rational.

Until quite recently, Rational was practically the only maker of an integrated suite of ALM-class tools, although competing products from other vendors existed, and still exist, for individual stages of software development. A couple of years ago, however, Borland Corporation entered the race. Borland has always held a strong position in traditional application development tools (Delphi, JBuilder, and so on); these tools in fact form the basis of the corporation's ALM suite, which was expanded through the acquisition of other companies making complementary products. Herein lies the fundamental difference between the two companies' business models, and it opens up opportunities for real competition. After Rational became part of IBM, Borland began positioning itself as the only remaining independent supplier of a comprehensive ALM platform (that is, one that does not promote its own operating systems, languages, and so on). Its competitors, in turn, note that Borland has not yet formulated a clear ALM methodology to provide a basis for combining the tools it owns.

Another major player in the development tools field is Microsoft Corporation. So far it is not attempting to create an ALM platform of its own; in this area it moves forward only within the framework of cooperation with other suppliers, the same Rational and Borland (both became the first participants in the Visual Studio Industry Partner program). Meanwhile, Microsoft's flagship Visual Studio .NET development tool is constantly expanding its functionality with higher-level modeling and project management tools, including through integration with Microsoft Visio and Microsoft Project.

It should also be noted that today almost all the leading developers of technologies and software products (besides those listed above, one can name Oracle, Computer Associates, and others) have their own software development technologies, created both in-house and through the acquisition of products and technologies from small specialized companies. And although, like Microsoft, they do not yet plan to create their own ALM platform, the CASE tools these companies release are widely used at particular stages of the software lifecycle.