Life cycle management system. ALM - Application Lifecycle Management


"Control life cycle»Comes down to the need to master the practices familiar to systems engineering:

  • information management ("the necessary information must be available to the right stakeholders on time and in a form they can use"),
  • configuration management ("the design information must comply with the requirements, the as-built information must correspond to the design, including the design justifications, the physical system must correspond to the as-built information, and different parts of the design must be consistent with each other"); part of this practice is sometimes called "change management".

Lifecycle management system (LMS) vs. PLM

A lifecycle management system, as formulated here, does not presuppose PLM as the class of software around which such a system must be built. In large engineering projects several (most often significantly "underdeployed") PLM systems from different vendors are usually in use at once, and creating a lifecycle management system is usually a matter of their inter-organizational integration. This also raises the question of how to integrate into the lifecycle management system the information of those systems that are not yet connected to any of the PLM systems of the extended enterprise. The term "extended enterprise" usually refers to an organization created through a system of contracts from the resources (people, tools, materials) of the various legal entities participating in a specific engineering project. In extended enterprises the question of which PLM should integrate the data of which CAD/CAM/ERP/EAM/CRM/etc. systems becomes non-trivial: the owners of the different enterprises cannot be ordered to use software from the same vendor.

And since a PLM system is still a software tool, while the "management system" in "lifecycle management system" is understood as an organizational management system, the term LMS clearly implies an organizational aspect, not just an information technology one. The phrase "using PLM to support a lifecycle management system" is therefore quite meaningful, although it can be confusing when "PLM" is translated literally.

However, when IT staff take up "lifecycle management", the understanding is immediately flattened back to "software only", which looks suspiciously like PLM software. And after this oversimplification the difficulties begin: a "boxed" PLM system from some CAD vendor is usually presented constructively, as a set of software modules from that vendor's catalog, without reference to the engineering and management functions being supported, and is treated as a triple of the following components:

  • a data-centric repository of lifecycle data,
  • a workflow engine to support the "management" part,
  • a portal for viewing the contents of the repository and the status of workflows.

Purpose of the lifecycle management system

Main purpose: the LMS detects and prevents the collisions that are inevitable in collaborative development. All other LMS functions are derivatives that support this main function.

The main idea of any modern lifecycle management system is the use of an accurate and consistent representation of the system and the world around it in the inevitably heterogeneous and initially incompatible computer systems of an extended organization. The use of virtual mock-ups, information models, and data-centric repositories of design information ensures that collisions are detected during "construction in the computer", during "virtual assembly", and not when drawings and other models of the design are taken into material reality during actual construction "in metal and concrete" and commissioning.

Thus the idea of the LMS is not about "design automation" of any kind, above all not about "generative design" and "generative manufacturing". The LMS deals not with synthesis but with analysis: it detects and/or prevents collisions in the design results of individual subsystems when they are assembled together, using a variety of technologies:

  • merging project data together into one repository,
  • running an integrity check algorithm for engineering data distributed across multiple repositories,
  • performing up-to-date "virtual assembly" and simulation for a specially selected subset of the design data.
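
As an illustration of the second option, here is a minimal sketch of an integrity check across design data exported from two different tools; the data, field names, and the diameter-matching rule are all hypothetical.

```python
# Minimal sketch (hypothetical data and names): an integrity check across
# design data exported from two engineering tools. A "collision" here is a
# mismatch between a pipe's declared diameter and the nozzle it connects to.

piping_model = [  # e.g. exported from a piping CAD system
    {"pipe_id": "P-101", "connects_to": "N-1", "diameter_mm": 150},
    {"pipe_id": "P-102", "connects_to": "N-2", "diameter_mm": 200},
]

equipment_model = {  # e.g. exported from an equipment datasheet system
    "N-1": {"nozzle_diameter_mm": 150},
    "N-2": {"nozzle_diameter_mm": 250},
}

def find_collisions(pipes, nozzles):
    """Return a list of human-readable collision descriptions."""
    collisions = []
    for pipe in pipes:
        nozzle = nozzles.get(pipe["connects_to"])
        if nozzle is None:
            collisions.append(f"{pipe['pipe_id']}: nozzle {pipe['connects_to']} not found")
        elif nozzle["nozzle_diameter_mm"] != pipe["diameter_mm"]:
            collisions.append(
                f"{pipe['pipe_id']}: diameter {pipe['diameter_mm']} mm "
                f"does not match nozzle {pipe['connects_to']} "
                f"({nozzle['nozzle_diameter_mm']} mm)"
            )
    return collisions

print(find_collisions(piping_model, equipment_model))
# ['P-102: diameter 200 mm does not match nozzle N-2 (250 mm)']
```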

Model-Based Approach

Using an LMS implies abandoning not only paper in design but also "electronic paper" (.tiff or other raster formats), and moving to a data-centric representation of information. Comparing two models that exist on paper in some notation and finding inconsistencies between them is much harder and slower than preventing collisions in structured electronic documents that use engineering data models rather than raster graphics.

The data model can be designed according to some language, for example as:

  • a metamodel (in terms of the ISO 24744 development method description standard),
  • a metamodel (in terms of the OMG standardization consortium),
  • a data model / reference data (in terms of the ISO 15926 lifecycle data integration standard).

It is this transition to structured, machine-processable models that already exist in the early stages of design that is called Model-Based Systems Engineering (MBSE). It becomes possible to remove collisions by computer processing of data at the earliest stages of the life cycle, even before full-fledged 3D models of the facility appear.

The LMS should:

  • provide a mechanism for transferring data from one CAD/CAM/ERP/PM/EAM/etc. system to another, in an electronic structured form rather than as a "pack of electronic paper". Data transfer between engineering information systems (with a clear understanding of where from, where to, when, what, why, and how) is part of the functionality provided by the LMS. Thus, the LMS must support workflow (work that is performed partly by people and partly by computer systems).
  • control versions, that is, provide a configuration management function for both the models and the physical parts of the system. The LMS supports a taxonomy of requirements at different levels and provides the means to check design decisions at different levels, together with their justifications, against these requirements for collisions. In the course of engineering development, any description of the system, any of its models, is changed and supplemented many times and therefore exists in many alternative versions of varying correctness that correspond to each other to varying degrees. The LMS must ensure that only a correct combination of these versions is used for the current work (a sketch of such a check follows).
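
A minimal sketch of such a version-consistency check follows; the baseline contents, artifact names, and reference structure are invented for illustration.

```python
# Minimal sketch (hypothetical structures): a baseline as a set of artifact
# versions, plus a check that every cross-reference between baselined
# artifacts points to the version actually included in the baseline.

baseline = {
    "REQ-venting": "v3",
    "ARCH-hvac": "v7",
    "3D-hvac-duct": "v12",
}

references = [
    # (from_artifact, from_version, refers_to_artifact, refers_to_version)
    ("ARCH-hvac", "v7", "REQ-venting", "v3"),
    ("3D-hvac-duct", "v12", "ARCH-hvac", "v6"),  # stale reference
]

def check_baseline(baseline, references):
    problems = []
    for src, src_ver, dst, dst_ver in references:
        if baseline.get(src) != src_ver:
            continue  # reference comes from a version outside the baseline
        if baseline.get(dst) != dst_ver:
            problems.append(
                f"{src} {src_ver} refers to {dst} {dst_ver}, "
                f"but the baseline contains {dst} {baseline.get(dst)}"
            )
    return problems

print(check_baseline(baseline, references))
```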

LMS architecture

There can be many architectural solutions for an LMS; one and the same function can be supported by different designs and mechanisms of operation. Three types of architecture can be distinguished:

  1. Traditional attempts to create a lifecycle management system amount to providing critical point-to-point data transfers between the different applications. Any specialized workflow support system (a BPM engine, "business process management engine") or a complex event processing engine can be used here. Unfortunately, the amount of work needed to provide point-to-point exchanges turns out to be simply enormous: each exchange requires specialists who understand both connected systems as well as the method of transferring the information.
  2. Using the ISO 15926 lifecycle data integration standard in the "ISO 15926 outside" style, where an adapter to a neutral, standard-conformant representation is developed for each engineering application. All data can thus meet in some application and collisions between them can be detected, while each application needs only one data transfer adapter rather than several (one per other application it must communicate with); a sketch of this adapter idea follows the list.
  3. PLM (Teamcenter, ENOVIA, SPF, NET Platform, etc.): a standardized architecture is used, with the caveat that the data model in each of these PLM systems is less universal in covering arbitrary engineering domains and is neither neutral nor open to all. So the use of ISO 15926 as the base representation when transferring data into the LMS can be considered a further development of the ideas already implemented in modern PLM.
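
The adapter idea from option 2 can be sketched as follows; the record formats and field names are invented and do not reproduce the actual ISO 15926 data model, only the "one adapter per application into a neutral representation" pattern.

```python
# Minimal sketch (names are illustrative, not the actual ISO 15926 schema):
# each application gets one adapter into a neutral representation, and
# collisions are detected on the neutral data rather than via N*N
# point-to-point exchanges.

def adapt_from_cad(record):        # adapter for a hypothetical CAD export
    return {"tag": record["TagNo"], "property": "design_pressure",
            "value": record["DesignPress_bar"], "unit": "bar"}

def adapt_from_erp(record):        # adapter for a hypothetical ERP export
    return {"tag": record["equipment"], "property": "design_pressure",
            "value": record["pressure_rating"], "unit": "bar"}

def detect_collisions(neutral_records):
    seen = {}
    collisions = []
    for r in neutral_records:
        key = (r["tag"], r["property"])
        if key in seen and seen[key] != (r["value"], r["unit"]):
            collisions.append(f"{r['tag']}.{r['property']}: "
                              f"{seen[key]} vs ({r['value']}, {r['unit']})")
        seen.setdefault(key, (r["value"], r["unit"]))
    return collisions

neutral = [adapt_from_cad({"TagNo": "V-100", "DesignPress_bar": 16}),
           adapt_from_erp({"equipment": "V-100", "pressure_rating": 10})]
print(detect_collisions(neutral))   # pressure mismatch for V-100
```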

By the architecture of its configuration management, an LMS can be divided into three types:

  • "Repository"(actual storage of all project data in one PLM repository, where data is copied from where it was developed),
  • "register"(LCL maintains a list of lifecycle data addresses in numerous repositories of other CAD, engineering modeling systems, PLM, ERP, etc.),
  • "Hybrid architecture"- when part of the data is copied to the central repository of the LCL, and part of the data is available from other places via links.

The LMS architect should also describe:

  • "portal"(including the "web portal"), its functions and method of implementation. The very presence of the portal allows you to reassure top managers by demonstrating the absence of collisions. Specific requirements are imposed on the architectural solutions for the "life cycle portal".
  • algorithms for checking data integrity / consistency life cycle, as well as a description of the operation of these algorithms:
    • a standard module in a separate application working on data in the repository of that application - be it CAD or PLM;
    • a software collision checking tool specially developed for LCLM, which has access to data from different applications located in the central LCLC repository;
    • a specially developed software tool that accesses the Internet via a secure channel to different data storages located in different organizations;
    • specially programmed checks with collision control when loading various engineering datasets into the central repository of the LCL;
    • a combination of all the listed methods - different for different types collisions; etc.
  • the way the users of the LMS interact (design engineers, purchasers, installers, facility project managers, etc.), and how exactly the LMS software supports this interaction in a way that prevents collisions. Systems engineering standards (in particular ISO 15288, the systems engineering practices standard) require choosing a lifecycle type for the engineering of complex objects and indicating which options of systems engineering practices will be used. The lifecycle model is one of the primary artifacts serving as the organizational arrangement for coordinating the work of an extended engineering project organization. Coordinated work in the course of collaborative engineering is the guarantee of a small number of design collisions. How exactly will the LMS support the lifecycle model? PLM systems, for example, usually have no place for lifecycle models, let alone organization models, so for the LMS other solutions for the software support of these models must be sought.
  • the organizational aspect of the transition to using a lifecycle management system. The transition to an LMS can significantly change the structure, and even the staffing, of an engineering company: not every ditch-digger gets hired as an excavator operator, not every coachman as a taxi driver.

The main thing for an LMS is the extent to which the proposed solution contributes to early detection, and even prevention, of collisions. When it comes to something else (a meaningful choice of lifecycle type in accordance with the risk profile of the project, aging management, cost management and budget-system reform, mastering axiomatic design, a just-in-time delivery structure that drives design and construction, and much, much more that is also extremely useful, modern, and interesting), that is a matter for other systems, other projects, other methods, other approaches. The LMS should do its own job well rather than poorly solve a huge set of randomly chosen problems that belong to others.

Thus, the architect has two main tasks:

  • generate a number of good candidate architectures and their hybrids,
  • make a multi-criteria choice among these architectures, which requires:
    • meaningful consideration (meaningful selection criteria),
    • presentation of the result (justification).

Criteria for choosing an architectural solution for a lifecycle management system

  1. How well the lifecycle management system fulfils its main purpose: detecting and preventing collisions. The main criterion: by how much can the progress of engineering work be accelerated by accelerating the detection or prevention of collisions using the proposed LMS architecture? And if the duration of the work cannot be reduced, by how much can the volume of work done in the same time with the same resources be increased? The following methodologies are recommended:
    • Goldratt's theory of constraints (TOC): the architecture should indicate which system constraints it removes on the critical resource path of the engineering project (not to be confused with the critical path).
    • ROI (return on investment) for the investment in lifecycle management, at the stage of formalizing the result of a meaningful consideration of the candidate architectures.
    It is important to choose the boundaries of consideration: the overall speed of executing an engineering project can only be measured at the boundary of the organizational system under consideration. The boundaries of a single legal entity may not coincide with the boundaries of the extended enterprise carrying out a large engineering project, and an enterprise participating in only one stage of the life cycle may incorrectly assess its usefulness and criticality for the full life cycle of the system and choose the wrong way of integrating into it. It may then turn out that creating the lifecycle management system does not affect the overall schedule and budget of the project, because the most unpleasant collisions continue to go unaddressed by the new system.
  2. The possibility of adopting an incremental life cycle for developing the lifecycle management system. In ISO 15288 an incremental life cycle is one in which functionality is given to the user not all at once but step by step, while investment in development likewise happens step by step. The law of diminishing returns must of course be taken into account: each increment of the LMS (each new type of collision detected in advance) costs more and brings less benefit, until the development of an LMS that has dragged on for many years dies out by itself. If it turns out that one of the proposed architectures requires investing a great deal of money in the LMS at once, while the benefit arrives only after five years, all 100% of it delivered turnkey, then it is a bad architecture. If it turns out that a compact LMS core can be developed and put into operation, followed by many modules of the same kind for different types of collisions with a clear mechanism for their development (for example, based on ISO 15926), then that is very good. The point is not so much applying agile methodologies as envisioning a modular LMS architecture and proposing a plan for implementing a prioritized list of modules: first the most urgent, then the less urgent, and so on. Not to be confused with ICM (incremental commitment model), although the meaning is similar: the better architecture is the one in which you can get something like payment by installments for the system and obtain the necessary functionality as early as possible, so as to receive the benefit (even a small one) early and pay for later benefits later.
  3. The fundamental financial and intellectual ability to master and maintain the technology. If you count the expenses not only for the LMS itself but also for all the personnel and other infrastructure required for the project, you need to understand how much of this investment in education, computers, and organizational effort will remain with the payer and owner of the lifecycle management system, and how much will settle outside, with the numerous contractors who will of course be grateful first for the "scholarship" to master the new technology, and then for supporting the system they have created. The new is usually extremely costly, not because it is expensive in itself, but because of the avalanche of changes it causes. It is this criterion that accounts for the full cost of ownership of the LMS, and it also covers the full life cycle not of the engineering system with its preventable collisions, but of the LMS itself.
  4. Scalability of the LMS architecture. This criterion matters for large engineering projects. Since you want the system to be used by all the thousands of people in your extended organization, it will need to grow to that scale quickly. To what extent can a "pilot" or "test-range" LMS grow quickly without fundamental architectural changes? Most likely it cannot. Therefore, architecturally, what is needed is not a "pilot" or a "test range" but the "first stage" right away. The scaling criterion overlaps closely with the incrementality criterion, but it touches a slightly different aspect: not so much stretching the creation of the LMS over time as the possibility of stretching it over the volume covered. Experience shows that all systems cope with test volumes of design data, but not with industrial volumes. How nonlinearly will the cost of hardware and software grow with increasing volume and speed? How will the regulations hold up when it turns out that more data passes through a workstation than one person can meaningfully review? Poor scalability can lurk not only on the technical side of the software and hardware architecture, but also on the side of its financial architecture. Thus a small per-seat license price for the lifecycle management system, or even a small price for each new connection to a repository server, can turn a solution that is more or less attractive for ten workplaces into one that is absolutely unaffordable for the target thousand workplaces.
  5. The ability to take into account inevitable organizational problems, including attitudes toward favorite legacy systems in the extended organization. How much does the proposed centralized or distributed architecture require "giving functions to other departments", "giving our data away", and in general "giving something up" compared with the current situation without an LMS? Mainframes massively lost the competition to minicomputers, and those to personal computers. There is almost no way back to centralized systems (which an LMS inevitably appears to be), because all the data lies in separate applications, and pulling this data into new systems is a most difficult organizational task. How does the LMS architecture work: does it replace current legacy engineering applications, is it built on top of the current IT infrastructure, is it deployed alongside existing services? How much organizational, managerial, and consulting effort will it take to push the new technology through? How many people will have to be let go, how many new specialists found and hired? This criterion of organizational acceptability is closely related not only to centralization/decentralization but also to the motivation system in the extended enterprise; that is, evaluating an LMS architecture against this criterion goes far beyond a narrow consideration of the LMS itself and requires a thorough analysis of the principles on which the extended organization is built, up to and including a revision of the principles underlying the contracts by which it is created. But this is the essence of the systems approach: any target system (in this case the LMS) is considered first of all not "inward, into its parts" but "outward, as part of a larger whole"; what matters first is not its design and mechanism of operation but the function it supports in the external supersystem, preventing collisions, and the price the external supersystem is willing to pay for this new function. Therefore, possible LMS architectures are considered primarily not from the point of view of "the decent technologies used, for example from software vendor XYZ" (that is the default: all the proposed architecture options are usually technologically decent, otherwise they would not be options!), but from the point of view of the five criteria above.

Functions of the LMS

  1. Preventing collisions
    1. Configuration management
      1. Identification (classifications, encodings)
      2. Configuration accounting (all possible baselines: ConOps, architecture, design, as-built), including data transfer to the LMS repository, support for change workflows, and support for concurrent engineering (work with an incomplete baseline)
      3. Versioning (including forks)
    2. No manual data transfer (transfer of input and output data between existing islands of automation, including transfer of data from the islands where legacy design work is "raised to digital form")
    3. Master data (reference data) configuration
    4. Collaborative engineering support system (video conferences, remote project sessions, etc.; possibly not the same one used to create the LMS itself)
  2. Collision detection
    1. Support for a register of the collision types to be checked, and of the checking technologies corresponding to that register (a rough sketch appears after this list of functions)
    2. Data transfer for checking collisions between islands of automation (without assembly in the LMS repository, but by means of the LMS integration technology)
    3. Workflow run for checking different types of collisions
      1. in the LMS repository
      2. not in the repository, but by means of the LMS integration technology
    4. Launching the workflow for eliminating a found collision (sending notifications about collisions, because the elimination workflow itself is not the LMS's concern)
    5. Maintain an up-to-date list of unresolved collisions
  3. Developability (here the LMS is considered as an autopoietic system: "incrementality of implementation" is one of the most important properties of the LMS itself, so this is a function of the LMS proper rather than of its enabling system)
    1. Ensuring communication about the development of the lifecycle management system
      1. Planning for the development of the lifecycle management system (roadmap, development of an action plan)
      2. Functioning of the LMS project office,
      3. Maintaining a register of types of collision checks (the register itself of "wishes" and the roadmap of the implementation of checks)
      4. Organizational and technical modeling (Enterprise Architecture) for the LMS
      5. Communication infrastructure for the LMS developers (Internet conferences, video communication, knowledge management, etc.; possibly not the same one used for collaborative engineering with the LMS)
    2. Consistency of data integration technology (e.g. ISO 15926 technology)
      1. Using a neutral data model
        1. Reference data library support
        2. Development of reference data
      2. Technology for supporting adapters to the neutral data model
    3. Uniformity of workflow / BPM integration technology (across the extended enterprise)
  4. Data security (at the scale of the information systems operating within the lifecycle management system)
    1. Unified access (one login and password for all information systems participating in the workflow)
    2. Controlling access rights to data items
    3. Backup
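
As a rough illustration of the "register of collision types plus check workflow" functions above, here is a minimal sketch; the collision types, checks, and notification roles are hypothetical.

```python
# Minimal sketch (hypothetical): a register of collision types, each with the
# check that implements it; the LMS runs the checks and only notifies the
# responsible parties - the elimination workflow itself is outside the LMS.

from dataclasses import dataclass
from typing import Callable

@dataclass
class CollisionType:
    code: str
    description: str
    check: Callable[[dict], list]   # returns a list of collision messages
    notify: str                     # role to notify

def clash_check(data):
    return [f"clash between {a} and {b}" for a, b in data.get("clashes", [])]

def mass_budget_check(data):
    total = sum(data.get("masses_kg", []))
    limit = data.get("mass_limit_kg", float("inf"))
    return [f"total mass {total} kg exceeds limit {limit} kg"] if total > limit else []

registry = [
    CollisionType("GEO-01", "3D geometry clash", clash_check, "piping lead"),
    CollisionType("SYS-02", "mass budget exceeded", mass_budget_check, "systems engineer"),
]

def run_checks(data):
    for ct in registry:
        for finding in ct.check(data):
            print(f"[{ct.code}] notify {ct.notify}: {finding}")

run_checks({"clashes": [("duct D-7", "cable tray T-3")],
            "masses_kg": [1200, 900], "mass_limit_kg": 2000})
```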

Application Lifecycle Management (ALM) is evolving rapidly. It is a promising approach to improving the software development process. However, the "traditional" ALM process cannot reach its full profit potential for the organization. Why? Because vendors aggressively push limited end-to-end ALM solutions to market that aim to tie customers to closed technology platforms. Customers soon find that these solutions do not integrate with their existing development processes, tools, and platforms. Unfortunately, this leaves development teams with fragmented processes and a mess of ALM data, which in turn prevents them from fully realizing the capabilities of ALM.

A new approach is needed to solve this problem: an approach that allows customers to create software in a mixed development environment. With Borland's Open ALM solutions, organizations can leverage their existing development resources and tools. This helps achieve transparency, control, and discipline throughout the software development cycle. Customers can now benefit from an optimized ALM platform and a unified, manageable, and measurable software development process.

Predictable Software Development: Mission Impossible?

Software development is inherently a complex undertaking. Creating a software product with reasonably well-defined characteristics, with acceptable quality, within the allotted budget and on time requires constant coordination of a large number of activities among numerous specialists.

The complexity of managing and tracking software projects increases when organizations decide to use distributed development models (for example, offshore programming or the use of temporary employees and subcontractors). As a result, project failures or terminations are becoming ubiquitous. Excessive costs, missed schedules, poor quality, and poor reliability have become the norm in the software industry. Accordingly, more intelligent approaches are being demanded of organizations engaged in software development: they must adopt well-managed, structured, process-oriented approaches that follow in the footsteps of the more traditional engineering disciplines.[1]

With increasing standardization and the use of enterprise development platforms, the challenges facing the industry have become less technical in nature. The ability to generate consistent and predictable value from software development has become a significant priority for many information technology (IT) professionals. They need the confidence that their teams will be effective in creating software. With these considerations in mind, Borland has developed platforms for ALM. They are designed to solve the problem of stability and predictability of the software development process.

1 Key industry trends such as the accelerated adoption of the CMM / CMMI process improvement framework and the increased use of third-party development models are closely related to this apparent transformation of the software industry.

The emergence of ALM

As the application development tools industry responded to the need for predictable software development, it focused its efforts on more than tools for the individual developer. Vendors expanded their offerings and integrated both existing and new capabilities into their products. Their solutions now address tasks associated with the other roles in the software development process. These product suites, often marketed as collaborative development platforms, ushered in Application Lifecycle Management (ALM) technology, which has become a new category in the market and a separate discipline in software development. ALM platforms are specifically designed to address the challenges of increasing the predictability and integrity of the software development process. They do so by providing integration and automation for each major role that participates in the process and by automating a number of functions.

Measurability

Ability to define systems of measures to assess quality, productivity, progress and risk.

Analyze these metrics and generate reports as the project progresses.

Harmonization

Alignment of business and IT priorities.

Aligning project results with end-user expectations.

Discipline

Consistent definition, deployment, and tracking of software processes.

More rigorous change management and prediction of the consequences of changes.

These capabilities enable IT leaders to balance and prioritize their software project portfolios. They can achieve a higher level of management of their teams and much more transparency in the implementation of projects. With ALM, executives can also gain much more control over the software development process. This provides better opportunities for corporate governance and helps the organization to demonstrate compliance with various rules and regulations.

ALM Industry

Initially, Borland and IBM Rational were among the few innovators who understood the importance of the ALM trend and changed their product strategies toward explicit support for it. Responding to the obvious opportunity, other companies joined the winning ALM concept: Microsoft, IBM Rational/Telelogic, Mercury, and Serena. ALM today is an established trend and a growing industry recognized by analysts. ALM vendors provide a variety of tools and technologies to support the software development process. These tools go well beyond the traditional productivity tools of the individual developer. They aim to provide methodologies and tools oriented toward teamwork in creating software. To create a viable ALM solution, vendors must address the needs of the "extended" software development team and include in their products the roles that participate in this broader process.

For the needs of executives, portfolio-level dashboards are provided, covering important project metrics: risk, progress, and quality.

For the needs of project managers, tools are provided for planning and controlling the project, analyzing possible alternatives and allocating resources.

For the needs of analysts, tools are provided for defining requirements, interacting with end users and other stakeholders of the project. This level also provides tools for managing requirements throughout the project life cycle, including subsequent changes.

For the needs of architects, there are tools for visual modeling of various aspects of the application (components, data, process), as well as tools for describing design patterns and corporate architecture.

Various programming environments and code-level quality assurance tools (for example, runtime profilers, unit testing and automated code auditing) are provided for developers' needs.

For the needs of quality engineers, tools are provided for creating and managing tests, for regression and functional testing, as well as tools for automated performance testing.

A collective infrastructure serves to solve common problems of the whole group. It provides tools for collaboration, process management, change management, and version control.

For the needs of managers of the software development process, there are tools for modeling and applying a set of corporate technology standards.

For the needs of end users and other stakeholders in the organization, tools are provided to automate requirements management. They are also provided with opportunities to exchange information about requirements, report noticed defects and track the status of questions raised.

ALM technology is widely recognized as a major step forward for the application development industry and its customers. Interestingly, the latest "Chaos Report" from the Standish Group shows that the number of failed software projects has dropped by half over the past decade. This improvement can partly be attributed to ALM. However, a deeper study of customer needs reveals that while ALM has clear benefits, it is still difficult to realize the technology's full potential. To do so, the fundamental approach used to integrate the processes and tools involved in the software life cycle has to change.

ALM's business potential is largely unrealized

To better understand why current solutions make it difficult to fully unleash ALM's business value, let us take a closer look at typical software development and operating environments. We will examine how software is produced and deployed in terms of processes, development tools, and runtime platforms. Ultimately, this discussion explains why software production remains one of the last business processes to resist being made consistent and predictable, let alone automated.

The corporate IT environment: the challenge of heterogeneity

The rise of the Internet and its spread as the primary platform for commerce has brought significant changes to conventional IT organizations, reinforced by constant work under resource shortages and high demands for flexibility. At the heart of these changes is an architectural evolution that aims to improve IT responsiveness, service levels, and efficiency by moving from legacy technologies to new, modern application platforms. The key areas of this evolution are listed below.

Migrate from monolithic mainframe specialized applications to new development tools for enterprise distributed platforms, namely J2EE and .NET.

Migrate from packaged enterprise applications built on legacy architectures to process and composite application runtimes such as SAP NetWeaver and Oracle Fusion.

Use of specialized platforms for specific needs: for example, scripting languages for database-backed web applications (PHP, Ruby, etc.), or platforms for developing applications with rich Internet and multimedia functionality (for example, Adobe® Flash®/Flex™).

Each of these technologies is associated with specific application development tools (often offered by different vendors). These tools cover analysis, design, coding, quality control, version control, and configuration management.

It is reasonable to assume, especially for midsize and large corporations, that for the foreseeable future every enterprise IT environment will include at least three of the listed deployment platforms: a mainframe, a distributed environment (J2EE or .NET), and a business process automation system (SAP or Oracle). It also appears (and is becoming increasingly apparent) that some organizations deploy software to both the J2EE and .NET platforms.[2]

Conflicting agendas

It is interesting to note that, for obvious reasons, some IT vendors try as hard as they can to counter the heterogeneous nature of the enterprise IT environment. These vendors seek to "take over" the organization's IT environment entirely by pushing complete "lifetime" solutions to market, containing software development tools, an application runtime environment, and tools for managing networks and systems. Major vendors also include an operating system, or even hardware, in their solutions. It goes without saying that such solutions also imply a significant professional services component.

Despite this massive push for all-encompassing solutions from a single source, the reality is that for many customers this approach simply does not work. Such organizations embrace heterogeneity at all levels, and they therefore have a different set of priorities, priorities that make certain goals important to the customer (rather than to the supplier).

Maximizing competitiveness. Organizations trying to provide the best product or service usually choose the best platform and development tool from the project's standpoint. This approach helps them achieve the benefits that each platform offers for specific end users. It usually happens across separate projects, but it can also happen within a single project, ultimately leading to "hybrid" applications that span multiple technology domains. Here are some relevant examples.

o Composite applications or services, which include mainframe, packaged applications, and self-developed distributed applications.

o J2EE / .NET hybrids that leverage the client-side capabilities and user interface of .NET. On the server side, they leverage the scalability, manageability, and security of J2EE technology. This architectural pattern is especially common in the financial vertical. It is used for high performance trading platforms as Windows is the de facto desktop standard on Wall Street.

o Flash/J2EE hybrids. They combine the capabilities of Adobe Flash as a platform for streaming video and rich Internet applications with the benefits of J2EE technology on the server, allowing a high degree of scalability together with a rich multimedia interface.

Reducing development costs. Organizations try to reduce software development and deployment costs by combining open source and proprietary tools and programs. In this regard, it is worth mentioning the growing popularity of the LAMP stack (Linux, Apache, MySQL, PHP) and its increasing use in organizations.

Reducing time to market. Organizations may prefer certain development tools because of the specific productivity benefits they offer, which can significantly reduce time to market.

Effective use of investments already made. Any "rip and replace" approach faces significant obstacles, because most organizations do not want to abandon significant investments in existing programs and tools.

Risk reduction. Several vendors in the IT industry provide proprietary support only for their own platforms, which their customers see as a risk. Being tied to a specific IT vendor's platform can create significant business risk, especially if that vendor is (or later becomes) a competitor.

2 IDC Insight's J2EE and .NET Usage Report by Steve McClure states the following: 10.4% of current .NET users expect to use J2EE/J2ME in the next 12 months; 11.9% of J2EE/J2ME users expect to be involved in .NET development within the next 12 months.

IT Heterogeneity: The Biggest Challenge for ALM

In short, many organizations in the IT industry view heterogeneity as the only alternative, as there are many business benefits associated with it. Very often, development teams use a variety of tools that are not meant to work together. There is no single vendor who supplies the tools for all the necessary actions in the context of one software project. Moreover, there is no single vendor that can fully cover the three main domains: legacy system support and modernization, packaged application extension and customization, and new distributed application development. Therefore, it is very likely that organizations will continue to use disparate development tools within the same project and across different technology domains. Because of this, the biggest challenge in implementing ALM is the heterogeneity of development tools. As a reminder, ALM strives to achieve predictability and integrity in the software manufacturing process through automated measurability, alignment, and discipline. However, in a highly heterogeneous environment, these qualities of the software manufacturing process are much more difficult to achieve.

Measurability requires collecting metric information from various application development tools and repositories, yet there is no universally accepted standard for such data collection. And since there is no common information schema shared by all the tools involved in the process, the collected metrics also have to be "normalized" and somehow compared in the context of specific projects.
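
A minimal sketch of such normalization, with two invented tool exports mapped into one common metric schema:

```python
# Minimal sketch (hypothetical tool exports): defect and test data reported by
# two different tools under different field names are "normalized" into one
# schema before they can be compared across projects.

bug_tracker_export = {"open_defects": 42, "project": "billing"}
test_tool_export = {"failed_cases": 7, "total_cases": 140, "proj": "billing"}

def normalize_bug_tracker(rec):
    return {"project": rec["project"], "metric": "open_defects",
            "value": rec["open_defects"]}

def normalize_test_tool(rec):
    return {"project": rec["proj"], "metric": "test_failure_rate",
            "value": rec["failed_cases"] / rec["total_cases"]}

normalized = [normalize_bug_tracker(bug_tracker_export),
              normalize_test_tool(test_tool_export)]

for m in normalized:
    print(f"{m['project']}: {m['metric']} = {m['value']}")
```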

Alignment requires tracing activities and outcomes throughout the process, from IT strategy down to the deployed modules. This degree of traceability is very difficult to achieve when resources and process activities are scattered across disparate tools and repositories. There are no standard tools that automatically identify, collect, manage, and use traceability information.

Discipline requires the deployment, adoption, and control of various shared processes for managing software production. This becomes much more complex when subprocesses run as "process islands" in a variety of process tools. There is no standard mechanism for the choreography of such subprocesses (according to a higher-level process) or for deploying process fragments to these tools. Nor is there a single terminology for describing processes in an environment of disparate tools: each uses its own language for "items", "artifacts", "projects", and so on. Another aspect of discipline concerns change management and impact analysis; these capabilities, however, require properly implemented end-to-end traceability. As discussed earlier, end-to-end traceability is much harder to achieve in a heterogeneous development environment.

To address these challenges, organizations that practice ALM often resort to developing numerous specialized point-to-point integrations that fill the technology gaps between the various development tools in use. Integrations like this are unreliable: they break when tools are upgraded or changed, and they are expensive to build and maintain. In addition, they lead to software processes that cannot easily be measured and controlled and that are inconvenient to manage. Obviously, this approach is unacceptable and unprofitable.

Therefore, most IT organizations present ALM vendors with major challenges. These organizations would like to get more out of ALM, namely significant improvements in the software production process that give them the stability and predictability they need. But ALM customers also want more than that:

The ability to use a mixture of work platforms in the most optimal way in terms of their business goals.

Free use of a variety of commercial and open source application development tools optimized for their desired deployment goals.

Free use of a variety of commercial or specialized software development processes that are optimized for the culture, project types, and underlying technology of the organization.


A new approach to ALM is needed to meet this complex set of requirements: an approach that enables customers to fully leverage the power of ALM in a heterogeneous IT environment. Borland recently announced its approach and product strategy, called Open ALM, which directly addresses this challenge. It is the only ALM solution designed from the outset to enable IT organizations to deliver software predictably, on their own terms.

Overcoming heterogeneity: ALM's final frontier

The Open ALM approach implements Borland's established vision and product strategy. It represents a significant architectural shift that is unique in the commercial ALM market. In fact, when fully implemented, the Borland Open ALM platform and associated applications could provide significant benefits even to customers who do not use any Borland application development tools at all. Borland undoubtedly views its tools business as vital; the company will continue to develop it and deliver best-in-class tools for the extended software development team. Borland's tools will gradually evolve to support the Open ALM strategy, allowing them to participate in the orchestration of Open ALM-based software production. However, the Borland tools could be replaced, if customers see the point, with any product that supports their development requirements, whether a third-party product or open source. This level of modularity and flexibility sets Borland's product strategy apart from other ALM vendors, many of whom are trying to take over the entire software supply chain.

Benefits of Open ALM

Open ALM delivers the functionality of ALM while providing an unmatched degree of flexibility at the process, tool, and platform levels. Specifically, Open ALM users get the following features.

The freedom to choose any combination of platforms and working environments in the context of one software project or for several different projects at once. In this case, the choice is made based on business priorities or suitability for the project.

The freedom to choose the best development tools for your chosen platforms based on economic considerations, increased productivity, and technical suitability.

The freedom to choose or design the development processes that best fit their projects and chosen platforms, as well as their organizational culture and time-to-market requirements.

For the first time, Open ALM and its supporting tools will provide IT organizations that are deploying heterogeneous application development environments with the following capabilities.

An excellent multidimensional and customizable view of the progress, quality and risk metrics of projects and portfolios to support project management and process improvement initiatives.

"The Holy Grail": full operational control and tracking of the life cycle. This will allow for real alignment of business goals and activities throughout the development process, provide a better link between end-user expectations and project outcomes, and provide better opportunities for project management through accurate and comprehensive impact analysis.

A new level of management of the software development process with the help of automated coordination of the actions of specialists and tools involved in the life cycle, based on processes.


These capabilities provide excellent team efficiency, support quality improvement initiatives, and ease the burden of complying with internal and external regulations. They will be delivered as a set of infrastructure-grade components and corporate ALM management tools. In addition, customers can also use Borland's best-in-class integrated tools for application development and project management. This will enable them to gain value in four main process areas.

Project portfolio management (PPM). Tools and automated processes to manage the development of the entire software strategy, as well as to manage the execution of a portfolio of software development projects.

Requirements Definition and Management (RDM). A set of tools and best practices that ensure that project requirements are accurate and complete, can be effectively tracked down to business objectives, and are optimally covered by software tests.

Lifecycle Quality Management (LQM). The procedure and tools for managing the definition and measurement of quality at all stages of software development. These tools are designed to detect and prevent quality problems early in a project when the cost of fixing them is relatively low. Also, quality control teams need to ensure that their tests are complete and based on end-user requirements.

Change Management (CM). Infrastructure and tools to help predict the impact of change. They also help manage resources and lifecycle change activities in both multi-node and single-node models.

Borland Open ALM Solution

As stated earlier, ALM's primary goal is to achieve a predictable and manageable software development process through automated measurability, alignment, and discipline. We have seen that each of these three dimensions of ALM becomes much harder to implement in a heterogeneous application development environment and therefore presents a number of specific challenges for ALM users. The Borland Open ALM platform architecture is a set of three solution areas, each designed specifically to address the challenges of one of the main ALM dimensions. Each Open ALM solution area is based on a highly modular and extensible infrastructure layer and includes a set of specialized applications. The goal of the infrastructure layer is to let the Open ALM platform work with any combination of commercial or open source development tools and processes, regardless of vendor or target environment technology. The diagram below shows a conceptual view of the Borland Open ALM solution.


Borland Open ALM Solution Architecture


Open Business Intelligence for ALM

Open Business Intelligence for ALM (OBI4ALM) is a standards-based infrastructure and set of applications for increasing the measurability of progress, quality of work, or any other specific metric of software projects in a heterogeneous application development environment. OBI4ALM provides the infrastructure for unobtrusive, distributed data collection and for metric correlation and analysis from any application development tool registered for that purpose. By extracting predefined metrics from data sources, the OBI4ALM infrastructure brings together disparate information scattered across the software development cycle. This consolidation offers great opportunities: for example, you can create an aggregated view of project metrics and define new project metrics that combine several lower-level metrics. The OBI4ALM infrastructure uses a data warehouse that contains current and historical information gathered from the tools involved in the various stages of the software development process, in a structure optimized for querying and analyzing data. OBI4ALM applications turn the collected metrics into information useful for decision making, providing decision support and early notification of problems.

Real-time dashboards: customizable views of key performance indicators (KPIs) that show trends over time.

Metric-based alerts: custom alerts that are triggered when certain conditions occur (for example, when a trend crosses a defined boundary; a sketch follows below). Alerts help improve management responsiveness to a variety of project issues: slow progress, poor quality, low productivity, or any other problem that can be quantified with metrics.

Decision-making tools: analytical tools that use historical information about a project (or several projects) to help make project management decisions.
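
A minimal sketch of a metric-based alert of the kind described above; the metric series and the trend rule are invented for illustration.

```python
# Minimal sketch (hypothetical thresholds): a metric-based alert fires when a
# trend crosses a defined boundary, e.g. the open-defect count rising for
# several consecutive builds.

open_defects_by_build = [12, 15, 19, 24]   # values collected by OBI4ALM-style tooling

def rising_trend_alert(series, window=3):
    """Alert when the metric has risen in each of the last `window` steps."""
    recent = series[-(window + 1):]
    rising = all(b > a for a, b in zip(recent, recent[1:]))
    return f"ALERT: metric rising for {window} consecutive builds" if rising else "ok"

print(rising_trend_alert(open_defects_by_build))
```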

Open Process Management for ALM

In the final analysis, the process is the most important concept permeating the entire software life cycle. A process is more than information structures shared by the tools that different roles use, and more than integration at the user interface level. A process has the real potential to coordinate the actions of the people and systems involved in creating software, while ensuring compliance with established policies and quality control of execution.

Open Process Management for ALM (OPM4ALM) provides infrastructure components and a set of applications used to model, deploy, and execute various software processes in a heterogeneous application development environment. OPM4ALM goes much further than providing guidance and assigning tasks to stakeholders. It also adds a layer of process automation that serves as the main glue integrating client tools, servers, and methodology according to the rules captured in the process models. From this point of view, the integration between application development tools is actually provided by the lower-level processes, and this becomes the fundamental basis for effective teamwork.

The OPM4ALM infrastructure is built around a distributed process engine. It provides modeling, customization, deployment, orchestration, and choreography of multiple software development processes in a heterogeneous development environment. An important part of the OPM4ALM infrastructure is the definition and management of process events; Open ALM tools can subscribe to these events and be notified when they occur. The process engine also provides flexible definition and evaluation of rules, which helps describe and enforce application development policies.
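
The subscribe-and-notify idea can be sketched roughly as follows; the event names and the reaction are hypothetical, and this is not the actual OPM4ALM API.

```python
# Minimal sketch (hypothetical event names): tools subscribe to process events
# published by a process engine and are notified when those events occur.

from collections import defaultdict

class ProcessEngine:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_name, handler):
        self._subscribers[event_name].append(handler)

    def publish(self, event_name, payload):
        for handler in self._subscribers[event_name]:
            handler(payload)

engine = ProcessEngine()
# a requirements tool reacts when a build fails in the CI tool
engine.subscribe("build.failed",
                 lambda e: print(f"flag requirements touched by change {e['change_id']}"))
engine.publish("build.failed", {"change_id": "CH-1042"})
```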


OPM4ALM applications deliver value from the process infrastructure layer. They provide the following capabilities.

Tools for modeling, tuning, customizing, and reusing processes. They enable efficient design of commercial or custom software processes using a feature-rich software process model.

Enterprise Process Console that provides a consolidated bird's eye view. This view contains all the software processes that are deployed in various projects with the participation of disparate development tools.

Process conformity assessment dashboard. It shows process deviations and their potential consequences, and provides reporting capabilities useful for implementing compliance initiatives.

Measurement and reporting based on specific metrics for each process.

Open Traceability for ALM

End-to-end traceability underpins many of ALM's important benefits: it is an essential tool for business-requirements-driven development, for requirements-driven development and testing, and for accurate change impact analysis. Open Traceability for ALM (OT4ALM) provides an infrastructure for creating and classifying links between the resources produced during software development, building a flexible graph of related resources regardless of which tools those resources live in. The technology also provides means for navigating this graph of resource relationships, for building optimized queries, and for extracting the data the graph contains.
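
A minimal sketch of such a traceability graph and an impact-analysis walk over it, with invented artifact names:

```python
# Minimal sketch (hypothetical artifacts): a traceability graph linking
# requirements, design, code, and tests across tools, with a simple
# impact-analysis query that walks the links from a changed requirement.

links = {
    "REQ-12": ["design/UC-3", "test/TC-77"],
    "design/UC-3": ["code/payment.py"],
    "code/payment.py": ["test/TC-78"],
}

def impacted_by(artifact, graph):
    """Return every artifact reachable from the given one."""
    seen, stack = set(), [artifact]
    while stack:
        current = stack.pop()
        for nxt in graph.get(current, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(impacted_by("REQ-12", links))
# e.g. {'design/UC-3', 'test/TC-77', 'code/payment.py', 'test/TC-78'} (order may vary)
```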

OT4ALM also provides applications that turn the collected traceability data into information for decision making.

Automated planning, analysis of the impact of changes, accurate cost and budget predictions.

Boundary control - early warning of deviations from defined boundaries (for example, resources that do not meet requirements) and unfulfilled requirements.

Reuse Analyzer - Allows you to reuse entire resource trees (from requirements and models to code and tests) instead of simply reusing code modules.

TraceView: interactive traceability viewers for various projects, which help find all the resources of a process and compare them with other resources.

Common platform infrastructure

The Open ALM infrastructure contains two components that are used across all areas of the solution.

ALM metamodel. A common language for describing software processes, the relationships between process resources (traceability), and units of measurement (metrics). The ALM metamodel provides a feature-rich conceptual model of the software development domain. Its purpose is to define a standard vocabulary that all Open ALM-compatible tools must understand; this shared understanding ensures effective communication within the Open ALM platform.

ALM integration layer. An extensible and embeddable integration engine and SDK. It defines the standard way for ALM tools to interoperate, collect ALM metrics, and build resource traceability graphs. To support and participate in the ALM platform, a tool must provide a platform plugin that conforms to the standard Open ALM API, or use a custom adapter that connects the tool to the rest of the application development environment through processes orchestrated by the Open ALM platform.
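
Since the actual Open ALM API is not spelled out in this text, the following is a purely illustrative sketch of the idea of a small, standard plugin contract that a tool would implement.

```python
# Minimal sketch: this is NOT the real Open ALM API, only an illustration of
# the idea that a tool joins the platform by implementing a small, standard
# plugin contract for metrics and traceability links.

from abc import ABC, abstractmethod

class AlmToolPlugin(ABC):
    """Hypothetical plugin contract a tool would implement."""

    @abstractmethod
    def collect_metrics(self) -> list[dict]:
        """Return metrics in the shared (metamodel-defined) vocabulary."""

    @abstractmethod
    def export_trace_links(self) -> list[tuple[str, str]]:
        """Return (from_artifact, to_artifact) traceability links."""

class BugTrackerPlugin(AlmToolPlugin):
    def collect_metrics(self):
        return [{"metric": "open_defects", "value": 42}]

    def export_trace_links(self):
        return [("defect/D-9", "code/payment.py")]

plugin = BugTrackerPlugin()
print(plugin.collect_metrics(), plugin.export_trace_links())
```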


The road to Open ALM

Over the next 24 months, Borland will continue to expand the infrastructure, applications, and tools that make up its Open ALM platform. Borland also intends to complement the product with a wide range of professional services programs designed to accelerate deployment and ensure the success of enterprise Open ALM roll-outs. Customers can take advantage of some of the benefits of Open ALM today. Organizations looking to improve quality and strengthen their change and project management will find Borland's current solution very attractive. It provides highly automated and integrated support for four important areas of the application development process:

Project Portfolio Management (PPM);

Requirements definition and management (RDM);

Lifecycle Quality Management (LQM);

Change Management (CM).

These solutions are delivered through tight integration between Borland products and third-party tools. This gives customers the flexibility they need and significantly increases their software project management capabilities today.

Why Borland?

Throughout its long history, Borland has consistently worked with its customers to help them create software in the way that suits them best. Borland is committed to standards-based development and cross-platform support, and has offered IT organizations the flexibility and freedom of choice they need. With the introduction of Open ALM, Borland is taking these traditional values to a whole new level, which clearly sets the company apart from other ALM vendors and from non-profit ALM initiatives.

For the largest ALM vendors, IBM Rational and Microsoft, serving the customer is hardly the top priority: both companies continually try to use their development tools to tie customers to their middleware and systems management platforms.

In contrast to this approach, Borland has always insisted on supporting the Java and J2EE standards while also offering strong, integrated support for Microsoft's platform, languages, and development tools. Borland continues to explicitly extend Microsoft's ALM offering and has invested heavily in supporting the latest Microsoft technologies. For example, CaliberRM, the first fully integrated requirements management solution for Team System, is recommended by Microsoft to extend the core requirements management functionality provided by the VSTS tools. Borland will continue to expand the interoperability of the Java and .NET platforms; additional capabilities are planned, such as UML-to-C# code generation and support for Microsoft Domain Specific Languages (Microsoft's proposed alternative to UML).


The move towards open source is also associated with the challenges that heterogeneity poses for ALM. Several Eclipse initiatives (the Application Lifecycle Framework (ALF), Corona, and the Eclipse Process Framework (EPF)) pursue goals similar to those of Borland Open ALM. While Borland understands the motivations behind these projects, the company considers their approach insufficient: both ALF and Corona attempt to provide only the infrastructure components of Open ALM, whereas Open ALM is a more holistic approach that also lets customers derive business value from the ready-made infrastructure through a suite of complementary applications. Borland goes further than other ALM vendors in its move towards Open ALM. The company has recently broadened its horizons and is looking at additional application development domains, and it is also searching for the best approach to supporting packaged application development projects on the SAP NetWeaver and Oracle Fusion platforms.

Conclusion

Borland is unique in helping ALM users create software on their own terms. The Open ALM approach and product strategy clearly set Borland apart from other ALM vendors and from open source initiatives. Borland is the only major ALM vendor to openly acknowledge the reality of IT heterogeneity, and the company strives to help ALM users leverage the tools, processes, and production environments they already have. Borland's approach to process-driven integration further separates the company from its competitors, allowing it to provide transparency, control, and order throughout the ALM strategy.

Borland is beginning to build the infrastructure, applications, and related development tools for Open ALM. As a result, customers will for the first time be able to use the capabilities of ALM in full, taking advantage of a completely cohesive, manageable, and measurable software development process.

ALM systems provide transparency and a clear understanding of the application development process, and present it as one of the business processes. However, analysts warn that ALM should not be viewed solely as a tool for monitoring and enforcing compliance: these systems are designed not so much for control as for automating the development process and integrating the various tools.

The biggest challenge in implementing ALM tools is people's misunderstanding of the development process. Management often assumes that ALM can be used to make everything run in a strictly predefined way, but it is impossible to plan everything in advance. Application development frequently requires several iterations at each step, releasing intermediate versions and gradually increasing the application's functionality. The ALM system should not restrict developers' actions but facilitate the process.

The IT industry loves to talk about the barriers between IT and business, but within the IT structure itself, there are many less visible barriers that can stand in the way of an unwary system integrator.

Consider, for example, one of the most controversial and hotly debated topics in IT today: the DevOps methodology and everything related to it. As a shorthand for all the activities involved in handing a developed application over to the IT service for live operation, the term sounds harmless enough. But by and large there is a wall of misunderstanding between the developers of enterprise applications and the specialists who manage the enterprise's IT infrastructure. Programmers often blame the operations side for a lack of flexibility, while those who run day-to-day IT operations blame developers for ignoring the constraints and requirements of the production infrastructure in which the applications they create must run.

This tension is driving a growing interest in Application Lifecycle Management (ALM) technology: a set of management tools designed to give programmers and IT staff a clearer picture of both the application being developed and the infrastructure in which it must run. The main idea is that facilitating collaboration between developers and IT professionals will make the entire enterprise information environment work more efficiently. The problem is that an ALM implementation has little chance of success when the two parties whose cooperation is essential to the project start shifting responsibility for the difficulties that arise onto each other.

For the ALM methodology to be implemented successfully, the system integrator must rise above the level of mutual accusations inside the IT department. According to Gina Poole, vice president of marketing for IBM Rational Software, this means finding a CIO or CFO who understands how much money the customer is losing through the lack of coordination between IT departments. Fixing bugs in an application late in a development project is extremely expensive; if the fix is needed because the developer's earlier assumptions about the environment in which the application would run turned out to be wrong, the cost of the entire project rises significantly, or the customer is forced to upgrade his infrastructure accordingly.

Of course, eliminating such contradictions in an organization's IT structure can take a significant amount of money. However, the ultimate goal of this work should be a set of management technologies that lets programmers and IT operations specialists stop getting in each other's way. The more time programmers spend negotiating cooperation with IT professionals, the less time they have for development itself. The more applications are created, the more capable the infrastructure will need to be, and that is, of course, good news for resellers.

Overall, the DevOps debate is definitely beneficial for resellers and integrators. The challenge is not to get drawn into the internal conflicts that arise from trying to run too many IT projects in parallel. If a customer does not accept the very concept of ALM, that is in fact a telling indicator of immaturity and weak competence in IT management, and it suggests the reseller is better off staying away: the chances are high that such a customer will bring far more problems than profit.

It is well known that many users (and, to be honest, some IT specialists), when talking about software development, mean primarily the creation and debugging of application code. There was a time when such a view was close enough to the truth, but modern application development consists not only, and not so much, of writing code as of other processes that both precede and follow programming. These are what we will discuss below.

Application Development Life Cycle: Dreams and Reality

It is no secret that many commercially successful products, both in Russia and abroad, were implemented using nothing but application development tools, and the data were often designed on paper. It is no exaggeration to say that of all the possible tools for creating software, only application development tools are genuinely popular in Russia (and in many European countries), followed at some distance by data design tools (this applies above all to projects with small budgets and tight deadlines). All project documentation, from the technical specification to the user manual, is created in text editors, and the fact that part of it serves as the programmer's source information merely means that he reads it. And this despite the fact that, on the one hand, requirements management tools, business process modeling tools, application testing tools, project management tools, and even tools for generating project documentation have been available for a long time, and, on the other hand, any project manager naturally wants to make life easier both for himself and for the rest of the team.

Why do many project managers distrust the tools that could automate much of their teams' work? In my opinion, there are several reasons. The first is that the tools used by a company very often integrate poorly with one another. Consider a typical example: Rational Rose is used for modeling, Delphi Professional for coding, and CA AllFusion Modeling Suite for data design; the integration tools for these products are either entirely absent for the given combination of versions, do not work correctly with the Russian language, or simply have not been purchased. As a result, the use-case diagrams and other models created with Rose become nothing more than pictures in the design documentation, and the data model mainly serves as a source of answers to questions like "Why is this field in that table at all?". Even such simple parts of the application as the Russian equivalents of database field names are written by project participants at least three times: once when documenting the data model or application, a second time when writing the user interface code, and a third time when creating the help file and user manual.

The second, no less serious reason for distrust of software lifecycle support tools is that, again because integration facilities are absent or poorly functional, it is often impossible to keep all parts of the project constantly synchronized: process models, data models, application code, and the database structure. It is clear that a project following the classic waterfall scheme (Fig. 1), in which requirements are formulated first, then modeling and design are performed, then development and, finally, implementation (this scheme and other project methodologies are described in the series of reviews by Lilia Hough published in our magazine), is more a dream than a reality: while the code is being written, the customer will have had time to change some of the processes or ask for additional functionality. The result of a project is often an application that is very far from what was described in the technical specification, plus a database that has little in common with the original model, and synchronizing all of this for documentation and handover to the customer becomes a rather laborious task.

The third reason why software lifecycle support tools are not used everywhere they could be useful is that the choice of such tools is extremely limited. Only two product lines are actively promoted on the Russian market: the IBM/Rational tools and the Computer Associates tools (mainly the AllFusion Modeling Suite line), and at present both are largely oriented toward particular kinds of modeling rather than toward a continuous process of synchronizing code, databases, and models.

There is one more reason, which belongs to the category of psychological factors: some developers make no effort at all to fully formalize and document their applications, because doing so would make them less indispensable and less valued, while anyone who has to make sense of such a developer's code after his departure, or who simply has to maintain his product, will feel utterly lost for a very long time. Such developers are by no means in the majority; nevertheless, I know of at least five company executives whose lives have been thoroughly poisoned by such former employees.

Of course, many project managers, especially on projects with small budgets and tight deadlines, would love to have a tool with which they could formulate the requirements for the software product being developed ... and receive, as the output, a ready-made distribution of a working application. This, of course, is only an ideal one can so far merely dream of. But if we come down from heaven to earth, we can formulate more specific wishes, namely:

1. Requirements management tools should simplify the creation of the application model and data model.

2. Based on these models, a significant part of the code should be generated (preferably not only client, but also server code).

3. A significant part of the documentation should be generated automatically, and in the language of the country for which this application is intended.

4. When application code is written, the corresponding changes should appear in the models automatically, and when the model changes, the corresponding code should be generated automatically.

5. Code written by hand should not disappear when the model changes (a small illustration of this idea appears just below this list).

6. The emergence of a new customer requirement should not cause serious problems associated with changes in models, code, database and documentation; in this case, all changes must be made synchronously.

7. Version control tools for all of the above should be convenient in terms of finding and tracking changes.

8. Finally, all this data (requirements, code, models, documentation) should be available to project participants to the extent that they need them to perform their duties - no more and no less.

In other words, the application development cycle should allow iterative, collaborative development without extra costs arising from changes in customer requirements or in the way those requirements are implemented.
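As a small illustration of wish 5 above (hand-written code surviving regeneration), here is a sketch of the "protected regions" technique commonly used by code generators. The marker format and all names are invented for this example and do not describe any particular vendor's implementation.

```java
import java.util.regex.*;

// A minimal sketch of round-trip code generation that preserves hand-written
// code between markers when a file is regenerated from the model.
class RegeneratingGenerator {
    private static final Pattern PROTECTED = Pattern.compile(
        "// BEGIN USER CODE\\n(.*?)// END USER CODE\\n", Pattern.DOTALL);

    // Regenerates a source file from the model, re-inserting the block that
    // the developer wrote by hand inside the protected region.
    static String regenerate(String oldSource, String classNameFromModel) {
        Matcher m = PROTECTED.matcher(oldSource);
        String userCode = m.find() ? m.group(1) : "";
        return "class " + classNameFromModel + " {\n"
             + "// BEGIN USER CODE\n" + userCode + "// END USER CODE\n"
             + "}\n";
    }

    public static void main(String[] args) {
        String old = "class Order {\n// BEGIN USER CODE\n    int total;\n// END USER CODE\n}\n";
        // The model renamed the class; the hand-written field survives regeneration.
        System.out.println(regenerate(old, "CustomerOrder"));
    }
}
```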

I will not claim that these wishes are impossible to satisfy with the IBM/Rational or CA tools: technologies evolve, new products appear, and what is impossible today may become available tomorrow. But, as practice shows, the integration of these tools with the most popular development tools is unfortunately still far from the ideal it might seem at first glance.

Borland Products from a Project Manager's Perspective

Borland is one of the world's best-known vendors of developer tools and has enjoyed the well-deserved affection of developers for twenty years. Until recently the company offered mainly a wide range of tools aimed directly at the authors of application code: Delphi, JBuilder, C++Builder, Kylix (we have written about all of these products many times in our magazine). However, a company's success in the market is largely determined by how well it follows the trends in that market's development and how well it understands the needs of those who consume its products (in this case, companies and departments specializing in application development).

That is why Borland's tools strategy now encompasses support for the full application lifecycle (Application Lifecycle Management, ALM), including requirements definition, design, development, testing, implementation, and maintenance. This is evidenced by last year's acquisition of several companies: BoldSoft MDE Aktiebolag (a leading provider of Model Driven Architecture application development technology), Starbase (a provider of configuration management tools for software projects), and TogetherSoft Corporation (a provider of software design solutions). Since those acquisitions a great deal of work has gone into integrating these products with one another, and as a result they already meet project managers' needs for organizing iterative collaborative development. Below we discuss what Borland currently offers to managers and other participants in software development projects (many of the products and integration technologies described below were presented by the company at developer conferences held in November in San Jose, Amsterdam, and Moscow).

Requirements management

Requirements management is one of the most important parts of the development process. Without formulated requirements it is, as a rule, practically impossible either to organize work on a project properly or to determine whether the customer really wanted exactly what was implemented.

According to analysts, at least 30% of a project's budget is spent on what is called application rework (and I personally think this figure is greatly understated). Moreover, more than 80% of this rework is associated with incorrectly or imprecisely formulated requirements, and fixing such defects is usually quite expensive. How much customers like to change requirements when the application is almost ready is probably familiar to every project manager ... It is for this reason that requirements management deserves the closest attention.

For requirements management, Borland's arsenal includes Borland CaliberRM, which is essentially a requirements management automation platform that provides change tracking (Fig. 2).

CaliberRM integrates with many development tools from Borland and other vendors (for example, Microsoft), going as far as embedding the list of requirements into the development environment and generating code fragments by dragging a requirement's icon into the code editor. You can also build your own solutions on top of it using the dedicated CaliberRM SDK.

Note that this product is used to manage requirements not only for software but for other products as well. For example, it has been applied successfully in the automotive industry to manage the requirements for various vehicle components (including Jaguar cars). In addition, according to Jon Harrison, product line manager for JBuilder, using CaliberRM in the creation of Borland JBuilderX greatly simplified that product's development.

And finally, a convenient requirements management tool greatly simplifies the creation of project documentation, not only in the early stages of the project but at all subsequent ones as well.

Designing applications and data

Design is an equally important part of creating an application and should be based on the stated requirements. The result of the design is the models used by programmers at the stage of code creation.

For application and data design, Borland offers Borland Together (Fig. 3), an application analysis and design platform that integrates with a variety of development tools from Borland and other vendors, including Microsoft. The product supports the modeling and design of applications and data; its integration with development tools has now reached the point where changes in the data model automatically lead to changes in the application code, and changes in the code are reflected back in the models (this technology for integrating modeling and development tools is called LiveSource).

Borland Together can serve as a means of linking requirements management and modeling tasks with development and testing tasks, and it helps you understand what the implementation of the product requirements should look like.

Generating application code

Writing application code is the area in which Borland has specialized throughout its more than 20 years of existence. Today Borland produces development tools for Windows, Linux, Solaris, Microsoft .NET, and a range of mobile platforms. We have written about this company's development tools many times and will not repeat ourselves here. We will only note that the latest versions of its development tools (Borland C#Builder, Borland C++BuilderX, Borland JBuilderX), as well as the soon-expected new version of one of the most popular development tools in our country, Borland Delphi 8 for the Microsoft .NET Framework, allow the Together modeling tools and the CaliberRM requirements management tools to be tightly integrated into their development environments. We will certainly cover Delphi 8 in a separate article in the next issue of our magazine.

Testing and optimization

Testing is an absolutely essential part of building quality software. It is at this stage that the application is checked against the formulated requirements and the corresponding changes are made to its code (and often to the model and the databases as well). The testing phase usually calls for tools to analyze and optimize application performance, and Borland Optimizeit Profiler serves this purpose. Today this product also integrates into the development environments of the latest Borland tools as well as into Microsoft Visual Studio .NET (Fig. 4).

Implementation

Software implementation is one of the most important components of a project's success. It should be carried out in such a way that, during trial operation of the product, changes can be made to it without serious costs and losses, and the number of users can be increased easily without compromising reliability. Since applications today are deployed in companies that use a variety of technologies and platforms and already run a number of existing applications, the ability to integrate the new application with legacy systems can be important during implementation. For this purpose, Borland offers a number of cross-platform integration technologies (for example, Borland Janeva, which allows .NET applications to be integrated with applications based on CORBA and J2EE).

Change management

Change management is performed at all stages of application creation. From Borland's perspective it is the most important part of the project: after all, changes can occur in requirements, in code, and in models. It is difficult to manage a project without tracking changes; the project manager must know exactly what is happening at the current stage and what has already been implemented, otherwise he risks not completing the project on time.

To solve this problem you can use Borland StarTeam (Fig. 5), a scalable software configuration management tool that stores all the necessary data in a centralized repository and streamlines the interaction between the employees responsible for different tasks. The product gives the project team a variety of facilities for publishing requirements, managing tasks, planning, discussing changes, version control, and organizing the workflow.

The product's distinguishing features include tight integration with other Borland products; support for distributed development teams collaborating over the Internet; several types of client interface (including a Web interface and a Windows interface); support for many platforms and operating systems; the StarTeam Software Development Kit (SDK), a set of application programming interfaces for building solutions on top of StarTeam; client- and server-side data protection; access to Merant PVCS Version Manager and Microsoft Visual SourceSafe repositories; integration with Microsoft Project; and tools for data visualization, reporting, and decision support.

Instead of a conclusion

What does the appearance on the Russian market of this set of products, from a well-known vendor whose development tools are used in a huge variety of projects, mean? At the very least it means that today we can obtain not just a set of tools for the various project participants, but an integrated platform for the entire development lifecycle, from requirements definition to implementation and maintenance (Fig. 6). At the same time, unlike competing product sets, this platform guarantees support for the most popular development tools and allows its components to be integrated into them to the point of full synchronization of code with models, requirements, and changes. And one hopes that project managers will breathe a sigh of relief, freeing themselves and their staff from many tedious and routine tasks ...

Looking at the development tools market over the past 10-15 years, one can see a general shift in emphasis from the technology of actually writing programs (which, from the early 90s on, was marked by the emergence of RAD tools for "rapid application development") toward the need for integrated management of the entire application lifecycle: ALM (Application Lifecycle Management).

As software projects grow more complex, the requirements for the efficiency of their execution rise sharply. This matters all the more today, when software developers are involved in almost every aspect of an enterprise's work and the number of such specialists keeps growing. At the same time, research in this area suggests that at least half of "in-house" software development projects fail to live up to expectations. Under these conditions, the task of optimizing the entire software creation process, covering all of its participants (designers, developers, testers, support staff, and managers), becomes especially pressing. Application Lifecycle Management (ALM) views the software release process as a continually repeating cycle of interrelated stages:

· Requirements definition;

· Design and analysis;

· Development;

· Testing;

· Deployment and operations.

Each of these stages must be carefully tracked and controlled. A properly organized ALM system allows you to:

· Reduce the time needed to bring products to market (developers only have to worry about whether their programs comply with the formulated requirements);

· Improve quality while ensuring that the application meets the needs and expectations of users;

· Increase labor productivity (developers get the opportunity to share best practices in development and implementation);

· Speed up development by integrating tools;

· Reduce maintenance costs by constantly maintaining consistency between the application and its project documentation;



· Get the most out of your investment in skills, processes, and technology.

Strictly speaking, the very concept of ALM is, of course, nothing fundamentally new: this understanding of the problems of software creation arose some forty years ago, at the dawn of industrial development methods. However, until relatively recently the main efforts to automate software development were aimed at creating tools for programming itself, as the most labor-intensive stage. Only in the 1980s, as software projects grew more complex, did the situation begin to change significantly, and the need to extend the functionality of development tools (in the broad sense of the term) grew sharply in two main directions: 1) automating all the other stages of the software lifecycle and 2) integrating the tools with one another.

Many companies worked on these tasks, but the undisputed leader here was Rational, which, for more than twenty years since its founding, specialized in automating software development processes. At one time it was among the pioneers of the widespread use of visual software design techniques (and is practically the author of UML, which has become the de facto standard in this area), and it created a comprehensive ALM methodology and a corresponding set of tools. One can say that by the beginning of this century Rational was the only company with a full range of products supporting ALM in its arsenal (from business design to maintenance), with the exception, however, of one class of tools: ordinary coding tools. In February 2003, however, it ceased to exist as an independent organization and became a division of the IBM corporation under the name IBM Rational.

Until recently, Rational was practically the only maker of an integrated suite of ALM-class development tools, although competing tools from other vendors have existed, and still exist, for individual stages of software development. However, a couple of years ago Borland Corporation, which has always had a strong position precisely in traditional application development tools (Delphi, JBuilder, etc.), publicly declared its intention to compete with it. These tools form the actual foundation of Borland's ALM suite, which has been expanded through the acquisition of other companies producing complementary products. This is the fundamental difference between the two companies' business models, and it opens up the potential for real competition. After Rational became part of IBM, Borland began positioning itself as today's only independent supplier of an integrated ALM platform (that is, one that does not promote its own operating systems, languages, and so on). In turn, competitors note that Borland has not yet formulated a clear ALM methodology to serve as the basis for unifying its existing tools.

Another major player in the development tools field is Microsoft. So far it has not set out to create its own ALM platform; progress in this direction is taking place only within the framework of cooperation with other suppliers, the same Rational and Borland (both became the first participants in the Visual Studio Industry Partner program). Meanwhile, Microsoft's core development tool, Visual Studio .NET, is constantly expanding its functionality through higher-level modeling and project management facilities, including integration with Microsoft Visio and Microsoft Project.

It should be noted that today almost all the leading developers of technologies and software products (besides those mentioned above, one could name Oracle, Computer Associates, and others) possess their own software creation technologies, built both in-house and through the purchase of products and technologies from small specialized companies. And although, like Microsoft, they do not yet plan to create their own ALM platform, the CASE tools these companies produce are widely used at particular stages of the software lifecycle.

 
