The emerging technology of architecture-based modeling, design and analysis provides a set of capabilities critical to the success of the Feasibility Analysis Modeling (FAM) concept for software systems. In this appendix we describe the key product and process technological components of the FAM that have their basis in architecture-level modeling, analysis, and design concepts. The concepts and components were developed by the SAMSA working group on FAM architecture-based modeling. The major driving concepts were:
As shown in Figure 1, the software under consideration can be decomposed into fragments which are represented using models (executable models, static specification models, etc.), actual code, or mixtures of both. Hence a FAM at any level must be composed of such product representations. At the lower levels of the hierarchy, as models are increasingly replaced by code, the parameter estimates in the models (e.g., for context-switching times or multiprocessor OS overhead) are replaced by actual performance values for the code. This also enables the models to be calibrated more accurately against actual code performance. And, as indicated in Figure 1, measurement is not only more accurate but also easier than prediction.
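For illustration, this replacement of model estimates by measured values can be sketched as follows (a minimal sketch in Python; the fragment and parameter names are hypothetical and not part of any FAM definition):

    # Hypothetical sketch: a FAM fragment whose modeled parameter estimates
    # are superseded by measurements once the fragment is realized as code.
    from dataclasses import dataclass, field

    @dataclass
    class Fragment:
        name: str
        estimated: dict = field(default_factory=dict)  # model-based estimates
        measured: dict = field(default_factory=dict)   # values measured on code

        def parameter(self, key: str) -> float:
            # Prefer a measurement when one exists for the corresponding code.
            return self.measured.get(key, self.estimated[key])

    ctx = Fragment("scheduler", estimated={"context_switch_us": 25.0})
    print(ctx.parameter("context_switch_us"))  # 25.0: model estimate
    ctx.measured["context_switch_us"] = 18.4   # actual code performance
    print(ctx.parameter("context_switch_us"))  # 18.4: measurement supersedes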
These risk issues provide the context for the second step of risk-driven exploration of decisions and the consequent elaboration of the FAM. The abstraction hierarchy of FAMs elaborated in the previous subsections is well suited to collaborative processes and risk-driven model refinement, as supported in the WinWin type of model-based collaboration support framework for concurrent software engineering.
The most difficult problem in the FAM approach formulated above is the availability of architecture-level parameterized design models that capture the different quality attributes and can be used to do goal-directed feasibility analysis. The strategy adopted by the working group is to first understand and formulate tractable models of individual quality attributes (e.g., performance) and evolve to more complex models for tradeoff analysis later. In particular, the strategy would involve investigating: (i) instance models, based on existing specific analysis applications that do performance analysis, reliability analysis, etc.; (ii) generic models, obtained via generalization of domain-specific analysis tools for single quality attributes; and (iii) relations between models, obtained from understanding relations between different quality dimensions based on further research.
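For illustration, an instance model in the sense of (i) might be as simple as a queueing approximation used as an architecture-level parameterized performance model. The following Python sketch is hypothetical; the rates and the 50 ms budget are illustrative, not drawn from any existing analysis application:

    # Hypothetical sketch of an "instance model" for one quality attribute:
    # an M/M/1 queueing approximation of a server component's response time.
    def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
        # Mean response time of an M/M/1 queue; requires utilization < 1.
        if arrival_rate >= service_rate:
            raise ValueError("saturated: the model does not apply")
        return 1.0 / (service_rate - arrival_rate)

    # Goal-directed feasibility check: does the design meet a 50 ms budget?
    budget_s = 0.050
    predicted = mm1_response_time(arrival_rate=80.0, service_rate=100.0)  # req/s
    print("feasible" if predicted <= budget_s else "infeasible")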
----------------------------------------------------------------------------
1. KBs for DoD domains (B,N=10)*
2. Principled analytic basis for trade-offs (B,N=10)
   - Analysis of Design Drivers (weight, cost, time, power, reliability,
     schedule, security)
3. Model Integration: mapping and consistency across models (B,N=10)
4. Software Lego set (D,N=8)
5. Reasoning over Partial/Incomplete & Evolving Specifications (B,N=7)
6. Separable Specifications & Support (D,N=7)
7. Credibility/Feasibility measures for Models & Implementations (B,N=7)
   - Rationale & Traces

*LEGEND:  B: "Big Win" Item   D: "Discontinuity" Item   N: Nr. of Votes Received

                    Figure 1. HIGH PRIORITY OBJECTIVES
----------------------------------------------------------------------------

An initial description of each of these items follows. These descriptions are elaborated on the basis of later Working Group discussions, and may be modified or further elaborated by members of the AB working group.
1. Knowledge Bases for DoD domains (B,N=10)
An extensive collection of information is necessary to conduct any feasibility analysis. That information must be captured in some useful way (as a Knowledge Base) in/for the AB/FAM capability. The information in the Knowledge Base is based on the experience of prior projects, and is therefore not specific to the project whose feasibility is being assessed. On the other hand, the working group believes that much of this knowledge will actually be domain-specific, and therefore it will be necessary to develop and evolve several related Knowledge Bases.
2. Principled analytic basis for trade-offs (B,N=10)
The design space for any software product is a complex one, filled with discontinuities and anomalies, depending on the interactions of requirements and constraints. In every case, however, there are dominant design drivers, and the hypothesis is that the number of these is relatively small. In spacecraft software, for example, weight, cost, time, power, and reliability are all critical factors. In most software development, schedule is critical, and security has recently become increasingly important. Nevertheless, a solid mathematical foundation for analyzing multi-factor tradeoffs is not available. Some heuristics have been developed that can be used until the mathematics improves.
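One such heuristic, sketched below in Python, is a normalized weighted-sum score over the dominant design drivers; the drivers, weights, and scores are purely illustrative, and the linear form deliberately ignores the discontinuities noted above:

    # Hypothetical sketch of a heuristic multi-factor trade-off score.
    # Each driver is normalized to [0, 1] (1 = best) and weighted; a linear
    # score is a crude stand-in for a principled analytic basis.
    def tradeoff_score(scores: dict, weights: dict) -> float:
        total = sum(weights.values())
        return sum(weights[d] * scores[d] for d in weights) / total

    weights  = {"cost": 3, "schedule": 3, "reliability": 2, "security": 2}
    design_a = {"cost": 0.8, "schedule": 0.6, "reliability": 0.9, "security": 0.5}
    design_b = {"cost": 0.5, "schedule": 0.9, "reliability": 0.7, "security": 0.8}
    for name, s in (("A", design_a), ("B", design_b)):
        print(name, round(tradeoff_score(s, weights), 3))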
3. Model Integration: mapping and consistency across models (B,N=10)
The basis of most analyses (whether formal or informal) is a set of models of a system; the models abstract away information of no use to an analysis and highlight information central to it. Architecture views (alternate representations of the architecture of a system) are, in this sense, models. In the course of defining the architecture of a system, multiple such models will normally be formulated. Models such as these are not necessarily consistent at all times. How to detect, evaluate, and address such inconsistencies is an important but immature science/art at this time. This is particularly true when the terms of different views are in some way truly incommensurate. Nevertheless, there is practical experience with techniques for dealing with (in)consistency among some kinds of models that could serve as the basis for early capabilities.
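A minimal sketch of one such early capability (the views and component names are hypothetical): flag connections asserted by one architectural view but absent from another, and leave their disposition to a human:

    # Hypothetical sketch: detecting possible inconsistency between two
    # architecture views, each recording the connections it asserts.
    structural = {("sensor", "filter"), ("filter", "display")}
    dataflow   = {("sensor", "filter"), ("filter", "logger")}

    # A connection asserted in one view but not the other may be an error,
    # or merely view-specific detail; either way it warrants evaluation.
    for edge in sorted(structural ^ dataflow):  # symmetric difference
        print("inconsistent or view-specific:", edge)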
4. Software lego set (D,N=8)
This colorful imagery refers to both a reusable library of software components, and a conventional means by which components can be composed together to form larger software (sub)systems. The components serve as the lego pieces. Although this was identified as a "discontinuity", a close approximation to this concept has already been achieved on such projects as CCPDS-R. The key in such projects is that a fixed, standard, component composition mechanism has been defined and implemented: this "middleware" defines the "shape" of the "pegs and holes" that are used to join the pieces together.
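The flavor of such a fixed composition mechanism can be sketched as follows; this Python sketch is hypothetical and is not a description of the CCPDS-R middleware itself. Every component exposes the same interface (the standard "pegs and holes"), so any pieces can be joined:

    # Hypothetical sketch: a fixed "pegs and holes" composition mechanism.
    # Every component implements the same interface, so composition is uniform.
    from typing import Protocol

    class Component(Protocol):
        def process(self, message: dict) -> dict: ...

    class Stamp:
        def process(self, message: dict) -> dict:
            return {**message, "stamped": True}

    class Route:
        def process(self, message: dict) -> dict:
            return {**message, "route": "ops"}

    def compose(*parts: Component) -> Component:
        class Pipeline:
            def process(self, message: dict) -> dict:
                for part in parts:
                    message = part.process(message)
                return message
        return Pipeline()

    print(compose(Stamp(), Route()).process({"id": 42}))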
The "Discontinuity" status arises if you wish to have generic legos, comprised of lego pieces that work even if you change the shape of the pegs and holes. This is (one part of) where the action is in current DSSA-based research and development. The separation of algorithmic components from "glue generation" capabilities is being actively explored on many fronts, for example in major efforts funded by ARPA, NIST, and the Air Force.
5. Reasoning over Partial/Incomplete & Evolving Specifications (B,N=7)
A closely related concept is that of multiple, incomplete and continually changing specifications of software components, architectures and systems. Informal specifications with these properties are the characteristic state of the practice now and in the foreseeable future. On the other hand, most formal specification techniques that offer a very rich and robust reasoning method are "complete" (or at least "closed" with respect to their domain of discourse and set of reasoning methods).
Recent work in domain-specific architecture definition languages has included work on incomplete and evolving formal specifications, and inter-relating them in order to reason about the system being described. This work needs to be extended greatly, and adapted to the requirements of an AB/FAM capability.
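The flavor of such reasoning can be sketched as follows (a hypothetical Python sketch, not any particular language's semantics): queries against a partial specification return three-valued answers instead of forcing the specification closed:

    # Hypothetical sketch: three-valued queries over a partial specification.
    # Properties not yet specified yield "unknown" rather than a closed answer.
    spec = {"max_latency_ms": 100, "encrypted": True}  # throughput not yet given

    def satisfies(spec: dict, prop: str, predicate) -> str:
        if prop not in spec:
            return "unknown"   # the specification is still incomplete here
        return "yes" if predicate(spec[prop]) else "no"

    print(satisfies(spec, "max_latency_ms", lambda v: v <= 150))  # yes
    print(satisfies(spec, "throughput",     lambda v: v >= 500))  # unknown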
6. Separable Specifications & Support (D,N=7)
In a manner similar to the multiple models that abstract different aspects of a system, there are different aspects of a system captured in its specifications and those of its constituent parts. The specialization of specifications is motivated by the same concerns (i.e., simplifying the reasoning/analysis process) that drive the introduction of separate models.
In some cases the correspondence of specifications to models is direct: the specifications are embedded in the models. In other cases, the specifications are separate from the models, and may focus on relatively separate elements or functions of the system. An example of the latter is the suite of languages developed and used by the ISI DSSA team to specify message-processing capabilities in command and control systems.
Sometimes the specifications are separate but not orthogonal; for example, the use of Ctrl-H specifications in the Honeywell GN&C DSSA technology is dependent upon the collateral use of Meta-H specifications. Although not orthogonal, the specifications are complementary.
General principles are lacking for defining notations so that the specifications can be separated profitably, and can be managed separately, yet can be reasoned about in conjunction. However, again, some heuristic, domain-specific experience exists and suggests that in at least some domains, early ad-hoc approaches would be effective.
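As a minimal illustration of separate-but-conjoint reasoning (all names and numbers hypothetical, and not a rendering of Meta-H or any other notation): an interface specification and a timing specification are managed separately but checked jointly:

    # Hypothetical sketch: two separately managed specifications of one
    # component, reasoned about in conjunction by a joint feasibility check.
    interface_spec = {"inputs": ["imu"], "outputs": ["attitude"]}
    timing_spec    = {"attitude": {"period_ms": 10, "wcet_ms": 12}}

    # Joint reasoning: every output with a timing contract must be schedulable.
    for out in interface_spec["outputs"]:
        t = timing_spec.get(out)
        if t and t["wcet_ms"] > t["period_ms"]:
            print(f"{out}: WCET {t['wcet_ms']} ms exceeds period {t['period_ms']} ms")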
7. Credibility/Feasibility measures for Models & Implementations (B,N=7)
Achieving credibility is perhaps the most difficult challenge in using any model of a system for analysis of that system. It is not easy to develop confidence that (i) the model portrays the system sufficiently accurately, and (ii) the model-based reasoning used in the analysis is sufficiently appropriate, so that (iii) the conclusions resulting from the analysis are trustworthy.
The principles of model validation are generally well understood, but the practice is especially complex and difficult to control properly in the case of large software systems. Credibility of models has the strongest standing when both the models and the implementations have evolved over time and the observed behavior of the implementations has thoroughly and repeatedly validated the model-based predictions. This is one reason that product-line development can be such a powerful paradigm: there is a long history of validation and demonstration of the models that are used in the design of new products in the product line.
The correspondence between a product model and the product system is describable informally -- in terms of rationale -- and formally -- in terms of predicted (model-generated) and actual (system-generated) computation traces. The latter requires test harnesses, instrumentation, and other model-specific techniques. The investment to produce these is very high, but once they exist for a product line, the cost of adapting them for different products from the line is not so high (another reason for product-line development). A systematic way to reduce the cost of developing these support technologies is needed.
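The formal side of this correspondence can be sketched as follows (a hypothetical Python sketch; the event names, times, and tolerance are illustrative):

    # Hypothetical sketch: validating a model by comparing its predicted event
    # trace against the trace recorded by an instrumented implementation.
    predicted = [("dispatch", 0.0), ("compute", 2.0), ("reply", 7.5)]  # (event, ms)
    actual    = [("dispatch", 0.1), ("compute", 2.4), ("reply", 8.9)]

    def trace_matches(pred, act, tol_ms=1.0):
        if [e for e, _ in pred] != [e for e, _ in act]:
            return False  # event ordering must agree exactly
        return all(abs(tp - ta) <= tol_ms
                   for (_, tp), (_, ta) in zip(pred, act))

    print(trace_matches(predicted, actual))  # False: "reply" drifts by 1.4 ms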
If one views AB/FAM as a case-based expert system, credibility may be based on the rationale for the advice proffered. That is, consider a trace or listing of the conditions and relationships among those conditions that lead to conclusions that in turn lead to the advice: this trace provides a means for the human to assess the credibility of the models and the reasoning that leads to the advice, and therefore the credibility of the advice itself.
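A minimal sketch of such a rationale trace (the facts and rules below are hypothetical):

    # Hypothetical sketch: advice accompanied by the trace of fired conditions,
    # so a human can assess the credibility of the reasoning behind it.
    facts = {"team_size": 4, "reused_fraction": 0.2, "schedule_months": 6}
    trace, advice = [], []

    if facts["reused_fraction"] < 0.5:
        trace.append("reused_fraction < 0.5 -> mostly new development")
    if facts["schedule_months"] < 9 and trace:
        trace.append("schedule < 9 months AND mostly new development -> high risk")
        advice.append("Flag schedule feasibility as a major risk item.")

    for line in trace:
        print("because:", line)
    for item in advice:
        print("advice :", item)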
For completeness, the items from the brainstorm list that fell below the top-tier line are listed in Figure 2.
----------------------------------------------------------------------------
- Visualization of complex problem spaces & optimize libraries
- Technical support for Requirements gallop
- Empirical Model Validation
- Constantly monitoring project feasibility - SPO plugged in
- Dynamic non-intrusive instrumentation (probes)
- Mathematically formal semantic integration lang.
- Collaboration Environments - sharing and importation of knowledge into models
- automatic model formulation from KB's
- data mining, rule induction, and other methods for semi-automatic domain engineering
- constraint relaxation, propagation, intervals - enriched ways of handling
- Modeling social environment of system
- Build to Price analysis
- Endusers write mega-programs (add knowledge)
- a software guru (human/automated)
- Relationship of models to purposes/questions
- Systems Engineering may be dysfunctional in the sw-first world
- Domain Specificity

                      Figure 2. Lower Tier Objectives
----------------------------------------------------------------------------
The following are some observations providing elaborations and refinements of points already made. They represent issue discussions during and between the workshops rather than consensus within the working group.
During the SAMSA 1 working group discussions, Ed Feigenbaum described the importance of very rapid model evolution to achieving useful expert systems. The key idea was that model-based knowledge acquisition could be accelerated and migrated into an expert system if the model embedded in the expert system could be changed in minutes rather than weeks.
This concept suggested that a very hard problem for the proposed AB/FAM, the difficulty of achieving credibility early in the life cycle, might be addressable. This idea formed the basis for a discussion on the following proposition: the feasibility analysis workbench capability, AB/FAM, should be able to be rapidly improved as it is being used to address the feasibility of a particular system.
The following strawman requirements for the AB/FAM were articulated:
The AB/FAM needs to provide support for
These requirements come from a presumed continual process that would be ongoing throughout the life of a product line. In this process, AB/FAM would be used to provide model-based heuristic advice to the analyst, and would be responsive to changes in the architecture, design and implementation of the products in the product line. Its use would result in continual validation of the models as development experience accumulates. Consequently, over time, the credibility of AB/FAM would presumably be increasing; certainly it would be increasingly clear what level of confidence could actually be assigned to AB/FAM's advice.
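One crude way to picture this accumulating credibility (a hypothetical sketch, not a proposed AB/FAM mechanism) is a running ledger of how often the workbench's predictions were later borne out:

    # Hypothetical sketch: tracking credibility as predictions are validated
    # or refuted by accumulated development experience.
    class CredibilityLedger:
        def __init__(self):
            self.validated = 0
            self.refuted = 0

        def record(self, prediction_held: bool):
            if prediction_held:
                self.validated += 1
            else:
                self.refuted += 1

        def confidence(self) -> float:
            # Laplace-smoothed fraction of validated predictions.
            return (self.validated + 1) / (self.validated + self.refuted + 2)

    ledger = CredibilityLedger()
    for outcome in (True, True, False, True):
        ledger.record(outcome)
    print(round(ledger.confidence(), 2))  # 0.67 after 3 of 4 predictions held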
However, AB/FAM would not ever supplant the human evaluation responsibility. Ultimately a human must make a subjective assessment: do you believe the implementation can be attempted within these constraints -- including cost and schedule?
Some further discussions between the workshops touched on these questions:
The question was posed: under what conditions could one TODAY expect to be able to predict the cost and schedule of some major activity? Can we use such examples as useful analogies or metaphors for AB/FAM?
Example: treatment of and recovery from a heart attack. Modulo certain (critical) contingencies, a first cut estimate of cost and schedule for treatment and recovery can be provided today; it would probably vary considerably across different geographic regions, but be relatively accurate within a given region. The contingencies are critical off-nominal conditions; they determine alternate cost/schedule profiles (which may not be very predictable). The contingencies would be undiscovered (or not yet considered) properties underlying the heart attack or its treatment, such as:
Using this example as an analogy suggests that
Other analogies that were mentioned included
Of these, the last seemed to some people to be the most closely related to a near term version of AB/FAM. Such an AB/FAM might be used to warn of potential problems based on incomplete information early in the product definition phase. This use of AB/FAM would focus on the identification of contingencies, even if the implications of such contingencies were not yet well understood. (I.e., credibility may be high should the AB/FAM indicate that there IS a problem, even if the impact of the problem is not yet credibly predictable.)
The knowledge base would pertain to architecture based development of software systems. Considerations would consequently include:
Kirstie Bellman pointed out that the development of interval logics has reached a level of maturity that is probably sufficient for use in a workbench such as AB/FAM. A number of efforts have developed useful mechanisms for supporting tradeoff sensitivity analyses, although much remains to be done in this area, especially in integrating symbolic and parametric models. Finally, work is beginning that addresses the impact of architectures on cost estimation.
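The flavor of interval-based sensitivity analysis can be sketched as follows (a hypothetical cost model; the bounds and coefficient are illustrative only):

    # Hypothetical sketch: propagating parameter intervals through a simple
    # cost model to bound the sensitivity of the resulting estimate.
    def add(a, b):
        return (a[0] + b[0], a[1] + b[1])

    def scale(a, k):
        return (a[0] * k, a[1] * k)

    size_ksloc  = (90.0, 130.0)   # uncertain size estimate (KSLOC)
    cost_per_k  = 12.0            # staff-months per KSLOC (illustrative)
    integration = (50.0, 110.0)   # uncertain integration effort (staff-months)

    low, high = add(scale(size_ksloc, cost_per_k), integration)
    print(f"effort in [{low:.0f}, {high:.0f}] staff-months")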
A draft list of high priority implications: The AB/FAM effort should focus on: