A Design Strategy for Achieving Wide-Area Interactive Teleradiology Consultation




Walt Scacchi
ATRIUM Laboratory
Information and Operations Management Dept.
University of Southern California.
213-740-4782, 213-740-8494 (fax)
(Scacchi@gilligan.usc.edu).
April 1996
(c) Copyright, Walt Scacchi 1996

Overview

Our objective is to iteratively and incrementally design, prototype, demonstrate, and evaluate a high-performance computing environment that supports interactive teamwork, communication, and problem solving in the domain of teleradiology consultation--the real-time location, transfer and visualization of complex radiological images, diagnostic interpretation, and treatment planning consultation by radiologists working at a distance from patients and attending physicians. The infrastructure that we must compose, integrate, and make useful poses both fundamental and technical challenges at each level of its multi-level design.

Overall, our goal is to add new capabilities and to integrate various software tools into the parallel and distributed computing environment and the ATM network infrastructure we now have in place. As such, we need a design strategy to organize and guide how these systems can interoperate. To complete such an undertaking, we have assembled a team of researchers interested in interactive teleradiology with backgrounds in Radiology, Medical Imaging and Visualization, Supercomputing/HPCC, and (Parallel) Software Engineering. With this team, we need to concurrently develop and integrate various parallel/distributed software technologies that support new modes of interactive teleradiology consultation in ways that are scalable, transparent, and easy to integrate.


Background

The USC Medical Complex is the world's largest academic medical center. It comprises a diverse group of medical institutions and patient populations, which makes it an ideal medical testbed. Furthermore, because of teaching and research obligations, we are eager to take the steps to ensure the necessary evolution of the Center, together with its community and industrial partners, into a leader in advanced teleradiology. Toward this end, USC has established the USC Advanced Biotechnical Consortium (USC-ABC) with Northrop Grumman, JPL, Pacific Bell, and others. Through contracts and awards, we have installed an ATM network linking the five USC Hospitals to Northrop Grumman, the meta-computing environment at JPL/CalTech, and the NSF-DARPA Gigabit network. Two ATM links to USC's main campus are also operational. Thus, USC-ABC has a testbed for telemedicine research in place.

Advanced telemedicine has frequently been cited by healthcare leaders as the most appropriate means to provide real-world solutions to the technological deficits now underlying the healthcare delivery system. The potential for advanced telemedicine to achieve quality enhancement, equity of access, and cost containment is most evident in medical disciplines which can extensively employ advanced computing and communications applications in their clinical implementations. The most obvious of these include medical imaging and digital radiological techniques such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Single Photon Emission Computed Tomography (SPECT), and Positron Emission Tomography (PET), along with on-site cyclotron facilities. In addition, massive multi-terabyte electronic image repositories for storing, querying, analyzing, and retrieving medical images from these radiological sources can be employed in disease detection, diagnosis, treatment planning, and follow-up. All of these techniques and associated facilities are now available at USC Hospitals and to the ABC. As such, our research objective is to iteratively and incrementally design, prototype, demonstrate, and evaluate a SOOS environment that supports the distributed development of a parallel software application environment in the domain of teleradiological consultation.

The Overview outlines a multi-level infrastructure that we at USC would like to realize in order to compose, integrate, and support interactive teleradiological consultations. Accordingly, different kinds of challenges are associated with the different levels of this infrastructure. Technological challenges can be identified at Levels 0 through 4. Thus, our overall challenge manifests itself in developing an approach for the concurrent development and integration of distributed and parallel software technologies in ways that are scalable, transparent, and easy to integrate.

Finally, in keeping with the spirit of modern software engineering practices, we need to follow an approach to system development that stresses a spiral approach for the iterative, incremental development of the complex software application system we seek to develop. The spiral approach is now the primary alternative to the traditional "waterfall" or sequential development phase methods which entail higher technical risks and generally assume that system requirements can be well articulated prior to system design and implementation. This is the basic strategy to follow in developing a high-performance application system that integrates various conventional, concurrent, and parallel software capabilities across a multi-level system infrastructure.

Using a Meta-Supercomputing Environment and ATM Network

We intend to use a high-performance meta-supercomputing facility and an ATM-based communications network to support a potentially very large volume of interactive radiological consultations in a scalable manner. There are two primary HPCC facilities that we will use to support this effort: one for computation, the other for communications. On the computation side, we will use the new meta-supercomputing facility now operational at JPL/CalTech (see the Virtual NASA Medical Center link listed below for an overview of how this facility might participate in a virtual medical center for NASA), which will be networked to the other project sites at Northrop-Grumman Corporation (NGC), the USC Health Sciences Campus (USC-HSC), the Advanced Biotechnical Consortium facility (USC-ABC), and the ATRIUM Laboratory in the USC School of Business Administration (USC-SBA). JPL currently provides computing support for massively parallel processing (MPP) via Cray T3D and Intel Paragon computers, which employ a Cray Y-MP as the front-end processor. These computers are thus configured to be used and managed following a logically shared programming model (LSPM). Thus, JPL has designed this supercomputing facility to act as a high-performance computation server. On the client side, each project team site already has Hewlett-Packard Unix workstations in place, and USC also has an IBM SP2 as a secondary supercomputer server.

On the communications side, the client-server computation environment just described will be networked together initially using an OC-3 class (155 Mbps) Asynchronous Transfer Mode (ATM) network provided to USC (HSC, ABC, SBA) and JPL as part of the grants program of CalREN, the California Research and Education Network, funded by Pacific Bell through the CalREN Foundation. ATM network services and switches are already in place at JPL and USC-ABC, and our other sites are scheduled to receive ATM network services and have switches installed during 1995. Thus, in this project, our HPCC testbed will use a high-speed ATM-based network to connect our workstation clients to the meta-supercomputer server.

We will need to use and specialize a network management program (e.g., HP OpenView) to manage our network. Such a network management program must collect statistical information on network utilization, network throughput, network configurations, network access, etc. We can use this information to measure our success on testbed network utilization, the performance and scalability of our network, support for real-time remote multimodality data access, network security, and Wide Area Network (WAN) library data access. Because the Management Information Base (MIB), the system information database used by SNMP, is extensible, we will extend the MIB to include functions that report statistics on system utilization, system performance, frequency of use and throughput of certain software, etc.

Each computer used in our testbed network will have an SNMP agent. The SNMP agent will report status and statistical information to the SNMP manager. Designated workstation servers will act as SNMP managers, and network managers will be able to invoke network management functions from any workstation on the network. Thus, we will have the capability to monitor and measure the overall performance and throughput of our networked HPCC testbed.
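
To make this concrete, the sketch below shows how an SNMP manager might poll a standard MIB object from a testbed agent; objects added under our extended MIB would be polled the same way. (Here and in the sketches that follow, we use Python purely as illustrative pseudocode; the pysnmp library, hostname, and interface index are assumptions for illustration, not project deliverables.)

    # Minimal sketch: an SNMP manager polling one MIB object from an agent.
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    def poll_if_in_octets(host, if_index=1, community='public'):
        """Fetch ifInOctets for one interface; extended-MIB objects for
        system utilization or per-tool throughput would be polled alike."""
        error_ind, error_status, _, var_binds = next(getCmd(
            SnmpEngine(),
            CommunityData(community),
            UdpTransportTarget((host, 161)),
            ContextData(),
            ObjectType(ObjectIdentity('IF-MIB', 'ifInOctets', if_index))))
        if error_ind or error_status:
            raise RuntimeError(error_ind or error_status.prettyPrint())
        return int(var_binds[0][1])

    # e.g., poll_if_in_octets('atm-gw.hsc.usc.edu')  # hypothetical host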

As such, our focus at this level is to make effective use of the HPCC testbed that has been assembled. The data sets that will be communicated and computationally manipulated on this testbed (e.g., CT, MRI, PET scans, and other radiological and pathological images) are large (multi-megabyte per image, and eventually multi-million images per repository). Health care providers at USC-HSC typically conduct 500,000 imaging procedures per year, and 5-10% of these (25,000-50,000 procedures) entail consultations among local or dispersed groups of radiologists and other medical specialists. Most of these consultations take only a few minutes once the necessary participants are at hand, but getting the parties together at the same time and location is a difficult and costly problem in work coordination. As medical imaging procedures transition into the digital domain, the possibility arises of rapid access to and distribution of radiographic images to the participants for consultation wherever they may be (as long as they are near a high-speed network data connection and a powerful workstation, as are now being installed throughout USC-HSC). Similarly, we soon expect to see the total data set storage volume reach the multi-terabyte range. (As part of another effort, a multi-terabyte storage server is being installed at USC-ABC.) Further, we also expect to see growth in the number of detection, diagnosis, and treatment planning procedures and consultations that will rely on the use of volumetric biomedical visualizations computationally derived from multiple source images (e.g., 3D anatomical volume visualizations computationally derived from a set of 2D CT scans). Thus, we believe that there is an emerging demand for interactive consultations using digital medical imagery that will need to rely on the kind of HPCC infrastructure that serves as our testbed.
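
A rough back-of-envelope calculation, sketched below, indicates the scale involved. The procedure counts and OC-3 line rate come from the discussion above; the per-image size and images-per-study figures are illustrative assumptions.

    # Back-of-envelope check of the consultation workload figures above.
    PROCEDURES_PER_YEAR = 500_000
    CONSULT_FRACTION = (0.05, 0.10)      # 5-10% entail consultations
    MB_PER_IMAGE = 8                     # assumed multi-megabyte image
    IMAGES_PER_STUDY = 60                # assumed, e.g., one CT series
    OC3_MBPS = 155                       # OC-3 line rate

    consults = [int(PROCEDURES_PER_YEAR * f) for f in CONSULT_FRACTION]
    study_mb = MB_PER_IMAGE * IMAGES_PER_STUDY
    transfer_s = study_mb * 8 / OC3_MBPS  # seconds at line rate, no overhead
    print(f"consultations/year: {consults[0]:,}-{consults[1]:,}")
    print(f"one study: {study_mb} MB, ~{transfer_s:.0f} s over OC-3")

Under these assumptions, a single uncompressed study needs roughly 25 seconds at full OC-3 line rate, which is one reason even a modest consultation load argues for the parallel repository and visualization servers described in the sections that follow.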

Using a Parallel ORDBMS

We seek to use advanced parallel object-relational database management system (ORDBMS) capabilities to support a digital library of medical images, processes and procedures, and patient records. An effective parallel database engine is of vital importance to the management of a sizable volume of medical images that must be integrated with treatment and patient care records. When accessing electronic treatment and patient records, polymorphism becomes an issue. For example, data at the USC-HSC facilities are currently accessed using a VMark Universe (Pick) database engine, while data at other metropolitan health care facilities associated with USC-HSC may be accessed with other types of relational database management systems (RDBMS). The USC-HSC HelpNet clinical information system (an "Intranet" application) manages information on the tracking and monitoring of physician/faculty activities, faculty supervision of research activities, and physician on-line interaction with a central database. HelpNet gathers patient care statistics for subsequent analysis, including those from standard assessment tools such as the SF-36. Currently, biomedical image data is stored in the DICOM format with extensions for the 3D visualization applications. To support biomedical visualization, a database engine should be used to manage the distribution and processing of system functions for the classification of tissues and pathology from 3D datasets and the subsequent 3D visualization of this information. Furthermore, to facilitate interactive medical consultation that must access and visualize medical image data sets, information must be represented and queried rapidly from within a large 3D dataset, with relations to relevant diagnostic and therapeutic information. As such, we believe that addressing this diversity of data management needs requires a unique database architecture provided by an ORDBMS that supports relations between textual patient records and object representations of abstract 3D data features. By object, we refer to the representation of both data and function within a single entity. With objects, the system can manage both the data and the functional agents which process data, using such object-oriented capabilities as persistence, inheritance, and message passing.

Based on these considerations, one possible database engine for use in this effort is the Informix Universal Server (formerly the Illustra OO-relational database management system) [I94,I95,S94], featuring the 3D Spatial DataBlade. This system supports relations to 3D graphical objects, a proprietary "R-TREE" mechanism to accelerate access to 3D spatial relations, and relational database integration of textual data, 3D spatial objects, and other complex types. Our use and application of the 3D Spatial DataBlade in this project requires further extensions, however, due to the fuzzy operator base of the morphological visualization tool, 3DFuzzyBlade (described later). The Universal Server ORDBMS needs to be interfaced to the Sybase replication server hosted on the Cray Y-MP processor at JPL. We believe such a parallel database management capability is needed to support concurrent, real-time interactive teleradiological consultations that may demand biomedical volume visualizations, as well as fast access and update of radiographic image sets stored in an image repository.
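
As a sketch of the kind of schema involved (illustrative only: SQLite is used here as a stand-in connection, the table layout is our invention, and the spatial predicate in the comment is hypothetical shorthand for the DataBlade's R-tree-accelerated operators, not its actual syntax):

    # Illustrative sketch of relating patient records to 3D image features
    # in an object-relational style.
    import sqlite3  # stand-in; a real deployment would target the ORDBMS

    SCHEMA = """
    CREATE TABLE patient  (mrn TEXT PRIMARY KEY, name TEXT, record TEXT);
    CREATE TABLE study    (study_id INTEGER PRIMARY KEY,
                           mrn TEXT REFERENCES patient,
                           modality TEXT, dicom_blob BLOB);
    CREATE TABLE feature3d(feature_id INTEGER PRIMARY KEY,
                           study_id INTEGER REFERENCES study,
                           label TEXT, bbox TEXT);
    """

    # Hypothetical spatial join as it might read with an R-tree DataBlade:
    #   SELECT p.name, f.label FROM patient p, study s, feature3d f
    #   WHERE p.mrn = s.mrn AND s.study_id = f.study_id
    #     AND Overlaps(f.bbox, Box3D(10,10,10, 40,40,40));

    conn = sqlite3.connect(':memory:')
    conn.executescript(SCHEMA)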

Tool Integration for HPCC and Parallel ORDBMS Environments

We need to develop techniques and mechanisms for integrating heterogeneous software systems across an HPCC and parallel ORDBMS environment (Level 2).

In order to provide an HPCC environment with parallel ORDBMS support to higher-level user interface and teamwork support systems, we face the recurring problem of how to integrate the different software components and applications (henceforth, tools) together in as transparent a manner as possible. Because we believe we should keep the software environment open to the inclusion of new software tools that are not yet available, we rule out the classic approach to tool integration that relies upon interfacing and interconnecting tool modules together using common procedure or function calls. We believe the classic approach to tool integration will yield neither the flexibility nor the scalability that we anticipate is needed to rapidly prototype, configure, and reconfigure the diverse set of software tools that we choose to use in this testbed.

Instead, the alternative strategies for software tool integration boil down to the use of (a) Unix-like pipes and scripting languages, (b) publish-and-subscribe message-passing schemes, and (c) persistent programming languages that access and manipulate schematized data stored within a repository [KS93]. Pipes and interpretive scripting languages are perhaps the most widely used tool integration technique when flexibility, loose coupling, and rapid development are central concerns. Pipes are supported by the underlying computer operating system (OS), while script interpreters are tools that can perform simple, local computations and invoke tools external to the interpreter, such as OS utilities. For example, the widely used WWW browser, Mosaic, can support the invocation of tools or OS scripts through a common gateway interface ("cgi-bin") written in PERL or C-Shell scripting languages. (Parallel execution scripting languages and interpreters probably exist, or soon will as well.) These WWW scripting languages in turn can navigate, query, or manipulate the content and presentation of data stored in repositories accessed over the global WWW. Thus, we could use pipe and tool scripting facilities such as these on our client (Unix) workstations to loosely couple higher-level tools to the underlying parallel ORDBMS and meta-supercomputer OS-level utilities and program execution services. Therefore, we have good reasons to follow the pipes and interpretive scripting languages approach to tool integration.
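
A minimal cgi-bin script in this style might look as follows (a Python stand-in for the PERL or C-Shell scripts named above; the lookup_images command-line tool is hypothetical):

    #!/usr/bin/env python3
    # Minimal cgi-bin sketch in the pipes-and-scripts style: the script
    # invokes an external OS tool and pipes its output back through the
    # WWW gateway. Tool name and query parameter are illustrative.
    import os
    import subprocess
    from urllib.parse import parse_qs

    query = parse_qs(os.environ.get('QUERY_STRING', ''))
    patient = query.get('mrn', ['unknown'])[0]

    # Invoke a (hypothetical) command-line image-lookup tool via a pipe.
    result = subprocess.run(['lookup_images', patient],
                            capture_output=True, text=True)

    print('Content-Type: text/plain\n')
    print(result.stdout)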

Publish-and-subscribe message-passing schemes support another approach to tool integration. The message-passing scheme has been most widely used commercially in the HP SoftBench and Broadcast Message Server products, and to a lesser extent in the SUN ToolTalk mechanism now offered as part of the "standard" Common Desktop Environment (CDE) offered by many Unix workstation vendors. However, these products, together with their counterparts in the software environments research community [cf. KS93], require the prior encapsulation of tools or tool modules before they become compatible with their respective messaging system or "software bus." Our observation of commercially available tools that have been so encapsulated is that, while the encapsulation facilities allow extensive message-based interfacing, most of these tools use no more than simple message-driven tool/module invocation and termination signaling, similar in kind to those described above.
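
The core of such a scheme can be sketched in a few lines (an illustrative in-process emulation, not the SoftBench or ToolTalk API; the message names are invented):

    # Minimal sketch of a publish-and-subscribe "software bus":
    # encapsulated tools register interest in message types and are
    # invoked when a matching message is broadcast.
    from collections import defaultdict

    class SoftwareBus:
        def __init__(self):
            self._subscribers = defaultdict(list)

        def subscribe(self, msg_type, handler):
            self._subscribers[msg_type].append(handler)

        def publish(self, msg_type, payload):
            for handler in self._subscribers[msg_type]:
                handler(payload)

    bus = SoftwareBus()
    bus.subscribe('image.loaded', lambda p: print('viewer: render', p))
    bus.subscribe('image.loaded', lambda p: print('logger: record', p))
    bus.publish('image.loaded', {'study_id': 42})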

Last, the use of persistent programming languages that manipulate schematized data represents one more approach to tool integration. The data query and manipulation language for the Universal Server ORDBMS, when embedded within C/C++ programs, provides a persistent parallel programming language capability. If needed, we should be able to build an ORDB that acts as a "virtual software bus" or message-passing system that encapsulates tools/modules as (large or very large) objects, so as to emulate a publish-and-subscribe tool integration mechanism. Thus, we can use a persistent programming language approach when we need to tightly couple higher-level tools to lower-level ORDB manipulation functions or services.
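
The following sketch suggests how a persistent store could emulate the software bus just described (illustrative only, again using SQLite as a stand-in for the ORDB): published messages become schematized rows that subscribing tools poll and mark consumed.

    # Sketch: the database itself acting as a "virtual software bus."
    import sqlite3

    db = sqlite3.connect('bus.db')
    db.execute("""CREATE TABLE IF NOT EXISTS messages
                  (id INTEGER PRIMARY KEY, msg_type TEXT, payload TEXT,
                   consumed INTEGER DEFAULT 0)""")

    def publish(msg_type, payload):
        db.execute("INSERT INTO messages (msg_type, payload) VALUES (?, ?)",
                   (msg_type, payload))
        db.commit()

    def consume(msg_type):
        rows = db.execute("SELECT id, payload FROM messages "
                          "WHERE msg_type = ? AND consumed = 0",
                          (msg_type,)).fetchall()
        db.executemany("UPDATE messages SET consumed = 1 WHERE id = ?",
                       [(r[0],) for r in rows])
        db.commit()
        return [r[1] for r in rows]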

Embedding Morphological, Volumetric, and other Visualization Techniques

We seek to enhance and integrate advanced morphological and volumetric visualization software capabilities into the Virtual Reality Markup Language [VRML95]. This will enable accessing, browsing, creating, or updating multi-media medical data stored and retrieved in a medical digital library hosted on an HPCC environment.

Our team members at NGC will enhance and integrate voxel-based 3D volumetric visualization (VV) together with unique USC-ABC morphological visualization (MV) tools applied in parallel to 3D biomedical imaging datasets. These advanced imaging capabilities can then be integrated with VRML to provide visualization and discrimination of internal human viscera, pathology, and vasculature in a manner that can be accessed over the WWW, using VRML-based web browsers. The MV software can visualize and discriminate tissues and pathology of the human body in a way that can recognize targeted entities in an automatic or semi-automatic parallel query manner. As part of our effort, we seek to develop a new capability, called 3DFuzzyBlade, that will integrate the combined 3D VV and MV software together with a parallel ORDB. This will then provide support for high-performance storage, management, and retrieval of 3D biomedical data sets. These visualization tools will also be integrated with the mature, off-the-shelf visualization libraries of Khoros, Advanced Visual Systems (AVS), and IBM Data Explorer (running on the USC IBM SP2) within the complete 3DFuzzyBlade ORDB package.
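
To suggest what VRML compatibility entails, the sketch below wraps a toy surface mesh as a minimal VRML 1.0 scene; a real implementation would emit surfaces produced by the VV/MV pipeline rather than the hard-coded tetrahedron used here for illustration.

    # Sketch: wrapping an extracted surface mesh as a VRML 1.0 scene so a
    # VRML-capable WWW browser can display it.
    def mesh_to_vrml(points, faces):
        pts = ",\n      ".join("%g %g %g" % p for p in points)
        idx = ",\n      ".join(", ".join(map(str, f)) + ", -1" for f in faces)
        return ("#VRML V1.0 ascii\n"
                "Separator {\n"
                "  Coordinate3 { point [\n      %s ] }\n"
                "  IndexedFaceSet { coordIndex [\n      %s ] }\n"
                "}\n" % (pts, idx))

    # A tetrahedron stands in for a surface produced by the VV/MV pipeline.
    points = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
    faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
    print(mesh_to_vrml(points, faces))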

The AVS package consists of a library of many separate modules that are connected to create a complete visualization application. Through previous efforts, the bulk of AVS modules and inter-module communication protocols have been modified to run entirely on the Cray Y-MP and remote (SUN) workstations. Previously, we have used Unix system calls to spawn the T3D AVS process from an AVS Y-MP module, and used Unix stream sockets to communicate small control messages between the T3D and the Y-MP. However, we believe we should make further modifications to the few AVS modules that would most dramatically exploit the parallelism provided by the T3D, thereby facilitating real-time interaction and visualization of 3D biomedical data sets.

To initiate an interface of the Y-MP-based AVS environment to the T3D volumetric visualization processes, a Y-MP-based AVS server process should run an AVS module and spawn the T3D client processes. To provide real-time interactivity between the Y-MP-based image display processes and the T3D-based volumetric rendering algorithms, all processes should run continuously and be controlled via message-passing protocols. Specifically, the Y-MP process communicates directly with a root T3D processing element (PE) by use of a Unix stream socket to support message passing of the interactive control commands. Memory control with barriers and shared memory broadcast then provides communication from the root PE to all of the other active PEs within the barrier partition involved in the volumetric rendering process. With this scheme, the volume data and volumetric visualization processes can be loaded throughout the partition of T3D PEs only once for each interactive session, hence minimizing communication overhead and permitting real-time interactivity. Since downloading and rebroadcasting the data and processes to the T3D PEs could require several seconds, this client-server implementation of the T3D visualization algorithm is essential to permit real-time interaction.
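
The control channel just described can be suggested in miniature as follows (an illustrative single-machine sketch; the command vocabulary is our assumption, with the server thread standing in for the root T3D PE and the client for the Y-MP AVS module):

    # Minimal sketch of the stream-socket control channel: a server
    # receives small interactive commands from a client and acknowledges.
    import socket
    import threading

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(('127.0.0.1', 0))   # ephemeral port; illustrative endpoint
    srv.listen(1)

    def root_pe():               # stands in for the root T3D PE
        conn, _ = srv.accept()
        with conn:
            cmd = conn.recv(64)  # small interactive control command
            # ...broadcast cmd to the other PEs in the partition, re-render...
            conn.sendall(b'ACK ' + cmd)

    threading.Thread(target=root_pe, daemon=True).start()
    cli = socket.create_connection(srv.getsockname())  # the Y-MP AVS side
    cli.sendall(b'ROTATE 15 0 0')
    print(cli.recv(64).decode())  # -> ACK ROTATE 15 0 0
    cli.close()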

The parallelization of the computational loop uses the logically shared memory programming model. Data is distributed across all processors. In the visual ray tracing algorithm, the input volume of data is used to render a 2D output plane of pixels. Each processor is allocated its share of output pixels and ray-traces through the entire 3D volume. The gradient shading program presents a more straightforward parallelization requirement. However, the memory requirement of two input volumes and an output volume, which serves as the tracer input, places greater demands on the T3D memory management. A distributed memory management scheme needs to be developed. In such a scheme, volume data that does not reside in a processor's memory is fetched using shared memory communications library routines (e.g., Cray's "shmem_get" routine). In addition, a virtual shared memory (VSM) protocol developed at NGC can be used to manage remote data fetches and make them infrequent and transparent to the application programmer. These routines have a latency of around one microsecond and are ideal for fetching small remote datasets.
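
The pixel allocation scheme can be sketched as follows (illustrative only: trace() is a stub for the actual ray integration, and the interleaved allocation is one plausible decomposition, not necessarily the one used on the T3D):

    # Sketch of the output-plane decomposition: each of N logical
    # processors is allocated a share of output pixels and ray-traces
    # through the whole volume.
    WIDTH, HEIGHT, NPES = 256, 256, 64

    def trace(x, y, volume):
        return 0.0  # stub: integrate samples along the ray through volume

    def render_share(pe, volume):
        image_part = {}
        # Interleaved allocation: pixel k goes to PE (k mod NPES).
        for k in range(pe, WIDTH * HEIGHT, NPES):
            x, y = k % WIDTH, k // WIDTH
            image_part[(x, y)] = trace(x, y, volume)
        return image_part

    # On the T3D each PE would run render_share(my_pe, volume) concurrently,
    # fetching non-local volume data via shmem_get-style routines.
    shares = [render_share(pe, volume=None) for pe in range(NPES)]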

Meanwhile, detailed timing performance analyses indicate that attention needs to be directed at improving the output of results from the T3D (via shared parallel I/O, sockets, and HiPPI) and at re-engineering the Unix-based architecture of the integrated AVS modules to distribute AVS modules over the Cray T3D, the Y-MP, and the remote display workstations. Overall, we find the results achieved so far in demonstrating advanced 3D biomedical data set visualization to be promising.

Using logically shared programming models, real-time visualizations that employ the USC-ABC MV software with the ABC VOlumetric Anatomy Repository (VOXAR) have been demonstrated locally at USC-HSC and shared with both distant domestic and international audiences through multi-user applications. Such a shared capability is essential for the utilization of MPP architectures to support distributed interactive consultations using volumetric and morphological visualizations derived from radiological images stored in massive image repositories. Nonetheless, our objective is to make this unique capability compatible with emerging VRML standards in order to facilitate, and reduce the cost of, distant interactive access to 3D biomedical image visualizations.

Multicasting Hypermedia User Interfaces for Process Enactment

Our sixth goal is to show how we will develop techniques and integrate mechanisms that provide a variety of modes for interfacing system users to each other, and to software services and database contents operating on an underlying HPCC environment. The primary effort in this area will be provided by our team members at JPL.

Given the preceding levels of computing, database, and software capabilities, the challenge here is how to provide a unifying user interface to them. The basic approach implied above is to employ a WWW browser as the guiding strategy for developing a low-cost, widely distributable user interface that can readily access networked servers and repositories. Our team members at JPL have begun experimenting with a new model and concept for how a university medical center such as USC-HSC might operate if based on the kind of health information infrastructure that we are suggesting in this proposal. Browsing the following WWW links will show a series of interfaces that seek to convey the navigational point-and-click style of user interaction to an underlying web of medical information repositories and imaging applications.


http://usc-abc1.hsc.usc.edu/           USC-ABC WWW Home page
http://corvette.jpl.nasa.gov/          Virtual NASA Medical Center at JPL
http://128.125.125.191:80/image.html   USC-ABC Medical Image Servers
                                          at JPL and USC-ABC

Beyond this, it is also apparent that if we are to support interactive medical consultations conducted between informal groups of radiologists and other medical specialists, then there is also a need to support their communication and information sharing needs. Accordingly, our team members at JPL have prototyped a WWW browser (based on Mosaic) that integrates multicast data stream object types, thereby providing the ability to embed video conferences and shared whiteboards as WWW nodes. These applications employ multicasting data transfer protocols that were originally developed for stand-alone applications operating on the multicast backbone ("MBone") network that runs over the Internet. Thus, selecting ("clicking on") a multicast WWW node enables a user to engage in a networked video conference, or in a shared data workspace where users can read/write annotations, as well as include and annotate images, as shown at the following WWW links:


http://corvette.jpl.nasa.gov/iii/database.html  JPL Videoconference access
http://128.125.125.191:80/videoconference.html  USC-ABC video access
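
At the protocol level, joining such a multicast session reduces to the standard IP group-membership idiom, sketched below (the group address and port are illustrative; real sessions would be advertised through MBone session directories):

    # Minimal sketch of joining an MBone-style multicast session, as a
    # WWW node selection might trigger.
    import socket
    import struct

    GROUP, PORT = '224.2.127.254', 9875   # illustrative multicast endpoint

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(('', PORT))
    mreq = struct.pack('4sl', socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    data, sender = sock.recvfrom(2048)    # blocks until one datagram arrives
    print(len(data), 'bytes from', sender)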

However, there is still much to be done to make such user interfaces effective for regular use in interactive teleradiological consultations.

Clearly, the biomedical visualizations supported by the preceding infrastructure levels are not yet integrated with such a user interface. Thus, this is a challenge that must be addressed. However, if we can successfully integrate the volumetric and morphological visualization capabilities described above in a VRML-compatible format, then our strategy becomes clear. Integrating VRML, the VV and MV software capabilities, 3D medical ORDBs, together with multicast applications that can all be accessed and navigated through a WWW-like browsing user interface is easier said than done. Furthermore, the availability of an integrated user interface capability that bundles access to lower-level facilities does not assure its relevance to potential end-users trying to work together at a distance in a complex problem domain. To us, this implies that, at a minimum, the user interface to our proposed interactive teleradiology teamwork support system should provide guidance for how to perform traditional or new consultation processes for detecting, diagnosing, treatment planning, and follow-up patient monitoring. In this regard, the user interface we envision must be "process aware", meaning that it is enabled to browse and enact either standard pre-planned processes and tool invocations on demand, or emergent processes that must respond to contingent user needs for information management, visualization, or communication. Therefore, the principal effort at this level centers around developing and evaluating alternative approaches and prototype implementations that integrate the capabilities at the lower levels into a smooth, facile, and responsive user interface package.

Engineering New Processes for Working Wisely at a Distance

Our seventh goal is to show how we will develop and engineer models of new processes supporting the performance of interactive teleradiological consultations by dispersed teams of medical specialists caring for patients located in different health care delivery facilities. The primary effort in this area will be provided by our team members at USC-SBA, supporting prospective end-users at USC-HSC. Further details on this approach to process engineering can be found at the following URL on the WWW:
http://cwis.usc.edu/dept/ATRIUM/Process_Life_Cycle.html

The USC-SBA team members in the ATRIUM laboratory have previously developed a knowledge-based computing environment, called the Articulator, for modeling, analyzing, simulating, and executing complex organizational processes [MS90,SM93]. It first became operational in 1988, and it has been used and evolved since. The ATRIUM research staff have been working with other research sponsors to prototype and demonstrate a small number of process-driven environments in different application domains that incorporate commercial off-the-shelf systems, internally developed systems, and prototype research mechanisms, all operating on Unix and PC workstations over both local- and wide-area networks. They are using these results to help redesign existing organizational processes in different commercial and government settings that employ teams of skilled workers, and to provide a coherent, scalable IT (information technology) infrastructure for enacting and integrating IT-based organizational processes. We propose to apply these same techniques for developing and evaluating the interactive teleradiology consultation processes needed to effectively use the lower-level information infrastructure described so far. A brief description follows.

Process Modeling and Simulation

The Articulator utilizes a rule-based, object-oriented knowledge representation scheme for modeling interrelated classes of organizational resources [MS90,MS95]. These classes in turn characterize the attributes, relations, and computational methods associated with a taxonomy of organizational resources. Thus, using the Articulator, we can construct or prototype knowledge-based models of complex organizational processes. In the figure below, we display a hierarchy of object classes, built using the Articulator, that characterize the underlying elements of complex business processes. In turn, associated with each of these classes is a schema of attributes, relations, and rule-based computational methods that describe the named processes.

Figure 1: Object class schema for "Task-Force" with attributes displayed

Process Knowledge Representation Components

The resource taxonomy represented in the Articulator, described elsewhere [MS90, MS95], serves as a `process meta-model' which provides an ontological framework and vocabulary for constructing `organizational process models' (OPMs). The process meta-model states that organizational processes can be modeled in terms of (subclasses of) agents that perform tasks using tools and systems which consume or produce resources. Agents, ITs, and tasks are resources, which means they can also be consumed or produced by other agents and tasks. For example, a project manager may produce staff through staffing and allocation tasks that consume departmental budgets, while these staff may then be assigned to other routine or creative production tasks using the provided resources (e.g., computer workstations, spreadsheet and desktop publishing packages, schedules, and salary) to construct the desired products or services (e.g., reports and documents). `Instances of OPMs' can then be created by binding values for corresponding real-world entities to the classes of entities employed in the OPM. Consider the following scenario:

Dr. Mary may be the attending ER physician who is responsible for diagnosing a set of radiological images, patient charts, and vital signs for a critically injured patient, and she seeks the advice of a consulting radiologist, Dr. Smith, and a trauma surgeon, Dr. Jones. However, Drs. Smith and Jones may be in other somewhat remote locations on the medical campus. Dr. Mary thus seeks to rapidly establish an interactive teleradiological consultation with Drs. Smith and Jones by first paging them, then being notified in return when they are respectively online at networked workstations near their present remote location. Upon establishing a multi-window networked conference session, each of the three physicians can communicate to each other through voice and video, as well as sharing one or more common windows displaying 2D radiological images, 3D volume visualizations, and other relevant patient data. Dr. Mary presents her initial diagnosis of the patient condition as well as highlighting areas of uncertainty or ambiguity in the diagnostic images. Drs. Smith and Jones together with Dr. Mary engage in a collective multi-media dialogue and inquiry in order to rapidly converge on an improved diagnosis and recommended plan for action. Their voice annotations, together with their written comments, are then linked to the specific radiological images, 3D data sets, the data set or visualization manipulation command history, and patient data to record their assessments, update the patient record, and account for each physician's participation in the consultation (for billing purposes).

In this scenario, Drs. Mary, Smith, and Jones are each agents playing specific roles in this teleradiological consultation process. While the overall consultation process is familiar to the experienced participants, getting the interactive multi-media conferencing session set up and running is probably a distraction from the critical care medical work; thus, computer-based support for instantiating the interactive teleradiological consultation process is a desirable value-added work support system capability. Similarly, distributing and archiving the 2D/3D radiological images, patient data, and physicians' annotations are cumbersome distractions from critical patient care, but a necessity for record keeping and billing procedures. Thus, an OPM for such interactive teleradiological consultations would represent typical agent roles, different types of diagnostic imagery and patient data resources, conferencing session system invocation and transaction logging (e.g., who enters what annotation on designated image-data linkages) command scripts, as well as probable consultation process step enaction sequences. On the other hand, the consultation process meta-model would be used to describe how this process might require more flexible enaction sequences to respond to diagnostic contingencies, or how the process might break down and be repaired as rapidly as possible.

Although space limits a more detailed presentation here, the agents, tasks, product resources, tools, and systems are all hierarchically decomposed into subclasses that inherit the characteristics of their (multiple) parent classes for economy in representation. Further, these resource classes and subclasses are interrelated in order to express relationships such as precedence among tasks (which may be sequential, iterative, conditional, optional, or concurrent), task/product pre- and post-conditions, authority relationships among agents in different roles, product compositions, IT tool/system aggregations, and others [MS90, MS95]. Thus, in using these classes of process modeling entities, we are naturally led to model organizational processes as a web of multiple interacting tasks that are collectively performed by a team of developers using an ensemble of tools that consume resources and produce composed products/artifacts. As a result, the principal modeling activity in this effort is to represent and interrelate the agent roles, products, teleradiological information infrastructure components (e.g., ORDBMS, 3D VV and MV mechanisms, multi-media computer conferencing session manager, and browsing user interfaces), and the probable plans for enacting the interactive teleradiological consultation process among dispersed physicians.
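
To suggest how the meta-model vocabulary binds to the scenario above, the following sketch renders a fragment of an OPM instance (an illustrative rendering in Python, not the Articulator's actual rule-based schema; class and instance names are our own):

    # Minimal sketch of the process meta-model vocabulary: agents perform
    # tasks, using tools, consuming and producing resources; task
    # precedence forms the process web.
    from dataclasses import dataclass, field

    @dataclass
    class Resource:
        name: str

    @dataclass
    class Agent(Resource):          # agents are themselves resources
        role: str = ''

    @dataclass
    class Task(Resource):
        performers: list = field(default_factory=list)
        tools: list = field(default_factory=list)
        consumes: list = field(default_factory=list)
        produces: list = field(default_factory=list)
        successors: list = field(default_factory=list)  # precedence

    mary = Agent('Dr. Mary', role='attending ER physician')
    smith = Agent('Dr. Smith', role='consulting radiologist')
    setup = Task('establish-conference', performers=[mary],
                 tools=[Resource('multicast session manager')])
    diagnose = Task('joint-diagnosis', performers=[mary, smith],
                    consumes=[Resource('CT image set')],
                    produces=[Resource('annotated diagnosis')])
    setup.successors.append(diagnose)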

Process Analysis and Report Generation

As the process meta-model provides the semantics for OPMs, we can construct computational functions that systematically analyze the consistency, completeness, traceability, and internal correctness of OPMs [MS90]. These functions represent batched or interactive queries to the knowledge base through its representational schemata. At present, we have defined a few dozen parameterized query functions that can retrieve information through navigational browsing, direct retrieval, or deductive inference, as well as what-if simulations of partial or complete OPMs [MS90]. Further, most of these analysis functions incorporate routines for generating different types of reports (e.g., raw, filtered, abstracted, or paraphrased into structured narrative) which can be viewed interactively or incorporated into desktop publication documents. In this effort, process analysis and report generation are needed to help debug and refine emerging teleradiological consultation process enaction plans, as well as to document their relative static and dynamic efficiency, for presentation and review.
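
One such parameterized query can be suggested as follows (illustrative only: it operates on the Task instances from the OPM sketch above and checks one simple completeness property, whereas the Articulator's analysis functions are rule-based and far richer):

    # Sketch of a completeness query over an OPM: every task should have
    # at least one performer and one tool/system bound, reported as a
    # simple filtered listing.
    def completeness_report(tasks):
        findings = []
        for t in tasks:
            if not t.performers:
                findings.append(f"{t.name}: no agent assigned")
            if not t.tools:
                findings.append(f"{t.name}: no tool/system bound")
        return findings or ['OPM complete with respect to this query']

    for line in completeness_report([setup, diagnose]):
        print(line)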

Process Execution and Guidance

Our current process-based interface (PBI) provides graphic visualization of task precedence structure on a role-specific basis for each user (i.e., agent instance) [MS92,NS96,S96]. For example, the figure that follows shows a visual rendering of a top-level business operations planning process that reveals precedence, iteration, and concurrency relationships among tasks. Process tasks can be hierarchically decomposed into subtasks of arbitrary depth. We can associate a development `status' value (i.e., none, allocated, ready, active, broken, blocked, stopped, finished) with each subtask step. For ease of understanding, these status values are represented in the PBI as colors, so that the current state of a process task can be observed as a color pattern in the directed graph. We also incorporate a facility for recording and replaying all changes in process task states--evolving process state histories can be maintained and visualized as an animation of changing task step status colors. We have found, for example, that project managers in industrial organizations can quickly browse such a PBI display to ascertain the current status of an arbitrarily complex production process at varying degrees of detail. References [MS92,S96] provide further examples. Nonetheless, the capabilities of the PBI lie outside of the multicasting hypermedia user interfaces described earlier. Therefore, the user interface system layer will be used to incorporate the relevant functionalities of the PBI, so that it will be able to download and interpret the process models exported by the Articulator for process execution. In so doing, the resulting user interface will enable the process-level integration of the Articulator environment.

Figure 2: A visualization of the structure of a Business Operations Planning Process
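
The status-to-color encoding and replay facility described above can be suggested as follows (the status vocabulary is from the text; the palette and replay format are illustrative):

    # Sketch of the PBI's status-to-color encoding and history replay.
    STATUS_COLORS = {
        'none': 'white', 'allocated': 'yellow', 'ready': 'cyan',
        'active': 'green', 'broken': 'red', 'blocked': 'orange',
        'stopped': 'gray', 'finished': 'blue',
    }

    def replay(history):
        """Animate a recorded process state history as color transitions."""
        for step, status in history:
            print(f"{step}: {status} -> {STATUS_COLORS[status]}")

    replay([('joint-diagnosis', 'ready'), ('joint-diagnosis', 'active'),
            ('joint-diagnosis', 'finished')])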

Using the Articulator's capabilities, we can focus our attention on the critical tasks of eliciting, capturing, modeling, analyzing, simulating, executing, and redesigning the processes of interactive radiological consultation, both as currently practiced (to establish a baseline) and as they might be practiced given the emerging information infrastructure we seek to assemble through this overall research project. This requires intensive interaction with domain experts (e.g., radiologists specializing in 3D radiographic interpretation) and prospective end-users (attending physicians), all of whom we expect to find currently participating at USC-HSC and its surrounding health care facilities. Prior experience in engineering processes of this kind suggests that we must anticipate 4-6 person-months of effort per year in order to articulate the range and diversity of processes that we expect to find in detection, diagnosis, treatment planning, and patient follow-up care and monitoring.

Development and Evaluation of Interactive Radiological System

Our eighth goal is to show how we will develop and evaluate an interactive teleradiological system that supports loosely-coupled groups of medical specialists engaged in the detection, diagnosis, treatment planning, and treatment follow-up and monitoring of patients using a complex web of teamwork process models, hypermedia user interfaces, volume visualizations, tool integrations, databases, and HPCC capabilities. The tasks this implies are addressed by the entire project team. Thus, completion of these tasks represents the overall completion of the project goals and objectives.

Our overall strategy here recognizes that we cannot expect to develop and evaluate the technical capabilities described for each of the preceding levels in a sequential, one-level-at-a-time manner. In following such an approach, we increase the likelihood of missing critical technical issues at one level that could have been better addressed at a lower level. As such, our strategy is to work concurrently at all levels in an incremental manner that iterates over the duration of the project. Assuming a three-year project calendar, we believe it is reasonable to target three development and evaluation cycles, or one per year. Thus, in the first year, our objective is primarily to successfully integrate the capabilities described at each level. In the second year, effort is focused on refining, streamlining, and redeveloping based on the successes and shortcomings of the first year's effort. In year three, effort is focused on further refinement, performance tuning, and overall system robustness that addresses and documents the ability of the system and its users to scale up to ever more realistic workloads and data processing volumes. Therefore, the challenges we face at this level represent the spiraling summation and integration of the ongoing stream of results and evaluations accumulated concurrently at each of the preceding levels.

Transparency, Integration, Open Systems, Open Health Care Delivery

Last, we seek to make each preceding level of the parallel/distributed software application environment infrastructure transparent to, and integrable with, the level above it, while still providing the capability to accommodate new computational tools and techniques at the lower levels.

Bibliography

[I94] Illustra Information Technologies, Inc., 3D Spatial DataBlade Guide, Illustra 3D Spatial DataBlade Release 1.2, October 1994.

[I95] Illustra Information Technologies, Inc., Illustra Developers' Kit: Architecture Guide, Illustra Developers' Kit Release 1.1, March 1995.

[KS93] Karrer, A. and W. Scacchi, Meta-Environments for Software Production, Intern. J. Software Engineering and Knowledge Engineering, Vol. 3(2), May 1993. Revised version appears at http://www.usc.edu/dept/ATRIUM/Papers/MetaCASE.ps

[K89] Kaske, W., Morphological Description in 3D Volumetric Visualization, Chapel Hill Workshop on Volume Visualization, University of North Carolina, Chapel Hill, NC, pp. 42-60, 1989.

[K90] Kaske, W., VOXAR 3D Curvilinear Feature Description of MR Angiographic Acquisitions for Graphic Presentations and Analysis, Soc. Magnetic Resonance in Medicine 9th Annual Conf., New York, Vol. 1, August 1990.

[K92] Kaske, W., Resolvable Analysis and Synthesis of Synthetic Aperture Radar Imagery, SPIE Proc. Robotics and Machine Vision Conf., Boston, MA, November 1992.

[K94] Kaske, W., Analysis and Visualization of 3D Data Sets with Fuzzy Operators, Neural and Fuzzy Systems, The Emerging Science of Intelligent Computing, Chapter 3, SPIE Press, 1994.

[KL89] Kaske, W. and G. Luxton, Enhancement by Morphological Erosion of Stereotactic Headring and CT Coordinate Alignment in Radiological Planning, Radiological Society of North America, Vol. 173, December 1989.

[MS90] Mi, P. and W. Scacchi, A Knowledge-Based Environment for Modeling and Simulating Software Engineering Processes, IEEE Trans. Knowledge and Data Engineering, Vol. 2(3):283-294, 1990.

[MS92] Mi, P. and W. Scacchi, Process Integration in CASE Environments, IEEE Software, Vol. 9(2):45-53, 1992.

[MS95] Mi, P. and W. Scacchi, A Meta-Model for Formulating Knowledge-Based Models of Software Development, Decision Support Systems, (to appear, 1995).

[SM93] Scacchi, W., and P. Mi, Modeling, Integrating, and Enacting Complex Organizational Processes, Proc. 5th. Intern. Symp. Intelligent Systems in Finance, Accounting, and Management, Stanford, CA, 1993.

[S94] Stonebraker, M., Object-Relational Database Systems, Illustra Information Technologies, Inc., October 1994.

[VRML95] The Virtual Reality Markup Language (VRML) Repository, San Diego Supercomputer Center, 1995. http://www.sdsc.edu/vrml/

[W95] Winograd, T., From Programming Environments to Environments for Designing, Communications ACM, Vol. 38(6):65-74, June 1995.