Paper Number: 1
Title: Development of Object Oriented Frameworks
Author: Steve Hayes
Email: ''

Development of Object Oriented Frameworks


I have 12 years' experience in developing technical solutions for business problems. I have worked in petroleum retailing and point-of-sale systems, but most of my work has been in the finance industry. This has included developing trading and risk-management systems for wholesale banks and sales support systems for retail banks.

My involvement with object-oriented technology began in about 1990. I developed my first Smalltalk system in 1992 and have been involved in Smalltalk since then. Since 1994 I have been involved in the development and delivery of a framework for retail banking applications. This has involved designing and implementing some framework components, performing one-man prototyping exercises as pre-sales demonstration of the framework and long-term work with customers who have purchased the framework.

This experience has given me a good perspective on the framework from both the production and consumption points of view.

Project Description

The framework which I have been involved with since 1994 is "Visual Banker" from Footprint Software Inc (now a subsidiary of IBM). Visual Banker is delivered as a set of class libraries compatible with the Digitalk Visual Smalltalk Enterprise development environment.

Visual Banker really provides two distinct frameworks. The first framework is the "system architecture". This provides business-independent extensions to the base Digitalk development environment. The system architecture extends event and exception handling, provides a way to define attribute and interface specifications for objects, and supplies automated links between objects and the user interface elements which display them, as well as persistence, formatting and localisation.
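The "automated links" between objects and the user interface elements that display them can be illustrated with a minimal sketch. Python stands in for Smalltalk here, and all class and method names are invented for illustration, not the framework's actual interface:

```python
class Attribute:
    """An observable value holder: widgets register callbacks and are
    notified whenever the underlying business value changes."""

    def __init__(self, value=None):
        self._value = value
        self._observers = []

    def on_change(self, callback):
        self._observers.append(callback)

    def get(self):
        return self._value

    def set(self, value):
        self._value = value
        for notify in self._observers:
            notify(value)

# A hypothetical text field that stays in sync with the attribute.
class TextField:
    def __init__(self, attribute):
        self.text = str(attribute.get())
        attribute.on_change(lambda v: setattr(self, "text", str(v)))

balance = Attribute(100)
field = TextField(balance)
balance.set(250)
print(field.text)  # the widget reflects the new value: "250"
```

The point of such a mechanism is that application code updates only the business object; every interested interface element follows automatically.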

The second framework is the "business domain". This provides object definitions for business concepts in the finance industry. While some concepts (such as name and address) are common to many other business areas, the framework consistently takes a finance industry view of these concepts. One of the main concerns of the business domain is the representation of the products offered by a financial institution and the relationships between these products and the institution's customers.

Over time a number of the services offered by the system architecture have been subsumed by the underlying Digitalk development environment. This trend is likely to continue, particularly as Visual Banker is ported to other development environments. The business domain has gone through a number of iterations, sometimes oscillating between solutions to a particular problem. The remainder of the position paper focuses on the forces behind this oscillation and how they relate to framework development in general.


A framework represents a solution to a problem. The framework's mission is to solve this problem. The framework is unlikely to have strong cohesion unless the problem has been clearly defined. This isn't just true for frameworks. We could say the same thing about most software artifacts.

The issue is more compelling for frameworks because a successful framework requires a large commitment from its consumers. A framework is often a central part of a much larger solution developed by the consumer. "Shrinkwrap" software products don't usually require the same degree of commitment from the consumer. Framework consumers will also be using the framework in different contexts. Some frameworks will be used in contexts that were never imagined by the framework developers.

Each context will expose different limitations of the framework, or reveal different avenues for extensions. It will often be impossible to incorporate all the suggested changes in a cohesive way. The framework developers need to have clear criteria to decide what is consistent with the framework's mission and what is not. Otherwise a framework can be a victim of its own success.

While the detailed requirements of a framework may change, the "mission" provides a stable core.

A framework's mission may change over time. Changes to the mission should be considered carefully so that the interests of existing consumers are protected. Frameworks may also spawn new frameworks to handle missions that were not in the context of the original framework. What we need to avoid is inadvertent changes to the framework mission.

A small team may not need a formal definition of the framework mission. They may simply share a vision of what is right and what is not. As the team grows it will need to make some effort to preserve the shared vision. The team may grow by simply adding new members, or it may "grow" as existing members move to new assignments and are replaced.

The Visual Banker business domain regressed because there was no clear mission statement for the framework. Only a small number of people worked on the framework development at any one time, but there was 100% turnover at least three times. Each time the "new" team interpreted the existing framework in light of their perception of the "mission" and the current political climate. It was very difficult to iteratively improve the framework in this environment.

Communicating complexity

Frameworks are often complex and contain abstractions that are difficult to relate to the original "real-world" problems that the framework was intended to solve. These characteristics seem fairly natural in this context. Simple problems are usually solved without a framework. Complex problems often demand a framework, but it is difficult to make the complexity "go away". Generalised abstractions help us to manage this complexity, but may be difficult to explain to consumers.

This was certainly the case with the Visual Banker business domain. For example, any object which had a value and an associated series of cashflows was subclassed from the abstract object Net Worth Affecter. This meant we could treat physical assets, financial assets and even a person's job in the same way. Even this simple example was sometimes difficult to explain to business people who were only experienced in some subset of the problem domain. They didn't see any advantage to this level of generalisation and felt that it obscured the underlying problem.
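The Net Worth Affecter idea can be sketched roughly as follows. Python stands in for the original Smalltalk, and the class and method names are my own illustration, not the framework's actual interface:

```python
from abc import ABC, abstractmethod

class NetWorthAffecter(ABC):
    """Anything with a current value and an associated series of cashflows."""

    @abstractmethod
    def value(self):
        ...

    @abstractmethod
    def cashflows(self):
        """Return a list of (period, amount) pairs."""
        ...

    def net_worth_contribution(self):
        # Generic calculation shared by every subclass.
        return self.value() + sum(amount for _, amount in self.cashflows())

class PhysicalAsset(NetWorthAffecter):
    def __init__(self, value, upkeep_per_year):
        self._value, self._upkeep = value, upkeep_per_year

    def value(self):
        return self._value

    def cashflows(self):
        return [(1, -self._upkeep)]  # upkeep is a negative cashflow

class Job(NetWorthAffecter):
    """Even a person's job fits the abstraction: no resale value, pure income."""

    def __init__(self, salary):
        self._salary = salary

    def value(self):
        return 0

    def cashflows(self):
        return [(1, self._salary)]

holdings = [PhysicalAsset(20000, 500), Job(45000)]
print(sum(h.net_worth_contribution() for h in holdings))  # 64500
```

The generalisation buys uniform treatment of very different things, which is exactly what made it hard to motivate to people who only ever dealt with one of them.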

The mistake that we made at this stage was not keeping simple things simple. A framework must make it easier to solve complex problems, but not at the expense of making it harder to solve simple problems.

The intention was to provide two levels of framework. The "upper" layer would provide objects that were directly traceable to common concepts in the problem domain. The "lower" layer would provide access to the more complex and general facilities of the framework. The upper layer would actually be implemented by using services of the lower layer, but this would be transparent to framework consumers.
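The intended layering might look something like this minimal Python sketch (all names hypothetical): the upper layer exposes a vocabulary directly traceable to the problem domain while delegating to the more general lower layer, which stays transparent to the consumer.

```python
# Lower layer: general, flexible, harder to learn.
class GeneralLedgerEngine:
    def __init__(self):
        self.entries = []

    def post(self, account, amount, currency, value_date):
        self.entries.append((account, amount, currency, value_date))

# Upper layer: directly traceable to the problem domain; a thin facade
# over the lower layer so its generality stays hidden from consumers.
class SavingsAccount:
    def __init__(self, engine, number):
        self._engine, self._number = engine, number

    def deposit(self, amount):
        # Sensible defaults hide the lower layer's full parameter set.
        self._engine.post(self._number, amount, "USD", "today")

engine = GeneralLedgerEngine()
account = SavingsAccount(engine, "S-001")
account.deposit(150)
print(engine.entries)  # [('S-001', 150, 'USD', 'today')]
```

Consumers with simple needs stay in the upper layer; those pushing the framework hard can drop down a level without losing anything.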

These intentions were not realised because of turnover in the development team and the lack of a well-defined mission. The lower layer was about 75% implemented when the team turned over, but a number of important and difficult areas had not been fleshed out. The "new" team didn't share the vision of a layered framework and perceived the existing implementation as "too complex" for the framework's customers. They reimplemented the framework in simple terms. The "new" framework was clearly easier to understand but achieved this by sacrificing flexibility.

Clearly the "old" team and the "new" team had different ideas about the complexity of the problems that the framework needed to solve. The "new" team had very little experience in using earlier versions of the framework to solve real customer problems. Field work had already shown flaws in the framework, but these were not communicated to the development team.

This is a crucial point. Framework developers must make special efforts to get close to their customers and find out what features of the framework are used, what needs to be extended and what can be discarded. One approach to this is to provide a first class support organisation. This encourages consumers to report issues with the framework because it may bring some immediate benefit to the consumer, perhaps as a workaround or an unreleased extension. The most important information is likely to come from consumers who are pushing the framework hard. These will also be the most demanding people to support.

If the framework does not have a respected support organisation then the consumers will probably still solve the same problems, but the framework developers won't find out about the problems or the solutions. In this situation the framework developers should be prepared to send people out to work with consumers to gather real-world experience.

Frameworks are the confluence of a range of forces both internal and external to the framework provider. These forces change over time. A good framework provider can preserve the integrity of the framework while it is being improved because these forces are recorded in corporate memory rather than individual memory.

Documenting the architecture and achieving usability

It's easy for framework documentation to focus on the "how" rather than the "why". The documentation should focus on the nature of the problem being solved. Often this can be explained using a pattern description. The pattern description should be supported by some concrete examples which are instantiations of the pattern. The problem description should be followed by an explanation of how the problem can be solved using the framework. This can be done at both the pattern level and the concrete example level.

The examples used to illustrate the use of the framework should also illustrate the advantages of applying the framework to the problem domain. They should include simple examples to introduce the framework. Since simple problems are probably easy to solve without the framework they may not demonstrate the advantages of the framework. In this case a range of examples is required.

For frameworks which are intended to be used through composition this may be sufficient. Frameworks which are used through subclassing and similar forms of extension need something more. The framework designers will have already considered a number of possible extensions to the framework. This will have highlighted the "hot spots" in the framework - the places where most of the flexibility is achieved and where consumers are most likely to extend the framework. It may also have highlighted "black spots" in the framework - the places where it's tempting to think of making an extension, but where it is very dangerous. It may be dangerous because the framework relies on some particular behaviour, or because the ramifications of the changes are more subtle than they appear at first glance. The framework developers should document the black spots as well as the hot spots.

The final thing that should be included in the framework documentation is a record of design alternatives which were considered but rejected. This is particularly important if the known limitations of the framework arise from these decisions.

If the consumer doesn't have any information on black spots and rejected design decisions they may spend a lot of time and effort rediscovering these things for themselves. While the framework provider certainly has a responsibility to deliver a framework that works as advertised and encapsulates "best practices" they can increase the value of the framework by providing insight into the problem domain and approaches which don't work. Framework providers are really knowledge providers. The framework provider lets the consumer "act smarter". One medium for transferring knowledge is a software artifact, but it shouldn't be the only medium.

Justifying boundaries when testing for correctness and completeness

Framework documentation based on patterns can form the basis for automated testing of the framework and its extensions. Within each abstract pattern there will be a number of roles. Each role defines a set of contracts that must be met before the framework will function correctly. When the framework is extended by the consumer, the extended objects will be intended to act in one or more of the roles in the patterns. A test harness can validate that the extended objects still satisfy the contracts for the intended roles.
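A test harness of this kind can be sketched very simply. In this hypothetical Python example a role's contract is reduced to a list of required methods, standing in for the richer behavioural contracts a real harness would check:

```python
# Each role names the contracts (here, just required methods) that an
# object must satisfy before the framework can use it in that role.
ROLES = {
    "observer": ["update"],
    "subject": ["attach", "detach", "notify"],
}

def satisfies_role(obj, role):
    """Check that obj honours every contract of the given role."""
    return all(callable(getattr(obj, name, None)) for name in ROLES[role])

# A consumer extension intended to play the "subject" role:
class StockTicker:
    def attach(self, observer): ...
    def detach(self, observer): ...
    def notify(self): ...

# A consumer extension that forgot part of the "observer" role:
class BrokenDisplay:
    pass  # no update() method

print(satisfies_role(StockTicker(), "subject"))    # True
print(satisfies_role(BrokenDisplay(), "observer"))  # False
```

Running such checks over every consumer extension catches role violations before the framework fails at run time in some obscure way.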

Design effective economic models

Any framework should be designed to reduce overall development time for the framework consumer. The up-front monetary cost of the framework will be offset against the time saved. The time saved may result in direct monetary savings or it may produce intangible savings such as reduced time to market. In circumstances where the consumer needs a "jump-start" they might even be willing to pay more for a framework than it would cost to develop it themselves.

This opportunity is usually offset by corporate purchasing procedures and a reluctance to consider the "fully-loaded" cost of software development.

My experience is that once a development project has started many managers find it easier to develop a "utility" (small framework) within the development team than to request extra "capital expenditure" for a third-party product. This can be true even if the third-party product is inexpensive.

The other factor discouraging framework purchase is a tendency to compare the initial internal development costs against the full costs of the framework. This fails to include the testing, documentation and support requirements of the internal development. These may easily double the internal costs. Mature frameworks have also been widely tested. As a result they should be of higher quality than an internally developed equivalent. The higher the quality of the framework the greater the discrepancy between the perceived cost of reproducing the framework and the actual costs.
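The arithmetic behind this argument can be made concrete; the figures below are purely illustrative, not drawn from any actual project:

```python
# Illustrative numbers only: compare the framework's full price against
# the *fully loaded* cost of internal development, not just coding time.
internal_coding = 100_000   # perceived cost: coding alone
overhead_factor = 2.0       # testing, documentation and support may
                            # easily double the internal cost
framework_price = 120_000

internal_full_cost = internal_coding * overhead_factor
print(internal_full_cost)                    # 200000.0
print(framework_price < internal_full_cost)  # True once hidden costs count
```

The naive comparison (120,000 against 100,000) rejects the framework; the fully loaded comparison reverses the decision.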

Framework providers need to have good information about the benefits accrued from using the framework, just like vendors of integrated development environments and 4GLs. They need to emphasise the quality of the framework and the support which is provided. They can benefit from highlighting subtle issues which were considered during framework development that might be missed during internal development (at the peril of the consumer).

Above all the framework vendor must make it clear that the framework solves a real problem. If the consumer doesn't perceive a problem then they won't perceive a requirement for the framework.

I haven't referred to an economic model. This is because I don't believe most development shops appreciate the complexity of the problems they are solving. Until they perceive they have a problem that can be solved no economic model is useful. Some consumers (using the term loosely) wouldn't accept a framework even if it was free. This problem needs to be addressed first.

Paper Number: 2
Title: "An application framework for building dynamically configurable transaction systems"
Author: Bedir Tekinerdogan
Address: University of Twente
Dept. of Computer Science
Software Engineering
P.O. Box 217 7500 AE
Enschede, The Netherlands
Phone: +31 53 4893715
Fax: +31 53 4893503

The WWW address for the paper is:

For more details concerning my biography please check my cv on the internet:

FRAMEWORK: Design and implementation of an object-oriented framework for building user-interfaces.

WHERE: AT&T, Huizen, The Netherlands PERIOD: Nov. 1992 - March 1993.

DESCRIPTION Practical assignment. In this assignment I designed an object-oriented user-interface framework for ASCII terminals on a UNIX System V platform. This was needed for the automation of different applications, such as system management and failure registration. For the design I used the Object-Oriented Design methodology of Booch. With the framework, different applications were built and used in the department.


FRAMEWORK Atomic Transaction Framework

WHERE Dept. of Computer Science, TRESE project, Univ. of Twente, The Netherlands PERIOD Mar. 1993 - Mar. 1994

DESCRIPTION Graduation assignment within the TRESE group. Within this assignment I developed an object-oriented framework for atomic transactions in a dynamic and configurable environment. The framework contains classification hierarchies of different transaction semantics, such as concurrency control, recovery techniques, transaction management, deadlock resolvers, etc. Instantiations of the framework are able to switch between several transaction semantics in order to achieve optimal performance. With the transaction framework, more than 10 transaction systems were built and tested.
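Switching between transaction semantics at run time suggests a strategy-style design. The following Python sketch (the original is Smalltalk-80, and these class names are invented for illustration) shows the idea of pluggable concurrency-control semantics:

```python
# Each transaction semantic (concurrency control, recovery, deadlock
# resolution, ...) is a pluggable object, so an instantiated transaction
# system can switch semantics at run time for better performance.
class TwoPhaseLocking:
    def acquire(self, txn, item):
        return f"{txn} locks {item} (2PL)"

class OptimisticControl:
    def acquire(self, txn, item):
        return f"{txn} reads {item} optimistically, validates at commit"

class TransactionManager:
    def __init__(self, concurrency):
        self.concurrency = concurrency  # current semantic

    def switch_semantics(self, concurrency):
        self.concurrency = concurrency  # reconfigure dynamically

    def access(self, txn, item):
        return self.concurrency.acquire(txn, item)

tm = TransactionManager(TwoPhaseLocking())
print(tm.access("T1", "x"))   # T1 locks x (2PL)
tm.switch_semantics(OptimisticControl())
print(tm.access("T2", "x"))   # T2 reads x optimistically, validates at commit
```

The classification hierarchies the paper mentions would sit above these leaf classes, grouping alternative semantics by family.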

IMPLEMENTATION The framework was implemented in Smalltalk-80 and is still evolving.

FRAMEWORK Design and implementation of an Intelligent Tutoring System Shell for imperative programming languages

WHERE Dept. of Computer Science/ Dept. of Education, Univ. of Twente, The Netherlands PERIOD March 1994 - Sept. 1995

DESCRIPTION Research project. Design and implementation of an Intelligent Tutoring System Shell for imperative programming languages. This research addresses the lack of modular composability of the existing Intelligent Tutoring Systems (ITSs). In the developed shell system several ITSs for teaching different programming languages can be built with adaptable semantics and highly modular composable components. The architecture for the ITS shell resulted in the development of several frameworks, including a framework for instruction modeling, student modeling and a framework for imperative programming plans.

IMPLEMENTATION ParcPlace VisualWorks/Smalltalk

Paper Number: 3
Title: Framework Experience
Author: Scott W. Woyak
Address: EDS
26533 Evergreen
MS 1201
Southfield, MI 48076 USA



As a member of EDS R&D from 1985-95, projects included the EDS/OWL object-oriented programming environment; an expert system for monitoring VTAM; an interface for specifying database integrity constraints; the PicMan user interface framework; the Remote Systems Administration tool for configuring and monitoring distributed Unix workstations; the massive databases project; and the analysis and customization of large-scale information systems models.

Currently, I'm in EDS People Systems Division tasked with establishing an object reuse environment for the division, which includes reusable frameworks. This includes developing reusable frameworks and assisting with the use of these frameworks on two different application projects.

Organized a workshop on "Object-Oriented Programming in AI" at the IJCAI-89, AAAI-90, and AAAI-91 conferences, which resulted in a track of the same title in the IEEE Expert journal (1990-91) and a special issue of the International Journal of Human-Computer Studies (July/August 1994). Presented papers on EDS/OWL at OOPSLA '90 workshop on "Object-Oriented Program Development Environments", Tools for Artificial Intelligence '90, and IJCAI-89 workshop on "Object-Oriented Programming in AI". Reviewed papers for various conferences, including OOPSLA '96.

Framework Experience

Within EDS Research and Development, I was the leader of a two person team that developed a framework, called PicMan (Pictorial Manipulation Toolkit), for building graphical user interface programs that employ a direct manipulation style of interaction, such as drag-and-drop. This framework extends the InterViews C++ X windows class library with customizable and reusable classes providing high-level interaction events (e.g., WasDraggedOnto), dragging feedback, maintaining graphical connections, handles, common editing tools, and supporting infrastructure. PicMan was then used in three other R&D projects: a network resource configuration editor, a relational table query interface, and a statistics-based query interface to very large databases.
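A high-level interaction event such as WasDraggedOnto can be sketched as follows. This is a hypothetical Python illustration of the idea only, not PicMan's actual C++/InterViews interface:

```python
# Translate low-level mouse events into a high-level interaction event.
class Glyph:
    def __init__(self, name, bounds):
        self.name, self.bounds = name, bounds  # bounds = (x, y, w, h)

    def contains(self, x, y):
        bx, by, bw, bh = self.bounds
        return bx <= x < bx + bw and by <= y < by + bh

    def was_dragged_onto(self, other):
        # Subclasses override this to react to a drop on themselves.
        return f"{other.name} dropped on {self.name}"

class Canvas:
    def __init__(self, glyphs):
        self.glyphs = glyphs

    def end_drag(self, dragged, x, y):
        # A raw mouse release becomes a WasDraggedOnto event on the target.
        for glyph in self.glyphs:
            if glyph is not dragged and glyph.contains(x, y):
                return glyph.was_dragged_onto(dragged)
        return None

trash = Glyph("trash", (0, 0, 10, 10))
doc = Glyph("doc", (50, 50, 10, 10))
canvas = Canvas([trash, doc])
print(canvas.end_drag(doc, 5, 5))  # doc dropped on trash
```

Application classes then deal only in domain-meaningful events, never in raw mouse coordinates.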

Two of the applications of the PicMan framework were built by the framework developers (ourselves), and we were available to assist the other application team with understanding and using the framework for their project. This eliminated many of the usual problems regarding training and applying the framework, and demonstrated the value of mentors assisting new framework users.

The broad applicability of the framework functionality contributed greatly to its success. We were able to explore a large set of potential uses during design, which led to increased generality and abstraction. This increased understanding of the problem space enabled us to identify appropriate separations of classes that increase the reusability. Many of these design choices can now be seen as examples of the more recently published design patterns.

Even with the many generalities and abstractions in the initial design, subsequent applications of the framework provided a number of ideas for improving and refining the framework, which would result in some design changes. Unfortunately, we did not take the time to revise the framework, which could have provided some valuable insight into issues and efforts required for retrofitting existing applications to new framework versions, as well as the potential for measuring (even if qualitatively) the shared maintenance costs due to framework usage.

Another problem we experienced over time was the tight coupling to another framework, i.e., InterViews. As InterViews came out with a new major release (3.0), we were too closely tied to the previous release to warrant upgrading to the new version. This is an interesting framework issue of which developers should be more aware, and for which some general design patterns are needed.

More recently, I have joined another group within EDS that is supporting and developing business information systems. We are in the process of identifying and mining frameworks out of an application project that is still under development. Currently, we are focusing on what might be called application infrastructure frameworks. As opposed to large general application frameworks that structure (and standardize) the overall look and operation of an application, these are components that support a functional aspect of the application, though largely tied to a predefined application architecture. Some of these functional frameworks are: communication with a mainframe via sockets; transaction processing between the client and the server; and user-based security on the client.

In this project we are experiencing the value of designing the frameworks before proceeding too far with the application and the corresponding cost to retrofit the application to match the improved and generalized framework interface. We expect to measure, both qualitatively and quantitatively, the effort avoidance due to reuse of shared components. Initial perceptions are that the additional costs to develop and refine reusable frameworks will not achieve significant effort avoidance on any given single subsequent project, but will over multiple projects. Moreover, we expect even greater benefits in reduced effort to maintain and evolve the shared frameworks. Along with shared costs go a number of related benefits, such as application consistency, quality, as well as many people benefits.

Along with the reusable frameworks, we are capturing guidelines, how-to information, standard development processes, and heuristics that will be reusable on subsequent projects. These become a developers' guide for building applications that includes, and goes beyond, instructions for using the frameworks themselves. These are beginning to look as valuable as, or more valuable than, the frameworks themselves.

Paper Number: 4
Title: Workshop on Development of Object-Oriented Frameworks
Author: Mark Linton
Address: Vitria Technology


Mark A. Linton is a Senior Scientist at Vitria Technology, Inc. He was a Principal Scientist at Silicon Graphics, Inc. from 1990-95, and an assistant professor at Stanford University from 1983-90. He received the B.S.E. from Princeton University in 1978 and the M.S. and Ph.D. in Computer Science from the University of California, Berkeley, in 1981 and 1983 respectively. He has published papers on a variety of computer systems topics, is the original author of the Unix debugger "dbx" and the lead developer of the InterViews and Fresco user interface systems. His current research interests are in distributed systems, user interfaces, and object-oriented programming.

I have worked on several object-oriented frameworks, including InterViews, an early C++ UI toolkit, Unidraw, a graphical editing framework, Fresco, a concurrent multi-language user interface system, and most recently Jct, a high-level framework for building reliable and scalable distributed applications. Jct integrates an abstract dataflow structure with distributed processing to encapsulate common failure recovery strategies and eliminate the difficult aspects of concurrency and fault tolerance from many applications. Jct is implemented in Java, though the abstractions are expressible in any object-oriented language.

Jct uses the common pattern of introducing background threads and callbacks to avoid having applications make direct calls that could timeout or otherwise fail. This approach is similar to other techniques that avoid blocking in a user interface. However, because a user interface may wish to change a display when communication failures are occurring, errors also may be passed through to the application. Error information is integrated into the dataflow framework so that one application object can conveniently connect to the user interface, handling both normal data and errors.
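The callback pattern described above can be sketched in Python (Jct itself is implemented in Java, and all names here are illustrative), showing how both normal data and errors reach the application through the same path:

```python
import threading, queue

# A call that could block or fail runs on a background thread; the
# application connects one callback that receives either data or an
# error, so errors flow through the same path as normal results.
def call_async(operation, on_result):
    def worker():
        try:
            on_result(("data", operation()))
        except Exception as exc:
            on_result(("error", str(exc)))  # errors pass through, too
    threading.Thread(target=worker).start()

results = queue.Queue()

call_async(lambda: 6 * 7, results.put)   # succeeds
call_async(lambda: 1 / 0, results.put)   # fails

outcomes = sorted([results.get(timeout=5), results.get(timeout=5)])
print(outcomes)  # [('data', 42), ('error', 'division by zero')]
```

A user-interface object connected to such a stream can update its display on data and errors alike, which is the point of not swallowing failures inside the framework.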

Frameworks offer great power and we've had plenty of positive experience and feedback. However, the power often comes at the price of a large learning curve. Our experience has been that a framework by itself is not sufficient for many developers. The framework abstractions are a foundation that needs UI design tools, scripting access, and careful integration with underlying platforms.

Paper Number: 5
Title: Development of Object-Oriented Frameworks
Author: Joseph W. Yoder
Address: Dept of Computer Science, Univ of Illinois
1304 W. Springfield Ave., DCL, MC 258, Urbana, Il

Development of Object-Oriented Frameworks

Joseph W. Yoder

August 15, 1996

1 Overview

The primary work described here involves a "Business Modeling" application that is being developed and deployed at Caterpillar, Inc. Caterpillar, Inc. joined the National Center for Supercomputing Applications at The University of Illinois as an Industrial Partner in December 1989 to support the educational function of the University and to use the University environment to research new and interesting technologies. During the partnership Caterpillar has initiated various projects, including the evaluation of supercomputers for analysis and the investigation of virtual reality as a design tool.

The most recent project is a pilot project to demonstrate how an appropriate tool might support financial analysis and business decision making more effectively. This Business Modeling project aims to provide managers with a tool for making decisions about such aspects of the business as: financial decision making, market speculation, exchange rate prediction, engineering process modeling, and manufacturing methodologies.

It is very important that this tool be flexible, dynamic, and be able to evolve along with business needs. Therefore, it must be constructed in such a way so as to facilitate change. It must also be able to coexist and dynamically cope with a variety of other applications, systems, and services [Foote & Yoder 1995].

Many frameworks have been developed to provide a "model" that will help build a Business Model. The current work focuses on the Financial Reporting application and the frameworks developed to build it.

2 Decision Making During the Design Phase

When we developed the Caterpillar Business Model, we quickly realized that every business unit had its own variations on the way that they did business. Therefore, instead of trying to build a single model, we needed to build a set of frameworks for building models. When we started, however, we had no idea what these frameworks would look like. So, we started with a dynamic programming environment (VisualWorks) that would make it easy to change our programs [Goldberg & Robson 1983].

Many architectural decisions were the result of using VisualWorks. VisualWorks separates the user interface from the application objects, and also separates the DBMS from the application objects. But we discovered that we needed some reusable application objects, and we found that we needed to specialize the UI and the DBMS components.

First, we built a program to model one particular business unit. During the course of writing the program, we noticed many places where the same problem arose over and over. For example, the business model would make many queries to the database and would then display the results in tables. Both the queries and the tables would vary. VisualWorks has tools for generating both queries and tables, but both were more generic than we needed, and required too much programming. We built more specialized tools to make these tasks easier.

Then we built a financial model for another business unit. Most of the changes were ones we had anticipated. The new business unit had somewhat different databases and wanted a few new tables, so we had to change the queries and the tables. We also quickly found out that the business model needed to include additional components, such as allowing the end user to incorporate their business plan into the system, providing error-correction capabilities, and allowing the user to look at "What If" scenarios.

Because we were using Smalltalk, it wasn't hard to change the programs we had written. As we learned which parts of the program were most likely to change and encapsulated those parts in objects, we made the system easier to change. However, changing the system still required programming. As long as we depended on inheritance to make new components, making a new one always required programming; but once we started to make new components by composing existing ones, we found that we could design "builders" that would compose them for us without using Smalltalk. This is important because Caterpillar has too many business units for us to build a business model for all of them. We will have to provide tools that let people make their own business models without really knowing Smalltalk.
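The "builder" idea can be sketched as follows. Python stands in for Smalltalk here, and the component catalogue and names are entirely invented:

```python
# A "builder" composes existing components from a declarative description,
# so a business unit can assemble its own model without writing code.
COMPONENTS = {
    "query": lambda spec: f"query({spec['table']})",
    "table_view": lambda spec: f"table_view({spec['columns']})",
}

def build_model(description):
    """Instantiate and compose components from a plain data description."""
    return [COMPONENTS[part["kind"]](part) for part in description]

# A business unit's model, expressed as data rather than code:
description = [
    {"kind": "query", "table": "sales"},
    {"kind": "table_view", "columns": ["region", "amount"]},
]
parts = build_model(description)
print(parts[0])  # query(sales)
```

Because the description is plain data, it could come from a form or a database rather than from a programmer.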

For the financial model frameworks, we saw we needed basic reporting capabilities with printing support, and the ability to drill down to summary and detailed reports, along with a means to do error correction and graph reports. This led to developing frameworks for these. This paper will describe some of the Reporting Frameworks along with the SQL Formula Creator and Query Objects Framework.

3 Reporting Framework

Caterpillar uses a common profit/loss statement which has its calculations based upon the DuPont Model [Johnson & Kaplan 1987]. The view of this can be either a graphical way of looking at return on investment or the common spreadsheet format.

The views of the profit/loss statements were based upon the logic of looking at the sales, costs, inventories, etc. that produced the desired results. All of the top-level numbers, such as sales and costs, can be broken down to sales by region, by model, by date, etc.

The financial model application discussed here allows users to view the top-level numbers and lets them drill down into different summary reports of their sales and costs, along with a means to view detailed transactions from the database and graph the results.

A framework was developed for building the graphical view of the top-level numbers by extending ParcPlace's VisualBuilder framework; it will not be discussed here. We also built a graphing framework that mapped into the graphing framework provided by ParcPlace. This framework takes either 2-D lists of values or the Query Objects discussed below and generates a customizable graph which allows the user to print the results. The details of this framework will also not be discussed here.

The basic reporting framework I want to discuss here is the one for generating summary and detailed reports. The domain logic was defined using the Query Objects and Formula Objects framework [Brant & Yoder 1996].

All top-level reports can "drill down" to a first-level summary that views sales, costs, etc. in a summary fashion. The user can then view ever more detailed summary reports, down to the level of viewing and editing the individual transactions that comprise them.

The framework we developed provides for ReportValues objects that contain the domain logic needed for all of the formulas and SQL operations for accessing the data of interest. These objects also contain the "meta-data" describing the first-level drill-downs, other summary/detailed reports, and any graphs of interest.

ReportValues are table-driven, allowing reports to be created without any coding. The database can simply be queried for the different types of reports that need to be built, so all of the reports are created at run-time from the "meta-data" stored in the database.

Since most of the reports needed were well defined, and we found that the majority of the financial applications developed at Caterpillar could be handled by a collection of a few basic report types, we could easily automate this process and drive application development from tables in the database. Once the required data model and query objects have been specified, a new financial application can be developed quickly.
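The table-driven mechanism described above can be sketched roughly as follows. This is a hypothetical Python illustration (the original framework was in Smalltalk, and the class and field names are invented): "meta-data" rows, as they might be read from the database, describe each report, and the framework instantiates ReportValues objects from them at run-time.

```python
class ReportValues:
    def __init__(self, name, columns, drill_downs):
        self.name = name
        self.columns = columns          # attributes to summarize
        self.drill_downs = drill_downs  # names of next-level reports

# Stand-in for rows queried from a meta-data table in the database.
META_DATA = [
    {"name": "TopLevel", "columns": ["sales", "costs"],
     "drill_downs": ["SalesByRegion"]},
    {"name": "SalesByRegion", "columns": ["region", "sales"],
     "drill_downs": []},
]

def load_reports(rows):
    """Build every report object at run-time from meta-data rows."""
    return {r["name"]: ReportValues(r["name"], r["columns"], r["drill_downs"])
            for r in rows}

reports = load_reports(META_DATA)
# Drilling down is just a lookup in the same table-driven structure:
first = reports["TopLevel"].drill_downs[0]
```

Adding a new report then means inserting a row in the database, not writing code.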

4 Formula Creator & Query Objects Framework

Originally, the users of the system provided us with a list of errors that they normally search the database for. We hard-coded these queries and developed an interface for the user to select among them. However, it quickly became apparent that each business unit would have a different list of these exceptions and scenarios, and that the lists would probably change over time within each business unit. We therefore focused on a way to automate their creation at run-time.

Our solution was to build a dynamic system that allows the user to decide at run-time on the specific logic and variables of interest. This led to the architectural design decision to develop a framework for building and selecting queries of interest. These queries map onto a SQL database.

VisualWorks by ParcPlace provides a framework for creating static SQL database queries. It allows the developer to graphically create SQL queries that map to Oracle and Sybase databases. These queries are then converted into Smalltalk methods that can be called later when desired. Smalltalk objects can also be passed into the generated methods, and conversions and comparisons are supported by the framework. The framework can also query the database for the current data model the developer is interested in and then create objects that map to the desired tables. It is also easy to extend the framework to add missing database functions or to support other database vendors.

The problem arises when one wants to dynamically change or create SQL queries at run-time. Since the SQL framework supplied by ParcPlace only provides for predefined static queries, we built a framework of SQL Query objects that lets you create dynamic SQL queries through the use of Smalltalk expressions.

Our solution for the dynamic creation of SQL objects was to define GroupQuery, OrderQuery, ProjectQuery, SelectionQuery, and TableQuery classes, all subclasses of the abstract class QueryObject. We also created the associated objects that allow the developer to build Smalltalk-like query expressions.

The Query objects know how to respond to the appropriate messages to build the queries and to wrap constraints onto themselves at run-time. See the Reports patterns from the PLoP '96 proceedings for more information on the related patterns [Brant & Yoder 1996].
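The query-wrapping idea can be sketched briefly. This is a rough illustration in Python rather than Smalltalk, and the rendering to SQL text is invented for the sketch (the real framework reused the VisualWorks SQL machinery): each QueryObject subclass knows how to render its part of a query, and queries wrap one another so constraints can be added at run-time.

```python
class QueryObject:
    """Abstract superclass of all query objects."""
    def sql(self):
        raise NotImplementedError

class TableQuery(QueryObject):
    def __init__(self, table):
        self.table = table
    def sql(self):
        return f"SELECT * FROM {self.table}"

class SelectionQuery(QueryObject):
    """Wraps another query, adding a constraint at run-time."""
    def __init__(self, inner, constraint):
        self.inner, self.constraint = inner, constraint
    def sql(self):
        return f"SELECT * FROM ({self.inner.sql()}) WHERE {self.constraint}"

q = TableQuery("transactions")
q = SelectionQuery(q, "region = 'EAME'")   # constraint wrapped on later
q = SelectionQuery(q, "amount > 1000")     # and another, still later
print(q.sql())
```

Because constraints are objects wrapped around other query objects, an end-user tool can keep adding them long after the base query was defined, which is the "late binding of constraints" discussed below.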

We were able to reuse all of the code from the original VisualWorks framework for parsing a method into SQL, sending the SQL across the network, and creating objects representing the desired values returned from the database. Our dynamic SQL framework permits late binding of constraints to the SQL objects: the developer can build a parser for developing queries and "wrap" additional constraints onto SQL objects as the application runs.

The SQL Formula Creator takes these QueryObjects, allows the user to dynamically add additional constraints, and then files the results out for later use. The constraints are wrapped around a selection criteria object. Thus, whatever selection criteria the user may be viewing later, they can access the SQL formulas that were previously created and get the desired results.

This allowed for the dynamic creation of error correction screens, since the end-user can create the queries of interest, query the database given the current selection criteria, and return the results for correction.

It was then easy to take the basic reporting schema that ParcPlace provided for viewing and editing data and build it dynamically from the QueryObjects specified by the end-user. We simply mapped the desired query into the ParcPlace framework and then dynamically built a dataset for viewing and editing the data based upon the given QueryObject.

One thing to note is that the users of this system are always stuck with the same look and feel. Even though they can dynamically create any SQL query of interest, the returned data is always presented in a predefined format. Customization beyond our common look and feel would therefore have required either extending the framework or "programming" custom screens for the user.

5 Conclusion

This workshop paper has briefly described a few of the frameworks included in the financial model being developed for the Caterpillar Business Model. Framework development was an iterative process for us, as we first had to develop some working applications in order to see the common themes.

This work revealed a well-defined reporting format used by the Caterpillar organization, which allowed us to create frameworks for building the financial applications. These frameworks allow for the dynamic creation of the desired queries and views of interest, as long as the end-user accepts the default look and feel of the summary reports, detailed reports, error correction modules, and graphing views.

6 References

[Goldberg & Robson 1983] Adele Goldberg and David Robson; Smalltalk-80: The Language and its Implementation, Addison-Wesley, Reading, MA, 1983.

[Brant & Yoder 1996] John Brant and Joseph Yoder; Reports, Third Conference on Pattern Languages of Programs (PLoP '96), Monticello, Illinois, September 1996. Pattern Languages of Program Design 3, edited by TBA, Addison-Wesley, 1997.

[Foote & Yoder 1995] Brian Foote and Joseph Yoder; Architecture, Evolution, and Metamorphosis, Second Conference on Pattern Languages of Programs (PLoP '95), Monticello, Illinois, September 1995. Pattern Languages of Program Design 2, edited by John Vlissides, James O. Coplien, and Norman L. Kerth, Addison-Wesley, 1996.

[Johnson & Kaplan 1987] H. Thomas Johnson and Robert S. Kaplan; Relevance Lost: The Rise and Fall of Management Accounting, Harvard Business School Press, Boston, MA, 1987.

[Gamma et al. 1995] Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides; Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley, Reading, MA, 1995.

7 Biography

I graduated with High Distinction and Honors from The University of Iowa in Computer Science and Mathematics. Since then, I have completed a Master of CS degree at the University of Illinois, studying problems in the computer-based recording of medical records. I focused primarily on the development of a computer-based system for the collection of physical exam findings. The design of this system employs an object-oriented approach, using direct manipulation of graphical objects integrated with hypertext and semantic networks to build a system that is more natural to the user.

I started working with Professor Golin at the beginning of 1993. His focus was the theory, tools, and applications of visual programming. Before he resigned from the University, I was leaning towards providing a visual environment for the development of intelligent agents to assist in automatic user interfaces as my PhD work.

I am currently working on my PhD with Professor Ralph Johnson. His focus is on object-oriented technology and how it changes the way software is developed. In particular, he is interested in how to use and develop frameworks, which he believes is a key way of reusing designs and code using objects.

I am investigating "visual languages for business modeling". I am designing them, using them, and implementing them. My current focus has been on using frameworks to develop and implement visual languages for use with business modeling. This project is aimed at providing support for decision making during the business process.

I believe that frameworks are both a way of discovering visual languages and a way of implementing them, because building something in an object-oriented language, and then building a framework for it, naturally suggests a language for describing what the framework builds.

Paper Number: 6
Title: The Best of All Possible Frameworks
Author: Kent Beck
Email: '' (BNR400)
Date Rec: August 16, 1996

The Best of All Possible Frameworks

A technique I often use in making difficult tradeoffs is to imagine the solution in the best of all possible worlds, then understand how my alternatives vary from that. Thus, I think of notebook computers not as having a particular battery life, but as varying more or less from continuous untethered operation.

My successful frameworks have all emerged out of two or three (or four or five) particular solutions to similar problems. I have the experience of what is the same and what is different about the different specific implementations, and from that I can confidently include in the framework only such flexibility as will actually be useful.

So, what is the best of all possible worlds for framework reuse? I can imagine no better scenario for the potential elaborator than sitting down with the abstractor of the framework. A short discussion of the scope of the problem and the framework will quickly lead to a decision about whether the framework is applicable or not. If the framework is helpful, the abstractor leads the elaborator one-by-one through the important issues in mapping the problem onto the framework. In short order, with few false steps, the elaborator has a solution.

What is it that makes direct interaction between the abstractor and elaborator so successful?

The solution I am pursuing (surprise, surprise) is writing a pattern language for each framework. Patterns simulate the effective aspects of direct interaction.
Paper Number: 7
Title: Development of Object-Oriented Frameworks
Author: Lars Kirkegaard Baekdal
Address: DIT, Odense Universitet
LCAM, Forskerparken 10
DK-5230 Odense M
Email: ''

Framework Development: Using a Reference Application to Support Design and Communication

Lars Baekdal Wouter Joosen John Perram


This position paper elaborates on experiences with the development of a molecule modeling framework for the pharmaceutical industry. We discuss lessons learned and sketch how we approach the design of a more general modeling framework for applications in the area of co-operative design, using a reference application.

1 Introduction

In our opinion, a framework implements aspects that are generic for an application domain. The framework consists of a collection of classes that model objects and their interaction (roles and responsibilities of objects.) As opposed to traditional libraries, it works following the idea: "don't call the framework - it will call you", i.e. a framework also defines the flow of control. Abstract classes define the interface of instantiable classes and offer the possibility of specialization.
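The "don't call the framework - it will call you" principle can be shown with a minimal sketch (names invented for illustration; the papers in this collection do not prescribe a language): the abstract class owns the flow of control and calls hook methods that instantiable subclasses specialize.

```python
class Framework:
    """Abstract class: defines the flow of control, not the specifics."""
    def run(self):
        # The framework calls the application code, not the other way around.
        self.setup()
        result = self.step()
        self.teardown()
        return result

    def setup(self):      # hooks with default (empty) behaviour
        pass

    def teardown(self):
        pass

    def step(self):       # subclasses must specialize this
        raise NotImplementedError

class MyApplication(Framework):
    """A specialization: fills in the hook, inherits the control flow."""
    def step(self):
        return "application-specific work"

result = MyApplication().run()
```

The subclass never invokes the framework's internals directly; it only supplies the pieces the framework asks for, which is what distinguishes a framework from a traditional library.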

Three requirements need to be addressed in order to make a framework valuable:

  1. An instantiated framework should be able to run "as is" to ease comprehension, i.e. it should be complete. This supports usability.
  2. A framework should be extensible as we cannot expect that an appropriate software system is static or even that we are able to discover its complete architecture at once.
  3. It should be easy to refine (and thus reuse) a framework by changing the configuration and/or by sub-classing. Otherwise we cannot justify the efforts of developing frameworks.
2 Our experience with a molecule modeling framework

2.1 An overview

The pharmaceutical industry studies the behavior of proteins. One way of learning about them is through computer simulation, which is becoming more and more important.

In brief, the functional requirements of a protein modeling tool can be classified as belonging to three major categories:

  1. A protein modeling tool (PMT) must enable the construction of complex proteins. Comparing its functionality to that of typical editors, we observe many similarities, e.g. adding, deleting, storing, retrieving, and displaying.
  2. A PMT must exploit the knowledge that is embedded in databases and thus consult various kinds of knowledge bases. This will assist the researcher, for instance in examining relevant combinations of proteins.
  3. A PMT must evaluate molecules, proteins, etc. by simulating their macroscopic behaviour under various circumstances. Molecular dynamics is a frequently applied technique here.
In co-operation with the pharmaceutical company Novo Nordisk, we have designed a protein modeling tool [1] based on a framework that provides the above-mentioned abilities. Methods for simulation, construction and database access have been supplied through sub-classing and/or through changing the configuration. Even though a tool for protein modeling was the primary deliverable we aimed at, a more generic tool has emerged: it can be targeted for the construction of complex molecules in general. Obviously, the construction of genes, DNA, RNA, other organic molecules, inorganic molecules, crystals, etc. has common aspects. The differences between the types of molecules mainly originate from the rules of construction. For instance, when constructing small molecules a set of basic chemical rules applies; constructing proteins involves additional rules. In short, our framework is also generic with respect to the molecule type. We achieved this by making rules first-class instances. Specific ways of construction (e.g. protein construction or just construction of some ordinary organic molecule) have been achieved through sub-classing.
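The "rules as first-class instances" idea can be sketched as follows. All names here are invented for illustration (the paper does not give its class design): the editor is generic over molecule type because the construction rules are objects it is configured with, not code baked into the editor.

```python
class Rule:
    """Abstract construction rule; concrete rules specialize allows()."""
    def allows(self, bond):
        raise NotImplementedError

class ValenceRule(Rule):
    """A basic chemical rule: an atom may not exceed its valence."""
    def __init__(self, valences):
        self.valences = valences
    def allows(self, bond):
        atom, current_bonds = bond
        return current_bonds < self.valences.get(atom, 0)

class Editor:
    """Generic molecule editor, configured with first-class rule objects."""
    def __init__(self, rules):
        self.rules = rules
    def can_add(self, bond):
        # A bond is legal only if every configured rule allows it.
        return all(rule.allows(bond) for rule in self.rules)

# An editor for ordinary organic molecules uses the basic chemical rules;
# a protein editor would be configured with additional, protein-specific
# Rule subclasses (omitted here).
organic = Editor([ValenceRule({"C": 4, "H": 1})])
```

Swapping the rule set changes the molecule type the editor supports, without changing the editor itself.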

2.2 Towards a systematic approach

Looking at the broad applicability of our framework, we have set up a table with different applications (a basic molecule modeling tool, a protein modeling tool, a gene modeling tool, car battery design tools, ...) residing along a horizontal axis and requirements residing along a vertical axis. Then the approach taken may be characterized as vertical. That is, we took a "simple application" approach where "all" requirements from one particular application (the protein modeling tool) have been considered in depth, before considering requirements originating from other applications. In other words, we have established the framework by factorising out generic aspects from a well-known application.

2.3 Evaluation

As in most cases the conclusions are not just true or false. We see both advantages and drawbacks that to some extent blur the picture.


One of the difficult tasks in developing a framework is that the designer must identify generic requirements. By considering only one specific application in depth, we only had to discover the requirements of one particular application domain. The danger of drowning in an ocean of details was also less likely, as only one application was under consideration.

A major advantage was the ability to provide an early proof of concept, i.e. a verification of the architecture.


However, taking the vertical approach, the danger of diving into irrelevant domain-specific details (with wasted effort as a consequence) was large. We are unable to estimate the losses precisely, but they were significant. Besides, it was hard to figure out minimal and complete classes, as the particular application did not reveal all the properties of the objects. For instance, when designing the editorial part of the protein modeling tool, the focus was mainly on visual aspects, ignoring requirements originating from simulation. The late search for generic aspects caused some redesign. Actually, both views have contributed to the definition of the same classes. Knowing an application domain too well to some extent prohibited our ability to see generic characteristics, a fact that showed up during project meetings.

3 A framework for computer supported co-operative design

The pure vertical approach has proven to be difficult. In a new and more ambitious project we try out a more horizontal approach as outlined in this section.

3.1 Application domain

The ongoing project addresses what we call an interactive design tool. It enables users to co-operatively contribute to a joint deliverable, being the design of one or several products. Here, "products" applies in a broad sense.

In brief, we see the following additional requirements for such an interactive design tool:

  1. It must provide support for co-ordination: Mechanisms for conflict resolution are necessary as users may carry out conflicting activities.
  2. It must provide support for co-operation: Methods for the integration of separated modifications must be available to users.
  3. It must support the exploitation of design expertise. That is, extensive design knowledge depending on the subject being designed must be at hand.
The general modeling tool as outlined above can be viewed as an expansion of the molecule modeling tool; the new framework we are currently developing can be considered a super-framework for the molecule modeling framework. Instead of a single user doing some work in a non-distributed system, we now expand the scope to the category of (what we call) interactive modeling tools complying with the paradigm: "Multiple users co-operatively contribute to joint deliverables in a heterogeneous, distributed system."

We believe that this paradigm is an obvious target for a truly generic framework. Though the primary goal of the project stresses concurrency issues and distribution, a key deliverable of the project is still a framework which must capture the main generic aspects.

3.2 Controlling the horizontal approach

In the current project, we apply a more horizontal approach: that is, taking a variety of applications into account we identify generic requirements as soon as possible.

A generic framework should be based on stable and truly generic requirements. Requirements that are still immature with respect to the generic framework should instead be implemented in its specializations; otherwise, there is a risk of wasting effort generalizing an immature requirement. Consequently, we seek generic properties using several applications.

So far we address a set of quite different applications: a tool for editing newspapers, a tool for the construction of large ships (in collaboration with Odense Steel Shipyard), a tool for the design of pumps, and finally a tool for constructionism using LEGO bricks (in collaboration with LEGO). Real applications have requirements that we are unable to foresee. We use the newspaper tool as an intuitive application containing all the generic requirements that the framework must realize, i.e. we project the requirements onto equivalent functionality in a "reference application". In taking this approach we consider generic aspects at a very early stage, and to some extent we keep the advantage of addressing only one application. Moreover, the reference application makes it easy to communicate the ideas of the framework and thus promote reuse.

The newspaper application also serves as a guide for analysis and design decisions. It will eventually serve as the subject of prototypes, but not necessarily as a finished application. We believe that we can afford, and also benefit from, this overhead because the newspaper application is easy to understand and to some extent even intuitive: editing a newspaper involves editors, journalists, advertisers, etc. co-operating directly or indirectly in a distributed and heterogeneous environment to compose newspapers.

We have chosen several realistic and concrete applications as the main drivers to identify requirements for the framework. This must ensure a generic solution, independent of shifting requirements. We believe that the selection of a leading application (newspaper composition) in our project is the crucial and strategic issue to combine the advantages of the horizontal and vertical approach we have outlined in this position paper.

4 Discussion

We have tried out the vertical approach. In our experience it is likely to cause a lot of redesign. Alternatively, in our current project we could try using a pure horizontal approach. However, it is clear that this approach also suffers from severe disadvantages such as late proof of concept and virtually a huge domain for the analysts team. Instead we have chosen an intermediate approach benefiting from the advantages from both a vertical and a horizontal approach.

Indeed, we believe that the most effective approach combines the advantages of both approaches. This leads to the identification of an intuitive and easy to communicate reference application, which contains significant and stable generic requirements based on real applications. Thus we are able to make an early proof of concept while still ensuring genericity.

5 References

1. L. K. Baekdal. "A Framework for Modeling of Complex Molecules." Technical report, Department of Information Technology, Odense University. 1995.

2. D.J. Chen, David T.K. Chen. "An Experimental Study of Using Reusable Software Design Frameworks to Achieve Software Reuse." In Journal of Object-Oriented Programming, 1994.

3. R.E. Johnson, B. Foote. "Designing Reusable Classes." In Journal of Object-Oriented Programming, 1988.


Lars Baekdal is a Ph.D student at Odense University, funded by Odense Steel Shipyard. He has 6 years of experience as application developer from industrial companies.

Wouter Joosen is a researcher for the Flemish I.W.T. He has several years of experience in object-oriented technology.

John Perram is a professor of Applied Mathematics at Odense University.

Paper Number: 8
Author: John ??

Project Description

I'll describe for this workshop submission a commercial project whose goal was to build a CASE tool in support of object-oriented development. Requirements included support for multiple languages and multiple development methodologies. The tool itself was required to be cross-platform and had to provide integration, to varying degrees, with a fairly large number of other products, some of which were external to the company. Our implementation had to support a high degree of distribution using a CORBA-compliant server. Extensibility was a chief concern, but so were development costs (we had approximately 10-12 people and one year to produce a useful first release). At the time, 1993-1994, we were concerned that the corporate look-and-feel standards were still evolving and subject to change.

Given this requirements list and our past experience with similar requirements in the context of large (that is, multiple application) systems, we were concerned that our design provide encapsulation that would be practical (could be designed and implemented in a relatively short time with a manageably sized staff). At the same time the market segment we sought to serve clearly would require ease of extensibility in our design. We knew that the key to the potential success of our product would be our ability to respond to rapidly evolving product requirements (new languages, new or modified methodologies, or even entirely new target platforms).

From prior work that I and my colleague, Tony Leung, had done concerning application integration, we felt that two divisions in our design were necessary. The first was between classes that provided a model of what our users needed to accomplish and those that implemented platform- or technology-specific support (primarily user interface and persistence). The second division was within the model of our users' domain; it separated the domain representation into entities and representations of how those entities could be displayed and manipulated. A means then had to be provided for making these models persistent and for tying the presentation models to the user interface.

Framework Description

To fulfill these design goals, we designed and implemented what we called our "Integration" framework. It could best be described as an application framework. Its primary responsibility was to provide a means of specifying both an entity model and a presentation model. The presentation model could then be decoupled from the user interface code through which the user interacts with these models. In effect, the presentation model replaces the Model in a Model-View (or Model-View-Controller) framework.

The Integration framework included two principal classes: UserObject and Presentor. UserObject's subclasses provided a model of the domain entities with which our users were concerned, including, for example, Attribute and Operation (since we were building an object-oriented CASE tool, our entities formed an object metamodel). Presentor's subclasses provided a means of modeling the content, in terms of UserObjects, as well as the display characteristics of the Views (Primary Windows in IBM CUA Workplace look-and-feel; major application windows to the rest of us).

The abstract class Presentor could be characterized by two responsibilities. The first was the ability to describe which UserObjects (domain entities) appear in a View and how those UserObjects (or their attributes) were to be displayed. The second was to describe the overall display characteristics of the kind of view used for a particular View. Most user interface problems can be categorized as one of a relatively small number of display types (for example, Forms, Network Diagrams, Maps, Iconic Lists, Tables or Spreadsheets, and X-Y Plots). The user interface literature uses the term Visual Formalism to describe such a generalized set of display mechanisms.

Let's look at a brief example to illustrate these two points. One visual formalism used in many disciplines is the Network Diagram, or Node and Arc Diagram. At OOPSLA, we might see various OMT diagrams, state transition diagrams, or even organization charts. These Views all make use of basic Network Diagram mechanisms. That is, they represent a set of objects in some domain as a set of Nodes and Links (Arcs) connecting those Nodes, and sometimes the Links themselves, together. Some diagrams, like the OMT Object Model Diagram, will display one object, such as a Class, as a very elaborate Node. In these cases the display element (the rendering of a Class as a Node in the diagram) can be built by composition. That is, the three-part Node (containing the Class name, a list of Attributes, and a list of Methods) can be thought of as one Node created compositionally out of one simple Node and two ListNodes. Each display element is independently mapped back to an element in the domain model (UserObjects). More accurately, a display element, say a Node, must know how to find the object it is modeling its behavior on and which attribute or aspect of that object to base its display on. Returning to the OMT Object Model Diagram example, the Node displaying a Class must be created so that it models its behavior on one instance of Class. That Node's compositionally contained simple Node models its behavior on its containing Presentor's model (the instance of Class), and its display aspect is the name aspect of that model. One of the Node's compositionally contained ListNodes models its behavior on the collection of Attributes for that instance of Class (really, it is modeling its behavior on the aggregation association between Class and Attribute) and models its display aspect on the individual Attribute instances. That ListNode could then compositionally contain a ListNodeItem modeled on an instance of Attribute with a display aspect of name.

In the end, we have a compositional tree of Presentors which collectively map the display, in terms of the mechanisms of the visual formalism (Nodes, Links, etc.), onto the UserObjects describing the domain model represented in this View. This compositional tree describes the display content of a View as well as the mapping to the user's domain entities. Wherever possible, the composition is generalized (for example, all Presentors must respond to a "resize" event generated by one of their contained Presentors). Each Presentor models its behavior on one UserObject instance, or is able to request one UserObject instance based on its containing Presentor's model (for example, the ListNode being able to request the "attributes" collection of its containing Presentor's instance of Class; in VisualWorks you might use an AspectAdaptor to implement this behavior). Each Presentor must also be able to map an attribute or aspect of its model into a displayable characteristic (the Node must be able to use the "name" aspect of its model to derive its label).
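The compositional tree above can be sketched in miniature. This is an illustrative sketch only (the original was a Smalltalk/C++ design; the method names and the string-based rendering are invented): each Presentor models its behavior on a domain object (UserObject) and derives its display from one aspect of that object.

```python
class UserObject:
    """A domain entity with named aspects (e.g. 'name', 'attributes')."""
    def __init__(self, **aspects):
        self.aspects = aspects
    def aspect(self, name):
        return self.aspects[name]

class Presentor:
    """Maps one UserObject aspect to a displayable, composing children."""
    def __init__(self, model, aspect, children=()):
        self.model, self.aspect_name, self.children = model, aspect, children
    def label(self):
        # Map an aspect of the model into a displayable characteristic.
        return str(self.model.aspect(self.aspect_name))
    def render(self):
        # The tree collectively maps the domain model onto the display.
        return [self.label()] + [line for c in self.children
                                 for line in c.render()]

cls = UserObject(name="Account", attributes=["balance", "owner"])
node = Presentor(cls, "name", children=[
    Presentor(UserObject(name=a), "name")
    for a in cls.aspect("attributes")])
# node.render() -> ['Account', 'balance', 'owner']
```

The View code never touches the domain objects directly; it walks the Presentor tree, which is what lets the same View mechanisms be reused over any domain model.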

One additional property of Presentor is that Presentors are themselves UserObjects. This facilitates user interface designs where one View is based on the contents of another but need not know the ultimate mapping to the domain entities. One example might be an Iconic List View of a Network Diagram. The List Items in the Iconic List View would each correspond to a Node or Link instance in the Network Diagram. This Iconic List View (a Node and Link List) would then work with any Network Diagram and would provide a list of all of the Nodes and Links in the diagram.

One final aspect of the design of the Integration framework was the way in which dependent frameworks (frameworks which extended the Integration framework) made use of Presentor's properties. Each visual formalism we supported became a new framework dependent on the Integration framework. We had, for example, a Network Diagram framework to build Network Diagram Views. These dependent frameworks all made use of a common design pattern. One class in the visual formalism's framework represented the formalism itself. In the case of Network Diagram, there was a class, NetworkDiagram, that represented the formalism. The remaining classes in the framework provided the basic building blocks for that formalism (in this case, classes like Node, ListNode, and Link). The class representing the visual formalism was an abstract class. Whenever the developer wanted to create a new type of Network Diagram, say an OMT Object Model Diagram, she would create a concrete subclass of the abstract class NetworkDiagram. She would then provide an implementation for a small number of methods which define the mapping of the visual formalism's mechanisms onto the entity (UserObject) model. The basic mechanism components of the visual formalism (such as Node and Link) were used as black boxes within these methods. When the developer was finished, she had encapsulated her description of the mapping of domain entities (UserObjects) onto display mechanisms. This encapsulation was localized to the concrete subclass of the visual formalism and to a relatively small number of methods in that concrete class.
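The pattern described above can be sketched briefly. This is a hypothetical illustration (the original frameworks were not in Python, and the method names and data shapes are invented): the abstract class represents the formalism, and a concrete subclass defines, in a small number of methods, how domain entities map onto Nodes and Links.

```python
class NetworkDiagram:
    """Abstract class representing the Network Diagram formalism."""
    def build(self, model):
        # The formalism's mechanisms (Nodes, Links) are black boxes here,
        # reduced to tagged tuples for the sketch.
        nodes = [("node", n) for n in self.nodes_for(model)]
        links = [("link", a, b) for a, b in self.links_for(model)]
        return nodes + links

    # The concrete subclass supplies only these mapping methods.
    def nodes_for(self, model):
        raise NotImplementedError
    def links_for(self, model):
        raise NotImplementedError

class OMTObjectModelDiagram(NetworkDiagram):
    """Concrete subclass: maps domain entities onto the formalism."""
    def nodes_for(self, model):
        return model["classes"]          # one Node per Class
    def links_for(self, model):
        return model["associations"]     # one Link per association

diagram = OMTObjectModelDiagram().build(
    {"classes": ["Class", "Attribute"],
     "associations": [("Class", "Attribute")]})
```

All knowledge of the domain-to-display mapping lives in the two small overridden methods, which is the encapsulation the paragraph above describes.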

Eventual Outcome

The work described in this paper was being performed by a group of about 12 developers. This group was part of a larger project. The larger project, because of its staffing levels, became a target for elimination during a downsizing phase. The work described here was terminated prior to completion of the second development iteration. The basic Integration framework, a Network Diagram framework, an Iconic List framework, a minimal Menuing framework, and preliminary versions of some of the domain frameworks were, at the time of termination, working to at least prototype levels of function. The design work was well documented at the time of termination.

A small team of three working on a related product that was not terminated decided to pick up the original framework designs (they had been assuming our frameworks as prerequisite to their own plans). Because they had only minimal resources, they stripped away all of the CORBA and distribution support and converted the frameworks to C++. They subsequently delivered their product on time and within budget. Customer satisfaction with the resulting user interface design was reportedly quite good.

History of the Framework

A few background comments may be helpful in establishing the history of the Integration framework. The design arose from architectural work Tony Leung and I did on a much larger multi-application product family with intensive application integration requirements. The applications were being designed and implemented in sites scattered around the world and in several independent companies. We found that although the system's architecture specified how data integration was to be achieved, in reality both data integration (drag-and-drop, for example) and control integration (access of one application's Views from another) had to be architected. Those architectural requirements became the foundation for our subsequent work on the Integration framework.

Predecessor Work in the Literature

Elements of both the Integration framework and frameworks such as our Network Diagram framework can be seen in other work described in the literature. Taligent's overall application framework, which was already in beta as we began designing our first iteration, has many of the same features as our UserObject and Presentor design. The major distinction that does not appear to be there is the use of Presentors as UserObjects and the use of concrete subclasses of visual formalism classes to encapsulate the mapping of View onto domain (and the very terse language of the visual formalism itself).

Another important predecessor work is the Unidraw work of John Vlissides. Unidraw's encapsulation of drawing objects is very much like my description of our Network Diagram framework. One important difference is the very stylized use of subclassing of the visual formalism. Another is the use of Presentor as an observer of UserObject.

One other existing body of work should be mentioned. At about the same time we were designing our product, the engineers at Interactive Development Environments (IDE) were re-designing their own. Apparently having come to much the same conclusions we did, they re-structured the architecture of their CASE tool, Software through Pictures, so that it was structured around visual formalisms (they have a diagram editor, a table editor, an annotation editor, and a few forms). Their data model (I believe IDE used C to implement this architecture) does not provide an equivalent to the UserObject and Presentor distinction, and their products suffer some restrictions because of this. In their architecture, rules files (specified in a proprietary, C-like language) are used to specify the domain mapping for a given View (so, for example, a rules file specifies what the components of an OMT Object Model Diagram are). However, the data model is only one level deep (all of their objects are subtypes of one of their meta-types) and the meta-types specify presentation constructs like Node and Link. This means that an object like "Attribute" will be classified as a type of "Node". This metamodel seems to restrict the flexibility of presentation. Encapsulation also seems to suffer: even allowing for the procedural implementation language, the rules code specifying each View fosters violations of encapsulation. Even so, StP is the most flexible and extensible commercially available OO CASE tool with which I have worked.

Common Questions

How do you achieve the right level of generality in a framework, given shifting requirements?

One observation that arises from the Integration framework is that "good encapsulation" seems to drive robustness in the face of change. One important consideration in seeking "good encapsulation" is, first and foremost, that in order for a framework driven approach to work, an overall or application framework is needed. Taligent's framework technology was a (technical) success because there was an overall framework in which to fit other frameworks. If your requirements creep remains within the bounds of the capabilities of the overall framework, then you are unlikely to experience any major breakdown of encapsulation (defined by a change in requirements which ends up requiring changes in many places in the design). Conversely, if your requirements shrink (which, admittedly, rarely happens but did happen here) it is easier to isolate the now irrelevant portions of the design (the individual frameworks) and remove them leaving the remainder of the design intact.

Another facet of "good encapsulation" seems to be "naturalness". "Naturalness" implies that the concepts defined in the design (frameworks) are a direct reflection of the domain in question or are a direct outgrowth of functional requirements implied by use cases in that domain. At first blush this appears to be so vague as to be unhelpful. However, it has been my observation that most architectures are laid out around the core technologies the project is to use, not the needs of the user of the system (the person or persons who will interact with the system via its human-computer interface). As a consequence, database technology, for example, often bleeds right through to the user interface which can often mean any change of database technology implies re-writing code throughout the design (and even re-designs of the user interface itself). This certainly constitutes a major failure of encapsulation. Conversely, if you start with a more pure-minded, domain-based view of the system architecture, and you do not find a database system that fits well with that architecture, you will end up incurring performance penalties.

How to communicate the framework architecture and achieve usability?

During the project I've described in this paper, Tony Leung devised a representation of our frameworks based on the idea that all elements of a framework are expressed in terms of classes and their methods. The classes can be categorized as public (and therefore part of the externalized interface of the framework) or private (and therefore never referenced outside the framework). Public classes can be either concrete (used only as black boxes and therefore needing to be understood only at that level) or abstract (used in white-box fashion, requiring a greater understanding of intended use). Furthermore, the framework-public methods these classes exposed could further delimit what the framework consumer needed to understand. We also discussed the possibility of making a distinction between users of a framework who merely use it as intended (a domain engineer creates an OMTObjectDiagram using the Network Diagram framework) and users of a framework who wish to create an entirely new framework (an interface engineer who wishes to create a new visual formalism based on Network Diagram but with new and extended capabilities; say a Plane-And-Edge Diagram). This last distinction opens up the possibility of categorizing the associations of frameworks to each other (dependency in structure versus usage of components as intended).

How to justify boundaries when testing for correctness and completeness?

This is not an area any of my prior project experience has touched on. Our testing approach was not much different from a conventional object-oriented test approach (the use of frameworks did not greatly affect it).

How do you design effective economic models to clarify what should be framework-developed?

As we were working in a tool development shop, we worked from the economic assumption that our expenses for release one of the product had to be recovered over the life of that first release. In other words, the overall development costs could be no more than would have been the case if we had built the product without frameworks. This forced us to eliminate or at least limit framework-based design in a number of areas. One good example was our implementation of Forms (the most common kind of visual formalism, employing entry fields, labels, list boxes, combo boxes, and the like). We felt that a good generalized Form framework would have cost substantially more than the cost of hard-coding the Forms in our design. Forms are especially costly for two reasons: there are lots of black box components like entry fields and combo boxes, and you also need either a superb layout algorithm (which, frankly, I have never seen) or a WYSIWYG Form layout editor, which is also costly. On the other hand, simple visual formalisms such as an Iconic List (a list of UserObjects displayed with an iconic representation and a label for each item in the list) pay for themselves if you implement as few as two Views based on the formalism.

This assumption would have been fine had it not been that upper management's criterion of progress was very closely tied to what can be seen on the glass (at the user interface). Our first iteration included object distribution, provided the ability to dynamically create menus in the interface process based on model content on our server, enabled drag-and-drop across any applications that could do a CORBA message send to our server, and provided a stripped down Network Diagram framework and an Iconic List framework. However, the application was anything but flashy at the user interface unless the observer was familiar with the difficulty of achieving some of these characteristics. Assuming that our project wasn't doomed simply due to downsizing pressures, the only economic model that would fit these circumstances would require cost recovery over each development iteration (no additional cost of development for the framework-based design over the conventional design for a given iteration). Under that assumption our iteration drivers would have demonstrated a level of function more closely approximating management's expectations. However, at the end of the product development cycle we would have had significantly less capability overall. Clearly, I would prefer working under the economic model we started with, but that would require a much better working relationship with the management who approve the expenditures.

Connections to Other Disciplines

Much of the work underlying the UserObject and Presentor based integration framework and its relationship to visual formalism based frameworks is rooted in cognitive psychology. A Presentor representing a View can be thought of as a Problem Space Representation. Problem space representations are descriptions of a visual representation of some domain problem or question. The representation should facilitate traversal of the possible solution space for that problem. This direct mapping to psychologically relevant representations also aids in usability engineering of the system throughout all phases of development.

One other important cross-discipline connection is to task analysis. Task analysis is a user interface analyst's version of at least part of what goes on in use case analysis. Since Presentors represent a problem space representation, there should be a simple one-to-one correspondence between elements in the analyst's task model and Presentors in the class model.


I work as a user interface architect at OOCL (Orient Overseas Container Lines), a global container shipping company. My responsibilities include look-and-feel specification and coordination of user interface analysis and design across an enterprise-wide system of applications. Prior to working at OOCL I worked for the IBM Corporation as an object-technology consultant, a technical lead in product design, a user interface architect, and a human factors scientist. I have a Master of Arts degree in Experimental Psychology.

Paper Number: 9
Title: The Canonical OO Framework Pattern
Author: Shai Ben-Yehuda

The Canonical OO Framework Pattern


The object paradigm has shaped our thinking about reuse over the past decade. The most effective form of reuse in OO technology is the framework. This paper describes and analyses an architecture which occurs over and over again in OO frameworks and suggests a name for it: the canonical OO framework pattern.

What is a framework?

A framework is a reusable design for an application or a part of an application that is represented by a set of abstract classes and the way these classes collaborate (Johnson 88).

According to Coplien & Schmidt (95) a framework can be characterized by the following:

  1. A framework provides an integrated set of domain specific functionality.
  2. Frameworks exhibit an inversion of control at run-time.
  3. A framework is a semi-complete application.
In this paper I use the term framework to mean a set of predefined classes that allow the reuse of both class implementations and high-level design concepts.

Intent

The goal of the canonical framework structure is to provide framework builders with a template for a clean framework. If the canonical structure suggested here is used properly, it will enable a high level of code and design reuse and easier framework integration. The canonical framework will also support better development control.


Motivation

The field of OO framework engineering has not yet matured. The canonical framework pattern suggests a simple and tested way to structure a framework. Users of some frameworks find that frameworks are hard to integrate; the canonical framework structure can help to integrate frameworks by combining their orthogonal parts.



Structure

1. Building blocks- Classes that are used as parts. These classes are not built for inheritance and can be used easily throughout the various subsystems.

1.1. General purpose building blocks- Common building blocks like string and date. In most cases, those building blocks should be purchased.

1.2. Domain dependent building blocks- Problem domain building blocks that the company does not see as key concepts of competitive value. These building blocks should be purchased, if they exist. For example, business classes or drawing classes.

1.3. Competitive building blocks- Building blocks that reflect the added value of the company (company assets). Some infrastructure of the software product is defined at this level.

2. Abstract Framework - A set of base classes that depict abstract problem domain or design concepts. Example: an abstract framework for windowing would define the window and control abstract concepts. Abstract frameworks can be purchased if an existing framework suggests a predefined design and problem domain abstraction which is suitable for the company.

3. Subsystem - A cluster of classes built to achieve some functionality of the software system. The subsystem can have an interface to allow other subsystems to interact with it. Most of the coding of a typical project is done at the subsystem level. The framework forms the world the subsystem developers live in.

4. Backbone elements- The canonical architecture backbone is the part of the software on which many lines of code are dependent. This means that the cost of changing a backbone element is high. On the other hand, if the backbone elements are well built the maintenance cost of the software can be significantly reduced. The following elements in the canonical architecture are the backbone elements:

4.1. The set of interfaces used throughout the system- The interfaces of the building-block classes, the subsystem interfaces, and the interfaces of the abstract base classes of the frameworks. The interfaces of the classes inside a subsystem are not considered backbone elements.

4.2. The abstraction level defined by the base classes of the frameworks- If the concepts are not abstract enough, the user of the framework might have to define his concepts outside the framework. On the other hand, if the abstraction is too high, the framework user will have difficulty finding enough ground for reuse, and the concepts might be too vague as well.

4.3. General and high level processes - High level processes defined by template methods of the base classes of the frameworks. These processes are used transparently by the framework users and should be treated with care by their designers. Template methods define an interface for the derived classes by the virtual primitive operations (see [1]).
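A template method of the kind described in 4.3 can be sketched as follows (hypothetical names): the high-level process is fixed in the framework base class, while derived classes in the subsystems supply only the primitive operations it calls.

```cpp
#include <cassert>
#include <string>

// Framework base class (hypothetical). run() is the template method: a
// high-level process reused transparently by every framework user.
class ReportGenerator {
public:
    virtual ~ReportGenerator() = default;
    // The template method: its structure is fixed by the framework.
    std::string run() {
        return header() + body() + footer();
    }
protected:
    // Primitive operations: the interface the derived classes implement.
    virtual std::string header() { return "== report ==\n"; }
    virtual std::string body() = 0;
    virtual std::string footer() { return "== end ==\n"; }
};

// A subsystem class: overrides only the primitive operation it must supply.
class BalanceReport : public ReportGenerator {
protected:
    std::string body() override { return "balance: 100\n"; }
};
```

The subsystem developer never reimplements the process itself, which is what makes changes to the backbone element so expensive and so valuable at the same time.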


Collaborations

  1. Subsystem and building blocks- Classes in the subsystems can be defined by using the building blocks as parts (aggregation). For example, using the date and string classes to define a person class.
  2. Subsystem and abstract frameworks- The classes in the subsystems can inherit from the framework classes. For example: defining a concrete control class in a subsystem which is derived from an abstract control class defined in the windowing framework. The template method interface (given by the virtual primitive operations) is used to enable reuse of high-level processes.
  3. Subsystems between themselves - A subsystem can collaborate with objects in other subsystems if the objects are defined in the interface of the other subsystem. For example, interacting with a Table object defined as an interface object for a subsystem that encapsulates RDBMS services.
  4. Abstract frameworks and building blocks- Abstract frameworks can use the building blocks to define their interfaces. For example: the string class can be used to define the interface of an abstract window.
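These collaborations can be illustrated in one small sketch (all names hypothetical): a subsystem class aggregates building blocks (collaboration 1), a concrete control inherits from the framework's abstract control (collaboration 2), and the abstract control's interface itself uses a building block (collaboration 4).

```cpp
#include <cassert>
#include <string>
#include <utility>

// Building block (collaboration 1): a simple value class used as a part.
struct Date { int year, month, day; };

// Abstract framework class (collaboration 2). Its interface uses the
// string building block (collaboration 4).
class Control {
public:
    virtual ~Control() = default;
    virtual std::string label() const = 0;
};

// Subsystem class built by aggregating building blocks.
struct Person {
    std::string name;
    Date birth;
};

// Subsystem class derived from the framework's abstract control.
class PersonControl : public Control {
public:
    explicit PersonControl(Person p) : person_(std::move(p)) {}
    std::string label() const override { return person_.name; }
private:
    Person person_;
};
```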
Consequences

1. Better level of reuse- The canonical framework offers its clients, the subsystem developers, several levels of code and design reuse:

1.1. Building blocks reuse - The users of the framework can reuse simple building blocks. Using the same building blocks in all subsystems also reduces the training costs.

1.2. Abstraction reuse - The users of the framework use the same predefined abstractions (concepts) given in the framework. Using the same problem domain concepts improves the quality of integration of the whole system. Using the same design abstractions leads to design reuse. The hardest part in framework construction is to decide about the right level of abstraction of the problem domain and the design concepts.

1.3. Process reuse - Some processes can be defined using only the high-level concepts. For example, the message loop in a windowing system uses only the abstract window concept. Those processes are reused transparently by the subsystem developers. In most cases, high-level processes are defined by the template method pattern.

2. Overcoming proprietary frameworks, enabling framework integration - In most cases, deciding to use a specific framework means not using any other framework; in other words, a proprietary problem. The canonical framework suggests a way to solve that problem.

2.1. Building blocks integration - In the worst case, the building blocks of two frameworks can be integrated using the adapter pattern and automatic casting operations between parallel classes. The canonical framework distinguishes between several levels of building blocks; therefore, building-block interfaces at each level can be shared by several frameworks. For example, the same general building blocks (BB1) can be used by all frameworks. Hence, framework builders should confine themselves to building-block standards as much as they can.
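A minimal sketch of building-block integration via the adapter pattern with an automatic casting operation might look like this; both string classes and the adapter are hypothetical stand-ins for parallel building blocks from two different frameworks.

```cpp
#include <cassert>
#include <string>
#include <utility>

// Framework A's string building block (hypothetical).
class FwAString {
public:
    explicit FwAString(const char* s) : data_(s) {}
    const char* chars() const { return data_.c_str(); }
private:
    std::string data_;
};

// Framework B's parallel string building block (hypothetical).
class FwBText {
public:
    explicit FwBText(std::string s) : data_(std::move(s)) {}
    std::size_t length() const { return data_.size(); }
    const std::string& value() const { return data_; }
private:
    std::string data_;
};

// Adapter: lets framework B code consume framework A's string. The
// conversion operator provides the "automatic casting" between the
// parallel classes.
class FwAStringAdapter {
public:
    explicit FwAStringAdapter(const FwAString& s) : adapted_(s.chars()) {}
    operator FwBText() const { return adapted_; }  // automatic cast
private:
    FwBText adapted_;
};
```

As the paper notes, this is the worst case; sharing building-block interfaces across frameworks avoids the adapter entirely.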

2.2. Abstract framework integration - Abstract frameworks can be integrated if the concepts depicted by the abstractions are independent. The canonical framework suggests confining each framework to its right level of abstraction. Hence, framework builders should not use the Object concept, because it is too abstract and will prevent integration with other frameworks.

3. Better development control- The canonical framework methodology divides the system into modules with minimal interactions between them. It suggests an effective and economic model to clarify what should be inside the framework and what should be in the subsystem implementations. The division between the framework and the subsystem implementations allows better control of the development process and earlier real-problem detection. On the other hand, if the system is relatively small, it adds complexity to the system.

4. Achieving the right level of generality in a framework, given shifting requirements- The canonical framework separates the abstractions from the building blocks. Therefore, the framework architect can more easily decide on the right level of abstraction, understanding the scope of future requirements. If new requirements fall outside the abstractions but still near enough, the framework can be refined. But if some new requirements fall far from the abstractions, a new framework should be developed to handle those new concepts without conflict with the first one.

About The Author

Shai Ben-Yehuda received his MSc degree from Bar-Ilan University, Israel. He works as an international lecturer and consultant at Sela Labs, Israel. Shai has developed several OO frameworks in the past: an office automation framework in C++ (whole system cost: 10 man-years) and a simulation framework in C++ (whole system cost: 100 man-years).

Today, as a consultant, he works on OO frameworks in the following fields: a CAD/CAM framework in C++ for a new-generation CAD/CAM line of products, and a framework for a large telecom company's line of products.

He is the author of the TGP (Type-Generic class-Profile) methodology, to be published. Most of his OO experience was gained as a team leader of several projects in C++ and OO technology in the Israeli army. At OOPSLA 1995, Shai participated in the workshop on Semantic Integration in Complex Systems. Nowadays, he teaches two popular design pattern courses he developed and consults in the area of OO framework construction.


[1] E. Gamma, R. Helm, R. Johnson, and J. Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, 1994.

[2] J. Coplien and D. Schmidt, Eds. Pattern Languages of Program Design. Addison-Wesley, 1995.

[3] G. Booch. Object-Oriented Analysis and Design with Applications, 2nd ed., 1993.

[4] I. Jacobson. Object-Oriented Software Engineering. Addison-Wesley, 1992.

Paper Number: 10
Title: BluePrint
Author: Peter Kriens


The intention of the BluePrint framework was to reduce the amount of work needed to develop network management applications for the GSM cellular network. These management applications were developed at many different sites on top of a shared platform. This platform contained interfaces to the communication infrastructure, security, and network topology.

The different development sites were creating different software that had to cooperate at the customer's site. Though the objectives of the applications were quite different, a large part of the implementations was very similar. To share this common part, it was decided to develop a framework that would contain it and would be open to extensions from all the development sites.

The framework was developed by designers from all development sites over a period of about 1.5 years. Unfortunately, delays in the beginning caused the first application to start using the framework before it had materialized. This immaturity, combined with the large distances between sites, caused many problems. The fact that the framework was based on some unconventional ideas didn't help to create much support.

The framework has been used in 2 applications and will be used partially in 2 others. It is severely crippled by its current reputation. The future outlook is bleak, because a framework needs regular maintenance to stay current and the organization is not willing to budget money for this maintenance. This has caused the framework to be copied into the application code and maintained and extended separately.

One thing seems positive. The framework documentation (400 manual pages) is being read by more and more people, and the value of the framework seems to be better recognized because more and more people recognize the underlying problem that the framework tried to address. Network management is a very complicated area that has some subtle, but fundamental, differences from more traditional software.

Application Domain Background

The framework was targeted at the control software for a GSM cellular telephone network. Such a network consists of MSCs, BSCs, and base stations. The MSC is like a normal phone switch. The BSC is a switch that also controls a number of base stations. The base station is the computer and radio equipment that makes contact with the user's mobile phone.

Such networks run over very large geographic areas. The computers that make up these networks can be found all over the country. These networks are so large that it is impossible to manage these computers physically. Therefore, all these computers have a management interface that allows them to be controlled from a central location. The management consists of setting the values of the parameters of these computers. This is a major task because there are thousands of these parameters and many parameters influence each other.

Major problems in the traditional software

  1. Slow communication links. A major problem is that the communication links to these computers are slow. Therefore, a cache of these settings is necessary in the management system. This cache database is used by the applications to prepare changes in the network.
  2. Changes must be prepared and synchronized. All computers are related and influence each other.
  3. Large amount of data types. Each network element has its own release, and the parameters can change drastically between releases. This makes it very hard to place the information in a relational database.
  4. Different information models in different applications. There was no single information model. This caused clashes between applications when they tried to communicate.
Scope

The framework had to be implemented as a C++ framework for Unix computers. The first programs were going to run on Solaris 2 and HP.


When we started, we noted that the main problem was caused by the enormous number of types. Each network element type needed hundreds of parameters. Each parameter was modelled as an attribute of an object. This set had to be known in too many places: the user interface code needed it to allow the user to edit the object, the printing software needed it to print the object, the communication software needed it for sending commands, and the database interface needed it for storage. In one application we counted up to 20,000 of these attributes due to versions and variants of the main objects.

The traditional solution of storing all this information as methods in a class still caused much redundancy, because each method still had to hard-code an enumeration of all those hundreds of attributes per object. It was seen as a primary goal to localize this knowledge about the parameters.

This problem was addressed by making a special Data object the cornerstone of the framework. This object was parametrized with a Type object that contained an extensive description of the associated data. The Type objects could be created from a standard data description language like ASN.1. This description could be queried at run time; it could be recursive and contained information such as the construction, primitive types, optionality, defaults, names, and constraints.
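A highly simplified sketch of such a Data/Type pair might look like this. The real framework's descriptions were far richer (recursion, constraints, defaults); all names and the string-only value representation here are assumptions for illustration.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>
#include <vector>

// One field of a type description, queryable at run time.
struct Field {
    std::string name;
    std::string primitive;  // e.g. "INTEGER", "STRING"
    bool optional = false;
};

// A Type object: a runtime description of the associated data.
struct Type {
    std::string name;
    std::vector<Field> fields;
    const Field* field(const std::string& n) const {
        for (const auto& f : fields)
            if (f.name == n) return &f;
        return nullptr;  // unknown field
    }
};

// A Data object parametrized by a Type; generic code (UI, printing,
// storage, communication) can inspect type() instead of hard-coding
// the attribute set.
class Data {
public:
    explicit Data(const Type& t) : type_(&t) {}
    const Type& type() const { return *type_; }
    void set(const std::string& field, std::string value) {
        values_[field] = std::move(value);
    }
    const std::string& get(const std::string& field) const {
        return values_.at(field);
    }
private:
    const Type* type_;
    std::map<std::string, std::string> values_;
};
```

The point of the design is visible even in this sketch: a widget or printer written against Data and Type needs no per-object code, since it can enumerate the fields at run time.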

Looking at the group of people who had to use our software, we realized that it would be difficult to make them standardize on one implementation. (This turned out to be even more difficult than we had anticipated.) We therefore decided on a set of important functions that the system should perform. Each of these high-level functions was defined in an abstract class, and the (possibly multiple) implementations were defined in subclasses. We called this the role/adaptor model, but it is more commonly known as the Adapter or Bridge pattern (we named it before the Gang of Four published their wonderful book). A factory was used to create adaptors, minimizing the coupling between the client code and the implementations.
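The role/adaptor model with a factory might be sketched as follows. The role and adaptor names are hypothetical; the actual interfaces the project defined are not reproduced here.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <memory>
#include <string>
#include <utility>

// The role: an abstract high-level function the system should perform.
class StorageRole {
public:
    virtual ~StorageRole() = default;
    virtual std::string describe() const = 0;
};

// Adaptors: concrete, possibly multiple, implementations of the role.
class FileStorageAdaptor : public StorageRole {
public:
    std::string describe() const override { return "file"; }
};

class DbStorageAdaptor : public StorageRole {
public:
    std::string describe() const override { return "db"; }
};

// The factory: client code asks for an adaptor by name and never
// mentions an implementation class, minimizing coupling.
class StorageFactory {
public:
    using Maker = std::function<std::unique_ptr<StorageRole>()>;
    void registerAdaptor(const std::string& name, Maker m) {
        makers_[name] = std::move(m);
    }
    std::unique_ptr<StorageRole> create(const std::string& name) const {
        return makers_.at(name)();
    }
private:
    std::map<std::string, Maker> makers_;
};
```

Registration keeps the factory itself open for extension, which matters when several sites contribute their own adaptors.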

We defined the following abstract interfaces:

The definition of these interfaces was greatly helped by the Data object. It allowed us to hide the exact information type that was passed to the adaptor. For example, setup information for these adaptors could be stored in the Setup and retrieved as a Data object. This Data object could be passed to the factory or adaptor without either ever knowing the exact contents. We implemented adaptors for each of these interfaces. Some turned out to be very useful; others were never really used. All of them were implemented as Service Elements: units in the software that perform some function.

The Data object was then used to parametrize Service Elements. These service elements were like components and could be queried for their interface. Each service element performed its service with or on a Data object. The goal was to make all these components reusable as is. We developed, among many others, the following service elements:

Storage in Sybase.

This included creation of database tables from the data descriptions. Each data object could be stored in the database fully automatically (or customized if needed) without any code support. The functionality of the database (indexing, clustering, etc.) could be fully supported.

A network management information model.

We developed a special subsystem that used a description of the network and created service elements that could use this description to perform services like hierarchical storage and directory functions.

User interface components (widgets).

All widgets worked with the same basic data type: the Data object. The widgets could perform much more functionality than traditional widgets because they could inspect the data, remove elements from lists, and add elements. We found that we could add very high-level functionality to these widgets in comparison to traditional environments. The user interface was also extended with a service element that could edit a Data object fully automatically.


Encoding and decoding.

A special service element could encode a data object in an ASCII string and decode it back. Another could encode and decode with the Basic Encoding Rules, a common standard in telecommunications. This service could be cascaded with a storage service element.


Printing.

A special printing service could take a data object and format it without intervention.

Inter process communication.

Any Data object could be transported over the network.

Type management.

The Data Type objects could be stored in the database or in files, and a service element managed the Data Type objects in the system. These Type objects were grouped in Modules.

Setup management.

Most environments can only manage primitive data types (strings, numbers) in their setup handling. In the framework we used the Data object for the setup information and could thus reuse our own storage, printing, UI, and other code to implement these facilities.


  1. Usage of immature code We found great hinder from the fact that the first users of the system had to use the framework before we ourselves had had a chance to make an application with it.
  2. Lack of knowledge. The primary problem that we tried to address in the framework was not recognized by those first users, because they were all building their first network management application. Many of them were only recently employed and had no domain knowledge. We tried to support them as much as possible, but the "not invented here" syndrome gave them a strong urge to do their own thing.
  3. Applying the concepts in the wrong places. The aforementioned Data object is a very good object for the purpose it is intended for: moving information around. However, it is not a panacea for every computing problem; it carries a performance and complexity penalty. We warned them many times not to overdo this, but we failed to convince them. Only late in the project did they accept some advice.
  4. Not invented here. There was a strong feeling of competition between the first users and the framework developers. The users wanted to go their own direction and felt pushed in a direction they didn't want to go. We made the stupid mistake of believing that when they saw the results, they would become positive. This was of course wrong. Some users even complained that the framework was threatening the "freedom of the programmer".
  5. Comparison to virtuality. We found that as a framework designer you are a sitting duck. People can read your code (it was fully available) and can criticize anything. This often resulted in comparisons to solutions that were not usable. For example, our UI solution was often considered to be bad. Unfortunately we were forced to build an adaptor for XView and were constrained by this; yet we were compared to Motif, Macintosh and Windows as if these were possibilities. Later in the project it worsened, because many new things would be available "ANY TIME SOON". The fact that you cannot integrate something before it is available is of no relevance to many people.
  6. Lack of understanding. We had taken great care to make the framework as extensible as possible. We used factories everywhere and allowed all our classes to be extended and registered. However, we noticed several times that people had not understood these mechanisms and drew the completely wrong conclusion that there were severe limitations, upsetting management of course.
  7. Decoupling. On the framework side we had spent a great deal of effort on decoupling as much as possible, through the way we had set up the environment and the mechanisms we offered (e.g. factories). We succeeded in this nicely, with short compile times and very few dependencies. We assumed that the first users would apply the same attention to decoupling. However, we found that they didn't understand its importance before it was too late; a single error here is already fatal.
  8. Multiple development sites. Developing one piece of software with a geographically dispersed group is just not easy. Much direct contact is missing, and many things that take minutes when people work in one area take days when they work apart.
  9. Macro versus micro optimization. We built many features into the framework to make it more extensible and allow reusability, optimizing the macro situation for the organization. However, a designer will usually optimize his own local decisions at the micro level. The added complexity of the macro optimization is often not acceptable to him, because he doesn't see the need; he only sees the cost.
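The factory-and-registration mechanism mentioned in lesson 6 can be sketched as follows. This is a minimal Python illustration with invented names, not the framework's actual code: framework code asks the factory, never a concrete class, so a user can replace any class by registering an extension.

```python
class Factory:
    """Framework-wide registry: extensions plug in, framework code looks up."""
    def __init__(self):
        self._makers = {}

    def register(self, kind, maker):
        self._makers[kind] = maker        # a later registration overrides

    def make(self, kind, *args):
        return self._makers[kind](*args)  # framework code calls only this

class Widget:
    """A framework-supplied default."""
    def describe(self):
        return "plain widget"

class ListWidget(Widget):
    """A user extension of the framework class."""
    def describe(self):
        return "list widget"

factory = Factory()
factory.register("widget", Widget)        # framework's default registration
factory.register("widget", ListWidget)    # user override, no framework change
print(factory.make("widget").describe())  # list widget
```

The extensibility is real, but as lesson 6 notes, it only helps if users understand that the registration step, not subclassing alone, is what makes their class visible to the framework.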
Process

We developed the framework at three locations in Sweden and one in Germany. One site had (at one time) 12 people on the project, two other sites had 2 people each, and one site started with 4 and ended with 1.

Communication took place mainly over the Internet, with the Web as an important document manager. Email was used actively. One site maintained the baseline; the other sites updated the baseline when there was new functionality. During the project we had three different project managers. The author of this article was the architect. There was a configuration manager, a person responsible for testing, a tester, and the rest were developers. Manuals and application notes were written by the architect and the developers.


Paper Number: 11
Title: On the Development of Object-Oriented Frameworks
Author: Richard A. Demers
Address: Shamrock Computer Resources, Ltd
Phone: (612) 673-0839

On the Development of Object-Oriented Frameworks

After nearly three years of programming in Smalltalk, I have decidedly mixed feelings about frameworks. The ones I've created have been a great help to me, but using the frameworks created by other people was mostly an exercise in frustration. I suspect a lot of people feel the same way. So what is the problem with frameworks, and what can be done about it?

A framework is a set of classes and objects that provide a context for the implementation of some aspect of one or more applications. It provides the applications with a set of services, and sometimes with reusable components designed for the framework.

If the framework is broad enough and general enough, it also has the beneficial effect of standardizing applications, making them easier to create and maintain. Programmers can take advantage of the services and components provided by the framework, and need to learn only one way to do whatever the framework supports; alternatives need not be considered. The framework raises the level of abstraction at which the programmer works, and thereby reduces both effort and cost.

However, creating a framework with these characteristics is not an easy (or cheap) thing to do. It often takes multiple iterations to even begin to understand what is needed and possible. As additional applications are created in the framework, new requirements manifest themselves, leading to either evolution or devolution, depending on the amount of care given to the creation and maintenance of the framework.

If the frameworks of a project are to be used in multiple applications, substantial investments may be required in their creation and maintenance. However, some frameworks are actually accidental constructs, resulting from inheritance hierarchies created primarily to reduce code duplication, and are not recognized as frameworks at all. Other frameworks are initially created to accomplish specific tasks, without any concern for their reuse. In both cases they are viewed as secondary in importance to the real application code.

One of the keys to successful framework development, therefore, is to recognize when a framework is being created, identify it in the project plan as a framework, and decide on an appropriate level of investment. This recognition can occur during the analysis and design phase of the project, but it may also occur later, during development. In either case, project management needs to become specifically aware of all new frameworks so that appropriate investment decisions can be made. One possible decision, of course, is not to develop the framework at all, but to look for commercially available alternatives instead.

An important area of investment in frameworks is in their documentation. More attention must be paid to the documentation of a framework, not less. Too often, only the original programmer of a framework knows what it was actually designed to do or how to use it. It is not enough to assume that subsequent programmers, who need to either use or extend the framework, can just read the code. This is a waste of everyone's time and talents, the sign of an immature hacker, and shows a selfish lack of consideration for teammates.

At a minimum, as with all programming, the code of a framework should be created with careful attention to class, variable and method names, and with comments to explain what problems are being solved and why the code is structured as it is. But this is not enough documentation for frameworks. Specific attention must be paid to the needs of the programmers who will build applications within the framework and possibly extend it.

To the extent possible, this extra documentation should be incorporated into the on-line documentation of the code, in the form of package and class comments. But even this may not be enough for complex frameworks, particularly because code editors and browsers typically do not have adequate facilities for formatting text and illustrating concepts with drawings. If necessary, use a full-featured word processor, but make the resulting document available on-line.

The following is the outline of a document that describes a framework (with a lot of supporting figures):


- What programming problem is solved by the framework.

- General description of the framework.

- A simple example of the use of the framework.


- Programming patterns [GHJV95] used by the framework.

- Major components of the framework.

- Relation of the framework to other major system components and frameworks.

- Types of components that fit into the framework.

Framework Operation

- Lifecycle of the framework (i.e., starting, operating, terminating).

- Lifecycle of the components in the framework (i.e., creation, activation, initialization, operation, termination).


- Between the framework and its components.

- Between components.

- Between components and other instances of the framework.


- How to create components within the framework.

- How to extend the framework.

- How to create new classes of components.

Component Descriptions

(for each component accepted in the component library)

- Function.

- Operation.

- User interface requirements.

- Subcomponents, required or optional.

- Object model aspect requirements.

- Properties of the component.

- Events triggered by the component.

Common Component Patterns

Patterns of components, discovered in the initial project to be useful in solving particular problems, are described using the method and topics pioneered in [GHJV95]. An extended example is provided for each pattern. Programmers found these patterns to be invaluable in understanding the overall framework and its application.

Considerable effort is required to document a framework in this way, but programmers can more easily learn and use both the framework and its components. A result has been the increased level of reuse that gives Object-Oriented Technology its great payback. Another result has been additional requirements for new levels of support from the framework and for new components; that is, for their continued evolution.

P.S. As a result of writing this workshop paper, I came to realize that the framework documentation it prescribes could be easily provided using a [GHJV95] pattern -- the framework providing the syntax and grammar of a pattern language, and the plug-in components its vocabulary.

[GHJV95] E. Gamma, R. Helm, R. Johnson, J. Vlissides: Design Patterns -- Elements of Reusable Object-Oriented Software, Addison-Wesley, 1995.


Rich Demers has 28 years of experience as a software developer, designer and architect. He spent most of those years with IBM, was involved in many interesting application and system projects, and is the recipient of two IBM awards for outstanding innovation. He has been interested in objects since the mid-70's when he worked on the IBM System/38, the predecessor of the AS/400. Currently, he is a Senior Consultant with Shamrock Computer Resources, Ltd. and is focused on the design and development of Smalltalk client/server applications. Rich has participated in all of the OOPSLA conferences.

Paper Number: 12
Title: Submission for the Development of Object-Oriented Frameworks Workshop
Author: Larry Williams
Address: ObjecTime Limited
Phone: (613) 591-3899


I am a Senior Designer at ObjecTime Limited. My responsibilities include the development of frameworks for the representation and storage of designs created with the ObjecTime CASE toolset, plus frameworks for the various tools within the toolset (e.g. state machine editor, class structure editor). I have been working on ObjecTime since 1990.

My initial exposure to object-oriented techniques was in 1986, during my master's degree. I've been involved with developing object-oriented software (predominantly in Smalltalk) for most of the last 10 years.

Project Description

ObjecTime is an object-oriented CASE toolset that supports the ROOM (Real-time Object-Oriented Modeling) methodology. The concepts in ROOM are similar to those in other OO methods, but with some differences due to its focus on event-driven, real-time systems. Without boring you with all the details (which are available in [SGW94]), an ObjecTime design consists of:

  1. a collection of data classes which are used for the passive objects in the system;
  2. a collection of protocol classes which define the valid messages within a specific protocol;
  3. a collection of actor classes which are used to represent the active objects in the system.
Each actor class has a structure definition consisting of references to protocol classes to define the interfaces of the actor, references to other actor classes to define the decomposition of the actor, and bindings that define the communication channels between the containing actor and the contained actors. An actor class also has a behavior specification, which is represented by a modified form of Harel's statecharts.

ObjecTime started as a research project at Bell-Northern Research in 1986. In 1991, after several prototypes (and some drastic changes in the staffing), the decision was made to productize the toolset, which led to the spin-off of ObjecTime. The initial emphasis on prototyping had created a system that was not very 'cohesive'. There were some simple frameworks in place, but there were extensive opportunities for simplification and reuse throughout the toolset. During the productization phase we (re)implemented several frameworks to improve the quality and maintainability of the toolset.
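The structure definition described above can be sketched in skeletal form. This is a heavily simplified Python illustration with invented names, not ObjecTime's representation framework itself: a protocol class lists valid messages, and an actor class holds ports, contained actors, and bindings.

```python
class ProtocolClass:
    """Defines the valid messages within a specific protocol."""
    def __init__(self, name, messages):
        self.name, self.messages = name, set(messages)

class ActorClass:
    """Structure: ports typed by protocols, contained actors, and bindings."""
    def __init__(self, name):
        self.name = name
        self.ports = {}        # port name -> ProtocolClass (interfaces)
        self.components = {}   # role name -> ActorClass (decomposition)
        self.bindings = []     # (end, end) communication channels
        self.states = set()    # crude stand-in for the statechart behavior

    def add_port(self, port, protocol):
        self.ports[port] = protocol

    def add_component(self, role, actor_class):
        self.components[role] = actor_class

    def bind(self, end_a, end_b):
        self.bindings.append((end_a, end_b))

control = ProtocolClass("Control", ["start", "stop"])
editor = ActorClass("Editor")
editor.add_port("ctrl", control)
toolset = ActorClass("Toolset")
toolset.add_port("ctrl", control)
toolset.add_component("editor", editor)
toolset.bind(("self", "ctrl"), ("editor", "ctrl"))
print(len(toolset.bindings))  # 1
```

Even this skeleton shows why a representation framework pays off: data, protocol, and actor classes are all instances of a small number of core abstractions, which the real framework specializes into well over a hundred concrete classes.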

One of the primary frameworks that I helped develop (and evolve) deals with representing the design that the ObjecTime user is defining/editing. Details about this framework can be found in [CoWi93]. Currently the framework consists of about 15 core abstract classes that are specialized to produce over 100 concrete classes, along with about 15 concrete core classes that are directly used by many of the specializations. Example classes created by the framework users are ActorClass, ProtocolClass, DataClass, State, Transition, etc.

The other significant framework that I was also involved in developing (and evolving) supports the development of the 'tools' (or editors) within the toolset (e.g. the state machine editor). It currently has about 20 core abstract classes specialized to get over 150 concrete classes, along with about 10 concrete core classes.

There are several characteristics of our frameworks that affect the applicability of our experience for others:

  1. Both frameworks were initially done as a re-implementation/enhancement of a fairly extensive set of existing classes. They have since been used fairly successfully on new applications within the toolset.
  2. Our frameworks are for internal use only (as opposed to being a product for use by external customers, etc.).
  3. We have had small development teams for our frameworks. The representation framework was initially done by 1 person (myself) who continues to do most of the evolution/maintenance. The number of users for the representation framework has grown from 3 to 8. The tools framework was initially done by 1 or 2 people, and it is used by the same group of users.
Experiences

During the development and evolution of these frameworks, some of the lessons we learned were (some of which are pretty obvious):
  1. It is important to understand the breadth of use for the framework before trying to force it upon the users. We were lucky since we were re-implementing parts of an existing product, so that we had a very good idea of the current needs for the frameworks. We also had stacks of feature requests from customers, so we had a decent idea of what other features and applications would need to be supported.
  2. It's important to involve both framework developers and framework users when determining the capabilities of the framework. Since we had an overlap between the two groups, we haven't been very diligent about this, but it is key to getting a solid understanding of the domain, as well as initial feedback on the concepts.
  3. Avoid the 'everything but the kitchen sink' approach to defining the breadth of the framework. If you're not sure that something will be needed in the near term, then don't add it. Of course the framework should not be designed to make the expected future extensions more difficult than necessary.
  4. An iterative/incremental approach to framework implementation provides the opportunity to evolve/tweak the framework earlier. In the case of our representation framework, we had more of a 'big bang' approach where we redid almost all of the existing classes at once. Even after designing the framework with the 'big picture' in mind, the experience gained from actually implementing parts of the framework and using it for a subset of the intended domain will provide ideas for improvement. Prototyping can help in a similar way, provided the prototype is taken to enough detail to give real experience of the details. Our tools framework has been evolving in an incremental manner. Each release, more functionality is added and more of the existing editors/browsers are converted to use the framework. This has provided feedback that helps guide the next steps in developing the framework.
  5. It's important to understand the abilities of the intended users of the framework. Again, we were lucky since we had a small group of internal users, that grew slowly, and the framework developers were also some of the users. Understanding the abilities of the users is useful for determining how complex the framework can be. Obviously a framework intended for novice end users needs to be much easier to use than one intended for a group of software developers.
  6. The usual motherhood statement about documentation being important for the framework maintainer/extender. In our case we didn't do extensive documentation of the framework implementation because the people who developed it are still around but this is an added burden on those people. We have tended to rely on the ease of browsing code and following executions with the Smalltalk environment when trying to get others up to speed.
  7. While documentation is generally important for the framework user, it has not been that important to us. We have relied on examples (see the next point) along with the ability to browse the code and follow the execution as the primary aids to understanding the frameworks. This has been feasible for us because the frameworks are only used by our small internal team. We have also made some attempts at informal mentoring, but it has not always been as effective as it could be. This seems to have been at least partly due to situations where a new framework user is a fairly experienced software developer and so they may not make use of the mentor as much as necessary. Conversely, since our mentoring has been informal, the mentor is often busy with their own work, and so they aren't as proactive as they could be.
  8. Examples are important. They are the most common way for our framework users to understand how to use the framework. Again we are lucky to have only internal users, and so it's very easy to say 'look at how it's done in the xyz editor.'
  9. Out of date documentation may be worse than no documentation. I have made a couple of (half hearted) attempts at defining pattern languages related to the implementation and use of the representation framework but neither is complete nor accurate. These have not proven to be adequate as the sole mechanism for helping others get up to speed.
  10. Reality is that no useful framework is ever finished. There will always be the need to enhance it, and, at some point, the need to replace it. This needs to be understood by management in order to allow for continuous improvement/evolution.
  11. The incremental adoption of a framework may introduce issues with compatibility. Our tools framework was being done incrementally, and so some of the tools in the toolset were implemented using the new framework and some were still implemented using the previous mechanisms. By restricting the tool interaction to a 'high level' that was understood by both old and new, we avoided problems. There is actually a 'spectrum of integration' into the framework starting from obeying the basic high level protocols through to the extensive use of template methods to provide reuse and consistent behavior.
  12. The evolution of a framework may introduce issues with compatibility for the users of the framework. We are lucky that ours are internal frameworks, and so the affected areas are known in advance. Evolution of our frameworks seems easier due to the use of Smalltalk and its associated environment. It's very easy to determine the affected classes/methods, and to make the necessary changes. This is a much harder problem if the framework is an external product, etc.
  13. The evolution of a framework for an existing product may introduce issues with conversion. In our case we needed to be able to convert existing user designs to the new representation framework. This can be a nontrivial task if the conversion facilities are primitive (like those in some ODBs) or when the customer needs to use old and new versions of the framework (typically in separate products) during some transition period.
  14. Some of our framework users have had difficulty recognizing what parts of the frameworks are meant to be specialized and what parts are meant to be left as is. While improved documentation would help, it often seemed like education plus design/code reviews would be most effective.
  15. Some framework users have also had difficulty recognizing when they are asking the framework to do something that it wasn't designed to do. While this often was a sign of a 'hacking' mentality (along with a lack of initial understanding of their problem and intended solution), it also indicates that framework users need to understand that the framework, as it currently stands, may not support everything they want to do. A feedback mechanism from the framework users to the framework developers is required. Too often, when pressed by deadlines, a framework user may 'misuse' the framework leading to possible maintenance problems, etc.
  16. As framework users encounter limitations of the framework, the framework designers must avoid the temptation to constantly change the framework for every request that comes in. We need to try to avoid the 'misuse' of the framework but not at the expense of excessive churn for all framework users.
  17. When limitations are encountered and a framework user suggests an enhancement, framework designers must try to avoid making the enhancement too specific to the needs of this situation. If a framework is to remain as a cohesive unit, it must embody the general concepts of an area as opposed to the union of the needs of several specializations of the area. Often this results in slower turnaround for the framework user, but it needs to be accepted as the best approach for the long run.
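The 'extensive use of template methods' end of the integration spectrum described in lesson 11 can be sketched briefly. This is a minimal Python illustration with invented names, not the ObjecTime tools framework: the framework fixes the invariant sequence of an editing operation, and a concrete tool overrides only the hooks.

```python
class Tool:
    """Framework side: the template method defines the invariant sequence."""
    def open_on(self, element):            # template method, not overridden
        self.log = []
        self.load(element)                 # hook
        self.build_view()                  # hook
        self.log.append("shown")           # invariant final step
        return self.log

    # Hooks with default behavior; subclasses specialize these.
    def load(self, element):
        self.log.append(f"loaded {element}")

    def build_view(self):
        self.log.append("default view")

class StateMachineEditor(Tool):
    """User side of the framework: overrides only what differs."""
    def build_view(self):
        self.log.append("statechart view")

print(StateMachineEditor().open_on("ActorClass A"))
# ['loaded ActorClass A', 'statechart view', 'shown']
```

A tool at the other end of the spectrum would ignore `Tool` entirely and merely obey the same high-level open/close protocol, which is what allowed old and new tools to coexist during incremental adoption.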
Summary

This note lists a set of lessons that we have learned during the development of frameworks within our product. The lessons may not be broadly applicable, due to the nature of our development (e.g. a small team with an overlap between framework designers and users), but hopefully a subset of them will be interesting to others.


[SGW94] B. Selic, G. Gullekson, P. Ward, 'Real-time Object-Oriented Modeling', John Wiley & Sons, 1994.

[CoWi93] J.P.Corriveau, L.Williams, 'On the Evolution of a Framework for the ObjecTime Design Environment', Proceedings of TOOLS11, 1993.

Paper Number: 13
Title: Submission for the Development of Object-Oriented Frameworks Workshop
Author: Terrence Cherry
Address: Nortel

Development of OO Frameworks - OOPSLA 96 - Workshop #27


This document presents a personal view of a number of key success factors and areas of challenge encountered by development teams and management teams alike in building object oriented frameworks based upon personal experience. In no way is it meant to reflect official Nortel policy.

Architectural Frameworks

A framework is an abstraction of, or schematic for, a solution to a family of problems, an integrated collection of components that collaborate to produce a reusable architecture. An object oriented framework is a set of abstract collaborating classes that embody an abstract design for solutions to a family of related problems. A framework captures the design decisions that are common to an application domain, e.g. common abstractions, standard components, and required customization. It defines the overall structure of a system, i.e. its architecture: its partitioning into classes and objects, their key responsibilities, how they collaborate, and the thread of control which governs their interaction. It imposes a collaborative model that applications must adapt to, exporting a number of individual classes and mechanisms which clients can use or adapt. It inverts the traditional flow of control within the framework boundaries, enabling large scale design and code reuse, and the building and customization of applications from pre-existing components.
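The inverted flow of control described above can be shown in a few lines. This is an illustrative Python sketch with invented names (not Nortel code): the framework owns the main loop and calls into application code, rather than the application driving the framework.

```python
class Application:
    """Client code supplies handlers; it never drives the loop itself."""
    def on_event(self, event):
        raise NotImplementedError

class Framework:
    """The framework owns the thread of control and calls the application."""
    def __init__(self, app):
        self.app = app

    def run(self, events):
        results = []
        for event in events:                          # framework decides when
            results.append(self.app.on_event(event))  # the application is called
        return results

class CallHandler(Application):
    """One concrete customization plugged into the framework."""
    def on_event(self, event):
        return f"handled {event}"

print(Framework(CallHandler()).run(["offhook", "digit"]))
# ['handled offhook', 'handled digit']
```

Because the loop, its ordering, and its error handling live in one place, every application built on the framework inherits the same large-scale structure for free; the application writes only the varying parts.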

Key Success Factors

Building object oriented frameworks is hard, especially in a problem domain as large as telecommunications. Yet the rewards are enormous in terms of potential reuse, simplifying application design, ensuring overall architecture integrity, and managing evolution. At Nortel we have successfully developed a number of frameworks in the areas of call processing, connection management, and network management. The following is a list of key factors we have identified as crucial to our success.

Overall Architecture Management

Highly Collaborative Teamwork

Risk Based Project Management

Rapid Prototyping

Formal Development Process

Identification of Reusable Design Patterns

Architecture Management

It is our experience that the architecture of a system needs to be managed explicitly, independent of project management and development teams, as well as the products built upon it. A dedicated framework architecture team can own and maintain the architecture vision, ensure that diverse requirements across teams and product lines can be met, identify reusable design patterns and strategies across products, and ensure continuing integrity of the architecture as it evolves. A framework architecture team can also provide technical mentoring and architecture expertise to development teams, communicate the overall architecture to the design community, facilitate the resolution of design issues across teams, provide regular status reports to assist project management in project planning activities, carry out technical risk assessment of various product alternatives, and provide recommendations to the management team.

Collaborative Teamwork

Large projects which span many teams across geographical and political boundaries require a high degree of collaboration in order to succeed. Designing a good framework requires even greater collaboration since, of itself, a framework delivers nothing. Framework architects and designers must work extremely closely with their clients, namely the application development teams which rely on the framework to provide the required infrastructure for successful delivery of their products. The forming of joint design teams, composed of key framework architects as well as architects from each of the major client development teams, throughout the entire development process has proven to be extremely valuable. These teams are not static, but are created and disbanded as required.

Risk Based Management

A high degree of collaboration between project managers and key framework architects is equally important. Our experience has shown that a risk based project management approach whereby decisions regarding the project are based upon evaluation of both business and technical risks has been especially effective. The strategy is one of continuous risk reduction. Technical decisions are made by technical experts, i.e. key framework architects. Items are prioritized from most risky to least risky, and are reassessed on an ongoing basis. Riskiest items are addressed first; the risks are reduced until some other item becomes the riskiest. The process is iterative. Business decisions are made by project managers based upon business as well as technical risk assessment. Project managers rely on their technical experts to monitor the technical risks continuously and provide recommendations to project managers for project planning purposes. The result is more accurate and credible project plans whereby project managers and technical experts are partners in the process.


Rapid Prototyping

In our experience, rapid prototyping throughout the software development cycle is the single most significant technique enabling continuous technical risk assessment and reduction which is essential to the development of accurate, credible project plans. In general we have found three kinds of modeling activities to be of significant value. Exploratory prototyping, commonly referred to as rapid prototyping, is characterized as a "quick and dirty" method for establishing proof of concept and evaluating design alternatives. It is targeted towards the riskiest technical items only and is completely "throw away". A verification prototype is a partial model of the overall system which captures the system's essential minimal characteristics (principal behaviour). Its purpose is to ensure continuous integration and verification of the overall architecture integrity of the evolving framework. It is essential for large systems requiring integration of multiple diverse components, and is highly desirable for major project reviews. It may be evolved into a complete architecture model of the system. An architecture model captures the evolving architecture of the system in increasing detail, providing a base for further exploratory prototyping, as well as for evaluating future requirements. It is highly desirable as a mechanism for assessing the impact of evolving requirements and evaluating new product opportunities.

Development Process

A common development process across design teams, while not strictly essential, has facilitated the coordination and integration of evolving framework components immensely. The particular methodology and tools utilized in the process are not important. The critical ingredient here is a standardized vocabulary and notation for communicating designs across teams.

Design Patterns

A design pattern is a generic solution to a recurring design problem that arises when building an application in a particular context. Patterns are specified by describing their constituent components, their responsibilities and relationships, as well as their collaborations. Patterns are more abstract and less specialized than frameworks, providing a common design vocabulary and assisting in the documentation and communication of designs. They facilitate the reuse of successful software architectures and designs, explicitly capturing expert knowledge and design tradeoffs, and derisking projects by reuse of proven designs. Analysis of the various independent designs of architecture components relatively early in the project revealed that many teams were unknowingly re-implementing the same mechanisms, each in a slightly different manner. A particular combination of the strategy and observer patterns was especially prevalent, and resulted in an architecture "stake in the ground" separating the concepts of "model" and "policy". Other common patterns identified near the outset included composite, proxy, abstract factory, and facade.
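The strategy/observer combination described above can be sketched compactly. This is an invented Python illustration, not Nortel's architecture: the model holds state and notifies observers (observer pattern), while the policy that reacts to a change is an interchangeable strategy.

```python
class Model:
    """Holds state; knows nothing about the policies applied to it."""
    def __init__(self, value):
        self._value = value
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def set(self, value):
        self._value = value
        for obs in self._observers:   # observer pattern: push notification
            obs.changed(self)

    @property
    def value(self):
        return self._value

class PolicyObserver:
    """Observer whose reaction is delegated to an interchangeable strategy."""
    def __init__(self, policy):
        self.policy = policy          # strategy pattern: pluggable behavior
        self.actions = []

    def changed(self, model):
        self.actions.append(self.policy(model.value))

alarm_policy = lambda v: "raise alarm" if v > 100 else "ok"
watcher = PolicyObserver(alarm_policy)
load = Model(0)
load.attach(watcher)
load.set(50)
load.set(150)
print(watcher.actions)  # ['ok', 'raise alarm']
```

The "stake in the ground" is visible even here: the model carries no policy code at all, so teams can vary policies independently without re-implementing the notification mechanism.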

Areas of Challenge

Despite our successes in developing reusable object oriented frameworks for Nortel products, we have experienced significant challenges in the following areas.

Fuzzy Requirements

Management Collaboration

Framework Testing

Fuzzy Requirements

Despite recent advances in requirements modeling techniques, requirements capture and analysis remains very much an art. The task is particularly difficult in framework development, since requirements generally span multiple product lines, each with its own unique requirements. Framework designers find it hard to distill, from the plethora of traditional requirements specification documents across these product lines, those requirements which are candidates for support within the framework. In fact, the real clients of the framework are the application development teams themselves. Our experience has shown that framework requirements capture and analysis are most successful when application developers play a central role in the process. And since requirements are best captured in the context of a common domain model and associated vocabulary, it is vital that these application designers participate in domain modeling activities.

Management Collaboration

A key success criterion, as mentioned previously, is to employ a risk-based management strategy for framework development. A close working relationship between project managers and their key architects ensures that the technical risks within the project are manageable. Unfortunately there are many instances where technical recommendations may be in direct conflict with current business objectives. A prime example is when adopting the recommended technical solution would result in missing a relatively narrow market window. Project managers and architects must carefully weigh the tradeoffs between a short-term tactical solution and a longer-term strategic solution. Our experience has shown that a compromise is often possible, whereby a tactical solution can be found which satisfies short-term market requirements yet can be evolved gracefully towards the long-term vision. However, in management's zeal to deliver a total solution in the short term, the evolution path is frequently ignored, and the tactical solution becomes the final one.

Framework Testing

A framework, by itself, provides no end user capabilities. Rather, it is an infrastructure which enables end user requirements to be met by applications built upon the framework. It is very difficult, therefore, to test a framework in isolation from the applications which use it, yet the applications that would exercise it are being developed concurrently with the framework itself. Clearly, waiting for the applications to be built before testing the framework is not a viable approach. A verification prototype of the framework provides a context for application teams to build rapid prototypes of their applications. These prototypes not only serve to test out application designs, but can also be used for the purposes of framework testing. However, this requires a commitment from both framework and application development teams to collaborate in building and maintaining their portions of the prototype.
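One lightweight way to exercise framework machinery before real applications exist is to drive its extension points from a stub application. The sketch below is illustrative only, with hypothetical names (MessageHandler, Dispatcher, EchoHandler) standing in for an actual framework hook and the thin application code a prototype team would supply.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical framework extension point: applications subclass this,
// and the framework calls handle() for each incoming message.
abstract class MessageHandler {
    abstract String handle(String message);
}

// Framework machinery under test: routes messages to the registered
// handler and records any application failures it has to absorb.
class Dispatcher {
    private final MessageHandler handler;
    final List<String> errors = new ArrayList<>();

    Dispatcher(MessageHandler handler) { this.handler = handler; }

    String dispatch(String message) {
        try {
            return handler.handle(message);
        } catch (RuntimeException e) {
            errors.add(e.getMessage());   // framework must survive app faults
            return null;
        }
    }
}

// Stub "application": just enough behaviour to exercise the framework,
// including a deliberate failure path.
class EchoHandler extends MessageHandler {
    String handle(String message) {
        if (message.isEmpty()) throw new RuntimeException("empty message");
        return "echo:" + message;
    }
}
```

The stub is cheap to write and maintain, which keeps the collaboration cost between framework and application teams low while still verifying both the happy path and the framework's error handling.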


Personal Profile: Terrence C. Cherry

Background: raised in Montreal and Winnipeg; married with two children (boys 6 & 8)

Education: MMath Computer Science (Waterloo); BEd Mathematics, Physics, French (UBC, Toronto); MSc Theoretical Nuclear Physics (Toronto); BSc Honours Physics (Concordia)

Work Experience: telecommunications software (13+ years); high school teacher (8+ years)

Current Responsibilities: Technology Invisibility Program (Adaptive User Interfaces)

Previous Management Responsibilities:
Rainbow Software Architecture
Gambit Services Architecture
Global 100 China Development
GSF Architecture Integrity (A-Team)
GSF Agent Interworking Framework and Protocol
GSF Services Domain
Signaling Management Architecture

Previous Design Responsibilities:
CLASS Services
International Call Processing and Maintenance
International Inter-Processor Communication Protocol