Take-Home Final

LIS 385T.6, Summer 1998

Dr. John Leggett


  1. Question #1: Trace the history of open hypermedia systems. Which paper is considered seminal? Why?
  2. Question #2: What are argumentation systems? Give examples of several argumentation systems from the hypertext literature. Are these systems prominent today? Why?
  3. Question #3: Compare and contrast literary hypertext and scholarly hypertext. How do these systems differ from informational hypertext?
  4. Question #4: Trace the influence of the Dexter model on hypermedia systems. Which prominent system is directly based on the Dexter model?
  5. Question #5: Briefly describe the hyperbase research thread in the hypertext literature. Give examples of existing systems, prototypes, etc. What is the main item on the research agenda?
  6. Question #6: What is spatial hypertext? How does this relate to gathering interfaces? What is information triage?
  7. Question #7: What is meant by media-based navigation? Give examples. How does this differ from content-oriented integration in hypermedia systems?
  8. Question #O1: Describe what is meant by narrative and aesthetic properties of hypermedia. Give a media-based example.
  9. Question #9: What do we mean when we say that document genres have their own characteristic rhythm of fixity and fluidity? How does this rhythm interact with document lifetime?
  10. Question #10: Discuss the notion of structural computing. Give examples of application areas that could profitably use structural computing environments. What is the relationship of navigational hypermedia systems to structural computing?


Question #1


Trace the history of open hypermedia systems.

The first and second generations of hypertext and hypermedia systems were basically closed: They were designed to work on a particular type of hardware, using specialized and proprietary software. Porting the software to other machines was at best difficult and expensive, and at worst impossible.

This situation began to change in the mid-1980s. The first efforts in this direction were made by second-generation hypermedia systems such as NoteCards, Neptune, and Intermedia, which allowed users to extend and tailor the interface applications of the system. Users with programming skills could therefore add new capabilities or a more streamlined and specialized interface to the vanilla system they began with.

A second step toward abstraction was taken shortly thereafter, when the designers of Neptune released the Hypertext Abstract Machine (HAM). HAM served as an interface between the hypertext software and the storage hardware, translating hypertext instructions (such as the creation and deletion of nodes and links) into system commands and retrieving data from storage. Any hypertext system capable of running atop HAM could now be used on any computer that could run HAM, which made portability between operating systems much easier. Such storage abstraction machines were later termed "hyperbases."
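To make the idea concrete, here is a minimal sketch (in Python, with invented class and method names -- this is not HAM's actual interface) of what such a storage abstraction machine looks like from the hypertext system's side:

    class AbstractStorageMachine:
        """Translates hypertext operations into generic storage actions."""

        def __init__(self):
            self._nodes = {}   # node_id -> content (stands in for disk storage)
            self._links = {}   # link_id -> (source_id, target_id)
            self._next_id = 0

        def _new_id(self):
            self._next_id += 1
            return self._next_id

        def create_node(self, content):
            node_id = self._new_id()
            self._nodes[node_id] = content
            return node_id

        def delete_node(self, node_id):
            self._nodes.pop(node_id, None)
            # Drop any links left dangling by the deletion.
            self._links = {lid: ends for lid, ends in self._links.items()
                           if node_id not in ends}

        def create_link(self, source_id, target_id):
            link_id = self._new_id()
            self._links[link_id] = (source_id, target_id)
            return link_id

        def retrieve(self, node_id):
            return self._nodes.get(node_id)

The point of the abstraction is that the front end never touches the file system directly; porting the hypertext system to a new machine means re-implementing only the storage machine.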

At the same time, Amy Pearl at Sun Microsystems began to design a "link server," which abstracted varieties of linking behavior, stored them in a "link library" and served as an interface between the linking function of the hypertext system and the link storage of the operating system. (This is described in greater detail below.)

In 1988, the Dexter group had its first meeting. Over the next two years, this group -- composed of some of the most prominent researchers in the hypertext field -- would work to define an abstract model of all of the components, layers, and interfaces used in hypertext systems. Using this platform of definitions, researchers could begin to envision and design interchangeable hypertext applications.

In the early 1990s, research began into abstracting commonly performed operations on the hyperbase (especially the various forms of editing) and providing a platform so that several different kinds of tools could be used both with one another and by several people at once. When the results of this research began to be published in the mid-1990s, this platform was called a "tool integrator" or a "behavior abstraction machine."

In 1994, efforts began to merge hyperbase management systems and link server systems into a combined field of research into open hypermedia systems, in which all hypermedia functions have abstract definitions, so that functional applications that meet certain standards can work together and can support cross-platform and/or collaborative work.


Which paper is considered seminal? Why?

The seminal paper for open hypertext systems is "Sun's Link Service: A Protocol for Open Linking," written for the Hypertext '89 conference by Amy Pearl of Sun Microsystems.

Pearl's article was the first to use and define the term "open hypertext system." It proposed to replace closed, monolithic hypertext systems using proprietary software and hardware with open, extensible hypertext systems built from autonomous, functional parts (the Link Service itself, however, is proprietary to Sun).

In her article, Pearl provided a model with which independent applications could be integrated with one another through the use of a central "link service" that controlled linking behavior, thereby making each application a partial front end to a hypertext database. Such a system would be open and extensible, able to accommodate any and all applications designed to support the Sun "link library." Pearl also addressed some of the obstacles faced by open hypertext systems, such as link maintenance and deletion, versioning support, interface interference, and structureless documents.
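The architecture Pearl describes can be suggested in a few lines of Python. Everything here is a loose paraphrase of the idea, not Sun's actual protocol: anchors are (application, object) pairs, and the central service stores only the connections between them:

    class LinkService:
        def __init__(self):
            self._links = []              # the "link library": pairs of anchors

        def make_link(self, anchor_a, anchor_b):
            self._links.append((anchor_a, anchor_b))

        def follow(self, anchor):
            """Return every anchor linked to the given one."""
            return ([b for a, b in self._links if a == anchor] +
                    [a for a, b in self._links if b == anchor])

    # Each application resolves its own anchors; the service never inspects
    # the objects themselves, so any application can participate.
    service = LinkService()
    service.make_link(("text-editor", "report.txt#para12"),
                      ("cad-tool", "drawing42#gear"))
    print(service.follow(("text-editor", "report.txt#para12")))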


Question #2


What are argumentation systems?

Argumentation systems are hypertext systems which help users to explore, examine, and edit the various parts of a particular logical argument; their primary purpose is to aid the writers of complex and/or multi-faceted arguments. An argument can be represented in several nodes, with each node representing a single fact, claim, rebuttal, etc.; and each link between nodes representing a logical connection (e.g., "therefore") between those nodes.

The most popular logical schema of an argument used by argumentation system designers is that of Stephen Toulmin, as proposed in his 1958 book The Uses of Argument.
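A sketch may clarify what "nodes as claims, links as logical connections" means in practice. The node and link vocabularies below are an illustrative reduction of Toulmin's schema, not any particular system's actual type set:

    NODE_TYPES = {"claim", "grounds", "warrant", "backing", "qualifier",
                  "rebuttal"}
    LINK_TYPES = {"therefore", "supports", "unless", "on-account-of"}

    class ArgumentGraph:
        def __init__(self):
            self.nodes = {}               # node text -> node type
            self.links = []               # (source, link type, target)

        def add_node(self, text, node_type):
            assert node_type in NODE_TYPES
            self.nodes[text] = node_type

        def add_link(self, source, link_type, target):
            assert link_type in LINK_TYPES
            self.links.append((source, link_type, target))

    # Toulmin's own example, reduced to two nodes and one typed link:
    arg = ArgumentGraph()
    arg.add_node("Harry was born in Bermuda", "grounds")
    arg.add_node("Harry is a British subject", "claim")
    arg.add_link("Harry was born in Bermuda", "therefore",
                 "Harry is a British subject")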


Give examples of several argumentation systems from the hypertext literature.

The earliest example of a working hypertext argumentation system was developed in 1983 as a doctoral dissertation project by Randall Trigg. Called Textnet, the system supported several dozen link types (such as "deduction," "induction," "red herring," and "invalid"), which could be used to break down and arrange an argument into a set of linked nodes.

After completing Textnet, Trigg helped develop NoteCards at Xerox PARC. In 1986, Kurt VanLehn developed a new argumentation system atop NoteCards, designed primarily to compare and contrast competing hypotheses by displaying a matrix of links connected to arguments, facts, and the relationships between the facts and the arguments.

In the mid-1980s, a graphical version of the IBIS (Issue-Based Information Systems) argumentation system was developed, called gIBIS. IBIS was an argumentation system designed in the early 1970s to deal with issues that did not have definite answers, and could therefore be resolved only by fostering a common understanding of the various sides of the issue. The graphical version used color, shape, and a spatial overview to visually identify the nature of nodes and links within the larger argument. The system also supported query-by-example and collaboration.

Two more systems based on the IBIS model were announced in 1990. The Author's Argumentation Assistant (AAA) was designed by the GMD-IPSI and implemented atop the University of North Carolina's Writing Environment (WE). PHIDIAS was designed at the University of Colorado in part to provide argumentation-based assistance to environmental designers using CAD. Both used an updated model of IBIS called PHI, developed in 1989 by Raymond McCall (one of the co-creators of PHIDIAS). PHI defined three basic node types -- issues, positions, and arguments -- and organized arguments hierarchically. AAA displays these hierarchical trees in graphical form, with text windows available to display the contents of nodes. PHIDIAS displays the hierarchies in outline form, saving its graphical browser to display the CAD image being critiqued in the argument.
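A tiny sketch of the PHI data model just described -- three node types, organized hierarchically -- with all names and sample content invented for illustration:

    PHI_TYPES = ("issue", "position", "argument")

    class PhiNode:
        def __init__(self, node_type, text):
            assert node_type in PHI_TYPES
            self.node_type = node_type
            self.text = text
            self.children = []            # sub-issues, positions, arguments

        def outline(self, depth=0):
            """Render the hierarchy in outline form, PHIDIAS-style."""
            lines = ["  " * depth + "[%s] %s" % (self.node_type, self.text)]
            for child in self.children:
                lines.extend(child.outline(depth + 1))
            return lines

    issue = PhiNode("issue", "Where should the building entrance go?")
    position = PhiNode("position", "Put it on the north side.")
    position.children.append(
        PhiNode("argument", "The north side faces the street."))
    issue.children.append(position)
    print("\n".join(issue.outline()))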

An intriguing speech-only system that could be used for argumentation support was presented at the Hypertext '91 conference. Version 2 of Barry Arons' hyperspeech system used a variety of specialized links to allow a user to navigate vocally through a database containing supporting, opposing, and related spoken positions pertaining to a particular argument. The result is a simulated debate, which the user can navigate at his own speed, following his own interests.

Also presented at that conference was Aquanet, which came out of the same Xerox PARC milieu that had produced NoteCards. Its creators saw it as a combination of NoteCards and gIBIS, and, like PHIDIAS, it used an argumentation scheme as a way to structure knowledge (Aquanet could also be used as an analysis system and as a spatial hypertext system). An Aquanet knowledge structure consisted of an unordered set of named, typed slots and their relations. Finding the Toulmin model of argumentation inadequate (especially in its lack of support for recursion), Aquanet's designers modified Toulmin's original schema and created a language in which users could further extend the types and relations of the schema as needed.


Are these systems prominent today? Why?

Argumentation systems are not prominent today because they require special structural support (such as multiple link types) to function effectively. Such support is awkward to implement in the data-centered hypertext systems currently in wide use.


Question #3


Compare and contrast literary hypertext and scholarly hypertext.

Literary hypertext uses the hypertext form of nodes and links to experiment with new methods of literary expression. Using hypertext, stories can be read several different ways from several different perspectives; poems can include a variety of inter-linkings and diversions; annotations can be presented in several different ways, or not presented at all.

Scholarly hypertext uses the hypertext form to attempt to illustrate the complex interrelationships of the thing being studied, whether that thing is a type of molecule, an ancient polity, or an artist's oeuvre. The non-linearity of hypertext makes it possible for scholars to show the intertwining of many linear narratives, as opposed to the limit of one narrative at a time imposed by paper text. The fluidity of hypertext allows for instant and free corrections to the scholarly record, as opposed to the delay and expense of waiting for and correcting a second print edition.

Literary and scholarly hypertexts are similar mainly in that they both seek to use the characteristics of hypertext to improve the ability of their practitioners to express the truths they discover. They differ mainly in that literary hypertext focuses on experiment, whereas scholarly hypertext focuses on exposition.


How do these systems differ from informational hypertext?

Informational hypertext is (in theory) organized and presented so that the reader can quickly find the information he believes he needs and interpret it for himself. Scholarly and literary hypertext, by contrast, present an interpretation to the reader.


Question #4


Trace the influence of the Dexter model on hypermedia systems.

The Dexter hypertext reference model was devised between 1988 and 1990 to distill the common essence of the various hypertext systems that were then in use. It was designed to serve "as a standard against which to compare and contrast the characteristics and functionality of various hypertext (and non-hypertext) systems" and "as a principled basis on which to develop standards for interoperability and interchange among hypertext systems."

The Dexter model consisted of three layers and the interfaces between those layers. The runtime layer at the top and the within-component layer at the bottom were only lightly sketched out by the Dexter collaborators, because of the wide range of interface and node-content possibilities available or imaginable.

The Dexter group concentrated instead on the middle of the model: the two layer interfaces (anchoring and presentation specifications) and the storage layer. In the Dexter model, the storage layer was where the nodes (or "components"), the links, and the connections between them were recorded. The anchoring interface provided a mechanism for the storage layer to "address locations or items within the context of an individual component" whose data resided in the within-component layer. The presentation specifications interface provided a mechanism to determine how a particular component in the storage layer would be presented in case the runtime layer contained several different contexts.
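The division of labor among the storage layer, the anchoring interface, and the presentation specifications can be suggested in skeletal form. This is a paraphrase for illustration, not the Dexter specification itself:

    class Anchor:
        def __init__(self, anchor_id, value):
            self.anchor_id = anchor_id
            self.value = value    # meaningful only to the within-component layer

    class Component:
        def __init__(self, uid, content, anchors=(), pspec=None):
            self.uid = uid
            self.content = content        # data from the within-component layer
            self.anchors = list(anchors)  # the anchoring interface
            self.pspec = pspec or {}      # presentation specifications

    class Link:
        """In Dexter, a link is itself a component; its ends name
        (component, anchor) pairs rather than raw data locations."""
        def __init__(self, uid, endpoints):
            self.uid = uid
            self.endpoints = endpoints    # [(component_uid, anchor_id), ...]

    class StorageLayer:
        def __init__(self):
            self.components = {}

        def add(self, component):
            self.components[component.uid] = component

        def resolve(self, component_uid, anchor_id):
            """Hand a link endpoint back to the within-component layer."""
            component = self.components[component_uid]
            return next(a for a in component.anchors
                        if a.anchor_id == anchor_id)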

In the short run, the Dexter project brought together most of the major players in the hypertext research field in a collaborative effort that at least began the process of building a common reference platform that could serve as the basis for future hypertext standards. The designers of every major hypertext system from 1990 forward felt obliged to at least acknowledge the Dexter model, even if they did not fully conform to it.

In the long run, the Dexter model helped make open hypertext systems possible. The Dexter designers and Sun's Link Service focused attention on the promising idea of defining abstractions for all of the operations that were required to translate user input into operating system response (which has recently led to the idea of re-writing the operating system to further improve performance). By leaving hardware issues to computer manufacturers and interface issues to software specialists, hypertext researchers have been able to focus on the essentials that make hypertext work.


Which prominent system is directly based on the Dexter model?

The DeVise Hypermedia system, developed at Aarhus University and first presented at Hypertext '93, is directly based on the Dexter model, though it extends the Dexter model's handling of file sharing and cooperative authoring.


Question #5


Briefly describe the hyperbase research thread in the hypertext literature.

The idea of a "hyperbase" began in the mid-1980s with the creation of the "Hypertext Abstract Machine" (HAM), which served as an interpreter through which a hypertext system could communicate with the underlying operating system of the machine on which it was running. As Schütt and Streitz argued in 1990, "a fair amount of the functionality of a hypertext system is independent of the particular application and can therefore be implemented as an intermediary layer between the application and the persistent storage system."

In 1989, Amy Pearl at Sun Microsystems described "Sun's Link Service," which was similar to HAM in that it placed a system-independent layer between the applications that the user was operating and the stored data that the user was manipulating. This layer -- the "link service" and "link library" -- held representations of nodes and links, which each application would interpret using its own link library. Pearl's call for "open hypertext" would influence the development of future hyperbase systems.

In 1990, two influential designs appeared that incorporated HAM's concept of a hyperbase. The first was the Dexter Hypertext Reference Model, one of whose primary co-authors, Mayer Schwartz, was also a co-creator of HAM. The Dexter model included a "storage layer" to sit between the actual stored data and the hypertext interface. This storage layer contained the node-and-link (or, more accurately, "component-and-link") structure atop which a variety of interfaces could be designed, and below which a variety of data types could be supported.

The second system was Schütt and Streitz's HyperBase. Because the authors believed that "much of the functionality [needed] for a hypermedia engine...is provided by commercial database systems," they chose to build HyperBase atop Sybase, a commercial relational database management system. Like the Dexter group, Schütt and Streitz saw hypermedia engines as properly independent from both the user interface and the actual storage structure. Schütt and Streitz enumerated three differences between their system and the Dexter model: binary links vs. n-ary links, history information vs. no history information, and the existence of undefined system domains vs. all system domains defined.
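Building a hyperbase atop a relational DBMS amounts to giving nodes and links tables of their own. The schema below is a guess at the flavor of such a system -- note the binary source/target columns, matching Schütt and Streitz's choice of binary links -- using SQLite in place of Sybase purely so the sketch is runnable; it is not HyperBase's actual schema:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE nodes (
            id      INTEGER PRIMARY KEY,
            content TEXT
        );
        CREATE TABLE links (
            id        INTEGER PRIMARY KEY,
            source_id INTEGER NOT NULL REFERENCES nodes(id),
            target_id INTEGER NOT NULL REFERENCES nodes(id),
            link_type TEXT
        );
    """)
    conn.execute("INSERT INTO nodes VALUES (1, 'Introduction')")
    conn.execute("INSERT INTO nodes VALUES (2, 'Conclusion')")
    conn.execute("INSERT INTO links VALUES (NULL, 1, 2, 'next')")

    # Following every link out of a node is a single relational query.
    for (content,) in conn.execute(
            "SELECT n.content FROM links l "
            "JOIN nodes n ON n.id = l.target_id WHERE l.source_id = 1"):
        print(content)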

During the 1990s, five different groups have productively pursued hyperbase research.

The first group is centered at the GMD-IPSI in Darmstadt, Germany. In addition to HyperBase (and its successor, the Cooperative Hypermedia Server), the group has produced a version server, CoVer / VerSe, to run atop the hyperbase.

The second group is at Aarhus University in Denmark, which built its platform-independent DeVise hypermedia architecture (DHM) using the Dexter model. DHM was run atop an object-oriented database server that provided long-term transactions, flexible locking, and event notification support.

The third group is at the University of North Carolina. In 1993, they introduced the Distributed Graph Storage (DGS) component of their Artifact-Based Collaboration collaborative hypermedia system. DGS was based on a four-layer architecture: applications, application programming interface, graph-cache management, and storage. Its data model consists of objects -- nodes, links, and subgraphs -- which are processed by the graph-cache management level. Hypermedia applications built atop DGS would have access to distributed storage, multiple annotations, and file access permissions.

The Hypermedia Research Laboratory at Texas A&M was home to the fourth group, which produced a series of hyperbases during the early '90s which focused on testing open, extensible architectures and multi-user support.

The fifth and final group was at Aalborg University in Denmark, which focused on collaborative work and open, extensible architectures.


Give examples of existing systems, prototypes, etc.

In 1992, the GMD-IPSI group unveiled the Cooperative Hypermedia Server, its successor to HyperBase. CHS was part of a four-layer system design, with applications on the top, version management below, object management (CHS) next, and storage at the bottom.

At Hypertext '96, the Texas A&M group presented the latest evolution of their hyperbase system, HOSS. HOSS is a transitional system, moving the hyperbase away from its role as the hypermedia interpreter for the operating system toward a new role as the operating system itself. In HOSS, the hyperbase replaces the file system of the operating system, and provides data and structure management capabilities and seamless access control. By integrating the hyperbase and the operating system, HOSS also makes it easier for applications to work together.


What is the main item on the research agenda?

The main item on the hyperbase research agenda is the transition to structural computing -- merging hyperbases and operating systems into an integrated, first-class structure atop which to do hypermedia work.


Question #6


What is spatial hypertext?

Spatial hypertext uses the visual cues provided by spatial context to help the user understand and organize the relationships between nodes of information. In spatial hypertext, the relationship between nodes is shown by the way that visual representations of the nodes are arranged on the user's screen. Clusters of nodes can indicate topical groups. A set of columns of nodes with header nodes can indicate that each header node has a superior hierarchical relationship to the nodes below it in the column. Colors can indicate that a node has a particular function.
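The kind of inference a spatial hypertext system performs can be illustrated with a deliberately naive sketch: treat nodes whose on-screen positions fall within some radius of one another as a topical group. Real systems use much richer heuristics; all names and coordinates below are invented for illustration:

    import math

    def infer_groups(positions, radius=50.0):
        """positions: {node_id: (x, y)}. Nodes chained together by pairwise
        distance <= radius are read as one topical group."""
        remaining = set(positions)
        groups = []
        while remaining:
            frontier = [remaining.pop()]
            group = set(frontier)
            while frontier:
                current = frontier.pop()
                near = {n for n in remaining
                        if math.dist(positions[current],
                                     positions[n]) <= radius}
                remaining -= near
                group |= near
                frontier.extend(near)
            groups.append(sorted(group))
        return groups

    layout = {"a": (10, 10), "b": (40, 20), "c": (300, 300), "d": (320, 310)}
    print(infer_groups(layout))   # two clusters: ['a', 'b'] and ['c', 'd']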


How does this relate to gathering interfaces?

Spatial hypertext is helpful for (and perhaps essential to) a gathering interface because it allows the user to visualize the organization of nodes into what Jim Rosenberg termed "episodes." Users can gather nodes together by clicking and dragging their representations on a screen, or by changing their color or shape to indicate similarity. Spatial hypertext allows the user to impose his own structure upon the nodes he is working with.


What is information triage?

Catherine Marshall and Frank Shipman define information triage as "the process of sorting through (the possibly numerous) relevant materials, and organizing them to meet the needs of the task at hand." Such sorting and organizing is becoming more urgently necessary as information resources expand while available time to read through the resources does not. Information workers must "develop and apply strategies to scan, locate, skim, organize, and evaluate" the data that is available to them, because they no longer have the leisure to engage in the "scholarly reading and notetaking" that is the preferable way to glean information from data.


Question #7


What is meant by media-based navigation? Give examples.

Media-based navigation means that the user browses through a hypermedia system by using media-specific cues (such as shape, color, and construction for still images; motion for movies; and tone or melody for auditory data) rather than by using textual representations of those cues. In other words, the user inputs characteristics of the media themselves to determine navigation, rather than following links or typing keywords.

Using media-based navigation, someone who wanted to find more information about a particular painting could sketch a crude version of the painting in a window, and ask the application to find all of the paintings in its database that came within some arbitrary distance of matching the sketch. Someone looking at a painting and wanting to find similar paintings could ask the application to search its database for images that were similar, according to whatever criteria most interested the user. Someone who wanted to find a symphony could simply hum a few bars, and ask the application to find pieces of music with a similar melody.
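Under the hood, each of these examples reduces to nearest-neighbor matching over feature vectors extracted from the media. The extraction step is the hard research problem and is elided here; the sketch below shows only the "within some arbitrary distance" matching, with made-up feature values:

    import math

    def similar_items(query_features, database, threshold):
        """database: {item: feature_vector}. Items within `threshold` of the
        query vector are returned, nearest first."""
        hits = sorted((math.dist(query_features, features), item)
                      for item, features in database.items())
        return [item for distance, item in hits if distance <= threshold]

    paintings = {                          # hypothetical (color, shape,
        "Starry Night": (0.9, 0.2, 0.7),   # texture) feature values
        "Water Lilies": (0.8, 0.3, 0.6),
        "Guernica":     (0.1, 0.9, 0.2),
    }
    print(similar_items((0.85, 0.25, 0.65), paintings, threshold=0.2))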


How does this differ from content-oriented integration in hypermedia systems?

Content-oriented integration (COI) was introduced three years later by the same researchers who created media-based navigation. COI retains the capacity to use media-based navigation, but adds to it "conceptual-based navigation." To the extent that an item (graphical, textual, auditory, etc.) can be expressed as a concept, the latter form of navigation can be used to find it. If conceptual-based navigation is not possible, then media-based navigation, based on pattern recognition, is used.

The creators of COI use a picture of a hummingbird in the sky as an example of how this process works. Ideally, a user could find the picture by typing the word "hummingbird" into a search command line. If, for some reason, this isn't possible, then the user can sketch an outline of a hummingbird, or make the characteristic sounds of a hummingbird, and then ask the application to find related items. COI, then, can help people whether they have a precise idea of what they are looking for or only a vague sense of it.
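The concept-first, pattern-second behavior can be expressed as a simple fallback. The toy concept index, file names, and feature values below are assumptions for illustration, not how COI is actually implemented:

    import math

    def coi_search(query_text, query_features, concept_index, media_db,
                   threshold=0.2):
        # Conceptual-based navigation: resolve the query as a concept
        # if possible.
        if query_text and query_text.lower() in concept_index:
            return concept_index[query_text.lower()]
        # Otherwise fall back to media-based pattern matching.
        hits = sorted((math.dist(query_features, features), item)
                      for item, features in media_db.items())
        return [item for distance, item in hits if distance <= threshold]

    concept_index = {"hummingbird": ["hummingbird_in_sky.jpg"]}
    media_db = {"hummingbird_in_sky.jpg": (0.7, 0.4),   # invented features
                "sparrow_on_fence.jpg":   (0.6, 0.5),
                "ocean_sunset.jpg":       (0.1, 0.9)}

    # A precise query hits the concept index...
    print(coi_search("hummingbird", None, concept_index, media_db))
    # ...while a vague one falls back to similarity over features.
    print(coi_search(None, (0.68, 0.42), concept_index, media_db))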


Question #O1


Describe what is meant by narrative and aesthetic properties of hypermedia.

Like any other medium of human communication, hypermedia has certain possibilities for and restrictions on its capacity to convey a story. There are things people can say through hypermedia and things they cannot say: these are its narrative properties. There are hypermedia methods that are suitable for telling an understandable story and methods that are not suitable: these are its aesthetic properties. Put simply: narrative properties determine whether a story can be told at all through a particular medium; aesthetic properties determine whether it can be told effectively.

The narrative properties of hypermedia include: time (does the application run continually, or can the user rewind and fast-forward at will? is the application running in real time, or user-defined time?), space (how much of what is happening can the user see? is the screen size sufficient for showing the scene?), motion (is movement shown by video, or implied by still frames?), interaction (how can the user affect the presentation?), and navigation (how does the user advance through the presentation?).

The aesthetic properties of hypermedia include: resolution (how clear and sharp is the image quality?), color (full color, a reduced color palette, or grayscale?), size (how large is the screen? how large is the image or video?), speed (can the hypermedia application process images smoothly in real time?), and frame (how is the image or video boxed on the screen? if the navigation controls are visible, how are they arranged?).


Give a media-based example.

HyperCafe was presented at the Hypertext '96 conference to demonstrate the possibilities of hypervideo, and to propose ways of producing and presenting content and structure for the new medium.

The premise of HyperCafe is that the user is visiting a cafe in real time, and has the opportunity to sit in on and follow the conversations of the three sets of people drinking and talking there. The user is presented with "temporal opportunities" to participate -- if the user does not actively select a particular conversation, he has no opportunity to change his mind and go back, because the narrative has moved on. User instructions are provided by moving text at the bottom of the screen.

Linking in HyperCafe is both temporal and spatial. At certain times during the narrative, certain options are available to continue the narrative down a certain path, or send it in another direction. Or, in some scenes, the user can click on a transparent link in the background of the video frame and jump to another narrative sequence.
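The temporal half of this linking scheme can be modeled as links that are traversable only during a window of the running video -- a small sketch with invented scene names and times:

    class TemporalLink:
        def __init__(self, target_scene, opens_at, closes_at):
            self.target_scene = target_scene
            self.opens_at = opens_at      # seconds into the current scene
            self.closes_at = closes_at

    def available_links(links, now):
        """Links the viewer may follow at time `now`; once a window closes,
        the narrative has moved on and the opportunity is gone."""
        return [link for link in links
                if link.opens_at <= now <= link.closes_at]

    links = [TemporalLink("table_two_conversation", 12.0, 18.0),
             TemporalLink("barista_monologue", 30.0, 34.5)]
    print([link.target_scene for link in available_links(links, 15.0)])
    # -> ['table_two_conversation']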

The HyperCafe designers made an aesthetic virtue out of necessity, presenting the (necessarily poor-quality) video in black and white to make its graininess seem like an artistic statement. Unfortunately, this graininess and the small size (160x120 pixels) of the image almost certainly detracted from the effect of the spatial linking.


Question #9


What do we mean when we say that document genres have their own characteristic rhythm of fixity and fluidity?

According to David M. Levy, document genres are distinct types of artifacts, each with its own particular blend of technology and purpose. We can distinguish the genre of codices from the genre of paperbacks in large part because the latter involves the technology of high-speed electronic printing. We can distinguish the genre of inter-office memos from the genre of last wills and testaments in large part because the latter has a radically different purpose and permanence.

Because a document is generally seen as being proof or evidence of something, the fixity of a document is considered to be an essential element of it; it must, in Levy's words, "remain the same over time" and "carry the same message to people distributed in space and time." (Levy also asserts that replicability is a kind of fixity.) The value of a document is directly related to the extent to which its fixity can be trusted.

However, documents are fluid as well as fixed. Paper crumbles or burns. People annotate their copies of a document, or produce different versions of it altogether. Some document genres are more subject to fluidity than others -- regimental lists are more fluid than war memorials, newspapers more fluid than high school yearbooks.

From these facts, Levy argues that document genres have "a characteristic rhythm of fixity and fluidity;" that different genres "change at different rates and in different ways." All documents are "fixed and fluid" and "exist in perpetual tension between these two poles."

Therefore, to contrast digital documents with paper documents by saying that the former are fluid and the latter fixed is inaccurate -- for example, an ASCII text document burned onto a CD-ROM is considerably more fixed and less fluid than a newspaper. The fact that current digital documents are likely to be less fixed and more fluid than paper documents indicates only that digital technology has not yet developed the same capacity for producing document fixity that paper technology has.


How does this rhythm interact with document lifetime?

The longer a document survives, the more fluid it is likely to become, for two reasons. One, it is more likely to be annotated or revised. Two, the way it is interpreted is likely to change, so that the document no longer conveys the same message.

However, there is no correlation between the intended permanence of a document and its fixity or fluidity. In Levy's words, "documents of long duration may change a number of times (the U.S. Constitution, for example), while more transient documents may undergo little change or no change during their short lifetime (Post-it notes)."


Question #10


Discuss the notion of structural computing.

In the course of a 1987 speech discussing the future challenges facing hypertext designers, Frank Halasz touched on the two main types of hypermedia query: the content search, which examines data; and the structure search, which examines the subnetwork structures. Four years later, while revisiting these challenges, Halasz expressed his surprise that researchers had made so much progress in improving the former type of query and so little in improving the latter type.

This preference for studying data manipulation instead of structural organization has been prevalent in the hypertext community since the move toward open hypermedia systems in the mid-1980s. The World Wide Web is a product of this movement. The Hypertext Markup Language exchanges descriptive power for data portability, limiting the options of hypertext designers while at the same time expanding their potential user base.

Some prominent hypertext designers have been asking whether this primacy of data over structure is wise. As early as 1990, Laura De Young of Price Waterhouse wondered whether indiscriminate data linking of the kind later found in the World Wide Web should be considered positively harmful because of the disorientation and confusion it produced in the hypertext user. De Young reviewed the inadequacies of the then-current methods for user orientation (graphical browsers, structured nodes, hierarchical structures, and typed links), and suggested a new method: the discovery and use of the underlying structure of relationships (i.e., links) found in specific sets of data. She field-tested this method by designing an Electronic Working Papers system for use by auditors, who work with data containing many standard fields with clear interrelationships. Her conclusion was that "structuring hypertext enables development of a clear mental model and drastically reduces or even eliminates disorientation," as well as making incompleteness and inconsistencies easier to find.

De Young's research into structure was later continued by researchers at Texas A&M. In a paper submitted to the Hypertext '97 conference, three Aggie researchers argued that the hypertext community needs to focus on making the structure of computers -- their operating systems and programming languages -- conform to the needs of hypertext, rather than continuing to conform hypertext applications to existing computer structures. The Aggies called this philosophy "structural computing," and asserted that hypermedia should be seen as a special case of structural computing rather than seeing structural computing as a derivation of hypermedia.

The Aggies argued that a structural computing paradigm requires progress on three fronts. First, models must be built around structural abstractions which define data objects, rather than being built around data objects which define structural abstractions. Second, operating systems must include structural elements such as generic link and node types, so that these basic hypertext components are intrinsic to computer operations rather than being add-ons. Third, programming languages must include structural abstractions as part of their basic definitions rather than allowing them only as supported extensions. The sum total of these changes will be a computing environment which is centered on structure and content rather than on data types.
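A miniature rendering of the first of these fronts may help: below, structure is the primitive and a document is merely a specialization of it, inverting the usual data-first design. This is purely illustrative and quotes no actual structural computing system:

    class Structure:
        """The primitive: a typed element with typed connections."""
        def __init__(self, element_type):
            self.element_type = element_type
            self.connections = []             # (relation_type, Structure)

        def connect(self, relation_type, other):
            self.connections.append((relation_type, other))

    class Document(Structure):
        """A data object defined in terms of the structural abstraction,
        rather than the other way around."""
        def __init__(self, text):
            super().__init__("document")
            self.text = text

    intro = Document("Introduction")
    body = Document("Discussion")
    intro.connect("next", body)   # navigational hypermedia falls out as one
                                  # special case of generic structure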


Give examples of application areas that could profitably use structural computing environments.

The Aggies listed four areas in which structural computing environments could be put to profitable use:

  1. argumentation systems -- such systems are built around a structural model of argumentation (such as the Toulmin model) and could thus be better implemented in a structural computing environment.
  2. spatial hypertext -- more dependent on structure than traditional hypertext because it shows the user a more abstract representation of nodes, links, and navigation.
  3. botanical taxonomy -- requires both a well-defined taxonomic structure and the capacity to link between separate structures.
  4. diachronic comparative linguistics -- language definitions are built on several overlapping structures with complex interrelationships, requiring an organizational application beyond what current hypertext can offer.


What is the relationship of navigational hypermedia systems to structural computing?

Navigational and spatial hypermedia systems are the most widely used systems that are already designed using structure to determine content, and as such are good domains in which to begin the development of a rigorous and practical structural computing paradigm. Armed with the experience gained in this promising field, structural computing researchers can then branch out into studying other applications of the paradigm.