The Fluency User-Interface Builder
Department of Computer Science, Indiana University, Bloomington, Indiana 47405, USA.
Interim Design Document, October, 2005.
Original: February, 2002.
Revised: September, 2002, with Bryan Dawson, Matthew Farrellee, Gordon Murphy, Raja Thiagarajan, and Eric Westfall.
Revised: December, 2003, with Jason Baumgartner, Josh Bonner, Bryan Dawson, Nathan Deckard, James Ellis, Matthew Farrellee, James Grahn, Bruce Herr, Nate Johnson, Avinash Kewalramani, Allen Lee, Gordon Murphy, Alex Platte, Alek Slominski, Joseph Tucker, and Eric Westfall.
Revised: October, 2005
Human-computer interfaces are hard to get right. Once upon a time, they were mere afterthoughts tacked onto an application. Getting a computer to do anything at all was hard enough. By 1992, however, before the web really took off, nearly half of all corporate programming effort was devoted to user interfaces. Today the problem has only grown. Nowadays, user interfaces take up the bulk of the effort behind many applications. That proportion may well continue to grow, since fast computers keep getting cheaper and telecommunications keeps spreading.
Viewed over the last fifty years, programming effort first concentrated at the hardware end, then migrated to the user end as computers became exponentially less costly, less rare, and less finicky---and thus exponentially less important. Further, since most of the work in computing for all of that time had been on the backend, many of the most basic problems there have already been well solved, and are thus increasingly irrelevant. We still research compilers and operating systems and databases and servers and so on, but now that computers are so cheap and fast and plentiful, and basic applications for most users (mail, news, surfing, messaging, desktop, and so on) have stabilized, there's no longer as much need for rapid advance on the backend as there was back when machines were few, weak, rare, and expensive, and the problem of, say, compiling code efficiently---or even at all---was an unsolved research topic.
Today, computers, per se, are already almost irrelevant to most users. They are commodities. It's the interface that they present to their users that truly matters, and that's the only thing that most users really notice. The hardest computing task of all will thus likely always be that of dealing with the variability of desire and fecundity of imagination of the human being that all the processing power is intended to serve. User interface design may thus eventually become more properly a branch of psychology or linguistics, not computing. Computer science has thus far served it badly, perhaps partly because it hasn't needed to, and partly because it didn't know how.
The standard user-interface design pattern, Model-View-Controller (MVC), is now over two decades old. It's the oldest known design pattern. It uses the Observer design pattern to decouple the Model (data manager) from the View (visual display) and Controller (response strategy). It was designed in an era when just the idea of dynamically changing a View (instead of simply hardwiring it to the Model and the Controller and jumbling everything together, as was usual) was new. MVC, however, doesn't decouple its View from its Controller, and so fails to help programmers with complex user interfaces. It's common for both user-interface toolkits and user-interface programmers to mix the code controlling what a button looks like (its presentation), how it behaves internally (its logic), how its set of actions changes over time (its dynamics), and how it interacts with other components (its configuration). The Mediator design pattern can help ameliorate such tangled interactions, but it, too, can easily lead to one giant, hard-to-modify object handling all the callbacks and handoffs between widgets. In either case, user interfaces frequently become hard to change, mainly because of the many ways that one widget can affect other widgets.
In a complex user interface, for example, a disabled button may need to become enabled when an editable field's text changes, which may allow selection in a list, which may change the contents of a popup menu, which may enable other buttons or menus. Typically, corporations today build such interfaces by hiring user-interface designers to first talk with the ultimate users. The designers then structure the necessary interactions and tell programmers what's desired. Programmers then wire it all up, often further fouling the development cycle by interjecting their uninformed user-interface ideas as they please. Designers can do little about that except complain, since they can't program. And managers usually don't have the skill to evaluate one design over another. Since programmers are the ones at the coalface, they usually get to decide how the coal is to be mined. Users then try the resulting interface for a while, then realize it's not right---either because:
A user-interface builder that's usable by non-programmers might take some of that burden off programmer shoulders. It might also increase the satisfaction of everyone concerned by letting non-programmers meaningfully alter their own computing experiences. Perhaps the closest thing to such a builder today is Apple's Interface Builder. It, though, is largely only for programmers. It's also not portable off Macs, it's not open-source, it uses proprietary file formats and an uncommon language (Objective-C), and development on it has ceased. Some open-source builder or toolkit projects, like Luxor and XPToolkit, make user-interface appearance editable, with style sheets modifying XML files; others, like OpenAmulet (now defunct), allow easy interaction; and yet others, like UBit (only available for Unix platforms), allow easy recombination. All of them, however, plus open-source builders and toolkits like Glade and Qt, and commercial builders like Visual Basic, Delphi, CodeWarrior, IntelliJ, JBuilder, and so on, are aimed at programmers---the people least likely to have any real idea how a user interface for non-programmers should be laid out or should function. So far, the one successful open-source builder, Eclipse, is also aimed directly at programmers. Today, some non-programmers, particularly game- and web-designers, finally do have some RAD tools---like 3D-RAD, ColdFusion, DreamWeaver, and so on---but their functionality is limited, their domains are restricted, they're typically proprietary, and they're usually expensive.
That world of mostly expensive, closed-off, proprietary, hard-to-use, programmer-only builders contrasts sharply with the publishing world that web browsers have created since the mid-1990s. Before then, publishing was just as closed and specialized and expensive. Why can't amateurs produce (at least) simple user interfaces freely and easily the same way they can produce websites (including entire books) today? What if, like web pages, user interfaces (perhaps including entire desktops) could be made easy to create, edit, and share? What if, like some editors, user-interface builders could be made free and, ideally, open-source? Hate your desktop? Edit it! Love your desktop? Share it! Like the web, such sharing would promote rapid exploration of user-interface design space, and so should lead to many highly specialized interfaces that commercial software shops, including even Microsoft, simply don't have the resources to design, develop, or even just maintain. Newspaper reporters could evolve their own desktop, specialized for their needs. So could shopping mall owners, as well as publishers, hairdressers, car mechanics, computer gamers, or any special interest group at all. Such a builder might also benefit open-source projects, as anecdotal evidence suggests that they have the worst user interfaces of all (Gruber1 Gruber2, Thomas1 Thomas2 Thomas3, Levesque). Nearly all open-source projects are made voluntarily by programmers for programmers. Programming effort concentrates on cleaning up, or adding new hacks to, that project's backend, with the frontend becoming just a place to dump all the controls for those shiny but inconsistent hacks. With severe limits on both human cognition and screen real-estate, user interfaces have to be designed and redesigned many times. They usually aren't.
In sum, there seems to be a serious misalignment today in our current division of labor when building user interfaces. Users are the experts on their domains and problems. Programmers are the experts on computers and programming. Neither is expert in the other's domain. And with current user interface development tools there's no way for them to more equitably and profitably share their expertise to then create flexible and useful user interfaces quickly and cheaply. Perhaps, though, there's a way around many of the difficulties. If there were a free and open-source (and commercially applicable) visual builder that let non-programmers:
To begin to reimagine user-interface development, we must first reimagine both user interfaces and their most fundamental parts: widgets. Today, user-interface programming begins when the programmer selects a user-interface toolkit. Normally that's done for the programmer via the choice of programming language or builder. For example, in Java, programmers have a choice of three toolkits: AWT, SWT, or Swing, and several builders, including Eclipse's Visual Editor, IntelliJ, JBuilder, and so on. There are also various combinations. Swing, for example, is both a toolkit and a framework, but not a builder. In many development environments, such toolkits or frameworks or RAD tools can also depend on lower-level toolkits produced for each platform. Almost all toolkits and frameworks and builders are proprietary and contain a set of widgets made to look and act similarly.
The fundamental unit in all of them is a widget---a button, a checkbox, a frame, a canvas, and so on. Each widget is a sealed (and usually proprietary) box containing its logic (how it reacts, and what it reacts to), and its presentation (what visual and aural representations it displays on the screen or over the speakers). Today's user-interface toolkits expose each widget's functionality with an API (a list of public methods---or functions). Programmers then wire up widgets by creating source code to either:
Every user-interface builder should help programmers do all of that. But they don't. In terms of visual support, most of today's user-interface builders only let programmers visually specify the first three: widget creation, containment, and visual properties (position, color, text labels, font, and so on). Then builders convert those visual instructions into generated source code and spew the resulting code back at the programmers. Programmers must then do direct source-code editing to do the last five things---which is all the hard stuff---the linkage, dynamics, and conditional behavior. So, today, editing the user interface being produced (that is, altering its wiring) means exposing the user interface's source code to programmers so that they can manually make changes to it. Also, programmers can't run a partially built user interface to test out its interactions before it's complete enough to be compilable. And building everything directly into source code locks out non-programmers from user interface creation or editing. They are powerless. And it keeps programmers stuck firmly in the concrete.
How did we find ourselves in this awful mess? We evolved into it. The original idea for graphical user interfaces evolved at Xerox in 1973 out of what came before---which was the command line. On the command line, about all that a user can do is:
A better metaphor for a graphical user interface today, then, might be an airplane cockpit. A plane's cockpit lets a pilot fly a plane while the plane also informs the pilot of its state. Typically, the pilot flips switches and the plane blinks lights (or blares warnings). We think of the pilot as actively `flying the plane,' and we think of the plane as a passive recipient of pilot commands. However, abstracting away all such trivial details we see that both the pilot and the plane are `flipping switches' at each other. For example, let's say that a pilot detects that the plane is approaching a runway, then turns the yoke. In essence, the pilot is choosing a different state (in this case, orientation) for the yoke to be in. The plane then detects that changed yoke-state then deduces that the pilot wants it to alter its state. It then changes its state to match---say by changing the angle of the elevators. Conversely, when the plane detects that it's low on fuel, say, it turns on a light to tell the pilot. The pilot then detects the light's state change, then deduces that fuel must be low.
We can reduce each of those acts---either on the pilot side or the plane side---to a selection of a different state of some variable (the switch), which thus communicates state change from one actor to the other, and vice versa. The fundamental unit of any user interface, then, is a switch. Each switch is able to assume one of some number of states. Changing the setting of a switch is a state change. Further, the fundamental decision-making act of either actor is also a state change (either inside the pilot or inside the plane), which must then be communicated, via a corresponding state-change in the user-interface switches, to the other actor.
Any user interface, then, is really a primitive language. Reducing all user interfaces to switches, though, doesn't make the problem of building a sophisticated user-interface builder any easier. Each user-interface switch-flip doesn't necessarily influence only a single part of the plane. For example: alter a plane's ailerons and the plane turns, so the heading changes; but wind speed may change, too, now that the plane is heading in a different direction, so airspeed may slow, which may require a fuel-flow correction, and so on. Widgets aren't necessarily disconnected from the other widgets in the cockpit. We can model that, though, by imagining that, behind the scenes, wires run between widgets to allow the state of one to influence the state of others. (Sometimes that state-influencing happens directly, but often it happens indirectly because all of the widgets are influencing the state of some single artificial being---say, a plane or a chemical factory or an oil refinery or a nuclear power plant or a space shuttle---or a game or a database or a mail or news spool.)
However, although the user-interface builder problem is still as complex as ever, we can at least now see that every graphical user interface is a set of widgets, each of which we might metaphorically picture as a box with switches on the front and back. A user interface is then a wall of such boxes, each bristling with switches and potentially with a mass of wires behind it connecting one widget to another so that widget state change can propagate from widget to widget. For each widget (each box), the human being (the user) sees the widget's front switches and the artificial being (the application) sees the widget's back switches. Both actors can flip any of the switches that they can see, then wait for the other to react. Both actors observe the resulting configuration of switch settings, using that configuration, plus their own internal logic, to try to infer each other's state, and so figure out what to do next.
In that metaphor, widgets---buttons and checkboxes and so on---mirror their back switches on their front, let's say with a plexiglas-covered bank of switches on the front of the box, below the switches that the human being can flip. That bank of visible but unflippable switches is what most of the world today thinks of as `the widget.' The artificial being can flip a widget's back switches which then cause switches under the plexiglas to flip, thus altering things that the user can see or hear, like the box's foreground color, background color, displayed image, text label, font, 3-d look, translucency, shading, greying, highlighting, blinking, and hiding. Even sound- and size- and shape- and location-changing are sometimes possible in a few really modern user-interface toolkits (usually only in those intended for use in games). Each such switch setting helps indicate to the human being the artificial being's state. That works in reverse, too, since the human being can flip a widget's front switches (which the artificial being can then observe), by typing into a textarea, or moving or dragging the mouse over a pane, or clicking or double-clicking the mouse, or, more recently, speaking to the user interface. In short, each actor is `talking' to the other, using the wall of switches between them as an aid to communication. The wall supports a language.
Such languages exist because the artificial being can't speak or smile or groan, and the human being can't parse binary, use boolean algebras, or do arithmetic at electronic speeds. So the human being watches the switch-flip configuration state of each widget to try to infer things about the state of the artificial being. Similarly, since the artificial being cannot easily decipher, or usually even observe, the human being's gestures, speech, or eyegaze, it watches each widget from the other side to try to infer things about the human being's state based on the widget's switch-flip configuration. Both are using the same wall of switch boxes between them as a common language to communicate with each other.
That, ultimately, is what a graphical user interface is. It exists to support a language made up of graphical switch-flipping sequences. To create any graphical user-interface builder, then, we must create a computer program that a human being can use to help build graphical grammars. Each grammar specifies what each possible sequence of switch-flips would mean in terms of widget state changes, thus defining a switch-flip-sequence language for use by both a user and an application. Every graphical user interface, then, sets up a language between a human being and an artificial being, with the intent of letting them jointly solve a problem that neither could solve separately.
We've now reduced the problem of building a user-interface builder to that of making any graphical switch-flip language more fluid. The first step to building a more fluid user-interface builder is to make the user-interface programmer's work more uniform and more undoable so that we can then write a computer program to help a non-programmer reconfigure whatever the current switch-flip-sequence language is into another switch-flip-sequence language that supports more fluid communication between two actors, one human, one artificial. Further, to make the building of, and the sharing of, such user interfaces among non-programmers possible, besides supporting the 8 actions that user-interface programmers may have to do today (see above), such a builder must, in addition:
That design problem seems impossibly difficult, but it's not. Once we reduce every widget to a collection of switches we see that for any one widget, its switch settings change depending on state changes inside the widget's box, which represents the core of the widget---its logic. That logic can set the widget's external switches differently based on the widget's input (that is, switch flipping on its back side). (Note: a widget may also change state even when presented with no input---a timer or a clock, for example, or, more generally, anything with its own thread, can do so). In an advanced graphical user interface today, such widget inputs could come from:
The first key idea on the road to creating such a new kind of user-interface builder is to observe that although all widgets are visual in most of today's user interfaces (and user-interface toolkits and frameworks and builders), they needn't all be. Now that we've reduced widgets to switches, we see that there's nothing about them requiring every widget to be visible to the user interacting with a running user interface. Non-visual widgets sound like a contradiction in terms, but they're very helpful. They are widgets with no visible representation in a running user interface. And they can be just as useful as, and be treated just the same as, traditional (visual) widgets. They differ only in that they have no 'bottom bank of switches'---the user can't see them in the running interface. They can still, however, have user-detectable effects---via links to visual widgets. For example, a database usually doesn't have a pictorial representation in most user interfaces, but its effects can still be mirrored in dynamically changing menu options, or text fields displaying search results, and so on.
With the idea of non-visual widgets in hand, it's now easy to see that we can uniformly model all six of the above ways that widgets can be influenced with just the last one: widgets linked to each other. For example, a graphical user interface could use a non-visual widget to represent a network that the computer hosting the user interface is on (an application of the Proxy design pattern). The network thus functions as the internals of that non-visual widget's `box.' As the network changes state, (visual) widgets linked to the (non-visual) network widget might also change state. Thus, one way to view any graphical user interface at all is as a set of linked visual and non-visual widgets. This is the first step toward the simplification and uniformization that we need to make it possible to build a graphical user-interface builder that even non-programmers can use to fluidly build complex user interfaces. Fluency is so named because its intent is to make the production of graphical user-interface languages more fluid. The name also hides a small pun, since each actor in the continuing communication act is trying to influence the other.
Fluency is a graphical user-interface builder, but it doesn't generate source code then compile and run it. To emphasize the point that this isn't traditional source-code-style programming, Fluency's users are called 'authors,' not programmers or users. Authors use Fluency to produce more fluid user interfaces for users to use. And authors may also be users, or even programmers.
Fluency works in two stages, as Java itself does.
Today, all widgets outside of Fluency are, by definition, visual and are cross-linked by hand, in source code, with all that cross-linking code overloaded on top of each widget's regular duties. How widgets work, how they link, how they interact dynamically---everything about them is hardwired by hand in source code. Changing anything (except for overlaid images, called skins, on a few recent and quite special applications, like Winamp) is thus hard. Not to mention expensive and frustrating.
A Fluency Widget, however, is any program, visual or not, that can receive Events, emit Events, and execute Actions. An Event is a passive information-bearing object whose type indicates what information it bears. It's an encapsulation of Widget state change. An Action is an active information-altering object whose name indicates what it does. It's an encapsulation of a (traditional) widget's method invocation (or function call, depending on the underlying language). Actions are an application of the Command design pattern and compound Actions can be built out of other Actions with the Composite design pattern (that's called the MacroCommand design pattern).
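As a concrete illustration of those definitions, here is a minimal sketch in Java. All the names and signatures (Event, Action, MacroAction) are assumptions for illustration, not Fluency's actual code; the compound-Action case appears as the Composite-of-Commands ("MacroCommand") idiom the text describes:

```java
import java.util.ArrayList;
import java.util.List;

// A hypothetical Event: a passive, typed, information-bearing object.
final class Event {
    final String type;   // indicates what information the Event bears
    final Object value;  // the encapsulated state-change information
    Event(String type, Object value) { this.type = type; this.value = value; }
}

// A hypothetical Action: an application of the Command design pattern.
interface Action {
    String name();       // the name indicates what the Action does
    void execute();      // encapsulates a widget's method invocation
}

// Compound Actions built from other Actions with the Composite design
// pattern---the "MacroCommand" combination.
final class MacroAction implements Action {
    private final String name;
    private final List<Action> parts = new ArrayList<>();
    MacroAction(String name) { this.name = name; }
    void add(Action part) { parts.add(part); }
    public String name() { return name; }
    public void execute() { for (Action part : parts) part.execute(); }
}
```

Executing the MacroAction executes each sub-Action in order, so a compound behavior like "beep, then hide" can be assembled from existing Actions without writing new widget code.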
In Fluency, there's no necessary relation at all between any of a Widget's three possible behaviors. A Timer, for example, or, more generally, any Widget running in its own thread, needn't receive an Event before emitting one, nor must it receive an Event after emitting one. Further, a Widget may emit no Events (example, a clickable image), it may receive no Events (example, a static label), and it may have no Actions (example, a progress bar). Further, even when a Widget both emits Events and executes Actions, it needn't emit an Event or execute an Action after receiving an Event (example, a clipboard), nor need it emit only a single Event or execute only a single Action at any one timestep (example, a database).
Fluency Widgets are of two main types: visual and non-visual. A visual Widget executes Actions with visual (or aural, for example, a dialog box's beep) consequences in the interface. Often it uses the Adapter design pattern to delegate many (or all) such Actions to some underlying object from a user-interface toolkit (Fluency makes such toolkits interchangeable with the Abstract Factory and Bridge design patterns. But see the Holder footnote for one kink in that idea). A non-visual Widget, on the other hand, executes Actions on behalf of some other object (or objects, as we'll see later) in the user interface. Normally it, too, delegates such Actions, but not to a user-interface toolkit object. It delegates to an underlying algorithm---for example, a file reader, a compression algorithm, a sorting algorithm, a video codec, a music player, a database, a network monitor (later, though, we'll see other kinds of non-visual Widgets that don't delegate to anything). In either case, a Widget, visual or non-visual, is an object that other Widgets might ask to execute Actions or receive or emit Events, and that is all.
For every Widget, visual or non-visual, Fluency presents to its author a list of any Actions that that Widget supports. Thus, every Widget needs to advertise what Actions, if any, it can be tasked to execute, and what they mean. Every Widget thus implements the following Actor interface:
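The interface listing itself doesn't appear in this chunk. A minimal Java reconstruction consistent with the surrounding description---a Dictionary of Actions keyed by descriptive Strings---might look like the following; the exact method names are assumptions, and the Action type is a minimal stand-in repeated here for self-containment:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal stand-in for Fluency's Action type.
interface Action {
    void execute();
}

// Hypothetical Actor interface: every Widget advertises the Actions it
// supports as a Dictionary (here a Map) so the builder can list them.
interface Actor {
    Map<String, Action> getActions();
}

// A sketched Button advertising a few of the Actions the text names.
class Button implements Actor {
    private boolean enabled = true;
    private String label = "";

    public Map<String, Action> getActions() {
        Map<String, Action> actions = new LinkedHashMap<>();
        actions.put("Disable", () -> enabled = false);
        actions.put("SetLabel", () -> label = "OK"); // parameterless for now
        return actions;
    }
    boolean isEnabled() { return enabled; }
}
```

An author (or Fluency itself) can now discover and trigger the Button's capabilities entirely by querying the Dictionary, with no compile-time knowledge of the Button's methods.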
So a Button, say, instead of a public list of methods (or functions), would have a public Dictionary of Actions, containing, say, SetLabel, ChangeBackgroundColor, Resize, Fill, Beep, Disable, Hide, and so on, plus appropriate String keys describing what those Actions do in more detail.
Fluency also lets its author specify how one Widget's behavior should alter another Widget's behavior. Traditionally that's handled in builders by letting programmers attach event handlers to each widget in code. Fluency, however, to make such Widget linkage and unlinkage possible graphically and without reference to source code, uses the Observer design pattern to let Widgets exchange Events. That capability needs two further interfaces: Receiver and Emitter.
First, every Widget needs to advertise that it can receive Events. Further, for Fluency to tell its author what Events any particular Widget can notice, if any, and what they mean, each Widget also needs to advertise what those Events are:
Second, every Widget also needs to advertise that it can emit Events, what those Events are, and what they mean, thus letting the author attach Receivers to any Widget to start listening to, or stop listening to, any Events that the Widget may choose to emit:
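The two interface listings are absent from this chunk; the following Java sketch shows one plausible shape for them, with the InEvents and OutEvents Dictionaries rendered as Maps from an Event name to a human-readable description. All names here are assumptions, not Fluency's actual code:

```java
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

// Minimal stand-in Event type, repeated for self-containment.
class Event {
    final String name;
    Event(String name) { this.name = name; }
}

// Hypothetical Receiver: advertises the Events it can notice (InEvents)
// and accepts delivered Events.
interface Receiver {
    Map<String, String> getInEvents();  // Event name -> what it means
    void receive(Event e);
}

// Hypothetical Emitter: advertises the Events it may emit (OutEvents)
// and lets Receivers start or stop listening (the Observer pattern).
interface Emitter {
    Map<String, String> getOutEvents();
    void addReceiver(Receiver r);
    void removeReceiver(Receiver r);
}

// A sketched non-visual Ticker that only emits.
class Ticker implements Emitter {
    private final Set<Receiver> listeners = new LinkedHashSet<>();

    public Map<String, String> getOutEvents() {
        Map<String, String> out = new HashMap<>();
        out.put("Tick", "emitted once per clock tick");
        return out;
    }
    public void addReceiver(Receiver r) { listeners.add(r); }
    public void removeReceiver(Receiver r) { listeners.remove(r); }

    void tick() {  // driven by the Ticker's own thread in a real system
        for (Receiver r : listeners) r.receive(new Event("Tick"));
    }
}
```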
In short, every Fluency Widget, visual or non-visual, is an EAR (Emitter-Actor-Receiver). It's any implementation of the following Widget interface:
Here's a peek inside a Fluency Widget:
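The listing for that peek isn't reproduced in this chunk. As a stand-in, here is a hedged, self-contained Java sketch of one complete EAR: a non-visual CounterWidget that receives Increment Events, supports a Reset Action, and emits Changed Events. Every type and name is illustrative, not Fluency's actual code:

```java
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

// Minimal stand-in types, repeated so the sketch is self-contained.
class Event {
    final String name;
    final Object value;
    Event(String name, Object value) { this.name = name; this.value = value; }
}
interface Action   { void execute(); }
interface Actor    { Map<String, Action> getActions(); }
interface Receiver { Map<String, String> getInEvents(); void receive(Event e); }
interface Emitter  {
    Map<String, String> getOutEvents();
    void addReceiver(Receiver r);
    void removeReceiver(Receiver r);
}

// Every Fluency Widget is an EAR: Emitter, Actor, and Receiver at once.
interface Widget extends Emitter, Actor, Receiver {}

// A peek inside one possible Widget: a non-visual counter.
class CounterWidget implements Widget {
    private int count = 0;
    private final Set<Receiver> listeners = new LinkedHashSet<>();

    public Map<String, String> getInEvents() {
        return Map.of("Increment", "add one to the count");
    }
    public Map<String, String> getOutEvents() {
        return Map.of("Changed", "the count has a new value");
    }
    public Map<String, Action> getActions() {
        Map<String, Action> actions = new LinkedHashMap<>();
        actions.put("Reset", () -> setCount(0));
        return actions;
    }
    public void receive(Event e) {
        if ("Increment".equals(e.name)) setCount(count + 1);
        // Events outside the advertised InEvents are simply ignored.
    }
    public void addReceiver(Receiver r) { listeners.add(r); }
    public void removeReceiver(Receiver r) { listeners.remove(r); }

    private void setCount(int c) {
        count = c;
        for (Receiver r : listeners) r.receive(new Event("Changed", count));
    }
    int getCount() { return count; }
}
```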
Each of a Widget's three advertised Dictionaries (for Actions, InEvents, and OutEvents) could be empty. For example, a Widget may honor no Actions, in which case its returned Actions Dictionary is empty (an application of the Singleton and Null Object design patterns). Fluency places no restrictions at all on any computation that any Widget, visual or non-visual, executes.
First, why allow any Widget to connect to any Widget? Depending on how the author chooses to link Widgets, one Widget may send Events to another Widget that doesn't have those Events in its advertised InEvents (the Dictionary of Events that it has advertised it can notice); any such receiving Widget, however, may simply ignore such Events. That's intended to make Fluency `author-friendly' in the same way that web browsers are `website-creator-friendly.' This choice may sometimes lead to inefficiency, especially in the hands of beginner authors, but better that than to frustrate or annoy or waste the time of any author. Entry requirements for using Fluency should be as low as possible, as HTML and HTTP and ViewSource made them for the early web. Fluency should allow nearly anything an author might want to do, even if `wrong'. That's one distinction helping to explain not only the spectacular success of the web, but also the spectacular success of simple, flexible, and evolvable 3D games like Quake over large, rigid, and carefully planned 3D-environment languages like VRML.
Second, why support both Events and Actions? Both Events and Actions carry information between Widgets, so either could be used alone to do everything required. So why does Fluency allow both? Wouldn't it be simpler, easier to code, and perhaps more efficient, to only allow one or the other? Fluency allows both to encompass variant programmer expectations about how disparate Widgets, visual and non-visual, typically work. The usual way to work with a visual object like, say, a button, is via user action, not programmer linkage. Thus, programmers typically see a button, say, as a Receiver and Emitter, and not as an Actor. A button usually expects a MouseClicked Event, say, rather than a request to execute its press() method. That reaction-oriented rather than action-oriented style is common among programmers of objects that could become visual Widgets in Fluency. Conversely, the usual way to work with a non-visual object like, say, a database, is via programmer linkage, not user action. Thus, programmers usually see a database as an Actor (albeit, with Actions implemented with direct method invocations or function calls), and not as a Receiver or Emitter. A relational database, say, usually expects an SQLQuery whose parameters carry the information to search for and whose return value contains the list of search results. It doesn't expect an SQLQueryEvent containing the information to search for and a place to put the search results. That action-oriented rather than reaction-oriented style is common among programmers of objects that could become non-visual Widgets in Fluency (which is most programmers, as most programmers work on, or are mostly suited to work on, the backend of an application). Finally, several user-interface toolkit objects frequently both act (with methods or functions) and can receive and emit events (example, Swing's JTextPane). Fluency can absorb them all. 
Any computer program at all---threaded or non-threaded, visual or non-visual, server or client, database or web browser---can be made into a Fluency Widget. All a programmer has to do to turn any prewritten program into a Fluency Widget is to create a wrapper (Proxy or Adapter) that either responds to or emits Events, or executes Actions, or both. In short, all that's required is that the program (or a Proxy or Adapter of it) implement the EAR interface.
Third, if Fluency is intended for use primarily by authors and not programmers, then why bother trying to satisfy programmer expectations and assumptions about different kinds of objects that could become Fluency Widgets? Fluency can't function well unless programmers supply it with a lot of Widgets, especially the non-visual kind. Authors, not being programmers, can't create new Widgets for themselves, so they can work only with the set of Widgets that they're given (later, though, we'll see that that's not completely true). Thus, Fluency must be programmer-friendly as well as author-friendly. Programmers must find it easy (and later, perhaps, profitable) to feed Fluency by producing lots of different kinds of Widgets, which authors then visually combine and recombine with Fluency's aid. Fluency must thus please both authors and programmers.
Fourth, and last, why not stick with public methods or functions directly, as every other builder does? Fluency could expose them to the author just as easily as Actions and cut out all that programming bother and builder inefficiency. Fluency uses Dictionaries rather than public methods for dynamism, uniformity, flexibility, and extensibility.
At first glance, Widget Actions might appear simple---Check or Uncheck a CheckBox, or Enable or Disable a Button, say. However, such Actions are statefree. They therefore don't require parameters. Essentially they're the same as simple Events and could just as easily be realized by, for example, something like CheckEvent, UnCheckEvent, EnableEvent, DisableEvent. How a Widget chooses to implement that triggering of state change, whether with (simple) Actions or (equally simple) Events isn't important. Stateful Actions, though, are more complicated, for they require parameters and might produce return results.
Consider a simple method call on an object that we want to convert to a Fluency Widget, say, a TextArea:
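A minimal sketch of such a call, with hypothetical names chosen to match the gloss that follows:

```java
// Hypothetical sketch of the TextArea method under discussion: insert the
// given String into the text buffer and return the new end-of-buffer position.
class TextArea {
    private final StringBuilder buffer = new StringBuilder();

    public int setText(String text) {
        buffer.append(text);    // simplified: a real widget would insert at the caret
        return buffer.length(); // new location of the end of the text buffer
    }
}
```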
Let's say that this means: Take the given String and insert it into your current text buffer, then return the new location of the end of your text buffer as an int. (What it actually may mean, or whether it's even a good way to implement it, isn't the point right now). Let's say that the method is supposed to display the final result of a user action sequence where the user moves the mouse to some location in the textarea's displayed buffer, then clicks there, then types a string, then clicks away (off the textarea), or presses ESC, or double-clicks, or right-clicks, or whatever. Once that sequence ends, the TextArea Widget must have its state updated to reflect what the user just did. That can only happen if the TextArea's SetText Action can receive (and perhaps emit) values.
What that all amounts to is that there might be at least five steps involved in the execution of any Action:
Roughly, the Action must (1) collect its input parameters, which may arrive from several different Emitters; (2) extract and type-check each incoming value; (3) decide when all required parameters are present, so that it's ready to run; (4) actually execute; and (5) distribute any return values to whatever Receivers care about them.
Putting all that into each Action would overload each Action with extraneous detail and boilerplate. So to make all Actions simple, and simple to program, and also uniform, and thus uniformly treatable inside Fluency, Actions have helper objects called Docks. A Dock can either be a Receiver or an Emitter and each one holds a single typed value (a String, an Integer, a Font, a Color, whatever). Fluency currently uses two types of Docks---InputDocks and OutputDocks.
InputDocks are Receivers. They marshal parameters for their Actions. They let Fluency feed an arbitrary number of arbitrarily typed input parameters to an Action without having to clutter the Action code with the marshaling. With InputDocks, each parameter, typically wrapped inside an Event (in the current implementation it's something more specific, called a DataEvent), could come from any Emitter (note: Fluency doesn't presently bother to check that there's exactly one Emitter attached to each InputDock, but perhaps it should). An InputDock, on receiving an Event, extracts the appropriate value (and its type) and stores it. That value will be one of the parameters that the Action will fetch before it can execute---if and when it executes.
Similarly, OutputDocks are Emitters. They let Actions broadcast their return values (if any) to any Receivers that might care about any particular one of them. So any Action can output multiple return values, and each one can be observed by any Receiver. This avoids any possible demultiplexing problem with Action return values. For example, if all Actions had only one OutputDock, then an Action that needed to return two Integers (say, to specify the width and height of some new Widget, such as a new overlay) would leave us unable to tell which emitted Integer is which. We'd somehow have to make their emission sequence matter, as regular methods or functions do with parameter signatures, but that approach to demuxing is well-known to induce programmer error, something we must avoid, especially since the author is not a programmer to begin with. With a separate OutputDock for each return result, though, each Integer is emitted by its own OutputDock, one for width and one for height.
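The Dock mechanism can be sketched in a few lines; the class names follow the text, but the signatures (and the example ResizeAction) are assumptions for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical sketch of Docks as Action helpers. Each Dock holds one value.
class InputDock { // a Receiver: marshals one parameter
    private Object value;
    void receive(Object dataEventValue) { value = dataEventValue; } // store on Event arrival
    Object fetch() { return value; }                                // Action reads it on execute
}

class OutputDock { // an Emitter: broadcasts one return value
    private final List<Consumer<Object>> receivers = new ArrayList<>();
    void register(Consumer<Object> r) { receivers.add(r); }
    void emit(Object v) { for (Consumer<Object> r : receivers) r.accept(v); }
}

// An illustrative Action with one InputDock per parameter and one OutputDock
// per return value, e.g. computing a new overlay's width and height.
class ResizeAction {
    final InputDock size = new InputDock();
    final OutputDock width = new OutputDock();
    final OutputDock height = new OutputDock();

    void execute() {
        int s = (Integer) size.fetch(); // gather parameters from InputDocks
        width.emit(s * 2);              // each return value leaves by its own OutputDock,
        height.emit(s);                 // so Receivers never have to demultiplex
    }
}
```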
Since InputDocks are Receivers and OutputDocks are Emitters, Fluent authors can attach any InputDock to any Emitter, including any Widget or OutputDock. Thus, Actions could trigger other Actions, and any Action can take its inputs from several Emitters, including OutputDocks of other Actions. Further, unlike the typical method or function call APIs of normal user-interface toolkit widgets, Widget Actions in Fluency can take inputs produced by several other Emitters (usually Widgets) in the interface, not just from one single Emitter as all methods or functions must. For example, one particular Widget's Action may be set to trigger only when input is available from two or more different Widgets. It's also possible for Actions in Fluency to dynamically change their number of parameters from one execution to the next (we don't do that yet, though). It's also possible, and easy, in Fluency for Actions to be `sourceless'---or rather, Fluency itself is their source, not any particular Widget in the interface (Fluency itself is a Widget, just as if it were a really big Button or ScrollBar). Finally, note that with the MacroCommand design pattern, Actions can also be compound. So any sequence of Actions, each supported by perhaps a different Widget (of which Fluency itself can be one) can be executed as one single Action, allowing any arbitrary set of things at all to happen on any condition at all. For example, Fluency uses that scheme internally to do persistence. With such a general Action mechanism, a Fluent author's capabilities are much expanded. (See the Action Footnote for further discussion.)
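The MacroCommand idea mentioned above takes only a few lines to sketch (the names are hypothetical): a compound Action is simply an Action holding a list of other Actions, so it can be triggered, nested, and stored in a Widget exactly like a simple Action.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of compound Actions via the MacroCommand design pattern.
interface Action { void execute(); }

class MacroAction implements Action {
    private final List<Action> steps = new ArrayList<>();
    void add(Action a) { steps.add(a); }
    // Run the whole sequence as one single Action.
    public void execute() { for (Action a : steps) a.execute(); }
}
```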
Fluency has two stages: during buildtime, an author can edit any Fluent user interface---creating, deleting, linking, or unlinking Widgets---then during runtime, a user, who may or may not be the same as the author, runs the resulting user interface. While the Fluent user interface is running, it, too, may need to create, delete, link, or unlink Widgets. Fluency handles Widget creation with the Factory Method and Abstract Factory design patterns (it also uses the Prototype design pattern to compose dynamic menus on the fly), but in this section we'll focus on how it supports dynamic linkage and unlinkage at both buildtime and runtime.
First, authors might need to be able to link Widgets tightly or loosely. For example, two Widgets might need to be so functionally bound that they exchange Events continuously, fitting together to compose one compound Widget only barely separated into parts---for example, a Font family chooser, a Font style chooser, and a Font size chooser. Their linkage, however, can't be so tight that they can't be easily separated again---for use in other compounds, say. On the other hand, another two Widgets might need to exchange only one piece of data at most once during a runtime session, and that exchange might be one-way, and only if a certain rare condition occurs---for example, a Network monitor and an Alarm. They, too, need to be linked, but loosely.
Second, besides such static links, authors need to be able to specify dynamic linkage to allow for conditional creation, deletion, linkage, or unlinkage of Widgets at runtime. For example, an author may need a popup menu, a network login dialog, a tearoff menu, a toolbar, a (non-visual) database, or any Widget at all, to only enter the user interface for some period of time during runtime, depending on some condition. Further, such newly created Widgets may need either tight or loose linkage to Widgets already in the running user interface---or perhaps even to each other. Finally, to make all that independent of programmers, all links---whether static or dynamic, tight or loose---must be easily doable and undoable visually and without programmer aid.
Every Widget is an Emitter, so an author can link any Widget to any number of Receivers, letting them all receive all its emitted Events. Every Widget is also a Receiver, so an author can link any number of Emitters to any Widget for it to receive all of their emitted Events. Thus, an author can daisy-chain any number of Widgets in any order, like LEGO-bricks, and any such link can be 1-1, 1-many, many-1, or many-many. For example, of the two Widgets, WidgetA and WidgetB, in the schematic diagram below (the 'E's are emitters, the 'R's are Receivers, and the directed edges are Event streams), an author might:
What makes all this work is that, thanks to Observer, neither Widget ever has to know or care whether it's being linked to---or even if any other Widgets exist. Since neither has to know, neither is affected if they are linked---or unlinked. Every Widget acts as if it alone were the entire user interface. That in turn makes it possible to always have a runnable user interface. No compile step is needed.
Why allow such arbitrary linkage, though? Why not try to make Fluency smarter so that it can disallow lots of seemingly dumb linkage choices, like, for example, closed loops? The answer is that Fluency should always be author-friendly. Fluency doesn't bother trying to second-guess its author. It leaves `correct' linkage choices up to evolution among various authors' user-interface choices rather than trying to be smart and work against any author's choices. Authors should be free to link and unlink whatever they want to whenever they want to and so learn from mistakes more rapidly and painlessly rather than feeling like they have to fight the builder all the time. Further, Fluency encourages that flexible experimentation by letting the author switch, at any time, from buildtime to runtime and back, so the author could conceivably test every choice immediately, as web browsers do for website creators, with no penalty for `wrong' choices.
However, while such direct Observer linkage seems very general, there's still a problem. The above five LEGO-style linkage examples might suggest that an author can already link Widgets tightly or loosely, or statically or dynamically. But such direct Observer linkage isn't enough to solve the general linkage problem for non-programmer authors because such links are useful only if the Widgets to be linked already share a common understanding of what their Events and Actions mean. In traditional user-interface development that's not a problem because that understanding sits inside the user-interface programmers' heads. User-interface toolkit programmers produce widgets with specific events or methods (or functions). Then, user-interface programmers read all the various APIs and link widgets directly in source code with handmade event-handlers attached to each widget, thus specifying what that widget should do on receipt of particular events. Were Fluency to do the same, non-programmers couldn't easily trigger any Action with any Event. We need another idea.
Fluency avoids direct Widget linkage (either in source code the way that today's programmers do it, or via direct Observer links as above) by letting authors dynamically and visually make and unmake Widget links. It supports that by dynamically creating and destroying mediators (an application of the Mediator design pattern). Those mediators can sit between any two sets of Widgets and thus translate any behaviors of one set of Widgets into behavior requests that the other set of Widgets can understand.
For example, a Button may emit a Clicked Event, and the author wants to make that Event trigger a Check Action on a CheckBox. But the CheckBox has no idea what a Clicked Event means. So to let the author link the two, something in between them (a mediator) converts the Clicked Event into something meaningful for the CheckBox. That meaning-bearing thing may be a direct request to execute its Check Action, or it may be an Event that the CheckBox natively recognizes and that causes it to execute its Check Action. The mediator that the author puts between them makes that translation possible. To do so, such mediators need to be both Receivers and Emitters. Fluency then makes the actual links using Observer registrations, so the Button gets a mediator as a Receiver and the same mediator gets the CheckBox as a Receiver. So all Events that the Button emits go to the mediator (and of course any other Receivers that may already be registered on the Button) and all Events that the mediator emits go to the CheckBox.
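The Button-to-CheckBox translation just described can be sketched as follows; all class names here are illustrative assumptions, and Events are simplified to plain Strings:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a mediator translating a Button's Clicked Event into
// a CheckBox's Check Action (an application of the Mediator design pattern).
class Button {
    private final List<ClickMediator> receivers = new ArrayList<>();
    void register(ClickMediator m) { receivers.add(m); }
    void click() { for (ClickMediator m : receivers) m.receive("Clicked"); } // emit Clicked Event
}

class CheckBox {
    boolean checked = false;
    void check() { checked = true; } // the Check Action
}

// A Receiver of the Button's Events that issues behavior requests to the
// CheckBox; neither Widget knows anything about the other.
class ClickMediator {
    private final CheckBox target;
    ClickMediator(CheckBox target) { this.target = target; }
    void receive(String event) {
        if (event.equals("Clicked")) target.check(); // translate Event into Action request
    }
}
```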
Creating arbitrary mediators dynamically and visually seems hard, though, since it ultimately means allowing arbitrary programmability. Fortunately, that's not necessary to let non-programmers build non-trivial user interfaces. If a special class of Widgets could accept arbitrary pretargeted Actions instead of always having some fixed set, then such Widgets could act as the desired dynamic mediators. Since they're Widgets, they'd be Receivers, so they could receive Events from any set of Widgets. Since they're Widgets, they'd also be Emitters, so they could emit Events to any other set of Widgets. And, since they're Widgets, they'd also be Actors, so they could hold any pretargeted Action instances that may need to be executed on any particular Widgets in that second set of Widgets.
The linkage problem now seems nearly solved but for one thing. If such dynamic mediator Widgets were visible in the user interface they would clutter it, and they would make the links they establish unbreakable without visible change in the user interface. Thus, they can't be responsible for any underlying user-interface toolkit object, like, say, a button. Further, since they must be able to accept arbitrary Actions, they also can't be responsible either for any underlying algorithm (like, say, a database or a sorting algorithm or a video codec) or a remote system (like, say, a network monitor or a web service). In short, they must be a new kind of non-visual Widget with no other responsibilities besides mediation. In Fluency such dynamic non-visual mediator Widgets are called Pipes:
Ideally, Pipes should let an author specify either the triggering of any behavior (that is, Action execution or Event reception since in general Fluency can't force a Widget to emit an Event) of any Widget based on any behavior (that is, Action execution or Event emission since in general Fluency can't detect when a Widget receives an Event) of any Widget. For simplicity, however, Fluency currently only lets its author trigger behavior requests on WidgetB via Event emission by WidgetA (that is, it currently disallows Action execution on WidgetA as a trigger for behavior change on WidgetB, unless that Action execution also emits an Event).
Those `behavior requests' on the second Widget are of two main types: Event-to-Event translation (that is, receiving an Event on one end of the Pipe and converting it into some other Event on the other end of the Pipe) and Event-to-Action triggering (that is, receiving an Event on one end of the Pipe and triggering an Action on the other end of the Pipe). Although those seem to be two completely different kinds of Pipes, Fluency actually only supports one general Pipe and uses the Strategy design pattern to load up the appropriate logic guts into each freshly created Pipe, depending on what the author is presently trying to do---and without bothering the author about the details.
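A sketch of that one general Pipe with pluggable Strategy guts might look like this (names and signatures are assumptions, with Events simplified to Strings):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical sketch of a single general Pipe whose logic guts are a
// pluggable Strategy, covering both Event-to-Event translation and
// Event-to-Action triggering without two separate Pipe classes.
interface PipeStrategy { void onEvent(String event, Pipe pipe); }

class Pipe {
    private final PipeStrategy strategy;
    private final List<Consumer<String>> receivers = new ArrayList<>();

    Pipe(PipeStrategy strategy) { this.strategy = strategy; }
    void register(Consumer<String> r) { receivers.add(r); }
    void receive(String event) { strategy.onEvent(event, this); } // one end of the Pipe
    void emit(String event) {                                     // the other end
        for (Consumer<String> r : receivers) r.accept(event);
    }
}
```

An Event-to-Event strategy simply calls `pipe.emit(...)` with the translated Event; an Event-to-Action strategy runs a pretargeted Action instead of emitting anything.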
Thus, in Fluency, authors can link any two sets of Widgets such that Event emission by Widgets in the first set either triggers Action execution on Widgets in the second set, or it triggers Event emission to Widgets in the second set. Either of the two sets of Widgets to be linked with a Pipe can be:
set1 :: [triggering behaviors] → set2 :: [triggered behaviors]
A Pipe is a Widget, so it's an Actor and can thus carry Actions. Such Actions don't depend on any underlying user-interface toolkit object, algorithm, or remote system, so they can be arbitrary, and, in particular, they could be dynamically attached to the Pipe on Pipe creation. Such Pipe Actions can include Actions that create other Actions, and any of those Actions can create, delete, link, or unlink Widgets. For Actions that need to be triggered on Widgets in the second set of Widgets, Fluency creates Action instances pretargeted on the appropriate Widgets and inserts them into the Pipe for later triggering by some Event.
(Note: removing such pretargeted Actions later if their targeted Widgets get deleted is another problem. One way around it is to ignore them and trap them on attempted execution since they must fail, generating an NPE, as their original target has gone away. That's ugly, especially since randomly trapping Exceptions can shelter badly written code. Another way is to make Pipes react to Deleted Events and have every Widget emit a Deleted Event back to any linked Pipes whenever the author deletes it. That's thinkable but would require lots of refactoring. A third way is to let Fluency itself keep a map of which Actions are targeted on which Widgets. On creation, each Action to be stored in a Pipe gets silently registered in that map. If its target Widget gets destroyed, Fluency silently removes any Actions targeted on that Widget that are presently sitting in Pipes. That's what we do now. See the Action Footnote for further discussion. Further, Pipes are presently very limited. Because there's no group selection yet in Fluency, authors can currently only make Pipes that link one Widget to another (and separate) Widget. We don't yet have threaded Pipes, nor does the visual builder yet support arbitrary 1-many, many-1, or many-many Pipes.)
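That third approach, a central map from target Widgets to their pretargeted Actions, might be sketched as follows (the class name and signatures are assumptions; Widgets and Actions are simplified to Object and Runnable):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of a registry mapping target Widgets to the pretargeted
// Actions sitting in Pipes, so that deleting a Widget can silently remove its
// now-doomed Actions instead of letting them fail at execution time.
class ActionRegistry {
    private final Map<Object, List<Runnable>> byTarget = new HashMap<>();

    // Called when an Action pretargeted on `widget` is stored in a Pipe.
    void register(Object widget, Runnable action) {
        byTarget.computeIfAbsent(widget, w -> new ArrayList<>()).add(action);
    }

    // Called when the author deletes `widget`; returns the removed Actions.
    List<Runnable> removeActionsFor(Object widget) {
        List<Runnable> removed = byTarget.remove(widget);
        return removed == null ? List.of() : removed;
    }
}
```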
When an author requests a link between any two sets of Widgets, Fluency lets the author visually specify the two sets of Widgets to be linked, and which behaviors in the first set should trigger which behaviors in the second set. Then it creates the appropriate types of Pipes to make the link (it may need to create more than one Pipe to do so). Then it links the chain of new Pipes to the selected Widgets via the Event Observer mechanism already described. All the author should ever see, though, is at most one (labeled) arrow between the two sets of Widgets. The author should never have to care about linkage implementation details. Thus Fluency has to maintain two separate but related Widget relationship graphs: the one the author sees, and the real one.
Since Pipes aren't visual and aren't responsible for an underlying algorithm or a remote system, they can come and go. An author can thus cause them to be created or deleted at any time, even during runtime in response to some stored Action that the author put there during buildtime. Such transient Pipes can be used to, for example, let any Receiver monitor any Emitter's Events for a time, then stop caring depending on some condition, then, depending on some other condition, start caring again later, but perhaps for a different subset of the Emitter's Events. Pipes can also link to Pipes, since Pipes are themselves Widgets. So Pipes can be made to go away, or reappear, based on conditions in other Pipes. (Note: the author can even play in the linkage graph, by linking Pipes solely to Pipes (so nothing ever appears in the user interface!) and have them create and recreate each other as if in a Game of Life simulation. Building simulations of FPGA networks or cellular automata or water works or logic gates should also be pretty easy. Fluency affords a lot of playspace since it lets its author do nearly anything to nearly anything.)
Making and breaking links is entirely visual, and might (eventually) even be made visually programmable. Further, with the right Pipe, an author could request that Actions be triggered by the reception of multiple Events emitted by multiple Emitters within a particular window of time. With just a small set of Pipes, an author could cause nearly anything to happen on any Widget on nearly any condition. For example, with the right pipes already in Fluency to choose from, an author could say arbitrarily complex things like: `Whenever this first Emitter emits any Event whatsoever, and that second Emitter emits any Event in this particular subset of Events, and that third Emitter emits any two Events within a millisecond of each other, and all three Emitters do so within two minutes of each other, counting from ten minutes from now, and lasting for only another ten minute window after that, have those Actors do these various Actions.' Such triggered Actions could be on the Pipes, or on any of the other Widgets, or even on new Widgets created by any of those same Actions. And all of that should be easily undoable or modifiable visually.
With the addition of Pipes there are now three kinds of Fluency Widgets.
First, the usual visual Widgets that everyone knows. In Fluency, though, each such Widget actually delegates to one of the usual user-interface toolkit objects, like, say, a button or a textarea. But they can also delegate to any program at all, once it has a visual frontend---for example, a web browser can be a Fluency Widget all by itself, even though it might (or might not) be itself made up of Fluency Widget parts, and could appear inside a much larger user interface, of which it is only one small part.
Second, Fluency supports two kinds of non-visual Widgets: functional Widgets, which proxy for some non-visual program, like, say, a timer, or a database, or a network monitor, or a sorting algorithm, or whatever, (although note that a visual Widget might also delegate to a non-visual Widget, as for example, an AlarmClock delegating to a Timer), and non-visual non-functional Widgets, which delegate to nothing. That second kind of non-visual Widget is a service Widget. Service Widgets function as Widget glue in Fluency. The only example of a service Widget so far in this document is a Pipe. Later we'll see two more service Widgets (called Holders and Ports). One of the most unusual things about Fluency is the whole idea of non-visual widgets. The second unusual thing is the idea of non-visual glue Widgets. Fluency treats them all exactly the same as its more traditional (visual) Widgets.
Where, though, might new Pipes come from? Fluency provides a few Pipe strategies as part of its deployment bundle, but the principal source of them would be outside programmers. Programmers would have to create all new Pipes for Fluency to expose as options to the author, just as they would have to produce all user-interface toolkit objects to be delegates for visual Widgets in Fluency, plus all generic toolkit objects to be delegates for functional Widgets in Fluency. As far as Fluency is concerned, its set of Pipes is exactly the same as its set of (visual) user-interface toolkit widgets that user-interface toolkit programmers produce today, only instead of visual buttons or checkboxes, they're non-visual mediators between buttons or checkboxes. They're also exactly the same as the set of Fluency's other non-visual Widgets, like Timers and Databases and Network Monitors, and so on, which are all just Proxies or Adapters wrapped around any computer program at all.
Further, just because (this version of) Fluency is free, there's no reason all versions of Fluency need be free, or even that any of the various kinds of Widgets that any particular Fluency needs have to be free. Producing new Fluencies, or new visual, functional, or service Widgets for any one Fluency could easily become cottage industries among programmers if authors were to widely adopt Fluency. There's also no reason that authors can't simply advertise for any new kinds of Widgets that they need to make their latest user interface, then pay programmers to supply them. Eventually (as we'll see later) such `programmers' might even include non-traditional visual programmers who use Fluency itself to produce such Widgets using combinations of old ones. There's also no reason that Microsoft, say, can't buy a commercial version of Fluency while a non-commercial version continues to exist side-by-side with it. Fluency's license allows anyone to do anything with it, including make a commercial and proprietary version. Whether development on free and open versions continues after that is entirely up to the population of programmers and authors.
Finally, note that the idea of supporting `end-user programming' is far from new---in fact it's now 35 years old---but several attempts to solve the general problem have so far failed. Perhaps that's because a solution has always been envisioned as being some monolithic, predesigned, unevolvable, single program that users interact with to `do visual programming.' That seems to be far too hard a problem, at least until we have AIs to solve most of it for us. However, having programmers develop numerous low-level visual, functional, and service Widgets, and having authors visually specify how those Widgets should combine and recombine, and having the two groups of people communicating with each other over the web, expressing what they'd most like to see happen next, while sharing and copying and mutating each other's work, seems like a less stressful, more equitable, more profitable, and more evolvable division of labor than our present way of developing user interfaces. In such a world, programmers are left to do what they do best---talk to computers---and authors are left to do what they do best---solve their problems. If Fluency truly does service a real need out there, then it could eventually come to support a very general mechanism for visual programming while also opening up the art of programming to the entire world.
Every user-interface toolkit lets programmers build compound widgets out of simpler ones. Typically, today's graphical user-interface programmers must first decide what goes where, then create container widget instances and add widget instances to those containers. All such choices and groupings are hardwired at compile time, thus making easy and cheap and undoable changes impossible without programmer aid. Fluency has to dodge that inflexibility bullet, or it will only help solve the problem of making graphical user-interface creation easy (using the design ideas explained earlier), while failing to solve the problem of making graphical user-interface editing easy, too.
First, then, what is a compound widget? Compound widgets are sets of widgets that function as units. Component widgets of the compound all share something---usually appearance or location or behavior---and they all work together to accomplish some subtask within the user interface---a tool bar, say, or a file browser, a control panel, a tearoff menu, a properties sheet, a button bank. It's not yet common today, but there's no reason such compounds couldn't also be much more complex, like a text editor (example, Swing's JEditorPane), web browser, spreadsheet, news reader, mail reader, p2p program, or whatever. Compounds use `containers' to enforce group behavior on `contained' widgets. In an airplane cockpit, the analogy might be to a bank of switches controlling the plane's radio. Those switches shouldn't be randomly mixed in with switches that control the flaps or engine. You don't want pilots to flip a switch thinking that it's for radioing the tower, then find that they missed the right switch and flipped an identical-looking nearby one, forcing the plane into a power dive 50 feet above the runway.
Since, outside of Fluency, there are no non-visual widgets, all non-Fluent user-interface programmers today are forced to use visual widgets---like frame, panel, dialog, and canvas---as widget containers. That forced choice then forces every widget to belong to exactly one container. That then forces all widgets into a tree structure, which simplifies event propagation among widgets, widget layout, and display refresh for widgets when their screen areas are damaged.
That seems like a big win---getting all that functionality for free just by agreeing to force everything into a tree structure. But forcing all containers to be visual creates three problems for flexibility and for getting rid of programmers during user-interface design.
For example, most of today's user interfaces are stereotypical because each user-interface toolkit makes choices about its compound widgets that then make more creative user interfaces harder to build than they need be because today's compound widgets are so coarse-grained. For example, a simple editor widget might be a textbox inside a panel along with a pulldown menu listing possible font sizes for the textbox's text, as well as a save button, plus a label stating whether the text in the textbox has been saved to disk since the last edit. By the time the author gets to work with such a compound, all its choices have had source-code concrete poured all over them. Thus, the author can't:
Further, both visual and non-visual objects often need to be linked in ways not easily mappable onto a tree (the general case, in reality, is an arbitrary multigraph of inter-object relationships). Also, visual widgets in particular can't always be easily grouped contiguously (for example, in multi-tab or multi-screen or multi-function user interfaces). User-interface programming today is thus much like programming in an object-oriented language while relying on a hierarchical database for storage. User-interface programmers must keep the `real' structure of widget interrelationships in their heads, getting little help from the computer in doing so, short of tree-support for widget layout, event propagation, and screen refresh. So they often spend much of their time mentally translating back and forth between two completely different abstractions, and the resulting code is thus bulky, buggy, and fragile.
That same abstraction mismatch is beginning to occur today in translating object-oriented programs to XML and back. Such an impedance mismatch is well-known on the backend---for example, in the database world---and is often estimated to consume perhaps 40 percent of programmer time and effort just to do the endless back-and-forth translations (and to correct the endless errors in translation), but the exact same impedance mismatch goes entirely unrecognized on the other end---the visual end. There, everyone has thought it inevitable and unavoidable for so long that it's not even a question anymore. Obviously all user interfaces have to be forced into a tree structure. Trees are natural and obvious and widely used. Besides, there's no other way to do it---is there?
That style of mindless reasoning explains why even such terrible ideas as `folders' and `directories' will never go away. In the real world, things rarely fit into a tree structure. Forcing you to put your mail, say, into a set of mutually exclusive hierarchical folders is way too burdensome on the user, and it also makes absolutely no sense. When joke mail comes on Thursday from your boss who's also your friend, do you file it in the friend folder or in the work folder? And why can't you file it under the jokes folder? Or even the Thursday folder? Further, why do you even have to remember exactly what you called it, or where you put it? Imagine if you had an employee who, every time you asked him to do something, said ``Sure! I'll solve it right away, and I'll put the answer away for you, too. And all you have to do to find it again is memorize this 20-digit number!'' You can't do any sensible thing with your files today because the ancient programmers found trees sexy and easy to implement. Besides which, computers back then---when dinosaurs roamed the earth and cavemen fled in fear before them---were rare, slow, fragile, expensive, and disconnected, so the programmer-priests of the time got to force everyone, including all users, to use trees, too. Eventually, the idea of a tree stuck around for so long that it became an unchangeable aspect of all computers, everywhere. Today, computers are fast and capacious and cheap and ubiquitous and connected, but the ancient assumptions from the dawn of time still persist. It's probably too late to change that on the lowest level of operating systems and basic applications (for example, file finders), but there's no reason we can't allow more flexible presentations of the idea of multiple overlapping categories on top of such low-level tree structures, which the user need never see. And the same is true for user interfaces. We don't have to live only in a world of trees, even though that's where our ancestors came from.
Forcing everything into trees is much like MVC---it was a win at the time, compared to what came before. Today, though, it's too inflexible. We can't, however, simply abandon containment entirely because some form of containment is desirable from a human factors point of view. As a rule, related functionality should be grouped visually to provide context and reduce user mistakes. So keeping something like containment around is a good idea. Freeing ourselves from always having to use only trees, though, is also a good idea. The design question is how to do both.
To support more flexible ideas of containment, Fluency lets authors build visually editable compound Widgets without programmer aid and without resort to source code by using non-visual Widgets as containers. Any Widget, visual or non-visual, can belong to any number (including zero) of (non-visual) Widget containers simultaneously, with each container potentially handling just one responsibility for its particular group of Widgets, whether they're visual or non-visual. This is similar to AspectJ's idea of `cross-cutting responsibilities.'
Further, those objects that user-interface programmers outside Fluency would normally consider the only possible container widgets---like frame, panel, dialog, and canvas---aren't necessarily `containers' in the traditional sense inside Fluency. In Fluency, they can merely be visual Widgets whose screen areas can be overlaid with the screen areas of other visual Widgets. They needn't necessarily handle those overlaid Widgets' display tasks (sizing, color choice, positioning, bounds setting, insets setting, border setting, layout, painting, hit detection, and so on---although right now they still do most of that, except hit detection). And they certainly needn't handle logical relationships between Widgets like event propagation and Widget linkage. All such containment-related tasks could be handled solely by instances of a new class of (non-visual) container Widgets.
Fluency supports containment solely by manipulating Widget Event streams. A Fluency container Widget, however, can't be the only one that controls its contained Widgets' Event streams---because then each container Widget would have to be solely responsible for every aspect of the group behavior of its contained Widgets, leaving us back where we started. Instead, a container Widget must support three things:
Finally, just as with Pipes, any Fluent compound Widget could be created or deleted at any time (leaving its component Widgets in place), without any visible change in the user interface. Fluent container Widgets could thus relate any set of visual or non-visual Widgets, spatially, visually, behaviorally, or even temporally, so that they, transiently or permanently, jointly function as a group. Such functionality is exactly like that of Pipes, except that containers only have one set of Widgets to relate, rather than two. (Note: in some sense, there really are two sets of Widgets, even for containers: all the Widgets inside the container and all the ones outside it. Hmm... maybe there's some natural way to unify Pipes and Holders?)
Fluency's container Widgets, called Holders, use helper Widgets called Ports, much as Actions use Docks. A Port is essentially an empty Pipe (an application of the Null Object and Proxy design patterns). It has no Actions and does nothing to its Event stream: it immediately emits every Event it receives. A Port can `notice' any Event whatsoever; that is, its InEvent and OutEvent Dictionaries are taken to be exactly as large as the set of all Events possible among the set of all Widgets loaded into Fluency at that time. In terms of practical programming, however, a Port's Event Dictionaries would both be as empty as its Action Dictionary.
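The Null Object character of a Port can be sketched in a few lines. This is a hypothetical minimal model, not Fluency's real API: the `Event` class and the `Consumer`-based Observer registration here are illustrative stand-ins for Fluency's Event and Emitter machinery.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical sketch: a Port as an empty Pipe (Null Object pattern).
class Event {
    final String name;
    Event(String name) { this.name = name; }
}

class Port {
    private final List<Consumer<Event>> receivers = new ArrayList<>();

    // Standard Observer registration, as with any Emitter.
    void register(Consumer<Event> r)   { receivers.add(r); }
    void unregister(Consumer<Event> r) { receivers.remove(r); }

    // A Port does nothing to its Event stream: every Event it
    // receives, of whatever type, is immediately re-emitted.
    void receive(Event e) {
        for (Consumer<Event> r : List.copyOf(receivers)) r.accept(e);
    }
}
```

A Port is thus pure plumbing: it accepts anything and alters nothing, which is exactly what lets a Holder splice Ports into an Event stream without changing the stream's behavior.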
A Holder is a non-visual Widget that's also a Widget container:
Like Pipes, Holders and Ports are service Widgets. As with Pipes, they act as glue in the user interface, rather than as proxies for underlying delegates. They are non-visual.
If an author asks Fluency to insert a non-Holder Widget into a Holder, the Holder creates two Ports: one to capture Event streams currently entering the Widget and another to spew Event streams currently leaving the Widget. The Holder then:
Here, for example, is the effect of putting non-Holder WidgetA into a Holder (in the diagram, 'E' is an Emitter, 'R' is a Receiver, 'P' is a Port, and directed edges are Event streams):
If the author then inserts a second Widget into the same Holder, the same thing happens, except that the Holder first restores any pre-existing links between the new Widget and any Widgets already in the Holder.
Here, for example, is the effect of putting three non-Holder Widgets---WidgetA then WidgetB then WidgetC, and in that order---into a Holder. Note that immediately after insertion nothing connects to P5, and P2 connects to nothing:
If the author inserts a Holder into another Holder, instead of creating just two new ports, the outer Holder creates as many (in and out) ports as the inner Holder has, then it connects any incoming and outgoing Event streams to the appropriate Ports. The author can also delete a Holder or non-Holder Widget from a Holder, thereby restoring all its previous links. Thus, a deleted non-Holder Widget is simply restored to its previous functioning. A deleted Holder Widget, however, simply goes away, leaving its previously contained Widgets in place. The two Holder methods, insert() and delete(), are thus like the two methods of all Emitters, register() and unregister().
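The core of `insert()` can be sketched as follows. This is an illustrative reduction, assuming a common `Node` base shared by Widgets and Ports; Fluency's real insert() would also reroute the Widget's pre-existing external links through the new Ports and restore them on delete().

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical sketch of Holder.insert(): splice an in-Port ahead of a
// Widget and an out-Port behind it. Names here are illustrative.
class Event { final String name; Event(String n) { name = n; } }

class Node {  // common base for Widgets and Ports in this sketch
    private final List<Consumer<Event>> receivers = new ArrayList<>();
    void register(Consumer<Event> r) { receivers.add(r); }
    void receive(Event e) { for (Consumer<Event> r : List.copyOf(receivers)) r.accept(e); }
}

class Holder {
    final Map<Node, Node[]> ports = new HashMap<>(); // widget -> {inPort, outPort}

    void insert(Node widget) {
        Node inPort = new Node(), outPort = new Node();
        inPort.register(widget::receive);   // in-Port feeds the Widget
        widget.register(outPort::receive);  // Widget feeds the out-Port
        ports.put(widget, new Node[] { inPort, outPort });
    }

    // delete() would restore the Widget's previous links; elided here.
    void delete(Node widget) { ports.remove(widget); }
}
```

After insertion, anything the Holder does to the Widget's streams it does by manipulating the two Ports, never the Widget itself.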
Just as with Pipes, Fluency makes all Holder-related links with direct Observer registrations, which means that such links can be as easily broken again, either during buildtime or during runtime. So any Widget, including a Holder, can be flipped into or out of any Holder at any time, even runtime. (Note, though, that since Pipes are currently represented only as arrows in the builder, the author currently has no way to put a Pipe in a Holder, even though Pipes are themselves Widgets. It's not clear what use it might be to allow Pipes to be contained just like any other Widget, including Holders, but for complete uniformity Fluency might one day allow it.) In Fluency, every author action is, and must be, fully reversible.
Again just as with Pipes, Holders have strategies inserted into them on creation (using the Strategy design pattern) and, as with Pipes, those strategies form the guts of the Holder's logic. Since a Holder links itself to all in-Ports and links all out-Ports to itself, then, depending on the Holder's strategy, the author might choose to let any particular Event sent to the Holder affect all its contained Widgets. For example, a ChangeBackground Event could be sent to a propagating Holder and that Event would then be propagated to all its contained Widgets, forcing all of them to change their background color to whatever color is specified in the Event (assuming that all the contained Widgets recognize ChangeBackground Events, that is; if not, the Holder, as a propagator, will itself insert Event-to-Event translation Pipes to convert the Event to something that each contained Widget can understand). Thus, a Holder can cause all the Widgets inside the Holder to act similarly, which is the whole point of a container.
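The Strategy arrangement can be sketched like so. The interface and the propagating policy here are hypothetical names, assumed only for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical sketch of a Holder strategy (Strategy pattern): the Holder
// delegates each incoming Event to a pluggable policy.
class Event { final String name; Event(String n) { name = n; } }

interface HolderStrategy {
    void onEvent(Event e, List<Consumer<Event>> contained);
}

// A propagating strategy: forward every Event to every contained Widget.
class Propagate implements HolderStrategy {
    public void onEvent(Event e, List<Consumer<Event>> contained) {
        for (Consumer<Event> w : contained) w.accept(e);
    }
}

class Holder {
    private final HolderStrategy strategy;
    private final List<Consumer<Event>> contained = new ArrayList<>();
    Holder(HolderStrategy s) { strategy = s; }
    void insert(Consumer<Event> widget) { contained.add(widget); }
    void receive(Event e) { strategy.onEvent(e, contained); }
}
```

Other strategies could filter, translate, or reorder Events instead of broadcasting them; the Holder's shell stays the same, which is what keeps each Holder single-purpose.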
Similarly, the author might choose another kind of Holder to let any Event that a contained Widget emits affect its Holder. For example, when a Widget is deleted it could emit a DeletedEvent, which its out-Port will then receive. Its Holder (or Holders) will also receive that Event. So its Holder (or Holders) could trigger on that Event to then delete the Widget's in-Port as well as its out-Port, plus execute any other Action at all.
Also, since every Holder controls each contained Widget's in-Ports and out-Ports, a Holder can override what Events any one of its contained Widgets receives or emits. The set of Widgets in a Holder can thus act as one (compound) Widget with respect to the rest of the user interface outside the Holder without the author having to delve into source code---and without altering any of the Widgets in any way, or changing anything else, or overburdening that particular Holder with any other responsibility that its contained Widgets may participate in.
Further, since Holders are Widgets, the author may link any Holder to any other Widget, including other Holders, either directly via Observer links, or indirectly via Pipes. However, when an author tries to link Widgets outside a Holder to Widgets inside the Holder via a Pipe (or directly), Fluency instead silently links the author's Pipe to the appropriate Holder in-Ports, and similarly for author links from internal Widgets to external ones. The author doesn't have to care about the implementation details.
A Fluent author may have one (visual) Widget controlled by several independent (non-visual) Holders, each one handling exactly one responsibility---say, one to control background color, one to control mouse-mimicking movement, one to control resizing, one to control default layout algorithm, and so on. Further, to create Holders with apparently arbitrary compound behaviors, all the author has to do is put one Holder into a different Holder (or as many different Holders as desired), then stuff Widgets into the innermost one. The outermost Holder will then appear to the author to enforce each behavior on all the innermost Widgets. Fluency containers thus let authors dynamically and arbitrarily recombine Widget behaviors, yet each Holder is single-purpose, and thus easy to program in the first place. Holders can thus behave as decorators, as in the Decorator design pattern.
With all that author power, a Fluent graphical user interface can have bizarre and creative properties. For example, a single (compound) Widget can look to the user like many separate (visual) Widgets, each of which can even be visually discontinuous in the user interface they're a part of. An author might choose to separately move those components anywhere on the screen without in any way changing the functioning of the compound Widget they jointly compose. Further, any of those component Widgets might themselves be compound Widgets. Finally, the entire user interface that the author is building needn't be enclosed in one window, as is the norm today. With Holders in Fluency, a tearoff menu, say, is just as easy to make as a pulldown menu. Thus, in Fluency, not only is there no requirement that a Widget be visual, but even if some of its parts are visual, there's no requirement that its visual appearance be contiguous in the user interface, or even appear all on one screenful of the overall user interface.
As usual, one objection to all this freedom, indirection, and extra object creation is that it's way too expensive. However, the usual counter-argument still applies: the `efficient' way that we do things today is even more expensive. It's too easy for programmers to lose sight of the fact that traditional programming only works well when the problem is both well-defined and unchanging. That's decidedly not the case when developing a new user interface. There, nearly every decision is a work in progress. Further, as with Pipes, the above Holder scheme can only work well if outside programmers provide Fluency with lots of different kinds of Holders. And, again as with Pipes, Fluency must provide some basic set of Holders sufficient to at least demonstrate the potential power of the idea.
A Fluent author has several tools that can help create flexible user interfaces without delving into source code and without a traditional programmer's help.
First, an author can link any two Widgets either to make one an Emitter (or Receiver) of the other, or to make one the sole Emitter (or Receiver) of the other. Call these two operations: linking and latching. A Widget latched in front of another Widget can act as the second Widget's `guard;' a Widget latched behind another Widget can act as the second Widget's `filter.' A Widget guard (or filter) could transform, delete, duplicate, queue, rearrange, delay, log, or simply pass on Events. It could vet all Events that the Widget can receive from (or emit to) any other Widget. Further, one Widget can be both guard and filter of another Widget by being the Widget's sole Emitter and the Widget's sole Receiver. Such a wrapper Widget behaves as if it were the wrapped Widget as far as the rest of the user interface is concerned, although the wrapped Widget's Actions, if any, are still exposed for direct linkage. (Note that a Holder acts somewhat like a wrapper for its Widgets because its in-Ports are guards and its out-Ports are filters, although neither do anything; it's up to the Holder's strategy to decide what, if anything, to do to the Widget's Event streams.) An author can thus create a bypass of any Widget to conditionally route Events around the Widget by latching a guard and filter to the Widget then linking the guard to the filter, so that on some condition (defined in the guard) the Widget might never see certain Events, yet those Events might still be passed on to other Widgets anyway, just as if the bypassed Widget did indeed see them. Further, the whole bypass can be made to depend on some other condition (defined in the filter).
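A guard's vetting role can be sketched in a few lines. This is a hypothetical reduction: the `Predicate`-based condition and the `Consumer` target are illustrative stand-ins for a latched Widget's Event linkage.

```java
import java.util.function.Consumer;
import java.util.function.Predicate;

// Hypothetical sketch of latching a guard in front of a Widget: the guard
// is the Widget's sole Emitter, so it can vet every incoming Event.
class Event { final String name; Event(String n) { name = n; } }

class Guard {
    private final Predicate<Event> admit;
    private final Consumer<Event> target;   // the guarded Widget
    Guard(Predicate<Event> admit, Consumer<Event> target) {
        this.admit = admit;
        this.target = target;
    }
    // Pass on only the Events the guard's condition admits; a richer
    // guard could also transform, duplicate, queue, delay, or log them.
    void receive(Event e) { if (admit.test(e)) target.accept(e); }
}
```

A filter is the mirror image, latched behind the Widget as its sole Receiver; a wrapper is simply both at once.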
Thus, an author, by latching an appropriate guard or filter or wrapper or bypass, can modify any Widget so that it behaves as if it, for example: only accepts double-clicks instead of both single- and double-clicks, or treats backarrows as backspaces, or emits TextSet Events instead of TextChanged Events, or is conditionally transparent to mouse clicks. An author can also control both ends of a Widget separately by latching a guard and a filter to the Widget. Further, the author can link both ends of a Widget to another Widget, creating new behaviors by associating any emittable Event with some Action on either Widget. For example, given a textarea and a non-visual Widget that identifies misspelled words, the author could link a Pipe from the textarea to the spell-checker to feed it words as they are typed into the textarea, and another Pipe from the spell-checker back to the textarea, targeting the Highlight Action on the textarea to highlight any misspelled ones. Essentially, the author can create new Widgets depending on what service Widgets are available.
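The spell-check linkage above can be sketched as two Pipes between three Widgets. All class and member names here are hypothetical; the `Consumer` fields stand in for outgoing Pipes, and `highlight` stands in for the textarea's Highlight Action.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.function.Consumer;

// Hypothetical sketch: a textarea pipes typed words to a non-visual
// spell-checker, which pipes misspellings back to the Highlight Action.
class TextArea {
    Consumer<String> wordTyped = w -> {};      // outgoing Pipe
    final List<String> highlighted = new ArrayList<>();
    void type(String word) { wordTyped.accept(word); }
    void highlight(String word) { highlighted.add(word); }  // Highlight Action
}

class SpellChecker {
    private final Set<String> dictionary;
    Consumer<String> misspelled = w -> {};     // outgoing Pipe
    SpellChecker(Set<String> dictionary) { this.dictionary = dictionary; }
    void check(String word) {
        if (!dictionary.contains(word)) misspelled.accept(word);
    }
}
```

Wiring the two Pipes is then just two Observer links---`ta.wordTyped = sc::check` and `sc.misspelled = ta::highlight`---neither Widget knowing the other exists.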
Second, an author can also control sets of Widgets jointly. For example, in Fluency, Actions like Enable and Disable are common to all Widgets, visual or non-visual, and Hide and Show are common to all visual Widgets. If every visual Widget's Enable, Disable, Hide, and Show Actions were always triggerable by Events, then any such Event, when sent to a Holder, would trigger the appropriate Action on each contained Widget, and recursively on any Widgets contained in Holders contained in the Holder. Thus, a bank of Widgets, a set of radio buttons in a panel, say, can be a Widget all by itself, as opposed to the radio buttons, or the panel that may visually contain the radio buttons. An author can thus specify when that whole bank-of-buttons Widget should Enable, Disable, Hide, or Show by specifying when the Holder containing it should receive the appropriate triggering Events. Banks of buttons (for example) can then be made to appear or disappear at any time, and they can be overlaid near where the user's mouse is, not necessarily embedded in some fixed area in the user interface.
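The recursive triggering of a common Action through nested Holders can be sketched as follows, assuming a shared `Enableable` interface (a hypothetical name) for the Enable/Disable Actions common to all Widgets:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of recursive Action triggering: an Enable or Disable
// Event sent to a Holder reaches every contained Widget, including any
// Widgets inside nested Holders.
interface Enableable { void setEnabled(boolean on); }

class Button implements Enableable {
    boolean enabled = true;
    public void setEnabled(boolean on) { enabled = on; }
}

class Holder implements Enableable {
    private final List<Enableable> contained = new ArrayList<>();
    void insert(Enableable w) { contained.add(w); }
    public void setEnabled(boolean on) {   // recurses into nested Holders
        for (Enableable w : contained) w.setEnabled(on);
    }
}
```

Because a Holder implements the same interface as the Widgets it contains, the bank-of-buttons really is a Widget all by itself as far as triggering goes.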
Third, an author can also enforce a uniform display style on any set of Widgets. For example, if setting foreground color is an Action that all visual Widgets understand, then by sending just one Event that triggers that Action to a Holder, the author can cause the corresponding Action on all contained visual Widgets to execute, thereby setting foreground color in all contained Widgets, including, recursively, any in contained Holders. Holders could also come preconfigured for various presentation styles and default settings so that any Widgets fed into them will automatically conform to those styles. Enforcing a uniform style on all Widgets is then a simple matter of putting all Widgets in one Holder. (Note that such a Holder need not be unique. Each of those Widgets could belong to several separate Holders, each intended to enforce different behaviors on their components.) Further, any contained Holder might override the style of its container Holder with another style in a form of dynamic inheritance analogous to cascading style sheets. Since such styles are set with a series of Actions, setting a particular style on a Holder can be a single Action via the Composite design pattern.
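The style-as-single-Action idea can be sketched with the Composite pattern. The property-map representation of a Widget's visual settings here is purely illustrative:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical sketch of a style as a Composite: one compound Action
// bundling several primitive styling Actions, applied in order.
class Style implements Consumer<Map<String, String>> {
    private final List<Consumer<Map<String, String>>> parts = new ArrayList<>();

    Style add(Consumer<Map<String, String>> part) { parts.add(part); return this; }

    // Applying the style is a single Action with many effects.
    public void accept(Map<String, String> widgetProps) {
        for (Consumer<Map<String, String>> p : parts) p.accept(widgetProps);
    }
}
```

A Holder preconfigured with such a composite could apply the whole style to each Widget fed into it; a contained Holder overriding one setting simply applies its own composite afterward, giving the cascading behavior described above.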
Fourth, an author can also mix arbitrary cross-cutting responsibilities. A single Widget may belong to multiple non-nested Holders. Each such Holder would pass on Events to the same contained Widget, and each one may have different Event management responsibilities. For example, one Widget might be in two separate Holders, one of which marks a set of Widgets that will move as the mouse moves while the other marks a set of Widgets that will change background color as a slider, say, changes position. Any Widgets in the intersection of those two sets will both move with the mouse and will change color with the slider. The slider's position could itself be controlled by keypresses, or by the amount of data loaded off the network from a remote database, or by any condition at all.
Fifth, an author can also build new compound Widgets from other Widgets. An author could, for example, build a video player out of a file-reader, a video codec, a canvas, and a set of buttons. The buttons go into a `mutually exclusive' Holder (only one can be depressed at a time) with each button controlling one of play, pause, stop, rewind, fastforward, next track, and last track, but so far they only emit Events when clicked, they don't trigger Actions. All inherit the same style from their enclosing Holder, though. The canvas, a visual Widget, can display images, and the video codec, a functional, non-visual Widget, can decode video files into a series of images. The file-reader, a functional, non-visual Widget, allows selection of video files, and is linked to a visual Widget, say, a listbox. All of these Widgets are linked to each other and to the buttons with Pipes, and all go into another Holder with its own enforced style.
Sixth, an author can also build multi-screen user interfaces. When designing a tabbed Widget, say, an author can create several Holders, plus one more Holder to hold them all ("one ring to bind them..."). The overall Holder would then be the whole user interface, and each Holder within it could hold a tabbed screenful of (visual) Widgets. The author could then link a single Event emitted by one Widget to a Hide Action on the currently 'visible' Holder (that is, the Holder whose visual parts are presently visible on screen) and simultaneously link emission of the same Event to a Show Action on another Holder (with a Pipe to do the appropriate Event transformation). Each Holder then recursively passes on the appropriate Events to its parts. In short, every screen-wide Widget composing one part of a multi-screen user interface could be enclosed inside one non-visual Holder able to control each one separately, and able to pass Action-triggering Events to its parts, any one of which may occupy the entire screen.
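The tab-switching linkage can be sketched as one Event driving a Hide Action on the current Holder and a Show Action on the selected one. All names here are hypothetical; in Fluency the switching would be author-wired Pipes rather than a dedicated class:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of tab switching: one emitted Event triggers Hide
// on the currently visible Holder and Show on the newly selected one.
class Screen {
    boolean visible = false;
    void show() { visible = true; }   // the Show Action, propagated to parts
    void hide() { visible = false; }  // the Hide Action, likewise
}

class TabSwitcher {
    private final Map<String, Screen> screens = new HashMap<>();
    private Screen current;

    void add(String name, Screen s) { screens.put(name, s); }

    // One "tab selected" Event drives both Actions.
    void select(String name) {
        if (current != null) current.hide();
        current = screens.get(name);
        current.show();
    }
}
```

Each Holder then recursively propagates Hide or Show to its own parts, so no Widget ever needs to know which screenful it belongs to.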
Here is an example of how a three-screen user interface might be linked when represented by collections of Widgets in three Holders, each containing sets of Widgets that together take up the whole screen:
With a well-done Fluency, a sophisticated author could create, delete, replace, modify, link, and unlink any Widget, whether visual or non-visual, functional or service, compound or simple. Were such a tool to exist, it would be much more than a simple user-interface builder; it would be an entire visual programming environment. And the user interfaces it produces would be both sharable and editable. Armed with a Fluency containing a sufficiently rich array of Widgets, sophisticated authors would essentially be programmers, and not just regular programmers, but superprogrammers, able to churn out lots of new programs, each with sophisticated user interfaces, not just pretty pictures, relatively quickly, and without ever having to step into the concrete of source code.
Fluency's current implementation needs work, especially on its frontend. Contrary to the emphasis within computer science on algorithms on the backend at the expense of human factors on the frontend, all users, including potential Fluency programmers, judge programs almost solely based on their frontends. It's ironic that most of Fluency's implementation problems are in the area that Fluency itself is intended to help improve---the user interface.
First, there's the mess-o'-wires problem. Every user interface, inside or outside of Fluency, is a mass of wires, so that, in and of itself, isn't the problem. The problem lies in the presentation of that mass of wires. In Fluency, there's as yet no modularization of the complexity of Widget linkage. Partly that's because there's poor or no use of backend-supported complexity-management schemes, like containers and author-expertise marker levels and object descriptors. Consequently, every single detail always exists in the linkage view instead of popping into and out of existence on mouse-over, say, or appearing only in a different view. Fluency's linkage view is its most complex, least understood, and most unusual part. It needs delicate and careful handling. When building a sophisticated user interface, an author needs access to lots of different partial views of the widget relationship graph to get a good feel for what the user interface is doing and how it's connected without also drowning in detail. Here are some possible linkage view changes:
Second, there's the all-over-the-map problem. Fluency presents the usual mass of panes and palettes and menus and obscure and unintuitive and untooltipped sequences of author operations. That stereotypical style presents far too many options at each step (most of them irrelevant, and so simply distracting). Further, continually having to select among all those controls and options is cumbersome and tiring. Fluency has mechanisms in place to help reduce at least the first problem (author levels), but they're rarely used, and, when used, used inconsistently. Currently, authors may have to click or double-click five different times while also moving the mouse over acres of screen real-estate to do even the simplest things. Instead, controls should appear where the author needs them---next to or on each Widget. Each View should be mouse-aware in the sense that different tools pop up depending on how close the mouse currently is to which Widget, what direction the mouse approaches the Widget from, and even how fast the mouse is moving. The Widget is the important thing, not the tools needed to select what needs to be done to it next. As a general user-interface design rule: simple or frequent changes should require only small moves. To have to arc the mouse from one end of the screen to the other just to get to a property sheet, then to have to click around in it just to change a label, or a color, or whatever, is frustrating, tiring, and annoying, as it forces continuous and unnecessary loss of visual context. It makes the author work far too hard to do simple things.
Third, there's the documentation problem. The old Fluency, particularly its backend, has a comprehensive manual---at least as open-source projects go---but the current Fluency has thrown that away with the switch to Eclipse. Now all the important knowledge about Fluency's user interface builder, in particular, and its current codebase organization, is once again locked up in its programmers' heads. Nor does the current Fluency help its author to use it properly with tips and help messages and good demos for everything.
Fourth, and last, there's the non-recursive frontend problem. Fluency's user interface isn't itself built with Fluency---it's handrolled. What message does that send to authors who might be contemplating using Fluency to produce user interfaces? All Fluency's frontend problems would be much more ignorable if Fluency's builder were itself built inside Fluency. Then, changing Fluency's frontend would be thinkable inside Fluency itself. Instead, what we have is a fairly elegant backend framework intended to support flexible user-interface development that's almost ignored by the frontend, which is a typical, homemade user interface. So whenever we change the frontend implementation framework (for example, going from Swing to GEF) we must change everything about the builder. Fluency's user interface today is thus far from an advertisement for platform independence, or even toolkit independence, and it's far from an advertisement for Fluency itself. There's nothing Fluent about Fluency's own user interface.
Fluency also has related, but smaller, problems on its backend.
First, there's the containment problem. So far Fluency doesn't really support containment of one widget in another. Or rather, it does, but only halfway (it's not yet fully independent of Swing for event propagation; see the Holder footnote). Further, almost no implementor uses containment, either on the frontend or on the backend. That alone dooms Fluency to tiny-toydom. Every user interface needs Widget containment. Fluency has some but uses (nearly) none.
Second, there's the lack of headless tests for the frontend (the backend still supports them, but a bit hackily with two separate test hierarchies). Fluency is built with test-first and XP and JUnit and so on as articles of faith, but it itself is almost untested. That isn't for lack of trying (well, not solely because of that, anyway), it's also because free software tools typically aren't very good. Further, the current Fluency can't become well-tested until it can run in headless mode inside Eclipse without spewing NPEs. This problem should be fixed soon.
Third, there's the lack of continuous builds. We had them when we were building in Swing and running CruiseControl, but the switch to Eclipse broke that. This problem should be fixed soon.
Fourth, there's the lack of persistence for linkage. Again, this used to not be the case in the old Fluency, but the move from raw Swing to Eclipse hosed that. This problem should be fixed soon.
Fifth, there's the install problem. Fluency used to be easy to download and start up from the web to give potential authors a first look at it and so a pain-free feel for it. No longer. Now you must be an Eclipse hacker just to see it work. This problem should be fixed soon.
Sixth, there are the bug hunts caused by many patches of badly written, unstylish, and inconsistent code. The bugs seem to be primarily on the frontend, but they exist even on the backend. Inconsistency abounds: lots of bits are half done or half made over; Events are randomly named, and so are Actions; one implementation of a subclass differs from another implementation of a related subclass; one class is well documented, another is bare of comments; one class has good tests, another has none; and so on. Bugs may be lurking anywhere. A lot of this junk should vanish over time once we get back to continuous builds and headless testing.
Seventh, there's the lack of robots, either for testing or demo production. Even though Fluency uses Actions everywhere, Fluency demos can't yet be easily produced and distributed and are still all done by hand---which means they're done rarely, and usually only at the very end of term, when there's no time to do anything about the problems they uncover. Further, even though all of Fluency is written to generate and accept XML files describing user interfaces, it doesn't yet do that for its own user interface because its user interface is completely handmade with little use of either Actions or XML inside Fluency itself. That's the same as the non-recursive frontend problem mentioned above. Building Fluency's user interface by hand, instead of inside Fluency, is the fundamental problem.
The task here isn't to build the perfect user-interface builder. There's probably no such thing, anyway, just as there's probably no perfect user interface everyone can use. The task is, as Alan Kay said about the first Mac, to build one that's good enough to criticize. Fluency, even unfinished, promises to have several things going for it that may make it good enough to criticize.
First, from the point of view of user-interface implementation, Fluency radically breaks with tradition. Fluency's use of EAR atomizes the MVC idea of a monolithic View, exploding it into as many (visual) Widgets as are needed to compose the View. It also atomizes the MVC idea of a monolithic Model, exploding it into as many (non-visual) functional Widgets as the user interface needs to store and manipulate its data. And it atomizes the MVC idea of a monolithic Controller, exploding it into as many (non-visual) service Widgets as needed, each of which links some set of other Widgets together to encapsulate either a triggering or containment relationship. In Fluency, the entire Widget linkage graph is the Controller. In all three cases, each Widget is completely independent inside the user interface, entirely unaware that any other Widget even exists. Widgets instead unwittingly exchange state changes between themselves with Events and Actions instead of direct method calls, thus making every Fluent user interface, however partial, always runnable. Further, in EAR, the MVC distinction between Models, Views, and Controllers evaporates; all are Widgets. In Fluency, nearly anything can be represented by a Widget, including `Models,' `Views,' `Controllers,' remote computers, multiple networks, the operating system, other applications, the author, the user, multiple users, the user interface, and the user-interface builder itself. In EAR, any kind of Widget is more easily included than in MVC and the structure of the user interface is more easily put into the author's hands, letting the author visually specify it, including how it should change at runtime. EAR helps authors, not just programmers, build and share sophisticated user interfaces.
Second, from the point of view of politics, Fluency shares power more equally between authors and programmers. Instead of authors telling programmers what they think they want, then having programmers build static links on top of other programmers' large, monolithic Widgets, programmers build simple Widgets plus generic Widget connectors, which are also Widgets; then authors combine and recombine all those Widgets to produce the user interfaces they desire. Fluency might thus free programmers from the drudgery inherent in today's style of user-interface development. These days, the application a user interface is for almost doesn't matter; developing the user interface often sucks up most of the programmer's time. By opening up user interfaces, Fluency might let interested authors evolve their own user interfaces, effectively reprogramming their computers, without programmer aid and without a company like, say, Microsoft first deciding whether it's a good or bad thing.
Third, from the point of view of software engineering, Fluency might also have consequences for software development in general. Currently, software is going through an agglomeration phase because of the severe mismatch between the cost of writing an application and that of writing a usable user interface for that application. Since user interfaces are so much harder to get right, and since they are so much more computationally expensive to run than many applications, applications are growing more and more bloated as they try to allow for every possible variant way that they could be used, to make them as widely useful as possible. Class APIs now routinely contain hundreds of methods to allow for all that variety. Most of those bloatware options, however, go unused by the vast majority of users, adding only confusion and frustration. Fluency might act to break up today's giant applications since, for example, if you want word-count functionality added to your editor, you needn't add it as yet one more option in irremovable source code; instead, you could first write a (tiny) word-count application (a non-visual Widget) that takes a character stream, then create a (tiny) connector from it to a (tiny) visual Widget, say, a label, then glue that to the editor application's user interface with a Pipe and bond the whole thing inside a Holder. Potentially, all of today's special-purpose options could be washed away in the river of use until all users have exactly and only what they need.
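The tiny word-count application mentioned above might look like this. This is a hedged sketch under assumed names: the `Consumer` field stands in for the outgoing Pipe that would feed the label Widget.

```java
import java.util.function.Consumer;

// Hypothetical sketch of the word-count example: a tiny non-visual Widget
// that counts words in a character stream and announces each new count
// on its outgoing Pipe (here, a plain Consumer).
class WordCounter {
    private int count = 0;
    private boolean inWord = false;
    Consumer<Integer> countChanged = c -> {};   // Pipe to a label Widget

    // Feed one character at a time, as a textarea's stream would.
    void accept(char c) {
        boolean isWord = !Character.isWhitespace(c);
        if (isWord && !inWord) countChanged.accept(++count);
        inWord = isWord;
    }

    void acceptText(String s) { for (char c : s.toCharArray()) accept(c); }
}
```

Nothing here knows about editors or labels; the author supplies those linkages at build time, which is the whole point.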
Fourth, from the point of view of development, Fluency could come in at least four flavors, each aimed at authors with differing needs. Authors could choose which Fluency to work with, thus avoiding the clutter and greater demands of the more sophisticated versions, perhaps leading to a graduated scale of user-interface development tools that allow author-selectable visual programming.
Fifth, from the point of view of user-interface design, when authors can share their runnable user-interface ideas with each other, they can more quickly leverage each other's work. Any Fluent user interface could be open to inspection and alteration, with its level of editability varying with the author's interests, just as today's web is an outgrowth of millions of people with widely varying interests. Web evolution is rapid because everyone is always able to see everyone else's work, understand it, steal it, change it, and republish it as their own. That evolutionary style of development leads to many mistakes, both content and design are often foolish, and the computational inefficiency is extreme. On the other hand, the more rapidly that bad ideas spread, the more rapidly they become less attractive, and so the more rapidly replacements for them spread. Fluency might enable the same rapidity of change for computing in general. There's no need for a central creator, like Microsoft or Sun or Apple, deciding what everyone must have. They can't, anyway. The problem of creating enough user interfaces to our computers to solve all our many problems is far too big for any one company to solve.
Above all, though, Fluency could finally give authors control over some of their computational experiences and take a huge burden off programmer shoulders. That alone should significantly reduce frustration with today's user interfaces, both in their use and in their development. Fluent user interfaces, though, will create many more objects, and so will be much more expensive than traditional MVC-style user interfaces. However, if even `compiler' versions of Fluency prove to produce user interfaces that are too inefficient for daily practical use, authors could still evolve prototype user interfaces of what they really want, then present them to programmers for the traditional hand-coded implementation. The difference between that world and today's world is that a more reasonable division of labor would have been achieved---authors would do what they do best, and programmers would do what they do best. Programmers know the computer, but authors know the problem. Fluency itself would then be an `interface' in a yet more general sense, cutting today's hard-coded links between authors and programmers, letting them instead be observers of each other, and thus allowing them both to work more effectively to accomplish the joint task of producing better software.
Lecolinet, Eric, "A Molecular Architecture for Creating Advanced GUIs," Proceedings of the Sixteenth Annual ACM Symposium on User Interface Software and Technology, November, 2003.
Myers, Brad A., et al., "The Amulet User Interface Development Environment," CHI'97 Conference Companion: Human Factors in Computing Systems, March, 1997, pages 214-215.
Sun's BeanBox, an unsupported JavaBeans interface builder, is closed-source, but it appears to compile in Widget links by creating and compiling small adapter classes; however, it only allows Event-Action links. Jigloo, a commercial plugin for Eclipse and IBM's WebSphere, appears to be similar, as, apparently, are many other commercial builders.
Pipes, when viewed as carriers of conditions and actions, are similar to the productions used in the OPS5, Prolog, and ACT-R logic programming languages and their C and Java re-implementations (like CLIPS and Jess), to classifiers in genetic algorithms, and to the rules of many expert-system shells.
Fluency Widgets are similar to, but far simpler than, OMG's CORBA v 3.0 Components. Fluency Widgets don't have to discover each other; that's taken care of by the author.
Fluency isn't old enough yet for the design of its Actions to be completely firm. In every design there's always a balancing act between cleanliness, simplicity, uniformity, and generality, versus programmer and language pragmatics and problem complexity. It doesn't matter how wonderful a design might be if it's too complicated for the ordinary programmers implementing or maintaining it to know what to do in all cases---or if the underlying programming language doesn't naturally support the design. In both cases, programmers will always drop back into hack mode, thus inevitably introducing inconsistencies and bugs. Conversely, it doesn't matter how uniform and elegant a design is, and how well it fits ordinary programmer expectations and programming language abstractions, if the problem it's intended to solve has too many special cases for the proposed uniform solution to make programming sense.
For example, Fluency doesn't yet contain enough Widgets, and hasn't yet been used to build big enough user interfaces, for us to be certain whether all parameterized Actions should trigger immediately after all their parameter values have been set, or should trigger only on reception of a specific TriggerEvent, then spew an Error if one or more of their parameters is not yet set at that time. This choice has consequences for stopping Action execution, too. For example, should Actions by default fire only once, or should they fire repeatedly until receiving some StopEvent? Also, should their Docks drop Observer links once the Action has fired? Or should such links persist until specifically instructed to delete themselves by the author? Also, should Action start, stop, repeat, link, and unlink behavior all be programmable? (That last seems like a natural choice to a programmer, but it's probably the worst possible choice, because it will most definitely turn the author into a regular programmer.) There are advantages and disadvantages to each design choice. The codebase presently uses TriggerEvents and one-time execution.
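The TriggerEvent-plus-one-time-execution semantics the codebase currently uses might be sketched like this. All names here are invented stand-ins for Fluency's real classes: the Action collects parameters as they arrive, fires only when triggered, reports an Error if a parameter is still unset, and executes at most once.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Illustrative Action: parameters accumulate; a TriggerEvent fires the body.
class SketchAction {
    private final Map<String, Object> params = new HashMap<>();
    private final List<String> required;
    private final Function<Map<String, Object>, Object> body;
    private boolean fired = false;
    Object result;      // set on successful execution
    String error;       // what an ErrorDock would broadcast

    SketchAction(List<String> required, Function<Map<String, Object>, Object> body) {
        this.required = required;
        this.body = body;
    }

    void setParam(String name, Object value) { params.put(name, value); }

    void trigger() {                          // the TriggerEvent arrives
        if (fired) return;                    // one-time execution
        for (String p : required)
            if (!params.containsKey(p)) {     // unset parameter: spew an Error
                error = "unset parameter: " + p;
                return;
            }
        result = body.apply(params);
        fired = true;
    }
}
```

Under the alternative design (trigger immediately once all parameters are set), setParam() itself would call trigger(), and the TriggerEvent and the unset-parameter Error would both disappear.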
Also, because we currently use TriggerEvents, it's possible that Actions could be asked to execute before all their parameters are set. Thus, Actions presently have ErrorDocks, but they aren't yet used in practice. This is bad from an XP point of view, which espouses a create-only-when-needed philosophy (more commonly called `You ain't going to need it'). For that reason alone we should probably delete ErrorDocks. ErrorDocks are Emitters, just as OutputDocks are. They could let Actions broadcast any problems encountered during execution. Such Errors might be things like parameters not being set before attempted access to them, or system problems like file access errors, being locked out of a database, or being allowed into a database but not being allowed to fetch sensitive data. In the future, Fluency may find a way either to do away with ErrorDocks or to use them to let authors specify what to do, if anything, when errors occur. That could let authors set up testing harnesses for their user interfaces, just as JUnit does for regular Java programs. On the other hand, such work may be too much effort for a prototype.
Finally, it may be cleaner and more maintainable to forget about supporting Actions with multiple parameters from separate Emitters and make each Action either zero-parameter or one-parameter (the easy cases). Then, to make what looks (to the author) like an Action needing parameters from several separate Emitters, Fluency behind the scenes creates a chain of one-parameter Actions. Most of each such Action's work consists of just packaging up its received parameter (and its type, say into a lookup table stored in an Event) and passing it on to the next parameter-collecting Action in line, until the chain reaches the true Action, which extracts all the appropriate values from the Event and executes. (Note, though, that this might make OutputDocks useless, and so would limit what could be done inside Fluency, but it might still be the best choice for a prototype builder.)
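The chain-of-one-parameter-Actions idea might look like this in miniature. All names are hypothetical; each collector stashes its parameter, keyed by name, into an Event's lookup table and passes the Event on, and the final Action extracts everything and executes. (In real use each collector's value would arrive from its own Emitter; here it's hardwired for brevity.)

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// The Event carries a lookup table of accumulated parameters.
class ParamEvent {
    final Map<String, Object> table = new HashMap<>();
}

// A one-parameter Action whose whole job is to package and pass on.
class CollectorAction {
    private final String name;
    private final Consumer<ParamEvent> next;

    CollectorAction(String name, Consumer<ParamEvent> next) {
        this.name = name;
        this.next = next;
    }

    void receive(ParamEvent e, Object value) {
        e.table.put(name, value);   // package up this Action's parameter...
        next.accept(e);             // ...and pass the Event down the chain
    }
}

// The true Action at the end of the chain.
class FinalAction {
    String result;
    void receive(ParamEvent e) {
        result = e.table.get("greeting") + ", " + e.table.get("name") + "!";
    }
}
```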
In general, the current Action design ignores several problem areas.
As we progress toward a fuller builder, we may find that the current design for Actions is too general to easily program and that some of the effort of deciding Action linkage should be thrown back on the user (although that could easily lead to a situation where the backend programmers take advantage of that, throwing more and more decisions onto the author until the builder turns back into a traditional API-based programming environment). On the other hand, we may find that we need the generality to handle various edge cases created by sophisticated Widgets we don't yet support---for example, text editors or web browsers---but that all other Actions should be simplified to make the author's job easier. There may be some easy way, for example, to make Dock attachment nearly automatic in most cases, as Pipes are now, in which case the author would never have to see or worry about it.
Just as we ripped out Event handling from Widget code to let authors do linkage visually and flexibly, so must we rip out Event propagation from framework code to let authors do containment visually and flexibly. As it stands right now, though, Swing determinedly stands in the way. It automatically assumes the duty of propagating events up and down its (assumed) widget containment hierarchy, which it forces to be a tree, and it apparently uses backchannel method calls to do so by saving object references that it really shouldn't. For Fluency to support anything other than trees for Widget containment, we must first wrest control away from Swing and do it ourselves. Doing it ourselves isn't hard. Preventing Swing from trying to do it too seems to be.
Swing's programmers decided to embed event propagation not only in the Swing framework but also in the Swing toolkit widgets themselves. For example, the add() method on (Swing) containers (like JFrame and JPanel) lets each (Swing) container keep track of each add()ed widget's object reference directly. Each (Swing) container saves those references and calls methods on them directly. We have no way to block such method calls (short of using dynamic proxies, which no one has tried yet) because the references are to the Swing delegate toolkit objects themselves, not to the Fluency Widgets that proxy for them. Thus, Swing propagates events directly from (Swing) containers to (Swing) components using those saved references, which we can't get at because they're used inside the code of each (Swing) container object. For example, add()ing a Fluency Button to a Fluency Panel currently really means registering the Swing delegate of that Fluency Button (that is, a Swing JButton) on the Swing delegate of that Fluency Panel (that is, a Swing JPanel). The JPanel, on its own initiative, then takes it on itself to propagate (Swing) events to the JButton, independent of anything Fluency wants to do.
Further, one major design problem with forcing every component and container into a tree is that every component can then actually know what its container is (because in a tree such a parent container must be unique). As in all object-oriented programming: a container should always know what it contains, but a component should never know what it might be contained in. Swing's programmers apparently don't care about that basic rule. All Swing components even have methods to report their current container! Who knows how those methods are being used inside the Swing delegates of Fluency Widgets? That's bad news for more general ideas of containment, so we have to work around it somehow. Minimally, we must at least rely on Fluency programmers never to do anything as stupid as making contents dependent on their containers.
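The complained-about methods are real Swing/AWT API, not a Fluency invention; a few lines are enough to demonstrate the violated rule, with the component reaching `up' into its container via getParent():

```java
import javax.swing.JButton;
import javax.swing.JPanel;

class ParentDemo {
    // Shows that after add(), the *component* can report its container.
    static boolean componentKnowsItsContainer() {
        JPanel panel = new JPanel();
        JButton button = new JButton("hit me");
        panel.add(button);                  // the container saves the reference...
        return button.getParent() == panel; // ...and the component can see it back
    }
}
```

getParent() comes from java.awt.Component, so every Swing delegate inherits it; nothing in Fluency can take it away short of wrapping the delegate.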
As a start to untangling this mess, we've created a Screen Widget, Keyboard Widget, and Mouse Widget and had them duplicate part of the functionality that Swing manages for hit detection (``The mouse was just clicked; which Widget was hit?''), focus management (``A key was just typed; which Widget has the focus?''), and event propagation (``Better send an Event to the appropriate Widget; it needs to wake up and take care of some user action.''). The refactor isn't finished yet, but at least Fluency's container Widgets no longer have to support stupid methods like add(). Instead, Fluency add()s each Widget on creation to Screen, along with its `parent' (that is, Fluency is still maintaining the same old tree-containment silliness), and Screen then maintains that tree relationship, keeping itself in sync with Swing's own version of the same tree. So containment-related code at least no longer appears inside every single container Widget in Fluency.
The next step is to use Holders inside Screen and free ourselves of the limitation of trees. In the meantime, though, we still rely on Swing to handle all the layout and repaints, which means Swing still has an explicit tree of containments. Fluency currently gets Swing to handle repaints and layout and so on by having the Screen Widget tell Swing what the containment tree is by executing a series of add()s on the Swing delegates of each Fluency container Widget. Which is pretty horrible.
Further, currently we've only wrested partial control over events, in the sense of being sent the same events as Swing is seeing internally. Mouse Widget (which uses Swing's built-in glasspane on JFrame) captures all mouse events, and Keyboard Widget captures all keyboard events. Fluency uses Screen Widget to build its own tree to pass events up and down on. This, of course, still limits us to trees. Further, Widget repainting in Fluency still relies on Swing's repainting algorithm, which assumes a containment tree.
Swing event interception is relatively easy, but making Fluency the sole interceptor seems to be hard. For example, Fluency's Keyboard Widget registers itself on Swing's FocusManager (which is java.awt.KeyboardFocusManager), but there doesn't seem to be any way to prevent the same FocusManager from accepting other keyboard event observers (internal to Swing), perhaps because Swing still relies on AWT event-handling. Which means that Fluency intercepts keyboard events (for example) and packages them up as Fluency Keyboard Events then sends them off to the appropriate Fluency Widgets, BUT Swing ALSO sends the Swing versions of those same events DIRECTLY to the various Swing delegate widgets. Grrrr. However, only one programmer has worked in this area so far, and that was a few years ago, so perhaps there's a good workaround he didn't find. For example, it may be possible to first register Screen on FocusManager, then make Screen the new FocusManager, and ignore any subsequent requests for event registration. We haven't yet fully ripped event propagation from the cold dead hands of Swing and AWT so that we can handle Widget containment more sensibly---or at least, not quite as inflexibly. Swing and AWT everywhere assume a tree of containments, and apparently have hacks everywhere connecting pieces they shouldn't in order to maintain that assumed tree. We have to nullify all those hacks.
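One candidate workaround (untried here, as noted above) is AWT's KeyEventDispatcher hook: a dispatcher registered on the KeyboardFocusManager sees every key event before normal dispatch, and by returning true it claims the event, so the FocusManager takes no further action and the Swing delegates never see it. A minimal sketch, with the repackaging step left as a comment:

```java
import java.awt.KeyEventDispatcher;
import java.awt.KeyboardFocusManager;
import java.awt.event.KeyEvent;
import javax.swing.JButton;

class Interceptor {
    static char lastTyped;

    // Sees every key event first; returning true swallows it entirely.
    static final KeyEventDispatcher DISPATCHER = e -> {
        lastTyped = e.getKeyChar();   // repackage as a Fluency Keyboard Event here
        return true;                  // claimed: default Swing dispatch never runs
    };

    static void install() {
        KeyboardFocusManager.getCurrentKeyboardFocusManager()
                            .addKeyEventDispatcher(DISPATCHER);
    }
}
```

This doesn't stop other code from registering further dispatchers, so it still falls short of making Fluency the sole interceptor, but a dispatcher that always returns true at least starves the default dispatch path.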
Finally, because frameworks often take charge of event propagation in the same way that Swing does, but with each one doing it differently, Fluency's current architecture for making toolkits interchangeable (Abstract Factory and Bridge) is probably inadequate to the task of making Fluency toolkit-independent. Event management in Fluency may thus have to change for each new toolkit, so it might remain toolkit-specific for some time to come. This may not be that serious a problem, though, since if Fluency is otherwise good enough for use, early developers and authors will probably ignore its overreliance on one toolkit and language. And if it becomes a problem they can write other Fluencies that use other toolkits. Of course, the inevitable problem then will be that each Fluency will get tied more and more to specific platforms. Oh well. That's a problem we'll just have to postpone until it comes up. It's entirely possible that the user Widget we've been planning for so long, but never really implementing, could encapsulate such toolkit event-propagation differences, so that all it would take for Fluency to be able to swallow a new toolkit would be a toolkit-specific rewrite of the user Widget.
Fluent user interfaces might be too inefficient for practical use. But if so, Fluency might eventually support `graph compilation,' by transforming Widget Actions into public Widget methods, then converting Widget links into direct method calls. This seems like it should be relatively easy. Once Fluency is uniformly implemented, that is. If even a `compiler' version of Fluency produces user interfaces that are too slow, they can still be useful as prototypes for programmers to then hardcode once they stabilize. That's not the ideal solution, but it's still far better than anything we have today.
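A toy version of the `graph compilation' idea: the same Widget link expressed first through a generic Pipe object (interpreted, one extra object and one extra dispatch per link) and then as the direct method call a compiler pass would emit. All names are made up.

```java
import java.util.function.Consumer;

class GraphCompile {
    // A stand-in visual Widget with one Action exposed as a method.
    static class Label {
        String text = "";
        void setText(String s) { text = s; }
    }

    // Interpreted: the Event crosses a generic connector object per link.
    static class Pipe {
        private final Consumer<String> target;
        Pipe(Consumer<String> target) { this.target = target; }
        void send(String event) { target.accept(event); }
    }

    // Compiled: the link is flattened away into a direct call.
    static void compiledLink(Label l, String event) { l.setText(event); }
}
```

The transformation itself (turning Actions into public methods and links into calls) is what a `compiler' Fluency would automate; the point here is only that the two forms are observably equivalent.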
Fluent user interfaces might be too complex to debug. But if they are sufficiently separable into modules, each module might be separately tested. Mock Widgets should be easy to write since no Widget is aware of any other. Even entire test harnesses for various types of interfaces (for networking, gaming, desktop computing, and so on) might not be unthinkable inside Fluency. All other builders, being source-code based, have no such capability.
Beyond some level of complexity, even visual programming turns into programming. Many authors will be unable to cope. Programmers are good with complexity but may find visual programming too frustrating if the same effect can be achieved with less effort by direct coding. However, while hard-coding makes much more efficient use of the computer's time, it's only useful after the right design has been found. Allowing Widget-link scripting, though, should be relatively easy. Fluency might also let source-code programmers bypass the visual interface for building new Widgets and let them code Widgets directly for incorporation into Fluency. This has to be done with great caution, though, or Fluency will rapidly turn into yet another source-code-based builder.
Fluency will fail if most programmers hate to program visually. Anecdotal evidence from experience with both Dan Ingalls' Fabrik and IBM's VisualAge for Java (now replaced by WebSphere Studio) may support this, but it's not clear how many programmers that may represent, nor is it clear that Fluency's builder will be perceived the same way, nor is it clear that programmers will have to program visually if there is adequate scripting available.
Fluency may fail if most programmers' first experience with it is only for small demo programs, since it will not significantly reduce the time to create trivial interfaces, and (so far) it even increases that time. Unless programmers immediately see how it can be used to significantly reduce their effort when building complex user interfaces they may judge it worthless, so they won't develop Widgets for it, so authors will have nothing to work with. Fluency might reduce this risk by containing at least one fairly comprehensive set of Widgets initially. That will take a lot of work, though.
Fluency will fail if programmers are not excited enough by the possibilities to prime the pump by producing a usable range of Widgets. Programmers are Fluency's first audience, but Fluency cannot be written with programmers only in mind, for then it would be useless for the larger audience. But if Fluency only finds a use among programmers, that is already a large enough audience to consider it at least a partial success. The danger is that this option is too seductive. We are programmers, and we mostly only know programmers. The usual incestuous relationships will then form and our tool will become something that today's programmers can understand and relate to. We'll be back to source code before long. Nothing will ever change.
Fluency will be irrelevant to authors if they aren't motivated to tailor their interfaces, despite their numerous and vocal complaints today. Authors, being further from the machine than programmers, and ignorant of computer science to boot, will continue to accept what's given to them, never knowing that better is possible, so they'll never demand it. Programmers as a class have no incentive to change that. One way around this is to observe that Fluency doesn't have to be useful to all non-programmers to be useful. All it takes is for it to lower the entry bar to a larger population of non-programmers than we have now. If we do that then demand for it should grow as completely unskilled non-programmers ask less unskilled non-programmers to help them alter their Fluent user interfaces.
Fluency will be judged poorly if most of the user interfaces designed with it are poor, which is likely to be the case. But it takes only a few attractive websites to make a web browser useful and worth developing further. The same might be true for Fluent user interfaces.
Fluency may fail if it's trivial to write malware in it---and it is, since any Widget, visual or non-visual, may do anything at all. On the other hand, the same is true of any other builder; the difference with Fluency is that it tries to open the builder to anyone, not just programmers. Protecting against the possibility of malware seems impossible, but sandbox versions of Fluency might be doable---although there would be no way for an author or user to tell if any particular version was safe ahead of time. The only counter-pressure to malware is people's need to show off their skill at making something useful. If that demand is high enough it will eventually counteract the inevitable parasites and perhaps Fluency might eventually evolve some kind of accreditation mechanism.
Fluency may be hard to accept by many programmers because it goes against a very strong historical current of stuffing more and yet more code into toolkit objects. Some Swing Components, for example, have several hundred methods. Fluency throws away nearly all that work, instead using toolkit objects as if they were hollow shells, almost solely for their pretty and familiar appearances.
If Fluency is successful, the wiring behind every Fluent user interface could become utterly impenetrable to anyone but its creator---in other words, exactly what programs are when written by most programmers. One bad consequence would be that help desks could no longer function. On the other hand, maybe they won't be as necessary, either, since Fluent user interfaces will be more flexibly fixable anyway. Further, common Fluency style sheets might evolve, so that authors and users could apply their own to any Fluent user interface and have it conform in some recognizable way to more familiar user interface conventions.
If Fluency becomes successful, then Widget selection, as the range of Widgets grows, will become the next big problem. Simply navigating among several thousand Widget variants will be hard. The design of Fluency's interface will become more and more important, then crucial, then vital. But then flexible interface design is what Fluency is all about. If Fluency is successful, the long-term solution to its own user-interface problems is obvious.
The whole Action Logging, XML, and Memento meshuggah.
Everything an author can do in Fluency is expressible as a sequence of graph transforms. So persistence, not just of the linkage graph, but also of the entire sequence of graph transforms in a session, is simple. Fluency can even support undo for all such transforms if they are made into Actions on itself, since it itself is a Widget like any other. Fluency can also run a past session as a movie, letting other authors see not just what is in an interface but also how it came to be built. Thus, sharing interfaces as XML files should be straightforward.
Currently we use action logging even though it's `inefficient' because asking regular programmers, some of whom apparently didn't even properly understand anonymous inner classes, to produce correct mementos for their code was like pulling teeth---and twice as painful. Plus they produced many really hard-to-find bugs that the whole class had to spend days hunting. It just wasn't worth it. However, it's now all (or almost all) locked away in a separate persistence layer, so changing it in future shouldn't be that hard.
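The action-logging approach might be sketched as follows: every graph transform is recorded as a command with an inverse, so the whole session can be replayed (the `movie') or rolled back. This is purely illustrative; the graph is reduced to a bare set of node names, and all other names are invented.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

class ActionLog {
    // Every recorded transform knows how to apply and un-apply itself.
    interface Transform {
        void apply(Set<String> graph);
        void unapply(Set<String> graph);
    }

    static Transform addNode(String n) {
        return new Transform() {
            public void apply(Set<String> g)   { g.add(n); }
            public void unapply(Set<String> g) { g.remove(n); }
        };
    }

    final Deque<Transform> log = new ArrayDeque<>();  // the session history
    final Set<String> graph = new HashSet<>();        // the linkage-graph stand-in

    void perform(Transform t) { t.apply(graph); log.push(t); }
    void undo() { if (!log.isEmpty()) log.pop().unapply(graph); }
}
```

Replaying the log from an empty graph reproduces the session; serializing the log (say, as XML) is what makes sharing a session straightforward. The memento alternative would snapshot the graph instead of recording the transforms.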
[Note: The following may not match the current implementation closely. This is the direction I'd like Fluency to go, as I like as many classes as possible to be completely clear and single-purpose, and to have as much functionality as possible be handled utterly uniformly. As you can see, though, it's not finished yet.]
In Fluency, nearly everything that shows in the builder is a Widget (the exceptions so far are Actions and Docks). Even the author can be represented inside Fluency with an `author Widget' linked via Pipes to every other Widget in the Fluent user interface being built. The actual author is then the state changer of that author Widget. An author Widget makes it easy to build robots for demos, create functional tests, and log author actions. That Widget can have Actions and Events, just as any other Widget, and it can be linked to other Widgets, just as any other Widget. Thus, for example, the author could specify that any set of visual Widgets move in response to mouse moves by linking a Pipe from the MouseMove Event emittable by the author Widget to the Move Action on any set of Widgets, all of which then move together as the mouse moves. Of course, we also have Holders to do that. Such joint movement can also be conditional on various Events (say, for example, key presses, which would also be Events emitted by the author Widget).

Similarly, Fluency can have (non-visual) `user Widgets' which allow the same sorts of things but for users of the interface during runtime. A single interface running on a networked computer could then have multiple users, with each user on a different machine, but with all users seeing the same user interface. Each user could be represented by a separate user Widget, all of which are instantiated in each interface instance running on those multiple machines, with each remote user feeding Events to a unique Port of the Holder that is Fluency on each local machine. Fluency itself is a Widget, so it can receive and emit Events. Users might thus take turns controlling their shared interface by changing which user Widget is linked to a Pipe that acts to pass on `user Events' to the rest of the interface, or the interface might be divided into sets of Widgets that each user independently controls.
Thus Fluency could support even multi-user interfaces.
When creating, deleting, linking, or unlinking Widgets, an author is implicitly editing a directed multigraph with the set of Widgets as nodes and links between them as directed edges. It's a multigraph, not a graph, because two nodes can have multiple directed edges between them going both ways. Further, a node can have directed self-loops, since a Widget can link to itself. There are also two kinds of links: Event links and Action links. Two Widgets are linked via Events if one Widget is an Observer of the other. Two Widgets are linked via Actions if one Widget stores an Action instance targeted on the other Widget; neither need be an Observer of the other.
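The linkage structure just described can be modeled in a few lines: a directed multigraph whose edges are tagged as Event links or Action links, where parallel edges (in both directions) and self-loops are all allowed. This is an illustrative model only, not Fluency's internal representation.

```java
import java.util.ArrayList;
import java.util.List;

class LinkageGraph {
    enum Kind { EVENT, ACTION }   // the two kinds of links

    static class Edge {
        final String from, to;
        final Kind kind;
        Edge(String from, String to, Kind kind) {
            this.from = from; this.to = to; this.kind = kind;
        }
    }

    // A list, not a set, so parallel edges between the same nodes survive.
    final List<Edge> edges = new ArrayList<>();

    void link(String from, String to, Kind kind) { edges.add(new Edge(from, to, kind)); }

    long countBetween(String a, String b) {
        return edges.stream()
                    .filter(e -> e.from.equals(a) && e.to.equals(b))
                    .count();
    }
}
```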
Fluency itself is a Widget, and it has Actions. In particular, all graph transforms are Fluency Actions, and Fluency uses them to keep an up-to-date snapshot of the linkage graph. Such graph transforms reduce to the following set:
(1) Adding a node: Fluency is itself a Holder, which holds Widgets that an author uses to control Fluency's execution plus another Holder (call this the top-level Holder), which holds Widgets that together form the user interface that the author is using Fluency to build. Fluency creates instances of all possible Widgets on startup via Factory Methods, then clones them as needed (an application of the Prototype design pattern). So adding a node to the graph means cloning a new Widget instance and inserting it into Fluency's interface Holder.
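The Prototype arrangement described in (1) might be sketched as follows, with invented names: one instance of each Widget kind is registered at startup, and adding a node means cloning the registered prototype.

```java
import java.util.HashMap;
import java.util.Map;

class WidgetPrototypes {
    interface Widget {
        Widget cloneWidget();   // the Prototype pattern's clone operation
        String kind();
    }

    // One hypothetical Widget kind; a real one would copy its state in cloneWidget().
    static class ButtonWidget implements Widget {
        public Widget cloneWidget() { return new ButtonWidget(); }
        public String kind() { return "Button"; }
    }

    private final Map<String, Widget> prototypes = new HashMap<>();

    void register(Widget prototype) { prototypes.put(prototype.kind(), prototype); }

    // Adding a node = clone the prototype (then insert it into the interface Holder).
    Widget create(String kind) { return prototypes.get(kind).cloneWidget(); }
}
```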
(2) Adding an edge: Fluency adds edges in two ways: via new Observer relationships or via new Action targeting. Both are initiated by linkage requests, either directly from an author during buildtime, or indirectly via Actions created in response to author requests for dynamic linkage during runtime. First, the author selects two sets of Widgets to link. Then: (2a) if the author clicks on an emittable Event from one Widget in the first set and a receivable Event on a Widget in the second set Fluency makes an Event-Event Pipe to make the link. (2b) If the author clicks on an emittable Event from one Widget in the first set and then on an Action on a Widget in the second set Fluency makes an Event-Action Pipe to make the link, potentially creating other Pipes or Docks if the Action needs multiple parameters or multiple stages (for example, if the Pipe condition is both time and Event dependent). (2c) The author can also ask Fluency to latch (instead of link) two Widgets. To latch WidgetA in front of (behind) WidgetB, Fluency first makes all Widgets that currently emit to (receive from) WidgetB instead emit to (receive from) WidgetA, then it makes WidgetA emit to (receive from) WidgetB.
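The latch operation in (2c) is the least obvious of the edge transforms, so here is a minimal model of latching WidgetA in front of WidgetB: every Widget that used to emit to B is rewired to emit to A, and A then emits to B. Names and the bare-bones Node class are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;

class LatchDemo {
    static class Node {
        final String name;
        final List<Node> receivers = new ArrayList<>();  // who this node emits to
        Node(String name) { this.name = name; }
        void emitTo(Node r) { receivers.add(r); }
    }

    // Latch a in front of b: b's emitters now feed a, and a feeds b.
    static void latchInFront(List<Node> emittersOfB, Node a, Node b) {
        for (Node e : emittersOfB) {
            e.receivers.remove(b);
            e.emitTo(a);
        }
        a.emitTo(b);
    }
}
```

Latching behind would be the mirror image, rewiring B's receivers instead of its emitters.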
(3) Deleting a node: (3a) If the node to be deleted is a non-Holder Widget, and it's inside any Holder besides the top-level Holder, Fluency restores it to the containing Holder, thus keeping any links the Widget was involved with alive. (3b) If the node to be deleted is a non-Holder Widget, and it's inside the top-level Holder, Fluency replaces it with a Port, thus keeping any links the Widget was involved with alive even though the Widget itself is now gone. (3c) If the node to be deleted is a Port then it's not part of any Holder contained in the top-level Holder (since the author can't see it, so can't select it for deletion), so it must have been requested for deletion by an automatic process, presumably triggered by deletion of the Widget it used to be a port of, so Fluency deletes it and links its Emitters to its Receivers, thus keeping their links alive. (3d) If the node to be deleted is a Pipe, Fluency deletes it and connects its Emitters to its Receivers, thus keeping their Event links alive. (Note: all Action links that the Pipe kept are lost, since any Action instances stored in the Pipe are also deleted.) (3e) If the node to be deleted is a Holder, Fluency first restores all the Holder's contained Widgets (NOTE! must Fluency check whether any of the old links are gone? If so, a workaround is to register Actions as they're created and let Fluency keep track of them in the Action-to-Pipe map. For other possibilities, see the Action footnote.), then it deletes the Holder. (NOTE! see Action footnote for thoughts on what to do about targeted but remote Action instances.)
(4) Deleting an edge: Deleting an Event link just means unregister()ing the Receiver from the Emitter. Deleting an Action relationship requires first finding the Action targeted on the Widget. Fluency, however, always knows where (that is, in which Pipe) such Actions are because the only way they can be created and stored in a Pipe is if the author asks for an Action link between a particular set of Widgets. Making that request into an Action on Fluency itself simplifies everything. Fluency is then the central point for all such `sourceless' Actions and can keep track of everything it needs to.
Note: Both Pipes and Holders can be arbitrarily extended with Actions. Further, the two methods currently listed for Holders could themselves be Actions.
Note: Instead of treating Holder styles as composite Actions, Fluency might have 'composite Events.' That seems wrong somehow, though.
Note: Should cloning a node be a separate Action? Cloned nodes should have the same state (including links to the appropriate Emitters and Receivers). Maybe turning node cloning into a Fluency Action is overkill, though. On the other hand, it's nice and uniform if everything to do with the running of Fluency is bundled up as a Fluency Action.
Note: Fluency might keep an up-to-date graph for visual Widget containment as well, except that the containment graph is even simpler than the linkage graph---it's mostly a tree, possibly with various Widgets placed on top of others in the basic tree structure (like popup menus and tearoff menus, and so on).
Note: Fluency could come in at least three flavors, each aimed at authors with differing needs. With the Interpreter design pattern, Pipes and Holders might be made visually programmable. Presumably only designers and programmers would bother. Novices would make do with premade Pipes and Holders provided with their version of Fluency. Each of the premade types of Widgets could be presented in palettes for the author to select from. Novices would thus face a simpler Fluency in which the only way to link any two Widgets is to select one or more kinds of Pipe from an initially fixed Pipe palette, and nearly all compound Widgets (and compound Actions) would be uneditable.
Note: Ports are transparent to linkage. For links that pass Events on to a contained Widget, an intervening Port passes the Events on untouched. For links to Actions on a contained Widget, no other Widget can intervene, since the Action instances stored in the linking Pipes are already targeted on the contained Widget at creation, whether or not the Widget was in a Holder at the time of the link, or was put in a Holder after Action targeting. This is true even for Widgets deeply nested inside Holders.
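Port transparency for Events amounts to a node that is both a Receiver and an Emitter and re-emits whatever it receives, unchanged. A minimal sketch, with hypothetical Emitter/Receiver types standing in for Fluency's:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of Port transparency: a Port re-emits whatever it receives,
// untouched, so a link routed through a Port behaves like a direct link.
public class PortSketch {
    interface Receiver { void receive(String event); }

    static class Emitter {
        private final List<Receiver> listeners = new ArrayList<>();
        void register(Receiver r) { listeners.add(r); }
        void emit(String event) { for (Receiver r : listeners) r.receive(event); }
    }

    // A Port is both a Receiver and an Emitter; it passes Events on as-is.
    static class Port extends Emitter implements Receiver {
        public void receive(String event) { emit(event); }
    }
}
```

Chaining any number of such Ports between an Emitter and a Receiver leaves the delivered Events identical, which is the transparency property claimed above.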
Note: When Fluency is in runtime, the Fluent user interface it's running is essentially an adaptive production system; however, production precedence rules aren't important in Fluency. Order of 'evaluation' of productions isn't germane because no precedence is specifiable. (Is that always true? Surely some Events might need to have higher priority than others in some circumstances?)
Note: An Emitter never cares which Receivers, if any, are listening to its emitted Events. From its point of view, it's a radio station only, not one end of a walkie-talkie. However, Fluency may one day add synchronous handshaking with semaphores to some complex Pipe subtypes, or even implement pull instead of, or in addition to, push, and use those Pipes to synchronize some Emitters and Receivers. Such Pipes would then be synchronized two-way queues.
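The synchronized two-way queue idea floated above can be sketched with a bounded blocking queue: the push side blocks when the queue is full, the pull side blocks when it is empty, so the two ends stay in step. This is purely hypothetical (such Pipes don't exist in Fluency yet), and the SyncPipe name and String-valued Events are illustrative:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch of a pull-capable, synchronized Pipe subtype:
// the Emitter pushes into a bounded queue, the Receiver pulls at its
// own pace, and the bound throttles a too-fast Emitter.
public class SyncPipe {
    private final BlockingQueue<String> queue;

    SyncPipe(int capacity) {
        queue = new ArrayBlockingQueue<>(capacity);
    }

    // Push side: blocks when the queue is full.
    void push(String event) {
        try { queue.put(event); }
        catch (InterruptedException e) { throw new RuntimeException(e); }
    }

    // Pull side: blocks when the queue is empty.
    String pull() {
        try { return queue.take(); }
        catch (InterruptedException e) { throw new RuntimeException(e); }
    }
}
```

A plain push-only Pipe would instead deliver Events immediately to whoever happens to be registered, which is the radio-station model the note describes.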
Note: Fluency could have Action palettes and make even Actions tearoff, so that authors could create new Widgets, and in particular Pipes, by attaching appropriate triggerable Actions to them during buildtime.
Note: An author can copy or save a Holder. In particular, the interface being edited is itself in a single Holder, which Fluency owns.
Note: Should Fluency support Action-Event and Action-Action links? It would be good for full consistency, but maybe bad in that it would place extra burdens on programmers of new Widgets. Hmmmm.
Note: It would be really nice to close the development loop, and let Fluency's own interface be developed inside Fluency. That becomes thinkable if all of Fluency's parts are themselves Widgets, some visual, some functional, some service, and if all Actions are Actions on Fluency itself, which is itself a Widget and a Holder, and a container of exactly one Holder. That would be the ideal.
Note: Since Fluency itself has a Holder that all interface Widgets belong to, enforcing a uniform style on all Widgets is trivial, should the author desire it.
Note: To simplify Widget creation from prewritten Java programs, Fluency might use reflection to take a class and let a programmer attribute its methods with some kind of Widget-maker editor. Authors, though, will want to view a Widget's Actions and Events by usage category. Presentation to the author of a Widget's selectable Actions is itself a (small) user interface. Fluency must manage that presentation layer to allow easy navigation, understanding, and selection among a range of Widgets and for a range of authors. When Fluency queries a Widget for its Actions and Events, it gets back Dictionaries. Each Dictionary entry has Description metadata useful for tooltips, presentation, rearrangement, priority, maybe even contextual usage notes. Instead of raw text (which must then be parsed all over Fluency), that metadata might itself be presented as Actions (in other words, Actions on Actions and Actions on Events, just to describe them to the author). Actions and Events may even have several views, so that different authors can see the metadata associated with any one Action or Event differently, depending on what is most convenient at the time, for them, or in more sophisticated views.
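The reflection step can be sketched directly with the standard java.lang.reflect API: walk a class's public methods and build a dictionary of candidate Actions, each with a generated Description string. The WidgetMaker name, the Counter toy class, and the Description wording are all made up for illustration; the Widget-maker editor and usage categories are out of scope here.

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.Map;
import java.util.TreeMap;

// Sketch of reflection-based Widget creation: turn a plain class's
// public methods into a Dictionary of candidate Actions, each entry
// carrying Description metadata suitable for tooltips.
public class WidgetMaker {
    static Map<String, String> actionsFor(Class<?> cls) {
        Map<String, String> dict = new TreeMap<>();
        for (Method m : cls.getDeclaredMethods()) {
            if (!Modifier.isPublic(m.getModifiers())) continue;
            dict.put(m.getName(),
                     "Invokes " + cls.getSimpleName() + "." + m.getName()
                     + " with " + m.getParameterCount() + " argument(s)");
        }
        return dict;
    }

    // A toy class to reflect over; stands in for a prewritten Java program.
    public static class Counter {
        private int n;
        public void increment() { n++; }
        public int value() { return n; }
    }
}
```

A real Widget-maker editor would let the programmer prune, rename, and categorize these entries rather than exposing every public method verbatim.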
Note: The Fluency Widget would be pretty god-like, which is never a good idea. However, there are natural lines of division into fairly modular pieces, so it's not so worrisome that the Fluency Widget will eventually grow too big, bulky, and interconnected to refactor easily. There's the piece that manages persistence, the piece that manages recovery, the piece that manages talking to the display environment and the user, the piece that manages Event propagation, the piece that manages user-interface editing in build mode, the piece that runs the built Fluent user interface in run mode, and so on.