Tuesday, December 29, 2009

FoneMonkey: The Download

Tonight I am releasing FoneMonkey into the wild. Without further ado, here it is:


The iPhone does not yet allow installation of user-defined frameworks, so FoneMonkey is distributed as a static library along with the image files and nibs required by the FoneMonkey user interface. FoneMonkey.zip contains libFoneMonkey.a as well as a bunch of png and xib files.

The zip file does not include the FoneMonkey source code, which we'll be releasing when we officially launch the FoneMonkey open source project in mid-January, 2010 (ie, just a few weeks from now!). At that time we'll be going live with a community forum as well. Until then, please post questions or comments to this blog.

To get started using FoneMonkey, download the zip file and extract the FoneMonkey distribution folder. Then follow the instructions below.

Happy testing! We look forward to your feedback!

Testing an iPhone application with FoneMonkey

In order to test an iPhone application with FoneMonkey, you must first link FoneMonkey with your application as shown in the 2-minute video below. Step-by-step textual instructions are also provided here.

You might need to view the video in a new window.

  1. Open your application's project in Xcode.
  2. Duplicate your application's build target by right-clicking on it and selecting Duplicate from the menu. A new target will be created called YourApp copy.
  3. Rename YourApp copy to something like YourAppTest.
  4. Add the downloaded FoneMonkey folder to your project by right-clicking on the project and selecting Add > Existing Files... from the menu. Navigate to the FoneMonkey folder, select it, and click the Add button.
  5. When the dialog box appears, select the Recursively create groups for any added folders option.
  6. In the Add to Targets box, deselect YourApp and select YourAppTest.
  7. Click Add.
  8. Right-click on the YourAppTest build target and select Get Info from the menu.
  9. On the General tab, delete libFoneMonkey.a from the Linked Libraries. You will need to add CoreGraphics.framework and QuartzCore.framework to the Linked Libraries list if they aren't already there.
  10. On the Build tab, scroll down to the Linking section and click on the Other Linker Flags setting. When the dialog box appears, enter:

    -ObjC -lFoneMonkey -LFoneMonkey/lib -all_load

  11. Dismiss the project Info window.
  12. Right-click on the YourAppTest build target and select Clean from the menu.
  13. Right-click on the YourAppTest build target again and select Build and Debug from the menu.
  14. Your application should start in the simulator. Immediately after it displays, the FoneMonkey console should drop down over it.
  15. See this post for more info about recording and playing back tests with FoneMonkey.

Wednesday, December 16, 2009

FoneMonkey: The Movie

In this video, recently smuggled from Gorilla Logic labs deep beneath the surface of the Earth, we see the revolutionary new open source iPhone application testing tool FoneMonkey recording and playing back interactions with an application cobbled together from Apple's UICatalog and GLPaint sample applications (which together include most of the iPhone SDK UI components).

The video demonstrates FoneMonkey recording and playing back various actions including button touches, table scrolling and selection, switches, sliders, text entry, tab selection, finger dragging (in this case, painting), and even phone shaking (which is the gesture that causes GLPaint to erase the current painting).

You might need to view this video in a new window.

As you can see, the monkey can really dance. I hope to post the library binary and installation instructions within the next few days.

Sunday, December 13, 2009

FoneMonkey: Open Source Testing Tool for iPhone Applications

This blog post is the first announcement to the world of FoneMonkey, an open source iPhone testing tool that I've been developing for the last several months. FoneMonkey automates iPhone application testing by recording and playing back user interactions, and verifying the results. FoneMonkey was inspired by FlexMonkey, the open source Flex testing tool I wrote a little more than a year ago, and which has subsequently been significantly enhanced by my colleagues at Gorilla Logic, especially Eric Owens. Today FlexMonkey has more than 4,000 registered users and an active community.

We plan to release FoneMonkey source and binaries (with attendant fanfare) in early January. Here's a sneak preview of what's coming.

Some Technical Background

FoneMonkey has been more challenging than FlexMonkey to create because the iPhone SDK, unlike the Flex SDK, provides no Automation API. Before I could develop FoneMonkey, I had to develop an automation framework. The resulting FoneMonkey API is simple and extensible, and it initially supports nearly all native iPhone (ie, Cocoa Touch) components.

Like FlexMonkey, FoneMonkey is designed to be used by both programmers and QA testers. Rather than simply recording and playing back low-level events, FoneMonkey records "semantic" events. In other words, FoneMonkey doesn't record that you touched a pixel at coordinate (125, 210), but instead records that you selected table row 5. The resulting scripts are relatively forgiving with respect to cosmetic UI changes. They can also be composed from scratch by directly specifying sequences of commands. (FoneMonkey Command Guide coming soon....).

Each FoneMonkey command is composed of a command name, a component identifier, and a set of command-specific arguments. For example:

Touch UIButton Send (touches the UIButton labeled "Send")
InputText UITextField, your name, Fred (types "Fred" into the UITextField with a field prompt of "your name")
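FoneMonkey itself is written in Objective-C, but the shape of a command — a command name, a component identifier, and command-specific arguments — can be sketched in any language. The Java sketch below is purely illustrative; the class and field names are not FoneMonkey's actual API:

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch of a FoneMonkey-style command: a command name,
// a component class, a component identifier, and command-specific
// arguments. All names here are hypothetical.
class MonkeyCommand {
    final String command;        // e.g. "Touch", "InputText"
    final String componentClass; // e.g. "UIButton", "UITextField"
    final String monkeyId;       // label or prompt identifying the component
    final List<String> args;     // command-specific arguments

    MonkeyCommand(String command, String componentClass, String monkeyId, String... args) {
        this.command = command;
        this.componentClass = componentClass;
        this.monkeyId = monkeyId;
        this.args = Arrays.asList(args);
    }

    @Override
    public String toString() {
        String tail = args.isEmpty() ? "" : ", " + String.join(", ", args);
        return command + " " + componentClass + " " + monkeyId + tail;
    }
}
```

For example, `new MonkeyCommand("InputText", "UITextField", "your name", "Fred")` renders as `InputText UITextField your name, Fred`.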

The Flex Automation API provides an automation delegate corresponding to each Flex UIComponent, and a delegate instance is created for each UIComponent instance created at run-time. Automation delegates subscribe to their component counterparts for all UI events, and translate these low-level keyboard and mouse events into high-level component events. FoneMonkey uses a logically similar mechanism, but one implemented with Objective-C categories that extend UI component classes with record and playback logic.

While recording, the FoneMonkey framework monitors all UI events and forwards them to the FoneMonkey category methods that provide recording for each component class. During playback, FoneMonkey sends each command to the playback methods also provided by the class's FoneMonkey extension category.

The FoneMonkey framework provides categories that record and play back events for instances of the UIView class and all UIView subclasses. It is straightforward to create a category for any custom UI class that requires custom recording and playback logic, or to further enhance the recording and playback logic provided out of the box by the FoneMonkey framework. (FoneMonkey Extension Guide coming soon....)
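Objective-C categories have no direct equivalent in most languages, but the dispatch model they enable can be approximated. The Java sketch below is a hypothetical analogue, not FoneMonkey code: a registry keyed by component class that walks the superclass chain, mirroring how a category on UIView is inherited by every UIView subclass:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Stand-ins for UIView and a subclass, for illustration only.
class View {}
class Button extends View {}

// Hypothetical analogue of category-based dispatch: handlers are
// registered per component class, and lookup walks the superclass
// chain so subclasses inherit the behavior of their ancestors.
class PlaybackRegistry {
    private final Map<Class<?>, Function<String, String>> handlers = new HashMap<>();

    void register(Class<?> componentClass, Function<String, String> handler) {
        handlers.put(componentClass, handler);
    }

    // Find the nearest handler for the component's class or a superclass.
    String play(Object component, String command) {
        for (Class<?> c = component.getClass(); c != null; c = c.getSuperclass()) {
            Function<String, String> h = handlers.get(c);
            if (h != null) return h.apply(command);
        }
        return "no handler";
    }
}
```

Registering a handler for `View` covers every subclass; registering one for `Button` overrides it for buttons only — the same effect as providing a more specific FoneMonkey category for a custom component.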

Using FoneMonkey

To test an application, you simply statically link it with the FoneMonkey library. (FoneMonkey Setup Guide coming soon....) Upon starting your application, the FoneMonkey Console "drops down" over your application window, displaying the recording and playback controls at the bottom of the screen.

The Elements application

The Elements application with FoneMonkey controls (at the bottom).

The video below shows FoneMonkey being used to test Apple's "The Elements" sample application.

In the video, we touch the record button to hide the FoneMonkey Console and begin recording our interactions with "The Elements" application. FoneMonkey monitors for periods of inactivity and drops back down over the application if we stop interacting with it for more than the current inactivity timeout setting (by default, 2.5 seconds). Touching the play button replays the recorded script.

You might need to view this video in a new window.

FoneMonkey provides the ability to edit scripts and save them for later replaying.

The FoneMonkey script editor

You can add Verify commands to a script to test the values of UI component properties or the values of associated object properties (ie, you can use a key-value coded key path). In The Elements sample application, a custom class called AtomicElementView is used to display data for the currently selected element. The AtomicElementView has a property called element that references an instance of the AtomicElement class, which in turn has a property called name. In the Verify command below, we test that the current AtomicElementView's element.name has a value of "Lead".

Adding a Verify command to a script

Script results with failed Verify (in red)
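The key-path evaluation behind a Verify command works like Cocoa's key-value coding: each dot-separated segment of the path is read in turn. The Java sketch below is illustrative only (FoneMonkey uses Cocoa's built-in KVC); the AtomicElement classes follow The Elements example:

```java
import java.lang.reflect.Field;

// Toy versions of The Elements classes, for illustration.
class AtomicElement {
    String name = "Lead";
}

class AtomicElementView {
    AtomicElement element = new AtomicElement();
}

// Illustrative key-path evaluation, analogous to Cocoa key-value coding:
// "element.name" is split on dots and each segment is read in turn.
class KeyPath {
    static Object valueForKeyPath(Object target, String keyPath) {
        Object current = target;
        try {
            for (String key : keyPath.split("\\.")) {
                Field f = current.getClass().getDeclaredField(key);
                f.setAccessible(true);
                current = f.get(current);
            }
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
        return current;
    }
}
```

A Verify command then reduces to comparing `valueForKeyPath(view, "element.name")` against the expected value.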

The next video shows how to edit a script, add a verify command, save the modified script, and then replay it.

You might need to view this video in a new window.

Over the next several weeks I'll be continuing to update the doc in preparation for our putative January launch. Stay tuned.

Monday, August 17, 2009

FlexMonkey 1.0 Beta 2 and the FlexMonkey User Guide are here!

Announcing the immediate availability of FlexMonkey 1.0 Beta 2! After the very successful Beta 1 launch last month, we are pleased to bring you Beta 2. Since Beta 1 had very few issues, Beta 2 is much more of an enhancement than a bug-fix release.

Most notably, Beta 2 introduces "MonkeyLink", which allows you to link FlexMonkey directly into your target SWF. If you have had any problems dynamically loading your application through the FlexMonkey Console or the MonkeyAgent, you can now instead compile MonkeyLink into your application SWF, and then launch your application directly. Using MonkeyLink, you can test virtually any Flex or AIR application. Dynamic SWF loading with the Console or the MonkeyAgent are of course still supported so you also continue to have the option of testing SWFs without having to first recompile them.

Download FlexMonkey 1.0 Beta 2 now at http://flexmonkey.gorillalogic.com/gl/stuff.flexmonkey.download.html!

The long-awaited FlexMonkey User Guide is available at http://flexmonkey.gorillalogic.com/gl/flexmonkey/FlexMonkeyUserGuideR1b2.pdf! Weighing in at nearly 60 pages, the FlexMonkey User Guide is like having your very own Eric Owens!

Happy testing!

Thursday, July 16, 2009

Who you calling ugly?

As mentioned in my previous post, the web has been abuzz over FlexMonkey this week. I noticed that artima.com was referencing us and so I clicked on over to see what they had to say. Here's the quote from Bruce Eckel:
Looking closer to home, Jon Rose at Gorilla Logic has just released Flex Monkey, a free tool to automatically test your Flex UIs. I've known Jon through various events so decided to look at the site. As with many sites, it contains a fair amount of cleverness but no clear statement of what they are about. However, among the slogans I found one that I thought had promise: "The answer is not more monkeys." It's succinct, catchy, and on the right track, because it's suggesting that more things should be automated (and Flex Monkey is an example of that kind of automation). Alas, the rest of the site doesn't give me a clear idea of what they do.
Now I've done some hanging with Bruce and he's someone I like and respect, so I talked to our graphic designers and had them make some changes to our home page.



In all seriousness, we do appreciate any and all feedback on the site (and we do really like Bruce).

Meet the Monkey

We're gratified by all the buzz being generated around FlexMonkey in the wake of our 1.0 release this week. A search for FlexMonkey +testing on Google returns more than 12,000 links.

My own article, which gives an overview of FlexMonkey's advanced features and discusses user interface testing more generally, was published today on InfoQ at http://www.infoq.com/articles/flexmonkey-ui-unit-testing. Eric Owens, our lead developer on the FlexMonkey project, published an intro article on Adobe Devcenter at http://www.adobe.com/devnet/flex/articles/flexmonkey.html.

If you haven't already, please take a minute to meet the monkey and let us know what you think!

Tuesday, July 14, 2009

The monkey has landed!

FlexMonkey 1.0 is here! After nearly nine months of gestation we've delivered what is undoubtedly the premier record/playback testing automation tool for Adobe Flex and AIR applications. And it's free!

For the thousands of folks who are already using FlexMonkey 0.8, FlexMonkey 1.0 brings significant new functionality including a completely revamped user interface which makes it much easier to record and play back tests. We've also added the ability to take snapshots of portions of the screen or of property values to be verified during playback. In addition to testing Flex apps, FlexMonkey 1.0 for the first time provides direct support for testing AIR apps. The FlexMonkey Console is now itself an AIR app, but can launch standalone as well as browser-based SWFs for testing.

Please let us show you our monkey at http://flexmonkey.gorillalogic.com! You can also check out Gorilla Logic's Flex service offerings at http://www.gorillalogic.com/what.development.services.flex.html.

Monday, June 1, 2009

In a Perfect World....

In a perfect world, we could focus solely on implementing functional requirements and not have to do anything explicitly to deal with non-functional constraints such as keeping shared resources from becoming bottlenecks or contending with bandwidth limitations. It can be instructive to consider what an application would look like if there were no physical constraints. Imagine if we lived in a world with infinite memory, infinite processing capacity, unlimited bandwidth, zero down-time, and interesting television shows (Imagine really hard). In such a world, queries are blindingly fast, there is no limit to how often we interact with the back-end, and we can constantly exchange enormous amounts of data if we need to. With such a platform, we are free to develop our application without any consideration of physical constraints. Each view can refresh the data it is displaying nearly instantaneously, and we can immediately transmit changes to the back-end, and immediately receive notifications of any errors encountered.

In such an environment, the need for a great many design constraints is removed, although not completely eliminated. Even with physical constraints removed, logical constraints remain. Still, removing the physical constraints certainly simplifies software construction by removing the need to manage limited resources efficiently. Of course, in the real world of enterprise applications, there is no such environment, but not every physical or logical constraint is an issue for every application. Low-volume applications or those with very small and simple data requirements can often behave almost as if they were executing in the (near-)perfect world. Consider for example an application that allows a user to update their name, address, and credit card information. As soon as the user logs in, you can easily retrieve such a small volume of data immediately, and you wouldn't have to contend with the user's name and address being changed by someone else simultaneously. You can then allow the user to update any of the returned data, updating their name and address, deleting some credit cards and adding others. If there are few simultaneous users, we can transmit each change immediately to the server where we immediately run required edit checks against it and report any errors encountered back to the client for communication to the user via the UI. Such an approach allows us to write an application in what is arguably the most “natural” way, ie, the simplest way. Physical constraints, on the other hand, force us to commit "unnatural acts", ie, they force us to write code that is more complex than it needs to be from a purely logical perspective.

With RIAs, a few physical constraints are fundamental at this point in time. The client application is a separate process (on a distinct platform), and all communication with the server side must be accomplished by messaging. It is this constraint, more than anything else, that distinguishes ERIA architecture from web applications, two-tier client-server applications, or one-tier local applications. And speaking of one-tier applications, it is worth noting that virtually no enterprise application allows for the data to be persisted entirely on the client box. It is virtually always the case that we cannot rely on the client itself for safe storage of business records, and there are typically large amounts of non-client-specific data (such as a product catalog) that are usually too large to make client-side distribution practical.

Another constraint is the almost universal requirement that persistence be provided by a non-object-oriented datastore, typically a relational database. There is of course a fundamental impedance mismatch between object-centric and relational-centric data structures that is at the heart of many unnatural programming acts, and although there have been many variations on “object databases” over the years, relational databases continue to dominate for good reasons we’ll explore in some depth another time. In a perfect world, queries would return graphs of objects rather than lists of rows. Fortunately, persistence management frameworks such as Hibernate automate a good deal of the acrobatics involved in object-relational mapping, but unfortunately they are not (yet?) fully transparent to the application developer, requiring varying amounts of explicit coding.

Coding in the real world requires more effort than coding in the perfect world because we need to add code to work around physical constraints. It would seem to be self-evident that since it takes more effort to add more code, you should only do so when there is an actual requirement to do so. In other words, the default architecture for any ERIA is essentially the simple one described above. We retrieve all the data up-front. Operate on the data as necessary. And send edits to the back-end as they occur. In pondering each of the questions posed above, the default answer in each case is this straightforward, perfect-world architecture since it requires the least effort to build. In each case however it is also necessary to consider how physical constraints can force the undertaking of more complicated strategies, but again, it should be emphasized that in the absence of any such physical constraints, the simplicity of the perfect-world approach is the way to go.
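To make that default concrete, here is a hypothetical sketch (in Java, with invented names) of the perfect-world flow: retrieve everything up front, edit locally, transmit each change to the back-end as it occurs, and surface any validation error immediately.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical back-end interface for the perfect-world default.
interface Backend {
    Map<String, String> fetchAll(String userId);
    String applyEdit(String field, String value); // null means accepted
}

// A toy backend that rejects empty values, standing in for real edit checks.
class ToyBackend implements Backend {
    public Map<String, String> fetchAll(String userId) {
        Map<String, String> m = new HashMap<>();
        m.put("name", "Fred");
        m.put("address", "12 Main St");
        return m;
    }

    public String applyEdit(String field, String value) {
        return value.isEmpty() ? field + " is required" : null;
    }
}

// The "natural" client: one up-front retrieval, then immediate edits.
class NaiveClient {
    private final Map<String, String> data = new HashMap<>();
    private final Backend backend;

    NaiveClient(Backend backend, String userId) {
        this.backend = backend;
        data.putAll(backend.fetchAll(userId)); // retrieve all data up-front
    }

    // Each edit is transmitted immediately; errors go straight to the UI.
    String edit(String field, String value) {
        String error = backend.applyEdit(field, value);
        if (error == null) data.put(field, value);
        return error;
    }

    String get(String field) { return data.get(field); }
}
```

Every complication beyond this sketch — batching, caching, optimistic locking — should be traceable to a specific physical constraint.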

Tuesday, February 17, 2009

What is an architecture and why do you want one?

An architecture is essentially a set of design constraints imposed on an IT development project. You want one because unconstrained design leads to impedance mismatches among the various pieces of a system due to logical or physical incompatibilities among software components developed by different team members or third parties. The result is software that takes longer or costs more to build, or is more expensive to operate once it's deployed.

Impedance mismatches can manifest in a variety of ways.
  • Model - Multiple, possibly inconsistent object definitions for the same logical entity
  • API - Incompatible arguments or return results required or produced by libraries
  • Performance - Inability of data sources to provide data as fast as required
  • Locks - Contention resulting from different components having different expectations for how resources should be shared
  • Transactions - Improperly serialized updates resulting in data being lost or overwritten with stale values
  • Responsibility - Functionality needlessly duplicated or incorrectly assumed to be provided elsewhere
  • Recovery - Components not coordinating properly in the event of failure to minimize downtime or data loss
  • Operational - Software too difficult to modify in response to anticipated future needs
An architecture is sufficiently defined when you can turn a developer loose on developing some piece of functionality and if he or she obeys all the established design constraints, there is a minimal possibility of creating software that produces any of the above problems. If you're conceptually bothered by being constrained, preferring instead to program in wide open spaces with the wind blowing in your hair, perhaps you would prefer to instead think of architecture as a form of freedom -- the freedom from choice. Once defined, an architecture frees developers from having to make (possibly incompatible) choices for how to implement an application's functionality, letting them instead devote most of their time to implementing functional requirements while minimizing the time spent hacking together glue to integrate with the rest of the team's code.

Here's a very high-level breakdown of a typical ERIA:


In any non-trivial enterprise application development project, multiple developers will be working in tandem, and in an RIA project where the client and server platforms do not even use the same programming language, it is natural that developers will be divided into front-end and back-end teams. In order to reduce dependencies between client and server code development and allow the teams to work in relative independence on their respective components, it makes sense to explicitly define an API that will provide the linkage between the client and server code.

The BackendService is typically implemented using a webservice-type container, for example Tomcat and associated plugins, which provides a framework for authorizing users, managing session data, and communicating over web protocols such as REST, SOAP, or AMF.


As with most traditional web applications, we require a container that maintains session state for each logged in user. Unless application users are anonymous, the session state will at a minimum contain user identity information required for access control. For many applications, code can be simplified by maintaining other session-specific information on the server, such as for example the current contents of the user's shopping cart.


In most applications, we require user identity information in order to be able to complete requests received from a client. For example, when a user requests information for "my account", the server must somehow know which account to access. It is typically not an option to supply the account number for example as part of the request, since the client could be running a hacked application that allows the user to specify an arbitrary account number. When the user authenticates with the server, the server must associate the authorized user's identity with the session, and check each subsequent request against that identity information in order to perform the requisite access control.

UserIdentity is typically implemented with role-based security frameworks such as JEE security implementations riding on LDAP or some other user directory store.
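The session-identity rule can be sketched in a few lines of Java. Everything here is hypothetical and framework-free; a real implementation would ride on a servlet container and a JEE security realm:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: "my account" is resolved from the identity bound
// to the session at login, not from an account number sent by the client,
// so a hacked client cannot request an arbitrary account.
class SessionStore {
    private final Map<String, String> userBySession = new HashMap<>(); // sessionId -> userId
    private final Map<String, String> accountByUser = new HashMap<>(); // userId -> accountId

    // At authentication time, the server binds the identity to the session.
    void login(String sessionId, String userId, String accountId) {
        userBySession.put(sessionId, userId);
        accountByUser.put(userId, accountId);
    }

    // Every subsequent request is checked against the session's identity.
    String myAccount(String sessionId) {
        String userId = userBySession.get(sessionId);
        if (userId == null) throw new SecurityException("not authenticated");
        return accountByUser.get(userId);
    }
}
```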


DomainObjects are instances of classes that directly model some aspect of our problem domain, such as Customers, Invoices, and Products. These objects are mostly created from input received from the client, or by being retrieved from a database or some external source, and additionally provide the non-UI-specific logic comprising our application.[Mention DDD?]

If we do our jobs right, most of the custom code we write on the backend will be domain-specific, having delegated most of the non-domain-specific plumbing to third-party frameworks.


The PersistenceManager caches previously retrieved data and collects updates triggered by the client until such time as a logical transaction is completed and can be committed to a database or external system. For relational databases, the PersistenceManager is typically implemented with a framework such as Hibernate. [Link to hibernate definition?]


Clients access the BackEndService via a library that provides a client-code-friendly interface. This BackEndServiceAPI is often no more than a wrapper that exposes the BackEndService interface in the client's native language, and does little more than marshal arguments as necessary to call whatever remoting framework is being used for backend communication, and set up whatever callbacks are needed in cases where results are returned asynchronously.

The BackEndServiceAPI would usually be defined as an interface for which a mock implementation can be used as a stand-in so that the front-end team has the option of proceeding with development ahead of the back-end implementation being complete.
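A minimal sketch of that arrangement in Java (interface and mock names invented for illustration):

```java
import java.util.Map;

// Hypothetical sketch: the client codes against an interface, so a mock
// can stand in for the real back end while it is still being built.
interface BackEndServiceAPI {
    Map<String, String> getCustomer(String customerId);
}

// Mock used by the front-end team ahead of the back-end implementation.
class MockBackEndService implements BackEndServiceAPI {
    @Override
    public Map<String, String> getCustomer(String customerId) {
        return Map.of("id", customerId, "name", "Test Customer");
    }
}
```

Swapping in the real implementation later requires no changes to client code that depends only on the interface.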


In a typical RIA, we cannot actually return full-blown objects from the server to the client or vice versa. Instead, we can only transmit an object's primitive property values. For example, if the client requests a particular Customer, the server does not return an actual instance of a DomainObject, but instead returns a set of values such as name, address, and account number. While it's possible to construct an actual DomainObject instance from these returned values, it is often unnecessary since an RIA in many cases is merely providing for the display and editing of such primitive values, delegating any requisite business logic back to the server-based DomainObjects. DomainDTO's are objects that provide holders for these values, but don't necessarily provide any "behavior" in the form of methods on the objects. It is of course also common to selectively replicate some DomainObject logic on the client, especially simple field edits, and in some cases we might even replicate complex logic on the client, so we're using the term DTO loosely here. [Link to DTO definition?]

[Diagram showing a DomainObject vs corresponding DomainDTO?]
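In lieu of a diagram, a minimal Java sketch of the distinction (all names hypothetical; a real RIA client would express the DTO in its own language, such as ActionScript):

```java
// Hypothetical sketch: the server-side domain object carries behavior;
// the DTO sent to the client carries only primitive property values.
class CustomerDomainObject {
    String name;
    String address;
    int accountNumber;

    CustomerDomainObject(String name, String address, int accountNumber) {
        this.name = name;
        this.address = address;
        this.accountNumber = accountNumber;
    }

    // Business logic stays on the server...
    boolean isEligibleForCredit() { return accountNumber > 0; }

    // ...and only the values cross the wire.
    CustomerDTO toDTO() { return new CustomerDTO(name, address, accountNumber); }
}

// A holder for values, with no domain behavior.
class CustomerDTO {
    final String name;
    final String address;
    final int accountNumber;

    CustomerDTO(String name, String address, int accountNumber) {
        this.name = name;
        this.address = address;
        this.accountNumber = accountNumber;
    }
}
```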


With a thin client, client-side state does not usually consist of any more than the contents of some html form or table. Each new view usually requires a round-trip to the server to retrieve both the definition of the view (ie, an html page), and the data presented within it. Even just changing the sort sequence of the rows in a table requires re-retrieving a fully formatted table of data from a server. One of the big advantages in an RIA is that we can provide a much more interactive and responsive user experience by not requiring the constant back-and-forth of data and markup with the server. Instead we can retrieve (possibly large amounts of) data just once, display it in a variety of ways, allow the user to edit it, and send updates to the server only after a transaction is logically complete.

The CacheManager keeps track of what we've already retrieved from the server so that we don't incur unnecessary overhead of re-retrieving the same data, and tracks what objects are changed ("dirty") and require transmission back to the server. The sophistication of CacheManagers can vary greatly from one application to another, consisting of little more than hash tables of objects by type [Include a simple diagram of this?], or as much as client-side relational databases.
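A CacheManager at the hash-tables-by-type end of that spectrum might look like the following Java sketch (names and structure invented for illustration):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical minimal CacheManager: hash tables of objects by type and id,
// plus a "dirty" set of changed objects awaiting transmission to the server.
class CacheManager {
    private final Map<String, Map<String, Object>> byType = new HashMap<>();
    private final Set<String> dirty = new HashSet<>(); // "type:id" keys

    Object get(String type, String id) {
        Map<String, Object> objects = byType.get(type);
        return objects == null ? null : objects.get(id);
    }

    // Cache data retrieved from the server (not dirty).
    void cache(String type, String id, Object value) {
        byType.computeIfAbsent(type, t -> new HashMap<>()).put(id, value);
    }

    // Record a local edit: cached and marked for transmission.
    void update(String type, String id, Object value) {
        cache(type, id, value);
        dirty.add(type + ":" + id);
    }

    Set<String> dirtyKeys() { return dirty; }

    // Called after a successful commit to the server.
    void clearDirty() { dirty.clear(); }
}
```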


The view components are the widgets that provide for the display and editing of data, usually DomainDTO's retrieved from the local cache. By ViewComponents, we don't mean low-level UI components like buttons and textfields, but semantically rich composite components such as, for example, a Customer Creation Form or an Order History Table.


RIA platforms are event-driven user interface frameworks where ViewComponents emit and respond to asynchronous notifications such as "User Wants to Quit" or "User Has Updated His Credit Card Info". While such event-driven models are ideally suited to coordinating the asynchronous update and display of data across possibly multiple views, event-driven programs can degenerate into a tangle of interdependencies among components dispatching and listening for each other's events. An EventBus organizes event handling around a hub-and-spoke model in which events are logically broadcast throughout the application. In this way, brittle point-to-point connections are replaced by a robust, centralized switchboard for application events.
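A minimal hub-and-spoke EventBus can be sketched in a few lines; this Java version is illustrative only (RIA frameworks supply their own event dispatchers):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical hub-and-spoke EventBus: components subscribe to event
// types and publish to the bus, never directly to each other.
class EventBus {
    private final Map<String, List<Consumer<Object>>> subscribers = new HashMap<>();

    void subscribe(String eventType, Consumer<Object> handler) {
        subscribers.computeIfAbsent(eventType, t -> new ArrayList<>()).add(handler);
    }

    // Logically broadcast: every subscriber to this event type is notified.
    void publish(String eventType, Object payload) {
        for (Consumer<Object> h : subscribers.getOrDefault(eventType, List.of())) {
            h.accept(payload);
        }
    }
}
```

Publisher and subscriber never hold references to each other, which is exactly what replaces the brittle point-to-point connections.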


The Application is "everything else". It's the code necessary to integrate the BackEndAPI, the ViewComponents, and the CacheManager into the functioning whole experienced by the user. The Application for example binds particular views to particular DTO's, and invokes particular BackEndAPI methods in response to user interactions.

Can I just go build my application now?

Having diligently studied the above description, you may be chomping at the bit to put the above principles to work and begin building your next RIA. Unfortunately, the above is not the answer. It only frames the questions more clearly. Chiefly:

  • Whether to locate logic on the client or server?
  • How much data to cache locally on the client?
  • When should data be transmitted from the client to the server?
  • When should data be persisted to a database?
  • How should view components be made aware of updates to what is being displayed?
  • What is the right level of granularity for remote operations?
What's that you say? You came here for answers? Fear not. All will be revealed. But not this afternoon....

Tuesday, January 27, 2009

The Year of Coding Dangerously

You've got an appointment set up with your customer at a lunch place down by the water. You get there early, sit down at a booth and check the place out. There's a beautiful woman sitting at a table near the door, watching a tiny movie on her iPhone. You think, where have I seen her before? Is she from a rival consulting outfit? Could it be you're being followed? No, no, you think. Just paranoia. On the road too long.

Your customer arrives and sits down heavily on the other side of the table. He pushes a sky blue folder toward you and orders a patty melt. Inside you find a PowerPoint deck and flip through it, amazed that people still use such cheesy clipart. After a minute you look up, and trying to control your voice manage to say, "We can't do it in time." You reach for a cigarette but then realize you don't smoke. So instead you pick up a french fry.

Your customer gets an impatient look on his face and says, "Look, negotiations broke down in Helsinki. It was bad. They quit the consortium. They're not going with the standard. We've gotta do it their way. How bad could their proprietary stuff really be?".

Just then, an explosion rips through the building. You find yourself lying face up in the parking lot just as a helicopter lands and two guys wearing flak jackets with your company's logo jump out and pull you inside. As you fly off, you see the entire island being consumed in a volcanic eruption and sinking slowly into the sea. Just before you pass out again you think, next time I gotta get a local gig.

I'm sure we've all had experiences like this one. As application developers, we live in a world of excitement, intrigue and suspense. Yes, danger lurks around every turn. Sometimes it arrives with a whisper. A never-before-seen error message appears on a console, or a process turns up dead. Other times it comes with a bang. A load balancer goes berserk and the Big Board in the call center lights up like a pinball machine. The lives of thousands of active sessions hang by a thread. Your cell phone rings. The president needs an update....

As asserted in our last post, architecture can be seen as a set of implementation constraints selected to mitigate risk. There are other ways to think about architecture, but this definition works well in that it emphasizes the "why" rather than the "what" of architecture. By focusing on the "why", we can better determine the "how much" and define no more architecture than necessary to deal with the risk profile of a particular project.

Architecture alone is of course insufficient for mitigating every kind of risk we might encounter on a project. We also need things like legal contracts, customer expectation management, and adequate QA testing, to name a few. Most of those other things are not, strictly speaking, technical risks, and so are not typically the province of application developers, being handled instead by people like lawyers, project managers, and that really annoying guy in the purchasing department.

In defining an Enterprise RIA Architecture, we will begin by identifying the risks most commonly faced by development efforts and then define our architecture in the context of providing mitigation strategies for those risks.

To be clear, what we mean by "Enterprise RIA" (ERIA) is any application in which a rich client user interface accesses a set of back-end services to execute transactions. ERIAs are characterized much more by this back-end interaction requirement than by any particular attributes of the user interface itself. Because the user interface of an ERIA can vary so dramatically from application to application, there's not much we'll say about the architecture of the presentation portion of an application, which might consist of anything from simple forms to sophisticated 3D animations. We are much more concerned with how the presentation portion handles information and transactions in concert with back-end systems, where the back-end systems are information- or transaction-centric.

What, you may wonder, makes back-end interaction in an RIA fundamentally different from good-old-fashioned J2EE architecture? The answer is that in a typical J2EE application, virtually all the logic lives in the back-end. Yes, we can use JavaScript to execute logic on the client platform directly, but since, by definition, non-RIA applications provide non-rich UI functionality, that logic is typically limited and simple.

In an RIA, however, we are building full-blown client-side applications with client-side logic that can be quite complex. RIA clients tend to be highly stateful, oftentimes maintaining relatively large amounts of data client-side that create unique problems around synchronization with back-end systems of record. As we shall see, such front-end to back-end state synchronization is one of the major technical requirements that drive many of our architectural decisions.
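The state-synchronization problem described above can be sketched in a few lines of client-side code. This is a minimal, hypothetical illustration (all the names here are mine, not part of any framework): the client keeps its own copies of back-end records, marks the ones it mutates as dirty, and on sync pushes only those back to the system of record.

```typescript
// A record as the back-end system of record knows it.
type Rec = { id: number; name: string };

class ClientStore {
  private records = new Map<number, Rec>();
  private dirty = new Set<number>(); // ids that have diverged from the back end

  // Seed the client-side cache from a back-end fetch.
  load(recs: Rec[]): void {
    for (const r of recs) this.records.set(r.id, { ...r });
  }

  // Mutate locally and remember that this record now needs syncing.
  update(id: number, name: string): void {
    const r = this.records.get(id);
    if (!r) throw new Error(`unknown record ${id}`);
    r.name = name;
    this.dirty.add(id);
  }

  // Push only the dirty records to the back end; returns how many were sent.
  sync(send: (r: Rec) => void): number {
    let pushed = 0;
    for (const id of this.dirty) {
      send(this.records.get(id)!);
      pushed++;
    }
    this.dirty.clear();
    return pushed;
  }
}
```

Even this toy version hints at the real difficulties: deciding what counts as dirty, what happens when the back end rejects an update, and how to reconcile records that changed on both sides while the client was holding its copy.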

In our next installment, we'll look more closely at the risks confronting most ERIA projects. Until then, watch your back-end and for godsakes make sure you're not being followed by agents of rival consulting outfits....

Wednesday, January 21, 2009

Architecture, smarchitecture.

Now begins our discussion of Rich Internet Application Architecture. This is a work in progress, which is to say that what is below is really a draft of a post, but we're exposing the salami-making process to invite collaboration from our friends and lovers.

The first thing we need to discuss is what we mean by "architecture", since this is surely an overloaded term in our industry. It seems to mostly say something about how a system's functionality (domain-specific and otherwise) is partitioned both logically and physically across application code, software platforms, and physical hosts. For the purposes of this discussion we'll define architecture as any set of implementation constraints imposed on a team during some project.

By this definition, a project without a pre-defined architecture is one in which there might be a "design", for example consisting of major classes and how they interact with each other, and choices about what platforms things will run on, but beyond that it is left up to each developer to determine how to implement each piece of functionality assigned to their work queue. We say that the project has no pre-defined architecture because one will almost inevitably "emerge" as various mechanisms and partitioning strategies get worked out and discussed among the team, and people naturally clone already working code when they need to do something similar. It could be argued that the resulting "architecture" is one that is tightly adapted to the actual (emerging) needs of the project, rather than something over-engineered beyond what we find is really needed after we've been coding on the project for a while.

Of course the real choice is not a binary one between having a pre-defined architecture and not having one. We can rigorously define some aspects of a system's architecture while letting other aspects emerge. But that still leaves the question: "How much architecture do we need?"

I propose that the answer to that question should be "Just enough to mitigate risk", and will close with the proposition that:

A system's architecture definition should consist of the minimal set of implementation constraints needed to mitigate implementation risks.

In our next installment, we'll discuss the most common risks confronting Enterprise RIA system implementations.

Wednesday, January 14, 2009

Just another code-slinging CEO

I am a fossil, or more accurately, I should say that I'm a fossil record. Or perhaps the better metaphor would be that I'm like one of those ice cores geologists drill from the depths of the Arctic. If you were to sink a drill deep into my head (and I know many of my former colleagues would like to), you would find evidence of the many fads, trends, and revolutions that have constantly reshaped corporate IT over the past 30 years recorded in my brain like the stratified deposits within an ice core. By examining such ice cores, geologists arrive at a deeper understanding of our planet, and by examining the ice core in my brain, I will endeavor for us to arrive at a deeper understanding of Planet IT. I have not only been present for the many tectonic shifts that periodically rock our industry, I have usually been standing directly on the fault line, intimately involved as a leader, manager, and practitioner of application development.

I want to emphasize the "practitioner" part of what I just said. Years ago, my colleagues and I would tell each other (half-) jokingly, "Don't trust anybody who doesn't log on". This was shorthand for our belief (which I still hold) that anybody who no longer understands the technology is inherently ineffective in directing IT initiatives. This does not mean that management needs to write code, but management does need to understand a fairly large body of key technical principles since these significantly impact the planning and execution of any IT project.

I myself have continued to write code in spite of having spent many years in fairly senior management positions (at Sun, for example, I managed 300 people). Hopefully as I delve into the subtleties of Rich Internet Application Architecture in the following series of posts, you will trust that my views are derived not just from a consideration of battles viewed from the safety of an underground bunker back at central command, but also while engaged in fierce trench warfare myself.

This experience, coupled with the ice core in my brain that informs my views with a deep (and painful) appreciation of all that has gone (wrong) before, hopefully convinces you that my insights are more valuable than what you would find in a random blog post. (And yes, I realize that this is itself a random blog post and my last comment was a referential recursion of sorts that probably blew the stack of several unsuspecting readers.)

In any event, I do hope you'll join me in my next few posts for an exploration of Rich Internet Application Architecture (I hesitate to call this RIAA since it's the RIAA that sues people for downloading Britney Spears singles), but before we begin, let's all go write some code.

Thursday, January 8, 2009

The Tower of Babbage

In this post about an Eric Evans presentation, Jon Rose mentions how Eric needed to clarify that the intention is not for Ubiquitous Languages to be enterprise-wide. UL's are established across project teams, not organizations. I was initially surprised that such a clarification would be necessary since this seems obvious, but then I remembered life in the late eighties....

Love Shack was a smash hit and Enterprise Data Models were all the rage.

Much like the Arthur Clarke science fiction story in which a monastery of monks fulfills the purpose of the universe by recording with a computer every known name of God (when they were finished "overhead, without any fuss, the stars were going out"), the idea was that if we could just catalog EVERY data entity, attribute, and association across the ENTIRE enterprise, then surely we would come to know our domain, our applications, our users, indeed our very inner souls better, and thus build better software faster since all of the various warring factions in IT would finally speak the one, true tongue.

At my company, PaineWebber, several monkish DBAs undertook this task for several years, compiling an ever-growing glossary of ever greater weight and size. One night in a dream (or perhaps it was after a series of lunches with a leggy sales rep) our CIO realized he could accelerate our rendezvous with destiny by buying somebody else's financial services data model. And so it was that for the low, low price of $1.5 million, we purchased a great many binders from First Boston containing a great multitude of boxes and lines.

From time to time, developers would come to seek the truth of the binders. Like astrologers poring over charts of the stars, they would look for signs in the many boxes and lines, looking for clues to unlock the secrets of their particular problem domain. But, alas, while many of the boxes and lines bore striking resemblances to actual people, places, and things from the known world, there would also be striking differences from the reality they knew, and in any case, the detail of the models was simply too overwhelming -- or perhaps it was just too magnificent.

And so we went on as before, speaking our own local dialects and doing our best to communicate with neighboring tribes, using extract files like smoke signals, reeking with the smell of EBCDIC.

At least the stars didn't wink out.