Tuesday, January 27, 2009

The Year of Coding Dangerously

You've got an appointment set up with your customer at a lunch place down by the water. You get there early, sit down at a booth and check the place out. There's a beautiful woman sitting at a table near the door, watching a tiny movie on her iPhone. You think, where have I seen her before? Is she from a rival consulting outfit? Could it be you're being followed? No, no, you think. Just paranoia. On the road too long.

Your customer arrives and sits down heavily on the other side of the table. He pushes a sky blue folder toward you and orders a patty melt. Inside you find a PowerPoint deck and flip through it, amazed that people still use such cheesy clipart. After a minute you look up, and trying to control your voice manage to say, "We can't do it in time." You reach for a cigarette but then realize you don't smoke. So instead you pick up a french fry.

Your customer gets an impatient look on his face and says, "Look, negotiations broke down in Helsinki. It was bad. They quit the consortium. They're not going with the standard. We've gotta do it their way. How bad could their proprietary stuff really be?"

Just then, an explosion rips through the building. You find yourself lying face up in the parking lot just as a helicopter lands and two guys wearing flak jackets with your company's logo jump out and pull you inside. As you fly off, you see the entire island being consumed in a volcanic eruption and sinking slowly into the sea. Just before you pass out again you think, next time I gotta get a local gig.

I'm sure we've all had experiences like this one. As application developers, we live in a world of excitement, intrigue and suspense. Yes, danger lurks around every turn. Sometimes it arrives with a whisper. A never-before-seen error message appears on a console, or a process turns up dead. Other times it comes with a bang. A load balancer goes berserk and the Big Board in the call center lights up like a pinball machine. The lives of thousands of active sessions hang by a thread. Your cell phone rings. The president needs an update....

As asserted in our last post, architecture can be seen as a set of implementation constraints selected to mitigate risk. There are other ways to think about architecture, but this definition works well in that it emphasizes the "why" rather than the "what" of architecture. By focusing on the "why", we can better determine the "how much" and define no more architecture than necessary to deal with the risk profile of a particular project.

Architecture alone is of course insufficient for mitigating every kind of risk we might encounter on a project. We also need things like legal contracts, customer expectation management, and adequate QA testing, to name a few. Most of those other things are not, strictly speaking, technical risks, and so are not typically the province of application developers, being handled instead by people like lawyers, project managers, and that really annoying guy in the purchasing department.

In defining an Enterprise RIA Architecture, we will begin by identifying the most common risks faced by development efforts, and then define our architecture in terms of mitigation strategies for those risks.

To be clear, what we mean by "Enterprise RIA" (ERIA) is any application in which a rich client user interface accesses a set of back-end services to execute transactions. ERIAs are characterized much more by this back-end interaction requirement than by any particular attributes of the user interface itself. Because the user interface of an ERIA can vary so dramatically from application to application, there's not much we'll say about the architecture of the presentation portion of an application, which might consist of anything from simple forms to sophisticated 3D animations. We are much more concerned with how the presentation portion handles information and transactions in concert with back-end systems, where the back-end systems are information- or transaction-centric.

What, you may wonder, makes back-end interaction in an RIA fundamentally different from good-old-fashioned J2EE architecture? The answer is that in a typical J2EE application, virtually all the logic lives in the back-end. Yes, we can use JavaScript to execute logic on the client platform directly, but since, by definition, non-RIA applications provide non-rich UI functionality, that logic is typically limited and simple.

In an RIA, however, we are building full-blown client-side applications with client-side logic that can be quite complex. RIA clients tend to be highly stateful, oftentimes maintaining relatively large amounts of data client-side that create unique problems around synchronization with back-end systems of record. As we shall see, such front-end to back-end state synchronization is one of the major technical requirements that drive many of our architectural decisions.
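To make the synchronization problem concrete, here's a minimal sketch (all names hypothetical, not from any particular framework) of a client-side store that keeps a pristine snapshot of each record as loaded from the back end, lets UI code mutate a working copy, and computes a patch of only the changed fields to push back to the system of record:

```javascript
// A toy client-side store illustrating front-end to back-end state
// synchronization: pristine snapshots vs. working copies, with diffing.
class ClientStore {
  constructor() {
    this.pristine = new Map(); // id -> record as last confirmed by the server
    this.working = new Map();  // id -> current client-side state
  }

  // Called when a record arrives from the back end.
  load(record) {
    this.pristine.set(record.id, { ...record });
    this.working.set(record.id, { ...record });
  }

  // Called by UI code; mutates only the working copy.
  update(id, fields) {
    Object.assign(this.working.get(id), fields);
  }

  // Per-record diffs: only the fields that differ from the snapshot.
  pendingChanges() {
    const patches = [];
    for (const [id, current] of this.working) {
      const base = this.pristine.get(id);
      const changed = {};
      for (const key of Object.keys(current)) {
        if (current[key] !== base[key]) changed[key] = current[key];
      }
      if (Object.keys(changed).length > 0) patches.push({ id, changed });
    }
    return patches;
  }

  // Once the back end accepts the patches, working state becomes pristine.
  markSynchronized() {
    for (const [id, current] of this.working) {
      this.pristine.set(id, { ...current });
    }
  }
}

// Example: load a record, edit one field, and inspect the pending patch.
const store = new ClientStore();
store.load({ id: 42, status: "OPEN", owner: "alice" });
store.update(42, { status: "CLOSED" });
console.log(store.pendingChanges()); // [{ id: 42, changed: { status: "CLOSED" } }]
store.markSynchronized();
console.log(store.pendingChanges()); // []
```

Even this toy version hints at the real problems: what happens when the server's copy changes underneath you between `load` and `markSynchronized`, and who wins? Those conflict and staleness questions are exactly the risks we'll be designing around.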

In our next installment, we'll look more closely at the risks confronting most ERIA projects. Until then, watch your back-end and for godsakes make sure you're not being followed by agents of rival consulting outfits....

Wednesday, January 21, 2009

Architecture, smarchitecture.

Now begins our discussion of Rich Internet Application Architecture. This is a work in progress, which is to say that what is below is really a draft of a post, but we're exposing the salami-making process to invite collaboration from our friends and lovers.

The first thing we need to discuss is what we mean by "architecture", since this is surely an overloaded term in our industry. It seems to mostly say something about how a system's functionality (domain-specific and otherwise) is partitioned both logically and physically across application code, software platforms, and physical hosts. For the purposes of this discussion we'll define architecture as any set of implementation constraints imposed on a team during some project.

By this definition, a project without a pre-defined architecture is one in which there might be a "design", for example consisting of the major classes and how they interact, along with choices about what platforms things will run on, but beyond that it is left up to each developer to determine how to implement each piece of functionality assigned to their work queue. We say the project has no pre-defined architecture because one will almost inevitably "emerge" as various mechanisms and partitioning strategies get worked out and discussed among the team, and people naturally clone already-working code when they need to do something similar. It could be argued that the resulting "architecture" is tightly adapted to the project's actual (emerging) needs, rather than something over-engineered beyond what we find is really needed after we've been coding on the project for a while.

Of course the real choice is not a binary one between having a pre-defined architecture and not having one. We can rigorously define some aspects of a system's architecture while letting other aspects emerge. But that still leaves the question: "How much architecture do we need?"

I propose that the answer to that question should be "Just enough to mitigate risk", and will close with the proposition that:

A system's architecture definition should consist of the minimal set of implementation constraints needed to mitigate implementation risks.

In our next installment, we'll discuss the most common risks confronting Enterprise RIA system implementations.

Wednesday, January 14, 2009

Just another code-slinging CEO

I am a fossil, or more accurately, I should say that I'm a fossil record. Or perhaps the better metaphor would be that I'm like one of those ice cores geologists drill from the depths of the Arctic. If you were to sink a drill deep into my head (and I know many of my former colleagues would like to), you would find evidence of the many fads, trends, and revolutions that have constantly reshaped corporate IT over the past 30 years recorded in my brain like the stratified deposits within an ice core. By examining such ice cores, geologists arrive at a deeper understanding of our planet, and by examining the ice core in my brain, I will endeavor for us to arrive at a deeper understanding of Planet IT. I have not only been present for the many tectonic shifts that periodically rock our industry, I have usually been standing directly on the fault line, intimately involved as a leader, manager, and practitioner of application development.

I want to emphasize the "practitioner" part of what I just said. Years ago, my colleagues and I would tell each other (half-) jokingly, "Don't trust anybody who doesn't log on". This was shorthand for our belief (which I still hold) that anybody who no longer understands the technology is inherently ineffective in directing IT initiatives. This does not mean that management needs to write code, but management does need to understand a fairly large body of key technical principles, since these significantly impact the planning and execution of any IT project.

I myself have continued to write code in spite of having spent many years in fairly senior management positions (at Sun, for example, I managed 300 people). Hopefully as I delve into the subtleties of Rich Internet Application Architecture in the following series of posts, you will trust that my views are derived not just from a consideration of battles viewed from the safety of an underground bunker back at central command, but also while engaged in fierce trench warfare myself.

This experience, coupled with the ice core in my brain that informs my views with a deep (and painful) appreciation of all that has gone (wrong) before, hopefully convinces you that my insights are more valuable than what you would find in a random blog post. (And yes, I realize that this is itself a random blog post, and that my last comment was a referential recursion of sorts that probably blew the stack of several unsuspecting readers.)

In any event, I do hope you'll join me in my next few posts for an exploration of Rich Internet Application Architecture (I hesitate to call this RIAA since it's the RIAA that sues people for downloading Britney Spears singles), but before we begin, let's all go write some code.

Thursday, January 8, 2009

The Tower of Babbage

In this post about an Eric Evans presentation, Jon Rose mentions how Eric needed to clarify that the intention is not for Ubiquitous Languages to be enterprise-wide. UL's are established across project teams, not organizations. I was initially surprised that such a clarification would be necessary since this seems obvious, but then I remembered life in the late eighties....

Love Shack was a smash hit and Enterprise Data Models were all the rage.

Much like the Arthur Clarke science fiction story in which a monastery of monks fulfills the purpose of the universe by recording with a computer every known name of God (when they were finished "overhead, without any fuss, the stars were going out"), the idea was that if we could just catalog EVERY data entity, attribute, and association across the ENTIRE enterprise, then surely we would come to know our domain, our applications, our users, indeed our very inner souls better, and thus build better software faster since all of the various warring factions in IT would finally speak the one, true tongue.

At my company, PaineWebber, several monkish DBAs undertook this task for several years, compiling an ever-growing glossary of ever greater weight and size. One night in a dream (or perhaps it was after a series of lunches with a leggy sales rep) our CIO realized he could accelerate our rendezvous with destiny by buying somebody else's financial services data model. And so it was that for the low, low price of $1.5 million, we purchased a great many binders from First Boston containing a great multitude of boxes and lines.

From time to time, developers would come to seek the truth of the binders. Like astrologers poring over charts of the stars, they would look for signs in the many boxes and lines, looking for clues to unlock the secrets of their particular problem domain. But, alas, while many of the boxes and lines bore striking resemblances to actual people, places, and things from the known world, there would also be striking differences from the reality they knew, and in any case, the detail of the models was simply too overwhelming -- or perhaps it was just too magnificent.

And so we went on as before, speaking our own local dialects and doing our best to communicate with neighboring tribes, using extract files like smoke signals, reeking with the smell of EBCDIC.

At least the stars didn't wink out.