Please forgive my lack of taste in colour and my drawing skills :-)
Applications and functionality are typically developed in application silos. Sometimes they come as Commercial Off The Shelf (COTS) products, sometimes they are custom-built applications. Fred touches on an interesting topic with
"Since the increased focus on integration technologies and SOA, there is a common misconception that tight coupling is bad. For integrating disparate systems, that is typically true. But inside application silos, tight coupling is typically a good thing.
Those who focus on data management have, for decades, driven the industry toward consolidation of databases under a philosophy that tighter coupling means greater efficiency and consistency.
"
An application silo usually manages data and includes support for the related business processes. These processes can be implemented with an embeddable BPMS, or they can just as well be coded in a programming language. They can have all the characteristics of processes implemented on top of the ESB in e.g. BPEL: they are long running and transactional, and they can include human tasks as well as invocations of services on the bus. Application silo processes are only distinct from the processes on top of the ESB in the sense that they are an integral part of the application silo and are therefore hosted in a different environment, e.g. Java instead of XML/WSDL. That is where the value of the jPDL process language comes in. It's really embeddable and it integrates very nicely with any Java environment. That's why jPDL is embedded a lot in Java based portals, Enterprise Content Management (ECM) systems and custom Java applications. It's a great fit for this tightly coupled Java environment.
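To make that concrete, here is a minimal sketch of what such a silo-embedded process could look like in jPDL. The process name, node names and transitions are made up for illustration; a real deployment would of course model the silo's actual business process:

```xml
<process-definition name="order-handling">
  <!-- the process starts here and moves straight to a human task -->
  <start-state>
    <transition to="review" />
  </start-state>
  <!-- a human task: the process waits until someone completes the review -->
  <task-node name="review">
    <task name="review order" />
    <transition to="done" />
  </task-node>
  <end-state name="done" />
</process-definition>
```

Because this definition lives inside the application silo, the Java code around it can create process instances, signal them and read their state directly, with tight coupling to the silo's own domain objects and database.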
Each of these application silos typically has a functional interface that exposes the operations supported by the application. A User Interface (UI) can then expose the functional interface to users, and the same functional interface can also be exposed to the Enterprise Service Bus (ESB) via a Service Adapter.
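In Java terms, this pattern boils down to one interface with two callers. The sketch below is hypothetical (the `OrderService` name, operation and adapter are invented for illustration), but it shows the point: the UI and the ESB adapter both delegate to the same functional interface, so the silo's logic is written once:

```java
// Hypothetical functional interface of an order-management silo.
// Both the UI layer and the ESB service adapter call these operations.
interface OrderService {
    String createOrder(String customerId);
}

// The silo's implementation; in a real application this would
// work against the silo's own database and domain model.
class OrderServiceImpl implements OrderService {
    public String createOrder(String customerId) {
        return "order-for-" + customerId;
    }
}

// The service adapter exposes the same functional interface to the
// ESB: it just translates bus requests into calls on the interface.
class OrderServiceAdapter {
    private final OrderService service;

    OrderServiceAdapter(OrderService service) {
        this.service = service;
    }

    String handleEsbRequest(String customerId) {
        return service.createOrder(customerId);
    }
}
```

The UI would hold a reference to the same `OrderService` and call it directly, so adding or replacing the ESB adapter never touches the silo's business logic.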
The main purpose of an Enterprise Service Bus is integrating disparate systems, so it's typically XML/WSDL based. If many events and service invocations related to one business process are published on the ESB, then it might make sense to track the overall process with an ESB-level process. BPEL is the ideal process language for this environment.
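For contrast with the silo-embedded jPDL style, here is an abbreviated sketch of what such an ESB-level process could look like in BPEL 2.0. The partner links and operations are invented, and a complete process would also need `partnerLinks`, `variables` and WSDL definitions, but the shape of the orchestration is the point:

```xml
<process name="orderProcess"
         xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable">
  <sequence>
    <!-- wait for a client request coming in over the bus -->
    <receive partnerLink="client" operation="submitOrder" createInstance="yes" />
    <!-- orchestrate services exposed by different application silos -->
    <invoke partnerLink="inventory" operation="reserveStock" />
    <invoke partnerLink="billing" operation="charge" />
    <!-- answer the original caller -->
    <reply partnerLink="client" operation="submitOrder" />
  </sequence>
</process>
```

Everything here is expressed in XML/WSDL terms, which is exactly why BPEL fits the ESB environment and jPDL fits the Java environment.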
What not a lot of people realize is that the location of business process implementations can be freely chosen. If most of the functionality and data accessed by a given process belongs to one application silo, it might make much more sense to implement the process inside that application silo. Even in that scenario, services can still be consumed from other systems over the ESB.
On the other hand, if the process must integrate with many disparate systems, most of the events are already published on the ESB, and there is no application to which the process clearly belongs, then it's most likely easier to implement it as a BPEL process on top of the ESB.
The main point I have tried to make is that the implementation of business processes should not be tied to integration technologies (read: the ESB). They can just as well be located within the application silos.
This also exposes one of the difficulties in mapping the analyst's business processes to implementation-level processes. Analysts draw boxes and arrows in a diagram, and they are not really concerned with the architectural background that provides the environment for implementing those processes. So an analysis process still has to be mapped to executable process languages, either in application silos or on top of the ESB. This is not always a one-to-one mapping.
Tom,
Your drawing skills are better than mine ;-)
As is often the case, I agree:
the implementation of Business Processes should not be tied to integration technologies
I'd go a bit further and say that the implementation of the process definition should be decoupled from integration details as much as possible... Seldom is the "real" driver for managing a process the need to make specific systems work with each other. Those systems are just the "current" solution to a business need, and our process implementations should (as much as possible) allow us to replace them when something better comes along.
John,
Good point!
regards, tom.