

   



How Can CEP Transform Business Process Automation

By David Cameron, Vice President of Product Integration, AptSoft Corporation

Key Characteristics of CEP Development Platforms
So what key functional capabilities are required to build these types of solutions? While vendor offerings include a multitude of features, the five important areas of functionality to look for are described in Figure 2 below.

Event Capture: An event can be defined as ‘something that happens within a context,’ where context includes ‘the relationship of an event to time, location, and other events.’ Simply put, what makes events so powerful is that, by definition, they include contextual information. A ‘password change’ event happens at a given time, in a given system, with a given account number (as well as other descriptive data, such as what the old and new passwords are). Because not all source systems keep track of event context, a mechanism must exist for capturing events from their native environments and transmitting them for further processing. This usually involves a series of ‘dumb, fast’ connectors that can, say, receive HTTP, SMTP, or SNMP posts (from websites, e-mail, and devices, respectively), capture the context and other descriptive data, and create the ‘event objects’ that represent the event electronically. Other types of connectors may involve databases, web services, human interaction portals, message queues, and other applications. This is typically the outer layer of CEP technologies.
Data Management: The data, both contextual and descriptive, must be made available for correlation as well as for output. Given that the requirements include not just data that arrives on a given event, but also data on OTHER related events, outside sources, and user-derived data, this functionality is non-trivial if it is to be flexible and powerful enough.
Event Correlation: The maintenance and analysis of event context (in addition, of course, to descriptive data) is what gives CEP its unique capabilities. For example, a ‘door-opened’ event is certainly useful by itself. But when it is correlated with other events such as, say, the ‘lights-turned-on’ event using the context of time, location, and door-id (i.e. to tie all events to a unique door), things get interesting. For a given door-id, if the ‘door-opened’ event happens in a secure location after hours and there is no corresponding ‘lights-turned-on’ event (within 10 seconds or so), that may be suspicious. Alternatively, if the ‘lights-turned-on’ event occurs WITHOUT an initial ‘door-opened’ event (a so-called non-event), that may also be suspicious. However, if a ‘door-opened’ event happens and then within 10 seconds a ‘lights-turned-on’ event happens, that may not be suspicious (unless, of course, the fact that it is after hours means nobody is supposed to be in the building anyway!). Of course, if these events occur for different doors at different times, that reduces their importance. The only difference between these use cases is the correlation of CONTEXT: namely time, location, and relationship.
Process Awareness: Most event correlations don’t happen in isolation. They are, in fact, merely steps in a larger process. For example, a ‘truck-broken-down’ event and a ‘cargo-ready’ event for the same cargo bay at the same location at the same time may require a ‘send-replacement-truck’ event to be generated. The process doesn’t end there, however, since that replacement truck must now be attached contextually to the same process, and if its location correlates with a ‘traffic-jam’ event, any resulting feedback must have a cause-related link back to the original request. This means that additional context about an event, namely its placement in a larger process, must be maintained. Individual events may not, of themselves, know about larger processes, so unless the software maintains that context, it is impossible to manage a long-running series of activities.
Event Generation: As mentioned above, the correlation of event patterns represents only half the equation. CEP systems must also generate events in response to event patterns. These responses may be higher-level abstractions of lower-level event patterns: in the ‘door-opened’ use cases above, a ‘suspicious-entry’ event might be generated for further analysis and correlation. Or they might indicate actions to be taken: in the ‘send-replacement-truck’ event just described, the purpose is a call to action. In all cases, these events must retain context and descriptive data appropriate for their function.
Figure 2: Five Important Areas of Functionality in CEP

To fully realize the power of CEP, a technology platform must include all of these areas of functionality. Without them, the full value of the approach will not be achieved.
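To make these ideas concrete, here is a minimal sketch, in Python, of how captured event objects carrying time, location, and door-id context might be correlated to flag the suspicious ‘door-opened with no lights-turned-on’ pattern from Figure 2. The event fields, door names, and ten-second window are illustrative assumptions, not a depiction of any particular product.

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class Event:
        # An event object: what happened, plus the context it happened in.
        name: str          # e.g. 'door-opened', 'lights-turned-on'
        timestamp: datetime
        location: str
        door_id: str

    def suspicious_entries(events, window=timedelta(seconds=10)):
        # Flag 'door-opened' events with no 'lights-turned-on' for the same
        # door (the relationship context) within the time window (the time context).
        suspicious = []
        for e in events:
            if e.name != 'door-opened':
                continue
            followed = any(
                o.name == 'lights-turned-on'
                and o.door_id == e.door_id
                and timedelta(0) <= o.timestamp - e.timestamp <= window
                for o in events
            )
            if not followed:
                suspicious.append(e)
        return suspicious

    # Hypothetical captured events for two doors after hours.
    now = datetime(2006, 5, 1, 23, 15)
    stream = [
        Event('door-opened', now, 'HQ', 'door-42'),
        Event('lights-turned-on', now + timedelta(seconds=4), 'HQ', 'door-42'),
        Event('door-opened', now + timedelta(minutes=5), 'HQ', 'door-7'),  # no lights follow
    ]
    for e in suspicious_entries(stream):
        print('suspicious entry:', e.door_id, e.timestamp)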

Architecting a CEP Application
The process of architecting an event-driven application differs markedly from other approaches. However, for applications with the characteristics described earlier, it offers a more natural design paradigm.
    Step 1: Identify the Events and Actions
    First ask, “What is an event for the application in question and where does it occur?” It is important to describe the event at an appropriate level of abstraction but to balance that against what can be detected. For example, a ‘new purchase event’ may ultimately be what is interesting, but what is really available in the inventory system is a ‘product X assigned to customer Y’ event.

    Good development platforms will let the developer build these abstractions on their own, without having to modify the source system.
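
    A minimal sketch of such an abstraction, assuming a hypothetical inventory event named 'product-assigned-to-customer' that the developer maps onto the more interesting 'new-purchase' event without touching the source system:

        # Raw event as it arrives from the inventory system (hypothetical field names).
        raw_event = {
            'type': 'product-assigned-to-customer',
            'product_id': 'X',
            'customer_id': 'Y',
            'timestamp': '2006-05-01T10:30:00',
        }

        def abstract_event(raw):
            # Map the low-level source event to the higher-level business event
            # the application actually reasons about; the source system is unchanged.
            if raw['type'] == 'product-assigned-to-customer':
                return {
                    'type': 'new-purchase',
                    'product': raw['product_id'],
                    'customer': raw['customer_id'],
                    'timestamp': raw['timestamp'],
                }
            return raw  # pass through anything we do not recognize

        print(abstract_event(raw_event)['type'])  # -> new-purchase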

    Step 2: Model the Data Flows
    Both incoming and outgoing events (often called actions) will contain data, which must be mapped and understood. In addition, data may need to be ‘fetched’ in real time from databases or applications in order to facilitate rule evaluation or outgoing event generation. Not all data will be relevant for all events.

    Good platforms provide an intermediate object metadata layer that allows the developer to map the information and completely decouple it from incoming and outgoing events.
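
    One minimal way to picture this decoupling, assuming a hypothetical 'shipment' metadata object and an illustrative lookup function standing in for a real-time database call:

        from dataclasses import dataclass

        # Intermediate metadata object: rules and actions are written against this,
        # never against the raw wire format of any particular incoming event.
        @dataclass
        class Shipment:
            shipment_id: str
            truck_id: str
            hazmat: bool  # fetched in real time, not carried on the event itself

        def lookup_hazmat(shipment_id):
            # Stand-in for a real-time fetch from a database or application.
            return shipment_id.startswith('HZ')

        def map_gps_event(raw):
            # Map the fields of an incoming GPS event onto the intermediate object,
            # enriching it with data the event itself does not carry.
            return Shipment(
                shipment_id=raw['shipment'],
                truck_id=raw['truck'],
                hazmat=lookup_hazmat(raw['shipment']),
            )

        print(map_gps_event({'shipment': 'HZ-1001', 'truck': 'T-17'}))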

    Step 3: Create the Business Logic
    At this point, complete the sentence: ‘When event A happens, under certain conditions, generate action B.’

    This statement, known in CEP theory as an Event-pattern/Condition/Action (ECA) block, represents a building block of the application. Conditions can themselves be event patterns, as in the example:

    ‘When a truck deviates from its prescribed route, according to the GPS tracking system (the event), if the truck is carrying hazardous materials and if this is the third time in 3 months this type of shipment has experienced such a deviation (the condition), alert law enforcement via the threat management system (the action).’

    Good platforms will let the developer build independent events-patterns, conditions, and actions that can then be assembled as objects into business logic, allowing for features such as inheritance and the use of open standards, including SOAP for web services.
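
    Continuing the route-deviation example, here is a minimal sketch of an ECA block assembled from independent pattern, condition, and action pieces. The deviation counter and the alert function are hypothetical stand-ins for the GPS tracking and threat management systems:

        from collections import defaultdict

        deviations_per_shipment_type = defaultdict(int)  # stand-in for a 3-month history

        # Event pattern: which incoming events this block cares about.
        def pattern(event):
            return event['type'] == 'route-deviation'

        # Condition: hazardous cargo and at least the third such deviation
        # for this shipment type (the counter stands in for the 3-month window).
        def condition(event):
            deviations_per_shipment_type[event['shipment_type']] += 1
            return event['hazmat'] and deviations_per_shipment_type[event['shipment_type']] >= 3

        # Action: the outgoing event, here a hypothetical alert to law enforcement.
        def action(event):
            print('ALERT law enforcement: truck', event['truck'], 'off route with hazmat')

        def eca_block(event):
            # The assembled Event-pattern/Condition/Action building block.
            if pattern(event) and condition(event):
                action(event)

        for i in range(3):
            eca_block({'type': 'route-deviation', 'truck': 'T-17',
                       'shipment_type': 'chemical', 'hazmat': True})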

    Step 4: Assemble the Business Logic into Flows
    For example, building on Step 3, one might create logic for how to handle law enforcement closing the threat in the threat management system (if, say, the route deviation turned out to be caused by ongoing roadwork). The resulting ECA block can be joined with the first, linking them into a longer process.

    Human interaction may also play a role here, involving decisions made by people in response to ECA blocks that in turn trigger their own ECA blocks (e.g. when a threat is identified, request acknowledgement from a law enforcement officer that they have opened an investigation).

    Good platforms will allow this modeling to occur seamlessly, as well as differentiate at execution time between an event triggering an ECA block that is part of a long running process and the same event triggering a standalone ECA block. This requires maintenance of a special kind of process state that makes the platform ‘process aware’ and also enables simulation for testing.
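
    A minimal sketch of how two ECA blocks might be linked into one long-running flow, using a hypothetical process id to carry the state that makes the platform ‘process aware’:

        # Process state keyed by a correlation id, so a later event can be tied
        # back to the flow that an earlier ECA block started.
        open_investigations = {}

        def on_threat_identified(event):
            # First ECA block: open a process and request acknowledgement.
            open_investigations[event['process_id']] = 'awaiting-acknowledgement'
            print('request acknowledgement from officer for', event['process_id'])

        def on_threat_closed(event):
            # Second ECA block: only meaningful if it belongs to an open process.
            if event['process_id'] in open_investigations:
                del open_investigations[event['process_id']]
                print('investigation', event['process_id'], 'closed:', event['reason'])
            else:
                print('standalone close event ignored:', event['process_id'])

        on_threat_identified({'process_id': 'case-7'})
        on_threat_closed({'process_id': 'case-7', 'reason': 'ongoing roadwork'})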

    Step 5: Build the Monitors
    Finally, identify metrics and thresholds to monitor, both within individual ECA blocks (e.g. “How many times has a shipment triggered the ‘deviation’ block above?”) and in the aggregate (e.g. “How many open investigations do we have?”).

    Good platforms will allow the developer to visually represent this information as well as allow the monitoring itself to maintain history to facilitate moving averages that can be evaluated (e.g. “If the number of threats identified exceeds 110% of the 30-day moving average, escalate notification”).
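
    A minimal sketch of the escalation rule just mentioned, assuming a hypothetical 30-day history of daily threat counts:

        def should_escalate(daily_threat_counts, today_count, factor=1.10):
            # Escalate when today's count exceeds 110% of the 30-day moving average.
            window = daily_threat_counts[-30:]
            moving_average = sum(window) / len(window)
            return today_count > factor * moving_average

        history = [4, 5, 3, 6, 4] * 6  # hypothetical counts for the last 30 days
        print(should_escalate(history, today_count=7))  # -> True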

    Step 6: Continuously Tune and Improve
    Based on what is happening in production, tune the rules, add new ECA blocks or flows, set up automatic handling of common exceptions, and so on. This type of ongoing feedback loop is often a hallmark of CEP applications.

    Good platforms will allow non-technical end users to perform this level of maintenance without writing structured code, so that changes can be implemented quickly.
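
    One common way to make that possible is to express the tunable parts of an ECA block as declarative data rather than code, as in this hypothetical rule definition:

        # A rule expressed as plain data: a business user can change the threshold,
        # the conditions, or the action without touching structured code.
        deviation_rule = {
            'event': 'route-deviation',
            'conditions': {'hazmat': True, 'deviations_in_last_90_days': 3},
            'action': 'alert-law-enforcement',
            'enabled': True,
        }

        def evaluate(rule, event, deviation_count):
            if not rule['enabled'] or event['type'] != rule['event']:
                return None
            if (event['hazmat'] == rule['conditions']['hazmat']
                    and deviation_count >= rule['conditions']['deviations_in_last_90_days']):
                return rule['action']
            return None

        print(evaluate(deviation_rule,
                       {'type': 'route-deviation', 'hazmat': True},
                       deviation_count=3))  # -> alert-law-enforcement
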
Conclusion: The Link to SOA
Event-driven applications and service-oriented applications are not mutually exclusive. On the contrary, often both approaches are necessary for a given project. However, each approach has its own benefits and target requirements. Developing event-driven applications using a purely synchronous SOA platform can be very tedious, and trying to mimic synchronous web service orchestration using an event-driven platform can be equally so. Since services don’t, by definition, generate asynchronous events, a good CEP platform will provide this functionality, along with the ability to make synchronous calls to web services when necessary to trigger actions or to fetch data.

However, given the relative novelty of event-driven architectures, most architects and developers would be well-served by taking a deeper look into the benefits of the approach and incorporating it into their development framework.



David Cameron is Vice President of Product Integration at AptSoft Corporation, a company providing a Complex Event Processing design and execution platform to help companies implement a new class of event-driven applications as a part of a service oriented architecture. He writes and speaks regularly about the intelligent application of technology to address business challenges. David has over 15 years of experience implementing, selling, and marketing technology. He is a frequent writer and speaker on marketing topics and has also guest lectured at New York University’s Stern School of Business and Dartmouth’s Tuck School of Business on customer-centric marketing issues. For article feedback, contact David at david.cameron@aptsoft.com








