Event monitoring

In computer science, event monitoring is the process of collecting, analyzing, and signalling event occurrences to subscribers such as operating system processes, active database rules, and human operators. These event occurrences may stem from arbitrary sources in software and hardware, such as operating systems, database management systems, application software, and processors.

Basic concepts

Event monitoring makes use of a logical bus to transport event occurrences from sources to subscribers, where "event sources" signal event occurrences to all event subscribers and "event subscribers" receive event occurrences. An event bus can be distributed over a set of physical nodes such as standalone computer systems. Typical examples of event buses are found in graphical systems such as X Window System, Microsoft Windows as well as development tools such as SDT.
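The source/subscriber relationship above can be sketched as a minimal in-process event bus; the class, method, and event names here are illustrative, not taken from any particular system:

```python
from collections import defaultdict

class EventBus:
    """Minimal in-process event bus: sources signal, subscribers receive."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        # Register a subscriber for one type of event occurrence.
        self._subscribers[event_type].append(handler)

    def signal(self, event_type, payload):
        # Deliver the occurrence to every subscriber of this event type.
        for handler in self._subscribers[event_type]:
            handler(payload)

received = []
bus = EventBus()
bus.subscribe("file_opened", received.append)   # subscriber
bus.signal("file_opened", {"path": "/tmp/a"})   # source signals an occurrence
```

A distributed event bus would replace the in-memory handler list with delivery across physical nodes, but the signal/subscribe contract stays the same.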

"Event collection" is the process of collecting event occurrences in a filtered event log for analysis. A "filtered event log" is logged event occurrences that can be of meaningful use in the future; this implies that event occurrences can be removed from the filtered event log if they are useless in the future. "Event log analysis" is the process of analyzing the filtered event log to aggregate event occurrences or to decide whether or not an event occurrence should be signalled. "Event signalling" is the process of signalling event occurrences over the event bus.

Something that is monitored is denoted the "monitored object"; for example, an application, an operating system, a database, or a piece of hardware can be a monitored object. A monitored object must be properly conditioned with event sensors to enable event monitoring; that is, an object must be instrumented with event sensors to be a monitored object. "Event sensors" are sensors that signal event occurrences whenever an event occurs. Whenever something is monitored, the probe effect must be managed.
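Instrumenting an object with an event sensor can be sketched in Python as a wrapper that signals an occurrence whenever the monitored function runs; the decorator and the event fields are illustrative assumptions, not a standard API:

```python
def event_sensor(signal):
    """Wrap a function so it signals an event occurrence on every call."""
    def instrument(fn):
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            # The sensor: signal an occurrence describing what just happened.
            signal({"event": fn.__name__, "args": args})
            return result
        return wrapper
    return instrument

occurrences = []

@event_sensor(occurrences.append)   # instrumentation makes this a monitored object
def withdraw(amount):
    return amount

withdraw(50)
```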

Monitored objects and the probe effect

As discussed by Gait, [J. Gait (1985). A debugger for concurrent programs. "Software-Practice And Experience", 15(6)] when an object is monitored, its behavior is changed. This poses a particular problem in concurrent systems, in which processes can run in parallel, because whenever sensors are introduced, processes may execute in a different order. This can cause a problem if, for example, we are trying to localize a fault, and by monitoring the system we change its behavior in such a way that the fault no longer results in a failure; in essence, the fault can be masked by monitoring the system. The "probe effect" is the difference in behavior between a monitored object and its uninstrumented counterpart.
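A toy illustration of why a sensor perturbs the monitored object: the sensor's own bookkeeping consumes time, and in a concurrent system that added latency can reorder process execution. This sketch shows only the added latency; the function names are illustrative:

```python
import time

def task():
    # The uninstrumented computation.
    return sum(range(1000))

def sensed_task(timings):
    # The same computation instrumented with a timing sensor.
    start = time.perf_counter()
    result = sum(range(1000))
    timings.append(time.perf_counter() - start)  # the sensor itself takes time
    return result

timings = []
same = task() == sensed_task(timings)  # results agree; timing behavior does not
```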

According to Schütz, [W. Schütz (1994). Fundamental issues in testing distributed real-time systems. "Real-Time Systems", 7(2):129–157] we can avoid, compensate for, or ignore the probe effect. In critical real-time systems, in which timeliness (i.e., the ability of a system to meet time constraints such as deadlines) is significant, avoidance is the only option: if we, for example, instrument a system for testing and then remove the instrumentation before delivery, the removal invalidates the results of most testing performed on the complete system. In less critical real-time systems (e.g., media-based systems), compensation can be acceptable, for example in performance testing. In non-concurrent systems, ignoring the probe effect is acceptable, since the behavior with respect to the order of execution is left unchanged.

Event log analysis

Event log analysis is known as event composition in active databases, chronicle recognition in artificial intelligence, and real-time logic evaluation in real-time systems. Essentially, event log analysis is used for pattern matching, filtering of event occurrences, and aggregation of event occurrences into composite event occurrences. Commonly, dynamic programming strategies are employed to save the results of previous analyses for future use, since, for example, the same pattern may be matched against the same event occurrences in several consecutive analyses. In contrast to general rule processing (employed to assert new facts from other facts, cf. inference engine), which is usually based on backtracking techniques, event log analysis algorithms are commonly greedy; for example, when a composite event is said to have occurred, this fact is never revoked, as it might be in a backtracking-based algorithm.
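A greedy composite-detection pass over a filtered event log can be sketched as follows: pending partial matches are saved between occurrences (in the spirit of the dynamic programming strategies mentioned above), and a reported composite is never revoked. The event types are illustrative:

```python
def detect_composites(log, first, second):
    """Greedily pair each `first` occurrence with the next `second`
    occurrence into a composite event; reported composites stay reported."""
    composites, pending = [], []
    for occurrence in log:
        if occurrence["type"] == first:
            pending.append(occurrence)          # saved partial match
        elif occurrence["type"] == second and pending:
            composites.append((pending.pop(0), occurrence))
    return composites

log = [{"type": "login"}, {"type": "login"}, {"type": "logout"}]
pairs = detect_composites(log, "login", "logout")
```

Note the greedy choice: the first pending "login" is consumed by the "logout" and the pairing is final, whereas a backtracking analyzer might later reconsider which "login" to pair.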

Several mechanisms have been proposed for event log analysis: finite state automata, Petri nets, procedural approaches (based on either an imperative or an object-oriented programming language), a modification of the Boyer–Moore string search algorithm, and simple temporal networks.
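As an instance of the first mechanism, a finite state automaton that recognizes a fixed sequence of event types as one composite event occurrence might look like the following sketch; real automaton-based analyzers handle far richer patterns:

```python
class PatternAutomaton:
    """Finite state automaton recognizing one fixed sequence of event types."""
    def __init__(self, pattern):
        self.pattern = pattern
        self.state = 0          # index of the next expected event type

    def feed(self, event):
        # Advance on a match, restart on the pattern's first symbol, else reset.
        if event == self.pattern[self.state]:
            self.state += 1
        elif event == self.pattern[0]:
            self.state = 1
        else:
            self.state = 0
        if self.state == len(self.pattern):
            self.state = 0
            return True         # composite event occurrence detected
        return False

fsa = PatternAutomaton(["open", "write", "close"])
hits = [fsa.feed(e) for e in ["open", "write", "close", "open"]]
```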

See also

* Event Stream Processing (ESP)
* Complex Event Processing
