Modeling Surprise
INTRODUCTION

Much of modern life depends on forecasts: where the next hurricane will make landfall, how the stock market will react to falling home prices, who will win the next primary. While existing computer models predict many things fairly accurately, surprises still crop up, and we probably can't eliminate them. But Eric Horvitz, head of the Adaptive Systems and Interaction group at Microsoft Research, thinks we can at least minimize them, using a technique called "surprise modeling."
Surprise modeling combines data mining and machine learning to help people do a better job of anticipating and coping with unusual events. The premise is that massive quantities of data, insights into human psychology, and machine-learning techniques can together make surprising events easier to manage.
Data mining: the process of extracting patterns from data. It is an increasingly important tool for transforming raw data into useful information, and it is commonly used in a wide range of profiling practices, such as marketing, surveillance, fraud detection, and scientific discovery.
Machine learning: a scientific discipline concerned with the design and development of algorithms that allow computers to change their behavior based on data, such as sensor readings or database records. A major focus of machine-learning research is automatically learning to recognize complex patterns and make intelligent decisions based on data.
Hence, machine learning is closely related to fields such as statistics, probability theory, data mining, pattern recognition, artificial intelligence, adaptive control, and theoretical computer science.
Surprise modeling is not about building a technological crystal ball to predict what the stock market will do tomorrow, or what al-Qaeda might do next month. But, Horvitz says, "we think we can apply these methodologies to look at the kinds of things that have surprised us in the past and then model the kinds of things that may surprise us in the future." The result could be enormously useful for decision makers in fields that range from health care to military strategy, politics to financial markets.
IMPACT
Although research in the field is preliminary, surprise modeling could aid decision makers in a wide range of domains, such as traffic management, preventive medicine, military planning, politics, business, and finance.
SmartPhlow, a traffic-forecasting application developed by Horvitz's group, works on both desktop computers and Microsoft PocketPC devices. It depicts traffic conditions in Seattle, using a city map on which backed-up highways appear red and those with smoothly flowing traffic appear green. But that's just the beginning. After all, Horvitz says, "most people in Seattle already know that such-and-such a highway is a bad idea in rush hour." And a machine that constantly tells you what you already know is just irritating. So Horvitz and his team added software that alerts users only to surprises--the times when the traffic develops a bottleneck that most people wouldn't expect, say, or when a chronic choke point becomes magically unclogged.
But how? To monitor surprises effectively, says Horvitz, the machine has to have both knowledge--a good cognitive model of what humans find surprising--and foresight: some way to predict a surprising event in time for the user to do something about it.
Horvitz's group began with several years of data on the dynamics and status of traffic all through Seattle and added information about anything that could affect such patterns: accidents, weather, holidays, sporting events, even visits by high-profile officials. Then, he says, for dozens of sections of a given road, "we divided the day into 15-minute segments and used the data to compute a probability distribution for the traffic in each situation."
That distribution provided a pretty good model of what knowledgeable drivers expect from the region's traffic, he says. "So then we went back through the data looking for things that people wouldn't expect--the places where the data shows a significant deviation from the averaged model." The result was a large database of surprising traffic fluctuations.
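The article does not publish the team's actual code, but the deviation-spotting step can be sketched in a few lines of Python. This is a minimal sketch, not Horvitz's implementation: it assumes, hypothetically, that the archive arrives as (segment, slot, speed) records, where slot indexes the 15-minute interval of the day, and it flags a reading as surprising when it falls more than two standard deviations from that segment-and-slot's historical mean.

    from collections import defaultdict
    import math

    def build_model(history):
        """Mean and standard deviation of speed for each (segment, 15-min slot)."""
        buckets = defaultdict(list)
        for segment, slot, speed in history:
            buckets[(segment, slot)].append(speed)
        model = {}
        for key, speeds in buckets.items():
            mean = sum(speeds) / len(speeds)
            var = sum((s - mean) ** 2 for s in speeds) / len(speeds)
            model[key] = (mean, math.sqrt(var))
        return model

    def is_surprising(model, segment, slot, speed, threshold=2.0):
        """Flag readings that deviate strongly from the averaged model."""
        mean, std = model.get((segment, slot), (None, None))
        if mean is None or std == 0:
            return False  # no history, or no variation to deviate from
        return abs(speed - mean) / std > threshold

Replaying the historical archive through is_surprising would yield the kind of database of surprising fluctuations described above.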
Once the researchers spotted a statistical anomaly, they backtracked 30 minutes, to where the traffic seemed to be moving as expected, and ran machine-learning algorithms to find subtleties in the pattern that would allow them to predict the surprise. The algorithms are based on Bayesian modeling techniques, which calculate the probability, based on prior experience, that something will happen, and allow researchers to subjectively weight the relevance of contributing events.
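The article names Bayesian modeling but not a specific algorithm, so the following is a generic stand-in rather than the team's method: a tiny Gaussian naive Bayes classifier that learns, from hypothetical conditions 30 minutes earlier (current speed, rain, a stadium event), how probable it is that a surprise follows. The features and numbers are invented for illustration.

    import numpy as np

    class GaussianNaiveBayes:
        """P(surprise | features) via Bayes' rule, assuming each feature
        is an independent Gaussian given the class."""

        def fit(self, X, y):
            self.classes = np.unique(y)
            self.prior = np.array([np.mean(y == c) for c in self.classes])
            self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
            self.var = np.array([X[y == c].var(axis=0) for c in self.classes]) + 1e-3
            return self

        def predict_proba(self, X):
            # log P(c) + sum over features of log N(x; mu_c, var_c)
            diff = X[:, None, :] - self.mu[None, :, :]
            log_lik = -0.5 * (np.log(2 * np.pi * self.var)[None, :, :]
                              + diff ** 2 / self.var[None, :, :])
            log_post = np.log(self.prior)[None, :] + log_lik.sum(axis=2)
            log_post -= log_post.max(axis=1, keepdims=True)  # numerical stability
            p = np.exp(log_post)
            return p / p.sum(axis=1, keepdims=True)

    # Hypothetical training rows: [speed_mph_now, raining, stadium_event];
    # label 1 means a surprising jam developed 30 minutes later.
    X = np.array([[58, 0, 0], [31, 1, 0], [55, 0, 1], [28, 1, 1]], dtype=float)
    y = np.array([0, 1, 0, 1])
    model = GaussianNaiveBayes().fit(X, y)
    print(model.predict_proba(np.array([[30.0, 1.0, 0.0]])))

In practice the group worked from years of traffic data and far richer models; the point of the sketch is only the shape of the computation: a prior probability of a surprise, updated by the evidence observed half an hour earlier.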
FORMAL BAYESIAN THEORY OF SURPRISE
The concept of surprise is central to sensory processing, adaptation and learning, attention, and decision making. Yet, until now, no widely-accepted mathematical theory existed to quantify surprise elicited by stimuli or events, for observers ranging from single neurons to complex natural or engineered systems.
Bayesian surprise quantifies how data affects natural or artificial observers, by measuring differences between posterior and prior beliefs of the observers. Using this framework we tested whether humans orient their gaze towards surprising events or items while watching television. Bayesian surprise strongly attracts human observers, with 72% of all gaze shifts directed towards locations more surprising than the average, a figure rising to 84% when considering only gaze targets simultaneously selected by all subjects. The resulting theory of surprise is applicable across different spatio-temporal scales, modalities, and levels of abstraction.
We propose that surprise is a general, information-theoretic concept, which can be derived from first principles and formalized analytically across spatio-temporal scales, sensory modalities, and, more generally, data types and data sources. Two elements are essential for a principled definition of surprise. First, surprise can exist only in the presence of uncertainty, which can arise from intrinsic stochasticity, missing information, or limited computing resources. A world that is purely deterministic and predictable in real-time for a given observer contains no surprises. Second, surprise can only be defined in a relative, subjective, manner and is related to the expectations of the observer, be it a single synapse, neuronal circuit, organism, or computer device. The same data may carry different amounts of surprise for different observers, or even for the same observer taken at different times.
In probability and decision theory it can be shown that, under a small set of axioms, the only consistent way of modeling and reasoning about uncertainty is provided by the Bayesian theory of probability. Furthermore, in the Bayesian framework, probabilities correspond to subjective degrees of belief in hypotheses or models, which are updated, as data are acquired, using Bayes' theorem as the fundamental tool for transforming prior belief distributions into posterior belief distributions. Therefore, within this optimal framework, the only consistent definition of surprise must involve: (1) probabilistic concepts to cope with uncertainty; and (2) prior and posterior distributions to capture subjective expectations.
Specifically, the background information of an observer is captured by his/her/its prior probability distribution

\{P(M)\}_{M \in \mathcal{M}}

over the hypotheses or models M in a model space \mathcal{M}. Given such a prior distribution of beliefs, the fundamental effect of a new data observation D on the observer is to change the prior distribution \{P(M)\} (for all models M in the model space) into the posterior distribution \{P(M \mid D)\} via Bayes' theorem, whereby:

P(M \mid D) = \frac{P(D \mid M)\, P(M)}{P(D)}
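As a toy numerical instance (invented for illustration, not taken from the source): suppose an observer entertains two models of a stretch of road, M_1 = "jam-prone" with prior P(M_1) = 0.2 and M_2 = "free-flowing" with P(M_2) = 0.8, and the new observation D = "a jam right now" has likelihoods P(D \mid M_1) = 0.9 and P(D \mid M_2) = 0.2. Bayes' theorem gives P(M_1 \mid D) = (0.9 \times 0.2) / (0.9 \times 0.2 + 0.2 \times 0.8) = 0.18 / 0.34 \approx 0.53, so a single observed jam shifts the observer's belief in the jam-prone model from 0.2 to roughly 0.53.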
In this framework, the new data observation D carries no surprise if it leaves the observer's beliefs unaffected, that is, if the posterior is identical to the prior; conversely, D is surprising if the posterior distribution resulting from observing D differs significantly from the prior distribution. Therefore we formally measure the surprise elicited by D as the distance (or dissimilarity) between the posterior and prior distributions, which is best done using the relative entropy or Kullback-Leibler (KL) divergence. Thus, surprise is defined by the average of the log-odds ratio:

S(D, \mathcal{M}) = \mathrm{KL}\big(P(M \mid D) \,\big\|\, P(M)\big) = \int_{\mathcal{M}} P(M \mid D) \log \frac{P(M \mid D)}{P(M)} \, dM
taken with respect to the posterior distribution over the model space. Note that KL is not symmetric but has well-known theoretical advantages, including invariance with respect to reparameterizations. A unit of surprise--a wow--may then be defined for a single model M as the amount of surprise corresponding to a two-fold variation between P(M \mid D) and P(M), i.e., as \log_2 \frac{P(M \mid D)}{P(M)}. The total number of wows experienced when simultaneously considering all models is obtained by integrating over the model class.
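Because the model space in the toy example above is finite, the integral reduces to a sum, and the whole definition fits in a few lines of Python (a sketch using the same illustrative numbers, not code from the theory's authors):

    import math

    prior = {"jam-prone": 0.2, "free-flowing": 0.8}       # P(M)
    likelihood = {"jam-prone": 0.9, "free-flowing": 0.2}  # P(D | M), D = "jam observed"

    # Bayes' theorem: P(M | D) = P(D | M) P(M) / P(D)
    evidence = sum(likelihood[m] * prior[m] for m in prior)
    posterior = {m: likelihood[m] * prior[m] / evidence for m in prior}

    # Surprise = KL(posterior || prior); log base 2 gives the answer in wows.
    wows = sum(posterior[m] * math.log2(posterior[m] / prior[m]) for m in prior)
    print(posterior)              # {'jam-prone': ~0.53, 'free-flowing': ~0.47}
    print(round(wows, 2), "wows") # ~0.38 wows for this observation

An observation that left the posterior equal to the prior would score exactly 0 wows, matching the definition of an unsurprising event.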