Intelligent Software Agents (ISA) Seminar Report
ABSTRACT
Intelligent software agents are one of the most rapidly developing areas of research, and they find a wide range of applications. These agents can be defined in a number of ways depending on their application; in general, an agent may be defined as follows:
"An agent is a software thing that knows how to do things that we could probably do ourselves if we had the time."
Agent programs differ from regular software mainly by what can best be described as a sense of themselves as independent entities. An ideal agent knows what its goal is and will strive to achieve it. An agent is supposed to be robust and adaptive, capable of learning from experience and responding to unforeseen situations with a repertoire of different methods. It is supposed to be autonomous so that it can sense the current state of its environment and act independently to make progress toward its goal.

CHAPTER - 1
1.1 INTRODUCTION
Computers are as ubiquitous as automobiles and toasters, but exploiting their capabilities still seems to require the training of a supersonic test pilot. VCR displays blinking a constant 12 noon around the world testify to this conundrum. As interactive television, palmtop diaries and "smart" credit cards proliferate, the gap between millions of untrained users and an equal number of sophisticated microprocessors will become even more sharply apparent. With people spending a growing proportion of their lives in front of computer screens--informing and entertaining one another, exchanging correspondence, working, shopping and falling in love--some accommodation must be found between limited human attention spans and increasingly complex collections of software and data.
Computers currently respond only to what interface designers call direct manipulation. Nothing happens unless a person gives commands from a keyboard, mouse or touch screen. The computer is merely a passive entity waiting to execute specific, highly detailed instructions; it provides little help for complex tasks or for carrying out actions (such as searches for information) that may take an indefinite time.
If untrained consumers are to employ future computers and networks effectively, direct manipulation will have to give way to some form of delegation. Researchers and software companies have set high hopes on so called software agents, which "know" users' interests and can act autonomously on their behalf. Instead of exercising complete control (and taking responsibility for every move the computer makes), people will be engaged in a cooperative process in which both human and computer agents initiate communication, monitor events and perform tasks to meet a user's goals.
The average person will have many alter egos in effect, digital proxies-- operating simultaneously in different places. Some of these proxies will simply make the digital world less overwhelming by hiding technical details of tasks, guiding users through complex on-line spaces or even teaching them about certain subjects. Others will actively search for information their owners may be interested in or monitor specified topics for critical changes. Yet other agents may have the authority to perform transactions (such as on-line shopping) or to represent people in their absence. As the proliferation of paper and electronic pocket diaries has already foreshadowed, software agents will have a particularly helpful role to play as personal secretaries--extended memories that remind their bearers where they have put things, whom they have talked to, what tasks they have already accomplished and which remain to be finished.
Agent programs differ from regular software mainly by what can best be described as a sense of themselves as independent entities. An ideal agent knows what its goal is and will strive to achieve it. An agent should also be robust and adaptive, capable of learning from experience and responding to unforeseen situations with a repertoire of different methods. Finally, it should be autonomous so that it can sense the current state of its environment and act independently to make progress toward its goal.
1.2 DEFINITION OF INTELLIGENT SOFTWARE AGENTS:
Intelligent Software Agents are a popular research subject these days. Because the term "agent" is currently used by many parties in many different ways, it has become difficult for users to make a good estimate of what the possibilities of agent technology are. Moreover, these agents have a wide range of applications, which significantly affects any definition; hence it is not easy to craft a rock-solid definition that generalizes to all of them. However, an informal definition of an intelligent software agent may be given as:
"A piece of software which performs a given task using information gleaned from its environment to act in a suitable manner so as to complete the task successfully. The software should be able to adapt itself based on changes occurring in its environment, so that a change in circumstances will still yield the intended result."
Now that we have an idea of what agents are, it is important to understand how they differ from normal software programs. The name Intelligent Software Agent itself signifies two striking features: agency and intelligence.
The degree of autonomy and authority vested in the agent, is called its agency. It can be measured at least qualitatively by the nature of the interaction between the agent and other entities in the system in which it operates.
At a minimum, an agent must run asynchronously. The degree of agency is enhanced if an agent represents a user in some way; this is one of the key values of agents. A more advanced agent can interact with other entities such as data, applications, or services. Still more advanced agents collaborate and negotiate with other agents.
What exactly makes an agent "intelligent" is something that is hard to define. It has been the subject of many discussions in the field of Artificial Intelligence, and a clear answer has yet to be found. Yet, a workable definition of what makes an agent intelligent is given as:
"Intelligence is the degree of reasoning and learned behaviour: the agent's ability to accept the user's statement of goals and carry out the task delegated to it.
At a minimum, there can be some statement of preferences, perhaps in the form of rules, with an inference engine or some other reasoning mechanism to act on these preferences.
Higher levels of intelligence include a user model or some other form of understanding and reasoning about what a user wants done, and planning the means to achieve this goal. Further out on the intelligence scale are systems that learn and adapt to their environment, both in terms of the user's objectives, and in terms of the resources available to the agent. Such a system might, like a human assistant, discover new relationships, connections, or concepts independently from the human user, and exploit these in anticipating and satisfying user needs.
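To make the minimal case concrete, here is a small illustrative Python sketch of preferences stated as rules with a naive forward-chaining inference step. The rule format and all names are assumptions made for illustration; no system described in this report works exactly this way.

    # Illustrative sketch: user preferences as rules, acted on by a
    # naive forward-chaining inference step. All names are invented.

    class Rule:
        def __init__(self, condition, action):
            self.condition = condition  # predicate over the set of known facts
            self.action = action        # fact to add when the condition holds

    def infer(facts, rules):
        """Forward-chain over the rules until no new facts are produced."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for rule in rules:
                if rule.action not in facts and rule.condition(facts):
                    facts.add(rule.action)
                    changed = True
        return facts

    # Preferences: mail from the boss is urgent; urgent mail triggers a notification.
    rules = [
        Rule(lambda f: "from:boss" in f, "urgent"),
        Rule(lambda f: "urgent" in f, "notify-user"),
    ]
    print(infer({"from:boss"}, rules))  # contains 'urgent' and 'notify-user'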
1.3 STRONG AND WEAK NOTION OF THE CONCEPT AGENTS:
Instead of a formal definition, a list of general characteristics of agents is given here. Together these characteristics give a global impression of what an agent "is".
The first group of characteristics is connected to the weak notion of the concept "agent". The fact that an agent should possess most, if not all of these characteristics, is something that most scientists have agreed upon at this moment. This is not the case, however, with the second group of characteristics, which are connected to the strong notion of the concept "agent". The characteristics that are presented here are not things that go without saying for everybody.
The weak notion of the concept agents:
Perhaps the most general way in which the term agent is used is to denote a hardware or (more usually) software-based computer system that enjoys the following properties (a minimal sketch of such a sense-act loop follows the list):
• Autonomy: agents operate without the direct intervention of humans or others, and have some kind of control over their actions and internal state;
• Social ability: agents interact with other agents and (possibly) humans via some kind of agent communication language;
• Reactivity: agents perceive their environment (which may be the physical world, a user via a graphical user interface, a collection of other agents, the Internet, or perhaps all of these combined), and respond in a timely fashion to changes that occur in it. This may entail that an agent spends most of its time in a kind of sleep state from which it will awake if certain changes in its environment (like the arrival of new e-mail) give rise to it;
• Proactivity: agents do not simply act in response to their environment; they are able to exhibit goal-directed behaviour by taking the initiative;
• Temporal continuity: agents are continuously running processes (either active in the foreground or sleeping/passive in the background), not once-only computations or scripts that map a single input to a single output and then terminate;
• Goal orientedness: an agent is capable of handling complex, high-level tasks. The decision about how such a task is best split into smaller sub-tasks, and in which order and in which way these sub-tasks should best be performed, should be made by the agent itself.
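As a rough illustration of these weak-notion properties, the following toy Python loop sketches an agent that runs continuously, reacts to perceived events, and proactively pursues a standing goal. The helper functions sense_environment and choose_action are hypothetical placeholders, not part of any real agent framework.

    import time

    # Toy sketch of a weak-notion agent: autonomous (no human in the loop),
    # reactive (responds to perceived events), proactive (pursues a standing
    # goal) and temporally continuous (a process, not a one-shot script).
    # The two helper functions are hypothetical placeholders.

    def sense_environment():
        """Stand-in for perception, e.g. polling a mailbox or a socket."""
        return []  # no events arrive in this toy run

    def choose_action(event_or_goal):
        """Stand-in for the agent's internal decision making."""
        print("acting on:", event_or_goal)

    def agent_loop(goal, max_ticks=3):
        for _ in range(max_ticks):      # a real agent would loop indefinitely
            events = sense_environment()
            for event in events:        # reactivity
                choose_action(event)
            if not events:              # proactivity: take the initiative
                choose_action(goal)
            time.sleep(0.1)             # temporal continuity

    agent_loop(goal="keep the user's inbox organized")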
The strong notion of the concept agents:
For some researchers - particularly those working in the field of AI - the term agent has a stronger and more specific meaning than that sketched out in the previous section. These researchers generally take an agent to be a computer system that, in addition to having the properties identified previously, is either conceptualised or implemented using concepts that are more usually applied to humans. For example, it is quite common in AI to characterise an agent using mentalistic notions, such as knowledge, belief, intention, and obligation. Some AI researchers have gone further, and considered emotional agents.
Another way of giving agents human-like attributes is to represent them visually using techniques such as a cartoon-like graphical icon or an animated face. Research into this matter has shown that, although agents are pieces of software code, people like to deal with them as if they were dealing with other people (regardless of the type of agent interface being used).
Agents that fit the stronger notion of agent usually have one or more of the following characteristics:
• Mobility: the ability of an agent to move around an electronic network;
• Benevolence: the assumption that agents do not have conflicting goals, and that every agent will therefore always try to do what is asked of it;
• Rationality: (crudely) the assumption that an agent will act in order to achieve its goals and will not act in such a way as to prevent its goals being achieved - at least insofar as its beliefs permit;
• Adaptivity: an agent should be able to adjust itself to the habits, working methods and preferences of its user;
• Collaboration: an agent should not unthinkingly accept (and execute) instructions, but should take into account that the human user makes mistakes (e.g. gives an order that contains conflicting goals), omits important information and/or provides ambiguous information. For instance, an agent should check things by asking the user questions, or use a built-up user model to solve problems like these. An agent should even be allowed to refuse to execute certain tasks, because (for instance) they would put an unacceptably high load on the network resources or would cause damage to other users.
Although no single agent possesses all these abilities, there are several prototype agents that possess quite a lot of them.
CHAPTER - 2
2.1 APPLICATION AREAS OF INTELLIGENT SOFTWARE AGENTS:
The current trend in agent development is to build modest, low-level applications, yet more advanced and complicated applications are increasingly being developed as well. At this moment research is being done into separate agents, such as mail agents, news agents and search agents. These are the first step towards more integrated applications, where these single, basic agents are used as the building blocks. Broadly, eight application areas can be identified where agent technology can be used. These areas are:
1. Systems and Network Management:

Systems and network management is one of the earliest application areas to be enhanced using intelligent agent technology. The movement to client/server computing has intensified the complexity of systems being managed, especially in the area of LANs, and as network centric computing becomes more prevalent, this complexity further escalates. Users in this area (primarily operators and system administrators) need greatly simplified management, in the face of rising complexity. Agent architectures have existed in the systems and network management area for some time, but these agents are generally "fixed function" rather than intelligent agents. However, intelligent agents can be used to enhance systems management software. For example, they can help filter and take automatic actions at a higher level of abstraction, and can even be used to detect and react to patterns in system behaviour. Further, they can be used to manage large configurations dynamically;

2. Mobile Access / Management:
As computing becomes more pervasive and network centric computing shifts the focus from the desktop to the network, users want to be more mobile. Not only do they want to access network resources from any location, they want to access those resources despite bandwidth limitations of mobile technology such as wireless communication, and despite network volatility. Intelligent agents which (in this case) reside in the network rather than on the users' personal computers, can address these needs by persistently carrying out user requests despite network disturbances. In addition, agents can process data at its source and ship only compressed answers to the user, rather than overwhelming the network with large amounts of unprocessed data;

3. Mail and Messaging:
Messaging software (such as software for e-mail) has existed for some time and is also an area where intelligent agent functions are already being used. Users today want the ability to automatically prioritise and organise their e-mail, and in the future they would like to do even more automatically, such as addressing mail by organisational function rather than by person.
Intelligent agents can facilitate all these functions by allowing mail handling rules to be specified ahead of time, and letting intelligent agents operate on behalf of the user according to those rules. Usually it is also possible (or at least it will be) to have agents deduce these rules by observing a user's behaviour and trying to find patterns in it;
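As a concrete illustration of rules specified ahead of time, here is a hypothetical Python sketch of an agent applying user-defined mail-handling rules; the message fields and rule vocabulary are invented for this example.

    # Hypothetical sketch: mail-handling rules specified ahead of time and
    # applied by an agent on the user's behalf. Field names are invented.

    MAIL_RULES = [
        {"if_sender_contains": "newsletter", "then": "archive"},
        {"if_subject_contains": "urgent", "then": "flag"},
    ]

    def handle_message(message, rules):
        """Return the action of the first matching rule, else 'inbox'."""
        for rule in rules:
            sender_key = rule.get("if_sender_contains")
            subject_key = rule.get("if_subject_contains")
            if sender_key and sender_key in message["sender"].lower():
                return rule["then"]
            if subject_key and subject_key in message["subject"].lower():
                return rule["then"]
        return "inbox"

    msg = {"sender": "weekly-newsletter@example.com", "subject": "This week's news"}
    print(handle_message(msg, MAIL_RULES))  # archive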

4. Information Access and Management:
Information access and management is an area of great activity, given the rise in popularity of the Internet and the explosion of data available to users. It is the application area that this report will mainly focus on. Here, intelligent agents are helping users not only with search and filtering, but also with categorisation, prioritisation, selective dissemination, annotation, and (collaborative) sharing of information and documents;

5. Collaboration:
Collaboration is a fast-growing area in which users work together on shared documents, use personal video-conferencing, or share additional resources through the network. One common denominator is shared resources; another is teamwork. Both of these are driven and supported by the move to network centric computing. Not only do users in this area need an infrastructure that allows robust, scalable sharing of data and computing resources, they also need other functions to help them actually build and manage collaborative teams of people, and manage their work products. One of the most popular and most heard-of examples of such an application is the groupware package called Lotus Notes;

6. Workflow and Administrative Management:
Administrative management includes both workflow management and areas such as computer/telephony integration, where processes are defined and then automated. In these areas, users need not only to make processes more efficient, but also to reduce the cost of human agents. Much as in the messaging area, intelligent agents can be used to ascertain, then automate user wishes or business processes;

7. Electronic Commerce:
Electronic commerce is a growing area fuelled by the popularity of the Internet. Buyers need to find sellers of products and services, they need to find product information (including technical specifications, viable configurations, etc.) that solves their problem, and they need to obtain expert advice both prior to the purchase and for service and support afterward. Sellers need to find buyers, and they need to provide expert advice about their product or service as well as customer service and support. Both buyers and sellers need to automate the handling of their "electronic financial affairs".
Intelligent agents can assist in electronic commerce in a number of ways. Agents can "go shopping" for a user, taking specifications and returning with recommendations of purchases which meet those specifications. They can act as "salespeople" for sellers by providing product or service sales advice, and they can help troubleshoot customer problems;

8. Adaptive User Interfaces:
Although the user interface was transformed by the advent of graphical user interfaces (GUIs), for many, computers remain difficult to learn and use. As capabilities and applications of computers improve, the user interface needs to accommodate the increase in complexity. As user populations grow and diversify, computer interfaces need to learn user habits and preferences and adapt to individuals. Intelligent agents (called interface agents) can help with both these problems. Intelligent agent technology allows systems to monitor the user's actions, develop models of user abilities, and automatically help out when problems arise. When combined with speech technology, intelligent agents enable computer interfaces to become more human or more "social" when interacting with human users.

CHAPTER - 3
3.1 AGENTS IN THE BROWSING WORLD:
Big changes are taking place in the area of information supply and demand. The first big change, which took place quite a while ago, is related to the form in which information is available. In the past, paper was the most frequently used medium for information, and it is still very popular. However, more and more information is now available through electronic media.
Other aspects of information that have changed rapidly in the last few years are the amount in which it is available, the number of sources, and the ease with which it can be obtained. Expectations are that these developments will carry on into the future.
A third important change is related to the supply and demand of information. Until recently the market for information was driven by supply, and it was fuelled by a relatively small group of easily identifiable suppliers. This situation is now changing into a market of very large scale, where it is becoming increasingly difficult to get a clear picture of all the suppliers. All these changes have an enormous impact on the information market. One of the most important is the shift from a supply-driven market to a demand-driven one. The number of suppliers has become so high (and will become even higher in the future) that the question of who supplies the information has become less important: demand for information is becoming the most important aspect of the information chain.
3.2 Problems regarding the demand for information
Meeting information demand has become easier on the one hand, but more complicated and difficult on the other. Because of the emergence of information sources such as the world-wide computer network called the Internet, everyone - in principle - can have access to a nearly inexhaustible pool of information. One would expect that this has made satisfying information demand easier. However, the sheer endlessness of the information available through the Internet, which at first glance looks like its major strength, is at the same time one of its major weaknesses. The amounts of information at your disposal are too vast: information that is being sought is (probably) available somewhere, but often only parts of it can be retrieved, and sometimes nothing can be found at all. To put it more figuratively: the number of needles that can be found has increased, but so has the size of the haystack they are hidden in. Inquirers after information are confronted with information overload.
The current, conventional search methods do not seem to be able to tackle these problems. These methods are based on the principle that it is known which information is available (and which one is not) and where exactly it can be found. To make this possible, large information systems such as databases are supplied with (large) indexes to provide the user with this information. With the aid of such an index one can, at all times, look up whether certain information can or cannot be found in the database, and - if available - where it can be found.
On the Internet (but not just there [2]) this strategy fails completely, the reasons for this being:
• The dynamic nature of the Internet itself: there is no central supervision of the growth and development of the Internet. Anybody who wants to use it and/or offer information or services on it is free to do so. This has created a situation where it has become very hard to get a clear picture of the size of the Internet, let alone to estimate the amount of information available on or through it;
• The dynamic nature of the information on the Internet: information that cannot be found today may become available tomorrow. And the reverse happens too: information that was available may suddenly disappear without further notice, for instance because an Internet service has stopped its activities, or because information has been moved to a different, unknown location;
• The information and information services on the Internet are very heterogeneous: information on the Internet is offered in many different kinds of formats and in many different ways. This makes it very difficult to search for information automatically, because every information format and every type of information service requires a different approach.
3.3 Comparison: Search Engines and Agents

There are several ways to deal with the problems stated above. Most of the current solutions are of a strongly ad hoc nature; the two most general solutions are the use of search engines and the use of agents.
Using agents when looking for information has certain advantages over current methods, such as using a search engine:
The following contrasts typical search engine features with the improvements intelligent software agents can offer:
1. Search engine: an information search is done based on one or more keywords given by a user, which presupposes that the user is capable of formulating the right set of keywords to retrieve the wanted information. Agent improvement: agents are capable of searching for information more intelligently, for instance because tools (such as a thesaurus) enable them to search on related terms as well, or even on concepts. A sketch of such query expansion follows this comparison.
2. Search engine: information mapping is done by gathering (meta-)information about information and documents that are available on the Internet; this is a very time-consuming method that causes a lot of data traffic and lacks efficiency. Agent improvement: individual user agents can create their own knowledge base about available information sources on the Internet, which is updated and expanded after every search. When information (i.e. documents) has moved to another location, agents will be able to find it and update their knowledge base accordingly.
3. Search engine: the search for information is often limited to a few Internet services, such as the WWW; finding information that is offered through other services often means the user is left to his or her own devices. Agent improvement: agents can relieve their human user of the need to worry about "clerical details", such as the way the various Internet services have to be operated. Instead, he or she will only have to worry about what exactly is being sought.
4. Search engine: search engines cannot always be reached: the server that a service resides on may be down, or the Internet may be too busy to get a connection. Agent improvement: as a user agent resides on the user's computer, it is always available to the user.
5. Search engine: search engines are domain-independent in the way they treat gathered information and in the way they enable users to search in it. Agent improvement: software agents will be able to search for information based on context, which they deduce from user information (i.e. a built-up user model) or by using other services, such as a thesaurus service.
6. Search engine: the information on the Internet is very dynamic; quite often search engines refer to information that has moved to another, unknown location, or has disappeared. Search engines do not learn from these searches, and they do not adjust themselves to their users. Agent improvement: user agents can adjust themselves to the preferences and wishes of individual users. Ideally this will lead to agents that more and more adjust themselves to what a user wants, wishes and is (usually) looking for, by learning from performed tasks (i.e. searches) and the way users react to their results.
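Point 1 of this comparison can be illustrated with a short Python sketch of thesaurus-based query expansion; the hard-coded thesaurus is a toy standing in for the thesaurus service mentioned above.

    # Sketch of "searching on related terms as well": expanding a user's
    # keywords through a thesaurus. The thesaurus here is a hard-coded toy.

    THESAURUS = {
        "car": ["automobile", "vehicle"],
        "buy": ["purchase", "acquire"],
    }

    def expand_query(keywords):
        """Add the related terms of each keyword to the query."""
        expanded = set()
        for word in keywords:
            expanded.add(word)
            expanded.update(THESAURUS.get(word, []))
        return expanded

    print(expand_query(["buy", "car"]))
    # {'buy', 'purchase', 'acquire', 'car', 'automobile', 'vehicle'}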
3.4 Agents as building blocks for a new Internet structure

The Internet keeps on growing, and judging by reports in the media it will keep on growing. The big threat this poses is that the Internet will become too big and too diverse for humans to comprehend, let alone work with properly. Very soon even (conventional) software programs will not be able to get a good grip on it.
A new structure should be drawn up for the Internet which will make it easier and more convenient to use, and which will make it possible to abstract away from the various techniques hidden under its surface: a kind of abstraction comparable to the way in which higher programming languages relieve programmers of the need to deal with the low-level hardware of a computer (such as registers and devices).
Because the thinking process regarding these developments has started only recently, there is no clear sight yet of a generally accepted standard. However, an idea is emerging that looks very promising: a three layer structure. Quite a number of parties are, although sometimes implicitly, studying and working on this concept. The main idea of this three layer model is to divide the structure of the Internet into three layers or concepts:
1. Users;
2. Suppliers; and
3. Intermediaries.
In the current Internet environment, the bulk of the processing associated with satisfying a particular need is embedded in software applications (such as WWW browsers). It would be much better if the whole process could be elevated to higher levels of sophistication and abstraction. One of the most promising proposals to address this problem is the three layer structure: one layer per activity.
Within each individual layer, the focus is on one specific part of the activity, which is supported by matching types of software agents. These agents will relieve us of many tedious, administrative tasks, which in many cases can be taken over very well, or even better, by a computer program (i.e. software agents). What's more, the agents will enable a human user to perform complex tasks better and faster.
The three layers are:
1. The demand side (of information), i.e. the information searcher or user; here, agents' tasks are to find out exactly what users are looking for, what they want, if they have any preferences with regard to the information needed, etcetera;
2. The supply side (of information), i.e. the individual information sources and suppliers; here, an agent's tasks are to make an exact inventory of (the kinds of) services and information that are being offered by its supplier, to keep track of newly added information, etcetera;
3. Intermediaries; here agents mediate between agents (of the other two layers), i.e. act as (information) intermediaries between (human or electronic) users and suppliers.
When constructing agents for use in this model, it is absolutely necessary to do so according to generally agreed-upon standards: it is unfeasible to make the model account for every possible type of agent. Therefore, all agents should respond and react in the same way (regardless of their internal structure) by using some standardised set of codes. To make this possible, the standards should be flexible enough to provide for the construction of agents for tasks that are unforeseen at the present time.
The three layer model has several (major) plus points:
1. Each of the three layers only has to concern itself with doing what it is best at. Parties (i.e. members of one of the layers) no longer have to act as some kind of "jack-of-all-trades";
2. The model itself (and the same goes for the agents that are used in it) does not enforce a specific type of software or hardware. The only things that have to be complied with are the standards mentioned earlier. This means that everybody is free to choose whatever underlying technique they want (such as the programming language) to create an agent: as long as it responds and behaves according to the specifications laid down in the standards, everything is okay. A first step in this direction has been made with the development of agent communication and programming languages such as KQML and Telescript.
3. By using this model, users no longer need to learn the way in which the individual Internet services have to be operated; the Internet and all of its services will 'disappear' and become one cohesive whole;
4. It is easy to create new information structures or to modify existing ones without endangering the open (flexible) nature of the whole system. The ways in which agents can be combined become seemingly endless;
5. No interim period is needed to implement the three layer model, nor does the need to stay backward-compatible with the current (two layer) structure of the Internet have any negative influence on it. People (both users and suppliers) who choose not to use the newly added intermediary (middle) layer are free to do so. However, they will soon discover that using the middle layer in many cases leads to quicker and better results with less effort.
3.5 Middle layer functions:

The main functions of the middle layer are:
1. Dynamically matching user demand and provider's supply in the best possible way.
Suppliers and users (i.e. their agents) can continuously issue and retract information needs and capabilities. Information does not become stale and the flow of information is flexible and dynamic. This is particularly useful in situations where sources and information change rapidly, such as in areas like commerce, product development and crisis management.
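A minimal Python sketch of this matching function follows; the data structures and method names are invented for illustration, with suppliers continuously advertising and retracting capabilities.

    # Invented sketch of the middle layer's matchmaking: suppliers issue and
    # retract advertisements; user requests are matched against the current set.

    class Matchmaker:
        def __init__(self):
            self.advertisements = {}  # supplier -> set of advertised topics

        def advertise(self, supplier, topics):
            self.advertisements[supplier] = set(topics)

        def retract(self, supplier):
            self.advertisements.pop(supplier, None)  # keeps information from going stale

        def match(self, topic):
            """Return all suppliers currently advertising the requested topic."""
            return [s for s, t in self.advertisements.items() if topic in t]

    mm = Matchmaker()
    mm.advertise("supplier-A", ["weather", "news"])
    mm.advertise("supplier-B", ["news"])
    print(mm.match("news"))  # ['supplier-A', 'supplier-B']
    mm.retract("supplier-B")
    print(mm.match("news"))  # ['supplier-A']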

2. Unifying and possibly processing suppliers' responses to queries to produce an appropriate result.
The content of user requests and supplier 'advertisements' [1] may not align perfectly, so satisfying a user's request may involve aggregating, joining or abstracting the information to produce an appropriate result. However, it should be noted that intermediary agents should normally not process queries unless this is explicitly requested in a query.
Processing could also take place when the result of a query consists of a large number of items. Sending all these items over the network to a user (agent) would lead to an undesirable waste of bandwidth, as it is very unlikely that a user (agent) would want to receive that many items. The intermediary agent might then ask the user (agent) to refine the initial query or add some constraints to it.

3. Current awareness, i.e. actively notifying users of information changes.
Users will be able to request (agents in) the middle layer to notify them regularly, or maybe even instantly, when new information about certain topics has become available, or when a supplier has sent an advertisement stating that it offers information or services matching certain keywords or topics.
There is quite some controversy about whether a supplier should be able to receive a similar service as well, i.e. whether suppliers could request to be notified when users have stated queries, or have asked to receive notifications, that match information or services provided by this particular supplier. Although there may be users who find this convenient, as they can get in touch with suppliers who can offer the information they are looking for, there are many other users who would not be very pleased with this invasion of their privacy. Therefore, a lot of thought should be given to this dilemma, and a lot of things will need to be settled, before such a service is offered to suppliers as well.
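The user-side half of this notification function is essentially publish/subscribe; the following Python sketch, with invented names, shows the idea.

    from collections import defaultdict

    # Invented publish/subscribe sketch of current awareness: users register
    # standing interests; the middle layer notifies them when matching
    # information or advertisements arrive.

    class CurrentAwareness:
        def __init__(self):
            self.subscriptions = defaultdict(list)  # topic -> list of callbacks

        def subscribe(self, topic, notify):
            self.subscriptions[topic].append(notify)

        def publish(self, topic, item):
            for notify in self.subscriptions[topic]:
                notify(item)

    ca = CurrentAwareness()
    ca.subscribe("agents", lambda item: print("new item on agents:", item))
    ca.publish("agents", "a supplier now offers an interface-agents bibliography")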

4. Bringing users and suppliers together.
This activity is more or less an extension of the first function. It means that a user may ask an intermediary agent to recommend/name a supplier that is likely to satisfy some request without giving a specific query. The actual queries then take place directly between the supplier and the user.
Alternatively, a user might ask an intermediary agent to forward a request to a capable supplier, with the stipulation that subsequent replies are to be sent directly to the user himself.

CHAPTER - 4
4.1 Letizia: An Agent That Assists Web Browsing


Letizia is a user interface agent that assists a user browsing the World Wide Web. As the user operates a conventional Web browser such as Netscape, the agent tracks user behavior and attempts to anticipate items of interest by doing concurrent, autonomous exploration of links from the user's current position. The agent automates a browsing strategy consisting of a best-first search augmented by heuristics inferring user interest from browsing behavior.
Letizia operates in tandem with a conventional Web browser such as Mosaic or Netscape. The agent tracks the user's browsing behavior -- following links, initiating searches, requests for help -- and tries to anticipate what items may be of interest to the user. It uses a simple set of heuristics to model the user's browsing behavior. Upon request, it can display a page containing its current recommendations, which the user can choose to follow, or the user can return to conventional browsing activity.
4.2 Features of Letizia:

• Interleaving browsing with automated search
The model adopted by Letizia is that the search for information is a cooperative venture between the human user and an intelligent software agent. Letizia and the user both browse the same search space of linked Web documents, looking for "interesting" ones. No goals are predefined. The difference between the user's search and Letizia's is that the user's search has a reliable static evaluation function, while Letizia can explore search alternatives faster than the user can. Letizia uses the user's past behavior to anticipate a rough approximation of the user's interests.
Letizia's role during user interaction is merely to observe and make inferences from observation of the user's actions that will be relevant to future requests. In parallel with the user's browsing, Letizia conducts a resource-limited search to anticipate the possible future needs of the user. At any time, the user may request a set of recommendations from Letizia based on the current state of the user's browsing and Letizia's search. Such recommendations are dynamically recomputed when anything changes or at the user's request.
• Modeling the user's browsing process
The user's browsing process is typically to examine the current HTML document in the Web browser, decide which, if any, links to follow, or to return to a document previously encountered in the history. The goal of the Letizia agent is to automatically perform some of the exploration that the user would have done while the user is browsing these or other documents, and to evaluate the results from what it can determine to be the user's perspective. Upon request, Letizia provides recommendations for further action on the user's part, usually in the form of following links to other documents. Letizia's leverage comes from overlapping search and evaluation with the "idle time" during which the user is reading a document. Since the user is almost always a better judge of the relevance of a document than the system, it is usually not worth making the user wait for the result of an automated retrieval if that would interrupt the browsing process. The best use of Letizia's recommendations is when the user is unsure of what to do next. Letizia never takes control of the user interface, but just provides suggestions.
• Inferences from the user's browsing behavior
Observation of the user's browsing behavior can tell the system much about the user's interests. Each of these heuristics is weak by itself, but each can contribute to a judgment about the document's interest. One of the strongest behaviors is for the user to save a reference to a document, explicitly indicating interest. Following a link can indicate one of several things. First, the decision to follow a link can indicate interest in its topic. However, because the user does not know what is referenced by the link at the time the decision to follow it is made, that indication of interest is tentative at best. If the user returns immediately without having either saved the target document or followed further links, an indication of disinterest can be assumed. Letizia saves the user considerable time that would otherwise be wasted exploring such "dead-end" links.
An important aspect of Letizia's judgment of "interest" in a document is that it is not trying to determine some abstract measure of how interesting the document is, but rather a preference ordering of interest among a set of links. If almost every link is found to have high interest, then an agent that recommends them all isn't much help; and if very few links are interesting, then the agent's recommendation isn't of much consequence. At each moment, the primary problem the user faces in the browser interface is "which link should I choose next?", and so it is Letizia's job to recommend which of the several available possibilities is most likely to satisfy the user. Letizia sets as its goal to recommend a certain percentage (settable by the user) of the links currently available.
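These heuristics can be sketched as a scoring function that yields a preference ordering over candidate links. The event names and weights below are invented for illustration; this report describes Letizia's heuristics only qualitatively.

    # Illustrative sketch, in the spirit of Letizia: weak behavioral signals
    # combined into a preference ordering over links. Weights are invented.

    HEURISTIC_WEIGHTS = {
        "saved_reference": 3.0,   # strongest signal of interest
        "followed_link": 1.0,     # tentative interest in the topic
        "quick_return": -2.0,     # immediate backtrack suggests disinterest
    }

    def interest_score(observed_events):
        return sum(HEURISTIC_WEIGHTS.get(e, 0.0) for e in observed_events)

    def recommend(candidates, fraction=0.3):
        """Recommend the top fraction of links by inferred interest: a
        preference ordering, not an absolute measure of interest."""
        ranked = sorted(candidates, key=lambda c: interest_score(c[1]), reverse=True)
        keep = max(1, int(len(ranked) * fraction))
        return [link for link, _ in ranked[:keep]]

    candidates = [
        ("page-A", ["followed_link", "saved_reference"]),
        ("page-B", ["followed_link", "quick_return"]),
        ("page-C", ["followed_link"]),
    ]
    print(recommend(candidates))  # ['page-A']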
• Persistence of interest
One of the most compelling reasons to adopt a Letizia-like agent is the phenomenon of persistence of interest. When the user indicates interest by following a link or performing a search on a keyword, their interest in that topic rarely ends with the return of results for that particular search. Further, persistence of interest is important in uncovering serendipitous connections, which is a major goal of information browsing. While searching for one topic, one might accidentally uncover information of tremendous interest on another, seemingly unrelated, topic. This happens surprisingly often, partly because seemingly unrelated topics are often related through non-obvious connections. An important role for the agent is to be constantly available to notice such connections and bring them to the user's attention.
• Search strategies
The interface structure of many Web browsers encourages depth-first search, since every time one descends a level, the choices at the next lower level are immediately displayed. One must return to the containing document to explore sibling links at the same level, a two-step process in the interface. When the user is exploring in a relatively undirected fashion, the tendency is to continue to explore downward links in a depth-first fashion. After a while, the user finds him or herself very deep in a stack of previously chosen documents, and (especially in the absence of much visual representation of the context) this leads to a "lost in hyperspace" feeling. The depth-first orientation is unfortunate, as much information of interest to users is typically embedded rather shallowly in the Web hierarchy. Letizia compensates for this by employing a breadth-first search. It achieves utility in part by reminding users of neighboring links that might escape notice, and it makes user exploration more efficient by automatically eliding many of the "dead-end" links that waste users' time.
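The contrast can be sketched as a resource-limited breadth-first traversal over a toy link graph, where fetch_links stands in for retrieving a page and extracting its links; the graph and budget are assumptions for illustration.

    from collections import deque

    # Sketch of Letizia-style breadth-first exploration: shallow neighbors
    # are visited before deep ones, under a page budget. The link graph is
    # a toy; fetch_links stands in for fetching and parsing a page.

    LINK_GRAPH = {
        "start": ["a", "b"],
        "a": ["a1", "a2"],
        "b": ["b1"],
    }

    def fetch_links(url):
        return LINK_GRAPH.get(url, [])

    def breadth_first_explore(start, budget=5):
        visited, queue, seen = [], deque([start]), {start}
        while queue and len(visited) < budget:
            url = queue.popleft()
            visited.append(url)
            for link in fetch_links(url):
                if link not in seen:
                    seen.add(link)
                    queue.append(link)
        return visited

    print(breadth_first_explore("start"))  # ['start', 'a', 'b', 'a1', 'a2']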

CHAPTER - 5
5.1 Segue: Making Browsing History Usable
Two major reasons why people make minimal use of their browsing history are:
1. the rarity of tools that make saved history data both easy to understand and easy to use;
2. the failure of existing tools to provide access at the right level of granularity.
Both these problems can be rectified by giving users a browser history agent capable of tracking and segmenting their browsing history according to their changes in interest.
The Segue agent will record history automatically as the user browses, segment the history according to the user's changes in interest, and provide the user with a set of tools to edit, annotate, and distribute the history segments to friends and colleagues. Through building and testing this tool, we will discover whether it can cause its users to think more favorably about distributing pieces of their browsing history and to change their patterns of collaboration with friends and colleagues.
The problem:
While browsing, the user chooses links and moves through a linear series of pages. The page in front of the user is always changing, and there are few indications (if any) of how the pages relate to each other. Though the user may build a mental picture of a site while browsing, that short-term model is not easily recalled weeks or months later. User interests while browsing are also dynamic, whether manifested through slow shifts in the kinds of pages being browsed, or through serendipitous finds, or through being sidetracked onto another topic altogether. All this information, though often forgotten by the user, is captured and stored in the form of the browser's history. Web browsing history is a good information resource and a reasonably accurate indicator of the user's interests, but it is rarely used.

Solution using Segue:
Segue is a browser history agent that will track the user's changes in interest over time and visualize that information in a series of segments (or "skeins") as the user browses. A change in interest is detected by comparing the current page to several pages visited in the recent past and looking for similarities. Changes of interest, when recorded and displayed in this way, and especially when coupled with proactive, autonomous agent capabilities, will provide an interesting and ultimately more useful way of visualizing and manipulating web history.
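The detection step just described can be sketched with a simple word-overlap comparison between the current page and a window of recently visited pages. The Jaccard measure, window size and threshold below are assumptions for illustration, not Segue's published algorithm.

    # Sketch of detecting a change of interest by comparing the current page
    # to recently visited ones. The Jaccard word-overlap measure, window size
    # and threshold are illustrative assumptions, not Segue's actual method.

    def jaccard(words_a, words_b):
        a, b = set(words_a), set(words_b)
        return len(a & b) / len(a | b) if a | b else 0.0

    def segment_history(pages, window=3, threshold=0.2):
        """Split a browsing history into skeins at points of low similarity."""
        skeins, current = [], []
        for page in pages:
            recent = current[-window:]
            if recent and max(jaccard(page, r) for r in recent) < threshold:
                skeins.append(current)   # interest changed: start a new skein
                current = []
            current.append(page)
        if current:
            skeins.append(current)
        return skeins

    history = [
        ["python", "agents", "software"],
        ["agents", "intelligent", "software"],
        ["recipe", "pasta", "cooking"],       # sudden topic change
    ]
    print(len(segment_history(history)))  # 2 skeins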
5.2 The agent interface:

Segue automatically segments your browsing history into skeins as you browse. Each skein contains one or more URLs, their associated history information and user annotations, if any. The initial visualization is time-based, but it is also possible to construct subject-based or frequency-based skeins. User control over skein granularity (the breadth of interest covered by a single skein) is also an important feature of the interface.

A temporally-oriented sample visualization is shown in the figure -- the horizontal axis maps to time of day, and the vertical axis to days or weeks (with the most recent skein at the bottom of the window). The figure shows approximately twelve days of browsing history. Clicking a skein will highlight it and display associated keywords and annotations. Skeins can also display their most common keyword directly in the browsing history display. Double-clicking a skein opens a window to show the URLs and history information in that skein. Segue also provides commands to edit one or more skeins (editing multiple skeins merges them into a single skein), to publish a skein as a web page, and to email skeins.

CHAPTER - 6
6.1 ADVANCED APPLICATIONS:
Mobile Software Agents for Dynamic Routing
Networks for mobile devices are quite difficult to design for several reasons, chief among them the problem of routing packets across networks characterized by constantly changing topology. The routing problem can be addressed using a new technique for distributed programming: mobile software agents.
The problem:
A wireless network serving a population of frequently mobile nodes presents a moving target to systems designers. Any scheme for managing routing across such a network has to be flexible enough to adapt to continuous and unpredictable change in three fundamental characteristics -- overall density, node-to-node topology, and usage patterns. The goal for such a system must be to provide optimal service even as the "rules of the game" change.
The solution:
Mobile agents serve as a framework on top of which decentralized infrastructure services can be built. By embedding functionality in mobile software agents and distributing these agents across the network, we push the intelligence traditionally centralized in a few controlling nodes out into the system at large. Every node can be capable of hosting mobile software agents; every node can be a full network citizen. There is no need for privileged arbiter nodes, as network functionality can be built from the ground up by the cooperative behavior of all the individual nodes in the network and that of the agents moving across them.
The following are important characteristics of mobile infrastructure agents (a toy simulation follows the list):
• Agents encapsulate a thread of execution along with a bundle of code and data. Each agent runs independently of all others, is self-contained from a programmatic perspective, and preserves all of its state when it moves from one network node to another. This is known as strong mobility.
• Any agent can move easily across the network. The underlying infrastructure provides a language-level primitive that an agent can call to move itself to a neighboring node.
• Agents must be small in size. Because there is some cost associated with hosting and transporting an agent, they are designed to be as minimal as possible. Simple agents serve as building blocks for complex aggregate behavior.
• An agent is able to cooperate with other agents in order to perform complex or dynamic tasks. Agents may read from and write to a shared block of memory on each node, and can use this facility both to coordinate with other agents executing on that node and to leave information behind for subsequent visitors.
• An agent is able to identify and use resources specific to any node on which it finds itself. In the approaches described here, the nodes are differentiated only by who their neighbors are and how locally congested the network is. In a more heterogeneous network, certain nodes might have access to particular kinds of information -- such as absolute location derived from a global positioning system receiver -- that agents could leverage.
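These characteristics can be illustrated with a toy in-process simulation: each node carries a shared blackboard, and an agent object carries its own state as it hops between neighboring nodes. This is a sketch of the idea only, not a real mobile-agent runtime, and all names are invented.

    import random

    # Toy simulation of the mobile-agent characteristics listed above: an
    # agent encapsulates its own state, hops between neighboring nodes, and
    # coordinates through a per-node shared memory (a blackboard).

    class Node:
        def __init__(self, name, neighbors):
            self.name = name
            self.neighbors = neighbors
            self.blackboard = {}      # shared memory for visiting agents

    class RoutingAgent:
        def __init__(self):
            self.hops = 0             # agent state travels with the agent

        def run_on(self, node):
            # leave information behind for subsequent visitors
            node.blackboard["last_visit_hops"] = self.hops
            self.hops += 1
            # "move" primitive: migrate to a neighboring node
            return random.choice(node.neighbors)

    nodes = {
        "A": Node("A", ["B", "C"]),
        "B": Node("B", ["A"]),
        "C": Node("C", ["A"]),
    }

    agent, location = RoutingAgent(), "A"
    for _ in range(4):
        location = agent.run_on(nodes[location])
    print(agent.hops, nodes["A"].blackboard)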
Artificial Life:
The relatively new field of Artificial Life attempts to study and understand biological life by synthesizing artificial life forms. Artificial Life shares with Artificial Intelligence its interest in synthesizing adaptive autonomous agents: computational systems that inhabit some complex, dynamic environment, sense and act autonomously in this environment, and by doing so realize the set of goals or tasks they are designed for. Autonomous agents have been built for surveillance, exploration and other tasks in environments that are inaccessible or dangerous for human beings. Entertainment is another potential field; this is the case for video games, simulation rides, movies, animation, animatronics, theater, puppetry, certain toys and even party lines.
Examples:
A number of researchers have applied agent technology to produce animated movies. Rather than scripting the exact movements of an animated character, the characters are modeled as agents that perform actions in response to their perceived environment. Reynolds modeled flocks of birds and schools of fish by specifying the behavior of the individual animals that make up the flock.
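Reynolds's flocking behavior emerges from each animal following a few local rules. The following one-dimensional Python toy shows only two such rules (cohesion and separation) with invented parameters, and is a heavy simplification of Reynolds's full model.

    # Heavily simplified sketch of Reynolds-style flocking: each "bird" is an
    # agent steering by local rules (cohesion toward the group, separation
    # from too-close neighbors). One-dimensional toy, parameters invented.

    def step(positions, cohesion=0.05, separation=0.2, min_dist=1.0):
        new_positions = []
        for i, x in enumerate(positions):
            others = [p for j, p in enumerate(positions) if j != i]
            center = sum(others) / len(others)
            move = cohesion * (center - x)   # cohesion: drift toward flockmates
            for p in others:                 # separation: avoid crowding
                if abs(p - x) < min_dist:
                    move += separation * (x - p)
            new_positions.append(x + move)
        return new_positions

    flock = [0.0, 4.0, 10.0]
    for _ in range(50):
        flock = step(flock)
    print([round(x, 2) for x in flock])  # positions have pulled together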


Figure: Realistic fish behavior modeled by Terzopoulos et al. to produce short animated movies.
Tosa used neural networks to model an artificial baby that reacts in emotional ways to the sounds made by a user looking into its crib.

Tosa's artificial "baby" reacts to sounds made by a user looking into it's crib.
The ALIVE project
ALIVE stands for "Artificial Life Interactive Video Environment". One of the goals of the ALIVE project is to demonstrate that virtual environments can offer a more "emotional" and evocative experience by allowing the participant to interact with animated characters.

Figure: Gestures are interpreted by the agents based on context; here, the dog walks away in the direction the user is pointing.

CHAPTER - 7
7.1 CONCLUDING REMARKS

Intelligent software agents have now been around for a few years. Even though this technology is still young, it already looks promising. Promising, but also rather vague and a bit obscure to many. This report's aim was - and is - to provide an overview of what agents offer now and are expected to offer in the future. For that purpose, practical examples have been given to indicate what has already been accomplished. A model was outlined which can be used to extend, enhance and amplify the functionality (individual) agents can offer. Trends and developments from past and present have been described, and future developments have been outlined.
One of the conclusions that can be drawn from these trends and developments is that users will be the ultimate test of agents' success. Users will also (albeit indirectly) drive agents' development; that much seems certain. What is uncertain is whether users will discover, use and adopt agents all by themselves, or whether they will start to use them simply because they are being incorporated into a majority of applications. Users may discover more or less on their own how handy, user-friendly and convenient agents are (or are not), just as many users have discovered or are discovering the pros and cons of the Internet and the World Wide Web. But it may just as well go as in the case of operating systems and GUIs, where the companies with the biggest market share have more or less imposed the usage of certain systems and software.


REFERENCES
• Nwana, H.S. "Software Agents: An Overview." Intelligent Systems Research, AA&T, BT Laboratories, Ipswich, United Kingdom.
• Wooldridge, M., and Jennings, N.R. "Intelligent Agents: Theory and Practice."
• Hermans, B. "Intelligent Software Agents on the Internet."
• The @gency: a WWW page by Serge Stinckwich, with some agent definitions, a list of agent projects and laboratories, and links to agent pages and other agent-related Internet resources.
• ieee.com
• MITlabs.com


CONTENTS
Chapter 1
1.1 Introduction
1.2 Definition of ISA
1.3 Strong and weak notion of the concept agents
Chapter 2
2.1 Application areas of ISA
Chapter 3
3.1 Agents in the browsing world
3.2 Problems regarding the demand for information
3.3 Comparison: Search Engines and Agents
3.4 Agents as building blocks for a new Internet structure
3.5 Middle layer functions
Chapter 4
4.1 Letizia: An Agent That Assists Web Browsing
4.2 Features of Letizia
Chapter 5
5.1 Segue: Making Browsing History Usable
5.2 The agent interface
Chapter 6
6.1 Advanced applications
Chapter 7
7.1 Concluding remarks
References

ACKNOWLEDGEMENT
I express my sincere gratitude to Dr. Agnisarman Namboodiri, Head of Department of Information Technology and Computer Science, for his guidance and support to shape this paper in a systematic way.
I am also greatly indebted to Mr. Saheer H.B. and Ms. S.S. Deepa, Department of IT, for their valuable suggestions in the preparation of the paper.
In addition, I would like to thank all staff members of the IT department and all my friends of S7 IT for their suggestions and constructive criticism.
SAJIT SUDARSANAN
#2
Hello sir, can you let me know a real-time application as an example of an intelligent software agent?
#3
For more information about the topic "Intelligent Software Agents ISA seminar report", refer to the page link below:
http://studentbank.in/report-intelligent...8#pid58048