Artificial Intelligence: Full Report
#1

Artificial Intelligence

Artificial Intelligence (AI) is the area of computer science focused on creating machines that can engage in behaviors that humans consider intelligent.
It is related to the similar task of using computers to understand human intelligence.
Intelligence itself can be described as the ability to achieve goals in the world; varying kinds and degrees of intelligence occur in people, many animals, and some machines.

Submitted By:
Mahajan Nikhil P.
Nirma University



When did AI research start?

After WWII, the English mathematician Alan Turing may have been the first to work on machine intelligence; he gave a lecture on the subject in 1947.
He may also have been the first to decide that AI was best researched by programming computers rather than by building machines.
By the late 1950s, there were many AI researchers, most of them basing their work on programming computers.
Artificial intelligence is the branch of computer science concerned with making computers behave like humans. The term was coined in 1956 by John McCarthy for a conference held at Dartmouth College.

HISTORY

1950-1960
• A draughts-playing program was written by Christopher Strachey and a chess-playing program by Dietrich Prinz.
1960-1970
• Marvin Minsky and Seymour Papert published Perceptrons (1969), demonstrating the limits of simple neural networks, and Alain Colmerauer developed the Prolog programming language.
1970 onwards
• In 1974 Paul John Werbos described the backpropagation algorithm, later widely used for training neural networks.
• Deep Blue, a chess-playing computer, beat Garry Kasparov in a famous six-game match in 1997.

Overview of Artificial Intelligence

Artificial intelligence systems
• The people, procedures, hardware, software, data, and knowledge needed to develop computer systems and machines that demonstrate the characteristics of intelligence
Intelligent behavior
• Learn from experience
• Apply knowledge acquired from experience
• Handle complex situations
• Solve problems when important information is missing
• Determine what is important
• React quickly and correctly to a new situation
• Understand visual images
• Process and manipulate symbols
• Be creative and imaginative

Branches of AI

Perceptive system
• A system that approximates the way a human sees, hears, and feels objects
Vision system
• Capture, store, and manipulate visual images and pictures
Robotics
• Mechanical and computer devices that perform tedious tasks with high precision
Expert system
• Stores knowledge and makes inferences
Learning system
• Computer changes how it functions or reacts to situations based on feedback
Natural language processing
• Computers understand and react to statements and commands made in a natural language, such as English
Neural network
• Computer system that can act like or simulate the functioning of the human brain

What should we study before or while learning AI?

• Study mathematics, especially mathematical logic.
• Study psychology and the physiology of the nervous system.
• Learn C, Lisp and Prolog.
• It is a good idea to learn one basic machine language.
• C++ and Java are also useful.

Applications of AI

Games playing: programming computers to play games such as chess and checkers.
Expert systems: programming computers to make decisions in real-life situations (for example, some expert systems help doctors diagnose diseases based on symptoms).
Natural language: programming computers to understand natural human languages.
Neural networks: systems that simulate intelligence by attempting to reproduce the types of physical connections that occur in animal brains.
Robotics: programming computers to see and hear and react to other sensory stimuli.


Isn't AI about simulating human intelligence?

Not always. Sometimes we can learn how to make machines solve problems by observing other people or by examining our own methods.
On the other hand, most work in AI involves studying the problems the world presents to intelligence rather than studying people or animals.

What is the Turing test?

Alan Turing's 1950 article Computing Machinery and Intelligence discussed conditions for considering a machine to be intelligent.
The Turing test is a one-sided test. A machine that passes the test should certainly be considered intelligent, but a machine could still be considered intelligent without knowing enough about humans to imitate a human.

What about IQ? Do computer programs have IQs?

No. IQ is based on the rates at which intelligence develops in children. It is the ratio of the age at which a child normally makes a certain score to the child's age.
Moreover, a computer's ability to score high on an IQ test would be only weakly correlated with its usefulness.
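Purely for illustration (the conventional scaling by 100 is assumed here, and the example ages are made up), the ratio described above can be written as:

```latex
% Ratio IQ, restating the paragraph above (scaling by 100 assumed):
\[
  \mathrm{IQ} \;=\; 100 \times
  \frac{\text{age at which a child normally makes this score}}{\text{child's actual age}},
  \qquad \text{e.g. } 100 \times \frac{10}{8} = 125
\]
% for a hypothetical 8-year-old scoring at the level typical of a 10-year-old.
```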

ASIMO IN DIFFERENT ACTIVITIES
ASIMO is a humanoid robot created by Honda Motor Company.
ASIMO uses 802.11 wireless technology.


ASIMO

ASIMO is not an autonomous robot. It can't enter a room and make decisions on its own about how to navigate. ASIMO either has to be programmed to do a specific job in a specific area that has markers that it understands, or it has to be manually controlled by a human.
ASIMO can be controlled by four methods:
I. PC
II. Wireless controller (sort of like a joystick)
III. Gestures
IV. Voice commands

Uses of ASIMO

What ASIMO can do with people:
• execute functions appropriately based on the user's customer data;
• greet visitors, informing personnel of the visitor's arrival by transmitting messages and pictures of the visitor's face;
• guide visitors to a predetermined location, etc.
What ASIMO can do in networking:
• by accessing information via the Internet, ASIMO can become a provider of news and weather updates, for example, ready to answer people's questions.

Comparison between human and computer intelligence
Human: speed, short-term memory, and the ability to form accurate and retrievable long-term memories.
Computer: computer programs have plenty of speed and memory, but their abilities correspond to the intellectual mechanisms that program designers understand well enough to put into programs.

Uses of AI in our world

Robotics
Credit granting
Information management and retrieval
AI and expert systems embedded in products
Hospitals and medical facilities
Help desks and assistance
Employee performance evaluation
Loan analysis
Virus detection
Marketing
Repair and maintenance
Gaming

Organizations and publications concerned with AI

The American Association for Artificial Intelligence (AAAI), the European Coordinating Committee for Artificial Intelligence (ECCAI) and the Society for Artificial Intelligence and Simulation of Behavior (AISB) are scientific societies concerned with AI research. The Association for Computing Machinery (ACM) has a special interest group on artificial intelligence, SIGART.
The International Joint Conference on AI (IJCAI) is the main international conference. The AAAI runs a US National Conference on AI. Electronic Transactions on Artificial Intelligence, Artificial Intelligence, Journal of Artificial Intelligence Research, and IEEE Transactions on Pattern Analysis and Machine Intelligence are four of the main journals publishing AI research papers.
#2


ARTIFICIAL INTELLIGENCE

COMPILED BY
Ajay Malalikar
Anirudh Deshpande
Abstract
It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable. While no consensual definition of Artificial Intelligence (AI) exists, AI is broadly characterized as the study of computations that allow for perception, reason and action. This paper examines features of Artificial Intelligence: an introduction, definitions of AI, its history, applications, growth and achievements.
1.0 Introduction
Artificial Intelligence (AI) is the branch of computer science which deals with the intelligence of machines, where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. It is the study of ideas which enable computers to do the things that make people seem intelligent. The central principles of AI include such traits as reasoning, knowledge, planning, learning, communication, perception, and the ability to move and manipulate objects. It is the science and engineering of making intelligent machines, especially intelligent computer programs.
1.1 ARTIFICIAL INTELLIGENCE METHODS:
At the present time, AI methods can be divided into two broad categories: (a) symbolic AI, which focuses on the development of knowledge-based systems (KBS); and (b) computational intelligence, which includes such methods as neural networks (NN), fuzzy systems (FS), and evolutionary computing. A very brief introduction to these AI methods is given below, and each method is discussed in more detail in the sections that follow.
1.1.1 Knowledge-Based Systems:
A KBS can be defined as a computer system capable of giving advice in a particular domain, utilizing knowledge provided by a human expert. A distinguishing feature of a KBS lies in the separation between the knowledge base, which can be represented in a number of ways such as rules, frames, or cases, and the inference engine, the algorithm which uses the knowledge base to arrive at a conclusion.
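To make that separation concrete, here is a minimal sketch (mine, not from the source) of a tiny knowledge-based system in Python: the rules and facts are invented for illustration, and the forward-chaining function plays the role of a generic inference engine kept apart from the knowledge it uses.

```python
# Minimal knowledge-based system sketch: the knowledge (rules, facts)
# is separate from the inference engine that uses it.

# Knowledge base: each rule is (set of conditions, conclusion).
RULES = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"has_fever", "has_cough"}, "suspect_flu"),
    ({"suspect_flu"}, "recommend_rest"),
]

def forward_chain(facts, rules):
    """Generic inference engine: repeatedly fire any rule whose
    conditions are satisfied until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

if __name__ == "__main__":
    print(forward_chain({"has_fever", "has_cough"}, RULES))
    # derives 'suspect_flu' and then 'recommend_rest'
```

Swapping in a different rule set changes the advice given without touching the inference engine, which is the design point the paragraph above makes.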
1.1.2 Neural Networks:
NNs are biologically inspired systems consisting of a massively connected network of computational neurons, organized in layers. By adjusting the weights of the network, NNs can be trained to approximate virtually any nonlinear function to a required degree of accuracy. NNs typically are provided with a set of input and output exemplars. A learning algorithm (such as back propagation) would then be used to adjust the weights in the network so that the network would give the desired output, in a type of learning commonly called supervised learning.
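As an illustrative sketch only (the 2-4-1 architecture, learning rate, and XOR exemplars are arbitrary choices, not taken from the text), the following Python/NumPy code trains a tiny network with backpropagation in the supervised fashion described above.

```python
import numpy as np

# Toy supervised learning with backpropagation: a small network
# learns XOR from input/output exemplars.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: gradients of squared error w.r.t. pre-activations
    err = out - y
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent weight updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

# Predictions should end up close to [[0], [1], [1], [0]].
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```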
1.1.3 Fuzzy Systems:
Fuzzy set theory was proposed by Zadeh (1965) as a way to deal with the ambiguity associated with almost all real-world problems. Fuzzy set membership functions provide a way to show that an object can partially belong to a group. Classic set theory defines sharp boundaries between sets, which means that an object can only be a member or a nonmember of a given set. Fuzzy membership functions allow for gradual transitions between sets and varying degrees of membership for objects within sets. Complete membership in a fuzzy set is indicated by a value of +1, while complete non-membership is shown by a value of 0. Partial membership is represented by a value between 0 and +1.
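A small hypothetical example (the "warm temperature" set and its breakpoints are invented) showing partial membership with a triangular membership function:

```python
def triangular_membership(x, a, b, c):
    """Degree of membership in a fuzzy set whose triangular membership
    function peaks at b and falls to 0 at a and c."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# "Warm" temperatures, peaking at 25 degrees (breakpoints are arbitrary).
for t in (10, 18, 25, 30, 40):
    print(t, round(triangular_membership(t, 15, 25, 35), 2))
# 10 -> 0.0 (non-member), 25 -> 1.0 (full member), 30 -> 0.5 (partial member)
```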
2.0 Some definitions of AI
Computers with the ability to mimic or duplicate the functions of the human brain
Artificial Intelligence (AI) is the study of how computer systems can simulate intelligent processes such as learning, reasoning, and understanding symbolic information in context. AI is inherently a multi-disciplinary field. Although it is most commonly viewed as a subfield of computer science, and draws upon work in algorithms, databases, and theoretical computer science, AI also has close connections to the neurosciences, cognitive science and cognitive psychology, mathematical logic, and engineering.
"The exciting new effort to make computers think ... machines with minds, in the full and literal sense." (Haugeland, 1985)
"The automation of activities that we associate with human thinking, activities such as decision-making, problem solving, learning ..." (Bellman, 1978)
"The study of mental faculties through the use of computational models." (Charniak and McDermott, 1985)
"The study of the computations that make it possible to perceive, reason, and act." (Winston, 1992)
"The art of creating machines that perform functions that require intelligence when performed by people." (Kurzweil, 1990)
"The study of how to make computers do things at which, at the moment, people are better." (Rich and Knight, 1991)
"A field of study that seeks to explain and emulate intelligent behavior in terms of computational processes." (Schalkoff, 1990)
"The branch of computer science that is concerned with the automation of intelligent behavior." (Luger and Stubblefield, 1993)
"Artificial intelligence is the study of ideas to bring into being machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment and intention. Each such machine should engage in critical appraisal and selection of differing opinions within itself. Produced by human skill and labor, these machines should conduct themselves in agreement with life, spirit and sensitivity, though in reality, they are imitations."
It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.
3.0 History
The modern history of AI can be traced back to the year 1956, when John McCarthy proposed the term as the topic for a conference held at Dartmouth College in New Hampshire devoted to the subject. The initial goals for the field were too ambitious and the first few AI systems failed to deliver what was promised. After a few of these early failures, AI researchers started setting more realistic goals for themselves. In the 1960s and the 1970s, the focus of AI research was primarily on the development of KBS or expert systems. During these years, expert systems technology was applied to a wide range of problems and fields, ranging from medical diagnosis to inferring molecular structure to natural language understanding. The same period also witnessed early work on NNs, which showed how a distributed structure of elements could collectively represent an individual concept, with the added advantage of robustness and parallelism. However, the publication of Minsky and Papert's book Perceptrons in 1969, which argued for the limited representation capabilities of NNs, led to the demise of NN research in the 1970s.
The late 1980s and the 1990s saw a renewed interest in NN research when several different researchers reinvented the back propagation learning algorithm (although the algorithm was really first discovered in 1969). The back propagation algorithm was soon applied to many learning problems, causing great excitement within the AI community. The 1990s also witnessed some dramatic changes in the content and methodology of AI research. The focus of the field has been shifting toward grounding AI methods on a rigorous mathematical foundation, as well as toward tackling real-world problems and not just toy examples. There is also a move toward the development of hybrid intelligent systems (i.e., systems that use more than one AI method), stemming from the recognition that many AI methods are complementary. Hybrid intelligent systems also started to use newer paradigms that mimic biological behavior, such as genetic algorithms (GAs) and fuzzy logic.
4.0 Applications of AI
4.1 Finance:
Banks use artificial intelligence systems to organize operations, invest in stocks, and manage properties. In August 2001, robots beat humans in a simulated financial trading competition. Financial institutions have long used artificial neural network systems to detect charges or claims outside of the norm, flagging these for human investigation. Some other applications in this section include loan investigation, ATM design, safe and fast banking etc.
4.2 Medicine:
A medical clinic can use artificial intelligence systems to organize bed schedules, make staff rotations, and provide medical information. Artificial neural networks are used for medical diagnosis, functioning as machine differential diagnosis. AI also has applications in the fields of cardiology (CRG), neurology (MRI), embryology (sonography), complex operations on internal organs, etc.
4.3 Heavy Industry:
Nowadays, in big industries, much of the work and many machine operations are controlled by Artificial Intelligence techniques. These huge machines involve risk in their manual maintenance and operation, so an efficient and safe software agent becomes a necessary part of their operation.
4.3.1 Application Types and Situations: Intelligent software systems play a number of roles in heavy industry. Selected examples are discussed below.
PROCESS CONTROL: These tasks usually involve automation of low-level control in a real-time system. The implemented systems are concerned with fault detection, diagnosis and alarming, and with operating the control devices in the control loops. Integral functions of intelligent software are sensor diagnostics, handling of erroneous or missing data, and performing temporal reasoning.
PROCESS MONITORING: Artificial Intelligence systems monitor, compare and analyze the process behavior of events that are crucial to successful operation and suggest any corrective action that should be implemented by the operators.
FAULT DIAGNOSIS AND MAINTENANCE: It is practically impossible to diagnose huge machines regularly and precisely, and the operation of faulty machines may cause great losses to the industry. Artificial Intelligence systems therefore offer a number of advantages for working with diagnostic problems. First, they can monitor and analyze hundreds of sensors, determine any anomalies in their functions, and identify probable causes of the discrepancies between expected and actual operating conditions.
SCHEDULING AND PLANNING: In the present-day world, time plays an important role, so completing manufacturing within a short period of time, in addition to maintaining good quality, becomes very important. Intelligent software offers several advantages in developing computerized scheduling systems. Instead of presenting one optimization schedule, AI-based scheduling systems present several schedules with their evaluation indexes. The operator can then select the "best" optimum schedule.
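As a purely illustrative sketch (the jobs, processing times, due dates, and evaluation indexes are invented, and this is not any particular industrial system), the code below enumerates several candidate schedules and reports each with its evaluation indexes, so an operator could select the preferred one:

```python
from itertools import permutations

# Hypothetical jobs on a single machine: name -> (processing time, due date).
JOBS = {"J1": (4, 6), "J2": (2, 4), "J3": (3, 10)}

def evaluate(schedule):
    """Evaluation indexes for one schedule: makespan and total lateness."""
    t, lateness = 0, 0
    for job in schedule:
        duration, due = JOBS[job]
        t += duration
        lateness += max(0, t - due)
    return {"makespan": t, "total_lateness": lateness}

# Present several schedules with their evaluation indexes,
# rather than a single "optimal" answer.
candidates = [(order, evaluate(order)) for order in permutations(JOBS)]
candidates.sort(key=lambda item: item[1]["total_lateness"])
for order, scores in candidates[:3]:
    print(" -> ".join(order), scores)
```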
4.4 Telecommunications:
Many telecommunications companies make use of heuristic search in the management of their workforces, for example BT Group has deployed heuristic search in a scheduling application that provides the work schedules of 20000 engineers.
4.5 Music:
In music, AI scientists are trying to make the computer emulate the activities of a skillful musician. Composition, performance, music theory, and sound processing are some of the major areas on which research in music and Artificial Intelligence is focusing. Some examples are:
OrchExtra: This program was designed to provide small-budget productions with instrumentation for all instruments usually present in the full-fledged orchestra.
COMPUTER ACCOMPANIMENT: The Computer Music Project at CMU develops computer music and interactive performance technology to enhance human musical experience and creativity.
SMART MUSIC: SmartMusic is an interactive, computer-based practice tool for musicians.
CHUCK: ChucK is a text-based, cross-platform language that allows real-time synthesis, composition, performance and analysis of music.
4.6 Antivirus:
Artificial intelligence (AI) techniques have played an increasingly important role in antivirus detection. At present, the principal artificial intelligence techniques applied in antivirus detection include heuristic techniques, data mining, agent techniques, artificial immune systems, and artificial neural networks. Integrating antivirus detection with artificial intelligence improves the performance of detection systems and promotes the development of new artificial intelligence algorithms and their application in this area. This section introduces the main artificial intelligence technologies that have been applied in antivirus systems, and points out that combining different kinds of artificial intelligence technologies will become the main development trend in the field.
4.7 Robotics:
Definition: What is a Robot?
Robots are physical agents that perform tasks by manipulating the physical world. They are equipped with sensors to perceive their environment and effectors to exert physical forces on it (covered in more detail in the next section). Robots can be put into three main categories: manipulators, mobile robots, and humanoid robots.
Robotics and AI
Artificial intelligence is a theory; its base object is the agent, the "actor", and it is realized in software. Robots are manufactured as hardware. The connection between the two is that the control of the robot is a software agent that reads data from the sensors, decides what to do next, and then directs the effectors to act in the physical world.
Robot application software
Most robot manufacturers keep their software hidden, so it is difficult to find out how most robots are programmed; it is almost as if they had no software in many cases. Regardless of which language is used, the end result of robot software is to create robotic applications that help or entertain people. Applications include command-and-control and tasking software. Command-and-control software includes robot control GUIs for tele-operated robots, point-and-click command software for autonomous robots, and scheduling software for mobile robots in factories. Tasking software includes simple drag-and-drop interfaces for setting up delivery routes, security patrols and visitor tours; it also includes custom programs written to deploy specific applications. General-purpose robot application software is deployed on widely distributed robotic platforms.
4.8 Gaming:
In the early days, gaming technology was limited. Physicist Willy Higinbotham created the first video game in 1958; it was called Tennis for Two and was played on an oscilloscope. Now AI technology has become far more capable and standards have risen, and developers design more realistic, heavily graphical, 3-D games. Some of the most popular games of the present day are Crysis, F.E.A.R., Fallout, Halo, etc.
4.9 SOME OTHER APPLICATIONS:
Credit granting
Information management and retrieval
AI and expert systems embedded in products
Plant layout
Help desks and assistance
Employee performance evaluation
Shipping
Marketing
Warehouse optimization
In space workstation maintenance
Satellite controls
Network developments
Military activity controls
Nuclear management
5.0 The Explosive Growth of AI
Since AI is applicable in almost all fields, it has become part of the needs of our daily life, and this wide applicability is the reason behind the explosive growth of the field.
Based on the application area and the purpose for which it is used, this growth can be divided into two parts:
1. Growth in a positive sense (useful to society)
2. Growth in a negative sense (harmful to society)
6.0 Some Achievements of AI
DARPA Grand Challenge: 123 miles through the desert.
DARPA Urban Challenge: autonomous driving in traffic.
Deep Thought is an international grand master chess player.
Sphinx can recognize continuous speech without training for each speaker. It operates in near real time using a vocabulary of 1000 words and has 94% word accuracy.
Navlab is a truck that can drive along a road at 55mph in normal traffic.
Carlton and United Breweries use an AI planning system to plan production of their beer.
Natural language interfaces to databases can be obtained on a PC.
Machine Learning methods have been used to build expert systems.
Expert systems are used regularly in finance, medicine, manufacturing, and agriculture
7.0 Future of AI
Having discussed AI, one debatable question arises: is artificial intelligence more powerful than natural intelligence? Looking at its features and wide applications, we may well favour artificial intelligence. Seeing the pace of AI development, one may ask whether the future world is becoming artificial.
Biological intelligence is fixed, because it is an old, mature paradigm, but the new paradigm of non-biological computation and intelligence is growing exponentially. The crossover will be in the 2020s and after that, at least from a hardware perspective, non-biological computation will dominate.

The memory capacity of the human brain is probably of the order of ten thousand million binary digits. But most of this is probably used in remembering visual impressions, and other comparatively wasteful ways. One might reasonably hope to be able to make some real progress [towards artificial intelligence] with a few million digits [of computer memory].
Hence we can say that, since natural intelligence is limited and volatile, the world may now come to depend upon computers for smooth working.
8.0 Conclusion
So far we have discussed Artificial Intelligence in brief: some of its principles, its applications, its achievements, etc. The ultimate goal of institutions and scientists working on AI is to solve the majority of problems or to achieve tasks which we humans cannot directly accomplish. It is certain that development in this field of computer science will change the complete scenario of the world. Now it is the responsibility of the cream of the engineering community to develop this field.
#3

• "... artificial intelligence [AI] is the science of making machines do things that would require intelligence if done by [humans]" (Minsky, 1963)


• BRIEF HISTORY OF ARTIFICIAL INTELLIGENCE

4th century BC
Aristotle invents syllogistic logic, the first formal deductive reasoning system.
16th century AD
Rabbi Loew supposedly invents the Golem, an artificial man made out of clay
19th century
George Boole creates a binary algebra to represent laws of thought
Charles Babbage and Lady Lovelace develop sophisticated programmable mechanical computers, precursors to modern electronic computers.


• HISTORIC ATTEMPTS

The original story, published by Mary Shelley in 1818, describes the attempt of a true scientist, Victor Frankenstein, to create life.



• PURPOSE OF AI

One is to use the power of computers to augment human thinking, just as we use motors to augment human or horse power. Robotics and expert systems are major branches of that.
The other is to use a computer's artificial intelligence to understand how humans think, in a humanoid way. If you test your programs not merely by what they can accomplish, but by how they accomplish it, then you're really doing cognitive science; you're using AI to understand the human mind.
- Herb Simon


• ROBOTICS

• 1970: Shakey (SRI) was driven by a remote-controlled computer, which formulated plans for moving and acting. It took about half an hour to move Shakey one meter.
The Ant has 17 sensors; these robots are designed to work in colonies.



• HUMAN BRAIN

• Most scientists would be happy to view the brain as a vast but complex machine. As such, it should then be possible to replicate the brain purely using artificial neurons.
• This has already been done for very simple life forms such as insects, which only have a few thousand neurons in their brains.
• In principle, it would not be necessary to have a full scientific understanding of how the brain works. One would just build a copy of one using artificial materials and see how it behaves.



• LIMITATIONS OF HUMAN MIND

• Object recognition: people cannot properly explain how they recognise objects.
• Face recognition: the skill cannot be passed on to another person by explanation.
• Naming of colours: based on learning, not on absolute standards.
• HOW DO WE BUILD A MACHINE THAT CAN IDENTIFY COLOURS?

• Today: Computer as Artist


Two paintings done by Harold Cohen's Aaron program:
• DIFFICULTIES COMPUTERS CANNOT YET MODEL

• The machines used in the National Lottery.
• The performance of horses in the Grand National.
• The behaviour of a colony of ants.
• Even a simple natural evolutionary milieu.
• Bacterial growth in a human organ.
• Human behaviour:
- Criminal tendencies
- Stock market movements
- Popularity ratings of politicians, pop stars, etc.


• THE FUTURE?

• The idea of Artificial Intelligence is being replaced by Artificial Life, or anything with a form or body.
• The consensus among scientists is that a requirement for life is that it has an embodiment in some physical form, but this will change. Programs may not fit this requirement for life yet.
#4

ARTIFICIAL INTELLIGENCE
Architecture of Intelligence
Abstract
We start by making a distinction between mind and cognition, and by positing that cognition is an aspect of mind. We propose as a working hypothesis a Separability Hypothesis which posits that we can factor off an architecture for cognition from a more general architecture for mind, thus avoiding a number of philosophical objections that have been raised about the "Strong AI" hypothesis. Thus the search for an architectural level which will explain all the interesting phenomena of cognition is likely to be futile. There are a number of levels which interact, unlike in the computer model, and this interaction makes explanation of even relatively simple cognitive phenomena in terms of one level quite incomplete.
I. Dimensions for Thinking About Thinking
A major problem in the study of intelligence and cognition is the range of (often implicit) assumptions about what phenomena these terms are meant to cover. Are we just talking about cognition as having and using knowledge, or are we also talking about other mental states such as emotions and subjective awareness? Are we talking about intelligence as an abstract set of capacities, or as a set of biological mechanisms and phenomena? These two questions set up two dimensions of discussion about intelligence. After we discuss these dimensions we will discuss information processing, representation, and cognitive architectures.
A. Dimension 1. Is intelligence separable from other mental phenomena?
When people think of intelligence and cognition, they often think of an agent being in some knowledge state, that is, having thoughts, beliefs. They also think of the underlying process of cognition as something that changes knowledge states. Since knowledge states are particular types of information states the underlying process is thought of as information processing. However, besides these knowledge states, mental phenomena also include such things as emotional states and subjective consciousness. Under what conditions can these other mental properties also be attributed to artifacts to which we attribute knowledge states? Is intelligence separable from these other mental phenomena?
It is possible that intelligence can be explained or simulated without necessarily explaining or simulating other aspects of mind. A somewhat formal way of putting this Separability Hypothesis is that the knowledge state transformation account can be factored off as a homomorphism of the mental process account. That is: If the mental process can be seen as a sequence of transformations: M1 --> M2 --> ..., where Mi is the complete mental state, and the transformation function (the function that is responsible for state changes) is F, then a subprocess K1 --> K2 --> ... can be identified such that each Ki is a knowledge state and a component of the corresponding Mi, the transformation function is f, and f is some kind of homomorphism of F. A study of intelligence alone can restrict itself to a characterization of K's and f, without producing accounts of M's and F. If cognition is in fact separable in this sense, we can in principle design machines that implement f and whose states are interpretable as K's. We can call such machines cognitive agents, and attribute intelligence to them. However, the states of such machines are not necessarily interpretable as complete M's, and thus they may be denied other attributes of mental states.
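Purely as a compact restatement of the hypothesis in the paragraph above (the projection map pi is notation introduced here, not the authors'):

```latex
% Separability Hypothesis, restated: \pi projects a full mental state M_i
% onto its knowledge-state component K_i = \pi(M_i).  Then f is a
% homomorphic image of F in the sense that projection commutes with the
% state transitions:
\[
  \pi\bigl(F(M_i)\bigr) \;=\; f\bigl(\pi(M_i)\bigr)
  \qquad \text{for every mental state } M_i ,
\]
% so the knowledge-level trajectory K_1 \xrightarrow{f} K_2 \xrightarrow{f} \cdots
% can be studied without producing an account of the M_i's and of F.
```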
B. Dimension 2: Functional versus Biological
The second dimension in discussions about intelligence involves the extent to which we need to be tied to biology for understanding intelligence. Can intelligence be characterized abstractly as a functional capability which just happens to be realized more or less well by some biological organisms? If it can, then study of biological brains, of human psychology, or of the phenomenology of human consciousness is not logically necessary for a theory of cognition and intelligence, just as enquiries into the relevant capabilities of biological organisms are not needed for the abstract study of logic and arithmetic or for the theory of flight. Of course, we may learn something from biology about how to practically implement intelligent systems, but we may feel quite free to substitute non-biological (both in the sense of architectures which are not brain-like and in the sense of being unconstrained by considerations of human psychology) approaches for all or part of our implementation. Whether intelligence can be characterized abstractly as a functional capability surely depends upon what phenomena we want to include in defining the functional capability, as we discussed. We might have different constraints on a definition that needed to include emotion and subjective states than one that only included knowledge states. Clearly, the enterprise of AI deeply depends upon this functional view being true at some level, but whether that level is abstract logical representations as in some branches of AI, Darwinian neural group selections as proposed by Edelman, something intermediate, or something physicalist is still an open question.
III. Architectures for Intelligence
We now move to a discussion of architectural proposals within the information processing perspective. Our goal is to try to place the multiplicity of proposals into perspective. As we review various proposals, we will present some judgements of our own about relevant issues. But first, we need to review the notion of an architecture and make some additional distinctions.
A. Form and Content Issues in Architectures
In computer science, a programming language corresponds to a virtual architecture. A specific program in that language describes a particular (virtual) machine, which then responds to various inputs in ways defined by the program. The architecture is thus what Newell calls the fixed structure of the information processor that is being analyzed, and the program specifies a variable structure within this architecture. We can regard the architecture as the form and the program as the content, which together fully instantiate a particular information processing machine. We can extend these intuitions to types of machines which are different from computers. For example, the connectionist architecture can be abstractly specified as the set {{N}, {nI}, {nO}, {zi}, {wij}}, where {N} is a set of nodes, {nI} and {nO} are subsets of {N} called input and output nodes respectively, {zi} are the functions computed by the nodes, and {wij} is the set of weights between nodes. A particular connectionist machine is then instantiated by the "program" that specifies values for all these variables.
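As an illustrative sketch only (the node names, functions, and weights are invented), the generic evaluator below plays the role of the fixed "form" of a connectionist architecture, while the particular dictionary of nodes, input/output sets, node functions, and weights is the "program" that instantiates one specific machine:

```python
import math

# "Form": a generic connectionist architecture {N, n_I, n_O, z_i, w_ij}.
# "Content": one particular assignment of nodes, functions and weights.
machine = {
    "N": ["a", "b", "h", "o"],          # all nodes, in evaluation order
    "n_I": ["a", "b"],                  # input nodes
    "n_O": ["o"],                       # output nodes
    "z": {                               # function computed by each node
        "h": lambda s: math.tanh(s),
        "o": lambda s: 1 / (1 + math.exp(-s)),
    },
    "w": {("a", "h"): 0.5, ("b", "h"): -0.3, ("h", "o"): 1.2},  # weights
}

def run(machine, inputs):
    """Evaluate the machine on a dict of input-node activations,
    visiting non-input nodes in the order they appear in N."""
    act = dict(inputs)
    for node in machine["N"]:
        if node in machine["n_I"]:
            continue
        s = sum(machine["w"].get((src, node), 0.0) * act.get(src, 0.0)
                for src in machine["N"])
        act[node] = machine["z"][node](s)
    return {node: act[node] for node in machine["n_O"]}

print(run(machine, {"a": 1.0, "b": 0.0}))
```

Changing only the dictionary (the "program") yields a different particular machine while the evaluator (the "architecture") stays fixed, which is the form/content distinction drawn above.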
We have discussed the prospects for separating intelligence (a knowledge state process) from other mental phenomena, and also the degree to which various theories of intelligence and cognition balance between fidelity to biology versus functionalism. We have discussed the sense in which alternatives such as logic, decision tree algorithms, and connectionism are all alternative languages in which to couch an information processing account of cognitive phenomena, and what it means to take a Knowledge Level stance towards cognitive phenomena. We have further discussed the distinction between form and content theories in AI. We are now ready to give an overview of the issues in cognitive architectures. We will assume that the reader is already familiar in some general way with the proposals that we discussing. Our goal is to place these ideas in perspective.
B. Intelligence as Just Computation
Until recently the dominant paradigm for thinking about information processing has been the Turing machine framework, or what has been called the discrete symbol system approach. Information processing theories are formulated as algorithms operating on data structures. In fact AI was launched as a field when Turing proposed in a famous paper that thinking was computation of this type (the term "artificial intelligence" itself was coined later) . Natural questions in this framework would be whether the set of computations that underlie thinking is a subset of Turing-computable functions, and if so how the properties of the subset should be characterized.
Most of AI research consists of algorithms for specific problems that are associated with intelligence when humans perform them. Algorithms for diagnosis, design, planning, etc., are proposed, because these tasks are seen as important for an intelligent agent. But as a rule no effort is made to relate the algorithm for the specific task to a general architecture for intelligence. While such algorithms are useful as technologies and to make the point that several tasks that appear to require intelligence can be done by certain classes of machines, they do not give much insight into intelligence in general.
C. Architectures for Deliberation
Historically most of the intuitions in AI about intelligence have come from introspections about the relationships between conscious thoughts. We are aware of having thoughts which often follow one after another. These thoughts are mostly couched in the medium of natural language, although sometimes thoughts include mental images as well. When people are thinking for a purpose, say for problem solving, there is a sense of directing thoughts, choosing some, rejecting others, and focusing them towards the goal. Activity of this type has been called "deliberation." Deliberation, for humans, is a coherent goal-directed activity, lasting over several seconds or longer. For many people thinking is the act of deliberating in this sense. We can contrast activities in this time span with other cognitive phenomena, which, in humans, take under a few hundred milliseconds, such as real-time natural language understanding and generation, visual perception, being reminded of things, and so on. These short time span phenomena are handled by what we will call the subdeliberative architecture, as we will discuss later.
Researchers have proposed different kinds of deliberative architectures, depending upon which kind of pattern among conscious thoughts struck them. Two groups of proposals about such patterns have been influential in AI theory-making: the reasoning view and the goal-subgoal view.
1. Deliberation as Reasoning
People have for a long time been struck by logical relations between thoughts and have made the distinction between rational and irrational thoughts. Remember that Boole's book on logic was titled "Laws of Thought." Thoughts often have a logical relation between them: we think thoughts A and B, then thought C, where C follows from A and B. In AI, this view has given rise to an idealization of intelligence as rational thought, and consequently to the view that the appropriate architecture is one whose behavior is governed by rules of logic. In AI, McCarthy is most closely identified with the logic approach to AI, and [McCarthy and Hayes, 1969] is considered a clear early statement of some of the issues in the use of logic for building an intelligent machine.
Researchers in AI disagree about how to make machines which display this kind of rationality. One group proposes that the ideal thought machine is a logic machine, one whose architecture has logical rules of inference as its primitive operators. These operators work on a storehouse of knowledge represented in a logical formalism and generate additional thoughts. For example, the Japanese Fifth generation project came up with computer architectures whose performance was measured in (millions of) inferences per second. The other group believes that the architecture itself (i.e, the mechanism that generates thoughts) is not a logic machine, but one which generates plausible, but not necessarily correct, thoughts, and then knowledge of correct logical patterns is used to make sure that the conclusion is appropriate.
Historically rationality was characterized by the rules of deduction, but in AI, the notion is being broadened to include a host of non-deductive rules under the broad umbrella of "non-monotonic logic" [McCarthy, 1980] or "default reasoning," to capture various plausible reasoning rules. There is considerable difference of opinion about whether such rules exist in a domain-independent way as in the case of deduction, and how large a set of rules would be required to capture all plausible reasoning behaviors. If the number of rules is very large, or if they are context-dependent in complicated ways, then logic architectures would become less practical.
At any point in the operation of the architecture, many inference rules might be applied to a situation and many inferences drawn. This brings up the control issue in logic architectures, i.e., decisions about which inference rule should be applied when. Logic itself provides no theory of control. The application of the rule is guaranteed, in the logic framework, to produce a correct thought, but whether it is relevant to the goal is decided by considerations external to logic. Control tends to be task-specific, i.e., different types of tasks call for different strategies. These strategies have to be explicitly programmed in the logic framework as additional knowledge.
2. Deliberation as Goal-Subgoaling
An alternate view of deliberation is inspired by another perceived relation between thoughts and provides a basic mechanism for control as part of the architecture. Thoughts are often linked by means of a goal-subgoal relation. For example, you may have a thought about wanting to go to New Delhi, then you find yourself having thoughts about taking trains and airplanes, and about which is better, then you might think of making reservations and so on. Newell and Simon [1972] have argued that this relation between thoughts, the fact that goal thoughts spawn subgoal thoughts recursively until the subgoals are solved and eventually the goals are solved, is the essence of the mechanism of intelligence. More than one subgoal may be spawned, and so backtracking from subgoals that didn't work out is generally necessary. Deliberation thus looks like search in a problem space. Setting up the alternatives and exploring them is made possible by the knowledge that the agent has. In the travel example above, the agent had to have knowledge about different possible ways to get to New Delhi, and knowledge about how to make a choice between alternatives. A long term memory is generally proposed which holds the knowledge and from which knowledge relevant to a goal is brought to play during deliberation. This analysis suggests an architecture for deliberation that retrieves relevant knowledge, sets up a set of alternatives to explore (the problem space), explores it, sets up subgoals, etc.
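As an illustrative sketch only (the travel "knowledge" and goal names are invented, and this is not the Soar architecture itself), the Python code below treats deliberation as recursive goal-subgoaling with backtracking over a tiny problem space, in the spirit of the New Delhi example above:

```python
# Toy goal-subgoal deliberation: a goal is either solved directly by a
# primitive action, or decomposed into alternative lists of subgoals
# retrieved from "long term memory"; backtracking tries the next
# alternative when one decomposition fails.

PRIMITIVE = {"book_flight", "book_train_ticket", "pack_bags"}

# Long term memory: goal -> list of alternative subgoal decompositions.
MEMORY = {
    "go_to_new_delhi": [["travel_by_plane"], ["travel_by_train"]],
    "travel_by_plane": [["book_flight", "pack_bags"]],
    "travel_by_train": [["book_train_ticket", "pack_bags"]],
}

def solve(goal):
    """Return a flat plan (list of primitive actions) achieving goal,
    or None if no known decomposition works."""
    if goal in PRIMITIVE:
        return [goal]
    for alternative in MEMORY.get(goal, []):       # explore alternatives
        plan = []
        for subgoal in alternative:                # spawn subgoals
            subplan = solve(subgoal)
            if subplan is None:                    # backtrack
                plan = None
                break
            plan.extend(subplan)
        if plan is not None:
            return plan
    return None

print(solve("go_to_new_delhi"))
# -> ['book_flight', 'pack_bags']
```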
The most recent version of an architecture for deliberation in the goal-subgoal framework is Soar [Newell, 1990]. Soar has two important attributes. The first is that any difficulty it has in solving any subgoal simply results in the setting up of another subgoal, and knowledge from long term memory is brought to bear in its solution. It might be remembered that Newell's definition of intelligence is the ability to realize the knowledge level potential of an agent. Deliberation and goal-subgoaling are intended to capture that capability: any piece of knowledge in long term memory is available, if it is relevant, for any goal. Repeated subgoaling will bring that knowledge to deliberation. The second attribute of Soar is that it "caches" its successes in problem solving in its long term memory. The next time there is a similar goal, that cached knowledge can be directly used, instead of searching again in the corresponding problem space.
This kind of deliberative architecture confers on the agent the potential for rationality in two ways. With the right kind of knowledge, each goal results in plausible and relevant subgoals being setup. Second, "logical rules" can be used to verify that the proposed solution to subgoals is indeed correct. But such rules of logic are used as pieces of knowledge rather than as operators of the architecture itself. Because of this, the verification rules can be context- and domain-dependent.
One of the results of this form of deliberation is the construction of special purpose algorithms or methods for specific problems. These algorithms can be placed in an external computational medium, and as soon as a subgoal arises that such a method or algorithm can solve, the external medium can solve it and return the results. For example, during design, an engineer might set up the subgoal of computing the maximum stress in a truss, and invoke a finite element method running on a computer. The deliberative engine can thus create and invoke computational algorithms. The goal-subgoaling architecture provides a natural way to integrate external algorithms.
In the Soar view, long term memory is just an associative memory. It has the capability to "recognize" a situation and retrieve the relevant pieces of knowledge. Because of the learning capability of the architecture, each episode of problem solving gives rise to continuous improvement. As a problem comes along, some subtasks are solved by external computational architectures which implement special purpose algorithms, while others are directly solved by compiled knowledge in memory, while yet others are solved by additional deliberation. This cycle makes the overall system increasingly more powerful. Eventually, most routine problems, including real-time understanding and generation of natural language, are solved by recognition. (Recent work by Patten [Patten, et al, 1992] on the use of compiled knowledge in natural language understanding is compatible with this view.)
Deliberation seems to be a source of great power in humans. Why isn't recognition enough? As Newell points out, the particular advantage of deliberation is distal access to and combination of knowledge at run-time in a goal-specific way. In the deliberative machine, temporary connections are created between pieces of knowledge that are not hard-coded, and that gives it the ability to realize the knowledge level potential more. A recognition architecture uses knowledge less effectively: if the connections are not there as part of the memory element that controls recognition, a piece of knowledge, though potentially relevant, will not be utilized in the satisfaction of a goal.
As an architecture for deliberation, the goal-subgoal view seems to us closer to the mark than the reasoning view. As we have argued elsewhere [Chandrasekaran, 1991], logic seems more appropriate for justification of conclusions and as the framework for the semantics of representations than for the generative architecture.
AI theories of deliberation give central importance to human-level problem solving and reasoning. Any continuity with higher animal cognition or brain structure is at the level of the recognition architecture of memory, about which this view says little other than that it is a recognition memory. For supporting deliberation at the human level, long term memory should be capable of storing and generating knowledge with the full range of ontological distinctions that human language has.
3. Is the Search View of Deliberation Too Narrow?
A criticism of this picture of deliberation as a search architecture is that it is based on a somewhat narrow view of the function of cognition. It is worth reviewing this argument briefly.
Suppose a Martian watches a human in the act of multiplying numbers. The human, during this task, is executing some multiplication algorithm, i.e., appears to be a multiplication machine. The Martian might well return to his superiors and report that the human cognitive architecture is a multiplication machine. We, however, know that the multiplication architecture is a fleeting, evanescent virtual architecture that emerged as an interaction between the goal (multiplication) and the procedural knowledge of the human. With a different goal, the human might behave like a different machine. It would be awkward to imagine cognition to be a collection of different architectures for each such task; in fact, cognition is very plastic and is able to emulate various virtual machines as needed.
Is the problem space search engine that has been proposed for the deliberative architecture also an evanescent machine? One argument against it is that it is not intended for a narrow goal like multiplication, but for all kinds of goals. Thus it is not fleeting, but always operational.
Or is it? If the sole purpose of the cognitive architecture is goal achievement (or "problem solving"), then it is reasonable to assume that the architecture would be hard-wired for this purpose. What, however, if goal achievement is only one of the functions of the cognitive architecture, common though it might be? At least in humans, the same architecture is used to daydream, just take in the external world and enjoy it, and so on. The search behavior that we need for problem solving can come about simply by virtue of the knowledge that is made available to the agent's deliberation from long term memory. This knowledge is either a solution to the problem, or a set of alternatives to consider. The agent, faced with the goal and a set of alternatives, simply considers the alternatives in turn, and when additional subgoals are set, repeats the process of seeking more knowledge. In fact, this kind of search behavior happens not only with individuals, but with organizations. They too explore alternatives, but yet we don't see a need for a fixed search engine for explaining organizational behavior. Deliberation of course has to have the right sort of properties to be able to support search. Certainly adequate working memory needs to be there, and probably there are other constraints on deliberation. However, the architecture for deliberation does not have to be exclusively a search architecture. Just like the multiplication machine was an emergent architecture when the agent was faced with that task, the search engine could be the corresponding emergent architecture for the agent faced with a goal and equipped with knowledge about what alternatives to consider. In fact, a number of other such emergent architectures built on top of the deliberative architecture have been studied earlier in our work on Generic Task architectures [1986]. These architectures were intended to capture the needs for specific classes of goals (such as classification). The above argument is not to deemphasize the importance of problem space search for goal achievement, but to resist the identification of the architecture of the conscious processor with one exclusively intended for search. The problem space architecture is still important as the virtual architecture for goal-achieving, since it is a common, though not the only, function of cognition.
Of course, that cognition goes beyond just goal achievement is a statement about human cognition. This is to take a biological rather than a functional standard for the adequacy of an architectural proposal. If we take a functional approach and seek to specify an architecture for a function called intelligence which itself is defined in terms of goal achievement, then a deliberative search architecture working with a long term memory of knowledge certainly has many attractive properties for this function, as we have discussed.
D. Subdeliberative Architectures
We have made a distinction between cognitive phenomena that take less than a few hundred milliseconds for completion and those that evolve over longer time spans. We discussed proposals for the deliberative architecture to account for phenomena taking longer time spans. Some form of subdeliberative architecture is then responsible for phenomena that occur in very short time spans in humans. In deliberation, we have access to a number of intermediate states in problem solving. After you finished planning the New Delhi trip, I can ask you what alternatives you considered, why you rejected taking the train, and so on, and your answers to them will generally be reliable. You were probably aware of rejecting the train option because you reasoned that it would take too long. On the other hand, we have generally no clue to how the subdeliberative architecture came to any of its conclusions.
Many people in AI and cognitive science feel that the emphasis on complex problem solving as the door to understanding intelligence is misplaced, and that theories that emphasize rational problem solving only account for very special cases and do not account for the general cognitive skills that are present in ordinary people. These researchers focus almost completely on the nature of the subdeliberative architecture. There is also a belief that the subdeliberative architecture is directly reflected in the structure of the neural machinery in the brain. Thus, some of the proposals for the subdeliberative architecture claim to be inspired by the structure of the brain and claim a biological basis in that sense.
1. Alternative Proposals
The various proposals differ along a number of dimensions: what kinds of tasks the architecture performs, degree of parallelism, whether it is an information processing architecture at all, and, when it is taken to be an information processing architecture, whether it is a symbolic one or some other type.
With respect to the kind of tasks the architecture performs, we mentioned Newell's view that it is just a recognition architecture. Any smartness it possesses is a result of good abstractions and good indexing, but architecturally, there is nothing particularly complicated. In fact, the good abstractions and indexing themselves were the result of the discoveries of deliberation during problem space search. The real solution to the problem of memory, for Newell, is to get chunking done right: the proper level of abstraction, labeling and indexing is all done at the time of chunking. In contrast to the recognition view are proposals that see relatively complex problem solving activities going on in subdeliberative cognition. Cognition in this picture is a communicating collection of modular agents, each of whom is simple, but capable of some degree of problem solving. For example, they can use the means-ends heuristic (the goal-subgoaling feature of deliberation in the Soar architecture).
Deliberation has a serial character to it. Almost all proposals for the subdeliberative architecture, however, use parallelism in one way or another. Parallelism can bring a number of advantages. For problems involving similar kinds of information processing over somewhat distributed data (like perception), parallelism can speed up processing. Ultimately, however, additional problem solving in deliberation may be required for some tasks.
2. Situated Cognition
Real cognitive agents are in contact with the surrounding world containing physical objects and other agents. A new school has emerged calling itself the situated cognition movement which argues that traditional AI and cognitive science abstract the cognitive agent too much away from the environment, and place undue emphasis on internal representations. The traditional internal representation view leads, according to the situated cognition perspective, to large amounts of internal representation and complex reasoning using these representations. Real agents simply use their sensory and motor systems to explore the world and pick out the information needed, and get by with much smaller amounts of internal representation processing. At the minimum, situated cognition is a proposal against excessive "intellection." In this sense, we can simply view this movement as making different proposals about what and how much needs to be represented internally. The situated cognition perspective clearly rejects the former view with respect to internal (sub-deliberative) processes, but accepts the fact that deliberation does contain and use knowledge. Thus the Knowledge Level description could be useful to describe the content of the agent's deliberation.
V. Concluding Remarks
We started by asking how far intelligence or cognition can be separated from mental phenomena in general. We suggested that the problem of an architecture for cognition is not really well-posed, since, depending upon what aspects of the behavior of biological agents are included in the functional specification, there can be different constraints on the architecture. We reviewed a number of issues and proposals relevant to cognitive architectures. Not only are there many levels each explaining some aspect of cognition and mentality, but the levels interact even in relatively simple cognitive phenomena.
Reply
#5
[attachment=3484]

ARTIFICIAL INTELLIGENCE
Architecture of Intelligence
Abstract
We start by making a distinction between mind and cognition, and by positing that cognition is an aspect of mind. As a working hypothesis we propose a Separability Hypothesis, which posits that we can factor off an architecture for cognition from a more general architecture for mind, thus avoiding a number of philosophical objections that have been raised against the "Strong AI" hypothesis. We argue, however, that the search for a single architectural level which will explain all the interesting phenomena of cognition is likely to be futile: there are a number of levels which interact, unlike in the computer model, and this interaction makes the explanation of even relatively simple cognitive phenomena in terms of one level quite incomplete.
I. Dimensions for Thinking About Thinking
A major problem in the study of intelligence and cognition is the range of (often implicit) assumptions about what phenomena these terms are meant to cover. Are we just talking about cognition as having and using knowledge, or are we also talking about other mental states such as emotions and subjective awareness? Are we talking about intelligence as an abstract set of capacities, or as a set of biological mechanisms and phenomena? These two questions set up two dimensions of discussion about intelligence. After we discuss these dimensions, we will discuss information processing, representation, and cognitive architectures.
A. Dimension 1: Is intelligence separable from other mental phenomena?
When people think of intelligence and cognition, they often think of an agent being in some knowledge state, that is, having thoughts and beliefs. They also think of the underlying process of cognition as something that changes knowledge states. Since knowledge states are particular types of information states, the underlying process is thought of as information processing. However, besides these knowledge states, mental phenomena also include such things as emotional states and subjective consciousness. Under what conditions can these other mental properties also be attributed to artifacts to which we attribute knowledge states? Is intelligence separable from these other mental phenomena?
It is possible that intelligence can be explained or simulated without necessarily explaining or simulating other aspects of mind. A somewhat formal way of putting this Separability Hypothesis is that the knowledge state transformation account can be factored off as a homomorphism of the mental process account. That is: if the mental process can be seen as a sequence of transformations M1 --> M2 --> ..., where Mi is the complete mental state, and the transformation function (the function that is responsible for state changes) is F, then a subprocess K1 --> K2 --> ... can be identified such that each Ki is a knowledge state and a component of the corresponding Mi, the transformation function is f, and f is some kind of homomorphism of F. A study of intelligence alone can restrict itself to a characterization of the K's and f, without producing accounts of the M's and F. If cognition is in fact separable in this sense, we can in principle design machines that implement f and whose states are interpretable as K's. We can call such machines cognitive agents, and attribute intelligence to them. However, the states of such machines are not necessarily interpretable as complete M's, and thus they may be denied other attributes of mental states.
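In symbols, the factoring claim can be sketched as follows; the projection map pi is our own notation for "the knowledge-state component of a mental state", not something introduced in the text:

```latex
% Sketch of the Separability Hypothesis (the projection \pi is our notation).
% \pi extracts the knowledge-state component K_i from the complete mental state M_i.
\[
  K_i = \pi(M_i), \qquad M_{i+1} = F(M_i), \qquad K_{i+1} = f(K_i),
\]
% Factoring (homomorphism) condition: transforming and then projecting agrees with
% projecting and then transforming, for every reachable mental state M.
\[
  \pi\bigl(F(M)\bigr) = f\bigl(\pi(M)\bigr).
\]
```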
B. Dimension 2: Functional versus Biological
The second dimension in discussions about intelligence involves the extent to which we need to be tied to biology for understanding intelligence. Can intelligence be characterized abstractly as a functional capability which just happens to be realized more or less well by some biological organisms? If it can, then study of biological brains, of human psychology, or of the phenomenology of human consciousness is not logically necessary for a theory of cognition and intelligence, just as enquiries into the relevant capabilities of biological organisms are not needed for the abstract study of logic and arithmetic or for the theory of flight. Of course, we may learn something from biology about how to practically implement intelligent systems, but we may feel quite free to substitute non-biological (both in the sense of architectures which are not brain-like and in the sense of being unconstrained by considerations of human psychology) approaches for all or part of our implementation. Whether intelligence can be characterized abstractly as a functional capability surely depends upon what phenomena we want to include in defining the functional capability, as we discussed. We might have different constraints on a definition that needed to include emotion and subjective states than one that only included knowledge states. Clearly, the enterprise of AI deeply depends upon this functional view being true at some level, but whether that level is abstract logical representations as in some branches of AI, Darwinian neural group selections as proposed by Edelman, something intermediate, or something physicalist is still an open question.
III. Architectures for Intelligence
We now move to a discussion of architectural proposals within the information processing perspective. Our goal is to try to place the multiplicity of proposals into perspective. As we review various proposals, we will present some judgements of our own about relevant issues. But first, we need to review the notion of an architecture and make some additional distinctions.
A. Form and Content Issues in Architectures
In computer science, a programming language corresponds to a virtual architecture. A specific program in that language describes a particular (virtual) machine, which then responds to various inputs in ways defined by the program. The architecture is thus what Newell calls the fixed structure of the information processor that is being analyzed, and the program specifies a variable structure within this architecture. We can regard the architecture as the form and the program as the content, which together fully instantiate a particular information processing machine. We can extend these intuitions to types of machines which are different from computers. For example, the connectionist architecture can be abstractly specified as the set {{N}, {nI}, {nO}, {zi}, {wij}}, where {N} is a set of nodes, {nI} and {nO} are subsets of {N} called input and output nodes respectively, {zi} are the functions computed by the nodes, and {wij} is the set of weights between nodes. A particular connectionist machine is then instantiated by the "program" that specifies values for all these variables.
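As a concrete illustration of the form/content split, here is a minimal sketch in our own toy code (not any particular connectionist system): the five-tuple fixes the architecture (the form), while the particular weights and node functions play the role of the "program" (the content).

```python
import math

# Form: the connectionist architecture as the tuple ({N}, {nI}, {nO}, {zi}, {wij}).
N = ["a", "b", "h", "o"]              # the set of nodes
n_in, n_out = ["a", "b"], ["o"]       # input and output subsets of N
z = {n: (lambda x: 1.0 / (1.0 + math.exp(-x))) for n in N}   # node functions zi
w = {("a", "h"): 0.7, ("b", "h"): -0.4, ("h", "o"): 1.2}     # weights wij (the "content")

def run(inputs):
    """Propagate activations from the input nodes to the output nodes (one forward pass)."""
    act = dict(inputs)                                # clamp the input node activations
    for node in ["h", "o"]:                           # a fixed feed-forward ordering
        net = sum(act[src] * wt for (src, dst), wt in w.items() if dst == node)
        act[node] = z[node](net)
    return {n: act[n] for n in n_out}

print(run({"a": 1.0, "b": 0.0}))
```

Changing w or z instantiates a different particular machine; the tuple structure itself is the fixed architecture.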
We have discussed the prospects for separating intelligence (a knowledge state process) from other mental phenomena, and also the degree to which various theories of intelligence and cognition balance fidelity to biology against functionalism. We have discussed the sense in which alternatives such as logic, decision tree algorithms, and connectionism are all alternative languages in which to couch an information processing account of cognitive phenomena, and what it means to take a Knowledge Level stance towards cognitive phenomena. We have further discussed the distinction between form and content theories in AI. We are now ready to give an overview of the issues in cognitive architectures. We will assume that the reader is already familiar in some general way with the proposals that we discuss. Our goal is to place these ideas in perspective.
B. Intelligence as Just Computation
Until recently the dominant paradigm for thinking about information processing has been the Turing machine framework, or what has been called the discrete symbol system approach. Information processing theories are formulated as algorithms operating on data structures. In fact AI was launched as a field when Turing proposed in a famous paper that thinking was computation of this type (the term "artificial intelligence" itself was coined later) . Natural questions in this framework would be whether the set of computations that underlie thinking is a subset of Turing-computable functions, and if so how the properties of the subset should be characterized.
Most AI research consists of algorithms for specific problems that are associated with intelligence when humans perform them. Algorithms for diagnosis, design, planning, etc., are proposed because these tasks are seen as important for an intelligent agent. But as a rule no effort is made to relate the algorithm for the specific task to a general architecture for intelligence. While such algorithms are useful as technologies, and serve to make the point that several tasks that appear to require intelligence can be done by certain classes of machines, they do not give much insight into intelligence in general.
C. Architectures for Deliberation
Historically most of the intuitions in AI about intelligence have come from introspections about the relationships between conscious thoughts. We are aware of having thoughts which often follow one after another. These thoughts are mostly couched in the medium of natural language, although sometimes thoughts include mental images as well. When people are thinking for a purpose, say for problem solving, there is a sense of directing thoughts, choosing some, rejecting others, and focusing them towards the goal. Activity of this type has been called "deliberation." Deliberation, for humans, is a coherent goal-directed activity, lasting over several seconds or longer. For many people thinking is the act of deliberating in this sense. We can contrast activities in this time span with other cognitive phenomena, which, in humans, take under a few hundred milliseconds, such as real-time natural language understanding and generation, visual perception, being reminded of things, and so on. These short time span phenomena are handled by what we will call the subdeliberative architecture, as we will discuss later.
Researchers have proposed different kinds of deliberative architectures, depending upon which kind of pattern among conscious thoughts struck them. Two groups of proposals about such patterns have been influential in AI theory-making: the reasoning view and the goal-subgoal view.
1. Deliberation as Reasoning
People have for a long time been struck by logical relations between thoughts and have made the distinction between rational and irrational thoughts. Remember that Boole's book on logic was titled "Laws of Thought." Thoughts often have a logical relation between them: we think thoughts A and B, then thought C, where C follows from A and B. In AI, this view has given rise to an idealization of intelligence as rational thought, and consequently to the view that the appropriate architecture is one whose behavior is governed by rules of logic. In AI, McCarthy is most closely identified with the logic approach, and [McCarthy and Hayes, 1969] is considered a clear early statement of some of the issues in the use of logic for building an intelligent machine.
Researchers in AI disagree about how to make machines which display this kind of rationality. One group proposes that the ideal thought machine is a logic machine, one whose architecture has logical rules of inference as its primitive operators. These operators work on a storehouse of knowledge represented in a logical formalism and generate additional thoughts. For example, the Japanese Fifth Generation project came up with computer architectures whose performance was measured in (millions of) inferences per second. The other group believes that the architecture itself (i.e., the mechanism that generates thoughts) is not a logic machine, but one which generates plausible, though not necessarily correct, thoughts; knowledge of correct logical patterns is then used to make sure that the conclusion is appropriate.
Historically rationality was characterized by the rules of deduction, but in AI, the notion is being broadened to include a host of non-deductive rules under the broad umbrella of "non-monotonic logic" [McCarthy, 1980] or "default reasoning," to capture various plausible reasoning rules. There is considerable difference of opinion about whether such rules exist in a domain-independent way as in the case of deduction, and how large a set of rules would be required to capture all plausible reasoning behaviors. If the number of rules is very large, or if they are context-dependent in complicated ways, then logic architectures would become less practical.
At any point in the operation of the architecture, many inference rules might be applied to a situation and many inferences drawn. This brings up the control issue in logic architectures, i.e., decisions about which inference rule should be applied when. Logic itself provides no theory of control. The application of the rule is guaranteed, in the logic framework, to produce a correct thought, but whether it is relevant to the goal is decided by considerations external to logic. Control tends to be task-specific, i.e., different types of tasks call for different strategies. These strategies have to be explicitly programmed in the logic framework as additional knowledge.
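A toy sketch of this point, in our own illustrative code rather than any production logic architecture: both rules below are sound, so logic alone does not say which should fire first; the ordering comes from an extra, task-specific control policy programmed in as additional knowledge.

```python
# Toy forward-chaining engine. The rules are the "logic"; the key used to pick which
# applicable rule fires next is the extra, task-specific control knowledge.
facts = {"has_fever", "has_rash"}
rules = [
    # (name, premises, conclusion, priority used only for control)
    ("r1", {"has_fever", "has_rash"}, "suspect_measles", 1),
    ("r2", {"has_fever"}, "suspect_flu", 2),
]

def forward_chain(facts, rules):
    derived = set(facts)
    while True:
        applicable = [r for r in rules if r[1] <= derived and r[2] not in derived]
        if not applicable:
            return derived
        # Control strategy (external to the logic): prefer the higher-priority rule.
        name, premises, conclusion, _ = min(applicable, key=lambda r: r[3])
        derived.add(conclusion)

print(forward_chain(facts, rules))
```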
2. Deliberation as Goal-Subgoaling
An alternate view of deliberation is inspired by another perceived relation between thoughts and provides a basic mechanism for control as part of the architecture. Thoughts are often linked by means of a goal-subgoal relation. For example, you may have a thought about wanting to go to New Delhi, then you find yourself having thoughts about taking trains and airplanes, and about which is better, then you might think of making reservations and so on. Newell and Simon [1972] have argued that this relation between thoughts, the fact that goal thoughts spawn subgoal thoughts recursively until the subgoals are solved and eventually the goals are solved, is the essence of the mechanism of intelligence. More than one subgoal may be spawned, and so backtracking from subgoals that didn't work out is generally necessary. Deliberation thus looks like search in a problem space. Setting up the alternatives and exploring them is made possible by the knowledge that the agent has. In the travel example above, the agent had to have knowledge about different possible ways to get to New Delhi, and knowledge about how to make a choice between alternatives. A long term memory is generally proposed which holds the knowledge and from which knowledge relevant to a goal is brought to play during deliberation. This analysis suggests an architecture for deliberation that retrieves relevant knowledge, sets up a set of alternatives to explore (the problem space), explores it, sets up subgoals, etc.
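A rough sketch of the goal-subgoaling idea, using the New Delhi travel example in our own toy code (this is an illustration of the mechanism, not Soar itself): a goal either matches knowledge that achieves it directly or spawns subgoals that are tried in turn, backtracking when a branch fails.

```python
# Toy goal-subgoal search. 'methods' maps a goal to alternative decompositions;
# 'primitive' goals can be achieved directly by the agent.
methods = {
    "reach_new_delhi": [["book_train"], ["book_flight"]],
    "book_train": [["reserve_berth"]],                # this branch will fail: no berths
    "book_flight": [["have_funds", "reserve_seat"]],
}
primitive = {"have_funds", "reserve_seat"}

def solve(goal):
    """Return a list of primitive steps achieving goal, or None if it cannot be solved."""
    if goal in primitive:
        return [goal]
    for alternative in methods.get(goal, []):         # consider the alternatives in turn
        plan = []
        for subgoal in alternative:                   # a goal spawns subgoals recursively
            subplan = solve(subgoal)
            if subplan is None:                       # subgoal failed: abandon this branch
                plan = None
                break
            plan += subplan
        if plan is not None:
            return plan
    return None                                       # backtrack to the caller

print(solve("reach_new_delhi"))   # the train branch fails, so: ['have_funds', 'reserve_seat']
```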
The most recent version of an architecture for deliberation in the goal-subgoal framework is Soar [Newell, 1990]. Soar has two important attributes. The first is that any difficulty it has in solving any subgoal simply results in the setting up of another subgoal, and knowledge from long term memory is brought to bear in its solution. It might be remembered that Newell's definition of intelligence is the ability to realize the knowledge level potential of an agent. Deliberation and goal-subgoaling are intended to capture that capability: any piece of knowledge in long term memory is available, if it is relevant, for any goal. Repeated subgoaling will bring that knowledge to deliberation. The second attribute of Soar is that it "caches" its successes in problem solving in its long term memory. The next time there is a similar goal, that cached knowledge can be directly used, instead of searching again in the corresponding problem space.
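The caching attribute can be approximated very loosely by memoizing solved goals, so that a later, similar goal is answered by lookup ("recognition") rather than renewed search. This is only an analogy to chunking in our own toy code, not Soar's actual mechanism; the stand-in `deliberate` function and its result are hypothetical.

```python
# Loose analogy to Soar-style chunking: results of deliberation are cached so that a
# later, similar goal is answered by recognition (a lookup) instead of renewed search.
cache = {}

def deliberate(goal):
    # Stand-in for an expensive problem-space search (hypothetical result).
    return ["have_funds", "reserve_seat"]

def achieve(goal):
    if goal not in cache:                 # cache miss: fall back to deliberation
        cache[goal] = deliberate(goal)
    return cache[goal]                    # cache hit: answered from memory

achieve("reach_new_delhi")   # first call searches and caches
achieve("reach_new_delhi")   # second call is a pure lookup
```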
This kind of deliberative architecture confers on the agent the potential for rationality in two ways. First, with the right kind of knowledge, each goal results in plausible and relevant subgoals being set up. Second, "logical rules" can be used to verify that the proposed solutions to subgoals are indeed correct. But such rules of logic are used as pieces of knowledge rather than as operators of the architecture itself. Because of this, the verification rules can be context- and domain-dependent.
One of the results of this form of deliberation is the construction of special purpose algorithms or methods for specific problems. These algorithms can be placed in an external computational medium, and as soon as a subgoal arises that such a method or algorithm can solve, the external medium can solve it and return the results. For example, during design, an engineer might set up the subgoal of computing the maximum stress in a truss, and invoke a finite element method running on a computer. The deliberative engine can thus create and invoke computational algorithms. The goal-subgoaling architecture provides a natural way to integrate external algorithms.
In the Soar view, long term memory is just an associative memory. It has the capability to "recognize" a situation and retrieve the relevant pieces of knowledge. Because of the learning capability of the architecture, each episode of problem solving gives rise to continuous improvement. As a problem comes along, some subtasks are solved by external computational architectures which implement special purpose algorithms, others are directly solved by compiled knowledge in memory, and yet others are solved by additional deliberation. This cycle makes the overall system increasingly powerful. Eventually, most routine problems, including real-time understanding and generation of natural language, are solved by recognition. (Recent work by Patten [Patten et al., 1992] on the use of compiled knowledge in natural language understanding is compatible with this view.)
Deliberation seems to be a source of great power in humans. Why isn't recognition enough? As Newell points out, the particular advantage of deliberation is distal access to, and combination of, knowledge at run-time in a goal-specific way. In the deliberative machine, temporary connections are created between pieces of knowledge that are not hard-coded, and that gives it the ability to realize the knowledge level potential more fully. A recognition architecture uses knowledge less effectively: if the connections are not there as part of the memory element that controls recognition, a piece of knowledge, though potentially relevant, will not be utilized in the satisfaction of a goal.
As an architecture for deliberation, the goal-subgoal view seems to us closer to the mark than the reasoning view. As we have argued elsewhere [Chandrasekaran, 1991], logic seems more appropriate for justification of conclusions and as the framework for the semantics of representations than for the generative architecture.
AI theories of deliberation give central importance to human-level problem solving and reasoning. Any continuity with higher animal cognition or brain structure is at the level of the recognition architecture of memory, about which this view says little other than that it is a recognition memory. For supporting deliberation at the human level, long term memory should be capable of storing and generating knowledge with the full range of ontological distinctions that human language has.
3. Is the Search View of Deliberation Too Narrow?
A criticism of this picture of deliberation as a search architecture is that it is based on a somewhat narrow view of the function of cognition. It is worth reviewing this argument briefly.
Suppose a Martian watches a human in the act of multiplying numbers. The human, during this task, is executing some multiplication algorithm, i.e., appears to be a multiplication machine. The Martian might well return to his superiors and report that the human cognitive architecture is a multiplication machine. We, however, know that the multiplication architecture is a fleeting, evanescent virtual architecture that emerged as an interaction between the goal (multiplication) and the procedural knowledge of the human. With a different goal, the human might behave like a different machine. It would be awkward to imagine cognition to be a collection of different architectures for each such task; in fact, cognition is very plastic and is able to emulate various virtual machines as needed.
Is the problem space search engine that has been proposed for the deliberative architecture also an evanescent machine? One argument against this is that it is not intended for a narrow goal like multiplication, but for all kinds of goals. Thus it is not fleeting, but always operational.
Or is it? If the sole purpose of the cognitive architecture is goal achievement (or "problem solving"), then it is reasonable to assume that the architecture would be hard-wired for this purpose. What, however, if goal achievement is only one of the functions of the cognitive architecture, common though it might be? At least in humans, the same architecture is used to daydream, to just take in the external world and enjoy it, and so on. The search behavior that we need for problem solving can come about simply by virtue of the knowledge that is made available to the agent's deliberation from long term memory. This knowledge is either a solution to the problem, or a set of alternatives to consider. The agent, faced with the goal and a set of alternatives, simply considers the alternatives in turn, and when additional subgoals are set, repeats the process of seeking more knowledge. In fact, this kind of search behavior happens not only with individuals, but with organizations. They too explore alternatives, yet we don't see a need for a fixed search engine to explain organizational behavior. Deliberation of course has to have the right sort of properties to be able to support search. Certainly adequate working memory needs to be there, and probably there are other constraints on deliberation. However, the architecture for deliberation does not have to be exclusively a search architecture. Just as the multiplication machine was an emergent architecture when the agent was faced with that task, the search engine could be the corresponding emergent architecture for an agent faced with a goal and equipped with knowledge about what alternatives to consider. In fact, a number of other such emergent architectures built on top of the deliberative architecture have been studied earlier in our work on Generic Task architectures [1986]. These architectures were intended to capture the needs of specific classes of goals (such as classification).
The above argument is not meant to deemphasize the importance of problem space search for goal achievement, but to resist the identification of the architecture of the conscious processor with one exclusively intended for search. The problem space architecture is still important as the virtual architecture for goal achieving, since goal achievement is a common, though not the only, function of cognition.
Of course, that cognition goes beyond just goal achievement is a statement about human cognition. This is to take a biological rather than a functional standard for the adequacy of an architectural proposal. If we take a functional approach and seek to specify an architecture for a function called intelligence which itself is defined in terms of goal achievement, then a deliberative search architecture working with a long term memory of knowledge certainly has many attractive properties for this function, as we have discussed.
D. Subdeliberative Architectures
We have made a distinction between cognitive phenomena that take less than a few hundred milliseconds for completion and those that evolve over longer time spans. We discussed proposals for the deliberative architecture to account for phenomena taking longer time spans. Some form of subdeliberative architecture is then responsible for phenomena that occur in very short time spans in humans. In deliberation, we have access to a number of intermediate states in problem solving. After you have finished planning the New Delhi trip, I can ask you what alternatives you considered, why you rejected taking the train, and so on, and your answers will generally be reliable. You were probably aware of rejecting the train option because you reasoned that it would take too long. On the other hand, we generally have no clue how the subdeliberative architecture came to any of its conclusions.
Many people in AI and cognitive science feel that the emphasis on complex problem solving as the door to understanding intelligence is misplaced, and that theories that emphasize rational problem solving only account for very special cases and do not account for the general cognitive skills that are present in ordinary people. These researchers focus almost completely on the nature of the subdeliberative architecture. There is also a belief that the subdeliberative architecture is directly reflected in the structure of the neural machinery in the brain. Thus, some of the proposals for the subdeliberative architecture claim to be inspired by the structure of the brain and claim a biological basis in that sense.
1. Alternative Proposals
The various proposals differ along a number of dimensions: what kinds of tasks the architecture performs, degree of parallelism, whether it is an information processing architecture at all, and, when it is taken to be an information processing architecture, whether it is a symbolic one or some other type.
With respect to the kind of tasks the architecture performs, we mentioned Newell's view that it is just a recognition architecture. Any smartness it possesses is a result of good abstractions and good indexing, but architecturally there is nothing particularly complicated. In fact, the good abstractions and indexing were themselves the result of the discoveries of deliberation during problem space search. The real solution to the problem of memory, for Newell, is to get chunking done right: the proper level of abstraction, labeling and indexing is all done at the time of chunking. In contrast to the recognition view are proposals that see relatively complex problem solving activities going on in subdeliberative cognition. Cognition in this picture is a communicating collection of modular agents, each of which is simple, but capable of some degree of problem solving. For example, they can use the means-ends heuristic (the goal-subgoaling feature of deliberation in the Soar architecture).
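A minimal sketch of the means-ends idea just mentioned, in our own illustrative code (the operators and state literals are invented for the example): pick an operator that reduces the difference between the current state and the goal, and subgoal on whatever preconditions remain unmet.

```python
# Toy means-ends analysis: choose an operator whose effects reduce the difference
# between the current state and the goal, subgoaling on its unmet preconditions.
operators = [
    # (name, preconditions, additions)
    ("take_taxi_to_airport", {"have_cash"}, {"at_airport"}),
    ("fly", {"at_airport", "have_ticket"}, {"at_new_delhi"}),
    ("buy_ticket", {"have_cash"}, {"have_ticket"}),
]

def achieve(goal, state, depth=0):
    """Return an updated state in which `goal` holds, applying operators as needed."""
    if goal in state:
        return state
    for name, pre, adds in operators:
        if goal in adds:                       # this operator reduces the difference
            for p in pre:                      # subgoal on each unmet precondition
                state = achieve(p, state, depth + 1)
            print("  " * depth + name)         # show the order of operator application
            return state | adds
    raise ValueError(f"no operator achieves {goal}")

achieve("at_new_delhi", {"have_cash"})
```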
Deliberation has a serial character to it. Almost all proposals for the subdeliberative architecture, however, use parallelism in one way or another. Parallelism can bring a number of advantages. For problems involving similar kinds of information processing over somewhat distributed data (like perception), parallelism can speed up processing. Ultimately, however, additional problem solving in deliberation may be required for some tasks.
2. Situated Cognition
Real cognitive agents are in contact with the surrounding world containing physical objects and other agents. A new school has emerged, calling itself the situated cognition movement, which argues that traditional AI and cognitive science abstract the cognitive agent too much away from the environment and place undue emphasis on internal representations. The traditional view leads, according to the situated cognition perspective, to large amounts of internal representation and complex reasoning using these representations. Real agents simply use their sensory and motor systems to explore the world and pick out the information needed, and get by with much smaller amounts of internal representation and processing. At the minimum, situated cognition is a proposal against excessive "intellection." In this sense, we can simply view this movement as making different proposals about what and how much needs to be represented internally. The situated cognition perspective clearly rejects the representation-heavy view with respect to internal (subdeliberative) processes, but accepts that deliberation does contain and use knowledge. Thus the Knowledge Level description could be useful for describing the content of an agent's deliberation.
V. Concluding Remarks
We started by asking how far intelligence or cognition can be separated from mental phenomena in general. We suggested that the problem of an architecture for cognition is not really well-posed, since, depending upon what aspects of the behavior of biological agents are included in the functional specification, there can be different constraints on the architecture. We reviewed a number of issues and proposals relevant to cognitive architectures. Not only are there many levels each explaining some aspect of cognition and mentality, but the levels interact even in relatively simple cognitive phenomena.
Please read http://studentbank.in/report-artificial-...ull-report and http://studentbank.in/report-artificial-...port--8867 for more information about Artificial Intelligence.
Reply
#6
[attachment=4053]

INTRODUCTION

Artificial Intelligence (AI) is the area of computer science focusing on creating machines that can engage in behaviors that humans consider intelligent.
The ability to create intelligent machines has intrigued humans since ancient times, and today with the advent of the computer and 50 years of research into AI programming techniques, the dream of smart machines is becoming a reality.
Researchers are creating systems which can mimic human thought, understand speech, beat the best human chess player, and countless other feats never before possible. Find out how the military is applying AI logic to its hi-tech systems, and how in the near future Artificial Intelligence may impact our lives.


THE BEGINNINGS OF AI:

Although the computer provided the technology necessary for AI, it was not until the early 1950's that the link between human intelligence and machines was really observed.
Norbert Wiener was one of the first Americans to make observations on the principle of feedback theory.
The most familiar example of feedback theory is the thermostat: It controls the temperature of an environment by gathering the actual temperature of the house, comparing it to the desired temperature, and responding by turning the heat up or down.
What was so important about his research into feedback loops was that Wiener theorized that all intelligent behavior was the result of feedback mechanisms, mechanisms that could possibly be simulated by machines. This discovery influenced much of the early development of AI.
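The thermostat example above can be captured in a few lines. This is a generic sketch of a feedback loop, not the control code of any specific device; the function name and parameters are our own.

```python
# A minimal feedback loop in the spirit of the thermostat example: sense the actual
# temperature, compare it with the desired one, and act to reduce the difference.
def thermostat_step(actual_temp, desired_temp, tolerance=0.5):
    """Return the corrective action for one cycle of the feedback loop."""
    error = desired_temp - actual_temp        # the feedback signal
    if error > tolerance:
        return "heat_on"
    if error < -tolerance:
        return "heat_off"
    return "hold"

# One simulated pass: the room is colder than desired, so the controller turns the heat on.
print(thermostat_step(actual_temp=18.0, desired_temp=21.0))   # -> heat_on
```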


THE BEGINNINGS OF AI:

In 1956 John McCarthy, regarded as the father of AI, organized a conference to draw on the talent and expertise of others interested in machine intelligence for a month of brainstorming.
He invited them to Dartmouth College in New Hampshire for "The Dartmouth Summer Research Project on Artificial Intelligence." From that point on, because of McCarthy, the field would be known as Artificial Intelligence.
Although not a huge success, the Dartmouth conference did bring together the founders of AI and served to lay the groundwork for the future of AI research.



ARTIFICIAL INTELLIGENCE: PAST, PRESENT AND FUTURE

Provost Barry Scherr, the Mandel Family Professor of Russian, adds, "The success of the 1956 workshop was the spirit it engendered. The continuing accomplishments in the years since have proven that the field of AI remains vital and filled with promise. I hope that the AI@50 participants enjoyed recalling the early years of AI at the same time that they were helping to develop a road map for future avenues of study."
In addition to hearing from some of AI's founders about the beginning of the field, the conference participants delved into topics such as the future model of thinking, the future of language and cognition, AI and games, and the future of reasoning.



ARTIFICIAL INTELLIGENCE: PAST, PRESENT AND FUTURE

Five of the attendees of the 1956 Dartmouth Summer Research Project on Artificial Intelligence reunited at the July AI@50 conference. From left: Trenchard More, John McCarthy, Marvin Minsky, Oliver Selfridge, and Ray Solomonoff. (Photo by Joseph Mehling '69)
Reply
#7
[attachment=5617]
ARTIFICIAL INTELLIGENCE



Abstract
Introduction
Definitions
History
Applications
Achievements
Future
Conclusion
References

ABSTRACT

Artificial Intelligence (AI) is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable. While no consensual definition of AI exists, it is broadly characterized as the study of computations that allow for perception, reason and action. This paper presents an introduction to Artificial Intelligence and examines its definitions, history, applications, growth and achievements.
INTRODUCTION

Artificial Intelligence (AI) is the branch of computer science which deals with the intelligence of machines, where an intelligent agent is a system that takes actions which maximize its chances of success. It is the study of ideas which enable computers to do the things that make people seem intelligent. The central principles of AI include reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects. It is the science and engineering of making intelligent machines, especially intelligent computer programs.
ARTIFICIAL INTELLIGENCE METHODS
AI methods can be divided into two broad categories: (a) symbolic AI, which focuses on the development of knowledge-based systems (KBS); and (b) computational intelligence, which includes such methods as neural networks (NN), fuzzy systems (FS), and evolutionary computing. A very brief introduction to these AI methods is given below, and each method is discussed in more detail in later sections of this report.
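As a rough illustration of the two categories, in our own toy code rather than any particular KBS or NN library: a symbolic method manipulates explicit, human-readable rules, while a computational-intelligence method adjusts numeric parameters from data.

```python
# (a) Symbolic AI: an explicit rule applied to known facts.
facts = {"temperature_high", "pressure_low"}
if {"temperature_high", "pressure_low"} <= facts:
    print("rule fired: predict storm")

# (b) Computational intelligence: a single perceptron nudging numeric weights
#     toward toy data instead of manipulating explicit rules.
weights, bias, lr = [0.0, 0.0], 0.0, 0.1
samples = [([1.0, 0.0], 1), ([0.0, 1.0], 0)]     # (features, label) pairs
for _ in range(10):
    for x, target in samples:
        pred = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
        error = target - pred
        weights = [w + lr * error * xi for w, xi in zip(weights, x)]
        bias += lr * error
print("learned weights:", weights, "bias:", bias)
```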
Reply
#8
presented by:
PANKAJ RANA

[attachment=9471]
What is AI?
Intelligence

The ability to learn and to cope.
The ability to contemplate, think, and reason.
Synonyms:
Brain, brainpower, mentality, mother wit, sense, wit
Related:
Acumen, discernment, insight, judgment, perspicacity, sagacity, wisdom
Weak and Strong AI Claims
Weak AI:
Machines can be made to act as if they were intelligent.
Strong AI:
Machines that act intelligently have real, conscious minds.
How do we classify research as AI?
But to what extent should these systems replace human experts?
In some fields, such as forecasting weather or finding bugs in computer software, expert systems are sometimes more accurate than humans.
For other fields, such as medicine, computers aiding doctors will be beneficial, but the human doctor should not be replaced.
Expert systems have the power and range to aid and benefit, and in some cases replace, humans; and computerized experts, if used with discretion, will benefit humankind.
Conclusion
These approaches have been applied to a variety of programs. As we progress in the development of Artificial Intelligence, other theories will be available, in addition to building on today's methods.
Reply
#9

To get information about the topic "Game Playing in Artificial Intelligence" (full report, ppt) and related topics, refer to the links below:

http://studentbank.in/report-game-playin...telligence

http://studentbank.in/report-game-playin...7#pid72357

http://studentbank.in/report-artificial-...ort?page=2

http://studentbank.in/report-artificial-...port--8867

http://studentbank.in/report-artificial-...ort?page=1
Reply
#10

to get information about the topic"artificial intelligence"refer the page link bellow

http://studentbank.in/report-artificial-...ull-report

http://studentbank.in/report-artificial-...ort?page=2

http://studentbank.in/report-artificial-...ort?page=3

http://studentbank.in/report-artificial-...ort?page=4
http://studentbank.in/report-artificial-...ort?page=5
Reply
