
AUTONOMIC COMPUTING
A SEMINAR REPORT
Submitted by
Pushkar Kumar
in partial fulfillment of the requirement for the Degree
of
Bachelor of Technology (B.Tech)
IN
COMPUTER SCIENCE AND ENGINEERING
SCHOOL OF ENGINEERING
COCHIN UNIVERSITY OF SCIENCE AND TECHNOLOGY
KOCHI- 682022
SEPTEMBER 2008

DIVISION OF COMPUTER SCIENCE & ENGINEERING
SCHOOL OF ENGINEERING
COCHIN UNIVERSITY OF SCIENCE AND TECHNOLOGY
KOCHI-682022
Certificate
Certified that this is a bonafide record of the seminar report titled
"AUTONOMIC COMPUTING"
done by
Pushkar Kumar
of VIIth semester Computer Science & Engineering in the year
2008 in partial fulfillment of requirement for the Degree of
Bachelor of Technology in Computer Science & Engineering of
Cochin University of Science and Technology.
Mrs. Sheena S.                         Dr. David Peter S.
Seminar Guide                          Head of Division
Date

Acknowledgement
At the outset, I thank the Lord Almighty for the grace, strength and hope
to make my endeavor a success.
I also express my gratitude to Dr. David Peter, Head
of the Department, for providing me with adequate facilities, ways and
means by which I was able to complete this seminar. I express my sincere
gratitude to him for his constant support and valuable suggestions without
which the successful completion of this seminar would not have been
possible.
I thank Ms. Sheena S., my seminar guide, for her
boundless cooperation and help extended for this seminar. I express my
immense pleasure and thankfulness to all the teachers and staff of the
Department of Computer Science and Engineering, CUSAT for their
cooperation and support.
Last but not the least, I thank all others, and especially
my classmates and my family members who in one way or another helped
me in the successful completion of this work.
PUSHKAR KUMAR

ABSTRACT
The increasing scale, complexity, heterogeneity and dynamism of networks,
systems and applications have made our computational and information
infrastructure brittle, unmanageable and insecure. This has necessitated the
investigation of an alternate paradigm for system and application design,
which is based on strategies used by biological systems to deal with similar
challenges - a vision that has been referred to as autonomic computing. The
overarching goal of autonomic computing is to realize computer and
software systems and applications that can manage themselves in accordance
with high-level guidance from humans. Meeting the grand challenges of
autonomic computing requires scientific and technological advances in a
wide variety of fields, as well as new software and system architectures that
support the effective integration of the constituent technologies.

TABLE OF CONTENTS

List of Figures
1 Introduction
1.1 The Complexity Problem
1.2 The Evolution Problem
2 Foundations and Concepts
2.1 The Ubiquitous Control Loop
2.2 Autonomic Elements
2.3 Characteristics of Autonomic Systems
2.4 Policies
2.5 Issues of Trust
2.6 Evolution Rather Than Revolution
3 Analysis and Benefits of Current AC Work
3.1 AC Framework
3.2 Quality Attributes and Architecture Evaluation
3.3 Standards
3.4 Curriculum Development
4 Conclusion
5 References

List of Figures

2.1 Control Loop
2.2 Autonomic Element
2.3 Autonomic Characteristics
2.4 Increasing Autonomic Functionality
3.1 Interface Standards Within an Autonomic Element [Miller 05b]

AUTONOMIC COMPUTING
1. Introduction
Computer systems develop organically. A computer system usually starts as a simple,
clean system intended for a well-defined environment and set of applications. However, in
order to deal with growth and new demands, storage, computing, and networking
components are added to, replaced in, and removed from the system, while new applications
are installed and existing ones are upgraded.
Some changes to the system are intended to enhance its functionality, but result in loss of
performance or other undesired secondary effects. In order to improve performance or
reliability, resources are added or replaced. The particulars of such development cannot
be anticipated; it just happens the way it does.
The autonomic computing effort aims to make systems self-configuring and self-
managing. However, for the most part the focus has been on how to make system
components self-configuring and self-managing. Each such component has its own policy
for how to react to change in its environment.
Autonomic computing is not a new field but rather an amalgamation of selected theories
and practices from several existing areas including control theory, adaptive algorithms,
software agents, robotics, fault-tolerant computing, distributed and real-time systems,
machine learning, human-computer interaction (HCI), artificial intelligence, and many
more. The future of autonomic computing is heavily dependent on the developments and
successes in several other technology arenas that provide an infrastructure for autonomic
computing systems including Web and grid services, architecture platforms such as
service-oriented architecture (SOA), Open Grid Services Architecture (OGSA), and
pervasive and ubiquitous computing.
1.1 The Complexity Problem
The increasing complexity of computing systems is overwhelming the capabilities of
software developers and system administrators who design, evaluate, integrate, and
manage these systems. Today, computing systems include very complex infrastructures
and operate in complex heterogeneous environments. With the proliferation of handheld
devices, the ever-expanding spectrum of users, and the emergence of the information
economy with the advent of the Web, computing vendors have difficulty providing an
infrastructure to address all the needs of users, devices, and applications. SOAs with Web
services as their core technology have solved many problems, but they have also raised
numerous complexity issues. One approach to deal with the business challenges arising
from these complexity problems is to make the systems more self-managed or autonomic.
For a typical information system consisting of an application server, a Web server,
messaging facilities, and layers of middleware and operating systems, the number of
tuning parameters exceeds human comprehension and analytical capabilities. Thus, major
software and system vendors endeavor to create autonomic, dynamic, or self-managing
systems by developing methods, architecture models, middleware, algorithms, and
policies to mitigate the complexity problem. In a 2004 Economist article, Kluth
investigates how other industrial sectors successfully dealt with complexity [Kluth 04].
He and others have argued that for a technology to be truly successful, its complexity has
to disappear. He illustrates his arguments with many examples including the automobile
and electricity markets. Only mechanics were able to operate early automobiles
successfully. In the early 20th century, companies needed a position of vice president of
electricity to deal with power generation and consumption issues. In both cases, the
respective industries managed to reduce the need for human expertise and simplify the
usage of the underlying technology. However, usage simplicity comes with an increased
complexity of the overall system (e.g., what is "under the hood"). Basically, for every
mouse click or return we take out of the user experience, 20 things have to happen in the
software behind the scenes. Given this historical perspective with this predictable path of
technology evolution, maybe there is hope for the information technology sector.
1.2 The Evolution Problem
By attacking the software complexity problem through technology simplification and
automation, autonomic computing also promises to solve selected software evolution
problems. Instrumenting software systems with autonomic technology will allow us to
monitor or verify requirements (functional or nonfunctional) over long periods of time.
For example, self-managing systems will be able to monitor and control the brittleness of
legacy systems, provide automatic updates to evolve installed software, adapt safety-
critical systems without halting them, immunize computers against malware
automatically, facilitate enterprise integration with self-managing integration
mechanisms, document architectural drift by equipping systems with architecture analysis
frameworks, and keep the values of quality attributes within desired ranges.
2 Foundations and Concepts
2.1 The Ubiquitous Control Loop
At the heart of an autonomic system is a control system, which is a combination of
components that act together to maintain actual system attribute values close to desired
specifications. Open-loop control systems (e.g., automatic toasters and alarm clocks) are
those in which the output has no effect on the input. Closed-loop control systems (e.g.,
thermostats or automotive cruise-control systems) are those in which the output has an
effect on the input in such a way as to maintain a desired output value. An autonomic
system embodies one or more closed control loops. A closed-loop system includes some
way to sense changes in the managed element, so corrective action can be taken. The
speed with which a simple closed-loop control system moves to correct its output is
described by its damping ratio and natural frequency. Properties of a control system
include spatial and temporal separability of the controller from the controlled element,
evolvability of the controller, and filtering of the controlled resource.
Fig 2.1: Control Loop
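As a rough illustration of the closed-loop behavior described above, the short Python sketch below models a thermostat-style controller: the measured output is compared against a setpoint and the actuator is switched accordingly. The class, method names and numbers are illustrative assumptions, not material from this report.

```python
# A minimal sketch (assumed names and numbers) of a closed feedback loop:
# the sensed output is compared with a setpoint and the actuator is switched.

class Thermostat:
    """Closed-loop controller: the output feeds back into the next decision."""

    def __init__(self, setpoint: float, tolerance: float = 0.5):
        self.setpoint = setpoint      # desired output value
        self.tolerance = tolerance    # dead band that avoids rapid oscillation
        self.heater_on = False        # actuator state

    def step(self, measured_temp: float) -> bool:
        """One pass of the loop: sense, compare with the setpoint, actuate."""
        if measured_temp < self.setpoint - self.tolerance:
            self.heater_on = True     # output too low: corrective action raises it
        elif measured_temp > self.setpoint + self.tolerance:
            self.heater_on = False    # output too high: corrective action lowers it
        return self.heater_on


if __name__ == "__main__":
    controller = Thermostat(setpoint=21.0)
    for reading in (19.0, 20.8, 21.6, 22.0):
        print(reading, "->", "heat" if controller.step(reading) else "idle")
```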
Numerous engineering products embody open-loop or closed-loop control systems. The
AC community often refers to the human autonomic nervous system (ANS) with its
many control loops as a prototypical example. The ANS monitors and regulates vital
signs such as body temperature, heart rate, blood pressure, pupil dilation, digestion, blood
sugar, breathing rate, immune response, and many more involuntary, reflexive responses
in our bodies. The ANS consists of two separate divisions called the parasympathetic
nervous system, which regulates day-to-day internal processes and behaviors, and the
sympathetic nervous system, which deals with stressful situations. Studying the ANS
might be instructive for the design of autonomic software systems. For example,
physically separating the control loops that deal with normal and abnormal situations
might be a useful design idea for autonomic software systems.
2.2 Autonomic Elements
IBM researchers have established an architectural framework for autonomic systems
[Kephart 03]. An autonomic system consists of a set of autonomic elements that contain
and manage resources and deliver services to humans or other autonomic elements. An
autonomic element consists of one autonomic manager and one or more managed
elements. At the core of an autonomic element is a control loop that integrates the
manager with the managed element. The autonomic manager consists of sensors,
effectors, and a five-component analysis and planning engine as depicted in Figure 2.2. The
monitor observes the sensors, filters the data collected from them, and then stores the
distilled data in the knowledge base. The analysis engine compares the collected data
against the desired sensor values also stored in the knowledge base. The planning engine
devises strategies to correct the trends identified by the analysis engine. The execution
engine finally adjusts parameters of the managed element by means of effectors and
stores the affected values in the knowledge base.
Fig 2.2: Autonomic Element
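A minimal sketch of the monitor-analyze-plan-execute cycle just described might look like the following Python fragment. The managed element's read_sensors and adjust methods, and the dictionary used as a knowledge base, are assumptions made purely for illustration; IBM's architecture defines these interfaces in far more detail.

```python
# Hedged sketch of the monitor-analyze-plan-execute cycle over a shared
# knowledge base. The managed element's read_sensors() and adjust() methods
# are assumptions for illustration, not a standard interface.

class AutonomicManager:
    def __init__(self, managed_element, desired: dict):
        self.managed_element = managed_element
        self.knowledge = {"desired": desired, "observed": {}, "plan": []}

    def monitor(self):
        # Observe the sensors and store the distilled data in the knowledge base.
        self.knowledge["observed"] = self.managed_element.read_sensors()

    def analyze(self):
        # Compare collected data against the desired values in the knowledge base.
        observed = self.knowledge["observed"]
        desired = self.knowledge["desired"]
        return {k: desired[k] - observed.get(k, 0.0)
                for k in desired if observed.get(k, 0.0) != desired[k]}

    def plan(self, deviations: dict):
        # Devise corrective adjustments for the deviations found by the analysis.
        self.knowledge["plan"] = list(deviations.items())

    def execute(self):
        # Adjust parameters of the managed element by means of its effectors.
        for parameter, delta in self.knowledge["plan"]:
            self.managed_element.adjust(parameter, delta)

    def run_once(self):
        self.monitor()
        self.plan(self.analyze())
        self.execute()
```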
An autonomic element manages its own internal state and its interactions with its
environment (i.e., other autonomic elements). An element's internal behavior and its
relationships with other elements are driven by the goals and policies the designers have
built into the system. Autonomic elements can be arranged as strict hierarchies or graphs.
Touch points represent the interface between the autonomic manager and the managed
element. Through touch points, autonomic managers control a managed resource or
another autonomic element. It is imperative that touch points are standardized, so
autonomic managers can manipulate other autonomic elements in a uniform manner. That
is, a single standard manageability interface, as provided by a touch point, can be used to
manage routers, servers, application software, middleware, a Web service, or any other
autonomic element. This is one of the key values of AC: a single manageability interface,
rather than the numerous sorts of manageability interfaces that exist today, to manage
various types of resources [Miller 05e]. Thus, a touch point constitutes a level of
indirection and is the key to adaptability. A manageability interface consists of a sensor
and an effector interface. The sensor interface enables an autonomic manager to retrieve
information from the managed element through the touch point using two interaction
styles:
(1) request-response for solicited (queried) data retrieval and
(2) send-notification for unsolicited (event-driven) data retrieval.
The effector interface enables an autonomic manager to manage the managed element
through the touch point with two interaction types:
(1) perform-operation to control the behavior (e.g., adjust parameters or send commands)
(2) solicit-response to enable call-back functions.
IBM has proposed interface standards for touch points and developed a simulator to aid
the development of autonomic managers. The Touch point Simulator can be used to
simulate different managed elements and resources and to verify standard interface
compliance.
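The following sketch, written with invented method names, shows one way the four interaction styles of a manageability interface could be expressed in Python: query (request-response), subscribe (send-notification), perform_operation, and register_callback (solicit-response). It is only an illustration of the idea, not IBM's standard touch point interface.

```python
# Illustrative sketch of a touch point's manageability interface. The method
# names below are invented for the example and are not IBM's standard.

from typing import Callable, Dict, List

class Touchpoint:
    def __init__(self, resource: Dict[str, object]):
        self._resource = resource                  # state of the managed resource
        self._listeners: List[Callable] = []       # send-notification subscribers
        self._callbacks: Dict[str, Callable] = {}  # solicit-response callbacks

    # --- sensor interface ---
    def query(self, name: str):
        """Request-response: solicited (queried) retrieval of one property."""
        return self._resource.get(name)

    def subscribe(self, listener: Callable[[str, object], None]) -> None:
        """Send-notification: register for unsolicited, event-driven data."""
        self._listeners.append(listener)

    # --- effector interface ---
    def perform_operation(self, name: str, value: object) -> None:
        """Perform-operation: adjust a parameter or send a command."""
        self._resource[name] = value
        for listener in self._listeners:           # notify subscribed managers
            listener(name, value)

    def register_callback(self, topic: str, callback: Callable) -> None:
        """Solicit-response: allow the resource to call back into the manager."""
        self._callbacks[topic] = callback
```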
2.3 Characteristics of Autonomic Systems
An autonomic system can self-configure at runtime to meet changing operating
environments, self-tune to optimize its performance, self-heal when it encounters
unexpected obstacles during its operation, and, of particular current interest, protect
itself from malicious attacks. Research and development teams concentrate on
developing theories, methods, tools, and technology for building self-healing, self-
configuring, self-optimizing, and self-protecting systems, as depicted in Figure 2.3. An
autonomic system can self-manage anything including a single property or multiple
properties.
Fig 2.3: Autonomic Characteristics
An autonomic system has the following characteristics:
• reflexivity: An autonomic system must have detailed knowledge of its
components, current status, capabilities, limits, boundaries, interdependencies
with other systems, and available resources. Moreover, the system must be aware
of its possible configurations and how they affect particular nonfunctional
requirements.
• self-configuring: Self-configuring systems provide increased responsiveness by
adapting to a dynamically changing environment. A self-configuring system must
be able to configure and reconfigure itself under varying and unpredictable
conditions. Varying degrees of end-user involvement should be allowed, from
user-based reconfiguration to automatic reconfiguration based on monitoring and
feedback loops. For example, the user may be given the option of reconfiguring
the system at runtime; alternatively, adaptive algorithms could learn the best
configurations to achieve mandated performance or to service any other desired
functional or nonfunctional requirement. Variability can be accommodated at
design time (e.g., by implementing goal graphs) or at runtime (e.g., by adjusting
parameters). Systems should be designed to provide configurability at a feature
level with capabilities such as separation of concerns, levels of indirection,
integration mechanisms (data and control), scripting layers, plug and play, and
set-up wizards. Adaptive algorithms have to detect and respond to short-term and
long-term trends.
• self-optimizing: Self-optimizing systems provide operational efficiency by tuning
resources and balancing workloads. Such a system will continually monitor and
tune its resources and operations. In general, the system will continually seek to
optimize its operation with respect to a set of prioritized nonfunctional
requirements to meet the ever changing needs of the application environment.
Capabilities such as repartitioning, reclustering, load balancing, and rerouting
must be designed into the system to provide self-optimization. Again, adaptive
algorithms, along with other systems, are needed for monitoring and response.
• self-healing: Self-healing systems provide resiliency by discovering and
preventing disruptions as well as recovering from malfunctions. Such a system
will be able to recover, without loss of data or noticeable delays in processing,
from routine and extraordinary events that might cause some of its parts to
malfunction. Self-recovery means that the system will select, possibly with user
input, an alternative configuration to the one it is currently using and will switch
to that configuration with minimal loss of information or delay.
• self-protecting: Self-protecting systems secure information and resources by
anticipating, detecting, and protecting against attacks. Such a system will be
capable of protecting itself by detecting and counteracting threats through the use
of pattern recognition and other techniques. This capability means that the design
of the system will include an analysis of the vulnerabilities and the inclusion of
protective mechanisms that might be employed when a threat is detected. The
design must provide for capabilities to recognize and handle different kinds of
threats in various contexts more easily, thereby reducing the burden on
administrators.
• adapting: At the core of the complexity problem addressed by the AC initiative is
the problem of evaluating complex tradeoffs to make informed decisions. Most of
the characteristics listed above are founded on the ability of an autonomic system
to monitor its performance and its environment and respond to changes by
switching to a different behavior. At the core of this ability is a control loop.
Sensors observe an activity of a controlled process, a controller component
decides what has to be done, and then the controller component executes the
required operations through a set of actuators. The adaptive mechanisms to be
explored will be inspired by work on machine learning, multi-agent systems, and
control theory.
2.4 Policies
Autonomic elements can function at different levels of abstraction. At the lowest levels,
the capabilities and the interaction range of an autonomic element are limited and hard-
coded. At higher levels, elements pursue more flexible goals specified with policies, and
the relationships among elements are flexible and may evolve. Recently, Kephart and
Walsh proposed a unified framework for AC policies based on the well-understood
notions of states and actions [Kephart 04]. In this framework, a policy will directly or
indirectly cause an action to be taken that transitions the system into a new state. Kephart
and Walsh distinguish three types of AC policies, which correspond to different levels of
abstraction, as follows:
• action policies: An action policy dictates the action that should be taken when the
system is in a given current state. Typically this action takes the form of "IF
(condition) THEN (action)," where the condition specifies either a specific state
or a set of possible states that all satisfy the given condition. Note that the state
that will be reached by taking the given action is not specified explicitly.
Presumably, the author knows which state will be reached upon taking the
recommended action and deems this state more desirable than states that would be
reached via alternative actions. This type of policy is generally necessary to
ensure that the system is exhibiting rational behavior.
• goal policies: Rather than specifying exactly what to do in the current state, goal
policies specify either a single desired state, or one or more criteria that
characterize an entire set of desired states. Implicitly, any member of this set is
equally acceptable. Rather than relying on a human to explicitly encode rational
behavior, as in action policies, the system generates rational behavior itself from
the goal policy. This type of policy permits greater flexibility and frees human
policy makers from the "need to know" low-level details of system function, at
the cost of requiring reasonably sophisticated planning or modeling algorithms.
• utility-function policies: A utility-function policy is an objective function that
expresses the value of each possible state. Utility-function policies generalize goal
policies. Instead of performing a binary classification into desirable versus
undesirable states, they ascribe a real-valued scalar desirability to each state.
Because the most desired state is not specified in advance, it is computed on a
recurrent basis by selecting the state that has the highest utility from the present
collection of feasible states. Utility-function policies provide more fine-grained
and flexible specification of behavior than goal and action policies. In situations
in which multiple goal policies would conflict (i.e., they could not be
simultaneously achieved), utility-function policies allow for unambiguous,
rational decision making by specifying the appropriate tradeoff. On the other
hand, utility-function policies can require policy authors to specify a
multidimensional set of preferences, which may be difficult to elicit; furthermore
they require the use of modeling, optimization, and possibly other algorithms.
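To contrast the three policy types just described, the toy Python sketch below expresses an action policy as an IF-THEN rule, a goal policy as a predicate over states, and a utility-function policy as a real-valued objective maximized over the feasible states. The state variables and constants are invented purely for illustration.

```python
# Toy sketch contrasting the three policy types over an invented state space
# (response time and server count); the numbers carry no real meaning.

def action_policy(state: dict) -> str:
    # Action policy: IF (condition) THEN (action); the resulting state is implicit.
    if state["response_time_ms"] > 200:
        return "add_server"
    return "no_op"

def goal_satisfied(state: dict) -> bool:
    # Goal policy: any state meeting the criterion is equally acceptable;
    # the system itself must plan how to reach such a state.
    return state["response_time_ms"] <= 200

def utility(state: dict) -> float:
    # Utility-function policy: a real-valued desirability for every state,
    # trading off responsiveness against the cost of extra servers.
    return 1000.0 / state["response_time_ms"] - 2.0 * state["servers"]

feasible_states = [
    {"response_time_ms": 250, "servers": 2},
    {"response_time_ms": 180, "servers": 3},
    {"response_time_ms": 120, "servers": 5},
]
best = max(feasible_states, key=utility)   # recomputed as feasible states change
print(action_policy(feasible_states[0]), goal_satisfied(best), best)
```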
2.5 Issues of Trust
Dealing with issues of trust is critical for the successful design, implementation, and
operation of AC systems. Since an autonomic system is supposed to reduce human
interference or even take over certain heretofore human duties, it is imperative to make
trust development a core component of its design. Even when users begin to trust the
policies hard-wired into low-level autonomic elements, it is a big step to gain their trust
in higher level autonomic elements that use these low-level elements as part of their
policies. Autonomic elements are instrumented to provide feedback to users beyond what
they provide as their service. Deciding what kind of feedback to provide and how to
instrument the autonomic elements is a difficult problem. The trust feedback required by
users will evolve with the evolution of the autonomic system. However, the AC field can
draw experience from the automation and HCI communities to tackle these problems.
Autonomic systems can become more trustable by actively communicating with their
users. Improved interaction will also allow these systems to be more autonomous over
time, exhibiting increased initiative without losing the users' trust. Higher trustability
and usability should, in turn, lead to improved adoptability.
2.6 Evolution Rather Than Revolution
Most existing systems cannot be redesigned and redeveloped from scratch to engineer
autonomic capabilities into them. Rather, self-management capabilities have to be added
gradually and incrementally, one component (i.e., architecture, subsystem, or service) at
a time.
With the proliferation of autonomic components, users will impose increasingly more
demands with respect to functional and nonfunctional requirements for autonomicity.
Thus, the process of equipping software systems with autonomic technology will be
evolutionary rather than revolutionary. Moreover, the evolution of autonomic systems
will happen at two levels:
1. the introduction of autonomic components into existing systems and
2. the change of requirements with the proliferation and integration of autonomic
system elements.
IBM has defined five levels of maturity as depicted in Figure 2.4 to characterize the
gradual injection of autonomicity into software systems.
Level 1 - Basic: manual analysis and problem solving
Level 2 - Managed: centralized tools, manual actions
Level 3 - Predictive: cross-resource correlation and guidance
Level 4 - Adaptive: system monitors, correlates, and takes action
Level 5 - Autonomic: dynamic, business-policy-based management
Fig 2.4: Increasing Autonomic Functionality
3 Analysis and Benefits of Current AC Work
3.1 AC Framework
The AC framework outlined in the previous section provides methods, algorithms,
architectures, technology, and tools to standardize, automate, and simplify myriad system
administration tasks. Just a few years ago, the installation or un-installation of an
application on a desktop computer required the expertise of an experienced system
administrator. Today, most users can install applications using standard install shields with
just a handful of mouse clicks. By building self-managing systems using the AC framework,
developers can accomplish similar simplifications for many other system administration
tasks (e.g., installing, configuring, monitoring, tuning, optimizing, recovering, protecting,
and extending).
3.2 Quality Attributes and Architecture Evaluation
The architectural blueprint introduced in Section 2 constitutes a solid foundation for
building AC systems. But so far, this blueprint has not come with a software analysis and
reasoning framework to facilitate architecture evaluation for self-managing applications.
The DEAS project proposes to develop such a framework based on ABASs [Klein 99].
When the system evolves, engineers can use this analysis framework to revisit, analyze,
and verify certain system properties.
Quality attributes for autonomic architectures should include not only traditional quality
criteria such as variability, modifiability, reliability, availability, and security but also
autonomicity-specific criteria such as support for dynamic adaptation, dynamic upgrade,
detecting anomalous system behavior, how to keep the user informed, sampling rate
adjustments in sensors, simulation of expected or predicted behavior, determining the
difference between expected and actual behavior, and accountability (i.e., how can users
gain trust by monitoring the underlying autonomic system).
Traditionally, for most quality attributes, applying stimuli and observing responses for
architectural analysis is basically a thought exercise performed during design and system
evolution. However, the DEAS principal investigators envision that many of the
autonomicity-specific quality attributes can be analyzed by directly stimulating events
and observing responses on the running application, which is already equipped with
sensors/monitors and executors/effectors as an autonomic element.
Codifying the relationship between architecture and quality attributes not only enhances
the current architecture design but also allows developers to reuse the architecture
analysis for other applications. The codification will make the design tradeoffs, which
often exist only in the chief architect's mind, explicit and aid in analyzing the impact of
an architecture reconfiguration to meet certain quality attributes during long-term
evolution. The fundamental idea is to equip the architecture of autonomic applications
with predictability by attaching, at design time, an analysis framework to the architecture
of a software system to validate and reassess quality attributes regularly and
automatically over long periods of time.
This codification will also aid in the development of standards and curricula materials,
which are discussed in more detail in subsequent sections.
3.3 Standards
Many successful solutions in the information technology industry are based on standards.
The Internet and World Wide Web are two obvious examples, both of which are built on
a host of protocols and content formats standardized by the Internet Engineering Task
Force (IETF) and the World Wide Web Consortium (W3C), respectively.
Before AC technology can be widely adopted, many aspects of its technical foundation
have to be standardized. IBM is actively involved in standardizing protocols and
interfaces among all interfaces within an autonomic element as well as among elements,
as depicted in Figure 3.1 [Miller 05b].
In March 2005, the Organization for the Advancement of Structured Information
Standards (OASIS) standards body approved the Web Services Distributed Management
(WSDM) standard, which is potentially a key standard for AC technology. The
development of standards for AC and Web services is highly competitive and politically
charged. The Autonomic Computing Forum (ACF) is a European organization that is
open and independent. Its mandate is to generate and promote AC technology [Popescu-
Zeletin 04].
Fig 3.1: Interface Standards Within an Autonomic Element [Miller 05b]
3.4 Curriculum Development
Control systems are typically featured prominently in electrical and mechanical
engineering curricula. Historically, computer science curricula do not require control
theory courses. Recently developed software engineering curricula, however, do require
control theory [UVIC 03].
Current software architecture courses cover control loops only peripherally. The
architecture of autonomic elements is not usually discussed. Note that event-based
architectures, which are typically discussed in a computer science curriculum, are
different from the architectures for autonomic systems. Courses on self-managed systems
should be introduced into all engineering and computing curricula along the lines of real-
time and embedded systems courses.
How to build systems from the ground up as self-managed computing systems will likely
be a core topic in software engineering and computer science curricula.
4. Conclusions
The time is right for the emergence of self-managed or autonomic systems. Over the past
decade, we have come to expect that "plug-and-play" for Universal Serial Bus (USB)
devices, such as memory sticks and cameras, simply works, even for technophobic
users. Today, users demand and crave simplicity in computing solutions.
With the advent of Web and grid service architectures, we begin to expect that an average
user can provide Web services with high resiliency and high availability. The goal of
building a system that is used by millions of people each day and administered by a half-
time person, as articulated by Jim Gray of Microsoft Research, seems attainable with the
notion of automatic updates. Thus, autonomic computing seems to be more than just a
new middleware technology; in fact, it may be a solid solution for reining in the
complexity problem.
Historically, most software systems were not designed as self-managing systems.
Retrofitting existing systems with self-management capabilities is a difficult problem.
Even if autonomic computing technology is readily available and taught in computer
science and engineering curricula, it will take another decade for the proliferation of
autonomicity in existing systems.
5. References
1. Autonomic Computing: IBM's perspective on the state of information technology.
2. Jeffrey O. Kephart and David M. Chess (2003), "The Vision of Autonomic
Computing"
3. http://www.research.ibm.com/autonomic/research/index.html
4. http://autonomiccomputing
5. http://www.ibm.com/developerworks/autonomic

ABSTRACT
Autonomic computing is the technology that is building self-managing IT infrastructures: hardware and software that can configure, heal, optimize, and protect themselves. By taking care of many of the increasingly complex management requirements of IT systems, autonomic computing allows human and physical resources to concentrate on actual business issues.
The term autonomic computing derives from the body's autonomic nervous system, controlling functions like heart rate, breathing rate, and oxygen levels without a person's conscious awareness or involvement.
The goal is to realize the promise of IT: increasing productivity while minimizing complexity for users. We are pursuing this goal on many technological fronts as we actively develop computing systems capable of running themselves with minimal human intervention.
By taking over the complicated tasks associated with the ongoing maintenance and management of computing systems, autonomic computing technology will allow IT workers to focus their talents on complex, big-picture projects that require a higher level of thinking and planning. This is the ultimate benefit of autonomic computing: freeing IT professionals to drive creativity, innovation, and opportunity.
Autonomic systems are being created in this manner to recognize external threats or internal problems and then take measures to automatically prevent or correct those issues before humans even know there is a problem. These systems are also being designed to manage and proactively improve their own performance, all of which frees IT staff to focus their real intelligence on big-picture projects.

AUTONOMIC COMPUTING
1. Introduction
The high-tech industry has spent decades creating computer systems with
ever mounting degrees of complexity to solve a wide variety of business problems. Ironically, complexity itself has become part of the problem. As networks and distributed systems grow and change, they can become increasingly hampered by system deployment failures, hardware and software issues, not to mention human error. Such scenarios in turn require further human intervention to enhance the performance and capacity of IT components. This drives up the overall IT costs, even though technology component costs continue to decline. As a result, many IT professionals seek ways to improve their return on investment in their IT infrastructure, by reducing the total cost of ownership of their environments while improving the quality of service for users.
Self managing computing helps address the complexity issues by using technology to manage itself. Self managing computing is also known as autonomic computing.
Autonomic: Pertaining to an on demand operating environment that responds automatically to problems, security threats, and system failures.
Autonomic computing: A computing environment with the ability to manage itself and dynamically adapt to change in accordance with business policies and objectives.
Self-managing environments can perform such activities based on situations they observe or sense in the IT environment rather than requiring IT professionals to initiate the task. These environments are self-configuring, self-healing, self-optimizing, and self-protecting.
The promise of autonomic computing includes capabilities unknown in traditional products and toolsets. It includes the capacity not just to take automated action, but to do so based on an innate ability to sense and respond to change. Not just to execute rules, but to continually normalize and optimize environments in real time. Not just to store and execute policies, but to incorporate self-learning and self-managing capabilities. It is a landscape that eases the pain of taking IT into the future, by shifting mundane work to technology and freeing up humans for work that more directly impacts business value.

2. What is autonomic computing
Autonomic computing is about freeing IT professionals to focus on high-value tasks by making technology work smarter. This means letting computing systems and infrastructure take care of managing themselves. Ultimately, it is writing business policies and goals and letting the infrastructure configure, heal and optimize itself according to those policies while protecting itself from malicious activities. Self managing computing systems have the ability to manage themselves and dynamically adapt to change in accordance with business policies and objectives.
In an autonomic environment the IT infrastructure and its
components are Self-managing. Systems with self-managing components reduce the cost of owning and operating computer systems. Self-managing systems can perform management activities based on situations they observe or sense in the IT environment. Rather than IT professionals initiating management activities, the system observes something about itself and acts accordingly. This allows the IT professional to focus on high-value tasks while the technology manages the more mundane operations. IT infrastructure components take on the following characteristics: self-configuring, self-healing, self-optimizing and self-protecting.

2.1. Self-management attributes of system components
In a self-managing autonomic environment, system components, from hardware (such as storage units, desktop computers and servers) to software (such as operating systems, middleware and business applications), can include embedded control loop functionality. Although these control loops consist of the same fundamental parts, their functions can be divided into four broad embedded control loop categories. These categories are considered to be attributes of the system components and are defined as:
Self-configuring
Systems adapt automatically to dynamically changing environments. When hardware and software systems have the ability to define themselves "on the fly," they are self-configuring. This aspect of self-managing means that new features, software, and servers can be dynamically added to the enterprise infrastructure with no disruption of services. Self-configuring not only includes the ability for each individual system to configure itself on the fly, but also for systems within the enterprise to configure themselves into the e-business infrastructure of the enterprise. The goal of self managing computing is to provide self-configuration capabilities for the entire IT infrastructure, not just individual servers, software, and storage devices.
Self-healing
Systems discover, diagnose, and react to disruptions. For a system to be self-healing, it must be able to recover from a failed component by first detecting and isolating the failed component, taking it off line, fixing or isolating the failed component, and reintroducing the fixed or replacement component into service without any apparent application disruption. Systems will need to predict problems and take actions to prevent the failure from having an impact on applications. The self-healing objective must be to minimize all outages in order to keep enterprise applications up and available at all times. Developers of system components need to focus on maximizing the reliability and availability design of each hardware and software product toward continuous availability.
Self-optimizing
Systems monitor and tune resources automatically. Self-optimization requires hardware and software systems to efficiently maximize resource utilization to meet end-user needs without human intervention. Features must be introduced to allow the enterprise to optimize resource usage across the collection of systems within their infrastructure, while also maintaining their flexibility to meet the ever-changing needs of the enterprise.
Self-protecting
Systems anticipate, detect, identify, and protect themselves from attacks from anywhere. Self-protecting systems must have the ability to define and manage user access to all computing resources within the enterprise, to protect against unauthorized resource access, to detect intrusions and report and prevent these activities as they occur, and to provide backup and recovery capabilities that are as secure as the original resource management systems. Systems will need to build on top of a number of core security technologies already available today. Capabilities must be provided to more easily understand and handle user identities in various contexts, removing the burden from administrators.
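The self-healing sequence described above (detect a failed component, take it offline, fix or replace it, and reintroduce it without apparent disruption) can be outlined as in the following Python sketch; the component interface (health_check, repair) is an assumption made only for this example.

```python
# Rough sketch of the self-healing cycle: detect a failed component, take it
# offline, fix or replace it, then reintroduce it. The component interface
# (health_check, repair) is assumed for the example.

class SelfHealingPool:
    def __init__(self, components):
        self.active = list(components)       # components currently in service
        self.quarantined = []                # failed components taken offline

    def detect_failures(self):
        return [c for c in self.active if not c.health_check()]

    def heal(self):
        for component in self.detect_failures():
            self.active.remove(component)         # isolate: take it off line
            self.quarantined.append(component)
            component.repair()                    # fix, or provision a replacement
            if component.health_check():          # verify before reintroduction
                self.quarantined.remove(component)
                self.active.append(component)     # back into service
```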

2.2. Comparison with the present system
IBM frequently cites four aspects of self-management, which Table 1 summarizes. Early autonomic systems may treat these aspects as distinct, with different product teams creating solutions that address each one separately. Ultimately, these aspects will be emergent properties of a general architecture, and distinctions will blur into a more general notion of self-maintenance. The four aspects of self-management, namely self-configuring, self-healing, self-optimizing and self-protecting, are compared here.
Table 1. Four aspects of self-management as they are now and as they would be with autonomic computing

Self-configuration
Current computing: Corporate centers have multiple vendors and platforms. Installing, configuring, and integrating systems is time consuming and error prone.
Autonomic computing: Automated configuration of components and systems follows high-level policies. The rest of the system adjusts automatically and seamlessly.

Self-optimization
Current computing: Systems have hundreds of manually set, nonlinear tuning parameters, and their number increases with each release.
Autonomic computing: Components and systems continually seek opportunities to improve their own performance and efficiency.

Self-healing
Current computing: Problem determination in large, complex systems can take a team of programmers weeks.
Autonomic computing: The system automatically detects, diagnoses and repairs localized software and hardware problems.

Self-protection
Current computing: Detection of and recovery from attacks and cascading failures is manual.
Autonomic computing: The system automatically defends against malicious attacks or cascading failures. It uses early warning to anticipate and prevent system-wide failures.


2.3. Eight key elements
Knows Itself
An autonomic computing system needs to "know itself": its components must also possess a system identity. Since a "system" can exist at many levels, an autonomic system will need detailed knowledge of its components, current status, ultimate capacity, and all connections to other systems to govern itself. It will need to know the extent of its "owned" resources, those it can borrow or lend, and those that can be shared or should be isolated.
Configure Itself
An autonomic computing system must configure and reconfigure itself under varying (and in the future, even unpredictable) conditions. System configuration or "setup" must occur automatically, as well as dynamic adjustments to that configuration to best handle changing environments
Optimizes Itself
An autonomic computing system never settles for the status quo - it always looks for ways to optimize its workings. It will monitor its constituent parts and fine-tune workflow to achieve predetermined system goals.
Heal Itself
An autonomic computing system must perform something akin to healing - it must be able to recover from routine and extraordinary events that might cause some of its parts to malfunction. It must be able to discover problems or potential problems, then find an alternate way of using resources or reconfiguring the system to keep functioning smoothly.

Protect Itself
A virtual world is no less dangerous than the physical one, so an autonomic computing system must be an expert in self-protection. It must detect, identify and protect itself against various types of attacks to maintain overall system security and integrity.
Adapt Itself
An autonomic computing system must know its environment and the context surrounding its activity, and act accordingly. It will find and generate rules for how best to interact with neighboring systems. It will tap available resources, even negotiate the use by other systems of its underutilized elements, changing both itself and its environment in the process; in a word, adapting.
Open Itself
An autonomic computing system cannot exist in a hermetic environment. While independent in its ability to manage itself, it must function in a heterogeneous world and implement open standards; in other words, an autonomic computing system cannot, by definition, be a proprietary solution.
Hide Itself
An autonomic computing system will anticipate the optimized resources needed while keeping its complexity hidden. It must marshal I/T resources to shrink the gap between the business or personal goals of the user and the I/T implementation necessary to achieve those goals, without involving the user in that implementation.

2.4. Autonomic deployment model
Delivering system wide autonomic environments is an evolutionary process enabled by technology, but it is ultimately implemented by each enterprise through the adoption of these technologies and supporting processes. The path to self managing computing can be thought of in five levels. These levels, defined below, start at basic and continue through managed, predictive, adaptive and finally autonomic.
Level 1 Basic - Level 2 Managed - Level 3 Predictive - Level 4 Adaptive - Level 5 Autonomic
1. Basic level
A starting point of the IT environment. Each infrastructure element is managed independently by IT professionals who set it up, monitor it and eventually replace it.
2. Managed level
Systems management technologies can be used to collect information from disparate systems onto fewer consoles, reducing the time it takes for the administrator to collect and synthesize information as the IT environment becomes more complex.
3. Predictive level
New technologies are introduced to provide correlation among several infrastructure elements. These elements can begin to recognize patterns, predict the optimal configuration and provide advice on what course of action the administrator should take.
4. Adaptive level
As these technologies improve and as people become more comfortable with the advice and predictive power of these systems, we can progress to the

adaptive level, where the systems themselves can automatically take the right actions based on the information that is available to them and the knowledge of what is happening in the system.
5. Autonomic level
The IT infrastructure operation is governed by business policies and objectives. Users interact with the autonomic technology to monitor the business processes, alter the objectives, or both.
Fig: Autonomic computing maturity model

3. Architectural details
Autonomic computing system
This section organizes an autonomic computing system into the layers and parts shown in Figure 1. These parts are connected using enterprise service bus patterns that allow the components to collaborate using standard mechanisms such as Web services. The enterprise service bus integrates the various building blocks, which include:
• Touchpoints for managed resources
• Knowledge sources
• Autonomic managers
• Manual managers
• Enterprise service bus

Figure 1. Autonomic computing reference architecture

The lowest layer contains the system components, or managed resources, that make up the IT infrastructure. These managed resources can be any type of resource (hardware or software) and may have embedded self-managing attributes. The next layer incorporates consistent, standard manageability interfaces for accessing and controlling the managed resources. These standard interfaces are delivered through a touchpoint. Layers three and four automate some portion of the IT process using an autonomic manager.
A particular resource may have one or more touchpoint autonomic managers, each implementing a relevant control loop. Layer 3 in the figure above illustrates this by depicting an autonomic manager for the four broad categories that were introduced earlier (self-configuring, self-healing, self-optimizing and self-protecting). Layer four contains autonomic managers that orchestrate other autonomic managers. It is these orchestrating autonomic managers that deliver the system wide autonomic capability by incorporating control loops that have the broadest view of the overall IT infrastructure. The top layer illustrates a manual manager that provides a common system management interface for the IT professional using an integrated solutions console. The various manual and autonomic manager layers can obtain and share knowledge via knowledge sources.
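A condensed sketch of how these layers might compose is shown below: managed resources wrapped by touchpoints, touchpoint autonomic managers for each discipline, an orchestrating manager with the system-wide view, and a manual manager on top. The class names mirror the terms used above, but the code itself is an illustrative assumption, not the reference architecture.

```python
# Condensed sketch of the layered composition described above. The class
# names mirror the terms in the text, but the code itself is only an
# illustrative assumption, not the reference architecture.

class ManagedResource:                        # lowest layer: the IT resource itself
    def __init__(self, name):
        self.name, self.state = name, {}

class Touchpoint:                             # layer 2: standard sensor/effector interface
    def __init__(self, resource):
        self.resource = resource
    def sense(self):
        return dict(self.resource.state)
    def effect(self, key, value):
        self.resource.state[key] = value

class TouchpointAutonomicManager:             # layer 3: one control loop per discipline
    def __init__(self, discipline, touchpoint):
        self.discipline, self.touchpoint = discipline, touchpoint
    def run_loop(self):
        return self.touchpoint.sense()        # monitor/analyze/plan/execute would go here

class OrchestratingAutonomicManager:          # layer 4: system-wide coordination
    def __init__(self, managers):
        self.managers = managers
    def run(self):
        return {m.discipline: m.run_loop() for m in self.managers}

class ManualManager:                          # top layer: console for the IT professional
    def __init__(self, orchestrator):
        self.orchestrator = orchestrator
    def show_status(self):
        print(self.orchestrator.run())

# Wiring the layers together for a single server resource:
server = ManagedResource("app-server-1")
touchpoint = Touchpoint(server)
managers = [TouchpointAutonomicManager(d, touchpoint)
            for d in ("self-configuring", "self-healing",
                      "self-optimizing", "self-protecting")]
ManualManager(OrchestratingAutonomicManager(managers)).show_status()
```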
Managed resource
A managed resource is a hardware or software component that can be managed. A managed resource could be a server, storage unit, database, application server, service, application or other entity. A managed resource might contain its own embedded self-management control loop, in addition to other autonomic managers that might be packaged with a managed resource. Intelligent control loops can be embedded in the run-time environment of a managed resource. These embedded control loops are one way to offer self-managing autonomic capability. The details of these embedded control loops may or may not be externally visible. The control loop might be deeply embedded in a resource so that it is not visible through the manageability interface. When any of the details for the control loop are visible, the control loop is configured through the manageability interface that
is provided for that resource (for example, a disk drive).
Touchpoints
A touchpoint is an autonomic computing system building block that implements sensor and effector behavior for one or more of a managed resource's manageability mechanisms. It also provides a standard manageability interface. Deployed managed resources are accessed and controlled through these manageability interfaces. Manageability interfaces employ mechanisms such as log files, events, commands, application programming interfaces (APIs) and configuration files. These mechanisms provide various ways to gather details about and change the behavior of the managed resources. The mechanisms used to gather details are aggregated into a sensor for the managed resource, and the mechanisms used to change the behavior of the managed resources are aggregated into an effector for the resource.
A touchpoint is the component in a system that exposes the state and
management operations for a resource in the system. An autonomic manager communicates with a touchpoint through the manageability interface. A touchpoint, depicted in Figure 2, is the implementation of the manageability interface for a specific manageable resource or a set of related manageable resources. For example, there might be a touchpoint implemented that exposes the manageability for a database server, the databases that database server hosts, and the tables within those databases.
Touchpoint autonomic managers
Autonomic managers implement intelligent control loops that automate combinations of the tasks found in IT processes. Touchpoint autonomic managers are those that work directly with the managed resources through their touchpoints. These autonomic managers can perform various self-management tasks, so they embody different intelligent control loops. Some examples of such control loops, using the four self-managing categories include:
• Performing a self-configuring task such as installing software when it detects that some prerequisite software is missing
• Performing a self-healing task such as correcting a configured path so installed software can be correctly located
• Performing a self-optimizing task such as adjusting the current workload when it observes an increase or decrease in capacity
• Performing a self-protecting task such as taking resources offline if it detects an intrusion attempt
Most autonomic managers use policies (goals or objectives) to govern the behavior of intelligent control loops. Touchpoint autonomic managers use these policies to determine what actions should be taken for the managed resources that they manage. A touchpoint autonomic manager can manage one or more managed resources directly, using the managed resource's touchpoint or touchpoints. Figure 3 illustrates four typical arrangements. The primary differences among these arrangements are the type and number of managed resources that are within the autonomic manager's scope of control. The four typical arrangements are:
• A single resource scope is the most fundamental because an autonomic manager implements a control loop that accesses and controls a single managed resource, such as a network router, a server, a storage device, an application, a middleware platform or a personal computer.
• A homogeneous group scope aggregates resources of the same type. An example of a homogeneous group is a pool of servers that an autonomic manager can dynamically optimize to meet certain performance and availability thresholds.
• A heterogeneous group scope organizes resources of different types. An example of a heterogeneous group is a combination of heterogeneous devices and servers, such as databases, Web servers and storage subsystems, that work together to achieve common performance and availability targets.
• A business system scope organizes a collection of heterogeneous resources so an autonomic manager can apply its intelligent control loop to the service that is delivered to the business. Some examples are a customer care system or an electronic auction system. The business system scope requires autonomic managers that can comprehend the optimal state of business processes, based on policies, schedules and service levels, and drive the consequences of process optimization back down to the resource groups (both homogeneous and heterogeneous) and even to individual resources.
Fig 3: Four common managed resource arrangements
These resource scopes define a set of decision-making contexts that are used to classify the purpose and role of a control loop within the autonomic computing architecture. The touchpoint autonomic managers shown previously in Figure 1 are each dedicated to a particular resource or a particular collection of resources. Touchpoint autonomic managers also expose a sensor and an effector, just like the managed resources in Figure 3 do. As a result, orchestrating autonomic managers can interact with touchpoint autonomic managers by using the same style of standard interface that touchpoint autonomic managers use to interact with managed resources.


Orchestrating autonomic managers
A single touchpoint autonomic manager acting in isolation can achieve autonomic behavior only for the resources that it manages. The self-managing autonomic capabilities delivered by touchpoint autonomic managers need to be coordinated to deliver system wide autonomic computing behavior. Orchestrating autonomic managers provide this coordination function. There are two common configurations:
• Orchestrating within a discipline: an orchestrating autonomic manager coordinates multiple touchpoint autonomic managers of the same type (one of self-configuring, self-healing, self-optimizing or self-protecting).
• Orchestrating across disciplines: an orchestrating autonomic manager coordinates touchpoint autonomic managers that are a mixture of self-configuring, self-healing, self-optimizing and self-protecting.
An example of an orchestrating autonomic manager is a workload manager. An autonomic management system for workload might include self-optimizing touchpoint autonomic managers for particular resources, as well as orchestrating autonomic managers that manage pools of resources. A touchpoint autonomic manager can optimize the utilization of a particular resource based on application priorities, while orchestrating autonomic managers can optimize resource utilization across a pool of resources based on transaction measurements and policies. The philosophy behind workload management is one of policy-based, goal-oriented management. Tuning servers individually using only touchpoint autonomic managers cannot ensure the overall performance of applications that span a mix of platforms; systems that appear to be functioning well on their own may not, in fact, be contributing to optimal system-wide end-to-end processing.
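The following Python sketch suggests, using invented names and a deliberately naive policy, how an orchestrating autonomic manager might rebalance work across a pool of resources that are each managed by a touchpoint autonomic manager. It is a toy model, not an actual workload-management algorithm.

class TouchpointManager:
    # Stand-in for a per-resource (touchpoint) autonomic manager.
    def __init__(self, name, utilization):
        self.name = name
        self.utilization = utilization          # reported via its sensor interface

    def sense_utilization(self):
        return self.utilization

    def effect_share(self, share):
        # Effector: tell the resource what fraction of the pool's work to accept.
        print(f"{self.name}: workload share set to {share:.2f}")

def orchestrate(pool):
    # Give lightly loaded resources a proportionally larger share of new work.
    headroom = [max(0.0, 1.0 - tm.sense_utilization()) for tm in pool]
    total = sum(headroom) or 1.0
    for tm, room in zip(pool, headroom):
        tm.effect_share(room / total)

orchestrate([TouchpointManager("web-01", 0.9), TouchpointManager("web-02", 0.3)])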
Manual managers
A manual manager provides a common system management interface for the IT professional using an integrated solutions console. Self-managing autonomic systems can use common console technology to create a consistent human-facing interface for the autonomic managers of IT infrastructure components. Autonomic capabilities in computer systems perform tasks that IT professionals choose to delegate to the technology, according to policies. In some cases, an administrator might choose for certain tasks to involve human intervention, and the human interaction with the system can be enhanced using a common console framework, based on industry standards, that promotes consistent presentation to IT professionals.
The primary goal of a common console is to provide a single platform that can host all the administrative console functions in server, software and storage products, allowing users to manage solutions rather than individual components or products. Administrative console functions range from setup and configuration to solution run-time monitoring and control. The customer value of an integrated solutions console includes reduced cost of ownership (attributable to more efficient administration) and shorter learning curves as new products and solutions are added to the autonomic system environment. The shorter learning curve is achieved by using standards and a Web-based presentation style. By delivering a consistent presentation format and behavior for administrative functions across diverse products, the common console creates a familiar user interface, reducing the need for staff to learn a different interface each time a new product is introduced. The common console architecture is based on standards (such as standard Java APIs), so that it can be extended to offer new management functions or to enable the development of new components for products in an autonomic system.
A common console instance consists of a framework and a set of console-specific components provided by products. Administrative activities are executed as portlets. Consistency of presentation and behavior is essential to improving administrative efficiency, and requires ongoing effort and cooperation among many product communities. A manual manager is an implementation of the user interface that enables an IT professional to perform some management function manually. The manual manager can collaborate with other autonomic managers at the same level or orchestrate autonomic managers and other IT professionals working at "lower" levels.
Autonomic manager
An autonomic manager is an implementation that automates some management function and externalizes this function according to the behavior defined by management interfaces. The autonomic manager is a component that implements the control loop. For a system component to be self-managing, it must have an automated method to collect the details it needs from the system; to analyze those details to determine if something needs to change; to create a plan, or sequence of actions, that specifies the necessary changes; and to perform those actions. When these functions can be automated, an intelligent control loop is formed. As shown in Figure 4, the architecture dissects the loop into four parts that share knowledge:
• The monitor function provides the mechanisms that collect, aggregate, filter and report details (such as metrics and topologies) collected from a managed resource.
• The analyze function provides the mechanisms that correlate and model complex situations (for example, time-series forecasting and queuing models). These mechanisms allow the autonomic manager to learn about the IT environment and help predict future situations.
• The plan function provides the mechanisms that construct the actions needed to achieve goals and objectives. The planning mechanism uses policy information to guide its work.
• The execute function provides the mechanisms that control the execution of a plan, with considerations for dynamic updates.
These four parts work together to provide the control loop functionality. Figure 4 shows a structural arrangement of the parts rather than a control flow: the four parts communicate and collaborate with one another and exchange appropriate knowledge and data.
As illustrated in Figure 4, autonomic managers, in a manner similar to touchpoints, provide sensor and effector manageability interfaces for other autonomic managers and manual managers to use. Using standard sensor and effector interfaces enables these components to be composed together in a manner that is transparent to the managed resources. For example, an orchestrating autonomic manager can use the sensor and effector manageability interfaces of touchpoint autonomic managers to accomplish its management functions (that is, the orchestrating autonomic manager can manage touchpoint autonomic managers), as illustrated previously in Figure 2.
Even though an autonomic manager is capable of automating the monitor, analyze, plan and execute parts of the loop, partial autonomic managers that perform only a subset of the monitor, analyze, plan and execute functions can be developed, and IT professionals can configure an autonomic manager to perform only some of the automated functions it is capable of performing.

Fig 4: Functional details of an autonomic manager
In Figure 4, four profiles (monitoring, analyzing, planning and executing) are shown. An administrator might configure this autonomic manager to perform only the monitoring function. As a result, the autonomic manager would surface notifications to a common console for the situations that it recognizes, rather than automating the analysis, planning and execution functions associated with those actions. Other configurations could allow additional parts of the control loop to be automated. Autonomic managers that perform only certain parts of the control loop can be composed together to form a complete closed loop. For example, one autonomic manager that performs only the monitor and analyze functions might collaborate with another autonomic manager that performs only the plan and execute functions to realize a complete autonomic control loop.
Autonomic manager internal structure
* Monitor
The monitor function collects the details from the managed resources, via touchpoints, and correlates them into symptoms that can be analyzed. The details can include topology information, metrics, configuration property settings and so on. This data includes information about managed resource configuration, status, offered capacity and throughput. Some of the data is static or changes slowly, whereas other data is dynamic, changing continuously through time. The monitor function aggregates, correlates and filters these details until it determines a symptom that needs to be analyzed. For example, the monitor function could aggregate and correlate the content of events received from multiple resources to determine a symptom that relates to that particular combination of events. Logically, this symptom is passed to the analyze function. Autonomic managers must collect and process large amounts of data from the touchpoint sensor interface of a managed resource. An autonomic manager's ability to rapidly organize and make sense of this data is crucial to its successful operation.
* Analyze
The analyze function provides the mechanisms to observe and analyze situations to determine if some change needs to be made. For example, the requirement to enact a change may occur when the analyze function determines that some policy is not being met. The analyze function is responsible for determining if the autonomic manager can abide by the established policy, now and in the future. In many cases, the analyze function models complex behavior so it can employ prediction techniques such as time-series forecasting and queuing models. These mechanisms allow the autonomic manager to learn about the IT environment and help predict future behavior. Autonomic managers must be able to perform complex data analysis and reasoning on the symptoms provided by the monitor function. The analysis is influenced by stored knowledge data. If changes are required, the analyze function generates a change request and logically passes that change request to the plan function. The change request describes the modifications that the analyze component deems necessary or desirable.
* Plan
The plan function creates or selects a procedure to enact a desired alteration in the managed resource. The plan function can take on many forms, ranging from a single command to a complex workflow. It generates the appropriate change plan, which represents a desired set of changes for the managed resource, and logically passes that change plan to the execute function.
* Execute
The execute function provides the mechanism to schedule and perform the necessary changes to the system. Once an autonomic manager has generated a change plan that corresponds to a change request, some actions may need to be taken to modify the state of one or more managed resources. The execute function is responsible for carrying out the procedure that was generated by the plan function through a series of actions. These actions are performed using the touchpoint effector interface of a managed resource. Part of the execution of the change plan could involve updating the knowledge that is used by the autonomic manager.
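A minimal sketch of the four functions sharing knowledge is given below; the metric, the policy threshold and the planned actions are all invented for illustration and do not reflect any specific product.

knowledge = {"policy": {"max_response_ms": 200}, "history": []}

def monitor(metrics):
    # Aggregate raw details from the touchpoint sensor into a symptom.
    knowledge["history"].append(metrics)
    avg = sum(m["response_ms"] for m in knowledge["history"]) / len(knowledge["history"])
    return {"avg_response_ms": avg}

def analyze(symptom):
    # Compare the symptom against the stored policy; emit a change request if needed.
    if symptom["avg_response_ms"] > knowledge["policy"]["max_response_ms"]:
        return {"change": "add_capacity"}
    return None

def plan(change_request):
    # Turn the change request into an ordered change plan.
    return ["provision_server", "update_load_balancer"] if change_request else []

def execute(change_plan):
    # Carry out the plan through the touchpoint effector (here: just print).
    for action in change_plan:
        print("executing:", action)

execute(plan(analyze(monitor({"response_ms": 350}))))   # one pass of the loop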
Knowledge Source
A knowledge source is an implementation of a registry, dictionary, database or other repository that provides access to knowledge according to the interfaces prescribed by the architecture. In an autonomic system, knowledge consists of particular types of data with architected syntax and semantics, such as symptoms, policies, change requests and change plans. This knowledge can be stored in a knowledge source so that it can be shared among autonomic managers. The knowledge stored in knowledge sources can be used to extend the knowledge capabilities of an autonomic manager. An autonomic manager can load knowledge from one or more knowledge sources, and the autonomic manager's manager can activate that knowledge, allowing the autonomic manager to perform additional management tasks (such as recognizing particular symptoms or applying certain policies).

Knowledge
Data used by the autonomic manager's four functions (monitor, analyze, plan and execute) are stored as shared knowledge. The shared knowledge includes data such as topology information, historical logs, metrics, symptoms and policies.
The knowledge used by an autonomic manager is obtained in one of three ways:
(1) The knowledge is passed to the autonomic manager. An autonomic manager might obtain policy knowledge in this manner. A policy consists of a set of behavioral constraints or preferences that influence the decisions made by an autonomic manager.
(2) The knowledge is retrieved from an external knowledge source. An autonomic manager might obtain symptom definitions or resource-specific historical knowledge in this manner. A knowledge source could store symptoms that could be used by an autonomic manager; a log file may contain a detailed history in the form of entries that signify events that have occurred in a component or system.
(3) The autonomic manager itself creates the knowledge. The knowledge used by a particular autonomic manager could be created by the monitor part, based on the information collected through sensors. The monitor part might create knowledge based on recent activities by logging the notifications that it receives from a managed resource. The execute part of an autonomic manager might update the knowledge to indicate the actions that were taken as a result of the analysis and planning (based on the monitored data); the execute part would then indicate how those actions affected the managed resource (based on subsequent monitored data obtained from the managed resource after the actions were carried out). This knowledge is contained within the autonomic manager, as represented by the "knowledge" block in Figure 4. If the knowledge is to be shared with other autonomic managers, it must be placed into a knowledge source.


Knowledge types include solution topology knowledge, policy knowledge and problem determination knowledge. Table 1 summarizes the types of knowledge that may be present in a self-managing autonomic system. Each knowledge type must be expressed using common syntax and semantics so that the knowledge can be shared.
* Solution Topology Knowledge: captures knowledge about the components and their construction and configuration for a solution or business system. Installation and configuration knowledge is captured in a common installable-unit format to eliminate complexity. The plan function of an autonomic manager can use this knowledge for installation and configuration planning.
* Policy Knowledge: a policy is knowledge that is consulted to determine whether or not changes need to be made in the system. An autonomic computing system requires a uniform method for defining the policies that govern the decision-making for autonomic managers. By defining policies in a standard way, they can be shared across autonomic managers, enabling entire systems to be managed by a common set of policies (a simple illustration appears after this list).
* Problem Determination Knowledge: problem determination knowledge includes monitored data, symptoms and decision trees. The problem determination process also may create knowledge: as the system responds to actions taken to correct problems, learned knowledge can be collected within the autonomic manager. An autonomic computing system requires a uniform method for representing problem determination knowledge, such as monitored data (common base events), symptoms and decision trees.
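As an illustration of how policy knowledge might be expressed in a shared, structured form (the field names below are hypothetical and are not drawn from any published policy standard), consider:

response_time_policy = {
    "name": "gold-customer-response-time",
    "scope": "heterogeneous-group:order-processing",
    "condition": {"metric": "avg_response_ms", "operator": "<=", "value": 200},
    "priority": "high",
}

def policy_satisfied(policy, observed):
    # Evaluate the policy condition against observed measurements.
    cond = policy["condition"]
    metric = observed[cond["metric"]]
    return metric <= cond["value"] if cond["operator"] == "<=" else metric >= cond["value"]

print(policy_satisfied(response_time_policy, {"avg_response_ms": 180}))   # True

Because the policy is plain structured data rather than code buried inside one manager, several autonomic managers could load and evaluate the same constraint.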
Manageability Interface
The manageability interface for controlling a manageable resource is organized into its sensor and effector interfaces. A touchpoint implements the sensor and effector behavior for specific manageable resource types by mapping the standard sensor and effector interfaces to one or more of the manageable resource's manageability interface mechanisms. The manageability interface reduces complexity by offering a standard interface to autonomic managers, rather than the diverse manageability interface mechanisms associated with various types of manageable resources. A sensor consists of one or both of the following:
• A set of properties that expose information about the current state of a manageable resource and are accessed through standard "get" operations
• A set of management events (unsolicited, asynchronous messages or notifications) that occur when the manageable resource undergoes state changes that merit reporting
These two parts of a sensor interface are referred to as interaction styles. The "get" operations use the request-response interaction style; events use the send-notification interaction style.
An effector consists of one or both of the following:
• A collection of "set" operations that allow the state of the manageable resource to be changed in some way
• A collection of operations implemented by autonomic managers that allow the manageable resource to make requests of its manager
The "set" operations use the perform-operation interaction style; requests use the solicit-response interaction style to allow the manageable resource to consult with its manager. A brief sketch of these four interaction styles follows.
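The small Python sketch below illustrates the four interaction styles on a single, invented touchpoint class; it also shows the sensor/effector linkage discussed next, where a "set" operation produces a configuration-change notification. None of the names come from an actual touchpoint specification.

class Touchpoint:
    def __init__(self):
        self._state = {"utilization": 0.4, "config_path": "/opt/app"}
        self._subscribers = []

    # Sensor, request-response style: standard "get" operations.
    def get_property(self, name):
        return self._state[name]

    # Sensor, send-notification style: events on state changes worth reporting.
    def subscribe(self, callback):
        self._subscribers.append(callback)

    # Effector, perform-operation style: standard "set" operations.
    def set_property(self, name, value):
        self._state[name] = value
        for notify in self._subscribers:
            notify({"event": "configuration_changed", "property": name})

    # Effector, solicit-response style: the resource consults its manager.
    def register_callout(self, handler):
        self._manager_callout = handler

tp = Touchpoint()
tp.subscribe(lambda event: print("notification:", event))
tp.set_property("config_path", "/opt/app-v2")
print(tp.get_property("config_path"))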
The sensor and effector in the architecture are linked together. For example, a configuration change that occurs through the effector should be reflected as a configuration change notification through the sensor interface. The linkage between the sensor and effector is more formally defined using the concept of manageability capabilities: logical collections of manageable resource state information and operations. Some examples are:
• Identification: state information and operations used to identify an instance of a manageable resource
• Metrics: state information and operations for measurements of a manageable resource, such as throughput, utilization and so on


• Configuration: state information and operations for the configurable attributes of a manageable resource
For each manageability capability, the client of the manageability interface must be able to obtain and control state data through the manageability interface, including:
• Meta details (for example, to identify properties that are used for configuration of a manageable resource, or information that specifies which resources can be hosted by the manageable resource)
• Sensor interactions, including mechanisms for retrieving the current property values (such as metrics and configuration) and available notifications (what types of events and situations the manageable resource can generate)
• Effector interactions, including operations to change the state (which effector operations and interaction styles the manageable resource supports) and call-outs to request changes to existing state (what types of call-outs the manageable resource can perform)
Enterprise Service Bus
An enterprise service bus is an implementation that assists in integrating other building blocks by directing the interactions among them. The enterprise service bus can be used to "connect" various autonomic computing building blocks. The role that a particular logical instance of the enterprise service bus performs is established by autonomic computing usage patterns such as:
• An enterprise service bus that aggregates multiple manageability mechanisms for a single manageable resource
• An enterprise service bus that enables an autonomic manager to manage multiple touchpoints
• An enterprise service bus that enables multiple autonomic managers to manage a single touchpoint
• An enterprise service bus that enables multiple autonomic managers to manage multiple touchpoints

4.Benefits
Autonomic computing was conceived to lessen the spiraling demands for skilled IT resources, reduce complexity and to drive computing into a new era that may better exploit its potential to support higher order thinking and decision
making. Immediate benefits will include reduced dependence on human
intervention to maintain complex systems accompanied by a substantial decrease
in costs. Long-term benefits will allow individuals, organizations and businesses
to collaborate on complex problem solving.
Short-term IT related benefits
• Simplified user experience through a more responsive, real-time system
• Cost savings that scale with use
• Scaled power, storage and costs that optimize usage across both hardware and software
• Full use of idle processing power, including home PCs, through networked systems
• Natural-language queries that allow deeper and more accurate returns
• Seamless access to multiple file types; open standards will allow users to pull data from all potential sources by re-formatting on the fly
• Stability, high availability and high security, with fewer system or network errors thanks to self-healing
• Improved computational capacity
Long-term / higher-order benefits
• Realizing the vision of enablement by shifting available resources to higher-order business needs
• Embedding autonomic capabilities in client or access devices, servers, storage systems, middleware, and the network itself, and constructing autonomic federated systems
• Achieving end-to-end service-level management
• Accelerated implementation of new capabilities
• Collaboration and global problem-solving: distributed computing allows for more immediate sharing of information and processing power to apply complex mathematics to hard problems
• Massive simulation (weather, medical) and complex calculations such as protein folding, which require processors to run 24/7 for as long as a year at a time
5.Challenges
To create autonomic systems, researchers must address key challenges with varying levels of complexity:
• System identity: before a system can transact with other systems, it must know the extent of its own boundaries. How will we design our systems to define and redefine themselves in dynamic environments?
• Interface design: with a multitude of platforms running, system administrators face a wide variety of management interfaces. How will we build consistent interfaces and points of control while allowing for a heterogeneous environment?
• Translating business policy into IT policy: the end result needs to be transparent to the user. How will we create human interfaces that remove complexity and allow users to interact naturally with IT systems?
• Systemic approach: creating autonomic components is not enough. How can we unite a constellation of autonomic components into a federated system?
• Standards: the age of proprietary solutions is over. How can we design and support open standards that will work?
• Adaptive algorithms: new methods will be needed to equip our systems to deal with changing environments and transactions. How will we create adaptive algorithms that take previous system experience and use that information to improve the rules?
• Improving network-monitoring functions to protect security, detect potential threats and achieve a level of decision-making that allows for the redirection of key activities or data.
• Smarter microprocessors that can detect errors and anticipate failures.
6.Conclusion
The autonomic concept has been adopted by today's leading vendors and incorporated into their products. Aware that success is tied to interoperability, many are participating in the standards development necessary to provide the foundation for self-managing technological ecosystems, and are integrating standards into their technology.
IBM is making a substantial investment in the autonomic concept and has released its first wave of standards-based components, tools and knowledge capital. IBM offers a wide array of service offerings, backed by methodology and tools, which enable and support the adoption of Autonomic Computing.
Autonomic capabilities are critical to businesses with large and complex IT environments, those using Web Services and/or Service Oriented Architecture (SOA) models, and those that leverage e-business or e-commerce. They are also key enablers for smaller businesses seeking to take advantage of current technologies, because they help mask complexity by simplifying infrastructure management.

7.Future scope
Some current components and their proposed development under autonomic computing include SMS, SNMP, adaptive network routing, network congestion control, high-availability clustering, ESS, RAID, the DB optimizer, virus management and so on.
In the case of SMS, the level of sophistication is serving the world (i.e., people and business processes). It is used for policy management and for Storage Tank, a policy-managed storage system: for every file or folder, the user sets policies for availability, security and performance, and the system figures out where to put the data, what level of redundancy to use, what level of backup to keep, and so on. This is goal-oriented management. Its future goal is policy languages and protocols.
SNMP's level of sophistication is heterogeneous components interacting. It is used for Mounties (which enables goal-oriented recovery from system failure instead of procedure-oriented recovery) and for workload management. Its future goals are an autonomic computing stack, social policy, and DB/storage co-optimization.
Adaptive network routing, network congestion control and high-availability clustering have a level of sophistication of homogeneous components interacting. They are used for collective intelligence and for Storage Bricks (the idea is to have higher redundancy than RAID, to protect performance hot spots with proactive copies, and to eliminate repair for the life of the system by building extra drives into it). Their future goals are new packaging concepts for storage and subscription computing; one packaging concept changes an array of disks from a 2-D grid to a 3-D cube, with a prototype called the IceCube that is roughly the size of a medium-sized packing box, holds up to 1 petabyte (10^15 bytes), draws 250 kW, produces 75 dB of air noise, and should last for 5 years without any service.
Other components include ESS, RAID, the DB optimizer and virus management, used for eLiza, SMART/LEO (Learning in Query Optimization) and software rejuvenation. Their future goal is more of the same, done better: for example, a DB optimizer that learns from past performance, planned for the next version of DB2.

8.Bibliography
http://en.wikipedia.org/wiki/Autonomic_Computing
http://en.wikipedia.org/wiki/Self-management
http://www.ibm.com/autonomic/
http://ibm.com
http://research.ibm.com

CONTENTS
1. Introduction
2. What is autonomic computing
2.1. Self-management attributes of system components
2.2. Comparison with the present system
2.3. Eight key elements
2.4. Autonomic deployment model
3. Architectural details
4. Benefits
5. Challenges
6. Conclusion
7. Future scope
8. Bibliography
1.0 Abstract:
The autonomic controls in the human body use motor neurons to send indirect messages to the body's organs at a subconscious level. These messages regulate temperature, breathing and heart rate without conscious thought. The new model of computing, inspired by these controls in our body, is called Autonomic Computing.
Autonomic Computing systems that are self-healing will not only cut costs, but also ensure maximum system uptime and automate the management of increasingly complex systems. Autonomic computing is an approach to self-managed computing systems that work independently. With autonomic computing, applications such as server load balancing, process allocation, power-supply monitoring and automatic software updating will become possible.
Autonomous systems are based on intelligent components and objects, which are capable of self-governing actions in dynamic and heterogeneous environments that regulate themselves. The development of autonomous systems involves interdisciplinary research in: artificial intelligence, distributed systems, parallel processing, software engineering and user interface. While artificial intelligence is a vital area that will help bring about autonomic computing, such computing does not require the duplication of conscious human thought as a key goal.
2.0 What is Autonomic Computing?
Autonomic, as the name suggests, is a metaphor based on biology. The aim of using this metaphor is to express the vision of enabling something similar to be achieved in computing; in other words, to create self-management of a substantial amount of computing function, relieving users of low-level management activities and allowing them to place emphasis on the higher-level concerns of running their business, their experiments or their entertainment. That is to say, its ultimate aim is to create self-managing computer systems to overcome their rapidly growing complexity and to enable their further growth.
The need and justification for Autonomic Computing is based on the ever-increasing complexity in today's systems. It has been argued that the IT industry's single focus has been on improving hardware performance, with software burgeoning with additional features to exploit this additional capacity, at the neglect of other vital criteria. This has created a trillion-dollar industry with consumers consenting to the hardware-software upgrade cycle. Its legacy, though, is a mass of complexity within "systems of systems", resulting in an increasing financial burden per computer (often measured as the TCO: total cost of ownership). In addition to the TCO implications of complexity, complexity is a blocking force to achieving dependability. Dependability, a long-standing desirable property of all computer-based systems, integrates such attributes as reliability, availability, safety, security, survivability and maintainability. The autonomic initiatives offer a means to achieve dependability while coping with complexity.
Autonomic Computing has as its vision the creation of self-managing systems to address today's concerns of complexity and total cost of ownership while meeting tomorrow's needs for pervasive and ubiquitous computation and communication.
The multi-disciplinary nature of autonomic computing lies in automating the administration, management, deployment and implementation of information technology; the "systems" part goes without saying, since every computing or automation environment is a system. Basically, autonomic computing involves double automation. Taking business automation as an example: while normal computing/information infrastructures have already automated the business processes, an autonomic computing system aims to further automate the service and system management of the computing/information infrastructures themselves.
3.0 The Autonomic Properties: Self-Management
The essence of autonomic computing systems is self-management, the intent of which is to free system administrators from the details of system operation and maintenance and to provide users with a machine that runs at peak performance 24/7. Like their biological namesakes, autonomic systems will maintain and adjust their operation in the face of changing components, workloads, demands and external conditions, and in the face of hardware or software failures, both innocent and malicious. The autonomic system might continually monitor its own use and check for component upgrades, for example. If it deems the advertised features of the upgrades worthwhile, the system will install them, reconfigure itself as necessary, and run a regression test to make sure all is well. When it detects errors, the system will revert to the older version while its automatic problem-determination algorithms try to isolate the source of the error.
Early autonomic systems may treat these aspects as distinct, with different product teams creating solutions that address each one separately. Ultimately, these aspects will be emergent properties of a general architecture, and distinctions will blur into a more general notion of self-maintenance. The journey toward fully autonomic computing will take many years, but there are several important and valuable milestones along the path.
At first, automated functions will merely collect and aggregate information to support decisions by human administrators. Later, they will serve as advisors, suggesting possible courses of action for humans to consider. As automation technologies improve, and our faith in them grows, we will entrust autonomic systems with making, and acting on, lower-level decisions. Over time, humans will need to make relatively infrequent, predominantly higher-level decisions, which the system will carry out automatically via more numerous, lower-level decisions and actions. Ultimately, system administrators and end users will take the benefits of autonomic computing for granted. Self-managing systems and devices will seem completely natural and unremarkable, as will automated software and middleware upgrades.
A system is said to be autonomic if it incorporates the four key autonomic properties: self-configuring, self-healing, self-optimizing and self-protecting. Like the autonomic nervous system of the human body, an autonomic system should react to events as a reflex, without conscious thought.
3.1 Self-Configuring:
Installing, configuring, and integrating large, complex systems is challenging, time-consuming and error-prone even for experts. Most large Web sites and corporate data centres are haphazard accretions of servers, routers, databases, and other technologies on different platforms from different vendors. It can take teams of expert programmers months to merge two systems or to install a major e-commerce application such as SAP.
Autonomic systems will configure themselves automatically in accordance with high-level policies (representing business-level objectives, for example) that specify what is desired, not how it is to be accomplished. When a component is introduced, it will incorporate itself seamlessly, and the rest of the system will adapt to its presence, much like a new cell in the body or a new person in a population. For example, when a new component is introduced into an autonomic accounting system, it will automatically learn about and take into account the composition and configuration of the system. It will register itself and its capabilities so that other components can either use it or modify their own behaviour appropriately.
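A toy sketch of this self-registration idea, with an invented registry and component names, might look like this:

registry = {}

def register(component, capabilities):
    # The new component announces itself and what it can do.
    registry[component] = set(capabilities)
    print(f"{component} joined; known components: {sorted(registry)}")

def find_providers(capability):
    # Existing components discover the newcomer through the registry.
    return [name for name, caps in registry.items() if capability in caps]

register("ledger-service", ["post-transaction", "report-balance"])
register("tax-module", ["compute-tax"])
print(find_providers("compute-tax"))   # ['tax-module']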
3.2 Self-Healing:
In today's world of on-demand computing, the terms availability and quality of service have taken on new meanings. It is no longer sufficient for a program to be defect free; in addition, it must be operational at all hours and running at peak performance for months, or even years, at a time. In practice, this necessitates hiring staff to monitor the service and fix any problems with as little disruption to the user as possible. If the system could recover from faults and errors on its own, then the need for monitors and around-the-clock staff would be eliminated, and resources could be applied elsewhere. This is the idea behind self-healing; that is, developing a system that can recover from faults and resume service seamlessly without human interaction.
IBM and other IT vendors have large departments devoted to identifying, tracing, and determining the root cause of failures in complex computing systems. Serious customer problems can take teams of programmers several weeks to diagnose and fix, and sometimes the problem disappears mysteriously without any satisfactory diagnosis.
Autonomic computing systems will detect, diagnose, and repair localized problems resulting from bugs or failures in software and hardware, perhaps through a regression tester. Using knowledge about the system configuration, a problem-diagnosis component (based on a Bayesian network, for example) would analyze information from log files, possibly supplemented with data from additional monitors that it has requested. The system would then match the diagnosis against known software patches (or alert a human programmer if there are none), install the appropriate patch, and retest.
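The following highly simplified sketch (the symptom strings, patch names and retest step are all invented) captures that diagnose, match, patch and retest flow:

KNOWN_PATCHES = {"null-pointer-in-login": "patch-4711"}

def diagnose(log_lines):
    # Stand-in for a real diagnosis component (e.g. a Bayesian network).
    return "null-pointer-in-login" if any("NullPointer" in line for line in log_lines) else None

def self_heal(log_lines, retest):
    diagnosis = diagnose(log_lines)
    if diagnosis is None:
        return "no fault found"
    patch = KNOWN_PATCHES.get(diagnosis)
    if patch is None:
        return f"alert a human programmer: no patch known for {diagnosis}"
    print(f"applying {patch} for {diagnosis}")
    return "healed" if retest() else "patch failed, reverting"

print(self_heal(["ERROR NullPointer at Login.java:42"], retest=lambda: True))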
3.3 Self-Optimizing:
The performance of a system or an application depends on its goal and current configuration. When the configuration changes or a new application is introduced, a system often requires fine-tuning in order to get the best performance. This is another task system administrators are faced with when change is introduced to a networked environment such as a cluster.
Complex middleware, such as WebSphere, or database systems, such as Oracle or DB2, may have hundreds of tuneable parameters that must be set correctly for the system to perform optimally, yet few people know how to tune them. Such systems are often integrated with other, equally complex systems. Consequently, performance-tuning one large subsystem can have unanticipated effects on the entire system.
Autonomic systems will continually seek ways to improve their operation, identifying and seizing opportunities to make themselves more efficient in performance or cost. Just as muscles become stronger through exercise, and the brain modifies its circuitry during learning, autonomic systems will monitor, experiment with, and tune their own parameters and will learn to make appropriate choices about keeping functions or outsourcing them. They will proactively seek to upgrade their function by finding, verifying, and applying the latest updates. An autonomic system is self-optimizing and once again frees the system administrator from this burden. In the context of clusters, the key optimization issue is load balancing. Load balancing is deciding where to assign new processes so that the resources in the cluster are efficiently used to provide the best performance.
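A minimal illustration of that load-balancing decision, with invented node names and load figures, is to place each new process on the node with the most spare capacity:

node_load = {"node-a": 0.72, "node-b": 0.35, "node-c": 0.90}

def place_process(estimated_load):
    # Assign the new process to the least-loaded node in the cluster.
    target = min(node_load, key=node_load.get)
    node_load[target] += estimated_load
    return target

print(place_process(0.10))   # node-b
print(place_process(0.10))   # still node-b, until another node has more headroom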
3.4 Self-Protecting:
Despite the existence of firewalls and intrusion detection tools, humans must at present decide how to protect systems from malicious attacks and inadvertent cascading failures. Autonomic systems will be self-protecting in two senses. They will defend the system as a whole against large-scale, correlated problems arising from malicious attacks or cascading failures that remain uncorrected by self-healing measures. They also will anticipate problems based on early reports from sensors and take steps to avoid or mitigate them.
An autonomic system must provide security in order to prevent attacks and protect private information. The system must take a proactive approach and be self-protecting. It can recognize intrusion attempts and prevent them by itself. This avoids the unnecessary loss of time inherent in current systems, where an intrusion attempt must first be found and then patched. If the system alone can handle the encounter, the intrusion can be stopped immediately and the damage contained. In this way, the system offers greater security and confidence than traditional approaches.
Redundancy and encryption are used to realize self-protection. Redundancy is used to protect the system from failure so that it can use its properties of self-healing to recover.
Table 1. Four aspects of self-management as they are now and as they would be with autonomic computing.

Concept | Current computing | Autonomic computing
Self-configuration | Corporate data centres have multiple vendors and platforms. Installing, configuring and integrating systems is time-consuming and error-prone. | Automated configuration of components and systems follows high-level policies. The rest of the system adjusts automatically and seamlessly.
Self-healing | Problem determination in large, complex systems can take a team of programmers weeks. | The system automatically detects, diagnoses and repairs localized software and hardware problems.
Self-optimization | Systems have hundreds of manually set, nonlinear tuning parameters, and their number increases with each release. | Components and systems continually seek opportunities to improve their own performance and efficiency.
Self-protection | Detection of and recovery from attacks and cascading failures is manual. | The system automatically defends against malicious attacks or cascading failures. It uses early warning to anticipate and prevent system-wide failures.
4.0 The Eight Elements of Autonomic Computing:
The various elements of Autonomic computing are:
4.1 TO BE AUTONOMIC, A COMPUTING SYSTEM NEEDS TO KNOW ITSELF, AND COMPRISE COMPONENTS THAT ALSO POSSESS A SYSTEM IDENTITY.
Since a system can exist at many levels, an autonomic system will need detailed knowledge of its components, current status, ultimate capacity, and all connections with other systems to govern itself. It will need to know the extent of its owned resources, those it can borrow or lend, and those that can be shared or should be isolated.
Such system definition might seem simple, and when a computer system meant one room-filling machine, or even hundreds of smaller machines networked within the walls of one company, it was. But link those hundreds of computers to millions more over the Internet, make them interdependent, and allow a global audience to link back to those hundreds of computers via a proliferating selection of access devices (cell phones, TVs, intelligent appliances), and we have blurred the once-clear concept of a system. Start allowing all those devices to share processing cycles, storage and other resources, add to that the possibility of utility-like leasing of computing services, and we arrive at a situation that would seem to defy any definition of a single system. But it's precisely this awareness at an overall system-wide level that autonomic computing requires. A system can't monitor what it doesn't know exists, or control specific points if its domain of control remains undefined.
To build this ability into computing systems, clearly defined policies embodied in adaptable software agents will have to govern a system's definition of itself and its interaction with IT systems around it. These systems will also need the capacity to merge automatically with other systems to form new ones, even if only temporarily, and break apart if required into discrete systems.
4.2 AN AUTONOMIC COMPUTING SYSTEM MUST CONFIGURE AND RECONFIGURE ITSELF UNDER VARYING AND UNPREDICTABLE CONDITIONS.
System configuration or setup must occur automatically, as must dynamic adjustments to that configuration to best handle changing environments. Given the possible permutations in complex systems, configuration can be difficult and time-consuming; some servers alone present hundreds of configuration alternatives. Human system administrators will never be able to perform dynamic reconfiguration, as there are too many variables to monitor and adjust in too short a period of time, often minutes if not seconds. To enable this automatic configuration ability, a system may need to create multiple images of critical software, such as an operating system (a kind of software cloning), and reallocate its resources (such as memory, storage, communications bandwidth, and processing) as needed. If it is a globally distributed system, it will need to leverage its multiple images and backup copies to recover from failures in localized parts of its network. Adaptive algorithms running on such systems could learn the best configurations to achieve mandated performance levels.
4.3 AN AUTONOMIC COMPUTING SYSTEM NEVER SETTLES FOR THE STATUS QUO; IT ALWAYS LOOKS FOR WAYS TO OPTIMIZE ITS WORKINGS.
It will monitor its constituent parts and fine-tune workflow to achieve predetermined system goals, much as a conductor listens to an orchestra and adjusts its dynamic and expressive characteristics to achieve a particular musical interpretation. This consistent effort to optimize itself is the only way a computing system will be able to meet the complex and often conflicting IT demands of a business, its customers, suppliers and employees. And since the priorities that drive those demands change constantly, only constant self-optimization will satisfy them. Self-optimization will also be a key to enabling the ubiquitous availability of e-sourcing, or the delivery of computing services in a utility-like manner. E-sourcing promises predictable costs and simplified access to computing for IT customers. But to be able to optimize itself, a system will need advanced feedback control mechanisms to monitor its metrics and take appropriate action. Although feedback control is an old technique, we'll need new approaches to apply it to computing. We'll need to answer questions such as how often a system takes control actions, how much delay it can accept between an action and its effect, and how all this affects overall system stability. Innovations in applying control theory to computing must occur in tandem with new approaches to overall systems architecture, yielding systems designed with control objectives in mind. Algorithms seeking to make control decisions must have access to internal metrics. And like the tuning knobs on a radio, control points must affect the source of those internal metrics. Most important, all the components of an autonomic system, no matter how diverse, must be controllable in a unified manner.
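As a toy example of feedback control applied to a computing metric (the gain, the target and the thread-pool knob are all invented), a proportional controller can nudge a tuning parameter toward a response-time target:

def proportional_step(observed, target, knob, gain=0.1):
    # If response time is above target, grow the knob (e.g. add worker threads).
    error = observed - target
    return max(1, round(knob + gain * error))

knob = 8                                  # current thread-pool size
for observed_ms in (260, 240, 215, 205):  # measured response times in milliseconds
    knob = proportional_step(observed_ms, target=200, knob=knob)
    print("thread-pool size ->", knob)

The questions raised above (how often to act, how much delay to tolerate, how to keep the system stable) correspond directly to the sampling interval, the measurement lag and the gain in even a sketch this small.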
4.4 AN AUTONOMIC COMPUTING SYSTEM MUST PERFORM SOMETHING AKIN TO HEALING; IT MUST BE ABLE TO RECOVER FROM ROUTINE AND EXTRAORDINARY EVENTS THAT MIGHT CAUSE SOME OF ITS PARTS TO MALFUNCTION.
It must be able to discover problems or potential problems, then find an alternate way of using resources or reconfiguring the system to keep functioning smoothly. Instead of growing replenishment parts, as our cells do, healing in a computing system means calling into action redundant or underutilized elements to act as replacement parts. Of course, certain types of healing have been a part of computing for some time. Error checking and correction, a technology more than 50 years old, enables transmission of data over the Internet to remain remarkably reliable, and redundant storage systems like RAID allow data to be recovered even when parts of the storage system fail. But the growing complexity of today's IT environment makes it more and more difficult to locate the actual cause of a breakdown, even in relatively simple environments. We see this even with personal computers: how many times is the solution to a problem "shut down, reboot and see if it helps"? In more complex systems, identifying the causes of failures calls for root-cause analysis (an attempt to systematically examine what did what to whom and to home in on the origin of the problem). But since restoring service to the customer and minimizing interruptions is the primary concern, an action-oriented approach (determining what immediate actions need to be taken given the information currently available) will need to take precedence in an autonomic solution. Initially, healing responses taken by an autonomic system will follow rules generated by human experts. But as we embed more intelligence in computing systems, they will begin to discover new rules on their own that help them use system redundancy or additional resources to recover and achieve the primary objective: meeting the goals specified by the user.
4.5 A VIRTUAL WORLD IS NO LESS DANGEROUS THAN THE PHYSICAL ONE, SO AN AUTONOMIC COMPUTING SYSTEM MUST BE AN EXPERT IN SELF-PROTECTION.
It must detect, identify and protect itself against various types of attacks to maintain overall system security and integrity. Before the Internet, computers operated as islands. It was fairly easy then to protect computer systems from attacks that became known as viruses. As the floppy disks used to share programs and files needed to be physically mailed or brought to other users, it took weeks or months for a virus to spread. The connectivity of the networked world changed all that. Attacks can now come from anywhere. And viruses spread quickly (in seconds) and widely, since they're designed to be sent automatically to other users. The potential damage to a company's data, image and bottom line is enormous. More than simply responding to component failure, or running periodic checks for symptoms, an autonomic system will need to remain on alert, anticipate threats, and take necessary action. Such responses need to address two types of attacks: viruses and system intrusions by hackers.
By mimicking the human immune system, a digital immune system (an approach that exists today) can detect suspicious code, automatically send it to a central analysis center, and distribute a cure to the computer system. The whole process takes place without the user being aware such protection is in progress. To deal with malicious attacks by hackers, intrusion systems must automatically detect and alert system administrators to the attacks. Currently, computer security experts must then examine the problem, analyze it and repair the system. As the scale of computer networks and systems keeps expanding and the likelihood of hacker attacks increases, we will need to automate the process even further. There won't be enough experts to handle each incident.
4.6 AN AUTONOMIC COMPUTING SYSTEM KNOWS ITS ENVIRONMENT AND THE CONTEXT SURROUNDING ITS ACTIVITY, AND ACTS ACCORDINGLY.
This is almost self-optimization turned outward: an autonomic system will find and generate rules for how best to interact with neighboring systems. It will tap available resources, even negotiate the use by other systems of its underutilized elements, changing both itself and its environment in the process; in a word, adapting. This context-sensitivity includes improving service based on knowledge about the context of a transaction. Such ability will enable autonomic systems to maintain reliability under a wide range of anticipated circumstances and combinations of circumstances (one day perhaps covering even unpredictable events). But more significantly, it will enable them to provide useful information instead of confusing data. For instance, delivering all the data necessary to display a sophisticated web page would be obvious overkill if the user was connected to the network via a small-screen cell phone and wanted only the address of the nearest bank. Or a business system might report changes in the cost of goods immediately to a salesperson in the middle of writing a customer proposal, where normally weekly updates would have sufficed. Autonomic systems will need to be able to describe themselves and their available resources to other systems, and they will also need to be able to automatically discover other devices in the environment. Current efforts to share supercomputer resources via a grid that connects them will undoubtedly contribute technologies needed for this environment-aware ability. Advances will also be needed to make systems aware of a user's actions, along with algorithms that allow a system to determine the best response in a given context.
4.7 AN AUTONOMIC COMPUTING SYSTEM CANNOT EXIST IN A HERMETIC ENVIRONMENT.
While independent in its ability to manage itself, an autonomic computing system must function in a heterogeneous world and implement open standards; in other words, an autonomic computing system cannot, by definition, be a proprietary solution. In nature, all sorts of organisms must coexist and depend upon one another for survival (and such biodiversity actually helps stabilize the ecosystem). In today's rapidly evolving computing environment, an analogous coexistence and interdependence is unavoidable. Businesses connect to suppliers, customers and partners. People connect to their banks, travel agents and favorite stores, regardless of the hardware they have or the applications they are using. As technology improves, we can only expect new inventions and new devices, and an attendant proliferation of options and interdependency.
Current collaborations in computer science to create additional open standards have allowed new types of sharing: innovations such as Linux, an open operating system; Apache, an open web server; UDDI, a standard way for businesses to describe themselves, discover other businesses and integrate with them; and, from the Globus project, a set of protocols to allow computer resources to be shared in a distributed (or grid-like) manner. These community efforts have accelerated the move toward open standards, which allow for the development of tools, libraries, device drivers, middleware, applications, etc., for these platforms. Advances in autonomic computing systems will need a foundation of such open standards. Standard ways of system identification, communication and negotiation (perhaps even new classes of system-neutral intermediaries or agents specifically assigned the role of cyber-diplomats to regulate conflicting resource demands) need to be invented and agreed on.
4.8 PERHAPS MOST CRITICAL FOR THE USER, AN AUTONOMIC COMPUTING SYSTEM WILL ANTICIPATE THE OPTIMIZED RESOURCES NEEDED WHILE KEEPING ITS COMPLEXITY HIDDEN.
This is the ultimate goal of autonomic computing: the marshaling of IT resources to shrink the gap between the business or personal goals of our customers and the IT implementation necessary to achieve those goals, without involving the user in that implementation. Today our customers must adapt to a computing system by learning how to use it, how to interact with it, and how to collect, compare and interpret the various types of information it returns before deciding what to do. Even custom-made solutions rarely interact seamlessly with all a company's other IT systems, let alone all its data and documents. While some aspects of computing have improved for general users (graphical interfaces, for instance, are far easier for most people to use than command prompts and their corresponding dictionaries of commands), tapping the full potential of entire IT systems is still too difficult. But does this mean autonomic computing systems must begin to possess human intelligence so as to anticipate, perhaps even dictate, a user's IT needs? No. Think again of the analogy of our bodies and in particular one aspect of the autonomic nervous system responsible for what is commonly known as the fight-or-flight response. When faced with a potentially dangerous or urgent situation, our autonomic nervous system anticipates the potential danger before we become aware of it. It then optimizes our bodies for a selection of appropriate responses: specifically, the autonomic nervous system triggers our adrenal glands to flood the body with adrenaline, a hormone that supercharges the ability of our muscles to contract, increases our heart rate and breathing, and generally constricts blood vessels to increase our blood pressure (while dilating those that feed key areas such as the skeletal muscles). The net result: our body is superbly prepped for action, but our conscious mind remains unaware of anything but the key pieces of information required to decide whether to stay and act (the fight response) or run for the hills. An autonomic system will allow for that kind of anticipation and support. It will deliver essential information with a system optimized and ready to implement the decisions users make, and not needlessly entangle them in coaxing results from the system.
Realistically, such systems will be very difficult to build and will require significant exploration of new technologies and innovations. That's why we view this as a Grand Challenge for the entire IT industry. We'll need to make progress along two tracks: making individual system components autonomic, and achieving autonomic behavior at the level of global enterprise IT systems (extremely challenging). Unless each component in a system can share information with every other part and contribute to some overall system awareness and regulation, the goal of autonomic computing will not really be reached. So one huge technical challenge entails figuring out how to create this global system awareness and management.
5.0 Architectural Considerations:
Autonomic systems will be interactive collections of autonomic elements: individual system constituents that contain resources and deliver services to humans and other autonomic elements. Autonomic elements will manage their internal behaviour and their relationships with other autonomic elements in accordance with policies that humans or other elements have established. System self-management will arise at least as much from the myriad interactions among autonomic elements as from the internal self-management of the individual autonomic elements, just as the social intelligence of an ant colony arises largely from the interactions among individual ants. A distributed, service-oriented infrastructure will support autonomic elements and their interactions.
As Figure 1 shows, an autonomic element will typically consist of one or more managed elements coupled with a single autonomic manager that controls and represents them. The managed element will essentially be equivalent to what is found in ordinary non-autonomic systems, although it can be adapted to enable the autonomic manager to monitor and control it. The managed element could be a hardware resource, such as storage, a CPU, or a printer, or a software resource, such as a database, a directory service, or a large legacy system. At the highest level, the managed element could be an e-utility, an application service, or even an individual business. The autonomic manager distinguishes the autonomic element from its non-autonomic counterpart. By monitoring the managed element and its external environment, and constructing and executing plans based on an analysis of this information, the autonomic manager will relieve humans of the responsibility of directly managing the managed element.
Fully autonomic computing is likely to evolve as designers gradually add increasingly sophisticated autonomic managers to existing managed elements. Ultimately, the distinction between the autonomic manager and the managed element may become merely conceptual rather than architectural, or it may melt away, leaving fully integrated autonomic elements with well-defined behaviours and interfaces, but also with few constraints on their internal structure. Each autonomic element will be responsible for managing its own internal state and behaviour and for managing its interactions with an environment that consists largely of signals and messages from other elements and the external world. An element's internal behaviour and its relationships with other elements will be driven by goals that its designer has embedded in it, by other elements that have authority over it, or by subcontracts to peer elements with its tacit or explicit consent. The element may require assistance from other elements to achieve its goals. If so, it will be responsible for obtaining necessary resources from other elements and for dealing with exception cases, such as the failure of a required resource. Autonomic elements will function at many levels, from individual computing components such as disk drives, to small-scale computing systems such as workstations or servers, to entire automated enterprises in the largest autonomic system of all: the global economy.
At the lower levels, an autonomic element's range of internal behaviours and relationships with other elements, and the set of elements with which it can interact, may be relatively limited and hard-coded. Particularly at the level of individual components, well-established techniques, many of which fall under the rubric of fault tolerance, have led to the development of elements that rarely fail, which is one important aspect of being autonomic. Decades of developing fault-tolerance techniques have produced such engineering feats as the IBM zSeries servers, which have a mean time to failure of several decades.
At the higher levels, fixed behaviours, connections, and relationships will give way to increased dynamism and flexibility. All these aspects of autonomic elements will be expressed in higher-level, goal-oriented terms, leaving the elements themselves with the responsibility for resolving the details on the fly.
Hard-coded behaviours will give way to behaviours expressed as high-level objectives, such as "maximize this utility function" or "find a reputable message translation service". Hardwired connections among elements will give way to increasingly less direct specifications of an element's partners: from specification by physical address, to specification by name, and finally to specification by function, with the partner's identity being resolved only when it is needed. Hard-wired relationships will evolve into flexible relationships that are established via negotiation.
Fig 1: Structure of an autonomic element (autonomic manager coupled to a managed element)
Elements will automatically handle new modes of failure, such as contract violation by a supplier, without human intervention. While service-oriented architectural concepts like Web and grid services will play a fundamental role, a sufficient foundation for autonomic computing requires more. First, as service providers, autonomic elements will not unquestioningly honour requests for service, as would typical Web services or objects in an object-oriented environment. They will provide a service only if providing it is consistent with their goals. Second, as consumers, autonomic elements will autonomously and proactively issue requests to other elements to carry out their objectives. Finally, autonomic elements will have complex life cycles, continually carrying on multiple threads of activity, and continually sensing and responding to the environment in which they are situated. Autonomy, proactivity, and goal-directed interactivity with their environment are distinguishing characteristics of software agents. Viewing autonomic elements as agents and autonomic systems as multi-agent systems makes it clear that agent-oriented architectural concepts will be critically important.
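The first of these points, that an element should weigh each request against its own goals rather than honour it unconditionally, can be sketched very simply. In the Python fragment below, the capacity, pricing figures and class names are invented purely for illustration; a counter-offer stands in for a full negotiation protocol.

class ServiceRequest:
    def __init__(self, client, units, offered_price):
        self.client = client
        self.units = units
        self.offered_price = offered_price

class AutonomicProvider:
    """Accepts a request only if doing so is consistent with its own goals."""
    def __init__(self, capacity=100, min_price_per_unit=0.5):
        self.capacity = capacity
        self.min_price = min_price_per_unit
    def consider(self, req):
        fits = req.units <= self.capacity
        profitable = req.offered_price >= self.min_price * req.units
        if fits and profitable:
            self.capacity -= req.units       # commit the resources
            return "accepted"
        # answer with a counter-offer instead of silently failing: a crude negotiation
        return f"counter-offer: {self.min_price * req.units:.2f}"

provider = AutonomicProvider()
print(provider.consider(ServiceRequest("elem-A", 40, 30.0)))   # accepted
print(provider.consider(ServiceRequest("elem-B", 40, 10.0)))   # counter-offer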
6.0 Autonomic Computing vs. Proactive Computing:
Autonomic and proactive computing both provide solutions to issues that limit the growth of today's computing systems. In the 1990s, the ubiquitous computing vision extended what has been traditionally called distributed systems, a field in which the application focus has been primarily office automation.
To date, the natural growth path for systems has been in supporting technologies such as data storage density, processing capability, and per-user network bandwidth, with growth increasing annually for 20 years by roughly a factor of 2 (disk capacity), 1.6 (Moore's Law), and 1.3 (personal networking; modem to DSL [Digital Subscriber Line]), respectively. The usefulness of Internet and intranet networks has fuelled the growth of computing applications and in turn the complexity of their administration. The IBM autonomic vision seeks to solve some of the problems from this complexity by using eight principles of system design to overcome current limitations. These principles include the ability of systems to self-monitor, self-heal, self-configure, and improve their performance. Furthermore, systems should be aware of their environment, defend against attack, communicate with use of open standards, and anticipate user actions. The design principles can be applied both to individual components and to systems as a whole, the latter providing a holistic benefit that satisfies a larger number of users.
Intel Research supports the aims of autonomic systems while at the same time considering how computing systems will be used in the future. To date, the familiar personal computer (PC) infrastructure has been applied most effectively in the realm of the office and the home. Going forward, they are intrigued by other areas of human endeavour that are ripe for the application of computer-based technology. Proactive computing extends the horizon by recognizing a need to monitor and shape the physical world, targeting professions that have complex real-world interactions but are currently limited by the degree of human involvement required. They are addressing some of the challenges that exist beyond the scope of earlier ubiquitous computer systems to enable future environments involving thousands of networked computers per person. Proactive system design is guided by seven underlying principles: connecting with the physical world, deep networking, macro-processing, dealing with uncertainty, anticipation, closing the control loop, and making systems personal.
An emphasis on human-supervised systems, rather than human-controlled or completely automatic systems, is an overarching theme within proactive computing. Computer-to-user ratios have been changing over time: 1:many turned into 1:1 with the advent of the PC in the 1980s, and into many:1 with the explosion of mobile devices in the new millennium. Currently, most people in the United States typically own (sometimes indirectly) many tens of computers, ranging from portable devices to consumer electronics. These systems compete for human attention, an increasingly scarce resource in modern living. Before the sheer number of devices overwhelms us, solutions need to be found to remove people from the control loop wherever possible, elevating their interaction to a supervisory role. One way would be to use pure artificial intelligence, a lofty goal that will not be attainable in the near future. Proactive computing, therefore, focuses on human-supervised operation, where the user stays out of the loop as much as possible until required to provide guidance in critical decisions.
A simple present-day example that illustrates a human-supervised system is a modern home central heating system. Such systems typically have a simple regime for morning, day, evening, and night temperature settings. Normally, the system operates untended and unnoticed; however, users can readily override these settings at any time if they feel hot or cold, or to address an impending energy crisis. Furthermore, if the system were instrumented with a sensor network and knowledge of a family's calendar, the temperature and energy consumption could be optimized proactively to allow for in-house microclimates, late workdays, and family vacations. However, extending this example to more complex systems is quite a challenge: most decisions do not simply become a selection between too hot or too cold.
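A minimal sketch of such a human-supervised loop might look as follows. The schedule values and class name are made up, and a real system would of course read sensors and drive a heater rather than just return a set-point; the point is only that automatic operation continues until a supervisor chooses to intervene.

SCHEDULE = {"morning": 20.0, "day": 18.0, "evening": 21.0, "night": 16.0}  # degrees C

class Thermostat:
    """Runs untended on a schedule, but a human override always wins (supervision)."""
    def __init__(self):
        self.override = None   # None means the human stays out of the loop
    def set_override(self, temperature):
        self.override = temperature          # occupant feels hot or cold and intervenes
    def clear_override(self):
        self.override = None
    def target(self, period):
        return self.override if self.override is not None else SCHEDULE[period]

t = Thermostat()
print(t.target("day"))        # 18.0, automatic operation
t.set_override(23.0)          # human steps in
print(t.target("day"))        # 23.0, the supervisor's decision wins
t.clear_override()
print(t.target("night"))      # 16.0, back to unattended operation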
As illustrated in Figure 2, there is considerable intellectual overlap between research into autonomic and proactive systems. Both autonomic and proactive systems are necessary to provide us with tools to advance the design of computing systems in a wide range of new fields.
Fig 2: Overlap between autonomic and proactive computing research
7.0 Case Studies
1. Autonomous Unmanned Spacecraft
2. ANTS - A Concept Mission by NASA
7.1 Case Study: Autonomous Unmanned Spacecraft
The autonomic vehicle concept is similar to the autonomic computing paradigm initiated by IBM to make future computing systems self-managing and self-optimizing, to eliminate the expensive management services needed today. The computing systems considered in that activity consist of large collections of computing engines, storage devices, visualization facilities, operating systems, middleware, and application software.
An autonomic air vehicle can be piloted or uninhabited, and will exhibit a number of advanced characteristics. The vehicle will be self-defining, in that it will have detailed knowledge of its components, current status, internal constraints, ultimate performance, and its relation to other vehicles and to the airspace system. It will be able to reconfigure itself under varying and unpredictable conditions. For example, it will reconfigure wing and airframe geometry to satisfy requirements for a wide range of flight speeds and manoeuvres.
The vehicle will look for ways to optimize its performance across the entire flight regime. It will monitor subsystems, components, and metrics by using advanced feedback control mechanisms and will make changes to achieve predetermined performance goals. Flexible, highly adaptive structures and active sensing materials will enable it to adapt for optimum performance. The aircraft will be able to recover gracefully from routine and extraordinary events that might cause some components to malfunction or take damage.
Self-learning concepts will be incorporated into flight-control software to discover problems and to reconfigure the system to keep functioning smoothly. The vehicle will collect, analyze, and share information about itself and its local environment with other craft in the air and with supervisors on the ground to enable a coordinated and optimized airspace system.
The realization of the autonomic vehicle concept requires a paradigm shift in some technologies. For example, current flutter-free designs based on the idea of aeroelastic avoidance result in stiff and heavy vehicles. That idea must be replaced by aeroelastic exploitation, a controlled, flexible, and continuously self-adapting configuration that will enable an expanded operational envelope. Passive materials that have limited properties will be replaced by active multifunctional materials that can adapt their properties to changing environments and significantly enhance structural performance.
7.2 Case Study: ANTS - Autonomous Nanotechnology Swarm
7.2.1 Swarms and Intelligence
Swarms consist of a large number of simple entities that have local interactions (including interactions with the environment). The result of the combination of simple behaviours (the microscopic behaviour) is the emergence of complex behaviour (the macroscopic behaviour) and the ability to achieve significant results as a team. Intelligent swarm technology is based on swarm technology where the individual members of the swarm also exhibit independent intelligence. With intelligent swarms, members of the swarm may be heterogeneous or homogeneous. Even if members start as homogeneous, due to their differing environments they may learn different things, develop different goals, and therefore become a heterogeneous swarm. Intelligent swarms may also be made up of heterogeneous elements from the outset, reflecting different capabilities as well as a possible social structure. Agent swarms are being used as a computer modelling technique and have also been used as a tool to study complex systems.
Examples of simulations that have been undertaken include swarms of birds, as well as business and economics and ecological systems. In swarm simulations, each of the agents is given certain parameters that it tries to maximize. In terms of bird swarms, each bird tries to find another bird to fly with, and then flies off to one side and slightly higher to reduce its drag. Eventually the birds form flocks. Other types of swarm simulations have been developed that exhibit unlikely emergent behaviour. These emergent behaviours are the sums of often simple individual behaviours, but, when aggregated, form complex and often unexpected behaviours. Swarm behaviour is also being investigated for use in such
applications as telephone switching, network routing, data categorizing, and shortest path optimizations.
Swarm intelligence techniques (note the slight difference in terminology from intelligent swarms) are population-based stochastic methods used in combinatorial optimization problems, where the collective behaviour of relatively simple individuals arises from their local interactions with their environment to give rise to the emergence of functional global patterns. Swarm intelligence represents a metaheuristic approach to solving a wide variety of problems.
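One well-known swarm-intelligence technique in this family is particle swarm optimisation. The short Python sketch below is a generic textbook-style implementation, not tied to any system described here: simple particles whose only rules are a pull towards their own best position and towards the swarm's best position, yet which collectively locate the minimum of a function.

import random

def sphere(x):
    # toy objective: minimum at the origin
    return sum(v * v for v in x)

def pso(objective, dim=2, swarm_size=20, iterations=100):
    """Minimal particle swarm optimisation: simple agents, personal and social pull."""
    particles = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm_size)]
    velocities = [[0.0] * dim for _ in range(swarm_size)]
    personal_best = [p[:] for p in particles]
    global_best = min(personal_best, key=objective)
    for _ in range(iterations):
        for i, p in enumerate(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                velocities[i][d] = (0.7 * velocities[i][d]
                                    + 1.5 * r1 * (personal_best[i][d] - p[d])
                                    + 1.5 * r2 * (global_best[d] - p[d]))
                p[d] += velocities[i][d]
            if objective(p) < objective(personal_best[i]):
                personal_best[i] = p[:]
        global_best = min(personal_best, key=objective)
    return global_best

best = pso(sphere)
print("best position found:", best, "value:", sphere(best))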
Swarm robotics refers to the application of swarm intelligence techniques to the analysis of swarms where the embodiment of the agents is as physical robotic devices.
7.2.2 NASA Swarm Technologies
Future NASA missions will exploit new paradigms for space exploration, heavily focused on the (still) emerging technologies of autonomous and autonomic systems. Traditional mission concepts, reliant on one large spacecraft, are being complemented with mission concepts that involve several smaller spacecraft, operating in collaboration, analogous to swarms in nature. This offers several advantages: the ability to send spacecraft to explore regions of space where traditional craft simply would be impractical, greater redundancy (and, consequently, greater protection of assets), and reduced costs and risk, to name but a few. Planned missions entail the use of several unmanned autonomous vehicles (UAVs) flying approximately one meter above the surface of Mars, which will cover as much of the surface of Mars in three seconds as the now famous Mars rovers did in their entire time on the planet; the use of armies of tetrahedral walkers to explore the Martian and Lunar surface; constellations of satellites flying in formation; and the use of miniaturized pico-class spacecraft to explore the asteroid belt.
These new approaches to exploration missions simultaneously pose many challenges. The missions will be unmanned and necessarily highly autonomous. They will also exhibit the classic properties of autonomic systems, being self-protecting, self-healing, self-configuring, and self-optimizing. Many of these missions will be sent to parts of the solar system where manned missions are simply not possible, and to where the round-trip delay for communications to spacecraft exceeds 40 minutes, meaning that the decisions on responses to problems and undesirable situations must be made in situ rather than from ground control on Earth. The degree of autonomy that such missions will possess would require a prohibitive amount of testing in order to accomplish system verification. Furthermore, learning and adaptation towards continual improvements in performance will mean that emergent behaviour patterns simply cannot be fully predicted through the use of traditional system development methods. The result is that formal specification techniques and formal verification will play vital roles in the future development of NASA space exploration missions.
7.2.3 ANTS: A Concept Mission
Autonomous Nano Technology Swarm (ANTS) is a joint NASA Goddard Space Flight Centre and NASA Langley Research Centre collaboration to develop revolutionary mission architectures and exploit artificial intelligence techniques and paradigms in future space exploration. The mission will make use of swarm technologies for both spacecraft and surface-based rovers. ANTS consists of a number of concept missions:
SARA: The Saturn Autonomous Ring Array will launch 1000 pico-class spacecraft, organized as ten subswarms, each with specialized instruments, to perform in situ exploration of Saturn's rings, in order to understand their constitution and how they were formed. The concept mission will require self-configuring structures for nuclear propulsion and control, which lie beyond the scope of this paper. Additionally, autonomous operation is necessary both for manoeuvring around Saturn's rings and for collision avoidance.
PAM: Prospecting Asteroid Mission will also launch 1000 pico-class spacecraft, but here with the aim of exploring the asteroid belt and collecting data on particular asteroids of interest.
LARA: ANTS Application Lunar Base Activities will exploit new NASA-developed technologies in the field of miniaturized robotics, which may form the basis of remote landers to be launched to the moon from remote sites, and may exploit innovative techniques to allow rovers to move in an amoeboid-like fashion over the moon's uneven terrain.
7.2.3.1 PAM The ANTS PAM (Prospecting Asteroid Mission) concept mission will involve the launch of a swarm of autonomous pico-class (approximately 1 kg) spacecraft that will explore the asteroid belt for asteroids with certain characteristics. Figure 5 gives an overview of the PAM mission concept. In this mission, a transport ship, launched from Earth, will travel to a point in space where gravitational forces on small objects (such as pico-class spacecraft) are all but negligible. From this point, termed a Lagrangian point, 1000 spacecraft, which will have been assembled en route from Earth, will be launched into the asteroid belt. As much as 60 to 70 percent of them are expected to be lost during the mission, primarily because of collisions with each other or with an asteroid during exploration operations, since, having only solar sails to provide thrust, their ability to manoeuvre will be severely limited. Because of their small size, each spacecraft will carry just one specialized instrument for collecting a specific type of data from asteroids in the belt. Approximately 80 percent of the spacecraft will be workers that will carry the specialized instruments (e.g., a magnetometer or an x-ray, gamma-ray, visible/IR, or neutral mass spectrometer) and will obtain specific types of data. Some will be coordinators (called rulers) that have rules that decide the types of asteroids and data the mission is interested in and that will coordinate the efforts of the workers. The third type of spacecraft are messengers that will coordinate communication between the rulers and workers,
and communications with the Earth ground station. The swarm will form sub-swarms under the control of a ruler, which contains models of the types of science that it wants to perform. The ruler will coordinate workers, each of which uses its individual instrument to collect data on specific asteroids and feed this information back to the ruler, who will determine which asteroids are worth examining further. If the data matches the profile of a type of asteroid that is of interest, an imaging spacecraft will be sent to the asteroid to ascertain the exact location and to create a rough model to be used by other spacecraft for manoeuvring around the asteroid. Other teams of spacecraft will then coordinate to finish mapping the asteroid to form a complete model.
Fig 3: Overview of the PAM mission concept
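The ruler/worker/messenger division of labour could be caricatured as follows. The instrument names, interest thresholds and asteroid identifiers are invented, and messenger relaying is reduced to a direct method call purely to keep the sketch short.

import random

INTEREST_PROFILE = {"iron_signature": 0.6, "water_ice": 0.3}   # invented thresholds

class Worker:
    def __init__(self, instrument):
        self.instrument = instrument
    def measure(self, asteroid_id):
        # pretend to take a reading with this worker's single instrument
        return {"asteroid": asteroid_id, self.instrument: random.random()}

class Ruler:
    """Holds the science model and decides which asteroids deserve a closer look."""
    def __init__(self):
        self.reports = {}
    def receive(self, report):
        self.reports.setdefault(report["asteroid"], {}).update(report)
    def interesting(self, asteroid_id):
        data = self.reports.get(asteroid_id, {})
        return all(data.get(k, 0.0) >= v for k, v in INTEREST_PROFILE.items())

ruler = Ruler()
workers = [Worker("iron_signature"), Worker("water_ice")]
for asteroid in ["2004-X1", "2004-X2"]:
    for w in workers:
        ruler.receive(w.measure(asteroid))       # a messenger would relay this in practice
    if ruler.interesting(asteroid):
        print(asteroid, "-> dispatch imaging spacecraft")
    else:
        print(asteroid, "-> ignore")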
7.2.3.2 SMART The ANTS SMART (Super Miniaturized Addressable Reconfigurable Technology) architectures were initiated at Goddard Space Flight Centre (GSFC) to develop new kinds of structures capable of:
• goal-oriented robotic motion,
• changing form to optimize function (morphological capabilities),
• adapting to new environmental demands (learning & adaptation capabilities), and
• repairing and protecting itself (autonomic capabilities).
Fig 4: Tetrahedral structural unit of the SMART architecture
The basic unit of the structures is a tetrahedron (Figure 6) consisting of four addressable nodes interconnected with six struts that can be reversibly deployed or stowed. More complex structures are formed from interconnecting these reconfigurable tetrahedra, making structures that are scalable, and leading to massively parallel systems. These highly-integrated 3-dimensional meshes of actuators/nodes and structural elements hold the promise of providing a new approach to robust and effective robotic motion. The current working hypothesis is that the full functionality of such a complex system requires fully autonomous intelligent operations at each node.
The tetrahedron (tet) walks by extending certain struts, changing its centre of mass and falling in the desired direction. As the tetrahedral structure grows by interfacing more and more tets, the falling motion evolves into a smoother walking capability, i.e., smoother walking, climbing and obstacle-avoiding capabilities emerge from the orchestration of the capabilities of the tetrahedra involved in the complex structure.
Currently, the basic structure, the tetrahedron, is being modelled as a communicating and cooperating/collaborating four-agent system with an agent associated with each node of the tetrahedron. An agent, in this context, is an intelligent autonomous process capable of bi-level deliberative and reactive behaviours with an intervening neural interconnection (the structure of the neural basis function).
The node agents also possess social and introspective behaviours. The problem to be solved is to scale this model up to one capable of supporting autonomous operation for a 12-tet rover (a structure realized by the integration of 12 tets in a polyhedral structure). The overall objective is to achieve autonomous robotic motion of this structure.
7.2.4 Swarm Technologies Require Autonomicity:
The ANTS mission will exhibit almost total autonomy. The mission will also exhibit many of the properties required to qualify it as an autonomic system.
Self-Configuring: ANTS' resources must be fully configurable to support concurrent exploration and examination of hundreds of asteroids. Resources must be configured at both the swarm and team (sub-swarm) levels, in order to coordinate science operations while simultaneously maximizing resource utilization.
Self-Optimizing: Rulers self-optimize primarily through learning and improving their ability to identify asteroids that will be of interest. Messengers self-optimize
through positioning themselves appropriately. Workers self-optimize through learning and experience. Self-optimization at the system level propagates up from the self-optimization of individuals.
Self-Healing: ANTS must self-heal to recover from damage due either to solar storms or (possibly) to collision with an asteroid or other ANTS spacecraft. Loss of a ruler or messenger may involve a worker being upgraded to fulfil that role. Additionally, loss of power may require a worker to be killed off.
Self-Protecting: In addition to protection from collision with asteroids and other spacecraft, ANTS teams must protect themselves from solar storms, where charged particles can degrade sensors and electronic components and destroy solar sails (the ANTS spacecraft's sole source of power and thrust). ANTS teams must re-plan their trajectories or, in worst-case scenarios, go into sleep mode to protect their sails, instruments and other subsystems.
The concept of autonomicity can be further elaborated beyond the self-CHOP properties listed above. Three additional self-properties: self-awareness, self-monitoring and self-adjusting, will facilitate the basic self-properties. Swarm (ANTS) individuals must be aware (have knowledge) of their own capabilities and their limitations, and the workers, messengers, and rulers will all be involved in constant self-monitoring and (if necessary) self-adjusting, thus forming a feedback control loop. Finally, further elaborated, the concept of autonomicity would require environmental awareness: the swarm (ANTS) individuals will need to be constantly aware of the environment around them not only to ensure mission success but also to self-CHOP and adapt when necessary.
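A toy sketch of two of these properties, self-healing by promoting a worker when a ruler is lost and self-protection by switching to sleep mode during a solar storm, is given below. The team structure and role names follow the description above, but the code itself is purely illustrative and not from any NASA software.

class Spacecraft:
    def __init__(self, name, role):
        self.name, self.role, self.alive = name, role, True

class Team:
    """Tiny self-healing / self-protecting team: promote a worker when the ruler is
    lost, put everyone into sleep mode during a solar storm."""
    def __init__(self, craft):
        self.craft = craft
    def self_heal(self):
        roles = {c.role for c in self.craft if c.alive}
        if "ruler" not in roles:
            for c in self.craft:
                if c.alive and c.role == "worker":
                    c.role = "ruler"          # upgrade a worker to fill the gap
                    break
    def self_protect(self, solar_storm):
        if solar_storm:
            for c in self.craft:
                c.role = "sleep"              # fold sails, protect instruments

team = Team([Spacecraft("r1", "ruler"), Spacecraft("w1", "worker"), Spacecraft("w2", "worker")])
team.craft[0].alive = False                   # ruler lost to a collision
team.self_heal()
print([(c.name, c.role) for c in team.craft if c.alive])   # w1 promoted to ruler
team.self_protect(solar_storm=True)
print([(c.name, c.role) for c in team.craft if c.alive])   # everyone asleep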
7.2.4.1 Why Other Swarm-Based Systems Should Be Autonomic
It has been argued elsewhere that all computer-based systems should be autonomic. In the case of most NASA missions, this can certainly be justified by the high levels of autonomy required, the difficulty of dealing with reduced communication bandwidth while at the same time responding rapidly to situations that threaten the mission, and the remoteness of the operation.
Swarms are being used in devising solutions to various problems principally because they present an appropriate model for those problems. Several application areas of swarm technology were described above where the approach seems to be particularly successful. But swarms (in nature or otherwise) inherently need to exhibit autonomic properties. To begin with, swarms should be self-directed and self-governing. Recall that this is achieved through the complex behaviour that emerges from the combination of several simple behaviours and their interaction with the environment. It can be said that in nature, organisms and groups or colonies of individuals, with the one fundamental goal of survival, would succumb as individuals and even as species without autonomicity. The conclusion that invented swarms with planned mission objectives must similarly possess autonomicity is inescapable.
Advantages & Applications
8.0 Advantages & Applications of Autonomic Computing:
• Autonomic computing will reduce our dependence on human intervention to maintain complex systems, accompanied by an appreciable decrease in costs.
• It will lead to a simpler user experience through a more responsive, real-time system.
• Scaled power, storage and costs that optimise usage across both hardware and software can be expected.
• Autonomic computing will help us make full use of processing power, a substantial portion of which currently goes to waste.
• Even home PCs can be used via a networked system. A high-availability system with high security can be produced. Moreover, there are likely to be fewer system or network errors due to self-correction.
• Usage areas like weather forecasting, or complex medical-related calculations such as protein folding, which require processors to run 24/7 for as long as a few years at a time, will become simpler.
• Progressively, autonomic computers will provide the tools to analyse complex problems. For instance, machines with cellular architecture, such as Blue Gene, will enable the study of phenomena occurring in fractions of a second at an atomic scale.
• Moreover, access to more computing power through grid computing, combined with the implementation of open standards, will enable researchers to collaborate more easily on complex problems for the global good.
• Autonomic computing will be better able to harness existing processing power to run complex problems for functions such as weather simulations and other scenarios that inform public systems and infrastructure.
• Human intervention in most tasks associated with systems management will very soon seem as historic and as unnecessary as asking an operator to help in making a phone call.
• Simultaneously, autonomic features will begin to appear in client-level devices, so that your individual PC will complete for itself many of the tasks that currently make you a part-time administrator.
• Autonomic computing will enable e-sourcing: the ability to deliver information technology as a utility, when you need it, in the amount you need to accomplish the task at hand. Autonomic computing will create big opportunities for such emerging services.
Challenges Ahead
9.0 The Challenges Ahead:
In order to create autonomic systems, researchers must address key challenges such as the following:
• A system must know the extent of its own boundaries before it transacts with other systems. How will we design our systems to define and redefine themselves in dynamic environments?
• Multiple platforms create a complex situation for system administrators. How will we build consistent interfaces and points of control while allowing for a heterogeneous environment?
• The final result needs to be transparent to the user. How will we create human interfaces that remove complexity and allow users to interact naturally with IT systems?
• Just creating autonomic components is not enough. How can we unite a constellation of autonomic components into a federated system?
• Standardisation is important, as the age of proprietary solutions is over. How can we design and support open standards that will work?
• Innovative and novel methods will be needed to equip our systems to deal with changing environments and transactions. How will we create adaptive algorithms that take previous system experience and use that information to improve the rules?

Research into creating autonomic systems is complex and challenging. However, future computer systems will require increased levels of automation if they are expected to manage exponentially increasing amounts of data, ever-expanding networks and ever-greater processing power. Autonomic computing is set to make an entry into our lives and into numerous services that influence them.
9.1 Engineering Challenges:
Virtually every aspect of autonomic computing offers significant engineering challenges. The life cycle of an individual autonomic element or of a relationship among autonomic elements reveals several challenges. Others arise in the context of the system as a whole, and still more become apparent at the interface between humans and autonomic systems.
9.1.1 Life cycle of an autonomic element: An autonomic element's life cycle begins with its design and implementation; continues with test and verification; proceeds to installation, configuration, optimization, upgrading, monitoring, problem determination, and recovery; and culminates in un-installation or replacement. Each of these stages has special issues and challenges.
9.1.1.1 Design, test, and verification: Programming an autonomic element will mean extending Web services or grid services with programming tools and techniques that aid in managing relationships with other autonomic elements. Because autonomic elements both consume and provide services, representing needs and preferences will be just as important as representing capabilities. Programmers will need tools that help them acquire and represent policies (high-level specifications of goals and constraints, typically represented as rules or utility functions) and map them onto lower-level actions. They will also need tools to build elements that can establish, monitor, and enforce agreements.
Testing autonomic elements and verifying that they behave correctly will be particularly challenging in large-scale systems, because it will be harder to anticipate their environment, especially when it extends across multiple administrative domains or enterprises. Testing networked applications that require coordinated interactions among several autonomic elements will be even more difficult. It will be virtually impossible to build test systems that capture the size and complexity of realistic systems and workloads. It might be possible to test newly deployed autonomic elements in situ by having them perform alongside more established and trusted elements with similar functionality. The element's potential customers may also want to test and verify its behaviour, both before establishing a service agreement and while the service is provided. One approach is for the autonomic element to attach a testing method to its service description.
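As a hedged illustration of mapping a policy expressed as a utility function onto lower-level actions, the sketch below scores a few invented actions (add_server, remove_server, do_nothing) against a utility that penalises latency and running cost, then picks the best one. None of the numbers or names come from a real product; they only show the shape of the mapping.

# Candidate low-level actions and their assumed effects (names and numbers invented).
ACTIONS = {
    "add_server":    {"latency_ms": -40, "cost_per_hour": +2.0},
    "remove_server": {"latency_ms": +40, "cost_per_hour": -2.0},
    "do_nothing":    {"latency_ms":   0, "cost_per_hour":  0.0},
}

def utility(latency_ms, cost_per_hour):
    """High-level policy as a utility function: latency above 100 ms is penalised
    steeply, running cost is penalised linearly."""
    return -(max(0.0, latency_ms - 100.0) * 2.0 + latency_ms * 0.1) - 10.0 * cost_per_hour

def choose_action(state):
    """Map the high-level policy onto a concrete action by scoring predicted outcomes."""
    def score(effect):
        latency = max(0.0, state["latency_ms"] + effect["latency_ms"])
        cost = state["cost_per_hour"] + effect["cost_per_hour"]
        return utility(latency, cost)
    return max(ACTIONS, key=lambda name: score(ACTIONS[name]))

print(choose_action({"latency_ms": 180, "cost_per_hour": 6.0}))   # add_server
print(choose_action({"latency_ms": 30,  "cost_per_hour": 6.0}))   # remove_server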
9.1.1.2 Installation and configuration: Installing and configuring autonomic elements will most likely entail a bootstrapping process that begins when the element registers itself in a directory service by publishing its capabilities and contact information. The element might also use the directory service to discover suppliers or brokers that may provide information or services it needs to complete its initial configuration. It can also use the service to seek out potential customers or brokers to which it can delegate the task of finding customers.
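The bootstrapping sequence could look roughly like this: a toy directory in which elements publish their capabilities and contact information, and through which a newly installed element then finds a supplier it needs. The capability names and addresses below are invented for the sketch and do not correspond to any particular directory standard.

class Directory:
    """Toy directory service: elements publish capabilities, others query by capability."""
    def __init__(self):
        self.entries = []
    def register(self, name, capabilities, contact):
        self.entries.append({"name": name, "capabilities": set(capabilities), "contact": contact})
    def lookup(self, capability):
        return [e for e in self.entries if capability in e["capabilities"]]

directory = Directory()

# Existing elements have already published what they can do.
directory.register("storage-17", {"block-storage"}, "tcp://10.0.0.17:9000")
directory.register("db-3", {"sql-database"}, "tcp://10.0.0.3:5432")

# Bootstrapping: a new element registers itself, then discovers a supplier it needs.
directory.register("report-service", {"pdf-reports"}, "tcp://10.0.0.99:8080")
suppliers = directory.lookup("sql-database")
print("candidate suppliers:", [(s["name"], s["contact"]) for s in suppliers])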
9.1.1.3 Monitoring and problem determination: Monitoring will be an essential feature of autonomic elements. Elements will continually monitor themselves to ensure that they are meeting their own objectives, and they will log this information to serve as the basis for adaptation, self-optimization, and reconfiguration. They will also continually
#4
ABSTRACT

One of the important design criteria for distributed systems and their applications is their reliability and robustness to hardware and software failures. The increase in complexity, interconnectedness, dependency and asynchronous interactions between components, which include hardware resources (computers, servers, network devices) and software (application services, middleware, web services, etc.), makes fault detection and tolerance a challenging research problem.

Traditionally, autonomic computing has been envisioned as replacing the human factor in the deployment, administration and maintenance of computer systems. Partly to ensure a smooth transition, the design philosophy of autonomic computing systems remains essentially the same as that of traditional ones; autonomic components are simply added to implement functions such as monitoring, error detection, repair, etc.

Autonomic Computing is an IBM corporate-wide initiative which focuses on making computing systems more self-managing and elastic, lowering the cost of ownership, removing obstacles to growth and flexibility, and helping to address complexity by using technology to manage technology.

#5
Please send me the copy.
#6
Please download the Word document attached to the post above, where the AUTONOMIC COMPUTING report has been added.
#7

AUTONOMIC COMPUTING

Presented By:
B.Akhila Priya


Present-day IT environments are complex, heterogeneous in terms of software and hardware from multiple vendors

The vision for Autonomic computing
#8

INTRODUCTION

Present-day IT environments are complex, heterogeneous in terms of software and hardware from multiple vendors. Computing systems have evolved from single machines in large machine rooms to millions of interconnected devices whose interactions create complex webs built on increasingly complex architectures consisting of multitudes of powerful devices running tens of millions of lines of code. The increase in size and complexity of interconnected heterogeneous systems has led to a similar increase in the cost and complexity of configuring and operating such systems. Complexity is not just a function of the number of devices, but also of diversity in their types and capabilities. Lastly, increasing globalization tends to lead to geographic dispersion of some large systems, which adds another dimension to this already complex scenario. IBM highlighted this growing complexity and coined the term autonomic computing to describe an approach for managing complexity that relies on designing and building computing systems capable of managing themselves.
Autonomic Computing is an IBM corporate-wide initiative which focuses on making computing systems more self-managing and elastic, lowering the cost of ownership and removing obstacles to growth and flexibility. Autonomic Computing helps to address complexity by using technology to manage technology.
The idea of using technology to manage technology is not new; many companies in the IT industry have developed and delivered products based on this concept. The term autonomic is derived from human biology. The autonomic nervous system monitors your heartbeat, checks your blood sugar level and keeps your body temperature close to 98.6 degrees Fahrenheit.
#9
Autonomic computing


Operating Systems and Middleware
Reusable components in embedded systems


Abstract
The paper describes why the software development process in the embedded world often differs from the academic versions and why concepts like software reuse are not often seen there. It then introduces some solutions that address these problems, namely feature-oriented domain analysis in the form of the CONSUL configuration support library, and the concept of aspect-oriented programming in the form of the AspectC++ language.


INTRODUCTION
With technology advancing everywhere, so-called "embedded systems" become more and more important. Embedded systems are computer systems that, unlike a PC for example, are part of a bigger engineering system. Embedded systems control your washing machine, your television set and the air bag in your car. Although the main part of an embedded system is usually just a piece of software, traditional strategies for software engineering typically do not apply. Paradigms like "reusable components", "object-oriented development" or "data abstraction" cannot often be found in the embedded area. The reasons for this are many, the main one being cost. The manufacturing of the systems is most often very high volume, and in this case every kilobyte of RAM or ROM matters. The CPUs employed are slow (but well tested and reliable) and the hardware is often very constrained in every aspect, making software development something of a challenge: the result has to fit exactly into the strait-jacket provided by the hardware. Waste is not an option. In terms of languages, embedded systems often still employ hand-crafted assembler routines, or C code at best.
Modern software development, on the other hand, relies on the fact that CPU power and memory are available in abundance, the paradigm being that it is cheaper to add more resources to the hardware than to develop more constrained software. The languages get more and more high level, relieving programmers of the hassle of interacting with the hardware directly so that they can concentrate more on the "big picture". This especially makes reusable components easy: one just implements (or lets somebody else implement, as with standard libraries) the most powerful version one can think of and uses this solution for all occurrences of a similar problem. One size fits all, or should fit all. In practice, components often have to be reworked several times until they are truly reusable, but this is outside the topic of this paper.
The following chapters will present some concepts that try to remedy the problem. First, feature-oriented domain analysis will be introduced and an example written for the CONSUL configuration support library will be presented. After that, the concept of aspect-oriented programming will be explained, with examples in the AspectC++ language. Finally, the conclusions chapter will summarize and evaluate the things described in this paper.
One size does not fit all
The example stated in [FESC] to illustrate the point that embedded systems can have pretty specific needs is based on the cosine function, and as this example is just too good and illustrative to pass on, it will also be used here. Processors in embedded systems usually do not have a floating-point calculation unit, and therefore floating-point arithmetic can be quite expensive. At the same time CPU power is limited, so a high-precision cosine calculation function does not always provide the optimal trade-off. There are a few different imaginable scenarios:
1. An exact value is needed; time does not matter.
2. Only a rough approximation is needed, but very quickly.
3. The function domain only includes a few discrete values, but those need to be provided fast and in high precision.
In modern software engineering all three cases would probably be served by the same function. The compiler will most likely even translate it into a single native processor instruction; in the case of the x86 architecture, into an FCOS instruction, which on a Pentium 4 Northwood core needs a blazing 240 clock cycles in the worst case [IA32O]. Considering clock speeds in the range of several gigahertz, this is virtually nothing. For embedded processors, whose speed is usually still measured in one- or two-digit megahertz values, different implementations for the three cases become the only option. Case 1 could be served by an iterative function that returns once the result becomes stable, case 2 can be solved by interpolating between known values, and case 3 is an ideal prerequisite for probably the oldest software trick known to mankind: the table lookup. Example source code for all three variants can be found in [FESC].
Now, despite those circumstances, component reuse is highly desirable. For one thing, re-inventing the wheel is never a particularly good idea: it is expensive, valuable time is wasted, and the resulting implementation is often not as highly tested as it would be with standard components. A library aimed at code reuse in embedded systems should therefore provide all three implementations. But this already poses some questions:
1. How does one know which implementation suits the problem best?
2. How can several implementations co-exist in one and the same library?
One could use some informal description in the function headers to distinguish between the different versions and give the functions different names, like CosIterate, CosAverage and CosTable. On first glance this could solve both problems, but even with this very limited example the naming scheme already looks ugly and impractical, and the whole method is not feasible for a library with a higher function count. A C programmer could of course counter point one with the software equivalent of a hammer, the preprocessor: #define Cos(x) CosTable(x). This way one can still write code independent of the actual implementation used, but this too is only feasible with a limited number of functions, as it involves a lot of manual work and care.
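The source for the three variants is given in [FESC]; purely to make the trade-off concrete, here is a rough Python sketch of the same ideas, an iterative Taylor-series version for the exact case and a precomputed table with optional linear interpolation for the fast cases. The table step and tolerance are arbitrary choices for the sketch, not values from [FESC].

import math

def cos_iterate(x, tolerance=1e-12):
    """Scenario 1: iterate the Taylor series until the result is stable (slow, exact-ish)."""
    term, total, n = 1.0, 1.0, 0
    while abs(term) > tolerance:
        n += 1
        term *= -x * x / ((2 * n - 1) * (2 * n))
        total += term
    return total

# Scenario 3: precomputed table for a few known angles, cheap lookup at run time.
COS_TABLE = {angle: math.cos(math.radians(angle)) for angle in range(0, 361, 15)}

def cos_table(angle_degrees):
    return COS_TABLE[angle_degrees]            # only valid for the tabulated angles

def cos_interpolate(angle_degrees):
    """Scenario 2: rough but fast, linear interpolation between table entries."""
    lo = (angle_degrees // 15) * 15
    hi = min(lo + 15, 360)
    frac = (angle_degrees - lo) / 15.0
    return COS_TABLE[lo] + frac * (COS_TABLE[hi] - COS_TABLE[lo])

print(cos_iterate(math.radians(40)), cos_interpolate(40), cos_table(45))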
#10
Presented by: PUNEETH KUMAR.N
AUTONOMIC COMPUTING


INTRODUCTION
Autonomic Computing is an initiative started by IBM
in 2001. Its ultimate aim is to develop computer systems
capable of self-management, to overcome the rapidly
growing complexity of computing systems management,
and to reduce the barrier that complexity poses to further
growth. In other words, autonomic computing refers to the
self-managing characteristics of distributed computing
resources, adapting to unpredictable changes while hiding
intrinsic complexity to operators and users.


Actually What do you mean by AUTONOMIC COMPUTING?

Autonomic computing is a self-managing computing model named after, and patterned on, the human body's autonomic nervous system. An autonomic computing system would control the functioning of computer applications and systems without input from the user, in the same way that the autonomic nervous system regulates body systems without conscious input from the individual. The goal of autonomic computing is to create systems that run themselves, capable of high-level functioning while keeping the system's complexity invisible to the user.

EIGHT BASIC CRITERIA THAT LED IBM TO DEFINE A PERVASIVE AUTONOMIC COMPUTING SYSTEM
The system must be capable of taking continual stock of itself, its connections, devices and resources, and know which are to be shared or protected.
It must be able to configure and reconfigure itself dynamically as needs dictate.
It must constantly search for ways to optimize performance.
It must perform self-healing by redistributing resources and reconfiguring itself to work around any dysfunctional elements.
It must be able to monitor security and protect itself from attack.
It must be able to recognize and adapt to the needs of coexisting systems within its environment.
It must work with shared technologies. Proprietary solutions are not compatible with autonomic computing ideology.
It must accomplish these goals seamlessly without intervention.


FUNCTIONAL CHARACTERISTICS OF AUTONOMIC COMPUTING
SELF-CONFIGURATION

SELF-HEALING

SELF-OPTIMIZATION

SELF-PROTECTION


SELF-CONFIGURATION
• Adapt automatically to the dynamically changing environment
• Internal adaptation
– Add/remove new components
– Configures itself on the fly
• External adaptation
– Systems configure themselves into a global infrastructure

SELF-HEALING
• Discover, diagnose and react to disruptions without disrupting the service environment
• Faulty components should be
– detected
– isolated
– fixed
– reintegrated

SELF-OPTIMIZATION
• Monitor and tune resources automatically
– Support operation in unpredictable environments
– Efficient maximization of resource utilization without human intervention
• Dynamic resource allocation and workload management
– Resources: storage, databases, networks
– For example, dynamic server clustering

SELF-PROTECTION
• Anticipate, detect, identify and protect against attacks from anywhere
– Defining and managing user access to all computing resources
– Protecting against unauthorized resource access, e.g. SSL
– Detecting intrusions and reporting them as they occur

PMAC - An Example of AUTONOMIC COMPUTING
• Policy Management for Autonomic Computing (PMAC)
– An autonomic core technology published in 2005
• Purpose: providing a policy management infrastructure
– Automating what administrators do today
• Administrators follow written policies
• With autonomic computing, autonomic managers follow machine-readable policy
• Autonomic Manager: selects policies, evaluates policies, and provides decisions to the managed element in order to manage its behavior
• Uses the Autonomic Computing Policy Language (ACPL) as a common policy language
– An ACPL policy contains 4 tuples: Scope, Condition, Business value, Decision
• Scope represents managed elements; Business value is the decision priority
• A Decision can be Actions, Configuration Profiles or Results
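To make the four-field structure concrete, the sketch below represents two such policies as plain Python dictionaries with scope, condition, business value and decision fields, and lets a manager pick the applicable policy with the highest business value. This is only an illustration of the idea; it is not actual ACPL syntax, and the metric names and thresholds are invented.

policies = [
    {"scope": "web-tier",
     "condition": lambda m: m["response_ms"] > 500,
     "business_value": 80,
     "decision": "add_instance"},
    {"scope": "web-tier",
     "condition": lambda m: m["response_ms"] < 100,
     "business_value": 20,
     "decision": "remove_instance"},
]

def evaluate(scope, metrics):
    """Pick the applicable policy with the highest business value, as an autonomic
    manager would when handing a decision to the managed element."""
    matching = [p for p in policies
                if p["scope"] == scope and p["condition"](metrics)]
    if not matching:
        return "no_action"
    return max(matching, key=lambda p: p["business_value"])["decision"]

print(evaluate("web-tier", {"response_ms": 750}))   # add_instance
print(evaluate("web-tier", {"response_ms": 60}))    # remove_instance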

APPLICATIONS

Solution installation and deployment technologies
Integrated Solutions Console
Problem determination
Autonomic management
Provisioning and orchestration
Complex analysis
Policy-based management
Heterogeneous workload management


Short-term I/T related benefits
Simplified user experience through a more responsive, real-time system.
Cost-savings - scale to use.
Scaled power, storage and costs that optimize usage across both hardware and software.
Full use of idle processing power, including home PC's, through networked system.
Natural language queries allow deeper and more accurate returns.
Seamless access to multiple file types. Open standards will allow users to pull data from all potential sources by re-formatting on the fly.
Stability. High availability. High security system. Fewer system or network errors due to self-healing

Long-term, Higher Order Benefits
Realize the vision of enablement by shifting available resources to higher-order business.
Embedding autonomic capabilities in client or access devices, servers, storage systems, middleware, and the network itself. Constructing autonomic federated systems.
Achieving end-to-end service level management.
Collaboration and global problem-solving. Distributed computing allows for more immediate sharing of information and processing power to use complex mathematics to solve problems

CONCLUSION
The goal of the IBM autonomic computing initiative is to make IT systems self-managing. Self-managed systems can adapt to changing environments and react to error conditions very efficiently. This ability to respond quickly helps reduce application downtime, which, in turn, can help prevent catastrophic loss of revenue. This presentation describes how an autonomic system, based on autonomic computing technologies, can be used to diagnose an error condition in an IT system and provide corrective actions.
#11



Jotheeswaran.D


Outline
Introduction

Definitions

Growing Complexity

Autonomic Computing
Architecture concepts.
Architecture details.

Autonomic Computing Research Issues and Challenges

Conclusion

References


Introduction
IT organizations have encountered growing challenges in the management and maintenance of large scale distributed computing systems.

Researchers investigate new ideas to address the problems created by IT complexity.

One such idea is Autonomic Computing (AC). Autonomic Computing Systems


Definitions
Autonomic computing system would control the functioning of computer applications and systems without input from the user.

Autonomic computing and networking aim basically at automating the management (administration) of network and software infrastructures in order to decrease human interventions and associated costs, enhance dependability and security, and adapt performance to varying workloads.


Evolution
Autonomic Computing is an initiative started by IBM in 2001.
Its ultimate aim is to develop computer systems capable of self-management, to overcome the rapidly growing complexity of computing systems management, and to reduce the barrier that complexity poses to further growth.

The autonomic computing refers to the self-managing characteristics of distributed computing resources, adapting to unpredictable changes whilst hiding intrinsic complexity to operators and users.


Properties of Autonomic Computing
It consists of:
Self-Configuration,
Self-Healing,
Self-Optimization,
Self-Protection
And so on.


Self-Configuration
Adapt automatically to the dynamically changing environment
Internal adaptation
Add/remove new components (software)
Configures itself on the fly
External adaptation
Systems configure themselves into a global infrastructure.


Self-Healing
Discover, diagnose and react to disruptions without disrupting the service environment
Fault components should be :
Detected,
Isolated,
Fixed,
reintegrated


Self-Optimization
Monitor and tune resources automatically
Support operating in unpredictable environment.
Efficient maximization of resource utilization without human intervention.
Dynamic resource allocation and workload management
Resource: Storage, databases, networks
For example, Dynamic server clustering.


Self-Protection
Anticipate, detect, identify and protect against attacks from anywhere
Defining and managing user access to all computing resources.
Protecting against unauthorized resource access, e.g. SSL
Detecting intrusions and reporting as they occur


Self-aware
System is aware of its internal state
Context-aware
System is aware of its execution environment
Open
System is able to operate in a heterogeneous environment
Anticipatory
System is able to anticipate the optimized resources needed


Architecture concepts
Autonomic computing system
A computing system that senses its operating environment.
Models its behavior in that environment.
And takes action to change the environment or its behavior.

Architecture details
Autonomic Manager
Implementation that automates some management function and externalizes this function according to the behavior defined by management interfaces.


Architecture details
Top-level autonomic manager:
Business decision-making
Policy and service levels

The Problem of growing complexity
Self-Management,
It means different things in different fields.

The number of computing devices in use is forecast to grow at 38% per annum.

The average complexity of each is increasing.


#12
Autonomic Computing
1. Introduction:

Autonomic Computing is emerging as a significant new approach to the design of computing systems. Its goal is the development of systems that are self-configuring, self-healing, self-protecting and self-optimizing.
Autonomic Computing is an initiative started by IBM in 2001. Its ultimate aim is to develop computer systems capable of self-management, to overcome the rapidly growing complexity of computing systems management, and to reduce the barrier that complexity poses to further growth. In other words, autonomic computing refers to the self-managing characteristics of distributed computing resources, adapting to unpredictable changes whilst hiding intrinsic complexity to operators and users. An autonomic system makes decisions on its own, using high-level policies; it will constantly check and optimize its status and automatically adapt itself to changing conditions.
Autonomic computing is a self-managing computing model named after, and patterned on, the human body's autonomic nervous system. An autonomic computing system would control the functioning of computer applications and systems without input from the user, in the same way that the autonomic nervous system regulates body systems without conscious input from the individual. The goal of autonomic computing is to create systems that run themselves, capable of high-level functioning while keeping the system's complexity invisible to the user.
2. Need of Autonomic Computing
Managing complex systems has grown too costly and prone to error. People under such pressure make mistakes, increasing the potential for system outages with a concurrent impact on business. The following points reveal more about the need for autonomic computing. It is now estimated that one-third to one-half of a company's total IT budget is spent preventing or recovering from crashes.
● Nick Tabellion, CTO of Fujitsu Softek, said: “The commonly used number is: For every dollar to purchase storage, you spend $9 to have someone manage it.”
● Aberdeen Group studies show that administrative cost can account for 60 to 75 percent of the overall cost of database ownership (this includes administrative tools, installation, upgrade and deployment, training, administrator salaries, and service and support from database suppliers).
● When you examine data on the root cause of computer system outages, you find that about 40 percent are caused by operator error, and the reason is not because operators are not well-trained or do not have the right capabilities. Rather, it is because the complexities of today’s computer systems are too difficult to understand, and IT operators and managers are under pressure to make decisions about problems in seconds.
● A Yankee Group report estimated that downtime caused by security incidents cost as much as $4,500,000 per hour for brokerages and $2,600,000 for banking firms.
● David J. Clancy, chief of the Computational Sciences Division at the NASA Ames Research Center, underscored the problem of the increasing systems complexity issues: “Forty percent of the group’s software work is devoted to test,” he said, and added, “As the range of behavior of a system grows, the test problem grows exponentially.”
In a survey on the causes of outages in four areas, the most frequently found outages are:
• For systems: operational error, user error, third party software error, internally developed software problem, inadequate change control, lack of automated processes.
• For networks: performance overload, peak load problems, insufficient bandwidth.
• For database: out of disk space, log file full, performance overload.
• For applications: application error, inadequate change control, operational error, non automated application exceptions.
This results in the need for self-managing systems and new development approaches that can deal with real-life complexity and uncertainty. The challenge is to produce practical methodologies and techniques for the development of such self-managing systems, so that they may be leveraged to deal with failure and recover easily.
3. Characteristics of Autonomic Computing:
The Autonomic Computing System must possess the following characteristics.
• To be autonomic, a computing system needs to “know itself”- and comprise components that also possess a system identity.
• An autonomic computing system must configure and reconfigure itself under varying and unpredictable conditions.
• An autonomic computing system never settles for the status quo- it always looks for ways to optimize its workings.
• An autonomic computing system must perform something akin to healing- it must be able to recover from routine and extraordinary events that might cause some of its parts to malfunction.
• A virtual world is no less dangerous than the physical one, so an autonomic computing system must be an expert in self-protection.
• An autonomic computing system knows its environment and the context surrounding its activity, and acts accordingly.
• An autonomic computing system cannot exist in a hermetic environment.
• Perhaps most critical for the user, an autonomic computing system will anticipate the optimized resources needed while keeping its complexity hidden.
Among these, four fundamental properties are very important and they are as follows,
 Self Configuration
 Self Healing
 Self Optimization
 Self Protection
3.1 Self-configuration
Autonomic computing systems should have the ability to "adapt automatically to the dynamically changing environment". The growing complexity makes the operating environment of computing systems unpredictable, and makes the computing systems themselves brittle and uncertain. To address these problems, an autonomic computing system should be aware of its operating conditions, have the ability to predict trends, and adapt itself to this changing environment. There are two different levels of adaptation. At the system level, a new system component should be able to configure itself into an existing infrastructure automatically, and the rest of the system components will adapt to its presence, "much like a new cell in the body or a new person in a population". For example, with this level of adaptation a new computing node with special functions will be seamlessly added into a large computing network. At the component level, each component is again a self-managing system (an autonomic element), and should be able to configure itself "on the fly".
#13
PRESENTED BY:
RAHUL KUMAR JENA

What is Autonomic Computing?
• Autonomic Computing is an approach to address the complexity and evolution problems in software systems
• It is a software system that operates on its own or with minimum human intervention
• The term Autonomic is derived from the human body’s Autonomic Nervous System, which controls key functions without our conscious awareness or involvement
Present-day IT environments are complex and heterogeneous, with software and hardware from multiple vendors
There is every reason to believe that we are at such a threshold right now in computing
The vision for Autonomic computing
Elements in Autonomic Computing
• Manage complexity
• Know themselves
• Continuously tune themselves
• Adapt to unpredictable conditions
• Prevent and recover from failures
• Provide a safe environment
• Autonomic computing: a term coined by IBM in 2001
• The aim is to build self-managing computing systems that overcome the rapidly growing complexity problem
• Such a system anticipates the computer system's needs and resolves problems with minimal human intervention
Self-managing systems that deliver:
• Self-aware
System is aware of its internal state.
• Context-aware
System is aware of its execution environment.
• Open
System is able to operate in a heterogeneous environment.
• Anticipatory
System is able to anticipate the optimized resources needed (a minimal interface sketch follows this list).
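A minimal, purely illustrative Python sketch of how these four properties might be expressed as an interface that concrete self-managing components would implement; the class and method names are assumptions, not part of any standard.

from abc import ABC, abstractmethod

class SelfManagingSystem(ABC):
    """Illustrative interface for the four properties listed above."""

    @abstractmethod
    def internal_state(self) -> dict:
        """Self-aware: report the system's own internal state."""

    @abstractmethod
    def execution_context(self) -> dict:
        """Context-aware: report the environment the system runs in."""

    @abstractmethod
    def supported_protocols(self) -> list[str]:
        """Open: protocols and formats for operating in heterogeneous environments."""

    @abstractmethod
    def anticipated_resources(self) -> dict:
        """Anticipatory: predict the resources needed for upcoming load."""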
Architectural blueprint for building autonomic systems
The components and functions of a single Autonomic Manager are referred to as the “MAPE loop”, supplemented by a knowledge base
Management using MAPE:
– An autonomic manager monitors instrumentation data from multiple sensors in the system
– It analyzes the gathered information
– It plans and executes changes based on that analysis (a minimal sketch of this loop follows)
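Below is a minimal Python sketch of a MAPE-style control loop: monitor, analyze, plan, and execute around a shared knowledge base. It illustrates the idea only; the stub managed element, the CPU threshold policy, and all function names are invented for the example.

import time

# Illustrative MAPE sketch; a plain dictionary stands in for the knowledge base.
knowledge = {"cpu_threshold": 0.80, "history": []}

class StubManagedElement:
    """Stand-in managed resource with one sensor and one effector (hypothetical)."""
    def __init__(self):
        self.workers = 1

    def read_sensors(self):
        # Reported CPU load drops as workers are added.
        return {"cpu": 0.9 / self.workers}

    def apply(self, action):
        if action == "add_worker":
            self.workers += 1

def monitor(element):
    # Monitor: collect instrumentation data from the element's sensors.
    reading = element.read_sensors()
    knowledge["history"].append(reading)
    return reading

def analyze(reading):
    # Analyze: compare the observation against the knowledge base.
    return reading["cpu"] > knowledge["cpu_threshold"]

def plan(symptom_detected):
    # Plan: choose corrective actions (a single trivial action here).
    return ["add_worker"] if symptom_detected else []

def execute(actions, element):
    # Execute: apply the plan through the element's effectors.
    for action in actions:
        element.apply(action)

def mape_loop(element, iterations=3, period_s=0.1):
    for _ in range(iterations):
        execute(plan(analyze(monitor(element))), element)
        time.sleep(period_s)

if __name__ == "__main__":
    element = StubManagedElement()
    mape_loop(element)
    print("workers:", element.workers)   # grows until load falls below the threshold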
AUTONOMIC ELEMENT
It is the fundamental atom of the architecture, comprising:
– Managed element(s)
– Autonomic manager
#14

Trillions of heterogeneous computing devices connected to the Internet
Dream of Pervasive Computing …
or Nightmare!
Core of the Problem
Complexity
in systems themselves and in the operating environment
As systems become more interconnected and diverse, architects are less able to anticipate and design interactions among components
More decisions are pushed to run time (late binding)
e.g., hot-plug devices, the JVM, JIT compilation, service discovery, mobile agents
Complexity management
Today it means human intervention and rising IT costs
We need complexity management, but the complexity is beyond what humans can handle
Taking the human out of the control loop points toward autonomic systems
Even though we are moving in this direction, is there any systematic way of addressing the issue?
Autonomic Computing
Complex Heterogeneous Infrastructures Are a Reality!
Industry Trends
Administration of systems is increasingly difficult
Hundreds of configuration and tuning parameters for DB2
Heterogeneous systems are increasingly connected
Integration becoming ever more difficult
Architects can't plan interactions among components
Increasingly dynamic; frequently with unanticipated components
More burden must be assumed at run time
But human administrators can't assume the burden
6:1 cost ratio between storage administration and the storage itself
40% of outages are due to operator error
Need self-managing computing systems
Behavior specified by sys admins via high-level policies
System and its components figure out how to carry out the policies (a minimal policy sketch follows)
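A minimal, hypothetical Python sketch of a high-level policy an administrator might specify and of how a component could evaluate it at run time; the policy format, field names, and effector are invented for illustration only.

# Illustrative sketch: administrators state *what* they want (a high-level policy);
# components work out *how* to satisfy it.
policy = {
    "name": "gold-response-time",
    "condition": lambda metrics: metrics["response_time_ms"] > 200,
    "action": "scale_out",
}

def enforce(policy, metrics, effectors):
    """Check the policy against current metrics and trigger its action if violated."""
    if policy["condition"](metrics):
        effectors[policy["action"]]()

if __name__ == "__main__":
    effectors = {"scale_out": lambda: print("adding capacity to meet policy")}
    enforce(policy, {"response_time_ms": 350}, effectors)

The point of the sketch is the division of labour: the policy names an objective, while the component owning the effectors decides the concrete steps.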
Autonomic Computing Vision
“Intelligent” open systems that…
Manage complexity
“Know” themselves
Continuously tune themselves
Adapt to unpredictable conditions
Prevent and recover from failures
Provide a safe environment
Self-management:
free administrators from details of operations
provide peak performance 24/7
Concentrate on high-level decisions and policies
Self-managing Systems That …
Self-Configuring Example: DB2 Configuration Advisor
Self-Healing Example: IBM Electronic Service Agent
Self Optimizing: Enterprise Workload Management
Self-Protecting Example: IBM Tivoli Risk Manager
Evolving towards Self-management
IBM’s Architecture Model
Intelligent control loop:
Implementing self-managing attributes involves an intelligent control loop
Control Loops Delivered in 2 Ways
3 Layers of Control Loop Management
Composite resources tied to business decision-making
Composite resources making their own decisions, e.g., a cluster of servers
Resource elements managing themselves
Autonomic Element - Structure
Fundamental atom of the architecture
Managed element(s), e.g., a database or storage system
Autonomic manager
Responsible for:
Providing its service
Managing its own behavior in accordance with policies
Interacting with other autonomic elements (a minimal structural sketch follows)
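A minimal structural sketch in Python of an autonomic element: an autonomic manager wrapped around one or more managed elements, with hooks for policies and for interacting with peer elements. The class names and fields are illustrative assumptions, not a prescribed API.

class ManagedElement:
    """A resource being managed, e.g. a database or storage system (illustrative)."""
    def sensors(self) -> dict: ...
    def effectors(self, action: str) -> None: ...

class AutonomicManager:
    """Runs the control loop over its managed elements, guided by policies."""
    def __init__(self, managed, policies):
        self.managed = managed          # list of ManagedElement
        self.policies = policies        # behavior specified externally

class AutonomicElement:
    """Fundamental building block: managed element(s) plus an autonomic manager."""
    def __init__(self, managed, policies):
        self.manager = AutonomicManager(managed, policies)
        self.peers = []                 # other autonomic elements it interacts with

    def provide_service(self, request):
        ...  # the element's externally visible service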
Autonomic Manager Substructure
Autonomic Elements - Interaction
Relationships
Dynamic, ephemeral
Formed by agreement
May be negotiated
Full spectrum
Peer-to-peer
Hierarchical
Subject to policies (an illustrative agreement sketch follows)
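The sketch below illustrates, in Python, the idea of relationships formed by agreement between autonomic elements: one element requests a service level, the provider grants what its capacity allows, and the resulting agreement is ephemeral because it carries an expiry. The field names and the negotiation rule are invented for the example.

import time
from dataclasses import dataclass

@dataclass
class Agreement:
    provider: str
    consumer: str
    throughput: int          # agreed requests per second
    expires_at: float        # relationships are dynamic and ephemeral

def negotiate(provider_capacity, requested_throughput, provider, consumer, ttl_s=60):
    """Form an agreement; counter-offer if the request exceeds capacity."""
    granted = min(provider_capacity, requested_throughput)
    return Agreement(provider, consumer, granted, time.time() + ttl_s)

if __name__ == "__main__":
    a = negotiate(provider_capacity=500, requested_throughput=800,
                  provider="db-element", consumer="web-element")
    print(a)   # consumer receives a counter-offer of 500 req/s for the next 60 s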
Multiple Contexts for Autonomic Behavior
Mapping to IT Processes
Levels of Maturity
Autonomic Computing Requires Core Technologies
Integrated Solutions Console for Common System Administration
Value:
One consistent interface across product portfolio
Common runtime infrastructure and development tools based on industry standards, component reuse
Provides a presentation framework for other autonomic core technologies
Log and Trace Tool for Problem Determination
Value:
Introduces standard interfaces and formats for logging and tracing
Central point of interaction with multiple data sources
Correlated views of data
Reduced time spent in problem analysis (an illustrative normalization sketch follows)
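The value of a standard logging format is that events from heterogeneous sources can be correlated in one place. The Python sketch below normalizes two differently shaped log records into a single common structure and then presents a correlated view; the field names are invented for the example and are not the actual format used by any IBM tool.

# Illustrative only: normalizing log records from different products into
# one common shape so they can be correlated during problem determination.
def normalize_webserver(line):
    # e.g. "2008-09-01T10:00:02 ERROR disk full"
    ts, severity, msg = line.split(" ", 2)
    return {"timestamp": ts, "severity": severity, "source": "webserver", "message": msg}

def normalize_database(record):
    # e.g. {"time": "...", "level": 3, "text": "..."}
    levels = {1: "INFO", 2: "WARNING", 3: "ERROR"}
    return {"timestamp": record["time"], "severity": levels[record["level"]],
            "source": "database", "message": record["text"]}

def correlate(events, severity="ERROR"):
    """Single correlated view across sources, filtered by severity and time-ordered."""
    return sorted((e for e in events if e["severity"] == severity),
                  key=lambda e: e["timestamp"])

if __name__ == "__main__":
    events = [
        normalize_webserver("2008-09-01T10:00:02 ERROR disk full"),
        normalize_database({"time": "2008-09-01T10:00:01", "level": 3, "text": "log file full"}),
    ]
    for e in correlate(events):
        print(e["timestamp"], e["source"], e["message"])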
Install/Config Package for Solution Install
Value:
One consistent software installation technology across all products
Consistent and up-to-date configuration and dependency data, key to building self-configuring autonomic systems
Reduced deployment time with fewer errors
Reduced software maintenance time, improved analysis of failed system components
Component-based install for IBM and non-IBM products
Policy Tools for Policy-based Management
Value:
Uniform cross-product policy definition and management infrastructure, needed for delivering system-wide self-management capabilities
Simplifies management of multiple products; reduced TCO
Easier to dynamically change configuration in an on-demand environment
Technologies for Implementing Autonomic Managers
Value:
Components to simplify the incorporation of autonomic functions into applications
Building blocks for self-management
Monitoring, analysis, planning and execution components
Including autonomic computing technologies, grid tools, and services
Pluggable
Defines interfaces and provides implementations for each major toolkit component
Summary of Autonomic Computing Architecture
Based on a distributed, service-oriented architectural approach, e.g., OGSA
Every component provides or consumes services
Policy-based management
Autonomic elements
Make every component resilient, robust, self-managing
Behavior is specified and driven by policies
Relationships between autonomic elements
Based on agreements established and maintained by autonomic elements
Governed by policies
Give rise to resiliency, robustness, self-management of system
Summary