GRID COMPUTING

ABSTRACT
The last decade has seen a substantial increase in commodity computer and network performance, mainly as a result of faster hardware and more sophisticated software. Nevertheless, there are still problems in science, engineering, and business that cannot be effectively dealt with using the current generation of supercomputers. Owing to their size and complexity, these problems are often highly compute- and/or data-intensive and consequently require a variety of heterogeneous resources that are not available on a single machine. A number of teams have conducted experimental studies on the cooperative use of geographically distributed resources unified to act as a single powerful computer. This new approach is known by several names, such as metacomputing, scalable computing, global computing, Internet computing, and, more recently, Grid computing.
The early efforts in Grid computing started as a project to link supercomputing sites, but have now grown far beyond their original intent. Many applications can benefit from the Grid infrastructure, including collaborative engineering, data exploration, high-throughput computing, and, of course, distributed supercomputing. Moreover, with the rapid growth of the Internet and the Web, there has been rising interest in Web-based distributed computing, and many projects have been launched that aim to exploit the Web as an infrastructure for running coarse-grained distributed and parallel applications. In this context, the Web has the capability to be a platform for parallel and collaborative work as well as a key technology in creating a pervasive and ubiquitous Grid-based infrastructure.
This paper aims to present the state of the art of Grid computing and attempts to survey the major international efforts in developing this emerging technology.


1. INTRODUCTION
The popularity of the Internet, together with the availability of powerful computers and high-speed network technologies as low-cost commodity components, is changing the way we use computers today. These opportunities have led to the possibility of using distributed computers as a single, unified computing resource, leading to what is popularly known as Grid computing. The term Grid is chosen as an analogy to the electrical power grid, which provides consistent, pervasive, dependable, transparent access to electricity irrespective of its source. The ideas of the Grid were brought together by Ian Foster, Carl Kesselman, and Steve Tuecke, the so-called "fathers of the Grid." A detailed analysis of this analogy can be found in the literature. This new approach to network computing is known by several names, such as metacomputing, scalable computing, global computing, Internet computing, and, more recently, peer-to-peer (P2P) computing.

A related driver is commerce: Grids enable the creation of virtual organizations and enterprises as temporary alliances of enterprises or organizations that come together to share resources, skills, and core competencies in order to better respond to business opportunities or large-scale application-processing requirements, and whose cooperation is supported by computer networks.
The concept of Grid computing started as a project to link geographically dispersed supercomputers, but it has now grown far beyond its original intent. The Grid infrastructure can benefit many applications, including collaborative engineering, data exploration, high-throughput computing, and distributed supercomputing.
A Grid can be viewed as a seamless, integrated computational and collaborative environment (see Figure 1). Users interact with a Grid resource broker to solve problems; the broker, in turn, performs resource discovery and scheduling and processes application jobs on the distributed Grid resources.
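In essence, a broker matches each job's requirements against the resources it has discovered. The following sketch, using hypothetical resource and job names, illustrates the discover-then-schedule cycle in miniature; real brokers also handle security, data movement, and monitoring:

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    free_cpus: int

class ResourceBroker:
    """Toy broker: discovers matching resources, then schedules jobs onto them."""
    def __init__(self, resources):
        self.resources = resources

    def discover(self, cpus_needed):
        # Resource discovery: keep only machines with enough free CPUs.
        return [r for r in self.resources if r.free_cpus >= cpus_needed]

    def schedule(self, job_name, cpus_needed):
        # Scheduling: pick the least-loaded matching resource, or fail.
        candidates = self.discover(cpus_needed)
        if not candidates:
            return None
        chosen = max(candidates, key=lambda r: r.free_cpus)
        chosen.free_cpus -= cpus_needed
        return (job_name, chosen.name)

broker = ResourceBroker([Resource("siteA", 4), Resource("siteB", 16)])
print(broker.schedule("render", 8))  # → ('render', 'siteB')
```

Only "siteB" can host the 8-CPU job here; a second job small enough for either site would again go to the site with the most free CPUs.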

2. TYPES OF SERVICES
From the end-user point of view, Grids can be used to provide the following types of services.
• Computational services. These are concerned with providing secure services for executing application jobs on distributed computational resources, individually or collectively. Resource brokers provide the services for the collective use of distributed resources. A Grid providing computational services is often called a computational Grid. Some examples of computational Grids are NASA IPG, the World Wide Grid, and the NSF TeraGrid.
• Data services. These are concerned with providing secure access to distributed datasets and their management. To provide scalable storage and access, datasets may be replicated, catalogued, and even stored in different locations to create an illusion of mass storage. The processing of datasets is carried out using computational Grid services; such a combination is commonly called a data Grid. Sample applications that need such services for the management, sharing, and processing of large datasets include high-energy physics and access to distributed chemical databases for drug design.
• Application services. These are concerned with application management and with providing transparent access to remote software and libraries. They build on the computational and data services provided by the Grid. Emerging technologies such as Web services are expected to play a leading role in defining application services. An example system that can be used to develop such services is NetSolve.
• Information services. These are concerned with the extraction and presentation of data with meaning, using the services of computational, data, and/or application services. The low-level details handled at this level are the ways in which information is represented, stored, accessed, shared, and maintained. Given its key role in many scientific endeavors, the Web is the obvious point of departure for this level.
• Knowledge services. These are concerned with the way that knowledge is acquired, used, retrieved, published, and maintained to assist users in achieving their particular goals and objectives. Knowledge is understood as information applied to achieve a goal, solve a problem, or execute a decision. An example of this is data mining for automatically building new knowledge.
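To make the data-services idea concrete, the replication and cataloguing described above reduce to a mapping from one logical file name to many physical copies. A minimal sketch, with hypothetical file and site names (production catalogues add authentication, consistency, and replica-selection logic):

```python
class ReplicaCatalogue:
    """Toy replica catalogue: one logical file name, many physical copies."""
    def __init__(self):
        self.replicas = {}

    def register(self, logical_name, physical_location):
        # Record another physical copy of the same logical dataset.
        self.replicas.setdefault(logical_name, []).append(physical_location)

    def locate(self, logical_name):
        # The user names one dataset; the catalogue hides where copies live.
        return self.replicas.get(logical_name, [])

rc = ReplicaCatalogue()
rc.register("run42.dat", "gsiftp://siteA/data/run42.dat")
rc.register("run42.dat", "gsiftp://siteB/mirror/run42.dat")
print(rc.locate("run42.dat"))  # both physical locations of the one dataset
```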
To build a Grid, the development and deployment of a number of services is required. These include security, information, directory, resource allocation, and payment mechanisms in an open environment and high-level services for application development, execution management, resource aggregation, and scheduling.
Grid applications (typically multidisciplinary and large-scale processing applications) often couple resources that cannot be replicated at a single site, or that may be globally located for other practical reasons. These are some of the driving forces behind the foundation of global Grids. In this light, the Grid allows users to solve larger or new problems by pooling together resources that could not easily be coupled before. Hence, the Grid is not only a computing infrastructure for large applications; it is a technology that can bond and unify remote and diverse distributed resources, ranging from meteorological sensors to data vaults and from parallel supercomputers to personal digital organizers. As such, it will provide pervasive services to all users that need them.
This paper aims to present the state-of-the-art of Grid computing and attempts to survey the major international efforts in this area.
3. LEVELS OF DEPLOYMENT
Grid computing can be divided into three logical levels of deployment: Cluster Grids, Enterprise Grids, and Global Grids.

• Cluster Grids
The simplest form of a grid, a Cluster Grid consists of multiple systems interconnected through a network. Cluster Grids may contain distributed workstations and servers, as well as centralized resources in a datacenter environment. Typically owned and used by a single project or department, Cluster Grids support both high throughput and high performance jobs. Common examples of the Cluster Grid architecture include compute farms, groups of multi-processor HPC systems, Beowulf clusters, and networks of workstations (NOW).

Figure 2. Three levels of grid computing: cluster, enterprise, and global grids.

• Enterprise Grids
As capacity needs increase, multiple Cluster Grids can be combined into an Enterprise Grid. Enterprise Grids enable multiple projects or departments to share computing resources in a cooperative way. Enterprise Grids typically contain resources from multiple administrative domains, but are located in the same geographic location.

• Global Grids
Global Grids are a collection of Enterprise Grids, all of which have agreed upon global usage policies and protocols, but not necessarily the same implementation. Computing resources may be geographically dispersed, connecting sites around the globe. Designed to support and address the needs of multiple sites and organizations sharing resources, Global Grids provide the power of distributed resources to users anywhere in the world.
4. GRID CONSTRUCTION: GENERAL PRINCIPLES
This section briefly highlights some of the general principles that underlie the construction of the Grid. In particular, the idealized design features that are required by a Grid to provide users with a seamless computing environment are discussed. Four main aspects characterize a Grid.
• Multiple administrative domains and autonomy. Grid resources are geographically distributed across multiple administrative domains and owned by different organizations. The autonomy of resource owners needs to be honored, along with their local resource-management and usage policies.
• Heterogeneity. A Grid involves a multiplicity of resources that are heterogeneous in nature and will encompass a vast range of technologies.
• Scalability. A Grid might grow from a few integrated resources to millions. This raises the problem of potential performance degradation as the size of a Grid increases. Consequently, applications that require a large number of geographically distributed resources must be designed to be latency and bandwidth tolerant.
• Dynamicity or adaptability. In a Grid, resource failure is the rule rather than the exception. With so many resources in a Grid, the probability of some resource failing is high. Resource managers or applications must tailor their behavior dynamically and use the available resources and services efficiently and effectively.
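The adaptability principle, treating failure as the normal case, is often realized as retry-with-failover: if one resource fails, the job simply moves to the next. A minimal sketch, with hypothetical resources modelled as plain callables:

```python
def run_with_failover(job, resources, max_attempts=3):
    """Try a job on successive resources; failure is the rule, not the exception."""
    for resource in resources[:max_attempts]:
        try:
            return resource(job)      # may raise if the resource has failed
        except RuntimeError:
            continue                  # adapt: move on to the next resource
    raise RuntimeError(f"{job!r} failed on all attempted resources")

def flaky(job):
    # Stand-in for a resource that is currently down.
    raise RuntimeError("node down")

def healthy(job):
    return f"{job}: done"

print(run_with_failover("simulate", [flaky, healthy]))  # → simulate: done
```

Real resource managers extend this with health monitoring, checkpointing, and rescheduling rather than blind retries.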
5. DESIGN FEATURES
The following are the main design features required by a Grid environment.
• Administrative hierarchy. An administrative hierarchy is the way that each Grid environment divides itself up to cope with a potentially global extent. The administrative hierarchy determines how administrative information flows through the Grid.
• Communication services. The communication needs of applications using a Grid environment are diverse, ranging from reliable point-to-point to unreliable multicast communications. The communications infrastructure needs to support the protocols used for bulk-data transport, streaming data, group communications, and distributed objects. The network services used also provide the Grid with important QoS parameters such as latency, bandwidth, reliability, fault tolerance, and jitter control.
• Information services. A Grid is a dynamic environment in which the location and types of services available are constantly changing. A major goal is to make all resources accessible to any process in the system, without regard to the relative location of the resource user. It is necessary to provide mechanisms that enable a rich environment in which information is readily obtained by requesting services. The Grid information (registration and directory) service components provide the mechanisms for registering and obtaining information about the Grid structure, resources, services, and status.
• Naming services. In a Grid, as in any distributed system, names are used to refer to a wide variety of objects such as computers, services, or data objects. The naming service provides a uniform name space across the complete Grid environment. Typical naming services are provided by the international X.500 naming scheme or by DNS, the Internet's naming scheme.
• Distributed file systems and caching. Distributed applications, more often than not, require access to files distributed among many servers. A distributed file system is therefore a key component in a distributed system. From an application's point of view it is important that a distributed file system provides a uniform global namespace, supports a range of file I/O protocols, requires little or no program modification, and provides means by which performance optimizations, such as caching, can be implemented.
• Security and authorization. Any distributed system involves all four aspects of security: confidentiality, integrity, authentication, and accountability.
Security within a Grid environment is a complex issue, requiring diverse, autonomously administered resources to interact in a manner that does not impact the usability of the resources or introduce security holes in individual systems or the environment as a whole. A security infrastructure is key to the success or failure of a Grid environment.
• System status and fault tolerance. To provide a reliable and robust environment, it is important that a means of monitoring resources and applications is provided. To accomplish this task, tools that monitor resources and applications need to be deployed.
• Resource management and scheduling. The management of processor time, memory, network, storage, and other components in a Grid is clearly very important. The overall aim is to schedule the applications that need to utilize the available resources in the Grid computing environment efficiently and effectively. From a user's point of view, resource management and scheduling should be transparent, with the user's interaction confined to a mechanism for submitting their application. It is important in a Grid that a resource-management and scheduling service can interact with those that may be installed locally.
• Computational economy and resource trading. As a Grid is constructed by coupling resources distributed across various administrative domains that may be owned by different organizations, it is essential to support mechanisms and policies that help regulate resource supply and demand. An economic approach is one means of managing resources in a complex and decentralized manner. This approach provides incentives for resource owners and users to be part of the Grid and to develop and use strategies that help maximize their objectives.
• Programming tools and paradigms. Grid applications (multi-disciplinary applications) couple resources that cannot be replicated at a single site, or that may be globally located for other practical reasons. A Grid should include interfaces, APIs, utilities, and tools that provide a rich development environment. Common scientific languages such as C, C++, and Fortran should be available, as should application-level interfaces such as MPI and PVM. A variety of programming paradigms should be supported, such as message passing and distributed shared memory. In addition, a suite of numerical and other commonly used libraries should be available.
• User and administrative GUI. The interfaces to the services and resources available should be intuitive and easy to use. In addition, they should work on a range of different platforms and operating systems. They also need to take advantage of Web technologies to offer a portal view of supercomputing. This Web-centric approach to accessing supercomputing resources should enable users to access any resource from anywhere, over any platform, at any time. That is, users should be allowed to submit their jobs to computational resources through a Web interface from any accessible platform, such as a PC, laptop, or personal digital assistant, thus supporting ubiquitous access to the Grid. The provision of access to scientific applications through the Web (e.g. RWCP's parallel protein information analysis system) leads to the creation of science portals.
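As a sketch of the computational-economy feature above, resource trading can be reduced to choosing the cheapest offer with enough capacity for the job. The owner names, prices, and capacities below are hypothetical:

```python
def allocate(job_cpu_hours, offers):
    """Pick the cheapest resource offer that can host the job.

    offers: list of (owner, price_per_cpu_hour, capacity_cpu_hours) tuples.
    Returns (owner, total_cost) or None if no offer has enough capacity.
    """
    feasible = [o for o in offers if o[2] >= job_cpu_hours]
    if not feasible:
        return None
    owner, price, _ = min(feasible, key=lambda o: o[1])
    return owner, price * job_cpu_hours   # owner is paid, user is charged

offers = [("siteA", 0.10, 50), ("siteB", 0.07, 200), ("siteC", 0.05, 8)]
print(allocate(20, offers))  # cheapest feasible offer wins, not cheapest overall
```

Note that "siteC" is cheapest per CPU-hour but lacks capacity for a 20 CPU-hour job, so the economically rational choice is "siteB"; real economic schedulers add deadlines, budgets, and negotiation.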
6. GRID SERVICE BROKER
The next generation of scientific experiments and studies, popularly called e-Science, is carried out by large collaborations of researchers distributed around the world, engaged in the analysis of huge collections of data generated by scientific instruments. Grid computing has emerged as an enabler for e-Science, as it permits the creation of virtual organizations that bring together communities with common objectives. Within a community, data collections are stored or replicated on distributed resources to enhance storage capability or efficiency of access. In such an environment, scientists need the ability to carry out their studies by transparently accessing distributed data and computational resources.
The Grid Service Broker, developed as part of the Gridbus Project, mediates access to distributed resources by (a) discovering suitable data sources for a given analysis scenario, (b) discovering suitable computational resources, (c) optimally mapping analysis jobs to resources, (d) deploying and monitoring job execution on selected resources, (e) accessing data from local or remote data sources during job execution, and (f) collating and presenting results. The broker supports a declarative and dynamic parametric programming model for creating grid applications. This model has been used to grid-enable a high-energy physics analysis application (the Belle Analysis Software Framework). The broker has been used to deploy Belle experiment data-analysis jobs on a grid testbed, called the Belle Analysis Data Grid, with resources distributed across Australia interconnected through GrangeNet. It has been utilised in several Grid demonstrations, including the SC 2003 HPC Challenge demonstration.
Version 3.0 of the Gridbus Broker adds support for Globus Toolkit version 4.0.2 and integration within portal environments. Amongst other things, support for the use of multiple brokers within the same VM has been improved. Future releases of the broker will continue this trend towards increased support for the needs of portal and other e-Science application developers. The Gridbus Broker also features a WSRF-compliant service, allowing access to most of the features of the broker through a WSRF interface.
WHAT'S NEW:
1)WSRF Broker Service deployable on a standard GT4-based WSRF Container providing web-service access to all the broker functionality.
2)Gridbus Broker Workbench GUI, for easier composition, initialisation, monitoring and management of grid applications
3)New service-oriented modular design to allow vast improvements in scalability, reliability, and robust failure management
4)Enhanced flexibility, usability and adaptability
5)Support for GGF JSDL (for non-parametric jobs)
6)Improved stability for all middleware
7)Improvements in data-aware scheduling
8)Various bug fixes

7. GRIDSCAPE II
Grid computing has emerged as an effective means of facilitating the sharing of distributed heterogeneous resources, enabling collaboration in large-scale environments. However, the nature of Grid systems, coupled with the overabundance and fragmentation of information, makes it difficult to monitor resources, services, and computations in order to plan and make decisions. Gridscape II is a customisable portal component that can be used on its own or plugged in to complement existing Grid portals. Gridscape II manages the gathering of information from arbitrary, heterogeneous, and distributed sources and presents it seamlessly within a single interface. It also leverages the Google Maps API to provide a highly interactive user interface. Gridscape II is simple and easy to use, providing a solution both for users who do not wish to invest heavily in developing their own monitoring portal from scratch and for those who want something that is easy to customise and extend for their specific needs.
DESIGN AIMS
The design aims of Gridscape II are that it should:
1) Manage diverse forms of resource information from various types of information sources
2) Allow new information sources to be easily introduced;
3) Allow for simple portal management and administration;
4) Provide a clear and intuitive presentation of resource information in an interactive and dynamic portal; and
5) Have a flexible design and implementation such that core components can be reused in building new components, the presentation of information can be easily changed, and a high level of portability and accessibility (from the web-browser perspective) can be provided.
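Design aims 1 and 2 amount to merging status records from heterogeneous sources into one uniform view, with a per-source adapter normalising each source's schema. A minimal sketch with two hypothetical sources (the source names and field names are invented for illustration):

```python
def aggregate(sources):
    """Merge status records from heterogeneous sources into one uniform view.

    Each entry in `sources` is a (fetch, adapt) pair: `fetch` returns
    {resource_name: raw_record} in the source's own schema, and `adapt`
    normalises one raw record into the portal's common schema.
    """
    view = {}
    for fetch, adapt in sources:
        for name, raw in fetch().items():
            view[name] = adapt(raw)
    return view

# Two hypothetical monitoring sources with different schemas.
source_a = lambda: {"nodeA": {"load_one": 0.5}}
source_b = lambda: {"nodeB": {"cpuLoad": 1.2}}
sources = [
    (source_a, lambda r: {"load": r["load_one"]}),
    (source_b, lambda r: {"load": r["cpuLoad"]}),
]
print(aggregate(sources))  # → {'nodeA': {'load': 0.5}, 'nodeB': {'load': 1.2}}
```

Adding a new information source then only requires writing a new (fetch, adapt) pair, which is the sense in which new sources are "easily introduced".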
8. ADVANTAGES
Grid computing can provide many benefits not available with traditional computing models:
• Better utilization of resources. Grid computing uses distributed resources more efficiently and delivers more usable computing power. This can decrease time-to-market, allow for innovation, or enable additional testing and simulation for improved product quality. By employing existing resources, grid computing helps protect IT investments, containing costs while providing more capacity.
• Increased user productivity. By providing transparent access to resources, work can be completed more quickly. Users gain additional productivity as they can focus on design and development rather than wasting valuable time hunting for resources and manually scheduling and managing large numbers of jobs.
• Scalability. Grids can grow seamlessly over time, allowing many thousands of processors to be integrated into one cluster. Components can be updated independently, and additional resources can be added as needed, reducing large one-time expenses.
• Flexibility. Grid computing provides computing power where it is needed most, helping to better meet dynamically changing workloads. Grids can contain heterogeneous compute nodes, allowing resources to be added and removed as needs dictate.
9. DISADVANTAGES
Microsoft is developing a security language for grids, designed to deal with some of the security issues raised by grids' decentralized nature. Grids are becoming widely used in enterprises, as well as for sharing computing resources among academic research institutions. However, there is no single, widely used approach to dealing with grid security.
The nature of Grid systems, coupled with the overabundance and fragmentation of information, makes it difficult to monitor resources, services, and computations in order to plan and make decisions.
In short, grids also have the following disadvantages:
• Grid software and standards are still evolving
• Learning curve to get started
• Non-interactive job submission

10. GRID APPLICATIONS
What types of applications will grids be used for? Building on experiences in gigabit testbeds, the I-WAY network, and other experimental systems, five major application classes for computational grids have been identified; they are described briefly in this section.
Distributed Supercomputing
Distributed supercomputing applications use grids to aggregate substantial computational resources in order to tackle problems that cannot be solved on a single system. Depending on the grid on which we are working, these aggregated resources might comprise the majority of the supercomputers in the country or simply all of the workstations within a company. Here are some contemporary examples:
> Distributed interactive simulation (DIS) is a technique used for training and planning in the military. Realistic scenarios may involve hundreds of thousands of entities, each with potentially complex behavior patterns. Yet even the largest current supercomputers can handle at most 20,000 entities. In recent work, researchers at the California Institute of Technology have shown how multiple supercomputers can be coupled to achieve record-breaking levels of performance.
> The accurate simulation of complex physical processes can require high spatial and temporal resolution in order to resolve fine-scale detail. Coupled supercomputers can be used in such situations to overcome resolution barriers and hence to obtain qualitatively new scientific results. Although high latencies can pose significant obstacles, coupled supercomputers have been used successfully in cosmology, high-resolution ab initio computational chemistry computations, and climate modeling.
Challenging issues from a grid architecture perspective include the need to co-schedule what are often scarce and expensive resources, the scalability of protocols and algorithms to tens or hundreds of thousands of nodes, latency-tolerant algorithms, and achieving and maintaining high levels of performance across heterogeneous systems.
High-Throughput Computing
In high-throughput computing, the grid is used to schedule large numbers of loosely coupled or independent tasks, with the goal of putting unused processor cycles (often from idle workstations) to work. The result may be, as in distributed supercomputing, the focusing of available resources on a single problem, but the quasi-independent nature of the tasks involved leads to very different types of problems and problem-solving methods. Here are some examples:
1) Platform Computing Corporation reports that the microprocessor manufacturer Advanced Micro Devices used high-throughput computing techniques to exploit over a thousand computers during the peak design phases of their K6 and K7 microprocessors. These computers are located on the desktops of AMD engineers at a number of AMD sites and were used for design verification only when not in use by engineers.
2) The Condor system from the University of Wisconsin is used to manage pools of hundreds of workstations at universities and laboratories around the world. These resources have been used for studies as diverse as molecular simulations of liquid crystals, studies of ground-penetrating radar, and the design of diesel engines.
3) More loosely organized efforts have harnessed tens of thousands of computers distributed worldwide to tackle hard cryptographic problems.
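The common pattern in these examples is a task farm: many independent tasks fanned out to whatever workers are idle. A minimal local sketch, using a thread pool as a stand-in for idle workstations (the verification job itself is invented for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def verify(case):
    """Stand-in for one independent verification job (e.g. one test vector)."""
    return case, case % 7 != 0   # pretend every 7th case fails

# Farm the quasi-independent tasks out to a pool of workers; in a real
# high-throughput grid these would be idle workstations, not threads.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(verify, range(1000)))

failures = [case for case, ok in results if not ok]
print(len(failures))  # → 143 failing cases out of 1000
```

Because the tasks share no state, throughput scales almost linearly with the number of workers, which is exactly the property high-throughput systems such as Condor exploit.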
On-Demand Computing
On-demand applications use grid capabilities to meet short-term requirements for resources that cannot be cost-effectively or conveniently located locally. These resources may be computation, software, data repositories, specialized sensors, and so on. In contrast to distributed supercomputing applications, these applications are often driven by cost-performance concerns rather than absolute performance. For example:
• The NEOS and NetSolve network-enhanced numerical solver systems allow users to couple remote software and resources into desktop applications, dispatching to remote servers calculations that are computationally demanding or that require specialized software.
• A computer-enhanced MRI machine and scanning tunneling microscope (STM) developed at the National Center for Supercomputing Applications use supercomputers to achieve real-time image processing. The result is a significant enhancement in the ability to understand what we are seeing and, in the case of the microscope, to steer the instrument.
• A system developed at the Aerospace Corporation for processing data from meteorological satellites uses dynamically acquired supercomputer resources to deliver the results of a cloud-detection algorithm to remote meteorologists in quasi real time.
The challenging issues in on-demand applications derive primarily from the dynamic nature of resource requirements and the potentially large populations of users and resources. These issues include resource location, scheduling, code management, configuration, fault tolerance, security, and payment mechanisms.
Data-Intensive Computing
In data-intensive applications, the focus is on synthesizing new information from data that is maintained in geographically distributed repositories, digital libraries, and databases. This synthesis process is often computationally and communication intensive as well.
• Future high-energy physics experiments will generate terabytes of data per day, or around a petabyte per year. The complex queries used to detect "interesting" events may need to access large fractions of this data. The scientific collaborators who will access this data are widely distributed, and hence the data systems in which the data is placed are likely to be distributed as well.
• The Digital Sky Survey will ultimately make many terabytes of astronomical photographic data available in numerous network-accessible databases. This facility enables new approaches to astronomical research based on distributed analysis, assuming that appropriate computational grid facilities exist.
• Modern meteorological forecasting systems make extensive use of data assimilation to incorporate remote satellite observations. The complete process involves the movement and processing of many gigabytes of data.
Challenging issues in data-intensive applications are the scheduling and configuration of complex, high-volume data flows through multiple levels of hierarchy.
Collaborative Computing
Collaborative applications are concerned primarily with enabling and enhancing human-to-human interactions. Such applications are often structured in terms of a virtual shared space. Many collaborative applications are concerned with enabling the shared use of computational resources such as data archives and simulations; in this case, they also have characteristics of the other application classes just described. For example:
• The BoilerMaker system developed at Argonne National Laboratory allows multiple users to collaborate on the design of emission control systems in industrial incinerators. The different users interact with each other and with a simulation of the incinerator.
• The CAVE5D system supports remote, collaborative exploration of large geophysical data sets and the models that generate them; for example, a coupled physical/biological model of the Chesapeake Bay.
• The NICE system developed at the University of Illinois at Chicago allows children to participate in the creation and maintenance of realistic virtual worlds, for entertainment and education.
Challenging aspects of collaborative applications from a grid architecture perspective are the real-time requirements imposed by human perceptual capabilities and the rich variety of interactions that can take place.
We conclude this section with three general observations. First, even in this brief survey we see a tremendous variety of already successful applications. This rich set has been developed despite the significant difficulties faced by programmers developing grid applications in the absence of a mature grid infrastructure. As grids evolve, we expect the range and sophistication of applications to increase dramatically. Second, almost all of the applications demonstrate a tremendous appetite for computational resources (CPU, memory, disk, etc.) that cannot be met in a timely fashion by expected growth in single-system performance. This emphasizes the importance of grid technologies as a means of sharing computation as well as a data-access and communication medium. Third, many of the applications are interactive, or depend on tight synchronization with computational components, and hence depend on the availability of a grid infrastructure able to provide robust performance guarantees.
11. CONCLUSION
Grid computing is an emerging computing model that provides the ability to perform higher throughput computing by taking advantage of many networked computers to model a virtual computer architecture that is able to distribute process execution across a parallel infrastructure. Grids use the resources of many separate computers connected by a network (usually the Internet) to solve large-scale computation problems.
Grids provide the ability to perform computations on large data sets, by breaking them down into many smaller ones, or provide the ability to perform many more computations at once than would be possible on a single computer, by modeling a parallel division of labour between processes. Today resource allocation in a grid is done in accordance with SLAs (service level agreements).
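The decomposition just described can be sketched as splitting one large computation into independent sub-computations whose partial results are then combined. In this minimal illustration the chunks are processed locally, but each could in principle be dispatched to a different grid node:

```python
def split(data, n_chunks):
    """Break one large data set into (up to) n smaller, independent pieces."""
    size = -(-len(data) // n_chunks)   # ceiling division: chunk size
    return [data[i:i + size] for i in range(0, len(data), size)]

def grid_sum(data, n_chunks=4):
    # Each chunk is an independent sub-computation; on a grid, each would
    # run on its own node. Here we simply evaluate them in turn.
    partials = [sum(chunk) for chunk in split(data, n_chunks)]
    return sum(partials)               # combine the partial results

data = list(range(1, 101))
print(grid_sum(data))  # → 5050, the same answer as the undivided computation
```

The split/combine structure only works because the sub-computations are independent; computations with tight coupling between pieces need the latency-tolerant designs discussed earlier.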
One characteristic that currently distinguishes Grid computing from distributed computing is the abstraction of a 'distributed resource' into a Grid resource. One result of abstraction is that it allows resource substitution to be more easily accomplished. Some of the overhead associated with this flexibility is reflected in the middleware layer and the temporal latency associated with the access of a Grid (or any distributed) resource. This overhead, especially the temporal latency, must be evaluated in terms of the impact on computational performance when a Grid resource is employed.
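As a back-of-the-envelope illustration of that trade-off (our own toy model, not one drawn from the literature), offloading work to a Grid resource only pays when the compute time saved exceeds the middleware and latency overhead added:

```python
# Hypothetical cost model: offloading wins only when remote compute time
# plus the grid's middleware/latency overhead beats local execution.
def worth_offloading(local_secs, remote_secs, overhead_secs):
    """True if using the grid resource is faster than running locally."""
    return remote_secs + overhead_secs < local_secs

# A long-running job amortizes the overhead; a tiny one does not.
print(worth_offloading(600.0, 60.0, 5.0))  # True
print(worth_offloading(1.0, 0.1, 5.0))     # False
```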

12. FUTURE TRENDS
There are currently a large number of projects and a diverse range of new and emerging Grid developmental approaches being pursued. These systems range from Grid frameworks to application testbeds, and from collaborative environments to batch submission mechanisms.
It is difficult to predict the future in a field such as information technology where the technological advances are moving very rapidly. Hence, it is not an easy task to forecast what will become the 'dominant' Grid approach. Windows of opportunity for ideas and products seem to open and close in the 'blink of an eye'. However, some trends are evident. One of those is growing interest in the use of Java and Web services for network computing.
The Java programming language successfully addresses several key issues that accelerate the development of Grid environments, such as heterogeneity and security. It also removes the need to install programs remotely; the minimum execution environment is a Java-enabled Web browser. Java, with its related technologies and growing repository of tools and utilities, is having a huge impact on the growth and development of Grid environments. From a relatively slow start, the developments in Grid computing are accelerating fast with the advent of these new and emerging technologies. It is very hard to ignore the presence of the Common Object Request Broker Architecture (CORBA) in the background. We believe that frameworks incorporating CORBA services will be very influential on the design of future Grid environments.
The two other emerging Java technologies for Grid and P2P computing are Jini and JXTA. The Jini architecture exemplifies a network-centric, service-based approach to computer systems. Jini replaces the notions of peripherals, devices, and applications with that of network-available services. Jini helps break down the conventional view of what a computer is, while including new classes of services that work together in a federated architecture. The ability to move code from the server to its client is the core difference between the Jini environment and other distributed systems, such as CORBA and the Distributed Component Object Model (DCOM).
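The service-lookup idea at the heart of Jini can be illustrated with a toy registry (plain Python standing in for the actual Jini API; the class and method names here are ours): providers register under a service interface, and clients discover a provider at run time rather than being bound to a particular device or application.

```python
# Toy illustration of the lookup-service idea Jini popularized; this is
# NOT the Jini API, just a sketch of run-time service discovery.
class LookupService:
    def __init__(self):
        self._registry = {}

    def register(self, interface, provider):
        """A provider announces itself under a service interface name."""
        self._registry.setdefault(interface, []).append(provider)

    def lookup(self, interface):
        """A client discovers a provider at run time, or None if absent."""
        providers = self._registry.get(interface, [])
        return providers[0] if providers else None

registry = LookupService()
registry.register("printing", lambda doc: f"printed {doc}")
service = registry.lookup("printing")
print(service("report.pdf"))  # prints "printed report.pdf"
```

The client never hard-codes which machine or program does the printing; it asks the lookup service, which is what lets Jini treat devices and applications uniformly as network-available services.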

Whatever the technology or computing infrastructure that becomes predominant or most popular, it can be guaranteed that at some stage in the future its star will wane. Historically, in the field of computer research and development, this fact can be repeatedly observed. The lesson from this observation must therefore be drawn that, in the long term, backing only one technology can be an expensive mistake. The framework that provides a Grid environment must be adaptable, malleable, and extensible. As technology and fashions change it is crucial that Grid environments evolve with them.
Smarr observes that Grid computing has serious social consequences and is going to have as revolutionary an effect as railroads did in the American Midwest in the 19th century. Instead of a 30-40 year lead-time to see its effects, however, its impact is going to be much faster. Smarr concludes by noting that the effects of Grids are going to change the world so quickly that mankind will struggle to react and change in the face of the challenges and issues they present. Therefore, at some stage in the future, our computing needs will be satisfied in the same pervasive and ubiquitous manner that we use the electricity power grid. The analogies with the generation and delivery of electricity are hard to ignore, and the implications are enormous. In fact, the Grid is analogous to the electricity (power) grid, and the vision is to offer (almost) dependable, consistent, pervasive, and inexpensive access to resources irrespective of where they physically reside or from where they are accessed.
13. BIBLIOGRAPHY
1) I. Foster and C. Kesselman, editors. The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann, San Francisco, Calif. (1999).
2) I. Foster, C. Kesselman, and S. Tuecke. The Anatomy of the Grid: Enabling Scalable Virtual Organizations. International Journal of High Performance Computing Applications.
3) R. Buyya and M. Baker. Grids and Grid Technologies for Wide-Area Distributed Computing. Software: Practice and Experience.
4) I. Foster. The Grid: A New Infrastructure for 21st Century Science. Physics Today.
5) globus.org
6) en.wikipedia.org
7) gridcomputing.com
CONTENTS
1) INTRODUCTION
2) TYPES OF SERVICES
3) LEVELS OF DEPLOYMENT
4) GRID CONSTRUCTION: GENERAL PRINCIPLES
5) DESIGN FEATURES
6) GRID SERVICE BROKER
7) GRIDSCAPE II
8) ADVANTAGES
9) DISADVANTAGES
10) GRID APPLICATIONS
11) CONCLUSION
12) FUTURE TRENDS
13) BIBLIOGRAPHY